field           type            range
id              stringlengths   9 – 13
submitter       stringlengths   4 – 48
authors         stringlengths   4 – 9.62k
title           stringlengths   4 – 343
comments        stringlengths   2 – 480
journal-ref     stringlengths   9 – 309
doi             stringlengths   12 – 138
report-no       stringclasses   277 values
categories      stringlengths   8 – 87
license         stringclasses   9 values
orig_abstract   stringlengths   27 – 3.76k
versions        listlengths     1 – 15
update_date     stringlengths   10 – 10
authors_parsed  listlengths     1 – 147
abstract        stringlengths   24 – 3.75k
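The schema above can be exercised programmatically. A minimal sketch, assuming each record is available as a Python dict keyed by the field names above; the record excerpt is abridged from the first entry, and the `BOUNDS` table copies a subset of the string-length ranges from the schema:

```python
# Excerpt of one record (abridged), matching the schema above.
record = {
    "id": "1905.02552",
    "submitter": "Yang Qin",
    "title": ("Multi-Scale Simulation Modeling for Prevention and Public "
              "Health Management of Diabetes in Pregnancy and Sequelae"),
    "categories": "q-bio.QM",
    "update_date": "2019-05-08",
}

# String-length bounds copied from the schema: field -> (min, max).
BOUNDS = {
    "id": (9, 13),
    "submitter": (4, 48),
    "title": (4, 343),
    "categories": (8, 87),
    "update_date": (10, 10),
}

def check(rec):
    """Return the names of string fields whose length falls outside BOUNDS."""
    return [field for field, (lo, hi) in BOUNDS.items()
            if field in rec and not lo <= len(rec[field]) <= hi]

print(check(record))  # → [] (the excerpt respects the schema bounds)
```

An empty result means every present field is within its declared length range; fields absent from a record are simply skipped.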
1905.02552
Yang Qin
Yang Qin, Louise Freebairn, Jo-An Atkinson, Weicheng Qian, Anahita Safarishahrbijari, Nathaniel D Osgood
Multi-Scale Simulation Modeling for Prevention and Public Health Management of Diabetes in Pregnancy and Sequelae
10 pages, SBP-BRiMS 2019
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diabetes in pregnancy (DIP) is an increasing public health priority in the Australian Capital Territory, particularly due to its impact on risk for developing Type 2 diabetes. While earlier diagnostic screening results in greater capacity for early detection and treatment, such benefits must be balanced with the greater demands this imposes on public health services. To address such planning challenges, a multi-scale hybrid simulation model of DIP was built to explore the interaction of risk factors and capture the dynamics underlying the development of DIP. The impact of interventions on health outcomes at the physiological, health service and population level is measured. Of particular central significance in the model is a compartmental model representing the underlying physiological regulation of glycemic status based on beta-cell dynamics and insulin resistance. The model also simulated the dynamics of continuous BMI evolution, glycemic status change during pregnancy and diabetes classification driven by the individual-level physiological model. We further modeled public health service pathways providing diagnosis and care for DIP to explore the optimization of resource use during service delivery. The model was extensively calibrated against empirical data.
[ { "created": "Fri, 3 May 2019 01:32:36 GMT", "version": "v1" } ]
2019-05-08
[ [ "Qin", "Yang", "" ], [ "Freebairn", "Louise", "" ], [ "Atkinson", "Jo-An", "" ], [ "Qian", "Weicheng", "" ], [ "Safarishahrbijari", "Anahita", "" ], [ "Osgood", "Nathaniel D", "" ] ]
Diabetes in pregnancy (DIP) is an increasing public health priority in the Australian Capital Territory, particularly due to its impact on risk for developing Type 2 diabetes. While earlier diagnostic screening results in greater capacity for early detection and treatment, such benefits must be balanced with the greater demands this imposes on public health services. To address such planning challenges, a multi-scale hybrid simulation model of DIP was built to explore the interaction of risk factors and capture the dynamics underlying the development of DIP. The impact of interventions on health outcomes at the physiological, health service and population level is measured. Of particular central significance in the model is a compartmental model representing the underlying physiological regulation of glycemic status based on beta-cell dynamics and insulin resistance. The model also simulated the dynamics of continuous BMI evolution, glycemic status change during pregnancy and diabetes classification driven by the individual-level physiological model. We further modeled public health service pathways providing diagnosis and care for DIP to explore the optimization of resource use during service delivery. The model was extensively calibrated against empirical data.
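The `authors_parsed` field in each record stores one `[last, first, affiliation]` triple per author, as seen above. A small sketch reconstructing display names from such triples (the author list is abridged from the record above, and `display_name` is an illustrative helper, not part of the dataset):

```python
# Abridged authors_parsed value from the record above.
authors_parsed = [
    ["Qin", "Yang", ""],
    ["Freebairn", "Louise", ""],
    ["Osgood", "Nathaniel D", ""],
]

def display_name(triple):
    """Join a [last, first, affiliation] triple into 'First Last (Affil)'."""
    last, first, affil = triple
    name = f"{first} {last}".strip()
    return f"{name} ({affil})" if affil else name

print(", ".join(display_name(t) for t in authors_parsed))
# → Yang Qin, Louise Freebairn, Nathaniel D Osgood
```

The third element is usually empty but occasionally carries an affiliation string (e.g. `["Rabilloud", "Thierry", "", "BBSI"]`-style entries elsewhere in this dump), which the helper folds into parentheses.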
1510.06863
Daniel Wilson
Sarah G Earle, Chieh-Hsi Wu, Jane Charlesworth, Nicole Stoesser, N Claire Gordon, Timothy M Walker, Chris C A Spencer, Zamin Iqbal, David A Clifton, Katie L Hopkins, Neil Woodford, E Grace Smith, Nazir Ismail, Martin J Llewelyn, Tim E Peto, Derrick W Crook, Gil McVean, A Sarah Walker, Daniel J Wilson
Identifying lineage effects when controlling for population structure improves power in bacterial association studies
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Bacteria pose unique challenges for genome-wide association studies (GWAS) because of strong structuring into distinct strains and substantial linkage disequilibrium across the genome. While methods developed for human studies can correct for strain structure, this risks considerable loss of power because genetic differences between strains often contribute substantial phenotypic variability. Here we propose a new method that captures lineage-level associations even when locus-specific associations cannot be fine-mapped. We demonstrate its ability to detect genes and genetic variants underlying resistance to 17 antimicrobials in 3144 isolates from four taxonomically diverse clonal and recombining bacteria: Mycobacterium tuberculosis, Staphylococcus aureus, Escherichia coli and Klebsiella pneumoniae. Strong selection, recombination and penetrance confer high power to recover known antimicrobial resistance mechanisms, and reveal a candidate association between the outer membrane porin nmpC and cefazolin resistance in E. coli. Hence our method pinpoints locus-specific effects where possible, and boosts power by detecting lineage-level differences when fine-mapping is intractable.
[ { "created": "Fri, 23 Oct 2015 09:21:53 GMT", "version": "v1" }, { "created": "Tue, 27 Oct 2015 19:15:08 GMT", "version": "v2" }, { "created": "Mon, 8 Feb 2016 17:01:23 GMT", "version": "v3" } ]
2016-02-09
[ [ "Earle", "Sarah G", "" ], [ "Wu", "Chieh-Hsi", "" ], [ "Charlesworth", "Jane", "" ], [ "Stoesser", "Nicole", "" ], [ "Gordon", "N Claire", "" ], [ "Walker", "Timothy M", "" ], [ "Spencer", "Chris C A", "" ], [ "Iqbal", "Zamin", "" ], [ "Clifton", "David A", "" ], [ "Hopkins", "Katie L", "" ], [ "Woodford", "Neil", "" ], [ "Smith", "E Grace", "" ], [ "Ismail", "Nazir", "" ], [ "Llewelyn", "Martin J", "" ], [ "Peto", "Tim E", "" ], [ "Crook", "Derrick W", "" ], [ "McVean", "Gil", "" ], [ "Walker", "A Sarah", "" ], [ "Wilson", "Daniel J", "" ] ]
Bacteria pose unique challenges for genome-wide association studies (GWAS) because of strong structuring into distinct strains and substantial linkage disequilibrium across the genome. While methods developed for human studies can correct for strain structure, this risks considerable loss of power because genetic differences between strains often contribute substantial phenotypic variability. Here we propose a new method that captures lineage-level associations even when locus-specific associations cannot be fine-mapped. We demonstrate its ability to detect genes and genetic variants underlying resistance to 17 antimicrobials in 3144 isolates from four taxonomically diverse clonal and recombining bacteria: Mycobacterium tuberculosis, Staphylococcus aureus, Escherichia coli and Klebsiella pneumoniae. Strong selection, recombination and penetrance confer high power to recover known antimicrobial resistance mechanisms, and reveal a candidate association between the outer membrane porin nmpC and cefazolin resistance in E. coli. Hence our method pinpoints locus-specific effects where possible, and boosts power by detecting lineage-level differences when fine-mapping is intractable.
2212.07790
Oben Özgür
Oben Özgür, Arwa Rekik, Islem Rekik
Population Template-Based Brain Graph Augmentation for Improving One-Shot Learning Classification
null
null
null
null
q-bio.NC cs.AI cs.LG cs.NE q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
The challenges of collecting medical data on neurological disorder diagnosis problems paved the way for learning methods with scarce number of samples. Due to this reason, one-shot learning still remains one of the most challenging and trending concepts of deep learning as it proposes to simulate the human-like learning approach in classification problems. Previous studies have focused on generating more accurate fingerprints of the population using graph neural networks (GNNs) with connectomic brain graph data. Thereby, generated population fingerprints named connectional brain template (CBTs) enabled detecting discriminative bio-markers of the population on classification tasks. However, the reverse problem of data augmentation from single graph data representing brain connectivity has never been tackled before. In this paper, we propose an augmentation pipeline in order to provide improved metrics on our binary classification problem. Divergently from the previous studies, we examine augmentation from a single population template by utilizing graph-based generative adversarial network (gGAN) architecture for a classification problem. We benchmarked our proposed solution on AD/LMCI dataset consisting of brain connectomes with Alzheimer's Disease (AD) and Late Mild Cognitive Impairment (LMCI). In order to evaluate our model's generalizability, we used cross-validation strategy and randomly sampled the folds multiple times. Our results on classification not only provided better accuracy when augmented data generated from one sample is introduced, but yields more balanced results on other metrics as well.
[ { "created": "Wed, 14 Dec 2022 14:56:00 GMT", "version": "v1" } ]
2022-12-16
[ [ "Özgür", "Oben", "" ], [ "Rekik", "Arwa", "" ], [ "Rekik", "Islem", "" ] ]
The challenges of collecting medical data on neurological disorder diagnosis problems paved the way for learning methods with scarce number of samples. Due to this reason, one-shot learning still remains one of the most challenging and trending concepts of deep learning as it proposes to simulate the human-like learning approach in classification problems. Previous studies have focused on generating more accurate fingerprints of the population using graph neural networks (GNNs) with connectomic brain graph data. Thereby, generated population fingerprints named connectional brain template (CBTs) enabled detecting discriminative bio-markers of the population on classification tasks. However, the reverse problem of data augmentation from single graph data representing brain connectivity has never been tackled before. In this paper, we propose an augmentation pipeline in order to provide improved metrics on our binary classification problem. Divergently from the previous studies, we examine augmentation from a single population template by utilizing graph-based generative adversarial network (gGAN) architecture for a classification problem. We benchmarked our proposed solution on AD/LMCI dataset consisting of brain connectomes with Alzheimer's Disease (AD) and Late Mild Cognitive Impairment (LMCI). In order to evaluate our model's generalizability, we used cross-validation strategy and randomly sampled the folds multiple times. Our results on classification not only provided better accuracy when augmented data generated from one sample is introduced, but yields more balanced results on other metrics as well.
1611.06893
Catalina Obando
Catalina Obando, Fabrizio De Vico Fallani
A statistical model for brain networks inferred from large-scale electrophysiological signals
Due to the limitation "The abstract field cannot be longer than 1,920 characters", the abstract appearing here is slightly shorter than that in the PDF file
null
10.1098/rsif.2016.0940
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network science has been extensively developed to characterize structural properties of complex systems, including brain networks inferred from neuroimaging data. As a result of the inference process, networks estimated from experimentally obtained biological data, represent one instance of a larger number of realizations with similar intrinsic topology. A modeling approach is therefore needed to support statistical inference on the bottom-up local connectivity mechanisms influencing the formation of the estimated brain networks. We adopted a statistical model based on exponential random graphs (ERGM) to reproduce brain networks, or connectomes, estimated by spectral coherence between high-density electroencephalographic (EEG) signals. We validated this approach in a dataset of 108 healthy subjects during eyes-open (EO) and eyes-closed (EC) resting-state conditions. Results showed that the tendency to form triangles and stars, reflecting clustering and node centrality, better explained the global properties of the EEG connectomes as compared to other combinations of graph metrics. Synthetic networks generated by this model configuration replicated the characteristic differences found in brain networks, with EO eliciting significantly higher segregation in the alpha frequency band (8-13 Hz) as compared to EC. Furthermore, the fitted ERGM parameter values provided complementary information showing that clustering connections are significantly more represented from EC to EO in the alpha range, but also in the beta band (14-29 Hz), which is known to play a crucial role in cortical processing of visual input and externally oriented attention. These findings support the current view of the brain functional segregation and integration in terms of modules and hubs, and provide a statistical approach to extract new information on the (re)organizational mechanisms in healthy and diseased brains.
[ { "created": "Mon, 21 Nov 2016 16:38:06 GMT", "version": "v1" }, { "created": "Tue, 22 Nov 2016 17:11:38 GMT", "version": "v2" }, { "created": "Wed, 23 Nov 2016 20:29:02 GMT", "version": "v3" }, { "created": "Fri, 17 Feb 2017 16:49:14 GMT", "version": "v4" } ]
2017-03-10
[ [ "Obando", "Catalina", "" ], [ "Fallani", "Fabrizio De Vico", "" ] ]
Network science has been extensively developed to characterize structural properties of complex systems, including brain networks inferred from neuroimaging data. As a result of the inference process, networks estimated from experimentally obtained biological data, represent one instance of a larger number of realizations with similar intrinsic topology. A modeling approach is therefore needed to support statistical inference on the bottom-up local connectivity mechanisms influencing the formation of the estimated brain networks. We adopted a statistical model based on exponential random graphs (ERGM) to reproduce brain networks, or connectomes, estimated by spectral coherence between high-density electroencephalographic (EEG) signals. We validated this approach in a dataset of 108 healthy subjects during eyes-open (EO) and eyes-closed (EC) resting-state conditions. Results showed that the tendency to form triangles and stars, reflecting clustering and node centrality, better explained the global properties of the EEG connectomes as compared to other combinations of graph metrics. Synthetic networks generated by this model configuration replicated the characteristic differences found in brain networks, with EO eliciting significantly higher segregation in the alpha frequency band (8-13 Hz) as compared to EC. Furthermore, the fitted ERGM parameter values provided complementary information showing that clustering connections are significantly more represented from EC to EO in the alpha range, but also in the beta band (14-29 Hz), which is known to play a crucial role in cortical processing of visual input and externally oriented attention. These findings support the current view of the brain functional segregation and integration in terms of modules and hubs, and provide a statistical approach to extract new information on the (re)organizational mechanisms in healthy and diseased brains.
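The `versions` field, as in the four-version record above, is a list of `{"created", "version"}` objects whose `created` timestamps are RFC 2822 date strings, which the standard library parses directly. A sketch finding the most recent version (the `versions` value is abridged from the record above):

```python
from email.utils import parsedate_to_datetime

# Abridged versions value from the record above; 'created' is RFC 2822.
versions = [
    {"created": "Mon, 21 Nov 2016 16:38:06 GMT", "version": "v1"},
    {"created": "Fri, 17 Feb 2017 16:49:14 GMT", "version": "v4"},
]

# Parse each timestamp into a timezone-aware datetime and take the latest.
latest = max(versions, key=lambda v: parsedate_to_datetime(v["created"]))
print(latest["version"])  # → v4
```

Because `parsedate_to_datetime` returns timezone-aware datetimes for the GMT-suffixed strings, the comparison inside `max` is well defined even if records mixed timezone offsets.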
1009.1841
Stuart Borrett
Andria K. Salas and Stuart R. Borrett
Evidence for the Dominance of Indirect Effects in 50 Trophically-Based Ecosystem Networks
26 pages, 5 figures, 1 table
Ecological Modelling 222: 1192--1204
10.1016/j.ecolmodel.2010.12.002
null
q-bio.PE q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Indirect effects are powerful influences in ecosystems that may maintain species diversity and alter apparent relationships between species in surprising ways. Here, we applied Network Environ Analysis to 50 empirically-based trophic ecosystem models to test the hypothesis that indirect flows dominate direct flows in ecosystem networks. Further, we used Monte Carlo based perturbations to investigate the robustness of these results to potential error in the underlying data. To explain our findings, we further investigated the importance of the microbial food web in recycling energy-matter using components of the Finn Cycling Index and analysis of Environ Centrality. We found that indirect flows dominate direct flows in 37/50 (74.0%) models. This increases to 31/35 (88.5%) models when we consider only models that have cycling structure and a representation of the microbial food web. The uncertainty analysis reveals that there is less error in the I/D values than the $\pm$ 5% error introduced into the models, suggesting the results are robust to uncertainty. Our results show that the microbial food web mediates a substantial percentage of cycling in some systems (median = 30.2%), but its role is highly variable in these models, in agreement with the literature. Our results, combined with previous work, strongly suggest that indirect effects are dominant components of activity in ecosystems.
[ { "created": "Thu, 9 Sep 2010 18:14:33 GMT", "version": "v1" } ]
2011-04-04
[ [ "Salas", "Andria K.", "" ], [ "Borrett", "Stuart R.", "" ] ]
Indirect effects are powerful influences in ecosystems that may maintain species diversity and alter apparent relationships between species in surprising ways. Here, we applied Network Environ Analysis to 50 empirically-based trophic ecosystem models to test the hypothesis that indirect flows dominate direct flows in ecosystem networks. Further, we used Monte Carlo based perturbations to investigate the robustness of these results to potential error in the underlying data. To explain our findings, we further investigated the importance of the microbial food web in recycling energy-matter using components of the Finn Cycling Index and analysis of Environ Centrality. We found that indirect flows dominate direct flows in 37/50 (74.0%) models. This increases to 31/35 (88.5%) models when we consider only models that have cycling structure and a representation of the microbial food web. The uncertainty analysis reveals that there is less error in the I/D values than the $\pm$ 5% error introduced into the models, suggesting the results are robust to uncertainty. Our results show that the microbial food web mediates a substantial percentage of cycling in some systems (median = 30.2%), but its role is highly variable in these models, in agreement with the literature. Our results, combined with previous work, strongly suggest that indirect effects are dominant components of activity in ecosystems.
1307.1499
Kazuhiro Takemoto
Kazuhiro Takemoto, Kaori Kihara
Modular organization of cancer signaling networks is associated with patient survivability
17 pages, 5 figures
Biosystems 113, 149-154 (2013)
10.1016/j.biosystems.2013.06.003
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular signaling networks are believed to determine cancer robustness. Although cancer patient survivability was reported to correlate with the heterogeneous connectivity of the signaling networks inspired by theoretical studies on the increase of network robustness due to the heterogeneous connectivity, other theoretical and data analytic studies suggest an alternative explanation: the impact of modular organization of networks on biological robustness or adaptation to changing environments. In this study, thus, we evaluate whether the modularity--robustness hypothesis is applicable to cancer using network analysis. We focus on 14 specific cancer types whose molecular signaling networks are available in databases, and show that modular organization of cancer signaling networks is associated with the patient survival rate. In particular, the cancers with less modular signaling networks are more curable. This result is consistent with a prediction from the modularity--robustness hypothesis. Furthermore, we show that the network modularity is a better descriptor of the patient survival rate than the heterogeneous connectivity. However, these results do not contradict the importance of the heterogeneous connectivity. Rather, they provide new and different insights into the relationship between cellular networks and cancer behaviors. Despite several limitations of data analysis, these findings enhance our understanding of adaptive and evolutionary mechanisms of cancer cells.
[ { "created": "Thu, 4 Jul 2013 23:24:13 GMT", "version": "v1" } ]
2013-07-18
[ [ "Takemoto", "Kazuhiro", "" ], [ "Kihara", "Kaori", "" ] ]
Molecular signaling networks are believed to determine cancer robustness. Although cancer patient survivability was reported to correlate with the heterogeneous connectivity of the signaling networks inspired by theoretical studies on the increase of network robustness due to the heterogeneous connectivity, other theoretical and data analytic studies suggest an alternative explanation: the impact of modular organization of networks on biological robustness or adaptation to changing environments. In this study, thus, we evaluate whether the modularity--robustness hypothesis is applicable to cancer using network analysis. We focus on 14 specific cancer types whose molecular signaling networks are available in databases, and show that modular organization of cancer signaling networks is associated with the patient survival rate. In particular, the cancers with less modular signaling networks are more curable. This result is consistent with a prediction from the modularity--robustness hypothesis. Furthermore, we show that the network modularity is a better descriptor of the patient survival rate than the heterogeneous connectivity. However, these results do not contradict the importance of the heterogeneous connectivity. Rather, they provide new and different insights into the relationship between cellular networks and cancer behaviors. Despite several limitations of data analysis, these findings enhance our understanding of adaptive and evolutionary mechanisms of cancer cells.
0906.2281
Thierry Rabilloud
Thierry Rabilloud (BBSI)
Membrane proteins and proteomics: Love is possible, but so difficult
null
Electrophoresis 30, S1 (2009) S174-S180
10.1002/elps.200900050
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite decades of extensive research, the large-scale analysis of membrane proteins remains a difficult task. This is due to the fact that membrane proteins require a carefully balanced hydrophilic and lipophilic environment, which optimum varies with different proteins, while most protein chemistry methods work mainly, if not only, in water-based media. Taking this review [Santoni, Molloy and Rabilloud, Membrane proteins and proteomics: un amour impossible? Electrophoresis 2000, 21, 1054-1070] as a pivotal paper, the current paper analyzes how the field of membrane proteomics exacerbated the trend in proteomics, i.e. developing alternate methods to the historical two-dimensional electrophoresis, and thus putting more and more pressure on the mass spectrometry side. However, in the case of membrane proteins, the incentive in doing so is due to the poor solubility of membrane proteins. This review also shows that in some situations, where this solubility problem is less acute, two-dimensional electrophoresis remains a method of choice. Last but not least, this review also critically examines the alternate approaches that have been used for the proteomic analysis of membrane proteins.
[ { "created": "Fri, 12 Jun 2009 09:11:38 GMT", "version": "v1" } ]
2009-06-15
[ [ "Rabilloud", "Thierry", "", "BBSI" ] ]
Despite decades of extensive research, the large-scale analysis of membrane proteins remains a difficult task. This is due to the fact that membrane proteins require a carefully balanced hydrophilic and lipophilic environment, which optimum varies with different proteins, while most protein chemistry methods work mainly, if not only, in water-based media. Taking this review [Santoni, Molloy and Rabilloud, Membrane proteins and proteomics: un amour impossible? Electrophoresis 2000, 21, 1054-1070] as a pivotal paper, the current paper analyzes how the field of membrane proteomics exacerbated the trend in proteomics, i.e. developing alternate methods to the historical two-dimensional electrophoresis, and thus putting more and more pressure on the mass spectrometry side. However, in the case of membrane proteins, the incentive in doing so is due to the poor solubility of membrane proteins. This review also shows that in some situations, where this solubility problem is less acute, two-dimensional electrophoresis remains a method of choice. Last but not least, this review also critically examines the alternate approaches that have been used for the proteomic analysis of membrane proteins.
2103.10477
Nicholas Boyd
Nicholas Boyd, Samuel Woodhouse, Kalim Mir
Sequencing by Emergence: Modeling and Estimation
null
null
null
null
q-bio.GN stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sequencing by Emergence (SEQE) is a new single-molecule nucleic acid (DNA/RNA) sequencing technology that estimates sequence as an emergent property of the binding and localization of a repertoire of short oligonucleotide probes. SEQE promises to deliver accurate, ultra-long, haplotype-phased reads at the whole genome-scale for very low cost within 10 minutes. The data SEQE generates requires entirely new inference techniques. In this paper we introduce a probabilistic model of the SEQE measurement process and an algorithm that estimates sequence by solving a convex relaxation of the corresponding maximum likelihood problem. We demonstrate the effectiveness of our algorithm on a variety of simulated datasets.
[ { "created": "Thu, 18 Mar 2021 18:55:13 GMT", "version": "v1" }, { "created": "Tue, 3 Aug 2021 15:05:47 GMT", "version": "v2" } ]
2021-08-04
[ [ "Boyd", "Nicholas", "" ], [ "Woodhouse", "Samuel", "" ], [ "Mir", "Kalim", "" ] ]
Sequencing by Emergence (SEQE) is a new single-molecule nucleic acid (DNA/RNA) sequencing technology that estimates sequence as an emergent property of the binding and localization of a repertoire of short oligonucleotide probes. SEQE promises to deliver accurate, ultra-long, haplotype-phased reads at the whole genome-scale for very low cost within 10 minutes. The data SEQE generates requires entirely new inference techniques. In this paper we introduce a probabilistic model of the SEQE measurement process and an algorithm that estimates sequence by solving a convex relaxation of the corresponding maximum likelihood problem. We demonstrate the effectiveness of our algorithm on a variety of simulated datasets.
2406.06731
Yujiang Wang
Peter N. Taylor, Yujiang Wang, Callum Simpson, Vytene Janiukstyte, Jonathan Horsley, Karoline Leiberg, Beth Little, Harry Clifford, Sophie Adler, Sjoerd B. Vos, Gavin P Winston, Andrew W McEvoy, Anna Miserocchi, Jane de Tisi, John S Duncan
The Imaging Database for Epilepsy And Surgery (IDEAS)
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Magnetic resonance imaging (MRI) is a crucial tool to identify brain abnormalities in a wide range of neurological disorders. In focal epilepsy MRI is used to identify structural cerebral abnormalities. For covert lesions, machine learning and artificial intelligence algorithms may improve lesion detection if abnormalities are not evident on visual inspection. The success of this approach depends on the volume and quality of training data. Herein, we release an open-source dataset of preprocessed MRI scans from 442 individuals with drug-refractory focal epilepsy who had neurosurgical resections, and detailed demographic information. The MRI scan data includes the preoperative 3D T1 and where available 3D FLAIR, as well as a manually inspected complete surface reconstruction and volumetric parcellations. Demographic information includes age, sex, age of onset of epilepsy, location of surgery, histopathology of resected specimen, occurrence and frequency of focal seizures with and without impairment of awareness, focal to bilateral tonic-clonic seizures, number of anti-seizure medications (ASMs) at time of surgery, and a total of 1764 patient years of post-surgical follow up. Crucially, we also include resection masks delineated from post-surgical imaging. To demonstrate the veracity of our data, we successfully replicated previous studies showing long-term outcomes of seizure freedom in the range of around 50%. Our imaging data replicates findings of group level atrophy in patients compared to controls. Resection locations in the cohort were predominantly in the temporal and frontal lobes. We envisage our dataset, shared openly with the community, will catalyse the development and application of computational methods in clinical neurology.
[ { "created": "Mon, 10 Jun 2024 18:53:09 GMT", "version": "v1" } ]
2024-06-12
[ [ "Taylor", "Peter N.", "" ], [ "Wang", "Yujiang", "" ], [ "Simpson", "Callum", "" ], [ "Janiukstyte", "Vytene", "" ], [ "Horsley", "Jonathan", "" ], [ "Leiberg", "Karoline", "" ], [ "Little", "Beth", "" ], [ "Clifford", "Harry", "" ], [ "Adler", "Sophie", "" ], [ "Vos", "Sjoerd B.", "" ], [ "Winston", "Gavin P", "" ], [ "McEvoy", "Andrew W", "" ], [ "Miserocchi", "Anna", "" ], [ "de Tisi", "Jane", "" ], [ "Duncan", "John S", "" ] ]
Magnetic resonance imaging (MRI) is a crucial tool to identify brain abnormalities in a wide range of neurological disorders. In focal epilepsy MRI is used to identify structural cerebral abnormalities. For covert lesions, machine learning and artificial intelligence algorithms may improve lesion detection if abnormalities are not evident on visual inspection. The success of this approach depends on the volume and quality of training data. Herein, we release an open-source dataset of preprocessed MRI scans from 442 individuals with drug-refractory focal epilepsy who had neurosurgical resections, and detailed demographic information. The MRI scan data includes the preoperative 3D T1 and where available 3D FLAIR, as well as a manually inspected complete surface reconstruction and volumetric parcellations. Demographic information includes age, sex, age of onset of epilepsy, location of surgery, histopathology of resected specimen, occurrence and frequency of focal seizures with and without impairment of awareness, focal to bilateral tonic-clonic seizures, number of anti-seizure medications (ASMs) at time of surgery, and a total of 1764 patient years of post-surgical follow up. Crucially, we also include resection masks delineated from post-surgical imaging. To demonstrate the veracity of our data, we successfully replicated previous studies showing long-term outcomes of seizure freedom in the range of around 50%. Our imaging data replicates findings of group level atrophy in patients compared to controls. Resection locations in the cohort were predominantly in the temporal and frontal lobes. We envisage our dataset, shared openly with the community, will catalyse the development and application of computational methods in clinical neurology.
1803.07883
Ernest Greene
Ernest Greene, Yash Patel
Scan transcription of two-dimensional shapes as an alternative neuromorphic concept
7 pages, 4 figures, 53 references
Trends in Artificial Intelligence, 2018, 1, 27-33
null
null
q-bio.NC
http://creativecommons.org/publicdomain/zero/1.0/
Selfridge, along with Sutherland and Marr provided some of the earliest proposals for how to program computers to recognize shapes. Their emphasis on filtering for contour features, especially the orientation of boundary segments, was reinforced by the Nobel Prize winning work of Hubel & Wiesel who discovered that neurons in primary visual cortex selectively respond as a function of contour orientation. Countless investigators and theorists have continued to build on this approach. These models are often described as neuromorphic, which implies that the computational methods are based on biologically plausible principles. Recent work from the present lab has challenged the emphasis on orientation selectivity and the use of neural network principles. The goal of the present report is not to relitigate those issues, but to provide an alternative concept for encoding of shape information that may be useful to neuromorphic modelers.
[ { "created": "Wed, 21 Mar 2018 12:27:53 GMT", "version": "v1" } ]
2018-03-22
[ [ "Greene", "Ernest", "" ], [ "Patel", "Yash", "" ] ]
Selfridge, along with Sutherland and Marr, provided some of the earliest proposals for how to program computers to recognize shapes. Their emphasis on filtering for contour features, especially the orientation of boundary segments, was reinforced by the Nobel Prize-winning work of Hubel & Wiesel, who discovered that neurons in primary visual cortex selectively respond as a function of contour orientation. Countless investigators and theorists have continued to build on this approach. These models are often described as neuromorphic, which implies that the computational methods are based on biologically plausible principles. Recent work from the present lab has challenged the emphasis on orientation selectivity and the use of neural network principles. The goal of the present report is not to relitigate those issues, but to provide an alternative concept for encoding of shape information that may be useful to neuromorphic modelers.
2012.15697
Rene Warren
Rene L. Warren and Inanc Birol
Interactive SARS-CoV-2 mutation timemaps
3 pages, 1 figure
F1000Research 2021, 10:68
10.12688/f1000research.50857.1
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
As the year 2020 draws to an end, several new strains have been reported for the SARS-CoV-2 coronavirus, the agent responsible for the COVID-19 pandemic that has afflicted us all this past year. However, it is difficult to comprehend the scale, in sequence space, geographical location and time, at which SARS-CoV-2 mutates and evolves in its human hosts. To get an appreciation for the rapid evolution of the coronavirus, we built interactive scalable vector graphics maps that show daily nucleotide variations in genomes from the six most populated continents compared to that of the initial, ground-zero SARS-CoV-2 isolate sequenced at the beginning of the year. Availability: Mutation time maps are available from https://bcgsc.github.io/SARS2/
[ { "created": "Thu, 31 Dec 2020 16:23:13 GMT", "version": "v1" } ]
2023-06-09
[ [ "Warren", "Rene L.", "" ], [ "Birol", "Inanc", "" ] ]
As the year 2020 draws to an end, several new strains have been reported for the SARS-CoV-2 coronavirus, the agent responsible for the COVID-19 pandemic that has afflicted us all this past year. However, it is difficult to comprehend the scale, in sequence space, geographical location and time, at which SARS-CoV-2 mutates and evolves in its human hosts. To get an appreciation for the rapid evolution of the coronavirus, we built interactive scalable vector graphics maps that show daily nucleotide variations in genomes from the six most populated continents compared to that of the initial, ground-zero SARS-CoV-2 isolate sequenced at the beginning of the year. Availability: Mutation time maps are available from https://bcgsc.github.io/SARS2/
1512.08798
Carlo Piermarocchi
Anthony Szedlak, Nicholas Smith, Li Liu, Giovanni Paternostro, Carlo Piermarocchi
Evolutionary and topological properties of gene modules and driver mutations in a leukemia gene regulatory network
9 pages, 3 figures
null
10.1371/journal.pcbi.1005009
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The diverse, specialized genes in today's lifeforms evolved from a common core of ancient, elementary genes. However, these genes did not evolve individually: gene expression is controlled by a complex network of interactions, and alterations in one gene may drive reciprocal changes in its proteins' binding partners. We show that the topology of a leukemia gene regulatory network is strongly coupled with evolutionary properties. Slowly-evolving ("cold"), old genes tend to interact with each other, as do rapidly-evolving ("hot"), young genes, causing genes to evolve in clusters. We argue that gene duplication placed old, cold genes at the center of the network, and young, hot genes on the periphery, and demonstrate this with single-node centrality measures and two new measures of efficiency. Integrating centrality measures with evolutionary information, we define a medically-relevant "cancer network core," strongly enriched for common cancer mutations ($p=2\times 10^{-14}$). This could aid in identifying driver mutations and therapeutic targets.
[ { "created": "Tue, 29 Dec 2015 21:15:22 GMT", "version": "v1" } ]
2016-09-28
[ [ "Szedlak", "Anthony", "" ], [ "Smith", "Nicholas", "" ], [ "Liu", "Li", "" ], [ "Paternostro", "Giovanni", "" ], [ "Piermarocchi", "Carlo", "" ] ]
The diverse, specialized genes in today's lifeforms evolved from a common core of ancient, elementary genes. However, these genes did not evolve individually: gene expression is controlled by a complex network of interactions, and alterations in one gene may drive reciprocal changes in its proteins' binding partners. We show that the topology of a leukemia gene regulatory network is strongly coupled with evolutionary properties. Slowly-evolving ("cold"), old genes tend to interact with each other, as do rapidly-evolving ("hot"), young genes, causing genes to evolve in clusters. We argue that gene duplication placed old, cold genes at the center of the network, and young, hot genes on the periphery, and demonstrate this with single-node centrality measures and two new measures of efficiency. Integrating centrality measures with evolutionary information, we define a medically-relevant "cancer network core," strongly enriched for common cancer mutations ($p=2\times 10^{-14}$). This could aid in identifying driver mutations and therapeutic targets.
1912.01890
Maryam Nazarieh
Maryam Nazarieh, Volkhard Helms
Identification of Biomarkers Driving Blood Cell Development
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A blood cell lineage consists of several consecutive developmental stages, from the pluripotent or multipotent stem cell to a particular stage of terminally differentiated cells. There is considerable interest in identifying the key regulatory genes that govern blood cell development from gene expression data, without considering the underlying network between transcription factors (TFs) and their target genes. In this study, we introduce a novel expression pattern that key regulators exhibit along the differentiation path. We deploy this pattern to identify the cell-specific key regulators responsible for development. As proof of concept, we apply this approach to data on six developmental stages from mouse embryonic stem cells to terminally differentiated macrophages.
[ { "created": "Wed, 4 Dec 2019 10:57:25 GMT", "version": "v1" } ]
2019-12-05
[ [ "Nazarieh", "Maryam", "" ], [ "Helms", "Volkhard", "" ] ]
A blood cell lineage consists of several consecutive developmental stages, from the pluripotent or multipotent stem cell to a particular stage of terminally differentiated cells. There is considerable interest in identifying the key regulatory genes that govern blood cell development from gene expression data, without considering the underlying network between transcription factors (TFs) and their target genes. In this study, we introduce a novel expression pattern that key regulators exhibit along the differentiation path. We deploy this pattern to identify the cell-specific key regulators responsible for development. As proof of concept, we apply this approach to data on six developmental stages from mouse embryonic stem cells to terminally differentiated macrophages.
1411.6039
Stuart Sevier
Stuart A. Sevier, Herbert Levine
Properties of Cooperatively Induced Phases in Sensing Models
8 pages, 5 figures
Phys. Rev. E 91, 052707 (2015)
10.1103/PhysRevE.91.052707
null
q-bio.CB cond-mat.stat-mech nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A large number of eukaryotic cells are able to directly detect external chemical gradients with great accuracy, and the ultimate limit to their sensitivity has been a topic of debate for many years. Previous work has been done to understand many aspects of this process, but little attention has been paid to the possibility of emergent sensing states. Here we examine how cooperation between sensors existing in a two-dimensional network, as they do on the cell's surface, can both enhance and fundamentally alter the response of the cell to a spatially varying signal. We show that weakly interacting sensors linearly amplify the sensors' response to an external gradient, while a network of strongly interacting sensors forms a collective non-linear response with two separate domains of active and inactive sensors, forming what we have called a "1/2-state". In our analysis we examine the cell's ability to sense the direction of a signal and pay special attention to the substantially different behavior realized in the strongly interacting regime.
[ { "created": "Fri, 21 Nov 2014 22:04:51 GMT", "version": "v1" }, { "created": "Mon, 1 Dec 2014 16:57:53 GMT", "version": "v2" } ]
2015-05-20
[ [ "Sevier", "Stuart A.", "" ], [ "Levine", "Herbert", "" ] ]
A large number of eukaryotic cells are able to directly detect external chemical gradients with great accuracy, and the ultimate limit to their sensitivity has been a topic of debate for many years. Previous work has been done to understand many aspects of this process, but little attention has been paid to the possibility of emergent sensing states. Here we examine how cooperation between sensors existing in a two-dimensional network, as they do on the cell's surface, can both enhance and fundamentally alter the response of the cell to a spatially varying signal. We show that weakly interacting sensors linearly amplify the sensors' response to an external gradient, while a network of strongly interacting sensors forms a collective non-linear response with two separate domains of active and inactive sensors, forming what we have called a "1/2-state". In our analysis we examine the cell's ability to sense the direction of a signal and pay special attention to the substantially different behavior realized in the strongly interacting regime.
2203.06505
Lucas M. Stolerman
Lucas M. Stolerman, Leonardo Clemente, Canelle Poirier, Kris V. Parag, Atreyee Majumder, Serge Masyn, Bernd Resch, and Mauricio Santillana
Using digital traces to build prospective and real-time county-level early warning systems to anticipate COVID-19 outbreaks in the United States
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ongoing COVID-19 pandemic continues to affect communities around the world. To date, almost 6 million people have died as a consequence of COVID-19, and more than one-quarter of a billion people are estimated to have been infected worldwide. The design of appropriate and timely mitigation strategies to curb the effects of this and future disease outbreaks requires close monitoring of their spatio-temporal trajectories. We present machine learning methods to anticipate sharp increases in COVID-19 activity in US counties in real-time. Our methods leverage Internet-based digital traces -- e.g., disease-related Internet search activity from the general population and clinicians, disease-relevant Twitter micro-blogs, and outbreak trajectories from neighboring locations -- to monitor potential changes in population-level health trends. Motivated by the need for finer spatial-resolution epidemiological insights to improve local decision-making, we build upon previous retrospective research efforts originally conceived at the state level and in the early months of the pandemic. Our methods -- tested in real-time and in an out-of-sample manner on a subset of 97 counties distributed across the US -- frequently anticipated sharp increases in COVID-19 activity 1-6 weeks before the onset of local outbreaks (defined as the time when the effective reproduction number $R_t$ becomes larger than 1 consistently). Given the continued emergence of COVID-19 variants of concern -- such as the most recent one, Omicron -- and the fact that multiple countries have not had full access to vaccines, the framework we present, while conceived for the county-level in the US, could be helpful in countries where similar data sources are available.
[ { "created": "Sat, 12 Mar 2022 19:51:26 GMT", "version": "v1" } ]
2022-03-15
[ [ "Stolerman", "Lucas M.", "" ], [ "Clemente", "Leonardo", "" ], [ "Poirier", "Canelle", "" ], [ "Parag", "Kris V.", "" ], [ "Majumder", "Atreyee", "" ], [ "Masyn", "Serge", "" ], [ "Resch", "Bernd", "" ], [ "Santillana", "Mauricio", "" ] ]
The ongoing COVID-19 pandemic continues to affect communities around the world. To date, almost 6 million people have died as a consequence of COVID-19, and more than one-quarter of a billion people are estimated to have been infected worldwide. The design of appropriate and timely mitigation strategies to curb the effects of this and future disease outbreaks requires close monitoring of their spatio-temporal trajectories. We present machine learning methods to anticipate sharp increases in COVID-19 activity in US counties in real-time. Our methods leverage Internet-based digital traces -- e.g., disease-related Internet search activity from the general population and clinicians, disease-relevant Twitter micro-blogs, and outbreak trajectories from neighboring locations -- to monitor potential changes in population-level health trends. Motivated by the need for finer spatial-resolution epidemiological insights to improve local decision-making, we build upon previous retrospective research efforts originally conceived at the state level and in the early months of the pandemic. Our methods -- tested in real-time and in an out-of-sample manner on a subset of 97 counties distributed across the US -- frequently anticipated sharp increases in COVID-19 activity 1-6 weeks before the onset of local outbreaks (defined as the time when the effective reproduction number $R_t$ becomes larger than 1 consistently). Given the continued emergence of COVID-19 variants of concern -- such as the most recent one, Omicron -- and the fact that multiple countries have not had full access to vaccines, the framework we present, while conceived for the county-level in the US, could be helpful in countries where similar data sources are available.
2212.03123
Aliza Sloan
Aliza T. Sloan, J. A. Scott Kelso
On the emergence of agency
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How do human beings make sense of their relation to the world and realize their ability to effect change? Applying modern concepts and methods of coordination dynamics we demonstrate that patterns of movement and coordination in 3-4 month-olds may be used to identify states and behavioral phenotypes of emergent agency. By means of a complete coordinative analysis of baby and mobile motion and their interaction, we show that the emergence of agency takes the form of a punctuated self-organizing process, with meaning found both in movement and stillness.
[ { "created": "Tue, 6 Dec 2022 16:38:36 GMT", "version": "v1" }, { "created": "Wed, 29 Mar 2023 23:18:18 GMT", "version": "v2" } ]
2023-03-31
[ [ "Sloan", "Aliza T.", "" ], [ "Kelso", "J. A. Scott", "" ] ]
How do human beings make sense of their relation to the world and realize their ability to effect change? Applying modern concepts and methods of coordination dynamics we demonstrate that patterns of movement and coordination in 3-4 month-olds may be used to identify states and behavioral phenotypes of emergent agency. By means of a complete coordinative analysis of baby and mobile motion and their interaction, we show that the emergence of agency takes the form of a punctuated self-organizing process, with meaning found both in movement and stillness.
2302.13298
Chlo\'e Colson
Chlo\'e Colson, Philip K. Maini, Helen M. Byrne
Investigating the influence of growth arrest mechanisms on tumour responses to radiotherapy
33 pages, 22 figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cancer is a heterogeneous disease and tumours of the same type can differ greatly at the genetic and phenotypic levels. Understanding how these differences impact sensitivity to treatment is an essential step towards patient-specific treatment design. In this paper, we investigate how two different mechanisms for growth control may affect tumour cell responses to fractionated radiotherapy (RT) by extending an existing ordinary differential equation model of tumour growth. In the absence of treatment, this model distinguishes between growth arrest due to nutrient insufficiency and competition for space and exhibits three growth regimes: nutrient-limited (NL), space-limited (SL) and bistable (BS), where both mechanisms for growth arrest coexist. We study the effect of RT for tumours in each regime, finding that tumours in the SL regime typically respond best to RT, while tumours in the BS regime typically respond worst to RT. For tumours in each regime, we also identify the biological processes that may explain positive and negative treatment outcomes and the dosing regimen which maximises the reduction in tumour burden.
[ { "created": "Sun, 26 Feb 2023 11:36:52 GMT", "version": "v1" } ]
2023-02-28
[ [ "Colson", "Chloé", "" ], [ "Maini", "Philip K.", "" ], [ "Byrne", "Helen M.", "" ] ]
Cancer is a heterogeneous disease and tumours of the same type can differ greatly at the genetic and phenotypic levels. Understanding how these differences impact sensitivity to treatment is an essential step towards patient-specific treatment design. In this paper, we investigate how two different mechanisms for growth control may affect tumour cell responses to fractionated radiotherapy (RT) by extending an existing ordinary differential equation model of tumour growth. In the absence of treatment, this model distinguishes between growth arrest due to nutrient insufficiency and competition for space and exhibits three growth regimes: nutrient-limited (NL), space-limited (SL) and bistable (BS), where both mechanisms for growth arrest coexist. We study the effect of RT for tumours in each regime, finding that tumours in the SL regime typically respond best to RT, while tumours in the BS regime typically respond worst to RT. For tumours in each regime, we also identify the biological processes that may explain positive and negative treatment outcomes and the dosing regimen which maximises the reduction in tumour burden.
1503.05485
Johann Mart\'inez
Carlos B. Moreno, Jos\'e-Luis D\'iaz, J. H. Mart\'inez
Petri Net Modeling of the Brain Circuit Involved in Aggressive Behavior
30 pages, in Spanish. 15 figures, 1 table. Submitted to SaludMental 2015, Mexico, Print version ISSN 0185-3325, SJR, SciELO, Scimago
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The purpose of this work is to present the initial results of a research project whose goal is to develop dynamic models of the brain network involved in aggressive behavior. To this end, the complex neural process correlated with basic anger emotions and resulting in aggressive behaviors is schematized by the use of Petri nets, a workflow computational tool. Initially, the modeling technique is introduced taking into account the most recent and accepted notion of the neural substrates of emotions, particularly the brain structures involved in the aggression neural network, including their inputs, outputs, and internal connectivity. In order to optimally represent these structures, their connections, and temporal dynamics, the Petri net foundations employed in the simulation theory are defined. Finally, the model and dynamic simulation of the neural process associated with aggressive behavior is presented and evaluated as a feasible in silico experimental support of the Patterned-Process Theory.
[ { "created": "Wed, 18 Mar 2015 16:58:26 GMT", "version": "v1" } ]
2015-03-19
[ [ "Moreno", "Carlos B.", "" ], [ "Díaz", "José-Luis", "" ], [ "Martínez", "J. H.", "" ] ]
The purpose of this work is to present the initial results of a research project whose goal is to develop dynamic models of the brain network involved in aggressive behavior. To this end, the complex neural process correlated with basic anger emotions and resulting in aggressive behaviors is schematized by the use of Petri nets, a workflow computational tool. Initially, the modeling technique is introduced taking into account the most recent and accepted notion of the neural substrates of emotions, particularly the brain structures involved in the aggression neural network, including their inputs, outputs, and internal connectivity. In order to optimally represent these structures, their connections, and temporal dynamics, the Petri net foundations employed in the simulation theory are defined. Finally, the model and dynamic simulation of the neural process associated with aggressive behavior is presented and evaluated as a feasible in silico experimental support of the Patterned-Process Theory.
1409.2299
Claus Vogl
Claus Vogl
Biallelic Mutation-Drift Diffusion in the Limit of Small Scaled Mutation Rates
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolution of the allelic proportion $x$ of a biallelic locus subject to the forces of mutation and drift is investigated in a diffusion model, assuming small scaled mutation rates. The overall scaled mutation rate is parametrized with $\theta=(\mu_1+\mu_0)N$ and the ratio of mutation rates with $\alpha=\mu_1/(\mu_1+\mu_0)=1-\beta$. The equilibrium density of this process is beta with parameters $\alpha\theta$ and $\beta\theta$. Away from equilibrium, the transition density can be expanded into a series of modified Jacobi polynomials. If the scaled mutation rates are small, i.e., $\theta \ll 1$, it may be assumed that polymorphism derives from mutations at the boundaries. A model, where the interior dynamics conform to the pure drift diffusion model and the mutations are entering from the boundaries, is derived. In equilibrium, the density of the proportion of polymorphic alleles, i.e., $x$ within the polymorphic region $[1/N,1-1/N]$, is $\alpha\beta\theta(\tfrac1x+\tfrac1{1-x})=\tfrac{\alpha\beta\theta}{x(1-x)}$, while the mutation bias $\alpha$ influences the proportion of monomorphic alleles at 0 and 1. Analogous to the expansion with modified Jacobi polynomials, a series expansion of the transition density is derived, which is connected to Kimura's well-known solution of the pure drift model using Gegenbauer polynomials. Two temporal and two spatial regions are separated. The eigenvectors representing the spatial component within the polymorphic region depend neither on the scaled mutation rate $\theta$ nor on the mutation bias $\alpha$. Therefore parameter changes, e.g., growing or shrinking populations or changes in the mutation bias, can be modeled relatively easily, without the change of the eigenfunctions necessary for the series expansion with Jacobi polynomials.
[ { "created": "Mon, 8 Sep 2014 11:41:13 GMT", "version": "v1" } ]
2014-09-09
[ [ "Vogl", "Claus", "" ] ]
The evolution of the allelic proportion $x$ of a biallelic locus subject to the forces of mutation and drift is investigated in a diffusion model, assuming small scaled mutation rates. The overall scaled mutation rate is parametrized with $\theta=(\mu_1+\mu_0)N$ and the ratio of mutation rates with $\alpha=\mu_1/(\mu_1+\mu_0)=1-\beta$. The equilibrium density of this process is beta with parameters $\alpha\theta$ and $\beta\theta$. Away from equilibrium, the transition density can be expanded into a series of modified Jacobi polynomials. If the scaled mutation rates are small, i.e., $\theta \ll 1$, it may be assumed that polymorphism derives from mutations at the boundaries. A model, where the interior dynamics conform to the pure drift diffusion model and the mutations are entering from the boundaries, is derived. In equilibrium, the density of the proportion of polymorphic alleles, i.e., $x$ within the polymorphic region $[1/N,1-1/N]$, is $\alpha\beta\theta(\tfrac1x+\tfrac1{1-x})=\tfrac{\alpha\beta\theta}{x(1-x)}$, while the mutation bias $\alpha$ influences the proportion of monomorphic alleles at 0 and 1. Analogous to the expansion with modified Jacobi polynomials, a series expansion of the transition density is derived, which is connected to Kimura's well-known solution of the pure drift model using Gegenbauer polynomials. Two temporal and two spatial regions are separated. The eigenvectors representing the spatial component within the polymorphic region depend neither on the scaled mutation rate $\theta$ nor on the mutation bias $\alpha$. Therefore parameter changes, e.g., growing or shrinking populations or changes in the mutation bias, can be modeled relatively easily, without the change of the eigenfunctions necessary for the series expansion with Jacobi polynomials.
1203.2503
Francesc Rossell\'o
Gabriel Cardona, Arnau Mir, Francesc Rossello
The expected value under the Yule model of the squared path-difference distance
10 pages, extended version of a paper submitted to Applied Mathematics Letters
null
null
null
q-bio.PE math.PR q-bio.QM
http://creativecommons.org/licenses/publicdomain/
The path-difference metric is one of the oldest and most popular distances for the comparison of phylogenetic trees, but its statistical properties are still largely unknown. In this paper we compute the expected value under the Yule model of evolution of its square on the space of fully resolved rooted phylogenetic trees with n leaves. This complements previous work by Steel-Penny and Mir-Rossell\'o, who computed this mean value for fully resolved unrooted and rooted phylogenetic trees, respectively, under the uniform distribution.
[ { "created": "Mon, 12 Mar 2012 14:33:52 GMT", "version": "v1" } ]
2012-03-13
[ [ "Cardona", "Gabriel", "" ], [ "Mir", "Arnau", "" ], [ "Rossello", "Francesc", "" ] ]
The path-difference metric is one of the oldest and most popular distances for the comparison of phylogenetic trees, but its statistical properties are still largely unknown. In this paper we compute the expected value under the Yule model of evolution of its square on the space of fully resolved rooted phylogenetic trees with n leaves. This complements previous work by Steel-Penny and Mir-Rossell\'o, who computed this mean value for fully resolved unrooted and rooted phylogenetic trees, respectively, under the uniform distribution.
2307.14367
Hadi Abdine
Hadi Abdine, Michail Chatzianastasis, Costas Bouyioukos, Michalis Vazirgiannis
Prot2Text: Multimodal Protein's Function Generation with GNNs and Transformers
null
Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 10757-10765 (2024)
10.1609/aaai.v38i10.28948
null
q-bio.QM cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
In recent years, significant progress has been made in the field of protein function prediction with the development of various machine-learning approaches. However, most existing methods formulate the task as a multi-classification problem, i.e. assigning predefined labels to proteins. In this work, we propose a novel approach, Prot2Text, which predicts a protein's function in a free text style, moving beyond the conventional binary or categorical classifications. By combining Graph Neural Networks (GNNs) and Large Language Models (LLMs) in an encoder-decoder framework, our model effectively integrates diverse data types including protein sequence, structure, and textual annotation and description. This multimodal approach allows for a holistic representation of proteins' functions, enabling the generation of detailed and accurate functional descriptions. To evaluate our model, we extracted a multimodal protein dataset from SwissProt and demonstrate empirically the effectiveness of Prot2Text. These results highlight the transformative impact of multimodal models, specifically the fusion of GNNs and LLMs, empowering researchers with powerful tools for more accurate function prediction of existing as well as first-to-see proteins.
[ { "created": "Tue, 25 Jul 2023 09:35:43 GMT", "version": "v1" }, { "created": "Thu, 21 Dec 2023 16:46:35 GMT", "version": "v2" }, { "created": "Sat, 20 Apr 2024 09:10:47 GMT", "version": "v3" } ]
2024-04-23
[ [ "Abdine", "Hadi", "" ], [ "Chatzianastasis", "Michail", "" ], [ "Bouyioukos", "Costas", "" ], [ "Vazirgiannis", "Michalis", "" ] ]
In recent years, significant progress has been made in the field of protein function prediction with the development of various machine-learning approaches. However, most existing methods formulate the task as a multi-classification problem, i.e. assigning predefined labels to proteins. In this work, we propose a novel approach, Prot2Text, which predicts a protein's function in a free text style, moving beyond the conventional binary or categorical classifications. By combining Graph Neural Networks (GNNs) and Large Language Models (LLMs) in an encoder-decoder framework, our model effectively integrates diverse data types including protein sequence, structure, and textual annotation and description. This multimodal approach allows for a holistic representation of proteins' functions, enabling the generation of detailed and accurate functional descriptions. To evaluate our model, we extracted a multimodal protein dataset from SwissProt and demonstrate empirically the effectiveness of Prot2Text. These results highlight the transformative impact of multimodal models, specifically the fusion of GNNs and LLMs, empowering researchers with powerful tools for more accurate function prediction of existing as well as first-to-see proteins.
1003.2111
Jens Christian Claussen
Hong-Viet V. Ngo, Jan K\"ohler, J\"org Mayer, Jens Christian Claussen and Heinz Georg Schuster
Triggering up states in all-to-all coupled neurons
epl Europhysics Letters, accepted (2010)
EPL (Europhysics Letters) 89, 68002 (2010)
10.1209/0295-5075/89/68002
null
q-bio.NC cond-mat.stat-mech nlin.CD physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Slow-wave sleep in mammals is characterized by a change of large-scale cortical activity currently paraphrased as cortical Up/Down states. A recent experiment demonstrated a bistable collective behaviour in ferret slices, with the remarkable property that the Up states can be switched on and off with pulses, or excitations, of the same polarity, whereby the effect of the second pulse significantly depends on the time interval between the pulses. Here we present a simple time-discrete model of a neural network that exhibits this type of behaviour and quantitatively reproduces the time-dependence found in the experiments.
[ { "created": "Wed, 10 Mar 2010 13:50:57 GMT", "version": "v1" } ]
2012-06-12
[ [ "Ngo", "Hong-Viet V.", "" ], [ "Köhler", "Jan", "" ], [ "Mayer", "Jörg", "" ], [ "Claussen", "Jens Christian", "" ], [ "Schuster", "Heinz Georg", "" ] ]
Slow-wave sleep in mammals is characterized by a change of large-scale cortical activity currently paraphrased as cortical Up/Down states. A recent experiment demonstrated a bistable collective behaviour in ferret slices, with the remarkable property that the Up states can be switched on and off with pulses, or excitations, of the same polarity, whereby the effect of the second pulse significantly depends on the time interval between the pulses. Here we present a simple time-discrete model of a neural network that exhibits this type of behaviour and quantitatively reproduces the time-dependence found in the experiments.
q-bio/0311028
Rodrick Wallace
Rodrick Wallace
Systemic lupus erythematosus in African-American women: Cognitive physiological modules, autoimmune disease, and structured psychosocial stress
18 pages, 1 figure
null
null
null
q-bio.NC q-bio.MN
null
Examining elevated rates of systemic lupus erythematosus in African-American women from perspectives of immune cognition suggests the disease constitutes an internalized physiological image of external structured psychosocial stress, a 'pathogenic social hierarchy' involving the synergism of racism and gender discrimination in the context of policy-driven social disintegration which has particularly affected ethnic minorities in the USA. The disorder represents the punctuated resetting of normal immune self-image to a self-attacking excited state, a process formally analogous to models of punctuated equilibrium in evolutionary theory. Both onset and progression of disease may be stratified by a relation to cyclic physiological responses which are long in comparison with heartbeat period: circadian, hormonal, and annual light/temperature cycles.
[ { "created": "Thu, 20 Nov 2003 18:26:25 GMT", "version": "v1" } ]
2007-05-23
[ [ "Wallace", "Rodrick", "" ] ]
Examining elevated rates of systemic lupus erythematosus in African-American women from perspectives of immune cognition suggests the disease constitutes an internalized physiological image of external structured psychosocial stress, a 'pathogenic social hierarchy' involving the synergism of racism and gender discrimination in the context of policy-driven social disintegration which has particularly affected ethnic minorities in the USA. The disorder represents the punctuated resetting of normal immune self-image to a self-attacking excited state, a process formally analogous to models of punctuated equilibrium in evolutionary theory. Both onset and progression of disease may be stratified by a relation to cyclic physiological responses which are long in comparison with heartbeat period: circadian, hormonal, and annual light/temperature cycles.
1610.04656
J. C. Phillips
J. C. Phillips
Giant Hub Src and Syk Tyrosine Kinase Thermodynamic Profiles Recapitulate Evolution
18 pages, 8 figures
null
10.1016/j.physa.2017.04.180
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thermodynamic scaling theory, previously applied mainly to small proteins, here analyzes quantitative evolution of the titled functional network giant hub enzymes. The broad domain structure identified homologically is confirmed hydropathically using amino acid sequences only. The most surprising results concern the evolution of the tyrosine kinase globular surface roughness from avian to mammals, which is first order, compared to the evolution within mammals from rodents to humans, which is second order. The mystery of the unique amide terminal region of proto oncogene tyrosine protein kinase is resolved by the discovery there of a septad targeting cluster, which is paralleled by an octad catalytic cluster in tyrosine kinase in humans and a few other species. These results, which go far towards explaining why these proteins are among the largest giant hubs in protein interaction networks, use no adjustable parameters.
[ { "created": "Fri, 14 Oct 2016 22:02:31 GMT", "version": "v1" } ]
2017-05-24
[ [ "Phillips", "J. C.", "" ] ]
Thermodynamic scaling theory, previously applied mainly to small proteins, here analyzes quantitative evolution of the titled functional network giant hub enzymes. The broad domain structure identified homologically is confirmed hydropathically using amino acid sequences only. The most surprising results concern the evolution of the tyrosine kinase globular surface roughness from avian to mammals, which is first order, compared to the evolution within mammals from rodents to humans, which is second order. The mystery of the unique amide terminal region of proto oncogene tyrosine protein kinase is resolved by the discovery there of a septad targeting cluster, which is paralleled by an octad catalytic cluster in tyrosine kinase in humans and a few other species. These results, which go far towards explaining why these proteins are among the largest giant hubs in protein interaction networks, use no adjustable parameters.
1810.10893
Juergen Reingruber
Johannes Reisert, J\"urgen Reingruber
The $Ca^{2+}$-activated $Cl^-$ current ensures robust and reliable signal amplification in vertebrate olfactory receptor neurons
31 pages, 10 figures (including SI)
null
10.1073/pnas.1816371116
null
q-bio.NC q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Activation of most primary sensory neurons results in transduction currents that are carried by cations. One notable exception is the vertebrate olfactory receptor neuron (ORN), where the transduction current is carried largely by the anion $Cl^-$. However, it remains unclear why ORNs use an anionic current for signal amplification. We have sought to provide clarification on this topic by studying the so far neglected dynamics of $Na^+$, $Ca^{2+}$, $K^+$ and $Cl^-$ in the small space of olfactory cilia during an odorant response. Using computational modeling and simulations we compared the outcomes of signal amplification based on either $Cl^-$ or $Na^+$ currents. We found that amplification produced by $Na^+$ influx instead of $Cl^-$ efflux is problematic for several reasons. First, the $Na^+$ current amplitude varies greatly depending on mucosal ion concentration changes. Second, a $Na^+$ current leads to a large increase in the ciliary $Na^+$ concentration during an odorant response. This increase inhibits and even reverses $Ca^{2+}$ clearance by $Na^+/Ca^{2+}/K^+$ exchange, which is essential for response termination. Finally, a $Na^+$ current increases the ciliary osmotic pressure, which could cause swelling that damages the cilia. By contrast, a transduction pathway based on $Cl^-$ efflux circumvents these problems and renders the odorant response robust and reliable.
[ { "created": "Thu, 25 Oct 2018 14:26:17 GMT", "version": "v1" } ]
2022-06-08
[ [ "Reisert", "Johannes", "" ], [ "Reingruber", "Jürgen", "" ] ]
Activation of most primary sensory neurons results in transduction currents that are carried by cations. One notable exception is the vertebrate olfactory receptor neuron (ORN), where the transduction current is carried largely by the anion $Cl^-$. However, it remains unclear why ORNs use an anionic current for signal amplification. We have sought to provide clarification on this topic by studying the so far neglected dynamics of $Na^+$, $Ca^{2+}$, $K^+$ and $Cl^-$ in the small space of olfactory cilia during an odorant response. Using computational modeling and simulations we compared the outcomes of signal amplification based on either $Cl^-$ or $Na^+$ currents. We found that amplification produced by $Na^+$ influx instead of $Cl^-$ efflux is problematic for several reasons. First, the $Na^+$ current amplitude varies greatly depending on mucosal ion concentration changes. Second, a $Na^+$ current leads to a large increase in the ciliary $Na^+$ concentration during an odorant response. This increase inhibits and even reverses $Ca^{2+}$ clearance by $Na^+/Ca^{2+}/K^+$ exchange, which is essential for response termination. Finally, a $Na^+$ current increases the ciliary osmotic pressure, which could cause swelling that damages the cilia. By contrast, a transduction pathway based on $Cl^-$ efflux circumvents these problems and renders the odorant response robust and reliable.
2003.00241
Pawel Krajewski
Pawel Krajewski, Piotr Kachlicki, Anna Piasecka, Maria Surma, Anetta Kuczynska, Krzysztof Mikolajczak, Piotr Ogrodowicz, Aneta Sawikowska, Hanna Cwiek-Kupczynska, Maciej Stobiecki, Pawel Rodziewicz, Lukasz Marczak
In search of biomarkers and the ideotype of barley tolerant to water scarcity
12 pages, 8 figures, 2 supplementary tables
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
In barley plants, water shortage causes many changes on the morphological, physiological and biochemical levels, resulting in the reduction of grain yield. In the present study, the results of various experiments on the response of the same barley recombinant inbred lines to water shortage, including phenotypic, proteomic and metabolomic traits, were integrated. The obtained results suggest that a multi-omic approach makes it possible to indicate proteomic and metabolomic traits important for the reaction of barley plants to reduced water availability. Analysis of regression of the drought effect (DE) for grain weight per plant on the DE of proteomic and metabolomic traits allowed us to suggest an ideotype of barley plants tolerant to water shortage. It was shown that grain weight under drought was determined significantly by six proteins in leaves and five in roots, the functions of which were connected with defence mechanisms, ion/electron transport, carbon (in leaves) and nitrogen (in roots) metabolism, and in leaves additionally by two proteins of unknown function. Out of the numerous metabolites detected in roots, only aspartic and glutamic acids and one metabolite of unknown function were found to have a significant influence on grain weight per plant. The role of these traits as biomarkers, and especially as suggested targets of ideotype breeding, has to be further studied. One of the directions to be followed is genetic co-localization of proteomic, metabolomic and phenotypic traits in the genetic and physical maps of the barley genome, which can describe putative functional associations between traits; this is the next step of our analysis, which is in progress.
[ { "created": "Sat, 29 Feb 2020 11:36:50 GMT", "version": "v1" } ]
2020-03-03
[ [ "Krajewski", "Pawel", "" ], [ "Kachlicki", "Piotr", "" ], [ "Piasecka", "Anna", "" ], [ "Surma", "Maria", "" ], [ "Kuczynska", "Anetta", "" ], [ "Mikolajczak", "Krzysztof", "" ], [ "Ogrodowicz", "Piotr", "" ], [ "Sawikowska", "Aneta", "" ], [ "Cwiek-Kupczynska", "Hanna", "" ], [ "Stobiecki", "Maciej", "" ], [ "Rodziewicz", "Pawel", "" ], [ "Marczak", "Lukasz", "" ] ]
In barley plants, water shortage causes many changes on the morphological, physiological and biochemical levels, resulting in the reduction of grain yield. In the present study, the results of various experiments on the response of the same barley recombinant inbred lines to water shortage, including phenotypic, proteomic and metabolomic traits, were integrated. The obtained results suggest that a multi-omic approach makes it possible to indicate proteomic and metabolomic traits important for the reaction of barley plants to reduced water availability. Analysis of regression of the drought effect (DE) for grain weight per plant on the DE of proteomic and metabolomic traits allowed us to suggest an ideotype of barley plants tolerant to water shortage. It was shown that grain weight under drought was determined significantly by six proteins in leaves and five in roots, the functions of which were connected with defence mechanisms, ion/electron transport, carbon (in leaves) and nitrogen (in roots) metabolism, and in leaves additionally by two proteins of unknown function. Out of the numerous metabolites detected in roots, only aspartic and glutamic acids and one metabolite of unknown function were found to have a significant influence on grain weight per plant. The role of these traits as biomarkers, and especially as suggested targets of ideotype breeding, has to be further studied. One of the directions to be followed is genetic co-localization of proteomic, metabolomic and phenotypic traits in the genetic and physical maps of the barley genome, which can describe putative functional associations between traits; this is the next step of our analysis, which is in progress.
1704.03855
Richard Granger
Richard Granger
How brains are built: Principles of computational neuroscience
http://dana.org/news/cerebrum/detail.aspx?id=30356
Cerebrum; Dana Foundation 2011
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
'If I cannot build it, I do not understand it.' So said Nobel laureate Richard Feynman, and by his metric, we understand a bit about physics, less about chemistry, and almost nothing about biology. When we fully understand a phenomenon, we can specify its entire sequence of events, causes, and effects so completely that it is possible to fully simulate it, with all its internal mechanisms intact. Achieving that level of understanding is rare. It is commensurate with constructing a full design for a machine that could serve as a stand-in for the thing being studied. To understand a phenomenon sufficiently to fully simulate it is to understand it computationally. 'Computation' does not refer to computers per se. Rather, it refers to the underlying principles and methods that make them work. As Turing Award recipient Edsger Dijkstra said, computational science 'is no more about computers than astronomy is about telescopes.' Computational science is the study of the hidden rules underlying complex phenomena from physics to psychology. Computational neuroscience, then, has the aim of understanding brains sufficiently well to be able to simulate their functions, thereby subsuming the twin goals of science and engineering: deeply understanding the inner workings of our brains, and being able to construct simulacra of them. As simple robots today substitute for human physical abilities, in settings from factories to hospitals, so brain engineering will construct stand-ins for our mental abilities, and possibly even enable us to fix our brains when they break.
[ { "created": "Wed, 29 Mar 2017 01:18:13 GMT", "version": "v1" } ]
2017-04-13
[ [ "Granger", "Richard", "" ] ]
'If I cannot build it, I do not understand it.' So said Nobel laureate Richard Feynman, and by his metric, we understand a bit about physics, less about chemistry, and almost nothing about biology. When we fully understand a phenomenon, we can specify its entire sequence of events, causes, and effects so completely that it is possible to fully simulate it, with all its internal mechanisms intact. Achieving that level of understanding is rare. It is commensurate with constructing a full design for a machine that could serve as a stand-in for the thing being studied. To understand a phenomenon sufficiently to fully simulate it is to understand it computationally. 'Computation' does not refer to computers per se. Rather, it refers to the underlying principles and methods that make them work. As Turing Award recipient Edsger Dijkstra said, computational science 'is no more about computers than astronomy is about telescopes.' Computational science is the study of the hidden rules underlying complex phenomena from physics to psychology. Computational neuroscience, then, has the aim of understanding brains sufficiently well to be able to simulate their functions, thereby subsuming the twin goals of science and engineering: deeply understanding the inner workings of our brains, and being able to construct simulacra of them. As simple robots today substitute for human physical abilities, in settings from factories to hospitals, so brain engineering will construct stand-ins for our mental abilities, and possibly even enable us to fix our brains when they break.
1502.01326
Lucas Valdez D.
L. D. Valdez, H. H. A. R\^ego, H. E. Stanley, L. A. Braunstein
Predicting the extinction of Ebola spreading in Liberia due to mitigation strategies
null
Scientific Reports 5, Article number: 12172 (2015)
10.1038/srep12172
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Ebola virus is spreading throughout West Africa and is causing thousands of deaths. In order to quantify the effectiveness of different strategies for controlling the spread, we develop a mathematical model in which the propagation of the Ebola virus through Liberia is caused by travel between counties. For the initial months in which the Ebola virus spreads, we find that the arrival times of the disease into the counties predicted by our model are compatible with World Health Organization data, but we also find that reducing mobility is insufficient to contain the epidemic because it delays the arrival of Ebola virus in each county by only a few weeks. We study the effect of a strategy in which safe burials are increased and effective hospitalisation instituted under two scenarios: (i) one implemented in mid-July 2014 and (ii) one in mid-August---which was the actual time that strong interventions began in Liberia. We find that if scenario (i) had been pursued the lifetime of the epidemic would have been three months shorter and the total number of infected individuals 80\% less than in scenario (ii). Our projection under scenario (ii) is that the spreading will stop by mid-spring 2015.
[ { "created": "Wed, 4 Feb 2015 20:43:29 GMT", "version": "v1" }, { "created": "Thu, 5 Feb 2015 20:45:47 GMT", "version": "v2" }, { "created": "Sun, 24 May 2015 03:05:59 GMT", "version": "v3" }, { "created": "Sun, 31 May 2015 15:34:41 GMT", "version": "v4" }, { "created": "Tue, 21 Jul 2015 00:07:54 GMT", "version": "v5" } ]
2015-07-22
[ [ "Valdez", "L. D.", "" ], [ "Rêgo", "H. H. A.", "" ], [ "Stanley", "H. E.", "" ], [ "Braunstein", "L. A.", "" ] ]
The Ebola virus is spreading throughout West Africa and is causing thousands of deaths. In order to quantify the effectiveness of different strategies for controlling the spread, we develop a mathematical model in which the propagation of the Ebola virus through Liberia is caused by travel between counties. For the initial months in which the Ebola virus spreads, we find that the arrival times of the disease into the counties predicted by our model are compatible with World Health Organization data, but we also find that reducing mobility is insufficient to contain the epidemic because it delays the arrival of Ebola virus in each county by only a few weeks. We study the effect of a strategy in which safe burials are increased and effective hospitalisation instituted under two scenarios: (i) one implemented in mid-July 2014 and (ii) one in mid-August---which was the actual time that strong interventions began in Liberia. We find that if scenario (i) had been pursued the lifetime of the epidemic would have been three months shorter and the total number of infected individuals 80\% less than in scenario (ii). Our projection under scenario (ii) is that the spreading will stop by mid-spring 2015.
2109.12434
Satpreet Harcharan Singh
Satpreet Harcharan Singh, Floris van Breugel, Rajesh P. N. Rao, Bingni Wen Brunton
Emergent behavior and neural dynamics in artificial agents tracking turbulent plumes
null
null
null
null
q-bio.NC cs.AI cs.LG cs.NE cs.SY eess.SY
http://creativecommons.org/licenses/by-sa/4.0/
Tracking a turbulent plume to locate its source is a complex control problem because it requires multi-sensory integration and must be robust to intermittent odors, changing wind direction, and variable plume statistics. This task is routinely performed by flying insects, often over long distances, in pursuit of food or mates. Several aspects of this remarkable behavior have been studied in detail in many experimental studies. Here, we take a complementary in silico approach, using artificial agents trained with reinforcement learning to develop an integrated understanding of the behaviors and neural computations that support plume tracking. Specifically, we use deep reinforcement learning (DRL) to train recurrent neural network (RNN) agents to locate the source of simulated turbulent plumes. Interestingly, the agents' emergent behaviors resemble those of flying insects, and the RNNs learn to represent task-relevant variables, such as head direction and time since last odor encounter. Our analyses suggest an intriguing experimentally testable hypothesis for tracking plumes in changing wind direction -- that agents follow local plume shape rather than the current wind direction. While reflexive short-memory behaviors are sufficient for tracking plumes in constant wind, longer timescales of memory are essential for tracking plumes that switch direction. At the level of neural dynamics, the RNNs' population activity is low-dimensional and organized into distinct dynamical structures, with some correspondence to behavioral modules. Our in silico approach provides key intuitions for turbulent plume tracking strategies and motivates future targeted experimental and theoretical developments.
[ { "created": "Sat, 25 Sep 2021 20:57:02 GMT", "version": "v1" }, { "created": "Sat, 18 Dec 2021 00:58:21 GMT", "version": "v2" } ]
2021-12-21
[ [ "Singh", "Satpreet Harcharan", "" ], [ "van Breugel", "Floris", "" ], [ "Rao", "Rajesh P. N.", "" ], [ "Brunton", "Bingni Wen", "" ] ]
Tracking a turbulent plume to locate its source is a complex control problem because it requires multi-sensory integration and must be robust to intermittent odors, changing wind direction, and variable plume statistics. This task is routinely performed by flying insects, often over long distances, in pursuit of food or mates. Several aspects of this remarkable behavior have been studied in detail in many experimental studies. Here, we take a complementary in silico approach, using artificial agents trained with reinforcement learning to develop an integrated understanding of the behaviors and neural computations that support plume tracking. Specifically, we use deep reinforcement learning (DRL) to train recurrent neural network (RNN) agents to locate the source of simulated turbulent plumes. Interestingly, the agents' emergent behaviors resemble those of flying insects, and the RNNs learn to represent task-relevant variables, such as head direction and time since last odor encounter. Our analyses suggest an intriguing experimentally testable hypothesis for tracking plumes in changing wind direction -- that agents follow local plume shape rather than the current wind direction. While reflexive short-memory behaviors are sufficient for tracking plumes in constant wind, longer timescales of memory are essential for tracking plumes that switch direction. At the level of neural dynamics, the RNNs' population activity is low-dimensional and organized into distinct dynamical structures, with some correspondence to behavioral modules. Our in silico approach provides key intuitions for turbulent plume tracking strategies and motivates future targeted experimental and theoretical developments.
1808.03471
Larry Bull
Larry Bull
The Evolution of Sex Chromosomes through the Baldwin Effect
14 pages
null
null
null
q-bio.PE cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has recently been suggested that the fundamental haploid-diploid cycle of eukaryotic sex exploits a rudimentary form of the Baldwin effect. Thereafter the other associated phenomena can be explained as evolution tuning the amount and frequency of learning experienced by an organism. Using the well-known NK model of fitness landscapes it is here shown that the emergence of sex determination systems can also be explained under this view of eukaryotic evolution.
[ { "created": "Fri, 10 Aug 2018 09:54:30 GMT", "version": "v1" }, { "created": "Fri, 15 Feb 2019 14:21:57 GMT", "version": "v2" }, { "created": "Thu, 12 Mar 2020 13:40:45 GMT", "version": "v3" }, { "created": "Mon, 16 Mar 2020 12:15:33 GMT", "version": "v4" } ]
2020-03-17
[ [ "Bull", "Larry", "" ] ]
It has recently been suggested that the fundamental haploid-diploid cycle of eukaryotic sex exploits a rudimentary form of the Baldwin effect. Thereafter the other associated phenomena can be explained as evolution tuning the amount and frequency of learning experienced by an organism. Using the well-known NK model of fitness landscapes it is here shown that the emergence of sex determination systems can also be explained under this view of eukaryotic evolution.
q-bio/0504026
Sanjay Jain
Areejit Samal, Shalini Singh, Varun Giri, Sandeep Krishna, N. Raghuram and Sanjay Jain
Low Degree Metabolites Explain Essential Reactions and Enhance Modularity in Biological Networks
12 pages main text with 2 figures and 2 tables. 16 pages of Supplementary material. Revised version has title changed and contains study of 3 organisms instead of 1 earlier
BMC Bioinformatics 7:118 (2006)
null
null
q-bio.MN
null
Recently there has been a lot of interest in identifying modules at the level of genetic and metabolic networks of organisms, as well as in identifying single genes and reactions that are essential for the organism. A goal of computational and systems biology is to go beyond identification towards an explanation of specific modules and essential genes and reactions in terms of specific structural or evolutionary constraints. In the metabolic networks of E. coli, S. cerevisiae and S. aureus, we identified metabolites with a low degree of connectivity, particularly those that are produced and/or consumed in just a single reaction. Using FBA we also determined reactions essential for growth in these metabolic networks. We find that most reactions identified as essential in these networks turn out to be those involving the production or consumption of low degree metabolites. Applying graph theoretic methods to these metabolic networks, we identified connected clusters of these low degree metabolites. The genes involved in several operons in E. coli are correctly predicted as those of enzymes catalyzing the reactions of these clusters. We independently identified clusters of reactions whose fluxes are perfectly correlated. We find that the composition of the latter `functional clusters' is also largely explained in terms of clusters of low degree metabolites in each of these organisms. Our findings mean that most metabolic reactions that are essential can be tagged by one or more low degree metabolites. Those reactions are essential because they are the only ways of producing or consuming their respective tagged metabolites. Furthermore, reactions whose fluxes are strongly correlated can be thought of as `glued together' by these low degree metabolites.
[ { "created": "Wed, 20 Apr 2005 18:43:03 GMT", "version": "v1" }, { "created": "Fri, 21 Oct 2005 18:52:16 GMT", "version": "v2" } ]
2007-05-23
[ [ "Samal", "Areejit", "" ], [ "Singh", "Shalini", "" ], [ "Giri", "Varun", "" ], [ "Krishna", "Sandeep", "" ], [ "Raghuram", "N.", "" ], [ "Jain", "Sanjay", "" ] ]
Recently there has been a lot of interest in identifying modules at the level of genetic and metabolic networks of organisms, as well as in identifying single genes and reactions that are essential for the organism. A goal of computational and systems biology is to go beyond identification towards an explanation of specific modules and essential genes and reactions in terms of specific structural or evolutionary constraints. In the metabolic networks of E. coli, S. cerevisiae and S. aureus, we identified metabolites with a low degree of connectivity, particularly those that are produced and/or consumed in just a single reaction. Using FBA we also determined reactions essential for growth in these metabolic networks. We find that most reactions identified as essential in these networks turn out to be those involving the production or consumption of low degree metabolites. Applying graph theoretic methods to these metabolic networks, we identified connected clusters of these low degree metabolites. The genes involved in several operons in E. coli are correctly predicted as those of enzymes catalyzing the reactions of these clusters. We independently identified clusters of reactions whose fluxes are perfectly correlated. We find that the composition of the latter `functional clusters' is also largely explained in terms of clusters of low degree metabolites in each of these organisms. Our findings mean that most metabolic reactions that are essential can be tagged by one or more low degree metabolites. Those reactions are essential because they are the only ways of producing or consuming their respective tagged metabolites. Furthermore, reactions whose fluxes are strongly correlated can be thought of as `glued together' by these low degree metabolites.
1304.7992
Nikos Vlassis
Nikos Vlassis, Maria Pires Pacheco, Thomas Sauter
Fast Reconstruction of Compact Context-Specific Metabolic Network Models
fixed an error in the functional analysis of the liver model
null
10.1371/journal.pcbi.1003424
null
q-bio.MN cs.CE math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Systemic approaches to the study of a biological cell or tissue rely increasingly on the use of context-specific metabolic network models. The reconstruction of such a model from high-throughput data can routinely involve large numbers of tests under different conditions and extensive parameter tuning, which calls for fast algorithms. We present FASTCORE, a generic algorithm for reconstructing context-specific metabolic network models from global genome-wide metabolic network models such as Recon X. FASTCORE takes as input a core set of reactions that are known to be active in the context of interest (e.g., cell or tissue), and it searches for a flux consistent subnetwork of the global network that contains all reactions from the core set and a minimal set of additional reactions. Our key observation is that a minimal consistent reconstruction can be defined via a set of sparse modes of the global network, and FASTCORE iteratively computes such a set via a series of linear programs. Experiments on liver data demonstrate speedups of several orders of magnitude, and significantly more compact reconstructions, over a chief rival method. Given its simplicity and its excellent performance, FASTCORE can form the backbone of many future metabolic network reconstruction algorithms.
[ { "created": "Tue, 30 Apr 2013 13:31:40 GMT", "version": "v1" }, { "created": "Thu, 10 Oct 2013 14:12:56 GMT", "version": "v2" }, { "created": "Sat, 23 Nov 2013 22:17:18 GMT", "version": "v3" } ]
2015-06-15
[ [ "Vlassis", "Nikos", "" ], [ "Pacheco", "Maria Pires", "" ], [ "Sauter", "Thomas", "" ] ]
Systemic approaches to the study of a biological cell or tissue rely increasingly on the use of context-specific metabolic network models. The reconstruction of such a model from high-throughput data can routinely involve large numbers of tests under different conditions and extensive parameter tuning, which calls for fast algorithms. We present FASTCORE, a generic algorithm for reconstructing context-specific metabolic network models from global genome-wide metabolic network models such as Recon X. FASTCORE takes as input a core set of reactions that are known to be active in the context of interest (e.g., cell or tissue), and it searches for a flux consistent subnetwork of the global network that contains all reactions from the core set and a minimal set of additional reactions. Our key observation is that a minimal consistent reconstruction can be defined via a set of sparse modes of the global network, and FASTCORE iteratively computes such a set via a series of linear programs. Experiments on liver data demonstrate speedups of several orders of magnitude, and significantly more compact reconstructions, over a chief rival method. Given its simplicity and its excellent performance, FASTCORE can form the backbone of many future metabolic network reconstruction algorithms.
2106.03563
Roel Ceballos
Roel F. Ceballos
Mortality Analysis of Early COVID-19 Cases in the Philippines Based on Observed Demographic and Clinical Characteristics
null
Recoletos Multidisciplinary Research Journal, 9(1), 91-106 (2021)
10.32871/rmrj2109.01.09
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
This study aims to determine the demographic, epidemiologic, and clinical characteristics of COVID-19 cases that are highly susceptible to COVID-19 infection, with longer hospitalization and at higher risk of mortality and to provide insights that may be useful to assess the vaccination priority program and allocate hospital resources. Methods that were used include descriptive statistics, nonparametric analysis, and survival analysis. Results of the study reveal that women are more susceptible to infection while men are at risk of longer hospitalization and higher mortality. Significant risk factors to COVID-19 mortality are older age, male sex, difficulty breathing, and comorbidities like hypertension and diabetes. Patients with these combined symptoms should be considered for admission to the COVID-19 facility for proper management and care. Also, there is a significant delay in the testing and diagnosis of those who died, implying that timeliness in the testing and diagnosis of patients is crucial in patient survival.
[ { "created": "Thu, 3 Jun 2021 23:56:01 GMT", "version": "v1" } ]
2021-06-08
[ [ "Ceballos", "Roel F.", "" ] ]
This study aims to determine the demographic, epidemiologic, and clinical characteristics of COVID-19 cases that are highly susceptible to COVID-19 infection, with longer hospitalization and at higher risk of mortality and to provide insights that may be useful to assess the vaccination priority program and allocate hospital resources. Methods that were used include descriptive statistics, nonparametric analysis, and survival analysis. Results of the study reveal that women are more susceptible to infection while men are at risk of longer hospitalization and higher mortality. Significant risk factors to COVID-19 mortality are older age, male sex, difficulty breathing, and comorbidities like hypertension and diabetes. Patients with these combined symptoms should be considered for admission to the COVID-19 facility for proper management and care. Also, there is a significant delay in the testing and diagnosis of those who died, implying that timeliness in the testing and diagnosis of patients is crucial in patient survival.
0812.1279
Emilio Hernandez-Garcia
Emilio Hernandez-Garcia, Cristobal Lopez (IFISC), Simone Pigolotti (NBI), Ken H. Andersen (AQUA)
Species competition: coexistence, exclusion and clustering
9 pages, 4 figures. Replaced with published version. Freely available from the publisher site under the Creative Commons Attribution license
Philosophical Transactions of the Royal Society A 367, 3183-3195 (2009)
10.1098/rsta.2009.0086
null
q-bio.PE nlin.PS q-bio.QM
http://creativecommons.org/licenses/by/3.0/
We present properties of Lotka-Volterra equations describing ecological competition among a large number of competing species. First we extend to the case of a non-homogeneous niche space stability conditions for solutions representing species coexistence. Second, we discuss mechanisms leading to species clustering and obtain an analytical solution for a lumped state in a specific instance of the system. We also discuss how realistic ecological interactions may result in different types of competition coefficients.
[ { "created": "Sat, 6 Dec 2008 12:09:02 GMT", "version": "v1" }, { "created": "Wed, 22 Jul 2009 07:16:28 GMT", "version": "v2" } ]
2009-07-22
[ [ "Hernandez-Garcia", "Emilio", "", "IFISC" ], [ "Lopez", "Cristobal", "", "IFISC" ], [ "Pigolotti", "Simone", "", "NBI" ], [ "Andersen", "Ken H.", "", "AQUA" ] ]
We present properties of Lotka-Volterra equations describing ecological competition among a large number of competing species. First we extend to the case of a non-homogeneous niche space stability conditions for solutions representing species coexistence. Second, we discuss mechanisms leading to species clustering and obtain an analytical solution for a lumped state in a specific instance of the system. We also discuss how realistic ecological interactions may result in different types of competition coefficients.
2111.02692
Salman Mohamadi
Salman Mohamadi, Gianfranco.Doretto, Nasser M. Nasrabadi, Donald A. Adjeroh
Human Age Estimation from Gene Expression Data using Artificial Neural Networks
8 pages, 5 figures, This paper is accepted to 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)
null
null
null
q-bio.GN cs.AI
http://creativecommons.org/licenses/by/4.0/
The study of signatures of aging in terms of genomic biomarkers can be uniquely helpful in understanding the mechanisms of aging and developing models to accurately predict the age. Prior studies have employed gene expression and DNA methylation data aiming at accurate prediction of age. In this line, we propose a new framework for human age estimation using information from human dermal fibroblast gene expression data. First, we propose a new spatial representation as well as a data augmentation approach for gene expression data. Next in order to predict the age, we design an architecture of neural network and apply it to this new representation of the original and augmented data, as an ensemble classification approach. Our experimental results suggest the superiority of the proposed framework over state-of-the-art age estimation methods using DNA methylation and gene expression data.
[ { "created": "Thu, 4 Nov 2021 08:57:35 GMT", "version": "v1" }, { "created": "Fri, 5 Nov 2021 03:51:18 GMT", "version": "v2" } ]
2021-11-08
[ [ "Mohamadi", "Salman", "" ], [ "Doretto", "Gianfranco.", "" ], [ "Nasrabadi", "Nasser M.", "" ], [ "Adjeroh", "Donald A.", "" ] ]
The study of signatures of aging in terms of genomic biomarkers can be uniquely helpful in understanding the mechanisms of aging and developing models to accurately predict the age. Prior studies have employed gene expression and DNA methylation data aiming at accurate prediction of age. In this line, we propose a new framework for human age estimation using information from human dermal fibroblast gene expression data. First, we propose a new spatial representation as well as a data augmentation approach for gene expression data. Next in order to predict the age, we design an architecture of neural network and apply it to this new representation of the original and augmented data, as an ensemble classification approach. Our experimental results suggest the superiority of the proposed framework over state-of-the-art age estimation methods using DNA methylation and gene expression data.
1305.6259
Binay Panda
Prachi Jain (1), Neeraja M. Krishnan (1) and Binay Panda (1 and 2) ((1) Ganit Labs, Bio-IT Centre, Institute of Bioinformatics and Applied Biotechnology, Bangalore, India, (2) Strand Life Sciences, Bangalore, India)
Augmenting transcriptome assembly combinatorially
for associated supplementary file, see ftp://115.119.160.213/transcriptome_assembly_supp_text
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
RNA-seq allows detection and precise quantification of transcripts, provides comprehensive understanding of exon/intron boundaries, aids discovery of alternatively spliced isoforms and fusion transcripts along with measurement of allele-specific expression. Researchers interested in studying and constructing transcriptomes, especially for non-model species, often face the conundrum of choosing from a number of available de novo and genome-guided assemblers. A comprehensive comparative study is required to assess and evaluate their efficiency and sensitivity for transcript assembly, reconstruction and recovery. None of the popular assembly tools in use today achieves requisite sensitivity, specificity or recovery of full-length transcripts on its own. Hence, it is imperative that methods be developed in order to augment assemblies generated from multiple tools, with minimal compounding of error. Here, we present an approach to combinatorially augment transcriptome assembly based on a rigorous comparative study of popular de novo and genome-guided transcriptome assembly tools.
[ { "created": "Mon, 27 May 2013 15:50:46 GMT", "version": "v1" }, { "created": "Fri, 31 May 2013 07:09:43 GMT", "version": "v2" } ]
2013-06-03
[ [ "Jain", "Prachi", "", "1 and 2" ], [ "Krishnan", "Neeraja M.", "", "1 and 2" ], [ "Panda", "Binay", "", "1 and 2" ] ]
RNA-seq allows detection and precise quantification of transcripts, provides comprehensive understanding of exon/intron boundaries, aids discovery of alternatively spliced isoforms and fusion transcripts along with measurement of allele-specific expression. Researchers interested in studying and constructing transcriptomes, especially for non-model species, often face the conundrum of choosing from a number of available de novo and genome-guided assemblers. A comprehensive comparative study is required to assess and evaluate their efficiency and sensitivity for transcript assembly, reconstruction and recovery. None of the popular assembly tools in use today achieves requisite sensitivity, specificity or recovery of full-length transcripts on its own. Hence, it is imperative that methods be developed in order to augment assemblies generated from multiple tools, with minimal compounding of error. Here, we present an approach to combinatorially augment transcriptome assembly based on a rigorous comparative study of popular de novo and genome-guided transcriptome assembly tools.
q-bio/0505013
Kate Davison
K. Davison (1), P. M. Dolukhanov (2), G. R. Sarson (1) and A. Shukurov (1), ((1) School of Mathematics and Statistics, University of Newcastle upon Tyne, (2) School of Historical Studies, University of Newcastle upon Tyne)
Environmental effects on the spread of the Neolithic
36 Pages, 4 Figures, submitted for publication to the Journal of Archaeological Science
null
null
null
q-bio.PE
null
The causes and implications of the regional variations in the spread of the incipient agriculture in Europe remain poorly understood. We apply population dynamics models to study the dispersal of the Neolithic in Europe from a localized area in the Near East, solving the two-dimensional reaction-diffusion equation on a spherical surface. We focus on the role of major river paths and coastlines in the advance of farming to model the rapid advances of the Linear Pottery (LBK) and the Impressed Ware traditions along the Danube-Rhine corridor and the Mediterranean coastline respectively. We argue that the random walk of individuals, which results in diffusion of the population, can be anisotropic in those areas. The standard reaction-diffusion equation is thus supplemented with advection-like terms confined to the proximity of major rivers and coastlines. The model allows for the spatial variation in both the human mobility (diffusivity) and the carrying capacity of landscapes, reflecting the local altitude and latitude. This approach can easily be generalised to include other environmental factors, such as the bioproductivity of landscapes. Our model successfully accounts for the regional variations in the spread of the Neolithic, consistent with the radiocarbon dated data, and reproduces a time delay in the spread of farming to the Eastern Europe and Scandinavia.
[ { "created": "Fri, 6 May 2005 09:36:06 GMT", "version": "v1" }, { "created": "Mon, 9 May 2005 15:11:33 GMT", "version": "v2" } ]
2007-05-23
[ [ "Davison", "K.", "" ], [ "Dolukhanov", "P. M.", "" ], [ "Sarson", "G. R.", "" ], [ "Shukurov", "A.", "" ] ]
The causes and implications of the regional variations in the spread of the incipient agriculture in Europe remain poorly understood. We apply population dynamics models to study the dispersal of the Neolithic in Europe from a localized area in the Near East, solving the two-dimensional reaction-diffusion equation on a spherical surface. We focus on the role of major river paths and coastlines in the advance of farming to model the rapid advances of the Linear Pottery (LBK) and the Impressed Ware traditions along the Danube-Rhine corridor and the Mediterranean coastline respectively. We argue that the random walk of individuals, which results in diffusion of the population, can be anisotropic in those areas. The standard reaction-diffusion equation is thus supplemented with advection-like terms confined to the proximity of major rivers and coastlines. The model allows for the spatial variation in both the human mobility (diffusivity) and the carrying capacity of landscapes, reflecting the local altitude and latitude. This approach can easily be generalised to include other environmental factors, such as the bioproductivity of landscapes. Our model successfully accounts for the regional variations in the spread of the Neolithic, consistent with the radiocarbon dated data, and reproduces a time delay in the spread of farming to the Eastern Europe and Scandinavia.
1303.2935
Sergiy Popov
Zoja Medjanik, Lyudmila Popova, Sergiy Popov
On Systemic Destruction of Human Locomotor System
15 pages, 1 figure
null
null
null
q-bio.TO physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Locomotor system disorders affect a vast majority of people at some time in their life bringing pain, functional limitations, social and economic implications. Modern medicine cannot offer prevention and effective treatment for most chronic musculoskeletal conditions, because their etiology and pathogenesis are unknown. This is due to the lack of systemic understanding of the locomotor system functioning in both healthy and unhealthy states. Here we apply systems sciences to analyze the human locomotor system and develop a general theory that reveals the systemic destructive process in the locomotor system, linking together all its disorders. The systemic destruction involves adaptation and self-organization processes in the locomotor system, whose side effects introduce a positive feedback loop with nervous and vascular disturbances. Most chronic musculoskeletal conditions are just manifestations and consequences of this process. On the basis of our theoretical findings, we developed the world's first technology that effectively counteracts the systemic destruction and improves the locomotor system state at any age, also preventing problems in nervous and cardiovascular systems.
[ { "created": "Sun, 10 Mar 2013 15:05:51 GMT", "version": "v1" } ]
2013-03-13
[ [ "Medjanik", "Zoja", "" ], [ "Popova", "Lyudmila", "" ], [ "Popov", "Sergiy", "" ] ]
Locomotor system disorders affect a vast majority of people at some time in their life bringing pain, functional limitations, social and economic implications. Modern medicine cannot offer prevention and effective treatment for most chronic musculoskeletal conditions, because their etiology and pathogenesis are unknown. This is due to the lack of systemic understanding of the locomotor system functioning in both healthy and unhealthy states. Here we apply systems sciences to analyze the human locomotor system and develop a general theory that reveals the systemic destructive process in the locomotor system, linking together all its disorders. The systemic destruction involves adaptation and self-organization processes in the locomotor system, whose side effects introduce a positive feedback loop with nervous and vascular disturbances. Most chronic musculoskeletal conditions are just manifestations and consequences of this process. On the basis of our theoretical findings, we developed the world's first technology that effectively counteracts the systemic destruction and improves the locomotor system state at any age, also preventing problems in nervous and cardiovascular systems.
1401.3668
Simon Gravel
Simon Gravel, Mike Steel
The existence and abundance of ghost ancestors in biparental populations
15 pages + appendix, 6 figures; updated version contains additional simulations and clarifications
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a randomly-mating biparental population of size $N$ there are, with high probability, individuals who are genealogical ancestors of every extant individual within approximately $\log_2(N)$ generations into the past. We use this result of J. Chang to prove a curious corollary under standard models of recombination: there exist, with high probability, individuals within a constant multiple of $ \log_2(N)$ generations into the past who are simultaneously (i) genealogical ancestors of {\em each} of the individuals at the present, and (ii) genetic ancestors to {\em none} of the individuals at the present. Such ancestral individuals - ancestors of everyone today that left no genetic trace -- represent `ghost' ancestors in a strong sense. In this short note, we use simple analytical argument and simulations to estimate how many such individuals exist in finite Wright-Fisher populations.
[ { "created": "Wed, 15 Jan 2014 17:00:33 GMT", "version": "v1" }, { "created": "Mon, 2 Mar 2015 20:58:40 GMT", "version": "v2" } ]
2015-03-03
[ [ "Gravel", "Simon", "" ], [ "Steel", "Mike", "" ] ]
In a randomly-mating biparental population of size $N$ there are, with high probability, individuals who are genealogical ancestors of every extant individual within approximately $\log_2(N)$ generations into the past. We use this result of J. Chang to prove a curious corollary under standard models of recombination: there exist, with high probability, individuals within a constant multiple of $ \log_2(N)$ generations into the past who are simultaneously (i) genealogical ancestors of {\em each} of the individuals at the present, and (ii) genetic ancestors to {\em none} of the individuals at the present. Such ancestral individuals - ancestors of everyone today that left no genetic trace -- represent `ghost' ancestors in a strong sense. In this short note, we use simple analytical argument and simulations to estimate how many such individuals exist in finite Wright-Fisher populations.
2012.12961
Brian Cleary
Brian Cleary and Aviv Regev
The necessity and power of random, under-sampled experiments in biology
null
null
null
null
q-bio.QM stat.AP
http://creativecommons.org/licenses/by-nc-sa/4.0/
A vast array of transformative technologies developed over the past decade has enabled measurement and perturbation at ever increasing scale, yet our understanding of many systems remains limited by experimental capacity. Overcoming this limitation is not simply a matter of reducing costs with existing approaches; for complex biological systems it will likely never be possible to comprehensively measure and perturb every combination of variables of interest. There is, however, a growing body of work - much of it foundational and precedent-setting - that extracts a surprising amount of information from highly under-sampled data. For a wide array of biological questions, especially the study of genetic interactions, approaches like these will be crucial to obtain a comprehensive understanding. Yet, there is no coherent framework that unifies these methods, provides a rigorous mathematical foundation to understand their limitations and capabilities, allows us to understand through a common lens their surprising successes, and suggests how we might crystallize the key concepts to transform experimental biology. Here, we review prior work on this topic - both the biology and the mathematical foundations of randomization and low dimensional inference - and propose a general framework to make data collection in a wide array of studies vastly more efficient using random experiments and composite experiments.
[ { "created": "Wed, 23 Dec 2020 20:38:33 GMT", "version": "v1" } ]
2020-12-25
[ [ "Cleary", "Brian", "" ], [ "Regev", "Aviv", "" ] ]
A vast array of transformative technologies developed over the past decade has enabled measurement and perturbation at ever increasing scale, yet our understanding of many systems remains limited by experimental capacity. Overcoming this limitation is not simply a matter of reducing costs with existing approaches; for complex biological systems it will likely never be possible to comprehensively measure and perturb every combination of variables of interest. There is, however, a growing body of work - much of it foundational and precedent-setting - that extracts a surprising amount of information from highly under-sampled data. For a wide array of biological questions, especially the study of genetic interactions, approaches like these will be crucial to obtain a comprehensive understanding. Yet, there is no coherent framework that unifies these methods, provides a rigorous mathematical foundation to understand their limitations and capabilities, allows us to understand through a common lens their surprising successes, and suggests how we might crystallize the key concepts to transform experimental biology. Here, we review prior work on this topic - both the biology and the mathematical foundations of randomization and low dimensional inference - and propose a general framework to make data collection in a wide array of studies vastly more efficient using random experiments and composite experiments.
1712.00306
Rodrigo Felipe de Oliveira Pena
Rodrigo F.O. Pena, Cesar C. Ceballos, Vinicius Lima, and Antonio C. Roque
Interplay of activation kinetics and the derivative conductance determines resonance properties of neurons
11 pages, 9 figures
Phys. Rev. E 97, 042408 (2018)
10.1103/PhysRevE.97.042408
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a neuron with hyperpolarization activated current ($I_h$), the correct input frequency leads to an enhancement of the output response. This behavior is known as resonance and is well described by the neuronal impedance. In a simple neuron model we derive equations for the neuron's resonance and we link its frequency and existence with the biophysical properties of $I_h$. For a small voltage change, the component of the ratio of current change to voltage change ($dI/dV$) due to the voltage-dependent conductance change ($dg/dV$) is known as derivative conductance ($G_h^{Der}$). We show that both $G_h^{Der}$ and the current activation kinetics (characterized by the activation time constant $\tau_h$) are mainly responsible for controlling the frequency and existence of resonance. The increment of both factors ($G_h^{Der}$ and $\tau_h$) greatly contributes to the appearance of resonance. We also demonstrate that resonance is voltage dependent due to the voltage dependence of $G_h^{Der}$. Our results have important implications and can be used to predict and explain resonance properties of neurons with the $I_h$ current.
[ { "created": "Fri, 1 Dec 2017 13:21:16 GMT", "version": "v1" }, { "created": "Sat, 9 Dec 2017 13:19:56 GMT", "version": "v2" }, { "created": "Thu, 12 Apr 2018 18:38:47 GMT", "version": "v3" } ]
2018-04-18
[ [ "Pena", "Rodrigo F. O.", "" ], [ "Ceballos", "Cesar C.", "" ], [ "Lima", "Vinicius", "" ], [ "Roque", "Antonio C.", "" ] ]
In a neuron with hyperpolarization activated current ($I_h$), the correct input frequency leads to an enhancement of the output response. This behavior is known as resonance and is well described by the neuronal impedance. In a simple neuron model we derive equations for the neuron's resonance and we link its frequency and existence with the biophysical properties of $I_h$. For a small voltage change, the component of the ratio of current change to voltage change ($dI/dV$) due to the voltage-dependent conductance change ($dg/dV$) is known as derivative conductance ($G_h^{Der}$). We show that both $G_h^{Der}$ and the current activation kinetics (characterized by the activation time constant $\tau_h$) are mainly responsible for controlling the frequency and existence of resonance. The increment of both factors ($G_h^{Der}$ and $\tau_h$) greatly contributes to the appearance of resonance. We also demonstrate that resonance is voltage dependent due to the voltage dependence of $G_h^{Der}$. Our results have important implications and can be used to predict and explain resonance properties of neurons with the $I_h$ current.
2005.14149
Jakob Jordan
Jakob Jordan, Maximilian Schmidt, Walter Senn, and Mihai A. Petrovici
Evolving to learn: discovering interpretable plasticity rules for spiking networks
33 pages, 10 figures; J. Jordan and M. Schmidt contributed equally to this work
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so called "plasticity rules", is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms.
[ { "created": "Thu, 28 May 2020 17:06:03 GMT", "version": "v1" }, { "created": "Fri, 12 Jun 2020 13:32:27 GMT", "version": "v2" }, { "created": "Tue, 5 Jan 2021 16:44:40 GMT", "version": "v3" } ]
2021-01-06
[ [ "Jordan", "Jakob", "" ], [ "Schmidt", "Maximilian", "" ], [ "Senn", "Walter", "" ], [ "Petrovici", "Mihai A.", "" ] ]
Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so called "plasticity rules", is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms.
1609.00157
Guido Tiana
R. Meloni, C. Camilloni and G. Tiana
Properties of low-dimensional collective variables in the molecular dynamics of biopolymers
null
Phys. Rev. E 94, 052406 (2016)
10.1103/PhysRevE.94.052406
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The description of the dynamics of a complex, high-dimensional system in terms of a low-dimensional set of collective variables Y can be fruitful if the low dimensional representation satisfies a Langevin equation with drift and diffusion coefficients which depend only on Y. We present a computational scheme to evaluate whether a given collective variable provides a faithful low-dimensional representation of the dynamics of a high-dimensional system. The scheme is based on the framework of finite-difference Langevin-equation, similar to that used for molecular-dynamics simulations. This allows one to calculate the drift and diffusion coefficients in any point of the full-dimensional system. The width of the distribution of drift and diffusion coefficients in an ensemble of microscopic points at the same value of Y indicates to which extent the dynamics of Y is described by a simple Langevin equation. Using a simple protein model we show that collective variables often used to describe biopolymers display a non-negligible width both in the drift and in the diffusion coefficients. We also show that the associated effective force is compatible with the equilibrium free--energy calculated from a microscopic sampling, but results in markedly different dynamical properties.
[ { "created": "Thu, 1 Sep 2016 09:33:52 GMT", "version": "v1" }, { "created": "Mon, 28 Nov 2016 10:01:02 GMT", "version": "v2" } ]
2016-11-29
[ [ "Meloni", "R.", "" ], [ "Camilloni", "C.", "" ], [ "Tiana", "G.", "" ] ]
The description of the dynamics of a complex, high-dimensional system in terms of a low-dimensional set of collective variables Y can be fruitful if the low dimensional representation satisfies a Langevin equation with drift and diffusion coefficients which depend only on Y. We present a computational scheme to evaluate whether a given collective variable provides a faithful low-dimensional representation of the dynamics of a high-dimensional system. The scheme is based on the framework of finite-difference Langevin-equation, similar to that used for molecular-dynamics simulations. This allows one to calculate the drift and diffusion coefficients in any point of the full-dimensional system. The width of the distribution of drift and diffusion coefficients in an ensemble of microscopic points at the same value of Y indicates to which extent the dynamics of Y is described by a simple Langevin equation. Using a simple protein model we show that collective variables often used to describe biopolymers display a non-negligible width both in the drift and in the diffusion coefficients. We also show that the associated effective force is compatible with the equilibrium free--energy calculated from a microscopic sampling, but results in markedly different dynamical properties.
0807.0721
Bob Eisenberg
Bob Eisenberg
Permeation as a Diffusion Process
This is a posting of a paper written in 2000, for the Biophysics Textbook On Line Channels, Receptors, and Transporters Louis J. DeFelice, Volume Editor which is hard to find. Bob Eisenberg is also known as RS Eisenberg
Chapter 4 in Biophysics Textbook On Line Channels, Receptors, and Transporters Louis J. DeFelice, Volume Editor, 2000
null
null
q-bio.BM physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper shows how the diffusive movement of ions through a channel protein can be described as a chemical reaction over an arbitrary shaped potential barrier. The result is simple and intuitive but without approximation beyond the electrodiffusion description of ion movement.
[ { "created": "Fri, 4 Jul 2008 11:21:20 GMT", "version": "v1" } ]
2008-07-10
[ [ "Eisenberg", "Bob", "" ] ]
The paper shows how the diffusive movement of ions through a channel protein can be described as a chemical reaction over an arbitrary shaped potential barrier. The result is simple and intuitive but without approximation beyond the electrodiffusion description of ion movement.
1503.04620
Myoungwon Cho
Myoung Won Cho
Two symmetry breaking mechanisms for the development of orientation selectivity in a neural system
null
null
10.3938/jkps.67.1661
null
q-bio.NC stat.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Orientation selectivity is a remarkable feature of the neurons located in the primary visual cortex. Provided that the visual neurons acquire orientation selectivity through activity-dependent Hebbian learning, the development process can be understood as a kind of symmetry breaking phenomenon from the viewpoint of physics. The key mechanisms of the development process are examined here in a neural system. We find that there are at least two different mechanisms that lead to the development of orientation selectivity through breaking the radial symmetry in receptive fields. The first, a simultaneous symmetry breaking mechanism, is based on the competition between neighboring neurons, and the second, a spontaneous one, is based on the nonlinearity in interactions. It turns out that only the second mechanism leads to the formation of a columnar pattern whose characteristics accord with those observed in an animal experiment.
[ { "created": "Mon, 16 Mar 2015 12:26:04 GMT", "version": "v1" } ]
2016-01-20
[ [ "Cho", "Myoung Won", "" ] ]
Orientation selectivity is a remarkable feature of the neurons located in the primary visual cortex. Provided that the visual neurons acquire orientation selectivity through activity-dependent Hebbian learning, the development process can be understood as a kind of symmetry breaking phenomenon from the viewpoint of physics. The key mechanisms of the development process are examined here in a neural system. We find that there are at least two different mechanisms that lead to the development of orientation selectivity through breaking the radial symmetry in receptive fields. The first, a simultaneous symmetry breaking mechanism, is based on the competition between neighboring neurons, and the second, a spontaneous one, is based on the nonlinearity in interactions. It turns out that only the second mechanism leads to the formation of a columnar pattern whose characteristics accord with those observed in an animal experiment.
1207.4145
Nebojsa Jojic
Nebojsa Jojic, Vladimir Jojic, David Heckerman
Joint discovery of haplotype blocks and complex trait associations from SNP sequences
Appears in Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence (UAI2004)
null
null
UAI-P-2004-PG-286-292
q-bio.GN cs.CE stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Haplotypes, the global patterns of DNA sequence variation, have important implications for identifying complex traits. Recently, blocks of limited haplotype diversity have been discovered in human chromosomes, intensifying the research on modelling the block structure as well as the transitions or co-occurrence of the alleles in these blocks as a way to compress the variability and infer the associations more robustly. The haplotype block structure analysis is typically complicated by the fact that the phase information for each SNP is missing, i.e., the observed allele pairs are not given in a consistent order across the sequence. The techniques for circumventing this require additional information, such as family data, or a more complex sequencing procedure. In this paper we present a hierarchical statistical model and the associated learning and inference algorithms that simultaneously deal with the allele ambiguity per locus, missing data, block estimation, and the complex trait association. While the block structure may differ from the structures inferred by other methods, which use the pedigree information or previously known alleles, the parameters we estimate, including the learned block structure and the estimated block transitions per locus, define a good model of variability in the set. The method is completely data-driven and can detect Crohn's disease from the SNP data taken from the human chromosome 5q31 with a detection rate of 80% and a small error variance.
[ { "created": "Wed, 11 Jul 2012 14:55:26 GMT", "version": "v1" } ]
2012-07-19
[ [ "Jojic", "Nebojsa", "" ], [ "Jojic", "Vladimir", "" ], [ "Heckerman", "David", "" ] ]
Haplotypes, the global patterns of DNA sequence variation, have important implications for identifying complex traits. Recently, blocks of limited haplotype diversity have been discovered in human chromosomes, intensifying the research on modelling the block structure as well as the transitions or co-occurrence of the alleles in these blocks as a way to compress the variability and infer the associations more robustly. The haplotype block structure analysis is typically complicated by the fact that the phase information for each SNP is missing, i.e., the observed allele pairs are not given in a consistent order across the sequence. The techniques for circumventing this require additional information, such as family data, or a more complex sequencing procedure. In this paper we present a hierarchical statistical model and the associated learning and inference algorithms that simultaneously deal with the allele ambiguity per locus, missing data, block estimation, and the complex trait association. While the block structure may differ from the structures inferred by other methods, which use the pedigree information or previously known alleles, the parameters we estimate, including the learned block structure and the estimated block transitions per locus, define a good model of variability in the set. The method is completely data-driven and can detect Crohn's disease from the SNP data taken from the human chromosome 5q31 with a detection rate of 80% and a small error variance.
1606.01802
Philip Pearce
Philip Pearce, Paul Brownbill, Jiri Janacek, Marie Jirkovska, Lucie Kubinova, Igor L. Chernyavsky and Oliver E. Jensen
Image-Based Modeling of Blood Flow and Oxygen Transfer in Feto-Placental Capillaries
Final version of manuscript
PLoS ONE 11(10):e0165369 (2016)
10.1371/journal.pone.0165369
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During pregnancy, oxygen diffuses from maternal to fetal blood through villous trees in the placenta. In this paper, we simulate blood flow and oxygen transfer in feto-placental capillaries by converting three-dimensional representations of villous and capillary surfaces, reconstructed from confocal laser scanning microscopy, to finite-element meshes, and calculating values of vascular flow resistance and total oxygen transfer. The relationship between the total oxygen transfer rate and the pressure drop through the capillary is shown to be captured across a wide range of pressure drops by physical scaling laws and an upper bound on the oxygen transfer rate. A regression equation is introduced that can be used to estimate the oxygen transfer in a capillary using the vascular resistance. Two techniques for quantifying the effects of statistical variability, experimental uncertainty and pathological placental structure on the calculated properties are then introduced. First, scaling arguments are used to quantify the sensitivity of the model to uncertainties in the geometry and the parameters. Second, the effects of localized dilations in fetal capillaries are investigated using an idealized axisymmetric model, to quantify the possible effect of pathological placental structure on oxygen transfer. The model predicts how, for a fixed pressure drop through a capillary, oxygen transfer is maximized by an optimal width of the dilation. The results could explain the prevalence of fetal hypoxia in cases of delayed villous maturation, a pathology characterized by a lack of the vasculo-syncytial membranes often seen in conjunction with localized capillary dilations.
[ { "created": "Mon, 6 Jun 2016 16:00:54 GMT", "version": "v1" }, { "created": "Tue, 7 Jun 2016 08:47:23 GMT", "version": "v2" }, { "created": "Tue, 9 Aug 2016 16:49:33 GMT", "version": "v3" }, { "created": "Thu, 27 Oct 2016 18:15:33 GMT", "version": "v4" } ]
2016-10-28
[ [ "Pearce", "Philip", "" ], [ "Brownbill", "Paul", "" ], [ "Janacek", "Jiri", "" ], [ "Jirkovska", "Marie", "" ], [ "Kubinova", "Lucie", "" ], [ "Chernyavsky", "Igor L.", "" ], [ "Jensen", "Oliver E.", "" ] ]
During pregnancy, oxygen diffuses from maternal to fetal blood through villous trees in the placenta. In this paper, we simulate blood flow and oxygen transfer in feto-placental capillaries by converting three-dimensional representations of villous and capillary surfaces, reconstructed from confocal laser scanning microscopy, to finite-element meshes, and calculating values of vascular flow resistance and total oxygen transfer. The relationship between the total oxygen transfer rate and the pressure drop through the capillary is shown to be captured across a wide range of pressure drops by physical scaling laws and an upper bound on the oxygen transfer rate. A regression equation is introduced that can be used to estimate the oxygen transfer in a capillary using the vascular resistance. Two techniques for quantifying the effects of statistical variability, experimental uncertainty and pathological placental structure on the calculated properties are then introduced. First, scaling arguments are used to quantify the sensitivity of the model to uncertainties in the geometry and the parameters. Second, the effects of localized dilations in fetal capillaries are investigated using an idealized axisymmetric model, to quantify the possible effect of pathological placental structure on oxygen transfer. The model predicts how, for a fixed pressure drop through a capillary, oxygen transfer is maximized by an optimal width of the dilation. The results could explain the prevalence of fetal hypoxia in cases of delayed villous maturation, a pathology characterized by a lack of the vasculo-syncytial membranes often seen in conjunction with localized capillary dilations.
2404.16866
Chaohao Yuan
Chaohao Yuan, Songyou Li, Geyan Ye, Yikun Zhang, Long-Kai Huang, Wenbing Huang, Wei Liu, Jianhua Yao, Yu Rong
Functional Protein Design with Local Domain Alignment
null
null
null
null
q-bio.QM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The core challenge of de novo protein design lies in creating proteins with specific functions or properties, guided by certain conditions. Current models explore generating proteins using structural and evolutionary guidance, which provide only indirect conditions concerning functions and properties. However, textual annotations of proteins, especially the annotations for protein domains, which directly describe the protein's high-level functionalities, properties, and their correlation with target amino acid sequences, remain unexplored in the context of protein design tasks. In this paper, we propose Protein-Annotation Alignment Generation (PAAG), a multi-modality protein design framework that integrates textual annotations extracted from protein databases for controllable generation in sequence space. Specifically, within a multi-level alignment module, PAAG can explicitly generate proteins containing specific domains conditioned on the corresponding domain annotations, and can even design novel proteins with flexible combinations of different kinds of annotations. Our experimental results underscore the superiority of the aligned protein representations from PAAG across 7 prediction tasks. Furthermore, PAAG demonstrates a nearly sixfold increase in generation success rate (24.7% vs 4.7% in the zinc finger domain, and 54.3% vs 8.7% in the immunoglobulin domain) in comparison to the existing model.
[ { "created": "Thu, 18 Apr 2024 09:37:54 GMT", "version": "v1" }, { "created": "Mon, 27 May 2024 07:23:26 GMT", "version": "v2" } ]
2024-05-28
[ [ "Yuan", "Chaohao", "" ], [ "Li", "Songyou", "" ], [ "Ye", "Geyan", "" ], [ "Zhang", "Yikun", "" ], [ "Huang", "Long-Kai", "" ], [ "Huang", "Wenbing", "" ], [ "Liu", "Wei", "" ], [ "Yao", "Jianhua", "" ], [ "Rong", "Yu", "" ] ]
The core challenge of de novo protein design lies in creating proteins with specific functions or properties, guided by certain conditions. Current models explore generating proteins using structural and evolutionary guidance, which provide only indirect conditions concerning functions and properties. However, textual annotations of proteins, especially the annotations for protein domains, which directly describe the protein's high-level functionalities, properties, and their correlation with target amino acid sequences, remain unexplored in the context of protein design tasks. In this paper, we propose Protein-Annotation Alignment Generation (PAAG), a multi-modality protein design framework that integrates textual annotations extracted from protein databases for controllable generation in sequence space. Specifically, within a multi-level alignment module, PAAG can explicitly generate proteins containing specific domains conditioned on the corresponding domain annotations, and can even design novel proteins with flexible combinations of different kinds of annotations. Our experimental results underscore the superiority of the aligned protein representations from PAAG across 7 prediction tasks. Furthermore, PAAG demonstrates a nearly sixfold increase in generation success rate (24.7% vs 4.7% in the zinc finger domain, and 54.3% vs 8.7% in the immunoglobulin domain) in comparison to the existing model.
q-bio/0610018
Michael Deem
Guanyu Wang and Michael W. Deem
A Physical Theory of the Competition that Allows HIV to Escape from the Immune System
5 pages, 2 figures, to appear in Phys. Rev. Lett
null
10.1103/PhysRevLett.97.188106
null
q-bio.PE
null
Competition within the immune system may degrade immune control of viral infections. We formalize the evolution that occurs in both HIV-1 and the immune system quasispecies. Inclusion of competition in the immune system leads to a novel balance between the immune response and HIV-1, in which the eventual outcome is HIV-1 escape rather than control. The analytical model reproduces the three stages of HIV-1 infection. We propose a vaccine regimen that may be able to reduce competition between T cells, potentially eliminating the third stage of HIV-1.
[ { "created": "Sun, 8 Oct 2006 06:14:41 GMT", "version": "v1" }, { "created": "Mon, 16 Oct 2006 15:59:04 GMT", "version": "v2" } ]
2009-11-13
[ [ "Wang", "Guanyu", "" ], [ "Deem", "Michael W.", "" ] ]
Competition within the immune system may degrade immune control of viral infections. We formalize the evolution that occurs in both HIV-1 and the immune system quasispecies. Inclusion of competition in the immune system leads to a novel balance between the immune response and HIV-1, in which the eventual outcome is HIV-1 escape rather than control. The analytical model reproduces the three stages of HIV-1 infection. We propose a vaccine regimen that may be able to reduce competition between T cells, potentially eliminating the third stage of HIV-1.
1310.5017
Luís F. Seoane MSc
Luís F. Seoane and Ricard V. Solé
Synthetic biocomputation design using supervised gene regulatory networks
13 pages, 5 figures
null
null
null
q-bio.NC q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The potential of synthetic biology techniques for designing complex cellular circuits able to solve complicated computations opens a whole domain of exploration, beyond experiments and theory. Such cellular circuits could be used to carry out hard tasks involving decision-making, storage of information, or signal processing. Since Gene Regulatory Networks (GRNs) are the best known technical approach to synthetic designs, it would be desirable to know in advance the potential of such circuits in performing tasks and how classical approximations dealing with neural networks can be translated into GRNs. In this paper such a potential is analyzed. Here we show that feed-forward GRNs are capable of performing classic machine intelligence tasks. Therefore, two important milestones in the success of Artificial Neural Networks are reached for models of GRNs based on Hill equations, namely the back-propagation algorithm and the proof that GRNs can approximate arbitrary positive functions. Potential extensions and implications for synthetic designs are outlined.
[ { "created": "Fri, 18 Oct 2013 13:42:34 GMT", "version": "v1" } ]
2013-10-21
[ [ "Seoane", "Luís F.", "" ], [ "Solé", "Ricard V.", "" ] ]
The potential of synthetic biology techniques for designing complex cellular circuits able to solve complicated computations opens a whole domain of exploration, beyond experiments and theory. Such cellular circuits could be used to carry out hard tasks involving decision-making, storage of information, or signal processing. Since Gene Regulatory Networks (GRNs) are the best known technical approach to synthetic designs, it would be desirable to know in advance the potential of such circuits in performing tasks and how classical approximations dealing with neural networks can be translated into GRNs. In this paper such a potential is analyzed. Here we show that feed-forward GRNs are capable of performing classic machine intelligence tasks. Therefore, two important milestones in the success of Artificial Neural Networks are reached for models of GRNs based on Hill equations, namely the back-propagation algorithm and the proof that GRNs can approximate arbitrary positive functions. Potential extensions and implications for synthetic designs are outlined.
2408.01425
Tajudeen Yahaya Dr.
CD Obadiah, TO Yahaya, AA Aliero, M Abdulkareem
Comparative Evaluation of the Proximate and Cytogenotoxicity of Ash and Rice Chips Used as Mango Fruit Artificial Ripening Agents in Birnin Kebbi, Nigeria
Accept
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
The high demand for mango (Mangifera indica L.) fruits has led sellers to employ ripening agents. However, concerns are growing regarding the potential toxicities of induced ripening, emphasizing the need for scientific investigation. Samples of artificially and naturally ripened mangoes were analyzed for proximate composition using standard protocols. Cytogenotoxicity was then assessed using the Allium cepa L. toxicity test. Twenty (20) A. cepa (onion) bulbs were used, with 5 ripened naturally, 5 with wood ash, 5 with herbaceous ash, and 5 with rice chips, all grown over tap water for five days. The root tips of the bulbs were assayed and examined for chromosomal aberrations. The results revealed a significant (P<0.05) increase in the moisture, protein, and ash content of mangoes as ripening agents were introduced. Mangoes ripened with wood ash exhibited the highest moisture content (81%), while those ripened with rice chips had the highest protein (0.5%) and ash content (1.5%). Naturally ripened mangoes displayed the highest fat (0.0095%) and fiber (11.46%) contents. The A. cepa toxicity test indicated significant (P<0.05) differences in the root growth of mangoes ripened with various agents. Wood ash resulted in the highest root growth (2.62 cm), while herbaceous ash had the least (2.18 cm). Chromosomal aberrations, including sticky, vagrant, and laggard abnormalities, were observed with all agents, with herbaceous ash exhibiting the highest and rice chips the least. The obtained results suggest that induced ripening of the fruits could induce toxicities, highlighting the necessity for public awareness regarding the potential dangers posed by these agents.
[ { "created": "Sat, 29 Jun 2024 12:06:44 GMT", "version": "v1" } ]
2024-08-06
[ [ "Obadiah", "CD", "" ], [ "Yahaya", "TO", "" ], [ "Aliero", "AA", "" ], [ "Abdulkareem", "M", "" ] ]
The high demand for mango (Mangifera indica L.) fruits has led sellers to employ ripening agents. However, concerns are growing regarding the potential toxicities of induced ripening, emphasizing the need for scientific investigation. Samples of artificially and naturally ripened mangoes were analyzed for proximate composition using standard protocols. Cytogenotoxicity was then assessed using the Allium cepa L. toxicity test. Twenty (20) A. cepa (onion) bulbs were used, with 5 ripened naturally, 5 with wood ash, 5 with herbaceous ash, and 5 with rice chips, all grown over tap water for five days. The root tips of the bulbs were assayed and examined for chromosomal aberrations. The results revealed a significant (P<0.05) increase in the moisture, protein, and ash content of mangoes as ripening agents were introduced. Mangoes ripened with wood ash exhibited the highest moisture content (81%), while those ripened with rice chips had the highest protein (0.5%) and ash content (1.5%). Naturally ripened mangoes displayed the highest fat (0.0095%) and fiber (11.46%) contents. The A. cepa toxicity test indicated significant (P<0.05) differences in the root growth of mangoes ripened with various agents. Wood ash resulted in the highest root growth (2.62 cm), while herbaceous ash had the least (2.18 cm). Chromosomal aberrations, including sticky, vagrant, and laggard abnormalities, were observed with all agents, with herbaceous ash exhibiting the highest and rice chips the least. The obtained results suggest that induced ripening of the fruits could induce toxicities, highlighting the necessity for public awareness regarding the potential dangers posed by these agents.
q-bio/0609049
Haret Rosu
P. Escalante-Minakata, V. Ibarra-Junquera, H.C. Rosu, A. De Leon-Rodriguez, R. Gonzalez-Garcia
An algorithm for real-time estimation of Mezcal fermentation parameters based on redox potential measurements
13 pp, 5 figs, misprints corrected
null
null
null
q-bio.QM
null
We present an algorithm for the continuous monitoring of the biomass and ethanol concentrations, as well as the kinetic rate, in the Mezcal fermentation process. This algorithm performs its task using only the on-line measurements of the redox potential. The procedure includes an artificial neural network (ANN) that relates the redox potential to the ethanol and biomass concentrations. Then a nonlinear-observer-based algorithm uses the biomass estimations to infer the kinetic rate of this fermentation process. The method shows that the redox potential is a valuable indicator of microorganism metabolic activity during the Mezcal fermentation. In addition, the estimated kinetic rate can be considered direct evidence of the presence of mixed culture growth in the process. In this work, the detailed design of the software-sensor is presented, as well as its experimental application at the laboratory level.
[ { "created": "Thu, 28 Sep 2006 07:24:28 GMT", "version": "v1" }, { "created": "Wed, 16 Jan 2008 23:10:04 GMT", "version": "v2" } ]
2008-01-17
[ [ "Escalante-Minakata", "P.", "" ], [ "Ibarra-Junquera", "V.", "" ], [ "Rosu", "H. C.", "" ], [ "De Leon-Rodriguez", "A.", "" ], [ "Gonzalez-Garcia", "R.", "" ] ]
We present an algorithm for the continuous monitoring of the biomass and ethanol concentrations, as well as the kinetic rate, in the Mezcal fermentation process. This algorithm performs its task using only the on-line measurements of the redox potential. The procedure includes an artificial neural network (ANN) that relates the redox potential to the ethanol and biomass concentrations. Then a nonlinear-observer-based algorithm uses the biomass estimations to infer the kinetic rate of this fermentation process. The method shows that the redox potential is a valuable indicator of microorganism metabolic activity during the Mezcal fermentation. In addition, the estimated kinetic rate can be considered direct evidence of the presence of mixed culture growth in the process. In this work, the detailed design of the software-sensor is presented, as well as its experimental application at the laboratory level.
1803.02905
Ahmad Maqboul
Ahmad Maqboul and Bakheet Elsadek
A Novel Model of Cancer-Induced Peripheral Neuropathy and the Role of TRPA1 in Pain Transduction
12 pages
Pain Research and Management, Volume 2017, Article ID 3517207
10.1155/2017/3517207
null
q-bio.TO q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Background. Models of cancer-induced neuropathy are designed by injecting cancer cells near the peripheral nerves. The interference of tissue-resident immune cells does not allow direct contact with nerve fibres, which affects the tumor microenvironment and the invasion process. Methods. Anaplastic tumor-1 (AT-1) cells were inoculated within the sciatic nerves (SNs) of male Copenhagen rats. Lumbar dorsal root ganglia (DRGs) and the SNs were collected on days 3, 7, 14, and 21. SN tissues were examined for morphological changes and DRG tissues for immunofluorescence, electrophoretic tendency, and mRNA quantification. Hypersensitivities to cold, mechanical, and thermal stimuli were determined. HC-030031, a selective TRPA1 antagonist, was used to treat cold allodynia. Results. Nociception thresholds were identified on day 6. Immunofluorescent micrographs showed overexpression of TRPA1 on days 7 and 14 and of CGRP on day 14 until day 21. Both TRPA1 and CGRP were coexpressed on the same cells. Immunoblots exhibited an increase in TRPA1 expression on day 14. TRPA1 mRNA underwent an increase on day 7 (normalized to 18S). Injection of HC-030031 transiently reversed the cold allodynia. Conclusion. A novel and promising model of cancer-induced neuropathy was established, and the role of TRPA1 and CGRP in pain transduction was examined.
[ { "created": "Wed, 7 Mar 2018 22:43:41 GMT", "version": "v1" } ]
2018-03-09
[ [ "Maqboul", "Ahmad", "" ], [ "Elsadek", "Bakheet", "" ] ]
Background. Models of cancer-induced neuropathy are designed by injecting cancer cells near the peripheral nerves. The interference of tissue-resident immune cells does not allow direct contact with nerve fibres, which affects the tumor microenvironment and the invasion process. Methods. Anaplastic tumor-1 (AT-1) cells were inoculated within the sciatic nerves (SNs) of male Copenhagen rats. Lumbar dorsal root ganglia (DRGs) and the SNs were collected on days 3, 7, 14, and 21. SN tissues were examined for morphological changes and DRG tissues for immunofluorescence, electrophoretic tendency, and mRNA quantification. Hypersensitivities to cold, mechanical, and thermal stimuli were determined. HC-030031, a selective TRPA1 antagonist, was used to treat cold allodynia. Results. Nociception thresholds were identified on day 6. Immunofluorescent micrographs showed overexpression of TRPA1 on days 7 and 14 and of CGRP on day 14 until day 21. Both TRPA1 and CGRP were coexpressed on the same cells. Immunoblots exhibited an increase in TRPA1 expression on day 14. TRPA1 mRNA underwent an increase on day 7 (normalized to 18S). Injection of HC-030031 transiently reversed the cold allodynia. Conclusion. A novel and promising model of cancer-induced neuropathy was established, and the role of TRPA1 and CGRP in pain transduction was examined.
0905.0994
Roberto Amato
Roberto Amato, Michele Pinelli, Daniel D'Andrea, Gennaro Miele, Mario Nicodemi, Giancarlo Raiconi, Sergio Cocozza
A novel approach to simulate gene-environment interactions in complex diseases
null
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complex diseases are multifactorial traits caused by both genetic and environmental factors. They account for the majority of human diseases and include those with the largest prevalence and mortality (cancer, heart disease, obesity, etc.). Despite the large amount of information that has been collected about both genetic and environmental risk factors, there are relatively few examples of studies on their interactions in the epidemiological literature. One reason can be the incomplete knowledge of the power of statistical methods designed to search for risk factors and their interactions in these data sets. An improvement in this direction would lead to a better understanding and description of gene-environment interactions. To this aim, a possible strategy is to challenge the different statistical methods against data sets where the underlying phenomenon is completely known and fully controllable, for example simulated ones. We present a mathematical approach that models gene-environment interactions. By this method it is possible to generate simulated populations having gene-environment interactions of any form. We implemented a simple version of this model in a Gene-Environment iNteraction Simulator (GENS), a tool designed to simulate case-control data sets where a one gene-one environment interaction influences the disease risk. The main effort has been to allow the user to describe characteristics of the population by using standard epidemiological measures and to implement constraints to make the simulator behavior biologically meaningful.
[ { "created": "Thu, 7 May 2009 10:51:42 GMT", "version": "v1" } ]
2009-05-08
[ [ "Amato", "Roberto", "" ], [ "Pinelli", "Michele", "" ], [ "D'Andrea", "Daniel", "" ], [ "Miele", "Gennaro", "" ], [ "Nicodemi", "Mario", "" ], [ "Raiconi", "Giancarlo", "" ], [ "Cocozza", "Sergio", "" ] ]
Complex diseases are multifactorial traits caused by both genetic and environmental factors. They account for the majority of human diseases and include those with the largest prevalence and mortality (cancer, heart disease, obesity, etc.). Despite the large amount of information that has been collected about both genetic and environmental risk factors, there are relatively few examples of studies on their interactions in the epidemiological literature. One reason can be the incomplete knowledge of the power of statistical methods designed to search for risk factors and their interactions in these data sets. An improvement in this direction would lead to a better understanding and description of gene-environment interactions. To this aim, a possible strategy is to challenge the different statistical methods against data sets where the underlying phenomenon is completely known and fully controllable, for example simulated ones. We present a mathematical approach that models gene-environment interactions. By this method it is possible to generate simulated populations having gene-environment interactions of any form. We implemented a simple version of this model in a Gene-Environment iNteraction Simulator (GENS), a tool designed to simulate case-control data sets where a one gene-one environment interaction influences the disease risk. The main effort has been to allow the user to describe characteristics of the population by using standard epidemiological measures and to implement constraints to make the simulator behavior biologically meaningful.
1001.5420
Dhagash Mehta
William Hanan, Dhagash Mehta, Guillaume Moroz, Sepanda Pouryahya
Stability and Bifurcation Analysis of Coupled Fitzhugh-Nagumo Oscillators
"Extended abstract" published in the Joint Conference of ASCM2009 and MACIS2009, Japan, 2009
null
null
null
q-bio.NC cs.SC nlin.CD q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neurons are the central biological objects in understanding how the brain works. The famous Hodgkin-Huxley model, which describes how action potentials of a neuron are initiated and propagated, consists of four coupled nonlinear differential equations. Because these equations are difficult to deal with, there also exist several simplified models, of which many exhibit polynomial-like non-linearity. Examples of such models are the Fitzhugh-Nagumo (FHN) model, the Hindmarsh-Rose (HR) model, the Morris-Lecar (ML) model and the Izhikevich model. In this work, we first prescribe the biologically relevant parameter ranges for the FHN model and subsequently study the dynamical behaviour of coupled neurons on small networks of two or three nodes. To do this, we use a computational real algebraic geometry method called the Discriminant Variety (DV) method to perform the stability and bifurcation analysis of these small networks. A time series analysis of the FHN model can be found elsewhere in related work [15].
[ { "created": "Fri, 29 Jan 2010 15:25:26 GMT", "version": "v1" } ]
2010-02-01
[ [ "Hanan", "William", "" ], [ "Mehta", "Dhagash", "" ], [ "Moroz", "Guillaume", "" ], [ "Pouryahya", "Sepanda", "" ] ]
Neurons are the central biological objects in understanding how the brain works. The famous Hodgkin-Huxley model, which describes how action potentials of a neuron are initiated and propagated, consists of four coupled nonlinear differential equations. Because these equations are difficult to deal with, there also exist several simplified models, of which many exhibit polynomial-like non-linearity. Examples of such models are the Fitzhugh-Nagumo (FHN) model, the Hindmarsh-Rose (HR) model, the Morris-Lecar (ML) model and the Izhikevich model. In this work, we first prescribe the biologically relevant parameter ranges for the FHN model and subsequently study the dynamical behaviour of coupled neurons on small networks of two or three nodes. To do this, we use a computational real algebraic geometry method called the Discriminant Variety (DV) method to perform the stability and bifurcation analysis of these small networks. A time series analysis of the FHN model can be found elsewhere in related work [15].
1806.10656
Monique Tirion
Hyuntae Na, Daniel ben-Avraham, Monique M. Tirion
Slow Normal Modes of Proteins are Accurately Reproduced across Different Platforms
20 pages plus 7 figures/tables (version to conform with referees remarks)
null
10.1088/1478-3975/aae333
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Protein Data Bank (PDB) contains the atomic structures of over 10^5 biomolecules with better than 2.8 Å resolution. The listing of the identities and coordinates of the atoms comprising each macromolecule permits an analysis of the slow-time vibrational response of these large systems to minor perturbations. 3D video animations of individual modes of oscillation demonstrate how regions interdigitate to create cohesive collective motions, providing a comprehensive framework for and familiarity with the overall 3D architecture. Furthermore, the isolation and representation of the softest, slowest deformation coordinates provide opportunities for the development of mechanical models of enzyme function. The eigenvector decomposition, therefore, must be accurate and reliable as well as rapid to be generally reported upon. We obtain the eigenmodes of a 1.2 Å, 34 kDa PDB entry using either exclusively heavy atoms or partly or fully reduced atomic sets; Cartesian or internal coordinates; interatomic force fields derived either from a full Cartesian potential, a reduced atomic potential or a Gaussian distance-dependent potential; and independently developed software. These varied technologies are similar in that each maintains proper stereochemistry either by use of dihedral degrees of freedom, which freezes bond lengths and bond angles, or by use of a full atomic potential that includes realistic bond length and angle restraints. We find that the shapes of the slowest eigenvectors are nearly identical, not merely similar.
[ { "created": "Wed, 27 Jun 2018 19:41:54 GMT", "version": "v1" }, { "created": "Tue, 4 Sep 2018 19:35:09 GMT", "version": "v2" } ]
2018-12-05
[ [ "Na", "Hyuntae", "" ], [ "ben-Avraham", "Daniel", "" ], [ "Tirion", "Monique M.", "" ] ]
The Protein Data Bank (PDB) contains the atomic structures of over 10^5 biomolecules with better than 2.8A resolution. The listing of the identities and coordinates of the atoms comprising each macromolecule permits an analysis of the slow-time vibrational response of these large systems to minor perturbations. 3D video animations of individual modes of oscillation demonstrate how regions interdigitate to create cohesive collective motions, providing a comprehensive framework for and familiarity with the overall 3D architecture. Furthermore, the isolation and representation of the softest, slowest deformation coordinates provide opportunities for the development of mechanical models of enzyme function. The eigenvector decomposition, therefore, must be accurate, reliable as well as rapid to be generally reported upon. We obtain the eigenmodes of a 1.2A 34kDa PDB entry using either exclusively heavy atoms or partly or fully reduced atomic sets; Cartesian or internal coordinates; interatomic force fields derived either from a full Cartesian potential, a reduced atomic potential or a Gaussian distance-dependent potential; and independently developed software. These varied technologies are similar in that each maintains proper stereochemistry either by use of dihedral degrees of freedom which freezes bond lengths and bond angles, or by use of a full atomic potential that includes realistic bond length and angle restraints. We find that the shapes of the slowest eigenvectors are nearly identical, not merely similar.
1605.09019
Yi-Hsuan Lin
Yi-Hsuan Lin, Julie D. Forman-Kay, and Hue Sun Chan
Sequence-specific polyampholyte phase separation in membraneless organelles
5 pages, 3 figures; additional results; accepted for publication in PRL
Phys. Rev. Lett. 117, 178101 (2016)
10.1103/PhysRevLett.117.178101
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Liquid-liquid phase separation of charge/aromatic-enriched intrinsically disordered proteins (IDPs) is critical in the biological function of membraneless organelles. Much of the physics of this recent discovery remains to be elucidated. Here we present a theory in the random phase approximation to account for electrostatic effects in polyampholyte phase separations, yielding predictions consistent with recent experiments on the IDP Ddx4. The theory is applicable to any charge pattern and thus provides a general analytical framework for studying sequence dependence of IDP phase separation.
[ { "created": "Sun, 29 May 2016 16:02:03 GMT", "version": "v1" }, { "created": "Wed, 21 Sep 2016 19:30:41 GMT", "version": "v2" } ]
2016-10-18
[ [ "Lin", "Yi-Hsuan", "" ], [ "Forman-Kay", "Julie D.", "" ], [ "Chan", "Hue Sun", "" ] ]
Liquid-liquid phase separation of charge/aromatic-enriched intrinsically disordered proteins (IDPs) is critical in the biological function of membraneless organelles. Much of the physics of this recent discovery remains to be elucidated. Here we present a theory in the random phase approximation to account for electrostatic effects in polyampholyte phase separations, yielding predictions consistent with recent experiments on the IDP Ddx4. The theory is applicable to any charge pattern and thus provides a general analytical framework for studying sequence dependence of IDP phase separation.
1603.06142
Ralph Brinks
Ralph Brinks
How to assess case-finding in chronic diseases: Comparison of different indices
18 pages, 9 figures
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, we have proposed a new illness-death model that comprises a state of undiagnosed chronic disease preceding the diagnosed disease. Based on this model, the question arises how case-finding can be assessed in the presence of mortality from all these states. We simulate two scenarios of different performance of case-finding and apply several indices to assess case-finding in both scenarios. One of the prevalence based indices leads to wrong conclusions. Some indices are partly insensitive to distinguish the quality of case-finding. The incidence based indices perform well. If possible, incidence based indices should be preferred.
[ { "created": "Sat, 19 Mar 2016 20:47:11 GMT", "version": "v1" } ]
2016-03-22
[ [ "Brinks", "Ralph", "" ] ]
Recently, we have proposed a new illness-death model that comprises a state of undiagnosed chronic disease preceding the diagnosed disease. Based on this model, the question arises how case-finding can be assessed in the presence of mortality from all these states. We simulate two scenarios of different performance of case-finding and apply several indices to assess case-finding in both scenarios. One of the prevalence based indices leads to wrong conclusions. Some indices are partly insensitive to distinguish the quality of case-finding. The incidence based indices perform well. If possible, incidence based indices should be preferred.
2407.20711
Francesco Sannino
Bruno Buonomo, Alessandra D'Alise, Rossella Della Marca, Francesco Sannino
Information index augmented eRG to model vaccination behaviour: A case study of COVID-19 in the US
19 pages, 4 figures
null
null
null
q-bio.PE math.DS physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
Recent pandemics triggered the development of a number of mathematical models and computational tools apt at curbing the socio-economic impact of these and future pandemics. The need to acquire solid estimates from the data led to the introduction of effective approaches such as the \emph{epidemiological Renormalization Group} (eRG). A recognized relevant factor impacting the evolution of pandemics is the feedback stemming from individuals' choices. The latter can be taken into account via the \textit{information index} which accommodates the information--induced perception regarding the status of the disease and the memory of past spread. We, therefore, show how to augment the eRG by means of the information index. We first develop the {\it behavioural} version of the eRG and then test it against the US vaccination campaign for COVID-19. We find that the behavioural augmented eRG improves the description of the pandemic dynamics of the US divisions for which the epidemic peak occurs after the start of the vaccination campaign. Our results strengthen the relevance of taking into account the human behaviour component when modelling pandemic evolution. To inform public health policies, the model can be readily employed to investigate the socio-epidemiological dynamics, including vaccination campaigns, for other regions of the world.
[ { "created": "Tue, 30 Jul 2024 10:11:45 GMT", "version": "v1" } ]
2024-07-31
[ [ "Buonomo", "Bruno", "" ], [ "D'Alise", "Alessandra", "" ], [ "Della Marca", "Rossella", "" ], [ "Sannino", "Francesco", "" ] ]
Recent pandemics triggered the development of a number of mathematical models and computational tools apt at curbing the socio-economic impact of these and future pandemics. The need to acquire solid estimates from the data led to the introduction of effective approaches such as the \emph{epidemiological Renormalization Group} (eRG). A recognized relevant factor impacting the evolution of pandemics is the feedback stemming from individuals' choices. The latter can be taken into account via the \textit{information index} which accommodates the information--induced perception regarding the status of the disease and the memory of past spread. We, therefore, show how to augment the eRG by means of the information index. We first develop the {\it behavioural} version of the eRG and then test it against the US vaccination campaign for COVID-19. We find that the behavioural augmented eRG improves the description of the pandemic dynamics of the US divisions for which the epidemic peak occurs after the start of the vaccination campaign. Our results strengthen the relevance of taking into account the human behaviour component when modelling pandemic evolution. To inform public health policies, the model can be readily employed to investigate the socio-epidemiological dynamics, including vaccination campaigns, for other regions of the world.
1510.05631
Yun S. Song
Jeffrey P. Spence, John A. Kamm, Yun S. Song
The site frequency spectrum for general coalescents
20 pages, 4 figures
Genetics, Vol. 202 No. 4 (2016) 1549-1561
10.1534/genetics.115.184101
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
General genealogical processes such as $\Lambda$- and $\Xi$-coalescents, which respectively model multiple and simultaneous mergers, have important applications in studying marine species, strong positive selection, recurrent selective sweeps, strong bottlenecks, large sample sizes, and so on. Recently, there has been significant progress in developing useful inference tools for such general models. In particular, inference methods based on the site frequency spectrum (SFS) have received noticeable attention. Here, we derive a new formula for the expected SFS for general $\Lambda$- and $\Xi$-coalescents, which leads to an efficient algorithm. For time-homogeneous coalescents, the runtime of our algorithm for computing the expected SFS is $O(n^2)$, where $n$ is the sample size. This is a factor of $n^2$ faster than the state-of-the-art method. Furthermore, in contrast to existing methods, our method generalizes to time-inhomogeneous $\Lambda$- and $\Xi$-coalescents with measures that factorize as $\Lambda(dx)/\zeta(t)$ and $\Xi(dx)/\zeta(t)$, respectively, where $\zeta$ denotes a strictly positive function of time. The runtime of our algorithm in this setting is $O(n^3)$. We also obtain general theoretical results for the identifiability of the $\Lambda$ measure when $\zeta$ is a constant function, as well as for the identifiability of the function $\zeta$ under a fixed $\Xi$ measure.
[ { "created": "Mon, 19 Oct 2015 19:35:38 GMT", "version": "v1" }, { "created": "Thu, 11 Feb 2016 23:43:46 GMT", "version": "v2" } ]
2016-05-10
[ [ "Spence", "Jeffrey P.", "" ], [ "Kamm", "John A.", "" ], [ "Song", "Yun S.", "" ] ]
General genealogical processes such as $\Lambda$- and $\Xi$-coalescents, which respectively model multiple and simultaneous mergers, have important applications in studying marine species, strong positive selection, recurrent selective sweeps, strong bottlenecks, large sample sizes, and so on. Recently, there has been significant progress in developing useful inference tools for such general models. In particular, inference methods based on the site frequency spectrum (SFS) have received noticeable attention. Here, we derive a new formula for the expected SFS for general $\Lambda$- and $\Xi$-coalescents, which leads to an efficient algorithm. For time-homogeneous coalescents, the runtime of our algorithm for computing the expected SFS is $O(n^2)$, where $n$ is the sample size. This is a factor of $n^2$ faster than the state-of-the-art method. Furthermore, in contrast to existing methods, our method generalizes to time-inhomogeneous $\Lambda$- and $\Xi$-coalescents with measures that factorize as $\Lambda(dx)/\zeta(t)$ and $\Xi(dx)/\zeta(t)$, respectively, where $\zeta$ denotes a strictly positive function of time. The runtime of our algorithm in this setting is $O(n^3)$. We also obtain general theoretical results for the identifiability of the $\Lambda$ measure when $\zeta$ is a constant function, as well as for the identifiability of the function $\zeta$ under a fixed $\Xi$ measure.
2212.05880
Claudia Solis-Lemus
Reed Nelson, Rosa Aghdam, Claudia Solis-Lemus
MiNAA: Microbiome Network Alignment Algorithm
null
null
null
null
q-bio.MN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Our Microbiome Network Alignment Algorithm (MiNAA) aligns two microbial networks using a combination of the GRAph ALigner (GRAAL) algorithm and the Hungarian algorithm. Network alignment algorithms find pairs of nodes (one node from the first network and the other node from the second network) that have the highest similarity. Traditionally, similarity has been defined as topological similarity such that the neighborhoods around the two nodes are similar. Recent implementations of network alignment methods such as NETAL and L-GRAAL also include measures of biological similarity, yet these methods are restricted to one specific type of biological similarity (e.g. sequence similarity in L-GRAAL). Our work extends existing network alignment implementations by allowing any type of biological similarity to be input by the user. This flexibility allows the user to choose whatever measure of biological similarity is suitable for the study at hand. In addition, unlike most existing network alignment methods that are tailored for protein or gene interaction networks, our work is the first one suited for microbiome networks.
[ { "created": "Mon, 12 Dec 2022 13:37:21 GMT", "version": "v1" } ]
2022-12-13
[ [ "Nelson", "Reed", "" ], [ "Aghdam", "Rosa", "" ], [ "Solis-Lemus", "Claudia", "" ] ]
Our Microbiome Network Alignment Algorithm (MiNAA) aligns two microbial networks using a combination of the GRAph ALigner (GRAAL) algorithm and the Hungarian algorithm. Network alignment algorithms find pairs of nodes (one node from the first network and the other node from the second network) that have the highest similarity. Traditionally, similarity has been defined as topological similarity such that the neighborhoods around the two nodes are similar. Recent implementations of network alignment methods such as NETAL and L-GRAAL also include measures of biological similarity, yet these methods are restricted to one specific type of biological similarity (e.g. sequence similarity in L-GRAAL). Our work extends existing network alignment implementations by allowing any type of biological similarity to be input by the user. This flexibility allows the user to choose whatever measure of biological similarity is suitable for the study at hand. In addition, unlike most existing network alignment methods that are tailored for protein or gene interaction networks, our work is the first one suited for microbiome networks.
q-bio/0412029
Claus O. Wilke
Robert Forster (Caltech) and Claus O. Wilke (UT Austin, Caltech)
Tradeoff between short-term and long-term adaptation in a changing environment
9 pages, 3 figures, PRE in press
null
10.1103/PhysRevE.72.041922
null
q-bio.PE
null
We investigate the competition dynamics of two microbial or viral strains that live in an environment that switches periodically between two states. One of the strains is adapted to the long-term environment, but pays a short-term cost, while the other is adapted to the short-term environment and pays a cost in the long term. We explore the tradeoff between these alternative strategies in extensive numerical simulations, and present a simple analytic model that can predict the outcome of these competitions as a function of the mutation rate and the time scale of the environmental changes. Our model is relevant for arboviruses, which alternate between different host species on a regular basis.
[ { "created": "Thu, 16 Dec 2004 01:07:23 GMT", "version": "v1" }, { "created": "Sat, 3 Sep 2005 19:32:19 GMT", "version": "v2" } ]
2013-05-29
[ [ "Forster", "Robert", "", "Caltech" ], [ "Wilke", "Claus O.", "", "UT Austin, Caltech" ] ]
We investigate the competition dynamics of two microbial or viral strains that live in an environment that switches periodically between two states. One of the strains is adapted to the long-term environment, but pays a short-term cost, while the other is adapted to the short-term environment and pays a cost in the long term. We explore the tradeoff between these alternative strategies in extensive numerical simulations, and present a simple analytic model that can predict the outcome of these competitions as a function of the mutation rate and the time scale of the environmental changes. Our model is relevant for arboviruses, which alternate between different host species on a regular basis.
2007.15476
Roman Reshetnikov
Sergey P. Morozov, Roman V. Reshetnikov, Victor A. Gombolevskiy, Natalia V. Ledikhova, Ivan A. Blokhin, Vladislav G. Kljashtorny, Olesya A. Mokienko, Anton V. Vladzymyrskyy
Diagnostic Accuracy of Computed Tomography for Identifying Hospitalization in Patients with Suspected COVID-19
10 pages, 2 figures, 3 tables
null
null
null
q-bio.QM physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The controversy of computed tomography (CT) use in COVID-19 screening is associated with ambiguous characteristics of chest CT as a diagnostic test. The reported values of CT sensitivity and specificity calculated using RT-PCR as a reference standard vary widely. The objective of this study was to reevaluate the diagnostic and prognostic value of CT using an alternative approach. This study included 973 symptomatic COVID-19 patients aged 42 $\pm$ 17 years, 56% females. We reviewed the disease dynamics between the initial and follow-up CT studies using a "CT0-4" grading system. Sensitivity and specificity were calculated as conditional probabilities that a patient's condition would improve or deteriorate relative to the initial CT study results. For the calculation of negative (NPV) and positive (PPV) predictive values, we estimated the COVID-19 prevalence in Moscow. We used several ARIMA and ETS models with different parameters to fit the data on total cases of COVID-19 from March 6, 2020, to July 20, 2020, and forecast the incidence. The "CT0-4" grading scale demonstrated low sensitivity (28%) but high specificity (95%). The best statistical model for describing the pandemic in Moscow was ETS with multiplicative trend, error, and season type. According to our calculations, with the predicted prevalence of 2.1%, the values of NPV and PPV would be 98% and 10%, respectively. We associate the low sensitivity and PPV values with the small sample size of the patients with severe symptoms and non-optimal methodological setup for measuring these specific characteristics. The "CT0-4" grading scale was highly specific and predictive for identifying admissions to hospitals of COVID-19 patients. Despite the ambiguous accuracy, chest CT proved to be an effective practical tool for patient management during the pandemic, provided that the necessary infrastructure and human resources are available.
[ { "created": "Wed, 29 Jul 2020 13:02:55 GMT", "version": "v1" }, { "created": "Fri, 31 Jul 2020 10:43:48 GMT", "version": "v2" } ]
2020-08-03
[ [ "Morozov", "Sergey P.", "" ], [ "Reshetnikov", "Roman V.", "" ], [ "Gombolevskiy", "Victor A.", "" ], [ "Ledikhova", "Natalia V.", "" ], [ "Blokhin", "Ivan A.", "" ], [ "Kljashtorny", "Vladislav G.", "" ], [ "Mokienko", "Olesya A.", "" ], [ "Vladzymyrskyy", "Anton V.", "" ] ]
The controversy of computed tomography (CT) use in COVID-19 screening is associated with ambiguous characteristics of chest CT as a diagnostic test. The reported values of CT sensitivity and specificity calculated using RT-PCR as a reference standard vary widely. The objective of this study was to reevaluate the diagnostic and prognostic value of CT using an alternative approach. This study included 973 symptomatic COVID-19 patients aged 42 $\pm$ 17 years, 56% females. We reviewed the disease dynamics between the initial and follow-up CT studies using a "CT0-4" grading system. Sensitivity and specificity were calculated as conditional probabilities that a patient's condition would improve or deteriorate relative to the initial CT study results. For the calculation of negative (NPV) and positive (PPV) predictive values, we estimated the COVID-19 prevalence in Moscow. We used several ARIMA and ETS models with different parameters to fit the data on total cases of COVID-19 from March 6, 2020, to July 20, 2020, and forecast the incidence. The "CT0-4" grading scale demonstrated low sensitivity (28%) but high specificity (95%). The best statistical model for describing the pandemic in Moscow was ETS with multiplicative trend, error, and season type. According to our calculations, with the predicted prevalence of 2.1%, the values of NPV and PPV would be 98% and 10%, respectively. We associate the low sensitivity and PPV values with the small sample size of the patients with severe symptoms and non-optimal methodological setup for measuring these specific characteristics. The "CT0-4" grading scale was highly specific and predictive for identifying admissions to hospitals of COVID-19 patients. Despite the ambiguous accuracy, chest CT proved to be an effective practical tool for patient management during the pandemic, provided that the necessary infrastructure and human resources are available.
2104.01489
Daniel Yamins
Rosa Cao and Daniel Yamins
Explanatory models in neuroscience: Part 2 -- constraint-based intelligibility
null
null
null
null
q-bio.NC cs.NE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how computational models explain. In the context of neural network models for neuroscience, concerns have been raised about model intelligibility, and how they relate (if at all) to what is found in the brain. We claim that what makes a system intelligible is an understanding of the dependencies between its behavior and the factors that are causally responsible for that behavior. In biological systems, many of these dependencies are naturally "top-down": ethological imperatives interact with evolutionary and developmental constraints under natural selection. We describe how the optimization techniques used to construct NN models capture some key aspects of these dependencies, and thus help explain why brain systems are as they are -- because when a challenging ecologically-relevant goal is shared by a NN and the brain, it places tight constraints on the possible mechanisms exhibited in both kinds of systems. By combining two familiar modes of explanation -- one based on bottom-up mechanism (whose relation to neural network models we address in a companion paper) and the other on top-down constraints, these models illuminate brain function.
[ { "created": "Sat, 3 Apr 2021 22:14:01 GMT", "version": "v1" }, { "created": "Wed, 14 Apr 2021 16:59:48 GMT", "version": "v2" } ]
2021-04-15
[ [ "Cao", "Rosa", "" ], [ "Yamins", "Daniel", "" ] ]
Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how computational models explain. In the context of neural network models for neuroscience, concerns have been raised about model intelligibility, and how they relate (if at all) to what is found in the brain. We claim that what makes a system intelligible is an understanding of the dependencies between its behavior and the factors that are causally responsible for that behavior. In biological systems, many of these dependencies are naturally "top-down": ethological imperatives interact with evolutionary and developmental constraints under natural selection. We describe how the optimization techniques used to construct NN models capture some key aspects of these dependencies, and thus help explain why brain systems are as they are -- because when a challenging ecologically-relevant goal is shared by a NN and the brain, it places tight constraints on the possible mechanisms exhibited in both kinds of systems. By combining two familiar modes of explanation -- one based on bottom-up mechanism (whose relation to neural network models we address in a companion paper) and the other on top-down constraints, these models illuminate brain function.
1705.07998
Koray Ciftci
Koray \c{C}ift\c{c}i
Synaptic Noise Facilitates the Emergence of Self-Organized Criticality in the Caenorhabditis elegans Neuronal Network
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Avalanches with power-law distributed size parameters have been observed in neuronal networks. This observation might be a manifestation of self-organized criticality (SOC). Yet, the physiological mechanism of this behavior is currently unknown. Describing synaptic noise as transmission failures mainly originating from the probabilistic nature of neurotransmitter release, this study investigates the potential of this noise as a mechanism for driving the functional architecture of neuronal networks towards SOC. To this end, a simple finite state neuron model, with activity dependent and synapse specific failure probabilities, was built based on the known anatomical connectivity data of the nematode Caenorhabditis elegans. Beginning from random values, it was observed that synaptic noise levels picked out a set of synapses and consequently an active subnetwork which generates power-law distributed neuronal avalanches. The findings of this study bring up the possibility that synaptic failures might be a component of physiological processes underlying SOC in neuronal networks.
[ { "created": "Mon, 22 May 2017 20:55:43 GMT", "version": "v1" } ]
2017-05-24
[ [ "Çiftçi", "Koray", "" ] ]
Avalanches with power-law distributed size parameters have been observed in neuronal networks. This observation might be a manifestation of self-organized criticality (SOC). Yet, the physiological mechanism of this behavior is currently unknown. Describing synaptic noise as transmission failures mainly originating from the probabilistic nature of neurotransmitter release, this study investigates the potential of this noise as a mechanism for driving the functional architecture of neuronal networks towards SOC. To this end, a simple finite state neuron model, with activity dependent and synapse specific failure probabilities, was built based on the known anatomical connectivity data of the nematode Caenorhabditis elegans. Beginning from random values, it was observed that synaptic noise levels picked out a set of synapses and consequently an active subnetwork which generates power-law distributed neuronal avalanches. The findings of this study bring up the possibility that synaptic failures might be a component of physiological processes underlying SOC in neuronal networks.
2305.19544
Liu Hong
Dongyan Zhang, Wuyue Yang, Wanqi Wen, Liangrong Peng, Changjingn Zhuge, Liu Hong
A data-driven analysis on the mediation effect of compartment models between control measures and COVID-19 epidemics
21 pages, 6 figures, 1 table
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We make a retrospective review of various control measures taken by 127 countries/territories during the first wave of the COVID-19 pandemic until July 7, 2020, and evaluate their impacts on the epidemic dynamics quantitatively. The SEIR-QD model, as a representative for general compartment models, is used to fit the epidemic data, enabling the extraction of crucial model parameters and dynamical features. The mediation effect of the SEIR-QD model is revealed by using mediation analysis with structural equation modeling for multiple mediators operating in parallel. The inherent impacts of these control policies on the transmission dynamics of COVID-19 epidemics are clarified, and compared with results derived from both multiple linear regression and neural-network-based nonlinear regression. Through this data-driven analysis, the mediation effect of compartment models is confirmed, which provides a better understanding of the intrinsic correlations among the strength of control measures and the dynamical features of COVID-19 epidemics.
[ { "created": "Wed, 31 May 2023 04:18:57 GMT", "version": "v1" }, { "created": "Fri, 22 Sep 2023 08:44:50 GMT", "version": "v2" } ]
2023-09-25
[ [ "Zhang", "Dongyan", "" ], [ "Yang", "Wuyue", "" ], [ "Wen", "Wanqi", "" ], [ "Peng", "Liangrong", "" ], [ "Zhuge", "Changjingn", "" ], [ "Hong", "Liu", "" ] ]
We make a retrospective review of various control measures taken by 127 countries/territories during the first wave of the COVID-19 pandemic until July 7, 2020, and evaluate their impacts on the epidemic dynamics quantitatively. The SEIR-QD model, as a representative for general compartment models, is used to fit the epidemic data, enabling the extraction of crucial model parameters and dynamical features. The mediation effect of the SEIR-QD model is revealed by using mediation analysis with structural equation modeling for multiple mediators operating in parallel. The inherent impacts of these control policies on the transmission dynamics of COVID-19 epidemics are clarified, and compared with results derived from both multiple linear regression and neural-network-based nonlinear regression. Through this data-driven analysis, the mediation effect of compartment models is confirmed, which provides a better understanding of the intrinsic correlations among the strength of control measures and the dynamical features of COVID-19 epidemics.
2109.04903
Marcelo Nocelle De Almeida Doutor
Marcelo N. Almeida, Rodolfo Alves de Oliveira, Luiz Olmes, Gustavo S. Semaan, Daniel de Oliveira, Lucio Santos, Marcos Bedo
HELIX: Data-driven characterization of Brazilian land snails
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Decision-support systems benefit from hidden patterns extracted from digital information. In the specific domain of gastropod characterization, morphometrical measurements support biologists in the identification of land snail specimens. Although snails can be easily identified by their excretory and reproductive systems, the after-death mollusk body is commonly inaccessible because of either soft material deterioration or fossilization. This study aims at characterizing Brazilian land snails by morphometrical data features manually taken from the shells. In particular, we examined a dataset of shells by using different learning models that labeled snail specimens with a precision up to 97.5% (F1-Score = .975, CKC = .967 and ROC Area = .998). The extracted patterns describe similarities and trends among land snail species and indicate possible outlier physiologies due to climate traits and breeding. Finally, we show that some morphometrical characteristics dominate others according to different feature selection biases. Those data-based patterns can be applied to fast land snail identification whenever their bodies are unavailable, as in the recurrent cases of lost shells in nature or private and museum collections.
[ { "created": "Fri, 10 Sep 2021 14:23:52 GMT", "version": "v1" }, { "created": "Tue, 14 Sep 2021 13:38:17 GMT", "version": "v2" } ]
2021-09-15
[ [ "Almeida", "Marcelo N.", "" ], [ "de Oliveira", "Rodolfo Alves", "" ], [ "Olmes", "Luiz", "" ], [ "Semaan", "Gustavo S.", "" ], [ "de Oliveira", "Daniel", "" ], [ "Santos", "Lucio", "" ], [ "Bedo", "Marcos", "" ] ]
Decision-support systems benefit from hidden patterns extracted from digital information. In the specific domain of gastropod characterization, morphometrical measurements support biologists in the identification of land snail specimens. Although snails can be easily identified by their excretory and reproductive systems, the after-death mollusk body is commonly inaccessible because of either soft material deterioration or fossilization. This study aims at characterizing Brazilian land snails by morphometrical data features manually taken from the shells. In particular, we examined a dataset of shells by using different learning models that labeled snail specimens with a precision up to 97.5% (F1-Score = .975, CKC = .967 and ROC Area = .998). The extracted patterns describe similarities and trends among land snail species and indicate possible outlier physiologies due to climate traits and breeding. Finally, we show that some morphometrical characteristics dominate others according to different feature selection biases. Those data-based patterns can be applied to fast land snail identification whenever their bodies are unavailable, as in the recurrent cases of lost shells in nature or private and museum collections.
q-bio/0312008
Bindu Govindan
Bindu S. Govindan and William B. Spillman, Jr. (Applied Biosciences Center, Virginia Tech)
Stabilization of microtubules due to microtubule-associated proteins: A simple model
23 pages, RevTeX, 7 figures included, submitted to Phys. Rev. E
null
null
null
q-bio.SC q-bio.CB
null
A theoretical model of stabilization of a microtubule assembly due to microtubule-associated proteins (MAPs) is presented. MAPs are assumed to bind to the microtubule filaments, thus preventing their disintegration following hydrolysis and enhancing further polymerization. Using mean-field rate equations and explicit numerical simulations, we show that the density of MAP (number of MAPs per tubulin in the microtubule) has to exceed a critical value $\rho_{c}$ to stabilize the structure against de-polymerization. At lower densities $\rho < \rho_{c}$, the microtubule population consists mostly of short polymers with exponentially decaying length distribution, whereas at $\rho > \rho_{c}$ the average length increases linearly with time and the microtubules ultimately extend to the cell boundary. Using experimentally measured values of various parameters, the critical ratio of MAP to tubulin required for unlimited growth is seen to be of the order of 1:100 or even smaller.
[ { "created": "Fri, 5 Dec 2003 21:52:26 GMT", "version": "v1" } ]
2012-08-27
[ [ "Govindan", "Bindu S.", "", "Applied Biosciences\n Center, Virginia Tech" ], [ "Spillman,", "William B.", "Jr.", "Applied Biosciences\n Center, Virginia Tech" ] ]
A theoretical model of stabilization of a microtubule assembly due to microtubule-associated proteins (MAPs) is presented. MAPs are assumed to bind to the microtubule filaments, thus preventing their disintegration following hydrolysis and enhancing further polymerization. Using mean-field rate equations and explicit numerical simulations, we show that the density of MAP (number of MAPs per tubulin in the microtubule) has to exceed a critical value $\rho_{c}$ to stabilize the structure against de-polymerization. At lower densities $\rho < \rho_{c}$, the microtubule population consists mostly of short polymers with exponentially decaying length distribution, whereas at $\rho > \rho_{c}$ the average length increases linearly with time and the microtubules ultimately extend to the cell boundary. Using experimentally measured values of various parameters, the critical ratio of MAP to tubulin required for unlimited growth is seen to be of the order of 1:100 or even smaller.
1203.5932
Jost Neigenfind
Jost Neigenfind and Sergio Grimbs and Zoran Nikoloski
On the relation between reactions and complexes of (bio)chemical reaction networks
null
null
null
null
q-bio.MN math.CA math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robustness of biochemical systems has become one of the central questions in systems biology although it is notoriously difficult to formally capture its multifaceted nature. Maintenance of normal system function depends not only on the stoichiometry of the underlying interrelated components, but also on a multitude of kinetic parameters. Invariant flux ratios, obtained within flux coupling analysis, as well as invariant complex ratios, derived within chemical reaction network theory, can characterize robust properties of a system at steady state. However, the existing formalisms for the description of these invariants do not provide full characterization as they either only focus on the flux-centric or the concentration-centric view. Here we develop a novel mathematical framework which combines both views and thereby overcomes the limitations of the classical methodologies. Our unified framework will be helpful in analyzing biologically important system properties.
[ { "created": "Tue, 27 Mar 2012 11:06:05 GMT", "version": "v1" } ]
2012-03-28
[ [ "Neigenfind", "Jost", "" ], [ "Grimbs", "Sergio", "" ], [ "Nikoloski", "Zoran", "" ] ]
Robustness of biochemical systems has become one of the central questions in systems biology although it is notoriously difficult to formally capture its multifaceted nature. Maintenance of normal system function depends not only on the stoichiometry of the underlying interrelated components, but also on a multitude of kinetic parameters. Invariant flux ratios, obtained within flux coupling analysis, as well as invariant complex ratios, derived within chemical reaction network theory, can characterize robust properties of a system at steady state. However, the existing formalisms for the description of these invariants do not provide full characterization as they either only focus on the flux-centric or the concentration-centric view. Here we develop a novel mathematical framework which combines both views and thereby overcomes the limitations of the classical methodologies. Our unified framework will be helpful in analyzing biologically important system properties.
2003.08150
Indrajit Ghosh
Sk Shahid Nadim, Indrajit Ghosh, Joydev Chattopadhyay
Short-term predictions and prevention strategies for COVID-19: A model based study
NA
null
10.1016/j.amc.2021.126251
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
An outbreak of respiratory disease caused by a novel coronavirus is ongoing from December 2019. As of July 22, 2020, it has caused an epidemic outbreak with more than 15 million confirmed infections and above 6 hundred thousand reported deaths worldwide. During this period of an epidemic when human-to-human transmission is established and reported cases of coronavirus disease 2019 (COVID-19) are rising worldwide, investigation of control strategies and forecasting are necessary for health care planning. In this study, we propose and analyze a compartmental epidemic model of COVID-19 to predict and control the outbreak. The basic reproduction number and control reproduction number are calculated analytically. A detailed stability analysis of the model is performed to observe the dynamics of the system. We calibrated the proposed model to fit daily data from the United Kingdom (UK) where the situation is still alarming. Our findings suggest that independent self-sustaining human-to-human spread ($R_0>1$, $R_c>1$) is already present. Short-term predictions show that the decreasing trend of new COVID-19 cases is well captured by the model. Further, we found that effective management of quarantined individuals is more effective than management of isolated individuals to reduce the disease burden. Thus, if limited resources are available, then investing on the quarantined individuals will be more fruitful in terms of reduction of cases.
[ { "created": "Wed, 18 Mar 2020 10:59:32 GMT", "version": "v1" }, { "created": "Thu, 2 Jul 2020 17:04:16 GMT", "version": "v2" }, { "created": "Wed, 22 Jul 2020 10:29:58 GMT", "version": "v3" } ]
2021-05-21
[ [ "Nadim", "Sk Shahid", "" ], [ "Ghosh", "Indrajit", "" ], [ "Chattopadhyay", "Joydev", "" ] ]
An outbreak of respiratory disease caused by a novel coronavirus is ongoing from December 2019. As of July 22, 2020, it has caused an epidemic outbreak with more than 15 million confirmed infections and above 6 hundred thousand reported deaths worldwide. During this period of an epidemic when human-to-human transmission is established and reported cases of coronavirus disease 2019 (COVID-19) are rising worldwide, investigation of control strategies and forecasting are necessary for health care planning. In this study, we propose and analyze a compartmental epidemic model of COVID-19 to predict and control the outbreak. The basic reproduction number and control reproduction number are calculated analytically. A detailed stability analysis of the model is performed to observe the dynamics of the system. We calibrated the proposed model to fit daily data from the United Kingdom (UK) where the situation is still alarming. Our findings suggest that independent self-sustaining human-to-human spread ($R_0>1$, $R_c>1$) is already present. Short-term predictions show that the decreasing trend of new COVID-19 cases is well captured by the model. Further, we found that effective management of quarantined individuals is more effective than management of isolated individuals to reduce the disease burden. Thus, if limited resources are available, then investing on the quarantined individuals will be more fruitful in terms of reduction of cases.
1803.09018
Abraham Nunes
Abraham Nunes and Alexander Rudiuk
The Importance of Constraint Smoothness for Parameter Estimation in Computational Cognitive Modeling
null
null
null
null
q-bio.QM cs.LG q-bio.NC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Psychiatric neuroscience is increasingly aware of the need to define psychopathology in terms of abnormal neural computation. The central tool in this endeavour is the fitting of computational models to behavioural data. The most prominent example of this procedure is fitting reinforcement learning (RL) models to decision-making data collected from mentally ill and healthy subject populations. These models are generative models of the decision-making data themselves, and the parameters we seek to infer can be psychologically and neurobiologically meaningful. Currently, the gold standard approach to this inference procedure involves Monte-Carlo sampling, which is robust but computationally intensive---rendering additional procedures, such as cross-validation, impractical. Searching for point estimates of model parameters using optimization procedures remains a popular and interesting option. On a novel testbed simulating parameter estimation from a common RL task, we investigated the effects of smooth vs. boundary constraints on parameter estimation using interior point and deterministic direct search algorithms for optimization. Ultimately, we show that the use of boundary constraints can lead to substantial truncation effects. Our results discourage the use of boundary constraints for these applications.
[ { "created": "Sat, 24 Mar 2018 00:25:20 GMT", "version": "v1" } ]
2018-03-28
[ [ "Nunes", "Abraham", "" ], [ "Rudiuk", "Alexander", "" ] ]
Psychiatric neuroscience is increasingly aware of the need to define psychopathology in terms of abnormal neural computation. The central tool in this endeavour is the fitting of computational models to behavioural data. The most prominent example of this procedure is fitting reinforcement learning (RL) models to decision-making data collected from mentally ill and healthy subject populations. These models are generative models of the decision-making data themselves, and the parameters we seek to infer can be psychologically and neurobiologically meaningful. Currently, the gold standard approach to this inference procedure involves Monte-Carlo sampling, which is robust but computationally intensive---rendering additional procedures, such as cross-validation, impractical. Searching for point estimates of model parameters using optimization procedures remains a popular and interesting option. On a novel testbed simulating parameter estimation from a common RL task, we investigated the effects of smooth vs. boundary constraints on parameter estimation using interior point and deterministic direct search algorithms for optimization. Ultimately, we show that the use of boundary constraints can lead to substantial truncation effects. Our results discourage the use of boundary constraints for these applications.
0811.4149
Aleksandra Walczak
Aleksandra M. Walczak, Andrew Mugler and Chris H. Wiggins
A stochastic spectral analysis of transcriptional regulatory cascades
null
PNAS 106, 6529-6534 (2009)
10.1073/pnas.0811999106
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The past decade has seen great advances in our understanding of the role of noise in gene regulation and the physical limits to signaling in biological networks. Here we introduce the spectral method for computation of the joint probability distribution over all species in a biological network. The spectral method exploits the natural eigenfunctions of the master equation of birth-death processes to solve for the joint distribution of modules within the network, which then inform each other and facilitate calculation of the entire joint distribution. We illustrate the method on a ubiquitous case in nature: linear regulatory cascades. The efficiency of the method makes possible numerical optimization of the input and regulatory parameters, revealing design properties of, e.g., the most informative cascades. We find, for threshold regulation, that a cascade of strong regulations converts a unimodal input to a bimodal output, that multimodal inputs are no more informative than bimodal inputs, and that a chain of up-regulations outperforms a chain of down-regulations. We anticipate that this numerical approach may be useful for modeling noise in a variety of small network topologies in biology.
[ { "created": "Tue, 25 Nov 2008 18:28:23 GMT", "version": "v1" } ]
2009-09-19
[ [ "Walczak", "Aleksandra M.", "" ], [ "Mugler", "Andrew", "" ], [ "Wiggins", "Chris H.", "" ] ]
The past decade has seen great advances in our understanding of the role of noise in gene regulation and the physical limits to signaling in biological networks. Here we introduce the spectral method for computation of the joint probability distribution over all species in a biological network. The spectral method exploits the natural eigenfunctions of the master equation of birth-death processes to solve for the joint distribution of modules within the network, which then inform each other and facilitate calculation of the entire joint distribution. We illustrate the method on a ubiquitous case in nature: linear regulatory cascades. The efficiency of the method makes possible numerical optimization of the input and regulatory parameters, revealing design properties of, e.g., the most informative cascades. We find, for threshold regulation, that a cascade of strong regulations converts a unimodal input to a bimodal output, that multimodal inputs are no more informative than bimodal inputs, and that a chain of up-regulations outperforms a chain of down-regulations. We anticipate that this numerical approach may be useful for modeling noise in a variety of small network topologies in biology.
2010.14898
Francesco Gargano
F. Bagarello, F. Gargano, F. Roccati
Modeling epidemics through ladder operators
null
Chaos, Solitons & Fractals, Volume 140, November 2020, 110193
10.1016/j.chaos.2020.110193
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a simple model of spreading of some infection in an originally healthy population which is different from other models existing in the literature. In particular, we use an operator technique which allows us to describe in a natural way the possible interactions between healthy and unhealthy populations, and their transformation into recovered and dead people. After a rather general discussion, we apply our method to the analysis of Chinese data for the SARS-2003 (Severe acute respiratory syndrome; SARS-CoV-1) and the Coronavirus COVID-19 (Corona Virus Disease; SARS-CoV-2) and we show that the model works very well in reproducing the long-time behaviour of the disease, and in particular in finding the number of affected and dead people in the limit of large time. Moreover, we show how the model can be easily modified to consider some lockdown measure, and we deduce that this procedure drastically reduces the asymptotic value of infected individuals, as expected, and observed in real life.
[ { "created": "Wed, 28 Oct 2020 11:35:20 GMT", "version": "v1" } ]
2020-10-29
[ [ "Bagarello", "F.", "" ], [ "Gargano", "F.", "" ], [ "Roccati", "F.", "" ] ]
We propose a simple model of spreading of some infection in an originally healthy population which is different from other models existing in the literature. In particular, we use an operator technique which allows us to describe in a natural way the possible interactions between healthy and unhealthy populations, and their transformation into recovered and dead people. After a rather general discussion, we apply our method to the analysis of Chinese data for the SARS-2003 (Severe acute respiratory syndrome; SARS-CoV-1) and the Coronavirus COVID-19 (Corona Virus Disease; SARS-CoV-2) and we show that the model works very well in reproducing the long-time behaviour of the disease, and in particular in finding the number of affected and dead people in the limit of large time. Moreover, we show how the model can be easily modified to consider some lockdown measure, and we deduce that this procedure drastically reduces the asymptotic value of infected individuals, as expected, and observed in real life.
1907.03532
Sarwar Khan
Sarwar Khan, Faisal Ghaffar, Imad Ali, Qazi Mazhar
Classification of Macromolecule Type Based on Sequences of Amino Acids Using Deep Learning
under review
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The classification of amino acids and their sequence analysis plays a vital role in life sciences and is a challenging task. This article uses and compares state-of-the-art deep learning models like convolutional neural networks (CNN), long short-term memory (LSTM), and gated recurrent units (GRU) to solve macromolecule classification problems using amino acids. These models have efficient frameworks for solving a broad spectrum of complex learning problems compared to traditional machine learning techniques. We use word embedding to represent the amino acid sequences as vectors. The CNN extracts features from amino acid sequences, which are treated as vectors, then fed to the models mentioned above to train a robust classifier. Our results show that word2vec as embedding combined with VGG-16 performs better than LSTM and GRU. The proposed approach gets an error rate of 1.5%.
[ { "created": "Mon, 1 Jul 2019 03:49:01 GMT", "version": "v1" }, { "created": "Thu, 21 Jul 2022 09:57:31 GMT", "version": "v2" }, { "created": "Sat, 23 Jul 2022 05:34:24 GMT", "version": "v3" } ]
2022-07-26
[ [ "Khan", "Sarwar", "" ], [ "Ghaffar", "Faisal", "" ], [ "Ali", "Imad", "" ], [ "Mazhar", "Qazi", "" ] ]
The classification of amino acids and their sequence analysis plays a vital role in life sciences and is a challenging task. This article uses and compares state-of-the-art deep learning models like convolutional neural networks (CNN), long short-term memory (LSTM), and gated recurrent units (GRU) to solve macromolecule classification problems using amino acids. These models have efficient frameworks for solving a broad spectrum of complex learning problems compared to traditional machine learning techniques. We use word embedding to represent the amino acid sequences as vectors. The CNN extracts features from amino acid sequences, which are treated as vectors, then fed to the models mentioned above to train a robust classifier. Our results show that word2vec as embedding combined with VGG-16 performs better than LSTM and GRU. The proposed approach gets an error rate of 1.5%.
1511.04786
Mark Flegg
Mark B. Flegg
Smoluchowski reaction kinetics for reactions of any order
null
null
null
null
q-bio.QM cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In 1917, Marian von Smoluchowski presented a simple mathematical description of diffusion-controlled reactions on the scale of individual molecules. His model postulated that a reaction would occur when two reactants were sufficiently close and, more specifically, presented a succinct relationship between the relative proximity of two reactants at the moment of reaction and the macroscopic reaction rate. Over the last century, Smoluchowski reaction theory has been applied widely in the physical, chemical, environmental and, more recently, the biological sciences. Despite the widespread utility of the Smoluchowski theory, it only describes the rates of second order reactions and is inadequate for the description of higher order reactions for which there is no equivalent method for theoretical investigation. In this paper, we derive a generalised Smoluchowski framework in which we define what should be meant by proximity in this context when more than two reactants are involved. We derive the relationship between the macroscopic reaction rate and the critical proximity at which a reaction occurs for higher order reactions. Using this theoretical framework and numerical experiments, we explore various peculiar properties of multimolecular diffusion-controlled reactions which, due to there being no other numerical method of this nature, have not been previously reported.
[ { "created": "Mon, 16 Nov 2015 00:11:21 GMT", "version": "v1" } ]
2015-11-17
[ [ "Flegg", "Mark B.", "" ] ]
In 1917, Marian von Smoluchowski presented a simple mathematical description of diffusion-controlled reactions on the scale of individual molecules. His model postulated that a reaction would occur when two reactants were sufficiently close and, more specifically, presented a succinct relationship between the relative proximity of two reactants at the moment of reaction and the macroscopic reaction rate. Over the last century, Smoluchowski reaction theory has been applied widely in the physical, chemical, environmental and, more recently, the biological sciences. Despite the widespread utility of the Smoluchowski theory, it only describes the rates of second order reactions and is inadequate for the description of higher order reactions for which there is no equivalent method for theoretical investigation. In this paper, we derive a generalised Smoluchowski framework in which we define what should be meant by proximity in this context when more than two reactants are involved. We derive the relationship between the macroscopic reaction rate and the critical proximity at which a reaction occurs for higher order reactions. Using this theoretical framework and numerical experiments, we explore various peculiar properties of multimolecular diffusion-controlled reactions which, due to there being no other numerical method of this nature, have not been previously reported.
1609.00620
Marc D Ryser
Marc D. Ryser and Kevin A. Murgas
Bone Remodeling as a Spatial Evolutionary Game
29 pages, 6 figures, 1 table
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bone remodeling is a complex process involving cell-cell interactions, biochemical signaling and mechanical stimuli. Early models of the biological aspects of remodeling were non-spatial and focused on the local dynamics at a fixed location in the bone. Several spatial extensions of these models have been proposed, but they generally suffer from two limitations: first, they are not amenable to analysis and are computationally expensive, and second, they neglect the role played by bone-embedded osteocytes. To address these issues, we developed a novel model of spatial remodeling based on the principles of evolutionary game theory. The analytically tractable framework describes the spatial interactions between zones of bone resorption, bone formation and quiescent bone, and explicitly accounts for regulation of remodeling by bone-embedded, mechanotransducing osteocytes. Using tools from the theory of interacting particle systems we systematically classified the different dynamic regimes of the spatial model and identified regions of parameter space that allow for coexistence of resorption, formation and quiescence, as observed in physiological remodeling. In coexistence scenarios, three-dimensional simulations revealed the emergence of sponge-like bone clusters. Comparison between spatial and non-spatial dynamics revealed substantial differences and suggested a stabilizing role of space. Our findings emphasize the importance of accounting for spatial structure and bone-embedded osteocytes when modeling the process of bone remodeling. Thanks to the lattice-based framework, the proposed model can easily be coupled to a mechanical model of bone loading.
[ { "created": "Fri, 2 Sep 2016 14:35:51 GMT", "version": "v1" } ]
2016-09-05
[ [ "Ryser", "Marc D.", "" ], [ "Murgas", "Kevin A.", "" ] ]
Bone remodeling is a complex process involving cell-cell interactions, biochemical signaling and mechanical stimuli. Early models of the biological aspects of remodeling were non-spatial and focused on the local dynamics at a fixed location in the bone. Several spatial extensions of these models have been proposed, but they generally suffer from two limitations: first, they are not amenable to analysis and are computationally expensive, and second, they neglect the role played by bone-embedded osteocytes. To address these issues, we developed a novel model of spatial remodeling based on the principles of evolutionary game theory. The analytically tractable framework describes the spatial interactions between zones of bone resorption, bone formation and quiescent bone, and explicitly accounts for regulation of remodeling by bone-embedded, mechanotransducing osteocytes. Using tools from the theory of interacting particle systems we systematically classified the different dynamic regimes of the spatial model and identified regions of parameter space that allow for coexistence of resorption, formation and quiescence, as observed in physiological remodeling. In coexistence scenarios, three-dimensional simulations revealed the emergence of sponge-like bone clusters. Comparison between spatial and non-spatial dynamics revealed substantial differences and suggested a stabilizing role of space. Our findings emphasize the importance of accounting for spatial structure and bone-embedded osteocytes when modeling the process of bone remodeling. Thanks to the lattice-based framework, the proposed model can easily be coupled to a mechanical model of bone loading.
2107.05569
Emma Leschiera
Emma Leschiera, Tommaso Lorenzi, Shensi Shen, Luis Almeida and Chloe Audebert
A mathematical model to study the impact of intra-tumour heterogeneity on anti-tumour CD8+ T cell immune response
null
null
null
null
q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Intra-tumour heterogeneity (ITH) has a strong impact on the efficacy of the immune response against solid tumours. The number of sub-populations of cancer cells expressing different antigens and the percentage of immunogenic cells (i.e. tumour cells that are effectively targeted by immune cells) in a tumour are both expressions of ITH. Here, we present a spatially explicit stochastic individual-based model of the interaction dynamics between tumour cells and CD8+ T cells, which makes it possible to dissect out the specific impact of these two expressions of ITH on anti-tumour immune response. The set-up of numerical simulations of the model is defined so as to mimic scenarios considered in previous experimental studies. Moreover, the ability of the model to qualitatively reproduce experimental observations of successful and unsuccessful immune surveillance is demonstrated. First, the results of numerical simulations of this model indicate that the presence of a larger number of sub-populations of tumour cells that express different antigens is associated with a reduced ability of CD8+ T cells to mount an effective anti-tumour immune response. Second, the presence of a larger percentage of tumour cells that are not effectively targeted by CD8+ T cells may reduce the effectiveness of anti-tumour immunity. Ultimately, the mathematical model presented in this paper may provide a framework to help biologists and clinicians to better understand the mechanisms that are responsible for the emergence of different outcomes of immunotherapy.
[ { "created": "Mon, 12 Jul 2021 16:51:43 GMT", "version": "v1" }, { "created": "Tue, 7 Dec 2021 09:13:23 GMT", "version": "v2" } ]
2021-12-08
[ [ "Leschiera", "Emma", "" ], [ "Lorenzi", "Tommaso", "" ], [ "Shen", "Shensi", "" ], [ "Almeida", "Luis", "" ], [ "Audebert", "Chloe", "" ] ]
Intra-tumour heterogeneity (ITH) has a strong impact on the efficacy of the immune response against solid tumours. The number of sub-populations of cancer cells expressing different antigens and the percentage of immunogenic cells (i.e. tumour cells that are effectively targeted by immune cells) in a tumour are both expressions of ITH. Here, we present a spatially explicit stochastic individual-based model of the interaction dynamics between tumour cells and CD8+ T cells, which makes it possible to dissect out the specific impact of these two expressions of ITH on anti-tumour immune response. The set-up of numerical simulations of the model is defined so as to mimic scenarios considered in previous experimental studies. Moreover, the ability of the model to qualitatively reproduce experimental observations of successful and unsuccessful immune surveillance is demonstrated. First, the results of numerical simulations of this model indicate that the presence of a larger number of sub-populations of tumour cells that express different antigens is associated with a reduced ability of CD8+ T cells to mount an effective anti-tumour immune response. Second, the presence of a larger percentage of tumour cells that are not effectively targeted by CD8+ T cells may reduce the effectiveness of anti-tumour immunity. Ultimately, the mathematical model presented in this paper may provide a framework to help biologists and clinicians to better understand the mechanisms that are responsible for the emergence of different outcomes of immunotherapy.
2203.17255
Jared Reser
Jared Edward Reser
A Cognitive Architecture for Machine Consciousness and Artificial Superintelligence: Thought Is Structured by the Iterative Updating of Working Memory
null
null
null
null
q-bio.NC cs.CL cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
This article provides an analytical framework for how to simulate human-like thought processes within a computer. It describes how attention and memory should be structured, updated, and utilized to search for associative additions to the stream of thought. The focus is on replicating the dynamics of the mammalian working memory system, which features two forms of persistent activity: sustained firing (preserving information on the order of seconds) and synaptic potentiation (preserving information from minutes to hours). The article uses a series of over 40 original figures to systematically demonstrate how the iterative updating of these working memory stores provides functional structure to behavior, cognition, and consciousness. In an AI implementation, these two memory stores should be updated continuously and in an iterative fashion, meaning each state should preserve a proportion of the coactive representations from the state before it. Thus, the set of concepts in working memory will evolve gradually and incrementally over time. This makes each state a revised iteration of the preceding state and causes successive states to overlap and blend with respect to the information they contain. Transitions between states happen as persistent activity spreads activation energy throughout the hierarchical network searching long-term memory for the most appropriate representation to be added to the global workspace. The result is a chain of associatively linked intermediate states capable of advancing toward a solution or goal. Iterative updating is conceptualized here as an information processing strategy, a model of working memory, a theory of consciousness, and an algorithm for designing and programming artificial general intelligence.
[ { "created": "Tue, 29 Mar 2022 22:28:30 GMT", "version": "v1" }, { "created": "Wed, 6 Jul 2022 01:10:56 GMT", "version": "v2" }, { "created": "Tue, 21 Feb 2023 20:46:38 GMT", "version": "v3" }, { "created": "Fri, 10 Nov 2023 05:44:26 GMT", "version": "v4" }, { "created": "Fri, 8 Dec 2023 03:09:37 GMT", "version": "v5" }, { "created": "Thu, 14 Dec 2023 04:11:47 GMT", "version": "v6" } ]
2023-12-15
[ [ "Reser", "Jared Edward", "" ] ]
This article provides an analytical framework for how to simulate human-like thought processes within a computer. It describes how attention and memory should be structured, updated, and utilized to search for associative additions to the stream of thought. The focus is on replicating the dynamics of the mammalian working memory system, which features two forms of persistent activity: sustained firing (preserving information on the order of seconds) and synaptic potentiation (preserving information from minutes to hours). The article uses a series of over 40 original figures to systematically demonstrate how the iterative updating of these working memory stores provides functional structure to behavior, cognition, and consciousness. In an AI implementation, these two memory stores should be updated continuously and in an iterative fashion, meaning each state should preserve a proportion of the coactive representations from the state before it. Thus, the set of concepts in working memory will evolve gradually and incrementally over time. This makes each state a revised iteration of the preceding state and causes successive states to overlap and blend with respect to the information they contain. Transitions between states happen as persistent activity spreads activation energy throughout the hierarchical network searching long-term memory for the most appropriate representation to be added to the global workspace. The result is a chain of associatively linked intermediate states capable of advancing toward a solution or goal. Iterative updating is conceptualized here as an information processing strategy, a model of working memory, a theory of consciousness, and an algorithm for designing and programming artificial general intelligence.
1106.1612
Stephen Ellner
Stephen P. Ellner and Sebastian J. Schreiber
Temporally variable dispersal and demography can accelerate the spread of invading species
Final version accepted for publication in Theoretical Population Biology, special issue "Developments in structured models: construction, analysis and inference"
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze how temporal variability in local demography and dispersal combine to affect the rate of spread of an invading species. Our model combines state-structured local demography (specified by an integral or matrix projection model) with general dispersal distributions that may depend on the state of the individual or its parent, and it allows very general patterns of stationary temporal variation in both local demography and in the frequency and distribution of dispersal distances. We show that expressions for the asymptotic spread rate and its sensitivity to parameters that have been derived previously for less general models continue to hold. Using these results, we show that random temporal variability in dispersal can accelerate population spread. Demographic variability can further accelerate spread if it is positively correlated with dispersal variability, for example if high-fecundity years are also years in which juveniles tend to settle further away from their parents. A simple model for the growth and spread of patches of an invasive plant (perennial pepperweed, Lepidium latifolium) illustrates these effects and shows that they can have substantial impacts on the predicted speed of an invasion wave. Temporal variability in dispersal has received very little attention in both the theoretical and empirical literatures on invasive species spread. Our results suggest that this needs to change.
[ { "created": "Wed, 8 Jun 2011 18:08:34 GMT", "version": "v1" }, { "created": "Fri, 6 Apr 2012 16:27:02 GMT", "version": "v2" } ]
2012-04-09
[ [ "Ellner", "Stephen P.", "" ], [ "Schreiber", "Sebastian J.", "" ] ]
We analyze how temporal variability in local demography and dispersal combine to affect the rate of spread of an invading species. Our model combines state-structured local demography (specified by an integral or matrix projection model) with general dispersal distributions that may depend on the state of the individual or its parent, and it allows very general patterns of stationary temporal variation in both local demography and in the frequency and distribution of dispersal distances. We show that expressions for the asymptotic spread rate and its sensitivity to parameters that have been derived previously for less general models continue to hold. Using these results, we show that random temporal variability in dispersal can accelerate population spread. Demographic variability can further accelerate spread if it is positively correlated with dispersal variability, for example if high-fecundity years are also years in which juveniles tend to settle further away from their parents. A simple model for the growth and spread of patches of an invasive plant (perennial pepperweed, Lepidium latifolium) illustrates these effects and shows that they can have substantial impacts on the predicted speed of an invasion wave. Temporal variability in dispersal has received very little attention in both the theoretical and empirical literatures on invasive species spread. Our results suggest that this needs to change.
1203.0872
Jing Kang Dr.
Jing Kang, Hugh P. C. Robinson, Jianfeng Feng
Diversity of Intrinsic Frequency Encoding Patterns in Rat Cortical Neurons - Mechanisms and Possible Functions
11 pages, 7 figures; PloS One 2010
null
10.1371/journal.pone.0009608
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extracellular recordings of single neurons in primary and secondary somatosensory cortices of monkeys in vivo have shown that their firing rate can increase, decrease, or remain constant in different cells, as the external stimulus frequency increases. We observed similar intrinsic firing patterns (increasing, decreasing or constant) in rat somatosensory cortex in vitro, when stimulated with oscillatory input using conductance injection (dynamic clamp). The underlying mechanism of this observation is not obvious, and presents a challenge for mathematical modelling. We propose a simple principle for describing this phenomenon using a leaky integrate-and-fire model with sinusoidal input, an intrinsic oscillation and Poisson noise. Additional enhancement of the gain of encoding could be achieved by local network connections amongst diverse intrinsic response patterns. Our work sheds light on the possible cellular and network mechanisms underlying these opposing neuronal responses, which serve to enhance signal detection.
[ { "created": "Mon, 5 Mar 2012 11:45:39 GMT", "version": "v1" } ]
2012-03-06
[ [ "Kang", "Jing", "" ], [ "Robinson", "Hugh P. C.", "" ], [ "Feng", "Jianfeng", "" ] ]
Extracellular recordings of single neurons in primary and secondary somatosensory cortices of monkeys in vivo have shown that their firing rate can increase, decrease, or remain constant in different cells, as the external stimulus frequency increases. We observed similar intrinsic firing patterns (increasing, decreasing or constant) in rat somatosensory cortex in vitro, when stimulated with oscillatory input using conductance injection (dynamic clamp). The underlying mechanism of this observation is not obvious, and presents a challenge for mathematical modelling. We propose a simple principle for describing this phenomenon using a leaky integrate-and-fire model with sinusoidal input, an intrinsic oscillation and Poisson noise. Additional enhancement of the gain of encoding could be achieved by local network connections amongst diverse intrinsic response patterns. Our work sheds light on the possible cellular and network mechanisms underlying these opposing neuronal responses, which serve to enhance signal detection.
2004.09230
Alexander Nesterov-Mueller Dr.
A. Nesterov-Mueller and R. Popov
On the origin of the standard genetic code as a fusion of prebiotic single-base-pair codes
6 pages
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The genesis of the standard genetic code is considered as a result of a fusion of two AU- and GC-codes distributed in two dominant and two recessive domains. The fusion of these codes is described with simple empirical rules. This formal approach explains the number of the proteinogenic amino acids and the codon assignment in the resulting standard genetic code. It shows how norleucine, pyrrolysine, selenocysteine and two other unknown amino acids, included into the prebiotic codes, disappeared after the fusion. The properties of these two missing amino acids were described. The ambiguous translation observed in mitochondria is explained. The internal structure of the codes allows more detailed insight into molecular evolution in prebiotic time. In particular, the structure of the oldest single base-pair code is presented. The fusion concept reveals the appearance of the DNA machinery on the level of the single dominant AU-code. The time before the appearance of the standard genetic code is divided into four epochs: pre-DNA, 2-code, pre-fusion, and after-fusion epochs. The prebiotic single-base-pair codes may help design novel peptide-based catalysts.
[ { "created": "Mon, 20 Apr 2020 12:15:39 GMT", "version": "v1" } ]
2020-04-21
[ [ "Nesterov-Mueller", "A.", "" ], [ "Popov", "R.", "" ] ]
The genesis of the standard genetic code is considered as a result of a fusion of two AU- and GC-codes distributed in two dominant and two recessive domains. The fusion of these codes is described with simple empirical rules. This formal approach explains the number of the proteinogenic amino acids and the codon assignment in the resulting standard genetic code. It shows how norleucine, pyrrolysine, selenocysteine and two other unknown amino acids, included into the prebiotic codes, disappeared after the fusion. The properties of these two missing amino acids were described. The ambiguous translation observed in mitochondria is explained. The internal structure of the codes allows more detailed insight into molecular evolution in prebiotic time. In particular, the structure of the oldest single base-pair code is presented. The fusion concept reveals the appearance of the DNA machinery on the level of the single dominant AU-code. The time before the appearance of the standard genetic code is divided into four epochs: pre-DNA, 2-code, pre-fusion, and after-fusion epochs. The prebiotic single-base-pair codes may help design novel peptide-based catalysts.
1106.5061
Sharon Aviran
Sharon Aviran, Julius B. Lucks, and Lior Pachter
RNA structure characterization from chemical mapping experiments
8 pages, 3 figures
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite great interest in solving RNA secondary structures due to their impact on function, it remains an open problem to determine structure from sequence. Among experimental approaches, a promising candidate is the "chemical modification strategy", which involves application of chemicals to RNA that are sensitive to structure and that result in modifications that can be assayed via sequencing technologies. One approach that can reveal paired nucleotides via chemical modification followed by sequencing is SHAPE, and it has been used in conjunction with capillary electrophoresis (SHAPE-CE) and high-throughput sequencing (SHAPE-Seq). The solution of mathematical inverse problems is needed to relate the sequence data to the modified sites, and a number of approaches have been previously suggested for SHAPE-CE, and separately for SHAPE-Seq analysis. Here we introduce a new model for inference of chemical modification experiments, whose formulation results in closed-form maximum likelihood estimates that can be easily applied to data. The model can be specialized to both SHAPE-CE and SHAPE-Seq, and therefore allows for a direct comparison of the two technologies. We then show that the extra information obtained with SHAPE-Seq but not with SHAPE-CE is valuable with respect to ML estimation.
[ { "created": "Fri, 24 Jun 2011 20:29:06 GMT", "version": "v1" }, { "created": "Wed, 29 Jun 2011 16:49:19 GMT", "version": "v2" } ]
2011-06-30
[ [ "Aviran", "Sharon", "" ], [ "Lucks", "Julius B.", "" ], [ "Pachter", "Lior", "" ] ]
Despite great interest in solving RNA secondary structures due to their impact on function, it remains an open problem to determine structure from sequence. Among experimental approaches, a promising candidate is the "chemical modification strategy", which involves application of chemicals to RNA that are sensitive to structure and that result in modifications that can be assayed via sequencing technologies. One approach that can reveal paired nucleotides via chemical modification followed by sequencing is SHAPE, and it has been used in conjunction with capillary electrophoresis (SHAPE-CE) and high-throughput sequencing (SHAPE-Seq). The solution of mathematical inverse problems is needed to relate the sequence data to the modified sites, and a number of approaches have been previously suggested for SHAPE-CE, and separately for SHAPE-Seq analysis. Here we introduce a new model for inference of chemical modification experiments, whose formulation results in closed-form maximum likelihood estimates that can be easily applied to data. The model can be specialized to both SHAPE-CE and SHAPE-Seq, and therefore allows for a direct comparison of the two technologies. We then show that the extra information obtained with SHAPE-Seq but not with SHAPE-CE is valuable with respect to ML estimation.
1301.2610
Marcus Kinsella
Shay Zakov, Marcus Kinsella, Vineet Bafna
Detecting Breakage Fusion Bridge cycles in tumor genomes -- an algorithmic approach
null
null
10.1073/pnas.1220977110
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Breakage-Fusion-Bridge (BFB) is a mechanism of genomic instability characterized by the joining and subsequent tearing apart of sister chromatids. When this process is repeated during multiple rounds of cell division, it leads to patterns of copy number increases of chromosomal segments as well as fold-back inversions where duplicated segments are arranged head-to-head. These structural variations can then drive tumorigenesis. BFB can be observed in progress using cytogenetic techniques, but generally BFB must be inferred from data like microarrays or sequencing collected after BFB has ceased. Making correct inferences from this data is not straightforward, particularly given the complexity of some cancer genomes and BFB's ability to generate a wide range of rearrangement patterns. Here we present algorithms to aid the interpretation of evidence for BFB. We first pose the BFB count vector problem: given a chromosome segmentation and segment copy numbers, decide whether BFB can yield a chromosome with the given segment counts. We present the first linear-time algorithm for the problem, improving a previous exponential-time algorithm. We then combine this algorithm with fold-back inversions to develop tests for BFB. We show that, contingent on assumptions about cancer genome evolution, count vectors and fold-back inversions are sufficient evidence for detecting BFB. We apply the presented techniques to paired-end sequencing data from pancreatic tumors and confirm a previous finding of BFB as well as identify a new chromosomal region likely rearranged by BFB cycles, demonstrating the practicality of our approach.
[ { "created": "Fri, 11 Jan 2013 21:28:33 GMT", "version": "v1" } ]
2013-03-20
[ [ "Zakov", "Shay", "" ], [ "Kinsella", "Marcus", "" ], [ "Bafna", "Vineet", "" ] ]
Breakage-Fusion-Bridge (BFB) is a mechanism of genomic instability characterized by the joining and subsequent tearing apart of sister chromatids. When this process is repeated during multiple rounds of cell division, it leads to patterns of copy number increases of chromosomal segments as well as fold-back inversions where duplicated segments are arranged head-to-head. These structural variations can then drive tumorigenesis. BFB can be observed in progress using cytogenetic techniques, but generally BFB must be inferred from data like microarrays or sequencing collected after BFB has ceased. Making correct inferences from this data is not straightforward, particularly given the complexity of some cancer genomes and BFB's ability to generate a wide range of rearrangement patterns. Here we present algorithms to aid the interpretation of evidence for BFB. We first pose the BFB count vector problem: given a chromosome segmentation and segment copy numbers, decide whether BFB can yield a chromosome with the given segment counts. We present the first linear-time algorithm for the problem, improving a previous exponential-time algorithm. We then combine this algorithm with fold-back inversions to develop tests for BFB. We show that, contingent on assumptions about cancer genome evolution, count vectors and fold-back inversions are sufficient evidence for detecting BFB. We apply the presented techniques to paired-end sequencing data from pancreatic tumors and confirm a previous finding of BFB as well as identify a new chromosomal region likely rearranged by BFB cycles, demonstrating the practicality of our approach.
1212.0413
Rafael Najmanovich
Jean-Pierre Sehi Glouzon, Fran\c{c}ois Bolduc, Rafael Najmanovich, Shengrui Wang, Jean-Pierre Perreault
Deep-sequencing of the Peach Latent Mosaic Viroid Reveals New Aspects of Population Heterogeneity
Manuscript submitted to PLoS ONE October 3rd, 2012. Supplementary data can be found at: http://dx.doi.org/10.6084/m9.figshare.100858
PLoS ONE 9(1): e87297
10.1371/journal.pone.0087297
null
q-bio.PE q-bio.BM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Viroids are small circular single-stranded infectious RNAs that are characterized by a relatively high mutation level. Knowledge of their sequence heterogeneity remains largely elusive, and, as yet, no strategy attempting to address this question from a population dynamics point of view is in place. In order to address these important questions, a GF305 indicator peach tree was infected with a single variant of the Avsunviroidae family member Peach latent mosaic viroid (PLMVd). Six months post-inoculation, full-length circular conformers of PLMVd were isolated, deep-sequenced and the resulting sequences analyzed using an original bioinformatics scheme specifically designed and developed in order to evaluate the richness of a given sequence population. Two distinct libraries were analyzed, and yielded 1125 and 1061 different PLMVd variants, respectively, making this study the most productive to date (by more than an order of magnitude) in terms of the reporting of novel viroid sequences. Sequence variants exhibiting up to ~20% of mutations relative to the inoculated viroid were retrieved, clearly illustrating the high divergence dynamic inside a unique population. Using a novel hierarchical clustering algorithm, the different variants obtained were grouped into either 7 or 8 clusters depending on the library being analyzed. Most of the sequences contained, on average, between 4.6 and 6.3 mutations relative to the variant used initially to inoculate the plant. Interestingly, it was possible to reconstitute the sequence evolution between these clusters. On top of providing a reliable pipeline for the treatment of viroid deep-sequencing, this study sheds new light on the importance of the sequence variation that may take place in a viroid population and which may result in the formation of a quasi-species.
[ { "created": "Mon, 3 Dec 2012 15:17:25 GMT", "version": "v1" } ]
2014-02-04
[ [ "Glouzon", "Jean-Pierre Sehi", "" ], [ "Bolduc", "François", "" ], [ "Najmanovich", "Rafael", "" ], [ "Wang", "Shengrui", "" ], [ "Perreault", "Jean-Pierre", "" ] ]
Viroids are small circular single-stranded infectious RNAs that are characterized by a relatively high mutation level. Knowledge of their sequence heterogeneity remains largely elusive, and, as yet, no strategy attempting to address this question from a population dynamics point of view is in place. In order to address these important questions, a GF305 indicator peach tree was infected with a single variant of the Avsunviroidae family member Peach latent mosaic viroid (PLMVd). Six months post-inoculation, full-length circular conformers of PLMVd were isolated, deep-sequenced and the resulting sequences analyzed using an original bioinformatics scheme specifically designed and developed in order to evaluate the richness of a given sequence population. Two distinct libraries were analyzed, and yielded 1125 and 1061 different PLMVd variants, respectively, making this study the most productive to date (by more than an order of magnitude) in terms of the reporting of novel viroid sequences. Sequence variants exhibiting up to ~20% of mutations relative to the inoculated viroid were retrieved, clearly illustrating the high divergence dynamic inside a unique population. Using a novel hierarchical clustering algorithm, the different variants obtained were grouped into either 7 or 8 clusters depending on the library being analyzed. Most of the sequences contained, on average, between 4.6 and 6.3 mutations relative to the variant used initially to inoculate the plant. Interestingly, it was possible to reconstitute the sequence evolution between these clusters. On top of providing a reliable pipeline for the treatment of viroid deep-sequencing, this study sheds new light on the importance of the sequence variation that may take place in a viroid population and which may result in the formation of a quasi-species.
2403.03230
Xiaoliang Luo
Xiaoliang Luo, Akilles Rechardt, Guangzhi Sun, Kevin K. Nejad, Felipe Y\'a\~nez, Bati Yilmaz, Kangjoo Lee, Alexandra O. Cohen, Valentina Borghesani, Anton Pashkov, Daniele Marinazzo, Jonathan Nicholas, Alessandro Salatiello, Ilia Sucholutsky, Pasquale Minervini, Sepehr Razavi, Roberta Rocca, Elkhan Yusifov, Tereza Okalova, Nianlong Gu, Martin Ferianc, Mikail Khona, Kaustubh R. Patil, Pui-Shee Lee, Rui Mata, Nicholas E. Myers, Jennifer K Bizley, Sebastian Musslick, Isil Poyraz Bilgin, Guiomar Niso, Justin M. Ales, Michael Gaebler, N Apurva Ratan Murty, Leyla Loued-Khenissi, Anna Behler, Chloe M. Hall, Jessica Dafflon, Sherry Dongqi Bao, Bradley C. Love
Large language models surpass human experts in predicting neuroscience results
null
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by/4.0/
Scientific discoveries often hinge on synthesizing decades of research, a task that potentially outstrips human information processing capacities. Large language models (LLMs) offer a solution. LLMs trained on the vast scientific literature could potentially integrate noisy yet interrelated findings to forecast novel results better than human experts. To evaluate this possibility, we created BrainBench, a forward-looking benchmark for predicting neuroscience results. We find that LLMs surpass experts in predicting experimental outcomes. BrainGPT, an LLM we tuned on the neuroscience literature, performed better yet. Like human experts, when LLMs were confident in their predictions, they were more likely to be correct, which presages a future where humans and LLMs team together to make discoveries. Our approach is not neuroscience-specific and is transferable to other knowledge-intensive endeavors.
[ { "created": "Mon, 4 Mar 2024 15:27:59 GMT", "version": "v1" }, { "created": "Thu, 14 Mar 2024 23:32:15 GMT", "version": "v2" }, { "created": "Fri, 21 Jun 2024 17:35:46 GMT", "version": "v3" } ]
2024-06-24
[ [ "Luo", "Xiaoliang", "" ], [ "Rechardt", "Akilles", "" ], [ "Sun", "Guangzhi", "" ], [ "Nejad", "Kevin K.", "" ], [ "Yáñez", "Felipe", "" ], [ "Yilmaz", "Bati", "" ], [ "Lee", "Kangjoo", "" ], [ "Cohen", "Alexandra O.", "" ], [ "Borghesani", "Valentina", "" ], [ "Pashkov", "Anton", "" ], [ "Marinazzo", "Daniele", "" ], [ "Nicholas", "Jonathan", "" ], [ "Salatiello", "Alessandro", "" ], [ "Sucholutsky", "Ilia", "" ], [ "Minervini", "Pasquale", "" ], [ "Razavi", "Sepehr", "" ], [ "Rocca", "Roberta", "" ], [ "Yusifov", "Elkhan", "" ], [ "Okalova", "Tereza", "" ], [ "Gu", "Nianlong", "" ], [ "Ferianc", "Martin", "" ], [ "Khona", "Mikail", "" ], [ "Patil", "Kaustubh R.", "" ], [ "Lee", "Pui-Shee", "" ], [ "Mata", "Rui", "" ], [ "Myers", "Nicholas E.", "" ], [ "Bizley", "Jennifer K", "" ], [ "Musslick", "Sebastian", "" ], [ "Bilgin", "Isil Poyraz", "" ], [ "Niso", "Guiomar", "" ], [ "Ales", "Justin M.", "" ], [ "Gaebler", "Michael", "" ], [ "Murty", "N Apurva Ratan", "" ], [ "Loued-Khenissi", "Leyla", "" ], [ "Behler", "Anna", "" ], [ "Hall", "Chloe M.", "" ], [ "Dafflon", "Jessica", "" ], [ "Bao", "Sherry Dongqi", "" ], [ "Love", "Bradley C.", "" ] ]
Scientific discoveries often hinge on synthesizing decades of research, a task that potentially outstrips human information processing capacities. Large language models (LLMs) offer a solution. LLMs trained on the vast scientific literature could potentially integrate noisy yet interrelated findings to forecast novel results better than human experts. To evaluate this possibility, we created BrainBench, a forward-looking benchmark for predicting neuroscience results. We find that LLMs surpass experts in predicting experimental outcomes. BrainGPT, an LLM we tuned on the neuroscience literature, performed better yet. Like human experts, when LLMs were confident in their predictions, they were more likely to be correct, which presages a future where humans and LLMs team together to make discoveries. Our approach is not neuroscience-specific and is transferable to other knowledge-intensive endeavors.
1305.5319
Bin He
Bin Z. He, Michael Z. Ludwig, Desiree A. Dickerson, Levi Barse, Bharath Arun, Soo Young Park, Natalia A. Tamarina, Scott B. Selleck, Patricia Wittkopp, Graeme I. Bell and Martin Kreitman
Effect of Genetic Variation in a Drosophila Model of Diabetes-Associated Misfolded Human Proinsulin
null
Genetics 196 (2014) 557-567
10.1534/genetics.113.157800
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The identification and validation of gene-gene interactions is a major challenge in human studies. Here, we explore an approach for studying epistasis in humans using a Drosophila melanogaster model of neonatal diabetes mellitus. Expression of mutant preproinsulin, hINSC96Y, in the eye imaginal disc mimics the human disease, activating conserved cell stress response pathways leading to cell death and reduction in eye area. Dominant-acting variants in wild-derived inbred lines from the Drosophila Genetics Reference Panel produce a continuous, highly heritable, distribution of eye degeneration phenotypes. A genome-wide association study (GWAS) in 154 sequenced lines identified 29 candidate SNPs in 16 loci with P < 10^-5, including one SNP in an intron of the gene sulfateless (sfl) which exceeded a conservative genome-wide significance threshold at the P = 0.05 level (-log10 P > 7.62). RNAi knock-downs of sfl enhanced the eye degeneration phenotype in a mutant-hINS-dependent manner. sfl encodes a protein required for sulfation of the glycosaminoglycan, heparan sulfate. Two additional genes in the heparan sulfate (HS) biosynthetic pathway (tout velu, ttv and brother of tout velu, botv) also modified the eye phenotype, suggesting a link between HS-modified proteins and cellular responses to misfolded proteins. Finally, intronic variants marking the QTL were associated with decreased sfl expression, a result consistent with that predicted by RNAi studies. The ability to create a model of human genetic disease in the fly, map a QTL by GWAS to a specific gene (and noncoding variant), validate its contribution to disease with available genetic resources, and experimentally link the variant to a molecular mechanism demonstrates the many advantages Drosophila holds in determining the genetic underpinnings of human disease.
[ { "created": "Thu, 23 May 2013 05:54:09 GMT", "version": "v1" }, { "created": "Mon, 27 May 2013 15:18:33 GMT", "version": "v2" } ]
2014-04-01
[ [ "He", "Bin Z.", "" ], [ "Ludwig", "Michael Z.", "" ], [ "Dickerson", "Desiree A.", "" ], [ "Barse", "Levi", "" ], [ "Arun", "Bharath", "" ], [ "Park", "Soo Young", "" ], [ "Tamarina", "Natalia A.", "" ], [ "Selleck", "Scott B.", "" ], [ "Wittkopp", "Patricia", "" ], [ "Bell", "Graeme I.", "" ], [ "Kreitman", "Martin", "" ] ]
The identification and validation of gene-gene interactions is a major challenge in human studies. Here, we explore an approach for studying epistasis in humans using a Drosophila melanogaster model of neonatal diabetes mellitus. Expression of mutant preproinsulin, hINSC96Y, in the eye imaginal disc mimics the human disease, activating conserved cell stress response pathways leading to cell death and reduction in eye area. Dominant-acting variants in wild-derived inbred lines from the Drosophila Genetics Reference Panel produce a continuous, highly heritable, distribution of eye degeneration phenotypes. A genome-wide association study (GWAS) in 154 sequenced lines identified 29 candidate SNPs in 16 loci with P < 10^-5, including one SNP in an intron of the gene sulfateless (sfl) which exceeded a conservative genome-wide significance threshold at the P = 0.05 level (-log10 P > 7.62). RNAi knock-downs of sfl enhanced the eye degeneration phenotype in a mutant-hINS-dependent manner. sfl encodes a protein required for sulfation of the glycosaminoglycan, heparan sulfate. Two additional genes in the heparan sulfate (HS) biosynthetic pathway (tout velu, ttv and brother of tout velu, botv) also modified the eye phenotype, suggesting a link between HS-modified proteins and cellular responses to misfolded proteins. Finally, intronic variants marking the QTL were associated with decreased sfl expression, a result consistent with that predicted by RNAi studies. The ability to create a model of human genetic disease in the fly, map a QTL by GWAS to a specific gene (and noncoding variant), validate its contribution to disease with available genetic resources, and experimentally link the variant to a molecular mechanism demonstrates the many advantages Drosophila holds in determining the genetic underpinnings of human disease.
2303.13984
Alfred Achieng
Alfred O. Achieng, George B. Arhonditsis, Nicholas E. Mandrack, Catherine M. Febria, Bernard Opaa, Tracey J. Coffey, Ken Irvine, Frank O. Masese, Zeph M. Ajode, James E. Barasa, Kevin Obiero and Boaz Kaunda-Arara
Monitoring biodiversity loss in rapidly changing Afrotropical ecosystems: An emerging imperative for governance and research
12 pages, 2 figures and 1 Table and a supplementary table. Accepted for publication in a special issue with Philosophical Transactions of the Royal Society B 2023
null
10.1098/rstb.2022.0271
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-sa/4.0/
Africa is experiencing extensive biodiversity loss due to rapid changes in the environment, where natural resources constitute the main instrument for socioeconomic development and a mainstay source of livelihoods for an increasing population. Lack of data and information deficiency on biodiversity, but also budget constraints and insufficient financial and technical capacity, impede sound policy design and effective implementation of conservation and management measures. The problem is further exacerbated by the lack of harmonized indicators and databases to assess conservation needs and monitor biodiversity losses. We review challenges with biodiversity data (availability, quality, usability, and database access) as a key limiting factor that impact funding and governance. We also evaluate the drivers of both ecosystems change and biodiversity loss as a central piece of knowledge to develop and implement effective policies. While the continent focuses more on the latter, we argue that the two are complementary in shaping restoration and management solutions. We thus underscore the importance of establishing monitoring programs focusing on biodiversity-ecosystem linkages in order to inform evidence-based decisions in ecosystem conservation and restoration in Africa.
[ { "created": "Fri, 24 Mar 2023 13:11:00 GMT", "version": "v1" } ]
2023-06-01
[ [ "Achieng", "Alfred O.", "" ], [ "Arhonditsis", "George B.", "" ], [ "Mandrack", "Nicholas E.", "" ], [ "Febria", "Catherine M.", "" ], [ "Opaa", "Bernard", "" ], [ "Coffey", "Tracey J.", "" ], [ "Irvine", "Ken", "" ], [ "Masese", "Frank O.", "" ], [ "Ajode", "Zeph M.", "" ], [ "Barasa", "James E.", "" ], [ "Obiero", "Kevin", "" ], [ "Kaunda-Arara", "Boaz", "" ] ]
Africa is experiencing extensive biodiversity loss due to rapid changes in the environment, where natural resources constitute the main instrument for socioeconomic development and a mainstay source of livelihoods for an increasing population. Lack of data and information on biodiversity, together with budget constraints and insufficient financial and technical capacity, impedes sound policy design and effective implementation of conservation and management measures. The problem is further exacerbated by the lack of harmonized indicators and databases to assess conservation needs and monitor biodiversity losses. We review challenges with biodiversity data (availability, quality, usability, and database access) as a key limiting factor that impacts funding and governance. We also evaluate the drivers of both ecosystem change and biodiversity loss as a central piece of knowledge to develop and implement effective policies. While the continent focuses more on the latter, we argue that the two are complementary in shaping restoration and management solutions. We thus underscore the importance of establishing monitoring programs focusing on biodiversity-ecosystem linkages in order to inform evidence-based decisions in ecosystem conservation and restoration in Africa.
1303.7186
Thouis Jones
Verena Kaynig, Amelio Vazquez-Reina, Seymour Knowles-Barley, Mike Roberts, Thouis R. Jones, Narayanan Kasthuri, Eric Miller, Jeff Lichtman, Hanspeter Pfister
Large-Scale Automatic Reconstruction of Neuronal Processes from Electron Microscopy Images
null
null
null
null
q-bio.NC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automated sample preparation and electron microscopy enable acquisition of very large image data sets. These technical advances are of special importance to the field of neuroanatomy, as 3D reconstructions of neuronal processes at the nm scale can provide new insight into the fine-grained structure of the brain. Segmentation of large-scale electron microscopy data is the main bottleneck in the analysis of these data sets. In this paper we present a pipeline that provides state-of-the-art reconstruction performance while scaling to data sets in the GB-TB range. First, we train a random forest classifier on interactive sparse user annotations. The classifier output is combined with an anisotropic smoothing prior in a Conditional Random Field framework to generate multiple segmentation hypotheses per image. These segmentations are then combined into geometrically consistent 3D objects by segmentation fusion. We provide qualitative and quantitative evaluation of the automatic segmentation and demonstrate large-scale 3D reconstructions of neuronal processes from a $\mathbf{27,000}$ $\mathbf{\mu m^3}$ volume of brain tissue over a cube of $\mathbf{30 \; \mu m}$ in each dimension corresponding to 1000 consecutive image sections. We also introduce Mojo, a proofreading tool including semi-automated correction of merge errors based on sparse user scribbles.
[ { "created": "Thu, 28 Mar 2013 17:20:20 GMT", "version": "v1" } ]
2013-03-29
[ [ "Kaynig", "Verena", "" ], [ "Vazquez-Reina", "Amelio", "" ], [ "Knowles-Barley", "Seymour", "" ], [ "Roberts", "Mike", "" ], [ "Jones", "Thouis R.", "" ], [ "Kasthuri", "Narayanan", "" ], [ "Miller", "Eric", "" ], [ "Lichtman", "Jeff", "" ], [ "Pfister", "Hanspeter", "" ] ]
Automated sample preparation and electron microscopy enable acquisition of very large image data sets. These technical advances are of special importance to the field of neuroanatomy, as 3D reconstructions of neuronal processes at the nm scale can provide new insight into the fine-grained structure of the brain. Segmentation of large-scale electron microscopy data is the main bottleneck in the analysis of these data sets. In this paper we present a pipeline that provides state-of-the-art reconstruction performance while scaling to data sets in the GB-TB range. First, we train a random forest classifier on interactive sparse user annotations. The classifier output is combined with an anisotropic smoothing prior in a Conditional Random Field framework to generate multiple segmentation hypotheses per image. These segmentations are then combined into geometrically consistent 3D objects by segmentation fusion. We provide qualitative and quantitative evaluation of the automatic segmentation and demonstrate large-scale 3D reconstructions of neuronal processes from a $\mathbf{27,000}$ $\mathbf{\mu m^3}$ volume of brain tissue over a cube of $\mathbf{30 \; \mu m}$ in each dimension corresponding to 1000 consecutive image sections. We also introduce Mojo, a proofreading tool including semi-automated correction of merge errors based on sparse user scribbles.
1407.7392
Paolo Moretti
Paula Villa Mart\'in, Paolo Moretti, Miguel A. Mu\~noz
Rounding of abrupt phase transitions in brain networks
10 pages
null
10.1088/1742-5468/2015/01/P01003
null
q-bio.NC cond-mat.dis-nn nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The observation of critical-like behavior in cortical networks represents a major step forward in elucidating how the brain manages information. Understanding the origin and functionality of critical-like dynamics, as well as their robustness, is a major challenge in contemporary neuroscience. Here, we present an extensive numerical study of a family of simple dynamic models, which describe activity propagation in brain networks through the integration of different neighboring spiking potentials, mimicking basic neural interactions. The requirement of signal integration may lead to discontinuous phase transitions in networks that are well described by the mean field approximation, thus preventing the emergence of critical points in such systems. Here we show that criticality in the brain is instead robust, as a consequence of the hierarchical organization of the higher layers of cortical networks, which signals a departure from the mean-field paradigm. We show that, in finite-dimensional hierarchical networks, discontinuous phase transitions exhibit a rounding phenomenon and turn continuous for values of the topological dimension $D\le 2$, due to the presence of structural or topological disorder. Our results may prove significant in explaining the observation of traits of critical behavior in large-scale measurements of brain activity.
[ { "created": "Mon, 28 Jul 2014 12:22:31 GMT", "version": "v1" } ]
2015-06-22
[ [ "Martín", "Paula Villa", "" ], [ "Moretti", "Paolo", "" ], [ "Muñoz", "Miguel A.", "" ] ]
The observation of critical-like behavior in cortical networks represents a major step forward in elucidating how the brain manages information. Understanding the origin and functionality of critical-like dynamics, as well as their robustness, is a major challenge in contemporary neuroscience. Here, we present an extensive numerical study of a family of simple dynamic models, which describe activity propagation in brain networks through the integration of different neighboring spiking potentials, mimicking basic neural interactions. The requirement of signal integration may lead to discontinuous phase transitions in networks that are well described by the mean field approximation, thus preventing the emergence of critical points in such systems. Here we show that criticality in the brain is instead robust, as a consequence of the hierarchical organization of the higher layers of cortical networks, which signals a departure from the mean-field paradigm. We show that, in finite-dimensional hierarchical networks, discontinuous phase transitions exhibit a rounding phenomenon and turn continuous for values of the topological dimension $D\le 2$, due to the presence of structural or topological disorder. Our results may prove significant in explaining the observation of traits of critical behavior in large-scale measurements of brain activity.
2201.03551
Hyun Mo Yang
Hyun Mo Yang, Ariana Campos Yang and Silvia Martorano Raimundo
A model-based assessment of the cost-benefit balance and the plea bargain in criminality -- A qualitative case study of the Covid-19 epidemic shedding light on the "car wash operation" in Brazil
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
We developed a simple mathematical model to describe criminality and the justice system composed of the police investigation and court trial. The model assessed two features of organized crime -- the cost-benefit analysis done by the crime-susceptible to commit a crime and the whistleblowing of the law offenders. The model was formulated considering the mass action law commonly used in disease propagation modeling, which can shed light on the model's analysis. The crime-susceptible individuals weigh two opposing forces -- committing crime, influenced by the law offenders neither caught by police nor imprisoned by the court (the benefit of enjoying the corruption income), and refraining from crime, influenced by those caught by police or condemned by a court (the cost of incarceration). Moreover, we assessed the dilemma for those captured by police investigation of whether to participate in the rewarding whistleblowing program. The model was applied to analyze the "car wash operation" against corruption in Brazil. The model analysis showed that the cost-benefit analysis of crime-susceptible individuals of whether the act of bribery is worthwhile determined the basic crime reproduction number (threshold); however, rewarding whistleblowing policies improved the fight against corruption, giving rise to a sub-threshold. Some mechanisms adopted to control the Covid-19 pandemic shed light on understanding the "car wash operation" and the threats to the fight against corruption. Appropriate coverage of corruption by media, enhancement of laws against white-collar crimes, well-functioning police investigation and court trial, and rewarding whistleblowing policies inhibited and decreased corruption.
[ { "created": "Sun, 9 Jan 2022 18:50:16 GMT", "version": "v1" }, { "created": "Sat, 22 Jan 2022 22:48:07 GMT", "version": "v2" } ]
2022-01-25
[ [ "Yang", "Hyun Mo", "" ], [ "Yang", "Ariana Campos", "" ], [ "Raimundo", "Silvia Martorano", "" ] ]
We developed a simple mathematical model to describe criminality and the justice system composed of the police investigation and court trial. The model assessed two features of organized crime -- the cost-benefit analysis done by the crime-susceptible to commit a crime and the whistleblowing of the law offenders. The model was formulated considering the mass action law commonly used in disease propagation modeling, which can shed light on the model's analysis. The crime-susceptible individuals weigh two opposing forces -- committing crime, influenced by the law offenders neither caught by police nor imprisoned by the court (the benefit of enjoying the corruption income), and refraining from crime, influenced by those caught by police or condemned by a court (the cost of incarceration). Moreover, we assessed the dilemma for those captured by police investigation of whether to participate in the rewarding whistleblowing program. The model was applied to analyze the "car wash operation" against corruption in Brazil. The model analysis showed that the cost-benefit analysis of crime-susceptible individuals of whether the act of bribery is worthwhile determined the basic crime reproduction number (threshold); however, rewarding whistleblowing policies improved the fight against corruption, giving rise to a sub-threshold. Some mechanisms adopted to control the Covid-19 pandemic shed light on understanding the "car wash operation" and the threats to the fight against corruption. Appropriate coverage of corruption by media, enhancement of laws against white-collar crimes, well-functioning police investigation and court trial, and rewarding whistleblowing policies inhibited and decreased corruption.
2108.05186
Quynh Anh Le
Quynh-Anh Le, Rahena Akhter, Kimberly M. Coulton, Ngoc T.N Vo, Le T.Y Duong, Hoang V. Nong, Albert Yaacoub, George Condous, Joerg Eberhard and Ralph Nanan
Periodontitis and preeclampsia in pregnancy: A systematic review and meta-analysis
58 pages, 13 figures
null
null
null
q-bio.OT q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objectives: A conflicting body of evidence suggests that localized periodontal inflammation spreads systemically during pregnancy, inducing adverse pregnancy outcomes. This systematic review and meta-analysis aimed to specifically evaluate the relationship between periodontitis and preeclampsia. Methods: Electronic searches were carried out in Medline, Pubmed, and the Cochrane Controlled Clinical Trial Register to identify and select observational case-control and cohort studies that analyzed the association between periodontal disease and preeclampsia. PRISMA guidelines and the MOOSE checklist were followed. Results: Thirty studies, including six cohort and twenty-four case-control studies, were selected. Periodontitis was significantly associated with increased risk for preeclampsia, especially in a subgroup analysis including cohort studies and a subgroup analysis with lower-middle-income countries. Conclusion: Periodontitis appears as a significant risk factor for preeclampsia, which might be even more pronounced in lower-middle-income countries.
[ { "created": "Mon, 9 Aug 2021 23:48:41 GMT", "version": "v1" } ]
2021-08-12
[ [ "Le", "Quynh-Anh", "" ], [ "Akhter", "Rahena", "" ], [ "Coulton", "Kimberly M.", "" ], [ "Vo", "Ngoc T. N", "" ], [ "Duong", "Le T. Y", "" ], [ "Nong", "Hoang V.", "" ], [ "Yaacoub", "Albert", "" ], [ "Condous", "George", "" ], [ "Eberhard", "Joerg", "" ], [ "Nanan", "Ralph", "" ] ]
Objectives: A conflicting body of evidence suggests that localized periodontal inflammation spreads systemically during pregnancy, inducing adverse pregnancy outcomes. This systematic review and meta-analysis aimed to specifically evaluate the relationship between periodontitis and preeclampsia. Methods: Electronic searches were carried out in Medline, Pubmed, and the Cochrane Controlled Clinical Trial Register to identify and select observational case-control and cohort studies that analyzed the association between periodontal disease and preeclampsia. PRISMA guidelines and the MOOSE checklist were followed. Results: Thirty studies, including six cohort and twenty-four case-control studies, were selected. Periodontitis was significantly associated with increased risk for preeclampsia, especially in a subgroup analysis including cohort studies and a subgroup analysis with lower-middle-income countries. Conclusion: Periodontitis appears as a significant risk factor for preeclampsia, which might be even more pronounced in lower-middle-income countries.
1007.0210
Sergei Gepshtein
Sergei Gepshtein and Ivan Tyukin
Uncertainty of visual measurement and efficient allocation of sensory resources
8 pages
null
null
null
q-bio.NC cs.CV cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We review the reasoning underlying two approaches to the combination of sensory uncertainties. The first approach is noncommittal, making no assumptions about properties of uncertainty or parameters of stimulation. Then we explain the relationship between this approach and the one commonly used in modeling "higher level" aspects of sensory systems, such as in visual cue integration, where assumptions are made about properties of stimulation. The two approaches follow similar logic, except in one case maximal uncertainty is minimized, and in the other minimal certainty is maximized. Then we demonstrate how optimal solutions are found to the problem of resource allocation under uncertainty.
[ { "created": "Thu, 1 Jul 2010 16:37:34 GMT", "version": "v1" }, { "created": "Sat, 3 May 2014 00:57:19 GMT", "version": "v2" } ]
2014-05-06
[ [ "Gepshtein", "Sergei", "" ], [ "Tyukin", "Ivan", "" ] ]
We review the reasoning underlying two approaches to the combination of sensory uncertainties. The first approach is noncommittal, making no assumptions about properties of uncertainty or parameters of stimulation. Then we explain the relationship between this approach and the one commonly used in modeling "higher level" aspects of sensory systems, such as in visual cue integration, where assumptions are made about properties of stimulation. The two approaches follow similar logic, except in one case maximal uncertainty is minimized, and in the other minimal certainty is maximized. Then we demonstrate how optimal solutions are found to the problem of resource allocation under uncertainty.
1407.8234
Jean Carlson
Elizabeth N. Davison, Kimberly J. Schlesinger, Danielle S. Bassett, Mary-Ellen Lynall, Michael B. Miller, Scott T. Grafton, Jean M. Carlson
Brain Network Adaptability Across Task States
22 pages, 9 figures, 1 table
null
10.1371/journal.pcbi.1004029
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Activity in the human brain moves between diverse functional states to meet the demands of our dynamic environment, but fundamental principles guiding these transitions remain poorly understood. Here, we capitalize on recent advances in network science to analyze patterns of functional interactions between brain regions. We use dynamic network representations to probe the landscape of brain reconfigurations that accompany task performance both within and between four cognitive states: a task-free resting state, an attention-demanding state, and two memory-demanding states. Using the formalism of hypergraphs, we identify the presence of groups of functional interactions that fluctuate coherently in strength over time both within (task-specific) and across (task-general) brain states. In contrast to prior emphases on the complexity of many dyadic (region-to-region) relationships, these results demonstrate that brain adaptability can be described by common processes that drive the dynamic integration of cognitive systems. Moreover, our results establish the hypergraph as an effective measure for understanding functional brain dynamics, which may also prove useful in examining cross-task, cross-age, and cross-cohort functional change.
[ { "created": "Wed, 30 Jul 2014 22:51:41 GMT", "version": "v1" } ]
2015-06-22
[ [ "Davison", "Elizabeth N.", "" ], [ "Schlesinger", "Kimberly J.", "" ], [ "Bassett", "Danielle S.", "" ], [ "Lynall", "Mary-Ellen", "" ], [ "Miller", "Michael B.", "" ], [ "Grafton", "Scott T.", "" ], [ "Carlson", "Jean M.", "" ] ]
Activity in the human brain moves between diverse functional states to meet the demands of our dynamic environment, but fundamental principles guiding these transitions remain poorly understood. Here, we capitalize on recent advances in network science to analyze patterns of functional interactions between brain regions. We use dynamic network representations to probe the landscape of brain reconfigurations that accompany task performance both within and between four cognitive states: a task-free resting state, an attention-demanding state, and two memory-demanding states. Using the formalism of hypergraphs, we identify the presence of groups of functional interactions that fluctuate coherently in strength over time both within (task-specific) and across (task-general) brain states. In contrast to prior emphases on the complexity of many dyadic (region-to-region) relationships, these results demonstrate that brain adaptability can be described by common processes that drive the dynamic integration of cognitive systems. Moreover, our results establish the hypergraph as an effective measure for understanding functional brain dynamics, which may also prove useful in examining cross-task, cross-age, and cross-cohort functional change.
1609.05131
Claire Yilin Lin
Mariel Bedell, Claire Yilin Lin, Emmie Roman-Melendez, Ioannis Sgouralis
Global Sensitivity Analysis in a Mathematical Model of the Renal Interstitium
null
null
null
null
q-bio.TO q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The pressure in the renal interstitium is an important factor for normal kidney function. Here we develop a computational model of the rat kidney and use it to investigate the relationship between arterial blood pressure and interstitial fluid pressure. In addition, we investigate how tissue flexibility influences this relationship. Due to the complexity of the model, the large number of parameters, and the inherent uncertainty of the experimental data, we utilize Monte Carlo sampling to study the model's behavior under a wide range of parameter values and to compute first- and total-order sensitivity indices. Characteristically, at elevated arterial blood pressure, the model predicts cases with increased or reduced interstitial pressure. The transition between the two cases is controlled mostly by the compliance of the blood vessels located before the afferent arterioles.
[ { "created": "Sat, 13 Aug 2016 16:52:58 GMT", "version": "v1" } ]
2016-09-19
[ [ "Bedell", "Mariel", "" ], [ "Lin", "Claire Yilin", "" ], [ "Roman-Melendez", "Emmie", "" ], [ "Sgouralis", "Ioannis", "" ] ]
The pressure in the renal interstitium is an important factor for normal kidney function. Here we develop a computational model of the rat kidney and use it to investigate the relationship between arterial blood pressure and interstitial fluid pressure. In addition, we investigate how tissue flexibility influences this relationship. Due to the complexity of the model, the large number of parameters, and the inherent uncertainty of the experimental data, we utilize Monte Carlo sampling to study the model's behavior under a wide range of parameter values and to compute first- and total-order sensitivity indices. Characteristically, at elevated arterial blood pressure, the model predicts cases with increased or reduced interstitial pressure. The transition between the two cases is controlled mostly by the compliance of the blood vessels located before the afferent arterioles.
2111.10890
Josef Tkadlec
Josef Tkadlec, Kamran Kaveh, Krishnendu Chatterjee, Martin A. Nowak
Natural selection of mutants that modify population structure
20 pages, 11 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Evolution occurs in populations of reproducing individuals. It is well known that population structure can affect evolutionary dynamics. Traditionally, natural selection is studied between mutants that differ in reproductive rate, but are subject to the same population structure. Here we study how natural selection acts on mutants that have the same reproductive rate, but experience different population structures. In our framework, mutation alters population structure, which is given by a graph that specifies the dispersal of offspring. Reproduction can be either genetic or cultural. Competing mutants disperse their offspring on different graphs. A more connected graph implies higher motility. We show that enhanced motility tends to increase an invader's fixation probability, but there are interesting exceptions. For island models, we show that the magnitude of the effect depends crucially on the exact layout of the additional links. Finally, we show that for low-dimensional lattices, the effect of altered motility is comparable to that of altered fitness: in the limit of large population size, the invader's fixation probability is either constant or exponentially small, depending on whether it is more or less motile than the resident.
[ { "created": "Sun, 21 Nov 2021 20:11:22 GMT", "version": "v1" } ]
2021-11-23
[ [ "Tkadlec", "Josef", "" ], [ "Kaveh", "Kamran", "" ], [ "Chatterjee", "Krishnendu", "" ], [ "Nowak", "Martin A.", "" ] ]
Evolution occurs in populations of reproducing individuals. It is well known that population structure can affect evolutionary dynamics. Traditionally, natural selection is studied between mutants that differ in reproductive rate, but are subject to the same population structure. Here we study how natural selection acts on mutants that have the same reproductive rate, but experience different population structures. In our framework, mutation alters population structure, which is given by a graph that specifies the dispersal of offspring. Reproduction can be either genetic or cultural. Competing mutants disperse their offspring on different graphs. A more connected graph implies higher motility. We show that enhanced motility tends to increase an invader's fixation probability, but there are interesting exceptions. For island models, we show that the magnitude of the effect depends crucially on the exact layout of the additional links. Finally, we show that for low-dimensional lattices, the effect of altered motility is comparable to that of altered fitness: in the limit of large population size, the invader's fixation probability is either constant or exponentially small, depending on whether it is more or less motile than the resident.
q-bio/0510055
Ovidiu Lipan
Sever Achimescu and Ovidiu Lipan
Signal Propagation in Nonlinear Stochastic Gene Regulatory Networks
45 pages, 14 figures, Excerpts from this manuscript were presented at the 3rd International Conference on Pathways, Networks, and Systems: Theory and Experiments, October 2-7, Rhodes Greece 2005. A reduced version was submitted on May 24th 2005 to IEE Systems Biology. High quality figures at http://www.cbgm.mcg.edu/signal.pdf
null
null
null
q-bio.MN q-bio.QM
null
The structure of a stochastic nonlinear gene regulatory network is uncovered by studying its response to input signal generators. Four applications are studied in detail: a nonlinear connection of two linear systems, the design of a logic pulse, a molecular amplifier, and the interference of three signal generators in the E2F1 regulatory element. The gene interactions are presented using molecular diagrams that have a precise mathematical structure and retain the biological meaning of the processes.
[ { "created": "Sat, 29 Oct 2005 19:12:41 GMT", "version": "v1" } ]
2007-05-23
[ [ "Achimescu", "Sever", "" ], [ "Lipan", "Ovidiu", "" ] ]
The structure of a stochastic nonlinear gene regulatory network is uncovered by studying its response to input signal generators. Four applications are studied in detail: a nonlinear connection of two linear systems, the design of a logic pulse, a molecular amplifier, and the interference of three signal generators in the E2F1 regulatory element. The gene interactions are presented using molecular diagrams that have a precise mathematical structure and retain the biological meaning of the processes.
q-bio/0602005
Ulrich H.E. Hansmann
Simon Trebst, Matthias Troyer and Ulrich H.E. Hansmann
Optimized parallel tempering simulations of proteins
22 pages, 7 figures
J. Chem. Phys. 124, 174903 (2006).
10.1063/1.2186639
MTU-PHY-06-HA/03
q-bio.QM cond-mat.stat-mech physics.comp-ph
null
We apply a recently developed adaptive algorithm that systematically improves the efficiency of parallel tempering or replica exchange methods in the numerical simulation of small proteins. Feedback iterations allow us to identify an optimal set of temperatures/replicas which are found to concentrate at the bottlenecks of the simulations. A measure of convergence for the equilibration of the parallel tempering algorithm is discussed. We test our algorithm by simulating the 36-residue villin headpiece sub-domain HP-36, where we find a lowest-energy configuration with a root-mean-square deviation of less than 4 Angstroem to the experimentally determined structure.
[ { "created": "Mon, 6 Feb 2006 10:54:13 GMT", "version": "v1" } ]
2007-05-23
[ [ "Trebst", "Simon", "" ], [ "Troyer", "Matthias", "" ], [ "Hansmann", "Ulrich H. E.", "" ] ]
We apply a recently developed adaptive algorithm that systematically improves the efficiency of parallel tempering or replica exchange methods in the numerical simulation of small proteins. Feedback iterations allow us to identify an optimal set of temperatures/replicas which are found to concentrate at the bottlenecks of the simulations. A measure of convergence for the equilibration of the parallel tempering algorithm is discussed. We test our algorithm by simulating the 36-residue villin headpiece sub-domain HP-36, where we find a lowest-energy configuration with a root-mean-square deviation of less than 4 Angstroem to the experimentally determined structure.
1607.02110
Eugene Serebryany
Eugene Serebryany, Jaie C. Woodard, Bharat V. Adkar, Mohammed Shabab, Jonathan A. King, and Eugene I. Shakhnovich
An internal disulfide locks a misfolded aggregation-prone intermediate in cataract-linked mutants of human {\gamma}D-crystallin
*equal contribution; {\dag}corresponding authors
null
10.1016/j.bpj.2016.11.923
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Considerable mechanistic insight has been gained into amyloid aggregation; however, a large class of non-amyloid protein aggregates are considered 'amorphous,' and in most cases little is known about their mechanisms. Amorphous aggregation of {\gamma}-crystallins in the eye lens causes a widespread disease of aging, cataract. We combined simulations and experiments to study the mechanism of aggregation of two {\gamma}D-crystallin mutants, W42R and W42Q - the former a congenital cataract mutation, and the latter a mimic of age-related oxidative damage. We found that formation of an internal disulfide was necessary and sufficient for aggregation under physiological conditions. Two-chain all-atom simulations predicted that one non-native disulfide in particular, between Cys32 and Cys41, was likely to stabilize an unfolding intermediate prone to intermolecular interactions. Mass spectrometry and mutagenesis experiments confirmed the presence of this bond in the aggregates and its necessity for oxidative aggregation under physiological conditions in vitro. Mining the simulation data linked formation of this disulfide to extrusion of the N-terminal {\beta}-hairpin and rearrangement of the native {\beta}-sheet topology. Specific binding between the extruded hairpin and a distal {\beta}-sheet, in an intermolecular chain reaction similar to domain swapping, is the most probable mechanism of aggregate propagation.
[ { "created": "Thu, 7 Jul 2016 18:19:52 GMT", "version": "v1" } ]
2017-04-05
[ [ "Serebryany", "Eugene", "" ], [ "Woodard", "Jaie C.", "" ], [ "Adkar", "Bharat V.", "" ], [ "Shabab", "Mohammed", "" ], [ "King", "Jonathan A.", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
Considerable mechanistic insight has been gained into amyloid aggregation; however, a large class of non-amyloid protein aggregates are considered 'amorphous,' and in most cases little is known about their mechanisms. Amorphous aggregation of {\gamma}-crystallins in the eye lens causes a widespread disease of aging, cataract. We combined simulations and experiments to study the mechanism of aggregation of two {\gamma}D-crystallin mutants, W42R and W42Q - the former a congenital cataract mutation, and the latter a mimic of age-related oxidative damage. We found that formation of an internal disulfide was necessary and sufficient for aggregation under physiological conditions. Two-chain all-atom simulations predicted that one non-native disulfide in particular, between Cys32 and Cys41, was likely to stabilize an unfolding intermediate prone to intermolecular interactions. Mass spectrometry and mutagenesis experiments confirmed the presence of this bond in the aggregates and its necessity for oxidative aggregation under physiological conditions in vitro. Mining the simulation data linked formation of this disulfide to extrusion of the N-terminal {\beta}-hairpin and rearrangement of the native {\beta}-sheet topology. Specific binding between the extruded hairpin and a distal {\beta}-sheet, in an intermolecular chain reaction similar to domain swapping, is the most probable mechanism of aggregate propagation.
2008.08226
Robert Strauss
Robert Strauss
Augmenting Neural Differential Equations to Model Unknown Dynamical Systems with Incomplete State Information
null
null
null
null
q-bio.NC cs.LG physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural Ordinary Differential Equations replace the right-hand side of a conventional ODE with a neural net, which, by virtue of the universal approximation theorem, can be trained to represent any function. When we do not know the function itself but have state trajectories (time evolution) of the ODE system, we can still train the neural net to learn a representation of the underlying but unknown ODE. However, if the state of the system is incompletely known, then the right-hand side of the ODE cannot be calculated, and the derivatives needed to propagate the system are unavailable. We show that a specially augmented Neural ODE can learn the system when given incomplete state information. As a worked example we apply neural ODEs to the Lotka-Volterra problem with three species: rabbits, wolves, and bears. We show that even when the data for the bear time series is removed, the remaining time series of the rabbits and wolves are sufficient to learn the dynamical system despite the incomplete state information. This is surprising since a conventional ODE system cannot output the correct derivatives without the full state as input. We implement augmented neural ODEs and differential equation solvers in the Julia programming language.
[ { "created": "Wed, 19 Aug 2020 02:21:13 GMT", "version": "v1" }, { "created": "Thu, 20 Aug 2020 00:11:16 GMT", "version": "v2" }, { "created": "Sat, 22 Aug 2020 02:59:26 GMT", "version": "v3" } ]
2020-08-25
[ [ "Strauss", "Robert", "" ] ]
Neural Ordinary Differential Equations replace the right-hand side of a conventional ODE with a neural net, which, by virtue of the universal approximation theorem, can be trained to represent any function. When we do not know the function itself but have state trajectories (time evolution) of the ODE system, we can still train the neural net to learn a representation of the underlying but unknown ODE. However, if the state of the system is incompletely known, then the right-hand side of the ODE cannot be calculated, and the derivatives needed to propagate the system are unavailable. We show that a specially augmented Neural ODE can learn the system when given incomplete state information. As a worked example we apply neural ODEs to the Lotka-Volterra problem with three species: rabbits, wolves, and bears. We show that even when the data for the bear time series is removed, the remaining time series of the rabbits and wolves are sufficient to learn the dynamical system despite the incomplete state information. This is surprising since a conventional ODE system cannot output the correct derivatives without the full state as input. We implement augmented neural ODEs and differential equation solvers in the Julia programming language.
1901.03677
Reid Priedhorsky
Reid Priedhorsky (1), Ashlynn R. Daughton (1 and 2), Martha Barnard (3), Fiona O'Connell (3), Dave Osthus (1) ((1) Los Alamos National Laboratory, (2) University of Colorado Boulder, (3) Minnetonka Public Schools)
Estimating influenza incidence using search query deceptiveness and generalized ridge regression
27 pages, 8 figures
null
10.1371/journal.pcbi.1007165
LA-UR 18-24467
q-bio.PE cs.SI stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Seasonal influenza is a sometimes surprisingly impactful disease, causing thousands of deaths per year along with much additional morbidity. Timely knowledge of the outbreak state is valuable for managing an effective response. The current state of the art is to gather this knowledge using in-person patient contact. While accurate, this is time-consuming and expensive. This has motivated inquiry into new approaches using internet activity traces, based on the theory that lay observations of health status lead to informative features in internet data. These approaches risk being deceived by activity traces having a coincidental, rather than informative, relationship to disease incidence; to our knowledge, this risk has not yet been quantitatively explored. We evaluated both simulated and real activity traces of varying deceptiveness for influenza incidence estimation using linear regression. We found that deceptiveness knowledge does reduce error in such estimates, that it may help automatically-selected features perform as well or better than features that require human curation, and that a semantic distance measure derived from the Wikipedia article category tree serves as a useful proxy for deceptiveness. This suggests that disease incidence estimation models should incorporate not only data about how internet features map to incidence but also additional data to estimate feature deceptiveness. By doing so, we may gain one more step along the path to accurate, reliable disease incidence estimation using internet data. This capability would improve public health by decreasing the cost and increasing the timeliness of such estimates.
[ { "created": "Fri, 11 Jan 2019 18:04:42 GMT", "version": "v1" } ]
2020-07-01
[ [ "Priedhorsky", "Reid", "", "1 and 2" ], [ "Daughton", "Ashlynn R.", "", "1 and 2" ], [ "Barnard", "Martha", "" ], [ "O'Connell", "Fiona", "" ], [ "Osthus", "Dave", "" ] ]
Seasonal influenza is a sometimes surprisingly impactful disease, causing thousands of deaths per year along with much additional morbidity. Timely knowledge of the outbreak state is valuable for managing an effective response. The current state of the art is to gather this knowledge using in-person patient contact. While accurate, this is time-consuming and expensive. This has motivated inquiry into new approaches using internet activity traces, based on the theory that lay observations of health status lead to informative features in internet data. These approaches risk being deceived by activity traces having a coincidental, rather than informative, relationship to disease incidence; to our knowledge, this risk has not yet been quantitatively explored. We evaluated both simulated and real activity traces of varying deceptiveness for influenza incidence estimation using linear regression. We found that deceptiveness knowledge does reduce error in such estimates, that it may help automatically-selected features perform as well or better than features that require human curation, and that a semantic distance measure derived from the Wikipedia article category tree serves as a useful proxy for deceptiveness. This suggests that disease incidence estimation models should incorporate not only data about how internet features map to incidence but also additional data to estimate feature deceptiveness. By doing so, we may gain one more step along the path to accurate, reliable disease incidence estimation using internet data. This capability would improve public health by decreasing the cost and increasing the timeliness of such estimates.