Dataset schema (each record below lists its fields in this order, one field per line; "null" marks a missing value):

Column          Type           Length / values
id              stringlengths  9 to 13
submitter       stringlengths  4 to 48
authors         stringlengths  4 to 9.62k
title           stringlengths  4 to 343
comments        stringlengths  2 to 480
journal-ref     stringlengths  9 to 309
doi             stringlengths  12 to 138
report-no       stringclasses  277 values
categories      stringlengths  8 to 87
license         stringclasses  9 values
orig_abstract   stringlengths  27 to 3.76k
versions        listlengths    1 to 15
update_date     stringlengths  10 to 10
authors_parsed  listlengths    1 to 147
abstract        stringlengths  24 to 3.75k
2011.02372
Elod Mehes
Elod Mehes, Tana S Pottorf, Marton Gulyas, Sandor Paku, Pamela V. Tran, Andras Czirok
A biomimetic kidney tubule model
11 pages, 4 figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A critical barrier in the nephrology field is the lack of appropriate in vitro renal tubule models that allow manipulation of various mechanical factors, facilitating studies of disease pathophysiology and drug discovery. Here we report development of a novel in vitro assay system composed of a renal tubule within an elasto-plastic extracellular matrix microenvironment. This in vitro tubule mimetic device consists of a container with two pipette-accessible ports, filament-deposition (3D-) printed into 35 mm cell culture dishes. The container is filled with a hydrogel, such as a collagen I or fibrin gel, while a narrow masking tube is threaded through the ports. Following gelation, the masking material is pulled out, leaving a tunnel within the gel. Seeding of the tunnels with M1 or MDCK renal epithelial cells through the side ports results in a monolayer with apical-basal polarity, such that laminin and fibronectin are present on the basal surface, while primary cilia project from the apical side of cells into the tubular lumen. The device is optically accessible and can be live-imaged by phase contrast or epifluorescence microscopy. The lumen of the epithelial-lined tube can be connected through the side ports to a circulatory flow. We demonstrate that kidney epithelial cells are able to adjust the diameter of the model tubule by myosin II-dependent contractility. Furthermore, cells of the tubule are also able to remodel the surrounding hydrogel, leading to budding from the main tubule. We propose that this versatile in vitro model system can be developed into a future pre-clinical tool to study the pathophysiology of kidney diseases and identify therapeutic compounds.
[ { "created": "Wed, 4 Nov 2020 16:02:55 GMT", "version": "v1" } ]
2020-11-05
[ [ "Mehes", "Elod", "" ], [ "Pottorf", "Tana S", "" ], [ "Gulyas", "Marton", "" ], [ "Paku", "Sandor", "" ], [ "Tran", "Pamela V.", "" ], [ "Czirok", "Andras", "" ] ]
A critical barrier in the nephrology field is the lack of appropriate in vitro renal tubule models that allow manipulation of various mechanical factors, facilitating studies of disease pathophysiology and drug discovery. Here we report development of a novel in vitro assay system composed of a renal tubule within an elasto-plastic extracellular matrix microenvironment. This in vitro tubule mimetic device consists of a container with two pipette-accessible ports, filament-deposition (3D-) printed into 35 mm cell culture dishes. The container is filled with a hydrogel, such as a collagen I or fibrin gel, while a narrow masking tube is threaded through the ports. Following gelation, the masking material is pulled out, leaving a tunnel within the gel. Seeding of the tunnels with M1 or MDCK renal epithelial cells through the side ports results in a monolayer with apical-basal polarity, such that laminin and fibronectin are present on the basal surface, while primary cilia project from the apical side of cells into the tubular lumen. The device is optically accessible and can be live-imaged by phase contrast or epifluorescence microscopy. The lumen of the epithelial-lined tube can be connected through the side ports to a circulatory flow. We demonstrate that kidney epithelial cells are able to adjust the diameter of the model tubule by myosin II-dependent contractility. Furthermore, cells of the tubule are also able to remodel the surrounding hydrogel, leading to budding from the main tubule. We propose that this versatile in vitro model system can be developed into a future pre-clinical tool to study the pathophysiology of kidney diseases and identify therapeutic compounds.
2109.08718
Kaifu Gao
Kaifu Gao, Dong Chen, Alfred J Robison, and Guo-Wei Wei
Proteome-informed machine learning studies of cocaine addiction
null
null
null
null
q-bio.MN cs.LG
http://creativecommons.org/licenses/by/4.0/
Cocaine addiction accounts for a large portion of substance use disorders and threatens millions of lives worldwide. There is an urgent need to develop efficient anti-cocaine addiction drugs. Unfortunately, no medications have been approved by the Food and Drug Administration (FDA), despite extensive effort in the past few decades. The main challenge is the intricate molecular mechanisms of cocaine addiction, involving synergistic interactions among proteins upstream and downstream of dopamine transporter (DAT) functions impacted by cocaine. However, traditional in vivo or in vitro experiments cannot address the roles of so many proteins, highlighting the need for innovative strategies in the field. We propose a proteome-informed machine learning/deep learning (ML/DL) platform to discover nearly optimal anti-cocaine addiction lead compounds. We construct and analyze proteomic protein-protein interaction (PPI) networks for cocaine dependence to identify 141 involved drug targets and represent over 60,000 associated drug candidates or experimental drugs in the latent space using an autoencoder (AE) model trained on over 104 million molecules. We build 32 ML models for cross-target analysis of these drug candidates for side effects and repurposing potential. We further screen the absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties of these candidates. Our platform reveals that essentially all of the existing drug candidates, including dozens of experimental drugs, fail to pass our cross-target and ADMET screenings. Nonetheless, we have identified two nearly optimal leads for further optimization.
[ { "created": "Fri, 17 Sep 2021 18:58:24 GMT", "version": "v1" } ]
2021-09-21
[ [ "Gao", "Kaifu", "" ], [ "Chen", "Dong", "" ], [ "Robison", "Alfred J", "" ], [ "Wei", "Guo-Wei", "" ] ]
Cocaine addiction accounts for a large portion of substance use disorders and threatens millions of lives worldwide. There is an urgent need to develop efficient anti-cocaine addiction drugs. Unfortunately, no medications have been approved by the Food and Drug Administration (FDA), despite extensive effort in the past few decades. The main challenge is the intricate molecular mechanisms of cocaine addiction, involving synergistic interactions among proteins upstream and downstream of dopamine transporter (DAT) functions impacted by cocaine. However, traditional in vivo or in vitro experiments cannot address the roles of so many proteins, highlighting the need for innovative strategies in the field. We propose a proteome-informed machine learning/deep learning (ML/DL) platform to discover nearly optimal anti-cocaine addiction lead compounds. We construct and analyze proteomic protein-protein interaction (PPI) networks for cocaine dependence to identify 141 involved drug targets and represent over 60,000 associated drug candidates or experimental drugs in the latent space using an autoencoder (AE) model trained on over 104 million molecules. We build 32 ML models for cross-target analysis of these drug candidates for side effects and repurposing potential. We further screen the absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties of these candidates. Our platform reveals that essentially all of the existing drug candidates, including dozens of experimental drugs, fail to pass our cross-target and ADMET screenings. Nonetheless, we have identified two nearly optimal leads for further optimization.
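A minimal sketch (not the authors' code) of the cross-target screening logic this record describes: given per-target affinity predictions for each candidate, keep only compounds predicted potent on DAT, inactive on all off-targets, and acceptable on ADMET. The target index, affinity cutoff, and ADMET threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_targets = 1000, 32  # 32 per-target ML models, as in the abstract

# Hypothetical predicted binding free energies (kcal/mol; more negative is stronger).
affinity = rng.normal(-7.0, 1.5, size=(n_candidates, n_targets))
dat = 0  # assumed index of the primary target (DAT) among the 32

# Hypothetical ADMET scores in [0, 1]; higher is safer.
admet = rng.uniform(0, 1, size=n_candidates)

potent_on_dat = affinity[:, dat] < -9.5            # roughly a 0.1 uM cutoff (illustrative)
off_targets = np.delete(affinity, dat, axis=1)
no_side_effects = (off_targets > -9.5).all(axis=1)  # inactive on every off-target
admet_ok = admet > 0.5                              # placeholder ADMET screen

leads = np.flatnonzero(potent_on_dat & no_side_effects & admet_ok)
print(f"{leads.size} candidates survive all screens")
```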
2302.04481
Louis Kang
Louis Kang, Taro Toyoizumi
A Hopfield-like model with complementary encodings of memories
34 pages including 21 pages of appendices, 9 figures
null
null
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a Hopfield-like autoassociative network for memories representing examples of concepts. Each memory is encoded by two activity patterns with complementary properties. The first is dense and correlated across examples within concepts, and the second is sparse and exhibits no correlation among examples. The network stores each memory as a linear combination of its encodings. During retrieval, the network recovers sparse or dense patterns with a high or low activity threshold, respectively. As more memories are stored, the dense representation at low threshold shifts from examples to concepts, which are learned from accumulating common example features. Meanwhile, the sparse representation at high threshold maintains distinctions between examples due to the high capacity of sparse, decorrelated patterns. Thus, a single network can retrieve memories at both example and concept scales and perform heteroassociation between them. We obtain our results by deriving macroscopic mean-field equations that yield capacity formulas for sparse examples, dense examples, and dense concepts. We also perform network simulations that verify our theoretical results and explicitly demonstrate the capabilities of the network.
[ { "created": "Thu, 9 Feb 2023 07:51:09 GMT", "version": "v1" }, { "created": "Fri, 12 May 2023 06:41:52 GMT", "version": "v2" }, { "created": "Fri, 25 Aug 2023 08:42:03 GMT", "version": "v3" } ]
2023-08-28
[ [ "Kang", "Louis", "" ], [ "Toyoizumi", "Taro", "" ] ]
We present a Hopfield-like autoassociative network for memories representing examples of concepts. Each memory is encoded by two activity patterns with complementary properties. The first is dense and correlated across examples within concepts, and the second is sparse and exhibits no correlation among examples. The network stores each memory as a linear combination of its encodings. During retrieval, the network recovers sparse or dense patterns with a high or low activity threshold, respectively. As more memories are stored, the dense representation at low threshold shifts from examples to concepts, which are learned from accumulating common example features. Meanwhile, the sparse representation at high threshold maintains distinctions between examples due to the high capacity of sparse, decorrelated patterns. Thus, a single network can retrieve memories at both example and concept scales and perform heteroassociation between them. We obtain our results by deriving macroscopic mean-field equations that yield capacity formulas for sparse examples, dense examples, and dense concepts. We also perform network simulations that verify our theoretical results and explicitly demonstrate the capabilities of the network.
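A toy sketch of the storage-and-retrieval idea in the abstract above: each memory is a linear combination of a dense and a sparse encoding, stored with a Hebbian-style covariance rule, and retrieved with a low or high activity threshold. The mixing weight, thresholds, and update rule are assumptions for illustration, not the paper's mean-field model.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_mem = 2000, 5

# Dense encodings: half the neurons active; sparse encodings: 5% active.
dense = (rng.random((n_mem, N)) < 0.5).astype(float)
sparse = (rng.random((n_mem, N)) < 0.05).astype(float)

# Store each memory as a linear combination of its two encodings.
rho = 0.3  # mixing weight between sparse and dense parts (assumed)
patterns = rho * sparse + (1 - rho) * dense
centered = patterns - patterns.mean(0)
J = centered.T @ centered / N          # Hebbian covariance rule (toy stand-in)
np.fill_diagonal(J, 0.0)

def retrieve(cue, theta, steps=20):
    """Threshold dynamics: s <- step(J s - theta)."""
    s = cue.copy()
    for _ in range(steps):
        s = (J @ s > theta).astype(float)
    return s

cue = dense[0] * (rng.random(N) < 0.8)   # degraded cue for memory 0
low = retrieve(cue, theta=0.01)          # low threshold: dense-like state
high = retrieve(cue, theta=0.05)         # high threshold: sparse-like state

for name, s in [("low-threshold", low), ("high-threshold", high)]:
    ov_d = s @ dense[0] / dense[0].sum()
    ov_s = s @ sparse[0] / sparse[0].sum()
    print(f"{name}: overlap with dense={ov_d:.2f}, sparse={ov_s:.2f}")
```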
2208.13326
Yi-Lin Tsai
Yi-Lin Tsai (1), Dymasius Y. Sitepu (2), Karyn E. Chappell (3), Rishi P. Mediratta (4), C. Jason Wang (4, 5), Peter K. Kitanidis (1, 6, 7, and 8), Christopher B. Field (6, 9, 10, and 11) ((1) Department of Civil and Environmental Engineering, Stanford University, Stanford, CA, USA, (2) Department of Engineering Science, National University of Singapore, Singapore, (3) Department of Engineering, Imperial College London, London, UK, (4) Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA, (5) Department of Health Policy, Stanford University School of Medicine, Stanford, CA, USA, (6) Woods Institute for the Environment, Stanford University, Stanford, CA, USA, (7) Bio-X, Stanford University, Stanford, CA, USA, (8) Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA, USA, (9) Department of Biology, Stanford University, Stanford, CA, USA, (10) Department of Earth System Science, Stanford University, Stanford, CA, USA, (11) Interdisciplinary Environmental Studies Program, Stanford University, Stanford, CA, USA)
Effective approaches to disaster evacuation during a COVID-like pandemic
Supplementary information: https://drive.google.com/file/d/1Lc8QpeEYX4n7S_l9CZ2mrREa2e6MwnIC/view?usp=sharing
null
null
null
q-bio.PE cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since COVID-19 vaccines became available, no studies have quantified how different disaster evacuation strategies can mitigate pandemic risks in shelters. Therefore, we applied an age-structured epidemiological model, known as the Susceptible-Exposed-Infectious-Recovered (SEIR) model, to investigate to what extent different vaccine uptake levels and the Diversion protocol implemented in Taiwan decrease infections and delay pandemic peak occurrences. Taiwan's Diversion protocol involves diverting those in self-quarantine due to exposure, thus preventing them from mingling with the general public at a congregate shelter. The Diversion protocol, combined with sufficient vaccine uptake, can decrease the maximum number of infections and delay outbreaks relative to scenarios without such strategies. When the diversion of all exposed people is not possible, or vaccine uptake is insufficient, the Diversion protocol is still valuable. Furthermore, a group of evacuees that consists primarily of a young adult population tends to experience pandemic peak occurrences sooner and to have up to 180% more infections than does a majority elderly group when the Diversion protocol is implemented. However, when the Diversion protocol is not enforced, the majority elderly group suffers from up to 20% more severe cases than the majority young adult group.
[ { "created": "Mon, 29 Aug 2022 01:38:07 GMT", "version": "v1" } ]
2022-08-30
[ [ "Tsai", "Yi-Lin", "", "1" ], [ "Sitepu", "Dymasius Y.", "", "2" ], [ "Chappell", "Karyn E.", "", "3" ], [ "Mediratta", "Rishi P.", "", "4" ], [ "Wang", "C. Jason", "", "4 and 5" ], [ "Kitanidis", "Peter K.", "", "1, 6, 7, and 8" ], [ "Field", "Christopher B.", "", "6, 9, 10, and 11" ] ]
Since COVID-19 vaccines became available, no studies have quantified how different disaster evacuation strategies can mitigate pandemic risks in shelters. Therefore, we applied an age-structured epidemiological model, known as the Susceptible-Exposed-Infectious-Recovered (SEIR) model, to investigate to what extent different vaccine uptake levels and the Diversion protocol implemented in Taiwan decrease infections and delay pandemic peak occurrences. Taiwan's Diversion protocol involves diverting those in self-quarantine due to exposure, thus preventing them from mingling with the general public at a congregate shelter. The Diversion protocol, combined with sufficient vaccine uptake, can decrease the maximum number of infections and delay outbreaks relative to scenarios without such strategies. When the diversion of all exposed people is not possible, or vaccine uptake is insufficient, the Diversion protocol is still valuable. Furthermore, a group of evacuees that consists primarily of a young adult population tends to experience pandemic peak occurrences sooner and to have up to 180% more infections than does a majority elderly group when the Diversion protocol is implemented. However, when the Diversion protocol is not enforced, the majority elderly group suffers from up to 20% more severe cases than the majority young adult group.
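A minimal sketch of a two-age-group SEIR model with a "diversion" knob in the spirit of the protocol described above: diverted exposed people are removed from the shelter's mixing pool. All rates, populations, and the contact matrix are invented for illustration; population conservation is deliberately not tracked for the diverted group.

```python
import numpy as np

def seir(beta, days=120, dt=0.1, divert=0.0):
    """Two age groups (young, elderly). `divert` is the fraction of newly
    exposed people removed from the shelter (the Diversion protocol)."""
    sigma, gamma = 1 / 5.2, 1 / 10          # incubation and recovery rates (assumed)
    S = np.array([700.0, 300.0])            # susceptible young / elderly (assumed)
    E = np.array([1.0, 1.0])
    I = np.zeros(2); R = np.zeros(2)
    Npop = S + E + I + R
    peak = np.zeros(2)
    for _ in range(int(days / dt)):
        force = beta @ (I / Npop)           # age-structured force of infection
        newE = force * S
        S += dt * (-newE)
        E += dt * ((1 - divert) * newE - sigma * E)  # diverted people leave
        I += dt * (sigma * E - gamma * I)
        R += dt * (gamma * I)
        peak = np.maximum(peak, I)
    return peak

# Illustrative transmission matrix: young mix more than elderly.
beta = np.array([[0.5, 0.2], [0.2, 0.3]])
print("peak infections, no diversion:  ", seir(beta))
print("peak infections, full diversion:", seir(beta, divert=1.0))
```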
2407.06805
Xell Brunet Guasch
Meritxell Brunet Guasch, Nathalie Feeley, Ignacio Soriano, Steve Thorn, Ian Tomlinson, Michael D. Nicholson, Tibor Antal
Quantifying "just-right" APC inactivation for colorectal cancer initiation
Main: 32 pages, 7 figures; Supplements: 17 pages, 7 figures
null
null
null
q-bio.QM q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Dysregulation of the tumour suppressor gene Adenomatous Polyposis Coli (APC) is a canonical step in colorectal cancer development. Curiously, most colorectal tumours carry biallelic mutations that result in only partial loss of APC function, suggesting that a "just-right" level of APC inactivation, and hence Wnt signalling, provides the optimal conditions for tumorigenesis. Mutational processes act variably across the APC gene, which could contribute to the bias against complete APC inactivation. Thus the selective consequences of partial APC loss are unclear. Here we propose a mathematical model to quantify the tumorigenic effect of biallelic APC genotypes, controlling for somatic mutational processes. Analysing sequence data from >2500 colorectal cancers, we find that APC genotypes resulting in partial protein function confer about 50 times higher probability of progressing to cancer compared to complete APC inactivation. The optimal inactivation level varies with anatomical location and additional mutations of Wnt pathway regulators. We use this context dependency to assess the regulatory effect of secondary Wnt drivers in combination with APC in vivo, and provide evidence that mutant AMER1 combines with APC genotypes that lead to relatively low Wnt. The fitness landscape of APC inactivation is consistent across microsatellite unstable and POLE-deficient colorectal cancers and tumours in patients with Familial Adenomatous Polyposis, suggesting a general "just-right" optimum, and pointing to Wnt hyperactivation as a potential cancer vulnerability.
[ { "created": "Tue, 9 Jul 2024 12:24:01 GMT", "version": "v1" } ]
2024-07-10
[ [ "Guasch", "Meritxell Brunet", "" ], [ "Feeley", "Nathalie", "" ], [ "Soriano", "Ignacio", "" ], [ "Thorn", "Steve", "" ], [ "Tomlinson", "Ian", "" ], [ "Nicholson", "Michael D.", "" ], [ "Antal", "Tibor", "" ] ]
Dysregulation of the tumour suppressor gene Adenomatous Polyposis Coli (APC) is a canonical step in colorectal cancer development. Curiously, most colorectal tumours carry biallelic mutations that result in only partial loss of APC function, suggesting that a "just-right" level of APC inactivation, and hence Wnt signalling, provides the optimal conditions for tumorigenesis. Mutational processes act variably across the APC gene, which could contribute to the bias against complete APC inactivation. Thus the selective consequences of partial APC loss are unclear. Here we propose a mathematical model to quantify the tumorigenic effect of biallelic APC genotypes, controlling for somatic mutational processes. Analysing sequence data from >2500 colorectal cancers, we find that APC genotypes resulting in partial protein function confer about 50 times higher probability of progressing to cancer compared to complete APC inactivation. The optimal inactivation level varies with anatomical location and additional mutations of Wnt pathway regulators. We use this context dependency to assess the regulatory effect of secondary Wnt drivers in combination with APC in vivo, and provide evidence that mutant AMER1 combines with APC genotypes that lead to relatively low Wnt. The fitness landscape of APC inactivation is consistent across microsatellite unstable and POLE-deficient colorectal cancers and tumours in patients with Familial Adenomatous Polyposis, suggesting a general "just-right" optimum, and pointing to Wnt hyperactivation as a potential cancer vulnerability.
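A drastically simplified stand-in (not the paper's model) for the core "control for mutation" idea above: the tumorigenic effect of a biallelic APC genotype can be gauged by comparing its observed frequency among tumours with the probability that the mutational process alone would produce it. All numbers below are invented for illustration.

```python
# Toy calculation of the "just-right" idea: enrichment of a genotype among
# tumours relative to its likelihood under somatic mutation alone.
genotypes = {
    # genotype: (observed fraction among tumours, prob. under mutation alone)
    "partial/partial (just-right)": (0.60, 0.10),
    "complete/complete":            (0.05, 0.08),
    "partial/complete":             (0.35, 0.20),
}

for g, (observed, mutational) in genotypes.items():
    enrichment = observed / mutational  # ~ relative probability of progression
    print(f"{g:30s} enrichment = {enrichment:.1f}x")
```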
1909.10116
Audrey Sederberg
Audrey J. Sederberg, Ilya Nemenman
Randomly connected networks generate emergent selectivity and predict decoding properties of large populations of neurons
null
null
10.1371/journal.pcbi.1007875
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advances in neural recording methods enable sampling from populations of thousands of neurons during the performance of behavioral tasks, raising the question of how recorded activity relates to the theoretical models of computations underlying performance. In the context of decision making in rodents, patterns of functional connectivity between choice-selective cortical neurons, as well as broadly distributed choice information in both excitatory and inhibitory populations, were recently reported [1]. The straightforward interpretation of these data suggests a mechanism relying on specific patterns of anatomical connectivity to achieve selective pools of inhibitory as well as excitatory neurons. We investigate an alternative mechanism for the emergence of these experimental observations using a computational approach. We find that a randomly connected network of excitatory and inhibitory neurons generates single-cell selectivity, patterns of pairwise correlations, and indistinguishable excitatory and inhibitory readout weight distributions, as observed in recorded neural populations. Further, we make the readily verifiable experimental predictions that, for this type of evidence accumulation task, there are no anatomically defined sub-populations of neurons representing choice, and that choice preference of a particular neuron changes with the details of the task. This work suggests that distributed stimulus selectivity and patterns of functional organization in population codes could be emergent properties of randomly connected networks.
[ { "created": "Mon, 23 Sep 2019 01:26:02 GMT", "version": "v1" } ]
2020-07-01
[ [ "Sederberg", "Audrey J.", "" ], [ "Nemenman", "Ilya", "" ] ]
Advances in neural recording methods enable sampling from populations of thousands of neurons during the performance of behavioral tasks, raising the question of how recorded activity relates to the theoretical models of computations underlying performance. In the context of decision making in rodents, patterns of functional connectivity between choice-selective cortical neurons, as well as broadly distributed choice information in both excitatory and inhibitory populations, were recently reported [1]. The straightforward interpretation of these data suggests a mechanism relying on specific patterns of anatomical connectivity to achieve selective pools of inhibitory as well as excitatory neurons. We investigate an alternative mechanism for the emergence of these experimental observations using a computational approach. We find that a randomly connected network of excitatory and inhibitory neurons generates single-cell selectivity, patterns of pairwise correlations, and indistinguishable excitatory and inhibitory readout weight distributions, as observed in recorded neural populations. Further, we make the readily verifiable experimental predictions that, for this type of evidence accumulation task, there are no anatomically defined sub-populations of neurons representing choice, and that choice preference of a particular neuron changes with the details of the task. This work suggests that distributed stimulus selectivity and patterns of functional organization in population codes could be emergent properties of randomly connected networks.
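A small sketch of the mechanism the abstract above investigates: a randomly connected excitatory/inhibitory rate network, driven by two "choice" inputs delivered to unstructured random subsets, develops single-cell selectivity in both populations without any structured connectivity. Connection density, gains, and the selectivity criterion are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
NE, NI = 400, 100
N = NE + NI

# Random connectivity obeying Dale's law: E columns positive, I columns negative.
W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)
W[:, NE:] *= -4.0           # inhibition balances excitation (assumed gain)
W /= np.sqrt(N)

def steady_rate(stim, steps=200, dt=0.1):
    """Rectified-linear rate dynamics: dr/dt = -r + [W r + stim]_+ ."""
    r = np.zeros(N)
    for _ in range(steps):
        r += dt * (-r + np.maximum(W @ r + stim, 0.0))
    return r

# Two choice inputs to random, unstructured 20% subsets of neurons.
stimA = 1.0 * (rng.random(N) < 0.2)
stimB = 1.0 * (rng.random(N) < 0.2)
rA, rB = steady_rate(stimA), steady_rate(stimB)

selectivity = (rA - rB) / (rA + rB + 1e-9)
print("selective E cells (|sel|>0.2):", np.mean(np.abs(selectivity[:NE]) > 0.2).round(2))
print("selective I cells (|sel|>0.2):", np.mean(np.abs(selectivity[NE:]) > 0.2).round(2))
```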
1503.03373
Arni S.R. Srinivasa Rao
Arni S.R. Srinivasa Rao
Population Stability and Momentum
Research Article
Notices of the American Mathematical Society (2014), 61(9): 1062-1065
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new approach is developed to understand the stability of a population and to further the understanding of population momentum.
[ { "created": "Tue, 10 Feb 2015 18:36:54 GMT", "version": "v1" } ]
2021-06-24
[ [ "Rao", "Arni S. R. Srinivasa", "" ] ]
A new approach is developed to understand the stability of a population and to further the understanding of population momentum.
1912.04724
Guo-Wei Wei
Duc D Nguyen, Zixuan Cang, and Guo-Wei Wei
A review of mathematical representations of biomolecules
33 pages, 13 figures, and 4 tables
null
10.1039/C9CP06554G
null
q-bio.BM math.AT math.DG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, machine learning (ML) has established itself in various worldwide benchmarking competitions in computational biology, including the Critical Assessment of Structure Prediction (CASP) and the Drug Design Data Resource (D3R) Grand Challenges. However, the intricate structural complexity and high ML dimensionality of biomolecular datasets obstruct the efficient application of ML algorithms in the field. In addition to data and algorithms, an efficient ML machinery for biomolecular predictions must include structural representation as an indispensable component. Mathematical representations that simplify the biomolecular structural complexity and reduce ML dimensionality have emerged as a prime winner in the D3R Grand Challenges. This review is devoted to recent advances in developing low-dimensional and scalable mathematical representations of biomolecules in our laboratory. We discuss three classes of mathematical approaches: algebraic topology, differential geometry, and graph theory. We elucidate how physical and biological challenges have guided the evolution and development of these mathematical apparatuses for massive and diverse biomolecular data. We focus the performance analysis on protein-ligand binding predictions in this review, although these methods have had tremendous success in many other applications, such as protein classification, virtual screening, and the prediction of solubility, solvation free energy, toxicity, partition coefficient, protein folding stability changes upon mutation, etc.
[ { "created": "Tue, 3 Dec 2019 21:15:45 GMT", "version": "v1" } ]
2020-04-22
[ [ "Nguyen", "Duc D", "" ], [ "Cang", "Zixuan", "" ], [ "Wei", "Guo-Wei", "" ] ]
Recently, machine learning (ML) has established itself in various worldwide benchmarking competitions in computational biology, including the Critical Assessment of Structure Prediction (CASP) and the Drug Design Data Resource (D3R) Grand Challenges. However, the intricate structural complexity and high ML dimensionality of biomolecular datasets obstruct the efficient application of ML algorithms in the field. In addition to data and algorithms, an efficient ML machinery for biomolecular predictions must include structural representation as an indispensable component. Mathematical representations that simplify the biomolecular structural complexity and reduce ML dimensionality have emerged as a prime winner in the D3R Grand Challenges. This review is devoted to recent advances in developing low-dimensional and scalable mathematical representations of biomolecules in our laboratory. We discuss three classes of mathematical approaches: algebraic topology, differential geometry, and graph theory. We elucidate how physical and biological challenges have guided the evolution and development of these mathematical apparatuses for massive and diverse biomolecular data. We focus the performance analysis on protein-ligand binding predictions in this review, although these methods have had tremendous success in many other applications, such as protein classification, virtual screening, and the prediction of solubility, solvation free energy, toxicity, partition coefficient, protein folding stability changes upon mutation, etc.
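A sketch of one flavour of the low-dimensional geometric descriptors this review covers: a per-atom rigidity score built from a decaying kernel over pairwise distances, in the spirit of flexibility-rigidity-index-type correlation kernels. Whether this exact functional form appears in the review, and the parameter values used, are assumptions.

```python
import numpy as np

def rigidity_index(coords, eta=3.0, kappa=2.0):
    """Per-atom rigidity mu_i = sum_j exp(-(r_ij / eta)**kappa): a simple,
    low-dimensional geometric descriptor (parameter values are illustrative)."""
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    K = np.exp(-(r / eta) ** kappa)
    np.fill_diagonal(K, 0.0)
    return K.sum(axis=1)

# Toy "molecule": 50 random atom positions in a 10 Angstrom box.
coords = np.random.default_rng(3).uniform(0, 10, size=(50, 3))
mu = rigidity_index(coords)
# Summary statistics of mu can serve as fixed-length ML features.
print(round(mu.min(), 2), round(mu.mean(), 2), round(mu.max(), 2))
```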
1208.0154
Vinod Scaria
Saakshi Jalali, Deeksha Bhartiya, Vinod Scaria
Systematic transcriptome-wide analysis of lncRNA-miRNA interactions
null
null
10.1371/journal.pone.0053823
null
q-bio.GN q-bio.MN
http://creativecommons.org/licenses/publicdomain/
Long noncoding RNAs (lncRNAs) are a recently discovered class of non-protein-coding RNAs which have increasingly been shown to be involved in a wide variety of biological processes as regulatory molecules. Little is known regarding the regulatory interactions between noncoding RNA classes. Recent reports have suggested that lncRNAs could potentially interact with other noncoding RNAs, including microRNAs (miRNAs), and modulate their regulatory role through interactions. We hypothesized that long noncoding RNAs could participate in a layer of regulatory interactions with miRNAs. The availability of genome-scale datasets for Argonaute targets across the human transcriptome has prompted us to reconstruct a genome-scale network of interactions between miRNAs and lncRNAs. We used well-characterized experimental Photoactivatable-Ribonucleoside-Enhanced Crosslinking and Immunoprecipitation (PAR-CLIP) datasets and the recent genome-wide annotations for lncRNAs in the public domain to construct a comprehensive transcriptome-wide map of miRNA regulatory elements. Comparative analysis revealed that many of the miRNAs could target long noncoding RNAs, apart from the coding transcripts, thus participating in a novel layer of regulatory interactions between noncoding RNA classes. We also find that the miRNA regulatory elements have a positional preference, clustering towards the 3' and 5' ends of the long noncoding transcripts. We further reconstruct a genome-wide map of miRNA interactions with lncRNAs as well as messenger RNAs. This analysis suggests widespread regulatory interactions between noncoding RNA classes and a novel functional role for lncRNAs. We present the first transcriptome-scale study of lncRNA-miRNA interactions and the first report of a genome-scale reconstruction of a noncoding RNA regulatory interactome involving lncRNAs.
[ { "created": "Wed, 1 Aug 2012 09:48:39 GMT", "version": "v1" } ]
2015-06-11
[ [ "Jalali", "Saakshi", "" ], [ "Bhartiya", "Deeksha", "" ], [ "Scaria", "Vinod", "" ] ]
Long noncoding RNAs (lncRNAs) are a recently discovered class of non-protein-coding RNAs which have increasingly been shown to be involved in a wide variety of biological processes as regulatory molecules. Little is known regarding the regulatory interactions between noncoding RNA classes. Recent reports have suggested that lncRNAs could potentially interact with other noncoding RNAs, including microRNAs (miRNAs), and modulate their regulatory role through interactions. We hypothesized that long noncoding RNAs could participate in a layer of regulatory interactions with miRNAs. The availability of genome-scale datasets for Argonaute targets across the human transcriptome has prompted us to reconstruct a genome-scale network of interactions between miRNAs and lncRNAs. We used well-characterized experimental Photoactivatable-Ribonucleoside-Enhanced Crosslinking and Immunoprecipitation (PAR-CLIP) datasets and the recent genome-wide annotations for lncRNAs in the public domain to construct a comprehensive transcriptome-wide map of miRNA regulatory elements. Comparative analysis revealed that many of the miRNAs could target long noncoding RNAs, apart from the coding transcripts, thus participating in a novel layer of regulatory interactions between noncoding RNA classes. We also find that the miRNA regulatory elements have a positional preference, clustering towards the 3' and 5' ends of the long noncoding transcripts. We further reconstruct a genome-wide map of miRNA interactions with lncRNAs as well as messenger RNAs. This analysis suggests widespread regulatory interactions between noncoding RNA classes and a novel functional role for lncRNAs. We present the first transcriptome-scale study of lncRNA-miRNA interactions and the first report of a genome-scale reconstruction of a noncoding RNA regulatory interactome involving lncRNAs.
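A sketch of the positional-preference analysis mentioned above: normalize each miRNA element's position by its transcript's length and histogram the result. The data here are synthetic, deliberately biased toward both transcript ends to mimic the reported 5'/3' clustering; the real analysis uses PAR-CLIP-derived sites.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for miRNA elements on lncRNAs: (position, transcript length).
lengths = rng.integers(500, 5000, size=2000)
u = rng.beta(0.5, 0.5, size=2000)        # U-shaped on [0, 1]: biased to both ends
positions = (u * lengths).astype(int)

rel = positions / lengths                # position normalized to [0, 1]
hist, edges = np.histogram(rel, bins=10, range=(0, 1))
for lo, hi, n in zip(edges[:-1], edges[1:], hist):
    print(f"{lo:.1f}-{hi:.1f}: {n}")
```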
1809.05024
Akram Yazdani PhD
Azam Yazdani, Akram Yazdani, Philip L. Lorenzi, Ahmad Samiei
Integrated systems approach identifies pathways from the genome to triglycerides through a metabolomic causal network
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Introduction: To leverage functionality and clinical relevance into understanding systems biology, one needs to understand the pathway of the genetic effects on risk factors/disease through intermediate molecular levels, such as metabolomics. Systems approaches integrate multi-omic information to find pathways to disease endpoints and make optimal inference decisions. Method: Here, we introduce a multi-stage approach to integrate causal networks in observational studies and GWAS to facilitate mechanistic understanding through identification of pathways from the genome to risk factors/disease via metabolomics. The pathways in causal networks reveal the underlying relationships behind observations, which do not play a significant role in more traditional correlative analyses, where one variable at a time is considered. Results: We identified a causal network over the metabolomic level using the genome directed acyclic graph (G-DAG) to systematically assess whether variations in the genome lead to variations in triglyceride levels as a risk factor of cardiovascular disease. We found that LRRC46 and LRRC69, harboring loss-of-function mutations, have a significant effect on two metabolites with direct effects on triglyceride levels. We also found pathways of FAM198B and C6orf25 to triglycerides through indirect paths from metabolites. Conclusion: Integrating causal networks with GWAS facilitates mechanistic understanding in comparison to one-variable-at-a-time approaches due to accounting for relationships among components at intermediate molecular levels. This approach is complementary to experimental studies to identify efficacious targets in the age of big data sets.
[ { "created": "Thu, 13 Sep 2018 15:51:40 GMT", "version": "v1" } ]
2018-09-14
[ [ "Yazdani", "Azam", "" ], [ "Yazdani", "Akram", "" ], [ "Lorenzi", "Philip L.", "" ], [ "Samiei", "Ahmad", "" ] ]
Introduction: To leverage functionality and clinical relevance into understanding systems biology, one needs to understand the pathway of the genetic effects on risk factors/disease through intermediate molecular levels, such as metabolomics. Systems approaches integrate multi-omic information to find pathways to disease endpoints and make optimal inference decisions. Method: Here, we introduce a multi-stage approach to integrate causal networks in observational studies and GWAS to facilitate mechanistic understanding through identification of pathways from the genome to risk factors/disease via metabolomics. The pathways in causal networks reveal the underlying relationships behind observations, which do not play a significant role in more traditional correlative analyses, where one variable at a time is considered. Results: We identified a causal network over the metabolomic level using the genome directed acyclic graph (G-DAG) to systematically assess whether variations in the genome lead to variations in triglyceride levels as a risk factor of cardiovascular disease. We found that LRRC46 and LRRC69, harboring loss-of-function mutations, have a significant effect on two metabolites with direct effects on triglyceride levels. We also found pathways of FAM198B and C6orf25 to triglycerides through indirect paths from metabolites. Conclusion: Integrating causal networks with GWAS facilitates mechanistic understanding in comparison to one-variable-at-a-time approaches due to accounting for relationships among components at intermediate molecular levels. This approach is complementary to experimental studies to identify efficacious targets in the age of big data sets.
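A toy sketch of the pathway-enumeration step once a causal DAG is in hand: enumerate directed paths from gene nodes through metabolite nodes to triglycerides. The graph structure below is invented (the paper infers its G-DAG from data); only the gene names are taken from the abstract.

```python
import networkx as nx

# Toy causal DAG mimicking the genome -> metabolome -> triglycerides layering.
G = nx.DiGraph()
G.add_edges_from([
    ("LRRC46", "metab_1"), ("LRRC69", "metab_2"),    # direct gene -> metabolite
    ("FAM198B", "metab_3"), ("metab_3", "metab_1"),  # indirect path via metabolites
    ("metab_1", "TG"), ("metab_2", "TG"),
])

for gene in ["LRRC46", "LRRC69", "FAM198B"]:
    for path in nx.all_simple_paths(G, gene, "TG"):
        print(" -> ".join(path))
```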
1708.03990
Hiroshi Ashikaga
Hiroshi Ashikaga and Ameneh Asgari-Targhi
Locating Order-Disorder Phase Transition in a Cardiac System
20 pages, 8 figures
Sci Rep 8: 1967, 2018
null
null
q-bio.QM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To prevent sudden cardiac death, predicting where in the cardiac system an order-disorder phase transition into ventricular fibrillation begins is as important as when it begins. We present a computationally efficient, information-theoretic approach to predicting the locations of wavebreaks that initiate fibrillation in a cardiac system where the order-disorder behavior is controlled by a single driving component, mimicking electrical misfiring from the pulmonary veins or the Purkinje fibers. Communication analysis between the driving component and each component of the system reveals that channel capacity, mutual information and transfer entropy can locate the wavebreaks. This approach is applicable to interventional therapies to prevent sudden death, as well as to a wide range of systems to mitigate or prevent imminent phase transitions.
[ { "created": "Mon, 14 Aug 2017 02:11:12 GMT", "version": "v1" }, { "created": "Tue, 15 Aug 2017 04:38:30 GMT", "version": "v2" } ]
2018-02-02
[ [ "Ashikaga", "Hiroshi", "" ], [ "Asgari-Targhi", "Ameneh", "" ] ]
To prevent sudden cardiac death, predicting where in the cardiac system an order-disorder phase transition into ventricular fibrillation begins is as important as when it begins. We present a computationally efficient, information-theoretic approach to predicting the locations of wavebreaks that initiate fibrillation in a cardiac system where the order-disorder behavior is controlled by a single driving component, mimicking electrical misfiring from the pulmonary veins or the Purkinje fibers. Communication analysis between the driving component and each component of the system reveals that channel capacity, mutual information and transfer entropy can locate the wavebreaks. This approach is applicable to interventional therapies to prevent sudden death, as well as to a wide range of systems to mitigate or prevent imminent phase transitions.
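A minimal, generic implementation of one of the communication measures named above, transfer entropy, for binary time series with one step of history. The definition is standard; its application to cardiac wavebreak localization is the paper's, and the synthetic driver/follower data below are only a sanity check.

```python
import numpy as np

def transfer_entropy(x, y):
    """TE(X -> Y), one-step history, in bits:
    sum over states of p(y+, y, x) * log2[ p(y+ | y, x) / p(y+ | y) ]."""
    yp, yc, xc = y[1:], y[:-1], x[:-1]
    te = 0.0
    for a in (0, 1):            # y_{t+1}
        for b in (0, 1):        # y_t
            for c in (0, 1):    # x_t
                p_abc = np.mean((yp == a) & (yc == b) & (xc == c))
                p_bc = np.mean((yc == b) & (xc == c))
                p_ab = np.mean((yp == a) & (yc == b))
                p_b = np.mean(yc == b)
                if p_abc > 0 and p_bc > 0 and p_ab > 0 and p_b > 0:
                    te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

rng = np.random.default_rng(5)
driver = rng.integers(0, 2, 10000)     # the driving component
follower = np.roll(driver, 1)          # copies the driver with a one-step lag
noise = rng.integers(0, 2, 10000)      # an uncoupled component
print("TE driver->follower:", round(transfer_entropy(driver, follower), 3))
print("TE driver->noise:   ", round(transfer_entropy(driver, noise), 3))
```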
1908.09067
Okyaz Eminaga
Okyaz Eminaga, Mahmoud Abbas, Christian Kunder, Andreas M. Loening, Jeanne Shen, James D. Brooks, Curtis P. Langlotz, and Daniel L. Rubin
Plexus Convolutional Neural Network (PlexusNet): A novel neural network architecture for histologic image analysis
null
null
null
null
q-bio.QM cs.AI cs.CV eess.IV q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Different convolutional neural network (CNN) models have been tested for their application in histological image analyses. However, these models are prone to overfitting due to their large parameter capacity, requiring more data or valuable computational resources for model training. Given these limitations, we introduce a novel architecture (termed PlexusNet). We utilized 310 hematoxylin and eosin (H&E) stained, annotated histological images of prostate cancer cases from TCGA-PRAD and Stanford University, and 398 H&E whole-slide images from the Camelyon 2016 challenge. Models derived from the PlexusNet architecture were compared to models derived from several existing "state-of-the-art" architectures. We measured discrimination accuracy, calibration, and clinical utility. An ablation study was conducted to study the effect of each component of PlexusNet on model performance. A well-fitted PlexusNet-based model delivered comparable classification performance (AUC: 0.963) in distinguishing prostate cancer from healthy tissue, although it was at least 23 times smaller and had better model calibration and clinical utility than the comparison models. A separate, smaller PlexusNet model accurately detected slides with breast cancer metastases (AUC: 0.978); it helped reduce the number of slides to examine by 43.8% without consequences, although its parameter capacity was 200 times smaller than that of ResNet18. We found that the partitioning of the development set influences model calibration for all models. However, with the PlexusNet architecture, we could achieve comparably well-calibrated models trained on different partitions. In conclusion, PlexusNet represents a novel model architecture for histological image analysis that achieves classification performance comparable to other models while providing orders-of-magnitude parameter reduction.
[ { "created": "Sat, 24 Aug 2019 01:29:34 GMT", "version": "v1" }, { "created": "Wed, 3 Jun 2020 04:43:21 GMT", "version": "v2" } ]
2020-06-04
[ [ "Eminaga", "Okyaz", "" ], [ "Abbas", "Mahmoud", "" ], [ "Kunder", "Christian", "" ], [ "Loening", "Andreas M.", "" ], [ "Shen", "Jeanne", "" ], [ "Brooks", "James D.", "" ], [ "Langlotz", "Curtis P.", "" ], [ "Rubin", "Daniel L.", "" ] ]
Different convolutional neural network (CNN) models have been tested for their application in histological image analyses. However, these models are prone to overfitting due to their large parameter capacity, requiring more data or valuable computational resources for model training. Given these limitations, we introduce a novel architecture (termed PlexusNet). We utilized 310 hematoxylin and eosin (H&E) stained, annotated histological images of prostate cancer cases from TCGA-PRAD and Stanford University, and 398 H&E whole-slide images from the Camelyon 2016 challenge. Models derived from the PlexusNet architecture were compared to models derived from several existing "state-of-the-art" architectures. We measured discrimination accuracy, calibration, and clinical utility. An ablation study was conducted to study the effect of each component of PlexusNet on model performance. A well-fitted PlexusNet-based model delivered comparable classification performance (AUC: 0.963) in distinguishing prostate cancer from healthy tissue, although it was at least 23 times smaller and had better model calibration and clinical utility than the comparison models. A separate, smaller PlexusNet model accurately detected slides with breast cancer metastases (AUC: 0.978); it helped reduce the number of slides to examine by 43.8% without consequences, although its parameter capacity was 200 times smaller than that of ResNet18. We found that the partitioning of the development set influences model calibration for all models. However, with the PlexusNet architecture, we could achieve comparably well-calibrated models trained on different partitions. In conclusion, PlexusNet represents a novel model architecture for histological image analysis that achieves classification performance comparable to other models while providing orders-of-magnitude parameter reduction.
1712.04339
Kedi Wu
Kedi Wu, Guo-Wei Wei
Quantitative toxicity prediction using topology-based multi-task deep neural networks
arXiv admin note: substantial text overlap with arXiv:1703.10951
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The understanding of toxicity is of paramount importance to human health and environmental protection. Quantitative toxicity analysis has become a new standard in the field. This work introduces element-specific persistent homology (ESPH), an algebraic topology approach, for quantitative toxicity prediction. ESPH retains crucial chemical information during the topological abstraction of geometric complexity and provides a representation of small molecules that cannot be obtained by any other method. To investigate the representability and predictive power of ESPH for small molecules, ancillary descriptors have also been developed based on physical models. Topological and physical descriptors are paired with advanced machine learning algorithms, such as deep neural networks (DNN), random forests (RF), and gradient boosting decision trees (GBDT), to facilitate their application to quantitative toxicity predictions. A topology-based multi-task strategy is proposed to take advantage of the availability of large data sets while dealing with small data sets. Four benchmark toxicity data sets that involve quantitative measurements are used to validate the proposed approaches. Extensive numerical studies indicate that the proposed topological learning methods are able to outperform the state-of-the-art methods in the literature for quantitative toxicity analysis. Our online server for computing element-specific topological descriptors (ESTDs) is available at http://weilab.math.msu.edu/TopTox/
[ { "created": "Sat, 9 Dec 2017 21:52:42 GMT", "version": "v1" } ]
2017-12-13
[ [ "Wu", "Kedi", "" ], [ "Wei", "Guo-Wei", "" ] ]
The understanding of toxicity is of paramount importance to human health and environmental protection. Quantitative toxicity analysis has become a new standard in the field. This work introduces element-specific persistent homology (ESPH), an algebraic topology approach, for quantitative toxicity prediction. ESPH retains crucial chemical information during the topological abstraction of geometric complexity and provides a representation of small molecules that cannot be obtained by any other method. To investigate the representability and predictive power of ESPH for small molecules, ancillary descriptors have also been developed based on physical models. Topological and physical descriptors are paired with advanced machine learning algorithms, such as deep neural networks (DNN), random forests (RF), and gradient boosting decision trees (GBDT), to facilitate their application to quantitative toxicity predictions. A topology-based multi-task strategy is proposed to take advantage of the availability of large data sets while dealing with small data sets. Four benchmark toxicity data sets that involve quantitative measurements are used to validate the proposed approaches. Extensive numerical studies indicate that the proposed topological learning methods are able to outperform the state-of-the-art methods in the literature for quantitative toxicity analysis. Our online server for computing element-specific topological descriptors (ESTDs) is available at http://weilab.math.msu.edu/TopTox/
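A self-contained sketch of the element-specific idea above in its simplest case: restrict a molecule's atoms to one element pair, then compute 0-dimensional Vietoris-Rips persistence (single-linkage merges via union-find) and summarize the barcode into features. Real ESPH spans multiple homology dimensions and many element combinations; the molecule here is random.

```python
import numpy as np
from itertools import combinations

def h0_barcode(coords):
    """0-dimensional Vietoris-Rips persistence: every point is born at 0;
    components die at the edge length that merges them (single linkage)."""
    n = len(coords)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    edges = sorted((np.linalg.norm(coords[i] - coords[j]), i, j)
                   for i, j in combinations(range(n), 2))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)                 # one H0 bar dies at this merge
    return deaths                            # n-1 finite bars; one lives forever

# Element-specific PH: restrict to atoms of one element pair, e.g. C and N.
rng = np.random.default_rng(6)
elements = rng.choice(["C", "N", "O", "H"], size=60)
coords = rng.uniform(0, 15, size=(60, 3))
bars = h0_barcode(coords[np.isin(elements, ["C", "N"])])
# ESTD-style features: simple statistics of the H0 death times.
print(f"{len(bars)} bars; mean death {np.mean(bars):.2f}, max {np.max(bars):.2f}")
```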
2405.12344
Daniel Sadasivan
Daniel Sadasivan, Cole Cantu, Cecilia Marsh, Andrew Graham
A Test of the Thermodynamics of Evolution
10 pages, 3 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Recent research has extended methods from the fields of thermodynamics and statistical mechanics into other disciplines. Most notably, one recent work creates a unified theoretical framework to understand evolutionary biology, machine learning, and thermodynamics. We present simulations of biological evolution used to test this framework. The test simulates organisms whose behavior is determined by specific parameters that play the role of genes. These genes are passed on to new simulated organisms with the capacity to mutate, allowing adaptation of the organisms to the environment. With this simulation, we are able to test the framework in question. The results of our simulation are consistent with the work being tested, providing evidence for it.
[ { "created": "Mon, 20 May 2024 19:36:34 GMT", "version": "v1" } ]
2024-05-22
[ [ "Sadasivan", "Daniel", "" ], [ "Cantu", "Cole", "" ], [ "Marsh", "Cecilia", "" ], [ "Graham", "Andrew", "" ] ]
Recent research has extended methods from the fields of thermodynamics and statistical mechanics into other disciplines. Most notably, one recent work creates a unified theoretical framework to understand evolutionary biology, machine learning, and thermodynamics. We present simulations of biological evolution used to test this framework. The test simulates organisms whose behavior is determined by specific parameters that play the role of genes. These genes are passed on to new simulated organisms with the capacity to mutate, allowing adaptation of the organisms to the environment. With this simulation, we are able to test the framework in question. The results of our simulation are consistent with the work being tested, providing evidence for it.
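A minimal sketch of the simulation style described above: organisms carry a parameter vector ("genes"), reproduce in proportion to fitness, and mutate. The fitness function, population size, and mutation scale are invented; the paper's actual setup and the thermodynamic quantities it measures are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
pop, n_genes, generations, mut_sd = 200, 5, 100, 0.05
target = rng.uniform(-1, 1, n_genes)   # the environment the genes must match

genes = rng.uniform(-1, 1, (pop, n_genes))
for g in range(generations):
    # Fitness: closeness of an organism's gene vector to the target.
    fitness = np.exp(-np.sum((genes - target) ** 2, axis=1))
    # Fitness-proportional reproduction with Gaussian mutation.
    parents = rng.choice(pop, size=pop, p=fitness / fitness.sum())
    genes = genes[parents] + rng.normal(0, mut_sd, (pop, n_genes))
    if g % 25 == 0:
        print(f"gen {g:3d}: mean fitness {fitness.mean():.3f}")
```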
2111.05092
Elcin Huseyn
Elcin Huseyn
Electrostimulation of Deep Brain Structures in Parkinson's Disease
7 Pages
null
null
null
q-bio.NC cs.NE physics.med-ph
http://creativecommons.org/licenses/by/4.0/
The study involved 56 patients with advanced- and late-stage Parkinson's disease who could be considered as potentially requiring neurosurgical treatment, namely electrical stimulation of deep brain structures. An algorithm has been developed for selecting patients with advanced and late stages of Parkinson's disease for neurosurgical treatment (implantation of a system for electrical stimulation of deep brain structures) in distant neurosurgical centers; for patients with limited mobility it includes two stages, outpatient and inpatient. The development of an algorithm for referral to neurosurgical treatment has shortened the path of a patient with limited mobility from a polyclinic to a neurosurgical center. Electrostimulation of deep brain structures in Parkinson's disease significantly improved the condition of patients: functional activity increased by 55%, the severity of motor disorders decreased by 55%, and the dose of levodopa drugs was halved.
[ { "created": "Sat, 18 Sep 2021 06:01:46 GMT", "version": "v1" } ]
2021-11-10
[ [ "Huseyn", "Elcin", "" ] ]
The study involved 56 patients with advanced- and late-stage Parkinson's disease who could be considered as potentially requiring neurosurgical treatment, namely electrical stimulation of deep brain structures. An algorithm has been developed for selecting patients with advanced and late stages of Parkinson's disease for neurosurgical treatment (implantation of a system for electrical stimulation of deep brain structures) in distant neurosurgical centers; for patients with limited mobility it includes two stages, outpatient and inpatient. The development of an algorithm for referral to neurosurgical treatment has shortened the path of a patient with limited mobility from a polyclinic to a neurosurgical center. Electrostimulation of deep brain structures in Parkinson's disease significantly improved the condition of patients: functional activity increased by 55%, the severity of motor disorders decreased by 55%, and the dose of levodopa drugs was halved.
2407.00201
Yifan Wang
Yifan Wang and Vikram Ravindra and Ananth Grama
Deconvolving Complex Neuronal Networks into Interpretable Task-Specific Connectomes
9 pages, 5 figures
null
null
null
q-bio.NC cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
Task-specific functional MRI (fMRI) images provide excellent modalities for studying the neuronal basis of cognitive processes. We use fMRI data to formulate and solve the problem of deconvolving task-specific aggregate neuronal networks into a set of basic building blocks called canonical networks, to use these networks for functional characterization, and to characterize the physiological basis of these responses by mapping them to regions of the brain. Our results show excellent task-specificity of canonical networks, i.e., the expression of a small number of canonical networks can be used to accurately predict tasks; generalizability across cohorts, i.e., canonical networks are conserved across diverse populations, studies, and acquisition protocols; and that canonical networks have a strong anatomical and physiological basis. From a methods perspective, the problem of identifying these canonical networks poses challenges rooted in the high dimensionality, small sample size, acquisition variability, and noise. Our deconvolution technique is based on non-negative matrix factorization (NMF) that identifies canonical networks as factors of a suitably constructed matrix. We demonstrate that our method scales to large datasets, yields stable and accurate factors, and is robust to noise.
[ { "created": "Fri, 28 Jun 2024 19:13:48 GMT", "version": "v1" }, { "created": "Wed, 3 Jul 2024 15:37:54 GMT", "version": "v2" } ]
2024-07-04
[ [ "Wang", "Yifan", "" ], [ "Ravindra", "Vikram", "" ], [ "Grama", "Ananth", "" ] ]
Task-specific functional MRI (fMRI) images provide excellent modalities for studying the neuronal basis of cognitive processes. We use fMRI data to formulate and solve the problem of deconvolving task-specific aggregate neuronal networks into a set of basic building blocks called canonical networks, to use these networks for functional characterization, and to characterize the physiological basis of these responses by mapping them to regions of the brain. Our results show excellent task-specificity of canonical networks, i.e., the expression of a small number of canonical networks can be used to accurately predict tasks; generalizability across cohorts, i.e., canonical networks are conserved across diverse populations, studies, and acquisition protocols; and that canonical networks have a strong anatomical and physiological basis. From a methods perspective, the problem of identifying these canonical networks poses challenges rooted in the high dimensionality, small sample size, acquisition variability, and noise. Our deconvolution technique is based on non-negative matrix factorization (NMF) that identifies canonical networks as factors of a suitably constructed matrix. We demonstrate that our method scales to large datasets, yields stable and accurate factors, and is robust to noise.
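A sketch of the NMF deconvolution step using scikit-learn on a synthetic scans-by-regions matrix built from known hidden networks. The matrix construction, shapes, and NMF settings are assumptions; the paper constructs its matrix from real fMRI data.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(8)
n_scans, n_regions, k = 120, 300, 4

# Synthetic non-negative activation matrix built from k hidden canonical
# networks (the ground truth the factorization should approximately recover).
true_networks = rng.random((k, n_regions)) * (rng.random((k, n_regions)) < 0.2)
loadings = rng.random((n_scans, k))
X = loadings @ true_networks + 0.01 * rng.random((n_scans, n_regions))

model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)   # per-scan expression of each canonical network
H = model.components_        # the k canonical networks over brain regions
print(W.shape, H.shape)      # (120, 4) (4, 300)
```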
1507.08747
Kavita Vemuri
Kavita Vemuri, Kulvinder Bisla, SaiKrishna Mulpuru, Srinivasa Varadarajan
Do normal pupil diameter differences in the population underlie the color selection of the #dress?
6 pages, 4 figures
null
10.1364/JOSAA.33.00A137
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The fundamental question that arises from the color composition of the #dress is: 'What are the phenomena that underlie the individual differences in the colors reported, given that all other conditions, such as the light and the display device, are identical?' The main color camps are blue/black (b/b) and white/gold (w/g), and a survey of 384 participants showed a near-equal distribution. We looked at pupil size differences in a sample population of 53 from the two groups plus a group who switched (w/g to b/b). Our results show that the w/g and switch populations had significantly lower pupil sizes than the b/b camp (w/g < b/b, p-value = 0.0086). A standard infinity-focus experiment was then conducted on 18 participants from each group to check if there is bimodality in the population, and we again found a statistically significant difference (w/g < b/b, p-value = 0.0132). Six participants, half from the w/g camp, were administered dilation drops that increased the pupil size by 3-4 mm to check if an increase in retinal illuminance would trigger a change in color in the w/g group, but the participants did not report a switch. The results suggest a population difference in normal pupil size across the three groups.
[ { "created": "Fri, 31 Jul 2015 04:17:42 GMT", "version": "v1" }, { "created": "Wed, 25 Nov 2015 08:45:46 GMT", "version": "v2" } ]
2016-04-20
[ [ "Vemuri", "Kavita", "" ], [ "Bisla", "Kulvinder", "" ], [ "Mulpuru", "SaiKrishna", "" ], [ "Varadarajan", "Srinivasa", "" ] ]
The fundamental question that arises from the color composition of the #dress is: 'What are the phenomena that underlie the individual differences in the colors reported, given that all other conditions, such as the light and the display device, are identical?' The main color camps are blue/black (b/b) and white/gold (w/g), and a survey of 384 participants showed a near-equal distribution. We looked at pupil size differences in a sample population of 53 from the two groups plus a group who switched (w/g to b/b). Our results show that the w/g and switch populations had significantly lower pupil sizes than the b/b camp (w/g < b/b, p-value = 0.0086). A standard infinity-focus experiment was then conducted on 18 participants from each group to check if there is bimodality in the population, and we again found a statistically significant difference (w/g < b/b, p-value = 0.0132). Six participants, half from the w/g camp, were administered dilation drops that increased the pupil size by 3-4 mm to check if an increase in retinal illuminance would trigger a change in color in the w/g group, but the participants did not report a switch. The results suggest a population difference in normal pupil size across the three groups.
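A sketch of the kind of one-sided group comparison reported above, here with a Mann-Whitney U test. The choice of test, the group sizes, and the pupil diameters are all assumptions; the study's raw data and actual statistical procedure are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Synthetic pupil diameters (mm), with w/g assumed to have smaller pupils.
wg = rng.normal(3.6, 0.5, 26)   # white/gold group
bb = rng.normal(4.1, 0.5, 27)   # blue/black group

# One-sided test of w/g < b/b, analogous to the comparison reported above.
u, p = stats.mannwhitneyu(wg, bb, alternative="less")
print(f"U = {u:.0f}, one-sided p = {p:.4f}")
```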
0704.2896
Adrian Melott
Bruce S. Lieberman and Adrian L. Melott (University of Kansas)
Considering the Case for Biodiversity Cycles: Reexamining the Evidence for Periodicity in the Fossil Record
Minor modifications to reflect final published version
PLoS ONE 2(8): e759 (2007)
10.1371/journal.pone.0000759
null
q-bio.PE astro-ph physics.geo-ph
null
Medvedev and Melott (2007) have suggested that periodicity in fossil biodiversity may be induced by cosmic rays which vary as the Solar System oscillates normal to the galactic disk. We re-examine the evidence for a 62 million year (Myr) periodicity in biodiversity throughout the Phanerozoic history of animal life reported by Rohde & Mueller (2005), as well as related questions of periodicity in origination and extinction. We find that the signal is robust against variations in methods of analysis, and is based on fluctuations in the Paleozoic and a substantial part of the Mesozoic. Examination of origination and extinction is somewhat ambiguous, with results depending upon procedure. Origination and extinction intensity as defined by RM may be affected by an artifact at 27 Myr in the duration of stratigraphic intervals. Nevertheless, when a procedure free of this artifact is implemented, the 27 Myr periodicity appears in origination, suggesting that the artifact may ultimately be based on a signal in the data. A 62 Myr feature appears in extinction, when this same procedure is used. We conclude that evidence for a periodicity at 62 Myr is robust, and evidence for periodicity at approximately 27 Myr is also present, albeit more ambiguous.
[ { "created": "Sun, 22 Apr 2007 12:08:35 GMT", "version": "v1" }, { "created": "Wed, 25 Jul 2007 18:25:49 GMT", "version": "v2" }, { "created": "Wed, 22 Aug 2007 15:03:22 GMT", "version": "v3" } ]
2007-08-22
[ [ "Lieberman", "Bruce S.", "", "University of Kansas" ], [ "Melott", "Adrian L.", "", "University of Kansas" ] ]
Medvedev and Melott (2007) have suggested that periodicity in fossil biodiversity may be induced by cosmic rays, which vary as the Solar System oscillates normal to the galactic disk. We re-examine the evidence for a 62 million year (Myr) periodicity in biodiversity throughout the Phanerozoic history of animal life reported by Rohde & Mueller (2005; hereafter RM), as well as related questions of periodicity in origination and extinction. We find that the signal is robust against variations in methods of analysis, and is based on fluctuations in the Paleozoic and a substantial part of the Mesozoic. Examination of origination and extinction is somewhat ambiguous, with results depending upon procedure. Origination and extinction intensity as defined by RM may be affected by an artifact at 27 Myr in the duration of stratigraphic intervals. Nevertheless, when a procedure free of this artifact is implemented, the 27 Myr periodicity appears in origination, suggesting that the artifact may ultimately be based on a signal in the data. A 62 Myr feature appears in extinction when this same procedure is used. We conclude that evidence for a periodicity at 62 Myr is robust, and evidence for periodicity at approximately 27 Myr is also present, albeit more ambiguous.
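To give a feel for this kind of analysis, here is a minimal Python sketch of periodicity detection in a detrended, evenly sampled series. The data are synthetic (a 62 Myr sinusoid plus noise), not the RM diversity compendium, and the cubic detrending is an illustrative choice rather than the procedure used in the paper.

import numpy as np

dt = 1.0                                   # sampling interval, Myr
t = np.arange(0.0, 542.0, dt)              # Phanerozoic-like time axis
rng = np.random.default_rng(1)
series = np.sin(2 * np.pi * t / 62.0) + rng.normal(0, 0.5, t.size)

series -= np.polyval(np.polyfit(t, series, 3), t)   # cubic detrend
power = np.abs(np.fft.rfft(series)) ** 2             # periodogram
freq = np.fft.rfftfreq(t.size, d=dt)

peak = freq[1:][np.argmax(power[1:])]      # skip the zero-frequency bin
print(f"dominant period ~ {1 / peak:.1f} Myr")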
1007.3122
Samuel Johnson
Samuel Johnson, J. Marro, and Joaquín J. Torres
Robust short-term memory without synaptic learning
20 pages, 9 figures. Amended to include section on spiking neurons, with general rewrite
Johnson S, Marro J, Torres JJ (2013) Robust Short-Term Memory without Synaptic Learning. PLoS ONE 8(1): e50276
10.1371/journal.pone.0050276
null
q-bio.NC cond-mat.dis-nn nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Short-term memory in the brain cannot in general be explained the way long-term memory can -- as a gradual modification of synaptic weights -- since it takes place too quickly. Theories based on some form of cellular bistability, however, do not seem able to account for the fact that noisy neurons can collectively store information in a robust manner. We show how a sufficiently clustered network of simple model neurons can be instantly induced into metastable states capable of retaining information for a short time (a few seconds). The mechanism is robust to different network topologies and kinds of neural model. This could constitute a viable means available to the brain for sensory and/or short-term memory with no need of synaptic learning. Relevant phenomena described by neurobiology and psychology, such as local synchronization of synaptic inputs and power-law statistics of forgetting avalanches, emerge naturally from this mechanism, and we suggest possible experiments to test its viability in more biological settings.
[ { "created": "Mon, 19 Jul 2010 11:11:48 GMT", "version": "v1" }, { "created": "Wed, 30 Jan 2013 19:34:49 GMT", "version": "v2" } ]
2013-01-31
[ [ "Johnson", "Samuel", "" ], [ "Marro", "J.", "" ], [ "Torres", "Joaquín J.", "" ] ]
Short-term memory in the brain cannot in general be explained the way long-term memory can -- as a gradual modification of synaptic weights -- since it takes place too quickly. Theories based on some form of cellular bistability, however, do not seem able to account for the fact that noisy neurons can collectively store information in a robust manner. We show how a sufficiently clustered network of simple model neurons can be instantly induced into metastable states capable of retaining information for a short time (a few seconds). The mechanism is robust to different network topologies and kinds of neural model. This could constitute a viable means available to the brain for sensory and/or short-term memory with no need of synaptic learning. Relevant phenomena described by neurobiology and psychology, such as local synchronization of synaptic inputs and power-law statistics of forgetting avalanches, emerge naturally from this mechanism, and we suggest possible experiments to test its viability in more biological settings.
0806.3980
Le Zhang
Le Zhang, L. Leon Chen and Thomas S. Deisboeck
Multi-Scale, Multi-Resolution Brain Cancer Modeling
26 pages, 7 figures
null
null
null
q-bio.CB q-bio.MN q-bio.QM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In advancing discrete-based computational cancer models towards clinical applications, one faces the dilemma of how to deal with an ever-growing amount of biomedical data that ought to be incorporated eventually in one form or another. Model scalability becomes of paramount interest. In an effort to start addressing this critical issue, here, we present a novel multi-scale and multi-resolution agent-based in silico glioma model. While "multi-scale" refers to employing an epidermal growth factor receptor (EGFR)-driven molecular network to process cellular phenotypic decisions within the micro-macroscopic environment, "multi-resolution" is achieved through algorithms that classify cells into either active or inactive spatial clusters, which determine the resolution they are simulated at. The aim is to assign computational resources where and when they matter most for maintaining or improving the predictive power of the algorithm, onto specific tumor areas and at particular times. Using a previously described 2D brain tumor model, we have developed four different computational methods for achieving the multi-resolution scheme, three of which are designed to dynamically train on the high-resolution simulation that serves as the control. To quantify the algorithms' performance, we rank them by weighing the distinct computational time savings of the simulation runs against the methods' ability to accurately reproduce the high-resolution results of the control. Finally, to demonstrate the flexibility of the underlying concept, we show the added value of combining the two highest-ranked methods.
[ { "created": "Tue, 24 Jun 2008 20:20:19 GMT", "version": "v1" } ]
2008-06-26
[ [ "Zhang", "Le", "" ], [ "Chen", "L. Leon", "" ], [ "Deisboeck", "Thomas S.", "" ] ]
In advancing discrete-based computational cancer models towards clinical applications, one faces the dilemma of how to deal with an ever-growing amount of biomedical data that ought to be incorporated eventually in one form or another. Model scalability becomes of paramount interest. In an effort to start addressing this critical issue, here, we present a novel multi-scale and multi-resolution agent-based in silico glioma model. While "multi-scale" refers to employing an epidermal growth factor receptor (EGFR)-driven molecular network to process cellular phenotypic decisions within the micro-macroscopic environment, "multi-resolution" is achieved through algorithms that classify cells into either active or inactive spatial clusters, which determine the resolution they are simulated at. The aim is to assign computational resources where and when they matter most for maintaining or improving the predictive power of the algorithm, onto specific tumor areas and at particular times. Using a previously described 2D brain tumor model, we have developed four different computational methods for achieving the multi-resolution scheme, three of which are designed to dynamically train on the high-resolution simulation that serves as the control. To quantify the algorithms' performance, we rank them by weighing the distinct computational time savings of the simulation runs against the methods' ability to accurately reproduce the high-resolution results of the control. Finally, to demonstrate the flexibility of the underlying concept, we show the added value of combining the two highest-ranked methods.
0706.2328
Ophir Flomenbom
Ophir Flomenbom, Robert J. Silbey
Unique mechanisms from finite two-state trajectories
null
E. Barkai, F. L. H. Brown, M. Orrit & H. Yang Eds. THEORY AND EVALUATION OF SINGLE-MOLECULE SIGNALS, (October, 2008)
null
null
q-bio.QM cond-mat.other q-bio.OT
null
Single molecule data made of on and off events are ubiquitous. Famous examples include enzyme turnover, probed via fluorescence, and the opening and closing of ion channels, probed via the flux of ions. The data reflect the dynamics in the underlying multi-substate on-off kinetic scheme (KS) of the process, but the determination of the underlying KS is difficult, and sometimes even impossible, due to the loss of information in the mapping of the multi-dimensional KS onto two dimensions. A way to deal with this problem considers canonical (unique) forms. (A canonical form is determined uniquely by an infinitely long trajectory, yet may correspond to many KSs.) Here we introduce canonical forms of reduced dimensions that can handle any KS (i.e. also KSs with symmetry and irreversible transitions). We give the mapping of KSs into reduced-dimensions forms, which is based on the topology of KSs, and the tools for extracting the reduced-dimensions form from finite data. The canonical forms of reduced dimensions constitute a powerful tool for discriminating between KSs.
[ { "created": "Fri, 15 Jun 2007 16:17:11 GMT", "version": "v1" }, { "created": "Fri, 31 Aug 2007 22:27:44 GMT", "version": "v2" } ]
2010-08-16
[ [ "Flomenbom", "Ophir", "" ], [ "Silbey", "Robert J.", "" ] ]
Single molecule data made of on and off events are ubiquitous. Famous examples include enzyme turnover, probed via fluorescence, and the opening and closing of ion channels, probed via the flux of ions. The data reflect the dynamics in the underlying multi-substate on-off kinetic scheme (KS) of the process, but the determination of the underlying KS is difficult, and sometimes even impossible, due to the loss of information in the mapping of the multi-dimensional KS onto two dimensions. A way to deal with this problem considers canonical (unique) forms. (A canonical form is determined uniquely by an infinitely long trajectory, yet may correspond to many KSs.) Here we introduce canonical forms of reduced dimensions that can handle any KS (i.e. also KSs with symmetry and irreversible transitions). We give the mapping of KSs into reduced-dimensions forms, which is based on the topology of KSs, and the tools for extracting the reduced-dimensions form from finite data. The canonical forms of reduced dimensions constitute a powerful tool for discriminating between KSs.
0804.2696
Shuhei Mano
Shuhei Mano
Duality, Ancestral and Diffusion Processes in Models with Selection
36 pages, 5 figures; minor correction, figures added
Theor. Popul. Biol. 75 (2009) 164-175
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ancestral selection graph in population genetics was introduced by Krone & Neuhauser (1997) as an analogue of the coalescent genealogy of a sample of genes from a neutrally evolving population. The number of particles in this graph, followed backwards in time, is a birth and death process with quadratic death and linear birth rates. In this paper an explicit form of the probability distribution of the number of particles is obtained by using the density of the allele frequency in the corresponding diffusion model obtained by Kimura (1955). It is shown that the process of fixation of the allele in the diffusion model corresponds to convergence of the ancestral process to its stationary measure. The time to fixation of the allele conditional on fixation is studied in terms of the ancestral process.
[ { "created": "Wed, 16 Apr 2008 23:12:27 GMT", "version": "v1" }, { "created": "Mon, 2 Feb 2009 05:32:17 GMT", "version": "v2" } ]
2013-04-08
[ [ "Mano", "Shuhei", "" ] ]
The ancestral selection graph in population genetics was introduced by Krone & Neuhauser (1997) as an analogue of the coalescent genealogy of a sample of genes from a neutrally evolving population. The number of particles in this graph, followed backwards in time, is a birth and death process with quadratic death and linear birth rates. In this paper an explicit form of the probability distribution of the number of particles is obtained by using the density of the allele frequency in the corresponding diffusion model obtained by Kimura (1955). It is shown that the process of fixation of the allele in the diffusion model corresponds to convergence of the ancestral process to its stationary measure. The time to fixation of the allele conditional on fixation is studied in terms of the ancestral process.
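As an illustration of such a birth-and-death line-counting process, here is a Gillespie-style sketch in Python. The linear branching rate n*sigma/2 and quadratic coalescence rate n*(n-1)/2 are the standard ancestral-selection-graph scalings and are assumed here; the printed counts are jump-chain visits, not time-weighted occupancies.

import numpy as np

def asg_lines(n0=10, sigma=2.0, t_max=200.0, seed=2):
    rng = np.random.default_rng(seed)
    n, t, visits = n0, 0.0, []
    while t < t_max:
        birth = n * sigma / 2.0          # branching (linear birth)
        death = n * (n - 1) / 2.0        # coalescence (quadratic death)
        total = birth + death
        t += rng.exponential(1.0 / total)
        visits.append(n)
        n += 1 if rng.random() < birth / total else -1
    return np.bincount(visits)[1:]

counts = asg_lines()
print("jump-chain visits to n = 1, 2, 3, ...:", counts[:8])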
0801.4543
Attila Szolnoki
G. Szabo, A. Szolnoki, and I. Borsos
Self-organizing patterns maintained by competing associations in a six-species predator-prey model
6 pages, 8 figures
Phys. Rev. E 77 (2008) 041919
10.1103/PhysRevE.77.041919
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
null
Formation and competition of associations are studied in a six-species ecological model where each species has two predators and two prey. Each site of a square lattice is occupied by an individual belonging to one of the six species. The evolution of the spatial distribution of species is governed by iterated invasions between neighboring predator-prey pairs with species-specific rates and by site exchange between neutral pairs with a probability $X$. This dynamical rule yields the formation of five associations composed of two or three species with proper spatiotemporal patterns. For large $X$ a cyclic dominance can occur between the three two-species associations, whereas one of the two three-species associations prevails in the whole system for low values of $X$ in the final state. Within an intermediate range of $X$ all five associations coexist, due to the fact that cyclic invasions between the two-species associations temporarily reduce their resistance against the invasion of three-species associations.
[ { "created": "Tue, 29 Jan 2008 19:01:07 GMT", "version": "v1" } ]
2008-08-26
[ [ "Szabo", "G.", "" ], [ "Szolnoki", "A.", "" ], [ "Borsos", "I.", "" ] ]
Formation and competition of associations are studied in a six-species ecological model where each species has two predators and two prey. Each site of a square lattice is occupied by an individual belonging to one of the six species. The evolution of the spatial distribution of species is governed by iterated invasions between neighboring predator-prey pairs with species-specific rates and by site exchange between neutral pairs with a probability $X$. This dynamical rule yields the formation of five associations composed of two or three species with proper spatiotemporal patterns. For large $X$ a cyclic dominance can occur between the three two-species associations, whereas one of the two three-species associations prevails in the whole system for low values of $X$ in the final state. Within an intermediate range of $X$ all five associations coexist, due to the fact that cyclic invasions between the two-species associations temporarily reduce their resistance against the invasion of three-species associations.
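A minimal Monte Carlo sketch of this type of lattice model is given below. For concreteness it assumes a cyclic food web in which species i invades its two prey, i+1 and i+2 (mod 6), and exchanges sites with its neutral partner i+3 (mod 6) with probability X; invasion rates are taken as uniform, whereas the paper uses species-specific rates. The sketch shows the update rules, not the phase diagram.

import numpy as np

L, X, steps = 64, 0.05, 200_000
rng = np.random.default_rng(3)
grid = rng.integers(0, 6, size=(L, L))
moves = ((0, 1), (0, -1), (1, 0), (-1, 0))

for _ in range(steps):
    x, y = rng.integers(0, L), rng.integers(0, L)
    dx, dy = moves[rng.integers(4)]
    nx, ny = (x + dx) % L, (y + dy) % L
    a, b = grid[x, y], grid[nx, ny]
    if (b - a) % 6 in (1, 2):                       # a preys on b: invasion
        grid[nx, ny] = a
    elif (a - b) % 6 in (1, 2):                     # b preys on a: invasion
        grid[x, y] = b
    elif (b - a) % 6 == 3 and rng.random() < X:     # neutral pair: site exchange
        grid[x, y], grid[nx, ny] = b, a

print("species abundances:", np.bincount(grid.ravel(), minlength=6))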
0809.1138
Alex Feigel
Alexander Feigel, Avraham Englander and Assaf Engel
Derivation of evolutionary payoffs from observable behavior
9 pages, 3 figures
null
null
null
q-bio.PE cs.GT physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interpretation of animal behavior, especially as cooperative or selfish, is a challenge for evolutionary theory. The strategy adopted in a competition should follow from the corresponding Darwinian payoffs for the available behavioral options. The payoffs and decision making processes, however, are difficult to observe and quantify. Here we present a general method for the derivation of evolutionary payoffs from observable statistics of interactions. The method is applied to combat of male bowl and doily spiders, to predator inspection by sticklebacks and to territorial defense by lions, demonstrating animal behavior as a new type of game theoretical equilibrium. The games animals play may be derived unequivocally from their observable behavior; the reconstruction, however, can be subject to fundamental limitations due to our inability to observe all information exchange mechanisms (communication).
[ { "created": "Sat, 6 Sep 2008 06:43:57 GMT", "version": "v1" } ]
2008-09-09
[ [ "Feigel", "Alexander", "" ], [ "Englander", "Avraham", "" ], [ "Engel", "Assaf", "" ] ]
Interpretation of animal behavior, especially as cooperative or selfish, is a challenge for evolutionary theory. The strategy adopted in a competition should follow from the corresponding Darwinian payoffs for the available behavioral options. The payoffs and decision making processes, however, are difficult to observe and quantify. Here we present a general method for the derivation of evolutionary payoffs from observable statistics of interactions. The method is applied to combat of male bowl and doily spiders, to predator inspection by sticklebacks and to territorial defense by lions, demonstrating animal behavior as a new type of game theoretical equilibrium. The games animals play may be derived unequivocally from their observable behavior; the reconstruction, however, can be subject to fundamental limitations due to our inability to observe all information exchange mechanisms (communication).
1509.03516
Yuri Shestopaloff
Yuri K. Shestopaloff
Method for finding metabolic properties based on the general growth law. Liver examples. A General framework for biological modeling
20 pages, 6 figures, 4 tables
PLoS ONE, 2014, 9(6): e99836
10.1371/journal.pone.0099836
null
q-bio.TO q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a method for finding metabolic parameters of cells, organs and whole organisms, which is based on the previously discovered general growth law. Based on the obtained results and an analysis of available biological models, we propose a general framework for modeling biological phenomena and discuss how it can be used in the Virtual Liver Network project. The foundational idea of the study is that the growth of cells, organs, systems and whole organisms, besides biomolecular machinery, is influenced by biophysical mechanisms acting at different scale levels. In particular, the general growth law uniquely defines the distribution of nutritional resources between maintenance needs and biomass synthesis at each phase of growth and at each scale level. We exemplify the approach by considering the metabolic properties of growing human and dog livers and liver transplants. A procedure for verifying the obtained results is also introduced. We found that the two examined dogs have high metabolic rates, consuming about 0.62 and 1 gram of nutrients per cubic centimeter of liver per day, and verified this using the proposed procedure. We also evaluated the nutrient consumption rate of human livers, determining it to be about 0.088 gram of nutrients per cubic centimeter of liver per day for males, and about 0.098 for females. This noticeable difference can be explained by evolutionary development, which required females to have a greater liver processing capacity to support pregnancy. We also found how much of the nutrient supply goes to biomass synthesis and maintenance at each phase of liver and liver transplant growth. The obtained results demonstrate that the proposed approach can be used for finding metabolic characteristics of cells, organs, and whole organisms, which can further serve as important inputs for many applications in biology (protein expression), biotechnology (synthesis of substances), and medicine.
[ { "created": "Thu, 10 Sep 2015 01:02:25 GMT", "version": "v1" } ]
2015-09-14
[ [ "Shestopaloff", "Yuri K.", "" ] ]
We propose a method for finding metabolic parameters of cells, organs and whole organisms, which is based on the previously discovered general growth law. Based on the obtained results and an analysis of available biological models, we propose a general framework for modeling biological phenomena and discuss how it can be used in the Virtual Liver Network project. The foundational idea of the study is that the growth of cells, organs, systems and whole organisms, besides biomolecular machinery, is influenced by biophysical mechanisms acting at different scale levels. In particular, the general growth law uniquely defines the distribution of nutritional resources between maintenance needs and biomass synthesis at each phase of growth and at each scale level. We exemplify the approach by considering the metabolic properties of growing human and dog livers and liver transplants. A procedure for verifying the obtained results is also introduced. We found that the two examined dogs have high metabolic rates, consuming about 0.62 and 1 gram of nutrients per cubic centimeter of liver per day, and verified this using the proposed procedure. We also evaluated the nutrient consumption rate of human livers, determining it to be about 0.088 gram of nutrients per cubic centimeter of liver per day for males, and about 0.098 for females. This noticeable difference can be explained by evolutionary development, which required females to have a greater liver processing capacity to support pregnancy. We also found how much of the nutrient supply goes to biomass synthesis and maintenance at each phase of liver and liver transplant growth. The obtained results demonstrate that the proposed approach can be used for finding metabolic characteristics of cells, organs, and whole organisms, which can further serve as important inputs for many applications in biology (protein expression), biotechnology (synthesis of substances), and medicine.
1609.02893
Leandro Alonso
Leandro M. Alonso
Emergent computation in simple model of neural activity
Reason for withdrawal: in this draft the study is not sufficiently well-motivated and is incomplete. The draft as it is may be misleading. The network introduced in this draft was studied thoroughly and the results were published elsewhere. Please see https://doi.org/10.1063/1.4984800
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the dynamics of a network consisting of an array of identical cortical units with nearest neighbor interactions under periodic arousal. Each unit consists of two interconnected populations of neurons tuned to a state in which many nonlinear resonances are available. The network is critically balanced due to short-ranged antisymmetric connections between units. For wide ranges of the network parameters, the patterns of activity resemble the dynamics of cellular automata. It is argued that these dynamical states may provide a template in which computation can be implemented.
[ { "created": "Fri, 9 Sep 2016 19:16:34 GMT", "version": "v1" }, { "created": "Sat, 9 Feb 2019 20:45:33 GMT", "version": "v2" } ]
2019-02-12
[ [ "Alonso", "Leandro M.", "" ] ]
We investigate the dynamics of a network consisting of an array of identical cortical units with nearest neighbor interactions under periodic arousal. Each unit consists of two interconnected populations of neurons tuned to a state in which many nonlinear resonances are available. The network is critically balanced due to short-ranged antisymmetric connections between units. For wide ranges of the network parameters, the patterns of activity resemble the dynamics of cellular automata. It is argued that these dynamical states may provide a template in which computation can be implemented.
2207.05197
Rodrigo Cofre
Matthieu Gilson, Enzo Tagliazucchi and Rodrigo Cofre
Entropy production of Multivariate Ornstein-Uhlenbeck processes correlates with consciousness levels in the human brain
null
null
10.1103/PhysRevE.107.024121
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Consciousness is supported by complex patterns of brain activity which are indicative of irreversible non-equilibrium dynamics. While the framework of stochastic thermodynamics has facilitated the understanding of physical systems of this kind, its application to infer the level of consciousness from empirical data remains elusive. We faced this challenge by calculating entropy production in a multivariate Ornstein-Uhlenbeck process fitted to fMRI brain activity recordings. To test this approach, we focused on the transition from wakefulness to deep sleep, revealing a monotonic relationship between entropy production and the level of consciousness. Our results constitute robust signatures of consciousness while also advancing our understanding of the link between consciousness and complexity from the fundamental perspective of statistical physics.
[ { "created": "Mon, 11 Jul 2022 21:27:27 GMT", "version": "v1" }, { "created": "Wed, 25 Jan 2023 16:05:54 GMT", "version": "v2" } ]
2023-03-01
[ [ "Gilson", "Matthieu", "" ], [ "Tagliazucchi", "Enzo", "" ], [ "Cofre", "Rodrigo", "" ] ]
Consciousness is supported by complex patterns of brain activity which are indicative of irreversible non-equilibrium dynamics. While the framework of stochastic thermodynamics has facilitated the understanding of physical systems of this kind, its application to infer the level of consciousness from empirical data remains elusive. We faced this challenge by calculating entropy production in a multivariate Ornstein-Uhlenbeck process fitted to fMRI brain activity recordings. To test this approach, we focused on the transition from wakefulness to deep sleep, revealing a monotonic relationship between entropy production and the level of consciousness. Our results constitute robust signatures of consciousness while also advancing our understanding of the link between consciousness and complexity from the fundamental perspective of statistical physics.
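A sketch of one standard entropy-production formula for a stationary multivariate OU process dx = A x dt + sqrt(2D) dW is given below; whether it matches the paper's estimator in every detail is not asserted, and the drift and diffusion matrices are illustrative rather than fitted to fMRI data. With stationary covariance S solving A S + S A^T + 2D = 0, the irreversible drift is Q x with Q = A + D S^{-1}, and the entropy production rate is tr(Q^T D^{-1} Q S), which vanishes exactly when detailed balance holds.

import numpy as np
from scipy.linalg import inv, solve_continuous_lyapunov

A = np.array([[-1.0, 0.5], [-0.5, -1.0]])     # drift matrix (stable, non-symmetric)
D = np.eye(2)                                  # diffusion matrix

S = solve_continuous_lyapunov(A, -2.0 * D)     # stationary covariance: A S + S A^T = -2 D
Q = A + D @ inv(S)                             # irreversible part of the drift
epr = np.trace(Q.T @ inv(D) @ Q @ S)           # entropy production rate
print(f"entropy production rate = {epr:.4f}")  # zero iff detailed balance holds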
2208.11631
Andrew Murphy
Andrew C. Murphy, Romain Duprat, Theodore D. Satterthwaite, Desmond J. Oathes, Dani S. Bassett
A structurally informed model for modulating functional connectivity
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Functional connectivity (FC) between brain regions tracks symptom severity in many neuropsychiatric disorders. Transcranial magnetic stimulation (TMS) directly alters regional activity and indirectly alters FC. Predicting how FC will change following TMS is difficult, but would allow novel therapies that target FC to improve symptoms. We address this challenge by proposing a predictive model that explains how TMS-induced activation can change the strength of FC. Here, we focus on the FC of the frontoparietal (FPS) and default mode (DMS) systems given the importance of their FC in executive function and affective disorders. We fit this model to neuroimaging data in 29 individuals who received TMS to the frontal cortex and evaluated the FC between the FPS and DMS. For each individual, we measured the TMS-induced change in FC between the FPS and DMS (the FC network), and the structural coupling between the stimulated area and the FPS and DMS (the structural context network (SCN)). We find that TMS-induced FC changes are best predicted when the model accounts for white matter fibers from the stimulated area to the two systems. We find that the correlation between these two networks (structure-function coupling) - and therefore the predictability of the TMS-induced modulation - was highest when the SCN contained a dense core of intraconnected regions, indicating that the stimulated area had ample access to an anatomical module. Further, we found that when the core of the SCN overlapped with the FPS and DMS, we observed the greatest change in the strength of their FC. Broadly, our findings explain how the structural connectivity of a stimulated region modulates TMS-induced changes in the brain's functional network. Efforts to account for such structural connections could improve predictions of TMS response, further informing the development of TMS protocols for clinical translation.
[ { "created": "Wed, 24 Aug 2022 16:06:01 GMT", "version": "v1" } ]
2022-08-25
[ [ "Murphy", "Andrew C.", "" ], [ "Duprat", "Romain", "" ], [ "Satterthwaite", "Theodore D.", "" ], [ "Oathes", "Desmond J.", "" ], [ "Bassett", "Dani S.", "" ] ]
Functional connectivity (FC) between brain regions tracks symptom severity in many neuropsychiatric disorders. Transcranial magnetic stimulation (TMS) directly alters regional activity and indirectly alters FC. Predicting how FC will change following TMS is difficult, but would allow novel therapies that target FC to improve symptoms. We address this challenge by proposing a predictive model that explains how TMS-induced activation can change the strength of FC. Here, we focus on the FC of the frontoparietal (FPS) and default mode (DMS) systems given the importance of their FC in executive function and affective disorders. We fit this model to neuroimaging data in 29 individuals who received TMS to the frontal cortex and evaluated the FC between the FPS and DMS. For each individual, we measured the TMS-induced change in FC between the FPS and DMS (the FC network), and the structural coupling between the stimulated area and the FPS and DMS (the structural context network (SCN)). We find that TMS-induced FC changes are best predicted when the model accounts for white matter fibers from the stimulated area to the two systems. We find that the correlation between these two networks (structure-function coupling) - and therefore the predictability of the TMS-induced modulation - was highest when the SCN contained a dense core of intraconnected regions, indicating that the stimulated area had ample access to an anatomical module. Further, we found that when the core of the SCN overlapped with the FPS and DMS, we observed the greatest change in the strength of their FC. Broadly, our findings explain how the structural connectivity of a stimulated region modulates TMS-induced changes in the brain's functional network. Efforts to account for such structural connections could improve predictions of TMS response, further informing the development of TMS protocols for clinical translation.
2205.08015
Alexander Strang
Christopher Cebra, and Alexander Strang
Similarity Suppresses Cyclicity: Why Similar Competitors Form Hierarchies
37 pages, 9 figures
null
null
null
q-bio.PE math.PR
http://creativecommons.org/licenses/by/4.0/
Competitive systems can exhibit both hierarchical (transitive) and cyclic (intransitive) structures. Despite theoretical interest in cyclic competition, which offers richer dynamics and occupies a larger subset of the space of possible competitive systems, most real-world systems are predominantly transitive. Why? Here, we introduce a generic mechanism which promotes transitivity, even when there is ample room for cyclicity. Consider a competitive system where outcomes are mediated by competitor attributes via a performance function. We demonstrate that, if competitive outcomes depend smoothly on competitor attributes, then similar competitors compete transitively. We quantify the rate of convergence to transitivity given the similarity of the competitors and the smoothness of the performance function. Thus, we prove the adage regarding apples and oranges: similar objects admit well-ordered comparisons; diverse objects may not. To test this theory, we run a series of evolution experiments designed to mimic genetic training algorithms. We consider a series of canonical bimatrix games and an ensemble of random performance functions that demonstrate the generality of our mechanism, even when faced with highly cyclic games. We vary the training parameters controlling the evolution process, and the shape parameters controlling the performance function, to evaluate the robustness of our results. These experiments illustrate that, if competitors evolve to optimize performance, then their traits may converge, leading to transitivity.
[ { "created": "Mon, 16 May 2022 23:09:59 GMT", "version": "v1" } ]
2022-05-18
[ [ "Cebra", "Christopher", "" ], [ "Strang", "Alexander", "" ] ]
Competitive systems can exhibit both hierarchical (transitive) and cyclic (intransitive) structures. Despite theoretical interest in cyclic competition, which offers richer dynamics and occupies a larger subset of the space of possible competitive systems, most real-world systems are predominantly transitive. Why? Here, we introduce a generic mechanism which promotes transitivity, even when there is ample room for cyclicity. Consider a competitive system where outcomes are mediated by competitor attributes via a performance function. We demonstrate that, if competitive outcomes depend smoothly on competitor attributes, then similar competitors compete transitively. We quantify the rate of convergence to transitivity given the similarity of the competitors and the smoothness of the performance function. Thus, we prove the adage regarding apples and oranges: similar objects admit well-ordered comparisons; diverse objects may not. To test this theory, we run a series of evolution experiments designed to mimic genetic training algorithms. We consider a series of canonical bimatrix games and an ensemble of random performance functions that demonstrate the generality of our mechanism, even when faced with highly cyclic games. We vary the training parameters controlling the evolution process, and the shape parameters controlling the performance function, to evaluate the robustness of our results. These experiments illustrate that, if competitors evolve to optimize performance, then their traits may converge, leading to transitivity.
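The transitive/cyclic split itself is easy to demonstrate numerically. The sketch below uses a standard Hodge-style decomposition of an antisymmetric advantage matrix into a rating-difference part and a cyclic remainder; this decomposition is assumed here as an illustration, not taken from the paper.

import numpy as np

rng = np.random.default_rng(4)
G = rng.normal(size=(5, 5))
F = G - G.T                           # random antisymmetric advantage matrix

r = F.mean(axis=1)                    # least-squares ratings
F_trans = r[:, None] - r[None, :]     # transitive (rating-difference) part
F_cyc = F - F_trans                   # cyclic remainder

cyclic_fraction = np.linalg.norm(F_cyc) ** 2 / np.linalg.norm(F) ** 2
print(f"cyclic fraction: {cyclic_fraction:.3f}")

The two parts are orthogonal in the Frobenius norm, so the transitive and cyclic fractions sum to one.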
1604.01674
Sean Robinson
C. Brandon Ogbunugafor and Sean P. Robinson
OFFl models: novel schema for dynamical modeling of biological systems
23 pages, 6 figures. Revised to match published version in PLoS ONE
PLoS ONE 11(6): e0156844 (2016)
10.1371/journal.pone.0156844
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Flow diagrams are a common tool used to help build and interpret models of dynamical systems, often in biological contexts such as consumer-resource models and similar compartmental models. Typically, their usage is intuitive and informal. Here, we present a formalized version of flow diagrams as a kind of weighted directed graph which follow a strict grammar, which translate into a system of ordinary differential equations (ODEs) by a single unambiguous rule, and which have an equivalent representation as a relational database. (We abbreviate this schema of "ODEs and formalized flow diagrams" as OFFl.) Drawing a diagram within this strict grammar encourages a mental discipline on the part of the modeler in which all dynamical processes of a system are thought of as interactions between dynamical species that draw parcels from one or more source species and deposit them into target species according to a set of transformation rules. From these rules, the net rate of change for each species can be derived. The modeling schema can therefore be understood as both an epistemic and practical heuristic for modeling, serving both as an organizational framework for the model building process and as a mechanism for deriving ODEs. All steps of the schema beyond the initial scientific (intuitive, creative) abstraction of natural observations into model variables are algorithmic and easily carried out by a computer, thus enabling the future development of a dedicated software implementation. Such tools would empower the modeler to consider significantly more complex models than practical limitations might have otherwise proscribed, since the modeling framework itself manages that complexity on the modeler's behalf. In this report, we describe the chief motivations for OFFl, outline its implementation, and utilize a range of classic examples from ecology and epidemiology to showcase its features.
[ { "created": "Wed, 6 Apr 2016 16:04:03 GMT", "version": "v1" }, { "created": "Wed, 8 Jun 2016 20:09:27 GMT", "version": "v2" } ]
2016-06-10
[ [ "Ogbunugafor", "C. Brandon", "" ], [ "Robinson", "Sean P.", "" ] ]
Flow diagrams are a common tool used to help build and interpret models of dynamical systems, often in biological contexts such as consumer-resource models and similar compartmental models. Typically, their usage is intuitive and informal. Here, we present a formalized version of flow diagrams as a kind of weighted directed graph which follow a strict grammar, which translate into a system of ordinary differential equations (ODEs) by a single unambiguous rule, and which have an equivalent representation as a relational database. (We abbreviate this schema of "ODEs and formalized flow diagrams" as OFFl.) Drawing a diagram within this strict grammar encourages a mental discipline on the part of the modeler in which all dynamical processes of a system are thought of as interactions between dynamical species that draw parcels from one or more source species and deposit them into target species according to a set of transformation rules. From these rules, the net rate of change for each species can be derived. The modeling schema can therefore be understood as both an epistemic and practical heuristic for modeling, serving both as an organizational framework for the model building process and as a mechanism for deriving ODEs. All steps of the schema beyond the initial scientific (intuitive, creative) abstraction of natural observations into model variables are algorithmic and easily carried out by a computer, thus enabling the future development of a dedicated software implementation. Such tools would empower the modeler to consider significantly more complex models than practical limitations might have otherwise proscribed, since the modeling framework itself manages that complexity on the modeler's behalf. In this report, we describe the chief motivations for OFFl, outline its implementation, and utilize a range of classic examples from ecology and epidemiology to showcase its features.
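The single translation rule (each flow draws from a source species and deposits into a target species at a stated rate) can be sketched directly in code. The toy encoding below, using an SIR model, illustrates the idea only and is not the OFFl software.

# Each flow is (source, target, rate function); the net rate of change of each
# species is the sum of inflows minus outflows -- the single translation rule.
def make_rhs(species, flows):
    def rhs(state, params):
        x = dict(zip(species, state))
        dx = {s: 0.0 for s in species}
        for source, target, rate in flows:
            r = rate(x, params)
            if source is not None:
                dx[source] -= r
            if target is not None:
                dx[target] += r
        return [dx[s] for s in species]
    return rhs

species = ["S", "I", "R"]
flows = [
    ("S", "I", lambda x, p: p["beta"] * x["S"] * x["I"]),  # infection
    ("I", "R", lambda x, p: p["gamma"] * x["I"]),          # recovery
]
rhs = make_rhs(species, flows)
print(rhs([0.99, 0.01, 0.0], {"beta": 0.3, "gamma": 0.1}))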
1211.4263
Diana David-Rus
Diana David-Rus
Mathematical framework of epigenetic DNA methylation in gene body Arabidopsis
23 pp, 5 figures, Appendix file
null
null
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In aiming to explain the establishment, maintenance and stability of the methylation pattern in the gene body of Arabidopsis, we propose here a theoretical framework for understanding how the methylated and unmethylated states of cytosine residues are maintained and transmitted during DNA replication. Rooted in statistical mechanics, the framework built herein is used to explore minimal models of epigenetic inheritance and identify the necessary conditions for stability of methylated/unmethylated states of cytosine over rounds of DNA replication. The models are flexible enough to allow adding new biological concepts and information.
[ { "created": "Sun, 18 Nov 2012 22:20:23 GMT", "version": "v1" } ]
2012-11-20
[ [ "David-Rus", "Diana", "" ] ]
In aiming to explain the establishment, maintenance and stability of the methylation pattern in the gene body of Arabidopsis, we propose here a theoretical framework for understanding how the methylated and unmethylated states of cytosine residues are maintained and transmitted during DNA replication. Rooted in statistical mechanics, the framework built herein is used to explore minimal models of epigenetic inheritance and identify the necessary conditions for stability of methylated/unmethylated states of cytosine over rounds of DNA replication. The models are flexible enough to allow adding new biological concepts and information.
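A minimal model of this kind can be written down in a few lines. The sketch below assumes, as an illustration and not as the paper's model, that at each replication round a methylated cytosine is maintained with probability m and an unmethylated one is methylated de novo with probability d, which gives the stationary methylated fraction d / (1 - m + d).

def methylated_fraction(m=0.95, d=0.05, rounds=100, f0=0.0):
    # Iterate the per-replication update f -> m*f + d*(1 - f).
    f = f0
    for _ in range(rounds):
        f = m * f + d * (1.0 - f)
    return f

print(f"simulated: {methylated_fraction():.4f}")
print(f"analytic:  {0.05 / (1 - 0.95 + 0.05):.4f}")   # d / (1 - m + d)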
2004.04604
Antonio Bianconi Prof.
Antonio Bianconi, Augusto Marcelli, Gaetano Campi, Andrea Perali
Efficiency of Covid-19 Mobile Contact Tracing Containment by Measuring Time Dependent Doubling Time
20 pages, 4 figures, 2 tables
Phys. Biol. 17 065006 (2020)
10.1088/1478-3975/abac51
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Covid-19 epidemic of the novel coronavirus (severe acute respiratory syndrome coronavirus 2, SARS-CoV-2) has been spreading around the world. While different containment policies using non-pharmaceutical interventions have been applied, their efficiency is not known quantitatively. We show that the doubling time Td(t), together with the success factor s (the characteristic time of the exponential growth of Td(t) in the arrested regime), is a reliable tool for early prediction of the time evolution of epidemic spread, and that it provides a quantitative measure of the success of different containment measures. The efficiency of the containment policy Lockdown case Finding mobile Tracing (LFT), using mandatory mobile contact tracing, is much higher than that of the Lockdown Stop and Go (LSG) policy proposed by the Imperial College team in London. A very low s factor was reached by the LFT policy, giving the shortest time width of the dome of the positive-case curve and the lowest number of fatalities. The LFT policy was able to reduce the number of fatalities by a factor of 100 in the first 100 days of the Covid-19 epidemic, to reduce the time width of the Covid-19 pandemic dome by a factor of 2.5, and to rapidly stop new outbreaks, avoiding a second wave.
[ { "created": "Thu, 9 Apr 2020 15:40:45 GMT", "version": "v1" }, { "created": "Sun, 12 Apr 2020 15:41:22 GMT", "version": "v2" }, { "created": "Wed, 29 Jul 2020 04:41:36 GMT", "version": "v3" } ]
2020-10-28
[ [ "Bianconi", "Antonio", "" ], [ "Marcelli", "Augusto", "" ], [ "Campi", "Gaetano", "" ], [ "Perali", "Andrea", "" ] ]
The Covid-19 epidemic of the novel coronavirus (severe acute respiratory syndrome coronavirus 2, SARS-CoV-2) has been spreading around the world. While different containment policies using non-pharmaceutical interventions have been applied, their efficiency is not known quantitatively. We show that the doubling time Td(t), together with the success factor s (the characteristic time of the exponential growth of Td(t) in the arrested regime), is a reliable tool for early prediction of the time evolution of epidemic spread, and that it provides a quantitative measure of the success of different containment measures. The efficiency of the containment policy Lockdown case Finding mobile Tracing (LFT), using mandatory mobile contact tracing, is much higher than that of the Lockdown Stop and Go (LSG) policy proposed by the Imperial College team in London. A very low s factor was reached by the LFT policy, giving the shortest time width of the dome of the positive-case curve and the lowest number of fatalities. The LFT policy was able to reduce the number of fatalities by a factor of 100 in the first 100 days of the Covid-19 epidemic, to reduce the time width of the Covid-19 pandemic dome by a factor of 2.5, and to rapidly stop new outbreaks, avoiding a second wave.
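The quantity Td(t) follows directly from its definition: over a window dt, Td(t) = dt * ln(2) / ln(N(t)/N(t - dt)) for a cumulative case count N(t). A minimal sketch with synthetic counts:

import numpy as np

N = np.array([100., 160., 250., 380., 520., 640., 720., 770., 800.])  # synthetic cumulative cases
dt = 1.0                                      # window, days
Td = dt * np.log(2) / np.log(N[1:] / N[:-1])  # Td(t) = dt*ln2 / ln(N(t)/N(t-dt))
print(np.round(Td, 1))                        # Td grows as containment takes hold

In the arrested regime described above, Td(t) itself grows roughly exponentially, and its growth time constant is the success factor s.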
2302.00855
Zheng Yuan
Zheng Yuan, Yaoyun Zhang, Chuanqi Tan, Wei Wang, Fei Huang, Songfang Huang
Molecular Geometry-aware Transformer for accurate 3D Atomic System modeling
null
null
null
null
q-bio.MN cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Molecular dynamics simulations are important in computational physics, chemistry, materials science, and biology. Machine learning-based methods have shown strong abilities in predicting molecular energies and properties and are much faster than DFT calculations. Molecular energy depends at least on atoms, bonds, bond angles, torsion angles, and nonbonding atom pairs. Previous Transformer models use only atoms as inputs, lacking explicit modeling of the aforementioned factors. To alleviate this limitation, we propose Moleformer, a novel Transformer architecture that takes nodes (atoms) and edges (bonds and nonbonding atom pairs) as inputs and models the interactions among them using a rotationally and translationally invariant geometry-aware spatial encoding. The proposed spatial encoding calculates relative position information, including distances and angles among nodes and edges. We benchmark Moleformer on the OC20 and QM9 datasets: our model achieves state-of-the-art results on the initial-state-to-relaxed-energy prediction task of OC20 and is very competitive on QM9 in predicting quantum chemical properties compared to other Transformer and Graph Neural Network (GNN) methods, which demonstrates the effectiveness of the proposed geometry-aware spatial encoding in Moleformer.
[ { "created": "Thu, 2 Feb 2023 03:49:57 GMT", "version": "v1" } ]
2023-02-03
[ [ "Yuan", "Zheng", "" ], [ "Zhang", "Yaoyun", "" ], [ "Tan", "Chuanqi", "" ], [ "Wang", "Wei", "" ], [ "Huang", "Fei", "" ], [ "Huang", "Songfang", "" ] ]
Molecular dynamics simulations are important in computational physics, chemistry, materials science, and biology. Machine learning-based methods have shown strong abilities in predicting molecular energies and properties and are much faster than DFT calculations. Molecular energy depends at least on atoms, bonds, bond angles, torsion angles, and nonbonding atom pairs. Previous Transformer models use only atoms as inputs, lacking explicit modeling of the aforementioned factors. To alleviate this limitation, we propose Moleformer, a novel Transformer architecture that takes nodes (atoms) and edges (bonds and nonbonding atom pairs) as inputs and models the interactions among them using a rotationally and translationally invariant geometry-aware spatial encoding. The proposed spatial encoding calculates relative position information, including distances and angles among nodes and edges. We benchmark Moleformer on the OC20 and QM9 datasets: our model achieves state-of-the-art results on the initial-state-to-relaxed-energy prediction task of OC20 and is very competitive on QM9 in predicting quantum chemical properties compared to other Transformer and Graph Neural Network (GNN) methods, which demonstrates the effectiveness of the proposed geometry-aware spatial encoding in Moleformer.
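The kind of rotation- and translation-invariant quantities such an encoding can use (pairwise distances and angles) is easy to illustrate. The coordinates below are arbitrary, and this sketch shows the invariant features only, not Moleformer's actual encoding.

import numpy as np

xyz = np.array([[0.0, 0.0, 0.0],   # atom 0
                [1.0, 0.0, 0.0],   # atom 1
                [1.0, 1.0, 0.0]])  # atom 2

dists = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)  # pairwise distances
v1, v2 = xyz[0] - xyz[1], xyz[2] - xyz[1]
cos_a = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
print(dists.round(3))
print(f"angle at atom 1 = {np.degrees(np.arccos(cos_a)):.1f} deg")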
2210.06512
Zhenyu Yang
Zhenyu Yang, Kyle Lafata, Eugene Vaios, Zongsheng Hu, Trey Mullikin, Fang-Fang Yin, Chunhao Wang
Quantifying U-Net Uncertainty in Multi-Parametric MRI-based Glioma Segmentation by Spherical Image Projection
31 pages, 9 figures, 1 table
null
null
null
q-bio.QM cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The projection of planar MRI data onto a spherical surface is equivalent to a nonlinear image transformation that retains global anatomical information. By incorporating this image transformation process in our proposed spherical projection-based U-Net (SPU-Net) segmentation model design, multiple independent segmentation predictions can be obtained from a single MRI. The final segmentation is the average of all available results, and the variation can be visualized as a pixel-wise uncertainty map. An uncertainty score was introduced to evaluate and compare the performance of uncertainty measurements. The proposed SPU-Net model was implemented using data from 369 glioma patients with MP-MRI scans (T1, T1-Ce, T2, and FLAIR). Three SPU-Net models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The SPU-Net model was compared with (1) the classic U-Net model with test-time augmentation (TTA) and (2) linear scaling-based U-Net (LSU-Net) segmentation models in terms of both segmentation accuracy (Dice coefficient, sensitivity, specificity, and accuracy) and segmentation uncertainty (uncertainty map and uncertainty score). The developed SPU-Net model successfully achieved low uncertainty for correct segmentation predictions (e.g., tumor interior or healthy tissue interior) and high uncertainty for incorrect results (e.g., tumor boundaries). This model could allow the identification of missed tumor targets or segmentation errors in U-Net. Quantitatively, the SPU-Net model achieved the highest uncertainty scores for the three segmentation targets (ET/TC/WT): 0.826/0.848/0.936, compared to 0.784/0.643/0.872 using the U-Net with TTA and 0.743/0.702/0.876 with the LSU-Net (scaling factor = 2). The SPU-Net also achieved statistically significantly higher Dice coefficients, underscoring the improved segmentation accuracy.
[ { "created": "Wed, 12 Oct 2022 18:11:49 GMT", "version": "v1" }, { "created": "Tue, 14 Mar 2023 20:16:51 GMT", "version": "v2" }, { "created": "Sun, 13 Aug 2023 03:48:51 GMT", "version": "v3" } ]
2023-08-15
[ [ "Yang", "Zhenyu", "" ], [ "Lafata", "Kyle", "" ], [ "Vaios", "Eugene", "" ], [ "Hu", "Zongsheng", "" ], [ "Mullikin", "Trey", "" ], [ "Yin", "Fang-Fang", "" ], [ "Wang", "Chunhao", "" ] ]
The projection of planar MRI data onto a spherical surface is equivalent to a nonlinear image transformation that retains global anatomical information. By incorporating this image transformation process in our proposed spherical projection-based U-Net (SPU-Net) segmentation model design, multiple independent segmentation predictions can be obtained from a single MRI. The final segmentation is the average of all available results, and the variation can be visualized as a pixel-wise uncertainty map. An uncertainty score was introduced to evaluate and compare the performance of uncertainty measurements. The proposed SPU-Net model was implemented using data from 369 glioma patients with MP-MRI scans (T1, T1-Ce, T2, and FLAIR). Three SPU-Net models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The SPU-Net model was compared with (1) the classic U-Net model with test-time augmentation (TTA) and (2) linear scaling-based U-Net (LSU-Net) segmentation models in terms of both segmentation accuracy (Dice coefficient, sensitivity, specificity, and accuracy) and segmentation uncertainty (uncertainty map and uncertainty score). The developed SPU-Net model successfully achieved low uncertainty for correct segmentation predictions (e.g., tumor interior or healthy tissue interior) and high uncertainty for incorrect results (e.g., tumor boundaries). This model could allow the identification of missed tumor targets or segmentation errors in U-Net. Quantitatively, the SPU-Net model achieved the highest uncertainty scores for the three segmentation targets (ET/TC/WT): 0.826/0.848/0.936, compared to 0.784/0.643/0.872 using the U-Net with TTA and 0.743/0.702/0.876 with the LSU-Net (scaling factor = 2). The SPU-Net also achieved statistically significantly higher Dice coefficients, underscoring the improved segmentation accuracy.
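The ensemble-style uncertainty estimate described above can be sketched generically: average several probability maps and take the pixel-wise standard deviation as the uncertainty map. The maps below are random stand-ins, not SPU-Net outputs, and the 0.5 cutoff is an assumption.

import numpy as np

rng = np.random.default_rng(5)
preds = rng.random((8, 64, 64))       # 8 hypothetical probability maps for one slice
mean_map = preds.mean(axis=0)         # averaged prediction
final_seg = mean_map > 0.5            # thresholded final segmentation (assumed cutoff)
uncertainty = preds.std(axis=0)       # pixel-wise uncertainty map
print(final_seg.shape, f"mean uncertainty = {uncertainty.mean():.3f}")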
1710.11582
Debashish Chowdhury
Soumendu Ghosh, Bhavya Mishra, Shubhadeep Patra, Andreas Schadschneider and Debashish Chowdhury
A biologically inspired two-species exclusion model: effects of RNA polymerase motor traffic on simultaneous DNA replication
10 pages, including 7 figures
null
10.1088/1742-5468/aab021
null
q-bio.SC cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a two-species exclusion model to describe the key features of the conflict between RNA polymerase (RNAP) motor traffic, engaged in the transcription of a segment of DNA, and the concomitant progress of two DNA replication forks on the same DNA segment. One species of particles ($P$) represents RNAP motors while the other ($R$) represents replication forks. Motivated by the biological phenomena that this model is intended to capture, a maximum of only two $R$ particles are allowed to enter the lattice from the two opposite ends, whereas an unrestricted number of $P$ particles constitute a totally asymmetric simple exclusion process (TASEP) in a segment in the middle of the lattice. Consequently, the lattice consists of three segments; the encounters of the $P$ particles with the $R$ particles are confined to the middle segment (segment $2$), whereas only the $R$ particles can occupy the sites in segments $1$ and $3$. The model captures three distinct pathways for resolving both co-directional and head-on collisions between the $P$ and $R$ particles. Using Monte Carlo simulations and heuristic analytical arguments that combine exact results for the TASEP with mean-field approximations, we predict the possible outcomes of the conflict between the traffic of RNAP motors ($P$ particles engaged in transcription) and the replication forks ($R$ particles). The outcomes, of course, depend on the dynamical phase of the TASEP of $P$ particles. In principle, the model can be adapted to the experimental conditions to account for the data quantitatively.
[ { "created": "Tue, 31 Oct 2017 16:57:20 GMT", "version": "v1" }, { "created": "Mon, 22 Jan 2018 11:40:53 GMT", "version": "v2" } ]
2018-04-18
[ [ "Ghosh", "Soumendu", "" ], [ "Mishra", "Bhavya", "" ], [ "Patra", "Shubhadeep", "" ], [ "Schadschneider", "Andreas", "" ], [ "Chowdhury", "Debashish", "" ] ]
We introduce a two-species exclusion model to describe the key features of the conflict between RNA polymerase (RNAP) motor traffic, engaged in the transcription of a segment of DNA, and the concomitant progress of two DNA replication forks on the same DNA segment. One species of particles ($P$) represents RNAP motors while the other ($R$) represents replication forks. Motivated by the biological phenomena that this model is intended to capture, a maximum of only two $R$ particles are allowed to enter the lattice from the two opposite ends, whereas an unrestricted number of $P$ particles constitute a totally asymmetric simple exclusion process (TASEP) in a segment in the middle of the lattice. Consequently, the lattice consists of three segments; the encounters of the $P$ particles with the $R$ particles are confined to the middle segment (segment $2$), whereas only the $R$ particles can occupy the sites in segments $1$ and $3$. The model captures three distinct pathways for resolving both co-directional and head-on collisions between the $P$ and $R$ particles. Using Monte Carlo simulations and heuristic analytical arguments that combine exact results for the TASEP with mean-field approximations, we predict the possible outcomes of the conflict between the traffic of RNAP motors ($P$ particles engaged in transcription) and the replication forks ($R$ particles). The outcomes, of course, depend on the dynamical phase of the TASEP of $P$ particles. In principle, the model can be adapted to the experimental conditions to account for the data quantitatively.
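For orientation, a Monte Carlo sketch of the single-species TASEP backbone of such models is given below; it implements only open-boundary hopping with exclusion, not the two-species rules or the three conflict-resolution pathways of the paper. The rates alpha and beta are illustrative; for alpha < 1/2 and alpha < beta the bulk density approaches alpha (the low-density phase).

import numpy as np

L, alpha, beta, sweeps = 200, 0.3, 0.7, 2000
rng = np.random.default_rng(6)
site = np.zeros(L, dtype=bool)

for _ in range(sweeps * L):
    i = rng.integers(-1, L)                    # -1 codes an entry attempt
    if i == -1:
        if not site[0] and rng.random() < alpha:
            site[0] = True                     # entry at the left boundary
    elif i == L - 1:
        if site[i] and rng.random() < beta:
            site[i] = False                    # exit at the right boundary
    elif site[i] and not site[i + 1]:
        site[i], site[i + 1] = False, True     # bulk hop with exclusion

print(f"bulk density ~ {site[L//4:3*L//4].mean():.3f}")  # ~alpha here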
0902.3132
Claude Pasquier
Vasilis Promponas, Giorgos Palaios, Claude Pasquier, Ioannis Hamodrakas, Stavros Hamodrakas
CoPreTHi: a Web tool which combines transmembrane protein segment prediction methods
null
In Silico Biology 1, 3 (1999) 159-62
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
CoPreTHi is a Java-based web application which combines the results of methods that predict the location of transmembrane segments in protein sequences into a joint prediction histogram. The joint prediction algorithm produces results of clearly superior quality to those of individual prediction schemes. The program is available at http://o2.db.uoa.gr/CoPreTHi
[ { "created": "Wed, 18 Feb 2009 13:23:00 GMT", "version": "v1" } ]
2009-02-19
[ [ "Promponas", "Vasilis", "" ], [ "Palaios", "Giorgos", "" ], [ "Pasquier", "Claude", "" ], [ "Hamodrakas", "Ioannis", "" ], [ "Hamodrakas", "Stavros", "" ] ]
CoPreTHi is a Java-based web application which combines the results of methods that predict the location of transmembrane segments in protein sequences into a joint prediction histogram. The joint prediction algorithm produces results of clearly superior quality to those of individual prediction schemes. The program is available at http://o2.db.uoa.gr/CoPreTHi
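The consensus idea is simple enough to sketch: sum per-residue binary transmembrane calls from several predictors into a joint histogram and threshold it. The calls below are random stand-ins for real method outputs, and the majority-vote threshold is an assumption, not CoPreTHi's actual rule.

import numpy as np

rng = np.random.default_rng(7)
calls = rng.integers(0, 2, size=(5, 120))  # 5 hypothetical methods x 120 residues
histogram = calls.sum(axis=0)              # joint prediction histogram (0..5 votes)
consensus = histogram >= 3                 # majority vote (assumed threshold)
print(histogram[:20])
print(consensus[:20].astype(int))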
1711.09489
Christos Skiadas H
Christos H Skiadas and Charilaos Skiadas
The Health Status of a Population estimated: The History of Health State Curves
11 pages, 13 figures
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Following the recent publication of our book Exploring the Health State of a Population by Dynamic Modeling Methods in the Springer Series on Demographic Methods and Population Analysis (DOI 10.1007/978-3-319-65142-2), we provide this brief presentation of the main findings and improvements regarding the Health State of a Population (see http://www.springer.com/gp/book/9783319651415). Here the brief history of the Health State or Health Status curves for individuals and populations is presented, including the main references and important figures, along with an illustrated poster (see Figure 13 and http://www.smtda.net/demographics2018.html). Although the Survival Curve has been known for as long as life tables have existed, the Health State Curve could be calculated only after the introduction of the advanced stochastic theory of the first exit time. The health state curve is illustrated in several graphs, either as a curve fitted to data or as the product of a large number of stochastic realizations. The Health State, the Life Expectancy and the age at mean zero health state are also estimated. Keywords: Health State and Survival Curves, Health status of a population, First exit time stochastic theory, stochastic simulations of health state, Age at Maximum Curvature, Healthy Life Expectancy and HALE, Standard Deviation, Health State Curves, Maximum human lifespan and others.
[ { "created": "Sun, 26 Nov 2017 23:22:34 GMT", "version": "v1" } ]
2017-11-28
[ [ "Skiadas", "Christos H", "" ], [ "Skiadas", "Charilaos", "" ] ]
Following the recent publication of our book on Exploring the Health State of a Population by Dynamic Modeling Methods in The Springer Series on Demographic Methods and Population Analysis (DOI 10.1007/978-3-319-65142-2), we provide this brief presentation of the main findings and improvements regarding the Health State of a Population. (See at: http://www.springer.com/gp/book/9783319651415). Here the brief history of the Health State or Health Status curves for individuals and populations is presented, including the main references and important figures, along with an illustrated Poster (see Figure 13 and http://www.smtda.net/demographics2018.html). Although the Survival Curve has been known for as long as life tables have existed, the Health State Curve could be calculated only after the introduction of the advanced stochastic theory of the first exit time. The health state curve is illustrated in several graphs, either as a curve fitted to data or as the product of a large number of stochastic realizations. The Health State, the Life Expectancy and the age at mean zero health state are also estimated. Keywords: Health State and Survival Curves, Health status of a population, First exit time stochastic theory, stochastic simulations of health state, Age at Maximum Curvature, Healthy Life Expectancy and HALE, Standard Deviation, Health State Curves, Maximum human lifespan and others.
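A toy stand-in for the first-exit-time picture, assuming the health state is a Brownian motion with negative drift that starts at a positive value and "exits" at zero; all parameters are invented and this is not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_exit_ages(n, drift=-0.012, sigma=0.18, h0=1.0, dt=1.0, tmax=120.0):
    """Health state H(t): Brownian motion with negative drift, started at h0.
    Returns the first age at which H crosses zero (censored at tmax)."""
    ages = np.full(n, tmax)
    h = np.full(n, h0)
    alive = np.ones(n, dtype=bool)
    for step in range(int(tmax / dt)):
        h[alive] += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        died = alive & (h <= 0)
        ages[died] = (step + 1) * dt
        alive &= ~died
    return ages

ages = first_exit_ages(100_000)
print("mean age at exit: %.1f" % ages.mean())
# a crude survival curve: the fraction whose exit age exceeds t
for t in range(0, 121, 20):
    print(t, "%.3f" % (ages > t).mean())
```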
1709.06950
Damian Berger
Damian L. Berger, Lucilla de Arcangelis, and Hans J. Herrmann
Spatial features of synaptic adaptation affecting learning performance
null
Scientific Reports 7, 11016 (2017)
10.1038/s41598-017-11424-5
null
q-bio.NC cond-mat.dis-nn cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies have proposed that the diffusion of messenger molecules, such as monoamines, can mediate the plastic adaptation of synapses in supervised learning of neural networks. Based on these findings, we developed a model for neural learning in which the signal for plastic adaptation is assumed to propagate through the extracellular space. We investigate the conditions allowing learning of Boolean rules in a neural network. Even fully excitatory networks show very good learning performance. Moreover, the investigation of the plastic adaptation features optimizing the performance suggests that learning is very sensitive to the extent of the plastic adaptation and to the spatial range of synaptic connections.
[ { "created": "Wed, 20 Sep 2017 16:18:17 GMT", "version": "v1" } ]
2017-09-21
[ [ "Berger", "Damian L.", "" ], [ "de Arcangelis", "Lucilla", "" ], [ "Herrmann", "Hans J.", "" ] ]
Recent studies have proposed that the diffusion of messenger molecules, such as monoamines, can mediate the plastic adaptation of synapses in supervised learning of neural networks. Based on these findings, we developed a model for neural learning in which the signal for plastic adaptation is assumed to propagate through the extracellular space. We investigate the conditions allowing learning of Boolean rules in a neural network. Even fully excitatory networks show very good learning performance. Moreover, the investigation of the plastic adaptation features optimizing the performance suggests that learning is very sensitive to the extent of the plastic adaptation and to the spatial range of synaptic connections.
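A hedged caricature of spatially limited plastic adaptation: a reward-modulated perceptron learning a Boolean rule (AND), in which each synapse feels the adaptation signal with a strength that decays with an assumed distance from the release site. Positions, decay length and learning rate are invented; the paper's actual network and diffusion model are richer than this.

```python
import numpy as np

rng = np.random.default_rng(7)

# synapse "positions": assumed distances from the adaptation-release site
pos = np.array([0.2, 1.0, 2.5])       # two inputs + bias (invented numbers)
kernel = np.exp(-pos / 2.0)           # spatial extent of the adaptation signal

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
t = np.array([0, 0, 0, 1])            # Boolean AND rule
w = rng.normal(0.0, 0.1, 3)

for epoch in range(200):
    errors = 0
    for x, target in zip(X, t):
        y = float(w @ x > 0)
        if y != target:               # adapt only when the outcome is wrong
            w += 0.1 * kernel * (target - y) * x   # spatially weighted update
            errors += 1
    if errors == 0:
        print("learned AND after", epoch, "epochs; w =", w.round(2))
        break
else:
    print("did not converge in 200 epochs")
```

Because the spatial kernel only rescales each weight's learning rate by a fixed positive factor, the rule still converges on linearly separable tasks; a very short decay length effectively silences distant synapses, which is the sensitivity the abstract points to.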
2201.11418
Sakuntala Chatterjee
Shobhan Dev Mandal and Sakuntala Chatterjee
Effect of receptor cooperativity on methylation dynamics in bacterial chemotaxis with weak and strong gradient
null
Physical Review E 105, 014411 (2022)
10.1103/PhysRevE.105.014411
null
q-bio.CB cond-mat.stat-mech physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
We study the methylation dynamics of the chemoreceptors as an E. coli cell moves around in a spatially varying chemo-attractant environment. We consider attractant concentrations with strong and weak spatial gradients. During the uphill and downhill motion of the cell along the gradient, we measure the temporal variation of the average methylation level of the receptor clusters. Our numerical simulations show that the methylation dynamics depends sensitively on the size of the receptor clusters and also on the strength of the gradient. At short times after the beginning of a run, the methylation dynamics is mainly controlled by short runs, which are generally associated with high receptor activity. This results in demethylation at short times. For intermediate or large times, however, long runs play an important role, and depending on receptor cooperativity or gradient strength, the qualitative variation of methylation can be completely different in this time regime. For a weak gradient, both for uphill and downhill runs, after the initial demethylation we find that the methylation level increases steadily with time for all cluster sizes. Similar qualitative behavior is observed for a strong gradient during uphill runs as well. However, the methylation dynamics for downhill runs in a strong gradient shows a highly non-trivial dependence on the receptor cluster size. We explain this behavior as a result of the interplay between the sensing and adaptation modules of the signaling network.
[ { "created": "Thu, 27 Jan 2022 10:12:14 GMT", "version": "v1" } ]
2022-01-28
[ [ "Mandal", "Shobhan Dev", "" ], [ "Chatterjee", "Sakuntala", "" ] ]
We study the methylation dynamics of the chemoreceptors as an E. coli cell moves around in a spatially varying chemo-attractant environment. We consider attractant concentrations with strong and weak spatial gradients. During the uphill and downhill motion of the cell along the gradient, we measure the temporal variation of the average methylation level of the receptor clusters. Our numerical simulations show that the methylation dynamics depends sensitively on the size of the receptor clusters and also on the strength of the gradient. At short times after the beginning of a run, the methylation dynamics is mainly controlled by short runs, which are generally associated with high receptor activity. This results in demethylation at short times. For intermediate or large times, however, long runs play an important role, and depending on receptor cooperativity or gradient strength, the qualitative variation of methylation can be completely different in this time regime. For a weak gradient, both for uphill and downhill runs, after the initial demethylation we find that the methylation level increases steadily with time for all cluster sizes. Similar qualitative behavior is observed for a strong gradient during uphill runs as well. However, the methylation dynamics for downhill runs in a strong gradient shows a highly non-trivial dependence on the receptor cluster size. We explain this behavior as a result of the interplay between the sensing and adaptation modules of the signaling network.
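A minimal run-and-tumble sketch with an MWC-type receptor cluster whose activity is adapted by methylation, so the roles of cluster size and gradient strength can be toyed with; every parameter value below is illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

N_CLUSTER = 6            # receptor cooperativity (cluster size)
KI, KA = 18.0, 3000.0    # inactive/active dissociation constants (uM)
ALPHA, M0 = 1.7, 1.0     # free energy per methyl group, offset
KR, KB = 0.005, 0.005    # methylation / demethylation rates

def activity(c, m):
    """MWC cluster activity for ligand concentration c and methylation m."""
    f = N_CLUSTER * (ALPHA * (M0 - m) + np.log((1 + c / KI) / (1 + c / KA)))
    return 1.0 / (1.0 + np.exp(f))

def simulate(gradient, T=2000.0, dt=0.1, v=20.0):
    """1D run-and-tumble in c(x) = 100*exp(gradient*x/1000); returns the
    methylation trace. High activity raises the tumble probability."""
    x, m, direction = 0.0, M0, 1.0
    trace = []
    for _ in range(int(T / dt)):
        c = 100.0 * np.exp(gradient * x / 1000.0)
        a = activity(c, m)
        m += (KR * (1 - a) - KB * a) * dt     # adaptation toward a = 1/2
        if rng.random() < dt * a:             # high activity -> tumble
            direction = rng.choice([-1.0, 1.0])
        x += v * direction * dt
        trace.append(m)
    return np.array(trace)

print("final <m>, weak gradient:  %.3f" % simulate(0.05)[-1])
print("final <m>, strong gradient: %.3f" % simulate(0.5)[-1])
```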
1705.03460
Leroy Cronin Prof
Stuart M. Marshall, Alastair R. G. Murray, and Leroy Cronin
A Probabilistic Framework for Quantifying Biological Complexity
21 pages, 7 figures
null
10.1098/rsta.2016.0342
null
q-bio.OT cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One thing that discriminates living things from inanimate matter is their ability to generate similarly complex or non-random architectures in large abundance. From DNA sequences to folded protein structures, living cells, microbial communities and multicellular structures, the material configurations in biology can easily be distinguished from non-living material assemblies. This is also true of the products of complex organisms that can themselves construct complex tools, machines, and artefacts. Whilst these objects are not living, they cannot form randomly, as they are the product of a biological organism and hence are either technological or cultural biosignatures. The problem is that it is not obvious how one might generalise an approach that aims to evaluate complex objects as possible biosignatures. However, if it were possible, such a self-contained approach could be useful to explore the cosmos for new life forms. This would require us to prove rigorously that a given artefact is too complex to have formed by chance. In this paper, we present a new type of complexity measure, Pathway Complexity, that allows us not only to threshold the abiotic-biotic divide, but also to demonstrate a probabilistic approach, based upon object abundance and complexity, which can be used to unambiguously assign complex objects as biosignatures. We hope that this approach not only opens up the search for biosignatures beyond Earth, but also allows us to explore Earth for new types of biology, as well as to recognize when a complex chemical system discovered in the laboratory could be considered alive.
[ { "created": "Tue, 9 May 2017 21:13:25 GMT", "version": "v1" } ]
2018-02-07
[ [ "Marshall", "Stuart M.", "" ], [ "Murray", "Alastair R. G.", "" ], [ "Cronin", "Leroy", "" ] ]
One thing that discriminates living things from inanimate matter is their ability to generate similarly complex or non-random architectures in large abundance. From DNA sequences to folded protein structures, living cells, microbial communities and multicellular structures, the material configurations in biology can easily be distinguished from non-living material assemblies. This is also true of the products of complex organisms that can themselves construct complex tools, machines, and artefacts. Whilst these objects are not living, they cannot form randomly, as they are the product of a biological organism and hence are either technological or cultural biosignatures. The problem is that it is not obvious how one might generalise an approach that aims to evaluate complex objects as possible biosignatures. However, if it were possible, such a self-contained approach could be useful to explore the cosmos for new life forms. This would require us to prove rigorously that a given artefact is too complex to have formed by chance. In this paper, we present a new type of complexity measure, Pathway Complexity, that allows us not only to threshold the abiotic-biotic divide, but also to demonstrate a probabilistic approach, based upon object abundance and complexity, which can be used to unambiguously assign complex objects as biosignatures. We hope that this approach not only opens up the search for biosignatures beyond Earth, but also allows us to explore Earth for new types of biology, as well as to recognize when a complex chemical system discovered in the laboratory could be considered alive.
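A toy, brute-force cousin of Pathway Complexity for strings: count the joining operations needed to assemble a string when a part identical to an already-built part may be reused for free. This is only an upper-bound heuristic invented for illustration, not the authors' measure:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def assembly_steps(s):
    """Upper bound on the number of joining operations needed to build s;
    when a split produces two identical halves, the second one is free."""
    if len(s) <= 1:
        return 0
    best = len(s) - 1                  # worst case: append one symbol at a time
    for i in range(1, len(s)):
        left, right = s[:i], s[i:]
        cost = assembly_steps(left) + 1          # one joining operation
        if right != left:
            cost += assembly_steps(right)        # identical part reused for free
        best = min(best, cost)
    return best

print(assembly_steps("abcabcabcabc"))  # 4: repetitive objects are cheap to build
print(assembly_steps("abcdefghijkl"))  # 11: non-repetitive objects are costly
```

The gap between the two outputs is the intuition behind thresholding: objects that need many irreducible joining steps, yet occur in abundance, are unlikely to have formed by chance.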
1406.5855
Sang Hoon Lee
Sang Hoon Lee, Mark D. Fricker, Mason A. Porter
Mesoscale analyses of fungal networks as an approach for quantifying phenotypic traits
16 pages, 3 figures, 1 table
Journal of Complex Networks 5, 145 (2017)
10.1093/comnet/cnv034
null
q-bio.QM cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the application of mesoscopic response functions (MRFs) to characterize a large set of networks of fungi and slime moulds grown under a wide variety of different experimental treatments, including inter-species competition and attack by fungivores. We construct 'structural networks' by estimating cord conductances (which yield edge weights) from the experimental data, and we construct 'functional networks' by calculating edge weights based on how much nutrient traffic is predicted to occur along each edge. Both types of networks have the same topology, and we compute MRFs for both families of networks to illustrate two different ways of constructing taxonomies to group the networks into clusters of related fungi and slime moulds. Although both network taxonomies generate intuitively sensible groupings of networks across species, treatments and laboratories, we find that clustering using the functional-network measure appears to give groups with lower intra-group variation in species or treatments. We argue that MRFs provide a useful quantitative analysis of network behaviour that can (1) help summarize an expanding set of increasingly complex biological networks and (2) help extract information that captures subtle changes in intra- and inter-specific phenotypic traits that are integral to a mechanistic understanding of fungal behaviour and ecology. As an accompaniment to our paper, we also make a large data set of fungal networks available in the public domain.
[ { "created": "Mon, 23 Jun 2014 10:24:33 GMT", "version": "v1" }, { "created": "Tue, 24 Mar 2015 15:02:33 GMT", "version": "v2" }, { "created": "Tue, 22 Dec 2015 16:46:42 GMT", "version": "v3" }, { "created": "Sun, 1 May 2016 11:35:50 GMT", "version": "v4" }, { "created": "Thu, 2 Mar 2017 04:53:18 GMT", "version": "v5" } ]
2017-03-03
[ [ "Lee", "Sang Hoon", "" ], [ "Fricker", "Mark D.", "" ], [ "Porter", "Mason A.", "" ] ]
We investigate the application of mesoscopic response functions (MRFs) to characterize a large set of networks of fungi and slime moulds grown under a wide variety of different experimental treatments, including inter-species competition and attack by fungivores. We construct 'structural networks' by estimating cord conductances (which yield edge weights) from the experimental data, and we construct 'functional networks' by calculating edge weights based on how much nutrient traffic is predicted to occur along each edge. Both types of networks have the same topology, and we compute MRFs for both families of networks to illustrate two different ways of constructing taxonomies to group the networks into clusters of related fungi and slime moulds. Although both network taxonomies generate intuitively sensible groupings of networks across species, treatments and laboratories, we find that clustering using the functional-network measure appears to give groups with lower intra-group variation in species or treatments. We argue that MRFs provide a useful quantitative analysis of network behaviour that can (1) help summarize an expanding set of increasingly complex biological networks and (2) help extract information that captures subtle changes in intra- and inter-specific phenotypic traits that are integral to a mechanistic understanding of fungal behaviour and ecology. As an accompaniment to our paper, we also make a large data set of fungal networks available in the public domain.
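A small sketch of the "structural network" construction step, assuming each cord is summarized by a radius and a length and that its edge weight is a Poiseuille-type hydraulic conductance g ~ r^4/l; the cord data and the use of networkx are assumptions made for illustration:

```python
import networkx as nx

def structural_network(cords):
    """Build a weighted structural network from cords given as
    (node_u, node_v, radius_r, length_l); edge weight = hydraulic
    conductance of a cylindrical tube, g ~ r**4 / l (Poiseuille flow,
    constant prefactor dropped)."""
    G = nx.Graph()
    for u, v, r, l in cords:
        G.add_edge(u, v, weight=r**4 / l)
    return G

cords = [(0, 1, 0.20, 5.0), (1, 2, 0.10, 3.0),
         (0, 2, 0.15, 4.0), (2, 3, 0.25, 6.0)]   # made-up cord measurements
G = structural_network(cords)

# one simple summary: effective transport distance from the inoculum (node 0),
# using resistance = 1/conductance as the edge length
resistance = {(u, v): 1.0 / d["weight"] for u, v, d in G.edges(data=True)}
nx.set_edge_attributes(G, resistance, "resistance")
print(nx.single_source_dijkstra_path_length(G, 0, weight="resistance"))
```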
q-bio/0703040
Oskar Hallatschek
Oskar Hallatschek, David R. Nelson
Gene surfing
null
Theoretical Population Biology, 73 (1), p. 158, 2008.
10.1016/j.tpb.2007.08.008
null
q-bio.PE
null
Spatially resolved genetic data is increasingly used to reconstruct the migrational history of species. To assist such inference, we study, by means of simulations and analytical methods, the dynamics of neutral gene frequencies in a population undergoing a continual range expansion in one dimension. During such a colonization period, lineages can fix at the wave front by means of a ``surfing'' mechanism [Edmonds C.A., Lillie A.S. & Cavalli-Sforza L.L. (2004) Proc Natl Acad Sci USA 101: 975-979]. We quantify this phenomenon in terms of (i) the spatial distribution of lineages that reach fixation and, closely related, (ii) the continual loss of genetic diversity (heterozygosity) at the wave front, characterizing the approach to fixation. Our simulations show that an effective population size can be assigned to the wave that controls the (observable) gradient in heterozygosity left behind the colonization process. This effective population size is markedly higher in pushed waves than in pulled waves, and increases only sub-linearly with deme size. To explain these and other findings, we develop a versatile analytical approach, based on the physics of reaction-diffusion systems, that yields simple predictions for any deterministic population dynamics.
[ { "created": "Sun, 18 Mar 2007 01:27:09 GMT", "version": "v1" } ]
2008-01-15
[ [ "Hallatschek", "Oskar", "" ], [ "Nelson", "David R.", "" ] ]
Spatially resolved genetic data is increasingly used to reconstruct the migrational history of species. To assist such inference, we study, by means of simulations and analytical methods, the dynamics of neutral gene frequencies in a population undergoing a continual range expansion in one dimension. During such a colonization period, lineages can fix at the wave front by means of a ``surfing'' mechanism [Edmonds C.A., Lillie A.S. & Cavalli-Sforza L.L. (2004) Proc Natl Acad Sci USA 101: 975-979]. We quantify this phenomenon in terms of (i) the spatial distribution of lineages that reach fixation and, closely related, (ii) the continual loss of genetic diversity (heterozygosity) at the wave front, characterizing the approach to fixation. Our simulations show that an effective population size can be assigned to the wave that controls the (observable) gradient in heterozygosity left behind the colonization process. This effective population size is markedly higher in pushed waves than in pulled waves, and increases only sub-linearly with deme size. To explain these and other findings, we develop a versatile analytical approach, based on the physics of reaction-diffusion systems, that yields simple predictions for any deterministic population dynamics.
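A minimal 1D stepping-stone sketch of gene surfing: demes are founded sequentially at an advancing front and all occupied demes drift, so the front heterozygosity H = 2p(1-p) decays as lineages surf to fixation. Deme size, deme number and the founding rule are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def surf(n_demes=300, deme_size=50, p0=0.5):
    """1D range expansion: each generation one new front deme is founded by
    deme_size binomial draws from the deme behind it, then every occupied
    deme resamples (genetic drift). Returns front heterozygosity over time."""
    freq = np.full(n_demes, np.nan)
    freq[0] = p0                            # only deme 0 occupied initially
    front, H = 0, []
    while front < n_demes - 1:
        front += 1                          # founding event at the new front
        freq[front] = rng.binomial(deme_size, freq[front - 1]) / deme_size
        occ = ~np.isnan(freq)               # drift in all occupied demes
        freq[occ] = rng.binomial(deme_size, freq[occ]) / deme_size
        p = freq[front]
        H.append(2 * p * (1 - p))
    return np.array(H)

H = surf()
print("front heterozygosity, early:", H[:5].round(3))
print("front heterozygosity, late: ", H[-5:].round(3))   # decays toward 0
```

Averaging the decay rate of H over many runs yields the kind of effective front population size the abstract discusses.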
1806.00215
So Nakashima
So Nakashima, Yuki Sughiyama, Tetsuya J. Kobayashi
Lineage EM Algorithm for Inferring Latent States from Cellular Lineage Trees
12 pages; Supplementary Information and full-resolution figures are available at bioarxiv (https://doi.org/10.1101/488981 )
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phenotypic variability in a population of cells can work as bet-hedging for the cells under an unpredictably changing environment, a typical example of which is bacterial persistence. To understand the strategy to control such phenomena, it is indispensable to identify the phenotype of each cell and its inheritance. Although recent advancements in microfluidic technology offer us useful lineage data, they are insufficient to directly identify the phenotypes of the cells. An alternative approach is to infer the phenotype from the lineage data by latent-variable estimation. To this end, however, we must resolve a bias problem in inference from lineages, known as survivorship bias. In this work, we clarify how survivorship bias distorts statistical estimation. We then propose a latent-variable estimation algorithm for lineage trees that is free of survivorship bias, based on an expectation-maximization (EM) algorithm, which we call the Lineage EM algorithm (LEM). LEM provides a statistical method, applicable to various kinds of lineage data, for identifying the traits of cells.
[ { "created": "Fri, 1 Jun 2018 06:49:17 GMT", "version": "v1" }, { "created": "Wed, 13 Jun 2018 09:08:02 GMT", "version": "v2" }, { "created": "Mon, 10 Dec 2018 03:44:29 GMT", "version": "v3" }, { "created": "Fri, 29 Nov 2019 09:15:20 GMT", "version": "v4" } ]
2019-12-02
[ [ "Nakashima", "So", "" ], [ "Sughiyama", "Yuki", "" ], [ "Kobayashi", "Tetsuya J.", "" ] ]
Phenotypic variability in a population of cells can work as bet-hedging for the cells under an unpredictably changing environment, a typical example of which is bacterial persistence. To understand the strategy to control such phenomena, it is indispensable to identify the phenotype of each cell and its inheritance. Although recent advancements in microfluidic technology offer us useful lineage data, they are insufficient to directly identify the phenotypes of the cells. An alternative approach is to infer the phenotype from the lineage data by latent-variable estimation. To this end, however, we must resolve a bias problem in inference from lineages, known as survivorship bias. In this work, we clarify how survivorship bias distorts statistical estimation. We then propose a latent-variable estimation algorithm for lineage trees that is free of survivorship bias, based on an expectation-maximization (EM) algorithm, which we call the Lineage EM algorithm (LEM). LEM provides a statistical method, applicable to various kinds of lineage data, for identifying the traits of cells.
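The following toy simulation illustrates the survivorship bias the paper corrects (not the LEM algorithm itself): phenotype switching is symmetric, but the faster-dying phenotype is under-represented among surviving leaves, so a naive leaf average is biased. All rates are made up:

```python
import random

random.seed(3)

def grow(phenotype, depth):
    """Grow a lineage tree: each cell divides into two daughters; slow (0)
    cells survive a division with prob 0.99, fast (1) cells with prob 0.7.
    Daughters inherit the phenotype with prob 0.9. Returns surviving leaves."""
    if depth == 0:
        return [phenotype]
    leaves = []
    for _ in range(2):
        if random.random() < (0.99 if phenotype == 0 else 0.7):
            child = phenotype if random.random() < 0.9 else 1 - phenotype
            leaves += grow(child, depth - 1)
    return leaves

# the switching process alone is symmetric, but death prunes fast lineages,
# so surviving leaves under-represent phenotype 1
leaves = []
for _ in range(2000):
    leaves += grow(random.choice([0, 1]), depth=8)
print("fraction of fast (1) cells among survivors:",
      sum(leaves) / len(leaves))   # well below the naive 0.5
```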
1007.4471
Tsvi Tlusty
Tsvi Tlusty
The physical language of molecular codes: A rate-distortion approach to the evolution and emergence of biological codes
Index Terms--Molecular codes, rate-distortion theory, biological information networks, molecular recognition. http://www.weizmann.ac.il/complex/tlusty/papers/IEEE2009.pdf
Workshop on Biological and Bio-Inspired Information Theory, 43rd Annual Conference on Information Sciences and Systems, March 18-20, 2009 2009 , Page(s): 841 - 846
10.1109/CISS.2009.5054834
null
q-bio.BM cs.IT math.IT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The function of the organism hinges on the performance of its information-processing networks, which convey information via molecular recognition. Many paths within these networks utilize molecular codebooks, such as the genetic code, to translate information written in one class of molecules into another molecular "language". The present paper examines the emergence and evolution of molecular codes in terms of rate-distortion theory and reviews recent results of this approach. We discuss how the biological problem of maximizing the fitness of an organism by optimizing its molecular coding machinery is equivalent to the communication engineering problem of designing an optimal information channel. The fitness of a molecular code takes into account the interplay between the quality of the channel and the cost of resources which the organism needs to invest in its construction and maintenance. We analyze the dynamics of a population of organisms that compete according to the fitness of their codes. The model suggests a generic mechanism for the emergence of molecular codes as a phase transition in an information channel. This mechanism is put into biological context and demonstrated in a simple example.
[ { "created": "Mon, 26 Jul 2010 14:24:05 GMT", "version": "v1" } ]
2010-07-27
[ [ "Tlusty", "Tsvi", "" ] ]
The function of the organism hinges on the performance of its information-processing networks, which convey information via molecular recognition. Many paths within these networks utilize molecular codebooks, such as the genetic code, to translate information written in one class of molecules into another molecular "language". The present paper examines the emergence and evolution of molecular codes in terms of rate-distortion theory and reviews recent results of this approach. We discuss how the biological problem of maximizing the fitness of an organism by optimizing its molecular coding machinery is equivalent to the communication engineering problem of designing an optimal information channel. The fitness of a molecular code takes into account the interplay between the quality of the channel and the cost of resources which the organism needs to invest in its construction and maintenance. We analyze the dynamics of a population of organisms that compete according to the fitness of their codes. The model suggests a generic mechanism for the emergence of molecular codes as a phase transition in an information channel. This mechanism is put into biological context and demonstrated in a simple example.
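Since the paper casts code evolution in rate-distortion terms, here is the standard Blahut-Arimoto iteration for a rate-distortion trade-off on a toy "meaning to molecule" mapping; the source distribution and distortion matrix are invented for the example:

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=500):
    """Standard Blahut-Arimoto iteration for rate-distortion: given a source
    p(x), a distortion matrix d[x, xhat] and a trade-off parameter beta,
    returns the encoder q(xhat|x) and its (rate, distortion) point."""
    m = d.shape[1]
    q_xhat = np.full(m, 1.0 / m)                   # output marginal
    for _ in range(n_iter):
        q = q_xhat[None, :] * np.exp(-beta * d)    # optimal q(xhat|x) given marginal
        q /= q.sum(axis=1, keepdims=True)
        q_xhat = p_x @ q                           # re-estimate the marginal
    D = float(np.sum(p_x[:, None] * q * d))
    ratio = np.where(q > 0, q / q_xhat[None, :], 1.0)
    R = float(np.sum(p_x[:, None] * q * np.log2(ratio)))
    return q, R, D

# toy 'molecular code': four meanings encoded by two molecular symbols,
# distortion 0 for the intended pairing and 1 otherwise (made-up numbers)
p_x = np.array([0.4, 0.3, 0.2, 0.1])
d = np.array([[0., 1.], [0., 1.], [1., 0.], [1., 0.]])
q, R, D = blahut_arimoto(p_x, d, beta=3.0)
print("rate %.3f bits, distortion %.3f" % (R, D))
```

Sweeping beta traces the rate-distortion curve; in the paper's framing, the abrupt change of the optimal encoder as beta grows is the phase-transition-like emergence of a code.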
1605.05685
Eugen Tarnow
Eugen Tarnow
First direct evidence of two stages in free recall and three corresponding estimates of working memory capacity
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I find that exactly two stages can be seen directly in sequential free recall distributions. These distributions show that the first three recalls come from the emptying of working memory, recalls 6 and above come from a second stage, and the 4th and 5th recalls are mixtures of the two. A discontinuity, a rounded step function, is shown to exist in the fitted linear slope of the recall distributions as the recall shifts from the emptying of working memory (positive slope) to the second stage (negative slope). The discontinuity leads to a first estimate of the capacity of working memory at 4-4.5 items. Working memory accounts for the recency effect. The primacy effect comes from the second stage, with a contribution also from working memory for short lists (the first item). The different slopes of the working-memory and second stages, together with their different functional forms, account for the U-shaped serial position curve. The total recall is shown to be a linear combination of the content of working memory and items recalled in the second stage, with 3.0-3.9 items coming from working memory, a second estimate of the capacity of working memory. A third, separate upper limit on the capacity of working memory is found (3.06 items), corresponding to the requirement that the content of working memory cannot exceed the total recall, item by item. This third limit presumably corresponds to the least chunked item. This is the best limit on the capacity of unchunked working memory.
[ { "created": "Wed, 6 Apr 2016 11:17:03 GMT", "version": "v1" } ]
2016-05-19
[ [ "Tarnow", "Eugen", "" ] ]
I find that exactly two stages can be seen directly in sequential free recall distributions. These distributions show that the first three recalls come from the emptying of working memory, recalls 6 and above come from a second stage, and the 4th and 5th recalls are mixtures of the two. A discontinuity, a rounded step function, is shown to exist in the fitted linear slope of the recall distributions as the recall shifts from the emptying of working memory (positive slope) to the second stage (negative slope). The discontinuity leads to a first estimate of the capacity of working memory at 4-4.5 items. Working memory accounts for the recency effect. The primacy effect comes from the second stage, with a contribution also from working memory for short lists (the first item). The different slopes of the working-memory and second stages, together with their different functional forms, account for the U-shaped serial position curve. The total recall is shown to be a linear combination of the content of working memory and items recalled in the second stage, with 3.0-3.9 items coming from working memory, a second estimate of the capacity of working memory. A third, separate upper limit on the capacity of working memory is found (3.06 items), corresponding to the requirement that the content of working memory cannot exceed the total recall, item by item. This third limit presumably corresponds to the least chunked item. This is the best limit on the capacity of unchunked working memory.
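A hedged sketch of fitting a rounded step function (here a logistic) to slope-versus-recall-number data, the kind of discontinuity analysis the abstract describes; the slope values below are fabricated for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def rounded_step(n, s1, s2, n0, width):
    """Logistic interpolation between slope s1 (working memory, early
    recalls) and slope s2 (second stage, late recalls), centered at n0."""
    return s1 + (s2 - s1) / (1.0 + np.exp(-(n - n0) / width))

# made-up fitted slopes of sequential recall distributions vs recall number
recall_no = np.arange(1, 11)
slope = np.array([0.9, 0.8, 0.85, 0.3, -0.2, -0.6, -0.7, -0.65, -0.7, -0.68])

p, _ = curve_fit(rounded_step, recall_no, slope, p0=[0.8, -0.7, 4.5, 0.5])
print("early slope %.2f, late slope %.2f, transition at recall %.1f"
      % (p[0], p[1], p[2]))   # the transition point estimates WM capacity
```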
0908.1685
Tihamer Geyer
Uwe Winter and Tihamer Geyer
Coarse Grained Simulations of a Small Peptide: Effects of Finite Damping and Hydrodynamic Interactions
7 pages, 6 figures, submitted to J Chem Phys
null
10.1063/1.3216573
null
q-bio.BM cond-mat.soft physics.chem-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the coarse-grained Brownian Dynamics simulation method the many solvent molecules are replaced by random thermal kicks and an effective friction acting on the particles of interest. For Brownian Dynamics the friction has to be so strong that the particles' velocities are damped on a timescale much shorter than an integration timestep. Here we show that this conceptual limit can be dropped with an analytic integration of the equations of damped motion. In the resulting Langevin integration scheme our recently proposed approximate form of the hydrodynamic interactions between the particles can be incorporated conveniently, leading to a fast multi-particle propagation scheme which captures more of the short-time and short-range solvent effects than standard BD. Comparing the dynamics of a bead-spring model of a short peptide, we recommend running simulations of small biological molecules with Langevin-type finite damping and including the hydrodynamic interactions.
[ { "created": "Wed, 12 Aug 2009 12:09:45 GMT", "version": "v1" } ]
2015-05-13
[ [ "Winter", "Uwe", "" ], [ "Geyer", "Tihamer", "" ] ]
In the coarse-grained Brownian Dynamics simulation method the many solvent molecules are replaced by random thermal kicks and an effective friction acting on the particles of interest. For Brownian Dynamics the friction has to be so strong that the particles' velocities are damped on a timescale much shorter than an integration timestep. Here we show that this conceptual limit can be dropped with an analytic integration of the equations of damped motion. In the resulting Langevin integration scheme our recently proposed approximate form of the hydrodynamic interactions between the particles can be incorporated conveniently, leading to a fast multi-particle propagation scheme which captures more of the short-time and short-range solvent effects than standard BD. Comparing the dynamics of a bead-spring model of a short peptide, we recommend running simulations of small biological molecules with Langevin-type finite damping and including the hydrodynamic interactions.
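A minimal sketch of the idea of integrating the friction analytically: the velocity is updated with the exact Ornstein-Uhlenbeck solution, so gamma*dt need not be small. This free-particle version omits the paper's hydrodynamic interactions, and the position update is only first order:

```python
import numpy as np

rng = np.random.default_rng(4)

def langevin_step(x, v, force, dt, gamma, kT, m):
    """One Langevin step with the friction integrated analytically (exact
    Ornstein-Uhlenbeck update of the velocity); dt can exceed 1/gamma."""
    c = np.exp(-gamma * dt)
    v = (c * v + (1 - c) * force(x) / (m * gamma)
         + np.sqrt(kT / m * (1 - c**2)) * rng.standard_normal(x.shape))
    x = x + v * dt                      # simple first-order position update
    return x, v

# free particles: check that the velocity variance thermalizes to kT/m
x = np.zeros(10000); v = np.zeros(10000)
for _ in range(2000):
    x, v = langevin_step(x, v, lambda y: 0.0 * y,
                         dt=0.5, gamma=5.0, kT=1.0, m=1.0)
print("<v^2> = %.3f (target kT/m = 1.0)" % (v**2).mean())
```

Note that gamma*dt = 2.5 here, i.e. the velocity is damped well within a single step, which is exactly the regime a naive explicit integrator would get wrong.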
q-bio/0611050
Changbong Hyeon
Changbong Hyeon, Ruxandra I Dima and D. Thirumalai
Pathways and kinetic barriers in mechanical unfolding and refolding of RNA and proteins
33 pages 7 figures
Structure (2006) vol 14. 1633-1645
null
null
q-bio.BM cond-mat.soft
null
Using self-organized polymer models, we predict mechanical unfolding and refolding pathways of ribozymes and the green fluorescent protein (GFP). In agreement with experiments, there are between six and eight unfolding transitions in the Tetrahymena ribozyme. Depending on the loading rate, the number of rips in the force-ramp unfolding of the Azoarcus ribozyme is between two and four. Force-quench refolding of the P4-P6 subdomain of the Tetrahymena ribozyme occurs through a compact intermediate. Subsequent formation of tertiary contacts between helices P5b-P6a and P5a/P5c-P4 leads to the native state. The force-quench refolding pathways agree with ensemble experiments. In the dominant unfolding route, the N-terminal alpha-helix of GFP unravels first, followed by disruption of the N-terminal beta-strand. There is a third intermediate that involves disruption of three other strands. In accord with experiments, the force-quench refolding pathway of GFP is hierarchical, with the rate-limiting step being the closure of the barrel.
[ { "created": "Thu, 16 Nov 2006 22:03:38 GMT", "version": "v1" } ]
2007-05-23
[ [ "Hyeon", "Changbong", "" ], [ "Dima", "Ruxandra I", "" ], [ "Thirumalai", "D.", "" ] ]
Using self-organized polymer models, we predict mechanical unfolding and refolding pathways of ribozymes and the green fluorescent protein (GFP). In agreement with experiments, there are between six and eight unfolding transitions in the Tetrahymena ribozyme. Depending on the loading rate, the number of rips in the force-ramp unfolding of the Azoarcus ribozyme is between two and four. Force-quench refolding of the P4-P6 subdomain of the Tetrahymena ribozyme occurs through a compact intermediate. Subsequent formation of tertiary contacts between helices P5b-P6a and P5a/P5c-P4 leads to the native state. The force-quench refolding pathways agree with ensemble experiments. In the dominant unfolding route, the N-terminal alpha-helix of GFP unravels first, followed by disruption of the N-terminal beta-strand. There is a third intermediate that involves disruption of three other strands. In accord with experiments, the force-quench refolding pathway of GFP is hierarchical, with the rate-limiting step being the closure of the barrel.
1709.02386
Carsten Lemmen
Kaela Slavik, Carsten Lemmen, Wenyan Zhang, Onur Kerimoglu, Knut Klingbeil, Kai W. Wirtz
The large scale impact of offshore wind farm structures on pelagic primary productivity in the southern North Sea
17 pages, 6 figures, re-revised manuscript submitted to Hydrobiologia
null
10.1007/s10750-018-3653-5
null
q-bio.PE
http://creativecommons.org/licenses/by-sa/4.0/
The increasing demand for renewable energy is projected to result in a 40-fold increase in offshore wind electricity in the European Union by 2030. Despite a great number of local impact studies for selected marine populations, the regional ecosystem impacts of offshore wind farm structures are not yet well assessed or understood. Our study investigates whether the accumulation of epifauna, dominated by the filter feeder Mytilus edulis (blue mussel), on turbine structures affects pelagic primary productivity and ecosystem functioning in the southern North Sea. We estimate the anthropogenically increased potential distribution based on the current projections of turbine locations and reported patterns of M. edulis settlement. This distribution is integrated through the Modular Coupling System for Shelves and Coasts into state-of-the-art hydrodynamic and ecosystem models. Our simulations reveal non-negligible potential changes in regional annual primary productivity of up to 8% within the offshore wind farm area, and induced maximal increases of the same magnitude in daily productivity also far from the wind farms. Our setup and modular coupling are effective tools for system-scale studies of other environmental changes arising from large-scale offshore wind farming, such as changes in ocean physics and in the distributions of pelagic top predators.
[ { "created": "Thu, 7 Sep 2017 16:43:34 GMT", "version": "v1" }, { "created": "Fri, 30 Mar 2018 11:59:17 GMT", "version": "v2" }, { "created": "Wed, 9 May 2018 10:58:17 GMT", "version": "v3" } ]
2023-12-14
[ [ "Slavik", "Kaela", "" ], [ "Lemmen", "Carsten", "" ], [ "Zhang", "Wenyan", "" ], [ "Kerimoglu", "Onur", "" ], [ "Klingbeil", "Knut", "" ], [ "Wirtz", "Kai W.", "" ] ]
The increasing demand for renewable energy is projected to result in a 40-fold increase in offshore wind electricity in the European Union by 2030. Despite a great number of local impact studies for selected marine populations, the regional ecosystem impacts of offshore wind farm structures are not yet well assessed or understood. Our study investigates whether the accumulation of epifauna, dominated by the filter feeder Mytilus edulis (blue mussel), on turbine structures affects pelagic primary productivity and ecosystem functioning in the southern North Sea. We estimate the anthropogenically increased potential distribution based on the current projections of turbine locations and reported patterns of M. edulis settlement. This distribution is integrated through the Modular Coupling System for Shelves and Coasts into state-of-the-art hydrodynamic and ecosystem models. Our simulations reveal non-negligible potential changes in regional annual primary productivity of up to 8% within the offshore wind farm area, and induced maximal increases of the same magnitude in daily productivity also far from the wind farms. Our setup and modular coupling are effective tools for system-scale studies of other environmental changes arising from large-scale offshore wind farming, such as changes in ocean physics and in the distributions of pelagic top predators.
1304.2955
Arindam RoyChoudhury
Arindam RoyChoudhury
Change in Recessive Lethal Alleles Frequency in Inbred Populations
9 pages, 3 figures
null
null
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a population practicing consanguineous marriage, rare recessive lethal alleles (RRLA) have higher chances of affecting phenotypes. As inbreeding causes more homozygosity and subsequently more deaths, the loss of individuals with RRLA decreases the frequency of these alleles. Although this phenomenon is well studied in general, here some hitherto unstudied cases are presented. An analytical formula for the RRLA frequency is presented for an infinite monoecious population practicing several different types of inbreeding. In finite dioecious populations, it is found that more severe inbreeding leads to quicker RRLA losses, making the upcoming generations healthier. A population of size 10,000 practicing 30% half-sib marriages loses more than 95% of its RRLA in 100 generations; a population practicing 30% cousin marriages loses about 75% of its RRLA. Our findings also suggest that given enough resources to grow, a small inbred population will be able to rebound while losing the RRLA.
[ { "created": "Wed, 10 Apr 2013 13:39:06 GMT", "version": "v1" } ]
2013-04-11
[ [ "RoyChoudhury", "Arindam", "" ] ]
In a population practicing consanguineous marriage, rare recessive lethal alleles (RRLA) have higher chances of affecting phenotypes. As inbreeding causes more homozygosity and subsequently more deaths, the loss of individuals with RRLA decreases the frequency of these alleles. Although this phenomenon is well studied in general, here some hitherto unstudied cases are presented. An analytical formula for the RRLA frequency is presented for an infinite monoecious population practicing several different types of inbreeding. In finite dioecious populations, it is found that more severe inbreeding leads to quicker RRLA losses, making the upcoming generations healthier. A population of size 10,000 practicing 30% half-sib marriages loses more than 95% of its RRLA in 100 generations; a population practicing 30% cousin marriages loses about 75% of its RRLA. Our findings also suggest that given enough resources to grow, a small inbred population will be able to rebound while losing the RRLA.
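A hedged Wright-Fisher-style sketch of lethal-allele purging under inbreeding, using selfing as the simplest stand-in for consanguineous marriage (the paper treats half-sib and cousin marriages, which purge more slowly); population size and rates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(N=2000, q0=0.01, inbred=0.3, gens=100):
    """Diploid Wright-Fisher population with a recessive lethal allele and a
    fraction `inbred` of selfed matings. Surviving adults carry 0 or 1 lethal
    copies (aa zygotes die); returns the lethal-allele frequency over time."""
    geno = (rng.random(N) < 2 * q0).astype(int)    # carrier (Aa) or not (AA)
    q = [geno.sum() / (2 * N)]
    for _ in range(gens):
        children = np.empty(N, dtype=int)
        n_done = 0
        while n_done < N:
            mom = geno[rng.integers(N)]
            dad = mom if rng.random() < inbred else geno[rng.integers(N)]
            g = rng.binomial(1, mom / 2) + rng.binomial(1, dad / 2)
            if g < 2:                              # lethal homozygotes die
                children[n_done] = g
                n_done += 1
        geno = children
        q.append(geno.sum() / (2 * N))
    return np.array(q)

q = simulate()
print("q0 = %.4f, after 100 generations q = %.4f (%.0f%% of RRLA lost)"
      % (q[0], q[-1], 100 * (1 - q[-1] / max(q[0], 1e-12))))
```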
2101.04532
Wlodzislaw Duch
W{\l}odzis{\l}aw Duch
Experiential Learning Styles and Neurocognitive Phenomics
20 pages, extended version of article published in "Brains and Education: Towards Neurocognitive Phenomics", proceedings of: Learning while we are connected, Vol. 3, Eds. N. Reynolds, M. Webb, M.M. Sys{\l}o, V. Dagiene, pp. 12-23. X World Conference on Computers in Education, 2013; Toru\'n, Poland
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Phenomics is concerned with the detailed description of all aspects of organisms, from their physical foundations at the genetic, molecular and cellular level, to behavioural and psychological traits. Neuropsychiatric phenomics, endorsed by NIMH, provides such a broad perspective for understanding mental disorders. It is clear that the learning sciences also need a similar approach that will integrate efforts to understand cognitive processes from the perspective of brain development, in its temporal, spatial, psychological and social aspects. The brain is a substrate shaped by genetic, epigenetic, cellular and environmental factors, including education, individual experiences and personal history, culture, and social milieu. The learning sciences should thus be built on the foundation of neurocognitive phenomics. A brief review of selected aspects of such an approach is presented, outlining new research directions. Central, peripheral and motor processes in the brain are linked to the inventory of learning styles.
[ { "created": "Tue, 12 Jan 2021 15:10:06 GMT", "version": "v1" } ]
2021-01-13
[ [ "Duch", "Włodzisław", "" ] ]
Phenomics is concerned with the detailed description of all aspects of organisms, from their physical foundations at the genetic, molecular and cellular level, to behavioural and psychological traits. Neuropsychiatric phenomics, endorsed by NIMH, provides such a broad perspective for understanding mental disorders. It is clear that the learning sciences also need a similar approach that will integrate efforts to understand cognitive processes from the perspective of brain development, in its temporal, spatial, psychological and social aspects. The brain is a substrate shaped by genetic, epigenetic, cellular and environmental factors, including education, individual experiences and personal history, culture, and social milieu. The learning sciences should thus be built on the foundation of neurocognitive phenomics. A brief review of selected aspects of such an approach is presented, outlining new research directions. Central, peripheral and motor processes in the brain are linked to the inventory of learning styles.
1611.00730
William Holmes
William R. Holmes, JinSeok Park, Andre Levchenko, Leah Edelstein-Keshet
A mathematical model coupling polarity signaling to cell adhesion explains diverse cell migration patterns
null
null
10.1371/journal.pcbi.1005524
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cells crawling through tissues migrate inside a complex fibrous environment called the extracellular matrix (ECM), which provides signals regulating motility. Here we investigate one such well-known pathway, involving mutually antagonistic signalling molecules (the small GTPases Rac and Rho) that control the protrusion and contraction of the cell edges (lamellipodia). Invasive melanoma cells migrating on topographic surfaces (arrays of posts) coated with adhesive molecules (fibronectin, FN) were observed by Park et al., 2016. The distinct qualitative behaviors they observed included persistent polarity, oscillation between the cell front and back, and random dynamics. To gain insight into the link between intracellular and ECM signaling, we compared experimental observations to a sequence of mathematical models encoding distinct hypotheses. The successful model required several critical factors. (1) Competition of lamellipodia for limited pools of GTPases. (2) Protrusion and contraction of lamellipodia influence ECM signaling. (3) ECM-mediated activation of Rho. A model combining these elements explains all three cellular behaviors and correctly predicts the results of experimental perturbations. This study yields new insight into how the dynamic interactions between intracellular signaling and the cell's environment influence cell behavior.
[ { "created": "Wed, 2 Nov 2016 19:10:01 GMT", "version": "v1" } ]
2017-07-05
[ [ "Holmes", "William R.", "" ], [ "Park", "JinSeok", "" ], [ "Levchenko", "Andre", "" ], [ "Edelstein-Keshet", "Leah", "" ] ]
Cells crawling through tissues migrate inside a complex fibrous environment called the extracellular matrix (ECM), which provides signals regulating motility. Here we investigate one such well-known pathway, involving mutually antagonistic signalling molecules (the small GTPases Rac and Rho) that control the protrusion and contraction of the cell edges (lamellipodia). Invasive melanoma cells migrating on topographic surfaces (arrays of posts) coated with adhesive molecules (fibronectin, FN) were observed by Park et al., 2016. The distinct qualitative behaviors they observed included persistent polarity, oscillation between the cell front and back, and random dynamics. To gain insight into the link between intracellular and ECM signaling, we compared experimental observations to a sequence of mathematical models encoding distinct hypotheses. The successful model required several critical factors. (1) Competition of lamellipodia for limited pools of GTPases. (2) Protrusion and contraction of lamellipodia influence ECM signaling. (3) ECM-mediated activation of Rho. A model combining these elements explains all three cellular behaviors and correctly predicts the results of experimental perturbations. This study yields new insight into how the dynamic interactions between intracellular signaling and the cell's environment influence cell behavior.
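A toy mutual-antagonism ODE for active Rac and Rho — a deliberately stripped-down caricature of the signaling circuit, not the paper's multi-lamellipod model; parameter values are invented to give bistability:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rac_rho(t, y, b=0.2, gamma=1.0, n=4, K=0.5):
    """Minimal mutual-antagonism motif: active Rac (R) and Rho (P) suppress
    each other's activation through decreasing Hill functions."""
    R, P = y
    dR = b + gamma * K**n / (K**n + P**n) - R
    dP = b + gamma * K**n / (K**n + R**n) - P
    return [dR, dP]

# two initial conditions relax to the two polarized, winner-take-all states
for y0 in ([1.0, 0.1], [0.1, 1.0]):
    sol = solve_ivp(rac_rho, (0, 50), y0, rtol=1e-8)
    print("start", y0, "-> steady state R=%.2f P=%.2f"
          % (sol.y[0, -1], sol.y[1, -1]))
```

The bistability of this motif is what makes "persistent polarity" possible; the paper's full model adds the limited GTPase pools and the ECM feedback that convert it into oscillatory or random dynamics.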
1904.09504
Steven Tompson
Steven H. Tompson, Ari E. Kahn, Emily B. Falk, Jean M. Vettel, Danielle S. Bassett
Functional brain network architecture supporting the learning of social networks in humans
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most humans have the good fortune to live their lives embedded in richly structured social groups. Yet, it remains unclear how humans acquire knowledge about these social structures to successfully navigate social relationships. Here we address this knowledge gap with an interdisciplinary neuroimaging study drawing on recent advances in network science and statistical learning. Specifically, we collected BOLD MRI data while participants learned the community structure of both social and non-social networks, in order to examine whether the learning of these two types of networks was differentially associated with functional brain network topology. From the behavioral data in both tasks, we found that learners were sensitive to the community structure of the networks, as evidenced by a slower reaction time on trials transitioning between clusters than on trials transitioning within a cluster. From the neuroimaging data collected during the social network learning task, we observed that the functional connectivity of the hippocampus and temporoparietal junction was significantly greater when transitioning between clusters than when transitioning within a cluster. Furthermore, temporoparietal regions of the default mode were more strongly connected to hippocampus, somatomotor, and visual regions during the social task than during the non-social task. Collectively, our results identify neurophysiological underpinnings of social versus non-social network learning, extending our knowledge about the impact of social context on learning processes. More broadly, this work offers an empirical approach to study the learning of social network structures, which could be fruitfully extended to other participant populations, various graph architectures, and a diversity of social contexts in future studies.
[ { "created": "Sat, 20 Apr 2019 22:04:44 GMT", "version": "v1" } ]
2019-04-23
[ [ "Tompson", "Steven H.", "" ], [ "Kahn", "Ari E.", "" ], [ "Falk", "Emily B.", "" ], [ "Vettel", "Jean M.", "" ], [ "Bassett", "Danielle S.", "" ] ]
Most humans have the good fortune to live their lives embedded in richly structured social groups. Yet, it remains unclear how humans acquire knowledge about these social structures to successfully navigate social relationships. Here we address this knowledge gap with an interdisciplinary neuroimaging study drawing on recent advances in network science and statistical learning. Specifically, we collected BOLD MRI data while participants learned the community structure of both social and non-social networks, in order to examine whether the learning of these two types of networks was differentially associated with functional brain network topology. From the behavioral data in both tasks, we found that learners were sensitive to the community structure of the networks, as evidenced by a slower reaction time on trials transitioning between clusters than on trials transitioning within a cluster. From the neuroimaging data collected during the social network learning task, we observed that the functional connectivity of the hippocampus and temporoparietal junction was significantly greater when transitioning between clusters than when transitioning within a cluster. Furthermore, temporoparietal regions of the default mode were more strongly connected to hippocampus, somatomotor, and visual regions during the social task than during the non-social task. Collectively, our results identify neurophysiological underpinnings of social versus non-social network learning, extending our knowledge about the impact of social context on learning processes. More broadly, this work offers an empirical approach to study the learning of social network structures, which could be fruitfully extended to other participant populations, various graph architectures, and a diversity of social contexts in future studies.
2001.05416
Martin Oheim
Martin Oheim, Adi Salomon, Maia Brunstein
Supercritical angle microscopy and spectroscopy
3 figures
null
10.1016/j.bpj.2020.03.029
null
q-bio.QM physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fluorescence detection, involving either propagating or near-field emission, is widely used in spectroscopy, sensing and microscopy. Total internal reflection fluorescence (TIRF) confines fluorescence excitation by an evanescent (near-) field and is a popular contrast generator for surface-selective fluorescence assays. Its emission counterpart, supercritical angle fluorescence (SAF), is comparatively less established, although it achieves a similar optical sectioning as TIRF does. SAF emerges when a fluorescing molecule is located very close to an interface and its near-field emission couples to the higher-refractive-index medium (n2 > n1) and becomes propagative. Then, most fluorescence is detectable on the side of the higher-index substrate, and a large fraction of this fluorescence is emitted into angles forbidden by Snell's law. The SAF as well as the undercritical angle fluorescence (UAF, far-field emission) components can be collected with microscope objectives having a high enough detection aperture, NA > n1, and be separated in the back focal plane (BFP) by Fourier filtering. The BFP image encodes information about the fluorophore radiation pattern, and it can be analysed to yield precise information about the refractive index of the medium in which the emitters are embedded, the emitters' nanometric distance from the interface and their orientation. A SAF microscope can retrieve this near-field information through wide-field optics in a spatially resolved manner, and this functionality can be added to any existing inverted microscope.
[ { "created": "Wed, 15 Jan 2020 16:34:58 GMT", "version": "v1" } ]
2020-06-24
[ [ "Oheim", "Martin", "" ], [ "Salomon", "Adi", "" ], [ "Brunstein", "Maia", "" ] ]
Fluorescence detection, involving either propagating or near-field emission, is widely used in spectroscopy, sensing and microscopy. Total internal reflection fluorescence (TIRF) confines fluorescence excitation by an evanescent (near-) field and is a popular contrast generator for surface-selective fluorescence assays. Its emission counterpart, supercritical angle fluorescence (SAF), is comparatively less established, although it achieves a similar optical sectioning as TIRF does. SAF emerges when a fluorescing molecule is located very close to an interface and its near-field emission couples to the higher-refractive-index medium (n2 > n1) and becomes propagative. Then, most fluorescence is detectable on the side of the higher-index substrate, and a large fraction of this fluorescence is emitted into angles forbidden by Snell's law. The SAF as well as the undercritical angle fluorescence (UAF, far-field emission) components can be collected with microscope objectives having a high enough detection aperture, NA > n1, and be separated in the back focal plane (BFP) by Fourier filtering. The BFP image encodes information about the fluorophore radiation pattern, and it can be analysed to yield precise information about the refractive index of the medium in which the emitters are embedded, the emitters' nanometric distance from the interface and their orientation. A SAF microscope can retrieve this near-field information through wide-field optics in a spatially resolved manner, and this functionality can be added to any existing inverted microscope.
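A small numerical aside on the geometry involved: the critical angle follows from Snell's law, and for a hypothetical isotropic emitter the fraction of the substrate-side hemisphere lying beyond it is cos(theta_c). Real near-field SAF emission is strongly anisotropic, so this is only a geometric reference point:

```python
import numpy as np

def saf_geometry(n1, n2):
    """Critical angle in the substrate (index n2) for emission originating in
    the sample medium (index n1 < n2), plus, for an isotropic emitter, the
    solid-angle fraction of the substrate-side hemisphere beyond it:
    (1/2pi) * 2pi * integral_{theta_c}^{pi/2} sin(t) dt = cos(theta_c)."""
    theta_c = np.arcsin(n1 / n2)          # Snell: n2 * sin(theta_c) = n1
    return np.degrees(theta_c), np.cos(theta_c)

theta_c, frac = saf_geometry(n1=1.33, n2=1.52)    # water sample on glass
print("critical angle %.1f deg; %.0f%% of the hemisphere is supercritical"
      % (theta_c, 100 * frac))
# supercritical rays satisfy n2*sin(theta) > n1, so an objective begins to
# collect SAF once its numerical aperture exceeds n1
```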
1712.03891
Cameron Smith
Cameron A. Smith and Christian A. Yates
Spatially-extended hybrid methods: a review
43 Pages, 13 Figures, 4 Tables
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many biological and physical systems exhibit behaviour at multiple spatial, temporal or population scales. Multiscale processes provide challenges when they are to be simulated using numerical techniques. While coarser methods such as partial differential equations are typically fast to simulate, they lack the individual-level detail that may be required in regions of low concentration or small spatial scale. However, simulating at such an individual level throughout a domain, including in regions where concentrations are high, can be computationally expensive. Spatially-coupled hybrid methods provide a bridge, allowing for multiple representations of the same species in one spatial domain by partitioning space into distinct modelling subdomains. Over the past twenty years, such hybrid methods have risen to prominence, leading to what is now a very active research area across multiple disciplines including chemistry, physics and mathematics. There are three main motivations for undertaking this review. Firstly, we have collated a large number of spatially-extended hybrid methods and presented them in a single coherent document, while comparing and contrasting them, so that anyone with a need for a multi-scale hybrid method will be able to find the most appropriate one for their need. Secondly, we have provided canonical examples with algorithms and accompanying code, serving to demonstrate how these types of methods work in practice. Finally, we have presented papers that employ these methods on real biological and physical problems, demonstrating their utility. We also consider some open research questions in the area of hybrid method development and the future directions for the field.
[ { "created": "Mon, 11 Dec 2017 17:08:06 GMT", "version": "v1" }, { "created": "Fri, 9 Feb 2018 16:02:05 GMT", "version": "v2" } ]
2018-02-12
[ [ "Smith", "Cameron A.", "" ], [ "Yates", "Christian A.", "" ] ]
Many biological and physical systems exhibit behaviour at multiple spatial, temporal or population scales. Multiscale processes provide challenges when they are to be simulated using numerical techniques. While coarser methods such as partial differential equations are typically fast to simulate, they lack the individual-level detail that may be required in regions of low concentration or small spatial scale. However, simulating at such an individual level throughout a domain, including in regions where concentrations are high, can be computationally expensive. Spatially-coupled hybrid methods provide a bridge, allowing for multiple representations of the same species in one spatial domain by partitioning space into distinct modelling subdomains. Over the past twenty years, such hybrid methods have risen to prominence, leading to what is now a very active research area across multiple disciplines including chemistry, physics and mathematics. There are three main motivations for undertaking this review. Firstly, we have collated a large number of spatially-extended hybrid methods and presented them in a single coherent document, while comparing and contrasting them, so that anyone with a need for a multi-scale hybrid method will be able to find the most appropriate one for their need. Secondly, we have provided canonical examples with algorithms and accompanying code, serving to demonstrate how these types of methods work in practice. Finally, we have presented papers that employ these methods on real biological and physical problems, demonstrating their utility. We also consider some open research questions in the area of hybrid method development and the future directions for the field.
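A minimal 1D hybrid sketch in the spirit of the methods reviewed: a deterministic (mean-field, PDE-like) description on the left half of a compartment grid coupled to discrete random-walking particles on the right half, with mass converted at the interface. The coupling rule here is the simplest one we could write down, not any specific published scheme:

```python
import numpy as np

rng = np.random.default_rng(6)

# 2K compartments of width h: the left K hold a deterministic mass u (the
# PDE-like regime), the right K hold discrete particles n. Jump rate d = D/h^2.
K, d, dt = 20, 1.0, 0.01
u = np.zeros(K); u[0] = 400.0           # all mass starts at the far left
n = np.zeros(K, dtype=np.int64)
carry = 0.0                             # fractional mass awaiting conversion

for step in range(20000):
    # deterministic side: explicit Euler, zero-flux at the outer (left) wall
    du = np.zeros(K)
    du[1:] += d * (u[:-1] - u[1:])
    du[:-1] += d * (u[1:] - u[:-1])
    u += dt * du
    # stochastic side: each particle hops left/right with probability d*dt
    left = rng.binomial(n, d * dt)
    right = rng.binomial(n - left, d * dt)
    n = n - left - right
    n[:-1] += left[1:]                  # hops toward the interface side
    n[1:] += right[:-1]
    n[-1] += right[-1]                  # reflecting outer wall on the right
    # interface: deterministic mass -> particles (integerized via a carry) ...
    out = d * dt * u[-1]
    u[-1] -= out
    carry += out
    whole = int(carry); carry -= whole
    n[0] += whole
    # ... and particles hopping across the interface -> deterministic mass
    u[-1] += left[0]

print("deterministic %.1f + particles %d + carry %.2f = %.1f (mass conserved)"
      % (u.sum(), n.sum(), carry, u.sum() + n.sum() + carry))
```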
2004.00443
Stephen Edward Moore
Stephen E. Moore and Eric Okyere
Controlling the Transmission Dynamics of COVID-19
13pages, 37 figures
null
null
null
q-bio.PE math.DS math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The outbreak of COVID-19, caused by SARS-CoV-2 in Wuhan and other cities in China in 2019, has become a global pandemic, as declared by the World Health Organization (WHO) in the first quarter of 2020. Delays in diagnosis and limited hospital and treatment resources lead to the rapid spread of COVID-19. In this article, we consider an optimal control COVID-19 transmission model and assess the impact of some control measures that can lead to the reduction of exposed and infectious individuals in the population. We investigate three control strategies for this deadly infectious disease, using personal protection, treatment with early diagnosis, treatment with delayed diagnosis, and spraying to clear virus from the environment as time-dependent control functions in our dynamical model to curb the disease spread.
[ { "created": "Tue, 31 Mar 2020 11:37:40 GMT", "version": "v1" }, { "created": "Thu, 2 Apr 2020 15:35:58 GMT", "version": "v2" } ]
2020-04-03
[ [ "Moore", "Stephen E.", "" ], [ "Okyere", "Eric", "" ] ]
The outbreak of COVID-19, caused by SARS-CoV-2 in Wuhan and other cities in China in 2019, has become a global pandemic, as declared by the World Health Organization (WHO) in the first quarter of 2020. Delays in diagnosis and limited hospital and treatment resources lead to the rapid spread of COVID-19. In this article, we consider an optimal control COVID-19 transmission model and assess the impact of some control measures that can lead to the reduction of exposed and infectious individuals in the population. We investigate three control strategies for this deadly infectious disease, using personal protection, treatment with early diagnosis, treatment with delayed diagnosis, and spraying to clear virus from the environment as time-dependent control functions in our dynamical model to curb the disease spread.
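A hedged sketch of the modeling setup: an SEIR system in which two constant controls stand in for the paper's time-dependent optimal controls — u1 scaling down transmission (personal protection) and u2 adding removal of infectious cases (treatment with early diagnosis). Parameter values are invented:

```python
import numpy as np
from scipy.integrate import solve_ivp

def seir_controlled(t, y, beta, sigma, gamma, u1, u2):
    """SEIR with controls: u1 reduces the transmission rate, u2 adds an
    extra removal rate of infectious individuals."""
    S, E, I, R = y
    new_inf = (1 - u1) * beta * S * I
    return [-new_inf,
            new_inf - sigma * E,
            sigma * E - (gamma + u2) * I,
            (gamma + u2) * I]

y0 = [0.999, 0.0, 0.001, 0.0]
for u1, u2 in [(0.0, 0.0), (0.3, 0.1)]:
    sol = solve_ivp(seir_controlled, (0, 300), y0,
                    args=(0.6, 1 / 5.2, 1 / 7, u1, u2), max_step=1.0)
    print("u1=%.1f u2=%.1f  peak prevalence = %.3f" % (u1, u2, sol.y[2].max()))
```

In the optimal control setting the constants u1, u2 become functions of time chosen to minimize a cost functional, typically via Pontryagin's maximum principle and a forward-backward sweep.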
q-bio/0309010
Prashant Purohit
Prashant K. Purohit (1), Jane' Kondev (2) and Rob Phillips (1) ((1) California Institute of Technology, (2) Brandeis University)
Force steps during viral DNA packaging ?
18 pages, 7 figures, To appear in the Journal of Mechanics and Physics of Solids
null
10.1016/j.jmps.2003.09.016
null
q-bio.BM q-bio.SC
null
Biophysicists and structural biologists increasingly acknowledge the role played by the mechanical properties of macromolecules as a critical element in many biological processes. This change has been brought about, in part, by the advent of single molecule biophysics techniques that have made it possible to exert piconewton forces on key macromolecules and observe their deformations at nanometer length scales, as well as to observe the mechanical action of macromolecules such as molecular motors. This has opened up immense possibilities for a new generation of mechanical investigations that will respond to such measurements in an attempt to develop a coherent theory for the mechanical behavior of macromolecules under conditions where thermal and chemical effects are on an equal footing with deterministic forces. This paper presents an application of the principles of mechanics to the problem of DNA packaging, one of the key events in the life cycle of bacterial viruses, with special reference to the nature of the internal forces that are built up during the DNA packaging process.
[ { "created": "Mon, 22 Sep 2003 21:49:15 GMT", "version": "v1" } ]
2009-11-10
[ [ "Purohit", "Prashant K.", "" ], [ "Kondev", "Jane'", "" ], [ "Phillips", "Rob", "" ] ]
Biophysicists and structural biologists increasingly acknowledge the role played by the mechanical properties of macromolecules as a critical element in many biological processes. This change has been brought about, in part, by the advent of single molecule biophysics techniques that have made it possible to exert piconewton forces on key macromolecules and observe their deformations at nanometer length scales, as well as to observe the mechanical action of macromolecules such as molecular motors. This has opened up immense possibilities for a new generation of mechanical investigations that will respond to such measurements in an attempt to develop a coherent theory for the mechanical behavior of macromolecules under conditions where thermal and chemical effects are on an equal footing with deterministic forces. This paper presents an application of the principles of mechanics to the problem of DNA packaging, one of the key events in the life cycle of bacterial viruses, with special reference to the nature of the internal forces that are built up during the DNA packaging process.
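One standard mechanical ingredient behind such packaging forces is the wormlike-chain bending energy of the spooled DNA. The expression below is only the textbook bending term, included as a hedged illustration; a full treatment of the packaging problem also involves electrostatic and entropic contributions.

```latex
% Bending energy of DNA of contour length L and persistence length \xi_p
% (about 50 nm for double-stranded DNA) bent at local radius R(s):
\[
  E_{\mathrm{bend}} = \frac{\xi_p\, k_B T}{2} \int_0^L \frac{\mathrm{d}s}{R(s)^2},
  \qquad
  \frac{E_{\mathrm{bend}}}{L} = \frac{\xi_p\, k_B T}{2R^2}
  \ \text{ for a uniform spool of radius } R.
\]
```

Because the cost per unit length grows as $1/R^2$, packaging becomes progressively harder as the DNA is forced onto ever smaller spooling radii inside the capsid, consistent with forces that build up during packaging.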
2301.04355
Thomas G\"otz
Philipp Doenges, Thomas G\"otz, Tyll Krueger, Karol Niedzielewski, Viola Priesemann, Moritz Schaefer
SIR-Model for Households
17 pages, 10 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-sa/4.0/
Households play an important role in disease dynamics. Many infections happen there due to close contact, while mitigation measures mainly target transmission between households. Households can therefore be seen as boosting transmission, depending on household size. To study the effect of household size and size distribution, we differentiate between the within-household and between-household reproduction rates. Within households there are essentially no preventive measures, and thus the close contacts can boost the spread. We explicitly incorporate that typically only a fraction of all household members are infected. Thus, viewing the infection of a household of a given size as a splitting process that generates a new, small, fully infected sub-household and a remaining, still susceptible sub-household, we derive a compartmental ODE model for the dynamics of the sub-households. In this setting, the basic reproduction number, as well as the prevalence and the peak of an infection wave in a population with a given household size distribution, can be computed analytically. We compare numerical simulation results of this novel household-ODE model with results from an agent-based model using data for realistic household size distributions of different countries. We find good agreement of both models, showing the catalytic effect of large households on the overall disease dynamics.
[ { "created": "Wed, 11 Jan 2023 08:46:21 GMT", "version": "v1" } ]
2023-01-12
[ [ "Doenges", "Philipp", "" ], [ "Götz", "Thomas", "" ], [ "Krueger", "Tyll", "" ], [ "Niedzielewski", "Karol", "" ], [ "Priesemann", "Viola", "" ], [ "Schaefer", "Moritz", "" ] ]
Households play an important role in disease dynamics. Many infections happen there due to close contact, while mitigation measures mainly target transmission between households. Households can therefore be seen as boosting transmission, depending on household size. To study the effect of household size and size distribution, we differentiate between the within-household and between-household reproduction rates. Within households there are essentially no preventive measures, and thus the close contacts can boost the spread. We explicitly incorporate that typically only a fraction of all household members are infected. Thus, viewing the infection of a household of a given size as a splitting process that generates a new, small, fully infected sub-household and a remaining, still susceptible sub-household, we derive a compartmental ODE model for the dynamics of the sub-households. In this setting, the basic reproduction number, as well as the prevalence and the peak of an infection wave in a population with a given household size distribution, can be computed analytically. We compare numerical simulation results of this novel household-ODE model with results from an agent-based model using data for realistic household size distributions of different countries. We find good agreement of both models, showing the catalytic effect of large households on the overall disease dynamics.
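The following toy agent-based sketch illustrates the within-household versus between-household distinction that the model formalises. The household size distribution, rates, and discrete-time dynamics are illustrative assumptions, not the paper's agent-based model.

```python
import numpy as np

rng = np.random.default_rng(2)

# discrete-time SIR with two transmission routes: a strong within-household
# route and a weaker global (between-household) route
sizes = rng.choice([1, 2, 3, 4, 5], size=2000, p=[.3, .3, .2, .1, .1])
hh = np.repeat(np.arange(sizes.size), sizes)   # household id of each person
n = hh.size
state = np.zeros(n, dtype=int)                 # 0 = S, 1 = I, 2 = R
state[rng.choice(n, 10, replace=False)] = 1
beta_w, beta_b, gamma, dt = 0.5, 0.15, 0.2, 0.5

while (state == 1).any():
    inf = state == 1
    inf_per_hh = np.bincount(hh[inf], minlength=sizes.size)
    lam = beta_w * inf_per_hh[hh] + beta_b * inf.sum() / n  # force of infection
    new_inf = (state == 0) & (rng.random(n) < 1 - np.exp(-lam * dt))
    recov = inf & (rng.random(n) < 1 - np.exp(-gamma * dt))
    state[new_inf] = 1
    state[recov] = 2

print("final attack rate:", round((state == 2).mean(), 3))
```

Varying the size distribution while holding beta_w and beta_b fixed reproduces, qualitatively, the catalytic effect of large households noted in the abstract.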
2208.04771
Ignacio Enrique S\'anchez
Lucio Aliperti Car, Gonzalo Farfa\~nuk, Luciana L. Couso, Alfonso Soler-Bistu\'e, Ariel A. Aptekmann, Ignacio E. S\'anchez
r/K selection of GC content in prokaryotes
15 pages, 6 figures
null
null
null
q-bio.PE q-bio.GN
http://creativecommons.org/licenses/by/4.0/
The GC content of prokaryotic genomes is species-specific and takes values from 16 to 77 percent. There are currently no accepted explanations for this diversity of selection for GC content. We analyzed the known correlations between GC content, genome size and amino acid cost in thousands of prokaryotes, together with new or recently compiled data on cell shape and volume, duplication times, motility, nutrient assimilation, sporulation, defense mechanisms and Gram staining. GC content integrates well with these traits into r/K selection theory when phenotypic plasticity is considered. High GC content prokaryotes are r-strategists with cheaper descendants thanks to a lower average amino acid metabolic cost and a smaller cell volume, colonize unstable environments thanks to flagella and a bacillus form and are generalists in terms of resource opportunism and the ability to defend themselves from various hazards. Low GC content prokaryotes are K-strategists specialized for stable environments that maintain homeostasis via a high-cost outer cell membrane and endospore formation as a response to nutrient deprivation and attain a higher nutrient-to-biomass yield. The lower proteome cost of high GC content prokaryotes is driven by the association between GC-rich codons and cheaper amino acids in the genetic code, while the correlation between GC content and genome size may be partly due to a shift in the functional repertoire of genomes driven by r/K selection. In all, we show that molecular diversity in the GC content of prokaryotes and the corresponding species diversity may be a consequence of ecological r/K selection.
[ { "created": "Tue, 9 Aug 2022 13:29:28 GMT", "version": "v1" }, { "created": "Fri, 16 Dec 2022 12:35:09 GMT", "version": "v2" } ]
2022-12-19
[ [ "Car", "Lucio Aliperti", "" ], [ "Farfañuk", "Gonzalo", "" ], [ "Couso", "Luciana L.", "" ], [ "Soler-Bistué", "Alfonso", "" ], [ "Aptekmann", "Ariel A.", "" ], [ "Sánchez", "Ignacio E.", "" ] ]
The GC content of prokaryotic genomes is species-specific and takes values from 16 to 77 percent. There are currently no accepted explanations for this diversity of selection for GC content. We analyzed the known correlations between GC content, genome size and amino acid cost in thousands of prokaryotes, together with new or recently compiled data on cell shape and volume, duplication times, motility, nutrient assimilation, sporulation, defense mechanisms and Gram staining. GC content integrates well with these traits into r/K selection theory when phenotypic plasticity is considered. High GC content prokaryotes are r-strategists with cheaper descendants thanks to a lower average amino acid metabolic cost and a smaller cell volume, colonize unstable environments thanks to flagella and a bacillus form and are generalists in terms of resource opportunism and the ability to defend themselves from various hazards. Low GC content prokaryotes are K-strategists specialized for stable environments that maintain homeostasis via a high-cost outer cell membrane and endospore formation as a response to nutrient deprivation and attain a higher nutrient-to-biomass yield. The lower proteome cost of high GC content prokaryotes is driven by the association between GC-rich codons and cheaper amino acids in the genetic code, while the correlation between GC content and genome size may be partly due to a shift in the functional repertoire of genomes driven by r/K selection. In all, we show that molecular diversity in the GC content of prokaryotes and the corresponding species diversity may be a consequence of ecological r/K selection.
1504.03746
Willem Wybo
Willem A.M. Wybo, Daniele Boccalini, Benjamin Torben-Nielsen, Marc-Oliver Gewaltig
A sparse reformulation of the Green's function formalism allows efficient simulations of partial differential equations on tree graphs
41 pages, 4 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove that when a class of partial differential equations, generalized from the cable equation, is defined on tree graphs, and when the inputs are restricted to a spatially discrete, well-chosen set of points, the Green's function (GF) formalism can be rewritten to scale as $O(n)$ with the number $n$ of input locations, contrary to the previously reported $O(n^2)$ scaling. We show that the linear scaling can be combined with an expansion of the remaining kernels as sums of exponentials, to allow efficient simulations of equations from the aforementioned class. We furthermore validate this simulation paradigm on models of nerve cells and explore its relation with more traditional finite difference approaches. Situations in which a gain in computational performance is expected are discussed.
[ { "created": "Tue, 14 Apr 2015 23:59:40 GMT", "version": "v1" }, { "created": "Wed, 16 Sep 2015 09:47:52 GMT", "version": "v2" } ]
2015-09-17
[ [ "Wybo", "Willem A. M.", "" ], [ "Boccalini", "Daniele", "" ], [ "Torben-Nielsen", "Benjamin", "" ], [ "Gewaltig", "Marc-Oliver", "" ] ]
We prove that when a class of partial differential equations, generalized from the cable equation, is defined on tree graphs, and when the inputs are restricted to a spatially discrete, well-chosen set of points, the Green's function (GF) formalism can be rewritten to scale as $O(n)$ with the number $n$ of input locations, contrary to the previously reported $O(n^2)$ scaling. We show that the linear scaling can be combined with an expansion of the remaining kernels as sums of exponentials, to allow efficient simulations of equations from the aforementioned class. We furthermore validate this simulation paradigm on models of nerve cells and explore its relation with more traditional finite difference approaches. Situations in which a gain in computational performance is expected are discussed.
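The efficiency claim rests on expanding the remaining kernels as sums of exponentials. As a hedged sketch with a single exponential term and illustrative parameters, the recursion below updates a convolution in O(1) work per time step, which is what turns n input locations into an O(n) total cost instead of re-summing the whole input history.

```python
import numpy as np

dt, tau, a = 0.1, 5.0, 1.0        # step, kernel time constant, kernel weight
decay = np.exp(-dt / tau)

y = 0.0                            # running value of (k * input)(t)
inputs = np.random.default_rng(3).random(1000)
for x in inputs:
    # k(t) = a * exp(-t / tau)  =>  y(t+dt) = y(t) * exp(-dt/tau) + a*x*dt
    y = y * decay + a * x * dt     # O(1) per step, no history re-summation
print(y)
```

A kernel written as a sum of several exponentials simply carries one such running variable per term.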
2305.07482
Braden Brinkman
Jacob T. Crosser and Braden A. W. Brinkman
Applications of information geometry to spiking neural network behavior
36 pages, 14 figures
null
null
null
q-bio.NC cond-mat.dis-nn
http://creativecommons.org/licenses/by/4.0/
The space of possible behaviors that complex biological systems may exhibit is unimaginably vast, and these systems often appear to be stochastic, whether due to variable noisy environmental inputs or intrinsically generated chaos. The brain is a prominent example of a biological system with complex behaviors. The number of possible patterns of spikes emitted by a local brain circuit is combinatorially large, though the brain may not make use of all of them. Understanding which of these possible patterns are actually used by the brain, and how those sets of patterns change as properties of neural circuitry change is a major goal in neuroscience. Recently, tools from information geometry have been used to study embeddings of probabilistic models onto a hierarchy of model manifolds that encode how model behaviors change as a function of their parameters, giving a quantitative notion of "distances" between model behaviors. We apply this method to a network model of excitatory and inhibitory neural populations to understand how the competition between membrane and synaptic response timescales shapes the network's information geometry. The hyperbolic embedding allows us to identify the statistical parameters to which the model behavior is most sensitive, and demonstrate how the ranking of these coordinates changes with the balance of excitation and inhibition in the network.
[ { "created": "Fri, 12 May 2023 13:50:41 GMT", "version": "v1" } ]
2023-05-15
[ [ "Crosser", "Jacob T.", "" ], [ "Brinkman", "Braden A. W.", "" ] ]
The space of possible behaviors that complex biological systems may exhibit is unimaginably vast, and these systems often appear to be stochastic, whether due to variable noisy environmental inputs or intrinsically generated chaos. The brain is a prominent example of a biological system with complex behaviors. The number of possible patterns of spikes emitted by a local brain circuit is combinatorially large, though the brain may not make use of all of them. Understanding which of these possible patterns are actually used by the brain, and how those sets of patterns change as properties of neural circuitry change is a major goal in neuroscience. Recently, tools from information geometry have been used to study embeddings of probabilistic models onto a hierarchy of model manifolds that encode how model behaviors change as a function of their parameters, giving a quantitative notion of "distances" between model behaviors. We apply this method to a network model of excitatory and inhibitory neural populations to understand how the competition between membrane and synaptic response timescales shapes the network's information geometry. The hyperbolic embedding allows us to identify the statistical parameters to which the model behavior is most sensitive, and demonstrate how the ranking of these coordinates changes with the balance of excitation and inhibition in the network.
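As a hedged one-parameter illustration of the geometric object underlying such embeddings, the snippet below estimates the Fisher information, the metric tensor of information geometry, for a Bernoulli spiking unit with sigmoidal rate. This is a toy, not the paper's excitatory-inhibitory network model.

```python
import numpy as np

def fisher(theta, n=200_000, rng=np.random.default_rng(4)):
    # spike probability p(theta) = sigmoid(theta); for this model the
    # score d/dtheta log P(x) equals x - p, so FIM = E[(x - p)^2]
    p = 1 / (1 + np.exp(-theta))
    spikes = rng.random(n) < p
    score = spikes - p
    return np.mean(score**2)

p = 1 / (1 + np.exp(-0.5))
print(fisher(0.5), "vs analytic p(1-p) =", p * (1 - p))
```

Distances on a model manifold are then path lengths measured with this metric; the paper's contribution concerns how such distances behave for full spiking-network models.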
2110.03529
Feng Cheng
Feng Cheng
Using Single-Trial Representational Similarity Analysis with EEG to track semantic similarity in emotional word processing
null
null
null
null
q-bio.NC cs.CL
http://creativecommons.org/licenses/by/4.0/
Electroencephalography (EEG) is a powerful non-invasive brain imaging technique with a high temporal resolution that has seen extensive use across multiple areas of cognitive science research. This thesis adapts representational similarity analysis (RSA) to single-trial EEG datasets and introduces its principles to EEG researchers unfamiliar with multivariate analyses. We have two separate aims: 1. we want to explore the effectiveness of single-trial RSA on EEG datasets; 2. we want to utilize single-trial RSA and computational semantic models to investigate the role of semantic meaning in emotional word processing. We report two primary findings: 1. single-trial RSA on EEG datasets can produce meaningful and interpretable results given a high number of trials and subjects; 2. single-trial RSA reveals that emotional processing in the 500-800 ms time window is associated with additional semantic analysis.
[ { "created": "Mon, 4 Oct 2021 17:17:38 GMT", "version": "v1" } ]
2021-10-08
[ [ "Cheng", "Feng", "" ] ]
Electroencephalography (EEG) is a powerful non-invasive brain imaging technique with a high temporal resolution that has seen extensive use across multiple areas of cognitive science research. This thesis adapts representational similarity analysis (RSA) to single-trial EEG datasets and introduces its principles to EEG researchers unfamiliar with multivariate analyses. We have two separate aims: 1. we want to explore the effectiveness of single-trial RSA on EEG datasets; 2. we want to utilize single-trial RSA and computational semantic models to investigate the role of semantic meaning in emotional word processing. We report two primary findings: 1. single-trial RSA on EEG datasets can produce meaningful and interpretable results given a high number of trials and subjects; 2. single-trial RSA reveals that emotional processing in the 500-800 ms time window is associated with additional semantic analysis.
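A hedged sketch of the core RSA computation: at each time point, build a neural representational dissimilarity matrix (RDM) across trials and correlate it with a model RDM of semantic distances. The data shapes, random stand-in data, and dissimilarity choices are illustrative, not the thesis's pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_trials, n_channels, n_times = 60, 32, 100
eeg = rng.standard_normal((n_trials, n_channels, n_times))  # stand-in EEG
sem = rng.standard_normal((n_trials, 50))        # stand-in word embeddings
model_rdm = pdist(sem, metric="cosine")          # condensed model RDM

rsa = np.empty(n_times)
for t in range(n_times):
    neural_rdm = pdist(eeg[:, :, t], metric="correlation")
    rsa[t] = spearmanr(neural_rdm, model_rdm)[0]  # rank-correlate the RDMs
print(rsa[:5])
```

With real data, peaks in this time course indicate when the neural geometry tracks the model's semantic geometry, e.g. in the 500-800 ms window highlighted above.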
1004.4116
Simon Tavar\'e
A. D. Barbour and Simon Tavar\'e
Assessing molecular variability in cancer genomes
22 pages, 1 figure. Chapter 4 of "Probability and Mathematical Genetics: Papers in Honour of Sir John Kingman" (Editors N.H. Bingham and C.M. Goldie), Cambridge University Press, 2010
null
null
null
q-bio.PE stat.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamics of tumour evolution are not well understood. In this paper we provide a statistical framework for evaluating the molecular variation observed in different parts of a colorectal tumour. A multi-sample version of the Ewens Sampling Formula forms the basis for our modelling of the data, and we provide a simulation procedure for use in obtaining reference distributions for the statistics of interest. We also describe the large-sample asymptotics of the joint distributions of the variation observed in different parts of the tumour. While actual data should be evaluated with reference to the simulation procedure, the asymptotics serve to provide theoretical guidelines, for instance with reference to the choice of possible statistics.
[ { "created": "Tue, 13 Apr 2010 22:07:53 GMT", "version": "v1" } ]
2010-04-26
[ [ "Barbour", "A. D.", "" ], [ "Tavaré", "Simon", "" ] ]
The dynamics of tumour evolution are not well understood. In this paper we provide a statistical framework for evaluating the molecular variation observed in different parts of a colorectal tumour. A multi-sample version of the Ewens Sampling Formula forms the basis for our modelling of the data, and we provide a simulation procedure for use in obtaining reference distributions for the statistics of interest. We also describe the large-sample asymptotics of the joint distributions of the variation observed in different parts of the tumour. While actual data should be evaluated with reference to the simulation procedure, the asymptotics serve to provide theoretical guidelines, for instance with reference to the choice of possible statistics.
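A hedged sketch of the single-sample version of the object at the heart of the paper: allele partitions under the Ewens Sampling Formula can be simulated with the Chinese restaurant process. The mutation parameter value is illustrative, and the paper's multi-sample extension is not reproduced here.

```python
import numpy as np

def ewens_partition(n, theta, rng=np.random.default_rng(6)):
    counts = []                                # sizes of allelic classes
    for i in range(n):
        # a new allele arises with probability theta / (theta + i);
        # otherwise the i-th gene copy joins class j with prob counts[j]/i
        if rng.random() < theta / (theta + i):
            counts.append(1)
        else:
            j = rng.choice(len(counts), p=np.array(counts) / i)
            counts[j] += 1
    return sorted(counts, reverse=True)

print(ewens_partition(30, theta=2.0))
```

Repeating such draws gives the reference distributions against which observed partitions of tumour samples can be compared.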
1410.6049
Dmitry Kobak
Luke Bashford, Dmitry Kobak, Carsten Mehring
Motor skill learning by increasing the movement planning horizon
45 pages, 7 figures
Journal of Neurophysiology 127 (4) 2022, 995-1006
10.1152/jn.00631.2020
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigated motor skill learning using a path tracking task, where human subjects had to track various curved paths as fast as possible, in the absence of any external perturbations. Subjects became better with practice, producing faster and smoother movements even when tracking novel untrained paths. Using a "searchlight" paradigm, where only a short segment of the path ahead of the cursor was shown, we found that subjects with a higher tracking skill took a longer chunk of the future path into account when computing the control policy for the upcoming movement segment. We observed the same effects in a second experiment where tracking speed was fixed and subjects were practicing to increase their accuracy. These findings demonstrate that human subjects increase their planning horizon when acquiring a motor skill.
[ { "created": "Wed, 22 Oct 2014 14:21:05 GMT", "version": "v1" }, { "created": "Sat, 17 Oct 2015 21:14:22 GMT", "version": "v2" } ]
2024-06-06
[ [ "Bashford", "Luke", "" ], [ "Kobak", "Dmitry", "" ], [ "Mehring", "Carsten", "" ] ]
We investigated motor skill learning using a path tracking task, where human subjects had to track various curved paths as fast as possible, in the absence of any external perturbations. Subjects became better with practice, producing faster and smoother movements even when tracking novel untrained paths. Using a "searchlight" paradigm, where only a short segment of the path ahead of the cursor was shown, we found that subjects with a higher tracking skill took a longer chunk of the future path into account when computing the control policy for the upcoming movement segment. We observed the same effects in a second experiment where tracking speed was fixed and subjects were practicing to increase their accuracy. These findings demonstrate that human subjects increase their planning horizon when acquiring a motor skill.
2106.10344
Hamidreza Ramezanpour
Hamidreza Ramezanpour and Mazyar Fallah
The role of temporal cortex in the control of attention
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Attention is an indispensable component of active vision. Contrary to the widely accepted notion that temporal cortex processing primarily focusses on passive object recognition, a series of very recent studies emphasize the role of temporal cortex structures, specifically the superior temporal sulcus (STS) and inferotemporal (IT) cortex, in guiding attention and implementing cognitive programs relevant for behavioral tasks. The goal of this theoretical paper is to advance the hypothesis that the temporal cortex attention network (TAN) entails necessary components to actively participate in attentional control in a flexible task-dependent manner. First, we will briefly discuss the general architecture of the temporal cortex with a focus on the STS and IT cortex of monkeys and their modulation with attention. Then we will review evidence from behavioral and neurophysiological studies that support their guidance of attention in the presence of cognitive control signals. Next, we propose a mechanistic framework for executive control of attention in the temporal cortex. Finally, we summarize the role of temporal cortex in implementing cognitive programs and discuss how they contribute to the dynamic nature of visual attention to ensure flexible behavior.
[ { "created": "Fri, 18 Jun 2021 20:17:15 GMT", "version": "v1" }, { "created": "Wed, 24 Nov 2021 03:08:09 GMT", "version": "v2" } ]
2021-11-25
[ [ "Ramezanpour", "Hamidreza", "" ], [ "Fallah", "Mazyar", "" ] ]
Attention is an indispensable component of active vision. Contrary to the widely accepted notion that temporal cortex processing primarily focusses on passive object recognition, a series of very recent studies emphasize the role of temporal cortex structures, specifically the superior temporal sulcus (STS) and inferotemporal (IT) cortex, in guiding attention and implementing cognitive programs relevant for behavioral tasks. The goal of this theoretical paper is to advance the hypothesis that the temporal cortex attention network (TAN) entails necessary components to actively participate in attentional control in a flexible task-dependent manner. First, we will briefly discuss the general architecture of the temporal cortex with a focus on the STS and IT cortex of monkeys and their modulation with attention. Then we will review evidence from behavioral and neurophysiological studies that support their guidance of attention in the presence of cognitive control signals. Next, we propose a mechanistic framework for executive control of attention in the temporal cortex. Finally, we summarize the role of temporal cortex in implementing cognitive programs and discuss how they contribute to the dynamic nature of visual attention to ensure flexible behavior.
2108.04090
Abby Stylianou
Abby Stylianou, Robert Pless, Nadia Shakoor and Todd Mockler
Classification and Visualization of Genotype x Phenotype Interactions in Biomass Sorghum
ICCV 2021 Workshop on Computer Vision Problems in Plant Phenotyping and Agriculture (CVPPA)
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a simple approach to understanding the relationship between single nucleotide polymorphisms (SNPs), or groups of related SNPs, and the phenotypes they control. The pipeline involves training deep convolutional neural networks (CNNs) to differentiate between images of plants with reference and alternate versions of various SNPs, and then using visualization approaches to highlight what the classification networks key on. We demonstrate the capacity of deep CNNs at performing this classification task, and show the utility of these visualizations on RGB imagery of biomass sorghum captured by the TERRA-REF gantry. We focus on several different genetic markers with known phenotypic expression, and discuss the possibilities of using this approach to uncover genotype x phenotype relationships.
[ { "created": "Mon, 9 Aug 2021 14:39:23 GMT", "version": "v1" } ]
2021-08-10
[ [ "Stylianou", "Abby", "" ], [ "Pless", "Robert", "" ], [ "Shakoor", "Nadia", "" ], [ "Mockler", "Todd", "" ] ]
We introduce a simple approach to understanding the relationship between single nucleotide polymorphisms (SNPs), or groups of related SNPs, and the phenotypes they control. The pipeline involves training deep convolutional neural networks (CNNs) to differentiate between images of plants with reference and alternate versions of various SNPs, and then using visualization approaches to highlight what the classification networks key on. We demonstrate the capacity of deep CNNs at performing this classification task, and show the utility of these visualizations on RGB imagery of biomass sorghum captured by the TERRA-REF gantry. We focus on several different genetic markers with known phenotypic expression, and discuss the possibilities of using this approach to uncover genotype x phenotype relationships.
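A hedged PyTorch sketch of the first stage of the pipeline described above: a small binary CNN that classifies plant images as carrying the reference versus the alternate version of a SNP. The architecture, shapes, and random stand-in data are assumptions; the paper trains its own networks on TERRA-REF imagery.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                          # deliberately tiny CNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 128, 128)            # stand-in RGB crops
labels = torch.randint(0, 2, (8,))              # 0 = reference, 1 = alternate
for _ in range(3):                              # toy training loop
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
print(loss.item())
```

The second stage, visualising what the trained network keys on, would then apply standard saliency or class-activation mapping tools to such a classifier.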
1409.7589
Gon\c{c}alo Prista
Francisco Cabral, Mario Cachao, Rui Jorge Agostinho, Goncalo Prista
Short note on the Sirenia disappearance from the Euro-North African realm during the Cenozoic: a link between climate and Supernovae?
9 pages, 2 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sirenia are marine mammals that colonized the European shores until 2.7 Ma. Their biodiversity evolution follows the climate evolution of the Cenozoic. However, several climate events, as well as the global climate trend of this era, remain poorly understood. When only Earth processes are considered, the climate evolution of the Cenozoic is hard to understand. If the galactic environment is taken into account, some of these climate events, as well as the global climate trend, become more easily understood. The Milky Way, through supernovae, may help explain why the Cenozoic climate evolved as it did. Under the assumption that supernovae can induce changes in Earth's climate on long time scales, the disappearance of Sirenia from Europe would be a side effect of this process.
[ { "created": "Fri, 26 Sep 2014 14:25:27 GMT", "version": "v1" } ]
2014-09-29
[ [ "Cabral", "Francisco", "" ], [ "Cachao", "Mario", "" ], [ "Agostinho", "Rui Jorge", "" ], [ "Prista", "Goncalo", "" ] ]
Sirenia are marine mammals that colonized the European shores until 2.7 Ma. Their biodiversity evolution follows the climate evolution of the Cenozoic. However, several climate events, as well as the global climate trend of this era, remain poorly understood. When only Earth processes are considered, the climate evolution of the Cenozoic is hard to understand. If the galactic environment is taken into account, some of these climate events, as well as the global climate trend, become more easily understood. The Milky Way, through supernovae, may help explain why the Cenozoic climate evolved as it did. Under the assumption that supernovae can induce changes in Earth's climate on long time scales, the disappearance of Sirenia from Europe would be a side effect of this process.
1701.05157
Satohiro Tajima
Satohiro Tajima and Ryota Kanai
Integrated information and dimensionality in continuous attractor dynamics
null
Neurosci Conscious 2017, 3 (1): nix011
10.1093/nc/nix011
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been increasing interest in the integrated information theory (IIT) of consciousness, which hypothesizes that consciousness is integrated information within neuronal dynamics. However, the current formulation of IIT poses both practical and theoretical problems when we aim to empirically test the theory by computing integrated information from neuronal signals. For example, measuring integrated information requires observing all the elements in the considered system at the same time, but this is practically rather difficult. In addition, the interpretation of the spatial partition needed to compute integrated information becomes vague in continuous time-series variables due to a general property of nonlinear dynamical systems known as "embedding." Here, we propose that some aspects of such problems are resolved by considering the topological dimensionality of shared attractor dynamics as an indicator of integrated information in continuous attractor dynamics. In this formulation, the effects of unobserved nodes on the attractor dynamics can be reconstructed using a technique called delay embedding, which allows us to identify the dimensionality of an embedded attractor from partial observations. We propose that the topological dimensionality represents a critical property of integrated information, as it is invariant to general coordinate transformations. We illustrate this new framework with simple examples and discuss how it fits together with recent findings based on neural recordings from awake and anesthetized animals. This topological approach extends the existing notions of IIT to continuous dynamical systems and offers a much-needed framework for testing the theory with experimental data by substantially relaxing the conditions required for evaluating integrated information in real neural systems.
[ { "created": "Wed, 18 Jan 2017 17:34:36 GMT", "version": "v1" }, { "created": "Fri, 20 Jan 2017 18:03:30 GMT", "version": "v2" } ]
2017-07-04
[ [ "Tajima", "Satohiro", "" ], [ "Kanai", "Ryota", "" ] ]
There has been increasing interest in the integrated information theory (IIT) of consciousness, which hypothesizes that consciousness is integrated information within neuronal dynamics. However, the current formulation of IIT poses both practical and theoretical problems when we aim to empirically test the theory by computing integrated information from neuronal signals. For example, measuring integrated information requires observing all the elements in the considered system at the same time, but this is practically rather difficult. In addition, the interpretation of the spatial partition needed to compute integrated information becomes vague in continuous time-series variables due to a general property of nonlinear dynamical systems known as "embedding." Here, we propose that some aspects of such problems are resolved by considering the topological dimensionality of shared attractor dynamics as an indicator of integrated information in continuous attractor dynamics. In this formulation, the effects of unobserved nodes on the attractor dynamics can be reconstructed using a technique called delay embedding, which allows us to identify the dimensionality of an embedded attractor from partial observations. We propose that the topological dimensionality represents a critical property of integrated information, as it is invariant to general coordinate transformations. We illustrate this new framework with simple examples and discuss how it fits together with recent findings based on neural recordings from awake and anesthetized animals. This topological approach extends the existing notions of IIT to continuous dynamical systems and offers a much-needed framework for testing the theory with experimental data by substantially relaxing the conditions required for evaluating integrated information in real neural systems.
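A hedged sketch of the delay-embedding technique the paper leans on: from a single observed time series, delayed copies reconstruct the dimensionality of the underlying attractor. The signal and embedding parameters below are illustrative.

```python
import numpy as np

def delay_embed(x, dim, lag):
    # stack dim delayed copies of x as the columns of an embedding matrix
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

t = np.arange(0, 100, 0.05)
x = np.sin(t) + 0.5 * np.sin(2.3 * t)   # 1-D observation of a torus flow
emb = delay_embed(x, dim=5, lag=10)
# the rank of the centred cloud hints at the attractor's dimensionality:
# two incommensurate oscillators span four embedding directions
print(np.linalg.matrix_rank(emb - emb.mean(0), tol=1e-6))   # prints 4
```

In the paper's framework, such reconstructed dimensionality, rather than the raw count of observed nodes, is the quantity proposed to track integrated information.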
0805.3523
Kevin Lin
Kevin K. Lin, Eric Shea-Brown, Lai-Sang Young
Reliability of Layered Neural Oscillator Networks
null
null
null
null
q-bio.NC cond-mat.dis-nn nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the reliability of large networks of coupled neural oscillators in response to fluctuating stimuli. Reliability means that a stimulus elicits essentially identical responses upon repeated presentations. We view the problem on two scales: neuronal reliability, which concerns the repeatability of spike times of individual neurons embedded within a network, and pooled-response reliability, which addresses the repeatability of the total synaptic output from the network. We find that individual embedded neurons can be reliable or unreliable depending on network conditions, whereas pooled responses of sufficiently large networks are mostly reliable. We also study the effects of noise, and find that some types affect reliability more seriously than others.
[ { "created": "Thu, 22 May 2008 19:12:59 GMT", "version": "v1" } ]
2008-05-23
[ [ "Lin", "Kevin K.", "" ], [ "Shea-Brown", "Eric", "" ], [ "Young", "Lai-Sang", "" ] ]
We study the reliability of large networks of coupled neural oscillators in response to fluctuating stimuli. Reliability means that a stimulus elicits essentially identical responses upon repeated presentations. We view the problem on two scales: neuronal reliability, which concerns the repeatability of spike times of individual neurons embedded within a network, and pooled-response reliability, which addresses the repeatability of the total synaptic output from the network. We find that individual embedded neurons can be reliable or unreliable depending on network conditions, whereas pooled responses of sufficiently large networks are mostly reliable. We also study the effects of noise, and find that some types affect reliability more seriously than others.
2005.03085
Tom Britton
Tom Britton, Frank Ball and Pieter Trapman
The disease-induced herd immunity level for Covid-19 is substantially lower than the classical herd immunity level
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most countries are suffering severely from the ongoing COVID-19 pandemic despite various levels of preventive measures. A common question is if and when a country or region will reach herd immunity $h$. The classical herd immunity level $h_C$ is defined as $h_C=1-1/R_0$, where $R_0$ is the basic reproduction number, for COVID-19 estimated to lie somewhere in the range 2.2-3.5 depending on country and region. It is shown here that the disease-induced herd immunity level $h_D$, after an outbreak has taken place in a country/region with a set of preventive measures put in place, is actually substantially smaller than $h_C$. As an illustration we show that if $R_0=2.5$ in an age-structured community with mixing rates fitted to social activity studies, and also categorizing individuals into three categories: low active, average active and high active, and where preventive measures affect all mixing rates proportionally, then the disease-induced herd immunity level is $h_D=43\%$ rather than $h_C=1-1/2.5=60\%$. Consequently, a lower fraction infected is required for herd immunity to appear. The underlying reason is that when immunity is induced by disease spreading, the proportion infected in groups with high contact rates is greater than that in groups with low contact rates. Consequently, disease-induced immunity is stronger than when immunity is uniformly distributed in the community as in the classical herd immunity level.
[ { "created": "Wed, 6 May 2020 19:22:57 GMT", "version": "v1" } ]
2020-05-08
[ [ "Britton", "Tom", "" ], [ "Ball", "Frank", "" ], [ "Trapman", "Pieter", "" ] ]
Most countries are suffering severely from the ongoing COVID-19 pandemic despite various levels of preventive measures. A common question is if and when a country or region will reach herd immunity $h$. The classical herd immunity level $h_C$ is defined as $h_C=1-1/R_0$, where $R_0$ is the basic reproduction number, for COVID-19 estimated to lie somewhere in the range 2.2-3.5 depending on country and region. It is shown here that the disease-induced herd immunity level $h_D$, after an outbreak has taken place in a country/region with a set of preventive measures put in place, is actually substantially smaller than $h_C$. As an illustration we show that if $R_0=2.5$ in an age-structured community with mixing rates fitted to social activity studies, and also categorizing individuals into three categories: low active, average active and high active, and where preventive measures affect all mixing rates proportionally, then the disease-induced herd immunity level is $h_D=43\%$ rather than $h_C=1-1/2.5=60\%$. Consequently, a lower fraction infected is required for herd immunity to appear. The underlying reason is that when immunity is induced by disease spreading, the proportion infected in groups with high contact rates is greater than that in groups with low contact rates. Consequently, disease-induced immunity is stronger than when immunity is uniformly distributed in the community as in the classical herd immunity level.
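A hedged numerical sketch of the paper's central point, assuming a three-group activity-structured SIR model with proportionate mixing; the group fractions and activity levels are illustrative, not the paper's fitted age-and-activity structure. Because high-activity groups are infected first, the effective reproduction number drops to one before the classical threshold is reached.

```python
import numpy as np
from scipy.optimize import brentq

pi = np.array([0.25, 0.50, 0.25])   # low / average / high activity fractions
a = np.array([0.5, 1.0, 2.0])       # relative contact rates (illustrative)
R0 = 2.5
Ea, Ea2 = pi @ a, pi @ a**2
c = R0 * Ea / Ea2                   # calibrate contacts so the initial R is R0

# with proportionate mixing, susceptibility depletes as s_i = exp(-a_i * theta)
Reff = lambda th: c * (pi * a**2) @ np.exp(-a * th) / Ea
theta = brentq(lambda th: Reff(th) - 1.0, 0.0, 50.0)  # herd-immunity point
h_D = 1 - pi @ np.exp(-a * theta)
print("h_C =", 1 - 1 / R0, " h_D =", round(h_D, 3))   # h_D ~ 0.46 < 0.60
```

Even this crude three-group toy lands well below $h_C = 0.6$, in line with the 43% figure the paper obtains with realistic age and activity structure.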
1908.11436
Lee Friedman
Lee Friedman
Three errors and two problems in a recent paper: gazenet: End-to-end eye-movement event detection with deep neural networks (Zemblys, Niehorster, and Holmqvist, 2019)
11 pages, 6 figures, Commentary on previously published article
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Zemblys et al. \cite{gazeNet} reported on a method for the classification of eye-movements ("gazeNet"). I have found three errors and two problems with that paper, which are explained herein. \underline{\textit{\textbf{Error 1}}} The gazeNet classification method was built assuming that a hand-scored dataset from Lund University was all collected at 500 Hz, but in fact, six of the 34 recording files were actually collected at 200 Hz. Of the six datasets that were used as the training set for the gazeNet algorithm, two were actually collected at 200 Hz. \underline{\textit{\textbf{Problem 1}}} has to do with the fact that even among the 500 Hz data, the inter-timestamp intervals varied widely. \underline{\textit{\textbf{Problem 2}}} is that there are many unusual discontinuities in the saccade trajectories from the Lund University dataset that make it a very poor choice for the construction of an automatic classification method. \underline{\textit{\textbf{Error 2}}} The gazeNet algorithm was trained on the Lund dataset, and then compared to other methods, not trained on this dataset, in terms of performance on this dataset. This is an inherently unfair comparison, and yet nowhere in the gazeNet paper is this unfairness mentioned. \underline{\textit{\textbf{Error 3}}} arises out of the novel event-related agreement analysis employed by the gazeNet authors. Although the authors intended to classify unmatched events as either false positives or false negatives, many are actually being classified as true negatives. True negatives are not errors, and any unmatched event misclassified as a true negative is actually driving kappa higher, whereas unmatched events should be driving kappa lower.
[ { "created": "Thu, 29 Aug 2019 19:51:09 GMT", "version": "v1" }, { "created": "Tue, 10 Sep 2019 19:15:44 GMT", "version": "v2" }, { "created": "Wed, 6 Nov 2019 13:32:01 GMT", "version": "v3" }, { "created": "Fri, 20 Dec 2019 12:01:02 GMT", "version": "v4" } ]
2019-12-23
[ [ "Friedman", "Lee", "" ] ]
Zemblys et al. \cite{gazeNet} reported on a method for the classification of eye-movements ("gazeNet"). I have found three errors and two problems with that paper, which are explained herein. \underline{\textit{\textbf{Error 1}}} The gazeNet classification method was built assuming that a hand-scored dataset from Lund University was all collected at 500 Hz, but in fact, six of the 34 recording files were actually collected at 200 Hz. Of the six datasets that were used as the training set for the gazeNet algorithm, two were actually collected at 200 Hz. \underline{\textit{\textbf{Problem 1}}} has to do with the fact that even among the 500 Hz data, the inter-timestamp intervals varied widely. \underline{\textit{\textbf{Problem 2}}} is that there are many unusual discontinuities in the saccade trajectories from the Lund University dataset that make it a very poor choice for the construction of an automatic classification method. \underline{\textit{\textbf{Error 2}}} The gazeNet algorithm was trained on the Lund dataset, and then compared to other methods, not trained on this dataset, in terms of performance on this dataset. This is an inherently unfair comparison, and yet nowhere in the gazeNet paper is this unfairness mentioned. \underline{\textit{\textbf{Error 3}}} arises out of the novel event-related agreement analysis employed by the gazeNet authors. Although the authors intended to classify unmatched events as either false positives or false negatives, many are actually being classified as true negatives. True negatives are not errors, and any unmatched event misclassified as a true negative is actually driving kappa higher, whereas unmatched events should be driving kappa lower.
1610.01189
Jing Xu
Qiaochu Li, Stephen J. King, Ajay Gopinathan, and Jing Xu
Quantitative Determination of the Probability of Multiple-Motor Transport in Bead-Based Assays
null
Biophysical Journal , Volume 110 , Issue 12 , 2720 - 2728 (2016)
10.1016/j.bpj.2016.05.015
null
q-bio.BM q-bio.QM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With their longest dimension typically being less than 100 nm, molecular motors are significantly below the optical-resolution limit. Despite substantial advances in fluorescence-based imaging methodologies, labeling with beads remains critical for optical-trapping-based investigations of molecular motors. A key experimental challenge in bead-based assays is that the number of motors on a bead is not well defined. Particularly for single-molecule investigations, the probability of single versus multiple-motor events has not been experimentally investigated. Here, we used bead travel distance as an indicator of multiple-motor transport and determined the lower-bound probability of bead transport by two or more motors. We limited the ATP concentration to increase our detection sensitivity for multiple- versus single-kinesin transport. Surprisingly, for all but the lowest motor number examined, our measurements exceeded estimations of a previous model by ≥2-fold. To bridge this apparent gap between theory and experiment, we derived a closed-form expression for the probability of bead transport by multiple motors, and constrained the only free parameter in this model using our experimental measurements. Our data indicate that kinesin extends to ~57 nm during bead transport, suggesting that kinesin exploits its conformational flexibility to interact with microtubules at highly curved interfaces such as those present for vesicle transport in cells. To our knowledge, our findings provide the first experimentally constrained guide for estimating the probability of multiple-motor transport in optical trapping studies. The experimental approach utilized here (limiting ATP concentration) may be generally applicable to studies in which molecular motors are labeled with cargos that are artificial or are purified from cellular extracts.
[ { "created": "Tue, 4 Oct 2016 20:31:02 GMT", "version": "v1" } ]
2016-10-06
[ [ "Li", "Qiaochu", "" ], [ "King", "Stephen J.", "" ], [ "Gopinathan", "Ajay", "" ], [ "Xu", "Jing", "" ] ]
With their longest dimension typically being less than 100 nm, molecular motors are significantly below the optical-resolution limit. Despite substantial advances in fluorescence-based imaging methodologies, labeling with beads remains critical for optical-trapping-based investigations of molecular motors. A key experimental challenge in bead-based assays is that the number of motors on a bead is not well defined. Particularly for single-molecule investigations, the probability of single versus multiple-motor events has not been experimentally investigated. Here, we used bead travel distance as an indicator of multiple-motor transport and determined the lower-bound probability of bead transport by two or more motors. We limited the ATP concentration to increase our detection sensitivity for multiple- versus single-kinesin transport. Surprisingly, for all but the lowest motor number examined, our measurements exceeded estimations of a previous model by ≥2-fold. To bridge this apparent gap between theory and experiment, we derived a closed-form expression for the probability of bead transport by multiple motors, and constrained the only free parameter in this model using our experimental measurements. Our data indicate that kinesin extends to ~57 nm during bead transport, suggesting that kinesin exploits its conformational flexibility to interact with microtubules at highly curved interfaces such as those present for vesicle transport in cells. To our knowledge, our findings provide the first experimentally constrained guide for estimating the probability of multiple-motor transport in optical trapping studies. The experimental approach utilized here (limiting ATP concentration) may be generally applicable to studies in which molecular motors are labeled with cargos that are artificial or are purified from cellular extracts.
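As a hedged back-of-the-envelope companion to the closed-form expression mentioned above (which the paper derives and which is not reproduced here), one can assume the number of motors able to engage a bead is Poisson distributed and condition on the bead being transported at all:

```python
import numpy as np

def p_multi(lam):
    # P(N >= 2 | N >= 1) for N ~ Poisson(lam): probability that a
    # transported bead is carried by two or more motors (an assumption
    # for illustration, not the paper's constrained model)
    p0 = np.exp(-lam)
    p1 = lam * np.exp(-lam)
    return (1 - p0 - p1) / (1 - p0)

for lam in (0.2, 0.5, 1.0, 2.0):
    print(lam, round(p_multi(lam), 3))
```

Even modest mean motor numbers give a non-negligible multiple-motor fraction, which is why quantifying this probability matters for interpreting "single-molecule" bead assays.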
1307.7882
Sergey Petoukhov
Sergey Petoukhov
The genetic code, algebra of projection operators and problems of inherited biological ensembles
110 pages, 82 figures
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article is devoted to applications of projection operators for simulating phenomenological properties of the molecular-genetic code system. Oblique projection operators are considered, which are connected with matrix representations of the genetic coding system in the form of Rademacher and Hadamard matrices. Evidence is presented that sums of such projectors allow adequate simulation of ensembles of inherited biological phenomena, including ensembles of biological cycles, morphogenetic ensembles of phyllotaxis patterns, mirror-symmetric patterns, etc. For such modeling, the author proposes multidimensional vector spaces whose subspaces are under selective control (or coding) by means of a set of matrix operators based on genetic projectors. The development of genetic biomechanics is discussed. The author proposes and describes special systems of multidimensional numbers, named united-hypercomplex numbers, which attracted his attention when he studied genetic systems and genetic matrices. New rules of long nucleotide sequences are described on the basis of the proposed notion of tetra-groups of equivalent oligonucleotides. The described results can be used for developing algebraic biology, bio-technical applications and some other fields of science and technology.
[ { "created": "Tue, 30 Jul 2013 09:15:13 GMT", "version": "v1" }, { "created": "Mon, 30 Dec 2013 17:09:46 GMT", "version": "v2" }, { "created": "Thu, 23 Jan 2014 18:42:26 GMT", "version": "v3" }, { "created": "Sun, 4 May 2014 14:39:01 GMT", "version": "v4" }, { "created": "Tue, 19 Aug 2014 14:55:18 GMT", "version": "v5" }, { "created": "Wed, 31 Dec 2014 07:48:21 GMT", "version": "v6" }, { "created": "Mon, 6 Mar 2017 16:09:03 GMT", "version": "v7" }, { "created": "Wed, 3 May 2017 15:14:16 GMT", "version": "v8" }, { "created": "Tue, 8 Aug 2017 20:52:00 GMT", "version": "v9" } ]
2017-08-10
[ [ "Petoukhov", "Sergey", "" ] ]
This article is devoted to applications of projection operators for simulating phenomenological properties of the molecular-genetic code system. Oblique projection operators are considered, which are connected with matrix representations of the genetic coding system in the form of Rademacher and Hadamard matrices. Evidence is presented that sums of such projectors allow adequate simulation of ensembles of inherited biological phenomena, including ensembles of biological cycles, morphogenetic ensembles of phyllotaxis patterns, mirror-symmetric patterns, etc. For such modeling, the author proposes multidimensional vector spaces whose subspaces are under selective control (or coding) by means of a set of matrix operators based on genetic projectors. The development of genetic biomechanics is discussed. The author proposes and describes special systems of multidimensional numbers, named united-hypercomplex numbers, which attracted his attention when he studied genetic systems and genetic matrices. New rules of long nucleotide sequences are described on the basis of the proposed notion of tetra-groups of equivalent oligonucleotides. The described results can be used for developing algebraic biology, bio-technical applications and some other fields of science and technology.
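A hedged toy illustrating the defining algebra of the projection operators discussed: an oblique projector is idempotent without being symmetric. The matrix below is a generic example, not one of the Rademacher- or Hadamard-derived genetic projectors.

```python
import numpy as np

P = np.array([[1.0, 1.0],
              [0.0, 0.0]])           # projects onto span{(1,0)} along (1,-1)

assert np.allclose(P @ P, P)         # idempotent: applying twice = once
assert not np.allclose(P, P.T)       # not symmetric, hence oblique
print(np.linalg.eigvals(P))          # projector eigenvalues are 0 and 1
```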
2305.08929
Zhongju Yuan
Zhongju Yuan, Tao Shen, Sheng Xu, Leiye Yu, Ruobing Ren, Siqi Sun
AF2-Mutation: Adversarial Sequence Mutations against AlphaFold2 on Protein Tertiary Structure Prediction
null
null
null
null
q-bio.BM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Deep learning-based approaches, such as AlphaFold2 (AF2), have significantly advanced protein tertiary structure prediction, achieving results comparable to real biological experimental methods. While AF2 has shown limitations in predicting the effects of mutations, its robustness against sequence mutations remains to be determined. Starting with the wild-type (WT) sequence, we investigate adversarial sequences generated via an evolutionary approach, which AF2 predicts to be substantially different from WT. Our experiments on CASP14 reveal that by modifying merely three residues in the protein sequence using a combination of replacement, deletion, and insertion strategies, the alteration in AF2's predictions, as measured by the Local Distance Difference Test (lDDT), reaches 46.61. Moreover, when applied to a specific protein, SPNS2, our proposed algorithm successfully identifies biologically meaningful residues critical to protein structure determination and potentially indicates alternative conformations, thus significantly expediting the experimental process.
[ { "created": "Mon, 15 May 2023 18:06:08 GMT", "version": "v1" } ]
2023-05-17
[ [ "Yuan", "Zhongju", "" ], [ "Shen", "Tao", "" ], [ "Xu", "Sheng", "" ], [ "Yu", "Leiye", "" ], [ "Ren", "Ruobing", "" ], [ "Sun", "Siqi", "" ] ]
Deep learning-based approaches, such as AlphaFold2 (AF2), have significantly advanced protein tertiary structure prediction, achieving results comparable to real biological experimental methods. While AF2 has shown limitations in predicting the effects of mutations, its robustness against sequence mutations remains to be determined. Starting with the wild-type (WT) sequence, we investigate adversarial sequences generated via an evolutionary approach, which AF2 predicts to be substantially different from WT. Our experiments on CASP14 reveal that by modifying merely three residues in the protein sequence using a combination of replacement, deletion, and insertion strategies, the alteration in AF2's predictions, as measured by the Local Distance Difference Test (lDDT), reaches 46.61. Moreover, when applied to a specific protein, SPNS2, our proposed algorithm successfully identifies biologically meaningful residues critical to protein structure determination and potentially indicates alternative conformations, thus significantly expediting the experimental process.
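A hedged skeleton of an evolutionary adversarial search in the spirit of the paper. Here `predict_lddt` is a hypothetical stand-in for running the structure predictor and scoring its output against the wild-type structure, and unlike the paper this sketch does not enforce the small (about three-residue) edit budget.

```python
import random

AA = "ACDEFGHIKLMNPQRSTVWY"

def mutate(seq, rng):
    # one random replacement, deletion, or insertion
    i = rng.randrange(len(seq))
    op = rng.choice(["replace", "delete", "insert"])
    if op == "replace":
        return seq[:i] + rng.choice(AA) + seq[i + 1:]
    if op == "delete" and len(seq) > 1:
        return seq[:i] + seq[i + 1:]
    return seq[:i] + rng.choice(AA) + seq[i:]

def evolve(wt, predict_lddt, pop=20, gens=10, seed=0):
    rng = random.Random(seed)
    best = wt
    for _ in range(gens):
        cands = [mutate(best, rng) for _ in range(pop)]
        best = min(cands, key=predict_lddt)   # lower lDDT = larger deviation
    return best

# toy demo with a random dummy scorer, just to exercise the loop
print(evolve("MKTAYIAKQR", lambda s: random.random(), gens=3))
```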
2211.10472
Jeremy Owen
Jeremy A. Owen, Pranay Talla, John W. Biddle, Jeremy Gunawardena
Thermodynamic bounds on ultrasensitivity in covalent switching
29 pages, 6 figures
null
10.1016/j.bpj.2023.04.015
null
q-bio.MN cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Switch-like motifs are among the basic building blocks of biochemical networks. A common motif that can serve as an ultrasensitive switch consists of two enzymes acting antagonistically on a substrate, one making and the other removing a covalent modification. To work as a switch, such covalent modification cycles must be held out of thermodynamic equilibrium by continuous expenditure of energy. Here, we exploit the linear framework for timescale separation to establish tight bounds on the performance of any covalent-modification switch, in terms of the chemical potential difference driving the cycle. The bounds apply to arbitrary enzyme mechanisms, not just Michaelis-Menten, with arbitrary rate constants, and thereby reflect fundamental physical constraints on covalent switching.
[ { "created": "Fri, 18 Nov 2022 19:21:56 GMT", "version": "v1" } ]
2023-05-31
[ [ "Owen", "Jeremy A.", "" ], [ "Talla", "Pranay", "" ], [ "Biddle", "John W.", "" ], [ "Gunawardena", "Jeremy", "" ] ]
Switch-like motifs are among the basic building blocks of biochemical networks. A common motif that can serve as an ultrasensitive switch consists of two enzymes acting antagonistically on a substrate, one making and the other removing a covalent modification. To work as a switch, such covalent modification cycles must be held out of thermodynamic equilibrium by continuous expenditure of energy. Here, we exploit the linear framework for timescale separation to establish tight bounds on the performance of any covalent-modification switch, in terms of the chemical potential difference driving the cycle. The bounds apply to arbitrary enzyme mechanisms, not just Michaelis-Menten, with arbitrary rate constants, and thereby reflect fundamental physical constraints on covalent switching.
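A hedged illustration of the motif itself: the classic Goldbeter-Koshland push-pull cycle with two Michaelis-Menten enzymes, a standard special case rather than the paper's general bound. In the zero-order regime (small Michaelis constants) the modified fraction switches sharply as the kinase-to-phosphatase rate ratio crosses one.

```python
from scipy.optimize import brentq

def w_star(r, K1=0.05, K2=0.05):
    # steady state of the cycle: modification flux = demodification flux,
    # r * (1-w)/(K1 + 1 - w) = w/(K2 + w), with w the modified fraction
    balance = lambda w: r * (1 - w) / (K1 + 1 - w) - w / (K2 + w)
    return brentq(balance, 1e-12, 1 - 1e-12)

for r in (0.8, 0.95, 1.0, 1.05, 1.2):
    print(r, round(w_star(r), 3))      # steep transition around r = 1
```

The paper's result constrains how steep this transition can be for any enzyme mechanism, as a function of the chemical potential driving the cycle.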
1804.11249
Julia Shore
Julia A. Shore, Jeremy G. Sumner and Barbara R. Holland
The impracticalities of multiplicatively-closed codon models: a retreat to linear alternatives
null
Journal of Mathematical Biology, 1-25 (2020)
null
null
q-bio.PE math.RA math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A matrix Lie algebra is a linear space of matrices closed under the operation $ [A, B] = AB-BA $. The "Lie closure" of a set of matrices is the smallest matrix Lie algebra which contains the set. In the context of Markov chain theory, if a set of rate matrices form a Lie algebra, their corresponding Markov matrices are closed under matrix multiplication; this has been found to be a useful property in phylogenetics. Inspired by previous research involving Lie closures of DNA models, it was hypothesised that finding the Lie closure of a codon model could help to solve the problem of mis-estimation of the non-synonymous/synonymous rate ratio, $ \omega $. We propose two different methods of finding a linear space from a model: the first is the \emph{linear closure} which is the smallest linear space which contains the model, and the second is the \emph{linear version} which changes multiplicative constraints in the model to additive ones. For each of these linear spaces we then find the Lie closures of them. Under both methods, it was found that closed codon models would require thousands of parameters, and that any partial solution to this problem that was of a reasonable size violated stochasticity. Investigation of toy models indicated that finding the Lie closure of matrix linear spaces which deviated only slightly from a simple model resulted in a Lie closure that was close to having the maximum number of parameters possible. Given that Lie closures are not practical, we propose further consideration of the two variants of linearly closed models.
[ { "created": "Thu, 26 Apr 2018 00:43:18 GMT", "version": "v1" }, { "created": "Tue, 8 May 2018 05:52:32 GMT", "version": "v2" }, { "created": "Thu, 6 Aug 2020 00:54:05 GMT", "version": "v3" } ]
2020-08-07
[ [ "Shore", "Julia A.", "" ], [ "Sumner", "Jeremy G.", "" ], [ "Holland", "Barbara R.", "" ] ]
A matrix Lie algebra is a linear space of matrices closed under the operation $ [A, B] = AB-BA $. The "Lie closure" of a set of matrices is the smallest matrix Lie algebra which contains the set. In the context of Markov chain theory, if a set of rate matrices form a Lie algebra, their corresponding Markov matrices are closed under matrix multiplication; this has been found to be a useful property in phylogenetics. Inspired by previous research involving Lie closures of DNA models, it was hypothesised that finding the Lie closure of a codon model could help to solve the problem of mis-estimation of the non-synonymous/synonymous rate ratio, $ \omega $. We propose two different methods of finding a linear space from a model: the first is the \emph{linear closure} which is the smallest linear space which contains the model, and the second is the \emph{linear version} which changes multiplicative constraints in the model to additive ones. For each of these linear spaces we then find the Lie closures of them. Under both methods, it was found that closed codon models would require thousands of parameters, and that any partial solution to this problem that was of a reasonable size violated stochasticity. Investigation of toy models indicated that finding the Lie closure of matrix linear spaces which deviated only slightly from a simple model resulted in a Lie closure that was close to having the maximum number of parameters possible. Given that Lie closures are not practical, we propose further consideration of the two variants of linearly closed models.
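A hedged numerical sketch of the "Lie closure" computation described above: repeatedly take commutators $[A,B] = AB - BA$ of a basis and enlarge the span until it stops growing. The seed matrices are generic 2x2 examples (their closure is sl(2), dimension 3), not a codon model.

```python
import numpy as np
from itertools import combinations

def lie_closure_dim(gens, tol=1e-10):
    n = gens[0].shape[0]
    basis = []                                  # orthonormal vectorised basis

    def try_add(M):
        v = M.ravel().astype(float)
        for b in basis:                         # project out current span
            v -= (b @ v) * b
        nv = np.linalg.norm(v)
        if nv > tol:
            basis.append(v / nv)
            return True
        return False

    for G in gens:
        try_add(np.asarray(G, float))
    grew = True
    while grew:                                 # iterate until closed
        grew = False
        mats = [b.reshape(n, n) for b in basis]
        for A, B in combinations(mats, 2):
            if try_add(A @ B - B @ A):          # the commutator [A, B]
                grew = True
    return len(basis)

X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])
print(lie_closure_dim([X, Y]))                  # prints 3 (sl(2))
```

Running the same procedure on rate-matrix bases of codon models is, in essence, how one discovers that their closures blow up to thousands of dimensions.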
0901.0955
Yong Chen
Guo-Yong Zhang, Yong Chen, Wei-Kai Qi, and Shao-Meng Qin
Four-state rock-paper-scissors games on constrained Newman-Watts networks
6 pages, 7 figures
Phys. Rev. E 79, 062901 (2009)
10.1103/PhysRevE.79.062901
null
q-bio.PE
http://creativecommons.org/licenses/by/3.0/
We study the cyclic dominance of three species in two-dimensional constrained Newman-Watts networks with a four-state variant of the rock-paper-scissors game. By limiting the maximal connection distance $R_{max}$ in Newman-Watts networks with long-range connection probability $p$, we depict more realistically the stochastic interactions among species within ecosystems. When we fix mobility and vary the value of $p$ or $R_{max}$, the Monte Carlo simulations show that the spiral waves grow in size, and the system becomes unstable and biodiversity is lost with increasing $p$ or $R_{max}$. These results are similar to recent results of Reichenbach \textit{et al.} [Nature (London) \textbf{448}, 1046 (2007)], in which only the mobility was increased, without including long-range interactions. We compared extinctions with or without long-range connections and computed spatial correlation functions and the correlation length. We conclude that long-range connections could improve the mobility of species, drastically changing their crossover to extinction and making the system more unstable.
[ { "created": "Thu, 8 Jan 2009 00:58:07 GMT", "version": "v1" }, { "created": "Fri, 3 Jul 2009 03:03:12 GMT", "version": "v2" } ]
2009-07-03
[ [ "Zhang", "Guo-Yong", "" ], [ "Chen", "Yong", "" ], [ "Qi", "Wei-Kai", "" ], [ "Qin", "Shao-Meng", "" ] ]
We study the cyclic dominance of three species in two-dimensional constrained Newman-Watts networks with a four-state variant of the rock-paper-scissors game. By limiting the maximal connection distance $R_{max}$ in Newman-Watts networks with long-range connection probability $p$, we depict more realistically the stochastic interactions among species within ecosystems. When we fix mobility and vary the value of $p$ or $R_{max}$, the Monte Carlo simulations show that the spiral waves grow in size, and the system becomes unstable and biodiversity is lost with increasing $p$ or $R_{max}$. These results are similar to recent results of Reichenbach \textit{et al.} [Nature (London) \textbf{448}, 1046 (2007)], in which only the mobility was increased, without including long-range interactions. We compared extinctions with or without long-range connections and computed spatial correlation functions and the correlation length. We conclude that long-range connections could improve the mobility of species, drastically changing their crossover to extinction and making the system more unstable.
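A toy Monte Carlo sketch of the four-state dynamics described above (three species plus empty sites), under our own simplifying assumptions: the lattice size, rates, and shortcut rule are illustrative, and the $R_{max}$ distance constraint is omitted for brevity. Each regular link is replaced by a random shortcut with probability $p$, loosely mimicking the Newman-Watts construction.

```python
import numpy as np
rng = np.random.default_rng(0)

L, p, steps = 50, 0.01, 200_000
grid = rng.integers(0, 4, size=L * L)          # 0 = empty, 1..3 = species

def neighbours(i):
    x, y = divmod(i, L)
    return [((x + 1) % L) * L + y, ((x - 1) % L) * L + y,
            x * L + (y + 1) % L, x * L + (y - 1) % L]

# Regular 4-neighbour lists, each link rewired to a random site w.p. p.
nbrs = [[rng.integers(L * L) if rng.random() < p else n
         for n in neighbours(i)] for i in range(L * L)]

for _ in range(steps):
    i = rng.integers(L * L)
    j = nbrs[i][rng.integers(4)]
    a, b = grid[i], grid[j]
    if a == 0:
        continue
    if b == 0:                                  # reproduction into empty site
        grid[j] = a
    elif b == a % 3 + 1:                        # cyclic predation: 1>2>3>1
        grid[j] = 0

print("state counts (empty, sp1, sp2, sp3):", np.bincount(grid, minlength=4))
```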
1904.12023
Mengyu Dai
Mengyu Dai, Zhengwu Zhang and Anuj Srivastava
Discovering Common Change-Point Patterns in Functional Connectivity Across Subjects
null
Published in Medical Image Analysis, 2019
10.1016/j.media.2019.101532
null
q-bio.NC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies change-points in human brain functional connectivity (FC) and seeks patterns that are common across multiple subjects under identical external stimulus. FC relates to the similarity of fMRI responses across different brain regions when the brain is simply resting or performing a task. While the dynamic nature of FC is well accepted, this paper develops a formal statistical test for finding {\it change-points} in time series associated with FC. It represents short-term connectivity by a symmetric positive-definite matrix, and uses a Riemannian metric on this space to develop a graphical method for detecting change-points in a time series of such matrices. It also provides a graphical representation of estimated FC for stationary subintervals in between the detected change-points. Furthermore, it uses a temporal alignment of the test statistic, viewed as a real-valued function over time, to remove inter-subject variability and to discover common change-point patterns across subjects. This method is illustrated using data from the Human Connectome Project (HCP) database for multiple subjects and tasks.
[ { "created": "Fri, 26 Apr 2019 19:34:52 GMT", "version": "v1" } ]
2020-03-05
[ [ "Dai", "Mengyu", "" ], [ "Zhang", "Zhengwu", "" ], [ "Srivastava", "Anuj", "" ] ]
This paper studies change-points in human brain functional connectivity (FC) and seeks patterns that are common across multiple subjects under identical external stimulus. FC relates to the similarity of fMRI responses across different brain regions when the brain is simply resting or performing a task. While the dynamic nature of FC is well accepted, this paper develops a formal statistical test for finding {\it change-points} in time series associated with FC. It represents short-term connectivity by a symmetric positive-definite matrix, and uses a Riemannian metric on this space to develop a graphical method for detecting change-points in a time series of such matrices. It also provides a graphical representation of estimated FC for stationary subintervals in between the detected change-points. Furthermore, it uses a temporal alignment of the test statistic, viewed as a real-valued function over time, to remove inter-subject variability and to discover common change-point patterns across subjects. This method is illustrated using data from the Human Connectome Project (HCP) database for multiple subjects and tasks.
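The abstract says only that "a Riemannian metric" on SPD matrices is used; the affine-invariant metric in the hedged sketch below is our illustrative choice, computed via the identity $d(S_1, S_2)^2 = \sum_i \log^2 \lambda_i$, where $\lambda_i$ are the eigenvalues of $S_1^{-1} S_2$. The data are synthetic placeholders standing in for short sliding windows of fMRI signals.

```python
import numpy as np

def spd_distance(S1, S2):
    """Affine-invariant Riemannian distance between SPD matrices:
    ||log(S1^{-1/2} S2 S1^{-1/2})||_F via generalized eigenvalues."""
    eigvals = np.linalg.eigvals(np.linalg.solve(S1, S2)).real
    return np.sqrt(np.sum(np.log(eigvals) ** 2))

# Short-term FC estimates from two sliding windows of synthetic "fMRI" data:
rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal((60, 10)), rng.standard_normal((60, 10))
S1 = np.cov(X1, rowvar=False) + 1e-6 * np.eye(10)   # regularise to stay SPD
S2 = np.cov(X2, rowvar=False) + 1e-6 * np.eye(10)
print(spd_distance(S1, S2))
```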
1505.02492
Christopher Angstmann
C. N. Angstmann, B. I. Henry and A. V. McGann
A fractional order recovery SIR model from a stochastic process
32 pages, 3 figures
null
10.1007/s11538-016-0151-7
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the past several decades there has been a proliferation of epidemiological models with ordinary derivatives replaced by fractional derivatives in an ad hoc manner. These models may be mathematically interesting but their relevance is uncertain. Here we develop an SIR model for an epidemic, including vital dynamics, from an underlying stochastic process. We show how fractional differential operators arise naturally in these models whenever the recovery time from the disease is power-law distributed. This can provide a model for a chronic disease process where individuals who are infected for a long time are unlikely to recover. The fractional order recovery model is shown to be consistent with the Kermack-McKendrick age-structured SIR model and it reduces to the Hethcote-Tudor integral equation SIR model. The derivation from a stochastic process is extended to discrete time, providing a stable numerical method for solving the model equations. We have carried out simulations of the fractional order recovery model showing convergence to equilibrium states. The number of infected individuals in the endemic equilibrium state increases as the fractional order of the derivative tends to zero.
[ { "created": "Mon, 11 May 2015 05:57:15 GMT", "version": "v1" }, { "created": "Wed, 30 Sep 2015 05:37:16 GMT", "version": "v2" } ]
2016-03-31
[ [ "Angstmann", "C. N.", "" ], [ "Henry", "B. I.", "" ], [ "McGann", "A. V.", "" ] ]
Over the past several decades there has been a proliferation of epidemiological models with ordinary derivatives replaced by fractional derivatives in an ad hoc manner. These models may be mathematically interesting but their relevance is uncertain. Here we develop an SIR model for an epidemic, including vital dynamics, from an underlying stochastic process. We show how fractional differential operators arise naturally in these models whenever the recovery time from the disease is power-law distributed. This can provide a model for a chronic disease process where individuals who are infected for a long time are unlikely to recover. The fractional order recovery model is shown to be consistent with the Kermack-McKendrick age-structured SIR model and it reduces to the Hethcote-Tudor integral equation SIR model. The derivation from a stochastic process is extended to discrete time, providing a stable numerical method for solving the model equations. We have carried out simulations of the fractional order recovery model showing convergence to equilibrium states. The number of infected individuals in the endemic equilibrium state increases as the fractional order of the derivative tends to zero.
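A toy simulation (our own construction, not the paper's numerical scheme) that captures the underlying mechanism: each infection draws a recovery time from a Pareto distribution with tail exponent alpha < 1, the heavy-tailed waiting time that gives rise to the fractional derivative. Vital dynamics are omitted and all parameter values are illustrative.

```python
import numpy as np
rng = np.random.default_rng(1)

N, beta, alpha = 10_000, 0.3, 0.8   # population, infection rate, tail exponent
dt, t_max = 0.1, 200.0
S = N - 10
recoveries = list(1.0 + rng.pareto(alpha, 10))   # absolute recovery times

t = 0.0
while t < t_max and recoveries:
    I = len(recoveries)
    new_inf = min(rng.poisson(beta * S * I / N * dt), S)
    S -= new_inf
    # each new infection draws a heavy-tailed recovery time (Pareto, min 1)
    recoveries += list(t + 1.0 + rng.pareto(alpha, new_inf))
    t += dt
    recoveries = [r for r in recoveries if r > t]    # recoveries occur
print("final susceptibles:", S)
```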
1712.07879
\"Omer Deniz Akyildiz
\"Omer Deniz Aky{\i}ld{\i}z
A probabilistic interpretation of replicator-mutator dynamics
null
null
null
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this note, we investigate the relationship between probabilistic updating mechanisms and discrete-time replicator-mutator dynamics. We consider the recently shown connection between Bayesian updating and replicator dynamics and extend it to the replicator-mutator dynamics by considering prediction and filtering recursions in hidden Markov models (HMM). We show that it is possible to understand the evolution of the frequency vector of a population under the replicator-mutator equation as a posterior predictive inference procedure in an HMM. This view enables us to derive a natural dual version of the replicator-mutator equation, which corresponds to updating the filtering distribution. Finally, we conclude with the implications of the interpretation and with some comments related to the recent discussions about evolution and learning.
[ { "created": "Thu, 21 Dec 2017 11:15:25 GMT", "version": "v1" } ]
2017-12-22
[ [ "Akyıldız", "Ömer Deniz", "" ] ]
In this note, we investigate the relationship between probabilistic updating mechanisms and discrete-time replicator-mutator dynamics. We consider the recently shown connection between Bayesian updating and replicator dynamics and extend it to the replicator-mutator dynamics by considering prediction and filtering recursions in hidden Markov models (HMM). We show that it is possible to understand the evolution of the frequency vector of a population under the replicator-mutator equation as a posterior predictive inference procedure in an HMM. This view enables us to derive a natural dual version of the replicator-mutator equation, which corresponds to updating the filtering distribution. Finally, we conclude with the implications of the interpretation and with some comments related to the recent discussions about evolution and learning.
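A minimal sketch of the correspondence described above: one discrete-time replicator-mutator step can be read as a Bayesian "update" in which fitnesses act as likelihoods, followed by an HMM-style prediction step through the mutation (transition) matrix Q. The specific numbers are placeholders.

```python
import numpy as np

def replicator_mutator_step(p, f, Q):
    """p: type frequencies, f: fitnesses (likelihoods), Q[i, j]: P(i -> j)."""
    posterior = p * f / np.dot(p, f)     # selection = Bayes update
    return posterior @ Q                 # mutation = HMM prediction step

p = np.array([0.5, 0.3, 0.2])
f = np.array([1.0, 2.0, 1.5])
Q = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
print(replicator_mutator_step(p, f, Q))
```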
1904.02995
Bj{\o}rn Juel
Bj{\o}rn Erik Juel, Renzo Comolatti, Giulio Tononi, and Larissa Albantakis
When is an action caused from within? Quantifying the causal chain leading to actions in simulated agents
Submitted and accepted to Alife 2019 conference. Revised version: edits include adding more references to relevant work and clarifying minor points in response to reviewers
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An agent's actions can be influenced by external factors through the inputs it receives from the environment, as well as internal factors, such as memories or intrinsic preferences. The extent to which an agent's actions are "caused from within", as opposed to being externally driven, should depend on its sensor capacity as well as environmental demands for memory and context-dependent behavior. Here, we test this hypothesis using simulated agents ("animats"), equipped with small adaptive Markov Brains (MB) that evolve to solve a perceptual-categorization task under conditions that varied with regard to the agents' sensor capacity and task difficulty. Using a novel formalism developed to identify and quantify the actual causes of occurrences ("what caused what?") in complex networks, we evaluate the direct causes of the animats' actions. In addition, we extend this framework to trace the causal chain ("causes of causes") leading to an animat's actions back in time, and compare the obtained spatio-temporal causal history across task conditions. We found that measures quantifying the extent to which an animat's actions are caused by internal factors (as opposed to being driven by the environment through its sensors) varied consistently with defining aspects of the task conditions they evolved to thrive in.
[ { "created": "Fri, 5 Apr 2019 11:23:27 GMT", "version": "v1" }, { "created": "Fri, 14 Jun 2019 13:49:26 GMT", "version": "v2" } ]
2019-06-17
[ [ "Juel", "Bjørn Erik", "" ], [ "Comolatti", "Renzo", "" ], [ "Tononi", "Giulio", "" ], [ "Albantakis", "Larissa", "" ] ]
An agent's actions can be influenced by external factors through the inputs it receives from the environment, as well as internal factors, such as memories or intrinsic preferences. The extent to which an agent's actions are "caused from within", as opposed to being externally driven, should depend on its sensor capacity as well as environmental demands for memory and context-dependent behavior. Here, we test this hypothesis using simulated agents ("animats"), equipped with small adaptive Markov Brains (MB) that evolve to solve a perceptual-categorization task under conditions that varied with regard to the agents' sensor capacity and task difficulty. Using a novel formalism developed to identify and quantify the actual causes of occurrences ("what caused what?") in complex networks, we evaluate the direct causes of the animats' actions. In addition, we extend this framework to trace the causal chain ("causes of causes") leading to an animat's actions back in time, and compare the obtained spatio-temporal causal history across task conditions. We found that measures quantifying the extent to which an animat's actions are caused by internal factors (as opposed to being driven by the environment through its sensors) varied consistently with defining aspects of the task conditions they evolved to thrive in.
2201.07718
Hossam Donya Dr
Hossam Donya, Sheikh Othman, Alexis Dimitriadis
Evaluating and predicting the Efficiency Index for Stereotactic Radiosurgery Plans using RapidMiner GO(JAVA) Based Artificial Intelligence Algorithms
10 pages
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
We investigate the prediction of the Efficiency index (EI) from DVH parameters for SRS treatment plans using supervised machine learning, and evaluate the performance of the predictive model algorithms of RapidMiner GO. The dose volume histogram (DVH) based Efficiency index was calculated for 100 clinical SRS plans generated by Leksell GammaPlan, and the results were compared to predicted values produced by the machine learning toolbox of RapidMiner GO, namely the Generalized Linear Model (GLR), Decision Tree, Support Vector Machine (SVM), Gradient Boosted Trees (GBT), Random Forest (RF) and Deep Learning (DL) models. Root mean square error (RMSE), average absolute error, absolute relative error, squared correlation and model building time were determined to evaluate the performance of each algorithm. The GLR model had a squared correlation of 0.974 with the smallest RMSE of 0.01, relatively high prediction speed, and a fast model building time of 2.812 s. The RMSE values for all models were between 0.01 and 0.021, so all algorithms performed reasonably well. The RMSE of the Gradient Boosted Trees, Random Forest, and Decision Tree regression algorithms was found to be greater than 0.01, suggesting that they are not appropriate for predicting EI in this analysis. RapidMiner GO machine learning models can be used to predict DVH parameters such as EI in SRS treatment planning QA. To effectively evaluate the parameter, it is necessary to choose a suitable machine learning algorithm.
[ { "created": "Wed, 19 Jan 2022 16:56:19 GMT", "version": "v1" } ]
2022-01-20
[ [ "Donya", "Hossam", "" ], [ "Othman", "Sheikh", "" ], [ "Dimitriadis", "Alexis", "" ] ]
We investigate the prediction of the Efficiency index (EI) from DVH parameters for SRS treatment plans using supervised machine learning, and evaluate the performance of the predictive model algorithms of RapidMiner GO. The dose volume histogram (DVH) based Efficiency index was calculated for 100 clinical SRS plans generated by Leksell GammaPlan, and the results were compared to predicted values produced by the machine learning toolbox of RapidMiner GO, namely the Generalized Linear Model (GLR), Decision Tree, Support Vector Machine (SVM), Gradient Boosted Trees (GBT), Random Forest (RF) and Deep Learning (DL) models. Root mean square error (RMSE), average absolute error, absolute relative error, squared correlation and model building time were determined to evaluate the performance of each algorithm. The GLR model had a squared correlation of 0.974 with the smallest RMSE of 0.01, relatively high prediction speed, and a fast model building time of 2.812 s. The RMSE values for all models were between 0.01 and 0.021, so all algorithms performed reasonably well. The RMSE of the Gradient Boosted Trees, Random Forest, and Decision Tree regression algorithms was found to be greater than 0.01, suggesting that they are not appropriate for predicting EI in this analysis. RapidMiner GO machine learning models can be used to predict DVH parameters such as EI in SRS treatment planning QA. To effectively evaluate the parameter, it is necessary to choose a suitable machine learning algorithm.
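The study itself uses RapidMiner GO; a hedged re-implementation sketch of the same comparison in scikit-learn is below. The features and target are random placeholders (in the study the inputs are DVH parameters and the target is each plan's EI), and the Deep Learning model is omitted.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.random((100, 6))               # placeholder DVH features, 100 plans
y = rng.random(100)                    # placeholder efficiency index values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {"GLR": LinearRegression(),
          "Tree": DecisionTreeRegressor(random_state=0),
          "SVM": SVR(),
          "GBT": GradientBoostingRegressor(random_state=0),
          "RF": RandomForestRegressor(random_state=0)}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
    print(f"{name}: RMSE = {rmse:.3f}")
```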
2009.00251
Tommaso Lorenzi
Giada Fiandaca, Marcello Delitala, Tommaso Lorenzi
A mathematical study of the influence of hypoxia and acidity on the evolutionary dynamics of cancer
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hypoxia and acidity act as environmental stressors promoting selection for cancer cells with a more aggressive phenotype. As a result, a deeper theoretical understanding of the spatio-temporal processes that drive the adaptation of tumour cells to hypoxic and acidic microenvironments may open up new avenues of research in oncology and cancer treatment. We present a mathematical model to study the influence of hypoxia and acidity on the evolutionary dynamics of cancer cells in vascularised tumours. The model is formulated as a system of partial integro-differential equations that describe the phenotypic evolution of cancer cells in response to dynamic variations in the spatial distribution of three abiotic factors that are key players in tumour metabolism: oxygen, glucose and lactate. The results of numerical simulations of a calibrated version of the model based on real data recapitulate the eco-evolutionary spatial dynamics of tumour cells and their adaptation to hypoxic and acidic microenvironments. Moreover, such results demonstrate how nonlinear interactions between tumour cells and abiotic factors can lead to the formation of environmental gradients which select for cells with phenotypic characteristics that vary with distance from intra-tumour blood vessels, thus promoting the emergence of intra-tumour phenotypic heterogeneity. Finally, our theoretical findings reconcile the conclusions of earlier studies by showing that the order in which resistance to hypoxia and resistance to acidity arise in tumours depends on the ways in which oxygen and lactate act as environmental stressors in the evolutionary dynamics of cancer cells.
[ { "created": "Tue, 1 Sep 2020 06:10:08 GMT", "version": "v1" }, { "created": "Mon, 26 Apr 2021 09:14:08 GMT", "version": "v2" } ]
2021-04-27
[ [ "Fiandaca", "Giada", "" ], [ "Delitala", "Marcello", "" ], [ "Lorenzi", "Tommaso", "" ] ]
Hypoxia and acidity act as environmental stressors promoting selection for cancer cells with a more aggressive phenotype. As a result, a deeper theoretical understanding of the spatio-temporal processes that drive the adaptation of tumour cells to hypoxic and acidic microenvironments may open up new avenues of research in oncology and cancer treatment. We present a mathematical model to study the influence of hypoxia and acidity on the evolutionary dynamics of cancer cells in vascularised tumours. The model is formulated as a system of partial integro-differential equations that describe the phenotypic evolution of cancer cells in response to dynamic variations in the spatial distribution of three abiotic factors that are key players in tumour metabolism: oxygen, glucose and lactate. The results of numerical simulations of a calibrated version of the model based on real data recapitulate the eco-evolutionary spatial dynamics of tumour cells and their adaptation to hypoxic and acidic microenvironments. Moreover, such results demonstrate how nonlinear interactions between tumour cells and abiotic factors can lead to the formation of environmental gradients which select for cells with phenotypic characteristics that vary with distance from intra-tumour blood vessels, thus promoting the emergence of intra-tumour phenotypic heterogeneity. Finally, our theoretical findings reconcile the conclusions of earlier studies by showing that the order in which resistance to hypoxia and resistance to acidity arise in tumours depends on the ways in which oxygen and lactate act as environmental stressors in the evolutionary dynamics of cancer cells.
1806.08785
Les Hatton
Les Hatton and Gregory Warr
CoHSI I; Detailed properties of the Canonical Distribution for Discrete Systems such as the Proteome
13 pages, 11 figures
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The CoHSI (Conservation of Hartley-Shannon Information) distribution is at the heart of a wide class of discrete systems, defining the length distribution of their components amongst other global properties. Discrete systems such as the known proteome, where components are proteins, computer software, where components are functions, and texts, where components are books, are all known to fit this distribution accurately. In this short paper, we explore its solution and its resulting properties and lay the foundation for a series of papers which will demonstrate, amongst other things, why the average length of components is so highly conserved and why long components occur so frequently in these systems. These properties are not amenable to local arguments such as natural selection in the case of the proteome or human volition in the case of computer software, and indeed turn out to be inevitable global properties of discrete systems devolving directly from CoHSI and shared by all. We will illustrate this using examples from the UniProt protein database as a prelude to subsequent studies.
[ { "created": "Thu, 21 Jun 2018 20:14:03 GMT", "version": "v1" } ]
2018-06-26
[ [ "Hatton", "Les", "" ], [ "Warr", "Gregory", "" ] ]
The CoHSI (Conservation of Hartley-Shannon Information) distribution is at the heart of a wide class of discrete systems, defining the length distribution of their components amongst other global properties. Discrete systems such as the known proteome, where components are proteins, computer software, where components are functions, and texts, where components are books, are all known to fit this distribution accurately. In this short paper, we explore its solution and its resulting properties and lay the foundation for a series of papers which will demonstrate, amongst other things, why the average length of components is so highly conserved and why long components occur so frequently in these systems. These properties are not amenable to local arguments such as natural selection in the case of the proteome or human volition in the case of computer software, and indeed turn out to be inevitable global properties of discrete systems devolving directly from CoHSI and shared by all. We will illustrate this using examples from the UniProt protein database as a prelude to subsequent studies.
2303.04034
Rohak Jain
Rohak Jain
Deciphering a Sleeping Pathogen: Uncovering Novel Transcriptional Regulators of Hypoxia-Induced Dormancy in Mycobacterium Tuberculosis
21 pages, 11 figures
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
During the pathogenesis of Mycobacterium tuberculosis (MTB), hypoxia-induced dormancy arises in the oxygen-depleted environment encountered inside the lung granuloma, where bacilli enter a viable, non-replicating state termed latency. Affecting nearly two billion people, latent TB can linger in the host for indefinite periods of time before resuscitating, which significantly strains the accuracy of treatment options and patient prognosis. Transcription factors thought to mediate this process have only conferred mild growth defects, signaling that our current understanding of the MTB genetic architecture is highly insufficient. In light of these inconsistencies, the objective of this study was to characterize regulatory mechanisms underlying the transition of MTB into dormancy. The project methodology involved a three-part approach: constructing an aggregate hypoxia dataset, inferring a gene regulatory network based on those observations, and leveraging several downstream network analyses to make sense of it all. Results indicated dormancy to be functionally associated with cell redox homeostasis, metal ion cycling, and cell wall metabolism, all of which modulate essential host-pathogen interactions. Additionally, the crosstalk between individual regulons (Rv0821c and Rv0144; Rv1152 and Rv2359) was shown to be critical in facilitating bacterial persistence and allowing MTB to gain control over key micronutrients within the cell. Defense antioxidants and nutritional immunity were also identified as future avenues to explore further. In providing some of the first insights into the methods utilized by MTB to endure in a hypoxic state, this research suggests a range of strategies that might aid in improved clinical outcomes of TB treatment.
[ { "created": "Sat, 4 Mar 2023 18:26:03 GMT", "version": "v1" }, { "created": "Sun, 5 Nov 2023 15:36:32 GMT", "version": "v2" } ]
2023-11-07
[ [ "Jain", "Rohak", "" ] ]
During the pathogenesis of Mycobacterium tuberculosis (MTB), hypoxia-induced dormancy arises in the oxygen-depleted environment encountered inside the lung granuloma, where bacilli enter a viable, non-replicating state termed latency. Affecting nearly two billion people, latent TB can linger in the host for indefinite periods of time before resuscitating, which significantly strains the accuracy of treatment options and patient prognosis. Transcription factors thought to mediate this process have only conferred mild growth defects, signaling that our current understanding of the MTB genetic architecture is highly insufficient. In light of these inconsistencies, the objective of this study was to characterize regulatory mechanisms underlying the transition of MTB into dormancy. The project methodology involved a three-part approach: constructing an aggregate hypoxia dataset, inferring a gene regulatory network based on those observations, and leveraging several downstream network analyses to make sense of it all. Results indicated dormancy to be functionally associated with cell redox homeostasis, metal ion cycling, and cell wall metabolism, all of which modulate essential host-pathogen interactions. Additionally, the crosstalk between individual regulons (Rv0821c and Rv0144; Rv1152 and Rv2359) was shown to be critical in facilitating bacterial persistence and allowing MTB to gain control over key micronutrients within the cell. Defense antioxidants and nutritional immunity were also identified as future avenues to explore further. In providing some of the first insights into the methods utilized by MTB to endure in a hypoxic state, this research suggests a range of strategies that might aid in improved clinical outcomes of TB treatment.
1207.4586
Philippe Terrier PhD
Philippe Terrier and Olivier D\'eriaz
Persistent and anti-persistent pattern in stride-to-stride variability of treadmill walking: influence of rhythmic auditory cueing
preprint version of an article accepted in Human Movement Science
Human Movement Science, 2012, 31(6):1585-97
10.1016/j.humov.2012.05.004
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been observed that long time series of Stride Time (ST), Stride Length (SL) and Stride Speed (SS=SL/ST) exhibited statistical persistence (long-range auto-correlation) in overground walking. Rhythmic auditory cueing induced an anti-persistent (or anti-correlated) pattern in ST series, while SL and SS remained persistent. On the other hand, it has been shown that SS became anti-persistent in treadmill walking, while ST and SL remained persistent. The aim of this study was to analyze the effect of the combination of treadmill walking (imposed speed) and auditory cueing (imposed cadence) on gait dynamics. Twenty middle-aged subjects performed 6 x 5min walking trials at various imposed speeds on an instrumented treadmill. Freely-chosen walking cadences were measured during the first three trials, and then imposed accordingly in the last three trials by using a metronome. Detrended fluctuation analysis (DFA) was performed on the time series of ST, SL, and SS. The treadmill induced anti-persistent dynamics in the time series of SS, but preserved the persistence of ST and SL. In contrast, all three parameters were anti-persistent under the dual-constraint condition. Anti-persistent dynamics may be related to a tighter control: deviations are followed by a rapid over-correction, which produces oscillations around target values. Under the single-constraint condition, while SS is tightly regulated in order to follow the treadmill speed, redundancy between ST and SL would likely allow persistent patterns to occur. Conversely, under dual-constraint conditions, the absence of redundancy among SL, ST and SS would explain the generalized anti-persistent pattern.
[ { "created": "Thu, 19 Jul 2012 09:18:05 GMT", "version": "v1" } ]
2012-12-24
[ [ "Terrier", "Philippe", "" ], [ "Dériaz", "Olivier", "" ] ]
It has been observed that long time series of Stride Time (ST), Stride Length (SL) and Stride Speed (SS=SL/ST) exhibited statistical persistence (long-range auto-correlation) in overground walking. Rhythmic auditory cueing induced an anti-persistent (or anti-correlated) pattern in ST series, while SL and SS remained persistent. On the other hand, it has been shown that SS became anti-persistent in treadmill walking, while ST and SL remained persistent. The aim of this study was to analyze the effect of the combination of treadmill walking (imposed speed) and auditory cueing (imposed cadence) on gait dynamics. Twenty middle-aged subjects performed 6 x 5min walking trials at various imposed speeds on an instrumented treadmill. Freely-chosen walking cadences were measured during the first three trials, and then imposed accordingly in the last three trials by using a metronome. Detrended fluctuation analysis (DFA) was performed on the time series of ST, SL, and SS. The treadmill induced anti-persistent dynamics in the time series of SS, but preserved the persistence of ST and SL. In contrast, all three parameters were anti-persistent under the dual-constraint condition. Anti-persistent dynamics may be related to a tighter control: deviations are followed by a rapid over-correction, which produces oscillations around target values. Under the single-constraint condition, while SS is tightly regulated in order to follow the treadmill speed, redundancy between ST and SL would likely allow persistent patterns to occur. Conversely, under dual-constraint conditions, the absence of redundancy among SL, ST and SS would explain the generalized anti-persistent pattern.
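A compact sketch of detrended fluctuation analysis as used on the stride series above: the scaling exponent alpha < 0.5 indicates anti-persistence and alpha > 0.5 persistence. This is our own minimal implementation; the window scales are illustrative choices.

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segments = y[:n * s].reshape(n, s)
        t = np.arange(s)
        rms = []
        for seg in segments:
            # detrend each window with a least-squares line, keep RMS residual
            coef = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        F.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

x = np.random.randn(512)                          # white noise -> alpha ~ 0.5
print(dfa_alpha(x))
```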
1011.4496
Adrian Melott
Adrian L. Melott (U Kansas) and Richard K. Bambach (Smithsonian Inst., Museum of Natural History)
A ubiquitous ~62 Myr periodic fluctuation superimposed on general trends in fossil biodiversity: II, Evolutionary dynamics associated with periodic fluctuation in marine diversity
Paleobiology, in press. 74 pages, 13 figures
Paleobiology 37:383-408,2011
10.1666/09055.1
null
q-bio.PE astro-ph.EP astro-ph.GA physics.bio-ph physics.geo-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate evolutionary dynamics related to periodicity in fossil biodiversity. Coherent periodic fluctuation in origination/extinction of marine genera that survive <45 million years is the source of an observed ~62 million year periodicity analyzed in Paper I. We also show that the evolutionary dynamics of "long-lived" genera (those that survive >45 million years) do not participate in the periodic fluctuation in diversity and differ from those of "short-lived" genera. The difference between the evolutionary dynamics of these two classes of genera indicates that the periodic pattern is not an artifact of variation in the quality of the geologic record. The interplay of these two previously undifferentiated systems, together with the secular increase in abundance of "long-lived" genera, is probably the source of heretofore unexplained differences in evolutionary dynamics between the Paleozoic and post-Paleozoic as reported by others. Testing for cycles similar to the 62 Myr cycle in fossil biodiversity superimposed on the long-term trends of the Phanerozoic as described in Paper I, we find a significant (but weaker) signal in sedimentary rock packages, particularly carbonates, which suggests a connection. The presence of a periodic pattern in the evolutionary dynamics of the vulnerable "short-lived" component of marine fauna demonstrates that a long-term periodic fluctuation in environmental conditions capable of affecting evolution in the marine realm characterizes our planet. Coincidence in timing is more consistent with a common cause than with sampling bias. A previously identified set of mass extinctions preferentially occurs during the declining phase of the 62 Myr periodicity, supporting the idea that the periodicity relates to variation in biotically important stresses. Further work should focus on finding links to physical phenomena that might reveal the causal system or systems.
[ { "created": "Fri, 19 Nov 2010 18:51:31 GMT", "version": "v1" } ]
2011-07-26
[ [ "Melott", "Adrian L.", "", "U Kansas" ], [ "Bambach", "Richard K.", "", "Smithsonian Inst.,\n Museum of Natural History" ] ]
We investigate evolutionary dynamics related to periodicity in fossil biodiversity. Coherent periodic fluctuation in origination/extinction of marine genera that survive <45 million years is the source of an observed ~62 million year periodicity analyzed in Paper I. We also show that the evolutionary dynamics of "long-lived" genera (those that survive >45 million years) do not participate in the periodic fluctuation in diversity and differ from those of "short-lived" genera. The difference between the evolutionary dynamics of these two classes of genera indicates that the periodic pattern is not an artifact of variation in the quality of the geologic record. The interplay of these two previously undifferentiated systems, together with the secular increase in abundance of "long-lived" genera, is probably the source of heretofore unexplained differences in evolutionary dynamics between the Paleozoic and post-Paleozoic as reported by others. Testing for cycles similar to the 62 Myr cycle in fossil biodiversity superimposed on the long-term trends of the Phanerozoic as described in Paper I, we find a significant (but weaker) signal in sedimentary rock packages, particularly carbonates, which suggests a connection. The presence of a periodic pattern in the evolutionary dynamics of the vulnerable "short-lived" component of marine fauna demonstrates that a long-term periodic fluctuation in environmental conditions capable of affecting evolution in the marine realm characterizes our planet. Coincidence in timing is more consistent with a common cause than with sampling bias. A previously identified set of mass extinctions preferentially occurs during the declining phase of the 62 Myr periodicity, supporting the idea that the periodicity relates to variation in biotically important stresses. Further work should focus on finding links to physical phenomena that might reveal the causal system or systems.
1602.03379
Anupam Mitra
Anupam Mitra, Anagh Pathak, Kaushik Majumdar
Comparison of feature extraction and dimensionality reduction methods for single channel extracellular spike sorting
12 pages, 2 figures
null
null
null
q-bio.QM cs.CV q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spikes in the membrane electrical potentials of neurons play a major role in the functioning of the nervous systems of animals. Obtaining the spikes from different neurons has been a challenging problem for decades. Several schemes have been proposed for spike sorting to isolate the spikes of individual neurons from electrical recordings in extracellular media. However, there is much scope for improvement in the accuracies obtained using the prevailing methods of spike sorting. To determine more effective spike sorting strategies using well-known methods, we compared different types of signal features and techniques for dimensionality reduction in feature space. We tried to determine optimal or near-optimal feature extraction and dimensionality reduction methods, and an optimal or near-optimal number of features, for spike sorting. We assessed the relative performance of well-known methods on simulated recordings specially designed for the development and benchmarking of spike sorting schemes, with a varying number of spike classes, using the well-established method of $k$-means clustering of selected features. We found that almost all well-known methods performed quite well. Nevertheless, from spike waveforms of 64 samples, sampled at 24 kHz, using principal component analysis (PCA) to select around 46 to 55 features led to better spike sorting performance than most other methods (Wilcoxon signed rank sum test, $p < 0.001$).
[ { "created": "Wed, 10 Feb 2016 14:21:34 GMT", "version": "v1" } ]
2016-02-11
[ [ "Mitra", "Anupam", "" ], [ "Pathak", "Anagh", "" ], [ "Majumdar", "Kaushik", "" ] ]
Spikes in the membrane electrical potentials of neurons play a major role in the functioning of the nervous systems of animals. Obtaining the spikes from different neurons has been a challenging problem for decades. Several schemes have been proposed for spike sorting to isolate the spikes of individual neurons from electrical recordings in extracellular media. However, there is much scope for improvement in the accuracies obtained using the prevailing methods of spike sorting. To determine more effective spike sorting strategies using well-known methods, we compared different types of signal features and techniques for dimensionality reduction in feature space. We tried to determine optimal or near-optimal feature extraction and dimensionality reduction methods, and an optimal or near-optimal number of features, for spike sorting. We assessed the relative performance of well-known methods on simulated recordings specially designed for the development and benchmarking of spike sorting schemes, with a varying number of spike classes, using the well-established method of $k$-means clustering of selected features. We found that almost all well-known methods performed quite well. Nevertheless, from spike waveforms of 64 samples, sampled at 24 kHz, using principal component analysis (PCA) to select around 46 to 55 features led to better spike sorting performance than most other methods (Wilcoxon signed rank sum test, $p < 0.001$).
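A minimal sketch of the benchmark pipeline described above: PCA features extracted from 64-sample spike waveforms followed by k-means clustering. The waveforms and cluster count are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

waveforms = np.random.randn(1000, 64)       # 1000 detected spikes, 64 samples
features = PCA(n_components=50).fit_transform(waveforms)  # ~46-55 worked best
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print("spikes per putative neuron:", np.bincount(labels))
```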
1312.6776
Hao Ge
Hao Ge, Hong Qian and Sunney Xiaoliang Xie
Stochastic phenotype transition of a single cell in an intermediate region of gene-state switching
6 pages,4 figures
Physical Review Letters, 114, 078101 (2015)
10.1103/PhysRevLett.114.078101
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple phenotypic states often arise in a single cell with different gene-expression states that undergo transcription regulation with positive feedback. Recent experiments have shown that, at least in E. coli, the gene state switching can be neither extremely slow nor exceedingly rapid as many previous theoretical treatments assumed. Rather it is in the intermediate region, which is difficult to handle mathematically. Under this condition, from a full chemical-master-equation description we derive a model in which the protein copy-number, for a given gene state, follows a deterministic mean-field description while the protein synthesis rates fluctuate due to stochastic gene-state switching. The simplified kinetics yields a nonequilibrium landscape function which, similar to the energy function for equilibrium fluctuation, provides the leading orders of fluctuations around each phenotypic state, as well as the transition rates between the two phenotypic states. This rate formula is analogous to Kramers theory for chemical reactions. The resulting behaviors are significantly different from the two limiting cases studied previously.
[ { "created": "Tue, 24 Dec 2013 08:19:29 GMT", "version": "v1" }, { "created": "Tue, 27 Oct 2015 06:40:14 GMT", "version": "v2" } ]
2015-10-28
[ [ "Ge", "Hao", "" ], [ "Qian", "Hong", "" ], [ "Xie", "Sunney Xiaoliang", "" ] ]
Multiple phenotypic states often arise in a single cell with different gene-expression states that undergo transcription regulation with positive feedback. Recent experiments have shown that, at least in E. coli, the gene state switching can be neither extremely slow nor exceedingly rapid as many previous theoretical treatments assumed. Rather it is in the intermediate region, which is difficult to handle mathematically. Under this condition, from a full chemical-master-equation description we derive a model in which the protein copy-number, for a given gene state, follows a deterministic mean-field description while the protein synthesis rates fluctuate due to stochastic gene-state switching. The simplified kinetics yields a nonequilibrium landscape function which, similar to the energy function for equilibrium fluctuation, provides the leading orders of fluctuations around each phenotypic state, as well as the transition rates between the two phenotypic states. This rate formula is analogous to Kramers theory for chemical reactions. The resulting behaviors are significantly different from the two limiting cases studied previously.
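A sketch (our own construction, with illustrative rates) of the intermediate-switching picture described above: the protein level follows deterministic dynamics dx/dt = k_g - gamma*x for the current gene state g, while g itself flips stochastically between OFF (g = 0) and ON (g = 1).

```python
import numpy as np
rng = np.random.default_rng(2)

k = (2.0, 20.0)            # synthesis rates per gene state (off, on)
gamma = 1.0                # protein degradation rate
switch = (0.5, 0.5)        # switching rates: off->on, on->off
x, g, t, dt, T = 5.0, 0, 0.0, 0.001, 50.0
traj = []
while t < T:
    x += (k[g] - gamma * x) * dt               # mean-field protein dynamics
    if rng.random() < switch[g] * dt:          # stochastic gene-state flip
        g = 1 - g
    t += dt
    traj.append(x)
print("mean protein level:", np.mean(traj))
```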
2004.00959
Arif Ahmed Sekh Dr
Ratnabali Pal, Arif Ahmed Sekh, Samarjit Kar, Dilip K. Prasad
Neural network based country wise risk prediction of COVID-19
null
Applied Sciences, 2020
10.3390/app10186448
null
q-bio.PE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent worldwide outbreak of the novel coronavirus (COVID-19) has opened up new challenges to the research community. Artificial intelligence (AI) driven methods can be useful to predict the parameters, risks, and effects of such an epidemic. Such predictions can be helpful to control and prevent the spread of such diseases. The main challenges in applying AI are the small volume of data and its uncertain nature. Here, we propose a shallow long short-term memory (LSTM) based neural network to predict the risk category of a country. We have used a Bayesian optimization framework to optimize and automatically design country-specific networks. The results show that the proposed pipeline outperforms state-of-the-art methods for data of 180 countries and can be a useful tool for such risk categorization. We have also experimented with the trend data and weather data combined for the prediction. The outcome shows that the weather does not have a significant role. The tool can be used to predict long-duration outbreaks of such an epidemic so that we can take preventive steps earlier.
[ { "created": "Tue, 31 Mar 2020 20:03:10 GMT", "version": "v1" }, { "created": "Wed, 16 Sep 2020 15:16:15 GMT", "version": "v2" } ]
2020-09-17
[ [ "Pal", "Ratnabali", "" ], [ "Sekh", "Arif Ahmed", "" ], [ "Kar", "Samarjit", "" ], [ "Prasad", "Dilip K.", "" ] ]
The recent worldwide outbreak of the novel coronavirus (COVID-19) has opened up new challenges to the research community. Artificial intelligence (AI) driven methods can be useful to predict the parameters, risks, and effects of such an epidemic. Such predictions can be helpful to control and prevent the spread of such diseases. The main challenges in applying AI are the small volume of data and its uncertain nature. Here, we propose a shallow long short-term memory (LSTM) based neural network to predict the risk category of a country. We have used a Bayesian optimization framework to optimize and automatically design country-specific networks. The results show that the proposed pipeline outperforms state-of-the-art methods for data of 180 countries and can be a useful tool for such risk categorization. We have also experimented with the trend data and weather data combined for the prediction. The outcome shows that the weather does not have a significant role. The tool can be used to predict long-duration outbreaks of such an epidemic so that we can take preventive steps earlier.
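A shallow LSTM sketch in Keras matching the model class described above; the layer size, window length, and number of risk categories are illustrative assumptions, the data are placeholders, and the Bayesian hyperparameter search is omitted.

```python
import numpy as np
from tensorflow import keras

window, n_features, n_classes = 14, 1, 4     # 14-day case-trend windows
model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(window, n_features)),
    keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(200, window, n_features)    # placeholder trend windows
y = np.random.randint(0, n_classes, size=200)  # placeholder risk labels
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```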
1910.02951
Nicolo' Savioli
Shihao Jin, Nicol\`o Savioli, Antonio de Marvao, Timothy JW Dawes, Axel Gandy, Daniel Rueckert, Declan P O'Regan
Joint analysis of clinical risk factors and 4D cardiac motion for survival prediction using a hybrid deep learning network
4 pages, 2 figures
NeurIPS 2019, Medical Imaging meets NIPS
null
null
q-bio.QM cs.LG eess.IV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, a novel approach is proposed for joint analysis of high dimensional time-resolved cardiac motion features obtained from segmented cardiac MRI and low dimensional clinical risk factors to improve survival prediction in heart failure. Different methods are evaluated to find the optimal way to insert conventional covariates into deep prediction networks. Correlation analysis between autoencoder latent codes and covariate features is used to examine how these predictors interact. We believe that similar approaches could also be used to introduce knowledge of genetic variants to such survival networks to improve outcome prediction by jointly analysing cardiac motion traits with inheritable risk factors.
[ { "created": "Mon, 7 Oct 2019 14:04:17 GMT", "version": "v1" } ]
2019-10-09
[ [ "Jin", "Shihao", "" ], [ "Savioli", "Nicolò", "" ], [ "de Marvao", "Antonio", "" ], [ "Dawes", "Timothy JW", "" ], [ "Gandy", "Axel", "" ], [ "Rueckert", "Daniel", "" ], [ "O'Regan", "Declan P", "" ] ]
In this work, a novel approach is proposed for joint analysis of high dimensional time-resolved cardiac motion features obtained from segmented cardiac MRI and low dimensional clinical risk factors to improve survival prediction in heart failure. Different methods are evaluated to find the optimal way to insert conventional covariates into deep prediction networks. Correlation analysis between autoencoder latent codes and covariate features is used to examine how these predictors interact. We believe that similar approaches could also be used to introduce knowledge of genetic variants to such survival networks to improve outcome prediction by jointly analysing cardiac motion traits with inheritable risk factors.
2103.01026
Wei Li
Chenxi Zhou, Bin Yang, Wenliang Fan, Wei Li
Modelling brain based on canonical ensemble with functional MRI: A thermodynamic exploration on neural system
27 pages, 3 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objective. Modelling is an important way to study the working mechanism of the brain, yet the characterization and understanding of the brain remain inadequate. This study tried to build a model of the brain from the perspective of thermodynamics at the system level, bringing a new way of thinking to brain modelling. Approach. Regarding brain regions as systems, voxels as particles, and the intensity of signals as the energy of particles, a thermodynamic model of the brain was built based on canonical ensemble theory. Two pairs of activated regions and two pairs of inactivated brain regions were selected for comparison in this study, and analyses of thermodynamic properties based on the proposed model were performed. In addition, the thermodynamic properties were also extracted as input features for the detection of Alzheimer's disease. Main results. The experimental results verified the assumption that the brain also follows thermodynamic laws. They demonstrated the feasibility and rationality of the proposed brain thermodynamic modelling method, indicating that thermodynamic parameters can be applied to describe the state of a neural system. Meanwhile, the brain thermodynamic model achieved much better accuracy in the detection of Alzheimer's disease, suggesting the potential application of the thermodynamic model in auxiliary diagnosis. Significance. (1) Instead of applying some thermodynamic parameters to analyze the neural system, a brain model at the system level was proposed from the perspective of thermodynamics for the first time in this study. (2) The study discovered that the neural system also follows the laws of thermodynamics, which leads to increased internal energy, increased free energy and decreased entropy when the system is activated. (3) The detection of neural disease was demonstrated to benefit from the thermodynamic model, implying the immense potential of thermodynamics in auxiliary diagnosis.
[ { "created": "Fri, 26 Feb 2021 13:23:20 GMT", "version": "v1" }, { "created": "Sat, 27 Mar 2021 08:59:52 GMT", "version": "v2" } ]
2021-03-30
[ [ "Zhou", "Chenxi", "" ], [ "Yang", "Bin", "" ], [ "Fan", "Wenliang", "" ], [ "Li", "Wei", "" ] ]
Objective. Modelling is an important way to study the working mechanism of the brain, yet the characterization and understanding of the brain remain inadequate. This study tried to build a model of the brain from the perspective of thermodynamics at the system level, bringing a new way of thinking to brain modelling. Approach. Regarding brain regions as systems, voxels as particles, and the intensity of signals as the energy of particles, a thermodynamic model of the brain was built based on canonical ensemble theory. Two pairs of activated regions and two pairs of inactivated brain regions were selected for comparison in this study, and analyses of thermodynamic properties based on the proposed model were performed. In addition, the thermodynamic properties were also extracted as input features for the detection of Alzheimer's disease. Main results. The experimental results verified the assumption that the brain also follows thermodynamic laws. They demonstrated the feasibility and rationality of the proposed brain thermodynamic modelling method, indicating that thermodynamic parameters can be applied to describe the state of a neural system. Meanwhile, the brain thermodynamic model achieved much better accuracy in the detection of Alzheimer's disease, suggesting the potential application of the thermodynamic model in auxiliary diagnosis. Significance. (1) Instead of applying some thermodynamic parameters to analyze the neural system, a brain model at the system level was proposed from the perspective of thermodynamics for the first time in this study. (2) The study discovered that the neural system also follows the laws of thermodynamics, which leads to increased internal energy, increased free energy and decreased entropy when the system is activated. (3) The detection of neural disease was demonstrated to benefit from the thermodynamic model, implying the immense potential of thermodynamics in auxiliary diagnosis.
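A sketch of the canonical-ensemble reading described above: voxels act as particles, signal intensities as energy levels, and standard partition-function formulas give the internal energy, Helmholtz free energy, and entropy of a region. Setting beta = 1/kT = 1 and using random intensities are our own simplifications.

```python
import numpy as np

def thermo(intensities, beta=1.0):
    E = np.asarray(intensities, dtype=float)    # voxel "energies"
    w = np.exp(-beta * (E - E.min()))           # shift for numerical stability
    Z = w.sum()                                 # (shifted) partition function
    p = w / Z                                   # Boltzmann probabilities
    U = np.sum(p * E)                           # internal energy
    F = E.min() - np.log(Z) / beta              # Helmholtz free energy
    S = beta * (U - F)                          # entropy (k_B = 1)
    return U, F, S

voxels = np.random.rand(1000) * 100             # placeholder fMRI intensities
print(thermo(voxels))
```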
2306.12348
Andrea Raffo
Andrea Raffo, Ulderico Fugacci, Silvia Biasotti
GEO-Nav: a geometric dataset of voltage-gated sodium channels
null
Computers & Graphics 115 (2023) 285-295
10.1016/j.cag.2023.06.023
null
q-bio.BM cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Voltage-gated sodium (Nav) channels constitute a prime target for drug design and discovery, given their implication in various diseases such as epilepsy, migraine and ataxia to name a few. In this regard, performing morphological analysis is a crucial step in comprehensively understanding their biological function and mechanism, as well as in uncovering subtle details of their mechanism that may be elusive to experimental observations. Despite their tremendous therapeutic potential, drug design resources are deficient, particularly in terms of accurate and comprehensive geometric information. This paper presents a geometric dataset of molecular surfaces that are representative of Nav channels in mammals. For each structure we provide three representations and a number of geometric measures, including length, volume and straightness of the recognized channels. To demonstrate the effective use of GEO-Nav, we have tested it on two methods belonging to two different categories of approaches: a sphere-based and a tessellation-based method.
[ { "created": "Wed, 21 Jun 2023 15:43:55 GMT", "version": "v1" }, { "created": "Wed, 9 Aug 2023 14:54:47 GMT", "version": "v2" } ]
2023-08-10
[ [ "Raffo", "Andrea", "" ], [ "Fugacci", "Ulderico", "" ], [ "Biasotti", "Silvia", "" ] ]
Voltage-gated sodium (Nav) channels constitute a prime target for drug design and discovery, given their implication in various diseases such as epilepsy, migraine and ataxia to name a few. In this regard, performing morphological analysis is a crucial step in comprehensively understanding their biological function and mechanism, as well as in uncovering subtle details of their mechanism that may be elusive to experimental observations. Despite their tremendous therapeutic potential, drug design resources are deficient, particularly in terms of accurate and comprehensive geometric information. This paper presents a geometric dataset of molecular surfaces that are representative of Nav channels in mammals. For each structure we provide three representations and a number of geometric measures, including length, volume and straightness of the recognized channels. To demonstrate the effective use of GEO-Nav, we have tested it on two methods belonging to two different categories of approaches: a sphere-based and a tessellation-based method.
1909.05908
Jahan Schad
Jahan N. Schad
Neurological Nature of Vision and Thought and Mechanisms of Perception Experiences
3 pages
J Neurol Stroke. 2016, 4(5)
10.15406/jnsk.2016.04.00152
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the phenomena of vision and thought requires clarification of the general mechanism of perception. So far, philosophical inquiries and scientific investigations have not been able to address clearly the mysteries surrounding them. The present work is an attempt to unravel the essences of these phenomena based on the presumption of a computational brain. Within this context, the nature of thought is clarified, and the basis of the experience of perception is established. By drawing from the successes of the developed tactile vision substitution systems (TVSS), which render some measure of vision in vision-handicapped persons (the early or congenitally blind), the true nature of vision as cutaneous sensations is also divulged. The mechanism of perception involves sensing of the stimuli and autonomous engagement of the brain's neuronal complexity resolution patterns; that is, the brain's implicit embedded computational instructions. Upon commencement of the triggers, brain computations, which also involve engaging the body's biophysical feedback system, are performed, and the results are output as motor signals that render the realization of perception. However, this requires the deployment of a perception medium; an interface. Given the nature of efferent signals, there must be a (known) bio-mechanical system interface, other than the body's muscular and skeletal system, which performs the needed function. Considering the fact that the vocal system performs such a task for the verbalization of the brain's synthesis of language, the possibility of its further role in the experience of thought and vision, in the form of mostly quiet (inaudible) recital of the related motor signals, is suggested.
[ { "created": "Mon, 9 Sep 2019 18:13:41 GMT", "version": "v1" } ]
2019-09-16
[ [ "Schad", "Jahan N.", "" ] ]
Understanding the phenomena of vision and thought requires clarification of the general mechanism of perception. So far, philosophical inquiries and scientific investigations have not been able to address clearly the mysteries surrounding them. The present work is an attempt to unravel the essences of these phenomena based on the presumption of a computational brain. Within this context, the nature of thought is clarified, and the basis of the experience of perception is established. By drawing from the successes of the developed tactile vision substitution systems (TVSS), which render some measure of vision in vision-handicapped persons (the early or congenitally blind), the true nature of vision as cutaneous sensations is also divulged. The mechanism of perception involves sensing of the stimuli and autonomous engagement of the brain's neuronal complexity resolution patterns; that is, the brain's implicit embedded computational instructions. Upon commencement of the triggers, brain computations, which also involve engaging the body's biophysical feedback system, are performed, and the results are output as motor signals that render the realization of perception. However, this requires the deployment of a perception medium; an interface. Given the nature of efferent signals, there must be a (known) bio-mechanical system interface, other than the body's muscular and skeletal system, which performs the needed function. Considering the fact that the vocal system performs such a task for the verbalization of the brain's synthesis of language, the possibility of its further role in the experience of thought and vision, in the form of mostly quiet (inaudible) recital of the related motor signals, is suggested.
1210.4313
Vladimir R. Chechetkin
Galina I. Kravatskaya, Vladimir R. Chechetkin, Yury V. Kravatsky and Vladimir G. Tumanyan
Structural attributes of nucleotide sequences in promoter regions of supercoiling-sensitive genes: how to relate microarray expression data with genomic sequences
38 pages, 12 figures
null
10.1016/j.ygeno.2012.10.003
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The level of supercoiling in the chromosome can affect gene expression. To clarify the basis of supercoiling sensitivity, we analyzed the structural features of nucleotide sequences in the vicinity of promoters for genes whose expression is enhanced or decreased in response to loss of chromosomal supercoiling in E. coli. Fourier analysis of promoter sequences for supercoiling-sensitive genes reveals a tendency toward selection of sequences with helical periodicities close to 10 nt for relaxation-induced genes and to 11 nt for relaxation-repressed genes. The helical periodicities in the subsets of promoters recognized by RNA polymerase with different sigma factors were also studied. A special procedure was developed for studying correlations between the intensities of periodicities in promoter sequences and the expression levels of the corresponding genes. Significant correlations of expression with the AT content and with AT periodicities of about 10, 11, and 50 nt indicate their role in the regulation of supercoiling-sensitive genes.
[ { "created": "Tue, 16 Oct 2012 08:59:59 GMT", "version": "v1" } ]
2012-10-17
[ [ "Kravatskaya", "Galina I.", "" ], [ "Chechetkin", "Vladimir R.", "" ], [ "Kravatsky", "Yury V.", "" ], [ "Tumanyan", "Vladimir G.", "" ] ]
The level of supercoiling in the chromosome can affect gene expression. To clarify the basis of supercoiling sensitivity, we analyzed the structural features of nucleotide sequences in the vicinity of promoters for genes whose expression is enhanced or decreased in response to loss of chromosomal supercoiling in E. coli. Fourier analysis of promoter sequences for supercoiling-sensitive genes reveals a tendency toward selection of sequences with helical periodicities close to 10 nt for relaxation-induced genes and to 11 nt for relaxation-repressed genes. The helical periodicities in the subsets of promoters recognized by RNA polymerase with different sigma factors were also studied. A special procedure was developed for studying correlations between the intensities of periodicities in promoter sequences and the expression levels of the corresponding genes. Significant correlations of expression with the AT content and with AT periodicities of about 10, 11, and 50 nt indicate their role in the regulation of supercoiling-sensitive genes.
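The Fourier analysis described above can be illustrated with a minimal sketch, assuming a simple periodogram of the binary AT-indicator profile of a promoter sequence; the example sequence and the half-nucleotide tolerance below are hypothetical placeholders, not values from the study.

import numpy as np

def at_periodogram(seq):
    """Periodogram of the binary AT-indicator profile of a DNA sequence."""
    x = np.array([1.0 if base in "AT" else 0.0 for base in seq.upper()])
    x -= x.mean()  # remove the DC component (overall AT content)
    spectrum = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x))  # cycles per nucleotide
    return freqs, spectrum

def power_near_period(freqs, spectrum, period, tol=0.5):
    """Total spectral power at periods within +/- tol nt of the target period."""
    nz = freqs > 0
    periods = 1.0 / freqs[nz]
    return spectrum[nz][np.abs(periods - period) <= tol].sum()

# Hypothetical promoter-region sequence (placeholder, not from the paper)
seq = "ATGCATATATGCGCATATTAGC" * 10
freqs, spec = at_periodogram(seq)
print("power near 10 nt:", power_near_period(freqs, spec, 10.0))
print("power near 11 nt:", power_near_period(freqs, spec, 11.0))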
1909.02336
Jos\'e Mar\'ia Mart\'in Olalla Dr.
Jos\'e Mar\'ia Mart\'in-Olalla
Scandinavian bed and rise times in the Age of Enlightenment and in the 21st century show similarity, helped by Daylight Saving Time
3 pages, 1 figure, 740 words, RevTeX RMP format, longbibliography
Journal of Sleep Research 2019 e12916
10.1111/jsr.12916
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A Letter to the Editor published in the Journal of Sleep Research
[ { "created": "Thu, 5 Sep 2019 11:42:58 GMT", "version": "v1" } ]
2019-09-06
[ [ "Martín-Olalla", "José María", "" ] ]
A Letter to the Editor published in the Journal of Sleep Research
q-bio/0507007
Martin Howard
Konstantin Doubrovinski and Martin Howard
Stochastic model for Soj relocation dynamics in Bacillus subtilis
45 pages
Proc. Natl. Acad. Sci. 102 9808-9813 (2005)
10.1073/pnas.0500529102
null
q-bio.SC cond-mat.stat-mech
null
The Bacillus subtilis Spo0J/Soj proteins, implicated in chromosome segregation and transcriptional regulation, show striking dynamics: Soj undergoes irregular relocations from pole to pole or nucleoid to nucleoid. Here we report on a mathematical model of the Soj dynamics. Our model, which is closely based on the available experimental data, readily generates dynamic Soj relocations. We show that the irregularity of the relocations may be due to the stochastic nature of the underlying Spo0J/Soj interactions and diffusion. We propose explanations for the behavior of several Spo0J/Soj mutants including the "freezing" of the Soj dynamics observed in filamentous cells. Our approach underlines the importance of incorporating stochastic effects when modelling spatiotemporal protein dynamics inside cells.
[ { "created": "Tue, 5 Jul 2005 18:06:36 GMT", "version": "v1" } ]
2007-05-23
[ [ "Doubrovinski", "Konstantin", "" ], [ "Howard", "Martin", "" ] ]
The Bacillus subtilis Spo0J/Soj proteins, implicated in chromosome segregation and transcriptional regulation, show striking dynamics: Soj undergoes irregular relocations from pole to pole or nucleoid to nucleoid. Here we report on a mathematical model of the Soj dynamics. Our model, which is closely based on the available experimental data, readily generates dynamic Soj relocations. We show that the irregularity of the relocations may be due to the stochastic nature of the underlying Spo0J/Soj interactions and diffusion. We propose explanations for the behavior of several Spo0J/Soj mutants including the "freezing" of the Soj dynamics observed in filamentous cells. Our approach underlines the importance of incorporating stochastic effects when modelling spatiotemporal protein dynamics inside cells.
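The class of stochastic model referred to above can be sketched generically with a Gillespie simulation of molecules hopping between compartments on a one-dimensional lattice, the standard way to capture diffusion noise exactly; the lattice size, hopping rate, and initial condition below are illustrative placeholders, not the Spo0J/Soj reaction scheme or parameters of the paper, and binding/unbinding reactions would simply add further propensities.

import numpy as np

rng = np.random.default_rng(0)

L = 20          # number of lattice compartments (placeholder)
D_hop = 1.0     # hopping rate per molecule per direction (placeholder)
n = np.zeros(L, dtype=int)
n[0] = 50       # start with all molecules at one pole

t, t_end = 0.0, 100.0
while t < t_end:
    # propensity of each hop event: molecule count times rate
    a_right = D_hop * n[:-1]  # hop from site i to i+1
    a_left = D_hop * n[1:]    # hop from site i+1 to i
    a = np.concatenate([a_right, a_left])
    a_tot = a.sum()
    if a_tot == 0:
        break
    t += rng.exponential(1.0 / a_tot)    # waiting time to the next event
    k = rng.choice(a.size, p=a / a_tot)  # which event fires
    if k < L - 1:
        n[k] -= 1
        n[k + 1] += 1
    else:
        j = k - (L - 1)
        n[j + 1] -= 1
        n[j] += 1

print("final occupancy:", n)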
2006.13264
Nicole Vike
Nicole L. Vike, Sumra Bari, Khrystyna Stetsiv, Linda Papa, Eric A. Nauman, Thomas M. Talavage, Semyon Slobounov, and Hans C. Breiter
Metabolomic measures of altered energy metabolism mediate the relationship of inflammatory miRNAs to motor control in collegiate football athletes
55 pages, 9 figures, 5 tables
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent research has shown there can be detrimental neurological effects of short- and long-term exposure to contact sports. In the present study, metabolomic profiling was combined with inflammatory miRNA quantification, computational behavior with virtual reality (VR) testing of motor control, and head acceleration event (HAE) monitoring to explore trans-omic and collision effects on human behavior across a season in players on a collegiate American football team. We integrated permutation-based statistics with mediation analyses to test complex, directional relationships between miRNAs, metabolites, and VR task performance. Fourteen significant mediations (metabolite = mediator; miRNA = independent variable; VR score = dependent variable) were discovered at preseason (N=6) and across season (N=8), with Sobel p-values less than or equal to 0.05 and with total effects at or exceeding 50%. The majority of mediation findings involved long- to medium-chain fatty acids (FAs; 2-HG, 8-HOA, UND, sebacate, suberate, and heptanoate). In parallel, TCA metabolites were found to be significantly decreased at postseason relative to preseason. HAEs were associated with metabolomic measures and miRNA levels across the season. Together, these observations suggest a state of chronic HAE-induced neuroinflammation (as evidenced by elevated miRNAs) and mitochondrial dysfunction (as observed by abnormal FAs and TCA metabolites) that together produce subtle changes in neurological function (as observed by impaired motor control behavior). These findings point to a shift in mitochondrial metabolism, away from normal mitochondrial function, consistent with other illnesses classified as mitochondrial disorders, suggesting a plausible mechanism underlying HAE effects in contact sports and a potential avenue for treatment intervention.
[ { "created": "Tue, 23 Jun 2020 18:43:24 GMT", "version": "v1" } ]
2020-06-25
[ [ "Vike", "Nicole L.", "" ], [ "Bari", "Sumra", "" ], [ "Stetsiv", "Khrystyna", "" ], [ "Papa", "Linda", "" ], [ "Nauman", "Eric A.", "" ], [ "Talavage", "Thomas M.", "" ], [ "Slobounov", "Semyon", "" ], [ "Breiter", "Hans C.", "" ] ]
Recent research has shown there can be detrimental neurological effects of short- and long-term exposure to contact sports. In the present study, metabolomic profiling was combined with inflammatory miRNA quantification, computational behavior with virtual reality (VR) testing of motor control, and head acceleration event (HAE) monitoring to explore trans-omic and collision effects on human behavior across a season in players on a collegiate American football team. We integrated permutation-based statistics with mediation analyses to test complex, directional relationships between miRNAs, metabolites, and VR task performance. Fourteen significant mediations (metabolite = mediator; miRNA = independent variable; VR score = dependent variable) were discovered at preseason (N=6) and across season (N=8), with Sobel p-values less than or equal to 0.05 and with total effects at or exceeding 50%. The majority of mediation findings involved long- to medium-chain fatty acids (FAs; 2-HG, 8-HOA, UND, sebacate, suberate, and heptanoate). In parallel, TCA metabolites were found to be significantly decreased at postseason relative to preseason. HAEs were associated with metabolomic measures and miRNA levels across the season. Together, these observations suggest a state of chronic HAE-induced neuroinflammation (as evidenced by elevated miRNAs) and mitochondrial dysfunction (as observed by abnormal FAs and TCA metabolites) that together produce subtle changes in neurological function (as observed by impaired motor control behavior). These findings point to a shift in mitochondrial metabolism, away from normal mitochondrial function, consistent with other illnesses classified as mitochondrial disorders, suggesting a plausible mechanism underlying HAE effects in contact sports and a potential avenue for treatment intervention.
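A single mediation of the kind tested above (miRNA as independent variable, metabolite as mediator, VR score as outcome) can be sketched with ordinary least squares path estimates and the Sobel z-statistic; the variable names and synthetic data below are hypothetical, and the study itself relied on permutation-based statistics rather than this textbook version.

import numpy as np
from scipy import stats

def sobel_mediation(x, m, y):
    """Sobel test for x -> m -> y mediation using OLS path estimates."""
    # Path a: mediator regressed on the independent variable
    a, _, _, _, se_a = stats.linregress(x, m)
    # Path b: outcome regressed on mediator, controlling for x
    X = np.column_stack([np.ones_like(x), x, m])
    beta, res, _, _ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(y) - X.shape[1]
    cov = (res[0] / dof) * np.linalg.inv(X.T @ X)
    b, se_b = beta[2], np.sqrt(cov[2, 2])
    # Sobel z for the indirect effect a*b
    z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    p = 2 * stats.norm.sf(abs(z))
    return a * b, z, p

# Synthetic example (hypothetical data, not the study's measurements)
rng = np.random.default_rng(1)
mirna = rng.normal(size=200)
metabolite = 0.5 * mirna + rng.normal(size=200)
vr_score = 0.4 * metabolite + rng.normal(size=200)
ab, z, p = sobel_mediation(mirna, metabolite, vr_score)
print(f"indirect effect={ab:.3f}, Sobel z={z:.2f}, p={p:.4f}")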
1711.00418
Ehtibar Dzhafarov
Victor H. Cervantes and Ehtibar N. Dzhafarov
Snow Queen is Evil and Beautiful: Experimental Evidence for Probabilistic Contextuality in Human Choices
To be published in Decision. 12 pp., 6 figures, 4 tables; Version 8 is the proofread-for-publication version
Decision 5, 193-204, 2018
null
null
q-bio.NC math.PR quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present unambiguous experimental evidence for (quantum-like) probabilistic contextuality in psychology. All previous attempts to find contextuality in a psychological experiment were unsuccessful because of gross violations of marginal selectivity in behavioral data, making the traditional mathematical tests developed in quantum mechanics inapplicable. In our crowdsourcing experiment, respondents made two simple choices: of one of two characters in a story (The Snow Queen by Hans Christian Andersen), and of one of two characteristics, such as Kind and Evil, so that the chosen character and characteristic matched the story line. The formal structure of the experiment imitated that of the Einstein-Podolsky-Rosen paradigm in the Bohm-Bell version. Marginal selectivity was violated, indicating that the two choices directly influenced each other, but the application of a mathematical test developed in the Contextuality-by-Default theory, extending the traditional quantum-mechanical test, indicated a strong presence of contextuality proper, not reducible to direct influences.
[ { "created": "Wed, 1 Nov 2017 16:16:39 GMT", "version": "v1" }, { "created": "Thu, 2 Nov 2017 14:46:06 GMT", "version": "v2" }, { "created": "Thu, 9 Nov 2017 03:23:22 GMT", "version": "v3" }, { "created": "Mon, 13 Nov 2017 16:47:37 GMT", "version": "v4" }, { "created": "Mon, 18 Dec 2017 09:59:14 GMT", "version": "v5" }, { "created": "Fri, 29 Dec 2017 04:54:56 GMT", "version": "v6" }, { "created": "Thu, 18 Jan 2018 22:49:23 GMT", "version": "v7" }, { "created": "Wed, 21 Mar 2018 02:50:45 GMT", "version": "v8" } ]
2019-01-24
[ [ "Cervantes", "Victor H.", "" ], [ "Dzhafarov", "Ehtibar N.", "" ] ]
We present unambiguous experimental evidence for (quantum-like) probabilistic contextuality in psychology. All previous attempts to find contextuality in a psychological experiment were unsuccessful because of gross violations of marginal selectivity in behavioral data, making the traditional mathematical tests developed in quantum mechanics inapplicable. In our crowdsourcing experiment, respondents made two simple choices: of one of two characters in a story (The Snow Queen by Hans Christian Andersen), and of one of two characteristics, such as Kind and Evil, so that the chosen character and characteristic matched the story line. The formal structure of the experiment imitated that of the Einstein-Podolsky-Rosen paradigm in the Bohm-Bell version. Marginal selectivity was violated, indicating that the two choices directly influenced each other, but the application of a mathematical test developed in the Contextuality-by-Default theory, extending the traditional quantum-mechanical test, indicated a strong presence of contextuality proper, not reducible to direct influences.
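A sketch of the Contextuality-by-Default test for cyclic systems, based on the published criterion of Kujala and Dzhafarov: a cyclic system of rank n is contextual when s_1 of the within-context product expectations exceeds n - 2 plus the total violation of marginal selectivity, Delta. The rank-4 numbers below are made up for illustration and are not the experiment's data.

from itertools import product
import numpy as np

def s_odd(values):
    """Max over sign patterns with an odd number of minuses of sum(sign * value)."""
    best = -np.inf
    for signs in product([1, -1], repeat=len(values)):
        if signs.count(-1) % 2 == 1:
            best = max(best, sum(s * v for s, v in zip(signs, values)))
    return best

def cbd_contextual(correlations, delta):
    """CbD criterion for a cyclic system of rank n = len(correlations).

    correlations : within-context product expectations <R_i R_{i+1}>
    delta        : summed absolute differences of marginals across contexts
    """
    n = len(correlations)
    return s_odd(correlations) > n - 2 + delta

# Hypothetical rank-4 (EPR/Bohm-Bell-like) system
corrs = [0.6, 0.7, 0.65, -0.8]  # placeholder correlations
delta = 0.3                     # placeholder marginal-selectivity violation
print("contextual:", cbd_contextual(corrs, delta))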
1303.3920
Tobias Jeppsson
Tobias Jeppsson and P\"ar Forslund
Can Life History Predict the Effect of Demographic Stochasticity on Extinction Risk?
null
The American Naturalist. 2012. 179(6): 706-720
10.1086/665696
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Demographic stochasticity is important in determining extinction risks of small populations, but it is largely unknown how its effect depends on the life histories of species. We modeled effects of demographic stochasticity on extinction risk in a broad range of generalized life histories, using matrix models and branching processes. Extinction risks of life histories varied greatly in their sensitivity to demographic stochasticity. Comparing life histories, extinction risk generally increased with increasing fecundity and decreased with higher ages of maturation. Effects of adult survival depended on age of maturation. At lower ages of maturation, extinction risk peaked at intermediate levels of adult survival, but it increased along with adult survival at higher ages of maturation. These differences were largely explained by differences in sensitivities of population growth to perturbations of life-history traits. Juvenile survival rate contributed most to total demographic variance in the majority of life histories. Our general results confirmed earlier findings, suggesting that empirical patterns can be explained by a relatively simple model. Thus, basic life history information can be used to assign life-history-specific sensitivity to demographic stochasticity. This is of great value when assessing the vulnerability of small populations.
[ { "created": "Fri, 15 Mar 2013 22:45:44 GMT", "version": "v1" } ]
2013-03-19
[ [ "Jeppsson", "Tobias", "" ], [ "Forslund", "Pär", "" ] ]
Demographic stochasticity is important in determining extinction risks of small populations, but it is largely unknown how its effect depends on the life histories of species. We modeled effects of demographic stochasticity on extinction risk in a broad range of generalized life histories, using matrix models and branching processes. Extinction risks of life histories varied greatly in their sensitivity to demographic stochasticity. Comparing life histories, extinction risk generally increased with increasing fecundity and decreased with higher ages of maturation. Effects of adult survival depended on age of maturation. At lower ages of maturation, extinction risk peaked at intermediate levels of adult survival, but it increased along with adult survival at higher ages of maturation. These differences were largely explained by differences in sensitivities of population growth to perturbations of life-history traits. Juvenile survival rate contributed most to total demographic variance in the majority of life histories. Our general results confirmed earlier findings, suggesting that empirical patterns can be explained by a relatively simple model. Thus, basic life history information can be used to assign life-history-specific sensitivity to demographic stochasticity. This is of great value when assessing the vulnerability of small populations.
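The role of demographic stochasticity can be sketched with an individual-based simulation in which survival is Bernoulli and offspring number is Poisson around the deterministic vital rates; the two-stage juvenile/adult life history and parameter values below are placeholders, not the generalized life histories or matrix models analyzed in the paper.

import numpy as np

rng = np.random.default_rng(2)

def extinction_prob(s_j, s_a, f, mat_age, n0=20, t_max=100, reps=1000):
    """Fraction of replicate populations extinct by t_max.

    s_j, s_a : juvenile and adult annual survival probabilities
    f        : mean fecundity per adult
    mat_age  : age at maturation (placeholder stage structure)
    """
    extinct = 0
    for _ in range(reps):
        ages = np.zeros(n0, dtype=int)  # ages[i] = age of individual i
        for _ in range(t_max):
            adults = ages >= mat_age
            # demographic stochasticity in survival: Bernoulli per individual
            surv = rng.random(ages.size) < np.where(adults, s_a, s_j)
            ages = ages[surv] + 1
            # demographic stochasticity in reproduction: Poisson per adult
            n_off = rng.poisson(f, size=int((ages >= mat_age).sum())).sum()
            ages = np.concatenate([ages, np.zeros(n_off, dtype=int)])
            if ages.size == 0:
                extinct += 1
                break
    return extinct / reps

print(extinction_prob(s_j=0.3, s_a=0.7, f=2.0, mat_age=2))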
2403.19637
Samir H.A. Mohammad
Samir H.A. Mohammad, Haneen Farah and Arkady Zgonnikov
In the driver's mind: modeling the dynamics of human overtaking decisions in interactions with oncoming automated vehicles
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding human behavior in overtaking scenarios is crucial for enhancing road safety in mixed traffic with automated vehicles (AVs). Computational models of behavior play a pivotal role in advancing this understanding, as they can provide insight into human behavior generalizing beyond empirical studies. However, existing studies and models of human overtaking behavior have mostly focused on scenarios with simplistic, constant-speed dynamics of oncoming vehicles, disregarding the potential of AVs to proactively influence the decision-making process of the human drivers via implicit communication. Furthermore, so far it remained unknown whether overtaking decisions of human drivers are affected by whether they are interacting with an AV or a human-driven vehicle (HDV). To address these gaps, we conducted a "reverse Wizard-of-Oz" driving simulator experiment with 30 participants who repeatedly interacted with oncoming AVs and HDVs, measuring the drivers' gap acceptance decisions and response times. The oncoming vehicles featured time-varying dynamics designed to influence the overtaking decisions of the participants by briefly decelerating and then recovering to their initial speed. We found that participants did not alter their overtaking behavior when interacting with oncoming AVs compared to HDVs. Furthermore, we did not find any evidence of brief decelerations of the oncoming vehicle affecting the decisions or response times of the participants. Cognitive modeling of the obtained data revealed that a generalized drift-diffusion model with dynamic drift rate and velocity-dependent decision bias best explained the gap acceptance outcomes and response times observed in the experiment. Overall, our findings highlight the potential of cognitive models for further advancing the ongoing development of safer interactions between human drivers and AVs during overtaking maneuvers.
[ { "created": "Thu, 28 Mar 2024 17:50:41 GMT", "version": "v1" } ]
2024-03-29
[ [ "Mohammad", "Samir H. A.", "" ], [ "Farah", "Haneen", "" ], [ "Zgonnikov", "Arkady", "" ] ]
Understanding human behavior in overtaking scenarios is crucial for enhancing road safety in mixed traffic with automated vehicles (AVs). Computational models of behavior play a pivotal role in advancing this understanding, as they can provide insight into human behavior generalizing beyond empirical studies. However, existing studies and models of human overtaking behavior have mostly focused on scenarios with simplistic, constant-speed dynamics of oncoming vehicles, disregarding the potential of AVs to proactively influence the decision-making process of the human drivers via implicit communication. Furthermore, so far it remained unknown whether overtaking decisions of human drivers are affected by whether they are interacting with an AV or a human-driven vehicle (HDV). To address these gaps, we conducted a "reverse Wizard-of-Oz" driving simulator experiment with 30 participants who repeatedly interacted with oncoming AVs and HDVs, measuring the drivers' gap acceptance decisions and response times. The oncoming vehicles featured time-varying dynamics designed to influence the overtaking decisions of the participants by briefly decelerating and then recovering to their initial speed. We found that participants did not alter their overtaking behavior when interacting with oncoming AVs compared to HDVs. Furthermore, we did not find any evidence of brief decelerations of the oncoming vehicle affecting the decisions or response times of the participants. Cognitive modeling of the obtained data revealed that a generalized drift-diffusion model with dynamic drift rate and velocity-dependent decision bias best explained the gap acceptance outcomes and response times observed in the experiment. Overall, our findings highlight the potential of cognitive models for further advancing the ongoing development of safer interactions between human drivers and AVs during overtaking maneuvers.
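The cognitive model can be sketched as a generalized drift-diffusion simulation with a time-varying drift rate and a starting-point bias; the drift function, bias term, and parameter values below are illustrative placeholders, not the fitted velocity-dependent model reported in the paper.

import numpy as np

rng = np.random.default_rng(3)

def simulate_ddm(drift_fn, z=0.0, a=1.0, sigma=1.0, dt=0.001, t_max=5.0):
    """One trial of a drift-diffusion model with time-varying drift.

    drift_fn : callable t -> drift rate (can encode oncoming-vehicle dynamics)
    z        : starting-point bias, in (-a, a)
    a        : decision boundaries at +a (accept gap) and -a (reject gap)
    Returns (choice, response_time); choice is None if no boundary is hit.
    """
    x, t = z, 0.0
    while t < t_max:
        # Euler-Maruyama step of the diffusion process
        x += drift_fn(t) * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
        if x >= a:
            return 1, t   # accept the gap
        if x <= -a:
            return 0, t   # reject the gap
    return None, t_max

# Hypothetical drift tied to a made-up time-to-arrival of the oncoming vehicle
tta0, decay = 4.0, 0.5
drift = lambda t: 0.8 * ((tta0 - t) - 3.0) * np.exp(-decay * t)  # placeholder
print(simulate_ddm(drift))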