Schema (column name, dtype, observed value range):

id              string   length 9–13
submitter       string   length 4–48
authors         string   length 4–9.62k
title           string   length 4–343
comments        string   length 2–480
journal-ref     string   length 9–309
doi             string   length 12–138
report-no       string   277 classes
categories      string   length 8–87
license         string   9 classes
orig_abstract   string   length 27–3.76k
versions        list     length 1–15
update_date     string   length 10
authors_parsed  list     length 1–147
abstract        string   length 24–3.75k
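For reference, a minimal Python sketch of how records conforming to this schema might be loaded and typed. The JSON-lines layout and the file name "arxiv_qbio.jsonl" are assumptions for illustration, not part of the dataset; field names and null handling follow the schema and the record previews below.

```python
# Minimal sketch, assuming the records below are stored as JSON lines.
# Field names follow the schema above; the path "arxiv_qbio.jsonl" and
# the dataclass name are hypothetical, not part of the dataset itself.
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArxivRecord:
    id: str                     # e.g. "2002.11271"
    submitter: str
    authors: str                # raw author string, may contain LaTeX escapes
    title: str
    comments: Optional[str]     # null in the preview when absent
    journal_ref: Optional[str]  # source column: "journal-ref"
    doi: Optional[str]
    report_no: Optional[str]    # source column: "report-no"
    categories: str             # space-separated, e.g. "q-bio.GN cs.DS"
    license: str
    orig_abstract: str
    versions: list              # [{"created": "...", "version": "v1"}, ...]
    update_date: str            # "YYYY-MM-DD"
    authors_parsed: list        # [[last, first, suffix], ...]
    abstract: str

def parse_record(obj: dict) -> ArxivRecord:
    """Map one raw JSON object onto the dataclass, renaming hyphenated keys."""
    return ArxivRecord(
        id=obj["id"], submitter=obj["submitter"], authors=obj["authors"],
        title=obj["title"], comments=obj.get("comments"),
        journal_ref=obj.get("journal-ref"), doi=obj.get("doi"),
        report_no=obj.get("report-no"), categories=obj["categories"],
        license=obj["license"], orig_abstract=obj["orig_abstract"],
        versions=obj["versions"], update_date=obj["update_date"],
        authors_parsed=obj["authors_parsed"], abstract=obj["abstract"],
    )

with open("arxiv_qbio.jsonl") as fh:  # hypothetical file name
    for line in fh:
        rec = parse_record(json.loads(line))
        print(rec.id, rec.categories.split()[0], rec.title)
```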
2002.11271
Manuel Lafond
Garance Cordonnier, Manuel Lafond
Comparing copy-number profiles under multi-copy amplifications and deletions
null
null
null
null
q-bio.GN cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During cancer progression, malignant cells accumulate somatic mutations that can lead to genetic aberrations. In particular, evolutionary events akin to segmental duplications or deletions can alter the copy-number profile (CNP) of a set of genes in a genome. Our aim is to compute the evolutionary distance between two cells for which only CNPs are known. This asks for the minimum number of segmental amplifications and deletions to turn one CNP into another. This was recently formalized into a model where each event is assumed to alter a copy-number by $1$ or $-1$, even though these events can affect large portions of a chromosome. We propose a general cost framework where an event can modify the copy-number of a gene by larger amounts. We show that any cost scheme that allows segmental deletions of arbitrary length makes computing the distance strongly NP-hard. We then devise a factor $2$ approximation algorithm for the problem when copy-numbers are non-zero and provide an implementation called \textsf{cnp2cnp}. We evaluate our approach experimentally by reconstructing simulated cancer phylogenies from the pairwise distances inferred by \textsf{cnp2cnp} and compare it against two other alternatives, namely the \textsf{MEDICC} distance and the Euclidean distance. The experimental results show that our distance yields more accurate phylogenies on average than these alternatives if the given CNPs are error-free, but that the \textsf{MEDICC} distance is slightly more robust against error in the data. In all cases, our experiments show that either our approach or the \textsf{MEDICC} approach should be preferred over the Euclidean distance.
[ { "created": "Wed, 26 Feb 2020 03:02:05 GMT", "version": "v1" } ]
2020-02-27
[ [ "Cordonnier", "Garance", "" ], [ "Lafond", "Manuel", "" ] ]
During cancer progression, malignant cells accumulate somatic mutations that can lead to genetic aberrations. In particular, evolutionary events akin to segmental duplications or deletions can alter the copy-number profile (CNP) of a set of genes in a genome. Our aim is to compute the evolutionary distance between two cells for which only CNPs are known. This asks for the minimum number of segmental amplifications and deletions to turn one CNP into another. This was recently formalized into a model where each event is assumed to alter a copy-number by $1$ or $-1$, even though these events can affect large portions of a chromosome. We propose a general cost framework where an event can modify the copy-number of a gene by larger amounts. We show that any cost scheme that allows segmental deletions of arbitrary length makes computing the distance strongly NP-hard. We then devise a factor $2$ approximation algorithm for the problem when copy-numbers are non-zero and provide an implementation called \textsf{cnp2cnp}. We evaluate our approach experimentally by reconstructing simulated cancer phylogenies from the pairwise distances inferred by \textsf{cnp2cnp} and compare it against two other alternatives, namely the \textsf{MEDICC} distance and the Euclidean distance. The experimental results show that our distance yields more accurate phylogenies on average than these alternatives if the given CNPs are error-free, but that the \textsf{MEDICC} distance is slightly more robust against error in the data. In all cases, our experiments show that either our approach or the \textsf{MEDICC} approach should be preferred over the Euclidean distance.
1608.08612
Andr\'e Amado
Andr\'e Amado, Paulo R. A. Campos
The influence of the composition of tradeoffs on the generation of differentiated cells
6 pages, 5 figures
null
10.1088/1742-5468/aa71d8
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the emergence of cell differentiation under the assumption of the existence of a given number of tradeoffs between genes encoding different functions. In the model, the viability of colonies is determined by the capability of their lower-level units to perform different functions, which is implicitly determined by external chemical stimuli. Due to the existence of tradeoffs, it can be evolutionarily advantageous to evolve the division of labor, whereby the cells can suppress their contributions to some of the activities through the activation of regulatory genes, which in turn inflicts a cost in terms of fitness. Our simulation results show that cell differentiation is more likely as the number of tradeoffs is increased, but the outcome also depends on their strength. We observe the existence of critical values for the minimum number of tradeoffs and their strength beyond which maximum cell differentiation can be attained. Remarkably, we observe the occurrence of a maximum tradeoff strength beyond which the population is no longer viable, imposing an upper tolerable level of constraint on the system. This tolerance is reduced as the number of tradeoffs grows.
[ { "created": "Tue, 30 Aug 2016 19:40:19 GMT", "version": "v1" } ]
2017-06-06
[ [ "Amado", "André", "" ], [ "Campos", "Paulo R. A.", "" ] ]
We study the emergence of cell differentiation under the assumption of the existence of a given number of tradeoffs between genes encoding different functions. In the model, the viability of colonies is determined by the capability of their lower-level units to perform different functions, which is implicitly determined by external chemical stimuli. Due to the existence of tradeoffs, it can be evolutionarily advantageous to evolve the division of labor, whereby the cells can suppress their contributions to some of the activities through the activation of regulatory genes, which in turn inflicts a cost in terms of fitness. Our simulation results show that cell differentiation is more likely as the number of tradeoffs is increased, but the outcome also depends on their strength. We observe the existence of critical values for the minimum number of tradeoffs and their strength beyond which maximum cell differentiation can be attained. Remarkably, we observe the occurrence of a maximum tradeoff strength beyond which the population is no longer viable, imposing an upper tolerable level of constraint on the system. This tolerance is reduced as the number of tradeoffs grows.
1604.04398
Michael Sadovsky
Michael G. Sadovsky, Eugenia I. Bondar, Yuliya A. Putintseva, Konstantin V. Krutovsky
Chloroplast Genome Yields Unusual Seven-Cluster Structure
This is the extended paper presented as a poster at IWBBIO2016 (Granada, Spain)
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We studied the structuredness of the chloroplast genome of Siberian larch. Clusters in 63-dimensional space were identified with the elastic map technique, where the objects to be clustered are the different fragments of the genome. A seven-cluster structure in the distribution of those fragments, reported previously, has been found. Unlike the previous results, we found a drastically different composition of the clusters comprising the fragments extracted from coding and non-coding regions of the genome.
[ { "created": "Fri, 15 Apr 2016 08:21:08 GMT", "version": "v1" } ]
2016-04-18
[ [ "Sadovsky", "Michael G.", "" ], [ "Bondar", "Eugenia I.", "" ], [ "Putintseva", "Yuliya A.", "" ], [ "Krutovsky", "Konstantin V.", "" ] ]
We studied the structuredness of the chloroplast genome of Siberian larch. Clusters in 63-dimensional space were identified with the elastic map technique, where the objects to be clustered are the different fragments of the genome. A seven-cluster structure in the distribution of those fragments, reported previously, has been found. Unlike the previous results, we found a drastically different composition of the clusters comprising the fragments extracted from coding and non-coding regions of the genome.
1708.07046
Diego Garlaschelli
Assaf Almog, Ori Roethler, Renate Buijink, Stephan Michel, Johanna H. Meijer, Jos H. T. Rohling, and Diego Garlaschelli
Uncovering functional brain signature via random matrix theory
null
PLoS Computational Biology 15(5): e1006934 (2019)
10.1371/journal.pcbi.1006934
null
q-bio.NC physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The brain is organized in a modular way, serving multiple functionalities. This multiplicity requires that both positive (e.g. excitatory, phase-coherent) and negative (e.g. inhibitory, phase-opposing) interactions take place across brain modules. Unfortunately, most methods to detect modules from time series either neglect or convert to positive any measured negative correlation. This may leave a significant part of the sign-dependent functional structure undetected. Here we present a novel method, based on random matrix theory, for the identification of sign-dependent modules in the brain. Our method filters out the joint effects of local (unit-specific) noise and global (system-wide) dependencies that empirically obfuscate such structure. The method is guaranteed to identify an optimally contrasted functional `signature', i.e. a partition into modules that are positively correlated internally and negatively correlated across. The method is purely data-driven, does not use any arbitrary threshold or network projection, and outputs only statistically significant structure. In measurements of neuronal gene expression in the biological clock of mice, the method systematically uncovers two otherwise undetectable, negatively correlated modules whose relative size and mutual interaction strength are found to depend on photoperiod. The neurons alternating between the two modules define a candidate region of functional plasticity for circadian modulation.
[ { "created": "Sat, 5 Aug 2017 05:50:45 GMT", "version": "v1" }, { "created": "Tue, 6 Feb 2018 09:41:30 GMT", "version": "v2" } ]
2019-07-29
[ [ "Almog", "Assaf", "" ], [ "Roethler", "Ori", "" ], [ "Buijink", "Renate", "" ], [ "Michel", "Stephan", "" ], [ "Meijer", "Johanna H", "" ], [ "Rohling", "Jos H. T.", "" ], [ "Garlaschelli", "Diego", "" ] ...
The brain is organized in a modular way, serving multiple functionalities. This multiplicity requires that both positive (e.g. excitatory, phase-coherent) and negative (e.g. inhibitory, phase-opposing) interactions take place across brain modules. Unfortunately, most methods to detect modules from time series either neglect or convert to positive any measured negative correlation. This may leave a significant part of the sign-dependent functional structure undetected. Here we present a novel method, based on random matrix theory, for the identification of sign-dependent modules in the brain. Our method filters out the joint effects of local (unit-specific) noise and global (system-wide) dependencies that empirically obfuscate such structure. The method is guaranteed to identify an optimally contrasted functional `signature', i.e. a partition into modules that are positively correlated internally and negatively correlated across. The method is purely data-driven, does not use any arbitrary threshold or network projection, and outputs only statistically significant structure. In measurements of neuronal gene expression in the biological clock of mice, the method systematically uncovers two otherwise undetectable, negatively correlated modules whose relative size and mutual interaction strength are found to depend on photoperiod. The neurons alternating between the two modules define a candidate region of functional plasticity for circadian modulation.
2311.07494
Tim Van Den Bossche
Tim Van Den Bossche, Pieter Verschaffelt, Tibo Vande Moortele, Peter Dawyndt, Lennart Martens, Bart Mesuere
Biodiversity analysis of metaproteomics samples with Unipept: a comprehensive tutorial
null
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Metaproteomics has become a crucial omics technology for studying microbiomes. In this area, the Unipept ecosystem, accessible at https://unipept.ugent.be, has emerged as an invaluable resource for analyzing metaproteomic data. It offers in-depth insights into both taxonomic distributions and functional characteristics of complex ecosystems. This tutorial explains essential concepts like Lowest Common Ancestor (LCA) determination and the handling of peptides with missed cleavages. It also provides a detailed, step-by-step guide on using the Unipept Web application and Unipept Desktop for thorough metaproteomics analyses. By integrating theoretical principles with practical methodologies, this tutorial empowers researchers with the essential knowledge and tools needed to fully utilize metaproteomics in their microbiome studies.
[ { "created": "Mon, 13 Nov 2023 17:31:06 GMT", "version": "v1" } ]
2023-11-14
[ [ "Bossche", "Tim Van Den", "" ], [ "Verschaffelt", "Pieter", "" ], [ "Moortele", "Tibo Vande", "" ], [ "Dawyndt", "Peter", "" ], [ "Martens", "Lennart", "" ], [ "Mesuere", "Bart", "" ] ]
Metaproteomics has become a crucial omics technology for studying microbiomes. In this area, the Unipept ecosystem, accessible at https://unipept.ugent.be, has emerged as an invaluable resource for analyzing metaproteomic data. It offers in-depth insights into both taxonomic distributions and functional characteristics of complex ecosystems. This tutorial explains essential concepts like Lowest Common Ancestor (LCA) determination and the handling of peptides with missed cleavages. It also provides a detailed, step-by-step guide on using the Unipept Web application and Unipept Desktop for thorough metaproteomics analyses. By integrating theoretical principles with practical methodologies, this tutorial empowers researchers with the essential knowledge and tools needed to fully utilize metaproteomics in their microbiome studies.
1903.05947
Junqing Tang
Bin Zhang, Junqing Tang, Yi Wang, Hongfeng Zhang, Gang Xu, Yu Lin, Xiaomin Wu
Designing wildlife crossing structures for ungulates in a desert landscape: A case study in China
20 pages, 7 figures, Submitted to Transportation Research Part D
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper reports on the design of wildlife crossing structures (WCSs) along a new expressway in China, which exemplifies the country's increasing efforts on wildlife protection in infrastructure projects. Expert knowledge and field surveys were used to determine the target species in the study area and the quantity, locations, size, and type of the WCSs. The results on relative abundance index and encounter rate showed that the ibex (\textit{Capra ibex}), argali sheep (\textit{Ovis ammon}), and goitered gazelle (\textit{Gazella subgutturosa}) are the main ungulates in the study area. Among them, the goitered gazelle is the most widely distributed species. WCSs were proposed based on the estimated crossing hotspots. The mean deviation distance between those hotspots and their nearest proposed WCSs is around 341 m. In addition, the 16 proposed underpass WCSs have a width of no less than 12 m and a height of no less than 3.5 m, which is believed to be sufficient for ungulates in the area. Given the limited availability of high-resolution movement data and wildlife-vehicle collision data during the road's early design stage, the approach demonstrated in this paper facilitates practical spatial planning and provides insights into designing WCSs in a desert landscape.
[ { "created": "Thu, 14 Mar 2019 12:44:39 GMT", "version": "v1" } ]
2019-03-15
[ [ "Zhang", "Bin", "" ], [ "Tang", "Junqing", "" ], [ "Wang", "Yi", "" ], [ "Zhang", "Hongfeng", "" ], [ "Xu", "Gang", "" ], [ "Lin", "Yu", "" ], [ "Wu", "Xiaomin", "" ] ]
This paper reports on the design of wildlife crossing structures (WCSs) along a new expressway in China, which exemplifies the country's increasing efforts on wildlife protection in infrastructure projects. Expert knowledge and field surveys were used to determine the target species in the study area and the quantity, locations, size, and type of the WCSs. The results on relative abundance index and encounter rate showed that the ibex (\textit{Capra ibex}), argali sheep (\textit{Ovis ammon}), and goitered gazelle (\textit{Gazella subgutturosa}) are the main ungulates in the study area. Among them, the goitered gazelle is the most widely distributed species. WCSs were proposed based on the estimated crossing hotspots. The mean deviation distance between those hotspots and their nearest proposed WCSs is around 341 m. In addition, the 16 proposed underpass WCSs have a width of no less than 12 m and a height of no less than 3.5 m, which is believed to be sufficient for ungulates in the area. Given the limited availability of high-resolution movement data and wildlife-vehicle collision data during the road's early design stage, the approach demonstrated in this paper facilitates practical spatial planning and provides insights into designing WCSs in a desert landscape.
1705.08265
Biswa Sengupta
Biswa Sengupta and Karl Friston
Sentient Self-Organization: Minimal dynamics and circular causality
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Theoretical arguments and empirical evidence in neuroscience suggest that organisms represent or model their environment by minimizing a variational free-energy bound on the surprise associated with sensory signals from the environment. In this paper, we study phase transitions in coupled dissipative dynamical systems (complex Ginzburg-Landau equations) under a variety of coupling conditions to model the exchange of a system (agent) with its environment. We show that arbitrary couplings between sensory signals and the internal state of a system -- or those between its action and external (environmental) states -- do not guarantee synchronous dynamics between external and internal states: the spatial structure and the temporal dynamics of sensory signals and action (that comprise the system's Markov blanket) have to be pruned to produce synchrony. This synchrony is necessary for an agent to infer environmental states -- a prerequisite for survival. Therefore, such sentient dynamics relies primarily on approximate synchronization between the agent and its niche.
[ { "created": "Thu, 18 May 2017 14:04:59 GMT", "version": "v1" } ]
2017-05-24
[ [ "Sengupta", "Biswa", "" ], [ "Friston", "Karl", "" ] ]
Theoretical arguments and empirical evidence in neuroscience suggest that organisms represent or model their environment by minimizing a variational free-energy bound on the surprise associated with sensory signals from the environment. In this paper, we study phase transitions in coupled dissipative dynamical systems (complex Ginzburg-Landau equations) under a variety of coupling conditions to model the exchange of a system (agent) with its environment. We show that arbitrary couplings between sensory signals and the internal state of a system -- or those between its action and external (environmental) states -- do not guarantee synchronous dynamics between external and internal states: the spatial structure and the temporal dynamics of sensory signals and action (that comprise the system's Markov blanket) have to be pruned to produce synchrony. This synchrony is necessary for an agent to infer environmental states -- a prerequisite for survival. Therefore, such sentient dynamics relies primarily on approximate synchronization between the agent and its niche.
2311.12814
Alvaro Prat
Alvaro Prat, Hisham Abdel Aty, Gintautas Kamuntavi\v{c}ius, Tanya Paquet, Povilas Norvai\v{s}as, Piero Gasparotto, Roy Tal
HydraScreen: A Generalizable Structure-Based Deep Learning Approach to Drug Discovery
null
null
null
null
q-bio.BM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose HydraScreen, a deep-learning approach that aims to provide a framework for more robust machine-learning-accelerated drug discovery. HydraScreen utilizes a state-of-the-art 3D convolutional neural network, designed for the effective representation of molecular structures and interactions in protein-ligand binding. We design an end-to-end pipeline for high-throughput screening and lead optimization, targeting applications in structure-based drug design. We assess our approach using established public benchmarks based on the CASF 2016 core set, achieving top-tier results in affinity and pose prediction (Pearson's r = 0.86, RMSE = 1.15, Top-1 = 0.95). Furthermore, we utilize a novel interaction profiling approach to identify potential biases in the model and dataset to boost interpretability and support the unbiased nature of our method. Finally, we showcase HydraScreen's capacity to generalize across unseen proteins and ligands, offering directions for future development of robust machine learning scoring functions. HydraScreen (accessible at https://hydrascreen.ro5.ai) provides a user-friendly GUI and a public API, facilitating easy assessment of individual protein-ligand complexes.
[ { "created": "Fri, 22 Sep 2023 18:48:34 GMT", "version": "v1" } ]
2023-11-23
[ [ "Prat", "Alvaro", "" ], [ "Aty", "Hisham Abdel", "" ], [ "Kamuntavičius", "Gintautas", "" ], [ "Paquet", "Tanya", "" ], [ "Norvaišas", "Povilas", "" ], [ "Gasparotto", "Piero", "" ], [ "Tal", "Roy", "" ] ...
We propose HydraScreen, a deep-learning approach that aims to provide a framework for more robust machine-learning-accelerated drug discovery. HydraScreen utilizes a state-of-the-art 3D convolutional neural network, designed for the effective representation of molecular structures and interactions in protein-ligand binding. We design an end-to-end pipeline for high-throughput screening and lead optimization, targeting applications in structure-based drug design. We assess our approach using established public benchmarks based on the CASF 2016 core set, achieving top-tier results in affinity and pose prediction (Pearson's r = 0.86, RMSE = 1.15, Top-1 = 0.95). Furthermore, we utilize a novel interaction profiling approach to identify potential biases in the model and dataset to boost interpretability and support the unbiased nature of our method. Finally, we showcase HydraScreen's capacity to generalize across unseen proteins and ligands, offering directions for future development of robust machine learning scoring functions. HydraScreen (accessible at https://hydrascreen.ro5.ai) provides a user-friendly GUI and a public API, facilitating easy assessment of individual protein-ligand complexes.
1911.08980
Felix Jost
Felix Jost, Enrico Schalk, Daniela Weber, Hartmut Doehner, Thomas Fischer, Sebastian Sager
Model-based optimal AML consolidation treatment
null
null
null
null
q-bio.QM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neutropenia is an adverse event commonly arising during intensive chemotherapy of acute myeloid leukemia (AML). It is often associated with infectious complications. Mathematical modeling, simulation, and optimization of the treatment process would be a valuable tool to support clinical decision making, potentially resulting in less severe side effects and deeper remissions. However, until now, there has been no validated mathematical model available to simulate the effect of chemotherapy treatment on white blood cell (WBC) counts and leukemic cells simultaneously. We developed a population pharmacokinetic/pharmacodynamic (PK/PD) model combining a myelosuppression model considering endogenous granulocyte-colony stimulating factor (G-CSF), a PK model for cytarabine (Ara-C), a subcutaneous absorption model for exogenous G-CSF, and a two-compartment model for leukemic blasts. This model was fitted to data of 44 AML patients during consolidation therapy with a novel Ara-C plus G-CSF schedule from a phase II controlled clinical trial. Additionally, we were able to optimize treatment schedules with respect to disease progression, WBC nadirs, and the amount of Ara-C and G-CSF. The developed PK/PD model provided good prediction accuracies and an interpretation of the interaction between WBCs, G-CSF, and blasts. For 14 patients (those with available bone marrow blast counts), we achieved a median 4.2-fold higher WBC count at nadir, which is the most critical time during consolidation therapy. The simulation results showed that relative bone marrow blast counts remained below the clinically important threshold of 5%, with a median of 60% reduction in Ara-C.
[ { "created": "Wed, 20 Nov 2019 15:46:42 GMT", "version": "v1" }, { "created": "Sat, 18 Apr 2020 12:02:33 GMT", "version": "v2" } ]
2020-04-21
[ [ "Jost", "Felix", "" ], [ "Schalk", "Enrico", "" ], [ "Weber", "Daniela", "" ], [ "Doehner", "Hartmut", "" ], [ "Fischer", "Thomas", "" ], [ "Sager", "Sebastian", "" ] ]
Neutropenia is an adverse event commonly arising during intensive chemotherapy of acute myeloid leukemia (AML). It is often associated with infectious complications. Mathematical modeling, simulation, and optimization of the treatment process would be a valuable tool to support clinical decision making, potentially resulting in less severe side effects and deeper remissions. However, until now, there has been no validated mathematical model available to simulate the effect of chemotherapy treatment on white blood cell (WBC) counts and leukemic cells simultaneously. We developed a population pharmacokinetic/pharmacodynamic (PK/PD) model combining a myelosuppression model considering endogenous granulocyte-colony stimulating factor (G-CSF), a PK model for cytarabine (Ara-C), a subcutaneous absorption model for exogenous G-CSF, and a two-compartment model for leukemic blasts. This model was fitted to data of 44 AML patients during consolidation therapy with a novel Ara-C plus G-CSF schedule from a phase II controlled clinical trial. Additionally, we were able to optimize treatment schedules with respect to disease progression, WBC nadirs, and the amount of Ara-C and G-CSF. The developed PK/PD model provided good prediction accuracies and an interpretation of the interaction between WBCs, G-CSF, and blasts. For 14 patients (those with available bone marrow blast counts), we achieved a median 4.2-fold higher WBC count at nadir, which is the most critical time during consolidation therapy. The simulation results showed that relative bone marrow blast counts remained below the clinically important threshold of 5%, with a median of 60% reduction in Ara-C.
2005.00389
Nicolas Houy
Julien Flaig, Nicolas Houy
Random tags can sustain high heterophily levels
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a spatial model of the emergence of cooperation with synchronous births and deaths. Agents bear a tag and interact with their neighbors by playing the prisoner's dilemma game with strategies depending on their own and their opponent's tags. An agent's payoff determines its chances of reproducing and passing on its strategy. We show that when tags are assigned at random rather than inherited, a significant heterophilic population of about 40~\% of the whole can emerge and persist. Heterophilics defect on agents bearing the same tag as theirs and cooperate with others. In our setting, the emergence of heterophily is explained by the correlation between an agent's payoff and its neighbors' payoffs. The advantage of heterophily over homophily (cooperating with agents bearing one's tag and defecting with others) when tags are assigned at random makes the emergence of the latter an even more interesting phenomenon than previously thought.
[ { "created": "Fri, 1 May 2020 14:13:50 GMT", "version": "v1" } ]
2020-05-04
[ [ "Flaig", "Julien", "" ], [ "Houy", "Nicolas", "" ] ]
We consider a spatial model of the emergence of cooperation with synchronous births and deaths. Agents bear a tag and interact with their neighbors by playing the prisoner's dilemma game with strategies depending on their own and their opponent's tags. An agent's payoff determines its chances of reproducing and passing on its strategy. We show that when tags are assigned at random rather than inherited, a significant heterophilic population of about 40~\% of the whole can emerge and persist. Heterophilics defect on agents bearing the same tag as theirs and cooperate with others. In our setting, the emergence of heterophily is explained by the correlation between an agent's payoff and its neighbors' payoffs. The advantage of heterophily over homophily (cooperating with agents bearing one's tag and defecting with others) when tags are assigned at random makes the emergence of the latter an even more interesting phenomenon than previously thought.
1601.06940
Simone Pigolotti
Simone Pigolotti, Pablo Sartori
Protocols for copying and proofreading in template-assisted polymerization
19 pages, 7 figures
J. Stat. Phys. 1-15, December 2015
10.1007/s10955-015-1399-2
null
q-bio.SC cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We discuss how information encoded in a template polymer can be stochastically copied into a copy polymer. We consider four different stochastic copy protocols of increasing complexity, inspired by building blocks of the mRNA translation pathway. In the first protocol, monomer incorporation occurs in a single stochastic transition. We then move to a more elaborate protocol in which an intermediate step can be used for error correction. Finally, we discuss the operating regimes of two kinetic proofreading protocols: one in which proofreading acts from the final copying step, and one in which it acts from an intermediate step. We review known results for these models and, in some cases, extend them to analyze all possible combinations of energetic and kinetic discrimination. We show that, in each of these protocols, only a limited number of these combinations leads to an improvement of the overall copying accuracy.
[ { "created": "Tue, 26 Jan 2016 09:19:53 GMT", "version": "v1" } ]
2016-03-23
[ [ "Pigolotti", "Simone", "" ], [ "Sartori", "Pablo", "" ] ]
We discuss how information encoded in a template polymer can be stochastically copied into a copy polymer. We consider four different stochastic copy protocols of increasing complexity, inspired by building blocks of the mRNA translation pathway. In the first protocol, monomer incorporation occurs in a single stochastic transition. We then move to a more elaborate protocol in which an intermediate step can be used for error correction. Finally, we discuss the operating regimes of two kinetic proofreading protocols: one in which proofreading acts from the final copying step, and one in which it acts from an intermediate step. We review known results for these models and, in some cases, extend them to analyze all possible combinations of energetic and kinetic discrimination. We show that, in each of these protocols, only a limited number of these combinations leads to an improvement of the overall copying accuracy.
2301.02196
Luis Sacouto
Luis Sacouto and Andreas Wichert
Competitive learning to generate sparse representations for associative memory
null
null
null
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most well-established brain principles, Hebbian learning, has led to the theoretical concept of neural assemblies. Based on it, many interesting brain theories have been spawned. Palm's work implements this concept through binary associative memory, in a model that not only has wide cognitive explanatory power but also makes neuroscientific predictions. Yet, associative memory can only work with logarithmically sparse representations, which makes it extremely difficult to apply the model to real data. We propose a biologically plausible network that encodes images into codes that are suitable for associative memory. It is organized into groups of neurons that specialize on local receptive fields and learn through a competitive scheme. After conducting auto- and hetero-association experiments on two visual data sets, we conclude that our network not only beats sparse coding baselines but also comes close to the performance achieved using optimal random codes.
[ { "created": "Thu, 5 Jan 2023 17:57:52 GMT", "version": "v1" } ]
2023-01-06
[ [ "Sacouto", "Luis", "" ], [ "Wichert", "Andreas", "" ] ]
One of the most well-established brain principles, Hebbian learning, has led to the theoretical concept of neural assemblies. Based on it, many interesting brain theories have been spawned. Palm's work implements this concept through binary associative memory, in a model that not only has wide cognitive explanatory power but also makes neuroscientific predictions. Yet, associative memory can only work with logarithmically sparse representations, which makes it extremely difficult to apply the model to real data. We propose a biologically plausible network that encodes images into codes that are suitable for associative memory. It is organized into groups of neurons that specialize on local receptive fields and learn through a competitive scheme. After conducting auto- and hetero-association experiments on two visual data sets, we conclude that our network not only beats sparse coding baselines but also comes close to the performance achieved using optimal random codes.
2305.12184
Lucy D'Agostino McGowan
Lucy D'Agostino McGowan, Shirlee Wohl, Justin Lessler
Power and sample size calculations for testing the ratio of reproductive values in phylogenetic samples
null
null
null
null
q-bio.PE stat.AP stat.ME
http://creativecommons.org/licenses/by/4.0/
The quality of the inferences we make from pathogen sequence data is determined by the number and composition of pathogen sequences that make up the sample used to drive that inference. However, there remains limited guidance on how to best structure and power studies when the end goal is phylogenetic inference. One question that we can attempt to answer with molecular data is whether some people are more likely to transmit a pathogen than others. Here we present an estimator to quantify differential transmission, as measured by the ratio of reproductive numbers between people with different characteristics, using transmission pairs linked by molecular data, along with a sample size calculation for this estimator. We also provide extensions to our method to correct for imperfect identification of transmission linked pairs, overdispersion in the transmission process, and group imbalance. We validate this method via simulation and provide tools to implement it in an R package, phylosamp.
[ { "created": "Sat, 20 May 2023 12:39:29 GMT", "version": "v1" }, { "created": "Fri, 9 Jun 2023 18:03:32 GMT", "version": "v2" } ]
2023-06-13
[ [ "McGowan", "Lucy D'Agostino", "" ], [ "Wohl", "Shirlee", "" ], [ "Lessler", "Justin", "" ] ]
The quality of the inferences we make from pathogen sequence data is determined by the number and composition of pathogen sequences that make up the sample used to drive that inference. However, there remains limited guidance on how to best structure and power studies when the end goal is phylogenetic inference. One question that we can attempt to answer with molecular data is whether some people are more likely to transmit a pathogen than others. Here we present an estimator to quantify differential transmission, as measured by the ratio of reproductive numbers between people with different characteristics, using transmission pairs linked by molecular data, along with a sample size calculation for this estimator. We also provide extensions to our method to correct for imperfect identification of transmission linked pairs, overdispersion in the transmission process, and group imbalance. We validate this method via simulation and provide tools to implement it in an R package, phylosamp.
2402.13297
Zhanglu Yan
Zhanglu Yan, Weiran Chu, Yuhua Sheng, Kaiwen Tang, Shida Wang, Yanfeng Liu, Weng-Fai Wong
Integrating Deep Learning and Synthetic Biology: A Co-Design Approach for Enhancing Gene Expression via N-terminal Coding Sequences
null
null
null
null
q-bio.QM cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
N-terminal coding sequence (NCS) influences gene expression by impacting the translation initiation rate. The NCS optimization problem is to find an NCS that maximizes gene expression. The problem is important in genetic engineering. However, current methods for NCS optimization, such as rational design and statistics-guided approaches, are labor-intensive and yield only relatively small improvements. This paper introduces a deep learning/synthetic biology co-designed few-shot training workflow for NCS optimization. Our method utilizes k-nearest encoding followed by word2vec to encode the NCS, then performs feature extraction using attention mechanisms, before constructing a time-series network for predicting gene expression intensity; finally, a direct search algorithm identifies the optimal NCS with limited training data. We took green fluorescent protein (GFP) expressed by Bacillus subtilis as a reporter protein for NCSs, and employed the fluorescence enhancement factor as the metric of NCS optimization. Within just six iterative experiments, our model generated an NCS (MLD62) that increased average GFP expression by 5.41-fold, outperforming the state-of-the-art NCS designs. Extending our findings beyond GFP, we showed that our engineered NCS (MLD62) can effectively boost the production of N-acetylneuraminic acid by enhancing the expression of the crucial rate-limiting GNA1 gene, demonstrating its practical utility. We have open-sourced our NCS expression database and experimental procedures for public use.
[ { "created": "Tue, 20 Feb 2024 05:41:46 GMT", "version": "v1" } ]
2024-02-22
[ [ "Yan", "Zhanglu", "" ], [ "Chu", "Weiran", "" ], [ "Sheng", "Yuhua", "" ], [ "Tang", "Kaiwen", "" ], [ "Wang", "Shida", "" ], [ "Liu", "Yanfeng", "" ], [ "Wong", "Weng-Fai", "" ] ]
N-terminal coding sequence (NCS) influences gene expression by impacting the translation initiation rate. The NCS optimization problem is to find an NCS that maximizes gene expression. The problem is important in genetic engineering. However, current methods for NCS optimization, such as rational design and statistics-guided approaches, are labor-intensive and yield only relatively small improvements. This paper introduces a deep learning/synthetic biology co-designed few-shot training workflow for NCS optimization. Our method utilizes k-nearest encoding followed by word2vec to encode the NCS, then performs feature extraction using attention mechanisms, before constructing a time-series network for predicting gene expression intensity; finally, a direct search algorithm identifies the optimal NCS with limited training data. We took green fluorescent protein (GFP) expressed by Bacillus subtilis as a reporter protein for NCSs, and employed the fluorescence enhancement factor as the metric of NCS optimization. Within just six iterative experiments, our model generated an NCS (MLD62) that increased average GFP expression by 5.41-fold, outperforming the state-of-the-art NCS designs. Extending our findings beyond GFP, we showed that our engineered NCS (MLD62) can effectively boost the production of N-acetylneuraminic acid by enhancing the expression of the crucial rate-limiting GNA1 gene, demonstrating its practical utility. We have open-sourced our NCS expression database and experimental procedures for public use.
2406.09586
Tinglin Huang
Tinglin Huang, Zhenqiao Song, Rex Ying, Wengong Jin
Protein-Nucleic Acid Complex Modeling with Frame Averaging Transformer
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Nucleic acid-based drugs like aptamers have recently demonstrated great therapeutic potential. However, experimental platforms for aptamer screening are costly, and the scarcity of labeled data presents a challenge for supervised methods to learn protein-aptamer binding. To this end, we develop an unsupervised learning approach based on the predicted pairwise contact map between a protein and a nucleic acid and demonstrate its effectiveness in protein-aptamer binding prediction. Our model is based on FAFormer, a novel equivariant transformer architecture that seamlessly integrates frame averaging (FA) within each transformer block. This integration allows our model to infuse geometric information into node features while preserving the spatial semantics of coordinates, leading to greater expressive power than standard FA models. Our results show that FAFormer outperforms existing equivariant models in contact map prediction across three protein complex datasets, with over 10% relative improvement. Moreover, we curate five real-world protein-aptamer interaction datasets and show that the contact map predicted by FAFormer serves as a strong binding indicator for aptamer screening.
[ { "created": "Thu, 13 Jun 2024 20:46:51 GMT", "version": "v1" } ]
2024-06-17
[ [ "Huang", "Tinglin", "" ], [ "Song", "Zhenqiao", "" ], [ "Ying", "Rex", "" ], [ "Jin", "Wengong", "" ] ]
Nucleic acid-based drugs like aptamers have recently demonstrated great therapeutic potential. However, experimental platforms for aptamer screening are costly, and the scarcity of labeled data presents a challenge for supervised methods to learn protein-aptamer binding. To this end, we develop an unsupervised learning approach based on the predicted pairwise contact map between a protein and a nucleic acid and demonstrate its effectiveness in protein-aptamer binding prediction. Our model is based on FAFormer, a novel equivariant transformer architecture that seamlessly integrates frame averaging (FA) within each transformer block. This integration allows our model to infuse geometric information into node features while preserving the spatial semantics of coordinates, leading to greater expressive power than standard FA models. Our results show that FAFormer outperforms existing equivariant models in contact map prediction across three protein complex datasets, with over 10% relative improvement. Moreover, we curate five real-world protein-aptamer interaction datasets and show that the contact map predicted by FAFormer serves as a strong binding indicator for aptamer screening.
2211.00302
Rahimah Zakaria
Adi Wijaya, Noor Akhmad Setiawan, Asma Hayati Ahmad, Rahimah Zakaria, Zahiruddin Othman
Electroencephalography and mild cognitive impairment research: A scoping review and bibliometric analysis (ScoRBA)
28 pages, 4 figures, 2 Tables
null
null
null
q-bio.NC stat.ML
http://creativecommons.org/licenses/by-sa/4.0/
Background: Mild cognitive impairment (MCI) is often considered a precursor to Alzheimer's disease (AD) due to the high rate of progression from MCI to AD. Sensitive neural biomarkers may provide a tool for an accurate MCI diagnosis, enabling earlier and perhaps more effective treatment. Despite the availability of numerous neuroscience techniques, electroencephalography (EEG) is the most popular and frequently used tool among researchers due to its low cost and superior temporal resolution. Objective: We conducted a scoping review of EEG and MCI between 2012 and 2022 to track the progression of research in this field. Methods: In contrast to previous scoping reviews, the data charting was aided by co-occurrence analysis using VOSviewer, while data reporting adopted the Patterns, Advances, Gaps, Evidence of Practice, and Research Recommendations (PAGER) framework to increase the quality of the results. Results: Event-related potentials (ERPs) and EEG, epilepsy, quantitative EEG (QEEG), and EEG-based machine learning were the research themes addressed by 2310 peer-reviewed articles on EEG and MCI. Conclusion: Our review identified the main research themes in EEG and MCI, with high-accuracy detection of seizures and MCI performed using ERP/EEG, QEEG, and EEG-based machine learning frameworks.
[ { "created": "Tue, 1 Nov 2022 06:52:19 GMT", "version": "v1" } ]
2022-11-02
[ [ "Wijaya", "Adi", "" ], [ "Setiawan", "Noor Akhmad", "" ], [ "Ahmad", "Asma Hayati", "" ], [ "Zakaria", "Rahimah", "" ], [ "Othman", "Zahiruddin", "" ] ]
Background: Mild cognitive impairment (MCI) is often considered a precursor to Alzheimer's disease (AD) due to the high rate of progression from MCI to AD. Sensitive neural biomarkers may provide a tool for an accurate MCI diagnosis, enabling earlier and perhaps more effective treatment. Despite the availability of numerous neuroscience techniques, electroencephalography (EEG) is the most popular and frequently used tool among researchers due to its low cost and superior temporal resolution. Objective: We conducted a scoping review of EEG and MCI between 2012 and 2022 to track the progression of research in this field. Methods: In contrast to previous scoping reviews, the data charting was aided by co-occurrence analysis using VOSviewer, while data reporting adopted the Patterns, Advances, Gaps, Evidence of Practice, and Research Recommendations (PAGER) framework to increase the quality of the results. Results: Event-related potentials (ERPs) and EEG, epilepsy, quantitative EEG (QEEG), and EEG-based machine learning were the research themes addressed by 2310 peer-reviewed articles on EEG and MCI. Conclusion: Our review identified the main research themes in EEG and MCI, with high-accuracy detection of seizures and MCI performed using ERP/EEG, QEEG, and EEG-based machine learning frameworks.
2004.13695
Carlos Meijide-Garc\'ia
Marcos Matabuena, Carlos Meijide-Garc\'ia, Pablo Rodr\'iguez-Mier, V\'ictor Lebor\'an
COVID-19: Estimating spread in Spain solving an inverse problem with a probabilistic model
36 pag
null
null
null
q-bio.PE q-bio.QM stat.AP
http://creativecommons.org/licenses/by/4.0/
We introduce a new probabilistic model to estimate the real spread of the novel SARS-CoV-2 virus across regions or countries. Our model simulates the behavior of each individual in a population according to a probabilistic model and solves an inverse problem: we estimate the real number of recovered and infected people using mortality records. In addition, the model is dynamic in the sense that it takes into account the policy measures introduced when we solve the inverse problem. The results obtained in Spain have particular practical relevance: in the worst-case scenario, the number of infected individuals can be $17$ times higher than the data provided by the Spanish government on April 26th. Assuming that the number of fatalities reflected in the statistics is correct, $9.8$ percent of the population may be infected or may have already recovered from the virus in Madrid, one of the most affected regions in Spain. However, if we assume that the number of fatalities is twice as high as the official numbers, the number of infections could have reached $19.5\%$. In Galicia, one of the least affected regions, the number of infections does not reach $2.5\%$. Based on our findings, we can: i) estimate the risk of a new outbreak before autumn if we lift the quarantine; ii) assess the degree of immunization of the population in each region; and iii) forecast or simulate the effect of the policies to be introduced in the future, based on the number of infected or recovered individuals in the population.
[ { "created": "Tue, 28 Apr 2020 17:50:44 GMT", "version": "v1" }, { "created": "Sun, 3 May 2020 17:01:42 GMT", "version": "v2" } ]
2020-05-05
[ [ "Matabuena", "Marcos", "" ], [ "Meijide-García", "Carlos", "" ], [ "Rodríguez-Mier", "Pablo", "" ], [ "Leborán", "Víctor", "" ] ]
We introduce a new probabilistic model to estimate the real spread of the novel SARS-CoV-2 virus across regions or countries. Our model simulates the behavior of each individual in a population according to a probabilistic model and solves an inverse problem: we estimate the real number of recovered and infected people using mortality records. In addition, the model is dynamic in the sense that it takes into account the policy measures introduced when we solve the inverse problem. The results obtained in Spain have particular practical relevance: in the worst-case scenario, the number of infected individuals can be $17$ times higher than the data provided by the Spanish government on April 26th. Assuming that the number of fatalities reflected in the statistics is correct, $9.8$ percent of the population may be infected or may have already recovered from the virus in Madrid, one of the most affected regions in Spain. However, if we assume that the number of fatalities is twice as high as the official numbers, the number of infections could have reached $19.5\%$. In Galicia, one of the least affected regions, the number of infections does not reach $2.5\%$. Based on our findings, we can: i) estimate the risk of a new outbreak before autumn if we lift the quarantine; ii) assess the degree of immunization of the population in each region; and iii) forecast or simulate the effect of the policies to be introduced in the future, based on the number of infected or recovered individuals in the population.
1502.01463
Wolfram Liebermeister
Wolfram Liebermeister, Timo Lubitz, and Jens Hahn
SBtab - Conventions for structured data tables in Systems Biology
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data tables in the form of spreadsheets or delimited text files are the most utilised data format in Systems Biology. However, they are often not sufficiently structured and lack clear naming conventions that would be required for modelling. We propose the SBtab format as an attempt to establish an easy-to-use table format that is both flexible and clearly structured. It comprises defined table types for different kinds of data; syntax rules for the usage of names, shortnames, and database identifiers used for annotation; and standardised formulae for reaction stoichiometries. Predefined table types can be used to define biochemical network models and the biochemical constants therein. Users can also define their own table types, adjusting SBtab to other types of data. Software code, tools, and further information can be found at www.sbtab.net.
[ { "created": "Thu, 5 Feb 2015 08:53:46 GMT", "version": "v1" }, { "created": "Tue, 29 Sep 2015 04:29:42 GMT", "version": "v2" } ]
2015-09-30
[ [ "Liebermeister", "Wolfram", "" ], [ "Lubitz", "Timo", "" ], [ "Hahn", "Jens", "" ] ]
Data tables in the form of spreadsheets or delimited text files are the most utilised data format in Systems Biology. However, they are often not sufficiently structured and lack clear naming conventions that would be required for modelling. We propose the SBtab format as an attempt to establish an easy-to-use table format that is both flexible and clearly structured. It comprises defined table types for different kinds of data; syntax rules for the usage of names, shortnames, and database identifiers used for annotation; and standardised formulae for reaction stoichiometries. Predefined table types can be used to define biochemical network models and the biochemical constants therein. Users can also define their own table types, adjusting SBtab to other types of data. Software code, tools, and further information can be found at www.sbtab.net.
1410.4237
Sergei Gepshtein
Sergey Savel'ev and Sergei Gepshtein
Interference of Neural Waves in Distributed Inhibition-stabilized Networks
35 pages, 12 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To gain insight into the neural events responsible for visual perception of static and dynamic optical patterns, we study how neural activation spreads in arrays of inhibition-stabilized neural networks with nearest-neighbor coupling. The activation generated in such networks by local stimuli propagates between locations, forming spatiotemporal waves that affect the dynamics of activation generated by stimuli separated spatially and temporally, and by stimuli with complex spatiotemporal structure. These interactions form characteristic interference patterns that make the network intrinsically selective for certain stimuli, such as modulations of luminance at specific spatial and temporal frequencies and specific velocities of visual motion. Due to the inherent nonlinearity of the network, its intrinsic tuning depends on stimulus intensity and contrast. The interference patterns have multiple features of "lateral" interactions between stimuli, well known in physiological and behavioral studies of visual systems. The diverse phenomena have been previously attributed to distinct neural circuits. Our results demonstrate how the canonical circuit can perform the diverse operations in a manner predicted by neural-wave interference.
[ { "created": "Wed, 15 Oct 2014 21:35:23 GMT", "version": "v1" }, { "created": "Wed, 20 Apr 2016 22:44:32 GMT", "version": "v2" }, { "created": "Thu, 1 Sep 2016 00:02:46 GMT", "version": "v3" } ]
2016-09-02
[ [ "Savel'ev", "Sergey", "" ], [ "Gepshtein", "Sergei", "" ] ]
To gain insight into the neural events responsible for visual perception of static and dynamic optical patterns, we study how neural activation spreads in arrays of inhibition-stabilized neural networks with nearest-neighbor coupling. The activation generated in such networks by local stimuli propagates between locations, forming spatiotemporal waves that affect the dynamics of activation generated by stimuli separated spatially and temporally, and by stimuli with complex spatiotemporal structure. These interactions form characteristic interference patterns that make the network intrinsically selective for certain stimuli, such as modulations of luminance at specific spatial and temporal frequencies and specific velocities of visual motion. Due to the inherent nonlinearity of the network, its intrinsic tuning depends on stimulus intensity and contrast. The interference patterns have multiple features of "lateral" interactions between stimuli, well known in physiological and behavioral studies of visual systems. The diverse phenomena have been previously attributed to distinct neural circuits. Our results demonstrate how the canonical circuit can perform the diverse operations in a manner predicted by neural-wave interference.
1506.07081
David Bortz
John T. Nardini, Douglas A. Chapnick, Xuedong Liu, David M. Bortz
The effects of MAPK activity on cell-cell adhesion during wound healing
26 pages, 9 figures
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The mechanisms underlying collective migration, or the coordinated movement of a population of cells, are not well understood despite the ubiquity of the phenomenon. As a means to investigate collective migration, we consider a wound healing scenario in which a population of cells fills in the empty space left by a scratch wound. Here we present a simplified mathematical model that uses reaction-diffusion equations to model collective migration during wound healing, with an emphasis on cell movement and its response to both cell signaling and cell-cell adhesion. We use the model to investigate the effect of the MAPK signaling cascade on cell-cell adhesion during wound healing after EGF treatment. Our results suggest that activation of the MAPK signaling cascade stimulates collective migration through increases in the pulling strength of leader cells. We further use the model to suggest that treating a cell population with EGF converts the time to wound closure (as a function of wound area) from parabolic to linear.
[ { "created": "Tue, 23 Jun 2015 16:33:23 GMT", "version": "v1" } ]
2015-06-24
[ [ "Nardini", "John T.", "" ], [ "Chapnick", "Douglas A.", "" ], [ "Liu", "Xuedong", "" ], [ "Bortz", "David. M.", "" ] ]
The mechanisms underlying collective migration, or the coordinated movement of a population of cells, are not well understood despite the ubiquity of the phenomenon. As a means to investigate collective migration, we consider a wound healing scenario in which a population of cells fills in the empty space left by a scratch wound. Here we present a simplified mathematical model that uses reaction-diffusion equations to model collective migration during wound healing, with an emphasis on cell movement and its response to both cell signaling and cell-cell adhesion. We use the model to investigate the effect of the MAPK signaling cascade on cell-cell adhesion during wound healing after EGF treatment. Our results suggest that activation of the MAPK signaling cascade stimulates collective migration through increases in the pulling strength of leader cells. We further use the model to suggest that treating a cell population with EGF converts the time to wound closure (as a function of wound area) from parabolic to linear.
2108.03129
Ali Salari
Ali Salari (1), Yohan Chatelain (1), Gregory Kiar (2), Tristan Glatard (1) ((1) Department of Computer Science and Software Engineering, Concordia University, Montr\'eal, QC, Canada, (2) Center for the Developing Brain, Child Mind Institute, New York, NY, USA)
Accurate simulation of operating system updates in neuroimaging using Monte-Carlo arithmetic
10 pages, 4 figures, 19 references
null
null
null
q-bio.NC eess.IV
http://creativecommons.org/licenses/by/4.0/
Operating system (OS) updates introduce numerical perturbations that impact the reproducibility of computational pipelines. In neuroimaging, this has important practical implications on the validity of computational results, particularly when obtained in systems such as high-performance computing clusters where the experimenter does not control software updates. We present a framework to reproduce the variability induced by OS updates in controlled conditions. We hypothesize that OS updates impact computational pipelines mainly through numerical perturbations originating in mathematical libraries, which we simulate using Monte-Carlo arithmetic in a framework called "fuzzy libmath" (FL). We applied this methodology to pre-processing pipelines of the Human Connectome Project, a flagship open-data project in neuroimaging. We found that FL-perturbed pipelines accurately reproduce the variability induced by OS updates and that this similarity is only mildly dependent on simulation parameters. Importantly, we also found that between-subject differences were preserved in both cases, though the between-run variability was of comparable magnitude for both FL and OS perturbations. We found the numerical precision in the HCP pre-processed images to be relatively low, with less than 8 significant bits among the 24 available, which motivates further investigation of the numerical stability of components in the tested pipeline. Overall, our results establish that FL accurately simulates the variability of results due to OS updates, and is a practical framework to quantify numerical uncertainty in neuroimaging.
[ { "created": "Fri, 6 Aug 2021 14:02:51 GMT", "version": "v1" } ]
2021-08-09
[ [ "Salari", "Ali", "" ], [ "Chatelain", "Yohan", "" ], [ "Kiar", "Gregory", "" ], [ "Glatard", "Tristan", "" ] ]
Operating system (OS) updates introduce numerical perturbations that impact the reproducibility of computational pipelines. In neuroimaging, this has important practical implications on the validity of computational results, particularly when obtained in systems such as high-performance computing clusters where the experimenter does not control software updates. We present a framework to reproduce the variability induced by OS updates in controlled conditions. We hypothesize that OS updates impact computational pipelines mainly through numerical perturbations originating in mathematical libraries, which we simulate using Monte-Carlo arithmetic in a framework called "fuzzy libmath" (FL). We applied this methodology to pre-processing pipelines of the Human Connectome Project, a flagship open-data project in neuroimaging. We found that FL-perturbed pipelines accurately reproduce the variability induced by OS updates and that this similarity is only mildly dependent on simulation parameters. Importantly, we also found that between-subject differences were preserved in both cases, though the between-run variability was of comparable magnitude for both FL and OS perturbations. We found the numerical precision in the HCP pre-processed images to be relatively low, with less than 8 significant bits among the 24 available, which motivates further investigation of the numerical stability of components in the tested pipeline. Overall, our results establish that FL accurately simulates the variability of results due to OS updates, and is a practical framework to quantify numerical uncertainty in neuroimaging.
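For intuition about the Monte-Carlo arithmetic idea, the sketch below injects relative noise of magnitude 2^-t into a computation and estimates significant bits from the spread of repeated runs. This mimics only the random-rounding principle; the actual fuzzy-libmath instrumentation operates inside the system math library, and the function names here are hypothetical.

```python
import math
import random

def mca_perturb(x, t=24):
    """Perturb x with relative noise of half-width 2**-t (illustrative
    stand-in for Monte-Carlo arithmetic random rounding)."""
    if x == 0.0:
        return 0.0
    return x * (1.0 + (random.random() - 0.5) * 2.0 ** (1 - t))

def significant_bits(samples):
    """Crude significant-bits estimate: -log2 of the relative spread."""
    mu = sum(samples) / len(samples)
    sigma = (sum((s - mu) ** 2 for s in samples) / len(samples)) ** 0.5
    if sigma == 0.0:
        return 53.0   # all runs identical at double precision
    return max(0.0, -math.log2(sigma / abs(mu)))

runs = [mca_perturb(math.exp(1.0)) for _ in range(100)]
print("estimated significant bits:", round(significant_bits(runs), 1))
```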
0801.1386
Ralf Blossey
Ralf Blossey, Helmut Schiessel
Kinetic proofreading of gene activation by chromatin remodeling
8 pages, 2 Figures; application added
null
10.2976/1.2909080
null
q-bio.MN q-bio.BM
null
Gene activation in eukaryotes involves the concerted action of histone tail modifiers, chromatin remodellers and transcription factors, whose precise coordination is currently unknown. We demonstrate that the experimentally observed interactions of the molecules are in accord with a kinetic proofreading scheme. Our finding could provide a basis for the development of quantitative models for gene regulation in eukaryotes based on the combinatorial interactions of chromatin modifiers.
[ { "created": "Wed, 9 Jan 2008 10:13:08 GMT", "version": "v1" }, { "created": "Mon, 28 Apr 2008 15:45:17 GMT", "version": "v2" } ]
2008-04-28
[ [ "Blossey", "Ralf", "" ], [ "Schiessel", "Helmut", "" ] ]
Gene activation in eukaryotes involves the concerted action of histone tail modifiers, chromatin remodellers and transcription factors, whose precise coordination is currently unknown. We demonstrate that the experimentally observed interactions of the molecules are in accord with a kinetic proofreading scheme. Our finding could provide a basis for the development of quantitative models for gene regulation in eukaryotes based on the combinatorial interactions of chromatin modifiers.
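The arithmetic behind kinetic proofreading is easy to reproduce. In the textbook Hopfield picture, each irreversible intermediate step races against unbinding, so every added stage multiplies the discrimination between correct and incorrect substrates. The rates below are generic placeholders, not measured values for chromatin-remodeling complexes.

```python
k_off_correct, k_off_wrong = 1.0, 10.0   # unbinding rates (1/s), generic
k_step = 0.1                             # rate of each committed forward step (1/s)

def acceptance(k_off, n_steps):
    """Probability that a bound complex survives n_steps irreversible
    forward steps before unbinding (each step races against unbinding)."""
    return (k_step / (k_step + k_off)) ** n_steps

for n in (1, 2, 3):
    ratio = acceptance(k_off_wrong, n) / acceptance(k_off_correct, n)
    print(f"{n} stage(s): wrong/correct activation ratio = {ratio:.2e}")
```

With these rates, each extra stage suppresses erroneous activation by roughly another factor of ten, which is the essence of the proofreading scheme invoked above.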
1910.05410
Ana Gonz\'alez-Marcos
J. de Pedro-Carracedo, A.M. Ugena, and A.P. Gonzalez-Marcos
Phase space reconstruction from a biological time series: A PhotoPlethysmoGraphic signal as a case study
12 pages, 12 figures with 37 subfigures
Appl. Sci. 2020, 10(4), 1430
10.3390/app10041430
null
q-bio.OT eess.SP nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the analysis of biological time series, the state space provides a framework for the study of systems with presumably deterministic properties. However, a physiological experiment typically captures an observable, in other words, a series of scalar measurements that characterize the temporal response of the physiological system under study; the dynamic variables that make up the state of the system at any time are not available. Therefore, state vectors must be reconstructed from the acquired observations alone to emulate the different states of the underlying system. This is what is known as the reconstruction of the state space, called phase space for real-world signals, so far only satisfactorily resolved by the method of delays. Each state vector consists of m components, extracted from successive observations delayed by a time t. The morphology of the geometric structure described by the state vectors, as well as its properties, depends on the chosen parameters t and m. Recovering the real dynamics of the system under study is subject to the correct determination of the parameters t and m. Only in this way can characteristics with true physical meaning be deduced, revealing aspects that reliably identify the dynamic complexity of the physiological system. The biological signal presented in this work, as a case study, is the PhotoPlethysmoGraphic (PPG) signal. We find that m is five for all the subjects analyzed and that t depends on the time interval over which it is evaluated. The H\'enon map and the Lorenz flow are used to facilitate a more intuitive understanding of the applied techniques.
[ { "created": "Thu, 10 Oct 2019 11:23:23 GMT", "version": "v1" } ]
2021-08-13
[ [ "de Pedro-Carracedo", "J.", "" ], [ "Ugena", "A. M.", "" ], [ "Gonzalez-Marcos", "A. P.", "" ] ]
In the analysis of biological time series, the state space provides a framework for the study of systems with presumably deterministic properties. However, a physiological experiment typically captures an observable, in other words, a series of scalar measurements that characterize the temporal response of the physiological system under study; the dynamic variables that make up the state of the system at any time are not available. Therefore, state vectors must be reconstructed from the acquired observations alone to emulate the different states of the underlying system. This is what is known as the reconstruction of the state space, called phase space for real-world signals, so far only satisfactorily resolved by the method of delays. Each state vector consists of m components, extracted from successive observations delayed by a time t. The morphology of the geometric structure described by the state vectors, as well as its properties, depends on the chosen parameters t and m. Recovering the real dynamics of the system under study is subject to the correct determination of the parameters t and m. Only in this way can characteristics with true physical meaning be deduced, revealing aspects that reliably identify the dynamic complexity of the physiological system. The biological signal presented in this work, as a case study, is the PhotoPlethysmoGraphic (PPG) signal. We find that m is five for all the subjects analyzed and that t depends on the time interval over which it is evaluated. The H\'enon map and the Lorenz flow are used to facilitate a more intuitive understanding of the applied techniques.
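The method of delays itself is a few lines of code. The sketch below builds m-dimensional state vectors from a scalar series; the synthetic test signal and the choice tau = 25 samples are illustrative (the abstract reports m = 5 for PPG, while t depends on the analyzed interval).

```python
import numpy as np

def delay_embed(x, m, tau):
    """Method of delays: each reconstructed state vector is
    (x[i], x[i+tau], ..., x[i+(m-1)*tau])."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

# Synthetic stand-in for a physiological observable.
t = np.linspace(0, 20 * np.pi, 4000)
x = np.sin(t) + 0.5 * np.sin(3 * t)
vectors = delay_embed(x, m=5, tau=25)
print(vectors.shape)   # (number of state vectors, m)
```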
2401.00480
Giulia Bertaglia
Giulia Bertaglia, Lorenzo Pareschi, Giuseppe Toscani
Modelling contagious viral dynamics: a kinetic approach based on mutual utility
null
null
null
null
q-bio.PE nlin.AO physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The temporal evolution of a contagious viral disease is modelled as the dynamic progression of different classes of population with individuals interacting pairwise. This interaction follows a binary mechanism typical of kinetic theory, wherein agents aim to improve their condition with respect to a mutual utility target. To this end, we introduce kinetic equations of Boltzmann-type to describe the time evolution of the probability distributions of the multi-agent system. The interactions between agents are defined using principles from price theory, specifically employing Cobb-Douglas utility functions for binary exchange and the Edgeworth box to depict the common exchange area where utility increases for both agents. Several numerical experiments presented in the paper highlight the significance of this mechanism in driving the phenomenon toward endemicity.
[ { "created": "Sun, 31 Dec 2023 12:46:08 GMT", "version": "v1" }, { "created": "Wed, 7 Feb 2024 10:34:38 GMT", "version": "v2" }, { "created": "Fri, 9 Feb 2024 11:04:16 GMT", "version": "v3" } ]
2024-02-12
[ [ "Bertaglia", "Giulia", "" ], [ "Pareschi", "Lorenzo", "" ], [ "Toscani", "Giuseppe", "" ] ]
The temporal evolution of a contagious viral disease is modelled as the dynamic progression of different classes of population with individuals interacting pairwise. This interaction follows a binary mechanism typical of kinetic theory, wherein agents aim to improve their condition with respect to a mutual utility target. To this end, we introduce kinetic equations of Boltzmann-type to describe the time evolution of the probability distributions of the multi-agent system. The interactions between agents are defined using principles from price theory, specifically employing Cobb-Douglas utility functions for binary exchange and the Edgeworth box to depict the common exchange area where utility increases for both agents. Several numerical experiments presented in the paper highlight the significance of this mechanism in driving the phenomenon toward endemicity.
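The interaction rule described above can be illustrated with a toy binary exchange: two agents with Cobb-Douglas utilities trade small amounts of two goods, and a trade is accepted only if it falls in the mutually improving region of the Edgeworth box. The exponents, endowments and step sizes below are hypothetical, and the epidemiological compartments are omitted entirely.

```python
import numpy as np

# Toy Cobb-Douglas binary exchange: accept a trade only if it improves
# both agents' utilities (the mutually improving region of the Edgeworth
# box). All numbers are illustrative.
rng = np.random.default_rng(4)
a1, a2 = 0.3, 0.7                     # Cobb-Douglas exponents of the two agents
U = lambda x, y, a: x**a * y**(1 - a)

g1, g2 = np.array([8.0, 2.0]), np.array([2.0, 8.0])   # initial endowments
for _ in range(20000):
    dx, dy = rng.uniform(0, 0.05, 2)  # agent 1 offers dx of good 0 for dy of good 1
    new1 = g1 + np.array([-dx, dy])
    new2 = g2 + np.array([dx, -dy])
    if (new1 > 0).all() and (new2 > 0).all() \
            and U(*new1, a1) > U(*g1, a1) and U(*new2, a2) > U(*g2, a2):
        g1, g2 = new1, new2

print("post-trade allocations:", np.round(g1, 2), np.round(g2, 2))
```

Repeated accepted trades drive the pair toward the contract curve, which is the microscopic picture behind the utility-driven kinetic interactions above.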
1107.3450
Henry Tuckwell
Nicholas J Penington, Henry C Tuckwell
Properties of IA in a serotonergic neuron of the dorsal raphe nucleus
16 pages, 4 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Voltage clamp data were analyzed in order to characterize the properties of the fast potassium transient current IA for a serotonergic neuron of the rat dorsal raphe nucleus (DRN). We obtain maximal conductance, time constants of activation and inactivation, and the steady-state activation and inactivation functions, as Boltzmann curves, defined by half-activation potentials and slope factors. We employ a novel method to accurately obtain the activation function and compare the results with those obtained by other methods. The form of IA is estimated as g(V-V_{rev}) m^4h with g=20.5 nS. For activation, the half-activation potential is V_a=-52.5 mV with slope factor k_a=16.5 mV, whereas for inactivation the corresponding quantities are -91.5 mV and -9.3 mV. We discuss the results in terms of the corresponding properties of IA in other cell types and their possible relevance to pacemaking activity in 5-HT cells of the DRN.
[ { "created": "Mon, 18 Jul 2011 14:36:01 GMT", "version": "v1" } ]
2011-07-19
[ [ "Penington", "Nicholas J", "" ], [ "Tuckwell", "Henry C", "" ] ]
Voltage clamp data were analyzed in order to characterize the properties of the fast potassium transient current IA for a serotonergic neuron of the rat dorsal raphe nucleus (DRN). We obtain maximal conductance, time constants of activation and inactivation, and the steady-state activation and inactivation functions, as Boltzmann curves, defined by half-activation potentials and slope factors. We employ a novel method to accurately obtain the activation function and compare the results with those obtained by other methods. The form of IA is estimated as g(V-V_{rev}) m^4h with g=20.5 nS. For activation, the half-activation potential is V_a=-52.5 mV with slope factor k_a=16.5 mV, whereas for inactivation the corresponding quantities are -91.5 mV and -9.3 mV. We discuss the results in terms of the corresponding properties of IA in other cell types and their possible relevance to pacemaking activity in 5-HT cells of the DRN.
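The quantities quoted above suffice to evaluate the fitted current directly. The sketch below plugs the reported Boltzmann parameters into I_A = g (V - V_rev) m^4 h; the potassium reversal potential V_rev is not given in the abstract, so the value used here is an assumption.

```python
import numpy as np

# Steady-state Boltzmann curves and the estimated form of I_A, using the
# parameter values quoted in the abstract (units: nS, mV). V_rev is an
# assumed potassium reversal potential, chosen only for illustration.
g, V_rev = 20.5, -90.0
V_a, k_a = -52.5, 16.5      # activation half-potential and slope
V_h, k_h = -91.5, -9.3      # inactivation half-potential and (negative) slope

def boltzmann(V, V_half, k):
    return 1.0 / (1.0 + np.exp(-(V - V_half) / k))

V = np.linspace(-120.0, 40.0, 9)
m_inf = boltzmann(V, V_a, k_a)
h_inf = boltzmann(V, V_h, k_h)
I_A = g * m_inf**4 * h_inf * (V - V_rev)   # steady-state current, nS * mV = pA
for v, i in zip(V, I_A):
    print(f"V = {v:7.1f} mV   I_A = {i:8.2f} pA")
```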
2301.11256
Thomas Lavigne
Thomas Lavigne, St\'ephane Urcun, Pierre-Yves Rohan, Giuseppe Scium\`e, Davide Baroli, St\'ephane P.A. Bordas
Multi-compartment poroelastic models of perfused biological soft tissues: implementation in FEniCSx
https://github.com/Th0masLavigne/Dolfinx_Porous_Media.git
null
10.1016/j.jmbbm.2023.105902
null
q-bio.TO physics.comp-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Soft biological tissues demonstrate strongly time- and strain-rate-dependent mechanical behavior, arising from their intrinsic visco-elasticity and fluid-solid interactions (especially at sufficiently large time scales). The time-dependent mechanical properties of soft tissues influence their physiological functions and are linked to several pathological processes. Poro-elastic modeling represents a promising approach because it allows the integration of multiscale/multiphysics data to probe biologically relevant phenomena at a smaller scale and embeds the relevant mechanisms at the larger scale. The implementation of multi-phasic flow poro-elastic models, however, is a complex undertaking requiring extensive knowledge. The open-source software FEniCSx Project provides a novel tool for the automated solution of partial differential equations by the finite element method. This paper aims to provide the required tools to model the mixed formulation of poro-elasticity, from the theory to the implementation, within FEniCSx. Several benchmark cases are studied. A column under confined compression conditions is compared to the Terzaghi analytical solution, using the L2-norm. An implementation of poro-hyper-elasticity is proposed. A bi-compartment column is compared to previously published results (Cast3m implementation). For all cases, accurate results are obtained in terms of a normalized Root Mean Square Error (RMSE). Furthermore, the FEniCSx computation is found to be three times faster than the legacy FEniCS one. The benefits of parallel computation are also highlighted.
[ { "created": "Thu, 26 Jan 2023 17:45:23 GMT", "version": "v1" }, { "created": "Wed, 1 Mar 2023 19:12:24 GMT", "version": "v2" } ]
2023-06-09
[ [ "Lavigne", "Thomas", "" ], [ "Urcun", "Stéphane", "" ], [ "Rohan", "Pierre-Yves", "" ], [ "Sciumè", "Giuseppe", "" ], [ "Baroli", "Davide", "" ], [ "Bordas", "Stéphane P. A.", "" ] ]
Soft biological tissues demonstrate strongly time- and strain-rate-dependent mechanical behavior, arising from their intrinsic visco-elasticity and fluid-solid interactions (especially at sufficiently large time scales). The time-dependent mechanical properties of soft tissues influence their physiological functions and are linked to several pathological processes. Poro-elastic modeling represents a promising approach because it allows the integration of multiscale/multiphysics data to probe biologically relevant phenomena at a smaller scale and embeds the relevant mechanisms at the larger scale. The implementation of multi-phasic flow poro-elastic models, however, is a complex undertaking requiring extensive knowledge. The open-source software FEniCSx Project provides a novel tool for the automated solution of partial differential equations by the finite element method. This paper aims to provide the required tools to model the mixed formulation of poro-elasticity, from the theory to the implementation, within FEniCSx. Several benchmark cases are studied. A column under confined compression conditions is compared to the Terzaghi analytical solution, using the L2-norm. An implementation of poro-hyper-elasticity is proposed. A bi-compartment column is compared to previously published results (Cast3m implementation). For all cases, accurate results are obtained in terms of a normalized Root Mean Square Error (RMSE). Furthermore, the FEniCSx computation is found to be three times faster than the legacy FEniCS one. The benefits of parallel computation are also highlighted.
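The Terzaghi benchmark mentioned above has a classical series solution that is handy for checking any poroelastic implementation. The sketch below evaluates it for a column drained at the top; the column height, consolidation coefficient and load are hypothetical, and this is independent of the FEniCSx code the paper provides.

```python
import numpy as np

def terzaghi_pressure(z, t, h, cv, p0, n_terms=200):
    """Excess pore pressure for Terzaghi's 1D consolidation problem:
    column of height h, drained at z = 0, impermeable at z = h, step
    load p0, consolidation coefficient cv."""
    p = np.zeros_like(z, dtype=float)
    for k in range(n_terms):
        a = (2 * k + 1) * np.pi / (2 * h)
        p += (4 * p0 / ((2 * k + 1) * np.pi)) * np.sin(a * z) * np.exp(-cv * a**2 * t)
    return p

# Hypothetical parameter values, purely for illustration.
z = np.linspace(0.0, 1.0, 6)
for tt in (0.01, 0.1, 0.5):
    print(tt, np.round(terzaghi_pressure(z, tt, h=1.0, cv=1.0, p0=1.0), 3))
```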
2004.04222
Alexandru Topirceanu
Alexandru Topirceanu, Mihai Udrescu, Radu Marculescu
Centralized and decentralized isolation strategies and their impact on the COVID-19 pandemic dynamics
18 pages, 13 figures
null
null
null
q-bio.PE cs.MA cs.SI physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
Infectious diseases spread due to human interactions enabled by various social networks. Therefore, when a new pathogen such as SARS-CoV-2 causes an outbreak, non-pharmaceutical isolation strategies (e.g., social distancing) are the only possible response to disrupt its spreading. To this end, we introduce a new epidemic model (SICARS) and compare the centralized (C), decentralized (D), and combined (C+D) social distancing strategies, and analyze their efficiency to control the dynamics of COVID-19 on heterogeneous complex networks. Our analysis shows that centralized social distancing is necessary to minimize the pandemic spreading. The decentralized strategy is insufficient when used alone, but offers the best results when combined with the centralized one. Indeed, the (C+D) is the most efficient isolation strategy at mitigating the network superspreaders and reducing the highest node degrees to less than 10% of their initial values. Our results also indicate that stronger social distancing, e.g., cutting 75% of social ties, can reduce the outbreak by 75% for the C isolation, by 33% for the D isolation, and by 87% for the (C+D) isolation strategy. Finally, we study the impact of proactive versus reactive isolation strategies, as well as their delayed enforcement. We find that the reactive response to the pandemic is less efficient, and delaying the adoption of isolation measures by over one month (since the outbreak onset in a region) can have alarming effects; thus, our study contributes to an understanding of the COVID-19 pandemic both in space and time. We believe our investigations have a high social relevance as they provide insights into understanding how different degrees of social distancing can reduce the peak infection ratio substantially; this can make the COVID-19 pandemic easier to understand and control over an extended period of time.
[ { "created": "Wed, 8 Apr 2020 19:48:12 GMT", "version": "v1" }, { "created": "Fri, 10 Apr 2020 13:16:29 GMT", "version": "v2" } ]
2020-04-20
[ [ "Topirceanu", "Alexandru", "" ], [ "Udrescu", "Mihai", "" ], [ "Marculescu", "Radu", "" ] ]
Infectious diseases spread due to human interactions enabled by various social networks. Therefore, when a new pathogen such as SARS-CoV-2 causes an outbreak, non-pharmaceutical isolation strategies (e.g., social distancing) are the only possible response to disrupt its spreading. To this end, we introduce a new epidemic model (SICARS) and compare the centralized (C), decentralized (D), and combined (C+D) social distancing strategies, and analyze their efficiency to control the dynamics of COVID-19 on heterogeneous complex networks. Our analysis shows that centralized social distancing is necessary to minimize the pandemic spreading. The decentralized strategy is insufficient when used alone, but offers the best results when combined with the centralized one. Indeed, the (C+D) is the most efficient isolation strategy at mitigating the network superspreaders and reducing the highest node degrees to less than 10% of their initial values. Our results also indicate that stronger social distancing, e.g., cutting 75% of social ties, can reduce the outbreak by 75% for the C isolation, by 33% for the D isolation, and by 87% for the (C+D) isolation strategy. Finally, we study the impact of proactive versus reactive isolation strategies, as well as their delayed enforcement. We find that the reactive response to the pandemic is less efficient, and delaying the adoption of isolation measures by over one month (since the outbreak onset in a region) can have alarming effects; thus, our study contributes to an understanding of the COVID-19 pandemic both in space and time. We believe our investigations have a high social relevance as they provide insights into understanding how different degrees of social distancing can reduce the peak infection ratio substantially; this can make the COVID-19 pandemic easier to understand and control over an extended period of time.
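A rough flavor of the centralized tie-cutting can be given in a few lines: trim edges of the highest-degree nodes of a scale-free network until each keeps only a fraction of its initial ties. This sketch reproduces neither the SICARS dynamics nor the paper's exact protocol; the network model and the 25% retention fraction are assumptions.

```python
import random
import networkx as nx

# Sketch of a "centralized" isolation step on a heterogeneous network:
# trim edges of high-degree nodes (superspreaders) until each degree is
# at most a fraction of its initial value. Illustrative only.
random.seed(0)
G = nx.barabasi_albert_graph(1000, 3)
target = {v: max(1, int(0.25 * d)) for v, d in G.degree()}  # keep 25% of ties

for v in sorted(G.nodes, key=lambda u: G.degree(u), reverse=True):
    while G.degree(v) > target[v]:
        u = random.choice(list(G.neighbors(v)))
        G.remove_edge(v, u)

print("max degree after isolation:", max(d for _, d in G.degree()))
```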
1806.09559
Joaquin Rapela
Joaquin Rapela
Traveling waves appear and disappear in unison with produced speech
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Rapela (2016) we reported traveling waves (TWs) on electrocorticographic (ECoG) recordings from an epileptic subject over speech processing brain regions, while the subject rhythmically produced consonant-vowel syllables (CVSs). In Rapela (2017) we showed that TWs are precisely synchronized, in dynamical systems terms, to these productions. Here we show that TWs do not occur continuously, but tend to appear before the initiation of CVSs and tend to disappear before their termination. During moments of silence, between productions of CVSs, TWs tend to reverse direction. To our knowledge, this is the first study showing TWs related to the production of speech and, more generally, the first report of behavioral correlates of mesoscale TWs in behaving humans.
[ { "created": "Mon, 25 Jun 2018 16:44:05 GMT", "version": "v1" } ]
2018-06-26
[ [ "Rapela", "Joaquin", "" ] ]
In Rapela (2016) we reported traveling waves (TWs) on electrocorticographic (ECoG) recordings from an epileptic subject over speech processing brain regions, while the subject rhythmically produced consonant-vowel syllables (CVSs). In Rapela (2017) we showed that TWs are precisely synchronized, in dynamical systems terms, to these productions. Here we show that TWs do not occur continuously, but tend to appear before the initiation of CVSs and tend to disappear before their termination. During moments of silence, between productions of CVSs, TWs tend to reverse direction. To our knowledge, this is the first study showing TWs related to the production of speech and, more generally, the first report of behavioral correlates of mesoscale TWs in behaving humans.
2103.12187
Brian Leahy
Brian D Leahy, Catherine Racowsky, Daniel Needleman
Inferring simple but precise quantitative models of human oocyte and early embryo development
12 pages, 5 figures
null
null
null
q-bio.CB physics.bio-ph q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Macroscopic, phenomenological models have proven useful as concise framings of our understandings in fields from statistical physics to economics to biology. Constructing a phenomenological model for development would provide a framework for understanding the complicated, regulatory nature of oogenesis and embryogenesis. Here, we use a data-driven approach to infer quantitative, precise models of human oocyte maturation and pre-implantation embryo development, by analyzing existing clinical In-Vitro Fertilization (IVF) data on 7,399 IVF cycles resulting in 57,827 embryos. Surprisingly, we find that both oocyte maturation and early embryo development are quantitatively described by simple models with minimal interactions. This simplicity suggests that oogenesis and embryogenesis are composed of modular processes that are relatively siloed from one another. In particular, our analysis provides strong evidence that (i) pre-antral follicles produce anti-M{\"u}llerian hormone independently of effects from other follicles, (ii) oocytes mature to metaphase-II independently of the woman's age, her BMI, and other factors, (iii) early embryo development is memoryless for the variables assessed here, in that the probability of an embryo transitioning from its current developmental stage to the next is independent of its previous stage. Our results both provide insight into the fundamentals of oogenesis and embryogenesis and have implications for the clinical practice of IVF.
[ { "created": "Mon, 22 Mar 2021 21:34:28 GMT", "version": "v1" } ]
2021-03-24
[ [ "Leahy", "Brian D", "" ], [ "Racowsky", "Catherine", "" ], [ "Needleman", "Daniel", "" ] ]
Macroscopic, phenomenological models have proven useful as concise framings of our understandings in fields from statistical physics to economics to biology. Constructing a phenomenological model for development would provide a framework for understanding the complicated, regulatory nature of oogenesis and embryogenesis. Here, we use a data-driven approach to infer quantitative, precise models of human oocyte maturation and pre-implantation embryo development, by analyzing existing clinical In-Vitro Fertilization (IVF) data on 7,399 IVF cycles resulting in 57,827 embryos. Surprisingly, we find that both oocyte maturation and early embryo development are quantitatively described by simple models with minimal interactions. This simplicity suggests that oogenesis and embryogenesis are composed of modular processes that are relatively siloed from one another. In particular, our analysis provides strong evidence that (i) pre-antral follicles produce anti-M{\"u}llerian hormone independently of effects from other follicles, (ii) oocytes mature to metaphase-II independently of the woman's age, her BMI, and other factors, (iii) early embryo development is memoryless for the variables assessed here, in that the probability of an embryo transitioning from its current developmental stage to the next is independent of its previous stage. Our results both provide insight into the fundamentals of oogenesis and embryogenesis and have implications for the clinical practice of IVF.
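The memorylessness finding in (iii) amounts to saying that stage progression is a Markov chain. The toy below simulates such a chain; the stage labels are standard, but the transition probabilities are hypothetical, not the fitted values from the IVF data.

```python
import numpy as np

# Memoryless (Markov) sketch of stage-to-stage embryo progression: the
# chance of advancing depends only on the current stage. Probabilities
# here are hypothetical placeholders.
stages = ["zygote", "cleavage", "morula", "blastocyst"]
p_advance = {"zygote": 0.8, "cleavage": 0.6, "morula": 0.5}

rng = np.random.default_rng(1)

def simulate_embryo():
    stage = 0
    while stage < len(stages) - 1 and rng.random() < p_advance[stages[stage]]:
        stage += 1
    return stages[stage]

outcomes = [simulate_embryo() for _ in range(10000)]
print({s: outcomes.count(s) for s in stages})
```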
1308.1328
Laura Facchini
Laura Facchini, Alberto Bellin and Eleuterio F. Toro
A simple model of filtration and macromolecule transport through microvascular walls
null
Numerical Methods for Hyperbolic Equations: Theory and Applications. An international conference to honour Professor E. F. Toro., London, UK: CRC Press, Taylor-Francis Group, 2012, p. 339-346. ISBN: 9780415621502
null
null
q-bio.SC math.NA q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple Sclerosis (MS) is a disorder that usually appears in adults in their thirties. It has a prevalence that ranges between 2 and 150 per 100 000. Epidemiological studies of MS have provided hints on possible causes for the disease ranging from genetic, environmental and infectious factors to other factors of vascular origin. Despite the tremendous effort spent in the last few years, none of the hypotheses formulated so far has gained wide acceptance and the causes of the disease remain unknown. From a clinical point of view, a high correlation has been recently observed between MS and Chronic Cerebro-Spinal Venous Insufficiency (CCSVI) in a statistically significant number of patients. In this pathological situation CCSVI may induce alterations of blood pressure in brain microvessels, thereby perturbing the exchange of small hydrophilic molecules between the blood and the external cells. In the presence of large pressure alterations, the leakage of macromolecules that would otherwise not cross the vessel wall cannot be excluded either. All these disorders may trigger immune defenses with the destruction of myelin as a side effect. In the present work we investigate the role of perturbed blood pressure in brain microvessels as a driving force for an altered exchange of small hydrophilic solutes and leakage of macromolecules into the interstitial fluid. With a simplified, yet realistic, model we obtain closed-form steady-state solutions for fluid flow and solute transport across the microvessel wall. Finally, we use these results (i) to interpret experimental data available in the literature and (ii) to carry out a preliminary analysis of the disorder in the exchange processes triggered by an increase of blood pressure, thereby relating our preliminary results to the hypothesised vascular connection to MS.
[ { "created": "Tue, 6 Aug 2013 16:21:02 GMT", "version": "v1" }, { "created": "Thu, 22 Aug 2013 08:49:47 GMT", "version": "v2" } ]
2013-08-23
[ [ "Facchini", "Laura", "" ], [ "Bellin", "Alberto", "" ], [ "Toro", "Eleuterio F.", "" ] ]
Multiple Sclerosis (MS) is a disorder that usually appears in adults in their thirties. It has a prevalence that ranges between 2 and 150 per 100 000. Epidemiological studies of MS have provided hints on possible causes for the disease ranging from genetic, environmental and infectious factors to other factors of vascular origin. Despite the tremendous effort spent in the last few years, none of the hypotheses formulated so far has gained wide acceptance and the causes of the disease remain unknown. From a clinical point of view, a high correlation has been recently observed between MS and Chronic Cerebro-Spinal Venous Insufficiency (CCSVI) in a statistically significant number of patients. In this pathological situation CCSVI may induce alterations of blood pressure in brain microvessels, thereby perturbing the exchange of small hydrophilic molecules between the blood and the external cells. In the presence of large pressure alterations, the leakage of macromolecules that would otherwise not cross the vessel wall cannot be excluded either. All these disorders may trigger immune defenses with the destruction of myelin as a side effect. In the present work we investigate the role of perturbed blood pressure in brain microvessels as a driving force for an altered exchange of small hydrophilic solutes and leakage of macromolecules into the interstitial fluid. With a simplified, yet realistic, model we obtain closed-form steady-state solutions for fluid flow and solute transport across the microvessel wall. Finally, we use these results (i) to interpret experimental data available in the literature and (ii) to carry out a preliminary analysis of the disorder in the exchange processes triggered by an increase of blood pressure, thereby relating our preliminary results to the hypothesised vascular connection to MS.
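Models of this kind are typically built on the classical Starling and Kedem-Katchalsky membrane relations, which the sketch below evaluates: volume flux driven by the hydrostatic minus oncotic pressure difference, and solute flux with advective and diffusive parts. All parameter values are generic placeholders, not those of the paper.

```python
# Classic membrane-transport relations (Starling / Kedem-Katchalsky);
# every numerical value below is a generic placeholder.
Lp = 1e-7        # hydraulic conductivity (cm/s/mmHg)
sigma = 0.9      # reflection coefficient for the macromolecule
P_diff = 25.0    # hydrostatic pressure difference, capillary - tissue (mmHg)
pi_diff = 20.0   # oncotic pressure difference (mmHg)
Pm = 1e-8        # solute permeability (cm/s)
c_cap, c_tis = 1.0, 0.2    # solute concentrations (arbitrary units)

Jv = Lp * (P_diff - sigma * pi_diff)                            # volume flux
Js = Jv * (1 - sigma) * (c_cap + c_tis) / 2 + Pm * (c_cap - c_tis)  # solute flux
print(f"Jv = {Jv:.3e} cm/s, Js = {Js:.3e} units*cm/s")
```

Raising P_diff in this expression immediately shows how a pressure perturbation boosts both the filtration flux and the advective leakage of poorly reflected solutes.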
1503.01555
Yunxin Zhang
Minghua Song and Yunxin Zhang
Mean-field analysis of two-species TASEP with attachment and detachment
null
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In cells, most cargos are transported by motor proteins along microtubules. Biophysically, the unidirectional motion of a large number of motor proteins along a single track can be described by the totally asymmetric simple exclusion process (TASEP), from which many meaningful properties, such as the appearance of a domain wall (defined as the borderline between the high-density and low-density regions of motor proteins along the track) and boundary layers, can be obtained. However, it is biologically evident that a single track may be occupied by different motor species, so previous studies based on single-species TASEP are not detailed enough to describe the motion of motors along a single track. To address this problem, TASEP with two particle species is discussed in this study. Theoretical methods to obtain the densities of each particle species are provided. Using these methods, phase-transition-related properties of the particle densities are obtained. Our analysis shows that the domain wall and boundary layer of the single-species densities always appear simultaneously with those of the total particle density. The height of the domain wall of the total particle density is equal to the sum of the heights for the single species. Phase diagrams for typical model parameters are also presented. The methods presented in this study can be generalized to analyze TASEP with more particle species.
[ { "created": "Thu, 5 Mar 2015 07:02:42 GMT", "version": "v1" } ]
2015-03-06
[ [ "Song", "Minghua", "" ], [ "Zhang", "Yunxin", "" ] ]
In cells, most cargos are transported by motor proteins along microtubules. Biophysically, the unidirectional motion of a large number of motor proteins along a single track can be described by the totally asymmetric simple exclusion process (TASEP), from which many meaningful properties, such as the appearance of a domain wall (defined as the borderline between the high-density and low-density regions of motor proteins along the track) and boundary layers, can be obtained. However, it is biologically evident that a single track may be occupied by different motor species, so previous studies based on single-species TASEP are not detailed enough to describe the motion of motors along a single track. To address this problem, TASEP with two particle species is discussed in this study. Theoretical methods to obtain the densities of each particle species are provided. Using these methods, phase-transition-related properties of the particle densities are obtained. Our analysis shows that the domain wall and boundary layer of the single-species densities always appear simultaneously with those of the total particle density. The height of the domain wall of the total particle density is equal to the sum of the heights for the single species. Phase diagrams for typical model parameters are also presented. The methods presented in this study can be generalized to analyze TASEP with more particle species.
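As a stepping stone to the two-species analysis, the sketch below simulates the standard single-species TASEP with Langmuir attachment and detachment under random sequential updates. The entry, exit and bulk rates are illustrative choices, not values from the paper.

```python
import numpy as np

# Minimal single-species TASEP with Langmuir kinetics (attachment rate
# omega_a, detachment rate omega_d). All rates are illustrative.
rng = np.random.default_rng(0)
L, alpha, beta, omega_a, omega_d = 100, 0.3, 0.3, 0.002, 0.002
tau = np.zeros(L, dtype=int)                 # site occupations

for _ in range(1_000_000):
    i = rng.integers(-1, L)                  # i = -1 triggers an entry attempt
    if i == -1:
        if tau[0] == 0 and rng.random() < alpha:
            tau[0] = 1
    elif i == L - 1:
        if tau[i] == 1 and rng.random() < beta:
            tau[i] = 0
    elif tau[i] == 1 and tau[i + 1] == 0:    # bulk hop to the right
        tau[i], tau[i + 1] = 0, 1
    j = rng.integers(0, L)                   # Langmuir kinetics in the bulk
    if 0 < j < L - 1:
        if tau[j] == 1 and rng.random() < omega_d:
            tau[j] = 0
        elif tau[j] == 0 and rng.random() < omega_a:
            tau[j] = 1

print("coarse density profile:", np.round(tau.reshape(10, 10).mean(axis=1), 2))
```

A domain wall shows up in the coarse profile as a jump from a low-density to a high-density plateau, the feature whose two-species counterpart is analyzed above.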
0904.3231
Arthur Marshall Stoneham
A M Stoneham
Why dream? A Conjecture on Dreaming
7 pages
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I propose that the need for sleep and the occurrence of dreams are intimately linked to the physical processes underlying the continuing replacement of cells and renewal of biomolecules during the lives of higher organisms. Since one major function of our brains is to enable us to react to attack, there must be truly compelling reasons for us to take them off-line for extended periods, as during sleep. I suggest that most replacements occur during sleep. Dreams are suggested as part of the means by which the new neural circuitry (I use the word in an informal sense) is checked.
[ { "created": "Tue, 21 Apr 2009 12:12:50 GMT", "version": "v1" } ]
2009-04-22
[ [ "Stoneham", "A M", "" ] ]
I propose that the need for sleep and the occurrence of dreams are intimately linked to the physical processes underlying the continuing replacement of cells and renewal of biomolecules during the lives of higher organisms. Since one major function of our brains is to enable us to react to attack, there must be truly compelling reasons for us to take them off-line for extended periods, as during sleep. I suggest that most replacements occur during sleep. Dreams are suggested as part of the means by which the new neural circuitry (I use the word in an informal sense) is checked.
2105.11552
Marcus de Aguiar
Vitor M. Marquioni and Marcus A. M. de Aguiar
Modeling viral mutations in the spread of epidemics
19 pages, 8 figures
PLoS ONE 16(7) (2021) e0255438
10.1371/journal.pone.0255438
null
q-bio.PE nlin.CG
http://creativecommons.org/licenses/by/4.0/
Although traditional models of epidemic spreading focus on the number of infected, susceptible and recovered individuals, a lot of attention has been devoted to integrating epidemic models with population genetics. Here we develop an individual-based model for epidemic spreading on networks in which viruses are explicitly represented by finite chains of nucleotides that can mutate inside the host. Under the hypothesis of neutral evolution we compute analytically the average pairwise genetic distance between all infecting viruses over time. We also derive a mean-field version of this equation that can be added directly to compartmental models such as SIR or SEIR to estimate the genetic evolution. We compare our results with the inferred genetic evolution of SARS-CoV-2 at the beginning of the epidemic in China and find good agreement with the analytical solution of our model. Finally, using genetic distance as a proxy for different strains, we use numerical simulations to show that the lower the connectivity between communities, e.g., cities, the higher the probability of reinfection.
[ { "created": "Mon, 24 May 2021 21:51:56 GMT", "version": "v1" } ]
2021-11-24
[ [ "Marquioni", "Vitor M.", "" ], [ "de Aguiar", "Marcus A. M.", "" ] ]
Although traditional models of epidemic spreading focus on the number of infected, susceptible and recovered individuals, a lot of attention has been devoted to integrating epidemic models with population genetics. Here we develop an individual-based model for epidemic spreading on networks in which viruses are explicitly represented by finite chains of nucleotides that can mutate inside the host. Under the hypothesis of neutral evolution we compute analytically the average pairwise genetic distance between all infecting viruses over time. We also derive a mean-field version of this equation that can be added directly to compartmental models such as SIR or SEIR to estimate the genetic evolution. We compare our results with the inferred genetic evolution of SARS-CoV-2 at the beginning of the epidemic in China and find good agreement with the analytical solution of our model. Finally, using genetic distance as a proxy for different strains, we use numerical simulations to show that the lower the connectivity between communities, e.g., cities, the higher the probability of reinfection.
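A minimal version of the neutral-evolution bookkeeping is easy to prototype: genomes are integer chains, each transmission copies a random existing genome with per-site mutations, and the observable is the mean pairwise Hamming distance. Sequence length, mutation rate and the number of transmissions below are hypothetical.

```python
import numpy as np

# Neutral-evolution sketch: each new infection copies a random existing
# genome with per-site mutation probability mu; we then report the mean
# pairwise Hamming distance of the population. Parameters are illustrative.
rng = np.random.default_rng(2)
L_seq, mu, transmissions = 100, 0.01, 50
population = [np.zeros(L_seq, dtype=int)]     # one founder genome, alphabet {0,1,2,3}

for _ in range(transmissions):
    parent = population[rng.integers(len(population))]
    child = parent.copy()
    flips = rng.random(L_seq) < mu
    child[flips] = (child[flips] + rng.integers(1, 4, flips.sum())) % 4
    population.append(child)

d = [np.mean(a != b) for i, a in enumerate(population)
     for b in population[i + 1:]]
print("mean pairwise distance per site:", round(float(np.mean(d)), 4))
```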
2002.09628
Andreas Mang
Andreas Mang and Spyridon Bakas and Shashank Subramanian and Christos Davatzikos and George Biros
Integrated Biophysical Modeling and Image Analysis: Application to Neuro-Oncology
24 pages, 5 figures
Annual Review of Biomedical Engineering, Vol. 22, 2020, pp. 309-341
10.1146/annurev-bioeng-062117-121105
null
q-bio.QM eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Central nervous system (CNS) tumors come with a vastly heterogeneous histologic, molecular and radiographic landscape, rendering their precise characterization challenging. The rapidly growing fields of biophysical modeling and radiomics have shown promise in better characterizing the molecular, spatial, and temporal heterogeneity of tumors. Integrative analysis of CNS tumors, including clinically-acquired multi-parametric magnetic resonance imaging (mpMRI) and the inverse problem of calibrating biophysical models to mpMRI data, assists in identifying macroscopic quantifiable tumor patterns of invasion and proliferation, potentially leading to improved (i) detection/segmentation of tumor sub-regions, and (ii) computer-aided diagnostic/prognostic/predictive modeling. This paper presents a summary of (i) biophysical growth modeling and simulation, (ii) inverse problems for model calibration, (iii) their integration with imaging workflows, and (iv) their application on clinically-relevant studies. We anticipate that such quantitative integrative analysis may even be beneficial in a future revision of the World Health Organization (WHO) classification for CNS tumors, ultimately improving patient survival prospects.
[ { "created": "Sat, 22 Feb 2020 05:18:18 GMT", "version": "v1" } ]
2020-06-30
[ [ "Mang", "Andreas", "" ], [ "Bakas", "Spyridon", "" ], [ "Subramanian", "Shashank", "" ], [ "Davatzikos", "Christos", "" ], [ "Biros", "George", "" ] ]
Central nervous system (CNS) tumors come with a vastly heterogeneous histologic, molecular and radiographic landscape, rendering their precise characterization challenging. The rapidly growing fields of biophysical modeling and radiomics have shown promise in better characterizing the molecular, spatial, and temporal heterogeneity of tumors. Integrative analysis of CNS tumors, including clinically-acquired multi-parametric magnetic resonance imaging (mpMRI) and the inverse problem of calibrating biophysical models to mpMRI data, assists in identifying macroscopic quantifiable tumor patterns of invasion and proliferation, potentially leading to improved (i) detection/segmentation of tumor sub-regions, and (ii) computer-aided diagnostic/prognostic/predictive modeling. This paper presents a summary of (i) biophysical growth modeling and simulation, (ii) inverse problems for model calibration, (iii) their integration with imaging workflows, and (iv) their application on clinically-relevant studies. We anticipate that such quantitative integrative analysis may even be beneficial in a future revision of the World Health Organization (WHO) classification for CNS tumors, ultimately improving patient survival prospects.
2407.14949
Jiachen Zou
Chen Wei, Jiachen Zou, Dietmar Heinke, Quanying Liu
CoCoG-2: Controllable generation of visual stimuli for understanding human concept representation
null
null
null
null
q-bio.NC cs.CV cs.HC
http://creativecommons.org/licenses/by/4.0/
Humans interpret complex visual stimuli using abstract concepts that facilitate decision-making tasks such as food selection and risk avoidance. Similarity judgment tasks are effective for exploring these concepts. However, methods for controllable image generation in concept space are underdeveloped. In this study, we present a novel framework called CoCoG-2, which integrates generated visual stimuli into similarity judgment tasks. CoCoG-2 utilizes a training-free guidance algorithm to enhance generation flexibility. The CoCoG-2 framework is versatile for creating experimental stimuli based on human concepts, supporting various strategies for guiding visual stimuli generation, and demonstrating how these stimuli can validate various experimental hypotheses. CoCoG-2 will advance our understanding of the causal relationship between concept representations and behaviors by generating visual stimuli. The code is available at \url{https://github.com/ncclab-sustech/CoCoG-2}.
[ { "created": "Sat, 20 Jul 2024 17:52:32 GMT", "version": "v1" } ]
2024-07-23
[ [ "Wei", "Chen", "" ], [ "Zou", "Jiachen", "" ], [ "Heinke", "Dietmar", "" ], [ "Liu", "Quanying", "" ] ]
Humans interpret complex visual stimuli using abstract concepts that facilitate decision-making tasks such as food selection and risk avoidance. Similarity judgment tasks are effective for exploring these concepts. However, methods for controllable image generation in concept space are underdeveloped. In this study, we present a novel framework called CoCoG-2, which integrates generated visual stimuli into similarity judgment tasks. CoCoG-2 utilizes a training-free guidance algorithm to enhance generation flexibility. The CoCoG-2 framework is versatile for creating experimental stimuli based on human concepts, supporting various strategies for guiding visual stimuli generation, and demonstrating how these stimuli can validate various experimental hypotheses. CoCoG-2 will advance our understanding of the causal relationship between concept representations and behaviors by generating visual stimuli. The code is available at \url{https://github.com/ncclab-sustech/CoCoG-2}.
2001.10573
Heyrim Cho
Heyrim Cho and Zuping Wang and Doron Levy
Study of dose-dependent combination immunotherapy using engineered T cells and IL-2 in cervical cancer
8 pages, 7 figures
null
10.1016/j.jtbi.2020.110403
null
q-bio.PE q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adoptive T cell based immunotherapy is gaining significant traction in cancer treatment. Despite its limited success so far in treating solid cancers, it is increasingly successful in other settings, demonstrating a broader therapeutic potential. In this paper we develop a mathematical model to study the efficacy of engineered T cell receptor (TCR) T cell therapy targeting the E7 antigen in cervical cancer cell lines. We consider a dynamical system that follows the population of cancer cells, TCR T cells, and IL-2. We demonstrate that there exists a TCR T cell dosage window for a successful cancer elimination that can be expressed in terms of the initial tumor size. We obtain the TCR T cell dose for two cervical cancer cell lines: 4050 and CaSki. Finally, a combination therapy of TCR T cell and IL-2 treatment is studied. We show that certain treatment protocols can improve therapy responses in the 4050 cell line, but not in the CaSki cell line.
[ { "created": "Tue, 28 Jan 2020 20:13:17 GMT", "version": "v1" } ]
2022-04-19
[ [ "Cho", "Heyrim", "" ], [ "Wang", "Zuping", "" ], [ "Levy", "Doron", "" ] ]
Adoptive T cell based immunotherapy is gaining significant traction in cancer treatment. Despite its limited success so far in treating solid cancers, it is increasingly successful in other settings, demonstrating a broader therapeutic potential. In this paper we develop a mathematical model to study the efficacy of engineered T cell receptor (TCR) T cell therapy targeting the E7 antigen in cervical cancer cell lines. We consider a dynamical system that follows the population of cancer cells, TCR T cells, and IL-2. We demonstrate that there exists a TCR T cell dosage window for a successful cancer elimination that can be expressed in terms of the initial tumor size. We obtain the TCR T cell dose for two cervical cancer cell lines: 4050 and CaSki. Finally, a combination therapy of TCR T cell and IL-2 treatment is studied. We show that certain treatment protocols can improve therapy responses in the 4050 cell line, but not in the CaSki cell line.
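A stripped-down caricature of such a dosage threshold can be seen even in a two-variable system where transferred T cells only decay and kill: too small a dose lets the tumor regrow before clearance. The functional forms and every parameter below are hypothetical, chosen solely to exhibit the qualitative dose dependence, and IL-2 is omitted.

```python
# Toy dose-response sketch: tumor cells T grow logistically and are killed
# by transferred effector cells E that decay exponentially. All forms and
# numbers are hypothetical, not the paper's model.
r, K, k, d, dt = 0.2, 1e9, 1e-8, 0.05, 0.01

for dose in (1e7, 1e8, 1e9):
    T, E = 1e7, dose
    for _ in range(30000):                 # ~300 simulated days
        T += dt * (r * T * (1 - T / K) - k * E * T)
        E += dt * (-d * E)
        if T < 1.0:                        # fewer than one cell: eliminated
            break
    print(f"dose {dose:.0e}: final tumor burden {T:.3g}")
```

With these placeholder rates, the smallest dose never suppresses growth, the middle dose drives a transient dip followed by regrowth, and only the largest clears the tumor, a cartoon of the elimination window described above.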
0907.3927
Petter Holme
Jing Zhao, Petter Holme
Three faces of metabolites: Pathways, localizations and network positions
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To understand the system-wide organization of metabolism, different lines of study have devised different categorizations of metabolites. The relationship and difference between categories can provide new insights for a more detailed description of the organization of metabolism. In this study, we investigate the relative organization of three categorizations of metabolites -- pathways, subcellular localizations and network clusters -- by block-model techniques borrowed from social-network studies, and further characterize the categories from a topological point of view. The picture of the metabolism we obtain is that of peripheral modules, characterized both by being dense network clusters and localized to organelles, connected by a central, highly connected core. Pathways typically run through several network clusters and localizations, connecting them laterally. The strong overlap between organelles and network clusters suggests that these are natural "modules" -- relatively independent sub-systems. The different categorizations divide the core metabolism differently, suggesting that this, if possible, should not be treated as a module on par with the organelles. Although overlapping more than chance, none of the pathways correspond very closely to a network cluster or localization. This, we believe, highlights the benefits of different orthogonal classifications and future experimental categorizations based on simple principles.
[ { "created": "Wed, 22 Jul 2009 20:37:02 GMT", "version": "v1" } ]
2009-07-24
[ [ "Zhao", "Jing", "" ], [ "Holme", "Petter", "" ] ]
To understand the system-wide organization of metabolism, different lines of study have devised different categorizations of metabolites. The relationship and difference between categories can provide new insights for a more detailed description of the organization of metabolism. In this study, we investigate the relative organization of three categorizations of metabolites -- pathways, subcellular localizations and network clusters -- by block-model techniques borrowed from social-network studies, and further characterize the categories from a topological point of view. The picture of the metabolism we obtain is that of peripheral modules, characterized both by being dense network clusters and localized to organelles, connected by a central, highly connected core. Pathways typically run through several network clusters and localizations, connecting them laterally. The strong overlap between organelles and network clusters suggests that these are natural "modules" -- relatively independent sub-systems. The different categorizations divide the core metabolism differently, suggesting that this, if possible, should not be treated as a module on par with the organelles. Although overlapping more than chance, none of the pathways correspond very closely to a network cluster or localization. This, we believe, highlights the benefits of different orthogonal classifications and future experimental categorizations based on simple principles.
1309.5673
Liane Gabora
Peter Bruza, Jerome Busemeyer, and Liane Gabora
Introduction to the Special Issue on Quantum Cognition
null
Journal of Mathematical Psychology, 53, 303-305 (2009)
10.1016/j.jmp.2009.06.002
null
q-bio.NC quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The subject of this special issue is quantum models of cognition. At first sight it may seem bizarre, even ridiculous, to draw a connection between quantum mechanics, a highly successful theory usually understood as modeling sub-atomic phenomena, and cognitive science. However, a growing number of researchers are looking to quantum theory to circumvent stubborn problems within their own fields. This is also true within cognitive science and related areas, hence this special issue.
[ { "created": "Mon, 23 Sep 2013 00:39:03 GMT", "version": "v1" }, { "created": "Thu, 4 Jul 2019 19:09:01 GMT", "version": "v2" } ]
2019-07-08
[ [ "Bruza", "Peter", "" ], [ "Busemeyer", "Jerome", "" ], [ "Gabora", "Liane", "" ] ]
The subject of this special issue is quantum models of cognition. At first sight it may seem bizarre, even ridiculous, to draw a connection between quantum mechanics, a highly successful theory usually understood as modeling sub-atomic phenomena, and cognitive science. However, a growing number of researchers are looking to quantum theory to circumvent stubborn problems within their own fields. This is also true within cognitive science and related areas, hence this special issue.
2008.09039
Mathieu Desroches
Tam\`as F\"ul\"op, Mathieu Desroches, Fernando Ant\^onio N\'obrega Santos, Serafim Rodrigues
Why we should use Topological Data Analysis in Ageing: towards defining the "Topological shape of ageing"
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Living systems are subject to the arrow of time; from birth, they undergo complex transformations (self-organization) in a constant battle for survival, but inevitably ageing and disease trap them to death. Can ageing be understood and eventually reversed? What tools can be employed to further our understanding of ageing? The present article is an invitation for biologists and clinicians to consider key conceptual ideas and computational tools (known to mathematicians and physicists), which potentially may help dissect some of the underlying processes of ageing and disease. Specifically, we first discuss how to classify and analyze complex systems, as well as highlight critical theoretical difficulties that make complex systems hard to study. Subsequently, we introduce Topological Data Analysis - a novel Big Data tool - which may help in the study of complex systems since it extracts knowledge from data in a holistic approach via topological considerations. These conceptual ideas and tools are discussed in a rather informal way to pave future discussions and collaborations between mathematicians and biologists studying ageing.
[ { "created": "Fri, 14 Aug 2020 10:34:56 GMT", "version": "v1" } ]
2020-08-21
[ [ "Fülöp", "Tamàs", "" ], [ "Desroches", "Mathieu", "" ], [ "Santos", "Fernando Antônio Nóbrega", "" ], [ "Rodrigues", "Serafim", "" ] ]
Living systems are subject to the arrow of time; from birth, they undergo complex transformations (self-organization) in a constant battle for survival, but inevitably ageing and disease trap them to death. Can ageing be understood and eventually reversed? What tools can be employed to further our understanding of ageing? The present article is an invitation for biologists and clinicians to consider key conceptual ideas and computational tools (known to mathematicians and physicists), which potentially may help dissect some of the underlying processes of ageing and disease. Specifically, we first discuss how to classify and analyze complex systems, as well as highlight critical theoretical difficulties that make complex systems hard to study. Subsequently, we introduce Topological Data Analysis - a novel Big Data tool - which may help in the study of complex systems since it extracts knowledge from data in a holistic approach via topological considerations. These conceptual ideas and tools are discussed in a rather informal way to pave future discussions and collaborations between mathematicians and biologists studying ageing.
1509.06346
Annelies Lejon
Annelies Lejon, Bert Mortier and Giovanni Samaey
Variance-reduced simulation of stochastic agent-based models for tumor growth
20 pages, 9 figures,2 movies
null
null
null
q-bio.QM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate a hybrid PDE/Monte Carlo technique for the variance reduced simulation of an agent-based multiscale model for tumor growth. The variance reduction is achieved by combining a simulation of the stochastic agent-based model on the microscopic scale with a deterministic solution of a simplified (coarse) partial differential equation (PDE) on the macroscopic scale as a control variable. We show that this technique is able to significantly reduce the variance with only the (limited) additional computational cost associated with the deterministic solution of the coarse PDE. We illustrate the performance with numerical experiments in different regimes, both in the avascular and vascular stage of tumor growth.
[ { "created": "Mon, 21 Sep 2015 19:11:01 GMT", "version": "v1" }, { "created": "Tue, 22 Sep 2015 06:34:32 GMT", "version": "v2" }, { "created": "Wed, 14 Oct 2015 12:52:36 GMT", "version": "v3" } ]
2015-10-15
[ [ "Lejon", "Annelies", "" ], [ "Mortier", "Bert", "" ], [ "Samaey", "Giovanni", "" ] ]
We investigate a hybrid PDE/Monte Carlo technique for the variance reduced simulation of an agent-based multiscale model for tumor growth. The variance reduction is achieved by combining a simulation of the stochastic agent-based model on the microscopic scale with a deterministic solution of a simplified (coarse) partial differential equation (PDE) on the macroscopic scale as a control variable. We show that this technique is able to significantly reduce the variance with only the (limited) additional computational cost associated with the deterministic solution of the coarse PDE. We illustrate the performance with numerical experiments in different regimes, both in the avascular and vascular stage of tumor growth.
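The control-variate mechanism at the heart of this approach is easy to demonstrate in miniature: pair an expensive stochastic estimator with a cheap surrogate whose mean is known exactly, and average Y - C + E[C]. The two models below are placeholders sharing a common random input, not the tumor models of the paper.

```python
import numpy as np

# Toy control-variate illustration: estimate E[Y] by averaging Y - C + E[C],
# where C is a cheap surrogate correlated with Y and E[C] is known exactly.
# Both "models" are placeholders, not the paper's tumor models.
rng = np.random.default_rng(3)

def fine_model(xi):          # stand-in for the stochastic agent-based model
    return np.exp(xi) + 0.1 * rng.normal()

def coarse_model(xi):        # stand-in for the coarse PDE, same random input
    return 1.0 + xi + 0.5 * xi**2

EC = 1.5                     # exact mean of coarse_model for xi ~ N(0, 1)
xi = rng.normal(size=10000)
Y = np.array([fine_model(x) for x in xi])
C = coarse_model(xi)

print(f"plain MC:        {Y.mean():.4f} +/- {Y.std() / 100:.4f}")
print(f"control variate: {(Y - C).mean() + EC:.4f} +/- {(Y - C).std() / 100:.4f}")
```

Because the surrogate absorbs most of the fluctuation of Y, the residual Y - C has much smaller variance, which is exactly the gain the hybrid PDE/Monte Carlo scheme above exploits.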
2011.12053
Martin Rypdal
Kristoffer Rypdal, Filippo Maria Bianchi, Martin Rypdal
Intervention fatigue is the primary cause of strong secondary waves in the COVID-19 pandemic
19 pages, 6 figures
null
null
null
q-bio.QM q-bio.PE
http://creativecommons.org/licenses/by/4.0/
As of November 2020, the number of COVID-19 cases is increasing rapidly in many countries. In Europe, the virus spread slowed considerably in the late spring due to strict lockdown, but a second wave of the pandemic grew throughout the fall. In this study, we first reconstruct the time evolution of the effective reproduction numbers ${\cal R}(t)$ for each country by integrating the equations of the classic SIR model. We cluster countries based on the estimated ${\cal R}(t)$ through a suitable time series dissimilarity. The result suggests that simple dynamical mechanisms determine how countries respond to changes in COVID-19 case counts. Inspired by these results, we extend the SIR model to include a social response to explain the number $X(t)$ of new confirmed daily cases. As a first-order model, we assume that the social response is of the form $d_t{\cal R}=-\nu (X-X^*)$, where $X^*$ is a threshold for response. The response rate $\nu$ depends on whether $X$ is below or above this threshold, on three parameters $\nu_1, \nu_2, \nu_3$, and on $t$. When $X<X^*$, $\nu=\nu_1$ describes the effect of relaxed intervention when the incidence rate is low. When $X>X^*$, $\nu=\nu_2\exp(-\nu_3 t)$ models the impact of interventions when the incidence rate is high. The parameter $\nu_3$ represents the fatigue, i.e., the reduced effect of intervention as time passes. The proposed model reproduces typical evolving patterns of COVID-19 epidemic waves observed in many countries. Estimating the parameters $\nu_1, \nu_2, \nu_3$ and initial conditions, such as ${\cal R}_0$, for different countries helps to identify important dynamics in their social responses. One conclusion is that the leading cause of the strong second wave in Europe in the fall of 2020 was not the relaxation of interventions during the summer, but rather the general fatigue to interventions developing in the fall.
[ { "created": "Tue, 24 Nov 2020 11:59:32 GMT", "version": "v1" } ]
2020-11-25
[ [ "Rypdal", "Kristoffer", "" ], [ "Bianchi", "Filippo Maria", "" ], [ "Rypdal", "Martin", "" ] ]
As of November 2020, the number of COVID-19 cases is increasing rapidly in many countries. In Europe, the virus spread slowed considerably in the late spring due to strict lockdown, but a second wave of the pandemic grew throughout the fall. In this study, we first reconstruct the time evolution of the effective reproduction numbers ${\cal R}(t)$ for each country by integrating the equations of the classic SIR model. We cluster countries based on the estimated ${\cal R}(t)$ through a suitable time series dissimilarity. The result suggests that simple dynamical mechanisms determine how countries respond to changes in COVID-19 case counts. Inspired by these results, we extend the SIR model to include a social response to explain the number $X(t)$ of new confirmed daily cases. As a first-order model, we assume that the social response is of the form $d_t{\cal R}=-\nu (X-X^*)$, where $X^*$ is a threshold for response. The response rate $\nu$ depends on whether $X$ is below or above this threshold, on three parameters $\nu_1$, $\nu_2$, $\nu_3$, and on $t$. When $X<X^*$, $\nu=\nu_1$ describes the effect of relaxed intervention when the incidence rate is low. When $X>X^*$, $\nu=\nu_2\exp(-\nu_3 t)$ models the impact of interventions when the incidence rate is high. The parameter $\nu_3$ represents the fatigue, i.e., the reduced effect of intervention as time passes. The proposed model reproduces typical evolving patterns of COVID-19 epidemic waves observed in many countries. Estimating the parameters $\nu_1,\,\nu_2,\,\nu_3$ and initial conditions, such as ${\cal R}_0$, for different countries helps to identify important dynamics in their social responses. One conclusion is that the leading cause of the strong second wave in Europe in the fall of 2020 was not the relaxation of interventions during the summer, but rather the general fatigue to interventions developing in the fall.
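A minimal Euler integration of the social-response equation stated above, with $X$ taken as the instantaneous SIR incidence; all parameter values are illustrative stand-ins, not the fitted country-level estimates.

```python
import numpy as np

# Euler integration of the social-response SIR extension sketched above.
N, gamma = 1e6, 1 / 7             # population size, recovery rate (1/day)
Rt, Xstar = 3.0, 500.0            # initial R(t), response threshold X*
nu1, nu2, nu3 = 2e-5, 2e-4, 0.01  # relaxation, intervention, fatigue rates

S, I, dt = N - 100.0, 100.0, 0.1
for step in range(int(360 / dt)):
    t = step * dt
    X = Rt * gamma * S * I / N                 # incidence, proxy for daily cases
    nu = nu1 if X < Xstar else nu2 * np.exp(-nu3 * t)
    Rt = max(Rt - nu * (X - Xstar) * dt, 0.0)  # d_t R = -nu (X - X*), clamped
    dS = -Rt * gamma * S * I / N * dt
    S += dS
    I += -dS - gamma * I * dt
print(f"final R(t) = {Rt:.2f}, cumulative infections = {N - S:.0f}")
```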
2402.05826
Silvia Lorenzani
Marzia Bisi and Silvia Lorenzani
Microscopic models for the large-scale spread of SARS-CoV-2 virus: A Statistical Mechanics approach
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we derive a system of Boltzmann-type equations to describe the spread of the SARS-CoV-2 virus at the microscopic scale, that is, by modeling the human-to-human mechanisms of transmission. To this end, we consider two populations, characterized by specific distribution functions, made up of individuals without symptoms (population $1$) and infected people with symptoms (population $2$). The Boltzmann operators model the interactions between individuals within the same population and among different populations, with a probability of transition from one to the other due to contagion or, vice versa, to recovery. In addition, the influence of the innate and adaptive immune systems is taken into account. Then, starting from the Boltzmann microscopic description, we derive a set of evolution equations for the size and mean state of each population considered. Mathematical properties of such macroscopic equations, such as equilibria and their stability, are investigated, and some numerical simulations are performed in order to analyze the ability of our model to reproduce the characteristic features of Covid-19.
[ { "created": "Thu, 8 Feb 2024 17:06:17 GMT", "version": "v1" } ]
2024-02-09
[ [ "Bisi", "Marzia", "" ], [ "Lorenzani", "Silvia", "" ] ]
In this work, we derive a system of Boltzmann-type equations to describe the spread of the SARS-CoV-2 virus at the microscopic scale, that is, by modeling the human-to-human mechanisms of transmission. To this end, we consider two populations, characterized by specific distribution functions, made up of individuals without symptoms (population $1$) and infected people with symptoms (population $2$). The Boltzmann operators model the interactions between individuals within the same population and among different populations, with a probability of transition from one to the other due to contagion or, vice versa, to recovery. In addition, the influence of the innate and adaptive immune systems is taken into account. Then, starting from the Boltzmann microscopic description, we derive a set of evolution equations for the size and mean state of each population considered. Mathematical properties of such macroscopic equations, such as equilibria and their stability, are investigated, and some numerical simulations are performed in order to analyze the ability of our model to reproduce the characteristic features of Covid-19.
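The macroscopic size equations themselves are not reproduced in the abstract; the sketch below integrates only a crude two-population caricature (contagion $1\to 2$, recovery $2\to 1$) with made-up rates, to illustrate the kind of evolution equations for population sizes that are meant.

```python
# Crude two-population caricature of the macroscopic size equations:
# asymptomatic fraction n1 and symptomatic fraction n2 exchange members
# through contagion (1 -> 2) and recovery (2 -> 1). Rates are made up.
beta, gamma = 0.3, 0.1
n1, n2, dt = 0.99, 0.01, 0.01
for _ in range(int(400 / dt)):
    flow = beta * n1 * n2 - gamma * n2   # net transition rate 1 -> 2
    n1 -= flow * dt
    n2 += flow * dt
print(f"equilibrium fractions: n1 = {n1:.3f}, n2 = {n2:.3f}")
```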
2407.09922
Zhilin Li
Zhilin Li, Yongheng Zhao, Yiqing Hu, Yang Li, Keyao Zhang, Zhibing Gao, Lirou Tan, Hanli Liu, Xiaoli Li, Aihua Cao, Zaixu Cui, Chenguang Zhao
Transcranial low-level laser stimulation in near infrared-II region for brain safety and protection
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: The use of near-infrared lasers for transcranial photobiomodulation (tPBM) offers a non-invasive method for influencing brain activity and is beneficial for various neurological conditions. Objective: To investigate the safety and neuroprotective properties of tPBM using near-infrared (NIR)-II laser stimulation. Methods: We conducted thirteen experiments involving multidimensional and quantitative methods: we measured serum neurobiomarkers, performed electroencephalogram (EEG) and magnetic resonance imaging (MRI) scans, assessed executive functions, and administered a subjective questionnaire. Results: Significant reductions (n=15) in neuron-specific enolase (NSE) levels were observed after treatment, indicating neuroprotective effects. No structural or functional brain abnormalities were observed, confirming the safety of tPBM. Additionally, cognitive and executive functions were not impaired, with participants' feedback indicating minimal discomfort. Conclusions: Our data indicate that NIR-II tPBM is safe with specific parameters, highlighting its potential for brain protection.
[ { "created": "Sat, 13 Jul 2024 15:31:05 GMT", "version": "v1" } ]
2024-07-16
[ [ "Li", "Zhilin", "" ], [ "Zhao", "Yongheng", "" ], [ "Hu", "Yiqing", "" ], [ "Li", "Yang", "" ], [ "Zhang", "Keyao", "" ], [ "Gao", "Zhibing", "" ], [ "Tan", "Lirou", "" ], [ "Liu", "Hanli", ...
Background: The use of near-infrared lasers for transcranial photobiomodulation (tPBM) offers a non-invasive method for influencing brain activity and is beneficial for various neurological conditions. Objective: To investigate the safety and neuroprotective properties of tPBM using near-infrared (NIR)-II laser stimulation. Methods: We conducted thirteen experiments involving multidimensional and quantitative methods: we measured serum neurobiomarkers, performed electroencephalogram (EEG) and magnetic resonance imaging (MRI) scans, assessed executive functions, and administered a subjective questionnaire. Results: Significant reductions (n=15) in neuron-specific enolase (NSE) levels were observed after treatment, indicating neuroprotective effects. No structural or functional brain abnormalities were observed, confirming the safety of tPBM. Additionally, cognitive and executive functions were not impaired, with participants' feedback indicating minimal discomfort. Conclusions: Our data indicate that NIR-II tPBM is safe with specific parameters, highlighting its potential for brain protection.
q-bio/0506025
Ralf Metzler
Suman Kumar Banik, Tobias Ambjornsson, and Ralf Metzler
Stochastic approach to DNA breathing dynamics
7 pages, 6 figures, epl.cls
null
10.1209/epl/i2005-10144-9
null
q-bio.BM cond-mat.soft
null
We propose a stochastic Gillespie scheme to describe the temporal fluctuations of local denaturation zones in double-stranded DNA as a single molecule time series. It is demonstrated that the model recovers the equilibrium properties. We also study measurable dynamical quantities such as the bubble size autocorrelation function. This efficient computational approach will be useful to analyse in detail recent single molecule experiments on clamped homopolymer breathing domains, to probe the parameter values of the underlying Poland-Scheraga model, as well as to design experimental conditions for similar setups.
[ { "created": "Fri, 17 Jun 2005 11:05:33 GMT", "version": "v1" } ]
2009-11-11
[ [ "Banik", "Suman Kumar", "" ], [ "Ambjornsson", "Tobias", "" ], [ "Metzler", "Ralf", "" ] ]
We propose a stochastic Gillespie scheme to describe the temporal fluctuations of local denaturation zones in double-stranded DNA as a single molecule time series. It is demonstrated that the model recovers the equilibrium properties. We also study measurable dynamical quantities such as the bubble size autocorrelation function. This efficient computational approach will be useful to analyse in detail recent single molecule experiments on clamped homopolymer breathing domains, to probe the parameter values of the underlying Poland-Scheraga model, as well as to design experimental conditions for similar setups.
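A Gillespie scheme of the sort described can be written in a few lines. The zipper-like single-bubble walk below, with reflecting boundaries and toy rates, is our simplification of the Poland-Scheraga setting, not the paper's parametrization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Gillespie walk for a single breathing bubble: the bubble size m gains
# or loses one base pair with zipper rates chosen for illustration only.
# m = 0 is the closed state; M mimics the clamp that bounds the bubble.
M, k_open, k_close = 20, 1.0, 1.2

m, t, sizes = 0, 0.0, []
while t < 2000.0:
    r_up = k_open if m < M else 0.0
    r_dn = k_close if m > 0 else 0.0
    total = r_up + r_dn
    t += rng.exponential(1.0 / total)           # waiting time to next event
    m += 1 if rng.random() < r_up / total else -1
    sizes.append(m)
print(f"mean bubble size ~ {np.mean(sizes):.2f}")
```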
1303.6739
Maxim Makukov
Vladimir I. shCherbak and Maxim A. Makukov
The "Wow! signal" of the terrestrial genetic code
15 pages, 14 figures, published in Icarus. Version 3: minor corrections + references provided with DOI links. Version 4: some links updated
Icarus, 2013, 224(1), 228-242
10.1016/j.icarus.2013.02.017
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been repeatedly proposed to expand the scope for SETI, and one of the suggested alternatives to radio is the biological media. Genomic DNA is already used on Earth to store non-biological information. Though smaller in capacity, the genetic code is stronger in noise immunity. The code is a flexible mapping between codons and amino acids, and this flexibility allows modifying the code artificially. But once fixed, the code might stay unchanged over cosmological timescales. Thus, it represents a reliable storage for an intelligent signature, if that conforms to biological and thermodynamic requirements. As the actual scenario for the origin of terrestrial life is far from being settled, the proposal that it might have been seeded intentionally cannot be ruled out. A statistically strong signal in the genetic code is then a testable consequence of such a scenario. Here we show that the terrestrial code displays a thorough precision orderliness matching the criteria to be considered an informational signal. Simple arrangements of the code reveal an ensemble of arithmetical and ideographical patterns of the same symbolic language. Accurate and systematic, these underlying patterns appear as a product of precision logic and nontrivial computing rather than of stochastic processes. The patterns are profound to the extent that the code mapping itself is uniquely deduced from their algebraic representation. The signal displays readily recognizable hallmarks of artificiality. Besides, extraction of the signal involves logically straightforward but abstract operations, making the patterns essentially irreducible to any natural origin. A plausible way of embedding the signal into the code and a possible interpretation of its content are discussed. Overall, while the code is nearly optimized biologically, its limited capacity is used extremely efficiently to store non-biological information.
[ { "created": "Wed, 27 Mar 2013 04:16:16 GMT", "version": "v1" }, { "created": "Mon, 30 Dec 2013 15:10:55 GMT", "version": "v2" }, { "created": "Sun, 20 Jul 2014 05:07:18 GMT", "version": "v3" }, { "created": "Mon, 12 Jun 2017 14:10:01 GMT", "version": "v4" } ]
2017-06-13
[ [ "shCherbak", "Vladimir I.", "" ], [ "Makukov", "Maxim A.", "" ] ]
It has been repeatedly proposed to expand the scope for SETI, and one of the suggested alternatives to radio is the biological media. Genomic DNA is already used on Earth to store non-biological information. Though smaller in capacity, the genetic code is stronger in noise immunity. The code is a flexible mapping between codons and amino acids, and this flexibility allows modifying the code artificially. But once fixed, the code might stay unchanged over cosmological timescales. Thus, it represents a reliable storage for an intelligent signature, if that conforms to biological and thermodynamic requirements. As the actual scenario for the origin of terrestrial life is far from being settled, the proposal that it might have been seeded intentionally cannot be ruled out. A statistically strong signal in the genetic code is then a testable consequence of such a scenario. Here we show that the terrestrial code displays a thorough precision orderliness matching the criteria to be considered an informational signal. Simple arrangements of the code reveal an ensemble of arithmetical and ideographical patterns of the same symbolic language. Accurate and systematic, these underlying patterns appear as a product of precision logic and nontrivial computing rather than of stochastic processes. The patterns are profound to the extent that the code mapping itself is uniquely deduced from their algebraic representation. The signal displays readily recognizable hallmarks of artificiality. Besides, extraction of the signal involves logically straightforward but abstract operations, making the patterns essentially irreducible to any natural origin. A plausible way of embedding the signal into the code and a possible interpretation of its content are discussed. Overall, while the code is nearly optimized biologically, its limited capacity is used extremely efficiently to store non-biological information.
2210.01551
Hadi Akbarzadeh Khorshidi
Marzieh Soltanolkottabi, Hadi A. Khorshidi, Maarten J. IJzerman
Modeling of Whole Genomic Sequencing Implementation using System Dynamics and Game Theory
The IISE Annual Conference & Expo 2022
null
null
null
q-bio.QM cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biomarker testing is a laboratory procedure in oncology used to select targeted cancer treatments and to help avoid ineffective ones. There exist several types of biomarker tests that can be used to detect the presence of particular mutations or variation in gene expression. Whole Genome Sequencing (WGS) is a biomarker test for analyzing the entire genome. WGS can provide more comprehensive diagnostic information, but it is also more expensive than other tests. In this study, System Dynamics and Game Theoretic models are employed to evaluate scenarios and facilitate organizational decision making regarding WGS implementation. These models evaluate the clinical and economic value of WGS as well as its affordability and accessibility. The evaluated scenarios cover the timing of WGS implementation, using time to diagnosis and total cost as criteria.
[ { "created": "Sun, 2 Oct 2022 09:55:44 GMT", "version": "v1" } ]
2022-10-05
[ [ "Soltanolkottabi", "Marzieh", "" ], [ "Khorshidi", "Hadi A.", "" ], [ "IJzerman", "Maarten J.", "" ] ]
Biomarker testing is a laboratory procedure in oncology used to select targeted cancer treatments and to help avoid ineffective ones. There exist several types of biomarker tests that can be used to detect the presence of particular mutations or variation in gene expression. Whole Genome Sequencing (WGS) is a biomarker test for analyzing the entire genome. WGS can provide more comprehensive diagnostic information, but it is also more expensive than other tests. In this study, System Dynamics and Game Theoretic models are employed to evaluate scenarios and facilitate organizational decision making regarding WGS implementation. These models evaluate the clinical and economic value of WGS as well as its affordability and accessibility. The evaluated scenarios cover the timing of WGS implementation, using time to diagnosis and total cost as criteria.
1712.07640
Nikoleta E. Glynatsi
Nikoleta E. Glynatsi, Vincent Knight and Tamsin E. Lee
An Evolutionary Game Theoretic Model of Rhino Horn Devaluation
null
null
null
null
q-bio.PE cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rhino populations are at a critical level due to the demand for rhino horn and the subsequent poaching. Wildlife managers attempt to secure rhinos with approaches to devalue the horn, the most common of which is dehorning. Game theory has been used to examine the interaction of poachers and wildlife managers where a manager can either `dehorn' their rhinos or leave the horn attached and poachers may behave `selectively' or `indiscriminately'. The approach described in this paper builds on this previous work and investigates the interactions between the poachers. We build an evolutionary game theoretic model and determine which strategy is preferred by a poacher in various populations of poachers. The purpose of this work is to discover whether conditions which encourage the poachers to behave selectively exist, that is, conditions under which poachers only kill rhinos with full horns. The analytical results show that full devaluation of all rhinos will likely lead to indiscriminate poaching. In turn, this shows that devaluing of rhinos can only be effective when implemented along with a strong disincentive framework. This paper aims to contribute to the research required for an informed discussion of the lively debate on legalising rhino horn trade.
[ { "created": "Wed, 20 Dec 2017 18:51:37 GMT", "version": "v1" }, { "created": "Thu, 21 Dec 2017 15:17:48 GMT", "version": "v2" }, { "created": "Fri, 12 Oct 2018 09:47:31 GMT", "version": "v3" } ]
2018-10-15
[ [ "Glynatsi", "Nikoleta E.", "" ], [ "Knight", "Vincent", "" ], [ "Lee", "Tamsin E.", "" ] ]
Rhino populations are at a critical level due to the demand for rhino horn and the subsequent poaching. Wildlife managers attempt to secure rhinos with approaches to devalue the horn, the most common of which is dehorning. Game theory has been used to examine the interaction of poachers and wildlife managers where a manager can either `dehorn' their rhinos or leave the horn attached and poachers may behave `selectively' or `indiscriminately'. The approach described in this paper builds on this previous work and investigates the interactions between the poachers. We build an evolutionary game theoretic model and determine which strategy is preferred by a poacher in various populations of poachers. The purpose of this work is to discover whether conditions which encourage the poachers to behave selectively exist, that is, conditions under which poachers only kill rhinos with full horns. The analytical results show that full devaluation of all rhinos will likely lead to indiscriminate poaching. In turn, this shows that devaluing of rhinos can only be effective when implemented along with a strong disincentive framework. This paper aims to contribute to the research required for an informed discussion of the lively debate on legalising rhino horn trade.
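One standard way to formalize "which strategy is preferred in various populations of poachers" is replicator dynamics over the fraction of selective poachers; the payoff matrix below is a hypothetical placeholder, not the paper's parametrization.

```python
import numpy as np

# Replicator dynamics for the fraction x of 'selective' poachers.
# Hypothetical payoffs: row strategy against column opponent frequency.
A = np.array([[3.0, 1.0],    # selective vs (selective, indiscriminate)
              [3.5, 0.5]])   # indiscriminate vs (selective, indiscriminate)

x, dt = 0.3, 0.01
for _ in range(int(300 / dt)):
    p = np.array([x, 1.0 - x])
    f = A @ p                           # fitness of each strategy
    x += x * (f[0] - p @ f) * dt        # standard replicator equation
print(f"long-run fraction of selective poachers: {x:.3f}")
```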
2209.07583
Caitlin Lienkaemper
Caitlin Lienkaemper
Combinatorial geometry of neural codes, neural data analysis, and neural networks
193 pages, 69 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This dissertation explores applications of discrete geometry in mathematical neuroscience. We begin with convex neural codes, which model the activity of hippocampal place cells and other neurons with convex receptive fields. In Chapter 4, we introduce order-forcing, a tool for constraining convex realizations of codes, and use it to construct new examples of non-convex codes with no local obstructions. In Chapter 5, we relate oriented matroids to convex neural codes, showing that a code has a realization with convex polytopes iff it is the image of a representable oriented matroid under a neural code morphism. We also show that determining whether a code is convex is at least as difficult as determining whether an oriented matroid is representable, implying that the problem of determining whether a code is convex is NP-hard. Next, we turn to the problem of the underlying rank of a matrix. This problem is motivated by the problem of determining the dimensionality of (neural) data which has been corrupted by an unknown monotone transformation. In Chapter 6, we introduce two tools for computing underlying rank, the minimal nodes and the Radon rank. We apply these to analyze calcium imaging data from a larval zebrafish. In Chapter 7, we explore the underlying rank in more detail, establish connections to oriented matroid theory, and show that computing underlying rank is also NP-hard. Finally, we study the dynamics of threshold-linear networks (TLNs), a simple model of the activity of neural circuits. In Chapter 9, we describe the nullcline arrangement of a threshold linear network, and show that a subset of its chambers are an attracting set. In Chapter 10, we focus on combinatorial threshold linear networks (CTLNs), which are TLNs defined from a directed graph. We prove that if the graph of a CTLN is a directed acyclic graph, then all trajectories of the CTLN approach a fixed point.
[ { "created": "Thu, 15 Sep 2022 19:36:56 GMT", "version": "v1" } ]
2022-09-19
[ [ "Lienkaemper", "Caitlin", "" ] ]
This dissertation explores applications of discrete geometry in mathematical neuroscience. We begin with convex neural codes, which model the activity of hippocampal place cells and other neurons with convex receptive fields. In Chapter 4, we introduce order-forcing, a tool for constraining convex realizations of codes, and use it to construct new examples of non-convex codes with no local obstructions. In Chapter 5, we relate oriented matroids to convex neural codes, showing that a code has a realization with convex polytopes iff it is the image of a representable oriented matroid under a neural code morphism. We also show that determining whether a code is convex is at least as difficult as determining whether an oriented matroid is representable, implying that the problem of determining whether a code is convex is NP-hard. Next, we turn to the problem of the underlying rank of a matrix. This problem is motivated by the problem of determining the dimensionality of (neural) data which has been corrupted by an unknown monotone transformation. In Chapter 6, we introduce two tools for computing underlying rank, the minimal nodes and the Radon rank. We apply these to analyze calcium imaging data from a larval zebrafish. In Chapter 7, we explore the underlying rank in more detail, establish connections to oriented matroid theory, and show that computing underlying rank is also NP-hard. Finally, we study the dynamics of threshold-linear networks (TLNs), a simple model of the activity of neural circuits. In Chapter 9, we describe the nullcline arrangement of a threshold linear network, and show that a subset of its chambers are an attracting set. In Chapter 10, we focus on combinatorial threshold linear networks (CTLNs), which are TLNs defined from a directed graph. We prove that if the graph of a CTLN is a directed acyclic graph, then all trajectories of the CTLN approach a fixed point.
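For the CTLN result mentioned last, a short simulation illustrates convergence to a fixed point on a directed acyclic graph. The sketch uses the standard CTLN construction ($W_{ij} = -1+\varepsilon$ if $j \to i$, $-1-\delta$ otherwise, zero diagonal); the graph and constants are our own choices.

```python
import numpy as np

# CTLN on the directed acyclic graph 0 -> 1 -> 2.
eps, delta, b, n = 0.25, 0.5, 1.0, 3
edges = [(0, 1), (1, 2)]                     # (source j, target i)

W = np.full((n, n), -1.0 - delta)            # default: no edge j -> i
for j, i in edges:
    W[i, j] = -1.0 + eps                     # edge j -> i present
np.fill_diagonal(W, 0.0)

x, dt = np.array([0.3, 0.2, 0.1]), 0.01
for _ in range(5000):
    x = x + dt * (-x + np.maximum(W @ x + b, 0.0))  # dx/dt = -x + [Wx + b]_+
print("trajectory approaches the fixed point:", np.round(x, 3))
```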
1908.02333
Sara Ranjbar
Sara Ranjbar, Kyle W. Singleton, Lee Curtin, Susan Christine Massey, Andrea Hawkins-Daarud, Pamela R. Jackson and Kristin R. Swanson
Sex differences in predicting fluid intelligence of adolescent brain from T1-weighted MRIs
8 pages plus references, 2 figures, 2 tables. Submission to the ABCD Neurocognitive Prediction Challenge at MICCAI 2019
null
null
null
q-bio.NC eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fluid intelligence (Gf) has been defined as the ability to reason and solve previously unseen problems. Links to Gf have been found in magnetic resonance imaging (MRI) sequences such as functional MRI and diffusion tensor imaging. As part of the Adolescent Brain Cognitive Development Neurocognitive Prediction Challenge 2019, we sought to predict Gf in children aged 9-10 from T1-weighted (T1W) MRIs. The data included atlas-aligned volumetric T1W images, atlas-defined segmented regions, age, and sex for 3739 subjects used for training and internal validation and 415 subjects used for external validation. We trained sex-specific convolutional neural net (CNN) and random forest models to predict Gf. For the convolutional model, skull-stripped volumetric T1W images aligned to the SRI24 brain atlas were used for training. Volumes of segmented atlas regions along with each subject's age were used to train the random forest regressor models. Performance was measured using the mean squared error (MSE) of the predictions. Random forest models achieved lower MSEs than CNNs. Further, the external validation data had a better MSE for females than males (60.68 vs. 80.74), with a combined MSE of 70.83. Our results suggest that predictive models of Gf from volumetric T1W MRI features alone may perform better when trained separately on male and female data. However, the performance of our models indicates that more information is necessary beyond the available data to make accurate predictions of Gf.
[ { "created": "Tue, 6 Aug 2019 19:09:02 GMT", "version": "v1" } ]
2019-08-08
[ [ "Ranjbar", "Sara", "" ], [ "Singleton", "Kyle W.", "" ], [ "Curtin", "Lee", "" ], [ "Massey", "Susan Christine", "" ], [ "Hawkins-Daarud", "Andrea", "" ], [ "Jackson", "Pamela R.", "" ], [ "Swanson", "Kristin R...
Fluid intelligence (Gf) has been defined as the ability to reason and solve previously unseen problems. Links to Gf have been found in magnetic resonance imaging (MRI) sequences such as functional MRI and diffusion tensor imaging. As part of the Adolescent Brain Cognitive Development Neurocognitive Prediction Challenge 2019, we sought to predict Gf in children aged 9-10 from T1-weighted (T1W) MRIs. The data included atlas-aligned volumetric T1W images, atlas-defined segmented regions, age, and sex for 3739 subjects used for training and internal validation and 415 subjects used for external validation. We trained sex-specific convolutional neural net (CNN) and random forest models to predict Gf. For the convolutional model, skull-stripped volumetric T1W images aligned to the SRI24 brain atlas were used for training. Volumes of segmented atlas regions along with each subject's age were used to train the random forest regressor models. Performance was measured using the mean squared error (MSE) of the predictions. Random forest models achieved lower MSEs than CNNs. Further, the external validation data had a better MSE for females than males (60.68 vs. 80.74), with a combined MSE of 70.83. Our results suggest that predictive models of Gf from volumetric T1W MRI features alone may perform better when trained separately on male and female data. However, the performance of our models indicates that more information is necessary beyond the available data to make accurate predictions of Gf.
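The sex-specific random-forest setup reads, in outline, as below; the data here are synthetic stand-ins for the atlas region volumes and age, and the split sizes are arbitrary, not the ABCD protocol.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-ins for region-volume features and a noisy Gf proxy.
n, d = 400, 50
X = rng.normal(size=(n, d))
sex = rng.integers(0, 2, size=n)                       # 0 = female, 1 = male
y = X[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=n)

for s, label in [(0, "female"), (1, "male")]:
    Xs, ys = X[sex == s], y[sex == s]                  # sex-specific subset
    ntr = int(0.75 * len(ys))
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(Xs[:ntr], ys[:ntr])
    mse = mean_squared_error(ys[ntr:], model.predict(Xs[ntr:]))
    print(f"{label} model MSE: {mse:.2f}")
```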
2005.03323
Michael Hochberg
Michael E. Hochberg
Importance of suppression and mitigation measures in managing COVID-19 outbreaks
14 pages, 2 tables, 4 figures
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
I employ a simple mathematical model of an epidemic process to evaluate how four basic quantities (the reproduction number R, the numbers of susceptible S and infectious I individuals, and the total community size N) affect strategies to control COVID-19. Numerical simulations show that strict suppression measures at the beginning of an epidemic can create low infectious numbers, which thereafter can be managed by mitigation measures over longer periods to flatten the epidemic curve. The stronger the suppression measure, the faster it achieves the low numbers of infections that are conducive to subsequent management. We discuss the predictions of this analysis and how they fit into longer-term sequences of measures, including using the herd immunity concept to leverage acquired immunity.
[ { "created": "Thu, 7 May 2020 08:43:25 GMT", "version": "v1" } ]
2020-05-08
[ [ "Hochberg", "Michael E.", "" ] ]
I employ a simple mathematical model of an epidemic process to evaluate how four basic quantities (the reproduction number R, the numbers of susceptible S and infectious I individuals, and the total community size N) affect strategies to control COVID-19. Numerical simulations show that strict suppression measures at the beginning of an epidemic can create low infectious numbers, which thereafter can be managed by mitigation measures over longer periods to flatten the epidemic curve. The stronger the suppression measure, the faster it achieves the low numbers of infections that are conducive to subsequent management. We discuss the predictions of this analysis and how they fit into longer-term sequences of measures, including using the herd immunity concept to leverage acquired immunity.
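The suppression-then-mitigation sequence can be mimicked with a piecewise-constant reproduction number in a plain SIR integration; the phase schedule and all constants below are illustrative.

```python
# Plain SIR with a piecewise-constant reproduction number: strong
# suppression from day 30, weaker mitigation from day 90.
N, gamma = 1e6, 1 / 10
schedule = [(0, 3.0), (30, 0.7), (90, 1.1)]     # (start day, R in force)

def R_of(t):
    r = schedule[0][1]
    for start, val in schedule:
        if t >= start:
            r = val
    return r

S, I, dt, peak = N - 50.0, 50.0, 0.1, 0.0
for step in range(int(400 / dt)):
    t = step * dt
    new = R_of(t) * gamma * S * I / N * dt      # new infections this step
    S -= new
    I += new - gamma * I * dt
    peak = max(peak, I)
print(f"peak prevalence {peak:.0f}, final epidemic size {N - S:.0f}")
```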
1801.09257
Hyekyoung Lee
Hyekyoung Lee, Eunkyung Kim, Hyejin Kang, Youngmin Huh, Youngjo Lee, Seonhee Lim, Dong Soo Lee
Volume entropy and information flow in a brain graph
null
null
null
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Entropy is a classical measure to quantify the amount of information or complexity of a system. Various entropy-based measures such as functional and spectral entropies have been proposed in brain network analysis. However, they are less widely used than traditional graph theoretic measures such as global and local efficiencies because either they are not well defined on a graph or their biological meaning is difficult to interpret. In this paper, we propose a new entropy-based graph invariant, called volume entropy. It measures the exponential growth rate of the number of paths in a graph, which is a relevant measure if information flows through the graph forever. We model the information propagation on a graph by the generalized Markov system associated to the weighted edge-transition matrix. We estimate the volume entropy using the stationary equation of the generalized Markov system. A prominent advantage of using the stationary equation is that it assigns a certain distribution of weights on the edges of the brain graph, which we call the stationary distribution. The stationary distribution shows the information capacity of edges and the direction of information flow on a brain graph. The simulation results show that the volume entropy distinguishes the underlying graph topology and geometry better than the existing graph measures. In brain imaging data application, the volume entropy of brain graphs was significantly related to healthy normal aging from 20s to 60s. In addition, the stationary distribution of information propagation gives a new insight into the information flow of the functional brain graph.
[ { "created": "Sun, 28 Jan 2018 17:51:43 GMT", "version": "v1" }, { "created": "Wed, 7 Mar 2018 05:16:50 GMT", "version": "v2" } ]
2018-03-08
[ [ "Lee", "Hyekyoung", "" ], [ "Kim", "Eunkyung", "" ], [ "Kang", "Hyejin", "" ], [ "Huh", "Youngmin", "" ], [ "Lee", "Youngjo", "" ], [ "Lim", "Seonhee", "" ], [ "Lee", "Dong Soo", "" ] ]
Entropy is a classical measure to quantify the amount of information or complexity of a system. Various entropy-based measures such as functional and spectral entropies have been proposed in brain network analysis. However, they are less widely used than traditional graph theoretic measures such as global and local efficiencies because either they are not well defined on a graph or their biological meaning is difficult to interpret. In this paper, we propose a new entropy-based graph invariant, called volume entropy. It measures the exponential growth rate of the number of paths in a graph, which is a relevant measure if information flows through the graph forever. We model the information propagation on a graph by the generalized Markov system associated to the weighted edge-transition matrix. We estimate the volume entropy using the stationary equation of the generalized Markov system. A prominent advantage of using the stationary equation is that it assigns a certain distribution of weights on the edges of the brain graph, which we call the stationary distribution. The stationary distribution shows the information capacity of edges and the direction of information flow on a brain graph. The simulation results show that the volume entropy distinguishes the underlying graph topology and geometry better than the existing graph measures. In brain imaging data application, the volume entropy of brain graphs was significantly related to healthy normal aging from 20s to 60s. In addition, the stationary distribution of information propagation gives a new insight into the information flow of the functional brain graph.
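The growth-rate definition has a convenient spectral reading: for an unweighted graph, the number of paths of length $L$ grows like $\rho^L$, with $\rho$ the spectral radius of the adjacency matrix, so $\log\rho$ is a proxy for the volume entropy. The sketch below checks this on a toy graph; note the paper itself works with a weighted edge-transition matrix and its stationary equation, which this proxy does not implement.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # toy adjacency matrix

rho = max(abs(np.linalg.eigvals(A)))
print(f"log spectral radius: {np.log(rho):.4f}")

# Sanity check: count paths of length L directly and compare growth rates.
L = 15
c1 = np.sum(np.linalg.matrix_power(A, L))
c2 = np.sum(np.linalg.matrix_power(A, L + 1))
print(f"empirical growth rate: {np.log(c2 / c1):.4f}")
```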
2107.07603
Joshua Scurll
Joshua M. Scurll
Measuring inter-cluster similarities with Alpha Shape TRIangulation in loCal Subspaces (ASTRICS) facilitates visualization and clustering of high-dimensional data
35 pages, 7 figures
null
null
null
q-bio.QM cs.HC cs.LG stat.CO
http://creativecommons.org/licenses/by/4.0/
Clustering and visualizing high-dimensional (HD) data are important tasks in a variety of fields. For example, in bioinformatics, they are crucial for analyses of single-cell data such as mass cytometry (CyTOF) data. Some of the most effective algorithms for clustering HD data are based on representing the data by nodes in a graph, with edges connecting neighbouring nodes according to some measure of similarity or distance. However, users of graph-based algorithms are typically faced with the critical but challenging task of choosing the value of an input parameter that sets the size of neighbourhoods in the graph, e.g. the number of nearest neighbours to which to connect each node or a threshold distance for connecting nodes. The burden on the user could be alleviated by a measure of inter-node similarity that can have value 0 for dissimilar nodes without requiring any user-defined parameters or thresholds. This would determine the neighbourhoods automatically while still yielding a sparse graph. To this end, I propose a new method called ASTRICS to measure similarity between clusters of HD data points based on local dimensionality reduction and triangulation of critical alpha shapes. I show that my ASTRICS similarity measure can facilitate both clustering and visualization of HD data by using it in Stage 2 of a three-stage pipeline: Stage 1 = perform an initial clustering of the data by any method; Stage 2 = let graph nodes represent initial clusters instead of individual data points and use ASTRICS to automatically define edges between nodes; Stage 3 = use the graph for further clustering and visualization. This trades the critical task of choosing a graph neighbourhood size for the easier task of essentially choosing a resolution at which to view the data. The graph and consequently downstream clustering and visualization are then automatically adapted to the chosen resolution.
[ { "created": "Thu, 15 Jul 2021 20:51:06 GMT", "version": "v1" } ]
2021-07-19
[ [ "Scurll", "Joshua M.", "" ] ]
Clustering and visualizing high-dimensional (HD) data are important tasks in a variety of fields. For example, in bioinformatics, they are crucial for analyses of single-cell data such as mass cytometry (CyTOF) data. Some of the most effective algorithms for clustering HD data are based on representing the data by nodes in a graph, with edges connecting neighbouring nodes according to some measure of similarity or distance. However, users of graph-based algorithms are typically faced with the critical but challenging task of choosing the value of an input parameter that sets the size of neighbourhoods in the graph, e.g. the number of nearest neighbours to which to connect each node or a threshold distance for connecting nodes. The burden on the user could be alleviated by a measure of inter-node similarity that can have value 0 for dissimilar nodes without requiring any user-defined parameters or thresholds. This would determine the neighbourhoods automatically while still yielding a sparse graph. To this end, I propose a new method called ASTRICS to measure similarity between clusters of HD data points based on local dimensionality reduction and triangulation of critical alpha shapes. I show that my ASTRICS similarity measure can facilitate both clustering and visualization of HD data by using it in Stage 2 of a three-stage pipeline: Stage 1 = perform an initial clustering of the data by any method; Stage 2 = let graph nodes represent initial clusters instead of individual data points and use ASTRICS to automatically define edges between nodes; Stage 3 = use the graph for further clustering and visualization. This trades the critical task of choosing a graph neighbourhood size for the easier task of essentially choosing a resolution at which to view the data. The graph and consequently downstream clustering and visualization are then automatically adapted to the chosen resolution.
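The three-stage pipeline can be skeletonized as below. The pairwise similarity used here, overlap of one-dimensional cluster projections, is a crude stand-in for the alpha-shape-based ASTRICS measure; it only illustrates how a similarity that can be exactly zero yields a sparse cluster graph without a neighbourhood-size parameter.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stage 1: any initial clustering (here KMeans on synthetic data).
X = np.vstack([rng.normal(m, 0.5, size=(100, 10)) for m in (0, 2, 8)])
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)

def overlap_similarity(P, Q):
    """Stand-in similarity: 1D overlap along the inter-centroid axis,
    exactly zero for well-separated clusters (NOT the ASTRICS measure)."""
    d = Q.mean(axis=0) - P.mean(axis=0)
    d /= np.linalg.norm(d)
    p, q = P @ d, Q @ d
    return max(min(p.max(), q.max()) - max(p.min(), q.min()), 0.0)

# Stage 2: nodes are clusters; edges appear only where similarity > 0.
edges = []
for i in range(6):
    for j in range(i + 1, 6):
        s = overlap_similarity(X[labels == i], X[labels == j])
        if s > 0.0:
            edges.append((i, j, round(s, 2)))
print("cluster-graph edges:", edges)   # Stage 3: cluster/visualize this graph
```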
0803.1372
Yohsuke Murase
Yohsuke Murase, Takashi Shimada, Nobuyasu Ito, Per Arne Rikvold
Effects of stochastic population fluctuations in two models of biological macroevolution
7 pages, 4 figures
Physics Procedia 6, 76-79 (2010)
10.1016/j.phpro.2010.09.031
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two mathematical models of macroevolution are studied. These models have population dynamics at the species level, and mutations and extinction of species are also included. The population dynamics are updated by difference equations with stochastic noise terms that characterize population fluctuations. The effects of the stochastic population fluctuations on diversity and total population sizes on evolutionary time scales are studied. In one model, species can make either predator-prey, mutualistic, or competitive interactions, while the other model allows only predator-prey interactions. When the noise in the population dynamics is strong enough, both models show intermittent behavior and their power spectral densities show approximate $1/f$ fluctuations. In the noiseless limit, the two models have different power spectral densities. For the predator-prey model, $1/f^2$ fluctuations appear, indicating random-walk-like behavior, while the other model still shows $1/f$ noise. These results indicate that stochastic population fluctuations may significantly affect long-time evolutionary dynamics.
[ { "created": "Mon, 10 Mar 2008 09:35:22 GMT", "version": "v1" } ]
2014-04-16
[ [ "Murase", "Yohsuke", "" ], [ "Shimada", "Takashi", "" ], [ "Ito", "Nobuyasu", "" ], [ "Rikvold", "Per Arne", "" ] ]
Two mathematical models of macroevolution are studied. These models have population dynamics at the species level, and mutations and extinction of species are also included. The population dynamics are updated by difference equations with stochastic noise terms that characterize population fluctuations. The effects of the stochastic population fluctuations on diversity and total population sizes on evolutionary time scales are studied. In one model, species can make either predator-prey, mutualistic, or competitive interactions, while the other model allows only predator-prey interactions. When the noise in the population dynamics is strong enough, both models show intermittent behavior and their power spectral densities show approximate $1/f$ fluctuations. In the noiseless limit, the two models have different power spectral densities. For the predator-prey model, $1/f^2$ fluctuations appear, indicating random-walk-like behavior, while the other model still shows $1/f$ noise. These results indicate that stochastic population fluctuations may significantly affect long-time evolutionary dynamics.
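Estimating a spectral exponent from a simulated population series works as sketched below, here for a noisy logistic difference equation rather than either of the paper's community models; model and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy logistic difference equation as a stand-in population model;
# estimate the spectral exponent alpha in S(f) ~ 1/f**alpha.
T, r, K, sigma = 2**14, 0.1, 1000.0, 5.0
n = np.empty(T)
n[0] = K / 2
for t in range(T - 1):
    n[t + 1] = n[t] + r * n[t] * (1.0 - n[t] / K) + sigma * rng.normal()

f = np.fft.rfftfreq(T)
psd = np.abs(np.fft.rfft(n - n.mean()))**2
band = (f > 1e-3) & (f < 1e-1)                # mid-frequency fitting band
alpha = -np.polyfit(np.log(f[band]), np.log(psd[band]), 1)[0]
print(f"estimated spectral exponent alpha ~ {alpha:.2f}")
```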
2108.00777
Li Liu
Li Liu, Xianghao Zhan, Xikai Yang, Xiaoqing Guan, Rumeng Wu, Zhan Wang, Zhiyuan Luo, You Wang, Guang Li
CPSC: Conformal prediction with shrunken centroids for efficient prediction reliability quantification and data augmentation, a case in alternative herbal medicine classification with electronic nose
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In machine learning applications, the reliability of predictions is important for assisted decision making and risk control. As an effective framework to quantify prediction reliability, conformal prediction (CP) was developed with the CPKNN (CP with kNN). However, the conventional CPKNN suffers from high variance and bias and long computational time as the feature dimensionality increases. To address these limitations, a new CP framework, conformal prediction with shrunken centroids (CPSC), is proposed. It regularizes the class centroids to attenuate the irrelevant features and shrink the sample space for predictions and reliability quantification. To compare CPKNN and CPSC, we employed them in the classification of 12 categories of alternative herbal medicine with an electronic nose as a case study and assessed them in two tasks: 1) offline prediction: the training set was fixed and the accuracy on the testing set was evaluated; 2) online prediction with data augmentation: they filtered unlabeled data to augment the training data based on the prediction reliability, and the final accuracy on the testing set was compared. The results show that CPSC significantly outperformed CPKNN in both tasks: 1) CPSC reached a significantly higher accuracy with lower computation cost, and with the same credibility output, CPSC generally achieved a higher accuracy; 2) the data augmentation process with CPSC robustly manifested a statistically significant improvement in prediction accuracy with different reliability thresholds, and the augmented data were more balanced in classes. This novel CPSC provides higher prediction accuracy and better reliability quantification, which can be a reliable assistance in decision support.
[ { "created": "Mon, 2 Aug 2021 10:49:23 GMT", "version": "v1" } ]
2021-08-03
[ [ "Liu", "Li", "" ], [ "Zhan", "Xianghao", "" ], [ "Yang", "Xikai", "" ], [ "Guan", "Xiaoqing", "" ], [ "Wu", "Rumeng", "" ], [ "Wang", "Zhan", "" ], [ "Luo", "Zhiyuan", "" ], [ "Wang", "You", ...
In machine learning applications, the reliability of predictions is important for assisted decision making and risk control. As an effective framework to quantify prediction reliability, conformal prediction (CP) was developed with the CPKNN (CP with kNN). However, the conventional CPKNN suffers from high variance and bias and long computational time as the feature dimensionality increases. To address these limitations, a new CP framework, conformal prediction with shrunken centroids (CPSC), is proposed. It regularizes the class centroids to attenuate the irrelevant features and shrink the sample space for predictions and reliability quantification. To compare CPKNN and CPSC, we employed them in the classification of 12 categories of alternative herbal medicine with an electronic nose as a case study and assessed them in two tasks: 1) offline prediction: the training set was fixed and the accuracy on the testing set was evaluated; 2) online prediction with data augmentation: they filtered unlabeled data to augment the training data based on the prediction reliability, and the final accuracy on the testing set was compared. The results show that CPSC significantly outperformed CPKNN in both tasks: 1) CPSC reached a significantly higher accuracy with lower computation cost, and with the same credibility output, CPSC generally achieved a higher accuracy; 2) the data augmentation process with CPSC robustly manifested a statistically significant improvement in prediction accuracy with different reliability thresholds, and the augmented data were more balanced in classes. This novel CPSC provides higher prediction accuracy and better reliability quantification, which can be a reliable assistance in decision support.
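An inductive, label-conditional conformal sketch with a centroid-distance nonconformity score conveys the CPSC idea; the soft-thresholding shrinkage below is a simplification and the data are synthetic, so details will differ from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data around different means.
X = np.vstack([rng.normal(m, 1.0, size=(60, 20)) for m in (0.0, 3.0)])
y = np.repeat([0, 1], 60)
idx = rng.permutation(120)
train, cal = idx[:80], idx[80:]          # proper-training / calibration split

mu_all = X[train].mean(axis=0)
centroids = {}
for c in (0, 1):
    mu_c = X[train][y[train] == c].mean(axis=0)
    d = mu_c - mu_all
    # Simplified shrinkage: soft-threshold each coordinate toward the
    # overall mean (a stand-in for the nearest-shrunken-centroid rule).
    centroids[c] = mu_all + np.sign(d) * np.maximum(np.abs(d) - 0.2, 0.0)

def nonconformity(x, c):
    return np.linalg.norm(x - centroids[c])

x_new = rng.normal(0.0, 1.0, size=20)    # a fresh class-0 point
for c in (0, 1):
    cal_scores = [nonconformity(X[i], c) for i in cal if y[i] == c]
    s_new = nonconformity(x_new, c)
    p = (sum(s >= s_new for s in cal_scores) + 1) / (len(cal_scores) + 1)
    print(f"conformal p-value for class {c}: {p:.3f}")
```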
q-bio/0508042
Graziano Vernizzi
G. Vernizzi, P. Ribeca, H. Orland, A. Zee
The Topology of Pseudoknotted Homopolymers
RevTeX 4 pages, 6 figures, minor typos fixed
null
10.1103/PhysRevE.73.031902
SPhT-T05/127, HU-EP-05/37, SFB/CPP-05-38
q-bio.BM cond-mat.soft
null
We consider the folding of a self-avoiding homopolymer on a lattice, with saturating hydrogen bond interactions. Our goal is to numerically evaluate the statistical distribution of the topological genus of pseudoknotted configurations. The genus has been recently proposed for classifying pseudoknots (and their topological complexity) in the context of RNA folding. We compare our results on the distribution of the genus of pseudoknots, with the theoretical predictions of an existing combinatorial model for an infinitely flexible and stretchable homopolymer. We thus obtain that steric and geometric constraints considerably limit the topological complexity of pseudoknotted configurations, as it occurs for instance in real RNA molecules. We also analyze the scaling properties at large homopolymer length, and the genus distributions above and below the critical temperature between the swollen phase and the compact-globule phase, both in two and three dimensions.
[ { "created": "Mon, 29 Aug 2005 18:04:21 GMT", "version": "v1" }, { "created": "Tue, 30 Aug 2005 14:48:16 GMT", "version": "v2" } ]
2009-11-11
[ [ "Vernizzi", "G.", "" ], [ "Ribeca", "P.", "" ], [ "Orland", "H.", "" ], [ "Zee", "A.", "" ] ]
We consider the folding of a self-avoiding homopolymer on a lattice, with saturating hydrogen bond interactions. Our goal is to numerically evaluate the statistical distribution of the topological genus of pseudoknotted configurations. The genus has been recently proposed for classifying pseudoknots (and their topological complexity) in the context of RNA folding. We compare our results on the distribution of the genus of pseudoknots, with the theoretical predictions of an existing combinatorial model for an infinitely flexible and stretchable homopolymer. We thus obtain that steric and geometric constraints considerably limit the topological complexity of pseudoknotted configurations, as it occurs for instance in real RNA molecules. We also analyze the scaling properties at large homopolymer length, and the genus distributions above and below the critical temperature between the swollen phase and the compact-globule phase, both in two and three dimensions.
2109.08416
Matteo Sireci
Jos\'e Camacho Mateu, Matteo Sireci, Miguel A. Mu\~noz
Phenotypic-dependent variability and the emergence of tolerance in bacterial populations
null
null
10.1371/journal.pcbi.1009417
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Ecological and evolutionary dynamics have been historically regarded as unfolding at broadly separated timescales. However, these two types of processes are nowadays well documented to be much more tightly intertwined than traditionally assumed, especially in communities of microorganisms. With this motivation in mind, here we scrutinize recent experimental results showing evidence of rapid evolution of tolerance by lag in bacterial populations that are periodically exposed to antibiotic stress in laboratory conditions. In particular, the distribution of single-cell lag times evolves its average value to approximately fit the antibiotic-exposure time. Moreover, the distribution develops right-skewed heavy tails, revealing the presence of individuals with anomalously large lag times. Here, we develop a parsimonious individual-based model mimicking the actual demographic processes of the experimental setup. Individuals are characterized by a single phenotypic trait: their intrinsic lag time, which is transmitted with variation to the progeny. The model (in a version in which the amplitude of phenotypic variations grows with the parent's lag time) is able to reproduce quite well the key empirical observations. Furthermore, we develop a general mathematical framework allowing us to describe with good accuracy the properties of the stochastic model by means of a macroscopic equation, which generalizes the Crow-Kimura equation in population genetics. From a broader perspective, this work represents a benchmark for the mathematical framework designed to tackle much more general eco-evolutionary problems, thus paving the road to further research avenues.
[ { "created": "Fri, 17 Sep 2021 09:01:38 GMT", "version": "v1" }, { "created": "Wed, 6 Oct 2021 09:33:01 GMT", "version": "v2" } ]
2021-11-17
[ [ "Mateu", "José Camacho", "" ], [ "Sireci", "Matteo", "" ], [ "Muñoz", "Miguel A.", "" ] ]
Ecological and evolutionary dynamics have been historically regarded as unfolding at broadly separated timescales. However, these two types of processes are nowadays well documented to be much more tightly intertwined than traditionally assumed, especially in communities of microorganisms. With this motivation in mind, here we scrutinize recent experimental results showing evidence of rapid evolution of tolerance by lag in bacterial populations that are periodically exposed to antibiotic stress in laboratory conditions. In particular, the distribution of single-cell lag times evolves its average value to approximately fit the antibiotic-exposure time. Moreover, the distribution develops right-skewed heavy tails, revealing the presence of individuals with anomalously large lag times. Here, we develop a parsimonious individual-based model mimicking the actual demographic processes of the experimental setup. Individuals are characterized by a single phenotypic trait: their intrinsic lag time, which is transmitted with variation to the progeny. The model (in a version in which the amplitude of phenotypic variations grows with the parent's lag time) is able to reproduce quite well the key empirical observations. Furthermore, we develop a general mathematical framework allowing us to describe with good accuracy the properties of the stochastic model by means of a macroscopic equation, which generalizes the Crow-Kimura equation in population genetics. From a broader perspective, this work represents a benchmark for the mathematical framework designed to tackle much more general eco-evolutionary problems, thus paving the road to further research avenues.
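A bare-bones version of such an individual-based model is sketched below: heritable single-cell lag times, killing of cells that exit the lag phase during the antibiotic pulse, and phenotypic variation whose amplitude grows with the parent's lag. All numbers and the variation rule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
pulse = 5.0                              # antibiotic exposure time per cycle
lags = rng.exponential(2.0, size=500)    # heritable single-cell lag times

for cycle in range(30):
    survivors = lags[lags >= pulse]      # cells that wake early are killed
    if survivors.size == 0:
        break                            # population went extinct
    kids = np.repeat(survivors, 4)       # survivors regrow the population
    # Variation whose amplitude grows with the parent's lag time, as in
    # the model variant highlighted in the abstract.
    lags = np.abs(kids + 0.2 * kids * rng.normal(size=kids.size))
    if lags.size > 500:
        lags = rng.choice(lags, size=500, replace=False)
print(f"mean evolved lag time: {lags.mean():.2f} (pulse length {pulse})")
```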
2104.11191
Frederick Matsen IV
Michael Karcher, Cheng Zhang, and Frederick A Matsen IV
Variational Bayesian Supertrees
null
null
null
null
q-bio.PE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given overlapping subsets of a set of taxa (e.g. species), and posterior distributions on phylogenetic tree topologies for each of these taxon sets, how can we infer a posterior distribution on phylogenetic tree topologies for the entire taxon set? Although the equivalent problem in the non-Bayesian case has attracted substantial research, the Bayesian case has not attracted the attention it deserves. In this paper we develop a variational Bayes approach to this problem and demonstrate its effectiveness.
[ { "created": "Thu, 22 Apr 2021 17:24:00 GMT", "version": "v1" } ]
2021-04-23
[ [ "Karcher", "Michael", "" ], [ "Zhang", "Cheng", "" ], [ "Matsen", "Frederick A", "IV" ] ]
Given overlapping subsets of a set of taxa (e.g. species), and posterior distributions on phylogenetic tree topologies for each of these taxon sets, how can we infer a posterior distribution on phylogenetic tree topologies for the entire taxon set? Although the equivalent problem in the non-Bayesian case has attracted substantial research, the Bayesian case has not attracted the attention it deserves. In this paper we develop a variational Bayes approach to this problem and demonstrate its effectiveness.
1211.1303
Wahyu Wijaya Hadiwikarta
Wahyu W. Hadiwikarta, Jean-Charles Walter, Jef Hooyberghs, Enrico Carlon
Probing Hybridization parameters from microarray experiments: nearest neighbor model and beyond
13 pages, 11 figures, 1 table, Supplementary Data available in Appendix
Nucleic Acids Research, 2012, Vol. 40, No. 18 e138
10.1093/nar/gks475
null
q-bio.BM cond-mat.soft q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article it is shown how optimized and dedicated microarray experiments can be used to study the thermodynamics of DNA hybridization for a large number of different conformations in a highly parallel fashion. In particular, free energy penalties for mismatches are obtained in two independent ways and are shown to be correlated with values from melting experiments in solution reported in the literature. The additivity principle, which is at the basis of the nearest-neighbor model, and according to which the penalty for two isolated mismatches is equal to the sum of the independent penalties, is thoroughly tested. Additivity is shown to break down for a mismatch distance below 5 nt. The behavior of mismatches in the vicinity of the helix edges and the behavior of tandem mismatches are also investigated. Finally, some thermodynamic outlying sequences are observed and highlighted. These sequences contain combinations of GA mismatches. The analysis of the microarray data reported in this article provides new insights on the DNA hybridization parameters and can help to increase the accuracy of hybridization-based technologies.
[ { "created": "Tue, 6 Nov 2012 16:40:30 GMT", "version": "v1" } ]
2012-11-07
[ [ "Hadiwikarta", "Wahyu W.", "" ], [ "Walter", "Jean-Charles", "" ], [ "Hooyberghs", "Jef", "" ], [ "Carlon", "Enrico", "" ] ]
In this article it is shown how optimized and dedicated microarray experiments can be used to study the thermodynamics of DNA hybridization for a large number of different conformations in a highly parallel fashion. In particular, free energy penalties for mismatches are obtained in two independent ways and are shown to be correlated with values from melting experiments in solution reported in the literature. The additivity principle, which is at the basis of the nearest-neighbor model, and according to which the penalty for two isolated mismatches is equal to the sum of the independent penalties, is thoroughly tested. Additivity is shown to break down for a mismatch distance below 5 nt. The behavior of mismatches in the vicinity of the helix edges and the behavior of tandem mismatches are also investigated. Finally, some thermodynamic outlying sequences are observed and highlighted. These sequences contain combinations of GA mismatches. The analysis of the microarray data reported in this article provides new insights on the DNA hybridization parameters and can help to increase the accuracy of hybridization-based technologies.
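The additivity test amounts to simple arithmetic on the inferred penalties, as in the toy check below; all $\Delta\Delta G$ values are hypothetical placeholders.

```python
# Toy additivity check: compare the penalty of a double mismatch with the
# sum of the two single-mismatch penalties (values in kcal/mol, made up).
ddG_mm1 = 1.8        # single mismatch at one position
ddG_mm2 = 1.1        # single mismatch at another position
ddG_double = 3.2     # the same two mismatches in one duplex

deviation = ddG_double - (ddG_mm1 + ddG_mm2)
print(f"deviation from additivity: {deviation:+.1f} kcal/mol")
# The abstract reports that such deviations become large when the two
# mismatches are closer than about 5 nt.
```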
1712.10184
Niklas L.P. Lundstr\"om
Niklas L.P. Lundstr\"om, Nicolas Loeuille, Xinzhu Meng, Mats Bodin, {\AA}ke Br\"annstr\"om
Meeting yield and conservation objectives by balancing harvesting of juveniles and adults
null
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sustainable yields that are at least 80% of the maximum sustainable yield are sometimes referred to as pretty good yield (PGY). The range of PGY harvesting strategies is generally broad and thus leaves room to account for additional objectives besides high yield. Here, we analyze stage-dependent harvesting strategies that realize PGY with conservation as a second objective. We show that (1) PGY harvesting strategies can give large conservation benefits and (2) harvesting juveniles and adults at equal rates is often a good strategy. These conclusions are based on trade-off curves between yield and four measures of conservation that form in two established population models, one age-structured and one stage-structured model, when considering different harvesting rates of juveniles and adults. These conclusions hold for a broad range of parameter settings, though our investigation of robustness also reveals that (3) predictions of the age-structured model are more sensitive to variations in parameter values than those of the stage-structured model. Finally, we find that (4) measures of stability that are often quite difficult to assess in the field (e.g. basic reproduction ratio and resilience) are systematically negatively correlated with impact on biomass and impact on size structure, so that these latter quantities can provide integrative signals to detect possible collapses.
[ { "created": "Fri, 29 Dec 2017 11:24:53 GMT", "version": "v1" }, { "created": "Mon, 26 Nov 2018 13:36:30 GMT", "version": "v2" }, { "created": "Mon, 25 Feb 2019 13:25:42 GMT", "version": "v3" } ]
2019-02-26
[ [ "Lundström", "Niklas L. P.", "" ], [ "Loeuille", "Nicolas", "" ], [ "Meng", "Xinzhu", "" ], [ "Bodin", "Mats", "" ], [ "Brännström", "Åke", "" ] ]
Sustainable yields that are at least 80% of the maximum sustainable yield are sometimes referred to as pretty good yield (PGY). The range of PGY harvesting strategies is generally broad and thus leaves room to account for additional objectives besides high yield. Here, we analyze stage-dependent harvesting strategies that realize PGY with conservation as a second objective. We show that (1) PGY harvesting strategies can give large conservation benefits and (2) harvesting juveniles and adults at equal rates is often a good strategy. These conclusions are based on trade-off curves between yield and four measures of conservation that form in two established population models, one age-structured and one stage-structured model, when considering different harvesting rates of juveniles and adults. These conclusions hold for a broad range of parameter settings, though our investigation of robustness also reveals that (3) predictions of the age-structured model are more sensitive to variations in parameter values than those of the stage-structured model. Finally, we find that (4) measures of stability that are often quite difficult to assess in the field (e.g. basic reproduction ratio and resilience) are systematically negatively correlated with impact on biomass and impact on size structure, so that these latter quantities can provide integrative signals to detect possible collapses.
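A minimal stage-structured (juvenile/adult) harvesting model makes the yield trade-off concrete: iterate to equilibrium and read off the removals. The dynamics and parameters below are our own illustration, not either of the models analyzed in the paper.

```python
# Two-stage harvesting sketch: juveniles J mature into adults A; fecundity
# is density dependent; hJ and hA are stage-specific harvest rates.
sJ, sA, g, b0, K = 0.5, 0.9, 0.3, 4.0, 100.0   # survival, maturation, fecundity

def equilibrium_yield(hJ, hA, T=2000):
    J, A = 50.0, 50.0
    for _ in range(T):
        births = b0 * A / (1.0 + A / K)          # density-dependent fecundity
        J_next = births + sJ * (1 - hJ) * (1 - g) * J
        A_next = sJ * (1 - hJ) * g * J + sA * (1 - hA) * A
        J, A = J_next, A_next
    return hJ * J + hA * A                       # individuals removed per step

for hJ, hA in [(0.2, 0.2), (0.4, 0.0), (0.0, 0.4)]:
    print(f"hJ={hJ}, hA={hA}: equilibrium yield ~ {equilibrium_yield(hJ, hA):.1f}")
```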
2305.09867
Wei Xie
Keqi Wang, Wei Xie, Hua Zheng
Stochastic Molecular Reaction Queueing Network Modeling for In Vitro Transcription Process
11 pages, 3 figures
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
To facilitate a rapid response to pandemic threats, this paper focuses on developing a mechanistic simulation model for the in vitro transcription (IVT) process, a crucial step in mRNA vaccine manufacturing. To enhance production and support Industry 4.0, this model is proposed to improve the prediction and analysis of the IVT enzymatic reaction network. It incorporates a novel stochastic molecular reaction queueing network with a regulatory kinetic model characterizing the effect of bioprocess state variables on reaction rates. The empirical study demonstrates that the proposed model has promising performance under different production conditions and could offer potential improvements in mRNA product quality and yield.
[ { "created": "Wed, 17 May 2023 00:44:11 GMT", "version": "v1" }, { "created": "Wed, 21 Jun 2023 17:51:40 GMT", "version": "v2" } ]
2023-06-22
[ [ "Wang", "Keqi", "" ], [ "Xie", "Wei", "" ], [ "Zheng", "Hua", "" ] ]
To facilitate a rapid response to pandemic threats, this paper focuses on developing a mechanistic simulation model for the in vitro transcription (IVT) process, a crucial step in mRNA vaccine manufacturing. To enhance production and support Industry 4.0, this model is proposed to improve the prediction and analysis of the IVT enzymatic reaction network. It incorporates a novel stochastic molecular reaction queueing network with a regulatory kinetic model characterizing the effect of bioprocess state variables on reaction rates. The empirical study demonstrates that the proposed model has promising performance under different production conditions and could offer potential improvements in mRNA product quality and yield.
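The paper's queueing-network formulation is specific to its IVT kinetics; the sketch below only illustrates the underlying ingredient, a Gillespie-type stochastic simulation of a toy transcription scheme with state-dependent reaction rates. Species and rate constants are hypothetical.

```python
import random, math

# Generic Gillespie simulation of a toy in vitro transcription scheme
# (hypothetical species and rates; not the paper's queueing network):
#   NTP + template --k1--> mRNA       (propensity scales with the NTP pool)
#   mRNA           --k2--> degraded
def gillespie(ntp=1000, mrna=0, t_end=50.0, k1=0.01, k2=0.05):
    t, traj = 0.0, [(0.0, 0)]
    while t < t_end:
        a1 = k1 * ntp            # synthesis propensity (state-dependent)
        a2 = k2 * mrna           # degradation propensity
        a0 = a1 + a2
        if a0 == 0:
            break
        t += -math.log(random.random()) / a0   # exponential waiting time
        if random.random() * a0 < a1:
            ntp, mrna = ntp - 1, mrna + 1
        else:
            mrna -= 1
        traj.append((t, mrna))
    return traj

random.seed(0)
print(gillespie()[-1])   # final (time, mRNA copy number)
```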
1404.3908
Alan Braslau
J.-L. Sikorav, A. Braslau and A. Goldar
A construction of the genetic material and of proteins
14 pages, 3 figures, 1 table. arXiv admin note: substantial text overlap with arXiv:1401.3203
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A theoretical construction of the genetic material establishes the unique and ideal character of DNA. A similar conclusion is reached for amino acids and proteins.
[ { "created": "Tue, 15 Apr 2014 13:44:59 GMT", "version": "v1" } ]
2014-04-16
[ [ "Sikorav", "J. -L.", "" ], [ "Braslau", "A.", "" ], [ "Goldar", "A.", "" ] ]
A theoretical construction of the genetic material establishes the unique and ideal character of DNA. A similar conclusion is reached for amino acids and proteins.
1609.04033
Peter Grindrod
Peter Grindrod
On Human Consciousness
2 Figures 15 pages
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the implications of the mathematical analysis of neurone-to-neurone dynamical complex networks. We show how the dynamical behaviour of small scale strongly connected networks lead naturally to non-binary information processing and thus multiple hypothesis decision making, even at the very lowest level of the brain's architecture. In turn we build on these ideas to address the hard problem of consciousness. We discuss how a proposed "dual hierarchy model", made up form of both external perceived, physical, elements of increasing complexity, and internal mental elements (experiences), may support a leaning and evolving consciousness. We discuss the idea that a human brain ought to be able to re-conjure subjective mental feelings at will and thus these cannot depend on internal nose (chatter) or internal instability-driven activity. An immediate consequence of this model, grounded in dynamical systems and non-binary information processing, is that finite human brains must always be learning or forgetteing and that any possible subjective internal feeling that may be idealised with a countable infinity of facets, can never be learned by zombies or automata: though it can be experienced more and more fully by an evolving brain (yet never in totality, not even in a lifetime).
[ { "created": "Sun, 11 Sep 2016 17:52:56 GMT", "version": "v1" } ]
2016-09-15
[ [ "Grindrod", "Peter", "" ] ]
We consider the implications of the mathematical analysis of neurone-to-neurone dynamical complex networks. We show how the dynamical behaviour of small scale strongly connected networks lead naturally to non-binary information processing and thus multiple hypothesis decision making, even at the very lowest level of the brain's architecture. In turn we build on these ideas to address the hard problem of consciousness. We discuss how a proposed "dual hierarchy model", made up form of both external perceived, physical, elements of increasing complexity, and internal mental elements (experiences), may support a leaning and evolving consciousness. We discuss the idea that a human brain ought to be able to re-conjure subjective mental feelings at will and thus these cannot depend on internal nose (chatter) or internal instability-driven activity. An immediate consequence of this model, grounded in dynamical systems and non-binary information processing, is that finite human brains must always be learning or forgetteing and that any possible subjective internal feeling that may be idealised with a countable infinity of facets, can never be learned by zombies or automata: though it can be experienced more and more fully by an evolving brain (yet never in totality, not even in a lifetime).
2407.20978
Andrew Hornback
Monica Isgut, Andrew Hornback, Yunan Luo, Asma Khimani, Neha Jain, May D. Wang
Are gene-by-environment interactions leveraged in multi-modality neural networks for breast cancer prediction?
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Polygenic risk scores (PRSs) can significantly enhance breast cancer risk prediction when combined with clinical risk factor data. While many studies have explored the value-add of PRSs, little is known about the potential impact of gene-by-gene or gene-by-environment interactions on enhancing the risk discrimination capabilities of multi-modal models combining PRSs with clinical data. In this study, we integrated data on 318 individual genotype variants along with clinical data in a neural network to explore whether gene-by-gene (i.e., between individual variants) and/or gene-by-environment (between clinical risk factors and variants) interactions could be leveraged jointly during training to improve breast cancer risk prediction performance. We benchmarked our approach against a baseline model combining traditional univariate PRSs with clinical data in a logistic regression model and ran an interpretability analysis to identify feature interactions. While our model did not demonstrate improved performance over the baseline, we discovered 248 (<1%) statistically significant gene-by-gene and gene-by-environment interactions out of the ~53.6k possible feature pairs, the most contributory of which included rs6001930 (MKL1) and rs889312 (MAP3K1), with age and menopause being the most heavily interacting non-genetic risk factors. We also modeled the significant interactions as a network of highly connected features, suggesting that potential higher-order interactions are captured by the model. Although gene-by-environment (or gene-by-gene) interactions did not enhance breast cancer risk prediction performance in neural networks, our study provides evidence that these interactions can be leveraged by these models to inform their predictions. This study represents the first application of neural networks to screen for interactions impacting breast cancer risk using real-world data.
[ { "created": "Tue, 30 Jul 2024 17:15:20 GMT", "version": "v1" } ]
2024-07-31
[ [ "Isgut", "Monica", "" ], [ "Hornback", "Andrew", "" ], [ "Luo", "Yunan", "" ], [ "Khimani", "Asma", "" ], [ "Jain", "Neha", "" ], [ "Wang", "May D.", "" ] ]
Polygenic risk scores (PRSs) can significantly enhance breast cancer risk prediction when combined with clinical risk factor data. While many studies have explored the value-add of PRSs, little is known about the potential impact of gene-by-gene or gene-by-environment interactions on enhancing the risk discrimination capabilities of multi-modal models combining PRSs with clinical data. In this study, we integrated data on 318 individual genotype variants along with clinical data in a neural network to explore whether gene-by-gene (i.e., between individual variants) and/or gene-by-environment (between clinical risk factors and variants) interactions could be leveraged jointly during training to improve breast cancer risk prediction performance. We benchmarked our approach against a baseline model combining traditional univariate PRSs with clinical data in a logistic regression model and ran an interpretability analysis to identify feature interactions. While our model did not demonstrate improved performance over the baseline, we discovered 248 (<1%) statistically significant gene-by-gene and gene-by-environment interactions out of the ~53.6k possible feature pairs, the most contributory of which included rs6001930 (MKL1) and rs889312 (MAP3K1), with age and menopause being the most heavily interacting non-genetic risk factors. We also modeled the significant interactions as a network of highly connected features, suggesting that potential higher-order interactions are captured by the model. Although gene-by-environment (or gene-by-gene) interactions did not enhance breast cancer risk prediction performance in neural networks, our study provides evidence that these interactions can be leveraged by these models to inform their predictions. This study represents the first application of neural networks to screen for interactions impacting breast cancer risk using real-world data.
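A hedged sketch of the kind of baseline described: logistic regression of case/control status on a PRS plus a clinical covariate, with an explicit gene-by-environment interaction term whose p-value can be screened. Data and variable names here are synthetic and illustrative; the sketch assumes the statsmodels package.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic example of PRS + clinical logistic regression with one
# gene-by-environment (SNP x age) interaction term; not the study's data.
rng = np.random.default_rng(0)
n = 2000
prs = rng.normal(size=n)              # polygenic risk score
age = rng.normal(60, 8, size=n)       # clinical risk factor
snp = rng.integers(0, 3, size=n)      # genotype dosage for one variant
logit = -1.0 + 0.5 * prs + 0.02 * (age - 60) + 0.15 * snp * (age - 60) / 10
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Design matrix: intercept, main effects, and the interaction term.
X = sm.add_constant(np.column_stack([prs, age, snp, snp * age]))
fit = sm.Logit(y, X).fit(disp=0)
print("SNP x age interaction p-value:", fit.pvalues[-1])
```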
q-bio/0607034
Marcus Kaiser
Marcus Kaiser and Claus C. Hilgetag
Nonoptimal Component Placement, but Short Processing Paths, due to Long-Distance Projections in Neural Systems
11 pages, 5 figures
PLoS Comput Biol 2(7): e95 (2006)
10.1371/journal.pcbi.0020095
null
q-bio.NC cond-mat.stat-mech
null
It has been suggested that neural systems across several scales of organization show optimal component placement, in which any spatial rearrangement of the components would lead to an increase of total wiring. Using extensive connectivity datasets for diverse neural networks combined with spatial coordinates for network nodes, we applied an optimization algorithm to the network layouts, in order to search for wire-saving component rearrangements. We found that optimized component rearrangements could substantially reduce total wiring length in all tested neural networks. Specifically, total wiring among 95 primate (Macaque) cortical areas could be decreased by 32%, and wiring of neuronal networks in the nematode Caenorhabditis elegans could be reduced by 48% on the global level, and by 49% for neurons within frontal ganglia. Wiring length reductions were possible due to the existence of long-distance projections in neural networks. We explored the role of these projections by comparing the original networks with minimally rewired networks of the same size, which possessed only the shortest possible connections. In the minimally rewired networks, the number of processing steps along the shortest paths between components was significantly increased compared to the original networks. Additional benchmark comparisons also indicated that neural networks are more similar to network layouts that minimize the length of processing paths, rather than wiring length. These findings suggest that neural systems are not exclusively optimized for minimal global wiring, but for a variety of factors including the minimization of processing steps.
[ { "created": "Fri, 21 Jul 2006 17:05:25 GMT", "version": "v1" } ]
2007-05-23
[ [ "Kaiser", "Marcus", "" ], [ "Hilgetag", "Claus C.", "" ] ]
It has been suggested that neural systems across several scales of organization show optimal component placement, in which any spatial rearrangement of the components would lead to an increase of total wiring. Using extensive connectivity datasets for diverse neural networks combined with spatial coordinates for network nodes, we applied an optimization algorithm to the network layouts, in order to search for wire-saving component rearrangements. We found that optimized component rearrangements could substantially reduce total wiring length in all tested neural networks. Specifically, total wiring among 95 primate (Macaque) cortical areas could be decreased by 32%, and wiring of neuronal networks in the nematode Caenorhabditis elegans could be reduced by 48% on the global level, and by 49% for neurons within frontal ganglia. Wiring length reductions were possible due to the existence of long-distance projections in neural networks. We explored the role of these projections by comparing the original networks with minimally rewired networks of the same size, which possessed only the shortest possible connections. In the minimally rewired networks, the number of processing steps along the shortest paths between components was significantly increased compared to the original networks. Additional benchmark comparisons also indicated that neural networks are more similar to network layouts that minimize the length of processing paths, rather than wiring length. These findings suggest that neural systems are not exclusively optimized for minimal global wiring, but for a variety of factors including the minimization of processing steps.
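A minimal sketch of the wire-saving rearrangement search described above: swap the spatial slots of network nodes while keeping connectivity fixed, accepting swaps simulated-annealing style to reduce total wiring length. The toy network, coordinates, and annealing schedule are assumptions, not the study's optimization algorithm.

```python
import random, math

# Toy component-placement optimization: nodes keep their connections but may
# exchange spatial positions; accept swaps that shorten total wiring.
def total_wiring(edges, pos):
    return sum(math.dist(pos[u], pos[v]) for u, v in edges)

def optimize_placement(edges, pos, steps=20000, temp=1.0, cooling=0.9995):
    nodes = list(pos)
    cost = total_wiring(edges, pos)
    for _ in range(steps):
        a, b = random.sample(nodes, 2)
        pos[a], pos[b] = pos[b], pos[a]       # try swapping two slots
        new = total_wiring(edges, pos)
        if new < cost or random.random() < math.exp((cost - new) / temp):
            cost = new                        # accept (occasionally uphill)
        else:
            pos[a], pos[b] = pos[b], pos[a]   # reject: swap back
        temp *= cooling
    return pos, cost

random.seed(1)
pos = {i: (random.random(), random.random()) for i in range(30)}
edges = [(random.randrange(30), random.randrange(30)) for _ in range(60)]
pos, cost = optimize_placement(edges, pos)
print("optimized total wiring:", round(cost, 3))
```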
2407.07126
Loretta del Mercato
Anna Chiara Siciliano, Stefania Forciniti, Valentina Onesto, Helena Iuele, Donatella Delle Cave, Federica Carnevali, Giuseppe Gigli, Enza Lonardo, Loretta L. del Mercato
A 3D Pancreatic Cancer Model with Integrated Optical Sensors for Noninvasive Metabolism Monitoring and Drug Screening
null
Advanced Healthcare Materials 2024, 2401138
10.1002/adhm.202401138
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A distinct feature of pancreatic ductal adenocarcinoma (PDAC) is a prominent tumor microenvironment (TME) with remarkable cellular and spatial heterogeneity that meaningfully impacts disease biology and treatment resistance. The dynamic crosstalk between cancer cells and the dense stromal compartment leads to spatially and temporally heterogeneous metabolic alterations, such as acidic pH, that contribute to drug resistance in PDAC. Thus, monitoring the extracellular pH metabolic fluctuations within the TME is crucial to predict and to quantify anticancer drug efficacy. Here, a simple and reliable alginate-based 3D PDAC model embedding ratiometric optical pH sensors and cocultures of tumor (AsPC-1) and stromal cells for simultaneously monitoring metabolic pH variations and quantifying drug response is presented. By means of time-lapse confocal laser scanning microscopy (CLSM) coupled with a fully automated computational analysis, the extracellular pH metabolic variations are monitored and quantified over time during drug testing with gemcitabine, folfirinox, and paclitaxel, commonly used in PDAC therapy. In particular, the extracellular acidification is more pronounced after drug treatment, resulting in an increased antitumor effect correlated with apoptotic cell death. These findings highlight the importance of studying the influence of cellular metabolic mechanisms on tumor response to therapy in 3D tumor models, this being crucial for the development of personalized medicine approaches.
[ { "created": "Tue, 9 Jul 2024 07:51:47 GMT", "version": "v1" } ]
2024-07-11
[ [ "Siciliano", "Anna Chiara", "" ], [ "Forciniti", "Stefania", "" ], [ "Onesto", "Valentina", "" ], [ "Iuele", "Helena", "" ], [ "Cave", "Donatella Delle", "" ], [ "Carnevali", "Federica", "" ], [ "Gigli", "Giuse...
A distinct feature of pancreatic ductal adenocarcinoma (PDAC) is a prominent tumor microenvironment (TME) with remarkable cellular and spatial heterogeneity that meaningfully impacts disease biology and treatment resistance. The dynamic crosstalk between cancer cells and the dense stromal compartment leads to spatially and temporally heterogeneous metabolic alterations, such as acidic pH, that contribute to drug resistance in PDAC. Thus, monitoring the extracellular pH metabolic fluctuations within the TME is crucial to predict and to quantify anticancer drug efficacy. Here, a simple and reliable alginate-based 3D PDAC model embedding ratiometric optical pH sensors and cocultures of tumor (AsPC-1) and stromal cells for simultaneously monitoring metabolic pH variations and quantifying drug response is presented. By means of time-lapse confocal laser scanning microscopy (CLSM) coupled with a fully automated computational analysis, the extracellular pH metabolic variations are monitored and quantified over time during drug testing with gemcitabine, folfirinox, and paclitaxel, commonly used in PDAC therapy. In particular, the extracellular acidification is more pronounced after drug treatment, resulting in an increased antitumor effect correlated with apoptotic cell death. These findings highlight the importance of studying the influence of cellular metabolic mechanisms on tumor response to therapy in 3D tumor models, this being crucial for the development of personalized medicine approaches.
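A hedged sketch of the ratiometric read-out underlying such pH sensors: the intensity ratio of the pH-sensitive channel to the reference channel is inverted through a sigmoidal calibration curve. All calibration constants below are hypothetical and would have to be fitted to the actual sensor.

```python
import numpy as np

# Hypothetical ratiometric calibration: R = R_min + (R_max - R_min) / (1 + 10**((pKa - pH)/slope)).
# Inverting this Boltzmann-like curve maps measured ratios back to pH values.
def ratio_to_ph(R, R_min=0.2, R_max=2.0, pKa=6.8, slope=1.0):
    frac = np.clip((R - R_min) / (R_max - R_min), 1e-6, 1 - 1e-6)
    return pKa - slope * np.log10(1.0 / frac - 1.0)

sensitive = np.array([120.0, 90.0, 60.0])    # mean intensities per region (toy)
reference = np.array([100.0, 100.0, 100.0])  # reference-dye intensities (toy)
print(ratio_to_ph(sensitive / reference))
```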
1804.10647
Duc Nguyen
Duc Duy Nguyen, Zixuan Cang, Kedi Wu, Menglun Wang, Yin Cao, Guo-Wei Wei
Mathematical deep learning for pose and binding affinity prediction and ranking in D3R Grand Challenges
15 pages, 4 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advanced mathematics, such as multiscale weighted colored graphs and element-specific persistent homology, and machine learning, including deep neural networks, were integrated to construct mathematical deep learning models for pose and binding affinity prediction and ranking in the last two D3R grand challenges in computer-aided drug design and discovery. D3R Grand Challenge 2 (GC2) focused on pose prediction, binding affinity ranking, and free energy prediction for Farnesoid X receptor ligands. Our models obtained the top place in absolute free energy prediction for free energy Set 1 in Stage 2. The latest competition, D3R Grand Challenge 3 (GC3), is considered the most difficult challenge so far. It has 5 subchallenges involving Cathepsin S and five other kinase targets, namely VEGFR2, JAK2, p38-$\alpha$, TIE2, and ABL1. There is a total of 26 official competitive tasks for GC3. Our predictions were ranked 1st in 10 out of 26 official competitive tasks.
[ { "created": "Fri, 27 Apr 2018 18:54:15 GMT", "version": "v1" } ]
2018-05-01
[ [ "Nguyen", "Duc Duy", "" ], [ "Cang", "Zixuan", "" ], [ "Wu", "Kedi", "" ], [ "Wang", "Menglun", "" ], [ "Cao", "Yin", "" ], [ "Wei", "Guo-Wei", "" ] ]
Advanced mathematics, such as multiscale weighted colored graphs and element-specific persistent homology, and machine learning, including deep neural networks, were integrated to construct mathematical deep learning models for pose and binding affinity prediction and ranking in the last two D3R grand challenges in computer-aided drug design and discovery. D3R Grand Challenge 2 (GC2) focused on pose prediction, binding affinity ranking, and free energy prediction for Farnesoid X receptor ligands. Our models obtained the top place in absolute free energy prediction for free energy Set 1 in Stage 2. The latest competition, D3R Grand Challenge 3 (GC3), is considered the most difficult challenge so far. It has 5 subchallenges involving Cathepsin S and five other kinase targets, namely VEGFR2, JAK2, p38-$\alpha$, TIE2, and ABL1. There is a total of 26 official competitive tasks for GC3. Our predictions were ranked 1st in 10 out of 26 official competitive tasks.
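A hedged sketch in the spirit of the multiscale weighted colored graphs mentioned above: for one element pair, pairwise protein-ligand distances are summed through a smooth kernel, and stacking such features over element pairs and kernel scales yields a fixed-length descriptor for a network. The kernel form and parameters here are illustrative, not the authors' exact feature set.

```python
import numpy as np

# One element-pair feature of a weighted geometric graph: sum a generalized
# exponential kernel over all cross-distances between two atom sets.
def pair_feature(coords_a, coords_b, tau=3.0, kappa=2.0):
    diff = coords_a[:, None, :] - coords_b[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    return np.exp(-(d / tau) ** kappa).sum()

# Toy coordinates standing in for protein carbons and ligand nitrogens.
protein_C = np.random.default_rng(0).normal(size=(50, 3)) * 5
ligand_N = np.random.default_rng(1).normal(size=(5, 3))
print("C-N graph feature:", round(pair_feature(protein_C, ligand_N), 3))
# Stacking such features over many element pairs and tau scales gives a
# fixed-length vector that can feed a deep network.
```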
2109.15027
Cecilia Berardo
Cecilia Berardo, Stefan Geritz
Analysis of a functional response with prey-density dependent handling time from an evolutionary perspective
29 pages, 15 figures
null
null
null
q-bio.PE math.DS math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Theoretical models show that in a non-constant environment two predator species feeding on one and the same prey may coexist because the two species occupy different temporal niches: the one with the longer handling time has the advantage when prey is rare, so that holding on to the same catch is the better option, while the species with the shorter handling time has the advantage when prey is common and easy to catch. In this paper we address the question of whether a predator species with a handling time that is not fixed but decreases with prey density could be selectively superior regardless of whether the prey is rare or common, as such a predator would be able to occupy both temporal niches all by itself. To that end we study the Rosenzweig-MacArthur model with a modified Holling type II functional response with a density-dependent handling time and a handling-time-dependent conversion factor. We find that the population dynamics tend to be richer than those of the standard model with fixed handling times because of the possibility of multiple positive equilibria and positive attractors. Increasing the strength of the density dependence eventually stabilises the population. Using the framework of adaptive dynamics, we study the evolution of the strength of the density dependence. We find that a predator with even a weakly density-dependent handling time can invade both monomorphic and dimorphic populations of predators with fixed handling times. Eventually the strength of the density dependence of the handling time evolves to a level where population cycles are lost, and with them the possibility of predator coexistence.
[ { "created": "Thu, 30 Sep 2021 11:51:37 GMT", "version": "v1" } ]
2021-10-01
[ [ "Berardo", "Cecilia", "" ], [ "Geritz", "Stefan", "" ] ]
Theoretical models show that in a non-constant environment two predator species feeding on one and the same prey may coexist because the two species occupy different temporal niches: the one with the longer handling time has the advantage when prey is rare, so that holding on to the same catch is the better option, while the species with the shorter handling time has the advantage when prey is common and easy to catch. In this paper we address the question of whether a predator species with a handling time that is not fixed but decreases with prey density could be selectively superior regardless of whether the prey is rare or common, as such a predator would be able to occupy both temporal niches all by itself. To that end we study the Rosenzweig-MacArthur model with a modified Holling type II functional response with a density-dependent handling time and a handling-time-dependent conversion factor. We find that the population dynamics tend to be richer than those of the standard model with fixed handling times because of the possibility of multiple positive equilibria and positive attractors. Increasing the strength of the density dependence eventually stabilises the population. Using the framework of adaptive dynamics, we study the evolution of the strength of the density dependence. We find that a predator with even a weakly density-dependent handling time can invade both monomorphic and dimorphic populations of predators with fixed handling times. Eventually the strength of the density dependence of the handling time evolves to a level where population cycles are lost, and with them the possibility of predator coexistence.
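A minimal numerical sketch of the model variant described: the Rosenzweig-MacArthur system with a handling time that decreases with prey density. The specific functional form h(x) = h0/(1 + sigma*x) and all parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rosenzweig-MacArthur with prey-density dependent handling time (sketch).
r, K, a, h0, d, e = 1.0, 10.0, 1.0, 0.5, 0.4, 0.6
sigma = 0.5   # strength of the density dependence of the handling time

def rhs(t, z):
    x, y = z
    h = h0 / (1 + sigma * x)             # handling time drops when prey is common
    capture = a * x / (1 + a * h * x)    # Holling-II-like functional response
    return [r * x * (1 - x / K) - capture * y, e * capture * y - d * y]

sol = solve_ivp(rhs, (0, 200), [1.0, 1.0], max_step=0.1)
x_end, y_end = sol.y[:, -1]
print(f"state at t=200: prey {x_end:.3f}, predator {y_end:.3f}")
```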
1306.5366
Michael Courtney
Michael W. Courtney and Joshua M. Courtney
National Oceanic and Atmospheric Administration Publishes Misleading Information on Gulf of Mexico "Dead Zone"
6 pages, 3 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mississippi River nutrient loads and water stratification on the Louisiana-Texas shelf contribute to an annually recurring, short-lived hypoxic bottom layer in areas of the northern Gulf of Mexico comprising less than 2% of the total Gulf of Mexico bottom area. Many publications demonstrate increases in biomass and fisheries production attributed to nutrient loading from river plumes. Decreases in fisheries production when nutrient loads are decreased are also well documented. However, the National Oceanic and Atmospheric Administration (NOAA) persists in describing the area adjacent to the Mississippi River discharge as a "dead zone" and predicting dire consequences if nutrient loads are not reduced. In reality, these areas teem with aquatic life and provide 70-80% of the Gulf of Mexico fishery production. On June 18, 2013, NOAA published a misleading figure purporting to show the "dead zone" in an article predicting a possible record dead zone area for 2013 (http://www.noaanews.noaa.gov/stories2013/20130618_deadzone.html). This area is not a region of hypoxic bottom water at all nor is it related directly to 2013 predicted hypoxia. This figure appeared as early as 2004 in a National Aeronautics and Space Administration (NASA) article (http://www.nasa.gov/vision/earth/environment/dead_zone.html) as a satellite image where the red area represents turbidity and is much larger than the short-lived areas of hypoxic bottom water documented in actual NOAA measurements. Thus, it is misleading for NOAA to characterize the red area in that image as a "dead zone." The NOAA has also published other misleading and exaggerated descriptions of the consequences of nutrient loading.
[ { "created": "Sun, 23 Jun 2013 01:54:55 GMT", "version": "v1" } ]
2013-06-25
[ [ "Courtney", "Michael W.", "" ], [ "Courtney", "Joshua M.", "" ] ]
Mississippi River nutrient loads and water stratification on the Louisiana-Texas shelf contribute to an annually recurring, short-lived hypoxic bottom layer in areas of the northern Gulf of Mexico comprising less than 2% of the total Gulf of Mexico bottom area. Many publications demonstrate increases in biomass and fisheries production attributed to nutrient loading from river plumes. Decreases in fisheries production when nutrient loads are decreased are also well documented. However, the National Oceanic and Atmospheric Administration (NOAA) persists in describing the area adjacent to the Mississippi River discharge as a "dead zone" and predicting dire consequences if nutrient loads are not reduced. In reality, these areas teem with aquatic life and provide 70-80% of the Gulf of Mexico fishery production. On June 18, 2013, NOAA published a misleading figure purporting to show the "dead zone" in an article predicting a possible record dead zone area for 2013 (http://www.noaanews.noaa.gov/stories2013/20130618_deadzone.html). This area is not a region of hypoxic bottom water at all nor is it related directly to 2013 predicted hypoxia. This figure appeared as early as 2004 in a National Aeronautics and Space Administration (NASA) article (http://www.nasa.gov/vision/earth/environment/dead_zone.html) as a satellite image where the red area represents turbidity and is much larger than the short-lived areas of hypoxic bottom water documented in actual NOAA measurements. Thus, it is misleading for NOAA to characterize the red area in that image as a "dead zone." The NOAA has also published other misleading and exaggerated descriptions of the consequences of nutrient loading.
2401.17433
Yingnan Song
Hao Wu, Yingnan Song, Ammar Hoori, Ananya Subramaniam, Juhwan Lee, Justin Kim, Tao Hu, Sadeer Al-Kindi, Wei-Ming Huang, Chun-Ho Yun, Chung-Lieh Hung, Sanjay Rajagopalan, David L. Wilson
Coronary CTA and Quantitative Cardiac CT Perfusion (CCTP) in Coronary Artery Disease
null
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
We assessed the benefit of combining stress cardiac CT perfusion (CCTP) myocardial blood flow (MBF) with coronary CT angiography (CCTA) using our innovative CCTP software. By combining CCTA and CCTP, one can uniquely identify a flow-limiting stenosis (obstructive-lesion + low-MBF) versus MVD (no-obstructive-lesion + low-MBF). We retrospectively evaluated 104 patients with suspected CAD, including 18 with diabetes, who underwent CCTA+CCTP. Whole-heart and territorial MBF was assessed using our automated pipeline for CCTP analysis that included beam-hardening correction; temporal scan registration; automated segmentation; fast, accurate, robust MBF estimation; and visualization. Stenosis severity was scored using the CCTA coronary-artery-disease-reporting-and-data-system (CAD-RADS), with obstructive stenosis deemed CAD-RADS>=3. We established a threshold MBF (MBF=199 mL/min-100g) for normal perfusion. In patients with CAD-RADS>=3, 28/37 (76%) patients showed ischemia in the corresponding territory. Two patients with obstructive disease had normal perfusion, suggesting collaterals and/or a hemodynamically insignificant stenosis. Among diabetics, 10 of 18 (56%) demonstrated diffuse ischemia consistent with MVD. Among non-diabetics, only 6% had MVD. Sex-specific prevalence of MVD was 21%/24% (M/F). On a per-vessel basis (n=256), MBF showed a significant difference between territories with and without obstructive stenosis (165 +/- 61 mL/min-100g vs. 274 +/- 62 mL/min-100g, p<0.05). A significant negative rank correlation (rho=-0.53, p<0.05) between territory MBF and CAD-RADS was seen. CCTA in conjunction with a new automated quantitative CCTP approach can augment the interpretation of CAD, enabling the distinction of ischemia due to obstructive lesions from MVD.
[ { "created": "Tue, 30 Jan 2024 20:44:07 GMT", "version": "v1" } ]
2024-02-01
[ [ "Wu", "Hao", "" ], [ "Song", "Yingnan", "" ], [ "Hoori", "Ammar", "" ], [ "Subramaniam", "Ananya", "" ], [ "Lee", "Juhwan", "" ], [ "Kim", "Justin", "" ], [ "Hu", "Tao", "" ], [ "Al-Kindi", "Sad...
We assessed the benefit of combining stress cardiac CT perfusion (CCTP) myocardial blood flow (MBF) with coronary CT angiography (CCTA) using our innovative CCTP software. By combining CCTA and CCTP, one can uniquely identify a flow-limiting stenosis (obstructive-lesion + low-MBF) versus MVD (no-obstructive-lesion + low-MBF). We retrospectively evaluated 104 patients with suspected CAD, including 18 with diabetes, who underwent CCTA+CCTP. Whole-heart and territorial MBF was assessed using our automated pipeline for CCTP analysis that included beam-hardening correction; temporal scan registration; automated segmentation; fast, accurate, robust MBF estimation; and visualization. Stenosis severity was scored using the CCTA coronary-artery-disease-reporting-and-data-system (CAD-RADS), with obstructive stenosis deemed CAD-RADS>=3. We established a threshold MBF (MBF=199 mL/min-100g) for normal perfusion. In patients with CAD-RADS>=3, 28/37 (76%) patients showed ischemia in the corresponding territory. Two patients with obstructive disease had normal perfusion, suggesting collaterals and/or a hemodynamically insignificant stenosis. Among diabetics, 10 of 18 (56%) demonstrated diffuse ischemia consistent with MVD. Among non-diabetics, only 6% had MVD. Sex-specific prevalence of MVD was 21%/24% (M/F). On a per-vessel basis (n=256), MBF showed a significant difference between territories with and without obstructive stenosis (165 +/- 61 mL/min-100g vs. 274 +/- 62 mL/min-100g, p<0.05). A significant negative rank correlation (rho=-0.53, p<0.05) between territory MBF and CAD-RADS was seen. CCTA in conjunction with a new automated quantitative CCTP approach can augment the interpretation of CAD, enabling the distinction of ischemia due to obstructive lesions from MVD.
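The joint read-out described above reduces to a simple decision rule per territory; the sketch below encodes it using the CAD-RADS>=3 obstruction cut-off and the 199 mL/min-100g MBF threshold quoted in the abstract. The category labels are paraphrases, not clinical terminology from the paper.

```python
# Joint CCTA + CCTP read-out per myocardial territory (sketch).
MBF_NORMAL = 199.0   # mL/min per 100 g, threshold quoted in the abstract

def classify_territory(cad_rads, mbf):
    obstructive = cad_rads >= 3          # CCTA stenosis grade
    ischemic = mbf < MBF_NORMAL          # CCTP perfusion
    if obstructive and ischemic:
        return "flow-limiting stenosis"
    if obstructive:
        return "obstructive but perfused (collaterals / non-significant)"
    if ischemic:
        return "microvascular disease (MVD) pattern"
    return "normal"

for grade, flow in [(4, 150.0), (4, 250.0), (1, 160.0), (0, 280.0)]:
    print(grade, flow, "->", classify_territory(grade, flow))
```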
1903.04576
Joaquin Torres
Javier A. Galad\'i and Joaqu\'in J. Torres and J. Marro
Emergence of Brain Rhythms: Model Interpretation of EEG Data
24 pages, 7 figures
null
null
null
q-bio.NC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electroencephalography (EEG) monitors ---by either intrusive or noninvasive electrodes--- time and frequency variations and spectral content of voltage fluctuations or waves, known as brain rhythms, which in some way uncover activity during both rest periods and specific events in which the subject is under stimulus. This is a useful tool to explore brain behavior, as it complements imaging techniques that have a poorer temporal resolution. We here approach the understanding of EEG data from first principles by studying a networked model of excitatory and inhibitory neurons which generates a variety of comparable waves. In fact, we thus reproduce $\alpha$, $\beta$, $\gamma$ and other rhythms as observed by EEG, and identify the details of the respectively involved complex phenomena, including a precise relationship between an input and the collective response to it. This reveals the potential of our model to better understand actual mind mechanisms and their possible disorders, and we also describe a kind of stochastic resonance phenomenon which locates the main qualitative changes of mental behavior in (e.g.) humans. We also discuss the plausible use of these findings to design deep learning algorithms to detect the occurrence of phase transitions in the brain and to analyse their consequences.
[ { "created": "Mon, 11 Mar 2019 20:13:42 GMT", "version": "v1" } ]
2019-03-13
[ [ "Galadí", "Javier A.", "" ], [ "Torres", "Joaquín J.", "" ], [ "Marro", "J.", "" ] ]
Electroencephalography (EEG) monitors ---by either intrusive or noninvasive electrodes--- time and frequency variations and spectral content of voltage fluctuations or waves, known as brain rhythms, which in some way uncover activity during both rest periods and specific events in which the subject is under stimulus. This is a useful tool to explore brain behavior, as it complements imaging techniques that have a poorer temporal resolution. We here approach the understanding of EEG data from first principles by studying a networked model of excitatory and inhibitory neurons which generates a variety of comparable waves. In fact, we thus reproduce $\alpha$, $\beta$, $\gamma$ and other rhythms as observed by EEG, and identify the details of the respectively involved complex phenomena, including a precise relationship between an input and the collective response to it. This reveals the potential of our model to better understand actual mind mechanisms and their possible disorders, and we also describe a kind of stochastic resonance phenomenon which locates the main qualitative changes of mental behavior in (e.g.) humans. We also discuss the plausible use of these findings to design deep learning algorithms to detect the occurrence of phase transitions in the brain and to analyse their consequences.
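For intuition on how coupled excitatory and inhibitory populations generate rhythms, here is a classic Wilson-Cowan-type rate model in a parameter regime commonly reported as oscillatory; it is a far simpler object than the networked neuron model of the paper and is included only as a sketch.

```python
import numpy as np

# Wilson-Cowan-type E/I rate model; parameters follow a commonly cited
# oscillatory regime (an assumption, not the paper's network model).
def sig(x, a, th):
    return 1.0 / (1.0 + np.exp(-a * (x - th)))

dt, steps = 0.05, 8000
c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
aE, thE, aI, thI, P = 1.3, 4.0, 2.0, 3.7, 1.25
E, I, trace = 0.1, 0.1, []
for _ in range(steps):
    E += dt * (-E + (1 - E) * sig(c1 * E - c2 * I + P, aE, thE))
    I += dt * (-I + (1 - I) * sig(c3 * E - c4 * I, aI, thI))
    trace.append(E)

# dominant frequency bin of the emergent E rhythm (arbitrary time units)
trace = np.array(trace[steps // 2:])
freq_bin = np.abs(np.fft.rfft(trace - trace.mean()))[1:].argmax() + 1
print("dominant FFT bin of the E-population activity:", freq_bin)
```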
2203.02814
John Stevenson PhD
John C. Stevenson
Competitive Exclusion in an Artificial Foraging Ecosystem
10 pages, 10 figures, 1 table
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial ecosystems provide an additional experimental tool to support laboratory work, field work, and theoretical development in competitive exclusion research. A novel application of a spatiotemporal agent-based model is presented which simulates two foraging species of different intrinsic growth rates competing for the same replenishing resource in stable and seasonal environments. This experimental approach provides precise control over the parameters for the environment, the species, and individual movements. Detailed trajectories of these non-equilibrium populations and their characteristics are produced. Narrow zones of potential coexistence are identified within the environmental and intrinsic growth parameter spaces. An example of commensalism driven by the local spatial dynamics is identified. Results of these experiments are discussed in the context of modern coexistence theory and research in movement-mediated community assembly. Constraints on possible origination scenarios are identified.
[ { "created": "Sat, 5 Mar 2022 20:22:14 GMT", "version": "v1" }, { "created": "Mon, 27 Feb 2023 20:44:57 GMT", "version": "v2" }, { "created": "Tue, 30 May 2023 19:43:06 GMT", "version": "v3" } ]
2023-06-01
[ [ "Stevenson", "John C.", "" ] ]
Artificial ecosystems provide an additional experimental tool to support laboratory work, field work, and theoretical development in competitive exclusion research. A novel application of a spatiotemporal agent-based model is presented which simulates two foraging species of different intrinsic growth rates competing for the same replenishing resource in stable and seasonal environments. This experimental approach provides precise control over the parameters for the environment, the species, and individual movements. Detailed trajectories of these non-equilibrium populations and their characteristics are produced. Narrow zones of potential coexistence are identified within the environmental and intrinsic growth parameter spaces. An example of commensalism driven by the local spatial dynamics is identified. Results of these experiments are discussed in the context of modern coexistence theory and research in movement-mediated community assembly. Constraints on possible origination scenarios are identified.
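A toy sketch of the setting described: two foraging species that differ only in intrinsic growth rate, moving randomly on a grid and competing for a replenishing resource. The movement rules, energetics, and rates below are illustrative, not the model's actual specification.

```python
import random

# Two-species foraging ABM sketch: agents eat replenishing resource, pay a
# metabolic cost, reproduce with species-specific probability, die at e <= 0.
random.seed(0)
N = 20
resource = [[1.0] * N for _ in range(N)]
# each agent: [x, y, energy, birth_prob]; species differ in birth_prob
agents = [[random.randrange(N), random.randrange(N), 2.0, g]
          for g in (0.3, 0.5) for _ in range(50)]
for step in range(200):
    nxt = []
    for x, y, e, g in agents:
        x = (x + random.choice((-1, 0, 1))) % N
        y = (y + random.choice((-1, 0, 1))) % N
        gain = min(resource[x][y], 0.5)
        resource[x][y] -= gain
        e += gain - 0.3                           # metabolic cost per step
        if e > 0:
            nxt.append([x, y, e, g])
            if random.random() < g and e > 1.0:   # reproduce, splitting energy
                nxt[-1][2] = e / 2
                nxt.append([x, y, e / 2, g])
    agents = nxt
    resource = [[min(r + 0.05, 1.0) for r in row] for row in resource]

counts = {g: sum(1 for a in agents if a[3] == g) for g in (0.3, 0.5)}
print("survivors by intrinsic growth rate:", counts)
```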
2404.12895
Jan Erik Bellingrath
Jan Erik Bellingrath
The emergence of the width of subjective temporality: the self-simulational theory of temporal extension from the perspective of the free energy principle
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The self-simulational theory of temporal extension describes an information-theoretically formalized mechanism by which the width of subjective temporality emerges from the architecture of self-modelling. In this paper, the perspective of the free energy principle will be assumed, to cast the emergence of subjective temporality, along with a mechanism for duration estimation, from first principles of the physics of self-organization. Using active inference, a deep parametric generative model of temporal inference is simulated, which realizes the described dynamics on a computational level. Two biases (i.e. variations) of time-perception naturally emerge from the simulation. This concerns the intentional binding effect (i.e. the subjective compression of the temporal interval between voluntarily initiated actions and subsequent sensory consequences) and empirically documented alterations of subjective time experience in deep and concentrated states of meditative absorption. Generally, numerous systematic and domain-specific variations of subjective time experience are computationally explained, as enabled by integration with current active inference accounts mapping onto the respective domains. This concerns the temporality modulating role of negative valence, impulsivity, boredom, flow-states, and near death-experiences, amongst others. The self-simulational theory of temporal extension, from the perspective of the free energy principle, explains how the subjective temporal Now emerges and varies from first principles, accounting for why sometimes, subjective time seems to fly, and sometimes, moments feel like eternities; with the computational mechanism being readily deployable synthetically.
[ { "created": "Fri, 19 Apr 2024 13:58:34 GMT", "version": "v1" }, { "created": "Fri, 7 Jun 2024 09:14:54 GMT", "version": "v2" } ]
2024-06-10
[ [ "Bellingrath", "Jan Erik", "" ] ]
The self-simulational theory of temporal extension describes an information-theoretically formalized mechanism by which the width of subjective temporality emerges from the architecture of self-modelling. In this paper, the perspective of the free energy principle will be assumed, to cast the emergence of subjective temporality, along with a mechanism for duration estimation, from first principles of the physics of self-organization. Using active inference, a deep parametric generative model of temporal inference is simulated, which realizes the described dynamics on a computational level. Two biases (i.e. variations) of time-perception naturally emerge from the simulation. This concerns the intentional binding effect (i.e. the subjective compression of the temporal interval between voluntarily initiated actions and subsequent sensory consequences) and empirically documented alterations of subjective time experience in deep and concentrated states of meditative absorption. Generally, numerous systematic and domain-specific variations of subjective time experience are computationally explained, as enabled by integration with current active inference accounts mapping onto the respective domains. This concerns the temporality modulating role of negative valence, impulsivity, boredom, flow-states, and near death-experiences, amongst others. The self-simulational theory of temporal extension, from the perspective of the free energy principle, explains how the subjective temporal Now emerges and varies from first principles, accounting for why sometimes, subjective time seems to fly, and sometimes, moments feel like eternities; with the computational mechanism being readily deployable synthetically.
2204.11899
David Navidad Maeso
D. Navidad Maeso, M. Patriarca, E. Heinsalu
Influence of invasion on natural selection in dispersal-structured populations
16 pages, 8 figures
Physica A 598 (2022) 127389
10.1016/j.physa.2022.127389
null
q-bio.PE nlin.AO nlin.PS physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
We investigate how initial and boundary conditions influence the competition dynamics and outcome in dispersal-structured populations. The study is carried out through numerical modeling of the heterogeneous Brownian bugs model, in which individuals are characterized by diversified diffusion coefficients and compete for resources within a finite interaction range. We observe that if, instead of being distributed randomly across the domain, the population is initially localized within a small region, the dynamics and the stationary state of the system differ significantly. The outcome of natural selection is determined by different competition processes, fluctuations, and the spreading of the organisms.
[ { "created": "Mon, 25 Apr 2022 18:04:40 GMT", "version": "v1" } ]
2022-04-27
[ [ "Maeso", "D. Navidad", "" ], [ "Patriarca", "M.", "" ], [ "Heinsalu", "E.", "" ] ]
We investigate how initial and boundary conditions influence the competition dynamics and outcome in dispersal-structured populations. The study is carried out through numerical modeling of the heterogeneous Brownian bugs model, in which individuals are characterized by diversified diffusion coefficients and compete for resources within a finite interaction range. We observe that if, instead of being distributed randomly across the domain, the population is initially localized within a small region, the dynamics and the stationary state of the system differ significantly. The outcome of natural selection is determined by different competition processes, fluctuations, and the spreading of the organisms.
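A minimal sketch of one step of a heterogeneous Brownian bugs simulation: each bug diffuses with its own coefficient, and death rates grow with the number of neighbors inside the interaction radius. The rates, radii, and two-phenotype setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, R, dt = 1.0, 0.05, 0.01   # domain size, interaction radius, time step

def step(pos, D):
    # Brownian displacement, one diffusion coefficient per bug; periodic box.
    pos = (pos + rng.normal(scale=np.sqrt(2 * D[:, None] * dt), size=pos.shape)) % L
    # Local competition: death rate grows with neighbor count within R.
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    neighbors = (d2 < R * R).sum(1) - 1
    death = rng.random(len(pos)) < dt * (0.1 + 0.02 * neighbors)
    pos, D = pos[~death], D[~death]
    # Density-independent birth: offspring placed at the parent position.
    born = rng.random(len(pos)) < dt * 0.9
    return np.vstack([pos, pos[born]]), np.concatenate([D, D[born]])

pos = rng.random((200, 2)) * L                 # random initial placement
D = rng.choice([1e-4, 1e-3], size=200)         # two diffusion phenotypes
for _ in range(100):
    pos, D = step(pos, D)
print("population:", len(pos), "| fast diffusers:", int((D == 1e-3).sum()))
```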
1505.07335
Guy Karlebach
Guy Karlebach
A Novel Algorithm for the Maximal Fit Problem in Boolean Networks
null
null
null
null
q-bio.MN cs.CE cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene regulatory networks (GRNs) are increasingly used for explaining biological processes with complex transcriptional regulation. A GRN links the expression levels of a set of genes via regulatory controls that gene products exert on one another. Boolean networks are a common modeling choice since they balance between detail and ease of analysis. However, even for Boolean networks the problem of fitting a given network model to an expression dataset is NP-Complete. Previous methods have addressed this issue heuristically or by focusing on acyclic networks and specific classes of regulation functions. In this paper we introduce a novel algorithm for this problem that makes use of sampling in order to handle large datasets. Our algorithm can handle time series data for any network type and steady state data for acyclic networks. Using in-silico time series data we demonstrate good performance on large datasets with a significant level of noise.
[ { "created": "Mon, 25 May 2015 08:12:41 GMT", "version": "v1" }, { "created": "Tue, 31 May 2016 19:32:39 GMT", "version": "v2" }, { "created": "Mon, 20 Jun 2016 02:29:12 GMT", "version": "v3" } ]
2016-06-21
[ [ "Karlebach", "Guy", "" ] ]
Gene regulatory networks (GRNs) are increasingly used for explaining biological processes with complex transcriptional regulation. A GRN links the expression levels of a set of genes via regulatory controls that gene products exert on one another. Boolean networks are a common modeling choice since they balance between detail and ease of analysis. However, even for Boolean networks the problem of fitting a given network model to an expression dataset is NP-Complete. Previous methods have addressed this issue heuristically or by focusing on acyclic networks and specific classes of regulation functions. In this paper we introduce a novel algorithm for this problem that makes use of sampling in order to handle large datasets. Our algorithm can handle time series data for any network type and steady state data for acyclic networks. Using in-silico time series data we demonstrate good performance on large datasets with a significant level of noise.
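For concreteness, the quantity being optimized can be sketched as follows: given candidate Boolean update functions, count the disagreements between predicted next states and an observed expression time series. The three-gene network and data below are toys; the paper's contribution is the sampling-based search over such fits, which is not reproduced here.

```python
# Candidate Boolean update functions for a 3-gene toy network.
funcs = [
    lambda s: s[1] and not s[2],   # gene 0: activated by gene 1, repressed by gene 2
    lambda s: s[0],                # gene 1: follows gene 0
    lambda s: s[0] or s[1],        # gene 2: OR of genes 0 and 1
]

def fit_cost(series):
    """Total gene-state disagreements between predictions and observations."""
    cost = 0
    for s, s_next in zip(series, series[1:]):
        pred = tuple(int(bool(f(s))) for f in funcs)
        cost += sum(p != o for p, o in zip(pred, s_next))
    return cost

# A short, slightly noisy observed trajectory (tuples are gene states 0/1).
observed = [(1, 0, 0), (0, 1, 1), (0, 0, 1), (0, 1, 0)]
print("mismatched entries:", fit_cost(observed))   # -> 1 with this toy data
```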
2312.14342
Luka Maisuradze
Luka Maisuradze, Megan C. King, Ivan V. Surovtsev, Simon G. J. Mochrie, Mark D. Shattuck, Corey S. O'Hern
Identifying topologically associating domains using differential kernels
23 pages, 10 figures
PLoS Computational Biology 20 (2024) e1012221
10.1371/journal.pcbi.1012221
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Chromatin is a polymer complex of DNA and proteins that regulates gene expression. The three-dimensional structure and organization of chromatin controls DNA transcription and replication. High-throughput chromatin conformation capture techniques generate Hi-C maps that can provide insight into the 3D structure of chromatin. Hi-C maps can be represented as a symmetric matrix where each element represents the average contact probability or number of contacts between two chromatin loci. Previous studies have detected topologically associating domains (TADs), or self-interacting regions in Hi-C maps within which the contact probability is greater than that outside the region. Many algorithms have been developed to identify TADs within Hi-C maps. However, most TAD identification algorithms are unable to identify nested or overlapping TADs and for a given Hi-C map there is significant variation in the location and number of TADs identified by different methods. We develop a novel method, KerTAD, using a kernel-based technique from computer vision and image processing that is able to accurately identify nested and overlapping TADs. We benchmark this method against state-of-the-art TAD identification methods on both synthetic and experimental data sets. We find that KerTAD consistently has higher true positive rates (TPR) and lower false discovery rates (FDR) than all tested methods for both synthetic and manually annotated experimental Hi-C maps. The TPR for KerTAD is also largely insensitive to increasing noise and sparsity, in contrast to the other methods. We also find that KerTAD is consistent in the number and size of TADs identified across replicate experimental Hi-C maps for several organisms. KerTAD will improve automated TAD identification and enable researchers to better correlate changes in TADs to biological phenomena, such as enhancer-promoter interactions and disease states.
[ { "created": "Fri, 22 Dec 2023 00:19:13 GMT", "version": "v1" } ]
2024-07-23
[ [ "Maisuradze", "Luka", "" ], [ "King", "Megan C.", "" ], [ "Surovtsev", "Ivan V.", "" ], [ "Mochrie", "Simon G. J.", "" ], [ "Shattuck", "Mark D.", "" ], [ "O'Hern", "Corey S.", "" ] ]
Chromatin is a polymer complex of DNA and proteins that regulates gene expression. The three-dimensional structure and organization of chromatin controls DNA transcription and replication. High-throughput chromatin conformation capture techniques generate Hi-C maps that can provide insight into the 3D structure of chromatin. Hi-C maps can be represented as a symmetric matrix where each element represents the average contact probability or number of contacts between two chromatin loci. Previous studies have detected topologically associating domains (TADs), or self-interacting regions in Hi-C maps within which the contact probability is greater than that outside the region. Many algorithms have been developed to identify TADs within Hi-C maps. However, most TAD identification algorithms are unable to identify nested or overlapping TADs and for a given Hi-C map there is significant variation in the location and number of TADs identified by different methods. We develop a novel method, KerTAD, using a kernel-based technique from computer vision and image processing that is able to accurately identify nested and overlapping TADs. We benchmark this method against state-of-the-art TAD identification methods on both synthetic and experimental data sets. We find that KerTAD consistently has higher true positive rates (TPR) and lower false discovery rates (FDR) than all tested methods for both synthetic and manually annotated experimental Hi-C maps. The TPR for KerTAD is also largely insensitive to increasing noise and sparsity, in contrast to the other methods. We also find that KerTAD is consistent in the number and size of TADs identified across replicate experimental Hi-C maps for several organisms. KerTAD will improve automated TAD identification and enable researchers to better correlate changes in TADs to biological phenomena, such as enhancer-promoter interactions and disease states.
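A hedged sketch of kernel-based boundary scoring in the spirit of a differential kernel: at each diagonal position of a Hi-C matrix, contrast the within-block contact density on either side against the cross-block density. This illustrates the idea only; KerTAD's actual kernels and nested-TAD logic are more elaborate.

```python
import numpy as np

# Insulation-style differential score: high where position i separates two
# dense blocks with few contacts spanning i.
def boundary_scores(hic, w=5):
    n = len(hic)
    scores = np.zeros(n)
    for i in range(w, n - w):
        left = hic[i - w:i, i - w:i].mean()     # upstream block
        right = hic[i:i + w, i:i + w].mean()    # downstream block
        cross = hic[i - w:i, i:i + w].mean()    # contacts spanning i
        scores[i] = (left + right) / 2 - cross
    return scores

# Toy contact map with two square domains meeting at bin 20.
hic = np.full((40, 40), 0.1)
hic[:20, :20] += 1.0
hic[20:, 20:] += 1.0
print("strongest boundary at bin:", boundary_scores(hic).argmax())
```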
0804.4714
Thimo Rohlf
Thimo Rohlf and Chris Winkler
Network Structure and Dynamics, and Emergence of Robustness by Stabilizing Selection in an Artificial Genome
7 pages, 7 figures. Submitted to the "8th German Workshop on Artificial Life (GWAL 8)"
null
null
null
q-bio.MN cond-mat.dis-nn cs.NE q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genetic regulation is a key component in development, but a clear understanding of the structure and dynamics of genetic networks is not yet at hand. In this work we investigate these properties within an artificial genome model originally introduced by Reil. We analyze statistical properties of randomly generated genomes on both the sequence and network levels, and show that this model correctly predicts the frequency of genes in genomes as found in experimental data. Using an evolutionary algorithm based on stabilizing selection for a phenotype, we show that robustness against single base mutations, as well as against random changes in initial network states that mimic stochastic fluctuations in environmental conditions, can emerge in parallel. Evolved genomes exhibit characteristic patterns on both the sequence and network levels.
[ { "created": "Wed, 30 Apr 2008 01:05:33 GMT", "version": "v1" } ]
2008-05-01
[ [ "Rohlf", "Thimo", "" ], [ "Winkler", "Chris", "" ] ]
Genetic regulation is a key component in development, but a clear understanding of the structure and dynamics of genetic networks is not yet at hand. In this work we investigate these properties within an artificial genome model originally introduced by Reil. We analyze statistical properties of randomly generated genomes on both the sequence and network levels, and show that this model correctly predicts the frequency of genes in genomes as found in experimental data. Using an evolutionary algorithm based on stabilizing selection for a phenotype, we show that robustness against single base mutations, as well as against random changes in initial network states that mimic stochastic fluctuations in environmental conditions, can emerge in parallel. Evolved genomes exhibit characteristic patterns on both the sequence and network levels.
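A sketch of gene detection in a Reil-style artificial genome, with details hedged: the promoter pattern, gene length, and increment-by-one product rule below follow one common description of the model and may differ from the exact variant used here.

```python
import random

BASES, PROMOTER, GENE_LEN = "0123", "0101", 6   # assumed model conventions
random.seed(3)
genome = "".join(random.choice(BASES) for _ in range(5000))

# A gene is the GENE_LEN digits that follow each promoter occurrence.
genes, i = [], genome.find(PROMOTER)
while i != -1:
    start = i + len(PROMOTER)
    if start + GENE_LEN <= len(genome):
        genes.append(genome[start:start + GENE_LEN])
    i = genome.find(PROMOTER, i + 1)

# Gene product: each digit incremented by 1 modulo 4; a product can bind
# wherever its sequence occurs elsewhere in the genome.
products = ["".join(str((int(c) + 1) % 4) for c in g) for g in genes]
sites = [sum(genome[k:k + GENE_LEN] == p for k in range(len(genome) - GENE_LEN))
         for p in products]
print(f"{len(genes)} genes detected; binding-site counts: {sites[:8]} ...")
# With a 4-letter promoter, roughly len(genome)/4**4 genes are expected,
# matching the kind of gene-frequency prediction tested in the paper.
```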
q-bio/0509029
Hyuk Kang
H. Kang, J. Jo, H.J. Kim, M.Y. Choi, S.W. Rhee, and D.S. Koh
Glucose metabolism and oscillatory behavior of pancreatic islets
11 pages, 16 figures
null
10.1103/PhysRevE.72.051905
null
q-bio.CB cond-mat.other q-bio.QM
null
A variety of oscillations are observed in pancreatic islets. We establish a model, incorporating two oscillatory systems of different time scales: one is the well-known bursting model in pancreatic beta-cells and the other is the glucose-insulin feedback model which considers direct and indirect feedback of secreted insulin. These two are coupled to interact with each other in the combined model, and two basic assumptions are made on the basis of biological observations: the conductance g_{K(ATP)} for the ATP-dependent potassium current is a decreasing function of the glucose concentration, whereas the insulin secretion rate is given by a function of the intracellular calcium concentration. Obtained via extensive numerical simulations are complex oscillations including clusters of bursts, slow and fast calcium oscillations, and so on. We also consider how the intracellular glucose concentration depends upon the extracellular glucose concentration, and examine the inhibitory effects of insulin.
[ { "created": "Fri, 23 Sep 2005 07:44:20 GMT", "version": "v1" } ]
2009-11-11
[ [ "Kang", "H.", "" ], [ "Jo", "J.", "" ], [ "Kim", "H. J.", "" ], [ "Choi", "M. Y.", "" ], [ "Rhee", "S. W.", "" ], [ "Koh", "D. S.", "" ] ]
A variety of oscillations are observed in pancreatic islets. We establish a model, incorporating two oscillatory systems of different time scales: one is the well-known bursting model in pancreatic beta-cells and the other is the glucose-insulin feedback model which considers direct and indirect feedback of secreted insulin. These two are coupled to interact with each other in the combined model, and two basic assumptions are made on the basis of biological observations: the conductance g_{K(ATP)} for the ATP-dependent potassium current is a decreasing function of the glucose concentration, whereas the insulin secretion rate is given by a function of the intracellular calcium concentration. Obtained via extensive numerical simulations are complex oscillations including clusters of bursts, slow and fast calcium oscillations, and so on. We also consider how the intracellular glucose concentration depends upon the extracellular glucose concentration, and examine the inhibitory effects of insulin.
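The two stated assumptions translate directly into simple functional forms; the sketch below uses Hill-type functions as one plausible choice (the paper's exact forms and constants may differ).

```python
# Sketch of the two model assumptions stated in the abstract (functional
# forms and constants here are illustrative, not the paper's):
#   (1) the K(ATP) conductance decreases with glucose concentration;
#   (2) the insulin secretion rate is a function of intracellular calcium.
def g_katp(glucose, g_max=0.5, g_half=8.0, n=2):
    return g_max * g_half**n / (g_half**n + glucose**n)   # decreasing Hill form

def insulin_rate(calcium, v_max=1.0, k_half=0.3, n=4):
    return v_max * calcium**n / (k_half**n + calcium**n)  # sigmoidal in Ca2+

for glc in (2.0, 5.0, 8.0, 15.0):   # mM
    print(f"glucose {glc:4.1f} mM -> g_KATP {g_katp(glc):.3f}")
for ca in (0.1, 0.3, 0.6):          # uM
    print(f"Ca {ca:.1f} uM -> secretion rate {insulin_rate(ca):.3f}")
```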
q-bio/0610050
Mauricio Barahona
Martin Hemberg and Mauricio Barahona
Perfect Sampling of the Master Equation for Gene Regulatory Networks
Minor rewriting; final version to be published in Biophysical Journal
null
10.1529/biophysj.106.099390
null
q-bio.QM q-bio.GN
null
We present a Perfect Sampling algorithm that can be applied to the Master Equation of Gene Regulatory Networks (GRNs). The method recasts Gillespie's Stochastic Simulation Algorithm (SSA) in the light of Markov Chain Monte Carlo methods and combines it with the Dominated Coupling From The Past (DCFTP) algorithm to provide guaranteed sampling from the stationary distribution. We show how the DCFTP-SSA can be generically applied to genetic networks with feedback formed by the interconnection of linear enzymatic reactions and nonlinear Monod- and Hill-type elements. We establish rigorous bounds on the error and convergence of the DCFTP-SSA, as compared to the standard SSA, through a set of increasingly complex examples. Once the building blocks for GRNs have been introduced, the algorithm is applied to study properly averaged dynamic properties of two experimentally relevant genetic networks: the toggle switch, a two-dimensional bistable system, and the repressilator, a six-dimensional genetic oscillator.
[ { "created": "Fri, 27 Oct 2006 17:48:44 GMT", "version": "v1" }, { "created": "Tue, 10 Apr 2007 15:09:09 GMT", "version": "v2" } ]
2009-11-13
[ [ "Hemberg", "Martin", "" ], [ "Barahona", "Mauricio", "" ] ]
We present a Perfect Sampling algorithm that can be applied to the Master Equation of Gene Regulatory Networks (GRNs). The method recasts Gillespie's Stochastic Simulation Algorithm (SSA) in the light of Markov Chain Monte Carlo methods and combines it with the Dominated Coupling From The Past (DCFTP) algorithm to provide guaranteed sampling from the stationary distribution. We show how the DCFTP-SSA can be generically applied to genetic networks with feedback formed by the interconnection of linear enzymatic reactions and nonlinear Monod- and Hill-type elements. We establish rigorous bounds on the error and convergence of the DCFTP-SSA, as compared to the standard SSA, through a set of increasingly complex examples. Once the building blocks for GRNs have been introduced, the algorithm is applied to study properly averaged dynamic properties of two experimentally relevant genetic networks: the toggle switch, a two-dimensional bistable system, and the repressilator, a six-dimensional genetic oscillator.
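The core of DCFTP is coupling from the past; the sketch below shows monotone CFTP on a bounded birth-death chain, where coupled extremal copies driven by shared randomness coalesce to an exact stationary sample. A toy chain stands in for the dominated GRN construction of the paper.

```python
import random

# Monotone coupling-from-the-past (CFTP) on a bounded birth-death chain:
# run copies from the minimal and maximal states starting at time -T with
# shared randomness; if they coalesce by time 0, the common state is an
# exact draw from the stationary distribution.
N, p_up = 20, 0.45   # states 0..N; probability of an up-move

def step(x, u):
    """Monotone update: the same uniform u drives every copy."""
    return min(x + 1, N) if u < p_up else max(x - 1, 0)

def cftp(seed=0):
    random.seed(seed)
    T, noise = 1, []
    while True:
        # extend further into the past, keeping the old randomness in place
        noise = [random.random() for _ in range(T - len(noise))] + noise
        lo, hi = 0, N
        for u in noise:          # evolve from time -T up to time 0
            lo, hi = step(lo, u), step(hi, u)
        if lo == hi:             # coalescence => exact stationary sample
            return lo
        T *= 2

print("exact stationary samples:", [cftp(s) for s in range(5)])
```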
1910.01724
Yiyin Zhou
Aurel A. Lazar and Nikul H. Ukani and Yiyin Zhou
Sparse Identification of Contrast Gain Control in the Fruit Fly Photoreceptor and Amacrine Cell Layer
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The fruit fly's natural visual environment is often characterized by light intensities ranging across several orders of magnitude and by rapidly varying contrast across space and time. Fruit fly photoreceptors robustly transduce and, in conjunction with amacrine cells, process visual scenes and provide the resulting signal to downstream targets. Here we model the first step of visual processing in the photoreceptor-amacrine cell layer. We propose a novel divisive normalization processor (DNP) for modeling the computation taking place in the photoreceptor-amacrine cell layer. The DNP explicitly models the photoreceptor feedforward and temporal feedback processing paths and the spatio-temporal feedback path of the amacrine cells. We then formally characterize the contrast gain control of the DNP and provide sparse identification algorithms that can efficiently identify each of the feedforward and feedback DNP components. The algorithms presented here are the first demonstration of tractable and robust identification of the components of a divisive normalization processor. The sparse identification algorithms can be readily employed in experimental settings, and their effectiveness is demonstrated with several examples.
[ { "created": "Thu, 3 Oct 2019 21:20:05 GMT", "version": "v1" } ]
2019-10-07
[ [ "Lazar", "Aurel A.", "" ], [ "Ukani", "Nikul H.", "" ], [ "Zhou", "Yiyin", "" ] ]
The fruit fly's natural visual environment is often characterized by light intensities ranging across several orders of magnitude and by rapidly varying contrast across space and time. Fruit fly photoreceptors robustly transduce and, in conjunction with amacrine cells, process visual scenes and provide the resulting signal to downstream targets. Here we model the first step of visual processing in the photoreceptor-amacrine cell layer. We propose a novel divisive normalization processor (DNP) for modeling the computation taking place in the photoreceptor-amacrine cell layer. The DNP explicitly models the photoreceptor feedforward and temporal feedback processing paths and the spatio-temporal feedback path of the amacrine cells. We then formally characterize the contrast gain control of the DNP and provide sparse identification algorithms that can efficiently identify each of the feedforward and feedback DNP components. The algorithms presented here are the first demonstration of tractable and robust identification of the components of a divisive normalization processor. The sparse identification algorithms can be readily employed in experimental settings, and their effectiveness is demonstrated with several examples.
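A minimal sketch of a generic divisive normalization stage, the operation the DNP is built around; the exponent, semi-saturation constant, and uniform pooling weights are assumptions, not the paper's model.

import numpy as np

def divisive_normalization(x, n=2.0, sigma=0.1, weights=None):
    # Each unit's driven response x^n is divided by a weighted pool of all x^n.
    x = np.asarray(x, dtype=float)
    w = np.ones_like(x) if weights is None else np.asarray(weights, dtype=float)
    return x ** n / (sigma ** n + w @ x ** n)

print(divisive_normalization([0.1, 0.5, 1.0]))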
2111.15100
Yin Ni
Yin Ni, Fupeng Sun, Yihao Luo, Zhengrui Xiang, Huafei Sun
A Novel Heart Disease Classification Algorithm based on Fourier Transform and Persistent Homology
null
null
null
null
q-bio.QM math.GN
http://creativecommons.org/licenses/by-nc-sa/4.0/
Classification and prediction of heart disease is a significant problem for medical treatment and life protection. In this paper, persistent homology is employed to analyze electrocardiograms, and a novel heart disease classification method is proposed. Each electrocardiogram becomes a point cloud via sliding windows and a fast Fourier transform embedding. The obtained point cloud reveals periodicity and stability characteristics of electrocardiograms. By persistent homology, three topological features are extracted: normalized persistent entropy, maximum lifetime, and the maximum life of the Betti number. These topological features show the structural differences between different types of electrocardiograms and display encouraging potential for the classification of heart disease.
[ { "created": "Tue, 30 Nov 2021 03:38:03 GMT", "version": "v1" } ]
2021-12-01
[ [ "Ni", "Yin", "" ], [ "Sun", "Fupeng", "" ], [ "Luo", "Yihao", "" ], [ "Xiang", "Zhengrui", "" ], [ "Sun", "Huafei", "" ] ]
Classification and prediction of heart disease is a significant problem for medical treatment and life protection. In this paper, persistent homology is employed to analyze electrocardiograms, and a novel heart disease classification method is proposed. Each electrocardiogram becomes a point cloud via sliding windows and a fast Fourier transform embedding. The obtained point cloud reveals periodicity and stability characteristics of electrocardiograms. By persistent homology, three topological features are extracted: normalized persistent entropy, maximum lifetime, and the maximum life of the Betti number. These topological features show the structural differences between different types of electrocardiograms and display encouraging potential for the classification of heart disease.
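A minimal sketch of the sliding-window/FFT embedding step described here; the window length, step size, and use of spectral magnitudes as coordinates are assumptions. The resulting point cloud could then be fed to any persistent-homology library to extract the barcode features.

import numpy as np

def ecg_to_point_cloud(signal, win=128, step=16):
    # Each sliding window becomes one point whose coordinates are FFT magnitudes.
    pts = [np.abs(np.fft.rfft(signal[s:s + win]))
           for s in range(0, len(signal) - win + 1, step)]
    return np.array(pts)

cloud = ecg_to_point_cloud(np.sin(np.linspace(0, 40 * np.pi, 2000)))
print(cloud.shape)    # (n_windows, win // 2 + 1)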
1705.06354
Yiwei Li
Yiwei Li, Osman Kahraman, Christoph A. Haselwandter
Stochastic lattice model of synaptic membrane protein domains
null
Phys. Rev. E 95, 052406 (2017)
10.1103/PhysRevE.95.052406
null
q-bio.SC physics.bio-ph q-bio.BM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neurotransmitter receptor molecules, concentrated in synaptic membrane domains along with scaffolds and other kinds of proteins, are crucial for signal transmission across chemical synapses. In common with other membrane protein domains, synaptic domains are characterized by low protein copy numbers and protein crowding, with rapid stochastic turnover of individual molecules. We study here in detail a stochastic lattice model of the receptor-scaffold reaction-diffusion dynamics at synaptic domains that was found previously to capture, at the mean-field level, the self-assembly, stability, and characteristic size of synaptic domains observed in experiments. We show that our stochastic lattice model yields quantitative agreement with mean-field models of nonlinear diffusion in crowded membranes. Through a combination of analytic and numerical solutions of the master equation governing the reaction dynamics at synaptic domains, together with kinetic Monte Carlo simulations, we find substantial discrepancies between mean-field and stochastic models for the reaction dynamics at synaptic domains. Based on the reaction and diffusion properties of synaptic receptors and scaffolds suggested by previous experiments and mean-field calculations, we show that the stochastic reaction-diffusion dynamics of synaptic receptors and scaffolds provide a simple physical mechanism for collective fluctuations in synaptic domains, the molecular turnover observed at synaptic domains, key features of the observed single-molecule trajectories, and spatial heterogeneity in the effective rates at which receptors and scaffolds are recycled at the cell membrane. Our work sheds light on the physical mechanisms and principles linking the collective properties of membrane protein domains to the stochastic dynamics that rule their molecular components.
[ { "created": "Wed, 17 May 2017 21:33:45 GMT", "version": "v1" } ]
2017-05-19
[ [ "Li", "Yiwei", "" ], [ "Kahraman", "Osman", "" ], [ "Haselwandter", "Christoph A.", "" ] ]
Neurotransmitter receptor molecules, concentrated in synaptic membrane domains along with scaffolds and other kinds of proteins, are crucial for signal transmission across chemical synapses. In common with other membrane protein domains, synaptic domains are characterized by low protein copy numbers and protein crowding, with rapid stochastic turnover of individual molecules. We study here in detail a stochastic lattice model of the receptor-scaffold reaction-diffusion dynamics at synaptic domains that was found previously to capture, at the mean-field level, the self-assembly, stability, and characteristic size of synaptic domains observed in experiments. We show that our stochastic lattice model yields quantitative agreement with mean-field models of nonlinear diffusion in crowded membranes. Through a combination of analytic and numerical solutions of the master equation governing the reaction dynamics at synaptic domains, together with kinetic Monte Carlo simulations, we find substantial discrepancies between mean-field and stochastic models for the reaction dynamics at synaptic domains. Based on the reaction and diffusion properties of synaptic receptors and scaffolds suggested by previous experiments and mean-field calculations, we show that the stochastic reaction-diffusion dynamics of synaptic receptors and scaffolds provide a simple physical mechanism for collective fluctuations in synaptic domains, the molecular turnover observed at synaptic domains, key features of the observed single-molecule trajectories, and spatial heterogeneity in the effective rates at which receptors and scaffolds are recycled at the cell membrane. Our work sheds light on the physical mechanisms and principles linking the collective properties of membrane protein domains to the stochastic dynamics that rule their molecular components.
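For readers unfamiliar with kinetic Monte Carlo, here is a minimal sketch of a continuous-time simulation for a single-species lattice with binding and unbinding; the lattice size, rates, and move set are illustrative and far simpler than the receptor-scaffold dynamics studied in the paper.

import numpy as np

def kmc_binding(L=32, k_on=0.5, k_off=0.2, steps=10000, seed=0):
    rng = np.random.default_rng(seed)
    occ = np.zeros(L * L, dtype=bool)                     # occupancy of an L x L lattice
    t = 0.0
    for _ in range(steps):
        n_occ = int(occ.sum())
        rates = (k_on * (L * L - n_occ), k_off * n_occ)   # total binding / unbinding rates
        total = rates[0] + rates[1]
        t += rng.exponential(1.0 / total)                 # continuous-time step
        bind = rng.random() < rates[0] / total
        i = rng.integers(L * L)
        while occ[i] == bind:                             # resample until a valid site is found
            i = rng.integers(L * L)
        occ[i] = bind
    return t, occ.reshape(L, L)

t, occ = kmc_binding()
print(t, occ.mean())   # elapsed time and coverage (approaches k_on / (k_on + k_off))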
1912.05903
Weikaixin Kong
Weikaixin Kong, Xinyu Tu, Zhengwei Xie and Zhuo Huang
Prediction and optimization of NaV1.7 inhibitors based on machine learning methods
The evaluation of the model in the results section of this article is not comprehensive enough. We will carry out further work. The article needs to be polished. There are certain disadvantages to the molecular optimization method. The discussion is not deep enough, so withdrawal is needed.
null
null
null
q-bio.QM cs.LG q-bio.BM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We used machine learning methods to predict NaV1.7 inhibitors and found that the model RF-CDK performed best on the imbalanced dataset. Using the RF-CDK model for drug screening, we obtained the effective compound K1, which we verified with the cell patch-clamp method. However, because the model evaluation in this article is not comprehensive enough, there is still much research work to be performed, such as comparison with other existing methods. The target protein has multiple active sites and requires further research. We need more detailed models that consider this biological process and compare them with the current results; this is a shortcoming of this article, so we wish to withdraw it.
[ { "created": "Fri, 29 Nov 2019 16:56:03 GMT", "version": "v1" }, { "created": "Sat, 15 Feb 2020 10:01:40 GMT", "version": "v2" } ]
2020-02-18
[ [ "Kong", "Weikaixin", "" ], [ "Tu", "Xinyu", "" ], [ "Xie", "Zhengwei", "" ], [ "Huang", "Zhuo", "" ] ]
We used machine learning methods to predict NaV1.7 inhibitors and found that the model RF-CDK performed best on the imbalanced dataset. Using the RF-CDK model for drug screening, we obtained the effective compound K1, which we verified with the cell patch-clamp method. However, because the model evaluation in this article is not comprehensive enough, there is still much research work to be performed, such as comparison with other existing methods. The target protein has multiple active sites and requires further research. We need more detailed models that consider this biological process and compare them with the current results; this is a shortcoming of this article, so we wish to withdraw it.
0710.2342
Ruben Moreno Bote
Ruben Moreno-Bote, Alfonso Renart, Nestor Parga
Theory of input spike auto- and cross-correlations and their effect on the response of spiking neurons
null
null
null
null
q-bio.NC physics.bio-ph
null
Spike correlations between neurons are ubiquitous in the cortex, but their role is at present not understood. Here we describe the firing response of a leaky integrate-and-fire (LIF) neuron when it receives a temporally correlated input generated by presynaptic correlated neuronal populations. Input correlations are characterized in terms of the firing rates, Fano factors, correlation coefficients and correlation timescale of the neurons driving the target neuron. We show that the sum of the presynaptic spike trains cannot be well described by a Poisson process. Solutions of the output firing rate are found in the limit of short and long correlation time scales.
[ { "created": "Thu, 11 Oct 2007 20:28:44 GMT", "version": "v1" } ]
2007-10-15
[ [ "Moreno-Bote", "Ruben", "" ], [ "Renart", "Alfonso", "" ], [ "Parga", "Nestor", "" ] ]
Spike correlations between neurons are ubiquitous in the cortex, but their role is at present not understood. Here we describe the firing response of a leaky integrate-and-fire (LIF) neuron when it receives a temporally correlated input generated by presynaptic correlated neuronal populations. Input correlations are characterized in terms of the firing rates, Fano factors, correlation coefficients and correlation timescale of the neurons driving the target neuron. We show that the sum of the presynaptic spike trains cannot be well described by a Poisson process. Solutions of the output firing rate are found in the limit of short and long correlation time scales.
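A minimal Euler-Maruyama sketch of an LIF neuron driven by a temporally correlated (Ornstein-Uhlenbeck) current, the setting the abstract analyzes; all parameter values are illustrative assumptions.

import numpy as np

def lif_ou(t_end=1.0, dt=1e-4, tau_m=0.02, v_th=1.0, v_reset=0.0,
           mu=1.2, sigma=0.5, tau_c=0.01, seed=0):
    rng = np.random.default_rng(seed)
    v, s, spikes = 0.0, 0.0, []
    for k in range(int(t_end / dt)):
        # Ornstein-Uhlenbeck input current with correlation time tau_c.
        s += -s * dt / tau_c + sigma * np.sqrt(2 * dt / tau_c) * rng.standard_normal()
        v += (mu + s - v) * dt / tau_m          # leaky integration of the input
        if v >= v_th:                           # threshold crossing: spike and reset
            spikes.append(k * dt)
            v = v_reset
    return spikes

print(len(lif_ou()), "spikes in 1 s (toy)")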
2209.01164
Alexander McGhee
Alexander J McGhee, Eric O McGhee, Jack E Famiglietti, W. Gregory Sawyer
In Situ 3D Spatiotemporal Measurement of Soluble Biomarkers in Organoid Culture
8 pages of text with 3 figures, 3 pages of appendix, 2 pages of references; manuscript submitted to In Vitro Models
null
null
null
q-bio.QM q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Advanced cell culture techniques such as 3D bioprinting and hydrogel-based cell-embedding techniques harbor many new and exciting opportunities to study cells in environments that closely recapitulate in vivo conditions. Researchers often study these environments using fluorescence microscopy to visualize the association of proteins with objects such as cells within the 3D environment, yet quantification of concentration profiles in the microenvironment has remained elusive. Here, we present a method to continuously measure the time-dependent concentration gradient of various biomarkers within a 3D cell culture assay, using bead-based immunoassays to sequester and concentrate the fluorescence intensity of these tagged proteins. This assay allows for near real-time in situ biomarker detection and enables spatiotemporal quantification of biomarker concentration. Snapshots of concentration profiles can be taken, or time-series analysis can be performed, enabling estimation of time-varying biomarker production. Example assays utilize an osteosarcoma tumoroid as a case study for a quantitative single-plexed gel-encapsulated assay, and a qualitative multi-plexed 3D-bioprinted assay. In both cases, a time-varying cytokine concentration gradient is measured. An estimate of the per-cell production rate of the IL-8 cytokine results from fitting an analytical function for continuous point-source diffusion to the measured concentration gradient, and reveals that each cell produces approximately two IL-8 cytokines per second. Proper calibration and use of this assay is exhaustively explored for the case of diffusion-limited Langmuir kinetics of a spherical adsorber.
[ { "created": "Fri, 2 Sep 2022 16:49:06 GMT", "version": "v1" }, { "created": "Tue, 6 Sep 2022 00:17:50 GMT", "version": "v2" }, { "created": "Wed, 7 Sep 2022 02:16:35 GMT", "version": "v3" } ]
2022-09-08
[ [ "McGhee", "Alexander J", "" ], [ "McGhee", "Eric O", "" ], [ "Famiglietti", "Jack E", "" ], [ "Sawyer", "W. Gregory", "" ] ]
Advanced cell culture techniques such as 3D bioprinting and hydrogel-based cell-embedding techniques harbor many new and exciting opportunities to study cells in environments that closely recapitulate in vivo conditions. Researchers often study these environments using fluorescence microscopy to visualize the association of proteins with objects such as cells within the 3D environment, yet quantification of concentration profiles in the microenvironment has remained elusive. Here, we present a method to continuously measure the time-dependent concentration gradient of various biomarkers within a 3D cell culture assay, using bead-based immunoassays to sequester and concentrate the fluorescence intensity of these tagged proteins. This assay allows for near real-time in situ biomarker detection and enables spatiotemporal quantification of biomarker concentration. Snapshots of concentration profiles can be taken, or time-series analysis can be performed, enabling estimation of time-varying biomarker production. Example assays utilize an osteosarcoma tumoroid as a case study for a quantitative single-plexed gel-encapsulated assay, and a qualitative multi-plexed 3D-bioprinted assay. In both cases, a time-varying cytokine concentration gradient is measured. An estimate of the per-cell production rate of the IL-8 cytokine results from fitting an analytical function for continuous point-source diffusion to the measured concentration gradient, and reveals that each cell produces approximately two IL-8 cytokines per second. Proper calibration and use of this assay is exhaustively explored for the case of diffusion-limited Langmuir kinetics of a spherical adsorber.
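A minimal sketch of the continuous point-source diffusion solution that the abstract fits, together with its inversion for the per-cell production rate q; the diffusivity, distance, time, and measured concentration below are made-up numbers.

import numpy as np
from scipy.special import erfc

def point_source_conc(q, D, r, t):
    # Concentration at distance r, time t, from a point emitting q molecules/s since t = 0.
    return q / (4 * np.pi * D * r) * erfc(r / (2 * np.sqrt(D * t)))

# Invert for q given a measured concentration at one (r, t) -- toy numbers throughout.
D, r, t, c_meas = 1e-10, 50e-6, 3600.0, 2.0e12
q_est = c_meas * 4 * np.pi * D * r / erfc(r / (2 * np.sqrt(D * t)))
print(q_est, "molecules/s (toy)")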
q-bio/0505033
Roeland M.H. Merks
Roeland M. H. Merks, Erica D. Perryn, Abbas Shirinifard, and James A. Glazier
Contact-inhibited chemotaxis in de novo and sprouting blood-vessel growth
Thoroughly revised version, now in press in PLoS Computational Biology. 53 pages, 13 figures, 2 supporting figures, 56 supporting movies, source code and parameters files for computer simulations provided. Supporting information: http://www.psb.ugent.be/~romer/ploscompbiol/ Source code: http://sourceforge.net/projects/tst/
PLoS Computational Biology 2008, 4(9): e1000163
10.1371/journal.pcbi.1000163
null
q-bio.TO q-bio.CB
null
Blood vessels form either when dispersed endothelial cells (the cells lining the inner walls of fully-formed blood vessels) organize into a vessel network (vasculogenesis), or by sprouting or splitting of existing blood vessels (angiogenesis). Although they are closely related biologically, no current model explains both phenomena with a single biophysical mechanism. Most computational models describe sprouting at the level of the blood vessel, ignoring how cell behavior drives branch splitting during sprouting. We present a cell-based, Glazier-Graner-Hogeweg-model simulation of the initial patterning before the vascular cords form lumens, based on plausible behaviors of endothelial cells. The endothelial cells secrete a chemoattractant, which attracts other endothelial cells. As in the classic Keller-Segel model, chemotaxis by itself causes cells to aggregate into isolated clusters. However, including experimentally-observed adhesion-driven contact inhibition of chemotaxis in the simulation causes randomly-distributed cells to organize into networks and cell aggregates to sprout, reproducing aspects of both de novo and sprouting blood-vessel growth. We discuss two branching instabilities responsible for our results. Cells at the surfaces of cell clusters attempting to migrate to the centers of the clusters produce a buckling instability. In a model variant that eliminates the surface-normal force, a dissipative mechanism drives sprouting, with the secreted chemical acting both as a chemoattractant and as an inhibitor of pseudopod extension. The branching instabilities responsible for our results, which result from contact inhibition of chemotaxis, are both generic developmental mechanisms and interesting examples of unusual patterning instabilities.
[ { "created": "Tue, 17 May 2005 21:50:49 GMT", "version": "v1" }, { "created": "Thu, 27 Jul 2006 14:56:25 GMT", "version": "v2" }, { "created": "Fri, 30 May 2008 18:31:35 GMT", "version": "v3" } ]
2008-09-24
[ [ "Merks", "Roeland M. H.", "" ], [ "Perryn", "Erica D.", "" ], [ "Shirinifard", "Abbas", "" ], [ "Glazier", "James A.", "" ] ]
Blood vessels form either when dispersed endothelial cells (the cells lining the inner walls of fully-formed blood vessels) organize into a vessel network (vasculogenesis), or by sprouting or splitting of existing blood vessels (angiogenesis). Although they are closely related biologically, no current model explains both phenomena with a single biophysical mechanism. Most computational models describe sprouting at the level of the blood vessel, ignoring how cell behavior drives branch splitting during sprouting. We present a cell-based, Glazier-Graner-Hogeweg-model simulation of the initial patterning before the vascular cords form lumens, based on plausible behaviors of endothelial cells. The endothelial cells secrete a chemoattractant, which attracts other endothelial cells. As in the classic Keller-Segel model, chemotaxis by itself causes cells to aggregate into isolated clusters. However, including experimentally-observed adhesion-driven contact inhibition of chemotaxis in the simulation causes randomly-distributed cells to organize into networks and cell aggregates to sprout, reproducing aspects of both de novo and sprouting blood-vessel growth. We discuss two branching instabilities responsible for our results. Cells at the surfaces of cell clusters attempting to migrate to the centers of the clusters produce a buckling instability. In a model variant that eliminates the surface-normal force, a dissipative mechanism drives sprouting, with the secreted chemical acting both as a chemoattractant and as an inhibitor of pseudopod extension. The branching instabilities responsible for our results, which result from contact inhibition of chemotaxis, are both generic developmental mechanisms and interesting examples of unusual patterning instabilities.
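A minimal sketch of how a chemotaxis bias typically enters a copy attempt in a Glazier-Graner-Hogeweg (cellular Potts) model; the chemotaxis strength mu and Metropolis-style acceptance are standard ingredients, but the paper's full energy function is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def accept_copy(dH_adhesion, c_source, c_target, mu=50.0, T=10.0):
    # Chemotaxis lowers the energy of copy attempts that move up the gradient.
    dH = dH_adhesion - mu * (c_target - c_source)
    return dH <= 0 or rng.random() < np.exp(-dH / T)   # Metropolis-style acceptance

print(accept_copy(dH_adhesion=5.0, c_source=0.2, c_target=0.4))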
0710.4127
David R. Bickel
D. R. Bickel, Z. Montazeri, P.-C. Hsieh, M. Beatty, S. J. Lawit, and N. J. Bate
Gene network reconstruction from transcriptional dynamics under kinetic model uncertainty: a case for the second derivative
Errors in figures corrected; new material added; old mathematics condensed. The supplementary file is on the arXiv; see the publication for the main text
D. R. Bickel, Z. Montazeri, P.-C. Hsieh, M. Beatty, S. J. Lawit, and N. J. Bate, Bioinformatics 25, 772-779 (2009)
10.1093/bioinformatics/btp028
null
q-bio.MN q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Measurements of gene expression over time enable the reconstruction of transcriptional networks. However, Bayesian networks and many other current reconstruction methods rely on assumptions that conflict with the differential equations that describe transcriptional kinetics. Practical approximations of kinetic models would enable inferring causal relationships between genes from expression data of microarray, tag-based and conventional platforms, but conclusions are sensitive to the assumptions made. Results: The representation of a sufficiently large portion of the genome enables computation of an upper bound on how much confidence one may place in influences between genes on the basis of expression data. Information about which genes encode transcription factors is not necessary but may be incorporated if available. The methodology is generalized to cover cases in which expression measurements are missing for many of the genes that might control the transcription of the genes of interest. The assumption that the gene expression level is roughly proportional to the rate of translation led to better empirical performance than did either the assumption that the gene expression level is roughly proportional to the protein level or the Bayesian model average of both assumptions. Availability: http://www.oisb.ca points to R code implementing the methods (R Development Core Team 2004). Supplementary information: http://www.davidbickel.com
[ { "created": "Mon, 22 Oct 2007 19:22:33 GMT", "version": "v1" }, { "created": "Thu, 2 Jul 2009 10:37:32 GMT", "version": "v2" } ]
2009-07-02
[ [ "Bickel", "D. R.", "" ], [ "Montazeri", "Z.", "" ], [ "Hsieh", "P. -C.", "" ], [ "Beatty", "M.", "" ], [ "Lawit", "S. J.", "" ], [ "Bate", "N. J.", "" ] ]
Motivation: Measurements of gene expression over time enable the reconstruction of transcriptional networks. However, Bayesian networks and many other current reconstruction methods rely on assumptions that conflict with the differential equations that describe transcriptional kinetics. Practical approximations of kinetic models would enable inferring causal relationships between genes from expression data of microarray, tag-based and conventional platforms, but conclusions are sensitive to the assumptions made. Results: The representation of a sufficiently large portion of the genome enables computation of an upper bound on how much confidence one may place in influences between genes on the basis of expression data. Information about which genes encode transcription factors is not necessary but may be incorporated if available. The methodology is generalized to cover cases in which expression measurements are missing for many of the genes that might control the transcription of the genes of interest. The assumption that the gene expression level is roughly proportional to the rate of translation led to better empirical performance than did either the assumption that the gene expression level is roughly proportional to the protein level or the Bayesian model average of both assumptions. Availability: http://www.oisb.ca points to R code implementing the methods (R Development Core Team 2004). Supplementary information: http://www.davidbickel.com
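The title's emphasis on the second derivative suggests estimating the curvature of expression time courses; a minimal finite-difference sketch follows (the paper's actual estimator and its handling of noise are not reproduced).

import numpy as np

t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])        # sampling times (toy)
x = np.array([0.1, 0.4, 0.9, 1.6, 1.9, 2.0])        # expression of one gene (toy)

dx = np.gradient(x, t)     # first derivative; np.gradient handles uneven spacing
d2x = np.gradient(dx, t)   # crude second derivative by repeated differencing
print(dx)
print(d2x)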
2004.02141
Paolo Paganetti
Martina Sola, Claudia Magrin, Giona Pedrioli, Sandra Pinton, Agnese Salvade, Stephanie Papin, Paolo Paganetti
Tau affects P53 function and cell fate during the DNA damage response
36 pages, 8 figures, 11 supplementary figures
Communications Biology 3, 245 (2020)
10.1038/s42003-020-0975-4
null
q-bio.CB
http://creativecommons.org/licenses/by-nc-sa/4.0/
Cells are constantly exposed to DNA-damaging insults. To protect the organism, cells developed a complex molecular response coordinated by P53, the master regulator of DNA repair, cell division and cell fate. DNA damage accumulation and abnormal cell fate decisions may represent a pathomechanism shared by aging-associated disorders such as cancer and neurodegeneration. Here, we examined this hypothesis in the context of tauopathies, a group of neurodegenerative disorders characterized by Tau protein deposition. For this, the response to acute DNA damage was studied in neuroblastoma cells with depleted Tau, as a model of loss-of-function. Under these conditions, altered P53 stability and activity result in reduced cell death and increased cell senescence. This newly discovered function of Tau involves abnormal modification of P53 and of its E3 ubiquitin ligase MDM2. Considering the medical need with vast social implications caused by neurodegeneration and cancer, our study may reform our approach to disease-modifying therapies.
[ { "created": "Sun, 5 Apr 2020 10:10:32 GMT", "version": "v1" }, { "created": "Wed, 29 Apr 2020 14:25:50 GMT", "version": "v2" } ]
2020-05-27
[ [ "Sola", "Martina", "" ], [ "Magrin", "Claudia", "" ], [ "Pedrioli", "Giona", "" ], [ "Pinton", "Sandra", "" ], [ "Salvade", "Agnese", "" ], [ "Papin", "Stephanie", "" ], [ "Paganetti", "Paolo", "" ] ]
Cells are constantly exposed to DNA-damaging insults. To protect the organism, cells developed a complex molecular response coordinated by P53, the master regulator of DNA repair, cell division and cell fate. DNA damage accumulation and abnormal cell fate decisions may represent a pathomechanism shared by aging-associated disorders such as cancer and neurodegeneration. Here, we examined this hypothesis in the context of tauopathies, a group of neurodegenerative disorders characterized by Tau protein deposition. For this, the response to acute DNA damage was studied in neuroblastoma cells with depleted Tau, as a model of loss-of-function. Under these conditions, altered P53 stability and activity result in reduced cell death and increased cell senescence. This newly discovered function of Tau involves abnormal modification of P53 and of its E3 ubiquitin ligase MDM2. Considering the medical need with vast social implications caused by neurodegeneration and cancer, our study may reform our approach to disease-modifying therapies.
2003.00108
Mufti Mahmud
Mufti Mahmud, M Shamim Kaiser, Amir Hussain
Deep Learning in Mining Biological Data
36 pages, 8 figures, and 6 tables
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent technological advancements in data acquisition tools have allowed life scientists to acquire multimodal data from different biological application domains. Broadly categorized into three types (i.e., sequences, images, and signals), these data are huge in amount and complex in nature. Mining such an enormous amount of data for pattern recognition is a big challenge and requires sophisticated data-intensive machine learning techniques. Artificial neural network-based learning systems are well known for their pattern recognition capabilities, and lately their deep architectures - known as deep learning (DL) - have been successfully applied to solve many complex pattern recognition problems. Highlighting the role of DL in recognizing patterns in biological data, this article provides: applications of DL to biological sequence, image, and signal data; an overview of open-access sources of these data; a description of open-source DL tools applicable to these data; and a comparison of these tools from qualitative and quantitative perspectives. At the end, it outlines some open research challenges in mining biological data and puts forward a number of possible future perspectives.
[ { "created": "Fri, 28 Feb 2020 23:14:27 GMT", "version": "v1" } ]
2020-03-03
[ [ "Mahmud", "Mufti", "" ], [ "Kaiser", "M Shamim", "" ], [ "Hussain", "Amir", "" ] ]
Recent technological advancements in data acquisition tools have allowed life scientists to acquire multimodal data from different biological application domains. Broadly categorized into three types (i.e., sequences, images, and signals), these data are huge in amount and complex in nature. Mining such an enormous amount of data for pattern recognition is a big challenge and requires sophisticated data-intensive machine learning techniques. Artificial neural network-based learning systems are well known for their pattern recognition capabilities, and lately their deep architectures - known as deep learning (DL) - have been successfully applied to solve many complex pattern recognition problems. Highlighting the role of DL in recognizing patterns in biological data, this article provides: applications of DL to biological sequence, image, and signal data; an overview of open-access sources of these data; a description of open-source DL tools applicable to these data; and a comparison of these tools from qualitative and quantitative perspectives. At the end, it outlines some open research challenges in mining biological data and puts forward a number of possible future perspectives.
2102.02457
Julien Arino
Julien Arino
Describing, modelling and forecasting the spatial and temporal spread of COVID-19 -- A short review
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
SARS-CoV-2 started propagating worldwide in January 2020 and has now reached virtually all communities on the planet. This short review provides evidence of this spread and documents modelling efforts undertaken to understand and forecast it, including a short section about the new variants that emerged in late 2020.
[ { "created": "Thu, 4 Feb 2021 07:35:00 GMT", "version": "v1" } ]
2021-02-05
[ [ "Arino", "Julien", "" ] ]
SARS-CoV-2 started propagating worldwide in January 2020 and has now reached virtually all communities on the planet. This short review provides evidence of this spread and documents modelling efforts undertaken to understand and forecast it, including a short section about the new variants that emerged in late 2020.
2107.12336
Fabrice Larribe
Sorana Froda, Fabrice Larribe
Vaccine efficacy: demystifying an epidemiological concept
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper is an introduction to the concept of vaccine efficacy and its practical applications. We provide illustrative examples aimed at a wider audience.
[ { "created": "Thu, 22 Jul 2021 20:06:37 GMT", "version": "v1" } ]
2021-07-27
[ [ "Froda", "Sorana", "" ], [ "Larribe", "Fabrice", "" ] ]
The paper is an introduction to the concept of vaccine efficacy and its practical applications. We provide illustrative examples aimed at a wider audience.
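A worked example of the standard definition of vaccine efficacy, VE = 1 - (attack rate among vaccinated)/(attack rate among unvaccinated); the trial counts below are made up.

cases_vax, n_vax = 10, 1000          # hypothetical vaccinated arm
cases_unvax, n_unvax = 50, 1000      # hypothetical unvaccinated arm

ve = 1 - (cases_vax / n_vax) / (cases_unvax / n_unvax)
print(f"vaccine efficacy = {ve:.0%}")   # 80%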
2110.13189
M. Hamed Mozaffari
M. Hamed Mozaffari and Li-Lin Tay
Spectral unmixing of Raman microscopic images of single human cells using Independent Component Analysis
10 pages, 5 figures
null
null
null
q-bio.QM cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Application of independent component analysis (ICA) as an unmixing and image clustering technique for high spatial resolution Raman maps is reported. A hyperspectral map of a fixed human cell was collected by a Raman microspectrometer in a raster pattern on a 0.5 um grid. Unlike previously used unsupervised machine learning techniques such as principal component analysis, ICA is based on the non-Gaussianity and statistical independence of the data, which is the case for mixed Raman spectra. Hence, ICA is a great candidate for assembling pseudo-colour maps from the spectral hypercube of Raman spectra. Our experimental results revealed that ICA is capable of reconstructing false-colour maps of Raman hyperspectral data of human cells, showing the nuclear-region constituents as well as subcellular organelles in the cytoplasm and the distribution of mitochondria in the perinuclear region. Minimal preprocessing requirements and the label-free nature of the ICA method make it a great unmixing method for the extraction of endmembers in Raman hyperspectral maps of living cells.
[ { "created": "Mon, 25 Oct 2021 18:13:24 GMT", "version": "v1" } ]
2022-01-02
[ [ "Mozaffari", "M. Hamed", "" ], [ "Tay", "Li-Lin", "" ] ]
Application of independent component analysis (ICA) as an unmixing and image clustering technique for high spatial resolution Raman maps is reported. A hyperspectral map of a fixed human cell was collected by a Raman microspectrometer in a raster pattern on a 0.5 um grid. Unlike previously used unsupervised machine learning techniques such as principal component analysis, ICA is based on the non-Gaussianity and statistical independence of the data, which is the case for mixed Raman spectra. Hence, ICA is a great candidate for assembling pseudo-colour maps from the spectral hypercube of Raman spectra. Our experimental results revealed that ICA is capable of reconstructing false-colour maps of Raman hyperspectral data of human cells, showing the nuclear-region constituents as well as subcellular organelles in the cytoplasm and the distribution of mitochondria in the perinuclear region. Minimal preprocessing requirements and the label-free nature of the ICA method make it a great unmixing method for the extraction of endmembers in Raman hyperspectral maps of living cells.
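A minimal sketch of ICA unmixing of a hyperspectral cube with scikit-learn's FastICA; the cube dimensions, component count, and the random stand-in data are assumptions.

import numpy as np
from sklearn.decomposition import FastICA

cube = np.random.rand(64, 64, 700)        # random stand-in for a (y, x, wavenumber) Raman map
X = cube.reshape(-1, cube.shape[-1])      # pixels as samples, spectral channels as features
ica = FastICA(n_components=5, random_state=0, max_iter=500)
scores = ica.fit_transform(X)             # per-pixel abundance of each independent component
maps = scores.reshape(64, 64, -1)         # one pseudo-colour map per component
endmembers = ica.mixing_.T                # estimated component spectra, shape (5, 700)
print(maps.shape, endmembers.shape)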
2103.11461
María Vallet-Regi
J Jimenez-Holguin, S Sanchez-Salcedo, M Vallet-Regi, A J Salinas
Development and evaluation of copper-containing mesoporous bioactive glasses for bone defects therapy
28 pages, 10 figures
Microporous and Mesoporous Materials 308, 110454 (2020)
10.1016/j.micromeso.2020.110454
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Mesoporous bioactive glasses (MBG) are gaining increasing interest in the design of new biomaterials for bone defect treatment. An important research trend to enhance their biological behavior is the inclusion of moderate amounts of oxides with therapeutic action such as CuO. In this paper, copper-containing MBG were synthesized, investigating the influence of the CuO content and of some synthesis parameters on their properties. Two batches were developed: the first using HCl as catalyst and chlorides as CaO and CuO precursors, the second using HNO3 and nitrates. MBG of the chlorides batch exhibited calcium/copper phosphate nanoparticles between 10 and 20 nm. In contrast, CuO-containing MBG of the nitrates batch showed nuclei of metallic copper nanoparticles larger than 50 nm and quicker in vitro bioactive responses. Thus, they were coated by an apatite-like layer after 24 h soaked in simulated body fluid, a remarkably short period for MBG containing up to 5% of CuO. A model, focused on the location of copper in the glass network, was proposed to relate nanostructure and in vitro behaviour. Moreover, after 24 h in MEM or THB culture media, all the MBG released therapeutic amounts of Ca2+ and Cu2+ ions. Because of the quick bioactive response in SBF, the capacity to host biomolecules in their pores and to release therapeutic concentrations of Ca2+ and Cu2+ ions, MBG of the nitrates batch are considered excellent biomaterials for bone regeneration.
[ { "created": "Sun, 21 Mar 2021 18:45:58 GMT", "version": "v1" } ]
2021-03-23
[ [ "Jimenez-Holguin", "J", "" ], [ "Sanchez-Salcedo", "S", "" ], [ "Vallet-Regi", "M", "" ], [ "Salinas", "A J", "" ] ]
Mesoporous bioactive glasses (MBG) are gaining increasing interest in the design of new biomaterials for bone defect treatment. An important research trend to enhance their biological behavior is the inclusion of moderate amounts of oxides with therapeutic action such as CuO. In this paper, copper-containing MBG were synthesized, investigating the influence of the CuO content and of some synthesis parameters on their properties. Two batches were developed: the first using HCl as catalyst and chlorides as CaO and CuO precursors, the second using HNO3 and nitrates. MBG of the chlorides batch exhibited calcium/copper phosphate nanoparticles between 10 and 20 nm. In contrast, CuO-containing MBG of the nitrates batch showed nuclei of metallic copper nanoparticles larger than 50 nm and quicker in vitro bioactive responses. Thus, they were coated by an apatite-like layer after 24 h soaked in simulated body fluid, a remarkably short period for MBG containing up to 5% of CuO. A model, focused on the location of copper in the glass network, was proposed to relate nanostructure and in vitro behaviour. Moreover, after 24 h in MEM or THB culture media, all the MBG released therapeutic amounts of Ca2+ and Cu2+ ions. Because of the quick bioactive response in SBF, the capacity to host biomolecules in their pores and to release therapeutic concentrations of Ca2+ and Cu2+ ions, MBG of the nitrates batch are considered excellent biomaterials for bone regeneration.
1402.0640
Ngoc Hieu Tran
Ngoc Hieu Tran, Xin Chen
Alignment-free comparison of next-generation sequencing data using compression-based distance measures
null
BMC Research Notes, 2014
10.1186/1756-0500-7-320
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Enormous volumes of short reads data from next-generation sequencing (NGS) technologies have posed new challenges to the area of genomic sequence comparison. The multiple sequence alignment approach is hardly applicable to NGS data due to the challenging problem of short read assembly. Thus alignment-free methods need to be developed for the comparison of NGS samples of short reads. Recently, new $k$-mer based distance measures such as {\it CVTree}, $d_{2}^{S}$, {\it co-phylog} have been proposed to address this problem. However, those distances depend considerably on the parameter $k$, and how to choose the optimal $k$ is not trivial since it may depend on different aspects of the sequence data. Hence, in this paper we consider an alternative parameter-free approach: compression-based distance measures. These measures have shown impressive performance on long genome sequences in previous studies, but they have not been tested on NGS short reads. In this study we perform extensive validation and show that the compression-based distances are highly consistent with those distances obtained from the $k$-mer based methods, from the alignment-based approach, and from existing benchmarks in the literature. Moreover, as these measures are parameter-free, no optimization is required and they still perform consistently well on multiple types of sequence data, for different kinds of species and taxonomy levels. The compression-based distance measures are assembly-free, alignment-free, parameter-free, and thus represent useful tools for the comparison of long genome sequences and NGS samples of short reads.
[ { "created": "Tue, 4 Feb 2014 07:09:17 GMT", "version": "v1" } ]
2020-03-25
[ [ "Tran", "Ngoc Hieu", "" ], [ "Chen", "Xin", "" ] ]
Enormous volumes of short reads data from next-generation sequencing (NGS) technologies have posed new challenges to the area of genomic sequence comparison. The multiple sequence alignment approach is hardly applicable to NGS data due to the challenging problem of short read assembly. Thus alignment-free methods need to be developed for the comparison of NGS samples of short reads. Recently, new $k$-mer based distance measures such as {\it CVTree}, $d_{2}^{S}$, {\it co-phylog} have been proposed to address this problem. However, those distances depend considerably on the parameter $k$, and how to choose the optimal $k$ is not trivial since it may depend on different aspects of the sequence data. Hence, in this paper we consider an alternative parameter-free approach: compression-based distance measures. These measures have shown impressive performance on long genome sequences in previous studies, but they have not been tested on NGS short reads. In this study we perform extensive validation and show that the compression-based distances are highly consistent with those distances obtained from the $k$-mer based methods, from the alignment-based approach, and from existing benchmarks in the literature. Moreover, as these measures are parameter-free, no optimization is required and they still perform consistently well on multiple types of sequence data, for different kinds of species and taxonomy levels. The compression-based distance measures are assembly-free, alignment-free, parameter-free, and thus represent useful tools for the comparison of long genome sequences and NGS samples of short reads.
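A minimal sketch of the normalized compression distance (NCD), a representative member of the compression-based family evaluated here, using zlib; a real application would use a stronger compressor on raw read files.

import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: small when x and y share structure.
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

print(ncd(b"ACGT" * 250, b"ACGT" * 250))     # near 0: highly similar sequences
print(ncd(b"ACGT" * 250, b"GATTACA" * 143))  # larger: dissimilar sequences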
1711.03658
Nicholas Battista
Nicholas A. Battista, Leigh B. Pearcy, W. Christopher Strickland
Modeling the prescription opioid epidemic
27 pages, 13 figures. Bull Math Biol (2019)
null
10.1007/s11538-019-00605-0
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Opioid addiction has become a global epidemic and a national health crisis in recent years, with the number of opioid overdose fatalities steadily increasing since the 1990s. In contrast to the dynamics of a typical illicit drug or disease epidemic, opioid addiction has its roots in legal, prescription medication - a fact which greatly increases the exposed population and provides additional drug accessibility for addicts. In this paper, we present a mathematical model for prescription drug addiction and treatment with parameters and validation based on data from the opioid epidemic. Key dynamics considered include addiction through prescription, addiction from illicit sources, and treatment. Through mathematical analysis, we show that no addiction-free equilibrium can exist without stringent control over how opioids are administered and prescribed, effectively transforming the dynamics of the opioid epidemic into those found in a purely illicit drug model. Numerical sensitivity analysis suggests that relatively low states of endemic addiction can be obtained by primarily focusing on medical prevention followed by aggressive treatment of remaining cases - even when the probability of relapse from treatment remains high. Further empirical study focused on understanding the rate of illicit drug dependence versus overdose risk, along with the current and changing rates of opioid prescription and treatment, would shed significant light on optimal control efforts and feasible outcomes for this epidemic and drug epidemics in general.
[ { "created": "Fri, 10 Nov 2017 01:01:06 GMT", "version": "v1" }, { "created": "Sat, 2 Jun 2018 23:44:13 GMT", "version": "v2" } ]
2019-05-03
[ [ "Battista", "Nicholas A.", "" ], [ "Pearcy", "Leigh B.", "" ], [ "Strickland", "W. Christopher", "" ] ]
Opioid addiction has become a global epidemic and a national health crisis in recent years, with the number of opioid overdose fatalities steadily increasing since the 1990s. In contrast to the dynamics of a typical illicit drug or disease epidemic, opioid addiction has its roots in legal, prescription medication - a fact which greatly increases the exposed population and provides additional drug accessibility for addicts. In this paper, we present a mathematical model for prescription drug addiction and treatment with parameters and validation based on data from the opioid epidemic. Key dynamics considered include addiction through prescription, addiction from illicit sources, and treatment. Through mathematical analysis, we show that no addiction-free equilibrium can exist without stringent control over how opioids are administered and prescribed, effectively transforming the dynamics of the opioid epidemic into those found in a purely illicit drug model. Numerical sensitivity analysis suggests that relatively low states of endemic addiction can be obtained by primarily focusing on medical prevention followed by aggressive treatment of remaining cases - even when the probability of relapse from treatment remains high. Further empirical study focused on understanding the rate of illicit drug dependence versus overdose risk, along with the current and changing rates of opioid prescription and treatment, would shed significant light on optimal control efforts and feasible outcomes for this epidemic and drug epidemics in general.
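A minimal compartmental sketch in the spirit of this abstract (susceptible S, prescribed users P, addicted A, in treatment R); the compartment structure and all rates are illustrative assumptions, not the paper's calibrated model.

from scipy.integrate import solve_ivp

def rhs(t, y, alpha=0.15, beta=0.002, gamma=0.02, zeta=0.2,
        treat=0.25, delta=0.1, sigma=0.3):
    # S susceptible, P prescribed users, A addicted, R in treatment (all toy rates).
    S, P, A, R = y
    dS = -alpha * S - beta * S * A + zeta * P + delta * R
    dP = alpha * S - (gamma + zeta) * P
    dA = gamma * P + beta * S * A + sigma * R - treat * A   # sigma: relapse from treatment
    dR = treat * A - (delta + sigma) * R                    # delta: successful recovery
    return [dS, dP, dA, dR]

sol = solve_ivp(rhs, (0.0, 200.0), [0.9, 0.08, 0.015, 0.005])
print(sol.y[:, -1])   # long-run compartment fractions (toy)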
0802.3876
Yonatan Bilu
Yonatan Bilu
The evolution of microRNA-regulation in duplicated genes facilitates expression divergence
null
null
null
null
q-bio.MN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: The evolution of microRNA regulation in metazoans is a mysterious process: microRNA sequences are highly conserved among distal organisms, but on the other hand, there is no evident conservation of their targets. Results: We study this extensive rewiring of the microRNA regulatory network by analyzing the evolutionary trajectories of duplicated genes in D. melanogaster. We find that in general microRNA-targeted genes tend to avoid gene duplication. However, in cases where gene duplication is evident, we find that the gene that displays high divergence from the ancestral gene at the sequence level is also likely to be associated in an opposing manner with the microRNA regulatory system - if the ancestral gene is a miRNA target then the divergent gene tends not to be, and vice versa. Conclusions: This suggests that miRNAs not only have a role in conferring expression robustness, as was suggested by previous works, but are also an accessible tool in evolving expression divergence.
[ { "created": "Tue, 26 Feb 2008 19:11:25 GMT", "version": "v1" } ]
2008-02-27
[ [ "Bilu", "Yonatan", "" ] ]
Background: The evolution of microRNA regulation in metazoans is a mysterious process: microRNA sequences are highly conserved among distal organisms, but on the other hand, there is no evident conservation of their targets. Results: We study this extensive rewiring of the microRNA regulatory network by analyzing the evolutionary trajectories of duplicated genes in D. melanogaster. We find that in general microRNA-targeted genes tend to avoid gene duplication. However, in cases where gene duplication is evident, we find that the gene that displays high divergence from the ancestral gene at the sequence level is also likely to be associated in an opposing manner with the microRNA regulatory system - if the ancestral gene is a miRNA target then the divergent gene tends not to be, and vice versa. Conclusions: This suggests that miRNAs not only have a role in conferring expression robustness, as was suggested by previous works, but are also an accessible tool in evolving expression divergence.
1709.01464
Wayne Hayes
Dillon P. Kanne, Wayne B. Hayes
SANA: separating the search algorithm from the objective function in biological network alignment, Part 1: Search
8 pages main text, 2 pages Supplementary
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological network alignment is currently in a state of disarray, with more than two dozen network alignment tools having been introduced in the past decade, with no clear winner, and other new tools being published almost quarterly. Part of the problem is that almost every new tool proposes both a new objective function and a new search algorithm to optimize said objective. These two aspects of alignment are orthogonal, and confounding them makes it difficult to evaluate them separately. A more systematic approach is needed. To this end, we bring these two orthogonal issues into sharp focus in two companion papers. In Part 1 (this paper) we show that simulated annealing, as implemented by SANA, far outperforms all other existing search algorithms across a wide range of objectives. Part 2 (our companion paper) then uses SANA to compare over a dozen objectives in terms of the biology they recover, demonstrating that some objective functions recover useful biological information while others do not. We propose that further work should focus on improving objective functions, with SANA the obvious choice as the search algorithm.
[ { "created": "Tue, 5 Sep 2017 15:53:55 GMT", "version": "v1" } ]
2017-09-06
[ [ "Kanne", "Dillon P.", "" ], [ "Hayes", "Wayne B.", "" ] ]
Biological network alignment is currently in a state of disarray, with more than two dozen network alignment tools having been introduced in the past decade, with no clear winner, and other new tools being published almost quarterly. Part of the problem is that almost every new tool proposes both a new objective function and a new search algorithm to optimize said objective. These two aspects of alignment are orthogonal, and confounding them makes it difficult to evaluate them separately. A more systematic approach is needed. To this end, we bring these two orthogonal issues into sharp focus in two companion papers. In Part 1 (this paper) we show that simulated annealing, as implemented by SANA, far outperforms all other existing search algorithms across a wide range of objectives. Part 2 (our companion paper) then uses SANA to compare over a dozen objectives in terms of the biology they recover, demonstrating that some objective functions recover useful biological information while others do not. We propose that further work should focus on improving objective functions, with SANA the obvious choice as the search algorithm.
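A minimal simulated-annealing sketch over injective node mappings with a placeholder objective; SANA's actual neighbourhood moves, temperature schedule, and objectives are more elaborate.

import math, random

def anneal(n1, n2, objective, iters=20000, t0=1.0, t_end=1e-3, seed=0):
    # Align the n1 nodes of G1 into n2 >= n1 slots of G2; a state is an injection
    # encoded as the first n1 entries of a permutation of range(n2).
    rng = random.Random(seed)
    perm = list(range(n2))
    rng.shuffle(perm)
    cur = best = objective(perm[:n1])
    for k in range(iters):
        T = t0 * (t_end / t0) ** (k / iters)            # geometric cooling schedule
        i, j = rng.randrange(n1), rng.randrange(n2)     # swap an aligned slot with any slot
        perm[i], perm[j] = perm[j], perm[i]
        new = objective(perm[:n1])
        if new >= cur or rng.random() < math.exp((new - cur) / T):
            cur = new
            best = max(best, cur)
        else:
            perm[i], perm[j] = perm[j], perm[i]         # reject: undo the swap
    return best

# Toy objective: how many nodes map to themselves (stand-in for edge conservation).
print(anneal(50, 80, lambda a: sum(i == v for i, v in enumerate(a))))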
1110.5385
Stuart Borrett
Sarah L. Fann and Stuart R. Borrett
Environ centrality reveals the tendency of indirect effects to homogenize the functional importance of species in ecosystems
21 pages, 6 figures, 3 tables
Fann, SL and SR Borrett. 2012. Environ centrality reveals the tendency of indirect effects to homogenize the functional importance of species in ecosystems. Journal of Theoretical Biology 294(7), 74-86
10.1016/j.jtbi.2011.10.030
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecologists and conservation biologists need to identify the relative importance of species to make sound management decisions and effectively allocate scarce resources. We introduce a new method, termed environ centrality, to determine the relative importance of a species in an ecosystem network with respect to ecosystem energy--matter exchange. We demonstrate the uniqueness of environ centrality by comparing it to other common centrality metrics and then show its ecological significance. Specifically, we tested two hypotheses on a set of 50 empirically-based ecosystem network models. The first concerned the distribution of centrality in the community. We hypothesized that the functional importance of species would tend to be concentrated into a few dominant species followed by a group of species with lower, more even importance as is often seen in dominance--diversity curves. Second, we tested the systems ecology hypothesis that indirect relationships homogenize the functional importance of species in ecosystems. Our results support both hypotheses and highlight the importance of detritus and nutrient recyclers such as fungi and bacteria in generating the energy--matter flow in ecosystems. Our homogenization results suggest that indirect effects are important in part because they tend to even the importance of species in ecosystems. A core contribution of this work is that it creates a formal, mathematical method to quantify the importance species play in generating ecosystem activity by integrating direct, indirect, and boundary effects in ecological systems.
[ { "created": "Tue, 25 Oct 2011 00:20:30 GMT", "version": "v1" } ]
2011-11-28
[ [ "Fann", "Sarah L.", "" ], [ "Borrett", "Stuart R.", "" ] ]
Ecologists and conservation biologists need to identify the relative importance of species to make sound management decisions and effectively allocate scarce resources. We introduce a new method, termed environ centrality, to determine the relative importance of a species in an ecosystem network with respect to ecosystem energy--matter exchange. We demonstrate the uniqueness of environ centrality by comparing it to other common centrality metrics and then show its ecological significance. Specifically, we tested two hypotheses on a set of 50 empirically-based ecosystem network models. The first concerned the distribution of centrality in the community. We hypothesized that the functional importance of species would tend to be concentrated into a few dominant species followed by a group of species with lower, more even importance as is often seen in dominance--diversity curves. Second, we tested the systems ecology hypothesis that indirect relationships homogenize the functional importance of species in ecosystems. Our results support both hypotheses and highlight the importance of detritus and nutrient recyclers such as fungi and bacteria in generating the energy--matter flow in ecosystems. Our homogenization results suggest that indirect effects are important in part because they tend to even the importance of species in ecosystems. A core contribution of this work is that it creates a formal, mathematical method to quantify the importance species play in generating ecosystem activity by integrating direct, indirect, and boundary effects in ecological systems.
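A minimal sketch of a throughflow-based importance score built from the integral flow matrix N = (I - G)^{-1}, the standard environ-analysis object; the toy flow network is made up, and the paper's exact environ centrality definition is not reproduced here.

import numpy as np

F = np.array([[0., 2., 0.],               # F[i, j]: flow from node j to node i (toy web)
              [1., 0., 3.],
              [0., 1., 0.]])
y = np.array([1., 0., 0.])                # boundary outputs / exports (toy)
T = F.sum(axis=0) + y                     # node throughflow: within-system outflows + exports
G = F / T                                 # direct flow intensities (column-normalized)
N = np.linalg.inv(np.eye(3) - G)          # integral flows: direct plus all indirect paths
score = N.sum(axis=0) / N.sum()           # relative integral importance of each node
print(score)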
2111.11256
Francisco Villavicencio
José Manuel Aburto, Ugofilippo Basellini, Annette Baudisch, Francisco Villavicencio
Drewnowski's index to measure lifespan variation: Revisiting the Gini coefficient of the life table
28 pages, 5 figures
Theor. Popul. Biol. 148 (2022) 1-10
10.1016/j.tpb.2022.08.003
null
q-bio.PE econ.GN q-fin.EC stat.AP
http://creativecommons.org/licenses/by/4.0/
The Gini coefficient of the life table is a concentration index that provides information on lifespan variation. Originally proposed by economists to measure income and wealth inequalities, it has been widely used in population studies to investigate variation in ages at death. We focus on a complementary indicator, Drewnowski's index, which is a measure of equality. We study its mathematical properties and analyze how changes over time relate to changes in life expectancy. Further, we identify the threshold age below which mortality improvements are translated into decreasing lifespan variation and above which these improvements translate into increasing lifespan inequality. We illustrate our theoretical findings by simulating scenarios of mortality improvement in the Gompertz model. Our experiments demonstrate how Drewnowski's index can serve as an indicator of the shape of mortality patterns. These properties, along with our analytical findings, support studying lifespan variation alongside life expectancy trends in multiple species.
[ { "created": "Fri, 19 Nov 2021 11:18:24 GMT", "version": "v1" } ]
2022-09-28
[ [ "Aburto", "José Manuel", "" ], [ "Basellini", "Ugofilippo", "" ], [ "Baudisch", "Annette", "" ], [ "Villavicencio", "Francisco", "" ] ]
The Gini coefficient of the life table is a concentration index that provides information on lifespan variation. Originally proposed by economists to measure income and wealth inequalities, it has been widely used in population studies to investigate variation in ages at death. We focus on a complementary indicator, Drewnowski's index, which is a measure of equality. We study its mathematical properties and analyze how changes over time relate to changes in life expectancy. Further, we identify the threshold age below which mortality improvements are translated into decreasing lifespan variation and above which these improvements translate into increasing lifespan inequality. We illustrate our theoretical findings by simulating scenarios of mortality improvement in the Gompertz model. Our experiments demonstrate how Drewnowski's index can serve as an indicator of the shape of mortality patterns. These properties, along with our analytical findings, support studying lifespan variation alongside life expectancy trends in multiple species.
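A minimal sketch computing the Gini coefficient from a sample of ages at death and, following the paper's framing of Drewnowski's index as the equality counterpart of the Gini coefficient, its complement D = 1 - G; the ages are made up.

import numpy as np

ages = np.array([62., 70., 75., 81., 84., 88., 90.])      # toy ages at death
n, mean = ages.size, ages.mean()
gini = np.abs(ages[:, None] - ages[None, :]).sum() / (2 * n**2 * mean)
drewnowski = 1.0 - gini                                    # equality counterpart of the Gini
print(f"G = {gini:.3f}, D = {drewnowski:.3f}")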
1409.2448
Omer Weissbrod
Omer Weissbrod, Christoph Lippert, Dan Geiger, and David Heckerman
Accurate Liability Estimation Improves Power in Ascertained Case Control Studies
null
Nature methods 12(4):332-334 (2015)
10.1038/nmeth.3285
null
q-bio.GN stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Linear mixed models (LMMs) have emerged as the method of choice for confounded genome-wide association studies. However, the performance of LMMs in non-randomly ascertained case-control studies deteriorates with increasing sample size. We propose a framework called LEAP (Liability Estimator As a Phenotype, https://github.com/omerwe/LEAP) that tests for association with estimated latent values corresponding to phenotype severity, and demonstrate that this can lead to a substantial power increase.
[ { "created": "Mon, 8 Sep 2014 18:12:11 GMT", "version": "v1" }, { "created": "Tue, 9 Sep 2014 17:12:23 GMT", "version": "v2" }, { "created": "Sun, 21 Feb 2016 11:01:49 GMT", "version": "v3" } ]
2016-02-23
[ [ "Weissbrod", "Omer", "" ], [ "Lippert", "Christoph", "" ], [ "Geiger", "Dan", "" ], [ "Heckerman", "David", "" ] ]
Linear mixed models (LMMs) have emerged as the method of choice for confounded genome-wide association studies. However, the performance of LMMs in non-randomly ascertained case-control studies deteriorates with increasing sample size. We propose a framework called LEAP (Liability Estimator As a Phenotype, https://github.com/omerwe/LEAP) that tests for association with estimated latent values corresponding to phenotype severity, and demonstrate that this can lead to a substantial power increase.
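To make the liability-threshold intuition concrete, here is a deliberately simplified sketch. LEAP itself estimates per-individual liabilities jointly using genetic relatedness before testing; the toy below collapses that step to the marginal truncated-normal posterior means implied by case-control status alone, so the function name, the prevalence value, and the single-SNP regression are all illustrative assumptions rather than LEAP's actual pipeline.

```python
import numpy as np
from scipy.stats import norm, linregress

def marginal_liabilities(y, prevalence):
    """Toy liability assignment under a threshold model.

    y          : 0/1 case-control labels.
    prevalence : population disease prevalence K.
    Returns E[liability | case/control status] per individual; LEAP
    instead estimates liabilities jointly from genetic relatedness.
    """
    t = norm.ppf(1.0 - prevalence)                  # liability threshold
    mean_case = norm.pdf(t) / prevalence            # E[l | l > t]
    mean_ctrl = -norm.pdf(t) / (1.0 - prevalence)   # E[l | l <= t]
    return np.where(y == 1, mean_case, mean_ctrl)

# Illustrative association test of one simulated SNP against the
# estimated liabilities (ascertained 50/50 case-control sample).
rng = np.random.default_rng(0)
snp = rng.binomial(2, 0.3, size=1000)   # genotype dosages 0/1/2
y = rng.binomial(1, 0.5, size=1000)     # null phenotype, for illustration
liab = marginal_liabilities(y, prevalence=0.01)
print(linregress(snp, liab).pvalue)
```

The point of the construction is visible even in this toy: under strong ascertainment (K = 0.01 but half the sample are cases), the continuous liability values carry more information about phenotype severity than the raw 0/1 labels do.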
2306.02075
Kelly Jo\"elle Gatore Sinigirira
David Niyukuri, Kelly Joelle Gatore Sinigirira, Jean De Dieu Kwizera, Salma Omar Abd-Almageid Adam
Ebola transmission dynamics: will future Ebola outbreaks become cyclic?
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The Ebola Virus Disease (EVD) can persist in some body fluids after clinical recovery. In Guinea and the Democratic Republic of the Congo, there are well-documented cases of EVD re-emergence associated with previous outbreaks, and in many of them male EVD survivors were associated with the re-introduction of the virus and the start of new outbreaks. This shows that even after an EVD outbreak has been controlled, close biomedical monitoring of survivors and contacts is critical to avoiding future outbreaks. Thus, to explore the main features of EVD transmission dynamics in the context of re-emergence, we used a compartmental model that incorporates vaccination of EVD contacts. Analytical and numerical analyses of the model were conducted. The model is mathematically and epidemiologically well-posed. We computed the basic reproduction number (R0) and the disease equilibrium points (the disease-free equilibrium and the endemic equilibrium) for the re-emerging outbreak, and performed a stability analysis of the model around those equilibrium points. The model undergoes a backward bifurcation when R0 is close to 1, so even when R0 < 1 the disease will not necessarily be eradicated. This means that reducing R0 below 1 cannot, on its own, be considered a sufficient intervention control measure in our model.
[ { "created": "Sat, 3 Jun 2023 10:42:35 GMT", "version": "v1" } ]
2023-06-06
[ [ "Niyukuri", "David", "" ], [ "Sinigirira", "Kelly Joelle Gatore", "" ], [ "Kwizera", "Jean De Dieu", "" ], [ "Adam", "Salma Omar Abd-Almageid", "" ] ]
The Ebola Virus Disease (EVD) can persist in some body fluids after clinical recovery. In Guinea and the Democratic Republic of the Congo, there are well-documented cases of EVD re-emergence associated with previous outbreaks, and in many of them male EVD survivors were associated with the re-introduction of the virus and the start of new outbreaks. This shows that even after an EVD outbreak has been controlled, close biomedical monitoring of survivors and contacts is critical to avoiding future outbreaks. Thus, to explore the main features of EVD transmission dynamics in the context of re-emergence, we used a compartmental model that incorporates vaccination of EVD contacts. Analytical and numerical analyses of the model were conducted. The model is mathematically and epidemiologically well-posed. We computed the basic reproduction number (R0) and the disease equilibrium points (the disease-free equilibrium and the endemic equilibrium) for the re-emerging outbreak, and performed a stability analysis of the model around those equilibrium points. The model undergoes a backward bifurcation when R0 is close to 1, so even when R0 < 1 the disease will not necessarily be eradicated. This means that reducing R0 below 1 cannot, on its own, be considered a sufficient intervention control measure in our model.
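To illustrate the numerical side of such an analysis, the sketch below integrates a toy SEIR model with vaccination of susceptibles. The paper's model is richer (it is that additional structure, e.g. survivor-driven re-introduction, that can produce the backward bifurcation), so the compartments, rate values, and the R0 expression here are illustrative assumptions showing the workflow, not the authors' equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy SEIR model with vaccination; all parameter values are illustrative.
beta, sigma, gamma = 0.3, 1/10, 1/8   # transmission, incubation, recovery
nu = 0.05                             # vaccination rate of susceptibles

def rhs(t, x):
    S, E, I, R, V = x
    N = S + E + I + R + V
    inf = beta * S * I / N
    return [-inf - nu * S,            # susceptibles infected or vaccinated
            inf - sigma * E,          # exposed progress to infectious
            sigma * E - gamma * I,    # infectious recover
            gamma * I,                # recovered
            nu * S]                   # vaccinated

x0 = [0.99, 0.0, 0.01, 0.0, 0.0]      # mostly susceptible, small seed
sol = solve_ivp(rhs, (0, 365), x0)
print("final state (S, E, I, R, V):", sol.y[:, -1].round(3))

# For the SEIR skeleton without vaccination (nu = 0), the
# next-generation method gives R0 = beta / gamma.
print("R0 =", beta / gamma)
```

A backward bifurcation cannot arise in this simple skeleton; reproducing the paper's finding would require adding its extra compartments and then tracing the endemic branch as R0 crosses 1, but the integration workflow above is the same.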