Dataset schema (column: type, size statistics):

id: string, length 9–13
submitter: string, length 4–48
authors: string, length 4–9.62k
title: string, length 4–343
comments: string, length 2–480
journal-ref: string, length 9–309
doi: string, length 12–138
report-no: categorical string, 277 distinct values
categories: string, length 8–87
license: categorical string, 9 distinct values
orig_abstract: string, length 27–3.76k
versions: list, length 1–15
update_date: string, length 10
authors_parsed: list, length 1–147
abstract: string, length 24–3.75k
2208.01647
Vipul Mann
Vipul Mann and Venkat Venkatasubramanian
AI-driven Hypergraph Network of Organic Chemistry: Network Statistics and Applications in Reaction Classification
null
null
10.1039/D2RE00309K
null
q-bio.MN cs.AI cs.LG q-bio.QM stat.CO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Rapid discovery of new reactions and molecules in recent years has been facilitated by the advancements in high throughput screening, accessibility to a much more complex chemical design space, and the development of accurate molecular modeling frameworks. A holistic study of the growing chemistry literature is, therefore, required that focuses on understanding the recent trends and extrapolating them into possible future trajectories. To this end, several network theory-based studies have been reported that use a directed graph representation of chemical reactions. Here, we perform a study based on representing chemical reactions as hypergraphs where the hyperedges represent chemical reactions and nodes represent the participating molecules. We use a standard reactions dataset to construct a hypernetwork and report its statistics such as degree distributions, average path length, assortativity or degree correlations, PageRank centrality, and graph-based clusters (or communities). We also compute each statistic for an equivalent directed graph representation of reactions to draw parallels and highlight differences between the two. To demonstrate the AI applicability of hypergraph reaction representation, we generate dense hypergraph embeddings and use them in the reaction classification problem. We conclude that the hypernetwork representation is flexible, preserves reaction context, and uncovers hidden insights that are otherwise not apparent in a traditional directed graph representation of chemical reactions.
[ { "created": "Tue, 2 Aug 2022 14:12:03 GMT", "version": "v1" }, { "created": "Mon, 27 Mar 2023 15:43:43 GMT", "version": "v2" } ]
2023-03-28
[ [ "Mann", "Vipul", "" ], [ "Venkatasubramanian", "Venkat", "" ] ]
Rapid discovery of new reactions and molecules in recent years has been facilitated by the advancements in high throughput screening, accessibility to a much more complex chemical design space, and the development of accurate molecular modeling frameworks. A holistic study of the growing chemistry literature is, therefore, required that focuses on understanding the recent trends and extrapolating them into possible future trajectories. To this end, several network theory-based studies have been reported that use a directed graph representation of chemical reactions. Here, we perform a study based on representing chemical reactions as hypergraphs where the hyperedges represent chemical reactions and nodes represent the participating molecules. We use a standard reactions dataset to construct a hypernetwork and report its statistics such as degree distributions, average path length, assortativity or degree correlations, PageRank centrality, and graph-based clusters (or communities). We also compute each statistic for an equivalent directed graph representation of reactions to draw parallels and highlight differences between the two. To demonstrate the AI applicability of hypergraph reaction representation, we generate dense hypergraph embeddings and use them in the reaction classification problem. We conclude that the hypernetwork representation is flexible, preserves reaction context, and uncovers hidden insights that are otherwise not apparent in a traditional directed graph representation of chemical reactions.
1804.04166
John Vandermeer
John Vandermeer
Some elementary mechanisms for critical transitions and hysteresis in simple predator prey models
7 pages, 3 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trait-mediated indirect effects are increasingly acknowledged as important components in the dynamics of ecological systems. The Hamiltonian form of the Lotka-Volterra (LV) equations is traditionally modified by adding density dependence to the prey variable and a functional response to the predator variable. Enriching these non-linear elements with a trait-mediation added to the carrying capacity of the prey creates the dynamics of critical transitions and hysteretic zones.
[ { "created": "Wed, 11 Apr 2018 18:57:35 GMT", "version": "v1" } ]
2018-04-13
[ [ "Vandermeer", "John", "" ] ]
Trait-mediated indirect effects are increasingly acknowledged as important components in the dynamics of ecological systems. The Hamiltonian form of the Lotka-Volterra (LV) equations is traditionally modified by adding density dependence to the prey variable and a functional response to the predator variable. Enriching these non-linear elements with a trait-mediation added to the carrying capacity of the prey creates the dynamics of critical transitions and hysteretic zones.
2010.09660
James Fitzgerald
Tirthabir Biswas, James E. Fitzgerald
Geometric framework to predict structure from function in neural networks
46 pages, 12 figures, final version, corrected typos, minor clarifications, new Discussion material
Physical Review Research 4, 023255, 2022
10.1103/PhysRevResearch.4.023255
null
q-bio.NC cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural computation in biological and artificial networks relies on the nonlinear summation of many inputs. The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function, but quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of threshold-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate the solution space of all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. A generalization accounting for noise further reveals that the solution space geometry can undergo topological transitions as the allowed error increases, which could provide insight into both neuroscience and machine learning. We ultimately use this geometric characterization to derive certainty conditions guaranteeing a non-zero synapse between neurons. Our theoretical framework could thus be applied to neural activity data to make rigorous anatomical predictions that follow generally from the model architecture.
[ { "created": "Mon, 19 Oct 2020 16:56:55 GMT", "version": "v1" }, { "created": "Tue, 22 Dec 2020 13:33:56 GMT", "version": "v2" }, { "created": "Wed, 15 Dec 2021 12:39:57 GMT", "version": "v3" }, { "created": "Thu, 30 Jun 2022 11:37:36 GMT", "version": "v4" } ]
2022-07-01
[ [ "Biswas", "Tirthabir", "" ], [ "Fitzgerald", "James E.", "" ] ]
Neural computation in biological and artificial networks relies on the nonlinear summation of many inputs. The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function, but quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of threshold-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate the solution space of all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. A generalization accounting for noise further reveals that the solution space geometry can undergo topological transitions as the allowed error increases, which could provide insight into both neuroscience and machine learning. We ultimately use this geometric characterization to derive certainty conditions guaranteeing a non-zero synapse between neurons. Our theoretical framework could thus be applied to neural activity data to make rigorous anatomical predictions that follow generally from the model architecture.
0706.3090
Sergei Mukhin I
I.N. Krivonos and S.I. Mukhin
Flexible-to-semiflexible chain crossover on the pressure-area isotherm of lipid bilayer
31 pages; 7 figures; submitted to JETP
null
10.1007/s11447-008-1011-6
null
q-bio.QM q-bio.BM
null
We found theoretically that competition between ~Kq^4 and ~Qq^2 terms in the Fourier transformed conformational energy of a single lipid chain, in combination with inter-chain entropic repulsion in the hydrophobic part of the lipid (bi)layer, may cause a crossover on the bilayer pressure-area isotherm P(A)~(A-A_0)^{-n}. The crossover manifests itself in the transition from n=5/3 to n=3. Our microscopic model represents a single lipid molecule as a worm-like chain with finite irreducible cross-section area A_0, flexural rigidity K and stretching modulus Q in a parabolic potential with self-consistent curvature B(A) formed by entropic interactions between hydrocarbon chains in the lipid layer. The crossover area per lipid A* obeys the relation Q^2/(KB(A*)) ~ 1. We predict a peculiar possibility to deduce effective elastic moduli K and Q of the individual hydrocarbon chain from the analysis of the isotherm possessing such crossover. Also calculated is crossover-related behavior of the area compressibility modulus K_a, equilibrium area per lipid A_t, and chain order parameter S.
[ { "created": "Thu, 21 Jun 2007 06:43:32 GMT", "version": "v1" } ]
2009-11-13
[ [ "Krivonos", "I. N.", "" ], [ "Mukhin", "S. I.", "" ] ]
We found theoretically that competition between ~Kq^4 and ~Qq^2 terms in the Fourier transformed conformational energy of a single lipid chain, in combination with inter-chain entropic repulsion in the hydrophobic part of the lipid (bi)layer, may cause a crossover on the bilayer pressure-area isotherm P(A)~(A-A_0)^{-n}. The crossover manifests itself in the transition from n=5/3 to n=3. Our microscopic model represents a single lipid molecule as a worm-like chain with finite irreducible cross-section area A_0, flexural rigidity K and stretching modulus Q in a parabolic potential with self-consistent curvature B(A) formed by entropic interactions between hydrocarbon chains in the lipid layer. The crossover area per lipid A* obeys the relation Q^2/(KB(A*)) ~ 1. We predict a peculiar possibility to deduce effective elastic moduli K and Q of the individual hydrocarbon chain from the analysis of the isotherm possessing such crossover. Also calculated is crossover-related behavior of the area compressibility modulus K_a, equilibrium area per lipid A_t, and chain order parameter S.
1311.7433
Ramon Grima
Ramon Grima, Nils Walter and Santiago Schnell
Single molecule enzymology a la Michaelis-Menten
Review article, 32 pages, 4 figures; accepted for publication in FEBS Journal
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the past one hundred years, deterministic rate equations have been successfully used to infer enzyme-catalysed reaction mechanisms and to estimate rate constants from reaction kinetics experiments conducted in vitro. In recent years, sophisticated experimental techniques have been developed that allow the measurement of enzyme-catalysed and other biopolymer-mediated reactions inside single cells at the single molecule level. Time course data obtained by these methods are considerably noisy because molecule numbers within cells are typically quite small. As a consequence, the interpretation and analysis of single cell data requires stochastic methods, rather than deterministic rate equations. Here we concisely review both experimental and theoretical techniques which enable single molecule analysis with particular emphasis on the major developments in the field of theoretical stochastic enzyme kinetics, from its inception in the mid-twentieth century to its modern day status. We discuss the differences between stochastic and deterministic rate equation models, how these depend on enzyme molecule numbers and substrate inflow into the reaction compartment and how estimation of rate constants from single cell data is possible using recently developed stochastic approaches.
[ { "created": "Thu, 28 Nov 2013 22:09:18 GMT", "version": "v1" } ]
2013-12-02
[ [ "Grima", "Ramon", "" ], [ "Walter", "Nils", "" ], [ "Schnell", "Santiago", "" ] ]
In the past one hundred years, deterministic rate equations have been successfully used to infer enzyme-catalysed reaction mechanisms and to estimate rate constants from reaction kinetics experiments conducted in vitro. In recent years, sophisticated experimental techniques have been developed that allow the measurement of enzyme-catalysed and other biopolymer-mediated reactions inside single cells at the single molecule level. Time course data obtained by these methods are considerably noisy because molecule numbers within cells are typically quite small. As a consequence, the interpretation and analysis of single cell data requires stochastic methods, rather than deterministic rate equations. Here we concisely review both experimental and theoretical techniques which enable single molecule analysis with particular emphasis on the major developments in the field of theoretical stochastic enzyme kinetics, from its inception in the mid-twentieth century to its modern day status. We discuss the differences between stochastic and deterministic rate equation models, how these depend on enzyme molecule numbers and substrate inflow into the reaction compartment and how estimation of rate constants from single cell data is possible using recently developed stochastic approaches.
1210.1204
Carl Boettiger
Carl Boettiger and Alan Hastings
Early Warning Signals and the Prosecutor's Fallacy
3 figures; Proceedings of the Royal Society B 2012
null
10.1098/rspb.2012.2085
null
q-bio.PE physics.soc-ph q-bio.QM
http://creativecommons.org/licenses/by/3.0/
Early warning signals have been proposed to forecast the possibility of a critical transition, such as the eutrophication of a lake, the collapse of a coral reef, or the end of a glacial period. Because such transitions often unfold on temporal and spatial scales that can be difficult to approach by experimental manipulation, research has often relied on historical observations as a source of natural experiments. Here we examine a critical difference between selecting systems for study based on the fact that we have observed a critical transition and those systems for which we wish to forecast the approach of a transition. This difference arises by conditionally selecting systems known to experience a transition of some sort and failing to account for the bias this introduces -- a statistical error often known as the Prosecutor's Fallacy. By analysing simulated systems that have experienced transitions purely by chance, we reveal an elevated rate of false positives in common warning signal statistics. We further demonstrate a model-based approach that is less subject to this bias than these more commonly used summary statistics. We note that experimental studies with replicates avoid this pitfall entirely.
[ { "created": "Wed, 3 Oct 2012 19:56:10 GMT", "version": "v1" } ]
2012-10-04
[ [ "Boettiger", "Carl", "" ], [ "Hastings", "Alan", "" ] ]
Early warning signals have been proposed to forecast the possibility of a critical transition, such as the eutrophication of a lake, the collapse of a coral reef, or the end of a glacial period. Because such transitions often unfold on temporal and spatial scales that can be difficult to approach by experimental manipulation, research has often relied on historical observations as a source of natural experiments. Here we examine a critical difference between selecting systems for study based on the fact that we have observed a critical transition and those systems for which we wish to forecast the approach of a transition. This difference arises by conditionally selecting systems known to experience a transition of some sort and failing to account for the bias this introduces -- a statistical error often known as the Prosecutor's Fallacy. By analysing simulated systems that have experienced transitions purely by chance, we reveal an elevated rate of false positives in common warning signal statistics. We further demonstrate a model-based approach that is less subject to this bias than these more commonly used summary statistics. We note that experimental studies with replicates avoid this pitfall entirely.
1710.09821
Megan L. Gelsinger
Megan L. Gelsinger, Laura L. Tupper and David S. Matteson
Cell Line Classification Using Electric Cell-substrate Impedance Sensing (ECIS)
40 pages, 10 figures, 8 tables
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider cell line classification using multivariate time series data obtained from electric cell-substrate impedance sensing (ECIS) technology. The ECIS device, which monitors the attachment and spreading of mammalian cells in real time through the collection of electrical impedance data, has historically been used to study one cell line at a time. However, we show that if applied to data from multiple cell lines, ECIS can be used to classify unknown or potentially mislabeled cells, which may help to mitigate the current crisis of reproducibility in the biological literature. We assess a range of approaches to this new problem, testing different classification methods and deriving a dictionary of 29 features to characterize ECIS data. Our analysis also makes use of simultaneous multi-frequency ECIS data, where previous studies have focused on only one frequency. In classification tests on fifteen mammalian cell lines, we obtain very high out-of-sample accuracy. These preliminary findings provide a baseline for future large-scale studies in this field.
[ { "created": "Thu, 26 Oct 2017 17:36:53 GMT", "version": "v1" }, { "created": "Sat, 3 Mar 2018 15:55:39 GMT", "version": "v2" }, { "created": "Wed, 20 Nov 2019 16:41:21 GMT", "version": "v3" } ]
2019-11-21
[ [ "Gelsinger", "Megan L.", "" ], [ "Tupper", "Laura L.", "" ], [ "Matteson", "David S.", "" ] ]
We consider cell line classification using multivariate time series data obtained from electric cell-substrate impedance sensing (ECIS) technology. The ECIS device, which monitors the attachment and spreading of mammalian cells in real time through the collection of electrical impedance data, has historically been used to study one cell line at a time. However, we show that if applied to data from multiple cell lines, ECIS can be used to classify unknown or potentially mislabeled cells, which may help to mitigate the current crisis of reproducibility in the biological literature. We assess a range of approaches to this new problem, testing different classification methods and deriving a dictionary of 29 features to characterize ECIS data. Our analysis also makes use of simultaneous multi-frequency ECIS data, where previous studies have focused on only one frequency. In classification tests on fifteen mammalian cell lines, we obtain very high out-of-sample accuracy. These preliminary findings provide a baseline for future large-scale studies in this field.
2402.13382
Alessandro Farini
Elisabetta Baldanzi, Paolo Antonino Grasso, Clara Gori, Massimo Gurioli, Federcio Tommasi, Alessandro Farini
Visual acuity and contrast sensitivity under monochromatic yellow light
Submitted to Nuovo Cimento, special number for "SIF simposio Optometria": this version is before referee comments
null
null
null
q-bio.QM physics.med-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
This study investigates the impact of monochromatic lighting on visual acuity (VA) and contrast sensitivity (CS). Traditional assessments of VA and CS are typically conducted under the illumination of ``white'' light, but variations in color temperature can influence outcomes. Utilizing data from an exhibition by Olafur Eliasson, where a room was illuminated with low-pressure sodium lamps, creating an almost monochromatic yellow light, we compared visual assessments in the yellow room with conventional lighting in a white room. For VA, the results show no significant differences between the two lighting conditions, while for CS, a more nuanced situation is observed. The bias in CS measurements is clinically relevant, and the p-value suggests that further investigation with a larger, more diverse sample may be worthwhile. Despite limitations, such as higher illumination conditions than standard protocols, the unique ``laboratory'' offered by the exhibition facilitated measurements not easily achievable in a traditional setting.
[ { "created": "Mon, 22 Jan 2024 12:25:08 GMT", "version": "v1" } ]
2024-02-22
[ [ "Baldanzi", "Elisabetta", "" ], [ "Grasso", "Paolo Antonino", "" ], [ "Gori", "Clara", "" ], [ "Gurioli", "Massimo", "" ], [ "Tommasi", "Federcio", "" ], [ "Farini", "Alessandro", "" ] ]
This study investigates the impact of monochromatic lighting on visual acuity (VA) and contrast sensitivity (CS). Traditional assessments of VA and CS are typically conducted under the illumination of ``white'' light, but variations in color temperature can influence outcomes. Utilizing data from an exhibition by Olafur Eliasson, where a room was illuminated with low-pressure sodium lamps, creating an almost monochromatic yellow light, we compared visual assessments in the yellow room with conventional lighting in a white room. For VA, the results show no significant differences between the two lighting conditions, while for CS, a more nuanced situation is observed. The bias in CS measurements is clinically relevant, and the p-value suggests that further investigation with a larger, more diverse sample may be worthwhile. Despite limitations, such as higher illumination conditions than standard protocols, the unique ``laboratory'' offered by the exhibition facilitated measurements not easily achievable in a traditional setting.
1801.04184
Pierre Barrat-Charlaix
Matteo Figliuzzi, Pierre Barrat-Charlaix and Martin Weigt
How pairwise coevolutionary models capture the collective residue variability in proteins
17 pages, 3 figures and one table
Molecular Biology and Evolution 35, 1018 (2018)
10.1093/molbev/msy007
null
q-bio.QM cond-mat.stat-mech q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Global coevolutionary models of homologous protein families, as constructed by direct coupling analysis (DCA), have recently gained popularity in particular due to their capacity to accurately predict residue-residue contacts from sequence information alone, and thereby to facilitate tertiary and quaternary protein structure prediction. More recently, they have also been used to predict fitness effects of amino-acid substitutions in proteins, and to predict evolutionarily conserved protein-protein interactions. These models are based on two currently unjustified hypotheses: (a) correlations in the amino-acid usage of different positions result collectively from networks of direct couplings; and (b) pairwise couplings are sufficient to capture the amino-acid variability. Here we propose a highly precise inference scheme based on Boltzmann-machine learning, which allows us to systematically address these hypotheses. We show how correlations are built up in a highly collective way by a large number of coupling paths, which are based on the protein's three-dimensional structure. We further find that pairwise coevolutionary models capture the collective residue variability across homologous proteins even for quantities which are not imposed by the inference procedure, like three-residue correlations, the clustered structure of protein families in sequence space or the sequence distances between homologs. These findings strongly suggest that pairwise coevolutionary models are actually sufficient to accurately capture the residue variability in homologous protein families.
[ { "created": "Fri, 12 Jan 2018 14:52:10 GMT", "version": "v1" } ]
2019-09-23
[ [ "Figliuzzi", "Matteo", "" ], [ "Barrat-Charlaix", "Pierre", "" ], [ "Weigt", "Martin", "" ] ]
Global coevolutionary models of homologous protein families, as constructed by direct coupling analysis (DCA), have recently gained popularity in particular due to their capacity to accurately predict residue-residue contacts from sequence information alone, and thereby to facilitate tertiary and quaternary protein structure prediction. More recently, they have also been used to predict fitness effects of amino-acid substitutions in proteins, and to predict evolutionarily conserved protein-protein interactions. These models are based on two currently unjustified hypotheses: (a) correlations in the amino-acid usage of different positions result collectively from networks of direct couplings; and (b) pairwise couplings are sufficient to capture the amino-acid variability. Here we propose a highly precise inference scheme based on Boltzmann-machine learning, which allows us to systematically address these hypotheses. We show how correlations are built up in a highly collective way by a large number of coupling paths, which are based on the protein's three-dimensional structure. We further find that pairwise coevolutionary models capture the collective residue variability across homologous proteins even for quantities which are not imposed by the inference procedure, like three-residue correlations, the clustered structure of protein families in sequence space or the sequence distances between homologs. These findings strongly suggest that pairwise coevolutionary models are actually sufficient to accurately capture the residue variability in homologous protein families.
0709.1397
Claude Pasquier
Claude Pasquier (ISBDC), Fabrice Girardot (ISBDC), Karim Jevardat De Fombelle (ISBDC), Richard Christen (ISBDC)
THEA: ontology-driven analysis of microarray data
null
Bioinformatics 20, 16 (2004) 2636-43
10.1093/bioinformatics/bth295
null
q-bio.QM
null
MOTIVATION: Microarray technology makes it possible to measure thousands of variables and to compare their values under hundreds of conditions. Once microarray data are quantified, normalized and classified, the analysis phase is essentially a manual and subjective task based on visual inspection of classes in the light of the vast amount of information available. Currently, data interpretation clearly constitutes the bottleneck of such analyses and there is an obvious need for tools able to fill the gap between data processed with mathematical methods and existing biological knowledge. RESULTS: THEA (Tools for High-throughput Experiments Analysis) is an integrated information processing system allowing convenient handling of data. It allows users to automatically annotate data issued from classification systems with selected biological information coming from a knowledge base, and to either manually search and browse through these annotations or automatically generate meaningful generalizations according to statistical criteria (data mining). AVAILABILITY: The software is available on the website http://thea.unice.fr/
[ { "created": "Mon, 10 Sep 2007 13:58:57 GMT", "version": "v1" } ]
2007-09-11
[ [ "Pasquier", "Claude", "", "ISBDC" ], [ "Girardot", "Fabrice", "", "ISBDC" ], [ "De Fombelle", "Karim Jevardat", "", "ISBDC" ], [ "Christen", "Richard", "", "ISBDC" ] ]
MOTIVATION: Microarray technology makes it possible to measure thousands of variables and to compare their values under hundreds of conditions. Once microarray data are quantified, normalized and classified, the analysis phase is essentially a manual and subjective task based on visual inspection of classes in the light of the vast amount of information available. Currently, data interpretation clearly constitutes the bottleneck of such analyses and there is an obvious need for tools able to fill the gap between data processed with mathematical methods and existing biological knowledge. RESULTS: THEA (Tools for High-throughput Experiments Analysis) is an integrated information processing system allowing convenient handling of data. It allows users to automatically annotate data issued from classification systems with selected biological information coming from a knowledge base, and to either manually search and browse through these annotations or automatically generate meaningful generalizations according to statistical criteria (data mining). AVAILABILITY: The software is available on the website http://thea.unice.fr/
1303.6103
Ganna Rozhnova
Ganna Rozhnova, Ana Nunes, Alan J. McKane
Impact of commuting on disease persistence in heterogeneous metapopulations
main text (8 pages, 5 figures); supplementary material (17 pages, 3 figures) and electronic supplementary material (ESM.mov)
Ecological Complexity 19, 124-129 (2014)
null
null
q-bio.PE physics.bio-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We use a stochastic metapopulation model to study the combined effects of seasonality and spatial heterogeneity on disease persistence. We find a pronounced effect of enhanced persistence associated with strong heterogeneity, intermediate coupling strength and moderate seasonal forcing. Analytic calculations show that this effect is not related with the phase lag between epidemic bursts in different patches, but rather with the linear stability properties of the attractor that describes the steady state of the system in the large population limit.
[ { "created": "Mon, 25 Mar 2013 11:58:20 GMT", "version": "v1" }, { "created": "Fri, 20 Feb 2015 09:42:16 GMT", "version": "v2" } ]
2015-02-23
[ [ "Rozhnova", "Ganna", "" ], [ "Nunes", "Ana", "" ], [ "McKane", "Alan J.", "" ] ]
We use a stochastic metapopulation model to study the combined effects of seasonality and spatial heterogeneity on disease persistence. We find a pronounced effect of enhanced persistence associated with strong heterogeneity, intermediate coupling strength and moderate seasonal forcing. Analytic calculations show that this effect is not related with the phase lag between epidemic bursts in different patches, but rather with the linear stability properties of the attractor that describes the steady state of the system in the large population limit.
2011.10281
Kelin Xia
JunJie Wee and Kelin Xia
Ollivier persistent Ricci curvature (OPRC) based molecular representation for drug design
38 pages, 9 figures
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Efficient molecular featurization is one of the major issues for machine learning models in drug design. Here we propose persistent Ricci curvature (PRC), in particular Ollivier persistent Ricci curvature (OPRC), for molecular featurization and feature engineering, for the first time. The filtration process proposed in persistent homology is employed to generate a series of nested molecular graphs. The persistence and variation of Ollivier Ricci curvatures on these nested graphs are defined as the Ollivier persistent Ricci curvature. Moreover, persistent attributes, which are statistical and combinatorial properties of OPRCs during the filtration process, are used as molecular descriptors, and further combined with machine learning models, in particular the gradient boosting tree (GBT). Our OPRC-GBT model is used in the prediction of protein-ligand binding affinity, which is one of the key steps in drug design. Based on the three most commonly used datasets from the well-established protein-ligand binding databank, i.e., PDBbind, we intensively test our model and compare it with existing models. We find that our model performs better than all machine learning models with traditional molecular descriptors.
[ { "created": "Fri, 20 Nov 2020 08:53:26 GMT", "version": "v1" } ]
2020-11-23
[ [ "Wee", "JunJie", "" ], [ "Xia", "Kelin", "" ] ]
Efficient molecular featurization is one of the major issues for machine learning models in drug design. Here we propose, for the first time, persistent Ricci curvature (PRC), in particular Ollivier persistent Ricci curvature (OPRC), for molecular featurization and feature engineering. The filtration process proposed in persistent homology is employed to generate a series of nested molecular graphs. The persistence and variation of Ollivier Ricci curvatures on these nested graphs are defined as the Ollivier persistent Ricci curvature. Moreover, persistent attributes, which are statistical and combinatorial properties of OPRCs during the filtration process, are used as molecular descriptors and further combined with machine learning models, in particular the gradient boosting tree (GBT). Our OPRC-GBT model is used in the prediction of protein-ligand binding affinity, which is one of the key steps in drug design. Based on the three most commonly used datasets from the well-established protein-ligand binding databank, PDBbind, we intensively test our model and compare it with existing models. We find that our model is better than all machine learning models based on traditional molecular descriptors.
1505.05782
Casey Bergman
Florence Gutzwiller, Catarina R. Carmo, Danny E. Miller, Danny W. Rice, Irene L. Newton, R. Scott Hawley, Luis Teixeira and Casey M. Bergman
Dynamics of Wolbachia pipientis gene expression across the Drosophila melanogaster life cycle
58 pages, 6 figures, 6 supplemental figures, 4 supplemental files (available at https://github.com/bergmanlab/wolbachia/tree/master/gutzwiller_et_al/arxiv)
G3 5(12): 2843-2856 (2015)
10.1534/g3.115.021931
null
q-bio.GN
http://creativecommons.org/licenses/by/3.0/
Symbiotic interactions between microbes and their multicellular hosts have manifold impacts on molecular, cellular and organismal biology. To identify candidate bacterial genes involved in maintaining endosymbiotic associations with insect hosts, we analyzed genome-wide patterns of gene expression in the alpha-proteobacterium Wolbachia pipientis across the life cycle of Drosophila melanogaster using public data from the modENCODE project that was generated in a Wolbachia-infected version of the ISO1 reference strain. We find that the majority of Wolbachia genes are expressed at detectable levels in D. melanogaster across the entire life cycle, but that only 7.8% of 1195 Wolbachia genes exhibit robust stage- or sex-specific expression differences when studied in the "holo-organism" context. Wolbachia genes that are differentially expressed during development are typically up-regulated after D. melanogaster embryogenesis, and include many bacterial membrane, secretion system and ankyrin-repeat containing proteins. Sex-biased genes are often organised as small operons of uncharacterised genes and are mainly up-regulated in adult male D. melanogaster in an age-dependent manner, suggesting a potential role in cytoplasmic incompatibility. Our results indicate that large changes in Wolbachia gene expression across the Drosophila life cycle are relatively rare when assayed across all host tissues, but that candidate genes for understanding host-microbe interactions in facultative endosymbionts can be successfully identified using holo-organism expression profiling. Our work also shows that mining public gene expression data in D. melanogaster provides a rich set of resources to probe the functional basis of the Wolbachia-Drosophila symbiosis and annotate the transcriptional outputs of the Wolbachia genome.
[ { "created": "Thu, 21 May 2015 16:28:05 GMT", "version": "v1" }, { "created": "Tue, 16 Jun 2015 14:33:56 GMT", "version": "v2" } ]
2015-12-14
[ [ "Gutzwiller", "Florence", "" ], [ "Carmo", "Catarina R.", "" ], [ "Miller", "Danny E.", "" ], [ "Rice", "Danny W.", "" ], [ "Newton", "Irene L.", "" ], [ "Hawley", "R. Scott", "" ], [ "Teixeira", "Luis", ""...
Symbiotic interactions between microbes and their multicellular hosts have manifold impacts on molecular, cellular and organismal biology. To identify candidate bacterial genes involved in maintaining endosymbiotic associations with insect hosts, we analyzed genome-wide patterns of gene expression in the alpha-proteobacterium Wolbachia pipientis across the life cycle of Drosophila melanogaster using public data from the modENCODE project that was generated in a Wolbachia-infected version of the ISO1 reference strain. We find that the majority of Wolbachia genes are expressed at detectable levels in D. melanogaster across the entire life cycle, but that only 7.8% of 1195 Wolbachia genes exhibit robust stage- or sex-specific expression differences when studied in the "holo-organism" context. Wolbachia genes that are differentially expressed during development are typically up-regulated after D. melanogaster embryogenesis, and include many bacterial membrane, secretion system and ankyrin-repeat containing proteins. Sex-biased genes are often organised as small operons of uncharacterised genes and are mainly up-regulated in adult male D. melanogaster in an age-dependent manner, suggesting a potential role in cytoplasmic incompatibility. Our results indicate that large changes in Wolbachia gene expression across the Drosophila life cycle are relatively rare when assayed across all host tissues, but that candidate genes for understanding host-microbe interactions in facultative endosymbionts can be successfully identified using holo-organism expression profiling. Our work also shows that mining public gene expression data in D. melanogaster provides a rich set of resources to probe the functional basis of the Wolbachia-Drosophila symbiosis and annotate the transcriptional outputs of the Wolbachia genome.
2009.13402
Sana Yasin
Sana Yasin, Syed Asad Hussain, Sinem Aslan, Imran Raza, Muhammad Muzammel, Alice Othmani
EEG based Major Depressive disorder and Bipolar disorder detection using Neural Networks: A review
29 pages,2 figures and 18 Tables
null
null
null
q-bio.NC cs.LG eess.SP
http://creativecommons.org/licenses/by/4.0/
Mental disorders represent critical public health challenges, as they are leading contributors to the global burden of disease and strongly influence the social and financial welfare of individuals. This comprehensive review concentrates on two mental disorders, Major Depressive Disorder (MDD) and Bipolar Disorder (BD), with noteworthy publications during the last ten years. There is a pressing need for phenotypic characterization of psychiatric disorders with biomarkers. Electroencephalography (EEG) signals could offer a rich signature for MDD and BD and could thereby improve understanding of the pathophysiological mechanisms underlying these mental disorders. In this review, we focus on studies adopting neural networks fed by EEG signals. Among the studies using EEG and neural networks, we discuss a variety of EEG-based protocols, biomarkers and public datasets for depression and bipolar disorder detection. We conclude with a discussion and valuable recommendations that will help improve the reliability of developed models and lead to more accurate and more deterministic computational-intelligence-based systems in psychiatry. This review will prove to be a structured and valuable starting point for researchers working on depression and bipolar disorder recognition using EEG signals.
[ { "created": "Mon, 28 Sep 2020 15:19:54 GMT", "version": "v1" }, { "created": "Thu, 4 Feb 2021 18:58:54 GMT", "version": "v2" } ]
2021-02-05
[ [ "Yasin", "Sana", "" ], [ "Hussain", "Syed Asad", "" ], [ "Aslan", "Sinem", "" ], [ "Raza", "Imran", "" ], [ "Muzammel", "Muhammad", "" ], [ "Othmani", "Alice", "" ] ]
Mental disorders represent critical public health challenges, as they are leading contributors to the global burden of disease and strongly influence the social and financial welfare of individuals. This comprehensive review concentrates on two mental disorders, Major Depressive Disorder (MDD) and Bipolar Disorder (BD), with noteworthy publications during the last ten years. There is a pressing need for phenotypic characterization of psychiatric disorders with biomarkers. Electroencephalography (EEG) signals could offer a rich signature for MDD and BD and could thereby improve understanding of the pathophysiological mechanisms underlying these mental disorders. In this review, we focus on studies adopting neural networks fed by EEG signals. Among the studies using EEG and neural networks, we discuss a variety of EEG-based protocols, biomarkers and public datasets for depression and bipolar disorder detection. We conclude with a discussion and valuable recommendations that will help improve the reliability of developed models and lead to more accurate and more deterministic computational-intelligence-based systems in psychiatry. This review will prove to be a structured and valuable starting point for researchers working on depression and bipolar disorder recognition using EEG signals.
2004.07548
Arik Yochelis
Carsten Beta, Nir S. Gov, and Arik Yochelis
Why A Large Scale Mode Can Be Essential For Understanding Intracellular Actin Waves
13 pages, 4 figures
null
10.3390/cells9061533
Cells 9, 1533 (2020)
q-bio.CB nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During the last decade, intracellular actin waves have attracted much attention due to their essential role in various cellular functions, ranging from motility to cytokinesis. Experimental methods have advanced significantly and can capture the dynamics of actin waves over a large range of spatio-temporal scales. However, the corresponding coarse-grained theory mostly avoids the full complexity of this multi-scale phenomenon. In this perspective, we focus on a minimal continuum model of activator-inhibitor type and highlight the qualitative role of mass conservation, which is typically overlooked. Specifically, our interest is in connecting the mathematical mechanisms of pattern formation in the presence of a large-scale mode, due to mass conservation, with the distinct behaviors of actin waves.
[ { "created": "Thu, 16 Apr 2020 09:32:54 GMT", "version": "v1" }, { "created": "Tue, 16 Jun 2020 11:55:09 GMT", "version": "v2" } ]
2020-06-24
[ [ "Beta", "Carsten", "" ], [ "Gov", "Nir S.", "" ], [ "Yochelis", "Arik", "" ] ]
During the last decade, intracellular actin waves have attracted much attention due to their essential role in various cellular functions, ranging from motility to cytokinesis. Experimental methods have advanced significantly and can capture the dynamics of actin waves over a large range of spatio-temporal scales. However, the corresponding coarse-grained theory mostly avoids the full complexity of this multi-scale phenomenon. In this perspective, we focus on a minimal continuum model of activator-inhibitor type and highlight the qualitative role of mass conservation, which is typically overlooked. Specifically, our interest is in connecting the mathematical mechanisms of pattern formation in the presence of a large-scale mode, due to mass conservation, with the distinct behaviors of actin waves.
0801.0011
Quang-Cuong Pham
Nicolas Tabareau, Jean-Jacques Slotine, Quang-Cuong Pham
How synchronization protects from noise
14 pages, 5 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synchronization phenomena are pervasive in biology. In neuronal networks, the mechanisms of synchronization have been extensively studied from both physiological and computational viewpoints. The functional role of synchronization has also attracted much interest and debate. In particular, synchronization may allow distant sites in the brain to communicate and cooperate with each other, and therefore it may play a role in temporal binding and in attention and sensory-motor integration mechanisms. In this article, we study another role for synchronization: the so-called "collective enhancement of precision." We argue, in a full nonlinear dynamical context, that synchronization may help protect interconnected neurons from the influence of random perturbations -- intrinsic neuronal noise -- which affect all neurons in the nervous system. This property may allow reliable computations to be carried out even in the presence of significant noise (as experimentally found e.g., in retinal ganglion cells in primates), as mathematically it is key to obtaining meaningful downstream signals, whether in terms of precisely-timed interaction (temporal coding), population coding, or frequency coding. Using stochastic contraction theory, we show how synchronization of nonlinear dynamical systems helps protect these systems from random perturbations. Our main contribution is a mathematical proof that, under specific quantified conditions, the impact of noise on each individual system and on the spatial mean can essentially be cancelled through synchronization. Similar concepts may be applicable to questions in systems biology.
[ { "created": "Fri, 28 Dec 2007 22:28:47 GMT", "version": "v1" }, { "created": "Sun, 6 Apr 2008 16:37:38 GMT", "version": "v2" }, { "created": "Thu, 13 Nov 2008 20:56:31 GMT", "version": "v3" }, { "created": "Wed, 1 Apr 2009 04:47:34 GMT", "version": "v4" }, { "cre...
2009-06-18
[ [ "Tabareau", "Nicolas", "" ], [ "Slotine", "Jean-Jacques", "" ], [ "Pham", "Quang-Cuong", "" ] ]
Synchronization phenomena are pervasive in biology. In neuronal networks, the mechanisms of synchronization have been extensively studied from both physiological and computational viewpoints. The functional role of synchronization has also attracted much interest and debate. In particular, synchronization may allow distant sites in the brain to communicate and cooperate with each other, and therefore it may play a role in temporal binding and in attention and sensory-motor integration mechanisms. In this article, we study another role for synchronization: the so-called "collective enhancement of precision." We argue, in a full nonlinear dynamical context, that synchronization may help protect interconnected neurons from the influence of random perturbations -- intrinsic neuronal noise -- which affect all neurons in the nervous system. This property may allow reliable computations to be carried out even in the presence of significant noise (as experimentally found e.g., in retinal ganglion cells in primates), as mathematically it is key to obtaining meaningful downstream signals, whether in terms of precisely-timed interaction (temporal coding), population coding, or frequency coding. Using stochastic contraction theory, we show how synchronization of nonlinear dynamical systems helps protect these systems from random perturbations. Our main contribution is a mathematical proof that, under specific quantified conditions, the impact of noise on each individual system and on the spatial mean can essentially be cancelled through synchronization. Similar concepts may be applicable to questions in systems biology.
2105.10222
Mar\'ia Vallet-Regi
A Philippart, N Gomez-Cerezo, D Arcos, AJ Salinas, E Boccardi, M Vallet-Regi, AR Boccaccini
Novel ion-doped mesoporous glasses for bone tissue engineering: Study of their structural characteristics influenced by the presence of phosphorus oxide
16 pages, 9 figures
J. Non-Cryst Solids. 455, 90-97 (2017)
10.1016/j.jnoncrysol.2016.10.031
null
q-bio.TO physics.med-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Ion-doped binary SiO2-CaO and ternary SiO2-CaO-P2O5 mesoporous bioactive glasses were synthesised and characterised to evaluate the influence of P2O5 on the glass network structure. Strontium, copper and cobalt oxides in a proportion of 0.8 mol% were selected as dopants because of the osteogenic and angiogenic properties reported for these elements. Although the four glass compositions investigated presented analogous textural properties, TEM analysis revealed that the structure of those containing P2O5 exhibited a more ordered mesoporosity. Furthermore, 29Si NMR revealed that the incorporation of P2O5 increased the network connectivity and that this compound captured the Sr2+, Cu2+ and Co2+ ions, preventing them from behaving as modifiers of the silica network. In addition, 31P NMR results revealed that the nature of the cation directly influences the characteristics of the phosphate clusters. In this study, we have shown that phosphorus oxide entraps the doping metallic ions, granting these glasses a greater mesopore order.
[ { "created": "Fri, 21 May 2021 09:18:51 GMT", "version": "v1" } ]
2021-05-24
[ [ "Philippart", "A", "" ], [ "Gomez-Cerezo", "N", "" ], [ "Arcos", "D", "" ], [ "Salinas", "AJ", "" ], [ "Boccardi", "E", "" ], [ "Vallet-Regi", "M", "" ], [ "Boccaccini", "AR", "" ] ]
Ion-doped binary SiO2-CaO and ternary SiO2-CaO-P2O5 mesoporous bioactive glasses were synthesised and characterised to evaluate the influence of P2O5 on the glass network structure. Strontium, copper and cobalt oxides in a proportion of 0.8 mol% were selected as dopants because of the osteogenic and angiogenic properties reported for these elements. Although the four glass compositions investigated presented analogous textural properties, TEM analysis revealed that the structure of those containing P2O5 exhibited a more ordered mesoporosity. Furthermore, 29Si NMR revealed that the incorporation of P2O5 increased the network connectivity and that this compound captured the Sr2+, Cu2+ and Co2+ ions, preventing them from behaving as modifiers of the silica network. In addition, 31P NMR results revealed that the nature of the cation directly influences the characteristics of the phosphate clusters. In this study, we have shown that phosphorus oxide entraps the doping metallic ions, granting these glasses a greater mesopore order.
1003.4475
Gerald Teschl
Julian King, Karl Unterkofler, Gerald Teschl, Susanne Teschl, Helin Koc, Hartmann Hinterhuber, and Anton Amann
A mathematical model for breath gas analysis of volatile organic compounds with special emphasis on acetone
38 pages
J. Math. Biol. 63, 959-999 (2011)
10.1007/s00285-010-0398-9
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recommended standardized procedures for determining exhaled lower respiratory nitric oxide and nasal nitric oxide have been developed by task forces of the European Respiratory Society and the American Thoracic Society. These recommendations have paved the way for the measurement of nitric oxide to become a diagnostic tool for specific clinical applications. It would be desirable to develop similar guidelines for the sampling of other trace gases in exhaled breath, especially volatile organic compounds (VOCs) which reflect ongoing metabolism. The concentrations of water-soluble, blood-borne substances in exhaled breath are influenced by: (i) breathing patterns affecting gas exchange in the conducting airways; (ii) the concentrations in the tracheo-bronchial lining fluid; (iii) the alveolar and systemic concentrations of the compound. The classical Farhi equation takes only the alveolar concentrations into account. Real-time measurements of acetone in end-tidal breath under an ergometer challenge show characteristics which cannot be explained within the Farhi setting. Here we develop a compartment model that reliably captures these profiles and is capable of relating breath to the systemic concentrations of acetone. By comparison with experimental data it is inferred that the major part of variability in breath acetone concentrations (e.g., in response to moderate exercise or altered breathing patterns) can be attributed to airway gas exchange, with minimal changes of the underlying blood and tissue concentrations. Moreover, it is deduced that measured end-tidal breath concentrations of acetone determined during resting conditions and free breathing will be rather poor indicators for endogenous levels. Particularly, the current formulation includes the classical Farhi and the Scheid series inhomogeneity model as special limiting cases.
[ { "created": "Tue, 23 Mar 2010 17:26:44 GMT", "version": "v1" }, { "created": "Mon, 5 Jul 2010 16:34:01 GMT", "version": "v2" }, { "created": "Thu, 27 Jan 2011 10:02:07 GMT", "version": "v3" } ]
2015-03-13
[ [ "King", "Julian", "" ], [ "Unterkofler", "Karl", "" ], [ "Teschl", "Gerald", "" ], [ "Teschl", "Susanne", "" ], [ "Koc", "Helin", "" ], [ "Hinterhuber", "Hartmann", "" ], [ "Amann", "Anton", "" ] ]
Recommended standardized procedures for determining exhaled lower respiratory nitric oxide and nasal nitric oxide have been developed by task forces of the European Respiratory Society and the American Thoracic Society. These recommendations have paved the way for the measurement of nitric oxide to become a diagnostic tool for specific clinical applications. It would be desirable to develop similar guidelines for the sampling of other trace gases in exhaled breath, especially volatile organic compounds (VOCs) which reflect ongoing metabolism. The concentrations of water-soluble, blood-borne substances in exhaled breath are influenced by: (i) breathing patterns affecting gas exchange in the conducting airways; (ii) the concentrations in the tracheo-bronchial lining fluid; (iii) the alveolar and systemic concentrations of the compound. The classical Farhi equation takes only the alveolar concentrations into account. Real-time measurements of acetone in end-tidal breath under an ergometer challenge show characteristics which cannot be explained within the Farhi setting. Here we develop a compartment model that reliably captures these profiles and is capable of relating breath to the systemic concentrations of acetone. By comparison with experimental data it is inferred that the major part of variability in breath acetone concentrations (e.g., in response to moderate exercise or altered breathing patterns) can be attributed to airway gas exchange, with minimal changes of the underlying blood and tissue concentrations. Moreover, it is deduced that measured end-tidal breath concentrations of acetone determined during resting conditions and free breathing will be rather poor indicators for endogenous levels. Particularly, the current formulation includes the classical Farhi and the Scheid series inhomogeneity model as special limiting cases.
1905.02853
Nils Gehlenborg
Sabrina Nusrat, Theresa Harbig, Nils Gehlenborg
Tasks, Techniques, and Tools for Genomic Data Visualization
25 pages, 21 figures, 6 tables
null
10.1111/cgf.13727
null
q-bio.GN cs.HC q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Genomic data visualization is essential for interpretation and hypothesis generation as well as a valuable aid in communicating discoveries. Visual tools bridge the gap between algorithmic approaches and the cognitive skills of investigators. Addressing this need has become crucial in genomics, as biomedical research is increasingly data-driven and many studies lack well-defined hypotheses. A key challenge in data-driven research is to discover unexpected patterns and to formulate hypotheses in an unbiased manner in vast amounts of genomic and other associated data. Over the past two decades, this has driven the development of numerous data visualization techniques and tools for visualizing genomic data. Based on a comprehensive literature survey, we propose taxonomies for data, visualization, and tasks involved in genomic data visualization. Furthermore, we provide a comprehensive review of published genomic visualization tools in the context of the proposed taxonomies.
[ { "created": "Wed, 8 May 2019 00:53:22 GMT", "version": "v1" } ]
2019-11-04
[ [ "Nusrat", "Sabrina", "" ], [ "Harbig", "Theresa", "" ], [ "Gehlenborg", "Nils", "" ] ]
Genomic data visualization is essential for interpretation and hypothesis generation as well as a valuable aid in communicating discoveries. Visual tools bridge the gap between algorithmic approaches and the cognitive skills of investigators. Addressing this need has become crucial in genomics, as biomedical research is increasingly data-driven and many studies lack well-defined hypotheses. A key challenge in data-driven research is to discover unexpected patterns and to formulate hypotheses in an unbiased manner in vast amounts of genomic and other associated data. Over the past two decades, this has driven the development of numerous data visualization techniques and tools for visualizing genomic data. Based on a comprehensive literature survey, we propose taxonomies for data, visualization, and tasks involved in genomic data visualization. Furthermore, we provide a comprehensive review of published genomic visualization tools in the context of the proposed taxonomies.
1912.08141
Martin Engqvist
Gang Li, Jan Zrimec, Boyang Ji, Jun Geng, Johan Larsbrink, Aleksej Zelezniak, Jens Nielsen, and Martin KM Engqvist
Performance of regression models as a function of experiment noise
null
Bioinformatics and Biology Insights. January 2021
10.1177/11779322211020315
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
A challenge in developing machine learning regression models is that it is difficult to know whether maximal performance has been reached on a particular dataset, or whether further model improvement is possible. In biology this problem is particularly pronounced as sample labels (response variables) are typically obtained through experiments and therefore have experiment noise associated with them. Such label noise puts a fundamental limit to the performance attainable by regression models. We address this challenge by deriving a theoretical upper bound for the coefficient of determination (R2) for regression models. This theoretical upper bound depends only on the noise associated with the response variable in a dataset as well as its variance. The upper bound estimate was validated via Monte Carlo simulations and then used as a tool to bootstrap performance of regression models trained on biological datasets, including protein sequence data, transcriptomic data, and genomic data. Although we study biological datasets in this work, the new upper bound estimates will hold true for regression models from any research field or application area where response variables have associated noise.
[ { "created": "Tue, 17 Dec 2019 17:13:18 GMT", "version": "v1" }, { "created": "Thu, 26 Dec 2019 11:12:48 GMT", "version": "v2" }, { "created": "Thu, 16 Jan 2020 12:08:32 GMT", "version": "v3" } ]
2021-07-28
[ [ "Li", "Gang", "" ], [ "Zrimec", "Jan", "" ], [ "Ji", "Boyang", "" ], [ "Geng", "Jun", "" ], [ "Larsbrink", "Johan", "" ], [ "Zelezniak", "Aleksej", "" ], [ "Nielsen", "Jens", "" ], [ "Engqvist", ...
A challenge in developing machine learning regression models is that it is difficult to know whether maximal performance has been reached on a particular dataset, or whether further model improvement is possible. In biology this problem is particularly pronounced as sample labels (response variables) are typically obtained through experiments and therefore have experiment noise associated with them. Such label noise puts a fundamental limit to the performance attainable by regression models. We address this challenge by deriving a theoretical upper bound for the coefficient of determination (R2) for regression models. This theoretical upper bound depends only on the noise associated with the response variable in a dataset as well as its variance. The upper bound estimate was validated via Monte Carlo simulations and then used as a tool to bootstrap performance of regression models trained on biological datasets, including protein sequence data, transcriptomic data, and genomic data. Although we study biological datasets in this work, the new upper bound estimates will hold true for regression models from any research field or application area where response variables have associated noise.
1505.07494
Herbert Levine
Mohit Kumar Jolly, Marcelo Boareto, Bin Huang, Dongya Jia, Mingyang Lu, Jose N Onuchic, Herbert Levine, Eshel Ben-Jacob
Implications of the hybrid epithelial/mesenchymal phenotype in metastasis
review article, 9 figures
null
null
null
q-bio.CB q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding cell-fate decisions during tumorigenesis and metastasis is a major challenge in modern cancer biology. One canonical cell-fate decision that cancer cells undergo is the Epithelial-to-Mesenchymal Transition (EMT) and its reverse, the Mesenchymal-to-Epithelial Transition (MET). While transitioning between these two phenotypes - epithelial and mesenchymal - cells can also attain a hybrid epithelial/mesenchymal (i.e. partial or intermediate EMT) phenotype. Cells in this phenotype have mixed epithelial (e.g. adhesion) and mesenchymal (e.g. migration) properties, thereby allowing them to move collectively as clusters of Circulating Tumor Cells (CTCs). If these clusters enter the circulation, they can be more apoptosis-resistant and more capable of initiating metastatic lesions than cancer cells moving individually with wholly mesenchymal phenotypes, having undergone a complete EMT. Here, we review the operating principles of the core regulatory network for EMT/MET that acts as a three-way switch giving rise to three distinct phenotypes - epithelial, mesenchymal and hybrid epithelial/mesenchymal. We further characterize this hybrid E/M phenotype in terms of its capabilities for collective cell migration, tumor initiation, cell-cell communication, and drug resistance. We elucidate how the highly interconnected coupling between these modules coordinates cell-fate decisions among a population of cancer cells in the dynamic tumor, hence facilitating tumor-stroma interactions, formation of CTC clusters, and consequently cancer metastasis. Finally, we discuss the multiple advantages that the hybrid epithelial/mesenchymal phenotype has compared to a complete EMT phenotype and argue that these collectively migrating cells are the primary 'bad actors' of metastasis.
[ { "created": "Wed, 27 May 2015 21:18:33 GMT", "version": "v1" } ]
2015-05-29
[ [ "Jolly", "Mohit Kumar", "" ], [ "Boareto", "Marcelo", "" ], [ "Huang", "Bin", "" ], [ "Jia", "Dongya", "" ], [ "Lu", "Mingyang", "" ], [ "Onuchic", "Jose N", "" ], [ "Levine", "Herbert", "" ], [ "Be...
Understanding cell-fate decisions during tumorigenesis and metastasis is a major challenge in modern cancer biology. One canonical cell-fate decision that cancer cells undergo is the Epithelial-to-Mesenchymal Transition (EMT) and its reverse, the Mesenchymal-to-Epithelial Transition (MET). While transitioning between these two phenotypes - epithelial and mesenchymal - cells can also attain a hybrid epithelial/mesenchymal (i.e. partial or intermediate EMT) phenotype. Cells in this phenotype have mixed epithelial (e.g. adhesion) and mesenchymal (e.g. migration) properties, thereby allowing them to move collectively as clusters of Circulating Tumor Cells (CTCs). If these clusters enter the circulation, they can be more apoptosis-resistant and more capable of initiating metastatic lesions than cancer cells moving individually with wholly mesenchymal phenotypes, having undergone a complete EMT. Here, we review the operating principles of the core regulatory network for EMT/MET that acts as a three-way switch giving rise to three distinct phenotypes - epithelial, mesenchymal and hybrid epithelial/mesenchymal. We further characterize this hybrid E/M phenotype in terms of its capabilities for collective cell migration, tumor initiation, cell-cell communication, and drug resistance. We elucidate how the highly interconnected coupling between these modules coordinates cell-fate decisions among a population of cancer cells in the dynamic tumor, hence facilitating tumor-stroma interactions, formation of CTC clusters, and consequently cancer metastasis. Finally, we discuss the multiple advantages that the hybrid epithelial/mesenchymal phenotype has compared to a complete EMT phenotype and argue that these collectively migrating cells are the primary 'bad actors' of metastasis.
1212.0050
Filippo Simini
Tommaso Anfodillo, Marco Carrer, Filippo Simini, Ionel Popa, Jayanth R. Banavar, Amos Maritan
An allometry-based approach for understanding forest structure, predicting tree-size distribution and assessing the degree of disturbance
22 pages, 4 figures
Proc. R. Soc. B 22 January 2013 vol. 280 no. 1751 20122375
10.1098/rspb.2012.2375
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tree-size distribution is one of the most investigated subjects in plant population biology. The forestry literature reports that tree-size distribution trajectories vary across different stands and/or species, while the metabolic scaling theory suggests that the tree number scales universally as the -2 power of diameter. Here, we propose a simple functional scaling model in which these two opposing results are reconciled. Basic principles related to crown shape, energy optimization and the finite-size scaling approach were used to define a set of relationships based on a single parameter, which allows us to predict the slope of the tree-size distributions in a steady-state condition. We tested the model predictions on four temperate mountain forests. Plots (4 ha each, fully mapped) were selected with different degrees of human disturbance (semi-natural stands vs. formerly managed). Results showed that the size distribution range successfully fitted by the model is related to the degree of forest disturbance: in semi-natural forests the range is wide, while in formerly managed forests the agreement with the model is confined to a very restricted range. We argue that simple allometric relationships, at the individual level, shape the structure of the whole forest community.
[ { "created": "Sat, 1 Dec 2012 01:01:59 GMT", "version": "v1" } ]
2012-12-04
[ [ "Anfodillo", "Tommaso", "" ], [ "Carrer", "Marco", "" ], [ "Simini", "Filippo", "" ], [ "Popa", "Ionel", "" ], [ "Banavar", "Jayanth R.", "" ], [ "Maritan", "Amos", "" ] ]
Tree-size distribution is one of the most investigated subjects in plant population biology. The forestry literature reports that tree-size distribution trajectories vary across different stands and/or species, while the metabolic scaling theory suggests that the tree number scales universally as the -2 power of diameter. Here, we propose a simple functional scaling model in which these two opposing results are reconciled. Basic principles related to crown shape, energy optimization and the finite-size scaling approach were used to define a set of relationships based on a single parameter, which allows us to predict the slope of the tree-size distributions in a steady-state condition. We tested the model predictions on four temperate mountain forests. Plots (4 ha each, fully mapped) were selected with different degrees of human disturbance (semi-natural stands vs. formerly managed). Results showed that the size distribution range successfully fitted by the model is related to the degree of forest disturbance: in semi-natural forests the range is wide, while in formerly managed forests the agreement with the model is confined to a very restricted range. We argue that simple allometric relationships, at the individual level, shape the structure of the whole forest community.
1705.00276
Christophe Guyeux
Christophe Guyeux and Bashar Al-Nuaimi and Bassam AlKindy and Jean-Fran\c{c}ois Couchot and Michel Salomon
On the ability to reconstruct ancestral genomes from Mycobacterium genus
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Technical progress during the last decades has led to a situation in which the accumulation of genome sequence data is increasingly fast and cheap. The huge amount of molecular data available nowadays can help address new and essential questions in evolution. However, reconstructing the evolution of DNA sequences requires models, algorithms, and statistical and computational methods of ever increasing complexity. Since most dramatic genomic changes are caused by genome rearrangements (gene duplications, gain/loss events), it becomes crucial to understand their mechanisms and reconstruct ancestors of the given genomes. This problem is known to be NP-complete even in the "simplest" case of three genomes. Heuristic algorithms are usually executed to provide approximations of the exact solution. We argue that, even if the ancestral reconstruction problem is NP-hard in theory, its exact resolution is feasible in various situations, encompassing organelles and some bacteria. Such an accurate reconstruction, which also identifies some highly homoplasic mutations whose ancestral status is undecidable, is initiated in this work-in-progress to reconstruct ancestral genomes of two pathogenic Mycobacterium bacteria. By mixing automatic reconstruction of obvious situations with human intervention on flagged problematic cases, we indicate that it should be possible to achieve a concrete, complete, and truly accurate reconstruction of lineages of the Mycobacterium tuberculosis complex. Thus, it is possible to investigate how these genomes have evolved from their last common ancestors.
[ { "created": "Sun, 30 Apr 2017 06:58:57 GMT", "version": "v1" } ]
2017-05-02
[ [ "Guyeux", "Christophe", "" ], [ "Al-Nuaimi", "Bashar", "" ], [ "AlKindy", "Bassam", "" ], [ "Couchot", "Jean-François", "" ], [ "Salomon", "Michel", "" ] ]
Technical progress during the last decades has led to a situation in which the accumulation of genome sequence data is increasingly fast and cheap. The huge amount of molecular data available nowadays can help address new and essential questions in evolution. However, reconstructing the evolution of DNA sequences requires models, algorithms, and statistical and computational methods of ever increasing complexity. Since most dramatic genomic changes are caused by genome rearrangements (gene duplications, gain/loss events), it becomes crucial to understand their mechanisms and reconstruct ancestors of the given genomes. This problem is known to be NP-complete even in the "simplest" case of three genomes. Heuristic algorithms are usually executed to provide approximations of the exact solution. We argue that, even if the ancestral reconstruction problem is NP-hard in theory, its exact resolution is feasible in various situations, encompassing organelles and some bacteria. Such an accurate reconstruction, which also identifies some highly homoplasic mutations whose ancestral status is undecidable, is initiated in this work-in-progress to reconstruct ancestral genomes of two pathogenic Mycobacterium bacteria. By mixing automatic reconstruction of obvious situations with human intervention on flagged problematic cases, we indicate that it should be possible to achieve a concrete, complete, and truly accurate reconstruction of lineages of the Mycobacterium tuberculosis complex. Thus, it is possible to investigate how these genomes have evolved from their last common ancestors.
1412.8081
Carl Boettiger
Carl Boettiger, Marc Mangel, Stephan Munch
Avoiding tipping points in fisheries management through Gaussian Process Dynamic Programming
2015 Proceedings of the Royal Society B: Biological Sciences
null
10.1098/rspb.2014.1631
null
q-bio.QM q-bio.PE
http://creativecommons.org/licenses/by/3.0/
Model uncertainty and limited data are fundamental challenges to robust management of human intervention in a natural system. These challenges are acutely highlighted by concerns that many ecological systems may contain tipping points, such as Allee population sizes. Before a collapse, we do not know where the tipping points lie, if they exist at all. Hence, we know neither a complete model of the system dynamics nor do we have access to data in some large region of state-space where such a tipping point might exist. We illustrate how a Bayesian Non-Parametric (BNP) approach using a Gaussian Process (GP) prior provides a flexible representation of this inherent uncertainty. We embed GPs in a Stochastic Dynamic Programming (SDP) framework in order to make robust management predictions with both model uncertainty and limited data. We use simulations to evaluate this approach as compared with the standard approach of using model selection to choose from a set of candidate models. We find that model selection erroneously favors models without tipping points -- leading to harvest policies that guarantee extinction. The GPDP performs nearly as well as the true model and significantly outperforms standard approaches. We illustrate this using examples of simulated single-species dynamics, where the standard model selection approach should be most effective, and find that it still fails to account for uncertainty appropriately and leads to population crashes, while management based on the GPDP does not, since it does not underestimate the uncertainty outside of the observed data.
[ { "created": "Sat, 27 Dec 2014 21:08:13 GMT", "version": "v1" } ]
2014-12-30
[ [ "Boettiger", "Carl", "" ], [ "Mangel", "Marc", "" ], [ "Munch", "Stephan", "" ] ]
Model uncertainty and limited data are fundamental challenges to robust management of human intervention in a natural system. These challenges are acutely highlighted by concerns that many ecological systems may contain tipping points, such as Allee population sizes. Before a collapse, we do not know where the tipping points lie, if they exist at all. Hence, we know neither a complete model of the system dynamics nor do we have access to data in some large region of state-space where such a tipping point might exist. We illustrate how a Bayesian Non-Parametric (BNP) approach using a Gaussian Process (GP) prior provides a flexible representation of this inherent uncertainty. We embed GPs in a Stochastic Dynamic Programming (SDP) framework in order to make robust management predictions with both model uncertainty and limited data. We use simulations to evaluate this approach as compared with the standard approach of using model selection to choose from a set of candidate models. We find that model selection erroneously favors models without tipping points -- leading to harvest policies that guarantee extinction. The GPDP performs nearly as well as the true model and significantly outperforms standard approaches. We illustrate this using examples of simulated single-species dynamics, where the standard model selection approach should be most effective, and find that it still fails to account for uncertainty appropriately and leads to population crashes, while management based on the GPDP does not, since it does not underestimate the uncertainty outside of the observed data.
1209.2973
Andrew Das Arulsamy
Andrew Das Arulsamy
Generalized microscopic theory of ion selectivity in voltage-gated ion channels
22 pages, 1 table, no figures
null
null
null
q-bio.SC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ion channels are specific proteins present in the membranes of living cells. They control the flow of specific ions through a cell, driven by the ion channel's electrochemical gradient. In doing so, they control important physiological processes such as muscle contraction and neuronal connectivity, which cannot be properly activated if these channels go haywire, leading to life-threatening diseases and psychological disorders. Here, we develop a generalized microscopic theory of ion selectivity applicable to KcsA, Na$_{\rm v}$Rh and Ca$_{\rm v}$ (L-type) ion channels. We unambiguously expose why and how a given ion channel can be highly selective, and yet have a conductance of the order of one million ions per second, or higher. We identify and prove the correct physico-biochemical mechanisms that are responsible for the high selectivity of a particular ion in a given ion channel. The above mechanisms consist of six conditions, which can be associated directly with these parameters - (i) dehydration energy, (ii) concentration of the "correct" ions, (iii) Coulomb-van-der-Waals attraction, (iv) pore and ionic sizes, and indirectly with (v) the thermodynamic stability and (vi) the "knock-on" assisted permeation.
[ { "created": "Wed, 12 Sep 2012 16:26:50 GMT", "version": "v1" } ]
2012-09-14
[ [ "Arulsamy", "Andrew Das", "" ] ]
Ion channels are specific proteins present in the membranes of living cells. They control the flow of specific ions through a cell, driven by the ion channel's electrochemical gradient. In doing so, they control important physiological processes such as muscle contraction and neuronal connectivity, which cannot be properly activated if these channels go haywire, leading to life-threatening diseases and psychological disorders. Here, we develop a generalized microscopic theory of ion selectivity applicable to KcsA, Na$_{\rm v}$Rh and Ca$_{\rm v}$ (L-type) ion channels. We unambiguously expose why and how a given ion channel can be highly selective, and yet have a conductance of the order of one million ions per second, or higher. We identify and prove the correct physico-biochemical mechanisms that are responsible for the high selectivity of a particular ion in a given ion channel. The above mechanisms consist of six conditions, which can be associated directly with these parameters - (i) dehydration energy, (ii) concentration of the "correct" ions, (iii) Coulomb-van-der-Waals attraction, (iv) pore and ionic sizes, and indirectly with (v) the thermodynamic stability and (vi) the "knock-on" assisted permeation.
2303.12904
Brian Beatty
Jennifer Buoncore, Alexis Sobecki, Brian Lee Beatty
Surface Metrology of Cerebral Arteries Luminal Surface
16 pages, 2 tables, 7 figures
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Atherosclerotic lesions within carotid and cerebral vessels are likely to influence hemodynamics and manifest as vascular pathologies, including Alzheimer's disease and ischemic stroke. Hemodynamics are influenced by changes in luminal diameter of vessels and wall shear stress derived from turbulence, which directly relates to the surface topography of the lumen. In this study, we performed a quantitative assessment of surface metrology of carotid and cerebral arteries in relation to vessel size and location among individuals. We speculate that intracranial vessels will follow suit with extracranial vessels, showing increased surface roughness in larger-diameter vessels. Samples of the internal carotid, common carotid, and multiple branches of the Circle of Willis were collected at 18 different sites from 10 human whole body donors. The arterial surface metrology was analyzed using a Sensofar S Neox 3D optical profiler, from which ISO 25178-2 areal roughness parameters, texture direction, motifs analysis, and scale-sensitive fractal analyses were computed using SensoMap software. The most significant differences were between individuals, though surface roughness also appears greater in the larger vessels. In comparison, the side (left vs. right) is almost immaterial. With further research in this field, the pathophysiology of intracranial atherosclerosis and the role of atherosclerosis in neurodegenerative disorders may be better understood.
[ { "created": "Wed, 22 Mar 2023 20:46:56 GMT", "version": "v1" } ]
2023-03-24
[ [ "Buoncore", "Jennifer", "" ], [ "Sobecki", "Alexis", "" ], [ "Beatty", "Brian Lee", "" ] ]
Atherosclerotic lesions within carotid and cerebral vessels are likely to influence hemodynamics and manifest as vascular pathologies, including Alzheimer's disease and ischemic stroke. Hemodynamics are influenced by changes in luminal diameter of vessels and wall shear stress derived from turbulence, which directly relates to the surface topography of the lumen. In this study, we performed a quantitative assessment of surface metrology of carotid and cerebral arteries in relation to vessel size and location among individuals. We speculate that intracranial vessels will follow suit with extracranial vessels, showing increased surface roughness in larger-diameter vessels. Samples of the internal carotid, common carotid, and multiple branches of the Circle of Willis were collected at 18 different sites from 10 human whole body donors. The arterial surface metrology was analyzed using a Sensofar S Neox 3D optical profiler, from which ISO 25178-2 areal roughness parameters, texture direction, motifs analysis, and scale-sensitive fractal analyses were computed using SensoMap software. The most significant differences were between individuals, though surface roughness also appears greater in the larger vessels. In comparison, the side (left vs. right) is almost immaterial. With further research in this field, the pathophysiology of intracranial atherosclerosis and the role of atherosclerosis in neurodegenerative disorders may be better understood.
0710.3210
Jeremy Sumner
J G Sumner
Entanglement, Invariants, and Phylogenetics
PhD thesis
null
null
null
q-bio.QM q-bio.PE
null
This thesis develops and expands upon known techniques of mathematical physics relevant to the analysis of the popular Markov model of phylogenetic trees, required in biology to reconstruct the evolutionary relationships of taxonomic units from biomolecular sequence data. The techniques of mathematical physics are plentiful and have been developed for some time. The Markov model of phylogenetics and its analysis is a relatively new technique, where most progress to date has been achieved using discrete mathematics. This thesis takes a group-theoretical approach to the problem by beginning with a remarkable mathematical parallel to the process of scattering in particle physics. This is shown to equate to branching events in the evolutionary history of molecular units. The major technical result of this thesis is the derivation of existence proofs and computational techniques for calculating polynomial group invariant functions on a multi-linear space where the group action is that relevant to a Markovian time evolution. The practical results of this thesis are an extended analysis of the use of invariant functions in distance-based methods and the presentation of a new reconstruction technique for quartet trees which is consistent with the most general Markov model of sequence evolution.
[ { "created": "Wed, 17 Oct 2007 02:51:48 GMT", "version": "v1" } ]
2007-10-18
[ [ "Sumner", "J G", "" ] ]
This thesis develops and expands upon known techniques of mathematical physics relevant to the analysis of the popular Markov model of phylogenetic trees, required in biology to reconstruct the evolutionary relationships of taxonomic units from biomolecular sequence data. The techniques of mathematical physics are plentiful and have been developed for some time. The Markov model of phylogenetics and its analysis is a relatively new technique, where most progress to date has been achieved using discrete mathematics. This thesis takes a group-theoretical approach to the problem by beginning with a remarkable mathematical parallel to the process of scattering in particle physics. This is shown to equate to branching events in the evolutionary history of molecular units. The major technical result of this thesis is the derivation of existence proofs and computational techniques for calculating polynomial group invariant functions on a multi-linear space where the group action is that relevant to a Markovian time evolution. The practical results of this thesis are an extended analysis of the use of invariant functions in distance-based methods and the presentation of a new reconstruction technique for quartet trees which is consistent with the most general Markov model of sequence evolution.
2405.02593
Nina Fefferman
John S. McAlister, Michael J. Blum, Yana Bromberg, Nina H. Fefferman, Qiang He, Eric Lofgren, Debra L. Miller, Courtney Schreiner, K. Selcuk Candan, Heather Szabo-Rogers, and J. Michael Reed
An Interdisciplinary Perspective of the Built-Environment Microbiome
23 pages
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The built environment provides an excellent setting for interdisciplinary research on the dynamics of microbial communities. The system is simplified compared to many natural settings, and to some extent the entire environment can be manipulated, from architectural design, to materials use, air flow, human traffic, and capacity to disrupt microbial communities through cleaning. Here we provide an overview of the ecology of the microbiome in the built environment. We address niche space and refugia, population and community (metagenomic) dynamics, spatial ecology within a building, including the major microbial transmission mechanisms, as well as evolution. We also address the landscape ecology connecting microbiomes between physically separated buildings. At each stage we pay particular attention to the actual and potential interface between disciplines, such as ecology, epidemiology, materials science, and human social behavior. We end by identifying some opportunities for future interdisciplinary research on the microbiome of the built environment.
[ { "created": "Sat, 4 May 2024 07:31:55 GMT", "version": "v1" } ]
2024-05-07
[ [ "McAlister", "John S.", "" ], [ "Blum", "Michael J.", "" ], [ "Bromberg", "Yana", "" ], [ "Fefferman", "Nina H.", "" ], [ "He", "Qiang", "" ], [ "Lofgren", "Eric", "" ], [ "Miller", "Debra L.", "" ], [ ...
The built environment provides an excellent setting for interdisciplinary research on the dynamics of microbial communities. The system is simplified compared to many natural settings, and to some extent the entire environment can be manipulated, from architectural design, to materials use, air flow, human traffic, and capacity to disrupt microbial communities through cleaning. Here we provide an overview of the ecology of the microbiome in the built environment. We address niche space and refugia, population and community (metagenomic) dynamics, spatial ecology within a building, including the major microbial transmission mechanisms, as well as evolution. We also address the landscape ecology connecting microbiomes between physically separated buildings. At each stage we pay particular attention to the actual and potential interface between disciplines, such as ecology, epidemiology, materials science, and human social behavior. We end by identifying some opportunities for future interdisciplinary research on the microbiome of the built environment.
2310.02455
Alicia Dickenstein
Alicia Dickenstein, Magal\'i Giaroli, Mercedes P\'erez Mill\'an and Rick Rischter
Multistationarity questions in reduced vs extended biochemical networks
36 pages, 6 figures. References added
null
null
null
q-bio.MN math.AG math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address several questions in reduced versus extended networks via the elimination or addition of intermediate complexes in the framework of chemical reaction networks with mass-action kinetics. We clarify and extend advances in the literature concerning multistationarity in this context. We establish general results about MESSI systems, which we use to compute the circuits of multistationarity for significant biochemical networks.
[ { "created": "Tue, 3 Oct 2023 21:58:13 GMT", "version": "v1" }, { "created": "Thu, 12 Oct 2023 14:42:26 GMT", "version": "v2" } ]
2023-10-13
[ [ "Dickenstein", "Alicia", "" ], [ "Giaroli", "Magalí", "" ], [ "Millán", "Mercedes Pérez", "" ], [ "Rischter", "Rick", "" ] ]
We address several questions in reduced versus extended networks via the elimination or addition of intermediate complexes in the framework of chemical reaction networks with mass-action kinetics. We clarify and extend advances in the literature concerning multistationarity in this context. We establish general results about MESSI systems, which we use to compute the circuits of multistationarity for significant biochemical networks.
q-bio/0703037
Eric Shea-Brown
Eric Shea-Brown, Kresimir Josic, Jaime de la Rocha, Brent Doiron
Universal properties of correlation transfer in integrate-and-fire neurons
null
null
10.1103/PhysRevLett.100.108102
null
q-bio.NC cond-mat.dis-nn
null
One of the fundamental characteristics of a nonlinear system is how it transfers correlations in its inputs to correlations in its outputs. This is particularly important in the nervous system, where correlations between spiking neurons are prominent. Using linear response and asymptotic methods for pairs of unconnected integrate-and-fire (IF) neurons receiving white noise inputs, we show that this correlation transfer depends on the output spike firing rate in a strong, stereotyped manner, and is, surprisingly, almost independent of the interspike variance. For cells receiving heterogeneous inputs, we further show that correlation increases with the geometric mean spiking rate in the same stereotyped manner, greatly extending the generality of this relationship. We present an immediate consequence of this relationship for population coding via tuning curves.
[ { "created": "Fri, 16 Mar 2007 15:59:59 GMT", "version": "v1" } ]
2013-05-29
[ [ "Shea-Brown", "Eric", "" ], [ "Josic", "Kresimir", "" ], [ "de la Rocha", "Jaime", "" ], [ "Doiron", "Brent", "" ] ]
One of the fundamental characteristics of a nonlinear system is how it transfers correlations in its inputs to correlations in its outputs. This is particularly important in the nervous system, where correlations between spiking neurons are prominent. Using linear response and asymptotic methods for pairs of unconnected integrate-and-fire (IF) neurons receiving white noise inputs, we show that this correlation transfer depends on the output spike firing rate in a strong, stereotyped manner, and is, surprisingly, almost independent of the interspike variance. For cells receiving heterogeneous inputs, we further show that correlation increases with the geometric mean spiking rate in the same stereotyped manner, greatly extending the generality of this relationship. We present an immediate consequence of this relationship for population coding via tuning curves.
2303.13642
Andrew Magee
Andrew F. Magee, Andrew J. Holbrook, Jonathan E. Pekar, Itzue W. Caviedes-Solis, Fredrick A. Matsen IV, Guy Baele, Joel O. Wertheim, Xiang Ji, Philippe Lemey, Marc A. Suchard
Random-effects substitution models for phylogenetics via scalable gradient approximations
null
null
null
null
q-bio.PE stat.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic and discrete-trait evolutionary inference depend heavily on an appropriate characterization of the underlying character substitution process. In this paper, we present random-effects substitution models that extend common continuous-time Markov chain models into a richer class of processes capable of capturing a wider variety of substitution dynamics. As these random-effects substitution models often require many more parameters than their usual counterparts, inference can be both statistically and computationally challenging. Thus, we also propose an efficient approach to compute an approximation to the gradient of the data likelihood with respect to all unknown substitution model parameters. We demonstrate that this approximate gradient enables scaling of sampling-based inference, namely Bayesian inference via Hamiltonian Monte Carlo, under random-effects substitution models across large trees and state-spaces. Applied to a dataset of 583 SARS-CoV-2 sequences, an HKY model with random effects shows strong signals of nonreversibility in the substitution process, and posterior predictive model checks clearly show that it is a more adequate model than a reversible model. When analyzing the pattern of phylogeographic spread of 1441 influenza A virus (H3N2) sequences between 14 regions, a random-effects phylogeographic substitution model infers that air travel volume adequately predicts almost all dispersal rates. A random-effects state-dependent substitution model reveals no evidence for an effect of arboreality on the swimming mode in the tree frog subfamily Hylinae. Simulations reveal that random-effects substitution models can accommodate both negligible and radical departures from the underlying base substitution model. We show that our gradient-based inference approach is over an order of magnitude more time-efficient than conventional approaches.
[ { "created": "Thu, 23 Mar 2023 20:00:34 GMT", "version": "v1" }, { "created": "Mon, 25 Sep 2023 23:41:54 GMT", "version": "v2" } ]
2023-09-27
[ [ "Magee", "Andrew F.", "" ], [ "Holbrook", "Andrew J.", "" ], [ "Pekar", "Jonathan E.", "" ], [ "Caviedes-Solis", "Itzue W.", "" ], [ "Matsen", "Fredrick A.", "IV" ], [ "Baele", "Guy", "" ], [ "Wertheim", "Joel ...
Phylogenetic and discrete-trait evolutionary inference depend heavily on an appropriate characterization of the underlying character substitution process. In this paper, we present random-effects substitution models that extend common continuous-time Markov chain models into a richer class of processes capable of capturing a wider variety of substitution dynamics. As these random-effects substitution models often require many more parameters than their usual counterparts, inference can be both statistically and computationally challenging. Thus, we also propose an efficient approach to compute an approximation to the gradient of the data likelihood with respect to all unknown substitution model parameters. We demonstrate that this approximate gradient enables scaling of sampling-based inference, namely Bayesian inference via Hamiltonian Monte Carlo, under random-effects substitution models across large trees and state-spaces. Applied to a dataset of 583 SARS-CoV-2 sequences, an HKY model with random effects shows strong signals of nonreversibility in the substitution process, and posterior predictive model checks clearly show that it is a more adequate model than a reversible model. When analyzing the pattern of phylogeographic spread of 1441 influenza A virus (H3N2) sequences between 14 regions, a random-effects phylogeographic substitution model infers that air travel volume adequately predicts almost all dispersal rates. A random-effects state-dependent substitution model reveals no evidence for an effect of arboreality on the swimming mode in the tree frog subfamily Hylinae. Simulations reveal that random-effects substitution models can accommodate both negligible and radical departures from the underlying base substitution model. We show that our gradient-based inference approach is over an order of magnitude more time-efficient than conventional approaches.
1501.07850
Valeriu Predoi Dr
V. Predoi
Estimating viral infection parameters using Markov Chain Monte Carlo simulations
11 pages, 6 figures
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a mathematical model quantifying the viral infection of pandemic influenza H1N1pdm09-H275 wild type (WT) and H1N1pdm09-H275Y mutant (MUT) strains, we describe a simple method of estimating the model's constant parameters using Monte Carlo methods. Monte Carlo parameter estimation methods present certain advantages over the bootstrapping methods previously used in such studies: the result comprises actual parameter distributions (posteriors) that can be used to compare different viral strains; the recovered parameter distributions offer an exact method to compute credible intervals (similar to the frequentist 95% parametric confidence intervals (CI)), that, in turn, using a suitable analysis statistic, will be narrower than the ones obtained from bootstrapping; given an appropriate computational parallelization, Monte Carlo methods are also faster and less computationally intensive than bootstrapping. We fit Gaussian distributions to the parameter posterior distributions and use a two-sided Kolmogorov-Smirnov test to compare the two strains from a parametric point of view; our example result shows that the two strains are 94% different. Furthermore, based on the obtained parameter values, we estimate the reproductive number R0 for each strain and show that the infectivity of the mutant strain is larger than the wild type strain.
[ { "created": "Fri, 30 Jan 2015 17:19:53 GMT", "version": "v1" }, { "created": "Sat, 7 Feb 2015 15:52:36 GMT", "version": "v2" } ]
2015-02-10
[ [ "Predoi", "V.", "" ] ]
Given a mathematical model quantifying the viral infection of pandemic influenza H1N1pdm09-H275 wild type (WT) and H1N1pdm09-H275Y mutant (MUT) strains, we describe a simple method of estimating the model's constant parameters using Monte Carlo methods. Monte Carlo parameter estimation methods present certain advantages over the bootstrapping methods previously used in such studies: the result comprises actual parameter distributions (posteriors) that can be used to compare different viral strains; the recovered parameter distributions offer an exact method to compute credible intervals (similar to the frequentist 95% parametric confidence intervals (CI)), that, in turn, using a suitable analysis statistic, will be narrower than the ones obtained from bootstrapping; given an appropriate computational parallelization, Monte Carlo methods are also faster and less computationally intensive than bootstrapping. We fit Gaussian distributions to the parameter posterior distributions and use a two-sided Kolmogorov-Smirnov test to compare the two strains from a parametric point of view; our example result shows that the two strains are 94% different. Furthermore, based on the obtained parameter values, we estimate the reproductive number R0 for each strain and show that the infectivity of the mutant strain is larger than that of the wild type strain.
1711.11408
Daniel Chicharro
Daniel Chicharro, Giuseppe Pica, Stefano Panzeri
The identity of information: how deterministic dependencies constrain information synergy and redundancy
null
null
10.3390/e20030169
null
q-bio.NC stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Understanding how different information sources together transmit information is crucial in many domains. For example, understanding the neural code requires characterizing how different neurons contribute unique, redundant, or synergistic pieces of information about sensory or behavioral variables. Williams and Beer (2010) proposed a partial information decomposition (PID) which separates the mutual information that a set of sources contains about a set of targets into nonnegative terms interpretable as these pieces. Quantifying redundancy requires assigning an identity to different information pieces, to assess when information is common across sources. Harder et al. (2013) proposed an identity axiom stating that there cannot be redundancy between two independent sources about a copy of themselves. However, Bertschinger et al. (2012) showed that with a deterministically related sources-target copy this axiom is incompatible with ensuring PID nonnegativity. Here we study systematically the effect of deterministic target-sources dependencies. We introduce two synergy stochasticity axioms that generalize the identity axiom, and we derive general expressions separating stochastic and deterministic PID components. Our analysis identifies how negative terms can originate from deterministic dependencies and shows how different assumptions on information identity, implicit in the stochasticity and identity axioms, determine the PID structure. The implications for studying neural coding are discussed.
[ { "created": "Mon, 13 Nov 2017 12:27:42 GMT", "version": "v1" } ]
2018-04-04
[ [ "Chicharro", "Daniel", "" ], [ "Pica", "Giuseppe", "" ], [ "Panzeri", "Stefano", "" ] ]
Understanding how different information sources together transmit information is crucial in many domains. For example, understanding the neural code requires characterizing how different neurons contribute unique, redundant, or synergistic pieces of information about sensory or behavioral variables. Williams and Beer (2010) proposed a partial information decomposition (PID) which separates the mutual information that a set of sources contains about a set of targets into nonnegative terms interpretable as these pieces. Quantifying redundancy requires assigning an identity to different information pieces, to assess when information is common across sources. Harder et al. (2013) proposed an identity axiom stating that there cannot be redundancy between two independent sources about a copy of themselves. However, Bertschinger et al. (2012) showed that with a deterministically related sources-target copy this axiom is incompatible with ensuring PID nonnegativity. Here we study systematically the effect of deterministic target-sources dependencies. We introduce two synergy stochasticity axioms that generalize the identity axiom, and we derive general expressions separating stochastic and deterministic PID components. Our analysis identifies how negative terms can originate from deterministic dependencies and shows how different assumptions on information identity, implicit in the stochasticity and identity axioms, determine the PID structure. The implications for studying neural coding are discussed.
1610.01674
Philippe Gambette
Tushar Agarwal, Philippe Gambette, David Morrison
Who is Who in Phylogenetic Networks: Articles, Authors and Programs
10 pages, 3 figures
null
null
null
q-bio.PE cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The phylogenetic network emerged in the 1990s as a new model to represent the evolution of species in the case where coexisting species transfer genetic information through hybridization, recombination, lateral gene transfer, etc. As is true for many rapidly evolving fields, there is considerable fragmentation and diversity in methodologies, standards and vocabulary in phylogenetic network research, thus creating the need for an integrated database of articles, authors, techniques, keywords and software. We describe such a database, "Who is Who in Phylogenetic Networks", available at http://phylnet.univ-mlv.fr. "Who is Who in Phylogenetic Networks" comprises more than 600 publications and 500 authors interlinked with a rich set of more than 200 keywords related to phylogenetic networks. The database is integrated with web-based tools to visualize authorship and collaboration networks and analyze these networks using common graph and social network metrics such as centrality (betweenness, eigenvector, degree and closeness) and clustering. We provide downloads of raw information about entries in the database, and a facility to suggest modifications and contribute new information to the database. We also present in this article common use cases of the database and identify trends in the research on phylogenetic networks using the information in the database and textual analysis.
[ { "created": "Wed, 5 Oct 2016 22:25:08 GMT", "version": "v1" } ]
2016-10-07
[ [ "Agarwal", "Tushar", "" ], [ "Gambette", "Philippe", "" ], [ "Morrison", "David", "" ] ]
The phylogenetic network emerged in the 1990s as a new model to represent the evolution of species in the case where coexisting species transfer genetic information through hybridization, recombination, lateral gene transfer, etc. As is true for many rapidly evolving fields, there is considerable fragmentation and diversity in methodologies, standards and vocabulary in phylogenetic network research, thus creating the need for an integrated database of articles, authors, techniques, keywords and software. We describe such a database, "Who is Who in Phylogenetic Networks", available at http://phylnet.univ-mlv.fr. "Who is Who in Phylogenetic Networks" comprises more than 600 publications and 500 authors interlinked with a rich set of more than 200 keywords related to phylogenetic networks. The database is integrated with web-based tools to visualize authorship and collaboration networks and analyze these networks using common graph and social network metrics such as centrality (betweenness, eigenvector, degree and closeness) and clustering. We provide downloads of raw information about entries in the database, and a facility to suggest modifications and contribute new information to the database. We also present in this article common use cases of the database and identify trends in the research on phylogenetic networks using the information in the database and textual analysis.
2306.01915
Beth Cimini
Erin Weisbart, Callum Tromans-Coia, Barbara Diaz-Rohrer, David R Stirling, Fernanda Garcia-Fossa, Rebecca A Senft, Mark C Hiner, Marcelo B de Jesus, Kevin W Eliceiri, Beth A Cimini
CellProfiler plugins -- an easy image analysis platform integration for containers and Python tools
17 pages, 2 figures, 1 table
J Mic (2023)
10.1111/jmi.13223
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
CellProfiler is a widely used software for creating reproducible, reusable image analysis workflows without needing to code. In addition to the >90 modules that make up the main CellProfiler program, CellProfiler has a plugins system that allows for the creation of new modules which integrate with other Python tools or tools that are packaged in software containers. The CellProfiler-plugins repository contains a number of these CellProfiler modules, especially modules that are experimental and/or dependency-heavy. Here, we present an upgraded CellProfiler-plugins repository, an example of accessing containerized tools, improved documentation, and added citation/reference tools to facilitate the use and contribution of the community.
[ { "created": "Fri, 2 Jun 2023 21:00:09 GMT", "version": "v1" }, { "created": "Tue, 15 Aug 2023 13:54:13 GMT", "version": "v2" } ]
2024-01-29
[ [ "Weisbart", "Erin", "" ], [ "Tromans-Coia", "Callum", "" ], [ "Diaz-Rohrer", "Barbara", "" ], [ "Stirling", "David R", "" ], [ "Garcia-Fossa", "Fernanda", "" ], [ "Senft", "Rebecca A", "" ], [ "Hiner", "Mark C"...
CellProfiler is a widely used software for creating reproducible, reusable image analysis workflows without needing to code. In addition to the >90 modules that make up the main CellProfiler program, CellProfiler has a plugins system that allows for the creation of new modules which integrate with other Python tools or tools that are packaged in software containers. The CellProfiler-plugins repository contains a number of these CellProfiler modules, especially modules that are experimental and/or dependency-heavy. Here, we present an upgraded CellProfiler-plugins repository, an example of accessing containerized tools, improved documentation, and added citation/reference tools to facilitate the use and contribution of the community.
1002.1499
Simon Tindemans
Simon H. Tindemans, Bela M. Mulder
Microtubule length distributions in the presence of protein-induced severing
9 pages, 4 figures
Phys. Rev. E 81, 031910 (2010)
10.1103/PhysRevE.81.031910
null
q-bio.SC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Microtubules are highly regulated dynamic elements of the cytoskeleton of eukaryotic cells. One of the regulation mechanisms observed in living cells is the severing by the proteins katanin and spastin. We introduce a model for the dynamics of microtubules in the presence of randomly occurring severing events. Under the biologically motivated assumption that the newly created plus end undergoes a catastrophe, we investigate the steady state length distribution. We show that the presence of severing does not affect the number of microtubules, regardless of the distribution of severing events. In the special case in which the microtubules cannot recover from the depolymerizing state (no rescue events) we derive an analytical expression for the length distribution. In the general case we transform the problem into a single ODE that is solved numerically.
[ { "created": "Sun, 7 Feb 2010 23:13:42 GMT", "version": "v1" } ]
2010-03-12
[ [ "Tindemans", "Simon H.", "" ], [ "Mulder", "Bela M.", "" ] ]
Microtubules are highly regulated dynamic elements of the cytoskeleton of eukaryotic cells. One of the regulation mechanisms observed in living cells is the severing by the proteins katanin and spastin. We introduce a model for the dynamics of microtubules in the presence of randomly occurring severing events. Under the biologically motivated assumption that the newly created plus end undergoes a catastrophe, we investigate the steady state length distribution. We show that the presence of severing does not affect the number of microtubules, regardless of the distribution of severing events. In the special case in which the microtubules cannot recover from the depolymerizing state (no rescue events) we derive an analytical expression for the length distribution. In the general case we transform the problem into a single ODE that is solved numerically.
2205.04655
Thi Kim Thoa Thieu
Thi Kim Thoa Thieu and Roderick Melnik
Effects of random inputs and short-term synaptic plasticity in a LIF conductance model for working memory applications
14 pages, 11 figures. arXiv admin note: substantial text overlap with arXiv:2202.09482
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Working memory (WM) enables the temporary storage of information for processing purposes, playing an important role in the execution of various cognitive tasks. Recent studies have shown that information in WM is not only maintained through persistent recurrent activity but can also be stored in activity-silent states such as short-term synaptic plasticity (STSP). Motivated by important applications of STSP mechanisms in WM, the main focus of the present work is the analysis of the effects of random inputs on a leaky integrate-and-fire (LIF) synaptic conductance neuron under STSP. Furthermore, the irregularity of spike trains can carry information about previous stimulation of a neuron. A LIF conductance neuron with multiple inputs, characterized by the coefficient of variation (CV) of the inter-spike interval (ISI), can yield a decoded output neuron. Our numerical results show that an increase in the standard deviations of the random input current and the random refractory period can lead to increased irregularity of the spike trains of the output neuron.
[ { "created": "Tue, 10 May 2022 03:52:20 GMT", "version": "v1" } ]
2022-05-19
[ [ "Thieu", "Thi Kim Thoa", "" ], [ "Melnik", "Roderick", "" ] ]
Working memory (WM) enables the temporary storage of information for processing purposes, playing an important role in the execution of various cognitive tasks. Recent studies have shown that information in WM is not only maintained through persistent recurrent activity but can also be stored in activity-silent states such as short-term synaptic plasticity (STSP). Motivated by important applications of STSP mechanisms in WM, the main focus of the present work is the analysis of the effects of random inputs on a leaky integrate-and-fire (LIF) synaptic conductance neuron under STSP. Furthermore, the irregularity of spike trains can carry information about previous stimulation of a neuron. A LIF conductance neuron with multiple inputs, characterized by the coefficient of variation (CV) of the inter-spike interval (ISI), can yield a decoded output neuron. Our numerical results show that an increase in the standard deviations of the random input current and the random refractory period can lead to increased irregularity of the spike trains of the output neuron.
1502.03041
Arni S.R. Srinivasa Rao
Arni S.R. Srinivasa Rao and James R. Carey
Generalization of Carey's Equality and a Theorem on Stationary Population
null
Journal of Mathematical Biology (2015), 71, 3: 583 - 594
10.1007/s00285-014-0831-6
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Carey's Equality pertaining to stationary models is well known. In this paper, we have stated and proved a fundamental theorem related to the formation of this Equality. This theorem will provide an in-depth understanding of the role of each captive subject, and their corresponding follow-up duration, in a stationary population. We have demonstrated a numerical example of a captive cohort and the survival pattern of medfly populations. These results can be adopted to understand age structure and the aging process in stationary and non-stationary population models. Key words: Captive cohort, life expectancy, symmetric patterns.
[ { "created": "Tue, 10 Feb 2015 18:57:35 GMT", "version": "v1" } ]
2021-06-15
[ [ "Rao", "Arni S. R. Srinivasa", "" ], [ "Carey", "James R.", "" ] ]
Carey's Equality pertaining to stationary models is well known. In this paper, we have stated and proved a fundamental theorem related to the formation of this Equality. This theorem will provide an in-depth understanding of the role of each captive subject, and their corresponding follow-up duration, in a stationary population. We have demonstrated a numerical example of a captive cohort and the survival pattern of medfly populations. These results can be adopted to understand age structure and the aging process in stationary and non-stationary population models. Key words: Captive cohort, life expectancy, symmetric patterns.
2109.01117
Alkan Kabak\c{c}io\u{g}lu
H. Coban and A. Kabakcioglu
Nested canalizing functions minimize sensitivity and simultaneously promote criticality
5 pages, 3 figures. Submitted for publication
Phys. Rev. Lett. 128, 118101 (2022)
10.1103/PhysRevLett.128.118101
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove that nested canalizing functions are the minimum-sensitivity Boolean functions for any given activity ratio and we characterize the sensitivity boundary which has a nontrivial fractal structure. We further observe, on an extensive database of regulatory functions curated from the literature, that this bound severely constrains the robustness of biological networks. Our findings suggest that the accumulation near the "edge of chaos" in these systems is a natural consequence of a drive towards maximum stability while maintaining plasticity in transcriptional activity.
[ { "created": "Thu, 2 Sep 2021 17:45:06 GMT", "version": "v1" } ]
2022-03-23
[ [ "Coban", "H.", "" ], [ "Kabakcioglu", "A.", "" ] ]
We prove that nested canalizing functions are the minimum-sensitivity Boolean functions for any given activity ratio and we characterize the sensitivity boundary which has a nontrivial fractal structure. We further observe, on an extensive database of regulatory functions curated from the literature, that this bound severely constrains the robustness of biological networks. Our findings suggest that the accumulation near the "edge of chaos" in these systems is a natural consequence of a drive towards maximum stability while maintaining plasticity in transcriptional activity.
1711.04350
Anton Zadorin
Anton S. Zadorin, Yannick Rondelez
Selection strategies for randomly partitioned genetic replicators
36 pages, 7 figures
Phys. Rev. E 99, 062416 (2019)
10.1103/PhysRevE.99.062416
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The amplification cycle of many replicators (natural or artificial) involves the usage of a host compartment, inside of which the replicator expresses the phenotypic compounds necessary to carry out its genetic replication. For example, viruses infect cells, where they express their own proteins and replicate. In this process, the host cell boundary limits the diffusion of the viral protein products, thereby ensuring that phenotypic compounds, such as proteins, promote the replication of the genes that encoded them. This role of maintaining spatial co-localization, also called genotype-phenotype linkage, is a critical function of compartments in natural selection. In most cases, however, individual replicating elements do not distribute systematically among the hosts, but are randomly partitioned. Depending on the replicator-to-host ratio, more than one variant may thus occupy some compartments, blurring the genotype-phenotype linkage and affecting the effectiveness of natural selection. We derive selection equations for a variety of such random multiple occupancy situations, in particular considering the effect of replicator population polymorphism and internal replication dynamics. We conclude that the deleterious effect of random multiple occupancy on selection is relatively benign, and may even completely vanish in some specific cases. In addition, given that higher mean occupancy allows larger populations to be channeled through the selection process, and thus provides a better exploration of phenotypic diversity, we show that it may represent a valid strategy in both natural and technological cases.
[ { "created": "Sun, 12 Nov 2017 20:36:54 GMT", "version": "v1" }, { "created": "Wed, 15 May 2019 22:43:35 GMT", "version": "v2" } ]
2019-07-03
[ [ "Zadorin", "Anton S.", "" ], [ "Rondelez", "Yannick", "" ] ]
The amplification cycle of many replicators (natural or artificial) involves the usage of a host compartment, inside of which the replicator expresses the phenotypic compounds necessary to carry out its genetic replication. For example, viruses infect cells, where they express their own proteins and replicate. In this process, the host cell boundary limits the diffusion of the viral protein products, thereby ensuring that phenotypic compounds, such as proteins, promote the replication of the genes that encoded them. This role of maintaining spatial co-localization, also called genotype-phenotype linkage, is a critical function of compartments in natural selection. In most cases, however, individual replicating elements do not distribute systematically among the hosts, but are randomly partitioned. Depending on the replicator-to-host ratio, more than one variant may thus occupy some compartments, blurring the genotype-phenotype linkage and affecting the effectiveness of natural selection. We derive selection equations for a variety of such random multiple occupancy situations, in particular considering the effect of replicator population polymorphism and internal replication dynamics. We conclude that the deleterious effect of random multiple occupancy on selection is relatively benign, and may even completely vanish in some specific cases. In addition, given that higher mean occupancy allows larger populations to be channeled through the selection process, and thus provides a better exploration of phenotypic diversity, we show that it may represent a valid strategy in both natural and technological cases.
1512.01622
Christian Fink
Stephen V. Gliske, Eugene Lim, Katherine A. Holman, William C. Stacey, Christian G. Fink
Narrowband oscillations from asynchronous neural activity
5 pages, 4 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the possibility that narrowband oscillations may emerge from completely asynchronous, independent neural firing. We find that a population of asynchronous neurons may produce narrowband oscillations if each neuron fires quasi-periodically, and we deduce bounds on the degree of variability in neural spike-timing which will permit the emergence of such oscillations. These results suggest a novel mechanism of neural rhythmogenesis, and they help to explain recent experimental reports of large-amplitude local field potential oscillations in the absence of neural spike-timing synchrony. Simply put, although synchrony can produce oscillations, oscillations do not always imply the existence of synchrony.
[ { "created": "Sat, 5 Dec 2015 04:15:57 GMT", "version": "v1" } ]
2015-12-08
[ [ "Gliske", "Stephen V.", "" ], [ "Lim", "Eugene", "" ], [ "Holman", "Katherine A.", "" ], [ "Stacey", "William C.", "" ], [ "Fink", "Christian G.", "" ] ]
We investigate the possibility that narrowband oscillations may emerge from completely asynchronous, independent neural firing. We find that a population of asynchronous neurons may produce narrowband oscillations if each neuron fires quasi-periodically, and we deduce bounds on the degree of variability in neural spike-timing which will permit the emergence of such oscillations. These results suggest a novel mechanism of neural rhythmogenesis, and they help to explain recent experimental reports of large-amplitude local field potential oscillations in the absence of neural spike-timing synchrony. Simply put, although synchrony can produce oscillations, oscillations do not always imply the existence of synchrony.
2210.04172
Yangsong Zhang
Jianbo Chen, Yangsong Zhang, Yudong Pan, Peng Xu, Cuntai Guan
A Transformer-based deep neural network model for SSVEP classification
null
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by/4.0/
The steady-state visual evoked potential (SSVEP) is one of the most commonly used control signals in brain-computer interface (BCI) systems. However, conventional spatial filtering methods for SSVEP classification depend heavily on subject-specific calibration data, so methods that can alleviate the demand for calibration data are urgently needed. In recent years, developing methods that work in the inter-subject classification scenario has become a promising new direction. As a popular deep learning model, the Transformer has excellent performance and has been used in EEG signal classification tasks. Therefore, in this study, we propose a deep learning model for SSVEP classification based on the Transformer structure in the inter-subject classification scenario, termed SSVEPformer, which is the first application of the Transformer to SSVEP classification. Inspired by previous studies, the model adopts the frequency spectrum of SSVEP data as input, and explores the spectral and spatial domain information for classification. Furthermore, to fully utilize the harmonic information, an extended SSVEPformer based on filter bank technology (FB-SSVEPformer) is proposed to further improve the classification performance. Experiments were conducted using two open datasets (Dataset 1: 10 subjects, 12-class task; Dataset 2: 35 subjects, 40-class task) in the inter-subject classification scenario. The experimental results show that the proposed models achieve better results in terms of classification accuracy and information transfer rate than other baseline methods. The proposed model validates the feasibility of Transformer-based deep learning models for the SSVEP classification task, and could serve as a potential model to alleviate the calibration procedure in the practical application of SSVEP-based BCI systems.
[ { "created": "Sun, 9 Oct 2022 05:28:35 GMT", "version": "v1" } ]
2022-10-11
[ [ "Chen", "Jianbo", "" ], [ "Zhang", "Yangsong", "" ], [ "Pan", "Yudong", "" ], [ "Xu", "Peng", "" ], [ "Guan", "Cuntai", "" ] ]
The steady-state visual evoked potential (SSVEP) is one of the most commonly used control signals in brain-computer interface (BCI) systems. However, conventional spatial filtering methods for SSVEP classification depend heavily on subject-specific calibration data, so methods that can alleviate the demand for calibration data are urgently needed. In recent years, developing methods that work in the inter-subject classification scenario has become a promising new direction. As a popular deep learning model, the Transformer has excellent performance and has been used in EEG signal classification tasks. Therefore, in this study, we propose a deep learning model for SSVEP classification based on the Transformer structure in the inter-subject classification scenario, termed SSVEPformer, which is the first application of the Transformer to SSVEP classification. Inspired by previous studies, the model adopts the frequency spectrum of SSVEP data as input, and explores the spectral and spatial domain information for classification. Furthermore, to fully utilize the harmonic information, an extended SSVEPformer based on filter bank technology (FB-SSVEPformer) is proposed to further improve the classification performance. Experiments were conducted using two open datasets (Dataset 1: 10 subjects, 12-class task; Dataset 2: 35 subjects, 40-class task) in the inter-subject classification scenario. The experimental results show that the proposed models achieve better results in terms of classification accuracy and information transfer rate than other baseline methods. The proposed model validates the feasibility of Transformer-based deep learning models for the SSVEP classification task, and could serve as a potential model to alleviate the calibration procedure in the practical application of SSVEP-based BCI systems.
1704.06530
Qixia Yuan
Andrzej Mizera and Jun Pang and Hongyang Qu and Qixia Yuan
Taming Asynchrony for Attractor Detection in Large Boolean Networks (Technical Report)
28 pages, version 3 (correct a mistake in Table 2)
null
null
null
q-bio.MN cs.DC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Boolean networks are a well-established formalism for modelling biological systems. A vital challenge in analysing a Boolean network is to identify all its attractors. This becomes more challenging for large asynchronous Boolean networks, due to the asynchronous updating scheme. Existing methods are hindered by the well-known state-space explosion problem in large Boolean networks. In this paper, we tackle this challenge by proposing an SCC-based decomposition method. We prove the correctness of our proposed method and demonstrate its efficiency with two real-life biological networks.
[ { "created": "Thu, 20 Apr 2017 16:52:26 GMT", "version": "v1" }, { "created": "Wed, 7 Jun 2017 08:39:57 GMT", "version": "v2" }, { "created": "Tue, 13 Jun 2017 13:28:02 GMT", "version": "v3" } ]
2017-06-14
[ [ "Mizera", "Andrzej", "" ], [ "Pang", "Jun", "" ], [ "Qu", "Hongyang", "" ], [ "Yuan", "Qixia", "" ] ]
Boolean networks are a well-established formalism for modelling biological systems. A vital challenge in analysing a Boolean network is to identify all its attractors. This becomes more challenging for large asynchronous Boolean networks, due to the asynchronous updating scheme. Existing methods are hindered by the well-known state-space explosion problem in large Boolean networks. In this paper, we tackle this challenge by proposing an SCC-based decomposition method. We prove the correctness of our proposed method and demonstrate its efficiency with two real-life biological networks.
2407.00099
Yubai Yuan
Yubai Yuan, Babak Shahbaba, Norbert Fortin, Keiland Cooper, Qing Nie, Annie Qu
Optimal Transport for Latent Integration with An Application to Heterogeneous Neuronal Activity Data
null
null
null
null
q-bio.NC cs.LG stat.AP
http://creativecommons.org/licenses/by/4.0/
Detecting dynamic patterns of task-specific responses shared across heterogeneous datasets is an essential and challenging problem in many scientific applications in medical science and neuroscience. In our motivating example of rodent electrophysiological data, identifying the dynamical patterns in neuronal activity associated with ongoing cognitive demands and behavior is key to uncovering the neural mechanisms of memory. One of the greatest challenges in investigating a cross-subject biological process is that the systematic heterogeneity across individuals could significantly undermine the power of existing machine learning methods to identify the underlying biological dynamics. In addition, many technically challenging neurobiological experiments are conducted on only a handful of subjects where rich longitudinal data are available for each subject. The low sample sizes of such experiments could further reduce the power to detect common dynamic patterns among subjects. In this paper, we propose a novel heterogeneous data integration framework based on optimal transport to extract shared patterns in complex biological processes. The key advantages of the proposed method are that it can increase discriminating power in identifying common patterns by reducing heterogeneity unrelated to the signal by aligning the extracted latent spatiotemporal information across subjects. Our approach is effective even with a small number of subjects, and does not require auxiliary matching information for the alignment. In particular, our method can align longitudinal data across heterogeneous subjects in a common latent space to capture the dynamics of shared patterns while utilizing temporal dependency within subjects.
[ { "created": "Thu, 27 Jun 2024 04:29:21 GMT", "version": "v1" } ]
2024-07-02
[ [ "Yuan", "Yubai", "" ], [ "Shahbaba", "Babak", "" ], [ "Fortin", "Norbert", "" ], [ "Cooper", "Keiland", "" ], [ "Nie", "Qing", "" ], [ "Qu", "Annie", "" ] ]
Detecting dynamic patterns of task-specific responses shared across heterogeneous datasets is an essential and challenging problem in many scientific applications in medical science and neuroscience. In our motivating example of rodent electrophysiological data, identifying the dynamical patterns in neuronal activity associated with ongoing cognitive demands and behavior is key to uncovering the neural mechanisms of memory. One of the greatest challenges in investigating a cross-subject biological process is that the systematic heterogeneity across individuals could significantly undermine the power of existing machine learning methods to identify the underlying biological dynamics. In addition, many technically challenging neurobiological experiments are conducted on only a handful of subjects where rich longitudinal data are available for each subject. The low sample sizes of such experiments could further reduce the power to detect common dynamic patterns among subjects. In this paper, we propose a novel heterogeneous data integration framework based on optimal transport to extract shared patterns in complex biological processes. The key advantage of the proposed method is that it can increase the discriminating power in identifying common patterns by reducing heterogeneity unrelated to the signal through aligning the extracted latent spatiotemporal information across subjects. Our approach is effective even with a small number of subjects, and does not require auxiliary matching information for the alignment. In particular, our method can align longitudinal data across heterogeneous subjects in a common latent space to capture the dynamics of shared patterns while utilizing temporal dependency within subjects.
1902.10825
Hamed Azami
Hamed Azami, Steven E. Arnold, Saeid Sanei, Zhuoqing Chang, Guillermo Sapiro, Javier Escudero, Anoopum S. Gupta
Multiscale Fluctuation-based Dispersion Entropy and its Applications to Neurological Diseases
null
null
null
null
q-bio.NC eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fluctuation-based dispersion entropy (FDispEn) is a new approach to estimate the dynamical variability of the fluctuations of signals. It is based on Shannon entropy and fluctuation-based dispersion patterns. To quantify the physiological dynamics over multiple time scales, multiscale FDispEn (MFDE) is developed in this article. MFDE is robust to the presence of baseline wanders, or trends, in the data. We evaluate MFDE, compared with the popular multiscale sample entropy (MSE) and the recently introduced multiscale dispersion entropy (MDE), on selected synthetic data and five neurological disease datasets: 1) focal and non-focal electroencephalograms (EEGs); 2) walking stride interval signals for young, elderly, and Parkinson's subjects; 3) stride interval fluctuations for Huntington's disease and amyotrophic lateral sclerosis; 4) EEGs for controls and Alzheimer's disease patients; and 5) eye movement data for Parkinson's disease and ataxia. MFDE dealt with the problem of undefined MSE values and, compared with MDE, led to more stable entropy values over the scale factors for pink noise. Overall, MFDE was the fastest and most consistent method for the discrimination of different states of neurological data, especially where the mean value of a time series considerably changes along the signal (e.g., eye movement data). This study shows that MFDE is a relevant new metric to gain further insights into the dynamics of neurological disease recordings.
[ { "created": "Wed, 27 Feb 2019 23:13:06 GMT", "version": "v1" } ]
2019-03-01
[ [ "Azami", "Hamed", "" ], [ "Arnold", "Steven E.", "" ], [ "Sanei", "Saeid", "" ], [ "Chang", "Zhuoqing", "" ], [ "Sapiro", "Guillermo", "" ], [ "Escudero", "Javier", "" ], [ "Gupta", "Anoopum S.", "" ] ]
Fluctuation-based dispersion entropy (FDispEn) is a new approach to estimate the dynamical variability of the fluctuations of signals. It is based on Shannon entropy and fluctuation-based dispersion patterns. To quantify the physiological dynamics over multiple time scales, multiscale FDispEn (MFDE) is developed in this article. MFDE is robust to the presence of baseline wanders, or trends, in the data. We evaluate MFDE, compared with the popular multiscale sample entropy (MSE) and the recently introduced multiscale dispersion entropy (MDE), on selected synthetic data and five neurological disease datasets: 1) focal and non-focal electroencephalograms (EEGs); 2) walking stride interval signals for young, elderly, and Parkinson's subjects; 3) stride interval fluctuations for Huntington's disease and amyotrophic lateral sclerosis; 4) EEGs for controls and Alzheimer's disease patients; and 5) eye movement data for Parkinson's disease and ataxia. MFDE dealt with the problem of undefined MSE values and, compared with MDE, led to more stable entropy values over the scale factors for pink noise. Overall, MFDE was the fastest and most consistent method for the discrimination of different states of neurological data, especially where the mean value of a time series considerably changes along the signal (e.g., eye movement data). This study shows that MFDE is a relevant new metric to gain further insights into the dynamics of neurological disease recordings.
1904.05412
Jesse Greener
Mir Pouyan Zarabadi, Manon Couture, Steve J. Charette, Jesse Greener
A generalized kinetic framework applied to whole-cell catalysis in biofilm flow reactors clarifies performance enhancements
Main paper 4 pages 3 figures. Plus supporting information
null
null
null
q-bio.QM physics.bio-ph q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A common kinetic framework for studies of whole-cell catalysis is vital for understanding and optimizing bioflow reactors. In this work, we demonstrate the applicability of a flow-adapted version of Michaelis-Menten kinetics to a catalytic bacterial biofilm. A three-electrode microfluidic electrochemical flow cell measured increased turnover rates by as much as 50% from a Geobacter sulfurreducens biofilm as flow rate was varied. Based on parameters from the applied kinetic framework, flow-induced increases to turnover rate, catalytic efficiency and device reaction capacity could be linked to an increase in catalytic biomass. This study demonstrates that a standardized kinetic framework is critical for quantitative measurements of new living catalytic systems in flow cells and for benchmarking against well-studied catalytic systems such as enzymes.
[ { "created": "Wed, 10 Apr 2019 19:56:15 GMT", "version": "v1" } ]
2019-04-12
[ [ "Zarabadi", "Mir Pouyan", "" ], [ "Couture", "Manon", "" ], [ "Charette", "Steve J.", "" ], [ "Greener", "Jesse", "" ] ]
A common kinetic framework for studies of whole-cell catalysis is vital for understanding and optimizing bioflow reactors. In this work, we demonstrate the applicability of a flow-adapted version of Michaelis-Menten kinetics to a catalytic bacterial biofilm. A three-electrode microfluidic electrochemical flow cell measured increased turnover rates by as much as 50% from a Geobacter sulfurreducens biofilm as flow rate was varied. Based on parameters from the applied kinetic framework, flow-induced increases to turnover rate, catalytic efficiency and device reaction capacity could be linked to an increase in catalytic biomass. This study demonstrates that a standardized kinetic framework is critical for quantitative measurements of new living catalytic systems in flow cells and for benchmarking against well-studied catalytic systems such as enzymes.
2006.02396
Wejia Wang
Geng Li, Weijia Wang, Jiahui Lin, Zhiyang Huang, Jianqiang Liang, Huabo Wu, Jianping Wen, Zengru Di, Bertrand Roehner, and Zhangang Han
How initial distribution affects symmetry breaking induced by panic in ants: experiment and flee-pheromone model
null
null
null
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Collective escaping is a ubiquitous phenomenon in animal groups. Symmetry breaking caused by panic escape exhibits a feature shared across species: one exit is used more than the other when agents escape from a closed space with two symmetrically located exits. Intuitively, one exit will be used more when more individuals are initially close to it, i.e., when the initial distribution is asymmetric. We used ant groups to investigate how the initial distribution of colonies influences symmetry breaking in collective escaping. Surprisingly, and quite counter-intuitively, there was no positive correlation between symmetry breaking and the asymmetry of the initial distribution. In the experiments, a flee stage was observed, and accordingly a flee-pheromone model was introduced to depict this special behavior in the early stage of escaping. Simulation results fitted well with the experiment. Furthermore, the flee stage duration was calibrated quantitatively, and the model reproduced the observation demonstrated by our previous work. This paper explicitly distinguishes two stages in ant panic escaping for the first time, thus enhancing the understanding of the escaping behavior of ant colonies.
[ { "created": "Wed, 3 Jun 2020 17:11:24 GMT", "version": "v1" } ]
2020-06-04
[ [ "Li", "Geng", "" ], [ "Wang", "Weijia", "" ], [ "Lin", "Jiahui", "" ], [ "Huang", "Zhiyang", "" ], [ "Liang", "Jianqiang", "" ], [ "Wu", "Huabo", "" ], [ "Wen", "Jianping", "" ], [ "Di", "Zengru...
Collective escaping is a ubiquitous phenomenon in animal groups. Symmetry breaking caused by panic escape exhibits a feature shared across species: one exit is used more than the other when agents escape from a closed space with two symmetrically located exits. Intuitively, one exit will be used more when more individuals are initially close to it, i.e., when the initial distribution is asymmetric. We used ant groups to investigate how the initial distribution of colonies influences symmetry breaking in collective escaping. Surprisingly, and quite counter-intuitively, there was no positive correlation between symmetry breaking and the asymmetry of the initial distribution. In the experiments, a flee stage was observed, and accordingly a flee-pheromone model was introduced to depict this special behavior in the early stage of escaping. Simulation results fitted well with the experiment. Furthermore, the flee stage duration was calibrated quantitatively, and the model reproduced the observation demonstrated by our previous work. This paper explicitly distinguishes two stages in ant panic escaping for the first time, thus enhancing the understanding of the escaping behavior of ant colonies.
2405.08601
Robert Worden
Robert Worden
The Requirement for Cognition, in an Equation
12 pages
null
null
null
q-bio.NC q-bio.PE
http://creativecommons.org/licenses/by/4.0/
A model of the evolution of cognition is used to derive a Requirement Equation (RE), which defines what computations the fittest possible brain must make, or must choose actions as if it had made those computations. The terms in the RE depend on factors outside an animal's brain, which can be modelled without making assumptions about how the brain works, from knowledge of the animal's habitat and biology. In simple domains where the choices of actions have small information content, it may not be necessary to build internal models of reality; shortcut computations may be just as good at choosing actions. In complex domains such as 3D spatial cognition, which underpins many complex choices of action, the RE implies that brains build Bayesian internal models of the animal's surroundings, and that the models are constrained to be true to external reality.
[ { "created": "Tue, 14 May 2024 13:41:39 GMT", "version": "v1" } ]
2024-05-15
[ [ "Worden", "Robert", "" ] ]
A model of the evolution of cognition is used to derive a Requirement Equation (RE), which defines what computations the fittest possible brain must make, or must choose actions as if it had made those computations. The terms in the RE depend on factors outside an animal's brain, which can be modelled without making assumptions about how the brain works, from knowledge of the animal's habitat and biology. In simple domains where the choices of actions have small information content, it may not be necessary to build internal models of reality; shortcut computations may be just as good at choosing actions. In complex domains such as 3D spatial cognition, which underpins many complex choices of action, the RE implies that brains build Bayesian internal models of the animal's surroundings, and that the models are constrained to be true to external reality.
0912.0041
Mike Steel Prof.
Mike Steel
Can we avoid 'SIN' in the house of 'No Common Mechanism'?
33 pages, 1 figure
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In 'no common mechanism' (NCM) models of character evolution, each character can evolve on a phylogenetic tree under a partially or totally separate process (e.g. with its own branch lengths). In such cases, the usual conditions that suffice to establish the statistical consistency of tree reconstruction by methods such as maximum likelihood (ML) break down, suggesting that such methods may be prone to statistical inconsistency (SIN). In this paper we ask whether we can avoid SIN for tree topology reconstruction when adopting such models, either by using ML or any other method that could be devised. We prove that it is possible to avoid SIN for certain NCM models, but not for others, and the results depend delicately on the tree reconstruction method employed. We also describe the biological relevance of some recent mathematical results for the more usual 'common mechanism' setting. Our results are not intended to justify NCM, rather to set in place a framework within which such questions can be formally addressed.
[ { "created": "Mon, 30 Nov 2009 23:58:38 GMT", "version": "v1" } ]
2009-12-02
[ [ "Steel", "Mike", "" ] ]
In 'no common mechanism' (NCM) models of character evolution, each character can evolve on a phylogenetic tree under a partially or totally separate process (e.g. with its own branch lengths). In such cases, the usual conditions that suffice to establish the statistical consistency of tree reconstruction by methods such as maximum likelihood (ML) break down, suggesting that such methods may be prone to statistical inconsistency (SIN). In this paper we ask whether we can avoid SIN for tree topology reconstruction when adopting such models, either by using ML or any other method that could be devised. We prove that it is possible to avoid SIN for certain NCM models, but not for others, and the results depend delicately on the tree reconstruction method employed. We also describe the biological relevance of some recent mathematical results for the more usual 'common mechanism' setting. Our results are not intended to justify NCM, rather to set in place a framework within which such questions can be formally addressed.
2004.14298
Dayun Yan
Dayun Yan, Qihui Wang, Manish Adhikari, Alisa Malyavko, Li Lin, Denis B. Zolotukhin, Xiaoliang Yao, Megan Kirschner, Jonathan H. Sherman, Michael Keidar
A New Physically Triggered Cell Death via Transbarrier Contactless Cold Atmospheric Plasma Treatment of Cancer Cells
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For years, extensive efforts have been made to discover effective, non-invasive anti-cancer therapies. Cold atmospheric plasma (CAP) is a near-room-temperature ionized gas composed of reactive species, charged particles, neutral particles, and electrons. CAP also has several physical factors, including thermal radiation, ultraviolet (UV) radiation, and electromagnetic (EM) waves. Most of the previously reported biological effects of CAP have relied on direct contact between bulk plasma and cells, resulting in the chemical effects generally seen after CAP treatment. In this paper, we demonstrate that the electromagnetic emission produced by CAP can lead to the death of B16F10 melanoma cancer cells via a transbarrier contactless method. When compared with the effect of reactive species, the effect of the physical factors causes much greater growth inhibition. The physically triggered growth inhibition is due to a new type of cell death, characterized by rapid leakage of bulk water from the cells, resulting in bubbles on the cell membrane and cytoplasm shrinkage. The results of this study introduce a new possible mechanism of CAP-induced cancer cell death and build a foundation for CAP to be used as a non-invasive cancer treatment in the future.
[ { "created": "Mon, 30 Mar 2020 18:10:43 GMT", "version": "v1" } ]
2020-04-30
[ [ "Yan", "Dayun", "" ], [ "Wang", "Qihui", "" ], [ "Adhikari", "Manish", "" ], [ "Malyavko", "Alisa", "" ], [ "Lin", "Li", "" ], [ "Zolotukhin", "Denis B.", "" ], [ "Yao", "Xiaoliang", "" ], [ "Kirsch...
For years, extensive efforts have been made to discover effective, non-invasive anti-cancer therapies. Cold atmospheric plasma (CAP) is a near-room-temperature ionized gas composed of reactive species, charged particles, neutral particles, and electrons. CAP also has several physical factors, including thermal radiation, ultraviolet (UV) radiation, and electromagnetic (EM) waves. Most of the previously reported biological effects of CAP have relied on direct contact between bulk plasma and cells, resulting in the chemical effects generally seen after CAP treatment. In this paper, we demonstrate that the electromagnetic emission produced by CAP can lead to the death of B16F10 melanoma cancer cells via a transbarrier contactless method. When compared with the effect of reactive species, the effect of the physical factors causes much greater growth inhibition. The physically triggered growth inhibition is due to a new type of cell death, characterized by rapid leakage of bulk water from the cells, resulting in bubbles on the cell membrane and cytoplasm shrinkage. The results of this study introduce a new possible mechanism of CAP-induced cancer cell death and build a foundation for CAP to be used as a non-invasive cancer treatment in the future.
q-bio/0409025
Richard P. Sear
Richard P. Sear
A model for the accidental catalysis of protein unfolding in vivo
5 pages, 2 figures
null
10.1209/epl/i2004-10249-7
null
q-bio.BM
null
Activated processes such as protein unfolding are highly sensitive to heterogeneity in the environment. We study a highly simplified model of a protein in a random heterogeneous environment, a model of the in vivo environment. It is found that if the heterogeneity is sufficiently large the total rate of the process is essentially a random variable; this may be the cause of the species-to-species variability in the rate of prion protein conversion found by Deleault et al. [Nature, 425 (2003) 717].
[ { "created": "Tue, 21 Sep 2004 12:26:43 GMT", "version": "v1" } ]
2009-11-10
[ [ "Sear", "Richard P.", "" ] ]
Activated processes such as protein unfolding are highly sensitive to heterogeneity in the environment. We study a highly simplified model of a protein in a random heterogeneous environment, a model of the in vivo environment. It is found that if the heterogeneity is sufficiently large the total rate of the process is essentially a random variable; this may be the cause of the species-to-species variability in the rate of prion protein conversion found by Deleault et al. [Nature, 425 (2003) 717].
q-bio/0608025
Shuji Ishihara
Mikiya Otsuji, Shuji Ishihara, Carl Co, Kozo Kaibuchi, Atsushi Mochizuki, and Shinya Kuroda
A mass conserved reaction-diffusion system captures properties of cell polarity
PDF only
null
10.1371/journal.pcbi.0030108
null
q-bio.CB
null
Various molecules exclusively accumulate at the front or back of migrating eukaryotic cells in response to a shallow gradient of extracellular signals. Directional sensing and signal amplification highlight the essential properties in the migrating cells, known as cell polarity. In addition to these, such properties of cell polarity involve unique determination of migrating direction (uniqueness of axis) and localized gradient sensing at the front edge (localization of sensitivity), both of which may be required for smooth migration. Here we provide the mass conservation system based on the reaction-diffusion system with two components, where the mass of the two components is always conserved. Using two models belonging to this mass conservation system, we demonstrate through both numerical simulation and analytical approximations that the spatial pattern with a single peak (uniqueness of axis) can be generally observed and that the existent peak senses a gradient of parameters at the peak position, which guides the movement of the peak. We extended this system with multiple components, and we developed a multiple-component model in which cross-talk between members of the Rho family of small GTPases is involved. This model also exhibits the essential properties of the two models with two components. Thus, the mass conservation system shows properties similar to those of cell polarity, such as uniqueness of axis and localization of sensitivity, in addition to directional sensing and signal amplification.
[ { "created": "Fri, 11 Aug 2006 09:50:46 GMT", "version": "v1" } ]
2015-06-26
[ [ "Otsuji", "Mikiya", "" ], [ "Ishihara", "Shuji", "" ], [ "Co", "Carl", "" ], [ "Kaibuchi", "Kozo", "" ], [ "Mochizuki", "Atsushi", "" ], [ "Kuroda", "Shinya", "" ] ]
Various molecules exclusively accumulate at the front or back of migrating eukaryotic cells in response to a shallow gradient of extracellular signals. Directional sensing and signal amplification highlight the essential properties in the migrating cells, known as cell polarity. In addition to these, such properties of cell polarity involve unique determination of migrating direction (uniqueness of axis) and localized gradient sensing at the front edge (localization of sensitivity), both of which may be required for smooth migration. Here we provide the mass conservation system based on the reaction-diffusion system with two components, where the mass of the two components is always conserved. Using two models belonging to this mass conservation system, we demonstrate through both numerical simulation and analytical approximations that the spatial pattern with a single peak (uniqueness of axis) can be generally observed and that the existent peak senses a gradient of parameters at the peak position, which guides the movement of the peak. We extended this system with multiple components, and we developed a multiple-component model in which cross-talk between members of the Rho family of small GTPases is involved. This model also exhibits the essential properties of the two models with two components. Thus, the mass conservation system shows properties similar to those of cell polarity, such as uniqueness of axis and localization of sensitivity, in addition to directional sensing and signal amplification.
1306.1200
Vijay Mohan K Namboodiri
Vijay Mohan K Namboodiri, Stefan Mihalas, Tanya Marton, Marshall G Hussain Shuler
A general theory of intertemporal decision-making and the perception of time
37 pages, 4 main figures, 3 supplementary figures
Front. Behav. Neurosci. 8:61
10.3389/fnbeh.2014.00061
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Animals and humans make decisions based on their expected outcomes. Since relevant outcomes are often delayed, perceiving delays and choosing between earlier versus later rewards (intertemporal decision-making) is an essential component of animal behavior. The myriad observations made in experiments studying intertemporal decision-making and time perception have not yet been rationalized within a single theory. Here we present a theory, Training-Integrated Maximized Estimation of Reinforcement Rate (TIMERR), that explains a wide variety of behavioral observations made in intertemporal decision-making and the perception of time. Our theory postulates that animals make intertemporal choices to optimize expected reward rates over a limited temporal window; this window includes a past integration interval (over which experienced reward rate is estimated) and the expected delay to future reward. Using this theory, we derive a mathematical expression for the subjective representation of time. A unique contribution of our work is in finding that the past integration interval directly determines the steepness of temporal discounting and the nonlinearity of time perception. In so doing, our theory provides a single framework to understand both intertemporal decision-making and time perception.
[ { "created": "Wed, 5 Jun 2013 18:28:59 GMT", "version": "v1" }, { "created": "Sat, 9 Nov 2013 16:06:47 GMT", "version": "v2" } ]
2014-02-21
[ [ "Namboodiri", "Vijay Mohan K", "" ], [ "Mihalas", "Stefan", "" ], [ "Marton", "Tanya", "" ], [ "Shuler", "Marshall G Hussain", "" ] ]
Animals and humans make decisions based on their expected outcomes. Since relevant outcomes are often delayed, perceiving delays and choosing between earlier versus later rewards (intertemporal decision-making) is an essential component of animal behavior. The myriad observations made in experiments studying intertemporal decision-making and time perception have not yet been rationalized within a single theory. Here we present a theory, Training-Integrated Maximized Estimation of Reinforcement Rate (TIMERR), that explains a wide variety of behavioral observations made in intertemporal decision-making and the perception of time. Our theory postulates that animals make intertemporal choices to optimize expected reward rates over a limited temporal window; this window includes a past integration interval (over which experienced reward rate is estimated) and the expected delay to future reward. Using this theory, we derive a mathematical expression for the subjective representation of time. A unique contribution of our work is in finding that the past integration interval directly determines the steepness of temporal discounting and the nonlinearity of time perception. In so doing, our theory provides a single framework to understand both intertemporal decision-making and time perception.
1609.01384
Liang Zhan
Liang Zhan, Lisanne M. Jenkins, Ouri E. Wolfson, Johnson J. GadElkarim, Kevin Nocito, Paul M. Thompson, Olusola A. Ajilore, Moo K. Chung, Alex D. Leow
The Importance of Being Negative: A serious treatment of non-trivial edges in brain functional connectome
null
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the modularity of fMRI-derived brain networks or connectomes can inform the study of brain function organization. However, fMRI connectomes additionally involve negative edges, which are not rigorously accounted for by existing approaches to modularity that either ignore or arbitrarily weight these connections. Furthermore, most Q-maximization-based modularity algorithms yield variable results with suboptimal reproducibility. Here we present an alternative, reproducible approach that exploits how frequently the BOLD-signal correlation between two nodes is negative. We validated this novel probability-based modularity approach on two independent publicly available resting-state connectome datasets (the Human Connectome Project and the 1000 Functional Connectomes) and demonstrated that negative correlations alone are sufficient for understanding resting-state modularity. In fact, this approach a) permits a dual formulation, leading to equivalent solutions regardless of whether one considers positive or negative edges; b) is theoretically linked to the Ising model defined on the connectome, thus yielding a modularity result that maximizes data likelihood. We additionally were able to detect sex differences in modularity that the most widely utilized methods did not. Results confirmed the superiority of our approach in that: a) correlations with the highest probability of being negative are consistently placed between modules; b) due to the equivalent dual forms, no arbitrary weighting factor is required to balance the influence between negative and positive correlations, unlike existing Q-maximization-based modularity approaches. As datasets like HCP become widely available for analysis by the neuroscience community at large, appropriate computational tools to understand the neurobiological information of negative edges in fMRI connectomes are increasingly important.
[ { "created": "Tue, 6 Sep 2016 04:03:12 GMT", "version": "v1" }, { "created": "Sat, 14 Jan 2017 05:54:59 GMT", "version": "v2" }, { "created": "Mon, 5 Jun 2017 21:19:43 GMT", "version": "v3" } ]
2017-06-07
[ [ "Zhan", "Liang", "" ], [ "Jenkins", "Lisanne M.", "" ], [ "Wolfson", "Ouri E.", "" ], [ "GadElkarim", "Johnson J.", "" ], [ "Nocito", "Kevin", "" ], [ "Thompson", "Paul M.", "" ], [ "Ajilore", "Olusola A.", ...
Understanding the modularity of fMRI-derived brain networks or connectomes can inform the study of brain function organization. However, fMRI connectomes additionally involve negative edges, which are not rigorously accounted for by existing approaches to modularity that either ignore or arbitrarily weight these connections. Furthermore, most Q-maximization-based modularity algorithms yield variable results with suboptimal reproducibility. Here we present an alternative, reproducible approach that exploits how frequently the BOLD-signal correlation between two nodes is negative. We validated this novel probability-based modularity approach on two independent publicly available resting-state connectome datasets (the Human Connectome Project and the 1000 Functional Connectomes) and demonstrated that negative correlations alone are sufficient for understanding resting-state modularity. In fact, this approach a) permits a dual formulation, leading to equivalent solutions regardless of whether one considers positive or negative edges; b) is theoretically linked to the Ising model defined on the connectome, thus yielding a modularity result that maximizes data likelihood. We additionally were able to detect sex differences in modularity that the most widely utilized methods did not. Results confirmed the superiority of our approach in that: a) correlations with the highest probability of being negative are consistently placed between modules; b) due to the equivalent dual forms, no arbitrary weighting factor is required to balance the influence between negative and positive correlations, unlike existing Q-maximization-based modularity approaches. As datasets like HCP become widely available for analysis by the neuroscience community at large, appropriate computational tools to understand the neurobiological information of negative edges in fMRI connectomes are increasingly important.
2311.16712
Tom\'a\v{s} Ra\v{c}ek
Tom\'a\v{s} Svoboda, Tom\'a\v{s} Ra\v{c}ek, Josef Handl, Jozef Sabo, Adri\'an Ro\v{s}inec, {\L}ukasz Opio{\l}a, Wojciech Jesionek, Milan E\v{s}ner, Mark\'eta Pernisov\'a, Natallia Madzia Valasevich, Ale\v{s} K\v{r}enek, Radka Svobodov\'a
Onedata4Sci: Life science data management solution based on Onedata
null
null
null
null
q-bio.QM cs.DC
http://creativecommons.org/licenses/by-sa/4.0/
Life-science experimental methods generate vast and ever-increasing volumes of data, which provide highly valuable research resources. However, management of these data is nontrivial, and applicable software solutions are currently subject to intensive development. The solutions mainly fall into one of two groups: general data management systems (e.g. Onedata, iRODS, B2SHARE, CERNBox) or very specialised data management solutions (e.g. solutions for biomolecular simulation data, biological imaging data, genomic data). To bridge the gap between them, we provide Onedata4Sci, a prototype data management solution, which is focused on the management of life science data and covers four key steps of the data life cycle, i.e. data acquisition, user access, computational processing and archiving. Onedata4Sci is based on the Onedata data management system. It is written in Python, fully containerised, with support for processing the stored data in Kubernetes. The applicability of Onedata4Sci is shown in three distinct use cases -- plant imaging data, cellular imaging data, and cryo-electron microscopy data. Despite the use cases covering very different types of data and user patterns, Onedata4Sci demonstrated an ability to successfully handle all these conditions. The complete source code of Onedata4Sci is available on GitHub (https://github.com/CERIT-SC/onedata4sci), and its documentation and installation manual are also provided.
[ { "created": "Tue, 28 Nov 2023 11:52:35 GMT", "version": "v1" } ]
2023-11-29
[ [ "Svoboda", "Tomáš", "" ], [ "Raček", "Tomáš", "" ], [ "Handl", "Josef", "" ], [ "Sabo", "Jozef", "" ], [ "Rošinec", "Adrián", "" ], [ "Opioła", "Łukasz", "" ], [ "Jesionek", "Wojciech", "" ], [ "Ešn...
Life-science experimental methods generate vast and ever-increasing volumes of data, which provide highly valuable research resources. However, management of these data is nontrivial, and applicable software solutions are currently under intensive development. The solutions mainly fall into one of two groups: general data management systems (e.g. Onedata, iRODS, B2SHARE, CERNBox) or very specialised data management solutions (e.g. solutions for biomolecular simulation data, biological imaging data, genomic data). To bridge the gap between them, we provide Onedata4Sci, a prototype data management solution, which is focused on the management of life science data and covers four key steps of the data life cycle, i.e. data acquisition, user access, computational processing and archiving. Onedata4Sci is based on the Onedata data management system. It is written in Python, fully containerised, with support for processing the stored data in Kubernetes. The applicability of Onedata4Sci is shown in three distinct use cases -- plant imaging data, cellular imaging data, and cryo-electron microscopy data. Despite the use cases covering very different types of data and user patterns, Onedata4Sci demonstrated the ability to successfully handle all these conditions. The complete source code of Onedata4Sci is available on GitHub (https://github.com/CERIT-SC/onedata4sci), and its documentation and installation manual are also provided.
2109.12204
Jian Jiang
Jian Jiang
MIIDL: a Python package for microbial biomarkers identification powered by interpretable deep learning
3 pages, 1 figure
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Detecting microbial biomarkers used to predict disease phenotypes and clinical outcomes is crucial for disease early-stage screening and diagnosis. Most methods for biomarker identification are linear-based, which is very limited as biological processes are rarely fully linear. The introduction of machine learning to this field tends to bring a promising solution. However, identifying microbial biomarkers in an interpretable, data-driven and robust manner remains challenging. We present MIIDL, a Python package for the identification of microbial biomarkers based on interpretable deep learning. MIIDL innovatively applies convolutional neural networks, a variety of interpretability algorithms and plenty of pre-processing methods to provide a one-stop and robust pipeline for microbial biomarkers identification from high-dimensional and sparse data sets.
[ { "created": "Fri, 24 Sep 2021 21:30:10 GMT", "version": "v1" } ]
2021-09-29
[ [ "Jiang", "Jian", "" ] ]
Detecting microbial biomarkers used to predict disease phenotypes and clinical outcomes is crucial for early-stage disease screening and diagnosis. Most methods for biomarker identification are linear, which is limiting, as biological processes are rarely fully linear. The introduction of machine learning to this field promises a solution. However, identifying microbial biomarkers in an interpretable, data-driven and robust manner remains challenging. We present MIIDL, a Python package for the identification of microbial biomarkers based on interpretable deep learning. MIIDL innovatively applies convolutional neural networks, a variety of interpretability algorithms and numerous pre-processing methods to provide a one-stop, robust pipeline for microbial biomarker identification from high-dimensional and sparse data sets.
1912.00053
Boyue Fang
Boyue Fang and Yutong Feng
Drug dissemination strategy with an SEIR-based SUC model
20 pages, 10 figures
null
null
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
According to the features of drug addiction, this paper constructs an SEIR-based SUC model to describe and predict the spread of drug addiction. Predictions are that the number of drug addictions will continue to fluctuate with reduced amplitude and eventually stabilize. To seek the fountainhead of heroin, we identified the most likely origins of drugs in Philadelphia, PA, Cuyahoga and Hamilton, OH, Jefferson, KY, Kanawha, WV, and Bedford, VA. Based on the facts, advised concentration includes the spread of Oxycodone, Hydrocodone, Heroin, and Buprenorphine. In other words, drug transmission in the two states of Ohio and Pennsylvania require awareness. According to the propagation curve predicted by our model, the transfer of KY state is still in its early stage, while that of VA, WV is in the middle point, and OH, PA in its latter ones. As a result of this, the number of drug addictions in KY, OH, and VA is projected to increase in three years. For methodology, with the Principal component analysis technique, 22 variables in socio-economic data related to the continuous use of Opioid drugs was filtered, where the 'Relationship' Part deserves a highlight. Based on them, by using the K-means algorithm, 464 counties were categorized into three baskets. To combat the opioid crisis, a specific action will discuss in the sensitivity analysis section. After modeling and analytics, innovation is required to control addicts and advocate anti-drug news campaigns. This part also verified the effectiveness of model when $d_1<0.2; r_1,r_2,r_3<0.3; 15<\beta_1,\beta_2,\beta_3<25$. In other words, if such boundary exceeded, the number of drug addictions may rocket and peak in a short period.
[ { "created": "Fri, 29 Nov 2019 19:47:21 GMT", "version": "v1" }, { "created": "Sun, 19 Jan 2020 17:27:09 GMT", "version": "v2" } ]
2020-01-22
[ [ "Fang", "Boyue", "" ], [ "Feng", "Yutong", "" ] ]
Based on the features of drug addiction, this paper constructs an SEIR-based SUC model to describe and predict the spread of drug addiction. The model predicts that the number of drug addictions will continue to fluctuate with decreasing amplitude and eventually stabilize. To trace the source of heroin, we identified the most likely origins of drugs as Philadelphia, PA; Cuyahoga and Hamilton, OH; Jefferson, KY; Kanawha, WV; and Bedford, VA. Based on these findings, attention should focus on the spread of Oxycodone, Hydrocodone, Heroin, and Buprenorphine; in particular, drug transmission in the two states of Ohio and Pennsylvania requires awareness. According to the propagation curve predicted by our model, drug spread in KY is still in its early stage, while that in VA and WV is at its midpoint, and that in OH and PA is in its later stages. Consequently, the number of drug addictions in KY, OH, and VA is projected to increase over three years. Methodologically, principal component analysis was used to select 22 socio-economic variables related to the continuous use of opioid drugs, among which the 'Relationship' variables deserve highlighting. Based on these, the K-means algorithm categorized 464 counties into three groups. To combat the opioid crisis, specific actions are discussed in the sensitivity analysis section. The modeling and analysis suggest that innovation is required to control addiction and to advocate anti-drug news campaigns. This section also verifies the effectiveness of the model when $d_1<0.2; r_1,r_2,r_3<0.3; 15<\beta_1,\beta_2,\beta_3<25$; if these bounds are exceeded, the number of drug addictions may surge and peak within a short period.
0707.4318
Boris Veytsman
Boris Veytsman and Leila Akhmadeyeva
Simple Mathematical Model Of Pathologic Microsatellite Expansions: When Self-Reparation Does Not Work
null
J. Theor. Biol., 2006, v. 242, 401--408
10.1016/j.jtbi.2006.03.008
null
q-bio.GN q-bio.CB
null
We propose a simple model of pathologic microsatellite expansion, and describe an inherent self-repairing mechanism working against expansion. We prove that if the probabilities of elementary expansions and contractions are equal, microsatellite expansions are always self-repairing. If these probabilities are different, self-reparation does not work. Mosaicism, anticipation and reverse mutation cases are discussed in the framework of the model. We explain these phenomena and provide some theoretical evidence for their properties, for example the rarity of reverse mutations.
[ { "created": "Sun, 29 Jul 2007 21:31:57 GMT", "version": "v1" } ]
2007-07-31
[ [ "Veytsman", "Boris", "" ], [ "Akhmadeyeva", "Leila", "" ] ]
We propose a simple model of pathologic microsatellite expansion, and describe an inherent self-repairing mechanism working against expansion. We prove that if the probabilities of elementary expansions and contractions are equal, microsatellite expansions are always self-repairing. If these probabilities are different, self-reparation does not work. Mosaicism, anticipation and reverse mutation cases are discussed in the framework of the model. We explain these phenomena and provide some theoretical evidence for their properties, for example the rarity of reverse mutations.
2005.04421
Nobuto Takeuchi
Nobuto Takeuchi, Namiko Mitarai and Kunihiko Kaneko
A scaling law of multilevel evolution: how the balance between within- and among-collective evolution is determined
Accepted in Genetics after a minor revision. Revised based on referee reports. Added results on a binary-trait model. Conclusions do not change
null
null
null
q-bio.PE cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Numerous living systems are hierarchically organised, whereby replicating components are grouped into reproducing collectives -- e.g., organelles are grouped into cells, and cells are grouped into multicellular organisms. In such systems, evolution can operate at two levels: evolution among collectives, which tends to promote selfless cooperation among components within collectives (called altruism), and evolution within collectives, which tends to promote cheating among components within collectives. The balance between within- and among-collective evolution thus exerts profound impacts on the fitness of these systems. Here, we investigate how this balance depends on the size of a collective (denoted by $N$) and the mutation rate of components ($m$) through mathematical analyses and computer simulations of multiple population genetics models. We first confirm a previous result that increasing $N$ or $m$ accelerates within-collective evolution relative to among-collective evolution, thus promoting the evolution of cheating. Moreover, we show that when within- and among-collective evolution exactly balance each other out, the following scaling relation generally holds: $Nm^{\alpha}$ is a constant, where scaling exponent $\alpha$ depends on multiple parameters, such as the strength of selection and whether altruism is a binary or quantitative trait. This relation indicates that although $N$ and $m$ have quantitatively distinct impacts on the balance between within- and among-collective evolution, their impacts become identical if $m$ is scaled with a proper exponent. Our results thus provide a novel insight into conditions under which cheating or altruism evolves in hierarchically-organised replicating systems.
[ { "created": "Sat, 9 May 2020 11:55:44 GMT", "version": "v1" }, { "created": "Mon, 29 Jun 2020 21:49:22 GMT", "version": "v2" }, { "created": "Thu, 23 Sep 2021 11:57:00 GMT", "version": "v3" }, { "created": "Fri, 15 Oct 2021 11:23:49 GMT", "version": "v4" } ]
2021-10-18
[ [ "Takeuchi", "Nobuto", "" ], [ "Mitarai", "Namiko", "" ], [ "Kaneko", "Kunihiko", "" ] ]
Numerous living systems are hierarchically organised, whereby replicating components are grouped into reproducing collectives -- e.g., organelles are grouped into cells, and cells are grouped into multicellular organisms. In such systems, evolution can operate at two levels: evolution among collectives, which tends to promote selfless cooperation among components within collectives (called altruism), and evolution within collectives, which tends to promote cheating among components within collectives. The balance between within- and among-collective evolution thus exerts profound impacts on the fitness of these systems. Here, we investigate how this balance depends on the size of a collective (denoted by $N$) and the mutation rate of components ($m$) through mathematical analyses and computer simulations of multiple population genetics models. We first confirm a previous result that increasing $N$ or $m$ accelerates within-collective evolution relative to among-collective evolution, thus promoting the evolution of cheating. Moreover, we show that when within- and among-collective evolution exactly balance each other out, the following scaling relation generally holds: $Nm^{\alpha}$ is a constant, where scaling exponent $\alpha$ depends on multiple parameters, such as the strength of selection and whether altruism is a binary or quantitative trait. This relation indicates that although $N$ and $m$ have quantitatively distinct impacts on the balance between within- and among-collective evolution, their impacts become identical if $m$ is scaled with a proper exponent. Our results thus provide a novel insight into conditions under which cheating or altruism evolves in hierarchically-organised replicating systems.
1904.00214
Carl Pearson
Carl A. B. Pearson, Kaja M. Abbas, Samuel Clifford, Stefan Flasche, Thomas J. Hladish
Serostatus Testing & Dengue Vaccine Cost-Benefit Thresholds
null
null
10.1098/rsif.2019.0234
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The World Health Organisation currently recommends pre-screening for past infection prior to administration of the only licensed dengue vaccine, CYD-TDV. Using a bounding analysis, we show that despite additional testing costs, this approach can improve the economic viability of CYD-TDV: effective testing reduces unnecessary vaccination costs while increasing the health benefit for vaccine recipients. When testing is cheap enough, those trends outweigh additional screening costs and make test-then-vaccinate strategies net-beneficial in many settings. We derived these results using a general approach for determining price thresholds for testing and vaccination, as well as indicating optimal start and end ages of routine test-then-vaccinate programs. This approach only requires age-specific seroprevalence and a cost estimate for second infections. We demonstrate this approach across settings commonly used to evaluate CYD-TDV economics, and highlight implications of our simple model for more detailed studies. We found trends showing test-then-vaccinate strategies are generally more beneficial starting at younger ages, and that in some settings multiple years of testing can be more beneficial than only testing once, despite increased investment in testing.
[ { "created": "Sat, 30 Mar 2019 12:57:03 GMT", "version": "v1" } ]
2019-08-22
[ [ "Pearson", "Carl A. B.", "" ], [ "Abbas", "Kaja M.", "" ], [ "Clifford", "Samuel", "" ], [ "Flasche", "Stefan", "" ], [ "Hladish", "Thomas J.", "" ] ]
The World Health Organisation currently recommends pre-screening for past infection prior to administration of the only licensed dengue vaccine, CYD-TDV. Using a bounding analysis, we show that despite additional testing costs, this approach can improve the economic viability of CYD-TDV: effective testing reduces unnecessary vaccination costs while increasing the health benefit for vaccine recipients. When testing is cheap enough, those trends outweigh additional screening costs and make test-then-vaccinate strategies net-beneficial in many settings. We derived these results using a general approach for determining price thresholds for testing and vaccination, as well as indicating optimal start and end ages of routine test-then-vaccinate programs. This approach only requires age-specific seroprevalence and a cost estimate for second infections. We demonstrate this approach across settings commonly used to evaluate CYD-TDV economics, and highlight implications of our simple model for more detailed studies. We found trends showing test-then-vaccinate strategies are generally more beneficial starting at younger ages, and that in some settings multiple years of testing can be more beneficial than only testing once, despite increased investment in testing.
1308.1514
Tetsuya Hiraiwa
Tetsuya Hiraiwa, Akihiro Nagamatsu, Naohiro Akuzawa, Masatoshi Nishikawa, and Tatsuo Shibata
Relevance of intracellular polarity to accuracy of eukaryotic chemotaxis
In page 3, typo has been corrected
null
10.1088/1478-3975/11/5/056002
null
q-bio.CB physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chemotactic cells establish cell polarity in the absence of external guidance cues. Such self-organized polarity is induced by spontaneous symmetry breaking in the intracellular activities, which produces an emergent memory effect associated with slow-changing mode. Therefore, spontaneously established polarity should play a pivotal role in efficient chemotaxis. In this study, we develop a model of chemotactic cell migration that demonstrates the connection between intracellular polarity and chemotactic accuracy. Spontaneous polarity formation and gradient sensing are described by a stochastic differential equation. We demonstrate that the direction of polarity persists over a characteristic time that is predicted to depend on the chemoattractant concentration. Next, we theoretically derive the chemotactic accuracy as a function of both the gradient sensing ability and the characteristic time of polarity direction. The results indicate that the accuracy can be improved by the polarity. Furthermore, the analysis of chemotactic accuracy suggests that accuracy is maximized at some optimal responsiveness to extracellular perturbations. To obtain the model parameters, we studied the correlation time of random cell migration in cell tracking analysis of Dictyostelium cells. As predicted, the persistence time depended on the chemoattractant concentration. From the fitted parameters, we inferred that polarized Dictyosteium cells can respond optimally to a chemical gradient. Chemotactic accuracy was almost 10 times larger than can be achieved by non-polarized gradient sensing. Using the obtained parameter values, we show that polarity also improves the dynamic range of chemotaxis.
[ { "created": "Wed, 7 Aug 2013 09:23:47 GMT", "version": "v1" }, { "created": "Fri, 9 Aug 2013 17:22:47 GMT", "version": "v2" } ]
2015-06-16
[ [ "Hiraiwa", "Tetsuya", "" ], [ "Nagamatsu", "Akihiro", "" ], [ "Akuzawa", "Naohiro", "" ], [ "Nishikawa", "Masatoshi", "" ], [ "Shibata", "Tatsuo", "" ] ]
Chemotactic cells establish cell polarity in the absence of external guidance cues. Such self-organized polarity is induced by spontaneous symmetry breaking in the intracellular activities, which produces an emergent memory effect associated with a slow-changing mode. Therefore, spontaneously established polarity should play a pivotal role in efficient chemotaxis. In this study, we develop a model of chemotactic cell migration that demonstrates the connection between intracellular polarity and chemotactic accuracy. Spontaneous polarity formation and gradient sensing are described by a stochastic differential equation. We demonstrate that the direction of polarity persists over a characteristic time that is predicted to depend on the chemoattractant concentration. Next, we theoretically derive the chemotactic accuracy as a function of both the gradient sensing ability and the characteristic time of the polarity direction. The results indicate that the accuracy can be improved by polarity. Furthermore, the analysis of chemotactic accuracy suggests that accuracy is maximized at some optimal responsiveness to extracellular perturbations. To obtain the model parameters, we studied the correlation time of random cell migration in a cell tracking analysis of Dictyostelium cells. As predicted, the persistence time depended on the chemoattractant concentration. From the fitted parameters, we inferred that polarized Dictyostelium cells can respond optimally to a chemical gradient. Chemotactic accuracy was almost 10 times larger than can be achieved by non-polarized gradient sensing. Using the obtained parameter values, we show that polarity also improves the dynamic range of chemotaxis.
1511.08621
Gao-De Li Dr
Gao-De Li
Pfcrmp May Play a Key Role in Chloroquine Antimalarial Action and Resistance Development
5 pages, 1 figure, 2 tables
null
null
null
q-bio.SC q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It was proposed earlier that Pfcrmp (Plasmodium falciparum chloroquine resistance marker protein) may be the chloroquine's target protein in nucleus. In this communication, further evidence is presented to support the view that Pfcrmp may play a key role in chloroquine antimalarial actions as well as resistance development.
[ { "created": "Fri, 27 Nov 2015 11:06:43 GMT", "version": "v1" }, { "created": "Mon, 30 Nov 2015 16:09:09 GMT", "version": "v2" } ]
2015-12-01
[ [ "Li", "Gao-De", "" ] ]
It was proposed earlier that Pfcrmp (Plasmodium falciparum chloroquine resistance marker protein) may be chloroquine's target protein in the nucleus. In this communication, further evidence is presented to support the view that Pfcrmp may play a key role in chloroquine's antimalarial action as well as in resistance development.
1204.3760
Namiko Mitarai
Weihan Li, Sandeep Krishna, Simone Pigolotti, Namiko Mitarai, Mogens H. Jensen
Switching between oscillations and homeostasis in competing negative and positive feedback motifs
8 figures
J. Theor. Biol. 307, 205 (2012)
10.1016/j.jtbi.2012.04.011
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze a class of network motifs in which a short, two-node positive feed- back motif is inserted in a three-node negative feedback loop. We demonstrate that such networks can undergo a bifurcation to a state where a stable fixed point and a stable limit cycle coexist. At the bifurcation point the period of the oscillations diverges. Further, intrinsic noise can make the system switch between oscillatory state and the stationary state spontaneously. We find that this switching also occurs in previous models of circadian clocks that use this combination of positive and negative feedback. Our results suggest that real- life circadian systems may need specific regulation to prevent or minimize such switching events.
[ { "created": "Tue, 17 Apr 2012 11:16:31 GMT", "version": "v1" } ]
2012-09-10
[ [ "Li", "Weihan", "" ], [ "Krishna", "Sandeep", "" ], [ "Pigolotti", "Simone", "" ], [ "Mitarai", "Namiko", "" ], [ "Jensen", "Mogens H.", "" ] ]
We analyze a class of network motifs in which a short, two-node positive feedback motif is inserted in a three-node negative feedback loop. We demonstrate that such networks can undergo a bifurcation to a state where a stable fixed point and a stable limit cycle coexist. At the bifurcation point the period of the oscillations diverges. Further, intrinsic noise can make the system switch between the oscillatory state and the stationary state spontaneously. We find that this switching also occurs in previous models of circadian clocks that use this combination of positive and negative feedback. Our results suggest that real-life circadian systems may need specific regulation to prevent or minimize such switching events.
0704.1390
Azam Gholami
Azam Gholami, Martin Falcke, Erwin Frey
Velocity oscillations in actin-based motility
5 pages, 6 figures
null
10.1088/1367-2630/10/3/033022
HMI 18779, LMU-ASC 18/07
q-bio.CB
null
We present a simple and generic theoretical description of actin-based motility, where polymerization of filaments maintains propulsion. The dynamics is driven by polymerization kinetics at the filaments' free ends, crosslinking of the actin network, attachment and detachment of filaments to the obstacle interfaces and entropic forces. We show that spontaneous oscillations in the velocity emerge in a broad range of parameter values, and compare our findings with experiments.
[ { "created": "Wed, 11 Apr 2007 10:49:47 GMT", "version": "v1" } ]
2015-05-13
[ [ "Gholami", "Azam", "" ], [ "Falcke", "Martin", "" ], [ "Frey", "Erwin", "" ] ]
We present a simple and generic theoretical description of actin-based motility, where polymerization of filaments maintains propulsion. The dynamics is driven by polymerization kinetics at the filaments' free ends, crosslinking of the actin network, attachment and detachment of filaments to the obstacle interfaces and entropic forces. We show that spontaneous oscillations in the velocity emerge in a broad range of parameter values, and compare our findings with experiments.
q-bio/0603035
Rajesh Kavasseri
Rajesh G Kavasseri, Radhakrishnan Nagarajan
Synchronization in Electrically Coupled Neural Networks
null
null
null
null
q-bio.NC
null
In this report, we investigate the synchronization of temporal activity in an electrically coupled neural network model. The electrical coupling is established by homotypic static gap-junctions (Connexin 43). Two distinct network topologies, namely: {\em sparse random network, (SRN)} and {\em fully connected network, (FCN)} are used to establish the connectivity. The strength of connectivity in the FCN is governed by the {\em mean gap junctional conductance} ($\mu$). In the case of the SRN, the overall strength of connectivity is governed by the {\em density of connections} ($\delta$) and the connection strength between two neurons ($S_0$). The synchronization of the network with increasing gap junctional strength and varying population sizes is investigated. It was observed that the network {\em abruptly} makes a transition from a weakly synchronized to a well synchronized regime when ($\delta$) or ($\mu$) exceeds a critical value. It was also observed that the ($\delta$, $\mu$) values used to achieve synchronization decreases with increasing network size.
[ { "created": "Wed, 29 Mar 2006 21:35:42 GMT", "version": "v1" } ]
2007-05-23
[ [ "Kavasseri", "Rajesh G", "" ], [ "Nagarajan", "Radhakrishnan", "" ] ]
In this report, we investigate the synchronization of temporal activity in an electrically coupled neural network model. The electrical coupling is established by homotypic static gap-junctions (Connexin 43). Two distinct network topologies, namely: {\em sparse random network, (SRN)} and {\em fully connected network, (FCN)} are used to establish the connectivity. The strength of connectivity in the FCN is governed by the {\em mean gap junctional conductance} ($\mu$). In the case of the SRN, the overall strength of connectivity is governed by the {\em density of connections} ($\delta$) and the connection strength between two neurons ($S_0$). The synchronization of the network with increasing gap junctional strength and varying population sizes is investigated. It was observed that the network {\em abruptly} makes a transition from a weakly synchronized to a well-synchronized regime when ($\delta$) or ($\mu$) exceeds a critical value. It was also observed that the ($\delta$, $\mu$) values required to achieve synchronization decrease with increasing network size.
2311.10539
Allyson Quinn Ryan
Allyson Quinn Ryan and Carl D. Modes
ToSkA: Topological Skeleton Analysis for Network-Based Shape Representation and Evaluation of Objects from Cells to Death Stars
10 pages, 6 figures
null
null
null
q-bio.QM physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Shape analysis and classification are popular methods for biologists, biophysicists and mathematicians investigating relationships between object function and form. Classic shape descriptors, such as sphericity, can be very powerful; however, when evaluating complex shapes, these descriptors can be insufficient for rigorous assessment. Here, we present "ToSkA: Topological Skeleton Analysis" a method to analyse complex objects by representing their shape asymmetries as networks. Using global neighbourhood principles, classic network science metrics and spatial feature embedding we are able to create unique object profiles for classification. It is also possible to track objects over time and extract significantly different shape features between experiments. Importantly, we have incorporated the capacity to measure absolute spatial features of objects (e.g., branch lengths). This adds additional layers of sensitivity to object classification. Furthermore, because topology is an inherent property of system identity, ToSkA is able to identify segmentation errors that alter object topology by observing the emergence or loss of cycles in network representations. Combined, the analytics of ToSkA presented here allow for the flexibility and in-depth shape profiling necessary for complex objects often observed in biological and physical settings where robust, yet precise, system configuration is essential to downstream processes.
[ { "created": "Fri, 17 Nov 2023 14:06:10 GMT", "version": "v1" }, { "created": "Wed, 6 Dec 2023 13:17:05 GMT", "version": "v2" } ]
2023-12-07
[ [ "Ryan", "Allyson Quinn", "" ], [ "Modes", "Carl D.", "" ] ]
Shape analysis and classification are popular methods for biologists, biophysicists and mathematicians investigating relationships between object function and form. Classic shape descriptors, such as sphericity, can be very powerful; however, when evaluating complex shapes, these descriptors can be insufficient for rigorous assessment. Here, we present "ToSkA: Topological Skeleton Analysis", a method to analyse complex objects by representing their shape asymmetries as networks. Using global neighbourhood principles, classic network science metrics and spatial feature embedding, we are able to create unique object profiles for classification. It is also possible to track objects over time and extract significantly different shape features between experiments. Importantly, we have incorporated the capacity to measure absolute spatial features of objects (e.g., branch lengths). This adds additional layers of sensitivity to object classification. Furthermore, because topology is an inherent property of system identity, ToSkA is able to identify segmentation errors that alter object topology by observing the emergence or loss of cycles in network representations. Combined, the analytics of ToSkA presented here allow for the flexibility and in-depth shape profiling necessary for complex objects often observed in biological and physical settings where robust, yet precise, system configuration is essential to downstream processes.
2407.18237
Korabel
Anna Gavrilova, Nickolay Korabel, Victoria J. Allan, and Sergei Fedotov
Heterogeneous model for superdiffusive movement of dense-core vesicles in C. elegans
9 pages, 5 figures
null
null
null
q-bio.SC
http://creativecommons.org/licenses/by/4.0/
Transport of dense core vesicles (DCVs) in neurons is crucial for distributing molecules like neuropeptides and growth factors. We studied the experimental trajectories of dynein-driven directed movement of DCVs in the ALA neuron C. elegans over a duration of up to 6 seconds. We analysed the DCV movement in three strains of C. elegans: 1) with normal kinesin-1 function, 2) with reduced function in kinesin light chain 2 (KLC-2), and 3) a null mutation in kinesin light chain 1 (KLC-1). We find that DCVs move superdiffusively with displacement variance $var(x) \sim t^2$ in all three strains with low reversal rates and frequent immobilization of DCVs. The distribution of DCV displacements fits a beta-binomial distribution with the mean and the variance following linear and quadratic growth patterns, respectively. We propose a simple heterogeneous random walk model to explain the observed superdiffusive retrograde transport behaviour of DCV movement. This model involves a random probability with the beta density for a DCV to resume its movement or remain in the same position.
[ { "created": "Thu, 25 Jul 2024 17:52:43 GMT", "version": "v1" } ]
2024-07-26
[ [ "Gavrilova", "Anna", "" ], [ "Korabel", "Nickolay", "" ], [ "Allan", "Victoria J.", "" ], [ "Fedotov", "Sergei", "" ] ]
Transport of dense core vesicles (DCVs) in neurons is crucial for distributing molecules like neuropeptides and growth factors. We studied the experimental trajectories of dynein-driven directed movement of DCVs in the ALA neuron of C. elegans over a duration of up to 6 seconds. We analysed the DCV movement in three strains of C. elegans: 1) with normal kinesin-1 function, 2) with reduced function in kinesin light chain 2 (KLC-2), and 3) with a null mutation in kinesin light chain 1 (KLC-1). We find that DCVs move superdiffusively with displacement variance $var(x) \sim t^2$ in all three strains, with low reversal rates and frequent immobilization of DCVs. The distribution of DCV displacements fits a beta-binomial distribution, with the mean and the variance following linear and quadratic growth patterns, respectively. We propose a simple heterogeneous random walk model to explain the observed superdiffusive retrograde transport of DCVs. In this model, a DCV resumes its movement or remains in the same position with a random probability drawn from a beta density.
1505.04613
Daniele De Martino
Daniele De Martino
Genome-scale estimate of the metabolic turnover of E. Coli from the energy balance analysis
10 pages, 5 figures, 1 table
null
10.1088/1478-3975/13/1/016003
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article the notion of metabolic turnover is revisited in the light of recent results in out-of-equilibrium thermodynamics. By means of Monte Carlo methods we perform an exact uniform sampling of the steady-state fluxes in a genome-scale metabolic network of E. coli, from which we infer the metabolites' turnover times. However, the latter are inferred from net fluxes, and we argue that this approximation is not valid for enzymes working near thermodynamic equilibrium. We recalculate turnover times from total fluxes by performing an energy balance analysis of the network and resorting to the fluctuation theorem. We find in many cases values one order of magnitude lower, implying a faster picture of intermediate metabolism.
[ { "created": "Mon, 18 May 2015 12:44:48 GMT", "version": "v1" } ]
2016-02-17
[ [ "De Martino", "Daniele", "" ] ]
In this article the notion of metabolic turnover is revisited in the light of recent results in out-of-equilibrium thermodynamics. By means of Monte Carlo methods we perform an exact uniform sampling of the steady-state fluxes in a genome-scale metabolic network of E. coli, from which we infer the metabolites' turnover times. However, the latter are inferred from net fluxes, and we argue that this approximation is not valid for enzymes working near thermodynamic equilibrium. We recalculate turnover times from total fluxes by performing an energy balance analysis of the network and resorting to the fluctuation theorem. We find in many cases values one order of magnitude lower, implying a faster picture of intermediate metabolism.
0712.0545
Akihiko Nakajima
Akihiko Nakajima, Kunihiko Kaneko
Regulative Differentiation as Bifurcation of Interacting Cell Population
27 pages, 9 figures
null
null
null
q-bio.CB
null
In multicellular organisms, several cell states coexist. For determining each cell type, cell-cell interactions are often essential, in addition to intracellular gene expression dynamics. Based on dynamical systems theory, we propose a mechanism for cell differentiation with regulation of the populations of each cell type, using simple cell models with gene expression dynamics. By incorporating several interaction kinetics, we found that cell models with a single intracellular positive-feedback loop exhibit cell fate switching with a change in the total number of cells. The number of a given cell type or the population ratio of each cell type is preserved against the change in the total number of cells, depending on the form of cell-cell interaction. The differentiation is a result of bifurcation of cell states via the intercellular interactions, while the population regulation is explained by self-consistent determination of the bifurcation parameter through cell-cell interactions. The relevance of this mechanism to development and differentiation in several multicellular systems is discussed.
[ { "created": "Tue, 4 Dec 2007 15:16:59 GMT", "version": "v1" } ]
2007-12-05
[ [ "Nakajima", "Akihiko", "" ], [ "Kaneko", "Kunihiko", "" ] ]
In multicellular organisms, several cell states coexist. For determining each cell type, cell-cell interactions are often essential, in addition to intracellular gene expression dynamics. Based on dynamical systems theory, we propose a mechanism for cell differentiation with regulation of the populations of each cell type, using simple cell models with gene expression dynamics. By incorporating several interaction kinetics, we found that cell models with a single intracellular positive-feedback loop exhibit cell fate switching with a change in the total number of cells. The number of a given cell type or the population ratio of each cell type is preserved against the change in the total number of cells, depending on the form of cell-cell interaction. The differentiation is a result of bifurcation of cell states via the intercellular interactions, while the population regulation is explained by self-consistent determination of the bifurcation parameter through cell-cell interactions. The relevance of this mechanism to development and differentiation in several multicellular systems is discussed.
1812.00487
Mark Alber
Mikahl Banwarth-Kuhn, Ali Nematbakhsh, Kevin W. Rodriguez, Stephen Snipes, Carolyn G. Rasmussen, G. Venugopala Reddy, Mark Alber
Cell-based model of the generation and maintenance of the shape and structure of the multi-layered shoot apical meristem of Arabidopsis thaliana
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the central problems in animal and plant developmental biology is deciphering how chemical and mechanical signals interact within a tissue to produce organs of defined size, shape and function. Cell walls in plants impose a unique constraint on cell expansion since cells are under turgor pressure and do not move relative to one another. Cell wall extensibility and constantly changing distribution of stress on the wall are mechanical properties that vary between individual cells and contribute to rates of expansion and orientation of cell division. How exactly cell wall mechanical properties influence cell behavior is still largely unknown. To address this problem, a novel, subcellular element computational model of growth of stem cells within the multilayered shoot apical meristem (SAM) of Arabidopsis thaliana is developed and calibrated using experimental data. Novel features of the model include separate, detailed descriptions of cell wall extensibility and mechanical stiffness, deformation of the middle lamella and increase in cytoplasmic pressure generating internal turgor pressure. The model is used to test novel hypothesized mechanisms of formation of the shape and structure of the growing, multilayered SAM based on WUS concentration of individual cells controlling cell growth rates and layer dependent anisotropic mechanical properties of subcellular components of individual cells determining anisotropic cell expansion directions. Model simulations also provide a detailed prediction of distribution of stresses in the growing tissue which can be tested in future experiments.
[ { "created": "Sun, 2 Dec 2018 23:15:45 GMT", "version": "v1" } ]
2018-12-04
[ [ "Banwarth-Kuhn", "Mikahl", "" ], [ "Nematbakhsh", "Ali", "" ], [ "Rodriguez", "Kevin W.", "" ], [ "Snipes", "Stephen", "" ], [ "Rasmussen", "Carolyn G.", "" ], [ "Reddy", "G. Venugopala", "" ], [ "Alber", "Mark...
One of the central problems in animal and plant developmental biology is deciphering how chemical and mechanical signals interact within a tissue to produce organs of defined size, shape and function. Cell walls in plants impose a unique constraint on cell expansion since cells are under turgor pressure and do not move relative to one another. Cell wall extensibility and constantly changing distribution of stress on the wall are mechanical properties that vary between individual cells and contribute to rates of expansion and orientation of cell division. How exactly cell wall mechanical properties influence cell behavior is still largely unknown. To address this problem, a novel, subcellular element computational model of growth of stem cells within the multilayered shoot apical meristem (SAM) of Arabidopsis thaliana is developed and calibrated using experimental data. Novel features of the model include separate, detailed descriptions of cell wall extensibility and mechanical stiffness, deformation of the middle lamella and increase in cytoplasmic pressure generating internal turgor pressure. The model is used to test novel hypothesized mechanisms of formation of the shape and structure of the growing, multilayered SAM based on WUS concentration of individual cells controlling cell growth rates and layer dependent anisotropic mechanical properties of subcellular components of individual cells determining anisotropic cell expansion directions. Model simulations also provide a detailed prediction of distribution of stresses in the growing tissue which can be tested in future experiments.
1309.2614
Dmitry Krotov
Dmitry Krotov, Julien O. Dubuis, Thomas Gregor, and William Bialek
Morphogenesis at criticality?
null
null
10.1073/pnas.1324186111
null
q-bio.MN cond-mat.dis-nn cond-mat.stat-mech nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spatial patterns in the early fruit fly embryo emerge from a network of interactions among transcription factors, the gap genes, driven by maternal inputs. Such networks can exhibit many qualitatively different behaviors, separated by critical surfaces. At criticality, we should observe strong correlations in the fluctuations of different genes around their mean expression levels, a slowing of the dynamics along some but not all directions in the space of possible expression levels, correlations of expression fluctuations over long distances in the embryo, and departures from a Gaussian distribution of these fluctuations. Analysis of recent experiments on the gap genes shows that all these signatures are observed, and that the different signatures are related in ways predicted by theory. While there might be other explanations for these individual phenomena, the confluence of evidence suggests that this genetic network is tuned to criticality.
[ { "created": "Tue, 10 Sep 2013 19:08:29 GMT", "version": "v1" } ]
2015-06-17
[ [ "Krotov", "Dmitry", "" ], [ "Dubuis", "Julien O.", "" ], [ "Gregor", "Thomas", "" ], [ "Bialek", "William", "" ] ]
Spatial patterns in the early fruit fly embryo emerge from a network of interactions among transcription factors, the gap genes, driven by maternal inputs. Such networks can exhibit many qualitatively different behaviors, separated by critical surfaces. At criticality, we should observe strong correlations in the fluctuations of different genes around their mean expression levels, a slowing of the dynamics along some but not all directions in the space of possible expression levels, correlations of expression fluctuations over long distances in the embryo, and departures from a Gaussian distribution of these fluctuations. Analysis of recent experiments on the gap genes shows that all these signatures are observed, and that the different signatures are related in ways predicted by theory. While there might be other explanations for these individual phenomena, the confluence of evidence suggests that this genetic network is tuned to criticality.
2108.05925
Mbarga Manga Joseph Arsene
Mbarga Manga Joseph Arsene
Synergy Test for Antibacterial Activity: Towards the Research for a Consensus between the Fractional Inhibitory Concentration (Checkboard Method) and the Increase in Fold Area (Disc Diffusion Method)
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Antibiotic resistance is a topical problem for both humans and animals and has been the subject of special monitoring for two decades. Several recent studies, including ours, have shown that this phenomenon is accentuated by the transfer of certain genetic elements and the acquisition of cross-resistance, the latter or both resulting from the misuse of these antimicrobial drugs. A more careful use of antimicrobials and the search for new antibacterial compounds (including probiotics and phages) are the most recommended alternatives to overcome this situation. However, tests for modulation of antimicrobial activity can also play a major role. The main goal of synergy studies is to assess whether substances with antibacterial properties can improve the effectiveness of existing antimicrobials or give them a second life against resistant germs. Moreover, recent studies have demonstrated the ability of silver nanoparticles and extracts of certain plants to boost the effectiveness of certain antibiotics, such as ampicillin, benzylpenicillin, cefazolin, ciprofloxacin, nitrofurantoin, and kanamycin. Yet, from these studies we found that there was a serious problem with the interpretation of the results when using the disc method with determination of the increase in fold area. Indeed, the use of this method first requires the determination of the diameter of inhibition of the antibiotic alone, followed by the determination of the combination of antibiotic + modulating substance (MS) or extract.....
[ { "created": "Thu, 12 Aug 2021 18:58:42 GMT", "version": "v1" } ]
2021-08-16
[ [ "Arsene", "Mbarga Manga Joseph", "" ] ]
Antibiotic resistance is a topical problem for both humans and animals and has been the subject of special monitoring for two decades. Several recent studies, including ours, have shown that this phenomenon is accentuated by the transfer of certain genetic elements and the acquisition of cross-resistance, the latter or both resulting from the misuse of these antimicrobial drugs. A more careful use of antimicrobials and the search for new antibacterial compounds (including probiotics and phages) are the most recommended alternatives to overcome this situation. However, tests for modulation of antimicrobial activity can also play a major role. The main goal of synergy studies is to assess whether substances with antibacterial properties can improve the effectiveness of existing antimicrobials or give them a second life against resistant germs. Moreover, recent studies have demonstrated the ability of silver nanoparticles and extracts of certain plants to boost the effectiveness of certain antibiotics, such as ampicillin, benzylpenicillin, cefazolin, ciprofloxacin, nitrofurantoin, and kanamycin. Yet, from these studies we found that there was a serious problem with the interpretation of the results when using the disc method with determination of the increase in fold area. Indeed, the use of this method first requires the determination of the diameter of inhibition of the antibiotic alone, followed by the determination of the combination of antibiotic + modulating substance (MS) or extract.....
1403.6222
Nen Saito
Nen Saito and Kunihiko Kaneko
Theoretical Analysis of Discreteness-Induced Transition in Autocatalytic Reaction Dynamics
7 pages, 4 figures
Physical Review E 91, 022707 (2015)
10.1103/PhysRevE.91.022707
null
q-bio.MN physics.bio-ph physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transitions in the qualitative behavior of chemical reaction dynamics with a decrease in molecule number have attracted much attention. Here, a method based on a Markov process with a tridiagonal transition matrix is applied to the analysis of this transition in reaction dynamics. The transition to bistability due to the small-number effect and the mean switching time between the bistable states are analytically calculated in agreement with numerical simulations. In addition, a novel transition involving the reversal of the chemical reaction flow is found in the model under an external flow, and also in a three-component model. The generality of this transition and its correspondence to biological phenomena are also discussed.
[ { "created": "Tue, 25 Mar 2014 03:15:20 GMT", "version": "v1" }, { "created": "Tue, 24 Feb 2015 06:43:54 GMT", "version": "v2" } ]
2015-02-25
[ [ "Saito", "Nen", "" ], [ "Kaneko", "Kunihiko", "" ] ]
Transitions in the qualitative behavior of chemical reaction dynamics with a decrease in molecule number have attracted much attention. Here, a method based on a Markov process with a tridiagonal transition matrix is applied to the analysis of this transition in reaction dynamics. The transition to bistability due to the small-number effect and the mean switching time between the bistable states are analytically calculated in agreement with numerical simulations. In addition, a novel transition involving the reversal of the chemical reaction flow is found in the model under an external flow, and also in a three-component model. The generality of this transition and its correspondence to biological phenomena are also discussed.
2003.00751
Jianfeng Pei
Xiaobing Deng, Xiaoyu Yu and Jianfeng Pei
Regulation of interferon production as a potential strategy for COVID-19 treatment
9 pages, 3 figure
null
null
null
q-bio.MN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Regulating the upstream of cytokine production could be a promising strategy for the treatment of COVID-19. We suggest paying more attention to the dysregulated IFN-I production in COVID-19 and considering cGAS, ALK and STING as potential therapeutic targets for preventing cytokine storm. Approved drugs like suramin and ALK inhibitors are worthy of clinical trials.
[ { "created": "Mon, 2 Mar 2020 10:36:33 GMT", "version": "v1" } ]
2020-03-03
[ [ "Deng", "Xiaobing", "" ], [ "Yu", "Xiaoyu", "" ], [ "Pei", "Jianfeng", "" ] ]
Regulating the upstream of cytokine production could be a promising strategy for the treatment of COVID-19. We suggest paying more attention to the dysregulated IFN-I production in COVID-19 and considering cGAS, ALK and STING as potential therapeutic targets for preventing cytokine storm. Approved drugs like suramin and ALK inhibitors are worthy of clinical trials.
1511.00983
Sergio Gabriel Quesada Acuna
Bernal Morera-Brenes and Juli\'an Monge-N\'ajera
A new giant species of placented worm and the mechanism by which onychophorans weave their nets (Onychophora: Peripatidae)
16 pages, 18 figures
Rev. Biol. Trop. (Int. J. Trop. Biol. ISSN-0034-7744) Vol. 58 (4): 1127-1142, December 2010
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Onychophorans, or velvet worms, are poorly known and rare animals. Here we report the discovery of a new species that is also the largest onychophoran found so far, a 22 cm long female from the Caribbean coastal forest of Costa Rica. Specimens were examined with scanning electron microscopy. Peripatus solorzanoi sp. nov. is diagnosed as follows: primary papillae convex and conical with rounded bases, with more than 18 scale ranks. Apical section large, spherical, with a basal diameter of at least 20 ranks. Apical piece with 6-7 scale ranks. Outer blade with 1 principal tooth, 1 accessory tooth, 1 vestigial accessory tooth (formula: 1/1/1); inner blade with 1 principal tooth, 1 accessory tooth, 1 rudimentary accessory tooth, 9 to 10 denticles (formula: 1/1/1/9-10). Accessory tooth blunt in both blades. Four pads on the fourth and fifth oncopods; 4th pad arched. The previously unknown mechanism by which onychophorans weave their adhesive nets is simple: muscular action produces a swinging movement of the adhesive-expelling organs; as a result, the streams cross in midair, weaving the net. Like all onychophorans, P. solorzanoi is a rare species: active protection of the habitat of the largest onychophoran ever described is considered urgent.
[ { "created": "Tue, 3 Nov 2015 16:58:53 GMT", "version": "v1" } ]
2015-11-04
[ [ "Morera-Brenes", "Bernal", "" ], [ "Monge-Nájera", "Julián", "" ] ]
Onychophorans, or velvet worms, are poorly known and rare animals. Here we report the discovery of a new species that is also the largest onychophoran found so far, a 22 cm long female from the Caribbean coastal forest of Costa Rica. Specimens were examined with scanning electron microscopy. Peripatus solorzanoi sp. nov. is diagnosed as follows: primary papillae convex and conical with rounded bases, with more than 18 scale ranks. Apical section large, spherical, with a basal diameter of at least 20 ranks. Apical piece with 6-7 scale ranks. Outer blade with 1 principal tooth, 1 accessory tooth, 1 vestigial accessory tooth (formula: 1/1/1); inner blade with 1 principal tooth, 1 accessory tooth, 1 rudimentary accessory tooth, 9 to 10 denticles (formula: 1/1/1/9-10). Accessory tooth blunt in both blades. Four pads on the fourth and fifth oncopods; 4th pad arched. The previously unknown mechanism by which onychophorans weave their adhesive nets is simple: muscular action produces a swinging movement of the adhesive-expelling organs; as a result, the streams cross in midair, weaving the net. Like all onychophorans, P. solorzanoi is a rare species: active protection of the habitat of the largest onychophoran ever described is considered urgent.
1910.01949
Julius Kirkegaard
Julius B. Kirkegaard, Bjarke F. Nielsen, Ala Trusina, and Kim Sneppen
Self-assembly, Buckling & Density-Invariant Growth of Three-dimensional Vascular Networks
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The experimental actualisation of organoids modelling organs from brains to pancreases has revealed that much of the diverse morphologies of organs are emergent properties of simple intercellular "rules" and not the result of top-down orchestration. In contrast to other organs, the initial plexus of the vascular system is formed by aggregation of cells in the process known as vasculogenesis. Here we study this self-assembling process of blood vessels in three dimensions through a set of simple rules that align intercellular apical-basal and planar cell polarity. We demonstrate that a fully connected network of tubes emerges above a critical initial density of cells. Through planar cell polarity our model demonstrates convergent extension, and this polarity furthermore allows for both morphology-maintaining growth and growth-induced buckling. We compare this buckling to the special vasculature of Islets of Langerhans in the pancreas and suggest that the mechanism behind the vascular density-maintaining growth of these islets could be the result of growth-induced buckling.
[ { "created": "Fri, 4 Oct 2019 13:40:24 GMT", "version": "v1" } ]
2019-10-07
[ [ "Kirkegaard", "Julius B.", "" ], [ "Nielsen", "Bjarke F.", "" ], [ "Trusina", "Ala", "" ], [ "Sneppen", "Kim", "" ] ]
The experimental actualisation of organoids modelling organs from brains to pancreases has revealed that much of the diverse morphologies of organs are emergent properties of simple intercellular "rules" and not the result of top-down orchestration. In contrast to other organs, the initial plexus of the vascular system is formed by aggregation of cells in the process known as vasculogenesis. Here we study this self-assembling process of blood vessels in three dimensions through a set of simple rules that align intercellular apical-basal and planar cell polarity. We demonstrate that a fully connected network of tubes emerges above a critical initial density of cells. Through planar cell polarity our model demonstrates convergent extension, and this polarity furthermore allows for both morphology-maintaining growth and growth-induced buckling. We compare this buckling to the special vasculature of Islets of Langerhans in the pancreas and suggest that the mechanism behind the vascular density-maintaining growth of these islets could be the result of growth-induced buckling.
2401.07730
Mathijs Mabesoone
Stefan Leopold-Messer, Clara Chepkirui, Mathijs F.J. Mabesoone, Joshua Mayer, Lucas Paoli, Shinichi Sunagawa, Agustinus R. Uria, Toshiyuki Wakimoto, J\"orn Piel
Animal-associated marine Acidobacteria with a rich natural product repertoire
null
Chem, 9 (12), 2023, pp. 3696-3713
10.1016/j.chempr.2023.11.003
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Sponges have long been recognized as a rich source of bioactive natural products. Various studies suggest that many of these compounds are produced by symbiotic bacteria. However, substance supplies and functional insights about the producers remain limited because cultivation has so far been unsuccessful. To identify alternative, sustainable sources of sponge-derived polyketides, we computationally analyzed 5289 characterized and orphan trans-acyltransferase polyketide synthases, enzymes with widespread roles in polyketide biosynthesis by bacterial symbionts. The analytical workflow predicted marine animal-derived Acidobacteria of the family Acanthopleuribacteraceae, with large sets of biosynthetic gene clusters, to be enriched in sponge-type chemistry. Targeted compound isolation from a chiton-associated strain yielded new congeners of the phorboxazoles and calyculins, potent and scarce cytotoxins exclusively known from sponges. These first natural products of Acidobacteria, together with new coral metagenomic data on a third family member, suggest animal-associated Acanthopleuribacteraceae as a rich source of sponge-type as well as novel metabolites.
[ { "created": "Mon, 15 Jan 2024 14:43:44 GMT", "version": "v1" }, { "created": "Wed, 17 Jan 2024 07:57:49 GMT", "version": "v2" } ]
2024-01-18
[ [ "Leopold-Messer", "Stefan", "" ], [ "Chepkirui", "Clara", "" ], [ "Mabesoone", "Mathijs F. J.", "" ], [ "Mayer", "Joshua", "" ], [ "Paoli", "Lucas", "" ], [ "Sunagawa", "Shinichi", "" ], [ "Uria", "Agustinus R....
Sponges have long been recognized as a rich source of bioactive natural products. Various studies suggest that many of these compounds are produced by symbiotic bacteria. However, substance supplies and functional insights about the producers remain limited because cultivation has so far been unsuccessful. To identify alternative, sustainable sources of sponge-derived polyketides, we computationally analyzed 5289 characterized and orphan trans-acyltransferase polyketide synthases, enzymes with widespread roles in polyketide biosynthesis by bacterial symbionts. The analytical workflow predicted marine animal-derived Acidobacteria of the family Acanthopleuribacteraceae, with large sets of biosynthetic gene clusters, to be enriched in sponge-type chemistry. Targeted compound isolation from a chiton-associated strain yielded new congeners of the phorboxazoles and calyculins, potent and scarce cytotoxins exclusively known from sponges. These first natural products of Acidobacteria, together with new coral metagenomic data on a third family member, suggest animal-associated Acanthopleuribacteraceae as a rich source of sponge-type as well as novel metabolites.
2110.00315
Ga\"el Bardon
Ga\"el Bardon and Fr\'ed\'eric Barraquand
Effects of stage structure on coexistence: mixed benefits
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The properties of competition models where all individuals are identical are relatively well-understood; however, juveniles and adults can experience or generate competition differently. We study here less well-known structured competition models in discrete time that allow multiple life history parameters to depend on adult or juvenile population densities. A numerical study with Ricker density-dependence suggested that when competition coefficients acting on juvenile survival and fertility reflect opposite competitive hierarchies, stage structure could foster coexistence. We revisit and expand those results. First, through a Beverton-Holt two-species juvenile-adult model, we confirm that these findings do not depend on the specifics of density-dependence or life cycles, and obtain analytical expressions explaining how this coexistence emerging from stage structure can occur. Second, we show using a community-level sensitivity analysis that such emergent coexistence is robust to perturbations of parameter values. Finally, we ask whether these results extend from two to many species, using simulations. We show that they do not, as coexistence emerging from stage structure is only seen for very similar life-history parameters. Such emergent coexistence is therefore not likely to be a key mechanism of coexistence in very diverse ecosystems, although it may contribute to explaining coexistence of certain pairs of intensely competing species.
[ { "created": "Fri, 1 Oct 2021 11:10:30 GMT", "version": "v1" }, { "created": "Thu, 9 Jun 2022 13:54:22 GMT", "version": "v2" }, { "created": "Tue, 14 Feb 2023 10:13:39 GMT", "version": "v3" }, { "created": "Tue, 21 Mar 2023 09:31:01 GMT", "version": "v4" } ]
2023-03-22
[ [ "Bardon", "Gaël", "" ], [ "Barraquand", "Frédéric", "" ] ]
The properties of competition models where all individuals are identical are relatively well-understood; however, juveniles and adults can experience or generate competition differently. We study here less well-known structured competition models in discrete time that allow multiple life history parameters to depend on adult or juvenile population densities. A numerical study with Ricker density-dependence suggested that when competition coefficients acting on juvenile survival and fertility reflect opposite competitive hierarchies, stage structure could foster coexistence. We revisit and expand those results. First, through a Beverton-Holt two-species juvenile-adult model, we confirm that these findings do not depend on the specifics of density-dependence or life cycles, and obtain analytical expressions explaining how this coexistence emerging from stage structure can occur. Second, we show using a community-level sensitivity analysis that such emergent coexistence is robust to perturbations of parameter values. Finally, we ask whether these results extend from two to many species, using simulations. We show that they do not, as coexistence emerging from stage structure is only seen for very similar life-history parameters. Such emergent coexistence is therefore not likely to be a key mechanism of coexistence in very diverse ecosystems, although it may contribute to explaining coexistence of certain pairs of intensely competing species.
1804.07720
Richard A Neher
Richard A. Neher and Aleksandra M. Walczak
Progress and open problems in evolutionary dynamics
Rapporteur report for the proceedings of the 2017 Solvay Conference
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolution has fascinated quantitative and physical scientists for decades: how can the random process of mutation, recombination, and duplication of genetic information generate the diversity of life? What determines the rate of evolution? Are there quantitative laws that govern and constrain evolution? Is evolution repeatable or predictable? Historically, the study of evolution involved classifying and comparing species, typically based on morphology. In addition to phenotypes on the organismal and molecular scales, we now use whole-genome sequencing not only to uncover the differences between species but also to characterize genetic diversity within species in unprecedented detail. This diversity can be compared to predictions of quantitative models of evolutionary dynamics. Here, we review key theoretical models of population genetics and evolution along with examples of data from lab evolution experiments, longitudinal sampling of viral populations, microbial communities and studies of immune repertoires. In all these systems, evolution is shaped by often variable biological and physical environments. While these variable environments can be modeled implicitly in cases such as host-pathogen co-evolution, the dynamic environment and emerging ecology often cannot be ignored. Integrating dynamics on different scales, both in terms of observation and theoretical models, remains a major challenge towards a better understanding of evolution.
[ { "created": "Fri, 20 Apr 2018 16:55:05 GMT", "version": "v1" } ]
2018-04-23
[ [ "Neher", "Richard A.", "" ], [ "Walczak", "Aleksandra M.", "" ] ]
Evolution has fascinated quantitative and physical scientists for decades: how can the random process of mutation, recombination, and duplication of genetic information generate the diversity of life? What determines the rate of evolution? Are there quantitative laws that govern and constrain evolution? Is evolution repeatable or predictable? Historically, the study of evolution involved classifying and comparing species, typically based on morphology. In addition to phenotypes on the organismal and molecular scales, we now use whole-genome sequencing to uncover not only the differences between species but also to characterize genetic diversity within-species in unprecedented detail. This diversity can be compared to predictions of quantitative models of evolutionary dynamics. Here, we review key theoretical models of population genetics and evolution along with examples of data from lab evolution experiments, longitudinal sampling of viral populations, microbial communities and the studies of immune repertoires. In all these systems, evolution is shaped by often variable biological and physical environments. While these variable environments can be modeled implicitly in cases such as host-pathogen co-evolution, the dynamic environment, and emerging ecology often cannot be ignored. Integrating dynamics on different scales, both in terms of observation and theoretical models, remains a major challenge towards a better understanding of evolution.
1608.08174
Christopher Lester
Christopher Lester, Christian A. Yates, Ruth E. Baker
Efficient parameter sensitivity computation for spatially-extended reaction networks
35 pages
null
10.1063/1.4973219
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reaction-diffusion models are widely used to study spatially-extended chemical reaction systems. In order to understand how the dynamics of a reaction-diffusion model are affected by changes in its input parameters, efficient methods for computing parametric sensitivities are required. In this work, we focus on stochastic models of spatially-extended chemical reaction systems that involve partitioning the computational domain into voxels. Parametric sensitivities are often calculated using Monte Carlo techniques that are typically computationally expensive; however, variance reduction techniques can decrease the number of Monte Carlo simulations required. By exploiting the characteristic dynamics of spatially-extended reaction networks, we are able to adapt existing finite difference schemes to robustly estimate parametric sensitivities in a spatially-extended network. We show that algorithmic performance depends on the dynamics of the given network and the choice of summary statistics. We then describe a hybrid technique that dynamically chooses the most appropriate simulation method for the network of interest. Our method is tested for functionality and accuracy in a range of different scenarios.
[ { "created": "Mon, 29 Aug 2016 18:39:25 GMT", "version": "v1" }, { "created": "Mon, 5 Sep 2016 00:36:22 GMT", "version": "v2" } ]
2017-03-08
[ [ "Lester", "Christopher", "" ], [ "Yates", "Christian A.", "" ], [ "Baker", "Ruth E.", "" ] ]
Reaction-diffusion models are widely used to study spatially-extended chemical reaction systems. In order to understand how the dynamics of a reaction-diffusion model are affected by changes in its input parameters, efficient methods for computing parametric sensitivities are required. In this work, we focus on stochastic models of spatially-extended chemical reaction systems that involve partitioning the computational domain into voxels. Parametric sensitivities are often calculated using Monte Carlo techniques that are typically computationally expensive; however, variance reduction techniques can decrease the number of Monte Carlo simulations required. By exploiting the characteristic dynamics of spatially-extended reaction networks, we are able to adapt existing finite difference schemes to robustly estimate parametric sensitivities in a spatially-extended network. We show that algorithmic performance depends on the dynamics of the given network and the choice of summary statistics. We then describe a hybrid technique that dynamically chooses the most appropriate simulation method for the network of interest. Our method is tested for functionality and accuracy in a range of different scenarios.
2003.09875
Luigi Brugnano
Luigi Brugnano, Felice Iavernaro
A multi-region variant of the SIR model and its extensions
7 pages (1 typo fixed)
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this note, we describe simple generalizations of the basic SIR epidemic model to a multi-region scenario, to be used for predicting the COVID-19 epidemic spread in Italy.
[ { "created": "Sun, 22 Mar 2020 12:38:22 GMT", "version": "v1" }, { "created": "Tue, 31 Mar 2020 15:55:16 GMT", "version": "v2" }, { "created": "Sat, 18 Apr 2020 09:25:52 GMT", "version": "v3" }, { "created": "Wed, 13 May 2020 10:13:20 GMT", "version": "v4" } ]
2020-05-14
[ [ "Brugnano", "Luigi", "" ], [ "Iavernaro", "Felice", "" ] ]
In this note, we describe simple generalizations of the basic SIR epidemic model to a multi-region scenario, to be used for predicting the COVID-19 epidemic spread in Italy.
2008.08625
Martin Frasch
Aude Castel, Patrick M. Burns, Javier Benito, Hai L. Liu, Shikha Kuthiala, Lucien D. Durosier, Yael S. Frank, Mingju Cao, Maril\`ene Paquet, Gilles Fecteau, Andr\'e Desrochers, Martin G. Frasch
Recording and manipulation of vagus nerve electrical activity in chronically instrumented unanesthetized near term fetal sheep
Accompanying data repository: https://doi.org/10.6084/m9.figshare.7228307.v2
Journal of Neuroscience Methods, 2021
10.1016/j.jneumeth.2021.109257
null
q-bio.QM q-bio.TO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Background: The chronically instrumented pregnant sheep has been used as a model of human fetal development and responses to pathophysiologic stimuli. This is due to the unique amenability of the unanesthetized fetal sheep to the surgical placement and maintenance of catheters and electrodes, allowing repetitive blood sampling, substance injection, recording of bioelectrical activity, application of electric stimulation and in vivo organ imaging. Recently, there has been growing interest in pleiotropic effects of vagus nerve stimulation (VNS) on various organ systems such as innate immunity, metabolism, and appetite control. There is no approach to study this in utero and corresponding physiological understanding is scarce. New Method: Based on our previous presentation of a stable chronically instrumented unanesthetized fetal sheep model, here we describe the surgical instrumentation procedure allowing successful implantation of a cervical uni- or bilateral VNS probe with or without vagotomy. Results: In a cohort of 53 animals, we present the changes in blood gas, metabolic, and inflammatory markers during the postoperative period. We detail the design of a VNS probe which also allows recording from the nerve. We also present an example of a vagus electroneurogram (VENG) recorded from the VNS probe and an analytical approach to the data. Comparison with Existing Methods: This method represents the first implementation of VENG/VNS in a large pregnant mammalian organism. Conclusions: This study describes a new surgical procedure allowing chronic recording and manipulation of vagus nerve activity in an animal model of human pregnancy.
[ { "created": "Wed, 19 Aug 2020 18:39:54 GMT", "version": "v1" } ]
2021-06-16
[ [ "Castel", "Aude", "" ], [ "Burns", "Patrick M.", "" ], [ "Benito", "Javier", "" ], [ "Liu", "Hai L.", "" ], [ "Kuthiala", "Shikha", "" ], [ "Durosier", "Lucien D.", "" ], [ "Frank", "Yael S.", "" ], [ ...
Background: The chronically instrumented pregnant sheep has been used as a model of human fetal development and responses to pathophysiologic stimuli. This is due to the unique amenability of the unanesthetized fetal sheep to the surgical placement and maintenance of catheters and electrodes, allowing repetitive blood sampling, substance injection, recording of bioelectrical activity, application of electric stimulation and in vivo organ imaging. Recently, there has been growing interest in pleiotropic effects of vagus nerve stimulation (VNS) on various organ systems such as innate immunity, metabolism, and appetite control. There is no approach to study this in utero and corresponding physiological understanding is scarce. New Method: Based on our previous presentation of a stable chronically instrumented unanesthetized fetal sheep model, here we describe the surgical instrumentation procedure allowing successful implantation of a cervical uni- or bilateral VNS probe with or without vagotomy. Results: In a cohort of 53 animals, we present the changes in blood gas, metabolic, and inflammatory markers during the postoperative period. We detail the design of a VNS probe which also allows recording from the nerve. We also present an example of a vagus electroneurogram (VENG) recorded from the VNS probe and an analytical approach to the data. Comparison with Existing Methods: This method represents the first implementation of VENG/VNS in a large pregnant mammalian organism. Conclusions: This study describes a new surgical procedure allowing chronic recording and manipulation of vagus nerve activity in an animal model of human pregnancy.
1308.0964
Jeromos Vukov
Jeromos Vukov, Attila Szolnoki, Gy\"orgy Szab\'o
Diverging fluctuations in a spatial five-species cyclic dominance game
accepted for publication in Physical Review E
Physical Review E 88, 022123 (2013)
10.1103/PhysRevE.88.022123
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A five-species predator-prey model is studied on a square lattice where each species has two prey and two predators on the analogy to the Rock-Paper-Scissors-Lizard-Spock game. The evolution of the spatial distribution of species is governed by site exchange and invasion between the neighboring predator-prey pairs, where the cyclic symmetry can be characterized by two different invasion rates. The mean-field analysis has indicated periodic oscillations in the species densities with a frequency becoming zero for a specific ratio of invasion rates. When varying the ratio of invasion rates, the appearance of this zero-eigenvalue mode is accompanied by neutrality between the species associations. Monte Carlo simulations of the spatial system reveal diverging fluctuations at a specific invasion rate, which can be related to the vanishing dominance between all pairs of species associations.
[ { "created": "Mon, 5 Aug 2013 12:54:39 GMT", "version": "v1" } ]
2013-08-19
[ [ "Vukov", "Jeromos", "" ], [ "Szolnoki", "Attila", "" ], [ "Szabó", "György", "" ] ]
A five-species predator-prey model is studied on a square lattice where each species has two prey and two predators on the analogy to the Rock-Paper-Scissors-Lizard-Spock game. The evolution of the spatial distribution of species is governed by site exchange and invasion between the neighboring predator-prey pairs, where the cyclic symmetry can be characterized by two different invasion rates. The mean-field analysis has indicated periodic oscillations in the species densities with a frequency becoming zero for a specific ratio of invasion rates. When varying the ratio of invasion rates, the appearance of this zero-eigenvalue mode is accompanied by neutrality between the species associations. Monte Carlo simulations of the spatial system reveal diverging fluctuations at a specific invasion rate, which can be related to the vanishing dominance between all pairs of species associations.
1208.4127
Kimberly Glass
Kimberly Glass and Michelle Girvan
Annotation Enrichment Analysis: An Alternative Method for Evaluating the Functional Properties of Gene Sets
null
null
null
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene annotation databases (compendiums maintained by the scientific community that describe the biological functions performed by individual genes) are commonly used to evaluate the functional properties of experimentally derived gene sets. Overlap statistics, such as Fisher's Exact Test (FET), are often employed to assess these associations, but don't account for non-uniformity in the number of genes annotated to individual functions or the number of functions associated with individual genes. We find FET is strongly biased toward over-estimating overlap significance if a gene set has an unusually high number of annotations. To correct for these biases, we develop Annotation Enrichment Analysis (AEA), which properly accounts for the non-uniformity of annotations. We show that AEA is able to identify biologically meaningful functional enrichments that are obscured by numerous false-positive enrichment scores in FET, and we therefore suggest it be used to more accurately assess the biological properties of gene sets.
[ { "created": "Mon, 20 Aug 2012 21:54:02 GMT", "version": "v1" }, { "created": "Fri, 3 May 2013 13:52:17 GMT", "version": "v2" } ]
2013-05-06
[ [ "Glass", "Kimberly", "" ], [ "Girvan", "Michelle", "" ] ]
Gene annotation databases (compendiums maintained by the scientific community that describe the biological functions performed by individual genes) are commonly used to evaluate the functional properties of experimentally derived gene sets. Overlap statistics, such as Fisher's Exact Test (FET), are often employed to assess these associations, but don't account for non-uniformity in the number of genes annotated to individual functions or the number of functions associated with individual genes. We find FET is strongly biased toward over-estimating overlap significance if a gene set has an unusually high number of annotations. To correct for these biases, we develop Annotation Enrichment Analysis (AEA), which properly accounts for the non-uniformity of annotations. We show that AEA is able to identify biologically meaningful functional enrichments that are obscured by numerous false-positive enrichment scores in FET, and we therefore suggest it be used to more accurately assess the biological properties of gene sets.
1609.02938
Hau-tieng Wu
Su Li, Hau-tieng Wu
Extract fetal ECG from single-lead abdominal ECG by de-shape short time Fourier transform and nonlocal median
null
null
null
null
q-bio.QM physics.data-an stat.AP stat.ME stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The multiple fundamental frequency detection problem and the source separation problem from a single-channel signal containing multiple oscillatory components and a nonstationary noise are both challenging tasks. To extract the fetal electrocardiogram (ECG) from a single-lead maternal abdominal ECG, we face both challenges. In this paper, we propose a novel method to extract the fetal ECG signal from the single channel maternal abdominal ECG signal, without any additional measurement. The algorithm is composed of three main ingredients. First, the maternal and fetal heart rates are estimated by the de-shape short time Fourier transform, which is a recently proposed nonlinear time-frequency analysis technique; second, the beat tracking technique is applied to accurately obtain the maternal and fetal R peaks; third, the maternal and fetal ECG waveforms are established by the nonlocal median. The algorithm is evaluated on a simulated fetal ECG signal database ({\em fecgsyn} database), and tested on two real databases with the annotation provided by experts ({\em adfecgdb} database and {\em CinC2013} database). In general, the algorithm could be applied to solve other detection and source separation problems, and reconstruct the time-varying wave-shape function of each oscillatory component.
[ { "created": "Fri, 9 Sep 2016 20:26:31 GMT", "version": "v1" } ]
2016-09-13
[ [ "Li", "Su", "" ], [ "Wu", "Hau-tieng", "" ] ]
The multiple fundamental frequency detection problem and the source separation problem from a single-channel signal containing multiple oscillatory components and a nonstationary noise are both challenging tasks. To extract the fetal electrocardiogram (ECG) from a single-lead maternal abdominal ECG, we face both challenges. In this paper, we propose a novel method to extract the fetal ECG signal from the single channel maternal abdominal ECG signal, without any additional measurement. The algorithm is composed of three main ingredients. First, the maternal and fetal heart rates are estimated by the de-shape short time Fourier transform, which is a recently proposed nonlinear time-frequency analysis technique; second, the beat tracking technique is applied to accurately obtain the maternal and fetal R peaks; third, the maternal and fetal ECG waveforms are established by the nonlocal median. The algorithm is evaluated on a simulated fetal ECG signal database ({\em fecgsyn} database), and tested on two real databases with the annotation provided by experts ({\em adfecgdb} database and {\em CinC2013} database). In general, the algorithm could be applied to solve other detection and source separation problems, and reconstruct the time-varying wave-shape function of each oscillatory component.
2108.13666
Matteo di Volo
Hongjie Bi, Matteo Di Volo and Alessandro Torcini
Asynchronous and coherent dynamics in balanced excitatory-inhibitory spiking networks
null
null
null
null
q-bio.NC cond-mat.stat-mech
http://creativecommons.org/licenses/by/4.0/
Dynamic excitatory-inhibitory (E-I) balance is a paradigmatic mechanism invoked to explain the irregular low firing activity observed in the cortex. However, we will show that the E-I balance can be at the origin of other regimes observable in the brain. The analysis is performed by combining simulations of sparse E-I networks composed of N spiking neurons with analytical investigations of low dimensional neural mass models. The bifurcation diagrams, derived for the neural mass model, allow us to classify the asynchronous and coherent behaviours emerging at any finite in-degree K. In the limit N >> K >> 1 both supra- and sub-threshold balanced asynchronous regimes can be observed. Due to structural heterogeneity the asynchronous states are characterized by the splitting of the neurons into three groups: silent, fluctuation driven and mean driven. The coherent rhythms are characterized by regular or irregular temporal fluctuations joined to spatial coherence similar to coherent fluctuations observed in the cortex over multiple spatial scales. Collective Oscillations (COs) can emerge due to two different mechanisms. The first mechanism is similar to the pyramidal-interneuron gamma (PING) one. The second mechanism is intimately related to the presence of current fluctuations, which sustain COs characterized by an essentially simultaneous bursting of the two populations. We observe period-doubling cascades involving the PING-like COs finally leading to the appearance of coherent chaos. For sufficiently strong current fluctuations we report a novel mechanism of frequency locking among collective rhythms promoted by these intrinsic fluctuations. Our analysis suggests that although PING-like or fluctuation-driven COs are observable for any finite in-degree K, in the limit N >> K >> 1 these solutions result in two coexisting balanced regimes: an asynchronous and a fully synchronized one.
[ { "created": "Tue, 31 Aug 2021 08:09:42 GMT", "version": "v1" }, { "created": "Tue, 19 Oct 2021 12:50:27 GMT", "version": "v2" } ]
2023-11-13
[ [ "Bi", "Hongjie", "" ], [ "Di Volo", "Matteo", "" ], [ "Torcini", "Alessandro", "" ] ]
Dynamic excitatory-inhibitory (E-I) balance is a paradigmatic mechanism invoked to explain the irregular low firing activity observed in the cortex. However, we will show that the E-I balance can be at the origin of other regimes observable in the brain. The analysis is performed by combining simulations of sparse E-I networks composed of N spiking neurons with analytical investigations of low dimensional neural mass models. The bifurcation diagrams, derived for the neural mass model, allow us to classify the asynchronous and coherent behaviours emerging at any finite in-degree K. In the limit N >> K >> 1 both supra- and sub-threshold balanced asynchronous regimes can be observed. Due to structural heterogeneity the asynchronous states are characterized by the splitting of the neurons into three groups: silent, fluctuation driven and mean driven. The coherent rhythms are characterized by regular or irregular temporal fluctuations joined to spatial coherence similar to coherent fluctuations observed in the cortex over multiple spatial scales. Collective Oscillations (COs) can emerge due to two different mechanisms. The first mechanism is similar to the pyramidal-interneuron gamma (PING) one. The second mechanism is intimately related to the presence of current fluctuations, which sustain COs characterized by an essentially simultaneous bursting of the two populations. We observe period-doubling cascades involving the PING-like COs finally leading to the appearance of coherent chaos. For sufficiently strong current fluctuations we report a novel mechanism of frequency locking among collective rhythms promoted by these intrinsic fluctuations. Our analysis suggests that although PING-like or fluctuation-driven COs are observable for any finite in-degree K, in the limit N >> K >> 1 these solutions result in two coexisting balanced regimes: an asynchronous and a fully synchronized one.
1410.1493
Michael Manhart
Michael Manhart and Alexandre V. Morozov
Scaling properties of evolutionary paths in a biophysical model of protein adaptation
Revised version
Phys Biol 15:045001, 2015
10.1088/1478-3975/12/4/045001
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The enormous size and complexity of genotypic sequence space frequently requires consideration of coarse-grained sequences in empirical models. We develop scaling relations to quantify the effect of this coarse-graining on properties of fitness landscapes and evolutionary paths. We first consider evolution on a simple Mount Fuji fitness landscape, focusing on how the length and predictability of evolutionary paths scale with the coarse-grained sequence length and alphabet. We obtain simple scaling relations for both the weak- and strong-selection limits, with a non-trivial crossover regime at intermediate selection strengths. We apply these results to evolution on a biophysical fitness landscape that describes how proteins evolve new binding interactions while maintaining their folding stability. We combine the scaling relations with numerical calculations for coarse-grained protein sequences to obtain quantitative properties of the model for realistic binding interfaces and a full amino acid alphabet.
[ { "created": "Mon, 6 Oct 2014 18:59:01 GMT", "version": "v1" }, { "created": "Fri, 13 Mar 2015 23:51:49 GMT", "version": "v2" } ]
2015-06-01
[ [ "Manhart", "Michael", "" ], [ "Morozov", "Alexandre V.", "" ] ]
The enormous size and complexity of genotypic sequence space frequently requires consideration of coarse-grained sequences in empirical models. We develop scaling relations to quantify the effect of this coarse-graining on properties of fitness landscapes and evolutionary paths. We first consider evolution on a simple Mount Fuji fitness landscape, focusing on how the length and predictability of evolutionary paths scale with the coarse-grained sequence length and alphabet. We obtain simple scaling relations for both the weak- and strong-selection limits, with a non-trivial crossover regime at intermediate selection strengths. We apply these results to evolution on a biophysical fitness landscape that describes how proteins evolve new binding interactions while maintaining their folding stability. We combine the scaling relations with numerical calculations for coarse-grained protein sequences to obtain quantitative properties of the model for realistic binding interfaces and a full amino acid alphabet.
q-bio/0702017
Brigitte Gaillard
Sophie Bourgeon (DEPE-IPHC), Thierry Raclot (DEPE-IPHC)
Corticosterone selectively decreases humoral immunity in female eiders during incubation
null
J. Exp. Biol. 209 (2006) 4957-4965
10.1242/jeb.02610
null
q-bio.PE
null
Immunity is hypothesized to share limited resources with other physiological functions and this may partly account for the fitness costs of reproduction. Previous studies have shown that the acquired immunity of female common eider ducks (Somateria mollissima) is suppressed during their incubation, during which they entirely fast. Corticosterone was proposed to be an underlying physiological mechanism for such immunosuppression. Therefore, the current study aimed to assess the effects of exogenous corticosterone on acquired immunity in captive eiders. To this end, females were implanted with corticosterone pellets at different stages of their incubation fast. We measured total immunoglobulin levels, T-cell-mediated immune response, body mass and corticosterone levels in these females and compared them with those of control females prior to and after manipulation (i.e. corticosterone pellet implantation). To mimic corticosterone effects on body mass, we experimentally extended fasting duration in a group of females termed 'late fasters'...
[ { "created": "Thu, 8 Feb 2007 13:46:05 GMT", "version": "v1" } ]
2007-05-23
[ [ "Bourgeon", "Sophie", "", "DEPE-IPHC" ], [ "Raclot", "Thierry", "", "DEPE-IPHC" ] ]
Immunity is hypothesized to share limited resources with other physiological functions and this may partly account for the fitness costs of reproduction. Previous studies have shown that the acquired immunity of female common eider ducks (Somateria mollissima) is suppressed during their incubation, during which they entirely fast. Corticosterone was proposed to be an underlying physiological mechanism for such immunosuppression. Therefore, the current study aimed to assess the effects of exogenous corticosterone on acquired immunity in captive eiders. To this end, females were implanted with corticosterone pellets at different stages of their incubation fast. We measured total immunoglobulin levels, T-cell-mediated immune response, body mass and corticosterone levels in these females and compared them with those of control females prior to and after manipulation (i.e. corticosterone pellet implantation). To mimic corticosterone effects on body mass, we experimentally extended fasting duration in a group of females termed 'late fasters'...
1408.3786
Michael Manhart
Michael Manhart and Alexandre V. Morozov
Protein folding and binding can emerge as evolutionary spandrels through structural coupling
null
Proc Natl Acad Sci USA, 112:1797-1802, 2015
10.1073/pnas.1415895112
null
q-bio.PE q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Binding interactions between proteins and other molecules mediate numerous cellular processes, including metabolism, signaling, and regulation of gene expression. These interactions evolve in response to changes in the protein's chemical or physical environment (such as the addition of an antibiotic), or when genes duplicate and diverge. Several recent studies have shown the importance of folding stability in constraining protein evolution. Here we investigate how structural coupling between protein folding and binding -- the fact that most proteins can only bind their targets when folded -- gives rise to evolutionary coupling between the traits of folding stability and binding strength. Using biophysical and evolutionary modeling, we show how these protein traits can emerge as evolutionary "spandrels" even if they do not confer an intrinsic fitness advantage. In particular, proteins can evolve strong binding interactions that have no functional role but merely serve to stabilize the protein if misfolding is deleterious. Furthermore, such proteins may have divergent fates, evolving to bind or not bind their targets depending on random mutation events. These observations may explain the abundance of apparently nonfunctional interactions among proteins observed in high-throughput assays. In contrast, for proteins with both functional binding and deleterious misfolding, evolution may be highly predictable at the level of biophysical traits: adaptive paths are tightly constrained to first gain extra folding stability and then partially lose it as the new binding function is developed. These findings have important consequences for our understanding of fundamental evolutionary principles of both natural and engineered proteins.
[ { "created": "Sun, 17 Aug 2014 03:09:05 GMT", "version": "v1" } ]
2015-02-19
[ [ "Manhart", "Michael", "" ], [ "Morozov", "Alexandre V.", "" ] ]
Binding interactions between proteins and other molecules mediate numerous cellular processes, including metabolism, signaling, and regulation of gene expression. These interactions evolve in response to changes in the protein's chemical or physical environment (such as the addition of an antibiotic), or when genes duplicate and diverge. Several recent studies have shown the importance of folding stability in constraining protein evolution. Here we investigate how structural coupling between protein folding and binding -- the fact that most proteins can only bind their targets when folded -- gives rise to evolutionary coupling between the traits of folding stability and binding strength. Using biophysical and evolutionary modeling, we show how these protein traits can emerge as evolutionary "spandrels" even if they do not confer an intrinsic fitness advantage. In particular, proteins can evolve strong binding interactions that have no functional role but merely serve to stabilize the protein if misfolding is deleterious. Furthermore, such proteins may have divergent fates, evolving to bind or not bind their targets depending on random mutation events. These observations may explain the abundance of apparently nonfunctional interactions among proteins observed in high-throughput assays. In contrast, for proteins with both functional binding and deleterious misfolding, evolution may be highly predictable at the level of biophysical traits: adaptive paths are tightly constrained to first gain extra folding stability and then partially lose it as the new binding function is developed. These findings have important consequences for our understanding of fundamental evolutionary principles of both natural and engineered proteins.
1412.0488
Pavel Tomancak
Tobias Pietzsch and Stephan Saalfeld and Stephan Preibisch and Pavel Tomancak
BigDataViewer: Interactive Visualization and Image Processing for Terabyte Data Sets
38 pages, 1 main figure, 27 supplementary figures, under review at Nature Methods
null
null
null
q-bio.QM cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increasingly popular light sheet microscopy techniques generate very large 3D time-lapse recordings of living biological specimens. The necessity to make large volumetric datasets available for interactive visualization and analysis has been widely recognized. However, existing solutions build on dedicated servers to generate virtual slices that are transferred to the client applications, practically leading to insufficient frame rates (less than 10 frames per second) for a truly interactive experience. An easily accessible open source solution for interactive arbitrary virtual re-slicing of very large volumes and time series of volumes has so far been missing. We fill this gap with BigDataViewer, a Fiji plugin to interactively navigate and visualize large image sequences from both local and remote data sources.
[ { "created": "Mon, 1 Dec 2014 14:24:44 GMT", "version": "v1" } ]
2014-12-02
[ [ "Pietzsch", "Tobias", "" ], [ "Saalfeld", "Stephan", "" ], [ "Preibisch", "Stephan", "" ], [ "Tomancak", "Pavel", "" ] ]
The increasingly popular light sheet microscopy techniques generate very large 3D time-lapse recordings of living biological specimens. The necessity to make large volumetric datasets available for interactive visualization and analysis has been widely recognized. However, existing solutions build on dedicated servers to generate virtual slices that are transferred to the client applications, in practice leading to insufficient frame rates (less than 10 frames per second) for a truly interactive experience. An easily accessible open source solution for interactive arbitrary virtual re-slicing of very large volumes and time series of volumes has so far been missing. We fill this gap with BigDataViewer, a Fiji plugin to interactively navigate and visualize large image sequences from both local and remote data sources.
2310.14521
Sean Cottrell
Sean Cottrell, Yuta Hozumi, Guo-Wei Wei
K-Nearest-Neighbors Induced Topological PCA for scRNA Sequence Data Analysis
28 pages, 11 figures
null
null
null
q-bio.QM cs.LG math.AT
http://creativecommons.org/licenses/by/4.0/
Single-cell RNA sequencing (scRNA-seq) is widely used to reveal heterogeneity in cells, which has given us insights into cell-cell communication, cell differentiation, and differential gene expression. However, analyzing scRNA-seq data is a challenge due to sparsity and the large number of genes involved. Therefore, dimensionality reduction and feature selection are important for removing spurious signals and enhancing downstream analysis. Traditional PCA, a main workhorse in dimensionality reduction, lacks the ability to capture geometrical structure information embedded in the data, and previous graph Laplacian regularizations are limited to the analysis of only a single scale. We propose a topological Principal Components Analysis (tPCA) method that combines the persistent Laplacian (PL) technique with L$_{2,1}$ norm regularization to address multiscale and multiclass heterogeneity issues in data. We further introduce a k-Nearest-Neighbor (kNN) persistent Laplacian technique to improve the robustness of our persistent Laplacian method. The proposed kNN-PL is a new algebraic topology technique which addresses many limitations of traditional persistent homology. Rather than inducing filtration by varying a distance threshold, we introduce kNN-tPCA, where filtrations are achieved by varying the number of neighbors in a kNN network at each step, and find that this framework has significant implications for hyper-parameter tuning. We validate the efficacy of our proposed tPCA and kNN-tPCA methods on 11 diverse benchmark scRNA-seq datasets, and showcase that our methods outperform other unsupervised PCA enhancements from the literature, as well as the popular Uniform Manifold Approximation and Projection (UMAP), t-Distributed Stochastic Neighbor Embedding (tSNE), and Non-Negative Matrix Factorization (NMF), by significant margins.
[ { "created": "Mon, 23 Oct 2023 03:07:50 GMT", "version": "v1" } ]
2023-10-24
[ [ "Cottrell", "Sean", "" ], [ "Hozumi", "Yuta", "" ], [ "Wei", "Guo-Wei", "" ] ]
Single-cell RNA sequencing (scRNA-seq) is widely used to reveal heterogeneity in cells, which has given us insights into cell-cell communication, cell differentiation, and differential gene expression. However, analyzing scRNA-seq data is a challenge due to sparsity and the large number of genes involved. Therefore, dimensionality reduction and feature selection are important for removing spurious signals and enhancing downstream analysis. Traditional PCA, a main workhorse in dimensionality reduction, lacks the ability to capture geometrical structure information embedded in the data, and previous graph Laplacian regularizations are limited to the analysis of only a single scale. We propose a topological Principal Components Analysis (tPCA) method that combines the persistent Laplacian (PL) technique with L$_{2,1}$ norm regularization to address multiscale and multiclass heterogeneity issues in data. We further introduce a k-Nearest-Neighbor (kNN) persistent Laplacian technique to improve the robustness of our persistent Laplacian method. The proposed kNN-PL is a new algebraic topology technique which addresses many limitations of traditional persistent homology. Rather than inducing filtration by varying a distance threshold, we introduce kNN-tPCA, where filtrations are achieved by varying the number of neighbors in a kNN network at each step, and find that this framework has significant implications for hyper-parameter tuning. We validate the efficacy of our proposed tPCA and kNN-tPCA methods on 11 diverse benchmark scRNA-seq datasets, and showcase that our methods outperform other unsupervised PCA enhancements from the literature, as well as the popular Uniform Manifold Approximation and Projection (UMAP), t-Distributed Stochastic Neighbor Embedding (tSNE), and Non-Negative Matrix Factorization (NMF), by significant margins.
2010.13459
Marie-Constance Corsi
Marie-Constance Corsi, Mario Chavez, Denis Schwartz, Nathalie George, Laurent Hugueville, Ari E. Kahn, Sophie Dupont, Danielle S. Bassett and Fabrizio De Vico Fallani
BCI learning induces core-periphery reorganization in M/EEG multiplex brain networks
This is the version of the article before editing, as submitted by an author to the Journal of Neural Engineering. IOP Publishing Ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it. The Version of Record is available online at http://iopscience.iop.org/article/10.1088/1741-2552/abef39
null
10.1088/1741-2552/abef39
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brain-computer interfaces (BCIs) constitute a promising tool for communication and control. However, mastering non-invasive closed-loop systems remains a learned skill that is difficult to develop for a non-negligible proportion of users. The learning process involved induces neural changes associated with a brain network reorganization that remains poorly understood. To address this inter-subject variability, we adopted a multilayer approach to integrate brain network properties from electroencephalographic (EEG) and magnetoencephalographic (MEG) data resulting from a four-session BCI training program followed by a group of healthy subjects. Our method gives access to the contribution of each layer to the multilayer network, which tends to equalize with time. We show that, regardless of the chosen modality, a progressive increase in the integration of somatosensory areas in the alpha band was paralleled by a decrease in the integration of visual processing and working memory areas in the beta band. Notably, only brain network properties in the multilayer network correlated with future BCI scores in the alpha2 band: positively in somatosensory and decision-making related areas and negatively in associative areas. Our findings cast new light on the neural processes underlying BCI training. Integrating multimodal brain network properties provides new information that correlates with behavioral performance and could be considered a potential marker of BCI learning.
[ { "created": "Mon, 26 Oct 2020 09:56:14 GMT", "version": "v1" }, { "created": "Wed, 17 Mar 2021 09:08:55 GMT", "version": "v2" } ]
2021-03-18
[ [ "Corsi", "Marie-Constance", "" ], [ "Chavez", "Mario", "" ], [ "Schwartz", "Denis", "" ], [ "George", "Nathalie", "" ], [ "Hugueville", "Laurent", "" ], [ "Kahn", "Ari E.", "" ], [ "Dupont", "Sophie", "" ...
Brain-computer interfaces (BCIs) constitute a promising tool for communication and control. However, mastering non-invasive closed-loop systems remains a learned skill that is difficult to develop for a non-negligible proportion of users. The learning process involved induces neural changes associated with a brain network reorganization that remains poorly understood. To address this inter-subject variability, we adopted a multilayer approach to integrate brain network properties from electroencephalographic (EEG) and magnetoencephalographic (MEG) data resulting from a four-session BCI training program followed by a group of healthy subjects. Our method gives access to the contribution of each layer to the multilayer network, which tends to equalize with time. We show that, regardless of the chosen modality, a progressive increase in the integration of somatosensory areas in the alpha band was paralleled by a decrease in the integration of visual processing and working memory areas in the beta band. Notably, only brain network properties in the multilayer network correlated with future BCI scores in the alpha2 band: positively in somatosensory and decision-making related areas and negatively in associative areas. Our findings cast new light on the neural processes underlying BCI training. Integrating multimodal brain network properties provides new information that correlates with behavioral performance and could be considered a potential marker of BCI learning.
2401.06005
Benjamin Peters
Benjamin Peters, James J. DiCarlo, Todd Gureckis, Ralf Haefner, Leyla Isik, Joshua Tenenbaum, Talia Konkle, Thomas Naselaris, Kimberly Stachenfeld, Zenna Tavares, Doris Tsao, Ilker Yildirim, Nikolaus Kriegeskorte
How does the primate brain combine generative and discriminative computations in vision?
null
null
null
null
q-bio.NC cs.AI cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Vision is widely understood as an inference problem. However, two contrasting conceptions of the inference process have each been influential in research on biological vision as well as the engineering of machine vision. The first emphasizes bottom-up signal flow, describing vision as a largely feedforward, discriminative inference process that filters and transforms the visual information to remove irrelevant variation and represent behaviorally relevant information in a format suitable for downstream functions of cognition and behavioral control. In this conception, vision is driven by the sensory data, and perception is direct because the processing proceeds from the data to the latent variables of interest. The notion of "inference" in this conception is that of the engineering literature on neural networks, where feedforward convolutional neural networks processing images are said to perform inference. The alternative conception is that of vision as an inference process in Helmholtz's sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes giving rise to it. In this conception, vision inverts a generative model through an interrogation of the evidence in a process often thought to involve top-down predictions of sensory data to evaluate the likelihood of alternative hypotheses. The authors include scientists rooted in roughly equal numbers in each of the conceptions and motivated to overcome what might be a false dichotomy between them and engage the other perspective in the realm of theory and experiment. The primate brain employs an unknown algorithm that may combine the advantages of both conceptions. We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends the dichotomy and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
[ { "created": "Thu, 11 Jan 2024 16:07:58 GMT", "version": "v1" } ]
2024-01-12
[ [ "Peters", "Benjamin", "" ], [ "DiCarlo", "James J.", "" ], [ "Gureckis", "Todd", "" ], [ "Haefner", "Ralf", "" ], [ "Isik", "Leyla", "" ], [ "Tenenbaum", "Joshua", "" ], [ "Konkle", "Talia", "" ], [ ...
Vision is widely understood as an inference problem. However, two contrasting conceptions of the inference process have each been influential in research on biological vision as well as the engineering of machine vision. The first emphasizes bottom-up signal flow, describing vision as a largely feedforward, discriminative inference process that filters and transforms the visual information to remove irrelevant variation and represent behaviorally relevant information in a format suitable for downstream functions of cognition and behavioral control. In this conception, vision is driven by the sensory data, and perception is direct because the processing proceeds from the data to the latent variables of interest. The notion of "inference" in this conception is that of the engineering literature on neural networks, where feedforward convolutional neural networks processing images are said to perform inference. The alternative conception is that of vision as an inference process in Helmholtz's sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes giving rise to it. In this conception, vision inverts a generative model through an interrogation of the evidence in a process often thought to involve top-down predictions of sensory data to evaluate the likelihood of alternative hypotheses. The authors include scientists rooted in roughly equal numbers in each of the conceptions and motivated to overcome what might be a false dichotomy between them and engage the other perspective in the realm of theory and experiment. The primate brain employs an unknown algorithm that may combine the advantages of both conceptions. We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends the dichotomy and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
1403.4987
Pier Francesco Palamara
Pier Francesco Palamara
Population genetics of identity by descent
Ph.D. thesis
null
10.7916/D8V122XT
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent improvements in high-throughput genotyping and sequencing technologies have afforded the collection of massive, genome-wide datasets of DNA information from hundreds of thousands of individuals. These datasets, in turn, provide unprecedented opportunities to reconstruct the history of human populations and detect genotype-phenotype association. Recently developed computational methods can identify long-range chromosomal segments that are identical across samples, and have been transmitted from common ancestors that lived tens to hundreds of generations in the past. These segments reveal genealogical relationships that are typically unknown to the carrying individuals. In this work, we demonstrate that such identical-by-descent (IBD) segments are informative about a number of relevant population genetics features: they enable the inference of details about past population size fluctuations, migration events, and they carry the genomic signature of natural selection. We derive a mathematical model, based on coalescent theory, that allows for a quantitative description of IBD sharing across purportedly unrelated individuals, and develop inference procedures for the reconstruction of recent demographic events, where classical methodologies are statistically underpowered. We analyze IBD sharing in several contemporary human populations, including representative communities of the Jewish Diaspora, Kenyan Maasai samples, and individuals from several Dutch provinces, in all cases retrieving evidence of fine-scale demographic events from recent history. Finally, we expand the presented model to describe distributions for those sites in IBD shared segments that harbor mutation events, showing how these may be used for the inference of mutation rates in humans and other species.
[ { "created": "Wed, 19 Mar 2014 21:37:03 GMT", "version": "v1" } ]
2014-12-19
[ [ "Palamara", "Pier Francesco", "" ] ]
Recent improvements in high-throughput genotyping and sequencing technologies have afforded the collection of massive, genome-wide datasets of DNA information from hundreds of thousands of individuals. These datasets, in turn, provide unprecedented opportunities to reconstruct the history of human populations and detect genotype-phenotype association. Recently developed computational methods can identify long-range chromosomal segments that are identical across samples, and have been transmitted from common ancestors that lived tens to hundreds of generations in the past. These segments reveal genealogical relationships that are typically unknown to the carrying individuals. In this work, we demonstrate that such identical-by-descent (IBD) segments are informative about a number of relevant population genetics features: they enable the inference of details about past population size fluctuations, migration events, and they carry the genomic signature of natural selection. We derive a mathematical model, based on coalescent theory, that allows for a quantitative description of IBD sharing across purportedly unrelated individuals, and develop inference procedures for the reconstruction of recent demographic events, where classical methodologies are statistically underpowered. We analyze IBD sharing in several contemporary human populations, including representative communities of the Jewish Diaspora, Kenyan Maasai samples, and individuals from several Dutch provinces, in all cases retrieving evidence of fine-scale demographic events from recent history. Finally, we expand the presented model to describe distributions for those sites in IBD shared segments that harbor mutation events, showing how these may be used for the inference of mutation rates in humans and other species.
2012.03671
Cailey Kerley
Cailey I. Kerley, Leon Y. Cai, Chang Yu, Logan M. Crawford, Jason M. Elenberger, Eden S. Singh, Kurt G. Schilling, Katherine S. Aboud, Bennett A. Landman, Tonia S. Rex
Joint analysis of structural connectivity and cortical surface features: correlates with mild traumatic brain injury
To be published in Proc SPIE Int Soc Opt Eng. 2021 Feb
null
null
null
q-bio.NC cs.LG q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mild traumatic brain injury (mTBI) is a complex syndrome that affects up to 600 per 100,000 individuals, with a particular concentration among military personnel. About half of all mTBI patients experience a diverse array of chronic symptoms which persist long after the acute injury. Hence, there is an urgent need for better understanding of the white matter and gray matter pathologies associated with mTBI to map which specific brain systems are impacted and identify courses of intervention. Previous works have linked mTBI to disruptions in white matter pathways and cortical surface abnormalities. Herein, we examine these hypothesized links in an exploratory study of joint structural connectivity and cortical surface changes associated with mTBI and its chronic symptoms. Briefly, we consider a cohort of 12 mTBI and 26 control subjects. A set of 588 cortical surface metrics and 4,753 structural connectivity metrics were extracted from cortical surface regions and diffusion weighted magnetic resonance imaging in each subject. Principal component analysis (PCA) was used to reduce the dimensionality of each metric set. We then applied independent component analysis (ICA) both to each PCA space individually and together in a joint ICA approach. We identified a stable independent component across the connectivity-only and joint ICAs which presented significant group differences in subject loadings (p<0.05, corrected). Additionally, we found that two mTBI symptoms, slowed thinking and forgetfulness, were significantly correlated (p<0.05, corrected) with mTBI subject loadings in a surface-only ICA. These surface-only loadings captured an increase in bilateral cortical thickness.
[ { "created": "Wed, 18 Nov 2020 19:39:55 GMT", "version": "v1" }, { "created": "Tue, 15 Dec 2020 16:16:50 GMT", "version": "v2" } ]
2020-12-16
[ [ "Kerley", "Cailey I.", "" ], [ "Cai", "Leon Y.", "" ], [ "Yu", "Chang", "" ], [ "Crawford", "Logan M.", "" ], [ "Elenberger", "Jason M.", "" ], [ "Singh", "Eden S.", "" ], [ "Schilling", "Kurt G.", "" ], ...
Mild traumatic brain injury (mTBI) is a complex syndrome that affects up to 600 per 100,000 individuals, with a particular concentration among military personnel. About half of all mTBI patients experience a diverse array of chronic symptoms which persist long after the acute injury. Hence, there is an urgent need for better understanding of the white matter and gray matter pathologies associated with mTBI to map which specific brain systems are impacted and identify courses of intervention. Previous works have linked mTBI to disruptions in white matter pathways and cortical surface abnormalities. Herein, we examine these hypothesized links in an exploratory study of joint structural connectivity and cortical surface changes associated with mTBI and its chronic symptoms. Briefly, we consider a cohort of 12 mTBI and 26 control subjects. A set of 588 cortical surface metrics and 4,753 structural connectivity metrics were extracted from cortical surface regions and diffusion weighted magnetic resonance imaging in each subject. Principal component analysis (PCA) was used to reduce the dimensionality of each metric set. We then applied independent component analysis (ICA) both to each PCA space individually and together in a joint ICA approach. We identified a stable independent component across the connectivity-only and joint ICAs which presented significant group differences in subject loadings (p<0.05, corrected). Additionally, we found that two mTBI symptoms, slowed thinking and forgetfulness, were significantly correlated (p<0.05, corrected) with mTBI subject loadings in a surface-only ICA. These surface-only loadings captured an increase in bilateral cortical thickness.
1711.06231
Sam Ma
Zhanshan Ma
Extending species-area relationships (SAR) to diversity-area relationships (DAR)
null
null
null
null
q-bio.PE cs.CE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I extend the traditional SAR, which has achieved the status of an ecological law and plays a critical role in global biodiversity assessment, to the general (alpha- or beta-diversity in Hill numbers) diversity-area relationship (DAR). The extension was motivated by the need to remedy the limitation of the traditional SAR, which addresses only one aspect of biodiversity scaling, i.e., species richness scaling over space. The extension was made possible by the fact that all Hill numbers are in units of species (referred to as the effective number of species or as species equivalents), and I postulated that Hill numbers should follow the same or a similar pattern as the SAR. I selected three DAR models: the traditional power law (PL), PLEC (PL with exponential cutoff), and PLIEC (PL with inverse exponential cutoff). I defined three new concepts and derived their quantifications: (i) DAR profile: the z-q series, where z is the PL scaling parameter at each diversity order (q); (ii) PDO (pair-wise diversity overlap) profile: the g-q series, where g is the PDO corresponding to q; (iii) MAD (maximal accrual diversity) profile: the Dmax-q series, where Dmax is the MAD corresponding to q. Furthermore, the PDO-g is quantified based on the self-similarity property of the PL model, and Dmax can be estimated from the PLEC parameters. The three profiles constitute a novel DAR approach to biodiversity scaling. I verified the postulation with the American gut microbiome project (AGP) dataset of 1473 healthy North American individuals (the largest human dataset from a single project to date). The PL model was preferred due to its simplicity and established ecological properties such as self-similarity (necessary for establishing the PDO profile), while PLEC has an advantage in establishing the MAD profile. All three profiles for the AGP dataset were successfully quantified and compared with existing SAR parameters in the literature whenever possible.
[ { "created": "Thu, 16 Nov 2017 18:20:25 GMT", "version": "v1" } ]
2017-11-17
[ [ "Ma", "Zhanshan", "" ] ]
I extend the traditional SAR, which has achieved the status of an ecological law and plays a critical role in global biodiversity assessment, to the general (alpha- or beta-diversity in Hill numbers) diversity-area relationship (DAR). The extension was motivated by the need to remedy the limitation of the traditional SAR, which addresses only one aspect of biodiversity scaling, i.e., species richness scaling over space. The extension was made possible by the fact that all Hill numbers are in units of species (referred to as the effective number of species or as species equivalents), and I postulated that Hill numbers should follow the same or a similar pattern as the SAR. I selected three DAR models: the traditional power law (PL), PLEC (PL with exponential cutoff), and PLIEC (PL with inverse exponential cutoff). I defined three new concepts and derived their quantifications: (i) DAR profile: the z-q series, where z is the PL scaling parameter at each diversity order (q); (ii) PDO (pair-wise diversity overlap) profile: the g-q series, where g is the PDO corresponding to q; (iii) MAD (maximal accrual diversity) profile: the Dmax-q series, where Dmax is the MAD corresponding to q. Furthermore, the PDO-g is quantified based on the self-similarity property of the PL model, and Dmax can be estimated from the PLEC parameters. The three profiles constitute a novel DAR approach to biodiversity scaling. I verified the postulation with the American gut microbiome project (AGP) dataset of 1473 healthy North American individuals (the largest human dataset from a single project to date). The PL model was preferred due to its simplicity and established ecological properties such as self-similarity (necessary for establishing the PDO profile), while PLEC has an advantage in establishing the MAD profile. All three profiles for the AGP dataset were successfully quantified and compared with existing SAR parameters in the literature whenever possible.
0812.1894
Davide Valenti
Alessandro Giuffrida, Davide Valenti, Graziella Ziino, Bernardo Spagnolo, Antonio Panebianco
A stochastic interspecific competition model to predict the behaviour of Listeria monocytogenes in the fermentation process of a traditional Sicilian salami
19 pages, 2 figures, 2 tables. To be published in Eur. Food Res. Technol
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The present paper discusses the use of modified Lotka-Volterra equations to stochastically simulate the behaviour of Listeria monocytogenes and Lactic Acid Bacteria (LAB) during the fermentation period (168 h) of a typical Sicilian salami. For this purpose, the differential equation system is set up considering T, pH, and aw as stochastic variables. Each of them is governed by dynamics that involve a deterministic linear decrease as a function of the time t and an "additive noise" term which instantaneously mimics the fluctuations of T, pH, and aw. The choice of a suitable parameter accounting for the interaction of LAB with L. monocytogenes, as well as the introduction of appropriate noise levels, allows the model to match the observed data, both for the mean growth curves and for the probability distribution of L. monocytogenes concentration at 168 h.
[ { "created": "Wed, 10 Dec 2008 11:33:33 GMT", "version": "v1" } ]
2008-12-11
[ [ "Giuffrida", "Alessandro", "" ], [ "Valenti", "Davide", "" ], [ "Ziino", "Graziella", "" ], [ "Spagnolo", "Bernardo", "" ], [ "Panebianco", "Antonio", "" ] ]
The present paper discusses the use of modified Lotka-Volterra equations to stochastically simulate the behaviour of Listeria monocytogenes and Lactic Acid Bacteria (LAB) during the fermentation period (168 h) of a typical Sicilian salami. For this purpose, the differential equation system is set up considering T, pH, and aw as stochastic variables. Each of them is governed by dynamics that involve a deterministic linear decrease as a function of the time t and an "additive noise" term which instantaneously mimics the fluctuations of T, pH, and aw. The choice of a suitable parameter accounting for the interaction of LAB with L. monocytogenes, as well as the introduction of appropriate noise levels, allows the model to match the observed data, both for the mean growth curves and for the probability distribution of L. monocytogenes concentration at 168 h.
1210.2342
Aaron Quinlan Ph.D.
Ryan M. Layer, Ira M. Hall, and Aaron R. Quinlan
LUMPY: A probabilistic framework for structural variant discovery
version 2; updated for journal resubmission; includes comparison to PINDEL and analysis on NA12878 and parents
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Comprehensive discovery of structural variation (SV) in human genomes from DNA sequencing requires the integration of multiple alignment signals including read-pair, split-read and read-depth. However, owing to inherent technical challenges, most existing SV discovery approaches utilize only one signal and consequently suffer from reduced sensitivity, especially at low sequence coverage and for smaller SVs. We present a novel and extremely flexible probabilistic SV discovery framework that is capable of integrating any number of SV detection signals including those generated from read alignments or prior evidence. We demonstrate improved sensitivity over extant methods by combining paired-end and split-read alignments and emphasize the utility of our framework for comprehensive studies of structural variation in heterogeneous tumor genomes. We further discuss the broader utility of this approach for probabilistic integration of diverse genomic interval datasets.
[ { "created": "Mon, 8 Oct 2012 17:06:14 GMT", "version": "v1" }, { "created": "Wed, 22 Jan 2014 01:22:16 GMT", "version": "v2" } ]
2014-01-23
[ [ "Layer", "Ryan M.", "" ], [ "Hall", "Ira M.", "" ], [ "Quinlan", "Aaron R.", "" ] ]
Comprehensive discovery of structural variation (SV) in human genomes from DNA sequencing requires the integration of multiple alignment signals including read-pair, split-read and read-depth. However, owing to inherent technical challenges, most existing SV discovery approaches utilize only one signal and consequently suffer from reduced sensitivity, especially at low sequence coverage and for smaller SVs. We present a novel and extremely flexible probabilistic SV discovery framework that is capable of integrating any number of SV detection signals including those generated from read alignments or prior evidence. We demonstrate improved sensitivity over extant methods by combining paired-end and split-read alignments and emphasize the utility of our framework for comprehensive studies of structural variation in heterogeneous tumor genomes. We further discuss the broader utility of this approach for probabilistic integration of diverse genomic interval datasets.
2004.09639
Vasily Belov
Takeshi Yamagiwa, Yong-Ming Yu, Yoshitaka Inoue, Vasily V. Belov, Mikhail I. Papisov, Sadaki Inokuchi, Masao Kaneki, Morris F. White, Alan J. Fischman, and Ronald G. Tompkins
Impairment of insulin-stimulated glucose utilization is associated with burn-induced insulin resistance in mouse muscle by hyperinsulinemic-isoglycemic clamp
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Burn-induced insulin resistance is associated with increased morbidity and mortality; however, the impact of burn injury on tissue-specific insulin sensitivity and its molecular mechanisms with consideration of insulin state remains unknown in rodent models. This study was designed to characterize a burn mouse model with tissue-specific insulin resistance under insulin clamp conditions. C57BL6/J mice were subjected to 30% full-thickness burn injury and underwent the combination of hyperinsulinemic isoglycemic clamp (HIC) and positron emission tomography (PET). Hepatic glucose production (HGP) and peripheral glucose disappearance rate (Rd) were measured at different time points up to 7 days post injury. Burned mice showed significant fasting hypoglycemia and hypoinsulinemia (P < 0.01) on post-burn day (PBD) 3 and 7 along with significantly higher energy expenditure (P < 0.01). HIC on PBD 3 demonstrated that burn injury induced systemic insulin resistance, resulting from a significant decrease in insulin-stimulated Rd (33.0 +/- 10.2 vs 68.3 +/- 5.9 mg/kg/min; P < 0.05). In contrast, HGP of burned and sham mice was comparable in both the basal and clamp periods. PET on PBD 3 showed a lower insulin-stimulated 18F-labeled 2-fluoro-2-deoxy-D-glucose uptake in the quadriceps of burned mice compared with sham-burned mice. Gastrocnemius muscle harvested from burned mice on PBD 3 showed decreased insulin-stimulated tyrosine phosphorylation of insulin receptor substrate-1 to 34.7% of that in sham-burn mice by immunoblotting analysis (P < 0.05). These findings suggest that impaired insulin-stimulated Rd in skeletal muscle, not elevated HGP, plays a role in the development of burn-induced insulin resistance in a mouse model.
[ { "created": "Mon, 20 Apr 2020 21:16:49 GMT", "version": "v1" } ]
2020-04-22
[ [ "Yamagiwa", "Takeshi", "" ], [ "Yu", "Yong-Ming", "" ], [ "Inoue", "Yoshitaka", "" ], [ "Belov", "Vasily V.", "" ], [ "Papisov", "Mikhail I.", "" ], [ "Inokuchi", "Sadaki", "" ], [ "Kaneki", "Masao", "" ]...
Burn-induced insulin resistance is associated with increased morbidity and mortality; however, the impact of burn injury on tissue-specific insulin sensitivity and its molecular mechanisms with consideration of insulin state remains unknown in rodent models. This study was designed to characterize a burn mouse model with tissue-specific insulin resistance under insulin clamp conditions. C57BL/6J mice were subjected to 30% full-thickness burn injury and underwent the combination of hyperinsulinemic isoglycemic clamp (HIC) and positron emission tomography (PET). Hepatic glucose production (HGP) and peripheral glucose disappearance rate (Rd) were measured at different time points up to 7 days post injury. Burned mice showed significant fasting hypoglycemia and hypoinsulinemia (P < 0.01) on post-burn days (PBD) 3 and 7, along with significantly higher energy expenditure (P < 0.01). HIC on PBD 3 demonstrated that burn injury induced systemic insulin resistance, resulting from a significant decrease in insulin-stimulated Rd (33.0 +/- 10.2 vs 68.3 +/- 5.9 mg/kg/min; P < 0.05). In contrast, HGP of burned and sham mice was comparable in both the basal and clamp periods. PET on PBD 3 showed lower insulin-stimulated 18F-labeled 2-fluoro-2-deoxy-D-glucose uptake in the quadriceps of burned mice compared with sham-burned mice. Gastrocnemius muscle harvested from burned mice on PBD 3 showed decreased insulin-stimulated tyrosine phosphorylation of insulin receptor substrate-1, to 34.7% of that in sham-burn mice, by immunoblotting analysis (P < 0.05). These findings suggest that impaired insulin-stimulated Rd in skeletal muscle, not elevated HGP, plays a role in the development of burn-induced insulin resistance in a mouse model.
2305.07508
Xingang Peng
Xingang Peng, Jiaqi Guan, Qiang Liu, Jianzhu Ma
MolDiff: Addressing the Atom-Bond Inconsistency Problem in 3D Molecule Diffusion Generation
null
null
null
null
q-bio.BM cs.LG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep generative models have recently achieved superior performance in 3D molecule generation. Most of them first generate atoms and then add chemical bonds based on the generated atoms in a post-processing manner. However, there might be no corresponding bond solution for the temporally generated atoms as their locations are generated without considering potential bonds. We define this problem as the atom-bond inconsistency problem and claim it is the main reason for current approaches to generating unrealistic 3D molecules. To overcome this problem, we propose a new diffusion model called MolDiff which can generate atoms and bonds simultaneously while still maintaining their consistency by explicitly modeling the dependence between their relationships. We evaluated the generation ability of our proposed model and the quality of the generated molecules using criteria related to both geometry and chemical properties. The empirical studies showed that our model outperforms previous approaches, achieving a three-fold improvement in success rate and generating molecules with significantly better quality.
[ { "created": "Thu, 11 May 2023 08:11:19 GMT", "version": "v1" } ]
2023-05-15
[ [ "Peng", "Xingang", "" ], [ "Guan", "Jiaqi", "" ], [ "Liu", "Qiang", "" ], [ "Ma", "Jianzhu", "" ] ]
Deep generative models have recently achieved superior performance in 3D molecule generation. Most of them first generate atoms and then add chemical bonds based on the generated atoms in a post-processing manner. However, there might be no corresponding bond solution for the temporally generated atoms as their locations are generated without considering potential bonds. We define this problem as the atom-bond inconsistency problem and claim it is the main reason for current approaches to generating unrealistic 3D molecules. To overcome this problem, we propose a new diffusion model called MolDiff which can generate atoms and bonds simultaneously while still maintaining their consistency by explicitly modeling the dependence between their relationships. We evaluated the generation ability of our proposed model and the quality of the generated molecules using criteria related to both geometry and chemical properties. The empirical studies showed that our model outperforms previous approaches, achieving a three-fold improvement in success rate and generating molecules with significantly better quality.