Dataset schema (field: type, length range or class count):
  id:             string, length 9–13
  submitter:      string, length 4–48
  authors:        string, length 4–9.62k
  title:          string, length 4–343
  comments:       string, length 2–480
  journal-ref:    string, length 9–309
  doi:            string, length 12–138
  report-no:      string, 277 distinct values
  categories:     string, length 8–87
  license:        string, 9 distinct values
  orig_abstract:  string, length 27–3.76k
  versions:       list, length 1–15
  update_date:    string, length 10–10
  authors_parsed: list, length 1–147
  abstract:       string, length 24–3.75k
1706.07992
Juan B Gutierrez
Elizabeth D. Trippe and Jacob B. Aguilar and Yi H. Yan and Mustafa V. Nural and Jessica A. Brady and Mehdi Assefi and Saeid Safaei and Mehdi Allahyari and Seyedamin Pouriyeh and Mary R. Galinski and Jessica C. Kissinger and Juan B. Gutierrez
A Vision for Health Informatics: Introducing the SKED Framework. An Extensible Architecture for Scientific Knowledge Extraction from Data
8 pages, 4 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goals of the Triple Aim of health care and the goals of P4 medicine outline objectives that require a significant health informatics component. However, the goals do not specify how all of the new individual patient data will be combined in meaningful ways with data from other sources, such as epidemiological data, to promote the health of individuals and society. We seem to have more data than ever before but few resources and means to use it efficiently. We need a general, extensible solution that integrates and homogenizes data of disparate origin, incompatible formats, and multiple spatial and temporal scales. To address this problem, we introduce the Scientific Knowledge Extraction from Data (SKED) architecture, a technology-agnostic framework that minimizes the overhead of data integration, permits reuse of analytical pipelines, and guarantees reproducible quantitative results. The SKED architecture consists of a Resource Allocation Service to locate resources and a definition of data primitives to simplify and harmonize data. SKED allows automated knowledge discovery and provides a platform for the realization of the major goals of modern health care.
[ { "created": "Sat, 24 Jun 2017 19:05:04 GMT", "version": "v1" } ]
2017-06-27
[ [ "Trippe", "Elizabeth D.", "" ], [ "Aguilar", "Jacob B.", "" ], [ "Yan", "Yi H.", "" ], [ "Nural", "Mustafa V.", "" ], [ "Brady", "Jessica A.", "" ], [ "Assefi", "Mehdi", "" ], [ "Safaei", "Saeid", "" ], ...
The goals of the Triple Aim of health care and the goals of P4 medicine outline objectives that require a significant health informatics component. However, the goals do not specify how all of the new individual patient data will be combined in meaningful ways with data from other sources, such as epidemiological data, to promote the health of individuals and society. We seem to have more data than ever before but few resources and means to use it efficiently. We need a general, extensible solution that integrates and homogenizes data of disparate origin, incompatible formats, and multiple spatial and temporal scales. To address this problem, we introduce the Scientific Knowledge Extraction from Data (SKED) architecture, a technology-agnostic framework that minimizes the overhead of data integration, permits reuse of analytical pipelines, and guarantees reproducible quantitative results. The SKED architecture consists of a Resource Allocation Service to locate resources and a definition of data primitives to simplify and harmonize data. SKED allows automated knowledge discovery and provides a platform for the realization of the major goals of modern health care.
q-bio/0406024
Henri Laurie
Henri Laurie and Rudi Penne
Projective geometry for human motion, with an application to injury risk
19 pp, 4 figs. To be published in SIAM Journal on Applied Mathematics
null
null
null
q-bio.QM q-bio.OT
null
We give an exposition of Plücker vectors for a system of joint axes in projective 3-space. We use Plücker vectors to analyse dependencies among joint axes, and in particular show that two rotational joints rigidly joined by a bar, each with 3 degrees of freedom, always form a 5-dimensional system. We introduce the concept of reduced redundancy in a dependent set of projective lines, and argue that reduced redundancy in the axes of a body position increases injury risk. We apply this to a simple two-joint model of bowling in cricket, and show by analysis of some experimental data that reduced redundancy around ball release is observed in some cases.
[ { "created": "Fri, 11 Jun 2004 07:29:22 GMT", "version": "v1" } ]
2007-05-23
[ [ "Laurie", "Henri", "" ], [ "Penne", "Rudi", "" ] ]
We give an exposition of Plücker vectors for a system of joint axes in projective 3-space. We use Plücker vectors to analyse dependencies among joint axes, and in particular show that two rotational joints rigidly joined by a bar, each with 3 degrees of freedom, always form a 5-dimensional system. We introduce the concept of reduced redundancy in a dependent set of projective lines, and argue that reduced redundancy in the axes of a body position increases injury risk. We apply this to a simple two-joint model of bowling in cricket, and show by analysis of some experimental data that reduced redundancy around ball release is observed in some cases.
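The rank claim in this abstract is easy to check numerically. Below is an illustrative numpy sketch (not the authors' code): the line through a point p with direction d has Plücker coordinates (d, p × d), and stacking the six rotation axes of two rigidly connected 3-degree-of-freedom joints yields a rank-5 system.

```python
import numpy as np

def plucker(p, d):
    """Pluecker coordinates (direction, moment) of the line through p with direction d."""
    p, d = np.asarray(p, float), np.asarray(d, float)
    return np.concatenate([d, np.cross(p, d)])

# Two spherical (3-DOF) joints rigidly connected by a bar: each joint
# contributes three rotation axes through its centre.
p, q = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
axes = [plucker(p, e) for e in np.eye(3)] + [plucker(q, e) for e in np.eye(3)]
system = np.vstack(axes)              # 6 lines, each a 6-vector

# The six axes are linearly dependent, so the system is 5-dimensional,
# matching the statement in the abstract.
print(np.linalg.matrix_rank(system))  # → 5
```

The dependency is exactly the joint-joint line: it belongs to the bundle of lines through p and to the bundle through q, which drops the rank from 6 to 5.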
2202.02211
André Barreira da Silva Rocha
Annick Laruelle, André Rocha, Claudia Manini, José I López, Elena Inarra
Effects of heterogeneity on cancer: a game theory perspective
16 pages, 3 figures
null
null
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, we explore interactions between cancer cells by using the hawk-dove game. We analyze the heterogeneity of tumors by considering games with populations composed of 2 or 3 types of cells. We determine what strategies are evolutionarily stable in the 2-type and 3-type population games and the corresponding expected payoffs. Our results show that the payoff of the best-off cell in the 2-type population game is higher than that of the best-off cell in the 3-type population game. Translating these mathematical findings to the field of oncology suggests that a branching-type tumor pursues a less aggressive course than a punctuated-type one. Some histological and genomic data of clear cell renal cell carcinomas are consistent with these results.
[ { "created": "Fri, 4 Feb 2022 16:13:09 GMT", "version": "v1" } ]
2022-02-07
[ [ "Laruelle", "Annick", "" ], [ "Rocha", "André", "" ], [ "Manini", "Claudia", "" ], [ "López", "José I", "" ], [ "Inarra", "Elena", "" ] ]
In this study, we explore interactions between cancer cells by using the hawk-dove game. We analyze the heterogeneity of tumors by considering games with populations composed of 2 or 3 types of cells. We determine what strategies are evolutionarily stable in the 2-type and 3-type population games and the corresponding expected payoffs. Our results show that the payoff of the best-off cell in the 2-type population game is higher than that of the best-off cell in the 3-type population game. Translating these mathematical findings to the field of oncology suggests that a branching-type tumor pursues a less aggressive course than a punctuated-type one. Some histological and genomic data of clear cell renal cell carcinomas are consistent with these results.
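The hawk-dove game underlying this abstract can be illustrated with a minimal replicator-dynamics simulation. This is a textbook sketch, not the paper's model; the payoff values V and C are arbitrary choices with V < C, for which the mixed evolutionarily stable strategy has hawk frequency V/C.

```python
import numpy as np

# Hawk-dove payoffs: V = resource value, C = cost of an escalated fight (V < C).
V, C = 2.0, 6.0
A = np.array([[(V - C) / 2, V],       # payoff to Hawk vs (Hawk, Dove)
              [0.0,         V / 2]])  # payoff to Dove vs (Hawk, Dove)

# Discrete-time replicator dynamics on the frequency x of Hawks.
x = 0.1
for _ in range(20000):
    pop = np.array([x, 1.0 - x])
    f = A @ pop                       # expected payoff of each strategy
    x += 0.01 * x * (f[0] - pop @ f)  # Hawks grow when they beat the average

print(round(x, 3))  # → 0.333, the mixed ESS x* = V/C
```

At x* = V/C the two strategies earn equal expected payoffs, so neither can invade; the paper's 2-type and 3-type population games extend this computation to heterogeneous tumors.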
1611.04080
Peter Kekenes-Huskey
Erik C. Cook, Bin Sun, Peter M. Kekenes-Huskey, Trevor P Creamer
Electrostatic Forces Mediate Fast Association of Calmodulin and the Intrinsically Disordered Regulatory Domain of Calcineurin
66 pgs, 15 figures. Pre-print
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intrinsically disordered proteins (IDPs) and proteins with intrinsically disordered regions (IDRs) govern a daunting number of physiological processes. For such proteins, molecular mechanisms governing their interactions with proteins involved in signal transduction pathways remain unclear. Using the folded, calcium-loaded calmodulin (CaM) interaction with the calcineurin regulatory IDP as a prototype for IDP-mediated signal transduction events, we uncover the interplay of IDP structure and electrostatic interactions in determining the kinetics of protein-protein association. Using an array of biophysical approaches including stopped-flow and computational simulation, we quantify the relative contributions of electrostatic interactions and conformational ensemble characteristics in determining association kinetics of calmodulin (CaM) and the calcineurin regulatory domain (CaN RD). Our chief findings are that CaM/CN RD association rates are strongly dependent on ionic strength and that observed rates are largely determined by the electrostatically-driven interaction strength between CaM and the narrow CaN RD calmodulin recognition domain. These studies highlight a molecular mechanism of controlling signal transduction kinetics that may be utilized in wide-ranging signaling cascades that involve IDPs.
[ { "created": "Sun, 13 Nov 2016 04:22:42 GMT", "version": "v1" } ]
2016-11-15
[ [ "Cook", "Erik C.", "" ], [ "Sun", "Bin", "" ], [ "Kekenes-Huskey", "Peter M.", "" ], [ "Creamer", "Trevor P", "" ] ]
Intrinsically disordered proteins (IDPs) and proteins with intrinsically disordered regions (IDRs) govern a daunting number of physiological processes. For such proteins, molecular mechanisms governing their interactions with proteins involved in signal transduction pathways remain unclear. Using the folded, calcium-loaded calmodulin (CaM) interaction with the calcineurin regulatory IDP as a prototype for IDP-mediated signal transduction events, we uncover the interplay of IDP structure and electrostatic interactions in determining the kinetics of protein-protein association. Using an array of biophysical approaches including stopped-flow and computational simulation, we quantify the relative contributions of electrostatic interactions and conformational ensemble characteristics in determining association kinetics of calmodulin (CaM) and the calcineurin regulatory domain (CaN RD). Our chief findings are that CaM/CN RD association rates are strongly dependent on ionic strength and that observed rates are largely determined by the electrostatically-driven interaction strength between CaM and the narrow CaN RD calmodulin recognition domain. These studies highlight a molecular mechanism of controlling signal transduction kinetics that may be utilized in wide-ranging signaling cascades that involve IDPs.
q-bio/0605004
Reka Albert
Madalena Chaves, Eduardo D. Sontag and Reka Albert
Methods of robustness analysis for Boolean models of gene control networks
29 pages, 8 figures, accepted in IEE Proc. Systems Biology
null
null
null
q-bio.MN q-bio.QM
null
As a discrete approach to genetic regulatory networks, Boolean models provide an essential qualitative description of the structure of interactions among genes and proteins. Boolean models generally assume only two possible states (expressed or not expressed) for each gene or protein in the network as well as a high level of synchronization among the various regulatory processes. In this paper, we discuss and compare two possible methods of adapting qualitative models to incorporate the continuous-time character of regulatory networks. The first method consists of introducing asynchronous updates in the Boolean model. In the second method, we adopt the approach introduced by L. Glass to obtain a set of piecewise linear differential equations which continuously describe the states of each gene or protein in the network. We apply both methods to a particular example: a Boolean model of the segment polarity gene network of Drosophila melanogaster. We analyze the dynamics of the model, and provide a theoretical characterization of the model's gene pattern prediction as a function of the timescales of the various processes.
[ { "created": "Mon, 1 May 2006 18:44:03 GMT", "version": "v1" } ]
2007-05-23
[ [ "Chaves", "Madalena", "" ], [ "Sontag", "Eduardo D.", "" ], [ "Albert", "Reka", "" ] ]
As a discrete approach to genetic regulatory networks, Boolean models provide an essential qualitative description of the structure of interactions among genes and proteins. Boolean models generally assume only two possible states (expressed or not expressed) for each gene or protein in the network as well as a high level of synchronization among the various regulatory processes. In this paper, we discuss and compare two possible methods of adapting qualitative models to incorporate the continuous-time character of regulatory networks. The first method consists of introducing asynchronous updates in the Boolean model. In the second method, we adopt the approach introduced by L. Glass to obtain a set of piecewise linear differential equations which continuously describe the states of each gene or protein in the network. We apply both methods to a particular example: a Boolean model of the segment polarity gene network of Drosophila melanogaster. We analyze the dynamics of the model, and provide a theoretical characterization of the model's gene pattern prediction as a function of the timescales of the various processes.
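The two update schemes compared in this abstract can be sketched on a toy two-gene network. This is an illustrative example only; the paper's network is the much larger Drosophila segment polarity model.

```python
# Toy Boolean network: a two-gene negative feedback loop.
rules = {
    "A": lambda s: not s["B"],   # A is repressed by B
    "B": lambda s: s["A"],       # B is activated by A
}

def sync_step(state):
    """All genes update simultaneously (the classic Boolean scheme)."""
    return {g: f(state) for g, f in rules.items()}

def async_step(state, gene):
    """Only the chosen gene updates (one event of an asynchronous scheme)."""
    new = dict(state)
    new[gene] = rules[gene](state)
    return new

state = {"A": True, "B": True}
traj = [state]
for _ in range(4):
    traj.append(sync_step(traj[-1]))
print(traj[0] == traj[4])  # → True: synchronous updates give a period-4 cycle
```

Under asynchronous updating, the order in which genes fire becomes an extra degree of freedom, which is how timescale effects like those analyzed in the paper enter a qualitative model.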
q-bio/0412035
Carl Troein
Björn Samuelsson and Carl Troein
Counting attractors in synchronously updated random Boolean networks
11 pages, 6 figures
null
null
LU TP 04-43
q-bio.MN cond-mat.soft
null
Despite their apparent simplicity, random Boolean networks display a rich variety of dynamical behaviors. Much work has been focused on the properties and abundance of attractors. We here derive an expression for the number of attractors in the special case of one input per node. Approximating some other non-chaotic networks to be of this class, we apply the analytic results to them. For this approximation, we observe a strikingly good agreement on the numbers of attractors of various lengths. Furthermore, we find that for long cycle lengths, there are some properties that seem strange in comparison to real dynamical systems. However, those properties can be interesting from the viewpoint of constraint satisfaction problems.
[ { "created": "Thu, 16 Dec 2004 19:37:53 GMT", "version": "v1" } ]
2007-05-23
[ [ "Samuelsson", "Björn", "" ], [ "Troein", "Carl", "" ] ]
Despite their apparent simplicity, random Boolean networks display a rich variety of dynamical behaviors. Much work has been focused on the properties and abundance of attractors. We here derive an expression for the number of attractors in the special case of one input per node. Approximating some other non-chaotic networks to be of this class, we apply the analytic results to them. For this approximation, we observe a strikingly good agreement on the numbers of attractors of various lengths. Furthermore, we find that for long cycle lengths, there are some properties that seem strange in comparison to real dynamical systems. However, those properties can be interesting from the viewpoint of constraint satisfaction problems.
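For small networks, attractor counts like those discussed here can be checked by brute force. A sketch assuming synchronous updates and one input per node; the 3-node copy ring is an arbitrary choice for illustration, not the paper's random ensemble.

```python
import itertools

def attractors(n, step):
    """Enumerate attractors of a synchronous Boolean network by brute force."""
    found = set()
    for start in itertools.product([0, 1], repeat=n):
        seen, s = {}, start
        while s not in seen:          # follow the trajectory until it closes
            seen[s] = len(seen)
            s = step(s)
        cycle_start = seen[s]
        cycle = [t for t, i in seen.items() if i >= cycle_start]
        found.add(frozenset(cycle))   # frozenset merges different entry points
    return found

# One input per node: a 3-node ring where each node copies its predecessor.
step = lambda s: (s[2], s[0], s[1])
print(len(attractors(3, step)))  # → 4 (two fixed points, two 3-cycles)
```

Brute force scales as 2^N, so it only validates small instances; the paper's contribution is the analytic expression that covers the regime where enumeration is impossible.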
q-bio/0506026
Ulrich Gerland
Richard A. Neher and Ulrich Gerland
DNA as a programmable viscoelastic nanoelement
10 pages, 7 figures; final version to appear in Biophysical Journal
Biophysical Journal, 89, 3846-3855 (2005)
10.1529/biophysj.105.068866
null
q-bio.BM cond-mat.soft
null
The two strands of a DNA molecule with a repetitive sequence can pair into many different basepairing patterns. For perfectly periodic sequences, early bulk experiments of Poerschke indicate the existence of a sliding process, permitting the rapid transition between different relative strand positions [Biophys. Chem. 2 (1974) 83]. Here, we use a detailed theoretical model to study the basepairing dynamics of periodic and nearly periodic DNA. As suggested by Poerschke, DNA sliding is mediated by basepairing defects (bulge loops), which can diffuse along the DNA. Moreover, a shear force f on opposite ends of the two strands yields a characteristic dynamic response: An outward average sliding velocity v~1/N is induced in a double strand of length N, provided f is larger than a threshold f_c. Conversely, if the strands are initially misaligned, they realign even against an external force less than f_c. These dynamics effectively result in a viscoelastic behavior of DNA under shear forces, with properties that are programmable through the choice of the DNA sequence. We find that a small number of mutations in periodic sequences does not prevent DNA sliding, but introduces a time delay in the dynamic response. We clarify the mechanism for the time delay and describe it quantitatively within a phenomenological model. Based on our findings, we suggest new dynamical roles for DNA in artificial nanoscale devices. The basepairing dynamics described here is also relevant for the extension of repetitive sequences inside genomic DNA.
[ { "created": "Fri, 17 Jun 2005 11:59:55 GMT", "version": "v1" }, { "created": "Mon, 19 Sep 2005 13:57:17 GMT", "version": "v2" } ]
2007-05-23
[ [ "Neher", "Richard A.", "" ], [ "Gerland", "Ulrich", "" ] ]
The two strands of a DNA molecule with a repetitive sequence can pair into many different basepairing patterns. For perfectly periodic sequences, early bulk experiments of Poerschke indicate the existence of a sliding process, permitting the rapid transition between different relative strand positions [Biophys. Chem. 2 (1974) 83]. Here, we use a detailed theoretical model to study the basepairing dynamics of periodic and nearly periodic DNA. As suggested by Poerschke, DNA sliding is mediated by basepairing defects (bulge loops), which can diffuse along the DNA. Moreover, a shear force f on opposite ends of the two strands yields a characteristic dynamic response: An outward average sliding velocity v~1/N is induced in a double strand of length N, provided f is larger than a threshold f_c. Conversely, if the strands are initially misaligned, they realign even against an external force less than f_c. These dynamics effectively result in a viscoelastic behavior of DNA under shear forces, with properties that are programmable through the choice of the DNA sequence. We find that a small number of mutations in periodic sequences does not prevent DNA sliding, but introduces a time delay in the dynamic response. We clarify the mechanism for the time delay and describe it quantitatively within a phenomenological model. Based on our findings, we suggest new dynamical roles for DNA in artificial nanoscale devices. The basepairing dynamics described here is also relevant for the extension of repetitive sequences inside genomic DNA.
2406.15669
Jason Yang
Jason Yang, Ariane Mora, Shengchao Liu, Bruce J. Wittmann, Anima Anandkumar, Frances H. Arnold, Yisong Yue
CARE: a Benchmark Suite for the Classification and Retrieval of Enzymes
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Enzymes are important proteins that catalyze chemical reactions. In recent years, machine learning methods have emerged to predict enzyme function from sequence; however, there are no standardized benchmarks to evaluate these methods. We introduce CARE, a benchmark and dataset suite for the Classification And Retrieval of Enzymes (CARE). CARE centers on two tasks: (1) classification of a protein sequence by its enzyme commission (EC) number and (2) retrieval of an EC number given a chemical reaction. For each task, we design train-test splits to evaluate different kinds of out-of-distribution generalization that are relevant to real use cases. For the classification task, we provide baselines for state-of-the-art methods. Because the retrieval task has not been previously formalized, we propose a method called Contrastive Reaction-EnzymE Pretraining (CREEP) as one of the first baselines for this task. CARE is available at https://github.com/jsunn-y/CARE/.
[ { "created": "Fri, 21 Jun 2024 22:01:05 GMT", "version": "v1" } ]
2024-06-25
[ [ "Yang", "Jason", "" ], [ "Mora", "Ariane", "" ], [ "Liu", "Shengchao", "" ], [ "Wittmann", "Bruce J.", "" ], [ "Anandkumar", "Anima", "" ], [ "Arnold", "Frances H.", "" ], [ "Yue", "Yisong", "" ] ]
Enzymes are important proteins that catalyze chemical reactions. In recent years, machine learning methods have emerged to predict enzyme function from sequence; however, there are no standardized benchmarks to evaluate these methods. We introduce CARE, a benchmark and dataset suite for the Classification And Retrieval of Enzymes (CARE). CARE centers on two tasks: (1) classification of a protein sequence by its enzyme commission (EC) number and (2) retrieval of an EC number given a chemical reaction. For each task, we design train-test splits to evaluate different kinds of out-of-distribution generalization that are relevant to real use cases. For the classification task, we provide baselines for state-of-the-art methods. Because the retrieval task has not been previously formalized, we propose a method called Contrastive Reaction-EnzymE Pretraining (CREEP) as one of the first baselines for this task. CARE is available at https://github.com/jsunn-y/CARE/.
1702.02563
Liangdong Zhou
Liangdong Zhou, Bastian Harrach, and Jin Keun Seo
Monotonicity-based Electrical Impedance Tomography for Lung Imaging
27 pages, 17 figures
null
10.1088/1361-6420/aaaf84
null
q-bio.QM math.OC physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a monotonicity-based spatiotemporal conductivity imaging method for continuous regional lung monitoring using electrical impedance tomography (EIT). The EIT data (i.e., the boundary current-voltage data) can be decomposed into pulmonary, cardiac and other parts using their different periodic natures. The time-differential current-voltage operator corresponding to lung ventilation can be viewed as either positive or negative semi-definite owing to monotonic conductivity changes within the lung regions. We used these monotonicity constraints to improve the quality of lung EIT imaging. We tested the proposed method in numerical simulations, phantom experiments and human experiments.
[ { "created": "Wed, 8 Feb 2017 01:30:23 GMT", "version": "v1" }, { "created": "Mon, 5 Feb 2018 17:37:18 GMT", "version": "v2" } ]
2018-02-16
[ [ "Zhou", "Liangdong", "" ], [ "Harrach", "Bastian", "" ], [ "Seo", "Jin Keun", "" ] ]
This paper presents a monotonicity-based spatiotemporal conductivity imaging method for continuous regional lung monitoring using electrical impedance tomography (EIT). The EIT data (i.e., the boundary current-voltage data) can be decomposed into pulmonary, cardiac and other parts using their different periodic natures. The time-differential current-voltage operator corresponding to lung ventilation can be viewed as either positive or negative semi-definite owing to monotonic conductivity changes within the lung regions. We used these monotonicity constraints to improve the quality of lung EIT imaging. We tested the proposed method in numerical simulations, phantom experiments and human experiments.
1902.10070
Ludovico Minati
Ludovico Minati, Natsue Yoshimura, Mattia Frasca, Stanislaw Drozdz, Yasuharu Koike
Warped phase coherence: an empirical synchronization measure combining phase and amplitude information
null
Chaos 29, 021102 (2019)
10.1063/1.5082749
null
q-bio.NC nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The entrainment between weakly-coupled nonlinear oscillators, as well as between complex signals such as those representing physiological activity, is frequently assessed in terms of whether a stable relationship is detectable between the instantaneous phases extracted from the measured or simulated time-series via the analytic signal. Here, we demonstrate that adding a possibly complex constant value to this normally null-mean signal has a non-trivial warping effect. Among other consequences, this introduces a level of sensitivity to the amplitude fluctuations and average relative phase. By means of simulations of Rössler systems and experiments on single-transistor oscillator networks, it is shown that the resulting coherence measure may have an empirical value in improving the inference of the structural couplings from the dynamics. When tentatively applied to the electroencephalogram recorded while performing imaginary and real movements, this straightforward modification of the phase locking value substantially improved the classification accuracy. Hence, its possible practical relevance in brain-computer and brain-machine interfaces deserves consideration.
[ { "created": "Wed, 6 Feb 2019 14:01:34 GMT", "version": "v1" } ]
2019-02-27
[ [ "Minati", "Ludovico", "" ], [ "Yoshimura", "Natsue", "" ], [ "Frasca", "Mattia", "" ], [ "Drozdz", "Stanislaw", "" ], [ "Koike", "Yasuharu", "" ] ]
The entrainment between weakly-coupled nonlinear oscillators, as well as between complex signals such as those representing physiological activity, is frequently assessed in terms of whether a stable relationship is detectable between the instantaneous phases extracted from the measured or simulated time-series via the analytic signal. Here, we demonstrate that adding a possibly complex constant value to this normally null-mean signal has a non-trivial warping effect. Among other consequences, this introduces a level of sensitivity to the amplitude fluctuations and average relative phase. By means of simulations of Rössler systems and experiments on single-transistor oscillator networks, it is shown that the resulting coherence measure may have an empirical value in improving the inference of the structural couplings from the dynamics. When tentatively applied to the electroencephalogram recorded while performing imaginary and real movements, this straightforward modification of the phase locking value substantially improved the classification accuracy. Hence, its possible practical relevance in brain-computer and brain-machine interfaces deserves consideration.
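A minimal reading of the construction described here: compute the analytic signal, add a constant c before extracting the phase, and take the standard phase locking value. The normalization and the choice of c below are assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (the standard Hilbert-transform construction)."""
    n = len(x)                       # n assumed even here
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0], h[n // 2] = 1, 1
    h[1:n // 2] = 2                  # keep positive frequencies, drop negative ones
    return np.fft.ifft(spec * h)

def plv(x, y, c=0.0):
    """Phase locking value; c != 0 gives a 'warped' variant in which a constant
    is added to the analytic signal before the phase is extracted."""
    px = np.angle(analytic(x) + c)
    py = np.angle(analytic(y) + c)
    return abs(np.mean(np.exp(1j * (px - py))))

t = 2 * np.pi * 8 * np.arange(1024) / 1024   # 8 full cycles
x, y = np.sin(t), 2.0 * np.sin(t)            # same phase, different amplitude

print(round(plv(x, y), 3))     # → 1.0 (the plain PLV is blind to amplitude)
print(plv(x, y, c=1.0) < 1.0)  # → True: warping makes the measure amplitude-aware
```

The plain PLV saturates at 1 for any amplitude-scaled copy of a signal; the added constant breaks that invariance, which is the sensitivity to amplitude fluctuations the abstract describes.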
2406.03938
Yuval Rabani
Yuval Rabani and Leonard J. Schulman and Alistair Sinclair
Diversity in Evolutionary Dynamics
null
null
null
null
q-bio.PE cs.CE cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the dynamics imposed by natural selection on the populations of two competing, sexually reproducing, haploid species. In this setting, the fitness of any genome varies over time due to the changing population mix of the competing species; crucially, this fitness variation arises naturally from the model itself, without the need for imposing it exogenously as is typically the case. Previous work on this model [14] showed that, in the special case where each of the two species exhibits just two phenotypes, genetic diversity is maintained at all times. This finding supported the tenet that sexual reproduction is advantageous because it promotes diversity, which increases the survivability of a species. In the present paper we consider the more realistic case where there are more than two phenotypes available to each species. The conclusions about diversity in general turn out to be very different from the two-phenotype case. Our first result is negative: namely, we show that sexual reproduction does not guarantee the maintenance of diversity at all times, i.e., the result of [14] does not generalize. Our counterexample consists of two competing species with just three phenotypes each. We show that, for any time~$t_0$ and any $\varepsilon>0$, there is a time $t\ge t_0$ at which the combined diversity of both species is smaller than~$\varepsilon$. Our main result is a complementary positive statement, which says that in any non-degenerate example, diversity is maintained in a weaker, "infinitely often" sense. Thus, our results refute the supposition that sexual reproduction ensures diversity at all times, but affirm a weaker assertion that extended periods of high diversity are necessarily a recurrent event.
[ { "created": "Thu, 6 Jun 2024 10:24:44 GMT", "version": "v1" }, { "created": "Fri, 5 Jul 2024 09:20:47 GMT", "version": "v2" } ]
2024-07-18
[ [ "Rabani", "Yuval", "" ], [ "Schulman", "Leonard J.", "" ], [ "Sinclair", "Alistair", "" ] ]
We consider the dynamics imposed by natural selection on the populations of two competing, sexually reproducing, haploid species. In this setting, the fitness of any genome varies over time due to the changing population mix of the competing species; crucially, this fitness variation arises naturally from the model itself, without the need for imposing it exogenously as is typically the case. Previous work on this model [14] showed that, in the special case where each of the two species exhibits just two phenotypes, genetic diversity is maintained at all times. This finding supported the tenet that sexual reproduction is advantageous because it promotes diversity, which increases the survivability of a species. In the present paper we consider the more realistic case where there are more than two phenotypes available to each species. The conclusions about diversity in general turn out to be very different from the two-phenotype case. Our first result is negative: namely, we show that sexual reproduction does not guarantee the maintenance of diversity at all times, i.e., the result of [14] does not generalize. Our counterexample consists of two competing species with just three phenotypes each. We show that, for any time~$t_0$ and any $\varepsilon>0$, there is a time $t\ge t_0$ at which the combined diversity of both species is smaller than~$\varepsilon$. Our main result is a complementary positive statement, which says that in any non-degenerate example, diversity is maintained in a weaker, "infinitely often" sense. Thus, our results refute the supposition that sexual reproduction ensures diversity at all times, but affirm a weaker assertion that extended periods of high diversity are necessarily a recurrent event.
1205.0335
John Hopfield
Filip Ponulak and John J. Hopfield
Rapid, parallel path planning by propagating wavefronts of spiking neural activity
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Efficient path planning and navigation are critical for animals, robotics, logistics and transportation. We study a model in which spatial navigation problems can rapidly be solved in the brain by parallel mental exploration of alternative routes using propagating waves of neural activity. A wave of spiking activity propagates through a hippocampus-like network, altering the synaptic connectivity. The resulting vector field of synaptic change then guides a simulated animal to the appropriate selected target locations. We demonstrate that the navigation problem can be solved using realistic, local synaptic plasticity rules during a single passage of a wavefront. Our model can find optimal solutions for competing possible targets or learn and navigate in multiple environments. The model provides a hypothesis on the possible computational mechanisms for optimal path planning in the brain; at the same time, it is useful for neuromorphic implementations, where the parallelism of information processing proposed here can be fully harnessed in hardware.
[ { "created": "Wed, 2 May 2012 06:36:56 GMT", "version": "v1" } ]
2012-05-03
[ [ "Ponulak", "Filip", "" ], [ "Hopfield", "John J.", "" ] ]
Efficient path planning and navigation are critical for animals, robotics, logistics and transportation. We study a model in which spatial navigation problems can rapidly be solved in the brain by parallel mental exploration of alternative routes using propagating waves of neural activity. A wave of spiking activity propagates through a hippocampus-like network, altering the synaptic connectivity. The resulting vector field of synaptic change then guides a simulated animal to the appropriate selected target locations. We demonstrate that the navigation problem can be solved using realistic, local synaptic plasticity rules during a single passage of a wavefront. Our model can find optimal solutions for competing possible targets or learn and navigate in multiple environments. The model provides a hypothesis on the possible computational mechanisms for optimal path planning in the brain; at the same time, it is useful for neuromorphic implementations, where the parallelism of information processing proposed here can be fully harnessed in hardware.
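The propagating-wavefront idea maps onto the classic grid wavefront planner. The sketch below is not the paper's spiking model: breadth-first-search arrival time stands in for the first spike time, and descending the arrival-time field plays the role of the vector field of synaptic change.

```python
from collections import deque

# 0 = free cell, 1 = wall.
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
goal, start = (0, 0), (3, 0)

# Propagate a wavefront (BFS) outward from the goal; each cell records
# its arrival time, the analogue of the first spike time.
dist = {goal: 0}
q = deque([goal])
while q:
    r, c = q.popleft()
    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
        if 0 <= nr < 4 and 0 <= nc < 4 and grid[nr][nc] == 0 and (nr, nc) not in dist:
            dist[(nr, nc)] = dist[(r, c)] + 1
            q.append((nr, nc))

# Descend the arrival-time field from the start: always move toward
# earlier wavefront arrival, which traces a shortest path.
path, cell = [start], start
while cell != goal:
    r, c = cell
    cell = min((n for n in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if n in dist), key=dist.get)
    path.append(cell)

print(len(path) - 1)  # → 7 steps on a shortest path around the walls
```

A single sweep of the wavefront labels every reachable cell at once, which is the "parallel mental exploration of alternative routes" the abstract emphasizes.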
1604.08045
Sacha Epskamp
Sacha Epskamp, Joost Kruis and Maarten Marsman
Estimating psychopathological networks: be careful what you wish for
Published in PlosOne
null
10.1371/journal.pone.0179891
null
q-bio.NC stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network models, in which psychopathological disorders are conceptualized as a complex interplay of psychological and biological components, have become increasingly popular in the recent psychopathological literature. These network models often contain significant numbers of unknown parameters, yet the sample sizes available in psychological research are limited. As such, general assumptions about the true network are introduced to reduce the number of free parameters. Incorporating these assumptions, however, means that the resulting network will reflect the particular structure assumed by the estimation method---a crucial and often ignored aspect of psychopathological networks. For example, observing a sparse structure and simultaneously assuming a sparse structure does not imply that the true model is, in fact, sparse. To illustrate this point, we discuss recent literature and show the effect of the assumption of sparsity in three simulation studies.
[ { "created": "Tue, 26 Apr 2016 12:57:51 GMT", "version": "v1" }, { "created": "Thu, 28 Apr 2016 15:52:45 GMT", "version": "v2" }, { "created": "Sun, 18 Sep 2016 10:53:07 GMT", "version": "v3" }, { "created": "Thu, 1 Jun 2017 09:18:25 GMT", "version": "v4" }, { "cr...
2017-09-13
[ [ "Epskamp", "Sacha", "" ], [ "Kruis", "Joost", "" ], [ "Marsman", "Maarten", "" ] ]
Network models, in which psychopathological disorders are conceptualized as a complex interplay of psychological and biological components, have become increasingly popular in the recent psychopathological literature. These network models often contain significant numbers of unknown parameters, yet the sample sizes available in psychological research are limited. As such, general assumptions about the true network are introduced to reduce the number of free parameters. Incorporating these assumptions, however, means that the resulting network will reflect the particular structure assumed by the estimation method---a crucial and often ignored aspect of psychopathological networks. For example, observing a sparse structure and simultaneously assuming a sparse structure does not imply that the true model is, in fact, sparse. To illustrate this point, we discuss recent literature and show the effect of the assumption of sparsity in three simulation studies.
1408.4081
Liya Wang
Liya Wang and Doreen Ware
CloudSTRUCTURE: infer population STRUCTURE on the cloud
2 pages
null
null
null
q-bio.QM q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present CloudSTRUCTURE, an application for running parallel analyses with the population genetics program STRUCTURE. The HPC-ready application, powered by iPlant cyberinfrastructure, provides a fast (by parallelization) and convenient (through a user-friendly GUI) way to calculate likelihood values across multiple values of K (number of genetic groups) and numbers of iterations. The results are automatically summarized for easier determination of the K value that best fits the data. In addition, CloudSTRUCTURE will reformat STRUCTURE output for use in downstream programs, such as TASSEL for association analysis stratified by population structure effects.
[ { "created": "Mon, 18 Aug 2014 17:59:23 GMT", "version": "v1" } ]
2014-08-19
[ [ "Wang", "Liya", "" ], [ "Ware", "Doreen", "" ] ]
We present CloudSTRUCTURE, an application for running parallel analyses with the population genetics program STRUCTURE. The HPC-ready application, powered by iPlant cyberinfrastructure, provides a fast (by parallelization) and convenient (through a user-friendly GUI) way to calculate likelihood values across multiple values of K (number of genetic groups) and numbers of iterations. The results are automatically summarized for easier determination of the K value that best fits the data. In addition, CloudSTRUCTURE will reformat STRUCTURE output for use in downstream programs, such as TASSEL for association analysis stratified by population structure effects.
1409.1975
Edwin Wang Dr.
Chabane Tibiche and Edwin Wang
GeneNetMiner: accurately mining gene regulatory networks from literature
Related papers can be found at http://www.cancer-systemsbiology.org
null
null
null
q-bio.MN q-bio.GN
http://creativecommons.org/licenses/by-nc-sa/3.0/
GeneNetMiner is standalone software that parses the sentences of iHOP and captures regulatory relations. The regulatory relations are either gene-gene regulations or gene-biological process relations. Capturing gene-biological process relations is a unique feature among tools of this kind. These relations can be used to build gene regulatory networks for specific biological processes, diseases, or phenotypes. Users are able to search genes and biological processes to find the regulatory relationships between them. Each regulatory relationship has been assigned a confidence score, which indicates the probability that the relation is true. Furthermore, it reports the sentence containing the queried terms, which allows users to manually check whether the relation is true if they wish. GeneNetMiner is able to accurately capture the regulatory relationships between genes from the literature.
[ { "created": "Sat, 6 Sep 2014 03:06:08 GMT", "version": "v1" } ]
2014-09-09
[ [ "Tibiche", "Chabane", "" ], [ "Wang", "Edwin", "" ] ]
GeneNetMiner is standalone software that parses the sentences of iHOP and captures regulatory relations. The regulatory relations are either gene-gene regulations or gene-biological process relations. Capturing gene-biological process relations is a unique feature among tools of this kind. These relations can be used to build gene regulatory networks for specific biological processes, diseases, or phenotypes. Users are able to search genes and biological processes to find the regulatory relationships between them. Each regulatory relationship has been assigned a confidence score, which indicates the probability that the relation is true. Furthermore, it reports the sentence containing the queried terms, which allows users to manually check whether the relation is true if they wish. GeneNetMiner is able to accurately capture the regulatory relationships between genes from the literature.
1906.00631
Sayantari Ghosh
Indrani Bose and Sayantari Ghosh
Bifurcation and Criticality
13 Pages, 5 Figures
Journal of Statistical Mechanics: Theory and Experiment, 043403, 2019
10.1088/1742-5468/ab11d8
null
q-bio.QM cond-mat.stat-mech nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Equilibrium and nonequilibrium systems exhibit power-law singularities close to their critical and bifurcation points respectively. A recent study has shown that biochemical nonequilibrium models with positive feedback belong to the universality class of the mean-field Ising model. Through a mapping between the two systems, effective thermodynamic quantities like temperature, magnetic field and order parameter can be expressed in terms of biochemical parameters. In this paper, we demonstrate the equivalence using a simple deterministic approach. As an illustration we consider a model of population dynamics exhibiting the Allee effect for which we determine the exact phase diagram. We further consider a two-variable model of positive feedback, the genetic toggle, and discuss the conditions under which the model belongs to the mean-field Ising universality class. In the biochemical models, the supercritical pitchfork bifurcation point serves as the critical point. The dynamical behaviour predicted by the two models is in qualitative agreement with experimental observations and opens up the possibility of exploring critical point phenomena in laboratory populations and synthetic biological circuits.
[ { "created": "Mon, 3 Jun 2019 08:36:34 GMT", "version": "v1" } ]
2019-06-12
[ [ "Bose", "Indrani", "" ], [ "Ghosh", "Sayantari", "" ] ]
Equilibrium and nonequilibrium systems exhibit power-law singularities close to their critical and bifurcation points respectively. A recent study has shown that biochemical nonequilibrium models with positive feedback belong to the universality class of the mean-field Ising model. Through a mapping between the two systems, effective thermodynamic quantities like temperature, magnetic field and order parameter can be expressed in terms of biochemical parameters. In this paper, we demonstrate the equivalence using a simple deterministic approach. As an illustration we consider a model of population dynamics exhibiting the Allee effect for which we determine the exact phase diagram. We further consider a two-variable model of positive feedback, the genetic toggle, and discuss the conditions under which the model belongs to the mean-field Ising universality class. In the biochemical models, the supercritical pitchfork bifurcation point serves as the critical point. The dynamical behaviour predicted by the two models is in qualitative agreement with experimental observations and opens up the possibility of exploring critical point phenomena in laboratory populations and synthetic biological circuits.
2405.14120
Matthew Andres Moreno
Matthew Andres Moreno and Mark T. Holder and Jeet Sukumaran
DendroPy 5: a mature Python library for phylogenetic computing
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contemporary bioinformatics has seen profound new visibility into the composition, structure, and history of the natural world around us. Arguably, the central pillar of bioinformatics is phylogenetics -- the study of hereditary relatedness among organisms. Insight from phylogenetic analysis has touched nearly every corner of biology. Examples range across natural history, population genetics and phylogeography, conservation biology, public health, medicine, in vivo and in silico experimental evolution, application-oriented evolutionary algorithms, and beyond. High-throughput genetic and phenotypic data has realized groundbreaking results, in large part, in conjunction with open-source software used to process and analyze it. Indeed, the preceding decades have ushered in a flourishing ecosystem of bioinformatics software applications and libraries. Over the course of its nearly fifteen-year history, the DendroPy library for phylogenetic computation in Python has established a generalist niche in serving the bioinformatics community. Here, we report on the recent major release of the library, DendroPy version 5. The software release represents a major milestone in transitioning the library to a sustainable long-term development and maintenance trajectory. As such, this work positions DendroPy to continue fulfilling a key supporting role in phyloinformatics infrastructure.
[ { "created": "Thu, 23 May 2024 02:47:39 GMT", "version": "v1" }, { "created": "Thu, 30 May 2024 04:54:02 GMT", "version": "v2" } ]
2024-05-31
[ [ "Moreno", "Matthew Andres", "" ], [ "Holder", "Mark T.", "" ], [ "Sukumaran", "Jeet", "" ] ]
Contemporary bioinformatics has seen profound new visibility into the composition, structure, and history of the natural world around us. Arguably, the central pillar of bioinformatics is phylogenetics -- the study of hereditary relatedness among organisms. Insight from phylogenetic analysis has touched nearly every corner of biology. Examples range across natural history, population genetics and phylogeography, conservation biology, public health, medicine, in vivo and in silico experimental evolution, application-oriented evolutionary algorithms, and beyond. High-throughput genetic and phenotypic data has realized groundbreaking results, in large part, in conjunction with open-source software used to process and analyze it. Indeed, the preceding decades have ushered in a flourishing ecosystem of bioinformatics software applications and libraries. Over the course of its nearly fifteen-year history, the DendroPy library for phylogenetic computation in Python has established a generalist niche in serving the bioinformatics community. Here, we report on the recent major release of the library, DendroPy version 5. The software release represents a major milestone in transitioning the library to a sustainable long-term development and maintenance trajectory. As such, this work positions DendroPy to continue fulfilling a key supporting role in phyloinformatics infrastructure.
1103.5181
Ying Ding
Huijun Wang, Ying Ding, Jie Tang, Xiao Dong, Bing He, Judy Qiu, David J. Wild
Finding Complex Biological Relationships in Recent PubMed Articles Using Bio-LDA
14 pages, 8 figures, 10 tables
PLoS ONE (2011) 6(3): e17243
10.1371/journal.pone.0017243
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The overwhelming amount of available scholarly literature in the life sciences poses significant challenges to scientists wishing to keep up with important developments related to their research, but also provides a useful resource for the discovery of recent information concerning genes, diseases, compounds and the interactions between them. In this paper, we describe an algorithm called Bio-LDA that uses extracted biological terminology to automatically identify latent topics, and provides a variety of measures to uncover putative relations among topics and bio-terms. Relationships identified using those approaches are combined with existing data in life science datasets to provide additional insight. Three case studies demonstrate the utility of the Bio-LDA model, including association prediction, association search and connectivity map generation. This combined approach offers new opportunities for knowledge discovery in many areas of biology including target identification, lead hopping and drug repurposing.
[ { "created": "Sun, 27 Mar 2011 04:13:16 GMT", "version": "v1" } ]
2011-03-29
[ [ "Wang", "Huijun", "" ], [ "Ding", "Ying", "" ], [ "Tang", "Jie", "" ], [ "Dong", "Xiao", "" ], [ "He", "Bing", "" ], [ "Qiu", "Judy", "" ], [ "Wild", "David J.", "" ] ]
The overwhelming amount of available scholarly literature in the life sciences poses significant challenges to scientists wishing to keep up with important developments related to their research, but also provides a useful resource for the discovery of recent information concerning genes, diseases, compounds and the interactions between them. In this paper, we describe an algorithm called Bio-LDA that uses extracted biological terminology to automatically identify latent topics, and provides a variety of measures to uncover putative relations among topics and bio-terms. Relationships identified using those approaches are combined with existing data in life science datasets to provide additional insight. Three case studies demonstrate the utility of the Bio-LDA model, including association prediction, association search and connectivity map generation. This combined approach offers new opportunities for knowledge discovery in many areas of biology including target identification, lead hopping and drug repurposing.
1110.0763
Denis Boyer
Denis Boyer, Margaret C. Crofoot, Peter D. Walsh
Non-random walks in monkeys and humans
18 pages, 3 figures
J. R. Soc. Interface 9, 842-847 (2012)
null
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Principles of self-organization play an increasingly central role in models of human activity. Notably, individual human displacements exhibit strongly recurrent patterns that are characterized by scaling laws and can be mechanistically modelled as self-attracting walks. Recurrence is not, however, unique to human displacements. Here we report that the mobility patterns of wild capuchin monkeys are not random walks and exhibit recurrence properties similar to those of cell phone users, suggesting spatial cognition mechanisms shared with humans. We also show that the highly uneven visitation patterns within monkey home ranges are not entirely self-generated but are forced by spatio-temporal habitat heterogeneities. If models of human mobility are to become useful tools for predictive purposes, they will need to consider the interaction between memory and environmental heterogeneities.
[ { "created": "Tue, 4 Oct 2011 17:34:20 GMT", "version": "v1" }, { "created": "Tue, 27 Mar 2012 16:20:20 GMT", "version": "v2" } ]
2012-03-28
[ [ "Boyer", "Denis", "" ], [ "Crofoot", "Margaret C.", "" ], [ "Walsh", "Peter D.", "" ] ]
Principles of self-organization play an increasingly central role in models of human activity. Notably, individual human displacements exhibit strongly recurrent patterns that are characterized by scaling laws and can be mechanistically modelled as self-attracting walks. Recurrence is not, however, unique to human displacements. Here we report that the mobility patterns of wild capuchin monkeys are not random walks and exhibit recurrence properties similar to those of cell phone users, suggesting spatial cognition mechanisms shared with humans. We also show that the highly uneven visitation patterns within monkey home ranges are not entirely self-generated but are forced by spatio-temporal habitat heterogeneities. If models of human mobility are to become useful tools for predictive purposes, they will need to consider the interaction between memory and environmental heterogeneities.
1007.5378
Thierry Rabilloud
Thierry Rabilloud (BBSI), Denis Hochstrasser, Richard J Simpson
Is a gene-centric human proteome project the best way for proteomics to serve biology?
null
Proteomics (2010) epub ahead of print
10.1002/pmic.201000220
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the recent developments in proteomic technologies, a complete human proteome project (HPP) appears feasible for the first time. However, there is still debate as to how it should be designed and what it should encompass. In "proteomics speak", the debate revolves around the central question as to whether a gene-centric or a protein-centric proteomics approach is the most appropriate way forward. In this paper, we try to shed light on what these definitions mean, how large-scale proteomics such as an HPP can fit into the larger omics chorus, and what we can reasonably expect from an HPP in the way it has been proposed so far.
[ { "created": "Fri, 30 Jul 2010 06:45:41 GMT", "version": "v1" } ]
2010-08-31
[ [ "Rabilloud", "Thierry", "", "BBSI" ], [ "Hochstrasser", "Denis", "" ], [ "Simpson", "Richard J", "" ] ]
With the recent developments in proteomic technologies, a complete human proteome project (HPP) appears feasible for the first time. However, there is still debate as to how it should be designed and what it should encompass. In "proteomics speak", the debate revolves around the central question as to whether a gene-centric or a protein-centric proteomics approach is the most appropriate way forward. In this paper, we try to shed light on what these definitions mean, how large-scale proteomics such as an HPP can fit into the larger omics chorus, and what we can reasonably expect from an HPP in the way it has been proposed so far.
1910.05679
Andrew Francis
Mareike Fischer and Andrew Francis
The space of tree-based phylogenetic networks
16 pages, 7 figures
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic networks are generalizations of phylogenetic trees that allow the representation of reticulation events such as horizontal gene transfer or hybridization, and can also represent uncertainty in inference. A subclass of these, tree-based phylogenetic networks, have been introduced to capture the extent to which reticulate evolution nevertheless broadly follows tree-like patterns. Several important operations that change a general phylogenetic network have been developed in recent years, and are important for allowing algorithms to move around spaces of networks; a vital ingredient in finding an optimal network given some biological data. A key such operation is the Nearest Neighbor Interchange, or NNI. While it is already known that the space of unrooted phylogenetic networks is connected under NNI, it has been unclear whether this also holds for the subspace of tree-based networks. In this paper we show that the space of unrooted tree-based phylogenetic networks is indeed connected under the NNI operation. We do so by explicitly showing how to get from one such network to another one without losing tree-basedness along the way. Moreover, we introduce some new concepts, for instance ``shoat networks'', and derive some interesting aspects concerning tree-basedness. Last, we use our results to derive an upper bound on the size of the space of tree-based networks.
[ { "created": "Sun, 13 Oct 2019 03:51:02 GMT", "version": "v1" } ]
2019-10-15
[ [ "Fischer", "Mareike", "" ], [ "Francis", "Andrew", "" ] ]
Phylogenetic networks are generalizations of phylogenetic trees that allow the representation of reticulation events such as horizontal gene transfer or hybridization, and can also represent uncertainty in inference. A subclass of these, tree-based phylogenetic networks, have been introduced to capture the extent to which reticulate evolution nevertheless broadly follows tree-like patterns. Several important operations that change a general phylogenetic network have been developed in recent years, and are important for allowing algorithms to move around spaces of networks; a vital ingredient in finding an optimal network given some biological data. A key such operation is the Nearest Neighbor Interchange, or NNI. While it is already known that the space of unrooted phylogenetic networks is connected under NNI, it has been unclear whether this also holds for the subspace of tree-based networks. In this paper we show that the space of unrooted tree-based phylogenetic networks is indeed connected under the NNI operation. We do so by explicitly showing how to get from one such network to another one without losing tree-basedness along the way. Moreover, we introduce some new concepts, for instance ``shoat networks'', and derive some interesting aspects concerning tree-basedness. Last, we use our results to derive an upper bound on the size of the space of tree-based networks.
0811.2662
Davide Valenti
Alessandro Giuffrida, Graziella Ziino, Davide Valenti, Giorgio Donato, Antonio Panebianco
Application of an interspecific competition model to predict the growth of Aeromonas hydrophila on fish surfaces during the refrigerated storage
16 pages, 1 table, 2 figures
Published in Archiv fur Lebensmittelhygiene, vol. 58, 136-141 (2007)
10.2377/0003-925X-58-136
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The growth of Aeromonas hydrophila and aerobic natural flora (APC) on gilthead seabream surfaces was evaluated during refrigerated storage (21 days). The related growth curves were compared with those obtained by a conventional third-order predictive model, yielding low agreement between observed and predicted data (Root Mean Squared Error = 1.77 for Aeromonas hydrophila and 0.64 for APC). The Lotka-Volterra interspecific competition model was used to calculate the degree of interaction between the two bacterial populations (\beta_{Ah/APC} and \beta_{APC/Ah}, respectively, the interspecific competition coefficients of APC on Aeromonas hydrophila and vice versa). Afterwards, the Lotka-Volterra equations were applied as a tertiary predictive model, taking into account, simultaneously, the environmental fluctuations and the bacterial interspecific competition. This approach yielded a better fit to the observed mean growth curves, with a Root Mean Squared Error of 0.09 for Aeromonas hydrophila and 0.28 for APC. Finally, the authors offer some considerations on the necessary use of competitive models in the context of the new trends in predictive microbiology.
[ { "created": "Mon, 17 Nov 2008 10:36:06 GMT", "version": "v1" } ]
2008-11-18
[ [ "Giuffrida", "Alessandro", "" ], [ "Ziino", "Graziella", "" ], [ "Valenti", "Davide", "" ], [ "Donato", "Giorgio", "" ], [ "Panebianco", "Antonio", "" ] ]
The growth of Aeromonas hydrophila and aerobic natural flora (APC) on gilthead seabream surfaces was evaluated during refrigerated storage (21 days). The related growth curves were compared with those obtained by a conventional third-order predictive model, yielding low agreement between observed and predicted data (Root Mean Squared Error = 1.77 for Aeromonas hydrophila and 0.64 for APC). The Lotka-Volterra interspecific competition model was used to calculate the degree of interaction between the two bacterial populations (\beta_{Ah/APC} and \beta_{APC/Ah}, respectively, the interspecific competition coefficients of APC on Aeromonas hydrophila and vice versa). Afterwards, the Lotka-Volterra equations were applied as a tertiary predictive model, taking into account, simultaneously, the environmental fluctuations and the bacterial interspecific competition. This approach yielded a better fit to the observed mean growth curves, with a Root Mean Squared Error of 0.09 for Aeromonas hydrophila and 0.28 for APC. Finally, the authors offer some considerations on the necessary use of competitive models in the context of the new trends in predictive microbiology.
2309.09554
Emeric Chalchat
Emeric Chalchat (IRBA, AME2P), Julien Siracusa, Luis Pe\~nailillo, Alexandra Malgoyre, Cyprien Bourrilhon, Keyne Charlot, Vincent Martin, Sebastian Garcia-Vicencio
Neuromuscular and Metabolic Responses during Repeated Bouts of Loaded Downhill Walking
Medicine and Science in Sports and Exercise, 2023
null
10.1249/MSS.0000000000003295
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The aim of this study was to compare vastus lateralis (VL) and rectus femoris (RF) muscles for their nervous and mechanical adaptations during two bouts of downhill walking (DW) with load carriage performed two weeks apart. Moreover, we investigated cardio-metabolic and perceived exertion responses during both DW bouts. Methods: Seventeen participants performed two 45-min sessions of loaded DW (30% of body mass; slope: -25%; speed: 4.5 km$\cdot$h$^{-1}$) separated by two weeks. Rating of perceived exertion (RPE), cost of walking (C$_w$), heart rate (HR), and EMG activity of thigh muscles were assessed during the DW. Muscle shear elastic modulus ($\mu$) of RF and VL was assessed before each exercise bout. Maximal voluntary contraction (MVC) torque was assessed before (PRE), immediately after (POST), and 24 and 48 h after the two exercise bouts. Results: MVC torque decreased from POST (-23.7 $\pm$ 9.2%) to 48 h (-19.2 $\pm$ 11.9%) after the first exercise (Ex1), whereas it was significantly reduced only at POST (-14.6 $\pm$ 11.0%) after the second exercise (Ex2) (p < 0.001). RPE (Ex1: 12.3 $\pm$ 1.9; Ex2: 10.8 $\pm$ 2.0), HR (Ex1: 156 $\pm$ 23 bpm; Ex2: 145 $\pm$ 25 bpm), C$_w$ (Ex1: 4.5 $\pm$ 0.9 J$\cdot$m$^{-1}\cdot$kg$^{-1}$; Ex2: 4.1 $\pm$ 0.7 J$\cdot$m$^{-1}\cdot$kg$^{-1}$) and RF EMG activity (Ex1: 0.071 $\pm$ 0.028 mV; Ex2: 0.041 $\pm$ 0.014 mV) were significantly decreased during Ex2 compared to Ex1 (p < 0.01). RF $\mu$ was significantly greater in Ex2 (0.44 $\pm$ 0.18) compared to Ex1 (0.56 $\pm$ 0.27; p < 0.001). Conclusions: The RF muscle displayed specific mechanical and nervous adaptations to repeated DW bouts as compared to VL. Moreover, the muscle adaptations conferred by the first bout of DW could have induced greater exercise efficiency, inducing lesser perceived exertion and cardio-metabolic demand when the same exercise was repeated two weeks later.
[ { "created": "Mon, 18 Sep 2023 08:06:24 GMT", "version": "v1" } ]
2023-09-19
[ [ "Chalchat", "Emeric", "", "IRBA, AME2P" ], [ "Siracusa", "Julien", "" ], [ "Peñailillo", "Luis", "" ], [ "Malgoyre", "Alexandra", "" ], [ "Bourrilhon", "Cyprien", "" ], [ "Charlot", "Keyne", "" ], [ "Martin", ...
The aim of this study was to compare vastus lateralis (VL) and rectus femoris (RF) muscles for their nervous and mechanical adaptations during two bouts of downhill walking (DW) with load carriage performed two weeks apart. Moreover, we investigated cardio-metabolic and perceived exertion responses during both DW bouts. Methods: Seventeen participants performed two 45-min sessions of loaded DW (30% of body mass; slope: -25%; speed: 4.5 km$\cdot$h$^{-1}$) separated by two weeks. Rating of perceived exertion (RPE), cost of walking (C$_w$), heart rate (HR), and EMG activity of thigh muscles were assessed during the DW. Muscle shear elastic modulus ($\mu$) of RF and VL was assessed before each exercise bout. Maximal voluntary contraction (MVC) torque was assessed before (PRE), immediately after (POST), and 24 and 48 h after the two exercise bouts. Results: MVC torque decreased from POST (-23.7 $\pm$ 9.2%) to 48 h (-19.2 $\pm$ 11.9%) after the first exercise (Ex1), whereas it was significantly reduced only at POST (-14.6 $\pm$ 11.0%) after the second exercise (Ex2) (p < 0.001). RPE (Ex1: 12.3 $\pm$ 1.9; Ex2: 10.8 $\pm$ 2.0), HR (Ex1: 156 $\pm$ 23 bpm; Ex2: 145 $\pm$ 25 bpm), C$_w$ (Ex1: 4.5 $\pm$ 0.9 J$\cdot$m$^{-1}\cdot$kg$^{-1}$; Ex2: 4.1 $\pm$ 0.7 J$\cdot$m$^{-1}\cdot$kg$^{-1}$) and RF EMG activity (Ex1: 0.071 $\pm$ 0.028 mV; Ex2: 0.041 $\pm$ 0.014 mV) were significantly decreased during Ex2 compared to Ex1 (p < 0.01). RF $\mu$ was significantly greater in Ex2 (0.44 $\pm$ 0.18) compared to Ex1 (0.56 $\pm$ 0.27; p < 0.001). Conclusions: The RF muscle displayed specific mechanical and nervous adaptations to repeated DW bouts as compared to VL. Moreover, the muscle adaptations conferred by the first bout of DW could have induced greater exercise efficiency, inducing lesser perceived exertion and cardio-metabolic demand when the same exercise was repeated two weeks later.
1202.3690
Renxue Wang
Renxue Wang
Hybrid zone dynamics under weak Haldane's rule
25 pages, 8 figures, 61 references
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/3.0/
The ability of genetic isolation to block gene flow plays a key role in the speciation of sexually reproducing organisms. This paper analyses the hybrid zone dynamics affected by "weak" Haldane's rule, namely the incomplete hybrid inferiority (sterility/inviability) against the heterogametic (XY or ZW) sex caused by a Dobzhansky-Muller incompatibility. Different strengths of incompatibility, dispersal and density-dependent regulation are considered; and the gene flow and clinal structures of allele frequencies in the presence of short-range dispersal (the stepping-stone model) are examined. I show that a weak heterogametic hybrid incompatibility could constitute a substantial barrier that could reduce gene flow and result in non-coincident and discordant clines of alleles. It is found that the differential gene flow is more pronounced under a stronger density-dependent regulation. This study provides a mechanistic explanation for how an adaptive mutation, which may only have a marginal fitness effect, could set a gene up as an evolutionary hot-spot.
[ { "created": "Thu, 16 Feb 2012 20:37:23 GMT", "version": "v1" } ]
2012-02-17
[ [ "Wang", "Renxue", "" ] ]
The ability of genetic isolation to block gene flow plays a key role in the speciation of sexually reproducing organisms. This paper analyses the hybrid zone dynamics affected by "weak" Haldane's rule, namely the incomplete hybrid inferiority (sterility/inviability) against the heterogametic (XY or ZW) sex caused by a Dobzhansky-Muller incompatibility. Different strengths of incompatibility, dispersal and density-dependent regulation are considered; and the gene flow and clinal structures of allele frequencies in the presence of short-range dispersal (the stepping-stone model) are examined. I show that a weak heterogametic hybrid incompatibility could constitute a substantial barrier that could reduce gene flow and result in non-coincident and discordant clines of alleles. It is found that the differential gene flow is more pronounced under a stronger density-dependent regulation. This study provides a mechanistic explanation for how an adaptive mutation, which may only have a marginal fitness effect, could set a gene up as an evolutionary hot-spot.
1411.5383
Zeyuan Allen-Zhu
Zeyuan Allen-Zhu, Rati Gelashvili, Silvio Micali, Nir Shavit
Johnson-Lindenstrauss Compression with Neuroscience-Based Constraints
A shorter version of this paper has appeared in the Proceedings of the National Academy of Sciences
null
10.1073/pnas.1419100111
null
q-bio.NC cs.DS math.PR math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Johnson-Lindenstrauss (JL) matrices implemented by sparse random synaptic connections are thought to be a prime candidate for how convergent pathways in the brain compress information. However, to date, there is no complete mathematical support for such implementations given the constraints of real neural tissue. The fact that neurons are either excitatory or inhibitory implies that every JL matrix implementable in this way must be sign-consistent (i.e., all entries in a single column must be either all non-negative or all non-positive), and the fact that any given neuron connects to a relatively small subset of other neurons implies that the JL matrix should be sparse. We construct sparse JL matrices that are sign-consistent, and prove that our construction is essentially optimal. Our work answers a mathematical question that was triggered by earlier work and is necessary to justify the existence of JL compression in the brain, and it emphasizes that inhibition is crucial if neurons are to perform efficient, correlation-preserving compression.
[ { "created": "Wed, 19 Nov 2014 21:12:12 GMT", "version": "v1" } ]
2014-11-21
[ [ "Allen-Zhu", "Zeyuan", "" ], [ "Gelashvili", "Rati", "" ], [ "Micali", "Silvio", "" ], [ "Shavit", "Nir", "" ] ]
Johnson-Lindenstrauss (JL) matrices implemented by sparse random synaptic connections are thought to be a prime candidate for how convergent pathways in the brain compress information. However, to date, there is no complete mathematical support for such implementations given the constraints of real neural tissue. The fact that neurons are either excitatory or inhibitory implies that every JL matrix implementable in this way must be sign-consistent (i.e., all entries in a single column must be either all non-negative or all non-positive), and the fact that any given neuron connects to a relatively small subset of other neurons implies that the JL matrix should be sparse. We construct sparse JL matrices that are sign-consistent, and prove that our construction is essentially optimal. Our work answers a mathematical question that was triggered by earlier work and is necessary to justify the existence of JL compression in the brain, and it emphasizes that inhibition is crucial if neurons are to perform efficient, correlation-preserving compression.
1109.1932
Tom Michoel
Tom Michoel, Anagha Joshi, Bruno Nachtergaele, Yves Van de Peer
Enrichment and aggregation of topological motifs are independent organizational principles of integrated interaction networks
12 pages, 5 figures
Molecular BioSystems 7, 2769 - 2778 (2011)
10.1039/c1mb05241a
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Topological network motifs represent functional relationships within and between regulatory and protein-protein interaction networks. Enriched motifs often aggregate into self-contained units forming functional modules. Theoretical models for network evolution by duplication-divergence mechanisms and for network topology by hierarchical scale-free networks have suggested a one-to-one relation between network motif enrichment and aggregation, but this relation has never been tested quantitatively in real biological interaction networks. Here we introduce a novel method for assessing the statistical significance of network motif aggregation and for identifying clusters of overlapping network motifs. Using an integrated network of transcriptional, posttranslational and protein-protein interactions in yeast, we show that network motif aggregation reflects a local modularity property which is independent of network motif enrichment. In particular, our method identified novel functional network themes for a set of motifs which are not enriched yet aggregate significantly, challenging the conventional view that network motif enrichment is the most basic organizational principle of complex networks.
[ { "created": "Fri, 9 Sep 2011 08:13:49 GMT", "version": "v1" } ]
2011-09-12
[ [ "Michoel", "Tom", "" ], [ "Joshi", "Anagha", "" ], [ "Nachtergaele", "Bruno", "" ], [ "Van de Peer", "Yves", "" ] ]
Topological network motifs represent functional relationships within and between regulatory and protein-protein interaction networks. Enriched motifs often aggregate into self-contained units forming functional modules. Theoretical models for network evolution by duplication-divergence mechanisms and for network topology by hierarchical scale-free networks have suggested a one-to-one relation between network motif enrichment and aggregation, but this relation has never been tested quantitatively in real biological interaction networks. Here we introduce a novel method for assessing the statistical significance of network motif aggregation and for identifying clusters of overlapping network motifs. Using an integrated network of transcriptional, posttranslational and protein-protein interactions in yeast, we show that network motif aggregation reflects a local modularity property which is independent of network motif enrichment. In particular, our method identified novel functional network themes for a set of motifs which are not enriched yet aggregate significantly, challenging the conventional view that network motif enrichment is the most basic organizational principle of complex networks.
0707.1307
Atul Narang
A. Narang, S. S. Pilyugin
Bistability of the lac operon during growth of Escherichia coli on lactose and lactose + glucose
34 pages, Bull Math Biol
null
null
null
q-bio.MN q-bio.CB
null
The lac operon of Escherichia coli exhibits bistability. Early studies showed that bistability occurs during growth on TMG/succinate and lactose + glucose, but not during growth on lactose. More recent studies with lacGFP-transfected cells show bistability with TMG/succinate, but not with lactose and lactose + glucose. In the literature, these results are attributed to variations of the positive feedback generated by induction. Specifically, during growth on TMG/succinate, induction generates positive feedback because the permease stimulates the accumulation of TMG, which, in turn, promotes the synthesis of more permease. This positive feedback is attenuated during growth on lactose because hydrolysis of lactose by galactosidase suppresses the stimulatory effect of the permease. But the stabilizing effect of dilution also changes dramatically as a function of the medium composition. For instance, during growth on TMG/succinate, the dilution rate of the permease is proportional to its activity, $e$, because the specific growth rate is independent of $e$. However, during growth on lactose, the permease dilution rate is proportional to $e^2$ because the specific growth rate is proportional to the specific lactose uptake rate, which, in turn, is proportional to $e$. Here, we show that: (a) This dependence on $e^2$ creates such a strong stabilizing effect that bistability is virtually impossible during growth on lactose, even in the face of positive feedback. (b) This stabilizing effect is weakened during growth on lactose + glucose because the specific growth rate on glucose is independent of $e$, so that the dilution rate once again contains a term that is proportional to $e$. We discuss the experimental data in the light of these results.
[ { "created": "Mon, 9 Jul 2007 17:26:42 GMT", "version": "v1" } ]
2007-07-10
[ [ "Narang", "A.", "" ], [ "Pilyugin", "S. S.", "" ] ]
The lac operon of Escherichia coli exhibits bistability. Early studies showed that bistability occurs during growth on TMG/succinate and lactose + glucose, but not during growth on lactose. More recent studies with lacGFP-transfected cells show bistability with TMG/succinate, but not with lactose and lactose + glucose. In the literature, these results are attributed to variations of the positive feedback generated by induction. Specifically, during growth on TMG/succinate, induction generates positive feedback because the permease stimulates the accumulation of TMG, which, in turn, promotes the synthesis of more permease. This positive feedback is attenuated during growth on lactose because hydrolysis of lactose by galactosidase suppresses the stimulatory effect of the permease. But the stabilizing effect of dilution also changes dramatically as a function of the medium composition. For instance, during growth on TMG/succinate, the dilution rate of the permease is proportional to its activity, $e$, because the specific growth rate is independent of $e$. However, during growth on lactose, the permease dilution rate is proportional to $e^2$ because the specific growth rate is proportional to the specific lactose uptake rate, which, in turn, is proportional to $e$. Here, we show that: (a) This dependence on $e^2$ creates such a strong stabilizing effect that bistability is virtually impossible during growth on lactose, even in the face of positive feedback. (b) This stabilizing effect is weakened during growth on lactose + glucose because the specific growth rate on glucose is independent of $e$, so that the dilution rate once again contains a term that is proportional to $e$. We discuss the experimental data in the light of these results.
2004.00914
Aleksandra Arda\v{s}eva
Aleksandra Arda\v{s}eva, Robert A. Gatenby, Alexander R. A. Anderson, Helen M. Byrne, Philip K. Maini, Tommaso Lorenzi
A comparative study between discrete and continuum models for the evolution of competing phenotype-structured cell populations in dynamical environments
15 pages, 9 figures
Phys. Rev. E 102, 042404 (2020)
10.1103/PhysRevE.102.042404
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Deterministic continuum models formulated in terms of non-local partial differential equations for the evolutionary dynamics of populations structured by phenotypic traits have been used recently to address open questions concerning the adaptation of asexual species to periodically fluctuating environmental conditions. These deterministic continuum models are usually defined on the basis of population-scale phenomenological assumptions and cannot capture adaptive phenomena that are driven by stochastic variability in the evolutionary paths of single individuals. In this paper, we develop a stochastic individual-based model for the coevolution between two competing phenotype-structured cell populations that are exposed to time-varying nutrient levels and undergo spontaneous, heritable phenotypic variations with different probabilities. The evolution of every cell is described by a set of rules that result in a discrete-time branching random walk on the space of phenotypic states. We formally show that the deterministic continuum counterpart of this model comprises a system of non-local partial differential equations for the cell population density functions coupled with an ordinary differential equation for the nutrient concentration. We compare the individual-based model and its continuum analogue, focussing on scenarios whereby the predictions of the two models differ. Our results clarify the conditions under which significant differences between the two models can emerge due to stochastic effects associated with small population levels. These differences arise in the presence of low probabilities of phenotypic variation, and become more apparent when the two populations are characterised by less fit initial mean phenotypes and smaller initial levels of phenotypic heterogeneity.
[ { "created": "Thu, 2 Apr 2020 10:04:53 GMT", "version": "v1" } ]
2020-10-14
[ [ "Ardaševa", "Aleksandra", "" ], [ "Gatenby", "Robert A.", "" ], [ "Anderson", "Alexander R. A.", "" ], [ "Byrne", "Helen M.", "" ], [ "Maini", "Philip K.", "" ], [ "Lorenzi", "Tommaso", "" ] ]
Deterministic continuum models formulated in terms of non-local partial differential equations for the evolutionary dynamics of populations structured by phenotypic traits have been used recently to address open questions concerning the adaptation of asexual species to periodically fluctuating environmental conditions. These deterministic continuum models are usually defined on the basis of population-scale phenomenological assumptions and cannot capture adaptive phenomena that are driven by stochastic variability in the evolutionary paths of single individuals. In this paper, we develop a stochastic individual-based model for the coevolution between two competing phenotype-structured cell populations that are exposed to time-varying nutrient levels and undergo spontaneous, heritable phenotypic variations with different probabilities. The evolution of every cell is described by a set of rules that result in a discrete-time branching random walk on the space of phenotypic states. We formally show that the deterministic continuum counterpart of this model comprises a system of non-local partial differential equations for the cell population density functions coupled with an ordinary differential equation for the nutrient concentration. We compare the individual-based model and its continuum analogue, focussing on scenarios whereby the predictions of the two models differ. Our results clarify the conditions under which significant differences between the two models can emerge due to stochastic effects associated with small population levels. These differences arise in the presence of low probabilities of phenotypic variation, and become more apparent when the two populations are characterised by less fit initial mean phenotypes and smaller initial levels of phenotypic heterogeneity.
1812.09662
Steven Frank
Steven A. Frank
The common patterns of abundance: the log series and Zipf's law
New extended introduction, minor editing
F1000Research 8:334 (2019)
10.12688/f1000research.18681.1
null
q-bio.PE cond-mat.stat-mech math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a language corpus, the probability that a word occurs $n$ times is often proportional to $1/n^2$. Assigning rank, $s$, to words according to their abundance, $\log s$ vs $\log n$ typically has a slope of minus one. That simple Zipf's law pattern also arises in the population sizes of cities, the sizes of corporations, and other patterns of abundance. By contrast, for the abundances of different biological species, the probability of a population of size $n$ is typically proportional to $1/n$, declining exponentially for larger $n$, the log series pattern. This article shows that the differing patterns of Zipf's law and the log series arise as the opposing endpoints of a more general theory. The general theory follows from the generic form of all probability patterns as a consequence of conserved average values and the associated invariances of scale. To understand the common patterns of abundance, the generic form of probability distributions plus the conserved average abundance is sufficient. The general theory includes cases that are between the Zipf and log series endpoints, providing a broad framework for analyzing widely observed abundance patterns.
[ { "created": "Sun, 23 Dec 2018 05:38:14 GMT", "version": "v1" }, { "created": "Sat, 12 Jan 2019 10:28:55 GMT", "version": "v2" } ]
2019-03-27
[ [ "Frank", "Steven A.", "" ] ]
In a language corpus, the probability that a word occurs $n$ times is often proportional to $1/n^2$. Assigning rank, $s$, to words according to their abundance, $\log s$ vs $\log n$ typically has a slope of minus one. That simple Zipf's law pattern also arises in the population sizes of cities, the sizes of corporations, and other patterns of abundance. By contrast, for the abundances of different biological species, the probability of a population of size $n$ is typically proportional to $1/n$, declining exponentially for larger $n$, the log series pattern. This article shows that the differing patterns of Zipf's law and the log series arise as the opposing endpoints of a more general theory. The general theory follows from the generic form of all probability patterns as a consequence of conserved average values and the associated invariances of scale. To understand the common patterns of abundance, the generic form of probability distributions plus the conserved average abundance is sufficient. The general theory includes cases that are between the Zipf and log series endpoints, providing a broad framework for analyzing widely observed abundance patterns.
2007.05723
Guoye Guan
Guoye Guan, Ming-Kin Wong, Zhongying Zhao, Lei-Han Tang and Chao Tang
Speed and fate diversity tradeoff in nematode's early embryogenesis
7 pages, 5 figures, 4 supplemental figures, 2 supplemental tables
null
null
null
q-bio.CB physics.bio-ph q-bio.QM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nematode species are well-known for their invariant cell lineage pattern during development. Combining knowledge about the fate specification induced by asymmetric division and the anti-correlation between cell cycle length and cell volume in Caenorhabditis elegans, we propose a model to simulate lineage initiation by altering the cell volume segregation ratio in each division, and quantify the derived pattern's performance in proliferation speed, fate diversity and space robustness. The stereotypic pattern in the C. elegans embryo is found to be one of the optimal solutions, taking the minimum time to achieve the cell number before gastrulation, by programming asymmetric division as a strategy.
[ { "created": "Sat, 11 Jul 2020 08:58:42 GMT", "version": "v1" }, { "created": "Tue, 23 Mar 2021 14:08:56 GMT", "version": "v2" } ]
2021-03-24
[ [ "Guan", "Guoye", "" ], [ "Wong", "Ming-Kin", "" ], [ "Zhao", "Zhongying", "" ], [ "Tang", "Lei-Han", "" ], [ "Tang", "Chao", "" ] ]
Nematode species are well-known for their invariant cell lineage pattern during development. Combining knowledge about the fate specification induced by asymmetric division and the anti-correlation between cell cycle length and cell volume in Caenorhabditis elegans, we propose a model to simulate lineage initiation by altering the cell volume segregation ratio in each division, and quantify the derived pattern's performance in proliferation speed, fate diversity and space robustness. The stereotypic pattern in the C. elegans embryo is found to be one of the optimal solutions, taking the minimum time to achieve the cell number before gastrulation, by programming asymmetric division as a strategy.
1305.5533
Christian Mulder PhD
Christian Mulder, A. Jan Hendriks
Half-Saturation Constants in Functional Responses
6 pages, 4 figures, 1 table
Global Ecology and Conservation 2 (2014) 161-169
10.1016/j.gecco.2014.09.006
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our aim is to provide an overview of half-saturation constants reported in the literature and to explore their consistency with body size. In many ecological models, intake of nutrients by plants and consumption of food by animals are considered to be hyperbolic functions of the nutrient concentration and the food density, respectively. However, data on the concentration (or density) at which half of the maximum intake rate is reached are scarce, limiting the applicability of the computational models. The meta-analysis was conducted on literature published worldwide. Most studies focused on algae and invertebrates, whereas some included fish, birds and mammals. The half-saturation constants obtained were linked to body size using ordinary regression analysis. The observed trends were compared to those noted in reviews on other density parameters. Half-saturation constants for different clades range within one or two orders of magnitude. Although these constants are inherently variable, exploring allometric relationships across different taxa helps to improve consistent parameterization of ecological models.
[ { "created": "Thu, 23 May 2013 19:58:13 GMT", "version": "v1" } ]
2014-10-08
[ [ "Mulder", "Christian", "" ], [ "Hendriks", "A. Jan", "" ] ]
Our aim is to provide an overview of half-saturation constants reported in the literature and to explore their consistency with body size. In many ecological models, intake of nutrients by plants and consumption of food by animals are considered to be hyperbolic functions of the nutrient concentration and the food density, respectively. However, data on the concentration (or density) at which half of the maximum intake rate is reached are scarce, limiting the applicability of the computational models. The meta-analysis was conducted on literature published worldwide. Most studies focused on algae and invertebrates, whereas some included fish, birds and mammals. The half-saturation constants obtained were linked to body size using ordinary regression analysis. The observed trends were compared to those noted in reviews on other density parameters. Half-saturation constants for different clades range within one or two orders of magnitude. Although these constants are inherently variable, exploring allometric relationships across different taxa helps to improve consistent parameterization of ecological models.
1305.1352
Michael Manhart
Michael Manhart and Alexandre V. Morozov
Statistical Physics of Evolutionary Trajectories on Fitness Landscapes
null
null
10.1142/9789814590297_0017
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Random walks on multidimensional nonlinear landscapes are of interest in many areas of science and engineering. In particular, properties of adaptive trajectories on fitness landscapes determine population fates and thus play a central role in evolutionary theory. The topography of fitness landscapes and its effect on evolutionary dynamics have been extensively studied in the literature. We will survey the current research knowledge in this field, focusing on a recently developed systematic approach to characterizing path lengths, mean first-passage times, and other statistics of the path ensemble. This approach, based on general techniques from statistical physics, is applicable to landscapes of arbitrary complexity and structure. It is especially well-suited to quantifying the diversity of stochastic trajectories and repeatability of evolutionary events. We demonstrate this methodology using a biophysical model of protein evolution that describes how proteins maintain stability while evolving new functions.
[ { "created": "Mon, 6 May 2013 23:22:28 GMT", "version": "v1" } ]
2014-10-08
[ [ "Manhart", "Michael", "" ], [ "Morozov", "Alexandre V.", "" ] ]
Random walks on multidimensional nonlinear landscapes are of interest in many areas of science and engineering. In particular, properties of adaptive trajectories on fitness landscapes determine population fates and thus play a central role in evolutionary theory. The topography of fitness landscapes and its effect on evolutionary dynamics have been extensively studied in the literature. We will survey the current research knowledge in this field, focusing on a recently developed systematic approach to characterizing path lengths, mean first-passage times, and other statistics of the path ensemble. This approach, based on general techniques from statistical physics, is applicable to landscapes of arbitrary complexity and structure. It is especially well-suited to quantifying the diversity of stochastic trajectories and repeatability of evolutionary events. We demonstrate this methodology using a biophysical model of protein evolution that describes how proteins maintain stability while evolving new functions.
1105.0780
Amparo Baillo
Amparo Ba\'illo, Laura Mart\'inez-Mu\~noz and Mario Mellado
Homogeneity tests for Michaelis-Menten curves with application to fluorescence resonance energy transfer data
26 pages, 2 figures
null
null
null
q-bio.BM q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Resonance energy transfer methods are in wide use for evaluating protein-protein interactions and protein conformational changes in living cells. Fluorescence resonance energy transfer (FRET) measures energy transfer as a function of the acceptor:donor ratio, generating FRET saturation curves. Modeling these curves by Michaelis-Menten kinetics allows characterization by two parameters, which serve to evaluate apparent affinity between two proteins and to compare this affinity in different experimental conditions. To reduce the effect of sampling variability, several statistical samples of the saturation curve are generated in the same biological conditions. Here we study three procedures to determine whether statistical samples in a collection are homogeneous, in the sense that they are extracted from the same regression model. From the hypothesis testing viewpoint, we considered an F test and a procedure based on bootstrap resampling. The third method analyzed the problem from the model selection viewpoint, and used the Akaike information criterion (AIC). Although we only considered the Michaelis-Menten model, all statistical procedures would be applicable to any other nonlinear regression model. We compared the performance of the homogeneity testing methods in a Monte Carlo study and through analysis in living cells of FRET saturation curves for dimeric complexes of CXCR4, a seven-transmembrane receptor of the G protein-coupled receptor family. We show that the F test, the bootstrap procedure and the model selection method lead in general to similar conclusions, although AIC gave the best results when sample sizes were small, whereas the F test and the bootstrap method were more appropriate for large samples. In practice, all three methods are easy to use simultaneously and show consistency, facilitating conclusions on sample homogeneity.
[ { "created": "Wed, 4 May 2011 09:59:08 GMT", "version": "v1" } ]
2011-05-05
[ [ "Baíllo", "Amparo", "" ], [ "Martínez-Muñoz", "Laura", "" ], [ "Mellado", "Mario", "" ] ]
Resonance energy transfer methods are in wide use for evaluating protein-protein interactions and protein conformational changes in living cells. Fluorescence resonance energy transfer (FRET) measures energy transfer as a function of the acceptor:donor ratio, generating FRET saturation curves. Modeling these curves by Michaelis-Menten kinetics allows characterization by two parameters, which serve to evaluate apparent affinity between two proteins and to compare this affinity in different experimental conditions. To reduce the effect of sampling variability, several statistical samples of the saturation curve are generated in the same biological conditions. Here we study three procedures to determine whether statistical samples in a collection are homogeneous, in the sense that they are extracted from the same regression model. From the hypothesis testing viewpoint, we considered an F test and a procedure based on bootstrap resampling. The third method analyzed the problem from the model selection viewpoint, and used the Akaike information criterion (AIC). Although we only considered the Michaelis-Menten model, all statistical procedures would be applicable to any other nonlinear regression model. We compared the performance of the homogeneity testing methods in a Monte Carlo study and through analysis in living cells of FRET saturation curves for dimeric complexes of CXCR4, a seven-transmembrane receptor of the G protein-coupled receptor family. We show that the F test, the bootstrap procedure and the model selection method lead in general to similar conclusions, although AIC gave the best results when sample sizes were small, whereas the F test and the bootstrap method were more appropriate for large samples. In practice, all three methods are easy to use simultaneously and show consistency, facilitating conclusions on sample homogeneity.
2312.16086
Mikhail Katkov
Mikhail Katkov
Notes on Retroactive Interference Model of Forgetting
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
We present an analytical derivation of the minimal and maximal number of items retained in the recently introduced Retroactive Interference Model of Forgetting. We also compute analytically the probability that two items presented at different times are both retained in memory at a later time.
[ { "created": "Tue, 26 Dec 2023 15:17:41 GMT", "version": "v1" } ]
2023-12-27
[ [ "Katkov", "Mikhail", "" ] ]
We present an analytical derivation of the minimal and maximal number of items retained in the recently introduced Retroactive Interference Model of Forgetting. We also compute analytically the probability that two items presented at different times are both retained in memory at a later time.
1604.08100
Piero Procacci
Piero Procacci
I. Dissociation free energies in drug-receptor systems via non equilibrium alchemical simulations: theoretical framework
34 pages, 4 figures
null
10.1039/C5CP05519A
null
q-bio.BM physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this contribution I critically revise the alchemical reversible approach in the context of the statistical mechanics theory of non-covalent bonding in drug-receptor systems. I show that most of the pitfalls and entanglements for the binding free energies evaluation in computer simulations are rooted in the equilibrium assumption that is implicit in the reversible method. These critical issues can be resolved by using a non-equilibrium variant of the alchemical method in molecular dynamics simulations, relying on the production of many independent trajectories with a continuous dynamical evolution of an externally driven alchemical coordinate, completing the decoupling of the ligand in a matter of few tens of picoseconds rather than nanoseconds. The absolute binding free energy can be recovered from the annihilation work distributions by applying an unbiased unidirectional free energy estimate, on the assumption that any observed work distribution is given by a mixture of normal distributions, whose components are identical in either direction of the non-equilibrium process, with weights regulated by the Crooks theorem. I finally show that the inherent reliability and accuracy of the unidirectional estimate of the decoupling free energies, based on the production of few hundreds of non-equilibrium independent sub-nanoseconds unrestrained alchemical annihilation processes, is a direct consequence of the funnel-like shape of the free energy surface in molecular recognition. An application of the technique on a real drug-receptor system is presented in the companion paper.
[ { "created": "Wed, 27 Apr 2016 15:04:31 GMT", "version": "v1" } ]
2016-06-29
[ [ "Procacci", "Piero", "" ] ]
In this contribution I critically revise the alchemical reversible approach in the context of the statistical mechanics theory of non-covalent bonding in drug-receptor systems. I show that most of the pitfalls and entanglements for the binding free energies evaluation in computer simulations are rooted in the equilibrium assumption that is implicit in the reversible method. These critical issues can be resolved by using a non-equilibrium variant of the alchemical method in molecular dynamics simulations, relying on the production of many independent trajectories with a continuous dynamical evolution of an externally driven alchemical coordinate, completing the decoupling of the ligand in a matter of few tens of picoseconds rather than nanoseconds. The absolute binding free energy can be recovered from the annihilation work distributions by applying an unbiased unidirectional free energy estimate, on the assumption that any observed work distribution is given by a mixture of normal distributions, whose components are identical in either direction of the non-equilibrium process, with weights regulated by the Crooks theorem. I finally show that the inherent reliability and accuracy of the unidirectional estimate of the decoupling free energies, based on the production of few hundreds of non-equilibrium independent sub-nanoseconds unrestrained alchemical annihilation processes, is a direct consequence of the funnel-like shape of the free energy surface in molecular recognition. An application of the technique on a real drug-receptor system is presented in the companion paper.
2304.00333
Stanislav Mintchev
Brian L. Frost and Stanislav M. Mintchev
A high-efficiency model indicating the role of inhibition in the resilience of neuronal networks to damage resulting from traumatic injury
28 pages (with references and appendix; 20 pages with references only)
null
null
null
q-bio.NC math.DS
http://creativecommons.org/licenses/by/4.0/
Recent investigations of traumatic brain injuries have shown that these injuries can result in conformational changes at the level of individual neurons in the cerebral cortex. Focal axonal swelling is one consequence of such injuries and leads to a variable width along the cell axon. Simulations of the electrical properties of axons impacted in such a way show that this damage may have a nonlinear deleterious effect on spike-encoded signal transmission. The computational cost of these simulations complicates the investigation of the effects of such damage at a network level. We have developed an efficient algorithm that faithfully reproduces the spike train filtering properties seen in physical simulations. We use this algorithm to explore the impact of focal axonal swelling on small networks of integrate-and-fire neurons. We also explore the effects of architecture modifications to networks impacted in this manner. In all tested networks, our results indicate that the addition of presynaptic inhibitory neurons either increases or leaves unchanged the fidelity of the network's processing properties with respect to this damage.
[ { "created": "Sat, 1 Apr 2023 15:08:24 GMT", "version": "v1" } ]
2023-04-04
[ [ "Frost", "Brian L.", "" ], [ "Mintchev", "Stanislav M.", "" ] ]
2009.14283
Nathaniel Linden
Nathaniel J Linden, Dennis R Tabuena, Nicholas A Steinmetz, William J Moody, Steven L Brunton, Bingni W Brunton
Go with the FLOW: Visualizing spatiotemporal dynamics in optical widefield calcium imaging
25 pages, 8 figures
null
10.1098/rsif.2021.0523
null
q-bio.NC math.DS
http://creativecommons.org/licenses/by/4.0/
Widefield calcium imaging has recently emerged as a powerful experimental technique to record coordinated large-scale brain activity. These measurements present a unique opportunity to characterize spatiotemporal coherent structures that underlie neural activity across many regions of the brain. In this work, we leverage analytic techniques from fluid dynamics to develop a visualization framework that highlights features of flow across the cortex, mapping wave fronts that may be correlated with behavioral events. First, we transform the time series of widefield calcium images into time-varying vector fields using optic flow. Next, we extract concise diagrams summarizing the dynamics, which we refer to as FLOW (flow lines in optical widefield imaging) portraits. These FLOW portraits provide an intuitive map of dynamic calcium activity, including regions of initiation and termination, as well as the direction and extent of activity spread. To extract these structures, we use the finite-time Lyapunov exponent (FTLE) technique developed to analyze time-varying manifolds in unsteady fluids. Importantly, our approach captures coherent structures that are poorly represented by traditional modal decomposition techniques. We demonstrate the application of FLOW portraits on three simple synthetic datasets and two widefield calcium imaging datasets, including cortical waves in the developing mouse and spontaneous cortical activity in an adult mouse.
[ { "created": "Tue, 29 Sep 2020 19:47:29 GMT", "version": "v1" }, { "created": "Thu, 24 Jun 2021 20:13:40 GMT", "version": "v2" } ]
2021-09-07
[ [ "Linden", "Nathaniel J", "" ], [ "Tabuena", "Dennis R", "" ], [ "Steinmetz", "Nicholas A", "" ], [ "Moody", "William J", "" ], [ "Brunton", "Steven L", "" ], [ "Brunton", "Bingni W", "" ] ]
1301.5948
Michael Inouye
Gad Abraham, Jason A. Tye-Din, Oneil G. Bhalala, Adam Kowalczyk, Justin Zobel, and Michael Inouye
Accurate and robust genomic prediction of celiac disease using statistical learning
Main text: 26 pages, 7 figures; Supplementary: 18 pages including genomic prediction model, available at http://dx.doi.org/10.6084/m9.figshare.154193 (alternative link: http://figshare.com/articles/Accurate_and_robust_genomic_prediction_of_celiac_disease_using_statistical_learning/154193)
null
null
null
q-bio.GN stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Practical application of genomic-based risk stratification to clinical diagnosis is appealing, yet performance varies widely depending on the disease and genomic risk score (GRS) method. Celiac disease (CD), a common immune-mediated illness, is strongly genetically determined and requires specific HLA haplotypes. HLA testing can exclude diagnosis but has low specificity, providing little information suitable for clinical risk stratification. Using six European CD cohorts, we provide a proof-of-concept that statistical learning approaches which simultaneously model all SNPs can generate robust and highly accurate predictive models based on genome-wide SNP profiles. The high predictive capacity replicated both in cross-validation within each cohort (AUC of 0.87-0.89) and in independent replication across cohorts (AUC of 0.86-0.9), despite differences in ethnicity. The models explained 30-35% of disease variance and up to $\sim43\%$ of heritability. The GRS's utility was assessed in different clinically relevant settings. Comparable to HLA typing, the GRS can be used to identify individuals without CD with $\geq99.6\%$ negative predictive value; however, unlike HLA typing, patients can also be stratified into categories of higher risk for CD who would benefit from more invasive and costly definitive testing. The GRS is flexible and its performance can be adapted to the clinical situation by adjusting the threshold cut-off. Despite explaining a minority of disease heritability, our findings indicate a predictive GRS provides clinically relevant information to improve upon current diagnostic pathways for CD, and support further studies evaluating the clinical utility of this approach in CD and other complex diseases.
[ { "created": "Fri, 25 Jan 2013 02:03:05 GMT", "version": "v1" }, { "created": "Wed, 18 Dec 2013 22:27:02 GMT", "version": "v2" }, { "created": "Fri, 20 Dec 2013 23:22:25 GMT", "version": "v3" } ]
2013-12-24
[ [ "Abraham", "Gad", "" ], [ "Tye-Din", "Jason A.", "" ], [ "Bhalala", "Oneil G.", "" ], [ "Kowalczyk", "Adam", "" ], [ "Zobel", "Justin", "" ], [ "Inouye", "Michael", "" ] ]
2008.03358
Nour Riman
Nour Riman, Jonathan D. Victor, Sebastian D. Boie, and Bard Ermentrout
The Dynamics of Bilateral Olfactory Search and Navigation
null
SIAM Review 2021 63:1, 100-120
10.1137/19M1265934
null
q-bio.QM nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Animals use stereo sampling of odor concentration to localize sources and follow odor trails. We analyze the dynamics of a bilateral model that depends on the simultaneous comparison between odor concentrations detected by left and right sensors. The general model consists of three differential equations for the positions in the plane and the heading. When the odor landscape is an infinite trail, then we reduce the dynamics to a planar system whose dynamics have just two fixed points. Using an integrable approximation (for short sensors) we estimate the basin of attraction. In the case of a radially symmetric landscape, we again can reduce the dynamics to a planar system, but the behavior is considerably richer with multi-stability, isolas, and limit cycles. As in the linear trail case, there is also an underlying integrable system when the sensors are short. In odor landscapes that consist of multiple spots and trail segments, we find periodic and chaotic dynamics and characterize the behavior on trails with gaps and that turn corners.
[ { "created": "Fri, 7 Aug 2020 20:03:51 GMT", "version": "v1" } ]
2021-03-16
[ [ "Riman", "Nour", "" ], [ "Victor", "Jonathan D.", "" ], [ "Boie", "Sebastian D.", "" ], [ "Ermentrout", "Bard", "" ] ]
1801.06226
Eve Armstrong
Eve Armstrong
Computational model of avian nervous system nuclei governing learned song
21 pages and 11 figures, without appendices
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The means by which neuronal activity yields robust behavior is a ubiquitous question in neuroscience. In the songbird, the timing of a highly stereotyped song motif is attributed to the cortical nucleus HVC, and to feedback to HVC from downstream nuclei in the song motor pathway. Control of the acoustic structure appears to be shared by various structures, whose functional connectivity is largely unknown. Currently there exists no model for functional synaptic architecture that links HVC to song output in a manner consistent with experiments. Here we build on a previous model of HVC in which a distinct functional architecture may act as a pattern generator to drive downstream regions. Using a specific functional connectivity of the song motor pathway, we show how this HVC mechanism can generate simple representations of the driving forces for song. The model reproduces observed correlations between neuronal and respiratory activity and acoustic features of song. It makes testable predictions regarding the electrophysiology of distinct populations in the robust nucleus of the arcopallium (RA), the connectivity within HVC and RA and between them, and the activity patterns of vocal-respiratory neurons in the brainstem.
[ { "created": "Thu, 18 Jan 2018 20:27:21 GMT", "version": "v1" } ]
2018-01-22
[ [ "Armstrong", "Eve", "" ] ]
2309.02598
Srilekha Mamidala
Srilekha Mamidala
The SLED (Shelf Life Expiration Date) Tracking System: Using Machine Learning Algorithms to Combat Food Waste and Food Borne Illnesses
null
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-nd/4.0/
The issue of food waste is a major problem contributing to the emission of greenhouse gases into the environment in addition to causing illness in humans. This research aimed to develop a correlation between the amount of time until a food spoiled and dates on food labels in conjunction with sensory observations. Sensory observations are more accurate as they are immediate observations that are specific to the food. This experiment observed bananas, bread, milk, eggs, and leafy greens over a period of time, using characteristics specific to each food to quantify spoilage. It was shown that the actual time until spoilage for all foods was longer than the best-by date, and that sensory observations proved to be a more accurate factor in determining spoilage. From this data, a machine learning algorithm was trained to predict whether food was spoiled, in addition to the number of days until spoilage. This was presented to the consumer as an app, where the user can track foods and is reminded to check on them to prevent wastage. In addition, the experimental procedures were incorporated into a test kit with which the consumer takes instructed observations to assess the spoilage of their food; these are then entered into the app to improve the algorithm. This paper discusses the individual effects of sensory observations on each food and examines the shifting of consumer habits through an app and test kit to combat the environmental consequences of food waste.
[ { "created": "Tue, 5 Sep 2023 21:45:47 GMT", "version": "v1" } ]
2023-09-07
[ [ "Mamidala", "Srilekha", "" ] ]
2304.02794
Roberto Mor\'an-Tovar
Roberto Mor\'an-Tovar and Michael L\"assig
Nonequilibrium antigen recognition during infections and vaccinations
null
null
null
null
q-bio.PE q-bio.SC
http://creativecommons.org/licenses/by/4.0/
The immune response to an acute primary infection is a coupled process of antigen proliferation, molecular recognition by naive B cells, and their subsequent proliferation and antibody shedding. This process contains a fundamental problem: the recognition of an exponentially time-dependent antigen signal. Here we show that B cells can efficiently recognise new antigens by a tuned kinetic proofreading mechanism, where the molecular recognition machinery is adapted to the complexity of the immune repertoire. This process produces potent, specific and fast recognition of antigens, maintaining a spectrum of genetically distinct B cell lineages as input for affinity maturation. We show that the proliferation-recognition dynamics of a primary infection is a generalised Luria-Delbr\"uck process, akin to the dynamics of the classic fluctuation experiment. This map establishes a link between signal recognition dynamics and evolution. We derive the resulting statistics of the activated immune repertoire: antigen binding affinity, expected size, and frequency of active B cell clones are related by power laws, which define the class of generalised Luria-Delbr\"uck processes. Their exponents depend on the antigen and B cell proliferation rate, the number of proofreading steps, and the lineage density of the naive repertoire. We extend the model to include spatio-temporal processes, including the diffusion-recognition dynamics of a vaccination. Empirical data of activated mouse immune repertoires are found to be consistent with activation involving about three proofreading steps. The model predicts key clinical characteristics of acute infections and vaccinations, including the emergence of elite neutralisers and the effects of immune ageing. More broadly, our results establish infections and vaccinations as a new probe into the global architecture and functional principles of immune repertoires.
[ { "created": "Wed, 5 Apr 2023 23:56:05 GMT", "version": "v1" }, { "created": "Mon, 29 May 2023 21:30:23 GMT", "version": "v2" }, { "created": "Mon, 26 Jun 2023 12:50:59 GMT", "version": "v3" }, { "created": "Tue, 27 Feb 2024 14:37:30 GMT", "version": "v4" } ]
2024-02-28
[ [ "Morán-Tovar", "Roberto", "" ], [ "Lässig", "Michael", "" ] ]
0705.2286
Bernhard Mehlig
A. Eriksson, P. Fernstrom, B. Mehlig, and S. Sagitov
An accurate model for genetic hitch-hiking
12 pages, 10 figures
Genetics 178, 439 (2008)
null
null
q-bio.PE
null
We suggest a simple deterministic approximation for the growth of the favoured-allele frequency during a selective sweep. Using this approximation we introduce an accurate model for genetic hitch-hiking. Only when Ns < 10 (N is the population size and s denotes the selection coefficient), are discrepancies between our approximation and direct numerical simulations of a Moran model noticeable. Our model describes the gene genealogies of a contiguous segment of neutral loci close to the selected one, and it does not assume that the selective sweep happens instantaneously. This enables us to compute SNP distributions on the neutral segment without bias.
[ { "created": "Wed, 16 May 2007 12:37:39 GMT", "version": "v1" } ]
2008-12-19
[ [ "Eriksson", "A.", "" ], [ "Fernstrom", "P.", "" ], [ "Mehlig", "B.", "" ], [ "Sagitov", "S.", "" ] ]
1803.09974
Dimitra Maoutsa
Jose Casadiego, Dimitra Maoutsa, Marc Timme
Inferring network connectivity from event timing patterns
6 pages, 5 figures, The first two authors contributed equally to this paper, and should be regarded as co-first authors. [v2: metadata update]
Phys. Rev. Lett. 121, 054101 (2018)
10.1103/PhysRevLett.121.054101
null
q-bio.NC nlin.CD physics.data-an stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstructing network connectivity from the collective dynamics of a system typically requires access to its complete continuous-time evolution, although this is often experimentally inaccessible. Here we propose a theory for revealing the physical connectivity of networked systems only from the event time series their intrinsic collective dynamics generate. Representing the patterns of event timings in an event space spanned by inter-event and cross-event intervals, we reveal which other units directly influence the inter-event times of any given unit. For illustration, we linearize an event space mapping constructed from the spiking patterns in model neural circuits to reveal the presence or absence of synapses between any pair of neurons as well as whether the coupling acts in an inhibiting or activating (excitatory) manner. The proposed model-independent reconstruction theory is scalable to larger networks and may thus play an important role in the reconstruction of networks from biology to social science and engineering.
[ { "created": "Tue, 27 Mar 2018 09:15:04 GMT", "version": "v1" }, { "created": "Thu, 29 Mar 2018 12:56:06 GMT", "version": "v2" } ]
2018-08-08
[ [ "Casadiego", "Jose", "" ], [ "Maoutsa", "Dimitra", "" ], [ "Timme", "Marc", "" ] ]
2006.16203
Amanda Parker
Amanda Parker, M. Cristina Marchetti, M. Lisa Manning, J. M. Schwarz
How does the extracellular matrix affect the rigidity of an embedded spheroid?
Main text: 5 pages, 3 figures. Supplementary Material: 13 pages, 14 figures
null
null
null
q-bio.CB cond-mat.soft q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cellularized tissue and polymer networks can both transition from floppy to rigid as a function of their control parameters, and, yet, the two systems often mechanically interact, which may affect their respective rigidities. To study this interaction, we consider a vertex model with interfacial tension (a spheroid) embedded in a spring network in two dimensions. We identify two regimes with different global spheroid shapes, governed by the pressure resulting from competition between interfacial tension and tension in the network. In the first regime, the tissue remains compact, while in the second, a cavitation-like instability leads to the emergence of gaps at the tissue-network interface. Intriguingly, compression of the tissue promotes fluidization, while tension promotes cellular alignment and rigidification, with the mechanisms driving rigidification differing on either side of the instability.
[ { "created": "Mon, 29 Jun 2020 17:13:52 GMT", "version": "v1" } ]
2020-06-30
[ [ "Parker", "Amanda", "" ], [ "Marchetti", "M. Cristina", "" ], [ "Manning", "M. Lisa", "" ], [ "Schwarz", "J. M.", "" ] ]
q-bio/0610001
Attila Szolnoki
Attila Szolnoki and Gyorgy Szabo
Cooperation enhanced by inhomogeneous activity of teaching for evolutionary Prisoner's Dilemma games
4 pages, 5 figures, corrected typos
Europhysics Letters 77 (2007) 30004
10.1209/0295-5075/77/30004
null
q-bio.PE cond-mat.stat-mech
null
Evolutionary Prisoner's Dilemma games with quenched inhomogeneities in the spatial dynamical rules are considered. The players, following one of the two pure strategies (cooperation or defection), are distributed on a two-dimensional lattice. The rate of strategy adoption from a randomly chosen neighbor is controlled by the payoff difference and a two-value pre-factor $w$ characterizing the player from whom the strategy is learned. The reduced teaching activity of players is distributed randomly with concentration $\nu$ at the beginning and kept fixed thereafter. Numerical and analytical calculations are performed to study the concentration of cooperators as a function of $w$ and $\nu$ for different noise levels and connectivity structures. A significant increase of cooperation is found within a wide range of parameters for this dynamics. The results highlight the importance of the asymmetry characterizing the exchange of the master-follower role during strategy adoptions.
[ { "created": "Sat, 30 Sep 2006 09:11:34 GMT", "version": "v1" }, { "created": "Wed, 4 Oct 2006 06:23:30 GMT", "version": "v2" } ]
2007-05-23
[ [ "Szolnoki", "Attila", "" ], [ "Szabo", "Gyorgy", "" ] ]
Evolutionary Prisoner's Dilemma games with quenched inhomogeneities in the spatial dynamical rules are considered. The players, following one of the two pure strategies (cooperation or defection), are distributed on a two-dimensional lattice. The rate of strategy adoption from a randomly chosen neighbor is controlled by the payoff difference and a two-value pre-factor $w$ characterizing the player from whom the strategy is learned. The reduced teaching activity of players is distributed randomly with concentration $\nu$ at the beginning and fixed thereafter. Numerical and analytical calculations are performed to study the concentration of cooperators as a function of $w$ and $\nu$ for different noise levels and connectivity structures. A significant increase of cooperation is found within a wide range of parameters for this dynamics. The results highlight the importance of the asymmetry characterizing the exchange of master-follower roles during strategy adoption.
1507.07422
Raul Fernandez Rojas
Raul Fernandez Rojas, Xu Huang, Keng Liang Ou, Dat Tran, and Sheikh Md. Rabiul Islam
Analysis of Pain Hemodynamic Response Using Near-Infrared Spectroscopy (NIRS)
11 pages, 11 figures
The International Journal of Multimedia & Its Applications (IJMA) Vol. 7, No. 2, April 2015
10.5121/ijma.2015.7203
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite recent advances in brain research, understanding the various signals for pain and pain intensities in the brain cortex remains a complex task due to temporal and spatial variations of brain hemodynamics. In this paper we investigate pain based on cerebral hemodynamics via near-infrared spectroscopy (NIRS). This study presents a pain stimulation experiment that uses three acupuncture manipulation techniques to safely induce pain in healthy subjects. The acupuncture pain response is presented, and hemodynamic pain signal analysis showed the presence of dominant channels and their relationships with surrounding channels, which contributes to further pain research.
[ { "created": "Fri, 24 Jul 2015 01:05:16 GMT", "version": "v1" } ]
2015-07-28
[ [ "Rojas", "Raul Fernandez", "" ], [ "Huang", "Xu", "" ], [ "Ou", "Keng Liang", "" ], [ "Tran", "Dat", "" ], [ "Islam", "Sheikh Md. Rabiul", "" ] ]
Despite recent advances in brain research, understanding the various signals for pain and pain intensities in the brain cortex remains a complex task due to temporal and spatial variations of brain hemodynamics. In this paper we investigate pain based on cerebral hemodynamics via near-infrared spectroscopy (NIRS). This study presents a pain stimulation experiment that uses three acupuncture manipulation techniques to safely induce pain in healthy subjects. The acupuncture pain response is presented, and hemodynamic pain signal analysis showed the presence of dominant channels and their relationships with surrounding channels, which contributes to further pain research.
1012.6019
Alexander K. Vidybida
Kseniya Kravchuk and Alexander Vidybida
Delayed feedback causes non-Markovian behavior of neuronal firing statistics
21 pages, 7 figures, 20 refs, submitted to Journal of Physics A. File movie.pdf is added as ancillary file
Ukrainian Mathematical Journal, Vol. 64, pp. 1587--1609 (2012)
null
null
q-bio.NC math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The instantaneous state of a neural network consists of both the degree of excitation of each neuron the network is composed of and the positions of impulses in the communication lines between neurons. In neurophysiological experiments, the neuronal firing moments are registered, but not the state of the communication lines. Yet future spiking moments depend essentially on the past positions of impulses in the lines. This suggests that the sequence of intervals between firing moments (interspike intervals, ISIs) in the network could be non-Markovian. In this paper, we address this question for the simplest possible neural "net", namely, a single neuron with delayed feedback. The neuron receives excitatory input both from the driving Poisson stream and from its own output through the feedback line. We obtain analytical expressions for the conditional probability density $P(t_{n+1} | t_n,...,t_1,t_0)$, which gives the probability of obtaining an output ISI of duration $t_{n+1}$ provided the previous $(n+1)$ output ISIs had durations $t_n,...,t_1,t_0$. It is proven exactly that $P(t_{n+1} | t_n,...,t_1,t_0)$ does not reduce to $P(t_{n+1} | t_n,...,t_1)$ for any $n \geq 0$. This means that the output ISI stream cannot be represented as a Markov chain of any finite order.
[ { "created": "Wed, 29 Dec 2010 20:01:46 GMT", "version": "v1" }, { "created": "Thu, 30 Dec 2010 14:36:25 GMT", "version": "v2" } ]
2015-03-17
[ [ "Kravchuk", "Kseniya", "" ], [ "Vidybida", "Alexander", "" ] ]
The instantaneous state of a neural network consists of both the degree of excitation of each neuron the network is composed of and the positions of impulses in the communication lines between neurons. In neurophysiological experiments, the neuronal firing moments are registered, but not the state of the communication lines. Yet future spiking moments depend essentially on the past positions of impulses in the lines. This suggests that the sequence of intervals between firing moments (interspike intervals, ISIs) in the network could be non-Markovian. In this paper, we address this question for the simplest possible neural "net", namely, a single neuron with delayed feedback. The neuron receives excitatory input both from the driving Poisson stream and from its own output through the feedback line. We obtain analytical expressions for the conditional probability density $P(t_{n+1} | t_n,...,t_1,t_0)$, which gives the probability of obtaining an output ISI of duration $t_{n+1}$ provided the previous $(n+1)$ output ISIs had durations $t_n,...,t_1,t_0$. It is proven exactly that $P(t_{n+1} | t_n,...,t_1,t_0)$ does not reduce to $P(t_{n+1} | t_n,...,t_1)$ for any $n \geq 0$. This means that the output ISI stream cannot be represented as a Markov chain of any finite order.
1802.00462
Song Feng
Song Feng, Orkun S. Soyer
In silico evolution of signaling networks using rule-based models: bistable response dynamics
24 pages, 7 figures
null
null
null
q-bio.MN q-bio.QM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the ultimate goals in biology is to understand the design principles of biological systems. Such principles, if they exist, can help us better understand complex, natural biological systems and guide the engineering of de novo ones. Towards deciphering design principles, in silico evolution of biological systems with proper abstraction is a promising approach. Here, we demonstrate the application of in silico evolution combined with rule-based modelling for exploring design principles of cellular signaling networks. This application is based on a computational platform, called BioJazz, which allows in silico evolution of signaling networks with unbounded complexity. We provide a detailed introduction to BioJazz architecture and implementation and describe how it can be used to evolve and/or design signaling networks with defined dynamics. For the latter, we evolve signaling networks with switch-like response dynamics and demonstrate how BioJazz can result in new biological insights on network structures that can endow bistable response dynamics. This example also demonstrates both the power of BioJazz in evolving and designing signaling networks and its limitations at the current stage of development.
[ { "created": "Thu, 1 Feb 2018 19:23:44 GMT", "version": "v1" }, { "created": "Tue, 6 Feb 2018 17:08:24 GMT", "version": "v2" } ]
2018-02-07
[ [ "Feng", "Song", "" ], [ "Soyer", "Orkun S.", "" ] ]
One of the ultimate goals in biology is to understand the design principles of biological systems. Such principles, if they exist, can help us better understand complex, natural biological systems and guide the engineering of de novo ones. Towards deciphering design principles, in silico evolution of biological systems with proper abstraction is a promising approach. Here, we demonstrate the application of in silico evolution combined with rule-based modelling for exploring design principles of cellular signaling networks. This application is based on a computational platform, called BioJazz, which allows in silico evolution of signaling networks with unbounded complexity. We provide a detailed introduction to BioJazz architecture and implementation and describe how it can be used to evolve and/or design signaling networks with defined dynamics. For the latter, we evolve signaling networks with switch-like response dynamics and demonstrate how BioJazz can result in new biological insights on network structures that can endow bistable response dynamics. This example also demonstrates both the power of BioJazz in evolving and designing signaling networks and its limitations at the current stage of development.
1407.3219
Areejit Samal
Areejit Samal and Olivier C. Martin
Statistical physics methods provide the exact solution to a long-standing problem of genetics
Final version to appear in Physical Review Letters, Main and Supplemental Material, 20 pages, 11 figures
Phys. Rev. Lett. 114, 238101 (2015)
10.1103/PhysRevLett.114.238101
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analytic and computational methods developed within statistical physics have found applications in numerous disciplines. In this letter, we use such methods to solve a long-standing problem in statistical genetics. The problem, posed by Haldane and Waddington [J.B.S. Haldane and C.H. Waddington, Genetics 16, 357-374 (1931)], concerns so-called recombinant inbred lines (RILs) produced by repeated inbreeding. Haldane and Waddington derived the probabilities of RILs when considering 2 and 3 genes but the case of 4 or more genes has remained elusive. Our solution uses two probabilistic frameworks relatively unknown outside of physics: Glauber's formula and self-consistent equations of the Schwinger-Dyson type. Surprisingly, this combination of statistical formalisms unveils the exact probabilities of RILs for any number of genes. Extensions of the framework may have applications in population genetics and beyond.
[ { "created": "Fri, 11 Jul 2014 17:04:58 GMT", "version": "v1" }, { "created": "Mon, 18 May 2015 06:48:09 GMT", "version": "v2" }, { "created": "Sun, 7 Jun 2015 20:59:14 GMT", "version": "v3" } ]
2015-06-10
[ [ "Samal", "Areejit", "" ], [ "Martin", "Olivier C.", "" ] ]
Analytic and computational methods developed within statistical physics have found applications in numerous disciplines. In this letter, we use such methods to solve a long-standing problem in statistical genetics. The problem, posed by Haldane and Waddington [J.B.S. Haldane and C.H. Waddington, Genetics 16, 357-374 (1931)], concerns so-called recombinant inbred lines (RILs) produced by repeated inbreeding. Haldane and Waddington derived the probabilities of RILs when considering 2 and 3 genes but the case of 4 or more genes has remained elusive. Our solution uses two probabilistic frameworks relatively unknown outside of physics: Glauber's formula and self-consistent equations of the Schwinger-Dyson type. Surprisingly, this combination of statistical formalisms unveils the exact probabilities of RILs for any number of genes. Extensions of the framework may have applications in population genetics and beyond.
2003.12781
Anantanarayanan Thyagaraja Dr
Anantanarayanan Thyagaraja
A phenomenological approach to COVID-19 spread in a population
12 pages, 5 figures
null
null
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A phenomenological model to describe the Coronavirus (COVID-19) pandemic spread in a given population is developed. It enables the identification of the key quantities required to form adequate policies for control and mitigation, in terms of observable parameters, using the Landau-Stuart equation. It is intended to be complementary to the detailed simulations and methods published recently by Ferguson and collaborators, March 16 (2020). The results suggest that the initial growth/spreading rate gamma-c of the disease and the fraction of infected persons in the population p-i can be used to define a `retardation/inhibition coefficient' k-star, which is a measure of the effectiveness of the control policies adopted. The results are obtained analytically and numerically using a simple Python code. The solutions provide both qualitative and quantitative information. They substantiate and justify two basic control policies enunciated by WHO and adopted in many countries: a) systematic and early intensive testing of individuals for COVID-19, and b) sequestration policies such as `social/physical distancing' and population density reduction by strict quarantining, which are essential for making k-star greater than 1, necessary for suppressing the pandemic. The model indicates that relaxing such measures when the infection rate starts to decrease as a result of earlier policies could simply restart the infection in the non-infected population. Presently available statistical data in WHO and other reports can be readily used to determine the key parameters of the model. Possible extensions to the basic model to make it more realistic are indicated.
[ { "created": "Sat, 28 Mar 2020 13:08:31 GMT", "version": "v1" } ]
2020-03-31
[ [ "Thyagaraja", "Anantanarayanan", "" ] ]
A phenomenological model to describe the Coronavirus (COVID-19) pandemic spread in a given population is developed. It enables the identification of the key quantities required to form adequate policies for control and mitigation, in terms of observable parameters, using the Landau-Stuart equation. It is intended to be complementary to the detailed simulations and methods published recently by Ferguson and collaborators, March 16 (2020). The results suggest that the initial growth/spreading rate gamma-c of the disease and the fraction of infected persons in the population p-i can be used to define a `retardation/inhibition coefficient' k-star, which is a measure of the effectiveness of the control policies adopted. The results are obtained analytically and numerically using a simple Python code. The solutions provide both qualitative and quantitative information. They substantiate and justify two basic control policies enunciated by WHO and adopted in many countries: a) systematic and early intensive testing of individuals for COVID-19, and b) sequestration policies such as `social/physical distancing' and population density reduction by strict quarantining, which are essential for making k-star greater than 1, necessary for suppressing the pandemic. The model indicates that relaxing such measures when the infection rate starts to decrease as a result of earlier policies could simply restart the infection in the non-infected population. Presently available statistical data in WHO and other reports can be readily used to determine the key parameters of the model. Possible extensions to the basic model to make it more realistic are indicated.
1405.7658
Sylvie Stucki
Sylvie Stucki, Pablo Orozco-terWengel, Michael W. Bruford, Licia Colli, Charles Masembe, Riccardo Negrini, Pierre Taberlet, St\'ephane Joost and the NEXTGEN Consortium
High performance computation of landscape genomic models integrating local indices of spatial association
1 figure in text, 1 figure in supplementary material. The structure of the article was modified and some explanations were updated. The methods and results presented are the same as in the previous version
null
10.1111/1755-0998.12629
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since its introduction, landscape genomics has developed quickly with the increasing availability of both molecular and topo-climatic data. The current challenges of the field mainly involve processing large numbers of models and disentangling selection from demography. Several methods address the latter, either by estimating a neutral model from population structure or by inferring environmental and demographic effects simultaneously. Here we present Sam$\beta$ada, an integrated approach to study signatures of local adaptation, providing rapid processing of whole-genome data and enabling assessment of spatial association using molecular markers. Specifically, candidate loci for adaptation are identified by automatically assessing genome-environment associations. In complement, measuring the Local Indicators of Spatial Association (LISA) for these candidate loci allows one to detect whether similar genotypes tend to gather in space, which constitutes a useful indication of possible kinship relationships between individuals. In this paper, we also analyze SNP data from Ugandan cattle to detect signatures of local adaptation with Sam$\beta$ada, BayEnv, LFMM and an outlier method (the FDIST approach in Arlequin) and compare their results. Sam$\beta$ada is open source software for Windows, Linux and MacOS X available at \url{http://lasig.epfl.ch/sambada}
[ { "created": "Thu, 29 May 2014 19:07:29 GMT", "version": "v1" }, { "created": "Thu, 20 Nov 2014 16:36:30 GMT", "version": "v2" } ]
2016-11-08
[ [ "Stucki", "Sylvie", "" ], [ "Orozco-terWengel", "Pablo", "" ], [ "Bruford", "Michael W.", "" ], [ "Colli", "Licia", "" ], [ "Masembe", "Charles", "" ], [ "Negrini", "Riccardo", "" ], [ "Taberlet", "Pierre", ...
Since its introduction, landscape genomics has developed quickly with the increasing availability of both molecular and topo-climatic data. The current challenges of the field mainly involve processing large numbers of models and disentangling selection from demography. Several methods address the latter, either by estimating a neutral model from population structure or by inferring environmental and demographic effects simultaneously. Here we present Sam$\beta$ada, an integrated approach to study signatures of local adaptation, providing rapid processing of whole-genome data and enabling assessment of spatial association using molecular markers. Specifically, candidate loci for adaptation are identified by automatically assessing genome-environment associations. In complement, measuring the Local Indicators of Spatial Association (LISA) for these candidate loci allows one to detect whether similar genotypes tend to gather in space, which constitutes a useful indication of possible kinship relationships between individuals. In this paper, we also analyze SNP data from Ugandan cattle to detect signatures of local adaptation with Sam$\beta$ada, BayEnv, LFMM and an outlier method (the FDIST approach in Arlequin) and compare their results. Sam$\beta$ada is open source software for Windows, Linux and MacOS X available at \url{http://lasig.epfl.ch/sambada}
2008.06367
Philippe Terrier PhD
Philippe Terrier
Gait complexity assessed by detrended fluctuation analysis is sensitive to inconsistencies in stride time series: A modeling study
Article submitted for publication
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Human gait exhibits complex fractal fluctuations among consecutive strides. The time series of gait parameters are long-range correlated (statistical persistence). In contrast, when gait is synchronized with external rhythmic cues, the fluctuation regime is modified to stochastic oscillations around the target frequency (statistical anti-persistence). To highlight these two fluctuation modes, the prevalent methodology is detrended fluctuation analysis (DFA). The DFA outcome is the scaling exponent, which lies between 0.5 and 1 if the time series exhibits long-range correlations, and below 0.5 if the time series is anti-correlated. A fundamental assumption for applying DFA is that the analyzed time series results from a time-invariant generating process. However, a gait time series may be constituted by an ensemble of sub-segments with distinct fluctuation regimes (e.g., correlated and anti-correlated). Methods: Several proportions of correlated and anti-correlated time series were mixed together and then analyzed through DFA. The original (before mixing) time series were generated via autoregressive fractionally integrated moving average (ARFIMA) modelling or actual gait data. Results: Results evidenced a nonlinear sensitivity of DFA to the mix of correlated and anti-correlated series. Notably, adding a small proportion of correlated segments into an anti-correlated time series had stronger effects than the reverse. Significance: In the case of changes in gait control during a walking trial, the resulting time series may be a patchy ensemble of several fluctuation regimes. When applying DFA, the scaling exponent may be misinterpreted. Cued walking studies may be most at risk of suffering this issue in cases of sporadic synchronization with external cues.
[ { "created": "Fri, 14 Aug 2020 13:36:29 GMT", "version": "v1" } ]
2020-08-17
[ [ "Terrier", "Philippe", "" ] ]
Background: Human gait exhibits complex fractal fluctuations among consecutive strides. The time series of gait parameters are long-range correlated (statistical persistence). In contrast, when gait is synchronized with external rhythmic cues, the fluctuation regime is modified to stochastic oscillations around the target frequency (statistical anti-persistence). To highlight these two fluctuation modes, the prevalent methodology is detrended fluctuation analysis (DFA). The DFA outcome is the scaling exponent, which lies between 0.5 and 1 if the time series exhibits long-range correlations, and below 0.5 if the time series is anti-correlated. A fundamental assumption for applying DFA is that the analyzed time series results from a time-invariant generating process. However, a gait time series may be constituted by an ensemble of sub-segments with distinct fluctuation regimes (e.g., correlated and anti-correlated). Methods: Several proportions of correlated and anti-correlated time series were mixed together and then analyzed through DFA. The original (before mixing) time series were generated via autoregressive fractionally integrated moving average (ARFIMA) modelling or actual gait data. Results: Results evidenced a nonlinear sensitivity of DFA to the mix of correlated and anti-correlated series. Notably, adding a small proportion of correlated segments into an anti-correlated time series had stronger effects than the reverse. Significance: In the case of changes in gait control during a walking trial, the resulting time series may be a patchy ensemble of several fluctuation regimes. When applying DFA, the scaling exponent may be misinterpreted. Cued walking studies may be most at risk of suffering this issue in cases of sporadic synchronization with external cues.
1505.01228
Rogerio Normand
Rogerio Normand and Hugo Alexandre Ferreira
Superchords: the atoms of thought
5 pages, 3 figures with left/right images
null
null
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electroencephalography (EEG) signal interpretation is based on waveform analysis, where meaningful information should emerge from a plethora of data. Nonetheless, the continuous increase in computational power and the development of new data processing algorithms in recent years have put within reach the possibility of analysing raw EEG signals. Bearing that motivation, the authors propose a new approach using raw EEG signals and deep learning neural networks for the classification of motor activities (executed and imagery). The hypothesis presented here is: each instantaneous measurement of the raw signal of all EEG channels (superchord) is unique per motor activity, regardless of the moment of measurement. This study has confirmed the hypothesis (results with accuracy over 80%, mean for 109 subjects), reinforcing the need for further research into the understanding of mental processes.
[ { "created": "Wed, 6 May 2015 00:24:39 GMT", "version": "v1" }, { "created": "Thu, 7 May 2015 02:41:07 GMT", "version": "v2" } ]
2015-05-08
[ [ "Normand", "Rogerio", "" ], [ "Ferreira", "Hugo Alexandre", "" ] ]
Electroencephalography (EEG) signal interpretation is based on waveform analysis, where meaningful information should emerge from a plethora of data. Nonetheless, the continuous increase in computational power and the development of new data processing algorithms in recent years have put within reach the possibility of analysing raw EEG signals. Bearing that motivation, the authors propose a new approach using raw EEG signals and deep learning neural networks for the classification of motor activities (executed and imagery). The hypothesis presented here is: each instantaneous measurement of the raw signal of all EEG channels (superchord) is unique per motor activity, regardless of the moment of measurement. This study has confirmed the hypothesis (results with accuracy over 80%, mean for 109 subjects), reinforcing the need for further research into the understanding of mental processes.
1405.5916
Alexander Conway
Alexander Conway and Fritzie Arce
Activity Modulation of Motor and Somatosensory Neurons in Learning
Final writeup from an undergraduate research project by Alexander Conway
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The cortical processes involved in learning are not well understood. Recent experiments have studied population-level responses in the orofacial somatosensory (S1) and motor (M1) cortices of rhesus macaque monkeys during adaptation to a simple tongue protrusion task within and across multiple learning sessions. Initial findings have suggested the formation of cell assemblies during adaptation. In this report we explore differences in cell activity between successful and failed trials as the monkey learns during two sessions. The ability to directly compare data across multiple sessions is fairly new, and until now research has mostly focused on the activity of neurons during successful trials only. We confirm findings of the development of coherently active cell assemblies and find that neural responses differentiate significantly between successful and unsuccessful trials, particularly as the monkey adapts to the task. Our findings motivate further research into the differences in activity between successful and unsuccessful trials in these experiments.
[ { "created": "Thu, 22 May 2014 21:40:30 GMT", "version": "v1" } ]
2014-05-26
[ [ "Conway", "Alexander", "" ], [ "Arce", "Fritzie", "" ] ]
The cortical processes involved in learning are not well understood. Recent experiments have studied population-level responses in the orofacial somatosensory (S1) and motor (M1) cortices of rhesus macaque monkeys during adaptation to a simple tongue protrusion task within and across multiple learning sessions. Initial findings have suggested the formation of cell assemblies during adaptation. In this report we explore differences in cell activity between successful and failed trials as the monkey learns during two sessions. The ability to directly compare data across multiple sessions is fairly new, and until now research has mostly focused on the activity of neurons during successful trials only. We confirm findings of the development of coherently active cell assemblies and find that neural responses differentiate significantly between successful and unsuccessful trials, particularly as the monkey adapts to the task. Our findings motivate further research into the differences in activity between successful and unsuccessful trials in these experiments.
2103.01127
Chika Koyama
Chika Koyama, Taichi Haruna, Kazuto Yamashita
Microwave amplitude reflecting instability of LFP electrode ground field is useful for consciousness state identification
55 pages, 6 figures, 6 supplementary figures, 8 supplementary tables
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
We recently developed original electroencephalogram indices focusing on microwaves of a flattish period (named $\tau$), which correlated with volatile anesthetic concentration in dogs. However, the mechanism remains unclear. $\tau$ was defined as a subthreshold wave, and a burst wave was defined as an above-threshold wave. This study shows that these indices quantified well the morphological features of local field potential waveforms in mice and made it possible to discriminate the specific waveforms of states of consciousness: awake, shallow sleep, rapid eye movement sleep, and non-rapid eye movement sleep. In addition, examination of $\tau$ suggested that microwaves are local fluctuations of the electrode that can form at each data sampling, and that their amplitude may increase with the degree of arousal.
[ { "created": "Mon, 1 Mar 2021 16:58:19 GMT", "version": "v1" }, { "created": "Thu, 27 May 2021 09:55:35 GMT", "version": "v2" } ]
2021-05-28
[ [ "Koyama", "Chika", "" ], [ "Haruna", "Taichi", "" ], [ "Yamashita", "Kazuto", "" ] ]
We recently developed original electroencephalogram indices focusing on microwaves of a flattish period (named $\tau$), which correlated with volatile anesthetic concentration in dogs. However, the mechanism remains unclear. $\tau$ was defined as a subthreshold wave, and a burst wave was defined as an above-threshold wave. This study shows that these indices quantified well the morphological features of local field potential waveforms in mice and made it possible to discriminate the specific waveforms of states of consciousness: awake, shallow sleep, rapid eye movement sleep, and non-rapid eye movement sleep. In addition, examination of $\tau$ suggested that microwaves are local fluctuations of the electrode that can form at each data sampling, and that their amplitude may increase with the degree of arousal.
1506.04375
Tatiana T. Marquez-Lago
Jai Denton, Atiyo Ghosh and Tatiana T. Marquez-Lago
Asymmetrical inheritance of plasmids depends on dynamic cellular geometry and volume exclusion effects
36 pages, 4 main figures, 8 figures and text supplementary materials
null
10.1371/journal.pone.0139443
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The asymmetrical inheritance of plasmid DNA, as well as other cellular components, has been shown to be involved in replicative aging. In Saccharomyces cerevisiae, there is an ongoing debate regarding the mechanisms underlying this important asymmetry. Currently proposed models suggest it is established via diffusion, but differ on whether a diffusion barrier is necessary. However, no study so far has incorporated key aspects of segregation, such as dynamic morphology changes throughout anaphase or plasmid size. Here, we determine the distinct effects and contributions of individual cellular variability, plasmid volume and moving boundaries in the asymmetric segregation of plasmids. We do this by measuring cellular nuclear geometries and plasmid diffusion rates with confocal microscopy, subsequently incorporating these data into a growing-domain stochastic spatial simulator. Our modelling and simulations confirm that plasmid asymmetrical inheritance does not require an active barrier to diffusion, and provide a full analysis of plasmid size effects.
[ { "created": "Sun, 14 Jun 2015 10:51:59 GMT", "version": "v1" } ]
2017-02-08
[ [ "Denton", "Jai", "" ], [ "Ghosh", "Atiyo", "" ], [ "Marquez-Lago", "Tatiana T.", "" ] ]
The asymmetrical inheritance of plasmid DNA, as well as other cellular components, has been shown to be involved in replicative aging. In Saccharomyces cerevisiae, there is an ongoing debate regarding the mechanisms underlying this important asymmetry. Currently proposed models suggest it is established via diffusion, but differ on whether a diffusion barrier is necessary. However, no study so far has incorporated key aspects of segregation, such as dynamic morphology changes throughout anaphase or plasmid size. Here, we determine the distinct effects and contributions of individual cellular variability, plasmid volume and moving boundaries in the asymmetric segregation of plasmids. We do this by measuring cellular nuclear geometries and plasmid diffusion rates with confocal microscopy, subsequently incorporating these data into a growing-domain stochastic spatial simulator. Our modelling and simulations confirm that plasmid asymmetrical inheritance does not require an active barrier to diffusion, and provide a full analysis of plasmid size effects.
1705.05074
Ruggero Micheletto
Takahisa Kishino, Sun Zhe, Roberto Marchisio and Ruggero Micheletto
Cross-modal codification of images with auditory stimuli: a language for the visually impaired
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study we describe a methodology to realize visual image cognition in the broader sense, by cross-modal stimulation through the auditory channel. An original algorithm for converting bi-dimensional images to sounds has been established and tested on several subjects. Our results show that subjects were able to discriminate with a precision of 95\% different sounds corresponding to different test geometric shapes. Moreover, after brief learning sessions on simple images, subjects were able to recognize among a group of 16 complex and never-trained images a single target by hearing its acoustical counterpart. Rate of recognition was found to depend on image characteristics; in 90% of the cases, subjects did better than choosing at random. This study contributes to the understanding of cross-modal perception and helps toward the realization of systems that use acoustical signals to help visually impaired persons recognize objects and improve navigation.
[ { "created": "Mon, 15 May 2017 05:38:01 GMT", "version": "v1" } ]
2017-05-16
[ [ "Kishino", "Takahisa", "" ], [ "Zhe", "Sun", "" ], [ "Marchisio", "Roberto", "" ], [ "Micheletto", "Ruggero", "" ] ]
In this study we describe a methodology to realize visual image cognition in the broader sense, by cross-modal stimulation through the auditory channel. An original algorithm for converting bi-dimensional images to sounds has been established and tested on several subjects. Our results show that subjects were able to discriminate with a precision of 95\% different sounds corresponding to different test geometric shapes. Moreover, after brief learning sessions on simple images, subjects were able to recognize among a group of 16 complex and never-trained images a single target by hearing its acoustical counterpart. Rate of recognition was found to depend on image characteristics; in 90% of the cases, subjects did better than choosing at random. This study contributes to the understanding of cross-modal perception and helps toward the realization of systems that use acoustical signals to help visually impaired persons recognize objects and improve navigation.
2209.04442
Thomas Schmidt
Thomas Schmidt, Melanie Biafora
A Theory of Visibility Measures in the Dissociation Paradigm
v1: initial upload. v2: added arXiv identifier. v3: corrected an error in mathematical notation in the "definition (iii)" section. v5: adds reference to the published article. Note that the manuscript responding to this preprint has now been published in Psychonomic Bulletin & Review and should be cited preferentially
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Research on perception without awareness primarily relies on the dissociation paradigm, which compares a measure of awareness of a critical stimulus (direct measure) with a measure indicating that the stimulus has been processed at all (indirect measure). We argue that dissociations between direct and indirect measures can only be demonstrated with respect to the critical stimulus feature that generates the indirect effect, and the observer's awareness of that feature, the critical cue. We expand Kahneman's (1968) concept of criterion content to comprise the set of all cues that an observer actually uses to perform the direct task. Different direct measures can then be compared by studying the overlap of their criterion contents and their containment of the critical cue. Because objective and subjective measures may integrate different sets of cues, one measure generally cannot replace the other without sacrificing important information. Using a simple mathematical formalization, we redefine and clarify the concepts of validity, exclusiveness, and exhaustiveness in the dissociation paradigm, show how dissociations among different awareness measures falsify simple theories of consciousness, and formulate the demand that theories of visual awareness should be sufficiently specific to explain dissociations among different facets of awareness.
[ { "created": "Mon, 29 Aug 2022 15:45:27 GMT", "version": "v1" }, { "created": "Tue, 13 Sep 2022 12:14:56 GMT", "version": "v2" }, { "created": "Wed, 16 Nov 2022 13:54:02 GMT", "version": "v3" }, { "created": "Wed, 2 Aug 2023 12:58:05 GMT", "version": "v4" } ]
2023-08-03
[ [ "Schmidt", "Thomas", "" ], [ "Biafora", "Melanie", "" ] ]
Research on perception without awareness primarily relies on the dissociation paradigm, which compares a measure of awareness of a critical stimulus (direct measure) with a measure indicating that the stimulus has been processed at all (indirect measure). We argue that dissociations between direct and indirect measures can only be demonstrated with respect to the critical stimulus feature that generates the indirect effect, and the observer's awareness of that feature, the critical cue. We expand Kahneman's (1968) concept of criterion content to comprise the set of all cues that an observer actually uses to perform the direct task. Different direct measures can then be compared by studying the overlap of their criterion contents and their containment of the critical cue. Because objective and subjective measures may integrate different sets of cues, one measure generally cannot replace the other without sacrificing important information. Using a simple mathematical formalization, we redefine and clarify the concepts of validity, exclusiveness, and exhaustiveness in the dissociation paradigm, show how dissociations among different awareness measures falsify simple theories of consciousness, and formulate the demand that theories of visual awareness should be sufficiently specific to explain dissociations among different facets of awareness.
2106.00855
Kresten Lindorff-Larsen
Kresten Lindorff-Larsen and Birthe B. Kragelund
On the potential of machine learning to examine the relationship between sequence, structure, dynamics and function of intrinsically disordered proteins
30 pages, 3 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intrinsically disordered proteins (IDPs) constitute a broad set of proteins with few uniting and many diverging properties. IDPs-and intrinsically disordered regions (IDRs) interspersed between folded domains-are generally characterized as having no persistent tertiary structure; instead they interconvert between a large number of different and often expanded structures. IDPs and IDRs are involved in an enormously wide range of biological functions and reveal novel mechanisms of interactions, and while they defy the common structure-function paradigm of folded proteins, their structural preferences and dynamics are important for their function. We here discuss open questions in the field of IDPs and IDRs, focusing on areas where machine learning and other computational methods play a role. We discuss computational methods aimed to predict transiently formed local and long-range structure, including methods for integrative structural biology. We discuss the many different ways in which IDPs and IDRs can bind to other molecules, both via short linear motifs, as well as in the formation of larger dynamic complexes such as biomolecular condensates. We discuss how experiments are providing insight into such complexes and may enable more accurate predictions. Finally, we discuss the role of IDPs in disease and how new methods are needed to interpret the mechanistic effects of genomic variants in IDPs.
[ { "created": "Tue, 1 Jun 2021 23:35:02 GMT", "version": "v1" } ]
2021-06-03
[ [ "Lindorff-Larsen", "Kresten", "" ], [ "Kragelund", "Birthe B.", "" ] ]
Intrinsically disordered proteins (IDPs) constitute a broad set of proteins with few uniting and many diverging properties. IDPs-and intrinsically disordered regions (IDRs) interspersed between folded domains-are generally characterized as having no persistent tertiary structure; instead they interconvert between a large number of different and often expanded structures. IDPs and IDRs are involved in an enormously wide range of biological functions and reveal novel mechanisms of interactions, and while they defy the common structure-function paradigm of folded proteins, their structural preferences and dynamics are important for their function. We here discuss open questions in the field of IDPs and IDRs, focusing on areas where machine learning and other computational methods play a role. We discuss computational methods aimed to predict transiently formed local and long-range structure, including methods for integrative structural biology. We discuss the many different ways in which IDPs and IDRs can bind to other molecules, both via short linear motifs, as well as in the formation of larger dynamic complexes such as biomolecular condensates. We discuss how experiments are providing insight into such complexes and may enable more accurate predictions. Finally, we discuss the role of IDPs in disease and how new methods are needed to interpret the mechanistic effects of genomic variants in IDPs.
2308.01839
Rong Ma
Rong Ma, Eric D. Sun, David Donoho and James Zou
Is your data alignable? Principled and interpretable alignability testing and integration of single-cell data
null
Proceedings of the National Academy of Sciences, 2024, 121(10) e2313719121
10.1073/pnas.2313719121
null
q-bio.QM cs.CV q-bio.GN stat.AP stat.ML
http://creativecommons.org/licenses/by-nc-nd/4.0/
Single-cell data integration can provide a comprehensive molecular view of cells, and many algorithms have been developed to remove unwanted technical or biological variations and integrate heterogeneous single-cell datasets. Despite their wide usage, existing methods suffer from several fundamental limitations. In particular, we lack a rigorous statistical test for whether two high-dimensional single-cell datasets are alignable (and therefore should even be aligned). Moreover, popular methods can substantially distort the data during alignment, making the aligned data and downstream analysis difficult to interpret. To overcome these limitations, we present a spectral manifold alignment and inference (SMAI) framework, which enables principled and interpretable alignability testing and structure-preserving integration of single-cell data with the same type of features. SMAI provides a statistical test to robustly assess the alignability between datasets to avoid misleading inference, and is justified by high-dimensional statistical theory. On a diverse range of real and simulated benchmark datasets, it outperforms commonly used alignment methods. Moreover, we show that SMAI improves various downstream analyses such as identification of differentially expressed genes and imputation of single-cell spatial transcriptomics, providing further biological insights. SMAI's interpretability also enables quantification and a deeper understanding of the sources of technical confounders in single-cell data.
[ { "created": "Thu, 3 Aug 2023 16:04:14 GMT", "version": "v1" }, { "created": "Thu, 29 Feb 2024 22:35:45 GMT", "version": "v2" } ]
2024-03-04
[ [ "Ma", "Rong", "" ], [ "Sun", "Eric D.", "" ], [ "Donoho", "David", "" ], [ "Zou", "James", "" ] ]
Single-cell data integration can provide a comprehensive molecular view of cells, and many algorithms have been developed to remove unwanted technical or biological variations and integrate heterogeneous single-cell datasets. Despite their wide usage, existing methods suffer from several fundamental limitations. In particular, we lack a rigorous statistical test for whether two high-dimensional single-cell datasets are alignable (and therefore should even be aligned). Moreover, popular methods can substantially distort the data during alignment, making the aligned data and downstream analysis difficult to interpret. To overcome these limitations, we present a spectral manifold alignment and inference (SMAI) framework, which enables principled and interpretable alignability testing and structure-preserving integration of single-cell data with the same type of features. SMAI provides a statistical test to robustly assess the alignability between datasets to avoid misleading inference, and is justified by high-dimensional statistical theory. On a diverse range of real and simulated benchmark datasets, it outperforms commonly used alignment methods. Moreover, we show that SMAI improves various downstream analyses such as identification of differentially expressed genes and imputation of single-cell spatial transcriptomics, providing further biological insights. SMAI's interpretability also enables quantification and a deeper understanding of the sources of technical confounders in single-cell data.
1811.11007
Christopher Whidden
Chris Whidden, Brian C. Claywell, Thayer Fisher, Andrew F. Magee, Mathieu Fourment, Frederick A. Matsen IV
Systematic Exploration of the High Likelihood Set of Phylogenetic Tree Topologies
25 pages, 16 figures
null
null
null
q-bio.PE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian Markov chain Monte Carlo explores tree space slowly, in part because it frequently returns to the same tree topology. An alternative strategy would be to explore tree space systematically, and never return to the same topology. In this paper, we present an efficient parallelized method to map out the high likelihood set of phylogenetic tree topologies via systematic search, which we show to be a good approximation of the high posterior set of tree topologies. Here `likelihood' of a topology refers to the tree likelihood for the corresponding tree with optimized branch lengths. We call this method `phylogenetic topographer' (PT). The PT strategy is very simple: starting in a number of local topology maxima (obtained by hill-climbing from random starting points), explore out using local topology rearrangements, only continuing through topologies that are better than some likelihood threshold below the best observed topology. We show that the normalized topology likelihoods are a useful proxy for the Bayesian posterior probability of those topologies. By using a non-blocking hash table keyed on unique representations of tree topologies, we avoid visiting topologies more than once across all concurrent threads exploring tree space. We demonstrate that PT can be used directly to approximate a Bayesian consensus tree topology. When combined with an accurate means of evaluating per-topology marginal likelihoods, PT gives an alternative procedure for obtaining Bayesian posterior distributions on phylogenetic tree topologies.
[ { "created": "Tue, 27 Nov 2018 14:19:08 GMT", "version": "v1" } ]
2018-11-28
[ [ "Whidden", "Chris", "" ], [ "Claywell", "Brian C.", "" ], [ "Fisher", "Thayer", "" ], [ "Magee", "Andrew F.", "" ], [ "Fourment", "Mathieu", "" ], [ "Matsen", "Frederick A.", "IV" ] ]
Bayesian Markov chain Monte Carlo explores tree space slowly, in part because it frequently returns to the same tree topology. An alternative strategy would be to explore tree space systematically, and never return to the same topology. In this paper, we present an efficient parallelized method to map out the high likelihood set of phylogenetic tree topologies via systematic search, which we show to be a good approximation of the high posterior set of tree topologies. Here `likelihood' of a topology refers to the tree likelihood for the corresponding tree with optimized branch lengths. We call this method `phylogenetic topographer' (PT). The PT strategy is very simple: starting in a number of local topology maxima (obtained by hill-climbing from random starting points), explore out using local topology rearrangements, only continuing through topologies that are better than some likelihood threshold below the best observed topology. We show that the normalized topology likelihoods are a useful proxy for the Bayesian posterior probability of those topologies. By using a non-blocking hash table keyed on unique representations of tree topologies, we avoid visiting topologies more than once across all concurrent threads exploring tree space. We demonstrate that PT can be used directly to approximate a Bayesian consensus tree topology. When combined with an accurate means of evaluating per-topology marginal likelihoods, PT gives an alternative procedure for obtaining Bayesian posterior distributions on phylogenetic tree topologies.
2211.09608
Fanny Thomas
Fanny Thomas, C\'ecile Gallea, Virginie Moulier, Noomane Bouaziz, Antoni Valero-Cabr\'e, Dominique Januel
Local alterations of left arcuate fasciculus and transcallosal white matter microstructure in schizophrenia patients with medication-resistant auditory verbal hallucinations: A pilot study
Neuroscience, Elsevier - International Brain Research Organization, 2022
null
10.1016/j.neuroscience.2022.10.027
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Auditory verbal hallucinations (AVH) in schizophrenia (SZ) have been associated with abnormalities of the left arcuate fasciculus and transcallosal white matter projections linking homologous language areas of both hemispheres. While most studies have used a whole-tract approach, here we focused on analyzing local alterations of the above-mentioned pathways in SZ patients suffering medication-resistant AVH. Fractional anisotropy (FA) was estimated along the left arcuate fasciculus and interhemispheric projections of the rostral and caudal corpus callosum. Then, potential associations between white matter tracts and SZ symptoms were explored by correlating local site-by-site FA values and AVH severity estimated via the Auditory Hallucinations Rating Scale (AHRS). Compared to a sample of healthy controls, SZ patients displayed lower FA values in the rostral portion of the left arcuate fasciculus, near the frontal operculum, and in the left and right lateral regions of the rostral portion of the transcallosal pathways. In contrast, SZ patients showed higher FA values than healthy controls in the medial portion of the latter transcallosal pathway and in the midsagittal section of the interhemispheric auditory pathway. Finally, significant correlations were found between local FA values in the left arcuate fasciculus and the severity of the AVH's attentional salience. Contributing to the study of associations between local white matter alterations of language networks and SZ symptoms, our findings highlight local alterations of white matter integrity in these pathways linking language areas in SZ patients with AVH. We also hypothesize a link between the left arcuate fasciculus and the attentional capture of AVH.
[ { "created": "Thu, 17 Nov 2022 15:57:29 GMT", "version": "v1" } ]
2022-11-18
[ [ "Thomas", "Fanny", "" ], [ "Gallea", "Cécile", "" ], [ "Moulier", "Virginie", "" ], [ "Bouaziz", "Noomane", "" ], [ "Valero-Cabré", "Antoni", "" ], [ "Januel", "Dominique", "" ] ]
Auditory verbal hallucinations (AVH) in schizophrenia (SZ) have been associated with abnormalities of the left arcuate fasciculus and transcallosal white matter projections linking homologous language areas of both hemispheres. While most studies have used a whole-tract approach, here we focused on analyzing local alterations of the above-mentioned pathways in SZ patients suffering medication-resistant AVH. Fractional anisotropy (FA) was estimated along the left arcuate fasciculus and interhemispheric projections of the rostral and caudal corpus callosum. Then, potential associations between white matter tracts and SZ symptoms were explored by correlating local site-by-site FA values and AVH severity estimated via the Auditory Hallucinations Rating Scale (AHRS). Compared to a sample of healthy controls, SZ patients displayed lower FA values in the rostral portion of the left arcuate fasciculus, near the frontal operculum, and in the left and right lateral regions of the rostral portion of the transcallosal pathways. In contrast, SZ patients showed higher FA values than healthy controls in the medial portion of the latter transcallosal pathway and in the midsagittal section of the interhemispheric auditory pathway. Finally, significant correlations were found between local FA values in the left arcuate fasciculus and the severity of the AVH's attentional salience. Contributing to the study of associations between local white matter alterations of language networks and SZ symptoms, our findings highlight local alterations of white matter integrity in these pathways linking language areas in SZ patients with AVH. We also hypothesize a link between the left arcuate fasciculus and the attentional capture of AVH.
2307.03211
Theerawit Wilaiprasitporn
Narongrid Seesawad, Piyalitt Ittichaiwong, Thapanun Sudhawiyangkul, Phattarapong Sawangjai, Peti Thuwajit, Paisarn Boonsakan, Supasan Sripodok, Kanyakorn Veerakanjana, Phoomraphee Luenam, Komgrid Charngkaew, Ananya Pongpaibul, Napat Angkathunyakul, Narit Hnoohom, Sumeth Yuenyong, Chanitra Thuwajit, Theerawit Wilaiprasitporn
PseudoCell: Hard Negative Mining as Pseudo Labeling for Deep Learning-Based Centroblast Cell Detection
null
null
null
null
q-bio.QM cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Patch classification models based on deep learning have been utilized in whole-slide images (WSI) of H&E-stained tissue samples to assist pathologists in grading follicular lymphoma patients. However, these approaches still require pathologists to manually identify centroblast cells and provide refined labels for optimal performance. To address this, we propose PseudoCell, an object detection framework to automate centroblast detection in WSI (source code is available at https://github.com/IoBT-VISTEC/PseudoCell.git). This framework incorporates centroblast labels from pathologists and combines them with pseudo-negative labels obtained from undersampled false-positive predictions using the cell's morphological features. By employing PseudoCell, pathologists' workload can be reduced as it accurately narrows down the areas requiring their attention while examining tissue. Depending on the confidence threshold, PseudoCell can eliminate 58.18-99.35% of non-centroblast tissue areas on WSI. This study presents a practical centroblast prescreening method that does not require pathologists' refined labels for improvement. Detailed guidance on the practical implementation of PseudoCell is provided in the discussion section.
[ { "created": "Thu, 6 Jul 2023 14:47:27 GMT", "version": "v1" } ]
2023-07-10
[ [ "Seesawad", "Narongrid", "" ], [ "Ittichaiwong", "Piyalitt", "" ], [ "Sudhawiyangkul", "Thapanun", "" ], [ "Sawangjai", "Phattarapong", "" ], [ "Thuwajit", "Peti", "" ], [ "Boonsakan", "Paisarn", "" ], [ "Sripodok"...
Patch classification models based on deep learning have been utilized in whole-slide images (WSI) of H&E-stained tissue samples to assist pathologists in grading follicular lymphoma patients. However, these approaches still require pathologists to manually identify centroblast cells and provide refined labels for optimal performance. To address this, we propose PseudoCell, an object detection framework to automate centroblast detection in WSI (source code is available at https://github.com/IoBT-VISTEC/PseudoCell.git). This framework incorporates centroblast labels from pathologists and combines them with pseudo-negative labels obtained from undersampled false-positive predictions using the cell's morphological features. By employing PseudoCell, pathologists' workload can be reduced as it accurately narrows down the areas requiring their attention while examining tissue. Depending on the confidence threshold, PseudoCell can eliminate 58.18-99.35% of non-centroblast tissue areas on WSI. This study presents a practical centroblast prescreening method that does not require pathologists' refined labels for improvement. Detailed guidance on the practical implementation of PseudoCell is provided in the discussion section.
0910.0143
Anthony Coolen
A.C.C. Coolen and S. Rabello
Generating functional analysis of complex formation and dissociation in large protein interaction networks
14 pages, to be published in Proc of IW-SMI-2009 in Kyoto (Journal of Phys Conference Series)
null
10.1088/1742-6596/197/1/012006
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze large systems of interacting proteins, using techniques from the non-equilibrium statistical mechanics of disordered many-particle systems. Apart from protein production and removal, the most relevant microscopic processes in the proteome are complex formation and dissociation, and the microscopic degrees of freedom are the evolving concentrations of unbound proteins (in multiple post-translational states) and of protein complexes. Here we only include dimer-complexes, for mathematical simplicity, and we draw the network that describes which proteins are reaction partners from an ensemble of random graphs with an arbitrary degree distribution. We show how generating functional analysis methods can be used successfully to derive closed equations for dynamical order parameters, representing an exact macroscopic description of the complex formation and dissociation dynamics in the infinite system limit. We end this paper with a discussion of the possible routes towards solving the nontrivial order parameter equations, either exactly (in specific limits) or approximately.
[ { "created": "Thu, 1 Oct 2009 11:42:44 GMT", "version": "v1" } ]
2015-05-14
[ [ "Coolen", "A. C. C.", "" ], [ "Rabello", "S.", "" ] ]
We analyze large systems of interacting proteins, using techniques from the non-equilibrium statistical mechanics of disordered many-particle systems. Apart from protein production and removal, the most relevant microscopic processes in the proteome are complex formation and dissociation, and the microscopic degrees of freedom are the evolving concentrations of unbound proteins (in multiple post-translational states) and of protein complexes. Here we only include dimer-complexes, for mathematical simplicity, and we draw the network that describes which proteins are reaction partners from an ensemble of random graphs with an arbitrary degree distribution. We show how generating functional analysis methods can be used successfully to derive closed equations for dynamical order parameters, representing an exact macroscopic description of the complex formation and dissociation dynamics in the infinite system limit. We end this paper with a discussion of the possible routes towards solving the nontrivial order parameter equations, either exactly (in specific limits) or approximately.
q-bio/0607033
Miodrag Krmar
Vladan Pankovic, Rade Glavatovic, Nikola Vunduk, Dejan Banjac, Nemanja Marjanovic, Milan Predojevic
A "Quasi-Rapid" Extinction Population Dynamics and Mammoths Overkill
11 pages, no figures
null
null
NS-PH-12/06
q-bio.PE q-bio.QM
null
In this work we suggest and consider an original, simple mathematical model of "quasi-rapid" extinction population dynamics. It describes the decrease and final extinction of the population of one prey species through a "quasi-rapid" interaction with one predator species with an increasing population. This "quasi-rapid" interaction means ecologically that the prey species behaves practically quite passively (since there is no time for any reaction, i.e. defense), like an appropriate environment, with respect to the "quasi-rapid" activity of the predator species, which can have different "quasi-rapid" hunting abilities. Mathematically, our model is based on a non-Lotka-Volterraian system of two differential equations of the first order, the first of which is linear while the second, depending on a parameter that characterizes hunting ability, is nonlinear. We compare the suggested "quasi-rapid" extinction population dynamics and the global model of the overkill of the prehistoric megafauna (mammoths). We demonstrate that our "quasi-rapid" extinction population dynamics is able to restitute successfully correlations between empirical (archeological) data and overkill theory in North America as well as Australia. For this reason, we conclude that global overkill theory, completely mathematically modelable by "quasi-rapid" extinction population dynamics, can consistently explain the Pleistocene extinctions of the megafauna.
[ { "created": "Fri, 21 Jul 2006 09:43:04 GMT", "version": "v1" } ]
2007-05-23
[ [ "Pankovic", "Vladan", "" ], [ "Glavatovic", "Rade", "" ], [ "Vunduk", "Nikola", "" ], [ "Banjac", "Dejan", "" ], [ "Marjanovic", "Nemanja", "" ], [ "Predojevic", "Milan", "" ] ]
In this work we suggest and consider an original, simple mathematical model of "quasi-rapid" extinction population dynamics. It describes the decrease and final extinction of the population of one prey species through a "quasi-rapid" interaction with one predator species with an increasing population. This "quasi-rapid" interaction means ecologically that the prey species behaves practically quite passively (since there is no time for any reaction, i.e. defense), like an appropriate environment, with respect to the "quasi-rapid" activity of the predator species, which can have different "quasi-rapid" hunting abilities. Mathematically, our model is based on a non-Lotka-Volterraian system of two differential equations of the first order, the first of which is linear while the second, depending on a parameter that characterizes hunting ability, is nonlinear. We compare the suggested "quasi-rapid" extinction population dynamics and the global model of the overkill of the prehistoric megafauna (mammoths). We demonstrate that our "quasi-rapid" extinction population dynamics is able to restitute successfully correlations between empirical (archeological) data and overkill theory in North America as well as Australia. For this reason, we conclude that global overkill theory, completely mathematically modelable by "quasi-rapid" extinction population dynamics, can consistently explain the Pleistocene extinctions of the megafauna.
2109.13175
Sushant Vijayan
Sushant Vijayan
Differential Games in the spread of Covid-19
11 pages, 6 figures
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by-sa/4.0/
Given the ongoing Covid-19 pandemic, it is of interest to understand how the infections spread as the combined result of measures taken by central planners (governments) and individual behavior. In this work, the spread of Covid-19 is modelled as a differentiable game between the planner and population with appropriate disease spread dynamical equations. We first characterise the equilibrium dynamics of only the population with modified Susceptible-Infected-Recovered (SIR) equations to highlight the qualitative nature of the equilibrium. Using this result, we formulate the joint equilibrium exposure profile between the planner and population. Additionally, as in case of Covid-19, the role of asymptomatic carriers, inadequacies in testing, contact tracing and quarantining can lead to a significant underestimate of the true infected numbers as compared to just the detected numbers. Therefore, it is vital to model the true infected numbers within the context of choices made by individuals within the population. To incorporate this, we extend our framework by modifying the dynamics to include additional sub-compartments of `undetected infected' and `detected infected' in the disease dynamics. The individuals make their own estimates of the total infected from the detected numbers and base their strategies on those estimates. We show that these considerations lead to a retarded optimal control problem for the players. We present some simulation results based on these results to demonstrate how population behavior, planner control, detection rates and trust in the reported numbers play a key role in how the disease spreads.
[ { "created": "Mon, 27 Sep 2021 16:37:32 GMT", "version": "v1" }, { "created": "Thu, 30 Sep 2021 11:00:26 GMT", "version": "v2" } ]
2021-10-01
[ [ "Vijayan", "Sushant", "" ] ]
Given the ongoing Covid-19 pandemic, it is of interest to understand how infections spread as the combined result of measures taken by central planners (governments) and individual behavior. In this work, the spread of Covid-19 is modelled as a differential game between the planner and the population, with appropriate disease-spread dynamical equations. We first characterise the equilibrium dynamics of the population alone with modified Susceptible-Infected-Recovered (SIR) equations to highlight the qualitative nature of the equilibrium. Using this result, we formulate the joint equilibrium exposure profile between the planner and the population. Additionally, as in the case of Covid-19, the role of asymptomatic carriers and inadequacies in testing, contact tracing and quarantining can lead to a significant underestimate of the true infected numbers as compared to the detected numbers alone. Therefore, it is vital to model the true infected numbers within the context of choices made by individuals within the population. To incorporate this, we extend our framework by modifying the dynamics to include additional sub-compartments of `undetected infected' and `detected infected' in the disease dynamics. Individuals make their own estimates of the total infected from the detected numbers and base their strategies on those estimates. We show that these considerations lead to a retarded optimal control problem for the players. We present simulation results demonstrating how population behavior, planner control, detection rates and trust in the reported numbers play a key role in how the disease spreads.
0901.0976
Richard Williams
Richard J. Williams
Simple MaxEnt Models for Food Web Degree Distributions
29 pages, 6 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Degree distributions have been widely used to characterize biological networks including food webs, and play a vital role in recent models of food web structure. While food web degree distributions have been suggested to follow various functional forms, to date there has been no mechanistic or statistical explanation for these forms. Here I introduce models for the degree distributions of food webs based on the principle of maximum entropy (MaxEnt) constrained by the number of species, the number of links and the number of top or basal species. The MaxEnt predictions are compared to observed distributions in 51 food webs. The distributions of the number of consumers and resources in 23 (45%) and 35 (69%) of the food webs respectively are not significantly different at a 95% confidence level from the MaxEnt distribution. While the resource distributions of niche model webs are well-described by the MaxEnt model, the consumer distributions are more narrowly distributed than predicted by the MaxEnt model. These findings offer a new null model for the most probable degree distributions in food webs. Having an appropriate null hypothesis in place allows informative study of the deviations from it; for example, these results suggest that there is relatively little pressure favoring generalist versus specialist consumption strategies but that there is more pressure driving the consumer distribution away from the MaxEnt form. Given the methodological idiosyncrasies of current food web data, further study of such deviations will need to consider both biological drivers and methodological bias.
[ { "created": "Thu, 8 Jan 2009 05:59:21 GMT", "version": "v1" } ]
2009-01-09
[ [ "Williams", "Richard J.", "" ] ]
Degree distributions have been widely used to characterize biological networks including food webs, and play a vital role in recent models of food web structure. While food web degree distributions have been suggested to follow various functional forms, to date there has been no mechanistic or statistical explanation for these forms. Here I introduce models for the degree distributions of food webs based on the principle of maximum entropy (MaxEnt) constrained by the number of species, the number of links and the number of top or basal species. The MaxEnt predictions are compared to observed distributions in 51 food webs. The distributions of the number of consumers and resources in 23 (45%) and 35 (69%) of the food webs respectively are not significantly different at a 95% confidence level from the MaxEnt distribution. While the resource distributions of niche model webs are well-described by the MaxEnt model, the consumer distributions are more narrowly distributed than predicted by the MaxEnt model. These findings offer a new null model for the most probable degree distributions in food webs. Having an appropriate null hypothesis in place allows informative study of the deviations from it; for example, these results suggest that there is relatively little pressure favoring generalist versus specialist consumption strategies but that there is more pressure driving the consumer distribution away from the MaxEnt form. Given the methodological idiosyncrasies of current food web data, further study of such deviations will need to consider both biological drivers and methodological bias.
1508.03729
Robert Cameron
Robert P. Cameron, John A. Cameron and Stephen M. Barnett
Were there two forms of Stegosaurus?
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We recognise that Stegosaurus exhibited exterior chirality and could, therefore, have assumed either of two distinct, mirror-image forms. Our preliminary investigations suggest that both existed. Stegosaurus's exterior chirality raises new questions such as the validity of well-known exhibits whilst offering new insights into long-standing questions such as the function of the plates. We inform our discussions throughout with examples of modern-day animals that exhibit exterior chirality.
[ { "created": "Sat, 15 Aug 2015 12:46:40 GMT", "version": "v1" } ]
2015-08-18
[ [ "Cameron", "Robert P.", "" ], [ "Cameron", "John A.", "" ], [ "Barnett", "Stephen M.", "" ] ]
We recognise that Stegosaurus exhibited exterior chirality and could, therefore, have assumed either of two distinct, mirror-image forms. Our preliminary investigations suggest that both existed. Stegosaurus's exterior chirality raises new questions such as the validity of well-known exhibits whilst offering new insights into long-standing questions such as the function of the plates. We inform our discussions throughout with examples of modern-day animals that exhibit exterior chirality.
1308.6333
Dongying Wu
Dongying Wu, Ladan Doroud, Jonathan A. Eisen
TreeOTU: Operational Taxonomic Unit Classification Based on Phylogenetic Trees
23 pages, 6 figures
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/3.0/
Our current understanding of the taxonomic and phylogenetic diversity of cellular organisms, especially the bacteria and archaea, is mostly based upon studies of sequences of the small-subunit rRNAs (ssu-rRNAs). To address the limitations of ssu-rRNA as a phylogenetic marker, such as copy number variation among organisms and complications introduced by horizontal gene transfer, convergent evolution, or evolution rate variations, we have identified protein-coding gene families as alternative Phylogenetic and Phylogenetic Ecology markers (PhyEco). Current nucleotide sequence similarity based Operational Taxonomic Unit (OTU) classification methods are not readily applicable to amino acid sequences of PhyEco markers. We report here the development of TreeOTU, a phylogenetic tree structure based OTU classification method that takes into account differences in rates of evolution between taxa and between genes. OTU sets built by TreeOTU are more faithful to phylogenetic tree structures than sequence clustering (non-phylogenetic) methods for ssu-rRNAs. OTUs built from phylogenetic trees of protein-coding PhyEco markers are comparable to our current taxonomic classification at different levels. With the included OTU comparison tools, TreeOTU is robust in phylogenetic referencing with different phylogenetic markers and trees.
[ { "created": "Wed, 28 Aug 2013 23:40:21 GMT", "version": "v1" } ]
2013-08-30
[ [ "Wu", "Dongying", "" ], [ "Doroud", "Ladan", "" ], [ "Eisen", "Jonathan A.", "" ] ]
Our current understanding of the taxonomic and phylogenetic diversity of cellular organisms, especially the bacteria and archaea, is mostly based upon studies of sequences of the small-subunit rRNAs (ssu-rRNAs). To address the limitations of ssu-rRNA as a phylogenetic marker, such as copy number variation among organisms and complications introduced by horizontal gene transfer, convergent evolution, or evolution rate variations, we have identified protein-coding gene families as alternative Phylogenetic and Phylogenetic Ecology markers (PhyEco). Current nucleotide sequence similarity based Operational Taxonomic Unit (OTU) classification methods are not readily applicable to amino acid sequences of PhyEco markers. We report here the development of TreeOTU, a phylogenetic tree structure based OTU classification method that takes into account differences in rates of evolution between taxa and between genes. OTU sets built by TreeOTU are more faithful to phylogenetic tree structures than sequence clustering (non-phylogenetic) methods for ssu-rRNAs. OTUs built from phylogenetic trees of protein-coding PhyEco markers are comparable to our current taxonomic classification at different levels. With the included OTU comparison tools, TreeOTU is robust in phylogenetic referencing with different phylogenetic markers and trees.
1908.03482
Ken Duffy
Alexander S. Miles, Philip D. Hodgkin and Ken R. Duffy
Inferring differentiation order in adaptive immune responses from population level data
null
null
null
null
q-bio.CB q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A hallmark of the adaptive immune response is the proliferation of pathogen-specific lymphocytes that leave in their wake a long-lived population of cells that provide lasting immunity. A subject of ongoing investigation is when during an adaptive immune response those memory cells are produced. In two ground-breaking studies, Buchholz et al. (Science, 2013) and Gerlach et al. (Science, 2013) employed experimental methods that allowed identification of offspring from individual lymphocytes in vivo, which we call clonal data, at a single time point. Through the development, application and fitting of a mathematical model, Buchholz et al. (Science, 2013) concluded that, if memory is produced during the expansion phase, memory cell precursors are made before the effector cells that clear the original pathogen. We sought to determine the general validity and power of the modeling approach introduced in Buchholz et al. (Science, 2013) for quickly evaluating differentiation networks by adapting it to make it suitable for drawing inferences from more readily available non-clonal phenotypic proportion time-courses. We first established that the method drew consistent deductions when fit to the non-clonal data in Buchholz et al. (Science, 2013) itself. We fit a variant of the model to data reported in Badovinac et al. (J. Immun., 2007), Schlub et al. (Immun. & Cell Bio., 2010), and Kinjo et al. (Nature Commun., 2015) with necessary simplifications to match the different reported data in these papers. The deduction from the model was consistent with that in Buchholz et al. (Science, 2013), albeit with questionable parameterizations. An alternative possibility, supported by the data in Kinjo et al. (Nature Commun., 2015), is that memory precursors are created after the expansion phase, which is a deduction not possible from the mathematical methods provided in Buchholz et al. (Science, 2013).
[ { "created": "Fri, 9 Aug 2019 14:49:35 GMT", "version": "v1" }, { "created": "Wed, 3 Jun 2020 00:19:56 GMT", "version": "v2" } ]
2020-06-04
[ [ "Miles", "Alexander S.", "" ], [ "Hodgkin", "Philip D.", "" ], [ "Duffy", "Ken R.", "" ] ]
A hallmark of the adaptive immune response is the proliferation of pathogen-specific lymphocytes that leave in their wake a long-lived population of cells that provide lasting immunity. A subject of ongoing investigation is when during an adaptive immune response those memory cells are produced. In two ground-breaking studies, Buchholz et al. (Science, 2013) and Gerlach et al. (Science, 2013) employed experimental methods that allowed identification of offspring from individual lymphocytes in vivo, which we call clonal data, at a single time point. Through the development, application and fitting of a mathematical model, Buchholz et al. (Science, 2013) concluded that, if memory is produced during the expansion phase, memory cell precursors are made before the effector cells that clear the original pathogen. We sought to determine the general validity and power of the modeling approach introduced in Buchholz et al. (Science, 2013) for quickly evaluating differentiation networks by adapting it to make it suitable for drawing inferences from more readily available non-clonal phenotypic proportion time-courses. We first established that the method drew consistent deductions when fit to the non-clonal data in Buchholz et al. (Science, 2013) itself. We fit a variant of the model to data reported in Badovinac et al. (J. Immun., 2007), Schlub et al. (Immun. & Cell Bio., 2010), and Kinjo et al. (Nature Commun., 2015) with necessary simplifications to match the different reported data in these papers. The deduction from the model was consistent with that in Buchholz et al. (Science, 2013), albeit with questionable parameterizations. An alternative possibility, supported by the data in Kinjo et al. (Nature Commun., 2015), is that memory precursors are created after the expansion phase, which is a deduction not possible from the mathematical methods provided in Buchholz et al. (Science, 2013).
2303.12000
Ignacio Enrique S\'anchez
Luciana L. Couso, Alfonso Soler-Bistue, Ariel A. Aptekmann, Ignacio E. Sanchez
Ecology theory disentangles microbial dichotomies
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Microbes are often discussed in terms of dichotomies such as copiotrophic/oligotrophic and fast/slow-growing microbes, defined using the characterisation of microbial growth in isolated cultures. The dichotomies are usually qualitative and/or study-specific, sometimes precluding clear-cut interpretation of results. We are able to interpret microbial dichotomies as life history strategies by combining ecology theory with Monod curves, a classical laboratory tool of bacterial physiology. Monod curves relate the specific growth rate of a microbe with the concentration of a limiting nutrient, and provide quantities that directly correspond to key ecological parameters in MacArthur and Wilson's r/K selection theory, Tilman's resource competition and community structure theory and Grime's triangle of life strategies. The resulting model allows us to reconcile the copiotrophic/oligotrophic and fast/slow-growing dichotomies as different subsamples of a life history strategy triangle that also includes r/K strategists. We analyzed the ecological context by considering the known viable carbon sources for heterotrophic microbes in the framework of community structure theory. This partly explains the microbial diversity observed using metagenomics. In sum, ecology theory in combination with Monod curves can be a unifying quantitative framework for the study of natural microbial communities, calling for the integration of modern laboratory and field experiments.
[ { "created": "Tue, 21 Mar 2023 16:32:48 GMT", "version": "v1" }, { "created": "Sat, 15 Apr 2023 11:03:29 GMT", "version": "v2" } ]
2023-04-18
[ [ "Couso", "Luciana L.", "" ], [ "Soler-Bistue", "Alfonso", "" ], [ "Aptekmann", "Ariel A.", "" ], [ "Sanchez", "Ignacio E.", "" ] ]
Microbes are often discussed in terms of dichotomies such as copiotrophic/oligotrophic and fast/slow-growing microbes, defined using the characterisation of microbial growth in isolated cultures. The dichotomies are usually qualitative and/or study-specific, sometimes precluding clear-cut interpretation of results. We are able to interpret microbial dichotomies as life history strategies by combining ecology theory with Monod curves, a classical laboratory tool of bacterial physiology. Monod curves relate the specific growth rate of a microbe with the concentration of a limiting nutrient, and provide quantities that directly correspond to key ecological parameters in MacArthur and Wilson's r/K selection theory, Tilman's resource competition and community structure theory and Grime's triangle of life strategies. The resulting model allows us to reconcile the copiotrophic/oligotrophic and fast/slow-growing dichotomies as different subsamples of a life history strategy triangle that also includes r/K strategists. We analyzed the ecological context by considering the known viable carbon sources for heterotrophic microbes in the framework of community structure theory. This partly explains the microbial diversity observed using metagenomics. In sum, ecology theory in combination with Monod curves can be a unifying quantitative framework for the study of natural microbial communities, calling for the integration of modern laboratory and field experiments.
2407.14668
Yizi Zhang
Yizi Zhang, Yanchen Wang, Donato Jimenez-Beneto, Zixuan Wang, Mehdi Azabou, Blake Richards, Olivier Winter, International Brain Laboratory, Eva Dyer, Liam Paninski, Cole Hurwitz
Towards a "universal translator" for neural dynamics at single-cell, single-spike resolution
null
null
null
null
q-bio.NC cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
Neuroscience research has made immense progress over the last decade, but our understanding of the brain remains fragmented and piecemeal: the dream of probing an arbitrary brain region and automatically reading out the information encoded in its neural activity remains out of reach. In this work, we build towards a first foundation model for neural spiking data that can solve a diverse set of tasks across multiple brain areas. We introduce a novel self-supervised modeling approach for population activity in which the model alternates between masking out and reconstructing neural activity across different time steps, neurons, and brain regions. To evaluate our approach, we design unsupervised and supervised prediction tasks using the International Brain Laboratory repeated site dataset, which is comprised of Neuropixels recordings targeting the same brain locations across 48 animals and experimental sessions. The prediction tasks include single-neuron and region-level activity prediction, forward prediction, and behavior decoding. We demonstrate that our multi-task-masking (MtM) approach significantly improves the performance of current state-of-the-art population models and enables multi-task learning. We also show that by training on multiple animals, we can improve the generalization ability of the model to unseen animals, paving the way for a foundation model of the brain at single-cell, single-spike resolution.
[ { "created": "Fri, 19 Jul 2024 21:05:28 GMT", "version": "v1" }, { "created": "Tue, 23 Jul 2024 16:14:27 GMT", "version": "v2" } ]
2024-07-24
[ [ "Zhang", "Yizi", "" ], [ "Wang", "Yanchen", "" ], [ "Jimenez-Beneto", "Donato", "" ], [ "Wang", "Zixuan", "" ], [ "Azabou", "Mehdi", "" ], [ "Richards", "Blake", "" ], [ "Winter", "Olivier", "" ], [ ...
Neuroscience research has made immense progress over the last decade, but our understanding of the brain remains fragmented and piecemeal: the dream of probing an arbitrary brain region and automatically reading out the information encoded in its neural activity remains out of reach. In this work, we build towards a first foundation model for neural spiking data that can solve a diverse set of tasks across multiple brain areas. We introduce a novel self-supervised modeling approach for population activity in which the model alternates between masking out and reconstructing neural activity across different time steps, neurons, and brain regions. To evaluate our approach, we design unsupervised and supervised prediction tasks using the International Brain Laboratory repeated site dataset, which is comprised of Neuropixels recordings targeting the same brain locations across 48 animals and experimental sessions. The prediction tasks include single-neuron and region-level activity prediction, forward prediction, and behavior decoding. We demonstrate that our multi-task-masking (MtM) approach significantly improves the performance of current state-of-the-art population models and enables multi-task learning. We also show that by training on multiple animals, we can improve the generalization ability of the model to unseen animals, paving the way for a foundation model of the brain at single-cell, single-spike resolution.
2407.13780
Ulrich Armel Mbou Sob
Ulrich A. Mbou Sob, Qiulin Li, Miguel Arbes\'u, Oliver Bent, Andries P. Smit and Arnu Pretorius
Generative Model for Small Molecules with Latent Space RL Fine-Tuning to Protein Targets
12 pages, 6 figures, Proceedings of the ICML 2024 Workshop on Accessible and Efficient Foundation Models for Biological Discovery, Vienna, Austria. 2024
null
null
null
q-bio.BM cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
A specific challenge with deep learning approaches for molecule generation is generating both syntactically valid and chemically plausible molecular string representations. To address this, we propose a novel generative latent-variable transformer model for small molecules that leverages a recently proposed molecular string representation called SAFE. We introduce a modification to SAFE to reduce the number of invalid fragmented molecules generated during training and use this to train our model. Our experiments show that our model can generate novel molecules with a validity rate > 90% and a fragmentation rate < 1% by sampling from a latent space. By fine-tuning the model using reinforcement learning to improve molecular docking, we significantly increase the number of hit candidates for five specific protein targets compared to the pre-trained model, nearly doubling this number for certain targets. Additionally, our top 5% mean docking scores are comparable to the current state-of-the-art (SOTA), and we marginally outperform SOTA on three of the five targets.
[ { "created": "Tue, 2 Jul 2024 16:01:37 GMT", "version": "v1" } ]
2024-07-22
[ [ "Sob", "Ulrich A. Mbou", "" ], [ "Li", "Qiulin", "" ], [ "Arbesú", "Miguel", "" ], [ "Bent", "Oliver", "" ], [ "Smit", "Andries P.", "" ], [ "Pretorius", "Arnu", "" ] ]
A specific challenge with deep learning approaches for molecule generation is generating both syntactically valid and chemically plausible molecular string representations. To address this, we propose a novel generative latent-variable transformer model for small molecules that leverages a recently proposed molecular string representation called SAFE. We introduce a modification to SAFE to reduce the number of invalid fragmented molecules generated during training and use this to train our model. Our experiments show that our model can generate novel molecules with a validity rate > 90% and a fragmentation rate < 1% by sampling from a latent space. By fine-tuning the model using reinforcement learning to improve molecular docking, we significantly increase the number of hit candidates for five specific protein targets compared to the pre-trained model, nearly doubling this number for certain targets. Additionally, our top 5% mean docking scores are comparable to the current state-of-the-art (SOTA), and we marginally outperform SOTA on three of the five targets.
1801.04623
Danielle Bassett
Eli J. Cornblath, Evelyn Tang, Graham L. Baum, Tyler M. Moore, David R. Roalf, Ruben C. Gur, Raquel E. Gur, Fabio Pasqualetti, Theodore D. Satterthwaite, Danielle S. Bassett
Sex differences in network controllability as a predictor of executive function in youth
null
null
null
null
q-bio.NC cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Executive function emerges late in development and displays different developmental trends in males and females. Sex differences in executive function in youth have been linked to vulnerability to psychopathology as well as to behaviors that impinge on health. Yet, the neurobiological basis of these differences is not well understood. Here we test the hypothesis that sex differences in executive function in youth stem from sex differences in the controllability of structural brain networks as they rewire over development. Combining methods from network neuroscience and network control theory, we characterize the network control properties of structural brain networks estimated from diffusion imaging data acquired in males and females in a sample of 882 youth aged 8-22 years. We summarize the control properties of these networks by estimating average and modal controllability, two statistics that probe the ease with which brain areas can drive the network towards easy- versus difficult-to-reach states. We find that females have higher modal controllability in frontal, parietal, and subcortical regions while males have higher average controllability in frontal and subcortical regions. Furthermore, average controllability values in the medial frontal cortex and subcortex, both higher in males, are negatively related to executive function. Finally, we find that average controllability predicts sex-dependent individual differences in activation during an n-back working memory task. Taken together, our findings support the notion that sex differences in the controllability of structural brain networks can partially explain sex differences in executive function. Controllability of structural brain networks also predicts features of task-relevant activation, suggesting the potential for controllability to represent context-specific constraints on network state more generally.
[ { "created": "Mon, 15 Jan 2018 00:10:29 GMT", "version": "v1" } ]
2018-01-16
[ [ "Cornblath", "Eli J.", "" ], [ "Tang", "Evelyn", "" ], [ "Baum", "Graham L.", "" ], [ "Moore", "Tyler M.", "" ], [ "Roalf", "David R.", "" ], [ "Gur", "Ruben C.", "" ], [ "Gur", "Raquel E.", "" ], [ ...
Executive function emerges late in development and displays different developmental trends in males and females. Sex differences in executive function in youth have been linked to vulnerability to psychopathology as well as to behaviors that impinge on health. Yet, the neurobiological basis of these differences is not well understood. Here we test the hypothesis that sex differences in executive function in youth stem from sex differences in the controllability of structural brain networks as they rewire over development. Combining methods from network neuroscience and network control theory, we characterize the network control properties of structural brain networks estimated from diffusion imaging data acquired in males and females in a sample of 882 youth aged 8-22 years. We summarize the control properties of these networks by estimating average and modal controllability, two statistics that probe the ease with which brain areas can drive the network towards easy- versus difficult-to-reach states. We find that females have higher modal controllability in frontal, parietal, and subcortical regions while males have higher average controllability in frontal and subcortical regions. Furthermore, average controllability values in the medial frontal cortex and subcortex, both higher in males, are negatively related to executive function. Finally, we find that average controllability predicts sex-dependent individual differences in activation during an n-back working memory task. Taken together, our findings support the notion that sex differences in the controllability of structural brain networks can partially explain sex differences in executive function. Controllability of structural brain networks also predicts features of task-relevant activation, suggesting the potential for controllability to represent context-specific constraints on network state more generally.
q-bio/0603025
M. D. Betterton
D. C. Clarke, M. D. Betterton, and X. Liu
Systems theory of Smad signaling
To appear in IEE Proceedings Systems Biology. 12 pages of text, 36 pages total
IEE Proceedings - Systems Biology 153 (6), pp. 412-424 (2006)
10.1049/ip-syb:20050055
null
q-bio.MN q-bio.SC
null
Transforming Growth Factor-beta (TGF-beta) signalling is an important regulator of cellular growth and differentiation. The principal intracellular mediators of TGF-beta signalling are the Smad proteins, which upon TGF-beta stimulation accumulate in the nucleus and regulate transcription of target genes. To investigate the mechanisms of Smad nuclear accumulation, we developed a simple mathematical model of canonical Smad signalling. The model was built using both published data and our experimentally determined cellular Smad concentrations (isoforms 2, 3, and 4). We found in mink lung epithelial cells that Smad2 (8.5-12 x 10^4 molecules/cell) was present in similar amounts to Smad4 (9.3-12 x 10^4 molecules/cell), while both were in excess of Smad3 (1.1-2.0 x 10^4 molecules/cell). Variation of the model parameters and statistical analysis showed that Smad nuclear accumulation is most sensitive to parameters affecting the rates of R-Smad phosphorylation and dephosphorylation and Smad complex formation/dissociation in the nucleus. Deleting Smad4 from the model revealed that rate-limiting phospho-R-Smad dephosphorylation could be an important mechanism for Smad nuclear accumulation. Furthermore, we observed that binding factors constitutively localised to the nucleus do not efficiently mediate Smad nuclear accumulation if dephosphorylation is rapid. We therefore conclude that an imbalance in the rates of R-Smad phosphorylation and dephosphorylation is likely an important mechanism of Smad nuclear accumulation during TGF-beta signalling.
[ { "created": "Wed, 22 Mar 2006 00:06:57 GMT", "version": "v1" } ]
2007-05-23
[ [ "Clarke", "D. C.", "" ], [ "Betterton", "M. D.", "" ], [ "Liu", "X.", "" ] ]
Transforming Growth Factor-beta (TGF-beta) signalling is an important regulator of cellular growth and differentiation. The principal intracellular mediators of TGF-beta signalling are the Smad proteins, which upon TGF-beta stimulation accumulate in the nucleus and regulate transcription of target genes. To investigate the mechanisms of Smad nuclear accumulation, we developed a simple mathematical model of canonical Smad signalling. The model was built using both published data and our experimentally determined cellular Smad concentrations (isoforms 2, 3, and 4). We found in mink lung epithelial cells that Smad2 (8.5-12 x 10^4 molecules/cell) was present in similar amounts to Smad4 (9.3-12 x 10^4 molecules/cell), while both were in excess of Smad3 (1.1-2.0 x 10^4 molecules/cell). Variation of the model parameters and statistical analysis showed that Smad nuclear accumulation is most sensitive to parameters affecting the rates of R-Smad phosphorylation and dephosphorylation and Smad complex formation/dissociation in the nucleus. Deleting Smad4 from the model revealed that rate-limiting phospho-R-Smad dephosphorylation could be an important mechanism for Smad nuclear accumulation. Furthermore, we observed that binding factors constitutively localised to the nucleus do not efficiently mediate Smad nuclear accumulation if dephosphorylation is rapid. We therefore conclude that an imbalance in the rates of R-Smad phosphorylation and dephosphorylation is likely an important mechanism of Smad nuclear accumulation during TGF-beta signalling.
1811.05225
Claus Metzner
Patrick Krauss, Alexandra Zankl, Achim Schilling, Holger Schulze, and Claus Metzner
Analysis of structure and dynamics in three-neuron motifs
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In neural networks with identical neurons, the matrix of connection weights completely describes the network structure and thereby determines how it is processing information. However, due to the non-linearity of these systems, it is not clear if similar microscopic connection structures also imply similar functional properties, or if a network is impacted more by macroscopic structural quantities, such as the ratio of excitatory and inhibitory connections (balance), or the ratio of non-zero connections (density). To clarify these questions, we focus on motifs of three binary neurons with discrete ternary connection strengths, an important class of network building blocks that can be analyzed exhaustively. We develop new, permutation-invariant metrics to quantify the structural and functional distance between two given network motifs. We then use multidimensional scaling to identify and visualize clusters of motifs with similar structural and functional properties. Our comprehensive analysis reveals that the function of a neural network is only weakly correlated with its microscopic structure, but depends strongly on the balance of the connections.
[ { "created": "Tue, 13 Nov 2018 11:34:50 GMT", "version": "v1" } ]
2018-11-14
[ [ "Krauss", "Patrick", "" ], [ "Zankl", "Alexandra", "" ], [ "Schilling", "Achim", "" ], [ "Schulze", "Holger", "" ], [ "Metzner", "Claus", "" ] ]
In neural networks with identical neurons, the matrix of connection weights completely describes the network structure and thereby determines how it is processing information. However, due to the non-linearity of these systems, it is not clear if similar microscopic connection structures also imply similar functional properties, or if a network is impacted more by macroscopic structural quantities, such as the ratio of excitatory and inhibitory connections (balance), or the ratio of non-zero connections (density). To clarify these questions, we focus on motifs of three binary neurons with discrete ternary connection strengths, an important class of network building blocks that can be analyzed exhaustively. We develop new, permutation-invariant metrics to quantify the structural and functional distance between two given network motifs. We then use multidimensional scaling to identify and visualize clusters of motifs with similar structural and functional properties. Our comprehensive analysis reveals that the function of a neural network is only weakly correlated with its microscopic structure, but depends strongly on the balance of the connections.
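The permutation-invariant structural metric described in this record can be sketched in a few lines: two 3x3 ternary weight matrices are compared under every relabeling of the three neurons, and the smallest elementwise difference is taken. This is an illustrative reconstruction under stated assumptions (the function name, the L1 choice, and the example matrices are hypothetical), not the authors' exact definition:

```python
import itertools
import numpy as np

def perm_invariant_distance(W1, W2):
    """Minimal elementwise L1 distance between two 3x3 ternary weight
    matrices over all 6 relabelings (permutations) of the three neurons."""
    best = np.inf
    for p in itertools.permutations(range(3)):
        P = np.eye(3)[list(p)]
        # Relabel the neurons of W2: permute rows and columns together.
        W2p = P @ W2 @ P.T
        best = min(best, np.abs(W1 - W2p).sum())
    return best

# Two example motifs with connection strengths in {-1, 0, +1}.
W1 = np.array([[0, 1, -1], [0, 0, 1], [1, 0, 0]])
W2 = np.array([[0, 1, 0], [1, 0, 0], [-1, 1, 0]])
print(perm_invariant_distance(W1, W1))  # → 0.0 (a motif is at distance 0 from itself)
```

Because the minimum runs over all permutations, the distance is invariant to how the three neurons are numbered, which is the property the record's structural comparison requires.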
1605.01387
Alex McAvoy
Yu-Ting Chen, Alex McAvoy, Martin A. Nowak
Fixation probabilities for any configuration of two strategies on regular graphs
28 pages; final version
Scientific Reports 6, 39181 (2016)
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Population structure and spatial heterogeneity are integral components of evolutionary dynamics, in general, and of evolution of cooperation, in particular. Structure can promote the emergence of cooperation in some populations and suppress it in others. Here, we provide results for weak selection to favor cooperation on regular graphs for any configuration, meaning any arrangement of cooperators and defectors. Our results extend previous work on fixation probabilities of single, randomly placed mutants. We find that for any configuration cooperation is never favored for birth-death (BD) updating. In contrast, for death-birth (DB) updating, we derive a simple, computationally tractable formula for weak selection to favor cooperation when starting from any configuration containing any number of cooperators and defectors. This formula elucidates two important features: (i) the takeover of cooperation can be enhanced by the strategic placement of cooperators and (ii) adding more cooperators to a configuration can sometimes suppress the evolution of cooperation. These findings give a formal account for how selection acts on all transient states that appear in evolutionary trajectories. They also inform the strategic design of initial states in social networks to maximally promote cooperation. We also derive general results that characterize the interaction of any two strategies, not only cooperation and defection.
[ { "created": "Wed, 4 May 2016 19:16:16 GMT", "version": "v1" }, { "created": "Mon, 5 Dec 2016 18:12:55 GMT", "version": "v2" } ]
2017-10-10
[ [ "Chen", "Yu-Ting", "" ], [ "McAvoy", "Alex", "" ], [ "Nowak", "Martin A.", "" ] ]
Population structure and spatial heterogeneity are integral components of evolutionary dynamics, in general, and of evolution of cooperation, in particular. Structure can promote the emergence of cooperation in some populations and suppress it in others. Here, we provide results for weak selection to favor cooperation on regular graphs for any configuration, meaning any arrangement of cooperators and defectors. Our results extend previous work on fixation probabilities of single, randomly placed mutants. We find that for any configuration cooperation is never favored for birth-death (BD) updating. In contrast, for death-birth (DB) updating, we derive a simple, computationally tractable formula for weak selection to favor cooperation when starting from any configuration containing any number of cooperators and defectors. This formula elucidates two important features: (i) the takeover of cooperation can be enhanced by the strategic placement of cooperators and (ii) adding more cooperators to a configuration can sometimes suppress the evolution of cooperation. These findings give a formal account for how selection acts on all transient states that appear in evolutionary trajectories. They also inform the strategic design of initial states in social networks to maximally promote cooperation. We also derive general results that characterize the interaction of any two strategies, not only cooperation and defection.
1707.01631
Mohammed Alser
Mohammed Alser, Onur Mutlu, Can Alkan
MAGNET: Understanding and Improving the Accuracy of Genome Pre-Alignment Filtering
10 Pages, 13 Figures
IPSI Transactions on Internet Research, 13(2), 2017
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the era of high throughput DNA sequencing (HTS) technologies, calculating the edit distance (i.e., the minimum number of substitutions, insertions, and deletions between a pair of sequences) for billions of genomic sequences is the computational bottleneck in today's read mappers. The shifted Hamming distance (SHD) algorithm proposes a fast filtering strategy that can rapidly filter out invalid mappings that have more edits than allowed. However, SHD's filtering is highly inaccurate, as it allows invalid mappings to be marked as correct ones. This wastes execution time and imposes a large computational burden. In this work, we comprehensively investigate four sources of this filtering inaccuracy. We propose MAGNET, a new filtering strategy that maintains high accuracy across different edit distance thresholds and data sets. It significantly improves the accuracy of pre-alignment filtering by one to two orders of magnitude. The MATLAB implementations of MAGNET and SHD are open source and available at: https://github.com/BilkentCompGen/MAGNET.
[ { "created": "Thu, 6 Jul 2017 04:23:15 GMT", "version": "v1" }, { "created": "Sun, 9 Jul 2017 02:23:19 GMT", "version": "v2" } ]
2017-08-17
[ [ "Alser", "Mohammed", "" ], [ "Mutlu", "Onur", "" ], [ "Alkan", "Can", "" ] ]
In the era of high throughput DNA sequencing (HTS) technologies, calculating the edit distance (i.e., the minimum number of substitutions, insertions, and deletions between a pair of sequences) for billions of genomic sequences is the computational bottleneck in today's read mappers. The shifted Hamming distance (SHD) algorithm proposes a fast filtering strategy that can rapidly filter out invalid mappings that have more edits than allowed. However, SHD's filtering is highly inaccurate, as it allows invalid mappings to be marked as correct ones. This wastes execution time and imposes a large computational burden. In this work, we comprehensively investigate four sources of this filtering inaccuracy. We propose MAGNET, a new filtering strategy that maintains high accuracy across different edit distance thresholds and data sets. It significantly improves the accuracy of pre-alignment filtering by one to two orders of magnitude. The MATLAB implementations of MAGNET and SHD are open source and available at: https://github.com/BilkentCompGen/MAGNET.
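The exact edit-distance check that pre-alignment filters such as SHD and MAGNET try to approximate cheaply is the classic Levenshtein dynamic program. The sketch below is illustrative only (the function names are hypothetical, and the MAGNET implementation itself is in MATLAB, not Python):

```python
def edit_distance(a, b):
    """Levenshtein dynamic program: minimum number of substitutions,
    insertions, and deletions turning string a into string b.
    Uses a single rolling row, so memory is O(len(b))."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))  # distances from a[:0] to every prefix of b
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                       # deletion
                        dp[j - 1] + 1,                   # insertion
                        prev + (a[i - 1] != b[j - 1]))   # substitution/match
            prev = cur
    return dp[n]

def passes_filter(read, ref, threshold):
    """Accept a candidate mapping only if its edit distance is within
    the allowed threshold -- the ground-truth decision a pre-alignment
    filter tries to approximate without running the full alignment."""
    return edit_distance(read, ref) <= threshold

print(edit_distance("GATTACA", "GACTATA"))  # → 2 (two substitutions)
```

A filter's "inaccuracy" in the record's sense is the fraction of candidate pairs it passes for which `passes_filter` would return `False`.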
2301.10662
Adam Mielke
Adam Mielke and Lasse Engbo Christiansen
Fundamental Lack of Information in Observed Disease and Hospitalization Data
6 pages, 1 figure
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a proof based on SEIR models showing that it is impossible to identify the level of under-reporting from traditional observables of the disease dynamics alone. This means that the true attack rate must be determined through other means.
[ { "created": "Wed, 25 Jan 2023 16:04:52 GMT", "version": "v1" } ]
2023-01-26
[ [ "Mielke", "Adam", "" ], [ "Christiansen", "Lasse Engbo", "" ] ]
We present a proof based on SEIR models showing that it is impossible to identify the level of under-reporting from traditional observables of the disease dynamics alone. This means that the true attack rate must be determined through other means.
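The kind of non-identifiability this record proves can be illustrated numerically: with frequency-dependent transmission, jointly scaling the epidemic size and the reporting fraction leaves the observed case counts unchanged. This is a minimal sketch under assumed, hypothetical parameter values (Euler-discretized SEIR), not the authors' proof:

```python
import numpy as np

def seir_incidence(beta, sigma, gamma, N, I0, days):
    """Euler-discretized SEIR model with frequency-dependent
    transmission beta*S*I/N; returns daily new-infection counts."""
    S, E, I, R = N - I0, 0.0, float(I0), 0.0
    dt, steps_per_day = 0.1, 10
    inc = []
    for _ in range(days):
        new_inf_day = 0.0
        for _ in range(steps_per_day):
            new_exp = beta * S * I / N * dt   # S -> E
            new_inf = sigma * E * dt          # E -> I
            new_rec = gamma * I * dt          # I -> R
            S -= new_exp
            E += new_exp - new_inf
            I += new_inf - new_rec
            R += new_rec
            new_inf_day += new_inf
        inc.append(new_inf_day)
    return np.array(inc)

# Scenario 1: epidemic of size N reported with probability p.
# Scenario 2: epidemic scaled by k, reported with probability p/k.
p, k = 0.4, 2.0
obs1 = p * seir_incidence(0.5, 0.2, 0.1, N=1e6, I0=100, days=60)
obs2 = (p / k) * seir_incidence(0.5, 0.2, 0.1, N=k * 1e6, I0=k * 100, days=60)
print(np.allclose(obs1, obs2))  # → True: the observables cannot pin down p
```

Both scenarios produce identical reported time series, so no amount of case or hospitalization data distinguishes the reporting fraction from the epidemic scale, matching the record's conclusion that the true attack rate must come from other sources.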
1411.2343
Chuan Fang
C.Fang and T.Jiang
Human Hunting Evolved as an Adapted Result of Arboreal Locomotion Model of Two-arm Brachiation
7 pages, 5 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Various fossil evidence shows that hunting was one of the major means by which ancient humans obtained food. But our ancestors ran much more slowly than quadrupedal animals, and they lacked sharp claws and canines. They therefore had to rely heavily on stone and wooden tools when hunting or fighting against other predators, which is very different from the hunting behavior of other carnivores. There were mainly two types of attack and defense actions in human hunting: frontal or side blows with a hand-held wooden stick, and the throwing of stones or wooden spears; throwing played an important role in the process of human evolution. Yet there is almost no work studying why only humans chose to hunt in this way. Here we propose that ancient humans adopted two-arm brachiation as their main arboreal locomotion mode because of their suitable body weight. Human body traits, including a slim body, parallel-arranged scapulas, a long thumb, and a powerful grip, all evolved as results of two-arm brachiation. The adaptive evolution of the shoulder bone structure gives human arms a large range of movement, and the long thumb makes human actions more accurate and controllable. These two important body-structure advantages allowed ancient humans to move from arboreal life into a whole new stage of hunting and fighting.
[ { "created": "Mon, 10 Nov 2014 07:55:33 GMT", "version": "v1" } ]
2014-11-11
[ [ "Fang", "C.", "" ], [ "Jiang", "T.", "" ] ]
Various fossil evidence shows that hunting was one of the major means by which ancient humans obtained food. But our ancestors ran much more slowly than quadrupedal animals, and they lacked sharp claws and canines. They therefore had to rely heavily on stone and wooden tools when hunting or fighting against other predators, which is very different from the hunting behavior of other carnivores. There were mainly two types of attack and defense actions in human hunting: frontal or side blows with a hand-held wooden stick, and the throwing of stones or wooden spears; throwing played an important role in the process of human evolution. Yet there is almost no work studying why only humans chose to hunt in this way. Here we propose that ancient humans adopted two-arm brachiation as their main arboreal locomotion mode because of their suitable body weight. Human body traits, including a slim body, parallel-arranged scapulas, a long thumb, and a powerful grip, all evolved as results of two-arm brachiation. The adaptive evolution of the shoulder bone structure gives human arms a large range of movement, and the long thumb makes human actions more accurate and controllable. These two important body-structure advantages allowed ancient humans to move from arboreal life into a whole new stage of hunting and fighting.
2004.04299
Alexey Markin
Alexey Markin and Oliver Eulenstein
Quartet-Based Inference Methods are Statistically Consistent Under the Unified Duplication-Loss-Coalescence Model
null
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The classic multispecies coalescent (MSC) model provides the means for theoretical justification of incomplete lineage sorting-aware species tree inference methods. A large body of work in phylogenetics is dedicated to the design of inference methods that are statistically consistent under MSC. One such particularly popular method is ASTRAL, a quartet-based species tree inference method. A few recent studies suggested that ASTRAL also performs well when given multi-locus gene trees in simulation studies. Further, Legried et al. recently demonstrated that ASTRAL is statistically consistent under the gene duplication and loss model (GDL). Note that GDL is prevalent in evolutionary histories and is a part of the powerful duplication-loss-coalescence evolutionary model (DLCoal) by Rasmussen and Kellis. In this work we prove that ASTRAL is statistically consistent under the general DLCoal model. Therefore, our result supports the empirical evidence from the simulation-based studies. More broadly, we prove that a randomly chosen quartet from a gene tree (with unique taxa) is more likely to agree with the respective species tree quartet than either of the two other quartets.
[ { "created": "Wed, 8 Apr 2020 23:38:42 GMT", "version": "v1" } ]
2020-04-10
[ [ "Markin", "Alexey", "" ], [ "Eulenstein", "Oliver", "" ] ]
The classic multispecies coalescent (MSC) model provides the means for theoretical justification of incomplete lineage sorting-aware species tree inference methods. A large body of work in phylogenetics is dedicated to the design of inference methods that are statistically consistent under MSC. One such particularly popular method is ASTRAL, a quartet-based species tree inference method. A few recent studies suggested that ASTRAL also performs well when given multi-locus gene trees in simulation studies. Further, Legried et al. recently demonstrated that ASTRAL is statistically consistent under the gene duplication and loss model (GDL). Note that GDL is prevalent in evolutionary histories and is a part of the powerful duplication-loss-coalescence evolutionary model (DLCoal) by Rasmussen and Kellis. In this work we prove that ASTRAL is statistically consistent under the general DLCoal model. Therefore, our result supports the empirical evidence from the simulation-based studies. More broadly, we prove that a randomly chosen quartet from a gene tree (with unique taxa) is more likely to agree with the respective species tree quartet than either of the two other quartets.
1803.10789
George F. R. Ellis
George F R Ellis
Controversy in Evolutionary Theory: A multilevel view of the issues
32 pages, 7 figures. Minor formatting improvements
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A conflict between field biologists and physiologists ("functional biologists" or "evolutionary ecologists") on the one hand and those working in molecular evolution ("evolutionary biologists" or "population geneticists") on the other concerns the relative importance of natural selection and genetic drift. This paper is concerned with this issue in the case of vertebrates such as birds, fishes, mammals, and specifically humans, and views the issue in that context from a multilevel perspective. It proposes that the resolution is that adaptive selection outcomes occurring at the organism level chain down to determine outcomes at the genome level. The multiple realizability of higher level processes at lower levels then causes the adaptive nature of such processes at the organism level to be largely hidden at the genomic level. The discussion is further related to the "negative view" of selection, the Evo-Devo and Extended Evolutionary Synthesis views, and the levels of selection debate, where processes at the population level can also chain down in a similar way.
[ { "created": "Wed, 28 Mar 2018 18:07:05 GMT", "version": "v1" }, { "created": "Fri, 30 Mar 2018 14:52:18 GMT", "version": "v2" } ]
2018-04-02
[ [ "Ellis", "George F R", "" ] ]
A conflict between field biologists and physiologists ("functional biologists" or "evolutionary ecologists") on the one hand and those working in molecular evolution ("evolutionary biologists" or "population geneticists") on the other concerns the relative importance of natural selection and genetic drift. This paper is concerned with this issue in the case of vertebrates such as birds, fishes, mammals, and specifically humans, and views the issue in that context from a multilevel perspective. It proposes that the resolution is that adaptive selection outcomes occurring at the organism level chain down to determine outcomes at the genome level. The multiple realizability of higher level processes at lower levels then causes the adaptive nature of such processes at the organism level to be largely hidden at the genomic level. The discussion is further related to the "negative view" of selection, the Evo-Devo and Extended Evolutionary Synthesis views, and the levels of selection debate, where processes at the population level can also chain down in a similar way.
0808.3323
Liaofu Luo
Liaofu Luo
Law of Genome Evolution Direction: Coding Information Quantity Grows
16 pages
null
10.1007/s11467-009-0014-x
null
q-bio.GN q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of the directionality of genome evolution is studied. Based on an analysis of the C-value paradox and the evolution of genome size, we propose that the function-coding information quantity of a genome always grows in the course of evolution, through sequence duplication, expansion of the code, and gene transfer from outside. The function-coding information quantity of a genome consists of two parts: the p-coding information quantity, which encodes functional protein, and the n-coding information quantity, which encodes functional elements other than amino acid sequences. The evidence for this evolutionary law of function-coding information quantity is presented. Functional need is the motive force behind the expansion of coding information quantity, and that expansion is the way a species achieves functional innovation and extension. The increase of the coding information quantity of a genome is therefore a measure of acquired new function, and it determines the directionality of genome evolution.
[ { "created": "Mon, 25 Aug 2008 09:59:12 GMT", "version": "v1" } ]
2015-05-13
[ [ "Luo", "Liaofu", "" ] ]
The problem of the directionality of genome evolution is studied. Based on an analysis of the C-value paradox and the evolution of genome size, we propose that the function-coding information quantity of a genome always grows in the course of evolution, through sequence duplication, expansion of the code, and gene transfer from outside. The function-coding information quantity of a genome consists of two parts: the p-coding information quantity, which encodes functional protein, and the n-coding information quantity, which encodes functional elements other than amino acid sequences. The evidence for this evolutionary law of function-coding information quantity is presented. Functional need is the motive force behind the expansion of coding information quantity, and that expansion is the way a species achieves functional innovation and extension. The increase of the coding information quantity of a genome is therefore a measure of acquired new function, and it determines the directionality of genome evolution.
1403.7151
Luiz Pessoa
Luiz Pessoa
Understanding brain networks and brain organization
61 pages, 24 figures
null
10.1016/j.plrev.2014.03.005
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
What is the relationship between brain and behavior? The answer to this question necessitates characterizing the mapping between structure and function. The aim of this paper is to discuss broad issues surrounding the link between structure and function in the brain that will motivate a network perspective to understanding this question. As others in the past, I argue that a network perspective should supplant the common strategy of understanding the brain in terms of individual regions. Whereas this perspective is needed for a fuller characterization of the mind-brain, it should not be viewed as a panacea. For one, the challenges posed by the many-to-many mapping between regions and functions are not dissolved by the network perspective. Although the problem is ameliorated, one should not anticipate a one-to-one mapping when the network approach is adopted. Furthermore, decomposition of the brain network in terms of meaningful clusters of regions, such as the ones generated by community-finding algorithms, does not by itself reveal 'true' subnetworks. Given the hierarchical and multi-relational relationship between regions, multiple decompositions will offer different 'slices' of a broader landscape of networks within the brain. Finally, I describe how the function of brain regions can be characterized in a multidimensional manner via the idea of diversity profiles. The concept can also be used to describe the way different brain regions participate in networks.
[ { "created": "Thu, 27 Mar 2014 17:59:09 GMT", "version": "v1" } ]
2015-06-19
[ [ "Pessoa", "Luiz", "" ] ]
What is the relationship between brain and behavior? The answer to this question necessitates characterizing the mapping between structure and function. The aim of this paper is to discuss broad issues surrounding the link between structure and function in the brain that will motivate a network perspective to understanding this question. As others in the past, I argue that a network perspective should supplant the common strategy of understanding the brain in terms of individual regions. Whereas this perspective is needed for a fuller characterization of the mind-brain, it should not be viewed as a panacea. For one, the challenges posed by the many-to-many mapping between regions and functions are not dissolved by the network perspective. Although the problem is ameliorated, one should not anticipate a one-to-one mapping when the network approach is adopted. Furthermore, decomposition of the brain network in terms of meaningful clusters of regions, such as the ones generated by community-finding algorithms, does not by itself reveal 'true' subnetworks. Given the hierarchical and multi-relational relationship between regions, multiple decompositions will offer different 'slices' of a broader landscape of networks within the brain. Finally, I describe how the function of brain regions can be characterized in a multidimensional manner via the idea of diversity profiles. The concept can also be used to describe the way different brain regions participate in networks.
2003.08775
Claus Metzner
Claus Metzner, Franziska H\"orsch, Christoph Mark, Tina Czerwinski, Alexander Winterl, Caroline Voskens, Ben Fabry
Detecting long-range interactions between migrating cells
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chemotaxis enables cells to systematically approach distant targets that emit a diffusible guiding substance. However, the visual observation of an encounter between a cell and a target does not necessarily indicate the presence of a chemotactic approach mechanism, as even a blindly migrating cell can come across a target by chance. To distinguish between the chemotactic approach and blind migration, we present an objective method that is based on the analysis of time-lapse recorded cell migration trajectories. First, we validate our method with simulated data, demonstrating that it reliably detects the presence or absence of remote cell-cell interactions. In a second step, we apply the method to data from three-dimensional collagen gels, interspersed with highly migratory natural killer (NK) cells that were derived from two different human donors. We find for one of the donors an attractive interaction between the NK cells, pointing to a cooperative behavior of these immune cells. When adding nearly stationary K562 tumor cells to the system, we find a repulsive interaction between K562 and NK cells for one of the donors. By contrast, we find attractive interactions between NK cells and an IL-15-secreting variant of K562 tumor cells. We therefore speculate that NK cells find wild-type tumor cells only by chance, but are programmed to leave a target quickly after a close encounter. We provide a freely available Python implementation of our p-value method that can serve as a general tool for detecting long-range interactions in collective systems of self-driven agents.
[ { "created": "Thu, 19 Mar 2020 13:26:09 GMT", "version": "v1" }, { "created": "Sun, 20 Jun 2021 15:34:52 GMT", "version": "v2" } ]
2021-06-22
[ [ "Metzner", "Claus", "" ], [ "Hörsch", "Franziska", "" ], [ "Mark", "Christoph", "" ], [ "Czerwinski", "Tina", "" ], [ "Winterl", "Alexander", "" ], [ "Voskens", "Caroline", "" ], [ "Fabry", "Ben", "" ] ]
Chemotaxis enables cells to systematically approach distant targets that emit a diffusible guiding substance. However, the visual observation of an encounter between a cell and a target does not necessarily indicate the presence of a chemotactic approach mechanism, as even a blindly migrating cell can come across a target by chance. To distinguish between the chemotactic approach and blind migration, we present an objective method that is based on the analysis of time-lapse recorded cell migration trajectories. First, we validate our method with simulated data, demonstrating that it reliably detects the presence or absence of remote cell-cell interactions. In a second step, we apply the method to data from three-dimensional collagen gels, interspersed with highly migratory natural killer (NK) cells that were derived from two different human donors. We find for one of the donors an attractive interaction between the NK cells, pointing to a cooperative behavior of these immune cells. When adding nearly stationary K562 tumor cells to the system, we find a repulsive interaction between K562 and NK cells for one of the donors. By contrast, we find attractive interactions between NK cells and an IL-15-secreting variant of K562 tumor cells. We therefore speculate that NK cells find wild-type tumor cells only by chance, but are programmed to leave a target quickly after a close encounter. We provide a freely available Python implementation of our p-value method that can serve as a general tool for detecting long-range interactions in collective systems of self-driven agents.
1606.02627
Umut G\"u\c{c}l\"u
Umut G\"u\c{c}l\"u, Jordy Thielen, Michael Hanke, Marcel A. J. van Gerven
Brains on Beats
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We developed task-optimized deep neural networks (DNNs) that achieved state-of-the-art performance in different evaluation scenarios for automatic music tagging. These DNNs were subsequently used to probe the neural representations of music. Representational similarity analysis revealed the existence of a representational gradient across the superior temporal gyrus (STG). Anterior STG was shown to be more sensitive to low-level stimulus features encoded in shallow DNN layers whereas posterior STG was shown to be more sensitive to high-level stimulus features encoded in deep DNN layers.
[ { "created": "Wed, 8 Jun 2016 16:33:41 GMT", "version": "v1" } ]
2016-06-09
[ [ "Güçlü", "Umut", "" ], [ "Thielen", "Jordy", "" ], [ "Hanke", "Michael", "" ], [ "van Gerven", "Marcel A. J.", "" ] ]
We developed task-optimized deep neural networks (DNNs) that achieved state-of-the-art performance in different evaluation scenarios for automatic music tagging. These DNNs were subsequently used to probe the neural representations of music. Representational similarity analysis revealed the existence of a representational gradient across the superior temporal gyrus (STG). Anterior STG was shown to be more sensitive to low-level stimulus features encoded in shallow DNN layers whereas posterior STG was shown to be more sensitive to high-level stimulus features encoded in deep DNN layers.
2002.01727
Lorenzo Fassina PhD
Lorenzo Fassina, Giacomo Rozzi, Stefano Rossi, Simone Scacchi, Maricla Galetti, Francesco Paolo Lo Muzio, Fabrizio Del Bianco, Piero Colli Franzone, Giuseppe Petrilli, Giuseppe Faggian, Michele Miragoli
Cardiac kinematic parameters computed from video of $\textit{in situ}$ beating heart
null
Scientific Reports 2017;7:46143
10.1038/srep46143
null
q-bio.TO physics.med-ph q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Mechanical function of the heart during open-chest cardiac surgery is exclusively monitored by echocardiographic techniques. However, little is known about local kinematics, particularly for the reperfused regions after ischemic events. We report a novel imaging modality, which extracts local and global kinematic parameters from videos of $\textit{in situ}$ beating hearts, displaying live video cardiograms of the contraction events. A custom algorithm tracked the movement of a video marker positioned $\textit{ad hoc}$ onto a selected area and analyzed, during the entire recording, the contraction trajectory, displacement, velocity, acceleration, kinetic energy and force. Moreover, global epicardial velocity and vorticity were analyzed by means of a Particle Image Velocimetry tool. We validated our new technique by i) computational modeling of cardiac ischemia, ii) video recordings of ischemic/reperfused rat hearts, iii) videos of beating human hearts before and after coronary artery bypass graft, and iv) the local Frank-Starling effect. In rats, we observed a decrement of kinematic parameters during acute ischemia and a significant increment in the same region after reperfusion. We detected similar behavior in operated patients. This modality adds important functional value to cardiac outcome assessment and supports the intervention in a contact-free and non-invasive mode. Moreover, it does not require particular operator-dependent skills.
[ { "created": "Wed, 5 Feb 2020 11:21:05 GMT", "version": "v1" } ]
2020-02-06
[ [ "Fassina", "Lorenzo", "" ], [ "Rozzi", "Giacomo", "" ], [ "Rossi", "Stefano", "" ], [ "Scacchi", "Simone", "" ], [ "Galetti", "Maricla", "" ], [ "Muzio", "Francesco Paolo Lo", "" ], [ "Del Bianco", "Fabrizio", ...
Mechanical function of the heart during open-chest cardiac surgery is exclusively monitored by echocardiographic techniques. However, little is known about local kinematics, particularly for the reperfused regions after ischemic events. We report a novel imaging modality, which extracts local and global kinematic parameters from videos of $\textit{in situ}$ beating hearts, displaying live video cardiograms of the contraction events. A custom algorithm tracked the movement of a video marker positioned $\textit{ad hoc}$ onto a selected area and analyzed, during the entire recording, the contraction trajectory, displacement, velocity, acceleration, kinetic energy and force. Moreover, global epicardial velocity and vorticity were analyzed by means of a Particle Image Velocimetry tool. We validated our new technique by i) computational modeling of cardiac ischemia, ii) video recordings of ischemic/reperfused rat hearts, iii) videos of beating human hearts before and after coronary artery bypass graft, and iv) the local Frank-Starling effect. In rats, we observed a decrement of kinematic parameters during acute ischemia and a significant increment in the same region after reperfusion. We detected similar behavior in operated patients. This modality adds important functional value to cardiac outcome assessment and supports the intervention in a contact-free and non-invasive mode. Moreover, it does not require particular operator-dependent skills.
1610.05817
Julia Palacios
Michael D. Karcher and Julia A. Palacios and Shiwei Lan and Vladimir N. Minin
phylodyn: an R package for phylodynamic simulation and inference
9 pages, 3 figures
null
null
null
q-bio.PE stat.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce phylodyn, an R package for phylodynamic analysis based on gene genealogies. The package's main functionality is Bayesian nonparametric estimation of effective population size fluctuations over time. Our implementation includes several Markov chain Monte Carlo-based methods and an integrated nested Laplace approximation-based approach for phylodynamic inference that have been developed in recent years. Genealogical data describe the timed ancestral relationships of individuals sampled from a population of interest. Here, individuals are assumed to be sampled at the same point in time (isochronous sampling) or at different points in time (heterochronous sampling); in addition, sampling events can be modeled with preferential sampling, which means that the intensity of sampling events is allowed to depend on the effective population size trajectory. We assume the coalescent and the sequentially Markov coalescent processes as generative models of genealogies. We include several coalescent simulation functions that are useful for testing our phylodynamic methods via simulation studies. We compare the performance and outputs of various methods implemented in phylodyn and outline their strengths and weaknesses. The R package phylodyn is available at https://github.com/mdkarcher/phylodyn.
[ { "created": "Tue, 18 Oct 2016 22:25:58 GMT", "version": "v1" } ]
2016-10-20
[ [ "Karcher", "Michael D.", "" ], [ "Palacios", "Julia A.", "" ], [ "Lan", "Shiwei", "" ], [ "Minin", "Vladimir N.", "" ] ]
We introduce phylodyn, an R package for phylodynamic analysis based on gene genealogies. The package's main functionality is Bayesian nonparametric estimation of effective population size fluctuations over time. Our implementation includes several Markov chain Monte Carlo-based methods and an integrated nested Laplace approximation-based approach for phylodynamic inference that have been developed in recent years. Genealogical data describe the timed ancestral relationships of individuals sampled from a population of interest. Here, individuals are assumed to be sampled at the same point in time (isochronous sampling) or at different points in time (heterochronous sampling); in addition, sampling events can be modeled with preferential sampling, which means that the intensity of sampling events is allowed to depend on the effective population size trajectory. We assume the coalescent and the sequentially Markov coalescent processes as generative models of genealogies. We include several coalescent simulation functions that are useful for testing our phylodynamic methods via simulation studies. We compare the performance and outputs of various methods implemented in phylodyn and outline their strengths and weaknesses. The R package phylodyn is available at https://github.com/mdkarcher/phylodyn.
0809.4310
Namiko Mitarai
Namiko Mitarai, Kim Sneppen, and Steen Pedersen
Ribosome collisions and Translation efficiency: Optimization by codon usage and mRNA destabilization
5 figures, 3 tables
J. Mol. Biol. 382, 236-245 (2008)
null
null
q-bio.SC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Individual mRNAs are translated by multiple ribosomes that initiate translation at intervals of a few seconds. The ribosome speed is codon dependent, and ribosome queuing has been suggested to explain specific data for translation of some mRNAs in vivo. By modelling the stochastic translation process as a traffic problem, we here analyze conditions and consequences of collisions and queuing. The model allowed us to determine the on-rate (0.8 to 1.1 initiations per sec) and the time (1 sec) the preceding ribosome occludes initiation for Escherichia coli lacZ mRNA in vivo. We find that ribosome collisions and queues are inevitable consequences of a stochastic translation mechanism and that they reduce the translation efficiency substantially on natural mRNAs. Cells minimize collisions by keeping their mRNAs unstable and by a highly selected codon usage at the start of the mRNA. The cost of mRNA breakdown is offset by the concomitant increase in translational efficiency.
[ { "created": "Thu, 25 Sep 2008 05:15:39 GMT", "version": "v1" } ]
2008-09-26
[ [ "Mitarai", "Namiko", "" ], [ "Sneppen", "Kim", "" ], [ "Pedersen", "Steen", "" ] ]
Individual mRNAs are translated by multiple ribosomes that initiate translation at intervals of a few seconds. The ribosome speed is codon dependent, and ribosome queuing has been suggested to explain specific data for translation of some mRNAs in vivo. By modelling the stochastic translation process as a traffic problem, we here analyze conditions and consequences of collisions and queuing. The model allowed us to determine the on-rate (0.8 to 1.1 initiations per sec) and the time (1 sec) the preceding ribosome occludes initiation for Escherichia coli lacZ mRNA in vivo. We find that ribosome collisions and queues are inevitable consequences of a stochastic translation mechanism and that they reduce the translation efficiency substantially on natural mRNAs. Cells minimize collisions by keeping their mRNAs unstable and by a highly selected codon usage at the start of the mRNA. The cost of mRNA breakdown is offset by the concomitant increase in translational efficiency.
2403.07612
Steffen Lange
Steffen Lange, Jannik Schmied, Paul Willam, Anja Voss-B\"ohme
Minimal cellular automaton model with heterogeneous cell sizes predicts epithelial colony growth
15 pages, 17 figures
Journal of Theoretical Biology: Biomechanical regulation of cell shape, cell migration, and cell-cell interactions (2024)
10.1016/j.jtbi.2024.111882
null
q-bio.CB nlin.CG physics.bio-ph physics.data-an q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Regulation of cell proliferation is a crucial aspect of tissue development and homeostasis and plays a major role in morphogenesis, wound healing, and tumor invasion. A phenomenon of such regulation is contact inhibition, which describes the dramatic slowing of proliferation, cell migration and individual cell growth when multiple cells are in contact with each other. While many physiological, molecular and genetic factors are known, the mechanism of contact inhibition is still not fully understood. In particular, the relevance of cellular signaling due to interfacial contact for contact inhibition is still debated. Cellular automata (CA) have been employed in the past as numerically efficient mathematical models to study the dynamics of cell ensembles, but they are not suitable to explore the origins of contact inhibition as such agent-based models assume fixed cell sizes. We develop a minimal, data-driven model to simulate the dynamics of planar cell cultures by extending a probabilistic CA to incorporate size changes of individual cells during growth and cell division. We successfully apply this model to previous in-vitro experiments on contact inhibition in epithelial tissue: After a systematic calibration of the model parameters to measurements of single-cell dynamics, our CA model quantitatively reproduces independent measurements of emergent, culture-wide features, like colony size, cell density and collective cell migration. In particular, the dynamics of the CA model also exhibit the transition from a low-density confluent regime to a stationary postconfluent regime with a rapid decrease in cell size and motion. This implies that the volume exclusion principle, a mechanical constraint which is the only inter-cellular interaction incorporated in the model, paired with a size-dependent proliferation rate is sufficient to generate the observed contact inhibition.
[ { "created": "Tue, 12 Mar 2024 12:49:47 GMT", "version": "v1" }, { "created": "Fri, 28 Jun 2024 07:30:49 GMT", "version": "v2" } ]
2024-07-01
[ [ "Lange", "Steffen", "" ], [ "Schmied", "Jannik", "" ], [ "Willam", "Paul", "" ], [ "Voss-Böhme", "Anja", "" ] ]
Regulation of cell proliferation is a crucial aspect of tissue development and homeostasis and plays a major role in morphogenesis, wound healing, and tumor invasion. A phenomenon of such regulation is contact inhibition, which describes the dramatic slowing of proliferation, cell migration and individual cell growth when multiple cells are in contact with each other. While many physiological, molecular and genetic factors are known, the mechanism of contact inhibition is still not fully understood. In particular, the relevance of cellular signaling due to interfacial contact for contact inhibition is still debated. Cellular automata (CA) have been employed in the past as numerically efficient mathematical models to study the dynamics of cell ensembles, but they are not suitable to explore the origins of contact inhibition as such agent-based models assume fixed cell sizes. We develop a minimal, data-driven model to simulate the dynamics of planar cell cultures by extending a probabilistic CA to incorporate size changes of individual cells during growth and cell division. We successfully apply this model to previous in-vitro experiments on contact inhibition in epithelial tissue: After a systematic calibration of the model parameters to measurements of single-cell dynamics, our CA model quantitatively reproduces independent measurements of emergent, culture-wide features, like colony size, cell density and collective cell migration. In particular, the dynamics of the CA model also exhibit the transition from a low-density confluent regime to a stationary postconfluent regime with a rapid decrease in cell size and motion. This implies that the volume exclusion principle, a mechanical constraint which is the only inter-cellular interaction incorporated in the model, paired with a size-dependent proliferation rate is sufficient to generate the observed contact inhibition.
1610.06128
Steven Frank
Steven A. Frank
Puzzles in modern biology. IV. Neurodegeneration, localized origin and widespread decay
null
F1000Research 5:2537 (2016)
10.12688/f1000research.9790.1
null
q-bio.PE q-bio.MN q-bio.TO
http://creativecommons.org/licenses/by/4.0/
The motor neuron disease amyotrophic lateral sclerosis (ALS) typically begins with localized muscle weakness. Progressive, widespread paralysis often follows over a few years. Does the disease begin with local changes in a small piece of neural tissue and then spread? Or does neural decay happen independently across diverse spatial locations? The distinction matters, because local initiation may arise by local changes in a tissue microenvironment, by somatic mutation, or by various epigenetic or regulatory fluctuations in a few cells. A local trigger must be coupled with a mechanism for spread. By contrast, independent decay across spatial locations cannot begin by a local change, but must depend on some global predisposition or spatially distributed change that leads to approximately synchronous decay. This article outlines the conceptual frame by which one contrasts local triggers and spread versus parallel spatially distributed decay. Various neurodegenerative diseases differ in their mechanistic details, but all can usefully be understood as falling along a continuum of interacting local and global processes. Cancer provides an example of disease progression by local triggers and spatial spread, setting a conceptual basis for clarifying puzzles in neurodegeneration. Heart disease also has crucial interactions between global processes, such as circulating lipid levels, and local processes in the development of atherosclerotic plaques. The distinction between local and global processes helps to understand these various age-related diseases.
[ { "created": "Wed, 19 Oct 2016 17:54:01 GMT", "version": "v1" } ]
2016-10-20
[ [ "Frank", "Steven A.", "" ] ]
The motor neuron disease amyotrophic lateral sclerosis (ALS) typically begins with localized muscle weakness. Progressive, widespread paralysis often follows over a few years. Does the disease begin with local changes in a small piece of neural tissue and then spread? Or does neural decay happen independently across diverse spatial locations? The distinction matters, because local initiation may arise by local changes in a tissue microenvironment, by somatic mutation, or by various epigenetic or regulatory fluctuations in a few cells. A local trigger must be coupled with a mechanism for spread. By contrast, independent decay across spatial locations cannot begin by a local change, but must depend on some global predisposition or spatially distributed change that leads to approximately synchronous decay. This article outlines the conceptual frame by which one contrasts local triggers and spread versus parallel spatially distributed decay. Various neurodegenerative diseases differ in their mechanistic details, but all can usefully be understood as falling along a continuum of interacting local and global processes. Cancer provides an example of disease progression by local triggers and spatial spread, setting a conceptual basis for clarifying puzzles in neurodegeneration. Heart disease also has crucial interactions between global processes, such as circulating lipid levels, and local processes in the development of atherosclerotic plaques. The distinction between local and global processes helps to understand these various age-related diseases.
2004.09426
Roy de Kleijn
Rutger Goekoop (Parnassia Group, PsyQ, Netherlands), Roy de Kleijn (Leiden University)
How higher goals are constructed and collapse under stress: a hierarchical Bayesian control systems perspective
null
Neuroscience & Biobehavioral Reviews 123 (2021) 257-285
10.1016/j.neubiorev.2020.12.021
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we show that organisms can be modeled as hierarchical Bayesian control systems with a small-world and information-bottleneck (bow-tie) network structure. Such systems combine hierarchical perception with hierarchical goal setting and hierarchical action control. We argue that hierarchical Bayesian control systems produce deep hierarchies of goal states, from which it follows that organisms must have some form of 'highest goals'. For all organisms, these involve internal (self) models, external (social) models and overarching (normative) models. We show that goal hierarchies tend to decompose in a top-down manner under severe and prolonged levels of stress. This produces behavior that favors short-term and self-referential goals over long-term, social and/or normative goals. The collapse of goal hierarchies is universally accompanied by an increase in entropy (disorder) in control systems, which can serve as an early warning sign for tipping points (disease or death of the organism). In humans, learning goal hierarchies corresponds to personality development (maturation). The failure of goal hierarchies to mature properly corresponds to personality deficits. A top-down collapse of such hierarchies under stress is identified as a common factor in all forms of episodic mental disorders (psychopathology). The paper concludes by discussing ways of testing these hypotheses empirically.
[ { "created": "Mon, 20 Apr 2020 16:30:52 GMT", "version": "v1" }, { "created": "Tue, 2 Feb 2021 13:30:27 GMT", "version": "v2" } ]
2021-02-03
[ [ "Goekoop", "Rutger", "", "Parnassia Group, PsyQ, Netherlands" ], [ "de Kleijn", "Roy", "", "Leiden University" ] ]
In this paper, we show that organisms can be modeled as hierarchical Bayesian control systems with a small-world and information-bottleneck (bow-tie) network structure. Such systems combine hierarchical perception with hierarchical goal setting and hierarchical action control. We argue that hierarchical Bayesian control systems produce deep hierarchies of goal states, from which it follows that organisms must have some form of 'highest goals'. For all organisms, these involve internal (self) models, external (social) models and overarching (normative) models. We show that goal hierarchies tend to decompose in a top-down manner under severe and prolonged levels of stress. This produces behavior that favors short-term and self-referential goals over long-term, social and/or normative goals. The collapse of goal hierarchies is universally accompanied by an increase in entropy (disorder) in control systems, which can serve as an early warning sign for tipping points (disease or death of the organism). In humans, learning goal hierarchies corresponds to personality development (maturation). The failure of goal hierarchies to mature properly corresponds to personality deficits. A top-down collapse of such hierarchies under stress is identified as a common factor in all forms of episodic mental disorders (psychopathology). The paper concludes by discussing ways of testing these hypotheses empirically.
q-bio/0701034
Kaushik Majumdar
Kaushik Majumdar
A structural and a functional aspect of stable information processing by the brain
Sixteen pages, two figures. Accepted for publication in Cognitive Neurodynamics (Springer)
null
null
null
q-bio.NC q-bio.QM
null
In this paper a model of a neural circuit in the brain has been proposed which is composed of cyclic sub-circuits. A big loop has been defined as consisting of a feedforward path from the sensory neurons to the highest processing area of the brain and feedback paths from that region back to close to the same sensory neurons. It has been mathematically shown how some smaller cycles can amplify signals. A big loop processes information by a contrast-and-amplify principle. It has been assumed that the spike train coming out of a firing neuron encodes all the information produced by it as output. This information over a period of time can be extracted by a Fourier transform. The Fourier coefficients arranged in vector form will uniquely represent the neural spike train over a period of time. The information emanating from all the neurons in a given neural circuit over a period of time will be represented by a collection of points in a multidimensional vector space. This cluster of points represents the functional or behavioral form of the neural circuit. It has been proposed that a particular cluster of vectors as the representation of a new behavior is chosen by the brain interactively with respect to the memory stored in that circuit and the synaptic plasticity of the circuit. It has been proposed that in this situation a Coulomb-force-like expression governs the dynamics of the functioning of the circuit and stability of the system is reached at the minimum of all the minima of a potential function derived from the force-like expression. The calculations have been done with respect to a pseudometric defined in a multidimensional vector space.
[ { "created": "Sun, 21 Jan 2007 20:52:01 GMT", "version": "v1" }, { "created": "Sat, 16 Jun 2007 17:19:37 GMT", "version": "v2" } ]
2007-06-16
[ [ "Majumdar", "Kaushik", "" ] ]
In this paper a model of a neural circuit in the brain has been proposed which is composed of cyclic sub-circuits. A big loop has been defined as consisting of a feedforward path from the sensory neurons to the highest processing area of the brain and feedback paths from that region back to close to the same sensory neurons. It has been mathematically shown how some smaller cycles can amplify signals. A big loop processes information by a contrast-and-amplify principle. It has been assumed that the spike train coming out of a firing neuron encodes all the information produced by it as output. This information over a period of time can be extracted by a Fourier transform. The Fourier coefficients arranged in vector form will uniquely represent the neural spike train over a period of time. The information emanating from all the neurons in a given neural circuit over a period of time will be represented by a collection of points in a multidimensional vector space. This cluster of points represents the functional or behavioral form of the neural circuit. It has been proposed that a particular cluster of vectors as the representation of a new behavior is chosen by the brain interactively with respect to the memory stored in that circuit and the synaptic plasticity of the circuit. It has been proposed that in this situation a Coulomb-force-like expression governs the dynamics of the functioning of the circuit and stability of the system is reached at the minimum of all the minima of a potential function derived from the force-like expression. The calculations have been done with respect to a pseudometric defined in a multidimensional vector space.
1911.06795
Benoit Limoges
Claire Stines-Chaumeil, Francois Mavr\'e, Brice Kauffmann, Nicolas Mano, and Benoit Limoges
Mechanism of reconstitution/activation of the soluble PQQ-dependent glucose dehydrogenase from Acinetobacter calcoaceticus: a comprehensive study
45 pages, 11 figures, 6 tables
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to switch on the activity of an enzyme through its spontaneous reconstitution has proven to be a valuable tool in fundamental studies of enzyme structure/reactivity relationships or in the design of artificial signal transduction systems in bioelectronics, synthetic biology, or bioanalytical applications. In particular, those based on the spontaneous reconstitution/activation of the apo-PQQ-dependent soluble glucose dehydrogenase (sGDH) from Acinetobacter calcoaceticus have been widely developed. However, the reconstitution mechanism of sGDH with its two cofactors, i.e. the pyrroloquinoline quinone (PQQ) and Ca2+, remains unknown. The objective here is to elucidate this mechanism by stopped-flow kinetics under single-turnover conditions. The reconstitution of sGDH exhibited biphasic kinetics, characteristic of a square reaction scheme associated with two activation pathways. From a complete kinetic analysis, we were able not only to fully predict the reconstitution dynamics, but also to demonstrate that when PQQ first binds to the apo-sGDH, it strongly impedes the access of Ca2+ to its enclosed position at the bottom of the enzyme binding site, thereby greatly slowing down the reconstitution rate of sGDH. This slow calcium insertion may purposely be accelerated by providing more flexibility to the Ca2+ binding loop through the specific mutation of the calcium-coordinating P248 proline residue, thus reducing the kinetic barrier to calcium ion insertion. The dynamic nature of the reconstitution process is also supported by the observation of a clear loop shift and a reorganization of the hydrogen bonding network and van der Waals interactions in both active sites of the apo and holo forms, a structural modulation that was revealed by the refined X-ray structure of apo-sGDH (PDB:5MIN).
[ { "created": "Fri, 15 Nov 2019 18:35:04 GMT", "version": "v1" } ]
2019-11-18
[ [ "Stines-Chaumeil", "Claire", "" ], [ "Mavré", "Francois", "" ], [ "Kauffmann", "Brice", "" ], [ "Mano", "Nicolas", "" ], [ "Limoges", "Benoit", "" ] ]
The ability to switch on the activity of an enzyme through its spontaneous reconstitution has proven to be a valuable tool in fundamental studies of enzyme structure/reactivity relationships or in the design of artificial signal transduction systems in bioelectronics, synthetic biology, or bioanalytical applications. In particular, those based on the spontaneous reconstitution/activation of the apo-PQQ-dependent soluble glucose dehydrogenase (sGDH) from Acinetobacter calcoaceticus have been widely developed. However, the reconstitution mechanism of sGDH with its two cofactors, i.e. the pyrroloquinoline quinone (PQQ) and Ca2+, remains unknown. The objective here is to elucidate this mechanism by stopped-flow kinetics under single-turnover conditions. The reconstitution of sGDH exhibited biphasic kinetics, characteristic of a square reaction scheme associated with two activation pathways. From a complete kinetic analysis, we were able not only to fully predict the reconstitution dynamics, but also to demonstrate that when PQQ first binds to the apo-sGDH, it strongly impedes the access of Ca2+ to its enclosed position at the bottom of the enzyme binding site, thereby greatly slowing down the reconstitution rate of sGDH. This slow calcium insertion may purposely be accelerated by providing more flexibility to the Ca2+ binding loop through the specific mutation of the calcium-coordinating P248 proline residue, thus reducing the kinetic barrier to calcium ion insertion. The dynamic nature of the reconstitution process is also supported by the observation of a clear loop shift and a reorganization of the hydrogen bonding network and van der Waals interactions in both active sites of the apo and holo forms, a structural modulation that was revealed by the refined X-ray structure of apo-sGDH (PDB:5MIN).
1205.6512
T. Goldman
James L. Friar, Terrance Goldman, Juan P\'erez-Mercader
Genome Sizes and the Benford Distribution
23 pages, 1 figure (eps)
PLoS ONE 7(5) (2012) e36624
10.1371/journal.pone.0036624
LA-UR 11-10114
q-bio.GN physics.bio-ph physics.data-an q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data on the number of Open Reading Frames (ORFs) coded by genomes from the 3 domains of Life show some notable general features including essential differences between the Prokaryotes and Eukaryotes, with the number of ORFs growing linearly with total genome size for the former, but only logarithmically for the latter. Assuming that the (protein) coding and non-coding fractions of the genome must have different dynamics and that the non-coding fraction must be controlled by a variety of (unspecified) probability distribution functions, we are able to predict that the number of ORFs for Eukaryotes follows a Benford distribution and has a specific logarithmic form. Using the data for 1000+ genomes available to us in early 2010, we find excellent fits to the data over several orders of magnitude, in the linear regime for the Prokaryote data, and the full non-linear form for the Eukaryote data. In their region of overlap the salient features are statistically congruent, which allows us to: interpret the difference between Prokaryotes and Eukaryotes as the manifestation of the increased demand in the biological functions required for the larger Eukaryotes, estimate some minimal genome sizes, and predict a maximal Prokaryote genome size on the order of 8-12 megabasepairs. These results naturally allow a mathematical interpretation in terms of maximal entropy and, therefore, most efficient information transmission.
[ { "created": "Tue, 29 May 2012 22:39:47 GMT", "version": "v1" } ]
2012-05-31
[ [ "Friar", "James L.", "" ], [ "Goldman", "Terrance", "" ], [ "Pérez-Mercader", "Juan", "" ] ]
Data on the number of Open Reading Frames (ORFs) coded by genomes from the 3 domains of Life show some notable general features including essential differences between the Prokaryotes and Eukaryotes, with the number of ORFs growing linearly with total genome size for the former, but only logarithmically for the latter. Assuming that the (protein) coding and non-coding fractions of the genome must have different dynamics and that the non-coding fraction must be controlled by a variety of (unspecified) probability distribution functions, we are able to predict that the number of ORFs for Eukaryotes follows a Benford distribution and has a specific logarithmic form. Using the data for 1000+ genomes available to us in early 2010, we find excellent fits to the data over several orders of magnitude, in the linear regime for the Prokaryote data, and the full non-linear form for the Eukaryote data. In their region of overlap the salient features are statistically congruent, which allows us to: interpret the difference between Prokaryotes and Eukaryotes as the manifestation of the increased demand in the biological functions required for the larger Eukaryotes, estimate some minimal genome sizes, and predict a maximal Prokaryote genome size on the order of 8-12 megabasepairs. These results naturally allow a mathematical interpretation in terms of maximal entropy and, therefore, most efficient information transmission.
1603.05319
Kun Zhao
Kun Zhao, Raja Jurdak
Understanding the spatiotemporal pattern of grazing cattle movement
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, we analyse a high-frequency movement dataset for a group of grazing cattle and investigate their spatiotemporal patterns using a simple two-state `stop-and-move' mobility model. We find that the dispersal kernel in the moving state is best described by a mixture exponential distribution, indicating the hierarchical nature of the movement. On the other hand, the waiting time appears to be scale-invariant below a certain cut-off and is best described by a truncated power-law distribution, suggesting heterogeneous dynamics in the non-moving state. We explore possible explanations for the observed phenomena, covering factors that can play a role in the generation of mobility patterns, such as the context of the grazing environment, the intrinsic decision-making mechanism or the energy status of different activities. In particular, we propose a new hypothesis that the underlying movement pattern can be attributed to the most probable observable energy status under the maximum entropy configuration. These results are not only valuable for modelling cattle movement but also provide new insights for understanding the underlying biological basis of grazing behaviour.
[ { "created": "Thu, 17 Mar 2016 00:17:01 GMT", "version": "v1" } ]
2016-03-18
[ [ "Zhao", "Kun", "" ], [ "Jurdak", "Raja", "" ] ]
In this study, we analyse a high-frequency movement dataset for a group of grazing cattle and investigate their spatiotemporal patterns using a simple two-state `stop-and-move' mobility model. We find that the dispersal kernel in the moving state is best described by a mixture exponential distribution, indicating the hierarchical nature of the movement. On the other hand, the waiting time appears to be scale-invariant below a certain cut-off and is best described by a truncated power-law distribution, suggesting heterogeneous dynamics in the non-moving state. We explore possible explanations for the observed phenomena, covering factors that can play a role in the generation of mobility patterns, such as the context of the grazing environment, the intrinsic decision-making mechanism or the energy status of different activities. In particular, we propose a new hypothesis that the underlying movement pattern can be attributed to the most probable observable energy status under the maximum entropy configuration. These results are not only valuable for modelling cattle movement but also provide new insights for understanding the underlying biological basis of grazing behaviour.
2009.01969
Herbert Sauro Dr
Herbert M Sauro
SimpleSBML: A Python package for creating, editing, and interrogating SBML models: Version 2.0
Corrections to the original preprint. No change to the software
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this technical report, we describe a new version of SimpleSBML, which provides an easier-to-use interface to python-libSBML, allowing users of Python to more easily construct, edit, and inspect SBML-based models. The most commonly used package for constructing SBML models in Python is python-libSBML, based on the C/C++ library libSBML. python-libSBML is a comprehensive library with a large range of options, but it can be difficult for new users to learn and requires long scripts to create even the simplest models. Inspecting existing SBML models can also be difficult due to the complexity of the underlying object model. Instead, we present SimpleSBML, a package that allows users to add and inspect species, parameters, reactions, events, and rules in a libSBML model with only one command for each. Models can be exported to SBML format, and SBML files can be imported and converted to SimpleSBML commands, making it very easy to edit the original SBML model. In the new version, a range of `get' methods is provided that allows users to inspect existing SBML models without having to understand the underlying object model used by libSBML.
[ { "created": "Fri, 4 Sep 2020 00:38:04 GMT", "version": "v1" }, { "created": "Tue, 4 May 2021 21:26:12 GMT", "version": "v2" }, { "created": "Thu, 19 Aug 2021 00:09:52 GMT", "version": "v3" } ]
2021-08-20
[ [ "Sauro", "Herbert M", "" ] ]
In this technical report, we describe a new version of SimpleSBML, which provides an easier-to-use interface to python-libSBML, allowing Python users to more easily construct, edit, and inspect SBML-based models. The most commonly used package for constructing SBML models in Python is python-libSBML, based on the C/C++ library libSBML. python-libSBML is a comprehensive library with a large range of options, but it can be difficult for new users to learn and requires long scripts to create even the simplest models. Inspecting existing SBML models can also be difficult due to the complexity of the underlying object model. Instead, we present SimpleSBML, a package that allows users to add and inspect species, parameters, reactions, events, and rules in a libSBML model with only one command for each. Models can be exported to SBML format, and SBML files can be imported and converted to SimpleSBML commands, making it very easy to edit the original SBML model. In the new version, a range of `get' methods is provided that allows users to inspect existing SBML models without having to understand the underlying object model used by libSBML.
q-bio/0312004
Zhongnan Li
Huijie Yang, Fangcui Zhao, Zhongnan Li, Wei Zhang
Diffusion Entropy Approach to Dynamical Characteristics of a Hodgkin-Huxley Neuron
8 figures
Physica A 347(2005)704
10.1016/j.physa.2004.08.017
null
q-bio.NC cond-mat.stat-mech
null
By means of the diffusion entropy (DE) concept, we analyze the responses of a Hodgkin-Huxley (HH) neuron to two types of spike-train inputs. Two characteristic quantities can be extracted, which partially reflect the dynamical process of the HH neuron.
[ { "created": "Mon, 1 Dec 2003 21:20:46 GMT", "version": "v1" } ]
2009-11-10
[ [ "Yang", "Huijie", "" ], [ "Zhao", "Fangcui", "" ], [ "Li", "Zhongnan", "" ], [ "Zhang", "Wei", "" ] ]
By means of the diffusion entropy (DE) concept, we analyze the responses of a Hodgkin-Huxley (HH) neuron to two types of spike-train inputs. Two characteristic quantities can be extracted, which partially reflect the dynamical process of the HH neuron.
2112.05315
Olga Krivorotko
O.I. Krivorotko, S.I. Kabanikhin
Mathematical models of COVID-19 spread
This is a Russian-language preprint; the English version will be added later
null
null
null
q-bio.PE math.OC
http://creativecommons.org/licenses/by/4.0/
The paper presents a classification and analysis of mathematical models of COVID-19 spread in different population groups, such as the family, school, office (3-100 people), neighborhood (100-5000 people), city, region (0.5-15 million people), country, continent, and the world. The classification covers the main types of models, including time-series, differential, and imitation models, as well as their combinations. The time-series models are built from analysis of time series derived using filtration, regression, and network methods (Section 2). The differential models include those derived from systems of ordinary and stochastic differential equations as well as partial differential equations (Section 3). The imitation models include cellular automata and agent-based models (Section 4). The fourth group in the classification comprises combinations of nonlinear Markov chains and optimal control theory, derived within the framework of mean-field game theory. Due to the novelty of the disease and the difficulties it causes, the parameters of most models are, as a rule, unknown, which necessitates solving the inverse problem; the paper therefore also analyses the main algorithms for solving the inverse problem, such as stochastic optimization; nature-inspired algorithms (genetic, differential evolution, particle swarm, etc.); the understanding method; big-data analysis; and machine learning.
[ { "created": "Fri, 10 Dec 2021 03:29:04 GMT", "version": "v1" }, { "created": "Mon, 31 Jan 2022 06:35:51 GMT", "version": "v2" } ]
2022-02-01
[ [ "Krivorotko", "O. I.", "" ], [ "Kabanikhin", "S. I.", "" ] ]
The paper presents a classification and analysis of mathematical models of COVID-19 spread in different population groups, such as the family, school, office (3-100 people), neighborhood (100-5000 people), city, region (0.5-15 million people), country, continent, and the world. The classification covers the main types of models, including time-series, differential, and imitation models, as well as their combinations. The time-series models are built from analysis of time series derived using filtration, regression, and network methods (Section 2). The differential models include those derived from systems of ordinary and stochastic differential equations as well as partial differential equations (Section 3). The imitation models include cellular automata and agent-based models (Section 4). The fourth group in the classification comprises combinations of nonlinear Markov chains and optimal control theory, derived within the framework of mean-field game theory. Due to the novelty of the disease and the difficulties it causes, the parameters of most models are, as a rule, unknown, which necessitates solving the inverse problem; the paper therefore also analyses the main algorithms for solving the inverse problem, such as stochastic optimization; nature-inspired algorithms (genetic, differential evolution, particle swarm, etc.); the understanding method; big-data analysis; and machine learning.