Schema (15 columns per record):

Column          Type           Range
--------------  -------------  ------------
id              stringlengths  9 – 13
submitter       stringlengths  4 – 48
authors         stringlengths  4 – 9.62k
title           stringlengths  4 – 343
comments        stringlengths  2 – 480
journal-ref     stringlengths  9 – 309
doi             stringlengths  12 – 138
report-no       stringclasses  277 values
categories      stringlengths  8 – 87
license         stringclasses  9 values
orig_abstract   stringlengths  27 – 3.76k
versions        listlengths    1 – 15
update_date     stringlengths  10 – 10
authors_parsed  listlengths    1 – 147
abstract        stringlengths  24 – 3.75k
id:             q-bio/0407025
submitter:      Dietrich Stauffer
authors:        D. Stauffer and H. Arndt
title:          Simulation and Experiment of Extinction or Adaptation
comments:       4 pages including 4 figures
journal-ref:    null
doi:            null
report-no:      null
categories:     q-bio.PE
license:        null
orig_abstract:  Can unicellular organisms survive a drastic temperature change, and adapt to it after many generations? In simulations of the Penna model of biological ageing, both extinction and adaptation were found for asexual and sexual reproduction as well as for parasex. These model investigations are the basis for the design of evolution experiments with heterotrophic flagellates.
versions:       [ { "created": "Thu, 15 Jul 2004 16:45:50 GMT", "version": "v1" } ]
update_date:    2007-05-23
authors_parsed: [ [ "Stauffer", "D.", "" ], [ "Arndt", "H.", "" ] ]
abstract:       Can unicellular organisms survive a drastic temperature change, and adapt to it after many generations? In simulations of the Penna model of biological ageing, both extinction and adaptation were found for asexual and sexual reproduction as well as for parasex. These model investigations are the basis for the design of evolution experiments with heterotrophic flagellates.
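The `versions` and `authors_parsed` fields in the record above are JSON lists. A sketch of unpacking them with the standard library — the JSON literals are copied from that record, and the unpacking assumes the layouts shown there (`versions` entries with `created`/`version` keys, `authors_parsed` rows as `[last, first, suffix]`):

```python
import json
from email.utils import parsedate_to_datetime

# JSON copied from the "versions" and "authors_parsed" fields above.
versions = json.loads(
    '[ { "created": "Thu, 15 Jul 2004 16:45:50 GMT", "version": "v1" } ]'
)
authors_parsed = json.loads(
    '[ [ "Stauffer", "D.", "" ], [ "Arndt", "H.", "" ] ]'
)

# "created" is an RFC 2822 style date, which the email utilities parse.
first_created = parsedate_to_datetime(versions[0]["created"])

# Assumed row layout: [last, first, suffix].
names = [f"{first} {last}".strip() for last, first, _suffix in authors_parsed]
print(first_created.year, names)  # 2004 ['D. Stauffer', 'H. Arndt']
```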
id:             1808.09104
submitter:      Nanjie Deng
authors:        Nanjie Deng, Lauren Wickstrom, Emilio Gallicchio
title:          Combining Alchemical Transformation with Physical Pathway to Accurately Compute Absolute Binding Free Energy
comments:       16 pages, 7 figures
journal-ref:    null
doi:            null
report-no:      null
categories:     q-bio.BM
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  We present a new method that combines alchemical transformation with physical pathway to accurately and efficiently compute the absolute binding free energy of receptor-ligand complex. Currently, the double decoupling method (DDM) and the potential of mean force approach (PMF) methods are widely used to compute the absolute binding free energy of biomolecules. The DDM relies on alchemically decoupling the ligand from its environments, which can be computationally challenging for large ligands and charged ligands because of the large magnitude of the decoupling free energies involved. On the other hand, the PMF approach uses physical pathway to extract the ligand out of the binding site, thus avoids the alchemical decoupling of the ligand. However, the PMF method has its own drawback because of the reliance on a ligand binding/unbinding pathway free of steric obstruction from the receptor atoms. Therefore, in the presence of deeply buried ligand functional groups the convergence of the PMF calculation can be very slow leading to large errors in the computed binding free energy. Here we develop a new method called AlchemPMF by combining alchemical transformation with physical pathway to overcome the major drawback in the PMF method. We have tested the new approach on the binding of a charged ligand to an allosteric site on HIV-1 Integrase. After 20 ns of simulation per umbrella sampling window, the new method yields absolute binding free energies within ~1 kcal/mol from the experimental result, whereas the standard PMF approach and the DDM calculations result in errors of ~5 kcal/mol and > 2 kcal/mol, respectively. Furthermore, the binding free energy computed using the new method is associated with smaller statistical error compared with those obtained from the existing methods.
versions:       [ { "created": "Tue, 28 Aug 2018 03:44:51 GMT", "version": "v1" } ]
update_date:    2018-08-29
authors_parsed: [ [ "Deng", "Nanjie", "" ], [ "Wickstrom", "Lauren", "" ], [ "Gallicchio", "Emilio", "" ] ]
abstract:       We present a new method that combines alchemical transformation with a physical pathway to accurately and efficiently compute the absolute binding free energy of a receptor-ligand complex. Currently, the double decoupling method (DDM) and the potential of mean force (PMF) approach are widely used to compute the absolute binding free energies of biomolecules. The DDM relies on alchemically decoupling the ligand from its environments, which can be computationally challenging for large and charged ligands because of the large magnitude of the decoupling free energies involved. On the other hand, the PMF approach uses a physical pathway to extract the ligand out of the binding site, thus avoiding the alchemical decoupling of the ligand. However, the PMF method has its own drawback: it relies on a ligand binding/unbinding pathway free of steric obstruction from the receptor atoms. Therefore, in the presence of deeply buried ligand functional groups the convergence of the PMF calculation can be very slow, leading to large errors in the computed binding free energy. Here we develop a new method, called AlchemPMF, that combines alchemical transformation with a physical pathway to overcome this major drawback of the PMF method. We have tested the new approach on the binding of a charged ligand to an allosteric site on HIV-1 Integrase. After 20 ns of simulation per umbrella sampling window, the new method yields absolute binding free energies within ~1 kcal/mol of the experimental result, whereas the standard PMF approach and the DDM calculations result in errors of ~5 kcal/mol and > 2 kcal/mol, respectively. Furthermore, the binding free energy computed using the new method carries a smaller statistical error than those obtained from the existing methods.
id:             0810.4738
submitter:      David Basanta
authors:        David Basanta and Andreas Deutsch
title:          A Game Theoretical Perspective on the Somatic Evolution of Cancer
comments:       Book chapter in Selected Topics in Cancer Modeling: Genesis, Evolution, Immune Competition, and Therapy. Editors: N. Bellomo, M. Chaplain and E. de Angelis. Springer-Verlag New York, LLC. ISBN-13: 9780817647124
journal-ref:    null
doi:            null
report-no:      null
categories:     q-bio.TO q-bio.PE
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  Environmental and genetic mutations can transform the cells in a co-operating healthy tissue into an ecosystem of individualistic tumour cells that compete for space and resources. Various selection forces are responsible for driving the evolution of cells in a tumour towards more malignant and aggressive phenotypes that tend to have a fitness advantage over the older populations. Although the evolutionary nature of cancer has been recognised for more than three decades (ever since the seminal work of Nowell) it has been only recently that tools traditionally used by ecological and evolutionary researchers have been adopted to study the evolution of cancer phenotypes in populations of individuals capable of co-operation and competition. In this chapter we will describe game theory as an important tool to study the emergence of cell phenotypes in a tumour and will critically review some of its applications in cancer research. These applications demonstrate that game theory can be used to understand the dynamics of somatic cancer evolution and suggest new therapies in which this knowledge could be applied to gain some control over the evolution of the tumour.
versions:       [ { "created": "Mon, 27 Oct 2008 03:48:09 GMT", "version": "v1" } ]
update_date:    2008-10-28
authors_parsed: [ [ "Basanta", "David", "" ], [ "Deutsch", "Andreas", "" ] ]
abstract:       Environmental and genetic mutations can transform the cells in a co-operating healthy tissue into an ecosystem of individualistic tumour cells that compete for space and resources. Various selection forces are responsible for driving the evolution of cells in a tumour towards more malignant and aggressive phenotypes that tend to have a fitness advantage over the older populations. Although the evolutionary nature of cancer has been recognised for more than three decades (ever since the seminal work of Nowell) it has been only recently that tools traditionally used by ecological and evolutionary researchers have been adopted to study the evolution of cancer phenotypes in populations of individuals capable of co-operation and competition. In this chapter we will describe game theory as an important tool to study the emergence of cell phenotypes in a tumour and will critically review some of its applications in cancer research. These applications demonstrate that game theory can be used to understand the dynamics of somatic cancer evolution and suggest new therapies in which this knowledge could be applied to gain some control over the evolution of the tumour.
id:             1808.10591
submitter:      Jessica Liebig
authors:        Jessica Liebig, Cassie Jansen, Dean Paini, Lauren Gardner, Raja Jurdak
title:          A global model for predicting the arrival of imported dengue infections
comments:       32 pages, 20 figures
journal-ref:    null
doi:            10.1371/journal.pone.0225193
report-no:      null
categories:     q-bio.PE cs.SI physics.soc-ph
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  With approximately half of the world's population at risk of contracting dengue, this mosquito-borne disease is of global concern. International travellers significantly contribute to dengue's rapid and large-scale spread by importing the disease from endemic into non-endemic countries. To prevent future outbreaks and dengue from establishing in non-endemic countries, knowledge about the arrival time and location of infected travellers is crucial. We propose a network model that predicts the monthly number of dengue-infected air passengers arriving at any given airport. We consider international air travel volumes to construct weighted networks, representing passenger flows between airports. We further calculate the probability of passengers, who travel through the international air transport network, being infected with dengue. The probability of being infected depends on the destination, duration and timing of travel. Our findings shed light onto dengue importation routes and reveal country-specific reporting rates that have been until now largely unknown. This paper provides important new knowledge about the spreading dynamics of dengue that is highly beneficial for public health authorities to strategically allocate the often limited resources to more efficiently prevent the spread of dengue.
versions:       [ { "created": "Fri, 31 Aug 2018 04:03:23 GMT", "version": "v1" }, { "created": "Fri, 14 Dec 2018 04:08:10 GMT", "version": "v2" }, { "created": "Mon, 11 Nov 2019 00:24:28 GMT", "version": "v3" } ]
update_date:    2020-07-01
authors_parsed: [ [ "Liebig", "Jessica", "" ], [ "Jansen", "Cassie", "" ], [ "Paini", "Dean", "" ], [ "Gardner", "Lauren", "" ], [ "Jurdak", "Raja", "" ] ]
abstract:       With approximately half of the world's population at risk of contracting dengue, this mosquito-borne disease is of global concern. International travellers significantly contribute to dengue's rapid and large-scale spread by importing the disease from endemic into non-endemic countries. To prevent future outbreaks and dengue from establishing in non-endemic countries, knowledge about the arrival time and location of infected travellers is crucial. We propose a network model that predicts the monthly number of dengue-infected air passengers arriving at any given airport. We consider international air travel volumes to construct weighted networks, representing passenger flows between airports. We further calculate the probability that passengers who travel through the international air transport network are infected with dengue. The probability of being infected depends on the destination, duration and timing of travel. Our findings shed light on dengue importation routes and reveal country-specific reporting rates that have until now been largely unknown. This paper provides important new knowledge about the spreading dynamics of dengue, helping public health authorities to strategically allocate their often limited resources and more efficiently prevent the spread of dengue.
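The `categories` field in the record above holds several space-separated arXiv subject tags in a single string. A small illustrative helper for exact-tag filtering (the function name is ours, not part of the dataset):

```python
# "categories" is one space-separated string of arXiv subject tags,
# e.g. "q-bio.PE cs.SI physics.soc-ph"; has_category is illustrative.
def has_category(categories: str, tag: str) -> bool:
    return tag in categories.split()

print(has_category("q-bio.PE cs.SI physics.soc-ph", "cs.SI"))  # True
print(has_category("q-bio.PE cs.SI physics.soc-ph", "cs"))     # False: exact tags only
```

Splitting on whitespace avoids false positives from substring matches such as `"cs"` inside `"cs.SI"`.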
id:             1406.6206
submitter:      Anatolij Gelimson
authors:        Anatolij Gelimson, Ramin Golestanian
title:          Collective Dynamics of Dividing Chemotactic Cells
comments:       null
journal-ref:    Phys. Rev. Lett. 114, 028101 (2015)
doi:            10.1103/PhysRevLett.114.028101
report-no:      null
categories:     q-bio.CB cond-mat.soft cond-mat.stat-mech q-bio.PE q-bio.TO
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  The large scale behaviour of a population of cells that grow and interact through the concentration field of the chemicals they secrete is studied using dynamical renormalization group methods. The combination of the effective long-range chemotactic interaction and lack of number conservation leads to a rich variety of phase behaviour in the system, which includes a sharp transition from a phase that has moderate (or controlled) growth and regulated chemical interactions to a phase with strong (or uncontrolled) growth and no chemical interactions. The transition point has nontrivial critical exponents. Our results might help shed light on the interplay between chemical signalling and growth in tissues and colonies, and in particular on the challenging problem of cancer metastasis.
versions:       [ { "created": "Tue, 24 Jun 2014 11:26:14 GMT", "version": "v1" }, { "created": "Fri, 19 Dec 2014 12:25:04 GMT", "version": "v2" }, { "created": "Fri, 16 Jan 2015 11:20:29 GMT", "version": "v3" } ]
update_date:    2015-01-19
authors_parsed: [ [ "Gelimson", "Anatolij", "" ], [ "Golestanian", "Ramin", "" ] ]
abstract:       The large scale behaviour of a population of cells that grow and interact through the concentration field of the chemicals they secrete is studied using dynamical renormalization group methods. The combination of the effective long-range chemotactic interaction and lack of number conservation leads to a rich variety of phase behaviour in the system, which includes a sharp transition from a phase that has moderate (or controlled) growth and regulated chemical interactions to a phase with strong (or uncontrolled) growth and no chemical interactions. The transition point has nontrivial critical exponents. Our results might help shed light on the interplay between chemical signalling and growth in tissues and colonies, and in particular on the challenging problem of cancer metastasis.
id:             1108.1635
submitter:      Richard A Neher
authors:        Richard A. Neher and Boris I. Shraiman
title:          Genetic Draft and Quasi-Neutrality in Large Facultatively Sexual Populations
comments:       Includes supplement as appendix
journal-ref:    null
doi:            null
report-no:      null
categories:     q-bio.PE
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  Large populations may contain numerous simultaneously segregating polymorphisms subject to natural selection. Since selection acts on individuals whose fitness depends on many loci, different loci affect each other's dynamics. This leads to stochastic fluctuations of allele frequencies above and beyond genetic drift - an effect known as genetic draft. Since recombination disrupts associations between alleles, draft is strong when recombination is rare. Here, we study a facultatively outcrossing population in a regime where the frequency of out-crossing and recombination, r, is small compared to the characteristic scale of fitness differences \sigma. In this regime, fit genotypes expand clonally, leading to large fluctuations in the number of recombinant offspring genotypes. The power law tail in the distribution of the latter makes it impossible to capture the dynamics of draft by an effective neutral model. Instead, we find that the fixation time of a neutral allele increases only slowly with the population size but depends sensitively on the ratio r/\sigma. The efficacy of selection is reduced dramatically and alleles behave "quasi-neutrally" even for Ns>> 1, provided that |s|< s_c, where s_c depends strongly on r/\sigma, but only weakly on population size N. In addition, the anomalous fluctuations due to draft change the spectrum of (quasi)-neutral alleles from f(\nu)\sim 1/\nu, corresponding to drift, to \sim1/\nu^2. Finally, draft accelerates the rate of two step adaptations through deleterious intermediates.
versions:       [ { "created": "Mon, 8 Aug 2011 08:56:19 GMT", "version": "v1" } ]
update_date:    2011-08-09
authors_parsed: [ [ "Neher", "Richard A.", "" ], [ "Shraiman", "Boris I.", "" ] ]
abstract:       Large populations may contain numerous simultaneously segregating polymorphisms subject to natural selection. Since selection acts on individuals whose fitness depends on many loci, different loci affect each other's dynamics. This leads to stochastic fluctuations of allele frequencies above and beyond genetic drift - an effect known as genetic draft. Since recombination disrupts associations between alleles, draft is strong when recombination is rare. Here, we study a facultatively outcrossing population in a regime where the frequency of out-crossing and recombination, r, is small compared to the characteristic scale of fitness differences \sigma. In this regime, fit genotypes expand clonally, leading to large fluctuations in the number of recombinant offspring genotypes. The power law tail in the distribution of the latter makes it impossible to capture the dynamics of draft by an effective neutral model. Instead, we find that the fixation time of a neutral allele increases only slowly with the population size but depends sensitively on the ratio r/\sigma. The efficacy of selection is reduced dramatically and alleles behave "quasi-neutrally" even for Ns >> 1, provided that |s| < s_c, where s_c depends strongly on r/\sigma, but only weakly on population size N. In addition, the anomalous fluctuations due to draft change the spectrum of (quasi)-neutral alleles from f(\nu) \sim 1/\nu, corresponding to drift, to \sim 1/\nu^2. Finally, draft accelerates the rate of two-step adaptations through deleterious intermediates.
id:             2007.11674
submitter:      Jorge Gaxiola
authors:        J.A. Gaxiola-Tirado
title:          Using EEG-based brain connectivity for the study of brain dynamics in brain-computer interfaces
comments:       in Spanish
journal-ref:    Revista Doctorado UMH, 2020
doi:            null
report-no:      null
categories:     q-bio.NC stat.AP
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  The analysis of brain connectivity aims to understand the emergence of functional networks into the brain. This information can be used in the process of electroencephalographic (EEG) signal analysis and classification for a braincomputer interface (BCI). These systems provide an alternative channel of communication and control to people with motor impairments. In this article, four strategies for using the brain connectivity in a BCI environment as a tool to obtain a deeper understanding of the cerebral mechanisms are proposed, with the principal aim of developing a scheme oriented to neuro-rehabilitation of gait in combination with different neurotechnologies and exoskeletons. This scheme would allow improving current schemes and/or to design new control strategies, as well as rehabilitation approaches.
versions:       [ { "created": "Sat, 18 Jul 2020 18:05:14 GMT", "version": "v1" } ]
update_date:    2020-07-28
authors_parsed: [ [ "Gaxiola-Tirado", "J. A.", "" ] ]
abstract:       The analysis of brain connectivity aims to understand the emergence of functional networks in the brain. This information can be used in the process of electroencephalographic (EEG) signal analysis and classification for a brain-computer interface (BCI). These systems provide an alternative channel of communication and control to people with motor impairments. In this article, four strategies for using brain connectivity in a BCI environment as a tool to obtain a deeper understanding of cerebral mechanisms are proposed, with the principal aim of developing a scheme oriented to the neuro-rehabilitation of gait in combination with different neurotechnologies and exoskeletons. This scheme would make it possible to improve current approaches and/or to design new control strategies, as well as rehabilitation approaches.
id:             1010.3268
submitter:      Florian Markowetz
authors:        Florian Markowetz, Klaas W Mulder, Edoardo M Airoldi, Ihor R Lemischka, Olga G Troyanskaya
title:          Mapping Dynamic Histone Acetylation Patterns to Gene Expression in Nanog-depleted Murine Embryonic Stem Cells
comments:       accepted at PLoS Computational Biology
journal-ref:    PLoS Comp Bio, 2010 Dec 16;6(12):e1001034
doi:            10.1371/journal.pcbi.1001034
report-no:      null
categories:     q-bio.GN
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  Embryonic stem cells (ESC) have the potential to self-renew indefinitely and to differentiate into any of the three germ layers. The molecular mechanisms for self-renewal, maintenance of pluripotency and lineage specification are poorly understood, but recent results point to a key role for epigenetic mechanisms. In this study, we focus on quantifying the impact of histone 3 acetylation (H3K9,14ac) on gene expression in murine embryonic stem cells. We analyze genome-wide histone acetylation patterns and gene expression profiles measured over the first five days of cell differentiation triggered by silencing Nanog, a key transcription factor in ESC regulation. We explore the temporal and spatial dynamics of histone acetylation data and its correlation with gene expression using supervised and unsupervised statistical models. On a genome-wide scale, changes in acetylation are significantly correlated to changes in mRNA expression and, surprisingly, this coherence increases over time. We quantify the predictive power of histone acetylation for gene expression changes in a balanced cross-validation procedure. In an in-depth study we focus on genes central to the regulatory network of Mouse ESC, including those identified in a recent genome-wide RNAi screen and in the PluriNet, a computationally derived stem cell signature. We find that compared to the rest of the genome, ESC-specific genes show significantly more acetylation signal and a much stronger decrease in acetylation over time, which is often not reflected in an concordant expression change. These results shed light on the complexity of the relationship between histone acetylation and gene expression and are a step forward to dissect the multilayer regulatory mechanisms that determine stem cell fate.
versions:       [ { "created": "Fri, 15 Oct 2010 20:31:44 GMT", "version": "v1" } ]
update_date:    2011-01-11
authors_parsed: [ [ "Markowetz", "Florian", "" ], [ "Mulder", "Klaas W", "" ], [ "Airoldi", "Edoardo M", "" ], [ "Lemischka", "Ihor R", "" ], [ "Troyanskaya", "Olga G", "" ] ]
abstract:       Embryonic stem cells (ESC) have the potential to self-renew indefinitely and to differentiate into any of the three germ layers. The molecular mechanisms for self-renewal, maintenance of pluripotency and lineage specification are poorly understood, but recent results point to a key role for epigenetic mechanisms. In this study, we focus on quantifying the impact of histone 3 acetylation (H3K9,14ac) on gene expression in murine embryonic stem cells. We analyze genome-wide histone acetylation patterns and gene expression profiles measured over the first five days of cell differentiation triggered by silencing Nanog, a key transcription factor in ESC regulation. We explore the temporal and spatial dynamics of histone acetylation data and its correlation with gene expression using supervised and unsupervised statistical models. On a genome-wide scale, changes in acetylation are significantly correlated to changes in mRNA expression and, surprisingly, this coherence increases over time. We quantify the predictive power of histone acetylation for gene expression changes in a balanced cross-validation procedure. In an in-depth study we focus on genes central to the regulatory network of mouse ESC, including those identified in a recent genome-wide RNAi screen and in the PluriNet, a computationally derived stem cell signature. We find that compared to the rest of the genome, ESC-specific genes show significantly more acetylation signal and a much stronger decrease in acetylation over time, which is often not reflected in a concordant expression change. These results shed light on the complexity of the relationship between histone acetylation and gene expression and are a step forward to dissecting the multilayer regulatory mechanisms that determine stem cell fate.
id:             2404.18485
submitter:      Maxime Lenormand
authors:        Maxime Lenormand, Jean-Baptiste F\'eret, Guillaume Papuga, Samuel Alleaume and Sandra Luque
title:          Coupling in situ and remote sensing data to assess $\alpha$- and $\beta$-diversity over biogeographic gradients
comments:       11 pages, 5 figures + Appendix
journal-ref:    null
doi:            null
report-no:      null
categories:     q-bio.PE q-bio.QM
license:        http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:  The challenges presented by climate change are escalating and pose significant threats to global biodiversity, which in turn increases the risk of species extinctions. Therefore, meticulous monitoring efforts are necessary to mitigate the consequential impacts on both human well-being and environmental equilibrium. Biodiversity mapping is pivotal for establishing conservation priorities, often accomplished by assessing alpha, beta, and gamma diversity levels. Two main data sources, in situ and remote sensing (RS) data, are key for this task. In situ methods entail direct data collection from specific study areas, offering detailed insights into ecological patterns, albeit limited by resource constraints. Conversely, RS provides a broader observational platform, albeit at lower spatial resolution than in situ approaches. RS-derived diversity metrics have potential, particularly in linking spectral and biological diversity through high-resolution imagery for precise differentiation at fine scales. Coupling in situ and RS data underscores their complementary nature, contingent upon various factors including study scale and logistical considerations. In situ methods excel in precision, while RS offers efficiency and broader coverage. Despite prior investigations predominantly relying on limited datasets, our study endeavors to employ both in situ and RS data to assess plant and spectral species diversity across France at a high spatial resolution, integrating diverse metrics to unravel different biogeographical structures while gaining in understanding the relationship between plant and spectral diversity within and across bioregions.
versions:       [ { "created": "Mon, 29 Apr 2024 07:53:50 GMT", "version": "v1" } ]
update_date:    2024-04-30
authors_parsed: [ [ "Lenormand", "Maxime", "" ], [ "Féret", "Jean-Baptiste", "" ], [ "Papuga", "Guillaume", "" ], [ "Alleaume", "Samuel", "" ], [ "Luque", "Sandra", "" ] ]
abstract:       The challenges presented by climate change are escalating and pose significant threats to global biodiversity, which in turn increases the risk of species extinctions. Therefore, meticulous monitoring efforts are necessary to mitigate the consequential impacts on both human well-being and environmental equilibrium. Biodiversity mapping is pivotal for establishing conservation priorities, often accomplished by assessing alpha, beta, and gamma diversity levels. Two main data sources, in situ and remote sensing (RS) data, are key for this task. In situ methods entail direct data collection from specific study areas, offering detailed insights into ecological patterns, albeit limited by resource constraints. Conversely, RS provides a broader observational platform, albeit at lower spatial resolution than in situ approaches. RS-derived diversity metrics have potential, particularly in linking spectral and biological diversity through high-resolution imagery for precise differentiation at fine scales. Coupling in situ and RS data underscores their complementary nature, contingent upon various factors including study scale and logistical considerations. In situ methods excel in precision, while RS offers efficiency and broader coverage. Whereas prior investigations have predominantly relied on limited datasets, our study employs both in situ and RS data to assess plant and spectral species diversity across France at a high spatial resolution, integrating diverse metrics to unravel different biogeographical structures while gaining understanding of the relationship between plant and spectral diversity within and across bioregions.
id:             2403.14491
submitter:      Gabriel Riera
authors:        Tom\'as M. Coronado, Joan Carles Pons, Gabriel Riera
title:          Counting cherry reduction sequences is counting linear extensions (in phylogenetic tree-child networks)
comments:       22 pages, 5 figures
journal-ref:    null
doi:            null
report-no:      null
categories:     q-bio.PE
license:        http://creativecommons.org/licenses/by-sa/4.0/
orig_abstract:  Orchard and tree-child networks share an important property with phylogenetic trees: they can be completely reduced to a single node by iteratively deleting cherries and reticulated cherries. As it is the case with phylogenetic trees, the number of ways in which this can be done gives information about the topology of the network. Here, we show that the problem of computing this number in tree-child networks is akin to that of finding the number of linear extensions of the poset induced by each network, and give an algorithm based on this reduction whose complexity is bounded in terms of the level of the network.
versions:       [ { "created": "Thu, 21 Mar 2024 15:38:47 GMT", "version": "v1" } ]
update_date:    2024-03-22
authors_parsed: [ [ "Coronado", "Tomás M.", "" ], [ "Pons", "Joan Carles", "" ], [ "Riera", "Gabriel", "" ] ]
abstract:       Orchard and tree-child networks share an important property with phylogenetic trees: they can be completely reduced to a single node by iteratively deleting cherries and reticulated cherries. As is the case with phylogenetic trees, the number of ways in which this can be done gives information about the topology of the network. Here, we show that the problem of computing this number in tree-child networks is akin to that of finding the number of linear extensions of the poset induced by each network, and give an algorithm based on this reduction whose complexity is bounded in terms of the level of the network.
id:             2012.10822
submitter:      Ricardo Ugarte
authors:        Ricardo Ugarte
title:          FMO Interaction Energy between 17$\beta$-Estradiol, 17$\alpha$-Estradiol and Human Estrogen Receptor $\alpha$
comments:       12 pages, 10 figures; PCA modes movies. arXiv admin note: text overlap with arXiv:1907.10808. The interaction energy in aqueous systems is corrected (Tables III and IV)
journal-ref:    null
doi:            null
report-no:      null
categories:     q-bio.BM
license:        http://creativecommons.org/licenses/by/4.0/
orig_abstract:  The estrogen receptor is a nuclear hormone receptor activated by the natural steroid hormone 17$\beta$-estradiol (E2). Fragment molecular orbital (FMO) calculations were performed which allowed us to obtain the interaction energy ($E_{int}$) between E2, 17$\alpha$-estradiol (17$\alpha$-E2) and the human estrogen receptor $\alpha$ ligand-binding domain. In aqueous media the MP2/6-31G(d) $E_{int}$ was of -88.52 kcal/mol for E2 and -78.73 kcal/mol for 17$\alpha$-E2. Attractive dispersion interactions were observed between ligands and all surrounding hydrophobic residues. Water molecules were found at the binding site and strong attractive electrostatic interactions were observed between the ligands and the Glu 353 and His 524 residues. The essential dynamics revealed that E2 adapts to the binding site and its motion, in a sense, synchronizes with the whole receptor; while 17$\alpha$-E2, with its motion of greater amplitude compared to E2, disturbs the binding site. Perhaps this feature of the normal substrate is a necessary condition for biological function. Another important requirement relates to the number of water molecules at the binding site. Therefore, negative values in $E_{int}$ is a necessary but not sufficient condition since, it is also necessary to consider the conformers population that fulfill all the requirements that ensure a biological response.
versions:       [ { "created": "Sun, 20 Dec 2020 01:27:04 GMT", "version": "v1" }, { "created": "Sat, 8 Jan 2022 00:49:32 GMT", "version": "v2" } ]
update_date:    2022-01-11
authors_parsed: [ [ "Ugarte", "Ricardo", "" ] ]
abstract:       The estrogen receptor is a nuclear hormone receptor activated by the natural steroid hormone 17$\beta$-estradiol (E2). Fragment molecular orbital (FMO) calculations were performed which allowed us to obtain the interaction energy ($E_{int}$) between E2, 17$\alpha$-estradiol (17$\alpha$-E2) and the human estrogen receptor $\alpha$ ligand-binding domain. In aqueous media the MP2/6-31G(d) $E_{int}$ was -88.52 kcal/mol for E2 and -78.73 kcal/mol for 17$\alpha$-E2. Attractive dispersion interactions were observed between the ligands and all surrounding hydrophobic residues. Water molecules were found at the binding site, and strong attractive electrostatic interactions were observed between the ligands and the Glu 353 and His 524 residues. The essential dynamics revealed that E2 adapts to the binding site and its motion, in a sense, synchronizes with the whole receptor; while 17$\alpha$-E2, with its motion of greater amplitude compared to E2, disturbs the binding site. Perhaps this feature of the normal substrate is a necessary condition for biological function. Another important requirement relates to the number of water molecules at the binding site. Therefore, a negative $E_{int}$ is a necessary but not sufficient condition, since it is also necessary to consider the population of conformers that fulfill all the requirements that ensure a biological response.
1602.06981
Augusto Gonzalez
Augusto Gonzalez
Mathematical control theory, the immune system, and cancer
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simple ideas, borrowed from the mathematical theory of control, are used to analyze, on general grounds, the human immune system. The general principles are minimization of the pathogen load and economy of resources. They should constrain the parameters describing the immune system. In the simplest linear model, for example, where the response is proportional to the load, the annihilation rate of pathogens in any tissue should be greater than the pathogen's average rate of growth. When nonlinearities are added, a reference value for the number of pathogens is set, and a stability condition emerges, which relates the strength of regular threats, barrier height and annihilation rate. The stability condition allows a qualitative comparison between tissues. On the other hand, in cancer immunity, the linear model leads to an expression for the lifetime risk, which accounts for both the effects of carcinogens (endogenous or external) and the immune response.
[ { "created": "Mon, 22 Feb 2016 21:54:30 GMT", "version": "v1" }, { "created": "Wed, 24 Feb 2016 15:30:29 GMT", "version": "v2" }, { "created": "Fri, 8 Jul 2016 15:30:35 GMT", "version": "v3" }, { "created": "Mon, 11 Jul 2016 17:10:05 GMT", "version": "v4" } ]
2016-07-12
[ [ "Gonzalez", "Augusto", "" ] ]
Simple ideas, borrowed from the mathematical theory of control, are used to analyze, on general grounds, the human immune system. The general principles are minimization of the pathogen load and economy of resources. They should constrain the parameters describing the immune system. In the simplest linear model, for example, where the response is proportional to the load, the annihilation rate of pathogens in any tissue should be greater than the pathogen's average rate of growth. When nonlinearities are added, a reference value for the number of pathogens is set, and a stability condition emerges, which relates the strength of regular threats, barrier height and annihilation rate. The stability condition allows a qualitative comparison between tissues. On the other hand, in cancer immunity, the linear model leads to an expression for the lifetime risk, which accounts for both the effects of carcinogens (endogenous or external) and the immune response.
q-bio/0512030
Gustavo Camelo Neto
G. Camelo-Neto, Ana T.C. Silva, L. Giuggioli, V.M. Kenkre
Effect of Predators of Juvenile Rodents on the Spread of the Hantavirus Epidemic
13 pages, 5 figures, submitted to Physica A
null
null
null
q-bio.PE cond-mat.stat-mech
null
Effects of predators of juvenile mice on the spread of the Hantavirus are analyzed in the context of a recently proposed model. Two critical values of the predation probability are identified. When the smaller of them is exceeded, the hantavirus infection vanishes without extinguishing the mice population. When the larger is exceeded, the entire mice population vanishes. These results suggest the possibility of control of the spread of the epidemic by introducing predators in areas of mice colonies in a suitable way so that such control does not kill all the mice but lowers the epidemic spread.
[ { "created": "Tue, 13 Dec 2005 19:32:53 GMT", "version": "v1" } ]
2007-05-23
[ [ "Camelo-Neto", "G.", "" ], [ "Silva", "Ana T. C.", "" ], [ "Giuggioli", "L.", "" ], [ "Kenkre", "V. M.", "" ] ]
Effects of predators of juvenile mice on the spread of the Hantavirus are analyzed in the context of a recently proposed model. Two critical values of the predation probability are identified. When the smaller of them is exceeded, the hantavirus infection vanishes without extinguishing the mice population. When the larger is exceeded, the entire mice population vanishes. These results suggest the possibility of control of the spread of the epidemic by introducing predators in areas of mice colonies in a suitable way so that such control does not kill all the mice but lowers the epidemic spread.
2103.11411
Xiaoqi Bi
Xiaoqi Bi, Carolyn L. Beck
On the Role of Asymptomatic Carriers in Epidemic Spread Processes
19 Pages
null
null
null
q-bio.PE cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an epidemiological compartment model, SAIR(S), that explicitly captures the dynamics of asymptomatic infected individuals in an epidemic spread process. We first present a group model and then discuss networked versions. We provide an investigation of equilibria and stability properties for these models, and present simulation results illustrating the effects of asymptomatic-infected individuals on the spread of the disease. We also discuss local isolation effects on the epidemic dynamics in terms of the networked models. Finally, we provide initial parameter estimation results based on simple least-squares approaches and local test-site data. Keywords: Epidemic dynamics, networks, data-informed modeling, stability analysis, parameter estimation
[ { "created": "Sun, 21 Mar 2021 14:33:43 GMT", "version": "v1" } ]
2021-03-23
[ [ "Bi", "Xiaoqi", "" ], [ "Beck", "Carolyn L.", "" ] ]
We present an epidemiological compartment model, SAIR(S), that explicitly captures the dynamics of asymptomatic infected individuals in an epidemic spread process. We first present a group model and then discuss networked versions. We provide an investigation of equilibria and stability properties for these models, and present simulation results illustrating the effects of asymptomatic-infected individuals on the spread of the disease. We also discuss local isolation effects on the epidemic dynamics in terms of the networked models. Finally, we provide initial parameter estimation results based on simple least-squares approaches and local test-site data. Keywords: Epidemic dynamics, networks, data-informed modeling, stability analysis, parameter estimation
2208.09763
Brian Mintz
Brian Mintz and Feng Fu
The Point of No Return: Evolution of Excess Mutation Rate is Possible Even for Simple Mutation Models
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Under constant selection, each trait has a fixed fitness, and small mutation rates allow populations to efficiently exploit the optimal trait. Therefore it is reasonable to expect mutation rates will evolve downwards. However, we find this need not be the case, examining several models of mutation. While upwards evolution of mutation rate has been found with frequency or time dependent fitness, we demonstrate its possibility in a much simpler context. This work uses adaptive dynamics to study the evolution of mutation rate, and the replicator-mutator equation to model trait evolution. Our approach differs from previous studies by considering a wide variety of methods to represent mutation. We use a finite string approach inspired by genetics, as well as a model of local mutation on a discretization of the unit intervals, handling mutation beyond the endpoints in three ways. The main contribution of this work is a demonstration that the evolution of mutation rate can be significantly more complicated than what is usually expected in relatively simple models.
[ { "created": "Sat, 20 Aug 2022 23:32:36 GMT", "version": "v1" } ]
2022-08-23
[ [ "Mintz", "Brian", "" ], [ "Fu", "Feng", "" ] ]
Under constant selection, each trait has a fixed fitness, and small mutation rates allow populations to efficiently exploit the optimal trait. Therefore it is reasonable to expect mutation rates will evolve downwards. However, we find this need not be the case, examining several models of mutation. While upwards evolution of mutation rate has been found with frequency or time dependent fitness, we demonstrate its possibility in a much simpler context. This work uses adaptive dynamics to study the evolution of mutation rate, and the replicator-mutator equation to model trait evolution. Our approach differs from previous studies by considering a wide variety of methods to represent mutation. We use a finite string approach inspired by genetics, as well as a model of local mutation on a discretization of the unit intervals, handling mutation beyond the endpoints in three ways. The main contribution of this work is a demonstration that the evolution of mutation rate can be significantly more complicated than what is usually expected in relatively simple models.
2305.15153
Zhangyang Gao
Zhangyang Gao, Xingran Chen, Cheng Tan, Stan Z. Li
MotifRetro: Exploring the Combinability-Consistency Trade-offs in retrosynthesis via Dynamic Motif Editing
null
null
null
null
q-bio.BM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Is there a unified framework for graph-based retrosynthesis prediction? Through analysis of full-, semi-, and non-template retrosynthesis methods, we discovered that they strive to strike an optimal balance between combinability and consistency: \textit{Should atoms be combined as motifs to simplify the molecular editing process, or should motifs be broken down into atoms to reduce the vocabulary and improve predictive consistency?} Recent works have studied several specific cases, while none of them explores different combinability-consistency trade-offs. Therefore, we propose MotifRetro, a dynamic motif editing framework for retrosynthesis prediction that can explore the entire trade-off space and unify graph-based models. MotifRetro comprises two components: RetroBPE, which controls the combinability-consistency trade-off, and a motif editing model, where we introduce a novel LG-EGAT module to dynamically add motifs to the molecule. We conduct extensive experiments on USPTO-50K to explore how the trade-off affects the model performance and finally achieve state-of-the-art performance.
[ { "created": "Sat, 20 May 2023 09:08:44 GMT", "version": "v1" } ]
2023-05-25
[ [ "Gao", "Zhangyang", "" ], [ "Chen", "Xingran", "" ], [ "Tan", "Cheng", "" ], [ "Li", "Stan Z.", "" ] ]
Is there a unified framework for graph-based retrosynthesis prediction? Through analysis of full-, semi-, and non-template retrosynthesis methods, we discovered that they strive to strike an optimal balance between combinability and consistency: \textit{Should atoms be combined as motifs to simplify the molecular editing process, or should motifs be broken down into atoms to reduce the vocabulary and improve predictive consistency?} Recent works have studied several specific cases, while none of them explores different combinability-consistency trade-offs. Therefore, we propose MotifRetro, a dynamic motif editing framework for retrosynthesis prediction that can explore the entire trade-off space and unify graph-based models. MotifRetro comprises two components: RetroBPE, which controls the combinability-consistency trade-off, and a motif editing model, where we introduce a novel LG-EGAT module to dynamically add motifs to the molecule. We conduct extensive experiments on USPTO-50K to explore how the trade-off affects the model performance and finally achieve state-of-the-art performance.
1910.13501
Giorgio Matteucci
M. Rogora, L. Frate, M.L. Carranza, M. Freppaz, A. Stanisci, I. Bertani, R. Bottarin, A. Brambilla, R. Canullo, M. Carbognani, C. Cerrato, S. Chelli, E. Cremonese, M. Cutini, M. Di Musciano, B. Erschbamer, D. Godone, M. Iocchi, M. Isabellon, A. Magnani, L. Mazzola, U. Morra di Cella, H. Pauli, M. Petey, B. Petriccione, F. Porro, R. Psenner, G. Rossetti, A. Scotti, R. Sommaruga, U. Tappeiner, J.-P. Theurillat, M. Tomaselli, D. Viglietti, R. Viterbi, P. Vittoz, M. Winkler, G. Matteucci
Assessment of climate change effects on mountain ecosystems through a cross-site analysis in the Alps and Apennines
30 pages plus references, 7 figures, 23 tables Paper from the LTER Europe and ILTER network
Scie.Tot.Environ. 624 (2018) 1429-1442
10.1016/j.scitotenv.2017.12.155
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mountain ecosystems are sensitive indicators of climate change. Long-term studies may be extremely useful in assessing the responses of high-elevation ecosystems to climate change and other anthropogenic drivers. Mountain research sites within the LTER (Long-Term Ecosystem Research) network are representative of various types of ecosystems and span a wide bioclimatic and elevational range. Here, we present a synthesis and a review of the main results from long-term ecological studies in mountain ecosystems at 20 LTER sites in Italy, Switzerland and Austria. We analyzed a set of key climate parameters, such as temperature and snow cover duration, in relation to vascular species composition, plant traits, abundance patterns, pedoclimate, nutrient dynamics in soils and water, phenology and composition of freshwater biota. The overall results highlight the rapid response of mountain ecosystems to climate change. As temperatures increased, vegetation cover in alpine and subalpine summits increased as well. Years with limited snow cover duration caused an increase in soil temperature and microbial biomass during the growing season. Effects on freshwater ecosystems were observed, in terms of increases in solutes, decreases in nitrates and changes in plankton phenology and benthos communities. This work highlights the importance of comparing and integrating long-term ecological data collected in different ecosystems, for a more comprehensive overview of the ecological effects of climate change. Nevertheless, there is a need for i) adopting co-located monitoring site networks to improve our ability to obtain sound results from cross-site analysis, ii) carrying out further studies, with fine spatial and temporal resolutions, to improve understanding of responses to extreme events, and iii) increasing comparability and standardizing protocols across networks to distinguish local from global patterns.
[ { "created": "Tue, 29 Oct 2019 19:50:39 GMT", "version": "v1" } ]
2019-10-31
[ [ "Rogora", "M.", "" ], [ "Frate", "L.", "" ], [ "Carranza", "M. L.", "" ], [ "Freppaz", "M.", "" ], [ "Stanisci", "A.", "" ], [ "Bertani", "I.", "" ], [ "Bottarin", "R.", "" ], [ "Brambilla", "A.", "" ], [ "Canullo", "R.", "" ], [ "Carbognani", "M.", "" ], [ "Cerrato", "C.", "" ], [ "Chelli", "S.", "" ], [ "Cremonese", "E.", "" ], [ "Cutini", "M.", "" ], [ "Di Musciano", "M.", "" ], [ "Erschbamer", "B.", "" ], [ "Godone", "D.", "" ], [ "Iocchi", "M.", "" ], [ "Isabellon", "M.", "" ], [ "Magnani", "A.", "" ], [ "Mazzola", "L.", "" ], [ "di Cella", "U. Morra", "" ], [ "Pauli", "H.", "" ], [ "Petey", "M.", "" ], [ "Petriccione", "B.", "" ], [ "Porro", "F.", "" ], [ "Psenner", "R.", "" ], [ "Rossetti", "G.", "" ], [ "Scotti", "A.", "" ], [ "Sommaruga", "R.", "" ], [ "Tappeiner", "U.", "" ], [ "Theurillat", "J. -P.", "" ], [ "Tomaselli", "M.", "" ], [ "Viglietti", "D.", "" ], [ "Viterbi", "R.", "" ], [ "Vittoz", "P.", "" ], [ "Winkler", "M.", "" ], [ "Matteucci", "G.", "" ] ]
Mountain ecosystems are sensitive indicators of climate change. Long-term studies may be extremely useful in assessing the responses of high-elevation ecosystems to climate change and other anthropogenic drivers. Mountain research sites within the LTER (Long-Term Ecosystem Research) network are representative of various types of ecosystems and span a wide bioclimatic and elevational range. Here, we present a synthesis and a review of the main results from long-term ecological studies in mountain ecosystems at 20 LTER sites in Italy, Switzerland and Austria. We analyzed a set of key climate parameters, such as temperature and snow cover duration, in relation to vascular species composition, plant traits, abundance patterns, pedoclimate, nutrient dynamics in soils and water, phenology and composition of freshwater biota. The overall results highlight the rapid response of mountain ecosystems to climate change. As temperatures increased, vegetation cover in alpine and subalpine summits increased as well. Years with limited snow cover duration caused an increase in soil temperature and microbial biomass during the growing season. Effects on freshwater ecosystems were observed, in terms of increases in solutes, decreases in nitrates and changes in plankton phenology and benthos communities. This work highlights the importance of comparing and integrating long-term ecological data collected in different ecosystems, for a more comprehensive overview of the ecological effects of climate change. Nevertheless, there is a need for i) adopting co-located monitoring site networks to improve our ability to obtain sound results from cross-site analysis, ii) carrying out further studies, with fine spatial and temporal resolutions, to improve understanding of responses to extreme events, and iii) increasing comparability and standardizing protocols across networks to distinguish local from global patterns.
1806.04754
Asma Azizi Boroojeni
Asma Azizi Boroojeni
Mathematical Models for Predicting and Mitigating the Spread of Chlamydia Sexually Transmitted Infection
PhD thesis, Tulane (2018)
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chlamydia trachomatis (Ct) is the most common bacterial sexually transmitted infection (STI) in the United States and a major cause of infertility, pelvic inflammatory disease, and ectopic pregnancy among women. Despite decades of screening women for Ct, rates continue to increase in high-prevalence areas such as New Orleans. A pilot study in New Orleans found approximately 11% of 14-24 year old African Americans (AAs) were infected with Ct. Our goal is to mathematically model the impact of different interventions for AA men resident in New Orleans on the overall rate of Ct among women resident in the same region. We create and analyze mathematical models, including multi-risk and continuous-risk compartmental models and an agent-based network model, first to help understand the spread of Ct and second to evaluate and estimate behavioral and biomedical interventions, including condom use, screening, partner notification, social friend notification, and rescreening. Our compartmental models predict that Ct prevalence is a function of the number of partners a person has, and quantify how this distribution changes as a function of condom use. We also observe that although increased Ct screening and rescreening, and treating partners of infected people, will reduce the prevalence, these mitigations alone are not sufficient to control the epidemic. A combination of both sexual partner and social friend notification is needed to mitigate Ct.
[ { "created": "Wed, 6 Jun 2018 05:30:27 GMT", "version": "v1" } ]
2018-06-14
[ [ "Boroojeni", "Asma Azizi", "" ] ]
Chlamydia trachomatis (Ct) is the most common bacterial sexually transmitted infection (STI) in the United States and a major cause of infertility, pelvic inflammatory disease, and ectopic pregnancy among women. Despite decades of screening women for Ct, rates continue to increase in high-prevalence areas such as New Orleans. A pilot study in New Orleans found approximately 11% of 14-24 year old African Americans (AAs) were infected with Ct. Our goal is to mathematically model the impact of different interventions for AA men resident in New Orleans on the overall rate of Ct among women resident in the same region. We create and analyze mathematical models, including multi-risk and continuous-risk compartmental models and an agent-based network model, first to help understand the spread of Ct and second to evaluate and estimate behavioral and biomedical interventions, including condom use, screening, partner notification, social friend notification, and rescreening. Our compartmental models predict that Ct prevalence is a function of the number of partners a person has, and quantify how this distribution changes as a function of condom use. We also observe that although increased Ct screening and rescreening, and treating partners of infected people, will reduce the prevalence, these mitigations alone are not sufficient to control the epidemic. A combination of both sexual partner and social friend notification is needed to mitigate Ct.
1512.07855
Hideaki Shimazaki
Hideaki Shimazaki
Neurons as an Information-theoretic Engine
16 pages, 4 figures; corrected four typos in the original submission
An extended edition is published as a book chapter. Shimazaki H. (2018) Neural Engine Hypothesis. In Chen Z. and Sarma S.V. (Eds.), Dynamic Neuroscience. Springer
10.1007/978-3-319-71976-4
null
q-bio.NC physics.bio-ph physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that dynamical gain modulation of neurons' stimulus response is described as an information-theoretic cycle that generates entropy associated with the stimulus-related activity from entropy produced by the modulation. To articulate this theory, we describe stimulus-evoked activity of a neural population based on the maximum entropy principle with constraints on two types of overlapping activities, one that is controlled by stimulus conditions and the other, termed internal activity, that is regulated internally in an organism. We demonstrate that modulation of the internal activity realises gain control of stimulus response, and controls stimulus information. A cycle of neural dynamics is then introduced to model information processing by the neurons during which the stimulus information is dynamically enhanced by the internal gain-modulation mechanism. Based on the conservation law for entropy production, we demonstrate that the cycle generates entropy ascribed to the stimulus-related activity using entropy supplied by the internal mechanism, analogously to a heat engine that produces work from heat. We provide an efficient cycle that achieves the highest entropic efficiency to retain the stimulus information. The theory allows us to quantify efficiency of the internal computation and its theoretical limit.
[ { "created": "Thu, 24 Dec 2015 16:40:01 GMT", "version": "v1" }, { "created": "Wed, 27 Dec 2017 12:47:30 GMT", "version": "v2" } ]
2017-12-29
[ [ "Shimazaki", "Hideaki", "" ] ]
We show that dynamical gain modulation of neurons' stimulus response is described as an information-theoretic cycle that generates entropy associated with the stimulus-related activity from entropy produced by the modulation. To articulate this theory, we describe stimulus-evoked activity of a neural population based on the maximum entropy principle with constraints on two types of overlapping activities, one that is controlled by stimulus conditions and the other, termed internal activity, that is regulated internally in an organism. We demonstrate that modulation of the internal activity realises gain control of stimulus response, and controls stimulus information. A cycle of neural dynamics is then introduced to model information processing by the neurons during which the stimulus information is dynamically enhanced by the internal gain-modulation mechanism. Based on the conservation law for entropy production, we demonstrate that the cycle generates entropy ascribed to the stimulus-related activity using entropy supplied by the internal mechanism, analogously to a heat engine that produces work from heat. We provide an efficient cycle that achieves the highest entropic efficiency to retain the stimulus information. The theory allows us to quantify efficiency of the internal computation and its theoretical limit.
1402.2196
Karin Vadovi\v{c}ov\'a
Karin Vadovi\v{c}ov\'a
Affective and cognitive prefrontal cortex projections to the lateral habenula in humans
I renamed the medioventral part of the anterior thalamus via which the PFC to LHb fibre tracts from ventral anterior (AV) to medial anterior thalamic region. Apologies for that. My co-author decided to remove his name
null
10.3389/fnhum.2014.00819
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anterior insula (AI) and dACC are known to process information about pain, loss, adversities, bad, harmful or suboptimal choices and consequences that threaten survival or well-being. Pain and loss also activate pregenual ACC (pgACC), linked to sad thoughts, hurt and regrets. The lateral habenula (LHb) is stimulated by predicted and received pain, discomfort, aversive outcome, loss. Its chronic stimulation makes us feel worse/low and gradually stops us from choosing and moving toward suboptimal, hurtful or punished choices, by direct and indirect (via RMTg) inhibition of DRN and VTA/SNc. Response selectivity of LHb neurons suggests their cortical input from affective and cognitive evaluative regions that make expectations about bad or suboptimal outcomes. Based on these facts I predicted direct corticohabenular projections from the dACC, pgACC and AI, as part of the adversity processing circuit that learns to avoid bad outcomes by suppressing dopamine and serotonin signals. Using DTI I found dACC, pgACC, AI, adjacent caudolateral and lateral OFC projections to LHb. I predicted no corticohabenular projections from the reward processing regions: medial OFC and vACC, because both respond most strongly to good, high value stimuli and outcomes, inducing serotonin and dopamine release respectively. This lack of LHb projections was confirmed for vACC and likely for mOFC. The surprising findings were the corticohabenular projections from the cognitive prefrontal cortex regions, known for flexible reasoning, planning and combining whatever information is relevant for reaching current goals. I propose that prefrontohabenular projections provide a teaching signal for value-based choice behaviour, to learn to deselect, avoid or inhibit the potentially harmful, low valued or wrong choices, goals, strategies, predictions, models and ways of doing things, to prevent bad or suboptimal consequences.
[ { "created": "Mon, 10 Feb 2014 16:13:53 GMT", "version": "v1" }, { "created": "Fri, 28 Mar 2014 19:33:07 GMT", "version": "v2" }, { "created": "Sat, 26 Apr 2014 21:06:29 GMT", "version": "v3" }, { "created": "Sat, 3 May 2014 07:07:12 GMT", "version": "v4" }, { "created": "Sat, 24 May 2014 13:52:39 GMT", "version": "v5" }, { "created": "Fri, 20 Jun 2014 08:41:29 GMT", "version": "v6" }, { "created": "Fri, 1 Aug 2014 09:54:09 GMT", "version": "v7" }, { "created": "Tue, 30 Sep 2014 07:55:11 GMT", "version": "v8" }, { "created": "Mon, 13 Oct 2014 20:08:06 GMT", "version": "v9" } ]
2014-10-15
[ [ "Vadovičová", "Karin", "" ] ]
Anterior insula (AI) and dACC are known to process information about pain, loss, adversities, bad, harmful or suboptimal choices and consequences that threaten survival or well-being. Pain and loss also activate pregenual ACC (pgACC), linked to sad thoughts, hurt and regrets. The lateral habenula (LHb) is stimulated by predicted and received pain, discomfort, aversive outcome, loss. Its chronic stimulation makes us feel worse/low and gradually stops us from choosing and moving toward suboptimal, hurtful or punished choices, by direct and indirect (via RMTg) inhibition of DRN and VTA/SNc. Response selectivity of LHb neurons suggests their cortical input from affective and cognitive evaluative regions that make expectations about bad or suboptimal outcomes. Based on these facts I predicted direct corticohabenular projections from the dACC, pgACC and AI, as part of the adversity processing circuit that learns to avoid bad outcomes by suppressing dopamine and serotonin signals. Using DTI I found dACC, pgACC, AI, adjacent caudolateral and lateral OFC projections to LHb. I predicted no corticohabenular projections from the reward processing regions: medial OFC and vACC, because both respond most strongly to good, high value stimuli and outcomes, inducing serotonin and dopamine release respectively. This lack of LHb projections was confirmed for vACC and likely for mOFC. The surprising findings were the corticohabenular projections from the cognitive prefrontal cortex regions, known for flexible reasoning, planning and combining whatever information is relevant for reaching current goals. I propose that prefrontohabenular projections provide a teaching signal for value-based choice behaviour, to learn to deselect, avoid or inhibit the potentially harmful, low valued or wrong choices, goals, strategies, predictions, models and ways of doing things, to prevent bad or suboptimal consequences.
1910.09905
Alexey Mazur K
Alexey K. Mazur, Tinh-Suong Nguyen, Eugene Gladyshev
Direct homologous dsDNA-dsDNA pairing: how, where and why?
13 pages, 2 figures, "Perspective" paper for JMB
J. Mol. Biol. 290, 373-377, 2020
10.1016/j.jmb.2019.11.005
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability of homologous chromosomes (or selected chromosomal loci) to pair specifically in the apparent absence of DNA breakage and recombination represents a prominent feature of eukaryotic biology. The mechanism of homology recognition at the basis of such recombination-independent pairing has remained elusive. A number of studies have supported the idea that sequence homology can be sensed between intact DNA double helices in vivo. In particular, recent analyses of the two silencing phenomena in fungi, known as repeat-induced point mutation (RIP) and meiotic silencing by unpaired DNA (MSUD), have provided genetic evidence for the existence of the direct homologous dsDNA-dsDNA pairing. Both RIP and MSUD likely rely on the same search strategy, by which dsDNA segments are matched as arrays of interspersed base-pair triplets. This process is general and very efficient, yet it proceeds normally without the RecA/Rad51/Dmc1 proteins. Further studies of RIP and MSUD may yield surprising insights into the function of DNA in the cell.
[ { "created": "Tue, 22 Oct 2019 11:47:26 GMT", "version": "v1" } ]
2022-10-26
[ [ "Mazur", "Alexey K.", "" ], [ "Nguyen", "Tinh-Suong", "" ], [ "Gladyshev", "Eugene", "" ] ]
The ability of homologous chromosomes (or selected chromosomal loci) to pair specifically in the apparent absence of DNA breakage and recombination represents a prominent feature of eukaryotic biology. The mechanism of homology recognition at the basis of such recombination-independent pairing has remained elusive. A number of studies have supported the idea that sequence homology can be sensed between intact DNA double helices in vivo. In particular, recent analyses of the two silencing phenomena in fungi, known as repeat-induced point mutation (RIP) and meiotic silencing by unpaired DNA (MSUD), have provided genetic evidence for the existence of the direct homologous dsDNA-dsDNA pairing. Both RIP and MSUD likely rely on the same search strategy, by which dsDNA segments are matched as arrays of interspersed base-pair triplets. This process is general and very efficient, yet it proceeds normally without the RecA/Rad51/Dmc1 proteins. Further studies of RIP and MSUD may yield surprising insights into the function of DNA in the cell.
1606.06443
Zhang Chong
Chong Zhang, Jochen Triesch and Bertram E. Shi
An active efficient coding model of the optokinetic nystagmus
null
null
null
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optokinetic nystagmus (OKN) is an involuntary eye movement responsible for stabilizing retinal images in the presence of relative motion between an observer and the environment. Fully understanding the development of optokinetic nystagmus requires a neurally plausible computational model that accounts for the neural development and the behavior. To date, work in this area has been limited. We propose a neurally plausible framework for the joint development of disparity and motion tuning in the visual cortex, the optokinetic and vergence eye movements. This framework models the joint emergence of both perception and behavior, and accounts for the importance of the development of normal vergence control and binocular vision in achieving normal monocular OKN (mOKN) behaviors. Because the model includes behavior, we can simulate the same perturbations as performed in past experiments, such as artificially induced strabismus. The proposed model agrees both qualitatively and quantitatively with a number of findings from the literature on both binocular vision as well as the optokinetic reflex. Finally, our model also makes quantitative predictions about the OKN behavior using the same methods used to characterize the OKN in the experimental literature.
[ { "created": "Tue, 21 Jun 2016 07:01:18 GMT", "version": "v1" }, { "created": "Sun, 2 Oct 2016 07:24:00 GMT", "version": "v2" }, { "created": "Tue, 11 Oct 2016 07:07:35 GMT", "version": "v3" } ]
2016-10-12
[ [ "Zhang", "Chong", "" ], [ "Triesch", "Jochen", "" ], [ "Shi", "Bertram E.", "" ] ]
Optokinetic nystagmus (OKN) is an involuntary eye movement responsible for stabilizing retinal images in the presence of relative motion between an observer and the environment. Fully understanding the development of optokinetic nystagmus requires a neurally plausible computational model that accounts for the neural development and the behavior. To date, work in this area has been limited. We propose a neurally plausible framework for the joint development of disparity and motion tuning in the visual cortex, and of the optokinetic and vergence eye movements. This framework models the joint emergence of both perception and behavior, and accounts for the importance of the development of normal vergence control and binocular vision in achieving normal monocular OKN (mOKN) behaviors. Because the model includes behavior, we can simulate the same perturbations as performed in past experiments, such as artificially induced strabismus. The proposed model agrees both qualitatively and quantitatively with a number of findings from the literature on both binocular vision as well as the optokinetic reflex. Finally, our model also makes quantitative predictions about the OKN behavior using the same methods used to characterize the OKN in the experimental literature.
1401.6337
Ovidiu Radulescu
Sylvain Soliman, Francois Fages, Ovidiu Radulescu
A Constraint Solving Approach to Tropical Equilibration and Model Reduction
8 pages, no figures, proceedings WCB 2013
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Model reduction is a central topic in systems biology and dynamical systems theory, for reducing the complexity of detailed models, finding important parameters, and developing multi-scale models, for instance. While perturbation theory is a standard mathematical tool to analyze the different time scales of a dynamical system, and decompose the system accordingly, tropical methods provide a simple algebraic framework to perform these analyses systematically in polynomial systems. The crux of these tropicalization methods is in the computation of tropical equilibrations. In this paper we show that constraint-based methods, using reified constraints for expressing the equilibration conditions, make it possible to numerically solve non-linear tropical equilibration problems, out of reach of standard computation methods. We illustrate this approach first with the reduction of simple biochemical mechanisms such as the Michaelis-Menten and Goldbeter-Koshland models, and second, with performance figures obtained on a large scale on the model repository \texttt{biomodels.net}.
[ { "created": "Fri, 24 Jan 2014 13:40:39 GMT", "version": "v1" } ]
2014-01-27
[ [ "Soliman", "Sylvain", "" ], [ "Fages", "Francois", "" ], [ "Radulescu", "Ovidiu", "" ] ]
Model reduction is a central topic in systems biology and dynamical systems theory, for reducing the complexity of detailed models, finding important parameters, and developing multi-scale models, for instance. While perturbation theory is a standard mathematical tool to analyze the different time scales of a dynamical system, and decompose the system accordingly, tropical methods provide a simple algebraic framework to perform these analyses systematically in polynomial systems. The crux of these tropicalization methods is in the computation of tropical equilibrations. In this paper we show that constraint-based methods, using reified constraints for expressing the equilibration conditions, make it possible to numerically solve non-linear tropical equilibration problems, out of reach of standard computation methods. We illustrate this approach first with the reduction of simple biochemical mechanisms such as the Michaelis-Menten and Goldbeter-Koshland models, and second, with performance figures obtained on a large scale on the model repository \texttt{biomodels.net}.
q-bio/0509030
Antonia Kropfinger
Daphn\'e Reiss (IJM), Danielle Nouaud (IJM), St\'ephane Ronsseray (IJM), Dominique Anxolab\'eh\`ere (IJM)
Domesticated P elements in the Drosophila montium species subgroup have a new function related to a DNA binding property
null
Journal of Molecular Evolution, vol. ? (2005), in press
10.1007/s00239-004-0324-0
null
q-bio.GN
null
Molecular domestication of a transposable element is defined as its functional recruitment by the host genome. To date, two independent events of molecular domestication of the P transposable element have been described: in the Drosophila obscura species group and in the Drosophila montium species subgroup. These P neogenes consist of stationary, non-repeated sequences, potentially encoding 66 kDa repressor-like proteins (RLs). Here we investigate the function of the montium P neogenes. We provide evidence for the presence of RL proteins in two montium species (D. tsacasi and D. bocqueti), specifically expressed in adult and larval brain and gonads. We tested the hypothesis that the function of the montium P neogenes is related to the repression of the transposition of distantly related mobile P elements which coexist in the genome. Our results strongly suggest that the montium P neogenes are not recruited to downregulate P element transposition. Given that all the proteins encoded by mobile or stationary P homologous sequences show a strong conservation of the DNA binding domain, we tested the capacity of the RL proteins to bind DNA in vivo. Immunostaining of polytene chromosomes in D. melanogaster transgenic lines strongly suggests that montium P neogenes encode proteins that bind DNA in vivo. RL proteins show multiple binding sites on the chromosomes. We suggest that the property recruited in the case of the montium P neoproteins is their DNA-binding property. The possible functions of these neogenes are discussed.
[ { "created": "Fri, 23 Sep 2005 11:07:06 GMT", "version": "v1" } ]
2016-08-16
[ [ "Reiss", "Daphné", "", "IJM" ], [ "Nouaud", "Danielle", "", "IJM" ], [ "Ronsseray", "Stéphane", "", "IJM" ], [ "Anxolabéhère", "Dominique", "", "IJM" ] ]
Molecular domestication of a transposable element is defined as its functional recruitment by the host genome. To date, two independent events of molecular domestication of the P transposable element have been described: in the Drosophila obscura species group and in the Drosophila montium species subgroup. These P neogenes consist of stationary, non-repeated sequences, potentially encoding 66 kDa repressor-like proteins (RLs). Here we investigate the function of the montium P neogenes. We provide evidence for the presence of RL proteins in two montium species (D. tsacasi and D. bocqueti), specifically expressed in adult and larval brain and gonads. We tested the hypothesis that the function of the montium P neogenes is related to the repression of the transposition of distantly related mobile P elements which coexist in the genome. Our results strongly suggest that the montium P neogenes are not recruited to downregulate P element transposition. Given that all the proteins encoded by mobile or stationary P homologous sequences show a strong conservation of the DNA binding domain, we tested the capacity of the RL proteins to bind DNA in vivo. Immunostaining of polytene chromosomes in D. melanogaster transgenic lines strongly suggests that montium P neogenes encode proteins that bind DNA in vivo. RL proteins show multiple binding sites on the chromosomes. We suggest that the property recruited in the case of the montium P neoproteins is their DNA-binding property. The possible functions of these neogenes are discussed.
q-bio/0402023
John Hertz
John Hertz, Alexander Lerchner and Mandana Ahmadi
Mean field methods for cortical network dynamics
20 pages, 4 figures, proceedings of Erice School on Cortical Dynamics
null
null
2004-13
q-bio.NC
null
We review the use of mean field theory for describing the dynamics of dense, randomly connected cortical circuits. For a simple network of excitatory and inhibitory leaky integrate-and-fire neurons, we can show how the firing irregularity, as measured by the Fano factor, increases with the strength of the synapses in the network and with the value to which the membrane potential is reset after a spike. Generalizing the model to include conductance-based synapses gives insight into the connection between the firing statistics and the high-conductance state observed experimentally in visual cortex. Finally, an extension of the model to describe an orientation hypercolumn provides understanding of how cortical interactions sharpen orientation tuning, in a way that is consistent with observed firing statistics.
[ { "created": "Tue, 10 Feb 2004 23:01:50 GMT", "version": "v1" } ]
2007-05-23
[ [ "Hertz", "John", "" ], [ "Lerchner", "Alexander", "" ], [ "Ahmadi", "Mandana", "" ] ]
We review the use of mean field theory for describing the dynamics of dense, randomly connected cortical circuits. For a simple network of excitatory and inhibitory leaky integrate-and-fire neurons, we can show how the firing irregularity, as measured by the Fano factor, increases with the strength of the synapses in the network and with the value to which the membrane potential is reset after a spike. Generalizing the model to include conductance-based synapses gives insight into the connection between the firing statistics and the high-conductance state observed experimentally in visual cortex. Finally, an extension of the model to describe an orientation hypercolumn provides understanding of how cortical interactions sharpen orientation tuning, in a way that is consistent with observed firing statistics.
1705.06447
Pierre Casadebaig
Brigitte Mangin, Pierre Casadebaig, El\'ena Cadic, Nicolas Blanchet, Marie-Claude Boniface, S\'ebastien Carr\`ere, J\'er\^ome Gouzy, Ludovic Legrand, Baptiste Mayjonade, Nicolas Pouilly, Thierry Andr\'e, Marie Coque, Jo\"el Piquemal, Marion Laporte, Patrick Vincourt, St\'ephane Mu\~nos, Nicolas B. Langlade
Genetic control of plasticity of oil yield for combined abiotic stresses using a joint approach of crop modeling and genome-wide association
12 pages, 5 figures, Plant, Cell and Environment
null
10.1111/pce.12961
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the genetic basis of phenotypic plasticity is crucial for predicting and managing climate change effects on wild plants and crops. Here, we combined crop modeling and quantitative genetics to study the genetic control of oil yield plasticity for multiple abiotic stresses in sunflower. First we developed stress indicators to characterize 14 environments for three abiotic stresses (cold, drought and nitrogen) using the SUNFLO crop model and phenotypic variations of three commercial varieties. The computed plant stress indicators better explain yield variation than descriptors at the climatic or crop levels. In those environments, we observed oil yield of 317 sunflower hybrids and regressed it with three selected stress indicators. The slopes of the cold stress reaction norm were used as plasticity phenotypes in the following genome-wide association study. Among the 65,534 tested SNP, we identified nine QTL controlling oil yield plasticity to cold stress. Associated SNP are localized in genes previously shown to be involved in cold stress responses: oligopeptide transporters, LTP, cystatin, alternative oxidase, or root development. This novel approach opens new perspectives to identify genomic regions involved in genotype-by-environment interactions of complex traits to multiple stresses in realistic natural or agronomical conditions.
[ { "created": "Thu, 18 May 2017 07:39:14 GMT", "version": "v1" } ]
2017-05-19
[ [ "Mangin", "Brigitte", "" ], [ "Casadebaig", "Pierre", "" ], [ "Cadic", "Eléna", "" ], [ "Blanchet", "Nicolas", "" ], [ "Boniface", "Marie-Claude", "" ], [ "Carrère", "Sébastien", "" ], [ "Gouzy", "Jérôme", "" ], [ "Legrand", "Ludovic", "" ], [ "Mayjonade", "Baptiste", "" ], [ "Pouilly", "Nicolas", "" ], [ "André", "Thierry", "" ], [ "Coque", "Marie", "" ], [ "Piquemal", "Joël", "" ], [ "Laporte", "Marion", "" ], [ "Vincourt", "Patrick", "" ], [ "Muños", "Stéphane", "" ], [ "Langlade", "Nicolas B.", "" ] ]
Understanding the genetic basis of phenotypic plasticity is crucial for predicting and managing climate change effects on wild plants and crops. Here, we combined crop modeling and quantitative genetics to study the genetic control of oil yield plasticity for multiple abiotic stresses in sunflower. First we developed stress indicators to characterize 14 environments for three abiotic stresses (cold, drought and nitrogen) using the SUNFLO crop model and phenotypic variations of three commercial varieties. The computed plant stress indicators better explain yield variation than descriptors at the climatic or crop levels. In those environments, we observed oil yield of 317 sunflower hybrids and regressed it with three selected stress indicators. The slopes of the cold stress reaction norm were used as plasticity phenotypes in the following genome-wide association study. Among the 65,534 tested SNP, we identified nine QTL controlling oil yield plasticity to cold stress. Associated SNP are localized in genes previously shown to be involved in cold stress responses: oligopeptide transporters, LTP, cystatin, alternative oxidase, or root development. This novel approach opens new perspectives to identify genomic regions involved in genotype-by-environment interactions of complex traits to multiple stresses in realistic natural or agronomical conditions.
2205.02465
Thomas Johannes Baumgarten
Petyo Nikolov, Thomas J. Baumgarten, Shady Safwat Hassan, Nur-Deniz F\"ullenbach, Gerald Kircheis, Dieter H\"aussinger, Markus J\"ordens, Markus Butz, Alfons Schnitzler, Stefan J. Groiss
Paired associative stimulation demonstrates alterations in motor cortical synaptic plasticity in patients with hepatic encephalopathy
number of pages: 33; number of tables: 1; number of figures: 5; References: 94
Clinical Neurophysiology, Volume 132, Issue 10, October 2021, Pages 2332-2341
10.1016/j.clinph.2021.07.019
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Objective: Hepatic encephalopathy (HE) is a potentially reversible brain dysfunction caused by liver failure. Altered synaptic plasticity is thought to play a major role in the pathophysiology of HE. Here, we used paired associative stimulation with an inter-stimulus interval of 25 ms (PAS25), a transcranial magnetic stimulation (TMS) protocol, to test synaptic plasticity of the motor cortex in patients with manifest HE. Methods: 23 HE patients and 23 healthy controls were enrolled in the study. Motor evoked potential (MEP) amplitudes were assessed as a measure of cortical excitability. Time courses of MEP amplitude changes after the PAS25 intervention were compared between both groups. Results: MEP amplitudes increased after PAS25 in the control group, indicating PAS25-induced synaptic plasticity in healthy controls, as expected. In contrast, MEP amplitudes within the HE group did not change and were lower than in the control group, indicating no induction of plasticity. Conclusion: Our study revealed reduced synaptic plasticity of the primary motor cortex in HE. Significance: Reduced synaptic plasticity in HE provides a link between pathological changes on the molecular level and early clinical symptoms of the disease. This decrease may be caused by disturbances in glutamatergic neurotransmission due to the known hyperammonemia in HE patients.
[ { "created": "Thu, 5 May 2022 06:34:29 GMT", "version": "v1" }, { "created": "Fri, 6 May 2022 06:53:23 GMT", "version": "v2" } ]
2022-05-09
[ [ "Nikolov", "Petyo", "" ], [ "Baumgarten", "Thomas J.", "" ], [ "Hassan", "Shady Safwat", "" ], [ "Füllenbach", "Nur-Deniz", "" ], [ "Kircheis", "Gerald", "" ], [ "Häussinger", "Dieter", "" ], [ "Jördens", "Markus", "" ], [ "Butz", "Markus", "" ], [ "Schnitzler", "Alfons", "" ], [ "Groiss", "Stefan J.", "" ] ]
Objective: Hepatic encephalopathy (HE) is a potentially reversible brain dysfunction caused by liver failure. Altered synaptic plasticity is thought to play a major role in the pathophysiology of HE. Here, we used paired associative stimulation with an inter-stimulus interval of 25 ms (PAS25), a transcranial magnetic stimulation (TMS) protocol, to test synaptic plasticity of the motor cortex in patients with manifest HE. Methods: 23 HE patients and 23 healthy controls were enrolled in the study. Motor evoked potential (MEP) amplitudes were assessed as a measure of cortical excitability. Time courses of MEP amplitude changes after the PAS25 intervention were compared between both groups. Results: MEP amplitudes increased after PAS25 in the control group, indicating PAS25-induced synaptic plasticity in healthy controls, as expected. In contrast, MEP amplitudes within the HE group did not change and were lower than in the control group, indicating no induction of plasticity. Conclusion: Our study revealed reduced synaptic plasticity of the primary motor cortex in HE. Significance: Reduced synaptic plasticity in HE provides a link between pathological changes on the molecular level and early clinical symptoms of the disease. This decrease may be caused by disturbances in glutamatergic neurotransmission due to the known hyperammonemia in HE patients.
1901.11085
Daniel Rubinstein
D. Rubinstein, L. Camarillo-Rodriguez, ZJ Waldman, I. Orosz, J. Stein, S. Das, R. Gorniak, AD Sharan, R. Gross, BC Lega, K. Zaghloul, BC Jobst, KA Davis, PA Wanda, G. Worrell, MR Sperling, SA Weiss
High gamma and beta band oscillations in left ventral posterior parietal cortex are regionally dissociated during verbal episodic encoding and recall
methodological flaws/concerns of scientific validity, and lack of agreement among co-authors
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The posterior parietal cortex (PPC) has a unique role in memory retrieval: fMRI and electrocorticography studies suggest that within the ventral PPC (VPC) specifically, there is an anterior-posterior functional divergence between externally-oriented and internally-oriented attention to memory (AtoM). However, the role of VPC during verbal episodic encoding, and the relationship between encoding- and retrieval-related activity, is less understood. Here we show that activation within a subregion of VPC is doubly dissociated between its anterior and posterior parts, during encoding compared to recall in a free recall task. We found that regional activation defined by increased high gamma power and decreased beta power oscillations during encoding and recall correlated with recall success. During word encoding, iEEG sites that showed this correlation were located anterior to those that showed deactivation. Conversely, during word recall, sites that showed stronger correlations between activity and number of words recalled were located more posteriorly. Our results demonstrate the significance of high gamma and beta oscillations suggesting a push-pull relationship between attention to external stimuli and internal memories within left ventral PPC. Knowledge of this divergence of function along the anterior-posterior axis within left ventral PPC may prove useful for guiding brain stimulation strategies.
[ { "created": "Wed, 30 Jan 2019 20:24:48 GMT", "version": "v1" }, { "created": "Fri, 8 Feb 2019 20:30:30 GMT", "version": "v2" } ]
2019-02-12
[ [ "Rubinstein", "D.", "" ], [ "Camarillo-Rodriguez", "L.", "" ], [ "Waldman", "ZJ", "" ], [ "Orosz", "I.", "" ], [ "Stein", "J.", "" ], [ "Das", "S.", "" ], [ "Gorniak", "R.", "" ], [ "Sharan", "AD", "" ], [ "Gross", "R.", "" ], [ "Lega", "BC", "" ], [ "Zaghloul", "K.", "" ], [ "Jobst", "BC", "" ], [ "Davis", "KA", "" ], [ "Wanda", "PA", "" ], [ "Worrell", "G.", "" ], [ "Sperling", "MR", "" ], [ "Weiss", "SA", "" ] ]
The posterior parietal cortex (PPC) has a unique role in memory retrieval: fMRI and electrocorticography studies suggest that within the ventral PPC (VPC) specifically, there is an anterior-posterior functional divergence between externally-oriented and internally-oriented attention to memory (AtoM). However, the role of VPC during verbal episodic encoding, and the relationship between encoding- and retrieval-related activity, is less understood. Here we show that activation within a subregion of VPC is doubly dissociated between its anterior and posterior parts, during encoding compared to recall in a free recall task. We found that regional activation defined by increased high gamma power and decreased beta power oscillations during encoding and recall correlated with recall success. During word encoding, iEEG sites that showed this correlation were located anterior to those that showed deactivation. Conversely, during word recall, sites that showed stronger correlations between activity and number of words recalled were located more posteriorly. Our results demonstrate the significance of high gamma and beta oscillations suggesting a push-pull relationship between attention to external stimuli and internal memories within left ventral PPC. Knowledge of this divergence of function along the anterior-posterior axis within left ventral PPC may prove useful for guiding brain stimulation strategies.
2310.13480
Tena Dubcek Dr.
Tena Dubcek, Debora Ledergerber, Jana Thomann, Giovanna Aiello, Marc Serra-Garcia, Lukas Imbach and Rafael Polania
Personalized identification, prediction, and stimulation of neural oscillations via data-driven models of epileptic network dynamics
4+2 figures
null
null
null
q-bio.NC cs.LG nlin.AO physics.med-ph q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Neural oscillations are considered to be brain-specific signatures of information processing and communication in the brain. They also reflect pathological brain activity in neurological disorders, thus offering a basis for diagnoses and forecasting. Epilepsy is one of the most common neurological disorders, characterized by abnormal synchronization and desynchronization of the oscillations in the brain. About one third of epilepsy cases are pharmacoresistant, and as such emphasize the need for novel therapy approaches, where brain stimulation appears to be a promising therapeutic option. The development of brain stimulation paradigms, however, is often based on generalized assumptions about brain dynamics, although it is known that significant differences occur between patients and brain states. We developed a framework to extract individualized predictive models of epileptic network dynamics directly from EEG data. The models are based on the dominant coherent oscillations and their dynamical coupling, thus combining an established interpretation of dynamics through neural oscillations, with accurate patient-specific features. We show that it is possible to build a direct correspondence between the models of brain-network dynamics under periodic driving, and the mechanism of neural entrainment via periodic stimulation. When our framework is applied to EEG recordings of patients in status epilepticus (a brain state of perpetual seizure activity), it yields a model-driven predictive analysis of the therapeutic performance of periodic brain stimulation. This suggests that periodic brain stimulation can drive pathological states of epileptic network dynamics towards a healthy functional brain state.
[ { "created": "Fri, 20 Oct 2023 13:21:31 GMT", "version": "v1" } ]
2023-10-23
[ [ "Dubcek", "Tena", "" ], [ "Ledergerber", "Debora", "" ], [ "Thomann", "Jana", "" ], [ "Aiello", "Giovanna", "" ], [ "Serra-Garcia", "Marc", "" ], [ "Imbach", "Lukas", "" ], [ "Polania", "Rafael", "" ] ]
Neural oscillations are considered to be brain-specific signatures of information processing and communication in the brain. They also reflect pathological brain activity in neurological disorders, thus offering a basis for diagnoses and forecasting. Epilepsy is one of the most common neurological disorders, characterized by abnormal synchronization and desynchronization of the oscillations in the brain. About one third of epilepsy cases are pharmacoresistant, and as such emphasize the need for novel therapy approaches, where brain stimulation appears to be a promising therapeutic option. The development of brain stimulation paradigms, however, is often based on generalized assumptions about brain dynamics, although it is known that significant differences occur between patients and brain states. We developed a framework to extract individualized predictive models of epileptic network dynamics directly from EEG data. The models are based on the dominant coherent oscillations and their dynamical coupling, thus combining an established interpretation of dynamics through neural oscillations, with accurate patient-specific features. We show that it is possible to build a direct correspondence between the models of brain-network dynamics under periodic driving, and the mechanism of neural entrainment via periodic stimulation. When our framework is applied to EEG recordings of patients in status epilepticus (a brain state of perpetual seizure activity), it yields a model-driven predictive analysis of the therapeutic performance of periodic brain stimulation. This suggests that periodic brain stimulation can drive pathological states of epileptic network dynamics towards a healthy functional brain state.
1502.07816
Joshua Glaser
Joshua I. Glaser, Bradley M. Zamft, George M. Church, Konrad P. Kording
Puzzle Imaging: Using Large-scale Dimensionality Reduction Algorithms for Localization
null
null
10.1371/journal.pone.0131593
null
q-bio.NC cs.CE cs.CV q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
[ { "created": "Fri, 27 Feb 2015 04:55:54 GMT", "version": "v1" }, { "created": "Sat, 7 Mar 2015 07:16:17 GMT", "version": "v2" }, { "created": "Sun, 21 Jun 2015 19:17:03 GMT", "version": "v3" } ]
2016-02-17
[ [ "Glaser", "Joshua I.", "" ], [ "Zamft", "Bradley M.", "" ], [ "Church", "George M.", "" ], [ "Kording", "Konrad P.", "" ] ]
Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
1010.1755
Jaewook Joo
Jaewook Joo, Steve Plimpton, Shawn Martin, Laura Swiler, and Jean-Loup Faulon
Sensitivity analysis of a computational model of the IKK-NF-{\kappa}B-I{\kappa}B{\alpha}-A20 signal transduction network
32 pages, 8 figures
Ann. N.Y. Acad. Sci. 1115:221-239 (2007)
10.1196/annals.1407.014
null
q-bio.QM q-bio.MN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The NF-{\kappa}B signaling network plays an important role in many different compartments of the immune system during immune activation. Using a computational model of the NF-{\kappa}B signaling network involving two negative regulators, I{\kappa}B{\alpha} and A20, we performed sensitivity analyses with three different sampling methods and present a ranking of the kinetic rate variables by the strength of their influence on the NF-{\kappa}B signaling response. We also present a classification of temporal response profiles of nuclear NF-{\kappa}B concentration into six clusters, which can be regrouped to three biologically relevant clusters. Lastly, based upon the ranking, we constructed a reduced network of the IKK-NF-{\kappa}B-I{\kappa}B{\alpha}-A20 signal transduction.
[ { "created": "Fri, 8 Oct 2010 18:21:45 GMT", "version": "v1" } ]
2010-10-11
[ [ "Joo", "Jaewook", "" ], [ "Plimpton", "Steve", "" ], [ "Martin", "Shawn", "" ], [ "Swiler", "Laura", "" ], [ "Faulon", "Jean-Loup", "" ] ]
The NF-{\kappa}B signaling network plays an important role in many different compartments of the immune system during immune activation. Using a computational model of the NF-{\kappa}B signaling network involving two negative regulators, I{\kappa}B{\alpha} and A20, we performed sensitivity analyses with three different sampling methods and present a ranking of the kinetic rate variables by the strength of their influence on the NF-{\kappa}B signaling response. We also present a classification of temporal response profiles of nuclear NF-{\kappa}B concentration into six clusters, which can be regrouped to three biologically relevant clusters. Lastly, based upon the ranking, we constructed a reduced network of the IKK-NF-{\kappa}B-I{\kappa}B{\alpha}-A20 signal transduction.
0812.4280
Georgy Karev
Georgy P. Karev
On mathematical theory of selection: Continuous time population dynamics
29 pages; published in J. of Mathematical Biology
Volume 60, Number 1 / January, 2010
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical theory of selection is developed within the frameworks of general models of inhomogeneous populations with continuous time. Methods that allow us to study the distribution dynamics under natural selection and to construct explicit solutions of the models are developed. All statistical characteristics of interest, such as the mean values of the fitness or any trait can be computed effectively, and the results depend in a crucial way on the initial distribution. The developed theory provides an effective method for solving selection systems; it reduces the initial complex model to a special system of ordinary differential equations (the escort system). Applications of the method to the Price equations are given; the solutions of some particular inhomogeneous Malthusian, Ricker and logistic-like models used but not solved in the literature are derived in explicit form.
[ { "created": "Mon, 22 Dec 2008 20:39:09 GMT", "version": "v1" }, { "created": "Tue, 22 Dec 2009 21:18:19 GMT", "version": "v2" } ]
2009-12-22
[ [ "Karev", "Georgy P.", "" ] ]
Mathematical theory of selection is developed within the frameworks of general models of inhomogeneous populations with continuous time. Methods that allow us to study the distribution dynamics under natural selection and to construct explicit solutions of the models are developed. All statistical characteristics of interest, such as the mean values of the fitness or any trait can be computed effectively, and the results depend in a crucial way on the initial distribution. The developed theory provides an effective method for solving selection systems; it reduces the initial complex model to a special system of ordinary differential equations (the escort system). Applications of the method to the Price equations are given; the solutions of some particular inhomogeneous Malthusian, Ricker and logistic-like models used but not solved in the literature are derived in explicit form.
2012.06990
Jean Honorio
Siya Goel and Clark Gedney and Jean Honorio
A Novel Tool for the Accurate and Affordable Early Diagnosis of Pancreatic Cancer via Machine Learning and Bioinformatics
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pancreatic cancer (PC) is the fourth leading cause of cancer death in the United States due to its five-year survival rate of 10%. Late diagnosis, associated with the asymptomatic nature of early stages and the location of the cancer with respect to the pancreas, makes current widely-accepted screening methods unavailable. Prior studies have achieved low (70-75%) diagnostic accuracy, possibly because 80% of PC cases are associated with diabetes, leading to misdiagnosis. To address the problems of frequent late diagnosis and misdiagnosis, we developed an accessible, accurate and affordable diagnostic tool for PC by analyzing the expression of nineteen genes in PC and diabetes. First, machine learning algorithms were trained on four groups of subjects, depending on the occurrence of PC and diabetes. The models were analyzed with 400 PC subjects at varying stages to ensure validity. Naive Bayes, Neural Network and K-Nearest Neighbors models achieved the highest testing accuracy of around 92.6%. Second, the biological implications of the nineteen genes were investigated using bioinformatics tools. It was found that these genes are significantly involved in regulating the cytoplasm, cytoskeleton and nuclear receptor activity in the pancreas, specifically in acinar and ductal cells. Our novel tool is the first in the literature to achieve a PC diagnostic accuracy above 90%, having the potential to significantly improve the detection of PC in the background of diabetes and increase the five-year survival rate.
[ { "created": "Sun, 13 Dec 2020 07:25:50 GMT", "version": "v1" } ]
2020-12-15
[ [ "Goel", "Siya", "" ], [ "Gedney", "Clark", "" ], [ "Honorio", "Jean", "" ] ]
Pancreatic cancer (PC) is the fourth leading cause of cancer death in the United States due to its five-year survival rate of 10%. Late diagnosis, associated with the asymptomatic nature of early stages and the location of the cancer with respect to the pancreas, makes current widely-accepted screening methods unavailable. Prior studies have achieved low (70-75%) diagnostic accuracy, possibly because 80% of PC cases are associated with diabetes, leading to misdiagnosis. To address the problems of frequent late diagnosis and misdiagnosis, we developed an accessible, accurate and affordable diagnostic tool for PC by analyzing the expression of nineteen genes in PC and diabetes. First, machine learning algorithms were trained on four groups of subjects, depending on the occurrence of PC and diabetes. The models were analyzed with 400 PC subjects at varying stages to ensure validity. Naive Bayes, Neural Network and K-Nearest Neighbors models achieved the highest testing accuracy of around 92.6%. Second, the biological implications of the nineteen genes were investigated using bioinformatics tools. It was found that these genes are significantly involved in regulating the cytoplasm, cytoskeleton and nuclear receptor activity in the pancreas, specifically in acinar and ductal cells. Our novel tool is the first in the literature to achieve a PC diagnostic accuracy above 90%, having the potential to significantly improve the detection of PC in the background of diabetes and increase the five-year survival rate.
0907.0285
Alan Veliz Cuba
Alan Veliz-Cuba
Reduction of Boolean Networks
10 pages, 6 figures
null
null
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Boolean networks have been successfully used in modelling gene regulatory networks. In this paper we propose a reduction method that reduces the complexity of a Boolean network while preserving its dynamical properties and topological features, and hence makes the analysis easier; as a result, it allows for a better understanding of the role of network topology in the dynamics. In particular, we use the reduction method to study steady states of Boolean models.
[ { "created": "Thu, 2 Jul 2009 03:57:48 GMT", "version": "v1" } ]
2009-07-06
[ [ "Veliz-Cuba", "Alan", "" ] ]
Boolean networks have been successfully used in modelling gene regulatory networks. In this paper we propose a reduction method that reduces the complexity of a Boolean network while preserving its dynamical properties and topological features, and hence makes the analysis easier; as a result, it allows for a better understanding of the role of network topology in the dynamics. In particular, we use the reduction method to study steady states of Boolean models.
1710.04038
Ronan M.T. Fleming Dr
Laurent Heirendt and Sylvain Arreckx, Thomas Pfau, Sebasti\'an N. Mendoza, Anne Richelle, Almut Heinken, Hulda S. Haraldsd\'ottir, Jacek Wachowiak, Sarah M. Keating, Vanja Vlasov, Stefania Magnusd\'ottir, Chiam Yu Ng, German Preciat, Alise \v{Z}agare, Siu H.J. Chan, Maike K. Aurich, Catherine M. Clancy, Jennifer Modamio, John T. Sauls, Alberto Noronha, Aarash Bordbar, Benjamin Cousins, Diana C. El Assal, Luis V. Valcarcel, I\~nigo Apaolaza, Susan Ghaderi, Masoud Ahookhosh, Marouen Ben Guebila, Andrejs Kostromins, Nicolas Sompairac, Hoai M. Le, Ding Ma, Yuekai Sun, Lin Wang, James T. Yurkovich, Miguel A.P. Oliveira, Phan T. Vuong, Lemmer P. El Assal, Inna Kuperstein, Andrei Zinovyev, H. Scott Hinton, William A. Bryant, Francisco J. Arag\'on Artacho, Francisco J. Planes, Egils Stalidzans, Alejandro Maass, Santosh Vempala, Michael Hucka, Michael A. Saunders, Costas D. Maranas, Nathan E. Lewis, Thomas Sauter, Bernhard \O. Palsson, Ines Thiele, Ronan M.T. Fleming
Creation and analysis of biochemical constraint-based models: the COBRA Toolbox v3.0
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
COnstraint-Based Reconstruction and Analysis (COBRA) provides a molecular mechanistic framework for integrative analysis of experimental data and quantitative prediction of physicochemically and biochemically feasible phenotypic states. The COBRA Toolbox is a comprehensive software suite of interoperable COBRA methods. It has found widespread applications in biology, biomedicine, and biotechnology because its functions can be flexibly combined to implement tailored COBRA protocols for any biochemical network. Version 3.0 includes new methods for quality controlled reconstruction, modelling, topological analysis, strain and experimental design, network visualisation as well as network integration of chemoinformatic, metabolomic, transcriptomic, proteomic, and thermochemical data. New multi-lingual code integration also enables an expansion in COBRA application scope via high-precision, high-performance, and nonlinear numerical optimisation solvers for multi-scale, multi-cellular and reaction kinetic modelling, respectively. This protocol can be adapted for the generation and analysis of a constraint-based model in a wide variety of molecular systems biology scenarios. This protocol is an update to the COBRA Toolbox 1.0 and 2.0. The COBRA Toolbox 3.0 provides an unparalleled depth of constraint-based reconstruction and analysis methods.
[ { "created": "Wed, 11 Oct 2017 12:41:59 GMT", "version": "v1" }, { "created": "Fri, 23 Feb 2018 14:40:49 GMT", "version": "v2" } ]
2018-02-26
[ [ "Heirendt", "Laurent", "" ], [ "Arreckx", "Sylvain", "" ], [ "Pfau", "Thomas", "" ], [ "Mendoza", "Sebastián N.", "" ], [ "Richelle", "Anne", "" ], [ "Heinken", "Almut", "" ], [ "Haraldsdóttir", "Hulda S.", "" ], [ "Wachowiak", "Jacek", "" ], [ "Keating", "Sarah M.", "" ], [ "Vlasov", "Vanja", "" ], [ "Magnusdóttir", "Stefania", "" ], [ "Ng", "Chiam Yu", "" ], [ "Preciat", "German", "" ], [ "Žagare", "Alise", "" ], [ "Chan", "Siu H. J.", "" ], [ "Aurich", "Maike K.", "" ], [ "Clancy", "Catherine M.", "" ], [ "Modamio", "Jennifer", "" ], [ "Sauls", "John T.", "" ], [ "Noronha", "Alberto", "" ], [ "Bordbar", "Aarash", "" ], [ "Cousins", "Benjamin", "" ], [ "Assal", "Diana C. El", "" ], [ "Valcarcel", "Luis V.", "" ], [ "Apaolaza", "Iñigo", "" ], [ "Ghaderi", "Susan", "" ], [ "Ahookhosh", "Masoud", "" ], [ "Guebila", "Marouen Ben", "" ], [ "Kostromins", "Andrejs", "" ], [ "Sompairac", "Nicolas", "" ], [ "Le", "Hoai M.", "" ], [ "Ma", "Ding", "" ], [ "Sun", "Yuekai", "" ], [ "Wang", "Lin", "" ], [ "Yurkovich", "James T.", "" ], [ "Oliveira", "Miguel A. P.", "" ], [ "Vuong", "Phan T.", "" ], [ "Assal", "Lemmer P. El", "" ], [ "Kuperstein", "Inna", "" ], [ "Zinovyev", "Andrei", "" ], [ "Hinton", "H. Scott", "" ], [ "Bryant", "William A.", "" ], [ "Artacho", "Francisco J. Aragón", "" ], [ "Planes", "Francisco J.", "" ], [ "Stalidzans", "Egils", "" ], [ "Maass", "Alejandro", "" ], [ "Vempala", "Santosh", "" ], [ "Hucka", "Michael", "" ], [ "Saunders", "Michael A.", "" ], [ "Maranas", "Costas D.", "" ], [ "Lewis", "Nathan E.", "" ], [ "Sauter", "Thomas", "" ], [ "Palsson", "Bernhard Ø.", "" ], [ "Thiele", "Ines", "" ], [ "Fleming", "Ronan M. T.", "" ] ]
COnstraint-Based Reconstruction and Analysis (COBRA) provides a molecular mechanistic framework for integrative analysis of experimental data and quantitative prediction of physicochemically and biochemically feasible phenotypic states. The COBRA Toolbox is a comprehensive software suite of interoperable COBRA methods. It has found widespread applications in biology, biomedicine, and biotechnology because its functions can be flexibly combined to implement tailored COBRA protocols for any biochemical network. Version 3.0 includes new methods for quality controlled reconstruction, modelling, topological analysis, strain and experimental design, network visualisation as well as network integration of chemoinformatic, metabolomic, transcriptomic, proteomic, and thermochemical data. New multi-lingual code integration also enables an expansion in COBRA application scope via high-precision, high-performance, and nonlinear numerical optimisation solvers for multi-scale, multi-cellular and reaction kinetic modelling, respectively. This protocol can be adapted for the generation and analysis of a constraint-based model in a wide variety of molecular systems biology scenarios. This protocol is an update to the COBRA Toolbox 1.0 and 2.0. The COBRA Toolbox 3.0 provides an unparalleled depth of constraint-based reconstruction and analysis methods.
2305.19154
Aniello Lampo
Jos\'e Camacho-Mateu, Aniello Lampo, Matteo Sireci, Miguel \'Angel Mu\~noz, Jos\'e A. Cuesta
Sparse species interactions reproduce abundance correlation patterns in microbial communities
null
PNAS Vol. 121 (5) e2309575121 (2024)
10.1073/pnas.2309575121
null
q-bio.PE math.ST q-bio.QM stat.TH
http://creativecommons.org/licenses/by/4.0/
During the last decades macroecology has identified broad-scale patterns of abundances and diversity of microbial communities and put forward some potential explanations for them. However, these advances are not paralleled by a full understanding of the dynamical processes behind them. In particular, abundance fluctuations of different species are found to be correlated, both across time and across communities in metagenomic samples. Reproducing such correlations through appropriate population models remains an open challenge. The present paper tackles this problem and points to sparse species interactions as a necessary mechanism to account for them. Specifically, we discuss several possibilities to include interactions in population models and recognize Lotka-Volterra constants as a successful ansatz. For this, we design a Bayesian inference algorithm to extract sets of interaction constants able to reproduce empirical probability distributions of pairwise correlations for diverse biomes. Importantly, the inferred models still reproduce well-known single-species macroecological patterns concerning abundance fluctuations across both species and communities. Endorsed by the agreement with the empirically observed phenomenology, our analyses provide insights on the properties of the networks of microbial interactions, revealing that sparsity is a crucial feature.
[ { "created": "Tue, 30 May 2023 15:57:08 GMT", "version": "v1" }, { "created": "Wed, 31 May 2023 10:10:03 GMT", "version": "v2" }, { "created": "Wed, 7 Jun 2023 14:53:51 GMT", "version": "v3" }, { "created": "Mon, 12 Jun 2023 12:39:40 GMT", "version": "v4" }, { "created": "Sun, 12 Nov 2023 12:30:59 GMT", "version": "v5" }, { "created": "Wed, 6 Dec 2023 09:34:38 GMT", "version": "v6" } ]
2024-01-26
[ [ "Camacho-Mateu", "José", "" ], [ "Lampo", "Aniello", "" ], [ "Sireci", "Matteo", "" ], [ "Muñoz", "Miguel Ángel", "" ], [ "Cuesta", "José A.", "" ] ]
During the last decades macroecology has identified broad-scale patterns of abundances and diversity of microbial communities and put forward some potential explanations for them. However, these advances are not paralleled by a full understanding of the dynamical processes behind them. In particular, abundance fluctuations of different species are found to be correlated, both across time and across communities in metagenomic samples. Reproducing such correlations through appropriate population models remains an open challenge. The present paper tackles this problem and points to sparse species interactions as a necessary mechanism to account for them. Specifically, we discuss several possibilities to include interactions in population models and recognize Lotka-Volterra constants as a successful ansatz. For this, we design a Bayesian inference algorithm to extract sets of interaction constants able to reproduce empirical probability distributions of pairwise correlations for diverse biomes. Importantly, the inferred models still reproduce well-known single-species macroecological patterns concerning abundance fluctuations across both species and communities. Endorsed by the agreement with the empirically observed phenomenology, our analyses provide insights on the properties of the networks of microbial interactions, revealing that sparsity is a crucial feature.
1905.12405
Ammu P K
Ammu Prasanna Kumar and Suryani Lukman
Review: dual benefits, compositions, recommended storage, and intake duration of mother's milk
70 pages, 1 Figure
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Breastfeeding benefits both infants and mothers. Nutrients in mother's milk help protect infants from multiple diseases including infections, cancers, diabetes, and gastrointestinal and respiratory diseases. We performed literature mining on 31,496 mother's-milk-related abstracts from PubMed, and the results suggest the need for individualized mother's milk fortification and proper maternal supplementation (e.g. probiotics, vitamin D), because mother's milk compositions (e.g. fatty acids) vary according to maternal diet and responses to infection in mothers and/or infants. We review in detail the variability observed in mother's milk compositions and its possible health effects in infants. We also review the effects of storage practices on mother's milk nutrients, recommended durations for mother's milk intake, and the associated health benefits.
[ { "created": "Tue, 28 May 2019 16:42:04 GMT", "version": "v1" } ]
2019-05-30
[ [ "Kumar", "Ammu Prasanna", "" ], [ "Lukman", "Suryani", "" ] ]
Breastfeeding benefits both infants and mothers. Nutrients in mother's milk help protect infants from multiple diseases including infections, cancers, diabetes, and gastrointestinal and respiratory diseases. We performed literature mining on 31,496 mother's-milk-related abstracts from PubMed, and the results suggest the need for individualized mother's milk fortification and proper maternal supplementation (e.g. probiotics, vitamin D), because mother's milk compositions (e.g. fatty acids) vary according to maternal diet and responses to infection in mothers and/or infants. We review in detail the variability observed in mother's milk compositions and its possible health effects in infants. We also review the effects of storage practices on mother's milk nutrients, recommended durations for mother's milk intake, and the associated health benefits.
2403.13302
Vasiliki Bitsouni
Vasiliki Bitsouni, Nikolaos Gialelis, Ioannis G. Stratis, Vasilis Tsilidis
From primary HPV infection to carcinoma in situ: a mathematical approach of cervical intraepithelial neoplasia
null
null
null
null
q-bio.QM cs.NA math.AP math.NA
http://creativecommons.org/licenses/by/4.0/
Cervical intraepithelial neoplasia (CIN) is the development of abnormal cells on the surface of the cervix, caused by a human papillomavirus (HPV) infection. Although in most cases it is resolved by the immune system, a small percentage of people may develop a more serious CIN which, if left untreated, can develop into cervical cancer. Cervical cancer is the fourth most common cancer in women globally, for which the World Health Organization (WHO) recently adopted the Global Strategy for cervical cancer elimination by 2030. With this research topic being more imperative than ever, in this paper we develop a nonlinear mathematical model describing CIN progression. The model consists of partial differential equations describing the dynamics of epithelial, dysplastic and immune cells, as well as the dynamics of viral particles. We use our model to explore numerically three important factors of dysplasia progression, namely the geometry of the cervix, the strength of the immune response and the frequency of viral exposure.
[ { "created": "Wed, 20 Mar 2024 04:57:38 GMT", "version": "v1" } ]
2024-03-21
[ [ "Bitsouni", "Vasiliki", "" ], [ "Gialelis", "Nikolaos", "" ], [ "Stratis", "Ioannis G.", "" ], [ "Tsilidis", "Vasilis", "" ] ]
Cervical intraepithelial neoplasia (CIN) is the development of abnormal cells on the surface of the cervix, caused by a human papillomavirus (HPV) infection. Although in most cases it is resolved by the immune system, a small percentage of people may develop a more serious CIN which, if left untreated, can develop into cervical cancer. Cervical cancer is the fourth most common cancer in women globally, for which the World Health Organization (WHO) recently adopted the Global Strategy for cervical cancer elimination by 2030. With this research topic being more imperative than ever, in this paper we develop a nonlinear mathematical model describing CIN progression. The model consists of partial differential equations describing the dynamics of epithelial, dysplastic and immune cells, as well as the dynamics of viral particles. We use our model to explore numerically three important factors of dysplasia progression, namely the geometry of the cervix, the strength of the immune response and the frequency of viral exposure.
2303.06695
Dana Azouri
Dana Azouri, Oz Granit, Michael Alburquerque, Yishay Mansour, Tal Pupko and Itay Mayrose
The tree reconstruction game: phylogenetic reconstruction using reinforcement learning
* Equal contribution
null
null
null
q-bio.PE cs.AI
http://creativecommons.org/licenses/by/4.0/
We propose a reinforcement-learning algorithm to tackle the challenge of reconstructing phylogenetic trees. The search for the tree that best describes the data is algorithmically challenging, thus all current algorithms for phylogeny reconstruction use various heuristics to make it feasible. In this study, we demonstrate that reinforcement learning can be used to learn an optimal search strategy, thus providing a novel paradigm for predicting the maximum-likelihood tree. Our proposed method does not require likelihood calculation with every step, nor is it limited to greedy uphill moves in the likelihood space. We demonstrate the use of the developed deep-Q-learning agent on a set of unseen empirical data, namely, on unseen environments defined by nucleotide alignments of up to 20 sequences. Our results show that the likelihood scores of the inferred phylogenies are similar to those obtained from widely-used software. It thus establishes a proof-of-concept that it is beneficial to optimize a sequence of moves in the search-space, rather than optimizing the progress made in every single move only. This suggests that a reinforcement-learning based method provides a promising direction for phylogenetic reconstruction.
[ { "created": "Sun, 12 Mar 2023 16:19:06 GMT", "version": "v1" } ]
2023-03-14
[ [ "Azouri", "Dana", "" ], [ "Granit", "Oz", "" ], [ "Alburquerque", "Michael", "" ], [ "Mansour", "Yishay", "" ], [ "Pupko", "Tal", "" ], [ "Mayrose", "Itay", "" ] ]
We propose a reinforcement-learning algorithm to tackle the challenge of reconstructing phylogenetic trees. The search for the tree that best describes the data is algorithmically challenging, thus all current algorithms for phylogeny reconstruction use various heuristics to make it feasible. In this study, we demonstrate that reinforcement learning can be used to learn an optimal search strategy, thus providing a novel paradigm for predicting the maximum-likelihood tree. Our proposed method does not require likelihood calculation with every step, nor is it limited to greedy uphill moves in the likelihood space. We demonstrate the use of the developed deep-Q-learning agent on a set of unseen empirical data, namely, on unseen environments defined by nucleotide alignments of up to 20 sequences. Our results show that the likelihood scores of the inferred phylogenies are similar to those obtained from widely-used software. It thus establishes a proof-of-concept that it is beneficial to optimize a sequence of moves in the search-space, rather than optimizing the progress made in every single move only. This suggests that a reinforcement-learning based method provides a promising direction for phylogenetic reconstruction.
1906.00728
Stephen Fleming
Stephen M. Fleming
Awareness as inference in a higher-order state space
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans have the ability to report the contents of their subjective experience - we can say to each other, "I am aware of X". The decision processes that support these reports about mental contents remain poorly understood. In this article I propose a computational framework that characterises awareness reports as metacognitive decisions (inference) about a generative model of perceptual content. This account is motivated from the perspective of how flexible hierarchical state spaces are built during learning and decision-making. Internal states supporting awareness reports, unlike those covarying with perceptual contents, are simple and abstract, varying along a one-dimensional continuum from absent to present. A critical feature of this architecture is that it is both higher-order and asymmetric: a vast number of perceptual states is nested under "present", but a much smaller number of possible states nested under "absent". Via simulations I show that this asymmetry provides a natural account of observations of "global ignition" in brain imaging studies of awareness reports.
[ { "created": "Fri, 31 May 2019 10:26:26 GMT", "version": "v1" }, { "created": "Mon, 30 Sep 2019 13:21:13 GMT", "version": "v2" }, { "created": "Tue, 3 Dec 2019 11:24:09 GMT", "version": "v3" } ]
2019-12-04
[ [ "Fleming", "Stephen M.", "" ] ]
Humans have the ability to report the contents of their subjective experience - we can say to each other, "I am aware of X". The decision processes that support these reports about mental contents remain poorly understood. In this article I propose a computational framework that characterises awareness reports as metacognitive decisions (inference) about a generative model of perceptual content. This account is motivated from the perspective of how flexible hierarchical state spaces are built during learning and decision-making. Internal states supporting awareness reports, unlike those covarying with perceptual contents, are simple and abstract, varying along a one-dimensional continuum from absent to present. A critical feature of this architecture is that it is both higher-order and asymmetric: a vast number of perceptual states is nested under "present", but a much smaller number of possible states nested under "absent". Via simulations I show that this asymmetry provides a natural account of observations of "global ignition" in brain imaging studies of awareness reports.
1604.00002
Eduardo Cocca Padovani
Eduardo C. Padovani
Characterization of Large-Scale Functional Brain Networks During Ketamine-Medetomidine Anesthetic Induction
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several experiments provide evidence that specialized brain regions functionally interact and reveal that the brain processes and integrates information in a specific and structured manner. Networks can be applied to model brain functional activities, providing means to characterize and quantify this structured form of organization. Reports substantiate that different physiological states or diseases that affect the central nervous system may be associated with alterations in these networks, which might be reflected in graphs of different architectures. However, the relationship between their structure and the organism's distinct physiological conditions is poorly comprehended. Therefore, experiments that estimate the functional neural networks of subjects exposed to different controlled conditions are highly relevant. Within this context, this research has sought to model large-scale functional brain networks during an anesthetic induction process. The experiment was based on intra-cranial recordings of the neural activities of an old-world macaque of the species Macaca fuscata. Neural activity was recorded during a Ketamine-Medetomidine anesthetic induction process, and networks were estimated sequentially in five-second intervals. One and a half minutes after administering the anesthetics, changes occurred in various network properties, revealing a transition in the network architecture. During general anesthesia, functional connectivity and network integration capabilities were reduced at both local and global levels. Additionally, it has been verified that the brain shifted to a highly specific and dynamic state. The results provide empirical evidence and report the relationship between the induced state of anesthesia and functional network properties, contributing to the elucidation of novel aspects of the neural correlates of consciousness.
[ { "created": "Thu, 31 Mar 2016 19:12:04 GMT", "version": "v1" }, { "created": "Thu, 1 Dec 2016 13:44:09 GMT", "version": "v2" }, { "created": "Mon, 4 Sep 2023 21:52:44 GMT", "version": "v3" } ]
2023-09-06
[ [ "Padovani", "Eduardo C.", "" ] ]
Several experiments provide evidence that specialized brain regions functionally interact and reveal that the brain processes and integrates information in a specific and structured manner. Networks can be applied to model brain functional activities, providing means to characterize and quantify this structured form of organization. Reports substantiate that different physiological states or diseases that affect the central nervous system may be associated with alterations in these networks, which might be reflected in graphs of different architectures. However, the relationship between their structure and the organism's distinct physiological conditions is poorly comprehended. Therefore, experiments that estimate the functional neural networks of subjects exposed to different controlled conditions are highly relevant. Within this context, this research has sought to model large-scale functional brain networks during an anesthetic induction process. The experiment was based on intra-cranial recordings of the neural activities of an old-world macaque of the species Macaca fuscata. Neural activity was recorded during a Ketamine-Medetomidine anesthetic induction process, and networks were estimated sequentially in five-second intervals. One and a half minutes after administering the anesthetics, changes occurred in various network properties, revealing a transition in the network architecture. During general anesthesia, functional connectivity and network integration capabilities were reduced at both local and global levels. Additionally, it has been verified that the brain shifted to a highly specific and dynamic state. The results provide empirical evidence and report the relationship between the induced state of anesthesia and functional network properties, contributing to the elucidation of novel aspects of the neural correlates of consciousness.
2212.12049
Benjamin Greenbaum
Andreas Mayer, Christopher J. Russo, Quentin Marcou, William Bialek, Benjamin D. Greenbaum
How different are self and nonself?
null
null
null
null
q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Biological and artificial neural networks routinely make reliable distinctions between similar inputs, and the rules for making these distinctions are learned. In some ways, self/nonself discrimination in the immune system is similar, being both reliable and (partly) learned through thymic selection. In contrast to other examples, we show that the distributions of self and nonself peptides are nearly identical but strongly inhomogeneous. Reliable discrimination is possible only because self peptides are a particular finite sample drawn out of this distribution, and the immune system can target the ``spaces'' in between these samples. In conventional learning problems, this would constitute overfitting and lead to disaster. Here, the strong inhomogeneities imply instead that the immune system gains by targeting peptides which are very similar to self, with maximum sensitivity for sequences just one substitution away. This prediction from the structure of the underlying distribution in sequence space agrees, for example, with the observed responses to cancer neoantigens.
[ { "created": "Thu, 22 Dec 2022 21:54:54 GMT", "version": "v1" } ]
2022-12-26
[ [ "Mayer", "Andreas", "" ], [ "Russo", "Christopher J.", "" ], [ "Marcou", "Quentin", "" ], [ "Bialek", "William", "" ], [ "Greenbaum", "Benjamin D.", "" ] ]
Biological and artificial neural networks routinely make reliable distinctions between similar inputs, and the rules for making these distinctions are learned. In some ways, self/nonself discrimination in the immune system is similar, being both reliable and (partly) learned through thymic selection. In contrast to other examples, we show that the distributions of self and nonself peptides are nearly identical but strongly inhomogeneous. Reliable discrimination is possible only because self peptides are a particular finite sample drawn out of this distribution, and the immune system can target the ``spaces'' in between these samples. In conventional learning problems, this would constitute overfitting and lead to disaster. Here, the strong inhomogeneities imply instead that the immune system gains by targeting peptides which are very similar to self, with maximum sensitivity for sequences just one substitution away. This prediction from the structure of the underlying distribution in sequence space agrees, for example, with the observed responses to cancer neoantigens.
q-bio/0403039
Kevin E. Cahill
Kevin Cahill
Alternative Splicing and Genomic Stability
6 pages
Physical Biology, volume 1, issue 2, pages C1 - C4, 2004
10.1088/1478-3967/1/2/C01
null
q-bio.GN q-bio.PE
null
Alternative splicing allows an organism to make different proteins in different cells at different times, all from the same gene. In a cell that uses alternative splicing, the total length of all the exons is much shorter than in a cell that encodes the same set of proteins without alternative splicing. This economical use of exons makes genes more stable during reproduction and development because a genome with a shorter exon length is more resistant to harmful mutations. Genomic stability may be the reason why higher vertebrates splice alternatively. For a broad class of alternatively spliced genes, a formula is given for the increase in their stability.
[ { "created": "Sat, 27 Mar 2004 00:31:08 GMT", "version": "v1" }, { "created": "Thu, 1 Jul 2004 05:55:13 GMT", "version": "v2" } ]
2009-11-10
[ [ "Cahill", "Kevin", "" ] ]
Alternative splicing allows an organism to make different proteins in different cells at different times, all from the same gene. In a cell that uses alternative splicing, the total length of all the exons is much shorter than in a cell that encodes the same set of proteins without alternative splicing. This economical use of exons makes genes more stable during reproduction and development because a genome with a shorter exon length is more resistant to harmful mutations. Genomic stability may be the reason why higher vertebrates splice alternatively. For a broad class of alternatively spliced genes, a formula is given for the increase in their stability.
1910.11041
Steven Kelk
Mark Jones, Philippe Gambette, Leo van Iersel, Remie Janssen, Steven Kelk, Fabio Pardi, Celine Scornavacca
Cutting an alignment with Ockham's razor
null
null
null
null
q-bio.PE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, we investigate different parsimony-based approaches towards finding recombination breakpoints in a multiple sequence alignment. This recombination detection task is crucial in order to avoid errors in evolutionary analyses caused by mixing together portions of sequences which had a different evolutionary history. Following an overview of the field of recombination detection, we formulate four computational problems for this task with different objective functions. The four problems aim to minimize (1) the total homoplasy of all blocks, (2) the maximum homoplasy per block, (3) the total homoplasy ratio of all blocks, and (4) the maximum homoplasy ratio per block. We describe algorithms for each of these problems, which are fixed-parameter tractable (FPT) when the characters are binary. We have implemented and tested the algorithms on simulated data, showing that minimizing the total homoplasy gives, in most cases, the most accurate results. Our implementation and experimental data have been made publicly available. Finally, we also consider the problem of combining blocks into non-contiguous blocks consisting of at most p contiguous parts. Fixing the homoplasy h of each block to 0, we show that this problem is NP-hard when p >= 3, but polynomial-time solvable for p = 2. Furthermore, the problem is FPT with parameter h for binary characters when p = 2. A number of interesting problems remain open.
[ { "created": "Thu, 24 Oct 2019 11:57:29 GMT", "version": "v1" } ]
2019-10-25
[ [ "Jones", "Mark", "" ], [ "Gambette", "Philippe", "" ], [ "van Iersel", "Leo", "" ], [ "Janssen", "Remie", "" ], [ "Kelk", "Steven", "" ], [ "Pardi", "Fabio", "" ], [ "Scornavacca", "Celine", "" ] ]
In this article, we investigate different parsimony-based approaches towards finding recombination breakpoints in a multiple sequence alignment. This recombination detection task is crucial in order to avoid errors in evolutionary analyses caused by mixing together portions of sequences which had a different evolutionary history. Following an overview of the field of recombination detection, we formulate four computational problems for this task with different objective functions. The four problems aim to minimize (1) the total homoplasy of all blocks, (2) the maximum homoplasy per block, (3) the total homoplasy ratio of all blocks, and (4) the maximum homoplasy ratio per block. We describe algorithms for each of these problems, which are fixed-parameter tractable (FPT) when the characters are binary. We have implemented and tested the algorithms on simulated data, showing that minimizing the total homoplasy gives, in most cases, the most accurate results. Our implementation and experimental data have been made publicly available. Finally, we also consider the problem of combining blocks into non-contiguous blocks consisting of at most p contiguous parts. Fixing the homoplasy h of each block to 0, we show that this problem is NP-hard when p >= 3, but polynomial-time solvable for p = 2. Furthermore, the problem is FPT with parameter h for binary characters when p = 2. A number of interesting problems remain open.
2111.01009
Benson Chen
Benson Chen, Xiang Fu, Regina Barzilay, Tommi Jaakkola
Fragment-based Sequential Translation for Molecular Optimization
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Searching for novel molecular compounds with desired properties is an important problem in drug discovery. Many existing frameworks generate molecules one atom at a time. We instead propose a flexible editing paradigm that generates molecules using learned molecular fragments--meaningful substructures of molecules. To do so, we train a variational autoencoder (VAE) to encode molecular fragments in a coherent latent space, which we then utilize as a vocabulary for editing molecules to explore the complex chemical property space. Equipped with the learned fragment vocabulary, we propose Fragment-based Sequential Translation (FaST), which learns a reinforcement learning (RL) policy to iteratively translate model-discovered molecules into increasingly novel molecules while satisfying desired properties. Empirical evaluation shows that FaST significantly improves over state-of-the-art methods on benchmark single/multi-objective molecular optimization tasks.
[ { "created": "Tue, 26 Oct 2021 21:20:54 GMT", "version": "v1" } ]
2021-11-02
[ [ "Chen", "Benson", "" ], [ "Fu", "Xiang", "" ], [ "Barzilay", "Regina", "" ], [ "Jaakkola", "Tommi", "" ] ]
Searching for novel molecular compounds with desired properties is an important problem in drug discovery. Many existing frameworks generate molecules one atom at a time. We instead propose a flexible editing paradigm that generates molecules using learned molecular fragments--meaningful substructures of molecules. To do so, we train a variational autoencoder (VAE) to encode molecular fragments in a coherent latent space, which we then utilize as a vocabulary for editing molecules to explore the complex chemical property space. Equipped with the learned fragment vocabulary, we propose Fragment-based Sequential Translation (FaST), which learns a reinforcement learning (RL) policy to iteratively translate model-discovered molecules into increasingly novel molecules while satisfying desired properties. Empirical evaluation shows that FaST significantly improves over state-of-the-art methods on benchmark single/multi-objective molecular optimization tasks.
2111.07919
Ga\"etan Vignoud
Philippe Robert and Ga\"etan Vignoud
On the Spontaneous Dynamics of Synaptic Weights in Stochastic Models with Pair-Based STDP
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate spike-timing dependent plasticity (STDP) in the case of a synapse connecting two neural cells. We develop a theoretical analysis of several STDP rules using Markovian theory. In this context there are two different timescales, fast neural activity and slower synaptic weight updates. Exploiting this timescale separation, we derive the long-time limits of a single synaptic weight subject to STDP. We show that the pairing model of presynaptic and postsynaptic spikes controls the synaptic weight dynamics for small external input, on an excitatory synapse. This result implies in particular that mean-field analysis of plasticity may miss some important properties of STDP. Anti-Hebbian STDP seems to favor the emergence of a stable synaptic weight, but only for high external input. In the case of an inhibitory synapse the pairing schemes matter less, and we observe convergence of the synaptic weight to a non-null value only for Hebbian STDP. We extensively study different asymptotic regimes for STDP rules, raising interesting questions for future works on adaptive neural networks and, more generally, on adaptive systems.
[ { "created": "Mon, 15 Nov 2021 17:13:05 GMT", "version": "v1" } ]
2021-11-16
[ [ "Robert", "Philippe", "" ], [ "Vignoud", "Gaëtan", "" ] ]
We investigate spike-timing dependent plasticity (STDP) in the case of a synapse connecting two neural cells. We develop a theoretical analysis of several STDP rules using Markovian theory. In this context there are two different timescales, fast neural activity and slower synaptic weight updates. Exploiting this timescale separation, we derive the long-time limits of a single synaptic weight subject to STDP. We show that the pairing model of presynaptic and postsynaptic spikes controls the synaptic weight dynamics for small external input, on an excitatory synapse. This result implies in particular that mean-field analysis of plasticity may miss some important properties of STDP. Anti-Hebbian STDP seems to favor the emergence of a stable synaptic weight, but only for high external input. In the case of an inhibitory synapse the pairing schemes matter less, and we observe convergence of the synaptic weight to a non-null value only for Hebbian STDP. We extensively study different asymptotic regimes for STDP rules, raising interesting questions for future works on adaptive neural networks and, more generally, on adaptive systems.
1606.08567
Bruno. Cessac
B. Cessac, A. Le Ny, E. Loecherbach
On the mathematical consequences of binning spike trains
22 pages, 6 figures
Neural Computation, January 2017, Vol. 29, No. 1, Pages 146-170
null
null
q-bio.NC math-ph math.MP physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We initiate a mathematical analysis of hidden effects induced by binning spike trains of neurons. Assuming that the original spike train has been generated by a discrete Markov process, we show that binning generates a stochastic process which is not Markov any more, but is instead a Variable Length Markov Chain (VLMC) with unbounded memory. We also show that the law of the binned raster is a Gibbs measure in the DLR (Dobrushin-Lanford-Ruelle) sense coined in mathematical statistical mechanics. This allows the derivation of several important consequences on statistical properties of binned spike trains. In particular, we introduce the DLR framework as a natural setting to mathematically formalize anticipation, i.e. to tell "how good" our nervous system is at making predictions. In a probabilistic sense, this corresponds to conditioning a process on its future and we discuss how binning may affect our conclusions on this ability. We finally comment on what could be the consequences of binning in the detection of spurious phase transitions or in the detection of wrong evidence of criticality.
[ { "created": "Tue, 28 Jun 2016 06:12:31 GMT", "version": "v1" } ]
2017-11-28
[ [ "Cessac", "B.", "" ], [ "Ny", "A. Le", "" ], [ "Loecherbach", "E.", "" ] ]
We initiate a mathematical analysis of hidden effects induced by binning spike trains of neurons. Assuming that the original spike train has been generated by a discrete Markov process, we show that binning generates a stochastic process which is not Markov any more, but is instead a Variable Length Markov Chain (VLMC) with unbounded memory. We also show that the law of the binned raster is a Gibbs measure in the DLR (Dobrushin-Lanford-Ruelle) sense coined in mathematical statistical mechanics. This allows the derivation of several important consequences on statistical properties of binned spike trains. In particular, we introduce the DLR framework as a natural setting to mathematically formalize anticipation, i.e. to tell "how good" our nervous system is at making predictions. In a probabilistic sense, this corresponds to conditioning a process on its future and we discuss how binning may affect our conclusions on this ability. We finally comment on what could be the consequences of binning in the detection of spurious phase transitions or in the detection of wrong evidence of criticality.
2405.20591
Daniel Messenger
Daniel Messenger and Greg Dwyer and Vanja Dukic
Weak-Form Inference for Hybrid Dynamical Systems in Ecology
null
null
null
null
q-bio.PE cs.LG math.DS
http://creativecommons.org/licenses/by/4.0/
Species subject to predation and environmental threats commonly exhibit variable periods of population boom and bust over long timescales. Understanding and predicting such behavior, especially given the inherent heterogeneity and stochasticity of exogenous driving factors over short timescales, is an ongoing challenge. A modeling paradigm gaining popularity in the ecological sciences for such multi-scale effects is to couple short-term continuous dynamics to long-term discrete updates. We develop a data-driven method utilizing weak-form equation learning to extract such hybrid governing equations for population dynamics and to estimate the requisite parameters using sparse intermittent measurements of the discrete and continuous variables. The method produces a set of short-term continuous dynamical system equations parametrized by long-term variables, and long-term discrete equations parametrized by short-term variables, allowing direct assessment of interdependencies between the two time scales. We demonstrate the utility of the method on a variety of ecological scenarios and provide extensive tests using models previously derived for epizootics experienced by the North American spongy moth (Lymantria dispar dispar).
[ { "created": "Fri, 31 May 2024 03:03:27 GMT", "version": "v1" } ]
2024-06-03
[ [ "Messenger", "Daniel", "" ], [ "Dwyer", "Greg", "" ], [ "Dukic", "Vanja", "" ] ]
Species subject to predation and environmental threats commonly exhibit variable periods of population boom and bust over long timescales. Understanding and predicting such behavior, especially given the inherent heterogeneity and stochasticity of exogenous driving factors over short timescales, is an ongoing challenge. A modeling paradigm gaining popularity in the ecological sciences for such multi-scale effects is to couple short-term continuous dynamics to long-term discrete updates. We develop a data-driven method utilizing weak-form equation learning to extract such hybrid governing equations for population dynamics and to estimate the requisite parameters using sparse intermittent measurements of the discrete and continuous variables. The method produces a set of short-term continuous dynamical system equations parametrized by long-term variables, and long-term discrete equations parametrized by short-term variables, allowing direct assessment of interdependencies between the two time scales. We demonstrate the utility of the method on a variety of ecological scenarios and provide extensive tests using models previously derived for epizootics experienced by the North American spongy moth (Lymantria dispar dispar).
1904.11633
Alicia Dickenstein
Magal\'i Giaroli, Rick Rischter, Mercedes P\'erez Mill\'an, Alicia Dickenstein
Parameter regions that give rise to 2[n/2]+1 positive steady states in the n-site phosphorylation system
20 pages. The maple and SAGE files with our computations are available at: http://mate.dm.uba.ar/~alidick/DGRPMFiles/
null
null
null
q-bio.MN math.AG math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The distributive sequential n-site phosphorylation/dephosphorylation system is an important building block in networks of chemical reactions arising in molecular biology, which has been intensively studied. In the nice paper of Wang and Sontag (2008) it is shown that for certain choices of the reaction rate constants and total conservation constants, the system can have 2[n/2]+1 positive steady states (that is, n+1 positive steady states for n even and n positive steady states for n odd). In this paper we give open parameter regions in the space of reaction rate constants and total conservation constants that ensure these number of positive steady states, while assuming in the modeling that roughly only 1/4 of the intermediates occur in the reaction mechanism. This result is based on the general framework developed by Bihan, Dickenstein, and Giaroli (2018), which can be applied to other networks. We also describe how to implement these tools to search for multistationarity regions in a computer algebra system and present some computer aided results.
[ { "created": "Fri, 26 Apr 2019 00:58:35 GMT", "version": "v1" } ]
2019-04-29
[ [ "Giaroli", "Magalí", "" ], [ "Rischter", "Rick", "" ], [ "Millán", "Mercedes Pérez", "" ], [ "Dickenstein", "Alicia", "" ] ]
The distributive sequential n-site phosphorylation/dephosphorylation system is an important building block in networks of chemical reactions arising in molecular biology, which has been intensively studied. In the nice paper of Wang and Sontag (2008) it is shown that for certain choices of the reaction rate constants and total conservation constants, the system can have 2[n/2]+1 positive steady states (that is, n+1 positive steady states for n even and n positive steady states for n odd). In this paper we give open parameter regions in the space of reaction rate constants and total conservation constants that ensure these number of positive steady states, while assuming in the modeling that roughly only 1/4 of the intermediates occur in the reaction mechanism. This result is based on the general framework developed by Bihan, Dickenstein, and Giaroli (2018), which can be applied to other networks. We also describe how to implement these tools to search for multistationarity regions in a computer algebra system and present some computer aided results.
1707.03957
Pankaj Mehta
Madhu Advani, Guy Bunin, Pankaj Mehta
Environmental engineering is an emergent feature of diverse ecosystems and drives community structure
14 pages, 5 figures
null
null
null
q-bio.PE cond-mat.dis-nn cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A central question in ecology is to understand the ecological processes that shape community structure. Niche-based theories have emphasized the important role played by competition for maintaining species diversity. Many of these insights have been derived using MacArthur's consumer resource model (MCRM) or its generalizations. Most theoretical work on the MCRM has focused on small ecosystems with a few species and resources. However theoretical insights derived from small ecosystems may not scale up to large ecosystems with many resources and species because large systems with many interacting components often display new emergent behaviors that cannot be understood or deduced from analyzing smaller systems. To address this shortcoming, we develop a sophisticated statistical physics inspired cavity method to analyze MCRM when both the number of species and the number of resources is large. We find that in this limit, species generically and consistently perturb their environments and significantly modify available ecological niches. We show how our cavity approach naturally generalizes niche theory to large ecosystems by accounting for the effect of this emergent environmental engineering on species invasion and ecological stability. Our work suggests that environmental engineering is a generic feature of large, natural ecosystems and must be taken into account when analyzing and interpreting community structure. It also highlights the important role that statistical-physics inspired approaches can play in furthering our understanding of ecology.
[ { "created": "Thu, 13 Jul 2017 02:19:00 GMT", "version": "v1" } ]
2017-07-14
[ [ "Advani", "Madhu", "" ], [ "Bunin", "Guy", "" ], [ "Mehta", "Pankaj", "" ] ]
A central question in ecology is to understand the ecological processes that shape community structure. Niche-based theories have emphasized the important role played by competition for maintaining species diversity. Many of these insights have been derived using MacArthur's consumer resource model (MCRM) or its generalizations. Most theoretical work on the MCRM has focused on small ecosystems with a few species and resources. However theoretical insights derived from small ecosystems may not scale up to large ecosystems with many resources and species because large systems with many interacting components often display new emergent behaviors that cannot be understood or deduced from analyzing smaller systems. To address this shortcoming, we develop a sophisticated statistical physics inspired cavity method to analyze MCRM when both the number of species and the number of resources is large. We find that in this limit, species generically and consistently perturb their environments and significantly modify available ecological niches. We show how our cavity approach naturally generalizes niche theory to large ecosystems by accounting for the effect of this emergent environmental engineering on species invasion and ecological stability. Our work suggests that environmental engineering is a generic feature of large, natural ecosystems and must be taken into account when analyzing and interpreting community structure. It also highlights the important role that statistical-physics inspired approaches can play in furthering our understanding of ecology.
2208.10108
Sayantan Nag Chowdhury
Sourin Chatterjee, Sayantan Nag Chowdhury, Dibakar Ghosh, and Chittaranjan Hens
Controlling species densities in structurally perturbed intransitive cycles with higher-order interactions
17 pages, 10 figures
null
10.1063/5.0102599
null
q-bio.PE nlin.AO
http://creativecommons.org/licenses/by/4.0/
The persistence of biodiversity of species is a challenging proposition in ecological communities in the face of Darwinian selection. The present article investigates beyond the pairwise competitive interactions and provides a novel perspective for understanding the influence of higher-order interactions on the evolution of social phenotypes. Our simple model yields a prosperous outlook to demonstrate the impact of perturbations on intransitive competitive higher-order interactions. Using a mathematical technique, we show how the perturbed interaction network alone can quickly determine the coexistence equilibrium of competing species instead of solving a large system of ordinary differential equations. It is possible to split the system into multiple feasible cluster states depending on the number of perturbations. Our analysis also reveals that the ratio between the unperturbed and perturbed species is inversely proportional to the amount of employed perturbation. Our results suggest that nonlinear dynamical systems and interaction topologies can be interplayed to comprehend species' coexistence under adverse conditions. In particular, our findings signify that less competition between two species increases their abundance and allows them to outperform others.
[ { "created": "Mon, 22 Aug 2022 07:31:07 GMT", "version": "v1" } ]
2022-11-09
[ [ "Chatterjee", "Sourin", "" ], [ "Chowdhury", "Sayantan Nag", "" ], [ "Ghosh", "Dibakar", "" ], [ "Hens", "Chittaranjan", "" ] ]
The persistence of biodiversity of species is a challenging proposition in ecological communities in the face of Darwinian selection. The present article investigates beyond the pairwise competitive interactions and provides a novel perspective for understanding the influence of higher-order interactions on the evolution of social phenotypes. Our simple model yields a prosperous outlook to demonstrate the impact of perturbations on intransitive competitive higher-order interactions. Using a mathematical technique, we show how the perturbed interaction network alone can quickly determine the coexistence equilibrium of competing species instead of solving a large system of ordinary differential equations. It is possible to split the system into multiple feasible cluster states depending on the number of perturbations. Our analysis also reveals that the ratio between the unperturbed and perturbed species is inversely proportional to the amount of employed perturbation. Our results suggest that nonlinear dynamical systems and interaction topologies can be interplayed to comprehend species' coexistence under adverse conditions. In particular, our findings signify that less competition between two species increases their abundance and allows them to outperform others.
0906.2452
Liaofu Luo
Liaofu Luo
Protein Folding as a Quantum Transition Between Conformational States
18 pages
null
null
null
q-bio.BM q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The importance of torsion vibration in the transmission of life information is indicated. The localization of quantum torsion state is proved. Following these analyses a formalism on the quantum theory of conformation-electron system is proposed. The conformational-electronic transition is calculated by non-adiabatic operator method. The protein folding is viewed from conformational quantum transition and the folding rate is calculated. The time-scale of microsecond to millisecond for the fundamental folding event (nucleation, collapse, etc) is deduced. The dependence of transition rate W on N inertial moments is given. It indicates how W increases with the number N of torsion angles and decreases with the inertial moment I of atomic group in cooperative transition. The temperature dependence is also deduced, which differs from that of chemical reactions in the high-temperature region. It is demonstrated that the conformational dynamics gives deep insights into the folding mechanism and provides a useful tool for analyzing and explaining experimental facts on the rate of protein folding.
[ { "created": "Sat, 13 Jun 2009 08:18:31 GMT", "version": "v1" } ]
2009-06-16
[ [ "Luo", "Liaofu", "" ] ]
The importance of torsion vibration in the transmission of life information is indicated. The localization of quantum torsion state is proved. Following these analyses a formalism on the quantum theory of conformation-electron system is proposed. The conformational-electronic transition is calculated by non-adiabatic operator method. The protein folding is viewed from conformational quantum transition and the folding rate is calculated. The time-scale of microsecond to millisecond for the fundamental folding event (nucleation, collapse, etc) is deduced. The dependence of transition rate W on N inertial moments is given. It indicates how W increases with the number N of torsion angles and decreases with the inertial moment I of atomic group in cooperative transition. The temperature dependence is also deduced, which differs from that of chemical reactions in the high-temperature region. It is demonstrated that the conformational dynamics gives deep insights into the folding mechanism and provides a useful tool for analyzing and explaining experimental facts on the rate of protein folding.
q-bio/0702054
Jean-Philippe Vert
Yoshihiro Yamanishi (KEGG), Jean-Philippe Vert (CB)
Kernel matrix regression
null
null
null
null
q-bio.QM math.ST stat.TH
null
We address the problem of filling missing entries in a kernel Gram matrix, given a related full Gram matrix. We attack this problem from the viewpoint of regression, assuming that the two kernel matrices can be considered as explanatory variables and response variables, respectively. We propose a variant of the regression model based on the underlying features in the reproducing kernel Hilbert space by modifying the idea of kernel canonical correlation analysis, and we estimate the missing entries by fitting this model to the existing samples. We obtain promising experimental results on gene network inference and protein 3D structure prediction from genomic datasets. We also discuss the relationship with the em-algorithm based on information geometry.
[ { "created": "Mon, 26 Feb 2007 07:19:34 GMT", "version": "v1" } ]
2011-11-10
[ [ "Yamanishi", "Yoshihiro", "", "KEGG" ], [ "Vert", "Jean-Philippe", "", "CB" ] ]
We address the problem of filling missing entries in a kernel Gram matrix, given a related full Gram matrix. We attack this problem from the viewpoint of regression, assuming that the two kernel matrices can be considered as explanatory variables and response variables, respectively. We propose a variant of the regression model based on the underlying features in the reproducing kernel Hilbert space by modifying the idea of kernel canonical correlation analysis, and we estimate the missing entries by fitting this model to the existing samples. We obtain promising experimental results on gene network inference and protein 3D structure prediction from genomic datasets. We also discuss the relationship with the em-algorithm based on information geometry.
1612.08759
Juan B Gutierrez
Yi H. Yan, Elizabeth D. Trippe, Juan B. Gutierrez
A Method for Massively Parallel Analysis of Time Series
18 pages, 8 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantification of system-wide perturbations from time series -omic data (i.e. a large number of variables with multiple measures in time) provides the basis for many downstream hypothesis generating tools. Here we propose a method, Massively Parallel Analysis of Time Series (MPATS) that can be applied to quantify transcriptome-wide perturbations. The proposed method characterizes each individual time series through its $\ell_1$ distance to every other time series. Application of MPATS to compare biological conditions produces a ranked list of time series based on their magnitude of differences in their $\ell_1$ representation, which then can be further interpreted through enrichment analysis. The performance of MPATS was validated through its application to a study of IFN$\alpha$ dendritic cell responses to viral and bacterial infection. In conjunction with Gene Set Enrichment Analysis (GSEA), MPATS consistently identified signature gene sets of anti-bacterial and anti-viral response. Traditional methods such as EDGE and GSEA Time Series (GSEA-TS) failed to identify the relevant signature gene sets. Furthermore, the results of MPATS highlighted the crucial functional difference between STAT1/STAT2 during anti-viral and anti-bacterial response. In our simulation study, MPATS exhibited acceptable performance with small group size (n = 3), when the appropriate effect size is considered. This method can be easily adopted for other -omic data types.
[ { "created": "Tue, 27 Dec 2016 21:43:37 GMT", "version": "v1" } ]
2016-12-30
[ [ "Yan", "Yi H.", "" ], [ "Trippe", "Elizabeth D.", "" ], [ "Gutierrez", "Juan B.", "" ] ]
Quantification of system-wide perturbations from time series -omic data (i.e. a large number of variables with multiple measures in time) provides the basis for many downstream hypothesis generating tools. Here we propose a method, Massively Parallel Analysis of Time Series (MPATS) that can be applied to quantify transcriptome-wide perturbations. The proposed method characterizes each individual time series through its $\ell_1$ distance to every other time series. Application of MPATS to compare biological conditions produces a ranked list of time series based on their magnitude of differences in their $\ell_1$ representation, which then can be further interpreted through enrichment analysis. The performance of MPATS was validated through its application to a study of IFN$\alpha$ dendritic cell responses to viral and bacterial infection. In conjunction with Gene Set Enrichment Analysis (GSEA), MPATS consistently identified signature gene sets of anti-bacterial and anti-viral response. Traditional methods such as EDGE and GSEA Time Series (GSEA-TS) failed to identify the relevant signature gene sets. Furthermore, the results of MPATS highlighted the crucial functional difference between STAT1/STAT2 during anti-viral and anti-bacterial response. In our simulation study, MPATS exhibited acceptable performance with small group size (n = 3), when the appropriate effect size is considered. This method can be easily adopted for other -omic data types.
1501.06342
Fernando Alcalde Cuesta
Fernando Alcalde Cuesta, Pablo Gonz\'alez Sequeiros and \'Alvaro Lozano Rojo
Fast and asymptotic computation of the fixation probability for Moran processes on graphs
Corrected typos
BioSystems 129 (2015) 25-35
10.1016/j.biosystems.2015.01.007
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolutionary dynamics has been classically studied for homogeneous populations, but there is now a growing interest in the non-homogeneous case. One of the most important models was proposed by Lieberman, Hauert and Nowak, adapting to a weighted directed graph the classical process described by Moran. The Markov chain associated with the graph can be modified by erasing all non-trivial loops in its state space, obtaining the so-called Embedded Markov chain (EMC). The fixation probability remains unchanged, but the expected time to absorption (fixation or extinction) is reduced. In this paper, we use this idea to compute asymptotically the average fixation probability for complete bipartite graphs. To this end, we first review some recent results on evolutionary dynamics on graphs, trying to clarify some points. We also revisit the 'Star Theorem' proved by Lieberman, Hauert and Nowak for star graphs. Theoretically, EMC techniques allow fast computation of the fixation probability, but in practice this is not always true. Thus, in the last part of the paper, we compare this algorithm with the standard Monte Carlo method for some kinds of complex networks.
[ { "created": "Mon, 26 Jan 2015 11:33:45 GMT", "version": "v1" }, { "created": "Wed, 11 Feb 2015 11:25:07 GMT", "version": "v2" } ]
2015-02-12
[ [ "Cuesta", "Fernando Alcalde", "" ], [ "Sequeiros", "Pablo González", "" ], [ "Rojo", "Álvaro Lozano", "" ] ]
Evolutionary dynamics has been classically studied for homogeneous populations, but there is now a growing interest in the non-homogeneous case. One of the most important models was proposed by Lieberman, Hauert and Nowak, adapting to a weighted directed graph the classical process described by Moran. The Markov chain associated with the graph can be modified by erasing all non-trivial loops in its state space, obtaining the so-called Embedded Markov chain (EMC). The fixation probability remains unchanged, but the expected time to absorption (fixation or extinction) is reduced. In this paper, we use this idea to compute asymptotically the average fixation probability for complete bipartite graphs. To this end, we first review some recent results on evolutionary dynamics on graphs, trying to clarify some points. We also revisit the 'Star Theorem' proved by Lieberman, Hauert and Nowak for star graphs. Theoretically, EMC techniques allow fast computation of the fixation probability, but in practice this is not always true. Thus, in the last part of the paper, we compare this algorithm with the standard Monte Carlo method for some kinds of complex networks.
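For the well-mixed case (a complete graph), which isothermal graphs reproduce by Lieberman, Hauert and Nowak's Isothermal Theorem, the fixation probability has a classical closed form; it makes a handy sanity check for Monte Carlo or EMC implementations. A sketch of that textbook formula (not the paper's bipartite asymptotics):

```python
def moran_fixation_probability(r, n):
    """Fixation probability of a single mutant of relative fitness r in a
    well-mixed Moran population of size n:
        rho = (1 - 1/r) / (1 - 1/r**n),
    with the neutral limit rho = 1/n when r == 1."""
    if r == 1.0:
        return 1.0 / n
    return (1.0 - 1.0 / r) / (1.0 - r ** (-n))
```

For an advantageous mutant (r > 1) in a large population, rho approaches 1 - 1/r, so a simulation estimate far from that value signals a bug.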
1909.00731
Frederic Barraquand
Frederic Barraquand, Coralie Picoche, Matteo Detto, Florian Hartig
Inferring species interactions using Granger causality and convergent cross mapping
null
null
10.1007/s12080-020-00482-7
null
q-bio.PE q-bio.QM stat.AP
http://creativecommons.org/licenses/by-nc-sa/4.0/
Identifying directed interactions between species from time series of their population densities has many uses in ecology. This key statistical task is equivalent to causal time series inference, which connects to the Granger causality (GC) concept: $x$ causes $y$ if $x$ improves the prediction of $y$ in a dynamic model. However, the entangled nature of nonlinear ecological systems has led researchers to question the appropriateness of Granger causality, especially in its classical linear Multivariate AutoRegressive (MAR) model form. Convergent cross mapping (CCM), a nonparametric method developed for deterministic dynamical systems, has been suggested as an alternative. Here, we show that linear GC and CCM are able to uncover interactions with surprisingly similar performance, for predator-prey cycles, 2-species deterministic (chaotic) or stochastic competition, as well as 10- and 20-species interaction networks. There is no correspondence between the degree of nonlinearity of the dynamics and which method performs best. Our results therefore imply that Granger causality, even in its linear MAR($p$) formulation, is a valid method for inferring interactions in nonlinear ecological networks; the choice between GC and CCM (or both) can instead be made based on the aims and specifics of the analysis.
[ { "created": "Mon, 2 Sep 2019 14:26:44 GMT", "version": "v1" }, { "created": "Fri, 31 Jul 2020 14:19:01 GMT", "version": "v2" }, { "created": "Mon, 9 Nov 2020 06:27:49 GMT", "version": "v3" } ]
2020-11-10
[ [ "Barraquand", "Frederic", "" ], [ "Picoche", "Coralie", "" ], [ "Detto", "Matteo", "" ], [ "Hartig", "Florian", "" ] ]
Identifying directed interactions between species from time series of their population densities has many uses in ecology. This key statistical task is equivalent to causal time series inference, which connects to the Granger causality (GC) concept: $x$ causes $y$ if $x$ improves the prediction of $y$ in a dynamic model. However, the entangled nature of nonlinear ecological systems has led researchers to question the appropriateness of Granger causality, especially in its classical linear Multivariate AutoRegressive (MAR) model form. Convergent cross mapping (CCM), a nonparametric method developed for deterministic dynamical systems, has been suggested as an alternative. Here, we show that linear GC and CCM are able to uncover interactions with surprisingly similar performance, for predator-prey cycles, 2-species deterministic (chaotic) or stochastic competition, as well as 10- and 20-species interaction networks. There is no correspondence between the degree of nonlinearity of the dynamics and which method performs best. Our results therefore imply that Granger causality, even in its linear MAR($p$) formulation, is a valid method for inferring interactions in nonlinear ecological networks; the choice between GC and CCM (or both) can instead be made based on the aims and specifics of the analysis.
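The GC criterion stated above — $x$ causes $y$ if lags of $x$ shrink the prediction error of $y$ — can be sketched with ordinary least squares. A toy variance-ratio illustration (a simplified heuristic, not the calibrated statistical tests or the full MAR($p$) machinery used in the paper):

```python
import numpy as np

def granger_improvement(x, y, p=1):
    """Crude linear Granger check: fit y[t] on its own p lags (restricted)
    and on its own lags plus p lags of x (full); return the ratio
    RSS(restricted) / RSS(full). Ratios well above 1 suggest that x
    improves the prediction of y."""
    n = len(y)
    Y = y[p:]
    ylags = np.column_stack([y[p - k - 1 : n - k - 1] for k in range(p)])
    xlags = np.column_stack([x[p - k - 1 : n - k - 1] for k in range(p)])
    ones = np.ones((n - p, 1))

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid

    return rss(np.hstack([ones, ylags])) / rss(np.hstack([ones, ylags, xlags]))
```

With synthetic data where x drives y at lag one, the ratio comes out well above 1 in the causal direction and stays near 1 in the reverse direction.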
2403.12117
Josua Stadelmaier
Josua Stadelmaier (University of T\"ubingen), Brandon Malone (NEC OncoImmunity), Ralf Eggeling (University of T\"ubingen)
Transfer Learning for T-Cell Response Prediction
20 pages, 9 figures. Source code, compiled data, final model, and a video presentation are available under https://github.com/JosuaStadelmaier/T-cell-response-prediction
null
null
null
q-bio.CB cs.LG
http://creativecommons.org/licenses/by/4.0/
We study the prediction of T-cell response for specific given peptides, which could, among other applications, be a crucial step towards the development of personalized cancer vaccines. It is a challenging task due to limited, heterogeneous training data featuring a multi-domain structure; such data entail the danger of shortcut learning, where models learn general characteristics of peptide sources, such as the source organism, rather than specific peptide characteristics associated with T-cell response. Using a transformer model for T-cell response prediction, we show that the danger of inflated predictive performance is not merely theoretical but occurs in practice. Consequently, we propose a domain-aware evaluation scheme. We then study different transfer learning techniques to deal with the multi-domain structure and shortcut learning. We demonstrate a per-source fine tuning approach to be effective across a wide range of peptide sources and further show that our final model outperforms existing state-of-the-art approaches for predicting T-cell responses for human peptides.
[ { "created": "Mon, 18 Mar 2024 17:32:19 GMT", "version": "v1" } ]
2024-03-20
[ [ "Stadelmaier", "Josua", "", "University of Tübingen" ], [ "Malone", "Brandon", "", "NEC\n OncoImmunity" ], [ "Eggeling", "Ralf", "", "University of Tübingen" ] ]
We study the prediction of T-cell response for specific given peptides, which could, among other applications, be a crucial step towards the development of personalized cancer vaccines. It is a challenging task due to limited, heterogeneous training data featuring a multi-domain structure; such data entail the danger of shortcut learning, where models learn general characteristics of peptide sources, such as the source organism, rather than specific peptide characteristics associated with T-cell response. Using a transformer model for T-cell response prediction, we show that the danger of inflated predictive performance is not merely theoretical but occurs in practice. Consequently, we propose a domain-aware evaluation scheme. We then study different transfer learning techniques to deal with the multi-domain structure and shortcut learning. We demonstrate a per-source fine tuning approach to be effective across a wide range of peptide sources and further show that our final model outperforms existing state-of-the-art approaches for predicting T-cell responses for human peptides.
1706.00146
Steven Frank
Steven A. Frank
Receptor uptake arrays for vitamin B12, siderophores and glycans shape bacterial communities
Added many new references, edited throughout
Ecology & Evolution 7:10175-10195 (2017)
10.1002/ece3.3544
null
q-bio.MN q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Molecular variants of vitamin B12, siderophores and glycans occur. To take up variant forms, bacteria may express an array of receptors. The gut microbe Bacteroides thetaiotaomicron has three different receptors to take up variants of vitamin B12 and 88 receptors to take up various glycans. The design of receptor arrays reflects key processes that shape cellular evolution. Competition may focus each species on a subset of the available nutrient diversity. Some gut bacteria can take up only a narrow range of carbohydrates, whereas species such as B.~thetaiotaomicron can digest many different complex glycans. Comparison of different nutrients, habitats, and genomes provides an opportunity to test hypotheses about the breadth of receptor arrays. Another important process concerns fluctuations in nutrient availability. Such fluctuations enhance the value of cellular sensors, which gain information about environmental availability and adjust receptor deployment. Bacteria often adjust receptor expression in response to fluctuations of particular carbohydrate food sources. Some species may adjust expression of uptake receptors for specific siderophores. How do cells use sensor information to control the response to fluctuations? That question about regulatory wiring relates to problems that arise in control theory and artificial intelligence. Control theory clarifies how to analyze environmental fluctuations in relation to the design of sensors and response systems. Recent advances in deep learning studies of artificial intelligence focus on the architecture of regulatory wiring and the ways in which complex control networks represent and classify environmental states. I emphasize the similar design problems that arise in cellular evolution, control theory, and artificial intelligence. I connect those broad concepts to testable hypotheses for bacterial uptake of B12, siderophores and glycans.
[ { "created": "Thu, 1 Jun 2017 01:48:05 GMT", "version": "v1" }, { "created": "Mon, 21 Aug 2017 14:53:03 GMT", "version": "v2" } ]
2018-10-23
[ [ "Frank", "Steven A.", "" ] ]
Molecular variants of vitamin B12, siderophores and glycans occur. To take up variant forms, bacteria may express an array of receptors. The gut microbe Bacteroides thetaiotaomicron has three different receptors to take up variants of vitamin B12 and 88 receptors to take up various glycans. The design of receptor arrays reflects key processes that shape cellular evolution. Competition may focus each species on a subset of the available nutrient diversity. Some gut bacteria can take up only a narrow range of carbohydrates, whereas species such as B.~thetaiotaomicron can digest many different complex glycans. Comparison of different nutrients, habitats, and genomes provides an opportunity to test hypotheses about the breadth of receptor arrays. Another important process concerns fluctuations in nutrient availability. Such fluctuations enhance the value of cellular sensors, which gain information about environmental availability and adjust receptor deployment. Bacteria often adjust receptor expression in response to fluctuations of particular carbohydrate food sources. Some species may adjust expression of uptake receptors for specific siderophores. How do cells use sensor information to control the response to fluctuations? That question about regulatory wiring relates to problems that arise in control theory and artificial intelligence. Control theory clarifies how to analyze environmental fluctuations in relation to the design of sensors and response systems. Recent advances in deep learning studies of artificial intelligence focus on the architecture of regulatory wiring and the ways in which complex control networks represent and classify environmental states. I emphasize the similar design problems that arise in cellular evolution, control theory, and artificial intelligence. I connect those broad concepts to testable hypotheses for bacterial uptake of B12, siderophores and glycans.
2302.13268
Hamid Rokni
M. Keramy, K. Jahanian, R. Sani, A. Agha, I. Dehzangy, M. Yan, H. Rokni
A survey of machine learning techniques in medical applications
null
null
null
null
q-bio.GN cs.AI cs.LG q-bio.QM
http://creativecommons.org/publicdomain/zero/1.0/
In recent years, machine learning (ML) has emerged as a powerful tool for solving a wide range of problems, including medical decision-making. The exponential growth of medical data over the past two decades has surpassed the capacity for manual analysis, prompting increased interest in automated data analysis and processing. ML algorithms, capable of learning from data with minimal human intervention, are particularly well-suited for medical data analysis and interpretation. One significant advantage of ML is the reduced cost of collecting labeled training data necessary for supervised learning. While numerous studies have explored the applications of ML in medicine, this survey specifically focuses on the use of ML across various medical research fields. We provide a comprehensive technical overview of existing studies on ML applications in medicine, highlighting the strengths and limitations of these approaches. Additionally, we discuss potential research directions for future exploration. These include the development of more sophisticated reward functions, as the accuracy of the reward function is crucial for ML performance, the integration of ML with other techniques, and the application of ML to new and emerging areas in genomics research. Finally, we summarize our findings and present the current state of the field and the future outlook for ML in medical applications.
[ { "created": "Sun, 26 Feb 2023 08:43:08 GMT", "version": "v1" }, { "created": "Mon, 28 Aug 2023 06:08:49 GMT", "version": "v2" }, { "created": "Wed, 17 Jul 2024 01:14:57 GMT", "version": "v3" }, { "created": "Thu, 18 Jul 2024 02:29:50 GMT", "version": "v4" }, { "created": "Tue, 30 Jul 2024 10:29:24 GMT", "version": "v5" } ]
2024-07-31
[ [ "Keramy", "M.", "" ], [ "Jahanian", "K.", "" ], [ "Sani", "R.", "" ], [ "Agha", "A.", "" ], [ "Dehzangy", "I.", "" ], [ "Yan", "M.", "" ], [ "Rokni", "H.", "" ] ]
In recent years, machine learning (ML) has emerged as a powerful tool for solving a wide range of problems, including medical decision-making. The exponential growth of medical data over the past two decades has surpassed the capacity for manual analysis, prompting increased interest in automated data analysis and processing. ML algorithms, capable of learning from data with minimal human intervention, are particularly well-suited for medical data analysis and interpretation. One significant advantage of ML is the reduced cost of collecting labeled training data necessary for supervised learning. While numerous studies have explored the applications of ML in medicine, this survey specifically focuses on the use of ML across various medical research fields. We provide a comprehensive technical overview of existing studies on ML applications in medicine, highlighting the strengths and limitations of these approaches. Additionally, we discuss potential research directions for future exploration. These include the development of more sophisticated reward functions, as the accuracy of the reward function is crucial for ML performance, the integration of ML with other techniques, and the application of ML to new and emerging areas in genomics research. Finally, we summarize our findings and present the current state of the field and the future outlook for ML in medical applications.
1310.0730
Bernard Ycart
Bernard Ycart (LJK), Fr\'ed\'eric Pont (CRCT), Jean-Jacques Fourni\'e (CRCT)
Simulation of Gene Regulatory Networks
null
null
null
null
q-bio.MN math.PR q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This limited review is intended as an introduction to the fast-growing subject of mathematical modelling of cell metabolism and its biochemical pathways, and more precisely to pathways linked to the apoptosis of cancerous cells. Some basic mathematical models of chemical kinetics, with emphasis on stochastic models, are presented.
[ { "created": "Wed, 2 Oct 2013 15:05:48 GMT", "version": "v1" } ]
2013-10-03
[ [ "Ycart", "Bernard", "", "LJK" ], [ "Pont", "Frédéric", "", "CRCT" ], [ "Fournié", "Jean-Jacques", "", "CRCT" ] ]
This limited review is intended as an introduction to the fast-growing subject of mathematical modelling of cell metabolism and its biochemical pathways, and more precisely to pathways linked to the apoptosis of cancerous cells. Some basic mathematical models of chemical kinetics, with emphasis on stochastic models, are presented.
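A standard entry point to the stochastic chemical-kinetics models such a review covers is Gillespie's stochastic simulation algorithm. A minimal sketch for a constitutive gene-expression (birth-death) process, with illustrative rates (not code from the review):

```python
import random

def gillespie_birth_death(k_prod, gamma, t_max, x0=0, seed=1):
    """Gillespie SSA for constitutive gene expression: a molecule is
    produced at rate k_prod and degraded at rate gamma * x.
    Returns (event times, molecule counts)."""
    random.seed(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_max:
        a_prod = k_prod
        a_deg = gamma * x
        a_tot = a_prod + a_deg
        t += random.expovariate(a_tot)        # waiting time to next event
        if random.random() < a_prod / a_tot:  # choose which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        counts.append(x)
    return times, counts
```

For this process the stationary distribution is Poisson with mean k_prod / gamma, so long trajectories hover around that level.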
2004.06680
Andrew Lesniewski
Andrew Lesniewski
Epidemic control via stochastic optimal control
30 pages
null
null
null
q-bio.PE econ.EM math.OC q-fin.CP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of optimal control of the stochastic SIR model. Models of this type are used in mathematical epidemiology to capture the time evolution of highly infectious diseases such as COVID-19. Our approach relies on reformulating the Hamilton-Jacobi-Bellman equation as a stochastic minimum principle. This results in a system of forward-backward stochastic differential equations, which is amenable to numerical solution via Monte Carlo simulations. We present a number of numerical solutions of the system under a variety of scenarios.
[ { "created": "Tue, 14 Apr 2020 17:34:07 GMT", "version": "v1" }, { "created": "Fri, 17 Apr 2020 20:36:47 GMT", "version": "v2" }, { "created": "Fri, 1 May 2020 16:18:13 GMT", "version": "v3" } ]
2020-05-04
[ [ "Lesniewski", "Andrew", "" ] ]
We study the problem of optimal control of the stochastic SIR model. Models of this type are used in mathematical epidemiology to capture the time evolution of highly infectious diseases such as COVID-19. Our approach relies on reformulating the Hamilton-Jacobi-Bellman equation as a stochastic minimum principle. This results in a system of forward-backward stochastic differential equations, which is amenable to numerical solution via Monte Carlo simulations. We present a number of numerical solutions of the system under a variety of scenarios.
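The deterministic skeleton of the SIR dynamics being controlled is easy to sketch with a forward-Euler step; in the controlled setting of the paper, the transmission rate would additionally be modulated by the policy recovered from the FBSDE system. A toy sketch of the uncontrolled dynamics only (not the paper's stochastic model or its solver):

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the deterministic SIR model
    (population fractions; s + i + r stays equal to 1)."""
    new_inf = beta * s * i * dt
    new_rec = gamma * i * dt
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def simulate_sir(beta, gamma, s0, i0, steps, dt=0.1):
    """Integrate the SIR model for `steps` Euler steps."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
    return s, i, r
```

With beta/gamma > 1 the infected fraction first grows, then burns through the susceptible pool, which is the regime where control of the transmission rate matters most.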
2311.02204
Marcela Ordorica Arango
Anastasia Bizyaeva, Marcela Ordorica Arango, Yunxiu Zhou, Simon Levin, and Naomi Ehrich Leonard
Active risk aversion in SIS epidemics on networks
null
null
null
null
q-bio.PE cs.SY eess.SY math.DS
http://creativecommons.org/licenses/by-sa/4.0/
We present and analyze an actively controlled Susceptible-Infected-Susceptible (actSIS) model of interconnected populations to study how risk aversion strategies, such as social distancing, affect network epidemics. A population using a risk aversion strategy reduces its contact rate with other populations when it perceives an increase in infection risk. The network actSIS model relies on two distinct networks. One is a physical contact network that defines which populations come into contact with which other populations and thus how infection spreads. The other is a communication network, such as an online social network, that defines which populations observe the infection level of which other populations and thus how information spreads. We prove that the model, with these two networks and populations using risk aversion strategies, exhibits a transcritical bifurcation in which an endemic equilibrium emerges. For regular graphs, we prove that the endemic infection level is uniform across populations and reduced by the risk aversion strategy, relative to the network SIS endemic level. We show that when communication is sufficiently sparse, this initially stable equilibrium loses stability in a secondary bifurcation. Simulations show that a new stable solution emerges with nonuniform infection levels.
[ { "created": "Fri, 3 Nov 2023 19:26:58 GMT", "version": "v1" } ]
2023-11-07
[ [ "Bizyaeva", "Anastasia", "" ], [ "Arango", "Marcela Ordorica", "" ], [ "Zhou", "Yunxiu", "" ], [ "Levin", "Simon", "" ], [ "Leonard", "Naomi Ehrich", "" ] ]
We present and analyze an actively controlled Susceptible-Infected-Susceptible (actSIS) model of interconnected populations to study how risk aversion strategies, such as social distancing, affect network epidemics. A population using a risk aversion strategy reduces its contact rate with other populations when it perceives an increase in infection risk. The network actSIS model relies on two distinct networks. One is a physical contact network that defines which populations come into contact with which other populations and thus how infection spreads. The other is a communication network, such as an online social network, that defines which populations observe the infection level of which other populations and thus how information spreads. We prove that the model, with these two networks and populations using risk aversion strategies, exhibits a transcritical bifurcation in which an endemic equilibrium emerges. For regular graphs, we prove that the endemic infection level is uniform across populations and reduced by the risk aversion strategy, relative to the network SIS endemic level. We show that when communication is sufficiently sparse, this initially stable equilibrium loses stability in a secondary bifurcation. Simulations show that a new stable solution emerges with nonuniform infection levels.
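The mean-field SIS dynamics underlying such network models, without the risk-aversion feedback, can be sketched in a few lines; on a $k$-regular graph the endemic equilibrium is uniform at $p^* = 1 - \gamma/(\beta k)$, consistent with the uniform-infection-level result stated above for regular graphs. A minimal NumPy sketch (not the authors' actSIS model):

```python
import numpy as np

def sis_mean_field(A, beta, gamma, steps=5000, dt=0.01, p0=0.01):
    """Euler iteration of mean-field SIS on a network with adjacency A:
        dp_i/dt = -gamma * p_i + beta * (1 - p_i) * sum_j A[i, j] * p_j.
    Returns per-node infection probabilities after `steps` steps."""
    p = np.full(A.shape[0], p0)
    for _ in range(steps):
        p = p + dt * (-gamma * p + beta * (1 - p) * (A @ p))
        p = np.clip(p, 0.0, 1.0)
    return p
```

Below the epidemic threshold ($\beta k < \gamma$ on a $k$-regular graph) the same iteration decays to the infection-free state.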
1605.06379
Karol Wo{\l}ek
Karol Wo{\l}ek and Marek Cieplak
Criteria for folding in structure-based models of proteins
8 pages, 4 figures
The Journal of Chemical Physics 144.18 (2016): 185102
10.1063/1.4948783
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In structure-based models of proteins, one often assumes that folding is accomplished when all contacts are established. This assumption may frequently lead to the conceptual problem that folding takes place in a temperature region of very low thermodynamic stability, especially when the contact map used is too sparse. We consider six different structure-based models and show that allowing for a small, but model-dependent, percentage of the native contacts not being established boosts the folding temperature substantially while affecting the time scales of folding only in a minor way. We also compare other properties of the six models. We show that the choice of the description of the backbone stiffness has a substantial effect on the values of characteristic temperatures that relate both to equilibrium and kinetic properties. Models without any backbone stiffness (like the self-organized polymer) are found to perform similarly to those with stiffness, including in studies of stretching.
[ { "created": "Fri, 20 May 2016 14:37:38 GMT", "version": "v1" } ]
2016-05-23
[ [ "Wołek", "Karol", "" ], [ "Cieplak", "Marek", "" ] ]
In structure-based models of proteins, one often assumes that folding is accomplished when all contacts are established. This assumption may frequently lead to the conceptual problem that folding takes place in a temperature region of very low thermodynamic stability, especially when the contact map used is too sparse. We consider six different structure-based models and show that allowing for a small, but model-dependent, percentage of the native contacts not being established boosts the folding temperature substantially while affecting the time scales of folding only in a minor way. We also compare other properties of the six models. We show that the choice of the description of the backbone stiffness has a substantial effect on the values of characteristic temperatures that relate both to equilibrium and kinetic properties. Models without any backbone stiffness (like the self-organized polymer) are found to perform similarly to those with stiffness, including in studies of stretching.
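The folding criterion at issue is usually phrased through the fraction $Q$ of native contacts currently established. A schematic sketch of that bookkeeping (the 1.5x distance tolerance is an illustrative assumption, not the cutoff used in these models):

```python
def fraction_native_contacts(distances, native_distances, tol=1.5):
    """Fraction Q of native contacts currently formed: a contact counts
    as established when its distance is within tol times the native one."""
    formed = sum(1 for d, d0 in zip(distances, native_distances) if d <= tol * d0)
    return formed / len(native_distances)

def is_folded(distances, native_distances, q_threshold=1.0):
    """Declare folding when Q >= q_threshold; the paper's point is that a
    threshold slightly below 1 raises the folding temperature substantially."""
    return fraction_native_contacts(distances, native_distances) >= q_threshold
```

Lowering q_threshold from 1.0 to, say, 0.95 changes which conformations count as folded without altering the dynamics themselves.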
1306.6313
Jack O'Brien
John O'Brien and Xavier Didelot and Zamin Iqbal and Lucas Amenga-Etego and Bartu Ahiska and Daniel Falush
A Bayesian approach to inferring the phylogenetic structure of communities from metagenomic data
25 pages, 7 figures
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metagenomics provides a powerful new tool set for investigating evolutionary interactions with the environment. However, an absence of model-based statistical methods means that researchers are often not able to make full use of this complex information. We present a Bayesian method for inferring the phylogenetic relationship among related organisms found within metagenomic samples. Our approach exploits variation in the frequency of taxa among samples to simultaneously infer each lineage haplotype, the phylogenetic tree connecting them, and their frequency within each sample. Applications of the algorithm to simulated data show that our method can recover a substantial fraction of the phylogenetic structure even in the presence of strong mixing among samples. We provide examples of the method applied to data from green sulfur bacteria recovered from an Antarctic lake, plastids from mixed Plasmodium falciparum infections, and virulent Neisseria meningitidis samples.
[ { "created": "Wed, 26 Jun 2013 18:48:55 GMT", "version": "v1" } ]
2013-06-27
[ [ "O'Brien", "John", "" ], [ "Didelot", "Xavier", "" ], [ "Iqbal", "Zamin", "" ], [ "Amenga-Etego", "Lucas", "" ], [ "Ahiska", "Bartu", "" ], [ "Falush", "Daniel", "" ] ]
Metagenomics provides a powerful new tool set for investigating evolutionary interactions with the environment. However, an absence of model-based statistical methods means that researchers are often not able to make full use of this complex information. We present a Bayesian method for inferring the phylogenetic relationship among related organisms found within metagenomic samples. Our approach exploits variation in the frequency of taxa among samples to simultaneously infer each lineage haplotype, the phylogenetic tree connecting them, and their frequency within each sample. Applications of the algorithm to simulated data show that our method can recover a substantial fraction of the phylogenetic structure even in the presence of strong mixing among samples. We provide examples of the method applied to data from green sulfur bacteria recovered from an Antarctic lake, plastids from mixed Plasmodium falciparum infections, and virulent Neisseria meningitidis samples.
1505.07855
W B Langdon
W. B. Langdon and Brian Yee Hong Lam
Genetically Improved BarraCUDA
UCL, Department of Computer Science, technical report RN/15/03
BioData Mining 10:28, 2 August, 2017
10.1186/s13040-017-0149-1
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
BarraCUDA is a C program which uses the BWA algorithm in parallel with nVidia CUDA to align short next-generation DNA sequences against a reference genome. The genetically improved (GI) code is up to three times faster on short paired-end reads from The 1000 Genomes Project and 60 percent more accurate on a short BioPlanet.com GCAT alignment benchmark. GPGPU BarraCUDA running on a single K80 Tesla GPU can align short paired-end next-generation sequences up to ten times faster than bwa on a 12-core CPU.
[ { "created": "Thu, 28 May 2015 20:39:49 GMT", "version": "v1" } ]
2017-08-08
[ [ "Langdon", "W. B.", "" ], [ "Lam", "Brian Yee Hong", "" ] ]
BarraCUDA is a C program which uses the BWA algorithm in parallel with nVidia CUDA to align short next-generation DNA sequences against a reference genome. The genetically improved (GI) code is up to three times faster on short paired-end reads from The 1000 Genomes Project and 60 percent more accurate on a short BioPlanet.com GCAT alignment benchmark. GPGPU BarraCUDA running on a single K80 Tesla GPU can align short paired-end next-generation sequences up to ten times faster than bwa on a 12-core CPU.
2006.00003
Mauricio J. del Razo Sarmina
Margarita Kostr\'e, Christof Sch\"utte, Frank No\'e and Mauricio J. del Razo
Coupling particle-based reaction-diffusion simulations with reservoirs mediated by reaction-diffusion PDEs
null
null
null
null
q-bio.QM physics.chem-ph physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open biochemical systems of interacting molecules are ubiquitous in life-related processes. However, established computational methodologies, like molecular dynamics, are still mostly constrained to closed systems and timescales too small to be relevant for life processes. Alternatively, particle-based reaction-diffusion models are currently the most accurate and computationally feasible approach at these scales. Their efficiency lies in modeling entire molecules as particles that can diffuse and interact with each other. In this work, we develop modeling and numerical schemes for particle-based reaction-diffusion in an open setting, where the reservoirs are mediated by reaction-diffusion PDEs. We derive two important theoretical results. The first one is the mean-field for open systems of diffusing particles; the second one is the mean-field for a particle-based reaction-diffusion system with second-order reactions. We employ these two results to develop a numerical scheme that consistently couples particle-based reaction-diffusion processes with reaction-diffusion PDEs. This allows modeling open biochemical systems in contact with reservoirs that are time-dependent and spatially inhomogeneous, as in many relevant real-world applications.
[ { "created": "Fri, 29 May 2020 19:12:31 GMT", "version": "v1" } ]
2020-06-02
[ [ "Kostré", "Margarita", "" ], [ "Schütte", "Christof", "" ], [ "Noé", "Frank", "" ], [ "del Razo", "Mauricio J.", "" ] ]
Open biochemical systems of interacting molecules are ubiquitous in life-related processes. However, established computational methodologies, like molecular dynamics, are still mostly constrained to closed systems and timescales too small to be relevant for life processes. Alternatively, particle-based reaction-diffusion models are currently the most accurate and computationally feasible approach at these scales. Their efficiency lies in modeling entire molecules as particles that can diffuse and interact with each other. In this work, we develop modeling and numerical schemes for particle-based reaction-diffusion in an open setting, where the reservoirs are mediated by reaction-diffusion PDEs. We derive two important theoretical results. The first one is the mean-field for open systems of diffusing particles; the second one is the mean-field for a particle-based reaction-diffusion system with second-order reactions. We employ these two results to develop a numerical scheme that consistently couples particle-based reaction-diffusion processes with reaction-diffusion PDEs. This allows modeling open biochemical systems in contact with reservoirs that are time-dependent and spatially inhomogeneous, as in many relevant real-world applications.
2303.16725
Reza Abbasi-Asl
Alex J. Lee, Robert Cahill, Reza Abbasi-Asl
Machine Learning for Uncovering Biological Insights in Spatial Transcriptomics Data
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Development and homeostasis in multicellular systems both require exquisite control over spatial molecular pattern formation and maintenance. Advances in spatially-resolved and high-throughput molecular imaging methods such as multiplexed immunofluorescence and spatial transcriptomics (ST) provide exciting new opportunities to augment our fundamental understanding of these processes in health and disease. The large and complex datasets resulting from these techniques, particularly ST, have led to rapid development of innovative machine learning (ML) tools primarily based on deep learning techniques. These ML tools are now increasingly featured in integrated experimental and computational workflows to disentangle signals from noise in complex biological systems. However, it can be difficult to understand and balance the different implicit assumptions and methodologies of a rapidly expanding toolbox of analytical tools in ST. To address this, we summarize major ST analysis goals that ML can help address and current analysis trends. We also describe four major data science concepts and related heuristics that can help guide practitioners in their choices of the right tools for the right biological questions.
[ { "created": "Wed, 29 Mar 2023 14:22:08 GMT", "version": "v1" } ]
2023-03-30
[ [ "Lee", "Alex J.", "" ], [ "Cahill", "Robert", "" ], [ "Abbasi-Asl", "Reza", "" ] ]
Development and homeostasis in multicellular systems both require exquisite control over spatial molecular pattern formation and maintenance. Advances in spatially-resolved and high-throughput molecular imaging methods such as multiplexed immunofluorescence and spatial transcriptomics (ST) provide exciting new opportunities to augment our fundamental understanding of these processes in health and disease. The large and complex datasets resulting from these techniques, particularly ST, have led to rapid development of innovative machine learning (ML) tools primarily based on deep learning techniques. These ML tools are now increasingly featured in integrated experimental and computational workflows to disentangle signals from noise in complex biological systems. However, it can be difficult to understand and balance the different implicit assumptions and methodologies of a rapidly expanding toolbox of analytical tools in ST. To address this, we summarize major ST analysis goals that ML can help address and current analysis trends. We also describe four major data science concepts and related heuristics that can help guide practitioners in their choices of the right tools for the right biological questions.
2207.13190
Julia Berezutskaya
Julia Berezutskaya, Anne-Lise Saive, Karim Jerbi, Marcel van Gerven
How does artificial intelligence contribute to iEEG research?
This chapter is forthcoming in Intracranial EEG for Cognitive Neuroscience (book)
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Artificial intelligence (AI) is a fast-growing field focused on modeling and machine implementation of various cognitive functions with an increasing number of applications in computer vision, text processing, robotics, neurotechnology, bio-inspired computing and others. In this chapter, we describe how AI methods can be applied in the context of intracranial electroencephalography (iEEG) research. IEEG data is unique as it provides extremely high-quality signals recorded directly from brain tissue. Applying advanced AI models to these data carries the potential to further our understanding of many fundamental questions in neuroscience. At the same time, as an invasive technique, iEEG lends itself well to long-term, mobile brain-computer interface applications, particularly for communication in severely paralyzed individuals. We provide a detailed overview of these two research directions in the application of AI techniques to iEEG. That is, (1) the development of computational models that target fundamental questions about the neurobiological nature of cognition (AI-iEEG for neuroscience) and (2) applied research on monitoring and identification of event-driven brain states for the development of clinical brain-computer interface systems (AI-iEEG for neurotechnology). We explain key machine learning concepts, specifics of processing and modeling iEEG data and details of state-of-the-art iEEG-based neurotechnology and brain-computer interfaces.
[ { "created": "Tue, 26 Jul 2022 21:38:01 GMT", "version": "v1" } ]
2022-07-28
[ [ "Berezutskaya", "Julia", "" ], [ "Saive", "Anne-Lise", "" ], [ "Jerbi", "Karim", "" ], [ "van Gerven", "Marcel", "" ] ]
Artificial intelligence (AI) is a fast-growing field focused on modeling and machine implementation of various cognitive functions with an increasing number of applications in computer vision, text processing, robotics, neurotechnology, bio-inspired computing and others. In this chapter, we describe how AI methods can be applied in the context of intracranial electroencephalography (iEEG) research. IEEG data is unique as it provides extremely high-quality signals recorded directly from brain tissue. Applying advanced AI models to these data carries the potential to further our understanding of many fundamental questions in neuroscience. At the same time, as an invasive technique, iEEG lends itself well to long-term, mobile brain-computer interface applications, particularly for communication in severely paralyzed individuals. We provide a detailed overview of these two research directions in the application of AI techniques to iEEG. That is, (1) the development of computational models that target fundamental questions about the neurobiological nature of cognition (AI-iEEG for neuroscience) and (2) applied research on monitoring and identification of event-driven brain states for the development of clinical brain-computer interface systems (AI-iEEG for neurotechnology). We explain key machine learning concepts, specifics of processing and modeling iEEG data and details of state-of-the-art iEEG-based neurotechnology and brain-computer interfaces.
1502.00549
Wannaya Ngamkham
W. Ngamkham, M. N. van Dongen, W. A. Serdijn, C. J. Bes, J. J. Briaire, J. H. M. Frijns
A 0.042 mm^2 programmable biphasic stimulator for cochlear implants suitable for a large number of channels
13 pages, 12 figures, 2 tables
null
null
null
q-bio.NC cs.ET physics.ins-det
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a compact programmable biphasic stimulator for cochlear implants. By employing double-loop negative feedback, the output impedance of the current generator is increased, while maximizing the voltage compliance of the output transistor. To make the stimulator circuit compact, the stimulation current is set by scaling a reference current using a two stage binary-weighted transistor DAC (comprising a 3 bit high-voltage transistor DAC and a 4 bit low-voltage transistor DAC). With this structure the power consumption and the area of the circuit can be minimized. The proposed circuit has been implemented in AMS 0.18um high-voltage CMOS IC technology, using an active chip area of about 0.042mm^2. Measurement results show that proper charge balance of the anodic and cathodic stimulation phases is achieved and a dc blocking capacitor can be omitted. The resulting reduction in the required area makes the proposed system suitable for a large number of channels.
[ { "created": "Thu, 29 Jan 2015 09:10:54 GMT", "version": "v1" } ]
2015-02-03
[ [ "Ngamkham", "W.", "" ], [ "van Dongen", "M. N.", "" ], [ "Serdijn", "W. A.", "" ], [ "Bes", "C. J.", "" ], [ "Briaire", "J. J.", "" ], [ "Frijns", "J. H. M.", "" ] ]
This paper presents a compact programmable biphasic stimulator for cochlear implants. By employing double-loop negative feedback, the output impedance of the current generator is increased, while maximizing the voltage compliance of the output transistor. To make the stimulator circuit compact, the stimulation current is set by scaling a reference current using a two stage binary-weighted transistor DAC (comprising a 3 bit high-voltage transistor DAC and a 4 bit low-voltage transistor DAC). With this structure the power consumption and the area of the circuit can be minimized. The proposed circuit has been implemented in AMS 0.18um high-voltage CMOS IC technology, using an active chip area of about 0.042mm^2. Measurement results show that proper charge balance of the anodic and cathodic stimulation phases is achieved and a dc blocking capacitor can be omitted. The resulting reduction in the required area makes the proposed system suitable for a large number of channels.
1303.0455
Marc Harper
Marc Harper, Luisa Gronenberg, James Liao, Christopher Lee
Comprehensive Detection of Genes Causing a Phenotype using Phenotype Sequencing and Pathway Analysis
null
null
10.1371/journal.pone.0088072
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Discovering all the genetic causes of a phenotype is an important goal in functional genomics. In this paper we combine an experimental design for multiple independent detections of the genetic causes of a phenotype, with a high-throughput sequencing analysis that maximizes sensitivity for comprehensively identifying them. Testing this approach on a set of 24 mutant strains generated for a metabolic phenotype with many known genetic causes, we show that this pathway-based phenotype sequencing analysis greatly improves sensitivity of detection compared with previous methods, and reveals a wide range of pathways that can cause this phenotype. We demonstrate our approach on a metabolic re-engineering phenotype, the PEP/OAA metabolic node in E. coli, which is crucial to a substantial number of metabolic pathways and under renewed interest for biofuel research. Out of 2157 mutations in these strains, pathway-phenoseq discriminated just five gene groups (12 genes) as statistically significant causes of the phenotype. Experimentally, these five gene groups, and the next two high-scoring pathway-phenoseq groups, either have a clear connection to the PEP metabolite level or offer an alternative path of producing oxaloacetate (OAA), and thus clearly explain the phenotype. These high-scoring gene groups also show strong evidence of positive selection pressure, compared with strictly neutral selection in the rest of the genome.
[ { "created": "Sun, 3 Mar 2013 04:34:10 GMT", "version": "v1" } ]
2014-03-18
[ [ "Harper", "Marc", "" ], [ "Gronenberg", "Luisa", "" ], [ "Liao", "James", "" ], [ "Lee", "Christopher", "" ] ]
Discovering all the genetic causes of a phenotype is an important goal in functional genomics. In this paper we combine an experimental design for multiple independent detections of the genetic causes of a phenotype, with a high-throughput sequencing analysis that maximizes sensitivity for comprehensively identifying them. Testing this approach on a set of 24 mutant strains generated for a metabolic phenotype with many known genetic causes, we show that this pathway-based phenotype sequencing analysis greatly improves sensitivity of detection compared with previous methods, and reveals a wide range of pathways that can cause this phenotype. We demonstrate our approach on a metabolic re-engineering phenotype, the PEP/OAA metabolic node in E. coli, which is crucial to a substantial number of metabolic pathways and under renewed interest for biofuel research. Out of 2157 mutations in these strains, pathway-phenoseq discriminated just five gene groups (12 genes) as statistically significant causes of the phenotype. Experimentally, these five gene groups, and the next two high-scoring pathway-phenoseq groups, either have a clear connection to the PEP metabolite level or offer an alternative path of producing oxaloacetate (OAA), and thus clearly explain the phenotype. These high-scoring gene groups also show strong evidence of positive selection pressure, compared with strictly neutral selection in the rest of the genome.
1902.08827
Jaime Ashander
Jaime Ashander, Lisa C. Thompson, James N. Sanchirico and Marissa L. Baskett
Optimal Investment to Enable Evolutionary Rescue
null
Theoretical Ecology 2019
10.1007/s12080-019-0413-8
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
'Evolutionary rescue' is the potential for evolution to enable population persistence in a changing environment. Even with eventual rescue, evolutionary time lags can cause the population size to temporarily fall below a threshold susceptible to extinction. To reduce extinction risk given human-driven global change, conservation management can enhance populations through actions such as captive breeding. To quantify the optimal timing of, and indicators for engaging in, investment in temporary enhancement to enable evolutionary rescue, we construct a model of coupled demographic-genetic dynamics given a moving optimum. We assume 'decelerating change', as might be relevant to climate change, where the rate of environmental change initially exceeds a rate where evolutionary rescue is possible, but eventually slows. We analyze the optimal control path of an intervention to avoid the population size falling below a threshold susceptible to extinction, minimizing costs. We find that the optimal path of intervention initially increases as the population declines, then declines and ceases when the population growth rate becomes positive, which lags the stabilization in environmental change. In other words, the optimal strategy involves increasing investment even in the face of a declining population, and positive population growth could serve as a signal to end the intervention. In addition, a greater carrying capacity relative to the initial population size decreases the optimal intervention. Therefore, a one-time action to increase carrying capacity, such as habitat restoration, can reduce the amount and duration of longer-term investment in population enhancement, even if the population is initially lower than and declining away from the new carrying capacity.
[ { "created": "Sat, 23 Feb 2019 18:48:29 GMT", "version": "v1" } ]
2019-03-07
[ [ "Ashander", "Jaime", "" ], [ "Thompson", "Lisa C.", "" ], [ "Sanchirico", "James N.", "" ], [ "Baskett", "Marissa L.", "" ] ]
'Evolutionary rescue' is the potential for evolution to enable population persistence in a changing environment. Even with eventual rescue, evolutionary time lags can cause the population size to temporarily fall below a threshold susceptible to extinction. To reduce extinction risk given human-driven global change, conservation management can enhance populations through actions such as captive breeding. To quantify the optimal timing of, and indicators for engaging in, investment in temporary enhancement to enable evolutionary rescue, we construct a model of coupled demographic-genetic dynamics given a moving optimum. We assume 'decelerating change', as might be relevant to climate change, where the rate of environmental change initially exceeds a rate where evolutionary rescue is possible, but eventually slows. We analyze the optimal control path of an intervention to avoid the population size falling below a threshold susceptible to extinction, minimizing costs. We find that the optimal path of intervention initially increases as the population declines, then declines and ceases when the population growth rate becomes positive, which lags the stabilization in environmental change. In other words, the optimal strategy involves increasing investment even in the face of a declining population, and positive population growth could serve as a signal to end the intervention. In addition, a greater carrying capacity relative to the initial population size decreases the optimal intervention. Therefore, a one-time action to increase carrying capacity, such as habitat restoration, can reduce the amount and duration of longer-term investment in population enhancement, even if the population is initially lower than and declining away from the new carrying capacity.
2307.06866
Konstantinos Mamis
Konstantinos Mamis and Mohammad Farazmand
Modeling correlated uncertainties in stochastic compartmental models
36 pages, 8 figures
null
10.1016/j.mbs.2024.109226
null
q-bio.PE math.DS math.PR
http://creativecommons.org/licenses/by-nc-nd/4.0/
We consider compartmental models of communicable disease with uncertain contact rates. Stochastic fluctuations are often added to the contact rate to account for uncertainties. White noise, which is the typical choice for the fluctuations, leads to significant underestimation of the disease severity. Here, starting from reasonable assumptions on the social behavior of individuals, we model the contacts as a Markov process which takes into account the temporal correlations present in human social activities. Consequently, we show that the mean-reverting Ornstein-Uhlenbeck (OU) process is the correct model for the stochastic contact rate. We demonstrate the implication of our model on two examples: a Susceptibles-Infected-Susceptibles (SIS) model and a Susceptibles-Exposed-Infected-Removed (SEIR) model of the COVID-19 pandemic. In particular, we observe that both compartmental models with white noise uncertainties undergo transitions that lead to the systematic underestimation of the spread of the disease. In contrast, modeling the contact rate with the OU process significantly hinders such unrealistic noise-induced transitions. For the SIS model, we derive its stationary probability density analytically, for both white and correlated noise. This allows us to give a complete description of the model's asymptotic behavior as a function of its bifurcation parameters, i.e., the basic reproduction number, noise intensity, and correlation time. For the SEIR model, where the probability density is not available in closed form, we study the transitions using Monte Carlo simulations. Our study underscores the necessity of temporal correlations in stochastic compartmental models and the need for more empirical studies that would systematically quantify such correlations.
[ { "created": "Mon, 10 Jul 2023 00:47:49 GMT", "version": "v1" } ]
2024-06-07
[ [ "Mamis", "Konstantinos", "" ], [ "Farazmand", "Mohammad", "" ] ]
We consider compartmental models of communicable disease with uncertain contact rates. Stochastic fluctuations are often added to the contact rate to account for uncertainties. White noise, which is the typical choice for the fluctuations, leads to significant underestimation of the disease severity. Here, starting from reasonable assumptions on the social behavior of individuals, we model the contacts as a Markov process which takes into account the temporal correlations present in human social activities. Consequently, we show that the mean-reverting Ornstein-Uhlenbeck (OU) process is the correct model for the stochastic contact rate. We demonstrate the implication of our model on two examples: a Susceptibles-Infected-Susceptibles (SIS) model and a Susceptibles-Exposed-Infected-Removed (SEIR) model of the COVID-19 pandemic. In particular, we observe that both compartmental models with white noise uncertainties undergo transitions that lead to the systematic underestimation of the spread of the disease. In contrast, modeling the contact rate with the OU process significantly hinders such unrealistic noise-induced transitions. For the SIS model, we derive its stationary probability density analytically, for both white and correlated noise. This allows us to give a complete description of the model's asymptotic behavior as a function of its bifurcation parameters, i.e., the basic reproduction number, noise intensity, and correlation time. For the SEIR model, where the probability density is not available in closed form, we study the transitions using Monte Carlo simulations. Our study underscores the necessity of temporal correlations in stochastic compartmental models and the need for more empirical studies that would systematically quantify such correlations.
1102.3720
Alexander Gusev
A Gusev, MJ Shah, EE Kenny, A Ramachandran, JK Lowe, J Salit, CC Lee, EC Levandowsky, TN Weaver, QC Doan, HE Peckham, SF McLaughlin, MR Lyons, VN Sheth, M Stoffel, FM De La Vega, JM Friedman, JL Breslow, I Pe'er
Low-pass Genomewide Sequencing and Variant Imputation Using Identity-by-descent in an Isolated Human Population
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Whole-genome sequencing in an isolated population with few founders directly ascertains variants from the population bottleneck that may be rare elsewhere. In such populations, shared haplotypes allow imputation of variants in unsequenced samples without resorting to statistical methods, as in studies of outbred cohorts. We focus on an isolated population cohort from the Pacific Island of Kosrae, Micronesia, where we previously collected SNP array and rich phenotype data for the majority of the population. We report identification of long regions with haplotypes co-inherited between pairs of individuals and methodology to leverage such shared genetic content for imputation. Our estimates show that sequencing as few as 40 personal genomes allows for imputation in up to 60% of the 3,000-person cohort at the average locus. We ascertained a pilot data-set of whole-genome sequences from seven Kosraean individuals, with average 5X coverage. This dataset identified 5,735,306 unique sites of which 1,212,831 were previously unknown. Additionally, these Kosraean variants are unusually enriched for alleles that are rare in other populations when compared to geographic neighbors. We were able to use the presence of shared haplotypes between the seven individuals to estimate imputation accuracy of known and novel variants and achieved levels of 99.6% and 97.3%, respectively. This study presents the first whole-genome analysis of a homogeneous isolate population with emphasis on rare variant inference.
[ { "created": "Thu, 17 Feb 2011 23:43:58 GMT", "version": "v1" } ]
2011-02-21
[ [ "Gusev", "A", "" ], [ "Shah", "MJ", "" ], [ "Kenny", "EE", "" ], [ "Ramachandran", "A", "" ], [ "Lowe", "JK", "" ], [ "Salit", "J", "" ], [ "Lee", "CC", "" ], [ "Levandowsky", "EC", "" ], [ "Weaver", "TN", "" ], [ "Doan", "QC", "" ], [ "Peckham", "HE", "" ], [ "McLaughlin", "SF", "" ], [ "Lyons", "MR", "" ], [ "Sheth", "VN", "" ], [ "Stoffel", "M", "" ], [ "De La Vega", "FM", "" ], [ "Friedman", "JM", "" ], [ "Breslow", "JL", "" ], [ "Pe'er", "I", "" ] ]
Whole-genome sequencing in an isolated population with few founders directly ascertains variants from the population bottleneck that may be rare elsewhere. In such populations, shared haplotypes allow imputation of variants in unsequenced samples without resorting to statistical methods, as in studies of outbred cohorts. We focus on an isolated population cohort from the Pacific Island of Kosrae, Micronesia, where we previously collected SNP array and rich phenotype data for the majority of the population. We report identification of long regions with haplotypes co-inherited between pairs of individuals and methodology to leverage such shared genetic content for imputation. Our estimates show that sequencing as few as 40 personal genomes allows for imputation in up to 60% of the 3,000-person cohort at the average locus. We ascertained a pilot data-set of whole-genome sequences from seven Kosraean individuals, with average 5X coverage. This dataset identified 5,735,306 unique sites of which 1,212,831 were previously unknown. Additionally, these Kosraean variants are unusually enriched for alleles that are rare in other populations when compared to geographic neighbors. We were able to use the presence of shared haplotypes between the seven individuals to estimate imputation accuracy of known and novel variants and achieved levels of 99.6% and 97.3%, respectively. This study presents the first whole-genome analysis of a homogeneous isolate population with emphasis on rare variant inference.
1202.5080
Anatoly Zlotnik
Anatoly Zlotnik and Jr-Shin Li
Optimal Entrainment of Neural Oscillator Ensembles
null
null
null
null
q-bio.NC math.DS math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we derive the minimum-energy periodic control that entrains an ensemble of structurally similar neural oscillators to a desired frequency. The state space representation of a nominal oscillator is reduced to a phase model by computing its limit cycle and phase response curve, from which the optimal control is derived by using formal averaging and the calculus of variations. We focus on the case of a 1:1 entrainment ratio, and introduce a numerical method for approximating the optimal controls. The method is applied to asymptotically control the spiking frequency of neural oscillators modeled using the Hodgkin-Huxley equations. This illustrates the optimality of entrainment controls derived using phase models when applied to the original state space system, which is a crucial requirement for using phase models in control synthesis for practical applications. The results of this work can be used to design low energy signals for deep brain stimulation therapies for neuropathologies, and can be generalized for optimal frequency control of large-scale complex oscillating systems with parameter uncertainty.
[ { "created": "Thu, 23 Feb 2012 03:02:49 GMT", "version": "v1" } ]
2012-02-24
[ [ "Zlotnik", "Anatoly", "" ], [ "Li", "Jr-Shin", "" ] ]
In this paper, we derive the minimum-energy periodic control that entrains an ensemble of structurally similar neural oscillators to a desired frequency. The state space representation of a nominal oscillator is reduced to a phase model by computing its limit cycle and phase response curve, from which the optimal control is derived by using formal averaging and the calculus of variations. We focus on the case of a 1:1 entrainment ratio, and introduce a numerical method for approximating the optimal controls. The method is applied to asymptotically control the spiking frequency of neural oscillators modeled using the Hodgkin-Huxley equations. This illustrates the optimality of entrainment controls derived using phase models when applied to the original state space system, which is a crucial requirement for using phase models in control synthesis for practical applications. The results of this work can be used to design low energy signals for deep brain stimulation therapies for neuropathologies, and can be generalized for optimal frequency control of large-scale complex oscillating systems with parameter uncertainty.
2111.02267
Yong Cui
Yong Cui, Jason D. Robinson, Seokhun Kim, George Kypriotakis, Charles E. Green, Sanjay S. Shete, Paul M. Cinciripini
An Open-Source Web App for Creating and Scoring Qualtrics-based Implicit Association Test
25 pages, 2 figures, 4 tables
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
The Implicit Association Test (IAT) is a common behavioral paradigm to assess implicit attitudes in various research contexts. In recent years, researchers have sought to collect IAT data remotely using online applications. Compared to laboratory-based assessments, online IAT experiments have several advantages, including widespread administration outside of artificial (i.e., laboratory) environments. Use of survey-software platforms (e.g., Qualtrics) represents an innovative and cost-effective approach that allows researchers to prepare online IAT experiments without any programming expertise. However, there are some drawbacks with the existing survey-software as well as other online IAT preparation tools, such as limited mobile device compatibility and lack of helper functionalities for easy adaptation. To address these issues, we developed an open-source web app (GitHub page: https://github.com/ycui1-mda/qualtrics_iat) for creating mobile-compatible Qualtrics-based IAT experiments and scoring the collected responses. The present study demonstrates the key functionalities of this web app and describes feasibility data that were collected and scored using the app to show the tool's validity. We show that the web app provides a complete and easy-to-adapt toolset for researchers to construct Qualtrics-based IAT experiments and process the derived IAT data.
[ { "created": "Wed, 3 Nov 2021 14:59:50 GMT", "version": "v1" } ]
2021-11-04
[ [ "Cui", "Yong", "" ], [ "Robinson", "Jason D.", "" ], [ "Kim", "Seokhun", "" ], [ "Kypriotakis", "George", "" ], [ "Green", "Charles E.", "" ], [ "Shete", "Sanjay S.", "" ], [ "Cinciripini", "Paul M.", "" ] ]
The Implicit Association Test (IAT) is a common behavioral paradigm to assess implicit attitudes in various research contexts. In recent years, researchers have sought to collect IAT data remotely using online applications. Compared to laboratory-based assessments, online IAT experiments have several advantages, including widespread administration outside of artificial (i.e., laboratory) environments. Use of survey-software platforms (e.g., Qualtrics) represents an innovative and cost-effective approach that allows researchers to prepare online IAT experiments without any programming expertise. However, there are some drawbacks with the existing survey-software as well as other online IAT preparation tools, such as limited mobile device compatibility and lack of helper functionalities for easy adaptation. To address these issues, we developed an open-source web app (GitHub page: https://github.com/ycui1-mda/qualtrics_iat) for creating mobile-compatible Qualtrics-based IAT experiments and scoring the collected responses. The present study demonstrates the key functionalities of this web app and describes feasibility data that were collected and scored using the app to show the tool's validity. We show that the web app provides a complete and easy-to-adapt toolset for researchers to construct Qualtrics-based IAT experiments and process the derived IAT data.
0711.2179
Peter Klimek
Rudolf Hanel, Peter Klimek, and Stefan Thurner
Studies in the physics of evolution: creation, formation, destruction
11 pages, 10 figures, to be published in SPIE proceedings
null
10.1117/12.771146
null
q-bio.PE
null
The concept of (auto)catalytic systems has become a cornerstone in understanding evolutionary processes in various fields. The common ground is the observation that for the production of new species/goods/ideas/elements etc. the pre-existence of specific other elements is a necessary condition. In previous work some of us showed that the dynamics of the catalytic network equation can be understood in terms of topological recurrence relations paving a path towards the analytic tractability of notoriously high dimensional evolution equations. We apply this philosophy to studies in socio-physics, bio-diversity and massive events of creation and destruction in technological and biological networks. Cascading events, triggered by small exogenous fluctuations, lead to dynamics strongly resembling the qualitative picture of Schumpeterian economic evolution. Further we show that this new methodology allows to mathematically treat a variant of the threshold voter-model of opinion formation on networks. For fixed topology we find distinct phases of mixed opinions and consensus.
[ { "created": "Wed, 14 Nov 2007 12:22:46 GMT", "version": "v1" } ]
2009-11-13
[ [ "Hanel", "Rudolf", "" ], [ "Klimek", "Peter", "" ], [ "Thurner", "Stefan", "" ] ]
The concept of (auto)catalytic systems has become a cornerstone in understanding evolutionary processes in various fields. The common ground is the observation that for the production of new species/goods/ideas/elements etc. the pre-existence of specific other elements is a necessary condition. In previous work some of us showed that the dynamics of the catalytic network equation can be understood in terms of topological recurrence relations paving a path towards the analytic tractability of notoriously high dimensional evolution equations. We apply this philosophy to studies in socio-physics, bio-diversity and massive events of creation and destruction in technological and biological networks. Cascading events, triggered by small exogenous fluctuations, lead to dynamics strongly resembling the qualitative picture of Schumpeterian economic evolution. Further we show that this new methodology allows to mathematically treat a variant of the threshold voter-model of opinion formation on networks. For fixed topology we find distinct phases of mixed opinions and consensus.
1505.07866
Dominik Thalmeier
Dominik Thalmeier, Marvin Uhlmann, Hilbert J. Kappen, Raoul-Martin Memmesheimer
Learning universal computations with spikes
null
PLoS Comput Biol 12(6): e1004895 (2016)
10.1371/journal.pcbi.1004895
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g.~for locomotion. Many such computations require previous building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves to substrates of powerful general purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows to learn even difficult benchmark tasks such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them.
[ { "created": "Thu, 28 May 2015 21:34:57 GMT", "version": "v1" }, { "created": "Wed, 29 Jun 2016 14:11:07 GMT", "version": "v2" } ]
2016-06-30
[ [ "Thalmeier", "Dominik", "" ], [ "Uhlmann", "Marvin", "" ], [ "Kappen", "Hilbert J.", "" ], [ "Memmesheimer", "Raoul-Martin", "" ] ]
Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g.~for locomotion. Many such computations require prior building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves to substrates of powerful general purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows learning even difficult benchmark tasks such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them.
2001.02868
Mihai Bibireata
Mihai Bibireata, Valentin M. Slepukhin, Alex J. Levine
Dynamical phase separation on rhythmogenic neuronal networks
14 pages, 15 figures
Phys. Rev. E 101, 062307 (2020)
10.1103/PhysRevE.101.062307
null
q-bio.NC nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore the dynamics of the preB\"{o}tzinger complex, the mammalian central pattern generator with $N \sim 10^3$ neurons, which produces a collective metronomic signal that times the inspiration. Our analysis is based on a simple firing-rate model of excitatory neurons with dendritic adaptation (the Feldman Del Negro model [Nat. Rev. Neurosci. 7, 232 (2006), Phys. Rev. E 2010 :051911]) interacting on a fixed, directed Erd\H{o}s-R\'{e}nyi network. In the all-to-all coupled variant of the model, there is spontaneous symmetry breaking in which some fraction of the neurons become stuck in a high firing-rate state, while others become quiescent. This separation into firing and non-firing clusters persists into more sparsely connected networks, and is partially determined by $k$-cores in the directed graphs. The model has a number of features of the dynamical phase diagram that violate the predictions of mean-field analysis. In particular, we observe in the simulated networks that stable oscillations do not persist in the large-N limit, in contradiction to the predictions of mean-field theory. Moreover, we observe that the oscillations in these sparse networks are remarkably robust in response to killing neurons, surviving until only $\approx 20 \%$ of the network remains. This robustness is consistent with experiment.
[ { "created": "Thu, 9 Jan 2020 07:37:17 GMT", "version": "v1" } ]
2020-07-01
[ [ "Bibireata", "Mihai", "" ], [ "Slepukhin", "Valentin M.", "" ], [ "Levine", "Alex J.", "" ] ]
We explore the dynamics of the preB\"{o}tzinger complex, the mammalian central pattern generator with $N \sim 10^3$ neurons, which produces a collective metronomic signal that times the inspiration. Our analysis is based on a simple firing-rate model of excitatory neurons with dendritic adaptation (the Feldman Del Negro model [Nat. Rev. Neurosci. 7, 232 (2006), Phys. Rev. E 2010 :051911]) interacting on a fixed, directed Erd\H{o}s-R\'{e}nyi network. In the all-to-all coupled variant of the model, there is spontaneous symmetry breaking in which some fraction of the neurons become stuck in a high firing-rate state, while others become quiescent. This separation into firing and non-firing clusters persists into more sparsely connected networks, and is partially determined by $k$-cores in the directed graphs. The model has a number of features of the dynamical phase diagram that violate the predictions of mean-field analysis. In particular, we observe in the simulated networks that stable oscillations do not persist in the large-N limit, in contradiction to the predictions of mean-field theory. Moreover, we observe that the oscillations in these sparse networks are remarkably robust in response to killing neurons, surviving until only $\approx 20 \%$ of the network remains. This robustness is consistent with experiment.
q-bio/0512045
Angel (Anxo) Sanchez
Carlos P. Roca, Jose A. Cuesta, and Angel Sanchez
The importance of selection rate in the evolution of cooperation
null
null
null
null
q-bio.PE cond-mat.stat-mech nlin.AO physics.soc-ph q-bio.QM
null
How cooperation emerges in human societies is still a puzzle. Evolutionary game theory has been the standard framework to address this issue. In most models, every individual plays with all others, and then reproduce and die according to what they earn. This amounts to assuming that selection takes place at a slow pace with respect to the interaction time scale. We show that, quite generally, if selection speeds up, the evolution outcome changes dramatically. Thus, in games such as Harmony, where cooperation is the only equilibrium and the only rational outcome, rapid selection leads to dominance of defectors. Similar non trivial phenomena arise in other binary games and even in more complicated settings such as the Ultimatum game. We conclude that the rate of selection is a key element to understand and model the emergence of cooperation, and one that has so far been overlooked.
[ { "created": "Wed, 28 Dec 2005 15:53:32 GMT", "version": "v1" } ]
2007-05-23
[ [ "Roca", "Carlos P.", "" ], [ "Cuesta", "Jose A.", "" ], [ "Sanchez", "Angel", "" ] ]
How cooperation emerges in human societies is still a puzzle. Evolutionary game theory has been the standard framework to address this issue. In most models, every individual plays with all others, and then reproduces and dies according to what it earns. This amounts to assuming that selection takes place at a slow pace with respect to the interaction time scale. We show that, quite generally, if selection speeds up, the evolution outcome changes dramatically. Thus, in games such as Harmony, where cooperation is the only equilibrium and the only rational outcome, rapid selection leads to dominance of defectors. Similar nontrivial phenomena arise in other binary games and even in more complicated settings such as the Ultimatum game. We conclude that the rate of selection is a key element to understand and model the emergence of cooperation, and one that has so far been overlooked.
0903.0859
Chih-Yuan Tseng
Hung-I Pai, Chih-Yuan Tseng and H.C. Lee
Data Processing Approach for Localizing Bio-magnetic Sources in the Brain
11 pages, 15 figures
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Magnetoencephalography (MEG) provides dynamic spatial-temporal insight of neural activities in the cortex. Because the number of possible sources is far greater than the number of MEG detectors, the proposition to localize sources directly from MEG data is notoriously ill-posed. Here we develop an approach based on data processing procedures including clustering, forward and backward filtering, and the method of maximum entropy. We show that taking as a starting point the assumption that the sources lie in the general area of the auditory cortex (an area of about 40 mm by 15 mm), our approach is capable of achieving reasonable success in pinpointing active sources concentrated in an area of a few mm's across, while limiting the spatial distribution and number of false positives.
[ { "created": "Wed, 4 Mar 2009 21:10:15 GMT", "version": "v1" } ]
2009-03-06
[ [ "Pai", "Hung-I", "" ], [ "Tseng", "Chih-Yuan", "" ], [ "Lee", "H. C.", "" ] ]
Magnetoencephalography (MEG) provides dynamic spatio-temporal insight into neural activities in the cortex. Because the number of possible sources is far greater than the number of MEG detectors, the proposition to localize sources directly from MEG data is notoriously ill-posed. Here we develop an approach based on data processing procedures including clustering, forward and backward filtering, and the method of maximum entropy. We show that taking as a starting point the assumption that the sources lie in the general area of the auditory cortex (an area of about 40 mm by 15 mm), our approach is capable of achieving reasonable success in pinpointing active sources concentrated in an area of a few mm's across, while limiting the spatial distribution and number of false positives.
1512.00344
Joan Saldana
Tom Britton, David Juher, Joan Saldana
A network epidemic model with preventive rewiring: comparative analysis of the initial phase
25 pages, 7 figures
Bulletin of Mathematical Biology, 78(12), 2427-2454, 2016
10.1007/s11538-016-0227-4
null
q-bio.PE cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is concerned with stochastic SIR and SEIR epidemic models on random networks in which individuals may rewire away from infected neighbors at some rate $\omega$ (and reconnect to non-infectious individuals with probability $\alpha$ or else simply drop the edge if $\alpha=0$), so-called preventive rewiring. The models are denoted SIR-$\omega$ and SEIR-$\omega$, and we focus attention on the early stages of an outbreak, where we derive expression for the basic reproduction number $R_0$ and the expected degree of the infectious nodes $E(D_I)$ using two different approximation approaches. The first approach approximates the early spread of an epidemic by a branching process, whereas the second one uses pair approximation. The expressions are compared with the corresponding empirical means obtained from stochastic simulations of SIR-$\omega$ and SEIR-$\omega$ epidemics on Poisson and scale-free networks. Without rewiring of exposed nodes, the two approaches predict the same epidemic threshold and the same $E(D_I)$ for both types of epidemics, the latter being very close to the mean degree obtained from simulated epidemics over Poisson networks. Above the epidemic threshold, pairwise models overestimate the value of $R_0$ computed from simulations, which turns out to be very close to the one predicted by the branching process approximation. When exposed individuals also rewire with $\alpha > 0$ (perhaps unaware of being infected), the two approaches give different epidemic thresholds, with the branching process approximation being more in agreement with simulations.
[ { "created": "Tue, 1 Dec 2015 17:17:06 GMT", "version": "v1" }, { "created": "Tue, 22 Mar 2016 14:34:47 GMT", "version": "v2" }, { "created": "Tue, 18 Oct 2016 17:43:35 GMT", "version": "v3" } ]
2016-11-15
[ [ "Britton", "Tom", "" ], [ "Juher", "David", "" ], [ "Saldana", "Joan", "" ] ]
This paper is concerned with stochastic SIR and SEIR epidemic models on random networks in which individuals may rewire away from infected neighbors at some rate $\omega$ (and reconnect to non-infectious individuals with probability $\alpha$ or else simply drop the edge if $\alpha=0$), so-called preventive rewiring. The models are denoted SIR-$\omega$ and SEIR-$\omega$, and we focus attention on the early stages of an outbreak, where we derive expressions for the basic reproduction number $R_0$ and the expected degree of the infectious nodes $E(D_I)$ using two different approximation approaches. The first approach approximates the early spread of an epidemic by a branching process, whereas the second one uses pair approximation. The expressions are compared with the corresponding empirical means obtained from stochastic simulations of SIR-$\omega$ and SEIR-$\omega$ epidemics on Poisson and scale-free networks. Without rewiring of exposed nodes, the two approaches predict the same epidemic threshold and the same $E(D_I)$ for both types of epidemics, the latter being very close to the mean degree obtained from simulated epidemics over Poisson networks. Above the epidemic threshold, pairwise models overestimate the value of $R_0$ computed from simulations, which turns out to be very close to the one predicted by the branching process approximation. When exposed individuals also rewire with $\alpha > 0$ (perhaps unaware of being infected), the two approaches give different epidemic thresholds, with the branching process approximation being more in agreement with simulations.
0912.2536
Richard A Neher
Richard A. Neher and Thomas Leitner
Recombination rate and selection strength in HIV intra-patient evolution
to appear in PLoS Computational Biology
PLoS Comput Biol, 2010, 6(1): e1000660
10.1371/journal.pcbi.1000660
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolutionary dynamics of HIV during the chronic phase of infection is driven by the host immune response and by selective pressures exerted through drug treatment. To understand and model the evolution of HIV quantitatively, the parameters governing genetic diversification and the strength of selection need to be known. While mutation rates can be measured in single replication cycles, the relevant effective recombination rate depends on the probability of coinfection of a cell with more than one virus and can only be inferred from population data. However, most population genetic estimators for recombination rates assume absence of selection and are hence of limited applicability to HIV, since positive and purifying selection are important in HIV evolution. Here, we estimate the rate of recombination and the distribution of selection coefficients from time-resolved sequence data tracking the evolution of HIV within single patients. By examining temporal changes in the genetic composition of the population, we estimate the effective recombination to be r=1.4e-5 recombinations per site and generation. Furthermore, we provide evidence that selection coefficients of at least 15% of the observed non-synonymous polymorphisms exceed 0.8% per generation. These results provide a basis for a more detailed understanding of the evolution of HIV. A particularly interesting case is evolution in response to drug treatment, where recombination can facilitate the rapid acquisition of multiple resistance mutations. With the methods developed here, more precise and more detailed studies will be possible, as soon as data with higher time resolution and greater sample sizes is available.
[ { "created": "Sun, 13 Dec 2009 19:54:54 GMT", "version": "v1" } ]
2012-08-01
[ [ "Neher", "Richard A.", "" ], [ "Leitner", "Thomas", "" ] ]
The evolutionary dynamics of HIV during the chronic phase of infection is driven by the host immune response and by selective pressures exerted through drug treatment. To understand and model the evolution of HIV quantitatively, the parameters governing genetic diversification and the strength of selection need to be known. While mutation rates can be measured in single replication cycles, the relevant effective recombination rate depends on the probability of coinfection of a cell with more than one virus and can only be inferred from population data. However, most population genetic estimators for recombination rates assume absence of selection and are hence of limited applicability to HIV, since positive and purifying selection are important in HIV evolution. Here, we estimate the rate of recombination and the distribution of selection coefficients from time-resolved sequence data tracking the evolution of HIV within single patients. By examining temporal changes in the genetic composition of the population, we estimate the effective recombination rate to be r=1.4e-5 recombinations per site and generation. Furthermore, we provide evidence that selection coefficients of at least 15% of the observed non-synonymous polymorphisms exceed 0.8% per generation. These results provide a basis for a more detailed understanding of the evolution of HIV. A particularly interesting case is evolution in response to drug treatment, where recombination can facilitate the rapid acquisition of multiple resistance mutations. With the methods developed here, more precise and more detailed studies will be possible as soon as data with higher time resolution and greater sample sizes become available.
1905.02405
Elisa Castaldi
Elisa Castaldi, Claudia Lunghi and Maria Concetta Morrone
Neuroplasticity in adult human visual cortex
28 pages, 2 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Between 1 to 5 out of 100 people worldwide has never experienced normotypic vision due to a condition called amblyopia, and about 1 out of 4000 suffer from inherited retinal dystrophies that progressively lead them to blindness. While a wide range of technologies and therapies are being developed to restore vision, a fundamental question still remains unanswered: would the adult visual brain retain a sufficient plastic potential to learn how to see after a prolonged period of abnormal visual experience? In this review we summarize studies showing that the visual brain of sighted adults retains a type of developmental plasticity, called homeostatic plasticity, and this property has been recently exploited successfully for adult amblyopia recover. Next, we discuss how the brain circuits reorganizes when visual stimulation is partially restored by means of a bionic eye in late blinds with Retinitis Pigmentosa. The primary visual cortex in these patients slowly became activated by the artificial visual stimulation, indicating that sight restoration therapies can rely on a considerable degree of spared plasticity in adulthood.
[ { "created": "Tue, 7 May 2019 08:37:31 GMT", "version": "v1" } ]
2019-05-08
[ [ "Castaldi", "Elisa", "" ], [ "Lunghi", "Claudia", "" ], [ "Morrone", "Maria Concetta", "" ] ]
Between 1 and 5 out of 100 people worldwide have never experienced normotypic vision due to a condition called amblyopia, and about 1 out of 4000 suffer from inherited retinal dystrophies that progressively lead them to blindness. While a wide range of technologies and therapies are being developed to restore vision, a fundamental question still remains unanswered: would the adult visual brain retain a sufficient plastic potential to learn how to see after a prolonged period of abnormal visual experience? In this review we summarize studies showing that the visual brain of sighted adults retains a type of developmental plasticity, called homeostatic plasticity, and this property has been recently exploited successfully for adult amblyopia recovery. Next, we discuss how brain circuits reorganize when visual stimulation is partially restored by means of a bionic eye in late-blind patients with Retinitis Pigmentosa. The primary visual cortex in these patients slowly became activated by the artificial visual stimulation, indicating that sight restoration therapies can rely on a considerable degree of spared plasticity in adulthood.
q-bio/0406042
Istvan Albert
Istvan Albert, Reka Albert
Conserved network motifs allow protein-protein interaction prediction
null
null
null
null
q-bio.MN q-bio.GN
null
High-throughput protein interaction detection methods are strongly affected by false positive and false negative results. Focused experiments are needed to complement the large-scale methods by validating previously detected interactions but it is often difficult to decide which proteins to probe as interaction partners. Developing reliable computational methods assisting this decision process is a pressing need in bioinformatics. We show that we can use the conserved properties of the protein network to identify and validate interaction candidates. We apply a number of machine learning algorithms to the protein connectivity information and achieve a surprisingly good overall performance in predicting interacting proteins. Using a 'leave-one-out' approach we find average success rates between 20-50% for predicting the correct interaction partner of a protein. We demonstrate that the success of these methods is based on the presence of conserved interaction motifs within the network. A reference implementation and a table with candidate interacting partners for each yeast protein are available at http://www.protsuggest.org
[ { "created": "Tue, 22 Jun 2004 16:53:32 GMT", "version": "v1" } ]
2007-05-23
[ [ "Albert", "Istvan", "" ], [ "Albert", "Reka", "" ] ]
High-throughput protein interaction detection methods are strongly affected by false positive and false negative results. Focused experiments are needed to complement the large-scale methods by validating previously detected interactions but it is often difficult to decide which proteins to probe as interaction partners. Developing reliable computational methods assisting this decision process is a pressing need in bioinformatics. We show that we can use the conserved properties of the protein network to identify and validate interaction candidates. We apply a number of machine learning algorithms to the protein connectivity information and achieve a surprisingly good overall performance in predicting interacting proteins. Using a 'leave-one-out' approach we find average success rates between 20-50% for predicting the correct interaction partner of a protein. We demonstrate that the success of these methods is based on the presence of conserved interaction motifs within the network. A reference implementation and a table with candidate interacting partners for each yeast protein are available at http://www.protsuggest.org
2003.10962
J. C. Phillips
J. C. Phillips
Self-Organized Networks: Darwinian Evolution of Dynein Rings, Stalks and Stalk Heads
17 pages,6 figures
null
null
null
q-bio.MN q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cytoskeletons are self organized networks based on polymerized proteins, actin, tubulin, and driven by motor proteins, such as myosin, kinesin and dynein. Their positive Darwinian evolution enables them to approach optimized functionality self organized criticality. Dynein has three distinct titled subunits, but how these units connect to function as a molecular motor is mysterious. Dynein binds to tubulin through two coiled coil stalks and a stalk head. The energy used to alter the head binding and propel cargo along tubulin is supplied by ATP at a ring 1500 amino acids away.
[ { "created": "Sat, 21 Mar 2020 23:42:36 GMT", "version": "v1" } ]
2020-03-25
[ [ "Phillips", "J. C.", "" ] ]
Cytoskeletons are self-organized networks based on polymerized proteins, actin, tubulin, and driven by motor proteins, such as myosin, kinesin and dynein. Their positive Darwinian evolution enables them to approach optimized functionality via self-organized criticality. Dynein has three distinct titled subunits, but how these units connect to function as a molecular motor is mysterious. Dynein binds to tubulin through two coiled-coil stalks and a stalk head. The energy used to alter the head binding and propel cargo along tubulin is supplied by ATP at a ring 1500 amino acids away.
1905.03557
Eleonore Padovani
Jean-David Gothi\'e (ERE), Barbara Demeneix (ERE), Sylvie Remaud (ERE)
Comparative approaches to understanding thyroid hormone regulation of neurogenesis
Molecular and Cellular Endocrinology, Elsevier, 2017
null
10.1016/j.mce.2017.05.020
null
q-bio.SC q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thyroid hormone (TH) signalling, an evolutionary conserved pathway, is crucial for brain function and cognition throughout life, from early development to ageing. In humans, TH deficiency during pregnancy alters offspring brain development, increasing the risk of cognitive disorders. How TH regulates neurogenesis and subsequent behaviour and cognitive functions remains a major research challenge. Cellular and molecular mechanisms underlying TH signalling on proliferation, survival, determination, migration, differentiation and maturation have been studied in mammalian animal models for over a century. However, recent data show that THs also influence embryonic and adult neurogenesis throughout vertebrates (from mammals to teleosts). These latest observations raise the question of how TH availability is controlled during neurogenesis and particularly in specific neural stem cell populations. This review deals with the role of TH in regulating neurogenesis in the developing and the adult brain across different vertebrate species. Such evo-devo approaches can shed new light on (i) the evolution of the nervous system and (ii) the evolutionary control of neurogenesis by TH across animal phyla. We also discuss the role of thyroid disruptors on brain development in an evolutionary context.
[ { "created": "Thu, 9 May 2019 12:07:04 GMT", "version": "v1" } ]
2019-05-10
[ [ "Gothié", "Jean-David", "", "ERE" ], [ "Demeneix", "Barbara", "", "ERE" ], [ "Remaud", "Sylvie", "", "ERE" ] ]
Thyroid hormone (TH) signalling, an evolutionarily conserved pathway, is crucial for brain function and cognition throughout life, from early development to ageing. In humans, TH deficiency during pregnancy alters offspring brain development, increasing the risk of cognitive disorders. How TH regulates neurogenesis and subsequent behaviour and cognitive functions remains a major research challenge. Cellular and molecular mechanisms underlying TH signalling on proliferation, survival, determination, migration, differentiation and maturation have been studied in mammalian animal models for over a century. However, recent data show that THs also influence embryonic and adult neurogenesis throughout vertebrates (from mammals to teleosts). These latest observations raise the question of how TH availability is controlled during neurogenesis and particularly in specific neural stem cell populations. This review deals with the role of TH in regulating neurogenesis in the developing and the adult brain across different vertebrate species. Such evo-devo approaches can shed new light on (i) the evolution of the nervous system and (ii) the evolutionary control of neurogenesis by TH across animal phyla. We also discuss the role of thyroid disruptors on brain development in an evolutionary context.
1806.03500
Boran Zhou
Xiaoming Zhang, Boran Zhou, Thomas Osborn, Brian Bartholmai, Sanjay Kalra
Lung ultrasound surface wave elastography for assessing interstitial lung disease
11 pages, 5 figures
null
10.1121/1.5101138
null
q-bio.TO eess.SP physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lung ultrasound surface wave elastography (LUSWE) is a novel noninvasive technique for measuring superficial lung tissue stiffness.The purpose of this study was to translate LUSWE for assessing patients with interstitial lung disease (ILD) and various connective diseases including systemic sclerosis (SSc).In this study, LUSWE was used to measure the surface wave speed of lung at 100 Hz, 150 Hz and 200 Hz through six intercostal lung spaces for 91 patients with ILD and 30 healthy control subjects. In addition, skin viscoelasticity was measured at both forearms and upper arms for patients and controls. The surface wave speeds of patients' lungs were significantly higher than those of control subjects for the six intercostal spaces and the three excitation frequencies. Patient skin elasticity and viscosity were significantly higher than those of control subjects for the four locations on the arm. In dividing ILD patients into two groups, ILD patients with SSc and ILD patients without SSc, significant differences between each patient group with the control group were found for both the lung and skin.No significant differences were found between the two patients group, although there were some differences at a few locations and at 100 Hz. LUSWE may be useful for assessing ILD and SSc and screening early stage patients.
[ { "created": "Sat, 9 Jun 2018 16:12:26 GMT", "version": "v1" } ]
2019-05-22
[ [ "Zhang", "Xiaoming", "" ], [ "Zhou", "Boran", "" ], [ "Osborn", "Thomas", "" ], [ "Bartholmai", "Brian", "" ], [ "Kalra", "Sanjay", "" ] ]
Lung ultrasound surface wave elastography (LUSWE) is a novel noninvasive technique for measuring superficial lung tissue stiffness. The purpose of this study was to translate LUSWE for assessing patients with interstitial lung disease (ILD) and various connective tissue diseases including systemic sclerosis (SSc). In this study, LUSWE was used to measure the surface wave speed of lung at 100 Hz, 150 Hz and 200 Hz through six intercostal lung spaces for 91 patients with ILD and 30 healthy control subjects. In addition, skin viscoelasticity was measured at both forearms and upper arms for patients and controls. The surface wave speeds of patients' lungs were significantly higher than those of control subjects for the six intercostal spaces and the three excitation frequencies. Patient skin elasticity and viscosity were significantly higher than those of control subjects for the four locations on the arm. In dividing ILD patients into two groups, ILD patients with SSc and ILD patients without SSc, significant differences between each patient group and the control group were found for both the lung and skin. No significant differences were found between the two patient groups, although there were some differences at a few locations and at 100 Hz. LUSWE may be useful for assessing ILD and SSc and screening early-stage patients.
0711.1751
Charles H. Lineweaver
Charles H. Lineweaver
Paleontological Tests: Human-like Intelligence is not a Convergent Feature of Evolution
14 pages, 6 figures, to be published in "From Fossils to Astrobiology" Edt J. Seckbach and M. Walsh, Springer 2008
null
null
null
q-bio.PE
null
We critically examine the evidence for the idea that encephalization quotients increase with time. We find that human-like intelligence is not a convergent feature of evolution. Implications for the search for extraterrestrial intelligence are discussed.
[ { "created": "Mon, 12 Nov 2007 11:36:29 GMT", "version": "v1" } ]
2007-11-13
[ [ "Lineweaver", "Charles H.", "" ] ]
We critically examine the evidence for the idea that encephalization quotients increase with time. We find that human-like intelligence is not a convergent feature of evolution. Implications for the search for extraterrestrial intelligence are discussed.
2405.07110
Cedric Chauve
Cedric Chauve and Caroline Colijn and Louxin Zhang
A Vector Representation for Phylogenetic Trees
null
null
null
null
q-bio.PE cs.DS math.CO
http://creativecommons.org/licenses/by/4.0/
Good representations for phylogenetic trees and networks are important for optimizing storage efficiency and implementation of scalable methods for the inference and analysis of evolutionary trees for genes, genomes and species. We introduce a new representation for rooted phylogenetic trees that encodes a binary tree on n taxa as a vector of length 2n in which each taxon appears exactly twice. Using this new tree representation, we introduce a novel tree rearrangement operator, called a HOP, that results in a tree space of diameter n and a quadratic neighbourhood size. We also introduce a novel metric, the HOP distance, which is the minimum number of HOPs to transform a tree into another tree. The HOP distance can be computed in near-linear time, a rare instance of a tree rearrangement distance that is tractable. Our experiments show that the HOP distance is better correlated to the Subtree-Prune-and-Regraft distance than the widely used Robinson-Foulds distance. We also describe how the novel tree representation we introduce can be further generalized to tree-child networks.
[ { "created": "Sat, 11 May 2024 22:59:57 GMT", "version": "v1" } ]
2024-05-14
[ [ "Chauve", "Cedric", "" ], [ "Colijn", "Caroline", "" ], [ "Zhang", "Louxin", "" ] ]
Good representations for phylogenetic trees and networks are important for optimizing storage efficiency and implementation of scalable methods for the inference and analysis of evolutionary trees for genes, genomes and species. We introduce a new representation for rooted phylogenetic trees that encodes a binary tree on n taxa as a vector of length 2n in which each taxon appears exactly twice. Using this new tree representation, we introduce a novel tree rearrangement operator, called a HOP, that results in a tree space of diameter n and a quadratic neighbourhood size. We also introduce a novel metric, the HOP distance, which is the minimum number of HOPs to transform a tree into another tree. The HOP distance can be computed in near-linear time, a rare instance of a tree rearrangement distance that is tractable. Our experiments show that the HOP distance is better correlated to the Subtree-Prune-and-Regraft distance than the widely used Robinson-Foulds distance. We also describe how the novel tree representation we introduce can be further generalized to tree-child networks.
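The abstract above reports that the HOP distance correlates better with the Subtree-Prune-and-Regraft distance than the widely used Robinson-Foulds (RF) distance. For background, a minimal sketch of the RF baseline for rooted trees (the symmetric difference of the two clade sets); the nested-tuple tree encoding is an illustrative assumption here, not the paper's length-2n vector representation.

```python
# Background sketch: Robinson-Foulds (RF) distance for rooted trees,
# the baseline metric the abstract compares the HOP distance against.
# The nested-tuple tree format is an illustrative assumption, not the
# paper's length-2n vector representation.

def clade_set(tree):
    """Return the set of clades (leaf sets of internal nodes) of a tree
    given as nested tuples, e.g. (("a", "b"), ("c", "d"))."""
    out = set()

    def walk(node):
        if isinstance(node, tuple):
            leaves = frozenset().union(*(walk(child) for child in node))
            out.add(leaves)
            return leaves
        return frozenset([node])

    walk(tree)
    return out

def rf_distance(t1, t2):
    """RF distance: size of the symmetric difference of the clade sets."""
    return len(clade_set(t1) ^ clade_set(t2))

print(rf_distance((("a", "b"), ("c", "d")), (("a", "c"), ("b", "d"))))  # 4
```

Two cherry trees sharing only the root clade differ in all four non-root clades, hence distance 4; identical trees are at distance 0.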
2101.00570
Shen Jia
Shen Jia, Yulin Zhang, Yiming Mao, Jiawei Gao, Yixuan Chen, Yuxuan Jiang, Haochen Luo, Kebo Lv, Jionglong Su
A new parsimonious method for classifying Cancer Tissue-of-Origin Based on DNA Methylation 450K data
39 pages
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
DNA methylation is a well-studied genetic modification that regulates gene transcription in eukaryotes. Its alterations have been recognized as a significant component of cancer development. In this study, we use the DNA methylation 450k data from The Cancer Genome Atlas to evaluate the efficacy of DNA methylation data for cancer classification across 30 cancer types. We propose a new method for gene selection in high-dimensional data (over 450 thousand features). Variance filtering is first introduced for dimension reduction, and recursive feature elimination (RFE) is then used for feature selection. We address the problem of selecting a small subset of genes from a large number of methylated sites, and our parsimonious model is demonstrated to be efficient, achieving an accuracy over 91%, outperforming other studies which use DNA micro-arrays and RNA-seq data. The performance of 20 models, which are based on 4 estimators (Random Forest, Decision Tree, Extra Tree and Support Vector Machine) and 5 classifiers (k-Nearest Neighbours, Support Vector Machine, XGBoost, LightGBM and Multi-Layer Perceptron), is compared, and the robustness of the RFE algorithm is examined. Results suggest that the combined model of Extra Tree plus CatBoost classifier offers the best performance in cancer identification, with an overall validation accuracy of 91%, 92.3%, 93.3% and 93.5% for 20, 30, 40 and 50 features respectively. The biological functions in cancer development of the 50 selected genes are also explored through enrichment analysis, and the results show that 12 out of 16 of our top features have already been identified as cancer-specific; we also propose some more genes to be tested in future studies. Therefore, our method may be utilized as an auxiliary diagnostic method to determine the actual clinicopathological status of a specific cancer.
[ { "created": "Sun, 3 Jan 2021 07:28:37 GMT", "version": "v1" } ]
2021-01-05
[ [ "Jia", "Shen", "" ], [ "Zhang", "Yulin", "" ], [ "Mao", "Yiming", "" ], [ "Gao", "Jiawei", "" ], [ "Chen", "Yixuan", "" ], [ "Jiang", "Yuxuan", "" ], [ "Luo", "Haochen", "" ], [ "Lv", "Kebo", "" ], [ "Su", "Jionglong", "" ] ]
DNA methylation is a well-studied genetic modification that regulates gene transcription in eukaryotes. Its alterations have been recognized as a significant component of cancer development. In this study, we use the DNA methylation 450k data from The Cancer Genome Atlas to evaluate the efficacy of DNA methylation data for cancer classification across 30 cancer types. We propose a new method for gene selection in high-dimensional data (over 450 thousand features). Variance filtering is first introduced for dimension reduction, and recursive feature elimination (RFE) is then used for feature selection. We address the problem of selecting a small subset of genes from a large number of methylated sites, and our parsimonious model is demonstrated to be efficient, achieving an accuracy over 91%, outperforming other studies which use DNA micro-arrays and RNA-seq data. The performance of 20 models, which are based on 4 estimators (Random Forest, Decision Tree, Extra Tree and Support Vector Machine) and 5 classifiers (k-Nearest Neighbours, Support Vector Machine, XGBoost, LightGBM and Multi-Layer Perceptron), is compared, and the robustness of the RFE algorithm is examined. Results suggest that the combined model of Extra Tree plus CatBoost classifier offers the best performance in cancer identification, with an overall validation accuracy of 91%, 92.3%, 93.3% and 93.5% for 20, 30, 40 and 50 features respectively. The biological functions in cancer development of the 50 selected genes are also explored through enrichment analysis, and the results show that 12 out of 16 of our top features have already been identified as cancer-specific; we also propose some more genes to be tested in future studies. Therefore, our method may be utilized as an auxiliary diagnostic method to determine the actual clinicopathological status of a specific cancer.
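The two-stage pipeline this abstract describes (variance filtering for dimension reduction, then RFE for feature selection) can be sketched in miniature. This is a toy illustration with a least-squares scorer on synthetic data, not the authors' estimators, classifiers, or the 450K methylation data.

```python
# Toy sketch of the two-stage pipeline in the abstract: variance
# filtering, then recursive feature elimination (RFE). The least-squares
# scorer and synthetic data are illustrative assumptions, not the
# authors' tree/SVM estimators or the TCGA 450K data.
import numpy as np

def variance_filter(X, keep):
    """Dimension-reduction step: keep the `keep` highest-variance columns."""
    top = np.argsort(X.var(axis=0))[::-1][:keep]
    return np.sort(top)

def rfe(X, y, n_select):
    """Toy RFE: fit least-squares weights on the remaining columns and
    repeatedly drop the feature with the smallest |weight|."""
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_select:
        w, *_ = np.linalg.lstsq(X[:, remaining], y, rcond=None)
        remaining.pop(int(np.argmin(np.abs(w))))
    return remaining

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 2]            # only column 2 is informative
print(rfe(X, y, 1))          # [2]
```

In a real pipeline the refit inside the loop would use the estimator's own feature importances (e.g. tree-based importances), but the elimination schedule is the same.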
1409.5788
Yuko Okamoto
Ryo Urano (Nagoya University), Hironori Kokubo (SOKENDAI), and Yuko Okamoto (Nagoya University)
Predictions of tertiary structures of $\alpha$-helical membrane proteins by replica-exchange method with consideration of helix deformations
11 pages, (Revtex4-1), 10 figures
null
null
null
q-bio.BM cond-mat.soft cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an improved method for predicting the tertiary structures of $\alpha$-helical membrane proteins, based on the replica-exchange method and taking helix deformations into account. Our method allows for wide application because the transmembrane helices of native membrane proteins are often distorted. In order to test the effectiveness of the present method, we applied it to the structure predictions of glycophorin A and phospholamban. The results were in accord with experiments.
[ { "created": "Fri, 19 Sep 2014 00:07:26 GMT", "version": "v1" } ]
2014-09-23
[ [ "Urano", "Ryo", "", "Nagoya University" ], [ "Kokubo", "Hironori", "", "SOKENDAI" ], [ "Okamoto", "Yuko", "", "Nagoya University" ] ]
We propose an improved method for predicting the tertiary structures of $\alpha$-helical membrane proteins, based on the replica-exchange method and taking helix deformations into account. Our method allows for wide application because the transmembrane helices of native membrane proteins are often distorted. In order to test the effectiveness of the present method, we applied it to the structure predictions of glycophorin A and phospholamban. The results were in accord with experiments.
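For context, the exchange step of a standard replica-exchange scheme can be sketched as a Metropolis swap between two inverse temperatures. This is the generic temperature-exchange acceptance rule, not the paper's treatment of helix deformations or membrane proteins.

```python
# Generic replica-exchange swap acceptance (Metropolis criterion),
# shown only as background for the replica-exchange method named in
# the abstract; it does not encode the paper's helix-deformation moves.
import math
import random

def swap_accepted(beta_i, beta_j, e_i, e_j, rng=random.random):
    """Accept an exchange of configurations between replicas at inverse
    temperatures beta_i, beta_j with potential energies e_i, e_j with
    probability min(1, exp[(beta_i - beta_j) * (e_i - e_j)])."""
    delta = (beta_i - beta_j) * (e_i - e_j)
    return delta >= 0.0 or rng() < math.exp(delta)

# A favourable swap (the colder replica holds the higher energy) is
# always accepted:
print(swap_accepted(1.0, 0.5, 5.0, 2.0))  # True
```

Unfavourable swaps are accepted only with probability exp(delta), which is what lets hot replicas feed decorrelated configurations down to the cold ones.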
0711.2654
Maksim Kouza M
Maksim Kouza, Chin-Kun Hu and Mai Suan Li,
New force replica exchange method and protein folding pathways probed by force-clamp technique
37 pages, 1 table, 11 figures, accepted for publication in JCP
null
10.1063/1.2822272
null
q-bio.BM
null
We have developed a new extended replica exchange method to study the thermodynamics of a system in the presence of an external force. Our idea is based on exchanges between different force replicas to accelerate equilibration. We have shown that the refolding pathways of single ubiquitin depend on which terminus is fixed. If the N-end is fixed then the folding pathways are different compared to the case when both termini are free, but fixing the C-terminus does not change them. Surprisingly, we have found that the anchoring terminus does not affect the pathways of individual secondary structures of three-domain ubiquitin, indicating the important role of the multi-domain construction. Therefore, force-clamp experiments, in which one end of a protein is kept fixed, can probe the refolding pathways of a single free-end ubiquitin if one uses either the poly-ubiquitin or a single domain with the C-terminus anchored. However, it is shown that anchoring one end does not affect the refolding pathways of the titin domain I27, and force-clamp spectroscopy is always capable of predicting the folding sequencing of this protein. We have obtained a reasonable estimate for the unfolding barrier of ubiquitin. The linkage between residue Lys48 and the C-terminus of ubiquitin is found to have a dramatic effect on the location of the transition state along the end-to-end distance reaction coordinate, but the multi-domain construction leaves the transition state almost unchanged. We have found that the maximum force in the force-extension profile from constant-velocity force pulling simulations depends on temperature nonlinearly. However, for some narrow temperature interval this dependence becomes linear, as has been observed in recent experiments.
[ { "created": "Fri, 16 Nov 2007 17:21:59 GMT", "version": "v1" } ]
2009-11-13
[ [ "Kouza", "Maksim", "" ], [ "Hu", "Chin-Kun", "" ], [ "Li", "Mai Suan", "" ] ]
We have developed a new extended replica exchange method to study the thermodynamics of a system in the presence of an external force. Our idea is based on exchanges between different force replicas to accelerate equilibration. We have shown that the refolding pathways of single ubiquitin depend on which terminus is fixed. If the N-end is fixed then the folding pathways are different compared to the case when both termini are free, but fixing the C-terminus does not change them. Surprisingly, we have found that the anchoring terminus does not affect the pathways of individual secondary structures of three-domain ubiquitin, indicating the important role of the multi-domain construction. Therefore, force-clamp experiments, in which one end of a protein is kept fixed, can probe the refolding pathways of a single free-end ubiquitin if one uses either the poly-ubiquitin or a single domain with the C-terminus anchored. However, it is shown that anchoring one end does not affect the refolding pathways of the titin domain I27, and force-clamp spectroscopy is always capable of predicting the folding sequencing of this protein. We have obtained a reasonable estimate for the unfolding barrier of ubiquitin. The linkage between residue Lys48 and the C-terminus of ubiquitin is found to have a dramatic effect on the location of the transition state along the end-to-end distance reaction coordinate, but the multi-domain construction leaves the transition state almost unchanged. We have found that the maximum force in the force-extension profile from constant-velocity force pulling simulations depends on temperature nonlinearly. However, for some narrow temperature interval this dependence becomes linear, as has been observed in recent experiments.
2105.05049
Charlotta Bengtson
Charlotta Bengtson and Annemie Bogaerts
The quest to quantify selective and synergistic effects of plasma for cancer treatment: Insights from mathematical modeling
null
Int. J. Mol. Sci. 22(9), 5033 (2021)
10.3390/ijms22095033
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cold atmospheric plasma (CAP) and plasma-treated liquids (PTLs) have recently become a promising option for cancer treatment, but the underlying mechanisms of the anti-cancer effect are still to a large extent unknown. Although hydrogen peroxide has been recognized as the major anti-cancer agent of PTL and may enable selectivity in a certain concentration regime, the co-existence of nitrite can create a synergistic effect. We develop a mathematical model to describe the key species and features of the cellular response towards PTL. From the numerical solutions, we define a number of dependent variables, which represent feasible measures to quantify cell susceptibility in terms of the hydrogen peroxide membrane diffusion rate constant and the intracellular catalase concentration. For each of these dependent variables, we investigate the regimes of selective versus non-selective, and of synergistic versus non-synergistic effect to evaluate their potential role as a measure of cell susceptibility. Our results suggest that the maximal intracellular hydrogen peroxide concentration, which in the selective regime is almost four times greater for the most susceptible cells compared to the most resistant cells, could be used to quantify the cell susceptibility towards exogenous hydrogen peroxide. We believe our theoretical approach brings novelty to the field of plasma oncology, and more broadly, to the field of redox biology, by proposing new ways to quantify the selective and synergistic anti-cancer effect of PTL in terms of inherent cell features.
[ { "created": "Tue, 11 May 2021 13:55:38 GMT", "version": "v1" } ]
2021-05-12
[ [ "Bengtson", "Charlotta", "" ], [ "Bogaerts", "Annemie", "" ] ]
Cold atmospheric plasma (CAP) and plasma-treated liquids (PTLs) have recently become a promising option for cancer treatment, but the underlying mechanisms of the anti-cancer effect are still to a large extent unknown. Although hydrogen peroxide has been recognized as the major anti-cancer agent of PTL and may enable selectivity in a certain concentration regime, the co-existence of nitrite can create a synergistic effect. We develop a mathematical model to describe the key species and features of the cellular response towards PTL. From the numerical solutions, we define a number of dependent variables, which represent feasible measures to quantify cell susceptibility in terms of the hydrogen peroxide membrane diffusion rate constant and the intracellular catalase concentration. For each of these dependent variables, we investigate the regimes of selective versus non-selective, and of synergistic versus non-synergistic effect to evaluate their potential role as a measure of cell susceptibility. Our results suggest that the maximal intracellular hydrogen peroxide concentration, which in the selective regime is almost four times greater for the most susceptible cells compared to the most resistant cells, could be used to quantify the cell susceptibility towards exogenous hydrogen peroxide. We believe our theoretical approach brings novelty to the field of plasma oncology, and more broadly, to the field of redox biology, by proposing new ways to quantify the selective and synergistic anti-cancer effect of PTL in terms of inherent cell features.
2209.01942
Yilin Zhu
Yilin Zhu and Jiayu Shang and Cheng Peng and Yanni Sun
Phage family classification under Caudoviricetes: a review of current tools using the latest ICTV classification framework
13 pages, 6 figures, 6 tables
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-nd/4.0/
Bacteriophages, which are viruses infecting bacteria, are the most ubiquitous and diverse entities in the biosphere. There is accumulating evidence revealing their important roles in shaping the structure of various microbiomes. Thanks to (viral) metagenomic sequencing, a large number of new bacteriophages have been discovered. However, lacking a standard and automatic virus classification pipeline, the taxonomic characterization of new viruses seriously lags behind the sequencing efforts. In particular, according to the latest version of the ICTV taxonomy, several large phage families in the previous classification system have been removed. Therefore, a comprehensive review and comparison of taxonomic classification tools under the new standard are needed to establish the state of the art. In this work, we retrained and tested four recently published tools on newly labeled databases. We demonstrated their utilities and tested them on multiple datasets, including RefSeq, short contigs, simulated metagenomic datasets, and low-similarity datasets. This study provides a comprehensive review of phage family classification in different scenarios and practical guidance for choosing appropriate taxonomic classification pipelines. To the best of our knowledge, this is the first review conducted under the new ICTV classification framework. The results show that the new family classification framework overall leads to better-conserved groups and thus makes family-level classification more feasible.
[ { "created": "Mon, 5 Sep 2022 12:44:53 GMT", "version": "v1" }, { "created": "Wed, 23 Nov 2022 10:57:53 GMT", "version": "v2" } ]
2022-11-24
[ [ "Zhu", "Yilin", "" ], [ "Shang", "Jiayu", "" ], [ "Peng", "Cheng", "" ], [ "Sun", "Yanni", "" ] ]
Bacteriophages, which are viruses infecting bacteria, are the most ubiquitous and diverse entities in the biosphere. There is accumulating evidence revealing their important roles in shaping the structure of various microbiomes. Thanks to (viral) metagenomic sequencing, a large number of new bacteriophages have been discovered. However, lacking a standard and automatic virus classification pipeline, the taxonomic characterization of new viruses seriously lags behind the sequencing efforts. In particular, according to the latest version of the ICTV taxonomy, several large phage families in the previous classification system have been removed. Therefore, a comprehensive review and comparison of taxonomic classification tools under the new standard are needed to establish the state of the art. In this work, we retrained and tested four recently published tools on newly labeled databases. We demonstrated their utilities and tested them on multiple datasets, including RefSeq, short contigs, simulated metagenomic datasets, and low-similarity datasets. This study provides a comprehensive review of phage family classification in different scenarios and practical guidance for choosing appropriate taxonomic classification pipelines. To the best of our knowledge, this is the first review conducted under the new ICTV classification framework. The results show that the new family classification framework overall leads to better-conserved groups and thus makes family-level classification more feasible.
1201.2366
Santiago Trevi\~no III
Santiago Trevi\~no and Yudong Sun and Tim F. Cooper and Kevin E. Bassler
Robust Detection of Hierarchical Communities from Escherichia coli Gene Expression Data
Due to appear in PLoS Computational Biology. Supplementary Figure S1 was not uploaded but is available by contacting the author. 27 pages, 5 figures, 15 supplementary files
null
10.1371/journal.pcbi.1002391
null
q-bio.QM cond-mat.stat-mech physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining the functional structure of biological networks is a central goal of systems biology. One approach is to analyze gene expression data to infer a network of gene interactions on the basis of their correlated responses to environmental and genetic perturbations. The inferred network can then be analyzed to identify functional communities. However, commonly used algorithms can yield unreliable results due to experimental noise, algorithmic stochasticity, and the influence of arbitrarily chosen parameter values. Furthermore, the results obtained typically provide only a simplistic view of the network partitioned into disjoint communities and provide no information of the relationship between communities. Here, we present methods to robustly detect coregulated and functionally enriched gene communities and demonstrate their application and validity for Escherichia coli gene expression data. Applying a recently developed community detection algorithm to the network of interactions identified with the context likelihood of relatedness (CLR) method, we show that a hierarchy of network communities can be identified. These communities significantly enrich for gene ontology (GO) terms, consistent with them representing biologically meaningful groups. Further, analysis of the most significantly enriched communities identified several candidate new regulatory interactions. The robustness of our methods is demonstrated by showing that a core set of functional communities is reliably found when artificial noise, modeling experimental noise, is added to the data. We find that noise mainly acts conservatively, increasing the relatedness required for a network link to be reliably assigned and decreasing the size of the core communities, rather than causing association of genes into new communities.
[ { "created": "Wed, 11 Jan 2012 18:12:04 GMT", "version": "v1" }, { "created": "Thu, 12 Jan 2012 02:21:09 GMT", "version": "v2" } ]
2015-06-03
[ [ "Treviño", "Santiago", "" ], [ "Sun", "Yudong", "" ], [ "Cooper", "Tim F.", "" ], [ "Bassler", "Kevin E.", "" ] ]
Determining the functional structure of biological networks is a central goal of systems biology. One approach is to analyze gene expression data to infer a network of gene interactions on the basis of their correlated responses to environmental and genetic perturbations. The inferred network can then be analyzed to identify functional communities. However, commonly used algorithms can yield unreliable results due to experimental noise, algorithmic stochasticity, and the influence of arbitrarily chosen parameter values. Furthermore, the results obtained typically provide only a simplistic view of the network partitioned into disjoint communities and provide no information of the relationship between communities. Here, we present methods to robustly detect coregulated and functionally enriched gene communities and demonstrate their application and validity for Escherichia coli gene expression data. Applying a recently developed community detection algorithm to the network of interactions identified with the context likelihood of relatedness (CLR) method, we show that a hierarchy of network communities can be identified. These communities significantly enrich for gene ontology (GO) terms, consistent with them representing biologically meaningful groups. Further, analysis of the most significantly enriched communities identified several candidate new regulatory interactions. The robustness of our methods is demonstrated by showing that a core set of functional communities is reliably found when artificial noise, modeling experimental noise, is added to the data. We find that noise mainly acts conservatively, increasing the relatedness required for a network link to be reliably assigned and decreasing the size of the core communities, rather than causing association of genes into new communities.
0905.4446
Steven Zucker
Pavel Dimitrov, Steven W. Zucker
Distance Maps and Plant Development #1: Uniform Production and Proportional Destruction
pdfLatex
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experimental data regarding auxin and venation formation exist at both macroscopic and molecular scales, and we attempt to unify them into a comprehensive model for venation formation. We begin with a set of principles to guide an abstract model of venation formation, from which we show how patterns in plant development are related to the representation of global distance information locally as cellular-level signals. Venation formation, in particular, is a function of distances between cells and their locations. The first principle, that auxin is produced at a constant rate in all cells, leads to a (Poisson) reaction-diffusion equation. Equilibrium solutions uniquely codify information about distances, thereby providing cells with the signal to begin differentiation from ground to vascular. A uniform destruction hypothesis and scaling by cell size leads to a more biologically-relevant (Helmholtz) model, and simulations demonstrate its capability to predict leaf and root auxin distributions and venation patterns. The mathematical development is centered on properties of the distance map, and provides a mechanism by which global information about shape can be presented locally to individual cells. The principles provide the foundation for an elaboration of these models in a companion paper \cite{plos-paper2}, and together they provide a framework for understanding organ- and plant-scale organization.
[ { "created": "Wed, 27 May 2009 15:18:52 GMT", "version": "v1" } ]
2009-05-28
[ [ "Dimitrov", "Pavel", "" ], [ "Zucker", "Steven W.", "" ] ]
Experimental data regarding auxin and venation formation exist at both macroscopic and molecular scales, and we attempt to unify them into a comprehensive model for venation formation. We begin with a set of principles to guide an abstract model of venation formation, from which we show how patterns in plant development are related to the representation of global distance information locally as cellular-level signals. Venation formation, in particular, is a function of distances between cells and their locations. The first principle, that auxin is produced at a constant rate in all cells, leads to a (Poisson) reaction-diffusion equation. Equilibrium solutions uniquely codify information about distances, thereby providing cells with the signal to begin differentiation from ground to vascular. A uniform destruction hypothesis and scaling by cell size leads to a more biologically-relevant (Helmholtz) model, and simulations demonstrate its capability to predict leaf and root auxin distributions and venation patterns. The mathematical development is centered on properties of the distance map, and provides a mechanism by which global information about shape can be presented locally to individual cells. The principles provide the foundation for an elaboration of these models in a companion paper \cite{plos-paper2}, and together they provide a framework for understanding organ- and plant-scale organization.
q-bio/0601039
Dietrich Stauffer
Marta Dembska, M. R. Dudek, D. Stauffer
Food-chain competition influences gene's size
11 pages including 7 figures
null
10.1016/j.physa.2006.03.043
null
q-bio.PE
null
We have analysed the effect of the Bak-Sneppen predator-prey food-chain self-organization on the nucleotide content of evolving species. In our model, the genomes of the species under consideration have been represented by their genomic nucleotide fraction, and we have applied the two-parameter Kimura model of substitutions to include the changes of the fraction in time. The initial nucleotide fraction and substitution rates were chosen with the help of a random number generator. The deviation of the genomic nucleotide fraction from its equilibrium value played the role of the fitness parameter, $B$, in the Bak-Sneppen model. Our finding is that the higher the value of the threshold fitness during the course of evolution, the more frequent are large fluctuations in the number of species with strongly differentiated nucleotide content; and it is more often the case that the oldest species, which survive the food-chain competition, might have a specific nucleotide fraction that makes it possible to generate long genes.
[ { "created": "Mon, 23 Jan 2006 15:50:47 GMT", "version": "v1" } ]
2009-11-13
[ [ "Dembska", "Marta", "" ], [ "Dudek", "M. R.", "" ], [ "Stauffer", "D.", "" ] ]
We have analysed the effect of the Bak-Sneppen predator-prey food-chain self-organization on the nucleotide content of evolving species. In our model, the genomes of the species under consideration have been represented by their genomic nucleotide fraction, and we have applied the two-parameter Kimura model of substitutions to include the changes of the fraction in time. The initial nucleotide fraction and substitution rates were chosen with the help of a random number generator. The deviation of the genomic nucleotide fraction from its equilibrium value played the role of the fitness parameter, $B$, in the Bak-Sneppen model. Our finding is that the higher the value of the threshold fitness during the course of evolution, the more frequent are large fluctuations in the number of species with strongly differentiated nucleotide content; and it is more often the case that the oldest species, which survive the food-chain competition, might have a specific nucleotide fraction that makes it possible to generate long genes.
1512.04925
Ingemar Kaj
Ingemar Kaj and Carina F. Mugal
The non-equilibrium allele frequency spectrum in a Poisson random field framework
null
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In population genetic studies, the allele frequency spectrum (AFS) efficiently summarizes genome-wide polymorphism data and shapes a variety of allele frequency-based summary statistics. While existing theory typically features equilibrium conditions, emerging methodology requires an analytical understanding of the build-up of the allele frequencies over time. In this work, we use the framework of Poisson random fields to derive new representations of the non-equilibrium AFS for the case of a Wright-Fisher population model with selection. In our approach, the AFS is a scaling-limit of the expectation of a Poisson stochastic integral and the representation of the non-equilibrium AFS arises in terms of a fixation time probability distribution. The known duality between the Wright-Fisher diffusion process and a birth and death process generalizing Kingman's coalescent yields an additional representation. The results carry over to the setting of a random sample drawn from the population and provide the non-equilibrium behavior of sample statistics. Our findings are consistent with and extend a previous approach where the non-equilibrium AFS solves a partial differential forward equation with a non-traditional boundary condition. Moreover, we provide a bridge to previous coalescent-based work, and hence tie several frameworks together. Since frequency-based summary statistics are widely used in population genetics, for example, to identify candidate loci of adaptive evolution, to infer the demographic history of a population, or to improve our understanding of the underlying mechanics of speciation events, the presented results are potentially useful for a broad range of topics.
[ { "created": "Tue, 15 Dec 2015 20:41:16 GMT", "version": "v1" } ]
2015-12-16
[ [ "Kaj", "Ingemar", "" ], [ "Mugal", "Carina F.", "" ] ]
In population genetic studies, the allele frequency spectrum (AFS) efficiently summarizes genome-wide polymorphism data and shapes a variety of allele frequency-based summary statistics. While existing theory typically features equilibrium conditions, emerging methodology requires an analytical understanding of the build-up of the allele frequencies over time. In this work, we use the framework of Poisson random fields to derive new representations of the non-equilibrium AFS for the case of a Wright-Fisher population model with selection. In our approach, the AFS is a scaling-limit of the expectation of a Poisson stochastic integral and the representation of the non-equilibrium AFS arises in terms of a fixation time probability distribution. The known duality between the Wright-Fisher diffusion process and a birth and death process generalizing Kingman's coalescent yields an additional representation. The results carry over to the setting of a random sample drawn from the population and provide the non-equilibrium behavior of sample statistics. Our findings are consistent with and extend a previous approach where the non-equilibrium AFS solves a partial differential forward equation with a non-traditional boundary condition. Moreover, we provide a bridge to previous coalescent-based work, and hence tie several frameworks together. Since frequency-based summary statistics are widely used in population genetics, for example, to identify candidate loci of adaptive evolution, to infer the demographic history of a population, or to improve our understanding of the underlying mechanics of speciation events, the presented results are potentially useful for a broad range of topics.
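For orientation, the classical equilibrium allele frequency spectrum under the Poisson random field model with genic selection (the Sawyer-Hartl form), which the non-equilibrium representations above generalize, can be written as follows; the precise scaling of the selection parameter $\gamma$ and the mutation parameter $\theta$ varies by convention and is an assumption here.

```latex
% Equilibrium AFS under the Poisson random field model with genic
% selection (Sawyer--Hartl form); the scalings of \gamma and \theta
% follow one common convention and are assumptions, not the paper's.
f(x)\,\mathrm{d}x \;=\;
  \theta\,\frac{1 - e^{-\gamma(1-x)}}{\bigl(1 - e^{-\gamma}\bigr)\,x(1-x)}\,
  \mathrm{d}x,
\qquad 0 < x < 1 .
```

In the neutral limit $\gamma \to 0$ the selection factor tends to $1-x$, recovering the familiar neutral spectrum $f(x) = \theta/x$.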
1603.06219
J. C. Phillips
Vedant Sachdeva and James C. Phillips
Oxygen Channels and Fractal Wave-Particle Duality in the Evolution of Myoglobin and Neuroglobin
27 pages, 12 figures
null
10.1016/j.physa.2016.07.007
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolution of terrestrial and aquatic wild type (WT) globins is dominated by changes in two proximate-distal Histidine ligand exit channels, here monitored quantitatively by hydropathic waves. These waves reveal allometric functional features inaccessible to single amino acid stereochemical contact models, and even very large all-atom Newtonian simulations. The evolutionary differences in these features between myoglobin and neuroglobin are related to the two oxidation channels through hydropathic wave analysis, which identifies subtle interspecies functional differences inaccessible to traditional size and metabolic scaling studies. Our analysis involves dynamic synchronization of allometric interactions across entire globins.
[ { "created": "Sun, 20 Mar 2016 14:25:13 GMT", "version": "v1" } ]
2016-08-24
[ [ "Sachdeva", "Vedant", "" ], [ "Phillips", "James C.", "" ] ]
The evolution of terrestrial and aquatic wild type (WT) globins is dominated by changes in two proximate-distal Histidine ligand exit channels, here monitored quantitatively by hydropathic waves. These waves reveal allometric functional features inaccessible to single amino acid stereochemical contact models, and even very large all-atom Newtonian simulations. The evolutionary differences in these features between myoglobin and neuroglobin are related to the two oxidation channels through hydropathic wave analysis, which identifies subtle interspecies functional differences inaccessible to traditional size and metabolic scaling studies. Our analysis involves dynamic synchronization of allometric interactions across entire globins.
1409.7006
Justin Yeakel
Justin D. Yeakel, Mathias M. Pires, Lars Rudolf, Nathaniel J. Dominy, Paul L. Koch, Paulo R. Guimar\~aes Jr., Thilo Gross
Collapse of an ecological network in Ancient Egypt
null
Proceedings of the National Academy of Sciences of the USA (2014)
10.1073/pnas.1408471111
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamics of ecosystem collapse are fundamental to determining how and why biological communities change through time, as well as the potential effects of extinctions on ecosystems. Here we integrate depictions of mammals from Egyptian antiquity with direct lines of paleontological and archeological evidence to infer local extinctions and community dynamics over a 6000-year span. The unprecedented temporal resolution of this data set enables examination of how the tandem effects of human population growth and climate change can disrupt mammalian communities. We show that the extinctions of mammals in Egypt were nonrandom, and that destabilizing changes in community composition coincided with abrupt aridification events and the attendant collapses of some complex societies. We also show that the roles of species in a community can change over time, and that persistence is predicted by measures of species sensitivity, a function of local dynamic stability. Our study is the first high-resolution analysis of the ecological impacts of environmental change on predator-prey networks over millennial timescales, and sheds light on the historical events that have shaped modern animal communities.
[ { "created": "Wed, 24 Sep 2014 16:09:14 GMT", "version": "v1" } ]
2014-09-25
[ [ "Yeakel", "Justin D.", "" ], [ "Pires", "Mathias M.", "" ], [ "Rudolf", "Lars", "" ], [ "Dominy", "Nathaniel J.", "" ], [ "Koch", "Paul L.", "" ], [ "Guimarães", "Paulo R.", "Jr." ], [ "Gross", "Thilo", "" ] ]
The dynamics of ecosystem collapse are fundamental to determining how and why biological communities change through time, as well as the potential effects of extinctions on ecosystems. Here we integrate depictions of mammals from Egyptian antiquity with direct lines of paleontological and archeological evidence to infer local extinctions and community dynamics over a 6000-year span. The unprecedented temporal resolution of this data set enables examination of how the tandem effects of human population growth and climate change can disrupt mammalian communities. We show that the extinctions of mammals in Egypt were nonrandom, and that destabilizing changes in community composition coincided with abrupt aridification events and the attendant collapses of some complex societies. We also show that the roles of species in a community can change over time, and that persistence is predicted by measures of species sensitivity, a function of local dynamic stability. Our study is the first high-resolution analysis of the ecological impacts of environmental change on predator-prey networks over millennial timescales, and sheds light on the historical events that have shaped modern animal communities.