Dataset schema (field name, type, and length range or number of distinct classes):

id              string, 9-13 chars
submitter       string, 4-48 chars
authors         string, 4-9.62k chars
title           string, 4-343 chars
comments        string, 2-480 chars
journal-ref     string, 9-309 chars
doi             string, 12-138 chars
report-no       categorical string, 277 distinct values
categories      string, 8-87 chars
license         categorical string, 9 distinct values
orig_abstract   string, 27-3.76k chars
versions        list, 1-15 items
update_date     string, 10 chars
authors_parsed  list, 1-147 items
abstract        string, 24-3.75k chars
id: 1212.4239
submitter: Yu Hu
authors: Yu Hu, James Trousdale, Kre\v{s}imir Josi\'c and Eric Shea-Brown
title: Local paths to global coherence: cutting networks down to size
comments: 34 pages, 11 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC math-ph math.DS math.MP math.ST q-bio.QM stat.TH
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: How does connectivity impact network dynamics? We address this question by linking network characteristics on two scales. On the global scale we consider the coherence of overall network dynamics. We show that such \emph{global coherence} in activity can often be predicted from the \emph{local structure} of the network. To characterize local network structure we use "motif cumulants," a measure of the deviation of pathway counts from those expected in a minimal probabilistic network model. We extend previous results in three ways. First, we give a new combinatorial formulation of motif cumulants that relates to the allied concept in probability theory. Second, we show that the link between global network dynamics and local network architecture is strongly affected by heterogeneity in network connectivity. However, we introduce a network-partitioning method that recovers a tight relationship between architecture and dynamics. Third, for a particular set of models we generalize the underlying theory to treat dynamical coherence at arbitrary orders (i.e. triplet correlations, and beyond). We show that at any order only a highly restricted set of motifs impact dynamical correlations.
versions: [ { "created": "Tue, 18 Dec 2012 06:36:34 GMT", "version": "v1" }, { "created": "Sat, 24 Aug 2013 06:00:09 GMT", "version": "v2" }, { "created": "Wed, 11 Dec 2013 23:05:11 GMT", "version": "v3" } ]
update_date: 2013-12-13
authors_parsed: [ [ "Hu", "Yu", "" ], [ "Trousdale", "James", "" ], [ "Josić", "Krešimir", "" ], [ "Shea-Brown", "Eric", "" ] ]
id: 1008.1162
submitter: Baruch Meerson
authors: Otso Ovaskainen and Baruch Meerson
title: Stochastic models of population extinction
comments: A popular review, to appear in "Trends in Ecology & Evolution", 42 pages, 5 figures
journal-ref: Trends in Ecology & Evolution 25 (2010)
doi: null
report-no: null
categories: q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Theoretical ecologists have long sought to understand how the persistence of populations depends on biotic and abiotic factors. Classical work showed that demographic stochasticity causes the mean time to extinction to increase exponentially with population size, whereas variation in environmental conditions can lead to a power law scaling. Recent work has focused especially on the influence of the autocorrelation structure ("color") of environmental noise. In theoretical physics, there has been a burst of research activity in analyzing large fluctuations in stochastic population dynamics. This research provides powerful tools for determining extinction times and characterizing the pathway to extinction. It therefore yields sharp insights into extinction processes and has great potential for further applications in theoretical biology.
versions: [ { "created": "Fri, 6 Aug 2010 11:45:58 GMT", "version": "v1" } ]
update_date: 2014-08-06
authors_parsed: [ [ "Ovaskainen", "Otso", "" ], [ "Meerson", "Baruch", "" ] ]
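The classical result cited in the abstract above, that demographic stochasticity alone makes the mean time to extinction grow steeply with population size, can be illustrated with a minimal birth-death simulation. This is a generic logistic (Verhulst-type) model with illustrative parameters, not a model taken from the review:

```python
import random

def mean_extinction_time(K, b=3.0, trials=300, seed=1):
    """Mean time to extinction of a logistic birth-death process.

    Per-capita birth rate b and death rate 1 + (b - 1)*n/K, so the
    deterministic dynamics have a stable equilibrium near the carrying
    capacity K. Simulated exactly with the Gillespie algorithm.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        n, t = K, 0.0
        while n > 0:
            birth = b * n
            death = n * (1.0 + (b - 1.0) * n / K)
            t += rng.expovariate(birth + death)   # waiting time to next event
            n += 1 if rng.random() < birth / (birth + death) else -1
        total += t
    return total / trials

t_small = mean_extinction_time(4)
t_large = mean_extinction_time(8)   # markedly longer-lived population
```

Doubling the carrying capacity multiplies the mean extinction time by a factor that grows roughly exponentially in K, which is the scaling the abstract attributes to demographic stochasticity.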
id: 2306.07101
submitter: Roman Makarov
authors: Roman Makarov, Michalis Pagkalos and Panayiota Poirazi
title: Dendrites and Efficiency: Optimizing Performance and Resource Utilization
comments: 18 pages, 4 figures, review
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://creativecommons.org/licenses/by/4.0/
abstract: The brain is a highly efficient system evolved to achieve high performance with limited resources. We propose that dendrites make information processing and storage in the brain more efficient through the segregation of inputs and their conditional integration via nonlinear events, the compartmentalization of activity and plasticity, and the binding of information through synapse clustering. In real-world scenarios with limited energy and space, dendrites help biological networks process natural stimuli on behavioral timescales, perform the inference process on those stimuli in a context-specific manner, and store the information in overlapping populations of neurons. A global picture starts to emerge, in which dendrites help the brain achieve efficiency through a combination of optimization strategies balancing the tradeoff between performance and resource utilization.
versions: [ { "created": "Mon, 12 Jun 2023 13:25:18 GMT", "version": "v1" } ]
update_date: 2023-06-13
authors_parsed: [ [ "Makarov", "Roman", "" ], [ "Pagkalos", "Michalis", "" ], [ "Poirazi", "Panayiota", "" ] ]
id: 2307.10178
submitter: Catherine Reason
authors: Cathy M Reason
title: A Formalizable Proof of the No-Supervenience Theorem: A Diagonal Limitation on the Viability of Physicalist Theories of Consciousness
comments: This is a formalizable proof of the theorem in Reason & Shah (2021) cited in the manuscript
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://creativecommons.org/licenses/by/4.0/
abstract: The no-supervenience theorem limits the capacity of physicalist theories to provide a comprehensive account of human consciousness. The proof of the theorem is difficult to formalize because it relies on both alethic and epistemic notions of possibility. This article outlines a formalizable proof using predicate modal logic in which the epistemic inferences are expressed in terms of an existing mathematical formalism, the inference device (Wolpert, 2008). The resulting proof shows definitively that any physicalist theory which describes a self-aware, intelligent system must be internally inconsistent.
versions: [ { "created": "Thu, 8 Jun 2023 15:22:34 GMT", "version": "v1" } ]
update_date: 2023-07-21
authors_parsed: [ [ "Reason", "Cathy M", "" ] ]
id: 2405.01012
submitter: Alex Murphy
authors: Alex Murphy, Joel Zylberberg, Alona Fyshe
title: Correcting Biased Centered Kernel Alignment Measures in Biological and Artificial Neural Networks
comments: ICLR 2024 Re-Align Workshop
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC cs.CV
license: http://creativecommons.org/licenses/by-sa/4.0/
abstract: Centred Kernel Alignment (CKA) has recently emerged as a popular metric to compare activations from biological and artificial neural networks (ANNs) in order to quantify the alignment between internal representations derived from stimuli sets (e.g. images, text, video) that are presented to both systems. In this paper we highlight issues that the community should take into account if using CKA as an alignment metric with neural data. Neural data are in the low-data high-dimensionality domain, which is one of the cases where (biased) CKA results in high similarity scores even for pairs of random matrices. Using fMRI and MEG data from the THINGS project, we show that if biased CKA is applied to representations of different sizes in the low-data high-dimensionality domain, they are not directly comparable due to biased CKA's sensitivity to differing feature-sample ratios and not stimuli-driven responses. This situation can arise both when comparing a pre-selected area of interest (e.g. ROI) to multiple ANN layers, as well as when determining to which ANN layer multiple regions of interest (ROIs) / sensor groups of different dimensionality are most similar. We show that biased CKA can be artificially driven to its maximum value when using independent random data of different sample-feature ratios. We further show that shuffling sample-feature pairs of real neural data does not drastically alter biased CKA similarity in comparison to unshuffled data, indicating an undesirable lack of sensitivity to stimuli-driven neural responses. Positive alignment of true stimuli-driven responses is only achieved by using debiased CKA. Lastly, we report findings that suggest biased CKA is sensitive to the inherent structure of neural data, only differing from shuffled data when debiased CKA detects stimuli-driven alignment.
versions: [ { "created": "Thu, 2 May 2024 05:27:12 GMT", "version": "v1" } ]
update_date: 2024-05-03
authors_parsed: [ [ "Murphy", "Alex", "" ], [ "Zylberberg", "Joel", "" ], [ "Fyshe", "Alona", "" ] ]
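The biased-vs-debiased behaviour described in the abstract above can be reproduced with a small NumPy sketch of linear CKA, using the standard unbiased HSIC estimator (Song et al., 2012) for the debiased variant. The random data, dimensions, and seed below are illustrative, not the paper's fMRI/MEG setup:

```python
import numpy as np

def gram_linear(X):
    # Linear-kernel Gram matrix of representations X (samples x features).
    return X @ X.T

def cka_biased(X, Y):
    # Biased linear CKA: cosine similarity between centred Gram matrices.
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K = H @ gram_linear(X) @ H
    L = H @ gram_linear(Y) @ H
    return (K * L).sum() / np.sqrt((K * K).sum() * (L * L).sum())

def hsic_unbiased(K, L):
    # Unbiased HSIC estimator (Song et al., 2012); requires n >= 4 samples.
    n = K.shape[0]
    Kt = K.copy()
    Lt = L.copy()
    np.fill_diagonal(Kt, 0.0)
    np.fill_diagonal(Lt, 0.0)
    term1 = np.trace(Kt @ Lt)
    term2 = Kt.sum() * Lt.sum() / ((n - 1) * (n - 2))
    term3 = 2.0 * (Kt @ Lt).sum() / (n - 2)
    return (term1 + term2 - term3) / (n * (n - 3))

def cka_debiased(X, Y):
    K, L = gram_linear(X), gram_linear(Y)
    return hsic_unbiased(K, L) / np.sqrt(hsic_unbiased(K, K) * hsic_unbiased(L, L))

# Few samples, many features: two INDEPENDENT random "representations".
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 2000))
Y = rng.standard_normal((20, 2000))
biased_score = cka_biased(X, Y)      # spuriously near 1
debiased_score = cka_debiased(X, Y)  # near 0, as it should be
```

On independent random matrices in this low-sample, high-dimensional regime, the biased score is close to its maximum while the debiased score stays near zero, which is exactly the failure mode the abstract warns about.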
id: 2004.10274
submitter: Balint Meszaros
authors: B\'alint M\'esz\'aros (1), Hugo S\'amano-S\'anchez (1), Jes\'us Alvarado-Valverde (1 and 2), Jelena \v{C}aly\v{s}eva (1 and 2), Elizabeth Mart\'inez-P\'erez (1 and 3), Renato Alves (1), Manjeet Kumar (1), Friedrich Rippmann (4), Luc\'ia B. Chemes (5), Toby J. Gibson (1). ((1) European Molecular Biology Laboratory, Heidelberg, Germany, (2) Collaboration for joint PhD degree between EMBL and Heidelberg University, Faculty of Biosciences, (3) Laboratorio de bioinform\'atica estructural, Fundaci\'on Instituto Leloir, Buenos Aires, Argentina, (4) Computational Chemistry & Biology, Merck KGaA, Darmstadt, Germany, (5) Instituto de Investigaciones Biotecnol\'ogicas, Universidad Nacional de San Mart\'in, Buenos Aires, Argentina)
title: Short linear motif candidates in the cell entry system used by SARS-CoV-2 and their potential therapeutic implications
comments: 38 pages, 7 figures, 2 tables. Corresponding authors are Luc\'ia B. Chemes, Toby J. Gibson
journal-ref: Science Signaling 12 Jan 2021: Vol. 14, Issue 665, eabd0334
doi: 10.1126/scisignal.abd0334
report-no: null
categories: q-bio.BM
license: http://creativecommons.org/licenses/by/4.0/
abstract: The primary cell surface receptor for SARS-CoV-2 is the angiotensin-converting enzyme 2 (ACE2). Recently it has been noticed that the viral Spike protein has an RGD motif, suggesting that cell surface integrins may be co-receptors. We examined the sequences of ACE2 and integrins with the Eukaryotic Linear Motif resource, ELM, and were presented with candidate short linear motifs (SLiMs) in their short, unstructured, cytosolic tails with potential roles in endocytosis, membrane dynamics, autophagy, cytoskeleton and cell signalling. These SLiM candidates are highly conserved in vertebrates. They suggest potential interactions with the AP2 mu2 subunit as well as I-BAR, LC3, PDZ, PTB and SH2 domains found in signalling and regulatory proteins present in epithelial lung cells. Several motifs overlap in the tail sequences, suggesting that they may act as molecular switches, often involving tyrosine phosphorylation status. Candidate LIR motifs are present in the tails of ACE2 and integrin beta3, suggesting that these proteins can directly recruit autophagy components. We also noticed that the extracellular part of ACE2 has a conserved MIDAS structural motif, which is commonly used by beta integrins for ligand binding, potentially supporting the proposal that integrins and ACE2 share common ligands. The findings presented here identify several molecular links and testable hypotheses that might help uncover the mechanisms of SARS-CoV-2 attachment, entry and replication, and strengthen the possibility of developing host-directed therapies to dampen the efficiency of viral entry and hamper disease progression. The strong sequence conservation means that these putative SLiMs are good candidates; nevertheless, SLiMs must always be validated by experimentation before they can be stated to be functional.
versions: [ { "created": "Tue, 21 Apr 2020 20:14:15 GMT", "version": "v1" } ]
update_date: 2021-01-14
authors_parsed: [ [ "Mészáros", "Bálint", "", "1 and 2" ], [ "Sámano-Sánchez", "Hugo", "", "1 and 2" ], [ "Alvarado-Valverde", "Jesús", "", "1 and 2" ], [ "Čalyševa", "Jelena", "", "1 and 2" ], [ "Martínez-Pérez", "Elizabeth", "", "1 and 3" ], [ "Alves", "Renato", "" ], [ "Kumar", "Manjeet", "" ], [ "Rippmann", "Friedrich", "" ], [ "Chemes", "Lucía B.", "" ], [ "Gibson", "Toby J.", "" ], [ ".", "", "" ] ]
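Motif scanning of the kind performed with ELM in the abstract above amounts to regular-expression matching over protein sequences. A minimal sketch follows; the consensus patterns are simplified ELM-style approximations and the toy peptide is hypothetical, so neither is taken from the paper:

```python
import re

# Illustrative only: simplified consensus patterns, not ELM's curated ones.
MOTIFS = {
    "RGD": r"RGD",           # integrin-binding tripeptide named in the abstract
    "LIR": r"[WFY]..[LIV]",  # common simplification of the LC3-interacting motif
}

def scan(seq):
    """Return (motif name, start index, matched substring) for each hit."""
    hits = []
    for name, pattern in MOTIFS.items():
        for m in re.finditer(pattern, seq):
            hits.append((name, m.start(), m.group()))
    return sorted(hits, key=lambda h: h[1])

hits = scan("MKTRGDAAFEILGSYVV")  # toy peptide, not a real tail sequence
```

As the abstract stresses, such pattern hits are only candidates: short degenerate patterns match frequently by chance, which is why conservation filtering and experimental validation are required.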
id: q-bio/0411040
submitter: L. E. Jones
authors: Laura E. Jones and Alan S. Perelson
title: Opportunistic infection as a cause of transient viremia in chronically infected HIV patients under treatment with HAART
comments: 30 pages, 9 figures, 1 table. Submitted to Bulletin of Mathematical Biology
journal-ref: Bulletin of Mathematical Biology (67) 1227-1251 (2005)
doi: 10.1016/j.bulm.2005.01.006
report-no: null
categories: q-bio.PE q-bio.QM
license: null
abstract: When highly active antiretroviral therapy is administered for long periods of time to HIV-1 infected patients, most patients achieve viral loads that are ``undetectable'' by standard assay (i.e., HIV-1 RNA $ < 50$ copies/ml). Yet despite exhibiting sustained viral loads below the level of detection, a number of these patients experience unexplained episodes of transient viremia or viral "blips". We propose here that transient activation of the immune system by opportunistic infection may explain these episodes of viremia. Indeed, immune activation by opportunistic infection may spur HIV replication, replenish viral reservoirs and contribute to accelerated disease progression. In order to investigate the effects of concurrent infection on chronically infected HIV patients under treatment with highly active antiretroviral therapy (HAART), we extend a simple dynamic model of the effects of vaccination on HIV infection [Jones and Perelson, JAIDS 31:369-377, 2002] to include growing pathogens. We then propose a more realistic model for immune cell expansion in the presence of pathogen, and include this in a set of competing models that allow low baseline viral loads in the presence of drug treatment. Programmed expansion of immune cells upon exposure to antigen is a feature not previously included in HIV models, and one that is especially important to consider when simulating an immune response to opportunistic infection. Using these models we show that viral blips with realistic duration and amplitude can be generated by concurrent infections in HAART treated patients.
versions: [ { "created": "Fri, 19 Nov 2004 20:43:50 GMT", "version": "v1" } ]
update_date: 2012-11-20
authors_parsed: [ [ "Jones", "Laura E.", "" ], [ "Perelson", "Alan S.", "" ] ]
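The models in the abstract above extend the standard target-cell/infected-cell/virus system of viral dynamics. As background, the basic model (the textbook starting point, not the paper's extended model; all parameter values below are illustrative) can be integrated in a few lines to show how therapy efficacy suppresses the viral set point:

```python
def simulate(epsilon, days=300.0, dt=0.01):
    """Euler integration of the basic viral dynamics model:
        dT/dt = lam - d*T - (1 - epsilon)*k*V*T   (target cells)
        dI/dt = (1 - epsilon)*k*V*T - delta*I     (infected cells)
        dV/dt = p*I - c*V                         (free virus)
    epsilon is drug efficacy (0 = untreated, 1 = fully effective).
    Returns the average viral load over the final 100 days.
    """
    lam, d, k, delta, p, c = 1e4, 0.01, 2.4e-8, 1.0, 3000.0, 23.0
    T, I, V = 1e6, 0.0, 1e-3          # tiny initial inoculum
    steps = int(days / dt)
    tail = int(100.0 / dt)
    acc = 0.0
    for s in range(steps):
        dT = lam - d * T - (1 - epsilon) * k * V * T
        dI = (1 - epsilon) * k * V * T - delta * I
        dV = p * I - c * V
        T += dT * dt
        I += dI * dt
        V += dV * dt
        if s >= steps - tail:
            acc += V
    return acc / tail

v_untreated = simulate(0.0)   # settles at a high set point
v_treated = simulate(0.9)     # efficacy pushes R0 below 1: virus declines
```

With these parameters the untreated basic reproductive number is above 1, so the virus establishes a persistent set point, while 90% efficacy drives it below 1 and the infection dies out. The paper's contribution is layering opportunistic-infection-driven immune activation on top of this kind of backbone.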
id: 0908.1556
submitter: Brian Williams Dr.
authors: Brian G. Williams, Eline L. Korenromp, Eleanor Gouws, Christopher Dye
title: The rate of decline of CD4 T-cells in people infected with HIV
comments: Two pages with one figure
journal-ref: null
doi: null
report-no: null
categories: q-bio.CB q-bio.QM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: In people infected with HIV, the RNA viral load is a good predictor of the rate of loss of CD4 cells at a population level, but there is still great variability in the rate of decline of CD4 cells among individuals. Here we show that the pre-infection distribution of CD4 cell counts and the distribution of survival times together account for 87% of the variability in the observed rate of decline of CD4 cells among individuals. The challenge is to understand the variation in CD4 levels, among populations and individuals, and to establish the determinants of survival, of which viral load may be the most important.
versions: [ { "created": "Tue, 11 Aug 2009 18:29:41 GMT", "version": "v1" } ]
update_date: 2009-08-12
authors_parsed: [ [ "Williams", "Brian G.", "" ], [ "Korenromp", "Eline L.", "" ], [ "Gouws", "Eleanor", "" ], [ "Dye", "Christopher", "" ] ]
id: q-bio/0606038
submitter: Guido Tiana
authors: G. Tiana, L. Sutto and R. A. Broglia
title: Use of the Metropolis algorithm to simulate the dynamics of protein chains
comments: corrections to the text and to the figures
journal-ref: null
doi: 10.1016/j.physa.2007.02.044
report-no: null
categories: q-bio.OT
license: null
abstract: The Metropolis implementation of the Monte Carlo algorithm has been developed to study the equilibrium thermodynamics of many-body systems. When the trial moves are small, the trajectories obtained by applying this algorithm agree with those obtained from Langevin dynamics. Applying this procedure to a simplified protein model, we show that, with a threshold of 1 degree on the movement of the backbone dihedrals in a single Monte Carlo step, the mean quantities associated with the off-equilibrium dynamics (e.g., energy, RMSD) are well reproduced, while a good description of higher moments requires smaller moves. An important result is that the time duration of a Monte Carlo step depends linearly on the temperature, something which should be accounted for when running simulations at different temperatures.
versions: [ { "created": "Tue, 27 Jun 2006 12:52:47 GMT", "version": "v1" }, { "created": "Tue, 20 Feb 2007 08:27:28 GMT", "version": "v2" } ]
update_date: 2009-11-13
authors_parsed: [ [ "Tiana", "G.", "" ], [ "Sutto", "L.", "" ], [ "Broglia", "R. A.", "" ] ]
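The small-trial-move regime discussed in the abstract above can be illustrated with a minimal Metropolis sampler for a single harmonic degree of freedom (a generic sketch, not the paper's protein model): symmetric small moves make the chain's trajectory resemble overdamped Langevin dynamics, and in equilibrium it samples the Boltzmann distribution:

```python
import math
import random

def metropolis_harmonic(beta=1.0, step=0.5, nsteps=200_000, seed=0):
    """Metropolis sampling of U(x) = x^2 / 2 with uniform trial moves.

    Accept a trial move with probability min(1, exp(-beta * dU)); for small
    symmetric steps the resulting trajectory approximates overdamped
    Langevin dynamics, while the stationary distribution is Boltzmann.
    """
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(nsteps):
        trial = x + rng.uniform(-step, step)
        d_u = 0.5 * (trial * trial - x * x)
        if d_u <= 0 or rng.random() < math.exp(-beta * d_u):
            x = trial
        samples.append(x)
    return samples

xs = metropolis_harmonic()
burn = xs[len(xs) // 2:]                    # discard the first half as burn-in
var = sum(v * v for v in burn) / len(burn)  # should approach 1/beta
```

The sampled variance approaching 1/beta confirms correct equilibrium statistics; reproducing the *dynamics* (the paper's point) additionally requires calibrating how much physical time one Monte Carlo step represents, which is where the temperature-dependent step duration enters.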
id: 2003.10266
submitter: Lynette Caitlin Mikula
authors: Claus Vogl and Lynette Caitlin Mikula
title: A nearly-neutral biallelic Moran model with biased mutation and linear and quadratic selection
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: In this article, a biallelic reversible mutation model with linear and quadratic selection is analyzed. The approach reconnects to one proposed by Kimura ("Possibility of extensive neutral evolution under stabilizing selection with special reference to nonrandom use of codons", PNAS, 1981), who starts from a diffusion model and derives its equilibrium distribution up to a constant. We use a boundary-mutation Moran model, which approximates a general mutation model for small effective mutation rates, and derive its equilibrium distribution for polymorphic and monomorphic variants in small to moderately sized populations. Using this model, we show that biased mutation rates and linear selection alone can cause patterns of polymorphism rates within and substitution rates between populations that are usually ascribed to balancing or overdominant selection. We illustrate this using a data set of short introns and fourfold degenerate sites from Drosophila simulans and Drosophila melanogaster.
versions: [ { "created": "Mon, 23 Mar 2020 13:23:19 GMT", "version": "v1" }, { "created": "Sat, 18 Apr 2020 10:44:29 GMT", "version": "v2" }, { "created": "Fri, 30 Oct 2020 08:55:28 GMT", "version": "v3" }, { "created": "Wed, 17 Feb 2021 09:30:09 GMT", "version": "v4" }, { "created": "Mon, 29 Mar 2021 08:51:54 GMT", "version": "v5" } ]
update_date: 2021-03-30
authors_parsed: [ [ "Vogl", "Claus", "" ], [ "Mikula", "Lynette Caitlin", "" ] ]
id: 2305.00590
submitter: Josinaldo Menezes
authors: J. Menezes and E. Rangel
title: Spatial dynamics of synergistic coinfection in rock-paper-scissors models
comments: 9 pages, 6 figures
journal-ref: Chaos 33, 9, 093115 (2023)
doi: 10.1063/5.0160753
report-no: null
categories: q-bio.PE nlin.PS physics.bio-ph physics.soc-ph q-bio.QM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We investigate the spatial dynamics of two epidemics spreading through a three-species cyclic model. Regardless of their species, all individuals are susceptible to infection with two different pathogens, which spread through person-to-person contact. Coinfection leads to a synergistic increase in the risk of hosts dying due to complications from either disease. Our stochastic simulations show that separate areas inhabited by hosts carrying a single pathogen arise from random initial conditions. The single-disease spatial domains are bordered by interfaces of coinfected hosts whose dynamics are curvature-driven. Our findings show that the coarsening dynamics of the interface network are controlled by fluctuations of coinfection waves invading the single-disease territories. As coinfection mortality grows, the dynamics of the interface network attain the scaling regime. We find that organisms' infection risk is maximised if coinfection increases disease-induced mortality by $30\%$, and minimised as the network dynamics reach the scaling regime, where species populations are largest. Our conclusions may help ecologists understand the dynamics of epidemics and their impact on the stability of ecosystems.
versions: [ { "created": "Sun, 30 Apr 2023 22:19:19 GMT", "version": "v1" } ]
update_date: 2024-06-05
authors_parsed: [ [ "Menezes", "J.", "" ], [ "Rangel", "E.", "" ] ]
2205.08451
Hossein Parineh
Hossein Parineh, Nasser Mozayani
MAS2HP: A Multi Agent System to Predict Protein Structure in 2D HP model
null
null
null
null
q-bio.BM cs.AI
http://creativecommons.org/licenses/by/4.0/
Protein Structure Prediction (PSP) is an unsolved problem in the field of computational biology. The problem is to predict the native conformation of a protein when its sequence of amino acids is known. Given the processing limitations of current computer systems, all-atom simulations of proteins are typically impractical, and several reduced models of proteins have been proposed. Additionally, due to the intrinsic hardness of the calculations even in reduced models, many computational methods, mainly based on artificial intelligence, have been proposed to solve the problem. Agent-based modeling is a relatively new method for modeling systems composed of interacting items. In this paper we propose a new approach to protein structure prediction using agent-based modeling (ABM) in the two-dimensional hydrophobic-hydrophilic (HP) model. We break the whole process of protein structure prediction into two steps: the first step, introduced in our previous paper, biases the linear sequence to gain a primary energy, and the second step, explained in this paper, uses ABM with a predefined set of rules to find the best conformation in the least possible amount of time and steps. This method was implemented in NetLogo. We have tested this algorithm on several benchmark sequences ranging from 20 to 50-mers in two-dimensional hydrophobic-hydrophilic lattice models. Compared with the results of other algorithms, our method is capable of finding the best known conformations in a significantly shorter time. A major problem in PSP simulation is that, as the sequence length increases, the time required to predict a valid structure grows exponentially. In contrast, with MAS2HP the dependence of runtime on sequence length changes from exponential to linear.
[ { "created": "Wed, 11 May 2022 05:17:47 GMT", "version": "v1" }, { "created": "Tue, 24 May 2022 00:31:59 GMT", "version": "v2" }, { "created": "Wed, 25 May 2022 00:13:54 GMT", "version": "v3" }, { "created": "Thu, 2 Jun 2022 23:20:28 GMT", "version": "v4" } ]
2022-06-06
[ [ "Parineh", "Hossein", "" ], [ "Mozayani", "Nasser", "" ] ]
Protein Structure Prediction (PSP) is an unsolved problem in the field of computational biology. The problem is to predict the native conformation of a protein when its sequence of amino acids is known. Given the processing limitations of current computer systems, all-atom simulations of proteins are typically impractical, and several reduced models of proteins have been proposed. Additionally, due to the intrinsic hardness of the calculations even in reduced models, many computational methods, mainly based on artificial intelligence, have been proposed to solve the problem. Agent-based modeling is a relatively new method for modeling systems composed of interacting items. In this paper we propose a new approach to protein structure prediction using agent-based modeling (ABM) in the two-dimensional hydrophobic-hydrophilic (HP) model. We break the whole process of protein structure prediction into two steps: the first step, introduced in our previous paper, biases the linear sequence to gain a primary energy, and the second step, explained in this paper, uses ABM with a predefined set of rules to find the best conformation in the least possible amount of time and steps. This method was implemented in NetLogo. We have tested this algorithm on several benchmark sequences ranging from 20 to 50-mers in two-dimensional hydrophobic-hydrophilic lattice models. Compared with the results of other algorithms, our method is capable of finding the best known conformations in a significantly shorter time. A major problem in PSP simulation is that, as the sequence length increases, the time required to predict a valid structure grows exponentially. In contrast, with MAS2HP the dependence of runtime on sequence length changes from exponential to linear.
1507.07032
Ariel Amir
Po-Yi Ho and Ariel Amir
Simultaneous regulation of cell size and chromosome replication in bacteria
null
Frontiers in Microbiology, 6, 662 (2015), in special issue on "The Bacterial Cell: Coupling between Growth, Nucleoid Replication, Cell Division and Shape"
null
null
q-bio.CB q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bacteria are able to maintain a narrow distribution of cell sizes by regulating the timing of cell divisions. In rich nutrient conditions, cells divide much faster than their chromosomes replicate, implying that cells maintain multiple rounds of chromosome replication per cell division by regulating the timing of replication initiation. Here, we show that both cell size and chromosome replication may be simultaneously regulated by the long-standing initiator accumulation strategy. The strategy proposes that initiators are produced in proportion to the volume increase and accumulate at each origin of replication, and that chromosome replication is initiated when a critical amount per origin has accumulated. We show that this model maps to the incremental model of size control, which was previously shown to reproduce experimentally observed correlations between various events in the cell cycle and to explain the exponential dependence of cell size on the growth rate of the cell. Furthermore, we show that this model also leads to efficient regulation of the timing of initiation and of the number of origins, consistent with existing experimental results.
[ { "created": "Fri, 24 Jul 2015 21:54:42 GMT", "version": "v1" } ]
2015-07-28
[ [ "Ho", "Po-Yi", "" ], [ "Amir", "Ariel", "" ] ]
Bacteria are able to maintain a narrow distribution of cell sizes by regulating the timing of cell divisions. In rich nutrient conditions, cells divide much faster than their chromosomes replicate, implying that cells maintain multiple rounds of chromosome replication per cell division by regulating the timing of replication initiation. Here, we show that both cell size and chromosome replication may be simultaneously regulated by the long-standing initiator accumulation strategy. The strategy proposes that initiators are produced in proportion to the volume increase and accumulate at each origin of replication, and that chromosome replication is initiated when a critical amount per origin has accumulated. We show that this model maps to the incremental model of size control, which was previously shown to reproduce experimentally observed correlations between various events in the cell cycle and to explain the exponential dependence of cell size on the growth rate of the cell. Furthermore, we show that this model also leads to efficient regulation of the timing of initiation and of the number of origins, consistent with existing experimental results.
2101.10471
M. Ali Vosoughi
Axel Wism\"uller and M. Ali Vosoughi
Classification of Schizophrenia from Functional MRI Using Large-scale Extended Granger Causality
The paper is the preprint of the paper accepted at the SPIE 2021 conference. The manuscript includes 14 pages with two figures. arXiv admin note: substantial text overlap with arXiv:2101.01832. text overlap with arXiv:2101.09354
null
null
null
q-bio.NC cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
The literature indicates that schizophrenia is associated with alterations in brain network connectivity. We investigate whether large-scale Extended Granger Causality (lsXGC) can capture such alterations using resting-state fMRI data. Our method utilizes dimension reduction combined with the augmentation of source time-series in a predictive time-series model for estimating directed causal relationships among fMRI time-series. The lsXGC is a multivariate approach, since it identifies the relationships of the underlying dynamic system in the presence of all other time-series. Here lsXGC serves as a biomarker for classifying schizophrenia patients from typical controls using a subset of 62 subjects from the Centers of Biomedical Research Excellence (COBRE) data repository. We use brain connections estimated by lsXGC as features for classification. After feature extraction, we perform feature selection by Kendall's tau rank correlation coefficient, followed by classification using a support vector machine. As a reference method, we compare our results with cross-correlation, typically used in the literature as a standard measure of functional connectivity. We cross-validate 100 different training/test (90%/10%) data splits to obtain the mean accuracy and the mean Area Under the receiver operating characteristic Curve (AUC) across all tested numbers of features for lsXGC. Our results demonstrate a mean accuracy range of [0.767, 0.940] and a mean AUC range of [0.861, 0.983] for lsXGC. The results of lsXGC are significantly higher than those obtained with cross-correlation, namely a mean accuracy of [0.721, 0.751] and a mean AUC of [0.744, 0.860]. Our results suggest the applicability of lsXGC as a potential biomarker for schizophrenia.
[ { "created": "Tue, 12 Jan 2021 20:36:26 GMT", "version": "v1" } ]
2021-01-27
[ [ "Wismüller", "Axel", "" ], [ "Vosoughi", "M. Ali", "" ] ]
The literature indicates that schizophrenia is associated with alterations in brain network connectivity. We investigate whether large-scale Extended Granger Causality (lsXGC) can capture such alterations using resting-state fMRI data. Our method utilizes dimension reduction combined with the augmentation of source time-series in a predictive time-series model for estimating directed causal relationships among fMRI time-series. The lsXGC is a multivariate approach, since it identifies the relationships of the underlying dynamic system in the presence of all other time-series. Here lsXGC serves as a biomarker for classifying schizophrenia patients from typical controls using a subset of 62 subjects from the Centers of Biomedical Research Excellence (COBRE) data repository. We use brain connections estimated by lsXGC as features for classification. After feature extraction, we perform feature selection by Kendall's tau rank correlation coefficient, followed by classification using a support vector machine. As a reference method, we compare our results with cross-correlation, typically used in the literature as a standard measure of functional connectivity. We cross-validate 100 different training/test (90%/10%) data splits to obtain the mean accuracy and the mean Area Under the receiver operating characteristic Curve (AUC) across all tested numbers of features for lsXGC. Our results demonstrate a mean accuracy range of [0.767, 0.940] and a mean AUC range of [0.861, 0.983] for lsXGC. The results of lsXGC are significantly higher than those obtained with cross-correlation, namely a mean accuracy of [0.721, 0.751] and a mean AUC of [0.744, 0.860]. Our results suggest the applicability of lsXGC as a potential biomarker for schizophrenia.
1905.03669
Janis Antonovics
Janis Antonovics, Stavros D. Veresoglou, and Matthias C. Rillig
Species diversity in a metacommunity with patches connected by periodic coalescence: a neutral model
15 pages, 3 figures, Supplementary material
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent realization that entire communities fuse and separate (community coalescence) has led to a reappraisal of the forces determining species diversity and dynamics, especially in microbial communities where coalescence is likely widespread. To understand if connectedness by coalescence results in different outcomes from connectedness by individual dispersal, we investigated chance processes leading to loss of species diversity using a model of a neutral two-species metacommunity. Two scenarios were investigated: pairwise coalescence where the communities coalesce in pairs, intermix and then separate; and diffuse coalescence where several communities mix as a pool and are re-distributed to their original patches. When standardized for the same net movement, both types of coalescence led to a longer time to single species dominance than dispersal. Coalescence therefore may be an important process contributing to the surprisingly high microbial species diversity in nature.
[ { "created": "Thu, 9 May 2019 14:55:20 GMT", "version": "v1" } ]
2019-05-10
[ [ "Antonovics", "Janis", "" ], [ "Veresoglou", "Stavros D.", "" ], [ "Rillig", "Matthias C.", "" ] ]
The recent realization that entire communities fuse and separate (community coalescence) has led to a reappraisal of the forces determining species diversity and dynamics, especially in microbial communities where coalescence is likely widespread. To understand if connectedness by coalescence results in different outcomes from connectedness by individual dispersal, we investigated chance processes leading to loss of species diversity using a model of a neutral two-species metacommunity. Two scenarios were investigated: pairwise coalescence where the communities coalesce in pairs, intermix and then separate; and diffuse coalescence where several communities mix as a pool and are re-distributed to their original patches. When standardized for the same net movement, both types of coalescence led to a longer time to single species dominance than dispersal. Coalescence therefore may be an important process contributing to the surprisingly high microbial species diversity in nature.
2404.06699
Navid Mohammad Mirzaei
Navid Mohammad Mirzaei, Panayotis G. Kevrekidis, Leili Shahriyari
Oxygen, Angiogenesis, Cancer and Immune Interplay in Breast Tumor Micro-Environment: A Computational Investigation
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Breast cancer is one of the most challenging global health problems among women. This study investigates the intricate breast tumor microenvironment (TME) dynamics utilizing data from Mammary-specific Polyomavirus Middle T Antigen Overexpression mouse models (MMTV-PyMT). It incorporates Endothelial Cells (ECs), oxygen, and Vascular Endothelial Growth Factors (VEGF) to examine the interplay of angiogenesis, hypoxia, VEGF, and the immune cells in cancer progression. We introduce an approach to impute the immune cell fractions within the TME using single-cell RNA-sequencing (scRNA-seq) data from MMTV-PyMT mice. We further quantify our analysis by estimating cell counts using cell size data and laboratory findings from existing literature. Parameter estimation is carried out via a Hybrid Genetic Algorithm (HGA). Our simulations reveal various TME behaviors, emphasizing the critical role of adipocytes, angiogenesis, hypoxia, and oxygen transport in driving immune responses and cancer progression. The global sensitivity analyses highlight potential therapeutic intervention points, such as VEGFs' critical role in EC growth and oxygen transportation and severe hypoxia's effect on the cancer and the total number of cells. The VEGF-mediated production rate of ECs shows an essential time-dependent impact, highlighting the importance of early intervention in slowing cancer progression. These findings align with the observations from the clinical trials demonstrating the efficacy of VEGF inhibitors and suggest a timely intervention for better outcomes.
[ { "created": "Wed, 10 Apr 2024 03:08:00 GMT", "version": "v1" } ]
2024-04-11
[ [ "Mirzaei", "Navid Mohammad", "" ], [ "Kevrekidis", "Panayotis G.", "" ], [ "Shahriyari", "Leili", "" ] ]
Breast cancer is one of the most challenging global health problems among women. This study investigates the intricate breast tumor microenvironment (TME) dynamics utilizing data from Mammary-specific Polyomavirus Middle T Antigen Overexpression mouse models (MMTV-PyMT). It incorporates Endothelial Cells (ECs), oxygen, and Vascular Endothelial Growth Factors (VEGF) to examine the interplay of angiogenesis, hypoxia, VEGF, and the immune cells in cancer progression. We introduce an approach to impute the immune cell fractions within the TME using single-cell RNA-sequencing (scRNA-seq) data from MMTV-PyMT mice. We further quantify our analysis by estimating cell counts using cell size data and laboratory findings from existing literature. Parameter estimation is carried out via a Hybrid Genetic Algorithm (HGA). Our simulations reveal various TME behaviors, emphasizing the critical role of adipocytes, angiogenesis, hypoxia, and oxygen transport in driving immune responses and cancer progression. The global sensitivity analyses highlight potential therapeutic intervention points, such as VEGFs' critical role in EC growth and oxygen transportation and severe hypoxia's effect on the cancer and the total number of cells. The VEGF-mediated production rate of ECs shows an essential time-dependent impact, highlighting the importance of early intervention in slowing cancer progression. These findings align with the observations from the clinical trials demonstrating the efficacy of VEGF inhibitors and suggest a timely intervention for better outcomes.
2407.00003
Samiya Alkhairy
Samiya A Alkhairy
Cochlear Wave Propagation and Dynamics in the Human Base and Apex: Model-Based Estimates from Noninvasive Measurements
7 pages, 2 figures, 9 equations. Published: Nonlinearity and Hearing: Advances in Theory and Experiment AIP Conf. Proc. 3062
AIP Conference Proceedings, vol. 3062, no. 1. AIP Publishing, 2024
10.1063/5.0189264
null
q-bio.QM cs.SD eess.AS q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cochlear wavenumber and impedance are mechanistic variables that encode information regarding how the cochlea works - specifically, wave propagation and Organ of Corti dynamics. These mechanistic variables underlie interesting features of cochlear signal processing such as its place-based wavelet analyzers, dispersivity and high gain. Consequently, it is of interest to estimate these mechanistic variables in various species (particularly humans) and at various locations along the length of the cochlea. In this paper, we develop methods to estimate the mechanistic variables (wavenumber and impedance) from noninvasive response characteristics (such as the quality factors of psychophysical tuning curves) using an existing analytic shortwave single-partition model of the mammalian cochlea. We then apply these methods to estimate human mechanistic variables using reported values for quality factors from psychophysical tuning curves and a location-invariant ratio extrapolated from chinchilla. Our resultant estimates for human wavenumbers and impedances show that the minimum wavelength (which occurs at the peak of the traveling wave) is smaller in the base than in the apex. The Organ of Corti is stiffness dominated rather than mass dominated, and there is negative effective damping prior to the peak followed by positive effective damping. The effective stiffness, and the positive and negative effective damping, are greater in the base than in the apex. The methods introduced here for estimating mechanistic variables from characteristics of invasive or noninvasive responses enable us to derive such estimates across various species and locations where the responses are describable by sharp filters. In addition to studying cochlear wave propagation and dynamics, the estimation methods developed here are also useful for auditory filter design.
[ { "created": "Wed, 10 Apr 2024 20:24:19 GMT", "version": "v1" } ]
2024-07-02
[ [ "Alkhairy", "Samiya A", "" ] ]
Cochlear wavenumber and impedance are mechanistic variables that encode information regarding how the cochlea works - specifically, wave propagation and Organ of Corti dynamics. These mechanistic variables underlie interesting features of cochlear signal processing such as its place-based wavelet analyzers, dispersivity and high gain. Consequently, it is of interest to estimate these mechanistic variables in various species (particularly humans) and at various locations along the length of the cochlea. In this paper, we develop methods to estimate the mechanistic variables (wavenumber and impedance) from noninvasive response characteristics (such as the quality factors of psychophysical tuning curves) using an existing analytic shortwave single-partition model of the mammalian cochlea. We then apply these methods to estimate human mechanistic variables using reported values for quality factors from psychophysical tuning curves and a location-invariant ratio extrapolated from chinchilla. Our resultant estimates for human wavenumbers and impedances show that the minimum wavelength (which occurs at the peak of the traveling wave) is smaller in the base than in the apex. The Organ of Corti is stiffness dominated rather than mass dominated, and there is negative effective damping prior to the peak followed by positive effective damping. The effective stiffness, and the positive and negative effective damping, are greater in the base than in the apex. The methods introduced here for estimating mechanistic variables from characteristics of invasive or noninvasive responses enable us to derive such estimates across various species and locations where the responses are describable by sharp filters. In addition to studying cochlear wave propagation and dynamics, the estimation methods developed here are also useful for auditory filter design.
1806.03778
Xin Liu
Xin Liu, Anuj Mubayi, Dominik Reinhold, Liu Zhu
Approximation Methods for Analyzing Multiscale Stochastic Vector-borne Epidemic Models
null
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic epidemic models, generally more realistic than their deterministic counterparts, have often been seen as too complex for rigorous mathematical analysis because of the level of detail required to comprehensively capture the dynamics of diseases. This problem becomes more acute as the complexity of the disease increases, as in the case of vector-borne diseases (VBDs). VBDs are human illnesses caused by pathogens transmitted among humans by intermediate species, primarily arthropods. In this study, a stochastic VBD model is developed, and novel mathematical methods are described and evaluated to systematically analyze the model and understand its complex dynamics. The VBD model incorporates relevant features of the VBD transmission process, including demographic, ecological and social mechanisms. The analysis is based on dimensional reductions and model simplifications via scaling limit theorems. The results suggest that the dynamics of the stochastic VBD depend on a threshold quantity R_0, the initial size of the infective population, and the type of scaling in terms of host population size. The quantity R_0 for the deterministic counterpart of the model, interpreted as a threshold condition for infection persistence as mentioned in the literature for many infectious disease models, can be computed. Different scalings yield different approximations of the model; in particular, if vectors have much faster dynamics, the effect of the vector dynamics on the host population averages out, which greatly reduces the dimension of the model.
[ { "created": "Mon, 11 Jun 2018 02:49:21 GMT", "version": "v1" } ]
2018-06-13
[ [ "Liu", "Xin", "" ], [ "Mubayi", "Anuj", "" ], [ "Reinhold", "Dominik", "" ], [ "Zhu", "Liu", "" ] ]
Stochastic epidemic models, generally more realistic than their deterministic counterparts, have often been seen as too complex for rigorous mathematical analysis because of the level of detail required to comprehensively capture the dynamics of diseases. This problem becomes more acute as the complexity of the disease increases, as in the case of vector-borne diseases (VBDs). VBDs are human illnesses caused by pathogens transmitted among humans by intermediate species, primarily arthropods. In this study, a stochastic VBD model is developed, and novel mathematical methods are described and evaluated to systematically analyze the model and understand its complex dynamics. The VBD model incorporates relevant features of the VBD transmission process, including demographic, ecological and social mechanisms. The analysis is based on dimensional reductions and model simplifications via scaling limit theorems. The results suggest that the dynamics of the stochastic VBD depend on a threshold quantity R_0, the initial size of the infective population, and the type of scaling in terms of host population size. The quantity R_0 for the deterministic counterpart of the model, interpreted as a threshold condition for infection persistence as mentioned in the literature for many infectious disease models, can be computed. Different scalings yield different approximations of the model; in particular, if vectors have much faster dynamics, the effect of the vector dynamics on the host population averages out, which greatly reduces the dimension of the model.
2108.02354
Pedram Heidari
Bahar Ataeinia, Pedram Heidari
Artificial intelligence and the future of diagnostic and therapeutic radiopharmaceutical development: in Silico smart molecular design
null
null
10.1016/j.cpet.2021.06.008
null
q-bio.BM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Novel diagnostic and therapeutic radiopharmaceuticals are increasingly becoming a central part of personalized medicine. Continued innovation in the development of new radiopharmaceuticals is key to sustained growth and advancement of precision medicine. Artificial intelligence (AI) has been used in multiple fields of medicine to develop and validate better tools for patient diagnosis and therapy, including in radiopharmaceutical design. In this review, we first discuss common in silico approaches and focus on their utility and challenges in radiopharmaceutical development. Next, we discuss the practical applications of in silico modeling in design of radiopharmaceuticals in various diseases.
[ { "created": "Thu, 5 Aug 2021 03:51:24 GMT", "version": "v1" } ]
2021-08-06
[ [ "Ataeinia", "Bahar", "" ], [ "Heidari", "Pedram", "" ] ]
Novel diagnostic and therapeutic radiopharmaceuticals are increasingly becoming a central part of personalized medicine. Continued innovation in the development of new radiopharmaceuticals is key to sustained growth and advancement of precision medicine. Artificial intelligence (AI) has been used in multiple fields of medicine to develop and validate better tools for patient diagnosis and therapy, including in radiopharmaceutical design. In this review, we first discuss common in silico approaches and focus on their utility and challenges in radiopharmaceutical development. Next, we discuss the practical applications of in silico modeling in design of radiopharmaceuticals in various diseases.
1711.09424
Jose Fontanari
Jos\'e F. Fontanari
The collapse of ecosystem engineer populations
null
Mathematics (2018) 6: 9
10.3390/math6010009
null
q-bio.PE nlin.AO nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans are the ultimate ecosystem engineers who have profoundly transformed the world's landscapes in order to enhance their survival. Somewhat paradoxically, however, sometimes the unforeseen effect of this ecosystem engineering is the very collapse of the population it intended to protect. Here we use a spatial version of a standard population dynamics model of ecosystem engineers to study the colonization of unexplored virgin territories by a small settlement of engineers. We find that during the expansion phase the population density reaches values much higher than those the environment can support in the equilibrium situation. When the colonization front reaches the boundary of the available space, the population density plunges sharply and attains its equilibrium value. The collapse takes place without warning and happens just after the population reaches its peak number. We conclude that overpopulation and the consequent collapse of an expanding population of ecosystem engineers is a natural consequence of the nonlinear feedback between the population and environment variables.
[ { "created": "Sun, 26 Nov 2017 17:11:28 GMT", "version": "v1" } ]
2018-01-15
[ [ "Fontanari", "José F.", "" ] ]
Humans are the ultimate ecosystem engineers who have profoundly transformed the world's landscapes in order to enhance their survival. Somewhat paradoxically, however, sometimes the unforeseen effect of this ecosystem engineering is the very collapse of the population it intended to protect. Here we use a spatial version of a standard population dynamics model of ecosystem engineers to study the colonization of unexplored virgin territories by a small settlement of engineers. We find that during the expansion phase the population density reaches values much higher than those the environment can support in the equilibrium situation. When the colonization front reaches the boundary of the available space, the population density plunges sharply and attains its equilibrium value. The collapse takes place without warning and happens just after the population reaches its peak number. We conclude that overpopulation and the consequent collapse of an expanding population of ecosystem engineers is a natural consequence of the nonlinear feedback between the population and environment variables.
1508.07571
Andrew Mugler
Andrew Mugler, Mark Kittisopikul, Luke Hayden, Jintao Liu, Chris H. Wiggins, Gurol M. Suel, Aleksandra M. Walczak
Noise expands the response range of the Bacillus subtilis competence circuit
26 pages, 14 figures
null
10.1371/journal.pcbi.1004793
null
q-bio.MN q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene regulatory circuits must contend with intrinsic noise that arises due to finite numbers of proteins. While some circuits act to reduce this noise, others appear to exploit it. A striking example is the competence circuit in Bacillus subtilis, which exhibits much larger noise in the duration of its competence events than a synthetically constructed analog that performs the same function. Here, using stochastic modeling and fluorescence microscopy, we show that this larger noise allows cells to exit terminal phenotypic states, which expands the range of stress levels to which cells are responsive and leads to phenotypic heterogeneity at the population level. This is an important example of how noise confers a functional benefit in a genetic decision-making circuit.
[ { "created": "Sun, 30 Aug 2015 13:53:21 GMT", "version": "v1" } ]
2016-04-27
[ [ "Mugler", "Andrew", "" ], [ "Kittisopikul", "Mark", "" ], [ "Hayden", "Luke", "" ], [ "Liu", "Jintao", "" ], [ "Wiggins", "Chris H.", "" ], [ "Suel", "Gurol M.", "" ], [ "Walczak", "Aleksandra M.", "" ] ]
Gene regulatory circuits must contend with intrinsic noise that arises due to finite numbers of proteins. While some circuits act to reduce this noise, others appear to exploit it. A striking example is the competence circuit in Bacillus subtilis, which exhibits much larger noise in the duration of its competence events than a synthetically constructed analog that performs the same function. Here, using stochastic modeling and fluorescence microscopy, we show that this larger noise allows cells to exit terminal phenotypic states, which expands the range of stress levels to which cells are responsive and leads to phenotypic heterogeneity at the population level. This is an important example of how noise confers a functional benefit in a genetic decision-making circuit.
1804.09895
Takuro Shimaya
Takuro Shimaya, Kazumasa A. Takeuchi
Lane formation and critical coarsening in a model of bacterial competition
12 pages, 8 figures
Phys. Rev. E 99, 042403 (2019)
10.1103/PhysRevE.99.042403
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study competition of two non-motile bacterial strains in a three-dimensional channel numerically, and analyze how their configuration evolves in space and time. We construct a lattice model that takes into account self-replication, mutation, and killing of bacteria. When mutation is not significant, the two strains segregate and form stripe patterns along the channel. The formed lanes are gradually rearranged, with increasing length scales in the two-dimensional cross-sectional plane. We characterize it in terms of coarsening and phase ordering in statistical physics. In particular, for the simple model without mutation and killing, we find logarithmically slow coarsening, which is characteristic of the two-dimensional voter model. With mutation and killing, we find a phase transition from a monopolistic phase, in which lanes are formed and coarsened until the system is eventually dominated by one of the two strains, to an equally mixed and disordered phase without lane structure. Critical behavior at the transition point is also studied and compared with the generalized voter class and the Ising class. These results are accounted for by continuum equations, obtained by applying a mean field approximation along the channel axis. Our findings indicate relevance of critical coarsening of two-dimensional systems in the problem of bacterial competition within anisotropic three-dimensional geometry.
[ { "created": "Thu, 26 Apr 2018 05:54:48 GMT", "version": "v1" }, { "created": "Tue, 1 May 2018 02:08:08 GMT", "version": "v2" }, { "created": "Mon, 15 Apr 2019 06:42:48 GMT", "version": "v3" } ]
2019-04-16
[ [ "Shimaya", "Takuro", "" ], [ "Takeuchi", "Kazumasa A.", "" ] ]
We study the competition of two non-motile bacterial strains in a three-dimensional channel numerically, and analyze how their configuration evolves in space and time. We construct a lattice model that takes into account self-replication, mutation, and killing of bacteria. When mutation is not significant, the two strains segregate and form stripe patterns along the channel. The formed lanes are gradually rearranged, with increasing length scales in the two-dimensional cross-sectional plane. We characterize this process in terms of coarsening and phase ordering in statistical physics. In particular, for the simple model without mutation and killing, we find logarithmically slow coarsening, which is characteristic of the two-dimensional voter model. With mutation and killing, we find a phase transition from a monopolistic phase, in which lanes are formed and coarsened until the system is eventually dominated by one of the two strains, to an equally mixed and disordered phase without lane structure. Critical behavior at the transition point is also studied and compared with the generalized voter class and the Ising class. These results are accounted for by continuum equations, obtained by applying a mean field approximation along the channel axis. Our findings indicate the relevance of critical coarsening of two-dimensional systems to the problem of bacterial competition within anisotropic three-dimensional geometry.
1407.0973
Laurence Aitchison
Laurence Aitchison and M\'at\'e Lengyel
The Hamiltonian brain: efficient probabilistic inference with excitatory-inhibitory neural circuit dynamics
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probabilistic inference offers a principled framework for understanding both behaviour and cortical computation. However, two basic and ubiquitous properties of cortical responses seem difficult to reconcile with probabilistic inference: neural activity displays prominent oscillations in response to constant input, and large transient changes in response to stimulus onset. Here we show that these dynamical behaviours may in fact be understood as hallmarks of the specific representation and algorithm that the cortex employs to perform probabilistic inference. We demonstrate that a particular family of probabilistic inference algorithms, Hamiltonian Monte Carlo (HMC), naturally maps onto the dynamics of excitatory-inhibitory neural networks. Specifically, we constructed a model of an excitatory-inhibitory circuit in primary visual cortex that performed HMC inference, and thus inherently gave rise to oscillations and transients. These oscillations were not mere epiphenomena but served an important functional role: speeding up inference by rapidly spanning a large volume of state space. Inference thus became an order of magnitude more efficient than in a non-oscillatory variant of the model. In addition, the network matched two specific properties of observed neural dynamics that would otherwise be difficult to account for in the context of probabilistic inference. First, the frequency of oscillations as well as the magnitude of transients increased with the contrast of the image stimulus. Second, excitation and inhibition were balanced, and inhibition lagged excitation. These results suggest a new functional role for the separation of cortical populations into excitatory and inhibitory neurons, and for the neural oscillations that emerge in such excitatory-inhibitory networks: enhancing the efficiency of cortical computations.
[ { "created": "Thu, 3 Jul 2014 16:13:13 GMT", "version": "v1" }, { "created": "Fri, 4 Jul 2014 06:57:48 GMT", "version": "v2" }, { "created": "Sat, 31 Dec 2016 15:44:36 GMT", "version": "v3" } ]
2017-01-03
[ [ "Aitchison", "Laurence", "" ], [ "Lengyel", "Máté", "" ] ]
Probabilistic inference offers a principled framework for understanding both behaviour and cortical computation. However, two basic and ubiquitous properties of cortical responses seem difficult to reconcile with probabilistic inference: neural activity displays prominent oscillations in response to constant input, and large transient changes in response to stimulus onset. Here we show that these dynamical behaviours may in fact be understood as hallmarks of the specific representation and algorithm that the cortex employs to perform probabilistic inference. We demonstrate that a particular family of probabilistic inference algorithms, Hamiltonian Monte Carlo (HMC), naturally maps onto the dynamics of excitatory-inhibitory neural networks. Specifically, we constructed a model of an excitatory-inhibitory circuit in primary visual cortex that performed HMC inference, and thus inherently gave rise to oscillations and transients. These oscillations were not mere epiphenomena but served an important functional role: speeding up inference by rapidly spanning a large volume of state space. Inference thus became an order of magnitude more efficient than in a non-oscillatory variant of the model. In addition, the network matched two specific properties of observed neural dynamics that would otherwise be difficult to account for in the context of probabilistic inference. First, the frequency of oscillations as well as the magnitude of transients increased with the contrast of the image stimulus. Second, excitation and inhibition were balanced, and inhibition lagged excitation. These results suggest a new functional role for the separation of cortical populations into excitatory and inhibitory neurons, and for the neural oscillations that emerge in such excitatory-inhibitory networks: enhancing the efficiency of cortical computations.
1512.01943
Stephan Eismann
Stephan Eismann and Robert G. Endres
Protein connectivity in chemotaxis receptor complexes
null
null
10.1371/journal.pcbi.1004650
null
q-bio.CB physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The chemotaxis sensory system allows bacteria such as Escherichia coli to swim towards nutrients and away from repellents. The underlying pathway is remarkably sensitive in detecting chemical gradients over a wide range of ambient concentrations. Interactions among receptors, which are predominantly clustered at the cell poles, are crucial to this sensitivity. Although it has been suggested that the kinase CheA and the adapter protein CheW are integral for receptor connectivity, the exact coupling mechanism remains unclear. Here, we present a statistical-mechanics approach to model the receptor linkage mechanism itself, building on nanodisc and electron cryotomography experiments. Specifically, we investigate how the sensing behavior of mixed receptor clusters is affected by variations in the expression levels of CheA and CheW at a constant receptor density in the membrane. Our model compares favorably with dose-response curves from in vivo F\"orster resonance energy transfer (FRET) measurements, demonstrating that the receptor-methylation level has only minor effects on receptor cooperativity. Importantly, our model provides an explanation for the non-intuitive conclusion that the receptor cooperativity decreases with increasing levels of CheA, a core signaling protein associated with the receptors, whereas the receptor cooperativity increases with increasing levels of CheW, a key adapter protein. Finally, we propose an evolutionary advantage as an explanation for the recently suggested CheW-only linker structures.
[ { "created": "Mon, 7 Dec 2015 08:50:11 GMT", "version": "v1" } ]
2016-02-17
[ [ "Eismann", "Stephan", "" ], [ "Endres", "Robert G.", "" ] ]
The chemotaxis sensory system allows bacteria such as Escherichia coli to swim towards nutrients and away from repellents. The underlying pathway is remarkably sensitive in detecting chemical gradients over a wide range of ambient concentrations. Interactions among receptors, which are predominantly clustered at the cell poles, are crucial to this sensitivity. Although it has been suggested that the kinase CheA and the adapter protein CheW are integral for receptor connectivity, the exact coupling mechanism remains unclear. Here, we present a statistical-mechanics approach to model the receptor linkage mechanism itself, building on nanodisc and electron cryotomography experiments. Specifically, we investigate how the sensing behavior of mixed receptor clusters is affected by variations in the expression levels of CheA and CheW at a constant receptor density in the membrane. Our model compares favorably with dose-response curves from in vivo F\"orster resonance energy transfer (FRET) measurements, demonstrating that the receptor-methylation level has only minor effects on receptor cooperativity. Importantly, our model provides an explanation for the non-intuitive conclusion that the receptor cooperativity decreases with increasing levels of CheA, a core signaling protein associated with the receptors, whereas the receptor cooperativity increases with increasing levels of CheW, a key adapter protein. Finally, we propose an evolutionary advantage as an explanation for the recently suggested CheW-only linker structures.
0905.0383
Edward O'Brien Jr.
E. P. O'Brien, G. Morrison, B. R. Brooks and D. Thirumalai
How accurate are polymer models in the analysis of Forster resonance energy transfer experiments on proteins?
31 pages, 9 figures
J. Chem. Phys. (2009) 124903
10.1063/1.3082151
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single molecule Forster resonance energy transfer (FRET) experiments are used to infer the properties of the denatured state ensemble (DSE) of proteins. From the measured average FRET efficiency, <E>, the distance distribution P(R) is inferred by assuming that the DSE can be described as a polymer. The single parameter in the appropriate polymer model (Gaussian chain, Wormlike chain, or Self-avoiding walk) for P(R) is determined by equating the calculated and measured <E>. In order to assess the accuracy of this "standard procedure," we consider the generalized Rouse model (GRM), whose properties [<E> and P(R)] can be analytically computed, and the Molecular Transfer Model for protein L, for which accurate simulations can be carried out as a function of guanidinium hydrochloride (GdmCl) concentration. Using the precisely computed <E> for the GRM and protein L, we infer P(R) using the standard procedure. We find that the mean end-to-end distance can be accurately inferred (less than 10% relative error) using <E> and polymer models for P(R). However, the values extracted for the radius of gyration (Rg) and the persistence length (lp) are less accurate. The relative error in the inferred Rg and lp, with respect to the exact values, can be as large as 25% at the highest GdmCl concentration. We propose a self-consistency test, requiring measurements of <E> by attaching dyes to different residues in the protein, to assess the validity of describing the DSE using the Gaussian model. Application of the self-consistency test to the GRM shows that even for this simple model the Gaussian P(R) is inadequate. Analysis of experimental FRET efficiency data for the cold shock protein shows that there are significant deviations of the DSE P(R) from the Gaussian model.
[ { "created": "Mon, 4 May 2009 14:05:41 GMT", "version": "v1" } ]
2009-05-05
[ [ "O'Brien", "E. P.", "" ], [ "Morrison", "G.", "" ], [ "Brooks", "B. R.", "" ], [ "Thirumalai", "D.", "" ] ]
Single molecule Forster resonance energy transfer (FRET) experiments are used to infer the properties of the denatured state ensemble (DSE) of proteins. From the measured average FRET efficiency, <E>, the distance distribution P(R) is inferred by assuming that the DSE can be described as a polymer. The single parameter in the appropriate polymer model (Gaussian chain, Wormlike chain, or Self-avoiding walk) for P(R) is determined by equating the calculated and measured <E>. In order to assess the accuracy of this "standard procedure," we consider the generalized Rouse model (GRM), whose properties [<E> and P(R)] can be analytically computed, and the Molecular Transfer Model for protein L, for which accurate simulations can be carried out as a function of guanidinium hydrochloride (GdmCl) concentration. Using the precisely computed <E> for the GRM and protein L, we infer P(R) using the standard procedure. We find that the mean end-to-end distance can be accurately inferred (less than 10% relative error) using <E> and polymer models for P(R). However, the values extracted for the radius of gyration (Rg) and the persistence length (lp) are less accurate. The relative error in the inferred Rg and lp, with respect to the exact values, can be as large as 25% at the highest GdmCl concentration. We propose a self-consistency test, requiring measurements of <E> by attaching dyes to different residues in the protein, to assess the validity of describing the DSE using the Gaussian model. Application of the self-consistency test to the GRM shows that even for this simple model the Gaussian P(R) is inadequate. Analysis of experimental FRET efficiency data for the cold shock protein shows that there are significant deviations of the DSE P(R) from the Gaussian model.
1611.09819
Daniel Harari
Daniel Harari, Tao Gao, Nancy Kanwisher, Joshua Tenenbaum, Shimon Ullman
Measuring and modeling the perception of natural and unconstrained gaze in humans and machines
Daniel Harari and Tao Gao contributed equally to this work
null
null
Center for Brains, Minds and Machines Memo No. 059
q-bio.NC cs.AI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans are remarkably adept at interpreting the gaze direction of other individuals in their surroundings. This skill is at the core of the ability to engage in joint visual attention, which is essential for establishing social interactions. How accurate are humans in determining the gaze direction of others in lifelike scenes, when they can move their heads and eyes freely, and what are the sources of information for the underlying perceptual processes? These questions pose a challenge from both empirical and computational perspectives, due to the complexity of the visual input in real-life situations. Here we empirically measure human accuracy in perceiving the gaze direction of others in lifelike scenes, and computationally study the sources of information and representations underlying this cognitive capacity. We show that humans perform better in face-to-face conditions than in recorded conditions, and that this advantage is not due to the availability of input dynamics. We further show that humans still perform well when only the eyes region is visible, rather than the whole face. We develop a computational model which replicates the pattern of human performance, including the finding that the eyes region contains, on its own, the information required for estimating both head orientation and direction of gaze. Consistent with neurophysiological findings on task-specific face regions in the brain, the learned computational representations reproduce perceptual effects such as the Wollaston illusion when trained to estimate direction of gaze, but not when trained to recognize objects or faces.
[ { "created": "Tue, 29 Nov 2016 20:11:09 GMT", "version": "v1" } ]
2016-11-30
[ [ "Harari", "Daniel", "" ], [ "Gao", "Tao", "" ], [ "Kanwisher", "Nancy", "" ], [ "Tenenbaum", "Joshua", "" ], [ "Ullman", "Shimon", "" ] ]
Humans are remarkably adept at interpreting the gaze direction of other individuals in their surroundings. This skill is at the core of the ability to engage in joint visual attention, which is essential for establishing social interactions. How accurate are humans in determining the gaze direction of others in lifelike scenes, when they can move their heads and eyes freely, and what are the sources of information for the underlying perceptual processes? These questions pose a challenge from both empirical and computational perspectives, due to the complexity of the visual input in real-life situations. Here we empirically measure human accuracy in perceiving the gaze direction of others in lifelike scenes, and computationally study the sources of information and representations underlying this cognitive capacity. We show that humans perform better in face-to-face conditions than in recorded conditions, and that this advantage is not due to the availability of input dynamics. We further show that humans still perform well when only the eyes region is visible, rather than the whole face. We develop a computational model which replicates the pattern of human performance, including the finding that the eyes region contains, on its own, the information required for estimating both head orientation and direction of gaze. Consistent with neurophysiological findings on task-specific face regions in the brain, the learned computational representations reproduce perceptual effects such as the Wollaston illusion when trained to estimate direction of gaze, but not when trained to recognize objects or faces.
1210.6230
Guillermo Ludue\~na
Guillermo A. Ludue\~na and Claudius Gros
A Self-Organized Neural Comparator
null
G. A. Ludue\~na and C. Gros, A self-organized neural comparator, Neural Computation, 25, pp 1006 (2013)
10.1162/NECO_a_00424
null
q-bio.NC cond-mat.dis-nn cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning algorithms generally need the ability to compare several streams of information. Neural learning architectures hence need a unit, a comparator, able to compare several inputs encoding either internal or external information, such as predictions and sensory readings. Without the possibility of comparing the values of predictions to actual sensory inputs, reward evaluation and supervised learning would not be possible. Comparators are usually not implemented explicitly; the necessary comparisons are commonly performed by directly comparing the respective activities one-to-one. This implies that the characteristics of the two input streams (such as size and encoding) must be provided at the time of designing the system. It is, however, plausible that biological comparators emerge from self-organizing, genetically encoded principles, which allow the system to adapt to changes in the input and in the organism. We propose an unsupervised neural circuitry in which the function of input comparison emerges via self-organization solely from the interaction of the system with the respective inputs, without external influence or supervision. The proposed neural comparator adapts, unsupervised, according to the correlations present in the input streams. The system consists of a multilayer feed-forward neural network which follows a local output minimization (anti-Hebbian) rule for adaptation of the synaptic weights. The local output minimization allows the circuit to autonomously acquire the capability of comparing the neural activities received from different neural populations, which may differ in the size of the population and in the neural encoding used. The comparator is able to compare objects never encountered before in the sensory input streams and to evaluate a measure of their similarity, even when they are differently encoded.
[ { "created": "Tue, 23 Oct 2012 13:19:08 GMT", "version": "v1" }, { "created": "Thu, 25 Oct 2012 13:55:10 GMT", "version": "v2" } ]
2013-03-14
[ [ "Ludueña", "Guillermo A.", "" ], [ "Gros", "Claudius", "" ] ]
Learning algorithms generally need the ability to compare several streams of information. Neural learning architectures hence need a unit, a comparator, able to compare several inputs encoding either internal or external information, such as predictions and sensory readings. Without the possibility of comparing the values of predictions to actual sensory inputs, reward evaluation and supervised learning would not be possible. Comparators are usually not implemented explicitly; the necessary comparisons are commonly performed by directly comparing the respective activities one-to-one. This implies that the characteristics of the two input streams (such as size and encoding) must be provided at the time of designing the system. It is, however, plausible that biological comparators emerge from self-organizing, genetically encoded principles, which allow the system to adapt to changes in the input and in the organism. We propose an unsupervised neural circuitry in which the function of input comparison emerges via self-organization solely from the interaction of the system with the respective inputs, without external influence or supervision. The proposed neural comparator adapts, unsupervised, according to the correlations present in the input streams. The system consists of a multilayer feed-forward neural network which follows a local output minimization (anti-Hebbian) rule for adaptation of the synaptic weights. The local output minimization allows the circuit to autonomously acquire the capability of comparing the neural activities received from different neural populations, which may differ in the size of the population and in the neural encoding used. The comparator is able to compare objects never encountered before in the sensory input streams and to evaluate a measure of their similarity, even when they are differently encoded.
1812.07630
Giovanni Bussi
Francesca Cuturello, Guido Tiana, Giovanni Bussi
Assessing the accuracy of direct-coupling analysis for RNA contact prediction
Supporting information included in ancillary files
RNA 26, 637 (2020)
10.1261/rna.074179.119
null
q-bio.QM cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many non-coding RNAs are known to play a role in the cell that is directly linked to their structure. Structure prediction based on sequence alone is, however, a challenging task. On the other hand, thanks to the low cost of sequencing technologies, a very large number of homologous sequences are becoming available for many RNA families. In the protein community, the idea of exploiting the covariance of mutations within a family to predict the protein structure using the direct-coupling-analysis (DCA) method has emerged in the last decade. The application of DCA to RNA systems has been limited so far. Here we perform an assessment of the DCA method on 17 riboswitch families, comparing it with the commonly used mutual information analysis and with the state-of-the-art R-scape covariance method. We also compare different flavors of DCA, including mean-field, pseudo-likelihood, and a proposed stochastic procedure (Boltzmann learning) for solving the DCA inverse problem exactly. Boltzmann learning outperforms the other methods in predicting contacts observed in high-resolution crystal structures.
[ { "created": "Tue, 18 Dec 2018 20:32:02 GMT", "version": "v1" }, { "created": "Fri, 5 Apr 2019 14:24:38 GMT", "version": "v2" }, { "created": "Mon, 5 Aug 2019 13:34:28 GMT", "version": "v3" }, { "created": "Fri, 29 Nov 2019 10:16:43 GMT", "version": "v4" } ]
2020-05-05
[ [ "Cuturello", "Francesca", "" ], [ "Tiana", "Guido", "" ], [ "Bussi", "Giovanni", "" ] ]
Many non-coding RNAs are known to play a role in the cell that is directly linked to their structure. Structure prediction based on sequence alone is, however, a challenging task. On the other hand, thanks to the low cost of sequencing technologies, a very large number of homologous sequences are becoming available for many RNA families. In the protein community, the idea of exploiting the covariance of mutations within a family to predict the protein structure using the direct-coupling-analysis (DCA) method has emerged in the last decade. The application of DCA to RNA systems has been limited so far. Here we perform an assessment of the DCA method on 17 riboswitch families, comparing it with the commonly used mutual information analysis and with the state-of-the-art R-scape covariance method. We also compare different flavors of DCA, including mean-field, pseudo-likelihood, and a proposed stochastic procedure (Boltzmann learning) for solving the DCA inverse problem exactly. Boltzmann learning outperforms the other methods in predicting contacts observed in high-resolution crystal structures.
2111.07146
Johannes M\"uller
Johannes M\"uller, Aur\'elien Tellier
Life-History traits and the replicator equation
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to its relevance for conservation biology, there is increasing interest in extending evolutionary genomics models to plant, animal or microbial species. However, this requires understanding the effect on genomic evolution of life-history traits absent in humans. In this context, it is of fundamental interest to generalize the replicator equation, which is at the heart of most population genomics models. However, as the inclusion of life-history traits generates models with a large state space, the analysis becomes involved. We focus here on quiescence and seed banks, two features common to many plant, invertebrate and microbial species. We develop a method to obtain a low-dimensional replicator equation in the context of evolutionary game theory, based on two assumptions: (1) the life-history traits are {\it per se} neutral, and (2) frequency-dependent selection is weak. We use the results to investigate the evolution and maintenance of cooperation based on the Prisoner's dilemma. We first consider the generalized replicator equation, and then refine the investigation using adaptive dynamics. It turns out that, depending on the structure and timing of the quiescence/dormancy life-history trait, cooperation in a homogeneous population can be stabilized. We finally discuss and highlight the relevance of these results for plant, invertebrate and microbial communities.
[ { "created": "Sat, 13 Nov 2021 16:33:03 GMT", "version": "v1" } ]
2021-11-16
[ [ "Müller", "Johannes", "" ], [ "Tellier", "Aurélien", "" ] ]
Due to its relevance for conservation biology, there is increasing interest in extending evolutionary genomics models to plant, animal or microbial species. However, this requires understanding the effect on genomic evolution of life-history traits absent in humans. In this context, it is of fundamental interest to generalize the replicator equation, which is at the heart of most population genomics models. However, as the inclusion of life-history traits generates models with a large state space, the analysis becomes involved. We focus here on quiescence and seed banks, two features common to many plant, invertebrate and microbial species. We develop a method to obtain a low-dimensional replicator equation in the context of evolutionary game theory, based on two assumptions: (1) the life-history traits are {\it per se} neutral, and (2) frequency-dependent selection is weak. We use the results to investigate the evolution and maintenance of cooperation based on the Prisoner's dilemma. We first consider the generalized replicator equation, and then refine the investigation using adaptive dynamics. It turns out that, depending on the structure and timing of the quiescence/dormancy life-history trait, cooperation in a homogeneous population can be stabilized. We finally discuss and highlight the relevance of these results for plant, invertebrate and microbial communities.
1704.04355
Andrew Francis
Guilherme S. Rodrigues, Andrew R. Francis, Scott A. Sisson, Mark M. Tanaka
Inferences on the acquisition of multidrug resistance in \emph{Mycobacterium tuberculosis} using molecular epidemiological data
32 pages, 6 figures. This manuscript will appear as a chapter in the Handbook of Approximate Bayesian Computation
null
null
null
q-bio.QM q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the rates of drug resistance acquisition in a natural population using molecular epidemiological data from Bolivia. First, we study the rate of direct acquisition of double resistance from the double sensitive state within patients and compare it to the rates of evolution to single resistance. In particular, we address whether or not double resistance can evolve directly from a double sensitive state within a given host. Second, we aim to understand whether the differences in mutation rates to rifampicin and isoniazid resistance translate to the epidemiological scale. Third, we estimate the proportion of MDR TB cases that are due to the transmission of MDR strains compared to acquisition of resistance through evolution. To address these problems we develop a model of TB transmission in which we track the evolution of resistance to two drugs and the evolution of VNTR loci. However, the available data is incomplete, in that it is recorded only for a fraction of the population and at a single point in time. The likelihood function induced by the proposed model is computationally prohibitive to evaluate and accordingly impractical to work with directly. We therefore approach statistical inference using approximate Bayesian computation techniques.
[ { "created": "Fri, 14 Apr 2017 05:53:58 GMT", "version": "v1" } ]
2017-04-17
[ [ "Rodrigues", "Guilherme S.", "" ], [ "Francis", "Andrew R.", "" ], [ "Sisson", "Scott A.", "" ], [ "Tanaka", "Mark M.", "" ] ]
We investigate the rates of drug resistance acquisition in a natural population using molecular epidemiological data from Bolivia. First, we study the rate of direct acquisition of double resistance from the double sensitive state within patients and compare it to the rates of evolution to single resistance. In particular, we address whether or not double resistance can evolve directly from a double sensitive state within a given host. Second, we aim to understand whether the differences in mutation rates to rifampicin and isoniazid resistance translate to the epidemiological scale. Third, we estimate the proportion of MDR TB cases that are due to the transmission of MDR strains compared to acquisition of resistance through evolution. To address these problems we develop a model of TB transmission in which we track the evolution of resistance to two drugs and the evolution of VNTR loci. However, the available data is incomplete, in that it is recorded only for a fraction of the population and at a single point in time. The likelihood function induced by the proposed model is computationally prohibitive to evaluate and accordingly impractical to work with directly. We therefore approach statistical inference using approximate Bayesian computation techniques.
1512.00058
Sundeep Teki
Sundeep Teki
Observations on recent progress in the field of timing and time perception
23 pages, 1 table
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time is an important dimension of brain function, but little is still known about the underlying cognitive principles and neurobiological mechanisms. The field of timing and time perception has witnessed rapid growth and multidisciplinary interest in recent years with the advent of modern neuroimaging, neurophysiological and optogenetic tools. In this article, I review the literature from the last ten years (2005-2015) using a data mining approach and highlight the most significant empirical as well as review articles based on the number of citations (a minimum of 100 citations). Such an analysis provides a unique perspective on the current state of the art in the field and highlights subtopics that have received considerable attention, and those that have not. The objective of the article is to present an objective summary of the current progress in the field of timing and time perception and to provide a valuable and accessible resource summarizing the most cited articles for new as well as current investigators in the field.
[ { "created": "Mon, 30 Nov 2015 21:51:05 GMT", "version": "v1" } ]
2015-12-02
[ [ "Teki", "Sundeep", "" ] ]
Time is an important dimension of brain function, but little is still known about the underlying cognitive principles and neurobiological mechanisms. The field of timing and time perception has witnessed rapid growth and multidisciplinary interest in recent years with the advent of modern neuroimaging, neurophysiological and optogenetic tools. In this article, I review the literature from the last ten years (2005-2015) using a data mining approach and highlight the most significant empirical as well as review articles based on the number of citations (a minimum of 100 citations). Such analysis provides a unique perspective on the current state of the art in the field and highlights subtopics that have received considerable attention, and those that have not. The objective of the article is to present an objective summary of the current progress in the field of timing and time perception and to provide a valuable and accessible resource summarizing the most cited articles for new as well as current investigators in the field.
q-bio/0502036
Jeremy England
Jeremy L. England, John Cardy
Morphogen Gradient from a Noisy Source
Four pages, three figures
PRL 94, 078101 (2005)
10.1103/PhysRevLett.94.078101
null
q-bio.TO cond-mat.other q-bio.MN q-bio.OT
null
We investigate the effect of time-dependent noise on the shape of a morphogen gradient in a developing embryo. Perturbation theory is used to calculate the deviations from deterministic behavior in a simple reaction-diffusion model of robust gradient formation, and the results are confirmed by numerical simulation. It is shown that such deviations can disrupt robustness for sufficiently high noise levels, and the implications of these findings for more complex models of gradient-shaping pathways are discussed.
[ { "created": "Thu, 24 Feb 2005 17:52:49 GMT", "version": "v1" } ]
2007-05-23
[ [ "England", "Jeremy L.", "" ], [ "Cardy", "John", "" ] ]
We investigate the effect of time-dependent noise on the shape of a morphogen gradient in a developing embryo. Perturbation theory is used to calculate the deviations from deterministic behavior in a simple reaction-diffusion model of robust gradient formation, and the results are confirmed by numerical simulation. It is shown that such deviations can disrupt robustness for sufficiently high noise levels, and the implications of these findings for more complex models of gradient-shaping pathways are discussed.
1812.06590
Liane Gabora
Liane Gabora and Cameron M. Smith
Exploring the Psychological Basis for Transitions in the Archaeological Record
20 pages. arXiv admin note: substantial text overlap with arXiv:1811.10431
In Tracy B. Henley, Matthew Rossano, and Edward P. Kardas (Eds.) Handbook of Cognitive Archaeology: A Psychological Framework. (2019)
null
null
q-bio.NC q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In lieu of an abstract here is the first paragraph: No other species remotely approaches the human capacity for the cultural evolution of novelty that is accumulative, adaptive, and open-ended (i.e., with no a priori limit on the size or scope of possibilities). By culture we mean extrasomatic adaptations--including behavior and technology--that are socially rather than sexually transmitted. This chapter synthesizes research from anthropology, psychology, archaeology, and agent-based modeling into a speculative yet coherent account of two fundamental cognitive transitions underlying human cultural evolution that is consistent with contemporary psychology. While the chapter overlaps with a more technical paper on this topic (Gabora & Smith 2018), it incorporates new research and elaborates a genetic component to our overall argument. The ideas in this chapter grew out of a non-Darwinian framework for cultural evolution, referred to as the Self-other Reorganization (SOR) theory of cultural evolution (Gabora, 2013, in press; Smith, 2013), which was inspired by research on the origin and earliest stage in the evolution of life (Cornish-Bowden & C\'ardenas 2017; Goldenfeld, Biancalani, & Jafarpour, 2017, Vetsigian, Woese, & Goldenfeld 2006; Woese, 2002). SOR bridges psychological research on fundamental aspects of our human nature such as creativity and our proclivity to reflect on ideas from different perspectives, with the literature on evolutionary approaches to cultural evolution that aspire to synthesize the behavioral sciences much as has been done for the biological sciences. The current chapter is complementary to this effort, but less abstract; it attempts to ground the theory of cultural evolution in terms of cognitive transitions as suggested by archaeological evidence.
[ { "created": "Thu, 13 Dec 2018 23:09:31 GMT", "version": "v1" }, { "created": "Fri, 5 Jul 2019 20:53:47 GMT", "version": "v2" } ]
2021-07-22
[ [ "Gabora", "Liane", "" ], [ "Smith", "Cameron M.", "" ] ]
In lieu of an abstract here is the first paragraph: No other species remotely approaches the human capacity for the cultural evolution of novelty that is accumulative, adaptive, and open-ended (i.e., with no a priori limit on the size or scope of possibilities). By culture we mean extrasomatic adaptations--including behavior and technology--that are socially rather than sexually transmitted. This chapter synthesizes research from anthropology, psychology, archaeology, and agent-based modeling into a speculative yet coherent account of two fundamental cognitive transitions underlying human cultural evolution that is consistent with contemporary psychology. While the chapter overlaps with a more technical paper on this topic (Gabora & Smith 2018), it incorporates new research and elaborates a genetic component to our overall argument. The ideas in this chapter grew out of a non-Darwinian framework for cultural evolution, referred to as the Self-other Reorganization (SOR) theory of cultural evolution (Gabora, 2013, in press; Smith, 2013), which was inspired by research on the origin and earliest stage in the evolution of life (Cornish-Bowden & C\'ardenas 2017; Goldenfeld, Biancalani, & Jafarpour, 2017, Vetsigian, Woese, & Goldenfeld 2006; Woese, 2002). SOR bridges psychological research on fundamental aspects of our human nature such as creativity and our proclivity to reflect on ideas from different perspectives, with the literature on evolutionary approaches to cultural evolution that aspire to synthesize the behavioral sciences much as has been done for the biological sciences. The current chapter is complementary to this effort, but less abstract; it attempts to ground the theory of cultural evolution in terms of cognitive transitions as suggested by archaeological evidence.
2103.07104
Yulun Zhou
Christian Bongiorno, Yulun Zhou, Marta Kryven, David Theurel, Alessandro Rizzo, Paolo Santi, Joshua Tenenbaum, Carlo Ratti
Vector-based Pedestrian Navigation in Cities
null
null
10.1038/s43588-021-00130-y
null
q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How do pedestrians choose their paths within city street networks? Researchers have tried to shed light on this matter through strictly controlled experiments, but an ultimate answer based on real-world mobility data is still lacking. Here, we analyze salient features of human path planning through a statistical analysis of a massive dataset of GPS traces, which reveals that (1) people increasingly deviate from the shortest path when the distance between origin and destination increases, and (2) chosen paths are statistically different when origin and destination are swapped. We posit that direction to goal is a main driver of path planning and develop a vector-based navigation model that is a statistically better predictor of human paths than a model based on minimizing distance with stochastic effects. Our findings generalize across two major US cities with different street networks, hinting that vector-based navigation might be a universal property of human path planning.
[ { "created": "Fri, 12 Mar 2021 06:37:16 GMT", "version": "v1" }, { "created": "Sun, 24 Oct 2021 03:01:04 GMT", "version": "v2" } ]
2021-10-26
[ [ "Bongiorno", "Christian", "" ], [ "Zhou", "Yulun", "" ], [ "Kryven", "Marta", "" ], [ "Theurel", "David", "" ], [ "Rizzo", "Alessandro", "" ], [ "Santi", "Paolo", "" ], [ "Tenenbaum", "Joshua", "" ], [ "Ratti", "Carlo", "" ] ]
How do pedestrians choose their paths within city street networks? Researchers have tried to shed light on this matter through strictly controlled experiments, but an ultimate answer based on real-world mobility data is still lacking. Here, we analyze salient features of human path planning through a statistical analysis of a massive dataset of GPS traces, which reveals that (1) people increasingly deviate from the shortest path when the distance between origin and destination increases, and (2) chosen paths are statistically different when origin and destination are swapped. We posit that direction to goal is a main driver of path planning and develop a vector-based navigation model that is a statistically better predictor of human paths than a model based on minimizing distance with stochastic effects. Our findings generalize across two major US cities with different street networks, hinting that vector-based navigation might be a universal property of human path planning.
1805.02068
Matthias Lechner
Alexandra G\"otz, Matthias Lechner, Andreas Mader, Benedikt von Bronk, Erwin Frey, Madeleine Opitz
CsrA and its regulators control the time-point of ColicinE2 release in Escherichia coli
null
Scientific Reports 8, 6537 (2018)
10.1038/s41598-018-24699-z
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The bacterial SOS response is a cellular reaction to DNA damage that, among other actions, triggers the expression of colicins - toxic bacteriocins in Escherichia coli that are released to kill close relatives competing for resources. However, it is largely unknown how the complex network regulating toxin expression controls the time-point of toxin release to prevent premature release of inefficient protein concentrations. Here, we study how different regulatory mechanisms affect production and release of the bacteriocin ColicinE2 in Escherichia coli. Combining experimental and theoretical approaches, we demonstrate that the global carbon storage regulator CsrA controls the duration of the delay between toxin production and release and emphasize the importance of CsrA sequestering elements for the timing of ColicinE2 release. In particular, we show that ssDNA originating from rolling-circle replication of the toxin-producing plasmid represents a yet unknown additional CsrA sequestering element, which is essential in the ColicinE2-producing strain to enable toxin release by reducing the amount of free CsrA molecules in the bacterial cell. Taken together, our findings show that CsrA times ColicinE2 release and reveal a dual function for CsrA as an ssDNA and mRNA-binding protein, introducing ssDNA as an important post-transcriptional gene regulatory element.
[ { "created": "Sat, 5 May 2018 15:12:45 GMT", "version": "v1" } ]
2018-05-08
[ [ "Götz", "Alexandra", "" ], [ "Lechner", "Matthias", "" ], [ "Mader", "Andreas", "" ], [ "von Bronk", "Benedikt", "" ], [ "Frey", "Erwin", "" ], [ "Opitz", "Madeleine", "" ] ]
The bacterial SOS response is a cellular reaction to DNA damage that, among other actions, triggers the expression of colicins - toxic bacteriocins in Escherichia coli that are released to kill close relatives competing for resources. However, it is largely unknown how the complex network regulating toxin expression controls the time-point of toxin release to prevent premature release of inefficient protein concentrations. Here, we study how different regulatory mechanisms affect production and release of the bacteriocin ColicinE2 in Escherichia coli. Combining experimental and theoretical approaches, we demonstrate that the global carbon storage regulator CsrA controls the duration of the delay between toxin production and release and emphasize the importance of CsrA sequestering elements for the timing of ColicinE2 release. In particular, we show that ssDNA originating from rolling-circle replication of the toxin-producing plasmid represents a yet unknown additional CsrA sequestering element, which is essential in the ColicinE2-producing strain to enable toxin release by reducing the amount of free CsrA molecules in the bacterial cell. Taken together, our findings show that CsrA times ColicinE2 release and reveal a dual function for CsrA as an ssDNA and mRNA-binding protein, introducing ssDNA as an important post-transcriptional gene regulatory element.
1411.3435
Joshua Weitz
Joshua S. Weitz (1), Jonathan Dushoff (2) ((1) School of Biology and School of Physics, Georgia Institute of Technology, Atlanta, GA, USA, (2) Department of Biology and Institute for Infectious Disease Research, McMaster University, Hamilton, ON, Canada)
Post-death Transmission of Ebola: Challenges for Inference and Opportunities for Control
10 pages, 6 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple epidemiological models have been proposed to predict the spread of Ebola in West Africa. These models include consideration of counter-measures meant to slow and, eventually, stop the spread of the disease. Here, we examine one component of Ebola dynamics that is of growing concern -- the transmission of Ebola from the dead to the living. We do so by applying the toolkit of mathematical epidemiology to analyze the consequences of post-death transmission. We show that underlying disease parameters cannot be inferred with confidence from early-stage incidence data (that is, they are not "identifiable") because different parameter combinations can produce virtually the same epidemic trajectory. Despite this identifiability problem, we find robustly that inferences that don't account for post-death transmission tend to underestimate the basic reproductive number -- thus, given the observed rate of epidemic growth, larger amounts of post-death transmission imply larger reproductive numbers. From a control perspective, we explain how improvements in reducing post-death transmission of Ebola may reduce the overall epidemic spread and scope substantially. Increased attention to the proportion of post-death transmission has the potential to aid both in projecting the course of the epidemic and in evaluating a portfolio of control strategies.
[ { "created": "Thu, 13 Nov 2014 03:13:32 GMT", "version": "v1" } ]
2014-11-14
[ [ "Weitz", "Joshua S.", "" ], [ "Dushoff", "Jonathan", "" ] ]
Multiple epidemiological models have been proposed to predict the spread of Ebola in West Africa. These models include consideration of counter-measures meant to slow and, eventually, stop the spread of the disease. Here, we examine one component of Ebola dynamics that is of growing concern -- the transmission of Ebola from the dead to the living. We do so by applying the toolkit of mathematical epidemiology to analyze the consequences of post-death transmission. We show that underlying disease parameters cannot be inferred with confidence from early-stage incidence data (that is, they are not "identifiable") because different parameter combinations can produce virtually the same epidemic trajectory. Despite this identifiability problem, we find robustly that inferences that don't account for post-death transmission tend to underestimate the basic reproductive number -- thus, given the observed rate of epidemic growth, larger amounts of post-death transmission imply larger reproductive numbers. From a control perspective, we explain how improvements in reducing post-death transmission of Ebola may reduce the overall epidemic spread and scope substantially. Increased attention to the proportion of post-death transmission has the potential to aid both in projecting the course of the epidemic and in evaluating a portfolio of control strategies.
1212.3124
Timothy Taylor
Timothy J Taylor, Istvan Z Kiss
Interdependency and hierarchy of exact epidemic models on networks
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the years, numerous models of SIS (susceptible - infected - susceptible) disease dynamics unfolding on networks have been proposed. Here, we discuss the links between many of these models and how they can be viewed as more general motif-based models. We illustrate how the different models can be derived from one another and, where this is not possible, discuss extensions to established models that enable this derivation. We also derive a general result for the exact differential equations for the expected number of an arbitrary motif directly from the Kolmogorov/master equations and conclude with a comparison of the performance of the different closed systems of equations on networks of varying structure.
[ { "created": "Thu, 13 Dec 2012 10:59:42 GMT", "version": "v1" }, { "created": "Tue, 9 Apr 2013 19:14:54 GMT", "version": "v2" } ]
2013-04-10
[ [ "Taylor", "Timothy J", "" ], [ "Kiss", "Istvan Z", "" ] ]
Over the years, numerous models of SIS (susceptible - infected - susceptible) disease dynamics unfolding on networks have been proposed. Here, we discuss the links between many of these models and how they can be viewed as more general motif-based models. We illustrate how the different models can be derived from one another and, where this is not possible, discuss extensions to established models that enable this derivation. We also derive a general result for the exact differential equations for the expected number of an arbitrary motif directly from the Kolmogorov/master equations and conclude with a comparison of the performance of the different closed systems of equations on networks of varying structure.
2103.01790
Fernanda Matias
Katiele V. P. Brito and Fernanda Selingardi Matias
Neuronal heterogeneity modulates phase-synchronization between unidirectionally coupled populations with excitation-inhibition balance
null
Phys. Rev. E 103, 032415 (2021)
10.1103/PhysRevE.103.032415
null
q-bio.NC physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Several experiments and models have highlighted the importance of neuronal heterogeneity in brain dynamics and function. However, how such cell-to-cell diversity can affect cortical computation, synchronization, and neuronal communication is still under debate. Previous studies have focused on the effect of neuronal heterogeneity in one neuronal population. Here we are specifically interested in the effect of neuronal variability on the phase relations between two populations, which can be related to different cortical communication hypotheses. It has been recently shown that two spiking neuron populations unidirectionally connected in a sender-receiver configuration can exhibit anticipated synchronization (AS), which is characterized by a negative phase-lag. This phenomenon has been reported in electrophysiological data of non-human primates and human EEG during a visual discrimination cognitive task. In experiments, the unidirectional coupling can be assessed by Granger causality and can be accompanied by either a positive or a negative phase difference between cortical areas. Here we propose a model of two coupled populations in which the neuronal heterogeneity can determine the dynamical relation between the sender and the receiver and can reproduce phase relations reported in experiments. Depending on the distribution of parameters characterizing the neuronal firing patterns, the system can exhibit both AS and the usual delayed synchronization regime (DS, with positive phase) as well as a zero-lag synchronization regime and phase bistability between AS and DS. Furthermore, we show that our network can present diversity in its phase relations while maintaining the excitation-inhibition balance.
[ { "created": "Tue, 2 Mar 2021 14:59:44 GMT", "version": "v1" } ]
2021-04-07
[ [ "Brito", "Katiele V. P.", "" ], [ "Matias", "Fernanda Selingardi", "" ] ]
Several experiments and models have highlighted the importance of neuronal heterogeneity in brain dynamics and function. However, how such cell-to-cell diversity can affect cortical computation, synchronization, and neuronal communication is still under debate. Previous studies have focused on the effect of neuronal heterogeneity in one neuronal population. Here we are specifically interested in the effect of neuronal variability on the phase relations between two populations, which can be related to different cortical communication hypotheses. It has been recently shown that two spiking neuron populations unidirectionally connected in a sender-receiver configuration can exhibit anticipated synchronization (AS), which is characterized by a negative phase-lag. This phenomenon has been reported in electrophysiological data of non-human primates and human EEG during a visual discrimination cognitive task. In experiments, the unidirectional coupling can be assessed by Granger causality and can be accompanied by either a positive or a negative phase difference between cortical areas. Here we propose a model of two coupled populations in which the neuronal heterogeneity can determine the dynamical relation between the sender and the receiver and can reproduce phase relations reported in experiments. Depending on the distribution of parameters characterizing the neuronal firing patterns, the system can exhibit both AS and the usual delayed synchronization regime (DS, with positive phase) as well as a zero-lag synchronization regime and phase bistability between AS and DS. Furthermore, we show that our network can present diversity in its phase relations while maintaining the excitation-inhibition balance.
1506.01863
Sriganesh Srihari Dr
Sriganesh Srihari
Challenges and open problems in computational prediction of protein complexes: the case of membrane complexes
7 pages
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying the entire set of complexes is essential not only to understand complex formations, but also to map the high level organisation of the cell. Computational prediction of protein complexes faces several challenges including the lack of sufficient protein interactions, presence of noise in protein interaction datasets and difficulty in predicting small and sparse complexes. These challenges are covered in most reviews of complex prediction methods. However, an important challenge that needs to be addressed is the prediction of membrane complexes. These are often ignored because existing protein interaction detection techniques do not detect interactions between membrane proteins. Recently, however, several new experimental techniques, including MY2H, have emerged that are capable of detecting membrane protein interactions. In the light of this new data, we discuss here new challenges and the kind of open problems that need to be solved to effectively detect membrane complexes.
[ { "created": "Fri, 5 Jun 2015 11:08:05 GMT", "version": "v1" } ]
2015-06-08
[ [ "Srihari", "Sriganesh", "" ] ]
Identifying the entire set of complexes is essential not only to understand complex formations, but also to map the high level organisation of the cell. Computational prediction of protein complexes faces several challenges including the lack of sufficient protein interactions, presence of noise in protein interaction datasets and difficulty in predicting small and sparse complexes. These challenges are covered in most reviews of complex prediction methods. However, an important challenge that needs to be addressed is the prediction of membrane complexes. These are often ignored because existing protein interaction detection techniques do not detect interactions between membrane proteins. Recently, however, several new experimental techniques, including MY2H, have emerged that are capable of detecting membrane protein interactions. In the light of this new data, we discuss here new challenges and the kind of open problems that need to be solved to effectively detect membrane complexes.
1811.10360
Pan-Jun Kim
Hang-Hyun Jo, Yeon Jeong Kim, Jae Kyoung Kim, Mathias Foo, David E. Somers, Pan-Jun Kim
Waveforms of molecular oscillations reveal circadian timekeeping mechanisms
Supplementary material is available at the journal website
Commun. Biol. 1, 207 (2018)
10.1038/s42003-018-0217-1
null
q-bio.SC nlin.AO physics.bio-ph q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Circadian clocks play a pivotal role in orchestrating numerous physiological and developmental events. Waveform shapes of the oscillations of protein abundances can be informative about the underlying biochemical processes of circadian clocks. We derive a mathematical framework where waveforms do reveal hidden biochemical mechanisms of circadian timekeeping. We find that the cost of synthesizing proteins with particular waveforms can be substantially reduced by rhythmic protein half-lives over time, as supported by previous plant and mammalian data, as well as our own seedling experiment. We also find that previously-enigmatic, cyclic expression of positive arm components within the mammalian and insect clocks allows both a broad range of peak time differences between protein waveforms and the symmetries of the waveforms about the peak times. Such various peak-time differences may facilitate tissue-specific or developmental stage-specific multicellular processes. Our waveform-guided approach can be extended to various biological oscillators, including cell-cycle and synthetic genetic oscillators.
[ { "created": "Mon, 26 Nov 2018 13:38:06 GMT", "version": "v1" } ]
2018-11-27
[ [ "Jo", "Hang-Hyun", "" ], [ "Kim", "Yeon Jeong", "" ], [ "Kim", "Jae Kyoung", "" ], [ "Foo", "Mathias", "" ], [ "Somers", "David E.", "" ], [ "Kim", "Pan-Jun", "" ] ]
Circadian clocks play a pivotal role in orchestrating numerous physiological and developmental events. Waveform shapes of the oscillations of protein abundances can be informative about the underlying biochemical processes of circadian clocks. We derive a mathematical framework where waveforms do reveal hidden biochemical mechanisms of circadian timekeeping. We find that the cost of synthesizing proteins with particular waveforms can be substantially reduced by rhythmic protein half-lives over time, as supported by previous plant and mammalian data, as well as our own seedling experiment. We also find that previously-enigmatic, cyclic expression of positive arm components within the mammalian and insect clocks allows both a broad range of peak time differences between protein waveforms and the symmetries of the waveforms about the peak times. Such various peak-time differences may facilitate tissue-specific or developmental stage-specific multicellular processes. Our waveform-guided approach can be extended to various biological oscillators, including cell-cycle and synthetic genetic oscillators.
1603.04477
Chengzhe Tian
Chengzhe Tian, Namiko Mitarai
Bifurcation of Transition Paths Induced by Coupled Bistable Systems
18 pages, 3 figures
J. Chem. Phys. 144, 215102 (2016)
10.1063/1.4953242
null
q-bio.MN nlin.CD physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We discuss the transition paths in a coupled bistable system consisting of multiple interacting identical bistable motifs. We propose a simple model of coupled bistable gene circuits as an example, and show that its transition paths bifurcate. We then derive a criterion to predict the bifurcation of transition paths in a generalized coupled bistable system. We confirm the validity of the theory for the example system by numerical simulation. We also demonstrate in the example system that, if the steady states of individual gene circuits are not changed by the coupling, the bifurcation pattern is not dependent on the number of gene circuits. We further show that the transition rate exponentially decreases with the number of gene circuits when the transition path does not bifurcate, while a bifurcation facilitates the transition by lowering the quasi-potential energy barrier.
[ { "created": "Mon, 14 Mar 2016 21:08:09 GMT", "version": "v1" }, { "created": "Fri, 20 May 2016 14:05:19 GMT", "version": "v2" } ]
2016-11-03
[ [ "Tian", "Chengzhe", "" ], [ "Mitarai", "Namiko", "" ] ]
We discuss the transition paths in a coupled bistable system consisting of multiple interacting identical bistable motifs. We propose a simple model of coupled bistable gene circuits as an example, and show that its transition paths bifurcate. We then derive a criterion to predict the bifurcation of transition paths in a generalized coupled bistable system. We confirm the validity of the theory for the example system by numerical simulation. We also demonstrate in the example system that, if the steady states of individual gene circuits are not changed by the coupling, the bifurcation pattern is not dependent on the number of gene circuits. We further show that the transition rate exponentially decreases with the number of gene circuits when the transition path does not bifurcate, while a bifurcation facilitates the transition by lowering the quasi-potential energy barrier.
2109.07190
Matteo de Rosa
Michela Bollati, Luisa Diomede, Toni Giorgino, Carmina Natale, Elisa Fagnani, Irene Boniardi, Alberto Barbiroli, Rebecca Alemani, Marten Beeg, Marco Gobbi, Ana Fakin, Eloise Mastrangelo, Mario Milani, Gianluca Presciuttini, Edi Gabellieri, Patrizia Cioni, Matteo de Rosa
A novel hotspot of gelsolin instability and aggregation propensity triggers a new mechanism of amyloidosis
main: 28 pages, 7 figures; supplementary: 11 pages, 6 figures
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
The multidomain protein gelsolin (GSN) is composed of six homologous modules, sequentially named G1 to G6. Single point substitutions in this protein are responsible for AGel amyloidosis, a hereditary disease characterized by progressive corneal lattice dystrophy, cutis laxa, and polyneuropathy. Several different amyloidogenic variants of GSN have been identified over the years, but only the most common D187N/Y mutants, in G2, have been thoroughly characterized, and the underlying functional mechanistic link between mutation, altered protein structure, susceptibility to aberrant furin cleavage and aggregative potential resolved. Little is known about the recently identified mutations A551P, E553K and M517R hosted at the interface between G4 and G5, whose aggregation process likely follows an alternative pathway. We demonstrate that these three substitutions impair temperature and pressure stability of GSN but do not increase its susceptibility to furin cleavage, the first event of the canonical aggregation pathway. The variants are also characterized by a higher tendency to aggregate in the unproteolysed forms and show a higher proteotoxicity in a C. elegans-based assay. Structural studies point to a destabilization of the interface between G4 and G5 due to three different structural determinants: beta-strand breaking, steric hindrance and/or charge repulsion, all implying the impairment of interdomain contacts. All available evidence suggests that the rearrangement of the protein global architecture triggers a furin-independent aggregation of the protein, supporting the existence of a non-canonical pathway of gelsolin amyloidosis pathogenesis.
[ { "created": "Wed, 15 Sep 2021 10:10:20 GMT", "version": "v1" } ]
2021-09-16
[ [ "Bollati", "Michela", "" ], [ "Diomede", "Luisa", "" ], [ "Giorgino", "Toni", "" ], [ "Natale", "Carmina", "" ], [ "Fagnani", "Elisa", "" ], [ "Boniardi", "Irene", "" ], [ "Barbiroli", "Alberto", "" ], [ "Alemani", "Rebecca", "" ], [ "Beeg", "Marten", "" ], [ "Gobbi", "Marco", "" ], [ "Fakin", "Ana", "" ], [ "Mastrangelo", "Eloise", "" ], [ "Milani", "Mario", "" ], [ "Presciuttini", "Gianluca", "" ], [ "Gabellieri", "Edi", "" ], [ "Cioni", "Patrizia", "" ], [ "de Rosa", "Matteo", "" ] ]
The multidomain protein gelsolin (GSN) is composed of six homologous modules, sequentially named G1 to G6. Single point substitutions in this protein are responsible for AGel amyloidosis, a hereditary disease characterized by progressive corneal lattice dystrophy, cutis laxa, and polyneuropathy. Several different amyloidogenic variants of GSN have been identified over the years, but only the most common D187N/Y mutants, in G2, have been thoroughly characterized, and the underlying functional mechanistic link between mutation, altered protein structure, susceptibility to aberrant furin cleavage and aggregative potential resolved. Little is known about the recently identified mutations A551P, E553K and M517R hosted at the interface between G4 and G5, whose aggregation process likely follows an alternative pathway. We demonstrate that these three substitutions impair temperature and pressure stability of GSN but do not increase its susceptibility to furin cleavage, the first event of the canonical aggregation pathway. The variants are also characterized by a higher tendency to aggregate in the unproteolysed forms and show a higher proteotoxicity in a C. elegans-based assay. Structural studies point to a destabilization of the interface between G4 and G5 due to three different structural determinants: beta-strand breaking, steric hindrance and/or charge repulsion, all implying the impairment of interdomain contacts. All available evidence suggests that the rearrangement of the protein global architecture triggers a furin-independent aggregation of the protein, supporting the existence of a non-canonical pathway of gelsolin amyloidosis pathogenesis.
0802.2355
James Degnan
James H. Degnan
Properties of Consensus Methods for Inferring Species Trees from Gene Trees
24 pages, 2 tables, 8 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Consensus methods provide a useful strategy for combining information from a collection of gene trees. An important application of consensus methods is to combine gene trees to estimate a species tree. To investigate the theoretical properties of consensus trees that would be obtained from large numbers of loci evolving according to a basic evolutionary model, we construct consensus trees from independent gene trees that occur in proportion to gene tree probabilities derived from coalescent theory. We consider majority-rule, rooted triple (R*), and greedy consensus trees constructed from known gene trees, both in the asymptotic case as numbers of gene trees approach infinity and for finite numbers of genes. Our results show that for some combinations of species tree branch lengths, increasing the number of independent loci can make the majority-rule consensus tree more likely to be at least partially unresolved and the greedy consensus tree less likely to match the species tree. However, the probability that the R* consensus tree has the species tree topology approaches 1 as the number of gene trees approaches infinity. Although the greedy consensus algorithm can be the quickest to converge on the correct species tree when increasing the number of gene trees, it can also be positively misleading. The majority-rule consensus tree is not a misleading estimator of the species tree topology, and the R* consensus tree is a statistically consistent estimator of the species tree topology. Our results therefore suggest a method for using multiple loci to infer the species tree topology, even when it is discordant with the most likely gene tree.
[ { "created": "Sun, 17 Feb 2008 01:21:28 GMT", "version": "v1" } ]
2008-02-19
[ [ "Degnan", "James H.", "" ] ]
Consensus methods provide a useful strategy for combining information from a collection of gene trees. An important application of consensus methods is to combine gene trees to estimate a species tree. To investigate the theoretical properties of consensus trees that would be obtained from large numbers of loci evolving according to a basic evolutionary model, we construct consensus trees from independent gene trees that occur in proportion to gene tree probabilities derived from coalescent theory. We consider majority-rule, rooted triple (R*), and greedy consensus trees constructed from known gene trees, both in the asymptotic case as numbers of gene trees approach infinity and for finite numbers of genes. Our results show that for some combinations of species tree branch lengths, increasing the number of independent loci can make the majority-rule consensus tree more likely to be at least partially unresolved and the greedy consensus tree less likely to match the species tree. However, the probability that the R* consensus tree has the species tree topology approaches 1 as the number of gene trees approaches infinity. Although the greedy consensus algorithm can be the quickest to converge on the correct species tree when increasing the number of gene trees, it can also be positively misleading. The majority-rule consensus tree is not a misleading estimator of the species tree topology, and the R* consensus tree is a statistically consistent estimator of the species tree topology. Our results therefore suggest a method for using multiple loci to infer the species tree topology, even when it is discordant with the most likely gene tree.
1007.4467
Tsvi Tlusty
Yonatan Savir and Tsvi Tlusty
Molecular Recognition as an Information Channel: The Role of Conformational Changes
Keywords--Molecular information channels, molecular recognition, conformational proofreading. http://www.weizmann.ac.il/complex/tlusty/papers/IEEE2009b.pdf
Workshop on Biological and Bio-Inspired Information Theory, 43rd Annual Conference on Information Sciences and Systems, March 18-20, 2009 2009 , Page(s): 835 - 840
10.1109/CISS.2009.5054833
null
q-bio.BM cs.IT math.IT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular recognition, which is essential in processing information in biological systems, takes place in a crowded noisy biochemical environment and requires the recognition of a specific target within a background of various similar competing molecules. We consider molecular recognition as a transmission of information via a noisy channel and use this analogy to gain insights on the optimal, or fittest, molecular recognizer. We focus on the optimal structural properties of the molecules such as flexibility and conformation. We show that conformational changes upon binding, which often occur during molecular recognition, may optimize the detection performance of the recognizer. We thus suggest a generic design principle termed 'conformational proofreading' in which deformation enhances detection. We evaluate the optimal flexibility of the molecular recognizer, which is analogous to the stochasticity in a decision unit. In some scenarios, a flexible recognizer, i.e., a stochastic decision unit, performs better than a rigid, deterministic one. As a biological example, we discuss conformational changes during homologous recombination, the process of genetic exchange between two DNA strands.
[ { "created": "Mon, 26 Jul 2010 14:08:35 GMT", "version": "v1" } ]
2010-07-27
[ [ "Savir", "Yonatan", "" ], [ "Tlusty", "Tsvi", "" ] ]
Molecular recognition, which is essential in processing information in biological systems, takes place in a crowded noisy biochemical environment and requires the recognition of a specific target within a background of various similar competing molecules. We consider molecular recognition as a transmission of information via a noisy channel and use this analogy to gain insights on the optimal, or fittest, molecular recognizer. We focus on the optimal structural properties of the molecules such as flexibility and conformation. We show that conformational changes upon binding, which often occur during molecular recognition, may optimize the detection performance of the recognizer. We thus suggest a generic design principle termed 'conformational proofreading' in which deformation enhances detection. We evaluate the optimal flexibility of the molecular recognizer, which is analogous to the stochasticity in a decision unit. In some scenarios, a flexible recognizer, i.e., a stochastic decision unit, performs better than a rigid, deterministic one. As a biological example, we discuss conformational changes during homologous recombination, the process of genetic exchange between two DNA strands.
2312.16074
Claudia Solis-Lemus
Yibo Kong, George P. Tiley, Claudia Solis-Lemus
Unsupervised Learning of Phylogenetic Trees via Split-Weight Embedding
null
null
null
null
q-bio.PE stat.ML
http://creativecommons.org/licenses/by/4.0/
Unsupervised learning has become a staple in classical machine learning, successfully identifying clustering patterns in data across a broad range of domain applications. Surprisingly, despite its accuracy and elegant simplicity, unsupervised learning has not been sufficiently exploited in the realm of phylogenetic tree inference. The main reason for the delay in adoption of unsupervised learning in phylogenetics is the lack of a meaningful, yet simple, way of embedding phylogenetic trees into a vector space. Here, we propose the simple yet powerful split-weight embedding which allows us to fit standard clustering algorithms to the space of phylogenetic trees. We show that our split-weight embedded clustering is able to recover meaningful evolutionary relationships in simulated and real (Adansonia baobabs) data.
[ { "created": "Tue, 26 Dec 2023 14:50:39 GMT", "version": "v1" }, { "created": "Fri, 3 May 2024 14:39:30 GMT", "version": "v2" } ]
2024-05-06
[ [ "Kong", "Yibo", "" ], [ "Tiley", "George P.", "" ], [ "Solis-Lemus", "Claudia", "" ] ]
Unsupervised learning has become a staple in classical machine learning, successfully identifying clustering patterns in data across a broad range of domain applications. Surprisingly, despite its accuracy and elegant simplicity, unsupervised learning has not been sufficiently exploited in the realm of phylogenetic tree inference. The main reason for the delay in adoption of unsupervised learning in phylogenetics is the lack of a meaningful, yet simple, way of embedding phylogenetic trees into a vector space. Here, we propose the simple yet powerful split-weight embedding which allows us to fit standard clustering algorithms to the space of phylogenetic trees. We show that our split-weight embedded clustering is able to recover meaningful evolutionary relationships in simulated and real (Adansonia baobabs) data.
1308.2012
Wei Fan
Binghang Liu, Yujian Shi, Jianying Yuan, Xuesong Hu, Hao Zhang, Nan Li, Zhenyu Li, Yanxiang Chen, Desheng Mu, Wei Fan
Estimation of genomic characteristics by analyzing k-mer frequency in de novo genome projects
In total, 47 pages include maintext and supplemental. 7 maintext figures, 3 tables, 6 supplemental figures, 5 supplemental tables
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: With the fast development of next generation sequencing technologies, increasing numbers of genomes are being de novo sequenced and assembled. However, most are in fragmented and incomplete draft status, and thus it is often difficult to know the accurate genome size and repeat content. Furthermore, many genomes are highly repetitive or heterozygous, posing problems to current assemblers utilizing short reads. Therefore, it is necessary to develop efficient assembly-independent methods for accurate estimation of these genomic characteristics. Results: Here we present a framework for modeling the distribution of k-mer frequency from sequencing data and estimating the genomic characteristics such as genome size, repeat structure and heterozygous rate. By introducing novel techniques of k-mer individuals, float precision estimation, and proper treatment of sequencing error and coverage bias, the estimation accuracy of our method is significantly improved over existing methods. We also studied how the various genomic and sequencing characteristics affect the estimation accuracy using simulated sequencing data, and discussed the limitations on applying our method to real sequencing data. Conclusion: Based on this research, we show that the k-mer frequency analysis can be used as a general and assembly-independent method for estimating genomic characteristics, which can improve our understanding of a species' genome, help design the sequencing strategy of genome projects, and guide the development of assembly algorithms. The programs developed in this research are written using C/C++, and freely accessible at Github URL (https://github.com/fanagislab/GCE) or BGI ftp ( ftp://ftp.genomics.org.cn/pub/gce).
[ { "created": "Fri, 9 Aug 2013 01:51:19 GMT", "version": "v1" }, { "created": "Thu, 27 Feb 2020 01:58:36 GMT", "version": "v2" } ]
2020-02-28
[ [ "Liu", "Binghang", "" ], [ "Shi", "Yujian", "" ], [ "Yuan", "Jianying", "" ], [ "Hu", "Xuesong", "" ], [ "Zhang", "Hao", "" ], [ "Li", "Nan", "" ], [ "Li", "Zhenyu", "" ], [ "Chen", "Yanxiang", "" ], [ "Mu", "Desheng", "" ], [ "Fan", "Wei", "" ] ]
Background: With the fast development of next generation sequencing technologies, increasing numbers of genomes are being de novo sequenced and assembled. However, most are in fragmented and incomplete draft status, and thus it is often difficult to know the accurate genome size and repeat content. Furthermore, many genomes are highly repetitive or heterozygous, posing problems to current assemblers utilizing short reads. Therefore, it is necessary to develop efficient assembly-independent methods for accurate estimation of these genomic characteristics. Results: Here we present a framework for modeling the distribution of k-mer frequency from sequencing data and estimating the genomic characteristics such as genome size, repeat structure and heterozygous rate. By introducing novel techniques of k-mer individuals, float precision estimation, and proper treatment of sequencing error and coverage bias, the estimation accuracy of our method is significantly improved over existing methods. We also studied how the various genomic and sequencing characteristics affect the estimation accuracy using simulated sequencing data, and discussed the limitations on applying our method to real sequencing data. Conclusion: Based on this research, we show that the k-mer frequency analysis can be used as a general and assembly-independent method for estimating genomic characteristics, which can improve our understanding of a species' genome, help design the sequencing strategy of genome projects, and guide the development of assembly algorithms. The programs developed in this research are written using C/C++, and freely accessible at Github URL (https://github.com/fanagislab/GCE) or BGI ftp ( ftp://ftp.genomics.org.cn/pub/gce).
2205.10391
Chris Salahub
Christopher Salahub
A structural model of genome-wide association studies
null
null
null
null
q-bio.GN stat.AP
http://creativecommons.org/licenses/by/4.0/
A structural genetic model incorporating a modern understanding of the genome and common practice in genome-wide association studies is derived mathematically. The model shows the Haldane map distance as a direct consequence of the structure of the genome. An expression for genetic correlation is derived under the model and compared to data resulting from the BSB mouse cross. A correlation test plot is introduced for this comparison and shows the close agreement of the model and empirical results. Noteworthy departures in this plot indicate regions which warrant further investigation.
[ { "created": "Fri, 20 May 2022 18:14:39 GMT", "version": "v1" } ]
2022-05-24
[ [ "Salahub", "Christopher", "" ] ]
A structural genetic model incorporating a modern understanding of the genome and common practice in genome-wide association studies is derived mathematically. The model shows the Haldane map distance as a direct consequence of the structure of the genome. An expression for genetic correlation is derived under the model and compared to data resulting from the BSB mouse cross. A correlation test plot is introduced for this comparison and shows the close agreement of the model and empirical results. Noteworthy departures in this plot indicate regions which warrant further investigation.
1805.12056
Marta Tyran-Kaminska
Mateusz Falfus, Michael C. Mackey, Marta Tyran-Kaminska
The combined effects of Feller diffusion and transcriptional/translational bursting in simple gene networks
20 pages, corrected typos
Journal of Mathematical Analysis and Applications 470 (2019), 931-953
10.1016/j.jmaa.2018.10.042
BCSim-2017-s07
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a stochastic model of biosynthesis of proteins in generic bacterial operons. The stochasticity arises from two different processes, namely from `bursting' production of either mRNA and/or protein (in the transcription/translation process) and from standard diffusive fluctuations. The amount of protein follows the Feller diffusion, while the bursting introduces random jumps between trajectories of the diffusion process. The combined effect leads to a process commonly known as a diffusion process with jumps. We study the existence of invariant densities and the long time behavior of distributions of the corresponding Markov process, proving asymptotic stability in the evolution of the density.
[ { "created": "Wed, 30 May 2018 16:17:14 GMT", "version": "v1" }, { "created": "Wed, 18 Jul 2018 14:58:23 GMT", "version": "v2" } ]
2018-11-20
[ [ "Falfus", "Mateusz", "" ], [ "Mackey", "Michael C.", "" ], [ "Tyran-Kaminska", "Marta", "" ] ]
We study a stochastic model of biosynthesis of proteins in generic bacterial operons. The stochasticity arises from two different processes, namely from `bursting' production of either mRNA and/or protein (in the transcription/translation process) and from standard diffusive fluctuations. The amount of protein follows the Feller diffusion, while the bursting introduces random jumps between trajectories of the diffusion process. The combined effect leads to a process commonly known as a diffusion process with jumps. We study the existence of invariant densities and the long time behavior of distributions of the corresponding Markov process, proving asymptotic stability in the evolution of the density.
2103.14132
Charilaos Akasiadis PhD
Charilaos Akasiadis, Miguel Ponce-de-Leon, Arnau Montagud, Evangelos Michelioudakis, Alexia Atsidakou, Elias Alevizos, Alexander Artikis, Alfonso Valencia, and Georgios Paliouras
Parallel Model Exploration for Tumor Treatment Simulations
19 pages, 10 figures
null
10.1111/coin.12515
null
q-bio.QM cs.DC q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Computational systems and methods are often used in biological research, including the understanding of cancer and the development of treatments. Simulations of tumor growth and its response to different drugs are of particular importance, but also of challenging complexity. The main challenges are first to calibrate the simulators so as to reproduce real-world cases, and second, to search for specific values of the parameter space concerning effective drug treatments. In this work, we combine a multi-scale simulator for tumor cell growth and a Genetic Algorithm (GA) as a heuristic search method for finding good parameter configurations in reasonable time. The two modules are integrated into a single workflow that can be executed in parallel on high performance computing infrastructures. In effect, the GA is used to calibrate the simulator, and then to explore different drug delivery schemes. Among these schemes, we aim to find those that minimize tumor cell size and the probability of emergence of drug resistant cells in the future. Experimental results illustrate the effectiveness and computational efficiency of the approach.
[ { "created": "Thu, 25 Mar 2021 20:58:44 GMT", "version": "v1" }, { "created": "Tue, 22 Feb 2022 12:07:09 GMT", "version": "v2" } ]
2022-02-23
[ [ "Akasiadis", "Charilaos", "" ], [ "Ponce-de-Leon", "Miguel", "" ], [ "Montagud", "Arnau", "" ], [ "Michelioudakis", "Evangelos", "" ], [ "Atsidakou", "Alexia", "" ], [ "Alevizos", "Elias", "" ], [ "Artikis", "Alexander", "" ], [ "Valencia", "Alfonso", "" ], [ "Paliouras", "Georgios", "" ] ]
Computational systems and methods are often used in biological research, including the understanding of cancer and the development of treatments. Simulations of tumor growth and its response to different drugs are of particular importance, but also of challenging complexity. The main challenges are first to calibrate the simulators so as to reproduce real-world cases, and second, to search for specific values of the parameter space concerning effective drug treatments. In this work, we combine a multi-scale simulator for tumor cell growth and a Genetic Algorithm (GA) as a heuristic search method for finding good parameter configurations in reasonable time. The two modules are integrated into a single workflow that can be executed in parallel on high performance computing infrastructures. In effect, the GA is used to calibrate the simulator, and then to explore different drug delivery schemes. Among these schemes, we aim to find those that minimize tumor cell size and the probability of emergence of drug resistant cells in the future. Experimental results illustrate the effectiveness and computational efficiency of the approach.
1801.08073
Marko Popovic
Marko Popovic
Thermodynamic Mechanism of Life and Aging
null
null
null
null
q-bio.OT physics.bio-ph q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Life is a complex biological phenomenon represented by numerous chemical, physical and biological processes performed by a biothermodynamic system/cell/organism. Both living organisms and inanimate objects are subject to aging, a biological and physicochemical process characterized by changes in biological and thermodynamic state. Thus, the same physical laws govern processes in both animate and inanimate matter. All life processes lead to change of an organism's state. The change of biological and thermodynamic state of an organism in time underlies all of three kinds of aging (chronological, biological and thermodynamic). Life and aging of an organism both start at the moment of fertilization and continue through entire lifespan. Fertilization represents formation of a new organism. The new organism represents a new thermodynamic system. From the very beginning, it changes its state by changing thermodynamic parameters. The change of thermodynamic parameters is observed as aging and can be related to change in entropy. Entropy is thus the parameter that is related to all others and describes aging in the best manner. In the beginning, entropy change appears as a consequence of accumulation of matter (growth). Later, decomposition and configurational changes dominate, as a consequence of various chemical reactions (free radical, decomposition, fragmentation, accumulation of lipofuscin-like substances...).
[ { "created": "Tue, 9 Jan 2018 09:47:30 GMT", "version": "v1" } ]
2018-01-25
[ [ "Popovic", "Marko", "" ] ]
Life is a complex biological phenomenon represented by numerous chemical, physical and biological processes performed by a biothermodynamic system/cell/organism. Both living organisms and inanimate objects are subject to aging, a biological and physicochemical process characterized by changes in biological and thermodynamic state. Thus, the same physical laws govern processes in both animate and inanimate matter. All life processes lead to change of an organism's state. The change of biological and thermodynamic state of an organism in time underlies all of three kinds of aging (chronological, biological and thermodynamic). Life and aging of an organism both start at the moment of fertilization and continue through entire lifespan. Fertilization represents formation of a new organism. The new organism represents a new thermodynamic system. From the very beginning, it changes its state by changing thermodynamic parameters. The change of thermodynamic parameters is observed as aging and can be related to change in entropy. Entropy is thus the parameter that is related to all others and describes aging in the best manner. In the beginning, entropy change appears as a consequence of accumulation of matter (growth). Later, decomposition and configurational changes dominate, as a consequence of various chemical reactions (free radical, decomposition, fragmentation, accumulation of lipofuscin-like substances...).
1308.1289
Tanja Stadler
Amaury Lambert and Tanja Stadler
Macro-evolutionary models and coalescent point processes: The shape and probability of reconstructed phylogenies
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Forward-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPP), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPP lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: 1) extinction rate does not depend on a trait; 2) rates do not depend on time; 3) mass extinctions may happen additionally at certain points in the past.
[ { "created": "Tue, 6 Aug 2013 14:50:57 GMT", "version": "v1" } ]
2013-08-07
[ [ "Lambert", "Amaury", "" ], [ "Stadler", "Tanja", "" ] ]
Forward-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPP), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPP lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: 1) extinction rate does not depend on a trait; 2) rates do not depend on time; 3) mass extinctions may happen additionally at certain points in the past.
q-bio/0410013
Joshua Plotkin
Joshua B. Plotkin, Jonathan Dushoff, Michael M. Desai, Hunter B. Fraser
Synonymous codon usage and selection on proteins
33 pages
null
null
null
q-bio.PE q-bio.GN
null
Selection pressures on proteins are usually measured by comparing homologous nucleotide sequences (Zuckerkandl and Pauling 1965). Recently we introduced a novel method, termed `volatility', to estimate selection pressures on protein sequences from their synonymous codon usage (Plotkin and Dushoff 2003, Plotkin et al 2004a). Here we provide a theoretical foundation for this approach. We derive the expected frequencies of synonymous codons as a function of the strength of selection, the mutation rate, and the effective population size. We analyze the conditions under which we can expect to draw inferences from biased codon usage, and we estimate the time scales required to establish and maintain such a signal. Our results indicate that, over a broad range of parameters, synonymous codon usage can reliably distinguish between negative selection, positive selection, and neutrality. While the power of volatility to detect negative selection depends on the population size, there is no such dependence for the detection of positive selection. Furthermore, we show that phenomena such as transient hyper-mutators in microbes can improve the power of volatility to detect negative selection, even when the typical observed neutral site heterozygosity is low.
[ { "created": "Wed, 13 Oct 2004 04:55:21 GMT", "version": "v1" } ]
2016-09-08
[ [ "Plotkin", "Joshua B.", "" ], [ "Dushoff", "Jonathan", "" ], [ "Desai", "Michael M.", "" ], [ "Fraser", "Hunter B.", "" ] ]
Selection pressures on proteins are usually measured by comparing homologous nucleotide sequences (Zuckerkandl and Pauling 1965). Recently we introduced a novel method, termed `volatility', to estimate selection pressures on protein sequences from their synonymous codon usage (Plotkin and Dushoff 2003, Plotkin et al 2004a). Here we provide a theoretical foundation for this approach. We derive the expected frequencies of synonymous codons as a function of the strength of selection, the mutation rate, and the effective population size. We analyze the conditions under which we can expect to draw inferences from biased codon usage, and we estimate the time scales required to establish and maintain such a signal. Our results indicate that, over a broad range of parameters, synonymous codon usage can reliably distinguish between negative selection, positive selection, and neutrality. While the power of volatility to detect negative selection depends on the population size, there is no such dependence for the detection of positive selection. Furthermore, we show that phenomena such as transient hyper-mutators in microbes can improve the power of volatility to detect negative selection, even when the typical observed neutral site heterozygosity is low.
1512.05340
\"Ozkan Karabacak Mr.
Neslihan Serap \c{S}eng\"or and \"Ozkan Karabacak
A computational model revealing the effect of dopamine on action selection
34 pages
null
null
null
q-bio.NC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to reveal the effect of nigrostriatal dopamine system on action selection, first a computational model of the cortex-basal ganglia-thalamus loop is proposed and based on this model a simple compound model realizing the Stroop effect is established. Even though Stroop task is mostly used to examine selective attention, the main objective of this work is to investigate the effect of action selection on Stroop task. The computational model of the cortex-basal ganglia-thalamus loop is a non-linear dynamical system which is not only capable of revealing the action selection property of basal ganglia but also capable of modelling the effect of dopamine on action selection. While the interpretation of action selection is based on the solutions of the non-linear dynamical system, the effect of dopamine is modelled by a parameter of the model. The inhibiting effect of dopamine on the habitual behaviour which corresponds to word reading in Stroop task and letting the novel one occur corresponding to colour naming is investigated using the compound computational model established in this work.
[ { "created": "Wed, 16 Dec 2015 10:32:11 GMT", "version": "v1" } ]
2015-12-18
[ [ "Şengör", "Neslihan Serap", "" ], [ "Karabacak", "Özkan", "" ] ]
In order to reveal the effect of nigrostriatal dopamine system on action selection, first a computational model of the cortex-basal ganglia-thalamus loop is proposed and based on this model a simple compound model realizing the Stroop effect is established. Even though Stroop task is mostly used to examine selective attention, the main objective of this work is to investigate the effect of action selection on Stroop task. The computational model of the cortex-basal ganglia-thalamus loop is a non-linear dynamical system which is not only capable of revealing the action selection property of basal ganglia but also capable of modelling the effect of dopamine on action selection. While the interpretation of action selection is based on the solutions of the non-linear dynamical system, the effect of dopamine is modelled by a parameter of the model. The inhibiting effect of dopamine on the habitual behaviour which corresponds to word reading in Stroop task and letting the novel one occur corresponding to colour naming is investigated using the compound computational model established in this work.
q-bio/0411042
Bjoern Naundorf
B.Naundorf, T. Geisel and F. Wolf
Action Potential Onset Dynamics and the Response Speed of Neuronal Populations
Submitted to the Journal of Computational Neuroscience
null
null
null
q-bio.NC cond-mat.dis-nn
null
The result of computational operations performed at the single cell level are coded into sequences of action potentials (APs). In the cerebral cortex, due to its columnar organization, large number of neurons are involved in any individual processing task. It is therefore important to understand how the properties of coding at the level of neuronal populations are determined by the dynamics of single neuron AP generation. Here we analyze how the AP generating mechanism determines the speed with which an ensemble of neurons can represent transient stochastic input signals. We analyze a generalization of the $\theta$-neuron, the normal form of the dynamics of Type-I excitable membranes. Using a novel sparse matrix representation of the Fokker-Planck equation, which describes the ensemble dynamics, we calculate the transmission functions for small modulations of the mean current and noise amplitude. In the high-frequency limit the transmission function decays as $\omega^{-\gamma}$, where $\gamma$ surprisingly depends on the phase $\theta_{s}$ at which APs are emitted. In a physiologically plausible regime up to 1kHz the typical response speed is, however, independent of the high-frequency limit and is set by the rapidness of the AP onset, as revealed by the full transmission function. In this regime modulations of the noise amplitude can be transmitted faithfully up to much higher frequencies than modulations in the mean input current. We finally show that the linear response approach used is valid for a large regime of stimulus amplitudes.
[ { "created": "Tue, 23 Nov 2004 00:26:40 GMT", "version": "v1" } ]
2007-05-23
[ [ "Naundorf", "B.", "" ], [ "Geisel", "T.", "" ], [ "Wolf", "F.", "" ] ]
The result of computational operations performed at the single cell level are coded into sequences of action potentials (APs). In the cerebral cortex, due to its columnar organization, large number of neurons are involved in any individual processing task. It is therefore important to understand how the properties of coding at the level of neuronal populations are determined by the dynamics of single neuron AP generation. Here we analyze how the AP generating mechanism determines the speed with which an ensemble of neurons can represent transient stochastic input signals. We analyze a generalization of the $\theta$-neuron, the normal form of the dynamics of Type-I excitable membranes. Using a novel sparse matrix representation of the Fokker-Planck equation, which describes the ensemble dynamics, we calculate the transmission functions for small modulations of the mean current and noise amplitude. In the high-frequency limit the transmission function decays as $\omega^{-\gamma}$, where $\gamma$ surprisingly depends on the phase $\theta_{s}$ at which APs are emitted. In a physiologically plausible regime up to 1kHz the typical response speed is, however, independent of the high-frequency limit and is set by the rapidness of the AP onset, as revealed by the full transmission function. In this regime modulations of the noise amplitude can be transmitted faithfully up to much higher frequencies than modulations in the mean input current. We finally show that the linear response approach used is valid for a large regime of stimulus amplitudes.
2201.05389
Etienne Joly
Agn\`es Maurel Ribes, Pierre Bessi\`ere, Jean Charles Gu\'ery, Elo\"ise Joly Featherstone, Timoth\'ee Bruel, Remy Robinot, Olivier Schwartz, Romain Volmer, Florence Abravanel, Jacques Izopet, Etienne Joly
A simple, sensitive and quantitative FACS-based test for SARS-CoV-2 serology in humans and animals
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Serological tests are important for understanding the physiopathology and following the evolution of the Covid-19 pandemic. Assays based on flow cytometry (FACS) of tissue culture cells expressing the spike (S) protein of SARS-CoV-2 have repeatedly proven to perform slightly better than the plate-based assays ELISA and CLIA (chemiluminescent immuno-assay), and markedly better than lateral flow immuno-assays (LFIA). Here, we describe an optimized and very simple FACS assay based on staining a mix of two Jurkat cell lines, expressing either high levels of the S protein (Jurkat-S) or a fluorescent protein (Jurkat-R expressing m-Cherry, or Jurkat-G, expressing GFP, which serve as an internal negative control). We show that the Jurkat-S\&R-flow test has a much broader dynamic range than a commercial ELISA test and performs at least as well in terms of sensitivity and specificity. Also, it is more sensitive and quantitative than the hemagglutination-based test HAT, which we described recently. The Jurkat-flow test requires only a few microliters of blood; thus, it can be used to quantify various Ig isotypes in capillary blood collected from a finger prick. It can be used also to evaluate serological responses in mice, hamsters, cats and dogs. FACS tests offer a very attractive solution for laboratories with access to tissue culture and flow cytometry who want to monitor serological responses in humans or in animals, and how these relate to susceptibility to infection, or re-infection, by the virus, and to protection against Covid-19.
[ { "created": "Fri, 14 Jan 2022 11:01:27 GMT", "version": "v1" } ]
2022-01-17
[ [ "Ribes", "Agnès Maurel", "" ], [ "Bessière", "Pierre", "" ], [ "Guéry", "Jean Charles", "" ], [ "Featherstone", "Eloïse Joly", "" ], [ "Bruel", "Timothée", "" ], [ "Robinot", "Remy", "" ], [ "Schwartz", "Olivier", "" ], [ "Volmer", "Romain", "" ], [ "Abravanel", "Florence", "" ], [ "Izopet", "Jacques", "" ], [ "Joly", "Etienne", "" ] ]
Serological tests are important for understanding the physiopathology and following the evolution of the Covid-19 pandemic. Assays based on flow cytometry (FACS) of tissue culture cells expressing the spike (S) protein of SARS-CoV-2 have repeatedly proven to perform slightly better than the plate-based assays ELISA and CLIA (chemiluminescent immuno-assay), and markedly better than lateral flow immuno-assays (LFIA). Here, we describe an optimized and very simple FACS assay based on staining a mix of two Jurkat cell lines, expressing either high levels of the S protein (Jurkat-S) or a fluorescent protein (Jurkat-R expressing m-Cherry, or Jurkat-G, expressing GFP, which serve as an internal negative control). We show that the Jurkat-S\&R-flow test has a much broader dynamic range than a commercial ELISA test and performs at least as well in terms of sensitivity and specificity. Also, it is more sensitive and quantitative than the hemagglutination-based test HAT, which we described recently. The Jurkat-flow test requires only a few microliters of blood; thus, it can be used to quantify various Ig isotypes in capillary blood collected from a finger prick. It can be used also to evaluate serological responses in mice, hamsters, cats and dogs. FACS tests offer a very attractive solution for laboratories with access to tissue culture and flow cytometry who want to monitor serological responses in humans or in animals, and how these relate to susceptibility to infection, or re-infection, by the virus, and to protection against Covid-19.
1305.7435
Michele Piana
Valentina Vivaldi, Sara Garbarino, Giacomo Caviglia, Michele Piana, Gianmario Sanbuceti
Compartmental analysis of nuclear imaging data for the quantification of FDG liver metabolism
arXiv admin note: text overlap with arXiv:1212.3967
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper utilizes compartmental analysis and a statistical optimization technique in order to reduce a compartmental model describing the metabolism of labelled glucose in liver. Specifically, we first design a compartmental model for the gut providing as output the tracer concentration in the portal vein. This quantity is then used as one of the two input functions in a compartmental model for the liver. This model, in turn, provides as output the tracer coefficients quantitatively describing the effectiveness with which the labelled glucose is transported between the different compartments. For both models, the computation of the solutions for the inverse problems is performed by means of an Ant Colony Optimization algorithm. The validation of the whole process is realized by means of synthetic data simulated by solving the forward problem of the compartmental system.
[ { "created": "Fri, 31 May 2013 14:56:37 GMT", "version": "v1" } ]
2013-06-03
[ [ "Vivaldi", "Valentina", "" ], [ "Garbarino", "Sara", "" ], [ "Caviglia", "Giacomo", "" ], [ "Piana", "Michele", "" ], [ "Sanbuceti", "Gianmario", "" ] ]
This paper utilizes compartmental analysis and a statistical optimization technique in order to reduce a compartmental model describing the metabolism of labelled glucose in liver. Specifically, we first design a compartmental model for the gut providing as output the tracer concentration in the portal vein. This quantity is then used as one of the two input functions in a compartmental model for the liver. This model, in turn, provides as output the tracer coefficients quantitatively describing the effectiveness with which the labelled glucose is transported between the different compartments. For both models, the computation of the solutions for the inverse problems is performed by means of an Ant Colony Optimization algorithm. The validation of the whole process is realized by means of synthetic data simulated by solving the forward problem of the compartmental system.
0708.3181
Rudolf A. Roemer
Chi-Tin Shih, Stephan Roche, Rudolf A. R\"omer
Point Mutations Effects on Charge Transport Properties of the Tumor-Suppressor Gene p53
4.1 PR style pages with 5 figures included
Phys. Rev. Lett. 100, 018105 (2008)
10.1103/PhysRevLett.100.018105
null
q-bio.GN cond-mat.soft q-bio.QM
null
We report on a theoretical study of point mutations effects on charge transfer properties in the DNA sequence of the tumor-suppressor p53 gene. On the basis of effective single-strand or double-strand tight-binding models which simulate hole propagation along the DNA, a statistical analysis of charge transmission modulations associated with all possible point mutations is performed. We find that in contrast to non-cancerous mutations, mutation hotspots tend to result in significantly weaker {\em changes of transmission properties}. This suggests that charge transport could play a significant role for DNA-repairing deficiency yielding carcinogenesis.
[ { "created": "Thu, 23 Aug 2007 13:57:58 GMT", "version": "v1" } ]
2008-01-09
[ [ "Shih", "Chi-Tin", "" ], [ "Roche", "Stephan", "" ], [ "Römer", "Rudolf A.", "" ] ]
We report on a theoretical study of point mutations effects on charge transfer properties in the DNA sequence of the tumor-suppressor p53 gene. On the basis of effective single-strand or double-strand tight-binding models which simulate hole propagation along the DNA, a statistical analysis of charge transmission modulations associated with all possible point mutations is performed. We find that in contrast to non-cancerous mutations, mutation hotspots tend to result in significantly weaker {\em changes of transmission properties}. This suggests that charge transport could play a significant role for DNA-repairing deficiency yielding carcinogenesis.
2210.01100
Wilhelm Hasselbring
Arne N. Johanson, Andreas Oschlies, Wilhelm Hasselbring, Boris Worm
SPRAT: A Spatially-Explicit Marine Ecosystem Model Based on Population Balance Equations
20 pages
Ecological Modelling, 349, 11-25 (2017)
10.1016/j.ecolmodel.2017.01.020
null
q-bio.PE cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To successfully manage marine fisheries using an ecosystem-based approach, long-term predictions of fish stock development considering changing environmental conditions are necessary. Such predictions can be provided by end-to-end ecosystem models, which couple existing physical and biogeochemical ocean models with newly developed spatially-explicit fish stock models. Typically, Individual-Based Models (IBMs) and models based on Advection-Diffusion-Reaction (ADR) equations are employed for the fish stock models. In this paper, we present a novel fish stock model called SPRAT for end-to\hyp{}end ecosystem modeling based on Population Balance Equations (PBEs) that combines the advantages of IBMs and ADR models while avoiding their main drawbacks. SPRAT accomplishes this by describing the modeled ecosystem processes from the perspective of individuals while still being based on partial differential equations. We apply the SPRAT model to explore a well-documented regime shift observed on the eastern Scotian Shelf in the 1990s from a cod-dominated to a herring-dominated ecosystem. Model simulations are able to reconcile the observed multitrophic dynamics with documented changes in both fishing pressure and water temperature, followed by a predator-prey reversal that may have impeded recovery of depleted cod stocks. We conclude that our model can be used to generate new hypotheses and test ideas about spatially interacting fish populations, and their joint responses to both environmental and fisheries forcing.
[ { "created": "Fri, 30 Sep 2022 10:38:18 GMT", "version": "v1" } ]
2022-10-04
[ [ "Johanson", "Arne N.", "" ], [ "Oschlies", "Andreas", "" ], [ "Hasselbring", "Wilhelm", "" ], [ "Worm", "Boris", "" ] ]
To successfully manage marine fisheries using an ecosystem-based approach, long-term predictions of fish stock development considering changing environmental conditions are necessary. Such predictions can be provided by end-to-end ecosystem models, which couple existing physical and biogeochemical ocean models with newly developed spatially-explicit fish stock models. Typically, Individual-Based Models (IBMs) and models based on Advection-Diffusion-Reaction (ADR) equations are employed for the fish stock models. In this paper, we present a novel fish stock model called SPRAT for end-to\hyp{}end ecosystem modeling based on Population Balance Equations (PBEs) that combines the advantages of IBMs and ADR models while avoiding their main drawbacks. SPRAT accomplishes this by describing the modeled ecosystem processes from the perspective of individuals while still being based on partial differential equations. We apply the SPRAT model to explore a well-documented regime shift observed on the eastern Scotian Shelf in the 1990s from a cod-dominated to a herring-dominated ecosystem. Model simulations are able to reconcile the observed multitrophic dynamics with documented changes in both fishing pressure and water temperature, followed by a predator-prey reversal that may have impeded recovery of depleted cod stocks. We conclude that our model can be used to generate new hypotheses and test ideas about spatially interacting fish populations, and their joint responses to both environmental and fisheries forcing.
1404.5431
Leo Lahti
Leo Lahti, Jarkko Saloj\"arvi, Anne Salonen, Marten Scheffer, Willem M. de Vos
Tipping Elements in the Human Intestinal Ecosystem
29 pages; 9 figures; 4 tables; preprint
null
10.1038/ncomms5344
null
q-bio.QM
http://creativecommons.org/licenses/by/3.0/
Recent studies show that the microbial communities inhabiting the human intestine can have profound impact on our well-being and health. However, we have limited understanding of the mechanisms that control this complex ecosystem. Based on a deep phylogenetic analysis of the intestinal microbiota in a thousand western adults we identified groups of bacteria that tend to be either nearly absent, or abundant in most individuals. The abundances of these bimodally distributed bacteria vary independently, and their contrasting alternative states are associated with host factors such as ageing and overweight. We propose that such bimodal groups represent independent tipping elements of the intestinal microbiota. These reflect the overall state of the intestinal ecosystem whose critical transitions can have profound health implications and diagnostic potential.
[ { "created": "Tue, 22 Apr 2014 09:14:32 GMT", "version": "v1" } ]
2014-07-15
[ [ "Lahti", "Leo", "" ], [ "Salojärvi", "Jarkko", "" ], [ "Salonen", "Anne", "" ], [ "Scheffer", "Marten", "" ], [ "de Vos", "Willem M.", "" ] ]
Recent studies show that the microbial communities inhabiting the human intestine can have profound impact on our well-being and health. However, we have limited understanding of the mechanisms that control this complex ecosystem. Based on a deep phylogenetic analysis of the intestinal microbiota in a thousand western adults we identified groups of bacteria that tend to be either nearly absent, or abundant in most individuals. The abundances of these bimodally distributed bacteria vary independently, and their contrasting alternative states are associated with host factors such as ageing and overweight. We propose that such bimodal groups represent independent tipping elements of the intestinal microbiota. These reflect the overall state of the intestinal ecosystem whose critical transitions can have profound health implications and diagnostic potential.
2404.14336
Keisuke Sugie
Keisuke Sugie, Dimitri Loutchko, Tetsuya J. Kobayashi
Transitions and Thermodynamics on Species Graphs of Chemical Reaction Networks
8 pages, 4 figures
null
null
null
q-bio.MN physics.bio-ph physics.chem-ph
http://creativecommons.org/licenses/by/4.0/
Chemical reaction networks (CRNs) exhibit complex dynamics governed by their underlying network structure. In this paper, we propose a novel approach to study the dynamics of CRNs by representing them on species graphs (S-graphs). By scaling concentrations by conservation laws, we obtain a graph representation of transitions compatible with the S-graph, which allows us to treat the dynamics in CRNs as transitions between chemicals. We also define thermodynamic-like quantities on the S-graph from the introduced transitions and investigate their properties, including the relationship between specieswise forces, activities, and conventional thermodynamic quantities. Remarkably, we demonstrate that this formulation can be developed for a class of irreversible CRNs, while for reversible CRNs, it is related to conventional thermodynamic quantities associated with reactions. The behavior of these specieswise quantities is numerically validated using an oscillating system (Brusselator). Our work provides a novel methodology for studying dynamics on S-graphs, paving the way for a deeper understanding of the intricate interplay between the structure and dynamics of chemical reaction networks.
[ { "created": "Mon, 22 Apr 2024 16:55:35 GMT", "version": "v1" }, { "created": "Tue, 23 Apr 2024 05:12:37 GMT", "version": "v2" } ]
2024-04-24
[ [ "Sugie", "Keisuke", "" ], [ "Loutchko", "Dimitri", "" ], [ "Kobayashi", "Tetsuya J.", "" ] ]
Chemical reaction networks (CRNs) exhibit complex dynamics governed by their underlying network structure. In this paper, we propose a novel approach to study the dynamics of CRNs by representing them on species graphs (S-graphs). By scaling concentrations by conservation laws, we obtain a graph representation of transitions compatible with the S-graph, which allows us to treat the dynamics in CRNs as transitions between chemicals. We also define thermodynamic-like quantities on the S-graph from the introduced transitions and investigate their properties, including the relationship between specieswise forces, activities, and conventional thermodynamic quantities. Remarkably, we demonstrate that this formulation can be developed for a class of irreversible CRNs, while for reversible CRNs, it is related to conventional thermodynamic quantities associated with reactions. The behavior of these specieswise quantities is numerically validated using an oscillating system (Brusselator). Our work provides a novel methodology for studying dynamics on S-graphs, paving the way for a deeper understanding of the intricate interplay between the structure and dynamics of chemical reaction networks.
q-bio/0506022
Eli Eisenberg
Yossef Neeman, Dvir Dahary, Erez Y. Levanon, Rotem Sorek and Eli Eisenberg
Is there any sense in antisense editing?
null
Trends in Genetics 21, 544-7 (2005)
10.1016/j.tig.2005.08.005
null
q-bio.GN
null
A number of recent studies have hypothesized that sense-antisense RNA transcript pairs create dsRNA duplexes that undergo extensive A-to-I RNA editing. Here we studied human and mouse genomic antisense regions, and found that the editing level in these areas is negligible. This observation puts in question the scope of sense-antisense duplexes formation in-vivo, which is the basis for a number of proposed regulatory mechanisms.
[ { "created": "Thu, 16 Jun 2005 14:07:04 GMT", "version": "v1" } ]
2007-05-23
[ [ "Neeman", "Yossef", "" ], [ "Dahary", "Dvir", "" ], [ "Levanon", "Erez Y.", "" ], [ "Sorek", "Rotem", "" ], [ "Eisenberg", "Eli", "" ] ]
A number of recent studies have hypothesized that sense-antisense RNA transcript pairs create dsRNA duplexes that undergo extensive A-to-I RNA editing. Here we studied human and mouse genomic antisense regions, and found that the editing level in these areas is negligible. This observation puts in question the scope of sense-antisense duplexes formation in-vivo, which is the basis for a number of proposed regulatory mechanisms.
2005.13790
Jari Saram\"aki
I. Satokangas, S.H. Martin, H. Helanter\"a, J. Saram\"aki, J. Kulmuni
Multi-locus interactions and the build-up of reproductive isolation
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
All genes interact with other genes, and their additive effects and epistatic interactions affect an organism's phenotype and fitness. Recent theoretical and empirical work has advanced our understanding of the role of multi-locus interactions in speciation. However, relating different models to one another and to empirical observations is challenging. This review focuses on multi-locus interactions that lead to reproductive isolation (RI) through reduced hybrid fitness. We first review theoretical approaches and show how recent work incorporating a mechanistic understanding of multi-locus interactions recapitulates earlier models, but also makes novel predictions concerning the build-up of RI. These include high variance in the build-up rate of RI among taxa, the emergence of strong incompatibilities producing localised barriers to introgression, and an effect of population size on the build-up of RI. We then review recent experimental approaches to detect multi-locus interactions underlying RI using genomic data. We argue that future studies would benefit from overlapping methods like Ancestry Disequilibrium scans, genome scans of differentiation and analyses of hybrid gene expression. Finally, we highlight a need for further overlap between theoretical and empirical work, and approaches that predict what kind of patterns multi-locus interactions resulting in incompatibilities will leave in genome-wide polymorphism data.
[ { "created": "Thu, 28 May 2020 06:06:26 GMT", "version": "v1" } ]
2020-05-29
[ [ "Satokangas", "I.", "" ], [ "Martin", "S. H.", "" ], [ "Helanterä", "H.", "" ], [ "Saramäki", "J.", "" ], [ "Kulmuni", "J.", "" ] ]
All genes interact with other genes, and their additive effects and epistatic interactions affect an organism's phenotype and fitness. Recent theoretical and empirical work has advanced our understanding of the role of multi-locus interactions in speciation. However, relating different models to one another and to empirical observations is challenging. This review focuses on multi-locus interactions that lead to reproductive isolation (RI) through reduced hybrid fitness. We first review theoretical approaches and show how recent work incorporating a mechanistic understanding of multi-locus interactions recapitulates earlier models, but also makes novel predictions concerning the build-up of RI. These include high variance in the build-up rate of RI among taxa, the emergence of strong incompatibilities producing localised barriers to introgression, and an effect of population size on the build-up of RI. We then review recent experimental approaches to detect multi-locus interactions underlying RI using genomic data. We argue that future studies would benefit from overlapping methods like Ancestry Disequilibrium scans, genome scans of differentiation and analyses of hybrid gene expression. Finally, we highlight a need for further overlap between theoretical and empirical work, and approaches that predict what kind of patterns multi-locus interactions resulting in incompatibilities will leave in genome-wide polymorphism data.
1503.01538
Natalia Bilenko
Natalia Y. Bilenko and Jack L. Gallant
Pyrcca: regularized kernel canonical correlation analysis in Python and its applications to neuroimaging
null
null
null
null
q-bio.QM cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Canonical correlation analysis (CCA) is a valuable method for interpreting cross-covariance across related datasets of different dimensionality. There are many potential applications of CCA to neuroimaging data analysis. For instance, CCA can be used for finding functional similarities across fMRI datasets collected from multiple subjects without resampling individual datasets to a template anatomy. In this paper, we introduce Pyrcca, an open-source Python module for executing CCA between two or more datasets. Pyrcca can be used to implement CCA with or without regularization, and with or without linear or a Gaussian kernelization of the datasets. We demonstrate an application of CCA implemented with Pyrcca to neuroimaging data analysis. We use CCA to find a data-driven set of functional response patterns that are similar across individual subjects in a natural movie experiment. We then demonstrate how this set of response patterns discovered by CCA can be used to accurately predict subject responses to novel natural movie stimuli.
[ { "created": "Thu, 5 Mar 2015 04:57:22 GMT", "version": "v1" } ]
2015-03-06
[ [ "Bilenko", "Natalia Y.", "" ], [ "Gallant", "Jack L.", "" ] ]
Canonical correlation analysis (CCA) is a valuable method for interpreting cross-covariance across related datasets of different dimensionality. There are many potential applications of CCA to neuroimaging data analysis. For instance, CCA can be used for finding functional similarities across fMRI datasets collected from multiple subjects without resampling individual datasets to a template anatomy. In this paper, we introduce Pyrcca, an open-source Python module for executing CCA between two or more datasets. Pyrcca can be used to implement CCA with or without regularization, and with or without linear or a Gaussian kernelization of the datasets. We demonstrate an application of CCA implemented with Pyrcca to neuroimaging data analysis. We use CCA to find a data-driven set of functional response patterns that are similar across individual subjects in a natural movie experiment. We then demonstrate how this set of response patterns discovered by CCA can be used to accurately predict subject responses to novel natural movie stimuli.
1602.01875
Christophe Dessimoz
Pascale Gaudet and Christophe Dessimoz
Gene Ontology: Pitfalls, Biases, Remedies
to appear in forthcoming book "The Gene Ontology Handbook" (Springer Humana)
The Gene Ontology Handbook (Springer, New York), 189-205 (2016)
10.1007/978-1-4939-3743-1_14
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Gene Ontology (GO) is a formidable resource but there are several considerations about it that are essential to understand the data and interpret it correctly. The GO is sufficiently simple that it can be used without deep understanding of its structure or how it is developed, which is both a strength and a weakness. In this chapter, we discuss some common misinterpretations of the ontology and the annotations. A better understanding of the pitfalls and the biases in the GO should help users make the most of this very rich resource. We also review some of the misconceptions and misleading assumptions commonly made about GO, including the effect of data incompleteness, the importance of annotation qualifiers, and the transitivity or lack thereof associated with different ontology relations. We also discuss several biases that can confound aggregate analyses such as gene enrichment analyses. For each of these pitfalls and biases, we suggest remedies and best practices.
[ { "created": "Thu, 4 Feb 2016 22:53:37 GMT", "version": "v1" } ]
2016-12-07
[ [ "Gaudet", "Pascale", "" ], [ "Dessimoz", "Christophe", "" ] ]
The Gene Ontology (GO) is a formidable resource but there are several considerations about it that are essential to understand the data and interpret it correctly. The GO is sufficiently simple that it can be used without deep understanding of its structure or how it is developed, which is both a strength and a weakness. In this chapter, we discuss some common misinterpretations of the ontology and the annotations. A better understanding of the pitfalls and the biases in the GO should help users make the most of this very rich resource. We also review some of the misconceptions and misleading assumptions commonly made about GO, including the effect of data incompleteness, the importance of annotation qualifiers, and the transitivity or lack thereof associated with different ontology relations. We also discuss several biases that can confound aggregate analyses such as gene enrichment analyses. For each of these pitfalls and biases, we suggest remedies and best practices.
2403.08959
Yoshitaka Inoue
Yoshitaka Inoue
scVGAE: A Novel Approach using ZINB-Based Variational Graph Autoencoder for Single-Cell RNA-Seq Imputation
11 pages, 3 figures
null
null
null
q-bio.GN cs.CE
http://creativecommons.org/licenses/by/4.0/
Single-cell RNA sequencing (scRNA-seq) has revolutionized our ability to study individual cellular distinctions and uncover unique cell characteristics. However, a significant technical challenge in scRNA-seq analysis is the occurrence of "dropout" events, where certain gene expressions cannot be detected. This issue is particularly pronounced in genes with low or sparse expression levels, impacting the precision and interpretability of the obtained data. To address this challenge, various imputation methods have been implemented to predict such missing values, aiming to enhance the analysis's accuracy and usefulness. A prevailing hypothesis posits that scRNA-seq data conforms to a zero-inflated negative binomial (ZINB) distribution. Consequently, methods have been developed to model the data according to this distribution. Recent trends in scRNA-seq analysis have seen the emergence of deep learning approaches. Some techniques, such as the variational autoencoder, incorporate the ZINB distribution as a model loss function. Graph-based methods like Graph Convolutional Networks (GCN) and Graph Attention Networks (GAT) have also gained attention as deep learning methodologies for scRNA-seq analysis. This study introduces scVGAE, an innovative approach integrating GCN into a variational autoencoder framework while utilizing a ZINB loss function. This integration presents a promising avenue for effectively addressing dropout events in scRNA-seq data, thereby enhancing the accuracy and reliability of downstream analyses. scVGAE outperforms other methods in cell clustering, with the best performance in 11 out of 14 datasets. Ablation study shows all components of scVGAE are necessary. scVGAE is implemented in Python and downloadable at https://github.com/inoue0426/scVGAE.
[ { "created": "Wed, 13 Mar 2024 20:57:10 GMT", "version": "v1" }, { "created": "Tue, 23 Jul 2024 20:50:59 GMT", "version": "v2" } ]
2024-07-25
[ [ "Inoue", "Yoshitaka", "" ] ]
Single-cell RNA sequencing (scRNA-seq) has revolutionized our ability to study individual cellular distinctions and uncover unique cell characteristics. However, a significant technical challenge in scRNA-seq analysis is the occurrence of "dropout" events, where certain gene expressions cannot be detected. This issue is particularly pronounced in genes with low or sparse expression levels, impacting the precision and interpretability of the obtained data. To address this challenge, various imputation methods have been implemented to predict such missing values, aiming to enhance the analysis's accuracy and usefulness. A prevailing hypothesis posits that scRNA-seq data conforms to a zero-inflated negative binomial (ZINB) distribution. Consequently, methods have been developed to model the data according to this distribution. Recent trends in scRNA-seq analysis have seen the emergence of deep learning approaches. Some techniques, such as the variational autoencoder, incorporate the ZINB distribution as a model loss function. Graph-based methods like Graph Convolutional Networks (GCN) and Graph Attention Networks (GAT) have also gained attention as deep learning methodologies for scRNA-seq analysis. This study introduces scVGAE, an innovative approach integrating GCN into a variational autoencoder framework while utilizing a ZINB loss function. This integration presents a promising avenue for effectively addressing dropout events in scRNA-seq data, thereby enhancing the accuracy and reliability of downstream analyses. scVGAE outperforms other methods in cell clustering, with the best performance in 11 out of 14 datasets. An ablation study shows that all components of scVGAE are necessary. scVGAE is implemented in Python and downloadable at https://github.com/inoue0426/scVGAE.
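The ZINB distribution used as the loss in the record above can be written down directly. A minimal sketch of its log-probability mass function, with zero-inflation weight `pi`, mean `mu`, and dispersion `theta` (hypothetical standalone helper, not scVGAE's actual PyTorch implementation):

```python
import math

def zinb_log_pmf(x: int, mu: float, theta: float, pi: float) -> float:
    """Log P(X = x) under a zero-inflated negative binomial.

    NB is parameterized by mean mu and dispersion theta; a point mass
    of weight pi is mixed in at zero. Illustrative sketch only.
    """
    # Negative binomial log-pmf in mean/dispersion form.
    log_nb = (math.lgamma(x + theta) - math.lgamma(theta) - math.lgamma(x + 1)
              + theta * math.log(theta / (theta + mu))
              + x * math.log(mu / (theta + mu)))
    if x == 0:
        # Zeros can come from the inflation component or the NB itself.
        return math.log(pi + (1.0 - pi) * math.exp(log_nb))
    return math.log(1.0 - pi) + log_nb
```

Negating this quantity, summed over entries of the expression matrix, gives the ZINB reconstruction loss that such models minimize.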
2012.15448
Akif Ibraguimov
A. Ibragimov and A. Peace
Light driven interactions in spatial predator-prey model with toxicant chemotaxis
null
null
null
null
q-bio.PE math.AP
http://creativecommons.org/publicdomain/zero/1.0/
We develop and analyze a spatial temporal model of light driven ecotoxicological processes, motivated by an aquatic predator-prey system of algae and \textsl{Daphnia} subject to a contaminant. Population dynamics are driven by light, which is periodic in time and varies with spatial depth. The existence and uniqueness of spatial and temporal dependent periodic solutions are shown and analytical functions of the solutions under parameter constraints are presented. We conduct Turing stability analyses of solutions with respect to perturbations of initial conditions. Given a perturbation to a periodic equilibrium state, we show the system will return to this equilibrium state as long as motility is fast enough and/or the reservoir depth is shallow enough. Analytical results assume some Dirichlet boundary conditions that match the periodic equilibrium state, however numerical simulations with more relaxed boundary conditions capture similar periodic solutions. The work sheds light onto spatially dependent population dynamics that are driven by periodic forces, such as light levels.
[ { "created": "Thu, 31 Dec 2020 04:51:21 GMT", "version": "v1" } ]
2021-01-01
[ [ "Ibragimov", "A.", "" ], [ "Peace", "A.", "" ] ]
We develop and analyze a spatiotemporal model of light-driven ecotoxicological processes, motivated by an aquatic predator-prey system of algae and \textsl{Daphnia} subject to a contaminant. Population dynamics are driven by light, which is periodic in time and varies with spatial depth. The existence and uniqueness of spatially and temporally dependent periodic solutions are shown, and analytical forms of the solutions under parameter constraints are presented. We conduct Turing stability analyses of solutions with respect to perturbations of initial conditions. Given a perturbation to a periodic equilibrium state, we show the system will return to this equilibrium state as long as motility is fast enough and/or the reservoir depth is shallow enough. Analytical results assume Dirichlet boundary conditions that match the periodic equilibrium state; however, numerical simulations with more relaxed boundary conditions capture similar periodic solutions. The work sheds light on spatially dependent population dynamics that are driven by periodic forces, such as light levels.
2207.14061
Yohsuke Murase
Yohsuke Murase, Christian Hilbe, Seung Ki Baek
Evolution of direct reciprocity in group-structured populations
16 pages, 6 figures
Sci. Rep. 12, 18645 (2022)
10.1038/s41598-022-23467-4
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
People tend to have their social interactions with members of their own community. Such group-structured interactions can have a profound impact on the behaviors that evolve. Group structure affects the way people cooperate, and how they reciprocate each other's cooperative actions. Past work has shown that population structure and reciprocity can both promote the evolution of cooperation. Yet the impact of these mechanisms has been typically studied in isolation. In this work, we study how the two mechanisms interact. Using a game-theoretic model, we explore how people engage in reciprocal cooperation in group-structured populations, compared to well-mixed populations of equal size. To derive analytical results, we focus on two scenarios. In the first scenario, we assume a complete separation of time scales. Mutations are rare compared to between-group comparisons, which themselves are rare compared to within-group comparisons. In the second scenario, there is a partial separation of time scales, where mutations and between-group comparisons occur at a comparable rate. In both scenarios, we find that the effect of population structure depends on the benefit of cooperation. When this benefit is small, group-structured populations are more cooperative. But when the benefit is large, well-mixed populations result in more cooperation. Overall, our results reveal how group structure can sometimes enhance and sometimes suppress the evolution of cooperation.
[ { "created": "Thu, 28 Jul 2022 12:58:31 GMT", "version": "v1" } ]
2022-11-09
[ [ "Murase", "Yohsuke", "" ], [ "Hilbe", "Christian", "" ], [ "Baek", "Seung Ki", "" ] ]
People tend to have their social interactions with members of their own community. Such group-structured interactions can have a profound impact on the behaviors that evolve. Group structure affects the way people cooperate, and how they reciprocate each other's cooperative actions. Past work has shown that population structure and reciprocity can both promote the evolution of cooperation. Yet the impact of these mechanisms has been typically studied in isolation. In this work, we study how the two mechanisms interact. Using a game-theoretic model, we explore how people engage in reciprocal cooperation in group-structured populations, compared to well-mixed populations of equal size. To derive analytical results, we focus on two scenarios. In the first scenario, we assume a complete separation of time scales. Mutations are rare compared to between-group comparisons, which themselves are rare compared to within-group comparisons. In the second scenario, there is a partial separation of time scales, where mutations and between-group comparisons occur at a comparable rate. In both scenarios, we find that the effect of population structure depends on the benefit of cooperation. When this benefit is small, group-structured populations are more cooperative. But when the benefit is large, well-mixed populations result in more cooperation. Overall, our results reveal how group structure can sometimes enhance and sometimes suppress the evolution of cooperation.
1211.5953
Taiki Takahashi
Keigo Inukai and Taiki Takahashi
Decision under ambiguity: Effects of sign and magnitude
1 figure, 16 pages
International Journal of Neuroscience Vol. 119, No. 8 , Pages 1170-1178
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decision under ambiguity (uncertainty with unknown probabilities) has been attracting attention in behavioral and neuroeconomics. However, recent neuroimaging studies have mainly focused on gain domains while little attention has been paid to the magnitudes of outcomes. In this study, we examined the effects of the sign (i.e. gain and loss) and magnitude of outcomes on ambiguity aversion and the additivity of subjective probabilities in Ellsberg's urn problem. We observed that (i) ambiguity aversion was observed in both signs, and (ii) subadditivity of subjective probability was not observed in negative outcomes.
[ { "created": "Mon, 26 Nov 2012 13:53:04 GMT", "version": "v1" } ]
2012-11-27
[ [ "Inukai", "Keigo", "" ], [ "Takahashi", "Taiki", "" ] ]
Decision under ambiguity (uncertainty with unknown probabilities) has been attracting attention in behavioral and neuroeconomics. However, recent neuroimaging studies have mainly focused on gain domains while little attention has been paid to the magnitudes of outcomes. In this study, we examined the effects of the sign (i.e. gain and loss) and magnitude of outcomes on ambiguity aversion and the additivity of subjective probabilities in Ellsberg's urn problem. We observed that (i) ambiguity aversion was observed in both signs, and (ii) subadditivity of subjective probability was not observed in negative outcomes.
1803.09222
Burkhard Morgenstern
Thomas Dencker, Chris-Andre Leimeister, Michael Gerth, Christoph Bleidorn, Sagi Snir, Burkhard Morgenstern
Multi-SpaM: a Maximum-Likelihood approach to Phylogeny reconstruction based on Multiple Spaced-Word Matches
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Word-based or `alignment-free' methods for phylogeny reconstruction are much faster than traditional approaches, but they are generally less accurate. Most of these methods calculate pairwise distances for a set of input sequences, for example from word frequencies, from so-called spaced-word matches or from the average length of common substrings. Results: In this paper, we propose the first word-based approach to tree reconstruction that is based on multiple sequence comparison and Maximum Likelihood. Our algorithm first samples small, gap-free alignments involving four taxa each. For each of these alignments, it then calculates a quartet tree and, finally, the program Quartet MaxCut is used to infer a super tree topology for the full set of input taxa from the calculated quartet trees. Experimental results show that trees calculated with our approach are of high quality. Availability: The source code of the program is available at https://github.com/tdencker/multi-SpaM Contact: thomas.dencker@stud.uni-goettingen.de
[ { "created": "Sun, 25 Mar 2018 09:35:32 GMT", "version": "v1" }, { "created": "Sat, 28 Apr 2018 10:03:34 GMT", "version": "v2" } ]
2018-05-01
[ [ "Dencker", "Thomas", "" ], [ "Leimeister", "Chris-Andre", "" ], [ "Gerth", "Michael", "" ], [ "Bleidorn", "Christoph", "" ], [ "Snir", "Sagi", "" ], [ "Morgenstern", "Burkhard", "" ] ]
Motivation: Word-based or `alignment-free' methods for phylogeny reconstruction are much faster than traditional approaches, but they are generally less accurate. Most of these methods calculate pairwise distances for a set of input sequences, for example from word frequencies, from so-called spaced-word matches or from the average length of common substrings. Results: In this paper, we propose the first word-based approach to tree reconstruction that is based on multiple sequence comparison and Maximum Likelihood. Our algorithm first samples small, gap-free alignments involving four taxa each. For each of these alignments, it then calculates a quartet tree and, finally, the program Quartet MaxCut is used to infer a super tree topology for the full set of input taxa from the calculated quartet trees. Experimental results show that trees calculated with our approach are of high quality. Availability: The source code of the program is available at https://github.com/tdencker/multi-SpaM Contact: thomas.dencker@stud.uni-goettingen.de
1809.06806
Michael D Nicholson
Michael D. Nicholson and Tibor Antal
Competing evolutionary paths in growing populations with applications to multidrug resistance
Minor corrections. Altered title
null
10.1371/journal.pcbi.1006866
null
q-bio.PE math.PR
http://creativecommons.org/licenses/by/4.0/
Investigating the emergence of a particular cell type is a recurring theme in models of growing cellular populations. The evolution of resistance to therapy is a classic example. Common questions are: when does the cell type first occur, and via which sequence of steps is it most likely to emerge? For growing populations, these questions can be formulated in a general framework of branching processes spreading through a graph from a root to a target vertex. Cells have a particular fitness value on each vertex and can transition along edges at specific rates. Vertices represent cell states, say genotypes or physical locations, while possible transitions are acquiring a mutation or cell migration. We focus on the setting where cells at the root vertex have the highest fitness and transition rates are small. Simple formulas are derived for the time to reach the target vertex and for the probability that it is reached along a given path in the graph. We demonstrate our results on several scenarios relevant to the emergence of drug resistance, including the orderings of resistance-conferring mutations in bacteria and the impact of imperfect drug penetration in cancer.
[ { "created": "Tue, 18 Sep 2018 15:57:25 GMT", "version": "v1" }, { "created": "Wed, 23 Jan 2019 00:51:22 GMT", "version": "v2" } ]
2019-06-19
[ [ "Nicholson", "Michael D.", "" ], [ "Antal", "Tibor", "" ] ]
Investigating the emergence of a particular cell type is a recurring theme in models of growing cellular populations. The evolution of resistance to therapy is a classic example. Common questions are: when does the cell type first occur, and via which sequence of steps is it most likely to emerge? For growing populations, these questions can be formulated in a general framework of branching processes spreading through a graph from a root to a target vertex. Cells have a particular fitness value on each vertex and can transition along edges at specific rates. Vertices represent cell states, say genotypes or physical locations, while possible transitions are acquiring a mutation or cell migration. We focus on the setting where cells at the root vertex have the highest fitness and transition rates are small. Simple formulas are derived for the time to reach the target vertex and for the probability that it is reached along a given path in the graph. We demonstrate our results on several scenarios relevant to the emergence of drug resistance, including the orderings of resistance-conferring mutations in bacteria and the impact of imperfect drug penetration in cancer.
1902.08395
Robbin Bastiaansen
Robbin Bastiaansen, Arjen Doelman, Frank van Langevelde, Vivi Rottsch\"afer
Modelling honey bee colonies in winter using a Keller-Segel model with a sign-changing chemotactic coefficient
20 pages, 12 figures
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thermoregulation in honey bee colonies during winter is thought to be self-organised. We added mortality of individual honey bees to an existing model of thermoregulation to account for elevated losses of bees that are reported worldwide. The aim of analysis is to obtain a better fundamental understanding of the consequences of individual mortality during winter. This model resembles the well-known Keller-Segel model. In contrast to the often studied Keller-Segel models, our model includes a chemotactic coefficient of which the sign can change as honey bees have a preferred temperature: when the local temperature is too low, they move towards higher temperatures, whereas the opposite is true for too high temperatures. Our study shows that we can distinguish two states of the colony: one in which the colony size is above a certain critical number of bees in which the bees can keep the core temperature of the colony above the threshold temperature, and one in which the core temperature drops below the critical threshold and the mortality of the bees increases dramatically, leading to a sudden death of the colony. This model behaviour may explain the globally observed honey bee colony losses during winter.
[ { "created": "Fri, 22 Feb 2019 08:27:57 GMT", "version": "v1" } ]
2019-02-25
[ [ "Bastiaansen", "Robbin", "" ], [ "Doelman", "Arjen", "" ], [ "van Langevelde", "Frank", "" ], [ "Rottschäfer", "Vivi", "" ] ]
Thermoregulation in honey bee colonies during winter is thought to be self-organised. We added mortality of individual honey bees to an existing model of thermoregulation to account for elevated losses of bees that are reported worldwide. The aim of this analysis is to obtain a better fundamental understanding of the consequences of individual mortality during winter. This model resembles the well-known Keller-Segel model. In contrast to the often-studied Keller-Segel models, our model includes a chemotactic coefficient of which the sign can change, as honey bees have a preferred temperature: when the local temperature is too low, they move towards higher temperatures, whereas the opposite is true for too high temperatures. Our study shows that we can distinguish two states of the colony: one in which the colony size is above a certain critical number of bees, so that the bees can keep the core temperature of the colony above the threshold temperature, and one in which the core temperature drops below the critical threshold and the mortality of the bees increases dramatically, leading to a sudden death of the colony. This model behaviour may explain the globally observed honey bee colony losses during winter.
1408.3505
Christian L. Althaus
Christian L. Althaus
Estimating the reproduction number of Ebola virus (EBOV) during the 2014 outbreak in West Africa
Published version, PLOS Currents Outbreaks. 2014 Sep 2
null
10.1371/currents.outbreaks.91afb5e0f279e7f29e7056095255b288
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The 2014 Ebola virus (EBOV) outbreak in West Africa is the largest outbreak of the genus Ebolavirus to date. To better understand the spread of infection in the affected countries, it is crucial to know the number of secondary cases generated by an infected index case in the absence and presence of control measures, i.e., the basic and effective reproduction number. In this study, I describe the EBOV epidemic using an SEIR (susceptible-exposed-infectious-recovered) model and fit the model to the most recent reported data of infected cases and deaths in Guinea, Sierra Leone and Liberia. The maximum likelihood estimates of the basic reproduction number are 1.51 (95% confidence interval [CI]: 1.50-1.52) for Guinea, 2.53 (95% CI: 2.41-2.67) for Sierra Leone and 1.59 (95% CI: 1.57-1.60) for Liberia. The model indicates that in Guinea and Sierra Leone the effective reproduction number might have dropped to around unity by the end of May and July 2014, respectively. In Liberia, however, the model estimates no decline in the effective reproduction number by end-August 2014. This suggests that control efforts in Liberia need to be improved substantially in order to stop the current outbreak.
[ { "created": "Fri, 15 Aug 2014 10:15:47 GMT", "version": "v1" }, { "created": "Mon, 25 Aug 2014 18:55:11 GMT", "version": "v2" }, { "created": "Wed, 27 Aug 2014 21:16:23 GMT", "version": "v3" }, { "created": "Tue, 2 Sep 2014 21:07:26 GMT", "version": "v4" } ]
2016-05-25
[ [ "Althaus", "Christian L.", "" ] ]
The 2014 Ebola virus (EBOV) outbreak in West Africa is the largest outbreak of the genus Ebolavirus to date. To better understand the spread of infection in the affected countries, it is crucial to know the number of secondary cases generated by an infected index case in the absence and presence of control measures, i.e., the basic and effective reproduction number. In this study, I describe the EBOV epidemic using an SEIR (susceptible-exposed-infectious-recovered) model and fit the model to the most recent reported data of infected cases and deaths in Guinea, Sierra Leone and Liberia. The maximum likelihood estimates of the basic reproduction number are 1.51 (95% confidence interval [CI]: 1.50-1.52) for Guinea, 2.53 (95% CI: 2.41-2.67) for Sierra Leone and 1.59 (95% CI: 1.57-1.60) for Liberia. The model indicates that in Guinea and Sierra Leone the effective reproduction number might have dropped to around unity by the end of May and July 2014, respectively. In Liberia, however, the model estimates no decline in the effective reproduction number by end-August 2014. This suggests that control efforts in Liberia need to be improved substantially in order to stop the current outbreak.
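The SEIR compartmental model described in the record above can be sketched in a few lines. This is a generic forward-Euler simulation with hypothetical parameters, not the authors' fitted model for the West African data; under this standard parameterization the basic reproduction number is R0 = beta / gamma:

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt):
    """One Euler step of the SEIR ODEs.

    beta: transmission rate, sigma: rate of becoming infectious
    after exposure, gamma: recovery/removal rate.
    """
    n = s + e + i + r
    new_inf = beta * s * i / n * dt   # S -> E
    new_sym = sigma * e * dt          # E -> I
    new_rec = gamma * i * dt          # I -> R
    return (s - new_inf,
            e + new_inf - new_sym,
            i + new_sym - new_rec,
            r + new_rec)

def simulate(days, beta, sigma, gamma, dt=0.1, n=1_000_000, i0=1):
    """Run the SEIR model for `days` days from a single infectious seed."""
    s, e, i, r = float(n - i0), 0.0, float(i0), 0.0
    for _ in range(int(days / dt)):
        s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma, dt)
    return s, e, i, r
```

Fitting beta (and hence R0) to reported case counts, as done in the study, would wrap this simulation in a maximum-likelihood routine; the sketch here only shows the forward dynamics.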
2302.10392
Qi Lin
Qi Lin, Zifan Li, John Lafferty, Ilker Yildirim
From seeing to remembering: Images with harder-to-reconstruct representations leave stronger memory traces
null
null
null
null
q-bio.NC cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Much of what we remember is not due to intentional selection, but simply a by-product of perceiving. This raises a foundational question about the architecture of the mind: How does perception interface with and influence memory? Here, inspired by a classic proposal relating perceptual processing to memory durability, the level-of-processing theory, we present a sparse coding model for compressing feature embeddings of images, and show that the reconstruction residuals from this model predict how well images are encoded into memory. In an open memorability dataset of scene images, we show that reconstruction error not only explains memory accuracy but also response latencies during retrieval, subsuming, in the latter case, all of the variance explained by powerful vision-only models. We also confirm a prediction of this account with 'model-driven psychophysics'. This work establishes reconstruction error as a novel signal interfacing perception and memory, possibly through adaptive modulation of perceptual processing.
[ { "created": "Tue, 21 Feb 2023 01:40:32 GMT", "version": "v1" } ]
2023-02-22
[ [ "Lin", "Qi", "" ], [ "Li", "Zifan", "" ], [ "Lafferty", "John", "" ], [ "Yildirim", "Ilker", "" ] ]
Much of what we remember is not due to intentional selection, but simply a by-product of perceiving. This raises a foundational question about the architecture of the mind: How does perception interface with and influence memory? Here, inspired by a classic proposal relating perceptual processing to memory durability, the level-of-processing theory, we present a sparse coding model for compressing feature embeddings of images, and show that the reconstruction residuals from this model predict how well images are encoded into memory. In an open memorability dataset of scene images, we show that reconstruction error not only explains memory accuracy but also response latencies during retrieval, subsuming, in the latter case, all of the variance explained by powerful vision-only models. We also confirm a prediction of this account with 'model-driven psychophysics'. This work establishes reconstruction error as a novel signal interfacing perception and memory, possibly through adaptive modulation of perceptual processing.
q-bio/0410027
Andrew Barbour
Brigitte Pallmann, A. D. Barbour, D. J. Hosken, P. I. Ward
Inter-species regression analysis
20 pages, 2 tables: improved likelihood approach, extended appendix
null
null
null
q-bio.PE
null
When conducting inter-species regression analyses, the phylogenetic relationships between the individual species need to be taken into account. In this paper, a procedure for conducting such analyses is proposed, which only requires the use of a measure of relationship between pairs of species, rather than a complete phylogeny, and which at the same time assesses the importance to be attached to the relationships with regard to the conclusions reached. The procedure is applied to previous data, relating testis size to mean hind tibia length, duct length and spermathecal area in 15 species of Scathophagidae.
[ { "created": "Fri, 22 Oct 2004 16:26:04 GMT", "version": "v1" }, { "created": "Sun, 19 Mar 2006 13:24:00 GMT", "version": "v2" } ]
2007-05-23
[ [ "Pallmann", "Brigitte", "" ], [ "Barbour", "A. D.", "" ], [ "Hosken", "D. J.", "" ], [ "Ward", "P. I.", "" ] ]
When conducting inter-species regression analyses, the phylogenetic relationships between the individual species need to be taken into account. In this paper, a procedure for conducting such analyses is proposed, which only requires the use of a measure of relationship between pairs of species, rather than a complete phylogeny, and which at the same time assesses the importance to be attached to the relationships with regard to the conclusions reached. The procedure is applied to previous data, relating testis size to mean hind tibia length, duct length and spermathecal area in 15 species of Scathophagidae.
1101.1702
Jun-Sok Huhh
Jun-Sok Huhh
Sanctioning by Institution, Skepticism of Punisher and the Evolution of Cooperation
17 pages, 6 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article aims to clarify the case and the mechanism where sanction or punishment by institution can deliver the evolution of cooperation. Compared to peer sanctioning, institutional sanctioning may be sensitive to players' attitude toward players who do not pre-commit punishment. Departed from former studies based on the punisher who always acts cooperatively, we assume that the punishing player is skeptical in that she cooperates in proportion to how many same types join in her team. Relying on stochastic adaptive dynamics, we show that institutional sanctioning coupled with skeptical punisher can make cooperation evolve for the case where peer sanctioning may not.
[ { "created": "Mon, 10 Jan 2011 03:30:46 GMT", "version": "v1" }, { "created": "Tue, 11 Jan 2011 13:06:37 GMT", "version": "v2" }, { "created": "Wed, 12 Jan 2011 01:07:53 GMT", "version": "v3" }, { "created": "Wed, 16 Feb 2011 07:12:40 GMT", "version": "v4" } ]
2015-03-17
[ [ "Huhh", "Jun-Sok", "" ] ]
This article aims to clarify the case and the mechanism where sanction or punishment by institution can deliver the evolution of cooperation. Compared to peer sanctioning, institutional sanctioning may be sensitive to players' attitudes toward players who do not pre-commit to punishment. Departing from former studies based on a punisher who always acts cooperatively, we assume that the punishing player is skeptical in that she cooperates in proportion to how many players of the same type join her team. Relying on stochastic adaptive dynamics, we show that institutional sanctioning coupled with a skeptical punisher can make cooperation evolve in cases where peer sanctioning may not.
1401.7975
Richard Lusk
Richard W Lusk
Diverse and widespread contamination evident in the unmapped depths of high throughput sequencing data
null
null
10.1371/journal.pone.0110808
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Trace quantities of contaminating DNA are widespread in the laboratory environment, but their presence has received little attention in the context of high throughput sequencing. This issue is highlighted by recent works that have rested controversial claims upon sequencing data that appear to support the presence of unexpected exogenous species. Results: I used reads that preferentially aligned to alternate genomes to infer the distribution of potential contaminant species in a set of independent sequencing experiments. I confirmed that dilute samples are more exposed to contaminating DNA, and, focusing on four single-cell sequencing experiments, found that these contaminants appear to originate from a wide diversity of clades. Although negative control libraries prepared from "blank" samples recovered the highest-frequency contaminants, low-frequency contaminants, which appeared to make heterogeneous contributions to samples prepared in parallel within a single experiment, were not well controlled for. I used these results to show that, despite heavy replication and plausible controls, contamination can explain all of the observations used to support a recent claim that complete genes pass from food to human blood. Conclusions: Contamination must be considered a potential source of signals of exogenous species in sequencing data, even if these signals are replicated in independent experiments, vary across conditions, or indicate a species which seems a priori unlikely to contaminate. Negative control libraries processed in parallel are essential to control for contaminant DNAs, but their limited ability to recover low-frequency contaminants must be recognized.
[ { "created": "Thu, 30 Jan 2014 20:35:03 GMT", "version": "v1" } ]
2015-06-18
[ [ "Lusk", "Richard W", "" ] ]
Background: Trace quantities of contaminating DNA are widespread in the laboratory environment, but their presence has received little attention in the context of high throughput sequencing. This issue is highlighted by recent works that have rested controversial claims upon sequencing data that appear to support the presence of unexpected exogenous species. Results: I used reads that preferentially aligned to alternate genomes to infer the distribution of potential contaminant species in a set of independent sequencing experiments. I confirmed that dilute samples are more exposed to contaminating DNA, and, focusing on four single-cell sequencing experiments, found that these contaminants appear to originate from a wide diversity of clades. Although negative control libraries prepared from "blank" samples recovered the highest-frequency contaminants, low-frequency contaminants, which appeared to make heterogeneous contributions to samples prepared in parallel within a single experiment, were not well controlled for. I used these results to show that, despite heavy replication and plausible controls, contamination can explain all of the observations used to support a recent claim that complete genes pass from food to human blood. Conclusions: Contamination must be considered a potential source of signals of exogenous species in sequencing data, even if these signals are replicated in independent experiments, vary across conditions, or indicate a species which seems a priori unlikely to contaminate. Negative control libraries processed in parallel are essential to control for contaminant DNAs, but their limited ability to recover low-frequency contaminants must be recognized.
2311.02091
Giorgia Ciavolella
Giorgia Ciavolella (IMB, MONC), Julien Granet (IMB, MONC), Jacky Goetz (IRM), Nael Osmani (IRM), Christ\`ele Etchegaray (MONC, IMB), Annabelle Collin (IMB, Bordeaux INP, MONC)
Deciphering circulating tumor cells binding in a microfluidic system thanks to a parameterized mathematical model
null
null
null
null
q-bio.TO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The spread of metastases is a crucial process in which some questions remain unanswered. In this work, we focus on tumor cells circulating in the bloodstream, the so-called Circulating Tumor Cells (CTCs). Our aim is to characterize their trajectories under the influence of hemodynamic and adhesion forces. We focus on already available in vitro measurements performed with a microfluidic device corresponding to the trajectories of CTCs -- without or with different protein depletions -- interacting with an endothelial layer. A key difficulty is the weak knowledge of the fluid velocity that has to be reconstructed. Our strategy combines a differential equation model -- a Poiseuille model for the fluid velocity and an ODE system for the cell adhesion model -- and a robust and well-designed calibration procedure. The parameterized model quantifies the strong influence of fluid velocity on adhesion and confirms the expected role of several proteins in the deceleration of CTCs. Finally, it enables the generation of synthetic cells, even for unobserved experimental conditions, opening the way to a digital twin for flowing cells with adhesion.
[ { "created": "Fri, 27 Oct 2023 08:41:20 GMT", "version": "v1" }, { "created": "Thu, 4 Jul 2024 09:45:06 GMT", "version": "v2" } ]
2024-07-08
[ [ "Ciavolella", "Giorgia", "", "IMB, MONC" ], [ "Granet", "Julien", "", "IMB, MONC" ], [ "Goetz", "Jacky", "", "IRM" ], [ "Osmani", "Nael", "", "IRM" ], [ "Etchegaray", "Christèle", "", "MONC, IMB" ], [ "Collin", "Annabelle", "", "IMB, Bordeaux INP, MONC" ] ]
The spread of metastases is a crucial process in which some questions remain unanswered. In this work, we focus on tumor cells circulating in the bloodstream, the so-called Circulating Tumor Cells (CTCs). Our aim is to characterize their trajectories under the influence of hemodynamic and adhesion forces. We focus on already available in vitro measurements performed with a microfluidic device corresponding to the trajectories of CTCs -- without or with different protein depletions -- interacting with an endothelial layer. A key difficulty is the weak knowledge of the fluid velocity that has to be reconstructed. Our strategy combines a differential equation model -- a Poiseuille model for the fluid velocity and an ODE system for the cell adhesion model -- and a robust and well-designed calibration procedure. The parameterized model quantifies the strong influence of fluid velocity on adhesion and confirms the expected role of several proteins in the deceleration of CTCs. Finally, it enables the generation of synthetic cells, even for unobserved experimental conditions, opening the way to a digital twin for flowing cells with adhesion.
1606.07024
J. C. Phillips
J. C. Phillips
Autoantibody recognition mechanisms of MUC1
13 pages, 7 figures
null
10.1016/j.physa.2016.11.075
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The most cost-effective blood-based, noninvasive molecular cancer biomarkers are based on p53 epitopes and MUC1 tandem repeats. Here we use dimensionally compressed bioinformatic fractal scaling analysis to compare the two distinct and comparable probes, which examine different sections of the autoantibody population, achieving combined sensitivities of order 50%. We discover a promising MUC1 epitope in the SEA region outside the tandem repeats.
[ { "created": "Tue, 21 Jun 2016 14:48:44 GMT", "version": "v1" } ]
2017-01-04
[ [ "Phillips", "J. C.", "" ] ]
The most cost-effective blood-based, noninvasive molecular cancer biomarkers are based on p53 epitopes and MUC1 tandem repeats. Here we use dimensionally compressed bioinformatic fractal scaling analysis to compare the two distinct and comparable probes, which examine different sections of the autoantibody population, achieving combined sensitivities of order 50%. We discover a promising MUC1 epitope in the SEA region outside the tandem repeats.
q-bio/0412031
Frank Dressel
F. Dressel and S. Kobe
Global optimization of proteins using a dynamical lattice model: Ground states and energy landscapes
8 pages, 5 figures, changed title and content, Chemical Physics Letters (in press)
Chem. Phys. Lett. 424 (2006) 369
10.1016/j.cplett.2006.05.007
null
q-bio.BM
null
A simple approach is proposed to investigate the protein structure. Using a low complexity model, a simple pairwise interaction and the concept of global optimization, we are able to calculate ground states of proteins, which are in agreement with experimental data. All possible model structures of small proteins are available below a certain energy threshold. The exact low-energy landscape for the trp cage protein (1L2Y) is presented showing the connectivity of all states and energy barriers.
[ { "created": "Thu, 16 Dec 2004 16:06:47 GMT", "version": "v1" }, { "created": "Wed, 14 Sep 2005 07:54:02 GMT", "version": "v2" }, { "created": "Fri, 2 Jun 2006 12:26:06 GMT", "version": "v3" } ]
2015-06-26
[ [ "Dressel", "F.", "" ], [ "Kobe", "S.", "" ] ]
A simple approach is proposed to investigate the protein structure. Using a low complexity model, a simple pairwise interaction and the concept of global optimization, we are able to calculate ground states of proteins, which are in agreement with experimental data. All possible model structures of small proteins are available below a certain energy threshold. The exact low-energy landscape for the trp cage protein (1L2Y) is presented showing the connectivity of all states and energy barriers.
1605.05682
Eugen Tarnow
Eugen Tarnow
Large individual differences in free recall
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using single factor ANOVA I show that there are large individual differences in free recall ({\eta} ranges from 0.09-0.26) including the total recall, the balance between recency and primacy, and the initial recall (subsequent recalls show smaller individual differences). All three memory properties are relatively uncorrelated. The variance in the initial position may be a measure of executive control and is correlated with total recall (the smaller the variation, the larger the recall).
[ { "created": "Mon, 4 Apr 2016 12:55:16 GMT", "version": "v1" } ]
2016-05-19
[ [ "Tarnow", "Eugen", "" ] ]
Using single factor ANOVA I show that there are large individual differences in free recall ({\eta} ranges from 0.09-0.26) including the total recall, the balance between recency and primacy, and the initial recall (subsequent recalls show smaller individual differences). All three memory properties are relatively uncorrelated. The variance in the initial position may be a measure of executive control and is correlated with total recall (the smaller the variation, the larger the recall).
2010.08527
Xianghao Zhan
Xianghao Zhan, Yuzhe Liu, Samuel J. Raymond, Hossein Vahid Alizadeh, August G. Domel, Olivier Gevaert, Michael Zeineh, Gerald Grant, David B. Camarillo
Deep Learning Head Model for Real-time Estimation of Entire Brain Deformation in Concussion
12 pages, 6 figures, IEEE journal
null
10.1109/TBME.2021.3073380
null
q-bio.TO cs.LG physics.bio-ph q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objective: Many recent studies have suggested that brain deformation resulting from a head impact is linked to the corresponding clinical outcome, such as mild traumatic brain injury (mTBI). Even though several finite element (FE) head models have been developed and validated to calculate brain deformation based on impact kinematics, the clinical application of these FE head models is limited due to the time-consuming nature of FE simulations. This work aims to accelerate the process of brain deformation calculation and thus improve the potential for clinical applications. Methods: We propose a deep learning head model with a five-layer deep neural network and feature engineering, and trained and tested the model on 1803 total head impacts from a combination of head model simulations and on-field college football and mixed martial arts impacts. Results: The proposed deep learning head model can calculate the maximum principal strain for every element in the entire brain in less than 0.001s (with an average root mean squared error of 0.025, and with a standard deviation of 0.002 over twenty repeats with random data partition and model initialization). The contributions of various features to the predictive power of the model were investigated, and it was noted that the features based on angular acceleration were found to be more predictive than the features based on angular velocity. Conclusion: Trained using the dataset of 1803 head impacts, this model can be applied to various sports in the calculation of brain strain with accuracy, and its applicability can even further be extended by incorporating data from other types of head impacts. Significance: In addition to the potential clinical application in real-time brain deformation monitoring, this model will help researchers estimate the brain strain from a large number of head impacts more efficiently than using FE models.
[ { "created": "Fri, 16 Oct 2020 17:37:59 GMT", "version": "v1" }, { "created": "Tue, 20 Oct 2020 18:50:29 GMT", "version": "v2" } ]
2022-03-11
[ [ "Zhan", "Xianghao", "" ], [ "Liu", "Yuzhe", "" ], [ "Raymond", "Samuel J.", "" ], [ "Alizadeh", "Hossein Vahid", "" ], [ "Domel", "August G.", "" ], [ "Gevaert", "Olivier", "" ], [ "Zeineh", "Michael", "" ], [ "Grant", "Gerald", "" ], [ "Camarillo", "David B.", "" ] ]
Objective: Many recent studies have suggested that brain deformation resulting from a head impact is linked to the corresponding clinical outcome, such as mild traumatic brain injury (mTBI). Even though several finite element (FE) head models have been developed and validated to calculate brain deformation based on impact kinematics, the clinical application of these FE head models is limited due to the time-consuming nature of FE simulations. This work aims to accelerate the process of brain deformation calculation and thus improve the potential for clinical applications. Methods: We propose a deep learning head model with a five-layer deep neural network and feature engineering, and trained and tested the model on 1803 total head impacts from a combination of head model simulations and on-field college football and mixed martial arts impacts. Results: The proposed deep learning head model can calculate the maximum principal strain for every element in the entire brain in less than 0.001s (with an average root mean squared error of 0.025, and with a standard deviation of 0.002 over twenty repeats with random data partition and model initialization). The contributions of various features to the predictive power of the model were investigated, and it was noted that the features based on angular acceleration were found to be more predictive than the features based on angular velocity. Conclusion: Trained using the dataset of 1803 head impacts, this model can be applied to various sports in the calculation of brain strain with accuracy, and its applicability can even further be extended by incorporating data from other types of head impacts. Significance: In addition to the potential clinical application in real-time brain deformation monitoring, this model will help researchers estimate the brain strain from a large number of head impacts more efficiently than using FE models.
q-bio/0408006
Nikolay V. Dokholyan
Nikolay V. Dokholyan
The architecture of the protein domain universe
14 pages, 3 figures
null
null
null
q-bio.MN cond-mat.stat-mech q-bio.BM
null
Understanding the design of the universe of protein structures may provide insights into protein evolution. We study the architecture of the protein domain universe, which has been found to possess peculiar scale-free properties (Dokholyan et al., Proc. Natl. Acad. Sci. USA 99: 14132-14136 (2002)). We examine the origin of these scale-free properties of the graph of protein domain structures (PDUG) and determine that the PDUG is not modular, i.e. it does not consist of modules with uniform properties. Instead, we find the PDUG to be self-similar at all scales. We further characterize the PDUG architecture by studying the properties of the hub nodes that are responsible for the scale-free connectivity of the PDUG. We introduce a measure of the betweenness centrality of protein domains in the PDUG and find a power-law distribution of the betweenness centrality values. The scale-free distribution of hubs in the protein universe suggests that a set of specific statistical mechanics models, such as the self-organized criticality model, can potentially identify the principal driving forces of molecular evolution. We also find a gatekeeper protein domain, removal of which partitions the largest cluster into two large sub-clusters. We suggest that the loss of such gatekeeper protein domains in the course of evolution is responsible for the creation of new fold families.
[ { "created": "Thu, 12 Aug 2004 00:04:06 GMT", "version": "v1" } ]
2007-05-23
[ [ "Dokholyan", "Nikolay V.", "" ] ]
Understanding the design of the universe of protein structures may provide insights into protein evolution. We study the architecture of the protein domain universe, which has been found to possess peculiar scale-free properties (Dokholyan et al., Proc. Natl. Acad. Sci. USA 99: 14132-14136 (2002)). We examine the origin of these scale-free properties of the graph of protein domain structures (PDUG) and determine that the PDUG is not modular, i.e. it does not consist of modules with uniform properties. Instead, we find the PDUG to be self-similar at all scales. We further characterize the PDUG architecture by studying the properties of the hub nodes that are responsible for the scale-free connectivity of the PDUG. We introduce a measure of the betweenness centrality of protein domains in the PDUG and find a power-law distribution of the betweenness centrality values. The scale-free distribution of hubs in the protein universe suggests that a set of specific statistical mechanics models, such as the self-organized criticality model, can potentially identify the principal driving forces of molecular evolution. We also find a gatekeeper protein domain, removal of which partitions the largest cluster into two large sub-clusters. We suggest that the loss of such gatekeeper protein domains in the course of evolution is responsible for the creation of new fold families.
0801.0082
Tamon Stephen
Utz-Uwe Haus, Steffen Klamt, Tamon Stephen
Computing knock out strategies in metabolic networks
12 pages
Journal of Computational Biology. April 1, 2008, 15(3): 259-268
10.1089/cmb.2007.0229
null
q-bio.QM
null
Given a metabolic network in terms of its metabolites and reactions, our goal is to efficiently compute the minimal knock out sets of reactions required to block a given behaviour. We describe an algorithm which improves the computation of these knock out sets when the elementary modes (minimal functional subsystems) of the network are given. We also describe an algorithm which computes both the knock out sets and the elementary modes containing the blocked reactions directly from the description of the network and whose worst-case computational complexity is better than the algorithms currently in use for these problems. Computational results are included.
[ { "created": "Sat, 29 Dec 2007 19:13:34 GMT", "version": "v1" } ]
2008-08-28
[ [ "Haus", "Utz-Uwe", "" ], [ "Klamt", "Steffen", "" ], [ "Stephen", "Tamon", "" ] ]
Given a metabolic network in terms of its metabolites and reactions, our goal is to efficiently compute the minimal knock out sets of reactions required to block a given behaviour. We describe an algorithm which improves the computation of these knock out sets when the elementary modes (minimal functional subsystems) of the network are given. We also describe an algorithm which computes both the knock out sets and the elementary modes containing the blocked reactions directly from the description of the network and whose worst-case computational complexity is better than the algorithms currently in use for these problems. Computational results are included.
1206.1800
Xaq Pitkow
Xaq Pitkow
Compressive neural representation of sparse, high-dimensional probabilities
9 pages, 4 figures
null
null
null
q-bio.NC cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper shows how sparse, high-dimensional probability distributions could be represented by neurons with exponential compression. The representation is a novel application of compressive sensing to sparse probability distributions rather than to the usual sparse signals. The compressive measurements correspond to expected values of nonlinear functions of the probabilistically distributed variables. When these expected values are estimated by sampling, the quality of the compressed representation is limited only by the quality of sampling. Since the compression preserves the geometric structure of the space of sparse probability distributions, probabilistic computation can be performed in the compressed domain. Interestingly, functions satisfying the requirements of compressive sensing can be implemented as simple perceptrons. If we use perceptrons as a simple model of feedforward computation by neurons, these results show that the mean activity of a relatively small number of neurons can accurately represent a high-dimensional joint distribution implicitly, even without accounting for any noise correlations. This comprises a novel hypothesis for how neurons could encode probabilities in the brain.
[ { "created": "Fri, 8 Jun 2012 15:52:50 GMT", "version": "v1" } ]
2012-06-11
[ [ "Pitkow", "Xaq", "" ] ]
This paper shows how sparse, high-dimensional probability distributions could be represented by neurons with exponential compression. The representation is a novel application of compressive sensing to sparse probability distributions rather than to the usual sparse signals. The compressive measurements correspond to expected values of nonlinear functions of the probabilistically distributed variables. When these expected values are estimated by sampling, the quality of the compressed representation is limited only by the quality of sampling. Since the compression preserves the geometric structure of the space of sparse probability distributions, probabilistic computation can be performed in the compressed domain. Interestingly, functions satisfying the requirements of compressive sensing can be implemented as simple perceptrons. If we use perceptrons as a simple model of feedforward computation by neurons, these results show that the mean activity of a relatively small number of neurons can accurately represent a high-dimensional joint distribution implicitly, even without accounting for any noise correlations. This comprises a novel hypothesis for how neurons could encode probabilities in the brain.
2006.02741
Ines Hipolito
Ines Hipolito, Maxwell Ramstead, Laura Convertino, Anjali Bhat, Karl Friston, Thomas Parr
Markov Blankets in the Brain
25 pages, 5 figures, 1 table, Glossary
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent characterisations of self-organising systems depend upon the presence of a Markov blanket: a statistical boundary that mediates the interactions between what is inside of and outside of a system. We leverage this idea to provide an analysis of partitions in neuronal systems. This is applicable to brain architectures at multiple scales, enabling partitions into single neurons, brain regions, and brain-wide networks. This treatment is based upon the canonical micro-circuitry used in empirical studies of effective connectivity, so as to speak directly to practical applications. This depends upon the dynamic coupling between functional units, whose form recapitulates that of a Markov blanket at each level. The nuance afforded by partitioning neural systems in this way highlights certain limitations of modular perspectives of brain function that only consider a single level of description.
[ { "created": "Thu, 4 Jun 2020 10:03:31 GMT", "version": "v1" } ]
2020-06-05
[ [ "Hipolito", "Ines", "" ], [ "Ramstead", "Maxwell", "" ], [ "Convertino", "Laura", "" ], [ "Bhat", "Anjali", "" ], [ "Friston", "Karl", "" ], [ "Parr", "Thomas", "" ] ]
Recent characterisations of self-organising systems depend upon the presence of a Markov blanket: a statistical boundary that mediates the interactions between what is inside of and outside of a system. We leverage this idea to provide an analysis of partitions in neuronal systems. This is applicable to brain architectures at multiple scales, enabling partitions into single neurons, brain regions, and brain-wide networks. This treatment is based upon the canonical micro-circuitry used in empirical studies of effective connectivity, so as to speak directly to practical applications. This depends upon the dynamic coupling between functional units, whose form recapitulates that of a Markov blanket at each level. The nuance afforded by partitioning neural systems in this way highlights certain limitations of modular perspectives of brain function that only consider a single level of description.
1310.8420
Ameet Talwalkar
Ameet Talwalkar, Jesse Liptrap, Julie Newcomb, Christopher Hartl, Jonathan Terhorst, Kristal Curtis, Ma'ayan Bresler, Yun S. Song, Michael I. Jordan, David Patterson
SMaSH: A Benchmarking Toolkit for Human Genome Variant Calling
null
null
null
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Computational methods are essential to extract actionable information from raw sequencing data, and to thus fulfill the promise of next-generation sequencing technology. Unfortunately, computational tools developed to call variants from human sequencing data disagree on many of their predictions, and current methods to evaluate accuracy and computational performance are ad-hoc and incomplete. Agreement on benchmarking variant calling methods would stimulate development of genomic processing tools and facilitate communication among researchers. Results: We propose SMaSH, a benchmarking methodology for evaluating human genome variant calling algorithms. We generate synthetic datasets, organize and interpret a wide range of existing benchmarking data for real genomes, and propose a set of accuracy and computational performance metrics for evaluating variant calling methods on this benchmarking data. Moreover, we illustrate the utility of SMaSH to evaluate the performance of some leading single nucleotide polymorphism (SNP), indel, and structural variant calling algorithms. Availability: We provide free and open access online to the SMaSH toolkit, along with detailed documentation, at smash.cs.berkeley.edu.
[ { "created": "Thu, 31 Oct 2013 08:09:16 GMT", "version": "v1" }, { "created": "Sun, 5 Jan 2014 07:36:34 GMT", "version": "v2" } ]
2014-01-07
[ [ "Talwalkar", "Ameet", "" ], [ "Liptrap", "Jesse", "" ], [ "Newcomb", "Julie", "" ], [ "Hartl", "Christopher", "" ], [ "Terhorst", "Jonathan", "" ], [ "Curtis", "Kristal", "" ], [ "Bresler", "Ma'ayan", "" ], [ "Song", "Yun S.", "" ], [ "Jordan", "Michael I.", "" ], [ "Patterson", "David", "" ] ]
Motivation: Computational methods are essential to extract actionable information from raw sequencing data, and to thus fulfill the promise of next-generation sequencing technology. Unfortunately, computational tools developed to call variants from human sequencing data disagree on many of their predictions, and current methods to evaluate accuracy and computational performance are ad-hoc and incomplete. Agreement on benchmarking variant calling methods would stimulate development of genomic processing tools and facilitate communication among researchers. Results: We propose SMaSH, a benchmarking methodology for evaluating human genome variant calling algorithms. We generate synthetic datasets, organize and interpret a wide range of existing benchmarking data for real genomes, and propose a set of accuracy and computational performance metrics for evaluating variant calling methods on this benchmarking data. Moreover, we illustrate the utility of SMaSH to evaluate the performance of some leading single nucleotide polymorphism (SNP), indel, and structural variant calling algorithms. Availability: We provide free and open access online to the SMaSH toolkit, along with detailed documentation, at smash.cs.berkeley.edu.
q-bio/0703063
Erik Aurell
Erik Aurell, Aymeric Fouquier d'Herouel, Claes Malmnas, Massimo Vergassola
Noise-filtering features of transcription regulation in the yeast S. cerevisiae
15 pages, 5 figures
null
null
null
q-bio.GN
null
Transcription regulation is largely governed by the profile and the dynamics of transcription factors' binding to DNA. Stochastic effects are intrinsic to this dynamics and the binding to functional sites must be controlled with a certain specificity for living organisms to be able to elicit specific cellular responses. Specificity stems here from the interplay between binding affinity and cellular abundance of transcription factor proteins and the binding of such proteins to DNA is thus controlled by their chemical potential. We combine large-scale protein abundance data in the budding yeast with binding affinities for all transcription factors with known DNA binding site sequences to assess the behavior of their chemical potentials. A sizable fraction of transcription factors is apparently bound non-specifically to DNA and the observed abundances are marginally sufficient to ensure high occupations of the functional sites. We argue that a biological cause of this feature is related to its noise-filtering consequences: abundances below physiological levels do not yield significant binding of functional targets and mis-expressions of regulated genes are thus tamed.
[ { "created": "Thu, 29 Mar 2007 06:20:32 GMT", "version": "v1" } ]
2007-05-23
[ [ "Aurell", "Erik", "" ], [ "d'Herouel", "Aymeric Fouquier", "" ], [ "Malmnas", "Claes", "" ], [ "Vergassola", "Massimo", "" ] ]
Transcription regulation is largely governed by the profile and the dynamics of transcription factors' binding to DNA. Stochastic effects are intrinsic to this dynamics and the binding to functional sites must be controlled with a certain specificity for living organisms to be able to elicit specific cellular responses. Specificity stems here from the interplay between binding affinity and cellular abundance of transcription factor proteins and the binding of such proteins to DNA is thus controlled by their chemical potential. We combine large-scale protein abundance data in the budding yeast with binding affinities for all transcription factors with known DNA binding site sequences to assess the behavior of their chemical potentials. A sizable fraction of transcription factors is apparently bound non-specifically to DNA and the observed abundances are marginally sufficient to ensure high occupations of the functional sites. We argue that a biological cause of this feature is related to its noise-filtering consequences: abundances below physiological levels do not yield significant binding of functional targets and mis-expressions of regulated genes are thus tamed.
2105.03400
Sathish Ande
Sathish Ande, Srinivas Avasarala, Ajith Karunarathne, Lopamudra Giri, Soumya Jana
Correlation in Neuronal Calcium Spiking: Quantification based on Empirical Mutual Information Rate
arXiv admin note: substantial text overlap with arXiv:2102.00723
null
null
null
q-bio.NC eess.SP
http://creativecommons.org/licenses/by-nc-nd/4.0/
Quantification of neuronal correlations in neuron populations helps us to understand neural coding rules. Such quantification could also reveal how neurons encode information in normal and disease conditions like Alzheimer's and Parkinson's. While neurons communicate with each other by transmitting spikes, there would inherently be a change in calcium concentration within the neurons. Accordingly, there would be correlations in calcium spike trains and they could have heterogeneous memory structures. In this context, estimation of mutual information rate in calcium spike trains assumes primary significance. However, such estimation is difficult with available methods which would consider longer blocks for convergence without noticing that neuronal information changes in short time windows. Against this backdrop, we propose a faster method that exploits the memory structures in pairs of calcium spike trains to quantify mutual information shared between them. Our method has shown superior performance with example Markov processes as well as experimental spike trains. Such mutual information rate analysis could be used to identify signatures of neuronal behavior in large populations in normal and abnormal conditions.
[ { "created": "Fri, 7 May 2021 17:19:02 GMT", "version": "v1" } ]
2021-05-10
[ [ "Ande", "Sathish", "" ], [ "Avasarala", "Srinivas", "" ], [ "Karunarathne", "Ajith", "" ], [ "Giri", "Lopamudra", "" ], [ "Jana", "Soumya", "" ] ]
Quantification of neuronal correlations in neuron populations helps us to understand neural coding rules. Such quantification could also reveal how neurons encode information in normal and disease conditions like Alzheimer's and Parkinson's. While neurons communicate with each other by transmitting spikes, there would inherently be a change in calcium concentration within the neurons. Accordingly, there would be correlations in calcium spike trains and they could have heterogeneous memory structures. In this context, estimation of mutual information rate in calcium spike trains assumes primary significance. However, such estimation is difficult with available methods which would consider longer blocks for convergence without noticing that neuronal information changes in short time windows. Against this backdrop, we propose a faster method that exploits the memory structures in pairs of calcium spike trains to quantify mutual information shared between them. Our method has shown superior performance with example Markov processes as well as experimental spike trains. Such mutual information rate analysis could be used to identify signatures of neuronal behavior in large populations in normal and abnormal conditions.
q-bio/0509024
J\"org Langowski
Frank Aumann, Filip Lankas, Maiwen Caudron and J\"org Langowski
In silicio stretching of chromatin
51 pages, 13 figures
null
null
null
q-bio.BM q-bio.GN
null
We present Monte-Carlo (MC) simulations of the stretching of a single 30 nm chromatin fiber. The model approximates the DNA by a flexible polymer chain with Debye-H\"uckel electrostatics and uses a two-angle zig-zag model for the geometry of the linker DNA connecting the nucleosomes. The latter are represented by flat disks interacting via an attractive Gay-Berne potential. Our results show that the stiffness of the chromatin fiber strongly depends on the linker DNA length. Furthermore, changing the twisting angle between nucleosomes from 90 deg to 130 deg increases the stiffness significantly. An increase in the opening angle from 22 deg to 34 deg leads to softer fibers for small linker lengths. We observe that fibers containing a linker histone at each nucleosome are stiffer compared to those without the linker histone. The simulated persistence lengths and elastic moduli agree with experimental data. Finally, we show that the chromatin fiber does not behave as an isotropic elastic rod, but its rigidity depends on the direction of deformation: chromatin is much more resistant to stretching than to bending.
[ { "created": "Wed, 21 Sep 2005 09:36:23 GMT", "version": "v1" } ]
2007-05-23
[ [ "Aumann", "Frank", "" ], [ "Lankas", "Filip", "" ], [ "Caudron", "Maiwen", "" ], [ "Langowski", "Jörg", "" ] ]
We present Monte-Carlo (MC) simulations of the stretching of a single 30 nm chromatin fiber. The model approximates the DNA by a flexible polymer chain with Debye-H\"uckel electrostatics and uses a two-angle zig-zag model for the geometry of the linker DNA connecting the nucleosomes. The latter are represented by flat disks interacting via an attractive Gay-Berne potential. Our results show that the stiffness of the chromatin fiber strongly depends on the linker DNA length. Furthermore, changing the twisting angle between nucleosomes from 90 deg to 130 deg increases the stiffness significantly. An increase in the opening angle from 22 deg to 34 deg leads to softer fibers for small linker lengths. We observe that fibers containing a linker histone at each nucleosome are stiffer compared to those without the linker histone. The simulated persistence lengths and elastic moduli agree with experimental data. Finally, we show that the chromatin fiber does not behave as an isotropic elastic rod, but its rigidity depends on the direction of deformation: chromatin is much more resistant to stretching than to bending.
1302.3195
George Young
George Forrest Young, Luca Scardovi, Andrea Cavagna, Irene Giardina and Naomi Ehrich Leonard
Starling flock networks manage uncertainty in consensus at low cost
19 pages, 3 figures, 9 supporting figures
PLoS Comput. Biol. 9 (2013) 1-e1002894
10.1371/journal.pcbi.1002894
null
q-bio.PE nlin.AO physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Flocks of starlings exhibit a remarkable ability to maintain cohesion as a group in highly uncertain environments and with limited, noisy information. Recent work demonstrated that individual starlings within large flocks respond to a fixed number of nearest neighbors, but until now it was not understood why this number is seven. We analyze robustness to uncertainty of consensus in empirical data from multiple starling flocks and show that the flock interaction networks with six or seven neighbors optimize the trade-off between group cohesion and individual effort. We can distinguish these numbers of neighbors from fewer or greater numbers using our systems-theoretic approach to measuring robustness of interaction networks as a function of the network structure, i.e., who is sensing whom. The metric quantifies the disagreement within the network due to disturbances and noise during consensus behavior and can be evaluated over a parameterized family of hypothesized sensing strategies (here the parameter is number of neighbors). We use this approach to further show that for the range of flocks studied the optimal number of neighbors does not depend on the number of birds within a flock; rather, it depends on the shape, notably the thickness, of the flock. The results suggest that robustness to uncertainty may have been a factor in the evolution of flocking for starlings. More generally, our results elucidate the role of the interaction network on uncertainty management in collective behavior, and motivate the application of our approach to other biological networks.
[ { "created": "Wed, 13 Feb 2013 19:26:35 GMT", "version": "v1" } ]
2013-02-14
[ [ "Young", "George Forrest", "" ], [ "Scardovi", "Luca", "" ], [ "Cavagna", "Andrea", "" ], [ "Giardina", "Irene", "" ], [ "Leonard", "Naomi Ehrich", "" ] ]
Flocks of starlings exhibit a remarkable ability to maintain cohesion as a group in highly uncertain environments and with limited, noisy information. Recent work demonstrated that individual starlings within large flocks respond to a fixed number of nearest neighbors, but until now it was not understood why this number is seven. We analyze robustness to uncertainty of consensus in empirical data from multiple starling flocks and show that the flock interaction networks with six or seven neighbors optimize the trade-off between group cohesion and individual effort. We can distinguish these numbers of neighbors from fewer or greater numbers using our systems-theoretic approach to measuring robustness of interaction networks as a function of the network structure, i.e., who is sensing whom. The metric quantifies the disagreement within the network due to disturbances and noise during consensus behavior and can be evaluated over a parameterized family of hypothesized sensing strategies (here the parameter is number of neighbors). We use this approach to further show that for the range of flocks studied the optimal number of neighbors does not depend on the number of birds within a flock; rather, it depends on the shape, notably the thickness, of the flock. The results suggest that robustness to uncertainty may have been a factor in the evolution of flocking for starlings. More generally, our results elucidate the role of the interaction network on uncertainty management in collective behavior, and motivate the application of our approach to other biological networks.
0806.2005
Pierre Baconnier
S. Randall Thomas (IBISC), Enas Abdulhay (TIMC), Pierre Baconnier (TIMC), Julie Fontecave (TIMC), Jean-Pierre Francoise (LJLL), Francois Guillaud, Patrick Hannaert, Alfredo Hernandez (LTSI), Virginie Le Rolle (LTSI), Pierre Maziere (CPBS), Fariza Tahi, Farida Zehraoui (LIPN)
SAPHIR - a multi-scale, multi-resolution modeling environment targeting blood pressure regulation and fluid homeostasis
null
Conference proceedings : Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2007 (2007) 6649-52
10.1109/IEMBS.2007.4353884
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present progress on a comprehensive, modular, interactive modeling environment centered on overall regulation of blood pressure and body fluid homeostasis. We call the project SAPHIR, for "a Systems Approach for PHysiological Integration of Renal, cardiac, and respiratory functions". The project uses state-of-the-art multi-scale simulation methods. The basic core model will give succinct input-output (reduced-dimension) descriptions of all relevant organ systems and regulatory processes, and it will be modular, multi-resolution, and extensible, in the sense that detailed submodules of any process(es) can be "plugged-in" to the basic model in order to explore, e.g., system-level implications of local perturbations. The goal is to keep the basic core model compact enough to ensure fast execution time (in view of eventual use in the clinic) and yet to allow elaborate detailed modules of target tissues or organs in order to focus on the problem area while maintaining the system-level regulatory compensations.
[ { "created": "Thu, 12 Jun 2008 06:39:06 GMT", "version": "v1" } ]
2008-12-18
[ [ "Thomas", "S. Randall", "", "IBISC" ], [ "Abdulhay", "Enas", "", "TIMC" ], [ "Baconnier", "Pierre", "", "TIMC" ], [ "Fontecave", "Julie", "", "TIMC" ], [ "Francoise", "Jean-Pierre", "", "LJLL" ], [ "Guillaud", "Francois", "", "LTSI" ], [ "Hannaert", "Patrick", "", "LTSI" ], [ "Hernandez", "Alfredo", "", "LTSI" ], [ "Rolle", "Virginie Le", "", "LTSI" ], [ "Maziere", "Pierre", "", "CPBS" ], [ "Tahi", "Fariza", "", "LIPN" ], [ "Zehraoui", "Farida", "", "LIPN" ] ]
We present progress on a comprehensive, modular, interactive modeling environment centered on overall regulation of blood pressure and body fluid homeostasis. We call the project SAPHIR, for "a Systems Approach for PHysiological Integration of Renal, cardiac, and respiratory functions". The project uses state-of-the-art multi-scale simulation methods. The basic core model will give succinct input-output (reduced-dimension) descriptions of all relevant organ systems and regulatory processes, and it will be modular, multi-resolution, and extensible, in the sense that detailed submodules of any process(es) can be "plugged-in" to the basic model in order to explore, e.g., system-level implications of local perturbations. The goal is to keep the basic core model compact enough to ensure fast execution time (in view of eventual use in the clinic) and yet to allow elaborate detailed modules of target tissues or organs in order to focus on the problem area while maintaining the system-level regulatory compensations.
1002.3292
Sk Sarif Hassan
Sk. Sarif Hassana, Pabitra Pal Choudhury, Amita Pal, R. L. Brahmachary and Arunava Goswami
Complete Human Mitochondrial Genome Construction Using L-systems
null
Global Journal of Computer Science and Technology, 2010, USA
null
null
q-bio.OT
http://creativecommons.org/licenses/by/3.0/
Recently, scientists from The Craig J. Venter Institute reported construction of very long DNA molecules using a variety of experimental procedures adopting a number of working hypotheses. Finding a mathematical rule for generation of such a long sequence would revolutionize our thinking on various advanced areas of biology, viz. evolution of long DNA chains in chromosomes and reasons for existence of long stretches of non-coding regions, and would usher in automated methods for preparing long DNA chains for chromosome engineering. However, this mathematical principle must have room for editing / correcting DNA sequences locally in those areas of genomes where mutation and / or DNA polymerase has introduced errors over millions of years. In this paper, we report the basics and applications of the L-system (a mathematical principle) which could address all the aforesaid issues. At the end, we present the whole human mitochondrial genome, which has been generated using this mathematical principle with PC computation power. We can now claim that we can make any stretch of DNA, be it the 936 bp of an olfactory receptor, with or without introns, mitochondrial DNA, or the 3 x 10^9 bp DNA sequence of the whole human genome, even with PC computation power.
[ { "created": "Wed, 17 Feb 2010 15:58:49 GMT", "version": "v1" }, { "created": "Wed, 24 Feb 2010 17:20:03 GMT", "version": "v2" }, { "created": "Thu, 25 Feb 2010 09:39:20 GMT", "version": "v3" } ]
2010-05-05
[ [ "Hassana", "Sk. Sarif", "" ], [ "Choudhury", "Pabitra Pal", "" ], [ "Pal", "Amita", "" ], [ "Brahmachary", "R. L.", "" ], [ "Goswami", "Arunava", "" ] ]
Recently, scientists from The Craig J. Venter Institute reported construction of very long DNA molecules using a variety of experimental procedures adopting a number of working hypotheses. Finding a mathematical rule for generation of such a long sequence would revolutionize our thinking on various advanced areas of biology, viz. evolution of long DNA chains in chromosomes and reasons for existence of long stretches of non-coding regions, and would usher in automated methods for preparing long DNA chains for chromosome engineering. However, this mathematical principle must have room for editing / correcting DNA sequences locally in those areas of genomes where mutation and / or DNA polymerase has introduced errors over millions of years. In this paper, we report the basics and applications of the L-system (a mathematical principle) which could address all the aforesaid issues. At the end, we present the whole human mitochondrial genome, which has been generated using this mathematical principle with PC computation power. We can now claim that we can make any stretch of DNA, be it the 936 bp of an olfactory receptor, with or without introns, mitochondrial DNA, or the 3 x 10^9 bp DNA sequence of the whole human genome, even with PC computation power.
1101.5193
Yoichiro Mori
Yoichiro Mori, Chun Liu and Robert S. Eisenberg
A Model of Electrodiffusion and Osmotic Water Flow and its Energetic Structure
null
null
10.1016/j.bpj.2010.12.678
null
q-bio.CB cond-mat.soft math.AP physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a model for ionic electrodiffusion and osmotic water flow through cells and tissues. The model consists of a system of partial differential equations for ionic concentration and fluid flow with interface conditions at deforming membrane boundaries. The model satisfies a natural energy equality, in which the sum of the entropic, elastic and electrostatic free energies is dissipated through viscous, electrodiffusive and osmotic flows. We discuss limiting models when certain dimensionless parameters are small. Finally, we develop a numerical scheme for the one-dimensional case and present some simple applications of our model to cell volume control.
[ { "created": "Thu, 27 Jan 2011 01:47:41 GMT", "version": "v1" } ]
2017-08-23
[ [ "Mori", "Yoichiro", "" ], [ "Liu", "Chun", "" ], [ "Eisenberg", "Robert S.", "" ] ]
We introduce a model for ionic electrodiffusion and osmotic water flow through cells and tissues. The model consists of a system of partial differential equations for ionic concentration and fluid flow with interface conditions at deforming membrane boundaries. The model satisfies a natural energy equality, in which the sum of the entropic, elastic and electrostatic free energies is dissipated through viscous, electrodiffusive and osmotic flows. We discuss limiting models when certain dimensionless parameters are small. Finally, we develop a numerical scheme for the one-dimensional case and present some simple applications of our model to cell volume control.
1002.0876
Bruno Goncalves
Paolo Bajardi, Chiara Poletto, Duygu Balcan, Hao Hu, Bruno Goncalves, Jose J. Ramasco, Daniela Paolotti, Nicola Perra, Michele Tizzoni, Wouter Van den Broeck, Vittoria Colizza, Alessandro Vespignani
Modeling vaccination campaigns and the Fall/Winter 2009 activity of the new A(H1N1) influenza in the Northern Hemisphere
Paper: 19 Pages, 3 Figures. Supplementary Information: 10 pages, 8 Tables
Emerging Health Threats 2, E11 (2009)
10.3134/ehtj.09.011
null
q-bio.PE physics.soc-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The unfolding of pandemic influenza A(H1N1) for Fall 2009 in the Northern Hemisphere is still uncertain. Plans for vaccination campaigns and vaccine trials are underway, with the first batches expected to be available early October. Several studies point to the possibility of an anticipated pandemic peak that could undermine the effectiveness of vaccination strategies. Here we use a structured global epidemic and mobility metapopulation model to assess the effectiveness of massive vaccination campaigns for the Fall/Winter 2009. Mitigation effects are explored depending on the interplay between the predicted pandemic evolution and the expected delivery of vaccines. The model is calibrated using recent estimates on the transmissibility of the new A(H1N1) influenza. Results show that if additional intervention strategies were not used to delay the time of pandemic peak, vaccination may not be able to considerably reduce the cumulative number of cases, even when the mass vaccination campaign is started as early as mid-October. Prioritized vaccination would be crucial in slowing down the pandemic evolution and reducing its burden.
[ { "created": "Thu, 4 Feb 2010 01:46:06 GMT", "version": "v1" } ]
2010-02-10
[ [ "Bajardi", "Paolo", "" ], [ "Poletto", "Chiara", "" ], [ "Balcan", "Duygu", "" ], [ "Hu", "Hao", "" ], [ "Goncalves", "Bruno", "" ], [ "Ramasco", "Jose J.", "" ], [ "Paolotti", "Daniela", "" ], [ "Perra", "Nicola", "" ], [ "Tizzoni", "Michele", "" ], [ "Broeck", "Wouter Van den", "" ], [ "Colizza", "Vittoria", "" ], [ "Vespignani", "Alessandro", "" ] ]
The unfolding of pandemic influenza A(H1N1) for Fall 2009 in the Northern Hemisphere is still uncertain. Plans for vaccination campaigns and vaccine trials are underway, with the first batches expected to be available early October. Several studies point to the possibility of an anticipated pandemic peak that could undermine the effectiveness of vaccination strategies. Here we use a structured global epidemic and mobility metapopulation model to assess the effectiveness of massive vaccination campaigns for the Fall/Winter 2009. Mitigation effects are explored depending on the interplay between the predicted pandemic evolution and the expected delivery of vaccines. The model is calibrated using recent estimates on the transmissibility of the new A(H1N1) influenza. Results show that if additional intervention strategies were not used to delay the time of pandemic peak, vaccination may not be able to considerably reduce the cumulative number of cases, even when the mass vaccination campaign is started as early as mid-October. Prioritized vaccination would be crucial in slowing down the pandemic evolution and reducing its burden.
1409.0243
Bob Eisenberg
Bob Eisenberg
Can we make biochemistry an exact science?
This is a documented, expanded version of a paper with a similar title scheduled for publication in the October 2014 issue of ASBMB Today, Editor: Angela Hopp. Chair of Editorial Advisory Board: Charlie Brenner
null
null
null
q-bio.BM q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biochemists know that the law of mass action is not exact and not very useful because we cannot transfer it (with unchanged parameters) from one condition to another. I argue that exact equations require calibrated multiscale analysis to deal with ions. Exact theories in biochemistry must use mathematics of interactions because biological ionic solutions, derived from seawater, are complex (not simple) fluids. The activity of one ion depends on every other ion. Mathematics of conservative interactions is well understood but friction is another matter. Mathematicians now have an energetic variational calculus dealing with friction. Complex fluids need variational methods because everything interacts with everything else. Mathematics designed to handle interactions is needed to produce exact equations. If interactions are not addressed with variational mathematics, they are bewildering. The mathematics must include the global properties of the electric field. Flow of charge in one place changes the flow everywhere by Kirchhoff and Maxwell laws. Charge changes physical nature as it flows through a circuit. It is ions in salt water; it is electrons in a vacuum tube; it is quasi-particles in a semiconductor; and it is nothing much in a vacuum capacitor (i.e., displacement current). Charge is abstract. The physical nature of charge and current is strikingly diverse; yet, the flow of current is exactly the same in every element in a series circuit. The global nature of electric flow prevents the law of mass action from being exact. The law of mass action (with rate constants that are constant) does not know about charge. The law of mass action is about mass conservation. I believe the law of mass action must be modified to be consistent with the Kirchhoff current law if biochemistry is to be an exact science.
[ { "created": "Sun, 31 Aug 2014 18:22:32 GMT", "version": "v1" } ]
2014-09-02
[ [ "Eisenberg", "Bob", "" ] ]
Biochemists know that the law of mass action is not exact and not very useful because we cannot transfer it (with unchanged parameters) from one condition to another. I argue that exact equations require calibrated multiscale analysis to deal with ions. Exact theories in biochemistry must use mathematics of interactions because biological ionic solutions, derived from seawater, are complex (not simple) fluids. The activity of one ion depends on every other ion. Mathematics of conservative interactions is well understood but friction is another matter. Mathematicians now have an energetic variational calculus dealing with friction. Complex fluids need variational methods because everything interacts with everything else. Mathematics designed to handle interactions is needed to produce exact equations. If interactions are not addressed with variational mathematics, they are bewildering. The mathematics must include the global properties of the electric field. Flow of charge in one place changes the flow everywhere by Kirchhoff and Maxwell laws. Charge changes physical nature as it flows through a circuit. It is ions in salt water; it is electrons in a vacuum tube; it is quasi-particles in a semiconductor; and it is nothing much in a vacuum capacitor (i.e., displacement current). Charge is abstract. The physical nature of charge and current is strikingly diverse; yet, the flow of current is exactly the same in every element in a series circuit. The global nature of electric flow prevents the law of mass action from being exact. The law of mass action (with rate constants that are constant) does not know about charge. The law of mass action is about mass conservation. I believe the law of mass action must be modified to be consistent with the Kirchhoff current law if biochemistry is to be an exact science.
2011.11808
Cheng Ma
Cheng Ma and Gyorgy Korniss and Boleslaw K. Szymanski and Jianxi Gao
Universality of noise-induced resilience restoration in spatially-extended ecological systems
31 pages, 7 figures
Communications Physics, vol. 4:262, Dec. 10, 2021
10.1038/s42005-021-00758-2
null
q-bio.PE cond-mat.stat-mech q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Many systems may switch to an undesired state due to internal failures or external perturbations, of which critical transitions toward degraded ecosystem states are a prominent example. Resilience restoration focuses on the ability of spatially-extended systems and the required time to recover to their desired states under stochastic environmental conditions. While mean-field approaches may guide recovery strategies by indicating the conditions needed to destabilize undesired states, these approaches do not accurately capture the transition process toward the desired state of spatially-extended systems in stochastic environments. The difficulty is rooted in the lack of mathematical tools to analyze systems with high dimensionality, nonlinearity, and stochastic effects. We bridge this gap by developing new mathematical tools that employ nucleation theory in spatially-embedded systems to advance resilience restoration. We examine our approach on systems following mutualistic dynamics and diffusion models, finding that systems may exhibit single-cluster or multi-cluster phases depending on their sizes and noise strengths, and also construct a new scaling law governing the restoration time for arbitrary system size and noise strength in two-dimensional systems. This approach is not limited to ecosystems and has applications in various dynamical systems, from biology to infrastructural systems.
[ { "created": "Tue, 24 Nov 2020 00:28:37 GMT", "version": "v1" }, { "created": "Thu, 9 Sep 2021 18:54:02 GMT", "version": "v2" } ]
2021-12-23
[ [ "Ma", "Cheng", "" ], [ "Korniss", "Gyorgy", "" ], [ "Szymanski", "Boleslaw K.", "" ], [ "Gao", "Jianxi", "" ] ]
Many systems may switch to an undesired state due to internal failures or external perturbations, of which critical transitions toward degraded ecosystem states are a prominent example. Resilience restoration focuses on the ability of spatially-extended systems and the required time to recover to their desired states under stochastic environmental conditions. While mean-field approaches may guide recovery strategies by indicating the conditions needed to destabilize undesired states, these approaches do not accurately capture the transition process toward the desired state of spatially-extended systems in stochastic environments. The difficulty is rooted in the lack of mathematical tools to analyze systems with high dimensionality, nonlinearity, and stochastic effects. We bridge this gap by developing new mathematical tools that employ nucleation theory in spatially-embedded systems to advance resilience restoration. We examine our approach on systems following mutualistic dynamics and diffusion models, finding that systems may exhibit single-cluster or multi-cluster phases depending on their sizes and noise strengths, and also construct a new scaling law governing the restoration time for arbitrary system size and noise strength in two-dimensional systems. This approach is not limited to ecosystems and has applications in various dynamical systems, from biology to infrastructural systems.
2109.00437
Johannes Pausch
Johannes Pausch, Rosalba Garcia-Millan, Gunnar Pruessner
Noise can lead to exponential epidemic spreading despite $R_0$ below one
17 pages, 10 figures
null
null
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Branching processes are widely used to model evolutionary and population dynamics as well as the spread of infectious diseases. To characterize the dynamics of their growth or spread, the basic reproduction number $R_0$ has received considerable attention. In the context of infectious diseases, it is usually defined as the expected number of secondary cases produced by an infectious case in a completely susceptible population. Typically $R_0>1$ indicates that an outbreak is expected to continue and to grow exponentially, while $R_0<1$ usually indicates that an outbreak is expected to terminate after some time. In this work, we show that fluctuations of the dynamics in time can lead to a continuation of outbreaks even when the expected number of secondary cases from a single case is below $1$. Such fluctuations are usually neglected in modelling of infectious diseases by a set of ordinary differential equations, such as the classic SIR model. We showcase three examples: 1) extinction following an Ornstein-Uhlenbeck process, 2) extinction switching randomly between two values and 3) mixing of two populations with different $R_0$ values. We corroborate our analytical findings with computer simulations.
[ { "created": "Wed, 1 Sep 2021 15:34:46 GMT", "version": "v1" } ]
2021-09-02
[ [ "Pausch", "Johannes", "" ], [ "Garcia-Millan", "Rosalba", "" ], [ "Pruessner", "Gunnar", "" ] ]
Branching processes are widely used to model evolutionary and population dynamics as well as the spread of infectious diseases. To characterize the dynamics of their growth or spread, the basic reproduction number $R_0$ has received considerable attention. In the context of infectious diseases, it is usually defined as the expected number of secondary cases produced by an infectious case in a completely susceptible population. Typically $R_0>1$ indicates that an outbreak is expected to continue and to grow exponentially, while $R_0<1$ usually indicates that an outbreak is expected to terminate after some time. In this work, we show that fluctuations of the dynamics in time can lead to a continuation of outbreaks even when the expected number of secondary cases from a single case is below $1$. Such fluctuations are usually neglected in modelling of infectious diseases by a set of ordinary differential equations, such as the classic SIR model. We showcase three examples: 1) extinction following an Ornstein-Uhlenbeck process, 2) extinction switching randomly between two values and 3) mixing of two populations with different $R_0$ values. We corroborate our analytical findings with computer simulations.
2209.13560
Andre Ribeiro
Andre F. Ribeiro
Mutation Effect Generalizability under Selection-Drift
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While Neutral Theory famously describes the number of discrete genetic differences in populations, we consider the number of genetic backgrounds under which such differences are observed - setting limits to the generalizability of their effects. This allows us to determine which population structures and diversity rates have maximal effect generalization across (1) environmental and (2) genetic variation, and to demonstrate that they correspond asymptotically to those of populations under (1) natural selection and (2) drift. At the same time, these results suggest distinct limits to the predictability of fitness and evolution across evolutionary regimes. We employ both broad time-scale, large-scale genome sequencing datasets (including whole-genome autocorrelation calculations) and fine time-scale barcoding experiments.
[ { "created": "Tue, 27 Sep 2022 17:33:12 GMT", "version": "v1" }, { "created": "Wed, 28 Sep 2022 01:57:20 GMT", "version": "v2" }, { "created": "Tue, 18 Oct 2022 00:14:02 GMT", "version": "v3" }, { "created": "Sat, 31 Dec 2022 19:49:54 GMT", "version": "v4" }, { "created": "Sun, 19 Mar 2023 18:51:22 GMT", "version": "v5" } ]
2023-03-21
[ [ "Ribeiro", "Andre F.", "" ] ]
While Neutral Theory famously describes the number of discrete genetic differences in populations, we consider the number of genetic backgrounds under which such differences are observed - setting limits to the generalizability of their effects. This allows us to determine which population structures and diversity rates have maximal effect generalization across (1) environmental and (2) genetic variation, and to demonstrate that they correspond asymptotically to those of populations under (1) natural selection and (2) drift. At the same time, these results suggest distinct limits to the predictability of fitness and evolution across evolutionary regimes. We employ both broad time-scale, large-scale genome sequencing datasets (including whole-genome autocorrelation calculations) and fine time-scale barcoding experiments.
1606.08314
Alfred Bennun BA
Alfred Bennun
The integrated model of cAMP-dependent DNA expression reveals an inverse relationship between cancer and neurodegeneration
9 pages, 3 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The model for cAMP-dependent synaptic plasticity relates the characterization of a noradrenaline-stimulated adenylyl cyclase with DNA unzipping. Specific proteins condition cAMP insertion in specific sites of the DNA structure to direct cellular and synaptic differentiation in brain tissues, also providing coding. Metabolic-dependent ATP binding of Mg2+ could control feedback by inactivating AC-dependent formation of cAMP. The level of cAMP and cGMP, which could be assayed in red cells and cerebrospinal fluid, allows a clinical lab diagnostic improvement. It also provides a relationship of best fit to cAMP control by binding to DNA. The cAMP level allows the prediction of an inverse relationship between neurodegeneration and cancer. The latter could be characterized by uncontrolled proliferation, whereas metabolic dominance by stress over a long period of time may deplete cerebral cAMP.
[ { "created": "Fri, 3 Jun 2016 22:56:47 GMT", "version": "v1" } ]
2016-06-28
[ [ "Bennun", "Alfred", "" ] ]
The model for cAMP-dependent synaptic plasticity relates the characterization of a noradrenaline-stimulated adenylyl cyclase with DNA unzipping. Specific proteins condition cAMP insertion in specific sites of the DNA structure to direct cellular and synaptic differentiation in brain tissues, also providing coding. Metabolic-dependent ATP binding of Mg2+ could control feedback by inactivating AC-dependent formation of cAMP. The level of cAMP and cGMP, which could be assayed in red cells and cerebrospinal fluid, allows a clinical lab diagnostic improvement. It also provides a relationship of best fit to cAMP control by binding to DNA. The cAMP level allows the prediction of an inverse relationship between neurodegeneration and cancer. The latter could be characterized by uncontrolled proliferation, whereas metabolic dominance by stress over a long period of time may deplete cerebral cAMP.
1905.11036
Masaki Watabe
Masaki Watabe, Satya N. V. Arjunan, Wei Xiang Chew, Kazunari Kaizu and Koichi Takahashi
Cooperativity transitions driven by higher-order oligomer formations in ligand-induced receptor dimerization
6 pages, 3 figures
Phys. Rev. E 100, 062407 (2019)
10.1103/PhysRevE.100.062407
null
q-bio.MN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While cooperativity in ligand-induced receptor dimerization has been linked with receptor-receptor couplings via minimal representations of physical observables, effects arising from higher-order oligomer (e.g., trimer and tetramer) formations of unobserved receptors have received less attention. Here, we propose a dimerization model of ligand-induced receptors in multivalent form representing physical observables under basis vectors of various aggregated receptor-states. Our simulations of multivalent models not only reject Wofsy-Goldstein parameter conditions for cooperativity, but show higher-order oligomer formations can shift cooperativity from positive to negative.
[ { "created": "Mon, 27 May 2019 08:25:52 GMT", "version": "v1" }, { "created": "Thu, 28 Nov 2019 07:09:16 GMT", "version": "v2" }, { "created": "Fri, 13 Dec 2019 23:06:25 GMT", "version": "v3" } ]
2019-12-18
[ [ "Watabe", "Masaki", "" ], [ "Arjunan", "Satya N. V.", "" ], [ "Chew", "Wei Xiang", "" ], [ "Kaizu", "Kazunari", "" ], [ "Takahashi", "Koichi", "" ] ]
While cooperativity in ligand-induced receptor dimerization has been linked with receptor-receptor couplings via minimal representations of physical observables, effects arising from higher-order oligomer (e.g., trimer and tetramer) formations of unobserved receptors have received less attention. Here, we propose a dimerization model of ligand-induced receptors in multivalent form representing physical observables under basis vectors of various aggregated receptor-states. Our simulations of multivalent models not only reject Wofsy-Goldstein parameter conditions for cooperativity, but show higher-order oligomer formations can shift cooperativity from positive to negative.