Dataset schema (per-column type and size range):

  id              string  (9 to 13 chars)
  submitter       string  (4 to 48 chars)
  authors         string  (4 to 9.62k chars)
  title           string  (4 to 343 chars)
  comments        string  (2 to 480 chars)
  journal-ref     string  (9 to 309 chars)
  doi             string  (12 to 138 chars)
  report-no       string  (277 classes)
  categories      string  (8 to 87 chars)
  license         string  (9 classes)
  orig_abstract   string  (27 to 3.76k chars)
  versions        list    (1 to 15 items)
  update_date     string  (10 chars)
  authors_parsed  list    (1 to 147 items)
  abstract        string  (24 to 3.75k chars)
id: 1807.06203
submitter: Ding Zhou
authors: E. Kelly Buchanan, Ian Kinsella, Ding Zhou, Rong Zhu, Pengcheng Zhou, Felipe Gerhard, John Ferrante, Ying Ma, Sharon Kim, Mohammed Shaik, Yajie Liang, Rongwen Lu, Jacob Reimer, Paul Fahey, Taliah Muhammad, Graham Dempsey, Elizabeth Hillman, Na Ji, Andreas Tolias, Liam Paninski
title: Penalized matrix decomposition for denoising, compression, and improved demixing of functional imaging data
comments: 36 pages, 18 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC stat.ML
license: http://creativecommons.org/licenses/by-sa/4.0/
abstract: Calcium imaging has revolutionized systems neuroscience, providing the ability to image large neural populations with single-cell resolution. The resulting datasets are quite large, which has presented a barrier to routine open sharing of this data, slowing progress in reproducible research. State of the art methods for analyzing this data are based on non-negative matrix factorization (NMF); these approaches solve a non-convex optimization problem, and are effective when good initializations are available, but can break down in low-SNR settings where common initialization approaches fail. Here we introduce an approach to compressing and denoising functional imaging data. The method is based on a spatially-localized penalized matrix decomposition (PMD) of the data to separate (low-dimensional) signal from (temporally-uncorrelated) noise. This approach can be applied in parallel on local spatial patches and is therefore highly scalable, does not impose non-negativity constraints or require stringent identifiability assumptions (leading to significantly more robust results compared to NMF), and estimates all parameters directly from the data, so no hand-tuning is required. We have applied the method to a wide range of functional imaging data (including one-photon, two-photon, three-photon, widefield, somatic, axonal, dendritic, calcium, and voltage imaging datasets): in all cases, we observe ~2-4x increases in SNR and compression rates of 20-300x with minimal visible loss of signal, with no adjustment of hyperparameters; this in turn facilitates the process of demixing the observed activity into contributions from individual neurons. We focus on two challenging applications: dendritic calcium imaging data and voltage imaging data in the context of optogenetic stimulation. In both cases, we show that our new approach leads to faster and much more robust extraction of activity from the data.
versions: [ { "created": "Tue, 17 Jul 2018 04:00:07 GMT", "version": "v1" } ]
update_date: 2018-07-18
authors_parsed: [ [ "Buchanan", "E. Kelly", "" ], [ "Kinsella", "Ian", "" ], [ "Zhou", "Ding", "" ], [ "Zhu", "Rong", "" ], [ "Zhou", "Pengcheng", "" ], [ "Gerhard", "Felipe", "" ], [ "Ferrante", "John", "" ], [ "Ma", "Ying", "" ], [ "Kim", "Sharon", "" ], [ "Shaik", "Mohammed", "" ], [ "Liang", "Yajie", "" ], [ "Lu", "Rongwen", "" ], [ "Reimer", "Jacob", "" ], [ "Fahey", "Paul", "" ], [ "Muhammad", "Taliah", "" ], [ "Dempsey", "Graham", "" ], [ "Hillman", "Elizabeth", "" ], [ "Ji", "Na", "" ], [ "Tolias", "Andreas", "" ], [ "Paninski", "Liam", "" ] ]
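The low-rank signal/noise separation described in the abstract above can be illustrated with a minimal sketch: a plain truncated SVD on one simulated (pixels x time) patch. This stands in for the penalized decomposition the paper actually develops (the data-driven rank choice, penalties, and patch stitching are all omitted), and every name and parameter here is illustrative.

```python
import numpy as np

def denoise_patch(patch, rank):
    """Truncated SVD of a (pixels x time) patch: keep only `rank` components."""
    U, s, Vt = np.linalg.svd(patch, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
n_pix, T = 64, 500
# Rank-1 "signal": one spatial footprint times one temporal trace.
signal = np.outer(rng.normal(size=n_pix), np.sin(np.linspace(0, 20, T)))
noisy = signal + 0.5 * rng.normal(size=(n_pix, T))  # add temporally white noise
denoised = denoise_patch(noisy, rank=1)

err_noisy = np.linalg.norm(noisy - signal)
err_denoised = np.linalg.norm(denoised - signal)
```

Storing only the truncated factors (`U`, `s`, `Vt`) instead of the full patch is also where the compression in the abstract comes from: `rank * (n_pix + T)` numbers instead of `n_pix * T`.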
id: 2103.08143
submitter: Olga Lukyanova
authors: Oleg Nikitin and Olga Lukyanova and Alex Kunin
title: Constrained plasticity reserve as a natural way to control frequency and weights in spiking neural networks
comments: 24 pages, 10 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC cs.AI cs.LG cs.NE
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract: Biological neurons are adaptive in nature and perform complex computations involving the filtering of redundant information. However, most common neural cell models, including biologically plausible ones such as the Hodgkin-Huxley or Izhikevich models, do not possess predictive dynamics at the single-cell level. Moreover, modern rules of synaptic plasticity or interconnection-weight adaptation also do not provide grounding for the ability of neurons to adapt to ever-changing input signal intensity. While natural synaptic growth is precisely controlled and restricted by protein supply and recycling, weight-correction rules such as the widely used STDP are effectively unlimited in change rate and scale. The present article introduces a new mechanism coupling neuron firing-rate homeostasis and weight change through STDP, bounded by an abstract protein reserve that is controlled by an intracellular optimization algorithm. We show how these cellular dynamics help neurons filter out intense noise signals and keep a stable firing rate. We also show that such filtering does not affect the ability of neurons to recognize correlated inputs in unsupervised mode. Such an approach might be used in the machine learning domain to improve the robustness of AI systems.
versions: [ { "created": "Mon, 15 Mar 2021 05:22:14 GMT", "version": "v1" }, { "created": "Sun, 20 Jun 2021 10:56:05 GMT", "version": "v2" } ]
update_date: 2021-06-22
authors_parsed: [ [ "Nikitin", "Oleg", "" ], [ "Lukyanova", "Olga", "" ], [ "Kunin", "Alex", "" ] ]
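The bounded-plasticity idea above, weight changes drawn from a finite resource, can be sketched in a few lines. This is a toy illustration, not the paper's intracellular optimization algorithm: the function name, the scalar reserve, and all constants are hypothetical.

```python
def bounded_update(w, dw, reserve):
    """Apply an STDP-style weight change, but only as far as the
    remaining plasticity reserve allows; spend reserve equal to |change|."""
    cost = abs(dw)
    if cost > reserve:                       # not enough reserve for the full change
        dw = reserve if dw > 0 else -reserve  # clip the change to what is left
        cost = reserve
    return w + dw, reserve - cost

w, reserve = 0.5, 0.1
w, reserve = bounded_update(w, +0.08, reserve)  # applied in full, reserve drops to 0.02
w, reserve = bounded_update(w, +0.08, reserve)  # clipped to the remaining 0.02
```

With the reserve exhausted, further potentiation is blocked until the reserve is replenished, which is the sense in which the constraint caps both the rate and the scale of weight change.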
id: 1504.02927
submitter: Michael Paulin
authors: Michael G. Paulin
title: The Origin of Inference: Ediacaran Ecology and the Evolution of Bayesian Brains
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The evolution of spiking neurons and nervous systems in the late Ediacaran period simultaneously with the evolution of carnivores around 550 million years ago can be explained by the need for accurately timed decisions under an imminent threat of being eaten. A simple model shows that threshold triggering devices, spiking neurons, are utility-maximizing decision-makers for the timing of escape reflexes given the sensory cues available to Ediacaran animals at the onset of carnivory. Decisions are suboptimal for very weak stimuli, providing selection pressure for secondary processing of primary spike train data. A simple network can make approximately Bayes optimal decisions given stochastic spike trains. Decisions that are arbitrarily close to Bayes optimal can be obtained by enlarging this network. A subnetwork that computes the Bayesian posterior density of the critical state variable - distance between predator and prey - emerges as a core component of the decision-making mechanism. This is a neural analog of a Bayesian particle filter with cerebellar-like architecture. The model explains fundamental properties of neurons and nervous systems in modern animals and makes testable predictions.
versions: [ { "created": "Sun, 12 Apr 2015 02:23:17 GMT", "version": "v1" } ]
update_date: 2015-04-14
authors_parsed: [ [ "Paulin", "Michael G.", "" ] ]
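The Bayesian particle filter mentioned in the abstract above can be sketched as a standard bootstrap filter tracking one state variable (think predator distance) from noisy observations. This is the generic algorithm only, not the paper's cerebellar-like network construction, and every parameter value here is illustrative.

```python
import math
import random

def particle_filter(obs, n=500, seed=0):
    """Bootstrap particle filter for a 1-D state with random-walk dynamics
    and Gaussian observation noise. Returns the posterior-mean estimate
    after each observation."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 2.0) for _ in range(n)]  # diffuse prior
    estimates = []
    for y in obs:
        parts = [x + rng.gauss(0.0, 0.3) for x in parts]            # predict step
        w = [math.exp(-0.5 * ((y - x) / 0.5) ** 2) for x in parts]  # weight by likelihood
        parts = rng.choices(parts, weights=w, k=n)                  # resample
        estimates.append(sum(parts) / n)                            # posterior mean
    return estimates

est = particle_filter([1.0] * 20)  # repeated observations of a state near 1.0
```

Enlarging `n` tightens the approximation to the exact Bayesian posterior, mirroring the abstract's point that decisions approach Bayes-optimality as the network grows.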
id: 1511.06431
submitter: Nathan Baker
authors: Chase P. Dowling, Elizabeth Jurrus, Sylvia Johnson, Nathan A. Baker
title: An ISA-Tab specification for protein titration data exchange
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.BM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Data curation presents a challenge to all scientific disciplines to ensure public availability and reproducibility of experimental data. Standards for data preservation and exchange are central to addressing this challenge: the Investigation-Study-Assay Tabular (ISA-Tab) project has developed a widely used template for such standards in biological research. This paper describes the application of ISA-Tab to protein titration data. Despite the importance of titration experiments for understanding protein structure, stability, and function and for testing computational approaches to protein electrostatics, no such mechanism currently exists for sharing and preserving biomolecular titration data. We have adapted the ISA-Tab template to provide a structured means of supporting experimental structural chemistry data with a particular emphasis on the calculation and measurement of pKa values. This activity has been performed as part of the broader pKa Cooperative effort, leveraging data that has been collected and curated by the Cooperative members. In this article, we present the details of this specification and its application to a broad range of pKa and electrostatics data obtained for multiple protein systems. The resulting curated data is publicly available at http://pkacoop.org.
versions: [ { "created": "Thu, 19 Nov 2015 22:45:50 GMT", "version": "v1" }, { "created": "Mon, 15 Feb 2016 00:46:47 GMT", "version": "v2" } ]
update_date: 2016-02-16
authors_parsed: [ [ "Dowling", "Chase P.", "" ], [ "Jurrus", "Elizabeth", "" ], [ "Johnson", "Sylvia", "" ], [ "Baker", "Nathan A.", "" ] ]
id: q-bio/0412041
submitter: Jacek Miekisz
authors: Dominik Kaminski, Jacek Miekisz, Marcin Zaborowski
title: Stochastic stability in three-player games
comments: 18 pages
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE
license: null
abstract: Animal behavior and evolution can often be described by game-theoretic models. Although in many situations the number of players is very large, their strategic interactions are usually decomposed into a sum of two-player games. Only recently have evolutionarily stable strategies been defined for multi-player games and their properties analyzed (Broom et al., 1997). Here we study the long-run behavior of stochastic dynamics of populations of randomly matched individuals playing symmetric three-player games. We analyze stochastic stability of equilibria in games with multiple evolutionarily stable strategies. We also show that in some games, a population may not evolve in the long run to an evolutionarily stable equilibrium.
versions: [ { "created": "Wed, 22 Dec 2004 13:18:35 GMT", "version": "v1" } ]
update_date: 2007-05-23
authors_parsed: [ [ "Kaminski", "Dominik", "" ], [ "Miekisz", "Jacek", "" ], [ "Zaborowski", "Marcin", "" ] ]
id: 1406.6826
submitter: Sarada Seetharaman
authors: Sarada Seetharaman and Kavita Jain
title: Length of adaptive walk on uncorrelated and correlated fitness landscapes
comments: To appear in Phys. Rev. E
journal-ref: Phys. Rev. E 90, 032703 (2014)
doi: 10.1103/PhysRevE.90.032703
report-no: null
categories: q-bio.PE cond-mat.stat-mech
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We consider the adaptation dynamics of an asexual population that walks uphill on a rugged fitness landscape endowed with a large number of local fitness peaks. We work in a parameter regime where only those mutants that are a single mutation away are accessible, as a result of which the population eventually gets trapped at a local fitness maximum and the adaptive walk terminates. We study how the number of adaptive steps taken by the population before reaching a local fitness peak depends on the initial fitness of the population, the extreme value distribution of the beneficial mutations and correlations amongst the fitnesses. Assuming that the relative fitness difference between successive steps is small, we analytically calculate the average walk length for both uncorrelated and correlated fitnesses in all extreme value domains for a given initial fitness. We present numerical results for the model where the fitness differences can be large, and find that the walk length behavior differs from that in the former model in the Fréchet domain of extreme value theory. We also discuss the relevance of our results to microbial experiments.
versions: [ { "created": "Thu, 26 Jun 2014 09:51:35 GMT", "version": "v1" }, { "created": "Tue, 19 Aug 2014 06:21:34 GMT", "version": "v2" } ]
update_date: 2016-01-13
authors_parsed: [ [ "Seetharaman", "Sarada", "" ], [ "Jain", "Kavita", "" ] ]
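The adaptive walk studied above is easy to simulate directly. The sketch below uses a maximally uncorrelated (House-of-Cards-style) landscape in which each single-mutant neighbor's fitness is drawn i.i.d. uniform, and the walk moves to a uniformly chosen fitter neighbor until none exists; this is a simplification of the paper's setting, and the population size, neighborhood size, and starting fitnesses are illustrative.

```python
import random

def walk_length(L, f0, rng):
    """Number of uphill steps before a random adaptive walk, started at
    fitness f0 with L single-mutant neighbors per step, gets trapped."""
    f, steps = f0, 0
    while True:
        neighbors = [rng.random() for _ in range(L)]   # i.i.d. neighbor fitnesses
        better = [x for x in neighbors if x > f]
        if not better:                                  # local peak reached
            return steps
        f = rng.choice(better)                          # move to a random fitter neighbor
        steps += 1

rng = random.Random(1)
mean_low = sum(walk_length(20, 0.2, rng) for _ in range(2000)) / 2000
mean_high = sum(walk_length(20, 0.9, rng) for _ in range(2000)) / 2000
```

Averaging over many walks reproduces the qualitative dependence on the initial fitness discussed in the abstract: walks launched from low fitness take more steps on average than walks launched near the top of the fitness distribution.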
id: 1307.8026
submitter: Jared Simpson
authors: Jared T. Simpson
title: Exploring Genome Characteristics and Sequence Quality Without a Reference
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.GN
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The de novo assembly of large, complex genomes is a significant challenge with currently available DNA sequencing technology. While many de novo assembly software packages are available, comparatively little attention has been paid to assisting the user with the assembly. This paper addresses the practical aspects of de novo assembly by introducing new ways to perform quality assessment on a collection of DNA sequence reads. The software implementation calculates per-base error rates, paired-end fragment size histograms and coverage metrics in the absence of a reference genome. Additionally, the software will estimate characteristics of the sequenced genome, such as repeat content and heterozygosity, that are key determinants of assembly difficulty. The software described is freely available and open source under the GNU Public License.
versions: [ { "created": "Tue, 30 Jul 2013 15:55:09 GMT", "version": "v1" } ]
update_date: 2013-07-31
authors_parsed: [ [ "Simpson", "Jared T.", "" ] ]
id: 2102.05937
submitter: Mihalis Kavousanakis
authors: I. Lampropoulos and M. Kavousanakis
title: Assessment of intra-tumor heterogeneity in a two-dimensional vascular tumor growth model
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE
license: http://creativecommons.org/publicdomain/zero/1.0/
abstract: We present a two-dimensional continuum model of tumor growth, which treats the tissue as a composition of six distinct fluid phases; their dynamics are governed by the equations of mass and momentum conservation. Our model divides the cancer-cell phase into two sub-phases depending on maturity state. The same approach is also applied to the vasculature phase, which is divided into young sprouts (products of angiogenesis) and fully formed, mature vessels. The remaining two phases correspond to healthy cells and extracellular material (ECM). Furthermore, the model includes nutrient chemical species, which are transported within the tissue by diffusion or supplied by the vasculature (blood vessels). The model is solved numerically with the Finite Element Method, and computations are performed with the commercial software Comsol Multiphysics. The numerical simulations predict that mature cancer cells are well separated from young cancer cells, which form a protective shield for the growing tumor. We study the effect of different mitosis and death rates for mature and young cancer cells on the tumor growth rate, and predict accelerated growth when the mitosis rate of young cancer cells is higher than that of mature cancer cells.
versions: [ { "created": "Thu, 11 Feb 2021 10:54:40 GMT", "version": "v1" } ]
update_date: 2021-02-12
authors_parsed: [ [ "Lampropoulos", "I.", "" ], [ "Kavousanakis", "M.", "" ] ]
id: 1002.2595
submitter: Wiet de Ronde
authors: Wiet de Ronde, Filipe Tostevin and Pieter Rein ten Wolde
title: The effect of feedback on the fidelity of information transmission of time-varying signals
comments: Article 17 pages, S1: 12 pages
journal-ref: Phys. Rev. E 82, 031914 (2010)
doi: 10.1103/PhysRevE.82.031914
report-no: null
categories: q-bio.MN q-bio.QM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Living cells are continually exposed to environmental signals that vary in time. These signals are detected and processed by biochemical networks, which are often highly stochastic. To understand how cells cope with a fluctuating environment, we therefore have to understand how reliably biochemical networks can transmit time-varying signals. To this end, we must understand both the noise characteristics and the amplification properties of networks. In this manuscript, we use information theory to study how reliably signalling cascades employing autoregulation and feedback can transmit time-varying signals. We calculate the frequency dependence of the gain-to-noise ratio, which reflects how reliably a network transmits signals at different frequencies. We find that the gain-to-noise ratio may differ qualitatively from the power spectrum of the output, showing that the latter does not directly reflect signaling performance. Moreover, we find that auto-activation and auto-repression increase and decrease the gain-to-noise ratio at all frequencies, respectively. Positive feedback specifically enhances information transmission at low frequencies, while negative feedback increases signal fidelity at high frequencies. Our analysis not only elucidates the role of autoregulation and feedback in naturally-occurring biological networks, but also reveals design principles that can be used for the reliable transmission of time-varying signals in synthetic gene circuits.
versions: [ { "created": "Fri, 12 Feb 2010 16:51:56 GMT", "version": "v1" } ]
update_date: 2012-06-01
authors_parsed: [ [ "de Ronde", "Wiet", "" ], [ "Tostevin", "Filipe", "" ], [ "Wolde", "Pieter Rein ten", "" ] ]
id: 2209.13529
submitter: Islem Rekik
authors: Ece Cinar, Sinem Elif Haseki, Alaa Bessadok and Islem Rekik
title: Deep Cross-Modality and Resolution Graph Integration for Universal Brain Connectivity Mapping and Augmentation
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC cs.LG
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract: The connectional brain template (CBT) captures the shared traits across all individuals of a given population of brain connectomes, thereby acting as a fingerprint. Estimating a CBT from a population where brain graphs are derived from diverse neuroimaging modalities (e.g., functional and structural) and at different resolutions (i.e., number of nodes) remains a formidable challenge to solve. Such a network integration task allows for learning a rich and universal representation of brain connectivity across varying modalities and resolutions. The resulting CBT can then be used to generate entirely new multimodal brain connectomes, which can boost the learning of downstream tasks such as brain state classification. Here, we propose the Multimodal Multiresolution Brain Graph Integrator Network (i.e., M2GraphIntegrator), the first multimodal multiresolution graph integration framework that maps a given connectomic population into a well-centered CBT. M2GraphIntegrator first unifies brain graph resolutions by utilizing resolution-specific graph autoencoders. Next, it integrates the resulting fixed-size brain graphs into a universal CBT lying at the center of its population. To preserve the population diversity, we further design a novel clustering-based training sample selection strategy which leverages the most heterogeneous training samples. To ensure the biological soundness of the learned CBT, we propose a topological loss that minimizes the topological gap between the ground-truth brain graphs and the learned CBT. Our experiments show that from a single CBT, one can generate realistic connectomic datasets including brain graphs of varying resolutions and modalities. We further demonstrate that our framework significantly outperforms benchmarks in reconstruction quality, augmentation, centeredness, and topological soundness.
versions: [ { "created": "Tue, 13 Sep 2022 14:04:12 GMT", "version": "v1" } ]
update_date: 2022-09-28
authors_parsed: [ [ "Cinar", "Ece", "" ], [ "Haseki", "Sinem Elif", "" ], [ "Bessadok", "Alaa", "" ], [ "Rekik", "Islem", "" ] ]
id: 2005.06286
submitter: Matjaz Perc
authors: Subhas Khajanchi, Kankan Sarkar, Jayanta Mondal
title: Dynamics of the COVID-19 pandemic in India
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE physics.soc-ph
license: http://creativecommons.org/licenses/by/4.0/
abstract: Understanding the dynamics of the COVID-19 pandemic is crucial for improved control and social distancing strategies. To that effect, we have employed the susceptible-exposed-infectious-recovered model, refined by contact tracing and hospitalization data from Indian provinces Kerala, Delhi, Maharashtra, and West Bengal, as well as from overall India. We have performed a sensitivity analysis to identify the most crucial input parameters, and we have calibrated the model to describe the data as best as possible. Short-term predictions reveal an increasing and worrying trend of COVID-19 cases for all four provinces and India as a whole, while long-term predictions also reveal the possibility of oscillatory dynamics. Our research thus leaves the option open that COVID-19 might become a seasonal occurrence. We also simulate and discuss the impact of media on the dynamics of the COVID-19 pandemic.
versions: [ { "created": "Wed, 13 May 2020 12:35:58 GMT", "version": "v1" }, { "created": "Tue, 12 Jan 2021 17:52:54 GMT", "version": "v2" } ]
update_date: 2021-01-13
authors_parsed: [ [ "Khajanchi", "Subhas", "" ], [ "Sarkar", "Kankan", "" ], [ "Mondal", "Jayanta", "" ] ]
Understanding the dynamics of the COVID-19 pandemic is crucial for improved control and social distancing strategies. To that effect, we have employed the susceptible-exposed-infectious-recovered model, refined by contact tracing and hospitalization data from Indian provinces Kerala, Delhi, Maharashtra, and West Bengal, as well as from overall India. We have performed a sensitivity analysis to identify the most crucial input parameters, and we have calibrated the model to describe the data as best as possible. Short-term predictions reveal an increasing and worrying trend of COVID-19 cases for all four provinces and India as a whole, while long-term predictions also reveal the possibility of oscillatory dynamics. Our research thus leaves the option open that COVID-19 might become a seasonal occurrence. We also simulate and discuss the impact of media on the dynamics of the COVID-19 pandemic.
2408.06750
Ari Rappoport
Ari Rappoport
A CRH Theory of Autism Spectrum Disorder
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper presents a complete theory of autism spectrum disorder (ASD), explaining its etiology, symptoms, and pathology. The core cause of ASD is excessive stress-induced postnatal release of corticotropin-releasing hormone (CRH). CRH competes with urocortins for binding to the CRH2 receptor, impairing their essential function in the utilization of glucose for growth. This results in impaired development of all brain areas depending on CRH2, including areas that are central in social development and eye gaze learning, and low-level sensory areas. Excessive CRH also induces excessive release of adrenal androgens (mainly DHEA), which impairs the long-term plasticity function of gonadal steroids. I show that these two effects can explain all of the known symptoms and properties of ASD. The theory is supported by strong diverse evidence, and points to very early detection biomarkers and preventive pharmaceutical treatments, one of which seems to be very promising.
[ { "created": "Tue, 13 Aug 2024 09:17:22 GMT", "version": "v1" } ]
2024-08-14
[ [ "Rappoport", "Ari", "" ] ]
This paper presents a complete theory of autism spectrum disorder (ASD), explaining its etiology, symptoms, and pathology. The core cause of ASD is excessive stress-induced postnatal release of corticotropin-releasing hormone (CRH). CRH competes with urocortins for binding to the CRH2 receptor, impairing their essential function in the utilization of glucose for growth. This results in impaired development of all brain areas depending on CRH2, including areas that are central in social development and eye gaze learning, and low-level sensory areas. Excessive CRH also induces excessive release of adrenal androgens (mainly DHEA), which impairs the long-term plasticity function of gonadal steroids. I show that these two effects can explain all of the known symptoms and properties of ASD. The theory is supported by strong diverse evidence, and points to very early detection biomarkers and preventive pharmaceutical treatments, one of which seems to be very promising.
1607.04886
Marc Howard
Marc W. Howard and Karthik H. Shankar
Neural scaling laws for an uncertain world
null
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autonomous neural systems must efficiently process information in a wide range of novel environments, which may have very different statistical properties. We consider the problem of how to optimally distribute receptors along a one-dimensional continuum consistent with the following design principles. First, neural representations of the world should obey a neural uncertainty principle---making as few assumptions as possible about the statistical structure of the world. Second, neural representations should convey, as much as possible, equivalent information about environments with different statistics. The results of these arguments resemble the structure of the visual system and provide a natural explanation of the behavioral Weber-Fechner law, a foundational result in psychology. Because the derivation is extremely general, this suggests that similar scaling relationships should be observed not only in sensory continua, but also in neural representations of ``cognitive'' one-dimensional quantities such as time or numerosity.
[ { "created": "Sun, 17 Jul 2016 15:59:50 GMT", "version": "v1" }, { "created": "Mon, 3 Apr 2017 04:13:42 GMT", "version": "v2" } ]
2017-04-04
[ [ "Howard", "Marc W.", "" ], [ "Shankar", "Karthik H.", "" ] ]
Autonomous neural systems must efficiently process information in a wide range of novel environments, which may have very different statistical properties. We consider the problem of how to optimally distribute receptors along a one-dimensional continuum consistent with the following design principles. First, neural representations of the world should obey a neural uncertainty principle---making as few assumptions as possible about the statistical structure of the world. Second, neural representations should convey, as much as possible, equivalent information about environments with different statistics. The results of these arguments resemble the structure of the visual system and provide a natural explanation of the behavioral Weber-Fechner law, a foundational result in psychology. Because the derivation is extremely general, this suggests that similar scaling relationships should be observed not only in sensory continua, but also in neural representations of ``cognitive'' one-dimensional quantities such as time or numerosity.
1010.3539
Luca Peliti
Ginestra Bianconi, Davide Fichera, Silvio Franz and Luca Peliti
Modeling microevolution in a changing environment: The evolving quasispecies and the Diluted Champion Process
15 pages, 12 figures. Figures redrawn, some additional clarifications in the text. To appear in Journal of Statistical Mechanics: Theory and Experiment
J. Stat. Mech. (2011) P08022
10.1088/1742-5468/2011/08/P08022
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several pathogens use evolvability as a survival strategy against acquired immunity of the host. Despite their high variability in time, some of them exhibit quite low variability within the population at any given time, a somewhat paradoxical behavior often called the evolving quasispecies. In this paper we introduce a simplified model of an evolving viral population in which the effects of the acquired immunity of the host are represented by the decrease of the fitness of the corresponding viral strains, depending on the frequency of the strain in the viral population. The model exhibits evolving quasispecies behavior in a certain range of its parameters, and suggests how punctuated evolution can be induced by a simple feedback mechanism.
[ { "created": "Mon, 18 Oct 2010 09:59:01 GMT", "version": "v1" }, { "created": "Mon, 9 May 2011 08:36:06 GMT", "version": "v2" }, { "created": "Wed, 27 Jul 2011 08:55:37 GMT", "version": "v3" } ]
2011-08-31
[ [ "Bianconi", "Ginestra", "" ], [ "Fichera", "Davide", "" ], [ "Franz", "Silvio", "" ], [ "Peliti", "Luca", "" ] ]
Several pathogens use evolvability as a survival strategy against acquired immunity of the host. Despite their high variability in time, some of them exhibit quite low variability within the population at any given time, a somewhat paradoxical behavior often called the evolving quasispecies. In this paper we introduce a simplified model of an evolving viral population in which the effects of the acquired immunity of the host are represented by the decrease of the fitness of the corresponding viral strains, depending on the frequency of the strain in the viral population. The model exhibits evolving quasispecies behavior in a certain range of its parameters, and suggests how punctuated evolution can be induced by a simple feedback mechanism.
1809.04069
Ruobing Wang
Zhiyuan Ma, Ping Wang, Zehui Gao, Ruobing Wang, Koroush Khalighi
Estimate the Warfarin Dose by Ensemble of Machine Learning Algorithms
other authors do not agree to submit to arxiv
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Warfarin dosing remains challenging due to a narrow therapeutic index and high inter-individual variability. Incorrect warfarin dosing is associated with devastating adverse events. Remarkable efforts have been made to develop machine-learning-based warfarin dosing algorithms incorporating clinical factors and genetic variants such as polymorphisms in CYP2C9 and VKORC1. The most widely validated pharmacogenetic algorithm is the IWPC algorithm based on multivariate linear regression (MLR). However, with only a single algorithm, the prediction performance may reach an upper limit even with optimal parameters. Here, we present novel algorithms using stacked generalization frameworks to estimate the warfarin dose, within which different types of machine learning algorithms function together through a meta-machine-learning model to maximize the prediction accuracy. Compared to the IWPC-derived MLR algorithm, Stack 1 and 2 based on stacked generalization frameworks performed significantly better overall. Subgroup analysis revealed that the mean percentage of patients whose predicted dose of warfarin was within 20% of the actual stable therapeutic dose for Stack 1 was improved by 12.7% (from 42.47% to 47.86%) in Asians and by 13.5% (from 22.08% to 25.05%) in the low-dose group compared to that for MLR, respectively. These data suggest that our algorithms would especially benefit patients requiring a low warfarin maintenance dose, as subtle changes in warfarin dose could lead to adverse clinical events (thrombosis or bleeding) in such patients. Our study offers novel pharmacogenetic algorithms for clinical trials and practice.
[ { "created": "Mon, 10 Sep 2018 22:18:37 GMT", "version": "v1" }, { "created": "Thu, 13 Sep 2018 14:36:49 GMT", "version": "v2" } ]
2018-09-14
[ [ "Ma", "Zhiyuan", "" ], [ "Wang", "Ping", "" ], [ "Gao", "Zehui", "" ], [ "Wang", "Ruobing", "" ], [ "Khalighi", "Koroush", "" ] ]
Warfarin dosing remains challenging due to a narrow therapeutic index and high inter-individual variability. Incorrect warfarin dosing is associated with devastating adverse events. Remarkable efforts have been made to develop machine-learning-based warfarin dosing algorithms incorporating clinical factors and genetic variants such as polymorphisms in CYP2C9 and VKORC1. The most widely validated pharmacogenetic algorithm is the IWPC algorithm based on multivariate linear regression (MLR). However, with only a single algorithm, the prediction performance may reach an upper limit even with optimal parameters. Here, we present novel algorithms using stacked generalization frameworks to estimate the warfarin dose, within which different types of machine learning algorithms function together through a meta-machine-learning model to maximize the prediction accuracy. Compared to the IWPC-derived MLR algorithm, Stack 1 and 2 based on stacked generalization frameworks performed significantly better overall. Subgroup analysis revealed that the mean percentage of patients whose predicted dose of warfarin was within 20% of the actual stable therapeutic dose for Stack 1 was improved by 12.7% (from 42.47% to 47.86%) in Asians and by 13.5% (from 22.08% to 25.05%) in the low-dose group compared to that for MLR, respectively. These data suggest that our algorithms would especially benefit patients requiring a low warfarin maintenance dose, as subtle changes in warfarin dose could lead to adverse clinical events (thrombosis or bleeding) in such patients. Our study offers novel pharmacogenetic algorithms for clinical trials and practice.
2101.11724
Tatjana Skrbic
Tatjana \v{S}krbi\'c, Amos Maritan, Achille Giacometti and Jayanth R. Banavar
Local sequence-structure relationships in proteins
32 pages, 13 figures, 3 tables, 8 supplementary information pages; Accepted for publication in Protein Science
null
null
null
q-bio.BM cond-mat.soft cond-mat.stat-mech physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
We seek to understand the interplay between amino acid sequence and local structure in proteins. Are some amino acids unique in their ability to fit harmoniously into certain local structures? What is the role of sequence in sculpting the putative native state folds from myriad possible conformations? In order to address these questions, we represent the local structure of each C-alpha atom of a protein by just two angles, theta and mu, and we analyze a set of more than 4000 protein structures from the PDB. We use a hierarchical clustering scheme to divide the 20 amino acids into six distinct groups based on their similarity to each other in fitting local structural space. We present the results of a detailed analysis of patterns of amino acid specificity in adopting local structural conformations and show that the sequence-structure correlation is not very strong compared to a random assignment of sequence to structure. Yet, our analysis may be useful to determine an effective scoring rubric for quantifying the match of an amino acid to its putative local structure.
[ { "created": "Wed, 27 Jan 2021 22:26:16 GMT", "version": "v1" } ]
2021-01-29
[ [ "Škrbić", "Tatjana", "" ], [ "Maritan", "Amos", "" ], [ "Giacometti", "Achille", "" ], [ "Banavar", "Jayanth R.", "" ] ]
We seek to understand the interplay between amino acid sequence and local structure in proteins. Are some amino acids unique in their ability to fit harmoniously into certain local structures? What is the role of sequence in sculpting the putative native state folds from myriad possible conformations? In order to address these questions, we represent the local structure of each C-alpha atom of a protein by just two angles, theta and mu, and we analyze a set of more than 4000 protein structures from the PDB. We use a hierarchical clustering scheme to divide the 20 amino acids into six distinct groups based on their similarity to each other in fitting local structural space. We present the results of a detailed analysis of patterns of amino acid specificity in adopting local structural conformations and show that the sequence-structure correlation is not very strong compared to a random assignment of sequence to structure. Yet, our analysis may be useful to determine an effective scoring rubric for quantifying the match of an amino acid to its putative local structure.
1511.04340
Jean-Marc Luck
J.M. Luck and A. Mehta
Universality in survivor distributions: Characterising the winners of competitive dynamics
12 pages, 6 figures, 2 tables
Phys. Rev. E 92, 052810 (2015)
10.1103/PhysRevE.92.052810
null
q-bio.QM cond-mat.stat-mech nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the survivor distributions of a spatially extended model of competitive dynamics in different geometries. The model consists of a deterministic dynamical system of individual agents at specified nodes, which might or might not survive the predatory dynamics: all stochasticity is brought in by the initial state. Every such initial state leads to a unique and extended pattern of survivors and non-survivors, which is known as an attractor of the dynamics. We show that the number of such attractors grows exponentially with system size, so that their exact characterisation is limited to only very small systems. Given this, we construct an analytical approach based on inhomogeneous mean-field theory to calculate survival probabilities for arbitrary networks. This powerful (albeit approximate) approach shows how universality arises in survivor distributions via a key concept -- the {\it dynamical fugacity}. Remarkably, in the large-mass limit, the survival probability of a node becomes independent of network geometry, and assumes a simple form which depends only on its mass and degree.
[ { "created": "Fri, 13 Nov 2015 20:21:58 GMT", "version": "v1" } ]
2015-11-24
[ [ "Luck", "J. M.", "" ], [ "Mehta", "A.", "" ] ]
We investigate the survivor distributions of a spatially extended model of competitive dynamics in different geometries. The model consists of a deterministic dynamical system of individual agents at specified nodes, which might or might not survive the predatory dynamics: all stochasticity is brought in by the initial state. Every such initial state leads to a unique and extended pattern of survivors and non-survivors, which is known as an attractor of the dynamics. We show that the number of such attractors grows exponentially with system size, so that their exact characterisation is limited to only very small systems. Given this, we construct an analytical approach based on inhomogeneous mean-field theory to calculate survival probabilities for arbitrary networks. This powerful (albeit approximate) approach shows how universality arises in survivor distributions via a key concept -- the {\it dynamical fugacity}. Remarkably, in the large-mass limit, the survival probability of a node becomes independent of network geometry, and assumes a simple form which depends only on its mass and degree.
2009.07103
Aditya Singh
Aditya Singh, Akram Mohammed, Lokesh Chinthala, Rishikesan Kamaleswaran
Machine learning predicts early onset of fever from continuous physiological data of critically ill patients
null
null
null
null
q-bio.QM cs.LG stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fever can provide valuable information for the diagnosis and prognosis of various diseases such as pneumonia, dengue, and sepsis; therefore, predicting fever early can improve the effectiveness of treatment options and expedite the treatment process. This study aims to develop novel algorithms that can accurately predict fever onset in critically ill patients by applying machine learning techniques to continuous physiological data. We analyzed continuous physiological data collected every 5 minutes from a cohort of over 200,000 critically ill patients admitted to an Intensive Care Unit (ICU) over a 2-year period. Each episode of fever from the same patient was considered an independent event, with separations of at least 24 hours. We extracted descriptive statistical features from six physiological data streams, including heart rate, respiration, systolic and diastolic blood pressure, mean arterial pressure, and oxygen saturation, and used these features to independently predict the onset of fever. Using a bootstrap aggregation method, we created a balanced dataset of 7,801 afebrile and febrile patients and analyzed features up to 4 hours before the fever onset. We found that supervised machine learning methods can predict fever up to 4 hours before onset in critically ill patients with high recall, precision, and F1-score. This study demonstrates the viability of using machine learning to predict fever among hospitalized adults. The discovery of salient physiomarkers through machine learning and deep learning techniques has the potential to further accelerate the development and implementation of innovative care delivery protocols and strategies for medically vulnerable patients.
[ { "created": "Mon, 14 Sep 2020 09:16:09 GMT", "version": "v1" } ]
2020-09-16
[ [ "Singh", "Aditya", "" ], [ "Mohammed", "Akram", "" ], [ "Chinthala", "Lokesh", "" ], [ "Kamaleswaran", "Rishikesan", "" ] ]
Fever can provide valuable information for the diagnosis and prognosis of various diseases such as pneumonia, dengue, and sepsis; therefore, predicting fever early can improve the effectiveness of treatment options and expedite the treatment process. This study aims to develop novel algorithms that can accurately predict fever onset in critically ill patients by applying machine learning techniques to continuous physiological data. We analyzed continuous physiological data collected every 5 minutes from a cohort of over 200,000 critically ill patients admitted to an Intensive Care Unit (ICU) over a 2-year period. Each episode of fever from the same patient was considered an independent event, with separations of at least 24 hours. We extracted descriptive statistical features from six physiological data streams, including heart rate, respiration, systolic and diastolic blood pressure, mean arterial pressure, and oxygen saturation, and used these features to independently predict the onset of fever. Using a bootstrap aggregation method, we created a balanced dataset of 7,801 afebrile and febrile patients and analyzed features up to 4 hours before the fever onset. We found that supervised machine learning methods can predict fever up to 4 hours before onset in critically ill patients with high recall, precision, and F1-score. This study demonstrates the viability of using machine learning to predict fever among hospitalized adults. The discovery of salient physiomarkers through machine learning and deep learning techniques has the potential to further accelerate the development and implementation of innovative care delivery protocols and strategies for medically vulnerable patients.
1102.2146
Rub\'en J. Requejo
Rub\'en J. Requejo and Juan Camacho
Coexistence of cooperators and defectors in well mixed populations mediated by limiting resources
9 pages, 7 figures
Phys. Rev. Lett. 108, 038701 (2012)
10.1103/PhysRevLett.108.038701
null
q-bio.PE stat.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditionally, resource limitation in evolutionary game theory is assumed just to impose a constant population size. Here we show that resource limitations may generate dynamical payoffs able to alter an original prisoner's dilemma, and to allow for the stable coexistence between unconditional cooperators and defectors in well-mixed populations. This is a consequence of a self-organizing process that renders the interaction payoff matrix evolutionarily neutral, and represents a resource-based control mechanism preventing the spread of defectors. To our knowledge, this is the first example of coexistence in well-mixed populations with a game structure different from a snowdrift game.
[ { "created": "Thu, 10 Feb 2011 15:34:27 GMT", "version": "v1" }, { "created": "Thu, 29 Sep 2011 03:37:29 GMT", "version": "v2" }, { "created": "Thu, 25 Oct 2012 18:48:26 GMT", "version": "v3" } ]
2012-10-26
[ [ "Requejo", "Rubén J.", "" ], [ "Camacho", "Juan", "" ] ]
Traditionally, resource limitation in evolutionary game theory is assumed just to impose a constant population size. Here we show that resource limitations may generate dynamical payoffs able to alter an original prisoner's dilemma, and to allow for the stable coexistence between unconditional cooperators and defectors in well-mixed populations. This is a consequence of a self-organizing process that renders the interaction payoff matrix evolutionarily neutral, and represents a resource-based control mechanism preventing the spread of defectors. To our knowledge, this is the first example of coexistence in well-mixed populations with a game structure different from a snowdrift game.
1303.5848
Michael Stumpf
Angelique Ale, Paul Kirk, Michael P.P. Stumpf
A general moment expansion method for stochastic kinetic models
13 pages, 7 figures, 2 tables
null
10.1063/1.4802475
null
q-bio.MN cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Moment approximation methods are gaining increasing attention for their use in the approximation of the stochastic kinetics of chemical reaction systems. In this paper we derive a general moment expansion method for any type of propensity, which allows expansion up to any number of moments. For some chemical reaction systems, more than two moments are necessary to describe the dynamic properties of the system, which the linear noise approximation (LNA) is unable to provide. Moreover, also for systems for which the mean does not have a strong dependence on higher order moments, moment approximation methods give information about higher order moments of the underlying probability distribution. We demonstrate the method using a dimerisation reaction, Michaelis-Menten kinetics and a model of an oscillating p53 system. We show that for the dimerisation reaction and the Michaelis-Menten enzyme kinetics system higher order moments have limited influence on the estimation of the mean, while for the p53 system, the solution for the mean can require several moments to converge to the average obtained from many stochastic simulations. We also find that agreement between lower order moments does not guarantee that higher moments will agree. Compared to stochastic simulations our approach is numerically highly efficient at capturing the behaviour of stochastic systems in terms of the average and higher moments, and we provide expressions for the computational cost for different system sizes and orders of approximation. We show how the moment expansion method can be employed to efficiently quantify parameter sensitivity. Finally we investigate the effects of using too few moments on parameter estimation, and provide guidance on how to estimate if the distribution can be accurately approximated using only a few moments.
[ { "created": "Sat, 23 Mar 2013 14:06:29 GMT", "version": "v1" } ]
2015-06-15
[ [ "Ale", "Angelique", "" ], [ "Kirk", "Paul", "" ], [ "Stumpf", "Michael P. P.", "" ] ]
Moment approximation methods are gaining increasing attention for their use in the approximation of the stochastic kinetics of chemical reaction systems. In this paper we derive a general moment expansion method for any type of propensity, which allows expansion up to any number of moments. For some chemical reaction systems, more than two moments are necessary to describe the dynamic properties of the system, which the linear noise approximation (LNA) is unable to provide. Moreover, also for systems for which the mean does not have a strong dependence on higher order moments, moment approximation methods give information about higher order moments of the underlying probability distribution. We demonstrate the method using a dimerisation reaction, Michaelis-Menten kinetics and a model of an oscillating p53 system. We show that for the dimerisation reaction and the Michaelis-Menten enzyme kinetics system higher order moments have limited influence on the estimation of the mean, while for the p53 system, the solution for the mean can require several moments to converge to the average obtained from many stochastic simulations. We also find that agreement between lower order moments does not guarantee that higher moments will agree. Compared to stochastic simulations our approach is numerically highly efficient at capturing the behaviour of stochastic systems in terms of the average and higher moments, and we provide expressions for the computational cost for different system sizes and orders of approximation. We show how the moment expansion method can be employed to efficiently quantify parameter sensitivity. Finally we investigate the effects of using too few moments on parameter estimation, and provide guidance on how to estimate if the distribution can be accurately approximated using only a few moments.
0707.0764
Branko Dragovich
Branko Dragovich and Alexandra Dragovich
p-Adic Degeneracy of the Genetic Code
11 pages, 1 table. Published in the Proceedings of '4th Summer School in Modern Mathematcal Physics', September 2006, Belgrade (Serbia)
SFIN XX A1 (2007) 179-188
null
null
q-bio.GN cs.IT math.IT physics.bio-ph
null
Degeneracy of the genetic code is a biological way to minimize the effects of undesirable mutation changes. Degeneracy has a natural description on the 5-adic space of 64 codons $\mathcal{C}_5 (64) = \{n_0 + n_1 5 + n_2 5^2 : n_i = 1, 2, 3, 4 \} ,$ where $n_i$ are digits related to nucleotides as follows: C = 1, A = 2, T = U = 3, G = 4. The smallest 5-adic distance between codons joins them into 16 quadruplets, which under 2-adic distance decay into 32 doublets. p-Adically close codons are assigned to one of 20 amino acids, which are building blocks of proteins, or code termination of protein synthesis. We show that genetic code multiplets are made of the p-adic nearest codons.
[ { "created": "Thu, 5 Jul 2007 11:40:53 GMT", "version": "v1" } ]
2007-07-16
[ [ "Dragovich", "Branko", "" ], [ "Dragovich", "Alexandra", "" ] ]
Degeneracy of the genetic code is a biological way to minimize the effects of undesirable mutation changes. Degeneracy has a natural description on the 5-adic space of 64 codons $\mathcal{C}_5 (64) = \{n_0 + n_1 5 + n_2 5^2 : n_i = 1, 2, 3, 4 \} ,$ where $n_i$ are digits related to nucleotides as follows: C = 1, A = 2, T = U = 3, G = 4. The smallest 5-adic distance between codons joins them into 16 quadruplets, which under 2-adic distance decay into 32 doublets. p-Adically close codons are assigned to one of 20 amino acids, which are building blocks of proteins, or code termination of protein synthesis. We show that genetic code multiplets are made of the p-adic nearest codons.
2008.06034
Tom Zhao
Tom Y. Zhao and Neelesh A. Patankar
Tetracycline as an inhibitor to the coronavirus SARS-CoV-2
null
J Cell Biochem (2021) 1-8
10.1002/jcb.29909
null
q-bio.BM q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The coronavirus SARS-CoV-2 remains an extant threat against public health on a global scale. Cell infection begins when the spike protein of SARS-CoV-2 binds with the cell receptor, angiotensin-converting enzyme 2 (ACE2). Here, we address the role of Tetracycline as an inhibitor for the receptor-binding domain (RBD) of the spike protein. Targeted molecular investigations show that Tetracycline binds more favorably to the RBD (-9.40 kcal/mol) compared to Chloroquine (-6.31 kcal/mol) or Doxycycline (-8.08 kcal/mol) and inhibits attachment to ACE2 to a greater degree (binding efficiency of 2.98 $\frac{\text{kcal}}{\text{mol}\cdot \text{nm}^2}$ for Tetracycline-RBD, 5.59 $\frac{\text{kcal}}{\text{mol}\cdot \text{nm}^2}$ for Chloroquine-RBD, 5.16 $\frac{\text{kcal}}{\text{mol}\cdot \text{nm}^2}$ for Doxycycline-RBD). Stronger Tetracycline inhibition is verified with nonequilibrium PMF calculations, for which the Tetracycline-RBD complex exhibits the lowest free energy profile along the dissociation pathway from ACE2. Tetracycline appears to target viral residues that are usually involved in significant hydrogen bonding with ACE2; this inhibition of cellular infection complements the anti-inflammatory and cytokine suppressing capability of Tetracycline, and may further reduce the duration of ICU stays and mechanical ventilation induced by the coronavirus SARS-CoV-2.
[ { "created": "Thu, 13 Aug 2020 17:46:46 GMT", "version": "v1" } ]
2021-02-24
[ [ "Zhao", "Tom Y.", "" ], [ "Patankar", "Neelesh A.", "" ] ]
The coronavirus SARS-CoV-2 remains an extant threat against public health on a global scale. Cell infection begins when the spike protein of SARS-CoV-2 binds with the cell receptor, angiotensin-converting enzyme 2 (ACE2). Here, we address the role of Tetracycline as an inhibitor for the receptor-binding domain (RBD) of the spike protein. Targeted molecular investigations show that Tetracycline binds more favorably to the RBD (-9.40 kcal/mol) compared to Chloroquine (-6.31 kcal/mol) or Doxycycline (-8.08 kcal/mol) and inhibits attachment to ACE2 to a greater degree (binding efficiency of 2.98 $\frac{\text{kcal}}{\text{mol}\cdot \text{nm}^2}$ for Tetracycline-RBD, 5.59 $\frac{\text{kcal}}{\text{mol}\cdot \text{nm}^2}$ for Chloroquine-RBD, 5.16 $\frac{\text{kcal}}{\text{mol}\cdot \text{nm}^2}$ for Doxycycline-RBD). Stronger Tetracycline inhibition is verified with nonequilibrium PMF calculations, for which the Tetracycline-RBD complex exhibits the lowest free energy profile along the dissociation pathway from ACE2. Tetracycline appears to target viral residues that are usually involved in significant hydrogen bonding with ACE2; this inhibition of cellular infection complements the anti-inflammatory and cytokine suppressing capability of Tetracycline, and may further reduce the duration of ICU stays and mechanical ventilation induced by the coronavirus SARS-CoV-2.
q-bio/0402007
Jeremy Sumner
J. G. Sumner and P. D. Jarvis (University of Tasmania)
Entanglement Invariants and Phylogenetic Branching
21 pages, 3 Figures. Accepted for publication in Journal of Mathematical Biology
null
null
UTAS-PHYS-04-01
q-bio.PE
null
It is possible to consider stochastic models of sequence evolution in phylogenetics in the context of a dynamical tensor description inspired by physics. Approaching the problem in this framework allows for the well-developed methods of mathematical physics to be exploited in the biological arena. We present the tensor description of the homogeneous continuous time Markov chain model of phylogenetics with branching events generated by dynamical operations. Standard results from phylogenetics are shown to be derivable from the tensor framework. We summarize a powerful approach to entanglement measures in quantum physics and present its relevance to phylogenetic analysis. Entanglement measures are found to give distance measures that are equivalent to, and expand upon, those already known in phylogenetics. In particular we make the connection between the group invariant functions of phylogenetic data and phylogenetic distance functions. We introduce a new distance measure valid for three taxa based on the group invariant function known in physics as the "tangle". All work is presented for the homogeneous continuous time Markov chain model with arbitrary rate matrices.
[ { "created": "Wed, 4 Feb 2004 00:01:08 GMT", "version": "v1" }, { "created": "Tue, 10 Feb 2004 03:23:54 GMT", "version": "v2" }, { "created": "Mon, 19 Jul 2004 10:05:21 GMT", "version": "v3" }, { "created": "Tue, 30 Nov 2004 11:10:58 GMT", "version": "v4" } ]
2007-05-23
[ [ "Sumner", "J. G.", "", "University of Tasmania" ], [ "Jarvis", "P. D.", "", "University of Tasmania" ] ]
It is possible to consider stochastic models of sequence evolution in phylogenetics in the context of a dynamical tensor description inspired by physics. Approaching the problem in this framework allows for the well-developed methods of mathematical physics to be exploited in the biological arena. We present the tensor description of the homogeneous continuous time Markov chain model of phylogenetics with branching events generated by dynamical operations. Standard results from phylogenetics are shown to be derivable from the tensor framework. We summarize a powerful approach to entanglement measures in quantum physics and present its relevance to phylogenetic analysis. Entanglement measures are found to give distance measures that are equivalent to, and expand upon, those already known in phylogenetics. In particular we make the connection between the group invariant functions of phylogenetic data and phylogenetic distance functions. We introduce a new distance measure valid for three taxa based on the group invariant function known in physics as the "tangle". All work is presented for the homogeneous continuous time Markov chain model with arbitrary rate matrices.
2402.12943
Giuseppe Gaeta
Giuseppe Gaeta
On some dynamical features of the complete Moran model for neutral evolution in the presence of mutations
22 pages, 8 figures
Open Communications in Nonlinear Mathematical Physics, Volume 4 (March 26, 2024) ocnmp:13104
10.46298/ocnmp.13104
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a version of the classical Moran model, in which mutations are taken into account; the possibility of mutations was introduced by Moran in his seminal paper, but it is more often overlooked in discussing the Moran model. For this model, fixation is prevented by mutation, and we have an ergodic Markov process; the equilibrium distribution for such a process was determined by Moran. The problems we consider in this paper are those of first hitting either one of the ``pure'' (uniform population) states, depending on the initial state; and that of first hitting times. The presence of mutations leads to a nonlinear dependence of the hitting probabilities on the initial state, and to a larger mean hitting time compared to the mutation-free process (in which case hitting corresponds to fixation of one of the alleles).
[ { "created": "Tue, 20 Feb 2024 11:57:12 GMT", "version": "v1" }, { "created": "Sat, 23 Mar 2024 13:48:15 GMT", "version": "v2" } ]
2024-08-07
[ [ "Gaeta", "Giuseppe", "" ] ]
We present a version of the classical Moran model, in which mutations are taken into account; the possibility of mutations was introduced by Moran in his seminal paper, but it is more often overlooked in discussing the Moran model. For this model, fixation is prevented by mutation, and we have an ergodic Markov process; the equilibrium distribution for such a process was determined by Moran. The problems we consider in this paper are those of first hitting either one of the ``pure'' (uniform population) states, depending on the initial state; and that of first hitting times. The presence of mutations leads to a nonlinear dependence of the hitting probabilities on the initial state, and to a larger mean hitting time compared to the mutation-free process (in which case hitting corresponds to fixation of one of the alleles).
1009.6164
Elife Bagci
Elife Zerrin Bagci, Sercan Murat Sen, Mehmet Cihan Camurdan
Analysis of a Mathematical Model of Apoptosis: Individual Differences and Malfunction in Programmed Cell Death
null
null
null
null
q-bio.MN q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Apoptosis is an important area of research because of its role in keeping a mature multicellular organism's number of cells constant, hence ensuring that the organism does not have cell accumulation that may transform into cancer with additional hallmarks. Firstly, we have carried out sensitivity analysis on an existing mitochondria-dependent mathematical apoptosis model to find out which parameters have a role in causing monostable cell survival, i.e., malfunction in apoptosis. We have then generated three healthy cell models by changing these sensitive parameters while preserving bistability, i.e., healthy functioning. For each healthy cell, we varied the proapoptotic production rates, which were found to be among the most sensitive parameters, to yield cells that have malfunctioning apoptosis. We simulated caspase-3 activation, by numerically integrating the governing ordinary differential equations of a mitochondria-dependent apoptosis model, in a hypothetical malfunctioning cell which is treated by four potential treatments, namely: (i) proteasome inhibitor treatment, (ii) Bcl-2 inhibitor treatment, (iii) IAP inhibitor treatment, (iv) Bid-like synthetic peptides treatment. The simulations of the present model suggest that proteasome inhibitor treatment is the most effective treatment, though it may have severe side effects. For this treatment, we observed that the amount of proteasome inhibitor needed for caspase-3 activation may be different for cells in individuals with a different proapoptotic protein deficiency. We also observed that caspase-3 can be activated by Bcl-2 inhibitor treatment only in those hypothetical malfunctioning cells with Bax deficiency but not in others. These support the view that molecular heterogeneity in individuals may be an important factor in determining the individuals' positive or negative responses to treatments.
[ { "created": "Thu, 30 Sep 2010 15:11:01 GMT", "version": "v1" } ]
2010-10-01
[ [ "Bagci", "Elife Zerrin", "" ], [ "Sen", "Sercan Murat", "" ], [ "Camurdan", "Mehmet Cihan", "" ] ]
Apoptosis is an important area of research because of its role in keeping a mature multicellular organism's number of cells constant, hence ensuring that the organism does not have cell accumulation that may transform into cancer with additional hallmarks. Firstly, we have carried out sensitivity analysis on an existing mitochondria-dependent mathematical apoptosis model to find out which parameters have a role in causing monostable cell survival, i.e., malfunction in apoptosis. We have then generated three healthy cell models by changing these sensitive parameters while preserving bistability, i.e., healthy functioning. For each healthy cell, we varied the proapoptotic production rates, which were found to be among the most sensitive parameters, to yield cells that have malfunctioning apoptosis. We simulated caspase-3 activation, by numerically integrating the governing ordinary differential equations of a mitochondria-dependent apoptosis model, in a hypothetical malfunctioning cell which is treated by four potential treatments, namely: (i) proteasome inhibitor treatment, (ii) Bcl-2 inhibitor treatment, (iii) IAP inhibitor treatment, (iv) Bid-like synthetic peptides treatment. The simulations of the present model suggest that proteasome inhibitor treatment is the most effective treatment, though it may have severe side effects. For this treatment, we observed that the amount of proteasome inhibitor needed for caspase-3 activation may be different for cells in individuals with a different proapoptotic protein deficiency. We also observed that caspase-3 can be activated by Bcl-2 inhibitor treatment only in those hypothetical malfunctioning cells with Bax deficiency but not in others. These support the view that molecular heterogeneity in individuals may be an important factor in determining the individuals' positive or negative responses to treatments.
q-bio/0512017
Martyn Amos
Martyn Amos, David A. Hodgson and Alan Gibbons
Bacterial self-organisation and computation
Submitted to the International Journal of Unconventional Computing
null
null
null
q-bio.CB
null
In this article we highlight chemotaxis (cellular movement) as a rich source of potential engineering applications and computational models, highlighting current research and possible future work. We first give a brief description of the biological mechanism, before describing recent work on modelling it in silico. We then propose a methodology for extending existing models and their possible application as a fundamental tool in engineering cellular pattern formation. We discuss possible engineering applications of human-defined cell patterns, as well as the potential for using abstract models of chemotaxis for generalised computation, before concluding with a brief discussion of future challenges and opportunities in this field.
[ { "created": "Thu, 8 Dec 2005 11:28:46 GMT", "version": "v1" } ]
2007-05-23
[ [ "Amos", "Martyn", "" ], [ "Hodgson", "David A.", "" ], [ "Gibbons", "Alan", "" ] ]
In this article we highlight chemotaxis (cellular movement) as a rich source of potential engineering applications and computational models, highlighting current research and possible future work. We first give a brief description of the biological mechanism, before describing recent work on modelling it in silico. We then propose a methodology for extending existing models and their possible application as a fundamental tool in engineering cellular pattern formation. We discuss possible engineering applications of human-defined cell patterns, as well as the potential for using abstract models of chemotaxis for generalised computation, before concluding with a brief discussion of future challenges and opportunities in this field.
2012.11508
Tanja Slotte
Juanita Guti\'errez-Valencia, William Hughes, Emma L. Berdan, Tanja Slotte
The genomic architecture and evolutionary fates of supergenes
null
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Supergenes are genomic regions containing sets of tightly linked loci that control multi-trait phenotypic polymorphisms under balancing selection. Recent advances in genomics have uncovered significant variation in both the genomic architecture as well as the mode of origin of supergenes across diverse organismal systems. Although the role of genomic architecture for the origin of supergenes has been much discussed, differences in the genomic architecture also subsequently affect the evolutionary trajectory of supergenes and the rate of degeneration of supergene haplotypes. In this review, we synthesize recent genomic work and historical models of supergene evolution, highlighting how the genomic architecture of supergenes affects their evolutionary fate. We discuss how recent findings on classic supergenes involved in governing ant colony social form, mimicry in butterflies, and heterostyly in flowering plants relate to theoretical expectations. Furthermore, we use forward simulations to demonstrate that differences in genomic architecture affect the degeneration of supergenes. Finally, we discuss implications of the evolution of supergene haplotypes for the long-term fate of balanced polymorphisms governed by supergenes.
[ { "created": "Mon, 21 Dec 2020 17:26:07 GMT", "version": "v1" }, { "created": "Mon, 15 Mar 2021 09:23:58 GMT", "version": "v2" } ]
2021-03-16
[ [ "Gutiérrez-Valencia", "Juanita", "" ], [ "Hughes", "William", "" ], [ "Berdan", "Emma L.", "" ], [ "Slotte", "Tanja", "" ] ]
Supergenes are genomic regions containing sets of tightly linked loci that control multi-trait phenotypic polymorphisms under balancing selection. Recent advances in genomics have uncovered significant variation in both the genomic architecture as well as the mode of origin of supergenes across diverse organismal systems. Although the role of genomic architecture for the origin of supergenes has been much discussed, differences in the genomic architecture also subsequently affect the evolutionary trajectory of supergenes and the rate of degeneration of supergene haplotypes. In this review, we synthesize recent genomic work and historical models of supergene evolution, highlighting how the genomic architecture of supergenes affects their evolutionary fate. We discuss how recent findings on classic supergenes involved in governing ant colony social form, mimicry in butterflies, and heterostyly in flowering plants relate to theoretical expectations. Furthermore, we use forward simulations to demonstrate that differences in genomic architecture affect the degeneration of supergenes. Finally, we discuss implications of the evolution of supergene haplotypes for the long-term fate of balanced polymorphisms governed by supergenes.
1307.7831
Aaron Darling
Nicolas Wieseke, Matthias Bernt, and Martin Middendorf
Unifying Parsimonious Tree Reconciliation
Peer-reviewed and presented as part of the 13th Workshop on Algorithms in Bioinformatics (WABI2013)
null
null
null
q-bio.QM cs.CE cs.DS q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolution is a process that is influenced by various environmental factors, e.g. the interactions between different species, genes, and biogeographical properties. Hence, it is interesting to study the combined evolutionary history of multiple species, their genes, and the environment they live in. A common approach to address this research problem is to describe each individual evolution as a phylogenetic tree and construct a tree reconciliation which is parsimonious with respect to a given event model. Unfortunately, most of the previous approaches are designed only either for host-parasite systems, for gene tree/species tree reconciliation, or for biogeography. Hence, a method is desirable which addresses the general problem of mapping phylogenetic trees and covers all varieties of coevolving systems, including, e.g., predator-prey and symbiotic relationships. To overcome this gap, we introduce a generalized cophylogenetic event model considering the combinatorially complete set of local coevolutionary events. We give a dynamic programming based heuristic for solving the maximum parsimony reconciliation problem in time O(n^2), for two phylogenies each with at most n leaves. Furthermore, we present an exact branch-and-bound algorithm which uses the results from the dynamic programming heuristic for discarding partial reconciliations. The approach has been implemented as a Java application which is freely available from http://pacosy.informatik.uni-leipzig.de/coresym.
[ { "created": "Tue, 30 Jul 2013 05:47:27 GMT", "version": "v1" } ]
2013-08-02
[ [ "Wieseke", "Nicolas", "" ], [ "Bernt", "Matthias", "" ], [ "Middendorf", "Martin", "" ] ]
Evolution is a process that is influenced by various environmental factors, e.g. the interactions between different species, genes, and biogeographical properties. Hence, it is interesting to study the combined evolutionary history of multiple species, their genes, and the environment they live in. A common approach to address this research problem is to describe each individual evolution as a phylogenetic tree and construct a tree reconciliation which is parsimonious with respect to a given event model. Unfortunately, most of the previous approaches are designed only either for host-parasite systems, for gene tree/species tree reconciliation, or for biogeography. Hence, a method is desirable which addresses the general problem of mapping phylogenetic trees and covers all varieties of coevolving systems, including, e.g., predator-prey and symbiotic relationships. To overcome this gap, we introduce a generalized cophylogenetic event model considering the combinatorially complete set of local coevolutionary events. We give a dynamic programming based heuristic for solving the maximum parsimony reconciliation problem in time O(n^2), for two phylogenies each with at most n leaves. Furthermore, we present an exact branch-and-bound algorithm which uses the results from the dynamic programming heuristic for discarding partial reconciliations. The approach has been implemented as a Java application which is freely available from http://pacosy.informatik.uni-leipzig.de/coresym.
2303.17402
Mona Nourbakhsh
Mona Nourbakhsh (1), Kristine Degn (1), Astrid Saksager (1), Matteo Tiberti (2), Elena Papaleo (1 and 2) ((1) Cancer Systems Biology, Section for Bioinformatics, Department of Health and Technology, Technical University of Denmark, 2800, Lyngby, Denmark, (2) Cancer Structural Biology, Danish Cancer Society Research Center, 2100, Copenhagen, Denmark)
Prediction of cancer driver genes and mutations: the potential of integrative computational frameworks
35 pages, 7 figures
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-nd/4.0/
The vast amount of sequencing data presently available allows the scientific community to explore a range of genetic variables that may drive and progress cancer. A myriad of predictive tools has been proposed, allowing researchers and clinicians to compare and prioritize driver genes and mutations and their relative pathogenicity. However, there is little consensus on the computational approach or a gold standard for comparison. Hence, benchmarking the different tools depends highly on the input data, indicating that overfitting is still a massive problem. One of the solutions is to limit the scope and usage of a specific tool. However, such limitations force researchers to walk on a tightrope between creating and using high-quality tools for a specific purpose and describing the complex alterations driving cancer. While the knowledge of cancer development increases every day, many bioinformatic pipelines rely on single nucleotide variants or alterations in a vacuum without accounting for cellular compartment, mutational burden, or disease progression. Even within bioinformatics and computational cancer biology, the research fields work in silos, risking overlooking potential synergies or breakthroughs. Here, we provide an overview of databases and datasets for building or testing predictive tools for discovery of cancer drivers. We introduce predictive tools for driver genes, driver mutations, and the impact of these based on structural analysis. Additionally, we recommend directions in the field to avoid silo-research, moving in the direction of integrative frameworks.
[ { "created": "Thu, 30 Mar 2023 14:17:55 GMT", "version": "v1" } ]
2023-03-31
[ [ "Nourbakhsh", "Mona", "", "1 and 2" ], [ "Degn", "Kristine", "", "1 and 2" ], [ "Saksager", "Astrid", "", "1 and 2" ], [ "Tiberti", "Matteo", "", "1 and 2" ], [ "Papaleo", "Elena", "", "1 and 2" ] ]
The vast amount of sequencing data presently available allows the scientific community to explore a range of genetic variables that may drive and progress cancer. A myriad of predictive tools has been proposed, allowing researchers and clinicians to compare and prioritize driver genes and mutations and their relative pathogenicity. However, there is little consensus on the computational approach or a gold standard for comparison. Hence, benchmarking the different tools depends highly on the input data, indicating that overfitting is still a massive problem. One of the solutions is to limit the scope and usage of a specific tool. However, such limitations force researchers to walk on a tightrope between creating and using high-quality tools for a specific purpose and describing the complex alterations driving cancer. While the knowledge of cancer development increases every day, many bioinformatic pipelines rely on single nucleotide variants or alterations in a vacuum without accounting for cellular compartment, mutational burden, or disease progression. Even within bioinformatics and computational cancer biology, the research fields work in silos, risking overlooking potential synergies or breakthroughs. Here, we provide an overview of databases and datasets for building or testing predictive tools for discovery of cancer drivers. We introduce predictive tools for driver genes, driver mutations, and the impact of these based on structural analysis. Additionally, we recommend directions in the field to avoid silo-research, moving in the direction of integrative frameworks.
2104.07053
Gr\'egoire Sergeant-Perthuis
David Rudrauf, Gr\'egoire Sergeant-Perthuis, Yvain Tisserand, Teerawat Monnor, Olivier Belli
Combining the Projective Consciousness Model and Virtual Humans to assess ToM capacity in Virtual Reality: a proof-of-concept
null
null
null
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relating explicit psychological mechanisms and observable behaviours is a central aim of psychological and behavioural science. We implemented the principles of the Projective Consciousness Model into artificial agents embodied as virtual humans, as a proof-of-concept for a methodological framework aimed at simulating behaviours and assessing underlying psychological parameters, in the context of experiments in virtual reality. We focus on simulating the role of Theory of Mind (ToM) in the choice of strategic behaviours of approach and avoidance to optimise the satisfaction of agents' preferences. We designed an experiment in a virtual environment that could be used with real humans, allowing us to classify behaviours as a function of order of ToM, up to the second order. We show that our agents demonstrate expected behaviours with consistent parameters of ToM in this experiment. We also show that the agents can be used to correctly estimate each other's order of ToM. A similar approach could be used with real humans in virtual reality experiments not only to enable human participants to interact with parametric, virtual humans as stimuli, but also as a means of inference to derive model-based psychological assessments of the participants.
[ { "created": "Fri, 2 Apr 2021 23:55:39 GMT", "version": "v1" }, { "created": "Thu, 9 Sep 2021 17:13:44 GMT", "version": "v2" } ]
2021-09-10
[ [ "Rudrauf", "David", "" ], [ "Sergeant-Perthuis", "Grégoire", "" ], [ "Tisserand", "Yvain", "" ], [ "Monnor", "Teerawat", "" ], [ "Belli", "Olivier", "" ] ]
Relating explicit psychological mechanisms and observable behaviours is a central aim of psychological and behavioural science. We implemented the principles of the Projective Consciousness Model into artificial agents embodied as virtual humans, as a proof-of-concept for a methodological framework aimed at simulating behaviours and assessing underlying psychological parameters, in the context of experiments in virtual reality. We focus on simulating the role of Theory of Mind (ToM) in the choice of strategic behaviours of approach and avoidance to optimise the satisfaction of agents' preferences. We designed an experiment in a virtual environment that could be used with real humans, allowing us to classify behaviours as a function of order of ToM, up to the second order. We show that our agents demonstrate expected behaviours with consistent parameters of ToM in this experiment. We also show that the agents can be used to correctly estimate each other's order of ToM. A similar approach could be used with real humans in virtual reality experiments not only to enable human participants to interact with parametric, virtual humans as stimuli, but also as a means of inference to derive model-based psychological assessments of the participants.
0706.1852
Erik Aurell
Maria Werner, LiZhe Zhu, Erik Aurell
Cooperative action in eukaryotic gene regulation: physical properties of a viral example
7 pages, 6 figures, 1 table
null
10.1103/PhysRevE.76.061909
null
q-bio.SC cond-mat.soft q-bio.MN
null
The Epstein-Barr virus (EBV) infects more than 90% of the human population, and is the cause of several diseases, both serious and mild. It is a tumor virus, and has been widely studied as a model system for gene (de)regulation in humans. A central feature of the EBV life cycle is its ability to persist in human B cells in states denoted latency I, II and III. In latency III the host cell is driven to cell proliferation and hence expansion of the viral population, but does not enter the lytic pathway, and no new virions are produced, while the latency I state is almost completely dormant. In this paper we study a physico-chemical model of the switch between latency I and latency III in EBV. We show that the unusually large number of binding sites of two competing transcription factors, one viral and one from the host, serves to make the switch sharper (higher Hill coefficient), either by cooperative binding between molecules of the same species when they bind, or by competition between the two species if there is sufficient steric hindrance.
[ { "created": "Wed, 13 Jun 2007 09:14:29 GMT", "version": "v1" } ]
2009-11-13
[ [ "Werner", "Maria", "" ], [ "Zhu", "LiZhe", "" ], [ "Aurell", "Erik", "" ] ]
The Epstein-Barr virus (EBV) infects more than 90% of the human population, and is the cause of several diseases, both serious and mild. It is a tumor virus, and has been widely studied as a model system for gene (de)regulation in humans. A central feature of the EBV life cycle is its ability to persist in human B cells in states denoted latency I, II and III. In latency III the host cell is driven to cell proliferation and hence expansion of the viral population, but does not enter the lytic pathway, and no new virions are produced, while the latency I state is almost completely dormant. In this paper we study a physico-chemical model of the switch between latency I and latency III in EBV. We show that the unusually large number of binding sites of two competing transcription factors, one viral and one from the host, serves to make the switch sharper (higher Hill coefficient), either by cooperative binding between molecules of the same species when they bind, or by competition between the two species if there is sufficient steric hindrance.
0705.4635
Thierry Emonet
Thierry Emonet and Philippe Cluzel
Relationship between cellular response and behavioral variability in bacterial chemotaxis
15 pages, 4 figures, Supporting information available here http://cluzel.uchicago.edu/data/emonet/arxiv_070531_supp.pdf
null
10.1073/pnas.0705463105
null
q-bio.MN q-bio.CB q-bio.OT
null
Bacterial chemotaxis in Escherichia coli is a canonical system for the study of signal transduction. A remarkable feature of this system is the coexistence of precise adaptation at the population level with large fluctuating cellular behavior in single cells (Korobkova et al. 2004, Nature, 428, 574). Using a stochastic model, we found that the large behavioral variability experimentally observed in non-stimulated cells is a direct consequence of the architecture of this adaptive system. Reversible covalent modification cycles, in which methylation and demethylation reactions antagonistically regulate the activity of receptor-kinase complexes, operate outside the region of first-order kinetics. As a result, the receptor-kinase that governs cellular behavior exhibits a sigmoidal activation curve. This curve simultaneously amplifies the inherent stochastic fluctuations in the system and lengthens the relaxation time in response to stimulus. Because stochastic fluctuations cause large behavioral variability and the relaxation time governs the average duration of runs in response to small stimuli, cells with the greatest fluctuating behavior also display the largest chemotactic response. Finally, large-scale simulations of digital bacteria suggest that the chemotaxis network is tuned to simultaneously optimize the random spread of cells in absence of nutrients and the cellular response to gradients of attractant.
[ { "created": "Thu, 31 May 2007 16:05:09 GMT", "version": "v1" } ]
2019-08-19
[ [ "Emonet", "Thierry", "" ], [ "Cluzel", "Philippe", "" ] ]
Bacterial chemotaxis in Escherichia coli is a canonical system for the study of signal transduction. A remarkable feature of this system is the coexistence of precise adaptation at the population level with large fluctuating cellular behavior in single cells (Korobkova et al. 2004, Nature, 428, 574). Using a stochastic model, we found that the large behavioral variability experimentally observed in non-stimulated cells is a direct consequence of the architecture of this adaptive system. Reversible covalent modification cycles, in which methylation and demethylation reactions antagonistically regulate the activity of receptor-kinase complexes, operate outside the region of first-order kinetics. As a result, the receptor-kinase that governs cellular behavior exhibits a sigmoidal activation curve. This curve simultaneously amplifies the inherent stochastic fluctuations in the system and lengthens the relaxation time in response to stimulus. Because stochastic fluctuations cause large behavioral variability and the relaxation time governs the average duration of runs in response to small stimuli, cells with the greatest fluctuating behavior also display the largest chemotactic response. Finally, large-scale simulations of digital bacteria suggest that the chemotaxis network is tuned to simultaneously optimize the random spread of cells in absence of nutrients and the cellular response to gradients of attractant.
1503.02904
Martino Trassinelli
Martino Trassinelli (INSP)
Energy cost and optimisation in breath-hold diving
null
J. Theor. Biol. 396, 42-52 (2016)
10.1016/j.jtbi.2016.02.009
null
q-bio.QM physics.bio-ph physics.flu-dyn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new model for calculating locomotion costs in breath-hold divers. Starting from basic mechanics principles, we calculate the work that the diver must provide through propulsion to counterbalance the action of drag, the buoyant force and weight during immersion. Compared to those in previous studies, the model presented here accurately analyses breath-hold divers which alternate active swimming with prolonged glides during the dive (as is the case in mammals). The energy cost of the dive is strongly dependent on these prolonged gliding phases. Here we investigate the length and impacts on energy cost of these glides with respect to the diver characteristics, and compare them with those observed in different breath-hold diving species. Taking into account the basal metabolic rate and chemical energy to propulsion transformation efficiency, we calculate optimal swim velocity and the corresponding total energy cost (including metabolic rate) and compare them with observations. Energy cost is minimised when the diver passes through neutral buoyancy conditions during the dive. This generally implies the presence of prolonged gliding phases in both ascent and descent, where the buoyancy (varying with depth) is best used against the drag, reducing energy cost. This is in agreement with past results (Miller et al., 2012; Sato et al., 2013) where, when the buoyant force is considered constant during the dive, the energy cost was minimised for neutral buoyancy. In particular, our model confirms the good physical adaptation of dolphins for diving, compared to other breath-hold diving species which are mostly positively buoyant (penguins for example). The presence of prolonged glides implies a non-trivial dependency of optimal speed on maximal depth of the dive. This extends previous findings (Sato et al., 2010; Watanabe et al., 2011) which found no dependency of optimal speed on dive depth for particular conditions.
The energy cost of the dive can be further diminished by reducing the volume of gas-filled body parts in divers close to neutral buoyancy. This provides a possible additional explanation for the observed exhalation of air before diving in phocid seals to minimise dive energy cost. Until now the only explanation for this phenomenon has been a reduction in the risk of decompression sickness.
[ { "created": "Tue, 10 Mar 2015 13:52:29 GMT", "version": "v1" }, { "created": "Wed, 11 Mar 2015 14:55:16 GMT", "version": "v2" }, { "created": "Tue, 22 Dec 2015 09:39:31 GMT", "version": "v3" }, { "created": "Tue, 19 Jan 2016 12:40:27 GMT", "version": "v4" } ]
2024-01-30
[ [ "Trassinelli", "Martino", "", "INSP" ] ]
We present a new model for calculating locomotion costs in breath-hold divers. Starting from basic mechanics principles, we calculate the work that the diver must provide through propulsion to counterbalance the action of drag, the buoyant force and weight during immersion. Compared to those in previous studies, the model presented here accurately analyses breath-hold divers which alternate active swimming with prolonged glides during the dive (as is the case in mammals). The energy cost of the dive is strongly dependent on these prolonged gliding phases. Here we investigate the length and impacts on energy cost of these glides with respect to the diver characteristics, and compare them with those observed in different breath-hold diving species. Taking into account the basal metabolic rate and chemical energy to propulsion transformation efficiency, we calculate optimal swim velocity and the corresponding total energy cost (including metabolic rate) and compare them with observations. Energy cost is minimised when the diver passes through neutral buoyancy conditions during the dive. This generally implies the presence of prolonged gliding phases in both ascent and descent, where the buoyancy (varying with depth) is best used against the drag, reducing energy cost. This is in agreement with past results (Miller et al., 2012; Sato et al., 2013) where, when the buoyant force is considered constant during the dive, the energy cost was minimised for neutral buoyancy. In particular, our model confirms the good physical adaptation of dolphins for diving, compared to other breath-hold diving species which are mostly positively buoyant (penguins for example). The presence of prolonged glides implies a non-trivial dependency of optimal speed on maximal depth of the dive. This extends previous findings (Sato et al., 2010; Watanabe et al., 2011) which found no dependency of optimal speed on dive depth for particular conditions.
The energy cost of the dive can be further diminished by reducing the volume of gas-filled body parts in divers close to neutral buoyancy. This provides a possible additional explanation for the observed exhalation of air before diving in phocid seals to minimise dive energy cost. Until now the only explanation for this phenomenon has been a reduction in the risk of decompression sickness.
1811.05371
Ching-Hao Wang
Ching-Hao Wang, Caleb J. Bashor, and Pankaj Mehta
The strength of protein-protein interactions controls the information capacity and dynamical response of signaling networks
13+10 pages, 5+9 figures, Supplemental Information included
null
null
null
q-bio.MN cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Eukaryotic cells transmit information by signaling through complex networks of interacting proteins. Here we develop a theoretical and computational framework that relates the biophysics of protein-protein interactions (PPIs) within a signaling network to its information processing properties. To do so, we generalize statistical physics-inspired models for protein binding to account for interactions that depend on post-translational state (e.g. phosphorylation). By combining these models with information-theoretic methods, we find that PPIs are a key determinant of information transmission within a signaling network, with weak interactions giving rise to "noise" that diminishes information transmission. While noise can be mitigated by increasing interaction strength, the accompanying increase in transmission comes at the expense of a slower dynamical response. This suggests that the biophysics of signaling protein interactions gives rise to a fundamental "speed-information" trade-off. Surprisingly, we find that cross-talk between pathways in complex signaling networks does not significantly alter information capacity--an observation that may partially explain the promiscuity and ubiquity of weak PPIs in heavily interconnected networks. We conclude by showing how our framework can be used to design synthetic biochemical networks that maximize information transmission, a procedure we dub "InfoMax" design.
[ { "created": "Tue, 13 Nov 2018 15:46:48 GMT", "version": "v1" }, { "created": "Fri, 23 Nov 2018 17:40:55 GMT", "version": "v2" } ]
2018-11-26
[ [ "Wang", "Ching-Hao", "" ], [ "Bashor", "Caleb J.", "" ], [ "Mehta", "Pankaj", "" ] ]
Eukaryotic cells transmit information by signaling through complex networks of interacting proteins. Here we develop a theoretical and computational framework that relates the biophysics of protein-protein interactions (PPIs) within a signaling network to its information processing properties. To do so, we generalize statistical physics-inspired models for protein binding to account for interactions that depend on post-translational state (e.g. phosphorylation). By combining these models with information-theoretic methods, we find that PPIs are a key determinant of information transmission within a signaling network, with weak interactions giving rise to "noise" that diminishes information transmission. While noise can be mitigated by increasing interaction strength, the accompanying increase in transmission comes at the expense of a slower dynamical response. This suggests that the biophysics of signaling protein interactions gives rise to a fundamental "speed-information" trade-off. Surprisingly, we find that cross-talk between pathways in complex signaling networks does not significantly alter information capacity--an observation that may partially explain the promiscuity and ubiquity of weak PPIs in heavily interconnected networks. We conclude by showing how our framework can be used to design synthetic biochemical networks that maximize information transmission, a procedure we dub "InfoMax" design.
q-bio/0509037
Mikl\'os Cs\H{u}r\"os
Mikl\'os Cs\H{u}r\"os, Istv\'an Mikl\'os
A probabilistic model for gene content evolution with duplication, loss, and horizontal transfer
null
null
10.1007/11732990_18
null
q-bio.PE q-bio.QM
null
We introduce a Markov model for the evolution of a gene family along a phylogeny. The model includes parameters for the rates of horizontal gene transfer, gene duplication, and gene loss, in addition to branch lengths in the phylogeny. The likelihood for the changes in the size of a gene family across different organisms can be calculated in O(N+hM^2) time and O(N+M^2) space, where N is the number of organisms, h is the height of the phylogeny, and M is the sum of family sizes. We apply the model to the evolution of gene content in Proteobacteria using the gene families in the COG (Clusters of Orthologous Groups) database.
[ { "created": "Tue, 27 Sep 2005 14:17:53 GMT", "version": "v1" } ]
2016-09-08
[ [ "Csűrös", "Miklós", "" ], [ "Miklós", "István", "" ] ]
We introduce a Markov model for the evolution of a gene family along a phylogeny. The model includes parameters for the rates of horizontal gene transfer, gene duplication, and gene loss, in addition to branch lengths in the phylogeny. The likelihood for the changes in the size of a gene family across different organisms can be calculated in O(N+hM^2) time and O(N+M^2) space, where N is the number of organisms, h is the height of the phylogeny, and M is the sum of family sizes. We apply the model to the evolution of gene content in Proteobacteria using the gene families in the COG (Clusters of Orthologous Groups) database.
1302.4574
Anirban Banerji
Anirban Banerji
Structural and evolutionary tunnels of pairwise residue-interaction symmetries connect different structural classes of proteins
23 pages, 5 Tables, 1 Figure
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Studying all non-redundant proteins in 76 most-commonly found structural domains, the present work attempts to decipher latent patterns that characterize acceptable and unacceptable symmetries in residue-residue interactions in functional proteins. We report that cutting across the structural classes, a select set of pairwise interactions are universally favored by geometrical and evolutionary constraints, termed 'acceptable' structural and evolutionary tunnels, respectively. An equally small subset of residue-residue interactions, the 'unacceptable' structural and evolutionary tunnels, is found to be universally disliked by structural and evolutionary constraints. Non-trivial overlapping is detected among acceptable structural and evolutionary tunnels, as well as among unacceptable structural and evolutionary tunnels. A subset of tunnels is found to have equal relative importance, structurally and evolutionarily, in different structural classes. The MET-MET tunnel is detected to be universally most unacceptable by both structural and evolutionary constraints, whereas the ASP-LEU tunnel was found to be the closest approximation to being universally most acceptable. Residual populations in structural and evolutionary tunnels are found to be independent of stereochemical properties of individual residues. It is argued with examples that tunnels are emergent features that connect the extent of symmetry in residue-residue interactions to the level of quaternary structural organization.
[ { "created": "Tue, 19 Feb 2013 10:53:32 GMT", "version": "v1" }, { "created": "Thu, 22 Aug 2013 04:41:19 GMT", "version": "v2" } ]
2013-08-23
[ [ "Banerji", "Anirban", "" ] ]
Studying all non-redundant proteins in 76 most-commonly found structural domains, the present work attempts to decipher latent patterns that characterize acceptable and unacceptable symmetries in residue-residue interactions in functional proteins. We report that cutting across the structural classes, a select set of pairwise interactions are universally favored by geometrical and evolutionary constraints, termed 'acceptable' structural and evolutionary tunnels, respectively. An equally small subset of residue-residue interactions, the 'unacceptable' structural and evolutionary tunnels, is found to be universally disliked by structural and evolutionary constraints. Non-trivial overlapping is detected among acceptable structural and evolutionary tunnels, as well as among unacceptable structural and evolutionary tunnels. A subset of tunnels is found to have equal relative importance, structurally and evolutionarily, in different structural classes. The MET-MET tunnel is detected to be universally most unacceptable by both structural and evolutionary constraints, whereas the ASP-LEU tunnel was found to be the closest approximation to being universally most acceptable. Residual populations in structural and evolutionary tunnels are found to be independent of stereochemical properties of individual residues. It is argued with examples that tunnels are emergent features that connect the extent of symmetry in residue-residue interactions to the level of quaternary structural organization.
1210.3378
Simon Mochrie
S. G. J. Mochrie, A. H. Mack, D. J. Schlingman, R. Collins, M. Kamenetska, and L. Regan
Unwinding and rewinding the nucleosome inner turn: Force dependence of the kinetic rate constants
null
null
10.1103/PhysRevE.87.012710
null
q-bio.BM cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A simple model for the force-dependent unwinding and rewinding rates of the nucleosome inner turn is constructed and quantitatively compared to the results of recent measurements [A. H. Mack et al., J. Mol. Biol. 423, 687 (2012)]. First, a coarse-grained model for the histone-DNA free energy landscape that incorporates both an elastic free energy barrier and specific histone-DNA bonds is developed. Next, a theoretical expression for the rate of transitions across a piecewise linear free energy landscape with multiple minima and maxima is presented. Then, the model free energy landscape, approximated as a piecewise linear function, and the theoretical expression for the transition rates are combined to construct a model for the force-dependent unwinding and rewinding rates of the nucleosome inner turn. Least-mean-squares fitting of the model rates to the rates observed in recent experiments demonstrates that this model describes well the force-dependent unwinding and rewinding rates of the nucleosome inner turn observed in the recent experiments, except at the highest forces studied, where an additional ad hoc term is required to describe the data, which may be interpreted as an indication of an alternate high-force nucleosome disassembly pathway that bypasses simple unwinding. The good agreement between the measurements and the model at lower forces demonstrates that both specific histone-DNA contacts and an elastic free energy barrier play essential roles for nucleosome winding and unwinding, and quantifies their relative contributions.
[ { "created": "Thu, 11 Oct 2012 21:44:39 GMT", "version": "v1" }, { "created": "Fri, 4 Jan 2013 16:13:41 GMT", "version": "v2" } ]
2015-06-11
[ [ "Mochrie", "S. G. J.", "" ], [ "Mack", "A. H.", "" ], [ "Schlingman", "D. J.", "" ], [ "Collins", "R.", "" ], [ "Kamenetska", "M.", "" ], [ "Regan", "L.", "" ] ]
A simple model for the force-dependent unwinding and rewinding rates of the nucleosome inner turn is constructed and quantitatively compared to the results of recent measurements [A. H. Mack et al., J. Mol. Biol. 423, 687 (2012)]. First, a coarse-grained model for the histone-DNA free energy landscape that incorporates both an elastic free energy barrier and specific histone-DNA bonds is developed. Next, a theoretical expression for the rate of transitions across a piecewise linear free energy landscape with multiple minima and maxima is presented. Then, the model free energy landscape, approximated as a piecewise linear function, and the theoretical expression for the transition rates are combined to construct a model for the force-dependent unwinding and rewinding rates of the nucleosome inner turn. Least-mean-squares fitting of the model rates to the rates observed in recent experiments demonstrates that this model describes well the force-dependent unwinding and rewinding rates of the nucleosome inner turn observed in the recent experiments, except at the highest forces studied, where an additional ad hoc term is required to describe the data, which may be interpreted as an indication of an alternate high-force nucleosome disassembly pathway that bypasses simple unwinding. The good agreement between the measurements and the model at lower forces demonstrates that both specific histone-DNA contacts and an elastic free energy barrier play essential roles for nucleosome winding and unwinding, and quantifies their relative contributions.
1505.00351
Sarine Babikian
Sarine Babikian, Francisco J. Valero-Cuevas, Eva Kanso
Slow Limb Movements Require Precise Control of Muscle Stiffness
null
null
null
null
q-bio.TO q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Slow and accurate finger and limb movements are essential to daily activities, but their neural control and governing mechanics are relatively unexplored. We consider neuromechanical systems where slow movements are produced by neural commands that modulate muscle stiffness. This formulation based on strain-energy equilibria is in agreement with prior work on neural control of muscle and limb impedance. Slow limb movements are driftless in the sense that movement stops when neural commands stop. We demonstrate, in the context of two planar tendon-driven systems representing a finger and a leg, that the control of muscle stiffness suffices to produce stable and accurate limb postures and quasi-static (slow) transitions among them. We prove, however, that stable postures are achievable only when muscles are pre-tensioned, as is the case for natural muscle tone. Our results further indicate, in accordance with experimental findings, that slow movements are non-smooth. The non-smoothness arises because the precision with which individual muscle stiffnesses need to be controlled changes substantially throughout the limb's motion. These results underscore the fundamental roles of muscle tone and accurate neural control of muscle stiffness in producing stable limb postures and slow movements.
[ { "created": "Sat, 2 May 2015 16:32:34 GMT", "version": "v1" }, { "created": "Tue, 5 May 2015 17:18:39 GMT", "version": "v2" } ]
2015-05-06
[ [ "Babikian", "Sarine", "" ], [ "Valero-Cuevas", "Francisco J.", "" ], [ "Kanso", "Eva", "" ] ]
Slow and accurate finger and limb movements are essential to daily activities, but their neural control and governing mechanics are relatively unexplored. We consider neuromechanical systems where slow movements are produced by neural commands that modulate muscle stiffness. This formulation based on strain-energy equilibria is in agreement with prior work on neural control of muscle and limb impedance. Slow limb movements are driftless in the sense that movement stops when neural commands stop. We demonstrate, in the context of two planar tendon-driven systems representing a finger and a leg, that the control of muscle stiffness suffices to produce stable and accurate limb postures and quasi-static (slow) transitions among them. We prove, however, that stable postures are achievable only when muscles are pre-tensioned, as is the case for natural muscle tone. Our results further indicate, in accordance with experimental findings, that slow movements are non-smooth. The non-smoothness arises because the precision with which individual muscle stiffnesses need to be controlled changes substantially throughout the limb's motion. These results underscore the fundamental roles of muscle tone and accurate neural control of muscle stiffness in producing stable limb postures and slow movements.
0802.1904
Nicolas Vuillerme
Nicolas Vuillerme (TIMC), Cyril Burdet (LMAS), Brice Isableu (EA 4042), Sylvain Demetz (LMAS)
The magnitude of the effect of calf muscles fatigue on postural control during bipedal quiet standing with vision depends on the eye-visual target distance
null
Gait & Posture / Gait and Posture 24, 2 (2006) 169-72
10.1016/j.gaitpost.2005.07.011
null
q-bio.NC
null
The purpose of the present experiment was to investigate whether, with vision, the magnitude of the effect of calf muscles fatigue on postural control during bipedal quiet standing depends on the eye-visual target distance. Twelve young university students were asked to stand upright as immobile as possible in three visual conditions (No vision, Vision 1m and Vision 4m) executed in two conditions of No fatigue and Fatigue of the calf muscles. Centre of foot pressure displacements were recorded using a force platform. Similar increased variances of the centre of foot pressure displacements were observed in the Fatigue relative to the No fatigue condition for both the No vision and Vision 4m conditions. Interestingly, in the Vision 1m condition, fatigue yielded: (1) a similar increased variance of the centre of foot pressure displacements to those observed in the No vision and Vision 4m conditions along the medio-lateral axis and (2) a weaker destabilising effect relative to the No vision and Vision 4m conditions along the antero-posterior axis. These results show that the ability to use visual information for postural control during bipedal quiet standing following calf muscles fatigue is dependent on the eye-visual target distance. More broadly, in the context of the multisensory control of balance, the present findings suggest that the efficiency of the sensory reweighting of visual sensory cues as the neuro-muscular constraints acting on the subject change is critically linked with the quality of the information the visual system obtains.
[ { "created": "Wed, 13 Feb 2008 20:12:20 GMT", "version": "v1" } ]
2008-02-14
[ [ "Vuillerme", "Nicolas", "", "TIMC" ], [ "Burdet", "Cyril", "", "LMAS" ], [ "Isableu", "Brice", "", "EA 4042" ], [ "Demetz", "Sylvain", "", "LMAS" ] ]
The purpose of the present experiment was to investigate whether, with vision, the magnitude of the effect of calf muscles fatigue on postural control during bipedal quiet standing depends on the eye-visual target distance. Twelve young university students were asked to stand upright as immobile as possible in three visual conditions (No vision, Vision 1m and Vision 4m) executed in two conditions of No fatigue and Fatigue of the calf muscles. Centre of foot pressure displacements were recorded using a force platform. Similar increased variances of the centre of foot pressure displacements were observed in the Fatigue relative to the No fatigue condition for both the No vision and Vision 4m conditions. Interestingly, in the Vision 1m condition, fatigue yielded: (1) a similar increased variance of the centre of foot pressure displacements to those observed in the No vision and Vision 4m conditions along the medio-lateral axis and (2) a weaker destabilising effect relative to the No vision and Vision 4m conditions along the antero-posterior axis. These results show that the ability to use visual information for postural control during bipedal quiet standing following calf muscles fatigue is dependent on the eye-visual target distance. More broadly, in the context of the multisensory control of balance, the present findings suggest that the efficiency of the sensory reweighting of visual sensory cues as the neuro-muscular constraints acting on the subject change is critically linked with the quality of the information the visual system obtains.
1703.01900
Jipeng Qiang
Jipeng Qiang and Wei Ding and John Quackenbush and Ping Chen
Network-based Distance Metric with Application to Discover Disease Subtypes in Cancer
null
null
null
null
q-bio.QM cs.IR q-bio.GN q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While we once thought of cancer as a single monolithic disease affecting a specific organ site, we now understand that there are many subtypes of cancer defined by unique patterns of gene mutations. These gene mutational data, which can be more reliably obtained than gene expression data, help to determine how the subtypes develop, evolve, and respond to therapies. Unlike the dense, continuous-valued gene expression data that most existing cancer subtype discovery algorithms use, somatic mutational data are extremely sparse and heterogeneous: fewer than 0.5\% of the 20,000 human protein-coding genes are mutated (encoded as discrete 1/0 values), and identical mutated genes are rarely shared by cancer patients. Our focus is to search for cancer subtypes in this extremely sparse, high-dimensional, binary gene mutational data using unsupervised learning. We propose a new network-based distance metric. We project cancer patients' mutational profiles onto their gene network structure and measure the distance between two patients using the similarity between genes and between the gene vertices of the patients in the network. Experimental results on synthetic data and real-world data show that our approach outperforms the top competitors in cancer subtype discovery. Furthermore, our approach can identify cancer subtypes that cannot be detected by other clustering algorithms in real cancer data.
[ { "created": "Wed, 1 Mar 2017 02:09:50 GMT", "version": "v1" } ]
2017-03-07
[ [ "Qiang", "Jipeng", "" ], [ "Ding", "Wei", "" ], [ "Quackenbush", "John", "" ], [ "Chen", "Ping", "" ] ]
While we once thought of cancer as a single monolithic disease affecting a specific organ site, we now understand that there are many subtypes of cancer defined by unique patterns of gene mutations. These gene mutational data, which can be more reliably obtained than gene expression data, help to determine how the subtypes develop, evolve, and respond to therapies. Unlike the dense, continuous-valued gene expression data that most existing cancer subtype discovery algorithms use, somatic mutational data are extremely sparse and heterogeneous: fewer than 0.5\% of the 20,000 human protein-coding genes are mutated (encoded as discrete 1/0 values), and identical mutated genes are rarely shared by cancer patients. Our focus is to search for cancer subtypes in this extremely sparse, high-dimensional, binary gene mutational data using unsupervised learning. We propose a new network-based distance metric. We project cancer patients' mutational profiles onto their gene network structure and measure the distance between two patients using the similarity between genes and between the gene vertices of the patients in the network. Experimental results on synthetic data and real-world data show that our approach outperforms the top competitors in cancer subtype discovery. Furthermore, our approach can identify cancer subtypes that cannot be detected by other clustering algorithms in real cancer data.
2401.06151
Alex Morehead
Alex Morehead, Jeffrey Ruffolo, Aadyot Bhatnagar, Ali Madani
Towards Joint Sequence-Structure Generation of Nucleic Acid and Protein Complexes with SE(3)-Discrete Diffusion
15 pages, 11 figures, presented at the NeurIPS 2023 Machine Learning in Structural Biology (MLSB) workshop. Code available at https://github.com/Profluent-Internships/MMDiff
null
null
null
q-bio.BM cs.AI cs.LG q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Generative models of macromolecules carry abundant and impactful implications for industrial and biomedical efforts in protein engineering. However, existing methods are currently limited to modeling protein structures or sequences, independently or jointly, without regard to the interactions that commonly occur between proteins and other macromolecules. In this work, we introduce MMDiff, a generative model that jointly designs sequences and structures of nucleic acid and protein complexes, independently or in complex, using joint SE(3)-discrete diffusion noise. Such a model has important implications for emerging areas of macromolecular design including structure-based transcription factor design and design of noncoding RNA sequences. We demonstrate the utility of MMDiff through a rigorous new design benchmark for macromolecular complex generation that we introduce in this work. Our results demonstrate that MMDiff is able to successfully generate micro-RNA and single-stranded DNA molecules while being modestly capable of jointly modeling DNA and RNA molecules in interaction with multi-chain protein complexes. Source code: https://github.com/Profluent-Internships/MMDiff.
[ { "created": "Thu, 21 Dec 2023 05:53:33 GMT", "version": "v1" } ]
2024-01-15
[ [ "Morehead", "Alex", "" ], [ "Ruffolo", "Jeffrey", "" ], [ "Bhatnagar", "Aadyot", "" ], [ "Madani", "Ali", "" ] ]
Generative models of macromolecules carry abundant and impactful implications for industrial and biomedical efforts in protein engineering. However, existing methods are currently limited to modeling protein structures or sequences, independently or jointly, without regard to the interactions that commonly occur between proteins and other macromolecules. In this work, we introduce MMDiff, a generative model that jointly designs sequences and structures of nucleic acid and protein complexes, independently or in complex, using joint SE(3)-discrete diffusion noise. Such a model has important implications for emerging areas of macromolecular design including structure-based transcription factor design and design of noncoding RNA sequences. We demonstrate the utility of MMDiff through a rigorous new design benchmark for macromolecular complex generation that we introduce in this work. Our results demonstrate that MMDiff is able to successfully generate micro-RNA and single-stranded DNA molecules while being modestly capable of jointly modeling DNA and RNA molecules in interaction with multi-chain protein complexes. Source code: https://github.com/Profluent-Internships/MMDiff.
2205.00054
Zachary Jackson
Zachary Jackson, BingKan Xue
Heterogeneity of Interaction Strengths and Its Consequences on Ecological Systems
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Ecosystems are formed by networks of species and their interactions. Traditional models of such interactions assume a constant interaction strength between a given pair of species. However, there is often significant trait variation among individual organisms even within the same species, causing heterogeneity in their interaction strengths with other species. The consequences of such heterogeneous interactions for the ecosystem have not been studied systematically. As a theoretical exploration, we analyze a simple ecosystem with trophic interactions between two predators and a shared prey, which would exhibit competitive exclusion in models with homogeneous interactions. We consider several scenarios where individuals of the prey species differentiate into subpopulations with different interaction strengths. We show that in all these cases, whether the heterogeneity is inherent, reversible, or adaptive, the ecosystem can stabilize at a new equilibrium where all three species coexist. Moreover, the prey population that has heterogeneous interactions with its predators reaches a higher density than it would without heterogeneity, and can even reach a higher density in the presence of two predators than with just one. Our results suggest that heterogeneity may be a naturally selected feature of ecological interactions that have important consequences for the stability and diversity of ecosystems.
[ { "created": "Fri, 29 Apr 2022 19:28:24 GMT", "version": "v1" } ]
2022-05-03
[ [ "Jackson", "Zachary", "" ], [ "Xue", "BingKan", "" ] ]
Ecosystems are formed by networks of species and their interactions. Traditional models of such interactions assume a constant interaction strength between a given pair of species. However, there is often significant trait variation among individual organisms even within the same species, causing heterogeneity in their interaction strengths with other species. The consequences of such heterogeneous interactions for the ecosystem have not been studied systematically. As a theoretical exploration, we analyze a simple ecosystem with trophic interactions between two predators and a shared prey, which would exhibit competitive exclusion in models with homogeneous interactions. We consider several scenarios where individuals of the prey species differentiate into subpopulations with different interaction strengths. We show that in all these cases, whether the heterogeneity is inherent, reversible, or adaptive, the ecosystem can stabilize at a new equilibrium where all three species coexist. Moreover, the prey population that has heterogeneous interactions with its predators reaches a higher density than it would without heterogeneity, and can even reach a higher density in the presence of two predators than with just one. Our results suggest that heterogeneity may be a naturally selected feature of ecological interactions that have important consequences for the stability and diversity of ecosystems.
2310.07185
Ankur Gupta
Benjamin M. Alessio, Ankur Gupta
The Ubiquity of Diffusiophoresis: Exploring Human Population Dynamics While Including Concentration Gradient-Driven Advection
null
null
null
null
q-bio.PE cond-mat.soft physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Diffusiophoresis, which refers to the movement of entities driven by gradients in the concentration of attractants, is observable in colloids and chemotactic bacteria. We suggest that humans also exhibit diffusiophoresis when they perceive concentration gradients in economic opportunities, social connections, safety factors, and more. Consequently, we build upon the Fisher-KPP reaction-diffusion model, which is foundational to human population dynamics, to incorporate diffusiophoretic advection. Through simulations, we demonstrate that diffusiophoresis can predict the emergence of population hotspots from initially dispersed populations and can enable precise control over inter- and intra-population segregation, a capability not achievable through Fisher-KPP alone. This framework is particularly pertinent given the anticipated impacts of climate change, which may result in significant human displacement influenced by the diffusiophoretic movement of individuals in response to concentration gradients in safety conditions.
[ { "created": "Wed, 11 Oct 2023 04:23:46 GMT", "version": "v1" } ]
2023-10-12
[ [ "Alessio", "Benjamin M.", "" ], [ "Gupta", "Ankur", "" ] ]
Diffusiophoresis, which refers to the movement of entities driven by gradients in the concentration of attractants, is observable in colloids and chemotactic bacteria. We suggest that humans also exhibit diffusiophoresis when they perceive concentration gradients in economic opportunities, social connections, safety factors, and more. Consequently, we build upon the Fisher-KPP reaction-diffusion model, which is foundational to human population dynamics, to incorporate diffusiophoretic advection. Through simulations, we demonstrate that diffusiophoresis can predict the emergence of population hotspots from initially dispersed populations and can enable precise control over inter- and intra-population segregation, a capability not achievable through Fisher-KPP alone. This framework is particularly pertinent given the anticipated impacts of climate change, which may result in significant human displacement influenced by the diffusiophoretic movement of individuals in response to concentration gradients in safety conditions.
1111.3065
Didier Sornette
Ivan Osorio, Alexey Lyubushin and Didier Sornette
Towards a Probabilistic Definition of Seizures
17 pages with 4 figures
Epilepsy & Behavior 22, S18-S28 (2011)
10.1016/j.yebeh.2011.09.009
null
q-bio.NC physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This writing: a) Draws attention to the intricacies inherent to the pursuit of a universal seizure definition even when powerful, well understood signal analysis methods are utilized to this end; b) Identifies this aim as a multi-objective optimization problem and discusses the advantages and disadvantages of adopting or rejecting a unitary seizure definition; c) Introduces a Probabilistic Measure of Seizure Activity to manage this thorny issue. The challenges posed by the attempt to define seizures unitarily may be partly related to their fractal properties and understood through a simplistic analogy to the so-called "Richardson effect". A revision of the time-honored conceptualization of seizures may be warranted to further advance epileptology.
[ { "created": "Sun, 13 Nov 2011 21:35:50 GMT", "version": "v1" } ]
2011-11-15
[ [ "Osorio", "Ivan", "" ], [ "Lyubushin", "Alexey", "" ], [ "Sornette", "Didier", "" ] ]
This writing: a) Draws attention to the intricacies inherent to the pursuit of a universal seizure definition even when powerful, well understood signal analysis methods are utilized to this end; b) Identifies this aim as a multi-objective optimization problem and discusses the advantages and disadvantages of adopting or rejecting a unitary seizure definition; c) Introduces a Probabilistic Measure of Seizure Activity to manage this thorny issue. The challenges posed by the attempt to define seizures unitarily may be partly related to their fractal properties and understood through a simplistic analogy to the so-called "Richardson effect". A revision of the time-honored conceptualization of seizures may be warranted to further advance epileptology.
2003.02340
Emily Diller
Emily Diller and Jason Parker
Variation in correlation between prognosis and histologic feature based on biopsy selection
9 Pages, 2 figures, 1 table
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Glioblastoma multiforme carries a dismal prognosis with poor response to gold-standard treatment. Innovative data analysis methods have been developed to characterize tumor genomic expression with histologic features. In a clinical setting, biopsy selection methods may be constrained by time and financial burden to the patient. Thus, we investigate the impact biopsy selection has on the correlation between prognostic and histologic features in 35 patients with GBM. We compared methods using limited volumes, moderate volumes, and en bloc tumor volumes. Additionally, we investigated the impact of random versus strategic methods for limited- and moderate-volume biopsies. Finally, we compared correlation results by selecting one to five small biopsies. We observed a wide range in correlation significance across selection methods. These findings may aid clinical management of GBM and direct better biopsy selection necessary for the development and deployment of targeted therapies.
[ { "created": "Wed, 4 Mar 2020 21:35:14 GMT", "version": "v1" } ]
2020-03-06
[ [ "Diller", "Emily", "" ], [ "Parker", "Jason", "" ] ]
Glioblastoma multiforme carries a dismal prognosis with poor response to gold-standard treatment. Innovative data analysis methods have been developed to characterize tumor genomic expression with histologic features. In a clinical setting, biopsy selection methods may be constrained by time and financial burden to the patient. Thus, we investigate the impact biopsy selection has on the correlation between prognostic and histologic features in 35 patients with GBM. We compared methods using limited volumes, moderate volumes, and en bloc tumor volumes. Additionally, we investigated the impact of random versus strategic methods for limited- and moderate-volume biopsies. Finally, we compared correlation results by selecting one to five small biopsies. We observed a wide range in correlation significance across selection methods. These findings may aid clinical management of GBM and direct better biopsy selection necessary for the development and deployment of targeted therapies.
2010.12332
Mohammad Reza Dayer
Mohammad Reza Dayer
Old Drugs for JAK-STAT Pathway Inhibition in COVID-19
null
null
10.13140/RG.2.2.33735.73122
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
COVID-19 poses a pandemic threat with more than 37 million cases, of which about 5 percent enter a critical stage characterized by cytokine storm and a hyperinflammatory condition, a state that often leads to intensive care unit admission and rapid mortality. The Janus kinase enzymes Jak-1, Jak-2, Jak-3, and Tyk2 seem to be good targets for inhibition by medications to control the cytokine storm in this context. In the present work, the inhibitory properties of different analgesic drugs on these targets are studied to assess their suitability for clinical application from different points of view. Our docking results indicated that naproxen, methadone, and amitriptyline, considering their higher binding energy, lower energy variance, and higher hydrophobicity, seem to exert stronger inhibitory effects on Janus kinase enzymes than those of the approved inhibitors, i.e., baricitinib and ruxolitinib. Accordingly, we suggest our wide list of candidate drugs, including indomethacin, etodolac, buprenorphine, rofecoxib, duloxetine, valdecoxib, naproxen, methadone, and amitriptyline, for clinical assessment of their usefulness in COVID-19 treatment, especially taking into account that, up to now, there is no approved cure for this disease.
[ { "created": "Fri, 23 Oct 2020 13:09:16 GMT", "version": "v1" } ]
2020-10-26
[ [ "Dayer", "Mohammad Reza", "" ] ]
COVID-19 poses a pandemic threat with more than 37 million cases, of which about 5 percent enter a critical stage characterized by cytokine storm and a hyperinflammatory condition, a state that often leads to intensive care unit admission and rapid mortality. The Janus kinase enzymes Jak-1, Jak-2, Jak-3, and Tyk2 seem to be good targets for inhibition by medications to control the cytokine storm in this context. In the present work, the inhibitory properties of different analgesic drugs on these targets are studied to assess their suitability for clinical application from different points of view. Our docking results indicated that naproxen, methadone, and amitriptyline, considering their higher binding energy, lower energy variance, and higher hydrophobicity, seem to exert stronger inhibitory effects on Janus kinase enzymes than those of the approved inhibitors, i.e., baricitinib and ruxolitinib. Accordingly, we suggest our wide list of candidate drugs, including indomethacin, etodolac, buprenorphine, rofecoxib, duloxetine, valdecoxib, naproxen, methadone, and amitriptyline, for clinical assessment of their usefulness in COVID-19 treatment, especially taking into account that, up to now, there is no approved cure for this disease.
1307.3426
Carl Whitfield
Carl A. Whitfield, Davide Marenduzzo, Rapha\"el Voituriez and Rhoda J. Hawkins
Active polar fluid flow in finite droplets
9 pages, 5 figures + 4 appendices (15 pages total)
Eur. Phys. J. E, 37 2 (2014) 8
10.1140/epje/i2014-14008-3
null
q-bio.CB cond-mat.soft q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a continuum level analytical model of a droplet of active contractile fluid consisting of filaments and motors. We calculate the steady state flows that result from a splayed polarisation of the filaments. We account for the interaction with an arbitrary external medium by imposing a viscous friction at the fixed droplet boundary. We then show that the droplet has non-zero force dipole and quadrupole moments, the latter of which is essential for self-propelled motion of the droplet at low Reynolds' number. Therefore, this calculation describes a simple mechanism for the motility of a droplet of active contractile fluid embedded in a 3D environment, which is relevant to cell migration in confinement (for example, embedded within a gel or tissue). Our analytical results predict how the system depends on various parameters such as the effective friction coefficient, the phenomenological activity parameter and the splay of the imposed polarisation.
[ { "created": "Fri, 12 Jul 2013 11:57:33 GMT", "version": "v1" }, { "created": "Mon, 28 Oct 2013 15:48:45 GMT", "version": "v2" }, { "created": "Wed, 19 Feb 2014 11:58:48 GMT", "version": "v3" } ]
2014-02-20
[ [ "Whitfield", "Carl A.", "" ], [ "Marenduzzo", "Davide", "" ], [ "Voituriez", "Raphaël", "" ], [ "Hawkins", "Rhoda J.", "" ] ]
We present a continuum level analytical model of a droplet of active contractile fluid consisting of filaments and motors. We calculate the steady state flows that result from a splayed polarisation of the filaments. We account for the interaction with an arbitrary external medium by imposing a viscous friction at the fixed droplet boundary. We then show that the droplet has non-zero force dipole and quadrupole moments, the latter of which is essential for self-propelled motion of the droplet at low Reynolds' number. Therefore, this calculation describes a simple mechanism for the motility of a droplet of active contractile fluid embedded in a 3D environment, which is relevant to cell migration in confinement (for example, embedded within a gel or tissue). Our analytical results predict how the system depends on various parameters such as the effective friction coefficient, the phenomenological activity parameter and the splay of the imposed polarisation.
q-bio/0608018
Emilio Hernandez-Garcia
E. Hernandez-Garcia, A. F. Rozenfeld, V. M. Eguiluz, S. Arnaud-Haond, and C. M. Duarte
Clone size distributions in networks of genetic similarity
17 pages, 4 figures. One figure improved and other minor changes. To appear in Physica D
Physica D, 214, 166-173 (2006)
10.1016/j.physd.2006.09.015
null
q-bio.PE cond-mat.stat-mech q-bio.QM
null
We build networks of genetic similarity in which the nodes are organisms sampled from biological populations. The procedure is illustrated by constructing networks from genetic data of a marine clonal plant. An important feature in the networks is the presence of clone subgraphs, i.e. sets of organisms with identical genotype forming clones. As a first step to understand the dynamics that has shaped these networks, we point up a relationship between a particular degree distribution and the clone size distribution in the populations. We construct a dynamical model for the population dynamics, focussing on the dynamics of the clones, and solve it for the required distributions. Scale free and exponentially decaying forms are obtained depending on parameter values, the first type being obtained when clonal growth is the dominant process. Average distributions are dominated by the power law behavior presented by the fastest replicating populations.
[ { "created": "Wed, 9 Aug 2006 14:17:24 GMT", "version": "v1" }, { "created": "Sat, 23 Sep 2006 08:28:21 GMT", "version": "v2" } ]
2008-01-23
[ [ "Hernandez-Garcia", "E.", "" ], [ "Rozenfeld", "A. F.", "" ], [ "Eguiluz", "V. M.", "" ], [ "Arnaud-Haond", "S.", "" ], [ "Duarte", "C. M.", "" ] ]
We build networks of genetic similarity in which the nodes are organisms sampled from biological populations. The procedure is illustrated by constructing networks from genetic data of a marine clonal plant. An important feature in the networks is the presence of clone subgraphs, i.e. sets of organisms with identical genotype forming clones. As a first step to understand the dynamics that has shaped these networks, we point up a relationship between a particular degree distribution and the clone size distribution in the populations. We construct a dynamical model for the population dynamics, focussing on the dynamics of the clones, and solve it for the required distributions. Scale free and exponentially decaying forms are obtained depending on parameter values, the first type being obtained when clonal growth is the dominant process. Average distributions are dominated by the power law behavior presented by the fastest replicating populations.
1505.06550
Yang Li
Yang Li and Xifeng Yan
MSPKmerCounter: A Fast and Memory Efficient Approach for K-mer Counting
null
null
null
null
q-bio.GN cs.CE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major challenge in next-generation genome sequencing (NGS) is to assemble massive overlapping short reads that are randomly sampled from DNA fragments. To complete assembly, one needs to finish a fundamental task in many leading assembly algorithms: counting the number of occurrences of k-mers (length-k substrings in sequences). The counting results are critical for many components in assembly (e.g., variant detection and read error correction). For large genomes, the k-mer counting task can easily consume a huge amount of memory, making it impossible for large-scale parallel assembly on commodity servers. In this paper, we develop MSPKmerCounter, a disk-based approach, to efficiently perform k-mer counting for large genomes using a small amount of memory. Our approach is based on a novel technique called Minimum Substring Partitioning (MSP). MSP breaks short reads into multiple disjoint partitions such that each partition can be loaded into memory and processed individually. By leveraging the overlaps among the k-mers derived from the same short read, MSP can achieve an astonishing compression ratio so that the I/O cost can be significantly reduced. For the task of k-mer counting, MSPKmerCounter offers a very fast and memory-efficient solution. Experimental results on large real-life short-read data sets demonstrate that MSPKmerCounter can achieve better overall performance than state-of-the-art k-mer counting approaches. MSPKmerCounter is available at http://www.cs.ucsb.edu/~yangli/MSPKmerCounter
[ { "created": "Mon, 25 May 2015 07:21:56 GMT", "version": "v1" } ]
2015-05-26
[ [ "Li", "Yang", "" ], [ "Yan", "Xifeng", "" ] ]
A major challenge in next-generation genome sequencing (NGS) is to assemble massive overlapping short reads that are randomly sampled from DNA fragments. To complete assembly, one needs to finish a fundamental task in many leading assembly algorithms: counting the number of occurrences of k-mers (length-k substrings in sequences). The counting results are critical for many components in assembly (e.g., variant detection and read error correction). For large genomes, the k-mer counting task can easily consume a huge amount of memory, making it impossible for large-scale parallel assembly on commodity servers. In this paper, we develop MSPKmerCounter, a disk-based approach, to efficiently perform k-mer counting for large genomes using a small amount of memory. Our approach is based on a novel technique called Minimum Substring Partitioning (MSP). MSP breaks short reads into multiple disjoint partitions such that each partition can be loaded into memory and processed individually. By leveraging the overlaps among the k-mers derived from the same short read, MSP can achieve an astonishing compression ratio so that the I/O cost can be significantly reduced. For the task of k-mer counting, MSPKmerCounter offers a very fast and memory-efficient solution. Experimental results on large real-life short-read data sets demonstrate that MSPKmerCounter can achieve better overall performance than state-of-the-art k-mer counting approaches. MSPKmerCounter is available at http://www.cs.ucsb.edu/~yangli/MSPKmerCounter
1906.02863
Qi Zhao
Qi Zhao, Lingli Zhang, Chun Shen, Jie Zhang, Jianfeng Feng
Double Generalized Linear Model Reveals Those with High Intelligence are More Similar in Cortical Thickness
null
null
null
null
q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most studies indicate that intelligence (g) is positively correlated with cortical thickness. However, the interindividual variability of cortical thickness has not been taken into account. In this study, we aimed to identify the association between intelligence and cortical thickness in adolescents from both the group's mean and dispersion points of view, utilizing the structural brain imaging from the Adolescent Brain and Cognitive Development (ABCD) Consortium, the largest cohort of early adolescents around 10 years old. The mean and dispersion parameters of cortical thickness and their association with intelligence were estimated using double generalized linear models (DGLM). We found that for the mean model part, the thickness of the frontal lobe like superior frontal gyrus was negatively related to intelligence, while the surface area was most positively associated with intelligence in the frontal lobe. In the dispersion part, intelligence was negatively correlated with the dispersion of cortical thickness in widespread areas, but not with the surface area. These results suggested that people with higher IQ are more similar in cortical thickness, which may be related to less differentiation or heterogeneity in cortical columns.
[ { "created": "Fri, 7 Jun 2019 02:11:21 GMT", "version": "v1" }, { "created": "Fri, 22 Nov 2019 10:34:16 GMT", "version": "v2" } ]
2019-11-25
[ [ "Zhao", "Qi", "" ], [ "Zhang", "Lingli", "" ], [ "Shen", "Chun", "" ], [ "Zhang", "Jie", "" ], [ "Feng", "Jianfeng", "" ] ]
Most studies indicate that intelligence (g) is positively correlated with cortical thickness. However, the interindividual variability of cortical thickness has not been taken into account. In this study, we aimed to identify the association between intelligence and cortical thickness in adolescents from both the group's mean and dispersion points of view, utilizing the structural brain imaging from the Adolescent Brain and Cognitive Development (ABCD) Consortium, the largest cohort of early adolescents around 10 years old. The mean and dispersion parameters of cortical thickness and their association with intelligence were estimated using double generalized linear models (DGLM). We found that for the mean model part, the thickness of the frontal lobe like superior frontal gyrus was negatively related to intelligence, while the surface area was most positively associated with intelligence in the frontal lobe. In the dispersion part, intelligence was negatively correlated with the dispersion of cortical thickness in widespread areas, but not with the surface area. These results suggested that people with higher IQ are more similar in cortical thickness, which may be related to less differentiation or heterogeneity in cortical columns.
2310.03186
Xaq Pitkow
Rajkumar Vasudeva Raju, Zhe Li, Scott Linderman, Xaq Pitkow
Inferring Inference
26 pages, 4 figures and 1 supplementary figure
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by/4.0/
Patterns of microcircuitry suggest that the brain has an array of repeated canonical computational units. Yet neural representations are distributed, so the relevant computations may only be related indirectly to single-neuron transformations. It thus remains an open challenge how to define canonical distributed computations. We integrate normative and algorithmic theories of neural computation into a mathematical framework for inferring canonical distributed computations from large-scale neural activity patterns. At the normative level, we hypothesize that the brain creates a structured internal model of its environment, positing latent causes that explain its sensory inputs, and uses those sensory inputs to infer the latent causes. At the algorithmic level, we propose that this inference process is a nonlinear message-passing algorithm on a graph-structured model of the world. Given a time series of neural activity during a perceptual inference task, our framework finds (i) the neural representation of relevant latent variables, (ii) interactions between these variables that define the brain's internal model of the world, and (iii) message-functions specifying the inference algorithm. These targeted computational properties are then statistically distinguishable due to the symmetries inherent in any canonical computation, up to a global transformation. As a demonstration, we simulate recordings for a model brain that implicitly implements an approximate inference algorithm on a probabilistic graphical model. Given its external inputs and noisy neural activity, we recover the latent variables, their neural representation and dynamics, and canonical message-functions. We highlight features of experimental design needed to successfully extract canonical computations from neural data. Overall, this framework provides a new tool for discovering interpretable structure in neural recordings.
[ { "created": "Wed, 4 Oct 2023 22:12:11 GMT", "version": "v1" }, { "created": "Fri, 6 Oct 2023 19:14:54 GMT", "version": "v2" }, { "created": "Fri, 13 Oct 2023 22:04:12 GMT", "version": "v3" } ]
2023-10-17
[ [ "Raju", "Rajkumar Vasudeva", "" ], [ "Li", "Zhe", "" ], [ "Linderman", "Scott", "" ], [ "Pitkow", "Xaq", "" ] ]
Patterns of microcircuitry suggest that the brain has an array of repeated canonical computational units. Yet neural representations are distributed, so the relevant computations may only be related indirectly to single-neuron transformations. It thus remains an open challenge how to define canonical distributed computations. We integrate normative and algorithmic theories of neural computation into a mathematical framework for inferring canonical distributed computations from large-scale neural activity patterns. At the normative level, we hypothesize that the brain creates a structured internal model of its environment, positing latent causes that explain its sensory inputs, and uses those sensory inputs to infer the latent causes. At the algorithmic level, we propose that this inference process is a nonlinear message-passing algorithm on a graph-structured model of the world. Given a time series of neural activity during a perceptual inference task, our framework finds (i) the neural representation of relevant latent variables, (ii) interactions between these variables that define the brain's internal model of the world, and (iii) message-functions specifying the inference algorithm. These targeted computational properties are then statistically distinguishable due to the symmetries inherent in any canonical computation, up to a global transformation. As a demonstration, we simulate recordings for a model brain that implicitly implements an approximate inference algorithm on a probabilistic graphical model. Given its external inputs and noisy neural activity, we recover the latent variables, their neural representation and dynamics, and canonical message-functions. We highlight features of experimental design needed to successfully extract canonical computations from neural data. Overall, this framework provides a new tool for discovering interpretable structure in neural recordings.
1512.01197
Geza Odor
Michael T. Gastner and G\'eza \'Odor
The topology of large Open Connectome networks for the human brain
14 pages, 6 figures, accepted version in Scientific Reports
Scientific Reports 6 (2016) 27249
10.1038/srep27249
null
q-bio.NC cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The structural human connectome (i.e.\ the network of fiber connections in the brain) can be analyzed at ever finer spatial resolution thanks to advances in neuroimaging. Here we analyze several large data sets for the human brain network made available by the Open Connectome Project. We apply statistical model selection to characterize the degree distributions of graphs containing up to $\simeq 10^6$ nodes and $\simeq 10^8$ edges. A three-parameter generalized Weibull (also known as a stretched exponential) distribution is a good fit to most of the observed degree distributions. For almost all networks, simple power laws cannot fit the data, but in some cases there is statistical support for power laws with an exponential cutoff. We also calculate the topological (graph) dimension $D$ and the small-world coefficient $\sigma$ of these networks. While $\sigma$ suggests a small-world topology, we found that $D < 4$ showing that long-distance connections provide only a small correction to the topology of the embedding three-dimensional space.
[ { "created": "Thu, 3 Dec 2015 19:13:54 GMT", "version": "v1" }, { "created": "Fri, 13 May 2016 12:42:43 GMT", "version": "v2" } ]
2016-06-08
[ [ "Gastner", "Michael T.", "" ], [ "Ódor", "Géza", "" ] ]
The structural human connectome (i.e.\ the network of fiber connections in the brain) can be analyzed at ever finer spatial resolution thanks to advances in neuroimaging. Here we analyze several large data sets for the human brain network made available by the Open Connectome Project. We apply statistical model selection to characterize the degree distributions of graphs containing up to $\simeq 10^6$ nodes and $\simeq 10^8$ edges. A three-parameter generalized Weibull (also known as a stretched exponential) distribution is a good fit to most of the observed degree distributions. For almost all networks, simple power laws cannot fit the data, but in some cases there is statistical support for power laws with an exponential cutoff. We also calculate the topological (graph) dimension $D$ and the small-world coefficient $\sigma$ of these networks. While $\sigma$ suggests a small-world topology, we found that $D < 4$ showing that long-distance connections provide only a small correction to the topology of the embedding three-dimensional space.
0706.2077
Michael Sadovsky
Michael G.Sadovsky, Julia A.Putintzeva
Codon Usage Bias Measured Through Entropy Approach
15 pages, 1 figure
null
null
null
q-bio.GN
null
The codon usage bias measure is defined through the mutual entropy of the real codon frequency distribution against a quasi-equilibrium one. The latter is defined in three ways: (1) the frequencies of synonymous codons are supposed to be equal (i.e., the arithmetic mean of their frequencies); (2) it coincides with the frequency distribution of triplets; and, finally, (3) the quasi-equilibrium frequency distribution is defined as the expected frequency of codons derived from the dinucleotide frequency distribution. The measure of bias in codon usage is calculated for 125 bacterial genomes.
[ { "created": "Thu, 14 Jun 2007 10:07:48 GMT", "version": "v1" } ]
2007-06-15
[ [ "Sadovsky", "Michael G.", "" ], [ "Putintzeva", "Julia A.", "" ] ]
The codon usage bias measure is defined through the mutual entropy of the real codon frequency distribution against a quasi-equilibrium one. The latter is defined in three ways: (1) the frequencies of synonymous codons are supposed to be equal (i.e., the arithmetic mean of their frequencies); (2) it coincides with the frequency distribution of triplets; and, finally, (3) the quasi-equilibrium frequency distribution is defined as the expected frequency of codons derived from the dinucleotide frequency distribution. The measure of bias in codon usage is calculated for 125 bacterial genomes.
1506.08683
Oskar Hallatschek Dr.
Oskar Hallatschek and Lukas Geyrhofer
Collective Fluctuations in models of adaptation
null
null
null
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamics of adaptation is difficult to predict because it is highly stochastic even in large populations. The uncertainty emerges from number fluctuations, called genetic drift, arising in the small number of particularly fit individuals of the population. Random genetic drift in this evolutionary vanguard also limits the speed of adaptation, which diverges in deterministic models that ignore these chance effects. Several approaches have been developed to analyze the crucial role of noise on the expected dynamics of adaptation, including the mean fitness of the entire population, or the fate of newly arising beneficial or deleterious mutations. However, very little is known about how genetic drift causes fluctuations to emerge on the population level, including fitness distribution variations and speed variations. Yet, these phenomena control the replicability of experimental evolution experiments and are key to a truly predictive understanding of evolutionary processes. Here, we develop an exact approach to these emergent fluctuations by a combination of computational and analytical methods. We show, analytically, that the infinite hierarchy of moment equations can be closed at any arbitrary order by a suitable choice of a dynamical constraint. This constraint regulates (rather than fixes) the population size, accounting for resource limitations. The resulting linear equations, which can be accurately solved numerically, exhibit fluctuation-induced terms that amplify short-distance correlations and suppress long-distance ones. Importantly, by accounting for the dynamics of sub-populations, we provide a systematic route to key population genetic quantities, such as fixation probabilities and decay rates of the genetic diversity.
[ { "created": "Mon, 29 Jun 2015 15:35:40 GMT", "version": "v1" } ]
2015-06-30
[ [ "Hallatschek", "Oskar", "" ], [ "Geyrhofer", "Lukas", "" ] ]
The dynamics of adaptation is difficult to predict because it is highly stochastic even in large populations. The uncertainty emerges from number fluctuations, called genetic drift, arising in the small number of particularly fit individuals of the population. Random genetic drift in this evolutionary vanguard also limits the speed of adaptation, which diverges in deterministic models that ignore these chance effects. Several approaches have been developed to analyze the crucial role of noise on the expected dynamics of adaptation, including the mean fitness of the entire population, or the fate of newly arising beneficial or deleterious mutations. However, very little is known about how genetic drift causes fluctuations to emerge on the population level, including fitness distribution variations and speed variations. Yet, these phenomena control the replicability of experimental evolution experiments and are key to a truly predictive understanding of evolutionary processes. Here, we develop an exact approach to these emergent fluctuations by a combination of computational and analytical methods. We show, analytically, that the infinite hierarchy of moment equations can be closed at any arbitrary order by a suitable choice of a dynamical constraint. This constraint regulates (rather than fixes) the population size, accounting for resource limitations. The resulting linear equations, which can be accurately solved numerically, exhibit fluctuation-induced terms that amplify short-distance correlations and suppress long-distance ones. Importantly, by accounting for the dynamics of sub-populations, we provide a systematic route to key population genetic quantities, such as fixation probabilities and decay rates of the genetic diversity.
q-bio/0504016
Kunihiko Kaneko
Kunihiko Kaneko
On Recursive Production and Evolvability of Cells: Catalytic Reaction Network Approach
46 pages 28 figures
Adv Chem. Phys 130 (2005) 543-598
null
null
q-bio.MN cond-mat.stat-mech nlin.AO physics.bio-ph q-bio.CB
null
To unveil the logic of cell from a level of chemical reaction dynamics, we need to clarify how ensemble of chemicals can autonomously produce the set of chemical, without assuming a specific external control echanism. A cell consists of a huge number of chemical species that catalyze each other. Often the number of each molecule species is not so large, and accordingly the number fluctuations in each molecule speciescan be large. In the amidst of such diversity and large fluctuations, how can a cell make recursive production? On the other hand, a cell can change its state to evolve to a different type over a longer time span. How are reproduction and evolution compatible? We address these questions, based on several model studies with catalytic reaction network. In the present survey paper, we first formulate basic questions on the recursiveness and evolvability of a cell, and then state the standpoint of our research to answer the questions, that is termed as 'constructive biology'. Based on this standpoint, we present general strategy of modeling a cell as a chemical reaction network.
[ { "created": "Tue, 12 Apr 2005 02:02:44 GMT", "version": "v1" } ]
2007-05-23
[ [ "Kaneko", "Kunihiko", "" ] ]
To unveil the logic of the cell at the level of chemical reaction dynamics, we need to clarify how an ensemble of chemicals can autonomously produce that same set of chemicals, without assuming a specific external control mechanism. A cell consists of a huge number of chemical species that catalyze each other. Often the number of molecules of each species is not so large, and accordingly the number fluctuations in each molecule species can be large. Amidst such diversity and large fluctuations, how can a cell achieve recursive production? On the other hand, a cell can change its state and evolve into a different type over a longer time span. How are reproduction and evolution compatible? We address these questions based on several model studies with catalytic reaction networks. In the present survey paper, we first formulate basic questions on the recursiveness and evolvability of a cell, and then state the standpoint of our research for answering these questions, termed 'constructive biology'. Based on this standpoint, we present a general strategy for modeling a cell as a chemical reaction network.
0711.2208
Jacob Bock Axelsen
Jacob Bock Axelsen, Sebastian Bernhardsson, Kim Sneppen
One Hub-One Process: A Tool Based View on Regulatory Network Topology
18 pages, 3 figures, 5 supplementary figures
BMC Systems Biology 2008, 2:25
10.1186/1752-0509-2-25
null
q-bio.MN cond-mat.soft
null
The relationship between the regulatory design and the functionality of molecular networks is a key issue in biology. Modules and motifs have been associated to various cellular processes, thereby providing anecdotal evidence for performance based localization on molecular networks. To quantify structure-function relationship we investigate similarities of proteins which are close in the regulatory network of the yeast Saccharomyces Cerevisiae. We find that the topology of the regulatory network show weak remnants of its history of network reorganizations, but strong features of co-regulated proteins associated to similar tasks. This suggests that local topological features of regulatory networks, including broad degree distributions, emerge as an implicit result of matching a number of needed processes to a finite toolbox of proteins.
[ { "created": "Wed, 14 Nov 2007 14:16:39 GMT", "version": "v1" }, { "created": "Thu, 29 Nov 2007 15:02:23 GMT", "version": "v2" } ]
2008-03-04
[ [ "Axelsen", "Jacob Bock", "" ], [ "Bernhardsson", "Sebastian", "" ], [ "Sneppen", "Kim", "" ] ]
The relationship between the regulatory design and the functionality of molecular networks is a key issue in biology. Modules and motifs have been associated with various cellular processes, thereby providing anecdotal evidence for performance-based localization on molecular networks. To quantify the structure-function relationship, we investigate similarities of proteins which are close in the regulatory network of the yeast Saccharomyces cerevisiae. We find that the topology of the regulatory network shows weak remnants of its history of network reorganizations, but strong features of co-regulated proteins associated with similar tasks. This suggests that local topological features of regulatory networks, including broad degree distributions, emerge as an implicit result of matching a number of needed processes to a finite toolbox of proteins.
1611.06065
Maude Pupin
Qassim Esmaeel, Maude Pupin (CRIStAL, BONSAI), Nam Phuong Kieu, Gabrielle Chataign\'e, Max B\'echet, Jovana Deravel, Fran\c{c}ois Krier, Monica H\"ofte, Philippe Jacques, Val\'erie Lecl\`ere (CRIStAL, BONSAI)
Burkholderia genome mining for nonribosomal peptide synthetases reveals a great potential for novel siderophores and lipopeptides synthesis
null
MicrobiologyOpen, 2016, 5 (3), pp.512 - 526
10.1002/mbo3.347
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Burkholderia is an important genus encompassing a variety of species, including pathogenic strains as well as strains that promote plant growth. We have carried out a global strategy, which combined two complementary approaches. The first one is genome guided with deep analysis of genome sequences and the second one is assay guided with experiments to support the predictions obtained in silico. This efficient screening for new secondary metabolites, performed on 48 gapless genomes of Burkholderia species, revealed a total of 161 clusters containing nonribosomal peptide synthetases (NRPSs), with the potential to synthesize at least 11 novel products. Most of them are siderophores or lipopeptides, two classes of products with potential application in biocontrol. The strategy led to the identification, for the first time, of the cluster for cepaciachelin biosynthesis in the genome of Burkholderia ambifaria AMMD and a cluster corresponding to a new malleobactin-like siderophore, called phymabactin, was identified in Burkholderia phymatum STM815 genome. In both cases, the siderophore was produced when the strain was grown in iron-limited conditions. Elsewhere, the cluster for the antifungal burkholdin was detected in the genome of B. ambifaria AMMD and also Burkholderia sp. KJ006. Burkholderia pseudomallei strains harbor the genetic potential to produce a novel lipopeptide called burkhomycin, containing a peptidyl moiety of 12 monomers. A mixture of lipopeptides produced by Burkholderia rhizoxinica lowered the surface tension of the supernatant from 70 to 27 mN/m. The production of nonribosomal secondary metabolites seems related to the three phylogenetic groups obtained from 16S rRNA sequences. Moreover, the genome-mining approach gave new insights into the nonribosomal synthesis exemplified by the identification of dual C/E domains in lipopeptide NRPSs, up to now essentially found in Pseudomonas strains.
[ { "created": "Fri, 18 Nov 2016 13:28:43 GMT", "version": "v1" } ]
2016-11-21
[ [ "Esmaeel", "Qassim", "", "CRIStAL, BONSAI" ], [ "Pupin", "Maude", "", "CRIStAL, BONSAI" ], [ "Kieu", "Nam Phuong", "", "CRIStAL, BONSAI" ], [ "Chataigné", "Gabrielle", "", "CRIStAL, BONSAI" ], [ "Béchet", "Max", "", "CRIStAL, BONSAI" ], [ "Deravel", "Jovana", "", "CRIStAL, BONSAI" ], [ "Krier", "François", "", "CRIStAL, BONSAI" ], [ "Höfte", "Monica", "", "CRIStAL, BONSAI" ], [ "Jacques", "Philippe", "", "CRIStAL, BONSAI" ], [ "Leclère", "Valérie", "", "CRIStAL, BONSAI" ] ]
Burkholderia is an important genus encompassing a variety of species, including pathogenic strains as well as strains that promote plant growth. We have carried out a global strategy, which combined two complementary approaches. The first one is genome guided with deep analysis of genome sequences and the second one is assay guided with experiments to support the predictions obtained in silico. This efficient screening for new secondary metabolites, performed on 48 gapless genomes of Burkholderia species, revealed a total of 161 clusters containing nonribosomal peptide synthetases (NRPSs), with the potential to synthesize at least 11 novel products. Most of them are siderophores or lipopeptides, two classes of products with potential application in biocontrol. The strategy led to the identification, for the first time, of the cluster for cepaciachelin biosynthesis in the genome of Burkholderia ambifaria AMMD and a cluster corresponding to a new malleobactin-like siderophore, called phymabactin, was identified in Burkholderia phymatum STM815 genome. In both cases, the siderophore was produced when the strain was grown in iron-limited conditions. Elsewhere, the cluster for the antifungal burkholdin was detected in the genome of B. ambifaria AMMD and also Burkholderia sp. KJ006. Burkholderia pseudomallei strains harbor the genetic potential to produce a novel lipopeptide called burkhomycin, containing a peptidyl moiety of 12 monomers. A mixture of lipopeptides produced by Burkholderia rhizoxinica lowered the surface tension of the supernatant from 70 to 27 mN/m. The production of nonribosomal secondary metabolites seems related to the three phylogenetic groups obtained from 16S rRNA sequences. Moreover, the genome-mining approach gave new insights into the nonribosomal synthesis exemplified by the identification of dual C/E domains in lipopeptide NRPSs, up to now essentially found in Pseudomonas strains.
q-bio/0605013
Kate Sugden Ms
K. E. P. Sugden, M. R. Evans, W. C. K. Poon, N. D. Read
A model of hyphal tip growth involving microtubule-based transport
5 pages, 5 figures
Phys. Rev. E 75, 031909 (2007)
10.1103/PhysRevE.75.031909
null
q-bio.SC cond-mat.stat-mech
null
We propose a simple model for mass transport within a fungal hypha and its subsequent growth. Inspired by the role of microtubule-transported vesicles, we embody the internal dynamics of mass inside a hypha with mutually excluding particles progressing stochastically along a growing one-dimensional lattice. The connection between long range transport of materials for growth, and the resulting extension of the hyphal tip has not previously been addressed in the modelling literature. We derive and analyse mean-field equations for the model and present a phase diagram of its steady state behaviour, which we compare to simulations. We discuss our results in the context of the filamentous fungus, Neurospora crassa.
[ { "created": "Tue, 9 May 2006 10:46:42 GMT", "version": "v1" }, { "created": "Thu, 29 Mar 2007 16:08:06 GMT", "version": "v2" } ]
2007-05-23
[ [ "Sugden", "K. E. P.", "" ], [ "Evans", "M. R.", "" ], [ "Poon", "W. C. K.", "" ], [ "Read", "N. D.", "" ] ]
We propose a simple model for mass transport within a fungal hypha and its subsequent growth. Inspired by the role of microtubule-transported vesicles, we embody the internal dynamics of mass inside a hypha with mutually excluding particles progressing stochastically along a growing one-dimensional lattice. The connection between long range transport of materials for growth, and the resulting extension of the hyphal tip has not previously been addressed in the modelling literature. We derive and analyse mean-field equations for the model and present a phase diagram of its steady state behaviour, which we compare to simulations. We discuss our results in the context of the filamentous fungus, Neurospora crassa.
1310.0507
Sepehr Ehsani
Sepehr Ehsani
Correlative-causative structures and the 'pericause': an analysis of causation and a model based on cellular biology
7 pages, 4 figures
null
null
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of molecular biology has led to the identification of definitive causative factors for a number of diseases, most of which are monogenic. Causes for most common diseases across the population, however, seem elusive and cannot be pinpointed to a limited number of genes or genetic pathways. This realization has led to the idea of personalized medicine and treating each case individually. Nevertheless, since each common disease appears to have the same endpoint and phenotypic features in all diagnosed individuals, the search for a unifying cause will still continue. Given that multivariate scientific data is of a correlative nature and causation is always inferred, a simple formalization of the general structure of cause and correlation is presented herein. Furthermore, the context in which a causal structure could take shape, termed the 'pericause', is proposed as a tractable and uninvestigated concept which could theoretically play a crucial role in determining the effects of a cause.
[ { "created": "Tue, 1 Oct 2013 22:22:02 GMT", "version": "v1" } ]
2013-10-03
[ [ "Ehsani", "Sepehr", "" ] ]
The advent of molecular biology has led to the identification of definitive causative factors for a number of diseases, most of which are monogenic. Causes for most common diseases across the population, however, seem elusive and cannot be pinpointed to a limited number of genes or genetic pathways. This realization has led to the idea of personalized medicine and treating each case individually. Nevertheless, since each common disease appears to have the same endpoint and phenotypic features in all diagnosed individuals, the search for a unifying cause will still continue. Given that multivariate scientific data is of a correlative nature and causation is always inferred, a simple formalization of the general structure of cause and correlation is presented herein. Furthermore, the context in which a causal structure could take shape, termed the 'pericause', is proposed as a tractable and uninvestigated concept which could theoretically play a crucial role in determining the effects of a cause.
1712.08336
Shankha Sanyal
Sayan Nag, Shankha Sanyal, Archi Banerjee, Ranjan Sengupta and Dipak Ghosh
Music of Brain and Music on Brain: A Novel EEG Sonification approach
6 pages, 4 figures; Presented in the International Symposium on Frontiers of Research in speech and Music (FRSM)-2017, held at NIT, Rourkela in 15-16 December 2017
null
null
null
q-bio.NC cs.SD eess.AS physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Can we hear the sound of our brain? Is there any technique which can enable us to hear the neuro-electrical impulses originating from the different lobes of brain? The answer to all these questions is YES. In this paper we present a novel method with which we can sonify the Electroencephalogram (EEG) data recorded in rest state as well as under the influence of a simplest acoustical stimuli - a tanpura drone. The tanpura drone has a very simple yet very complex acoustic features, which is generally used for creation of an ambiance during a musical performance. Hence, for this pilot project we chose to study the correlation between a simple acoustic stimuli (tanpura drone) and sonified EEG data. Till date, there have been no study which deals with the direct correlation between a bio-signal and its acoustic counterpart and how that correlation varies under the influence of different types of stimuli. This is the first of its kind study which bridges this gap and looks for a direct correlation between music signal and EEG data using a robust mathematical microscope called Multifractal Detrended Cross Correlation Analysis (MFDXA). For this, we took EEG data of 10 participants in 2 min 'rest state' (i.e. with white noise) and in 2 min 'tanpura drone' (musical stimulus) listening condition. Next, the EEG signals from different electrodes were sonified and MFDXA technique was used to assess the degree of correlation (or the cross correlation coefficient) between tanpura signal and EEG signals. The variation of {\gamma}x for different lobes during the course of the experiment also provides major interesting new information. Only music stimuli has the ability to engage several areas of the brain significantly unlike other stimuli (which engages specific domains only).
[ { "created": "Fri, 22 Dec 2017 08:30:47 GMT", "version": "v1" } ]
2017-12-25
[ [ "Nag", "Sayan", "" ], [ "Sanyal", "Shankha", "" ], [ "Banerjee", "Archi", "" ], [ "Sengupta", "Ranjan", "" ], [ "Ghosh", "Dipak", "" ] ]
Can we hear the sound of our brain? Is there any technique that can enable us to hear the neuro-electrical impulses originating from the different lobes of the brain? The answer to these questions is yes. In this paper we present a novel method with which we can sonify the electroencephalogram (EEG) data recorded in the rest state as well as under the influence of one of the simplest acoustic stimuli - a tanpura drone. The tanpura drone has simple yet complex acoustic features and is generally used to create an ambiance during a musical performance. Hence, for this pilot project we chose to study the correlation between a simple acoustic stimulus (the tanpura drone) and sonified EEG data. To date, no study has dealt with the direct correlation between a bio-signal and its acoustic counterpart, and how that correlation varies under the influence of different types of stimuli. This is the first study of its kind to bridge this gap and look for a direct correlation between a music signal and EEG data using a robust mathematical microscope called Multifractal Detrended Cross Correlation Analysis (MFDXA). For this, we took EEG data of 10 participants in a 2 min 'rest state' (i.e. with white noise) and in a 2 min 'tanpura drone' (musical stimulus) listening condition. Next, the EEG signals from different electrodes were sonified, and the MFDXA technique was used to assess the degree of correlation (the cross-correlation coefficient) between the tanpura signal and the EEG signals. The variation of {\gamma}x for different lobes during the course of the experiment also provides interesting new information. Only musical stimuli have the ability to engage several areas of the brain significantly, unlike other stimuli (which engage specific domains only).
1905.10834
Neil Oxtoby
Neil P. Oxtoby, Fabio S. Ferreira, Agoston Mihalik, Tong Wu, Mikael Brudfors, Hongxiang Lin, Anita Rau, Stefano B. Blumberg, Maria Robu, Cemre Zor, Maira Tariq, Maria Del Mar Estarellas Garcia, Baris Kanber, Daniil I. Nikitichev, Janaina Mourao-Miranda
ABCD Neurocognitive Prediction Challenge 2019: Predicting individual residual fluid intelligence scores from cortical grey matter morphology
8 pages plus references, 3 figures, 2 tables. Submission to the ABCD Neurocognitive Prediction Challenge at MICCAI 2019
null
null
null
q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We predicted residual fluid intelligence scores from T1-weighted MRI data available as part of the ABCD NP Challenge 2019, using morphological similarity of grey-matter regions across the cortex. Individual structural covariance networks (SCN) were abstracted into graph-theory metrics averaged over nodes across the brain and in data-driven communities/modules. Metrics included degree, path length, clustering coefficient, centrality, rich club coefficient, and small-worldness. These features derived from the training set were used to build various regression models for predicting residual fluid intelligence scores, with performance evaluated both using cross-validation within the training set and using the held-out validation set. Our predictions on the test set were generated with a support vector regression model trained on the training set. We found minimal improvement over predicting a zero residual fluid intelligence score across the sample population, implying that structural covariance networks calculated from T1-weighted MR imaging data provide little information about residual fluid intelligence.
[ { "created": "Sun, 26 May 2019 16:38:28 GMT", "version": "v1" } ]
2019-05-28
[ [ "Oxtoby", "Neil P.", "" ], [ "Ferreira", "Fabio S.", "" ], [ "Mihalik", "Agoston", "" ], [ "Wu", "Tong", "" ], [ "Brudfors", "Mikael", "" ], [ "Lin", "Hongxiang", "" ], [ "Rau", "Anita", "" ], [ "Blumberg", "Stefano B.", "" ], [ "Robu", "Maria", "" ], [ "Zor", "Cemre", "" ], [ "Tariq", "Maira", "" ], [ "Garcia", "Maria Del Mar Estarellas", "" ], [ "Kanber", "Baris", "" ], [ "Nikitichev", "Daniil I.", "" ], [ "Mourao-Miranda", "Janaina", "" ] ]
We predicted residual fluid intelligence scores from T1-weighted MRI data available as part of the ABCD NP Challenge 2019, using morphological similarity of grey-matter regions across the cortex. Individual structural covariance networks (SCN) were abstracted into graph-theory metrics averaged over nodes across the brain and in data-driven communities/modules. Metrics included degree, path length, clustering coefficient, centrality, rich club coefficient, and small-worldness. These features derived from the training set were used to build various regression models for predicting residual fluid intelligence scores, with performance evaluated both using cross-validation within the training set and using the held-out validation set. Our predictions on the test set were generated with a support vector regression model trained on the training set. We found minimal improvement over predicting a zero residual fluid intelligence score across the sample population, implying that structural covariance networks calculated from T1-weighted MR imaging data provide little information about residual fluid intelligence.
1705.08324
Helen Liedtke
H.Liedtke, A.T. McBride, S. Sivarasu, S. Roche
Computational simulation of bone remodelling post reverse total shoulder arthroplasty
null
null
null
null
q-bio.TO q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bone is a living material. It adapts, in an optimal sense, to loading by changing its density and trabeculae architecture - a process termed remodelling. Implanted orthopaedic devices can significantly alter the loading on the surrounding bone, which can have a detrimental impact on bone ingrowth that is critical to ensure secure implant fixation. In this contribution, a computational model that accounts for bone remodelling is developed to elucidate the response of bone following a reverse shoulder procedure for rotator cuff deficient patients. The physical process of remodelling is modelled using continuum scale, open system thermodynamics whereby the density of bone evolves isotropically in response to the loading it experiences. The fully-nonlinear continuum theory is solved approximately using the finite element method. The code developed to model the reverse shoulder procedure is validated using a series of benchmark problems.
[ { "created": "Tue, 11 Apr 2017 09:38:15 GMT", "version": "v1" } ]
2017-05-24
[ [ "Liedtke", "H.", "" ], [ "McBride", "A. T.", "" ], [ "Sivarasu", "S.", "" ], [ "Roche", "S.", "" ] ]
Bone is a living material. It adapts, in an optimal sense, to loading by changing its density and trabeculae architecture - a process termed remodelling. Implanted orthopaedic devices can significantly alter the loading on the surrounding bone, which can have a detrimental impact on bone ingrowth that is critical to ensure secure implant fixation. In this contribution, a computational model that accounts for bone remodelling is developed to elucidate the response of bone following a reverse shoulder procedure for rotator cuff deficient patients. The physical process of remodelling is modelled using continuum scale, open system thermodynamics whereby the density of bone evolves isotropically in response to the loading it experiences. The fully-nonlinear continuum theory is solved approximately using the finite element method. The code developed to model the reverse shoulder procedure is validated using a series of benchmark problems.
2006.04176
Zafeirios Fountas PhD
Zafeirios Fountas, Noor Sajid, Pedro A.M. Mediano, Karl Friston
Deep active inference agents using Monte-Carlo methods
To appear in NeurIPS 2020
null
null
null
q-bio.NC cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Active inference is a Bayesian framework for understanding biological intelligence. The underlying theory brings together perception and action under one single imperative: minimizing free energy. However, despite its theoretical utility in explaining intelligence, computational implementations have been restricted to low-dimensional and idealized situations. In this paper, we present a neural architecture for building deep active inference agents operating in complex, continuous state-spaces using multiple forms of Monte-Carlo (MC) sampling. For this, we introduce a number of techniques, novel to active inference. These include: i) selecting free-energy-optimal policies via MC tree search, ii) approximating this optimal policy distribution via a feed-forward `habitual' network, iii) predicting future parameter belief updates using MC dropouts and, finally, iv) optimizing state transition precision (a high-end form of attention). Our approach enables agents to learn environmental dynamics efficiently, while maintaining task performance, in relation to reward-based counterparts. We illustrate this in a new toy environment, based on the dSprites data-set, and demonstrate that active inference agents automatically create disentangled representations that are apt for modeling state transitions. In a more complex Animal-AI environment, our agents (using the same neural architecture) are able to simulate future state transitions and actions (i.e., plan), to evince reward-directed navigation - despite temporary suspension of visual input. These results show that deep active inference - equipped with MC methods - provides a flexible framework to develop biologically-inspired intelligent agents, with applications in both machine learning and cognitive science.
[ { "created": "Sun, 7 Jun 2020 15:10:42 GMT", "version": "v1" }, { "created": "Thu, 22 Oct 2020 13:17:36 GMT", "version": "v2" } ]
2020-10-23
[ [ "Fountas", "Zafeirios", "" ], [ "Sajid", "Noor", "" ], [ "Mediano", "Pedro A. M.", "" ], [ "Friston", "Karl", "" ] ]
Active inference is a Bayesian framework for understanding biological intelligence. The underlying theory brings together perception and action under one single imperative: minimizing free energy. However, despite its theoretical utility in explaining intelligence, computational implementations have been restricted to low-dimensional and idealized situations. In this paper, we present a neural architecture for building deep active inference agents operating in complex, continuous state-spaces using multiple forms of Monte-Carlo (MC) sampling. For this, we introduce a number of techniques, novel to active inference. These include: i) selecting free-energy-optimal policies via MC tree search, ii) approximating this optimal policy distribution via a feed-forward `habitual' network, iii) predicting future parameter belief updates using MC dropouts and, finally, iv) optimizing state transition precision (a high-end form of attention). Our approach enables agents to learn environmental dynamics efficiently, while maintaining task performance, in relation to reward-based counterparts. We illustrate this in a new toy environment, based on the dSprites data-set, and demonstrate that active inference agents automatically create disentangled representations that are apt for modeling state transitions. In a more complex Animal-AI environment, our agents (using the same neural architecture) are able to simulate future state transitions and actions (i.e., plan), to evince reward-directed navigation - despite temporary suspension of visual input. These results show that deep active inference - equipped with MC methods - provides a flexible framework to develop biologically-inspired intelligent agents, with applications in both machine learning and cognitive science.
2104.12708
Juan Jim\'enez-S\'anchez
V\'ictor M. P\'erez-Garc\'ia, Gabriel F. Calvo, Jes\'us J. Bosque, Odelaisy Le\'on-Triana, Juan Jim\'enez, Juli\'an P\'erez-Beteta, Juan Belmonte-Beitia, Manuel Valiente, Luc\'ia Zhu, Pedro Garc\'ia-G\'omez, Pilar S\'anchez-G\'omez, Esther Hern\'andez-San Miguel, Rafael Hortig\"uela, Youness Azimzade, David Molina-Garc\'ia, \'Alvaro Mart\'inez, \'Angel Acosta Rojas, Ana Ortiz de Mendivil, Francois Vallette, Philippe Schucht, Michael Murek, Mar\'ia P\'erez-Cano, David Albillo, Antonio F. Honguero Mart\'inez, Germ\'an A. Jim\'enez Londo\~no, Estanislao Arana, Ana M. Garc\'ia Vicente
Universal scaling laws rule explosive growth in human cancers
null
Nat.Phys. 16 (2020) 1232-1237
10.1038/s41567-020-0978-6
null
q-bio.TO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Most physical and other natural systems are complex entities composed of a large number of interacting individual elements. It is a surprising fact that they often obey the so-called scaling laws relating an observable quantity with a measure of the size of the system. Here we describe the discovery of universal superlinear metabolic scaling laws in human cancers. This dependence underpins increasing tumour aggressiveness, due to evolutionary dynamics, which leads to an explosive growth as the disease progresses. We validated this dynamic using longitudinal volumetric data of different histologies from large cohorts of cancer patients. To explain our observations we put forward increasingly complex biologically-inspired mathematical models that captured the key processes governing tumor growth. Our models predicted that the emergence of superlinear allometric scaling laws is an inherently three-dimensional phenomenon. Moreover, the scaling laws thereby identified allowed us to define a set of metabolic metrics with prognostic value, thus providing added clinical utility to the base findings.
[ { "created": "Thu, 22 Apr 2021 09:49:47 GMT", "version": "v1" } ]
2021-04-27
[ [ "Pérez-García", "Víctor M.", "" ], [ "Calvo", "Gabriel F.", "" ], [ "Bosque", "Jesús J.", "" ], [ "León-Triana", "Odelaisy", "" ], [ "Jiménez", "Juan", "" ], [ "Pérez-Beteta", "Julián", "" ], [ "Belmonte-Beitia", "Juan", "" ], [ "Valiente", "Manuel", "" ], [ "Zhu", "Lucía", "" ], [ "García-Gómez", "Pedro", "" ], [ "Sánchez-Gómez", "Pilar", "" ], [ "Miguel", "Esther Hernández-San", "" ], [ "Hortigüela", "Rafael", "" ], [ "Azimzade", "Youness", "" ], [ "Molina-García", "David", "" ], [ "Martínez", "Álvaro", "" ], [ "Rojas", "Ángel Acosta", "" ], [ "de Mendivil", "Ana Ortiz", "" ], [ "Vallette", "Francois", "" ], [ "Schucht", "Philippe", "" ], [ "Murek", "Michael", "" ], [ "Pérez-Cano", "María", "" ], [ "Albillo", "David", "" ], [ "Martínez", "Antonio F. Honguero", "" ], [ "Londoño", "Germán A. Jiménez", "" ], [ "Arana", "Estanislao", "" ], [ "Vicente", "Ana M. García", "" ] ]
Most physical and other natural systems are complex entities composed of a large number of interacting individual elements. It is a surprising fact that they often obey the so-called scaling laws relating an observable quantity with a measure of the size of the system. Here we describe the discovery of universal superlinear metabolic scaling laws in human cancers. This dependence underpins increasing tumour aggressiveness, due to evolutionary dynamics, which leads to an explosive growth as the disease progresses. We validated this dynamic using longitudinal volumetric data of different histologies from large cohorts of cancer patients. To explain our observations we put forward increasingly complex biologically-inspired mathematical models that captured the key processes governing tumor growth. Our models predicted that the emergence of superlinear allometric scaling laws is an inherently three-dimensional phenomenon. Moreover, the scaling laws thereby identified allowed us to define a set of metabolic metrics with prognostic value, thus providing added clinical utility to the base findings.
0802.1668
Andrea Cavagna
Andrea Cavagna, Irene Giardina, Alberto Orlandi, Giorgio Parisi, Andrea Procaccini, Massimiliano Viale, Vladimir Zdravkovic
The STARFLAG handbook on collective animal behaviour: Part I, empirical methods
To be published in Animal Behaviour
Animal Behaviour 76 (1), 217-236 (2008)
null
null
q-bio.QM cond-mat.stat-mech q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The most startling examples of collective animal behaviour are provided by very large and cohesive groups moving in three dimensions. Paradigmatic examples are bird flocks, fish schools and insect swarms. However, because of the sheer technical difficulty of obtaining 3D data, empirical studies conducted to date have only considered loose groups of a few tens of animals. Moreover, these studies were very seldom conducted in the field. Recently the STARFLAG project achieved the 3D reconstruction of thousands of birds under field conditions, thus opening the way to a new generation of quantitative studies of collective animal behaviour. Here, we review the main technical problems in 3D data collection of large animal groups and we outline some of the methodological solutions adopted by the STARFLAG project. In particular, we explain how to solve the stereoscopic correspondence - or matching - problem, which was the major bottleneck of all 3D studies in the past.
[ { "created": "Tue, 12 Feb 2008 15:34:13 GMT", "version": "v1" } ]
2014-10-10
[ [ "Cavagna", "Andrea", "" ], [ "Giardina", "Irene", "" ], [ "Orlandi", "Alberto", "" ], [ "Parisi", "Giorgio", "" ], [ "Procaccini", "Andrea", "" ], [ "Viale", "Massimiliano", "" ], [ "Zdravkovic", "Vladimir", "" ] ]
The most startling examples of collective animal behaviour are provided by very large and cohesive groups moving in three dimensions. Paradigmatic examples are bird flocks, fish schools and insect swarms. However, because of the sheer technical difficulty of obtaining 3D data, empirical studies conducted to date have only considered loose groups of a few tens of animals. Moreover, these studies were very seldom conducted in the field. Recently the STARFLAG project achieved the 3D reconstruction of thousands of birds under field conditions, thus opening the way to a new generation of quantitative studies of collective animal behaviour. Here, we review the main technical problems in 3D data collection of large animal groups and we outline some of the methodological solutions adopted by the STARFLAG project. In particular, we explain how to solve the stereoscopic correspondence - or matching - problem, which was the major bottleneck of all 3D studies in the past.
1306.2605
Sayak Mukherjee
Sayak Mukherjee, Sang-Cheol Seok, Veronica J. Vieland, and Jayajit Das
Cell responses only partially shape cell-to-cell variations in protein abundances in Escherichia coli chemotaxis
51 Pages, 18 Figures
PNAS epub, October 28, 2013
10.1073/pnas.1311069110
null
q-bio.CB cond-mat.stat-mech q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cell-to-cell variations in protein abundance in clonal cell populations are ubiquitous in living systems. Since protein composition determines responses in individual cells, it stands to reason that the variations themselves are subject to selective pressures. But the functional role of these cell-to-cell differences is not well understood. One way to tackle questions regarding relationships between form and function is to perturb the form (e.g., change the protein abundances) and observe the resulting changes in some function. Here we take on the form-function relationship from the inverse perspective, asking instead what specific constraints on cell-to-cell variations in protein abundance are imposed by a given functional phenotype. We develop a maximum entropy (MaxEnt) based approach to posing questions of this type, and illustrate the method by application to the well characterized chemotactic response in Escherichia coli (E. coli). We find that full determination of observed cell-to-cell variations in protein abundances is not inherent in chemotaxis itself, but in fact appears to be jointly imposed by the chemotaxis program in conjunction with other factors, e.g., the protein synthesis machinery and/or additional non-chemotactic cell functions such as cell metabolism. These results illustrate the power of MaxEnt as a tool for the investigation of relationships between biological form and function.
[ { "created": "Tue, 11 Jun 2013 18:36:34 GMT", "version": "v1" }, { "created": "Tue, 29 Oct 2013 20:23:18 GMT", "version": "v2" } ]
2013-11-01
[ [ "Mukherjee", "Sayak", "" ], [ "Seok", "Sang-Cheol", "" ], [ "Vieland", "Veronica J.", "" ], [ "Das", "Jayajit", "" ] ]
Cell-to-cell variations in protein abundance in clonal cell populations are ubiquitous in living systems. Since protein composition determines responses in individual cells, it stands to reason that the variations themselves are subject to selective pressures. But the functional role of these cell-to-cell differences is not well understood. One way to tackle questions regarding relationships between form and function is to perturb the form (e.g., change the protein abundances) and observe the resulting changes in some function. Here we take on the form-function relationship from the inverse perspective, asking instead what specific constraints on cell-to-cell variations in protein abundance are imposed by a given functional phenotype. We develop a maximum entropy (MaxEnt) based approach to posing questions of this type, and illustrate the method by application to the well characterized chemotactic response in Escherichia coli (E. coli). We find that full determination of observed cell-to-cell variations in protein abundances is not inherent in chemotaxis itself, but in fact appears to be jointly imposed by the chemotaxis program in conjunction with other factors, e.g., the protein synthesis machinery and/or additional non-chemotactic cell functions such as cell metabolism. These results illustrate the power of MaxEnt as a tool for the investigation of relationships between biological form and function.
1907.13549
Eugenio Buzzoni
Jochen Blath, Eugenio Buzzoni, Jere Koskela and Maite Wilke Berenguer
Statistical tools for seed bank detection
33 pages, 25 figures
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, we derive statistical tools to analyze and distinguish the patterns of genetic variability produced by classical and recent population genetic models related to seed banks. In particular, we are concerned with models described by the Kingman coalescent (K), models exhibiting so-called weak seed banks described by a time-changed Kingman coalescent (W), models with so-called strong seed bank described by the seed bank coalescent (S) and the classical two-island model by Wright, described by the structured coalescent (TI). As the presence of a (strong) seed bank should stratify a population, we expect it to produce a signal roughly comparable to the presence of population structure. We begin with a brief analysis of Wright's $F_{ST}$, which is a classical but crude measure for population structure, followed by a derivation of the expected site frequency spectrum (SFS) in the infinite sites model based on 'phase-type distribution calculus' as recently discussed by Hobolth et al. (2019). Although both the $F_{ST}$ and the SFS can be readily computed under various population models, they discard statistical signal. Hence we also derive exact likelihoods for the full sampling probabilities, which can be achieved via recursions and a Monte Carlo scheme both in the infinite alleles and the infinite sites model. We employ a pseudo-marginal Metropolis-Hastings algorithm of Andrieu and Roberts (2009) to provide a method for simultaneous model selection and parameter inference under the so-called infinitely-many sites model, which is the most relevant in real applications. It turns out that this full likelihood method can reliably distinguish among the model classes (K, W), (S) and (TI) on the basis of simulated data even from moderate sample sizes. It is also possible to infer mutation rates, and in particular determine whether mutation is taking place in the (strong) seed bank.
[ { "created": "Wed, 31 Jul 2019 15:16:48 GMT", "version": "v1" }, { "created": "Mon, 9 Sep 2019 14:00:02 GMT", "version": "v2" }, { "created": "Thu, 9 Jan 2020 17:29:22 GMT", "version": "v3" } ]
2020-01-10
[ [ "Blath", "Jochen", "" ], [ "Buzzoni", "Eugenio", "" ], [ "Koskela", "Jere", "" ], [ "Berenguer", "Maite Wilke", "" ] ]
In this article, we derive statistical tools to analyze and distinguish the patterns of genetic variability produced by classical and recent population genetic models related to seed banks. In particular, we are concerned with models described by the Kingman coalescent (K), models exhibiting so-called weak seed banks described by a time-changed Kingman coalescent (W), models with so-called strong seed bank described by the seed bank coalescent (S) and the classical two-island model by Wright, described by the structured coalescent (TI). As the presence of a (strong) seed bank should stratify a population, we expect it to produce a signal roughly comparable to the presence of population structure. We begin with a brief analysis of Wright's $F_{ST}$, which is a classical but crude measure for population structure, followed by a derivation of the expected site frequency spectrum (SFS) in the infinite sites model based on 'phase-type distribution calculus' as recently discussed by Hobolth et al. (2019). Although both the $F_{ST}$ and the SFS can be readily computed under various population models, they discard statistical signal. Hence we also derive exact likelihoods for the full sampling probabilities, which can be achieved via recursions and a Monte Carlo scheme both in the infinite alleles and the infinite sites model. We employ a pseudo-marginal Metropolis-Hastings algorithm of Andrieu and Roberts (2009) to provide a method for simultaneous model selection and parameter inference under the so-called infinitely-many sites model, which is the most relevant in real applications. It turns out that this full likelihood method can reliably distinguish among the model classes (K, W), (S) and (TI) on the basis of simulated data even from moderate sample sizes. It is also possible to infer mutation rates, and in particular determine whether mutation is taking place in the (strong) seed bank.
2203.07211
Jong Woo Kim
Jong Woo Kim, Niels Krausch, Judit Aizpuru, Tilman Barz, Sergio Lucia, Peter Neubauer, Mariano Nicolas Cruz Bournazou
Model predictive control and moving horizon estimation for adaptive optimal bolus feeding in high-throughput cultivation of \textit{E. coli}
null
null
null
null
q-bio.QM cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We discuss the application of nonlinear model predictive control (MPC) and moving horizon estimation (MHE) to achieve an optimal operation of \textit{E. coli} fed-batch cultivations with intermittent bolus feeding. 24 parallel experiments were considered in a high-throughput microbioreactor platform at a 10 mL scale. The robotic island in question can run up to 48 fed-batch processes in parallel with automated liquid handling and online and at-line analytics. The implementation of the model-based monitoring and control framework reveals that there are mainly three challenges that need to be addressed: first, the inputs are given in an instantaneous pulsed form by bolus injections; second, online and at-line measurement frequencies are severely imbalanced; and third, optimization for the distinctive multiple reactors can be either parallelized or integrated. We address these challenges by incorporating the concept of impulsive control systems, formulating multi-rate MHE with identifiability analysis, and suggesting criteria for deciding the reactor configuration. In this study, we present the key elements and background theory of the implementation with \textit{in silico} simulations for bacterial fed-batch cultivation.
[ { "created": "Mon, 14 Mar 2022 15:53:11 GMT", "version": "v1" }, { "created": "Mon, 6 Feb 2023 05:40:43 GMT", "version": "v2" } ]
2023-02-07
[ [ "Kim", "Jong Woo", "" ], [ "Krausch", "Niels", "" ], [ "Aizpuru", "Judit", "" ], [ "Barz", "Tilman", "" ], [ "Lucia", "Sergio", "" ], [ "Neubauer", "Peter", "" ], [ "Bournazou", "Mariano Nicolas Cruz", "" ] ]
We discuss the application of nonlinear model predictive control (MPC) and moving horizon estimation (MHE) to achieve an optimal operation of \textit{E. coli} fed-batch cultivations with intermittent bolus feeding. 24 parallel experiments were considered in a high-throughput microbioreactor platform at a 10 mL scale. The robotic island in question can run up to 48 fed-batch processes in parallel with automated liquid handling and online and at-line analytics. The implementation of the model-based monitoring and control framework reveals that there are mainly three challenges that need to be addressed: first, the inputs are given in an instantaneous pulsed form by bolus injections; second, online and at-line measurement frequencies are severely imbalanced; and third, optimization for the distinctive multiple reactors can be either parallelized or integrated. We address these challenges by incorporating the concept of impulsive control systems, formulating multi-rate MHE with identifiability analysis, and suggesting criteria for deciding the reactor configuration. In this study, we present the key elements and background theory of the implementation with \textit{in silico} simulations for bacterial fed-batch cultivation.
q-bio/0401028
Thomas Petermann
Thomas Petermann and Paolo De Los Rios
Cluster approximations for probabilistic systems: a new perspective of epidemiological modelling
Submitted to J. Theor. Biol; 11 pages, 12 figures
J. Theor. Biol. 229, 1 (2004).
10.1016/j.jtbi.2004.02.017
null
q-bio.PE cond-mat.stat-mech q-bio.QM
null
Especially in lattice structured populations, homogeneous mixing represents an inadequate assumption. Various improvements upon the ordinary pair approximation based on a number of assumptions concerning the higher-order correlations have been proposed. To find approaches that allow for a derivation of their dynamics remains a great challenge. By representing the population with its connectivity patterns as a homogeneous network, we propose a systematic methodology for the description of the epidemic dynamics that takes into account spatial correlations up to a desired range. The equations which the dynamical correlations are subject to, are derived in a straightforward way, and they are solved very efficiently due to their binary character. The method embeds very naturally spatial patterns such as the presence of loops characterizing the square lattice or the treelike structure ubiquitous in random networks, providing an improved description of the steady state as well as the invasion dynamics.
[ { "created": "Wed, 21 Jan 2004 20:18:22 GMT", "version": "v1" } ]
2007-05-23
[ [ "Petermann", "Thomas", "" ], [ "Rios", "Paolo De Los", "" ] ]
Especially in lattice structured populations, homogeneous mixing represents an inadequate assumption. Various improvements upon the ordinary pair approximation based on a number of assumptions concerning the higher-order correlations have been proposed. To find approaches that allow for a derivation of their dynamics remains a great challenge. By representing the population with its connectivity patterns as a homogeneous network, we propose a systematic methodology for the description of the epidemic dynamics that takes into account spatial correlations up to a desired range. The equations which the dynamical correlations are subject to, are derived in a straightforward way, and they are solved very efficiently due to their binary character. The method embeds very naturally spatial patterns such as the presence of loops characterizing the square lattice or the treelike structure ubiquitous in random networks, providing an improved description of the steady state as well as the invasion dynamics.
1502.06900
Brian Wandell
B. A. Wandell, A. Rokem, L. M. Perry, G. Schaefer, R. F. Dougherty
Data management to support reproducible research
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe the current state and future plans for a set of tools for scientific data management (SDM) designed to support scientific transparency and reproducible research. SDM has been in active use at our MRI Center for more than two years. We designed the system to be used from the beginning of a research project, which contrasts with conventional end-state databases that accept data as a project concludes. A number of benefits accrue from using scientific data management tools early and throughout the project, including data integrity as well as reuse of the data and of computational methods.
[ { "created": "Sat, 21 Feb 2015 06:00:04 GMT", "version": "v1" } ]
2015-02-25
[ [ "Wandell", "B. A.", "" ], [ "Rokem", "A.", "" ], [ "Perry", "L. M.", "" ], [ "Schaefer", "G.", "" ], [ "Dougherty", "R. F.", "" ] ]
We describe the current state and future plans for a set of tools for scientific data management (SDM) designed to support scientific transparency and reproducible research. SDM has been in active use at our MRI Center for more than two years. We designed the system to be used from the beginning of a research project, which contrasts with conventional end-state databases that accept data as a project concludes. A number of benefits accrue from using scientific data management tools early and throughout the project, including data integrity as well as reuse of the data and of computational methods.
q-bio/0406034
J. F. R. Archilla
D. Hennig, J. F. R. Archilla, and J. M. Romero
Modeling the thermal evolution of enzyme-created bubbles in DNA
19 pages, 7 figures
Interface, 2(2):89-95, 2005
10.1098/rsif.2004.0024
null
q-bio.BM nlin.PS
null
The formation of bubbles in nucleic acids (NAs) is fundamental in many biological processes such as DNA replication, recombination, telomere formation, nucleotide excision repair, as well as RNA transcription and splicing. These processes are carried out by assembled complexes with enzymes that separate selected regions of NAs. Within the frame of a nonlinear dynamics approach we model the structure of the DNA duplex by a nonlinear network of coupled oscillators. We show that in fact from certain local structural distortions there originate oscillating localized patterns, that is radial and torsional breathers, which are associated with localized H-bond deformations, being reminiscent of the replication bubble. We further study the temperature dependence of these oscillating bubbles. To this aim the underlying nonlinear oscillator network of the DNA duplex is brought in contact with a heat bath using the Nos$\rm{\acute{e}}$-Hoover-method. Special attention is paid to the stability of the oscillating bubbles under the imposed thermal perturbations. It is demonstrated that the radial and torsional breathers sustain the impact of thermal perturbations even at temperatures as high as room temperature. Generally, for nonzero temperature the H-bond breathers move coherently along the double chain whereas at T=0 standing radial and torsional breathers result.
[ { "created": "Wed, 16 Jun 2004 17:25:31 GMT", "version": "v1" }, { "created": "Sat, 4 Dec 2004 01:21:40 GMT", "version": "v2" } ]
2007-05-23
[ [ "Hennig", "D.", "" ], [ "Archilla", "J. F. R.", "" ], [ "Romero", "J. M.", "" ] ]
The formation of bubbles in nucleic acids (NAs) is fundamental in many biological processes such as DNA replication, recombination, telomere formation, nucleotide excision repair, as well as RNA transcription and splicing. These processes are carried out by assembled complexes with enzymes that separate selected regions of NAs. Within the frame of a nonlinear dynamics approach we model the structure of the DNA duplex by a nonlinear network of coupled oscillators. We show that in fact from certain local structural distortions there originate oscillating localized patterns, that is radial and torsional breathers, which are associated with localized H-bond deformations, being reminiscent of the replication bubble. We further study the temperature dependence of these oscillating bubbles. To this aim the underlying nonlinear oscillator network of the DNA duplex is brought in contact with a heat bath using the Nos$\rm{\acute{e}}$-Hoover-method. Special attention is paid to the stability of the oscillating bubbles under the imposed thermal perturbations. It is demonstrated that the radial and torsional breathers sustain the impact of thermal perturbations even at temperatures as high as room temperature. Generally, for nonzero temperature the H-bond breathers move coherently along the double chain whereas at T=0 standing radial and torsional breathers result.
2102.00836
Eugene Terentjev M.
Neil Ibata and Eugene M. Terentjev
Why exercise builds muscles: Titin mechanosensing controls skeletal muscle growth under load
null
null
10.1016/j.bpj.2021.07.023
null
q-bio.TO physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Muscles sense internally generated and externally applied forces, responding to these in a coordinated hierarchical manner at different time scales. The center of the basic unit of the muscle, the sarcomeric M-band, is perfectly placed to sense the different types of load to which the muscle is subjected. In particular, the kinase domain (TK) of titin located at the M-band is a known candidate for mechanical signaling. Here, we develop the quantitative mathematical model that describes the kinetics of TK-based mechanosensitive signaling, and predicts trophic changes in response to exercise and rehabilitation regimes. First, we build the kinetic model for TK conformational changes under force: opening, phosphorylation, signaling and autoinhibition. We find that TK opens as a metastable mechanosensitive switch, which naturally produces a much greater signal after high-load resistance exercise than an equally energetically costly endurance effort. Next, in order for the model to be stable and give coherent predictions, in particular the lag following the onset of an exercise regime, we have to account for the associated kinetics of phosphate (carried by ATP), and for the non-linear dependence of protein synthesis rates on muscle fibre size. We suggest that the latter effect may occur via the steric inhibition of ribosome diffusion through the sieve-like myofilament lattice. The full model yields a steady-state solution (homeostasis) for muscle cross-sectional area and tension, and a quantitatively plausible hypertrophic response to training as well as atrophy following an extended reduction in tension.
[ { "created": "Mon, 1 Feb 2021 13:49:33 GMT", "version": "v1" }, { "created": "Wed, 5 May 2021 12:20:27 GMT", "version": "v2" } ]
2021-09-22
[ [ "Ibata", "Neil", "" ], [ "Terentjev", "Eugene M.", "" ] ]
Muscles sense internally generated and externally applied forces, responding to these in a coordinated hierarchical manner at different time scales. The center of the basic unit of the muscle, the sarcomeric M-band, is perfectly placed to sense the different types of load to which the muscle is subjected. In particular, the kinase domain (TK) of titin located at the M-band is a known candidate for mechanical signaling. Here, we develop the quantitative mathematical model that describes the kinetics of TK-based mechanosensitive signaling, and predicts trophic changes in response to exercise and rehabilitation regimes. First, we build the kinetic model for TK conformational changes under force: opening, phosphorylation, signaling and autoinhibition. We find that TK opens as a metastable mechanosensitive switch, which naturally produces a much greater signal after high-load resistance exercise than an equally energetically costly endurance effort. Next, in order for the model to be stable and give coherent predictions, in particular the lag following the onset of an exercise regime, we have to account for the associated kinetics of phosphate (carried by ATP), and for the non-linear dependence of protein synthesis rates on muscle fibre size. We suggest that the latter effect may occur via the steric inhibition of ribosome diffusion through the sieve-like myofilament lattice. The full model yields a steady-state solution (homeostasis) for muscle cross-sectional area and tension, and a quantitatively plausible hypertrophic response to training as well as atrophy following an extended reduction in tension.
2112.13044
Jessica Rogge
Mark R Baker, Elizabeth L Hawthorne, Jessica R Rogge
COVID 19: Open source model for rapid reduction of R to below 1 in high R0 scenarios
10 pages, 6 figures, in English
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an open source model that allows quantitative prediction of the effects of testing on the rate of spread of COVID-19 described by R, the reproduction number, and on the degree of quarantine, isolation and lockdown required to limit it. The paper uses the model to quantify the outcomes of different test types and regimes, and to identify strategies and tests that can reduce the rate of spread and R value by a factor of between 1.67 and 33.3, reducing it to between 60% and 3% of the initial value.
[ { "created": "Fri, 24 Dec 2021 12:09:37 GMT", "version": "v1" }, { "created": "Thu, 20 Jan 2022 15:54:55 GMT", "version": "v2" }, { "created": "Fri, 28 Jan 2022 17:32:38 GMT", "version": "v3" } ]
2022-01-31
[ [ "Baker", "Mark R", "" ], [ "Hawthorne", "Elizabeth L", "" ], [ "Rogge", "Jessica R", "" ] ]
We present an open source model that allows quantitative prediction of the effects of testing on the rate of spread of COVID-19 described by R, the reproduction number, and on the degree of quarantine, isolation and lockdown required to limit it. The paper uses the model to quantify the outcomes of different test types and regimes, and to identify strategies and tests that can reduce the rate of spread and R value by a factor of between 1.67 and 33.3, reducing it to between 60% and 3% of the initial value.
1602.05558
Hyun Youk
Th\'eo Maire, Hyun Youk
Molecular-level tuning of cellular autonomy controls the collective behaviors of cell populations
75 pages in total, includes main text, graphical abstract, 5 main figures, supplementary information, and 10 supplementary figures
Cell Systems 1:349-360 (November 2015)
10.1016/j.cels.2015.10.012
null
q-bio.MN nlin.CG nlin.PS physics.bio-ph q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A rigorous understanding of how multicellular behaviors arise from the actions of single cells requires quantitative frameworks that bridge the gap between genetic circuits, the arrangement of cells in space, and population-level behaviors. Here, we provide such a framework for a ubiquitous class of multicellular systems - namely, "secrete-and-sense cells" that communicate by secreting and sensing a signaling molecule. By using formal, mathematical arguments and introducing the concept of a phenotype diagram, we show how these cells tune their degrees of autonomous and collective behavior to realize distinct single-cell and population-level phenotypes; these phenomena have biological analogs, such as quorum sensing or paracrine signaling. We also define the "entropy of population," a measurement of the number of arrangements that a population of cells can assume, and demonstrate how a decrease in the entropy of population accompanies the formation of ordered spatial patterns. Our conceptual framework ties together diverse systems, including tissues and microbes, with common principles.
[ { "created": "Wed, 17 Feb 2016 20:11:35 GMT", "version": "v1" } ]
2016-02-18
[ [ "Maire", "Théo", "" ], [ "Youk", "Hyun", "" ] ]
A rigorous understanding of how multicellular behaviors arise from the actions of single cells requires quantitative frameworks that bridge the gap between genetic circuits, the arrangement of cells in space, and population-level behaviors. Here, we provide such a framework for a ubiquitous class of multicellular systems - namely, "secrete-and-sense cells" that communicate by secreting and sensing a signaling molecule. By using formal, mathematical arguments and introducing the concept of a phenotype diagram, we show how these cells tune their degrees of autonomous and collective behavior to realize distinct single-cell and population-level phenotypes; these phenomena have biological analogs, such as quorum sensing or paracrine signaling. We also define the "entropy of population," a measurement of the number of arrangements that a population of cells can assume, and demonstrate how a decrease in the entropy of population accompanies the formation of ordered spatial patterns. Our conceptual framework ties together diverse systems, including tissues and microbes, with common principles.
2011.04738
Jann-Long Chern
Bo-Cyuan Lin, Yen-Jia Chen, Yi-Cheng Hung, Chun-sheng Chen, Han-Chun Wang and Jann-Long Chern
The Data Forecast in COVID-19 Model with Applications to US, South Korea, Brazil, India, Russia and Italy
15 pages, 19 figures
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we first propose SQIARD and SIARD models to investigate the transmission of COVID-19 with quarantine, infected and asymptomatic infected, and discuss the relation between the respective basic reproduction numbers $R_0, R_Q$ and the stability of the equilibrium points of the model. Secondly, after training the related data parameters, in our numerical simulations, we respectively forecast the data of the US, South Korea, Brazil, India, Russia and Italy, and assess the prediction of the epidemic situation in each country. Furthermore, we apply US data to compare SQIARD with SIARD, and display the effects of the predictions.
[ { "created": "Thu, 5 Nov 2020 14:41:04 GMT", "version": "v1" }, { "created": "Sun, 13 Jun 2021 09:20:56 GMT", "version": "v2" } ]
2021-06-15
[ [ "Lin", "Bo-Cyuan", "" ], [ "Chen", "Yen-Jia", "" ], [ "Hung", "Yi-Cheng", "" ], [ "Chen", "Chun-sheng", "" ], [ "Wang", "Han-Chun", "" ], [ "Chern", "Jann-Long", "" ] ]
In this paper, we first propose SQIARD and SIARD models to investigate the transmission of COVID-19 with quarantine, infected and asymptomatic infected, and discuss the relation between the respective basic reproduction numbers $R_0, R_Q$ and the stability of the equilibrium points of the model. Secondly, after training the related data parameters, in our numerical simulations, we respectively forecast the data of the US, South Korea, Brazil, India, Russia and Italy, and assess the prediction of the epidemic situation in each country. Furthermore, we apply US data to compare SQIARD with SIARD, and display the effects of the predictions.
1907.06171
Kelin Xia
D Vijay Anand, Kelin Xia, Yuguang Mu
Weighted persistent homology for osmolyte molecular aggregation and hydrogen-bonding network analysis
19 pages,9 figures
null
null
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has long been observed that trimethylamine N-oxide (TMAO) and urea demonstrate dramatically different properties in a protein folding process. Even with the enormous theoretical and experimental research work on the two osmolytes, various aspects of their underlying mechanisms still remain largely elusive. In this paper, we propose to use the weighted persistent homology to systematically study the osmolytes molecular aggregation and their hydrogen-bonding network from a local topological perspective. We consider two weighted models, i.e., localized persistent homology (LPH) and interactive persistent homology (IPH). From the localized persistent homology models, we have found that TMAO and urea have very different local topology. TMAO shows local network structures. With the concentration increase, the circle elements in these networks show a clear increase in their total numbers and a decrease in their relative sizes. In contrast, urea shows two types of local topological patterns, i.e., local clusters around 6 \AA~ and a few global circle elements at around 12 \AA. From the interactive persistent homology models, it has been found that our persistent radial distribution function (PRDF) from the global-scale IPH has the same physical properties as the traditional radial distribution function (RDF). Moreover, PRDFs from the local-scale IPH can also be generated and used to characterize the local interaction information. Other than the clear difference of the first peak value of PRDFs at filtration size 4\AA, TMAO and urea also show very different behaviors at the second peak region from filtration size 5\AA~ to 10 \AA.
[ { "created": "Sun, 14 Jul 2019 06:04:39 GMT", "version": "v1" } ]
2019-07-16
[ [ "Anand", "D Vijay", "" ], [ "Xia", "Kelin", "" ], [ "Mu", "Yuguang", "" ] ]
It has long been observed that trimethylamine N-oxide (TMAO) and urea demonstrate dramatically different properties in a protein folding process. Even with the enormous theoretical and experimental research work on the two osmolytes, various aspects of their underlying mechanisms still remain largely elusive. In this paper, we propose to use weighted persistent homology to systematically study osmolyte molecular aggregation and hydrogen-bonding networks from a local topological perspective. We consider two weighted models, i.e., localized persistent homology (LPH) and interactive persistent homology (IPH). From the localized persistent homology models, we have found that TMAO and urea have very different local topology. TMAO shows local network structures. As the concentration increases, the circle elements in these networks show a clear increase in their total numbers and a decrease in their relative sizes. In contrast, urea shows two types of local topological patterns, i.e., local clusters around 6 \AA~ and a few global circle elements at around 12 \AA. From the interactive persistent homology models, it has been found that our persistent radial distribution function (PRDF) from the global-scale IPH has the same physical properties as the traditional radial distribution function (RDF). Moreover, PRDFs from the local-scale IPH can also be generated and used to characterize local interaction information. Other than the clear difference in the first peak value of PRDFs at filtration size 4 \AA, TMAO and urea also show very different behaviors in the second peak region from filtration size 5 \AA~ to 10 \AA.
q-bio/0311016
Rom\`an R. Zapatrin
Christopher Altman, Jaroslaw Pykacz, Roman Zapatrin
Superpositional Quantum Network Topologies
10 pages, LaTeX2e
International Journal of Theoretical Physics, 43, 2029-2040 (2004)
10.1023/B:IJTP.0000049008.51567.ec
null
q-bio.NC quant-ph
null
We introduce superposition-based quantum networks composed of (i) the classical perceptron model of multilayered, feedforward neural networks and (ii) the algebraic model of evolving reticular quantum structures as described in quantum gravity. The main feature of this model is moving from particular neural topologies to a quantum metastructure which embodies many differing topological patterns. Using quantum parallelism, training is possible on superpositions of different network topologies. As a result, not only classical transition functions, but also topology becomes a subject of training. A distinctive property of our model is that particular neural networks, with different topologies, are quantum states. We consider high-dimensional dissipative quantum structures as candidates for implementation of the model.
[ { "created": "Wed, 12 Nov 2003 10:37:32 GMT", "version": "v1" }, { "created": "Fri, 19 Mar 2004 11:46:59 GMT", "version": "v2" }, { "created": "Fri, 30 Jul 2004 10:48:14 GMT", "version": "v3" } ]
2009-11-10
[ [ "Altman", "Christopher", "" ], [ "Pykacz", "Jaroslaw", "" ], [ "Zapatrin", "Roman", "" ] ]
We introduce superposition-based quantum networks composed of (i) the classical perceptron model of multilayered, feedforward neural networks and (ii) the algebraic model of evolving reticular quantum structures as described in quantum gravity. The main feature of this model is moving from particular neural topologies to a quantum metastructure which embodies many differing topological patterns. Using quantum parallelism, training is possible on superpositions of different network topologies. As a result, not only classical transition functions, but also topology becomes a subject of training. A distinctive property of our model is that particular neural networks, with different topologies, are quantum states. We consider high-dimensional dissipative quantum structures as candidates for implementation of the model.
1010.3537
Moritz Deger
Moritz Helias, Moritz Deger, Stefan Rotter, Markus Diesmann
The perfect integrator driven by Poisson input and its approximation in the diffusion limit
7 pages, 3 figures, v2: corrected authors in reference
null
10.3389/fnins.2011.00019
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this note we consider the perfect integrator driven by Poisson process input. We derive its equilibrium and response properties and contrast them to the approximations obtained by applying the diffusion approximation. In particular, the probability density in the vicinity of the threshold differs, which leads to altered response properties of the system in equilibrium.
[ { "created": "Mon, 18 Oct 2010 09:57:29 GMT", "version": "v1" }, { "created": "Mon, 22 Nov 2010 11:18:59 GMT", "version": "v2" } ]
2022-05-18
[ [ "Helias", "Moritz", "" ], [ "Deger", "Moritz", "" ], [ "Rotter", "Stefan", "" ], [ "Diesmann", "Markus", "" ] ]
In this note we consider the perfect integrator driven by Poisson process input. We derive its equilibrium and response properties and contrast them to the approximations obtained by applying the diffusion approximation. In particular, the probability density in the vicinity of the threshold differs, which leads to altered response properties of the system in equilibrium.
2006.06826
Ian Leifer
Flaviano Morone, Ian Leifer, Hernan A. Makse
Fibration symmetries uncover the building blocks of biological networks
null
Proc Natl Acad Sci USA. 2020;117(15):8306-8314
10.1073/pnas.1914628117
null
q-bio.MN physics.data-an q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major ambition of systems science is to uncover the building blocks of any biological network to decipher how cellular function emerges from their interactions. Here, we introduce a graph representation of the information flow in these networks as a set of input trees, one for each node, which contains all pathways along which information can be transmitted in the network. In this representation, we find remarkable symmetries in the input trees that deconstruct the network into functional building blocks called fibers. Nodes in a fiber have isomorphic input trees and thus process equivalent dynamics and synchronize their activity. Each fiber can then be collapsed into a single representative base node through an information-preserving transformation called 'symmetry fibration', introduced by Grothendieck in the context of algebraic geometry. We exemplify the symmetry fibrations in gene regulatory networks and then show that they universally apply across species and domains from biology to social and infrastructure networks. The building blocks are classified into topological classes of input trees characterized by integer branching ratios and fractal golden ratios of Fibonacci sequences representing cycles of information. Thus, symmetry fibrations describe how complex networks are built from the bottom up to process information through the synchronization of their constitutive building blocks.
[ { "created": "Wed, 10 Jun 2020 17:05:25 GMT", "version": "v1" } ]
2020-06-15
[ [ "Morone", "Flaviano", "" ], [ "Leifer", "Ian", "" ], [ "Makse", "Hernan A.", "" ] ]
A major ambition of systems science is to uncover the building blocks of any biological network to decipher how cellular function emerges from their interactions. Here, we introduce a graph representation of the information flow in these networks as a set of input trees, one for each node, which contains all pathways along which information can be transmitted in the network. In this representation, we find remarkable symmetries in the input trees that deconstruct the network into functional building blocks called fibers. Nodes in a fiber have isomorphic input trees and thus process equivalent dynamics and synchronize their activity. Each fiber can then be collapsed into a single representative base node through an information-preserving transformation called 'symmetry fibration', introduced by Grothendieck in the context of algebraic geometry. We exemplify the symmetry fibrations in gene regulatory networks and then show that they universally apply across species and domains from biology to social and infrastructure networks. The building blocks are classified into topological classes of input trees characterized by integer branching ratios and fractal golden ratios of Fibonacci sequences representing cycles of information. Thus, symmetry fibrations describe how complex networks are built from the bottom up to process information through the synchronization of their constitutive building blocks.
q-bio/0408011
Steven N. Evans
Steven N. Evans and Tandy Warnow
Unidentifiable divergence times in rates-across-sites models
13 pages, update to include referee's comments, to appear in IEEE/ACM Transactions on Computational Biology and Bioinformatics
null
null
U.C. Berkeley Department of Statistics Technical Report #668
q-bio.PE q-bio.GN
null
The rates-across-sites assumption in phylogenetic inference posits that the rate matrix governing the Markovian evolution of a character on an edge of the putative phylogenetic tree is the product of a character-specific scale factor and a rate matrix that is particular to that edge. Thus, evolution follows basically the same process for all characters, except that it occurs faster for some characters than others. To allow estimation of tree topologies and edge lengths for such models, it is commonly assumed that the scale factors are not arbitrary unknown constants, but rather unobserved, independent, identically distributed draws from a member of some parametric family of distributions. A popular choice is the gamma family. We consider an example of a clock-like tree with three taxa, one unknown edge length, and a parametric family of scale factor distributions that contain the gamma family. This model has the property that, for a generic choice of unknown edge length and scale factor distribution, there is another edge length and scale factor distribution which generates data with exactly the same distribution, so that even with infinitely many data it will be typically impossible to make correct inferences about the unknown edge length.
[ { "created": "Sun, 15 Aug 2004 20:19:37 GMT", "version": "v1" }, { "created": "Mon, 22 Nov 2004 14:20:24 GMT", "version": "v2" } ]
2007-05-23
[ [ "Evans", "Steven N.", "" ], [ "Warnow", "Tandy", "" ] ]
The rates-across-sites assumption in phylogenetic inference posits that the rate matrix governing the Markovian evolution of a character on an edge of the putative phylogenetic tree is the product of a character-specific scale factor and a rate matrix that is particular to that edge. Thus, evolution follows basically the same process for all characters, except that it occurs faster for some characters than others. To allow estimation of tree topologies and edge lengths for such models, it is commonly assumed that the scale factors are not arbitrary unknown constants, but rather unobserved, independent, identically distributed draws from a member of some parametric family of distributions. A popular choice is the gamma family. We consider an example of a clock-like tree with three taxa, one unknown edge length, and a parametric family of scale factor distributions that contain the gamma family. This model has the property that, for a generic choice of unknown edge length and scale factor distribution, there is another edge length and scale factor distribution which generates data with exactly the same distribution, so that even with infinitely many data it will be typically impossible to make correct inferences about the unknown edge length.
2111.02930
Markus Fleck
Markus Fleck and Noah Weber and Christopher Trummer
Decoupled coordinates for machine learning-based molecular fragment linking
16 pages, 5 Figures
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent developments in machine-learning based molecular fragment linking have demonstrated the importance of informing the generation process with structural information specifying the relative orientation of the fragments to be linked. However, such structural information has not yet been provided in the form of a complete relative coordinate system. Mathematical details for a decoupled set of bond lengths, bond angles and torsion angles are elaborated and the coordinate system is demonstrated to be complete. Significant impact on the quality of the generated linkers is demonstrated numerically. The amount of reliable information within the different types of degrees of freedom is investigated. Ablation studies and an information-theoretical analysis are performed. The presented benefits suggest the application of a complete and decoupled relative coordinate system as a standard good practice in linker design.
[ { "created": "Mon, 1 Nov 2021 17:39:23 GMT", "version": "v1" } ]
2021-11-05
[ [ "Fleck", "Markus", "" ], [ "Weber", "Noah", "" ], [ "Trummer", "Christopher", "" ] ]
Recent developments in machine-learning based molecular fragment linking have demonstrated the importance of informing the generation process with structural information specifying the relative orientation of the fragments to be linked. However, such structural information has not yet been provided in the form of a complete relative coordinate system. Mathematical details for a decoupled set of bond lengths, bond angles and torsion angles are elaborated and the coordinate system is demonstrated to be complete. Significant impact on the quality of the generated linkers is demonstrated numerically. The amount of reliable information within the different types of degrees of freedom is investigated. Ablation studies and an information-theoretical analysis are performed. The presented benefits suggest the application of a complete and decoupled relative coordinate system as a standard good practice in linker design.
2111.14964
Tatiana Yakushkina S.
Igor Samokhin, Tatiana Yakushkina, Dmitry Markin, Alexander S. Bratus
Fitness landscape adaptation in open replicator systems with competition: application to cancer therapy
null
null
null
null
q-bio.PE cs.NA math.DS math.NA
http://creativecommons.org/licenses/by/4.0/
This study focuses on open quasispecies systems with competition and death flow, described by modified Eigen and Crow-Kimura models. We examine the evolutionary adaptation process as a reaction to changes in rates. One of the fundamental assumptions, which forms the basis of our mathematical model, is the existence of two different timescales: internal dynamics time and evolutionary time. The latter is much slower and exhibits significant adaptation events. These conditions allow us to represent the whole evolutionary process through a series of steady-state equations, where all the elements continuously depend on the evolutionary parameter.
[ { "created": "Mon, 29 Nov 2021 21:17:49 GMT", "version": "v1" } ]
2021-12-01
[ [ "Samokhin", "Igor", "" ], [ "Yakushkina", "Tatiana", "" ], [ "Markin", "Dmitry", "" ], [ "Bratus", "Alexander S.", "" ] ]
This study focuses on open quasispecies systems with competition and death flow, described by modified Eigen and Crow-Kimura models. We examine the evolutionary adaptation process as a reaction to changes in rates. One of the fundamental assumptions, which forms the basis of our mathematical model, is the existence of two different timescales: internal dynamics time and evolutionary time. The latter is much slower and exhibits significant adaptation events. These conditions allow us to represent the whole evolutionary process through a series of steady-state equations, where all the elements continuously depend on the evolutionary parameter.
1303.0882
Jeffrey Ross-Ibarra
David M. Wills, Clinton Whipple, Shohei Takuno, Lisa E. Kursel, Laura M. Shannon, Jeffrey Ross-Ibarra, John F. Doebley
From Many, One: Genetic Control of Prolificacy during Maize Domestication
null
PLoS Genetics 2013 9(6): e1003604
10.1371/journal.pgen.1003604
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A reduction in number and an increase in size of inflorescences is a common aspect of plant domestication. When maize was domesticated from teosinte, the number and arrangement of ears changed dramatically. Teosinte has long lateral branches that bear multiple small ears at their nodes and tassels at their tips. Maize has much shorter lateral branches that are tipped by a single large ear with no additional ears at the branch nodes. To investigate the genetic basis of this difference in prolificacy (the number of ears on a plant), we performed a genome-wide QTL scan. A large effect QTL for prolificacy (prol1.1) was detected on the short arm of chromosome one in a location that has previously been shown to influence multiple domestication traits. We fine-mapped prol1.1 to a 2.7 kb interval or causative region upstream of the grassy tillers1 gene, which encodes a homeodomain leucine zipper transcription factor. Tissue in situ hybridizations reveal that the maize allele of prol1.1 is associated with up-regulation of gt1 expression in the nodal plexus. Given that maize does not initiate secondary ear buds, the expression of gt1 in the nodal plexus in maize may suppress their initiation. Population genetic analyses indicate positive selection on the maize allele of prol1.1, causing a partial sweep that fixed the maize allele throughout most of domesticated maize. This work shows how a subtle cis-regulatory change in tissue specific gene expression altered plant architecture in a way that improved the harvestability of maize.
[ { "created": "Mon, 4 Mar 2013 22:35:20 GMT", "version": "v1" } ]
2013-07-30
[ [ "Wills", "David M.", "" ], [ "Whipple", "Clinton", "" ], [ "Takuno", "Shohei", "" ], [ "Kursel", "Lisa E.", "" ], [ "Shannon", "Laura M.", "" ], [ "Ross-Ibarra", "Jeffrey", "" ], [ "Doebley", "John F.", "" ] ]
A reduction in number and an increase in size of inflorescences is a common aspect of plant domestication. When maize was domesticated from teosinte, the number and arrangement of ears changed dramatically. Teosinte has long lateral branches that bear multiple small ears at their nodes and tassels at their tips. Maize has much shorter lateral branches that are tipped by a single large ear with no additional ears at the branch nodes. To investigate the genetic basis of this difference in prolificacy (the number of ears on a plant), we performed a genome-wide QTL scan. A large effect QTL for prolificacy (prol1.1) was detected on the short arm of chromosome one in a location that has previously been shown to influence multiple domestication traits. We fine-mapped prol1.1 to a 2.7 kb interval or causative region upstream of the grassy tillers1 gene, which encodes a homeodomain leucine zipper transcription factor. Tissue in situ hybridizations reveal that the maize allele of prol1.1 is associated with up-regulation of gt1 expression in the nodal plexus. Given that maize does not initiate secondary ear buds, the expression of gt1 in the nodal plexus in maize may suppress their initiation. Population genetic analyses indicate positive selection on the maize allele of prol1.1, causing a partial sweep that fixed the maize allele throughout most of domesticated maize. This work shows how a subtle cis-regulatory change in tissue specific gene expression altered plant architecture in a way that improved the harvestability of maize.
1706.07091
Andrei D. Robu
Andrei D. Robu, Christoph Salge, Chrystopher L. Nehaniv and Daniel Polani
Time as it could Be measured in Artificial Living Systems
Accepted at the European Conference on Artificial Life 2017, Lyon, France
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Being able to measure time, whether directly or indirectly, is a significant advantage for an organism. It permits the organism to predict regular events, and prepare for them on time. Thus, clocks are ubiquitous in biology. In the present paper, we consider the most minimal abstract pure clocks and investigate their characteristics with respect to their ability to measure time. Amongst others, we find fundamentally diametral clock characteristics, such as oscillatory behaviour for local time measurement or decay-based clocks measuring time periods in scales global to the problem. We also include cascades of independent clocks ("clock bags") and composite clocks with controlled dependency; the latter show various regimes of markedly different dynamics.
[ { "created": "Tue, 20 Jun 2017 15:55:47 GMT", "version": "v1" } ]
2017-06-23
[ [ "Robu", "Andrei D.", "" ], [ "Salge", "Christoph", "" ], [ "Nehaniv", "Chrystopher L.", "" ], [ "Polani", "Daniel", "" ] ]
Being able to measure time, whether directly or indirectly, is a significant advantage for an organism. It permits the organism to predict regular events, and prepare for them on time. Thus, clocks are ubiquitous in biology. In the present paper, we consider the most minimal abstract pure clocks and investigate their characteristics with respect to their ability to measure time. Amongst others, we find fundamentally diametral clock characteristics, such as oscillatory behaviour for local time measurement or decay-based clocks measuring time periods in scales global to the problem. We also include cascades of independent clocks ("clock bags") and composite clocks with controlled dependency; the latter show various regimes of markedly different dynamics.
1608.02027
Nikolaus Kriegeskorte
Nikolaus Kriegeskorte, J\"orn Diedrichsen
Inferring brain-computational mechanisms with models of activity measurements
25 pages, 9 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-resolution functional imaging is providing increasingly rich measurements of brain activity in animals and humans. A major challenge is to leverage such data to gain insight into the brain's computational mechanisms. The first step is to define candidate brain-computational models (BCMs) that can perform the behavioural task in question. We would then like to infer which of the candidate BCMs best accounts for measured brain-activity data. Here we describe a method that complements each BCM by a measurement model (MM), which simulates the way the brain-activity measurements reflect neuronal activity (e.g. local averaging in fMRI voxels or sparse sampling in array recordings). The resulting generative model (BCM-MM) produces simulated measurements. In order to avoid having to fit the MM to predict each individual measurement channel of the brain-activity data, we compare the measured and predicted data at the level of summary statistics. We describe a novel implementation of this approach, called probabilistic RSA (pRSA) with measurement models, which uses representational dissimilarity matrices (RDMs) as the summary statistics. We validate this method by simulations of fMRI measurements (locally averaging voxels) based on a deep convolutional neural network for visual object recognition. Results indicate that the way the measurements sample the activity patterns strongly affects the apparent representational dissimilarities. However, modelling of the measurement process can account for these effects, and different BCMs remain distinguishable even under substantial noise. The pRSA method enables us to perform Bayesian inference on the set of BCMs and to recognise the data-generating model in each case.
[ { "created": "Fri, 5 Aug 2016 21:38:37 GMT", "version": "v1" } ]
2016-08-09
[ [ "Kriegeskorte", "Nikolaus", "" ], [ "Diedrichsen", "Jörn", "" ] ]
High-resolution functional imaging is providing increasingly rich measurements of brain activity in animals and humans. A major challenge is to leverage such data to gain insight into the brain's computational mechanisms. The first step is to define candidate brain-computational models (BCMs) that can perform the behavioural task in question. We would then like to infer which of the candidate BCMs best accounts for measured brain-activity data. Here we describe a method that complements each BCM by a measurement model (MM), which simulates the way the brain-activity measurements reflect neuronal activity (e.g. local averaging in fMRI voxels or sparse sampling in array recordings). The resulting generative model (BCM-MM) produces simulated measurements. In order to avoid having to fit the MM to predict each individual measurement channel of the brain-activity data, we compare the measured and predicted data at the level of summary statistics. We describe a novel implementation of this approach, called probabilistic RSA (pRSA) with measurement models, which uses representational dissimilarity matrices (RDMs) as the summary statistics. We validate this method by simulations of fMRI measurements (locally averaging voxels) based on a deep convolutional neural network for visual object recognition. Results indicate that the way the measurements sample the activity patterns strongly affects the apparent representational dissimilarities. However, modelling of the measurement process can account for these effects, and different BCMs remain distinguishable even under substantial noise. The pRSA method enables us to perform Bayesian inference on the set of BCMs and to recognise the data-generating model in each case.
1704.00497
Taoyang Wu
Vincent Moulton and Andreas Spillner and Taoyang Wu
UPGMA and the normalized equidistant minimum evolution problem
29 pages, 8 figures
null
null
null
q-bio.PE cs.CC cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
UPGMA (Unweighted Pair Group Method with Arithmetic Mean) is a widely used clustering method. Here we show that UPGMA is a greedy heuristic for the normalized equidistant minimum evolution (NEME) problem, that is, finding a rooted tree that minimizes the minimum evolution score relative to the dissimilarity matrix among all rooted trees with the same leaf-set in which all leaves have the same distance to the root. We prove that the NEME problem is NP-hard. In addition, we present some heuristic and approximation algorithms for solving the NEME problem, including a polynomial time algorithm that yields a binary, rooted tree whose NEME score is within O(log^2 n) of the optimum. We expect these results to eventually provide further insights into the behavior of the UPGMA algorithm.
[ { "created": "Mon, 3 Apr 2017 09:38:33 GMT", "version": "v1" } ]
2017-04-04
[ [ "Moulton", "Vincent", "" ], [ "Spillner", "Andreas", "" ], [ "Wu", "Taoyang", "" ] ]
UPGMA (Unweighted Pair Group Method with Arithmetic Mean) is a widely used clustering method. Here we show that UPGMA is a greedy heuristic for the normalized equidistant minimum evolution (NEME) problem, that is, finding a rooted tree that minimizes the minimum evolution score relative to the dissimilarity matrix among all rooted trees with the same leaf-set in which all leaves have the same distance to the root. We prove that the NEME problem is NP-hard. In addition, we present some heuristic and approximation algorithms for solving the NEME problem, including a polynomial time algorithm that yields a binary, rooted tree whose NEME score is within O(log^2 n) of the optimum. We expect these results to eventually provide further insights into the behavior of the UPGMA algorithm.
1906.09861
Andrea De Martino
Jonathan Fiorentino, Andrea De Martino
Independent channels for miRNA biosynthesis ensure efficient static and dynamic control in the regulation of the early stages of myogenesis
16+eps pages
null
10.1016/j.jtbi.2017.06.038
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by recent experimental work, we define and study a deterministic model of the complex miRNA-based regulatory circuit that putatively controls the early stage of myogenesis in humans. We aim in particular at a quantitative understanding of (i) the roles played by the separate and independent miRNA biosynthesis channels (one involving a miRNA-decoy system regulated by an exogenous controller, the other given by transcription from a distinct genomic locus) that appear to be crucial for the differentiation program, and of (ii) how competition to bind miRNAs can efficiently control molecular levels in such an interconnected architecture. We show that optimal static control via the miRNA-decoy system constrains kinetic parameters in narrow ranges where the channels are tightly cross-linked. On the other hand, the alternative locus for miRNA transcription can ensure that the fast concentration shifts required by the differentiation program are achieved, specifically via non-linear response of the target to even modest surges in the miRNA transcription rate. While static, competition-mediated regulation can be achieved by the miRNA-decoy system alone, both channels are essential for the circuit's overall functionality, suggesting that this type of joint control may represent a minimal optimal architecture in different contexts.
[ { "created": "Mon, 24 Jun 2019 11:35:37 GMT", "version": "v1" } ]
2019-06-25
[ [ "Fiorentino", "Jonathan", "" ], [ "De Martino", "Andrea", "" ] ]
Motivated by recent experimental work, we define and study a deterministic model of the complex miRNA-based regulatory circuit that putatively controls the early stage of myogenesis in humans. We aim in particular at a quantitative understanding of (i) the roles played by the separate and independent miRNA biosynthesis channels (one involving a miRNA-decoy system regulated by an exogenous controller, the other given by transcription from a distinct genomic locus) that appear to be crucial for the differentiation program, and of (ii) how competition to bind miRNAs can efficiently control molecular levels in such an interconnected architecture. We show that optimal static control via the miRNA-decoy system constrains kinetic parameters in narrow ranges where the channels are tightly cross-linked. On the other hand, the alternative locus for miRNA transcription can ensure that the fast concentration shifts required by the differentiation program are achieved, specifically via non-linear response of the target to even modest surges in the miRNA transcription rate. While static, competition-mediated regulation can be achieved by the miRNA-decoy system alone, both channels are essential for the circuit's overall functionality, suggesting that this type of joint control may represent a minimal optimal architecture in different contexts.
1507.08269
Lianchun Yu
Lianchun Yu, Longfei Wang, Fei Jia, Duojie Jia
Stimulus-Dependent Frequency Modulation of Information Transmission in Neural Systems
15 pages, 8 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural oscillations are universal phenomena and can be observed at different levels of neural systems, from single neurons to the macroscopic brain. The frequency of those oscillations is related to brain function. However, little is known about how the oscillating frequency of a neural system affects neural information transmission within it. In this paper, we investigated how signal processing in a single neuron is modulated by subthreshold membrane potential oscillations generated by upstream rhythmic neural activity. We found that high-frequency oscillations facilitate the transfer of strong signals, whereas slow oscillations favor weak signals. Though the capacity for conveying information is low for weak signals in a single neuron, it is greatly enhanced when weak signals are transferred along multiple pathways with different oscillation phases. We provide a simple phase-plane analysis to explain the mechanism of this stimulus-dependent frequency modulation in the leaky integrate-and-fire neuron model. These results provide a basic understanding of how the brain could modulate its information processing simply through oscillation frequency.
[ { "created": "Fri, 13 Feb 2015 07:02:48 GMT", "version": "v1" } ]
2015-07-30
[ [ "Yu", "Lianchun", "" ], [ "Wang", "Longfei", "" ], [ "Jia", "Fei", "" ], [ "Jia", "Duojie", "" ] ]
Neural oscillations are universal phenomena that can be observed at different levels of neural systems, from single neurons to the macroscopic brain. The frequency of these oscillations is related to brain function. However, little is known about how the oscillating frequency of a neural system affects the transmission of neural information within it. In this paper, we investigated how signal processing in a single neuron is modulated by subthreshold membrane potential oscillations generated by upstream rhythmic neural activity. We found that high-frequency oscillations facilitate the transfer of strong signals, whereas slow oscillations facilitate weak signals. Although the capacity of a single neuron to convey weak signals is low, it is greatly enhanced when weak signals are transferred along multiple pathways with different oscillation phases. We provide a simple phase-plane analysis to explain the mechanism of this stimulus-dependent frequency modulation in the leaky integrate-and-fire neuron model. These results provide a basic understanding of how the brain could modulate its information processing simply through oscillation frequency.
q-bio/0511005
Hong Qian
Hong Qian (Univ. of Washington) and Daniel A. Beard (Medical College of Wisconsin)
Metabolic Futile Cycles and Their Functions: A Systems Analysis of Energy and Control
11 pages, 5 figures
IEE Proceedings - Systems Biology , Vol. 153, pp. 192-200 (2006)
10.1049/ip-syb:20050086
null
q-bio.SC q-bio.MN
null
It has long been hypothesized that futile cycles in cellular metabolism are involved in the regulation of biochemical pathways. Following the work of Newsholme and Crabtree, we develop a quantitative theory for this idea based on open-system thermodynamics and metabolic control analysis. It is shown that the {\it stoichiometric sensitivity} of an intermediary metabolite concentration with respect to changes in steady-state flux is governed by the effective equilibrium constant of the intermediate formation, and the equilibrium can be regulated by a futile cycle. The direction of the shift in the effective equilibrium constant depends on the direction of operation of the futile cycle. High stoichiometric sensitivity corresponds to ultrasensitivity of an intermediate concentration to net flow through a pathway; low stoichiometric sensitivity corresponds to super-robustness of concentration with respect to changes in flux. Both cases potentially play important roles in metabolic regulation. Futile cycles actively shift the effective equilibrium by expending energy; the magnitude of changes in effective equilibria and sensitivities is a function of the amount of energy used by a futile cycle. This proposed mechanism for control by futile cycles works remarkably similarly to kinetic proofreading in biosynthesis. The sensitivity of the system is also intimately related to the rate of concentration fluctuations of intermediate metabolites. The possibly different roles of the two major mechanisms for cellular biochemical regulation, namely reversible chemical modifications via futile cycles and shifting equilibrium by macromolecular binding, are discussed.
[ { "created": "Fri, 4 Nov 2005 07:18:16 GMT", "version": "v1" }, { "created": "Wed, 14 Jun 2006 19:40:10 GMT", "version": "v2" } ]
2007-05-23
[ [ "Qian", "Hong", "", "Univ. of Washington" ], [ "Beard", "Daniel A.", "", "Medical Collge of\n Wisconsin" ] ]
It has long been hypothesized that futile cycles in cellular metabolism are involved in the regulation of biochemical pathways. Following the work of Newsholme and Crabtree, we develop a quantitative theory for this idea based on open-system thermodynamics and metabolic control analysis. It is shown that the {\it stoichiometric sensitivity} of an intermediary metabolite concentration with respect to changes in steady-state flux is governed by the effective equilibrium constant of the intermediate formation, and the equilibrium can be regulated by a futile cycle. The direction of the shift in the effective equilibrium constant depends on the direction of operation of the futile cycle. High stoichiometric sensitivity corresponds to ultrasensitivity of an intermediate concentration to net flow through a pathway; low stoichiometric sensitivity corresponds to super-robustness of concentration with respect to changes in flux. Both cases potentially play important roles in metabolic regulation. Futile cycles actively shift the effective equilibrium by expending energy; the magnitude of changes in effective equilibria and sensitivities is a function of the amount of energy used by a futile cycle. This proposed mechanism for control by futile cycles works remarkably similarly to kinetic proofreading in biosynthesis. The sensitivity of the system is also intimately related to the rate of concentration fluctuations of intermediate metabolites. The possibly different roles of the two major mechanisms for cellular biochemical regulation, namely reversible chemical modifications via futile cycles and shifting equilibrium by macromolecular binding, are discussed.
2003.12417
Alessio Notari
Alessio Notari
Temperature dependence of COVID-19 transmission
5 pages, 5 figures. Updated with improved analysis, leading to higher significance. Analysis with extended dataset added
null
10.1016/j.scitotenv.2020.144390
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent coronavirus pandemic follows in its early stages an almost exponential growth, with the number of cases quite well fit in time by $N(t)\propto e^{\alpha t}$, in many countries. We analyze the rate $\alpha$ for each country, starting from a threshold of 30 total cases and using the next 12 days, capturing thus the early growth homogeneously. We look for a link between $\alpha$ and the average temperature $T$ of each country, in the month of the epidemic growth. We analyze a {\it base} set of 42 countries, which developed the epidemic earlier, an {\it intermediate} set of 88 countries and an {\it extended} set of 125 countries, which developed the epidemic more recently. Applying a linear fit $\alpha(T)$, we find increasing evidence for a decreasing $\alpha$ as a function of $T$, at $99.66\%$C.L., $99.86\%$C.L. and $99.99995 \%$ C.L. ($p$-value $5 \cdot 10^{-7}$, or 5$\sigma$ detection) in the {\it base}, {\it intermediate} and {\it extended} dataset, respectively. The doubling time is expected to increase by $40\%\sim 50\%$, going from $5^\circ$ C to $25^\circ$ C. In the {\it base} set, going beyond a linear model, a peak at $(7.7\pm 3.6)^\circ C$ seems to be present, but its evidence disappears for the larger datasets. We also analyzed a possible bias: poor countries, often located in warm regions, might have less intense testing. By excluding countries below a given GDP per capita, we find that our conclusions are only slightly affected and only for the {\it extended} dataset. The significance remains high, with a $p$-value of $10^{-3}-10^{-4}$ or less. Our findings give hope that, for northern hemisphere countries, the growth rate should significantly decrease as a result of both warmer weather and lockdown policies. In general the propagation should be hopefully stopped by strong lockdown, testing and tracking policies, before the arrival of the cold season.
[ { "created": "Fri, 27 Mar 2020 13:47:43 GMT", "version": "v1" }, { "created": "Tue, 31 Mar 2020 18:41:55 GMT", "version": "v2" }, { "created": "Fri, 3 Apr 2020 15:20:22 GMT", "version": "v3" }, { "created": "Wed, 22 Apr 2020 22:15:04 GMT", "version": "v4" } ]
2020-12-22
[ [ "Notari", "Alessio", "" ] ]
The recent coronavirus pandemic follows in its early stages an almost exponential growth, with the number of cases quite well fit in time by $N(t)\propto e^{\alpha t}$, in many countries. We analyze the rate $\alpha$ for each country, starting from a threshold of 30 total cases and using the next 12 days, capturing thus the early growth homogeneously. We look for a link between $\alpha$ and the average temperature $T$ of each country, in the month of the epidemic growth. We analyze a {\it base} set of 42 countries, which developed the epidemic earlier, an {\it intermediate} set of 88 countries and an {\it extended} set of 125 countries, which developed the epidemic more recently. Applying a linear fit $\alpha(T)$, we find increasing evidence for a decreasing $\alpha$ as a function of $T$, at $99.66\%$C.L., $99.86\%$C.L. and $99.99995 \%$ C.L. ($p$-value $5 \cdot 10^{-7}$, or 5$\sigma$ detection) in the {\it base}, {\it intermediate} and {\it extended} dataset, respectively. The doubling time is expected to increase by $40\%\sim 50\%$, going from $5^\circ$ C to $25^\circ$ C. In the {\it base} set, going beyond a linear model, a peak at $(7.7\pm 3.6)^\circ C$ seems to be present, but its evidence disappears for the larger datasets. We also analyzed a possible bias: poor countries, often located in warm regions, might have less intense testing. By excluding countries below a given GDP per capita, we find that our conclusions are only slightly affected and only for the {\it extended} dataset. The significance remains high, with a $p$-value of $10^{-3}-10^{-4}$ or less. Our findings give hope that, for northern hemisphere countries, the growth rate should significantly decrease as a result of both warmer weather and lockdown policies. In general the propagation should be hopefully stopped by strong lockdown, testing and tracking policies, before the arrival of the cold season.
2306.10373
Hina Shaheen
Hina Shaheen, Roderick Melnik, Sundeep Singh
Data-driven Stochastic Model for Quantifying the Interplay Between Amyloid-beta and Calcium Levels in Alzheimer's Disease
20 pages, 6 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The abnormal aggregation of extracellular amyloid-$\beta$ (A$\beta$) in senile plaques, resulting in calcium (Ca$^{+2}$) dyshomeostasis, is one of the primary symptoms of Alzheimer's disease (AD). Significant research efforts have been devoted in the past to better understanding the underlying molecular mechanisms driving A$\beta$ deposition and Ca$^{+2}$ dysregulation. To better understand this interaction, we report a novel stochastic model in which we analyze the positive feedback loop between A$\beta$ and Ca$^{+2}$ using ADNI data. A good therapeutic treatment plan for AD requires precise predictions. Stochastic models offer an appropriate framework for modelling AD, since AD studies are observational in nature and involve regular patient visits. The etiology of AD may be described as a multi-state disease process using the approximate Bayesian computation method. Thus, utilizing ADNI data from $2$-year visits for AD patients, we employ this method to investigate the interplay between A$\beta$ and Ca$^{+2}$ levels at various phases of disease development. Incorporating the ADNI data in our physics-based Bayesian model, we discovered that a sufficiently large disruption in either A$\beta$ metabolism or intracellular Ca$^{+2}$ homeostasis causes relative growth in both Ca$^{+2}$ and A$\beta$ levels, which corresponds to the development of AD. The imbalance of Ca$^{+2}$ ions causes A$\beta$ disorders by directly or indirectly affecting a variety of cellular and subcellular processes, and the altered homeostasis may worsen the abnormalities of Ca$^{+2}$ ion transport and deposition. This suggests that altering the Ca$^{+2}$ balance, or the balance between A$\beta$ and Ca$^{+2}$ by chelating them, may be able to reduce the disorders associated with AD and open up new research possibilities for AD therapy.
[ { "created": "Sat, 17 Jun 2023 15:13:38 GMT", "version": "v1" } ]
2023-06-21
[ [ "Shaheen", "Hina", "" ], [ "Melnik", "Roderick", "" ], [ "Singh", "Sundeep", "" ] ]
The abnormal aggregation of extracellular amyloid-$\beta$ (A$\beta$) in senile plaques, resulting in calcium (Ca$^{+2}$) dyshomeostasis, is one of the primary symptoms of Alzheimer's disease (AD). Significant research efforts have been devoted in the past to better understanding the underlying molecular mechanisms driving A$\beta$ deposition and Ca$^{+2}$ dysregulation. To better understand this interaction, we report a novel stochastic model in which we analyze the positive feedback loop between A$\beta$ and Ca$^{+2}$ using ADNI data. A good therapeutic treatment plan for AD requires precise predictions. Stochastic models offer an appropriate framework for modelling AD, since AD studies are observational in nature and involve regular patient visits. The etiology of AD may be described as a multi-state disease process using the approximate Bayesian computation method. Thus, utilizing ADNI data from $2$-year visits for AD patients, we employ this method to investigate the interplay between A$\beta$ and Ca$^{+2}$ levels at various phases of disease development. Incorporating the ADNI data in our physics-based Bayesian model, we discovered that a sufficiently large disruption in either A$\beta$ metabolism or intracellular Ca$^{+2}$ homeostasis causes relative growth in both Ca$^{+2}$ and A$\beta$ levels, which corresponds to the development of AD. The imbalance of Ca$^{+2}$ ions causes A$\beta$ disorders by directly or indirectly affecting a variety of cellular and subcellular processes, and the altered homeostasis may worsen the abnormalities of Ca$^{+2}$ ion transport and deposition. This suggests that altering the Ca$^{+2}$ balance, or the balance between A$\beta$ and Ca$^{+2}$ by chelating them, may be able to reduce the disorders associated with AD and open up new research possibilities for AD therapy.
1108.5657
Vasily Ogryzko V
Arman Kulyyassov, Muhammad Shoaib, Andrei Pichugin, Patricia Kannouche, Erlan Ramanculov, Marc Lipinski and Vasily Ogryzko
PUB-MS - a mass-spectrometry-based method to monitor protein-protein proximity in vivo
46 pages, 5 main Figures and 7 supplementary Figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The common techniques to study protein-protein proximity in vivo are not well-adapted to the capabilities and the expertise of a standard proteomics laboratory, typically based on the use of mass spectrometry. With the aim of closing this gap, we have developed PUB-MS (for Proximity Utilizing Biotinylation and Mass Spectrometry), an approach to monitor protein-protein proximity, based on biotinylation of a protein fused to a biotin-acceptor peptide (BAP) by a biotin-ligase, BirA, fused to its interaction partner. The biotinylation status of the BAP can be further detected by either Western analysis or mass spectrometry. The BAP sequence was redesigned for easy monitoring of the biotinylation status by LC-MS/MS. In several experimental models, we demonstrate that the biotinylation in vivo is specifically enhanced when the BAP- and BirA- fused proteins are in proximity to each other. The advantage of mass spectrometry is demonstrated by using BAPs with different sequences in a single experiment (allowing multiplex analysis) and by the use of stable isotopes. Finally, we show that our methodology can be also used to study a specific subfraction of a protein of interest that was in proximity with another protein at a predefined time before the analysis.
[ { "created": "Mon, 29 Aug 2011 17:10:27 GMT", "version": "v1" } ]
2011-08-30
[ [ "Kulyyassov", "Arman", "" ], [ "Shoaib", "Muhammad", "" ], [ "Pichugin", "Andrei", "" ], [ "Kannouche", "Patricia", "" ], [ "Ramanculov", "Erlan", "" ], [ "Lipinski", "Marc", "" ], [ "Ogryzko", "Vasily", "" ] ]
The common techniques to study protein-protein proximity in vivo are not well-adapted to the capabilities and the expertise of a standard proteomics laboratory, typically based on the use of mass spectrometry. With the aim of closing this gap, we have developed PUB-MS (for Proximity Utilizing Biotinylation and Mass Spectrometry), an approach to monitor protein-protein proximity, based on biotinylation of a protein fused to a biotin-acceptor peptide (BAP) by a biotin-ligase, BirA, fused to its interaction partner. The biotinylation status of the BAP can be further detected by either Western analysis or mass spectrometry. The BAP sequence was redesigned for easy monitoring of the biotinylation status by LC-MS/MS. In several experimental models, we demonstrate that the biotinylation in vivo is specifically enhanced when the BAP- and BirA- fused proteins are in proximity to each other. The advantage of mass spectrometry is demonstrated by using BAPs with different sequences in a single experiment (allowing multiplex analysis) and by the use of stable isotopes. Finally, we show that our methodology can be also used to study a specific subfraction of a protein of interest that was in proximity with another protein at a predefined time before the analysis.
2202.00143
Henry Cousins
Henry Cousins, Taryn Hall, Yinglong Guo, Luke Tso, Kathy Tzy-Hwa Tzeng, Le Cong, Russ Altman
Gene set proximity analysis: expanding gene set enrichment analysis through learned geometric embeddings
21 pages, 6 figures
null
10.1093/bioinformatics/btac735
null
q-bio.QM q-bio.GN
http://creativecommons.org/licenses/by-nc-nd/4.0/
Gene set analysis methods rely on knowledge-based representations of genetic interactions in the form of both gene set collections and protein-protein interaction (PPI) networks. Explicit representations of genetic interactions often fail to capture complex interdependencies among genes, limiting the analytic power of such methods. Here we propose an extension of gene set enrichment analysis to a latent feature space reflecting PPI network topology, called gene set proximity analysis (GSPA). Compared with existing methods, GSPA provides improved ability to identify disease-associated pathways in disease-matched gene expression datasets, while improving reproducibility of enrichment statistics for similar gene sets. GSPA is statistically straightforward, reducing to classical gene set enrichment through a single user-defined parameter. We apply our method to identify novel drug associations with SARS-CoV-2 viral entry. Finally, we validate our drug association predictions through retrospective clinical analysis of claims data from 8 million patients, supporting a role for gabapentin as a risk factor and metformin as a protective factor for COVID-19 hospitalization.
[ { "created": "Mon, 31 Jan 2022 23:11:26 GMT", "version": "v1" } ]
2023-02-23
[ [ "Cousins", "Henry", "" ], [ "Hall", "Taryn", "" ], [ "Guo", "Yinglong", "" ], [ "Tso", "Luke", "" ], [ "Tzeng", "Kathy Tzy-Hwa", "" ], [ "Cong", "Le", "" ], [ "Altman", "Russ", "" ] ]
Gene set analysis methods rely on knowledge-based representations of genetic interactions in the form of both gene set collections and protein-protein interaction (PPI) networks. Explicit representations of genetic interactions often fail to capture complex interdependencies among genes, limiting the analytic power of such methods. Here we propose an extension of gene set enrichment analysis to a latent feature space reflecting PPI network topology, called gene set proximity analysis (GSPA). Compared with existing methods, GSPA provides improved ability to identify disease-associated pathways in disease-matched gene expression datasets, while improving reproducibility of enrichment statistics for similar gene sets. GSPA is statistically straightforward, reducing to classical gene set enrichment through a single user-defined parameter. We apply our method to identify novel drug associations with SARS-CoV-2 viral entry. Finally, we validate our drug association predictions through retrospective clinical analysis of claims data from 8 million patients, supporting a role for gabapentin as a risk factor and metformin as a protective factor for COVID-19 hospitalization.
1506.01033
Adam Marblestone
Samuel G Rodriques, Adam H Marblestone, Max Mankin, Lowell Wood and Edward S Boyden
Multiplexed Neural Recording Down a Single Optical Fiber via Optical Reflectometry with Capacitive Signal Enhancement
null
null
10.1117/1.JBO.21.5.057003
null
q-bio.NC physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a fiber-optic architecture for neural recording without contrast agents, and study its properties theoretically. Our sensor design is inspired by electrooptic modulators, which modulate the refractive index of a waveguide by applying an electric field across an electrooptic core material, and allows recording of the activities of individual neurons located at points along a 10 cm length of optical fiber with 20 um axial resolution, sensitivity down to 100 uV and a dynamic range of up to 1V using commercially available optical reflectometers as readout devices. A key concept of the design is the ability to create an "intensified" electric field inside an optical waveguide by applying the extracellular voltage from a neural spike over a nanoscopic distance. Implementing this concept requires the use of ultrathin high-dielectric capacitor layers. If suitable materials can be found -- possessing favorable properties with respect to toxicity, ohmic junctions, and surface capacitance -- then such sensing fibers could, in principle, be scaled down to few-micron cross-sections for minimally invasive neural interfacing. Custom-designed multi-material optical fibers, probed using a reflectometric readout, may therefore provide a powerful platform for neural sensing.
[ { "created": "Tue, 2 Jun 2015 20:05:05 GMT", "version": "v1" } ]
2020-02-04
[ [ "Rodriques", "Samuel G", "" ], [ "Marblestone", "Adam H", "" ], [ "Mankin", "Max", "" ], [ "Wood", "Lowell", "" ], [ "Boyden", "Edward S", "" ] ]
We introduce a fiber-optic architecture for neural recording without contrast agents, and study its properties theoretically. Our sensor design is inspired by electrooptic modulators, which modulate the refractive index of a waveguide by applying an electric field across an electrooptic core material, and allows recording of the activities of individual neurons located at points along a 10 cm length of optical fiber with 20 um axial resolution, sensitivity down to 100 uV and a dynamic range of up to 1V using commercially available optical reflectometers as readout devices. A key concept of the design is the ability to create an "intensified" electric field inside an optical waveguide by applying the extracellular voltage from a neural spike over a nanoscopic distance. Implementing this concept requires the use of ultrathin high-dielectric capacitor layers. If suitable materials can be found -- possessing favorable properties with respect to toxicity, ohmic junctions, and surface capacitance -- then such sensing fibers could, in principle, be scaled down to few-micron cross-sections for minimally invasive neural interfacing. Custom-designed multi-material optical fibers, probed using a reflectometric readout, may therefore provide a powerful platform for neural sensing.
2202.11752
Raju Hazari
Raju Hazari and P Pal Chaudhuri
Analysis of Coronavirus Envelope Protein with a Cellular Automata (CA) Model
null
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
The significantly higher transmissibility of SARS-CoV-2 (2019) compared to SARS-CoV (2003) and MERS-CoV (2012) can be attributed to mutations reported in the structural proteins, and to the roles played by non-structural proteins (nsps) and accessory proteins (ORFs) in viral replication, assembly, and shedding. The envelope protein E is the structural protein of minimum length among the four. Recent studies have confirmed the critical role played by the envelope protein in the viral life cycle, including the assembly of virions exported from the infected cell for transmission. However, the determinants of the highly complex viral-host interactions of the envelope protein, particularly with the host Golgi complex, have not been adequately characterized. The CoV-2 and CoV envelope proteins, of length 75 and 76 amino acids, differ at four amino acid locations. The additional amino acid Gly (G) at location 70 makes the CoV length 76. The amino acid pair EG at locations 69-70 of CoV, in place of the amino acid R at location 69 of CoV-2, has been identified as a major determining factor in the current investigation. This paper concentrates on the design of a computational model to compare the structure/function of the wild types and mutants of CoV-2 with the wild types and mutants of CoV in the functionally important region of the protein chain pair. We hypothesize that the differences in the CAML model parameters of CoV-2 and CoV characterize the deviation in the structure and function of the envelope proteins with respect to the interaction of the virus with the host Golgi complex, and that this difference is reflected in the difference in their transmissibility. The hypothesis has been validated by single-point mutational studies on (i) the human HBB beta-globin hemoglobin protein associated with sickle cell anemia, and (ii) mutants of the envelope protein of CoV-2-infected patients reported in recent publications.
[ { "created": "Sat, 15 Jan 2022 19:07:18 GMT", "version": "v1" } ]
2022-02-25
[ [ "Hazari", "Raju", "" ], [ "Chaudhuri", "P Pal", "" ] ]
The significantly higher transmissibility of SARS-CoV-2 (2019) compared to SARS-CoV (2003) and MERS-CoV (2012) can be attributed to mutations reported in the structural proteins, and to the roles played by non-structural proteins (nsps) and accessory proteins (ORFs) in viral replication, assembly, and shedding. The envelope protein E is the structural protein of minimum length among the four. Recent studies have confirmed the critical role played by the envelope protein in the viral life cycle, including the assembly of virions exported from the infected cell for transmission. However, the determinants of the highly complex viral-host interactions of the envelope protein, particularly with the host Golgi complex, have not been adequately characterized. The CoV-2 and CoV envelope proteins, of length 75 and 76 amino acids, differ at four amino acid locations. The additional amino acid Gly (G) at location 70 makes the CoV length 76. The amino acid pair EG at locations 69-70 of CoV, in place of the amino acid R at location 69 of CoV-2, has been identified as a major determining factor in the current investigation. This paper concentrates on the design of a computational model to compare the structure/function of the wild types and mutants of CoV-2 with the wild types and mutants of CoV in the functionally important region of the protein chain pair. We hypothesize that the differences in the CAML model parameters of CoV-2 and CoV characterize the deviation in the structure and function of the envelope proteins with respect to the interaction of the virus with the host Golgi complex, and that this difference is reflected in the difference in their transmissibility. The hypothesis has been validated by single-point mutational studies on (i) the human HBB beta-globin hemoglobin protein associated with sickle cell anemia, and (ii) mutants of the envelope protein of CoV-2-infected patients reported in recent publications.
1207.5289
Pabitra Pal Choudhury
Sk. Sarif Hassan, Pabitra Pal Choudhury, Antara Sengupta, Binayak Sahu, Rojalin Mishra, Devendra Kumar Yadav, Saswatee Panda, Dharamveer Pradhan, Shrusti Dash and Gourav Pradhan
A Quantitative Understanding of Human Sex Chromosomal Genes
null
null
null
null
q-bio.GN q-bio.PE
http://creativecommons.org/licenses/by/3.0/
In the last few decades, the human allosomes have attracted intensive attention among researchers. The allosomes have now been sequenced, and about 2000 and 78 genes have been found in the human X and Y chromosomes respectively. The hemizygosity of the human X chromosome in males exposes recessive disease alleles, and this phenomenon has prompted decades of intensive study of X-linked disorders. By contrast, the small size of the human Y chromosome, and its prominent long-arm heterochromatic region, suggested an absence of function beyond sex determination. The present problem, however, is to determine whether a given sequence of nucleotides, i.e. a DNA sequence, is a human X or Y chromosomal gene, without any biological experimental support. In our view, a proper quantitative understanding of these genes is required to justify or nullify whether a given sequence is a human X or Y chromosomal gene. In this paper, some of the X and Y chromosomal genes have been quantified at the genomic and proteomic levels through fractal geometric and mathematical morphometric analysis. Using the proposed quantitative model, one can easily make a probable justification, or a deterministic nullification, of whether a given sequence of nucleotides is a human X or Y chromosomal gene, without any biological experiment. Of course, a further biological experiment is essential to validate it as a probable human X or Y chromosomal gene homologue. This study should enable biologists to understand these genes in a more quantitative manner, instead of only through their qualitative features.
[ { "created": "Mon, 23 Jul 2012 04:20:23 GMT", "version": "v1" }, { "created": "Mon, 2 Dec 2013 04:13:18 GMT", "version": "v2" } ]
2013-12-03
[ [ "Hassan", "Sk. Sarif", "" ], [ "Choudhury", "Pabitra Pal", "" ], [ "Sengupta", "Antara", "" ], [ "Sahu", "Binayak", "" ], [ "Mishra", "Rojalin", "" ], [ "Yadav", "Devendra Kumar", "" ], [ "Panda", "Saswatee", "" ], [ "Pradhan", "Dharamveer", "" ], [ "Dash", "Shrusti", "" ], [ "Pradhan", "Gourav", "" ] ]
In the last few decades, the human allosomes have attracted intensive attention among researchers. The allosomes have now been sequenced, and about 2000 and 78 genes have been found in the human X and Y chromosomes respectively. The hemizygosity of the human X chromosome in males exposes recessive disease alleles, and this phenomenon has prompted decades of intensive study of X-linked disorders. By contrast, the small size of the human Y chromosome, and its prominent long-arm heterochromatic region, suggested an absence of function beyond sex determination. The present problem, however, is to determine whether a given sequence of nucleotides, i.e. a DNA sequence, is a human X or Y chromosomal gene, without any biological experimental support. In our view, a proper quantitative understanding of these genes is required to justify or nullify whether a given sequence is a human X or Y chromosomal gene. In this paper, some of the X and Y chromosomal genes have been quantified at the genomic and proteomic levels through fractal geometric and mathematical morphometric analysis. Using the proposed quantitative model, one can easily make a probable justification, or a deterministic nullification, of whether a given sequence of nucleotides is a human X or Y chromosomal gene, without any biological experiment. Of course, a further biological experiment is essential to validate it as a probable human X or Y chromosomal gene homologue. This study should enable biologists to understand these genes in a more quantitative manner, instead of only through their qualitative features.
1708.05774
Daniel Kepple
Daniel Kepple and Alexei Koulakov
Constructing an olfactory perceptual space and predicting percepts from molecular structure
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given the structure of a novel molecule, there is still no one who can reliably predict what odor percept that molecule will evoke. The challenge comes from both the difficulty in quantitatively characterizing molecular structure, and the inadequacy of language to fully characterize olfactory perception. Here, we present a novel approach to both problems.
[ { "created": "Fri, 18 Aug 2017 21:56:53 GMT", "version": "v1" }, { "created": "Wed, 6 Jun 2018 20:50:17 GMT", "version": "v2" } ]
2018-06-08
[ [ "Kepple", "Daniel", "" ], [ "Koulakov", "Alexei", "" ] ]
Given the structure of a novel molecule, there is still no one who can reliably predict what odor percept that molecule will evoke. The challenge comes from both the difficulty in quantitatively characterizing molecular structure, and the inadequacy of language to fully characterize olfactory perception. Here, we present a novel approach to both problems.
2307.05606
Erdi Kara
Erdi Kara, T. L. Jackson, Chartese Jones, Reginald L. McGee II, Rockford Sison
Mathematical Modeling Insights into Improving CAR T cell Therapy for Solid Tumors: Antigen Heterogeneity and Bystander Effects
null
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
As an adoptive cellular therapy, Chimeric Antigen Receptor T-cell (CAR T-cell) therapy has shown remarkable success in hematological malignancies, but only limited efficacy against solid tumors. Compared with blood cancers, solid tumors present a unique set of challenges that ultimately neutralize the function of CAR T-cells. One such barrier is antigen heterogeneity - variability in the expression of the antigen on tumor cells. Success of CAR T-cell therapy in solid tumors is unlikely unless almost all the tumor cells express the specific antigen that CAR T-cells target. A critical question for solving the heterogeneity problem is whether CAR T-cell therapy induces bystander effects, such as antigen spreading. Antigen spreading occurs when CAR T-cells activate other endogenous antitumor CD8 T cells against antigens that were not originally targeted. In this work, we develop a mathematical model of CAR T-cell therapy for solid tumors that takes into consideration both antigen heterogeneity and bystander effects. Our model is based on in vivo treatment data that includes a mixture of target antigen-positive and target antigen-negative tumor cells. We use our model to simulate large cohorts of virtual patients to gain a better understanding of bystander killing. We also investigate several strategies for enhancing the bystander effect and thus increasing the overall efficacy of CAR T-cell therapy for solid tumors.
[ { "created": "Mon, 10 Jul 2023 21:35:39 GMT", "version": "v1" } ]
2023-07-13
[ [ "Kara", "Erdi", "" ], [ "Jackson", "T. L.", "" ], [ "Jones", "Chartese", "" ], [ "McGee", "Reginald L.", "II" ], [ "Sison", "Rockford", "" ] ]
As an adoptive cellular therapy, Chimeric Antigen Receptor T-cell (CAR T-cell) therapy has shown remarkable success in hematological malignancies, but only limited efficacy against solid tumors. Compared with blood cancers, solid tumors present a unique set of challenges that ultimately neutralize the function of CAR T-cells. One such barrier is antigen heterogeneity - variability in the expression of the antigen on tumor cells. Success of CAR T-cell therapy in solid tumors is unlikely unless almost all the tumor cells express the specific antigen that CAR T-cells target. A critical question for solving the heterogeneity problem is whether CAR T-cell therapy induces bystander effects, such as antigen spreading. Antigen spreading occurs when CAR T-cells activate other endogenous antitumor CD8 T cells against antigens that were not originally targeted. In this work, we develop a mathematical model of CAR T-cell therapy for solid tumors that takes into consideration both antigen heterogeneity and bystander effects. Our model is based on in vivo treatment data that includes a mixture of target antigen-positive and target antigen-negative tumor cells. We use our model to simulate large cohorts of virtual patients to gain a better understanding of bystander killing. We also investigate several strategies for enhancing the bystander effect and thus increasing the overall efficacy of CAR T-cell therapy for solid tumors.
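As a hypothetical sketch, not the authors' actual equations, the following minimal ODE model (integrated with forward Euler) captures the two ingredients the abstract describes: antigen-positive tumor cells killed directly by CAR T-cells, and antigen-negative cells reachable only through a bystander compartment of endogenous CD8 T cells recruited by CAR T activity. The `simulate` interface and all parameter values are illustrative assumptions.

```python
def simulate(k_bystander, days=60.0, dt=0.05):
    """Forward-Euler integration of a toy CAR T / bystander model.

    Tp: antigen-positive tumor cells (direct CAR T targets)
    Tn: antigen-negative tumor cells (reachable only via bystander killing)
    C:  CAR T cells, expanding on contact with antigen-positive cells
    E:  endogenous CD8 T cells recruited by CAR T activity (antigen spreading)
    """
    Tp, Tn, C, E = 1e4, 1e4, 1e3, 0.0
    r = 0.08               # tumor growth rate (1/day)
    k_car = 2e-4           # CAR T killing of antigen-positive cells
    a, b = 0.5, 0.3        # CAR T expansion / endogenous recruitment rates
    s = 1e3                # half-saturation of antigen stimulation
    d_c, d_e = 0.1, 0.05   # T cell death rates
    for _ in range(int(days / dt)):
        stim = Tp / (s + Tp)               # antigen-dependent stimulation
        dTp = r * Tp - k_car * C * Tp
        dTn = r * Tn - k_bystander * E * Tn
        dC = a * C * stim - d_c * C
        dE = b * C * stim - d_e * E
        Tp = max(Tp + dt * dTp, 0.0)
        Tn = max(Tn + dt * dTn, 0.0)
        C = max(C + dt * dC, 0.0)
        E = max(E + dt * dE, 0.0)
    return Tp, Tn
```

With `k_bystander = 0` the antigen-negative compartment escapes and regrows the tumor even after the antigen-positive cells are cleared; any appreciable bystander killing lets the recruited endogenous cells suppress it, mirroring the heterogeneity barrier discussed above.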
1903.04353
Pedro Lind
Jo\~ao Sequeira, Jorge Lou\c{c}\~a, Ant\'onio M. Mendes and Pedro G. Lind
Transition from endemic behavior to eradication of malaria due to combined drug therapies: an agent-model approach
12 pages, 6 figures
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce an agent-based model describing a susceptible-infectious-susceptible (SIS) system of humans and mosquitoes to predict malaria epidemiological scenarios under realistic biological conditions. Emphasis is given to the transition from endemic behavior to eradication of malaria transmission induced by combined drug therapies acting on both gametocytemia reduction and selective mosquito mortality during parasite development in the mosquito. Our mathematical framework enables us to uncover the critical values of the parameters characterizing the effect of each drug therapy. Moreover, our results provide quantitative evidence for what is empirically known: interventions combining gametocytemia reduction through gametocidal drugs with the selective action of ivermectin during parasite development in the mosquito may actively promote disease eradication in the long run. In the agent model, the main properties of human-mosquito interactions are implemented as parameters, and the model is validated by comparing simulations with real malaria incidence data collected in the endemic region of Chimoio in Mozambique. Finally, we discuss our findings in light of current drug administration strategies for malaria prevention that may interfere with the human-to-mosquito transmission process.
[ { "created": "Fri, 8 Mar 2019 10:59:18 GMT", "version": "v1" } ]
2019-03-12
[ [ "Sequeira", "João", "" ], [ "Louçã", "Jorge", "" ], [ "Mendes", "António M.", "" ], [ "Lind", "Pedro G.", "" ] ]
We introduce an agent-based model describing a susceptible-infectious-susceptible (SIS) system of humans and mosquitoes to predict malaria epidemiological scenarios under realistic biological conditions. Emphasis is given to the transition from endemic behavior to eradication of malaria transmission induced by combined drug therapies acting on both gametocytemia reduction and selective mosquito mortality during parasite development in the mosquito. Our mathematical framework enables us to uncover the critical values of the parameters characterizing the effect of each drug therapy. Moreover, our results provide quantitative evidence for what is empirically known: interventions combining gametocytemia reduction through gametocidal drugs with the selective action of ivermectin during parasite development in the mosquito may actively promote disease eradication in the long run. In the agent model, the main properties of human-mosquito interactions are implemented as parameters, and the model is validated by comparing simulations with real malaria incidence data collected in the endemic region of Chimoio in Mozambique. Finally, we discuss our findings in light of current drug administration strategies for malaria prevention that may interfere with the human-to-mosquito transmission process.
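As a hypothetical sketch (parameter values and helper names are illustrative, not calibrated to the Chimoio data or to the authors' implementation), the following minimal agent-based SIS simulation couples a human and a mosquito population and exposes the two intervention handles described above: `gam_red`, the fractional reduction of human-to-mosquito transmissibility from gametocidal drugs, and `iver_kill`, the extra per-bite mosquito mortality from an ivermectin-like drug.

```python
import random

def prevalence(gam_red, iver_kill, steps=400, seed=7):
    """Fraction of infected humans after `steps` rounds of an SIS
    human-mosquito agent model. Dead mosquitoes are replaced by
    susceptible newborns, so both population sizes stay fixed."""
    n_h, n_m = 400, 1600
    rng = random.Random(seed)
    h_inf = [False] * n_h
    for i in rng.sample(range(n_h), 40):      # seed 10% human prevalence
        h_inf[i] = True
    m_inf = [False] * n_m
    for _ in range(steps):
        for m in range(n_m):                  # each mosquito bites one human
            h = rng.randrange(n_h)
            if m_inf[m] and not h_inf[h] and rng.random() < 0.12:
                h_inf[h] = True               # mosquito-to-human transmission
            if h_inf[h]:
                if not m_inf[m] and rng.random() < 0.12 * (1.0 - gam_red):
                    m_inf[m] = True           # human-to-mosquito (gametocytemia-limited)
                if rng.random() < iver_kill:
                    m_inf[m] = False          # ivermectin kills after an infectious blood meal
        for h in range(n_h):
            if h_inf[h] and rng.random() < 0.05:
                h_inf[h] = False              # human recovery (SIS: back to susceptible)
        for m in range(n_m):
            if m_inf[m] and rng.random() < 0.10:
                m_inf[m] = False              # background mosquito turnover
    return sum(h_inf) / n_h
```

With both drug parameters at zero this toy system settles into an endemic equilibrium, while combining a strong gametocytemia reduction with ivermectin-like mortality drives transmission toward eradication, which is the transition the abstract analyzes.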
1210.3809
John Barton
John Barton, Eduardo D. Sontag
The energy costs of biological insulators
16 pages, 5 figures
null
10.1016/j.bpj.2013.01.056
null
q-bio.MN q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biochemical signaling pathways can be insulated from impedance and competition effects through enzymatic "futile cycles" which consume energy, typically in the form of ATP. We hypothesize that better insulation necessarily requires higher energy consumption, and provide evidence, through the computational analysis of a simplified physical model, to support this hypothesis.
[ { "created": "Sun, 14 Oct 2012 16:23:33 GMT", "version": "v1" }, { "created": "Tue, 16 Oct 2012 16:11:27 GMT", "version": "v2" } ]
2014-05-06
[ [ "Barton", "John", "" ], [ "Sontag", "Eduardo D.", "" ] ]
Biochemical signaling pathways can be insulated from impedance and competition effects through enzymatic "futile cycles" which consume energy, typically in the form of ATP. We hypothesize that better insulation necessarily requires higher energy consumption, and provide evidence, through the computational analysis of a simplified physical model, to support this hypothesis.