Dataset schema (field name: type, observed value range):

- id: string, length 9–13
- submitter: string, length 4–48
- authors: string, length 4–9.62k
- title: string, length 4–343
- comments: string, length 2–480
- journal-ref: string, length 9–309
- doi: string, length 12–138
- report-no: string, 277 distinct values
- categories: string, length 8–87
- license: string, 9 distinct values
- orig_abstract: string, length 27–3.76k
- versions: list, length 1–15
- update_date: string, length 10–10
- authors_parsed: list, length 1–147
- abstract: string, length 24–3.75k
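The records below are flat sequences of values in the schema order above. As a minimal sketch of how one might load such a record into a dictionary, the snippet below zips values against the field names and decodes the JSON-encoded `versions` and `authors_parsed` fields; `FIELDS` and `parse_record` are illustrative names, and the near-duplicate `orig_abstract`/`abstract` pair is collapsed into a single `abstract` field for brevity.

```python
import json

# Field names taken from the schema above (orig_abstract/abstract merged).
FIELDS = ["id", "submitter", "authors", "title", "comments", "journal-ref",
          "doi", "report-no", "categories", "license", "abstract",
          "versions", "update_date", "authors_parsed"]

def parse_record(values):
    """Zip a flat list of field values into a dict keyed by schema field names."""
    record = dict(zip(FIELDS, values))
    # versions and authors_parsed are stored as JSON strings; decode them.
    for key in ("versions", "authors_parsed"):
        if isinstance(record.get(key), str):
            record[key] = json.loads(record[key])
    return record

row = parse_record([
    "1602.05167", "Wolfram Liebermeister", "Wolfram Liebermeister",
    "Optimal enzyme rhythms in cells", None, None, None, None,
    "q-bio.MN", "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "Cells can use periodic enzyme activities ...",
    '[ { "created": "Tue, 16 Feb 2016 20:27:47 GMT", "version": "v1" } ]',
    "2022-10-05",
    '[ [ "Liebermeister", "Wolfram", "" ] ]',
])
print(row["versions"][0]["version"])  # -> v1
```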
id: 1602.05167
submitter: Wolfram Liebermeister
authors: Wolfram Liebermeister
title: Optimal enzyme rhythms in cells
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.MN
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Cells can use periodic enzyme activities to adapt to periodic environments or existing internal rhythms and to establish metabolic cycles that schedule biochemical processes in time. A periodically changing allocation of the protein budget between reactions or pathways may increase the overall metabolic efficiency. To study this hypothesis, I quantify the possible benefits of small-amplitude enzyme rhythms in kinetic models. Starting from an enzyme-optimised steady state, I score the effects of possible enzyme rhythms on a metabolic objective and optimise their amplitudes and phase shifts. Assuming small-amplitude rhythms around an optimal reference state, optimal phases and amplitudes can be computed by solving a quadratic optimality problem. In models without amplitude constraints, general periodic enzyme profiles can be obtained by Fourier synthesis. The theory of optimal enzyme rhythms combines the dynamics and economics of metabolic systems and explains how optimal small-amplitude enzyme profiles are shaped by network structure, kinetics, external rhythms, and the metabolic objective. The formulae show how orchestrated enzyme rhythms can exploit synergy effects to improve metabolic performance and that optimal enzyme profiles are not simply adapted to existing metabolic rhythms, but that they actively shape these rhythms to improve their own (and other enzymes') efficiency. The resulting optimal enzyme profiles "portray" the enzymes' dynamic effects in the network: for example, enzymes that act synergistically may be coexpressed, periodically and with some optimal phase shifts. The theory yields optimality conditions for enzyme rhythms in metabolic cycles, with static enzyme adaptation as a special case, and predicts how cells should combine transcriptional and posttranslational regulation to realise enzyme rhythms at different frequencies.
versions: [ { "created": "Tue, 16 Feb 2016 20:27:47 GMT", "version": "v1" }, { "created": "Tue, 4 Oct 2022 07:48:24 GMT", "version": "v2" } ]
update_date: 2022-10-05
authors_parsed: [ [ "Liebermeister", "Wolfram", "" ] ]
id: 1902.03978
submitter: Muyuan Chen
authors: Muyuan Chen, James M. Bell, Xiaodong Shi, Stella Y. Sun, Zhao Wang, Steven J. Ludtke
title: A complete data processing workflow for CryoET and subtomogram averaging
comments: 21 pages, 4+2 figures
journal-ref: Nature Methods 16 (2019) 1161-1168
doi: 10.1038/s41592-019-0591-8
report-no: null
categories: q-bio.QM eess.IV
license: http://creativecommons.org/licenses/by/4.0/
abstract: Electron cryotomography (CryoET) is currently the only method capable of visualizing cells in 3D at nanometer resolutions. While modern instruments produce massive amounts of tomography data containing extremely rich structural information, the data processing is very labor intensive and results are often limited by the skills of the personnel rather than the data. We present an integrated workflow that covers the entire tomography data processing pipeline, from automated tilt series alignment to subnanometer resolution subtomogram averaging. This workflow greatly reduces human effort and increases throughput, and is capable of determining protein structures at state-of-the-art resolutions for both purified macromolecules and cells.
versions: [ { "created": "Mon, 11 Feb 2019 16:36:30 GMT", "version": "v1" } ]
update_date: 2021-07-01
authors_parsed: [ [ "Chen", "Muyuan", "" ], [ "Bell", "James M.", "" ], [ "Shi", "Xiaodong", "" ], [ "Sun", "Stella Y.", "" ], [ "Wang", "Zhao", "" ], [ "Ludtke", "Steven J.", "" ] ]
id: 2007.00887
submitter: Vu Viet Hoang Pham
authors: Vu Viet Hoang Pham, Lin Liu, Cameron Bracken, Gregory Goodall, Jiuyong Li and Thuc Duy Le
title: Computational methods for cancer driver discovery: A survey
comments: 13 pages, 6 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.GN cs.CE
license: http://creativecommons.org/licenses/by/4.0/
abstract: Motivation: Uncovering the genomic causes of cancer, known as cancer driver genes, is a fundamental task in biomedical research. Cancer driver genes drive the development and progression of cancer; thus, identifying cancer driver genes and their regulatory mechanisms is crucial to the design of cancer treatment and intervention. Many computational methods, which take advantage of computer science and data science, have been developed to utilise multiple types of genomic data to reveal cancer drivers and the regulatory mechanisms behind cancer development and progression. Given the complexity of the mechanisms by which cancer genes drive cancer and the fast development of the field, a comprehensive review of current computational methods for discovering different types of cancer drivers is necessary. Results: We survey computational methods for identifying cancer drivers from genomic data. We categorise the methods into three groups: methods for single driver identification, methods for driver module identification, and methods for identifying personalised cancer drivers. We also conduct a case study to compare the performance of the current methods. We further analyse the advantages and limitations of the current methods, and discuss the challenges and future directions of the topic. In addition, we investigate the resources for discovering and validating cancer drivers in order to provide a one-stop reference of the tools to facilitate cancer driver discovery. The ultimate goal of the paper is to help those interested in the topic to establish a solid background to carry out further research in the field.
versions: [ { "created": "Thu, 2 Jul 2020 05:18:08 GMT", "version": "v1" } ]
update_date: 2020-07-03
authors_parsed: [ [ "Pham", "Vu Viet Hoang", "" ], [ "Liu", "Lin", "" ], [ "Bracken", "Cameron", "" ], [ "Goodall", "Gregory", "" ], [ "Li", "Jiuyong", "" ], [ "Le", "Thuc Duy", "" ] ]
id: 2404.07090
submitter: Jose Camacho-Mateu
authors: José Camacho-Mateu, Aniello Lampo, Saúl Ares, José A. Cuesta
title: Non-equilibrium microbial dynamics unveil a new macroecological pattern beyond Taylor's law
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE cond-mat.stat-mech q-bio.QM
license: http://creativecommons.org/licenses/by/4.0/
abstract: We introduce a comprehensive analytical benchmark, relying on Fokker-Planck formalism, to study microbial dynamics in the presence of both biotic and abiotic forces. In equilibrium, we observe a balance between the two kinds of forces, leading to no correlations between species abundances. This implies that real microbiomes, where correlations have been observed, operate out of equilibrium. Therefore, we analyze non-equilibrium dynamics, presenting an ansatz for an approximate solution that embodies the complex interplay of forces in the system. This solution is consistent with Taylor's law as a coarse-grained approximation of the relation between species abundance and variance, but implies subtler effects, predicting unobserved structure beyond Taylor's law. Motivated by this theoretical prediction, we refine the analysis of existing metagenomic data, unveiling a novel universal macroecological pattern. Finally, we speculate on the physical origin of Taylor's law: building upon an analogy with Brownian motion theory, we propose that Taylor's law emerges as a Fluctuation-Growth relation resulting from equipartition of environmental resources among microbial species.
versions: [ { "created": "Wed, 10 Apr 2024 15:26:54 GMT", "version": "v1" } ]
update_date: 2024-04-11
authors_parsed: [ [ "Camacho-Mateu", "José", "" ], [ "Lampo", "Aniello", "" ], [ "Ares", "Saúl", "" ], [ "Cuesta", "José A.", "" ] ]
id: 2302.05511
submitter: Christopher Monti
authors: Christopher Monti, Said H. Audi, Justin Womack, Seung-Keun Hong, Yongqiang Yang, Joohyun Kim, Ranjan K. Dash
title: Physiologically-Based Pharmacokinetic Modeling of Blood Clearance of Liver Fluorescent Markers for the Assessment of the Degree of Hepatic Ischemia-Reperfusion Injury
comments: 6 pages, 6 figures, Submitted to IEEE-EMBC Conference Proceedings
journal-ref: null
doi: null
report-no: null
categories: q-bio.TO
license: http://creativecommons.org/licenses/by/4.0/
abstract: During liver transplantation, ischemia-reperfusion injury (IRI) is inevitable and decreases the overall success of the surgery. While guidelines exist, there is no reliable way to quantitatively assess the degree of IRI present in the liver. Our recent study has shown a correlation between the bile-to-plasma ratio of FDA-approved sodium fluorescein (SF) and the degree of hepatic IRI, presumably due to IRI-induced decrease in the activity of the hepatic multidrug resistance-associated protein 2 (MRP2); however, the contribution of SF blood clearance via the bile is still convoluted with other factors, such as renal clearance. In this work, we sought to computationally model SF blood clearance via the bile. First, we converted extant SF fluorescence data from rat whole blood, plasma, and bile to concentrations using calibration curves. Next, based on these SF concentration data, we generated a liver-centric, physiologically-based pharmacokinetic (PBPK) model of SF liver uptake and clearance via the bile. Model simulations show that SF bile concentration is highly sensitive to a change in the activity of hepatic MRP2. These simulations suggest that SF bile clearance along with the PBPK model can be used to quantify the effect of IRI on the activity of MRP2.
versions: [ { "created": "Fri, 10 Feb 2023 21:13:13 GMT", "version": "v1" } ]
update_date: 2023-02-14
authors_parsed: [ [ "Monti", "Christopher", "" ], [ "Audi", "Said H.", "" ], [ "Womack", "Justin", "" ], [ "Hong", "Seung-Keun", "" ], [ "Yang", "Yongqiang", "" ], [ "Kim", "Joohyun", "" ], [ "Dash", "Ranjan K.", "" ] ]
id: 2307.10634
submitter: Musa İhtiyar
authors: Musa Nuri Ihtiyar and Arzucan Ozgur
title: Generative Language Models on Nucleotide Sequences of Human Genes
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.GN cs.CL cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Language models, primarily transformer-based ones, have achieved enormous success in NLP: models like BERT for NLU and GPT-3 for NLG have been crucial. DNA sequences are structurally very close to natural language, and for the DNA-related bioinformatics domain discriminative models, like DNABert, exist; to the best of our knowledge, however, the generative side of the coin remains largely unexplored. Consequently, we focused on developing an autoregressive generative language model, in the spirit of GPT-3, for DNA sequences. Because working with whole DNA sequences is challenging without substantial computational resources, we carried out our study on a smaller scale, focusing on nucleotide sequences of human genes, i.e. unique parts of DNA with specific functionalities, instead of the whole DNA. This decision changes the problem structure little, since both DNA and genes can be seen as 1D sequences consisting of four different nucleotides, without losing much information or oversimplifying. First of all, we systematically examined this almost entirely unexplored problem and observed that RNNs performed best, while simple techniques like N-grams were also promising. Another beneficial outcome was learning how to work with generative models on languages we do not understand, unlike natural language; we also observed how essential it is to use real-life tasks beyond classical metrics such as perplexity. Furthermore, we examined whether the data-hungry nature of these models can be mitigated by selecting a language with a minimal vocabulary size, four owing to the four different types of nucleotides, since choosing such a language might make the problem easier. However, we observed that it did not provide much of a change in the amount of data needed.
versions: [ { "created": "Thu, 20 Jul 2023 06:59:02 GMT", "version": "v1" } ]
update_date: 2023-07-21
authors_parsed: [ [ "Ihtiyar", "Musa Nuri", "" ], [ "Ozgur", "Arzucan", "" ] ]
id: 2405.06724
submitter: Lun Ai
authors: Lun Ai, Stephen H. Muggleton, Shi-Shun Liang, Geoff S. Baldwin
title: Boolean matrix logic programming for active learning of gene functions in genome-scale metabolic network models
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.MN cs.AI cs.LG
license: http://creativecommons.org/licenses/by/4.0/
abstract: Techniques to autonomously drive research have been prominent in Computational Scientific Discovery, while Synthetic Biology is a field of science that focuses on designing and constructing new biological systems for useful purposes. Here we seek to apply logic-based machine learning techniques to facilitate cellular engineering and drive biological discovery. Comprehensive databases of metabolic processes called genome-scale metabolic network models (GEMs) are often used to evaluate cellular engineering strategies to optimise target compound production. However, predicted host behaviours are not always correctly described by GEMs, often due to errors in the models. The task of learning the intricate genetic interactions within GEMs presents computational and empirical challenges. To address these, we describe a novel approach called Boolean Matrix Logic Programming (BMLP) by leveraging boolean matrices to evaluate large logic programs. We introduce a new system, $BMLP_{active}$, which efficiently explores the genomic hypothesis space by guiding informative experimentation through active learning. In contrast to sub-symbolic methods, $BMLP_{active}$ encodes a state-of-the-art GEM of a widely accepted bacterial host in an interpretable and logical representation using datalog logic programs. Notably, $BMLP_{active}$ can successfully learn the interaction between a gene pair with fewer training examples than random experimentation, overcoming the increase in experimental design space. $BMLP_{active}$ enables rapid optimisation of metabolic models to reliably engineer biological systems for producing useful compounds. It offers a realistic approach to creating a self-driving lab for microbial engineering.
versions: [ { "created": "Fri, 10 May 2024 09:51:06 GMT", "version": "v1" }, { "created": "Mon, 20 May 2024 13:01:18 GMT", "version": "v2" }, { "created": "Sun, 11 Aug 2024 17:54:22 GMT", "version": "v3" } ]
update_date: 2024-08-13
authors_parsed: [ [ "Ai", "Lun", "" ], [ "Muggleton", "Stephen H.", "" ], [ "Liang", "Shi-Shun", "" ], [ "Baldwin", "Geoff S.", "" ] ]
id: 1307.1009
submitter: Jose Vilar
authors: Jose M. G. Vilar and Leonor Saiz
title: Systems Biophysics of Gene Expression
comments: Biophysical Review
journal-ref: Biophys. J. 104, 2574-2585 (2013)
doi: 10.1016/j.bpj.2013.04.032
report-no: null
categories: q-bio.MN cond-mat.soft cond-mat.stat-mech physics.bio-ph physics.chem-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Gene expression is a central process to any form of life. It involves multiple temporal and functional scales that extend from specific protein-DNA interactions to the coordinated regulation of multiple genes in response to intracellular and extracellular changes. This diversity in scales poses fundamental challenges to traditional approaches seeking to fully understand even the simplest gene expression systems. Recent advances in computational systems biophysics have provided promising avenues to reliably integrate the molecular detail of biophysical processes into the system behavior. Here, we review recent advances in the description of gene regulation as a system of biophysical processes that extend from specific protein-DNA interactions to the combinatorial assembly of nucleoprotein complexes. There is now a basic mechanistic understanding of how promoters controlled by multiple, local and distal, DNA binding sites for transcription factors can actively control transcriptional noise, cell-to-cell variability, and other properties of gene regulation, including precision and flexibility of the transcriptional responses.
versions: [ { "created": "Wed, 3 Jul 2013 13:34:42 GMT", "version": "v1" } ]
update_date: 2013-07-11
authors_parsed: [ [ "Vilar", "Jose M. G.", "" ], [ "Saiz", "Leonor", "" ] ]
id: 1806.09723
submitter: Diwakar Shukla
authors: Qihua Chen, Jiangyan Feng, Shriyaa Mittal and Diwakar Shukla
title: Automatic Feature Selection in Markov State Models Using Genetic Algorithm
comments: 9 Pages, 5 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.BM
license: http://creativecommons.org/licenses/by/4.0/
abstract: Markov State Models (MSMs) are a powerful framework to reproduce the long-time conformational dynamics of biomolecules using a set of short Molecular Dynamics (MD) simulations. However, precise kinetic predictions of MSMs heavily rely on the features selected to describe the system. Despite the importance of feature selection for large systems, determining an optimal set of features remains a difficult, unsolved problem. Here, we introduce an automatic approach to optimize feature selection based on genetic algorithms (GA), which adaptively evolves the most fit solution according to the laws of natural selection. The power of the GA-based method is illustrated on long atomistic folding simulations of four proteins, varying in length from 28 to 80 residues. Due to the diversity of tested proteins, we expect that our method will be extensible to other proteins and drive MSM building to a more objective protocol.
versions: [ { "created": "Mon, 25 Jun 2018 23:01:57 GMT", "version": "v1" } ]
update_date: 2018-06-27
authors_parsed: [ [ "Chen", "Qihua", "" ], [ "Feng", "Jiangyan", "" ], [ "Mittal", "Shriyaa", "" ], [ "Shukla", "Diwakar", "" ] ]
id: 1111.6062
submitter: Patricio Orio
authors: Patricio Orio and Daniel Soudry
title: Simple, Fast and Accurate Implementation of the Diffusion Approximation Algorithm for Stochastic Ion Channels with Multiple States
comments: 32 text pages, 10 figures, 1 supplementary text + figure
journal-ref: PLoS ONE 7(2012) e36670
doi: 10.1371/journal.pone.0036670
report-no: null
categories: q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as in MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled activation subunits, while the DA was modeled using uncoupled activation subunits. Implementations of DA with coupled subunits, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable - allowing an easy and efficient DA implementation. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. Also, the simulation efficiency of this DA method demonstrated considerable superiority over MC methods.
versions: [ { "created": "Fri, 25 Nov 2011 17:35:01 GMT", "version": "v1" } ]
update_date: 2012-05-29
authors_parsed: [ [ "Orio", "Patricio", "" ], [ "Soudry", "Daniel", "" ] ]
1312.3112
Antonio Marco
Antonio Marco
Sex-biased expression of microRNAs in Drosophila melanogaster
25 pages, including 4 figures
Marco A (2014) Open Biology 4, 140024
10.1098/rsob.140024
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most animals have separate sexes. The differential expression of gene products, in particular that of gene regulators, underlies sexual dimorphism. Analyses of sex-biased expression have focused mostly on protein-coding genes. Several lines of evidence indicate that microRNAs, a class of major gene regulators, are likely to have a significant role in sexual dimorphism. This role has not been systematically explored so far. Here I study the sex-biased expression pattern of microRNAs in the model species Drosophila melanogaster. As with protein-coding genes, sex-biased microRNAs are associated with the reproductive function. Strikingly, contrary to protein-coding genes, male-biased microRNAs are enriched on the X chromosome whilst female-biased microRNAs are mostly autosomal. I propose that this chromosomal distribution is a consequence of high rates of de novo emergence and a preference of new microRNAs to be expressed in the testis. I also suggest that demasculinization of the X chromosome may not affect microRNAs. Interestingly, female-biased microRNAs are often encoded within protein-coding genes that are also expressed in females. MicroRNAs with sex-biased expression do not preferentially target sex-biased gene transcripts. These results strongly suggest that the sex-biased expression of microRNAs is mainly a consequence of high rates of microRNA emergence on the X chromosome (male bias) or of hitch-hiked expression by host genes (female bias).
[ { "created": "Wed, 11 Dec 2013 11:06:48 GMT", "version": "v1" }, { "created": "Tue, 11 Feb 2014 11:40:46 GMT", "version": "v2" } ]
2014-04-03
[ [ "Marco", "Antonio", "" ] ]
2207.05011
Vasileios E. Papageorgiou
Vasileios E. Papageorgiou, George Tsaklidis
An improved Epidemiological-Unscented Kalman Filter (Hybrid SEIHCRDV-UKF) model for the prediction of COVID-19. Application on real-time data
This article has been published in the journal Chaos, Solitons & Fractals with the title " An improved epidemiological-unscented Kalman filter (hybrid SEIHCRDV-UKF) model for the prediction of COVID-19. Application on real-time data "
Chaos, Solitons & Fractals, 2022
10.1016/j.chaos.2022.112914
null
q-bio.PE math.OC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The prevalence of COVID-19 has been the most serious health challenge of the 21st century to date, concerning national health systems on a daily basis since December 2019, when it appeared in Wuhan City. Nevertheless, most of the proposed mathematical methodologies aiming to describe the dynamics of an epidemic rely on deterministic models that are not able to reflect the true nature of its spread. In this paper, we propose a SEIHCRDV model - an extension/improvement of the classic SIR compartmental model - which also takes into consideration the populations of exposed, hospitalized, admitted to intensive care units (ICU), deceased and vaccinated cases, in combination with an unscented Kalman filter (UKF), providing a dynamic estimation of the time-dependent system parameters. The stochastic approach is considered necessary, as both observations and system equations are characterized by uncertainties. This new consideration is useful for examining various pandemics more effectively. The reliability of the model is examined on the daily recordings of COVID-19 in France over a long period of 265 days. Two major waves of infection are observed, starting in January 2021, which marked the start of vaccinations in Europe; the model provides quite encouraging predictive performance, based on the produced NRMSE values. Special emphasis is placed on proving the non-negativity of the SEIHCRDV model, achieving a representative basic reproduction number R0, and demonstrating the existence and stability of disease equilibria according to the formula produced to estimate R0. The model outperforms in predictive ability not only deterministic approaches but also state-of-the-art stochastic models that employ Kalman filters.
[ { "created": "Sun, 3 Jul 2022 07:47:06 GMT", "version": "v1" }, { "created": "Wed, 13 Jul 2022 16:01:23 GMT", "version": "v2" }, { "created": "Tue, 22 Nov 2022 08:15:51 GMT", "version": "v3" }, { "created": "Tue, 29 Nov 2022 07:53:16 GMT", "version": "v4" } ]
2022-11-30
[ [ "Papageorgiou", "Vasileios E.", "" ], [ "Tsaklidis", "George", "" ] ]
1712.04633
Jin Xu
Jin Xu and Junghyo Jo
Broad cross-reactivity of the T-cell repertoire achieves specific and sufficiently rapid target searching
null
null
10.1016/j.jtbi.2019.01.025
null
q-bio.CB
http://creativecommons.org/licenses/by-nc-nd/4.0/
The molecular recognition of T-cell receptors is the hallmark of adaptive immunity. Given the finiteness of the T-cell repertoire, individual T-cell receptors need to be cross-reactive to multiple antigenic peptides. In this study, we quantify the variability of this cross-reactivity by using a string model that estimates the binding affinity between two sequences of amino acids. We examine sequences of 10,000 human T-cell receptors and 10,000 antigenic peptides, and obtain a full spectrum of cross-reactivity of receptor-peptide binding. We find that the cross-reactivity spectrum is broad: some T cells are reactive to 1,000 peptides, while others are reactive to only one or two peptides. Since the degree of cross-reactivity is correlated with the (un)binding affinity of receptors, we further investigate how the broad cross-reactivity affects the target searching of T cells. Highly cross-reactive T cells may not require many trials to find correct targets, but they may spend a long time unbinding from incorrect targets. In contrast, weakly cross-reactive T cells may not spend a long time ignoring incorrect targets, but they require many trials to screen for correct targets. We evaluate this hypothesis and show that the broad cross-reactivity of the natural T-cell repertoire can balance the trade-off between rapid screening and the unbinding penalty.
[ { "created": "Wed, 13 Dec 2017 07:25:26 GMT", "version": "v1" }, { "created": "Tue, 13 Feb 2024 19:40:30 GMT", "version": "v2" } ]
2024-02-15
[ [ "Xu", "Jin", "" ], [ "Jo", "Junghyo", "" ] ]
2006.15802
Meenakshi Khosla
Meenakshi Khosla, Gia H. Ngo, Keith Jamison, Amy Kuceyeski and Mert R. Sabuncu
A shared neural encoding model for the prediction of subject-specific fMRI response
MICCAI 2020 early accepted
null
null
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increasing popularity of naturalistic paradigms in fMRI (such as movie watching) demands novel strategies for multi-subject data analysis, such as use of neural encoding models. In the present study, we propose a shared convolutional neural encoding method that accounts for individual-level differences. Our method leverages multi-subject data to improve the prediction of subject-specific responses evoked by visual or auditory stimuli. We showcase our approach on high-resolution 7T fMRI data from the Human Connectome Project movie-watching protocol and demonstrate significant improvement over single-subject encoding models. We further demonstrate the ability of the shared encoding model to successfully capture meaningful individual differences in response to traditional task-based facial and scenes stimuli. Taken together, our findings suggest that inter-subject knowledge transfer can be beneficial to subject-specific predictive models.
[ { "created": "Mon, 29 Jun 2020 04:10:14 GMT", "version": "v1" }, { "created": "Sat, 11 Jul 2020 03:10:46 GMT", "version": "v2" } ]
2020-07-14
[ [ "Khosla", "Meenakshi", "" ], [ "Ngo", "Gia H.", "" ], [ "Jamison", "Keith", "" ], [ "Kuceyeski", "Amy", "" ], [ "Sabuncu", "Mert R.", "" ] ]
1802.01553
Gabriella Olmo
M. Varrecchia, G. Olmo, J. Levine, M. Grangetto, M. Gai, F. Di Cunto
Automatic microtubule tracking in fluorescence images of cells doped with increasing concentrations of taxol and nocodazole
null
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The purpose of this paper is to provide an algorithm for detecting and tracking astral microtubules (MTs) in a fully automated way and to supply a description of their dynamic behaviour. For algorithm testing, a dataset of stacks (i.e. time-lapse image sequences) acquired with a confocal microscope has been employed. Cells were treated with two different drugs, nocodazole and taxol, in order to explore their effect on microtubule dynamic instability.
[ { "created": "Mon, 5 Feb 2018 18:29:32 GMT", "version": "v1" }, { "created": "Wed, 4 Apr 2018 16:43:48 GMT", "version": "v2" } ]
2018-04-05
[ [ "Varrecchia", "M.", "" ], [ "Olmo", "G.", "" ], [ "Levine", "J.", "" ], [ "Grangetto", "M.", "" ], [ "Gai", "M.", "" ], [ "Di Cunto", "F.", "" ] ]
1308.5678
Marcos Amaku
Raul Ossada, Jos\'e H. H. Grisi-Filho, Fernando Ferreira and Marcos Amaku
Modeling the Dynamics of Infectious Diseases in Different Scale-Free Networks with the Same Degree Distribution
13 pages, 7 figures
Advanced Studies in Theoretical Physics, Vol. 7, 2013, no. 16, 759 - 771
null
null
q-bio.PE cs.SI physics.soc-ph
http://creativecommons.org/licenses/by/3.0/
The transmission dynamics of some infectious diseases is related to the contact structure between individuals in a network. We used five algorithms to generate contact networks with different topological structure but with the same scale-free degree distribution. We simulated the spread of acute and chronic infectious diseases on these networks, using SI (Susceptible - Infected) and SIS (Susceptible - Infected - Susceptible) epidemic models. In the simulations, our objective was to observe the effects of the topological structure of the networks on the dynamics and prevalence of the simulated diseases. We found that the dynamics of spread of an infectious disease on different networks with the same degree distribution may be considerably different.
[ { "created": "Mon, 26 Aug 2013 18:26:06 GMT", "version": "v1" } ]
2013-08-28
[ [ "Ossada", "Raul", "" ], [ "Grisi-Filho", "José H. H.", "" ], [ "Ferreira", "Fernando", "" ], [ "Amaku", "Marcos", "" ] ]
1306.5297
Steven Kuntz
Steven G. Kuntz and Michael B. Eisen
Drosophila embryogenesis scales uniformly across temperature in developmentally diverse species
37 pages, 6 figures, 8 supplemental figures, supporting video files available
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/3.0/
Temperature affects both the timing and outcome of animal development, but the detailed effects of temperature on the progress of early development have been poorly characterized. To determine the impact of temperature on the order and timing of events during Drosophila melanogaster embryogenesis, we used time-lapse imaging to track the progress of embryos from shortly after egg laying through hatching at seven precisely maintained temperatures between 17.5C and 32.5C. We employed a combination of automated and manual annotation to determine when 36 milestones occurred in each embryo. D. melanogaster embryogenesis takes ~33 hours at 17.5C, and accelerates with increasing temperature to 16 hours at 27.5C, above which embryogenesis slows slightly. Remarkably, while the total time of embryogenesis varies over twofold, the relative timing of events from cellularization through hatching is constant across temperatures. To further explore the relationship between temperature and embryogenesis, we expanded our analysis to cover ten additional Drosophila species of varying climatic origins. Six of these species, like D. melanogaster, are of tropical origin, and embryogenesis time at different temperatures was similar for them all. D. mojavensis, a sub-tropical fly, develops slower than the tropical species at lower temperatures, while D. virilis, a temperate fly, exhibits slower development at all temperatures. The alpine sister species D. persimilis and D. pseudoobscura develop as rapidly as tropical flies at cooler temperatures, but exhibit diminished acceleration above 22.5C and have drastically slowed development by 30C. Despite ranging from 13 hours for D. erecta at 30C to 46 hours for D. virilis at 17.5C, the relative timing of events from cellularization through hatching is constant across all species and temperatures, suggesting the existence of a timer controlling embryogenesis.
[ { "created": "Sat, 22 Jun 2013 07:01:31 GMT", "version": "v1" }, { "created": "Tue, 22 Oct 2013 20:51:37 GMT", "version": "v2" }, { "created": "Tue, 25 Mar 2014 22:17:20 GMT", "version": "v3" } ]
2014-03-27
[ [ "Kuntz", "Steven G.", "" ], [ "Eisen", "Michael B.", "" ] ]
1301.2520
Miroslaw Rewekant PhD MD
S.Piekarski, M.Rewekant
On modelling of bioavailability of drugs in terms of conservation laws
5 pages
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In S. Piekarski and M. Rewekant (arXiv:1208.3847) it was mentioned that some information on the bioavailability and bioequivalence of drugs can be obtained from simulations based on conservation laws. Here we briefly discuss that possibility, starting from the fundamental pharmacokinetic parameter called AUC (Area Under the Curve). The curve in question is the profile of plasma drug concentration over time after administration of the drug into the organism. Our aim here is to give some information on the subject for the reader with no experience in pharmacokinetics.
[ { "created": "Fri, 11 Jan 2013 15:23:41 GMT", "version": "v1" } ]
2013-01-14
[ [ "Piekarski", "S.", "" ], [ "Rewekant", "M.", "" ] ]
1706.08238
Charlotta Sch\"arfe
Charlotta P.I. Sch\"arfe, Roman Tremmel, Matthias Schwab, Oliver Kohlbacher, Debora S. Marks
Genetic variation in human drug-related genes
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Variability in drug efficacy and adverse effects is observed in clinical practice. While the extent of genetic variability in classical pharmacokinetic genes is rather well understood, the role of genetic variation in drug targets is typically less studied. Based on 60,706 human exomes from the ExAC dataset, we performed an in-depth computational analysis of the prevalence of functional variants in 806 drug-related genes, including 628 known drug targets. We find that most genetic variants in these genes are very rare (f < 0.1%) and thus likely not observed in clinical trials. Overall, however, four in five patients are likely to carry a functional variant in a target for commonly prescribed drugs, and many of these might alter drug efficacy. We further computed the likelihood of 1,236 FDA-approved drugs to be affected by functional variants in their targets and show that patient risk varies for many drugs with respect to geographic ancestry. A focused analysis of oncological drug targets indicates that the probability of a patient carrying germline variants in oncological drug targets is, at 44%, high enough to suggest that not only somatic alterations but also germline variants carried over into the tumor genome should be included in therapeutic decision-making.
[ { "created": "Mon, 26 Jun 2017 06:00:40 GMT", "version": "v1" } ]
2017-06-27
[ [ "Schärfe", "Charlotta P. I.", "" ], [ "Tremmel", "Roman", "" ], [ "Schwab", "Matthias", "" ], [ "Kohlbacher", "Oliver", "" ], [ "Marks", "Debora S.", "" ] ]
2111.03025
Tatjana Skrbic
Tatjana \v{S}krbi\'c and Trinh Xuan Hoang and Achille Giacometti and Amos Maritan and Jayanth R. Banavar
Proteins -- a celebration of consilience
18 pages; 1 table; 6 figures
null
10.1142/S0217979221400518
null
q-bio.BM cond-mat.soft cond-mat.stat-mech
http://creativecommons.org/licenses/by-nc-nd/4.0/
Proteins are the common constituents of all living cells. They are molecular machines that interact with each other as well as with other cell products and carry out a dizzying array of functions with distinction. These interactions follow from their native state structures and therefore understanding sequence-structure relationships is of fundamental importance. What is quite remarkable about proteins is that their understanding necessarily straddles several disciplines. The importance of geometry in defining protein native state structure, the constraints placed on protein behavior by mathematics and physics, the need for proteins to obey the laws of quantum chemistry, and the rich role of evolution and biology all come together in defining protein science. Here we review ideas from the literature and present an interdisciplinary framework that aims to marry ideas from Plato and Darwin and demonstrates an astonishing consilience between disciplines in describing proteins. We discuss the consequences of this framework on protein behavior.
[ { "created": "Thu, 4 Nov 2021 17:31:35 GMT", "version": "v1" } ]
2022-05-25
[ [ "Škrbić", "Tatjana", "" ], [ "Hoang", "Trinh Xuan", "" ], [ "Giacometti", "Achille", "" ], [ "Maritan", "Amos", "" ], [ "Banavar", "Jayanth R.", "" ] ]
1411.6322
Ioannis Panageas
Ruta Mehta and Ioannis Panageas and Georgios Piliouras and Sadra Yazdanbod
The Complexity of Genetic Diversity
24 pages, 2 figues
null
null
null
q-bio.PE cs.CC cs.GT math.DS math.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A key question in biological systems is whether genetic diversity persists in the long run under evolutionary competition or whether a single dominant genotype emerges. Classic work by Kalmus in 1945 has established that even in simple diploid species (species with two chromosomes) diversity can be guaranteed as long as the heterozygote individuals enjoy a selective advantage. Despite the classic nature of the problem, as we move towards increasingly polymorphic traits (e.g. human blood types) predicting diversity and understanding its implications is still not fully understood. Our key contribution is to establish complexity theoretic hardness results implying that even in the textbook case of single locus diploid models predicting whether diversity survives or not given its fitness landscape is algorithmically intractable. We complement our results by establishing that under randomly chosen fitness landscapes diversity survives with significant probability. Our results are structurally robust along several dimensions (e.g., choice of parameter distribution, different definitions of stability/persistence, restriction to typical subclasses of fitness landscapes). Technically, our results exploit connections between game theory, nonlinear dynamical systems, complexity theory and biology and establish hardness results for predicting the evolution of a deterministic variant of the well known multiplicative weights update algorithm in symmetric coordination games which could be of independent interest.
[ { "created": "Mon, 24 Nov 2014 01:35:10 GMT", "version": "v1" }, { "created": "Thu, 22 Oct 2015 14:51:53 GMT", "version": "v2" } ]
2015-10-23
[ [ "Mehta", "Ruta", "" ], [ "Panageas", "Ioannis", "" ], [ "Piliouras", "Georgios", "" ], [ "Yazdanbod", "Sadra", "" ] ]
A key question in biological systems is whether genetic diversity persists in the long run under evolutionary competition or whether a single dominant genotype emerges. Classic work by Kalmus in 1945 established that even in simple diploid species (species with two chromosomes) diversity can be guaranteed as long as the heterozygote individuals enjoy a selective advantage. Despite the classic nature of the problem, as we move towards increasingly polymorphic traits (e.g. human blood types), predicting diversity and understanding its implications remain challenging. Our key contribution is to establish complexity-theoretic hardness results implying that even in the textbook case of single-locus diploid models, predicting whether diversity survives given its fitness landscape is algorithmically intractable. We complement our results by establishing that under randomly chosen fitness landscapes diversity survives with significant probability. Our results are structurally robust along several dimensions (e.g., choice of parameter distribution, different definitions of stability/persistence, restriction to typical subclasses of fitness landscapes). Technically, our results exploit connections between game theory, nonlinear dynamical systems, complexity theory and biology and establish hardness results for predicting the evolution of a deterministic variant of the well-known multiplicative weights update algorithm in symmetric coordination games, which could be of independent interest.
1812.10930
Tarik Gouhier
Pradeep Pillai, Tarik C. Gouhier
On the use and abuse of Price equation concepts in ecology
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In biodiversity and ecosystem functioning (BEF) research, the Loreau-Hector (LH) statistical scheme is widely used to partition the effect of biodiversity on ecosystem properties into a "complementarity effect" and a "selection effect". This selection effect was originally considered analogous to the selection term in the Price equation from evolutionary biology. However, a key paper published over thirteen years ago challenged this interpretation by devising a new tripartite partitioning scheme that purportedly quantified the role of selection in biodiversity experiments more accurately. This tripartite method, as well as its recent spatiotemporal extension, were both developed as an attempt to apply the Price equation in a BEF context. Here, we demonstrate that the derivation of this tripartite method, as well as its spatiotemporal extension, involve a set of incoherent and nonsensical mathematical arguments driven largely by na\"ive visual analogies with the original Price equation, that result in neither partitioning scheme quantifying any real property in the natural world. Furthermore, we show that Loreau and Hector's original selection effect always represented a true analog of the original Price selection term, making the tripartite partitioning scheme a nonsensical solution to a non-existent problem [...]
[ { "created": "Fri, 28 Dec 2018 09:11:20 GMT", "version": "v1" } ]
2018-12-31
[ [ "Pillai", "Pradeep", "" ], [ "Gouhier", "Tarik C.", "" ] ]
In biodiversity and ecosystem functioning (BEF) research, the Loreau-Hector (LH) statistical scheme is widely used to partition the effect of biodiversity on ecosystem properties into a "complementarity effect" and a "selection effect". This selection effect was originally considered analogous to the selection term in the Price equation from evolutionary biology. However, a key paper published over thirteen years ago challenged this interpretation by devising a new tripartite partitioning scheme that purportedly quantified the role of selection in biodiversity experiments more accurately. This tripartite method, as well as its recent spatiotemporal extension, were both developed as an attempt to apply the Price equation in a BEF context. Here, we demonstrate that the derivation of this tripartite method, as well as its spatiotemporal extension, involve a set of incoherent and nonsensical mathematical arguments driven largely by na\"ive visual analogies with the original Price equation, that result in neither partitioning scheme quantifying any real property in the natural world. Furthermore, we show that Loreau and Hector's original selection effect always represented a true analog of the original Price selection term, making the tripartite partitioning scheme a nonsensical solution to a non-existent problem [...]
0805.0695
Tidjani Negadi
Tidjani Negadi
The genetic code via G\"odel encoding
null
The Open Physical Chemistry Journal, 2008, Vol.2, 1-5
10.2174/1874067700802010001
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The structure of the genetic code into distinct multiplet classes, as well as the numeric degeneracies of these classes, is revealed by a two-step process. First, an empirical inventory of the degeneracies (of the shuffled multiplets) in two specific equal moieties of the experimental genetic code table is made and transcribed in the form of a sequence of integers. Second, a G\"odel encoding procedure is applied to the latter sequence, delivering as an output a G\"odel number whose digits, in decimal representation, remarkably describe the amino acids and the stops and also allow us to compute the exact degeneracies, class by class. The standard and the vertebrate mitochondrial genetic codes are considered and their multiplet structure is fully established.
[ { "created": "Tue, 6 May 2008 10:56:19 GMT", "version": "v1" } ]
2009-11-13
[ [ "Negadi", "Tidjani", "" ] ]
The structure of the genetic code into distinct multiplet classes, as well as the numeric degeneracies of these classes, is revealed by a two-step process. First, an empirical inventory of the degeneracies (of the shuffled multiplets) in two specific equal moieties of the experimental genetic code table is made and transcribed in the form of a sequence of integers. Second, a G\"odel encoding procedure is applied to the latter sequence, delivering as an output a G\"odel number whose digits, in decimal representation, remarkably describe the amino acids and the stops and also allow us to compute the exact degeneracies, class by class. The standard and the vertebrate mitochondrial genetic codes are considered and their multiplet structure is fully established.
2212.04195
Xi-Nian Zuo Dr.
Zi-Xuan Zhou and Xi-Nian Zuo
A Paradigm Shift in Neuroscience Driven by Big Data: State of the Art, Challenges, and Proof of Concept
6 pages, 1 figure
null
null
null
q-bio.NC q-bio.QM stat.ME
http://creativecommons.org/licenses/by/4.0/
A recent editorial in Nature noted that cognitive neuroscience is at a crossroads where reliably revealing brain-behavior associations is a thorny issue. This commentary sketches a big data science way forward for cognitive neuroscience, namely population neuroscience. In terms of design, analysis, and interpretation, population neuroscience research takes design control to an unprecedented level, greatly expands the dimensions of the data analysis space, and paves the way for a paradigm shift in exploring the mechanisms behind brain-behavior associations.
[ { "created": "Thu, 8 Dec 2022 11:23:07 GMT", "version": "v1" }, { "created": "Sat, 4 Mar 2023 00:51:41 GMT", "version": "v2" } ]
2023-03-07
[ [ "Zhou", "Zi-Xuan", "" ], [ "Zuo", "Xi-Nian", "" ] ]
A recent editorial in Nature noted that cognitive neuroscience is at a crossroads where reliably revealing brain-behavior associations is a thorny issue. This commentary sketches a big data science way forward for cognitive neuroscience, namely population neuroscience. In terms of design, analysis, and interpretation, population neuroscience research takes design control to an unprecedented level, greatly expands the dimensions of the data analysis space, and paves the way for a paradigm shift in exploring the mechanisms behind brain-behavior associations.
1402.5321
Michael GB Blum
N. Duforet-Frebourg, E. Bazin, M.G.B. Blum
Genome scans for detecting footprints of local adaptation using a Bayesian factor model
This work has been partially supported by the LabEx PERSYVAL-Lab (ANR-11-LABX-0025-01), Molecular Biology and Evolution 2014
null
10.1093/molbev/msu182
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A central part of population genomics consists of finding genomic regions implicated in local adaptation. Population genomic analyses are based on genotyping numerous molecular markers and looking for outlier loci in terms of patterns of genetic differentiation. One of the most common approaches for selection scans is based on statistics that measure population differentiation such as $F_{ST}$. However, there are important caveats with approaches related to $F_{ST}$ because they require grouping individuals into populations and they additionally assume a particular model of population structure. Here we implement a more flexible individual-based approach based on Bayesian factor models. Factor models capture population structure with latent variables called factors, which can describe clustering of individuals into populations or isolation-by-distance patterns. Using hierarchical Bayesian modeling, we both infer population structure and identify outlier loci that are candidates for local adaptation. As outlier loci, the hierarchical factor model searches for loci that are atypically related to population structure as measured by the latent factors. In a model of population divergence, we show that the factor model can achieve a 2-fold or more reduction of false discovery rate compared to the software BayeScan or compared to an $F_{ST}$ approach. We analyze the data of the Human Genome Diversity Panel to provide an example of how factor models can be used to detect local adaptation with a large number of SNPs. The Bayesian factor model is implemented in the open-source PCAdapt software.
[ { "created": "Fri, 21 Feb 2014 15:09:06 GMT", "version": "v1" }, { "created": "Wed, 26 Feb 2014 20:02:22 GMT", "version": "v2" }, { "created": "Sun, 23 Mar 2014 10:55:28 GMT", "version": "v3" }, { "created": "Tue, 29 Jul 2014 11:40:36 GMT", "version": "v4" } ]
2014-07-30
[ [ "Duforet-Frebourg", "N.", "" ], [ "Bazin", "E.", "" ], [ "Blum", "M. G. B.", "" ] ]
A central part of population genomics consists of finding genomic regions implicated in local adaptation. Population genomic analyses are based on genotyping numerous molecular markers and looking for outlier loci in terms of patterns of genetic differentiation. One of the most common approaches for selection scans is based on statistics that measure population differentiation such as $F_{ST}$. However, there are important caveats with approaches related to $F_{ST}$ because they require grouping individuals into populations and they additionally assume a particular model of population structure. Here we implement a more flexible individual-based approach based on Bayesian factor models. Factor models capture population structure with latent variables called factors, which can describe clustering of individuals into populations or isolation-by-distance patterns. Using hierarchical Bayesian modeling, we both infer population structure and identify outlier loci that are candidates for local adaptation. As outlier loci, the hierarchical factor model searches for loci that are atypically related to population structure as measured by the latent factors. In a model of population divergence, we show that the factor model can achieve a 2-fold or more reduction of false discovery rate compared to the software BayeScan or compared to an $F_{ST}$ approach. We analyze the data of the Human Genome Diversity Panel to provide an example of how factor models can be used to detect local adaptation with a large number of SNPs. The Bayesian factor model is implemented in the open-source PCAdapt software.
2311.16181
Sandra E. Safo
Elise F. Palzer and Sandra E. Safo
mvlearnR and Shiny App for multiview learning
null
null
null
null
q-bio.GN cs.LG stat.AP stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The package mvlearnR and its accompanying Shiny App are intended for integrating data from multiple sources, views, or modalities (e.g. genomics, proteomics, clinical and demographic data). Most existing software packages for multiview learning are decentralized and offer limited capabilities, making it difficult for users to perform comprehensive integrative analysis. The new package wraps statistical and machine learning methods and graphical tools, providing a convenient and easy data integration workflow. For users with limited programming experience, we provide a Shiny Application to facilitate data integration anywhere and on any device. The methods have the potential to offer deeper insights into complex disease mechanisms. Availability and Implementation: mvlearnR is available from the following GitHub repository: https://github.com/lasandrall/mvlearnR. The web application is hosted on shinyapps.io and available at: https://multi-viewlearn.shinyapps.io/MultiView_Modeling/
[ { "created": "Sat, 25 Nov 2023 03:01:12 GMT", "version": "v1" } ]
2023-11-29
[ [ "Palzer", "Elise F.", "" ], [ "Safo", "Sandra E.", "" ] ]
The package mvlearnR and its accompanying Shiny App are intended for integrating data from multiple sources, views, or modalities (e.g. genomics, proteomics, clinical and demographic data). Most existing software packages for multiview learning are decentralized and offer limited capabilities, making it difficult for users to perform comprehensive integrative analysis. The new package wraps statistical and machine learning methods and graphical tools, providing a convenient and easy data integration workflow. For users with limited programming experience, we provide a Shiny Application to facilitate data integration anywhere and on any device. The methods have the potential to offer deeper insights into complex disease mechanisms. Availability and Implementation: mvlearnR is available from the following GitHub repository: https://github.com/lasandrall/mvlearnR. The web application is hosted on shinyapps.io and available at: https://multi-viewlearn.shinyapps.io/MultiView_Modeling/
2101.12342
Helena Junicke
Helena Junicke
Comment on "A compilation and bioenergetic evaluation of syntrophic microbial growth yields in anaerobic digestion" by Pat\'on, M. and Rodr\'iguez, J. [Water Research 162 (2019), 516-517]
8 pages, 1 figure
Water Research 173 (2020) 115347
10.1016/j.watres.2019.115347
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent efforts have focused on providing a systematic analysis of syntrophic microbial growth yields. These biokinetic parameters are key to developing an accurate mathematical description of the anaerobic digestion process. The agreement between experimentally determined growth yields and those obtained from bioenergetic estimations is therefore of great interest. Considering five important syntrophic groups, including acetoclastic and hydrogenotrophic methanogens, as well as propionate, butyrate and lactate oxidizers, previous findings suggest that measured and estimated growth yields were consistent only for acetoclastic methanogens. A re-analysis revealed that data are also consistent for lactate oxidizers and hydrogenotrophic methanogens, whereas the limited data available for propionate and butyrate oxidizers are unsupportive of firm conclusions. These results highlight pertinent challenges in the analysis of microbial syntrophy and encourage more accurate measurements of syntrophic microbial growth yields in the future.
[ { "created": "Fri, 29 Jan 2021 01:20:24 GMT", "version": "v1" } ]
2021-02-01
[ [ "Junicke", "Helena", "" ] ]
Recent efforts have focused on providing a systematic analysis of syntrophic microbial growth yields. These biokinetic parameters are key to developing an accurate mathematical description of the anaerobic digestion process. The agreement between experimentally determined growth yields and those obtained from bioenergetic estimations is therefore of great interest. Considering five important syntrophic groups, including acetoclastic and hydrogenotrophic methanogens, as well as propionate, butyrate and lactate oxidizers, previous findings suggest that measured and estimated growth yields were consistent only for acetoclastic methanogens. A re-analysis revealed that data are also consistent for lactate oxidizers and hydrogenotrophic methanogens, whereas the limited data available for propionate and butyrate oxidizers are unsupportive of firm conclusions. These results highlight pertinent challenges in the analysis of microbial syntrophy and encourage more accurate measurements of syntrophic microbial growth yields in the future.
1408.3802
Suman Kumar Banik
Alok Kumar Maity, Pinaki Chaudhury and Suman K. Banik
Role of relaxation time scale in noisy signal transduction
14 pages, 6 figures
null
10.1371/journal.pone.0123242
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intracellular fluctuations, mainly triggered by gene expression, are an inevitable phenomenon observed in living cells. They influence the generation of phenotypic diversity in genetically identical cells. Such variation of cellular components is beneficial in some contexts but detrimental in others. To quantify the fluctuations in a gene product, we undertake an analytical scheme for studying a few naturally abundant linear as well as branched chain network motifs. We solve the Langevin equations associated with each motif under the purview of the linear noise approximation and quantify the Fano factor and mutual information. Both quantifiable expressions exclusively depend on the relaxation time (decay rate constant) and steady state population of the network components. We investigate the effect of relaxation time constraints on the Fano factor and mutual information to identify a time scale domain where a network can recognize the fluctuations associated with the input signal more reliably. We also show how the input population affects both quantities. We extend our calculation to a long chain linear motif and show that with increasing chain length, the Fano factor value increases but the mutual information processing capability decreases. In this type of motif, the intermediate components are shown to act as a noise filter that tunes input fluctuations and maintains optimum fluctuations in the output. For branched chain motifs, both quantities vary within a large scale due to their network architecture and facilitate the survival of living systems in diverse environmental conditions.
[ { "created": "Sun, 17 Aug 2014 07:03:58 GMT", "version": "v1" } ]
2017-02-08
[ [ "Maity", "Alok Kumar", "" ], [ "Chaudhury", "Pinaki", "" ], [ "Banik", "Suman K.", "" ] ]
Intracellular fluctuations, mainly triggered by gene expression, are an inevitable phenomenon observed in living cells. They influence the generation of phenotypic diversity in genetically identical cells. Such variation of cellular components is beneficial in some contexts but detrimental in others. To quantify the fluctuations in a gene product, we undertake an analytical scheme for studying a few naturally abundant linear as well as branched chain network motifs. We solve the Langevin equations associated with each motif under the purview of the linear noise approximation and quantify the Fano factor and mutual information. Both quantifiable expressions exclusively depend on the relaxation time (decay rate constant) and steady state population of the network components. We investigate the effect of relaxation time constraints on the Fano factor and mutual information to identify a time scale domain where a network can recognize the fluctuations associated with the input signal more reliably. We also show how the input population affects both quantities. We extend our calculation to a long chain linear motif and show that with increasing chain length, the Fano factor value increases but the mutual information processing capability decreases. In this type of motif, the intermediate components are shown to act as a noise filter that tunes input fluctuations and maintains optimum fluctuations in the output. For branched chain motifs, both quantities vary within a large scale due to their network architecture and facilitate the survival of living systems in diverse environmental conditions.
2107.08743
Felix Hol
Gregory PD Murray, Emilie Giraud, Felix JH Hol
Characterising mosquito biting behaviour at high resolution
12 pages, 3 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Blood feeding represents a critical event in the life cycle of female mosquitoes. In addition to providing nutrients to the mosquito, blood feeding facilitates the transmission of parasites and viruses to hosts, potentially having devastating health consequences. Despite this, our understanding of these short, yet important bouts of behaviour is incomplete. How and where a mosquito decides to feed and the success of feeding can influence the transmission of pathogens, while a more thorough understanding may allow interventions to reduce or prevent infections. Recent advances in machine vision and automated tracking present the opportunity to observe and understand the blood feeding behaviour of mosquitoes at unprecedented spatial and temporal resolution. Here, we combine these technologies with novel designs for behavioural arenas and controllable artificial host cues to enable detailed observations of biting behaviour using relatively inexpensive and readily available materials. Combined with a workflow for quantitative image analysis, these tools allow us to describe nuanced, high-resolution biting behaviour under tightly controlled conditions.
[ { "created": "Mon, 19 Jul 2021 10:33:03 GMT", "version": "v1" } ]
2021-07-20
[ [ "Murray", "Gregory PD", "" ], [ "Giraud", "Emilie", "" ], [ "Hol", "Felix JH", "" ] ]
Blood feeding represents a critical event in the life cycle of female mosquitoes. In addition to providing nutrients to the mosquito, blood feeding facilitates the transmission of parasites and viruses to hosts, potentially having devastating health consequences. Despite this, our understanding of these short, yet important bouts of behaviour is incomplete. How and where a mosquito decides to feed and the success of feeding can influence the transmission of pathogens, while a more thorough understanding may allow interventions to reduce or prevent infections. Recent advances in machine vision and automated tracking present the opportunity to observe and understand the blood feeding behaviour of mosquitoes at unprecedented spatial and temporal resolution. Here, we combine these technologies with novel designs for behavioural arenas and controllable artificial host cues to enable detailed observations of biting behaviour using relatively inexpensive and readily available materials. Combined with a workflow for quantitative image analysis, these tools allow us to describe nuanced, high-resolution biting behaviour under tightly controlled conditions.
2102.10265
Fadhah Alanazi
Fadhah Amer Alanazi
The spread of COVID-19 at Hot-Temperature Places With Different Curfew Situations Using Copula Models
10 pages
null
null
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by/4.0/
The infectious coronavirus disease 2019 (COVID-19) has become a serious global pandemic. Different studies have shown that increasing temperature can play a crucial role in the spread of the virus. Most of these studies were limited to winter or moderate temperature levels and were conducted using conventional models. However, traditional models are too simplistic to investigate complex, non-linear relationships and suffer from some restrictions. Therefore, we employed copula models to examine the impact of high temperatures on virus transmission. The findings from the copula models showed that there was a weak to moderate effect of temperature on the number of infections and the effect almost vanished under a lockdown policy. Therefore, this study provides new insight into the relationship between COVID-19 and temperature, both with and without social isolation practices. Such results can lead to improvements in our understanding of this new virus. In particular, the results derived from the copula models examined here, unlike existing traditional models, provide evidence that there is no substantial influence of high temperatures on the active COVID-19 outbreak situation. In addition, the results indicate that the transmission of COVID-19 is strongly influenced by social isolation practices. To the best of the author's knowledge, this is the first copula model investigation applied to the COVID-19 pandemic.
[ { "created": "Sat, 20 Feb 2021 06:01:05 GMT", "version": "v1" } ]
2021-02-23
[ [ "Alanazi", "Fadhah Amer", "" ] ]
The infectious coronavirus disease 2019 (COVID-19) has become a serious global pandemic. Different studies have shown that increasing temperature can play a crucial role in the spread of the virus. Most of these studies were limited to winter or moderate temperature levels and were conducted using conventional models. However, traditional models are too simplistic to investigate complex, non-linear relationships and suffer from some restrictions. Therefore, we employed copula models to examine the impact of high temperatures on virus transmission. The findings from the copula models showed that there was a weak to moderate effect of temperature on the number of infections and the effect almost vanished under a lockdown policy. Therefore, this study provides new insight into the relationship between COVID-19 and temperature, both with and without social isolation practices. Such results can lead to improvements in our understanding of this new virus. In particular, the results derived from the copula models examined here, unlike existing traditional models, provide evidence that there is no substantial influence of high temperatures on the active COVID-19 outbreak situation. In addition, the results indicate that the transmission of COVID-19 is strongly influenced by social isolation practices. To the best of the author's knowledge, this is the first copula model investigation applied to the COVID-19 pandemic.
2207.09551
Julian Heidecke
Julian Heidecke, Jan Fuhrmann and Maria Vittoria Barbarossa
A mechanistic model to assess the effectiveness of test-trace-isolate-and-quarantine under limited capacities
Improved description of model derivation and notation, results unchanged
(2024) PLoS ONE 19(3): e0299880
10.1371/journal.pone.0299880
null
q-bio.PE
http://creativecommons.org/licenses/by-sa/4.0/
Diagnostic testing followed by isolation of identified cases with subsequent tracing and quarantine of close contacts - often referred to as the test-trace-isolate-and-quarantine (TTIQ) strategy - is one of the cornerstone measures of infectious disease control. The COVID-19 pandemic has highlighted that an appropriate response to outbreaks requires us to be aware of the effectiveness of such containment strategies. This can be evaluated using mathematical models. We present a delay differential equation model of TTIQ interventions for infectious disease control. Our model incorporates a detailed mechanistic description of the state-dependent dynamics induced by limited TTIQ capacities. In addition, we account for transmission during the early phase of SARS-CoV-2 infection, including presymptomatic transmission, which may be particularly adverse to a TTIQ-based control. Numerical experiments, inspired by the early spread of COVID-19 in Germany, reveal the effectiveness of TTIQ in a scenario where immunity within the population is low and pharmaceutical interventions are absent - representative of a typical situation during the (re-)emergence of infectious diseases for which therapeutic drugs or vaccines are not yet available. Stability and sensitivity analyses emphasize factors, partially related to the specific disease, which impede or enhance the success of TTIQ. Studying the diminishing effectiveness of TTIQ along simulations of an epidemic wave, we highlight consequences for intervention strategies.
[ { "created": "Tue, 19 Jul 2022 20:59:56 GMT", "version": "v1" }, { "created": "Wed, 23 Nov 2022 15:20:49 GMT", "version": "v2" } ]
2024-03-14
[ [ "Heidecke", "Julian", "" ], [ "Fuhrmann", "Jan", "" ], [ "Barbarossa", "Maria Vittoria", "" ] ]
Diagnostic testing followed by isolation of identified cases with subsequent tracing and quarantine of close contacts - often referred to as the test-trace-isolate-and-quarantine (TTIQ) strategy - is one of the cornerstone measures of infectious disease control. The COVID-19 pandemic has highlighted that an appropriate response to outbreaks requires us to be aware of the effectiveness of such containment strategies. This can be evaluated using mathematical models. We present a delay differential equation model of TTIQ interventions for infectious disease control. Our model incorporates a detailed mechanistic description of the state-dependent dynamics induced by limited TTIQ capacities. In addition, we account for transmission during the early phase of SARS-CoV-2 infection, including presymptomatic transmission, which may be particularly adverse to a TTIQ-based control. Numerical experiments, inspired by the early spread of COVID-19 in Germany, reveal the effectiveness of TTIQ in a scenario where immunity within the population is low and pharmaceutical interventions are absent - representative of a typical situation during the (re-)emergence of infectious diseases for which therapeutic drugs or vaccines are not yet available. Stability and sensitivity analyses emphasize factors, partially related to the specific disease, which impede or enhance the success of TTIQ. Studying the diminishing effectiveness of TTIQ along simulations of an epidemic wave, we highlight consequences for intervention strategies.
2009.00133
Siqi Sun
Siqi Sun
Unsupervised and Supervised Structure Learning for Protein Contact Prediction
PhD Thesis
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein contacts provide key information for the understanding of protein structure and function, and therefore contact prediction from sequences is an important problem. Recent research shows that some correctly predicted long-range contacts could help topology-level structure modeling. Thus, contact-assisted protein folding further demonstrates the importance of this problem. In this thesis, I will briefly introduce the existing related work, then show how to perform contact prediction through unsupervised graphical models with topology constraints. Further, I will explain how to use supervised deep learning methods to further boost the accuracy of contact prediction. Finally, I will propose a scoring system called the diversity score to measure the novelty of contact predictions, as well as an algorithm that predicts contacts with respect to the new scoring system.
[ { "created": "Mon, 31 Aug 2020 22:37:16 GMT", "version": "v1" } ]
2020-09-02
[ [ "Sun", "Siqi", "" ] ]
Protein contacts provide key information for the understanding of protein structure and function, and therefore contact prediction from sequences is an important problem. Recent research shows that some correctly predicted long-range contacts could help topology-level structure modeling. Thus, contact-assisted protein folding further demonstrates the importance of this problem. In this thesis, I will briefly introduce the existing related work, then show how to perform contact prediction through unsupervised graphical models with topology constraints. Further, I will explain how to use supervised deep learning methods to further boost the accuracy of contact prediction. Finally, I will propose a scoring system called the diversity score to measure the novelty of contact predictions, as well as an algorithm that predicts contacts with respect to the new scoring system.
1902.09528
Bob Eisenberg
Bob Eisenberg
Multiple Scales in the Simulation of Ion Channels and Proteins
arXiv admin note: text overlap with arXiv:1009.1786; AUTHOR NOTE: Reader, please check extent of overlap for yourself; Revision 2 change only on title page, to show arXiv reference
Journal of Physical Chemistry C (2010) 114 20719-20733
10.1021/jp106760t
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computation of biological processes creates great promise for everyday life and great challenges for physical scientists. Simulations of molecular dynamics appeal to biologists as a natural extension of structural biology. Once biologists see a structure, they want to see it move. Molecular biology shows that a few atoms, often messenger ions like Ca$^{2+}$, control biological function on the scale of cells, sometimes organisms. Enormously concentrated ions (~20 M) in protein channels and enzymes can control important characteristics of living systems, just as highly concentrated ions near electrodes control important characteristics of electrochemical systems. The scale differences needed to simulate all the atoms of biological cells are $10^7$ in linear dimension, $10^{21}$ in three dimensions, $10^{9}$ in resolution, $10^{11}$ in time, and $10^{13}$ in particle number (to deal with concentrations of Ca$^{2+}$). $\mathbf{These}$ $\mathbf{scales}$ $\mathbf{must}$ $\mathbf{be}$ $\mathbf{dealt}$ $\mathbf{with}$ $\mathbf{simultaneously}$ if the simulation is to deal with most biological functions. We suggest a computational approach using explicit multiscale analysis instead of implicit simulation of all scales. Successful computation of ions concentrated in special places may be a significant step to understanding the defining characteristics of biological and electrochemical systems.
[ { "created": "Sun, 24 Feb 2019 07:46:53 GMT", "version": "v1" }, { "created": "Mon, 4 Mar 2019 18:01:44 GMT", "version": "v2" } ]
2019-03-05
[ [ "Eisenberg", "Bob", "" ] ]
Computation of biological processes creates great promise for everyday life and great challenges for physical scientists. Simulations of molecular dynamics appeal to biologists as a natural extension of structural biology. Once biologists see a structure, they want to see it move. Molecular biology shows that a few atoms, often messenger ions like Ca$^{2+}$, control biological function on the scale of cells, sometimes organisms. Enormously concentrated ions (~20 M) in protein channels and enzymes can control important characteristics of living systems, just as highly concentrated ions near electrodes control important characteristics of electrochemical systems. The scale differences needed to simulate all the atoms of biological cells are $10^7$ in linear dimension, $10^{21}$ in three dimensions, $10^{9}$ in resolution, $10^{11}$ in time, and $10^{13}$ in particle number (to deal with concentrations of Ca$^{2+}$). $\mathbf{These}$ $\mathbf{scales}$ $\mathbf{must}$ $\mathbf{be}$ $\mathbf{dealt}$ $\mathbf{with}$ $\mathbf{simultaneously}$ if the simulation is to deal with most biological functions. We suggest a computational approach using explicit multiscale analysis instead of implicit simulation of all scales. Successful computation of ions concentrated in special places may be a significant step to understanding the defining characteristics of biological and electrochemical systems.
1407.3752
Cameron Mura
Cameron Mura and Charles E. McAnany
An Introduction to Biomolecular Simulations and Docking
The text is accompanied by 6 figures and 5 boxes. The text is in press at Molecular Simulation (2014), as part of a special issue on simulations in molecular biology
null
10.1080/08927022.2014.935372
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The biomolecules in and around a living cell -- proteins, nucleic acids, lipids, carbohydrates -- continuously sample myriad conformational states that are thermally accessible at physiological temperatures. Simultaneously, a given biomolecule also samples (and is sampled by) a rapidly fluctuating local environment comprised of other biopolymers, small molecules, water, ions, etc. that diffuse to within a few nanometers, leading to inter-molecular contacts that stitch together large supramolecular assemblies. Indeed, all biological systems can be viewed as dynamic networks of molecular interactions. As a complement to experimentation, molecular simulation offers a uniquely powerful approach to analyze biomolecular structure, mechanism, and dynamics; this is possible because the molecular contacts that define a complicated biomolecular system are governed by the same physical principles (forces, energetics) that characterize individual small molecules, and these simpler systems are relatively well-understood. With modern algorithms and computing capabilities, simulations are now an indispensable tool for examining biomolecular assemblies in atomic detail, from the conformational motion in an individual protein to the diffusional dynamics and inter-molecular collisions in the early stages of formation of cellular-scale assemblies such as the ribosome. This text introduces the physicochemical foundations of molecular simulations and docking, largely from the perspective of biomolecular interactions.
[ { "created": "Mon, 14 Jul 2014 18:01:40 GMT", "version": "v1" } ]
2014-07-15
[ [ "Mura", "Cameron", "" ], [ "McAnany", "Charles E.", "" ] ]
The biomolecules in and around a living cell -- proteins, nucleic acids, lipids, carbohydrates -- continuously sample myriad conformational states that are thermally accessible at physiological temperatures. Simultaneously, a given biomolecule also samples (and is sampled by) a rapidly fluctuating local environment comprised of other biopolymers, small molecules, water, ions, etc. that diffuse to within a few nanometers, leading to inter-molecular contacts that stitch together large supramolecular assemblies. Indeed, all biological systems can be viewed as dynamic networks of molecular interactions. As a complement to experimentation, molecular simulation offers a uniquely powerful approach to analyze biomolecular structure, mechanism, and dynamics; this is possible because the molecular contacts that define a complicated biomolecular system are governed by the same physical principles (forces, energetics) that characterize individual small molecules, and these simpler systems are relatively well-understood. With modern algorithms and computing capabilities, simulations are now an indispensable tool for examining biomolecular assemblies in atomic detail, from the conformational motion in an individual protein to the diffusional dynamics and inter-molecular collisions in the early stages of formation of cellular-scale assemblies such as the ribosome. This text introduces the physicochemical foundations of molecular simulations and docking, largely from the perspective of biomolecular interactions.
2305.13343
Chukuan Jiang
Chu-Kuan Jiang, Yang-Fan Deng, Hongxiao Guo, Guang-Hao Chen, Di Wu
A new sulfur bioconversion process development for energy- and space-efficient secondary wastewater treatment
Written by Chu-Kuan Jiang; edited by Yang-Fan Deng, Hongxiao Guo, Guang-Hao Chen, Di Wu; Corresponding authors: Guang-Hao Chen, Di Wu; Last author (team leader): Guang-Hao Chen
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Harvesting organic matter from wastewater is widely applied to maximize energy recovery; however, it limits the applicability of secondary treatment for acceptable effluent discharge into surface water bodies. To turn this bottleneck issue into an opportunity, this study developed oxygen-induced thiosulfatE production duRing sulfATe reductiOn (ERATO) to provide an efficient electron donor for wastewater treatment. Typical pretreated wastewater was synthesized with a chemical oxygen demand of 110 mg/L, sulfate of 50 mg S/L, and varying dissolved oxygen (DO), and was fed into a moving-bed biofilm reactor (MBBR). The MBBR was operated continuously with a short hydraulic retention time of 40 min for 349 days. The formation rate of thiosulfate reached 0.12-0.18 g S/(m2.d), with a high produced thiosulfate-S/TdS-S ratio of 38-73% when the influent DO was 2.7-3.6 mg/L. The sludge yield was 0.23-0.29 gVSS/gCOD, much lower than in conventional activated sludge processes. Batch tests and metabolism analysis were then conducted to confirm the effect of oxygen on thiosulfate formation, characterize the roles of sulfate and microbial activities, and explore the mechanism of oxygen-induced thiosulfate formation in ERATO. The results showed that oxygen supply increased the thiosulfate-Sproduced/TdS-Sproduced ratio from 4% to 24-26%, demonstrated that sulfate and microbial activities were critical for thiosulfate production, and indicated that oxygen induces thiosulfate formation through two pathways: 1) direct sulfide oxidation, and 2) indirect sulfide oxidation, in which sulfide is first oxidized to S0 (dominant), which then reacts with sulfite derived from oxygen-regulated biological sulfate reduction. The proposed compact ERATO process, featuring high thiosulfate production and low sludge production, supports space- and energy-efficient secondary wastewater treatment.
[ { "created": "Mon, 22 May 2023 03:01:08 GMT", "version": "v1" } ]
2023-05-24
[ [ "Jiang", "Chu-Kuan", "" ], [ "Deng", "Yang-Fan", "" ], [ "Guo", "Hongxiao", "" ], [ "Chen", "Guang-Hao", "" ], [ "Wu", "Di", "" ] ]
Harvesting organic matter from wastewater is widely applied to maximize energy recovery; however, it limits the applicability of secondary treatment for acceptable effluent discharge into surface water bodies. To turn this bottleneck issue into an opportunity, this study developed oxygen-induced thiosulfatE production duRing sulfATe reductiOn (ERATO) to provide an efficient electron donor for wastewater treatment. Typical pretreated wastewater was synthesized with a chemical oxygen demand of 110 mg/L, sulfate of 50 mg S/L, and varying dissolved oxygen (DO), and was fed into a moving-bed biofilm reactor (MBBR). The MBBR was operated continuously with a short hydraulic retention time of 40 min for 349 days. The formation rate of thiosulfate reached 0.12-0.18 g S/(m2.d), with a high produced thiosulfate-S/TdS-S ratio of 38-73% when the influent DO was 2.7-3.6 mg/L. The sludge yield was 0.23-0.29 gVSS/gCOD, much lower than in conventional activated sludge processes. Batch tests and metabolism analysis were then conducted to confirm the effect of oxygen on thiosulfate formation, characterize the roles of sulfate and microbial activities, and explore the mechanism of oxygen-induced thiosulfate formation in ERATO. The results showed that oxygen supply increased the thiosulfate-Sproduced/TdS-Sproduced ratio from 4% to 24-26%, demonstrated that sulfate and microbial activities were critical for thiosulfate production, and indicated that oxygen induces thiosulfate formation through two pathways: 1) direct sulfide oxidation, and 2) indirect sulfide oxidation, in which sulfide is first oxidized to S0 (dominant), which then reacts with sulfite derived from oxygen-regulated biological sulfate reduction. The proposed compact ERATO process, featuring high thiosulfate production and low sludge production, supports space- and energy-efficient secondary wastewater treatment.
2110.14386
Vladimir Kozlov
S. Ghersheen, V. Kozlov and U. Wennergren
Specifics of coinfection and its dynamics
null
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
It is essential to understand the dynamics of epidemics in the presence of coexisting pathogens. There are various phenomena that can affect these dynamics. In this paper, we formulate a mathematical model using different assumptions to capture the effect of additional phenomena such as partial cross-immunity, density dependence in each class, and a role of the recovered population in the dynamics. We found the basic reproduction number for each model, which is the threshold that describes the invasion of the disease into the population. The basic reproduction number in each model shows that the persistence of the disease or strains depends on the carrying capacity. For the model of this paper, we present the local stability analysis of the boundary equilibrium points and observe that the recovered population is not uniformly bounded with respect to the carrying capacity.
[ { "created": "Wed, 27 Oct 2021 12:40:56 GMT", "version": "v1" }, { "created": "Sun, 31 Oct 2021 15:05:02 GMT", "version": "v2" } ]
2021-11-02
[ [ "Ghersheen", "S.", "" ], [ "Kozlov", "V.", "" ], [ "Wennergren", "U.", "" ] ]
It is essential to understand the dynamics of epidemics in the presence of coexisting pathogens. There are various phenomena that can affect these dynamics. In this paper, we formulate a mathematical model using different assumptions to capture the effect of additional phenomena such as partial cross-immunity, density dependence in each class, and a role of the recovered population in the dynamics. We found the basic reproduction number for each model, which is the threshold that describes the invasion of the disease into the population. The basic reproduction number in each model shows that the persistence of the disease or strains depends on the carrying capacity. For the model of this paper, we present the local stability analysis of the boundary equilibrium points and observe that the recovered population is not uniformly bounded with respect to the carrying capacity.
2006.10357
Miroslav Andjelkovic
Miroslav Andjelkovic, Bosiljka Tadic, Roderick Melnik
The topology of higher-order complexes associated with brain-function hubs in human connectomes
14 pages, 6 figures
null
null
null
q-bio.NC cond-mat.dis-nn math.AT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Higher-order connectivity in complex systems described by simplexes of different orders provides a geometry for simplex-based dynamical variables and interactions. Simplicial complexes that constitute a functional geometry of the human connectome can be crucial for complex brain dynamics. In this context, the best-connected brain areas, designated as hub nodes, play a central role in supporting integrated brain function. Here, we study the structure of simplicial complexes attached to eight global hubs in the female and male connectomes and identify the core networks among the affected brain regions. These eight hubs (Putamen, Caudate, Hippocampus and Thalamus-Proper in the left and right cerebral hemispheres) are the highest-ranking according to their topological dimension, defined as the number of simplexes of all orders in which the node participates. Furthermore, we analyse the weight-dependent heterogeneity of simplexes. We demonstrate changes in the structure of the identified core networks and in the topological entropy when the threshold weight is gradually increased. These results highlight the role of higher-order interactions in human brain networks and provide additional evidence for (dis)similarity between the female and male connectomes.
[ { "created": "Thu, 18 Jun 2020 08:29:35 GMT", "version": "v1" } ]
2020-06-19
[ [ "Andjelkovic", "Miroslav", "" ], [ "Tadic", "Bosiljka", "" ], [ "Melnik", "Roderick", "" ] ]
Higher-order connectivity in complex systems described by simplexes of different orders provides a geometry for simplex-based dynamical variables and interactions. Simplicial complexes that constitute a functional geometry of the human connectome can be crucial for complex brain dynamics. In this context, the best-connected brain areas, designated as hub nodes, play a central role in supporting integrated brain function. Here, we study the structure of simplicial complexes attached to eight global hubs in the female and male connectomes and identify the core networks among the affected brain regions. These eight hubs (Putamen, Caudate, Hippocampus and Thalamus-Proper in the left and right cerebral hemispheres) are the highest-ranking according to their topological dimension, defined as the number of simplexes of all orders in which the node participates. Furthermore, we analyse the weight-dependent heterogeneity of simplexes. We demonstrate changes in the structure of the identified core networks and in the topological entropy when the threshold weight is gradually increased. These results highlight the role of higher-order interactions in human brain networks and provide additional evidence for (dis)similarity between the female and male connectomes.
1409.1899
Joshua M. Deutsch
J. M. Deutsch
Collective regulation by non-coding RNA
9 pages, 4 figures
null
null
null
q-bio.GN q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study genetic networks that produce many species of non-coding RNA molecules that are present at a moderate density, as typically exists in the cell. The associations of the many species of these RNA are modeled physically, taking into account the equilibrium constants between bound and unbound states. By including the pair-wise binding of the many RNA species, the network becomes highly interconnected and shows different properties than the usual type of genetic network. It shows much more robustness to mutation, and also rapid evolutionary adaptation in an environment that oscillates in time. This provides a possible explanation for the weak evolutionary constraints seen in much of the non-coding RNA that has been studied.
[ { "created": "Fri, 5 Sep 2014 18:35:09 GMT", "version": "v1" } ]
2014-09-08
[ [ "Deutsch", "J. M.", "" ] ]
We study genetic networks that produce many species of non-coding RNA molecules that are present at a moderate density, as typically exists in the cell. The associations of the many species of these RNA are modeled physically, taking into account the equilibrium constants between bound and unbound states. By including the pair-wise binding of the many RNA species, the network becomes highly interconnected and shows different properties than the usual type of genetic network. It shows much more robustness to mutation, and also rapid evolutionary adaptation in an environment that oscillates in time. This provides a possible explanation for the weak evolutionary constraints seen in much of the non-coding RNA that has been studied.
1007.5471
Pablo Barttfeld
Pablo Barttfeld, Bruno Wicker, Sebasti\'an Cukier, Silvana Navarta, Sergio Lew and Mariano Sigman
A big-world network in ASD: Dynamical connectivity analysis reflects a deficit in long-range connections and an excess of short-range connections
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over recent years, increasing evidence has fuelled the hypothesis that Autism Spectrum Disorder (ASD) is a condition of altered brain functional connectivity. The great majority of these empirical studies rely on functional magnetic resonance imaging (fMRI), which has a relatively poor temporal resolution. Only a handful of studies have examined networks emerging from dynamic coherence at millisecond resolution, and there are no investigations of coherence at the lowest frequencies in the power spectrum - which has recently been shown to reflect long-range cortico-cortical connections. Here we used electroencephalography (EEG) to assess dynamic brain connectivity in ASD, focusing on the low-frequency (delta) range. We found that connectivity patterns were distinct in ASD and control populations and reflected a double dissociation: ASD subjects lacked long-range connections, with the most prominent deficit in fronto-occipital connections. Conversely, individuals with ASD showed increased short-range connections in lateral-frontal electrodes. This effect between categories showed a consistent parametric dependency: as ASD severity increased, short-range coherence became more pronounced and long-range coherence decreased. Theoretical arguments suggest that distinct patterns of connectivity may result in networks with different efficiency in the transmission of information. We show that the networks of ASD subjects have a lower clustering coefficient, a greater characteristic path length than controls - indicating that the topology of the network departs from small-world behaviour - and greater modularity. Together these results show that delta-band coherence reveals qualitative and quantitative aspects associated with ASD pathology.
[ { "created": "Fri, 30 Jul 2010 15:09:12 GMT", "version": "v1" } ]
2010-08-02
[ [ "Barttfeld", "Pablo", "" ], [ "Wicker", "Bruno", "" ], [ "Cukier", "Sebastián", "" ], [ "Navarta", "Silvana", "" ], [ "Lew", "Sergio", "" ], [ "Sigman", "Mariano", "" ] ]
Over recent years, increasing evidence has fuelled the hypothesis that Autism Spectrum Disorder (ASD) is a condition of altered brain functional connectivity. The great majority of these empirical studies rely on functional magnetic resonance imaging (fMRI), which has a relatively poor temporal resolution. Only a handful of studies have examined networks emerging from dynamic coherence at millisecond resolution, and there are no investigations of coherence at the lowest frequencies in the power spectrum - which has recently been shown to reflect long-range cortico-cortical connections. Here we used electroencephalography (EEG) to assess dynamic brain connectivity in ASD, focusing on the low-frequency (delta) range. We found that connectivity patterns were distinct in ASD and control populations and reflected a double dissociation: ASD subjects lacked long-range connections, with the most prominent deficit in fronto-occipital connections. Conversely, individuals with ASD showed increased short-range connections in lateral-frontal electrodes. This effect between categories showed a consistent parametric dependency: as ASD severity increased, short-range coherence became more pronounced and long-range coherence decreased. Theoretical arguments suggest that distinct patterns of connectivity may result in networks with different efficiency in the transmission of information. We show that the networks of ASD subjects have a lower clustering coefficient, a greater characteristic path length than controls - indicating that the topology of the network departs from small-world behaviour - and greater modularity. Together these results show that delta-band coherence reveals qualitative and quantitative aspects associated with ASD pathology.
1909.02636
Christoph Dinh
Christoph Dinh, John GW Samuelsson, Alexander Hunold, Matti S H\"am\"al\"ainen, Sheraz Khan
Contextual Minimum-Norm Estimates (CMNE): A Deep Learning Method for Source Estimation in Neuronal Networks
14 pages, 9 figures
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Magnetoencephalography (MEG) and Electroencephalography (EEG) source estimates have thus far mostly been derived sample by sample, i.e., independent of each other in time. However, neuronal assemblies are heavily interconnected, constraining the temporal evolution of neural activity in space as detected by MEG and EEG. The observed neural currents are thus highly context dependent. Here, a new method is presented which integrates predictive deep learning networks with the Minimum-Norm Estimates (MNE) approach. Specifically, we employ Long Short-Term Memory (LSTM) networks, a type of recurrent neural network, for predicting brain activity. Because we use past activity (context) in the estimation, we call our method Contextual MNE (CMNE). We demonstrate that these contextual algorithms can be used for predicting activity based on previous brain states and when used in conjunction with MNE, they lead to more accurate source estimation. To evaluate the performance of CMNE, it was tested on simulated and experimental data from human auditory evoked response experiments.
[ { "created": "Thu, 5 Sep 2019 21:14:20 GMT", "version": "v1" } ]
2019-09-09
[ [ "Dinh", "Christoph", "" ], [ "Samuelsson", "John GW", "" ], [ "Hunold", "Alexander", "" ], [ "Hämäläinen", "Matti S", "" ], [ "Khan", "Sheraz", "" ] ]
Magnetoencephalography (MEG) and Electroencephalography (EEG) source estimates have thus far mostly been derived sample by sample, i.e., independent of each other in time. However, neuronal assemblies are heavily interconnected, constraining the temporal evolution of neural activity in space as detected by MEG and EEG. The observed neural currents are thus highly context dependent. Here, a new method is presented which integrates predictive deep learning networks with the Minimum-Norm Estimates (MNE) approach. Specifically, we employ Long Short-Term Memory (LSTM) networks, a type of recurrent neural network, for predicting brain activity. Because we use past activity (context) in the estimation, we call our method Contextual MNE (CMNE). We demonstrate that these contextual algorithms can be used for predicting activity based on previous brain states and when used in conjunction with MNE, they lead to more accurate source estimation. To evaluate the performance of CMNE, it was tested on simulated and experimental data from human auditory evoked response experiments.
1810.04796
Johan Nygren
Johan Nygren
Hominin evolution was caused by introgression from Gorilla
arXiv admin note: text overlap with arXiv:1808.06307
Natural Science Vol.10 No.09(2018), Article ID:87215, 9 pages
10.4236/ns.2018.109033
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
The discovery of Paranthropus deyiremeda in 3.3-3.5 million year old fossil sites in Afar, together with 30% of the gorilla genome showing lineage sorting between humans and chimpanzees, and a NUMT ("nuclear mitochondrial DNA segment") on chromosome 5 that is shared by gorillas, humans and chimpanzees and has been shown to have diverged at the time of the Pan-Homo split rather than the Gorilla/Pan-Homo split, provides conclusive evidence that introgression from the gorilla lineage caused the Pan-Homo split, and the speciation of both the Australopithecus lineage and the Paranthropus lineage.
[ { "created": "Tue, 2 Oct 2018 22:35:25 GMT", "version": "v1" } ]
2018-10-12
[ [ "Nygren", "Johan", "" ] ]
The discovery of Paranthropus deyiremeda in 3.3-3.5 million year old fossil sites in Afar, together with 30% of the gorilla genome showing lineage sorting between humans and chimpanzees, and a NUMT ("nuclear mitochondrial DNA segment") on chromosome 5 that is shared by gorillas, humans and chimpanzees and has been shown to have diverged at the time of the Pan-Homo split rather than the Gorilla/Pan-Homo split, provides conclusive evidence that introgression from the gorilla lineage caused the Pan-Homo split, and the speciation of both the Australopithecus lineage and the Paranthropus lineage.
1704.07526
Yansong Chua
Yansong Chua, Cheston Tan
Neurogenesis and multiple plasticity mechanisms enhance associative memory retrieval in a spiking network model of the hippocampus
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hippocampal CA3 is crucial for the formation of long-term associative memory. It has heavily recurrent connectivity, and memories are thought to be stored as memory engrams in the CA3. However, despite its importance for memory storage and retrieval, spiking network models of the CA3 to date are relatively small-scale and exist only as proof-of-concept models. Specifically, how neurogenesis in the dentate gyrus affects memory encoding and retrieval in the CA3 has not been studied in such spiking models. Our work is the first to develop a biologically plausible spiking neural network model of hippocampal memory encoding and retrieval, with at least an order of magnitude more neurons than previous models. It is also the first to investigate the effect of neurogenesis on CA3 memory encoding and retrieval. Using such a model, we first show that a recently developed plasticity model is crucial for good encoding and retrieval. Next, we show how neural properties related to neurogenesis and neuronal death enhance storage and retrieval of associative memories in the CA3. In particular, we show that without neurogenesis, an increasing number of CA3 neurons is recruited by each new memory stimulus, resulting in a corresponding increase in inhibition and poor memory retrieval as more memories are encoded. Neurogenesis, on the other hand, maintains the number of CA3 neurons recruited per stimulus, and enables the retrieval of recent memories while forgetting the older ones. Our model suggests that structural plasticity (provided by neurogenesis and apoptosis) is required in the hippocampus for memory encoding and retrieval when the network is overloaded; synaptic plasticity alone does not suffice. The above results are obtained from an exhaustive study of the different plasticity models and network parameters.
[ { "created": "Tue, 25 Apr 2017 03:30:28 GMT", "version": "v1" } ]
2017-04-26
[ [ "Chua", "Yansong", "" ], [ "Tan", "Cheston", "" ] ]
Hippocampal CA3 is crucial for the formation of long-term associative memory. It has heavily recurrent connectivity, and memories are thought to be stored as memory engrams in the CA3. However, despite its importance for memory storage and retrieval, spiking network models of the CA3 to date are relatively small-scale and exist only as proof-of-concept models. Specifically, how neurogenesis in the dentate gyrus affects memory encoding and retrieval in the CA3 has not been studied in such spiking models. Our work is the first to develop a biologically plausible spiking neural network model of hippocampal memory encoding and retrieval, with at least an order of magnitude more neurons than previous models. It is also the first to investigate the effect of neurogenesis on CA3 memory encoding and retrieval. Using such a model, we first show that a recently developed plasticity model is crucial for good encoding and retrieval. Next, we show how neural properties related to neurogenesis and neuronal death enhance storage and retrieval of associative memories in the CA3. In particular, we show that without neurogenesis, an increasing number of CA3 neurons is recruited by each new memory stimulus, resulting in a corresponding increase in inhibition and poor memory retrieval as more memories are encoded. Neurogenesis, on the other hand, maintains the number of CA3 neurons recruited per stimulus, and enables the retrieval of recent memories while forgetting the older ones. Our model suggests that structural plasticity (provided by neurogenesis and apoptosis) is required in the hippocampus for memory encoding and retrieval when the network is overloaded; synaptic plasticity alone does not suffice. The above results are obtained from an exhaustive study of the different plasticity models and network parameters.
1502.02001
Hal Lbi2m
Andr\'es Ritter, Simon M Dittami (UMR7139), Sophie Goulitquer (SBR), Juan A Correa, Catherine Boyen (LBI2M), Philippe Potin, Thierry Tonon (LBI2M)
Transcriptomic and metabolomic analysis of copper stress acclimation in Ectocarpus siliculosus highlights signaling and tolerance mechanisms in brown algae
null
BMC Plant Biology, BioMed Central, 2013, 14, pp.116
null
null
q-bio.GN q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brown algae are sessile macro-organisms of great ecological relevance in coastal ecosystems. They evolved independently from land plants and other multicellular lineages, and therefore hold several original ontogenic and metabolic features. Most brown algae grow along the coastal zone where they face frequent environmental changes, including exposure to toxic levels of heavy metals such as copper (Cu). We carried out large-scale transcriptomic and metabolomic analyses to decipher the short-term acclimation of the brown algal model E. siliculosus to Cu stress, and compared these data to results known for other abiotic stressors. This comparison demonstrates that Cu induces oxidative stress in E. siliculosus as illustrated by the transcriptomic overlap between Cu and H2O2 treatments. The common response to Cu and H2O2 consisted in the activation of the oxylipin and the repression of inositol signaling pathways, together with the regulation of genes coding for several transcription-associated proteins. Concomitantly, Cu stress specifically activated a set of genes coding for orthologs of ABC transporters, a P1B-type ATPase, ROS detoxification systems such as a vanadium-dependent bromoperoxidase, and induced an increase of free fatty acid contents. Finally we observed, as a common abiotic stress mechanism, the activation of autophagic processes on one hand and the repression of genes involved in nitrogen assimilation on the other hand. Comparisons with data from green plants indicate that some processes involved in Cu and oxidative stress response are conserved across these two distant lineages. At the same time the high number of yet uncharacterized brown alga-specific genes induced in response to copper stress underlines the potential to discover new components and molecular interactions unique to these organisms. Of particular interest for future research is the potential cross-talk between reactive oxygen species (ROS)-, myo-inositol-, and oxylipin signaling.
[ { "created": "Fri, 6 Feb 2015 19:35:55 GMT", "version": "v1" } ]
2015-02-09
[ [ "Ritter", "Andrés", "", "UMR7139" ], [ "Dittami", "Simon M", "", "UMR7139" ], [ "Goulitquer", "Sophie", "", "SBR" ], [ "Correa", "Juan A", "", "LBI2M" ], [ "Boyen", "Catherine", "", "LBI2M" ], [ "Potin", "Philippe", "", "LBI2M" ], [ "Tonon", "Thierry", "", "LBI2M" ] ]
Brown algae are sessile macro-organisms of great ecological relevance in coastal ecosystems. They evolved independently from land plants and other multicellular lineages, and therefore hold several original ontogenic and metabolic features. Most brown algae grow along the coastal zone where they face frequent environmental changes, including exposure to toxic levels of heavy metals such as copper (Cu). We carried out large-scale transcriptomic and metabolomic analyses to decipher the short-term acclimation of the brown algal model E. siliculosus to Cu stress, and compared these data to results known for other abiotic stressors. This comparison demonstrates that Cu induces oxidative stress in E. siliculosus as illustrated by the transcriptomic overlap between Cu and H2O2 treatments. The common response to Cu and H2O2 consisted of the activation of the oxylipin and the repression of inositol signaling pathways, together with the regulation of genes coding for several transcription-associated proteins. Concomitantly, Cu stress specifically activated a set of genes coding for orthologs of ABC transporters, a P1B-type ATPase, ROS detoxification systems such as a vanadium-dependent bromoperoxidase, and induced an increase of free fatty acid contents. Finally we observed, as a common abiotic stress mechanism, the activation of autophagic processes on the one hand and the repression of genes involved in nitrogen assimilation on the other. Comparisons with data from green plants indicate that some processes involved in Cu and oxidative stress response are conserved across these two distant lineages. At the same time the high number of yet uncharacterized brown alga-specific genes induced in response to copper stress underlines the potential to discover new components and molecular interactions unique to these organisms. Of particular interest for future research is the potential cross-talk between reactive oxygen species (ROS)-, myo-inositol-, and oxylipin signaling.
1607.04169
Marcelo Mattar
Marcelo G Mattar and Sharon L Thompson-Schill and Danielle S Bassett
The network architecture of value learning
27 pages; 6 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Value guides behavior. With knowledge of stimulus values and action consequences, behaviors that maximize expected reward can be selected. Prior work has identified several brain structures critical for representing both stimuli and their values. Yet, it remains unclear how these structures interact with one another and with other regions of the brain to support the dynamic acquisition of value-related knowledge. Here, we use a network neuroscience approach to examine how BOLD functional networks change as 20 healthy human subjects learn the values of novel visual stimuli over the course of four consecutive days. We show that connections between regions of the visual, frontal, and cingulate cortices become increasingly stronger as learning progresses, and that these changes are primarily confined to the temporal core of the network. These results demonstrate that functional networks dynamically track behavioral improvement in value judgments, and that interactions between network communities form predictive biomarkers of learning.
[ { "created": "Thu, 14 Jul 2016 15:40:49 GMT", "version": "v1" } ]
2016-07-15
[ [ "Mattar", "Marcelo G", "" ], [ "Thompson-Schill", "Sharon L", "" ], [ "Bassett", "Danielle S", "" ] ]
Value guides behavior. With knowledge of stimulus values and action consequences, behaviors that maximize expected reward can be selected. Prior work has identified several brain structures critical for representing both stimuli and their values. Yet, it remains unclear how these structures interact with one another and with other regions of the brain to support the dynamic acquisition of value-related knowledge. Here, we use a network neuroscience approach to examine how BOLD functional networks change as 20 healthy human subjects learn the values of novel visual stimuli over the course of four consecutive days. We show that connections between regions of the visual, frontal, and cingulate cortices become increasingly stronger as learning progresses, and that these changes are primarily confined to the temporal core of the network. These results demonstrate that functional networks dynamically track behavioral improvement in value judgments, and that interactions between network communities form predictive biomarkers of learning.
2402.05377
Xingpeng Di Dr.
Guanbo Wang and Xingpeng Di
Association between Sitting Time and Urinary Incontinence in the US Population: data from the National Health and Nutrition Examination Survey (NHANES) 2007 to 2018
15 pages, 3 figures, and 3 tables
null
null
null
q-bio.QM stat.ME
http://creativecommons.org/licenses/by/4.0/
Background Urinary incontinence (UI) is a common health problem that affects the life and health quality of millions of people in the US. We aimed to investigate the association between sitting time and UI. Methods A cross-sectional survey of adult participants of the National Health and Nutrition Examination Survey 2007-2018 was performed. Weighted multivariable logistic and regression models were conducted to assess the association between sitting time and UI. Results A total of 22916 participants were enrolled. Prolonged sitting time was associated with urgent UI (UUI, Odds ratio [OR] = 1.184, 95% Confidence interval [CI] = 1.076 to 1.302, P = 0.001). Compared with patients with sitting time shorter than 7 hours, moderate activity increased the risk of prolonged sitting time over 7 hours in the fully-adjusted model (OR = 2.537, 95% CI = 1.419 to 4.536, P = 0.002). Sitting time over 7 hours was related to male mixed UI (MUI, OR = 1.581, 95% CI = 1.129 to 2.213, P = 0.010), and female stress UI (SUI, OR = 0.884, 95% CI = 0.795 to 0.983, P = 0.026) in the fully-adjusted model. Conclusions Prolonged sedentary sitting time (> 7 hours) indicated a high risk of UUI in all populations, female SUI and male MUI. Compared with sitting time shorter than 7 hours, moderate activity could not reverse the risk of prolonged sitting, which warrants further studies for confirmation.
[ { "created": "Thu, 8 Feb 2024 03:21:03 GMT", "version": "v1" }, { "created": "Fri, 9 Feb 2024 04:00:46 GMT", "version": "v2" } ]
2024-02-12
[ [ "Wang", "Guanbo", "" ], [ "Di", "Xingpeng", "" ] ]
Background Urinary incontinence (UI) is a common health problem that affects the life and health quality of millions of people in the US. We aimed to investigate the association between sitting time and UI. Methods A cross-sectional survey of adult participants of the National Health and Nutrition Examination Survey 2007-2018 was performed. Weighted multivariable logistic and regression models were conducted to assess the association between sitting time and UI. Results A total of 22916 participants were enrolled. Prolonged sitting time was associated with urgent UI (UUI, Odds ratio [OR] = 1.184, 95% Confidence interval [CI] = 1.076 to 1.302, P = 0.001). Compared with patients with sitting time shorter than 7 hours, moderate activity increased the risk of prolonged sitting time over 7 hours in the fully-adjusted model (OR = 2.537, 95% CI = 1.419 to 4.536, P = 0.002). Sitting time over 7 hours was related to male mixed UI (MUI, OR = 1.581, 95% CI = 1.129 to 2.213, P = 0.010), and female stress UI (SUI, OR = 0.884, 95% CI = 0.795 to 0.983, P = 0.026) in the fully-adjusted model. Conclusions Prolonged sedentary sitting time (> 7 hours) indicated a high risk of UUI in all populations, female SUI and male MUI. Compared with sitting time shorter than 7 hours, moderate activity could not reverse the risk of prolonged sitting, which warrants further studies for confirmation.
1711.07292
Axel Voigt
S. Aland, F. Stenger, R. M\"uller, M. Kampschulte, A.C. Langheinrich, T. El Khassawna, C. Hei{\ss}, A. Dreutsch, A. Voigt
A phase field approach to trabecular bone remodeling
null
null
null
null
q-bio.TO math.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a continuous modeling approach which combines the elastic response of the trabecular bone structure, the concentration of signaling molecules within the bone, and a mechanism by which this concentration at the bone surface is used for local bone formation and resorption. In an abstract setting bone can be considered as a shape changing structure. For similar problems in materials science phase field approximations have been established as an efficient computational tool. We adapt such an approach for trabecular bone remodeling. It allows for a smooth representation of the trabecular bone structure and drastically reduces computational costs compared with traditional micro finite element approaches. We demonstrate the advantage of the approach within a minimal model. We quantitatively compare the results with established micro finite element approaches on simple geometries and consider the bone morphology within a bone segment obtained from $\mu$CT data of a sheep vertebra with realistic parameters.
[ { "created": "Mon, 20 Nov 2017 12:58:36 GMT", "version": "v1" } ]
2017-11-21
[ [ "Aland", "S.", "" ], [ "Stenger", "F.", "" ], [ "Müller", "R.", "" ], [ "Kampschulte", "M.", "" ], [ "Langheinrich", "A. C.", "" ], [ "Khassawna", "T. El", "" ], [ "Heiß", "C.", "" ], [ "Dreutsch", "A.", "" ], [ "Voigt", "A.", "" ] ]
We introduce a continuous modeling approach which combines the elastic response of the trabecular bone structure, the concentration of signaling molecules within the bone, and a mechanism by which this concentration at the bone surface is used for local bone formation and resorption. In an abstract setting bone can be considered as a shape changing structure. For similar problems in materials science phase field approximations have been established as an efficient computational tool. We adapt such an approach for trabecular bone remodeling. It allows for a smooth representation of the trabecular bone structure and drastically reduces computational costs compared with traditional micro finite element approaches. We demonstrate the advantage of the approach within a minimal model. We quantitatively compare the results with established micro finite element approaches on simple geometries and consider the bone morphology within a bone segment obtained from $\mu$CT data of a sheep vertebra with realistic parameters.
q-bio/0608015
Anders Eriksson
Anders Eriksson and Kristian Lindgren
A simple model of cognitive processing in repeated games
Accepted for publication in the conference proceedings of ECCS'06
null
null
null
q-bio.PE
null
In repeated interactions between individuals, we do not expect that exactly the same situation will occur from one time to another. Contrary to what is common in models of repeated games in the literature, most real situations may differ considerably and are seldom completely symmetric. The purpose of this paper is to discuss a simple model of cognitive processing in the context of a repeated interaction with varying payoffs. The interaction between players is modelled by a repeated game with random observable payoffs. Cooperation is not simply associated with a certain action but needs to be understood as a phenomenon of the behaviour in the repeated game. The players are thus faced with a more complex situation, compared to the Prisoner's Dilemma that has been widely used for investigating the conditions for cooperation in evolving populations. Still, there are robust cooperating strategies that usually evolve in a population of players. In the cooperative mode, these strategies select an action that allows for maximizing the sum of the payoff of the two players in each round, regardless of the own payoff. Two such players maximise the expected total long-term payoff. If the opponent deviates from this scheme, the strategy invokes a punishment action, which aims at lowering the opponent's score for the rest of the (possibly infinitely) repeated game. The introduction of mistakes to the game actually pushes evolution towards more cooperative strategies even though the game becomes more difficult.
[ { "created": "Mon, 7 Aug 2006 16:38:53 GMT", "version": "v1" } ]
2007-05-23
[ [ "Eriksson", "Anders", "" ], [ "Lindgren", "Kristian", "" ] ]
In repeated interactions between individuals, we do not expect that exactly the same situation will occur from one time to another. Contrary to what is common in models of repeated games in the literature, most real situations may differ considerably and are seldom completely symmetric. The purpose of this paper is to discuss a simple model of cognitive processing in the context of a repeated interaction with varying payoffs. The interaction between players is modelled by a repeated game with random observable payoffs. Cooperation is not simply associated with a certain action but needs to be understood as a phenomenon of the behaviour in the repeated game. The players are thus faced with a more complex situation, compared to the Prisoner's Dilemma that has been widely used for investigating the conditions for cooperation in evolving populations. Still, there are robust cooperating strategies that usually evolve in a population of players. In the cooperative mode, these strategies select an action that allows for maximizing the sum of the payoff of the two players in each round, regardless of the own payoff. Two such players maximise the expected total long-term payoff. If the opponent deviates from this scheme, the strategy invokes a punishment action, which aims at lowering the opponent's score for the rest of the (possibly infinitely) repeated game. The introduction of mistakes to the game actually pushes evolution towards more cooperative strategies even though the game becomes more difficult.
2205.01473
Peter Carstensen
Peter Emil Carstensen, Jacob Bendsen, Asbj{\o}rn Thode Reenberg, Tobias K. S. Ritschel, John Bagterp J{\o}rgensen
A whole-body multi-scale mathematical model for dynamic simulation of the metabolism in man
6 pages, 7 figures, submitted to be presented at a conference
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a whole-body model of the metabolism in man as well as a generalized approach for modeling metabolic networks. Using this approach, we are able to write a large metabolic network in a systematic and compact way. We demonstrate the approach using a whole-body model of the metabolism of the three macronutrients, carbohydrates, proteins and lipids. The model contains 7 organs, 16 metabolites and 31 enzymatic reactions. All reaction rates are described by Michaelis-Menten kinetics with the addition of a hormonal regulator based on the two hormones insulin and glucagon. We incorporate ingestion of food in order to simulate metabolite concentrations during the feed-fast cycle. The model can simulate several days due to the inclusion of storage forms (glycogen, muscle protein and lipid droplets) that can be depleted if food is not ingested regularly. A physiological model incorporating complex cellular metabolism and whole-body mass dynamics can be used in virtual clinical trials. Such trials can be used to improve the development of medicine, treatment strategies such as control algorithms, and increase the likelihood of a successful clinical trial.
[ { "created": "Tue, 3 May 2022 13:06:29 GMT", "version": "v1" } ]
2022-05-04
[ [ "Carstensen", "Peter Emil", "" ], [ "Bendsen", "Jacob", "" ], [ "Reenberg", "Asbjørn Thode", "" ], [ "Ritschel", "Tobias K. S.", "" ], [ "Jørgensen", "John Bagterp", "" ] ]
We propose a whole-body model of the metabolism in man as well as a generalized approach for modeling metabolic networks. Using this approach, we are able to write a large metabolic network in a systematic and compact way. We demonstrate the approach using a whole-body model of the metabolism of the three macronutrients, carbohydrates, proteins and lipids. The model contains 7 organs, 16 metabolites and 31 enzymatic reactions. All reaction rates are described by Michaelis-Menten kinetics with the addition of a hormonal regulator based on the two hormones insulin and glucagon. We incorporate ingestion of food in order to simulate metabolite concentrations during the feed-fast cycle. The model can simulate several days due to the inclusion of storage forms (glycogen, muscle protein and lipid droplets) that can be depleted if food is not ingested regularly. A physiological model incorporating complex cellular metabolism and whole-body mass dynamics can be used in virtual clinical trials. Such trials can be used to improve the development of medicine, treatment strategies such as control algorithms, and increase the likelihood of a successful clinical trial.
1502.04991
Ali Bakhshinejad
Ali Bakhshinejad and Roshan M D'Souza
A Brief Comparison Between Available Bio-printing Methods
null
null
10.1109/GLBC.2015.7158294
null
q-bio.TO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The scarcity of organs for transplant has led to large waiting lists of very sick patients. In drug development, the time required for human trials greatly increases the time to market. Drug companies are searching for alternative environments where the in-vivo conditions can be closely replicated. Both these problems could be addressed by manufacturing artificial human tissue. Recently, researchers in tissue engineering have developed tissue generation methods based on 3-D printing to fabricate artificial human tissue. Broadly, these methods could be classified as laser-assisted and laser-free. The former have very fine spatial resolutions (10s of $\mu$m) but suffer from slow speed ($< 10^2$ drops per second). The latter have lower spatial resolutions (100s of $\mu$m) but are very fast (up to $5\times 10^3$ drops per second). In this paper we review state-of-the-art methods in each of these classes and provide a comparison based on reported resolution, printing speed, cell density and cell viability.
[ { "created": "Tue, 17 Feb 2015 18:38:19 GMT", "version": "v1" } ]
2015-07-28
[ [ "Bakhshinejad", "Ali", "" ], [ "D'Souza", "Roshan M", "" ] ]
The scarcity of organs for transplant has led to large waiting lists of very sick patients. In drug development, the time required for human trials greatly increases the time to market. Drug companies are searching for alternative environments where the in-vivo conditions can be closely replicated. Both these problems could be addressed by manufacturing artificial human tissue. Recently, researchers in tissue engineering have developed tissue generation methods based on 3-D printing to fabricate artificial human tissue. Broadly, these methods could be classified as laser-assisted and laser-free. The former have very fine spatial resolutions (10s of $\mu$m) but suffer from slow speed ($< 10^2$ drops per second). The latter have lower spatial resolutions (100s of $\mu$m) but are very fast (up to $5\times 10^3$ drops per second). In this paper we review state-of-the-art methods in each of these classes and provide a comparison based on reported resolution, printing speed, cell density and cell viability.
2004.12405
Chandrika Prakash Vyasarayani
C. P. Vyasarayani and Anindya Chatterjee
Complete dimensional collapse in the continuum limit of a delayed SEIQR network model with separable distributed infectivity
null
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We take up a recently proposed compartmental SEIQR model with delays, ignore loss of immunity in the context of a fast pandemic, extend the model to a network structured on infectivity, and consider the continuum limit of the same with a simple separable interaction model for the infectivities $\beta$. Numerical simulations show that the evolving dynamics of the network is effectively captured by a single scalar function of time, regardless of the distribution of $\beta$ in the population. The continuum limit of the network model allows a simple derivation of the simpler model, which is a single scalar delay differential equation (DDE), wherein the variation in $\beta$ appears through an integral closely related to the moment generating function of $u=\sqrt{\beta}$. If the first few moments of $u$ exist, the governing DDE can be expanded in a series that shows a direct correspondence with the original compartmental DDE with a single $\beta$. Even otherwise, the new scalar DDE can be solved using either numerical integration over $u$ at each time step, or with the analytical integral if available in some useful form. Our work provides a new academic example of complete dimensional collapse, ties up an underlying continuum model for a pandemic with a simpler-seeming compartmental model, and will hopefully lead to new analysis of continuum models for epidemics.
[ { "created": "Sun, 26 Apr 2020 15:02:17 GMT", "version": "v1" }, { "created": "Tue, 23 Jun 2020 07:24:46 GMT", "version": "v2" } ]
2020-06-24
[ [ "Vyasarayani", "C. P.", "" ], [ "Chatterjee", "Anindya", "" ] ]
We take up a recently proposed compartmental SEIQR model with delays, ignore loss of immunity in the context of a fast pandemic, extend the model to a network structured on infectivity, and consider the continuum limit of the same with a simple separable interaction model for the infectivities $\beta$. Numerical simulations show that the evolving dynamics of the network is effectively captured by a single scalar function of time, regardless of the distribution of $\beta$ in the population. The continuum limit of the network model allows a simple derivation of the simpler model, which is a single scalar delay differential equation (DDE), wherein the variation in $\beta$ appears through an integral closely related to the moment generating function of $u=\sqrt{\beta}$. If the first few moments of $u$ exist, the governing DDE can be expanded in a series that shows a direct correspondence with the original compartmental DDE with a single $\beta$. Even otherwise, the new scalar DDE can be solved using either numerical integration over $u$ at each time step, or with the analytical integral if available in some useful form. Our work provides a new academic example of complete dimensional collapse, ties up an underlying continuum model for a pandemic with a simpler-seeming compartmental model, and will hopefully lead to new analysis of continuum models for epidemics.
1806.03504
Yujiang Wang
Yujiang Wang, Joe Necus, Luis Peraza Rodriguez, Peter Neal Taylor, Bruno Mota
Universality in human cortical folding across lobes of individual brains
null
null
10.1038/s42003-019-0421-7
null
q-bio.NC physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: We have previously demonstrated that cortical folding across mammalian species follows a universal scaling law that can be derived from a simple theoretical model. The same scaling law has also been shown to hold across brains of our own species, irrespective of age or sex. These results, however, only relate measures of complete cortical hemispheres. There are known systematic variations in morphology between different brain regions, and region-specific changes with age. It is therefore of interest to extend our analyses to different cortical regions, and analyze the scaling law within an individual brain. Methods: To directly compare the morphology of sub-divisions of the cortical surface in a size-independent manner, we base our method on a topological invariant of closed surfaces. We reconstruct variables of a complete hemisphere from each lobe of the brain so that it has the same gyrification index, average thickness and average Gaussian curvature. Results: We show that different lobes are morphologically diverse but obey the same scaling law that was observed across human subjects and across mammalian species. This is also the case for subjects with Alzheimer's disease. The age-dependent offset changes at similar rates for all lobes in healthy subjects, but differs most dramatically in the temporal lobe in Alzheimer's disease. Significance: Our results further support the idea that while morphological parameters can vary locally across the cortical surface/across subjects of the same species/across species, the processes that drive cortical gyrification are universal.
[ { "created": "Sat, 9 Jun 2018 16:36:04 GMT", "version": "v1" }, { "created": "Wed, 13 Jun 2018 08:52:50 GMT", "version": "v2" } ]
2020-06-01
[ [ "Wang", "Yujiang", "" ], [ "Necus", "Joe", "" ], [ "Rodriguez", "Luis Peraza", "" ], [ "Taylor", "Peter Neal", "" ], [ "Mota", "Bruno", "" ] ]
Background: We have previously demonstrated that cortical folding across mammalian species follows a universal scaling law that can be derived from a simple theoretical model. The same scaling law has also been shown to hold across brains of our own species, irrespective of age or sex. These results, however, only relate measures of complete cortical hemispheres. There are known systematic variations in morphology between different brain regions, and region-specific changes with age. It is therefore of interest to extend our analyses to different cortical regions, and analyze the scaling law within an individual brain. Methods: To directly compare the morphology of sub-divisions of the cortical surface in a size-independent manner, we base our method on a topological invariant of closed surfaces. We reconstruct variables of a complete hemisphere from each lobe of the brain so that it has the same gyrification index, average thickness and average Gaussian curvature. Results: We show that different lobes are morphologically diverse but obey the same scaling law that was observed across human subjects and across mammalian species. This is also the case for subjects with Alzheimer's disease. The age-dependent offset changes at similar rates for all lobes in healthy subjects, but differs most dramatically in the temporal lobe in Alzheimer's disease. Significance: Our results further support the idea that while morphological parameters can vary locally across the cortical surface/across subjects of the same species/across species, the processes that drive cortical gyrification are universal.
1007.5267
Sebastian Schreiber
Sebastian J. Schreiber and Chi-Kwong Li
Evolution of unconditional dispersal in periodic environments
null
Journal of Biological Dynamics (special issue on Adaptive Dynamics). Vol. 5, pp. 120-134 (2011)
10.1080/17513758.2010.525667
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Organisms modulate their fitness in heterogeneous environments by dispersing. Prior work shows that there is selection against "unconditional" dispersal in spatially heterogeneous environments. "Unconditional" means individuals disperse at a rate independent of their location. We prove that if within-patch fitness varies spatially and between two values temporally, then there is selection for unconditional dispersal: any evolutionarily stable strategy (ESS) or evolutionarily stable coalition (ESC) includes a dispersive phenotype. Moreover, at this ESS or ESC, there is at least one sink patch (i.e. geometric mean of fitness less than one) and no source patches (i.e. geometric mean of fitness greater than one). These results coupled with simulations suggest that spatial-temporal heterogeneity due to abiotic forcing results in either an ESS with a dispersive phenotype or an ESC with sedentary and dispersive phenotypes. In contrast, spatial-temporal heterogeneity due to biotic interactions can select for higher dispersal rates that ultimately spatially synchronize population dynamics.
[ { "created": "Thu, 29 Jul 2010 17:01:40 GMT", "version": "v1" } ]
2011-09-28
[ [ "Schreiber", "Sebastian J.", "" ], [ "Li", "Chi-Kwong", "" ] ]
Organisms modulate their fitness in heterogeneous environments by dispersing. Prior work shows that there is selection against "unconditional" dispersal in spatially heterogeneous environments. "Unconditional" means individuals disperse at a rate independent of their location. We prove that if within-patch fitness varies spatially and between two values temporally, then there is selection for unconditional dispersal: any evolutionarily stable strategy (ESS) or evolutionarily stable coalition (ESC) includes a dispersive phenotype. Moreover, at this ESS or ESC, there is at least one sink patch (i.e. geometric mean of fitness less than one) and no source patches (i.e. geometric mean of fitness greater than one). These results coupled with simulations suggest that spatial-temporal heterogeneity due to abiotic forcing results in either an ESS with a dispersive phenotype or an ESC with sedentary and dispersive phenotypes. In contrast, spatial-temporal heterogeneity due to biotic interactions can select for higher dispersal rates that ultimately spatially synchronize population dynamics.
2310.09595
Szabolcs Kelemen
Szabolcs Kelemen, M\'at\'e J\'ozsa, Tibor Hartel, Gy\"orgy Cs\'oka, Zolt\'an N\'eda
Tree size distribution as the stationary limit of an evolutionary master equation
null
null
null
null
q-bio.PE math-ph math.MP q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
The diameter distribution of a given species of deciduous trees in mature, temperate zone forests is well approximated by a Gamma distribution. Here we give new experimental evidence for this conjecture by analyzing deciduous tree size data in mature semi-natural forest and ancient, traditionally managed wood-pasture from Central Europe. These distribution functions collapse on a universal shape if the tree sizes are normalized to the mean value in the considered sample. A novel evolutionary master equation is used to model the observed distribution. The model incorporates three probabilistic processes: tree growth, mortality and diversification. By using simple, and realistic state dependent kernel functions for the growth and reset rates together with an assumed multiplicative dilution due to diversification, the stationary solution of the master equation yields the experimentally observed Gamma distribution. The model as it is formulated allows an analytically compact solution and has only two fitting parameters whose values are consistent with the experimental data for the growth and reset processes. Our results also suggest that tree size statistics can be used to infer woodland naturalness.
[ { "created": "Sat, 14 Oct 2023 14:46:10 GMT", "version": "v1" } ]
2023-10-17
[ [ "Kelemen", "Szabolcs", "" ], [ "Józsa", "Máté", "" ], [ "Hartel", "Tibor", "" ], [ "Csóka", "György", "" ], [ "Néda", "Zoltán", "" ] ]
The diameter distribution of a given species of deciduous trees in mature, temperate zone forests is well approximated by a Gamma distribution. Here we give new experimental evidence for this conjecture by analyzing deciduous tree size data in mature semi-natural forest and ancient, traditionally managed wood-pasture from Central Europe. These distribution functions collapse on a universal shape if the tree sizes are normalized to the mean value in the considered sample. A novel evolutionary master equation is used to model the observed distribution. The model incorporates three probabilistic processes: tree growth, mortality and diversification. By using simple, and realistic state dependent kernel functions for the growth and reset rates together with an assumed multiplicative dilution due to diversification, the stationary solution of the master equation yields the experimentally observed Gamma distribution. The model as it is formulated allows an analytically compact solution and has only two fitting parameters whose values are consistent with the experimental data for the growth and reset processes. Our results also suggest that tree size statistics can be used to infer woodland naturalness.
q-bio/0608004
Henrik Jeldtoft Jensen
Simon Laird and Henrik Jeldtoft Jensen
Correlation, selection and the evolution of species networks
13 pages, 4 figures, submitted to Ecological Modelling
null
null
null
q-bio.PE
null
We use a generalised version of the individual-based Tangled Nature model of evolutionary ecology to study the relationship between ecosystem structure and evolutionary history. Our evolved model ecosystems typically exhibit interaction networks with exponential degree distributions and an inverse dependence between connectance and species richness. We use a simplified network evolution model to demonstrate that the observed degree distributions can occur as a consequence of partial correlations in the inheritance process. Further to this, in the limit of low connectance and maximal correlation, distributions of power law form, $P(k){\propto}1/k$, can be achieved. We also show that a hyperbolic relationship between connectance and species richness, $C{\sim}1/D$, can arise as a consequence of probabilistic constraints on the evolutionary search process.
[ { "created": "Wed, 2 Aug 2006 16:09:51 GMT", "version": "v1" } ]
2007-05-23
[ [ "Laird", "Simon", "" ], [ "Jensen", "Henrik Jeldtoft", "" ] ]
We use a generalised version of the individual-based Tangled Nature model of evolutionary ecology to study the relationship between ecosystem structure and evolutionary history. Our evolved model ecosystems typically exhibit interaction networks with exponential degree distributions and an inverse dependence between connectance and species richness. We use a simplified network evolution model to demonstrate that the observed degree distributions can occur as a consequence of partial correlations in the inheritance process. Further to this, in the limit of low connectance and maximal correlation, distributions of power law form, $P(k){\propto}1/k$, can be achieved. We also show that a hyperbolic relationship between connectance and species richness, $C{\sim}1/D$, can arise as a consequence of probabilistic constraints on the evolutionary search process.
2004.12433
Xin Lin
Jingyuan Wang and Xin Lin and Yuxi Liu and Qilegeri and Kai Feng and Hui Lin
A knowledge transfer model for COVID-19 predicting and non-pharmaceutical intervention simulation
8 pages, 8 figures
null
null
null
q-bio.PE math.DS math.OC physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since December 2019, a novel coronavirus (2019-nCoV) has been spreading in China, which can cause respiratory diseases and severe pneumonia. Mathematical and empirical models relying on the epidemic situation scale for forecasting disease outbreaks have received increasing attention. Given its successful application in evaluating the scale of infectious diseases, we propose a Susceptible-Undiagnosed-Infected-Removed (SUIR) model to offer effective prediction, prevention, and control of infectious diseases. Our model is a modified susceptible-infected-recovered (SIR) model that injects an undiagnosed state and offers a pre-trained effective reproduction number. Our SUIR model is more precise than the traditional SIR model. Moreover, we combine domain knowledge of the epidemic to estimate the effective reproduction number, which brings the initial susceptible population of the infectious disease model closer to the ground truth. These findings have implications for forecasting epidemic trends in COVID-19, as they could help improve estimation of the epidemic situation.
[ { "created": "Sun, 26 Apr 2020 16:52:57 GMT", "version": "v1" }, { "created": "Mon, 4 Jan 2021 11:39:36 GMT", "version": "v2" } ]
2021-01-05
[ [ "Wang", "Jingyuan", "" ], [ "Lin", "Xin", "" ], [ "Liu", "Yuxi", "" ], [ "Qilegeri", "", "" ], [ "Feng", "Kai", "" ], [ "Lin", "Hui", "" ] ]
Since December 2019, a novel coronavirus (2019-nCoV) has been spreading in China, which can cause respiratory diseases and severe pneumonia. Mathematical and empirical models relying on the epidemic situation scale for forecasting disease outbreaks have received increasing attention. Given its successful application in evaluating the scale of infectious diseases, we propose a Susceptible-Undiagnosed-Infected-Removed (SUIR) model to offer effective prediction, prevention, and control of infectious diseases. Our model is a modified susceptible-infected-recovered (SIR) model that injects an undiagnosed state and offers a pre-trained effective reproduction number. Our SUIR model is more precise than the traditional SIR model. Moreover, we combine domain knowledge of the epidemic to estimate the effective reproduction number, which brings the initial susceptible population of the infectious disease model closer to the ground truth. These findings have implications for forecasting epidemic trends in COVID-19, as they could help improve estimation of the epidemic situation.
1602.04470
Michael B\"orsch
Ilka Starke, Kathryn M. Johnson, Jan Petersen, Peter Graber, Anthony W. Opipari, Jr., Gary D. Glick, Michael B\"orsch
Binding of the immunomodulatory drug Bz-423 to mitochondrial FoF1-ATP synthase in living cells by FRET acceptor photobleaching
11 pages, 4 figures
null
10.1117/12.2209645
null
q-bio.BM q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bz-423 is a promising new drug for treatment of autoimmune diseases. This small molecule binds to subunit OSCP of the mitochondrial enzyme FoF1-ATP synthase and modulates its catalytic activities. We investigate the binding of Bz-423 to mitochondria in living cells and how subunit rotation in FoF1-ATP synthase, i.e. the mechanochemical mechanism of this enzyme, is affected by Bz-423. Therefore, the enzyme was marked selectively by genetic fusion of the fluorescent protein EGFP to the C terminus of subunit gamma. Imaging the three-dimensional arrangement of mitochondria in living yeast cells was possible at superresolution using structured illumination microscopy, SIM. We measured uptake and binding of a Cy5-labeled Bz-423 derivative to mitochondrial FoF1-ATP synthase in living yeast cells using FRET acceptor photobleaching microscopy. Our data confirmed the binding of Cy5-labeled Bz-423 to the top of the F1 domain of the enzyme in mitochondria of living Saccharomyces cerevisiae cells.
[ { "created": "Sun, 14 Feb 2016 15:58:28 GMT", "version": "v1" } ]
2016-06-22
[ [ "Starke", "Ilka", "" ], [ "Johnson", "Kathryn M.", "" ], [ "Petersen", "Jan", "" ], [ "Graber", "Peter", "" ], [ "Opipari,", "Anthony W.", "Jr." ], [ "Glick", "Gary D.", "" ], [ "Börsch", "Michael", "" ] ]
Bz-423 is a promising new drug for treatment of autoimmune diseases. This small molecule binds to subunit OSCP of the mitochondrial enzyme FoF1-ATP synthase and modulates its catalytic activities. We investigate the binding of Bz-423 to mitochondria in living cells and how subunit rotation in FoF1-ATP synthase, i.e. the mechanochemical mechanism of this enzyme, is affected by Bz-423. Therefore, the enzyme was marked selectively by genetic fusion of the fluorescent protein EGFP to the C terminus of subunit gamma. Imaging the three-dimensional arrangement of mitochondria in living yeast cells was possible at superresolution using structured illumination microscopy, SIM. We measured uptake and binding of a Cy5-labeled Bz-423 derivative to mitochondrial FoF1-ATP synthase in living yeast cells using FRET acceptor photobleaching microscopy. Our data confirmed the binding of Cy5-labeled Bz-423 to the top of the F1 domain of the enzyme in mitochondria of living Saccharomyces cerevisiae cells.
1703.10062
Sofia Ira Ktena
Sofia Ira Ktena, Salim Arslan, Sarah Parisot, Daniel Rueckert
Exploring Heritability of Functional Brain Networks with Inexact Graph Matching
accepted at ISBI 2017: International Symposium on Biomedical Imaging, Apr 2017, Melbourne, Australia
null
null
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data-driven brain parcellations aim to provide a more accurate representation of an individual's functional connectivity, since they are able to capture individual variability that arises due to development or disease. This renders comparisons between the emerging brain connectivity networks more challenging, since correspondences between their elements are not preserved. Unveiling these correspondences is of major importance to keep track of local functional connectivity changes. We propose a novel method based on graph edit distance for the comparison of brain graphs directly in their domain, that can accurately reflect similarities between individual networks while providing the network element correspondences. This method is validated on a dataset of 116 twin subjects provided by the Human Connectome Project.
[ { "created": "Wed, 29 Mar 2017 14:24:52 GMT", "version": "v1" } ]
2017-03-30
[ [ "Ktena", "Sofia Ira", "" ], [ "Arslan", "Salim", "" ], [ "Parisot", "Sarah", "" ], [ "Rueckert", "Daniel", "" ] ]
Data-driven brain parcellations aim to provide a more accurate representation of an individual's functional connectivity, since they are able to capture individual variability that arises due to development or disease. This renders comparisons between the emerging brain connectivity networks more challenging, since correspondences between their elements are not preserved. Unveiling these correspondences is of major importance to keep track of local functional connectivity changes. We propose a novel method based on graph edit distance for the comparison of brain graphs directly in their domain, that can accurately reflect similarities between individual networks while providing the network element correspondences. This method is validated on a dataset of 116 twin subjects provided by the Human Connectome Project.
2103.08100
Venkata Avaneesh Narla
Avaneesh V. Narla, Jonas Cremer and Terry Hwa
A Traveling-Wave Solution for Bacterial Chemotaxis with Growth
27 pages main text, 34 pages Supplemental Information
null
10.1073/pnas.2105138118
null
q-bio.PE q-bio.CB
http://creativecommons.org/licenses/by-nc-nd/4.0/
Bacterial cells navigate around their environment by directing their movement along chemical gradients. This process, known as chemotaxis, can promote the rapid expansion of bacterial populations into previously unoccupied territories. However, despite numerous experimental and theoretical studies on this classical topic, chemotaxis-driven population expansion is not understood in quantitative terms. Building on recent experimental progress, we here present a detailed analytical study that provides a quantitative understanding of how chemotaxis and cell growth lead to rapid and stable expansion of bacterial populations. We provide analytical relations that accurately describe the dependence of the expansion speed and density profile of the expanding population on important molecular, cellular, and environmental parameters. In particular, expansion speeds can be boosted by orders of magnitude when the environmental availability of chemicals relative to the cellular limits of chemical sensing is high. As analytical understanding of such complex spatiotemporal dynamic processes is rare, the results derived here provide a mathematical framework for further investigations of the different roles chemotaxis plays in diverse ecological contexts.
[ { "created": "Mon, 15 Mar 2021 02:17:56 GMT", "version": "v1" } ]
2022-06-08
[ [ "Narla", "Avaneesh V.", "" ], [ "Cremer", "Jonas", "" ], [ "Hwa", "Terry", "" ] ]
Bacterial cells navigate around their environment by directing their movement along chemical gradients. This process, known as chemotaxis, can promote the rapid expansion of bacterial populations into previously unoccupied territories. However, despite numerous experimental and theoretical studies on this classical topic, chemotaxis-driven population expansion is not understood in quantitative terms. Building on recent experimental progress, we here present a detailed analytical study that provides a quantitative understanding of how chemotaxis and cell growth lead to rapid and stable expansion of bacterial populations. We provide analytical relations that accurately describe the dependence of the expansion speed and density profile of the expanding population on important molecular, cellular, and environmental parameters. In particular, expansion speeds can be boosted by orders of magnitude when the environmental availability of chemicals relative to the cellular limits of chemical sensing is high. As analytical understanding of such complex spatiotemporal dynamic processes is rare, the results derived here provide a mathematical framework for further investigations of the different roles chemotaxis plays in diverse ecological contexts.
0806.3591
Pascal Cr\'epey
Pascal Cr\'epey and Marc Barth\'elemy
Detecting Robust Patterns in the Spread of Epidemics: A Case Study of Influenza in the United States and France
8 pages, 7 figures, 3 tables
Am J Epidemiol. 2007 Dec 1;166(11):1244-51. Epub 2007 Oct 15
10.1093/aje/kwm266
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the authors develop a method of detecting correlations between epidemic patterns in different regions that are due to human movement and introduce a null model in which the travel-induced correlations are cancelled. They apply this method to the well-documented cases of seasonal influenza outbreaks in the United States and France. In the United States (using data for 1972-2002), the authors observed strong short-range correlations between several states and their immediate neighbors, as well as robust long-range spreading patterns resulting from large domestic air-traffic flows. The stability of these results over time allowed the authors to draw conclusions about the possible impact of travel restrictions on epidemic spread. The authors also applied this method to the case of France (1984-2004) and found that on the regional scale, there was no transportation mode that clearly dominated disease spread. The simplicity and robustness of this method suggest that it could be a useful tool for detecting transmission channels in the spread of epidemics.
[ { "created": "Sun, 22 Jun 2008 21:17:55 GMT", "version": "v1" } ]
2008-06-24
[ [ "Crépey", "Pascal", "" ], [ "Barthélemy", "Marc", "" ] ]
In this paper, the authors develop a method of detecting correlations between epidemic patterns in different regions that are due to human movement and introduce a null model in which the travel-induced correlations are cancelled. They apply this method to the well-documented cases of seasonal influenza outbreaks in the United States and France. In the United States (using data for 1972-2002), the authors observed strong short-range correlations between several states and their immediate neighbors, as well as robust long-range spreading patterns resulting from large domestic air-traffic flows. The stability of these results over time allowed the authors to draw conclusions about the possible impact of travel restrictions on epidemic spread. The authors also applied this method to the case of France (1984-2004) and found that on the regional scale, there was no transportation mode that clearly dominated disease spread. The simplicity and robustness of this method suggest that it could be a useful tool for detecting transmission channels in the spread of epidemics.
1603.04131
Eve Armstrong
Eve Armstrong and Henry D. I. Abarbanel
Model of the Songbird Nucleus HVC as a Network of Central Pattern Generators
20 pages, 9 figures. Submitted to the Journal of Neurophysiology
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a functional architecture of the adult songbird nucleus HVC in which the core element is a "functional syllable unit" (FSU). In this model, HVC is organized into FSUs, each of which provides the basis for the production of one syllable in vocalization. Within each FSU, the inhibitory neuron population takes one of two operational states: (A) simultaneous firing wherein all inhibitory neurons fire simultaneously, and (B) competitive firing of the inhibitory neurons. Switching between these basic modes of activity is accomplished via changes in the synaptic strengths among the inhibitory neurons. The inhibitory neurons connect to excitatory projection neurons such that during state (A) the activity of projection neurons is suppressed, while during state (B) patterns of sequential firing of projection neurons can occur. The latter state is stabilized by feedback from the projection to the inhibitory neurons. Song composition for specific species is distinguished by the manner in which different FSUs are functionally connected to each other. Ours is a computational model built with biophysically based neurons. We illustrate that many observations of HVC activity are explained by the dynamics of the proposed population of FSUs, and we identify aspects of the model that are currently testable experimentally. In addition, and standing apart from the core features of an FSU, we propose that the transition between modes may be governed by the biophysical mechanism of neuromodulation.
[ { "created": "Mon, 14 Mar 2016 04:13:02 GMT", "version": "v1" }, { "created": "Fri, 3 Jun 2016 18:21:20 GMT", "version": "v2" }, { "created": "Mon, 18 Jul 2016 14:45:18 GMT", "version": "v3" } ]
2016-07-19
[ [ "Armstrong", "Eve", "" ], [ "Abarbanel", "Henry D. I.", "" ] ]
We propose a functional architecture of the adult songbird nucleus HVC in which the core element is a "functional syllable unit" (FSU). In this model, HVC is organized into FSUs, each of which provides the basis for the production of one syllable in vocalization. Within each FSU, the inhibitory neuron population takes one of two operational states: (A) simultaneous firing wherein all inhibitory neurons fire simultaneously, and (B) competitive firing of the inhibitory neurons. Switching between these basic modes of activity is accomplished via changes in the synaptic strengths among the inhibitory neurons. The inhibitory neurons connect to excitatory projection neurons such that during state (A) the activity of projection neurons is suppressed, while during state (B) patterns of sequential firing of projection neurons can occur. The latter state is stabilized by feedback from the projection to the inhibitory neurons. Song composition for specific species is distinguished by the manner in which different FSUs are functionally connected to each other. Ours is a computational model built with biophysically based neurons. We illustrate that many observations of HVC activity are explained by the dynamics of the proposed population of FSUs, and we identify aspects of the model that are currently testable experimentally. In addition, and standing apart from the core features of an FSU, we propose that the transition between modes may be governed by the biophysical mechanism of neuromodulation.
1209.3924
Miko{\l}aj Rybi\'nski
Miko{\l}aj Rybi\'nski, Zuzanna Szyma\'nska, S{\l}awomir Lasota and Anna Gambin
Modelling the efficacy of hyperthermia treatment
Based on results published in first authors PhD thesis (2012). In contrast to the original text most of the technical stuff has been moved to supplementary material ("file_si-termotolerancja.pdf"), plus many other minor improvements and additions have been done. Latest version includes minor revisions and improvements such as expansion of methods section and fig. 5 in the main text
null
10.1098/rsif.2013.0527
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal oncological strategies which combine chemotherapy or radiotherapy with hyperthermia have the potential to improve the efficacy of non-surgical methods of cancer treatment. Hyperthermia engages the heat-shock response mechanism (HSR), the main components of which are heat-shock proteins (HSPs). Cancer cells have an already partially activated HSR; hyperthermia may therefore be more toxic to them relative to normal cells. On the other hand, HSR triggers thermotolerance, i.e. hyperthermia-treated cells show an impairment in their susceptibility to a subsequent heat-induced stress. This poses questions about the efficacy and optimal strategy of anti-cancer therapy combined with hyperthermia treatment. To address these questions, we adapt our previous HSR model and propose its stochastic extension. We formalise the notion of HSP-induced thermotolerance. Next, we estimate the intensity and the duration of the thermotolerance. Finally, we quantify the effect of a multimodal therapy based on hyperthermia and a cytotoxic effect of bortezomib, a clinically approved proteasome inhibitor. Consequently, we propose an optimal strategy for combining hyperthermia and proteasome inhibition modalities. In summary, by a proof-of-concept mathematical analysis of HSR we are able to support the common belief that the combination of cancer treatment strategies increases therapy efficacy.
[ { "created": "Tue, 18 Sep 2012 12:32:31 GMT", "version": "v1" }, { "created": "Tue, 23 Oct 2012 10:54:32 GMT", "version": "v2" }, { "created": "Mon, 7 Jan 2013 17:05:29 GMT", "version": "v3" }, { "created": "Wed, 6 Mar 2013 15:03:10 GMT", "version": "v4" } ]
2013-09-05
[ [ "Rybiński", "Mikołaj", "" ], [ "Szymańska", "Zuzanna", "" ], [ "Lasota", "Sławomir", "" ], [ "Gambin", "Anna", "" ] ]
Multimodal oncological strategies which combine chemotherapy or radiotherapy with hyperthermia have the potential to improve the efficacy of non-surgical methods of cancer treatment. Hyperthermia engages the heat-shock response mechanism (HSR), the main components of which are heat-shock proteins (HSPs). Cancer cells have an already partially activated HSR; hyperthermia may therefore be more toxic to them relative to normal cells. On the other hand, HSR triggers thermotolerance, i.e. hyperthermia-treated cells show an impairment in their susceptibility to a subsequent heat-induced stress. This poses questions about the efficacy and optimal strategy of anti-cancer therapy combined with hyperthermia treatment. To address these questions, we adapt our previous HSR model and propose its stochastic extension. We formalise the notion of HSP-induced thermotolerance. Next, we estimate the intensity and the duration of the thermotolerance. Finally, we quantify the effect of a multimodal therapy based on hyperthermia and a cytotoxic effect of bortezomib, a clinically approved proteasome inhibitor. Consequently, we propose an optimal strategy for combining hyperthermia and proteasome inhibition modalities. In summary, by a proof-of-concept mathematical analysis of HSR we are able to support the common belief that the combination of cancer treatment strategies increases therapy efficacy.
q-bio/0606041
Marc Timme
Raoul-Martin Memmesheimer and Marc Timme
Designing Complex Networks
42 pages, 12 figures
Physica D 224:182 (2006)
10.1016/j.physd.2006.09.037
null
q-bio.NC cond-mat.dis-nn q-bio.QM
null
We suggest a new perspective of research towards understanding the relations between structure and dynamics of a complex network: Can we design a network, e.g. by modifying the features of units or interactions, such that it exhibits a desired dynamics? Here we present a case study where we positively answer this question analytically for networks of spiking neural oscillators. First, we present a method of finding the set of all networks (defined by all mutual coupling strengths) that exhibit an arbitrary given periodic pattern of spikes as an invariant solution. In such a pattern all the spike times of all the neurons are exactly predefined. The method is very general as it covers networks of different types of neurons, excitatory and inhibitory couplings, interaction delays that may be heterogeneously distributed, and arbitrary network connectivities. Second, we show how to design networks if further restrictions are imposed, for instance by predefining the detailed network connectivity. We illustrate the applicability of the method by examples of Erd\"{o}s-R\'{e}nyi and power-law random networks. Third, the method can be used to design networks that optimize network properties. To illustrate this idea, we design networks that exhibit a predefined pattern dynamics while at the same time minimizing the networks' wiring costs.
[ { "created": "Thu, 29 Jun 2006 16:58:05 GMT", "version": "v1" }, { "created": "Fri, 10 Nov 2006 12:57:37 GMT", "version": "v2" } ]
2009-11-13
[ [ "Memmesheimer", "Raoul-Martin", "" ], [ "Timme", "Marc", "" ] ]
We suggest a new perspective of research towards understanding the relations between structure and dynamics of a complex network: Can we design a network, e.g. by modifying the features of units or interactions, such that it exhibits a desired dynamics? Here we present a case study where we positively answer this question analytically for networks of spiking neural oscillators. First, we present a method of finding the set of all networks (defined by all mutual coupling strengths) that exhibit an arbitrary given periodic pattern of spikes as an invariant solution. In such a pattern all the spike times of all the neurons are exactly predefined. The method is very general as it covers networks of different types of neurons, excitatory and inhibitory couplings, interaction delays that may be heterogeneously distributed, and arbitrary network connectivities. Second, we show how to design networks if further restrictions are imposed, for instance by predefining the detailed network connectivity. We illustrate the applicability of the method by examples of Erd\"{o}s-R\'{e}nyi and power-law random networks. Third, the method can be used to design networks that optimize network properties. To illustrate this idea, we design networks that exhibit a predefined pattern dynamics while at the same time minimizing the networks' wiring costs.
2009.02620
Allison Lewis
Heyrim Cho, Allison L. Lewis, and Kathleen M. Storey
Bayesian information-theoretic calibration of patient-specific radiotherapy sensitivity parameters for informing effective scanning protocols in cancer
null
null
null
null
q-bio.QM cs.IT math.DS math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With new advancements in technology, it is now possible to collect data for a variety of different metrics describing tumor growth, including tumor volume, composition, and vascularity, among others. For any proposed model of tumor growth and treatment, we observe large variability among individual patients' parameter values, particularly those relating to treatment response; thus, exploiting the use of these various metrics for model calibration can be helpful to infer such patient-specific parameters both accurately and early, so that treatment protocols can be adjusted mid-course for maximum efficacy. However, taking measurements can be costly and invasive, limiting clinicians to a sparse collection schedule. As such, the determination of optimal times and metrics for which to collect data in order to best inform proper treatment protocols could be of great assistance to clinicians. In this investigation, we employ a Bayesian information-theoretic calibration protocol for experimental design in order to identify the optimal times at which to collect data for informing treatment parameters. Within this procedure, data collection times are chosen sequentially to maximize the reduction in parameter uncertainty with each added measurement, ensuring that a budget of $n$ high-fidelity experimental measurements results in maximum information gain about the low-fidelity model parameter values. In addition to investigating the optimal temporal pattern for data collection, we also develop a framework for deciding which metrics should be utilized at each data collection point. We illustrate this framework with a variety of toy examples, each utilizing a radiotherapy treatment regimen. For each scenario, we analyze the dependence of the predictive power of the low-fidelity model upon the measurement budget.
[ { "created": "Sun, 6 Sep 2020 00:26:33 GMT", "version": "v1" } ]
2020-09-08
[ [ "Cho", "Heyrim", "" ], [ "Lewis", "Allison L.", "" ], [ "Storey", "Kathleen M.", "" ] ]
With new advancements in technology, it is now possible to collect data for a variety of different metrics describing tumor growth, including tumor volume, composition, and vascularity, among others. For any proposed model of tumor growth and treatment, we observe large variability among individual patients' parameter values, particularly those relating to treatment response; thus, exploiting the use of these various metrics for model calibration can be helpful to infer such patient-specific parameters both accurately and early, so that treatment protocols can be adjusted mid-course for maximum efficacy. However, taking measurements can be costly and invasive, limiting clinicians to a sparse collection schedule. As such, the determination of optimal times and metrics for which to collect data in order to best inform proper treatment protocols could be of great assistance to clinicians. In this investigation, we employ a Bayesian information-theoretic calibration protocol for experimental design in order to identify the optimal times at which to collect data for informing treatment parameters. Within this procedure, data collection times are chosen sequentially to maximize the reduction in parameter uncertainty with each added measurement, ensuring that a budget of $n$ high-fidelity experimental measurements results in maximum information gain about the low-fidelity model parameter values. In addition to investigating the optimal temporal pattern for data collection, we also develop a framework for deciding which metrics should be utilized at each data collection point. We illustrate this framework with a variety of toy examples, each utilizing a radiotherapy treatment regimen. For each scenario, we analyze the dependence of the predictive power of the low-fidelity model upon the measurement budget.
2308.05326
Gustaf Ahdritz
Gustaf Ahdritz, Nazim Bouatta, Sachin Kadyan, Lukas Jarosch, Daniel Berenberg, Ian Fisk, Andrew M. Watkins, Stephen Ra, Richard Bonneau, Mohammed AlQuraishi
OpenProteinSet: Training data for structural biology at scale
null
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
Multiple sequence alignments (MSAs) of proteins encode rich biological information and have been workhorses in bioinformatic methods for tasks like protein design and protein structure prediction for decades. Recent breakthroughs like AlphaFold2 that use transformers to attend directly over large quantities of raw MSAs have reaffirmed their importance. Generation of MSAs is highly computationally intensive, however, and no datasets comparable to those used to train AlphaFold2 have been made available to the research community, hindering progress in machine learning for proteins. To remedy this problem, we introduce OpenProteinSet, an open-source corpus of more than 16 million MSAs, associated structural homologs from the Protein Data Bank, and AlphaFold2 protein structure predictions. We have previously demonstrated the utility of OpenProteinSet by successfully retraining AlphaFold2 on it. We expect OpenProteinSet to be broadly useful as training and validation data for 1) diverse tasks focused on protein structure, function, and design and 2) large-scale multimodal machine learning research.
[ { "created": "Thu, 10 Aug 2023 04:01:04 GMT", "version": "v1" } ]
2023-08-11
[ [ "Ahdritz", "Gustaf", "" ], [ "Bouatta", "Nazim", "" ], [ "Kadyan", "Sachin", "" ], [ "Jarosch", "Lukas", "" ], [ "Berenberg", "Daniel", "" ], [ "Fisk", "Ian", "" ], [ "Watkins", "Andrew M.", "" ], [ "Ra", "Stephen", "" ], [ "Bonneau", "Richard", "" ], [ "AlQuraishi", "Mohammed", "" ] ]
Multiple sequence alignments (MSAs) of proteins encode rich biological information and have been workhorses in bioinformatic methods for tasks like protein design and protein structure prediction for decades. Recent breakthroughs like AlphaFold2 that use transformers to attend directly over large quantities of raw MSAs have reaffirmed their importance. Generation of MSAs is highly computationally intensive, however, and no datasets comparable to those used to train AlphaFold2 have been made available to the research community, hindering progress in machine learning for proteins. To remedy this problem, we introduce OpenProteinSet, an open-source corpus of more than 16 million MSAs, associated structural homologs from the Protein Data Bank, and AlphaFold2 protein structure predictions. We have previously demonstrated the utility of OpenProteinSet by successfully retraining AlphaFold2 on it. We expect OpenProteinSet to be broadly useful as training and validation data for 1) diverse tasks focused on protein structure, function, and design and 2) large-scale multimodal machine learning research.
1905.01181
Rodrigo Felipe De Oliveira Pena
Rodrigo F.O. Pena, Vinicius Lima, Renan O. Shimoura, Jo\~ao P. Novato, Antonio C. Roque
Optimal interplay between synaptic strengths and network structure enhances activity fluctuations and information propagation in hierarchical modular networks
19 pages, 7 figures
Brain Sciences 2020, 10, 228
10.3390/brainsci10040228
null
q-bio.NC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In network models of spiking neurons, the joint impact of network structure and synaptic parameters on activity propagation is still an open problem. Here we use an information-theoretical approach to investigate activity propagation in spiking networks with hierarchical modular topology. We observe that optimized pairwise information propagation emerges due to the increase of either (i) the global synaptic strength parameter or (ii) the number of modules in the network, while the network size remains constant. At the population level, information propagation of activity among adjacent modules is enhanced as the number of modules increases until a maximum value is reached and then decreases, showing that there is an optimal interplay between synaptic strength and modularity for population information flow. This is in contrast to information propagation evaluated among pairs of neurons, which attains maximum value at the maximum values of these two parameter ranges. By examining the network behavior under increase of synaptic strength and number of modules we find that these increases are associated with two different effects: (i) increase of autocorrelations among individual neurons, and (ii) increase of cross-correlations among pairs of neurons. The second effect is associated with better information propagation in the network. Our results suggest roles that link topological features and synaptic strength levels to the transmission of information in cortical networks.
[ { "created": "Fri, 3 May 2019 13:42:13 GMT", "version": "v1" }, { "created": "Mon, 6 May 2019 02:56:05 GMT", "version": "v2" }, { "created": "Mon, 30 Mar 2020 23:26:31 GMT", "version": "v3" }, { "created": "Fri, 10 Apr 2020 19:21:12 GMT", "version": "v4" } ]
2020-04-14
[ [ "Pena", "Rodrigo F. O.", "" ], [ "Lima", "Vinicius", "" ], [ "Shimoura", "Renan O.", "" ], [ "Novato", "João P.", "" ], [ "Roque", "Antonio C.", "" ] ]
In network models of spiking neurons, the joint impact of network structure and synaptic parameters on activity propagation is still an open problem. Here we use an information-theoretical approach to investigate activity propagation in spiking networks with hierarchical modular topology. We observe that optimized pairwise information propagation emerges due to the increase of either (i) the global synaptic strength parameter or (ii) the number of modules in the network, while the network size remains constant. At the population level, information propagation of activity among adjacent modules is enhanced as the number of modules increases until a maximum value is reached and then decreases, showing that there is an optimal interplay between synaptic strength and modularity for population information flow. This is in contrast to information propagation evaluated among pairs of neurons, which attains maximum value at the maximum values of these two parameter ranges. By examining the network behavior under increase of synaptic strength and number of modules we find that these increases are associated with two different effects: (i) increase of autocorrelations among individual neurons, and (ii) increase of cross-correlations among pairs of neurons. The second effect is associated with better information propagation in the network. Our results suggest roles that link topological features and synaptic strength levels to the transmission of information in cortical networks.
0903.1172
George Tsibidis
George D Tsibidis
Quantitative interpretation of binding reactions of rapidly diffusing species using fluorescence recovery after photobleaching
14 pages, 5 figures
Journal of Microscopy, Vol. 233, Pt 3 2009, pp. 384-390
null
null
q-bio.SC q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fluorescence recovery after photobleaching (FRAP) measurements offer an important tool for analyzing diffusion and binding processes. Confocal scanning laser microscopes that are used in FRAP experiments bleach regions with a radially Gaussian distributed profile. Previous attempts to derive analytical expressions in the case of processes governed by fast diffusion have overlooked the characteristics of the instruments used to perform FRAP measurements and therefore led to approximating solutions. In the present paper, bleaching laser beam characteristics are incorporated into an improved model to provide a more rigorous and accurate method. The proposed model simulates binding inside bounded regions, and it leads to FRAP curves that depend on the on and off rates that can be employed to determine the rate constants. It can be used in conjunction with experimental data acquired with confocal scanning laser microscopes to investigate the biophysical properties of proteins in living cells. The model aims to improve the accuracy when determining rate constants by taking into account a more realistic scenario of the light-matter interaction.
[ { "created": "Fri, 6 Mar 2009 09:10:54 GMT", "version": "v1" } ]
2009-03-09
[ [ "Tsibidis", "George D", "" ] ]
Fluorescence recovery after photobleaching (FRAP) measurements offer an important tool for analyzing diffusion and binding processes. Confocal scanning laser microscopes that are used in FRAP experiments bleach regions with a radially Gaussian distributed profile. Previous attempts to derive analytical expressions in the case of processes governed by fast diffusion have overlooked the characteristics of the instruments used to perform FRAP measurements and therefore led to approximating solutions. In the present paper, bleaching laser beam characteristics are incorporated into an improved model to provide a more rigorous and accurate method. The proposed model simulates binding inside bounded regions, and it leads to FRAP curves that depend on the on and off rates that can be employed to determine the rate constants. It can be used in conjunction with experimental data acquired with confocal scanning laser microscopes to investigate the biophysical properties of proteins in living cells. The model aims to improve the accuracy when determining rate constants by taking into account a more realistic scenario of the light-matter interaction.
1102.1928
Mathis Antony A
Mathis Antony, Degang Wu, K Y Szeto
Imitation with incomplete information in 2x2 games
7 pages, 4 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolutionary game theory has been an important tool for describing economic and social behaviour for decades. Approximate mean value equations describing the time evolution of strategy concentrations can be derived from the players' microscopic update rules. We show that they can be generalized to a learning process. As an example, we compare a restricted imitation process, in which unused parts of the role model's meta-strategy are hidden from the imitator, with the widely used imitation rule that allows the imitator to adopt the entire meta-strategy of the role model. This change in imitation behaviour greatly affects dynamics and stationary states in the iterated prisoner dilemma. Particularly we find Grim Trigger to be a more successful strategy than Tit-For-Tat especially in the weak selection regime.
[ { "created": "Wed, 9 Feb 2011 18:25:18 GMT", "version": "v1" } ]
2011-02-10
[ [ "Antony", "Mathis", "" ], [ "Wu", "Degang", "" ], [ "Szeto", "K Y", "" ] ]
Evolutionary game theory has been an important tool for describing economic and social behaviour for decades. Approximate mean value equations describing the time evolution of strategy concentrations can be derived from the players' microscopic update rules. We show that they can be generalized to a learning process. As an example, we compare a restricted imitation process, in which unused parts of the role model's meta-strategy are hidden from the imitator, with the widely used imitation rule that allows the imitator to adopt the entire meta-strategy of the role model. This change in imitation behaviour greatly affects dynamics and stationary states in the iterated prisoner dilemma. Particularly we find Grim Trigger to be a more successful strategy than Tit-For-Tat especially in the weak selection regime.
1601.03419
Anarina Murillo
Anarina L. Murillo, Muntaser Safan, Carlos Castillo-Chavez, Elizabeth D. Capaldi-Phillips, Devina Wadhera
Modeling Eating Behaviors: the Role of Environment and Positive Food Association Learning via a Ratatouille Effect
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Eating behaviors among a large population of children are studied as a dynamic process driven by nonlinear interactions in the sociocultural school environment. The impact of food association learning on diet dynamics, inspired by a pilot study conducted among Arizona children in Pre-Kindergarten to 8th grades, is used to build simple population-level learning models. Qualitatively, mathematical studies are used to highlight the possible ramifications of instruction, learning in nutrition, and health at the community level. Model results suggest that nutrition education programs at the population-level have minimal impact on improving eating behaviors, findings that agree with prior field studies. Hence, the incorporation of food association learning may be a better strategy for creating resilient communities of healthy and non-healthy eaters. A \textit{Ratatouille} effect can be observed when food association learners become food preference learners, a potential sustainable behavioral change, which in turn, may impact the overall distribution of healthy eaters. In short, this work evaluates the effectiveness of population-level intervention strategies and the importance of institutionalizing nutrition programs that factor in economical, social, cultural, and environmental elements that mesh well with the norms and values in the community.
[ { "created": "Sun, 13 Dec 2015 23:40:01 GMT", "version": "v1" } ]
2016-01-15
[ [ "Murillo", "Anarina L.", "" ], [ "Safan", "Muntaser", "" ], [ "Castillo-Chavez", "Carlos", "" ], [ "Capaldi-Phillips", "Elizabeth D.", "" ], [ "Wadhera", "Devina", "" ] ]
Eating behaviors among a large population of children are studied as a dynamic process driven by nonlinear interactions in the sociocultural school environment. The impact of food association learning on diet dynamics, inspired by a pilot study conducted among Arizona children in Pre-Kindergarten to 8th grades, is used to build simple population-level learning models. Qualitatively, mathematical studies are used to highlight the possible ramifications of instruction, learning in nutrition, and health at the community level. Model results suggest that nutrition education programs at the population-level have minimal impact on improving eating behaviors, findings that agree with prior field studies. Hence, the incorporation of food association learning may be a better strategy for creating resilient communities of healthy and non-healthy eaters. A \textit{Ratatouille} effect can be observed when food association learners become food preference learners, a potential sustainable behavioral change, which in turn, may impact the overall distribution of healthy eaters. In short, this work evaluates the effectiveness of population-level intervention strategies and the importance of institutionalizing nutrition programs that factor in economical, social, cultural, and environmental elements that mesh well with the norms and values in the community.
1112.4059
Demian Battaglia
Demian Battaglia, David Hansel
Synchronous chaos and broad band gamma rhythm in a minimal multi-layer model of primary visual cortex
49 pages, 11 figures, 7 tables
Published in PLoS Comput Biol 7(10): e1002176 (2011)
10.1371/journal.pcbi.1002176
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visually induced neuronal activity in V1 displays a marked gamma-band component which is modulated by stimulus properties. It has been argued that synchronized oscillations contribute to this gamma-band activity [... however,] even when oscillations are observed, they undergo temporal decorrelation over very few cycles. This is not easily accounted for in previous network modeling of gamma oscillations. We argue here that interactions between cortical layers can be responsible for this fast decorrelation. We study a model of a V1 hypercolumn, embedding a simplified description of the multi-layered structure of the cortex. When the stimulus contrast is low, the induced activity is only weakly synchronous and the network resonates transiently without developing collective oscillations. When the contrast is high, on the other hand, the induced activity undergoes synchronous oscillations with an irregular spatiotemporal structure expressing a synchronous chaotic state. As a consequence the population activity undergoes fast temporal decorrelation, with concomitant rapid damping of the oscillations in LFPs autocorrelograms and peak broadening in LFPs power spectra. [...] Finally, we argue that the mechanism underlying the emergence of synchronous chaos in our model is in fact very general. It stems from the fact that gamma oscillations induced by local delayed inhibition tend to develop chaos when coupled by sufficiently strong excitation.
[ { "created": "Sat, 17 Dec 2011 14:39:07 GMT", "version": "v1" } ]
2011-12-20
[ [ "Battaglia", "Demian", "" ], [ "Hansel", "David", "" ] ]
Visually induced neuronal activity in V1 displays a marked gamma-band component which is modulated by stimulus properties. It has been argued that synchronized oscillations contribute to this gamma-band activity [... however,] even when oscillations are observed, they undergo temporal decorrelation over very few cycles. This is not easily accounted for in previous network modeling of gamma oscillations. We argue here that interactions between cortical layers can be responsible for this fast decorrelation. We study a model of a V1 hypercolumn, embedding a simplified description of the multi-layered structure of the cortex. When the stimulus contrast is low, the induced activity is only weakly synchronous and the network resonates transiently without developing collective oscillations. When the contrast is high, on the other hand, the induced activity undergoes synchronous oscillations with an irregular spatiotemporal structure expressing a synchronous chaotic state. As a consequence the population activity undergoes fast temporal decorrelation, with concomitant rapid damping of the oscillations in LFPs autocorrelograms and peak broadening in LFPs power spectra. [...] Finally, we argue that the mechanism underlying the emergence of synchronous chaos in our model is in fact very general. It stems from the fact that gamma oscillations induced by local delayed inhibition tend to develop chaos when coupled by sufficiently strong excitation.
1404.1421
Chaitanya Gokhale
Chaitanya S. Gokhale and Arne Traulsen
Evolutionary Multiplayer Games
14 pages, 2 figures, review paper
Dynamic Games and Applications, 2014, 10.1007/s13235-014-0106-2
10.1007/s13235-014-0106-2
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolutionary game theory has become one of the most diverse and far reaching theories in biology. Applications of this theory range from cell dynamics to social evolution. However, many applications make it clear that inherent non-linearities of natural systems need to be taken into account. One way of introducing such non-linearities into evolutionary games is by the inclusion of multiple players. An example is of social dilemmas, where group benefits could e.g.\ increase less than linearly with the number of cooperators. Such multiplayer games can be introduced in all the fields where evolutionary game theory is already well established. However, the inclusion of non-linearities can help to advance the analysis of systems which are known to be complex, e.g. in the case of non-Mendelian inheritance. We review the diachronic theory and applications of multiplayer evolutionary games and present the current state of the field. Our aim is a summary of the theoretical results from well-mixed populations in infinite as well as finite populations. We also discuss examples from three fields where the theory has been successfully applied, ecology, social sciences and population genetics. In closing, we probe certain future directions which can be explored using the complexity of multiplayer games while preserving the promise of simplicity of evolutionary games.
[ { "created": "Sat, 5 Apr 2014 02:41:16 GMT", "version": "v1" } ]
2014-04-08
[ [ "Gokhale", "Chaitanya S.", "" ], [ "Traulsen", "Arne", "" ] ]
Evolutionary game theory has become one of the most diverse and far reaching theories in biology. Applications of this theory range from cell dynamics to social evolution. However, many applications make it clear that inherent non-linearities of natural systems need to be taken into account. One way of introducing such non-linearities into evolutionary games is by the inclusion of multiple players. An example is of social dilemmas, where group benefits could e.g.\ increase less than linearly with the number of cooperators. Such multiplayer games can be introduced in all the fields where evolutionary game theory is already well established. However, the inclusion of non-linearities can help to advance the analysis of systems which are known to be complex, e.g. in the case of non-Mendelian inheritance. We review the diachronic theory and applications of multiplayer evolutionary games and present the current state of the field. Our aim is a summary of the theoretical results from well-mixed populations in infinite as well as finite populations. We also discuss examples from three fields where the theory has been successfully applied, ecology, social sciences and population genetics. In closing, we probe certain future directions which can be explored using the complexity of multiplayer games while preserving the promise of simplicity of evolutionary games.
1604.05992
Vince Grolmusz
Bal\'azs Szalkai and Vince Grolmusz
Human Sexual Dimorphism of the Relative Cerebral Area Volumes in the Data of the Human Connectome Project
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The average human brain volume of the males is larger than that of the females. Several MRI voxel-based morphometry studies show that the gray matter/white matter ratio is larger in females. Here we have analyzed the recent public release of the Human Connectome Project, and by using the diffusion MRI data of 511 subjects (209 men and 302 women), we have found that the relative volumes of numerous subcortical areas and the gray matter of most cortical areas are significantly larger in women than in men. Additionally, we have discovered differences of the strengths of the sexual correlations between the same structures in different hemispheres.
[ { "created": "Wed, 20 Apr 2016 15:03:07 GMT", "version": "v1" } ]
2016-04-21
[ [ "Szalkai", "Balázs", "" ], [ "Grolmusz", "Vince", "" ] ]
The average human brain volume of the males is larger than that of the females. Several MRI voxel-based morphometry studies show that the gray matter/white matter ratio is larger in females. Here we have analyzed the recent public release of the Human Connectome Project, and by using the diffusion MRI data of 511 subjects (209 men and 302 women), we have found that the relative volumes of numerous subcortical areas and the gray matter of most cortical areas are significantly larger in women than in men. Additionally, we have discovered differences of the strengths of the sexual correlations between the same structures in different hemispheres.
q-bio/0607009
Sa\'ul Ares
S. Ares and G. Kalosakas
Probability distribution of bubble lengths in DNA
5 pages, 4 figures, accepted version. The distribution obtained from the Peyrard-Bishop-Dauxois model is now fitted and discussed in the framework of the Poland-Scheraga model
Nano Lett., 7 (2), 307 -311, 2007
10.1021/nl062304a
null
q-bio.BM cond-mat.stat-mech physics.bio-ph
null
The distribution of bubble lengths in double-stranded DNA is presented for segments of varying guanine-cytosine (GC) content, obtained with Monte Carlo simulations using the Peyrard-Bishop-Dauxois model at 310 K. An analytical description of the obtained distribution in the whole regime investigated, i.e., up to bubble widths of the order of tens of nanometers, is available. We find that the decay lengths and characteristic exponents of this distribution show two distinct regimes as a function of GC content. The observed distribution is attributed to the anharmonic interactions within base pairs. The results are discussed in the framework of the Poland-Scheraga and the Peyrard-Bishop (with linear instead of nonlinear stacking interaction) models.
[ { "created": "Wed, 5 Jul 2006 17:17:10 GMT", "version": "v1" }, { "created": "Fri, 16 Feb 2007 10:21:50 GMT", "version": "v2" } ]
2007-05-23
[ [ "Ares", "S.", "" ], [ "Kalosakas", "G.", "" ] ]
The distribution of bubble lengths in double-stranded DNA is presented for segments of varying guanine-cytosine (GC) content, obtained with Monte Carlo simulations using the Peyrard-Bishop-Dauxois model at 310 K. An analytical description of the obtained distribution in the whole regime investigated, i.e., up to bubble widths of the order of tens of nanometers, is available. We find that the decay lengths and characteristic exponents of this distribution show two distinct regimes as a function of GC content. The observed distribution is attributed to the anharmonic interactions within base pairs. The results are discussed in the framework of the Poland-Scheraga and the Peyrard-Bishop (with linear instead of nonlinear stacking interaction) models.
1706.05783
Yiyin Zhou
Aurel A. Lazar and Nikul H. Ukani and Yiyin Zhou
Sparse Functional Identification of Complex Cells from Spike Times and the Decoding of Visual Stimuli
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
We investigate the sparse functional identification of complex cells and the decoding of visual stimuli encoded by an ensemble of complex cells. The reconstruction algorithm of both temporal and spatio-temporal stimuli is formulated as a rank minimization problem that significantly reduces the number of sampling measurements (spikes) required for decoding. We also establish the duality between sparse decoding and functional identification, and provide algorithms for identification of low-rank dendritic stimulus processors. The duality enables us to efficiently evaluate our functional identification algorithms by reconstructing novel stimuli in the input space. Finally, we demonstrate that our identification algorithms substantially outperform the generalized quadratic model, the non-linear input model and the widely used spike-triggered covariance algorithm.
[ { "created": "Mon, 19 Jun 2017 04:50:26 GMT", "version": "v1" } ]
2017-06-20
[ [ "Lazar", "Aurel A.", "" ], [ "Ukani", "Nikul H.", "" ], [ "Zhou", "Yiyin", "" ] ]
We investigate the sparse functional identification of complex cells and the decoding of visual stimuli encoded by an ensemble of complex cells. The reconstruction algorithm of both temporal and spatio-temporal stimuli is formulated as a rank minimization problem that significantly reduces the number of sampling measurements (spikes) required for decoding. We also establish the duality between sparse decoding and functional identification, and provide algorithms for identification of low-rank dendritic stimulus processors. The duality enables us to efficiently evaluate our functional identification algorithms by reconstructing novel stimuli in the input space. Finally, we demonstrate that our identification algorithms substantially outperform the generalized quadratic model, the non-linear input model and the widely used spike-triggered covariance algorithm.
1302.3267
Mathew Wedel
Mathew John Wedel
Postcranial pneumaticity in dinosaurs and the origin of the avian lung
PhD dissertation, University of California, Berkeley, Department of Integrative Biology, May 2007, 303 pages
null
null
null
q-bio.TO q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In birds, diverticula of the lungs and air sacs pneumatize specific regions of the postcranial skeleton. Relationships among pulmonary components and skeletal regions they pneumatize allow inferences about pulmonary anatomy in non-avian dinosaurs. Fossae, foramina and chambers in the postcranial skeletons of pterosaurs and saurischian dinosaurs are diagnostic for pneumaticity. In basal saurischians only the cervical skeleton is pneumatized, by cervical air sacs. In more derived saurischians pneumatization of posterior dorsal, sacral, and caudal vertebrae indicates abdominal air sacs. Abdominal air sacs in sauropods are also indicated by a pneumatic hiatus (a gap in vertebral pneumatization) in Haplocanthosaurus. Minimally, saurischians had dorsally attached diverticular lungs plus anterior and posterior air sacs, and all the pulmonary prerequisites for flow-through lung ventilation like that of birds. Pneumaticity reduced skeletal mass in saurischians. I propose the Air Space Proportion (ASP) as a measure of proportional volume of air in pneumatic bones. The mean ASP of a sample of sauropod and theropod vertebrae is 0.61, so on average, air occupied more than half the volume of these vertebrae. In Diplodocus, pneumatization lightened the living animal by 7-10 percent, and that does not include extraskeletal diverticula, air sacs, lungs, or trachea. If all these air reservoirs are included, the specific gravity of Diplodocus is 0.80, higher than published values for birds but lower than those for squamates and crocodilians. Pneumatization of cervical vertebrae facilitated evolution of long necks in sauropods. Necks longer than nine meters evolved at least four times, in mamenchisaurs, diplodocids, brachiosaurids, and titanosaurs. Increases in the number of cervical vertebrae, their proportional lengths, and their internal complexity occurred in parallel in most of these lineages.
[ { "created": "Wed, 13 Feb 2013 22:41:44 GMT", "version": "v1" } ]
2013-02-15
[ [ "Wedel", "Mathew John", "" ] ]
In birds, diverticula of the lungs and air sacs pneumatize specific regions of the postcranial skeleton. Relationships among pulmonary components and skeletal regions they pneumatize allow inferences about pulmonary anatomy in non-avian dinosaurs. Fossae, foramina and chambers in the postcranial skeletons of pterosaurs and saurischian dinosaurs are diagnostic for pneumaticity. In basal saurischians only the cervical skeleton is pneumatized, by cervical air sacs. In more derived saurischians pneumatization of posterior dorsal, sacral, and caudal vertebrae indicates abdominal air sacs. Abdominal air sacs in sauropods are also indicated by a pneumatic hiatus (a gap in vertebral pneumatization) in Haplocanthosaurus. Minimally, saurischians had dorsally attached diverticular lungs plus anterior and posterior air sacs, and all the pulmonary prerequisites for flow-through lung ventilation like that of birds. Pneumaticity reduced skeletal mass in saurischians. I propose the Air Space Proportion (ASP) as a measure of proportional volume of air in pneumatic bones. The mean ASP of a sample of sauropod and theropod vertebrae is 0.61, so on average, air occupied more than half the volume of these vertebrae. In Diplodocus, pneumatization lightened the living animal by 7-10 percent, and that does not include extraskeletal diverticula, air sacs, lungs, or trachea. If all these air reservoirs are included, the specific gravity of Diplodocus is 0.80, higher than published values for birds but lower than those for squamates and crocodilians. Pneumatization of cervical vertebrae facilitated evolution of long necks in sauropods. Necks longer than nine meters evolved at least four times, in mamenchisaurs, diplodocids, brachiosaurids, and titanosaurs. Increases in the number of cervical vertebrae, their proportional lengths, and their internal complexity occurred in parallel in most of these lineages.
2112.07126
Mathew Aibinu
M.O. Aibinu, S.C. Moyo, S. Moyo
Analyzing population dynamics models via Sumudu transform
11 pages
null
null
null
q-bio.PE math.FA
http://creativecommons.org/licenses/by/4.0/
This study demonstrates how to construct the solutions of a more general form of population dynamics models via a blend of variational iterative method with Sumudu transform. In this paper, population growth models are formulated in the form of delay differential equations of pantograph type which is a general form for the existing models. Innovative ways are presented for obtaining the solutions of population growth models where other analytic methods fail. Stimulating procedures for finding patterns and regularities in seemingly chaotic processes have been elucidated in this paper. How, when and why the changes in population sizes occur can be deduced through this study.
[ { "created": "Tue, 14 Dec 2021 02:55:44 GMT", "version": "v1" } ]
2021-12-15
[ [ "Aibinu", "M. O.", "" ], [ "Moyo", "S. C.", "" ], [ "Moyo", "S.", "" ] ]
This study demonstrates how to construct the solutions of a more general form of population dynamics models via a blend of variational iterative method with Sumudu transform. In this paper, population growth models are formulated in the form of delay differential equations of pantograph type which is a general form for the existing models. Innovative ways are presented for obtaining the solutions of population growth models where other analytic methods fail. Stimulating procedures for finding patterns and regularities in seemingly chaotic processes have been elucidated in this paper. How, when and why the changes in population sizes occur can be deduced through this study.
2004.05466
Paul Patrone
Paul N. Patrone and Anthony J. Kearsley and Erica L. Romsos and Peter M. Vallone
Improving Baseline Subtraction for Increased Sensitivity of Quantitative PCR Measurements
null
null
null
null
q-bio.QM physics.bio-ph physics.data-an physics.ins-det
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by the current COVID-19 health-crisis, we examine the task of baseline subtraction for quantitative polymerase chain-reaction (qPCR) measurements. In particular, we present an algorithm that leverages information obtained from non-template and/or DNA extraction-control experiments to remove systematic bias from amplification curves. We recast this problem in terms of mathematical optimization, i.e. by finding the amount of control signal that, when subtracted from an amplification curve, minimizes background noise. We demonstrate that this approach can yield a decade improvement in sensitivity relative to standard approaches, especially for data exhibiting late-cycle amplification. Critically, this increased sensitivity and accuracy promises more effective screening of viral DNA and a reduction in the rate of false-negatives in diagnostic settings.
[ { "created": "Sat, 11 Apr 2020 19:17:35 GMT", "version": "v1" } ]
2020-04-14
[ [ "Patrone", "Paul N.", "" ], [ "Kearsley", "Anthony J.", "" ], [ "Romsos", "Erica L.", "" ], [ "Vallone", "Peter M.", "" ] ]
Motivated by the current COVID-19 health-crisis, we examine the task of baseline subtraction for quantitative polymerase chain-reaction (qPCR) measurements. In particular, we present an algorithm that leverages information obtained from non-template and/or DNA extraction-control experiments to remove systematic bias from amplification curves. We recast this problem in terms of mathematical optimization, i.e. by finding the amount of control signal that, when subtracted from an amplification curve, minimizes background noise. We demonstrate that this approach can yield a decade improvement in sensitivity relative to standard approaches, especially for data exhibiting late-cycle amplification. Critically, this increased sensitivity and accuracy promises more effective screening of viral DNA and a reduction in the rate of false-negatives in diagnostic settings.
1307.3235
Yu Hu
Yu Hu, Joel Zylberberg, Eric Shea-Brown
The sign rule and beyond: Boundary effects, flexibility, and noise correlations in neural population codes
41 pages, 5 figures
null
10.1371/journal.pcbi.1003469
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over repeat presentations of the same stimulus, sensory neurons show variable responses. This "noise" is typically correlated between pairs of cells, and a question with a rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem - neural tuning curves, etc. - held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR) - if noise correlations between pairs of neurons have opposite signs vs. their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity. This generality is true for our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions, under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all.
[ { "created": "Thu, 11 Jul 2013 19:52:20 GMT", "version": "v1" }, { "created": "Fri, 12 Jul 2013 19:49:11 GMT", "version": "v2" }, { "created": "Thu, 22 Aug 2013 18:52:35 GMT", "version": "v3" }, { "created": "Wed, 15 Jan 2014 23:46:25 GMT", "version": "v4" } ]
2015-06-16
[ [ "Hu", "Yu", "" ], [ "Zylberberg", "Joel", "" ], [ "Shea-Brown", "Eric", "" ] ]
Over repeat presentations of the same stimulus, sensory neurons show variable responses. This "noise" is typically correlated between pairs of cells, and a question with a rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem - neural tuning curves, etc. - held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR) - if noise correlations between pairs of neurons have opposite signs vs. their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity. This generality is true for our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions, under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all.
1606.04519
Theoni Photopoulou Dr
Theoni Photopoulou, Ines M Ferreira, Toshio Kasuya, Peter B Best and Helene Marsh
Evidence for a postreproductive phase in female false killer whales Pseudorca crassidens
22 pages, 5 figures, 9 additional files
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A substantial period of life after reproduction ends, known as postreproductive lifespan (PRLS), is at odds with classical life history theory. Prolonged PRLS has been confirmed in only two non-human mammals, both odontocete cetaceans. We investigate the evidence for PRLS in a third species, the false killer whale, Pseudorca crassidens, using a quantitative measure of PRLS and morphological evidence from reproductive tissue. We examined specimens from false killer whales from combined strandings (South Africa, 1981) and harvest (Japan 1979-80) and found morphological evidence of changes in the activity of the ovaries in relation to age. Ovulation had ceased in 50% of whales over 45 years, and all whales over 55 years old had ovaries classified as postreproductive. We also calculated a measure of PRLS, known as postreproductive representation (PrR), as an indication of the effect of inter-population demographic variability. PrR for the combined sample was 0.14, whereas the mean of the simulated distribution for PrR under the null hypothesis of no PRLS was 0.02. The 99th percentile of the simulated distribution was 0.08 and no simulated value exceeded 0.13. These results suggest that PrR was convincingly different from the measures simulated under the null hypothesis. We found morphological and statistical evidence for PRLS in South African and Japanese pods of false killer whales, suggesting that this species is the third non-human mammal in which this phenomenon has been demonstrated in wild populations. Our estimates for PrR in false killer whales (0.12-0.37) spanned the single values available for the short-finned pilot whale (0.28) and the killer whale (0.22) and are comparable to estimates for historical or hunter-gatherer human populations (0.3-0.47).
[ { "created": "Tue, 14 Jun 2016 19:33:27 GMT", "version": "v1" }, { "created": "Fri, 10 Feb 2017 07:29:13 GMT", "version": "v2" } ]
2017-02-13
[ [ "Photopoulou", "Theoni", "" ], [ "Ferreira", "Ines M", "" ], [ "Kasuya", "Toshio", "" ], [ "Best", "Peter B", "" ], [ "Marsh", "Helene", "" ] ]
A substantial period of life after reproduction ends, known as postreproductive lifespan (PRLS), is at odds with classical life history theory. Prolonged PRLS has been confirmed in only two non-human mammals, both odontocete cetaceans. We investigate the evidence for PRLS in a third species, the false killer whale, Pseudorca crassidens, using a quantitative measure of PRLS and morphological evidence from reproductive tissue. We examined specimens from false killer whales from combined strandings (South Africa, 1981) and harvest (Japan 1979-80) and found morphological evidence of changes in the activity of the ovaries in relation to age. Ovulation had ceased in 50% of whales over 45 years, and all whales over 55 years old had ovaries classified as postreproductive. We also calculated a measure of PRLS, known as postreproductive representation (PrR), as an indication of the effect of inter-population demographic variability. PrR for the combined sample was 0.14, whereas the mean of the simulated distribution for PrR under the null hypothesis of no PRLS was 0.02. The 99th percentile of the simulated distribution was 0.08 and no simulated value exceeded 0.13. These results suggest that PrR was convincingly different from the measures simulated under the null hypothesis. We found morphological and statistical evidence for PRLS in South African and Japanese pods of false killer whales, suggesting that this species is the third non-human mammal in which this phenomenon has been demonstrated in wild populations. Our estimates for PrR in false killer whales (0.12-0.37) spanned the single values available for the short-finned pilot whale (0.28) and the killer whale (0.22) and are comparable to estimates for historical or hunter-gatherer human populations (0.3-0.47).
2111.10719
Marco Pe\~na-Garcia
Marco Pe\~na-Garcia, Francesco Pe\~na-Garcia, Nelson Castro, Walter Cabrera-Febola
Mathematical representation of the structure of neuron-glia networks
Discusssion improved and filiation corrected
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Network representations of the nervous system have been useful for understanding brain phenomena such as perception, motor coordination, and memory. Although brains are composed of both neurons and glial cells, only networks of neurons have been studied so far. Given the emergent role of glial cells in information transmission in the brain, here we develop a mathematical representation of neuron-glial networks: the $\Upsilon$-graph. We proceed by defining isomorphisms for $\Upsilon$-graphs, generalizing the multidigraph isomorphism. Then, we define a function for unnesting an $\Upsilon$-graph, yielding a multidigraph. Additionally, we find that an isomorphism between unnested forms guarantees an isomorphism between their $\Upsilon$-graphs if the matrix equations have only linearly independent columns and are equal after interchanging some rows and columns. Finally, we introduce a novel approach to model the network shape. Our work presents a mathematical framework for working with neuron-glia networks.
[ { "created": "Sun, 21 Nov 2021 02:55:27 GMT", "version": "v1" }, { "created": "Thu, 25 Nov 2021 14:53:29 GMT", "version": "v2" }, { "created": "Thu, 9 Dec 2021 18:02:18 GMT", "version": "v3" } ]
2021-12-10
[ [ "Peña-Garcia", "Marco", "" ], [ "Peña-Garcia", "Francesco", "" ], [ "Castro", "Nelson", "" ], [ "Cabrera-Febola", "Walter", "" ] ]
Network representations of the nervous system have been useful for understanding brain phenomena such as perception, motor coordination, and memory. Although brains are composed of both neurons and glial cells, only networks of neurons have been studied so far. Given the emergent role of glial cells in information transmission in the brain, here we develop a mathematical representation of neuron-glial networks: the $\Upsilon$-graph. We proceed by defining isomorphisms for $\Upsilon$-graphs, generalizing the multidigraph isomorphism. Then, we define a function for unnesting an $\Upsilon$-graph, yielding a multidigraph. Additionally, we find that an isomorphism between unnested forms guarantees an isomorphism between their $\Upsilon$-graphs if the matrix equations have only linearly independent columns and are equal after interchanging some rows and columns. Finally, we introduce a novel approach to model the network shape. Our work presents a mathematical framework for working with neuron-glia networks.
2002.11267
Mario Ignacio Simoy
Mario Ignacio Simoy, Juan Pablo Aparicio
Ross-Macdonald Models: Which one should we use?
null
null
10.1016/j.actatropica.2020.105452
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ross-Macdonald models are the building blocks of most vector-borne disease models. Even for the same disease, different authors use different model formulations, but a study of the dynamical consequences of assuming different hypotheses is missing. In this work we present different formulations of the basic Ross-Macdonald model together with a careful discussion of the assumptions behind each model. The most general model presented is an agent-based model in which arbitrary distributions for the latency and infectious periods of both hosts and vectors are considered. At the population level, we also developed a deterministic Volterra integral equation model that likewise allows arbitrary waiting-time distributions. We compare the model solutions obtained using different distributions for the infectious and latency periods, using statistics such as the epidemic peak or the final epidemic size to characterize the epidemic curves. The basic reproduction number ($R_0$) for each formulation is computed and compared with empirical estimations obtained with the agent-based models. The importance of considering realistic distributions for the latent and infectious periods is highlighted and discussed. We also show that seasonality is a key driver of vector-borne disease dynamics, shaping the epidemic curve and its duration.
[ { "created": "Wed, 26 Feb 2020 02:52:04 GMT", "version": "v1" } ]
2020-07-22
[ [ "Simoy", "Mario Ignacio", "" ], [ "Aparicio", "Juan Pablo", "" ] ]
Ross-Macdonald models are the building blocks of most vector-borne disease models. Even for the same disease, different authors use different model formulations, but a study of the dynamical consequences of assuming different hypotheses is missing. In this work we present different formulations of the basic Ross-Macdonald model together with a careful discussion of the assumptions behind each model. The most general model presented is an agent-based model in which arbitrary distributions for the latency and infectious periods of both hosts and vectors are considered. At the population level, we also developed a deterministic Volterra integral equation model that likewise allows arbitrary waiting-time distributions. We compare the model solutions obtained using different distributions for the infectious and latency periods, using statistics such as the epidemic peak or the final epidemic size to characterize the epidemic curves. The basic reproduction number ($R_0$) for each formulation is computed and compared with empirical estimations obtained with the agent-based models. The importance of considering realistic distributions for the latent and infectious periods is highlighted and discussed. We also show that seasonality is a key driver of vector-borne disease dynamics, shaping the epidemic curve and its duration.
2307.12090
Michael Brocidiacono
Michael Brocidiacono, Konstantin I. Popov, David Ryan Koes, Alexander Tropsha
PLANTAIN: Diffusion-inspired Pose Score Minimization for Fast and Accurate Molecular Docking
Camera-ready submission to ICML CompBio workshop. 5 pages and 1 figure
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Molecular docking aims to predict the 3D pose of a small molecule in a protein binding site. Traditional docking methods predict ligand poses by minimizing a physics-inspired scoring function. Recently, a diffusion model has been proposed that iteratively refines a ligand pose. We combine these two approaches by training a pose scoring function in a diffusion-inspired manner. In our method, PLANTAIN, a neural network is used to develop a very fast pose scoring function. We parameterize a simple scoring function on the fly and use L-BFGS minimization to optimize an initially random ligand pose. Using rigorous benchmarking practices, we demonstrate that our method achieves state-of-the-art performance while running ten times faster than the next-best method. We release PLANTAIN publicly and hope that it improves the utility of virtual screening workflows.
[ { "created": "Sat, 22 Jul 2023 14:41:11 GMT", "version": "v1" }, { "created": "Wed, 26 Jul 2023 01:47:45 GMT", "version": "v2" } ]
2023-07-27
[ [ "Brocidiacono", "Michael", "" ], [ "Popov", "Konstantin I.", "" ], [ "Koes", "David Ryan", "" ], [ "Tropsha", "Alexander", "" ] ]
Molecular docking aims to predict the 3D pose of a small molecule in a protein binding site. Traditional docking methods predict ligand poses by minimizing a physics-inspired scoring function. Recently, a diffusion model has been proposed that iteratively refines a ligand pose. We combine these two approaches by training a pose scoring function in a diffusion-inspired manner. In our method, PLANTAIN, a neural network is used to develop a very fast pose scoring function. We parameterize a simple scoring function on the fly and use L-BFGS minimization to optimize an initially random ligand pose. Using rigorous benchmarking practices, we demonstrate that our method achieves state-of-the-art performance while running ten times faster than the next-best method. We release PLANTAIN publicly and hope that it improves the utility of virtual screening workflows.
2003.10372
Manuel Ciba
Manuel Ciba, Robert Bestel, Christoph Nick, Guilherme Ferraz de Arruda, Thomas Peron, Comin C\'esar Henrique, Luciano da Fontoura Costa, Francisco Aparecido Rodrigues, Christiane Thielemann
Comparison of different spike train synchrony measures regarding their robustness to erroneous data from bicuculline induced epileptiform activity
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As synchronized activity is associated with basic brain functions and pathological states, spike train synchrony has become an important measure to analyze experimental neuronal data. Many different measures of spike train synchrony have been proposed, but there is no gold standard allowing for comparison of results between different experiments. This work aims to provide guidance on which synchrony measure is best suited to quantify the effect of epileptiform-inducing substances (e.g. bicuculline (BIC)) in in vitro neuronal spike train data. Spike train data from recordings are likely to suffer from erroneous spike detection, such as missed spikes (false negatives) or noise (false positives). Therefore, different time-scale dependent (cross-correlation, mutual information, spike time tiling coefficient) and time-scale independent (Spike-contrast, phase synchronization, A-SPIKE-synchronization, A-ISI-distance, ARI-SPIKE-distance) synchrony measures were compared in terms of their robustness to erroneous spike trains. Based on the in silico manipulated data, Spike-contrast was the only measure robust to both false-negative and false-positive spikes. Analyzing the experimental data set revealed that all measures were able to capture the effect of BIC in a statistically significant way, with Spike-contrast showing the highest statistical significance even at low spike detection thresholds. In summary, we suggest using Spike-contrast to complement established synchrony measures, as it is time-scale independent and robust to erroneous spike trains.
[ { "created": "Mon, 23 Mar 2020 16:37:06 GMT", "version": "v1" } ]
2020-03-24
[ [ "Ciba", "Manuel", "" ], [ "Bestel", "Robert", "" ], [ "Nick", "Christoph", "" ], [ "de Arruda", "Guilherme Ferraz", "" ], [ "Peron", "Thomas", "" ], [ "Henrique", "Comin César", "" ], [ "Costa", "Luciano da Fontoura", "" ], [ "Rodrigues", "Francisco Aparecido", "" ], [ "Thielemann", "Christiane", "" ] ]
As synchronized activity is associated with basic brain functions and pathological states, spike train synchrony has become an important measure to analyze experimental neuronal data. Many different measures of spike train synchrony have been proposed, but there is no gold standard allowing for comparison of results between different experiments. This work aims to provide guidance on which synchrony measure is best suited to quantify the effect of epileptiform-inducing substances (e.g. bicuculline (BIC)) in in vitro neuronal spike train data. Spike train data from recordings are likely to suffer from erroneous spike detection, such as missed spikes (false negatives) or noise (false positives). Therefore, different time-scale dependent (cross-correlation, mutual information, spike time tiling coefficient) and time-scale independent (Spike-contrast, phase synchronization, A-SPIKE-synchronization, A-ISI-distance, ARI-SPIKE-distance) synchrony measures were compared in terms of their robustness to erroneous spike trains. Based on the in silico manipulated data, Spike-contrast was the only measure robust to both false-negative and false-positive spikes. Analyzing the experimental data set revealed that all measures were able to capture the effect of BIC in a statistically significant way, with Spike-contrast showing the highest statistical significance even at low spike detection thresholds. In summary, we suggest using Spike-contrast to complement established synchrony measures, as it is time-scale independent and robust to erroneous spike trains.
1604.02412
Ga\"etan Benoit
Ga\"etan Benoit, Pierre Peterlongo, Mahendra Mariadassou, Erwan Drezen, Sophie Schbath, Dominique Lavenier, Claire Lemaitre
Multiple Comparative Metagenomics using Multiset k-mer Counting
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background. Large-scale metagenomic projects aim to extract biodiversity knowledge across different environmental conditions. Current methods for comparing microbial communities face important limitations. Those based on taxonomical or functional assignation rely on a small subset of the sequences that can be associated to known organisms. On the other hand, de novo methods, which compare the whole sets of sequences, either do not scale up on ambitious metagenomic projects or do not provide precise and exhaustive results. Methods. These limitations motivated the development of a new de novo metagenomic comparative method, called Simka. This method computes a large collection of standard ecological distances by replacing species counts with k-mer counts. Simka scales up to today's metagenomic projects thanks to a new parallel k-mer counting strategy on multiple datasets. Results. Experiments on public Human Microbiome Project datasets demonstrate that Simka captures the essential underlying biological structure. Simka was able to compute in a few hours both qualitative and quantitative ecological distances on hundreds of metagenomic samples (690 samples, 32 billion reads). We also demonstrate that analyzing metagenomes at the k-mer level is highly correlated with extremely precise de novo comparison techniques that rely on an all-versus-all sequence alignment strategy or on taxonomic profiling.
[ { "created": "Fri, 8 Apr 2016 17:59:27 GMT", "version": "v1" }, { "created": "Wed, 27 Jul 2016 14:45:59 GMT", "version": "v2" }, { "created": "Fri, 23 Sep 2016 09:12:32 GMT", "version": "v3" } ]
2016-09-26
[ [ "Benoit", "Gaëtan", "" ], [ "Peterlongo", "Pierre", "" ], [ "Mariadassou", "Mahendra", "" ], [ "Drezen", "Erwan", "" ], [ "Schbath", "Sophie", "" ], [ "Lavenier", "Dominique", "" ], [ "Lemaitre", "Claire", "" ] ]
Background. Large-scale metagenomic projects aim to extract biodiversity knowledge across different environmental conditions. Current methods for comparing microbial communities face important limitations. Those based on taxonomical or functional assignation rely on a small subset of the sequences that can be associated to known organisms. On the other hand, de novo methods, which compare the whole sets of sequences, either do not scale up on ambitious metagenomic projects or do not provide precise and exhaustive results. Methods. These limitations motivated the development of a new de novo metagenomic comparative method, called Simka. This method computes a large collection of standard ecological distances by replacing species counts with k-mer counts. Simka scales up to today's metagenomic projects thanks to a new parallel k-mer counting strategy on multiple datasets. Results. Experiments on public Human Microbiome Project datasets demonstrate that Simka captures the essential underlying biological structure. Simka was able to compute in a few hours both qualitative and quantitative ecological distances on hundreds of metagenomic samples (690 samples, 32 billion reads). We also demonstrate that analyzing metagenomes at the k-mer level is highly correlated with extremely precise de novo comparison techniques that rely on an all-versus-all sequence alignment strategy or on taxonomic profiling.
q-bio/0311011
Jie Liang
Yan Yuan Tseng and Jie Liang
Are residues in a protein folding nucleus evolutionarily conserved?
15 pages, 4 figures, and 1 table. Accepted by J. Mol. Biol
null
null
null
q-bio.BM
null
It is important to understand how protein folding and evolution influence each other. Several studies based on entropy calculations correlating experimental measurements of residue participation in the folding nucleus with sequence conservation have reached different conclusions. Here we report an analysis of folding-nucleus conservation using an evolutionary model alternative to entropy-based approaches. We employ a continuous-time Markov model of codon substitution to distinguish mutations fixed by evolution from mutations fixed by chance. This model takes into account bias in codon frequency, bias favoring transition over transversion, as well as explicit phylogenetic information. We measure selection pressure using the ratio $\omega$ of synonymous vs. non-synonymous substitutions at individual residue sites. The $\omega$-values are estimated using the {\sc Paml} method, a maximum-likelihood estimator. Our results show that there is little correlation between the extent of kinetic participation in the protein folding nucleus as measured by the experimental $\phi$-value and selection pressure as measured by the $\omega$-value. In addition, two randomization tests failed to show that folding nucleus residues are significantly more conserved than the whole protein. These results suggest that at the level of codon substitution, there is no indication that folding nucleus residues are significantly more conserved than other residues. We further reconstruct candidate ancestral residues of the folding nucleus and suggest possible test tube mutation studies of the ancient folding nucleus.
[ { "created": "Mon, 10 Nov 2003 05:26:15 GMT", "version": "v1" } ]
2007-05-23
[ [ "Tseng", "Yan Yuan", "" ], [ "Liang", "Jie", "" ] ]
It is important to understand how protein folding and evolution influence each other. Several studies based on entropy calculations correlating experimental measurements of residue participation in the folding nucleus with sequence conservation have reached different conclusions. Here we report an analysis of folding-nucleus conservation using an evolutionary model alternative to entropy-based approaches. We employ a continuous-time Markov model of codon substitution to distinguish mutations fixed by evolution from mutations fixed by chance. This model takes into account bias in codon frequency, bias favoring transition over transversion, as well as explicit phylogenetic information. We measure selection pressure using the ratio $\omega$ of synonymous vs. non-synonymous substitutions at individual residue sites. The $\omega$-values are estimated using the {\sc Paml} method, a maximum-likelihood estimator. Our results show that there is little correlation between the extent of kinetic participation in the protein folding nucleus as measured by the experimental $\phi$-value and selection pressure as measured by the $\omega$-value. In addition, two randomization tests failed to show that folding nucleus residues are significantly more conserved than the whole protein. These results suggest that at the level of codon substitution, there is no indication that folding nucleus residues are significantly more conserved than other residues. We further reconstruct candidate ancestral residues of the folding nucleus and suggest possible test tube mutation studies of the ancient folding nucleus.
1102.5141
Joshua M. Deutsch
J. M. Deutsch, M. E. Brunner, and William M. Saxton
Analysis of microtubule motion due to drag from kinesin walkers
14 pages, 6 figures
null
null
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze the nonlinear waves that propagate on a microtubule that is tethered at its minus end due to kinesin walking on it, as is seen during the fluid mixing caused by cytoplasmic streaming in Drosophila oocytes. The model we use assumes that the microtubule can be modeled as an elastic string in a viscous medium. The effect of the kinesin is to apply a force tangential to the microtubule, and we also consider the addition of a uniform cytoplasmic velocity field. We show that travelling wave solutions exist and analyze their properties. There exist scale-invariant families of solutions, and solutions can exist that are flat or helical. The relationship between the period and wavelength is obtained by both analytic and numerical means. Numerical implementation of the equation of motion verifies our analytical predictions.
[ { "created": "Fri, 25 Feb 2011 02:40:52 GMT", "version": "v1" }, { "created": "Tue, 10 Jul 2012 02:43:20 GMT", "version": "v2" } ]
2012-07-11
[ [ "Deutsch", "J. M.", "" ], [ "Brunner", "M. E.", "" ], [ "Saxton", "William M.", "" ] ]
We analyze the nonlinear waves that propagate on a microtubule that is tethered at its minus end due to kinesin walking on it, as is seen during the fluid mixing caused by cytoplasmic streaming in Drosophila oocytes. The model we use assumes that the microtubule can be modeled as an elastic string in a viscous medium. The effect of the kinesin is to apply a force tangential to the microtubule, and we also consider the addition of a uniform cytoplasmic velocity field. We show that travelling wave solutions exist and analyze their properties. There exist scale-invariant families of solutions, and solutions can exist that are flat or helical. The relationship between the period and wavelength is obtained by both analytic and numerical means. Numerical implementation of the equation of motion verifies our analytical predictions.
1209.1444
Paul Gardner
Marc P. Hoeppner, Paul P. Gardner, Anthony M. Poole
Comparative Analysis of RNA Families Reveals Distinct Repertoires for Each Domain of Life
47 pages, 4 main figures, 3 supplementary figures, 4 supplementary tables. Submitted to PLOS Computational Biology
PLoS Comput Biol. 2012 8(11):e1002752
10.1371/journal.pcbi.1002752
null
q-bio.GN q-bio.BM q-bio.PE
http://creativecommons.org/licenses/by/3.0/
The RNA world hypothesis, that RNA genomes and catalysts preceded DNA genomes and genetically-encoded protein catalysts, has been central to models for the early evolution of life on Earth. A key part of such models is continuity between the earliest stages in the evolution of life and the RNA repertoires of extant lineages. Some assessments seem consistent with a diverse RNA world, yet direct continuity between modern RNAs and an RNA world has not been demonstrated for the majority of RNA families, and, anecdotally, many RNA functions appear restricted in their distribution. Despite much discussion of the possible antiquity of RNA families, no systematic analyses of RNA family distribution have been performed. To chart the broad evolutionary history of known RNA families, we performed comparative genomic analysis of over 3 million RNA annotations spanning 1446 families from the Rfam 10 database. We report that 99% of known RNA families are restricted to a single domain of life, revealing discrete repertoires for each domain. For the 1% of RNA families/clans present in more than one domain, over half show evidence of horizontal gene transfer, and the rest show a vertical trace, indicating the presence of a complex protein synthesis machinery in the Last Universal Common Ancestor (LUCA) and consistent with the evolutionary history of the most ancient protein-coding genes. However, with limited interdomain transfer and few RNA families exhibiting demonstrable antiquity as predicted under RNA world continuity, our results indicate that the majority of modern cellular RNA repertoires have primarily evolved in a domain-specific manner.
[ { "created": "Fri, 7 Sep 2012 03:46:09 GMT", "version": "v1" } ]
2015-03-13
[ [ "Hoeppner", "Marc P.", "" ], [ "Gardner", "Paul P.", "" ], [ "Poole", "Anthony M.", "" ] ]
The RNA world hypothesis, that RNA genomes and catalysts preceded DNA genomes and genetically-encoded protein catalysts, has been central to models for the early evolution of life on Earth. A key part of such models is continuity between the earliest stages in the evolution of life and the RNA repertoires of extant lineages. Some assessments seem consistent with a diverse RNA world, yet direct continuity between modern RNAs and an RNA world has not been demonstrated for the majority of RNA families, and, anecdotally, many RNA functions appear restricted in their distribution. Despite much discussion of the possible antiquity of RNA families, no systematic analyses of RNA family distribution have been performed. To chart the broad evolutionary history of known RNA families, we performed comparative genomic analysis of over 3 million RNA annotations spanning 1446 families from the Rfam 10 database. We report that 99% of known RNA families are restricted to a single domain of life, revealing discrete repertoires for each domain. For the 1% of RNA families/clans present in more than one domain, over half show evidence of horizontal gene transfer, and the rest show a vertical trace, indicating the presence of a complex protein synthesis machinery in the Last Universal Common Ancestor (LUCA) and consistent with the evolutionary history of the most ancient protein-coding genes. However, with limited interdomain transfer and few RNA families exhibiting demonstrable antiquity as predicted under RNA world continuity, our results indicate that the majority of modern cellular RNA repertoires have primarily evolved in a domain-specific manner.
2003.08704
Barbara Bravi
Barbara Bravi, Katy J. Rubin and Peter Sollich
Systematic model reduction captures the dynamics of extrinsic noise in biochemical subnetworks
24 pages, 10 figures
null
10.1063/5.0008304
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the general problem of describing the dynamics of subnetworks of larger biochemical reaction networks, e.g. protein interaction networks involving complex formation and dissociation reactions. We propose the use of model reduction strategies to understand the 'extrinsic' sources of stochasticity arising from the rest of the network. Our approaches are based on subnetwork dynamical equations derived by projection methods and by path integrals. The results provide a principled derivation of the different components of the extrinsic noise that is observed experimentally in cellular biochemical reactions, over and above the intrinsic noise from the stochasticity of biochemical events in the subnetwork. We explore several intermediate approximations to assess systematically the relative importance of different extrinsic noise components, including initial transients, long-time plateaus, temporal correlations, multiplicative noise terms and nonlinear noise propagation. The best approximations achieve excellent accuracy in quantitative tests on a simple protein network and on the epidermal growth factor receptor signalling network.
[ { "created": "Thu, 19 Mar 2020 11:54:50 GMT", "version": "v1" }, { "created": "Wed, 1 Jul 2020 22:08:49 GMT", "version": "v2" } ]
2020-08-26
[ [ "Bravi", "Barbara", "" ], [ "Rubin", "Katy J.", "" ], [ "Sollich", "Peter", "" ] ]
We consider the general problem of describing the dynamics of subnetworks of larger biochemical reaction networks, e.g. protein interaction networks involving complex formation and dissociation reactions. We propose the use of model reduction strategies to understand the 'extrinsic' sources of stochasticity arising from the rest of the network. Our approaches are based on subnetwork dynamical equations derived by projection methods and by path integrals. The results provide a principled derivation of the different components of the extrinsic noise that is observed experimentally in cellular biochemical reactions, over and above the intrinsic noise from the stochasticity of biochemical events in the subnetwork. We explore several intermediate approximations to assess systematically the relative importance of different extrinsic noise components, including initial transients, long-time plateaus, temporal correlations, multiplicative noise terms and nonlinear noise propagation. The best approximations achieve excellent accuracy in quantitative tests on a simple protein network and on the epidermal growth factor receptor signalling network.
q-bio/0312019
Nadav Kashtan
N. Kashtan, S. Itzkovitz, R. Milo, U. Alon
Topological Generalizations of network motifs
null
null
10.1103/PhysRevE.70.031909
null
q-bio.MN cond-mat.stat-mech
null
Biological and technological networks contain patterns, termed network motifs, which occur far more often than in randomized networks. Network motifs were suggested to be elementary building blocks that carry out key functions in the network. It is of interest to understand how network motifs combine to form larger structures. To address this, we present a systematic approach to define 'motif generalizations': families of motifs of different sizes that share a common architectural theme. To define motif generalizations, we first define 'roles' in a subgraph according to structural equivalence. For example, the feedforward loop triad, a motif in transcription, neuronal and some electronic networks, has three roles, an input node, an output node and an internal node. The roles are used to define possible generalizations of the motif. The feedforward loop can have three simple generalizations, based on replicating each of the three roles and their connections. We present algorithms for efficiently detecting motif generalizations. We find that the transcription networks of bacteria and yeast display only one of the three generalizations, the multi-output feedforward generalization. In contrast, the neuronal network of \emph{C. elegans} mainly displays the multi-input generalization. Forward-logic electronic circuits display a multi-input, multi-output hybrid. Thus, networks which share a common motif can have very different generalizations of that motif. Using mathematical modelling, we describe the information processing functions of the different motif generalizations in transcription, neuronal and electronic networks.
[ { "created": "Mon, 15 Dec 2003 13:24:26 GMT", "version": "v1" }, { "created": "Mon, 8 Mar 2004 18:27:44 GMT", "version": "v2" }, { "created": "Sun, 16 May 2004 13:53:45 GMT", "version": "v3" } ]
2009-11-10
[ [ "Kashtan", "N.", "" ], [ "Itzkovitz", "S.", "" ], [ "Milo", "R.", "" ], [ "Alon", "U.", "" ] ]
Biological and technological networks contain patterns, termed network motifs, which occur far more often than in randomized networks. Network motifs were suggested to be elementary building blocks that carry out key functions in the network. It is of interest to understand how network motifs combine to form larger structures. To address this, we present a systematic approach to define 'motif generalizations': families of motifs of different sizes that share a common architectural theme. To define motif generalizations, we first define 'roles' in a subgraph according to structural equivalence. For example, the feedforward loop triad, a motif in transcription, neuronal and some electronic networks, has three roles, an input node, an output node and an internal node. The roles are used to define possible generalizations of the motif. The feedforward loop can have three simple generalizations, based on replicating each of the three roles and their connections. We present algorithms for efficiently detecting motif generalizations. We find that the transcription networks of bacteria and yeast display only one of the three generalizations, the multi-output feedforward generalization. In contrast, the neuronal network of \emph{C. elegans} mainly displays the multi-input generalization. Forward-logic electronic circuits display a multi-input, multi-output hybrid. Thus, networks which share a common motif can have very different generalizations of that motif. Using mathematical modelling, we describe the information processing functions of the different motif generalizations in transcription, neuronal and electronic networks.
2005.13073
John Lagergren
John H. Lagergren, John T. Nardini, Ruth E. Baker, Matthew J. Simpson, Kevin B. Flores
Biologically-informed neural networks guide mechanistic modeling from sparse experimental data
null
null
10.1371/journal.pcbi.1008462
null
q-bio.QM math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biologically-informed neural networks (BINNs), an extension of physics-informed neural networks [1], are introduced and used to discover the underlying dynamics of biological systems from sparse experimental data. In the present work, BINNs are trained in a supervised learning framework to approximate in vitro cell biology assay experiments while respecting a generalized form of the governing reaction-diffusion partial differential equation (PDE). By allowing the diffusion and reaction terms to be multilayer perceptrons (MLPs), the nonlinear forms of these terms can be learned while simultaneously converging to the solution of the governing PDE. Further, the trained MLPs are used to guide the selection of biologically interpretable mechanistic forms of the PDE terms which provides new insights into the biological and physical mechanisms that govern the dynamics of the observed system. The method is evaluated on sparse real-world data from wound healing assays with varying initial cell densities [2].
[ { "created": "Tue, 26 May 2020 22:41:12 GMT", "version": "v1" } ]
2021-01-27
[ [ "Lagergren", "John H.", "" ], [ "Nardini", "John T.", "" ], [ "Baker", "Ruth E.", "" ], [ "Simpson", "Matthew J.", "" ], [ "Flores", "Kevin B.", "" ] ]
Biologically-informed neural networks (BINNs), an extension of physics-informed neural networks [1], are introduced and used to discover the underlying dynamics of biological systems from sparse experimental data. In the present work, BINNs are trained in a supervised learning framework to approximate in vitro cell biology assay experiments while respecting a generalized form of the governing reaction-diffusion partial differential equation (PDE). By allowing the diffusion and reaction terms to be multilayer perceptrons (MLPs), the nonlinear forms of these terms can be learned while simultaneously converging to the solution of the governing PDE. Further, the trained MLPs are used to guide the selection of biologically interpretable mechanistic forms of the PDE terms which provides new insights into the biological and physical mechanisms that govern the dynamics of the observed system. The method is evaluated on sparse real-world data from wound healing assays with varying initial cell densities [2].
1902.02750
J. F. Rojas
Gabriel Guarneros B., Cristian P\'erez A., Andrea Montiel P., J. F. Rojas
Identification of epileptic regions from electroencephalographic data: Feigenbaum graphs
null
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diagnosing epilepsy is a problem of crucial importance, so analysing EEG data is of great value in supporting this diagnosis. By assembling Feigenbaum graphs from EEG signals and calculating their average clustering, average degree, and average shortest path length, we characterize two different data sets, each consisting of focal and non-focal activity, from which epileptic regions could be identified. This method yields good results for identifying data from epileptic zones, suggesting our approach could be used to aid physicians in diagnosing epilepsy from EEG data.
[ { "created": "Thu, 7 Feb 2019 17:52:31 GMT", "version": "v1" } ]
2019-02-08
[ [ "B.", "Gabriel Guarneros", "" ], [ "A.", "Cristian Pérez", "" ], [ "P.", "Andrea Montiel", "" ], [ "Rojas", "J. F.", "" ] ]
Diagnosing epilepsy is a problem of crucial importance, so analysing EEG data is of great value in supporting this diagnosis. By assembling Feigenbaum graphs from EEG signals and calculating their average clustering, average degree, and average shortest path length, we characterize two different data sets, each consisting of focal and non-focal activity, from which epileptic regions could be identified. This method yields good results for identifying data from epileptic zones, suggesting our approach could be used to aid physicians in diagnosing epilepsy from EEG data.
1904.10924
Muhammad Usman Sadiq
Muhammad Usman Sadiq, Diana Svaldi, Trey Shenk, Evan Breedlove, Victoria Poole, Greg Tamer, Kausar Abbas, Thomas Talavage
Alterations in Structural Correlation Networks with Prior Concussion in Collision-Sport Athletes
null
Originally published at: Annual meeting of the Organization for Human Brain Mapping, OHBM 2018
null
null
q-bio.QM cs.NE q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several studies have used structural correlation networks, derived from anatomical covariance of brain regions, to analyze neurologic changes associated with multiple sclerosis, schizophrenia and breast cancer [1][2]. Graph-theoretical analyses of human brain structural networks have consistently shown the characteristic of small-worldness that reflects a network with both high segregation and high integration. A large neuroimaging literature on football players, with and without history of concussion, has shown both functional and anatomical changes. Here we use graph-based topological properties of anatomical correlation networks to study the effect of prior concussion in collision-sport athletes. 40 high school collision-sport athletes (23 male football, 17 female soccer; CSA) without self-reported history of concussion (HOC-), 18 athletes (13 male football, 5 female soccer) with self-reported history of concussion (HOC+) and 24 healthy controls (19 male, 5 female; CN) participated in imaging sessions before the beginning of a competition season. The extracted residual volumes for each group were used for building the correlation networks, and their small-worldness is calculated. The small-worldness of CSA without prior history of concussion is significantly greater than that of controls. CSA with prior history have significantly higher (vs. 95% confidence interval) small-worldness compared to HOC+, over a range of network densities. The longer path lengths in HOC+ group could indicate disrupted neuronal integration relative to healthy controls.
[ { "created": "Thu, 18 Apr 2019 17:34:17 GMT", "version": "v1" } ]
2019-04-25
[ [ "Sadiq", "Muhammad Usman", "" ], [ "Svaldi", "Diana", "" ], [ "Shenk", "Trey", "" ], [ "Breedlove", "Evan", "" ], [ "Poole", "Victoria", "" ], [ "Tamer", "Greg", "" ], [ "Abbas", "Kausar", "" ], [ "Talavage", "Thomas", "" ] ]
Several studies have used structural correlation networks, derived from anatomical covariance of brain regions, to analyze neurologic changes associated with multiple sclerosis, schizophrenia and breast cancer [1][2]. Graph-theoretical analyses of human brain structural networks have consistently shown the characteristic of small-worldness that reflects a network with both high segregation and high integration. A large neuroimaging literature on football players, with and without history of concussion, has shown both functional and anatomical changes. Here we use graph-based topological properties of anatomical correlation networks to study the effect of prior concussion in collision-sport athletes. 40 high school collision-sport athletes (23 male football, 17 female soccer; CSA) without self-reported history of concussion (HOC-), 18 athletes (13 male football, 5 female soccer) with self-reported history of concussion (HOC+) and 24 healthy controls (19 male, 5 female; CN) participated in imaging sessions before the beginning of a competition season. The extracted residual volumes for each group were used for building the correlation networks, and their small-worldness is calculated. The small-worldness of CSA without prior history of concussion is significantly greater than that of controls. CSA with prior history have significantly higher (vs. 95% confidence interval) small-worldness compared to HOC+, over a range of network densities. The longer path lengths in HOC+ group could indicate disrupted neuronal integration relative to healthy controls.
0706.4219
Brigitte Gaillard
Audrey Bergouignan (DEPE-IPHC), Dale A Schoeller, Sylvie Normand, Guillemette Gauquelin-Koch, Martine Laville, Timothy Shriver, Michel Desage, Yvon Le Maho (DEPE-IPHC), Hiroshi Ohshima, Claude Gharib, St\'ephane Blanc (DEPE-IPHC)
Effect of physical inactivity on the oxidation of saturated and monounsaturated dietary Fatty acids: results of a randomized trial
null
PLoS Clin Trials 1, 5 (2006) e27
10.1371/journal.pctr.0010027
null
q-bio.PE
null
OBJECTIVES: Changes in the way dietary fat is metabolized can be considered causative in obesity. The role of sedentary behavior in this defect has not been determined. We hypothesized that physical inactivity partitions dietary fats toward storage and that a resistance exercise training program mitigates storage.
[ { "created": "Thu, 28 Jun 2007 12:56:08 GMT", "version": "v1" } ]
2007-06-29
[ [ "Bergouignan", "Audrey", "", "DEPE-IPHC" ], [ "Schoeller", "Dale A", "" ], [ "Normand", "Sylvie", "" ], [ "Gauquelin-Koch", "Guillemette", "" ], [ "Laville", "Martine", "" ], [ "Shriver", "Timothy", "" ], [ "Desage", "Michel", "" ], [ "Maho", "Yvon Le", "", "DEPE-IPHC" ], [ "Ohshima", "Hiroshi", "" ], [ "Gharib", "Claude", "" ], [ "Blanc", "Stéphane", "", "DEPE-IPHC" ] ]
OBJECTIVES: Changes in the way dietary fat is metabolized can be considered causative in obesity. The role of sedentary behavior in this defect has not been determined. We hypothesized that physical inactivity partitions dietary fats toward storage and that a resistance exercise training program mitigates storage.
1503.03928
Michele Castellana
Michele Castellana, Sophia Hsin-Jung Li, Ned S. Wingreen
Spatial organization of bacterial transcription and translation
null
Proc. Natl. Acad. Sci. U.S.A. 113(33), 9286 (2016)
10.1073/pnas.1604995113
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In bacteria such as $\textit{Escherichia coli}$, DNA is compacted into a nucleoid near the cell center, while ribosomes$-$molecular complexes that translate messenger RNAs (mRNAs) into proteins$-$are mainly localized at the poles. We study the impact of this spatial organization using a minimal reaction-diffusion model for the cellular transcriptional-translational machinery. Our model predicts that $\sim 90\%$ of mRNAs are segregated to the poles and reveals a "circulation" of ribosomes driven by the flux of mRNAs, from synthesis in the nucleoid to degradation at the poles. To address the existence of non-specific, transient interactions between ribosomes and mRNAs, we developed a novel method to efficiently incorporate such transient interactions into reaction-diffusion equations, which allowed us to quantify the biological implications of such non-specific interactions, e.g. for ribosome efficiency.
[ { "created": "Fri, 13 Mar 2015 01:53:41 GMT", "version": "v1" }, { "created": "Thu, 29 Oct 2015 12:36:04 GMT", "version": "v2" }, { "created": "Mon, 29 Aug 2016 14:38:16 GMT", "version": "v3" } ]
2016-08-30
[ [ "Castellana", "Michele", "" ], [ "Li", "Sophia Hsin-Jung", "" ], [ "Wingreen", "Ned S.", "" ] ]
In bacteria such as $\textit{Escherichia coli}$, DNA is compacted into a nucleoid near the cell center, while ribosomes$-$molecular complexes that translate messenger RNAs (mRNAs) into proteins$-$are mainly localized at the poles. We study the impact of this spatial organization using a minimal reaction-diffusion model for the cellular transcriptional-translational machinery. Our model predicts that $\sim 90\%$ of mRNAs are segregated to the poles and reveals a "circulation" of ribosomes driven by the flux of mRNAs, from synthesis in the nucleoid to degradation at the poles. To address the existence of non-specific, transient interactions between ribosomes and mRNAs, we developed a novel method to efficiently incorporate such transient interactions into reaction-diffusion equations, which allowed us to quantify the biological implications of such non-specific interactions, e.g. for ribosome efficiency.
1804.03135
Mehdi Fardmanesh Prof.
Afarin Aghassizadeh, Mohammad Reza Nematollahi, Iman Mirzaie, Mehdi Fardmanesh
Blood Glucose Measurement Based on Infra-Red Spectroscopy
null
null
null
null
q-bio.QM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An algorithm based on PLS regression has been developed and optimized for measuring blood glucose level using the infra-red transmission spectrum of blood samples. A set of blood samples were tagged with their glucose concentration using an accurate invasive glucometer and analyzed using a standard FTIR spectrometer. Using the developed algorithm, the results of the FTIR spectroscopy of the samples were analyzed to find the glucose concentration in the samples. The obtained glucose concentration by the algorithm were in good agreement with the results obtained by the standard glucometer, and the mean estimation error was 7 mg/dL. This error is in the range of available commercial invasive meters.
[ { "created": "Sun, 8 Apr 2018 10:11:28 GMT", "version": "v1" } ]
2018-04-11
[ [ "Aghassizadeh", "Afarin", "" ], [ "Nematollahi", "Mohammad Reza", "" ], [ "Mirzaie", "Iman", "" ], [ "Fardmanesh", "Mehdi", "" ] ]
An algorithm based on PLS regression has been developed and optimized for measuring blood glucose level using the infra-red transmission spectrum of blood samples. A set of blood samples were tagged with their glucose concentration using an accurate invasive glucometer and analyzed using a standard FTIR spectrometer. Using the developed algorithm, the results of the FTIR spectroscopy of the samples were analyzed to find the glucose concentration in the samples. The obtained glucose concentration by the algorithm were in good agreement with the results obtained by the standard glucometer, and the mean estimation error was 7 mg/dL. This error is in the range of available commercial invasive meters.
1109.0423
Antti Niemi
Martin Lundgren and Antti J. Niemi
Covalent bond symmetry breaking and protein secondary structure
5 pages 3 figs
null
null
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Both symmetry and organized breaking of symmetry have a pivotal r\^ole in our understanding of structure and pattern formation in physical systems, including the origin of mass in the Universe and the chiral structure of biological macromolecules. Here we report on a new symmetry breaking phenomenon that takes place in all biologically active proteins, thus this symmetry breaking relates to the inception of life. The unbroken symmetry determines the covalent bond geometry of a sp3 hybridized carbon atom. It dictates the tetrahedral architecture of atoms around the central carbon of an amino acid. Here we show that in a biologically active protein this symmetry becomes broken. Moreover, we show that the pattern of symmetry breaking is in a direct correspondence with the local secondary structure of the folded protein.
[ { "created": "Fri, 2 Sep 2011 12:16:16 GMT", "version": "v1" } ]
2011-09-05
[ [ "Lundgren", "Martin", "" ], [ "Niemi", "Antti J.", "" ] ]
Both symmetry and organized breaking of symmetry have a pivotal r\^ole in our understanding of structure and pattern formation in physical systems, including the origin of mass in the Universe and the chiral structure of biological macromolecules. Here we report on a new symmetry breaking phenomenon that takes place in all biologically active proteins, thus this symmetry breaking relates to the inception of life. The unbroken symmetry determines the covalent bond geometry of a sp3 hybridized carbon atom. It dictates the tetrahedral architecture of atoms around the central carbon of an amino acid. Here we show that in a biologically active protein this symmetry becomes broken. Moreover, we show that the pattern of symmetry breaking is in a direct correspondence with the local secondary structure of the folded protein.
q-bio/0601002
Susan Atlas
Erik Andries (1 and 2), Thomas Hagstrom (1), Susan R. Atlas (3), and Cheryl Willman (2 and 4) ((1) Department of Mathematics and Statistics, (2) Department of Pathology, (3) Center for Advanced Studies and Department of Physics and Astronomy, (4) Cancer Research and Treatment Center, University of New Mexico)
Regularization Strategies for Hyperplane Classifiers: Application to Cancer Classification with Gene Expression Data
22 pages, 3 figures; uses journal's ws-jbcb.cls; submitted to Journal of Bioinformatics and Computational Biology
null
null
HPC<at>UNM TR# 2005-02
q-bio.GN
null
Linear discrimination, from the point of view of numerical linear algebra, can be treated as solving an ill-posed system of linear equations. In order to generate a solution that is robust in the presence of noise, these problems require regularization. Here, we examine the ill-posedness involved in the linear discrimination of cancer gene expression data with respect to outcome and tumor subclasses. We show that a filter factor representation, based upon Singular Value Decomposition, yields insight into the numerical ill-posedness of the hyperplane-based separation when applied to gene expression data. We also show that this representation yields useful diagnostic tools for guiding the selection of classifier parameters, thus leading to improved performance.
[ { "created": "Sun, 1 Jan 2006 06:24:23 GMT", "version": "v1" } ]
2007-05-23
[ [ "Andries", "Erik", "", "1 and 2" ], [ "Hagstrom", "Thomas", "", "1" ], [ "Atlas", "Susan R.", "", "3" ], [ "Willman", "Cheryl", "", "2 and 4" ] ]
Linear discrimination, from the point of view of numerical linear algebra, can be treated as solving an ill-posed system of linear equations. In order to generate a solution that is robust in the presence of noise, these problems require regularization. Here, we examine the ill-posedness involved in the linear discrimination of cancer gene expression data with respect to outcome and tumor subclasses. We show that a filter factor representation, based upon Singular Value Decomposition, yields insight into the numerical ill-posedness of the hyperplane-based separation when applied to gene expression data. We also show that this representation yields useful diagnostic tools for guiding the selection of classifier parameters, thus leading to improved performance.
1312.7012
Diana Fusco
Diana Fusco, Timothy J. Barnum, Andrew E. Bruno, Joseph R. Luft, Edward H. Snell, Sayan Mukherjee, Patrick Charbonneau
Statistical analysis of crystallization database links protein physico-chemical features with crystallization mechanisms
14 pages, 8 figures, 3 tables. Accepted in Plos One
PLoS ONE 9(7): e101123 (2014)
10.1371/journal.pone.0101123
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
X-ray crystallography is the predominant method for obtaining atomic-scale information about biological macromolecules. Despite the success of the technique, obtaining well diffracting crystals still critically limits going from protein to structure. In practice, the crystallization process proceeds through knowledge-informed empiricism. Better physico-chemical understanding remains elusive because of the large number of variables involved, hence little guidance is available to systematically identify solution conditions that promote crystallization. To help determine relationships between macromolecular properties and their crystallization propensity, we have trained statistical models on samples for 182 proteins supplied by the Northeast Structural Genomics consortium. Gaussian processes, which capture trends beyond the reach of linear statistical models, distinguish between two main physico-chemical mechanisms driving crystallization. One is characterized by low levels of side chain entropy and has been extensively reported in the literature. The other identifies specific electrostatic interactions not previously described in the crystallization context. Because evidence for two distinct mechanisms can be gleaned both from crystal contacts and from solution conditions leading to successful crystallization, the model offers future avenues for optimizing crystallization screens based on partial structural information. The availability of crystallization data coupled with structural outcomes analyzed through state-of-the-art statistical models may thus guide macromolecular crystallization toward a more rational basis.
[ { "created": "Wed, 25 Dec 2013 19:45:28 GMT", "version": "v1" }, { "created": "Mon, 9 Jun 2014 17:07:06 GMT", "version": "v2" } ]
2014-07-14
[ [ "Fusco", "Diana", "" ], [ "Barnum", "Timothy J.", "" ], [ "Bruno", "Andrew E.", "" ], [ "Luft", "Joseph R.", "" ], [ "Snell", "Edward H.", "" ], [ "Mukherjee", "Sayan", "" ], [ "Charbonneau", "Patrick", "" ] ]
X-ray crystallography is the predominant method for obtaining atomic-scale information about biological macromolecules. Despite the success of the technique, obtaining well diffracting crystals still critically limits going from protein to structure. In practice, the crystallization process proceeds through knowledge-informed empiricism. Better physico-chemical understanding remains elusive because of the large number of variables involved, hence little guidance is available to systematically identify solution conditions that promote crystallization. To help determine relationships between macromolecular properties and their crystallization propensity, we have trained statistical models on samples for 182 proteins supplied by the Northeast Structural Genomics consortium. Gaussian processes, which capture trends beyond the reach of linear statistical models, distinguish between two main physico-chemical mechanisms driving crystallization. One is characterized by low levels of side chain entropy and has been extensively reported in the literature. The other identifies specific electrostatic interactions not previously described in the crystallization context. Because evidence for two distinct mechanisms can be gleaned both from crystal contacts and from solution conditions leading to successful crystallization, the model offers future avenues for optimizing crystallization screens based on partial structural information. The availability of crystallization data coupled with structural outcomes analyzed through state-of-the-art statistical models may thus guide macromolecular crystallization toward a more rational basis.
1303.2876
Andrew Leifer
Steven J. Husson (1 and 2) Alexander Gottschalk (3) and Andrew M. Leifer (4) ((1) Functional Genomics and Proteomics, Department of Biology, KU Leuven, Belgium. (2) SPHERE - Systemic Physiological & Ecotoxicological Research, Department of Biology, University of Antwerp, Belgium. (3) Buchmann Institute for Molecular Life Sciences, Johann Wolfgang Goethe-University Frankfurt, Germany. (4) Lewis-Sigler Institute for Integrative Genomics, Princeton University, USA.)
Optogenetic manipulation of neural activity in C. elegans: from synapse to circuits and behavior
28 pages, 3 figures. Accepted for publication in Biology of the Cell. Available from publisher at http://onlinelibrary.wiley.com/doi/10.1111/boc.201200069/abstract Princeton University's Open Access Policy is at http://www.princeton.edu/dof/policies/publ/fac/open-access-policy/
null
10.1111/boc.201200069
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The emerging field of optogenetics allows for optical activation or inhibition of neurons and other tissue in the nervous system. In 2005 optogenetic proteins were expressed in the nematode C. elegans for the first time. Since then, C. elegans has served as a powerful platform upon which to conduct optogenetic investigations of synaptic function, circuit dynamics and the neuronal basis of behavior. The C. elegans nervous system, consisting of 302 neurons, whose connectivity and morphology has been mapped completely, drives a rich repertoire of behaviors that are quantifiable by video microscopy. This model organism's compact nervous system, quantifiable behavior, genetic tractability and optical accessibility make it especially amenable to optogenetic interrogation. Channelrhodopsin-2 (ChR2), halorhodopsin (NpHR/Halo) and other common optogenetic proteins have all been expressed in C. elegans. Moreover recent advances leveraging molecular genetics and patterned light illumination have now made it possible to target photoactivation and inhibition to single cells and to do so in worms as they behave freely. Here we describe techniques and methods for optogenetic manipulation in C. elegans. We review recent work using optogenetics and C. elegans for neuroscience investigations at the level of synapses, circuits and behavior.
[ { "created": "Tue, 12 Mar 2013 13:33:50 GMT", "version": "v1" }, { "created": "Thu, 9 May 2013 21:33:08 GMT", "version": "v2" } ]
2013-05-13
[ [ "Husson", "Steven J.", "", "1 and 2" ], [ "Gottschalk", "Alexander", "", "3" ], [ "Leifer", "Andrew M.", "", "4" ] ]
The emerging field of optogenetics allows for optical activation or inhibition of neurons and other tissue in the nervous system. In 2005 optogenetic proteins were expressed in the nematode C. elegans for the first time. Since then, C. elegans has served as a powerful platform upon which to conduct optogenetic investigations of synaptic function, circuit dynamics and the neuronal basis of behavior. The C. elegans nervous system, consisting of 302 neurons, whose connectivity and morphology has been mapped completely, drives a rich repertoire of behaviors that are quantifiable by video microscopy. This model organism's compact nervous system, quantifiable behavior, genetic tractability and optical accessibility make it especially amenable to optogenetic interrogation. Channelrhodopsin-2 (ChR2), halorhodopsin (NpHR/Halo) and other common optogenetic proteins have all been expressed in C. elegans. Moreover recent advances leveraging molecular genetics and patterned light illumination have now made it possible to target photoactivation and inhibition to single cells and to do so in worms as they behave freely. Here we describe techniques and methods for optogenetic manipulation in C. elegans. We review recent work using optogenetics and C. elegans for neuroscience investigations at the level of synapses, circuits and behavior.
2108.05039
Anastasia Dunca
Anastasia Dunca, Frederick R. Adler
Predicting Molecular Phenotypes with Single Cell RNA Sequencing Data: an Assessment of Unsupervised Machine Learning Models
null
null
null
null
q-bio.GN cs.LG q-bio.CB q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
According to the National Cancer Institute, there were 9.5 million cancer-related deaths in 2018. A challenge in improving treatment is resistance in genetically unstable cells. The purpose of this study is to evaluate unsupervised machine learning on classifying treatment-resistant phenotypes in heterogeneous tumors through analysis of single cell RNA sequencing (scRNAseq) data with a pipeline and evaluation metrics. scRNAseq quantifies mRNA in cells and characterizes cell phenotypes. One scRNAseq dataset was analyzed (tumor/non-tumor cells of different molecular subtypes and patient identifications). The pipeline consisted of data filtering, dimensionality reduction with Principal Component Analysis, projection with Uniform Manifold Approximation and Projection, clustering with nine approaches (Ward, BIRCH, Gaussian Mixture Model, DBSCAN, Spectral, Affinity Propagation, Agglomerative Clustering, Mean Shift, and K-Means), and evaluation. Seven models divided tumor versus non-tumor cells and molecular subtype while six models classified different patient identifications (13 of which were presented in the dataset); K-Means, Ward, and BIRCH often ranked highest with ~80% accuracy on the tumor versus non-tumor task and ~60% for molecular subtype and patient ID. An optimized classification pipeline using K-Means, Ward, and BIRCH models was evaluated to be most effective for further analysis. In clinical research where there is currently no standard protocol for scRNAseq analysis, clusters generated from this pipeline can be used to understand cancer cell behavior and malignant growth, directly affecting the success of treatment.
[ { "created": "Wed, 11 Aug 2021 05:30:37 GMT", "version": "v1" } ]
2021-08-12
[ [ "Dunca", "Anastasia", "" ], [ "Adler", "Frederick R.", "" ] ]
According to the National Cancer Institute, there were 9.5 million cancer-related deaths in 2018. A challenge in improving treatment is resistance in genetically unstable cells. The purpose of this study is to evaluate unsupervised machine learning on classifying treatment-resistant phenotypes in heterogeneous tumors through analysis of single cell RNA sequencing (scRNAseq) data with a pipeline and evaluation metrics. scRNAseq quantifies mRNA in cells and characterizes cell phenotypes. One scRNAseq dataset was analyzed (tumor/non-tumor cells of different molecular subtypes and patient identifications). The pipeline consisted of data filtering, dimensionality reduction with Principal Component Analysis, projection with Uniform Manifold Approximation and Projection, clustering with nine approaches (Ward, BIRCH, Gaussian Mixture Model, DBSCAN, Spectral, Affinity Propagation, Agglomerative Clustering, Mean Shift, and K-Means), and evaluation. Seven models divided tumor versus non-tumor cells and molecular subtype while six models classified different patient identifications (13 of which were presented in the dataset); K-Means, Ward, and BIRCH often ranked highest with ~80% accuracy on the tumor versus non-tumor task and ~60% for molecular subtype and patient ID. An optimized classification pipeline using K-Means, Ward, and BIRCH models was evaluated to be most effective for further analysis. In clinical research where there is currently no standard protocol for scRNAseq analysis, clusters generated from this pipeline can be used to understand cancer cell behavior and malignant growth, directly affecting the success of treatment.
2304.03192
Peter Taylor
Jonathan J Horsley, Rhys H Thomas, Fahmida A Chowdhury, Beate Diehl, Andrew W McEvoy, Anna Miserocchi, Jane de Tisi, Sjoerd B Vos, Matthew C Walker, Gavin P Winston, John S Duncan, Yujiang Wang, Peter N Taylor
Complementary structural and functional abnormalities to localise epileptogenic tissue
5 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
When investigating suitability for surgery, people with drug-refractory focal epilepsy may have intracranial EEG (iEEG) electrodes implanted to localise seizure onset. Diffusion-weighted magnetic resonance imaging (dMRI) may be acquired to identify key white matter tracts for surgical avoidance. Here, we investigate whether structural connectivity abnormalities, inferred from dMRI, may be used in conjunction with functional iEEG abnormalities to aid localisation and resection of the epileptogenic zone (EZ), and improve surgical outcomes in epilepsy. We retrospectively investigated data from 43 patients with epilepsy who had surgery following iEEG. Twenty-five patients (58%) were free from disabling seizures (ILAE 1 or 2) at one year. For all patients, T1-weighted and diffusion-weighted MRIs were acquired prior to iEEG implantation. Interictal iEEG functional and dMRI structural connectivity abnormalities were quantified by comparison to a normative map and healthy controls respectively. First, we explored whether the resection of maximal (dMRI and iEEG) abnormalities related to improved surgical outcomes. Second, we investigated whether the modalities provided complementary information for improved prediction of surgical outcome. Third, we suggest how dMRI abnormalities may be useful to inform the placement of iEEG electrodes as part of the pre-surgical evaluation using a patient case study. Seizure freedom was 15 times more likely in those patients with resection of maximal dMRI and iEEG abnormalities (p=0.008). Both modalities were separately able to distinguish patient outcome groups and when combined, a decision tree correctly separated 36 out of 43 (84%) patients based on surgical outcome. Structural dMRI could be used in pre-surgical evaluations, particularly when localisation of the EZ is uncertain, to inform personalised iEEG implantation and resection.
[ { "created": "Thu, 6 Apr 2023 16:15:55 GMT", "version": "v1" }, { "created": "Fri, 14 Apr 2023 07:15:40 GMT", "version": "v2" }, { "created": "Tue, 24 Oct 2023 09:54:10 GMT", "version": "v3" } ]
2023-10-25
[ [ "Horsley", "Jonathan J", "" ], [ "Thomas", "Rhys H", "" ], [ "Chowdhury", "Fahmida A", "" ], [ "Diehl", "Beate", "" ], [ "McEvoy", "Andrew W", "" ], [ "Miserocchi", "Anna", "" ], [ "de Tisi", "Jane", "" ], [ "Vos", "Sjoerd B", "" ], [ "Walker", "Matthew C", "" ], [ "Winston", "Gavin P", "" ], [ "Duncan", "John S", "" ], [ "Wang", "Yujiang", "" ], [ "Taylor", "Peter N", "" ] ]
When investigating suitability for surgery, people with drug-refractory focal epilepsy may have intracranial EEG (iEEG) electrodes implanted to localise seizure onset. Diffusion-weighted magnetic resonance imaging (dMRI) may be acquired to identify key white matter tracts for surgical avoidance. Here, we investigate whether structural connectivity abnormalities, inferred from dMRI, may be used in conjunction with functional iEEG abnormalities to aid localisation and resection of the epileptogenic zone (EZ), and improve surgical outcomes in epilepsy. We retrospectively investigated data from 43 patients with epilepsy who had surgery following iEEG. Twenty-five patients (58%) were free from disabling seizures (ILAE 1 or 2) at one year. For all patients, T1-weighted and diffusion-weighted MRIs were acquired prior to iEEG implantation. Interictal iEEG functional and dMRI structural connectivity abnormalities were quantified by comparison to a normative map and healthy controls respectively. First, we explored whether the resection of maximal (dMRI and iEEG) abnormalities related to improved surgical outcomes. Second, we investigated whether the modalities provided complementary information for improved prediction of surgical outcome. Third, we suggest how dMRI abnormalities may be useful to inform the placement of iEEG electrodes as part of the pre-surgical evaluation using a patient case study. Seizure freedom was 15 times more likely in those patients with resection of maximal dMRI and iEEG abnormalities (p=0.008). Both modalities were separately able to distinguish patient outcome groups and when combined, a decision tree correctly separated 36 out of 43 (84%) patients based on surgical outcome. Structural dMRI could be used in pre-surgical evaluations, particularly when localisation of the EZ is uncertain, to inform personalised iEEG implantation and resection.