id: string (9-13 chars)
submitter: string (4-48 chars)
authors: string (4-9.62k chars)
title: string (4-343 chars)
comments: string (2-480 chars)
journal-ref: string (9-309 chars)
doi: string (12-138 chars)
report-no: string (277 distinct values)
categories: string (8-87 chars)
license: string (9 distinct values)
orig_abstract: string (27-3.76k chars)
versions: list (1-15 items)
update_date: string (10 chars)
authors_parsed: list (1-147 items)
abstract: string (24-3.75k chars)
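The records below follow the schema above, with `versions` and `authors_parsed` stored as JSON-encoded lists. A minimal parsing sketch (field names taken from the schema; the structure of a raw record as a flat value list is an assumption about the export format):

```python
import json

# Field order as listed in the schema header above.
FIELDS = ["id", "submitter", "authors", "title", "comments", "journal-ref",
          "doi", "report-no", "categories", "license", "orig_abstract",
          "versions", "update_date", "authors_parsed", "abstract"]

def parse_record(raw_values):
    """Map a flat list of raw field values onto the schema and decode the
    two JSON-encoded list fields (versions, authors_parsed)."""
    rec = dict(zip(FIELDS, raw_values))
    for key in ("versions", "authors_parsed"):
        if isinstance(rec.get(key), str):
            rec[key] = json.loads(rec[key])
    return rec
```

Null fields (e.g. a missing `doi` or `journal-ref`) pass through unchanged, so downstream code should treat every field as optional.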
1906.04527
Yuhong Wang
Yuhong Wang, Wei Li, Hongmao Sun, Kennie Cruz-Gutierrez
Protein contact map prediction using bi-directional recurrent neural network
14 pages
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a native 2D contact map, a protein's 3D structure can be reconstructed with an accuracy of 2 A or better, making such reconstruction a feasible computational approach to the protein folding problem. The prediction accuracy of traditional methods is generally too poor to be useful, but recent deep learning models have significantly improved it. In this study, we propose a neural network model comprising a bi-directional recurrent neural network and an artificial neural network. Over a non-redundant database of all available protein 3D structures in the Protein Data Bank, this deep learning model achieved an accuracy of 0.80, much higher than those of previous models. This study represents a major breakthrough in protein 2D contact map prediction and likely a major step forward for the protein folding problem.
[ { "created": "Tue, 11 Jun 2019 12:27:02 GMT", "version": "v1" } ]
2019-06-12
[ [ "Wang", "Yuhong", "" ], [ "Li", "Wei", "" ], [ "Sun", "Hongmao", "" ], [ "Cruz-Gutierrez", "Kennie", "" ] ]
Given a native 2D contact map, a protein's 3D structure can be reconstructed with an accuracy of 2 A or better, making such reconstruction a feasible computational approach to the protein folding problem. The prediction accuracy of traditional methods is generally too poor to be useful, but recent deep learning models have significantly improved it. In this study, we propose a neural network model comprising a bi-directional recurrent neural network and an artificial neural network. Over a non-redundant database of all available protein 3D structures in the Protein Data Bank, this deep learning model achieved an accuracy of 0.80, much higher than those of previous models. This study represents a major breakthrough in protein 2D contact map prediction and likely a major step forward for the protein folding problem.
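The 2D contact map predicted in the record above is conventionally a binary residue-residue matrix marking pairs whose atoms lie within a distance cutoff (8 angstroms is a common convention, though this record does not state its cutoff). An illustrative pure-Python sketch, not the authors' code:

```python
def contact_map(coords, cutoff=8.0):
    """Binary contact map from residue coordinates: entry (i, j) is 1 if
    residues i and j are within `cutoff` (Euclidean distance), else 0."""
    n = len(coords)
    cmap = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            if d2 <= cutoff ** 2:
                cmap[i][j] = cmap[j][i] = 1
    return cmap
```

The accuracy figure of 0.80 in the abstract would then be a per-pair agreement between a predicted map of this form and the native one.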
1705.02330
M. Ali Al-Radhawi
M. Ali Al-Radhawi, Domitilla Del Vecchio, Eduardo D. Sontag
Multi-modality in gene regulatory networks with slow promoter kinetics
null
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Phenotypical variability in the absence of genetic variation often reflects complex energetic landscapes associated with underlying gene regulatory networks (GRNs). In this view, different phenotypes are associated with alternative states of complex nonlinear systems: stable attractors in deterministic models or modes of stationary distributions in stochastic descriptions. We provide theoretical and practical characterizations of these landscapes, specifically focusing on stochastic slow promoter kinetics, a time scale relevant when transcription factor binding and unbinding are affected by epigenetic processes like DNA methylation and chromatin remodeling. In this case, largely unexplored except for numerical simulations, adiabatic approximations of promoter kinetics are not appropriate. In contrast to the existing literature, we provide rigorous analytic characterizations of multiple modes. A general formal approach gives insight into the influence of parameters and the prediction of how changes in GRN wiring, for example through mutations or artificial interventions, impact the possible number, location, and likelihood of alternative states. We adapt tools from the mathematical field of singular perturbation theory to represent stationary distributions of Chemical Master Equations for GRNs as mixtures of Poisson distributions and obtain explicit formulas for the locations and probabilities of metastable states as a function of the parameters describing the system. As illustrations, the theory is used to tease out the role of cooperative binding in stochastic models in comparison to deterministic models, and applications are given to various model systems, such as toggle switches in isolation or in communicating populations, a synthetic oscillator and a trans-differentiation network.
[ { "created": "Fri, 5 May 2017 17:58:48 GMT", "version": "v1" }, { "created": "Sat, 18 Nov 2017 00:01:01 GMT", "version": "v2" }, { "created": "Tue, 26 Jun 2018 17:52:37 GMT", "version": "v3" } ]
2018-06-27
[ [ "Al-Radhawi", "M. Ali", "" ], [ "Del Vecchio", "Domitilla", "" ], [ "Sontag", "Eduardo D.", "" ] ]
Phenotypical variability in the absence of genetic variation often reflects complex energetic landscapes associated with underlying gene regulatory networks (GRNs). In this view, different phenotypes are associated with alternative states of complex nonlinear systems: stable attractors in deterministic models or modes of stationary distributions in stochastic descriptions. We provide theoretical and practical characterizations of these landscapes, specifically focusing on stochastic slow promoter kinetics, a time scale relevant when transcription factor binding and unbinding are affected by epigenetic processes like DNA methylation and chromatin remodeling. In this case, largely unexplored except for numerical simulations, adiabatic approximations of promoter kinetics are not appropriate. In contrast to the existing literature, we provide rigorous analytic characterizations of multiple modes. A general formal approach gives insight into the influence of parameters and the prediction of how changes in GRN wiring, for example through mutations or artificial interventions, impact the possible number, location, and likelihood of alternative states. We adapt tools from the mathematical field of singular perturbation theory to represent stationary distributions of Chemical Master Equations for GRNs as mixtures of Poisson distributions and obtain explicit formulas for the locations and probabilities of metastable states as a function of the parameters describing the system. As illustrations, the theory is used to tease out the role of cooperative binding in stochastic models in comparison to deterministic models, and applications are given to various model systems, such as toggle switches in isolation or in communicating populations, a synthetic oscillator and a trans-differentiation network.
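The Poisson-mixture representation of stationary distributions mentioned in the abstract above can be written schematically (notation ours, a sketch rather than the paper's exact statement):

```latex
\pi(n) \;\approx\; \sum_{i} w_i \,\frac{\lambda_i^{n} e^{-\lambda_i}}{n!},
\qquad \sum_{i} w_i = 1,
```

where each $\lambda_i$ is the mean copy number associated with one metastable promoter configuration and $w_i$ its weight; the modes of $\pi$ then sit near the $\lambda_i$, giving the "locations and probabilities of metastable states" referred to in the abstract.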
q-bio/0406039
Onuttom Narayan
J. M. Deutsch, Shoudan Liang and Onuttom Narayan
Modeling of microarray data with zippering
4 pages, 4 figures
null
null
null
q-bio.BM cond-mat.soft q-bio.MN
null
The ability of oligonucleotide microarrays to measure gene expression has been hindered by an imperfect understanding of the relationship between input RNA concentrations and output signals. We argue that this relationship can be understood based on the underlying statistical mechanics of these devices. We present a model that includes the relevant interactions between the molecules. Our model for the first time accounts for partially zippered probe-target hybrids in a physically realistic manner, and also includes target-target binding in solution. Large segments of the target molecules are not bound to the probes, often in an asymmetric pattern, emphasizing the importance of modeling zippering properly. The resultant fit between the model and training data using optimized parameters is excellent, and it also does well at predicting test data.
[ { "created": "Thu, 17 Jun 2004 19:56:03 GMT", "version": "v1" } ]
2007-05-23
[ [ "Deutsch", "J. M.", "" ], [ "Liang", "Shoudan", "" ], [ "Narayan", "Onuttom", "" ] ]
The ability of oligonucleotide microarrays to measure gene expression has been hindered by an imperfect understanding of the relationship between input RNA concentrations and output signals. We argue that this relationship can be understood based on the underlying statistical mechanics of these devices. We present a model that includes the relevant interactions between the molecules. Our model for the first time accounts for partially zippered probe-target hybrids in a physically realistic manner, and also includes target-target binding in solution. Large segments of the target molecules are not bound to the probes, often in an asymmetric pattern, emphasizing the importance of modeling zippering properly. The resultant fit between the model and training data using optimized parameters is excellent, and it also does well at predicting test data.
2401.05341
Yannick Vogt
Yannick Vogt, Mehdi Naouar, Maria Kalweit, Christoph Cornelius Miething, Justus Duyster, Roland Mertelsmann, Gabriel Kalweit, Joschka Boedecker
Stable Online and Offline Reinforcement Learning for Antibody CDRH3 Design
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The field of antibody-based therapeutics has grown significantly in recent years, with targeted antibodies emerging as a potentially effective approach to personalized therapies. Such therapies could be particularly beneficial for complex, highly individual diseases such as cancer. However, progress in this field is often constrained by the extensive search space of amino acid sequences that form the foundation of antibody design. In this study, we introduce a novel reinforcement learning method specifically tailored to address the unique challenges of this domain. We demonstrate that our method can learn the design of high-affinity antibodies against multiple targets in silico, utilizing either online interaction or offline datasets. To the best of our knowledge, our approach is the first of its kind and outperforms existing methods on all tested antigens in the Absolut! database.
[ { "created": "Wed, 29 Nov 2023 10:09:36 GMT", "version": "v1" } ]
2024-01-12
[ [ "Vogt", "Yannick", "" ], [ "Naouar", "Mehdi", "" ], [ "Kalweit", "Maria", "" ], [ "Miething", "Christoph Cornelius", "" ], [ "Duyster", "Justus", "" ], [ "Mertelsmann", "Roland", "" ], [ "Kalweit", "Gabriel", "" ], [ "Boedecker", "Joschka", "" ] ]
The field of antibody-based therapeutics has grown significantly in recent years, with targeted antibodies emerging as a potentially effective approach to personalized therapies. Such therapies could be particularly beneficial for complex, highly individual diseases such as cancer. However, progress in this field is often constrained by the extensive search space of amino acid sequences that form the foundation of antibody design. In this study, we introduce a novel reinforcement learning method specifically tailored to address the unique challenges of this domain. We demonstrate that our method can learn the design of high-affinity antibodies against multiple targets in silico, utilizing either online interaction or offline datasets. To the best of our knowledge, our approach is the first of its kind and outperforms existing methods on all tested antigens in the Absolut! database.
1101.5519
Blaise Li
Blaise Li
An exploratory analysis of combined genome-wide SNP data from several recent studies
Large colour figures available online on FigShare (http://dx.doi.org/10.6084/m9.figshare.100442)
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The usefulness of a `total-evidence' approach to human population genetics was assessed through a clustering analysis of combined genome-wide SNP datasets. The combination contained only 3146 SNPs. Detailed examination of the results nonetheless enables the extraction of relevant clues about the history of human populations, some pertaining to events as ancient as the first migration out of Africa. The results are mostly coherent with what is known from history, linguistics, and previous genetic analyses. These promising results suggest that cross-study data confrontation has the potential to yield interesting new hypotheses about human population history.
[ { "created": "Fri, 28 Jan 2011 11:51:34 GMT", "version": "v1" }, { "created": "Mon, 18 Apr 2011 11:04:34 GMT", "version": "v2" }, { "created": "Thu, 2 Jun 2011 23:25:28 GMT", "version": "v3" }, { "created": "Thu, 6 Dec 2012 16:57:50 GMT", "version": "v4" } ]
2012-12-07
[ [ "Li", "Blaise", "" ] ]
The usefulness of a `total-evidence' approach to human population genetics was assessed through a clustering analysis of combined genome-wide SNP datasets. The combination contained only 3146 SNPs. Detailed examination of the results nonetheless enables the extraction of relevant clues about the history of human populations, some pertaining to events as ancient as the first migration out of Africa. The results are mostly coherent with what is known from history, linguistics, and previous genetic analyses. These promising results suggest that cross-study data confrontation has the potential to yield interesting new hypotheses about human population history.
q-bio/0510041
Vladimir Gavryushin
V. Gavryushin
A New Form-Factor Method for the Analysis of Tissue Fluorescence
13 pages, 7 figures, submitted to "Journal of Medical Physics", 2005. As the supplemental material, 4 animation files illustrating the reconstruction procedures and properties of fluorophores spectra, by the proposed line-shape model. To view them with comments, - use link: http://www.mtmi.vu.lt/Suppl_CancLetters/
null
null
null
q-bio.BM
null
The aim of this article is to present a newly developed method that decomposes the autofluorescence spectrum into the spectra of naturally occurring biochemical components of biotissue. It requires knowledge of the detailed spectral behaviour of different endogenous fluorophores. We have studied the main bio-markers in human tissue and propose a simple modelling algorithm for their spectral shapes. The empirical method was tested theoretically by quantum-mechanical calculations of the spectra in the anharmonic Morse potential approach.
[ { "created": "Thu, 20 Oct 2005 20:42:39 GMT", "version": "v1" } ]
2007-05-23
[ [ "Gavryushin", "V.", "" ] ]
The aim of this article is to present a newly developed method that decomposes the autofluorescence spectrum into the spectra of naturally occurring biochemical components of biotissue. It requires knowledge of the detailed spectral behaviour of different endogenous fluorophores. We have studied the main bio-markers in human tissue and propose a simple modelling algorithm for their spectral shapes. The empirical method was tested theoretically by quantum-mechanical calculations of the spectra in the anharmonic Morse potential approach.
0902.1417
Kaihsu Tai
Sarah E. Rogers, Kaihsu Tai, Oliver Beckstein, Mark S. P. Sansom
Opening a hydrophobic gate: the nicotinic acetylcholine receptor as an example
null
null
null
null
q-bio.BM physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/3.0/
To what extent must a hydrophobic gate expand for the channel to count as open? We address this question using the nicotinic acetylcholine receptor (nAChR) as the exemplar. The nAChR is an integral membrane protein which forms a cation selective channel gated by neurotransmitter binding to its extracellular domain. A hydrophobic gating model has been proposed for the nAChR, whereby the pore is incompletely occluded in the closed state channel, with a narrow hydrophobic central gate region which presents an energetic barrier to ion permeation. The nAChR pore is lined by a parallel bundle of five M2 alpha-helices, with the gate formed by three rings of hydrophobic sidechains (9', 13', and 17' of M2). A number of models have been proposed to describe the nature of the conformational change underlying the closed to open transition of the nAChR. These models involve different degrees of M2 helix displacement, rotation, and/or kinking. In this study, we use a simple pore expansion method (previously used to model opening of potassium channels) to generate a series of progressively wider models of the nAChR transmembrane domain. Continuum electrostatics calculations are used to assess the change in the barrier height of the hydrophobic gate as a function of pore expansion. The results suggest that an increase in radius of Delta r ~ 1.5 angstrom is sufficient to functionally open the pore without, for example, a requirement for rotation of the M2 helices. This is evaluated in the context of current mutational and structural data on the nAChR and its homologues.
[ { "created": "Mon, 9 Feb 2009 11:42:51 GMT", "version": "v1" } ]
2009-02-12
[ [ "Rogers", "Sarah E.", "" ], [ "Tai", "Kaihsu", "" ], [ "Beckstein", "Oliver", "" ], [ "Sansom", "Mark S. P.", "" ] ]
To what extent must a hydrophobic gate expand for the channel to count as open? We address this question using the nicotinic acetylcholine receptor (nAChR) as the exemplar. The nAChR is an integral membrane protein which forms a cation selective channel gated by neurotransmitter binding to its extracellular domain. A hydrophobic gating model has been proposed for the nAChR, whereby the pore is incompletely occluded in the closed state channel, with a narrow hydrophobic central gate region which presents an energetic barrier to ion permeation. The nAChR pore is lined by a parallel bundle of five M2 alpha-helices, with the gate formed by three rings of hydrophobic sidechains (9', 13', and 17' of M2). A number of models have been proposed to describe the nature of the conformational change underlying the closed to open transition of the nAChR. These models involve different degrees of M2 helix displacement, rotation, and/or kinking. In this study, we use a simple pore expansion method (previously used to model opening of potassium channels) to generate a series of progressively wider models of the nAChR transmembrane domain. Continuum electrostatics calculations are used to assess the change in the barrier height of the hydrophobic gate as a function of pore expansion. The results suggest that an increase in radius of Delta r ~ 1.5 angstrom is sufficient to functionally open the pore without, for example, a requirement for rotation of the M2 helices. This is evaluated in the context of current mutational and structural data on the nAChR and its homologues.
1011.2861
Eric M\"uller
Daniel Br\"uderle, Mihai A. Petrovici, Bernhard Vogginger, Matthias Ehrlich, Thomas Pfeil, Sebastian Millner, Andreas Gr\"ubl, Karsten Wendt, Eric M\"uller, Marc-Olivier Schwartz, Dan Husmann de Oliveira, Sebastian Jeltsch, Johannes Fieres, Moritz Schilling, Paul M\"uller, Oliver Breitwieser, Venelin Petkov, Lyle Muller, Andrew P. Davison, Pradeep Krishnamurthy, Jens Kremkow, Mikael Lundqvist, Eilif Muller, Johannes Partzsch, Stefan Scholze, Lukas Z\"uhl, Christian Mayr, Alain Destexhe, Markus Diesmann, Tobias C. Potjans, Anders Lansner, Ren\'e Sch\"uffny, Johannes Schemmel, Karlheinz Meier
A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems
null
Biol Cybern. 2011 May;104(4-5):263-96
10.1007/s00422-011-0435-9
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: The integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter is proven with a variety of experimental results.
[ { "created": "Fri, 12 Nov 2010 09:50:59 GMT", "version": "v1" }, { "created": "Thu, 21 Jul 2011 15:51:02 GMT", "version": "v2" } ]
2011-07-22
[ [ "Brüderle", "Daniel", "" ], [ "Petrovici", "Mihai A.", "" ], [ "Vogginger", "Bernhard", "" ], [ "Ehrlich", "Matthias", "" ], [ "Pfeil", "Thomas", "" ], [ "Millner", "Sebastian", "" ], [ "Grübl", "Andreas", "" ], [ "Wendt", "Karsten", "" ], [ "Müller", "Eric", "" ], [ "Schwartz", "Marc-Olivier", "" ], [ "de Oliveira", "Dan Husmann", "" ], [ "Jeltsch", "Sebastian", "" ], [ "Fieres", "Johannes", "" ], [ "Schilling", "Moritz", "" ], [ "Müller", "Paul", "" ], [ "Breitwieser", "Oliver", "" ], [ "Petkov", "Venelin", "" ], [ "Muller", "Lyle", "" ], [ "Davison", "Andrew P.", "" ], [ "Krishnamurthy", "Pradeep", "" ], [ "Kremkow", "Jens", "" ], [ "Lundqvist", "Mikael", "" ], [ "Muller", "Eilif", "" ], [ "Partzsch", "Johannes", "" ], [ "Scholze", "Stefan", "" ], [ "Zühl", "Lukas", "" ], [ "Mayr", "Christian", "" ], [ "Destexhe", "Alain", "" ], [ "Diesmann", "Markus", "" ], [ "Potjans", "Tobias C.", "" ], [ "Lansner", "Anders", "" ], [ "Schüffny", "René", "" ], [ "Schemmel", "Johannes", "" ], [ "Meier", "Karlheinz", "" ] ]
In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: The integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter is proven with a variety of experimental results.
1706.08139
Juan B Gutierrez
Yi H. Yan, Jacob B. Aguilar, Elizabeth D. Trippe, Juan B. Gutierrez
Quantification of Healthy Red Blood Cell Removal and Preferential Invasion of Reticulocytes in Macaca mulatta during Plasmodium cynomolgi Infection
17 pages, 14 figures
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We derived an ordinary differential equation model to capture the disease dynamics during blood-stage malaria. The model was directly derived from an earlier age-structured partial differential equation model. The original model was simplified due to experimental constraints. Here we calibrated the simplified model with experimental data using a multiple-objective genetic algorithm. Through the calibration process, we quantified the removal of healthy red blood cells and the preferential infection of reticulocytes during \textit{Plasmodium cynomolgi} infection of \textit{Macaca mulatta}. The calibration of our model also revealed the existence of a host erythropoietic response prior to blood-stage infection.
[ { "created": "Sun, 25 Jun 2017 16:39:26 GMT", "version": "v1" }, { "created": "Fri, 30 Jun 2017 05:57:42 GMT", "version": "v2" } ]
2017-07-03
[ [ "Yan", "Yi H.", "" ], [ "Aguilar", "Jacob B.", "" ], [ "Trippe", "Elizabeth D.", "" ], [ "Gutierrez", "Juan B.", "" ] ]
We derived an ordinary differential equation model to capture the disease dynamics during blood-stage malaria. The model was directly derived from an earlier age-structured partial differential equation model. The original model was simplified due to experimental constraints. Here we calibrated the simplified model with experimental data using a multiple-objective genetic algorithm. Through the calibration process, we quantified the removal of healthy red blood cells and the preferential infection of reticulocytes during \textit{Plasmodium cynomolgi} infection of \textit{Macaca mulatta}. The calibration of our model also revealed the existence of a host erythropoietic response prior to blood-stage infection.
2106.14675
Nicol\'as J. Gallego-Molina
Nicol\'as J. Gallego-Molina, Andr\'es Ortiz, Francisco J. Mart\'inez-Murcia, Marco Formoso and Almudena Gim\'enez
Complex network modelling of EEG band coupling in dyslexia: an exploratory analysis of auditory processing and diagnosis
null
null
10.1016/j.knosys.2021.108098
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complex network analysis has increasing relevance in the study of neurological disorders, enhancing our knowledge of the brain's structural and functional organization. Network structure and efficiency reveal different brain states along with different ways of processing information. This work is structured around an exploratory analysis of the brain processes involved in low-level auditory processing. A complex network analysis was performed on the basis of brain coupling obtained from electroencephalography (EEG) data while different auditory stimuli were presented to the subjects. This coupling is inferred from the phase-amplitude coupling (PAC) between different EEG electrodes to explore differences between control and dyslexic subjects. The coupling data allow the construction of a graph, and graph theory is then used to study the characteristics of the complex networks over time for controls and dyslexics. This yields a set of metrics including the clustering coefficient, path length, and small-worldness. From this, different characteristics linked to the temporal evolution of networks and coupling are identified for dyslexics. Our study revealed patterns related to dyslexia, such as a loss of small-world topology. Finally, these graph-based features are used to classify control and dyslexic subjects by means of a Support Vector Machine (SVM).
[ { "created": "Mon, 28 Jun 2021 12:57:25 GMT", "version": "v1" }, { "created": "Wed, 21 Jul 2021 11:37:26 GMT", "version": "v2" } ]
2022-01-24
[ [ "Gallego-Molina", "Nicolás J.", "" ], [ "Ortiz", "Andrés", "" ], [ "Martínez-Murcia", "Francisco J.", "" ], [ "Formoso", "Marco", "" ], [ "Giménez", "Almudena", "" ] ]
Complex network analysis has increasing relevance in the study of neurological disorders, enhancing our knowledge of the brain's structural and functional organization. Network structure and efficiency reveal different brain states along with different ways of processing information. This work is structured around an exploratory analysis of the brain processes involved in low-level auditory processing. A complex network analysis was performed on the basis of brain coupling obtained from electroencephalography (EEG) data while different auditory stimuli were presented to the subjects. This coupling is inferred from the phase-amplitude coupling (PAC) between different EEG electrodes to explore differences between control and dyslexic subjects. The coupling data allow the construction of a graph, and graph theory is then used to study the characteristics of the complex networks over time for controls and dyslexics. This yields a set of metrics including the clustering coefficient, path length, and small-worldness. From this, different characteristics linked to the temporal evolution of networks and coupling are identified for dyslexics. Our study revealed patterns related to dyslexia, such as a loss of small-world topology. Finally, these graph-based features are used to classify control and dyslexic subjects by means of a Support Vector Machine (SVM).
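The two base metrics named in the record above, average clustering coefficient and characteristic path length (small-worldness compares both against a random reference graph), can be sketched in plain Python. An illustrative implementation on an undirected adjacency-set graph, not the authors' code:

```python
from collections import deque

def avg_clustering(adj):
    """Mean local clustering coefficient of an undirected graph given as
    {node: set_of_neighbors}; nodes with degree < 2 contribute 0."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        # count edges among neighbors (each unordered pair once)
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Characteristic path length via BFS from every node (assumes the
    graph is connected and unweighted)."""
    n = len(adj)
    total = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(dist.values())
    return total / (n * (n - 1))
```

A small-world coefficient is then typically formed as (C/C_rand)/(L/L_rand), with C_rand and L_rand computed on a degree-matched random graph.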
1412.2125
Tyas Fiantoro
Tyas Pandu Fiantoro, Adhi Susanto, Bondhan Winduratna
Gastric Slow Wave Modelling Based on Stomach Morphology and Neuronal Firings
5 pages, 9 figures, figure legends fixed
null
null
S1_2014_284506
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gastric content mass and pH are commonly assessed invasively using endoscopic biopsy, or semi-invasively using a swallowable transducer. Electrogastrography (EGG) is a technique for observing gastric myoelectrical activity non-invasively, and one that could be implemented as a mobile device. In this research, 72 EGG recordings were obtained from 13 local white rabbits (Oryctolagus cuniculus). The recorded EGG signals were processed using the SCILAB 5.5.1 package. Signal processing consisted of waveform identification together with recognition of the resting, depolarization, ECA plateau, and repolarization segments of each EGG in the time domain, based on amplitude and temporal filters. All rabbits were sacrificed after the recording in order to obtain the mass and pH of their stomach contents. An EGG waveform generator based on a gastric morphological neuron assembly was modeled using these data. If this model proves to be accurate, the mass and pH of the rabbit (Oryctolagus cuniculus) stomach content could be assessed non-invasively, and the approach could be a basis for human (Homo sapiens) trials.
[ { "created": "Fri, 5 Dec 2014 20:19:54 GMT", "version": "v1" }, { "created": "Mon, 8 Dec 2014 17:42:22 GMT", "version": "v2" } ]
2014-12-09
[ [ "Fiantoro", "Tyas Pandu", "" ], [ "Susanto", "Adhi", "" ], [ "Winduratna", "Bondhan", "" ] ]
Gastric content mass and pH are commonly assessed invasively using endoscopic biopsy, or semi-invasively using a swallowable transducer. Electrogastrography (EGG) is a technique for observing gastric myoelectrical activity non-invasively, and one that could be implemented as a mobile device. In this research, 72 EGG recordings were obtained from 13 local white rabbits (Oryctolagus cuniculus). The recorded EGG signals were processed using the SCILAB 5.5.1 package. Signal processing consisted of waveform identification together with recognition of the resting, depolarization, ECA plateau, and repolarization segments of each EGG in the time domain, based on amplitude and temporal filters. All rabbits were sacrificed after the recording in order to obtain the mass and pH of their stomach contents. An EGG waveform generator based on a gastric morphological neuron assembly was modeled using these data. If this model proves to be accurate, the mass and pH of the rabbit (Oryctolagus cuniculus) stomach content could be assessed non-invasively, and the approach could be a basis for human (Homo sapiens) trials.
2108.10018
Sebastian Contreras
Karen Y. Or\'ostica, Sebastian Contreras, Sebastian B. Mohr, Jonas Dehning, Simon Bauer, David Medina-Ortiz, Emil N. Iftekhar, Karen Mujica, Paulo C. Covarrubias, Soledad Ulloa, Andr\'es E. Castillo, Ricardo A. Verdugo, Jorge Fern\'andez, \'Alvaro Olivera-Nappa and Viola Priesemann
Mutational signatures and transmissibility of SARS-CoV-2 Gamma and Lambda variants
null
null
null
null
q-bio.PE math.ST q-bio.QM stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The emergence of SARS-CoV-2 variants of concern endangers the long-term control of COVID-19, especially in countries with limited genomic surveillance. In this work, we explored genomic drivers of contagion in Chile. We sequenced 3443 SARS-CoV-2 genomes collected between January and July 2021, where the Gamma (P.1), Lambda (C.37), Alpha (B.1.1.7), B.1.1.348, and B.1.1 lineages were predominant. Using a Bayesian model tailored for limited genomic surveillance, we found that Lambda and Gamma variants' reproduction numbers were about 5% and 16% larger than Alpha's, respectively. We observed an overabundance of mutations in the Spike gene, strongly correlated with the variant's transmissibility. Furthermore, the variants' mutational signatures featured a breakpoint concurrent with the beginning of vaccination (mostly CoronaVac, an inactivated virus vaccine), indicating an additional putative selective pressure. Thus, our work provides a reliable method for quantifying novel variants' transmissibility under subsampling (as newly-reported Delta, B.1.617.2) and highlights the importance of continuous genomic surveillance.
[ { "created": "Mon, 23 Aug 2021 09:10:49 GMT", "version": "v1" } ]
2021-08-25
[ [ "Oróstica", "Karen Y.", "" ], [ "Contreras", "Sebastian", "" ], [ "Mohr", "Sebastian B.", "" ], [ "Dehning", "Jonas", "" ], [ "Bauer", "Simon", "" ], [ "Medina-Ortiz", "David", "" ], [ "Iftekhar", "Emil N.", "" ], [ "Mujica", "Karen", "" ], [ "Covarrubias", "Paulo C.", "" ], [ "Ulloa", "Soledad", "" ], [ "Castillo", "Andrés E.", "" ], [ "Verdugo", "Ricardo A.", "" ], [ "Fernández", "Jorge", "" ], [ "Olivera-Nappa", "Álvaro", "" ], [ "Priesemann", "Viola", "" ] ]
The emergence of SARS-CoV-2 variants of concern endangers the long-term control of COVID-19, especially in countries with limited genomic surveillance. In this work, we explored genomic drivers of contagion in Chile. We sequenced 3443 SARS-CoV-2 genomes collected between January and July 2021, where the Gamma (P.1), Lambda (C.37), Alpha (B.1.1.7), B.1.1.348, and B.1.1 lineages were predominant. Using a Bayesian model tailored for limited genomic surveillance, we found that Lambda and Gamma variants' reproduction numbers were about 5% and 16% larger than Alpha's, respectively. We observed an overabundance of mutations in the Spike gene, strongly correlated with the variant's transmissibility. Furthermore, the variants' mutational signatures featured a breakpoint concurrent with the beginning of vaccination (mostly CoronaVac, an inactivated virus vaccine), indicating an additional putative selective pressure. Thus, our work provides a reliable method for quantifying novel variants' transmissibility under subsampling (as newly-reported Delta, B.1.617.2) and highlights the importance of continuous genomic surveillance.
1803.10205
Johanna Senk
Johanna Senk, Corto Carde, Espen Hagen, Torsten W. Kuhlen, Markus Diesmann, Benjamin Weyers
VIOLA - A multi-purpose and web-based visualization tool for neuronal-network simulation output
38 pages, 10 figures, 3 tables
Front. Neuroinform. 12:75 (2018)
10.3389/fninf.2018.00075
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuronal network models and corresponding computer simulations are invaluable tools to aid the interpretation of the relationship between neuron properties, connectivity and measured activity in cortical tissue. Spatiotemporal patterns of activity propagating across the cortical surface as observed experimentally can for example be described by neuronal network models with layered geometry and distance-dependent connectivity. The interpretation of the resulting stream of multi-modal and multi-dimensional simulation data calls for integrating interactive visualization steps into existing simulation-analysis workflows. Here, we present a set of interactive visualization concepts called views for the visual analysis of activity data in topological network models, and a corresponding reference implementation VIOLA (VIsualization Of Layer Activity). The software is a lightweight, open-source, web-based and platform-independent application combining and adapting modern interactive visualization paradigms, such as coordinated multiple views, for massively parallel neurophysiological data. For a use-case demonstration we consider spiking activity data of a two-population, layered point-neuron network model subject to a spatially confined excitation originating from an external population. With the multiple coordinated views, an explorative and qualitative assessment of the spatiotemporal features of neuronal activity can be performed upfront of a detailed quantitative data analysis of specific aspects of the data. Furthermore, ongoing efforts including the European Human Brain Project aim at providing online user portals for integrated model development, simulation, analysis and provenance tracking, wherein interactive visual analysis tools are one component. Browser-compatible, web-technology based solutions are therefore required. Within this scope, with VIOLA we provide a first prototype.
[ { "created": "Tue, 27 Mar 2018 17:43:06 GMT", "version": "v1" } ]
2022-09-16
[ [ "Senk", "Johanna", "" ], [ "Carde", "Corto", "" ], [ "Hagen", "Espen", "" ], [ "Kuhlen", "Torsten W.", "" ], [ "Diesmann", "Markus", "" ], [ "Weyers", "Benjamin", "" ] ]
Neuronal network models and corresponding computer simulations are invaluable tools to aid the interpretation of the relationship between neuron properties, connectivity and measured activity in cortical tissue. Spatiotemporal patterns of activity propagating across the cortical surface as observed experimentally can for example be described by neuronal network models with layered geometry and distance-dependent connectivity. The interpretation of the resulting stream of multi-modal and multi-dimensional simulation data calls for integrating interactive visualization steps into existing simulation-analysis workflows. Here, we present a set of interactive visualization concepts called views for the visual analysis of activity data in topological network models, and a corresponding reference implementation VIOLA (VIsualization Of Layer Activity). The software is a lightweight, open-source, web-based and platform-independent application combining and adapting modern interactive visualization paradigms, such as coordinated multiple views, for massively parallel neurophysiological data. For a use-case demonstration we consider spiking activity data of a two-population, layered point-neuron network model subject to a spatially confined excitation originating from an external population. With the multiple coordinated views, an explorative and qualitative assessment of the spatiotemporal features of neuronal activity can be performed upfront of a detailed quantitative data analysis of specific aspects of the data. Furthermore, ongoing efforts including the European Human Brain Project aim at providing online user portals for integrated model development, simulation, analysis and provenance tracking, wherein interactive visual analysis tools are one component. Browser-compatible, web-technology based solutions are therefore required. Within this scope, with VIOLA we provide a first prototype.
2404.14003
LAURIC REYNES
Lauric Reynes (EBEA), Louise Fouqueau (EBEA), D. Aurelle (MIO, ISYEB, EMBIO), St\'ephane Mauger (EBEA), Christophe Destombe (EBEA), Myriam Valero (EBEA)
Temporal genomics help in deciphering neutral and adaptive patterns in the contemporary evolution of kelp populations
Journal of Evolutionary Biology, 2024
null
10.1093/jeb/voae048
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The impact of climate change on populations will be contingent upon their contemporary adaptive evolution. In this study, we investigated the contemporary evolution of four populations of the cold-water kelp Laminaria digitata by analysing their spatial and temporal genomic variation using ddRAD-sequencing. These populations were sampled from the center to the southern margin of its north-eastern Atlantic distribution at two time points, spanning at least two generations. Through genome scans for local adaptation at a single time point, we identified candidate loci that showed clinal variation correlated with changes in sea surface temperature (SST) along latitudinal gradients. This finding suggests that SST may drive the adaptive response of these kelp populations, although factors such as species' demographic history should also be considered. Additionally, we performed a simulation approach to distinguish the effect of selection from genetic drift in allele frequency changes over time. This enabled the detection of loci in the southernmost population that exhibited temporal differentiation beyond what would be expected from genetic drift alone: these are candidate loci which could have evolved under selection over time. In contrast, we did not detect any outlier locus based on temporal differentiation in the population from the North Sea, which also displayed low and decreasing levels of genetic diversity. The diverse evolutionary scenarios observed among populations can be attributed to variations in the prevalence of selection relative to genetic drift across different environments. Therefore, our study highlights the potential of temporal genomics to offer valuable insights into the contemporary evolution of marine foundation species facing climate change.
[ { "created": "Mon, 22 Apr 2024 09:09:09 GMT", "version": "v1" } ]
2024-04-23
[ [ "Reynes", "Lauric", "", "EBEA" ], [ "Fouqueau", "Louise", "", "EBEA" ], [ "Aurelle", "D.", "", "MIO, ISYEB,\n EMBIO" ], [ "Mauger", "Stéphane", "", "EBEA" ], [ "Destombe", "Christophe", "", "EBEA" ], [ "Valero", "Myriam", "", "EBEA" ] ]
The impact of climate change on populations will be contingent upon their contemporary adaptive evolution. In this study, we investigated the contemporary evolution of four populations of the cold-water kelp Laminaria digitata by analysing their spatial and temporal genomic variation using ddRAD-sequencing. These populations were sampled from the center to the southern margin of its north-eastern Atlantic distribution at two time points, spanning at least two generations. Through genome scans for local adaptation at a single time point, we identified candidate loci that showed clinal variation correlated with changes in sea surface temperature (SST) along latitudinal gradients. This finding suggests that SST may drive the adaptive response of these kelp populations, although factors such as species' demographic history should also be considered. Additionally, we performed a simulation approach to distinguish the effect of selection from genetic drift in allele frequency changes over time. This enabled the detection of loci in the southernmost population that exhibited temporal differentiation beyond what would be expected from genetic drift alone: these are candidate loci which could have evolved under selection over time. In contrast, we did not detect any outlier locus based on temporal differentiation in the population from the North Sea, which also displayed low and decreasing levels of genetic diversity. The diverse evolutionary scenarios observed among populations can be attributed to variations in the prevalence of selection relative to genetic drift across different environments. Therefore, our study highlights the potential of temporal genomics to offer valuable insights into the contemporary evolution of marine foundation species facing climate change.
2307.00385
Moo K. Chung
Zijian Chen, Soumya Das, Moo K. Chung
Sulcal Pattern Matching with the Wasserstein Distance
In press in IEEE ISBI
null
null
null
q-bio.NC eess.IV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present a unified computational framework for modeling the sulcal patterns of the human brain obtained from magnetic resonance images. The Wasserstein distance is used to align the sulcal patterns nonlinearly. These patterns are topologically different across subjects, making pattern matching a challenge. We work out the mathematical details and develop gradient descent algorithms for estimating the deformation field. We further quantify the image registration performance. This method is applied in identifying the differences between male and female sulcal patterns.
[ { "created": "Sat, 1 Jul 2023 16:57:49 GMT", "version": "v1" } ]
2023-07-04
[ [ "Chen", "Zijian", "" ], [ "Das", "Soumya", "" ], [ "Chung", "Moo K.", "" ] ]
We present a unified computational framework for modeling the sulcal patterns of the human brain obtained from magnetic resonance images. The Wasserstein distance is used to align the sulcal patterns nonlinearly. These patterns are topologically different across subjects, making pattern matching a challenge. We work out the mathematical details and develop gradient descent algorithms for estimating the deformation field. We further quantify the image registration performance. This method is applied in identifying the differences between male and female sulcal patterns.
1601.00364
Elad Ganmor
Elad Ganmor, Michael Krumin, Luigi F. Rossi, Matteo Carandini and Eero P. Simoncelli
Direct Estimation of Firing Rates from Calcium Imaging Data
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two-photon imaging of calcium indicators allows simultaneous recording of responses of hundreds of neurons over hours and even days, but provides a relatively indirect measure of their spiking activity. Existing deconvolution algorithms attempt to recover spikes from observed imaging data, which are then commonly subjected to the same analyses that are applied to electrophysiologically recorded spikes (e.g., estimation of average firing rates, or tuning curves). Here we show, however, that in the presence of noise this approach is often heavily biased. We propose an alternative analysis that aims to estimate the underlying rate directly, by integrating over the unobserved spikes instead of committing to a single estimate of the spike train. This approach can be used to estimate average firing rates or tuning curves directly from the imaging data, and is sufficiently flexible to incorporate prior knowledge about tuning structure. We show that directly estimated rates are more accurate than those obtained from averaging of spikes estimated through deconvolution, both on simulated data and on imaging data acquired in mouse visual cortex.
[ { "created": "Mon, 4 Jan 2016 02:01:19 GMT", "version": "v1" } ]
2016-01-05
[ [ "Ganmor", "Elad", "" ], [ "Krumin", "Michael", "" ], [ "Rossi", "Luigi F.", "" ], [ "Carandini", "Matteo", "" ], [ "Simoncelli", "Eero P.", "" ] ]
Two-photon imaging of calcium indicators allows simultaneous recording of responses of hundreds of neurons over hours and even days, but provides a relatively indirect measure of their spiking activity. Existing deconvolution algorithms attempt to recover spikes from observed imaging data, which are then commonly subjected to the same analyses that are applied to electrophysiologically recorded spikes (e.g., estimation of average firing rates, or tuning curves). Here we show, however, that in the presence of noise this approach is often heavily biased. We propose an alternative analysis that aims to estimate the underlying rate directly, by integrating over the unobserved spikes instead of committing to a single estimate of the spike train. This approach can be used to estimate average firing rates or tuning curves directly from the imaging data, and is sufficiently flexible to incorporate prior knowledge about tuning structure. We show that directly estimated rates are more accurate than those obtained from averaging of spikes estimated through deconvolution, both on simulated data and on imaging data acquired in mouse visual cortex.
1011.2575
Kentaro Katahira
Kentaro Katahira, Kenta Suzuki, Kazuo Okanoya and Masato Okada
Complex sequencing rules of birdsong can be explained by simple hidden Markov processes
null
null
10.1371/journal.pone.0024516
null
q-bio.NC cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complex sequencing rules observed in birdsongs provide an opportunity to investigate the neural mechanism for generating complex sequential behaviors. To relate the findings from studying birdsongs to other sequential behaviors, it is crucial to characterize the statistical properties of the sequencing rules in birdsongs. However, the properties of the sequencing rules in birdsongs have not yet been fully addressed. In this study, we investigate the statistical properties of the complex birdsong of the Bengalese finch (Lonchura striata var. domestica). Based on manually annotated syllable sequences, we first show that there are significant higher-order context dependencies in Bengalese finch songs, that is, which syllable appears next depends on more than one previous syllable. This property is shared with other complex sequential behaviors. We then analyze acoustic features of the song and show that higher-order context dependencies can be explained using first-order hidden state transition dynamics with redundant hidden states. This model corresponds to hidden Markov models (HMMs), well-known statistical models with a wide range of applications in time series modeling. The song annotation with these models with first-order hidden state dynamics agreed well with manual annotation; the score was comparable to that of a second-order HMM, and surpassed the zeroth-order model (the Gaussian mixture model (GMM)), which does not use context information. Our results imply that a hierarchical representation with hidden state dynamics may underlie the neural implementation for generating complex sequences with higher-order dependencies.
[ { "created": "Thu, 11 Nov 2010 06:46:10 GMT", "version": "v1" } ]
2015-05-20
[ [ "Katahira", "Kentaro", "" ], [ "Suzuki", "Kenta", "" ], [ "Okanoya", "Kazuo", "" ], [ "Okada", "Masato", "" ] ]
Complex sequencing rules observed in birdsongs provide an opportunity to investigate the neural mechanism for generating complex sequential behaviors. To relate the findings from studying birdsongs to other sequential behaviors, it is crucial to characterize the statistical properties of the sequencing rules in birdsongs. However, the properties of the sequencing rules in birdsongs have not yet been fully addressed. In this study, we investigate the statistical properties of the complex birdsong of the Bengalese finch (Lonchura striata var. domestica). Based on manually annotated syllable sequences, we first show that there are significant higher-order context dependencies in Bengalese finch songs, that is, which syllable appears next depends on more than one previous syllable. This property is shared with other complex sequential behaviors. We then analyze acoustic features of the song and show that higher-order context dependencies can be explained using first-order hidden state transition dynamics with redundant hidden states. This model corresponds to hidden Markov models (HMMs), well-known statistical models with a wide range of applications in time series modeling. The song annotation with these models with first-order hidden state dynamics agreed well with manual annotation; the score was comparable to that of a second-order HMM, and surpassed the zeroth-order model (the Gaussian mixture model (GMM)), which does not use context information. Our results imply that a hierarchical representation with hidden state dynamics may underlie the neural implementation for generating complex sequences with higher-order dependencies.
1909.05137
Tanja Mittag
Ivan Peran and Tanja Mittag
Molecular structure in biomolecular condensates
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evidence accumulated over the past decade provides support for liquid-liquid phase separation as the mechanism underlying the formation of biomolecular condensates, which include not only membraneless organelles such as nucleoli and RNA granules, but additional assemblies involved in transcription, translation and signaling. Understanding the molecular mechanisms of condensate function requires knowledge of the structures of their constituents. Current knowledge suggests that structures formed via multivalent domain-motif interactions remain largely unchanged within condensates. Two different viewpoints exist regarding structures of disordered low-complexity domains within condensates; one argues that low-complexity domains remain largely disordered in condensates and their multivalency is encoded in short motifs called stickers, while the other argues that the sequences form cross-beta structures resembling amyloid fibrils. We review these viewpoints and highlight outstanding questions that will inform structure-function relationships for biomolecular condensates.
[ { "created": "Wed, 11 Sep 2019 15:31:12 GMT", "version": "v1" } ]
2019-09-12
[ [ "Peran", "Ivan", "" ], [ "Mittag", "Tanja", "" ] ]
Evidence accumulated over the past decade provides support for liquid-liquid phase separation as the mechanism underlying the formation of biomolecular condensates, which include not only membraneless organelles such as nucleoli and RNA granules, but additional assemblies involved in transcription, translation and signaling. Understanding the molecular mechanisms of condensate function requires knowledge of the structures of their constituents. Current knowledge suggests that structures formed via multivalent domain-motif interactions remain largely unchanged within condensates. Two different viewpoints exist regarding structures of disordered low-complexity domains within condensates; one argues that low-complexity domains remain largely disordered in condensates and their multivalency is encoded in short motifs called stickers, while the other argues that the sequences form cross-beta structures resembling amyloid fibrils. We review these viewpoints and highlight outstanding questions that will inform structure-function relationships for biomolecular condensates.
1404.5345
Zhiyuan Li
Zhiyuan Li, Simone Bianco, Zhaoyang Zhang, Chao Tang
Generic Properties of Random Gene Regulatory Networks
4 figures
Quantitative Biology, December 2013, Volume 1, Issue 4, pp 253-260
10.1007/s40484-014-0026-6
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling gene regulatory networks (GRNs) is an important topic in systems biology. Although there has been much work focusing on various specific systems, the generic behavior of GRNs with continuous variables is still elusive. In particular, it is typically not clear how attractors partition among the three types of orbits: steady state, periodic, and chaotic, and how the dynamical properties change with a network's topological characteristics. In this work, we first investigated these questions in random GRNs with different network sizes, connectivity, fraction of inhibitory links and transcription regulation rules. Then we searched for the core motifs that govern the dynamic behavior of large GRNs. We show that the stability of a random GRN is typically governed by a few embedded motifs of small sizes, and therefore can in general be understood in the context of these short motifs. Our results provide insights for the study and design of genetic networks.
[ { "created": "Mon, 21 Apr 2014 22:20:09 GMT", "version": "v1" } ]
2014-04-23
[ [ "Li", "Zhiyuan", "" ], [ "Bianco", "Simone", "" ], [ "Zhang", "Zhaoyang", "" ], [ "Tang", "Chao", "" ] ]
Modeling gene regulatory networks (GRNs) is an important topic in systems biology. Although there has been much work focusing on various specific systems, the generic behavior of GRNs with continuous variables is still elusive. In particular, it is typically not clear how attractors partition among the three types of orbits: steady state, periodic, and chaotic, and how the dynamical properties change with a network's topological characteristics. In this work, we first investigated these questions in random GRNs with different network sizes, connectivity, fraction of inhibitory links and transcription regulation rules. Then we searched for the core motifs that govern the dynamic behavior of large GRNs. We show that the stability of a random GRN is typically governed by a few embedded motifs of small sizes, and therefore can in general be understood in the context of these short motifs. Our results provide insights for the study and design of genetic networks.
1001.0009
Piotr Su{\l}kowski
Joanna I. Su{\l}kowska, Piotr Su{\l}kowski, Jos\'e N. Onuchic
Jamming proteins with slipknots and their free energy landscape
5 pages
Phys. Rev. Lett. 103, 268103 (2009)
10.1103/PhysRevLett.103.268103
CALT 68-2762
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Theoretical studies of stretching proteins with slipknots reveal a surprising growth of their unfolding times when the stretching force crosses an intermediate threshold. This behavior arises as a consequence of the existence of alternative unfolding routes that are dominant at different force ranges. Responsible for longer unfolding times at higher forces is the existence of an intermediate, metastable configuration where the slipknot is jammed. Simulations are performed with a coarse-grained model with further quantification using a refined description of the geometry of the slipknots. The simulation data is used to determine the free energy landscape (FEL) of the protein, which supports recent analytical predictions.
[ { "created": "Wed, 30 Dec 2009 23:34:27 GMT", "version": "v1" } ]
2010-01-05
[ [ "Sułkowska", "Joanna I.", "" ], [ "Sułkowski", "Piotr", "" ], [ "Onuchic", "José N.", "" ] ]
Theoretical studies of stretching proteins with slipknots reveal a surprising growth of their unfolding times when the stretching force crosses an intermediate threshold. This behavior arises as a consequence of the existence of alternative unfolding routes that are dominant at different force ranges. Responsible for longer unfolding times at higher forces is the existence of an intermediate, metastable configuration where the slipknot is jammed. Simulations are performed with a coarse-grained model with further quantification using a refined description of the geometry of the slipknots. The simulation data is used to determine the free energy landscape (FEL) of the protein, which supports recent analytical predictions.
1805.10734
William Lotter
William Lotter, Gabriel Kreiman, David Cox
A neural network trained to predict future video frames mimics critical properties of biological neuronal responses and perception
null
null
null
null
q-bio.NC cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While deep neural networks take loose inspiration from neuroscience, it is an open question how seriously to take the analogies between artificial deep networks and biological neuronal systems. Interestingly, recent work has shown that deep convolutional neural networks (CNNs) trained on large-scale image recognition tasks can serve as strikingly good models for predicting the responses of neurons in visual cortex to visual stimuli, suggesting that analogies between artificial and biological neural networks may be more than superficial. However, while CNNs capture key properties of the average responses of cortical neurons, they fail to explain other properties of these neurons. For one, CNNs typically require large quantities of labeled input data for training. Our own brains, in contrast, rarely have access to this kind of supervision, so to the extent that representations are similar between CNNs and brains, this similarity must arise via different training paths. In addition, neurons in visual cortex produce complex time-varying responses even to static inputs, and they dynamically tune themselves to temporal regularities in the visual environment. We argue that these differences are clues to fundamental differences between the computations performed in the brain and in deep networks. To begin to close the gap, here we study the emergent properties of a previously-described recurrent generative network that is trained to predict future video frames in a self-supervised manner. Remarkably, the model is able to capture a wide variety of seemingly disparate phenomena observed in visual cortex, ranging from single unit response dynamics to complex perceptual motion illusions. These results suggest potentially deep connections between recurrent predictive neural network models and the brain, providing new leads that can enrich both fields.
[ { "created": "Mon, 28 May 2018 02:15:09 GMT", "version": "v1" }, { "created": "Wed, 30 May 2018 02:47:58 GMT", "version": "v2" } ]
2018-05-31
[ [ "Lotter", "William", "" ], [ "Kreiman", "Gabriel", "" ], [ "Cox", "David", "" ] ]
While deep neural networks take loose inspiration from neuroscience, it is an open question how seriously to take the analogies between artificial deep networks and biological neuronal systems. Interestingly, recent work has shown that deep convolutional neural networks (CNNs) trained on large-scale image recognition tasks can serve as strikingly good models for predicting the responses of neurons in visual cortex to visual stimuli, suggesting that analogies between artificial and biological neural networks may be more than superficial. However, while CNNs capture key properties of the average responses of cortical neurons, they fail to explain other properties of these neurons. For one, CNNs typically require large quantities of labeled input data for training. Our own brains, in contrast, rarely have access to this kind of supervision, so to the extent that representations are similar between CNNs and brains, this similarity must arise via different training paths. In addition, neurons in visual cortex produce complex time-varying responses even to static inputs, and they dynamically tune themselves to temporal regularities in the visual environment. We argue that these differences are clues to fundamental differences between the computations performed in the brain and in deep networks. To begin to close the gap, here we study the emergent properties of a previously-described recurrent generative network that is trained to predict future video frames in a self-supervised manner. Remarkably, the model is able to capture a wide variety of seemingly disparate phenomena observed in visual cortex, ranging from single unit response dynamics to complex perceptual motion illusions. These results suggest potentially deep connections between recurrent predictive neural network models and the brain, providing new leads that can enrich both fields.
2105.14258
Giulia Bertaglia
Giulia Bertaglia and Lorenzo Pareschi
Hyperbolic compartmental models for epidemic spread on networks with uncertain data: application to the emergence of Covid-19 in Italy
null
Math.Mod.Meth.App.Scie. 31 (2021) 2495-2531
10.1142/S0218202521500548
null
q-bio.PE cs.NA math.NA physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The importance of spatial networks in the spread of an epidemic is an essential aspect in modeling the dynamics of an infectious disease. Additionally, any realistic data-driven model must take into account the large uncertainty in the values reported by official sources, such as the number of infectious individuals. In this paper we address the above aspects through a hyperbolic compartmental model on networks, in which nodes identify locations of interest, such as cities or regions, and arcs represent the ensemble of main mobility paths. The model describes the spatial movement and interactions of a population partitioned, from an epidemiological point of view, on the basis of an extended compartmental structure and divided into commuters, moving on a suburban scale, and non-commuters, acting on an urban scale. Through a diffusive rescaling, the model allows us to recover classical diffusion equations related to commuting dynamics. The numerical solution of the resulting multiscale hyperbolic system with uncertainty is then tackled using a stochastic collocation approach in combination with a finite-volume IMEX method. The ability of the model to correctly describe the spatial heterogeneity underlying the spread of an epidemic in a realistic city network is confirmed with a study of the outbreak of COVID-19 in Italy and its spread in the Lombardy Region.
[ { "created": "Sat, 29 May 2021 09:32:43 GMT", "version": "v1" } ]
2022-01-17
[ [ "Bertaglia", "Giulia", "" ], [ "Pareschi", "Lorenzo", "" ] ]
The importance of spatial networks in the spread of an epidemic is an essential aspect in modeling the dynamics of an infectious disease. Additionally, any realistic data-driven model must take into account the large uncertainty in the values reported by official sources, such as the number of infectious individuals. In this paper we address the above aspects through a hyperbolic compartmental model on networks, in which nodes identify locations of interest, such as cities or regions, and arcs represent the ensemble of main mobility paths. The model describes the spatial movement and interactions of a population partitioned, from an epidemiological point of view, on the basis of an extended compartmental structure and divided into commuters, moving on a suburban scale, and non-commuters, acting on an urban scale. Through a diffusive rescaling, the model allows us to recover classical diffusion equations related to commuting dynamics. The numerical solution of the resulting multiscale hyperbolic system with uncertainty is then tackled using a stochastic collocation approach in combination with a finite-volume IMEX method. The ability of the model to correctly describe the spatial heterogeneity underlying the spread of an epidemic in a realistic city network is confirmed with a study of the outbreak of COVID-19 in Italy and its spread in the Lombardy Region.
1810.00816
Trang-Anh Estelle Nghiem
Trang-Anh E. Nghiem, N\'uria Tort-Colet, Tomasz G\'orski, Ulisse Ferrari, Shayan Moghimyfiroozabad, Jennifer S. Goldman, Bartosz Tele\'nczuk, Cristiano Capone, Thierry Bal, Matteo di Volo, and Alain Destexhe
Cholinergic switch between two types of slow waves in cerebral cortex
37 pages, 5 main figures, 4 supplementary figures
Cerebral Cortex 2019
10.1093/cercor/bhz320
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sleep slow waves are known to participate in memory consolidation, yet slow waves occurring under anesthesia present no positive effects on memory. Here, we shed light onto this paradox, based on a combination of extracellular recordings in vivo, in vitro, and computational models. We find two types of slow waves, based on analyzing the temporal patterns of successive slow-wave events. The first type is consistently observed in natural slow-wave sleep, while the second is shown to be ubiquitous under anesthesia. Network models of spiking neurons predict that the two slow wave types emerge due to a different gain on inhibitory vs excitatory cells and that different levels of spike-frequency adaptation in excitatory cells can account for dynamical distinctions between the two types. This prediction was tested in vitro by varying adaptation strength using an agonist of acetylcholine receptors, which demonstrated a neuromodulatory switch between the two types of slow waves. Finally, we show that the first type of slow-wave dynamics is more sensitive to external stimuli, which can explain how slow waves in sleep and anesthesia differentially affect memory consolidation, as well as provide a link between slow-wave dynamics and memory diseases.
[ { "created": "Mon, 1 Oct 2018 16:53:56 GMT", "version": "v1" }, { "created": "Wed, 10 Jul 2019 11:45:46 GMT", "version": "v2" }, { "created": "Thu, 28 Nov 2019 10:46:00 GMT", "version": "v3" } ]
2020-01-09
[ [ "Nghiem", "Trang-Anh E.", "" ], [ "Tort-Colet", "Núria", "" ], [ "Górski", "Tomasz", "" ], [ "Ferrari", "Ulisse", "" ], [ "Moghimyfiroozabad", "Shayan", "" ], [ "Goldman", "Jennifer S.", "" ], [ "Teleńczuk", "Bartosz", "" ], [ "Capone", "Cristiano", "" ], [ "Bal", "Thierry", "" ], [ "di Volo", "Matteo", "" ], [ "Destexhe", "Alain", "" ] ]
Sleep slow waves are known to participate in memory consolidation, yet slow waves occurring under anesthesia present no positive effects on memory. Here, we shed light onto this paradox, based on a combination of extracellular recordings in vivo, in vitro, and computational models. We find two types of slow waves, based on analyzing the temporal patterns of successive slow-wave events. The first type is consistently observed in natural slow-wave sleep, while the second is shown to be ubiquitous under anesthesia. Network models of spiking neurons predict that the two slow wave types emerge due to a different gain on inhibitory vs excitatory cells and that different levels of spike-frequency adaptation in excitatory cells can account for dynamical distinctions between the two types. This prediction was tested in vitro by varying adaptation strength using an agonist of acetylcholine receptors, which demonstrated a neuromodulatory switch between the two types of slow waves. Finally, we show that the first type of slow-wave dynamics is more sensitive to external stimuli, which can explain how slow waves in sleep and anesthesia differentially affect memory consolidation, as well as provide a link between slow-wave dynamics and memory diseases.
2010.12075
Enrico Borriello Dr.
Enrico Borriello and Bryan C. Daniels
The basis of easy controllability in Boolean networks
44 pages, 17 figures, 2 tables
Nat Commun 12, 5227 (2021)
10.1038/s41467-021-25533-3
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Effective control of biological systems can often be achieved through the control of a surprisingly small number of distinct variables. We bring clarity to such results using the formalism of Boolean dynamical networks, analyzing the effectiveness of external control in selecting a desired final state when that state is among the original attractors of the dynamics. Analyzing 49 existing biological network models, we find strong numerical evidence that the average number of nodes that must be forced scales logarithmically with the number of original attractors. This suggests that biological networks may be typically easy to control even when the number of interacting components is large. We provide a theoretical explanation of the scaling by separating controlling nodes into three types: those that act as inputs, those that distinguish among attractors, and any remaining nodes. We further identify characteristics of dynamics that can invalidate this scaling, and speculate about how this relates more broadly to non-biological systems.
[ { "created": "Thu, 22 Oct 2020 21:29:58 GMT", "version": "v1" }, { "created": "Thu, 9 Sep 2021 23:32:23 GMT", "version": "v2" } ]
2021-09-13
[ [ "Borriello", "Enrico", "" ], [ "Daniels", "Bryan C.", "" ] ]
Effective control of biological systems can often be achieved through the control of a surprisingly small number of distinct variables. We bring clarity to such results using the formalism of Boolean dynamical networks, analyzing the effectiveness of external control in selecting a desired final state when that state is among the original attractors of the dynamics. Analyzing 49 existing biological network models, we find strong numerical evidence that the average number of nodes that must be forced scales logarithmically with the number of original attractors. This suggests that biological networks may be typically easy to control even when the number of interacting components is large. We provide a theoretical explanation of the scaling by separating controlling nodes into three types: those that act as inputs, those that distinguish among attractors, and any remaining nodes. We further identify characteristics of dynamics that can invalidate this scaling, and speculate about how this relates more broadly to non-biological systems.
2304.12325
Subhrangshu Adhikary
Subhrangshu Adhikary, Sudhir Kumar Chaturvedi, Saikat Banerjee and Sourav Basu
Dependence of Physiochemical Features on Marine Chlorophyll Analysis with Learning Techniques
Advances in Environment Engineering and Management. Year 2021. Springer Proceedings in Earth and Environmental Sciences. Springer, Cham. https://doi.org/10.1007/978-3-030-79065-3_29
Advances in Environment Engineering and Management. Year 2021. Springer Proceedings in Earth and Environmental Sciences. Springer, Cham
10.1007/978-3-030-79065-3_29
null
q-bio.QM cs.AI cs.LG physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Marine chlorophyll, which is present within phytoplankton, is the basis of photosynthesis, and phytoplankton have a high significance in sustaining ecological balance as they contribute substantially to global primary productivity and form part of the food chain of many marine organisms. An imbalance in the concentrations of phytoplankton can disrupt the ecological balance. The growth of phytoplankton depends upon optimum concentrations of physiochemical constituents like iron, nitrates, phosphates, pH level, salinity, etc., and deviations from an ideal concentration can affect the growth of phytoplankton, which can ultimately disrupt the ecosystem at a large scale. Thus the analysis of such constituents is highly significant for estimating the probable growth of marine phytoplankton. Advancements in remote sensing technologies have improved the scope to remotely study the physiochemical constituents on a global scale. Machine learning techniques have made it possible to predict marine chlorophyll levels based on physiochemical properties, and deep learning does the same in a more advanced manner, simulating the working principle of a human brain. In this study, we used machine learning and deep learning for the Bay of Bengal to establish a regression model of chlorophyll levels based on physiochemical features and discussed its reliability and performance for different regression models. This could help estimate the amount of chlorophyll present in water bodies based on physiochemical features, so that plans can be made early in case a possibility arises of disruption in the ecosystem due to an imbalance in marine phytoplankton.
[ { "created": "Sun, 23 Apr 2023 19:46:03 GMT", "version": "v1" } ]
2023-04-26
[ [ "Adhikary", "Subhrangshu", "" ], [ "Chaturvedi", "Sudhir Kumar", "" ], [ "Banerjee", "Saikat", "" ], [ "Basu", "Sourav", "" ] ]
Marine chlorophyll, which is present within phytoplankton, is the basis of photosynthesis, and phytoplankton have a high significance in sustaining ecological balance as they contribute substantially to global primary productivity and form part of the food chain of many marine organisms. An imbalance in the concentrations of phytoplankton can disrupt the ecological balance. The growth of phytoplankton depends upon optimum concentrations of physiochemical constituents like iron, nitrates, phosphates, pH level, salinity, etc., and deviations from an ideal concentration can affect the growth of phytoplankton, which can ultimately disrupt the ecosystem at a large scale. Thus the analysis of such constituents is highly significant for estimating the probable growth of marine phytoplankton. Advancements in remote sensing technologies have improved the scope to remotely study the physiochemical constituents on a global scale. Machine learning techniques have made it possible to predict marine chlorophyll levels based on physiochemical properties, and deep learning does the same in a more advanced manner, simulating the working principle of a human brain. In this study, we used machine learning and deep learning for the Bay of Bengal to establish a regression model of chlorophyll levels based on physiochemical features and discussed its reliability and performance for different regression models. This could help estimate the amount of chlorophyll present in water bodies based on physiochemical features, so that plans can be made early in case a possibility arises of disruption in the ecosystem due to an imbalance in marine phytoplankton.
2010.07826
Gjalt Huppes
Gjalt Huppes, Ruben Huele
Mass Flow Analysis of SARS-CoV-2 for quantified COVID-19 Risk Analysis
18 pages. 5 figures and 3 tables, included in the text body. Paper under review in the Journal of Industrial Ecology. Keywords: SARS-CoV-2; COVID-19; Virion Mass Balance; Routes to Exposure; Epidemic; Pandemic; Ventilation Rate; Risk Analysis
null
null
null
q-bio.QM econ.GN q-fin.EC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How may exposure risks to SARS-CoV-2 be assessed quantitatively? The material metabolism approach of Industrial Ecology can be applied to the mass flows of these virions by their numbers, as a key step in the analysis of the current pandemic. Several transmission routes of SARS-2 from emission by a person to exposure of another person have been modelled and quantified. The starting point is a COVID-19 illness progression model specifying rising emissions by an infected person: the human virion factory. The first route covers closed spaces, with an emission, concentration, and decay model quantifying exposure. A next set of routes covers person-to-person contacts mostly in open spaces, modelling the spatial distribution of exhales towards inhalation. These models also cover incidental exposures, like coughs and sneezes, and exposure through objects. Routes through animal contacts, excrements, and food have not been quantified. Potential exposures differ by six orders of magnitude. Closed rooms, even with reasonable (VR 2) to good (VR 5) ventilation, constitute the major exposure risks. Close person-to-person contacts of longer duration create two orders of magnitude lower exposure risks. Open spaces may create risks an order of magnitude lower again. Bursts of larger droplets may cause a common cold but not viral pneumonia, as the virions in such droplets cannot reach the alveoli. Fomites have not shown viable viruses in hospitals, let alone infections. Infection by animals might be possible, as by cats and ferrets kept as pets. These results indicate priority domains for individual and collective measures. The wide divergence in outcomes indicates robustness to most modelling and data improvements, hardly leading to major changes in relative exposure potentials. However, models and data can substantially be improved.
[ { "created": "Thu, 15 Oct 2020 15:33:39 GMT", "version": "v1" }, { "created": "Thu, 29 Oct 2020 14:47:36 GMT", "version": "v2" } ]
2020-10-30
[ [ "Huppes", "Gjalt", "" ], [ "Huele", "Ruben", "" ] ]
How may exposure risks to SARS-CoV-2 be assessed quantitatively? The material metabolism approach of Industrial Ecology can be applied to the mass flows of these virions by their numbers, as a key step in the analysis of the current pandemic. Several transmission routes of SARS-2 from emission by a person to exposure of another person have been modelled and quantified. The starting point is a COVID-19 illness progression model specifying rising emissions by an infected person: the human virion factory. The first route covers closed spaces, with an emission, concentration, and decay model quantifying exposure. A next set of routes covers person-to-person contacts mostly in open spaces, modelling the spatial distribution of exhales towards inhalation. These models also cover incidental exposures, like coughs and sneezes, and exposure through objects. Routes through animal contacts, excrements, and food have not been quantified. Potential exposures differ by six orders of magnitude. Closed rooms, even with reasonable (VR 2) to good (VR 5) ventilation, constitute the major exposure risks. Close person-to-person contacts of longer duration create two orders of magnitude lower exposure risks. Open spaces may create risks an order of magnitude lower again. Bursts of larger droplets may cause a common cold but not viral pneumonia, as the virions in such droplets cannot reach the alveoli. Fomites have not shown viable viruses in hospitals, let alone infections. Infection by animals might be possible, as by cats and ferrets kept as pets. These results indicate priority domains for individual and collective measures. The wide divergence in outcomes indicates robustness to most modelling and data improvements, hardly leading to major changes in relative exposure potentials. However, models and data can substantially be improved.
q-bio/0311019
Reka Albert
Reka Albert, Hans G. Othmer
The topology of the regulatory interactions predicts the expression pattern of the segment polarity genes in Drosophila melanogaster
24 pages, 9 figures
Journal of Theoretical Biology 223, 1-18 (2003)
null
null
q-bio.MN
null
Expression of the Drosophila segment polarity genes is initiated by a prepattern of pair-rule gene products and maintained by a network of regulatory interactions throughout several stages of embryonic development. Analysis of a model of gene interactions based on differential equations showed that wild-type expression patterns of these genes can be obtained for a wide range of kinetic parameters, which suggests that the steady states are determined by the topology of the network and the type of regulatory interactions between components, not the detailed form of the rate laws. To investigate this, we propose and analyze a Boolean model of this network which is based on a binary ON/OFF representation of transcription and protein levels, and in which the interactions are formulated as logical functions. In this model the spatial and temporal patterns of gene expression are determined by the topology of the network and whether components are present or absent, rather than the absolute levels of the mRNAs and proteins and the functional details of their interactions. The model is able to reproduce the wild type gene expression patterns, as well as the ectopic expression patterns observed in over-expression experiments and various mutants. Furthermore, we compute explicitly all steady states of the network and identify the basin of attraction of each steady state. The model gives important insights into the functioning of the segment polarity gene network, such as the crucial role of the wingless and sloppy paired genes, and the network's ability to correct errors in the prepattern.
[ { "created": "Thu, 13 Nov 2003 15:23:33 GMT", "version": "v1" } ]
2007-05-23
[ [ "Albert", "Reka", "" ], [ "Othmer", "Hans G.", "" ] ]
Expression of the Drosophila segment polarity genes is initiated by a prepattern of pair-rule gene products and maintained by a network of regulatory interactions throughout several stages of embryonic development. Analysis of a model of gene interactions based on differential equations showed that wild-type expression patterns of these genes can be obtained for a wide range of kinetic parameters, which suggests that the steady states are determined by the topology of the network and the type of regulatory interactions between components, not the detailed form of the rate laws. To investigate this, we propose and analyze a Boolean model of this network which is based on a binary ON/OFF representation of transcription and protein levels, and in which the interactions are formulated as logical functions. In this model the spatial and temporal patterns of gene expression are determined by the topology of the network and whether components are present or absent, rather than the absolute levels of the mRNAs and proteins and the functional details of their interactions. The model is able to reproduce the wild type gene expression patterns, as well as the ectopic expression patterns observed in over-expression experiments and various mutants. Furthermore, we compute explicitly all steady states of the network and identify the basin of attraction of each steady state. The model gives important insights into the functioning of the segment polarity gene network, such as the crucial role of the wingless and sloppy paired genes, and the network's ability to correct errors in the prepattern.
1906.02487
Qiulei Dong
Qiulei Dong and Bo Liu and Zhanyi Hu
Non-uniqueness phenomenon of object representation in modelling IT cortex by deep convolutional neural network (DCNN)
24 pages, 5 figures
null
null
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently DCNN (Deep Convolutional Neural Network) has been advocated as a general and promising modelling approach for neural object representation in primate inferotemporal cortex. In this work, we show that an inherent non-uniqueness problem exists in DCNN-based modelling of image object representations. This non-uniqueness phenomenon reveals, to some extent, the theoretical limitation of this general modelling approach, and calls for due attention in practice.
[ { "created": "Thu, 6 Jun 2019 09:08:21 GMT", "version": "v1" } ]
2019-06-07
[ [ "Dong", "Qiulei", "" ], [ "Liu", "Bo", "" ], [ "Hu", "Zhanyi", "" ] ]
Recently DCNN (Deep Convolutional Neural Network) has been advocated as a general and promising modelling approach for neural object representation in primate inferotemporal cortex. In this work, we show that an inherent non-uniqueness problem exists in DCNN-based modelling of image object representations. This non-uniqueness phenomenon reveals, to some extent, the theoretical limitation of this general modelling approach, and calls for due attention in practice.
1111.6172
Valery Kirzhner
V.M. Kirzhner, S. Frenkel, A.B. Korol
Non-alignment comparison of human and high primate genomes
null
null
10.1101/gr.923303
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compositional spectra (CS) analysis based on k-mer scoring of DNA sequences was employed in this study for dot-plot comparison of human and primate genomes. The detection of extended conserved synteny regions was based on continuous fuzzy similarity rather than on chains of discrete anchors (genes or highly conserved noncoding elements). In addition to the high correspondence found in the comparisons of whole-genome sequences, a good similarity was also found after masking gene sequences, indicating that CS analysis manages to reveal phylogenetic signal in the organization of noncoding part of the genome sequences, including repetitive DNA and the genome "dark matter". Obviously, the possibility to reveal parallel ordering depends on the signal of common ancestor sequence organization varying locally along the corresponding segments of the compared genomes. We explored two sources contributing to this signal: sequence composition (GC content) and sequence organization (abundances of k-mers in the usual A,T,G,C or purine-pyrimidine alphabets). Whole-genome comparisons based on GC distribution along the analyzed sequences indeed gives reasonable results, but combining it with k-mer abundances dramatically improves the ordering quality, indicating that compositional and organizational heterogeneity comprise complementary sources of information on evolutionary conserved similarity of genome sequences.
[ { "created": "Sat, 26 Nov 2011 15:58:16 GMT", "version": "v1" } ]
2011-12-02
[ [ "Kirzhner", "V. M.", "" ], [ "Frenkel", "S.", "" ], [ "Korol", "A. B.", "" ] ]
Compositional spectra (CS) analysis based on k-mer scoring of DNA sequences was employed in this study for dot-plot comparison of human and primate genomes. The detection of extended conserved synteny regions was based on continuous fuzzy similarity rather than on chains of discrete anchors (genes or highly conserved noncoding elements). In addition to the high correspondence found in the comparisons of whole-genome sequences, a good similarity was also found after masking gene sequences, indicating that CS analysis manages to reveal phylogenetic signal in the organization of noncoding part of the genome sequences, including repetitive DNA and the genome "dark matter". Obviously, the possibility to reveal parallel ordering depends on the signal of common ancestor sequence organization varying locally along the corresponding segments of the compared genomes. We explored two sources contributing to this signal: sequence composition (GC content) and sequence organization (abundances of k-mers in the usual A,T,G,C or purine-pyrimidine alphabets). Whole-genome comparisons based on GC distribution along the analyzed sequences indeed gives reasonable results, but combining it with k-mer abundances dramatically improves the ordering quality, indicating that compositional and organizational heterogeneity comprise complementary sources of information on evolutionary conserved similarity of genome sequences.
2012.05817
Stephen Lindsly
Stephen Lindsly, Maya Gupta, Cooper Stansbury, Indika Rajapakse
Understanding Memory B Cell Selection
15 pages, 5 figures, 1 algorithm
null
null
null
q-bio.CB
http://creativecommons.org/licenses/by/4.0/
The mammalian adaptive immune system has evolved over millions of years to become an incredibly effective defense against foreign antigens. The adaptive immune system's humoral response creates plasma B cells and memory B cells, each with their own immunological objectives. The affinity maturation process is widely viewed as a heuristic to solve the global optimization problem of finding B cells with high affinity to the antigen. However, memory B cells appear to be purposely selected earlier in the affinity maturation process and have lower affinity. We propose that this memory B cell selection process may be an approximate solution to two optimization problems: optimizing for affinity to similar antigens in the future despite mutations or other minor differences, and optimizing to warm start the generation of plasma B cells in the future. We use simulations to provide evidence for our hypotheses, taking into account data showing that certain B cell mutations are more likely than others. Our findings are consistent with memory B cells having high-affinity to mutated antigens, but do not provide strong evidence that memory B cells will be more useful than selected naive B cells for seeding the secondary germinal centers.
[ { "created": "Wed, 9 Dec 2020 14:39:25 GMT", "version": "v1" }, { "created": "Wed, 9 Jun 2021 20:41:06 GMT", "version": "v2" }, { "created": "Thu, 17 Jun 2021 20:04:17 GMT", "version": "v3" } ]
2021-06-21
[ [ "Lindsly", "Stephen", "" ], [ "Gupta", "Maya", "" ], [ "Stansbury", "Cooper", "" ], [ "Rajapakse", "Indika", "" ] ]
The mammalian adaptive immune system has evolved over millions of years to become an incredibly effective defense against foreign antigens. The adaptive immune system's humoral response creates plasma B cells and memory B cells, each with their own immunological objectives. The affinity maturation process is widely viewed as a heuristic to solve the global optimization problem of finding B cells with high affinity to the antigen. However, memory B cells appear to be purposely selected earlier in the affinity maturation process and have lower affinity. We propose that this memory B cell selection process may be an approximate solution to two optimization problems: optimizing for affinity to similar antigens in the future despite mutations or other minor differences, and optimizing to warm start the generation of plasma B cells in the future. We use simulations to provide evidence for our hypotheses, taking into account data showing that certain B cell mutations are more likely than others. Our findings are consistent with memory B cells having high-affinity to mutated antigens, but do not provide strong evidence that memory B cells will be more useful than selected naive B cells for seeding the secondary germinal centers.
q-bio/0504023
Aniello Buonocore
A. Buonocore, L. Caputo, Y. Ishii, E. Pirozzi, T. Yanagida and L. M. Ricciardi
On Myosin II dynamics in the presence of external loads
6 figures, 8 tables, to appear on Biosystems
null
null
null
q-bio.SC
null
We address the controversial hot question concerning the validity of the loose coupling versus the lever-arm theories in the actomyosin dynamics by re-interpreting and extending the phenomenological washboard potential model proposed by some of us in a previous paper. In this new model a Brownian motion harnessing thermal energy is assumed to co-exist with the deterministic swing of the lever-arm, to yield an excellent fit of the set of data obtained by some of us on the sliding of Myosin II heads on immobilized actin filaments under various load conditions. Our theoretical arguments are complemented by accurate numerical simulations, and the robustness of the model is tested via different choices of parameters and potential profiles.
[ { "created": "Mon, 18 Apr 2005 13:45:28 GMT", "version": "v1" } ]
2007-05-23
[ [ "Buonocore", "A.", "" ], [ "Caputo", "L.", "" ], [ "Ishii", "Y.", "" ], [ "Pirozzi", "E.", "" ], [ "Yanagida", "T.", "" ], [ "Ricciardi", "L. M.", "" ] ]
We address the controversial hot question concerning the validity of the loose coupling versus the lever-arm theories in the actomyosin dynamics by re-interpreting and extending the phenomenological washboard potential model proposed by some of us in a previous paper. In this new model a Brownian motion harnessing thermal energy is assumed to co-exist with the deterministic swing of the lever-arm, to yield an excellent fit of the set of data obtained by some of us on the sliding of Myosin II heads on immobilized actin filaments under various load conditions. Our theoretical arguments are complemented by accurate numerical simulations, and the robustness of the model is tested via different choices of parameters and potential profiles.
1407.1022
Alexander Stewart
Alexander J. Stewart and Joshua B. Plotkin
Small games and long memories promote cooperation
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complex social behaviors lie at the heart of many of the challenges facing evolutionary biology, sociology, economics, and beyond. For evolutionary biologists in particular the question is often how such behaviors can arise \textit{de novo} in a simple evolving system. How can group behaviors such as collective action, or decision making that accounts for memories of past experience, emerge and persist? Evolutionary game theory provides a framework for formalizing these questions and admitting them to rigorous study. Here we develop such a framework to study the evolution of sustained collective action in multi-player public-goods games, in which players have arbitrarily long memories of prior rounds of play and can react to their experience in an arbitrary way. To study this problem we construct a coordinate system for memory-$m$ strategies in iterated $n$-player games that permits us to characterize all the cooperative strategies that resist invasion by any mutant strategy, and thus stabilize cooperative behavior. We show that while larger games inevitably make cooperation harder to evolve, there nevertheless always exists a positive volume of strategies that stabilize cooperation provided the population size is large enough. We also show that, when games are small, longer-memory strategies make cooperation easier to evolve, by increasing the number of ways to stabilize cooperation. Finally we explore the co-evolution of behavior and memory capacity, and we find that longer-memory strategies tend to evolve in small games, which in turn drives the evolution of cooperation even when the benefits for cooperation are low.
[ { "created": "Thu, 3 Jul 2014 19:25:00 GMT", "version": "v1" }, { "created": "Thu, 29 Oct 2015 22:57:53 GMT", "version": "v2" }, { "created": "Mon, 16 Nov 2015 23:20:01 GMT", "version": "v3" } ]
2015-11-18
[ [ "Stewart", "Alexander J.", "" ], [ "Plotkin", "Joshua B.", "" ] ]
Complex social behaviors lie at the heart of many of the challenges facing evolutionary biology, sociology, economics, and beyond. For evolutionary biologists in particular the question is often how such behaviors can arise \textit{de novo} in a simple evolving system. How can group behaviors such as collective action, or decision making that accounts for memories of past experience, emerge and persist? Evolutionary game theory provides a framework for formalizing these questions and admitting them to rigorous study. Here we develop such a framework to study the evolution of sustained collective action in multi-player public-goods games, in which players have arbitrarily long memories of prior rounds of play and can react to their experience in an arbitrary way. To study this problem we construct a coordinate system for memory-$m$ strategies in iterated $n$-player games that permits us to characterize all the cooperative strategies that resist invasion by any mutant strategy, and thus stabilize cooperative behavior. We show that while larger games inevitably make cooperation harder to evolve, there nevertheless always exists a positive volume of strategies that stabilize cooperation provided the population size is large enough. We also show that, when games are small, longer-memory strategies make cooperation easier to evolve, by increasing the number of ways to stabilize cooperation. Finally we explore the co-evolution of behavior and memory capacity, and we find that longer-memory strategies tend to evolve in small games, which in turn drives the evolution of cooperation even when the benefits for cooperation are low.
2403.01844
Loretta del Mercato
N. Depalma, S. D Ugo, F. Manoochehri, A. Libia, W. Sergi, T. R. L. Marchese, S. Forciniti, L. L. del Mercato, P. Piscitelli, S. Garritano, F. Castellana, R. Zupo and M. G. Spampinato
NIR ICG-Enhanced Fluorescence: A Quantitative Evaluation of Bowel Microperfusion and Its Relation to Central Perfusion in Colorectal Surgery
null
Cancers 2023, 15, 23, 5528
10.3390/cancers15235528
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: To date, neither standardized protocols nor a quantitative assessment of near-infrared fluorescence angiography with indocyanine green (NIR-ICG) is available. The aim of this study was to evaluate the timing of fluorescence as a reproducible parameter and its efficacy in predicting anastomotic leakage (AL) in colorectal surgery. Methods: A consecutive cohort of 108 patients undergoing minimally invasive elective procedures for colorectal cancer was prospectively enrolled. The difference between macro- and microperfusion, DeltaT, was obtained by calculating the timing of fluorescence at the level of the iliac artery division and the colonic wall, respectively. Results: Subjects with a DeltaT greater than 15.5 s had a higher tendency to develop an AL (p less than 0.01). The DeltaT/heart rate interaction was found to predict AL with an odds ratio of 1.02 (p less than 0.01); a cut-off threshold of 832 was identified (sensitivity 0.86, specificity 0.77). Perfusion parameters were also associated with a faster bowel motility resumption and a reduced length of hospital stay. Conclusions: The analysis of the timing of fluorescence provides a quantitative, easy evaluation of tissue perfusion. A DeltaT/HR interaction greater than 832 may be used as a real-time parameter to guide surgical decision making in colorectal surgery.
[ { "created": "Mon, 4 Mar 2024 08:51:01 GMT", "version": "v1" } ]
2024-03-05
[ [ "Depalma", "N.", "" ], [ "Ugo", "S. D", "" ], [ "Manoochehri", "F.", "" ], [ "Libia", "A.", "" ], [ "Sergi", "W.", "" ], [ "Marchese", "T. R. L.", "" ], [ "Forciniti", "S.", "" ], [ "del Mercato", "L. L.", "" ], [ "Piscitelli", "P.", "" ], [ "Garritano", "S.", "" ], [ "Castellana", "F.", "" ], [ "Zupo", "R.", "" ], [ "Spampinato", "M. G.", "" ] ]
Background: To date, neither standardized protocols nor a quantitative assessment of near-infrared fluorescence angiography with indocyanine green (NIR-ICG) is available. The aim of this study was to evaluate the timing of fluorescence as a reproducible parameter and its efficacy in predicting anastomotic leakage (AL) in colorectal surgery. Methods: A consecutive cohort of 108 patients undergoing minimally invasive elective procedures for colorectal cancer was prospectively enrolled. The difference between macro- and microperfusion, DeltaT, was obtained by calculating the timing of fluorescence at the level of the iliac artery division and the colonic wall, respectively. Results: Subjects with a DeltaT greater than 15.5 s had a higher tendency to develop an AL (p less than 0.01). The DeltaT/heart rate interaction was found to predict AL with an odds ratio of 1.02 (p less than 0.01); a cut-off threshold of 832 was identified (sensitivity 0.86, specificity 0.77). Perfusion parameters were also associated with a faster bowel motility resumption and a reduced length of hospital stay. Conclusions: The analysis of the timing of fluorescence provides a quantitative, easy evaluation of tissue perfusion. A DeltaT/HR interaction greater than 832 may be used as a real-time parameter to guide surgical decision making in colorectal surgery.
1205.3547
Shoma Tanabe
Shoma Tanabe, Hideyuki Suzuki, Naoki Masuda
Indirect reciprocity with trinary reputations
5 figures, 1 table
Journal of Theoretical Biology, 317, 338--347 (2013)
10.1016/j.jtbi.2012.10.031
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Indirect reciprocity is a reputation-based mechanism for cooperation in social dilemma situations when individuals do not repeatedly meet. The conditions under which cooperation based on indirect reciprocity occurs have been examined in great detail. Most previous theoretical analyses assumed for mathematical tractability that an individual possesses a binary reputation value, i.e., good or bad, which depends on their past actions and other factors. However, in real situations, reputations of individuals may be multiple valued. Another puzzling discrepancy between the theory and experiments is the status of the so-called image scoring, in which cooperation and defection are judged to be good and bad, respectively, independent of other factors. Such an assessment rule is found in behavioral experiments, whereas it is known to be unstable in theory. In the present study, we fill both gaps by analyzing a trinary reputation model. By an exhaustive search, we identify all the cooperative and stable equilibria composed of a homogeneous population or a heterogeneous population containing two types of players. Some results derived for the trinary reputation model are direct extensions of those for the binary model. However, we find that the trinary model allows cooperation under image scoring under some mild conditions.
[ { "created": "Wed, 16 May 2012 03:42:06 GMT", "version": "v1" }, { "created": "Fri, 1 Jun 2012 09:05:56 GMT", "version": "v2" }, { "created": "Wed, 21 Nov 2012 07:40:21 GMT", "version": "v3" } ]
2012-11-22
[ [ "Tanabe", "Shoma", "" ], [ "Suzuki", "Hideyuki", "" ], [ "Masuda", "Naoki", "" ] ]
Indirect reciprocity is a reputation-based mechanism for cooperation in social dilemma situations when individuals do not repeatedly meet. The conditions under which cooperation based on indirect reciprocity occurs have been examined in great detail. Most previous theoretical analyses assumed for mathematical tractability that an individual possesses a binary reputation value, i.e., good or bad, which depends on their past actions and other factors. However, in real situations, reputations of individuals may be multiple valued. Another puzzling discrepancy between the theory and experiments is the status of the so-called image scoring, in which cooperation and defection are judged to be good and bad, respectively, independent of other factors. Such an assessment rule is found in behavioral experiments, whereas it is known to be unstable in theory. In the present study, we fill both gaps by analyzing a trinary reputation model. By an exhaustive search, we identify all the cooperative and stable equilibria composed of a homogeneous population or a heterogeneous population containing two types of players. Some results derived for the trinary reputation model are direct extensions of those for the binary model. However, we find that the trinary model allows cooperation under image scoring under some mild conditions.
1702.08411
Jose Fontanari
Jos\'e F. Fontanari and Mauro Santos
The revival of the Baldwin Effect
null
Eur. Phys. J. B (2017) 90: 186
10.1140/epjb/e2017-80409-8
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The idea that a genetically fixed behavior evolved from the once differential learning ability of individuals that performed the behavior is known as the Baldwin effect. A highly influential paper [Hinton G.E., Nowlan S.J., 1987. How learning can guide evolution. Complex Syst. 1, 495-502] claimed that this effect can be observed in silico, but here we argue that what was actually shown is that the learning ability is easily selected for. Then we demonstrate the Baldwin effect to happen in the in silico scenario by estimating the probability and waiting times for the learned behavior to become innate. Depending on parameter values, we find that learning can increase the chance of fixation of the learned behavior by several orders of magnitude compared with the non-learning situation.
[ { "created": "Mon, 27 Feb 2017 18:11:57 GMT", "version": "v1" }, { "created": "Thu, 27 Apr 2017 19:44:04 GMT", "version": "v2" }, { "created": "Wed, 9 Aug 2017 13:37:14 GMT", "version": "v3" } ]
2017-10-16
[ [ "Fontanari", "José F.", "" ], [ "Santos", "Mauro", "" ] ]
The idea that a genetically fixed behavior evolved from the once differential learning ability of individuals that performed the behavior is known as the Baldwin effect. A highly influential paper [Hinton G.E., Nowlan S.J., 1987. How learning can guide evolution. Complex Syst. 1, 495-502] claimed that this effect can be observed in silico, but here we argue that what was actually shown is that the learning ability is easily selected for. Then we demonstrate the Baldwin effect to happen in the in silico scenario by estimating the probability and waiting times for the learned behavior to become innate. Depending on parameter values, we find that learning can increase the chance of fixation of the learned behavior by several orders of magnitude compared with the non-learning situation.
1808.05643
Tarik Gouhier
Pradeep Pillai, Tarik C. Gouhier
Not even wrong: The spurious link between biodiversity and ecosystem functioning
Keywords: biodiversity; ecosystem functioning; complementarity effect; selection effect; Jensen's inequality; Price equation; species coexistence
Ecology 100 (2019) e02645
10.1002/ecy.2645
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Resolving the relationship between biodiversity and ecosystem functioning has been one of the central goals of modern ecology. Early debates about the relationship were finally resolved with the advent of a statistical partitioning scheme that decomposed the biodiversity effect into a "selection" effect and a "complementarity" effect. We prove that both the biodiversity effect and its statistical decomposition into selection and complementarity are fundamentally flawed because these methods use a na\"ive null expectation based on neutrality, likely leading to an overestimate of the net biodiversity effect, and they fail to account for the nonlinear abundance-ecosystem functioning relationships observed in nature. Furthermore, under such nonlinearity no statistical scheme can be devised to partition the biodiversity effects. We also present an alternative metric providing a more reasonable estimate of biodiversity effect. Our results suggest that all studies conducted since the early 1990s likely overestimated the positive effects of biodiversity on ecosystem functioning.
[ { "created": "Thu, 16 Aug 2018 19:00:13 GMT", "version": "v1" } ]
2019-07-12
[ [ "Pillai", "Pradeep", "" ], [ "Gouhier", "Tarik C.", "" ] ]
Resolving the relationship between biodiversity and ecosystem functioning has been one of the central goals of modern ecology. Early debates about the relationship were finally resolved with the advent of a statistical partitioning scheme that decomposed the biodiversity effect into a "selection" effect and a "complementarity" effect. We prove that both the biodiversity effect and its statistical decomposition into selection and complementarity are fundamentally flawed because these methods use a na\"ive null expectation based on neutrality, likely leading to an overestimate of the net biodiversity effect, and they fail to account for the nonlinear abundance-ecosystem functioning relationships observed in nature. Furthermore, under such nonlinearity no statistical scheme can be devised to partition the biodiversity effects. We also present an alternative metric providing a more reasonable estimate of biodiversity effect. Our results suggest that all studies conducted since the early 1990s likely overestimated the positive effects of biodiversity on ecosystem functioning.
q-bio/0503030
Michael Deem
Vishal Gupta, David J. Earl, Michael W. Deem
Quantifying Influenza Vaccine Efficacy and Antigenic Distance
25 pages; 5 figures; 1 table; some typos fixed
null
null
null
q-bio.BM
null
We introduce a new measure of antigenic distance between influenza A vaccine and circulating strains. The measure correlates well with efficacies of the H3N2 influenza A component of the annual vaccine between 1971 and 2004, as do results of a theory of the immune response to influenza following vaccination. This new measure of antigenic distance is correlated with vaccine efficacy to a greater degree than are current state-of-the-art phylogenetic sequence analyses or ferret antisera inhibition assays. We suggest that this new measure of antigenic distance be used in the design of the annual influenza vaccine and in the interpretation of vaccine efficacy monitoring.
[ { "created": "Sat, 19 Mar 2005 17:57:48 GMT", "version": "v1" }, { "created": "Thu, 16 Jun 2005 16:55:40 GMT", "version": "v2" } ]
2007-05-23
[ [ "Gupta", "Vishal", "" ], [ "Earl", "David J.", "" ], [ "Deem", "Michael W.", "" ] ]
We introduce a new measure of antigenic distance between influenza A vaccine and circulating strains. The measure correlates well with efficacies of the H3N2 influenza A component of the annual vaccine between 1971 and 2004, as do results of a theory of the immune response to influenza following vaccination. This new measure of antigenic distance is correlated with vaccine efficacy to a greater degree than are current state-of-the-art phylogenetic sequence analyses or ferret antisera inhibition assays. We suggest that this new measure of antigenic distance be used in the design of the annual influenza vaccine and in the interpretation of vaccine efficacy monitoring.
1912.08579
William Bialek
William Bialek, Thomas Gregor, and Ga\v{s}per Tka\v{c}ik
Action at a distance in transcriptional regulation
null
null
null
null
q-bio.SC cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is increasing evidence that protein binding to specific sites along DNA can activate the reading out of genetic information without coming into direct physical contact with the gene. There also is evidence that these distant but interacting sites are embedded in a liquid droplet of proteins which condenses out of the surrounding solution. We argue that droplet-mediated interactions can account for crucial features of gene regulation only if the droplet is poised at a non-generic point in its phase diagram. We explore a minimal model that embodies this idea, show that this model has a natural mechanism for self-tuning, and suggest direct experimental tests.
[ { "created": "Wed, 18 Dec 2019 13:10:11 GMT", "version": "v1" } ]
2019-12-19
[ [ "Bialek", "William", "" ], [ "Gregor", "Thomas", "" ], [ "Tkačik", "Gašper", "" ] ]
There is increasing evidence that protein binding to specific sites along DNA can activate the reading out of genetic information without coming into direct physical contact with the gene. There also is evidence that these distant but interacting sites are embedded in a liquid droplet of proteins which condenses out of the surrounding solution. We argue that droplet-mediated interactions can account for crucial features of gene regulation only if the droplet is poised at a non-generic point in its phase diagram. We explore a minimal model that embodies this idea, show that this model has a natural mechanism for self-tuning, and suggest direct experimental tests.
1701.06931
Hanli Qiao
Han Li Qiao and Ezio Venturino
A model for an aquatic ecosystem
null
AIP Conference Proceedings 1738, 390009 (2016)
10.1063/1.4952183
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An ecosystem made of nutrients, plants, detritus and dissolved oxygen is presented. Its equilibria are established. Sufficient conditions for the existence of the coexistence equilibrium are derived and its feasibility is discussed in every detail.
[ { "created": "Mon, 23 Jan 2017 15:54:34 GMT", "version": "v1" } ]
2017-01-25
[ [ "Qiao", "Han Li", "" ], [ "Venturino", "Ezio", "" ] ]
An ecosystem made of nutrients, plants, detritus and dissolved oxygen is presented. Its equilibria are established. Sufficient conditions for the existence of the coexistence equilibrium are derived and its feasibility is discussed in every detail.
1503.01984
Sona John
Sona John and Sarada Seetharaman
Exploiting the adaptation dynamics to predict the distribution of beneficial fitness effects
Communicated to PLOS ONE
PLoS ONE 11(3): e0151795. March 18, 2016
10.1371/journal.pone.0151795
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adaptation of asexual populations is driven by beneficial mutations and therefore the dynamics of this process, besides other factors, depend on the distribution of beneficial fitness effects. It is known that on uncorrelated fitness landscapes, this distribution can only be of three types: truncated, exponential and power law. We performed extensive stochastic simulations to study the adaptation dynamics on rugged fitness landscapes, and identified two quantities that can be used to distinguish the underlying distribution of beneficial fitness effects. The first quantity studied here is the fitness difference between successive mutations that spread in the population, which is found to decrease in the case of truncated distributions, remain nearly constant for exponentially decaying distributions and increase when the fitness distribution decays as a power law. The second quantity of interest, namely, the rate of change of fitness with time, also shows quantitatively different behaviour for different beneficial fitness distributions. The patterns displayed by the two aforementioned quantities are found to hold for both low and high mutation rates. We discuss how these patterns can be exploited to determine the distribution of beneficial fitness effects in microbial experiments.
[ { "created": "Thu, 26 Feb 2015 08:05:13 GMT", "version": "v1" }, { "created": "Fri, 25 Dec 2015 12:59:38 GMT", "version": "v2" } ]
2016-08-05
[ [ "John", "Sona", "" ], [ "Seetharaman", "Sarada", "" ] ]
Adaptation of asexual populations is driven by beneficial mutations and therefore the dynamics of this process, besides other factors, depend on the distribution of beneficial fitness effects. It is known that on uncorrelated fitness landscapes, this distribution can only be of three types: truncated, exponential and power law. We performed extensive stochastic simulations to study the adaptation dynamics on rugged fitness landscapes, and identified two quantities that can be used to distinguish the underlying distribution of beneficial fitness effects. The first quantity studied here is the fitness difference between successive mutations that spread in the population, which is found to decrease in the case of truncated distributions, remain nearly constant for exponentially decaying distributions and increase when the fitness distribution decays as a power law. The second quantity of interest, namely, the rate of change of fitness with time, also shows quantitatively different behaviour for different beneficial fitness distributions. The patterns displayed by the two aforementioned quantities are found to hold for both low and high mutation rates. We discuss how these patterns can be exploited to determine the distribution of beneficial fitness effects in microbial experiments.
1507.00717
Matjaz Perc
Heinrich H. Nax, Matjaz Perc, Attila Szolnoki, Dirk Helbing
Stability of cooperation under image scoring in group interactions
6 two-column pages, 4 figures; accepted for publication in Scientific Reports
Sci. Rep. 5 (2015) 12145
10.1038/srep12145
null
q-bio.PE cs.GT physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image scoring sustains cooperation in the repeated two-player prisoner's dilemma through indirect reciprocity, even though defection is the uniquely dominant selfish behaviour in the one-shot game. Many real-world dilemma situations, however, firstly, take place in groups and, secondly, lack the necessary transparency to inform subjects reliably of others' individual past actions. Instead, there is revelation of information regarding groups, which allows for `group scoring' but not for image scoring. Here, we study how sensitive the positive results related to image scoring are to information based on group scoring. We combine analytic results and computer simulations to specify the conditions for the emergence of cooperation. We show that under pure group scoring, that is, under the complete absence of image-scoring information, cooperation is unsustainable. Away from this extreme case, however, the necessary degree of image scoring relative to group scoring depends on the population size and is generally very small. We thus conclude that the positive results based on image scoring apply to a much broader range of informational settings that are relevant in the real world than previously assumed.
[ { "created": "Thu, 2 Jul 2015 19:52:38 GMT", "version": "v1" } ]
2015-07-03
[ [ "Nax", "Heinrich H.", "" ], [ "Perc", "Matjaz", "" ], [ "Szolnoki", "Attila", "" ], [ "Helbing", "Dirk", "" ] ]
Image scoring sustains cooperation in the repeated two-player prisoner's dilemma through indirect reciprocity, even though defection is the uniquely dominant selfish behaviour in the one-shot game. Many real-world dilemma situations, however, firstly, take place in groups and, secondly, lack the necessary transparency to inform subjects reliably of others' individual past actions. Instead, there is revelation of information regarding groups, which allows for `group scoring' but not for image scoring. Here, we study how sensitive the positive results related to image scoring are to information based on group scoring. We combine analytic results and computer simulations to specify the conditions for the emergence of cooperation. We show that under pure group scoring, that is, under the complete absence of image-scoring information, cooperation is unsustainable. Away from this extreme case, however, the necessary degree of image scoring relative to group scoring depends on the population size and is generally very small. We thus conclude that the positive results based on image scoring apply to a much broader range of informational settings that are relevant in the real world than previously assumed.
1806.03988
Celine Scornavacca
Riccardo Dondi, Manuel Lafond and Celine Scornavacca
Reconciling Multiple Genes Trees via Segmental Duplications and Losses
23 pages, 7 figures, WABI 2018
null
null
null
q-bio.PE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconciling gene trees with a species tree is a fundamental problem to understand the evolution of gene families. Many existing approaches reconcile each gene tree independently. However, it is well-known that the evolution of gene families is interconnected. In this paper, we extend a previous approach to reconcile a set of gene trees with a species tree based on segmental macro-evolutionary events, where segmental duplication events and losses are associated with cost $\delta$ and $\lambda$, respectively. We show that the problem is polynomial-time solvable when $\delta \leq \lambda$ (via LCA-mapping), while if $\delta > \lambda$ the problem is NP-hard, even when $\lambda = 0$ and a single gene tree is given, solving a long-standing open problem on the complexity of the reconciliation problem. On the positive side, we give a fixed-parameter algorithm for the problem, where the parameters are $\delta/\lambda$ and the number $d$ of segmental duplications, of time complexity $O(\lceil \frac{\delta}{\lambda} \rceil^{d} \cdot n \cdot \frac{\delta}{\lambda})$. Finally, we demonstrate the usefulness of this algorithm on two previously studied real datasets: we first show that our method can be used to confirm or refute hypothetical segmental duplications on a set of 16 eukaryotes, then show how we can detect whole genome duplications in yeast genomes.
[ { "created": "Mon, 11 Jun 2018 13:57:30 GMT", "version": "v1" } ]
2018-06-12
[ [ "Dondi", "Riccardo", "" ], [ "Lafond", "Manuel", "" ], [ "Scornavacca", "Celine", "" ] ]
Reconciling gene trees with a species tree is a fundamental problem to understand the evolution of gene families. Many existing approaches reconcile each gene tree independently. However, it is well-known that the evolution of gene families is interconnected. In this paper, we extend a previous approach to reconcile a set of gene trees with a species tree based on segmental macro-evolutionary events, where segmental duplication events and losses are associated with cost $\delta$ and $\lambda$, respectively. We show that the problem is polynomial-time solvable when $\delta \leq \lambda$ (via LCA-mapping), while if $\delta > \lambda$ the problem is NP-hard, even when $\lambda = 0$ and a single gene tree is given, solving a long-standing open problem on the complexity of the reconciliation problem. On the positive side, we give a fixed-parameter algorithm for the problem, where the parameters are $\delta/\lambda$ and the number $d$ of segmental duplications, of time complexity $O(\lceil \frac{\delta}{\lambda} \rceil^{d} \cdot n \cdot \frac{\delta}{\lambda})$. Finally, we demonstrate the usefulness of this algorithm on two previously studied real datasets: we first show that our method can be used to confirm or refute hypothetical segmental duplications on a set of 16 eukaryotes, then show how we can detect whole genome duplications in yeast genomes.
1311.5795
Gerrit Ansmann
Kaspar A. Schindler, Stephan Bialonski, Marie-Therese Horstmann, Christian E. Elger, Klaus Lehnertz
Evolving functional network properties and synchronizability during human epileptic seizures
null
Chaos 18, 033119 (2008)
10.1063/1.2966112
null
q-bio.NC physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We assess electrical brain dynamics before, during, and after one hundred human epileptic seizures with different anatomical onset locations by statistical and spectral properties of functionally defined networks. We observe a concave-like temporal evolution of characteristic path length and cluster coefficient indicative of a movement from a more random toward a more regular and then back toward a more random functional topology. Surprisingly, synchronizability was significantly decreased during the seizure state but increased already prior to seizure end. Our findings underline the high relevance of studying complex systems from the viewpoint of complex networks, which may help to gain deeper insights into the complicated dynamics underlying epileptic seizures.
[ { "created": "Fri, 22 Nov 2013 16:07:05 GMT", "version": "v1" } ]
2013-11-25
[ [ "Schindler", "Kaspar A.", "" ], [ "Bialonski", "Stephan", "" ], [ "Horstmann", "Marie-Therese", "" ], [ "Elger", "Christian E.", "" ], [ "Lehnertz", "Klaus", "" ] ]
We assess electrical brain dynamics before, during, and after one hundred human epileptic seizures with different anatomical onset locations by statistical and spectral properties of functionally defined networks. We observe a concave-like temporal evolution of characteristic path length and cluster coefficient indicative of a movement from a more random toward a more regular and then back toward a more random functional topology. Surprisingly, synchronizability was significantly decreased during the seizure state but increased already prior to seizure end. Our findings underline the high relevance of studying complex systems from the viewpoint of complex networks, which may help to gain deeper insights into the complicated dynamics underlying epileptic seizures.
1204.6186
Walter Langel
Daniel Hasenpusch, Daniel M\"oller, Uwe T. Bornscheuer, and Walter Langel
Substrate-Enzyme Interaction in Pig Liver Esterase
32 pages, 10 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Force field and first principles molecular dynamics simulations on complexes of pig liver esterase (pig liver isoenzymes and a mutant) and selected substrates (1-phenyl-1-ethyl acetate, 1-phenyl-2-butylacetate, proline-{\beta}-naphthylamide and methyl butyrate) are presented. By restrained force field simulations the access of the substrate to the hidden active site was probed. For a few substrates spontaneous access to the active site via a well-defined entrance channel was found. The structure of the tetrahedral intermediate was simulated for several substrates and our previous assignment of GLU 452 instead of GLU 336 was confirmed. It was shown that the active site readily adapts to the embedded substrate involving a varying number of hydrophobic residues in the neighborhood. This puts into question key-lock models for enantioselectivity. Ab initio molecular dynamics showed that the structures we found for the tetrahedral intermediate in force field simulations are consistent with the presumed mechanism of ester cleavage. Product release from the active site as the final step of the enzymatic reaction proved to be very slow, already taking more than 20 ns for the smallest product, methanol.
[ { "created": "Fri, 27 Apr 2012 12:23:03 GMT", "version": "v1" } ]
2012-04-30
[ [ "Hasenpusch", "Daniel", "" ], [ "Möller", "Daniel", "" ], [ "Bornscheuer", "Uwe T.", "" ], [ "Langel", "Walter", "" ] ]
Force field and first principles molecular dynamics simulations on complexes of pig liver esterase (pig liver isoenzymes and a mutant) and selected substrates (1-phenyl-1-ethyl acetate, 1-phenyl-2-butylacetate, proline-{\beta}-naphthylamide and methyl butyrate) are presented. By restrained force field simulations the access of the substrate to the hidden active site was probed. For a few substrates spontaneous access to the active site via a well-defined entrance channel was found. The structure of the tetrahedral intermediate was simulated for several substrates and our previous assignment of GLU 452 instead of GLU 336 was confirmed. It was shown that the active site readily adapts to the embedded substrate involving a varying number of hydrophobic residues in the neighborhood. This puts into question key-lock models for enantioselectivity. Ab initio molecular dynamics showed that the structures we found for the tetrahedral intermediate in force field simulations are consistent with the presumed mechanism of ester cleavage. Product release from the active site as the final step of the enzymatic reaction proved to be very slow, already taking more than 20 ns for the smallest product, methanol.
1905.07813
Maria Giulia Preti
Maria Giulia Preti and Dimitri Van De Ville
Decoupling of brain function from structure reveals regional behavioral specialization in humans
null
null
null
null
q-bio.NC eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The brain is an assembly of neuronal populations interconnected by structural pathways. Brain activity is expressed on and constrained by this substrate. Therefore, statistical dependencies between functional signals in directly connected areas can be expected to be higher. However, the degree to which brain function is bound by the underlying wiring diagram remains a complex question that has been only partially answered. Here, we introduce the structural-decoupling index to quantify the coupling strength between structure and function, and we reveal a macroscale gradient from brain regions more strongly coupled, to regions more strongly decoupled, than expected by realistic surrogate data. This gradient spans behavioral domains from lower-level sensory function to high-level cognitive ones and shows for the first time that the strength of structure-function coupling is spatially varying in line with evidence derived from other modalities, such as functional connectivity, gene expression, microstructural properties and temporal hierarchy.
[ { "created": "Sun, 19 May 2019 21:51:11 GMT", "version": "v1" }, { "created": "Tue, 10 Sep 2019 12:24:55 GMT", "version": "v2" } ]
2019-09-11
[ [ "Preti", "Maria Giulia", "" ], [ "Van De Ville", "Dimitri", "" ] ]
The brain is an assembly of neuronal populations interconnected by structural pathways. Brain activity is expressed on and constrained by this substrate. Therefore, statistical dependencies between functional signals in directly connected areas can be expected to be higher. However, the degree to which brain function is bound by the underlying wiring diagram remains a complex question that has been only partially answered. Here, we introduce the structural-decoupling index to quantify the coupling strength between structure and function, and we reveal a macroscale gradient from brain regions more strongly coupled, to regions more strongly decoupled, than expected by realistic surrogate data. This gradient spans behavioral domains from lower-level sensory function to high-level cognitive ones and shows for the first time that the strength of structure-function coupling is spatially varying in line with evidence derived from other modalities, such as functional connectivity, gene expression, microstructural properties and temporal hierarchy.
1106.3616
Christopher Hillar
Christopher J. Hillar, Friedrich T. Sommer
When can dictionary learning uniquely recover sparse data from subsamples?
8 pages, 1 figures; IEEE Trans. Info. Theory, to appear
null
10.1109/TIT.2015.2460238
null
q-bio.NC math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sparse coding or sparse dictionary learning has been widely used to recover underlying structure in many kinds of natural data. Here, we provide conditions guaranteeing when this recovery is universal; that is, when sparse codes and dictionaries are unique (up to natural symmetries). Our main tool is a useful lemma in combinatorial matrix theory that allows us to derive bounds on the sample sizes guaranteeing such uniqueness under various assumptions for how training data are generated. Whenever the conditions to one of our theorems are met, any sparsity-constrained learning algorithm that succeeds in reconstructing the data recovers the original sparse codes and dictionary. We also discuss potential applications to neuroscience and data analysis.
[ { "created": "Sat, 18 Jun 2011 05:06:20 GMT", "version": "v1" }, { "created": "Thu, 17 Nov 2011 13:59:44 GMT", "version": "v2" }, { "created": "Fri, 24 May 2013 03:17:09 GMT", "version": "v3" }, { "created": "Fri, 13 Feb 2015 18:20:53 GMT", "version": "v4" }, { "created": "Fri, 31 Jul 2015 16:34:45 GMT", "version": "v5" } ]
2016-11-18
[ [ "Hillar", "Christopher J.", "" ], [ "Sommer", "Friedrich T.", "" ] ]
Sparse coding or sparse dictionary learning has been widely used to recover underlying structure in many kinds of natural data. Here, we provide conditions guaranteeing when this recovery is universal; that is, when sparse codes and dictionaries are unique (up to natural symmetries). Our main tool is a useful lemma in combinatorial matrix theory that allows us to derive bounds on the sample sizes guaranteeing such uniqueness under various assumptions for how training data are generated. Whenever the conditions to one of our theorems are met, any sparsity-constrained learning algorithm that succeeds in reconstructing the data recovers the original sparse codes and dictionary. We also discuss potential applications to neuroscience and data analysis.
2203.06475
H\'el\`ene Delano\"e-Ayari
H\'el\`ene Delano\"e-Ayari, Alice Nicolas
Quantifying active and resistive stresses in adherent cells
7 pages, 5 figures. arXiv admin note: substantial text overlap with arXiv:2105.05792
null
10.1103/PhysRevE.106.024411
null
q-bio.CB
http://creativecommons.org/licenses/by-sa/4.0/
To understand cell migration, it is crucial to gain knowledge of how cells exert and integrate forces on/from their environment. A quantity of prime interest for biophysicists modeling cell movements is the intracellular stress. Up to now, three different methods have been proposed to calculate it, all within the thin-plate approximation. Two are based on solving the mechanical equilibrium equation inside the cell material (Monolayer Stress Microscopy and Bayesian Inference Stress Microscopy), and one is based on the continuity of displacement at the cell/substrate interface (Intracellular Stress Microscopy). We show here, using 3D FEM modeling, that these techniques do not calculate the same quantities (as was previously assumed): the first two calculate the sum of the active and resistive stresses within the cell, whereas the last one calculates only the resistive component. Combining these techniques should in principle give access to the active stress alone.
[ { "created": "Sat, 12 Mar 2022 16:09:13 GMT", "version": "v1" }, { "created": "Wed, 23 Mar 2022 21:01:34 GMT", "version": "v2" } ]
2022-09-14
[ [ "Delanoë-Ayari", "Hélène", "" ], [ "Nicolas", "Alice", "" ] ]
To understand cell migration, it is crucial to gain knowledge of how cells exert and integrate forces on/from their environment. A quantity of prime interest for biophysicists modeling cell movements is the intracellular stress. Up to now, three different methods have been proposed to calculate it, all within the thin-plate approximation. Two are based on solving the mechanical equilibrium equation inside the cell material (Monolayer Stress Microscopy and Bayesian Inference Stress Microscopy), and one is based on the continuity of displacement at the cell/substrate interface (Intracellular Stress Microscopy). We show here, using 3D FEM modeling, that these techniques do not calculate the same quantities (as was previously assumed): the first two calculate the sum of the active and resistive stresses within the cell, whereas the last one calculates only the resistive component. Combining these techniques should in principle give access to the active stress alone.
1806.04341
Michael Pan
Michael Pan, Peter J. Gawthrop, Kenneth Tran, Joseph Cursons and Edmund J. Crampin
A thermodynamic framework for modelling membrane transporters
null
Journal of Theoretical Biology, Available online 28 September 2018
10.1016/j.bpj.2018.11.2263
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Membrane transporters contribute to the regulation of the internal environment of cells by translocating substrates across cell membranes. Like all physical systems, the behaviour of membrane transporters is constrained by the laws of thermodynamics. However, many mathematical models of transporters, especially those incorporated into whole-cell models, are not thermodynamically consistent, leading to unrealistic behaviour. In this paper we use a physics-based modelling framework, in which the transfer of energy is explicitly accounted for, to develop thermodynamically consistent models of transporters. We then apply this methodology to model two specific transporters: the cardiac sarcoplasmic/endoplasmic Ca$^{2+}$ ATPase (SERCA) and the cardiac Na$^+$/K$^+$ ATPase.
[ { "created": "Tue, 12 Jun 2018 05:53:53 GMT", "version": "v1" } ]
2023-07-19
[ [ "Pan", "Michael", "" ], [ "Gawthrop", "Peter J.", "" ], [ "Tran", "Kenneth", "" ], [ "Cursons", "Joseph", "" ], [ "Crampin", "Edmund J.", "" ] ]
Membrane transporters contribute to the regulation of the internal environment of cells by translocating substrates across cell membranes. Like all physical systems, the behaviour of membrane transporters is constrained by the laws of thermodynamics. However, many mathematical models of transporters, especially those incorporated into whole-cell models, are not thermodynamically consistent, leading to unrealistic behaviour. In this paper we use a physics-based modelling framework, in which the transfer of energy is explicitly accounted for, to develop thermodynamically consistent models of transporters. We then apply this methodology to model two specific transporters: the cardiac sarcoplasmic/endoplasmic Ca$^{2+}$ ATPase (SERCA) and the cardiac Na$^+$/K$^+$ ATPase.
2004.00351
Nicola Calonaci
Nicola Calonaci, Alisha Jones, Francesca Cuturello, Michael Sattler and Giovanni Bussi
Machine learning a model for RNA structure prediction
Supporting Information are reported in a Jupyter notebook, in both format .pdf (read-only) and .ipynb (interactive and editable). Full data required to run the .ipynb notebook can be found at the GitHub repository reported in the notebook
NAR Genom. Bioinform. 2, lqaa090 (2020)
10.1093/nargab/lqaa090
null
q-bio.BM physics.bio-ph physics.data-an q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
RNA function crucially depends on its structure. Thermodynamic models currently used for secondary structure prediction rely on computing the partition function of folding ensembles, and can thus estimate minimum free-energy structures and ensemble populations. These models sometimes fail in identifying native structures unless complemented by auxiliary experimental data. Here, we build a set of models that combine thermodynamic parameters, chemical probing data (DMS, SHAPE), and co-evolutionary data (Direct Coupling Analysis, DCA) through a network that outputs perturbations to the ensemble free energy. Perturbations are trained to increase the ensemble populations of a representative set of known native RNA structures. In the chemical probing nodes of the network, a convolutional window combines neighboring reactivities, enlightening their structural information content and the contribution of local conformational ensembles. Regularization is used to limit overfitting and improve transferability. The most transferable model is selected through a cross-validation strategy that estimates the performance of models on systems on which they are not trained. With the selected model we obtain increased ensemble populations for native structures and more accurate predictions in an independent validation set. The flexibility of the approach allows the model to be easily retrained and adapted to incorporate arbitrary experimental information.
[ { "created": "Wed, 1 Apr 2020 11:36:29 GMT", "version": "v1" }, { "created": "Mon, 5 Oct 2020 10:13:16 GMT", "version": "v2" } ]
2022-07-26
[ [ "Calonaci", "Nicola", "" ], [ "Jones", "Alisha", "" ], [ "Cuturello", "Francesca", "" ], [ "Sattler", "Michael", "" ], [ "Bussi", "Giovanni", "" ] ]
RNA function crucially depends on its structure. Thermodynamic models currently used for secondary structure prediction rely on computing the partition function of folding ensembles, and can thus estimate minimum free-energy structures and ensemble populations. These models sometimes fail in identifying native structures unless complemented by auxiliary experimental data. Here, we build a set of models that combine thermodynamic parameters, chemical probing data (DMS, SHAPE), and co-evolutionary data (Direct Coupling Analysis, DCA) through a network that outputs perturbations to the ensemble free energy. Perturbations are trained to increase the ensemble populations of a representative set of known native RNA structures. In the chemical probing nodes of the network, a convolutional window combines neighboring reactivities, enlightening their structural information content and the contribution of local conformational ensembles. Regularization is used to limit overfitting and improve transferability. The most transferable model is selected through a cross-validation strategy that estimates the performance of models on systems on which they are not trained. With the selected model we obtain increased ensemble populations for native structures and more accurate predictions in an independent validation set. The flexibility of the approach allows the model to be easily retrained and adapted to incorporate arbitrary experimental information.
0907.1291
Chris Adami
Arend Hintze and Christoph Adami
Modularity and anti-modularity in networks with arbitrary degree distribution
23 pages. 13 figures, 1 table. Includes Supplementary text
Biology Direct 5:32 (2010)
10.1186/1745-6150-5-32
null
q-bio.MN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Networks describing the interaction of the elements that constitute a complex system grow and develop via a number of different mechanisms, such as the addition and deletion of nodes, the addition and deletion of edges, as well as the duplication or fusion of nodes. While each of these mechanisms can have a different cause depending on whether the network is biological, technological, or social, their impact on the network's structure, as well as its local and global properties, is similar. This allows us to study how each of these mechanisms affects networks either alone or together with the other processes, and how they shape the characteristics that have been observed. We study how a network's growth parameters impact the distribution of edges in the network, how they affect a network's modularity, and point out that some parameters will give rise to networks that have the opposite tendency, namely to display anti-modularity. Within the model we are describing, we can search the space of possible networks for parameter sets that generate networks that are very similar to well-known and well-studied examples, such as the brain of a worm, and the network of interactions of the proteins in baker's yeast.
[ { "created": "Tue, 7 Jul 2009 20:20:48 GMT", "version": "v1" } ]
2010-06-09
[ [ "Hintze", "Arend", "" ], [ "Adami", "Christoph", "" ] ]
Networks describing the interaction of the elements that constitute a complex system grow and develop via a number of different mechanisms, such as the addition and deletion of nodes, the addition and deletion of edges, as well as the duplication or fusion of nodes. While each of these mechanisms can have a different cause depending on whether the network is biological, technological, or social, their impact on the network's structure, as well as its local and global properties, is similar. This allows us to study how each of these mechanisms affects networks either alone or together with the other processes, and how they shape the characteristics that have been observed. We study how a network's growth parameters impact the distribution of edges in the network, how they affect a network's modularity, and point out that some parameters will give rise to networks that have the opposite tendency, namely to display anti-modularity. Within the model we are describing, we can search the space of possible networks for parameter sets that generate networks that are very similar to well-known and well-studied examples, such as the brain of a worm, and the network of interactions of the proteins in baker's yeast.
2006.11099
Michael G. M\"uller
Agnes Korcsak-Gorzo, Michael G. M\"uller, Andreas Baumbach, Luziwei Leng, Oliver Julien Breitwieser, Sacha J. van Albada, Walter Senn, Karlheinz Meier, Robert Legenstein, Mihai A. Petrovici
Cortical oscillations implement a backbone for sampling-based computation in spiking neural networks
34 pages, 9 figures
PLoS Comput Biol 18(3): e1009753 (2022)
10.1371/journal.pcbi.1009753
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Being permanently confronted with an uncertain world, brains have faced evolutionary pressure to represent this uncertainty in order to respond appropriately. Often, this requires visiting multiple interpretations of the available information or multiple solutions to an encountered problem. This gives rise to the so-called mixing problem: since all of these "valid" states represent powerful attractors, but between themselves can be very dissimilar, switching between such states can be difficult. We propose that cortical oscillations can be effectively used to overcome this challenge. By acting as an effective temperature, background spiking activity modulates exploration. Rhythmic changes induced by cortical oscillations can then be interpreted as a form of simulated tempering. We provide a rigorous mathematical discussion of this link and study some of its phenomenological implications in computer simulations. This identifies a new computational role of cortical oscillations and connects them to various phenomena in the brain, such as sampling-based probabilistic inference, memory replay, multisensory cue combination, and place cell flickering.
[ { "created": "Fri, 19 Jun 2020 12:18:43 GMT", "version": "v1" }, { "created": "Tue, 23 Feb 2021 17:18:34 GMT", "version": "v2" }, { "created": "Fri, 19 Mar 2021 16:26:15 GMT", "version": "v3" }, { "created": "Mon, 22 Mar 2021 20:24:02 GMT", "version": "v4" }, { "created": "Mon, 4 Apr 2022 12:07:53 GMT", "version": "v5" } ]
2022-04-05
[ [ "Korcsak-Gorzo", "Agnes", "" ], [ "Müller", "Michael G.", "" ], [ "Baumbach", "Andreas", "" ], [ "Leng", "Luziwei", "" ], [ "Breitwieser", "Oliver Julien", "" ], [ "van Albada", "Sacha J.", "" ], [ "Senn", "Walter", "" ], [ "Meier", "Karlheinz", "" ], [ "Legenstein", "Robert", "" ], [ "Petrovici", "Mihai A.", "" ] ]
Being permanently confronted with an uncertain world, brains have faced evolutionary pressure to represent this uncertainty in order to respond appropriately. Often, this requires visiting multiple interpretations of the available information or multiple solutions to an encountered problem. This gives rise to the so-called mixing problem: since all of these "valid" states represent powerful attractors, but between themselves can be very dissimilar, switching between such states can be difficult. We propose that cortical oscillations can be effectively used to overcome this challenge. By acting as an effective temperature, background spiking activity modulates exploration. Rhythmic changes induced by cortical oscillations can then be interpreted as a form of simulated tempering. We provide a rigorous mathematical discussion of this link and study some of its phenomenological implications in computer simulations. This identifies a new computational role of cortical oscillations and connects them to various phenomena in the brain, such as sampling-based probabilistic inference, memory replay, multisensory cue combination, and place cell flickering.
2201.06666
Evan Johnson
Evan Johnson and Alan Hastings
Methods for calculating coexistence mechanisms: Beyond scaling factors
26 pages, 1 figure, 4 tables
null
null
null
q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by/4.0/
How do species coexist? A framework known as Modern Coexistence Theory measures mechanisms of coexistence by comparing a species perturbed to low density (the invader) to other species that remain at their typical densities (the residents); this invader-resident comparison measures a rare-species advantage that results from specialization. However, there are several reasonable ways (i.e., methods) to compare invaders and residents, each differing in practicality and biological interpretation. Here, using theoretical arguments and case studies, we compare four such methods for calculating coexistence mechanisms: 1) Scaling factors, the traditional approach where resident growth rates are scaled by a measure of relative sensitivity to competition, obtained by solving a system of linear equations; 2) The simple comparison, which gives equal weight to all resident species; 3) Speed conversion factors, a novel method in which resident growth rates are scaled by a ratio of generation times, and; 4) The invader-invader comparison, another novel method in which a focal species is compared to itself at high vs. low density. We conclude that the conventional scaling factors can be useful in some theoretical research, but are not recommended for empirical applications, i.e., determining the mechanisms of coexistence in real communities. Instead, we recommend the simple comparison and speed conversion factor methods. The speed conversion factors are most useful when comparing species with dissimilar generation times. However, ecologists often study coexistence in guilds of species with similar life-histories, and therefore, similar generation times. In such scenarios, the easier-to-use simple comparison method is reasonable.
[ { "created": "Mon, 17 Jan 2022 23:56:32 GMT", "version": "v1" } ]
2022-01-19
[ [ "Johnson", "Evan", "" ], [ "Hastings", "Alan", "" ] ]
How do species coexist? A framework known as Modern Coexistence Theory measures mechanisms of coexistence by comparing a species perturbed to low density (the invader) to other species that remain at their typical densities (the residents); this invader-resident comparison measures a rare-species advantage that results from specialization. However, there are several reasonable ways (i.e., methods) to compare invaders and residents, each differing in practicality and biological interpretation. Here, using theoretical arguments and case studies, we compare four such methods for calculating coexistence mechanisms: 1) Scaling factors, the traditional approach where resident growth rates are scaled by a measure of relative sensitivity to competition, obtained by solving a system of linear equations; 2) The simple comparison, which gives equal weight to all resident species; 3) Speed conversion factors, a novel method in which resident growth rates are scaled by a ratio of generation times, and; 4) The invader-invader comparison, another novel method in which a focal species is compared to itself at high vs. low density. We conclude that the conventional scaling factors can be useful in some theoretical research, but are not recommended for empirical applications, i.e., determining the mechanisms of coexistence in real communities. Instead, we recommend the simple comparison and speed conversion factor methods. The speed conversion factors are most useful when comparing species with dissimilar generation times. However, ecologists often study coexistence in guilds of species with similar life-histories, and therefore, similar generation times. In such scenarios, the easier-to-use simple comparison method is reasonable.
1611.03144
Arnaud Poret
Arnaud Poret, Carito Guziolowski
Therapeutic target discovery using Boolean network attractors: improvements of kali
null
Royal Society Open Science, The Royal Society, 2018, 5 (2)
10.1098/rsos.171852
null
q-bio.MN q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
In a previous article, an algorithm for identifying therapeutic targets in Boolean networks modeling pathological mechanisms was introduced. In the present article, the improvements made to this algorithm, named kali, are described. These improvements are i) the possibility to work on asynchronous Boolean networks, ii) a finer assessment of therapeutic targets and iii) the possibility to use multivalued logic. kali assumes that the attractors of a dynamical system, such as a Boolean network, are associated with the phenotypes of the modeled biological system. Given a logic-based model of pathological mechanisms, kali searches for therapeutic targets able to reduce the reachability of the attractors associated with pathological phenotypes, thus reducing their likelihood. kali is illustrated on an example network and used on a biological case study. The case study is a published logic-based model of bladder tumorigenesis from which kali returns consistent results. However, like any computational tool, kali can predict but cannot replace human expertise: it is a supporting tool for coping with the complexity of biological systems in the field of drug discovery.
[ { "created": "Thu, 10 Nov 2016 01:08:33 GMT", "version": "v1" }, { "created": "Wed, 8 Nov 2017 14:03:27 GMT", "version": "v2" }, { "created": "Mon, 8 Jan 2018 17:24:12 GMT", "version": "v3" } ]
2018-03-13
[ [ "Poret", "Arnaud", "" ], [ "Guziolowski", "Carito", "" ] ]
In a previous article, an algorithm for identifying therapeutic targets in Boolean networks modeling pathological mechanisms was introduced. In the present article, the improvements made to this algorithm, named kali, are described. These improvements are i) the possibility to work on asynchronous Boolean networks, ii) a finer assessment of therapeutic targets and iii) the possibility to use multivalued logic. kali assumes that the attractors of a dynamical system, such as a Boolean network, are associated with the phenotypes of the modeled biological system. Given a logic-based model of pathological mechanisms, kali searches for therapeutic targets able to reduce the reachability of the attractors associated with pathological phenotypes, thus reducing their likelihood. kali is illustrated on an example network and used on a biological case study. The case study is a published logic-based model of bladder tumorigenesis from which kali returns consistent results. However, like any computational tool, kali can predict but cannot replace human expertise: it is a supporting tool for coping with the complexity of biological systems in the field of drug discovery.
1508.06309
Rachel Taylor
Rachel A. Taylor, Sadie J. Ryan, Justin S. Brashares and Leah R. Johnson
Hunting, food subsidies, and mesopredator release: the dynamics of crop-raiding baboons in a managed landscape
23 pages, 8 figures, 2 appendices
Ecology 97 (2016) 951-960
10.1890/15-0885.1
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The establishment of protected areas or parks has become an important tool for wildlife conservation. However, frequent occurrences of human-wildlife conflict at the edges of these parks can undermine their conservation goals. Many African protected areas have experienced concurrent declines of apex predators alongside increases in both baboon abundance and the density of humans living near the park boundary. Baboons then take excursions outside of the park to raid crops for food, conflicting with the human population. We model the interactions of mesopredators (baboons), apex predators and shared prey in the park to analyze how four components affect the proportion of time that mesopredators choose to crop-raid: 1) the presence of apex predators; 2) nutritional quality of the crops; 3) mesopredator "shyness" about leaving the park; and 4) human hunting of mesopredators. We predict that the presence of apex predators in the park is the most effective method for controlling mesopredator abundance, and hence significantly reduces their impact on crops. Human hunting of mesopredators is less effective as it only occurs during crop-raiding excursions. Furthermore, making crops less attractive, for instance by planting crops further from the park boundary or farming less nutritional crops, can reduce the amount of time mesopredators crop-raid.
[ { "created": "Tue, 25 Aug 2015 21:27:48 GMT", "version": "v1" } ]
2016-04-12
[ [ "Taylor", "Rachel A.", "" ], [ "Ryan", "Sadie J.", "" ], [ "Brashares", "Justin S.", "" ], [ "Johnson", "Leah R.", "" ] ]
The establishment of protected areas or parks has become an important tool for wildlife conservation. However, frequent occurrences of human-wildlife conflict at the edges of these parks can undermine their conservation goals. Many African protected areas have experienced concurrent declines of apex predators alongside increases in both baboon abundance and the density of humans living near the park boundary. Baboons then take excursions outside of the park to raid crops for food, conflicting with the human population. We model the interactions of mesopredators (baboons), apex predators and shared prey in the park to analyze how four components affect the proportion of time that mesopredators choose to crop-raid: 1) the presence of apex predators; 2) nutritional quality of the crops; 3) mesopredator "shyness" about leaving the park; and 4) human hunting of mesopredators. We predict that the presence of apex predators in the park is the most effective method for controlling mesopredator abundance, and hence significantly reduces their impact on crops. Human hunting of mesopredators is less effective as it only occurs during crop-raiding excursions. Furthermore, making crops less attractive, for instance by planting crops further from the park boundary or farming less nutritional crops, can reduce the amount of time mesopredators crop-raid.
1602.03086
Sebastian Deorowicz
Maciej Dlugosz and Sebastian Deorowicz
RECKONER: Read Error Corrector Based on KMC
7 pages + 24 pages of supplementary material
null
10.1093/bioinformatics/btw746
null
q-bio.GN cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Next-generation sequencing tools have enabled the production of huge amounts of genomic information at low cost. Unfortunately, the presence of sequencing errors in such data affects the quality of downstream analyses. Their accuracy can be improved by performing error correction. Because of the huge amount of such data, correction algorithms have to be fast, be memory-frugal, and provide high accuracy of error detection and elimination for variously sized organisms. Results: We introduce a new algorithm for genomic data correction, capable of processing eukaryotic 300 Mbp-genome-size, high-error-rate data using less than 4 GB of RAM in less than 40 minutes on a 16-core CPU. The algorithm corrects sequencing data at a level better than or comparable to its competitors. This was achieved by using the very robust KMC~2 $k$-mer counter, a new method of erroneous-region correction based on both $k$-mer counts and FASTQ quality indicators, as well as careful optimization. Availability: The program is freely available at http://sun.aei.posl.pl/REFRESH/reckoner. Contact: sebastian.deorowicz@polsl.pl
[ { "created": "Tue, 9 Feb 2016 17:27:41 GMT", "version": "v1" } ]
2017-03-03
[ [ "Dlugosz", "Maciej", "" ], [ "Deorowicz", "Sebastian", "" ] ]
Motivation: Next-generation sequencing tools have enabled the production of huge amounts of genomic information at low cost. Unfortunately, the presence of sequencing errors in such data affects the quality of downstream analyses. Their accuracy can be improved by performing error correction. Because of the huge amount of such data, correction algorithms have to be fast, be memory-frugal, and provide high accuracy of error detection and elimination for variously sized organisms. Results: We introduce a new algorithm for genomic data correction, capable of processing eukaryotic 300 Mbp-genome-size, high-error-rate data using less than 4 GB of RAM in less than 40 minutes on a 16-core CPU. The algorithm corrects sequencing data at a level better than or comparable to its competitors. This was achieved by using the very robust KMC~2 $k$-mer counter, a new method of erroneous-region correction based on both $k$-mer counts and FASTQ quality indicators, as well as careful optimization. Availability: The program is freely available at http://sun.aei.posl.pl/REFRESH/reckoner. Contact: sebastian.deorowicz@polsl.pl
2301.08799
Brendan Case
B. K. M. Case, Jean-Gabriel Young, Laurent H\'ebert-Dufresne
Accurately summarizing an outbreak using epidemiological models takes time
7 pages, 4 figures
null
10.1098/rsos.230634
null
q-bio.PE math.DS physics.data-an physics.soc-ph stat.ME
http://creativecommons.org/licenses/by/4.0/
Recent outbreaks of monkeypox and Ebola, and worrying waves of COVID-19, influenza and respiratory syncytial virus, have all led to a sharp increase in the use of epidemiological models to estimate key epidemiological parameters. The feasibility of this estimation task is known as the practical identifiability (PI) problem. Here, we investigate the PI of eight commonly reported statistics of the classic Susceptible-Infectious-Recovered model using a new measure that shows how much a researcher can expect to learn in a model-based Bayesian analysis of prevalence data. Our findings show that the basic reproductive number and final outbreak size are often poorly identified, with learning exceeding that of individual model parameters only in the early stages of an outbreak. The peak intensity, peak timing, and initial growth rate are better identified, being in expectation over 20 times more probable having seen the data by the time the underlying outbreak peaks. We then test PI for a variety of true parameter combinations, and find that PI is especially problematic in slow-growing or less-severe outbreaks. These results add to the growing body of literature questioning the reliability of inferences from epidemiological models when limited data are available.
[ { "created": "Fri, 20 Jan 2023 20:54:50 GMT", "version": "v1" } ]
2023-12-01
[ [ "Case", "B. K. M.", "" ], [ "Young", "Jean-Gabriel", "" ], [ "Hébert-Dufresne", "Laurent", "" ] ]
Recent outbreaks of monkeypox and Ebola, and worrying waves of COVID-19, influenza and respiratory syncytial virus, have all led to a sharp increase in the use of epidemiological models to estimate key epidemiological parameters. The feasibility of this estimation task is known as the practical identifiability (PI) problem. Here, we investigate the PI of eight commonly reported statistics of the classic Susceptible-Infectious-Recovered model using a new measure that shows how much a researcher can expect to learn in a model-based Bayesian analysis of prevalence data. Our findings show that the basic reproductive number and final outbreak size are often poorly identified, with learning exceeding that of individual model parameters only in the early stages of an outbreak. The peak intensity, peak timing, and initial growth rate are better identified, being in expectation over 20 times more probable having seen the data by the time the underlying outbreak peaks. We then test PI for a variety of true parameter combinations, and find that PI is especially problematic in slow-growing or less-severe outbreaks. These results add to the growing body of literature questioning the reliability of inferences from epidemiological models when limited data are available.
0704.3724
Paul Smolen
Paul Smolen
A Model of Late Long-Term Potentiation Simulates Aspects of Memory Maintenance
Accepted to PLoS One. 8 figures at end
null
10.1371/journal.pone.0000445
null
q-bio.NC q-bio.MN
null
Late long-term potentiation (L-LTP) appears essential for the formation of long-term memory, with memories at least partly encoded by patterns of strengthened synapses. How memories are preserved for months or years, despite molecular turnover, is not well understood. Ongoing recurrent neuronal activity, during memory recall or during sleep, has been hypothesized to preferentially potentiate strong synapses, preserving memories. This hypothesis has not been evaluated in the context of a mathematical model representing biochemical pathways important for L-LTP. I incorporated ongoing activity into two such models: a reduced model that represents some of the essential biochemical processes, and a more detailed published model. The reduced model represents synaptic tagging and gene induction intuitively, and the detailed model adds activation of essential kinases by Ca. Ongoing activity was modeled as continual brief elevations of [Ca]. In each model, two stable states of synaptic weight resulted. Positive feedback between synaptic weight and the amplitude of ongoing Ca transients underlies this bistability. A tetanic or theta-burst stimulus switches a model synapse from a low weight to a high weight stabilized by ongoing activity. Bistability was robust to parameter variations. Simulations illustrated that prolonged decreased activity reset synapses to low weights, suggesting a plausible forgetting mechanism. However, episodic activity with shorter inactive intervals maintained strong synapses. Both models support experimental predictions. Tests of these predictions are expected to further understanding of how neuronal activity is coupled to maintenance of synaptic strength.
[ { "created": "Fri, 27 Apr 2007 18:39:12 GMT", "version": "v1" } ]
2015-05-13
[ [ "Smolen", "Paul", "" ] ]
Late long-term potentiation (L-LTP) appears essential for the formation of long-term memory, with memories at least partly encoded by patterns of strengthened synapses. How memories are preserved for months or years, despite molecular turnover, is not well understood. Ongoing recurrent neuronal activity, during memory recall or during sleep, has been hypothesized to preferentially potentiate strong synapses, preserving memories. This hypothesis has not been evaluated in the context of a mathematical model representing biochemical pathways important for L-LTP. I incorporated ongoing activity into two such models: a reduced model that represents some of the essential biochemical processes, and a more detailed published model. The reduced model represents synaptic tagging and gene induction intuitively, and the detailed model adds activation of essential kinases by Ca. Ongoing activity was modeled as continual brief elevations of [Ca]. In each model, two stable states of synaptic weight resulted. Positive feedback between synaptic weight and the amplitude of ongoing Ca transients underlies this bistability. A tetanic or theta-burst stimulus switches a model synapse from a low weight to a high weight stabilized by ongoing activity. Bistability was robust to parameter variations. Simulations illustrated that prolonged decreased activity reset synapses to low weights, suggesting a plausible forgetting mechanism. However, episodic activity with shorter inactive intervals maintained strong synapses. Both models support experimental predictions. Tests of these predictions are expected to further understanding of how neuronal activity is coupled to maintenance of synaptic strength.
2009.13160
Maria Kleshnina
M. Kleshnina, K. Kaveh and K. Chatterjee
The role of behavioural plasticity in finite vs infinite populations
null
null
null
null
q-bio.PE cs.GT econ.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolutionary game theory has proven to be an elegant framework providing many fruitful insights in population dynamics and human behaviour. Here, we focus on the aspect of behavioural plasticity and its effect on the evolution of populations. We consider games with only two strategies in both well-mixed infinite and finite populations settings. We assume that individuals might exhibit behavioural plasticity referred to as incompetence of players. We study the effect of such heterogeneity on the outcome of local interactions and, ultimately, on global competition. For instance, a strategy that was dominated before can become desirable from the selection perspective when behavioural plasticity is taken into account. Furthermore, it can ease conditions for a successful fixation in infinite populations' invasions. We demonstrate our findings on the examples of Prisoners' Dilemma and Snowdrift game, where we define conditions under which cooperation can be promoted.
[ { "created": "Mon, 28 Sep 2020 09:14:58 GMT", "version": "v1" } ]
2020-09-29
[ [ "Kleshnina", "M.", "" ], [ "Kaveh", "K.", "" ], [ "Chatterjee", "K.", "" ] ]
Evolutionary game theory has proven to be an elegant framework providing many fruitful insights into population dynamics and human behaviour. Here, we focus on the aspect of behavioural plasticity and its effect on the evolution of populations. We consider games with only two strategies in both well-mixed infinite and finite population settings. We assume that individuals might exhibit behavioural plasticity, referred to as incompetence of players. We study the effect of such heterogeneity on the outcome of local interactions and, ultimately, on global competition. For instance, a strategy that was dominated before can become desirable from the selection perspective when behavioural plasticity is taken into account. Furthermore, it can ease the conditions for successful fixation in invasions of infinite populations. We demonstrate our findings on the examples of the Prisoner's Dilemma and the Snowdrift game, where we define conditions under which cooperation can be promoted.
2310.13769
Benson Chen
Benson Chen, Mohammad M. Sultan, Theofanis Karaletsos
Compositional Deep Probabilistic Models of DNA Encoded Libraries
null
null
null
null
q-bio.QM stat.ML
http://creativecommons.org/licenses/by/4.0/
DNA-Encoded Library (DEL) has proven to be a powerful tool that utilizes combinatorially constructed small molecules to facilitate highly-efficient screening assays. These selection experiments, involving multiple stages of washing, elution, and identification of potent binders via unique DNA barcodes, often generate complex data. This complexity can potentially mask the underlying signals, necessitating the application of computational tools such as machine learning to uncover valuable insights. We introduce a compositional deep probabilistic model of DEL data, DEL-Compose, which decomposes molecular representations into their mono-synthon, di-synthon, and tri-synthon building blocks and capitalizes on the inherent hierarchical structure of these molecules by modeling latent reactions between embedded synthons. Additionally, we investigate methods to improve the observation models for DEL count data such as integrating covariate factors to more effectively account for data noise. Across two popular public benchmark datasets (CA-IX and HRP), our model demonstrates strong performance compared to count baselines, enriches the correct pharmacophores, and offers valuable insights via its intrinsic interpretable structure, thereby providing a robust tool for the analysis of DEL data.
[ { "created": "Fri, 20 Oct 2023 19:04:28 GMT", "version": "v1" }, { "created": "Tue, 13 Feb 2024 18:15:09 GMT", "version": "v2" } ]
2024-02-14
[ [ "Chen", "Benson", "" ], [ "Sultan", "Mohammad M.", "" ], [ "Karaletsos", "Theofanis", "" ] ]
DNA-Encoded Library (DEL) technology has proven to be a powerful tool that utilizes combinatorially constructed small molecules to facilitate highly-efficient screening assays. These selection experiments, involving multiple stages of washing, elution, and identification of potent binders via unique DNA barcodes, often generate complex data. This complexity can potentially mask the underlying signals, necessitating the application of computational tools such as machine learning to uncover valuable insights. We introduce a compositional deep probabilistic model of DEL data, DEL-Compose, which decomposes molecular representations into their mono-synthon, di-synthon, and tri-synthon building blocks and capitalizes on the inherent hierarchical structure of these molecules by modeling latent reactions between embedded synthons. Additionally, we investigate methods to improve the observation models for DEL count data, such as integrating covariate factors to more effectively account for data noise. Across two popular public benchmark datasets (CA-IX and HRP), our model demonstrates strong performance compared to count baselines, enriches the correct pharmacophores, and offers valuable insights via its intrinsic interpretable structure, thereby providing a robust tool for the analysis of DEL data.
1411.0450
Robert Hoehndorf
Robert Hoehndorf and Paul N Schofield and Georgios V Gkoutos
Analysis of the human diseasome reveals phenotype modules across common, genetic, and infectious diseases
null
null
10.1038/srep10888
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phenotypes are the observable characteristics of an organism arising from its response to the environment. Phenotypes associated with engineered and natural genetic variation are widely recorded using phenotype ontologies in model organisms, as are signs and symptoms of human Mendelian diseases in databases such as OMIM and Orphanet. Exploiting these resources, several computational methods have been developed for integration and analysis of phenotype data to identify the genetic etiology of diseases or suggest plausible interventions. A similar resource would be highly useful not only for rare and Mendelian diseases, but also for common, complex and infectious diseases. We apply a semantic text- mining approach to identify the phenotypes (signs and symptoms) associated with over 8,000 diseases. We demonstrate that our method generates phenotypes that correctly identify known disease-associated genes in mice and humans with high accuracy. Using a phenotypic similarity measure, we generate a human disease network in which diseases that share signs and symptoms cluster together, and we use this network to identify phenotypic disease modules.
[ { "created": "Mon, 3 Nov 2014 12:15:39 GMT", "version": "v1" }, { "created": "Wed, 26 Nov 2014 18:48:02 GMT", "version": "v2" } ]
2015-05-26
[ [ "Hoehndorf", "Robert", "" ], [ "Schofield", "Paul N", "" ], [ "Gkoutos", "Georgios V", "" ] ]
Phenotypes are the observable characteristics of an organism arising from its response to the environment. Phenotypes associated with engineered and natural genetic variation are widely recorded using phenotype ontologies in model organisms, as are signs and symptoms of human Mendelian diseases in databases such as OMIM and Orphanet. Exploiting these resources, several computational methods have been developed for integration and analysis of phenotype data to identify the genetic etiology of diseases or suggest plausible interventions. A similar resource would be highly useful not only for rare and Mendelian diseases, but also for common, complex and infectious diseases. We apply a semantic text-mining approach to identify the phenotypes (signs and symptoms) associated with over 8,000 diseases. We demonstrate that our method generates phenotypes that correctly identify known disease-associated genes in mice and humans with high accuracy. Using a phenotypic similarity measure, we generate a human disease network in which diseases that share signs and symptoms cluster together, and we use this network to identify phenotypic disease modules.
2404.08833
Abbey Corson
Abbey E. Corson, Meaghan MacDonald, Velislava Tzaneva, Chris M. Edwards, Kristi B. Adamo
Breaking Boundaries: A Chronology with Future Directions of Women in Exercise Physiology Research, Centred on Pregnancy
4 figures
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-sa/4.0/
Historically, females were excluded from clinical research due to their reproductive roles, hindering medical understanding and healthcare quality. Despite guidelines promoting equal participation, females are underrepresented in exercise science, perpetuating misconceptions about female physiology. Even less attention has been given to exercise in the pregnant population. Research on pregnancy and exercise has evolved considerably from the initial bedrest prescriptions but concerns about exercise risks during pregnancy persisted for many decades. Recent guidelines endorse moderate-intensity physical activity during pregnancy, supported by considerable evidence of its safety and benefits. Mental health during pregnancy, often overlooked, is gaining traction, with exercise showing promise in reducing depression and anxiety. While pregnancy guidelines recommend moderate-intensity physical activity, there remains limited understanding of optimal frequency, intensity, type and time (duration) for extremes like elite athletes or those with complications. Female participation in elite sport and physically demanding jobs is rising, but research on their specific needs is lacking. Traditional practices like bed rest for high-risk pregnancies are being questioned, as evidence suggests it may not improve outcomes. Historical neglect of gestational parents in research perpetuated stereotypes of female frailty, but recent years have seen a shift towards recognizing the benefits of an active pregnancy. Closing knowledge gaps and inclusivity in research are crucial for ensuring guidelines reflect the diverse needs of gestational parents. Therefore, the purpose of this review is to summarize the evolution of exercise physiology and pregnancy research along with future directions for this novel field.
[ { "created": "Fri, 12 Apr 2024 22:13:23 GMT", "version": "v1" } ]
2024-04-16
[ [ "Corson", "Abbey E.", "" ], [ "MacDonald", "Meaghan", "" ], [ "Tzaneva", "Velislava", "" ], [ "Edwards", "Chris M.", "" ], [ "Adamo", "Kristi B.", "" ] ]
Historically, females were excluded from clinical research due to their reproductive roles, hindering medical understanding and healthcare quality. Despite guidelines promoting equal participation, females are underrepresented in exercise science, perpetuating misconceptions about female physiology. Even less attention has been given to exercise in the pregnant population. Research on pregnancy and exercise has evolved considerably from the initial bed rest prescriptions, but concerns about exercise risks during pregnancy persisted for many decades. Recent guidelines endorse moderate-intensity physical activity during pregnancy, supported by considerable evidence of its safety and benefits. Mental health during pregnancy, often overlooked, is gaining traction, with exercise showing promise in reducing depression and anxiety. While pregnancy guidelines recommend moderate-intensity physical activity, there remains limited understanding of optimal frequency, intensity, type and time (duration) for extremes like elite athletes or those with complications. Female participation in elite sport and physically demanding jobs is rising, but research on their specific needs is lacking. Traditional practices like bed rest for high-risk pregnancies are being questioned, as evidence suggests it may not improve outcomes. Historical neglect of gestational parents in research perpetuated stereotypes of female frailty, but recent years have seen a shift towards recognizing the benefits of an active pregnancy. Closing knowledge gaps and inclusivity in research are crucial for ensuring guidelines reflect the diverse needs of gestational parents. Therefore, the purpose of this review is to summarize the evolution of exercise physiology and pregnancy research along with future directions for this novel field.
2101.10665
Claudius Gros
Fabian Schubert, Claudius Gros
Local homeostatic regulation of the spectral radius of echo-state networks
Frontiers In Computational Neuroscience, in press
Frontiers In Computational Neuroscience 24, 587721 (2021)
10.3389/fncom.2021.587721
null
q-bio.NC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recurrent cortical networks provide reservoirs of states that are thought to play a crucial role for sequential information processing in the brain. However, classical reservoir computing requires manual adjustments of global network parameters, particularly of the spectral radius of the recurrent synaptic weight matrix. It is hence not clear if the spectral radius is accessible to biological neural networks. Using random matrix theory, we show that the spectral radius is related to local properties of the neuronal dynamics whenever the overall dynamical state is only weakly correlated. This result allows us to introduce two local homeostatic synaptic scaling mechanisms, termed flow control and variance control, that implicitly drive the spectral radius towards the desired value under working conditions. We demonstrate the effectiveness of the two adaptation mechanisms under different external input protocols and the network performance after adaptation by training the network to perform a time-delayed XOR operation on binary sequences. As our main result, we found that flow control reliably regulates the spectral radius for different types of input statistics. Precise tuning is however negatively affected when interneural correlations are substantial. Furthermore, we found a consistent task performance over a wide range of input strengths/variances. Variance control did however not yield the desired spectral radii with the same precision, being less consistent across different input strengths. Given the effectiveness and remarkably simple mathematical form of flow control, we conclude that self-consistent local control of the spectral radius via an implicit adaptation scheme is an interesting and biological plausible alternative to conventional methods using setpoint homeostatic feedback controls of neural firing.
[ { "created": "Tue, 26 Jan 2021 09:47:37 GMT", "version": "v1" } ]
2021-04-02
[ [ "Schubert", "Fabian", "" ], [ "Gros", "Claudius", "" ] ]
Recurrent cortical networks provide reservoirs of states that are thought to play a crucial role for sequential information processing in the brain. However, classical reservoir computing requires manual adjustments of global network parameters, particularly of the spectral radius of the recurrent synaptic weight matrix. It is hence not clear if the spectral radius is accessible to biological neural networks. Using random matrix theory, we show that the spectral radius is related to local properties of the neuronal dynamics whenever the overall dynamical state is only weakly correlated. This result allows us to introduce two local homeostatic synaptic scaling mechanisms, termed flow control and variance control, that implicitly drive the spectral radius towards the desired value under working conditions. We demonstrate the effectiveness of the two adaptation mechanisms under different external input protocols and the network performance after adaptation by training the network to perform a time-delayed XOR operation on binary sequences. As our main result, we found that flow control reliably regulates the spectral radius for different types of input statistics. Precise tuning is however negatively affected when interneural correlations are substantial. Furthermore, we found a consistent task performance over a wide range of input strengths/variances. Variance control did however not yield the desired spectral radii with the same precision, being less consistent across different input strengths. Given the effectiveness and remarkably simple mathematical form of flow control, we conclude that self-consistent local control of the spectral radius via an implicit adaptation scheme is an interesting and biologically plausible alternative to conventional methods using setpoint homeostatic feedback controls of neural firing.
2208.06361
Ke Yu
Ke Yu, Shyam Visweswaran, Kayhan Batmanghelich
Hyperbolic Molecular Representation Learning for Drug Repositioning
Accepted by NeurIPS workshop 2020. arXiv admin note: substantial text overlap with arXiv:2006.00986
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
Learning accurate drug representations is essential for task such as computational drug repositioning. A drug hierarchy is a valuable source that encodes knowledge of relations among drugs in a tree-like structure where drugs that act on the same organs, treat the same disease, or bind to the same biological target are grouped together. However, its utility in learning drug representations has not yet been explored, and currently described drug representations cannot place novel molecules in a drug hierarchy. Here, we develop a semi-supervised drug embedding that incorporates two sources of information: (1) underlying chemical grammar that is inferred from chemical structures of drugs and drug-like molecules (unsupervised), and (2) hierarchical relations that are encoded in an expert-crafted hierarchy of approved drugs (supervised). We use the Variational Auto-Encoder (VAE) framework to encode the chemical structures of molecules and use the drug-drug similarity information obtained from the hierarchy to induce the clustering of drugs in hyperbolic space. The hyperbolic space is amenable for encoding hierarchical relations. Our qualitative results support that the learned drug embedding can induce the hierarchical relations among drugs. We demonstrate that the learned drug embedding can be used for drug repositioning.
[ { "created": "Wed, 6 Jul 2022 20:20:29 GMT", "version": "v1" } ]
2022-08-15
[ [ "Yu", "Ke", "" ], [ "Visweswaran", "Shyam", "" ], [ "Batmanghelich", "Kayhan", "" ] ]
Learning accurate drug representations is essential for tasks such as computational drug repositioning. A drug hierarchy is a valuable source that encodes knowledge of relations among drugs in a tree-like structure where drugs that act on the same organs, treat the same disease, or bind to the same biological target are grouped together. However, its utility in learning drug representations has not yet been explored, and currently described drug representations cannot place novel molecules in a drug hierarchy. Here, we develop a semi-supervised drug embedding that incorporates two sources of information: (1) underlying chemical grammar that is inferred from chemical structures of drugs and drug-like molecules (unsupervised), and (2) hierarchical relations that are encoded in an expert-crafted hierarchy of approved drugs (supervised). We use the Variational Auto-Encoder (VAE) framework to encode the chemical structures of molecules and use the drug-drug similarity information obtained from the hierarchy to induce the clustering of drugs in hyperbolic space. The hyperbolic space is amenable for encoding hierarchical relations. Our qualitative results support that the learned drug embedding can induce the hierarchical relations among drugs. We demonstrate that the learned drug embedding can be used for drug repositioning.
1701.04316
Bashar Ibrahim
Bashar Ibrahim
Spindle assembly checkpoint is sufficient for complete Cdc20 sequestering in mitotic control
arXiv admin note: substantial text overlap with arXiv:1611.04781
Comput Struct Biotechnol J. 2015 Apr 9;13:320-8
10.1016/j.csbj.2015.03.006
null
q-bio.MN math.AP math.CA math.DS q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The spindle checkpoint assembly (SAC) ensures genome fidelity by temporarily delaying anaphase onset, until all chromosomes are properly attached to the mitotic spindle. The SAC delays mitotic progression by preventing activation of the ubiquitin ligase anaphase-promoting complex (APC/C) or cyclosome; whose activation by Cdc20 is required for sister-chromatid separation marking the transition into anaphase. The mitotic checkpoint complex (MCC), which contains Cdc20 as a subunit, binds stably to the APC/C. Compelling evidence by Izawa and Pines (Nature 2014; 10.1038/nature13911) indicates that the MCC can inhibit a second Cdc20 that has already bound and activated the APC/C. Whether or not MCC per se is sufficient to fully sequester Cdc20 and inhibit APC/C remains unclear. Here, a dynamic model for SAC regulation in which the MCC binds a second Cdc20 was constructed. This model is compared to the MCC, and the MCC-and-BubR1 (dual inhibition of APC) core model variants and subsequently validated with experimental data from the literature. By using ordinary nonlinear differential equations and spatial simulations, it is shown that the SAC works sufficiently to fully sequester Cdc20 and completely inhibit APC/C activity. This study highlights the principle that a systems biology approach is vital for molecular biology and could also be used for creating hypotheses to design future experiments.
[ { "created": "Sat, 19 Nov 2016 09:26:42 GMT", "version": "v1" } ]
2017-01-18
[ [ "Ibrahim", "Bashar", "" ] ]
The spindle assembly checkpoint (SAC) ensures genome fidelity by temporarily delaying anaphase onset until all chromosomes are properly attached to the mitotic spindle. The SAC delays mitotic progression by preventing activation of the ubiquitin ligase anaphase-promoting complex (APC/C) or cyclosome, whose activation by Cdc20 is required for sister-chromatid separation, marking the transition into anaphase. The mitotic checkpoint complex (MCC), which contains Cdc20 as a subunit, binds stably to the APC/C. Compelling evidence by Izawa and Pines (Nature 2014; 10.1038/nature13911) indicates that the MCC can inhibit a second Cdc20 that has already bound and activated the APC/C. Whether or not the MCC per se is sufficient to fully sequester Cdc20 and inhibit the APC/C remains unclear. Here, a dynamic model for SAC regulation in which the MCC binds a second Cdc20 was constructed. This model is compared to the MCC, and the MCC-and-BubR1 (dual inhibition of APC) core model variants and subsequently validated with experimental data from the literature. By using ordinary nonlinear differential equations and spatial simulations, it is shown that the SAC is sufficient to fully sequester Cdc20 and completely inhibit APC/C activity. This study highlights the principle that a systems biology approach is vital for molecular biology and could also be used for creating hypotheses to design future experiments.
1603.00959
Shun Adachi
Shun Adachi
Biological hierarchies emerged from natural characteristics of a number theory
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We would like to show how biological grouping, especially in the case of species formation, is emerged through a nature of interactive populations with a number theory. First, we are able to define a species as a $p$-Sylow subgroup of a particular community in a single niche, confirmed by topological analysis. We named this model the patch with zeta dominance (PzDom) model. Next, the topological nature of the system is carefully examined. We confirm the induction of hierarchy and time through a one-dimensional probability space with certain topologies. For further clarification of induced fractals including the relation to renormalization, a theoretical development is proposed based on a newly identified fact, namely that scaling parameters for magnetization analogs exactly correspond to imaginary parts of the Riemann zeta function's nontrivial zeros. In our PzDom model, calculations only require knowledge of the density of individuals over time.
[ { "created": "Thu, 3 Mar 2016 03:24:12 GMT", "version": "v1" }, { "created": "Mon, 28 Aug 2023 08:31:45 GMT", "version": "v10" }, { "created": "Mon, 25 Sep 2023 14:11:38 GMT", "version": "v11" }, { "created": "Mon, 28 Mar 2016 06:01:08 GMT", "version": "v2" }, { "created": "Thu, 14 Apr 2016 16:11:55 GMT", "version": "v3" }, { "created": "Thu, 7 Jul 2016 10:26:12 GMT", "version": "v4" }, { "created": "Fri, 30 Sep 2016 05:20:48 GMT", "version": "v5" }, { "created": "Thu, 1 Jun 2017 06:47:35 GMT", "version": "v6" }, { "created": "Thu, 28 Mar 2019 15:30:38 GMT", "version": "v7" }, { "created": "Tue, 24 Sep 2019 07:14:20 GMT", "version": "v8" }, { "created": "Mon, 16 Nov 2020 16:23:47 GMT", "version": "v9" } ]
2023-09-26
[ [ "Adachi", "Shun", "" ] ]
We would like to show how biological grouping, especially species formation, emerges from the nature of interacting populations through number theory. First, we define a species as a $p$-Sylow subgroup of a particular community in a single niche, confirmed by topological analysis. We name this model the patch with zeta dominance (PzDom) model. Next, the topological nature of the system is carefully examined. We confirm the induction of hierarchy and time through a one-dimensional probability space with certain topologies. For further clarification of induced fractals, including the relation to renormalization, a theoretical development is proposed based on a newly identified fact, namely that scaling parameters for magnetization analogs exactly correspond to the imaginary parts of the nontrivial zeros of the Riemann zeta function. In our PzDom model, calculations only require knowledge of the density of individuals over time.
1601.04193
Marcelo Rossi
Marcelo Margon Rossi, Luis Fernandez Lopez
Modelling of immune cells as vectors of HIV spread inside a patient human body
22 pages, 5 figures, 1 table
null
null
null
q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by/4.0/
The search to understand how the HIV virus spreads inside the human body and how the immune response works to control it has motivated studies related to Mathematical Immunology. Actually, researches include the idea of mathematical models representing the dynamics of healthy and infected cell populations and focusing on mechanisms used by HIV to invade target-host cells, viral dissemination (which leads to depletion of the T-cell pool and collapse of the immune system), and impairment of immune response. In this work, we show the importance of specific cells of immune response, as infection vectors involved in the dynamics of viral proliferation within an untreated patient, by using an ordinary differential equation model in which we considered that the virus infected target-cells such as macrophages, dendritic cells, and lymphocytes TCD4 and TCD8 populations. In conclusion, we demonstrate the importance of each cell-host and the threshold of viral establishment and posterior spread based on the presence of infected macrophages and dendritic cells in antigen-presenting processing, which leads to new infections (by wild or mutant virions) after immune response activation episodes. We presented an $R_0$ expression that provides the major parameters of HIV infection. Additionally, we suggest some possibilities of new targets for functional vaccines.
[ { "created": "Sat, 16 Jan 2016 18:17:12 GMT", "version": "v1" } ]
2016-01-19
[ [ "Rossi", "Marcelo Margon", "" ], [ "Lopez", "Luis Fernandez", "" ] ]
The search to understand how HIV spreads inside the human body and how the immune response works to control it has motivated studies in Mathematical Immunology. Current research includes mathematical models representing the dynamics of healthy and infected cell populations, focusing on mechanisms used by HIV to invade target host cells, viral dissemination (which leads to depletion of the T-cell pool and collapse of the immune system), and impairment of the immune response. In this work, we show the importance of specific immune cells as infection vectors involved in the dynamics of viral proliferation within an untreated patient, by using an ordinary differential equation model in which we considered that the virus infects target cells such as macrophages, dendritic cells, and TCD4 and TCD8 lymphocyte populations. In conclusion, we demonstrate the importance of each host cell and the threshold of viral establishment and subsequent spread based on the presence of infected macrophages and dendritic cells in antigen presentation, which leads to new infections (by wild-type or mutant virions) after immune response activation episodes. We present an $R_0$ expression that provides the major parameters of HIV infection. Additionally, we suggest some possibilities of new targets for functional vaccines.
1003.1200
Lucilla de Arcangelis
Lucilla de Arcangelis and Hans J. Herrmann
Learning as a phenomenon occurring in a critical state
5 pages, 5 figures
PNAS 2010 vol 107 pages 3977-3981
10.1073/pnas.0912289107
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent physiological measurements have provided clear evidence about scale-free avalanche brain activity and EEG spectra, feeding the classical enigma of how such a chaotic system can ever learn or respond in a controlled and reproducible way. Models for learning, like neural networks or perceptrons, have traditionally avoided strong fluctuations. Conversely, we propose that brain activity having features typical of systems at a critical point, represents a crucial ingredient for learning. We present here a study which provides novel insights toward the understanding of the problem. Our model is able to reproduce quantitatively the experimentally observed critical state of the brain and, at the same time, learns and remembers logical rules including the exclusive OR (XOR), which has posed difficulties to several previous attempts. We implement the model on a network with topological properties close to the functionality network in real brains. Learning occurs via plastic adaptation of synaptic strengths and exhibits universal features. We find that the learning performance and the average time required to learn are controlled by the strength of plastic adaptation, in a way independent of the specific task assigned to the system. Even complex rules can be learned provided that the plastic adaptation is sufficiently slow.
[ { "created": "Fri, 5 Mar 2010 07:50:49 GMT", "version": "v1" } ]
2015-05-18
[ [ "de Arcangelis", "Lucilla", "" ], [ "Herrmann", "Hans J.", "" ] ]
Recent physiological measurements have provided clear evidence about scale-free avalanche brain activity and EEG spectra, feeding the classical enigma of how such a chaotic system can ever learn or respond in a controlled and reproducible way. Models for learning, like neural networks or perceptrons, have traditionally avoided strong fluctuations. Conversely, we propose that brain activity having features typical of systems at a critical point, represents a crucial ingredient for learning. We present here a study which provides novel insights toward the understanding of the problem. Our model is able to reproduce quantitatively the experimentally observed critical state of the brain and, at the same time, learns and remembers logical rules including the exclusive OR (XOR), which has posed difficulties to several previous attempts. We implement the model on a network with topological properties close to the functionality network in real brains. Learning occurs via plastic adaptation of synaptic strengths and exhibits universal features. We find that the learning performance and the average time required to learn are controlled by the strength of plastic adaptation, in a way independent of the specific task assigned to the system. Even complex rules can be learned provided that the plastic adaptation is sufficiently slow.
q-bio/0703022
Volkan Sevim
Volkan Sevim, Per Arne Rikvold
Are Genetically Robust Regulatory Networks Dynamically Different from Random Ones?
To be published in Computer Simulation Studies in Condensed-Matter Physics XX. Ed. by D.P. Landau, S. P. Lewis, H.-B. Schuttler (Springer-Verlag, Berlin Heidelberg New York)
Physics Procedia 7, 93-97 (2010)
10.1016/j.phpro.2010.09.051
null
q-bio.MN cond-mat.stat-mech nlin.AO q-bio.QM
null
We study a genetic regulatory network model developed to demonstrate that genetic robustness can evolve through stabilizing selection for optimal phenotypes. We report preliminary results on whether such selection could result in a reorganization of the state space of the system. For the chosen parameters, the evolution moves the system slightly toward the more ordered part of the phase diagram. We also find that strong memory effects cause the Derrida annealed approximation to give erroneous predictions about the model's phase diagram.
[ { "created": "Fri, 9 Mar 2007 17:54:22 GMT", "version": "v1" }, { "created": "Mon, 17 Mar 2008 17:06:08 GMT", "version": "v2" } ]
2010-12-07
[ [ "Sevim", "Volkan", "" ], [ "Rikvold", "Per Arne", "" ] ]
We study a genetic regulatory network model developed to demonstrate that genetic robustness can evolve through stabilizing selection for optimal phenotypes. We report preliminary results on whether such selection could result in a reorganization of the state space of the system. For the chosen parameters, the evolution moves the system slightly toward the more ordered part of the phase diagram. We also find that strong memory effects cause the Derrida annealed approximation to give erroneous predictions about the model's phase diagram.
1010.2530
Dante Chialvo
Dante R. Chialvo
Emergent complex neural dynamics
null
Nature Physics 6, 744-750 (2010)
10.1038/nphys1803
null
q-bio.NC cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A large repertoire of spatiotemporal activity patterns in the brain is the basis for adaptive behaviour. Understanding the mechanism by which the brain's hundred billion neurons and hundred trillion synapses manage to produce such a range of cortical configurations in a flexible manner remains a fundamental problem in neuroscience. One plausible solution is the involvement of universal mechanisms of emergent complex phenomena evident in dynamical systems poised near a critical point of a second-order phase transition. We review recent theoretical and empirical results supporting the notion that the brain is naturally poised near criticality, as well as its implications for better understanding of the brain.
[ { "created": "Tue, 12 Oct 2010 22:29:48 GMT", "version": "v1" } ]
2010-10-14
[ [ "Chialvo", "Dante R.", "" ] ]
A large repertoire of spatiotemporal activity patterns in the brain is the basis for adaptive behaviour. Understanding the mechanism by which the brain's hundred billion neurons and hundred trillion synapses manage to produce such a range of cortical configurations in a flexible manner remains a fundamental problem in neuroscience. One plausible solution is the involvement of universal mechanisms of emergent complex phenomena evident in dynamical systems poised near a critical point of a second-order phase transition. We review recent theoretical and empirical results supporting the notion that the brain is naturally poised near criticality, as well as its implications for better understanding of the brain.
1009.4167
Giovanni Meacci
Elisabeth Fischer-Friedrich, Giovanni Meacci, Joe Lutkenhaus, Hugues Chate, and Karsten Kruse
Intra- and intercellular fluctuations in Min-protein dynamics decrease with cell length
Article and Supplementary Information: 26 pages, 12 figures
PNAS (2010) 107 6134-6139
10.1073/pnas.0911708107
null
q-bio.SC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-organization of proteins in space and time is of crucial importance for the functioning of cellular processes. Often, this organization takes place in the presence of strong random fluctuations due to the small number of molecules involved. We report on stochastic switching of the Min-protein distributions between the two cell halves in short Escherichia coli cells. A computational model provides strong evidence that the macroscopic switching is rooted in microscopic noise on the molecular scale. In longer bacteria, the switching turns into regular oscillations that are required for positioning of the division plane. As the pattern becomes more regular, cell-to-cell variability also lessens, indicating cell length-dependent regulation of Min-protein activity.
[ { "created": "Tue, 21 Sep 2010 18:16:42 GMT", "version": "v1" } ]
2010-09-22
[ [ "Fischer-Friedrich", "Elisabeth", "" ], [ "Meacci", "Giovanni", "" ], [ "Lutkenhaus", "Joe", "" ], [ "Chate", "Hugues", "" ], [ "Kruse", "Karsten", "" ] ]
Self-organization of proteins in space and time is of crucial importance for the functioning of cellular processes. Often, this organization takes place in the presence of strong random fluctuations due to the small number of molecules involved. We report on stochastic switching of the Min-protein distributions between the two cell halves in short Escherichia coli cells. A computational model provides strong evidence that the macroscopic switching is rooted in microscopic noise on the molecular scale. In longer bacteria, the switching turns into regular oscillations that are required for positioning of the division plane. As the pattern becomes more regular, cell-to-cell variability also lessens, indicating cell length-dependent regulation of Min-protein activity.
1912.10956
Anne-Florence Bitbol
Carlos A. Gandarilla-P\'erez, Pierre Mergny, Martin Weigt, Anne-Florence Bitbol
Statistical physics of interacting proteins: impact of dataset size and quality assessed in synthetic sequences
18 pages, 16 figures
Phys. Rev. E 101, 032413 (2020)
10.1103/PhysRevE.101.032413
null
q-bio.BM cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying protein-protein interactions is crucial for a systems-level understanding of the cell. Recently, algorithms based on inverse statistical physics, e.g. Direct Coupling Analysis (DCA), have made it possible to use evolutionarily related sequences to address two conceptually related inference tasks: finding pairs of interacting proteins, and identifying pairs of residues which form contacts between interacting proteins. Here we address two underlying questions: How are the performances of both inference tasks related? How does performance depend on dataset size and quality? To this end, we formalize both tasks using Ising models defined over stochastic block models, with individual blocks representing single proteins, and inter-block couplings representing protein-protein interactions; controlled synthetic sequence data are generated by Monte-Carlo simulations. We show that DCA is able to address both inference tasks accurately when sufficiently large training sets are available, and that an iterative pairing algorithm (IPA) allows predictions to be made even without a training set. Noise in the training data deteriorates performance. In both tasks we find a quadratic scaling relating dataset quality and size that is consistent with noise adding in square-root fashion and signal adding linearly when increasing the dataset. This implies that it is generally good to incorporate more data even if its quality is imperfect, thereby shedding light on the empirically observed performance of DCA applied to natural protein sequences.
[ { "created": "Mon, 23 Dec 2019 16:30:18 GMT", "version": "v1" }, { "created": "Mon, 16 Mar 2020 10:26:43 GMT", "version": "v2" } ]
2020-03-25
[ [ "Gandarilla-Pérez", "Carlos A.", "" ], [ "Mergny", "Pierre", "" ], [ "Weigt", "Martin", "" ], [ "Bitbol", "Anne-Florence", "" ] ]
Identifying protein-protein interactions is crucial for a systems-level understanding of the cell. Recently, algorithms based on inverse statistical physics, e.g. Direct Coupling Analysis (DCA), have made it possible to use evolutionarily related sequences to address two conceptually related inference tasks: finding pairs of interacting proteins, and identifying pairs of residues which form contacts between interacting proteins. Here we address two underlying questions: How are the performances of both inference tasks related? How does performance depend on dataset size and quality? To this end, we formalize both tasks using Ising models defined over stochastic block models, with individual blocks representing single proteins, and inter-block couplings representing protein-protein interactions; controlled synthetic sequence data are generated by Monte-Carlo simulations. We show that DCA is able to address both inference tasks accurately when sufficiently large training sets are available, and that an iterative pairing algorithm (IPA) allows predictions to be made even without a training set. Noise in the training data deteriorates performance. In both tasks we find a quadratic scaling relating dataset quality and size that is consistent with noise adding in square-root fashion and signal adding linearly when increasing the dataset. This implies that it is generally good to incorporate more data even if its quality is imperfect, thereby shedding light on the empirically observed performance of DCA applied to natural protein sequences.
2203.02510
Raja Muhammad Saad Bashir
Muhammad Dawood, Raja Muhammad Saad Bashir, Srijay Deshpande, Manahil Raza, Adam Shephard
Cellular Segmentation and Composition in Routine Histology Images using Deep Learning
null
null
null
null
q-bio.QM cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
Identification and quantification of nuclei in colorectal cancer haematoxylin \& eosin (H\&E) stained histology images is crucial to prognosis and patient management. In computational pathology these tasks are referred to as nuclear segmentation, classification and composition and are used to extract meaningful interpretable cytological and architectural features for downstream analysis. The CoNIC challenge poses the task of automated nuclei segmentation, classification and composition into six different types of nuclei from the largest publicly known nuclei dataset - Lizard. In this regard, we have developed pipelines for the prediction of nuclei segmentation using HoVer-Net and ALBRT for cellular composition. On testing on the preliminary test set, HoVer-Net achieved a PQ of 0.58, a PQ+ of 0.58 and finally a mPQ+ of 0.35. For the prediction of cellular composition with ALBRT on the preliminary test set, we achieved an overall $R^2$ score of 0.53, consisting of 0.84 for lymphocytes, 0.70 for epithelial cells, 0.70 for plasma cells and 0.60 for eosinophils.
[ { "created": "Fri, 4 Mar 2022 15:03:53 GMT", "version": "v1" } ]
2022-03-08
[ [ "Dawood", "Muhammad", "" ], [ "Bashir", "Raja Muhammad Saad", "" ], [ "Deshpande", "Srijay", "" ], [ "Raza", "Manahil", "" ], [ "Shephard", "Adam", "" ] ]
Identification and quantification of nuclei in colorectal cancer haematoxylin \& eosin (H\&E) stained histology images is crucial to prognosis and patient management. In computational pathology these tasks are referred to as nuclear segmentation, classification and composition and are used to extract meaningful interpretable cytological and architectural features for downstream analysis. The CoNIC challenge poses the task of automated nuclei segmentation, classification and composition into six different types of nuclei from the largest publicly known nuclei dataset - Lizard. In this regard, we have developed pipelines for the prediction of nuclei segmentation using HoVer-Net and ALBRT for cellular composition. On testing on the preliminary test set, HoVer-Net achieved a PQ of 0.58, a PQ+ of 0.58 and finally a mPQ+ of 0.35. For the prediction of cellular composition with ALBRT on the preliminary test set, we achieved an overall $R^2$ score of 0.53, consisting of 0.84 for lymphocytes, 0.70 for epithelial cells, 0.70 for plasma cells and 0.60 for eosinophils.
2407.14020
Chao Feng
Fengyu Yang, Chao Feng, Daniel Wang, Tianye Wang, Ziyao Zeng, Zhiyang Xu, Hyoungseob Park, Pengliang Ji, Hanbin Zhao, Yuanning Li, Alex Wong
NeuroBind: Towards Unified Multimodal Representations for Neural Signals
null
null
null
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding neural activity and information representation is crucial for advancing knowledge of brain function and cognition. Neural activity, measured through techniques like electrophysiology and neuroimaging, reflects various aspects of information processing. Recent advances in deep neural networks offer new approaches to analyzing these signals using pre-trained models. However, challenges arise due to discrepancies between different neural signal modalities and the limited scale of high-quality neural data. To address these challenges, we present NeuroBind, a general representation that unifies multiple brain signal types, including EEG, fMRI, calcium imaging, and spiking data. To achieve this, we align neural signals in these image-paired neural datasets to pre-trained vision-language embeddings. NeuroBind is the first model that studies different neural modalities interconnectedly and is able to leverage high-resource modality models for various neuroscience tasks. We also show that by combining information from different neural signal modalities, NeuroBind enhances downstream performance, demonstrating the complementary strengths of different neural modalities mapped to the same space. This approach holds significant potential for advancing neuroscience research, improving AI systems, and developing neuroprosthetics and brain-computer interfaces.
[ { "created": "Fri, 19 Jul 2024 04:42:52 GMT", "version": "v1" } ]
2024-07-22
[ [ "Yang", "Fengyu", "" ], [ "Feng", "Chao", "" ], [ "Wang", "Daniel", "" ], [ "Wang", "Tianye", "" ], [ "Zeng", "Ziyao", "" ], [ "Xu", "Zhiyang", "" ], [ "Park", "Hyoungseob", "" ], [ "Ji", "Pengliang", "" ], [ "Zhao", "Hanbin", "" ], [ "Li", "Yuanning", "" ], [ "Wong", "Alex", "" ] ]
Understanding neural activity and information representation is crucial for advancing knowledge of brain function and cognition. Neural activity, measured through techniques like electrophysiology and neuroimaging, reflects various aspects of information processing. Recent advances in deep neural networks offer new approaches to analyzing these signals using pre-trained models. However, challenges arise due to discrepancies between different neural signal modalities and the limited scale of high-quality neural data. To address these challenges, we present NeuroBind, a general representation that unifies multiple brain signal types, including EEG, fMRI, calcium imaging, and spiking data. To achieve this, we align neural signals in these image-paired neural datasets to pre-trained vision-language embeddings. NeuroBind is the first model that studies different neural modalities interconnectedly and is able to leverage high-resource modality models for various neuroscience tasks. We also show that by combining information from different neural signal modalities, NeuroBind enhances downstream performance, demonstrating the complementary strengths of different neural modalities mapped to the same space. This approach holds significant potential for advancing neuroscience research, improving AI systems, and developing neuroprosthetics and brain-computer interfaces.
2309.02071
Xiao Yuan
Xiao Yuan
BeeTLe: A Framework for Linear B-Cell Epitope Prediction and Classification
18 pages, 3 figures, accepted at ECML PKDD 2023
null
10.1007/978-3-031-43427-3_29
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The process of identifying and characterizing B-cell epitopes, which are the portions of antigens recognized by antibodies, is important for our understanding of the immune system, and for many applications including vaccine development, therapeutics, and diagnostics. Computational epitope prediction is challenging yet rewarding as it significantly reduces the time and cost of laboratory work. Most of the existing tools do not have satisfactory performance and only discriminate epitopes from non-epitopes. This paper presents a new deep learning-based multi-task framework for linear B-cell epitope prediction as well as antibody type-specific epitope classification. Specifically, a sequence-based neural network model using recurrent layers and Transformer blocks is developed. We propose an amino acid encoding method based on eigen decomposition to help the model learn the representations of epitopes. We introduce modifications to standard cross-entropy loss functions by extending a logit adjustment technique to cope with the class imbalance. Experimental results on data curated from the largest public epitope database demonstrate the validity of the proposed methods and the superior performance compared to competing ones.
[ { "created": "Tue, 5 Sep 2023 09:18:29 GMT", "version": "v1" } ]
2023-09-22
[ [ "Yuan", "Xiao", "" ] ]
The process of identifying and characterizing B-cell epitopes, which are the portions of antigens recognized by antibodies, is important for our understanding of the immune system, and for many applications including vaccine development, therapeutics, and diagnostics. Computational epitope prediction is challenging yet rewarding as it significantly reduces the time and cost of laboratory work. Most of the existing tools do not have satisfactory performance and only discriminate epitopes from non-epitopes. This paper presents a new deep learning-based multi-task framework for linear B-cell epitope prediction as well as antibody type-specific epitope classification. Specifically, a sequence-based neural network model using recurrent layers and Transformer blocks is developed. We propose an amino acid encoding method based on eigen decomposition to help the model learn the representations of epitopes. We introduce modifications to standard cross-entropy loss functions by extending a logit adjustment technique to cope with the class imbalance. Experimental results on data curated from the largest public epitope database demonstrate the validity of the proposed methods and the superior performance compared to competing ones.
2206.11417
Armin Thomas
Armin W. Thomas and Christopher R\'e and Russell A. Poldrack
Self-Supervised Learning of Brain Dynamics from Broad Neuroimaging Data
21 pages, 5 main figures, 7 supplementary figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Self-supervised learning techniques are celebrating immense success in natural language processing (NLP) by enabling models to learn from broad language data at unprecedented scales. Here, we aim to leverage the success of these techniques for mental state decoding, where researchers aim to identify specific mental states (e.g., the experience of anger or joy) from brain activity. To this end, we devise a set of novel self-supervised learning frameworks for neuroimaging data inspired by prominent learning frameworks in NLP. At their core, these frameworks learn the dynamics of brain activity by modeling sequences of activity akin to how sequences of text are modeled in NLP. We evaluate the frameworks by pre-training models on a broad neuroimaging dataset spanning functional Magnetic Resonance Imaging data from 11,980 experimental runs of 1,726 individuals across 34 datasets, and subsequently adapting the pre-trained models to benchmark mental state decoding datasets. The pre-trained models transfer well, generally outperforming baseline models trained from scratch, while models trained in a learning framework based on causal language modeling clearly outperform the others.
[ { "created": "Wed, 22 Jun 2022 23:22:17 GMT", "version": "v1" }, { "created": "Fri, 14 Oct 2022 14:27:47 GMT", "version": "v2" }, { "created": "Fri, 13 Jan 2023 20:55:02 GMT", "version": "v3" } ]
2023-01-18
[ [ "Thomas", "Armin W.", "" ], [ "Ré", "Christopher", "" ], [ "Poldrack", "Russell A.", "" ] ]
Self-supervised learning techniques are celebrating immense success in natural language processing (NLP) by enabling models to learn from broad language data at unprecedented scales. Here, we aim to leverage the success of these techniques for mental state decoding, where researchers aim to identify specific mental states (e.g., the experience of anger or joy) from brain activity. To this end, we devise a set of novel self-supervised learning frameworks for neuroimaging data inspired by prominent learning frameworks in NLP. At their core, these frameworks learn the dynamics of brain activity by modeling sequences of activity akin to how sequences of text are modeled in NLP. We evaluate the frameworks by pre-training models on a broad neuroimaging dataset spanning functional Magnetic Resonance Imaging data from 11,980 experimental runs of 1,726 individuals across 34 datasets, and subsequently adapting the pre-trained models to benchmark mental state decoding datasets. The pre-trained models transfer well, generally outperforming baseline models trained from scratch, while models trained in a learning framework based on causal language modeling clearly outperform the others.
1611.04794
Liviu Badea
Liviu Badea, Mihaela Onu, Tao Wu, Adina Roceanu, Ovidiu Bajenaru
Exploring the reproducibility of functional connectivity alterations in Parkinson's Disease
null
null
10.1371/journal.pone.0188196
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since anatomic MRI is presently not able to directly discern neuronal loss in Parkinson's Disease (PD), studying the associated functional connectivity (FC) changes seems a promising approach toward developing non-invasive and non-radioactive neuroimaging markers for this disease. While several groups have reported such FC changes in PD, there are also significant discrepancies between studies. Investigating the reproducibility of PD-related FC changes on independent datasets is therefore of crucial importance. We acquired resting-state fMRI scans for 43 subjects (27 patients, 16 controls) and compared the observed FC changes with those obtained in 2 independent datasets, one made available by the PPMI consortium and a second one by the group of Tao Wu. Unfortunately, PD-related functional connectivity changes turned out to be non-reproducible across datasets. This could be due to disease heterogeneity, but also to technical differences. To distinguish between the two, we devised a method to directly check for disease heterogeneity using random splits of a single dataset. Since we still observe non-reproducibility in a large fraction of random splits of the same dataset, we conclude that functional heterogeneity may be a dominating factor behind the lack of reproducibility of FC alterations in different rs-fMRI studies of PD. While global PD-related functional connectivity changes were non-reproducible across datasets, we identified a few individual brain region pairs with marginally consistent FC changes across all three datasets. However, training classifiers on each one of the 3 datasets to discriminate PD scans from controls produced only low accuracies on the remaining two test datasets. Moreover, classifiers trained and tested on random splits of the same dataset (which are technically homogeneous) also had low test accuracies, directly substantiating disease heterogeneity.
[ { "created": "Tue, 15 Nov 2016 11:26:22 GMT", "version": "v1" }, { "created": "Thu, 15 Jun 2017 11:30:52 GMT", "version": "v2" }, { "created": "Fri, 1 Sep 2017 13:57:18 GMT", "version": "v3" } ]
2018-02-07
[ [ "Badea", "Liviu", "" ], [ "Onu", "Mihaela", "" ], [ "Wu", "Tao", "" ], [ "Roceanu", "Adina", "" ], [ "Bajenaru", "Ovidiu", "" ] ]
Since anatomic MRI is presently not able to directly discern neuronal loss in Parkinson's Disease (PD), studying the associated functional connectivity (FC) changes seems a promising approach toward developing non-invasive and non-radioactive neuroimaging markers for this disease. While several groups have reported such FC changes in PD, there are also significant discrepancies between studies. Investigating the reproducibility of PD-related FC changes on independent datasets is therefore of crucial importance. We acquired resting-state fMRI scans for 43 subjects (27 patients, 16 controls) and compared the observed FC changes with those obtained in 2 independent datasets, one made available by the PPMI consortium and a second one by the group of Tao Wu. Unfortunately, PD-related functional connectivity changes turned out to be non-reproducible across datasets. This could be due to disease heterogeneity, but also to technical differences. To distinguish between the two, we devised a method to directly check for disease heterogeneity using random splits of a single dataset. Since we still observe non-reproducibility in a large fraction of random splits of the same dataset, we conclude that functional heterogeneity may be a dominating factor behind the lack of reproducibility of FC alterations in different rs-fMRI studies of PD. While global PD-related functional connectivity changes were non-reproducible across datasets, we identified a few individual brain region pairs with marginally consistent FC changes across all three datasets. However, training classifiers on each one of the 3 datasets to discriminate PD scans from controls produced only low accuracies on the remaining two test datasets. Moreover, classifiers trained and tested on random splits of the same dataset (which are technically homogeneous) also had low test accuracies, directly substantiating disease heterogeneity.
2011.04172
Andr\'e C. R. Martins
Andr\'e C. R. Martins
Senescence, change, and competition: when the desire to pick one model harms our understanding
10 pages, 8 figures
null
null
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The question of why we age is a fundamental one. It is about who we are, and it also might have critical practical aspects as we try to find ways to age slower, or to not age at all. Different reasons point at distinct strategies for the research of anti-aging drugs. While the main reason why biological systems work as they do is evolution, for quite a while it was believed that aging required another explanation. Aging seems to harm individuals so much that even if it has group benefits, those benefits were unlikely to be enough. That has led many scientists to propose non-evolutionary explanations as to why we age. But those theories seem to fail at explaining all the data on how species age. Here, I will show that the insistence on finding the one idea that explains it all might be at the root of the difficulty of getting a full picture. By exploring an evolutionary model of aging where locality and temporal changes are fundamental aspects of the problem, I will show that environmental change causes the barrier for group advantages to become much weaker. That weakening might help small group advantages to add up to the point they could make an adaptive difference. To answer why we age, we might have to abandon asking which models are correct. The full answer might come from considering how much each hypothesis behind each existing model, evolutionary and non-evolutionary ones, contributes to the real world's solution.
[ { "created": "Mon, 9 Nov 2020 03:48:17 GMT", "version": "v1" } ]
2020-11-10
[ [ "Martins", "André C. R.", "" ] ]
The question of why we age is a fundamental one. It is about who we are, and it also might have critical practical aspects as we try to find ways to age slower, or to not age at all. Different reasons point at distinct strategies for the research of anti-aging drugs. While the main reason why biological systems work as they do is evolution, for quite a while it was believed that aging required another explanation. Aging seems to harm individuals so much that even if it has group benefits, those benefits were unlikely to be enough. That has led many scientists to propose non-evolutionary explanations as to why we age. But those theories seem to fail at explaining all the data on how species age. Here, I will show that the insistence on finding the one idea that explains it all might be at the root of the difficulty of getting a full picture. By exploring an evolutionary model of aging where locality and temporal changes are fundamental aspects of the problem, I will show that environmental change causes the barrier for group advantages to become much weaker. That weakening might help small group advantages to add up to the point they could make an adaptive difference. To answer why we age, we might have to abandon asking which models are correct. The full answer might come from considering how much each hypothesis behind each existing model, evolutionary and non-evolutionary ones, contributes to the real world's solution.
1611.00388
Uygar S\"umb\"ul
Uygar S\"umb\"ul, Douglas Roussien Jr., Fei Chen, Nicholas Barry, Edward S. Boyden, Dawen Cai, John P. Cunningham, Liam Paninski
Automated scalable segmentation of neurons from multispectral images
main text: 9 pages and 5 figures, supplementary text: 11 pages and 8 figures (NIPS 2016)
null
null
null
q-bio.NC q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstruction of neuroanatomy is a fundamental problem in neuroscience. Stochastic expression of colors in individual cells is a promising tool, although its use in the nervous system has been limited due to various sources of variability in expression. Moreover, the intermingled anatomy of neuronal trees is challenging for existing segmentation algorithms. Here, we propose a method to automate the segmentation of neurons in such (potentially pseudo-colored) images. The method uses spatio-color relations between the voxels, generates supervoxels to reduce the problem size by four orders of magnitude before the final segmentation, and is parallelizable over the supervoxels. To quantify performance and gain insight, we generate simulated images, where the noise level and characteristics, the density of expression, and the number of fluorophore types are variable. We also present segmentations of real Brainbow images of the mouse hippocampus, which reveal many of the dendritic segments.
[ { "created": "Tue, 1 Nov 2016 21:01:44 GMT", "version": "v1" }, { "created": "Sun, 22 Jan 2017 01:48:48 GMT", "version": "v2" } ]
2017-01-24
[ [ "Sümbül", "Uygar", "" ], [ "Roussien", "Douglas", "Jr." ], [ "Chen", "Fei", "" ], [ "Barry", "Nicholas", "" ], [ "Boyden", "Edward S.", "" ], [ "Cai", "Dawen", "" ], [ "Cunningham", "John P.", "" ], [ "Paninski", "Liam", "" ] ]
Reconstruction of neuroanatomy is a fundamental problem in neuroscience. Stochastic expression of colors in individual cells is a promising tool, although its use in the nervous system has been limited due to various sources of variability in expression. Moreover, the intermingled anatomy of neuronal trees is challenging for existing segmentation algorithms. Here, we propose a method to automate the segmentation of neurons in such (potentially pseudo-colored) images. The method uses spatio-color relations between the voxels, generates supervoxels to reduce the problem size by four orders of magnitude before the final segmentation, and is parallelizable over the supervoxels. To quantify performance and gain insight, we generate simulated images, where the noise level and characteristics, the density of expression, and the number of fluorophore types are variable. We also present segmentations of real Brainbow images of the mouse hippocampus, which reveal many of the dendritic segments.
1207.3453
Alicia Dickenstein
R. L. Karp, M. P\'erez Mill\'an, T. Dasgupta, A. Dickenstein, J. Gunawardena
Complex-linear invariants of biochemical networks
36 pages, 6 figures
null
null
null
q-bio.MN math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The nonlinearities found in molecular networks usually prevent mathematical analysis of network behaviour, which has largely been studied by numerical simulation. This can lead to difficult problems of parameter determination. However, molecular networks give rise, through mass-action kinetics, to polynomial dynamical systems, whose steady states are zeros of a set of polynomial equations. These equations may be analysed by algebraic methods, in which parameters are treated as symbolic expressions whose numerical values do not have to be known in advance. For instance, an "invariant" of a network is a polynomial expression on selected state variables that vanishes in any steady state. Invariants have been found that encode key network properties and that discriminate between different network structures. Although invariants may be calculated by computational algebraic methods, such as Gr\"obner bases, these become computationally infeasible for biologically realistic networks. Here, we exploit Chemical Reaction Network Theory (CRNT) to develop an efficient procedure for calculating invariants that are linear combinations of "complexes", or the monomials coming from mass action. We show how this procedure can be used in proving earlier results of Horn and Jackson and of Shinar and Feinberg for networks of deficiency at most one. We then apply our method to enzyme bifunctionality, including the bacterial EnvZ/OmpR osmolarity regulator and the mammalian 6-phosphofructo-2-kinase/fructose-2,6-bisphosphatase glycolytic regulator, whose networks have deficiencies up to four. We show that bifunctionality leads to different forms of concentration control that are robust to changes in initial conditions or total amounts. Finally, we outline a systematic procedure for using complex-linear invariants to analyse molecular networks of any deficiency.
[ { "created": "Sat, 14 Jul 2012 19:30:59 GMT", "version": "v1" } ]
2012-07-17
[ [ "Karp", "R. L.", "" ], [ "Millán", "M. Pérez", "" ], [ "Dasgupta", "T.", "" ], [ "Dickenstein", "A.", "" ], [ "Gunawardena", "J.", "" ] ]
The nonlinearities found in molecular networks usually prevent mathematical analysis of network behaviour, which has largely been studied by numerical simulation. This can lead to difficult problems of parameter determination. However, molecular networks give rise, through mass-action kinetics, to polynomial dynamical systems, whose steady states are zeros of a set of polynomial equations. These equations may be analysed by algebraic methods, in which parameters are treated as symbolic expressions whose numerical values do not have to be known in advance. For instance, an "invariant" of a network is a polynomial expression on selected state variables that vanishes in any steady state. Invariants have been found that encode key network properties and that discriminate between different network structures. Although invariants may be calculated by computational algebraic methods, such as Gr\"obner bases, these become computationally infeasible for biologically realistic networks. Here, we exploit Chemical Reaction Network Theory (CRNT) to develop an efficient procedure for calculating invariants that are linear combinations of "complexes", or the monomials coming from mass action. We show how this procedure can be used in proving earlier results of Horn and Jackson and of Shinar and Feinberg for networks of deficiency at most one. We then apply our method to enzyme bifunctionality, including the bacterial EnvZ/OmpR osmolarity regulator and the mammalian 6-phosphofructo-2-kinase/fructose-2,6-bisphosphatase glycolytic regulator, whose networks have deficiencies up to four. We show that bifunctionality leads to different forms of concentration control that are robust to changes in initial conditions or total amounts. Finally, we outline a systematic procedure for using complex-linear invariants to analyse molecular networks of any deficiency.
q-bio/0611022
Thomas Frewen
Radek Erban, Thomas A. Frewen, Xiao Wang, Timothy C. Elston, Ronald Coifman, Boaz Nadler, and Ioannis G. Kevrekidis
Variable-free exploration of stochastic models: a gene regulatory network example
26 pages, 9 figures
null
10.1063/1.2718529
null
q-bio.QM physics.comp-ph q-bio.MN
null
Finding coarse-grained, low-dimensional descriptions is an important task in the analysis of complex, stochastic models of gene regulatory networks. This task involves (a) identifying observables that best describe the state of these complex systems and (b) characterizing the dynamics of the observables. In a previous paper [13], we assumed that good observables were known a priori, and presented an equation-free approach to approximate coarse-grained quantities (i.e., effective drift and diffusion coefficients) that characterize the long-time behavior of the observables. Here we use diffusion maps [9] to extract appropriate observables ("reduction coordinates") in an automated fashion; these involve the leading eigenvectors of a weighted Laplacian on a graph constructed from network simulation data. We present lifting and restriction procedures for translating between physical variables and these data-based observables. These procedures allow us to perform equation-free, coarse-grained computations characterizing the long-term dynamics through the design and processing of short bursts of stochastic simulation initialized at appropriate values of the data-based observables.
[ { "created": "Mon, 6 Nov 2006 19:39:19 GMT", "version": "v1" } ]
2015-06-26
[ [ "Erban", "Radek", "" ], [ "Frewen", "Thomas A.", "" ], [ "Wang", "Xiao", "" ], [ "Elston", "Timothy C.", "" ], [ "Coifman", "Ronald", "" ], [ "Nadler", "Boaz", "" ], [ "Kevrekidis", "Ioannis G.", "" ] ]
Finding coarse-grained, low-dimensional descriptions is an important task in the analysis of complex, stochastic models of gene regulatory networks. This task involves (a) identifying observables that best describe the state of these complex systems and (b) characterizing the dynamics of the observables. In a previous paper [13], we assumed that good observables were known a priori, and presented an equation-free approach to approximate coarse-grained quantities (i.e., effective drift and diffusion coefficients) that characterize the long-time behavior of the observables. Here we use diffusion maps [9] to extract appropriate observables ("reduction coordinates") in an automated fashion; these involve the leading eigenvectors of a weighted Laplacian on a graph constructed from network simulation data. We present lifting and restriction procedures for translating between physical variables and these data-based observables. These procedures allow us to perform equation-free, coarse-grained computations characterizing the long-term dynamics through the design and processing of short bursts of stochastic simulation initialized at appropriate values of the data-based observables.
1008.2555
Thomas Conway
Thomas C Conway, Andrew J Bromage
Succinct Data Structures for Assembling Large Genomes
null
null
null
null
q-bio.GN cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Second generation sequencing technology makes it feasible for many researchers to obtain enough sequence reads to attempt the de novo assembly of higher eukaryotes (including mammals). De novo assembly not only provides a tool for understanding wide scale biological variation, but within human bio-medicine, it offers a direct way of observing both large scale structural variation and fine scale sequence variation. Unfortunately, improvements in the computational feasibility for de novo assembly have not matched the improvements in the gathering of sequence data. This is for two reasons: the inherent computational complexity of the problem, and the in-practice memory requirements of tools. Results: In this paper we use entropy compressed or succinct data structures to create a practical representation of the de Bruijn assembly graph, which requires at least a factor of 10 less storage than the kinds of structures used by deployed methods. In particular we show that when stored succinctly, the de Bruijn assembly graph for homo sapiens requires only 23 gigabytes of storage. Moreover, because our representation is entropy compressed, in the presence of sequencing errors it has better scaling behaviour asymptotically than conventional approaches.
[ { "created": "Sun, 15 Aug 2010 23:07:45 GMT", "version": "v1" } ]
2010-08-17
[ [ "Conway", "Thomas C", "" ], [ "Bromage", "Andrew J", "" ] ]
Motivation: Second generation sequencing technology makes it feasible for many researchers to obtain enough sequence reads to attempt the de novo assembly of higher eukaryotes (including mammals). De novo assembly not only provides a tool for understanding wide scale biological variation, but within human bio-medicine, it offers a direct way of observing both large scale structural variation and fine scale sequence variation. Unfortunately, improvements in the computational feasibility for de novo assembly have not matched the improvements in the gathering of sequence data. This is for two reasons: the inherent computational complexity of the problem, and the in-practice memory requirements of tools. Results: In this paper we use entropy compressed or succinct data structures to create a practical representation of the de Bruijn assembly graph, which requires at least a factor of 10 less storage than the kinds of structures used by deployed methods. In particular we show that when stored succinctly, the de Bruijn assembly graph for homo sapiens requires only 23 gigabytes of storage. Moreover, because our representation is entropy compressed, in the presence of sequencing errors it has better scaling behaviour asymptotically than conventional approaches.
1607.04372
Conrad Burden
Conrad J. Burden and Yurong Tang
Rate Matrix Estimation From Site Frequency Data
22 pages, 6 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A procedure is described for estimating evolutionary rate matrices from observed site frequency data. The procedure assumes (1) that the data are obtained from a constant size population evolving according to a stationary Wright-Fisher model; (2) that the data consist of a multiple alignment of a moderate number of sequenced genomes drawn randomly from the population; and (3) that within the genome a large number of independent, neutral sites evolving with a common mutation rate matrix can be identified. No restrictions are imposed on the scaled rate matrix other than that the off-diagonal elements are positive and <<1, and that the rows sum to zero. In particular the rate matrix is not assumed to be reversible. The key to the method is an approximate stationary solution to the forward Kolmogorov equation for the multi-allele neutral Wright-Fisher model in the limit of low mutation rates.
[ { "created": "Fri, 15 Jul 2016 03:33:56 GMT", "version": "v1" } ]
2016-07-18
[ [ "Burden", "Conrad J.", "" ], [ "Tang", "Yurong", "" ] ]
A procedure is described for estimating evolutionary rate matrices from observed site frequency data. The procedure assumes (1) that the data are obtained from a constant size population evolving according to a stationary Wright-Fisher model; (2) that the data consist of a multiple alignment of a moderate number of sequenced genomes drawn randomly from the population; and (3) that within the genome a large number of independent, neutral sites evolving with a common mutation rate matrix can be identified. No restrictions are imposed on the scaled rate matrix other than that the off-diagonal elements are positive and <<1, and that the rows sum to zero. In particular the rate matrix is not assumed to be reversible. The key to the method is an approximate stationary solution to the forward Kolmogorov equation for the multi-allele neutral Wright-Fisher model in the limit of low mutation rates.
1710.10548
Stephanie Elizabeth Palmer
Joseph A. Lombardo, Matthew V. Macellaio, Bing Liu, Stephanie E. Palmer, and Leslie C. Osborne
State Dependence of Stimulus-Induced Variability Tuning in Macaque MT
36 pages, 18 figures
null
10.1371/journal.pcbi.1006527
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Behavioral states marked by varying levels of arousal and attention modulate some properties of cortical responses (e.g. average firing rates or pairwise correlations), yet it is not fully understood what drives these response changes and how they might affect downstream stimulus decoding. Here we show that changes in state modulate the tuning of response variance-to-mean ratios (Fano factors) in a fashion that is neither predicted by a Poisson spiking model nor by changes in the mean firing rate, with a substantial effect on stimulus discriminability. We recorded motion-sensitive neurons in middle temporal cortex (MT) in two states: alert fixation and light, opioid anesthesia. Anesthesia tended to lower average spike counts, without decreasing trial-to-trial variability compared to the alert state. Under anesthesia, within-trial fluctuations in excitability were correlated over longer time scales compared to the alert state, creating supra-Poisson Fano factors. In contrast, alert-state MT neurons have higher mean firing rates and largely sub-Poisson variability that is stimulus-dependent and cannot be explained by firing rate differences alone. The absence of such stimulus-induced variability tuning in the anesthetized state suggests different sources of variability between states. A simple model explains state-dependent shifts in the distribution of observed Fano factors via a suppression in the variance of gain fluctuations in the alert state. A population model with stimulus-induced variability tuning and behaviorally constrained information-limiting correlations explores the potential enhancement in stimulus discriminability by the cortical population in the alert state.
[ { "created": "Sun, 29 Oct 2017 01:12:17 GMT", "version": "v1" }, { "created": "Wed, 3 Oct 2018 23:13:18 GMT", "version": "v2" } ]
2018-11-21
[ [ "Lombardo", "Joseph A.", "" ], [ "Macellaio", "Matthew V.", "" ], [ "Liu", "Bing", "" ], [ "Palmer", "Stephanie E.", "" ], [ "Osborne", "Leslie C.", "" ] ]
Behavioral states marked by varying levels of arousal and attention modulate some properties of cortical responses (e.g. average firing rates or pairwise correlations), yet it is not fully understood what drives these response changes and how they might affect downstream stimulus decoding. Here we show that changes in state modulate the tuning of response variance-to-mean ratios (Fano factors) in a fashion that is neither predicted by a Poisson spiking model nor by changes in the mean firing rate, with a substantial effect on stimulus discriminability. We recorded motion-sensitive neurons in middle temporal cortex (MT) in two states: alert fixation and light, opioid anesthesia. Anesthesia tended to lower average spike counts, without decreasing trial-to-trial variability compared to the alert state. Under anesthesia, within-trial fluctuations in excitability were correlated over longer time scales compared to the alert state, creating supra-Poisson Fano factors. In contrast, alert-state MT neurons have higher mean firing rates and largely sub-Poisson variability that is stimulus-dependent and cannot be explained by firing rate differences alone. The absence of such stimulus-induced variability tuning in the anesthetized state suggests different sources of variability between states. A simple model explains state-dependent shifts in the distribution of observed Fano factors via a suppression in the variance of gain fluctuations in the alert state. A population model with stimulus-induced variability tuning and behaviorally constrained information-limiting correlations explores the potential enhancement in stimulus discriminability by the cortical population in the alert state.
2405.01385
Yujiang Wang
Guillermo M. Besne, Nathan Evans, Mariella Panagiotopoulou, Billy Smith, Fahmida A Chowdhury, Beate Diehl, John S Duncan, Andrew W McEvoy, Anna Miserocchi, Jane de Tisi, Mathew Walker, Peter N. Taylor, Chris Thornton, Yujiang Wang
Anti-seizure medication tapering is associated with delta band power reduction in a dose, region and time-dependent manner
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anti-seizure medications (ASMs) are the primary treatment for epilepsy, yet medication tapering effects have not been investigated in a dose, region, and time-dependent manner, despite their potential impact on research and clinical practice. We examined over 3000 hours of intracranial EEG recordings in 32 subjects during long-term monitoring, of which 22 underwent concurrent ASM tapering. We estimated ASM plasma levels based on known pharmacokinetics of all the major ASM types. We found an overall decrease in the power of delta band activity around the period of maximum medication withdrawal in most (80%) subjects, independent of their epilepsy type or medication combination. The degree of withdrawal correlated positively with the magnitude of delta power decrease. This dose-dependent effect was strongly seen across all recorded cortical regions during daytime; but not in sub-cortical regions, or during night time. We found no evidence of differential effect in seizure onset, spiking, or pathological brain regions. The finding of decreased delta band power during ASM tapering agrees with previous literature. Our observed dose-dependent effect indicates that monitoring ASM levels in cortical regions may be feasible for applications such as medication reminder systems, or closed-loop ASM delivery systems. ASMs are also used in other neurological and psychiatric conditions, making our findings relevant to a general neuroscience and neurology audience.
[ { "created": "Thu, 2 May 2024 15:27:10 GMT", "version": "v1" } ]
2024-05-03
[ [ "Besne", "Guillermo M.", "" ], [ "Evans", "Nathan", "" ], [ "Panagiotopoulou", "Mariella", "" ], [ "Smith", "Billy", "" ], [ "Chowdhury", "Fahmida A", "" ], [ "Diehl", "Beate", "" ], [ "Duncan", "John S", "" ], [ "McEvoy", "Andrew W", "" ], [ "Miserocchi", "Anna", "" ], [ "de Tisi", "Jane", "" ], [ "Walker", "Mathew", "" ], [ "Taylor", "Peter N.", "" ], [ "Thornton", "Chris", "" ], [ "Wang", "Yujiang", "" ] ]
Anti-seizure medications (ASMs) are the primary treatment for epilepsy, yet medication tapering effects have not been investigated in a dose, region, and time-dependent manner, despite their potential impact on research and clinical practice. We examined over 3000 hours of intracranial EEG recordings in 32 subjects during long-term monitoring, of which 22 underwent concurrent ASM tapering. We estimated ASM plasma levels based on known pharmacokinetics of all the major ASM types. We found an overall decrease in the power of delta band activity around the period of maximum medication withdrawal in most (80%) subjects, independent of their epilepsy type or medication combination. The degree of withdrawal correlated positively with the magnitude of delta power decrease. This dose-dependent effect was strongly seen across all recorded cortical regions during daytime; but not in sub-cortical regions, or during night time. We found no evidence of differential effect in seizure onset, spiking, or pathological brain regions. The finding of decreased delta band power during ASM tapering agrees with previous literature. Our observed dose-dependent effect indicates that monitoring ASM levels in cortical regions may be feasible for applications such as medication reminder systems, or closed-loop ASM delivery systems. ASMs are also used in other neurological and psychiatric conditions, making our findings relevant to a general neuroscience and neurology audience.
2109.12407
Laurent H\'ebert-Dufresne
Blake J.M. Williams, C. Brandon Ogbunugafor, Benjamin M. Althouse, Laurent H\'ebert-Dufresne
Immunity-induced criticality of the genotype network of influenza A (H3N2) hemagglutinin
null
null
null
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Seasonal influenza kills hundreds of thousands every year, with multiple constantly-changing strains in circulation at any given time. A high mutation rate enables the influenza virus to evade recognition by the human immune system, including immunity acquired through past infection and vaccination. Here, we capture the genetic similarity of influenza strains and their evolutionary dynamics with genotype networks. We show that the genotype networks of influenza A (H3N2) hemagglutinin are characterized by heavy-tailed distributions of module sizes and connectivity, suggesting critical-like behavior. We argue that: (i) genotype networks are driven by mutation and host immunity to explore a subspace of networks predictable in structure, and (ii) genotype networks provide an underlying structure necessary to capture the rich dynamics of multistrain epidemic models. In particular, inclusion of strain-transcending immunity in epidemic models is dependent upon the structure of an underlying genotype network. This interplay suggests a self-organized criticality where the epidemic dynamics of influenza locates critical-like regions of its genotype network. We conclude that this interplay between disease dynamics and network structure might be key for future network analysis of pathogen evolution and realistic multistrain epidemic models.
[ { "created": "Sat, 25 Sep 2021 17:25:03 GMT", "version": "v1" } ]
2021-09-28
[ [ "Williams", "Blake J. M.", "" ], [ "Ogbunugafor", "C. Brandon", "" ], [ "Althouse", "Benjamin M.", "" ], [ "Hébert-Dufresne", "Laurent", "" ] ]
Seasonal influenza kills hundreds of thousands every year, with multiple constantly-changing strains in circulation at any given time. A high mutation rate enables the influenza virus to evade recognition by the human immune system, including immunity acquired through past infection and vaccination. Here, we capture the genetic similarity of influenza strains and their evolutionary dynamics with genotype networks. We show that the genotype networks of influenza A (H3N2) hemagglutinin are characterized by heavy-tailed distributions of module sizes and connectivity, suggesting critical-like behavior. We argue that: (i) genotype networks are driven by mutation and host immunity to explore a subspace of networks predictable in structure, and (ii) genotype networks provide an underlying structure necessary to capture the rich dynamics of multistrain epidemic models. In particular, inclusion of strain-transcending immunity in epidemic models is dependent upon the structure of an underlying genotype network. This interplay suggests a self-organized criticality where the epidemic dynamics of influenza locates critical-like regions of its genotype network. We conclude that this interplay between disease dynamics and network structure might be key for future network analysis of pathogen evolution and realistic multistrain epidemic models.
2405.02820
Ulrich S. Schwarz
Felix Frey (IST Austria) and Ulrich S. Schwarz (Heidelberg University)
Coat stiffening explains the consensus pathway of clathrin-mediated endocytosis
revtex, 12 pages, 5 figures in PDF-format
null
null
null
q-bio.SC cond-mat.soft
http://creativecommons.org/licenses/by-nc-nd/4.0/
Clathrin-mediated endocytosis is the main pathway used by eukaryotic cells to take up extracellular material, but the dominant physical mechanisms driving this process are still elusive. Recently, several high-resolution imaging techniques have been used on different cell lines to measure the geometrical properties of clathrin-coated pits over their whole lifetime. Here we first show that all datasets follow the same consensus pathway, which is well described by the recently introduced cooperative curvature model, which predicts a flat-to-curved transition at finite area, followed by linear growth and subsequent saturation of curvature. We then apply an energetic model for the composite of plasma membrane and clathrin coat to the consensus pathway to show that the dominant mechanism for invagination is coat stiffening, which results from cooperative interactions between the different clathrin molecules and progressively drives the system towards its intrinsic curvature. Our theory predicts that two length scales determine the time course of invagination, namely the patch size at which the flat-to-curved transition occurs and the final pit radius.
[ { "created": "Sun, 5 May 2024 05:51:16 GMT", "version": "v1" } ]
2024-05-07
[ [ "Frey", "Felix", "", "IST Austria" ], [ "Schwarz", "Ulrich S.", "", "Heidelberg University" ] ]
Clathrin-mediated endocytosis is the main pathway used by eukaryotic cells to take up extracellular material, but the dominant physical mechanisms driving this process are still elusive. Recently, several high-resolution imaging techniques have been used on different cell lines to measure the geometrical properties of clathrin-coated pits over their whole lifetime. Here we first show that all datasets follow the same consensus pathway, which is well described by the recently introduced cooperative curvature model, which predicts a flat-to-curved transition at finite area, followed by linear growth and subsequent saturation of curvature. We then apply an energetic model for the composite of plasma membrane and clathrin coat to the consensus pathway to show that the dominant mechanism for invagination is coat stiffening, which results from cooperative interactions between the different clathrin molecules and progressively drives the system towards its intrinsic curvature. Our theory predicts that two length scales determine the time course of invagination, namely the patch size at which the flat-to-curved transition occurs and the final pit radius.
2103.07721
Ankit Vikrant
Ankit Vikrant and Martin Nilsson Jacobi
Complex ecological communities and the emergence of island species area relationships
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
It has been a century since the species-area relationship (SAR) was first proposed as a power law to explain how species richness scales with area. There have been many attempts to explain the origin of this predominant form. Apart from the power law, numerous empirical studies also report a semi-log form of the SAR, but very few have addressed its incidence. In this work, we test whether these relationships could emerge from the assembly of large random communities on island-like systems. We reformulate the generalized Lotka-Volterra model by introducing an area parameter that determines the species richness of the assembled communities. Our analysis demonstrates that the two most widely reported relationship forms can emerge due to differences in immigration rates and skewness towards weak interactions. We particularly highlight the incidence of the semi-log SAR for low immigration rates from a source pool, which is consistent with several previous empirical studies. The two SAR forms might show good fits to data over a large span of areas but a power-law overestimates species richness on smaller islands in remote archipelagoes.
[ { "created": "Sat, 13 Mar 2021 14:10:06 GMT", "version": "v1" }, { "created": "Tue, 20 Apr 2021 11:56:25 GMT", "version": "v2" }, { "created": "Mon, 16 Aug 2021 12:59:04 GMT", "version": "v3" } ]
2021-08-17
[ [ "Vikrant", "Ankit", "" ], [ "Jacobi", "Martin Nilsson", "" ] ]
It has been a century since the species-area relationship (SAR) was first proposed as a power law to explain how species richness scales with area. There have been many attempts to explain the origin of this predominant form. Apart from the power law, numerous empirical studies also report a semi-log form of the SAR, but very few have addressed its incidence. In this work, we test whether these relationships could emerge from the assembly of large random communities on island-like systems. We reformulate the generalized Lotka-Volterra model by introducing an area parameter that determines the species richness of the assembled communities. Our analysis demonstrates that the two most widely reported relationship forms can emerge due to differences in immigration rates and skewness towards weak interactions. We particularly highlight the incidence of the semi-log SAR for low immigration rates from a source pool, which is consistent with several previous empirical studies. The two SAR forms might show good fits to data over a large span of areas but a power-law overestimates species richness on smaller islands in remote archipelagoes.
1306.3466
Yarden Katz
Yarden Katz, Eric T. Wang, Jacob Silterra, Schraga Schwartz, Bang Wong, Jill P. Mesirov, Edoardo M. Airoldi, Christopher B. Burge
Sashimi plots: Quantitative visualization of RNA sequencing read alignments
2 figures
null
10.1093/bioinformatics/btv034
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Sashimi plots, a quantitative multi-sample visualization of mRNA sequencing reads aligned to gene annotations. Sashimi plots are made using alignments (stored in the SAM/BAM format) and gene model annotations (in GFF format), which can be custom-made by the user or obtained from databases such as Ensembl or UCSC. We describe two implementations of Sashimi plots: (1) a stand-alone command line implementation aimed at making customizable publication-quality figures, and (2) an implementation built into the Integrated Genome Viewer (IGV) browser, which enables rapid and dynamic creation of Sashimi plots for any genomic region of interest, suitable for exploratory analysis of alternatively spliced regions of the transcriptome. Isoform expression estimates outputted by the MISO program can be optionally plotted along with Sashimi plots. Sashimi plots can be used to quickly screen differentially spliced exons along genomic regions of interest and can be used in publication-quality figures. The Sashimi plot software and documentation are available from: http://genes.mit.edu/burgelab/miso/docs/sashimi.html
[ { "created": "Fri, 14 Jun 2013 17:41:23 GMT", "version": "v1" } ]
2018-06-29
[ [ "Katz", "Yarden", "" ], [ "Wang", "Eric T.", "" ], [ "Silterra", "Jacob", "" ], [ "Schwartz", "Schraga", "" ], [ "Wong", "Bang", "" ], [ "Mesirov", "Jill P.", "" ], [ "Airoldi", "Edoardo M.", "" ], [ "Burge", "Christopher B.", "" ] ]
We introduce Sashimi plots, a quantitative multi-sample visualization of mRNA sequencing reads aligned to gene annotations. Sashimi plots are made using alignments (stored in the SAM/BAM format) and gene model annotations (in GFF format), which can be custom-made by the user or obtained from databases such as Ensembl or UCSC. We describe two implementations of Sashimi plots: (1) a stand-alone command line implementation aimed at making customizable publication-quality figures, and (2) an implementation built into the Integrated Genome Viewer (IGV) browser, which enables rapid and dynamic creation of Sashimi plots for any genomic region of interest, suitable for exploratory analysis of alternatively spliced regions of the transcriptome. Isoform expression estimates outputted by the MISO program can be optionally plotted along with Sashimi plots. Sashimi plots can be used to quickly screen differentially spliced exons along genomic regions of interest and can be used in publication-quality figures. The Sashimi plot software and documentation are available from: http://genes.mit.edu/burgelab/miso/docs/sashimi.html
q-bio/0411041
D. Allan Drummond
D. Allan Drummond, Brent L. Iverson, George Georgiou, Frances H. Arnold
Why high-error-rate random mutagenesis libraries are enriched in functional and improved proteins
Optimality results improved. 26 pages, 4 figures, 3 tables
Journal of Molecular Biology 350(4):806-816 (2005).
10.1016/j.jmb.2005.05.023
null
q-bio.QM q-bio.BM
null
Recently, several groups have used error-prone polymerase chain reactions to construct mutant libraries containing up to 27 nucleotide mutations per gene on average, and reported a striking observation: although retention of protein function initially declines exponentially with mutations as has previously been observed, orders of magnitude more proteins remain viable at the highest mutation rates than this trend would predict. Mutant proteins having improved or novel activity were isolated disproportionately from these heavily mutated libraries, leading to the suggestion that distant regions of sequence space are enriched in useful cooperative mutations and that optimal mutagenesis should target these regions. If true, these claims have profound implications for laboratory evolution and for evolutionary theory. Here, we demonstrate that properties of the polymerase chain reaction can explain these results and, consequently, that average protein viability indeed decreases exponentially with mutational distance at all error rates. We show that high-error-rate mutagenesis may be useful in certain cases, though for very different reasons than originally proposed, and that optimal mutation rates are inherently protocol-dependent. Our results allow optimal mutation rates to be found given mutagenesis conditions and a protein of known mutational robustness.
[ { "created": "Mon, 22 Nov 2004 17:58:13 GMT", "version": "v1" }, { "created": "Thu, 10 Feb 2005 16:16:57 GMT", "version": "v2" }, { "created": "Fri, 18 Feb 2005 05:53:24 GMT", "version": "v3" } ]
2007-05-23
[ [ "Drummond", "D. Allan", "" ], [ "Iverson", "Brent L.", "" ], [ "Georgiou", "George", "" ], [ "Arnold", "Frances H.", "" ] ]
Recently, several groups have used error-prone polymerase chain reactions to construct mutant libraries containing up to 27 nucleotide mutations per gene on average, and reported a striking observation: although retention of protein function initially declines exponentially with mutations as has previously been observed, orders of magnitude more proteins remain viable at the highest mutation rates than this trend would predict. Mutant proteins having improved or novel activity were isolated disproportionately from these heavily mutated libraries, leading to the suggestion that distant regions of sequence space are enriched in useful cooperative mutations and that optimal mutagenesis should target these regions. If true, these claims have profound implications for laboratory evolution and for evolutionary theory. Here, we demonstrate that properties of the polymerase chain reaction can explain these results and, consequently, that average protein viability indeed decreases exponentially with mutational distance at all error rates. We show that high-error-rate mutagenesis may be useful in certain cases, though for very different reasons than originally proposed, and that optimal mutation rates are inherently protocol-dependent. Our results allow optimal mutation rates to be found given mutagenesis conditions and a protein of known mutational robustness.
1503.04558
Da Zhou Dr.
Xiufang Chen, Yue Wang, Tianquan Feng, Ming Yi, Xingan Zhang, Da Zhou
The overshoot and phenotypic equilibrium in characterizing cancer dynamics of reversible phenotypic plasticity
24 pages, 6 figures
null
10.1016/j.jtbi.2015.11.008
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paradigm of phenotypic plasticity indicates reversible relations of different cancer cell phenotypes, which extends the cellular hierarchy proposed by the classical cancer stem cell (CSC) theory. Since it is still questionable whether the phenotypic plasticity is a crucial improvement to the hierarchical model or just a minor extension to it, it is worthwhile to explore the dynamic behavior characterizing the reversible phenotypic plasticity. In this study we compare the hierarchical model and the reversible model in predicting the cell-state dynamics observed in biological experiments. Our results show that the hierarchical model shows significant disadvantages over the reversible model in describing both long-term stability (phenotypic equilibrium) and short-term transient dynamics (overshoot) of cancer cells. In a very specific case in which the total growth of population due to each cell type is identical, the hierarchical model predicts neither phenotypic equilibrium nor overshoot, whereas the reversible model succeeds in predicting both of them. Even though the performance of the hierarchical model can be improved by relaxing the specific assumption, its prediction of the phenotypic equilibrium strongly depends on a precondition that may be unrealistic in biological experiments, and it also fails to capture the overshoot of CSCs. By comparison, it is more likely for the reversible model to correctly describe the stability of the phenotypic mixture and various types of overshoot behavior.
[ { "created": "Mon, 16 Mar 2015 08:12:22 GMT", "version": "v1" }, { "created": "Tue, 15 Sep 2015 05:00:50 GMT", "version": "v2" } ]
2015-12-04
[ [ "Chen", "Xiufang", "" ], [ "Wang", "Yue", "" ], [ "Feng", "Tianquan", "" ], [ "Yi", "Ming", "" ], [ "Zhang", "Xingan", "" ], [ "Zhou", "Da", "" ] ]
The paradigm of phenotypic plasticity indicates reversible relations of different cancer cell phenotypes, which extends the cellular hierarchy proposed by the classical cancer stem cell (CSC) theory. Since it is still questionable whether the phenotypic plasticity is a crucial improvement to the hierarchical model or just a minor extension to it, it is worthwhile to explore the dynamic behavior characterizing the reversible phenotypic plasticity. In this study we compare the hierarchical model and the reversible model in predicting the cell-state dynamics observed in biological experiments. Our results show that the hierarchical model shows significant disadvantages over the reversible model in describing both long-term stability (phenotypic equilibrium) and short-term transient dynamics (overshoot) of cancer cells. In a very specific case in which the total growth of population due to each cell type is identical, the hierarchical model predicts neither phenotypic equilibrium nor overshoot, whereas the reversible model succeeds in predicting both of them. Even though the performance of the hierarchical model can be improved by relaxing the specific assumption, its prediction of the phenotypic equilibrium strongly depends on a precondition that may be unrealistic in biological experiments, and it also fails to capture the overshoot of CSCs. By comparison, it is more likely for the reversible model to correctly describe the stability of the phenotypic mixture and various types of overshoot behavior.
1207.1891
Samrat Chatterjee
Kamiya Tikoo, Shashank Mishr, V. Manivel, Kanury VS Rao, Parul Tripathi and Sachin Sharma
Immunomodulatory role of an Ayurvedic formulation on imbalanced immune-metabolics during inflammatory responses of obesity and pre-diabetic disease
16 pages, 8 figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Obesity and related type 2 diabetes are associated with a low-level chronic inflammatory state, with disease status increasing at epidemic proportions worldwide. It is now universally accepted that the underlying immune-inflammatory responses mediated within the adipose tissue in obesity are central to the development of disease. Once initiated, chronic inflammation associated with obesity leads to the modulation of immune cell function. In the present study we aimed to investigate the effect of an ayurvedic formulation (named Kal-1, an abbreviation derived from the procuring source) on diet-induced obesity and type II diabetes using C57BL/6J mice. The study was planned in two groups using obese and pre-diabetic mouse models. In the first group, the mice were fed a high-fat diet and different amounts (5, 20 and 75 {\mu}g) of Kal-1 were given with disease progression for 21 weeks, whereas in the second group mice were first put on the same diet for 21 weeks and then treated with the same amounts of Kal-1. A significant reduction in body weight, fat pads, fasting blood glucose levels, insulin levels and biochemical parameters like triglycerides and cholesterol was observed in obese and diabetic mice treated with Kal-1 compared to control (lean) mice fed a low-fat diet, though the optimum amounts of Kal-1 were 20 and 75 {\mu}g in the first and second groups, respectively. A noteworthy immunological correction in important readouts, viz. resistin, leptin, HMW adiponectin and an array of pro- and anti-inflammatory cytokines (IL-1{\alpha}, IL-1{\beta}, IL-4, IL-6, IL-10, TNF-{\alpha} and MCP-1), was also observed in both groups with the mentioned amounts of Kal-1. We conclude that Kal-1 has immunomodulatory potential for diet-induced obesity and associated metabolic disorders.
[ { "created": "Sun, 8 Jul 2012 17:28:17 GMT", "version": "v1" } ]
2012-07-10
[ [ "Tikoo", "Kamiya", "" ], [ "Mishr", "Shashank", "" ], [ "Manivel", "V.", "" ], [ "Rao", "Kanury VS", "" ], [ "Tripathi", "Parul", "" ], [ "Sharma", "Sachin", "" ] ]
Obesity and related type 2 diabetes are associated with a low-level chronic inflammatory state, with disease status increasing at epidemic proportions worldwide. It is now universally accepted that the underlying immune-inflammatory responses mediated within the adipose tissue in obesity are central to the development of disease. Once initiated, chronic inflammation associated with obesity leads to the modulation of immune cell function. In the present study we aimed to investigate the effect of an ayurvedic formulation (named Kal-1, an abbreviation derived from the procuring source) on diet-induced obesity and type II diabetes using C57BL/6J mice. The study was planned in two groups using obese and pre-diabetic mouse models. In the first group, the mice were fed a high-fat diet and different amounts (5, 20 and 75 {\mu}g) of Kal-1 were given with disease progression for 21 weeks, whereas in the second group mice were first put on the same diet for 21 weeks and then treated with the same amounts of Kal-1. A significant reduction in body weight, fat pads, fasting blood glucose levels, insulin levels and biochemical parameters like triglycerides and cholesterol was observed in obese and diabetic mice treated with Kal-1 compared to control (lean) mice fed a low-fat diet, though the optimum amounts of Kal-1 were 20 and 75 {\mu}g in the first and second groups, respectively. A noteworthy immunological correction in important readouts, viz. resistin, leptin, HMW adiponectin and an array of pro- and anti-inflammatory cytokines (IL-1{\alpha}, IL-1{\beta}, IL-4, IL-6, IL-10, TNF-{\alpha} and MCP-1), was also observed in both groups with the mentioned amounts of Kal-1. We conclude that Kal-1 has immunomodulatory potential for diet-induced obesity and associated metabolic disorders.
1807.08915
Martin Wilson
Martin Wilson
Robust Retrospective Frequency and Phase Correction for Single-voxel MR Spectroscopy
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose: Subject motion and static field (B$_0$) drift are known to reduce the quality of single-voxel MR spectroscopy data due to incoherent averaging. Retrospective correction has previously been shown to improve data quality by adjusting the phase and frequency offset of each average to match a reference spectrum. In this work, a new method (RATS) is developed to be tolerant to large frequency shifts (greater than 7 Hz) and baseline instability resulting from inconsistent water suppression. Methods: In contrast to previous approaches, the variable-projection method and baseline fitting are incorporated into the correction procedure to improve robustness to fluctuating baseline signals and optimization instability. RATS is compared to an alternative method, based on time-domain spectral registration (TDSR), using simulated data to model frequency, phase and baseline instability. In addition, a J-difference edited glutathione in-vivo dataset is processed using both approaches and compared. Results: RATS offers improved accuracy and stability for large frequency shifts and unstable baselines. Reduced subtraction artifacts are demonstrated for glutathione-edited MRS when using RATS, compared with uncorrected or TDSR-corrected spectra. Conclusion: The RATS algorithm has been shown to provide accurate retrospective correction of SVS MRS data in the presence of large frequency shifts and baseline instability. The method is rapid, generic and therefore readily incorporated into MRS processing pipelines to improve lineshape and SNR and to aid quality assessment.
[ { "created": "Tue, 24 Jul 2018 05:59:47 GMT", "version": "v1" }, { "created": "Mon, 10 Sep 2018 15:06:35 GMT", "version": "v2" }, { "created": "Fri, 14 Sep 2018 14:12:01 GMT", "version": "v3" } ]
2018-09-17
[ [ "Wilson", "Martin", "" ] ]
Purpose: Subject motion and static field (B$_0$) drift are known to reduce the quality of single-voxel MR spectroscopy data due to incoherent averaging. Retrospective correction has previously been shown to improve data quality by adjusting the phase and frequency offset of each average to match a reference spectrum. In this work, a new method (RATS) is developed to be tolerant to large frequency shifts (greater than 7 Hz) and baseline instability resulting from inconsistent water suppression. Methods: In contrast to previous approaches, the variable-projection method and baseline fitting are incorporated into the correction procedure to improve robustness to fluctuating baseline signals and optimization instability. RATS is compared to an alternative method, based on time-domain spectral registration (TDSR), using simulated data to model frequency, phase and baseline instability. In addition, a J-difference edited glutathione in-vivo dataset is processed using both approaches and compared. Results: RATS offers improved accuracy and stability for large frequency shifts and unstable baselines. Reduced subtraction artifacts are demonstrated for glutathione-edited MRS when using RATS, compared with uncorrected or TDSR-corrected spectra. Conclusion: The RATS algorithm has been shown to provide accurate retrospective correction of SVS MRS data in the presence of large frequency shifts and baseline instability. The method is rapid, generic and therefore readily incorporated into MRS processing pipelines to improve lineshape and SNR and to aid quality assessment.
1605.03241
Benjamin Armbruster
Benjamin Armbruster, Li Wang, Martina Morris
Forward Reachable Sets: Analytically derived properties of connected components for dynamic networks
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Formal analysis of the emergent structural properties of dynamic networks is largely uncharted territory. We focus here on the properties of forward reachable sets (FRS) as a function of the underlying degree distribution and edge duration. FRS are defined as the set of nodes that can be reached from an initial seed via a path of temporally ordered edges; a natural extension of connected component measures to dynamic networks. Working in a stochastic framework, we derive closed-form expressions for the mean and variance of the exponential growth rate of the FRS for temporal networks with both edge and node dynamics. For networks with node dynamics, we calculate thresholds for the growth of the FRS. The effects of finite population size are explored via simulation and approximation. We examine how these properties vary by edge duration and different cross-sectional degree distributions that characterize a range of scientifically interesting normative outcomes (Poisson and Bernoulli). The size of the forward reachable set gives an upper bound for the epidemic size in disease transmission network models, relating this work to epidemic modeling (Ferguson 2000, Eames 2004).
[ { "created": "Tue, 10 May 2016 23:16:06 GMT", "version": "v1" }, { "created": "Tue, 9 Aug 2016 20:35:15 GMT", "version": "v2" }, { "created": "Mon, 19 Sep 2016 22:53:49 GMT", "version": "v3" } ]
2016-09-21
[ [ "Armbruster", "Benjamin", "" ], [ "Wang", "Li", "" ], [ "Morris", "Martina", "" ] ]
Formal analysis of the emergent structural properties of dynamic networks is largely uncharted territory. We focus here on the properties of forward reachable sets (FRS) as a function of the underlying degree distribution and edge duration. FRS are defined as the set of nodes that can be reached from an initial seed via a path of temporally ordered edges; a natural extension of connected component measures to dynamic networks. Working in a stochastic framework, we derive closed-form expressions for the mean and variance of the exponential growth rate of the FRS for temporal networks with both edge and node dynamics. For networks with node dynamics, we calculate thresholds for the growth of the FRS. The effects of finite population size are explored via simulation and approximation. We examine how these properties vary by edge duration and different cross-sectional degree distributions that characterize a range of scientifically interesting normative outcomes (Poisson and Bernoulli). The size of the forward reachable set gives an upper bound for the epidemic size in disease transmission network models, relating this work to epidemic modeling (Ferguson 2000, Eames 2004).
2001.11432
Lou Zonca
Lou Zonca and David Holcman
Modeling bursting in neuronal networks using facilitation-depression and afterhyperpolarization
8 figs
Communications in Nonlinear Science and Numerical Simulation Volume 94, March 2021, 105555
10.1016/j.cnsns.2020.105555
null
q-bio.NC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the absence of inhibition, excitatory neuronal networks can alternate between bursts and interburst intervals (IBI), with heterogeneous length distributions. As these dynamics remain unclear, especially the durations of each epoch, we develop here a bursting model based on synaptic depression and facilitation that also accounts for afterhyperpolarization (AHP), which is a key component of IBI. The framework is a novel stochastic three-dimensional dynamical system perturbed by noise: numerical simulations can reproduce a succession of bursts and interbursts. Each phase corresponds to an exploration of a fraction of the phase-space, which contains three critical points (one attractor and two saddles) separated by a two-dimensional stable manifold $\Sigma$. We show here that bursting is defined by long deterministic excursions away from the attractor, while IBI corresponds to escape induced by random fluctuations. We show that the variability in the burst durations depends on the distribution of exit points located on $\Sigma$, which we compute using WKB and the method of characteristics. Finally, to better characterize the role of several parameters such as the network connectivity or the AHP time scale, we compute analytically the mean burst and AHP durations in a linear approximation. To conclude, the distribution of bursting and IBI could result from synaptic dynamics modulated by AHP.
[ { "created": "Thu, 30 Jan 2020 16:24:23 GMT", "version": "v1" }, { "created": "Tue, 15 Dec 2020 19:51:48 GMT", "version": "v2" } ]
2020-12-17
[ [ "Zonca", "Lou", "" ], [ "Holcman", "David", "" ] ]
In the absence of inhibition, excitatory neuronal networks can alternate between bursts and interburst intervals (IBI), with heterogeneous length distributions. As these dynamics remain unclear, especially the durations of each epoch, we develop here a bursting model based on synaptic depression and facilitation that also accounts for afterhyperpolarization (AHP), which is a key component of IBI. The framework is a novel stochastic three-dimensional dynamical system perturbed by noise: numerical simulations can reproduce a succession of bursts and interbursts. Each phase corresponds to an exploration of a fraction of the phase-space, which contains three critical points (one attractor and two saddles) separated by a two-dimensional stable manifold $\Sigma$. We show here that bursting is defined by long deterministic excursions away from the attractor, while IBI corresponds to escape induced by random fluctuations. We show that the variability in the burst durations depends on the distribution of exit points located on $\Sigma$, which we compute using WKB and the method of characteristics. Finally, to better characterize the role of several parameters such as the network connectivity or the AHP time scale, we compute analytically the mean burst and AHP durations in a linear approximation. To conclude, the distribution of bursting and IBI could result from synaptic dynamics modulated by AHP.
2402.07955
Zuobai Zhang
Zuobai Zhang, Jiarui Lu, Vijil Chenthamarakshan, Aur\'elie Lozano, Payel Das, Jian Tang
ProtIR: Iterative Refinement between Retrievers and Predictors for Protein Function Annotation
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein function annotation is an important yet challenging task in biology. Recent deep learning advancements show significant potential for accurate function prediction by learning from protein sequences and structures. Nevertheless, these predictor-based methods often overlook the modeling of protein similarity, an idea commonly employed in traditional approaches using sequence or structure retrieval tools. To fill this gap, we first study the effect of inter-protein similarity modeling by benchmarking retriever-based methods against predictors on protein function annotation tasks. Our results show that retrievers can match or outperform predictors without large-scale pre-training. Building on these insights, we introduce a novel variational pseudo-likelihood framework, ProtIR, designed to improve function predictors by incorporating inter-protein similarity modeling. This framework iteratively refines knowledge between a function predictor and retriever, thereby combining the strengths of both predictors and retrievers. ProtIR showcases around 10% improvement over vanilla predictor-based methods. Besides, it achieves performance on par with protein language model-based methods, yet without the need for massive pre-training, highlighting the efficacy of our framework. Code will be released upon acceptance.
[ { "created": "Sat, 10 Feb 2024 17:31:46 GMT", "version": "v1" } ]
2024-02-14
[ [ "Zhang", "Zuobai", "" ], [ "Lu", "Jiarui", "" ], [ "Chenthamarakshan", "Vijil", "" ], [ "Lozano", "Aurélie", "" ], [ "Das", "Payel", "" ], [ "Tang", "Jian", "" ] ]
Protein function annotation is an important yet challenging task in biology. Recent deep learning advancements show significant potential for accurate function prediction by learning from protein sequences and structures. Nevertheless, these predictor-based methods often overlook the modeling of protein similarity, an idea commonly employed in traditional approaches using sequence or structure retrieval tools. To fill this gap, we first study the effect of inter-protein similarity modeling by benchmarking retriever-based methods against predictors on protein function annotation tasks. Our results show that retrievers can match or outperform predictors without large-scale pre-training. Building on these insights, we introduce a novel variational pseudo-likelihood framework, ProtIR, designed to improve function predictors by incorporating inter-protein similarity modeling. This framework iteratively refines knowledge between a function predictor and retriever, thereby combining the strengths of both predictors and retrievers. ProtIR showcases around 10% improvement over vanilla predictor-based methods. Besides, it achieves performance on par with protein language model-based methods, yet without the need for massive pre-training, highlighting the efficacy of our framework. Code will be released upon acceptance.
1810.09262
Steven Frank
Steven A. Frank
The Price equation program: simple invariances unify population dynamics, thermodynamics, probability, information and inference
Version 3: added figure illustrating geometry; added table of symbols and two tables summarizing mathematical relations; this version accepted for publication in Entropy
2018. Entropy 20:978
10.3390/e20120978
null
q-bio.PE cond-mat.stat-mech cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
The fundamental equations of various disciplines often seem to share the same basic structure. Natural selection increases information in the same way that Bayesian updating increases information. Thermodynamics and the forms of common probability distributions express maximum increase in entropy, which appears mathematically as loss of information. Physical mechanics follows paths of change that maximize Fisher information. The information expressions typically have analogous interpretations as the Newtonian balance between force and acceleration, representing a partition between direct causes of change and opposing changes in the frame of reference. This web of vague analogies hints at a deeper common mathematical structure. I suggest that the Price equation expresses that underlying universal structure. The abstract Price equation describes dynamics as the change between two sets. One component of dynamics expresses the change in the frequency of things, holding constant the values associated with things. The other component of dynamics expresses the change in the values of things, holding constant the frequency of things. The separation of frequency from value generalizes Shannon's separation of the frequency of symbols from the meaning of symbols in information theory. The Price equation's generalized separation of frequency and value reveals a few simple invariances that define universal geometric aspects of change. For example, the conservation of total frequency, although a trivial invariance by itself, creates a powerful constraint on the geometry of change. That constraint plus a few others seem to explain the common structural forms of the equations in different disciplines. From that abstract perspective, interpretations such as selection, information, entropy, force, acceleration, and physical work arise from the same underlying geometry expressed by the Price equation.
[ { "created": "Mon, 22 Oct 2018 13:29:46 GMT", "version": "v1" }, { "created": "Tue, 23 Oct 2018 04:44:32 GMT", "version": "v2" }, { "created": "Fri, 14 Dec 2018 17:42:07 GMT", "version": "v3" } ]
2018-12-18
[ [ "Frank", "Steven A.", "" ] ]
The fundamental equations of various disciplines often seem to share the same basic structure. Natural selection increases information in the same way that Bayesian updating increases information. Thermodynamics and the forms of common probability distributions express maximum increase in entropy, which appears mathematically as loss of information. Physical mechanics follows paths of change that maximize Fisher information. The information expressions typically have analogous interpretations as the Newtonian balance between force and acceleration, representing a partition between direct causes of change and opposing changes in the frame of reference. This web of vague analogies hints at a deeper common mathematical structure. I suggest that the Price equation expresses that underlying universal structure. The abstract Price equation describes dynamics as the change between two sets. One component of dynamics expresses the change in the frequency of things, holding constant the values associated with things. The other component of dynamics expresses the change in the values of things, holding constant the frequency of things. The separation of frequency from value generalizes Shannon's separation of the frequency of symbols from the meaning of symbols in information theory. The Price equation's generalized separation of frequency and value reveals a few simple invariances that define universal geometric aspects of change. For example, the conservation of total frequency, although a trivial invariance by itself, creates a powerful constraint on the geometry of change. That constraint plus a few others seem to explain the common structural forms of the equations in different disciplines. From that abstract perspective, interpretations such as selection, information, entropy, force, acceleration, and physical work arise from the same underlying geometry expressed by the Price equation.
1511.00296
Matteo Smerlak
Matteo Smerlak, Ahmed Youssef
Limiting fitness distributions in evolutionary dynamics
15 pages + appendices
J. Theor. Biol. 416, 68-80 (2017)
10.1016/j.jtbi.2017.01.005
null
q-bio.PE cond-mat.stat-mech cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Darwinian evolution can be modeled in general terms as a flow in the space of fitness (i.e. reproductive rate) distributions. In the diffusion approximation, Tsimring et al. have shown that this flow admits "fitness wave" solutions: Gaussian-shaped fitness distributions moving towards higher fitness values at constant speed. Here we show more generally that evolving fitness distributions are attracted to a one-parameter family of distributions with a fixed parabolic relationship between skewness and kurtosis. Unlike fitness waves, this statistical pattern encompasses both positive and negative (a.k.a. purifying) selection and is not restricted to rapidly adapting populations. Moreover we find that the mean fitness of a population under the selection of pre-existing variation is a power-law function of time, as observed in microbiological evolution experiments but at variance with fitness wave theory. At the conceptual level, our results can be viewed as the resolution of the "dynamic insufficiency" of Fisher's fundamental theorem of natural selection. Our predictions are in good agreement with numerical simulations.
[ { "created": "Sun, 1 Nov 2015 19:16:47 GMT", "version": "v1" }, { "created": "Tue, 16 Feb 2016 16:15:56 GMT", "version": "v2" } ]
2017-01-30
[ [ "Smerlak", "Matteo", "" ], [ "Youssef", "Ahmed", "" ] ]
Darwinian evolution can be modeled in general terms as a flow in the space of fitness (i.e. reproductive rate) distributions. In the diffusion approximation, Tsimring et al. have shown that this flow admits "fitness wave" solutions: Gaussian-shaped fitness distributions moving towards higher fitness values at constant speed. Here we show more generally that evolving fitness distributions are attracted to a one-parameter family of distributions with a fixed parabolic relationship between skewness and kurtosis. Unlike fitness waves, this statistical pattern encompasses both positive and negative (a.k.a. purifying) selection and is not restricted to rapidly adapting populations. Moreover, we find that the mean fitness of a population under selection of pre-existing variation is a power-law function of time, as observed in microbiological evolution experiments but at variance with fitness wave theory. At the conceptual level, our results can be viewed as the resolution of the "dynamic insufficiency" of Fisher's fundamental theorem of natural selection. Our predictions are in good agreement with numerical simulations.
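Under pure selection of pre-existing variation (no mutation), the replicator flow has the closed form $p_i(t) \propto p_i(0)\,e^{f_i t}$, so mean fitness increases monotonically (Fisher's theorem: $d\langle f\rangle/dt = \mathrm{Var}(f) \ge 0$) and saturates at the maximal pre-existing fitness. A minimal sketch with illustrative fitness values (not taken from the paper):

```python
import math

# Selection on pre-existing variation: with no mutation, the replicator
# flow reduces to exponential reweighting of the initial distribution.
f = [0.0, 0.5, 1.0, 1.5]          # illustrative fitness values
p0 = [0.25, 0.25, 0.25, 0.25]     # initial (uniform) distribution

def mean_fitness(t):
    w = [pi * math.exp(fi * t) for pi, fi in zip(p0, f)]
    total = sum(w)
    return sum(wi * fi for wi, fi in zip(w, f)) / total

# Mean fitness rises monotonically ...
ms = [mean_fitness(t) for t in range(10)]
assert all(b >= a for a, b in zip(ms, ms[1:]))
# ... and saturates at the maximal pre-existing fitness.
assert abs(mean_fitness(50.0) - max(f)) < 1e-6
```

The paper's result concerns the shape of the evolving distribution (the skewness-kurtosis relation) and the power-law time course of the mean; the sketch only illustrates the underlying exponential-reweighting dynamics.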
1812.05156
Justin Faber
Justin Faber and Dolores Bozovic
Chaotic Dynamics Enhance the Sensitivity of Inner Ear Hair Cells
null
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hair cells of the auditory and vestibular systems are capable of detecting sounds that induce sub-nanometer vibrations of the hair bundle, below the stochastic noise levels of the surrounding fluid. Hair bundles of certain species are also known to oscillate without external stimulation, indicating the presence of an underlying active mechanism. We propose that chaotic dynamics enhance the sensitivity and temporal resolution of the hair bundle response, and provide experimental and theoretical evidence for this effect. By varying the viscosity and ionic composition of the surrounding fluid, we are able to modulate the degree of chaos observed in the hair bundle dynamics in vitro. We consistently find that the hair bundle is most sensitive to a stimulus of small amplitude when it is poised in the weakly chaotic regime. Further, we show that the response time to a force step decreases with increasing levels of chaos. These results agree well with our numerical simulations of a chaotic Hopf oscillator and suggest that chaos may be responsible for the sensitivity and temporal resolution of hair cells.
[ { "created": "Wed, 28 Nov 2018 22:09:58 GMT", "version": "v1" }, { "created": "Tue, 21 May 2019 20:57:54 GMT", "version": "v2" } ]
2019-05-23
[ [ "Faber", "Justin", "" ], [ "Bozovic", "Dolores", "" ] ]
Hair cells of the auditory and vestibular systems are capable of detecting sounds that induce sub-nanometer vibrations of the hair bundle, below the stochastic noise levels of the surrounding fluid. Hair bundles of certain species are also known to oscillate without external stimulation, indicating the presence of an underlying active mechanism. We propose that chaotic dynamics enhance the sensitivity and temporal resolution of the hair bundle response, and provide experimental and theoretical evidence for this effect. By varying the viscosity and ionic composition of the surrounding fluid, we are able to modulate the degree of chaos observed in the hair bundle dynamics in vitro. We consistently find that the hair bundle is most sensitive to a stimulus of small amplitude when it is poised in the weakly chaotic regime. Further, we show that the response time to a force step decreases with increasing levels of chaos. These results agree well with our numerical simulations of a chaotic Hopf oscillator and suggest that chaos may be responsible for the sensitivity and temporal resolution of hair cells.
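The base (non-chaotic) model referenced above is the Hopf normal form; the chaotic regime studied in the paper involves additional terms not sketched here. A minimal Euler integration showing the self-sustained limit cycle of amplitude $\sqrt{\mu}$, with illustrative parameters:

```python
# Euler integration of the supercritical Hopf normal form
#   dz/dt = (mu + i*omega) z - |z|^2 z
# which settles onto a limit cycle of amplitude sqrt(mu), mimicking the
# spontaneous oscillation of an active hair bundle.
mu, omega = 1.0, 2.0              # illustrative control parameter and frequency
dt, steps = 1e-3, 200_000         # integrate to t = 200
z = complex(0.1, 0.0)             # small initial perturbation
for _ in range(steps):
    z += dt * ((mu + 1j * omega) * z - abs(z) ** 2 * z)

# Amplitude converges to sqrt(mu) = 1 (up to Euler discretization error).
assert abs(abs(z) - mu ** 0.5) < 5e-3
```

For mu > 0 the oscillation is self-sustained without external stimulation, consistent with the active mechanism described in the abstract.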
2107.10583
Arash Roostaei
Arash Roostaei, Hadi Barzegar, Fakhteh Ghanbarnejad
Emergence of Hopf bifurcation in an extended SIR dynamic
21 pages, 7 figures
null
10.1371/journal.pone.0276969
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the SIR dynamics is extended by adding another compartment, which represents hospitalization of critical cases. A system of differential equations with four compartments is thus considered when there is an intensive care unit (ICU) to treat critical cases. The outgoing rate of surviving infected individuals is divided into $nI$ and $\frac{bI}{I+b}$. The second term represents the rate at which critical cases enter ICUs. It is proved that there are forward, backward and Hopf bifurcations in different regimes of the parameters.
[ { "created": "Thu, 22 Jul 2021 11:25:34 GMT", "version": "v1" } ]
2023-01-11
[ [ "Roostaei", "Arash", "" ], [ "Barzegar", "Hadi", "" ], [ "Ghanbarnejad", "Fakhteh", "" ] ]
In this paper, the SIR dynamics is extended by adding another compartment, which represents hospitalization of critical cases. A system of differential equations with four compartments is thus considered when there is an intensive care unit (ICU) to treat critical cases. The outgoing rate of surviving infected individuals is divided into $nI$ and $\frac{bI}{I+b}$. The second term represents the rate at which critical cases enter ICUs. It is proved that there are forward, backward and Hopf bifurcations in different regimes of the parameters.
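One plausible reading of the four-compartment system can be sketched as follows; the exact equations, the parameter values, and the ICU recovery rate g are illustrative assumptions, not taken from the paper.

```python
# Euler sketch of a four-compartment SIR + ICU model: recovering
# infecteds leave at rate n*I, critical cases enter the ICU compartment H
# at the saturating rate b*I/(I+b), and ICU patients recover at rate g.
# All equations and parameters below are illustrative assumptions.
beta, n, b, g = 2.0, 0.1, 0.05, 0.2
S, I, H, R = 0.99, 0.01, 0.0, 0.0
dt = 0.01
for _ in range(100_000):          # integrate to t = 1000
    dS = -beta * S * I
    dI = beta * S * I - n * I - b * I / (I + b)
    dH = b * I / (I + b) - g * H
    dR = n * I + g * H
    S, I, H, R = S + dt * dS, I + dt * dI, H + dt * dH, R + dt * dR

# Total population is conserved and the outbreak eventually resolves.
assert abs(S + I + H + R - 1.0) < 1e-9
assert I < 1e-3 and H < 1e-3
```

Note that the per-capita ICU inflow $b/(I+b)$ saturates as $I$ grows, which is the nonlinearity that makes richer bifurcation behavior possible compared with plain SIR.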
2102.06526
Luca Pasquini
Luca Pasquini, Antonio Napolitano, Martina Lucignani, Emanuela Tagliente, Francesco Dellepiane, Maria Camilla Rossi-Espagnet, Matteo Ritrovato, Antonello Vidiri, Veronica Villani, Giulio Ranazzi, Antonella Stoppacciaro, Andrea Romano, Alberto Di Napoli, Alessandro Bozzao
Comparison of Machine Learning Classifiers to Predict Patient Survival and Genetics of GBM: Towards a Standardized Model for Clinical Implementation
null
null
null
null
q-bio.QM cs.LG q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Radiomic models have been shown to outperform clinical data for outcome prediction in glioblastoma (GBM). However, clinical implementation is limited by a lack of parameter standardization. We aimed to compare nine machine learning classifiers, with different optimization parameters, to predict overall survival (OS), isocitrate dehydrogenase (IDH) mutation, O-6-methylguanine-DNA-methyltransferase (MGMT) promoter methylation, epidermal growth factor receptor (EGFR) VII amplification and Ki-67 expression in GBM patients, based on radiomic features from conventional and advanced MR. 156 adult patients with a pathologic diagnosis of GBM were included. Three tumoral regions were analyzed: contrast-enhancing tumor, necrosis and non-enhancing tumor, selected by manual segmentation. Radiomic features were extracted with a custom version of Pyradiomics and selected through the Boruta algorithm. A grid search algorithm was applied when computing 4 times K-fold cross validation (K=10) to get the highest mean and lowest spread of accuracy. Once optimal parameters were identified, model performances were assessed in terms of Area Under the Curve-Receiver Operating Characteristics (AUC-ROC). Metaheuristic and ensemble classifiers showed the best performance across tasks. xGB obtained maximum accuracy for OS (74.5%), AB for IDH mutation (88%), MGMT methylation (71.7%), Ki-67 expression (86.6%), and EGFR amplification (81.6%). Best-performing features shed light on possible correlations between MR and tumor histology.
[ { "created": "Wed, 10 Feb 2021 15:10:37 GMT", "version": "v1" } ]
2021-02-15
[ [ "Pasquini", "Luca", "" ], [ "Napolitano", "Antonio", "" ], [ "Lucignani", "Martina", "" ], [ "Tagliente", "Emanuela", "" ], [ "Dellepiane", "Francesco", "" ], [ "Rossi-Espagnet", "Maria Camilla", "" ], [ "Ritrovato", "Matteo", "" ], [ "Vidiri", "Antonello", "" ], [ "Villani", "Veronica", "" ], [ "Ranazzi", "Giulio", "" ], [ "Stoppacciaro", "Antonella", "" ], [ "Romano", "Andrea", "" ], [ "Di Napoli", "Alberto", "" ], [ "Bozzao", "Alessandro", "" ] ]
Radiomic models have been shown to outperform clinical data for outcome prediction in glioblastoma (GBM). However, clinical implementation is limited by a lack of parameter standardization. We aimed to compare nine machine learning classifiers, with different optimization parameters, to predict overall survival (OS), isocitrate dehydrogenase (IDH) mutation, O-6-methylguanine-DNA-methyltransferase (MGMT) promoter methylation, epidermal growth factor receptor (EGFR) VII amplification and Ki-67 expression in GBM patients, based on radiomic features from conventional and advanced MR. 156 adult patients with a pathologic diagnosis of GBM were included. Three tumoral regions were analyzed: contrast-enhancing tumor, necrosis and non-enhancing tumor, selected by manual segmentation. Radiomic features were extracted with a custom version of Pyradiomics and selected through the Boruta algorithm. A grid search algorithm was applied when computing 4 times K-fold cross validation (K=10) to get the highest mean and lowest spread of accuracy. Once optimal parameters were identified, model performances were assessed in terms of Area Under the Curve-Receiver Operating Characteristics (AUC-ROC). Metaheuristic and ensemble classifiers showed the best performance across tasks. xGB obtained maximum accuracy for OS (74.5%), AB for IDH mutation (88%), MGMT methylation (71.7%), Ki-67 expression (86.6%), and EGFR amplification (81.6%). Best-performing features shed light on possible correlations between MR and tumor histology.
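The "4 times K-fold cross validation (K=10)" selection criterion can be sketched with a stdlib-only repeated K-fold splitter (in practice scikit-learn's RepeatedKFold would be the idiomatic tool); the `dummy` scorer below is a hypothetical stand-in for fitting a classifier and measuring its accuracy.

```python
import random
import statistics

def kfold_indices(n, k, rng):
    """Shuffle n sample indices and yield (train, test) lists for k folds."""
    idx = list(range(n))
    rng.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

def repeated_cv_scores(n, k, repeats, score_fn, seed=0):
    """Collect score_fn(train, test) over `repeats` rounds of k-fold CV."""
    rng = random.Random(seed)
    return [score_fn(train, test)
            for _ in range(repeats)
            for train, test in kfold_indices(n, k, rng)]

# Hypothetical scorer standing in for "fit classifier, measure accuracy".
dummy = lambda train, test: 0.7 + 0.01 * (len(test) % 3)

scores = repeated_cv_scores(n=156, k=10, repeats=4, score_fn=dummy)
mean, spread = statistics.mean(scores), statistics.stdev(scores)
# The paper's criterion: prefer the hyperparameter setting with the
# highest mean and lowest spread of accuracy across the 4 x 10 folds.
assert len(scores) == 40
```

Each grid-search candidate would be scored this way, and the candidate maximizing `mean` (breaking ties by minimizing `spread`) would be retained.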