id: stringlengths (9 to 13)
submitter: stringlengths (4 to 48)
authors: stringlengths (4 to 9.62k)
title: stringlengths (4 to 343)
comments: stringlengths (2 to 480)
journal-ref: stringlengths (9 to 309)
doi: stringlengths (12 to 138)
report-no: stringclasses (277 values)
categories: stringlengths (8 to 87)
license: stringclasses (9 values)
orig_abstract: stringlengths (27 to 3.76k)
versions: listlengths (1 to 15)
update_date: stringlengths (10 to 10)
authors_parsed: listlengths (1 to 147)
abstract: stringlengths (24 to 3.75k)
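The column listing above doubles as a record schema. Assuming each record is exported as a JSON Lines row with these field names (an assumption; the export format is not stated here), a minimal sketch of parsing one record might look like this, using values from the first record below:

```python
import json

# Hypothetical JSON Lines row matching the schema above; the field values
# are copied from the first record in this preview.
record_json = '''{"id": "1109.3922", "submitter": "Jiao Sy",
 "title": "Kinetics of Muller's Ratchet from Adaptive Landscape Viewpoint",
 "categories": "q-bio.PE",
 "versions": [{"created": "Mon, 19 Sep 2011 01:56:27 GMT", "version": "v1"},
              {"created": "Wed, 21 Sep 2011 04:49:13 GMT", "version": "v2"}],
 "authors_parsed": [["Jiao", "Shuyun", ""], ["Wang", "Yanbo", ""]]}'''

record = json.loads(record_json)

# "categories" is a space-separated string of arXiv category codes.
categories = record["categories"].split()

# "authors_parsed" holds [last, first, suffix] triples; rebuild display names.
authors = [" ".join(part for part in (first, last) if part)
           for last, first, _suffix in record["authors_parsed"]]

# "versions" is chronologically ordered, so the last entry is the latest.
latest_version = record["versions"][-1]["version"]
print(categories, authors, latest_version)
```

The `record_json` literal stands in for one line of the hypothetical export; with a real file, each line would be fed to `json.loads` in turn.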
1109.3922
Jiao Sy
Shuyun Jiao, Yanbo Wang, Bo Yuan, Ping Ao
Kinetics of Muller's Ratchet from Adaptive Landscape Viewpoint
6 pages, 3 figures; IEEE Conference on Systems Biology, 2011; ISBN 978-1-4577-1666-9
2011 IEEE Conference on Systems Biology, pp: 27-32. Zhuhai, China, Sep 2-4
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: The accumulation of deleterious mutations in a population directly determines its fate, namely how long the population can persist. Muller's ratchet provides a quantitative framework to study the effect of this accumulation. The adaptive landscape, a powerful concept in systems biology, provides a handle to describe complex and rare biological events. In this article we study the evolutionary process of a population exposed to Muller's ratchet from the new viewpoint of the adaptive landscape, which allows us to estimate the single click of the ratchet starting from an intuitive understanding. Methods: We describe how the Wright-Fisher process maps to Muller's ratchet. We analytically construct the adaptive landscape from a general diffusion equation. This shows that the construction is dynamical and that the adaptive landscape is independent of the existence and normalization of the stationary distribution. We generalize the application of the diffusion model from the adaptive landscape viewpoint. Results: We develop a novel method to describe the dynamical behavior of a population exposed to Muller's ratchet, and analytically derive the decay time of the fittest class as a mean first passage time. Most importantly, we describe the absorption phenomenon through the adaptive landscape, where the stationary distribution is non-normalizable. These results suggest that the method may be used to understand the mechanisms of population evolution and to describe biological processes quantitatively.
[ { "created": "Mon, 19 Sep 2011 01:56:27 GMT", "version": "v1" }, { "created": "Wed, 21 Sep 2011 04:49:13 GMT", "version": "v2" } ]
2011-09-22
[ [ "Jiao", "Shuyun", "" ], [ "Wang", "Yanbo", "" ], [ "Yuan", "Bo", "" ], [ "Ao", "Ping", "" ] ]
Background: The accumulation of deleterious mutations in a population directly determines its fate, namely how long the population can persist. Muller's ratchet provides a quantitative framework to study the effect of this accumulation. The adaptive landscape, a powerful concept in systems biology, provides a handle to describe complex and rare biological events. In this article we study the evolutionary process of a population exposed to Muller's ratchet from the new viewpoint of the adaptive landscape, which allows us to estimate the single click of the ratchet starting from an intuitive understanding. Methods: We describe how the Wright-Fisher process maps to Muller's ratchet. We analytically construct the adaptive landscape from a general diffusion equation. This shows that the construction is dynamical and that the adaptive landscape is independent of the existence and normalization of the stationary distribution. We generalize the application of the diffusion model from the adaptive landscape viewpoint. Results: We develop a novel method to describe the dynamical behavior of a population exposed to Muller's ratchet, and analytically derive the decay time of the fittest class as a mean first passage time. Most importantly, we describe the absorption phenomenon through the adaptive landscape, where the stationary distribution is non-normalizable. These results suggest that the method may be used to understand the mechanisms of population evolution and to describe biological processes quantitatively.
1703.05827
Kelath Murali Manoj
Kelath Murali Manoj
Murburn concept: A facile explanation for oxygen-centered cellular respiration
Main manuscript (including abstract, text and data) is 19 pages, with 1 table & 1 figure. Supplementary information has 4 items, and the overall document totals 46 pages
Biomedical Reviews, 2017, 28, 35-52
10.14748/bmr.v28.4450
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Via a concomitant communication (the first part of my work), I have conclusively debunked the prevailing explanations for mitochondrial oxidative phosphorylation and established the need for a novel rationale to account for the reaction paradigm. Toward that end, the murburn concept is hereby floated as a viable explanation (in the second part of my work). It is proposed that the inner mitochondrial membrane (harboring the various metal and flavin enzyme complexes) serves as a means to confine and stabilize radical reactions, which effectively couple and bring about ATP synthesis in the proton-deficient microcosm. The proposed scheme is un-ordered and is favored by Ockham's razor and evolutionary perspectives. The murburn concept is a paradigm shift in biochemistry because it advocates that diffusible reactive (oxygen) species are the mainstay of routine cellular metabolic processes within the mitochondria.
[ { "created": "Thu, 16 Mar 2017 21:21:29 GMT", "version": "v1" } ]
2018-12-21
[ [ "Manoj", "Kelath Murali", "" ] ]
Via a concomitant communication (the first part of my work), I have conclusively debunked the prevailing explanations for mitochondrial oxidative phosphorylation and established the need for a novel rationale to account for the reaction paradigm. Toward that end, the murburn concept is hereby floated as a viable explanation (in the second part of my work). It is proposed that the inner mitochondrial membrane (harboring the various metal and flavin enzyme complexes) serves as a means to confine and stabilize radical reactions, which effectively couple and bring about ATP synthesis in the proton-deficient microcosm. The proposed scheme is un-ordered and is favored by Ockham's razor and evolutionary perspectives. The murburn concept is a paradigm shift in biochemistry because it advocates that diffusible reactive (oxygen) species are the mainstay of routine cellular metabolic processes within the mitochondria.
2006.15255
Zhijian Yang
Zhijian Yang, Junhao Wen, Christos Davatzikos
Smile-GANs: Semi-supervised clustering via GANs for dissecting brain disease heterogeneity from medical images
null
null
null
null
q-bio.QM cs.LG eess.IV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning methods applied to complex biomedical data have enabled the construction of disease signatures of diagnostic/prognostic value. However, less attention has been given to understanding disease heterogeneity. Semi-supervised clustering methods can address this problem by estimating multiple transformations from a (e.g. healthy) control (CN) group to a patient (PT) group, seeking to capture the heterogeneity of underlying pathologic processes. Herein, we propose a novel method, Smile-GANs (SeMi-supervIsed cLustEring via GANs), for semi-supervised clustering, and apply it to brain MRI scans. Smile-GANs first learns multiple distinct mappings by generating PT from CN, with each mapping characterizing one relatively distinct pathological pattern. Moreover, a clustering model is trained interactively with the mapping functions to assign PT to the corresponding subtype memberships. Using relaxed assumptions on the PT/CN data distributions and imposing mapping non-linearity, Smile-GANs captures heterogeneous differences in distribution between the CN and PT domains. We first validate Smile-GANs using simulated data, and subsequently on real data, by demonstrating its potential in characterizing heterogeneity in Alzheimer's Disease (AD) and its prodromal phases. The model was first trained using baseline MRIs from the ADNI2 database and then applied to longitudinal data from ADNI1 and BLSA. Four robust subtypes with distinct neuroanatomical patterns were discovered: 1) normal brain, 2) diffuse atrophy atypical of AD, 3) focal medial temporal lobe atrophy, 4) typical AD. Further longitudinal analyses uncover two distinct progressive pathways from prodromal to full AD: i) subtypes 1 - 2 - 4, and ii) subtypes 1 - 3 - 4. Although demonstrated on an important biomedical problem, Smile-GANs is general and can find application in many biomedical and other domains.
[ { "created": "Sat, 27 Jun 2020 02:06:21 GMT", "version": "v1" } ]
2020-06-30
[ [ "Yang", "Zhijian", "" ], [ "Wen", "Junhao", "" ], [ "Davatzikos", "Christos", "" ] ]
Machine learning methods applied to complex biomedical data have enabled the construction of disease signatures of diagnostic/prognostic value. However, less attention has been given to understanding disease heterogeneity. Semi-supervised clustering methods can address this problem by estimating multiple transformations from a (e.g. healthy) control (CN) group to a patient (PT) group, seeking to capture the heterogeneity of underlying pathologic processes. Herein, we propose a novel method, Smile-GANs (SeMi-supervIsed cLustEring via GANs), for semi-supervised clustering, and apply it to brain MRI scans. Smile-GANs first learns multiple distinct mappings by generating PT from CN, with each mapping characterizing one relatively distinct pathological pattern. Moreover, a clustering model is trained interactively with the mapping functions to assign PT to the corresponding subtype memberships. Using relaxed assumptions on the PT/CN data distributions and imposing mapping non-linearity, Smile-GANs captures heterogeneous differences in distribution between the CN and PT domains. We first validate Smile-GANs using simulated data, and subsequently on real data, by demonstrating its potential in characterizing heterogeneity in Alzheimer's Disease (AD) and its prodromal phases. The model was first trained using baseline MRIs from the ADNI2 database and then applied to longitudinal data from ADNI1 and BLSA. Four robust subtypes with distinct neuroanatomical patterns were discovered: 1) normal brain, 2) diffuse atrophy atypical of AD, 3) focal medial temporal lobe atrophy, 4) typical AD. Further longitudinal analyses uncover two distinct progressive pathways from prodromal to full AD: i) subtypes 1 - 2 - 4, and ii) subtypes 1 - 3 - 4. Although demonstrated on an important biomedical problem, Smile-GANs is general and can find application in many biomedical and other domains.
1204.4558
Ralph Stoop
Ralph L. Stoop, Victor Saase, Clemens Wagner, Britta Stoop, Ruedi Stoop
Cortical columns for quick brains
null
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is widely believed that the particular wiring observed within cortical columns boosts neural computation. We use rewiring of neural networks performing real-world cognitive tasks to study the validity of this argument. In a vast survey of wirings within the column we detect, however, no traces of the proposed effect. It is on the mesoscopic inter-columnar scale that the existence of columns - largely irrespective of their inner organization - enhances the speed of information transfer and minimizes the total wiring length required to bind the distributed columnar computations towards spatio-temporally coherent results.
[ { "created": "Fri, 20 Apr 2012 08:19:08 GMT", "version": "v1" } ]
2012-04-23
[ [ "Stoop", "Ralph L.", "" ], [ "Saase", "Victor", "" ], [ "Wagner", "Clemens", "" ], [ "Stoop", "Britta", "" ], [ "Stoop", "Ruedi", "" ] ]
It is widely believed that the particular wiring observed within cortical columns boosts neural computation. We use rewiring of neural networks performing real-world cognitive tasks to study the validity of this argument. In a vast survey of wirings within the column we detect, however, no traces of the proposed effect. It is on the mesoscopic inter-columnar scale that the existence of columns - largely irrespective of their inner organization - enhances the speed of information transfer and minimizes the total wiring length required to bind the distributed columnar computations towards spatio-temporally coherent results.
1012.5977
Ilmari Karonen
Ilmari Karonen
Stable trimorphic coexistence in a lattice model of spatial competition with two site types
22 pages, 7 figures. 3rd revision submitted to JTB
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I examine the effect of exogenous spatial heterogeneity on the coexistence of competing species using a simple model of non-hierarchical competition for site occupancy on a lattice. The sites on the lattice are divided into two types representing two different habitats or spatial resources. The model features no temporal variability, hierarchical competition, type-dependent interactions or other features traditionally known to support more competing species than there are resources. Nonetheless, stable coexistence of two habitat specialists and a generalist is observed in this model for a range of parameter values. In the spatially implicit mean field approximation of the model, such coexistence is shown to be impossible, demonstrating that it indeed arises from the explicit spatial structure.
[ { "created": "Wed, 29 Dec 2010 16:42:11 GMT", "version": "v1" }, { "created": "Sun, 26 Jun 2011 13:09:42 GMT", "version": "v2" }, { "created": "Thu, 13 Oct 2011 21:46:19 GMT", "version": "v3" } ]
2011-10-17
[ [ "Karonen", "Ilmari", "" ] ]
I examine the effect of exogenous spatial heterogeneity on the coexistence of competing species using a simple model of non-hierarchical competition for site occupancy on a lattice. The sites on the lattice are divided into two types representing two different habitats or spatial resources. The model features no temporal variability, hierarchical competition, type-dependent interactions or other features traditionally known to support more competing species than there are resources. Nonetheless, stable coexistence of two habitat specialists and a generalist is observed in this model for a range of parameter values. In the spatially implicit mean field approximation of the model, such coexistence is shown to be impossible, demonstrating that it indeed arises from the explicit spatial structure.
2303.03363
Philipp Seidl
Philipp Seidl, Andreu Vall, Sepp Hochreiter, G\"unter Klambauer
Enhancing Activity Prediction Models in Drug Discovery with the Ability to Understand Human Language
ICML version, 15 pages + 18 pages appendix
null
null
null
q-bio.BM cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Activity and property prediction models are the central workhorses in drug discovery and materials sciences, but currently they have to be trained or fine-tuned for new tasks. Without training or fine-tuning, scientific language models could be used for such low-data tasks through their announced zero- and few-shot capabilities. However, their predictive quality at activity prediction is lacking. In this work, we envision a novel type of activity prediction model that is able to adapt to new prediction tasks at inference time, via understanding textual information describing the task. To this end, we propose a new architecture with separate modules for chemical and natural language inputs, and a contrastive pre-training objective on data from large biochemical databases. In extensive experiments, we show that our method CLAMP yields improved predictive performance on few-shot learning benchmarks and zero-shot problems in drug discovery. We attribute the advances of our method to the modularized architecture and to our pre-training objective.
[ { "created": "Mon, 6 Mar 2023 18:49:09 GMT", "version": "v1" }, { "created": "Fri, 16 Jun 2023 09:59:34 GMT", "version": "v2" } ]
2023-06-19
[ [ "Seidl", "Philipp", "" ], [ "Vall", "Andreu", "" ], [ "Hochreiter", "Sepp", "" ], [ "Klambauer", "Günter", "" ] ]
Activity and property prediction models are the central workhorses in drug discovery and materials sciences, but currently they have to be trained or fine-tuned for new tasks. Without training or fine-tuning, scientific language models could be used for such low-data tasks through their announced zero- and few-shot capabilities. However, their predictive quality at activity prediction is lacking. In this work, we envision a novel type of activity prediction model that is able to adapt to new prediction tasks at inference time, via understanding textual information describing the task. To this end, we propose a new architecture with separate modules for chemical and natural language inputs, and a contrastive pre-training objective on data from large biochemical databases. In extensive experiments, we show that our method CLAMP yields improved predictive performance on few-shot learning benchmarks and zero-shot problems in drug discovery. We attribute the advances of our method to the modularized architecture and to our pre-training objective.
1809.10301
Aline Amabile Viol Barbosa
A. Viol, Fernanda Palhano-Fontes, Heloisa Onias, Draulio B. de Araujo, Philipp H\"ovel and G. M. Viswanathan
Characterizing complex networks using Entropy-degree diagrams: unveiling changes in functional brain connectivity induced by Ayahuasca
null
null
10.3390/e21020128
null
q-bio.NC physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open problems abound in the theory of complex networks, which has found successful application to diverse fields of science. With the aim of further advancing the understanding of the brain's functional connectivity, we propose to evaluate a network metric which we term the geodesic entropy. This entropy, in a way that can be made precise, quantifies the Shannon entropy of the distribution of distances to a specific node from all other nodes. Measurements of geodesic entropy allow for the characterization of the structural information of a network, taking into account the distinct role of each node in the network topology. The measurement and characterization of this structural information has the potential to greatly improve our understanding of sustained activity and other emergent behaviors in networks, such as the self-organized criticality sometimes seen in such contexts. We apply these concepts and methods to study how the psychedelic Ayahuasca affects the functional connectivity of the human brain. We show that the geodesic entropy is able to differentiate the functional networks of the human brain in two different states of consciousness in the resting state: (i) the ordinary waking state and (ii) a state altered by ingestion of Ayahuasca. The entropy of the nodes of brain networks from subjects under the influence of Ayahuasca diverges significantly from that of the ordinary waking state. The functional brain networks from subjects in the altered state have, on average, a larger geodesic entropy compared to the ordinary state. We conclude that geodesic entropy is a useful tool for analyzing complex networks and discuss how and why it may bring even further valuable insights into the study of the human brain and other empirical networks.
[ { "created": "Wed, 26 Sep 2018 07:04:50 GMT", "version": "v1" } ]
2019-02-20
[ [ "Viol", "A.", "" ], [ "Palhano-Fontes", "Fernanda", "" ], [ "Onias", "Heloisa", "" ], [ "de Araujo", "Draulio B.", "" ], [ "Hövel", "Philipp", "" ], [ "Viswanathan", "G. M.", "" ] ]
Open problems abound in the theory of complex networks, which has found successful application to diverse fields of science. With the aim of further advancing the understanding of the brain's functional connectivity, we propose to evaluate a network metric which we term the geodesic entropy. This entropy, in a way that can be made precise, quantifies the Shannon entropy of the distribution of distances to a specific node from all other nodes. Measurements of geodesic entropy allow for the characterization of the structural information of a network, taking into account the distinct role of each node in the network topology. The measurement and characterization of this structural information has the potential to greatly improve our understanding of sustained activity and other emergent behaviors in networks, such as the self-organized criticality sometimes seen in such contexts. We apply these concepts and methods to study how the psychedelic Ayahuasca affects the functional connectivity of the human brain. We show that the geodesic entropy is able to differentiate the functional networks of the human brain in two different states of consciousness in the resting state: (i) the ordinary waking state and (ii) a state altered by ingestion of Ayahuasca. The entropy of the nodes of brain networks from subjects under the influence of Ayahuasca diverges significantly from that of the ordinary waking state. The functional brain networks from subjects in the altered state have, on average, a larger geodesic entropy compared to the ordinary state. We conclude that geodesic entropy is a useful tool for analyzing complex networks and discuss how and why it may bring even further valuable insights into the study of the human brain and other empirical networks.
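The geodesic entropy described in this abstract, the Shannon entropy of the shortest-path-distance distribution associated with a node, can be sketched in a few lines. This is an illustrative reconstruction from the abstract's definition, not the authors' code; the adjacency-list representation and the function name are assumptions.

```python
import math
from collections import Counter, deque

def geodesic_entropy(adj, node):
    """Shannon entropy (bits) of the geodesic (shortest-path) distance
    distribution between `node` and every other reachable node of an
    unweighted graph given as an adjacency-list dict."""
    # Breadth-first search yields shortest-path distances on unweighted graphs.
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    # Empirical distribution over distance values (the node itself excluded).
    counts = Counter(d for n, d in dist.items() if n != node)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy example: a path graph 0-1-2-3. Distances from node 0 are 1, 2, 3,
# a uniform distribution over three values, so the entropy is log2(3).
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(round(geodesic_entropy(path, 0), 3))  # about 1.585 bits
```

By contrast, the center of a star graph sees every other node at distance 1, a degenerate distribution with zero entropy, which is what makes the metric sensitive to a node's position in the topology.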
1806.03454
Kavita Jain
Kavita Jain and Archana Devi
Polygenic adaptation in changing environments
null
EPL 123, 48002 (2018)
10.1209/0295-5075/123/48002
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although many phenotypic traits are determined by a large number of genetic variants, how a polygenic trait adapts in response to the changes in the environment is still poorly understood. Here we study the adaptation dynamics of a polygenic trait that is determined by a finite number of genetic loci in an infinitely large population which is evolving under stabilising selection and recurrent mutations. We find that in a changing environment, modeled here by a linearly moving phenotypic optimum, the mean trait also moves linearly with time. But its speed is smaller than that of the phenotypic optimum when the effect sizes of the genetic variants are small and approaches that of the environmental change for larger effect sizes. Our study thus highlights the influence of the genetic architecture of a polygenic trait on its adaptability.
[ { "created": "Sat, 9 Jun 2018 10:44:50 GMT", "version": "v1" }, { "created": "Thu, 18 Oct 2018 10:30:20 GMT", "version": "v2" } ]
2018-10-19
[ [ "Jain", "Kavita", "" ], [ "Devi", "Archana", "" ] ]
Although many phenotypic traits are determined by a large number of genetic variants, how a polygenic trait adapts in response to the changes in the environment is still poorly understood. Here we study the adaptation dynamics of a polygenic trait that is determined by a finite number of genetic loci in an infinitely large population which is evolving under stabilising selection and recurrent mutations. We find that in a changing environment, modeled here by a linearly moving phenotypic optimum, the mean trait also moves linearly with time. But its speed is smaller than that of the phenotypic optimum when the effect sizes of the genetic variants are small and approaches that of the environmental change for larger effect sizes. Our study thus highlights the influence of the genetic architecture of a polygenic trait on its adaptability.
1504.02550
Mikhail Tikhonov
Mikhail Tikhonov
Theoretical ecology without species
10 pages, 6 figures + Supplementary Material; improved and clarified
Phys. Rev. E 96, 032410 (2017)
10.1103/PhysRevE.96.032410
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecosystems are commonly conceptualized as networks of interacting species. However, partitioning natural diversity of organisms into discrete units is notoriously problematic, and mounting experimental evidence raises the intriguing question whether this perspective is appropriate for the microbial world. Here, an alternative formalism is proposed that does not require postulating the existence of species as fundamental ecological variables, and provides a naturally hierarchical description of community dynamics. This formalism allows approaching the "species problem" from the opposite direction. While the classical models treat a world of imperfectly clustered organism types as a perturbation around well-clustered "species", the presented approach allows gradually adding structure to a fully disordered background. The relevance of this theoretical construct for describing highly diverse natural ecosystems is discussed.
[ { "created": "Fri, 10 Apr 2015 04:52:24 GMT", "version": "v1" }, { "created": "Thu, 14 Apr 2016 19:55:56 GMT", "version": "v2" }, { "created": "Thu, 24 Nov 2016 21:31:58 GMT", "version": "v3" }, { "created": "Mon, 3 Apr 2017 22:38:44 GMT", "version": "v4" } ]
2017-09-22
[ [ "Tikhonov", "Mikhail", "" ] ]
Ecosystems are commonly conceptualized as networks of interacting species. However, partitioning natural diversity of organisms into discrete units is notoriously problematic, and mounting experimental evidence raises the intriguing question whether this perspective is appropriate for the microbial world. Here, an alternative formalism is proposed that does not require postulating the existence of species as fundamental ecological variables, and provides a naturally hierarchical description of community dynamics. This formalism allows approaching the "species problem" from the opposite direction. While the classical models treat a world of imperfectly clustered organism types as a perturbation around well-clustered "species", the presented approach allows gradually adding structure to a fully disordered background. The relevance of this theoretical construct for describing highly diverse natural ecosystems is discussed.
1707.03647
Yani Zhao
Yani Zhao, Mateusz Chwastyk and Marek Cieplak
Structural entanglements in protein complexes
11 figures, 16 pages
The Journal of Chemical Physics, 146(22), 225102
10.1063/1.4985221
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider multi-chain protein native structures and propose a criterion that determines whether two chains in the system are entangled or not. The criterion is based on the behavior observed when pulling simultaneously at both termini of each of the two chains. We have identified about 900 entangled systems in the Protein Data Bank and provide a more detailed analysis for several of them. We argue that entanglement enhances the thermodynamic stability of the system, but it may have other functions: burying the hydrophobic residues at the interface and increasing the DNA or RNA binding area. We also study the folding and stretching properties of the knotted dimeric proteins MJ0366, YibK and bacteriophytochrome. So far, these proteins have been studied theoretically only in their monomeric versions. The dimers are seen to separate on stretching through the tensile mechanism, and the characteristic unraveling force depends on the pulling direction.
[ { "created": "Wed, 12 Jul 2017 11:17:03 GMT", "version": "v1" } ]
2017-07-19
[ [ "Zhao", "Yani", "" ], [ "Chwastyk", "Mateusz", "" ], [ "Cieplak", "Marek", "" ] ]
We consider multi-chain protein native structures and propose a criterion that determines whether two chains in the system are entangled or not. The criterion is based on the behavior observed when pulling simultaneously at both termini of each of the two chains. We have identified about 900 entangled systems in the Protein Data Bank and provide a more detailed analysis for several of them. We argue that entanglement enhances the thermodynamic stability of the system, but it may have other functions: burying the hydrophobic residues at the interface and increasing the DNA or RNA binding area. We also study the folding and stretching properties of the knotted dimeric proteins MJ0366, YibK and bacteriophytochrome. So far, these proteins have been studied theoretically only in their monomeric versions. The dimers are seen to separate on stretching through the tensile mechanism, and the characteristic unraveling force depends on the pulling direction.
2407.00008
Peiyu Duan
Peiyu Duan, Nicha C. Dvornek, Jiyao Wang, Jeffrey Eilbott, Yuexi Du, Denis G. Sukhodolsky, James S. Duncan
Spectral Brain Graph Neural Network for Prediction of Anxiety in Children with Autism Spectrum Disorder
ISBI 2024 Oral
null
null
null
q-bio.NC eess.IV
http://creativecommons.org/publicdomain/zero/1.0/
Children with Autism Spectrum Disorder (ASD) frequently exhibit comorbid anxiety, which contributes to impairment and requires treatment. Therefore, it is critical to investigate co-occurring autism and anxiety with functional imaging tools to understand the brain mechanisms of this comorbidity. Multidimensional Anxiety Scale for Children, 2nd edition (MASC-2) score is a common tool to evaluate the daily anxiety level in autistic children. Predicting MASC-2 score with Functional Magnetic Resonance Imaging (fMRI) data will help gain more insights into the brain functional networks of children with ASD complicated by anxiety. However, most of the current graph neural network (GNN) studies using fMRI only focus on graph operations but ignore the spectral features. In this paper, we explored the feasibility of using spectral features to predict the MASC-2 total scores. We proposed SpectBGNN, a graph-based network, which uses spectral features and integrates graph spectral filtering layers to extract hidden information. We experimented with multiple spectral analysis algorithms and compared the performance of the SpectBGNN model with CPM, GAT, and BrainGNN on a dataset consisting of 26 typically developing and 70 ASD children with 5-fold cross-validation. We showed that among all spectral analysis algorithms tested, using the Fast Fourier Transform (FFT) or Welch's Power Spectrum Density (PSD) as node features performs significantly better than correlation features, and adding the graph spectral filtering layer significantly increases the network's performance.
[ { "created": "Tue, 23 Apr 2024 19:32:34 GMT", "version": "v1" } ]
2024-07-02
[ [ "Duan", "Peiyu", "" ], [ "Dvornek", "Nicha C.", "" ], [ "Wang", "Jiyao", "" ], [ "Eilbott", "Jeffrey", "" ], [ "Du", "Yuexi", "" ], [ "Sukhodolsky", "Denis G.", "" ], [ "Duncan", "James S.", "" ] ]
Children with Autism Spectrum Disorder (ASD) frequently exhibit comorbid anxiety, which contributes to impairment and requires treatment. Therefore, it is critical to investigate co-occurring autism and anxiety with functional imaging tools to understand the brain mechanisms of this comorbidity. Multidimensional Anxiety Scale for Children, 2nd edition (MASC-2) score is a common tool to evaluate the daily anxiety level in autistic children. Predicting MASC-2 score with Functional Magnetic Resonance Imaging (fMRI) data will help gain more insights into the brain functional networks of children with ASD complicated by anxiety. However, most of the current graph neural network (GNN) studies using fMRI only focus on graph operations but ignore the spectral features. In this paper, we explored the feasibility of using spectral features to predict the MASC-2 total scores. We proposed SpectBGNN, a graph-based network, which uses spectral features and integrates graph spectral filtering layers to extract hidden information. We experimented with multiple spectral analysis algorithms and compared the performance of the SpectBGNN model with CPM, GAT, and BrainGNN on a dataset consisting of 26 typically developing and 70 ASD children with 5-fold cross-validation. We showed that among all spectral analysis algorithms tested, using the Fast Fourier Transform (FFT) or Welch's Power Spectrum Density (PSD) as node features performs significantly better than correlation features, and adding the graph spectral filtering layer significantly increases the network's performance.
1602.07266
Tin Yau Pang
Tin Y Pang, Martin Lercher
Supra-operonic clusters of functionally related genes (SOCs) are a source of horizontal gene co-transfers
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adaptation of bacteria occurs predominantly via horizontal gene transfer (HGT). While it is widely recognized that horizontal acquisitions frequently encompass multiple genes, it is unclear what the size distribution of successfully transferred DNA segments looks like and what evolutionary forces shape this distribution. Here, we identified 1790 gene family pairs that were consistently co-gained on the same branches across a phylogeny of 53 E. coli strains. We estimated a lower limit of their genomic distances at the time they were transferred to their host genomes; this distribution shows a sharp upper bound at 30 kb. The same gene-pairs can have larger distances (up to 70 kb) in other genomes. These more distant pairs likely represent recent acquisitions via transduction that involve the co-transfer of excised prophage genes, as they are almost always associated with intervening phage-associated genes. The observed distribution of genomic distances of co-transferred genes is much broader than expected from a model based on the co-transfer of genes within operons; instead, this distribution is highly consistent with the size distribution of supra-operonic clusters (SOCs), groups of co-occurring and co-functioning genes that extend beyond operons. Thus, we propose that SOCs form a basic unit of horizontal gene transfer.
[ { "created": "Tue, 23 Feb 2016 19:11:54 GMT", "version": "v1" }, { "created": "Wed, 24 Feb 2016 09:45:34 GMT", "version": "v2" }, { "created": "Tue, 15 Nov 2016 14:05:32 GMT", "version": "v3" } ]
2016-11-16
[ [ "Pang", "Tin Y", "" ], [ "Lercher", "Martin", "" ] ]
Adaptation of bacteria occurs predominantly via horizontal gene transfer (HGT). While it is widely recognized that horizontal acquisitions frequently encompass multiple genes, it is unclear what the size distribution of successfully transferred DNA segments looks like and what evolutionary forces shape this distribution. Here, we identified 1790 gene family pairs that were consistently co-gained on the same branches across a phylogeny of 53 E. coli strains. We estimated a lower limit of their genomic distances at the time they were transferred to their host genomes; this distribution shows a sharp upper bound at 30 kb. The same gene-pairs can have larger distances (up to 70 kb) in other genomes. These more distant pairs likely represent recent acquisitions via transduction that involve the co-transfer of excised prophage genes, as they are almost always associated with intervening phage-associated genes. The observed distribution of genomic distances of co-transferred genes is much broader than expected from a model based on the co-transfer of genes within operons; instead, this distribution is highly consistent with the size distribution of supra-operonic clusters (SOCs), groups of co-occurring and co-functioning genes that extend beyond operons. Thus, we propose that SOCs form a basic unit of horizontal gene transfer.
2103.10947
Purba Chatterjee
Purba Chatterjee, Nigel Goldenfeld, and Sangjin Kim
DNA Supercoiling Drives a Transition between Collective Modes of Gene Synthesis
null
null
10.1103/PhysRevLett.127.218101
null
q-bio.SC physics.bio-ph q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent experiments showed that multiple copies of the molecular machine RNA polymerase (RNAP) can efficiently synthesize mRNA collectively in the active state of the promoter. However, environmentally-induced promoter repression results in long-distance antagonistic interactions that drastically reduce the speed of RNAPs and cause a quick arrest of mRNA synthesis. The mechanism underlying this transition between cooperative and antagonistic dynamics remains poorly understood. In this Letter, we introduce a continuum deterministic model for the translocation of RNAPs, where the speed of an RNAP is coupled to the local DNA supercoiling as well as the density of RNAPs on the gene. We assume that torsional stress experienced by individual RNAPs is exacerbated by high RNAP density on the gene and that transcription factors act as physical barriers to the diffusion of DNA supercoils. We show that this minimal model exhibits two transcription modes mediated by the torsional stress: a fluid mode when the promoter is active and a torsionally stressed mode when the promoter is repressed, in quantitative agreement with experimentally observed dynamics of co-transcribing RNAPs. Our work provides an important step towards understanding the collective dynamics of molecular machines involved in gene expression.
[ { "created": "Fri, 19 Mar 2021 17:56:15 GMT", "version": "v1" } ]
2021-12-01
[ [ "Chatterjee", "Purba", "" ], [ "Goldenfeld", "Nigel", "" ], [ "Kim", "Sangjin", "" ] ]
Recent experiments showed that multiple copies of the molecular machine RNA polymerase (RNAP) can efficiently synthesize mRNA collectively in the active state of the promoter. However, environmentally-induced promoter repression results in long-distance antagonistic interactions that drastically reduce the speed of RNAPs and cause a quick arrest of mRNA synthesis. The mechanism underlying this transition between cooperative and antagonistic dynamics remains poorly understood. In this Letter, we introduce a continuum deterministic model for the translocation of RNAPs, where the speed of an RNAP is coupled to the local DNA supercoiling as well as the density of RNAPs on the gene. We assume that torsional stress experienced by individual RNAPs is exacerbated by high RNAP density on the gene and that transcription factors act as physical barriers to the diffusion of DNA supercoils. We show that this minimal model exhibits two transcription modes mediated by the torsional stress: a fluid mode when the promoter is active and a torsionally stressed mode when the promoter is repressed, in quantitative agreement with experimentally observed dynamics of co-transcribing RNAPs. Our work provides an important step towards understanding the collective dynamics of molecular machines involved in gene expression.
1902.07129
Enzo Marinari
A. De Martino, D. De Martino and E. Marinari
The Essential Role of Thermodynamics in metabolic network modeling: physical insights and computational challenges
Contribution to "Chemical Kinetics: Beyond the Textbook", edited by Katja Lindenberg, Ralf Metzler and Gleb Oshanin (World Scientific, Singapore 2019)
null
null
null
q-bio.MN cond-mat.stat-mech q-bio.BM q-bio.CB q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantitative studies of cell metabolism are often based on large chemical reaction network models. A steady-state approach is suited to analyze phenomena on the timescale of cell growth and circumvents the problem of incomplete experimental knowledge of kinetic laws and parameters, but it must be supported by a correct implementation of thermodynamic constraints. In this article we review the latter aspect, highlighting its computational challenges and physical insights. The simple introduction of Gibbs inequalities avoids the presence of unfeasible loops, allowing for correct timescale analysis, but leads to possibly non-convex feasible flux spaces, whose exploration requires efficient algorithms. We briefly review the implementation of thermodynamics through variational principles in constraint-based models of metabolic networks.
[ { "created": "Tue, 19 Feb 2019 16:39:50 GMT", "version": "v1" } ]
2019-02-20
[ [ "De Martino", "A.", "" ], [ "De Martino", "D.", "" ], [ "Marinari", "E.", "" ] ]
Quantitative studies of cell metabolism are often based on large chemical reaction network models. A steady-state approach is suited to analyze phenomena on the timescale of cell growth and circumvents the problem of incomplete experimental knowledge of kinetic laws and parameters, but it must be supported by a correct implementation of thermodynamic constraints. In this article we review the latter aspect, highlighting its computational challenges and physical insights. The simple introduction of Gibbs inequalities avoids the presence of unfeasible loops, allowing for correct timescale analysis, but leads to possibly non-convex feasible flux spaces, whose exploration requires efficient algorithms. We briefly review the implementation of thermodynamics through variational principles in constraint-based models of metabolic networks.
2210.12294
Kohei Ichikawa
Kohei Ichikawa, Kunihiko Kaneko
Bayesian inference is facilitated by modular neural networks with different time scales
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Various animals, including humans, have been suggested to perform Bayesian inference to handle noisy, time-varying external information. In performing Bayesian inference, the prior distribution must be shaped by sampling noisy external inputs. However, the mechanism by which neural activities represent such distributions has not yet been elucidated. In this study, we demonstrated that neural networks with modular structures comprising fast and slow modules effectively represent the prior distribution when performing accurate Bayesian inference. Using a recurrent neural network consisting of a main module connected with input and output layers and a sub-module connected only with the main module and having slower neural activity, we demonstrated that the modular network with distinct time scales performed more accurate Bayesian inference compared with neural networks with uniform time scales. Prior information was represented selectively by the slow sub-module, which could integrate observed signals over an appropriate period and represent input means and variances. Accordingly, the network could effectively predict the time-varying inputs. Furthermore, by training the time scales of neurons starting from networks with uniform time scales and without modular structure, the above slow-fast modular network structure spontaneously emerged as a result of learning, wherein prior information was selectively represented in the slower sub-module. These results explain how the prior distribution for Bayesian inference is represented in the brain, provide insight into the relevance of modular structure with time scale hierarchy to information processing, and elucidate the significance of brain areas with slower time scales.
[ { "created": "Fri, 21 Oct 2022 23:08:23 GMT", "version": "v1" } ]
2022-10-25
[ [ "Ichikawa", "Kohei", "" ], [ "Kaneko", "Kunihiko", "" ] ]
Various animals, including humans, have been suggested to perform Bayesian inference to handle noisy, time-varying external information. In performing Bayesian inference, the prior distribution must be shaped by sampling noisy external inputs. However, the mechanism by which neural activities represent such distributions has not yet been elucidated. In this study, we demonstrated that neural networks with modular structures comprising fast and slow modules effectively represent the prior distribution when performing accurate Bayesian inference. Using a recurrent neural network consisting of a main module connected with input and output layers and a sub-module connected only with the main module and having slower neural activity, we demonstrated that the modular network with distinct time scales performed more accurate Bayesian inference compared with neural networks with uniform time scales. Prior information was represented selectively by the slow sub-module, which could integrate observed signals over an appropriate period and represent input means and variances. Accordingly, the network could effectively predict the time-varying inputs. Furthermore, by training the time scales of neurons starting from networks with uniform time scales and without modular structure, the above slow-fast modular network structure spontaneously emerged as a result of learning, wherein prior information was selectively represented in the slower sub-module. These results explain how the prior distribution for Bayesian inference is represented in the brain, provide insight into the relevance of modular structure with time scale hierarchy to information processing, and elucidate the significance of brain areas with slower time scales.
2312.02994
Manal Helal
Manal Helal, Hossam El-Gindy, Bruno Gaeta, Vitali Sinchenko
High Performance Multiple Sequence Alignment Algorithms for Comparison of Microbial Genomes
null
Proc 19th Int Conf Genome Informatics (GIW 2008), Gold Coast, Australia
null
null
q-bio.GN cs.DC
http://creativecommons.org/licenses/by/4.0/
Advances in gene sequencing have enabled in silico analyses of microbial genomes and have led to the revision of concepts of microbial taxonomy and evolution. We explore deficiencies in existing multiple sequence global alignment algorithms and introduce a new indexing scheme to partition the dynamic programming algorithm hypercube scoring tensor over processors based on the dependency between partitions to be scored in parallel. The performance of algorithms is compared in the study of rpoB gene sequences of Mycoplasma species.
[ { "created": "Wed, 29 Nov 2023 11:43:35 GMT", "version": "v1" } ]
2023-12-07
[ [ "Helal", "Manal", "" ], [ "El-Gindy", "Hossam", "" ], [ "Gaeta", "Bruno", "" ], [ "Sinchenko", "Vitali", "" ] ]
Advances in gene sequencing have enabled in silico analyses of microbial genomes and have led to the revision of concepts of microbial taxonomy and evolution. We explore deficiencies in existing multiple sequence global alignment algorithms and introduce a new indexing scheme to partition the dynamic programming algorithm hypercube scoring tensor over processors based on the dependency between partitions to be scored in parallel. The performance of algorithms is compared in the study of rpoB gene sequences of Mycoplasma species.
2007.01201
Michael Fu
Jian Chen, Michael C. Fu, Wenhong Zhang, Junhua Zheng
Supporting Real-Time COVID-19 Medical Management Decisions: The Transition Matrix Model Approach
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since the onset of the COVID-19 outbreak in Wuhan, China, numerous forecasting models have been proposed to project the trajectory of coronavirus infection cases. We propose a new discrete-time Markov chain transition matrix model that directly incorporates stochastic behavior and for which parameter estimation is straightforward from available data. Using such data from China's Hubei province (for which Wuhan is the provincial capital city), the model is shown to be flexible, robust, and accurate. As a result, it has been adopted by the first Shanghai assistance medical team in Wuhan's Jinyintan Hospital, which was the first designated hospital to take COVID-19 patients in the world. The forecast has been used for preparing medical staff, intensive care unit (ICU) beds, ventilators, and other critical care medical resources and for supporting real-time medical management decisions. Empirical data from China's first two months (January/February) of fighting COVID-19 was collected and used to enhance the model by embedding NPI efficiency into the model. We applied the model to forecast Italy, South Korea, and Iran on March 9. Later we made forecasts for Spain, Germany, France, and the US on March 24. Again, the model has performed very well, proving to be flexible, robust, and accurate for most of these countries/regions outside China.
[ { "created": "Wed, 1 Jul 2020 15:59:16 GMT", "version": "v1" } ]
2020-07-03
[ [ "Chen", "Jian", "" ], [ "Fu", "Michael C.", "" ], [ "Zhang", "Wenhong", "" ], [ "Zheng", "Junhua", "" ] ]
Since the onset of the COVID-19 outbreak in Wuhan, China, numerous forecasting models have been proposed to project the trajectory of coronavirus infection cases. We propose a new discrete-time Markov chain transition matrix model that directly incorporates stochastic behavior and for which parameter estimation is straightforward from available data. Using such data from China's Hubei province (for which Wuhan is the provincial capital city), the model is shown to be flexible, robust, and accurate. As a result, it has been adopted by the first Shanghai assistance medical team in Wuhan's Jinyintan Hospital, which was the first designated hospital to take COVID-19 patients in the world. The forecast has been used for preparing medical staff, intensive care unit (ICU) beds, ventilators, and other critical care medical resources and for supporting real-time medical management decisions. Empirical data from China's first two months (January/February) of fighting COVID-19 was collected and used to enhance the model by embedding NPI efficiency into the model. We applied the model to forecast Italy, South Korea, and Iran on March 9. Later we made forecasts for Spain, Germany, France, and the US on March 24. Again, the model has performed very well, proving to be flexible, robust, and accurate for most of these countries/regions outside China.
2204.05835
Joachim Krug
Maurice G\"ortz and Joachim Krug
Nonlinear dynamics of an epidemic compartment model with asymptomatic infections and mitigation
15 pages, 4 figures
J. Phys. A: Math. Theor. 55 414005 (2022)
10.1088/1751-8121/ac8fc7
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A significant proportion of the infections driving the current {SARS-CoV-2} pandemic are transmitted asymptomatically. Here we introduce and study a simple epidemic model with separate compartments comprising asymptomatic and symptomatic infected individuals. The linear dynamics determining the outbreak condition of the model is equivalent to a renewal theory approach with exponential waiting time distributions. Exploiting a nontrivial conservation law of the full nonlinear dynamics, we derive analytic bounds on the peak number of infections in the absence and presence of mitigation through isolation and testing. The bounds are compared to numerical solutions of the differential equations.
[ { "created": "Tue, 12 Apr 2022 14:21:18 GMT", "version": "v1" }, { "created": "Sat, 8 Oct 2022 14:46:58 GMT", "version": "v2" } ]
2022-10-12
[ [ "Görtz", "Maurice", "" ], [ "Krug", "Joachim", "" ] ]
A significant proportion of the infections driving the current {SARS-CoV-2} pandemic are transmitted asymptomatically. Here we introduce and study a simple epidemic model with separate compartments comprising asymptomatic and symptomatic infected individuals. The linear dynamics determining the outbreak condition of the model is equivalent to a renewal theory approach with exponential waiting time distributions. Exploiting a nontrivial conservation law of the full nonlinear dynamics, we derive analytic bounds on the peak number of infections in the absence and presence of mitigation through isolation and testing. The bounds are compared to numerical solutions of the differential equations.
1002.4273
Filipe Tostevin
Filipe Tostevin, Pieter Rein ten Wolde
Mutual information in time-varying biochemical systems
Minor corrections and expanded references. 17 pages, 4 figures, 2 tables; revtex4
Phys. Rev. E 81, 061917 (2010)
10.1103/PhysRevE.81.061917
null
q-bio.MN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cells must continuously sense and respond to time-varying environmental stimuli. These signals are transmitted and processed by biochemical signalling networks. However, the biochemical reactions making up these networks are intrinsically noisy, which limits the reliability of intracellular signalling. Here we use information theory to characterise the reliability of transmission of time-varying signals through elementary biochemical reactions in the presence of noise. We calculate the mutual information for both instantaneous measurements and trajectories of biochemical systems for a Gaussian model. Our results indicate that the same network can have radically different characteristics for the transmission of instantaneous signals and trajectories. For trajectories, the ability of a network to respond to changes in the input signal is determined by the timing of reaction events, and is independent of the correlation time of the output of the network. We also study how reliably signals on different time-scales can be transmitted by considering the frequency-dependent coherence and gain-to-noise ratio. We find that a detector that does not consume the ligand molecule upon detection can more reliably transmit slowly varying signals, while an absorbing detector can more reliably transmit rapidly varying signals. Furthermore, we find that while one reaction may more reliably transmit information than another when considered in isolation, when placed within a signalling cascade the relative performance of the two reactions can be reversed. This means that optimising signal transmission at a single level of a signalling cascade can reduce signalling performance for the cascade as a whole.
[ { "created": "Tue, 23 Feb 2010 09:14:43 GMT", "version": "v1" }, { "created": "Wed, 16 Jun 2010 16:50:31 GMT", "version": "v2" } ]
2015-05-18
[ [ "Tostevin", "Filipe", "" ], [ "Wolde", "Pieter Rein ten", "" ] ]
Cells must continuously sense and respond to time-varying environmental stimuli. These signals are transmitted and processed by biochemical signalling networks. However, the biochemical reactions making up these networks are intrinsically noisy, which limits the reliability of intracellular signalling. Here we use information theory to characterise the reliability of transmission of time-varying signals through elementary biochemical reactions in the presence of noise. We calculate the mutual information for both instantaneous measurements and trajectories of biochemical systems for a Gaussian model. Our results indicate that the same network can have radically different characteristics for the transmission of instantaneous signals and trajectories. For trajectories, the ability of a network to respond to changes in the input signal is determined by the timing of reaction events, and is independent of the correlation time of the output of the network. We also study how reliably signals on different time-scales can be transmitted by considering the frequency-dependent coherence and gain-to-noise ratio. We find that a detector that does not consume the ligand molecule upon detection can more reliably transmit slowly varying signals, while an absorbing detector can more reliably transmit rapidly varying signals. Furthermore, we find that while one reaction may more reliably transmit information than another when considered in isolation, when placed within a signalling cascade the relative performance of the two reactions can be reversed. This means that optimising signal transmission at a single level of a signalling cascade can reduce signalling performance for the cascade as a whole.
2202.07035
Grace Lindsay
Grace W. Lindsay
Testing the Tools of Systems Neuroscience on Artificial Neural Networks
Perspective article; 10 pages, 2 figures
null
null
null
q-bio.NC cs.NE
http://creativecommons.org/licenses/by/4.0/
Neuroscientists apply a range of common analysis tools to recorded neural activity in order to glean insights into how neural circuits implement computations. Despite the fact that these tools shape the progress of the field as a whole, we have little empirical evidence that they are effective at quickly identifying the phenomena of interest. Here I argue that these tools should be explicitly tested and that artificial neural networks (ANNs) are an appropriate testing ground for them. The recent resurgence of the use of ANNs as models of everything from perception to memory to motor control stems from a rough similarity between artificial and biological neural networks and the ability to train these networks to perform complex high-dimensional tasks. These properties, combined with the ability to perfectly observe and manipulate these systems, make them well-suited for vetting the tools of systems and cognitive neuroscience. I provide here both a roadmap for performing this testing and a list of tools that are suitable to be tested on ANNs. Using ANNs to reflect on the extent to which these tools provide a productive understanding of neural systems -- and on exactly what understanding should mean here -- has the potential to expedite progress in the study of the brain.
[ { "created": "Mon, 14 Feb 2022 20:55:26 GMT", "version": "v1" } ]
2022-02-16
[ [ "Lindsay", "Grace W.", "" ] ]
Neuroscientists apply a range of common analysis tools to recorded neural activity in order to glean insights into how neural circuits implement computations. Despite the fact that these tools shape the progress of the field as a whole, we have little empirical evidence that they are effective at quickly identifying the phenomena of interest. Here I argue that these tools should be explicitly tested and that artificial neural networks (ANNs) are an appropriate testing ground for them. The recent resurgence of the use of ANNs as models of everything from perception to memory to motor control stems from a rough similarity between artificial and biological neural networks and the ability to train these networks to perform complex high-dimensional tasks. These properties, combined with the ability to perfectly observe and manipulate these systems, make them well-suited for vetting the tools of systems and cognitive neuroscience. I provide here both a roadmap for performing this testing and a list of tools that are suitable to be tested on ANNs. Using ANNs to reflect on the extent to which these tools provide a productive understanding of neural systems -- and on exactly what understanding should mean here -- has the potential to expedite progress in the study of the brain.
1611.05669
Vasilis Andreadakis Mr
Ioannis Smyrnakis, Vassilios Andreadakis, Vassilios Selimis, Michail Kalaitzakis, Theodora Bachourou, Georgios Kaloutsakis, George D. Kymionis, Stelios Smirnakis, Ioannis M. Aslanides
RADAR: A Novel Fast-Screening Method for Reading Difficulties with Special Focus on Dyslexia
null
null
10.1371/journal.pone.0182597
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dyslexia is a developmental learning disorder of single word reading accuracy and/or fluency, with compelling research directed towards understanding the contributions of the visual system. While dyslexia is not an oculomotor disease, readers with dyslexia have shown different eye movements than typically developing students during text reading. Readers with dyslexia exhibit longer and more frequent fixations, shorter saccade lengths, and more backward refixations than typical readers. Furthermore, readers with dyslexia are known to have difficulty in reading long words, a lower skipping rate of short words, and high gaze duration on many words. It is an open question whether it is possible to harness these distinctive oculomotor scanning patterns observed during reading in order to develop a screening tool that can reliably identify struggling readers, who may be candidates for dyslexia. Here, we introduce a novel, fast, objective, non-invasive method, named Rapid Assessment of Difficulties and Abnormalities in Reading (RADAR), that screens for features associated with the aberrant visual scanning of reading text seen in dyslexia. Eye tracking parameter measurements that are stable under retest and have high discriminative power, as indicated by their ROC curves, were obtained during silent text reading. These parameters were combined to derive a total reading score (TRS) that can reliably separate readers with dyslexia from typical readers. We tested TRS in a group of school-age children ranging from 8.5 to 12.5 years of age. TRS achieved 94.2% correct classification of children tested. Specifically, 35 out of 37 control (specificity 94.6%) and 30 out of 32 readers with dyslexia (sensitivity 93.8%) were classified correctly using RADAR, under a circular validation condition where the individual evaluated was not included in the test construction group.
[ { "created": "Thu, 17 Nov 2016 13:05:55 GMT", "version": "v1" }, { "created": "Fri, 18 Nov 2016 12:50:23 GMT", "version": "v2" }, { "created": "Thu, 24 Nov 2016 09:27:23 GMT", "version": "v3" }, { "created": "Mon, 9 Jan 2017 16:36:01 GMT", "version": "v4" } ]
2017-11-01
[ [ "Smyrnakis", "Ioannis", "" ], [ "Andreadakis", "Vassilios", "" ], [ "Selimis", "Vassilios", "" ], [ "Kalaitzakis", "Michail", "" ], [ "Bachourou", "Theodora", "" ], [ "Kaloutsakis", "Georgios", "" ], [ "Kymionis", "George D.", "" ], [ "Smirnakis", "Stelios", "" ], [ "Aslanides", "Ioannis M.", "" ] ]
Dyslexia is a developmental learning disorder of single word reading accuracy and/or fluency, with compelling research directed towards understanding the contributions of the visual system. While dyslexia is not an oculomotor disease, readers with dyslexia have shown different eye movements than typically developing students during text reading. Readers with dyslexia exhibit longer and more frequent fixations, shorter saccade lengths, and more backward refixations than typical readers. Furthermore, readers with dyslexia are known to have difficulty in reading long words, a lower skipping rate of short words, and high gaze duration on many words. It is an open question whether it is possible to harness these distinctive oculomotor scanning patterns observed during reading in order to develop a screening tool that can reliably identify struggling readers, who may be candidates for dyslexia. Here, we introduce a novel, fast, objective, non-invasive method, named Rapid Assessment of Difficulties and Abnormalities in Reading (RADAR), that screens for features associated with the aberrant visual scanning of reading text seen in dyslexia. Eye tracking parameter measurements that are stable under retest and have high discriminative power, as indicated by their ROC curves, were obtained during silent text reading. These parameters were combined to derive a total reading score (TRS) that can reliably separate readers with dyslexia from typical readers. We tested TRS in a group of school-age children ranging from 8.5 to 12.5 years of age. TRS achieved 94.2% correct classification of children tested. Specifically, 35 out of 37 control (specificity 94.6%) and 30 out of 32 readers with dyslexia (sensitivity 93.8%) were classified correctly using RADAR, under a circular validation condition where the individual evaluated was not included in the test construction group.
1705.08261
Samir Suweis Dr.
Chengyi Tu, Rodrigo P. Rocha, Maurizio Corbetta, Sandro Zampieri, Marco Zorzi and Samir Suweis
Warnings and Caveats in Brain Controllability
9 pages, 1 Figure, 1 Table
NeuroImage, Volume 176, 1 August 2018, Pages 83-91
10.1016/j.neuroimage.2018.04.010
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we challenge the main conclusions of the work of Gu et al. (Controllability of structural brain networks. Nature Communications 6, 8414, doi:10.1038/ncomms9414, 2015) on brain controllability. Using the same methods and analyses on four datasets, we find that the minimum set of nodes needed to control brain networks is always larger than one. We also find that the relationships between the average/modal controllability and weighted degrees also hold for randomized data, and that there are no specific roles played by Resting State Networks in controlling the brain. In conclusion, we show that there is no evidence that topology plays specific and unique roles in the controllability of brain networks. Accordingly, Gu et al.'s interpretation of their results, in particular in terms of translational applications (e.g. using single node controllability properties to define target region(s) for neurostimulation), should be revisited. Though theoretically intriguing, our understanding of the relationship between controllability and structural brain networks remains elusive.
[ { "created": "Wed, 17 May 2017 15:51:37 GMT", "version": "v1" } ]
2018-07-10
[ [ "Tu", "Chengyi", "" ], [ "Rocha", "Rodrigo P.", "" ], [ "Corbetta", "Maurizio", "" ], [ "Zampieri", "Sandro", "" ], [ "Zorzi", "Marzo", "" ], [ "Suweis", "Samir", "" ] ]
In this work we challenge the main conclusions of Gu et al.'s work (Controllability of structural brain networks. Nature Communications 6, 8414, doi:10.1038/ncomms9414, 2015) on brain controllability. Using the same methods and analyses on four datasets we find that the minimum set of nodes needed to control brain networks is always larger than one. We also find that the relationships between the average/modal controllability and weighted degrees also hold for randomized data, and that there are no specific roles played by Resting State Networks in controlling the brain. In conclusion, we show that there is no evidence that topology plays specific and unique roles in the controllability of brain networks. Accordingly, Gu et al.'s interpretation of their results, in particular in terms of translational applications (e.g. using single node controllability properties to define target region(s) for neurostimulation), should be revisited. Though theoretically intriguing, our understanding of the relationship between controllability and structural brain networks remains elusive.
2106.14982
Giacomo Cacciapaglia
Giacomo Cacciapaglia, Corentin Cot, Adele de Hoffer, Stefan Hohenegger, Francesco Sannino and Shahram Vatani
Epidemiological theory of virus variants
38 pages, 32 figures
null
10.1016/j.physa.2022.127071
LYCEN 2021-01
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
We propose a physical theory underlying the temporal evolution of competing virus variants that relies on the existence of (quasi) fixed points capturing the large time scale invariance of the dynamics. To motivate our result we first modify the time-honoured compartmental models of the SIR type to account for the existence of competing variants and then show how their evolution can be naturally re-phrased in terms of flow equations ending at quasi fixed points. As the natural next step we employ (near) scale invariance to organise the time evolution of the competing variants within the effective description of the epidemic Renormalization Group framework. We test the resulting theory against the time evolution of COVID-19 virus variants that validate the theory empirically.
[ { "created": "Mon, 28 Jun 2021 20:57:58 GMT", "version": "v1" } ]
2022-03-23
[ [ "Cacciapaglia", "Giacomo", "" ], [ "Cot", "Corentin", "" ], [ "de Hoffer", "Adele", "" ], [ "Hohenegger", "Stefan", "" ], [ "Sannino", "Francesco", "" ], [ "Vatani", "Shahram", "" ] ]
We propose a physical theory underlying the temporal evolution of competing virus variants that relies on the existence of (quasi) fixed points capturing the large time scale invariance of the dynamics. To motivate our result we first modify the time-honoured compartmental models of the SIR type to account for the existence of competing variants and then show how their evolution can be naturally re-phrased in terms of flow equations ending at quasi fixed points. As the natural next step we employ (near) scale invariance to organise the time evolution of the competing variants within the effective description of the epidemic Renormalization Group framework. We test the resulting theory against the time evolution of COVID-19 virus variants that validate the theory empirically.
0806.4662
Casey Diekman
C. O. Diekman, P. S. Sastry, K. P. Unnikrishnan
On the statistical significance of temporal firing patterns in multi-neuronal spike trains
17 pages, 7 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Repeated occurrences of serial firing sequences of a group of neurons with fixed time delays between neurons are observed in many experiments involving simultaneous recordings from multiple neurons. Such temporal patterns are potentially indicative of underlying microcircuits and it is important to know when a repeatedly occurring pattern is statistically significant. These sequences are typically identified through correlation counts, such as in the two-tape algorithm of Abeles and Gerstein. In this paper we present a method for deciding on the significance of such correlations by characterizing the influence of one neuron on another in terms of conditional probabilities and specifying our null hypothesis in terms of a bound on the conditional probabilities. This method of testing significance of correlation counts is more general than the currently available methods since under our null hypothesis we do not assume that the spiking processes of different neurons are independent. The structure of our null hypothesis also allows us to rank order the detected patterns in terms of the strength of interaction among the neurons constituting the pattern. We demonstrate our method of assessing significance on simulated spike trains involving inhomogeneous Poisson processes with strong interactions, where the correlation counts are obtained using the two-tape algorithm.
[ { "created": "Sat, 28 Jun 2008 06:14:58 GMT", "version": "v1" }, { "created": "Mon, 1 Sep 2008 14:45:30 GMT", "version": "v2" } ]
2008-09-01
[ [ "Diekman", "C. O.", "" ], [ "Sastry", "P. S.", "" ], [ "Unnikrishnan", "K. P.", "" ] ]
Repeated occurrences of serial firing sequences of a group of neurons with fixed time delays between neurons are observed in many experiments involving simultaneous recordings from multiple neurons. Such temporal patterns are potentially indicative of underlying microcircuits and it is important to know when a repeatedly occurring pattern is statistically significant. These sequences are typically identified through correlation counts, such as in the two-tape algorithm of Abeles and Gerstein. In this paper we present a method for deciding on the significance of such correlations by characterizing the influence of one neuron on another in terms of conditional probabilities and specifying our null hypothesis in terms of a bound on the conditional probabilities. This method of testing significance of correlation counts is more general than the currently available methods since under our null hypothesis we do not assume that the spiking processes of different neurons are independent. The structure of our null hypothesis also allows us to rank order the detected patterns in terms of the strength of interaction among the neurons constituting the pattern. We demonstrate our method of assessing significance on simulated spike trains involving inhomogeneous Poisson processes with strong interactions, where the correlation counts are obtained using the two-tape algorithm.
1606.09459
Jacek Urbanek PhD
Jacek K. Urbanek, Jaroslaw Harezlak, Nancy W. Glynn, Tamara Harris, Ciprian Crainiceanu, Vadim Zipunnikov
Stride variability measures derived from wrist- and hip-worn accelerometers
null
Gait & Posture 52, 2017
10.1016/j.gaitpost.2016.11.045
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many epidemiological and clinical studies use accelerometry to objectively measure physical activity using activity counts, vector magnitude, or the number of steps. These measures use just a fraction of the information in the raw accelerometry data, as they are typically summarized at the minute level. To address this problem we define and estimate two gait measures of temporal stride-to-stride variability based on raw accelerometry data: Amplitude Deviation (AD) and Phase Deviation (PD). We explore the sensitivity of our approach to on-body placement of the accelerometer by comparing hip, left wrist, and right wrist placements. We illustrate the approach by estimating AD and PD in 46 elderly participants in the Developmental Epidemiologic Cohort Study (DECOS) who wore accelerometers during a 400 meter walk test. We also show that AD and PD have a statistically significant association with gait speed and sit-to-stand test performance.
[ { "created": "Thu, 30 Jun 2016 12:25:11 GMT", "version": "v1" } ]
2016-12-16
[ [ "Urbanek", "Jacek K.", "" ], [ "Harezlak", "Jaroslaw", "" ], [ "Glynn", "Nancy W.", "" ], [ "Harris", "Tamara", "" ], [ "Crainiceanu", "Ciprian", "" ], [ "Zipunnikov", "Vadim", "" ] ]
Many epidemiological and clinical studies use accelerometry to objectively measure physical activity using activity counts, vector magnitude, or the number of steps. These measures use just a fraction of the information in the raw accelerometry data, as they are typically summarized at the minute level. To address this problem we define and estimate two gait measures of temporal stride-to-stride variability based on raw accelerometry data: Amplitude Deviation (AD) and Phase Deviation (PD). We explore the sensitivity of our approach to on-body placement of the accelerometer by comparing hip, left wrist, and right wrist placements. We illustrate the approach by estimating AD and PD in 46 elderly participants in the Developmental Epidemiologic Cohort Study (DECOS) who wore accelerometers during a 400 meter walk test. We also show that AD and PD have a statistically significant association with gait speed and sit-to-stand test performance.
2106.08120
Simone Scacchi
Edoardo Beretta, Vincenzo Capasso, Simone Scacchi, Matteo Brunetti, Matteo Montagna
Qualitative analysis of a mathematical model for Xylella fastidiosa epidemics
null
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
In Southern Italy, since 2013, there has been an ongoing Olive Quick Decline Syndrome (OQDS) outbreak, due to the bacterium Xylella fastidiosa. In a couple of previous papers, the authors have proposed a mathematical approach for identifying possible control strategies to eliminate, or at least reduce, the economic impact of such an event. The main players involved in OQDS are represented by the insect vector, Philaenus spumarius, its host plants (olive trees and weeds) and the bacterium, X. fastidiosa. A basic mathematical model has been expressed in terms of a system of ordinary differential equations; a preliminary analysis already provided interesting results about possible control strategies within an integrated pest management framework, not requiring the removal of the productive resource represented by the olive trees. The same conjectures have later been confirmed by analyzing the impact of possible spatial heterogeneities on controlling a X. fastidiosa epidemic. These encouraging facts have stimulated a more detailed and rigorous mathematical analysis of the same system, as presented in this paper. A clear picture of the possible steady states (equilibria) and their stability properties has been outlined, within a variety of different parameter scenarios, for the original spatially homogeneous ecosystem. The results obtained here confirm, in a mathematically rigorous way, what had been conjectured in the previous papers, i.e. that the removal of a suitable amount of weed biomass (the reservoir of the juvenile stages of the insect vector of X. fastidiosa) from olive orchards and surrounding areas is the most acceptable strategy to control the spread of OQDS. In addition, as expected, the adoption of more resistant olive tree cultivars has been shown to be a good strategy, though less cost-effective, in controlling the pathogen.
[ { "created": "Tue, 15 Jun 2021 13:29:51 GMT", "version": "v1" } ]
2021-06-16
[ [ "Beretta", "Edoardo", "" ], [ "Capasso", "Vincenzo", "" ], [ "Scacchi", "Simone", "" ], [ "Brunetti", "Matteo", "" ], [ "Montagna", "Matteo", "" ] ]
In Southern Italy, since 2013, there has been an ongoing Olive Quick Decline Syndrome (OQDS) outbreak, due to the bacterium Xylella fastidiosa. In a couple of previous papers, the authors have proposed a mathematical approach for identifying possible control strategies to eliminate, or at least reduce, the economic impact of such an event. The main players involved in OQDS are represented by the insect vector, Philaenus spumarius, its host plants (olive trees and weeds) and the bacterium, X. fastidiosa. A basic mathematical model has been expressed in terms of a system of ordinary differential equations; a preliminary analysis already provided interesting results about possible control strategies within an integrated pest management framework, not requiring the removal of the productive resource represented by the olive trees. The same conjectures have later been confirmed by analyzing the impact of possible spatial heterogeneities on controlling a X. fastidiosa epidemic. These encouraging facts have stimulated a more detailed and rigorous mathematical analysis of the same system, as presented in this paper. A clear picture of the possible steady states (equilibria) and their stability properties has been outlined, within a variety of different parameter scenarios, for the original spatially homogeneous ecosystem. The results obtained here confirm, in a mathematically rigorous way, what had been conjectured in the previous papers, i.e. that the removal of a suitable amount of weed biomass (the reservoir of the juvenile stages of the insect vector of X. fastidiosa) from olive orchards and surrounding areas is the most acceptable strategy to control the spread of OQDS. In addition, as expected, the adoption of more resistant olive tree cultivars has been shown to be a good strategy, though less cost-effective, in controlling the pathogen.
1207.4618
Satoru Morita
Satoru Morita and Jin Yoshimura
Analytical Solution of Metapopulation Dynamics in Stochastic Environment
Latex file, 10 pages, 3 figures
Physical Review E 86, 045102R (2012)
10.1103/PhysRevE.86.045102
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a stochastic linear discrete metapopulation model to understand the effect of risk spreading by dispersion. We calculate analytically the stable distribution of populations that live in different habitats. The result shows that the simultaneous distribution of the populations has a complicated self-similar structure, but the population at each habitat follows a log-normal distribution.
[ { "created": "Thu, 19 Jul 2012 11:41:59 GMT", "version": "v1" }, { "created": "Mon, 3 Sep 2012 15:50:22 GMT", "version": "v2" } ]
2015-11-12
[ [ "Morita", "Satoru", "" ], [ "Yoshimura", "Jin", "" ] ]
We study a stochastic linear discrete metapopulation model to understand the effect of risk spreading by dispersion. We calculate analytically the stable distribution of populations that live in different habitats. The result shows that the simultaneous distribution of the populations has a complicated self-similar structure, but the population at each habitat follows a log-normal distribution.
1610.01010
Thomas Schmidt
Thomas Schmidt
Sources of false positives and false negatives in the STATCHECK algorithm: Reply to Nuijten et al. (2016)
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
STATCHECK is an R algorithm designed to scan papers automatically for inconsistencies between test statistics and their associated p values (Nuijten et al., 2016). The goal of this comment is to point out an important and well-documented flaw in this busily applied algorithm: It cannot handle corrected p values. As a result, statistical tests applying appropriate corrections to the p value (e.g., for multiple tests, post-hoc tests, violations of assumptions, etc.) are likely to be flagged as reporting inconsistent statistics, whereas papers omitting necessary corrections are certified as correct. The STATCHECK algorithm is thus valid for only a subset of scientific papers, and conclusions about the quality or integrity of statistical reports should never be based solely on this program.
[ { "created": "Tue, 4 Oct 2016 14:19:57 GMT", "version": "v1" }, { "created": "Wed, 5 Oct 2016 09:24:20 GMT", "version": "v2" }, { "created": "Mon, 10 Oct 2016 15:19:31 GMT", "version": "v3" }, { "created": "Tue, 15 Nov 2016 13:12:33 GMT", "version": "v4" }, { "created": "Wed, 16 Nov 2016 10:37:09 GMT", "version": "v5" }, { "created": "Fri, 31 Mar 2017 12:37:36 GMT", "version": "v6" }, { "created": "Tue, 16 May 2017 09:17:21 GMT", "version": "v7" }, { "created": "Thu, 23 Nov 2017 10:01:43 GMT", "version": "v8" } ]
2017-11-27
[ [ "Schmidt", "Thomas", "" ] ]
STATCHECK is an R algorithm designed to scan papers automatically for inconsistencies between test statistics and their associated p values (Nuijten et al., 2016). The goal of this comment is to point out an important and well-documented flaw in this busily applied algorithm: It cannot handle corrected p values. As a result, statistical tests applying appropriate corrections to the p value (e.g., for multiple tests, post-hoc tests, violations of assumptions, etc.) are likely to be flagged as reporting inconsistent statistics, whereas papers omitting necessary corrections are certified as correct. The STATCHECK algorithm is thus valid for only a subset of scientific papers, and conclusions about the quality or integrity of statistical reports should never be based solely on this program.
1910.10476
Maxime Lenormand
Camille Jahel, Maxime Lenormand, Ismaila Seck, Andrea Apolloni, Ibra Toure, Coumba Faye, Baba Sall, Mbargou Lo, C\'ecile Squarzoni Diaw, Renaud Lancelot, Caroline Coste
Mapping livestock movements in Sahelian Africa
12 pages, 10 figures
Scientific Reports 10, 8339 (2020)
10.1038/s41598-020-65132-8
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the dominant livestock systems of Sahelian countries herds have to move across territories. Their mobility is often a source of conflict with farmers in the areas crossed, and helps spread diseases such as Rift Valley Fever. Knowledge of the routes followed by herds is therefore core to guiding the implementation of preventive and control measures for transboundary animal diseases, land use planning and conflict management. However, the lack of quantitative data on livestock movements, together with the high temporal and spatial variability of herd movements, has so far hampered the production of fine resolution maps of animal movements. This paper proposes a general framework for mapping potential paths for livestock movements and identifying areas of high animal passage potential for those movements. The method consists in combining the information contained in livestock mobility networks with landscape connectivity, based on different mobility conductance layers. We illustrate our approach with a livestock mobility network in Senegal and Mauritania in the 2014 dry and wet seasons.
[ { "created": "Sat, 19 Oct 2019 12:06:47 GMT", "version": "v1" }, { "created": "Wed, 20 May 2020 09:13:30 GMT", "version": "v2" } ]
2020-05-21
[ [ "Jahel", "Camille", "" ], [ "Lenormand", "Maxime", "" ], [ "Seck", "Ismaila", "" ], [ "Apolloni", "Andrea", "" ], [ "Toure", "Ibra", "" ], [ "Faye", "Coumba", "" ], [ "Sall", "Baba", "" ], [ "Lo", "Mbargou", "" ], [ "Diaw", "Cécile Squarzoni", "" ], [ "Lancelot", "Renaud", "" ], [ "Coste", "Caroline", "" ] ]
In the dominant livestock systems of Sahelian countries herds have to move across territories. Their mobility is often a source of conflict with farmers in the areas crossed, and helps spread diseases such as Rift Valley Fever. Knowledge of the routes followed by herds is therefore core to guiding the implementation of preventive and control measures for transboundary animal diseases, land use planning and conflict management. However, the lack of quantitative data on livestock movements, together with the high temporal and spatial variability of herd movements, has so far hampered the production of fine resolution maps of animal movements. This paper proposes a general framework for mapping potential paths for livestock movements and identifying areas of high animal passage potential for those movements. The method consists in combining the information contained in livestock mobility networks with landscape connectivity, based on different mobility conductance layers. We illustrate our approach with a livestock mobility network in Senegal and Mauritania in the 2014 dry and wet seasons.
2011.08741
Delfim F. M. Torres
Ana P. Lemos-Paiao, Cristiana J. Silva, Delfim F. M. Torres
A New Compartmental Epidemiological Model for COVID-19 with a Case Study of Portugal
This is a preprint of a paper whose final and definite form is published by 'Ecological Complexity' (ISSN: 1476-945X). Paper Submitted 05/June/2020; Revised 24/July/2020, 08/Sept/2020 and 02/Nov/2020; Accepted 17/Nov/2020
Ecological Complexity 44 (2020) Art. 100885, 8 pp
10.1016/j.ecocom.2020.100885
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a compartmental mathematical model for the spread of the COVID-19 disease, showing its usefulness with respect to the pandemic in Portugal, from the first recorded case in the country until the end of the three states of emergency. New results include the compartmental model, described by a system of seven ordinary differential equations; proof of positivity and boundedness of solutions; investigation of equilibrium points and their stability analysis; computation of the basic reproduction number; and numerical simulations with official real data from the Portuguese health authorities. Besides being completely new, the proposed model describes quite well the spread of COVID-19 in Portugal, fitting simultaneously not only the number of active infected individuals but also the number of hospitalized individuals, with $L^2$ errors of $9.2152e-04$ and $1.6136e-04$, respectively, with respect to the initial population. Such results are very important from a practical point of view, and far from trivial from a mathematical perspective. Moreover, the obtained value for the basic reproduction number is in agreement with the one given by the Portuguese authorities at the end of the emergency states.
[ { "created": "Tue, 17 Nov 2020 16:19:37 GMT", "version": "v1" } ]
2020-11-30
[ [ "Lemos-Paiao", "Ana P.", "" ], [ "Silva", "Cristiana J.", "" ], [ "Torres", "Delfim F. M.", "" ] ]
We propose a compartmental mathematical model for the spread of the COVID-19 disease, showing its usefulness with respect to the pandemic in Portugal, from the first recorded case in the country until the end of the three states of emergency. New results include the compartmental model, described by a system of seven ordinary differential equations; proof of positivity and boundedness of solutions; investigation of equilibrium points and their stability analysis; computation of the basic reproduction number; and numerical simulations with official real data from the Portuguese health authorities. Besides being completely new, the proposed model describes quite well the spread of COVID-19 in Portugal, fitting simultaneously not only the number of active infected individuals but also the number of hospitalized individuals, with $L^2$ errors of $9.2152e-04$ and $1.6136e-04$, respectively, with respect to the initial population. Such results are very important from a practical point of view, and far from trivial from a mathematical perspective. Moreover, the obtained value for the basic reproduction number is in agreement with the one given by the Portuguese authorities at the end of the emergency states.
2407.06052
Dominik Tschimmel
Dominik Tschimmel, Momina Saeed, Maria Milani, Steffen Waldherr, Tim Hucho
Protein-environment-sensitive computational epitope accessibility analysis from antibody dose-response data
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Antibodies are widely used in the life sciences and in medical therapy. Yet, broadly applicable methods are missing to determine, in the biological system of choice, antibody specificity and its quantitative contribution to e.g. immunofluorescence stainings. As a result, antibody-based data often need to be viewed with caution. Here, we present a simple-to-use approach to characterize and quantify antibody binding properties directly in the system of choice. We determine an epitope accessibility distribution in the system of interest based on a computational analysis of antibody-dilution immunofluorescence stainings. This allows the selection of specific antibodies, the choice of a dilution to maximize signal specificity, and an improvement of signal quantification. It further expands the scope of antibody-based imaging to detect changes of the subcellular nano-environment and allows for antibody multiplexing.
[ { "created": "Mon, 8 Jul 2024 15:54:26 GMT", "version": "v1" } ]
2024-07-09
[ [ "Tschimmel", "Dominik", "" ], [ "Saeed", "Momina", "" ], [ "Milani", "Maria", "" ], [ "Waldherr", "Steffen", "" ], [ "Hucho", "Tim", "" ] ]
Antibodies are widely used in the life sciences and in medical therapy. Yet, broadly applicable methods are missing to determine, in the biological system of choice, antibody specificity and its quantitative contribution to e.g. immunofluorescence stainings. As a result, antibody-based data often need to be viewed with caution. Here, we present a simple-to-use approach to characterize and quantify antibody binding properties directly in the system of choice. We determine an epitope accessibility distribution in the system of interest based on a computational analysis of antibody-dilution immunofluorescence stainings. This allows the selection of specific antibodies, the choice of a dilution to maximize signal specificity, and an improvement of signal quantification. It further expands the scope of antibody-based imaging to detect changes of the subcellular nano-environment and allows for antibody multiplexing.
1710.05098
Songting Li
Songting Li, Nan Liu, Xiaohui Zhang, Douglas Zhou, and David Cai
Determination of Effective Synaptic Conductances Using Somatic Voltage Clamp
null
null
null
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The interplay between excitatory and inhibitory neurons imparts rich functions of the brain. To understand the underlying synaptic mechanisms, a fundamental approach is to study the dynamics of excitatory and inhibitory conductances of each neuron. The traditional method of determining conductance employs the synaptic current-voltage (I-V) relation obtained via voltage clamp. Using theoretical analysis, electrophysiological experiments, and realistic simulations, here we demonstrate that the traditional method conceptually fails to measure the conductance due to the neglect of a nonlinear interaction between the clamp current and the synaptic current. Consequently, it incurs substantial measurement error, even giving rise to unphysically negative conductance as observed in experiments. To elucidate synaptic impact on neuronal information processing, we introduce the concept of effective conductance and propose a framework to determine it accurately. Our work suggests re-examination of previous studies involving conductance measurement and provides a reliable approach to assess synaptic influence on neuronal computation.
[ { "created": "Fri, 13 Oct 2017 23:10:20 GMT", "version": "v1" } ]
2017-10-17
[ [ "Li", "Songting", "" ], [ "Liu", "Nan", "" ], [ "Zhang", "Xiaohui", "" ], [ "Zhou", "Douglas", "" ], [ "Cai", "David", "" ] ]
The interplay between excitatory and inhibitory neurons imparts rich functions of the brain. To understand the underlying synaptic mechanisms, a fundamental approach is to study the dynamics of excitatory and inhibitory conductances of each neuron. The traditional method of determining conductance employs the synaptic current-voltage (I-V) relation obtained via voltage clamp. Using theoretical analysis, electrophysiological experiments, and realistic simulations, here we demonstrate that the traditional method conceptually fails to measure the conductance due to the neglect of a nonlinear interaction between the clamp current and the synaptic current. Consequently, it incurs substantial measurement error, even giving rise to unphysically negative conductance as observed in experiments. To elucidate synaptic impact on neuronal information processing, we introduce the concept of effective conductance and propose a framework to determine it accurately. Our work suggests re-examination of previous studies involving conductance measurement and provides a reliable approach to assess synaptic influence on neuronal computation.
2206.11769
Will Greedy
Will Greedy, Heng Wei Zhu, Joseph Pemberton, Jack Mellor and Rui Ponte Costa
Single-phase deep learning in cortico-cortical networks
Accepted to 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 22 pages, 9 figures, 5 tables
null
null
null
q-bio.NC cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The error-backpropagation (backprop) algorithm remains the most common solution to the credit assignment problem in artificial neural networks. In neuroscience, it is unclear whether the brain could adopt a similar strategy to correctly modify its synapses. Recent models have attempted to bridge this gap while being consistent with a range of experimental observations. However, these models are either unable to effectively backpropagate error signals across multiple layers or require a multi-phase learning process, neither of which are reminiscent of learning in the brain. Here, we introduce a new model, Bursting Cortico-Cortical Networks (BurstCCN), which solves these issues by integrating known properties of cortical networks, namely bursting activity, short-term plasticity (STP) and dendrite-targeting interneurons. BurstCCN relies on burst multiplexing via connection-type-specific STP to propagate backprop-like error signals within deep cortical networks. These error signals are encoded at distal dendrites and induce burst-dependent plasticity as a result of excitatory-inhibitory top-down inputs. First, we demonstrate that our model can effectively backpropagate errors through multiple layers using a single-phase learning process. Next, we show both empirically and analytically that learning in our model approximates backprop-derived gradients. Finally, we demonstrate that our model is capable of learning complex image classification tasks (MNIST and CIFAR-10). Overall, our results suggest that cortical features across sub-cellular, cellular, microcircuit and systems levels jointly underlie single-phase efficient deep learning in the brain.
[ { "created": "Thu, 23 Jun 2022 15:10:57 GMT", "version": "v1" }, { "created": "Mon, 24 Oct 2022 15:32:18 GMT", "version": "v2" } ]
2022-10-25
[ [ "Greedy", "Will", "" ], [ "Zhu", "Heng Wei", "" ], [ "Pemberton", "Joseph", "" ], [ "Mellor", "Jack", "" ], [ "Costa", "Rui Ponte", "" ] ]
The error-backpropagation (backprop) algorithm remains the most common solution to the credit assignment problem in artificial neural networks. In neuroscience, it is unclear whether the brain could adopt a similar strategy to correctly modify its synapses. Recent models have attempted to bridge this gap while being consistent with a range of experimental observations. However, these models are either unable to effectively backpropagate error signals across multiple layers or require a multi-phase learning process, neither of which are reminiscent of learning in the brain. Here, we introduce a new model, Bursting Cortico-Cortical Networks (BurstCCN), which solves these issues by integrating known properties of cortical networks, namely bursting activity, short-term plasticity (STP) and dendrite-targeting interneurons. BurstCCN relies on burst multiplexing via connection-type-specific STP to propagate backprop-like error signals within deep cortical networks. These error signals are encoded at distal dendrites and induce burst-dependent plasticity as a result of excitatory-inhibitory top-down inputs. First, we demonstrate that our model can effectively backpropagate errors through multiple layers using a single-phase learning process. Next, we show both empirically and analytically that learning in our model approximates backprop-derived gradients. Finally, we demonstrate that our model is capable of learning complex image classification tasks (MNIST and CIFAR-10). Overall, our results suggest that cortical features across sub-cellular, cellular, microcircuit and systems levels jointly underlie single-phase efficient deep learning in the brain.
0806.2888
Joel Miller
Joel C. Miller
Spread of infectious diseases through clustered populations
Updated version
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Networks of person-person contacts form the substrate along which infectious diseases spread. Most network-based studies of the spread focus on the impact of variations in degree (the number of contacts an individual has). However, other effects such as clustering, variations in infectiousness or susceptibility, or variations in closeness of contacts may play a significant role. We develop analytic techniques to predict how these effects alter the growth rate, probability, and size of epidemics and validate the predictions with a realistic social network. We find that (for given degree distribution and average transmissibility) clustering is the dominant factor controlling the growth rate, heterogeneity in infectiousness is the dominant factor controlling the probability of an epidemic, and heterogeneity in susceptibility is the dominant factor controlling the size of an epidemic. Edge weights (measuring closeness or duration of contacts) have impact only if correlations exist between different edges. Combined, these effects can play a minor role in reinforcing one another, with the impact of clustering largest when the population is maximally heterogeneous or if the closer contacts are also strongly clustered. Our most significant contribution is a systematic way to address clustering in infectious disease models, and our results have a number of implications for the design of interventions.
[ { "created": "Wed, 18 Jun 2008 02:57:33 GMT", "version": "v1" }, { "created": "Mon, 15 Dec 2008 02:17:39 GMT", "version": "v2" } ]
2008-12-15
[ [ "Miller", "Joel C.", "" ] ]
Networks of person-person contacts form the substrate along which infectious diseases spread. Most network-based studies of the spread focus on the impact of variations in degree (the number of contacts an individual has). However, other effects such as clustering, variations in infectiousness or susceptibility, or variations in closeness of contacts may play a significant role. We develop analytic techniques to predict how these effects alter the growth rate, probability, and size of epidemics and validate the predictions with a realistic social network. We find that (for given degree distribution and average transmissibility) clustering is the dominant factor controlling the growth rate, heterogeneity in infectiousness is the dominant factor controlling the probability of an epidemic, and heterogeneity in susceptibility is the dominant factor controlling the size of an epidemic. Edge weights (measuring closeness or duration of contacts) have impact only if correlations exist between different edges. Combined, these effects can play a minor role in reinforcing one another, with the impact of clustering largest when the population is maximally heterogeneous or if the closer contacts are also strongly clustered. Our most significant contribution is a systematic way to address clustering in infectious disease models, and our results have a number of implications for the design of interventions.
1906.05471
Masahiko Ueda
Masahiko Ueda
Absolute negative mobility in evolution
13 pages, 13 figures
J. Phys. A: Math. Theor. 53, 075601 (2020)
10.1088/1751-8121/ab6a6a
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate a population-genetic model with a temporally-fluctuating sawtooth fitness landscape. We numerically show that a counter-intuitive behavior occurs where the rate of evolution of the system decreases as selection pressure increases from zero. This phenomenon is understood by analogy with absolute negative mobility in particle flow. A phenomenological explanation about the direction of evolution is also provided.
[ { "created": "Thu, 13 Jun 2019 04:04:21 GMT", "version": "v1" }, { "created": "Wed, 10 Jul 2019 00:55:56 GMT", "version": "v2" }, { "created": "Fri, 17 Jan 2020 02:50:25 GMT", "version": "v3" } ]
2020-01-28
[ [ "Ueda", "Masahiko", "" ] ]
We investigate a population-genetic model with a temporally-fluctuating sawtooth fitness landscape. We numerically show that a counter-intuitive behavior occurs where the rate of evolution of the system decreases as selection pressure increases from zero. This phenomenon is understood by analogy with absolute negative mobility in particle flow. A phenomenological explanation about the direction of evolution is also provided.
1808.06603
Ali Oskooei
Ali Oskooei, Matteo Manica, Roland Mathis and Maria Rodriguez Martinez
Network-based Biased Tree Ensembles (NetBiTE) for Drug Sensitivity Prediction and Drug Sensitivity Biomarker Identification in Cancer
36 pages, 5 figures, 3 supplementary figures
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the Network-based Biased Tree Ensembles (NetBiTE) method for drug sensitivity prediction and drug sensitivity biomarker identification in cancer using a combination of prior knowledge and gene expression data. Our devised method consists of a biased tree ensemble that is built according to a probabilistic bias weight distribution. The bias weight distribution is obtained by assigning high weights to the drug targets and propagating the assigned weights over a protein-protein interaction network such as STRING. The propagation of weights defines neighborhoods of influence around the drug targets and as such simulates the spread of perturbations within the cell following drug administration. Using a synthetic dataset, we showcase how application of biased tree ensembles (BiTE) results in significant accuracy gains at a much lower computational cost compared to the unbiased random forests (RF) algorithm. We then apply NetBiTE to the Genomics of Drug Sensitivity in Cancer (GDSC) dataset and demonstrate that NetBiTE outperforms RF in predicting IC50 drug sensitivity, but only for drugs that target membrane receptor pathways (MRPs): RTK, EGFR and IGFR signaling pathways. Based on the NetBiTE results, we propose that for drugs that inhibit MRPs, the expression of target genes prior to drug administration is a biomarker for IC50 drug sensitivity following drug administration. We further verify and reinforce this proposition through control studies on PI3K/MTOR signaling pathway inhibitors, a drug category that does not target MRPs, and through assignment of dummy targets to MRP-inhibiting drugs and investigation of the variation in NetBiTE accuracy.
[ { "created": "Sat, 18 Aug 2018 14:43:20 GMT", "version": "v1" }, { "created": "Fri, 26 Apr 2019 15:33:38 GMT", "version": "v2" } ]
2019-04-29
[ [ "Oskooei", "Ali", "" ], [ "Manica", "Matteo", "" ], [ "Mathis", "Roland", "" ], [ "Martinez", "Maria Rodriguez", "" ] ]
We present the Network-based Biased Tree Ensembles (NetBiTE) method for drug sensitivity prediction and drug sensitivity biomarker identification in cancer using a combination of prior knowledge and gene expression data. Our devised method consists of a biased tree ensemble that is built according to a probabilistic bias weight distribution. The bias weight distribution is obtained by assigning high weights to the drug targets and propagating the assigned weights over a protein-protein interaction network such as STRING. The propagation of weights defines neighborhoods of influence around the drug targets and as such simulates the spread of perturbations within the cell following drug administration. Using a synthetic dataset, we showcase how application of biased tree ensembles (BiTE) results in significant accuracy gains at a much lower computational cost compared to the unbiased random forests (RF) algorithm. We then apply NetBiTE to the Genomics of Drug Sensitivity in Cancer (GDSC) dataset and demonstrate that NetBiTE outperforms RF in predicting IC50 drug sensitivity, but only for drugs that target membrane receptor pathways (MRPs): RTK, EGFR and IGFR signaling pathways. Based on the NetBiTE results, we propose that for drugs that inhibit MRPs, the expression of target genes prior to drug administration is a biomarker for IC50 drug sensitivity following drug administration. We further verify and reinforce this proposition through control studies on PI3K/MTOR signaling pathway inhibitors, a drug category that does not target MRPs, and through assignment of dummy targets to MRP-inhibiting drugs and investigation of the variation in NetBiTE accuracy.
2304.14913
Jiangjiang Cheng
Jiangjiang Cheng, Wenjun Mei, Wei Su, Ge Chen
Evolutionary Games on Networks: Phase Transition, Quasi-equilibrium, and Mathematical Principles
11 pages, 7 figures
Physica A 611 (2023) 128447
10.1016/j.physa.2023.128447
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The stable cooperation ratio of spatial evolutionary games has been widely studied using simulations or approximate analysis methods. However, such ``stable'' cooperation ratios obtained via approximate methods sometimes might not actually be stable, but instead correspond to quasi-equilibria. We find that various classic game models, such as the evolutionary snowdrift game, the evolutionary prisoner's dilemma, and the spatial public goods game on square lattices and scale-free networks, exhibit a phase transition in convergence time to the equilibrium state. Moreover, mathematical principles are provided to explain the phase transition of convergence time and the quasi-equilibrium of the cooperation ratio. The findings explain why and when cooperation and defection exhibit long-term coexistence.
[ { "created": "Wed, 26 Apr 2023 07:40:14 GMT", "version": "v1" } ]
2023-05-01
[ [ "Cheng", "Jiangjiang", "" ], [ "Mei", "Wenjun", "" ], [ "Su", "Wei", "" ], [ "Chen", "Ge", "" ] ]
The stable cooperation ratio of spatial evolutionary games has been widely studied using simulations or approximate analysis methods. However, such ``stable'' cooperation ratios obtained via approximate methods sometimes might not actually be stable, but instead correspond to quasi-equilibria. We find that various classic game models, such as the evolutionary snowdrift game, the evolutionary prisoner's dilemma, and the spatial public goods game on square lattices and scale-free networks, exhibit a phase transition in convergence time to the equilibrium state. Moreover, mathematical principles are provided to explain the phase transition of convergence time and the quasi-equilibrium of the cooperation ratio. The findings explain why and when cooperation and defection exhibit long-term coexistence.
1312.6321
Bradford Taylor
Bradford P. Taylor, Michael H. Cortez, and Joshua S. Weitz
The virus of my virus is my friend: ecological effects of virophage with alternative modes of coinfection
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Virophages are viruses that rely on the replication machinery of other viruses to reproduce within eukaryotic hosts. Two different modes of coinfection have been posited based on experimental observation. In one mode, the virophage and virus enter the host independently. In the other mode, the virophage adheres to the virus so both virophage and virus enter the host together. Here we ask: what are the ecological effects of these different modes of coinfection? In particular, what ecological effects are common to both infection modes, and what are the differences particular to each mode? We develop a pair of biophysically motivated ODE models of viral-host population dynamics, corresponding to dynamics arising from each mode of infection. We find both modes of coinfection allow for the coexistence of the virophage, virus, and host either at a stable fixed point or through cyclical dynamics. In both models, virophage tend to be the most abundant population and their presence always reduces the viral abundance and increases the host abundance. However, we do find qualitative differences between models. For example, via extensive sampling of biologically relevant parameter space, we only observe bistability when the virophage and virus enter the host together. We discuss how such differences may be leveraged to help identify modes of infection in natural environments from population level data.
[ { "created": "Sat, 21 Dec 2013 23:42:37 GMT", "version": "v1" }, { "created": "Mon, 12 May 2014 16:32:19 GMT", "version": "v2" } ]
2014-05-13
[ [ "Taylor", "Bradford P.", "" ], [ "Cortez", "Michael H.", "" ], [ "Weitz", "Joshua S.", "" ] ]
Virophages are viruses that rely on the replication machinery of other viruses to reproduce within eukaryotic hosts. Two different modes of coinfection have been posited based on experimental observation. In one mode, the virophage and virus enter the host independently. In the other mode, the virophage adheres to the virus so both virophage and virus enter the host together. Here we ask: what are the ecological effects of these different modes of coinfection? In particular, what ecological effects are common to both infection modes, and what are the differences particular to each mode? We develop a pair of biophysically motivated ODE models of viral-host population dynamics, corresponding to dynamics arising from each mode of infection. We find both modes of coinfection allow for the coexistence of the virophage, virus, and host either at a stable fixed point or through cyclical dynamics. In both models, virophage tend to be the most abundant population and their presence always reduces the viral abundance and increases the host abundance. However, we do find qualitative differences between models. For example, via extensive sampling of biologically relevant parameter space, we only observe bistability when the virophage and virus enter the host together. We discuss how such differences may be leveraged to help identify modes of infection in natural environments from population level data.
1203.0204
James Degnan
Tanja Stadler and James H. Degnan
A polynomial time algorithm for calculating the probability of a ranked gene tree given a species tree
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we provide a polynomial time algorithm to calculate the probability of a {\it ranked} gene tree topology for a given species tree, where a ranked tree topology is a tree topology with the internal vertices being ordered. The probability of a gene tree topology can thus be calculated in polynomial time if the number of orderings of the internal vertices is a polynomial number. However, the complexity of calculating the probability of a gene tree topology with an exponential number of rankings for a given species tree remains unknown.
[ { "created": "Thu, 1 Mar 2012 14:53:16 GMT", "version": "v1" } ]
2012-03-02
[ [ "Stadler", "Tanja", "" ], [ "Degnan", "James H.", "" ] ]
In this paper, we provide a polynomial time algorithm to calculate the probability of a {\it ranked} gene tree topology for a given species tree, where a ranked tree topology is a tree topology with the internal vertices being ordered. The probability of a gene tree topology can thus be calculated in polynomial time if the number of orderings of the internal vertices is a polynomial number. However, the complexity of calculating the probability of a gene tree topology with an exponential number of rankings for a given species tree remains unknown.
1005.4142
Federico Falcon
Federico Falcon
A natural mechanism for l-homochiralization of prebiotic amino acids
4 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a mechanism that explains, in a simple and natural way, the l-homochiralization of prebiotic amino acids in a volume of water where a geothermal gradient exists.
[ { "created": "Sat, 22 May 2010 17:28:37 GMT", "version": "v1" }, { "created": "Thu, 3 Feb 2011 00:55:40 GMT", "version": "v2" } ]
2011-02-04
[ [ "Falcon", "Federico", "" ] ]
We propose a mechanism that explains, in a simple and natural way, the l-homochiralization of prebiotic amino acids in a volume of water where a geothermal gradient exists.
q-bio/0405014
Graziano Vernizzi
G. Vernizzi, H. Orland, A. Zee
Prediction of RNA pseudoknots by Monte Carlo simulations
22 pages, 14 figures
null
null
SPhT-T04/061
q-bio.BM cond-mat.soft
null
In this paper we consider the problem of RNA folding with pseudoknots. We use a graphical representation in which the secondary structures are described by planar diagrams. Pseudoknots are identified as non-planar diagrams. We analyze the non-planar topologies of RNA structures and propose a classification of RNA pseudoknots according to the minimal genus of the surface on which the RNA structure can be embedded. This classification provides a simple and natural way to tackle the problem of RNA folding prediction in the presence of pseudoknots. Based on that approach, we describe a Monte Carlo algorithm for the prediction of pseudoknots in an RNA molecule.
[ { "created": "Wed, 19 May 2004 15:06:25 GMT", "version": "v1" } ]
2007-05-23
[ [ "Vernizzi", "G.", "" ], [ "Orland", "H.", "" ], [ "Zee", "A.", "" ] ]
In this paper we consider the problem of RNA folding with pseudoknots. We use a graphical representation in which the secondary structures are described by planar diagrams. Pseudoknots are identified as non-planar diagrams. We analyze the non-planar topologies of RNA structures and propose a classification of RNA pseudoknots according to the minimal genus of the surface on which the RNA structure can be embedded. This classification provides a simple and natural way to tackle the problem of RNA folding prediction in the presence of pseudoknots. Based on that approach, we describe a Monte Carlo algorithm for the prediction of pseudoknots in an RNA molecule.
2005.06758
Samuel Cho
Samuel Cho
Mean-Field Game Analysis of SIR Model with Social Distancing
17 pages, 8 figures
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The current COVID-19 pandemic has proven that proper control and prevention of infectious disease require creating and enforcing the appropriate public policies. One critical policy imposed by the policymakers is encouraging the population to practice social distancing (i.e. controlling the contact rate among the population). Here we pose a mean-field game model of individuals each choosing a dynamic strategy of making contacts, given the trade-off of gaining utility but also risking infection from additional contacts. We compute and compare the mean-field equilibrium (MFE) strategy, which assumes each individual acting selfishly to maximize its own utility, to the socially optimal strategy, which maximizes the total utility of the population. We prove that the optimal decision of the infected is always to make more contacts than the level at which it would be socially optimal, which reinforces the important role of public policy to reduce contacts of the infected (e.g. quarantining, sick paid leave). Additionally, we include cost to incentivize people to change strategies, when computing the socially optimal strategies. We find that with this cost, policies reducing contacts of the infected should be further enforced after the peak of the epidemic has passed. Lastly, we compute the price of anarchy (PoA) of this system, to understand the conditions under which large discrepancies between the MFE and socially optimal strategies arise, which is when intervening public policy would be most effective.
[ { "created": "Thu, 14 May 2020 07:04:51 GMT", "version": "v1" }, { "created": "Thu, 3 Sep 2020 10:39:55 GMT", "version": "v2" } ]
2020-09-04
[ [ "Cho", "Samuel", "" ] ]
The current COVID-19 pandemic has proven that proper control and prevention of infectious disease require creating and enforcing the appropriate public policies. One critical policy imposed by the policymakers is encouraging the population to practice social distancing (i.e. controlling the contact rate among the population). Here we pose a mean-field game model of individuals each choosing a dynamic strategy of making contacts, given the trade-off of gaining utility but also risking infection from additional contacts. We compute and compare the mean-field equilibrium (MFE) strategy, which assumes each individual acting selfishly to maximize its own utility, to the socially optimal strategy, which maximizes the total utility of the population. We prove that the optimal decision of the infected is always to make more contacts than the level at which it would be socially optimal, which reinforces the important role of public policy to reduce contacts of the infected (e.g. quarantining, sick paid leave). Additionally, we include cost to incentivize people to change strategies, when computing the socially optimal strategies. We find that with this cost, policies reducing contacts of the infected should be further enforced after the peak of the epidemic has passed. Lastly, we compute the price of anarchy (PoA) of this system, to understand the conditions under which large discrepancies between the MFE and socially optimal strategies arise, which is when intervening public policy would be most effective.
2312.10916
Julien Dirani
Julien Dirani and Liina Pylkk\"anen
MEG Evidence That Modality-Independent Conceptual Representations Encode Visual but Not Lexical Representations
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The semantic knowledge stored in our brains can be accessed from different stimulus modalities. For example, a picture of a cat and the word "cat" both engage similar conceptual representations. While existing research has found evidence for modality-independent representations, their content remains unknown. Modality-independent representations could be abstract, or they might be perceptual or even lexical in nature. We used a novel approach combining word/picture cross-condition decoding with neural network classifiers that learned latent modality-independent representations from MEG data. We then compared these representations to models representing semantic, sensory, and lexical features. Results show that modality-independent representations are not strictly amodal; rather, they also contain visual representations. There was no evidence that lexical properties contributed to the representation of modality-independent concepts. These findings support the notion that perceptual processes play a fundamental role in encoding modality-independent conceptual representations. Conversely, lexical representations did not appear to partake in modality-independent semantic knowledge.
[ { "created": "Mon, 18 Dec 2023 03:50:08 GMT", "version": "v1" }, { "created": "Tue, 26 Dec 2023 10:03:56 GMT", "version": "v2" } ]
2023-12-27
[ [ "Dirani", "Julien", "" ], [ "Pylkkänen", "Liina", "" ] ]
The semantic knowledge stored in our brains can be accessed from different stimulus modalities. For example, a picture of a cat and the word "cat" both engage similar conceptual representations. While existing research has found evidence for modality-independent representations, their content remains unknown. Modality-independent representations could be abstract, or they might be perceptual or even lexical in nature. We used a novel approach combining word/picture cross-condition decoding with neural network classifiers that learned latent modality-independent representations from MEG data. We then compared these representations to models representing semantic, sensory, and lexical features. Results show that modality-independent representations are not strictly amodal; rather, they also contain visual representations. There was no evidence that lexical properties contributed to the representation of modality-independent concepts. These findings support the notion that perceptual processes play a fundamental role in encoding modality-independent conceptual representations. Conversely, lexical representations did not appear to partake in modality-independent semantic knowledge.
0911.1499
Giambattista Giacomin
Lorenzo Bertini, Giambattista Giacomin, Khashayar Pakdaman
Dynamical aspects of mean field plane rotators and the Kuramoto model
18 pages, 1 figure
null
10.1007/s10955-009-9908-9
null
q-bio.NC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Kuramoto model has been introduced in order to describe synchronization phenomena observed in groups of cells, individuals, circuits, etc. We look at the Kuramoto model with white noise forces: in mathematical terms it is a set of N oscillators, each driven by an independent Brownian motion with a constant drift; that is, each oscillator has its own frequency, which, in general, changes from one oscillator to another (these frequencies are usually taken to be random and they may be viewed as a quenched disorder). The interactions between oscillators are of long range type (mean field). We review some results on the Kuramoto model from a statistical mechanics standpoint: we give in particular necessary and sufficient conditions for reversibility, and we point out a formal analogy, in the N to infinity limit, with local mean field models with conservative dynamics (an analogy that is exploited to identify in particular a Lyapunov functional in the reversible set-up). We then focus on the reversible Kuramoto model with sinusoidal interactions in the N to infinity limit and analyze the stability of the non-trivial stationary profiles arising when the interaction parameter K is larger than its critical value K_c. We provide an analysis of the linear operator describing the time evolution in a neighborhood of the synchronized profile: we exhibit a Hilbert space in which this operator has a self-adjoint extension and we establish, as our main result, a spectral gap inequality for every K>K_c.
[ { "created": "Sun, 8 Nov 2009 09:00:15 GMT", "version": "v1" } ]
2015-05-14
[ [ "Bertini", "Lorenzo", "" ], [ "Giacomin", "Giambattista", "" ], [ "Pakdaman", "Khashayar", "" ] ]
The Kuramoto model has been introduced in order to describe synchronization phenomena observed in groups of cells, individuals, circuits, etc. We look at the Kuramoto model with white noise forces: in mathematical terms it is a set of N oscillators, each driven by an independent Brownian motion with a constant drift; that is, each oscillator has its own frequency, which, in general, changes from one oscillator to another (these frequencies are usually taken to be random and they may be viewed as a quenched disorder). The interactions between oscillators are of long range type (mean field). We review some results on the Kuramoto model from a statistical mechanics standpoint: we give in particular necessary and sufficient conditions for reversibility, and we point out a formal analogy, in the N to infinity limit, with local mean field models with conservative dynamics (an analogy that is exploited to identify in particular a Lyapunov functional in the reversible set-up). We then focus on the reversible Kuramoto model with sinusoidal interactions in the N to infinity limit and analyze the stability of the non-trivial stationary profiles arising when the interaction parameter K is larger than its critical value K_c. We provide an analysis of the linear operator describing the time evolution in a neighborhood of the synchronized profile: we exhibit a Hilbert space in which this operator has a self-adjoint extension and we establish, as our main result, a spectral gap inequality for every K>K_c.
1804.00841
Yann Ponty
Stefan Hammer, Yann Ponty (LIX, AMIBIO), Wei Wang (LRI), Sebastian Will
Fixed-Parameter Tractable Sampling for RNA Design with Multiple Target Structures
null
RECOMB 2018 -- 22nd Annual International Conference on Research in Computational Molecular Biology, Apr 2018, Paris, France. 2018, http://recm2018.fr
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The design of multi-stable RNA molecules has important applications in biology, medicine, and biotechnology. Synthetic design approaches profit strongly from effective in-silico methods, which can tremendously impact their cost and feasibility. We revisit a central ingredient of most in-silico design methods: the sampling of sequences for the design of multi-target structures, possibly including pseudoknots. For this task, we present an efficient, tree-decomposition-based algorithm. Our fixed-parameter tractable approach is underpinned by establishing the #P-hardness of uniform sampling. Modeling the problem as a constraint network, our program supports generic Boltzmann-weighted sampling for arbitrary additive RNA energy models; this enables the generation of RNA sequences meeting specific goals like expected free energies or GC-content. Finally, we empirically study general properties of the approach and generate biologically relevant multi-target Boltzmann-weighted designs for a common design benchmark. Generating seed sequences with our program, we demonstrate significant improvements over the previously best multi-target sampling strategy (uniform sampling). Our software is freely available at: https://github.com/yannponty/RNARedPrint
[ { "created": "Tue, 3 Apr 2018 06:38:44 GMT", "version": "v1" } ]
2018-06-24
[ [ "Hammer", "Stefan", "", "LIX, AMIBIO" ], [ "Ponty", "Yann", "", "LIX, AMIBIO" ], [ "Wang", "Wei", "", "LRI" ], [ "Will", "Sebastian", "" ] ]
The design of multi-stable RNA molecules has important applications in biology, medicine, and biotechnology. Synthetic design approaches profit strongly from effective in-silico methods, which can tremendously impact their cost and feasibility. We revisit a central ingredient of most in-silico design methods: the sampling of sequences for the design of multi-target structures, possibly including pseudoknots. For this task, we present an efficient, tree-decomposition-based algorithm. Our fixed-parameter tractable approach is underpinned by establishing the #P-hardness of uniform sampling. Modeling the problem as a constraint network, our program supports generic Boltzmann-weighted sampling for arbitrary additive RNA energy models; this enables the generation of RNA sequences meeting specific goals like expected free energies or GC-content. Finally, we empirically study general properties of the approach and generate biologically relevant multi-target Boltzmann-weighted designs for a common design benchmark. Generating seed sequences with our program, we demonstrate significant improvements over the previously best multi-target sampling strategy (uniform sampling). Our software is freely available at: https://github.com/yannponty/RNARedPrint
1510.04730
Yvinec Romain M.
Romain Yvinec, Samuel Bernard, Erwan Hingant, Laurent Pujo-Menjouet
First passage times in homogeneous nucleation: dependence on the total number of particles
15 pages, 9 figures
null
10.1063/1.4940033
null
q-bio.BM math-ph math.MP math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by nucleation and molecular aggregation in physical, chemical and biological settings, we present an extension to a thorough analysis of the stochastic self-assembly of a fixed number of identical particles in a finite volume. We study the statistics of the time required for maximal clusters to be completed, starting from a purely monomeric particle configuration. For finite volume, we extend previous analytical approaches to the case of arbitrary size-dependent aggregation and fragmentation kinetic rates. For larger volumes, we develop a scaling framework to study the behavior of the first assembly time as a function of the total quantity of particles. We find that the mean time to first completion of a maximum-sized cluster may, surprisingly, have a very weak dependency on the total number of particles. We highlight how the higher statistics (variance, distribution) of the first passage time may still help to infer key parameters (such as the size of the maximum cluster) from data. Last but not least, we present a framework to quantify the formation of clusters of macroscopic size, whose formation is (asymptotically) very unlikely and occurs as a large deviation phenomenon from the mean-field limit. We argue that this framework is suitable to describe phase transition phenomena, as inherently infrequent stochastic processes, in contrast to classical nucleation theory.
[ { "created": "Thu, 15 Oct 2015 22:27:10 GMT", "version": "v1" } ]
2016-02-17
[ [ "Yvinec", "Romain", "" ], [ "Bernard", "Samuel", "" ], [ "Hingant", "Erwan", "" ], [ "Pujo-Menjouet", "Laurent", "" ] ]
Motivated by nucleation and molecular aggregation in physical, chemical and biological settings, we present an extension to a thorough analysis of the stochastic self-assembly of a fixed number of identical particles in a finite volume. We study the statistics of the time required for maximal clusters to be completed, starting from a pure-monomeric particle configuration. For finite volumes, we extend previous analytical approaches to the case of arbitrary size-dependent aggregation and fragmentation kinetic rates. For larger volumes, we develop a scaling framework to study the behavior of the first assembly time as a function of the total quantity of particles. We find that the mean time to first completion of a maximum-sized cluster may, surprisingly, have a very weak dependence on the total number of particles. We highlight how the higher statistics (variance, distribution) of the first passage time may still help to infer key parameters (such as the size of the maximum cluster) from data. Last but not least, we present a framework to quantify the formation of clusters of macroscopic size, whose formation is (asymptotically) very unlikely and occurs as a large deviation phenomenon from the mean-field limit. We argue that this framework is suitable for describing phase transition phenomena, as inherently infrequent stochastic processes, in contrast to classical nucleation theory.
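The first-assembly-time statistic studied above can be illustrated with a minimal Gillespie simulation, assuming the simplest possible kinetics: irreversible, size-independent binary coagulation. The paper treats arbitrary size-dependent aggregation and fragmentation rates, which this toy omits; parameter values are illustrative:

```python
import random

def first_assembly_time(M=30, N=10, k=1.0, seed=0):
    """Gillespie simulation of irreversible binary coagulation: any two
    clusters merge at rate k. Returns the first time a cluster of size
    >= N appears, starting from M monomers."""
    rng = random.Random(seed)
    clusters = [1] * M                       # pure-monomeric configuration
    t = 0.0
    while max(clusters) < N:
        n = len(clusters)
        total_rate = k * n * (n - 1) / 2     # every unordered pair can merge
        t += rng.expovariate(total_rate)     # exponential waiting time
        i, j = rng.sample(range(n), 2)       # pick a uniform random pair
        clusters[i] += clusters[j]
        clusters.pop(j)
    return t

# Mean first assembly time over independent realizations.
mean_T = sum(first_assembly_time(seed=s) for s in range(100)) / 100
```

Repeating the simulation over many seeds gives the full first-passage-time distribution, whose variance and shape (not just the mean) carry the parameter information the abstract emphasizes.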
1106.3005
Erik Martens A
Erik A. Martens, R. Kostadinov, Carlo C. Maley, Oskar Hallatschek
Spatial structure increases the waiting time for cancer
21 pages
New Journal of Physics 13, 115014 (2011)
10.1088/1367-2630/13/11/115014
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cancer results from a sequence of genetic and epigenetic changes which lead to a variety of abnormal phenotypes including increased proliferation and survival of somatic cells, and thus, to a selective advantage of pre-cancerous cells. The notion of cancer progression as an evolutionary process has attracted increasing interest in recent years. Many efforts have been made to better understand and predict the progression to cancer using mathematical models; these mostly consider the evolution of a well-mixed cell population, even though pre-cancerous cells often evolve in highly structured epithelial tissues. We propose a novel model of cancer progression that considers a spatially structured cell population where clones expand via adaptive waves. This model is used to assess two different paradigms of asexual evolution that have been suggested to delineate the process of cancer progression. The standard scenario of periodic selection assumes that driver mutations are accumulated strictly sequentially over time. However, when the mutation supply is sufficiently high, clones may arise simultaneously on distinct genetic backgrounds, and clonal adaptation waves interfere with each other. We find that in the presence of clonal interference, spatial structure increases the waiting time for cancer, leads to a patchwork structure of non-uniformly sized clones, decreases the survival probability of virtually neutral (passenger) mutations, and causes genetic distance to increase over a characteristic length scale, determined here. These characteristic features of clonal interference may help to predict the onset of cancers with pronounced spatial structure and to interpret spatially-sampled genetic data obtained from biopsies. Our estimates suggest that clonal interference likely occurs in progressing colon cancer, and possibly other cancers where spatial structure matters.
[ { "created": "Wed, 15 Jun 2011 15:51:48 GMT", "version": "v1" }, { "created": "Thu, 15 Sep 2011 22:36:26 GMT", "version": "v2" }, { "created": "Tue, 20 Sep 2011 21:15:20 GMT", "version": "v3" } ]
2011-12-05
[ [ "Martens", "Erik A.", "" ], [ "Kostadinov", "R.", "" ], [ "Maley", "Carlo C.", "" ], [ "Hallatschek", "Oskar", "" ] ]
Cancer results from a sequence of genetic and epigenetic changes which lead to a variety of abnormal phenotypes including increased proliferation and survival of somatic cells, and thus, to a selective advantage of pre-cancerous cells. The notion of cancer progression as an evolutionary process has attracted increasing interest in recent years. Many efforts have been made to better understand and predict the progression to cancer using mathematical models; these mostly consider the evolution of a well-mixed cell population, even though pre-cancerous cells often evolve in highly structured epithelial tissues. We propose a novel model of cancer progression that considers a spatially structured cell population where clones expand via adaptive waves. This model is used to assess two different paradigms of asexual evolution that have been suggested to delineate the process of cancer progression. The standard scenario of periodic selection assumes that driver mutations are accumulated strictly sequentially over time. However, when the mutation supply is sufficiently high, clones may arise simultaneously on distinct genetic backgrounds, and clonal adaptation waves interfere with each other. We find that in the presence of clonal interference, spatial structure increases the waiting time for cancer, leads to a patchwork structure of non-uniformly sized clones, decreases the survival probability of virtually neutral (passenger) mutations, and causes genetic distance to increase over a characteristic length scale, determined here. These characteristic features of clonal interference may help to predict the onset of cancers with pronounced spatial structure and to interpret spatially-sampled genetic data obtained from biopsies. Our estimates suggest that clonal interference likely occurs in progressing colon cancer, and possibly other cancers where spatial structure matters.
q-bio/0601049
Marek Cieplak
Marek Cieplak, Joanna I. Sulkowska
Thermal unfolding of proteins
3 figures, a text in latex
J. Chem. Phys. 123, 194908 (2005)
10.1063/1.2121668
null
q-bio.BM q-bio.QM
null
Thermal unfolding of proteins is compared to folding and mechanical stretching in a simple topology-based dynamical model. We define the unfolding time and demonstrate its low-temperature divergence. Below a characteristic temperature, contacts break at separate time scales and unfolding proceeds approximately in a way reverse to folding. Features in these scenarios agree with experiments and atomic simulations on titin.
[ { "created": "Mon, 30 Jan 2006 22:24:04 GMT", "version": "v1" } ]
2009-11-13
[ [ "Cieplak", "Marek", "" ], [ "Sulkowska", "Joanna I.", "" ] ]
Thermal unfolding of proteins is compared to folding and mechanical stretching in a simple topology-based dynamical model. We define the unfolding time and demonstrate its low-temperature divergence. Below a characteristic temperature, contacts break at separate time scales and unfolding proceeds approximately in a way reverse to folding. Features in these scenarios agree with experiments and atomic simulations on titin.
2003.04231
Payal Bal Dr
Payal Bal, Simon Kapitza, Natasha Cadenhead, Tom Kompas, Pham Van Ha, Brendan Wintle
Predicting the ecological outcomes of global consumption
40 pages, 7 figures
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mapping pathways to achieving the sustainable development goals requires understanding and predicting how social, economic and political factors impact biodiversity. Trends in demography, economic growth, regional alliances and consumption behaviours can have profound effects on the environment by driving resource use and production. While these distant socio-economic drivers impact species and ecosystems at global scales, for example by driving greenhouse gas emissions and climate change, the most prevalent human impacts on biodiversity manifest through habitat loss and land use change decisions at finer scales. We provide the first integrated ecological-economic analysis pathway capable of supporting both national policy design challenges and global-scale assessment of biodiversity risks posed by socio-economic drivers such as population growth, consumption and trade. To achieve this, we provide state-of-the-art integration of economic, land use, and biodiversity modelling, and illustrate its application using two case studies. We evaluate the national-level implications of changes in trading conditions under a multilateral free trade agreement for the bird biodiversity of Vietnam. We review the implications for land use and biodiversity under coupled socio-economic (Shared Socioeconomic Pathways) and climate (Representative Concentration Pathways) scenarios for Australia. Our study provides a roadmap for setting up high-dimensional integrated analyses for evaluating global priorities for protecting nature and livelihoods in vulnerable areas with the greatest conflicts among economic, social and environmental opportunities.
[ { "created": "Mon, 9 Mar 2020 16:12:16 GMT", "version": "v1" } ]
2020-03-26
[ [ "Bal", "Payal", "" ], [ "Kapitza", "Simon", "" ], [ "Cadenhead", "Natasha", "" ], [ "Kompas", "Tom", "" ], [ "Van Ha", "Pham", "" ], [ "Wintle", "Brendan", "" ] ]
Mapping pathways to achieving the sustainable development goals requires understanding and predicting how social, economic and political factors impact biodiversity. Trends in demography, economic growth, regional alliances and consumption behaviours can have profound effects on the environment by driving resource use and production. While these distant socio-economic drivers impact species and ecosystems at global scales, for example by driving greenhouse gas emissions and climate change, the most prevalent human impacts on biodiversity manifest through habitat loss and land use change decisions at finer scales. We provide the first integrated ecological-economic analysis pathway capable of supporting both national policy design challenges and global-scale assessment of biodiversity risks posed by socio-economic drivers such as population growth, consumption and trade. To achieve this, we provide state-of-the-art integration of economic, land use, and biodiversity modelling, and illustrate its application using two case studies. We evaluate the national-level implications of changes in trading conditions under a multilateral free trade agreement for the bird biodiversity of Vietnam. We review the implications for land use and biodiversity under coupled socio-economic (Shared Socioeconomic Pathways) and climate (Representative Concentration Pathways) scenarios for Australia. Our study provides a roadmap for setting up high-dimensional integrated analyses for evaluating global priorities for protecting nature and livelihoods in vulnerable areas with the greatest conflicts among economic, social and environmental opportunities.
1804.04853
Fr\'ed\'eric Pro\"ia
Fr\'ed\'eric Pro\"ia, Fabien Panloup, Chiraz Trabelsi and J\'er\'emy Clotault
Probabilistic reconstruction of genealogies for polyploid plant species
26 pages, 14 figures, 3 tables
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A probabilistic reconstruction of genealogies in a polyploid population (from 2x to 4x) is investigated, by considering genetic data analyzed as the probability of allele presence in a given genotype. Based on the likelihood of all possible crossbreeding patterns, our model enables us to infer and to quantify all potential genealogies in the population. We explain in particular how to deal with the uncertain allelic multiplicity that may occur with polyploids. Then we build an \textit{ad hoc} penalized likelihood to compare genealogies and to decide whether a particular individual brings sufficient information to be included in the retained genealogy. This decision criterion enables us, in the next part, to suggest a greedy algorithm to explore missing links and to rebuild some connections in the genealogies retrospectively. As a by-product, we also give a way to infer the individuals that may have been favored by breeders over the years. In the last part we highlight the results given by our model and our algorithm, first on a simulated population and then on a real population of rose bushes. Most of the methodology relies on the maximum likelihood principle and on graph theory.
[ { "created": "Fri, 13 Apr 2018 09:27:02 GMT", "version": "v1" }, { "created": "Wed, 28 Nov 2018 07:53:51 GMT", "version": "v2" } ]
2018-11-29
[ [ "Proïa", "Frédéric", "" ], [ "Panloup", "Fabien", "" ], [ "Trabelsi", "Chiraz", "" ], [ "Clotault", "Jérémy", "" ] ]
A probabilistic reconstruction of genealogies in a polyploid population (from 2x to 4x) is investigated, by considering genetic data analyzed as the probability of allele presence in a given genotype. Based on the likelihood of all possible crossbreeding patterns, our model enables us to infer and to quantify all potential genealogies in the population. We explain in particular how to deal with the uncertain allelic multiplicity that may occur with polyploids. Then we build an \textit{ad hoc} penalized likelihood to compare genealogies and to decide whether a particular individual brings sufficient information to be included in the retained genealogy. This decision criterion enables us, in the next part, to suggest a greedy algorithm to explore missing links and to rebuild some connections in the genealogies retrospectively. As a by-product, we also give a way to infer the individuals that may have been favored by breeders over the years. In the last part we highlight the results given by our model and our algorithm, first on a simulated population and then on a real population of rose bushes. Most of the methodology relies on the maximum likelihood principle and on graph theory.
2407.08858
James Kosmopoulos
James C. Kosmopoulos and Karthik Anantharaman
Microbial and Viral Ecology Analysis for Metagenomic Data
null
null
null
null
q-bio.QM q-bio.GN
http://creativecommons.org/licenses/by/4.0/
The explosion in known microbial diversity in the last two decades has made it abundantly clear that microbes in the environment do not exist in isolation; they are members of communities. Accordingly, omics approaches such as metagenomics have revealed that interactions between diverse groups of community members such as archaea, bacteria, and viruses (bacteriophage) are common and have significant impacts on entire microbiomes. Thus, to have a well-developed understanding of microbes as they naturally exist in the environment, biological entities of all kinds must be studied together. While numerous protocols for metagenome analysis exist, comprehensive published protocols for the simultaneous analysis of viruses and prokaryotes together are scarce. Further, as bioinformatic methods for microbiology rapidly advance, existing metagenomic tools and pipelines require frequent reevaluation. This ensures adherence to best practices for microbiome and metagenomic data analysis. Here, we offer an expansive approach for the joint analysis of bulk sequence data from a mixed microbial community (metagenomes) and viral-sized fraction communities (viromes). This chapter serves as a beginner-level guide for researchers with limited bioinformatics expertise who wish to engage in multi-scale metagenome and virome analyses. We cover steps from initial study design to sequence read processing, metagenome assembly, quality control, virus identification, microbial and viral genome binning, taxonomic characterization, species-level clustering, and host-virus predictions. We also provide the bioinformatic scripts used in our workflow for reuse in one's own computational methods. Lastly, we discuss additional approaches a researcher can take after processing data with this workflow.
[ { "created": "Thu, 11 Jul 2024 20:45:36 GMT", "version": "v1" } ]
2024-07-15
[ [ "Kosmopoulos", "James C.", "" ], [ "Anantharaman", "Karthik", "" ] ]
The explosion in known microbial diversity in the last two decades has made it abundantly clear that microbes in the environment do not exist in isolation; they are members of communities. Accordingly, omics approaches such as metagenomics have revealed that interactions between diverse groups of community members such as archaea, bacteria, and viruses (bacteriophage) are common and have significant impacts on entire microbiomes. Thus, to have a well-developed understanding of microbes as they naturally exist in the environment, biological entities of all kinds must be studied together. While numerous protocols for metagenome analysis exist, comprehensive published protocols for the simultaneous analysis of viruses and prokaryotes together are scarce. Further, as bioinformatic methods for microbiology rapidly advance, existing metagenomic tools and pipelines require frequent reevaluation. This ensures adherence to best practices for microbiome and metagenomic data analysis. Here, we offer an expansive approach for the joint analysis of bulk sequence data from a mixed microbial community (metagenomes) and viral-sized fraction communities (viromes). This chapter serves as a beginner-level guide for researchers with limited bioinformatics expertise who wish to engage in multi-scale metagenome and virome analyses. We cover steps from initial study design to sequence read processing, metagenome assembly, quality control, virus identification, microbial and viral genome binning, taxonomic characterization, species-level clustering, and host-virus predictions. We also provide the bioinformatic scripts used in our workflow for reuse in one's own computational methods. Lastly, we discuss additional approaches a researcher can take after processing data with this workflow.
1809.03614
Idaline Laigle PhD
Idaline Laigle and Isabelle Aubin and Dominique Gravel
Species traits and community properties explain species extinction effects on detritus-based food webs
39 pages, 4 tables, 4 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Effects of changes in the functional composition of soil communities on nutrient cycling are still not well understood. Models simulating community dynamics overcome the technical challenges of conducting species removal experiments in the field. However, to date, available soil food web models do not adequately represent the organic matter processing chain, which is key to soil dynamics. Here, we present a new model of soil food web dynamics accounting for allometric scaling of metabolic rate, ontogeny of organic matter, and explicit representation of nitrogen and carbon flows. We use this model to investigate which traits are the best predictors of species effects on community productivity and on nutrient cycling. To do so, we removed 161 tropho-species (groups of functionally identical species) one at a time from 48 forest soil food webs, and simulated their dynamics until equilibrium. Simulations revealed that combinations of traits determine removal effects better than single ones. The smallest species are the most competitive ones, but carnivores of various body masses presenting the highest connectivity and resource similarity could be keystone species in the regulation of competitive forces. Despite this, most removals had low effects, suggesting functional redundancy provides a high resistance of soil food webs to single tropho-species extinction. We also highlight for the first time that food web structure and soil fertility can drastically change species effects in an unpredictable way. Moreover, the exclusion of detritus and stoichiometric constraints in past studies led to underestimations of indirect effects and feedbacks. While additional work is needed to incorporate complementarity between detritivores, it is essential to take these mechanisms into account in models in order to improve the understanding of soil food web functioning.
[ { "created": "Mon, 10 Sep 2018 22:11:40 GMT", "version": "v1" }, { "created": "Mon, 11 Feb 2019 19:01:48 GMT", "version": "v2" } ]
2019-02-13
[ [ "Laigle", "Idaline", "" ], [ "Aubin", "Isabelle", "" ], [ "Gravel", "Dominique", "" ] ]
Effects of changes in the functional composition of soil communities on nutrient cycling are still not well understood. Models simulating community dynamics overcome the technical challenges of conducting species removal experiments in the field. However, to date, available soil food web models do not adequately represent the organic matter processing chain, which is key to soil dynamics. Here, we present a new model of soil food web dynamics accounting for allometric scaling of metabolic rate, ontogeny of organic matter, and explicit representation of nitrogen and carbon flows. We use this model to investigate which traits are the best predictors of species effects on community productivity and on nutrient cycling. To do so, we removed 161 tropho-species (groups of functionally identical species) one at a time from 48 forest soil food webs, and simulated their dynamics until equilibrium. Simulations revealed that combinations of traits determine removal effects better than single ones. The smallest species are the most competitive ones, but carnivores of various body masses presenting the highest connectivity and resource similarity could be keystone species in the regulation of competitive forces. Despite this, most removals had low effects, suggesting functional redundancy provides a high resistance of soil food webs to single tropho-species extinction. We also highlight for the first time that food web structure and soil fertility can drastically change species effects in an unpredictable way. Moreover, the exclusion of detritus and stoichiometric constraints in past studies led to underestimations of indirect effects and feedbacks. While additional work is needed to incorporate complementarity between detritivores, it is essential to take these mechanisms into account in models in order to improve the understanding of soil food web functioning.
1305.4022
Claudia Giambartolomei
Claudia Giambartolomei (1), Damjan Vukcevic (2), Eric E. Schadt (3), Lude Franke (4), Aroon D. Hingorani (1), Chris Wallace (5), Vincent Plagnol (1) ((1) University College London (UCL), London, UK, (2) Royal Children's Hospital, Melbourne, Australia, (3) Mount Sinai School of Medicine, New York USA, (4) University of Groningen, Groningen, The Netherlands, (5) University of Cambridge, Cambridge, UK)
Bayesian Test for Colocalisation Between Pairs of Genetic Association Studies Using Summary Statistics
Number of pages in main article: 25; Number of figures in main article: 6. To be published in Plos Genetics
PLoS Genet (2013) 10(5): e1004383
10.1371/journal.pgen.1004383
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genetic association studies, in particular the genome-wide association study design, have provided a wealth of novel insights into the aetiology of a wide range of human diseases and traits. The next challenge consists of understanding the molecular basis of these associations. The integration of multiple association datasets, including gene expression datasets, can contribute to this goal. We have developed a novel statistical methodology to assess whether two association signals are consistent with a shared causal variant. An application is the integration of disease scans with expression quantitative trait locus (eQTL) studies, but any pair of GWAS datasets can be integrated in this framework. We demonstrate the value of the approach by reanalysing a gene expression dataset in 966 liver samples with a published meta-analysis of lipid traits including >100,000 individuals of European ancestry. Combining all lipid biomarkers, our reanalysis supported 29 out of 38 reported colocalisation results with eQTLs and identified 14 new colocalisation results, highlighting the value of a formal statistical test. In two cases of reported eQTL-lipid pairs (IFT172, TBKBP1) for which our analysis suggests that the eQTL pattern is not consistent with the lipid association, we identify alternative colocalisation results with GCKR and KPNB1, indicating that these genes are more likely to be causal in these genomic intervals. A key feature of the method is the ability to derive the output statistics from single SNP summary statistics, hence making it possible to perform systematic meta-analysis type comparisons across multiple GWAS datasets (http://coloc.cs.ucl.ac.uk/coloc/). Our methodology provides information about candidate causal genes in associated intervals and has direct implications for the understanding of complex diseases and the design of drugs to target disease pathways.
[ { "created": "Fri, 17 May 2013 09:22:02 GMT", "version": "v1" }, { "created": "Tue, 2 Jul 2013 18:32:46 GMT", "version": "v2" }, { "created": "Mon, 4 Nov 2013 17:11:18 GMT", "version": "v3" } ]
2014-10-17
[ [ "Giambartolomei", "Claudia", "" ], [ "Vukcevic", "Damjan", "" ], [ "Schadt", "Eric E.", "" ], [ "Franke", "Lude", "" ], [ "Hingorani", "Aroon D.", "" ], [ "Wallace", "Chris", "" ], [ "Plagnol", "Vincent", "" ] ]
Genetic association studies, in particular the genome-wide association study design, have provided a wealth of novel insights into the aetiology of a wide range of human diseases and traits. The next challenge consists of understanding the molecular basis of these associations. The integration of multiple association datasets, including gene expression datasets, can contribute to this goal. We have developed a novel statistical methodology to assess whether two association signals are consistent with a shared causal variant. An application is the integration of disease scans with expression quantitative trait locus (eQTL) studies, but any pair of GWAS datasets can be integrated in this framework. We demonstrate the value of the approach by reanalysing a gene expression dataset in 966 liver samples with a published meta-analysis of lipid traits including >100,000 individuals of European ancestry. Combining all lipid biomarkers, our reanalysis supported 29 out of 38 reported colocalisation results with eQTLs and identified 14 new colocalisation results, highlighting the value of a formal statistical test. In two cases of reported eQTL-lipid pairs (IFT172, TBKBP1) for which our analysis suggests that the eQTL pattern is not consistent with the lipid association, we identify alternative colocalisation results with GCKR and KPNB1, indicating that these genes are more likely to be causal in these genomic intervals. A key feature of the method is the ability to derive the output statistics from single SNP summary statistics, hence making it possible to perform systematic meta-analysis type comparisons across multiple GWAS datasets (http://coloc.cs.ucl.ac.uk/coloc/). Our methodology provides information about candidate causal genes in associated intervals and has direct implications for the understanding of complex diseases and the design of drugs to target disease pathways.
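The single-SNP building block of such summary-statistics methods, a Wakefield-style approximate Bayes factor computed from an effect estimate and its standard error, can be sketched as follows. The prior variance W and all numbers below are illustrative assumptions, and this is only the per-SNP ingredient, not the full colocalisation test:

```python
import math

def log_abf(beta, se, W=0.15 ** 2):
    """Log of a Wakefield-style approximate Bayes factor in favour of
    association at one SNP, from summary statistics. W is the assumed
    prior variance of the true effect size."""
    V = se ** 2                 # variance of the effect estimate
    z = beta / se               # usual z-score
    r = W / (W + V)             # shrinkage factor
    return 0.5 * (math.log(1 - r) + r * z * z)

# Toy comparison: a strongly associated SNP vs an effectively null SNP.
strong = log_abf(beta=0.5, se=0.05)
null = log_abf(beta=0.01, se=0.05)
```

Per-SNP log Bayes factors like these, computed for two traits across a region, are what colocalisation frameworks combine into posterior probabilities for shared versus distinct causal variants.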
1408.6052
Andrei R. Akhmetzhanov
Andrei R. Akhmetzhanov, Michael E. Hochberg
Mean-field dynamics of tumor growth and control using low-impact chemoprevention
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/publicdomain/
Cancer poses danger because of its unregulated growth, development of resistant subclones, and metastatic spread to vital organs. Although the major transitions in cancer development are increasingly well understood, we lack quantitative theory for how chemoprevention is predicted to affect survival. We employ master equations and probability generating functions, the latter well known in statistical physics, to derive the dynamics of tumor growth as a mean-field approximation. We also study numerically the associated stochastic birth-death process. Our findings predict exponential tumor growth when a cancer is in its early stages of development and hyper-exponential growth thereafter. Numerical simulations are in general agreement with our analytical approach. We evaluate how constant, low impact treatments affect both neoplastic growth and the frequency of chemoresistant clones. We show that therapeutic outcomes are highly predictable for treatments starting either sufficiently early or late in terms of initial tumor size and the initial number of chemoresistant cells, whereas stochastic dynamics dominate therapies starting at intermediate neoplasm sizes, with high outcome sensitivity both in terms of tumor control and the emergence of resistant subclones. The outcome of chemoprevention can be understood in terms of both minimal physiological impacts resulting in long-term control and either preventing or slowing the emergence of resistant subclones. We argue that our model and results can also be applied to the management of early, clinically detected cancers after tumor excision.
[ { "created": "Tue, 26 Aug 2014 08:50:22 GMT", "version": "v1" } ]
2014-08-27
[ [ "Akhmetzhanov", "Andrei R.", "" ], [ "Hochberg", "Michael E.", "" ] ]
Cancer poses danger because of its unregulated growth, development of resistant subclones, and metastatic spread to vital organs. Although the major transitions in cancer development are increasingly well understood, we lack quantitative theory for how chemoprevention is predicted to affect survival. We employ master equations and probability generating functions, the latter well known in statistical physics, to derive the dynamics of tumor growth as a mean-field approximation. We also study numerically the associated stochastic birth-death process. Our findings predict exponential tumor growth when a cancer is in its early stages of development and hyper-exponential growth thereafter. Numerical simulations are in general agreement with our analytical approach. We evaluate how constant, low impact treatments affect both neoplastic growth and the frequency of chemoresistant clones. We show that therapeutic outcomes are highly predictable for treatments starting either sufficiently early or late in terms of initial tumor size and the initial number of chemoresistant cells, whereas stochastic dynamics dominate therapies starting at intermediate neoplasm sizes, with high outcome sensitivity both in terms of tumor control and the emergence of resistant subclones. The outcome of chemoprevention can be understood in terms of both minimal physiological impacts resulting in long-term control and either preventing or slowing the emergence of resistant subclones. We argue that our model and results can also be applied to the management of early, clinically detected cancers after tumor excision.
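The stochastic birth-death dynamics described above can be sketched with a small Gillespie simulation: a treated (subcritical) sensitive clone that can seed a supercritical resistant subclone at division. All rates, the initial size, the escape threshold, and the stopping rule are illustrative assumptions, not the paper's parameterization:

```python
import random

def simulate(b=1.0, d=1.1, u=1e-3, br=1.0, dr=0.9,
             n0=50, escape=200, tmax=100.0, seed=0):
    """Gillespie birth-death simulation. Sensitive cells divide at rate b
    and die at rate d (d > b under treatment); each division yields a
    resistant cell with probability u. Resistant cells (br > dr) can
    grow; reaching `escape` cells counts as loss of tumor control."""
    rng = random.Random(seed)
    ns, nr, t = n0, 0, 0.0
    while t < tmax:
        rate = ns * (b + d) + nr * (br + dr)
        if rate == 0:
            return "extinct", t             # whole tumor eliminated
        t += rng.expovariate(rate)
        x = rng.random() * rate             # pick the next event
        if x < ns * b:                      # sensitive division
            if rng.random() < u:
                nr += 1                     # resistance mutation
            else:
                ns += 1
        elif x < ns * (b + d):              # sensitive death
            ns -= 1
        elif x < ns * (b + d) + nr * br:    # resistant division
            nr += 1
        else:                               # resistant death
            nr = max(nr - 1, 0)
        if nr >= escape:
            return "resistant", t           # resistant clone escapes
    return "ongoing", t
```

Sweeping the initial size n0 and mutation probability u over many seeds reproduces the qualitative picture in the abstract: predictable control for small tumors, and highly stochastic outcomes at intermediate sizes.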
1806.11077
Ihor Lubashevsky
Ihor Lubashevsky
Psychophysical laws as reflection of mental space properties
null
null
10.1016/j.plrev.2018.10.003
null
q-bio.NC eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper is devoted to the relationship between psychophysics and physics of mind. The basic trends in psychophysics development are briefly discussed with special attention focused on Teghtsoonian's hypotheses. These hypotheses pose the concept of the universality of inner psychophysics and enable one to speak about psychological space as an individual object with its own properties. Turning to the two-component description of human behavior (I. Lubashevsky, Physics of the Human Mind, Springer, 2017) the notion of mental space is formulated and human perception of external stimuli is treated as the emergence of the corresponding images in the mental space. On one hand, these images are caused by external stimuli and their magnitude bears the information about the intensity of the corresponding stimuli. On the other hand, the individual structure of such images as well as their subsistence after emergence is determined only by the properties of mental space on its own. Finally, the mental operations of image comparison and their scaling are defined in a way allowing for the bounded capacity of human cognition. As demonstrated, the developed theory of stimulus perception is able to explain the basic regularities of psychophysics, e.g., (i) the regression and range effects leading to the overestimation of weak stimuli and the underestimation of strong stimuli, (ii) scalar variability (Weber's and Ekman's laws), and (iii) the sequential (memory) effects. As the final result, a solution to the Fechner-Stevens dilemma is proposed. This solution posits that Fechner's logarithmic law is not a consequence of Weber's law but stems from the interplay of uncertainty in evaluating stimulus intensities and the multi-step scaling required to overcome the stimulus incommensurability.
[ { "created": "Wed, 27 Jun 2018 11:17:04 GMT", "version": "v1" } ]
2020-01-29
[ [ "Lubashevsky", "Ihor", "" ] ]
The paper is devoted to the relationship between psychophysics and the physics of mind. The basic trends in the development of psychophysics are briefly discussed, with special attention focused on Teghtsoonian's hypotheses. These hypotheses pose the concept of the universality of inner psychophysics and make it possible to speak about psychological space as an individual object with its own properties. Turning to the two-component description of human behavior (I. Lubashevsky, Physics of the Human Mind, Springer, 2017), the notion of mental space is formulated, and human perception of external stimuli is treated as the emergence of the corresponding images in the mental space. On the one hand, these images are caused by external stimuli, and their magnitude bears information about the intensity of the corresponding stimuli. On the other hand, the individual structure of such images, as well as their subsistence after emergence, is determined only by the properties of mental space on its own. Finally, the mental operations of image comparison and scaling are defined in a way that allows for the bounded capacity of human cognition. As demonstrated, the developed theory of stimulus perception is able to explain the basic regularities of psychophysics, e.g., (i) the regression and range effects leading to the overestimation of weak stimuli and the underestimation of strong stimuli, (ii) scalar variability (Weber's and Ekman's laws), and (iii) the sequential (memory) effects. As the final result, a solution to the Fechner-Stevens dilemma is proposed. This solution posits that Fechner's logarithmic law is not a consequence of Weber's law but stems from the interplay of uncertainty in evaluating stimulus intensities and the multi-step scaling required to overcome stimulus incommensurability.
1709.09541
R. Ozgur Doruk
R.Ozgur Doruk, Kechen Zhang
Fitting of dynamic recurrent neural network models to sensory stimulus-response data
arXiv admin note: text overlap with arXiv:1610.05561
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a theoretical study aiming at model fitting for sensory neurons. Conventional neural network training approaches are not applicable to this problem due to the lack of continuous data. Although the stimulus can be considered a smooth time-dependent variable, the associated response is a set of neural spike timings (roughly the instants of successive action potential peaks), which carry no amplitude information. A recurrent neural network model can be fitted to such a stimulus-response data pair by maximum likelihood estimation, where the likelihood function is derived from the Poisson statistics of neural spiking. The universal approximation property of recurrent dynamical neural network models allows us to describe the excitatory-inhibitory characteristics of an actual sensory neural network with any desired number of neurons. The stimulus data are generated by a Phased Cosine Fourier series with fixed amplitude and frequency but a randomly drawn phase. Various values of the amplitude, stimulus component size and sample size are applied in order to examine the effect of the stimulus on the identification process. Results are presented in tabular form at the end of this text.
[ { "created": "Tue, 26 Sep 2017 08:35:35 GMT", "version": "v1" } ]
2017-09-28
[ [ "Doruk", "R. Ozgur", "" ], [ "Zhang", "Kechen", "" ] ]
We present a theoretical study aiming at model fitting for sensory neurons. Conventional neural network training approaches are not applicable to this problem due to the lack of continuous data. Although the stimulus can be considered a smooth time-dependent variable, the associated response is a set of neural spike timings (roughly the instants of successive action potential peaks), which carry no amplitude information. A recurrent neural network model can be fitted to such a stimulus-response data pair by maximum likelihood estimation, where the likelihood function is derived from the Poisson statistics of neural spiking. The universal approximation property of recurrent dynamical neural network models allows us to describe the excitatory-inhibitory characteristics of an actual sensory neural network with any desired number of neurons. The stimulus data are generated by a Phased Cosine Fourier series with fixed amplitude and frequency but a randomly drawn phase. Various values of the amplitude, stimulus component size and sample size are applied in order to examine the effect of the stimulus on the identification process. Results are presented in tabular form at the end of this text.
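The Poisson spiking likelihood mentioned in this abstract can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: for an inhomogeneous Poisson process with rate λ(t), the log-likelihood of observed spike times is the sum of log-rates at the spike instants minus the integrated rate; the constant-rate example and the simple Riemann integration are assumptions made for the sketch.

```python
import math

def poisson_log_likelihood(spike_times, rate_fn, t_end, dt=1e-3):
    """Log-likelihood of spike times under an inhomogeneous Poisson model:
    sum of log-rates at the spike instants minus the integrated rate."""
    log_rates = sum(math.log(rate_fn(t)) for t in spike_times)
    # Riemann approximation of the integral of the rate over [0, t_end]
    n = int(round(t_end / dt))
    integral = sum(rate_fn(i * dt) for i in range(n)) * dt
    return log_rates - integral

# Sanity check with a constant rate lambda = 5:
# log L = n_spikes * log(5) - 5 * T
spikes = [0.1, 0.4, 0.9]
ll = poisson_log_likelihood(spikes, lambda t: 5.0, t_end=1.0)
```

In a fitting procedure, `rate_fn` would be the output of the recurrent network model, and `ll` would be maximized over the network parameters.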
2103.11959
Yves-Henri Sanejouand
Yves-Henri Sanejouand
Normal-mode driven exploration of protein domain motions
11 pages, 5 figures
J. Comput. Chem. 2021, vol.42 (31), p2250-2257
10.1002/jcc.26755
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Domain motions involved in the function of proteins can often be well described as a combination of motions along a handful of low-frequency modes, that is, with the values of a few normal coordinates. This means that, when the functional motion of a protein is unknown, it should prove possible to predict it, since doing so amounts to guessing a few values. However, without the help of additional experimental data, using normal coordinates to generate accurate conformers far away from the initial one is not so straightforward. To do so, a new approach is proposed: instead of building conformers directly with the values of a subset of normal coordinates, they are built in two steps, the conformer built with normal coordinates being used only to define a set of distance constraints, the final conformer being built so as to match them. Note that this approach amounts to transforming the problem of generating accurate protein conformers using normal coordinates into a better-known one: the distance-geometry problem, which is herein solved with the help of the ROSETTA software. In the present study, this approach allowed six large-amplitude conformational changes to be rebuilt accurately, using at most six low-frequency normal coordinates. As a consequence of the low dimensionality of the corresponding subspace, random exploration also proved enough for generating low-energy conformers close to the known end-point of the conformational change of the LAO binding protein, lysozyme T4 and adenylate kinase.
[ { "created": "Mon, 22 Mar 2021 16:05:44 GMT", "version": "v1" } ]
2022-09-07
[ [ "Sanejouand", "Yves-Henri", "" ] ]
Domain motions involved in the function of proteins can often be well described as a combination of motions along a handful of low-frequency modes, that is, with the values of a few normal coordinates. This means that, when the functional motion of a protein is unknown, it should prove possible to predict it, since doing so amounts to guessing a few values. However, without the help of additional experimental data, using normal coordinates to generate accurate conformers far away from the initial one is not so straightforward. To do so, a new approach is proposed: instead of building conformers directly with the values of a subset of normal coordinates, they are built in two steps, the conformer built with normal coordinates being used only to define a set of distance constraints, the final conformer being built so as to match them. Note that this approach amounts to transforming the problem of generating accurate protein conformers using normal coordinates into a better-known one: the distance-geometry problem, which is herein solved with the help of the ROSETTA software. In the present study, this approach allowed six large-amplitude conformational changes to be rebuilt accurately, using at most six low-frequency normal coordinates. As a consequence of the low dimensionality of the corresponding subspace, random exploration also proved enough for generating low-energy conformers close to the known end-point of the conformational change of the LAO binding protein, lysozyme T4 and adenylate kinase.
1112.3900
Octavio Miramontes
Octavio Miramontes and Denis Boyer and Frederic Bartumeus
The effects of spatially heterogeneous prey distributions on detection patterns in foraging seabirds
Submitted first to PLoS-ONE on 26/9/2011. Final version published on 14/04/2012
PLoS ONE 7(4) e34317, 2012
10.1371/journal.pone.0034317
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many attempts to relate animal foraging patterns to landscape heterogeneity focus on the analysis of foragers' movements. Resource detection patterns in space and time are not commonly studied, yet they are tightly coupled to landscape properties and add relevant information on foraging behavior. By exploring simple foraging models in unpredictable environments, we show that the distribution of intervals between detected prey (detection statistics) is mostly determined by the spatial structure of the prey field and is essentially distinct from predator displacement statistics. Detections are expected to be Poissonian in uniform random environments for markedly different foraging movements (e.g. L\'evy and ballistic). This prediction is supported by data on the time intervals between diving events in short-range foraging seabirds such as the thick-billed murre ({\it Uria lomvia}). However, Poissonian detection statistics are not observed in long-range seabirds such as the wandering albatross ({\it Diomedea exulans}), due to the fractal nature of the prey field, which covers a wide range of spatial scales. For this scenario, models of fractal prey fields induce non-Poissonian patterns of detection in good agreement with two albatross data sets. We find that the specific shape of the distribution of time intervals between prey detections is mainly driven by meso- and submeso-scale landscape structures and depends little on the forager strategy or behavioral responses.
[ { "created": "Fri, 16 Dec 2011 17:43:28 GMT", "version": "v1" }, { "created": "Fri, 20 Apr 2012 22:29:39 GMT", "version": "v2" } ]
2012-04-24
[ [ "Miramontes", "Octavio", "" ], [ "Boyer", "Denis", "" ], [ "Bartumeus", "Frederic", "" ] ]
Many attempts to relate animal foraging patterns to landscape heterogeneity focus on the analysis of foragers' movements. Resource detection patterns in space and time are not commonly studied, yet they are tightly coupled to landscape properties and add relevant information on foraging behavior. By exploring simple foraging models in unpredictable environments, we show that the distribution of intervals between detected prey (detection statistics) is mostly determined by the spatial structure of the prey field and is essentially distinct from predator displacement statistics. Detections are expected to be Poissonian in uniform random environments for markedly different foraging movements (e.g. L\'evy and ballistic). This prediction is supported by data on the time intervals between diving events in short-range foraging seabirds such as the thick-billed murre ({\it Uria lomvia}). However, Poissonian detection statistics are not observed in long-range seabirds such as the wandering albatross ({\it Diomedea exulans}), due to the fractal nature of the prey field, which covers a wide range of spatial scales. For this scenario, models of fractal prey fields induce non-Poissonian patterns of detection in good agreement with two albatross data sets. We find that the specific shape of the distribution of time intervals between prey detections is mainly driven by meso- and submeso-scale landscape structures and depends little on the forager strategy or behavioral responses.
1207.5725
Hugues Berry
H\'edi Soula (Insa Lyon / INRIA Grenoble Rh\^one-Alpes / UCBL, CARMEN), Bertrand Car\'e (Insa Lyon / INRIA Grenoble Rh\^one-Alpes / UCBL, CARMEN, LIRIS), Guillaume Beslon (LIRIS), Hugues Berry (Insa Lyon / INRIA Grenoble Rh\^one-Alpes / UCBL)
Anomalous versus slowed-down Brownian diffusion in the ligand-binding equilibrium
Biophysical Journal (2013)
null
10.1016/j.bpj.2013.07.023
null
q-bio.QM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Measurements of protein motion in living cells and membranes consistently report transient anomalous diffusion (subdiffusion) that converges back to Brownian motion with a reduced diffusion coefficient at long times, after the anomalous-diffusion regime. Therefore, slowed-down Brownian motion can be considered the macroscopic limit of transient anomalous diffusion. On the other hand, membranes are also heterogeneous media in which Brownian motion may be locally slowed down due to variations in lipid composition. Here, we investigate whether both situations lead to similar behavior for the reversible ligand-binding reaction in 2d. We compare the (long-time) equilibrium properties obtained with transient anomalous diffusion due to obstacle hindrance or power-law-distributed residence times (continuous-time random walks) to those obtained with space-dependent slowed-down Brownian motion. Using theoretical arguments and Monte Carlo simulations, we show that these three scenarios have distinctive effects on the apparent affinity of the reaction. While continuous-time random walks decrease the apparent affinity of the reaction, locally slowed-down Brownian motion and local hindrance by obstacles both improve it. However, only in the case of slowed-down Brownian motion is the affinity maximal when the slowdown is restricted to a subregion of the available space. Hence, even at long times (equilibrium), these processes are different and exhibit irreconcilable behaviors when the area fraction of reduced mobility changes.
[ { "created": "Tue, 24 Jul 2012 15:21:04 GMT", "version": "v1" }, { "created": "Wed, 17 Jul 2013 11:28:18 GMT", "version": "v2" } ]
2015-06-05
[ [ "Soula", "Hédi", "", "Insa Lyon / INRIA Grenoble Rhône-Alpes / UCBL,\n CARMEN" ], [ "Caré", "Bertrand", "", "Insa Lyon / INRIA Grenoble Rhône-Alpes / UCBL,\n CARMEN, LIRIS" ], [ "Beslon", "Guillaume", "", "LIRIS" ], [ "Berry", "Hugues", "", "Insa Lyon / INRIA\n Grenoble Rhône-Alpes / UCBL" ] ]
Measurements of protein motion in living cells and membranes consistently report transient anomalous diffusion (subdiffusion) that converges back to Brownian motion with a reduced diffusion coefficient at long times, after the anomalous-diffusion regime. Therefore, slowed-down Brownian motion can be considered the macroscopic limit of transient anomalous diffusion. On the other hand, membranes are also heterogeneous media in which Brownian motion may be locally slowed down due to variations in lipid composition. Here, we investigate whether both situations lead to similar behavior for the reversible ligand-binding reaction in 2d. We compare the (long-time) equilibrium properties obtained with transient anomalous diffusion due to obstacle hindrance or power-law-distributed residence times (continuous-time random walks) to those obtained with space-dependent slowed-down Brownian motion. Using theoretical arguments and Monte Carlo simulations, we show that these three scenarios have distinctive effects on the apparent affinity of the reaction. While continuous-time random walks decrease the apparent affinity of the reaction, locally slowed-down Brownian motion and local hindrance by obstacles both improve it. However, only in the case of slowed-down Brownian motion is the affinity maximal when the slowdown is restricted to a subregion of the available space. Hence, even at long times (equilibrium), these processes are different and exhibit irreconcilable behaviors when the area fraction of reduced mobility changes.
1409.1560
Matjaz Perc
Xiaojie Chen, Matjaz Perc
Optimal distribution of incentives for public cooperation in heterogeneous interaction environments
19 pages, 8 figures; accepted for publication in Frontiers in Behavioral Neuroscience
Front. Behav. Neurosci. 8 (2014) 248
10.3389/fnbeh.2014.00248
null
q-bio.PE cs.GT cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the framework of evolutionary games with institutional reciprocity, limited incentives are at our disposal for rewarding cooperators and punishing defectors. In the simplest case, it can be assumed that, depending on their strategies, all players receive equal incentives from the common pool. The question arises, however: what is the optimal distribution of institutional incentives? How should we best reward and punish individuals for cooperation to thrive? We study this problem for the public goods game on a scale-free network. We show that if the synergetic effects of group interactions are weak, the level of cooperation in the population can be maximized simply by adopting the simplest "equal distribution" scheme. If synergetic effects are strong, however, it is best to reward high-degree nodes more than low-degree nodes. These distribution schemes for institutional rewards are independent of payoff normalization. For institutional punishment, however, the same optimization problem is more complex, and its solution depends on whether absolute or degree-normalized payoffs are used. We find that degree-normalized payoffs require high-degree nodes to be punished more leniently than low-degree nodes. Conversely, if absolute payoffs count, then high-degree nodes should be punished more strongly than low-degree nodes.
[ { "created": "Thu, 4 Sep 2014 19:54:17 GMT", "version": "v1" } ]
2014-09-05
[ [ "Chen", "Xiaojie", "" ], [ "Perc", "Matjaz", "" ] ]
In the framework of evolutionary games with institutional reciprocity, limited incentives are at our disposal for rewarding cooperators and punishing defectors. In the simplest case, it can be assumed that, depending on their strategies, all players receive equal incentives from the common pool. The question arises, however: what is the optimal distribution of institutional incentives? How should we best reward and punish individuals for cooperation to thrive? We study this problem for the public goods game on a scale-free network. We show that if the synergetic effects of group interactions are weak, the level of cooperation in the population can be maximized simply by adopting the simplest "equal distribution" scheme. If synergetic effects are strong, however, it is best to reward high-degree nodes more than low-degree nodes. These distribution schemes for institutional rewards are independent of payoff normalization. For institutional punishment, however, the same optimization problem is more complex, and its solution depends on whether absolute or degree-normalized payoffs are used. We find that degree-normalized payoffs require high-degree nodes to be punished more leniently than low-degree nodes. Conversely, if absolute payoffs count, then high-degree nodes should be punished more strongly than low-degree nodes.
1707.05713
Youngmin Park
Youngmin Park, Stewart Heitmann, and G. Bard Ermentrout
The Utility of Phase Models in Studying Neural Synchronization
18 pages, 5 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synchronized neural spiking is associated with many cognitive functions and thus merits study for its own sake. The analysis of neural synchronization naturally leads to the study of repetitive spiking and, consequently, to the analysis of coupled neural oscillators. Coupled oscillator theory thus informs the synchronization of spiking neuronal networks. A crucial element of coupled oscillator theory is the phase response curve (PRC), which describes the impact of a perturbation on the phase of an oscillator. In neural terms, the perturbation represents an incoming synaptic potential, which may either advance or retard the timing of the next spike. The phase response curves and the form of coupling between reciprocally coupled oscillators define the phase interaction function, which in turn predicts the synchronization outcome (in-phase versus anti-phase) and the rate of convergence. We review the two classes of PRC and demonstrate the utility of the phase model in predicting synchronization in reciprocally coupled neural models. In addition, we compare the rate of convergence for all combinations of reciprocally coupled Class I and Class II oscillators. These findings predict the general synchronization outcomes of broad classes of neurons under both inhibitory and excitatory reciprocal coupling.
[ { "created": "Tue, 18 Jul 2017 15:57:51 GMT", "version": "v1" } ]
2017-07-19
[ [ "Park", "Youngmin", "" ], [ "Heitmann", "Stewart", "" ], [ "Ermentrout", "G. Bard", "" ] ]
Synchronized neural spiking is associated with many cognitive functions and thus merits study for its own sake. The analysis of neural synchronization naturally leads to the study of repetitive spiking and, consequently, to the analysis of coupled neural oscillators. Coupled oscillator theory thus informs the synchronization of spiking neuronal networks. A crucial element of coupled oscillator theory is the phase response curve (PRC), which describes the impact of a perturbation on the phase of an oscillator. In neural terms, the perturbation represents an incoming synaptic potential, which may either advance or retard the timing of the next spike. The phase response curves and the form of coupling between reciprocally coupled oscillators define the phase interaction function, which in turn predicts the synchronization outcome (in-phase versus anti-phase) and the rate of convergence. We review the two classes of PRC and demonstrate the utility of the phase model in predicting synchronization in reciprocally coupled neural models. In addition, we compare the rate of convergence for all combinations of reciprocally coupled Class I and Class II oscillators. These findings predict the general synchronization outcomes of broad classes of neurons under both inhibitory and excitatory reciprocal coupling.
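The phase-model prediction described in this abstract can be illustrated with a minimal sketch. For two identical, reciprocally coupled oscillators with an odd interaction function H(φ) = K sin(φ), the phase difference obeys dφ/dt = −2K sin(φ), so the in-phase state φ = 0 is stable. The sinusoidal H, the coupling constant, and the Euler integrator below are illustrative assumptions, not the paper's specific models.

```python
import math

def phase_difference_trajectory(phi0, coupling=0.5, dt=0.01, steps=2000):
    """Euler integration of d(phi)/dt = -2*K*sin(phi): the phase-difference
    equation for two identical oscillators with interaction H(phi) = K*sin(phi)."""
    phi = phi0
    traj = [phi]
    for _ in range(steps):
        phi += dt * (-2.0 * coupling * math.sin(phi))
        traj.append(phi)
    return traj

# Starting well away from synchrony, the phase difference decays toward 0
# (in-phase locking); starting near phi = pi, decay is initially slow because
# the anti-phase state is an unstable fixed point.
traj = phase_difference_trajectory(phi0=2.5)
```

Swapping in a different odd part of H (e.g. one with a stable zero at φ = π) flips the prediction to anti-phase locking, which is the kind of outcome comparison the review discusses.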
2110.04801
Tao Jia
Shigang Qiu and Tao Jia
Quantifying the noise in bursty gene expression under regulation by small RNAs
14 pages, 5 figures
International Journal of Modern Physics C (IJMPC), Vol. 30, No. 07, 1940002 (2019)
10.1142/S0129183119400023
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene expression is a fundamental process in a living system. Small RNAs (sRNAs) are widely observed as global regulators of gene expression. The inherent nonlinearity in this regulatory process, together with the bursty production of messenger RNA (mRNA), sRNA and protein, makes the exact solution for this stochastic process intractable. This is particularly the case when quantifying the protein noise level, which has a great impact on multiple cellular processes. Here we propose an approximate yet reasonably accurate solution for the gene expression noise with infrequent bursts and strong regulation by sRNAs. This analytical solution allows us to better analyze the noise and stochastic deviation of the protein level. We find that the regulation amplifies the noise and reduces the protein level. The stochasticity in the regulation generates more proteins than would be produced if the stochasticity were removed from the system. The sRNA level is most important to the relationship between the noise and the stochastic deviation. The results provide analytical tools for more general studies of gene expression and strengthen our quantitative understanding of post-transcriptional regulation in controlling gene expression processes.
[ { "created": "Sun, 10 Oct 2021 14:03:47 GMT", "version": "v1" } ]
2021-10-12
[ [ "Qiu", "Shigang", "" ], [ "Jia", "Tao", "" ] ]
Gene expression is a fundamental process in a living system. Small RNAs (sRNAs) are widely observed as global regulators of gene expression. The inherent nonlinearity in this regulatory process, together with the bursty production of messenger RNA (mRNA), sRNA and protein, makes the exact solution for this stochastic process intractable. This is particularly the case when quantifying the protein noise level, which has a great impact on multiple cellular processes. Here we propose an approximate yet reasonably accurate solution for the gene expression noise with infrequent bursts and strong regulation by sRNAs. This analytical solution allows us to better analyze the noise and stochastic deviation of the protein level. We find that the regulation amplifies the noise and reduces the protein level. The stochasticity in the regulation generates more proteins than would be produced if the stochasticity were removed from the system. The sRNA level is most important to the relationship between the noise and the stochastic deviation. The results provide analytical tools for more general studies of gene expression and strengthen our quantitative understanding of post-transcriptional regulation in controlling gene expression processes.
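The bursty production process this abstract analyzes can be illustrated with a stochastic simulation. The sketch below is a generic Gillespie model of geometric bursts plus first-order degradation, not the authors' sRNA-regulated system; all parameter values and the geometric burst-size distribution are arbitrary choices for the example. Regardless of the burst-size distribution, the stationary mean is (burst rate × mean burst size) / degradation rate.

```python
import random

def average_copy_number(k_burst=1.0, mean_burst=5, gamma=0.1,
                        t_end=5000.0, seed=42):
    """Gillespie simulation of bursty synthesis (geometric burst sizes with
    mean `mean_burst`) plus first-order degradation at rate `gamma`;
    returns the time-averaged molecule count."""
    rng = random.Random(seed)
    x, t, weighted = 0, 0.0, 0.0
    while t < t_end:
        total = k_burst + gamma * x
        dt = rng.expovariate(total)
        weighted += x * min(dt, t_end - t)
        t += dt
        if t >= t_end:
            break
        if rng.random() < k_burst / total:
            # sample a geometric burst size with mean `mean_burst`
            size = 1
            while rng.random() < 1.0 - 1.0 / mean_burst:
                size += 1
            x += size
        else:
            x -= 1  # degradation of one molecule
    return weighted / t_end

# Expected stationary mean: k_burst * mean_burst / gamma = 50
avg = average_copy_number()
```

Comparing the empirical variance of such a trajectory with the Poissonian (non-bursty) case would show the noise amplification that bursting introduces.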
2406.05797
Qizhi Pei
Qizhi Pei, Lijun Wu, Kaiyuan Gao, Jinhua Zhu, Rui Yan
3D-MolT5: Towards Unified 3D Molecule-Text Modeling with 3D Molecular Tokenization
18 pages
null
null
null
q-bio.BM cs.AI cs.CE cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The integration of molecules and language has garnered increasing attention in molecular science. Recent advancements in Language Models (LMs) have demonstrated potential for the comprehensive modeling of molecules and language. However, existing works exhibit notable limitations. Most of them overlook the modeling of 3D information, which is crucial for understanding molecular structures and functions. While some attempts have been made to leverage external structure-encoding modules to inject 3D molecular information into LMs, obvious difficulties hinder the integration of molecular structure and language text, such as modality alignment and separate tuning. To bridge this gap, we propose 3D-MolT5, a unified framework designed to model both 1D molecular sequences and 3D molecular structures. The key innovation lies in our methodology for mapping fine-grained 3D substructure representations (based on 3D molecular fingerprints) to a specialized 3D token vocabulary for 3D-MolT5. This 3D structure token vocabulary enables the seamless combination of 1D sequence and 3D structure representations in a tokenized format, allowing 3D-MolT5 to encode molecular sequences (SELFIES), molecular structures, and text sequences within a unified architecture. Alongside, we further introduce joint 1D and 3D pre-training to enhance the model's comprehension of these diverse modalities in a joint representation space and to better generalize to various tasks for our foundation model. Through instruction tuning on multiple downstream datasets, our proposed 3D-MolT5 outperforms existing methods in molecular property prediction, molecule captioning, and text-based molecule generation tasks. Our code will be available on GitHub soon.
[ { "created": "Sun, 9 Jun 2024 14:20:55 GMT", "version": "v1" } ]
2024-06-11
[ [ "Pei", "Qizhi", "" ], [ "Wu", "Lijun", "" ], [ "Gao", "Kaiyuan", "" ], [ "Zhu", "Jinhua", "" ], [ "Yan", "Rui", "" ] ]
The integration of molecules and language has garnered increasing attention in molecular science. Recent advancements in Language Models (LMs) have demonstrated potential for the comprehensive modeling of molecules and language. However, existing works exhibit notable limitations. Most of them overlook the modeling of 3D information, which is crucial for understanding molecular structures and functions. While some attempts have been made to leverage external structure-encoding modules to inject 3D molecular information into LMs, obvious difficulties hinder the integration of molecular structure and language text, such as modality alignment and separate tuning. To bridge this gap, we propose 3D-MolT5, a unified framework designed to model both 1D molecular sequences and 3D molecular structures. The key innovation lies in our methodology for mapping fine-grained 3D substructure representations (based on 3D molecular fingerprints) to a specialized 3D token vocabulary for 3D-MolT5. This 3D structure token vocabulary enables the seamless combination of 1D sequence and 3D structure representations in a tokenized format, allowing 3D-MolT5 to encode molecular sequences (SELFIES), molecular structures, and text sequences within a unified architecture. Alongside, we further introduce joint 1D and 3D pre-training to enhance the model's comprehension of these diverse modalities in a joint representation space and to better generalize to various tasks for our foundation model. Through instruction tuning on multiple downstream datasets, our proposed 3D-MolT5 outperforms existing methods in molecular property prediction, molecule captioning, and text-based molecule generation tasks. Our code will be available on GitHub soon.
1906.04266
Anne-Florence Bitbol
Guillaume Marmier, Martin Weigt and Anne-Florence Bitbol
Phylogenetic correlations can suffice to infer protein partners from sequences
31 pages, 14 figures
PLoS Comput. Biol. 15(10): e1007179 (2019)
10.1371/journal.pcbi.1007179
null
q-bio.BM cond-mat.stat-mech physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining which proteins interact together is crucial to a systems-level understanding of the cell. Recently, algorithms based on Direct Coupling Analysis (DCA) pairwise maximum-entropy models have made it possible to identify interaction partners among paralogous proteins from sequence data. This success of DCA at predicting protein-protein interactions could be mainly based on its known ability to identify pairs of residues that are in contact in the three-dimensional structure of protein complexes and that coevolve to remain physicochemically complementary. However, interacting proteins possess similar evolutionary histories. What is the role of purely phylogenetic correlations in the performance of DCA-based methods for inferring interaction partners? To address this question, we employ controlled synthetic data that involve only phylogeny and no interactions or contacts. We find that DCA accurately identifies the pairs of synthetic sequences that share an evolutionary history. While phylogenetic correlations confound the identification of contacting residues by DCA, they are thus useful for predicting interacting partners among paralogs. We find that DCA performs as well as phylogenetic methods to this end, and slightly better than them with large and accurate training sets. Employing DCA or phylogenetic methods within an Iterative Pairing Algorithm (IPA) makes it possible to predict pairs of evolutionary partners without a training set. We demonstrate the ability of these various methods to correctly predict pairings among real paralogous proteins with genome proximity but no known physical interaction, illustrating the importance of phylogenetic correlations in natural data. However, for physically interacting and strongly coevolving proteins, DCA and mutual information outperform phylogenetic methods. We discuss how to distinguish physically interacting proteins from those that only share an evolutionary history.
[ { "created": "Mon, 10 Jun 2019 20:28:58 GMT", "version": "v1" }, { "created": "Wed, 4 Sep 2019 08:24:59 GMT", "version": "v2" } ]
2020-03-25
[ [ "Marmier", "Guillaume", "" ], [ "Weigt", "Martin", "" ], [ "Bitbol", "Anne-Florence", "" ] ]
Determining which proteins interact together is crucial to a systems-level understanding of the cell. Recently, algorithms based on Direct Coupling Analysis (DCA) pairwise maximum-entropy models have made it possible to identify interaction partners among paralogous proteins from sequence data. This success of DCA at predicting protein-protein interactions could be mainly based on its known ability to identify pairs of residues that are in contact in the three-dimensional structure of protein complexes and that coevolve to remain physicochemically complementary. However, interacting proteins possess similar evolutionary histories. What is the role of purely phylogenetic correlations in the performance of DCA-based methods for inferring interaction partners? To address this question, we employ controlled synthetic data that involve only phylogeny and no interactions or contacts. We find that DCA accurately identifies the pairs of synthetic sequences that share an evolutionary history. While phylogenetic correlations confound the identification of contacting residues by DCA, they are thus useful for predicting interacting partners among paralogs. We find that DCA performs as well as phylogenetic methods to this end, and slightly better than them with large and accurate training sets. Employing DCA or phylogenetic methods within an Iterative Pairing Algorithm (IPA) makes it possible to predict pairs of evolutionary partners without a training set. We demonstrate the ability of these various methods to correctly predict pairings among real paralogous proteins with genome proximity but no known physical interaction, illustrating the importance of phylogenetic correlations in natural data. However, for physically interacting and strongly coevolving proteins, DCA and mutual information outperform phylogenetic methods. We discuss how to distinguish physically interacting proteins from those that only share an evolutionary history.
q-bio/0410012
Satya N. Majumdar
Satya N. Majumdar and Sergei Nechaev
Exact Asymptotic Results for a Model of Sequence Alignment
4 pages Revtex, 2 .eps figures included
Phys. Rev. E, 72, 020901 (2005)
10.1103/PhysRevE.72.020901
null
q-bio.GN cond-mat.stat-mech math.ST stat.TH
null
Finding analytically the statistics of the longest common subsequence (LCS) of a pair of random sequences drawn from c alphabets is a challenging problem in computational evolutionary biology. We present exact asymptotic results for the distribution of the LCS in a simpler, yet nontrivial, variant of the original model called the Bernoulli matching (BM) model which reduces to the original model in the large c limit. We show that in the BM model, for all c, the distribution of the asymptotic length of the LCS, suitably scaled, is identical to the Tracy-Widom distribution of the largest eigenvalue of a random matrix whose entries are drawn from a Gaussian unitary ensemble. In particular, in the large c limit, this provides an exact expression for the asymptotic length distribution in the original LCS problem.
[ { "created": "Mon, 11 Oct 2004 23:01:35 GMT", "version": "v1" } ]
2009-11-10
[ [ "Majumdar", "Satya N.", "" ], [ "Nechaev", "Sergei", "" ] ]
Finding analytically the statistics of the longest common subsequence (LCS) of a pair of random sequences drawn from c alphabets is a challenging problem in computational evolutionary biology. We present exact asymptotic results for the distribution of the LCS in a simpler, yet nontrivial, variant of the original model called the Bernoulli matching (BM) model which reduces to the original model in the large c limit. We show that in the BM model, for all c, the distribution of the asymptotic length of the LCS, suitably scaled, is identical to the Tracy-Widom distribution of the largest eigenvalue of a random matrix whose entries are drawn from a Gaussian unitary ensemble. In particular, in the large c limit, this provides an exact expression for the asymptotic length distribution in the original LCS problem.
2405.06628
Gabriela Marinoschi
Alberto d'Onofrio, Mimmo Iannelli, Piero Manfredi, Gabriela Marinoschi
Optimal epidemic control by social distancing and vaccination of an infection structured by time since infection: the covid-19 case study
24 pages, 2 figures
SIAM J. Appl. Math, S199-S224, 2023
10.1137/22M1499406
null
q-bio.PE math.AP math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by the issue of COVID-19 mitigation, in this work we tackle the general problem of optimally controlling an epidemic outbreak of a communicable disease structured by time since exposure, by the aid of two types of control instruments namely, social distancing and vaccination by a vaccine at least partly effective in protecting from infection. Effective vaccines are assumed to be made available only in a subsequent period of the epidemic so that - in the first period - epidemic control only relies on social distancing, as it happened for the COVID-19 pandemic. By our analyses, we could prove the existence of (at least) one optimal control pair, we derived first-order necessary conditions for optimality, and proved some useful properties of such optimal solutions. A worked example provides a number of further insights on the relationships between key control and epidemic parameters.
[ { "created": "Fri, 10 May 2024 17:41:38 GMT", "version": "v1" } ]
2024-05-13
[ [ "d'Onofrio", "Alberto", "" ], [ "Iannelli", "Mimmo", "" ], [ "Manfredi", "Piero", "" ], [ "Marinoschi", "Gabriela", "" ] ]
Motivated by the issue of COVID-19 mitigation, in this work we tackle the general problem of optimally controlling an epidemic outbreak of a communicable disease structured by time since exposure, by the aid of two types of control instruments namely, social distancing and vaccination by a vaccine at least partly effective in protecting from infection. Effective vaccines are assumed to be made available only in a subsequent period of the epidemic so that - in the first period - epidemic control only relies on social distancing, as it happened for the COVID-19 pandemic. By our analyses, we could prove the existence of (at least) one optimal control pair, we derived first-order necessary conditions for optimality, and proved some useful properties of such optimal solutions. A worked example provides a number of further insights on the relationships between key control and epidemic parameters.
2101.10831
Subhasish Goswami
Mriganka Nath and Subhasish Goswami
Toxicity Detection in Drug Candidates using Simplified Molecular-Input Line-Entry System
4 Pages, 4 Figures, Published with International Journal of Computer Applications (IJCA)
International Journal of Computer Applications 175(21):1-4, September 2020
10.5120/ijca2020920695
null
q-bio.QM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The need for analysis of toxicity in new drug candidates and the requirement of doing it fast have asked the consideration of scientists towards the use of artificial intelligence tools to examine toxicity levels and to develop models to a degree where they can be used commercially to measure toxicity levels efficiently in upcoming drugs. Artificial Intelligence based models can be used to predict the toxic nature of a chemical using Quantitative Structure Activity Relationship techniques. Convolutional Neural Network models have demonstrated great outcomes in predicting the qualitative analysis of chemicals in order to determine the toxicity. This paper goes for the study of Simplified Molecular Input Line-Entry System (SMILES) as a parameter to develop Long short term memory (LSTM) based models in order to examine the toxicity of a molecule and the degree to which the need can be fulfilled for practical use alongside its future outlooks for the purpose of real world applications.
[ { "created": "Thu, 21 Jan 2021 07:02:21 GMT", "version": "v1" } ]
2021-01-27
[ [ "Nath", "Mriganka", "" ], [ "Goswami", "Subhasish", "" ] ]
The need for analysis of toxicity in new drug candidates and the requirement of doing it fast have asked the consideration of scientists towards the use of artificial intelligence tools to examine toxicity levels and to develop models to a degree where they can be used commercially to measure toxicity levels efficiently in upcoming drugs. Artificial Intelligence based models can be used to predict the toxic nature of a chemical using Quantitative Structure Activity Relationship techniques. Convolutional Neural Network models have demonstrated great outcomes in predicting the qualitative analysis of chemicals in order to determine the toxicity. This paper goes for the study of Simplified Molecular Input Line-Entry System (SMILES) as a parameter to develop Long short term memory (LSTM) based models in order to examine the toxicity of a molecule and the degree to which the need can be fulfilled for practical use alongside its future outlooks for the purpose of real world applications.
2302.05734
Peter Taylor
Yujiang Wang, Gabrielle M Schroeder, Jonathan J Horsley, Mariella Panagiotopoulou, Fahmida A Chowdhury, Beate Diehl, John S Duncan, Andrew W McEvoy, Anna Miserocchi, Jane de Tisi, Peter N Taylor
Temporal stability of intracranial EEG abnormality maps for localising epileptogenic tissue
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Objective: Identifying abnormalities in interictal intracranial EEG, by comparing patient data to a normative map, has shown promise for the localisation of epileptogenic tissue and prediction of outcome. The approach typically uses short interictal segments of around one minute. However, the temporal stability of findings has not been established. Methods: Here, we generated a normative map of iEEG in non-pathological brain tissue from 249 patients. We computed regional band power abnormalities in a separate cohort of 39 patients for the duration of their monitoring period (0.92-8.62 days of iEEG data, mean 4.58 days per patient, over 4,800 hours recording). To assess the localising value of band power abnormality, we computed DRS - a measure of how different the surgically resected and spared tissue were in terms of band power abnormalities - over time. Results: In each patient, band power abnormality was relatively consistent over time. The median DRS of the entire recording period separated seizure free (ILAE = 1) and not seizure free (ILAE > 1) patients well (AUC = 0.69). This effect was similar interictally (AUC = 0.69) and peri-ictally (AUC = 0.71). Significance: Our results suggest that band power abnormality DRS, as a predictor of outcomes from epilepsy surgery, is a relatively robust metric over time. These findings add further support for abnormality mapping of neurophysiology data during presurgical evaluation.
[ { "created": "Sat, 11 Feb 2023 16:00:31 GMT", "version": "v1" } ]
2023-02-14
[ [ "Wang", "Yujiang", "" ], [ "Schroeder", "Gabrielle M", "" ], [ "Horsley", "Jonathan J", "" ], [ "Panagiotopoulou", "Mariella", "" ], [ "Chowdhury", "Fahmida A", "" ], [ "Diehl", "Beate", "" ], [ "Duncan", "John S", "" ], [ "McEvoy", "Andrew W", "" ], [ "Miserocchi", "Anna", "" ], [ "de Tisi", "Jane", "" ], [ "Taylor", "Peter N", "" ] ]
Objective: Identifying abnormalities in interictal intracranial EEG, by comparing patient data to a normative map, has shown promise for the localisation of epileptogenic tissue and prediction of outcome. The approach typically uses short interictal segments of around one minute. However, the temporal stability of findings has not been established. Methods: Here, we generated a normative map of iEEG in non-pathological brain tissue from 249 patients. We computed regional band power abnormalities in a separate cohort of 39 patients for the duration of their monitoring period (0.92-8.62 days of iEEG data, mean 4.58 days per patient, over 4,800 hours recording). To assess the localising value of band power abnormality, we computed DRS - a measure of how different the surgically resected and spared tissue were in terms of band power abnormalities - over time. Results: In each patient, band power abnormality was relatively consistent over time. The median DRS of the entire recording period separated seizure free (ILAE = 1) and not seizure free (ILAE > 1) patients well (AUC = 0.69). This effect was similar interictally (AUC = 0.69) and peri-ictally (AUC = 0.71). Significance: Our results suggest that band power abnormality DRS, as a predictor of outcomes from epilepsy surgery, is a relatively robust metric over time. These findings add further support for abnormality mapping of neurophysiology data during presurgical evaluation.
q-bio/0611047
Andrei Zinovyev Dr.
Sebastian Ahnert, Thomas Fink, Andrei Zinovyev
How much non-coding DNA do eukaryotes require?
6 pages, 2 figures, 1 table, accepted for publication in J. Theor. Biol
null
null
null
q-bio.GN
null
Despite tremendous advances in the field of genomics, the amount and function of the large non-coding part of the genome in higher organisms remains poorly understood. Here we report an observation, made for 37 fully sequenced eukaryotic genomes, which indicates that eukaryotes require a certain minimum amount of non-coding DNA (ncDNA). This minimum increases quadratically with the amount of DNA located in exons. Based on a simple model of the growth of regulatory networks, we derive a theoretical prediction of the required quantity of ncDNA and find it to be in excellent agreement with the data. The amount of additional ncDNA (in basepairs) which eukaryotes require obeys Ndef = 1/2 (Nc / Np) (Nc - Np), where Nc is the amount of exonic DNA, and Np is a constant of about 10Mb. This value Ndef corresponds to a few percent of the genome in Homo sapiens and other mammals, and up to half the genome in simpler eukaryotes. Thus our findings confirm that eukaryotic life depends on a substantial fraction of ncDNA and also make a prediction of the size of this fraction, which matches the data closely.
[ { "created": "Wed, 15 Nov 2006 17:55:21 GMT", "version": "v1" }, { "created": "Tue, 5 Dec 2006 11:46:11 GMT", "version": "v2" }, { "created": "Thu, 8 Mar 2007 12:46:18 GMT", "version": "v3" }, { "created": "Tue, 5 Feb 2008 17:40:06 GMT", "version": "v4" } ]
2008-02-05
[ [ "Ahnert", "Sebastian", "" ], [ "Fink", "Thomas", "" ], [ "Zinovyev", "Andrei", "" ] ]
Despite tremendous advances in the field of genomics, the amount and function of the large non-coding part of the genome in higher organisms remains poorly understood. Here we report an observation, made for 37 fully sequenced eukaryotic genomes, which indicates that eukaryotes require a certain minimum amount of non-coding DNA (ncDNA). This minimum increases quadratically with the amount of DNA located in exons. Based on a simple model of the growth of regulatory networks, we derive a theoretical prediction of the required quantity of ncDNA and find it to be in excellent agreement with the data. The amount of additional ncDNA (in basepairs) which eukaryotes require obeys Ndef = 1/2 (Nc / Np) (Nc - Np), where Nc is the amount of exonic DNA, and Np is a constant of about 10Mb. This value Ndef corresponds to a few percent of the genome in Homo sapiens and other mammals, and up to half the genome in simpler eukaryotes. Thus our findings confirm that eukaryotic life depends on a substantial fraction of ncDNA and also make a prediction of the size of this fraction, which matches the data closely.
2306.07447
Eric Forgoston
Eric Forgoston, Sarah Day, Peter C. de Ruiter, Arjen Doelman, Nienke Hartemink, Alan Hastings, Lia Hemerik, Alexandru Hening, Josef Hofbauer, Sonia Kefi, David A. Kessler, Toni Klauschies, Christian Kuehn, Xiaoxiao Li, John C. Moore, Elly Morrien, Anje-Margriet Neutel, Jelena Pantel, Sebastian J. Schreiber, Leah B. Shaw, Nadav Shnerb, Eric Siero, Laura S. Storch, Michael A.S. Thorne, Ingrid van de Leemput, Ellen van Velzen, Els Weinans
Stability and Fluctuations in Complex Ecological Systems
22 pages
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
From 08-12 August, 2022, 32 individuals participated in a workshop, Stability and Fluctuations in Complex Ecological Systems, at the Lorentz Center, located in Leiden, The Netherlands. An interdisciplinary dialogue between ecologists, mathematicians, and physicists provided a foundation of important problems to consider over the next 5-10 years. This paper outlines eight areas including (1) improving our understanding of the effect of scale, both temporal and spatial, for both deterministic and stochastic problems; (2) clarifying the different terminologies and definitions used in different scientific fields; (3) developing a comprehensive set of data analysis techniques arising from different fields but which can be used together to improve our understanding of existing data sets; (4) having theoreticians/computational scientists collaborate closely with empirical ecologists to determine what new data should be collected; (5) improving our knowledge of how to protect and/or restore ecosystems; (6) incorporating socio-economic effects into models of ecosystems; (7) improving our understanding of the role of deterministic and stochastic fluctuations; (8) studying the current state of biodiversity at the functional level, taxa level and genome level.
[ { "created": "Mon, 12 Jun 2023 22:26:47 GMT", "version": "v1" } ]
2023-06-14
[ [ "Forgoston", "Eric", "" ], [ "Day", "Sarah", "" ], [ "de Ruiter", "Peter C.", "" ], [ "Doelman", "Arjen", "" ], [ "Hartemink", "Nienke", "" ], [ "Hastings", "Alan", "" ], [ "Hemerik", "Lia", "" ], [ "Hening", "Alexandru", "" ], [ "Hofbauer", "Josef", "" ], [ "Kefi", "Sonia", "" ], [ "Kessler", "David A.", "" ], [ "Klauschies", "Toni", "" ], [ "Kuehn", "Christian", "" ], [ "Li", "Xiaoxiao", "" ], [ "Moore", "John C.", "" ], [ "Morrien", "Elly", "" ], [ "Neutel", "Anje-Margriet", "" ], [ "Pantel", "Jelena", "" ], [ "Schreiber", "Sebastian J.", "" ], [ "Shaw", "Leah B.", "" ], [ "Shnerb", "Nadav", "" ], [ "Siero", "Eric", "" ], [ "Storch", "Laura S.", "" ], [ "Thorne", "Michael A. S.", "" ], [ "van de Leemput", "Ingrid", "" ], [ "van Velzen", "Ellen", "" ], [ "Weinans", "Els", "" ] ]
From 08-12 August, 2022, 32 individuals participated in a workshop, Stability and Fluctuations in Complex Ecological Systems, at the Lorentz Center, located in Leiden, The Netherlands. An interdisciplinary dialogue between ecologists, mathematicians, and physicists provided a foundation of important problems to consider over the next 5-10 years. This paper outlines eight areas including (1) improving our understanding of the effect of scale, both temporal and spatial, for both deterministic and stochastic problems; (2) clarifying the different terminologies and definitions used in different scientific fields; (3) developing a comprehensive set of data analysis techniques arising from different fields but which can be used together to improve our understanding of existing data sets; (4) having theoreticians/computational scientists collaborate closely with empirical ecologists to determine what new data should be collected; (5) improving our knowledge of how to protect and/or restore ecosystems; (6) incorporating socio-economic effects into models of ecosystems; (7) improving our understanding of the role of deterministic and stochastic fluctuations; (8) studying the current state of biodiversity at the functional level, taxa level and genome level.
1606.00258
Valeriy I. Sbitnev
Valeriy I. Sbitnev
Quantum consciousness in warm, wet, and noisy brain
20 pages, 11 figures
Mod. Phys. Lett. B, v. 30, 1650329 (25 pages), 2016
10.1142/S0217984916503292
null
q-bio.NC physics.flu-dyn quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The emergence of quantum consciousness stems from dynamic flows of hydrogen ions in brain liquid. This liquid contains vast areas of the fourth phase of water with hexagonal packing of its molecules, the so-called exclusion zone (EZ) of water. The hydrogen ion motion on such hexagonal lattices shows as the hopping of the ions forward and the holes (vacant places) backward, caused by the Grotthuss mechanism. By supporting this motion using external infrasound sources, one may achieve the appearance of the superfluid state of the EZ water. Flows of the hydrogen ions are described by the modified Navier-Stokes equation. It, along with the continuity equation, yields the nonlinear Schrodinger equation, which describes the quantum effects of these flows, such as the tunneling at long distances or the interference on gap junctions.
[ { "created": "Fri, 1 Apr 2016 10:47:02 GMT", "version": "v1" }, { "created": "Fri, 3 Jun 2016 07:25:56 GMT", "version": "v2" }, { "created": "Thu, 14 Jul 2016 08:26:28 GMT", "version": "v3" } ]
2016-10-13
[ [ "Sbitnev", "Valeriy I.", "" ] ]
The emergence of quantum consciousness stems from dynamic flows of hydrogen ions in brain liquid. This liquid contains vast areas of the fourth phase of water with hexagonal packing of its molecules, the so-called exclusion zone (EZ) of water. The hydrogen ion motion on such hexagonal lattices shows as the hopping of the ions forward and the holes (vacant places) backward, caused by the Grotthuss mechanism. By supporting this motion using external infrasound sources, one may achieve the appearance of the superfluid state of the EZ water. Flows of the hydrogen ions are described by the modified Navier-Stokes equation. It, along with the continuity equation, yields the nonlinear Schrodinger equation, which describes the quantum effects of these flows, such as the tunneling at long distances or the interference on gap junctions.
2407.12152
Rıza Özçelik
Rıza Özçelik, Francesca Grisoni
A Hitchhiker's Guide to Deep Chemical Language Processing for Bioactivity Prediction
null
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
Deep learning has significantly accelerated drug discovery, with 'chemical language' processing (CLP) emerging as a prominent approach. CLP learns from molecular string representations (e.g., Simplified Molecular Input Line Entry Systems [SMILES] and Self-Referencing Embedded Strings [SELFIES]) with methods akin to natural language processing. Despite their growing importance, training predictive CLP models is far from trivial, as it involves many 'bells and whistles'. Here, we analyze the key elements of CLP training, to provide guidelines for newcomers and experts alike. Our study spans three neural network architectures, two string representations, three embedding strategies, across ten bioactivity datasets, for both classification and regression purposes. This 'hitchhiker's guide' not only underscores the importance of certain methodological choices, but it also equips researchers with practical recommendations on ideal choices, e.g., in terms of neural network architectures, molecular representations, and hyperparameter optimization.
[ { "created": "Tue, 16 Jul 2024 20:13:31 GMT", "version": "v1" } ]
2024-07-18
[ [ "Özçelik", "Rıza", "" ], [ "Grisoni", "Francesca", "" ] ]
Deep learning has significantly accelerated drug discovery, with 'chemical language' processing (CLP) emerging as a prominent approach. CLP learns from molecular string representations (e.g., Simplified Molecular Input Line Entry Systems [SMILES] and Self-Referencing Embedded Strings [SELFIES]) with methods akin to natural language processing. Despite their growing importance, training predictive CLP models is far from trivial, as it involves many 'bells and whistles'. Here, we analyze the key elements of CLP training, to provide guidelines for newcomers and experts alike. Our study spans three neural network architectures, two string representations, three embedding strategies, across ten bioactivity datasets, for both classification and regression purposes. This 'hitchhiker's guide' not only underscores the importance of certain methodological choices, but it also equips researchers with practical recommendations on ideal choices, e.g., in terms of neural network architectures, molecular representations, and hyperparameter optimization.
1107.3443
Henry Tuckwell
Henry C Tuckwell
Quantitative aspects of L-type calcium currents
92 pages 12 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Calcium currents in neurons and muscle cells have been classified as being one of 5 types of which four, L, N, P/Q and R were said to be high threshold and one, T, was designated low threshold. This review focuses on quantitative aspects of L-type currents. L-type channels are now distinguished according to their structure as one of four main subtypes 1.1-1.4. L-type calcium currents play many fundamental roles in cellular dynamical processes including pacemaking in neurons and cardiac cells, the activation of transcription factors involved in synaptic plasticity and in immune cells. The half-activation potentials of L-type currents have been ascribed values as low as -50 mV and as high as near 0 mV. The inactivation of I_L has been found to be both voltage (VDI) and calcium-dependent (CDI) and the latter component may involve calcium-induced calcium release. CDI is often an important aspect of dynamical models of cell electrophysiology. We describe the basic components in modeling I_L including activation and both voltage and calcium dependent inactivation and the two main approaches to determining the current. We review, by means of tables of values from over 65 representative studies, the various details of the dynamical properties associated with I_L that have been found experimentally or employed in the last 25 years in deterministic modeling in various nervous system and cardiac cells. Distributions and statistics of several parameters related to activation and inactivation are obtained. There are few reliable experimental data on L-type calcium current kinetics for cells at physiological calcium ion concentrations. Neurons are divided approximately into two groups with experimental half-activation potentials.
[ { "created": "Mon, 18 Jul 2011 14:14:09 GMT", "version": "v1" } ]
2011-07-19
[ [ "Tuckwell", "Henry C", "" ] ]
Calcium currents in neurons and muscle cells have been classified as being one of 5 types of which four, L, N, P/Q and R were said to be high threshold and one, T, was designated low threshold. This review focuses on quantitative aspects of L-type currents. L-type channels are now distinguished according to their structure as one of four main subtypes 1.1-1.4. L-type calcium currents play many fundamental roles in cellular dynamical processes including pacemaking in neurons and cardiac cells, the activation of transcription factors involved in synaptic plasticity and in immune cells. The half-activation potentials of L-type currents have been ascribed values as low as -50 mV and as high as near 0 mV. The inactivation of I_L has been found to be both voltage (VDI) and calcium-dependent (CDI) and the latter component may involve calcium-induced calcium release. CDI is often an important aspect of dynamical models of cell electrophysiology. We describe the basic components in modeling I_L including activation and both voltage and calcium dependent inactivation and the two main approaches to determining the current. We review, by means of tables of values from over 65 representative studies, the various details of the dynamical properties associated with I_L that have been found experimentally or employed in the last 25 years in deterministic modeling in various nervous system and cardiac cells. Distributions and statistics of several parameters related to activation and inactivation are obtained. There are few reliable experimental data on L-type calcium current kinetics for cells at physiological calcium ion concentrations. Neurons are divided approximately into two groups with experimental half-activation potentials.
1410.8580
Matthew Lawlor
Matthew Lawlor and Steven Zucker
An Online Algorithm for Learning Selectivity to Mixture Means
Extended technical companion to a presentation at NIPS 2014
null
null
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a biologically-plausible learning rule called Triplet BCM that provably converges to the class means of general mixture models. This rule generalizes the classical BCM neural rule, and provides a novel interpretation of classical BCM as performing a kind of tensor decomposition. It achieves a substantial generalization over classical BCM by incorporating triplets of samples from the mixtures, which provides a novel information processing interpretation to spike-timing-dependent plasticity. We provide complete proofs of convergence of this learning rule, and an extended discussion of the connection between BCM and tensor learning.
[ { "created": "Thu, 30 Oct 2014 22:37:41 GMT", "version": "v1" } ]
2014-11-03
[ [ "Lawlor", "Matthew", "" ], [ "Zucker", "Steven", "" ] ]
We develop a biologically-plausible learning rule called Triplet BCM that provably converges to the class means of general mixture models. This rule generalizes the classical BCM neural rule, and provides a novel interpretation of classical BCM as performing a kind of tensor decomposition. It achieves a substantial generalization over classical BCM by incorporating triplets of samples from the mixtures, which provides a novel information processing interpretation to spike-timing-dependent plasticity. We provide complete proofs of convergence of this learning rule, and an extended discussion of the connection between BCM and tensor learning.
1408.6345
Benedikt Obermayer
Alexander Seeholzer, Erwin Frey, and Benedikt Obermayer
Periodic vs. intermittent adaptive cycles in quasispecies co-evolution
5 pages, 3 figures, 11 pages supplementary material; updated formatting; accepted at Phys. Rev. Lett
null
10.1103/PhysRevLett.113.128101
null
q-bio.PE
http://creativecommons.org/licenses/by/3.0/
We study an abstract model for the co-evolution between mutating viruses and the adaptive immune system. In sequence space, these two populations are localized around transiently dominant strains. Delocalization or error thresholds exhibit a novel interdependence because immune response is conditional on the viral attack. An evolutionary chase is induced by stochastic fluctuations and can occur via periodic or intermittent cycles. Using simulations and stochastic analysis, we show how the transition between these two dynamic regimes depends on mutation rate, immune response, and population size.
[ { "created": "Wed, 27 Aug 2014 08:37:06 GMT", "version": "v1" }, { "created": "Wed, 10 Sep 2014 13:16:59 GMT", "version": "v2" } ]
2015-06-22
[ [ "Seeholzer", "Alexander", "" ], [ "Frey", "Erwin", "" ], [ "Obermayer", "Benedikt", "" ] ]
We study an abstract model for the co-evolution between mutating viruses and the adaptive immune system. In sequence space, these two populations are localized around transiently dominant strains. Delocalization or error thresholds exhibit a novel interdependence because immune response is conditional on the viral attack. An evolutionary chase is induced by stochastic fluctuations and can occur via periodic or intermittent cycles. Using simulations and stochastic analysis, we show how the transition between these two dynamic regimes depends on mutation rate, immune response, and population size.
1802.02268
Shiang Hu
Shiang Hu, Dezhong Yao, Pedro A. Valdes-Sosa
Unified Bayesian estimator of EEG reference at infinity: rREST
null
https://www.frontiersin.org/articles/10.3389/fnins.2018.00297/full
10.3389/fnins.2018.00297
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The choice of reference for electroencephalogram (EEG) is a long-lasting unsolved issue resulting in inconsistent usages and endless debates. Currently, both average reference (AR) and reference electrode standardization technique (REST) are two primary, irreconcilable contenders. We propose a theoretical framework to resolve this issue by formulating both a) estimation of potentials at infinity, and, b) determination of the reference, as a unified Bayesian linear inverse problem. We find that AR and REST are very particular cases of this unified framework: AR results from biophysically non-informative prior; while REST utilizes the prior of EEG generative model. We develop the regularized versions of AR and REST, named rAR, and rREST, respectively. Both depend on a regularization parameter that is the noise to signal ratio. Traditional and new estimators are evaluated with this framework, by both simulations and analysis of real EEGs. Generated artificial EEGs, show that relative error in estimating the EEG potentials at infinity is lowest for rREST. It also reveals that realistic volume conductor models improve the performances of REST and rREST. For practical applications, it is shown that average lead field gives the results comparable to the individual lead field. Finally, it is shown that the selection of the regularization parameter with Generalized Cross-Validation (GCV) is close to the 'oracle' choice based on the ground truth. When evaluated with the real 89 resting state EEGs, rREST consistently yields the lowest GCV. This study provides a novel perspective on the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance.
[ { "created": "Wed, 7 Feb 2018 00:12:54 GMT", "version": "v1" } ]
2018-05-04
[ [ "Hu", "Shiang", "" ], [ "Yao", "Dezhong", "" ], [ "Valdes-Sosa", "Pedro A.", "" ] ]
The choice of reference for electroencephalogram (EEG) is a long-lasting unsolved issue resulting in inconsistent usages and endless debates. Currently, both average reference (AR) and reference electrode standardization technique (REST) are two primary, irreconcilable contenders. We propose a theoretical framework to resolve this issue by formulating both a) estimation of potentials at infinity, and, b) determination of the reference, as a unified Bayesian linear inverse problem. We find that AR and REST are very particular cases of this unified framework: AR results from biophysically non-informative prior; while REST utilizes the prior of EEG generative model. We develop the regularized versions of AR and REST, named rAR, and rREST, respectively. Both depend on a regularization parameter that is the noise to signal ratio. Traditional and new estimators are evaluated with this framework, by both simulations and analysis of real EEGs. Generated artificial EEGs, show that relative error in estimating the EEG potentials at infinity is lowest for rREST. It also reveals that realistic volume conductor models improve the performances of REST and rREST. For practical applications, it is shown that average lead field gives the results comparable to the individual lead field. Finally, it is shown that the selection of the regularization parameter with Generalized Cross-Validation (GCV) is close to the 'oracle' choice based on the ground truth. When evaluated with the real 89 resting state EEGs, rREST consistently yields the lowest GCV. This study provides a novel perspective on the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance.
0901.3215
Laurent Noe
Gregory Kucherov (LIFL, INRIA Lille - Nord Europe), Laurent No\'e (LIFL, INRIA Lille - Nord Europe), Mikhail A. Roytberg (IMPB)
Multiseed Lossless Filtration
null
IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2 (1) : 51-61, 2005
10.1109/TCBB.2005.12
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a method of seed-based lossless filtration for approximate string matching and related bioinformatics applications. The method is based on a simultaneous use of several spaced seeds rather than a single seed as studied by Burkhardt and K\"arkk\"ainen [1]. We present algorithms to compute several important parameters of seed families, study their combinatorial properties, and describe several techniques to construct efficient families. We also report a large-scale application of the proposed technique to the problem of oligonucleotide selection for an EST sequence database.
[ { "created": "Wed, 21 Jan 2009 09:48:56 GMT", "version": "v1" } ]
2011-01-18
[ [ "Kucherov", "Gregory", "", "LIFL, INRIA Lille - Nord Europe" ], [ "Noé", "Laurent", "", "LIFL, INRIA Lille - Nord Europe" ], [ "Roytberg", "Mikhail A.", "", "IMPB" ] ]
We study a method of seed-based lossless filtration for approximate string matching and related bioinformatics applications. The method is based on a simultaneous use of several spaced seeds rather than a single seed as studied by Burkhardt and K\"arkk\"ainen [1]. We present algorithms to compute several important parameters of seed families, study their combinatorial properties, and describe several techniques to construct efficient families. We also report a large-scale application of the proposed technique to the problem of oligonucleotide selection for an EST sequence database.
q-bio/0309005
Chi Ming Yang Dr.
Chi Ming Yang
On the 20 canonical amino acids by a cooperative vector-addition principle based on the quasi-28-gon symmetry of the genetic code
5 pages, 3 figures and 2 schemes
null
null
null
q-bio.BM q-bio.GN
null
Taking the covalent-bonding hybridization of the nitrogen atoms as a measure of structural regularity in nucleobases, it can be shown that the internal relations among the 20 amino acids follow a cooperative vector-in-space addition principle based on the spherical and rotational symmetry of a quasi-28-gon (quasi-icosikaioctagon), with two evolutionary axes.
[ { "created": "Thu, 18 Sep 2003 04:00:46 GMT", "version": "v1" } ]
2007-05-23
[ [ "Yang", "Chi Ming", "" ] ]
Taking the covalent-bonding hybridization of the nitrogen atoms as a measure of structural regularity in nucleobases, it can be shown that the internal relations among the 20 amino acids follow a cooperative vector-in-space addition principle based on the spherical and rotational symmetry of a quasi-28-gon (quasi-icosikaioctagon), with two evolutionary axes.
1707.07135
Dante Chialvo
Riccardo Gallotti and Dante R. Chialvo
How ants move: individual and collective scaling properties
null
J. R. Soc. Interface 15: 20180223 (2018)
10.1098/rsif.2018.0223
null
q-bio.QM nlin.AO physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The motion of social insects is often used as a paradigmatic example of complex adaptive dynamics arising from decentralized individual behavior. In this paper we revisit the laws governing bursts of activity in ants. The analysis, performed on previously reported data, reconsiders the causation arrows proposed at the individual level, finding no link between the duration of an ant's activity and its moving speed. Secondly, synthetic trajectories created from the steps of different ants demonstrate that a Markov process can explain the previously reported speed-shape profile. Finally, we show that as more ants enter the nest, the faster they move, which implies a collective property. Overall, these results provide a mechanistic explanation for the reported behavioral laws and suggest a formal way to further study collective properties in these scenarios.
[ { "created": "Sat, 22 Jul 2017 10:35:50 GMT", "version": "v1" }, { "created": "Wed, 25 Jul 2018 21:35:37 GMT", "version": "v2" } ]
2018-07-27
[ [ "Gallotti", "Riccardo", "" ], [ "Chialvo", "Dante R.", "" ] ]
The motion of social insects is often used as a paradigmatic example of complex adaptive dynamics arising from decentralized individual behavior. In this paper we revisit the laws governing bursts of activity in ants. The analysis, performed on previously reported data, reconsiders the causation arrows proposed at the individual level, finding no link between the duration of an ant's activity and its moving speed. Secondly, synthetic trajectories created from the steps of different ants demonstrate that a Markov process can explain the previously reported speed-shape profile. Finally, we show that as more ants enter the nest, the faster they move, which implies a collective property. Overall, these results provide a mechanistic explanation for the reported behavioral laws and suggest a formal way to further study collective properties in these scenarios.
1812.09692
Antonia Mey
Nora Molkenthin, Steffen M\"uhle, Antonia S J S Mey, Marc Timme
Geometric constraints in protein folding
null
null
10.1371/journal.pone.0229230
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
The intricate three-dimensional geometries of protein tertiary structures underlie protein function and emerge through a folding process from one-dimensional chains of amino acids. The exact spatial sequence and configuration of amino acids, the biochemical environment and the temporal sequence of distinct interactions yield a complex folding process that cannot yet be easily tracked for all proteins. To gain qualitative insights into the fundamental mechanisms behind the folding dynamics and generic features of the folded structure, we propose a simple model of structure formation that takes into account only fundamental geometric constraints and otherwise assumes randomly paired connections. We find that despite its simplicity, the model results in a network ensemble consistent with key overall features of the ensemble of Protein Residue Networks we obtained from more than 1000 biological protein geometries as available through the Protein Data Bank. Specifically, the distribution of the number of interaction neighbors a unit (amino acid) has, the scaling of the structure's spatial extent with chain length, the eigenvalue spectrum and the scaling of the smallest relaxation time with chain length are all consistent between model and real proteins. These results indicate that geometric constraints alone may already account for a number of generic features of protein tertiary structures.
[ { "created": "Sun, 23 Dec 2018 11:30:26 GMT", "version": "v1" } ]
2021-02-24
[ [ "Molkenthin", "Nora", "" ], [ "Mühle", "Steffen", "" ], [ "Mey", "Antonia S J S", "" ], [ "Timme", "Marc", "" ] ]
The intricate three-dimensional geometries of protein tertiary structures underlie protein function and emerge through a folding process from one-dimensional chains of amino acids. The exact spatial sequence and configuration of amino acids, the biochemical environment and the temporal sequence of distinct interactions yield a complex folding process that cannot yet be easily tracked for all proteins. To gain qualitative insights into the fundamental mechanisms behind the folding dynamics and generic features of the folded structure, we propose a simple model of structure formation that takes into account only fundamental geometric constraints and otherwise assumes randomly paired connections. We find that despite its simplicity, the model results in a network ensemble consistent with key overall features of the ensemble of Protein Residue Networks we obtained from more than 1000 biological protein geometries as available through the Protein Data Bank. Specifically, the distribution of the number of interaction neighbors a unit (amino acid) has, the scaling of the structure's spatial extent with chain length, the eigenvalue spectrum and the scaling of the smallest relaxation time with chain length are all consistent between model and real proteins. These results indicate that geometric constraints alone may already account for a number of generic features of protein tertiary structures.
1606.08294
Uwe C. T\"auber
Jacob Carroll (Virginia Tech), Matthew Raum (Baker Hughes), Kimberly Forsten-Williams (Duquesne University), and Uwe C. T\"auber (Virginia Tech)
Ligand-receptor binding kinetics in surface plasmon resonance cells: A Monte Carlo analysis
21 pages, 9 figures
Phys. Biol. 13 (2016) 066010
10.1088/1478-3975/13/6/066010
null
q-bio.QM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Surface plasmon resonance (SPR) chips are widely used to measure association and dissociation rates for the binding kinetics between two species of chemicals, e.g., cell receptors and ligands. It is commonly assumed that ligands are spatially well mixed in the SPR region, and hence a mean-field rate equation description is appropriate. This approximation however ignores the spatial fluctuations as well as temporal correlations induced by multiple local rebinding events, which become prominent for slow diffusion rates and high binding affinities. We report detailed Monte Carlo simulations of ligand binding kinetics in an SPR cell subject to laminar flow. We extract the binding and dissociation rates by means of the techniques frequently employed in experimental analysis that are motivated by the mean-field approximation. We find major discrepancies in a wide parameter regime between the thus extracted rates and the known input simulation values. These results underscore the crucial quantitative importance of spatio-temporal correlations in binary reaction kinetics in SPR cell geometries, and demonstrate the failure of a mean-field analysis of SPR cells in the regime of high Damk\"ohler number Da > 0.1, where the spatio-temporal correlations due to diffusive transport and ligand-receptor rebinding events dominate the dynamics of SPR systems.
[ { "created": "Mon, 27 Jun 2016 14:42:10 GMT", "version": "v1" }, { "created": "Wed, 14 Sep 2016 20:23:51 GMT", "version": "v2" } ]
2016-12-07
[ [ "Carroll", "Jacob", "", "Virginia Tech" ], [ "Raum", "Matthew", "", "Baker Hughes" ], [ "Forsten-Williams", "Kimberly", "", "Duquesne University" ], [ "Täuber", "Uwe C.", "", "Virginia Tech" ] ]
Surface plasmon resonance (SPR) chips are widely used to measure association and dissociation rates for the binding kinetics between two species of chemicals, e.g., cell receptors and ligands. It is commonly assumed that ligands are spatially well mixed in the SPR region, and hence a mean-field rate equation description is appropriate. This approximation however ignores the spatial fluctuations as well as temporal correlations induced by multiple local rebinding events, which become prominent for slow diffusion rates and high binding affinities. We report detailed Monte Carlo simulations of ligand binding kinetics in an SPR cell subject to laminar flow. We extract the binding and dissociation rates by means of the techniques frequently employed in experimental analysis that are motivated by the mean-field approximation. We find major discrepancies in a wide parameter regime between the thus extracted rates and the known input simulation values. These results underscore the crucial quantitative importance of spatio-temporal correlations in binary reaction kinetics in SPR cell geometries, and demonstrate the failure of a mean-field analysis of SPR cells in the regime of high Damk\"ohler number Da > 0.1, where the spatio-temporal correlations due to diffusive transport and ligand-receptor rebinding events dominate the dynamics of SPR systems.
2202.06634
Yousef Jamali
Sheida Kazemi and Yousef Jamali
On the influence of input triggering on the dynamics of the Jansen-Rit oscillators network
null
null
null
null
q-bio.NC cond-mat.dis-nn cond-mat.soft physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
We investigate the dynamical properties of a network of coupled neural oscillators with identical Jansen-Rit masses. The connections between nodes follow the regular Watts-Strogatz topology. Each node receives a deterministic external input plus an internal input based on the output signals of its neighbors. This paper aims to vary these two inputs and analyze the resulting dynamics. First, we analyze the model using the mean-field approximation, i.e., identical inputs for all nodes; then this assumption is relaxed, and a more detailed analysis of the two cases is discussed. We use the Pearson correlation coefficient to measure the degree of synchronization. Under the mean-field approach, we find that despite observed changes in behavior, there is no phase transition. Surprisingly, both first-order (discontinuous) and second-order (continuous) phase transitions occur when the mean-field assumption is relaxed. We also demonstrate how changes in input can result in pathological oscillations similar to those observed in epilepsy. The results show that coupled Jansen-Rit masses exhibit a variety of behaviors influenced by various external and internal inputs. Moreover, our findings indicate that delta waves can be generated by altering these inputs in a network of Jansen-Rit neural mass models, which has not previously been observed in analyses of a single Jansen-Rit neural mass model. Overall, a wide range of behavioral patterns, including epilepsy, healthy behavior, and the transition between synchrony and asynchrony, are examined comprehensively in this paper. Moreover, this work highlights the putative contribution of external and internal inputs to studying phase transitions and synchronization in neural mass models.
[ { "created": "Mon, 14 Feb 2022 11:41:28 GMT", "version": "v1" }, { "created": "Wed, 16 Feb 2022 17:33:08 GMT", "version": "v2" } ]
2022-02-17
[ [ "Kazemi", "Sheida", "" ], [ "Jamali", "Yousef", "" ] ]
We investigate the dynamical properties of a network of coupled neural oscillators with identical Jansen-Rit masses. The connections between nodes follow the regular Watts-Strogatz topology. Each node receives a deterministic external input plus an internal input based on the output signals of its neighbors. This paper aims to vary these two inputs and analyze the resulting dynamics. First, we analyze the model using the mean-field approximation, i.e., identical inputs for all nodes; then this assumption is relaxed, and a more detailed analysis of the two cases is discussed. We use the Pearson correlation coefficient to measure the degree of synchronization. Under the mean-field approach, we find that despite observed changes in behavior, there is no phase transition. Surprisingly, both first-order (discontinuous) and second-order (continuous) phase transitions occur when the mean-field assumption is relaxed. We also demonstrate how changes in input can result in pathological oscillations similar to those observed in epilepsy. The results show that coupled Jansen-Rit masses exhibit a variety of behaviors influenced by various external and internal inputs. Moreover, our findings indicate that delta waves can be generated by altering these inputs in a network of Jansen-Rit neural mass models, which has not previously been observed in analyses of a single Jansen-Rit neural mass model. Overall, a wide range of behavioral patterns, including epilepsy, healthy behavior, and the transition between synchrony and asynchrony, are examined comprehensively in this paper. Moreover, this work highlights the putative contribution of external and internal inputs to studying phase transitions and synchronization in neural mass models.
1004.4031
Rhonda Dzakpasu
Xin Chen and Rhonda Dzakpasu
Observed network dynamics from altering the balance between excitatory and inhibitory neurons in cultured networks
null
Phys Rev E, 82, 031907 (2010)
10.1103/PhysRevE.82.031907
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complexity in the temporal organization of neural systems may be a reflection of the diversity of its neural constituents. These constituents, excitatory and inhibitory neurons, comprise an invariant ratio in vivo and form the substrate for rhythmic oscillatory activity. To begin to elucidate the dynamical mechanisms that underlie this balance, we construct novel neural circuits not ordinarily found in nature. We culture several networks of neurons composed of excitatory and inhibitory cells and use a multi-electrode array to study their temporal dynamics as the balance is modulated. We use the electrode burst as the temporal imprimatur to signify the presence of network activity. Burst durations, inter-burst intervals, and the number of spikes participating within a burst are used to illustrate the vivid dynamical differences between the various cultured networks. When the network consists largely of excitatory neurons, no network temporal structure is apparent. However, the addition of inhibitory neurons evokes a temporal order. Calculation of the temporal autocorrelation shows that when the number of inhibitory neurons is a major fraction of the network, a striking network pattern materializes when none was previously present.
[ { "created": "Fri, 23 Apr 2010 00:21:19 GMT", "version": "v1" }, { "created": "Wed, 8 Sep 2010 02:13:27 GMT", "version": "v2" } ]
2015-05-18
[ [ "Chen", "Xin", "" ], [ "Dzakpasu", "Rhonda", "" ] ]
Complexity in the temporal organization of neural systems may be a reflection of the diversity of its neural constituents. These constituents, excitatory and inhibitory neurons, comprise an invariant ratio in vivo and form the substrate for rhythmic oscillatory activity. To begin to elucidate the dynamical mechanisms that underlie this balance, we construct novel neural circuits not ordinarily found in nature. We culture several networks of neurons composed of excitatory and inhibitory cells and use a multi-electrode array to study their temporal dynamics as the balance is modulated. We use the electrode burst as the temporal imprimatur to signify the presence of network activity. Burst durations, inter-burst intervals, and the number of spikes participating within a burst are used to illustrate the vivid dynamical differences between the various cultured networks. When the network consists largely of excitatory neurons, no network temporal structure is apparent. However, the addition of inhibitory neurons evokes a temporal order. Calculation of the temporal autocorrelation shows that when the number of inhibitory neurons is a major fraction of the network, a striking network pattern materializes when none was previously present.
2209.14664
Jitao David Zhang
Tom Michoel and Jitao David Zhang
Causal inference in drug discovery and development
null
null
null
null
q-bio.QM cs.LG stat.AP
http://creativecommons.org/licenses/by-nc-nd/4.0/
To discover new drugs is to seek and to prove causality. As an emerging approach leveraging human knowledge and creativity, data, and machine intelligence, causal inference holds the promise of reducing cognitive bias and improving decision making in drug discovery. While it has been applied across the value chain, the concepts and practice of causal inference remain obscure to many practitioners. This article offers a non-technical introduction to causal inference, reviews its recent applications, and discusses opportunities and challenges of adopting the causal language in drug discovery and development.
[ { "created": "Thu, 29 Sep 2022 09:54:18 GMT", "version": "v1" } ]
2022-09-30
[ [ "Michoel", "Tom", "" ], [ "Zhang", "Jitao David", "" ] ]
To discover new drugs is to seek and to prove causality. As an emerging approach leveraging human knowledge and creativity, data, and machine intelligence, causal inference holds the promise of reducing cognitive bias and improving decision making in drug discovery. While it has been applied across the value chain, the concepts and practice of causal inference remain obscure to many practitioners. This article offers a non-technical introduction to causal inference, reviews its recent applications, and discusses opportunities and challenges of adopting the causal language in drug discovery and development.
1402.5214
Takahiro Kohsokabe
Takahiro Kohsokabe and Kunihiko Kaneko
Evolution-Development Congruence in Pattern Formation Dynamics: Bifurcations in Gene Expressions and Regulation of Networks Structures
null
null
null
null
q-bio.PE physics.bio-ph q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The search for possible relationships between phylogeny and ontogeny is one of the most important issues in evolutionary developmental biology. Representing the developmental dynamics of spatially located cells by gene-expression dynamics with cell-to-cell interaction under an external morphogen gradient, gene regulation networks are evolved under mutation and selection, with fitness given by the approach to a prescribed spatial pattern of expressed genes. In most of thousands of numerical evolution experiments, the evolution of the pattern over generations and the development of the pattern by an evolved network exhibit remarkable congruence. Both pattern dynamics consist of several epochs in which successive stripes form between quasi-stationary regimes. In evolution, the regimes are the generations needed to hit relevant mutations, while in development they are due to the emergence of slowly varying expression that controls the pattern change. Successive pattern changes are thus generated, regulated by successive combinations of feedback or feedforward regulation under the upstream feedforward network that reads the morphogen gradient. Using a pattern generated by the upstream feedforward network as a boundary condition, downstream networks form later stripe patterns. These epochal changes in development and evolution are represented by the same bifurcations in dynamical-systems theory, and this agreement of bifurcations leads to the evolution-development congruence. Violations of the evolution-development congruence, observed exceptionally, are shown to originate in alterations of the boundary due to mutations in the upstream feedforward network. Our results provide a new perspective on developmental stages, punctuated equilibrium, developmental bottlenecks, and the evolutionary acquisition of novelty in morphogenesis.
[ { "created": "Fri, 21 Feb 2014 06:35:54 GMT", "version": "v1" }, { "created": "Tue, 31 Mar 2015 00:59:06 GMT", "version": "v2" } ]
2015-04-01
[ [ "Kohsokabe", "Takahiro", "" ], [ "Kaneko", "Kunihiko", "" ] ]
The search for possible relationships between phylogeny and ontogeny is one of the most important issues in evolutionary developmental biology. Representing the developmental dynamics of spatially located cells by gene-expression dynamics with cell-to-cell interaction under an external morphogen gradient, gene regulation networks are evolved under mutation and selection, with fitness given by the approach to a prescribed spatial pattern of expressed genes. In most of thousands of numerical evolution experiments, the evolution of the pattern over generations and the development of the pattern by an evolved network exhibit remarkable congruence. Both pattern dynamics consist of several epochs in which successive stripes form between quasi-stationary regimes. In evolution, the regimes are the generations needed to hit relevant mutations, while in development they are due to the emergence of slowly varying expression that controls the pattern change. Successive pattern changes are thus generated, regulated by successive combinations of feedback or feedforward regulation under the upstream feedforward network that reads the morphogen gradient. Using a pattern generated by the upstream feedforward network as a boundary condition, downstream networks form later stripe patterns. These epochal changes in development and evolution are represented by the same bifurcations in dynamical-systems theory, and this agreement of bifurcations leads to the evolution-development congruence. Violations of the evolution-development congruence, observed exceptionally, are shown to originate in alterations of the boundary due to mutations in the upstream feedforward network. Our results provide a new perspective on developmental stages, punctuated equilibrium, developmental bottlenecks, and the evolutionary acquisition of novelty in morphogenesis.
2001.03693
Mason Youngblood
Mason Youngblood
A Raspberry Pi-based, RFID-equipped birdfeeder for the remote monitoring of wild bird populations
null
null
null
null
q-bio.QM q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Radio-frequency identification (RFID) is an increasingly popular wireless technology that allows researchers to monitor wild bird populations from fixed locations in the field. Our lab has developed an RFID-equipped birdfeeder based on the Raspberry Pi Zero W, a low-cost single-board computer, that collects continuous visitation data from birds tagged with passive integrated transponder (PIT) tags. Each birdfeeder has a perch antenna connected to an RFID reader board on a Raspberry Pi powered by a portable battery. When a tagged bird lands on the perch to eat from the feeder, its unique code is stored with the date and time on the Raspberry Pi. These birdfeeders require only basic soldering and coding skills to assemble, and can be easily outfitted with additional hardware like video cameras and microphones. We outline the process of assembling the hardware and setting up the operating system for the birdfeeders. Then, we describe an example implementation of the birdfeeders to track house finches (Haemorhous mexicanus) on the campus of Queens College in New York City.
[ { "created": "Sat, 11 Jan 2020 00:04:15 GMT", "version": "v1" } ]
2020-01-14
[ [ "Youngblood", "Mason", "" ] ]
Radio-frequency identification (RFID) is an increasingly popular wireless technology that allows researchers to monitor wild bird populations from fixed locations in the field. Our lab has developed an RFID-equipped birdfeeder based on the Raspberry Pi Zero W, a low-cost single-board computer, that collects continuous visitation data from birds tagged with passive integrated transponder (PIT) tags. Each birdfeeder has a perch antenna connected to an RFID reader board on a Raspberry Pi powered by a portable battery. When a tagged bird lands on the perch to eat from the feeder, its unique code is stored with the date and time on the Raspberry Pi. These birdfeeders require only basic soldering and coding skills to assemble, and can be easily outfitted with additional hardware like video cameras and microphones. We outline the process of assembling the hardware and setting up the operating system for the birdfeeders. Then, we describe an example implementation of the birdfeeders to track house finches (Haemorhous mexicanus) on the campus of Queens College in New York City.
2110.12392
Ashutosh Singh
Ashutosh Singh, Christiana Westlin, Hedwig Eisenbarth, Elizabeth A. Reynolds Losin, Jessica R. Andrews-Hanna, Tor D. Wager, Ajay B. Satpute, Lisa Feldman Barrett, Dana H. Brooks, Deniz Erdogmus
Variation is the Norm: Brain State Dynamics Evoked By Emotional Video Clips
null
null
null
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For the last several decades, emotion research has attempted to identify a "biomarker" or consistent pattern of brain activity to characterize a single category of emotion (e.g., fear) that will remain consistent across all instances of that category, regardless of individual and context. In this study, we investigated variation rather than consistency during emotional experiences while people watched video clips chosen to evoke instances of specific emotion categories. Specifically, we developed a sequential probabilistic approach to model the temporal dynamics in a participant's brain activity during video viewing. We characterized brain states during these clips as distinct state occupancy periods between state transitions in blood oxygen level dependent (BOLD) signal patterns. We found substantial variation in the state occupancy probability distributions across individuals watching the same video, supporting the hypothesis that when it comes to the brain correlates of emotional experience, variation may indeed be the norm.
[ { "created": "Sun, 24 Oct 2021 08:53:05 GMT", "version": "v1" } ]
2021-10-26
[ [ "Singh", "Ashutosh", "" ], [ "Westlin", "Christiana", "" ], [ "Eisenbarth", "Hedwig", "" ], [ "Losin", "Elizabeth A. Reynolds", "" ], [ "Andrews-Hanna", "Jessica R.", "" ], [ "Wager", "Tor D.", "" ], [ "Satpute", "Ajay B.", "" ], [ "Barrett", "Lisa Feldman", "" ], [ "Brooks", "Dana H.", "" ], [ "Erdogmus", "Deniz", "" ] ]
For the last several decades, emotion research has attempted to identify a "biomarker" or consistent pattern of brain activity to characterize a single category of emotion (e.g., fear) that will remain consistent across all instances of that category, regardless of individual and context. In this study, we investigated variation rather than consistency during emotional experiences while people watched video clips chosen to evoke instances of specific emotion categories. Specifically, we developed a sequential probabilistic approach to model the temporal dynamics in a participant's brain activity during video viewing. We characterized brain states during these clips as distinct state occupancy periods between state transitions in blood oxygen level dependent (BOLD) signal patterns. We found substantial variation in the state occupancy probability distributions across individuals watching the same video, supporting the hypothesis that when it comes to the brain correlates of emotional experience, variation may indeed be the norm.
2106.04540
Jordan Lei
Jordan Lei, Ari S. Benjamin, Konrad P. Kording
Object Based Attention Through Internal Gating
null
null
null
null
q-bio.NC cs.AI cs.CV cs.LG cs.NE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Object-based attention is a key component of the visual system, relevant for perception, learning, and memory. Neurons tuned to features of attended objects tend to be more active than those associated with non-attended objects. There is a rich set of models of this phenomenon in computational neuroscience. However, there is currently a divide between models that successfully match physiological data but can only deal with extremely simple problems and models of attention used in computer vision. For example, attention in the brain is known to depend on top-down processing, whereas self-attention in deep learning does not. Here, we propose an artificial neural network model of object-based attention that captures the way in which attention is both top-down and recurrent. Our attention model works well both on simple test stimuli, such as those using images of handwritten digits, and on more complex stimuli, such as natural images drawn from the COCO dataset. We find that our model replicates a range of findings from neuroscience, including attention-invariant tuning, inhibition of return, and attention-mediated scaling of activity. Understanding object-based attention is both computationally interesting and a key problem for computational neuroscience.
[ { "created": "Tue, 8 Jun 2021 17:20:50 GMT", "version": "v1" } ]
2021-06-09
[ [ "Lei", "Jordan", "" ], [ "Benjamin", "Ari S.", "" ], [ "Kording", "Konrad P.", "" ] ]
Object-based attention is a key component of the visual system, relevant for perception, learning, and memory. Neurons tuned to features of attended objects tend to be more active than those associated with non-attended objects. There is a rich set of models of this phenomenon in computational neuroscience. However, there is currently a divide between models that successfully match physiological data but can only deal with extremely simple problems and models of attention used in computer vision. For example, attention in the brain is known to depend on top-down processing, whereas self-attention in deep learning does not. Here, we propose an artificial neural network model of object-based attention that captures the way in which attention is both top-down and recurrent. Our attention model works well both on simple test stimuli, such as those using images of handwritten digits, and on more complex stimuli, such as natural images drawn from the COCO dataset. We find that our model replicates a range of findings from neuroscience, including attention-invariant tuning, inhibition of return, and attention-mediated scaling of activity. Understanding object-based attention is both computationally interesting and a key problem for computational neuroscience.
1808.07717
David Morrison
David A. Morrison
Multiple Sequence Alignment is not a Solved Problem
37 pages, 5 figures, 3 tables
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Multiple sequence alignment is a basic procedure in molecular biology, and it is often treated as being essentially a solved computational problem. However, this is not so, and here I review the evidence for this claim, and outline the requirements for a solution. The goal of alignment is often stated to be to juxtapose nucleotides (or their derivatives, such as amino acids) that have been inherited from a common ancestral nucleotide (although other goals are also possible). Unfortunately, this is not an operational definition, because homology (in this sense) refers to unique and unobservable historical events, and so there can be no objective mathematical function to optimize. Consequently, almost all algorithms developed for multiple sequence alignment are based on optimizing some sort of compositional similarity (similarity = homology + analogy). As a result, many, if not most, practitioners either manually modify computer-produced alignments or they perform de novo manual alignment, especially in the field of phylogenetics. So, if homology is the goal, then multiple sequence alignment is not yet a solved computational problem. Several criteria have been developed by biologists to help them identify potential homologies (compositional, ontogenetic, topographical and functional similarity, plus conjunction and congruence), and these criteria can be applied to molecular data, in principle. Current computer programs do implement one (or occasionally two) of these criteria, but no program implements them all. What is needed is a program that evaluates all of the evidence for the sequence homologies, optimizes their combination, and thus produces the best hypotheses of homology. This is basically an inference problem not an optimization problem.
[ { "created": "Thu, 23 Aug 2018 12:40:07 GMT", "version": "v1" } ]
2018-08-24
[ [ "Morrison", "David A.", "" ] ]
Multiple sequence alignment is a basic procedure in molecular biology, and it is often treated as being essentially a solved computational problem. However, this is not so, and here I review the evidence for this claim, and outline the requirements for a solution. The goal of alignment is often stated to be to juxtapose nucleotides (or their derivatives, such as amino acids) that have been inherited from a common ancestral nucleotide (although other goals are also possible). Unfortunately, this is not an operational definition, because homology (in this sense) refers to unique and unobservable historical events, and so there can be no objective mathematical function to optimize. Consequently, almost all algorithms developed for multiple sequence alignment are based on optimizing some sort of compositional similarity (similarity = homology + analogy). As a result, many, if not most, practitioners either manually modify computer-produced alignments or they perform de novo manual alignment, especially in the field of phylogenetics. So, if homology is the goal, then multiple sequence alignment is not yet a solved computational problem. Several criteria have been developed by biologists to help them identify potential homologies (compositional, ontogenetic, topographical and functional similarity, plus conjunction and congruence), and these criteria can be applied to molecular data, in principle. Current computer programs do implement one (or occasionally two) of these criteria, but no program implements them all. What is needed is a program that evaluates all of the evidence for the sequence homologies, optimizes their combination, and thus produces the best hypotheses of homology. This is basically an inference problem not an optimization problem.
2104.01418
Cees Van Leeuwen
Ilias Rentzeperis, Steeve Laquitaine, Cees van Leeuwen
Adaptive rewiring of random neural networks generates convergent-divergent units
null
null
10.1016/j.cnsns.2021.106135
null
q-bio.NC
http://creativecommons.org/licenses/by-sa/4.0/
Brain networks are adaptively rewired continually, adjusting their topology to bring about functionality and efficiency in sensory, motor and cognitive tasks. In model neural network architectures, adaptive rewiring generates complex, brain-like topologies. Present models, however, cannot account for the emergence of complex directed connectivity structures. We tested a biologically plausible model of adaptive rewiring in directed networks, based on two algorithms widely used in distributed computing: advection and consensus. When both are used in combination as rewiring criteria, adaptive rewiring shortens path length and enhances connectivity. When keeping a balance between advection and consensus, adaptive rewiring produces convergent-divergent units consisting of convergent hub nodes, which collect inputs from pools of sparsely connected, or local, nodes and project them via densely interconnected processing nodes onto divergent hubs that broadcast output back to the local pools. Convergent-divergent units operate within and between sensory, motor, and cognitive brain regions as their connective core, mediating context-sensitivity to local network units. By showing how these structures emerge spontaneously in directed network models, adaptive rewiring offers self-organization as a principle for efficient information propagation and integration in the brain.
[ { "created": "Sat, 3 Apr 2021 14:31:21 GMT", "version": "v1" } ]
2022-01-05
[ [ "Rentzeperis", "Ilias", "" ], [ "Laquitaine", "Steeve", "" ], [ "van Leeuwen", "Cees", "" ] ]
Brain networks are adaptively rewired continually, adjusting their topology to bring about functionality and efficiency in sensory, motor and cognitive tasks. In model neural network architectures, adaptive rewiring generates complex, brain-like topologies. Present models, however, cannot account for the emergence of complex directed connectivity structures. We tested a biologically plausible model of adaptive rewiring in directed networks, based on two algorithms widely used in distributed computing: advection and consensus. When both are used in combination as rewiring criteria, adaptive rewiring shortens path length and enhances connectivity. When keeping a balance between advection and consensus, adaptive rewiring produces convergent-divergent units consisting of convergent hub nodes, which collect inputs from pools of sparsely connected, or local, nodes and project them via densely interconnected processing nodes onto divergent hubs that broadcast output back to the local pools. Convergent-divergent units operate within and between sensory, motor, and cognitive brain regions as their connective core, mediating context-sensitivity to local network units. By showing how these structures emerge spontaneously in directed network models, adaptive rewiring offers self-organization as a principle for efficient information propagation and integration in the brain.
2208.03540
Swadesh Pal
Swadesh Pal, Roderick Melnik
Non-Markovian behaviour and the dual role of astrocytes in Alzheimer's disease development and propagation
16 pages, 8 figures, 3 tables
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Alzheimer's disease (AD) is a common neurodegenerative disorder. Amyloid-beta (A$\beta$) and tau proteins are among the main contributors to the development or propagation of AD. In AD, A$\beta$ proteins clump together to form plaques and disrupt cell functions. On the other hand, abnormal chemical changes in the brain help to build sticky tau tangles that block the neuron's transport system. Astrocytes generally maintain a healthy balance in the brain by clearing the A$\beta$ plaques (toxic A$\beta$). However, over-activated astrocytes release chemokines and cytokines in the presence of A$\beta$ and react to pro-inflammatory cytokines, further increasing the production of A$\beta$. In this paper, we construct a mathematical model that can capture astrocytes' dual behaviour. Furthermore, we reveal that the disease propagation depends on the current time instance and the disease's earlier status, called the ``memory effect''. We consider a fractional-order network mathematical model to capture the influence of such a memory effect on AD propagation. We have integrated brain connectome data into the model and studied the memory effect, the dual role of astrocytes, and the brain's neuronal damage. Based on the pathology, parameters for primary, secondary, and mixed tauopathies are considered in the model. Due to the mixed tauopathy, different brain nodes or regions in the brain connectome accumulate different toxic concentrations of A$\beta$ and tau proteins. Finally, we explain how the memory effect can slow down the propagation of such toxic proteins in the brain, decreasing the rate of neuronal damage.
[ { "created": "Sat, 6 Aug 2022 16:14:19 GMT", "version": "v1" }, { "created": "Tue, 5 Dec 2023 17:26:02 GMT", "version": "v2" } ]
2023-12-06
[ [ "Pal", "Swadesh", "" ], [ "Melnik", "Roderick", "" ] ]
Alzheimer's disease (AD) is a common neurodegenerative disorder. Amyloid-beta (A$\beta$) and tau proteins are among the main contributors to the development or propagation of AD. In AD, A$\beta$ proteins clump together to form plaques and disrupt cell functions. On the other hand, abnormal chemical changes in the brain help to build sticky tau tangles that block the neuron's transport system. Astrocytes generally maintain a healthy balance in the brain by clearing the A$\beta$ plaques (toxic A$\beta$). However, over-activated astrocytes release chemokines and cytokines in the presence of A$\beta$ and react to pro-inflammatory cytokines, further increasing the production of A$\beta$. In this paper, we construct a mathematical model that can capture astrocytes' dual behaviour. Furthermore, we reveal that the disease propagation depends on the current time instance and the disease's earlier status, called the ``memory effect''. We consider a fractional-order network mathematical model to capture the influence of such a memory effect on AD propagation. We have integrated brain connectome data into the model and studied the memory effect, the dual role of astrocytes, and the brain's neuronal damage. Based on the pathology, parameters for primary, secondary, and mixed tauopathies are considered in the model. Due to the mixed tauopathy, different brain nodes or regions in the brain connectome accumulate different toxic concentrations of A$\beta$ and tau proteins. Finally, we explain how the memory effect can slow down the propagation of such toxic proteins in the brain, decreasing the rate of neuronal damage.
2002.07480
Kizito Nkurikiyeyezu
Kizito Nkurikiyeyezu, Yuta Suzuki, Yoshito Tobe, Guillaume Lopez, Kiyoshi Itao
Heart Rate Variability as an Indicator of Thermal Comfort State
null
2017 56th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE)
10.23919/SICE.2017.8105506
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thermal comfort is a personal assessment of one's satisfaction with the surroundings. Yet, most thermal comfort delivery mechanisms preclude physiological and psychological precursors to thermal comfort. Accordingly, many people feel either cold or hot in an environment that is supposedly thermally comfortable to most people. To address this issue, this paper proposes to use people's heart rate variability (HRV) as an alternative indicator of thermal comfort. Since HRV is linked to homeostasis, we hypothesize that it could be used to predict people's thermal comfort status. To test our hypothesis, we analyzed statistical, spectral, and nonlinear HRV indices of 17 human subjects doing light office work in a cold, a neutral, and a hot environment. The resulting HRV indices were used as inputs to machine learning classification algorithms. We observed that HRV is distinctively altered depending on the thermal environment and that it is possible to reliably predict each subject's thermal environment (cold, neutral, and hot) with up to a 93.7% prediction accuracy. The result of this study implies that it could be possible to design automatic real-time thermal comfort controllers based on people's HRV.
[ { "created": "Tue, 18 Feb 2020 10:38:11 GMT", "version": "v1" } ]
2020-02-19
[ [ "Nkurikiyeyezu", "Kizito", "" ], [ "Suzuki", "Yuta", "" ], [ "Tobe", "Yoshito", "" ], [ "Lopez", "Guillaume", "" ], [ "Itao", "Kiyoshi", "" ] ]
Thermal comfort is a personal assessment of one's satisfaction with the surroundings. Yet, most thermal comfort delivery mechanisms preclude physiological and psychological precursors to thermal comfort. Accordingly, many people feel either cold or hot in an environment that is supposedly thermally comfortable to most people. To address this issue, this paper proposes to use people's heart rate variability (HRV) as an alternative indicator of thermal comfort. Since HRV is linked to homeostasis, we hypothesize that it could be used to predict people's thermal comfort status. To test our hypothesis, we analyzed statistical, spectral, and nonlinear HRV indices of 17 human subjects doing light office work in a cold, a neutral, and a hot environment. The resulting HRV indices were used as inputs to machine learning classification algorithms. We observed that HRV is distinctively altered depending on the thermal environment and that it is possible to reliably predict each subject's thermal environment (cold, neutral, and hot) with up to a 93.7% prediction accuracy. The result of this study implies that it could be possible to design automatic real-time thermal comfort controllers based on people's HRV.
1212.4161
Cristian Micheletti
C. Micheletti
Comparing proteins by their internal dynamics: exploring structure-function relationships beyond static structural alignments
Review article, 24 pages, 10 figures. Journal article in "Physics of Life Reviews" available at http://dx.doi.org/10.1016/j.plrev.2012.10.009 along with several commentaries
null
10.1016/j.plrev.2012.10.009
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The growing interest for comparing protein internal dynamics owes much to the realization that protein function can be accompanied or assisted by structural fluctuations and conformational changes. Analogously to the case of functional structural elements, those aspects of protein flexibility and dynamics that are functionally oriented should be subject to evolutionary conservation. Accordingly, dynamics-based protein comparisons or alignments could be used to detect protein relationships that are more elusive to sequence and structural alignments. Here we provide an account of the progress that has been made in recent years towards developing and applying general methods for comparing proteins in terms of their internal dynamics and advance the understanding of the structure-function relationship.
[ { "created": "Mon, 17 Dec 2012 21:02:48 GMT", "version": "v1" } ]
2012-12-19
[ [ "Micheletti", "C.", "" ] ]
The growing interest for comparing protein internal dynamics owes much to the realization that protein function can be accompanied or assisted by structural fluctuations and conformational changes. Analogously to the case of functional structural elements, those aspects of protein flexibility and dynamics that are functionally oriented should be subject to evolutionary conservation. Accordingly, dynamics-based protein comparisons or alignments could be used to detect protein relationships that are more elusive to sequence and structural alignments. Here we provide an account of the progress that has been made in recent years towards developing and applying general methods for comparing proteins in terms of their internal dynamics and advance the understanding of the structure-function relationship.
1804.07523
Johan-Owen De Craene
Serge Feyder, Johan-Owen De Craene (GMGM), S\'everine B\"ar, Dimitri Bertazzi (GMGM), Sylvie Friant (GMGM)
Membrane Trafficking in the Yeast Saccharomyces cerevisiae Model
null
International Journal of Molecular Sciences, MDPI, 2015, 16 (1), pp.1509 - 1525
10.3390/ijms16011509
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The yeast Saccharomyces cerevisiae is one of the best characterized eukaryotic models. The secretory pathway was the first trafficking pathway clearly understood, mainly thanks to the work done in the laboratory of Randy Schekman in the 1980s. They isolated yeast sec mutants unable to secrete an extracellular enzyme, and these SEC genes were identified as encoding key effectors of the secretory machinery. For this work, the 2013 Nobel Prize in Physiology or Medicine was awarded to Randy Schekman; the prize was shared with James Rothman and Thomas S{\"u}dhof. Here, we present the different trafficking pathways of yeast S. cerevisiae. At the Golgi apparatus, newly synthesized proteins are sorted between those transported to the plasma membrane (PM), or the external medium, via the exocytosis or secretory pathway (SEC), and those targeted to the vacuole either through endosomes (vacuolar protein sorting or VPS pathway) or directly (alkaline phosphatase or ALP pathway). Plasma membrane proteins can be internalized by endocytosis (END) and transported to endosomes, where they are sorted between those targeted for vacuolar degradation and those redirected to the Golgi (recycling or RCY pathway). Studies in yeast S. cerevisiae allowed the identification of most of the known effectors, protein complexes, and trafficking pathways in eukaryotic cells, and most of them are conserved among eukaryotes.
[ { "created": "Fri, 20 Apr 2018 09:55:37 GMT", "version": "v1" } ]
2018-06-24
[ [ "Feyder", "Serge", "", "GMGM" ], [ "De Craene", "Johan-Owen", "", "GMGM" ], [ "Bär", "Séverine", "", "GMGM" ], [ "Bertazzi", "Dimitri", "", "GMGM" ], [ "Friant", "Sylvie", "", "GMGM" ] ]
The yeast Saccharomyces cerevisiae is one of the best characterized eukaryotic models. The secretory pathway was the first trafficking pathway clearly understood, mainly thanks to the work done in the laboratory of Randy Schekman in the 1980s. They isolated yeast sec mutants unable to secrete an extracellular enzyme, and these SEC genes were identified as encoding key effectors of the secretory machinery. For this work, the 2013 Nobel Prize in Physiology or Medicine was awarded to Randy Schekman; the prize was shared with James Rothman and Thomas S{\"u}dhof. Here, we present the different trafficking pathways of yeast S. cerevisiae. At the Golgi apparatus, newly synthesized proteins are sorted between those transported to the plasma membrane (PM), or the external medium, via the exocytosis or secretory pathway (SEC), and those targeted to the vacuole either through endosomes (vacuolar protein sorting or VPS pathway) or directly (alkaline phosphatase or ALP pathway). Plasma membrane proteins can be internalized by endocytosis (END) and transported to endosomes, where they are sorted between those targeted for vacuolar degradation and those redirected to the Golgi (recycling or RCY pathway). Studies in yeast S. cerevisiae allowed the identification of most of the known effectors, protein complexes, and trafficking pathways in eukaryotic cells, and most of them are conserved among eukaryotes.
1312.5486
Chun Tung Chou
Chun Tung Chou
Molecular communication networks with general molecular circuit receivers
null
Proceedings of ACM The First Annual International Conference on Nanoscale Computing and Communication, 2014
10.1145/2619955.2619966
null
q-bio.MN cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a molecular communication network, transmitters may encode information in the concentration or frequency of signalling molecules. When the signalling molecules reach the receivers, they react, via a set of chemical reactions or a molecular circuit, to produce output molecules. The count of output molecules over time is the output signal of the receiver. The aim of this paper is to investigate the impact of different reaction types on the information transmission capacity of molecular communication networks. We realise this aim by using a general molecular circuit model. We derive general expressions of mean receiver output, and signal and noise spectra. We use these expressions to investigate the information transmission capacities of a number of molecular circuits.
[ { "created": "Thu, 19 Dec 2013 11:21:13 GMT", "version": "v1" } ]
2020-07-23
[ [ "Chou", "Chun Tung", "" ] ]
In a molecular communication network, transmitters may encode information in the concentration or frequency of signalling molecules. When the signalling molecules reach the receivers, they react, via a set of chemical reactions or a molecular circuit, to produce output molecules. The count of output molecules over time is the output signal of the receiver. The aim of this paper is to investigate the impact of different reaction types on the information transmission capacity of molecular communication networks. We realise this aim by using a general molecular circuit model. We derive general expressions of mean receiver output, and signal and noise spectra. We use these expressions to investigate the information transmission capacities of a number of molecular circuits.
1712.00359
Olha Shchur
Alexander Vidybida, Olha Shchur
Relation between firing statistics of spiking neuron with delayed fast inhibitory feedback and without feedback
13 pages, 2 figures
Fluctuation and Noise Letters, Vol. 17, No. 01, 1850005 (2018)
10.1142/S0219477518500050
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a class of spiking neuronal models, defined by a set of conditions typical for basic threshold-type models, such as the leaky integrate-and-fire or the binding neuron model, and also for some artificial neurons. A neuron is fed with a Poisson process. Each output impulse is applied to the neuron itself after a finite delay $\Delta$. This impulse acts as if delivered through a fast Cl-type inhibitory synapse. We derive a general relation which allows calculating exactly the probability density function (pdf) $p(t)$ of output interspike intervals of a neuron with feedback based on the known pdf $p^0(t)$ for the same neuron without feedback and on the properties of the feedback line (the $\Delta$ value). Similar relations between corresponding moments are derived. Furthermore, we prove that the initial segment of the pdf $p^0(t)$ for a neuron with a fixed threshold level is the same for any neuron satisfying the imposed conditions and is completely determined by the input stream. For the Poisson input stream, we calculate that initial segment exactly and, based on it, obtain exactly the initial segment of the pdf $p(t)$ for a neuron with feedback. That is, the initial segment of $p(t)$ is model-independent as well. The obtained expressions are checked by means of Monte Carlo simulation. The course of $p(t)$ has a pronounced peculiarity, which makes it impossible to approximate $p(t)$ by a Poisson or another simple stochastic process.
[ { "created": "Fri, 1 Dec 2017 15:21:27 GMT", "version": "v1" }, { "created": "Fri, 15 Dec 2017 11:16:30 GMT", "version": "v2" }, { "created": "Wed, 3 Oct 2018 07:14:28 GMT", "version": "v3" } ]
2018-10-04
[ [ "Vidybida", "Alexander", "" ], [ "Shchur", "Olha", "" ] ]
We consider a class of spiking neuronal models, defined by a set of conditions typical for basic threshold-type models, such as the leaky integrate-and-fire or the binding neuron model, and also for some artificial neurons. A neuron is fed with a Poisson process. Each output impulse is applied to the neuron itself after a finite delay $\Delta$. This impulse acts as if delivered through a fast Cl-type inhibitory synapse. We derive a general relation which allows calculating exactly the probability density function (pdf) $p(t)$ of output interspike intervals of a neuron with feedback based on the known pdf $p^0(t)$ for the same neuron without feedback and on the properties of the feedback line (the $\Delta$ value). Similar relations between corresponding moments are derived. Furthermore, we prove that the initial segment of the pdf $p^0(t)$ for a neuron with a fixed threshold level is the same for any neuron satisfying the imposed conditions and is completely determined by the input stream. For the Poisson input stream, we calculate that initial segment exactly and, based on it, obtain exactly the initial segment of the pdf $p(t)$ for a neuron with feedback. That is, the initial segment of $p(t)$ is model-independent as well. The obtained expressions are checked by means of Monte Carlo simulation. The course of $p(t)$ has a pronounced peculiarity, which makes it impossible to approximate $p(t)$ by a Poisson or another simple stochastic process.
1012.0036
Andrew Mugler
Andrew Mugler, Boris Grinshpun, Riley Franks, and Chris H. Wiggins
A statistical method for revealing form-function relations in biological networks
To appear in Proc. Natl. Acad. Sci. USA. 17 pages, 9 figures, 2 tables
Proc. Nat. Acad. Sci. USA, 108(2):446, 2011
10.1073/pnas.1008898108
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the past decade, a number of researchers in systems biology have sought to relate the function of biological systems to their network-level descriptions -- lists of the most important players and the pairwise interactions between them. Both for large networks (in which statistical analysis is often framed in terms of the abundance of repeated small subgraphs) and for small networks which can be analyzed in greater detail (or even synthesized in vivo and subjected to experiment), revealing the relationship between the topology of small subgraphs and their biological function has been a central goal. We here seek to pose this revelation as a statistical task, illustrated using a particular setup which has been constructed experimentally and for which parameterized models of transcriptional regulation have been studied extensively. The question "how does function follow form" is here mathematized by identifying which topological attributes correlate with the diverse possible information-processing tasks which a transcriptional regulatory network can realize. The resulting method reveals one form-function relationship which had earlier been predicted based on analytic results, and reveals a second for which we can provide an analytic interpretation. Resulting source code is distributed via http://formfunction.sourceforge.net.
[ { "created": "Tue, 30 Nov 2010 21:52:27 GMT", "version": "v1" } ]
2012-03-14
[ [ "Mugler", "Andrew", "" ], [ "Grinshpun", "Boris", "" ], [ "Franks", "Riley", "" ], [ "Wiggins", "Chris H.", "" ] ]
Over the past decade, a number of researchers in systems biology have sought to relate the function of biological systems to their network-level descriptions -- lists of the most important players and the pairwise interactions between them. Both for large networks (in which statistical analysis is often framed in terms of the abundance of repeated small subgraphs) and for small networks which can be analyzed in greater detail (or even synthesized in vivo and subjected to experiment), revealing the relationship between the topology of small subgraphs and their biological function has been a central goal. We here seek to pose this revelation as a statistical task, illustrated using a particular setup which has been constructed experimentally and for which parameterized models of transcriptional regulation have been studied extensively. The question "how does function follow form" is here mathematized by identifying which topological attributes correlate with the diverse possible information-processing tasks which a transcriptional regulatory network can realize. The resulting method reveals one form-function relationship which had earlier been predicted based on analytic results, and reveals a second for which we can provide an analytic interpretation. Resulting source code is distributed via http://formfunction.sourceforge.net.
2406.14187
Joanna Polanska
Tomasz Strzoda, Lourdes Cruz-Garcia, Mustafa Najim, Christophe Badie, Joanna Polanska
A mapping-free NLP-based technique for sequence search in Nanopore long-reads
25 pages, 9 figures
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
In unforeseen situations, such as nuclear power plant or civilian radiation accidents, there is a need for effective and computationally inexpensive methods to determine the expression level of a selected gene panel, allowing for rough dose estimates in thousands of donors. A new-generation in-situ mapper that is fast, consumes little energy, and works at the level of single-nanopore output is in demand. We aim to create a sequence identification tool that utilizes Natural Language Processing (NLP) techniques and ensures a high level of negative predictive value (NPV) compared to the classical approach. The training dataset consisted of RNASeq data from 6 samples. Having tested multiple NLP models, we found that the best configuration analyses the entire sequence and uses a word length of 3 base pairs with one neighbouring word on each side. For the considered FDXR gene, the achieved mean balanced accuracy (BACC) was 98.29% and the NPV 99.25%, compared to minimap2's performance in a cross-validation scenario. Reducing the dictionary from 1024 to 145 changed the BACC to 96.49% and the NPV to 98.15%. The obtained NLP model, validated on an external independent genome sequencing dataset, gave an NPV of 99.64% for the complete and 95.87% for the reduced dictionary. The salmon-estimated read counts differed from the classical approach on average by 3.48% for the complete dictionary and by 5.82% for the reduced one. We conclude that, for long Oxford Nanopore reads, an NLP-based approach can successfully replace classical mapping in case of emergency. The developed NLP model can be easily retrained to identify selected transcripts and/or work with various long-read sequencing techniques. The results of our study clearly demonstrate the potential of applying techniques known from classical text processing to nucleotide sequences and represent a significant advancement in this field of science.
[ { "created": "Thu, 20 Jun 2024 10:48:19 GMT", "version": "v1" } ]
2024-06-21
[ [ "Strzoda", "Tomasz", "" ], [ "Cruz-Garcia", "Lourdes", "" ], [ "Najim", "Mustafa", "" ], [ "Badie", "Christophe", "" ], [ "Polanska", "Joanna", "" ] ]
In unforeseen situations, such as nuclear power plant or civilian radiation accidents, there is a need for effective and computationally inexpensive methods to determine the expression level of a selected gene panel, allowing for rough dose estimates in thousands of donors. A new-generation in-situ mapper that is fast, consumes little energy, and works at the level of single-nanopore output is in demand. We aim to create a sequence identification tool that utilizes Natural Language Processing (NLP) techniques and ensures a high level of negative predictive value (NPV) compared to the classical approach. The training dataset consisted of RNASeq data from 6 samples. Having tested multiple NLP models, we found that the best configuration analyses the entire sequence and uses a word length of 3 base pairs with one neighbouring word on each side. For the considered FDXR gene, the achieved mean balanced accuracy (BACC) was 98.29% and the NPV 99.25%, compared to minimap2's performance in a cross-validation scenario. Reducing the dictionary from 1024 to 145 changed the BACC to 96.49% and the NPV to 98.15%. The obtained NLP model, validated on an external independent genome sequencing dataset, gave an NPV of 99.64% for the complete and 95.87% for the reduced dictionary. The salmon-estimated read counts differed from the classical approach on average by 3.48% for the complete dictionary and by 5.82% for the reduced one. We conclude that, for long Oxford Nanopore reads, an NLP-based approach can successfully replace classical mapping in case of emergency. The developed NLP model can be easily retrained to identify selected transcripts and/or work with various long-read sequencing techniques. The results of our study clearly demonstrate the potential of applying techniques known from classical text processing to nucleotide sequences and represent a significant advancement in this field of science.
0712.0883
Conrad Burden
C. J. Burden
Understanding the physics of oligonucleotide microarrays: the Affymetrix spike-in data reanalysed
32 pages, 13 figures, minor amendments
Phys. Biol. 5 (2008) 016004
10.1088/1478-3975/5/1/016004
null
q-bio.QM q-bio.BM
null
The Affymetrix U95 and U133 Latin Square spike-in datasets are reanalysed, together with a dataset from a version of the U95 spike-in experiment without a complex non-specific background. The approach uses a physico-chemical model which includes the effects of specific and non-specific hybridisation and probe folding at the microarray surface, target folding and hybridisation in the bulk RNA target solution, and duplex dissociation during the post-hybridisation washing phase. The model predicts a three-parameter hyperbolic response function that fits well with fluorescence intensity data from all three datasets. The importance of the various hybridisation and washing effects in determining each of the three parameters is examined, and some guidance is given as to how a practical algorithm for determining specific target concentrations might be developed.
[ { "created": "Thu, 6 Dec 2007 02:11:19 GMT", "version": "v1" }, { "created": "Fri, 7 Mar 2008 14:50:53 GMT", "version": "v2" } ]
2009-11-13
[ [ "Burden", "C. J.", "" ] ]
The Affymetrix U95 and U133 Latin Square spike-in datasets are reanalysed, together with a dataset from a version of the U95 spike-in experiment without a complex non-specific background. The approach uses a physico-chemical model which includes the effects of specific and non-specific hybridisation and probe folding at the microarray surface, target folding and hybridisation in the bulk RNA target solution, and duplex dissociation during the post-hybridisation washing phase. The model predicts a three-parameter hyperbolic response function that fits well with fluorescence intensity data from all three datasets. The importance of the various hybridisation and washing effects in determining each of the three parameters is examined, and some guidance is given as to how a practical algorithm for determining specific target concentrations might be developed.
2206.12473
Gr\'egoire Clart\'e
Gr\'egoire Clart\'e and Robin J. Ryder
A Phylogenetic Model of the Evolution of Discrete Matrices for the Joint Inference of Lexical and Phonological Language Histories
null
null
null
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by/4.0/
We propose a model of the evolution of a matrix along a phylogenetic tree, in which transformations affect either entire rows or columns of the matrix. This represents the change of both lexical and phonological aspects of linguistic data, by allowing for new words to appear and for systematic phonological changes to affect the entire vocabulary. We implement a Sequential Monte Carlo method to sample from the posterior distribution, and infer jointly the phylogeny, model parameters, and latent variables representing cognate births and phonological transformations. We successfully apply this method to synthetic and real data of moderate size.
[ { "created": "Fri, 24 Jun 2022 19:21:40 GMT", "version": "v1" } ]
2022-06-28
[ [ "Clarté", "Grégoire", "" ], [ "Ryder", "Robin J.", "" ] ]
We propose a model of the evolution of a matrix along a phylogenetic tree, in which transformations affect either entire rows or columns of the matrix. This represents the change of both lexical and phonological aspects of linguistic data, by allowing for new words to appear and for systematic phonological changes to affect the entire vocabulary. We implement a Sequential Monte Carlo method to sample from the posterior distribution, and infer jointly the phylogeny, model parameters, and latent variables representing cognate births and phonological transformations. We successfully apply this method to synthetic and real data of moderate size.