Column summary (name: type, observed range):

id: string, 9-13 characters
submitter: string, 4-48 characters
authors: string, 4-9.62k characters
title: string, 4-343 characters
comments: string, 2-480 characters
journal-ref: string, 9-309 characters
doi: string, 12-138 characters
report-no: string, 277 distinct values
categories: string, 8-87 characters
license: string, 9 distinct values
orig_abstract: string, 27-3.76k characters
versions: list, 1-15 items
update_date: string, 10 characters (fixed width)
authors_parsed: list, 1-147 items
abstract: string, 24-3.75k characters
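The column summary above can be read as a per-paper record schema. A minimal sketch in Python (standard library only): the field names come from the schema, the values are copied from the first record below (arXiv 2401.10860), and the omitted fields (abstract, license, ...) follow the same pattern; the length checks simply mirror the observed column bounds.

```python
import json

# One record sketched from the schema above; values are copied from the
# first row of the dump (arXiv 2401.10860). Other fields follow suit.
record_json = """
{
  "id": "2401.10860",
  "submitter": "Florian Hartig",
  "title": "Novel community data in ecology -- properties and prospects",
  "journal-ref": "Trends in Ecology & Evolution, 2024",
  "doi": "10.1016/j.tree.2023.09.017",
  "categories": "q-bio.PE",
  "update_date": "2024-01-22",
  "versions": [{"created": "Fri, 19 Jan 2024 18:04:01 GMT", "version": "v1"}],
  "authors_parsed": [["Hartig", "Florian", ""]]
}
"""

record = json.loads(record_json)

# Sanity checks mirroring the observed column bounds from the schema.
assert 9 <= len(record["id"]) <= 13
assert len(record["update_date"]) == 10   # dates are fixed-width: YYYY-MM-DD
assert 1 <= len(record["versions"]) <= 15

# authors_parsed entries are [surname, given names, affiliation] triples.
surname, given, affiliation = record["authors_parsed"][0]
print(f"{given} {surname}: {record['title']}")
```

Note the `authors_parsed` convention: each author is a three-element list, with the affiliation slot usually empty (compare the Rabilloud record below, where it carries "LCBM - UMR 5249").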
2401.10860
Florian Hartig
Florian Hartig, Nerea Abrego, Alex Bush, Jonathan M. Chase, Gurutzeta Guillera-Arroita, Mathew A. Leibold, Otso Ovaskainen, Loïc Pellissier, Maximilian Pichler, Giovanni Poggiato, Laura Pollock, Sara Si-Moussi, Wilfried Thuiller, Duarte S. Viana, David I. Warton, Damaris Zurell, Douglas W. Yu
Novel community data in ecology -- properties and prospects
null
Trends in Ecology & Evolution, 2024
10.1016/j.tree.2023.09.017
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
New technologies for acquiring biological information such as eDNA, acoustic or optical sensors, make it possible to generate spatial community observations at unprecedented scales. The potential of these novel community data to standardize community observations at high spatial, temporal, and taxonomic resolution and at large spatial scale ('many rows and many columns') has been widely discussed, but so far, there has been little integration of these data with ecological models and theory. Here, we review these developments and highlight emerging solutions, focusing on statistical methods for analyzing novel community data, in particular joint species distribution models; the new ecological questions that can be answered with these data; and the potential implications of these developments for policy and conservation.
[ { "created": "Fri, 19 Jan 2024 18:04:01 GMT", "version": "v1" } ]
2024-01-22
[ [ "Hartig", "Florian", "" ], [ "Abrego", "Nerea", "" ], [ "Bush", "Alex", "" ], [ "Chase", "Jonathan M.", "" ], [ "Guillera-Arroita", "Gurutzeta", "" ], [ "Leibold", "Mathew A.", "" ], [ "Ovaskainen", "Otso", "" ], [ "Pellissier", "Loïc", "" ], [ "Pichler", "Maximilian", "" ], [ "Poggiato", "Giovanni", "" ], [ "Pollock", "Laura", "" ], [ "Si-Moussi", "Sara", "" ], [ "Thuiller", "Wilfried", "" ], [ "Viana", "Duarte S.", "" ], [ "Warton", "David I.", "" ], [ "Zurell", "Damaris", "" ], [ "Yu", "Douglas W.", "" ] ]
New technologies for acquiring biological information, such as eDNA and acoustic or optical sensors, make it possible to generate spatial community observations at unprecedented scales. The potential of these novel community data to standardize community observations at high spatial, temporal, and taxonomic resolution and at large spatial scale ('many rows and many columns') has been widely discussed, but so far, there has been little integration of these data with ecological models and theory. Here, we review these developments and highlight emerging solutions, focusing on statistical methods for analyzing novel community data, in particular joint species distribution models; the new ecological questions that can be answered with these data; and the potential implications of these developments for policy and conservation.
2309.05939
Richard Ewald Rosch
Richard E. Rosch, Dominic R. W. Burrows, Christopher W. Lynn, and Arian Ashourvan
Spontaneous brain activity emerges from pairwise interactions in the larval zebrafish brain
14 pages, 4 figures; additional supplementary material
null
null
null
q-bio.NC physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Brain activity is characterized by brain-wide spatiotemporal patterns which emerge from synapse-mediated interactions between individual neurons. Calcium imaging provides access to in vivo recordings of whole-brain activity at single-neuron resolution and, therefore, allows the study of how large-scale brain dynamics emerge from local activity. In this study, we used a statistical mechanics approach - the pairwise maximum entropy model (MEM) - to infer microscopic network features from collective patterns of activity in the larval zebrafish brain, and relate these features to the emergence of observed whole-brain dynamics. Our findings indicate that the pairwise interactions between neural populations and their intrinsic activity states are sufficient to explain observed whole-brain dynamics. In fact, the pairwise relationships between neuronal populations estimated with the MEM strongly correspond to observed structural connectivity patterns. Model simulations also demonstrated how tuning pairwise neuronal interactions drives transitions between critical and pathologically hyper-excitable whole-brain regimes. Finally, we use virtual resection to identify the brain structures that are important for maintaining the brain in a critical regime. Together, our results indicate that whole-brain activity emerges out of a complex dynamical system that transitions between basins of attraction whose strength and topology depend on the connectivity between brain areas.
[ { "created": "Tue, 12 Sep 2023 03:30:52 GMT", "version": "v1" } ]
2023-09-13
[ [ "Rosch", "Richard E.", "" ], [ "Burrows", "Dominic R. W.", "" ], [ "Lynn", "Christopher W.", "" ], [ "Ashourvan", "Arian", "" ] ]
Brain activity is characterized by brain-wide spatiotemporal patterns which emerge from synapse-mediated interactions between individual neurons. Calcium imaging provides access to in vivo recordings of whole-brain activity at single-neuron resolution and, therefore, allows the study of how large-scale brain dynamics emerge from local activity. In this study, we used a statistical mechanics approach - the pairwise maximum entropy model (MEM) - to infer microscopic network features from collective patterns of activity in the larval zebrafish brain, and relate these features to the emergence of observed whole-brain dynamics. Our findings indicate that the pairwise interactions between neural populations and their intrinsic activity states are sufficient to explain observed whole-brain dynamics. In fact, the pairwise relationships between neuronal populations estimated with the MEM strongly correspond to observed structural connectivity patterns. Model simulations also demonstrated how tuning pairwise neuronal interactions drives transitions between critical and pathologically hyper-excitable whole-brain regimes. Finally, we use virtual resection to identify the brain structures that are important for maintaining the brain in a critical regime. Together, our results indicate that whole-brain activity emerges out of a complex dynamical system that transitions between basins of attraction whose strength and topology depend on the connectivity between brain areas.
2009.01094
Suban Sahoo
Suban K Sahoo and Seshu Vardhan
Computational evidence on repurposing the anti-influenza drugs baloxavir acid and baloxavir marboxil against COVID-19
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The main reasons for the ongoing COVID-19 (coronavirus disease 2019) pandemic are the unavailability of recommended efficacious drugs or vaccines along with the human to human transmission nature of SARS-CoV-2 virus. So, there is urgent need to search appropriate therapeutic approach by repurposing approved drugs. In this communication, molecular docking analyses of two influenza antiviral drugs baloxavir acid (BXA) and baloxavir marboxil (BXM) were performed with the three therapeutic target proteins of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), i.e., main protease (Mpro), papain-like protease (PLpro) and RNA-dependent RNA polymerase (RdRp). The molecular docking results of both the drugs BXA and BXM were analysed and compared. The investigational drug BXA binds at the active site of Mpro and RdRp, whereas the approved drug BXM binds only at the active site of RdRp. Also, comparison of dock score revealed that BXA is binding more effectively at the active site of RdRp than BXM. The computational molecular docking revealed that the drug BXA may be more effective against COVID-19 as compared to BXM.
[ { "created": "Wed, 2 Sep 2020 14:14:35 GMT", "version": "v1" } ]
2020-09-03
[ [ "Sahoo", "Suban K", "" ], [ "Vardhan", "Seshu", "" ] ]
The main reasons for the ongoing COVID-19 (coronavirus disease 2019) pandemic are the unavailability of recommended efficacious drugs or vaccines, along with the human-to-human transmissibility of the SARS-CoV-2 virus. There is therefore an urgent need to identify appropriate therapeutic approaches by repurposing approved drugs. In this communication, molecular docking analyses of two influenza antiviral drugs, baloxavir acid (BXA) and baloxavir marboxil (BXM), were performed with three therapeutic target proteins of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2): the main protease (Mpro), the papain-like protease (PLpro) and the RNA-dependent RNA polymerase (RdRp). The docking results for the two drugs were analysed and compared. The investigational drug BXA binds at the active sites of Mpro and RdRp, whereas the approved drug BXM binds only at the active site of RdRp. Comparison of the docking scores also revealed that BXA binds more effectively at the active site of RdRp than BXM does. The computational docking thus suggests that BXA may be more effective against COVID-19 than BXM.
q-bio/0411024
Simon Kogan
Simon Kogan and Edward N. Trifonov
Gene splice sites correlate with nucleosome positions
9 pages, 2 figures, 2 tables
null
null
null
q-bio.GN
null
Gene sequences in the vicinity of splice sites are found to possess dinucleotide periodicities, especially RR and YY, with the period close to the pitch of nucleosome DNA. This confirms previously reported finding about preferential positioning of splice junctions within the nucleosomes. The RR and YY dinucleotides oscillate counterphase, i.e., their respective preferred positions are shifted about half-period one from another, as it was observed earlier for AA and TT dinucleotides. Species specificity of nucleosome positioning DNA pattern is indicated by predominant use of the periodical GG(CC) dinucleotides in human and mouse genes, as opposed to predominant AA(TT) dinucleotides in Arabidopsis and C.elegans. Keywords: chromatin; gene splicing; intron; exon; dinucleotide; periodical pattern
[ { "created": "Thu, 11 Nov 2004 10:56:22 GMT", "version": "v1" } ]
2007-05-23
[ [ "Kogan", "Simon", "" ], [ "Trifonov", "Edward N.", "" ] ]
Gene sequences in the vicinity of splice sites are found to possess dinucleotide periodicities, especially of RR and YY, with a period close to the pitch of nucleosomal DNA. This confirms the previously reported finding that splice junctions are preferentially positioned within nucleosomes. The RR and YY dinucleotides oscillate in counterphase, i.e., their respective preferred positions are shifted by about half a period from one another, as was observed earlier for the AA and TT dinucleotides. The species specificity of the nucleosome-positioning DNA pattern is indicated by the predominant use of periodic GG(CC) dinucleotides in human and mouse genes, as opposed to predominant AA(TT) dinucleotides in Arabidopsis and C. elegans. Keywords: chromatin; gene splicing; intron; exon; dinucleotide; periodical pattern
1112.3322
Valmir Barbosa
Valmir C. Barbosa, Raul Donangelo, Sergio R. Souza
Quasispecies dynamics with network constraints
null
Journal of Theoretical Biology 312 (2012), 114-119
10.1016/j.jtbi.2012.07.032
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A quasispecies is a set of interrelated genotypes that have reached a situation of equilibrium while evolving according to the usual Darwinian principles of selection and mutation. Quasispecies studies invariably assume that it is possible for any genotype to mutate into any other, but recent finds indicate that this assumption is not necessarily true. Here we revisit the traditional quasispecies theory by adopting a network structure to constrain the occurrence of mutations. Such structure is governed by a random-graph model, whose single parameter (a probability p) controls both the graph's density and the dynamics of mutation. We contribute two further modifications to the theory, one to account for the fact that different loci in a genotype may be differently susceptible to the occurrence of mutations, the other to allow for a more plausible description of the transition from adaptation to degeneracy of the quasispecies as p is increased. We give analytical and simulation results for the usual case of binary genotypes, assuming the fitness landscape in which a genotype's fitness decays exponentially with its Hamming distance to the wild type. These results support the theory's assertions regarding the adaptation of the quasispecies to the fitness landscape and also its possible demise as a function of p.
[ { "created": "Wed, 14 Dec 2011 20:17:01 GMT", "version": "v1" } ]
2012-08-21
[ [ "Barbosa", "Valmir C.", "" ], [ "Donangelo", "Raul", "" ], [ "Souza", "Sergio R.", "" ] ]
A quasispecies is a set of interrelated genotypes that have reached a situation of equilibrium while evolving according to the usual Darwinian principles of selection and mutation. Quasispecies studies invariably assume that it is possible for any genotype to mutate into any other, but recent findings indicate that this assumption is not necessarily true. Here we revisit the traditional quasispecies theory by adopting a network structure to constrain the occurrence of mutations. This structure is governed by a random-graph model, whose single parameter (a probability p) controls both the graph's density and the dynamics of mutation. We contribute two further modifications to the theory, one to account for the fact that different loci in a genotype may be differently susceptible to the occurrence of mutations, the other to allow for a more plausible description of the transition from adaptation to degeneracy of the quasispecies as p is increased. We give analytical and simulation results for the usual case of binary genotypes, assuming a fitness landscape in which a genotype's fitness decays exponentially with its Hamming distance to the wild type. These results support the theory's assertions regarding the adaptation of the quasispecies to the fitness landscape and also its possible demise as a function of p.
1406.7485
Dave Thirumalai
D. Thirumalai
Universal relations in the self-assembly of proteins and RNA
To appear in Physical Biology
null
10.1088/1478-3975/11/5/053005
null
q-bio.BM cond-mat.soft cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Concepts rooted in physics are becoming increasingly important in biology as we transition to an era in which quantitative descriptions of all processes from molecular to cellular level are needed. In this essay I discuss two unexpected findings of universal behavior, uncommon in biology, in the self-assembly of proteins and RNA. These findings, which are surprising, reveal that physics ideas applied to biological problems ranging from folding to gene expression to cellular movement and communication between cells might lead to discovery of universal principles operating in adoptable living systems.
[ { "created": "Sun, 29 Jun 2014 10:32:20 GMT", "version": "v1" } ]
2015-06-22
[ [ "Thirumalai", "D.", "" ] ]
Concepts rooted in physics are becoming increasingly important in biology as we transition to an era in which quantitative descriptions of all processes, from the molecular to the cellular level, are needed. In this essay I discuss two unexpected findings of universal behavior, uncommon in biology, in the self-assembly of proteins and RNA. These surprising findings reveal that physics ideas applied to biological problems, ranging from folding to gene expression to cellular movement and communication between cells, might lead to the discovery of universal principles operating in adaptable living systems.
2012.10677
Selma Mehyaoui
Selma Mehyaoui
How does neural activity encode spontaneous motor behavior in zebrafish larvae?
null
null
null
null
q-bio.NC q-bio.QM
http://creativecommons.org/licenses/by/4.0/
The origins of spontaneous movements have been investigated in human as well as in other vertebrates. Studies have reported an increase in neuronal activity one second before the onset of a given movement: this is known as readiness potential. The mechanisms underlying this increase are still unclear. Zebrafish larva is an ideal animal model to study the neuronal basis of spontaneous movements. Because of its small size and transparency, this vertebrate is an ideal candidate to apply optical recording methods. In order to understand what neuronal activity causes the execution of a specific tail movement at a given time, we will mainly use a prediction approach.
[ { "created": "Sat, 19 Dec 2020 13:04:29 GMT", "version": "v1" } ]
2020-12-22
[ [ "Mehyaoui", "Selma", "" ] ]
The origins of spontaneous movements have been investigated in humans as well as in other vertebrates. Studies have reported an increase in neuronal activity one second before the onset of a given movement: this is known as the readiness potential. The mechanisms underlying this increase are still unclear. The zebrafish larva is an ideal animal model for studying the neuronal basis of spontaneous movements. Because of its small size and transparency, this vertebrate is an ideal candidate for optical recording methods. To understand what neuronal activity causes the execution of a specific tail movement at a given time, we will mainly use a prediction approach.
1408.1325
Thierry Rabilloud
Thierry Rabilloud (LCBM - UMR 5249)
Paleoproteomics explained to youngsters: how did the wedding of two-dimensional electrophoresis and protein sequencing spark proteomics on: Let there be light
null
Journal of Proteomics 107C (2014) 5-12
10.1016/j.jprot.2014.03.011
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Taking the opportunity of the 20th anniversary of the word "proteomics", this young adult age is a good time to remember how proteomics came from enormous progress in protein separation and protein microanalysis techniques, and from the conjugation of these advances into a high performance and streamlined working setup. However, in the history of the almost three decades that encompass the first attempts to perform large scale analysis of proteins to the current high throughput proteomics that we can enjoy now, it is also interesting to underline and to recall how difficult the first decade was. Indeed when the word was cast, the battle was already won. This recollection is mostly devoted to the almost forgotten period where proteomics was being conceived and put to birth, as this collective scientific work will never appear when searched through the keyword "proteomics". BIOLOGICAL SIGNIFICANCE: The significance of this manuscript is to recall and review the two decades that separated the first attempts of performing large scale analysis of proteins from the solid technical corpus that existed when the word "proteomics" was coined twenty years ago. This recollection is made within the scientific historical context of this decade, which also saw the blossoming of DNA cloning and sequencing. This article is part of a Special Issue entitled: 20 years of Proteomics in memory of Viatliano Pallini. Guest Editors: Luca Bini , Juan J. Calvete, Natacha Turck, Denis Hochstrasser and Jean-Charles Sanchez.
[ { "created": "Wed, 6 Aug 2014 15:42:50 GMT", "version": "v1" } ]
2014-08-07
[ [ "Rabilloud", "Thierry", "", "LCBM - UMR 5249" ] ]
Taking the opportunity of the 20th anniversary of the word "proteomics", this young adult age is a good time to remember how proteomics came from enormous progress in protein separation and protein microanalysis techniques, and from the conjugation of these advances into a high performance and streamlined working setup. However, in the history of the almost three decades that encompass the first attempts to perform large scale analysis of proteins to the current high throughput proteomics that we can enjoy now, it is also interesting to underline and to recall how difficult the first decade was. Indeed when the word was cast, the battle was already won. This recollection is mostly devoted to the almost forgotten period where proteomics was being conceived and brought to birth, as this collective scientific work will never appear when searched through the keyword "proteomics". BIOLOGICAL SIGNIFICANCE: The significance of this manuscript is to recall and review the two decades that separated the first attempts at performing large scale analysis of proteins from the solid technical corpus that existed when the word "proteomics" was coined twenty years ago. This recollection is made within the scientific historical context of this decade, which also saw the blossoming of DNA cloning and sequencing. This article is part of a Special Issue entitled: 20 years of Proteomics in memory of Vitaliano Pallini. Guest Editors: Luca Bini, Juan J. Calvete, Natacha Turck, Denis Hochstrasser and Jean-Charles Sanchez.
2202.05142
Maxime Lenormand
Nicolas Dubos, Maxime Lenormand, Leandro Castello, Thierry Oberdorff, Antoine Guisan and Sandra Luque
Protection gaps in Amazon floodplains will increase with climate change: Insight from the world's largest scaled freshwater fish
30 pages, 3 figures + Appendix
Aquatic Conservation: Marine and Freshwater Ecosystems 32, 1830-1841 (2022)
10.1002/aqc.3877
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Amazon floodplains represent important surfaces of highly valuable ecosystems, yet they remain neglected from protected areas. While the efficiency of the protected area network of the Amazon basin may be jeopardised by climate change, floodplains are exposed to important consequences of climate change but are omitted from species distribution models and protection gap analyses. We modelled the current and future (2070) distribution of the giant bony-tongue fish Arapaima sp. accounting for climate and habitat requirements, with consideration of dam presence (already existing and planned constructions) and hydroperiod (high- and low-water stages). We further quantified the amount of suitable environment which falls inside and outside the current network of protected areas to identify spatial conservation gaps. We predict climate change to cause the decline of environmental suitability by 16.6% during the high-water stage, and by 19.4% during the low-water stage. We found that about 70% of the suitable environments of Arapaima sp. remain currently unprotected, which is likely to increase by 5% with future climate change effects. Both current and projected dam constructions may hamper population flows between the central and the Bolivian and Peruvian parts of the basin. We highlight protection gaps mostly in the southwestern part of the basin and recommend the extension of the current network of protected areas in the floodplains of the upper Ucayali, Juru\`a and Purus Rivers and their tributaries. This study showed the importance of taking into account hydroperiods and dispersal barriers in forecasting the distribution of freshwater fish species, and stresses the urgent need to integrate floodplains to the protected area networks.
[ { "created": "Thu, 10 Feb 2022 16:49:22 GMT", "version": "v1" }, { "created": "Wed, 24 Aug 2022 10:33:44 GMT", "version": "v2" }, { "created": "Thu, 25 Aug 2022 07:27:53 GMT", "version": "v3" } ]
2022-11-03
[ [ "Dubos", "Nicolas", "" ], [ "Lenormand", "Maxime", "" ], [ "Castello", "Leandro", "" ], [ "Oberdorff", "Thierry", "" ], [ "Guisan", "Antoine", "" ], [ "Luque", "Sandra", "" ] ]
The Amazon floodplains represent important surfaces of highly valuable ecosystems, yet they remain poorly covered by protected areas. While the efficiency of the protected area network of the Amazon basin may be jeopardised by climate change, floodplains are exposed to important consequences of climate change but are omitted from species distribution models and protection gap analyses. We modelled the current and future (2070) distribution of the giant bony-tongue fish Arapaima sp. accounting for climate and habitat requirements, with consideration of dam presence (already existing and planned constructions) and hydroperiod (high- and low-water stages). We further quantified the amount of suitable environment that falls inside and outside the current network of protected areas to identify spatial conservation gaps. We predict climate change to cause a decline in environmental suitability of 16.6% during the high-water stage and 19.4% during the low-water stage. We found that about 70% of the suitable environments of Arapaima sp. remain currently unprotected, a proportion likely to increase by 5% with future climate change effects. Both current and projected dam constructions may hamper population flows between the central and the Bolivian and Peruvian parts of the basin. We highlight protection gaps mostly in the southwestern part of the basin and recommend the extension of the current network of protected areas in the floodplains of the upper Ucayali, Juruá and Purus Rivers and their tributaries. This study shows the importance of taking hydroperiods and dispersal barriers into account in forecasting the distribution of freshwater fish species, and stresses the urgent need to integrate floodplains into protected area networks.
q-bio/0605015
Radford M. Neal
Babak Shahbaba and Radford M. Neal
Gene Function Classification Using Bayesian Models with Hierarchy-Based Priors
null
null
null
null
q-bio.GN
null
We investigate the application of hierarchical classification schemes to the annotation of gene function based on several characteristics of protein sequences including phylogenic descriptors, sequence based attributes, and predicted secondary structure. We discuss three Bayesian models and compare their performance in terms of predictive accuracy. These models are the ordinary multinomial logit (MNL) model, a hierarchical model based on a set of nested MNL models, and a MNL model with a prior that introduces correlations between the parameters for classes that are nearby in the hierarchy. We also provide a new scheme for combining different sources of information. We use these models to predict the functional class of Open Reading Frames (ORFs) from the E. coli genome. The results from all three models show substantial improvement over previous methods, which were based on the C5 algorithm. The MNL model using a prior based on the hierarchy outperforms both the non-hierarchical MNL model and the nested MNL model. In contrast to previous attempts at combining these sources of information, our approach results in a higher accuracy rate when compared to models that use each data source alone. Together, these results show that gene function can be predicted with higher accuracy than previously achieved, using Bayesian models that incorporate suitable prior information.
[ { "created": "Wed, 10 May 2006 20:23:41 GMT", "version": "v1" } ]
2007-05-23
[ [ "Shahbaba", "Babak", "" ], [ "Neal", "Radford M.", "" ] ]
We investigate the application of hierarchical classification schemes to the annotation of gene function based on several characteristics of protein sequences including phylogenic descriptors, sequence based attributes, and predicted secondary structure. We discuss three Bayesian models and compare their performance in terms of predictive accuracy. These models are the ordinary multinomial logit (MNL) model, a hierarchical model based on a set of nested MNL models, and a MNL model with a prior that introduces correlations between the parameters for classes that are nearby in the hierarchy. We also provide a new scheme for combining different sources of information. We use these models to predict the functional class of Open Reading Frames (ORFs) from the E. coli genome. The results from all three models show substantial improvement over previous methods, which were based on the C5 algorithm. The MNL model using a prior based on the hierarchy outperforms both the non-hierarchical MNL model and the nested MNL model. In contrast to previous attempts at combining these sources of information, our approach results in a higher accuracy rate when compared to models that use each data source alone. Together, these results show that gene function can be predicted with higher accuracy than previously achieved, using Bayesian models that incorporate suitable prior information.
1905.02330
Ann Sizemore Blevins
Ann Sizemore Blevins and Danielle S. Bassett
On the reorderability of node-filtered order complexes
41 pages, 16 figures
Phys. Rev. E 101, 052311 (2020)
10.1103/PhysRevE.101.052311
null
q-bio.QM math.AT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Growing graphs describe a multitude of developing processes from maturing brains to expanding vocabularies to burgeoning public transit systems. Each of these growing processes likely adheres to proliferation rules that establish an effective order of node and connection emergence. When followed, such proliferation rules allow the system to properly develop along a predetermined trajectory. But rules are rarely followed. Here we ask what topological changes in the growing graph trajectories might occur after the specific but basic perturbation of permuting the node emergence order. Specifically we harness applied topological methods to determine which of six growing graph models exhibit topology that is robust to randomizing node order, termed global reorderability, and robust to temporally-local node swaps, termed local reorderability. We find that the six graph models fall upon a spectrum of both local and global reorderability, and furthermore we provide theoretical connections between robustness to node pair ordering and robustness to arbitrary node orderings. Finally we discuss real-world applications of reorderability analyses and suggest possibilities for designing reorderable networks.
[ { "created": "Tue, 7 May 2019 02:29:11 GMT", "version": "v1" }, { "created": "Wed, 24 Jul 2019 21:22:50 GMT", "version": "v2" } ]
2020-05-27
[ [ "Blevins", "Ann Sizemore", "" ], [ "Bassett", "Danielle S.", "" ] ]
Growing graphs describe a multitude of developing processes from maturing brains to expanding vocabularies to burgeoning public transit systems. Each of these growing processes likely adheres to proliferation rules that establish an effective order of node and connection emergence. When followed, such proliferation rules allow the system to properly develop along a predetermined trajectory. But rules are rarely followed. Here we ask what topological changes in the growing graph trajectories might occur after the specific but basic perturbation of permuting the node emergence order. Specifically we harness applied topological methods to determine which of six growing graph models exhibit topology that is robust to randomizing node order, termed global reorderability, and robust to temporally-local node swaps, termed local reorderability. We find that the six graph models fall upon a spectrum of both local and global reorderability, and furthermore we provide theoretical connections between robustness to node pair ordering and robustness to arbitrary node orderings. Finally we discuss real-world applications of reorderability analyses and suggest possibilities for designing reorderable networks.
2006.06840
Jonas Wallin
Adam Altmejd, Joacim Rocklöv, Jonas Wallin
Nowcasting Covid-19 statistics reported with delay: a case study of Sweden
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The new corona virus disease -- COVID-2019 -- is rapidly spreading through the world. The availability of unbiased timely statistics of trends in disease events are a key to effective responses. But due to reporting delays, the most recently reported numbers are frequently underestimating of the total number of infections, hospitalizations and deaths creating an illusion of a downward trend. Here we describe a statistical methodology for predicting true daily quantities and their uncertainty, estimated using historical reporting delays. The methodology takes into account the observed distribution pattern of the lag. It is derived from the removal method, a well-established estimation framework in the field of ecology.
[ { "created": "Thu, 11 Jun 2020 21:35:17 GMT", "version": "v1" } ]
2020-06-15
[ [ "Altmejd", "Adam", "" ], [ "Rocklöv", "Joacim", "" ], [ "Wallin", "Jonas", "" ] ]
The new coronavirus disease -- COVID-19 -- is rapidly spreading through the world. The availability of unbiased, timely statistics of trends in disease events is key to effective responses. But due to reporting delays, the most recently reported numbers frequently underestimate the total number of infections, hospitalizations and deaths, creating an illusion of a downward trend. Here we describe a statistical methodology for predicting true daily quantities and their uncertainty, estimated using historical reporting delays. The methodology takes into account the observed distribution pattern of the lag. It is derived from the removal method, a well-established estimation framework in the field of ecology.
1409.0356
Michele Caselle
A. Colliva, R. Pellegrini, A. Testori, M. Caselle
Ising model description of long range correlations in DNA sequences
15 pages, 6 figures. Substantial changes in sect. 4.2. Some clarifications and comments added throughout the paper. Journal version
Phys. Rev. E 91, 052703 (2015)
10.1103/PhysRevE.91.052703
null
q-bio.OT cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We model long-range correlations of nucleotides in the human DNA sequence using the long-range one-dimensional Ising model. We show that for distances between $10^3$ and $10^6$ bp the correlations show a universal behaviour and may be described by the non-mean-field limit of the long-range 1d Ising model. This allows us to make some testable hypotheses on the nature of the interaction between distant portions of the DNA chain that led to the DNA structure we observe today in higher eukaryotes.
[ { "created": "Mon, 1 Sep 2014 10:32:45 GMT", "version": "v1" }, { "created": "Fri, 19 Sep 2014 17:14:04 GMT", "version": "v2" }, { "created": "Wed, 29 Apr 2015 13:20:25 GMT", "version": "v3" }, { "created": "Mon, 11 May 2015 14:26:20 GMT", "version": "v4" } ]
2015-05-13
[ [ "Colliva", "A.", "" ], [ "Pellegrini", "R.", "" ], [ "Testori", "A.", "" ], [ "Caselle", "M.", "" ] ]
We model long-range correlations of nucleotides in the human DNA sequence using the long-range one-dimensional Ising model. We show that for distances between $10^3$ and $10^6$ bp the correlations show a universal behaviour and may be described by the non-mean-field limit of the long-range 1d Ising model. This allows us to make some testable hypotheses on the nature of the interaction between distant portions of the DNA chain that led to the DNA structure we observe today in higher eukaryotes.
0801.1846
Alain Barrat
Aurelien Gautreau (LPT), Alain Barrat (LPT), Marc Barthelemy (CEA DIF/DPTA)
Global disease spread: statistics and estimation of arrival times
J. Theor. Biol., in press
Journal of Theoretical Biology 251 (2008) 509-522
10.1016/j.jtbi.2007.12.001
null
q-bio.PE cond-mat.stat-mech
null
We study metapopulation models for the spread of epidemics in which different subpopulations (cities) are connected by fluxes of individuals (travelers). This framework allows us to describe the spread of a disease on a large scale, and we focus here on the computation of the arrival time of a disease as a function of the properties of the seed of the epidemic and of the characteristics of the network connecting the various subpopulations. Using analytical and numerical arguments, we introduce an easily computable quantity that approximates this average arrival time. We show, on the example of a disease spreading on the world-wide airport network, that this quantity predicts with good accuracy the order of arrival of the disease in the various subpopulations in each realization of an epidemic scenario, and not only on average over realizations. Finally, this quantity might be useful in the identification of the dominant paths of the disease spread.
[ { "created": "Fri, 11 Jan 2008 21:05:18 GMT", "version": "v1" } ]
2008-03-20
[ [ "Gautreau", "Aurelien", "", "LPT" ], [ "Barrat", "Alain", "", "LPT" ], [ "Barthelemy", "Marc", "", "CEA\n DIF/DPTA" ] ]
We study metapopulation models for the spread of epidemics in which different subpopulations (cities) are connected by fluxes of individuals (travelers). This framework allows us to describe the spread of a disease on a large scale, and we focus here on the computation of the arrival time of a disease as a function of the properties of the seed of the epidemic and of the characteristics of the network connecting the various subpopulations. Using analytical and numerical arguments, we introduce an easily computable quantity that approximates this average arrival time. We show, on the example of a disease spreading on the world-wide airport network, that this quantity predicts with good accuracy the order of arrival of the disease in the various subpopulations in each realization of an epidemic scenario, and not only on average over realizations. Finally, this quantity might be useful in the identification of the dominant paths of the disease spread.
2006.00971
Aristides Moustakas
Aristides Moustakas
Internet search effort on Covid-19 and the underlying public interventions and epidemiological status
null
null
null
null
q-bio.PE q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Disease spread is a complex phenomenon requiring an interdisciplinary approach. Covid-19 exhibited a global spatial spread in a very short time frame, resulting in a global pandemic. Data on web search effort in Greece on Covid-19 as a topic over one year, on a weekly temporal scale, were analyzed using governmental intervention measures such as school closures, movement restrictions, national and international travelling restrictions, stay-at-home requirements, mask requirements and financial support measures, together with epidemiological variables such as new cases and new deaths, as potential explanatory covariates. The relationship between web search effort on Covid-19 and the 16 explanatory covariates in total was analyzed with machine learning. Web search over time was compared with the corresponding epidemiological situation, expressed by the Rt in the same week. Results indicated that the trained model exhibited a fit of R2 = 91% between the actual and predicted web search effort. The top five variables for predicting web search effort were new deaths, the opening of international borders to non-Greek nationals, new cases, testing policy, and restrictions on internal movements. Web search peaked during the same weeks that the Rt was peaking, although new deaths and new cases were not peaking during those dates, and Rt is rarely reported in public media. As both web search effort and Rt peaked during 1-15 August 2020, the peak of the tourist season, the implications of this are discussed.
[ { "created": "Fri, 29 May 2020 17:55:02 GMT", "version": "v1" }, { "created": "Mon, 22 Mar 2021 14:17:02 GMT", "version": "v2" } ]
2021-03-23
[ [ "Moustakas", "Aristides", "" ] ]
Disease spread is a complex phenomenon requiring an interdisciplinary approach. Covid-19 exhibited a global spatial spread in a very short time frame, resulting in a global pandemic. Data on web search effort in Greece on Covid-19 as a topic over one year, on a weekly temporal scale, were analyzed using governmental intervention measures such as school closures, movement restrictions, national and international travelling restrictions, stay-at-home requirements, mask requirements and financial support measures, together with epidemiological variables such as new cases and new deaths, as potential explanatory covariates. The relationship between web search effort on Covid-19 and the 16 explanatory covariates in total was analyzed with machine learning. Web search over time was compared with the corresponding epidemiological situation, expressed by the Rt in the same week. Results indicated that the trained model exhibited a fit of R2 = 91% between the actual and predicted web search effort. The top five variables for predicting web search effort were new deaths, the opening of international borders to non-Greek nationals, new cases, testing policy, and restrictions on internal movements. Web search peaked during the same weeks that the Rt was peaking, although new deaths and new cases were not peaking during those dates, and Rt is rarely reported in public media. As both web search effort and Rt peaked during 1-15 August 2020, the peak of the tourist season, the implications of this are discussed.
1012.1894
Haret Rosu
H.C. Rosu, J.S. Murguia, V. Ibarra-Junquera
Detection of mixed-culture growth in the total biomass data by wavelet transforms
6 pages,3 figures, three other similar figures left out
J. Appl. Res. Tech. 8(2), 240-248 (2010)
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have shown elsewhere that the presence of mixed-culture growth of microbial species in fermentation processes can be detected with high accuracy by employing the wavelet transform. This is achieved because the crossings of the different growth processes contributing to the total biomass signal appear as singularities that are very well evidenced through their singularity cones in the wavelet transform. However, we used very simple two-species cases. In this work, we extend the wavelet method to a more complicated illustrative fermentation case of three microbial species, for which we employ several wavelets with different numbers of vanishing moments in order to eliminate possible numerical artifacts. Working in this way allows us to filter the numerical values of the H\"older exponents more precisely. Therefore, we were able to determine the characteristic H\"older exponents for the corresponding crossing singularities of the microbial growth processes and their stability logarithmic scale ranges up to the first decimal in the value of the characteristic exponents. Since calibrating the mixed microbial growth by means of their H\"older exponents could have potential industrial applications, the dependence of the H\"older exponents on the kinetic and physical parameters of the growth models remains a future experimental task.
[ { "created": "Wed, 8 Dec 2010 23:52:10 GMT", "version": "v1" } ]
2010-12-10
[ [ "Rosu", "H. C.", "" ], [ "Murguia", "J. S.", "" ], [ "Ibarra-Junquera", "V.", "" ] ]
We have shown elsewhere that the presence of mixed-culture growth of microbial species in fermentation processes can be detected with high accuracy by employing the wavelet transform. This is achieved because the crossings of the different growth processes contributing to the total biomass signal appear as singularities that are very well evidenced through their singularity cones in the wavelet transform. However, we used very simple two-species cases. In this work, we extend the wavelet method to a more complicated illustrative fermentation case of three microbial species, for which we employ several wavelets with different numbers of vanishing moments in order to eliminate possible numerical artifacts. Working in this way allows us to filter the numerical values of the H\"older exponents more precisely. Therefore, we were able to determine the characteristic H\"older exponents for the corresponding crossing singularities of the microbial growth processes and their stability logarithmic scale ranges up to the first decimal in the value of the characteristic exponents. Since calibrating the mixed microbial growth by means of their H\"older exponents could have potential industrial applications, the dependence of the H\"older exponents on the kinetic and physical parameters of the growth models remains a future experimental task.
1101.4573
Alfredo Braunstein
M. Bailly-Bechet, C. Borgs, A. Braunstein, J. Chayes, A. Dagkessamanskaia, J.-M. Fran\c{c}ois, and R. Zecchina
Finding undetected protein associations in cell signaling by belief propagation
6 pages, 3 figures, 1 table, Supporting Information
Published online before print December 27, 2010, doi: 10.1073/pnas.1004751108 PNAS January 11, 2011 vol. 108 no. 2 882-887
10.1073/pnas.1004751108
null
q-bio.MN cond-mat.stat-mech cs.AI cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
External information propagates in the cell mainly through signaling cascades and transcriptional activation, allowing it to react to a wide spectrum of environmental changes. High throughput experiments identify numerous molecular components of such cascades that may, however, interact through unknown partners. Some of them may be detected using data coming from the integration of a protein-protein interaction network and mRNA expression profiles. This inference problem can be mapped onto the problem of finding appropriate optimal connected subgraphs of a network defined by these datasets. The optimization procedure turns out to be computationally intractable in general. Here we present a new distributed algorithm for this task, inspired from statistical physics, and apply this scheme to alpha factor and drug perturbations data in yeast. We identify the role of the COS8 protein, a member of a gene family of previously unknown function, and validate the results by genetic experiments. The algorithm we present is specially suited for very large datasets, can run in parallel, and can be adapted to other problems in systems biology. On renowned benchmarks it outperforms other algorithms in the field.
[ { "created": "Mon, 24 Jan 2011 15:57:48 GMT", "version": "v1" } ]
2011-01-25
[ [ "Bailly-Bechet", "M.", "" ], [ "Borgs", "C.", "" ], [ "Braunstein", "A.", "" ], [ "Chayes", "J.", "" ], [ "Dagkessamanskaia", "A.", "" ], [ "François", "J. -M.", "" ], [ "Zecchina", "R.", "" ] ]
External information propagates in the cell mainly through signaling cascades and transcriptional activation, allowing it to react to a wide spectrum of environmental changes. High throughput experiments identify numerous molecular components of such cascades that may, however, interact through unknown partners. Some of them may be detected using data coming from the integration of a protein-protein interaction network and mRNA expression profiles. This inference problem can be mapped onto the problem of finding appropriate optimal connected subgraphs of a network defined by these datasets. The optimization procedure turns out to be computationally intractable in general. Here we present a new distributed algorithm for this task, inspired from statistical physics, and apply this scheme to alpha factor and drug perturbations data in yeast. We identify the role of the COS8 protein, a member of a gene family of previously unknown function, and validate the results by genetic experiments. The algorithm we present is specially suited for very large datasets, can run in parallel, and can be adapted to other problems in systems biology. On renowned benchmarks it outperforms other algorithms in the field.
2306.00580
David Franklin
Sae Franklin and David W. Franklin
Visuomotor feedback tuning in the absence of visual error information
29 pages, 11 figures. arXiv admin note: substantial text overlap with arXiv:2008.07574
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Large increases in visuomotor feedback gains occur during initial adaptation to novel dynamics, which we propose are due to increased internal model uncertainty. That is, large errors indicate increased uncertainty in our prediction of the environment, increasing feedback gains and co-contraction as a coping mechanism. Our previous work showed distinct patterns of visuomotor feedback gains during abrupt or gradual adaptation to a force field, suggesting two complementary processes: reactive feedback gains increasing with internal model uncertainty and the gradual learning of predictive feedback gains tuned to the environment. Here we further investigate what drives these changes in visuomotor feedback gains during learning, by separating the effects of internal model uncertainty from the visual error signal through the removal of visual error information. Removing visual error information suppresses the visuomotor feedback gains in all conditions, but the pattern of modulation throughout adaptation is unaffected. Moreover, we find increased muscle co-contraction in both abrupt and gradual adaptation protocols, demonstrating that visuomotor feedback responses are independent of the level of co-contraction. Our results suggest that visual feedback benefits motor adaptation tasks through higher visuomotor feedback gains, but when it is not available participants adapt at a similar rate through increased co-contraction. We have demonstrated a direct connection between learning and predictive visuomotor feedback gains, independent of visual error signals. This further supports our hypothesis that internal model uncertainty drives initial increases in feedback gains.
[ { "created": "Thu, 1 Jun 2023 11:52:42 GMT", "version": "v1" }, { "created": "Tue, 12 Dec 2023 10:50:22 GMT", "version": "v2" } ]
2023-12-15
[ [ "Franklin", "Sae", "" ], [ "Franklin", "David W.", "" ] ]
Large increases in visuomotor feedback gains occur during initial adaptation to novel dynamics, which we propose are due to increased internal model uncertainty. That is, large errors indicate increased uncertainty in our prediction of the environment, increasing feedback gains and co-contraction as a coping mechanism. Our previous work showed distinct patterns of visuomotor feedback gains during abrupt or gradual adaptation to a force field, suggesting two complementary processes: reactive feedback gains increasing with internal model uncertainty and the gradual learning of predictive feedback gains tuned to the environment. Here we further investigate what drives these changes in visuomotor feedback gains during learning, by separating the effects of internal model uncertainty from the visual error signal through the removal of visual error information. Removing visual error information suppresses the visuomotor feedback gains in all conditions, but the pattern of modulation throughout adaptation is unaffected. Moreover, we find increased muscle co-contraction in both abrupt and gradual adaptation protocols, demonstrating that visuomotor feedback responses are independent of the level of co-contraction. Our results suggest that visual feedback benefits motor adaptation tasks through higher visuomotor feedback gains, but when it is not available participants adapt at a similar rate through increased co-contraction. We have demonstrated a direct connection between learning and predictive visuomotor feedback gains, independent of visual error signals. This further supports our hypothesis that internal model uncertainty drives initial increases in feedback gains.
1810.06292
Isti Rodiah
Isti Rodiah, Wolfgang Bock, and Torben Fattler
An Agent Based Modeling of Spatially Inhomogeneous Host-Vector Disease Transmission
6 pages, 23 figures
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article we consider a microscopic model for host-vector disease transmission based on configuration space analysis. Using Vlasov scaling we obtain the corresponding mesoscopic (kinetic) equations, describing the densities of the susceptible and infected compartments in space. The resulting system of equations can be seen as a generalization of the SISUV model to a spatial setting.
[ { "created": "Mon, 15 Oct 2018 12:04:00 GMT", "version": "v1" } ]
2018-10-16
[ [ "Rodiah", "Isti", "" ], [ "Bock", "Wolfgang", "" ], [ "Fattler", "Torben", "" ] ]
In this article we consider a microscopic model for host-vector disease transmission based on configuration space analysis. Using Vlasov scaling we obtain the corresponding mesoscopic (kinetic) equations, describing the densities of the susceptible and infected compartments in space. The resulting system of equations can be seen as a generalization of the SISUV model to a spatial setting.
1902.07585
Soumyadip Banerjee
Soumyadip Banerjee and Shaunak Sen
Robustness of a Biomolecular Oscillator to Pulse Perturbations
13 pages, 4 figures
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Biomolecular oscillators can function robustly in the presence of environmental perturbations, which can be either static or dynamic. While the effect of different circuit parameters and mechanisms on robustness to steady perturbations has been investigated, the scenario for dynamic perturbations is relatively unclear. To address this, we use a benchmark three-protein oscillator design - the repressilator - and investigate its robustness to pulse perturbations, both computationally and using analytical tools of Floquet theory. We find that the metric provided by direct computations of the time it takes for the oscillator to settle after a pulse perturbation is applied correlates well with the metric provided by Floquet theory. We investigate the parametric dependence of the Floquet metric, finding that parameters that increase the effective delay enhance robustness to pulse perturbations. We find that structural changes such as increasing the number of proteins in a ring oscillator, as well as adding positive feedback, both of which increase the effective delay, facilitate such robustness. These results highlight such design principles, especially the role of delay, for designing an oscillator that is robust to pulse perturbations.
[ { "created": "Wed, 20 Feb 2019 14:59:01 GMT", "version": "v1" }, { "created": "Wed, 4 Dec 2019 10:57:55 GMT", "version": "v2" }, { "created": "Fri, 28 Feb 2020 20:16:31 GMT", "version": "v3" } ]
2020-03-03
[ [ "Banerjee", "Soumyadip", "" ], [ "Sen", "Shaunak", "" ] ]
Biomolecular oscillators can function robustly in the presence of environmental perturbations, which can be either static or dynamic. While the effect of different circuit parameters and mechanisms on robustness to steady perturbations has been investigated, the scenario for dynamic perturbations is relatively unclear. To address this, we use a benchmark three-protein oscillator design - the repressilator - and investigate its robustness to pulse perturbations, both computationally and using analytical tools of Floquet theory. We find that the metric provided by direct computations of the time it takes for the oscillator to settle after a pulse perturbation is applied correlates well with the metric provided by Floquet theory. We investigate the parametric dependence of the Floquet metric, finding that parameters that increase the effective delay enhance robustness to pulse perturbations. We find that structural changes such as increasing the number of proteins in a ring oscillator, as well as adding positive feedback, both of which increase the effective delay, facilitate such robustness. These results highlight such design principles, especially the role of delay, for designing an oscillator that is robust to pulse perturbations.
1601.01935
Majid Bani-Yaghoub
Majid Bani-Yaghoub and Aaron Reed
Social Network Analysis of a Grassland Rodent Community Using a Lotka-Volterra Modeling Approach
A short article accepted by 2015 Symposium on Biomathematics and Ecology
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although social network analysis is a promising tool for studying the structure and dynamics of wildlife communities, current methods require costly and detailed network data, which often are not available over long time periods (e.g., decades). The present work aims to resolve this issue by developing a new methodology that requires much less detailed data and relies on well-established Lotka-Volterra models. Using long-term abundance data (1973-2003) of northeastern Kansas rodents, the changes in the magnitude and direction of interactions (e.g., changes from cooperative behavior to competitive behavior or changes in the magnitude of competition) are quantified.
[ { "created": "Fri, 8 Jan 2016 16:37:52 GMT", "version": "v1" } ]
2016-01-11
[ [ "Bani-Yaghoub", "Majid", "" ], [ "Reed", "Aaron", "" ] ]
Although social network analysis is a promising tool for studying the structure and dynamics of wildlife communities, current methods require costly and detailed network data, which often are not available over long time periods (e.g., decades). The present work aims to resolve this issue by developing a new methodology that requires much less detailed data and relies on well-established Lotka-Volterra models. Using long-term abundance data (1973-2003) of northeastern Kansas rodents, the changes in the magnitude and direction of interactions (e.g., changes from cooperative behavior to competitive behavior or changes in the magnitude of competition) are quantified.
2112.05406
Gregory Huey
Larry Harper, Greg Huey
Symmetry-Breaking in Plant Stems
27 pages, 17 figures
null
null
null
q-bio.QM q-bio.PE
http://creativecommons.org/licenses/by-sa/4.0/
The purpose of this paper is to present a model of a phenomenon of plant stem morphogenesis observed by Cesar Gomez-Campo in 1970. We consider a simplified model of auxin dynamics in plant stems, showing that, after creation of the original primordium, it can represent random, distichous and spiral phyllotaxis (leaf arrangement) just by varying one parameter, the rate of diffusion. The same analysis extends to the $n$-jugate case where $n$ primordia are initiated at each plastochrone. Having validated the model, we consider how it can give rise to the Gomez-Campo phenomenon, showing how a stem with spiral phyllotaxis can produce branches of the same or opposite chirality. And finally, how the relationship can change from discordant to concordant over the course of a growing season.
[ { "created": "Fri, 10 Dec 2021 09:24:32 GMT", "version": "v1" } ]
2021-12-13
[ [ "Harper", "Larry", "" ], [ "Huey", "Greg", "" ] ]
The purpose of this paper is to present a model of a phenomenon of plant stem morphogenesis observed by Cesar Gomez-Campo in 1970. We consider a simplified model of auxin dynamics in plant stems, showing that, after creation of the original primordium, it can represent random, distichous and spiral phyllotaxis (leaf arrangement) just by varying one parameter, the rate of diffusion. The same analysis extends to the $n$-jugate case where $n$ primordia are initiated at each plastochrone. Having validated the model, we consider how it can give rise to the Gomez-Campo phenomenon, showing how a stem with spiral phyllotaxis can produce branches of the same or opposite chirality. And finally, how the relationship can change from discordant to concordant over the course of a growing season.
0911.3661
David Wilson
David P. Wilson, Alexei V. Tkachenko, Jens-Christian Meiners
A Generalized Theory of DNA Looping and Cyclization
6 pages, 4 figures, submitted to EPL
null
10.1209/0295-5075/89/58005
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have developed a generalized semi-analytic approach for efficiently computing cyclization and looping $J$ factors of DNA under arbitrary binding constraints. Many biological systems involving DNA-protein interactions impose precise boundary conditions on DNA, which necessitates a treatment beyond the Shimada-Yamakawa model for ring cyclization. Our model allows DNA to be treated as a heteropolymer with sequence-dependent intrinsic curvature and stiffness. In this framework, we independently compute enthalpic and entropic contributions to the $J$ factor and show that even at small length scales $(\sim \ell_{p})$ entropic effects are significant. We propose a simple analytic formula to describe our numerical results for a homogeneous DNA in planar loops, which can be used to predict experimental cyclization and loop formation rates as a function of loop size and binding geometry. We also introduce an effective torsional persistence length that describes the coupling between twist and bending of DNA when looped.
[ { "created": "Wed, 18 Nov 2009 21:16:52 GMT", "version": "v1" } ]
2015-05-14
[ [ "Wilson", "David P.", "" ], [ "Tkachenko", "Alexei V.", "" ], [ "Meiners", "Jens-Christian", "" ] ]
We have developed a generalized semi-analytic approach for efficiently computing cyclization and looping $J$ factors of DNA under arbitrary binding constraints. Many biological systems involving DNA-protein interactions impose precise boundary conditions on DNA, which necessitates a treatment beyond the Shimada-Yamakawa model for ring cyclization. Our model allows DNA to be treated as a heteropolymer with sequence-dependent intrinsic curvature and stiffness. In this framework, we independently compute enthalpic and entropic contributions to the $J$ factor and show that even at small length scales $(\sim \ell_{p})$ entropic effects are significant. We propose a simple analytic formula to describe our numerical results for a homogeneous DNA in planar loops, which can be used to predict experimental cyclization and loop formation rates as a function of loop size and binding geometry. We also introduce an effective torsional persistence length that describes the coupling between twist and bending of DNA when looped.
0904.4843
Gunnar Boldhaus
Gunnar Boldhaus and Konstantin Klemm
Regulatory networks and connected components of the neutral space
6 pages, 5 figures
null
10.1140/epjb/e2010-00176-4
null
q-bio.MN cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The functioning of a living cell is largely determined by the structure of its regulatory network, comprising non-linear interactions between regulatory genes. An important factor for the stability and evolvability of such regulatory systems is neutrality - typically a large number of alternative network structures give rise to the necessary dynamics. Here we study the discretized regulatory dynamics of the yeast cell cycle [Li et al., PNAS, 2004] and the set of networks capable of reproducing it, which we call functional. Among these, the empirical yeast wildtype network is close to optimal with respect to sparse wiring. Under point mutations, which establish or delete single interactions, the neutral space of functional networks is fragmented into 4.7 * 10^8 components. One of the smaller ones contains the wildtype network. On average, functional networks reachable from the wildtype by mutations are sparser, have higher noise resilience and fewer fixed point attractors as compared with networks outside of this wildtype component.
[ { "created": "Thu, 30 Apr 2009 14:05:22 GMT", "version": "v1" } ]
2015-05-13
[ [ "Boldhaus", "Gunnar", "" ], [ "Klemm", "Konstantin", "" ] ]
The functioning of a living cell is largely determined by the structure of its regulatory network, comprising non-linear interactions between regulatory genes. An important factor for the stability and evolvability of such regulatory systems is neutrality - typically a large number of alternative network structures give rise to the necessary dynamics. Here we study the discretized regulatory dynamics of the yeast cell cycle [Li et al., PNAS, 2004] and the set of networks capable of reproducing it, which we call functional. Among these, the empirical yeast wildtype network is close to optimal with respect to sparse wiring. Under point mutations, which establish or delete single interactions, the neutral space of functional networks is fragmented into 4.7 * 10^8 components. One of the smaller ones contains the wildtype network. On average, functional networks reachable from the wildtype by mutations are sparser, have higher noise resilience and fewer fixed point attractors as compared with networks outside of this wildtype component.
2401.16052
Alessandro Fontana
Alessandro Fontana and Marios Kyriazis
Why evolution needs the old: a theory of ageing as adaptive force
null
null
null
null
q-bio.PE q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
At any moment in time, evolution is faced with a formidable challenge: refining the already highly optimised design of biological species, a feat accomplished through all preceding generations. In such a scenario, the impact of random changes (the method employed by evolution) is much more likely to be harmful than advantageous, potentially lowering the reproductive fitness of the affected individuals. Our hypothesis is that ageing is, at least in part, caused by the cumulative effect of all the experiments carried out by evolution to improve a species' design. These experiments are almost always unsuccessful, as expected given their pseudorandom nature, cause harm to the body and ultimately lead to death. On the other hand, a small minority of experiments have a positive outcome, offering valuable insight into the direction evolution should pursue. This hypothesis is consistent with the concept of "terminal addition", by which nature is biased towards adding innovations at the end of development. From the perspective of evolution as an optimisation algorithm, ageing is advantageous as it allows innovations to be tested during a phase when their impact on fitness is present but less pronounced. Our inference suggests that ageing has a key biological role, as it contributes to the system's evolvability by exerting a regularisation effect on the fitness landscape of evolution.
[ { "created": "Mon, 29 Jan 2024 11:02:37 GMT", "version": "v1" }, { "created": "Sat, 17 Feb 2024 12:08:19 GMT", "version": "v2" } ]
2024-02-20
[ [ "Fontana", "Alessandro", "" ], [ "Kyriazis", "Marios", "" ] ]
At any moment in time, evolution is faced with a formidable challenge: refining the already highly optimised design of biological species, a feat accomplished through all preceding generations. In such a scenario, the impact of random changes (the method employed by evolution) is much more likely to be harmful than advantageous, potentially lowering the reproductive fitness of the affected individuals. Our hypothesis is that ageing is, at least in part, caused by the cumulative effect of all the experiments carried out by evolution to improve a species' design. These experiments are almost always unsuccessful, as expected given their pseudorandom nature, cause harm to the body and ultimately lead to death. On the other hand, a small minority of experiments have a positive outcome, offering valuable insight into the direction evolution should pursue. This hypothesis is consistent with the concept of "terminal addition", by which nature is biased towards adding innovations at the end of development. From the perspective of evolution as an optimisation algorithm, ageing is advantageous as it makes it possible to test innovations during a phase when their impact on fitness is present but less pronounced. Our inference suggests that ageing has a key biological role, as it contributes to the system's evolvability by exerting a regularisation effect on the fitness landscape of evolution.
1505.05709
Yuko Okamoto
Yoshitake Sakae (Nagoya University), Tomoyuki Hiroyasu (Doshisha University), Mitsunori Miki (Doshisha University), Katsuya Ishii (Nagoya University), and Yuko Okamoto (Nagoya University)
A Conformational Search Method for Protein Systems Using Genetic Crossover and Metropolis Criterion
4 pages, 3 figures
Journal of Physics: Conference Series 487, 012003 (5 pages) (2014)
10.1088/1742-6596/487/1/012003
null
q-bio.BM cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many proteins carry out their biological functions by forming the characteristic tertiary structures. Therefore, the search of the stable states of proteins by molecular simulations is important to understand their functions and stabilities. However, getting the stable state by conformational search is difficult, because the energy landscape of the system is characterized by many local minima separated by high energy barriers. In order to overcome this difficulty, various sampling and optimization methods for conformations of proteins have been proposed. In this study, we propose a new conformational search method for proteins by using genetic crossover and Metropolis criterion. We applied this method to an $\alpha$-helical protein. The conformations obtained from the simulations are in good agreement with the experimental results.
[ { "created": "Thu, 21 May 2015 12:59:56 GMT", "version": "v1" } ]
2015-05-22
[ [ "Sakae", "Yoshitake", "", "Nagoya University" ], [ "Hiroyasu", "Tomoyuki", "", "Doshisha\n University" ], [ "Miki", "Mitsunori", "", "Doshisha University" ], [ "Ishii", "Katsuya", "", "Nagoya\n University" ], [ "Okamoto", "Yuko", "", "Nagoya University" ] ]
Many proteins carry out their biological functions by forming the characteristic tertiary structures. Therefore, the search of the stable states of proteins by molecular simulations is important to understand their functions and stabilities. However, getting the stable state by conformational search is difficult, because the energy landscape of the system is characterized by many local minima separated by high energy barriers. In order to overcome this difficulty, various sampling and optimization methods for conformations of proteins have been proposed. In this study, we propose a new conformational search method for proteins by using genetic crossover and Metropolis criterion. We applied this method to an $\alpha$-helical protein. The conformations obtained from the simulations are in good agreement with the experimental results.
2211.10965
Mikhail Prokopenko
Sheryl L. Chang, Quang Dang Nguyen, Alexandra Martiniuk, Vitali Sintchenko, Tania C. Sorrell, Mikhail Prokopenko
Persistence of the Omicron variant of SARS-CoV-2 in Australia: The impact of fluctuating social distancing
30 pages, 12 figures, source code: https://doi.org/10.5281/zenodo.7325675
null
null
null
q-bio.PE cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We modelled emergence and spread of the Omicron variant of SARS-CoV-2 in Australia between December 2021 and June 2022. This pandemic stage exhibited a diverse epidemiological profile with emergence of co-circulating sub-lineages of Omicron, further complicated by differences in social distancing behaviour which varied over time. Our study delineated distinct phases of the Omicron-associated pandemic stage, and retrospectively quantified the adoption of social distancing measures, fluctuating over different time periods in response to the observable incidence dynamics. We also modelled the corresponding disease burden, in terms of hospitalisations, intensive care unit occupancy, and mortality. Supported by good agreement between simulated and actual health data, our study revealed that the nonlinear dynamics observed in the daily incidence and disease burden were determined not only by introduction of sub-lineages of Omicron, but also by the fluctuating adoption of social distancing measures. Our high-resolution model can be used in design and evaluation of public health interventions during future crises.
[ { "created": "Sun, 20 Nov 2022 12:22:13 GMT", "version": "v1" }, { "created": "Mon, 3 Apr 2023 06:07:54 GMT", "version": "v2" } ]
2023-04-04
[ [ "Chang", "Sheryl L.", "" ], [ "Nguyen", "Quang Dang", "" ], [ "Martiniuk", "Alexandra", "" ], [ "Sintchenko", "Vitali", "" ], [ "Sorrell", "Tania C.", "" ], [ "Prokopenko", "Mikhail", "" ] ]
We modelled emergence and spread of the Omicron variant of SARS-CoV-2 in Australia between December 2021 and June 2022. This pandemic stage exhibited a diverse epidemiological profile with emergence of co-circulating sub-lineages of Omicron, further complicated by differences in social distancing behaviour which varied over time. Our study delineated distinct phases of the Omicron-associated pandemic stage, and retrospectively quantified the adoption of social distancing measures, fluctuating over different time periods in response to the observable incidence dynamics. We also modelled the corresponding disease burden, in terms of hospitalisations, intensive care unit occupancy, and mortality. Supported by good agreement between simulated and actual health data, our study revealed that the nonlinear dynamics observed in the daily incidence and disease burden were determined not only by introduction of sub-lineages of Omicron, but also by the fluctuating adoption of social distancing measures. Our high-resolution model can be used in design and evaluation of public health interventions during future crises.
0906.0527
Woodrow L Shew
Woodrow L. Shew, Hongdian Yang, Thomas Petermann, Rajarshi Roy, Dietmar Plenz
Neuronal avalanches imply maximum dynamic range in cortical networks at criticality
main text - 16 pages, 4 figures; supplementary materials - 6 pages, 5 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spontaneous neuronal activity is a ubiquitous feature of cortex. Its spatiotemporal organization reflects past input and modulates future network output. Here we study whether a particular type of spontaneous activity is generated by a network that is optimized for input processing. Neuronal avalanches are a type of spontaneous activity observed in superficial cortical layers in vitro and in vivo with statistical properties expected from a network in a 'critical state'. Theory predicts that the critical state and, therefore, neuronal avalanches are optimal for input processing, but until now, this is untested in experiments. Here, we use cortex slice cultures grown on planar microelectrode arrays to demonstrate that cortical networks which generate neuronal avalanches benefit from maximized dynamic range, i.e. the ability to respond to the greatest range of stimuli. By changing the ratio of excitation and inhibition in the cultures, we derive a network tuning curve for stimulus processing as a function of distance from the critical state in agreement with predictions from our simulations. Our findings suggest that in the cortex, (1) balanced excitation and inhibition establishes the critical state, which maximizes the range of inputs that can be processed and (2) spontaneous activity and input processing are unified in the context of critical phenomena.
[ { "created": "Tue, 2 Jun 2009 18:17:44 GMT", "version": "v1" }, { "created": "Wed, 10 Jun 2009 17:55:48 GMT", "version": "v2" }, { "created": "Mon, 9 Nov 2009 15:46:27 GMT", "version": "v3" } ]
2009-11-09
[ [ "Shew", "Woodrow L.", "" ], [ "Yang", "Hongdian", "" ], [ "Petermann", "Thomas", "" ], [ "Roy", "Rajarshi", "" ], [ "Plenz", "Dietmar", "" ] ]
Spontaneous neuronal activity is a ubiquitous feature of cortex. Its spatiotemporal organization reflects past input and modulates future network output. Here we study whether a particular type of spontaneous activity is generated by a network that is optimized for input processing. Neuronal avalanches are a type of spontaneous activity observed in superficial cortical layers in vitro and in vivo with statistical properties expected from a network in a 'critical state'. Theory predicts that the critical state and, therefore, neuronal avalanches are optimal for input processing, but until now, this is untested in experiments. Here, we use cortex slice cultures grown on planar microelectrode arrays to demonstrate that cortical networks which generate neuronal avalanches benefit from maximized dynamic range, i.e. the ability to respond to the greatest range of stimuli. By changing the ratio of excitation and inhibition in the cultures, we derive a network tuning curve for stimulus processing as a function of distance from the critical state in agreement with predictions from our simulations. Our findings suggest that in the cortex, (1) balanced excitation and inhibition establishes the critical state, which maximizes the range of inputs that can be processed and (2) spontaneous activity and input processing are unified in the context of critical phenomena.
0804.3945
Michael Sadovsky
Michael G.Sadovsky
Dynamic origin of species
4 pages, 4 figures
null
null
null
q-bio.PE q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A simple model of species origin resulting solely from the dynamic features of a population is developed. The model is based on evolutionary optimality of the spatial distribution, with selection acting on mobility. Some biological issues are discussed.
[ { "created": "Thu, 24 Apr 2008 15:20:01 GMT", "version": "v1" } ]
2008-04-25
[ [ "Sadovsky", "Michael G.", "" ] ]
A simple model of species origin resulting solely from the dynamic features of a population is developed. The model is based on evolutionary optimality of the spatial distribution, with selection acting on mobility. Some biological issues are discussed.
1905.12033
Vincent Mallet
Vincent Mallet, Carlos G. Oliver, Nicolas Moitessier, Jerome Waldispuhl
Leveraging binding-site structure for drug discovery with point-cloud methods
null
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational drug discovery strategies can be broadly placed in two categories: ligand-based methods which identify novel molecules by similarity with known ligands, and structure-based methods which predict molecules with high-affinity to a given 3D structure (e.g. a protein). However, ligand-based methods do not leverage information about the binding site, and structure-based approaches rely on the knowledge of a finite set of ligands binding the target. In this work, we introduce TarLig, a novel approach that aims to bridge the gap between ligand and structure-based approaches. We use the 3D structure of the binding site as input to a model which predicts the ligand preferences of the binding site. The resulting predictions could then offer promising seeds and constraints in the chemical space search, based on the binding site structure. TarLig outperforms standard models by introducing a data-alignment and augmentation technique. The recent popularity of Volumetric 3DCNN pipelines in structural bioinformatics suggests that this extra step could help a wide range of methods to improve their results with minimal modifications.
[ { "created": "Tue, 28 May 2019 19:03:02 GMT", "version": "v1" } ]
2019-05-30
[ [ "Mallet", "Vincent", "" ], [ "Oliver", "Carlos G.", "" ], [ "Moitessier", "Nicolas", "" ], [ "Waldispuhl", "Jerome", "" ] ]
Computational drug discovery strategies can be broadly placed in two categories: ligand-based methods which identify novel molecules by similarity with known ligands, and structure-based methods which predict molecules with high-affinity to a given 3D structure (e.g. a protein). However, ligand-based methods do not leverage information about the binding site, and structure-based approaches rely on the knowledge of a finite set of ligands binding the target. In this work, we introduce TarLig, a novel approach that aims to bridge the gap between ligand and structure-based approaches. We use the 3D structure of the binding site as input to a model which predicts the ligand preferences of the binding site. The resulting predictions could then offer promising seeds and constraints in the chemical space search, based on the binding site structure. TarLig outperforms standard models by introducing a data-alignment and augmentation technique. The recent popularity of Volumetric 3DCNN pipelines in structural bioinformatics suggests that this extra step could help a wide range of methods to improve their results with minimal modifications.
2405.09699
Mehdi Bouhaddou
Prashant Kaushal, Manisha R. Ummadi, Gwendolyn M. Jang, Yennifer Delgado, Sara K. Makanani, Sophie F. Blanc, Decan M. Winters, Jiewei Xu, Benjamin Polacco, Yuan Zhou, Erica Stevenson, Manon Eckhardt, Lorena Zuliani-Alvarez, Robyn Kaake, Danielle L. Swaney, Nevan Krogan, and Mehdi Bouhaddou
Mapping Differential Protein-Protein Interaction Networks using Affinity Purification Mass Spectrometry
29 pages, 3 figures
null
null
null
q-bio.QM q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Proteins congregate into complexes to perform fundamental cellular functions. Phenotypic outcomes, in health and disease, are often mechanistically driven by the remodeling of protein complexes by protein coding mutations or cellular signaling changes in response to molecular cues. Here, we present an affinity purification mass spectrometry (APMS) proteomics protocol to quantify and visualize global changes in protein protein interaction (PPI) networks between pairwise conditions. We describe steps for expressing affinity tagged bait proteins in mammalian cells, identifying purified protein complexes, quantifying differential PPIs, and visualizing differential PPI networks. Specifically, this protocol details steps for designing affinity tagged bait gene constructs, transfection, affinity purification, mass spectrometry sample preparation, data acquisition, database search, data quality control, PPI confidence scoring, cross run normalization, statistical data analysis, and differential PPI visualization. Our protocol discusses caveats and limitations with applicability across cell types and biological areas.
[ { "created": "Wed, 15 May 2024 20:51:15 GMT", "version": "v1" } ]
2024-05-17
[ [ "Kaushal", "Prashant", "" ], [ "Ummadi", "Manisha R.", "" ], [ "Jang", "Gwendolyn M.", "" ], [ "Delgado", "Yennifer", "" ], [ "Makanani", "Sara K.", "" ], [ "Blanc", "Sophie F.", "" ], [ "Winters", "Decan M.", "" ], [ "Xu", "Jiewei", "" ], [ "Polacco", "Benjamin", "" ], [ "Zhou", "Yuan", "" ], [ "Stevenson", "Erica", "" ], [ "Eckhardt", "Manon", "" ], [ "Zuliani-Alvarez", "Lorena", "" ], [ "Kaake", "Robyn", "" ], [ "Swaney", "Danielle L.", "" ], [ "Krogan", "Nevan", "" ], [ "Bouhaddou", "Mehdi", "" ] ]
Proteins congregate into complexes to perform fundamental cellular functions. Phenotypic outcomes, in health and disease, are often mechanistically driven by the remodeling of protein complexes by protein coding mutations or cellular signaling changes in response to molecular cues. Here, we present an affinity purification mass spectrometry (APMS) proteomics protocol to quantify and visualize global changes in protein protein interaction (PPI) networks between pairwise conditions. We describe steps for expressing affinity tagged bait proteins in mammalian cells, identifying purified protein complexes, quantifying differential PPIs, and visualizing differential PPI networks. Specifically, this protocol details steps for designing affinity tagged bait gene constructs, transfection, affinity purification, mass spectrometry sample preparation, data acquisition, database search, data quality control, PPI confidence scoring, cross run normalization, statistical data analysis, and differential PPI visualization. Our protocol discusses caveats and limitations with applicability across cell types and biological areas.
1903.01458
Katherine Storrs
Katherine R. Storrs and Nikolaus Kriegeskorte
Deep Learning for Cognitive Neuroscience
Chapter to appear in The Cognitive Neurosciences, 6th Edition
null
null
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural network models can now recognise images, understand text, translate languages, and play many human games at human or superhuman levels. These systems are highly abstracted, but are inspired by biological brains and use only biologically plausible computations. In the coming years, neural networks are likely to become less reliant on learning from massive labelled datasets, and more robust and generalisable in their task performance. From their successes and failures, we can learn about the computational requirements of the different tasks at which brains excel. Deep learning also provides the tools for testing cognitive theories. In order to test a theory, we need to realise the proposed information-processing system at scale, so as to be able to assess its feasibility and emergent behaviours. Deep learning allows us to scale up from principles and circuit models to end-to-end trainable models capable of performing complex tasks. There are many levels at which cognitive neuroscientists can use deep learning in their work, from inspiring theories to serving as full computational models. Ongoing advances in deep learning bring us closer to understanding how cognition and perception may be implemented in the brain -- the grand challenge at the core of cognitive neuroscience.
[ { "created": "Mon, 4 Mar 2019 14:34:52 GMT", "version": "v1" } ]
2019-03-06
[ [ "Storrs", "Katherine R.", "" ], [ "Kriegeskorte", "Nikolaus", "" ] ]
Neural network models can now recognise images, understand text, translate languages, and play many human games at human or superhuman levels. These systems are highly abstracted, but are inspired by biological brains and use only biologically plausible computations. In the coming years, neural networks are likely to become less reliant on learning from massive labelled datasets, and more robust and generalisable in their task performance. From their successes and failures, we can learn about the computational requirements of the different tasks at which brains excel. Deep learning also provides the tools for testing cognitive theories. In order to test a theory, we need to realise the proposed information-processing system at scale, so as to be able to assess its feasibility and emergent behaviours. Deep learning allows us to scale up from principles and circuit models to end-to-end trainable models capable of performing complex tasks. There are many levels at which cognitive neuroscientists can use deep learning in their work, from inspiring theories to serving as full computational models. Ongoing advances in deep learning bring us closer to understanding how cognition and perception may be implemented in the brain -- the grand challenge at the core of cognitive neuroscience.
q-bio/0505037
Wenzhe Ma
Wenzhe Ma, Chao Tang and Luhua Lai
Specificity of Trypsin and Chymotrypsin: Loop Motion Controlled Dynamic Correlation as a Determinant
41 pages, 7 figures
null
10.1529/biophysj.104.057158
null
q-bio.BM
null
Trypsin and chymotrypsin are both serine proteases with high sequence and structural similarities, but with different substrate specificity. Previous experiments have demonstrated the critical role of the two loops outside the binding pocket in controlling the specificity of the two enzymes. To understand the mechanism of such a control of specificity by distant loops, we have used the Gaussian Network Model to study the dynamic properties of trypsin and chymotrypsin and the roles played by the two loops. A clustering method was introduced to analyze the correlated motions of residues. We have found that trypsin and chymotrypsin have distinct dynamic signatures in the two loop regions which are in turn highly correlated with motions of certain residues in the binding pockets. Interestingly, replacing the two loops of trypsin with those of chymotrypsin changes the motion style of trypsin to chymotrypsin-like, whereas the same experimental replacement was shown necessary to make trypsin have chymotrypsin's enzyme specificity and activity. These results suggest that the cooperative motions of the two loops and the substrate-binding sites contribute to the activity and substrate specificity of trypsin and chymotrypsin.
[ { "created": "Thu, 19 May 2005 14:25:02 GMT", "version": "v1" } ]
2009-11-11
[ [ "Ma", "Wenzhe", "" ], [ "Tang", "Chao", "" ], [ "Lai", "Luhua", "" ] ]
Trypsin and chymotrypsin are both serine proteases with high sequence and structural similarities, but with different substrate specificity. Previous experiments have demonstrated the critical role of the two loops outside the binding pocket in controlling the specificity of the two enzymes. To understand the mechanism of such a control of specificity by distant loops, we have used the Gaussian Network Model to study the dynamic properties of trypsin and chymotrypsin and the roles played by the two loops. A clustering method was introduced to analyze the correlated motions of residues. We have found that trypsin and chymotrypsin have distinct dynamic signatures in the two loop regions which are in turn highly correlated with motions of certain residues in the binding pockets. Interestingly, replacing the two loops of trypsin with those of chymotrypsin changes the motion style of trypsin to chymotrypsin-like, whereas the same experimental replacement was shown necessary to make trypsin have chymotrypsin's enzyme specificity and activity. These results suggest that the cooperative motions of the two loops and the substrate-binding sites contribute to the activity and substrate specificity of trypsin and chymotrypsin.
1311.2452
Vikram Saraph
Vikram Saraph and Tijana Milenkovi\'c
MAGNA: Maximizing Accuracy in Global Network Alignment
10 pages, 2 figures
null
null
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological network alignment aims to identify similar regions between networks of different species. Existing methods compute node "similarities" to rapidly identify from possible alignments the "high-scoring" alignments with respect to the overall node similarity. However, the accuracy of the alignments is then evaluated with some other measure that is different than the node similarity used to construct the alignments. Typically, one measures the amount of conserved edges. Thus, the existing methods align similar nodes between networks hoping to conserve many edges (after the alignment is constructed!). Instead, we introduce MAGNA to directly "optimize" edge conservation while the alignment is constructed. MAGNA uses a genetic algorithm and our novel function for crossover of two "parent" alignments into a superior "child" alignment to simulate a "population" of alignments that "evolves" over time; the "fittest" alignments survive and proceed to the next "generation", until the alignment accuracy cannot be optimized further. While we optimize our new and superior measure of the amount of conserved edges, MAGNA can optimize any alignment accuracy measure. In systematic evaluations against existing state-of-the-art methods (IsoRank, MI-GRAAL, and GHOST), MAGNA improves alignment accuracy of all methods.
[ { "created": "Mon, 11 Nov 2013 14:34:44 GMT", "version": "v1" } ]
2013-11-12
[ [ "Saraph", "Vikram", "" ], [ "Milenković", "Tijana", "" ] ]
Biological network alignment aims to identify similar regions between networks of different species. Existing methods compute node "similarities" to rapidly identify from possible alignments the "high-scoring" alignments with respect to the overall node similarity. However, the accuracy of the alignments is then evaluated with some other measure that is different than the node similarity used to construct the alignments. Typically, one measures the amount of conserved edges. Thus, the existing methods align similar nodes between networks hoping to conserve many edges (after the alignment is constructed!). Instead, we introduce MAGNA to directly "optimize" edge conservation while the alignment is constructed. MAGNA uses a genetic algorithm and our novel function for crossover of two "parent" alignments into a superior "child" alignment to simulate a "population" of alignments that "evolves" over time; the "fittest" alignments survive and proceed to the next "generation", until the alignment accuracy cannot be optimized further. While we optimize our new and superior measure of the amount of conserved edges, MAGNA can optimize any alignment accuracy measure. In systematic evaluations against existing state-of-the-art methods (IsoRank, MI-GRAAL, and GHOST), MAGNA improves alignment accuracy of all methods.
1911.12006
Bina Melvia Girsang
Bina Melvia Girsang, Eqlima Elvira
Characteristics Of Post Partum Perineum Wounds With Infra Red Therapy
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Perineal wounds, owing to their localization in humid feminine areas, can pose a risk of infection. This quantitative study aims to describe the characteristics of post partum maternal perineal wounds treated with infrared therapy twice a day for three consecutive days, using a descriptive analytic design. The sample was selected by purposive sampling among 20 post partum mothers with spontaneous parturition and perineal wounds of degrees 1-2. Perineal wound characteristics were assessed and observed using the Southampton instrument, with an instrument trial result of r = 0.99 (Karl Pearson correlation coefficient) and a reliability value of 0.99 (Spearman-Brown). The results of this study showed that wound degeneration was found only until the first day post-intervention (1.95), poor wound regeneration occurred until the second day pre-intervention (1.95), and moderate wound regeneration showed changes from the first day to the second day pre-intervention (1.05). These results indicate that infrared therapy does not show significant changes if only done for one day. Key Words: localization, parturition, regeneration
[ { "created": "Wed, 27 Nov 2019 08:04:32 GMT", "version": "v1" } ]
2019-11-28
[ [ "Girsang", "Bina Melvia", "" ], [ "Elvira", "Eqlima", "" ] ]
Perineal wounds, owing to their localization in humid feminine areas, can pose a risk of infection. This quantitative study aims to describe the characteristics of post partum maternal perineal wounds treated with infrared therapy twice a day for three consecutive days, using a descriptive analytic design. The sample was selected by purposive sampling among 20 post partum mothers with spontaneous parturition and perineal wounds of degrees 1-2. Perineal wound characteristics were assessed and observed using the Southampton instrument, with an instrument trial result of r = 0.99 (Karl Pearson correlation coefficient) and a reliability value of 0.99 (Spearman-Brown). The results of this study showed that wound degeneration was found only until the first day post-intervention (1.95), poor wound regeneration occurred until the second day pre-intervention (1.95), and moderate wound regeneration showed changes from the first day to the second day pre-intervention (1.05). These results indicate that infrared therapy does not show significant changes if only done for one day. Key Words: localization, parturition, regeneration
2309.12385
Jhoana Romero Dra
Jhoana P. Romero-Leiton, Idriss Sekkak, Julien Arino, Bouchra Nasri
Mathematical modelling of the first HIV/ZIKV co-infection cases in Colombia and Brazil
null
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
This paper presents a mathematical model to investigate co-infection with HIV/AIDS and zika virus (ZIKV) in Colombia and Brazil, where the first cases were reported in 2015-2016. The model considers the sexual transmission dynamics of both viruses and vector-host interactions. We begin by exploring the qualitative behaviour of each model separately. Then, we analyze the dynamics of the co-infection model using the thresholds and results defined separately for each model. The model also considers the impact of intervention strategies, such as personal protection, antiretroviral therapy (ART), and sexual protection (condom use). Using available parameter values for Colombia and Brazil, the model is calibrated to predict the potential effect of implementing combinations of those intervention strategies on the co-infection spread. According to these findings, transmission through sexual contact is a determining factor in the long-term behaviour of these two diseases. Furthermore, it is important to note that co-infection with HIV and ZIKV may result in higher rates of HIV transmission and an increased risk of severe congenital disabilities linked to ZIKV infection. As a result, control measures have been implemented to limit the number of infected individuals and mosquitoes, with the aim of halting disease transmission. This study provides novel insights into the dynamics of HIV/ZIKV co-infection and highlights the importance of integrated intervention strategies in controlling the spread of these viruses, which may impact public health.
[ { "created": "Thu, 21 Sep 2023 17:21:52 GMT", "version": "v1" }, { "created": "Tue, 16 Apr 2024 03:48:51 GMT", "version": "v2" } ]
2024-04-17
[ [ "Romero-Leiton", "Jhoana P.", "" ], [ "Sekkak", "Idriss", "" ], [ "Arino", "Julien", "" ], [ "Nasri", "Bouchra", "" ] ]
This paper presents a mathematical model to investigate co-infection with HIV/AIDS and zika virus (ZIKV) in Colombia and Brazil, where the first cases were reported in 2015-2016. The model considers the sexual transmission dynamics of both viruses and vector-host interactions. We begin by exploring the qualitative behaviour of each model separately. Then, we analyze the dynamics of the co-infection model using the thresholds and results defined separately for each model. The model also considers the impact of intervention strategies, such as personal protection, antiretroviral therapy (ART), and sexual protection (condom use). Using available parameter values for Colombia and Brazil, the model is calibrated to predict the potential effect of implementing combinations of those intervention strategies on the co-infection spread. According to these findings, transmission through sexual contact is a determining factor in the long-term behaviour of these two diseases. Furthermore, it is important to note that co-infection with HIV and ZIKV may result in higher rates of HIV transmission and an increased risk of severe congenital disabilities linked to ZIKV infection. As a result, control measures have been implemented to limit the number of infected individuals and mosquitoes, with the aim of halting disease transmission. This study provides novel insights into the dynamics of HIV/ZIKV co-infection and highlights the importance of integrated intervention strategies in controlling the spread of these viruses, which may impact public health.
2307.10194
Hui Wei Dr.
Jingmeng Li, Hui Wei
Important Clues that Facilitate Visual Emergence: Three Psychological Experiments
null
null
null
null
q-bio.NC cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Visual emergence is the phenomenon in which the visual system obtains a holistic perception after grouping and reorganizing local signals. The picture Dalmatian dog is known for its use in explaining visual emergence. This type of image, which consists of a set of discrete black speckles (speckles), is called an emerging image. Not everyone can find the dog in Dalmatian dog, and among those who can, the time spent varies greatly. Although Gestalt theory summarizes perceptual organization into several principles, it remains ambiguous how these principles affect the perception of emerging images. This study, therefore, designed three psychological experiments to explore the factors that influence the perception of emerging images. In the first, we found that the density of speckles in the local area and the arrangements of some key speckles played a key role in the perception of an emerging case. We set parameters in the algorithm to characterize these two factors. We then automatically generated diversified emerging-test images (ETIs) through the algorithm and verified their effectiveness in two subsequent experiments.
[ { "created": "Mon, 10 Jul 2023 13:46:43 GMT", "version": "v1" } ]
2023-07-21
[ [ "Li", "Jingmeng", "" ], [ "Wei", "Hui", "" ] ]
Visual emergence is the phenomenon in which the visual system obtains a holistic perception after grouping and reorganizing local signals. The picture Dalmatian dog is known for its use in explaining visual emergence. This type of image, which consists of a set of discrete black speckles (speckles), is called an emerging image. Not everyone can find the dog in Dalmatian dog, and among those who can, the time spent varies greatly. Although Gestalt theory summarizes perceptual organization into several principles, it remains ambiguous how these principles affect the perception of emerging images. This study, therefore, designed three psychological experiments to explore the factors that influence the perception of emerging images. In the first, we found that the density of speckles in the local area and the arrangements of some key speckles played a key role in the perception of an emerging case. We set parameters in the algorithm to characterize these two factors. We then automatically generated diversified emerging-test images (ETIs) through the algorithm and verified their effectiveness in two subsequent experiments.
0906.3252
Valmir Barbosa
Andre Nathan, Valmir C. Barbosa
Network algorithmics and the emergence of the cortical synaptic-weight distribution
null
Physical Review E 81 (2010), 021916
10.1103/PhysRevE.81.021916
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When a neuron fires and the resulting action potential travels down its axon toward other neurons' dendrites, the effect on each of those neurons is mediated by the weight of the synapse that separates it from the firing neuron. This weight, in turn, is affected by the postsynaptic neuron's response through a mechanism that is thought to underlie important processes such as learning and memory. Although of difficult quantification, cortical synaptic weights have been found to obey a long-tailed unimodal distribution peaking near the lowest values, thus confirming some of the predictive models built previously. These models are all causally local, in the sense that they refer to the situation in which a number of neurons all fire directly at the same postsynaptic neuron. Consequently, they necessarily embody assumptions regarding the generation of action potentials by the presynaptic neurons that have little biological interpretability. In this letter we introduce a network model of large groups of interconnected neurons and demonstrate, making none of the assumptions that characterize the causally local models, that its long-term behavior gives rise to a distribution of synaptic weights with the same properties that were experimentally observed. In our model the action potentials that create a neuron's input are, ultimately, the product of network-wide causal chains relating what happens at a neuron to the firings of others. Our model is then of a causally global nature and predicates the emergence of the synaptic-weight distribution on network structure and function. As such, it has the potential to become instrumental also in the study of other emergent cortical phenomena.
[ { "created": "Wed, 17 Jun 2009 17:11:35 GMT", "version": "v1" } ]
2010-02-19
[ [ "Nathan", "Andre", "" ], [ "Barbosa", "Valmir C.", "" ] ]
When a neuron fires and the resulting action potential travels down its axon toward other neurons' dendrites, the effect on each of those neurons is mediated by the weight of the synapse that separates it from the firing neuron. This weight, in turn, is affected by the postsynaptic neuron's response through a mechanism that is thought to underlie important processes such as learning and memory. Although of difficult quantification, cortical synaptic weights have been found to obey a long-tailed unimodal distribution peaking near the lowest values, thus confirming some of the predictive models built previously. These models are all causally local, in the sense that they refer to the situation in which a number of neurons all fire directly at the same postsynaptic neuron. Consequently, they necessarily embody assumptions regarding the generation of action potentials by the presynaptic neurons that have little biological interpretability. In this letter we introduce a network model of large groups of interconnected neurons and demonstrate, making none of the assumptions that characterize the causally local models, that its long-term behavior gives rise to a distribution of synaptic weights with the same properties that were experimentally observed. In our model the action potentials that create a neuron's input are, ultimately, the product of network-wide causal chains relating what happens at a neuron to the firings of others. Our model is then of a causally global nature and predicates the emergence of the synaptic-weight distribution on network structure and function. As such, it has the potential to become instrumental also in the study of other emergent cortical phenomena.
1201.4618
Charalampos Tsourakakis
Charalampos E. Tsourakakis
Perfect Reconstruction of Oncogenetic Trees
null
null
null
null
q-bio.QM cs.DM math.CO q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this note we provide the necessary and sufficient conditions to uniquely reconstruct an oncogenetic tree.
[ { "created": "Mon, 23 Jan 2012 00:57:43 GMT", "version": "v1" } ]
2012-10-18
[ [ "Tsourakakis", "Charalampos E.", "" ] ]
In this note we provide the necessary and sufficient conditions to uniquely reconstruct an oncogenetic tree.
1911.09103
Andrew White
Rainier Barrett and Andrew D. White
Investigating Active Learning and Meta-Learning for Iterative Peptide Design
19 pages, 8 figures, 9 tables
null
null
null
q-bio.BM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Often the development of novel functional peptides is not amenable to high throughput or purely computational screening methods. Peptides must be synthesized one at a time in a process that does not generate large amounts of data. One way this method can be improved is by ensuring that each experiment provides the best improvement in both peptide properties and predictive modeling accuracy. Here, we study the effectiveness of active learning, optimizing experiment order, and meta-learning, transferring knowledge between contexts, to reduce the number of experiments necessary to build a predictive model. We present a multi-task benchmark database of peptides designed to advance these methods for experimental design. Each task is binary classification of peptides represented as a sequence string. We find neither active learning method tested to be better than random choice. The meta-learning method Reptile was found to improve average accuracy across datasets. Combining meta-learning with active learning offers inconsistent benefits.
[ { "created": "Wed, 20 Nov 2019 18:33:01 GMT", "version": "v1" }, { "created": "Thu, 6 Feb 2020 14:46:46 GMT", "version": "v2" }, { "created": "Mon, 27 Jul 2020 23:33:27 GMT", "version": "v3" }, { "created": "Thu, 10 Dec 2020 22:02:17 GMT", "version": "v4" } ]
2020-12-14
[ [ "Barrett", "Rainier", "" ], [ "White", "Andrew D.", "" ] ]
Often the development of novel functional peptides is not amenable to high throughput or purely computational screening methods. Peptides must be synthesized one at a time in a process that does not generate large amounts of data. One way this method can be improved is by ensuring that each experiment provides the best improvement in both peptide properties and predictive modeling accuracy. Here, we study the effectiveness of active learning, optimizing experiment order, and meta-learning, transferring knowledge between contexts, to reduce the number of experiments necessary to build a predictive model. We present a multi-task benchmark database of peptides designed to advance these methods for experimental design. Each task is binary classification of peptides represented as a sequence string. We find neither active learning method tested to be better than random choice. The meta-learning method Reptile was found to improve average accuracy across datasets. Combining meta-learning with active learning offers inconsistent benefits.
1510.09009
Tomas Szemes PhD.
Gabriel Min\'arik, Em\'ilia Nagyov\'a, Gabriela Repisk\'a, Tom\'a\v{s} Szemes, Barbora Vlkov\'a-Izrael, Peter Celec
Detection of contamination in noninvasive prenatal fetal gender test
10 pages
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The risk of false positive results in noninvasive prenatal diagnosis focused on fetal gender and RhD status determination could be a problem in clinical routine. This is because these tests are based on detection of presence of DNA sequences with high population frequency and so there is the risk of sample contamination during sample collection and processing. In our study the different fragmentation of fetal and maternal DNA molecules present in maternal circulation was utilized in identification of contaminated samples. Amplification of Y-chromosome specific assays different in size was tested on circulating DNA samples. Of the four tested assays two shorter (84 and 177 bp) showed expected qPCR efficiency and have comparable amplification profiles. The difference in Ct values between these two assays was found to be statistically significant in comparison of fetal male and normal male samples (p<0.0001) as well as in blinded pilot study performed on 10 artificially contaminated and 10 non-contaminated samples (F=34.4, p<0.0001) that were all identified correctly. Our results showed that differently sized assays performed well in detection of external contamination of samples in noninvasive prenatal fetal gender test and could be of help in clinical laboratories to minimize the risk of false positive results.
[ { "created": "Fri, 30 Oct 2015 08:56:16 GMT", "version": "v1" } ]
2015-11-02
[ [ "Minárik", "Gabriel", "" ], [ "Nagyová", "Emília", "" ], [ "Repiská", "Gabriela", "" ], [ "Szemes", "Tomáš", "" ], [ "Vlková-Izrael", "Barbora", "" ], [ "Celec", "Peter", "" ] ]
The risk of false positive results in noninvasive prenatal diagnosis focused on fetal gender and RhD status determination could be a problem in clinical routine. This is because these tests are based on detection of presence of DNA sequences with high population frequency and so there is the risk of sample contamination during sample collection and processing. In our study the different fragmentation of fetal and maternal DNA molecules present in maternal circulation was utilized in identification of contaminated samples. Amplification of Y-chromosome specific assays different in size was tested on circulating DNA samples. Of the four tested assays two shorter (84 and 177 bp) showed expected qPCR efficiency and have comparable amplification profiles. The difference in Ct values between these two assays was found to be statistically significant in comparison of fetal male and normal male samples (p<0.0001) as well as in blinded pilot study performed on 10 artificially contaminated and 10 non-contaminated samples (F=34.4, p<0.0001) that were all identified correctly. Our results showed that differently sized assays performed well in detection of external contamination of samples in noninvasive prenatal fetal gender test and could be of help in clinical laboratories to minimize the risk of false positive results.
1912.08304
Thierry Mora
Maximilian Puelma Touzel, Aleksandra M. Walczak, Thierry Mora
Inferring the immune response from repertoire sequencing
null
PLoS Comput Biol 16(4): e1007873 (2020)
10.1371/journal.pcbi.1007873
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-throughput sequencing of B- and T-cell receptors makes it possible to track immune repertoires across time, in different tissues, and in acute and chronic diseases or in healthy individuals. However, quantitative comparison between repertoires is confounded by variability in the read count of each receptor clonotype due to sampling, library preparation, and expression noise. Here, we present a general Bayesian approach to disentangle repertoire variations from these stochastic effects. Using replicate experiments, we first show how to learn the natural variability of read counts by inferring the distributions of clone sizes as well as an explicit noise model relating true frequencies of clones to their read count. We then use that null model as a baseline to infer a model of clonal expansion from two repertoire time points taken before and after an immune challenge. Applying our approach to yellow fever vaccination as a model of acute infection in humans, we identify candidate clones participating in the response.
[ { "created": "Tue, 17 Dec 2019 22:49:09 GMT", "version": "v1" }, { "created": "Thu, 16 Apr 2020 17:51:28 GMT", "version": "v2" } ]
2020-11-20
[ [ "Touzel", "Maximilian Puelma", "" ], [ "Walczak", "Aleksandra M.", "" ], [ "Mora", "Thierry", "" ] ]
High-throughput sequencing of B- and T-cell receptors makes it possible to track immune repertoires across time, in different tissues, and in acute and chronic diseases or in healthy individuals. However, quantitative comparison between repertoires is confounded by variability in the read count of each receptor clonotype due to sampling, library preparation, and expression noise. Here, we present a general Bayesian approach to disentangle repertoire variations from these stochastic effects. Using replicate experiments, we first show how to learn the natural variability of read counts by inferring the distributions of clone sizes as well as an explicit noise model relating true frequencies of clones to their read count. We then use that null model as a baseline to infer a model of clonal expansion from two repertoire time points taken before and after an immune challenge. Applying our approach to yellow fever vaccination as a model of acute infection in humans, we identify candidate clones participating in the response.
1812.07328
Gestionnaire Hal-Su
Claire Meyniel, Dalila Samri (IM2A), Farah Stefano (CH St Joseph), Joel Crevoisier, Florence Bont\'e (SAPPH), Raffaella Migliaccio (ICM, IM2A, UPMC), Laure Delaby (IM2A), Anne Bertrand (ARAMIS, UPMC, ICM), Marie Odile Habert (CATI), Bruno Dubois (UPMC, ICM, IM2A), Bahram Bodaghi, St\'ephane Epelbaum (IM2A, ARAMIS, UPMC, ICM)
COGEVIS: A New Scale to Evaluate Cognition in Patients with Visual Deficiency
null
Behavioural Neurology, IOS Press, 2018, 2018, pp.1-7
10.1155/2018/4295184
null
q-bio.NC cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We evaluated the cognitive status of visually impaired patients referred to low vision rehabilitation (LVR) based on a standard cognitive battery and a new evaluation tool, named the COGEVIS, which can be used to assess patients with severe visual deficits. We studied patients aged 60 and above, referred to the LVR Hospital in Paris. Neurological and cognitive evaluations were performed in an expert memory center. Thirty-eight individuals, 17 women and 21 men with a mean age of 70.3 $\pm$ 1.3 years and a mean visual acuity of 0.12 $\pm$ 0.02, were recruited over a one-year period. Sixty-three percent of participants had normal cognitive status. Cognitive impairment was diagnosed in 37.5% of participants. The COGEVIS score cutoff point to screen for cognitive impairment was 24 (maximum score of 30) with a sensitivity of 66.7% and a specificity of 95%. Evaluation following 4 months of visual rehabilitation showed an improvement of Instrumental Activities of Daily Living (p = 0.004), National Eye Institute Visual Functioning Questionnaire (p = 0.035), and Montgomery-{\AA}sberg Depression Rating Scale (p = 0.037). This study introduces a new short test to screen for cognitive impairment in visually impaired patients.
[ { "created": "Tue, 18 Dec 2018 12:28:48 GMT", "version": "v1" } ]
2018-12-19
[ [ "Meyniel", "Claire", "", "IM2A" ], [ "Samri", "Dalila", "", "IM2A" ], [ "Stefano", "Farah", "", "CH St Joseph" ], [ "Crevoisier", "Joel", "", "SAPPH" ], [ "Bonté", "Florence", "", "SAPPH" ], [ "Migliaccio", "Raffaella", "", "ICM, IM2A,\n UPMC" ], [ "Delaby", "Laure", "", "IM2A" ], [ "Bertrand", "Anne", "", "ARAMIS, UPMC, ICM" ], [ "Habert", "Marie Odile", "", "CATI" ], [ "Dubois", "Bruno", "", "UPMC, ICM, IM2A" ], [ "Bodaghi", "Bahram", "", "IM2A, ARAMIS, UPMC, ICM" ], [ "Epelbaum", "Stéphane", "", "IM2A, ARAMIS, UPMC, ICM" ] ]
We evaluated the cognitive status of visually impaired patients referred to low vision rehabilitation (LVR) based on a standard cognitive battery and a new evaluation tool, named the COGEVIS, which can be used to assess patients with severe visual deficits. We studied patients aged 60 and above, referred to the LVR Hospital in Paris. Neurological and cognitive evaluations were performed in an expert memory center. Thirty-eight individuals, 17 women and 21 men with a mean age of 70.3 $\pm$ 1.3 years and a mean visual acuity of 0.12 $\pm$ 0.02, were recruited over a one-year period. Sixty-three percent of participants had normal cognitive status. Cognitive impairment was diagnosed in 37.5% of participants. The COGEVIS score cutoff point to screen for cognitive impairment was 24 (maximum score of 30) with a sensitivity of 66.7% and a specificity of 95%. Evaluation following 4 months of visual rehabilitation showed an improvement of Instrumental Activities of Daily Living (p = 0.004), National Eye Institute Visual Functioning Questionnaire (p = 0.035), and Montgomery-{\AA}sberg Depression Rating Scale (p = 0.037). This study introduces a new short test to screen for cognitive impairment in visually impaired patients.
1802.01833
Agnieszka Pregowska
Agnieszka Pregowska, Ehud Kaplan, Janusz Szczepanski
How far can neural correlations reduce uncertainty? Comparison of Information Transmission Rates for Markov and Bernoulli processes
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The nature of neural codes is central to neuroscience. Do neurons encode information through relatively slow changes in the emission rates of individual spikes (rate code), or by the precise timing of every spike (temporal codes)? Here we compare the loss of information due to correlations for these two possible neural codes. The essence of Shannon's definition of information is to combine information with uncertainty: the higher the uncertainty of a given event, the more information is conveyed by that event. Correlations can reduce uncertainty or the amount of information, but by how much? In this paper we address this question by a direct comparison of the information per symbol conveyed by the words coming from a binary Markov source (temporal codes) with the information per symbol coming from the corresponding Bernoulli source (uncorrelated, rate code source). In a previous paper we found that a crucial role in the relation between Information Transmission Rates (ITR) and Firing Rates is played by a parameter s, which is the sum of transition probabilities from the no-spike-state to the spike-state and vice versa. It turned out that also in this case a crucial role is played by the same parameter s. We found bounds of the quotient of ITRs for these sources, i.e. this quotient's minimal and maximal values. Next, making use of the entropy grouping axiom, we determined the loss of information in a Markov source in relation to its corresponding Bernoulli source for a given length of word. Our results show that in practical situations in the case of correlated signals the loss of information is relatively small, thus temporal codes, which are more energetically efficient, can replace the rate code effectively. These phenomena were confirmed by experiments.
[ { "created": "Tue, 6 Feb 2018 07:49:06 GMT", "version": "v1" }, { "created": "Wed, 7 Mar 2018 06:34:29 GMT", "version": "v2" } ]
2018-03-08
[ [ "Pregowska", "Agnieszka", "" ], [ "Kaplan", "Ehud", "" ], [ "Szczepanski", "Janusz", "" ] ]
The nature of neural codes is central to neuroscience. Do neurons encode information through relatively slow changes in the emission rates of individual spikes (rate code), or by the precise timing of every spike (temporal codes)? Here we compare the loss of information due to correlations for these two possible neural codes. The essence of Shannon's definition of information is to combine information with uncertainty: the higher the uncertainty of a given event, the more information is conveyed by that event. Correlations can reduce uncertainty or the amount of information, but by how much? In this paper we address this question by a direct comparison of the information per symbol conveyed by the words coming from a binary Markov source (temporal codes) with the information per symbol coming from the corresponding Bernoulli source (uncorrelated, rate code source). In a previous paper we found that a crucial role in the relation between Information Transmission Rates (ITR) and Firing Rates is played by a parameter s, which is the sum of transition probabilities from the no-spike-state to the spike-state and vice versa. It turned out that also in this case a crucial role is played by the same parameter s. We found bounds of the quotient of ITRs for these sources, i.e. this quotient's minimal and maximal values. Next, making use of the entropy grouping axiom, we determined the loss of information in a Markov source in relation to its corresponding Bernoulli source for a given length of word. Our results show that in practical situations in the case of correlated signals the loss of information is relatively small, thus temporal codes, which are more energetically efficient, can replace the rate code effectively. These phenomena were confirmed by experiments.
0806.1029
Kasper Peeters
Kasper Peeters and Anne Taormina
Group theory of icosahedral virus capsids: a dynamical top-down approach
35 pages, many figures, animations available at http://biomaths.org
J. Theor. Biol., 256 4, (2009), 607- 624
10.1016/j.jtbi.2008.10.019
DCPT-08/33, ITP-UU-08/31, SPIN-08/23
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore the use of a top-down approach to analyse the dynamics of icosahedral virus capsids and complement the information obtained from bottom-up studies of viral vibrations available in the literature. A normal mode analysis based on protein association energies is used to study the frequency spectrum, in which we reveal a universal plateau of low-frequency modes shared by a large class of Caspar-Klug capsids. These modes break icosahedral symmetry and are potentially relevant to the genome release mechanism. We comment on the role of viral tiling theory in such dynamical considerations.
[ { "created": "Thu, 5 Jun 2008 19:01:51 GMT", "version": "v1" } ]
2013-04-09
[ [ "Peeters", "Kasper", "" ], [ "Taormina", "Anne", "" ] ]
We explore the use of a top-down approach to analyse the dynamics of icosahedral virus capsids and complement the information obtained from bottom-up studies of viral vibrations available in the literature. A normal mode analysis based on protein association energies is used to study the frequency spectrum, in which we reveal a universal plateau of low-frequency modes shared by a large class of Caspar-Klug capsids. These modes break icosahedral symmetry and are potentially relevant to the genome release mechanism. We comment on the role of viral tiling theory in such dynamical considerations.
1706.10058
Patrick Ruther
Patrick Ruther, Oliver Paul
New approaches for CMOS-based devices for large-scale neural recording
null
Current Opinion in Neurobiology, 32, 31-37 (2015)
10.1016/j.conb.2014.10.007
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extracellular, large scale in vivo recording of neural activity is mandatory for elucidating the interaction of neurons within large neural networks at the level of their single unit activity. Technological achievements in MEMS-based multichannel electrode arrays offer electrophysiological recording capabilities that go far beyond those of classical wire electrodes. Despite their impressive channel counts, recording systems with modest interconnection overhead have been demonstrated thanks to the hybrid integration of CMOS circuitry for signal preprocessing and data handling. The number of addressable channels is increased even further by a switch matrix for electrode selection co-integrated along the slender probe shafts. When realized by IC fabrication technologies, these probes offer highest recording site densities along the entire shaft length.
[ { "created": "Fri, 30 Jun 2017 08:28:59 GMT", "version": "v1" } ]
2017-07-03
[ [ "Ruther", "Patrick", "" ], [ "Paul", "Oliver", "" ] ]
Extracellular, large scale in vivo recording of neural activity is mandatory for elucidating the interaction of neurons within large neural networks at the level of their single unit activity. Technological achievements in MEMS-based multichannel electrode arrays offer electrophysiological recording capabilities that go far beyond those of classical wire electrodes. Despite their impressive channel counts, recording systems with modest interconnection overhead have been demonstrated thanks to the hybrid integration of CMOS circuitry for signal preprocessing and data handling. The number of addressable channels is increased even further by a switch matrix for electrode selection co-integrated along the slender probe shafts. When realized by IC fabrication technologies, these probes offer highest recording site densities along the entire shaft length.
1811.05930
Tyler Cassidy
Tyler Cassidy, Morgan Craig, Antony R. Humphries
A Recipe for State Dependent Distributed Delay Differential Equations
28 pages
Mathematical Biosciences and Engineering, 2019, 16(5): 5419-5450
10.3934/mbe.2019270
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We use the McKendrick equation with variable ageing rate and randomly distributed maturation time to derive a state dependent distributed delay differential equation. We show that the resulting delay differential equation preserves non-negativity of initial conditions and we characterise local stability of equilibria. By specifying the distribution of maturation age, we recover state dependent discrete, uniform and gamma distributed delay differential equations. We show how to reduce the uniform case to a system of state dependent discrete delay equations and the gamma distributed case to a system of ordinary differential equations. To illustrate the benefits of these reductions, we convert previously published transit compartment models into equivalent distributed delay differential equations.
[ { "created": "Wed, 14 Nov 2018 18:00:48 GMT", "version": "v1" }, { "created": "Fri, 7 Dec 2018 08:26:20 GMT", "version": "v2" } ]
2021-03-22
[ [ "Cassidy", "Tyler", "" ], [ "Craig", "Morgan", "" ], [ "Humphries", "Antony R.", "" ] ]
We use the McKendrick equation with variable ageing rate and randomly distributed maturation time to derive a state dependent distributed delay differential equation. We show that the resulting delay differential equation preserves non-negativity of initial conditions and we characterise local stability of equilibria. By specifying the distribution of maturation age, we recover state dependent discrete, uniform and gamma distributed delay differential equations. We show how to reduce the uniform case to a system of state dependent discrete delay equations and the gamma distributed case to a system of ordinary differential equations. To illustrate the benefits of these reductions, we convert previously published transit compartment models into equivalent distributed delay differential equations.
1111.6217
Pascal Grange
Pascal Grange, Jason Bohland, Hemant Bokil, Sacha Nelson, Benjamin Okaty, Ken Sugino, Lydia Ng, Michael Hawrylycz, Partha P. Mitra
A cell-type based model explaining co-expression patterns of genes in the brain
41 pages; v2: typos corrected
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Much of the genome is expressed in the vertebrate brain, with individual genes exhibiting different spatially-varying patterns of expression. These variations are not independent, with pairs of genes exhibiting complex patterns of co-expression, such that two genes may be similarly expressed in one region, but differentially expressed in other regions. These correlations have been previously studied quantitatively, particularly for the gene expression atlas of the mouse brain, but the biological meaning of the co-expression patterns remains obscure. We propose a simple model of the co-expression patterns in terms of spatial distributions of underlying cell types. We establish the plausibility of the model in terms of a test set of cell types for which both the gene expression profiles and the spatial distributions are known.
[ { "created": "Sun, 27 Nov 2011 02:18:31 GMT", "version": "v1" }, { "created": "Thu, 5 Jan 2012 01:59:35 GMT", "version": "v2" } ]
2012-01-06
[ [ "Grange", "Pascal", "" ], [ "Bohland", "Jason", "" ], [ "Bokil", "Hemant", "" ], [ "Nelson", "Sacha", "" ], [ "Okaty", "Benjamin", "" ], [ "Sugino", "Ken", "" ], [ "Ng", "Lydia", "" ], [ "Hawrylycz", "Michael", "" ], [ "Mitra", "Partha P.", "" ] ]
Much of the genome is expressed in the vertebrate brain, with individual genes exhibiting different spatially-varying patterns of expression. These variations are not independent, with pairs of genes exhibiting complex patterns of co-expression, such that two genes may be similarly expressed in one region, but differentially expressed in other regions. These correlations have been previously studied quantitatively, particularly for the gene expression atlas of the mouse brain, but the biological meaning of the co-expression patterns remains obscure. We propose a simple model of the co-expression patterns in terms of spatial distributions of underlying cell types. We establish the plausibility of the model in terms of a test set of cell types for which both the gene expression profiles and the spatial distributions are known.
1303.5953
Lorenzo Taggi Mr
Lorenzo Taggi, Francesca Colaiori, Vittorio Loreto, Francesca Tria
Size and structure of an epistatic space
This is the supplementary material for the article "Dynamical Correlations in the escape strategy of Influenza A virus"
null
10.1209/0295-5075/101/68003
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide quantitative estimates on the size and the structure of the epistatic space defined in the main article, "Dynamical Correlations in the escape strategy of Influenza A virus", EPL 101 68003.
[ { "created": "Sun, 24 Mar 2013 14:30:31 GMT", "version": "v1" }, { "created": "Sun, 17 May 2015 15:57:37 GMT", "version": "v2" } ]
2015-06-15
[ [ "Taggi", "Lorenzo", "" ], [ "Colaiori", "Francesca", "" ], [ "Loreto", "Vittorio", "" ], [ "Tria", "Francesca", "" ] ]
We provide quantitative estimates on the size and the structure of the epistatic space defined in the main article, "Dynamical Correlations in the escape strategy of Influenza A virus", EPL 101 68003.
1810.02107
Aurore Bussalb
Aurore Bussalb, Marie Prat, David Ojeda, Quentin Barth\'elemy, Julien Bonnaud, Louis Mayaud
A framework for the comparison of different EEG acquisition solutions
null
null
null
null
q-bio.NC physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The purpose of this work is to propose a framework for the benchmarking of EEG amplifiers, headsets, and electrodes, providing objective recommendations for a given application. The framework covers: data collection paradigm, data analysis, and statistical framework. To illustrate, data was collected from 12 different devices totaling up to 6 subjects per device. Two data acquisition protocols were implemented: a resting-state protocol with eyes open (EO) and eyes closed (EC), and an Auditory Evoked Potential (AEP) protocol. Signal-to-noise ratio (SNR) in the alpha band (EO/EC) and Event Related Potential (ERP) were extracted as objective quantifications of physiologically meaningful information. Then, visual representation, univariate statistical analysis, and multivariate modeling were performed to increase the interpretability of the results. Objective criteria show that the spectral SNR in alpha does not provide much discrimination between systems, suggesting that acquisition quality might not be of primary importance for spectral and specifically alpha-based applications. On the contrary, the AEP SNR proved much more variable, stressing the importance of the acquisition setting for ERP experiments. The multivariate analysis identified some individuals and some systems as independent, statistically significant contributors to the SNR. It highlights the importance of inter-individual differences in neurophysiological experiments (sample size) and suggests some devices might objectively be superior to others when it comes to ERP recordings. However, the illustration of the proposed benchmarking framework suffers from severe limitations, including small sample size and sound card jitter in the auditory stimulations. While these limitations hinder a definitive ranking of the evaluated hardware, we believe the proposed benchmarking framework to be a modest yet valuable contribution to the field.
[ { "created": "Thu, 4 Oct 2018 09:09:19 GMT", "version": "v1" } ]
2018-10-05
[ [ "Bussalb", "Aurore", "" ], [ "Prat", "Marie", "" ], [ "Ojeda", "David", "" ], [ "Barthélemy", "Quentin", "" ], [ "Bonnaud", "Julien", "" ], [ "Mayaud", "Louis", "" ] ]
The purpose of this work is to propose a framework for the benchmarking of EEG amplifiers, headsets, and electrodes, providing objective recommendations for a given application. The framework covers: data collection paradigm, data analysis, and statistical framework. To illustrate, data was collected from 12 different devices totaling up to 6 subjects per device. Two data acquisition protocols were implemented: a resting-state protocol with eyes open (EO) and eyes closed (EC), and an Auditory Evoked Potential (AEP) protocol. Signal-to-noise ratio (SNR) in the alpha band (EO/EC) and Event Related Potential (ERP) were extracted as objective quantifications of physiologically meaningful information. Then, visual representation, univariate statistical analysis, and multivariate modeling were performed to increase the interpretability of the results. Objective criteria show that the spectral SNR in alpha does not provide much discrimination between systems, suggesting that acquisition quality might not be of primary importance for spectral and specifically alpha-based applications. On the contrary, the AEP SNR proved much more variable, stressing the importance of the acquisition setting for ERP experiments. The multivariate analysis identified some individuals and some systems as independent, statistically significant contributors to the SNR. It highlights the importance of inter-individual differences in neurophysiological experiments (sample size) and suggests some devices might objectively be superior to others when it comes to ERP recordings. However, the illustration of the proposed benchmarking framework suffers from severe limitations, including small sample size and sound card jitter in the auditory stimulations. While these limitations hinder a definitive ranking of the evaluated hardware, we believe the proposed benchmarking framework to be a modest yet valuable contribution to the field.
1812.01779
Vibhuti Gupta
Vibhuti Gupta
Voice Disorder Detection Using Long Short Term Memory (LSTM) Model
null
null
null
null
q-bio.QM cs.LG cs.SD eess.AS stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automated detection of voice disorders with computational methods is a recent research area in the medical domain, since accurate diagnosis otherwise requires a rigorous endoscopy. Efficient screening methods are required for the diagnosis of voice disorders so as to provide timely medical care with minimal resources. Detecting voice disorders using computational methods is a challenging problem, since audio data is continuous, which makes extracting relevant features and applying machine learning hard and unreliable. This paper proposes a Long Short Term Memory (LSTM) model to detect pathological voice disorders and evaluates its performance on a real set of 400 testing samples without any labels. Different feature extraction methods are used to provide the best set of features before applying the LSTM model for classification. The paper describes the approach and experiments that show promising results with 22% sensitivity, 97% specificity, and 56% unweighted average recall.
[ { "created": "Tue, 4 Dec 2018 00:37:33 GMT", "version": "v1" } ]
2018-12-06
[ [ "Gupta", "Vibhuti", "" ] ]
Automated detection of voice disorders with computational methods is a recent research area in the medical domain, since accurate diagnosis otherwise requires a rigorous endoscopy. Efficient screening methods are required for the diagnosis of voice disorders so as to provide timely medical care with minimal resources. Detecting voice disorders using computational methods is a challenging problem, since audio data is continuous, which makes extracting relevant features and applying machine learning hard and unreliable. This paper proposes a Long Short Term Memory (LSTM) model to detect pathological voice disorders and evaluates its performance on a real set of 400 testing samples without any labels. Different feature extraction methods are used to provide the best set of features before applying the LSTM model for classification. The paper describes the approach and experiments that show promising results with 22% sensitivity, 97% specificity, and 56% unweighted average recall.
2208.12486
Ivana Pajic-Lijakovic Dr.
Ivana Pajic-Lijakovic and Milan Milivojevic
Active wetting of epithelial tissues: modeling considerations
25 pages, 2 figures, 1 table
null
null
null
q-bio.CB cond-mat.soft physics.bio-ph q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Morphogenesis, tissue regeneration and cancer invasion involve transitions in tissue morphology. These transitions, caused by collective cell migration (CCM), have been interpreted as active wetting/de-wetting transitions. This phenomenon is considered on a model system such as the wetting of a cell aggregate on a rigid substrate, which includes cell aggregate movement and isotropic/anisotropic spreading of a cell monolayer around the aggregate depending on the substrate rigidity and aggregate size. This model system accounts for the transition between a 3D epithelial aggregate and a 2D cell monolayer as a product of: (1) tissue surface tension, (2) surface tension of the substrate matrix, (3) cell-matrix interfacial tension, (4) interfacial tension gradient, (5) viscoelasticity caused by CCM, and (6) viscoelasticity of the substrate matrix. These physical parameters depend on cell contractility and the state of cell-cell and cell-matrix adhesion contacts, as well as the stretching/compression of cellular systems caused by CCM. Despite extensive research devoted to the study of cell wetting, we still do not understand the interplay among these physical parameters that induces the oscillatory trend of cell rearrangement. This review focuses on the role of these physical parameters in governing cell rearrangement in the context of epithelial aggregate wetting/de-wetting, and on the modelling approaches aimed at reproducing and understanding these biological systems. In this context, we not only review previously published biophysics models for cell rearrangement caused by CCM, but also propose new extensions of those models in order to point out the interplay between cell-matrix interfacial tension and epithelial viscoelasticity and the role of the interfacial tension gradient in cell spreading.
[ { "created": "Fri, 26 Aug 2022 07:46:36 GMT", "version": "v1" } ]
2022-08-29
[ [ "Pajic-Lijakovic", "Ivana", "" ], [ "Milivojevic", "Milan", "" ] ]
Morphogenesis, tissue regeneration and cancer invasion involve transitions in tissue morphology. These transitions, caused by collective cell migration (CCM), have been interpreted as active wetting/de-wetting transitions. This phenomenon is considered on a model system such as the wetting of a cell aggregate on a rigid substrate, which includes cell aggregate movement and isotropic/anisotropic spreading of a cell monolayer around the aggregate depending on the substrate rigidity and aggregate size. This model system accounts for the transition between a 3D epithelial aggregate and a 2D cell monolayer as a product of: (1) tissue surface tension, (2) surface tension of the substrate matrix, (3) cell-matrix interfacial tension, (4) interfacial tension gradient, (5) viscoelasticity caused by CCM, and (6) viscoelasticity of the substrate matrix. These physical parameters depend on cell contractility and the state of cell-cell and cell-matrix adhesion contacts, as well as the stretching/compression of cellular systems caused by CCM. Despite extensive research devoted to the study of cell wetting, we still do not understand the interplay among these physical parameters that induces the oscillatory trend of cell rearrangement. This review focuses on the role of these physical parameters in governing cell rearrangement in the context of epithelial aggregate wetting/de-wetting, and on the modelling approaches aimed at reproducing and understanding these biological systems. In this context, we not only review previously published biophysics models for cell rearrangement caused by CCM, but also propose new extensions of those models in order to point out the interplay between cell-matrix interfacial tension and epithelial viscoelasticity and the role of the interfacial tension gradient in cell spreading.
2007.16121
Maike Klingel
Maike Klingel (1, 2 and 3), Norbert Kopco (3) and Bernhard Laback (1) ((1) Acoustics Research Institute, Austrian Academy of Sciences, (2) Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, (3) Institute of Computer Science, Faculty of Science, P. J. Safarik University in Kosice)
Reweighting of Binaural Localization Cues Induced by Lateralization Training
null
null
10.1007/s10162-021-00800-8
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Normal-hearing listeners adapt to alterations in sound localization cues. This adaptation can result from the establishment of a new spatial map of the altered cues or from a stronger relative weighting of unaltered compared to altered cues. Such reweighting has been shown for monaural vs. binaural cues. However, studies attempting to reweight the two binaural cues, interaural differences in time and level, yielded inconclusive results. In this study we investigated whether binaural cue reweighting can be induced by a lateralization training in a virtual audio-visual environment. Twenty normal-hearing participants, divided into two groups, completed the experiment consisting of a seven-day lateralization training in a virtual audio-visual environment, preceded and followed by a test measuring the binaural cue weights. During testing, the participants' task was to lateralize 500-ms bandpass-filtered (2-4 kHz) noise bursts containing various combinations of spatially consistent and inconsistent ITDs and ILDs. During training, the task was extended by visual cues reinforcing ITDs in one group and ILDs in the other group, as well as by manipulating the azimuthal ranges of the two cues. In both groups, the weight given to the reinforced cue increased significantly from pre- to posttest, suggesting that participants reweighted the binaural cues in the expected direction. This reweighting occurred predominantly within the first training session. The present results are relevant as binaural cue reweighting is, for example, likely to occur when normal-hearing listeners adapt to new acoustic environments. Similarly, binaural cue reweighting might be a factor underlying the low contribution of ITDs to sound localization of cochlear-implant listeners, as they typically do not experience reliable ITD cues with their clinical devices.
[ { "created": "Fri, 31 Jul 2020 15:05:35 GMT", "version": "v1" } ]
2021-05-10
[ [ "Klingel", "Maike", "", "1, 2 and 3" ], [ "Kopco", "Norbert", "" ], [ "Laback", "Bernhard", "" ] ]
Normal-hearing listeners adapt to alterations in sound localization cues. This adaptation can result from the establishment of a new spatial map of the altered cues or from a stronger relative weighting of unaltered compared to altered cues. Such reweighting has been shown for monaural vs. binaural cues. However, studies attempting to reweight the two binaural cues, interaural differences in time and level, yielded inconclusive results. In this study we investigated whether binaural cue reweighting can be induced by a lateralization training in a virtual audio-visual environment. Twenty normal-hearing participants, divided into two groups, completed the experiment consisting of a seven-day lateralization training in a virtual audio-visual environment, preceded and followed by a test measuring the binaural cue weights. During testing, the participants' task was to lateralize 500-ms bandpass-filtered (2-4 kHz) noise bursts containing various combinations of spatially consistent and inconsistent ITDs and ILDs. During training, the task was extended by visual cues reinforcing ITDs in one group and ILDs in the other group, as well as by manipulating the azimuthal ranges of the two cues. In both groups, the weight given to the reinforced cue increased significantly from pre- to posttest, suggesting that participants reweighted the binaural cues in the expected direction. This reweighting occurred predominantly within the first training session. The present results are relevant as binaural cue reweighting is, for example, likely to occur when normal-hearing listeners adapt to new acoustic environments. Similarly, binaural cue reweighting might be a factor underlying the low contribution of ITDs to sound localization of cochlear-implant listeners, as they typically do not experience reliable ITD cues with their clinical devices.
1712.08041
Payel Das
Ravi Tejwani, Adam Liska, Hongyuan You, Jenna Reinen, and Payel Das
Autism Classification Using Brain Functional Connectivity Dynamics and Machine Learning
null
null
null
null
q-bio.NC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of the present study is to identify autism using machine learning techniques and resting-state brain imaging data, leveraging the temporal variability of the functional connections (FC) as the only information. We estimated and compared the FC variability across brain regions between typical, healthy subjects and the autistic population by analyzing brain imaging data from a world-wide multi-site database known as ABIDE (Autism Brain Imaging Data Exchange). Our analysis revealed that patients diagnosed with autism spectrum disorder (ASD) show increased FC variability in several brain regions that are associated with low FC variability in the typical brain. We then used the enhanced FC variability of brain regions as features for training machine learning models for ASD classification and achieved 65% accuracy in identification of ASD versus control subjects within the dataset. We also used node strength, estimated from the number of functional connections per node averaged over the whole scan, as features for ASD classification. The results reveal that the dynamic FC measures outperform or are comparable with the static FC measures in predicting ASD.
[ { "created": "Thu, 21 Dec 2017 16:08:16 GMT", "version": "v1" } ]
2017-12-22
[ [ "Tejwani", "Ravi", "" ], [ "Liska", "Adam", "" ], [ "You", "Hongyuan", "" ], [ "Reinen", "Jenna", "" ], [ "Das", "Payel", "" ] ]
The goal of the present study is to identify autism using machine learning techniques and resting-state brain imaging data, leveraging the temporal variability of the functional connections (FC) as the only information. We estimated and compared the FC variability across brain regions between typical, healthy subjects and the autistic population by analyzing brain imaging data from a world-wide multi-site database known as ABIDE (Autism Brain Imaging Data Exchange). Our analysis revealed that patients diagnosed with autism spectrum disorder (ASD) show increased FC variability in several brain regions that are associated with low FC variability in the typical brain. We then used the enhanced FC variability of brain regions as features for training machine learning models for ASD classification and achieved 65% accuracy in identification of ASD versus control subjects within the dataset. We also used node strength, estimated from the number of functional connections per node averaged over the whole scan, as features for ASD classification. The results reveal that the dynamic FC measures outperform or are comparable with the static FC measures in predicting ASD.
2205.03503
Yuri A. Dabaghian
Clarissa Hoffman, Jingheng Cheng, Daoyun Ji, Y. Dabaghian
Pattern dynamics and stochasticity of the brain rhythms
14 pages, 8 figures, 5 supplementary figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Our current understanding of brain rhythms is based on quantifying their instantaneous or time-averaged characteristics. What remains unexplored is the actual structure of the waves -- their shapes and patterns over finite timescales. To address this, we used two independent approaches to link wave forms to their physiological functions: the first is based on quantifying their consistency with the underlying mean behavior, and the second assesses "orderliness" of the waves' features. The corresponding measures capture the wave's characteristic and abnormal behaviors, such as atypical periodicity or excessive clustering, and demonstrate coupling between the patterns' dynamics and the animal's location, speed and acceleration. Specifically, we studied patterns of $\theta$ and $\gamma$ waves, and Sharp Wave Ripples, and observed speed-modulated changes of the wave's cadence, an antiphase relationship between orderliness and acceleration, as well as spatial selectiveness of patterns. Furthermore, we found an interdependence between orderliness and regularity: larger deviations from steady oscillatory behavior tend to accompany disarrayed temporal cluttering of peaks and troughs. Taken together, our results offer a complementary -- mesoscale -- perspective on brain wave structure, dynamics, and functionality.
[ { "created": "Fri, 6 May 2022 23:29:03 GMT", "version": "v1" } ]
2022-05-10
[ [ "Hoffman", "Clarissa", "" ], [ "Cheng", "Jingheng", "" ], [ "Ji", "Daoyun", "" ], [ "Dabaghian", "Y.", "" ] ]
Our current understanding of brain rhythms is based on quantifying their instantaneous or time-averaged characteristics. What remains unexplored is the actual structure of the waves -- their shapes and patterns over finite timescales. To address this, we used two independent approaches to link wave forms to their physiological functions: the first is based on quantifying their consistency with the underlying mean behavior, and the second assesses "orderliness" of the waves' features. The corresponding measures capture the wave's characteristic and abnormal behaviors, such as atypical periodicity or excessive clustering, and demonstrate coupling between the patterns' dynamics and the animal's location, speed and acceleration. Specifically, we studied patterns of $\theta$ and $\gamma$ waves, and Sharp Wave Ripples, and observed speed-modulated changes of the wave's cadence, an antiphase relationship between orderliness and acceleration, as well as spatial selectiveness of patterns. Furthermore, we found an interdependence between orderliness and regularity: larger deviations from steady oscillatory behavior tend to accompany disarrayed temporal cluttering of peaks and troughs. Taken together, our results offer a complementary -- mesoscale -- perspective on brain wave structure, dynamics, and functionality.
1906.01663
Enrico Carlon
Stefanos K. Nomidis, Michal Szymonik, Tom Venken, Enrico Carlon, Jef Hooyberghs
Enhancing the performance of DNA surface-hybridization biosensors through target depletion
27 pages, 4 figures
Langmuir 35, 12276 (2019)
10.1021/acs.langmuir.9b01761
null
q-bio.BM physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
DNA surface-hybridization biosensors utilize the selective hybridization of target sequences in solution to surface-immobilized probes. In this process, the target is usually assumed to be in excess, so that its concentration does not significantly vary while hybridizing to the surface-bound probes. If the target is initially at low concentrations and/or if the number of probes is very large and the probes have high affinity for the target, the DNA in solution may get depleted. In this paper we analyze the equilibrium and kinetics of hybridization of DNA biosensors in the case of strong target depletion, by extending the Langmuir adsorption model. We focus, in particular, on the detection of a small amount of a single-nucleotide "mutant" sequence (concentration $c_2$) in a solution, which differs by one or more nucleotides from an abundant "wild-type" sequence (concentration $c_1 \gg c_2$). We show that depletion can give rise to a strongly-enhanced sensitivity of the biosensors. Using representative values of rate constants and hybridization free energies, we find that in the depletion regime one could detect relative concentrations $c_2/c_1$ that are up to three orders of magnitude smaller than in the conventional approach. The kinetics is surprisingly rich, and exhibits a non-monotonic adsorption with no counterpart in the no-depletion case. Finally, we show that, alongside enhanced detection sensitivity, this approach offers the possibility of sample enrichment, by substantially increasing the relative amount of the mutant over the wild-type sequence.
[ { "created": "Tue, 4 Jun 2019 18:08:02 GMT", "version": "v1" }, { "created": "Thu, 26 Sep 2019 08:41:20 GMT", "version": "v2" } ]
2019-09-27
[ [ "Nomidis", "Stefanos K.", "" ], [ "Szymonik", "Michal", "" ], [ "Venken", "Tom", "" ], [ "Carlon", "Enrico", "" ], [ "Hooyberghs", "Jef", "" ] ]
DNA surface-hybridization biosensors utilize the selective hybridization of target sequences in solution to surface-immobilized probes. In this process, the target is usually assumed to be in excess, so that its concentration does not significantly vary while hybridizing to the surface-bound probes. If the target is initially at low concentrations and/or if the number of probes is very large and the probes have high affinity for the target, the DNA in solution may get depleted. In this paper we analyze the equilibrium and kinetics of hybridization of DNA biosensors in the case of strong target depletion, by extending the Langmuir adsorption model. We focus, in particular, on the detection of a small amount of a single-nucleotide "mutant" sequence (concentration $c_2$) in a solution, which differs by one or more nucleotides from an abundant "wild-type" sequence (concentration $c_1 \gg c_2$). We show that depletion can give rise to a strongly-enhanced sensitivity of the biosensors. Using representative values of rate constants and hybridization free energies, we find that in the depletion regime one could detect relative concentrations $c_2/c_1$ that are up to three orders of magnitude smaller than in the conventional approach. The kinetics is surprisingly rich, and exhibits a non-monotonic adsorption with no counterpart in the no-depletion case. Finally, we show that, alongside enhanced detection sensitivity, this approach offers the possibility of sample enrichment, by substantially increasing the relative amount of the mutant over the wild-type sequence.
2308.01431
Catherine Galbraith
James A. Galbraith and Catherine G. Galbraith
Using Single Molecule Imaging to Explore Intracellular Heterogeneity
21 pages, 2 figures, 1 table
null
10.1016/j.biocel.2023.106455
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Despite more than 100 years of study, it is unclear if the movement of proteins inside the cell is best described as a mosh pit or an exquisitely choreographed dance. Recent studies suggest the latter. Local interactions induce molecular condensates such as liquid-liquid phase separations (LLPSs) or non-liquid, functionally significant molecular aggregates, including synaptic densities, nucleoli, and Amyloid fibrils. Molecular condensates trigger intracellular signaling and drive processes ranging from gene expression to cell division. However, the descriptions of condensates tend to be qualitative and correlative. Here, we indicate how single-molecule imaging and analyses can be applied to quantify condensates. We discuss the pros and cons of different techniques for measuring differences between transient molecular behaviors inside and outside condensates. Finally, we offer suggestions for how imaging and analyses from different time and space regimes can be combined to identify molecular behaviors indicative of condensates within the dynamic high-density intracellular environment.
[ { "created": "Wed, 2 Aug 2023 21:03:01 GMT", "version": "v1" } ]
2023-08-25
[ [ "Galbraith", "James A.", "" ], [ "Galbraith", "Catherine G.", "" ] ]
Despite more than 100 years of study, it is unclear if the movement of proteins inside the cell is best described as a mosh pit or an exquisitely choreographed dance. Recent studies suggest the latter. Local interactions induce molecular condensates such as liquid-liquid phase separations (LLPSs) or non-liquid, functionally significant molecular aggregates, including synaptic densities, nucleoli, and Amyloid fibrils. Molecular condensates trigger intracellular signaling and drive processes ranging from gene expression to cell division. However, the descriptions of condensates tend to be qualitative and correlative. Here, we indicate how single-molecule imaging and analyses can be applied to quantify condensates. We discuss the pros and cons of different techniques for measuring differences between transient molecular behaviors inside and outside condensates. Finally, we offer suggestions for how imaging and analyses from different time and space regimes can be combined to identify molecular behaviors indicative of condensates within the dynamic high-density intracellular environment.
q-bio/0609039
Mauro Mobilia Dr
Mauro Mobilia, Ivan T. Georgiev, and Uwe C. Tauber
Spatial stochastic predator-prey models
5 pages, 2 figures. Proceedings paper of the workshop "Stochastic models in biological sciences" (May 29 - June 2, 2006 in Warsaw) for the Banach Center Publications
Banach Center Publications Vol. 80, 253-257 (2008)
10.4064/bc80-0-16
LMU-ASC 63/06
q-bio.PE cond-mat.stat-mech nlin.AO
null
We consider a broad class of stochastic lattice predator-prey models, whose main features are overviewed. In particular, this article aims at drawing a picture of the influence of spatial fluctuations, which are not accounted for by the deterministic rate equations, on the properties of the stochastic models. Here, we outline the robust scenario obeyed by most of the lattice predator-prey models with an interaction "à la Lotka-Volterra". We also show how a drastically different behavior can emerge as the result of a subtle interplay between long-range interactions and a nearest-neighbor exchange process.
[ { "created": "Mon, 25 Sep 2006 17:47:48 GMT", "version": "v1" } ]
2011-12-20
[ [ "Mobilia", "Mauro", "" ], [ "Georgiev", "Ivan T.", "" ], [ "Tauber", "Uwe C.", "" ] ]
We consider a broad class of stochastic lattice predator-prey models, whose main features are overviewed. In particular, this article aims at drawing a picture of the influence of spatial fluctuations, which are not accounted for by the deterministic rate equations, on the properties of the stochastic models. Here, we outline the robust scenario obeyed by most of the lattice predator-prey models with an interaction "à la Lotka-Volterra". We also show how a drastically different behavior can emerge as the result of a subtle interplay between long-range interactions and a nearest-neighbor exchange process.
1304.1880
Mantu Santra
Mantu Santra and Biman Bagchi
Kinetic proofreading at single molecular level: Aminoacylation of tRNA^{Ile} and the role of water as an editor
null
null
10.1371/journal.pone.0066112
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proofreading/editing in protein synthesis is essential for accurate translation of information from the genetic code. In this article we present a theoretical investigation of the efficiency of a kinetic proofreading mechanism that employs hydrolysis of the wrong substrate as the discriminatory step in enzyme catalytic reactions. We consider aminoacylation of tRNA^{Ile}, which is a crucial step in protein synthesis and for which experimental results are now available. We present an augmented kinetic scheme and then employ the stochastic simulation algorithm to obtain time-dependent concentrations of the different substances involved in the reaction and their rates of formation. We obtain the rates of product formation and ATP hydrolysis for both correct and wrong substrates (isoleucine and valine in our case), in single-molecule enzyme as well as ensemble enzyme kinetics. The present theoretical scheme correctly reproduces (i) the amplitude of the discrimination factor in the overall rates between isoleucine and valine, which is obtained as (1.8 \times 10^2) \times (4.33 \times 10^2) = 7.8 \times 10^4, and (ii) the rates of ATP hydrolysis for both Ile and Val at different substrate concentrations in the aminoacylation of tRNA^{Ile}. The present study shows a non-Michaelis-type dependence of the rate of reaction on tRNA^{Ile} concentration in the case of valine. The editing in steady state is found to be independent of amino acid concentration. Interestingly, the computed ATP hydrolysis rate for valine at high substrate concentration is the same as the rate of formation of Ile-tRNA^{Ile}, whereas at intermediate substrate concentration the ATP hydrolysis rate is relatively low.
[ { "created": "Sat, 6 Apr 2013 11:41:14 GMT", "version": "v1" } ]
2015-06-15
[ [ "Santra", "Mantu", "" ], [ "Bagchi", "Biman", "" ] ]
Proofreading/editing in protein synthesis is essential for accurate translation of information from the genetic code. In this article we present a theoretical investigation of the efficiency of a kinetic proofreading mechanism that employs hydrolysis of the wrong substrate as the discriminatory step in enzyme catalytic reactions. We consider aminoacylation of tRNA^{Ile}, which is a crucial step in protein synthesis and for which experimental results are now available. We present an augmented kinetic scheme and then employ the stochastic simulation algorithm to obtain time-dependent concentrations of the different substances involved in the reaction and their rates of formation. We obtain the rates of product formation and ATP hydrolysis for both correct and wrong substrates (isoleucine and valine in our case), in single-molecule enzyme as well as ensemble enzyme kinetics. The present theoretical scheme correctly reproduces (i) the amplitude of the discrimination factor in the overall rates between isoleucine and valine, which is obtained as (1.8 \times 10^2) \times (4.33 \times 10^2) = 7.8 \times 10^4, and (ii) the rates of ATP hydrolysis for both Ile and Val at different substrate concentrations in the aminoacylation of tRNA^{Ile}. The present study shows a non-Michaelis-type dependence of the rate of reaction on tRNA^{Ile} concentration in the case of valine. The editing in steady state is found to be independent of amino acid concentration. Interestingly, the computed ATP hydrolysis rate for valine at high substrate concentration is the same as the rate of formation of Ile-tRNA^{Ile}, whereas at intermediate substrate concentration the ATP hydrolysis rate is relatively low.
q-bio/0504012
Karsten Suhre
Karsten Suhre (IGS), St\'ephane Audic (IGS), Jean-Michel Claverie (IGS)
Mimivirus Gene Promoters Exhibit an Unprecedented Conservation among all Eukaryotes
null
null
10.1073/pnas.0506465102
null
q-bio.GN
null
The initial analysis of the recently sequenced genome of Acanthamoeba polyphaga Mimivirus, the largest known double-stranded DNA virus, predicted a proteome of size and complexity more akin to small parasitic bacteria than to other nucleo-cytoplasmic large DNA viruses, and identified numerous functions never before described in a virus. It has been proposed that the Mimivirus lineage could have emerged before the individualization of cellular organisms from the 3 domains of life. An exhaustive in silico analysis of the non-coding moiety of all known viral genomes now uncovers the unprecedented perfect conservation of an AAAATTGA motif in close to 50% of the Mimivirus genes. This motif preferentially occurs in genes transcribed from the predicted leading strand and is associated with functions required early in the viral infectious cycle, such as transcription and protein translation. A comparison with the known promoters of unicellular eukaryotes, in particular amoebal protists, strongly suggests that the AAAATTGA motif is the structural equivalent of the TATA box core promoter element. This element is specific to the Mimivirus lineage, and may correspond to an ancestral promoter structure predating the radiation of the eukaryotic kingdoms. This unprecedented conservation of core promoter regions is another exceptional feature of Mimivirus, which again raises the question of its evolutionary origin.
[ { "created": "Fri, 8 Apr 2005 13:46:44 GMT", "version": "v1" }, { "created": "Fri, 29 Jul 2005 15:10:35 GMT", "version": "v2" } ]
2016-08-16
[ [ "Suhre", "Karsten", "", "IGS" ], [ "Audic", "Stéphane", "", "IGS" ], [ "Claverie", "Jean-Michel", "", "IGS" ] ]
The initial analysis of the recently sequenced genome of Acanthamoeba polyphaga Mimivirus, the largest known double-stranded DNA virus, predicted a proteome of size and complexity more akin to small parasitic bacteria than to other nucleo-cytoplasmic large DNA viruses, and identified numerous functions never before described in a virus. It has been proposed that the Mimivirus lineage could have emerged before the individualization of cellular organisms from the 3 domains of life. An exhaustive in silico analysis of the non-coding moiety of all known viral genomes now uncovers the unprecedented perfect conservation of an AAAATTGA motif in close to 50% of the Mimivirus genes. This motif preferentially occurs in genes transcribed from the predicted leading strand and is associated with functions required early in the viral infectious cycle, such as transcription and protein translation. A comparison with the known promoters of unicellular eukaryotes, in particular amoebal protists, strongly suggests that the AAAATTGA motif is the structural equivalent of the TATA box core promoter element. This element is specific to the Mimivirus lineage, and may correspond to an ancestral promoter structure predating the radiation of the eukaryotic kingdoms. This unprecedented conservation of core promoter regions is another exceptional feature of Mimivirus, which again raises the question of its evolutionary origin.
1912.06591
Iain Johnston
Ryan Kerr, Sara Jabbari, Iain G. Johnston
Intracellular Energy Variability Modulates Cellular Decision-Making Capacity
null
null
null
null
q-bio.CB q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Cells are able to generate phenotypic diversity both during development and in response to stressful and changing environments, aiding survival. The biologically and medically vital process of a cell assuming a functionally important fate from a range of phenotypic possibilities can be thought of as a cell decision. To make these decisions, a cell relies on energy dependent pathways of signalling and expression. However, energy availability is often overlooked as a modulator of cellular decision-making. As cells can vary dramatically in energy availability, this limits our knowledge of how this key biological axis affects cell behaviour. Here, we consider the energy dependence of a highly generalisable decision-making regulatory network, and show that energy variability changes the sets of decisions a cell can make and the ease with which they can be made. Increasing intracellular energy levels can increase the number of stable phenotypes it can generate, corresponding to increased decision-making capacity. For this decision-making architecture, a cell with intracellular energy below a threshold is limited to a singular phenotype, potentially forcing the adoption of a specific cell fate. We suggest that common energetic differences between cells may explain some of the observed variability in cellular decision-making, and demonstrate the importance of considering energy levels in several diverse biological decision-making phenomena.
[ { "created": "Fri, 13 Dec 2019 16:34:26 GMT", "version": "v1" } ]
2019-12-16
[ [ "Kerr", "Ryan", "" ], [ "Jabbari", "Sara", "" ], [ "Johnston", "Iain G.", "" ] ]
Cells are able to generate phenotypic diversity both during development and in response to stressful and changing environments, aiding survival. The biologically and medically vital process of a cell assuming a functionally important fate from a range of phenotypic possibilities can be thought of as a cell decision. To make these decisions, a cell relies on energy dependent pathways of signalling and expression. However, energy availability is often overlooked as a modulator of cellular decision-making. As cells can vary dramatically in energy availability, this limits our knowledge of how this key biological axis affects cell behaviour. Here, we consider the energy dependence of a highly generalisable decision-making regulatory network, and show that energy variability changes the sets of decisions a cell can make and the ease with which they can be made. Increasing intracellular energy levels can increase the number of stable phenotypes it can generate, corresponding to increased decision-making capacity. For this decision-making architecture, a cell with intracellular energy below a threshold is limited to a singular phenotype, potentially forcing the adoption of a specific cell fate. We suggest that common energetic differences between cells may explain some of the observed variability in cellular decision-making, and demonstrate the importance of considering energy levels in several diverse biological decision-making phenomena.
1711.01767
Wlodzislaw Duch
W{\l}odzis{\l}aw Duch
Kurt Lewin, psychological constructs and sources of brain cognitive activity
10 pages, conference paper: XXVI Psychology Colloqium, Polish Academy of Science "Brain-behavior relations in psychology", June 2017
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding mind-brain-environment relations is one of the key topics in psychology. Kurt Lewin, inspired by theoretical physics, tried to establish topological and vector psychology analyzing patterns of interaction between the individual and her/his environment. The time is ripe to reformulate his ambitious goals, searching for ways to interpret objectively measured brain processes in terms of suitable psychological constructs. Connecting cognitive and social psychology constructs to neurophenomics, as it is done now in psychiatry, should ground them in physical reality.
[ { "created": "Mon, 6 Nov 2017 07:58:14 GMT", "version": "v1" } ]
2017-11-07
[ [ "Duch", "Włodzisław", "" ] ]
Understanding mind-brain-environment relations is one of the key topics in psychology. Kurt Lewin, inspired by theoretical physics, tried to establish topological and vector psychology analyzing patterns of interaction between the individual and her/his environment. The time is ripe to reformulate his ambitious goals, searching for ways to interpret objectively measured brain processes in terms of suitable psychological constructs. Connecting cognitive and social psychology constructs to neurophenomics, as it is done now in psychiatry, should ground them in physical reality.
2403.18352
Lucie Pellissier
Caroline Gora (PRC), Ana Dudas (PRC), Oc\'eane Vaugrente (PRC), Lucile Drobecq (PRC), Emmanuel Pecnard (PRC), Ga\"elle Lefort (PRC), Lucie P. Pellissier (PRC)
Deciphering autism heterogeneity: a molecular stratification approach in four mouse models
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by impairments in social interaction, communication, as well as restrained or stereotyped behaviors. The inherent heterogeneity within the autism spectrum poses challenges for developing effective pharmacological treatments targeting core features. Successful clinical trials require the identification of robust markers to enable patient stratification. In this study, we explored molecular markers within the oxytocin and immediate early gene families across five interconnected brain structures of the social circuit in four distinct ASD mouse models, each exhibiting unique behavioral features along the autism spectrum. While dysregulations in the oxytocin family were model-specific, immediate early genes displayed widespread alterations, reflecting global changes in social plasticity. Through integrative analysis, we identified Egr1, Foxp1, Homer1a, Oxt and Oxtr as five robust and discriminant molecular markers facilitating successful stratification of the four models. Importantly, our stratification demonstrated predictive values when challenged with a fifth mouse model or identifying subgroups of mice potentially responsive to oxytocin treatment. Beyond providing insights into oxytocin and immediate early gene mRNA dynamics, this proof-of-concept study represents a significant step toward potential stratification of individuals with ASD. The implications extend to enhancing the success of clinical trials and guiding personalized medicine for distinct subgroups of individuals with autism.
[ { "created": "Wed, 27 Mar 2024 08:44:22 GMT", "version": "v1" } ]
2024-03-28
[ [ "Gora", "Caroline", "", "PRC" ], [ "Dudas", "Ana", "", "PRC" ], [ "Vaugrente", "Océane", "", "PRC" ], [ "Drobecq", "Lucile", "", "PRC" ], [ "Pecnard", "Emmanuel", "", "PRC" ], [ "Lefort", "Gaëlle", "", "PRC" ], [ "Pellissier", "Lucie P.", "", "PRC" ] ]
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by impairments in social interaction, communication, as well as restrained or stereotyped behaviors. The inherent heterogeneity within the autism spectrum poses challenges for developing effective pharmacological treatments targeting core features. Successful clinical trials require the identification of robust markers to enable patient stratification. In this study, we explored molecular markers within the oxytocin and immediate early gene families across five interconnected brain structures of the social circuit in four distinct ASD mouse models, each exhibiting unique behavioral features along the autism spectrum. While dysregulations in the oxytocin family were model-specific, immediate early genes displayed widespread alterations, reflecting global changes in social plasticity. Through integrative analysis, we identified Egr1, Foxp1, Homer1a, Oxt and Oxtr as five robust and discriminant molecular markers facilitating successful stratification of the four models. Importantly, our stratification demonstrated predictive values when challenged with a fifth mouse model or identifying subgroups of mice potentially responsive to oxytocin treatment. Beyond providing insights into oxytocin and immediate early gene mRNA dynamics, this proof-of-concept study represents a significant step toward potential stratification of individuals with ASD. The implications extend to enhancing the success of clinical trials and guiding personalized medicine for distinct subgroups of individuals with autism.
1006.4111
Steven Lettieri
Steven Lettieri, Artem B. Mamonov, Daniel M. Zuckerman
Extending fragment-based free energy calculations with library Monte Carlo simulation: Annealing in interaction space
null
null
10.1016/j.bpj.2010.12.1056
null
q-bio.QM physics.bio-ph physics.comp-ph q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pre-calculated libraries of molecular fragment configurations have previously been used as a basis for both equilibrium sampling (via "library-based Monte Carlo") and for obtaining absolute free energies using a polymer-growth formalism. Here, we combine the two approaches to extend the size of systems for which free energies can be calculated. We study a series of all-atom poly-alanine systems in a simple dielectric "solvent" and find that precise free energies can be obtained rapidly. For instance, for 12 residues, less than an hour of single-processor is required. The combined approach is formally equivalent to the "annealed importance sampling" algorithm; instead of annealing by decreasing temperature, however, interactions among fragments are gradually added as the molecule is "grown." We discuss implications for future binding affinity calculations in which a ligand is grown into a binding site.
[ { "created": "Mon, 21 Jun 2010 16:22:55 GMT", "version": "v1" }, { "created": "Fri, 10 Sep 2010 20:07:59 GMT", "version": "v2" } ]
2018-12-26
[ [ "Lettieri", "Steven", "" ], [ "Mamonov", "Artem B.", "" ], [ "Zuckerman", "Daniel M.", "" ] ]
Pre-calculated libraries of molecular fragment configurations have previously been used as a basis for both equilibrium sampling (via "library-based Monte Carlo") and for obtaining absolute free energies using a polymer-growth formalism. Here, we combine the two approaches to extend the size of systems for which free energies can be calculated. We study a series of all-atom poly-alanine systems in a simple dielectric "solvent" and find that precise free energies can be obtained rapidly. For instance, for 12 residues, less than an hour of single-processor is required. The combined approach is formally equivalent to the "annealed importance sampling" algorithm; instead of annealing by decreasing temperature, however, interactions among fragments are gradually added as the molecule is "grown." We discuss implications for future binding affinity calculations in which a ligand is grown into a binding site.
2112.11448
Stephanie Lewkiewicz
Stephanie M. Lewkiewicz and Sebastiano De Bona and Matthew R. Helmus and Benjamin Seibold
Temperature sensitivity of pest reproductive numbers in age-structured PDE models, with a focus on the invasive spotted lanternfly
36 pages, 9 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Invasive pest establishment is a pervasive threat to global ecosystems, agriculture, and public health. The recent establishment of the invasive spotted lanternfly in the northeastern United States has proven devastating to farms and vineyards, necessitating urgent development of population dynamical models and effective control practices. In this paper, we propose a stage- and age-structured system of PDEs to model insect pest populations, in which underlying dynamics are dictated by ambient temperature through rates of development, fecundity, and mortality. The model incorporates diapause and non-diapause pathways, and is calibrated to experimental and field data on the spotted lanternfly. We develop a novel moving mesh method for capturing age-advection accurately, even for coarse discretization parameters. We define a one-year reproductive number ($R_0$) from the spectrum of a one-year solution operator, and study its sensitivity to variations in the mean and amplitude of the annual temperature profile. We quantify assumptions sufficient to give rise to the low-rank structure of the solution operator characteristic of part of the parameter domain. We discuss establishment potential as it results from the pairing of a favorable $R_0$ value and transient population survival, and address implications for pest control strategies.
[ { "created": "Tue, 21 Dec 2021 18:54:52 GMT", "version": "v1" } ]
2021-12-22
[ [ "Lewkiewicz", "Stephanie M.", "" ], [ "De Bona", "Sebastiano", "" ], [ "Helmus", "Matthew R.", "" ], [ "Seibold", "Benjamin", "" ] ]
Invasive pest establishment is a pervasive threat to global ecosystems, agriculture, and public health. The recent establishment of the invasive spotted lanternfly in the northeastern United States has proven devastating to farms and vineyards, necessitating urgent development of population dynamical models and effective control practices. In this paper, we propose a stage- and age-structured system of PDEs to model insect pest populations, in which underlying dynamics are dictated by ambient temperature through rates of development, fecundity, and mortality. The model incorporates diapause and non-diapause pathways, and is calibrated to experimental and field data on the spotted lanternfly. We develop a novel moving mesh method for capturing age-advection accurately, even for coarse discretization parameters. We define a one-year reproductive number ($R_0$) from the spectrum of a one-year solution operator, and study its sensitivity to variations in the mean and amplitude of the annual temperature profile. We quantify assumptions sufficient to give rise to the low-rank structure of the solution operator characteristic of part of the parameter domain. We discuss establishment potential as it results from the pairing of a favorable $R_0$ value and transient population survival, and address implications for pest control strategies.
2407.00332
Zi Iun Lai
Zi Iun Lai, Wai Kit Fung, Enquan Chew
Machine Learning Models for Dengue Forecasting in Singapore
12 pages, 6 figures
null
null
null
q-bio.OT cs.LG
http://creativecommons.org/licenses/by/4.0/
With emerging prevalence beyond traditionally endemic regions, the global burden of dengue disease is forecasted to be one of the fastest growing. With limited direct treatment or vaccination currently available, prevention through vector control is widely believed to be the most effective form of managing outbreaks. This study examines traditional state space models (moving average, autoregressive, ARIMA, SARIMA), supervised learning techniques (XGBoost, SVM, KNN) and deep networks (LSTM, CNN, ConvLSTM) for forecasting weekly dengue cases in Singapore. Meteorological data and search engine trends were included as features for ML techniques. Forecasts using CNNs yielded the lowest RMSE for weekly cases in 2019.
[ { "created": "Sat, 29 Jun 2024 06:27:52 GMT", "version": "v1" } ]
2024-07-02
[ [ "Lai", "Zi Iun", "" ], [ "Fung", "Wai Kit", "" ], [ "Chew", "Enquan", "" ] ]
With emerging prevalence beyond traditionally endemic regions, the global burden of dengue disease is forecasted to be one of the fastest growing. With limited direct treatment or vaccination currently available, prevention through vector control is widely believed to be the most effective form of managing outbreaks. This study examines traditional state space models (moving average, autoregressive, ARIMA, SARIMA), supervised learning techniques (XGBoost, SVM, KNN) and deep networks (LSTM, CNN, ConvLSTM) for forecasting weekly dengue cases in Singapore. Meteorological data and search engine trends were included as features for ML techniques. Forecasts using CNNs yielded the lowest RMSE for weekly cases in 2019.
2305.19023
Lanlan Su
Sei Zhen Khong and Lanlan Su
Steady-state analysis of networked epidemic models
null
null
null
null
q-bio.PE cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compartmental epidemic models with dynamics that evolve over a graph network have gained considerable importance in recent years but analysis of these models is in general difficult due to their complexity. In this paper, we develop two positive feedback frameworks that are applicable to the study of steady-state values in a wide range of compartmental epidemic models, including both group and networked processes. In the case of a group (resp. networked) model, we show that the convergence limit of the susceptible proportion of the population (resp. the susceptible proportion in at least one of the subgroups) is upper bounded by the reciprocal of the basic reproduction number (BRN) of the model. The BRN, when it is greater than unity, thus demonstrates the level of penetration into a subpopulation by the disease. Both non-strict and strict bounds on the convergence limits are derived and shown to correspond to substantially distinct scenarios in the epidemic processes, one in the presence of the endemic state and another without. Formulae for calculating the limits are provided in the latter case. We apply the developed framework to examining various group and networked epidemic models commonly seen in the literature to verify the validity of our conclusions.
[ { "created": "Tue, 30 May 2023 13:25:49 GMT", "version": "v1" } ]
2023-05-31
[ [ "Khong", "Sei Zhen", "" ], [ "Su", "Lanlan", "" ] ]
Compartmental epidemic models with dynamics that evolve over a graph network have gained considerable importance in recent years but analysis of these models is in general difficult due to their complexity. In this paper, we develop two positive feedback frameworks that are applicable to the study of steady-state values in a wide range of compartmental epidemic models, including both group and networked processes. In the case of a group (resp. networked) model, we show that the convergence limit of the susceptible proportion of the population (resp. the susceptible proportion in at least one of the subgroups) is upper bounded by the reciprocal of the basic reproduction number (BRN) of the model. The BRN, when it is greater than unity, thus demonstrates the level of penetration into a subpopulation by the disease. Both non-strict and strict bounds on the convergence limits are derived and shown to correspond to substantially distinct scenarios in the epidemic processes, one in the presence of the endemic state and another without. Formulae for calculating the limits are provided in the latter case. We apply the developed framework to examining various group and networked epidemic models commonly seen in the literature to verify the validity of our conclusions.
0905.1718
Vitaly Ganusov
Vitaly V. Ganusov, Daniel L. Barber, and Rob J. De Boer
Killing of targets by effector CD8$^+$T cells in the mouse spleen follows the law of mass action
null
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been difficult to measure the efficacy of T cell-based vaccines and to correlate the efficacy of CD8$^+$T cell responses with protection against viral infections. In part, this difficulty is due to our poor understanding of the in vivo efficacy of CD8$^+$T cells. Using a recently developed experimental method of in vivo cytotoxicity, we investigated quantitative aspects of the killing of peptide-pulsed targets by effector and memory CD8$^+$T cells, specific to three epitopes of lymphocytic choriomeningitis virus (LCMV), in the mouse spleen. By analyzing data on the killing of targets with varying numbers of epitope-specific effector and memory CD8$^+$T cells, we find that killing of targets by effectors follows the law of mass action; that is, the death rate of peptide-pulsed targets is proportional to the frequency of CTLs in the spleen. In contrast, killing of targets by memory CD8$^+$T cells does not follow the mass action law because the death rate of targets saturates at high frequencies of memory CD8$^+$T cells. For both effector and memory cells, we also find no support for a killing term in which the death rate of targets decreases with increasing target cell density. Importantly, we find that at low CD8$^+$T cell frequencies, effector and memory CD8$^+$T cells are, on a per capita basis, equally efficient at killing peptide-pulsed targets. Our framework provides a guideline for calculating the level of memory CD8$^+$T cells required to provide sterilizing protection against viral infection. Our results thus form a basis for a quantitative understanding of the process of killing of virus-infected cells by T cell responses in tissues and can be used to correlate the phenotype of vaccine-induced memory CD8$^+$T cells with their killing efficacy in vivo.
[ { "created": "Mon, 11 May 2009 21:29:42 GMT", "version": "v1" } ]
2009-05-13
[ [ "Ganusov", "Vitaly V.", "" ], [ "Barber", "Daniel L.", "" ], [ "De Boer", "Rob J.", "" ] ]
It has been difficult to measure the efficacy of T cell-based vaccines and to correlate the efficacy of CD8$^+$T cell responses with protection against viral infections. In part, this difficulty is due to our poor understanding of the in vivo efficacy of CD8$^+$T cells. Using a recently developed experimental method of in vivo cytotoxicity, we investigated quantitative aspects of the killing of peptide-pulsed targets by effector and memory CD8$^+$T cells, specific to three epitopes of lymphocytic choriomeningitis virus (LCMV), in the mouse spleen. By analyzing data on the killing of targets with varying numbers of epitope-specific effector and memory CD8$^+$T cells, we find that killing of targets by effectors follows the law of mass action; that is, the death rate of peptide-pulsed targets is proportional to the frequency of CTLs in the spleen. In contrast, killing of targets by memory CD8$^+$T cells does not follow the mass action law because the death rate of targets saturates at high frequencies of memory CD8$^+$T cells. For both effector and memory cells, we also find no support for a killing term in which the death rate of targets decreases with increasing target cell density. Importantly, we find that at low CD8$^+$T cell frequencies, effector and memory CD8$^+$T cells are, on a per capita basis, equally efficient at killing peptide-pulsed targets. Our framework provides a guideline for calculating the level of memory CD8$^+$T cells required to provide sterilizing protection against viral infection. Our results thus form a basis for a quantitative understanding of the process of killing of virus-infected cells by T cell responses in tissues and can be used to correlate the phenotype of vaccine-induced memory CD8$^+$T cells with their killing efficacy in vivo.
2405.06642
Haitao Lin
Haitao Lin, Odin Zhang, Huifeng Zhao, Dejun Jiang, Lirong Wu, Zicheng Liu, Yufei Huang, Stan Z. Li
PPFlow: Target-aware Peptide Design with Torsional Flow Matching
18 pages
null
null
null
q-bio.BM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Therapeutic peptides have proven to have great pharmaceutical value and potential in recent decades. However, methods of AI-assisted peptide drug discovery are not fully explored. To fill the gap, we propose a target-aware peptide design method called \textsc{PPFlow}, based on conditional flow matching on torus manifolds, to model the internal geometries of torsion angles for the peptide structure design. Besides, we establish a protein-peptide binding dataset named PPBench2024 to fill the void of massive data for the task of structure-based peptide drug design and to allow the training of deep learning methods. Extensive experiments show that PPFlow reaches state-of-the-art performance in tasks of peptide drug generation and optimization in comparison with baseline models, and can be generalized to other tasks including docking and side-chain packing.
[ { "created": "Tue, 5 Mar 2024 13:26:42 GMT", "version": "v1" }, { "created": "Wed, 15 May 2024 07:09:35 GMT", "version": "v2" }, { "created": "Sun, 16 Jun 2024 11:33:54 GMT", "version": "v3" } ]
2024-06-18
[ [ "Lin", "Haitao", "" ], [ "Zhang", "Odin", "" ], [ "Zhao", "Huifeng", "" ], [ "Jiang", "Dejun", "" ], [ "Wu", "Lirong", "" ], [ "Liu", "Zicheng", "" ], [ "Huang", "Yufei", "" ], [ "Li", "Stan Z.", "" ] ]
Therapeutic peptides have proven to have great pharmaceutical value and potential in recent decades. However, methods of AI-assisted peptide drug discovery are not fully explored. To fill the gap, we propose a target-aware peptide design method called \textsc{PPFlow}, based on conditional flow matching on torus manifolds, to model the internal geometries of torsion angles for the peptide structure design. Besides, we establish a protein-peptide binding dataset named PPBench2024 to fill the void of massive data for the task of structure-based peptide drug design and to allow the training of deep learning methods. Extensive experiments show that PPFlow reaches state-of-the-art performance in tasks of peptide drug generation and optimization in comparison with baseline models, and can be generalized to other tasks including docking and side-chain packing.
q-bio/0612032
Eduardo D. Sontag
Eduardo D. Sontag
Monotone and near-monotone network structure (part I)
In two parts. See http://www.math.rutgers.edu/~sontag/PUBDIR/index.html for additional materials
null
null
null
q-bio.MN
null
This paper (parts I and II) provides an expository introduction to monotone and near-monotone dynamical systems associated to biochemical networks, those whose graphs are consistent or near-consistent. Many conclusions can be drawn from signed network structure, associated to purely stoichiometric information and ignoring fluxes. In particular, monotone systems respond in a predictable fashion to perturbations and have robust and ordered dynamical characteristics, making them reliable components of larger networks. Interconnections of monotone systems may be fruitfully analyzed using tools from control theory, by viewing larger systems as interconnections of monotone subsystems. This allows one to obtain precise bifurcation diagrams without appeal to explicit knowledge of fluxes or of kinetic constants and other parameters, using merely "input/output characteristics" (steady-state responses or DC gains). The procedure may be viewed as a "model reduction" approach in which monotone subsystems are viewed as essentially one-dimensional objects. The possibility of performing a decomposition into a small number of monotone components is closely tied to the question of how "near" a system is to being monotone. We argue that systems that are "near monotone" may be biologically more desirable than systems that are far from being monotone. Indeed, there are indications that biological networks may be much closer to being monotone than random networks that have the same numbers of vertices and of positive and negative edges.
[ { "created": "Sun, 17 Dec 2006 17:57:28 GMT", "version": "v1" } ]
2007-05-23
[ [ "Sontag", "Eduardo D.", "" ] ]
This paper (parts I and II) provides an expository introduction to monotone and near-monotone dynamical systems associated to biochemical networks, those whose graphs are consistent or near-consistent. Many conclusions can be drawn from signed network structure, associated to purely stoichiometric information and ignoring fluxes. In particular, monotone systems respond in a predictable fashion to perturbations and have robust and ordered dynamical characteristics, making them reliable components of larger networks. Interconnections of monotone systems may be fruitfully analyzed using tools from control theory, by viewing larger systems as interconnections of monotone subsystems. This allows one to obtain precise bifurcation diagrams without appeal to explicit knowledge of fluxes or of kinetic constants and other parameters, using merely "input/output characteristics" (steady-state responses or DC gains). The procedure may be viewed as a "model reduction" approach in which monotone subsystems are viewed as essentially one-dimensional objects. The possibility of performing a decomposition into a small number of monotone components is closely tied to the question of how "near" a system is to being monotone. We argue that systems that are "near monotone" may be biologically more desirable than systems that are far from being monotone. Indeed, there are indications that biological networks may be much closer to being monotone than random networks that have the same numbers of vertices and of positive and negative edges.
1701.05567
Muyuan Chen
Muyuan Chen, Wei Dai, Ying Sun, Darius Jonasch, Cynthia Y He, Michael F. Schmid, Wah Chiu, Steven J Ludtke
Convolutional Neural Networks for Automated Annotation of Cellular Cryo-Electron Tomograms
21 pages, 8 figures
Nature Methods volume 14, 983-985 (2017)
10.1038/nmeth.4405
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Cellular Electron Cryotomography (CryoET) offers the ability to look inside cells and observe macromolecules frozen in action. A primary challenge for this technique is identifying and extracting the molecular components within the crowded cellular environment. We introduce a method using neural networks to dramatically reduce the time and human effort required for subcellular annotation and feature extraction. Subsequent subtomogram classification and averaging yields in-situ structures of molecular components of interest.
[ { "created": "Thu, 19 Jan 2017 19:04:56 GMT", "version": "v1" }, { "created": "Sun, 11 Jun 2017 16:44:46 GMT", "version": "v2" } ]
2021-08-04
[ [ "Chen", "Muyuan", "" ], [ "Dai", "Wei", "" ], [ "Sun", "Ying", "" ], [ "Jonasch", "Darius", "" ], [ "He", "Cynthia Y", "" ], [ "Schmid", "Michael F.", "" ], [ "Chiu", "Wah", "" ], [ "Ludtke", "Steven J", "" ] ]
Cellular Electron Cryotomography (CryoET) offers the ability to look inside cells and observe macromolecules frozen in action. A primary challenge for this technique is identifying and extracting the molecular components within the crowded cellular environment. We introduce a method using neural networks to dramatically reduce the time and human effort required for subcellular annotation and feature extraction. Subsequent subtomogram classification and averaging yields in-situ structures of molecular components of interest.
1704.00945
Ana Ortega
Ana Ortega, Estefan\'ia Taraz\'on, Carolina Gil-Cayuela, Luis Mart\'inez-Dolz, Francisca Lago, Jos\'e Ram\'on Gonz\'alez-Juanatey, Juan Sandoval, Manuel Portol\'es, Esther Rosell\'o-Llet\'i, Miguel Rivera
ASB1 differential methylation in ischaemic cardiomyopathy. Relationship with left ventricular performance in end stage heart failure patients
17 pages, 2 figures and 1 table. Submitted to ESC Heart Failure
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aims: Ischaemic cardiomyopathy (ICM) leads to impaired contraction and ventricular dysfunction causing high rates of morbidity and mortality. Epigenomics allows the identification of epigenetic signatures in human diseases. We analyse the differential epigenetic patterns of the ASB gene family in ICM patients and relate these alterations to their haemodynamic and functional status. Methods and Results: Epigenomic analysis was carried out using 16 left ventricular (LV) tissue samples, 8 from ICM patients undergoing heart transplantation and 8 from control (CNT) subjects without cardiac disease. We increased the sample size up to 13 ICM and 10 CNT for RNA-sequencing and to 14 ICM for pyrosequencing analyses. We found a hypermethylated profile (cg11189868) in the ASB1 gene that showed a differential methylation of 0.26 beta difference, P < 0.05. This result was validated by the pyrosequencing technique (0.23 beta difference, P < 0.05). Notably, the methylation pattern was strongly related to LV ejection fraction (r = -0.849, P = 0.008), stroke volume (r = -0.929, P = 0.001) and end-systolic and diastolic LV diameters (r = -0.743, P = 0.035 for both). ASB1 showed a down regulation in mRNA levels (-1.2 fold, P < 0.05). Conclusion: Our findings link a specific ASB1 methylation pattern to LV structure and performance in end-stage ICM, opening new therapeutic opportunities and providing new insights regarding which is the functionally relevant genome in the ischaemic failing myocardium. Keywords: ischaemic cardiomyopathy; epigenomics; heart failure; left ventricular dysfunction; stroke volume; ASB1.
[ { "created": "Tue, 4 Apr 2017 10:30:42 GMT", "version": "v1" } ]
2017-04-05
[ [ "Ortega", "Ana", "" ], [ "Tarazón", "Estefanía", "" ], [ "Gil-Cayuela", "Carolina", "" ], [ "Martínez-Dolz", "Luis", "" ], [ "Lago", "Francisca", "" ], [ "González-Juanatey", "José Ramón", "" ], [ "Sandoval", "Juan", "" ], [ "Portolés", "Manuel", "" ], [ "Roselló-Lletí", "Esther", "" ], [ "Rivera", "Miguel", "" ] ]
Aims: Ischaemic cardiomyopathy (ICM) leads to impaired contraction and ventricular dysfunction causing high rates of morbidity and mortality. Epigenomics allows the identification of epigenetic signatures in human diseases. We analyse the differential epigenetic patterns of the ASB gene family in ICM patients and relate these alterations to their haemodynamic and functional status. Methods and Results: Epigenomic analysis was carried out using 16 left ventricular (LV) tissue samples, 8 from ICM patients undergoing heart transplantation and 8 from control (CNT) subjects without cardiac disease. We increased the sample size up to 13 ICM and 10 CNT for RNA-sequencing and to 14 ICM for pyrosequencing analyses. We found a hypermethylated profile (cg11189868) in the ASB1 gene that showed a differential methylation of 0.26 beta difference, P < 0.05. This result was validated by the pyrosequencing technique (0.23 beta difference, P < 0.05). Notably, the methylation pattern was strongly related to LV ejection fraction (r = -0.849, P = 0.008), stroke volume (r = -0.929, P = 0.001) and end-systolic and diastolic LV diameters (r = -0.743, P = 0.035 for both). ASB1 showed a down regulation in mRNA levels (-1.2 fold, P < 0.05). Conclusion: Our findings link a specific ASB1 methylation pattern to LV structure and performance in end-stage ICM, opening new therapeutic opportunities and providing new insights regarding which is the functionally relevant genome in the ischaemic failing myocardium. Keywords: ischaemic cardiomyopathy; epigenomics; heart failure; left ventricular dysfunction; stroke volume; ASB1.
2009.00484
Emma Hubert
Emma Hubert and Thibaut Mastrolia and Dylan Possama\"i and Xavier Warin
Incentives, lockdown, and testing: from Thucydides's analysis to the COVID-19 pandemic
null
Journal of Mathematical Biology (2022) 84:37
10.1007/s00285-022-01736-0
null
q-bio.PE econ.TH math.OC math.PR physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we provide a general mathematical formalism to study the optimal control of an epidemic, such as the COVID-19 pandemic, via incentives to lockdown and testing. In particular, we model the interplay between the government and the population as a principal-agent problem with moral hazard, \`a la Cvitani\'c, Possama\"i, and Touzi [27], while an epidemic is spreading according to dynamics given by compartmental stochastic SIS or SIR models, as proposed respectively by Gray, Greenhalgh, Hu, Mao, and Pan [45] and Tornatore, Buccellato, and Vetro [88]. More precisely, to limit the spread of a virus, the population can decrease the transmission rate of the disease by reducing interactions between individuals. However, this effort, which cannot be perfectly monitored by the government, comes at social and monetary cost for the population. To mitigate this cost, and thus encourage the lockdown of the population, the government can put in place an incentive policy, in the form of a tax or subsidy. In addition, the government may also implement a testing policy in order to know more precisely the spread of the epidemic within the country, and to isolate infected individuals. In terms of technical results, we demonstrate the optimal form of the tax, indexed on the proportion of infected individuals, as well as the optimal effort of the population, namely the transmission rate chosen in response to this tax. The government's optimisation problem then boils down to solving an Hamilton-Jacobi-Bellman equation. Numerical results confirm that if a tax policy is implemented, the population is encouraged to significantly reduce its interactions. If the government also adjusts its testing policy, less effort is required on the population side, individuals can interact almost as usual, and the epidemic is largely contained by the targeted isolation of positively-tested individuals.
[ { "created": "Tue, 1 Sep 2020 14:36:28 GMT", "version": "v1" }, { "created": "Wed, 13 Apr 2022 15:28:10 GMT", "version": "v2" } ]
2022-04-14
[ [ "Hubert", "Emma", "" ], [ "Mastrolia", "Thibaut", "" ], [ "Possamaï", "Dylan", "" ], [ "Warin", "Xavier", "" ] ]
In this work, we provide a general mathematical formalism to study the optimal control of an epidemic, such as the COVID-19 pandemic, via incentives to lockdown and testing. In particular, we model the interplay between the government and the population as a principal-agent problem with moral hazard, \`a la Cvitani\'c, Possama\"i, and Touzi [27], while an epidemic is spreading according to dynamics given by compartmental stochastic SIS or SIR models, as proposed respectively by Gray, Greenhalgh, Hu, Mao, and Pan [45] and Tornatore, Buccellato, and Vetro [88]. More precisely, to limit the spread of a virus, the population can decrease the transmission rate of the disease by reducing interactions between individuals. However, this effort, which cannot be perfectly monitored by the government, comes at social and monetary cost for the population. To mitigate this cost, and thus encourage the lockdown of the population, the government can put in place an incentive policy, in the form of a tax or subsidy. In addition, the government may also implement a testing policy in order to know more precisely the spread of the epidemic within the country, and to isolate infected individuals. In terms of technical results, we demonstrate the optimal form of the tax, indexed on the proportion of infected individuals, as well as the optimal effort of the population, namely the transmission rate chosen in response to this tax. The government's optimisation problem then boils down to solving an Hamilton-Jacobi-Bellman equation. Numerical results confirm that if a tax policy is implemented, the population is encouraged to significantly reduce its interactions. If the government also adjusts its testing policy, less effort is required on the population side, individuals can interact almost as usual, and the epidemic is largely contained by the targeted isolation of positively-tested individuals.
2008.05882
Asma Azizi
Asma Azizi, Jeremy Dewar, Zhuolin Qu, James Mac Hymanm
Using an agent-based sexual-network model to analyze the impact of mitigation efforts for controlling chlamydia
null
null
10.1016/j.epidem.2021.100456
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chlamydia trachomatis (Ct) is the most reported sexually transmitted infection in the United States and a major cause of infertility and pelvic inflammatory disease among women. Despite decades of screening women for Ct, rates increase among young African Americans (AA). We create and analyze an agent-based network model to understand the spread of Ct. We calibrate the model parameters to agree with survey data showing Ct prevalence of 12% of the women and 10% of the men in the 15-25 year-old AA in New Orleans, Louisiana. Our model accounts for long-term and casual partnerships. The network captures assortative mixing of individuals by preserving the joint-degree distributions observed in the data. We compare the efficiency of intervention strategies of randomly screening men, partner notification, which includes partner treatment, partner screening, and rescreening for infection. We compare the difference between treating partners of an infected person both with and without testing them. We observe that although increased Ct screening, rescreening and treating most of the partners of infected people will reduce the prevalence, these mitigations alone are not sufficient to control the epidemic. The current practice is to treat the partners of an infected individual, without first testing them for infection. The model predicts that if a sufficient number of the partners of all infected people are tested and treated, then there is a threshold condition where the epidemic can be mitigated. This threshold results from the expanded treatment network created by treating the partners of the infected partners of an individual. Although these conclusions can help design future Ct mitigation studies, we caution the reader that these conclusions are for the mathematical model, not the real world, and are contingent on the validity of the model assumptions.
[ { "created": "Thu, 6 Aug 2020 02:05:52 GMT", "version": "v1" } ]
2021-03-26
[ [ "Azizi", "Asma", "" ], [ "Dewar", "Jeremy", "" ], [ "Qu", "Zhuolin", "" ], [ "Mac Hymanm", "James", "" ] ]
Chlamydia trachomatis (Ct) is the most reported sexually transmitted infection in the United States and a major cause of infertility and pelvic inflammatory disease among women. Despite decades of screening women for Ct, rates increase among young African Americans (AA). We create and analyze an agent-based network model to understand the spread of Ct. We calibrate the model parameters to agree with survey data showing Ct prevalence of 12% of the women and 10% of the men in the 15-25 year-old AA in New Orleans, Louisiana. Our model accounts for long-term and casual partnerships. The network captures assortative mixing of individuals by preserving the joint-degree distributions observed in the data. We compare the efficiency of intervention strategies of randomly screening men, partner notification, which includes partner treatment, partner screening, and rescreening for infection. We compare the difference between treating partners of an infected person both with and without testing them. We observe that although increased Ct screening, rescreening and treating most of the partners of infected people will reduce the prevalence, these mitigations alone are not sufficient to control the epidemic. The current practice is to treat the partners of an infected individual, without first testing them for infection. The model predicts that if a sufficient number of the partners of all infected people are tested and treated, then there is a threshold condition where the epidemic can be mitigated. This threshold results from the expanded treatment network created by treating the partners of the infected partners of an individual. Although these conclusions can help design future Ct mitigation studies, we caution the reader that these conclusions are for the mathematical model, not the real world, and are contingent on the validity of the model assumptions.
1909.00911
Wen-Ting Chu
Wen-Ting Chu, Xiakun Chu, Jin Wang
Investigations of the Underlying Mechanisms of HIF-1{\alpha} and CITED2 Binding to TAZ1
12 pages, 6 figures
null
10.1073/pnas.1915333117
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The TAZ1 domain of CREB binding protein is crucial for transcriptional regulation and recognizes multiple targets. The interactions between TAZ1 and its specific targets are related to the cellular hypoxic negative feedback regulation. Previous experiments reported that one of the TAZ1 targets CITED2 is an efficient competitor of another target HIF-1{\alpha}. Here by developing the structure-based models of TAZ1 complexes we have uncovered the underlying mechanisms of the competitions between HIF-1{\alpha} and CITED2 binding to TAZ1. Our results are consistent with the experimental hypothesis on the competition mechanisms and the apparent affinity. In addition, the simulations prove the dominant position of forming TAZ1-CITED2 complex in both thermodynamics and kinetics. For thermodynamics, TAZ1-CITED2 is the lowest basin located on the free energy surface of binding in the ternary system. For kinetics, the results suggest that CITED2 binds to TAZ1 faster than HIF-1{\alpha}. Besides, the analysis of contact map and f values in this study will be helpful for further experiments on TAZ1 systems.
[ { "created": "Tue, 3 Sep 2019 01:39:27 GMT", "version": "v1" } ]
2022-06-08
[ [ "Chu", "Wen-Ting", "" ], [ "Chu", "Xiakun", "" ], [ "Wang", "Jin", "" ] ]
The TAZ1 domain of CREB binding protein is crucial for transcriptional regulation and recognizes multiple targets. The interactions between TAZ1 and its specific targets are related to the cellular hypoxic negative feedback regulation. Previous experiments reported that one of the TAZ1 targets CITED2 is an efficient competitor of another target HIF-1{\alpha}. Here by developing the structure-based models of TAZ1 complexes we have uncovered the underlying mechanisms of the competitions between HIF-1{\alpha} and CITED2 binding to TAZ1. Our results are consistent with the experimental hypothesis on the competition mechanisms and the apparent affinity. In addition, the simulations prove the dominant position of forming TAZ1-CITED2 complex in both thermodynamics and kinetics. For thermodynamics, TAZ1-CITED2 is the lowest basin located on the free energy surface of binding in the ternary system. For kinetics, the results suggest that CITED2 binds to TAZ1 faster than HIF-1{\alpha}. Besides, the analysis of contact map and f values in this study will be helpful for further experiments on TAZ1 systems.
1702.08078
Yuhong Wang
Yuhong Wang, Junzhou Huang, Wei Li, Sheng Wang, Chuanfan Ding
Intra-protein binding peptide fragments have specific and intrinsic sequence patterns
6 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The key finding in the DNA double helix model is the specific pairing or binding between nucleotides A-T and C-G, and the pairing rules are the molecular basis of the genetic code. Unfortunately, no such rules have been discovered for proteins. Here we show that similar rules and intrinsic sequence patterns between intra-protein binding peptide fragments do exist, and they can be extracted using a deep learning algorithm. Multi-millions of binding and non-binding peptide fragments from currently available protein X-ray structures are classified with an accuracy of up to 93%. This discovery has the potential in helping solve protein folding and protein-protein interaction problems, two open and fundamental problems in molecular biology.
[ { "created": "Sun, 26 Feb 2017 20:13:20 GMT", "version": "v1" } ]
2017-02-28
[ [ "Wang", "Yuhong", "" ], [ "Huang", "Junzhou", "" ], [ "Li", "Wei", "" ], [ "Wang", "Sheng", "" ], [ "Ding", "Chuanfan", "" ] ]
The key finding in the DNA double helix model is the specific pairing or binding between nucleotides A-T and C-G, and the pairing rules are the molecular basis of the genetic code. Unfortunately, no such rules have been discovered for proteins. Here we show that similar rules and intrinsic sequence patterns between intra-protein binding peptide fragments do exist, and they can be extracted using a deep learning algorithm. Multi-millions of binding and non-binding peptide fragments from currently available protein X-ray structures are classified with an accuracy of up to 93%. This discovery has the potential in helping solve protein folding and protein-protein interaction problems, two open and fundamental problems in molecular biology.
2402.04547
Tom Chou
Xiangting Li and Tom Chou
Reliable ligand discrimination in stochastic multistep kinetic proofreading: First passage time vs. product counting strategies
21 pages, 10 figures + Supporting Information (7 figures)
null
null
null
q-bio.MN cond-mat.stat-mech q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Cellular signaling, crucial for biological processes like immune response and homeostasis, relies on specificity and fidelity in signal transduction to accurately respond to stimuli amidst biological noise. Kinetic proofreading (KPR) is a key mechanism enhancing signaling specificity through time-delayed steps, although its effectiveness is debated due to intrinsic noise potentially reducing signal fidelity. In this study, we reformulate the theory of kinetic proofreading (KPR) by convolving multiple intermediate states into a single state and then define an overall "processing" time required to traverse these states. This simplification allows us to succinctly describe kinetic proofreading in terms of a single waiting time parameter, facilitating a more direct evaluation and comparison of KPR performance across different biological contexts such as DNA replication and T cell receptor (TCR) signaling. We find that loss of fidelity for longer proofreading steps relies on the specific strategy of information extraction and show that in the first-passage time (FPT) discrimination strategy, longer proofreading steps can exponentially improve the accuracy of KPR at the cost of speed. Thus, KPR can still be an effective discrimination mechanism in the high noise regime. However, in a product concentration-based discrimination strategy, longer proofreading steps do not necessarily lead to an increase in performance. Nevertheless, by introducing activation thresholds on product concentrations, we can decompose the product-based strategy into a series of FPT-based strategies to better resolve the subtleties of KPR-mediated product discrimination. Our findings underscore the importance of understanding KPR in the context of how information is extracted and processed in the cell.
[ { "created": "Wed, 7 Feb 2024 03:05:16 GMT", "version": "v1" } ]
2024-02-08
[ [ "Li", "Xiangting", "" ], [ "Chou", "Tom", "" ] ]
Cellular signaling, crucial for biological processes like immune response and homeostasis, relies on specificity and fidelity in signal transduction to accurately respond to stimuli amidst biological noise. Kinetic proofreading (KPR) is a key mechanism enhancing signaling specificity through time-delayed steps, although its effectiveness is debated due to intrinsic noise potentially reducing signal fidelity. In this study, we reformulate the theory of kinetic proofreading (KPR) by convolving multiple intermediate states into a single state and then define an overall "processing" time required to traverse these states. This simplification allows us to succinctly describe kinetic proofreading in terms of a single waiting time parameter, facilitating a more direct evaluation and comparison of KPR performance across different biological contexts such as DNA replication and T cell receptor (TCR) signaling. We find that loss of fidelity for longer proofreading steps relies on the specific strategy of information extraction and show that in the first-passage time (FPT) discrimination strategy, longer proofreading steps can exponentially improve the accuracy of KPR at the cost of speed. Thus, KPR can still be an effective discrimination mechanism in the high noise regime. However, in a product concentration-based discrimination strategy, longer proofreading steps do not necessarily lead to an increase in performance. Nevertheless, by introducing activation thresholds on product concentrations, we can decompose the product-based strategy into a series of FPT-based strategies to better resolve the subtleties of KPR-mediated product discrimination. Our findings underscore the importance of understanding KPR in the context of how information is extracted and processed in the cell.
1304.5535
Anthony J Cox
Anthony J. Cox, Tobias Jakobi, Giovanna Rosone, Ole B. Schulz-Trieglaff
Comparing DNA sequence collections by direct comparison of compressed text indexes
Springer LNCS (Lecture Notes in Computer Science) should be considered as the original place of publication, please cite accordingly. The final version of this manuscript is available at http://link.springer.com/chapter/10.1007/978-3-642-33122-0_17
Lecture Notes in Computer Science Volume 7534, 2012, pp 214-224
10.1007/978-3-642-33122-0_17
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Popular sequence alignment tools such as BWA convert a reference genome to an indexing data structure based on the Burrows-Wheeler Transform (BWT), from which matches to individual query sequences can be rapidly determined. However the utility of also indexing the query sequences themselves remains relatively unexplored. Here we show that an all-against-all comparison of two sequence collections can be computed from the BWT of each collection with the BWTs held entirely in external memory, i.e. on disk and not in RAM. As an application of this technique, we show that BWTs of transcriptomic and genomic reads can be compared to obtain reference-free predictions of splice junctions that have high overlap with results from more standard reference-based methods. Code to construct and compare the BWT of large genomic data sets is available at http://beetl.github.com/BEETL/ as part of the BEETL library.
[ { "created": "Fri, 19 Apr 2013 20:16:19 GMT", "version": "v1" } ]
2013-04-23
[ [ "Cox", "Anthony J.", "" ], [ "Jakobi", "Tobias", "" ], [ "Rosone", "Giovanna", "" ], [ "Schulz-Trieglaff", "Ole B.", "" ] ]
Popular sequence alignment tools such as BWA convert a reference genome to an indexing data structure based on the Burrows-Wheeler Transform (BWT), from which matches to individual query sequences can be rapidly determined. However the utility of also indexing the query sequences themselves remains relatively unexplored. Here we show that an all-against-all comparison of two sequence collections can be computed from the BWT of each collection with the BWTs held entirely in external memory, i.e. on disk and not in RAM. As an application of this technique, we show that BWTs of transcriptomic and genomic reads can be compared to obtain reference-free predictions of splice junctions that have high overlap with results from more standard reference-based methods. Code to construct and compare the BWT of large genomic data sets is available at http://beetl.github.com/BEETL/ as part of the BEETL library.
0807.4511
Pierre Sens
Lionel Foret and Pierre Sens
Kinetic regulation of coated vesicle secretion
6 pages 4 figures accepted at PNAS
null
10.1073/pnas.0801173105
null
q-bio.SC cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The secretion of vesicles for intracellular transport often relies on the aggregation of specialized membrane-bound proteins into a coat able to curve cell membranes. The nucleation and growth of a protein coat is a kinetic process that competes with the energy-consuming turnover of coat components between the membrane and the cytosol. We propose a generic kinetic description of coat assembly and the formation of coated vesicles, and discuss its implications for the dynamics of COP vesicles that traffic within the Golgi and with the Endoplasmic Reticulum. We show that stationary coats of fixed area emerge from the competition between coat growth and the recycling of coat components, in a fashion resembling the treadmilling of cytoskeletal filaments. We further show that the turnover of coat components allows for a highly sensitive switching mechanism between a quiescent and a vesicle producing membrane, upon a slowing down of the exchange kinetics. We claim that the existence of this switching behaviour, also triggered by factors such as the presence of cargo and variation of the membrane mechanical tension, allows for efficient regulation of vesicle secretion. We propose a model, supported by different experimental observations, in which vesiculation of secretory membranes is impaired by the energy consuming desorption of coat proteins, until the presence of cargo or other factors triggers a dynamical switch into a vesicle producing state.
[ { "created": "Mon, 28 Jul 2008 17:59:11 GMT", "version": "v1" } ]
2009-11-13
[ [ "Foret", "Lionel", "" ], [ "Sens", "Pierre", "" ] ]
The secretion of vesicles for intracellular transport often relies on the aggregation of specialized membrane-bound proteins into a coat able to curve cell membranes. The nucleation and growth of a protein coat is a kinetic process that competes with the energy-consuming turnover of coat components between the membrane and the cytosol. We propose a generic kinetic description of coat assembly and the formation of coated vesicles, and discuss its implications for the dynamics of COP vesicles that traffic within the Golgi and with the Endoplasmic Reticulum. We show that stationary coats of fixed area emerge from the competition between coat growth and the recycling of coat components, in a fashion resembling the treadmilling of cytoskeletal filaments. We further show that the turnover of coat components allows for a highly sensitive switching mechanism between a quiescent and a vesicle producing membrane, upon a slowing down of the exchange kinetics. We claim that the existence of this switching behaviour, also triggered by factors such as the presence of cargo and variation of the membrane mechanical tension, allows for efficient regulation of vesicle secretion. We propose a model, supported by different experimental observations, in which vesiculation of secretory membranes is impaired by the energy consuming desorption of coat proteins, until the presence of cargo or other factors triggers a dynamical switch into a vesicle producing state.
1504.07006
Sascha Z\"ollner
Sascha Z\"ollner, Mikhail E. Sokolnikov and Markus Eidem\"uller
Beyond two-stage models for lung carcinogenesis in the Mayak workers: Implications for Plutonium risk
null
PLoS ONE 10 (2015): e0126238
10.1371/journal.pone.0126238
null
q-bio.TO q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mechanistic multi-stage models are used to analyze lung-cancer mortality after Plutonium exposure in the Mayak-workers cohort, with follow-up until 2008. Besides the established two-stage model with clonal expansion, models with three mutation stages as well as a model with two distinct pathways to cancer are studied. The results suggest that three-stage models offer an improved description of the data. The best-fitting models point to a mechanism where radiation increases the rate of clonal expansion. This is interpreted in terms of changes in cell-cycle control mediated by bystander signaling or repopulation following cell killing. No statistical evidence for a two-pathway model is found. To elucidate the implications of the different models for radiation risk, several exposure scenarios are studied. Models with a radiation effect at an early stage show a delayed response and a pronounced drop-off with older ages at exposure. Moreover, the dose-response relationship is strongly nonlinear for all three-stage models, revealing a marked increase above a critical dose.
[ { "created": "Mon, 27 Apr 2015 09:58:05 GMT", "version": "v1" } ]
2015-05-29
[ [ "Zöllner", "Sascha", "" ], [ "Sokolnikov", "Mikhail E.", "" ], [ "Eidemüller", "Markus", "" ] ]
Mechanistic multi-stage models are used to analyze lung-cancer mortality after Plutonium exposure in the Mayak-workers cohort, with follow-up until 2008. Besides the established two-stage model with clonal expansion, models with three mutation stages as well as a model with two distinct pathways to cancer are studied. The results suggest that three-stage models offer an improved description of the data. The best-fitting models point to a mechanism where radiation increases the rate of clonal expansion. This is interpreted in terms of changes in cell-cycle control mediated by bystander signaling or repopulation following cell killing. No statistical evidence for a two-pathway model is found. To elucidate the implications of the different models for radiation risk, several exposure scenarios are studied. Models with a radiation effect at an early stage show a delayed response and a pronounced drop-off with older ages at exposure. Moreover, the dose-response relationship is strongly nonlinear for all three-stage models, revealing a marked increase above a critical dose.
1812.06964
Philipp H\"ovel
Jason Bassett, Pascal Blunk, Thomas Isele, J\"orn Gethmann, Philipp H\"ovel
An Agent-Based Model for Bovine Viral Diarrhea
42 pages, 14 tables, 4 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an exhaustive description of a stochastic, event-driven, hierarchical agent-based model designed to reproduce the infectious state of the cattle disease called Bovine Viral Diarrhea, for which the livestock-trade network is the main route of spreading. For the farm-node dynamics, it takes into account a vast number of breeding, infectious and animal movement mechanisms via a susceptible-infected-recovered type of dynamics with an additional permanently infectious class. The interaction between the farms is described by a supply-demand farm manager mechanism governing the network structure and dynamics. The model includes realistic disease and breeding dynamics and allows to study numerous mitigation strategies of present and past government regulations, including different testing and vaccination scenarios.
[ { "created": "Fri, 14 Dec 2018 20:58:36 GMT", "version": "v1" } ]
2018-12-19
[ [ "Bassett", "Jason", "" ], [ "Blunk", "Pascal", "" ], [ "Isele", "Thomas", "" ], [ "Gethmann", "Jörn", "" ], [ "Hövel", "Philipp", "" ] ]
We present an exhaustive description of a stochastic, event-driven, hierarchical agent-based model designed to reproduce the infectious state of the cattle disease called Bovine Viral Diarrhea, for which the livestock-trade network is the main route of spreading. For the farm-node dynamics, it takes into account a vast number of breeding, infectious and animal movement mechanisms via a susceptible-infected-recovered type of dynamics with an additional permanently infectious class. The interaction between the farms is described by a supply-demand farm manager mechanism governing the network structure and dynamics. The model includes realistic disease and breeding dynamics and allows to study numerous mitigation strategies of present and past government regulations, including different testing and vaccination scenarios.
0803.1838
Julius Lucks
Julius B. Lucks
Python - All a Scientist Needs
Regularly maintained at http://openwetware.org/wiki/Julius_B._Lucks/Projects/Python_All_A_Scientist_Needs
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/3.0/
Any cutting-edge scientific research project requires a myriad of computational tools for data generation, management, analysis and visualization. Python is a flexible and extensible scientific programming platform that offered the perfect solution in our recent comparative genomics investigation (J. B. Lucks, D. R. Nelson, G. Kudla, J. B. Plotkin. Genome landscapes and bacteriophage codon usage, PLoS Computational Biology, 4, 1000001, 2008). In this paper, we discuss the challenges of this project, and how the combined power of Biopython, Matplotlib and SWIG was utilized for the required computational tasks. We finish by discussing how Python goes beyond being a convenient programming language, and promotes good scientific practice by enabling clean code, integration with professional programming techniques such as unit testing, and strong data provenance.
[ { "created": "Wed, 12 Mar 2008 20:08:07 GMT", "version": "v1" } ]
2008-03-14
[ [ "Lucks", "Julius B.", "" ] ]
Any cutting-edge scientific research project requires a myriad of computational tools for data generation, management, analysis and visualization. Python is a flexible and extensible scientific programming platform that offered the perfect solution in our recent comparative genomics investigation (J. B. Lucks, D. R. Nelson, G. Kudla, J. B. Plotkin. Genome landscapes and bacteriophage codon usage, PLoS Computational Biology, 4, 1000001, 2008). In this paper, we discuss the challenges of this project, and how the combined power of Biopython, Matplotlib and SWIG was utilized for the required computational tasks. We finish by discussing how Python goes beyond being a convenient programming language, and promotes good scientific practice by enabling clean code, integration with professional programming techniques such as unit testing, and strong data provenance.
1106.1382
Tom Kelsey
W H B Wallace and T W Kelsey
Human ovarian reserve from conception to the menopause
null
null
10.1371/journal.pone.0008772
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human ovary contains a fixed number of non-growing follicles (NGFs) established before birth that decline with increasing age culminating in the menopause at 50-51 years. The objective of this study is to model the age-related population of NGFs in the human ovary from conception to menopause. Data were taken from eight separate quantitative histological studies (n = 325) in which NGF populations at known ages from seven weeks post conception to 51 years (median 32 years) were calculated. The data set was fitted to 20 peak function models, with the results ranked by obtained r2 correlation coefficient. The highest ranked model was chosen. Our model matches the log-adjusted NGF population from conception to menopause to a five-parameter asymmetric double Gaussian cumulative (ADC) curve (r2 = 0.81). When restricted to ages up to 25 years, the ADC curve has r2 = 0.95. We estimate that for 95% of women by the age of 30 years only 12% of their maximum pre-birth NGF population is present and by the age of 40 years only 3% remains. Furthermore, we found that the rate of NGF recruitment towards maturation for most women increases from birth until approximately age 14 years then decreases towards the menopause. To our knowledge, this is the first model of ovarian reserve from conception to menopause. This model allows us to estimate the number of NGFs present in the ovary at any given age, suggests that 81% of the variance in NGF populations is due to age alone, and shows for the first time, to our knowledge, that the rate of NGF recruitment increases from birth to age 14 years then declines with age until menopause. An increased understanding of the dynamics of human ovarian reserve will provide a more scientific basis for fertility counselling for both healthy women and those who have survived gonadotoxic cancer treatments.
[ { "created": "Tue, 7 Jun 2011 16:00:36 GMT", "version": "v1" } ]
2011-06-08
[ [ "Wallace", "W H B", "" ], [ "Kelsey", "T W", "" ] ]
The human ovary contains a fixed number of non-growing follicles (NGFs) established before birth that decline with increasing age culminating in the menopause at 50-51 years. The objective of this study is to model the age-related population of NGFs in the human ovary from conception to menopause. Data were taken from eight separate quantitative histological studies (n = 325) in which NGF populations at known ages from seven weeks post conception to 51 years (median 32 years) were calculated. The data set was fitted to 20 peak function models, with the results ranked by obtained r2 correlation coefficient. The highest ranked model was chosen. Our model matches the log-adjusted NGF population from conception to menopause to a five-parameter asymmetric double Gaussian cumulative (ADC) curve (r2 = 0.81). When restricted to ages up to 25 years, the ADC curve has r2 = 0.95. We estimate that for 95% of women by the age of 30 years only 12% of their maximum pre-birth NGF population is present and by the age of 40 years only 3% remains. Furthermore, we found that the rate of NGF recruitment towards maturation for most women increases from birth until approximately age 14 years then decreases towards the menopause. To our knowledge, this is the first model of ovarian reserve from conception to menopause. This model allows us to estimate the number of NGFs present in the ovary at any given age, suggests that 81% of the variance in NGF populations is due to age alone, and shows for the first time, to our knowledge, that the rate of NGF recruitment increases from birth to age 14 years then declines with age until menopause. An increased understanding of the dynamics of human ovarian reserve will provide a more scientific basis for fertility counselling for both healthy women and those who have survived gonadotoxic cancer treatments.
1504.01304
Yuzhen Ye
Yuzhen Ye and Haixu Tang
Utilizing de Bruijn graph of metagenome assembly for metatranscriptome analysis
8 pages, 4 figures, accepted in RECOMB-Seq 2015, under consideration in Bioinformatics (a special issue for RECOMB-Seq/CBB)
null
null
null
q-bio.GN cs.CE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metagenomics research has accelerated the studies of microbial organisms, providing insights into the composition and potential functionality of various microbial communities. Metatranscriptomics (studies of the transcripts from a mixture of microbial species) and other meta-omics approaches hold even greater promise for providing additional insights into functional and regulatory characteristics of the microbial communities. Current metatranscriptomics projects are often carried out without matched metagenomic datasets (of the same microbial communities). For the projects that produce both metatranscriptomic and metagenomic datasets, their analyses are often not integrated. Metagenome assemblies are far from perfect, partially explaining why metagenome assemblies are not used for the analysis of metatranscriptomic datasets. Here we report a reads mapping algorithm for mapping of short reads onto a de Bruijn graph of assemblies. A hash table of junction k-mers (k-mers spanning branching structures in the de Bruijn graph) is used to facilitate fast mapping of reads to the graph. We developed an application of this mapping algorithm: a reference based approach to metatranscriptome assembly using graphs of metagenome assembly as the reference. Our results show that this new approach (called TAG) helps to assemble substantially more transcripts that otherwise would have been missed or truncated because of the fragmented nature of the reference metagenome. TAG was implemented in C++ and has been tested extensively on the linux platform. It is available for download as open source at http://omics.informatics.indiana.edu/TAG.
[ { "created": "Mon, 6 Apr 2015 15:17:29 GMT", "version": "v1" } ]
2015-04-07
[ [ "Ye", "Yuzhen", "" ], [ "Tang", "Haixu", "" ] ]
Metagenomics research has accelerated the studies of microbial organisms, providing insights into the composition and potential functionality of various microbial communities. Metatranscriptomics (studies of the transcripts from a mixture of microbial species) and other meta-omics approaches hold even greater promise for providing additional insights into functional and regulatory characteristics of the microbial communities. Current metatranscriptomics projects are often carried out without matched metagenomic datasets (of the same microbial communities). For the projects that produce both metatranscriptomic and metagenomic datasets, their analyses are often not integrated. Metagenome assemblies are far from perfect, partially explaining why metagenome assemblies are not used for the analysis of metatranscriptomic datasets. Here we report a reads mapping algorithm for mapping of short reads onto a de Bruijn graph of assemblies. A hash table of junction k-mers (k-mers spanning branching structures in the de Bruijn graph) is used to facilitate fast mapping of reads to the graph. We developed an application of this mapping algorithm: a reference based approach to metatranscriptome assembly using graphs of metagenome assembly as the reference. Our results show that this new approach (called TAG) helps to assemble substantially more transcripts that otherwise would have been missed or truncated because of the fragmented nature of the reference metagenome. TAG was implemented in C++ and has been tested extensively on the linux platform. It is available for download as open source at http://omics.informatics.indiana.edu/TAG.
2005.08290
Thierry Mora
Anastasia A. Minervina, Ekaterina A. Komech, Aleksei Titov, Meriem Bensouda Koraichi, Elisa Rosati, Ilgar Z. Mamedov, Andre Franke, Grigory A. Efimov, Dmitriy M. Chudakov, Thierry Mora, Aleksandra M. Walczak, Yuri B. Lebedev, Mikhail V. Pogorelyy
Longitudinal high-throughput TCR repertoire profiling reveals the dynamics of T cell memory formation after mild COVID-19 infection
null
eLife 10 e63502 (2020)
10.7554/eLife.63502
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
COVID-19 is a global pandemic caused by the SARS-CoV-2 coronavirus. T cells play a key role in the adaptive antiviral immune response by killing infected cells and facilitating the selection of virus-specific antibodies. However, neither the dynamics and cross-reactivity of the SARS-CoV-2-specific T cell response nor the diversity of resulting immune memory are well understood. In this study we use longitudinal high-throughput T cell receptor (TCR) sequencing to track changes in the T cell repertoire following two mild cases of COVID-19. In both donors we identified CD4+ and CD8+ T cell clones with transient clonal expansion after infection. The antigen specificity of CD8+ TCR sequences to SARS-CoV-2 epitopes was confirmed by both MHC tetramer binding and presence in a large database of SARS-CoV-2 epitope-specific TCRs. We describe characteristic motifs in TCR sequences of COVID-19-reactive clones and show preferential occurrence of these motifs in a publicly available large dataset of repertoires from COVID-19 patients. We show that in both donors the majority of infection-reactive clonotypes acquire memory phenotypes. Certain T cell clones were detected in the memory fraction at the pre-infection timepoint, suggesting participation of pre-existing cross-reactive memory T cells in the immune response to SARS-CoV-2.
[ { "created": "Sun, 17 May 2020 16:33:13 GMT", "version": "v1" }, { "created": "Tue, 8 Sep 2020 10:36:20 GMT", "version": "v2" }, { "created": "Wed, 30 Sep 2020 15:05:43 GMT", "version": "v3" } ]
2021-02-05
[ [ "Minervina", "Anastasia A.", "" ], [ "Komech", "Ekaterina A.", "" ], [ "Titov", "Aleksei", "" ], [ "Koraichi", "Meriem Bensouda", "" ], [ "Rosati", "Elisa", "" ], [ "Mamedov", "Ilgar Z.", "" ], [ "Franke", "Andre", "" ], [ "Efimov", "Grigory A.", "" ], [ "Chudakov", "Dmitriy M.", "" ], [ "Mora", "Thierry", "" ], [ "Walczak", "Aleksandra M.", "" ], [ "Lebedev", "Yuri B.", "" ], [ "Pogorelyy", "Mikhail V.", "" ] ]
COVID-19 is a global pandemic caused by the SARS-CoV-2 coronavirus. T cells play a key role in the adaptive antiviral immune response by killing infected cells and facilitating the selection of virus-specific antibodies. However, neither the dynamics and cross-reactivity of the SARS-CoV-2-specific T cell response nor the diversity of resulting immune memory are well understood. In this study we use longitudinal high-throughput T cell receptor (TCR) sequencing to track changes in the T cell repertoire following two mild cases of COVID-19. In both donors we identified CD4+ and CD8+ T cell clones with transient clonal expansion after infection. The antigen specificity of CD8+ TCR sequences to SARS-CoV-2 epitopes was confirmed by both MHC tetramer binding and presence in a large database of SARS-CoV-2 epitope-specific TCRs. We describe characteristic motifs in TCR sequences of COVID-19-reactive clones and show preferential occurrence of these motifs in a publicly available large dataset of repertoires from COVID-19 patients. We show that in both donors the majority of infection-reactive clonotypes acquire memory phenotypes. Certain T cell clones were detected in the memory fraction at the pre-infection timepoint, suggesting participation of pre-existing cross-reactive memory T cells in the immune response to SARS-CoV-2.
1807.07593
Samuele Soraggi
Samuele Soraggi and Carsten Wiuf
General theory for stochastic admixture graphs and F-statistics
null
null
10.1016/j.tpb.2018.12.002
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
We provide a general mathematical framework based on the theory of graphical models to study admixture graphs. Admixture graphs are used to describe the ancestral relationships between past and present populations, allowing for population merges and migration events, by means of gene flow. We give various mathematical properties of admixture graphs with particular focus on properties of the so-called $F$-statistics. Also the Wright-Fisher model is studied and a general expression for the loss of heterozygosity is derived.
[ { "created": "Thu, 19 Jul 2018 18:25:53 GMT", "version": "v1" } ]
2018-12-31
[ [ "Soraggi", "Samuele", "" ], [ "Wiuf", "Carsten", "" ] ]
We provide a general mathematical framework based on the theory of graphical models to study admixture graphs. Admixture graphs are used to describe the ancestral relationships between past and present populations, allowing for population merges and migration events, by means of gene flow. We give various mathematical properties of admixture graphs with particular focus on properties of the so-called $F$-statistics. Also the Wright-Fisher model is studied and a general expression for the loss of heterozygosity is derived.
1402.5956
Henrik Jeldtoft Jensen
Xiaogeng Wan, Bjorn Cruts and Henrik Jeldtoft Jensen
The causal inference of cortical neural networks during music improvisations
22 pages, 9 figures. The version was a revised in accordance with referee's comments. The language was also improved
null
10.1371/journal.pone.0112776
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present an EEG study of two music improvisation experiments. Professional musicians with a high level of improvisation skills were asked to perform music either according to notes (composed music) or in improvisation. Each piece of music was performed in two different modes: strict mode and "let-go" mode. Synchronized EEG data was measured from both musicians and listeners. We used one of the most reliable causality measures: conditional mutual information from mixed embedding (MIME), to analyze directed correlations between different EEG channels, which was combined with network theory to construct both intra-brain and cross-brain neural networks. Differences were identified in intra-brain neural networks between composed music and improvisation and between strict mode and "let-go" mode. Particular brain regions such as frontal, parietal and temporal regions were found to play a key role in differentiating the brain activities between different playing conditions. By comparing the level of degree centralities in intra-brain neural networks, we found musicians responding differently to listeners when playing music in different conditions.
[ { "created": "Mon, 24 Feb 2014 17:13:07 GMT", "version": "v1" }, { "created": "Thu, 24 Jul 2014 15:06:18 GMT", "version": "v2" } ]
2015-06-18
[ [ "Wan", "Xiaogeng", "" ], [ "Cruts", "Bjorn", "" ], [ "Jensen", "Henrik Jeldtoft", "" ] ]
In this paper, we present an EEG study of two music improvisation experiments. Professional musicians with a high level of improvisation skills were asked to perform music either according to notes (composed music) or in improvisation. Each piece of music was performed in two different modes: strict mode and "let-go" mode. Synchronized EEG data was measured from both musicians and listeners. We used one of the most reliable causality measures: conditional mutual information from mixed embedding (MIME), to analyze directed correlations between different EEG channels, which was combined with network theory to construct both intra-brain and cross-brain neural networks. Differences were identified in intra-brain neural networks between composed music and improvisation and between strict mode and "let-go" mode. Particular brain regions such as frontal, parietal and temporal regions were found to play a key role in differentiating the brain activities between different playing conditions. By comparing the level of degree centralities in intra-brain neural networks, we found musicians responding differently to listeners when playing music in different conditions.
q-bio/0410015
David A. Kessler
Elisheva Cohen, David A. Kessler, Herbert Levine
Recombination dramatically speeds up evolution of finite populations
null
null
10.1103/PhysRevLett.94.098102
null
q-bio.PE cond-mat.stat-mech
null
We study the role of recombination, as practiced by genetically-competent bacteria, in speeding up Darwinian evolution. This is done by adding a new process to a previously-studied Markov model of evolution on a smooth fitness landscape; this new process allows alleles to be exchanged with those in the surrounding medium. Our results, both numerical and analytic, indicate that for a wide range of intermediate population sizes, recombination dramatically speeds up the evolutionary advance.
[ { "created": "Wed, 13 Oct 2004 19:56:16 GMT", "version": "v1" } ]
2009-11-10
[ [ "Cohen", "Elisheva", "" ], [ "Kessler", "David A.", "" ], [ "Levine", "Herbert", "" ] ]
We study the role of recombination, as practiced by genetically-competent bacteria, in speeding up Darwinian evolution. This is done by adding a new process to a previously-studied Markov model of evolution on a smooth fitness landscape; this new process allows alleles to be exchanged with those in the surrounding medium. Our results, both numerical and analytic, indicate that for a wide range of intermediate population sizes, recombination dramatically speeds up the evolutionary advance.
1209.5231
Eric Mjolsness
Eric Mjolsness
Time-Ordered Product Expansions for Computational Stochastic Systems Biology
Submitted to Q-Bio 2012 conference, Santa Fe, New Mexico
null
10.1088/1478-3975/10/3/035009
null
q-bio.QM cs.CE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's Stochastic Simulation Algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion (TOPE) can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.
[ { "created": "Mon, 24 Sep 2012 11:32:24 GMT", "version": "v1" } ]
2015-06-11
[ [ "Mjolsness", "Eric", "" ] ]
The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's Stochastic Simulation Algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion (TOPE) can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.
1506.04374
Tomasz Rutkowski
Chisaki Nakaizumi, Shoji Makino, and Tomasz M. Rutkowski
Head-related Impulse Response Cues for Spatial Auditory Brain-computer Interface
4 pages, 4 figures, accepted for EMBC 2015, IEEE copyright
null
10.1109/EMBC.2015.7318550
null
q-bio.NC cs.HC
http://creativecommons.org/licenses/by-nc-sa/3.0/
This study provides a comprehensive test of head-related impulse response (HRIR) cues for a spatial auditory brain-computer interface (saBCI) speller paradigm. We present a comparison with the conventional virtual sound headphone-based spatial auditory modality. We propose and optimize three types of sound spatialization settings using a variable elevation in order to evaluate the HRIR efficacy for the saBCI. Three experienced and seven naive BCI users participated in the three experimental setups based on ten presented Japanese syllables. The obtained EEG auditory evoked potentials (AEP) resulted in encouragingly good and stable P300 responses in online BCI experiments. Our case study indicated that users could perceive elevation in the saBCI experiments generated using the HRIR measured from a general head model. The saBCI accuracy and information transfer rate (ITR) scores improved compared to the classical horizontal plane-based virtual spatial sound reproduction modality, as far as the healthy users in the current pilot study are concerned.
[ { "created": "Sun, 14 Jun 2015 10:36:29 GMT", "version": "v1" } ]
2016-11-17
[ [ "Nakaizumi", "Chisaki", "" ], [ "Makino", "Shoji", "" ], [ "Rutkowski", "Tomasz M.", "" ] ]
This study provides a comprehensive test of head-related impulse response (HRIR) cues for a spatial auditory brain-computer interface (saBCI) speller paradigm. We present a comparison with the conventional virtual sound headphone-based spatial auditory modality. We propose and optimize three types of sound spatialization settings using a variable elevation in order to evaluate the HRIR efficacy for the saBCI. Three experienced and seven naive BCI users participated in the three experimental setups based on ten presented Japanese syllables. The obtained EEG auditory evoked potentials (AEP) resulted in encouragingly good and stable P300 responses in online BCI experiments. Our case study indicated that users could perceive elevation in the saBCI experiments generated using the HRIR measured from a general head model. The saBCI accuracy and information transfer rate (ITR) scores improved compared to the classical horizontal plane-based virtual spatial sound reproduction modality, as far as the healthy users in the current pilot study are concerned.
2111.08338
Sarah Kaakai
Nicole El Karoui and Kaouther Hadji and Sarah Kaakai
Simulating long-term impacts of mortality shocks: learning from the cholera pandemic
25 pages, 11 figures
null
null
null
q-bio.PE math.PR q-fin.RM
http://creativecommons.org/licenses/by/4.0/
The aim of this paper is to study the long-term consequences of a mortality shock on longevity. We adopt an historical and modeling approach to study how the population evolution following a mortality shock such as the COVID-19 pandemic could impact future mortality rates. In the first part of the paper, we study the several cholera epidemics in France and in England starting from the 1830s, and their impact on the major development of public health at the end of the nineteenth century. In the second part, we present the mathematical modeling of stochastic Individual-Based models. Using the R package IBMPopSim, this flexible framework is then applied to simulate the long-term impact of a mortality shock, using a toy model where nonlinear population compositional changes affect future mortality rates.
[ { "created": "Tue, 16 Nov 2021 10:27:16 GMT", "version": "v1" } ]
2021-11-17
[ [ "Karoui", "Nicole El", "" ], [ "Hadji", "Kaouther", "" ], [ "Kaakai", "Sarah", "" ] ]
The aim of this paper is to study the long-term consequences of a mortality shock on longevity. We adopt an historical and modeling approach to study how the population evolution following a mortality shock such as the COVID-19 pandemic could impact future mortality rates. In the first part of the paper, we study the several cholera epidemics in France and in England starting from the 1830s, and their impact on the major development of public health at the end of the nineteenth century. In the second part, we present the mathematical modeling of stochastic Individual-Based models. Using the R package IBMPopSim, this flexible framework is then applied to simulate the long-term impact of a mortality shock, using a toy model where nonlinear population compositional changes affect future mortality rates.
2301.10865
Xiaoqi Wei
Xiaoqi Wei, Jiahui Chen, and Guo-Wei Wei
Persistent topological Laplacian analysis of SARS-CoV-2 variants
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Topological data analysis (TDA) is an emerging field in mathematics and data science. Its central technique, persistent homology, has had tremendous success in many science and engineering disciplines. However, persistent homology has limitations, including its incapability of describing the homotopic shape evolution of data during filtration. Persistent topological Laplacians (PTLs), such as persistent Laplacian and persistent sheaf Laplacian, were proposed to overcome the drawback of persistent homology. In this work, we examine the modeling and analysis power of PTLs in the study of the protein structures of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) spike receptor binding domain (RBD) and its variants, i.e., Alpha, Beta, Gamma, BA.1, and BA.2. First, we employ PTLs to study how the RBD mutation-induced structural changes of RBD-angiotensin-converting enzyme 2 (ACE2) binding complexes are captured in the changes of spectra of the PTLs among SARS-CoV-2 variants. Additionally, we use PTLs to analyze the binding of RBD and ACE2-induced structural changes of various SARS-CoV-2 variants. Finally, we explore the impacts of computationally generated RBD structures on PTL-based machine learning, including deep learning, and predictions of deep mutational scanning datasets for the SARS-CoV-2 Omicron BA.2 variant. Our results indicate that PTLs have advantages over persistent homology in analyzing protein structural changes and provide a powerful new TDA tool for data science.
[ { "created": "Wed, 25 Jan 2023 23:15:43 GMT", "version": "v1" }, { "created": "Thu, 6 Apr 2023 02:51:23 GMT", "version": "v2" } ]
2023-04-07
[ [ "Wei", "Xiaoqi", "" ], [ "Chen", "Jiahui", "" ], [ "Wei", "Guo-Wei", "" ] ]
Topological data analysis (TDA) is an emerging field in mathematics and data science. Its central technique, persistent homology, has had tremendous success in many science and engineering disciplines. However, persistent homology has limitations, including its incapability of describing the homotopic shape evolution of data during filtration. Persistent topological Laplacians (PTLs), such as persistent Laplacian and persistent sheaf Laplacian, were proposed to overcome the drawback of persistent homology. In this work, we examine the modeling and analysis power of PTLs in the study of the protein structures of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) spike receptor binding domain (RBD) and its variants, i.e., Alpha, Beta, Gamma, BA.1, and BA.2. First, we employ PTLs to study how the RBD mutation-induced structural changes of RBD-angiotensin-converting enzyme 2 (ACE2) binding complexes are captured in the changes of spectra of the PTLs among SARS-CoV-2 variants. Additionally, we use PTLs to analyze the binding of RBD and ACE2-induced structural changes of various SARS-CoV-2 variants. Finally, we explore the impacts of computationally generated RBD structures on PTL-based machine learning, including deep learning, and predictions of deep mutational scanning datasets for the SARS-CoV-2 Omicron BA.2 variant. Our results indicate that PTLs have advantages over persistent homology in analyzing protein structural changes and provide a powerful new TDA tool for data science.
1210.7205
Agnes Noy
Agnes Noy and Ramin Golestanian
Length scale dependence of DNA mechanical properties
5 pages, 4 figures, accepted for publication in Phys. Rev. Lett
null
10.1103/PhysRevLett.109.228101
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although mechanical properties of DNA are well characterized at the kilo base-pair range, a number of recent experiments have suggested that DNA is more flexible at shorter length scales, which correspond to the regime that is crucial for cellular processes such as DNA packaging and gene regulation. Here, we perform a systematic study of the effective elastic properties of DNA at different length scales by probing the conformation and fluctuations of DNA from single base-pair level up to four helical turns, using trajectories from atomistic simulation. We find evidence that supports cooperative softening of the stretch modulus and identify the essential modes that give rise to this effect. The bend correlation exhibits modulations that reflect the helical periodicity, while it yields a reasonable value for the effective persistence length, and the twist modulus undergoes a smooth crossover---from a relatively smaller value at the single base-pair level to the bulk value---over half a DNA-turn.
[ { "created": "Fri, 26 Oct 2012 17:49:43 GMT", "version": "v1" } ]
2015-06-11
[ [ "Noy", "Agnes", "" ], [ "Golestanian", "Ramin", "" ] ]
Although mechanical properties of DNA are well characterized at the kilo base-pair range, a number of recent experiments have suggested that DNA is more flexible at shorter length scales, which correspond to the regime that is crucial for cellular processes such as DNA packaging and gene regulation. Here, we perform a systematic study of the effective elastic properties of DNA at different length scales by probing the conformation and fluctuations of DNA from single base-pair level up to four helical turns, using trajectories from atomistic simulation. We find evidence that supports cooperative softening of the stretch modulus and identify the essential modes that give rise to this effect. The bend correlation exhibits modulations that reflect the helical periodicity, while it yields a reasonable value for the effective persistence length, and the twist modulus undergoes a smooth crossover---from a relatively smaller value at the single base-pair level to the bulk value---over half a DNA-turn.
2004.05778
David Saakian
David B. Saakian
A simple statistical physics model for the epidemic with incubation period
5 pages, 1 figure
null
10.1016/j.cjph.2021.07.007
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Based on the classical SIR model, we derive a simple modification for the dynamics of epidemics with a known incubation period of infection. The model is described by a system of integro-differential equations. Parameters of our model are directly related to epidemiological data. We derive some analytical results, as well as perform numerical simulations. We use the proposed model to analyze COVID-19 epidemic data in Armenia. We propose a strategy: organize a quarantine, and then conduct extensive testing of risk groups during the quarantine, evaluating the percentage of the population among risk groups and people with symptoms.
[ { "created": "Mon, 13 Apr 2020 05:56:37 GMT", "version": "v1" } ]
2021-09-01
[ [ "Saakian", "David B.", "" ] ]
Based on the classical SIR model, we derive a simple modification for the dynamics of epidemics with a known incubation period of infection. The model is described by a system of integro-differential equations. Parameters of our model are directly related to epidemiological data. We derive some analytical results, as well as perform numerical simulations. We use the proposed model to analyze COVID-19 epidemic data in Armenia. We propose a strategy: organize a quarantine, and then conduct extensive testing of risk groups during the quarantine, evaluating the percentage of the population among risk groups and people with symptoms.
2004.03878
Chandrika Prakash Vyasarayani
C. P. Vyasarayani and Anindya Chatterjee
New approximations, and policy implications, from a delayed dynamic model of a fast pandemic
null
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study an SEIQR (Susceptible-Exposed-Infectious-Quarantined-Recovered) model for an infectious disease, with time delays for latency and an asymptomatic phase. For fast pandemics where nobody has prior immunity and everyone has immunity after recovery, the SEIQR model decouples into two nonlinear delay differential equations (DDEs) with five parameters. One parameter is set to unity by scaling time. The subcase of perfect quarantining and zero self-recovery before quarantine, with two free parameters, is examined first. The method of multiple scales yields a hyperbolic tangent solution; and a long-wave approximation yields a first order ordinary differential equation (ODE). With imperfect quarantining and nonzero self-recovery, the long-wave approximation is a second order ODE. These three approximations each capture the full outbreak, from infinitesimal initiation to final saturation. Low-dimensional dynamics in the DDEs is demonstrated using a six state non-delayed reduced order model obtained by Galerkin projection. Numerical solutions from the reduced order model match the DDE over a range of parameter choices and initial conditions. Finally, stability analysis and numerics show how correctly executed time-varying social distancing, within the present model, can cut the number of affected people by almost half. Alternatively, faster detection followed by near-certain quarantining can potentially be even more effective.
[ { "created": "Wed, 8 Apr 2020 08:36:29 GMT", "version": "v1" } ]
2020-04-09
[ [ "Vyasarayani", "C. P.", "" ], [ "Chatterjee", "Anindya", "" ] ]
We study an SEIQR (Susceptible-Exposed-Infectious-Quarantined-Recovered) model for an infectious disease, with time delays for latency and an asymptomatic phase. For fast pandemics where nobody has prior immunity and everyone has immunity after recovery, the SEIQR model decouples into two nonlinear delay differential equations (DDEs) with five parameters. One parameter is set to unity by scaling time. The subcase of perfect quarantining and zero self-recovery before quarantine, with two free parameters, is examined first. The method of multiple scales yields a hyperbolic tangent solution; and a long-wave approximation yields a first order ordinary differential equation (ODE). With imperfect quarantining and nonzero self-recovery, the long-wave approximation is a second order ODE. These three approximations each capture the full outbreak, from infinitesimal initiation to final saturation. Low-dimensional dynamics in the DDEs is demonstrated using a six state non-delayed reduced order model obtained by Galerkin projection. Numerical solutions from the reduced order model match the DDE over a range of parameter choices and initial conditions. Finally, stability analysis and numerics show how correctly executed time-varying social distancing, within the present model, can cut the number of affected people by almost half. Alternatively, faster detection followed by near-certain quarantining can potentially be even more effective.
2110.01505
Yasuhisa Saito
Hiromu Gion, Yasuhisa Saito
Backward bifurcation of a disease-severity-structured epidemic model with treatment
13 pages, 3 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a disease-severity-structured epidemic model with treatment necessary only for severe infective individuals to discuss the effect of the treatment capacity on disease transmission. It is shown that a backward bifurcation occurs in the basic reproduction number R0, where a stable endemic equilibrium co-exists with a stable disease-free equilibrium when R0 < 1, if the capacity is relatively small. The epidemiological implication is that, when there is not enough capacity for treatment, the requirement R0 < 1 is not sufficient for effective disease control, and a disease outbreak can reach a high endemic level even though R0 < 1.
[ { "created": "Mon, 4 Oct 2021 15:21:31 GMT", "version": "v1" } ]
2021-10-05
[ [ "Gion", "Hiromu", "" ], [ "Saito", "Yasuhisa", "" ] ]
This paper presents a disease-severity-structured epidemic model with treatment necessary only for severe infective individuals to discuss the effect of the treatment capacity on disease transmission. It is shown that a backward bifurcation occurs in the basic reproduction number R0, where a stable endemic equilibrium co-exists with a stable disease-free equilibrium when R0 < 1, if the capacity is relatively small. The epidemiological implication is that, when there is not enough capacity for treatment, the requirement R0 < 1 is not sufficient for effective disease control, and a disease outbreak can reach a high endemic level even though R0 < 1.
2110.01345
Ashishi Puri
Ashishi Puri and Sanjeev Kumar
A new iterative algorithm for generating gradient directions to detect white matter fibers in brain from MRI data
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes an iterative algorithm for choosing the gradient directions used to reconstruct white matter fibers in the brain. The present study does not focus on the data-acquisition stage, where scanning is performed. The Adaptive Gradient Directions (AGD) approach is extended to refine the position and area of the grid, resulting in an admissible reduction in angular error. We begin with the gradient directions distributed uniformly inside a grid of larger size and with larger spacing between the points. Both the grid size and the spacing between the points are reduced iteratively. The proposed algorithm ensures that the actual position of the fiber remains inside the grid at each iteration, unlike the AGD approach. As a result, the solution tends to the actual orientation at each iteration, followed by a better estimation of the fibers. The proposed algorithm is validated by associating it with the mixture of Gaussian diffusion and the mixture of non-central Wishart distribution models. The proposed approach significantly reduces the angular error in multiple computer-generated experiments on synthetic simulations and real data. Moreover, we have also performed simulations with fibers not residing in the XY-plane. For this set-up as well, the proposed work outperforms, giving a smaller angular error with both models. Synthetic simulations have been performed with Rician-distributed (R-D) noise of standard deviation ranging from 0.02-0.1. This work helps in a better understanding of the anatomy of the brain using MRI signal data.
[ { "created": "Mon, 4 Oct 2021 11:47:53 GMT", "version": "v1" } ]
2021-10-05
[ [ "Puri", "Ashishi", "" ], [ "Kumar", "Sanjeev", "" ] ]
This paper proposes an iterative algorithm for choosing the gradient directions used to reconstruct white matter fibers in the brain. The present study does not focus on the data-acquisition stage, where scanning is performed. The Adaptive Gradient Directions (AGD) approach is extended to refine the position and area of the grid, resulting in an admissible reduction in angular error. We begin with the gradient directions distributed uniformly inside a grid of larger size and with larger spacing between the points. Both the grid size and the spacing between the points are reduced iteratively. The proposed algorithm ensures that the actual position of the fiber remains inside the grid at each iteration, unlike the AGD approach. As a result, the solution tends to the actual orientation at each iteration, followed by a better estimation of the fibers. The proposed algorithm is validated by associating it with the mixture of Gaussian diffusion and the mixture of non-central Wishart distribution models. The proposed approach significantly reduces the angular error in multiple computer-generated experiments on synthetic simulations and real data. Moreover, we have also performed simulations with fibers not residing in the XY-plane. For this set-up as well, the proposed work outperforms, giving a smaller angular error with both models. Synthetic simulations have been performed with Rician-distributed (R-D) noise of standard deviation ranging from 0.02-0.1. This work helps in a better understanding of the anatomy of the brain using MRI signal data.
0801.1392
Masatoshi Nishikawa
Masatoshi Nishikawa, Hiroaki Takagi, Atsuko H. Iwane, Toshio Yanagida
Fluctuation analysis of mechanochemical coupling depending on the type of bio-molecular motor
null
null
10.1103/PhysRevLett.101.128103
null
q-bio.SC
null
Mechanochemical coupling was studied for two different types of myosin motors in cells: myosin V, which carries cargo over long distances as a single molecule; and myosin II, which generates a contracting force in cooperation with other myosin II molecules. Both mean and variance of myosin V velocity at various [ATP] obeyed Michaelis-Menten mechanics, consistent with tight mechanochemical coupling. Myosin II, working in an ensemble, however, was explained by a loose coupling mechanism, generating variable step sizes depending on the ATP concentration and realizing a much larger step (200 nm) per ATP hydrolysis than myosin V through its cooperative nature at zero load. These different mechanics are ideal for the respective myosin's physiological functions.
[ { "created": "Wed, 9 Jan 2008 11:00:37 GMT", "version": "v1" } ]
2009-11-13
[ [ "Nishikawa", "Masatoshi", "" ], [ "Takagi", "Hiroaki", "" ], [ "Iwane", "Atsuko H.", "" ], [ "Yanagida", "Toshio", "" ] ]
Mechanochemical coupling was studied for two different types of myosin motors in cells: myosin V, which carries cargo over long distances as a single molecule; and myosin II, which generates a contracting force in cooperation with other myosin II molecules. Both mean and variance of myosin V velocity at various [ATP] obeyed Michaelis-Menten mechanics, consistent with tight mechanochemical coupling. Myosin II, working in an ensemble, however, was explained by a loose coupling mechanism, generating variable step sizes depending on the ATP concentration and realizing a much larger step (200 nm) per ATP hydrolysis than myosin V through its cooperative nature at zero load. These different mechanics are ideal for the respective myosin's physiological functions.
1703.09078
Thibault Bourgeron
Thibault Bourgeron, Vincent Calvez, Jimmy Garnier, Thomas Lepoutre
Existence of recombination-selection equilibria for sexual populations
null
null
null
null
q-bio.PE math.FA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a birth and death model for the adaptation of a sexual population to an environment. The population is structured by a phenotypical trait and, possibly, an age variable. Recombination is modeled by Fisher's infinitesimal operator. We prove the existence of principal eigenelements for the corresponding eigenproblem. As the infinitesimal operator is 1-homogeneous but neither linear nor monotone, the general Krein-Rutman theory cannot be applied to this problem.
[ { "created": "Fri, 24 Mar 2017 13:55:06 GMT", "version": "v1" } ]
2017-03-28
[ [ "Bourgeron", "Thibault", "" ], [ "Calvez", "Vincent", "" ], [ "Garnier", "Jimmy", "" ], [ "Lepoutre", "Thomas", "" ] ]
We study a birth and death model for the adaptation of a sexual population to an environment. The population is structured by a phenotypical trait and, possibly, an age variable. Recombination is modeled by Fisher's infinitesimal operator. We prove the existence of principal eigenelements for the corresponding eigenproblem. As the infinitesimal operator is 1-homogeneous but neither linear nor monotone, the general Krein-Rutman theory cannot be applied to this problem.
1610.04959
Christos Skiadas H
Christos H Skiadas and Charilaos Skiadas
The Health Status of a Population: Health State and Survival Curves, and HALE Estimates
21 pages, 8 figures, 1 table
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we explore the very important case of finding a health measure along the lines of the survival curve but independent of the standard deviation parameter. This is done by estimating the health state curve and calculating the total area under the curve. An interesting comparison with the survival curves is made. The resulting health status measure (HSM or HS) is of the form of life expectancy (LE), as it is expressed in terms of years of age. The HS is independent of the standard deviation. When a perfect rectangularization of the survival curve appears, the LE is equal to the age where the health state curve approaches zero. The provided HS form gives rise to an interesting classification of various countries. The results are close to those obtained by the LE estimator, with an interesting reorganization of the country ranking. We also provide illustrations of our estimated Health Status along with comparative presentations of the Healthy Life Expectancy and HALE measures in several countries. Theoretical issues are provided and important stochastic simulations are done, along with reproduction of the death probability density forms.
[ { "created": "Mon, 17 Oct 2016 02:58:55 GMT", "version": "v1" } ]
2016-10-18
[ [ "Skiadas", "Christos H", "" ], [ "Skiadas", "Charilaos", "" ] ]
In this paper we explore the very important case of finding a health measure along the lines of the survival curve but independent of the standard deviation parameter. This is done by estimating the health state curve and calculating the total area under the curve. An interesting comparison with the survival curves is made. The resulting health status measure (HSM or HS) is of the form of life expectancy (LE), as it is expressed in terms of years of age. The HS is independent of the standard deviation. When a perfect rectangularization of the survival curve appears, the LE is equal to the age where the health state curve approaches zero. The provided HS form gives rise to an interesting classification of various countries. The results are close to those obtained by the LE estimator, with an interesting reorganization of the country ranking. We also provide illustrations of our estimated Health Status along with comparative presentations of the Healthy Life Expectancy and HALE measures in several countries. Theoretical issues are provided and important stochastic simulations are done, along with reproduction of the death probability density forms.