Dataset schema: each column with its type and observed range (string length in characters, list length in items, or number of distinct classes).

  id              string  length 9-13
  submitter       string  length 4-48
  authors         string  length 4-9.62k
  title           string  length 4-343
  comments        string  length 2-480
  journal-ref     string  length 9-309
  doi             string  length 12-138
  report-no       string  277 distinct values
  categories      string  length 8-87
  license         string  9 distinct values
  orig_abstract   string  length 27-3.76k
  versions        list    length 1-15
  update_date     string  length 10-10
  authors_parsed  list    length 1-147
  abstract        string  length 24-3.75k
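The string-length bounds above can be treated as per-column constraints. The sketch below encodes them as a Python dict and checks a record against them; the field names come from the table, while the "9.62k"-style bounds are read here as plain character counts (9620, 3760, 3750), which is an approximation.

```python
# String-length ranges from the schema table; "k" bounds approximated
# as raw character counts.
SCHEMA = {
    "id": (9, 13),
    "submitter": (4, 48),
    "authors": (4, 9620),
    "title": (4, 343),
    "comments": (2, 480),
    "journal-ref": (9, 309),
    "doi": (12, 138),
    "categories": (8, 87),
    "orig_abstract": (27, 3760),
    "update_date": (10, 10),
    "abstract": (24, 3750),
}

def out_of_range(record):
    """Return names of string fields whose length violates the schema range."""
    bad = []
    for field, (lo, hi) in SCHEMA.items():
        value = record.get(field)
        if isinstance(value, str) and not (lo <= len(value) <= hi):
            bad.append(field)
    return bad

sample = {
    "id": "2203.04115",
    "submitter": "Moksh Jain",
    "title": "Biological Sequence Design with GFlowNets",
    "update_date": "2023-05-25",
}
print(out_of_range(sample))  # []
```

Fields missing from a record (or non-string fields such as `versions`) are simply skipped, so the check can be run on partial records.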
id: 2203.04115
submitter: Moksh Jain
authors: Moksh Jain, Emmanuel Bengio, Alex-Hernandez Garcia, Jarrid Rector-Brooks, Bonaventure F. P. Dossou, Chanakya Ekbote, Jie Fu, Tianyu Zhang, Micheal Kilgour, Dinghuai Zhang, Lena Simine, Payel Das, Yoshua Bengio
title: Biological Sequence Design with GFlowNets
comments: ICML 2022. 15 pages, 3 figures. Code available at: https://github.com/MJ10/BioSeq-GFN-AL. Updated GFP results
journal-ref: null
doi: null
report-no: null
categories: q-bio.BM cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Design of de novo biological sequences with desired properties, like protein and DNA sequences, often involves an active loop with several rounds of molecule ideation and expensive wet-lab evaluations. These experiments can consist of multiple stages, with increasing levels of precision and cost of evaluation, where candidates are filtered. This makes the diversity of proposed candidates a key consideration in the ideation phase. In this work, we propose an active learning algorithm leveraging epistemic uncertainty estimation and the recently proposed GFlowNets as a generator of diverse candidate solutions, with the objective to obtain a diverse batch of useful (as defined by some utility function, for example, the predicted anti-microbial activity of a peptide) and informative candidates after each round. We also propose a scheme to incorporate existing labeled datasets of candidates, in addition to a reward function, to speed up learning in GFlowNets. We present empirical results on several biological sequence design tasks, and we find that our method generates more diverse and novel batches with high scoring candidates compared to existing approaches.
versions: [ { "created": "Wed, 2 Mar 2022 15:53:38 GMT", "version": "v1" }, { "created": "Mon, 24 Oct 2022 15:43:38 GMT", "version": "v2" }, { "created": "Wed, 24 May 2023 11:21:54 GMT", "version": "v3" } ]
update_date: 2023-05-25
authors_parsed: [ [ "Jain", "Moksh", "" ], [ "Bengio", "Emmanuel", "" ], [ "Garcia", "Alex-Hernandez", "" ], [ "Rector-Brooks", "Jarrid", "" ], [ "Dossou", "Bonaventure F. P.", "" ], [ "Ekbote", "Chanakya", "" ], [ "Fu", "Jie", ...
abstract: Design of de novo biological sequences with desired properties, like protein and DNA sequences, often involves an active loop with several rounds of molecule ideation and expensive wet-lab evaluations. These experiments can consist of multiple stages, with increasing levels of precision and cost of evaluation, where candidates are filtered. This makes the diversity of proposed candidates a key consideration in the ideation phase. In this work, we propose an active learning algorithm leveraging epistemic uncertainty estimation and the recently proposed GFlowNets as a generator of diverse candidate solutions, with the objective to obtain a diverse batch of useful (as defined by some utility function, for example, the predicted anti-microbial activity of a peptide) and informative candidates after each round. We also propose a scheme to incorporate existing labeled datasets of candidates, in addition to a reward function, to speed up learning in GFlowNets. We present empirical results on several biological sequence design tasks, and we find that our method generates more diverse and novel batches with high scoring candidates compared to existing approaches.
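The `versions` field is a JSON-style list of `{created, version}` entries, with `created` as an RFC 2822 timestamp. A stdlib-only sketch of extracting the latest submission from the first record's value:

```python
import json
from email.utils import parsedate_to_datetime

# `versions` value copied from the record above.
raw = ('[ { "created": "Wed, 2 Mar 2022 15:53:38 GMT", "version": "v1" }, '
       '{ "created": "Mon, 24 Oct 2022 15:43:38 GMT", "version": "v2" }, '
       '{ "created": "Wed, 24 May 2023 11:21:54 GMT", "version": "v3" } ]')
versions = json.loads(raw)

# Pick the most recent entry by its timestamp rather than by the "vN"
# string, which would sort "v10" before "v2".
latest = max(versions, key=lambda v: parsedate_to_datetime(v["created"]))
print(latest["version"])  # v3
```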
id: 1308.4706
submitter: Liane Gabora
authors: Apara Ranjan, Liane Gabora, and Brian O'Connor
title: The Cross-Domain Re-interpretation of Artistic Ideas
comments: 6 pages
journal-ref: Proceedings 35th Annual Meeting of the Cognitive Science Society (pp. 3251-3256). Austin, TX: Cognitive Science Society. (2013)
doi: null
report-no: null
categories: q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The goal of this study was to investigate the translation of creative works into other domains. We tested whether people were able to recognize which works of art were inspired by which pieces of music. Three expert painters created four paintings, each of which was the artist's interpretation of one of four different pieces of instrumental music. Participants were able to identify which paintings were inspired by which pieces of music at statistically significant above chance levels. The findings support the hypothesis that creative ideas can exist in an at least somewhat domain independent state of potentiality and become more well defined as they are actualized in accordance with the constraints of a particular domain.
versions: [ { "created": "Tue, 20 Aug 2013 06:58:32 GMT", "version": "v1" }, { "created": "Fri, 5 Jul 2019 20:43:32 GMT", "version": "v2" }, { "created": "Tue, 9 Jul 2019 19:51:48 GMT", "version": "v3" } ]
update_date: 2019-07-11
authors_parsed: [ [ "Ranjan", "Apara", "" ], [ "Gabora", "Liane", "" ], [ "O'Connor", "Brian", "" ] ]
abstract: The goal of this study was to investigate the translation of creative works into other domains. We tested whether people were able to recognize which works of art were inspired by which pieces of music. Three expert painters created four paintings, each of which was the artist's interpretation of one of four different pieces of instrumental music. Participants were able to identify which paintings were inspired by which pieces of music at statistically significant above chance levels. The findings support the hypothesis that creative ideas can exist in an at least somewhat domain independent state of potentiality and become more well defined as they are actualized in accordance with the constraints of a particular domain.
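The `authors_parsed` field stores each author as a `[last, first, suffix]` triple. A minimal sketch of rebuilding display names, using the value from the record above:

```python
# [last, first, suffix] triples copied from the record above.
authors_parsed = [["Ranjan", "Apara", ""],
                  ["Gabora", "Liane", ""],
                  ["O'Connor", "Brian", ""]]

def display_name(entry):
    last, first, suffix = entry
    # Reassemble "First Last Suffix", skipping empty parts.
    return " ".join(part for part in (first, last, suffix) if part)

print(", ".join(display_name(a) for a in authors_parsed))
# Apara Ranjan, Liane Gabora, Brian O'Connor
```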
id: 1808.00301
submitter: Romain M. Yvinec
authors: Romain Yvinec, Pascale Cr\'epieux, Eric Reiter, Anne Poupon, Fr\'ed\'erique Cl\'ement
title: Advances in computational modeling approaches in pituitary gonadotropin signaling
comments: in press
journal-ref: Expert Opinion on Drug Discovery, 1-15. (2018)
doi: 10.1080/17460441.2018.1501025
report-no: null
categories: q-bio.MN q-bio.CB
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Pituitary gonadotropins play an essential and pivotal role in the control of human and animal reproduction within the hypothalamic-pituitary-gonadal (HPG) axis. The computational modeling of pituitary gonadotropin signaling encompasses phenomena of different natures such as the dynamic encoding of gonadotropin secretion, and the intracellular cascades triggered by gonadotropin binding to their cognate receptors, resulting in a variety of biological outcomes. We overview historical and ongoing issues in modeling and data analysis related to gonadotropin secretion in the field of both physiology and neuro-endocrinology. We mention the different mathematical formalisms involved, their interest and limits. We discuss open statistical questions in signal analysis associated with key endocrine issues. We also review recent advances in the modeling of the intracellular pathways activated by gonadotropins, which yields promising development for innovative approaches in drug discovery. The greatest challenge to be tackled in computational modeling of pituitary gonadotropin signaling is the embedding of gonadotropin signaling within its natural multi-scale environment, from the single cell level, to the organic and whole HPG level. The development of modeling approaches of G protein-coupled receptor signaling, together with multicellular systems biology may lead to unexampled mechanistic understanding with critical expected fallouts in the therapeutic management of reproduction.
versions: [ { "created": "Wed, 1 Aug 2018 12:48:59 GMT", "version": "v1" } ]
update_date: 2018-08-02
authors_parsed: [ [ "Yvinec", "Romain", "" ], [ "Crépieux", "Pascale", "" ], [ "Reiter", "Eric", "" ], [ "Poupon", "Anne", "" ], [ "Clément", "Frédérique", "" ] ]
abstract: Pituitary gonadotropins play an essential and pivotal role in the control of human and animal reproduction within the hypothalamic-pituitary-gonadal (HPG) axis. The computational modeling of pituitary gonadotropin signaling encompasses phenomena of different natures such as the dynamic encoding of gonadotropin secretion, and the intracellular cascades triggered by gonadotropin binding to their cognate receptors, resulting in a variety of biological outcomes. We overview historical and ongoing issues in modeling and data analysis related to gonadotropin secretion in the field of both physiology and neuro-endocrinology. We mention the different mathematical formalisms involved, their interest and limits. We discuss open statistical questions in signal analysis associated with key endocrine issues. We also review recent advances in the modeling of the intracellular pathways activated by gonadotropins, which yields promising development for innovative approaches in drug discovery. The greatest challenge to be tackled in computational modeling of pituitary gonadotropin signaling is the embedding of gonadotropin signaling within its natural multi-scale environment, from the single cell level, to the organic and whole HPG level. The development of modeling approaches of G protein-coupled receptor signaling, together with multicellular systems biology may lead to unexampled mechanistic understanding with critical expected fallouts in the therapeutic management of reproduction.
id: 1909.10580
submitter: Dimitris Vavoulis
authors: DV Vavoulis, AT Pagnamenta, SJL Knight, MM Pentony, M Armstrong, EC Galizia, S Balestrini, SM Sisodiya, JC Taylor
title: Whole genome sequencing identifies putative associations between genomic polymorphisms and clinical response to the antiepileptic drug levetiracetam
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.GN
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: In the context of pharmacogenomics, whole genome sequencing provides a powerful approach for identifying correlations between response variability to specific drugs and genomic polymorphisms in a population, in an unbiased manner. In this study, we employed whole genome sequencing of DNA samples from patients showing extreme response (n=72) and non-response (n=27) to the antiepileptic drug levetiracetam, in order to identify genomic variants that underlie response to the drug. Although no common SNP (MAF>5%) crossed the conventional genome-wide significance threshold of 5e-8, we found common polymorphisms in genes SPNS3, HDC, MDGA2, NSG1 and RASGEF1C, which collectively predict clinical response to levetiracetam in our cohort with ~91% predictive accuracy. Among these genes, HDC, NSG1, MDGA2 and RASGEF1C are potentially implicated in synaptic neurotransmission, while SPNS3 is an atypical solute carrier transporter homologous to SV2A, the known molecular target of levetiracetam. Furthermore, we performed gene- and pathway-based statistical analysis on sets of rare and low-frequency variants (MAF<5%) and we identified associations between the following genes or pathways and response to levetiracetam: a) genes PRKCB and DLG2, which are involved in glutamatergic neurotransmission, a known target of anticonvulsants, including levetiracetam; b) genes FILIP1 and SEMA6D, which are involved in axon guidance and modelling of neural connections; and c) pathways with a role in synaptic neurotransmission, such as WNT5A-dependent internalization of FZD4 and disinhibition of SNARE formation. In summary, our approach to utilise whole genome sequencing on subjects with extreme response phenotypes is a feasible route to generate plausible hypotheses for investigating the genetic factors underlying drug response variability in cases of pharmaco-resistant epilepsy.
versions: [ { "created": "Mon, 23 Sep 2019 19:18:25 GMT", "version": "v1" } ]
update_date: 2019-09-25
authors_parsed: [ [ "Vavoulis", "DV", "" ], [ "Pagnamenta", "AT", "" ], [ "Knight", "SJL", "" ], [ "Pentony", "MM", "" ], [ "Armstrong", "M", "" ], [ "Galizia", "EC", "" ], [ "Balestrini", "S", "" ], [ "Sisodiya", ...
abstract: In the context of pharmacogenomics, whole genome sequencing provides a powerful approach for identifying correlations between response variability to specific drugs and genomic polymorphisms in a population, in an unbiased manner. In this study, we employed whole genome sequencing of DNA samples from patients showing extreme response (n=72) and non-response (n=27) to the antiepileptic drug levetiracetam, in order to identify genomic variants that underlie response to the drug. Although no common SNP (MAF>5%) crossed the conventional genome-wide significance threshold of 5e-8, we found common polymorphisms in genes SPNS3, HDC, MDGA2, NSG1 and RASGEF1C, which collectively predict clinical response to levetiracetam in our cohort with ~91% predictive accuracy. Among these genes, HDC, NSG1, MDGA2 and RASGEF1C are potentially implicated in synaptic neurotransmission, while SPNS3 is an atypical solute carrier transporter homologous to SV2A, the known molecular target of levetiracetam. Furthermore, we performed gene- and pathway-based statistical analysis on sets of rare and low-frequency variants (MAF<5%) and we identified associations between the following genes or pathways and response to levetiracetam: a) genes PRKCB and DLG2, which are involved in glutamatergic neurotransmission, a known target of anticonvulsants, including levetiracetam; b) genes FILIP1 and SEMA6D, which are involved in axon guidance and modelling of neural connections; and c) pathways with a role in synaptic neurotransmission, such as WNT5A-dependent internalization of FZD4 and disinhibition of SNARE formation. In summary, our approach to utilise whole genome sequencing on subjects with extreme response phenotypes is a feasible route to generate plausible hypotheses for investigating the genetic factors underlying drug response variability in cases of pharmaco-resistant epilepsy.
id: q-bio/0312022
submitter: M. J. Gagen
authors: M. J. Gagen and J. S. Mattick
title: Failed "nonaccelerating" models of prokaryote gene regulatory networks
comments: Corrected error in biological input parameter: 13 pages, 9 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.MN cond-mat.stat-mech q-bio.GN
license: null
orig_abstract: Much current network analysis is predicated on the assumption that important biological networks will either possess scale free or exponential statistics which are independent of network size allowing unconstrained network growth over time. In this paper, we demonstrate that such network growth models are unable to explain recent comparative genomics results on the growth of prokaryote regulatory gene networks as a function of gene number. This failure largely results as prokaryote regulatory gene networks are "accelerating" and have total link numbers growing faster than linearly with network size and so can exhibit transitions from stationary to nonstationary statistics and from random to scale-free to regular statistics at particular critical network sizes. In the limit, these networks can undergo transitions so marked as to constrain network sizes to be below some critical value. This is of interest as the regulatory gene networks of single celled prokaryotes are indeed characterized by an accelerating quadratic growth with gene count and are size constrained to be less than about 10,000 genes encoded in DNA sequence of less than about 10 megabases. We develop two "nonaccelerating" network models of prokaryote regulatory gene networks in an endeavor to match observation and demonstrate that these approaches fail to reproduce observed statistics.
versions: [ { "created": "Tue, 16 Dec 2003 06:24:01 GMT", "version": "v1" }, { "created": "Sun, 4 Jan 2004 00:22:49 GMT", "version": "v2" } ]
update_date: 2007-05-23
authors_parsed: [ [ "Gagen", "M. J.", "" ], [ "Mattick", "J. S.", "" ] ]
abstract: Much current network analysis is predicated on the assumption that important biological networks will either possess scale free or exponential statistics which are independent of network size allowing unconstrained network growth over time. In this paper, we demonstrate that such network growth models are unable to explain recent comparative genomics results on the growth of prokaryote regulatory gene networks as a function of gene number. This failure largely results as prokaryote regulatory gene networks are "accelerating" and have total link numbers growing faster than linearly with network size and so can exhibit transitions from stationary to nonstationary statistics and from random to scale-free to regular statistics at particular critical network sizes. In the limit, these networks can undergo transitions so marked as to constrain network sizes to be below some critical value. This is of interest as the regulatory gene networks of single celled prokaryotes are indeed characterized by an accelerating quadratic growth with gene count and are size constrained to be less than about 10,000 genes encoded in DNA sequence of less than about 10 megabases. We develop two "nonaccelerating" network models of prokaryote regulatory gene networks in an endeavor to match observation and demonstrate that these approaches fail to reproduce observed statistics.
id: 2006.02632
submitter: Andr\'es Navas
authors: Andr\'es Navas, Gast\'on Vergara-Hermosilla
title: On the dynamics of the Coronavirus epidemic and the unreported cases: the Chilean case
comments: 22 pages
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE math.DS physics.soc-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We analyze the dynamics of the COVID-19 epidemic taking into account the role of the unreported cases. After a first section in which we deal with a framework of very slow test capacity, we turn to the model recently introduced/implemented by Liu, Magal, Seydi and Webb. First, we prove some basic structural results for the corresponding ODE, as for instance the convergence of S(t) to a positive limit. These are similar to those of the classical SIR model, although the maxima of the corresponding curves are not necessarily unique. Finally, we implement the model -- but with a variable transmission rate -- in the Chilean context. A key parameter adjustment (namely, the fraction of unreported cases) is done via an argument using mortality rates. We conclude with several conclusions and lines of future research.
versions: [ { "created": "Thu, 4 Jun 2020 03:51:49 GMT", "version": "v1" } ]
update_date: 2020-06-05
authors_parsed: [ [ "Navas", "Andrés", "" ], [ "Vergara-Hermosilla", "Gastón", "" ] ]
abstract: We analyze the dynamics of the COVID-19 epidemic taking into account the role of the unreported cases. After a first section in which we deal with a framework of very slow test capacity, we turn to the model recently introduced/implemented by Liu, Magal, Seydi and Webb. First, we prove some basic structural results for the corresponding ODE, as for instance the convergence of S(t) to a positive limit. These are similar to those of the classical SIR model, although the maxima of the corresponding curves are not necessarily unique. Finally, we implement the model -- but with a variable transmission rate -- in the Chilean context. A key parameter adjustment (namely, the fraction of unreported cases) is done via an argument using mortality rates. We conclude with several conclusions and lines of future research.
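The structural property this abstract mentions, convergence of S(t) to a positive limit, already appears in the classical SIR model it compares against. The sketch below is a generic forward-Euler SIR integration with arbitrary demo parameters, not the Liu-Magal-Seydi-Webb model with unreported cases used in the paper:

```python
# Classical SIR model, forward-Euler integration; beta and gamma are
# arbitrary illustrative values, not fitted to any data.
def sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, dt=0.1, steps=2000):
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        ds = -beta * s * i             # new infections leave S
        di = beta * s * i - gamma * i  # infections minus recoveries
        dr = gamma * i                 # recoveries enter R
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r

s_inf, i_inf, r_inf = sir()
# S(t) settles at a strictly positive level while I(t) dies out.
print(s_inf > 0.0, i_inf < 1e-3)  # True True
```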
id: 2401.11481
submitter: Rupchand Sutradhar
authors: Rupchand Sutradhar, D C Dalal
title: Understanding Hepatitis B Virus Infection through Hepatocyte Proliferation and Capsid Recycling
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE math.DS
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Proliferation of uninfected as well as infected hepatocytes and recycling of DNA-containing capsids are two major mechanisms playing significant roles in the clearance of hepatitis B virus (HBV) infection. In this study, the temporal dynamics of this infection are investigated through two in silico bio-mathematical models considering both proliferation of hepatocytes and the recycling of capsids. Both models are formulated on the basis of a key finding in the existing literature: mitosis of infected yields in two uninfected progenies. In the first model, we examine regular proliferation (occurs continuously), while the second model deals with the irregular proliferation (happens when the total number of liver cells decreases to less than 70% of its initial volume). The models are calibrated with the experimental data obtained from an adult chimpanzee. Results of this study suggest that when both hepatocytes proliferate with equal rate, proliferation aids the individual in a rapid recovery from the acute infection whereas in the case of chronic infection, the severity of the infection increases if the proliferation occurs frequently. On the other hand, if the infected cells proliferate at a slower rate than uninfected cells, the proliferation of uninfected hepatocytes contributes to increase the infection, but the proliferation of infected hepatocytes acts to reduce the infection from the long-term perspective. Furthermore, it is also observed that the differences between the outcomes of regular and irregular proliferations are substantial and noteworthy.
versions: [ { "created": "Sun, 21 Jan 2024 13:07:53 GMT", "version": "v1" } ]
update_date: 2024-01-23
authors_parsed: [ [ "Sutradhar", "Rupchand", "" ], [ "Dalal", "D C", "" ] ]
abstract: Proliferation of uninfected as well as infected hepatocytes and recycling of DNA-containing capsids are two major mechanisms playing significant roles in the clearance of hepatitis B virus (HBV) infection. In this study, the temporal dynamics of this infection are investigated through two in silico bio-mathematical models considering both proliferation of hepatocytes and the recycling of capsids. Both models are formulated on the basis of a key finding in the existing literature: mitosis of infected yields in two uninfected progenies. In the first model, we examine regular proliferation (occurs continuously), while the second model deals with the irregular proliferation (happens when the total number of liver cells decreases to less than 70% of its initial volume). The models are calibrated with the experimental data obtained from an adult chimpanzee. Results of this study suggest that when both hepatocytes proliferate with equal rate, proliferation aids the individual in a rapid recovery from the acute infection whereas in the case of chronic infection, the severity of the infection increases if the proliferation occurs frequently. On the other hand, if the infected cells proliferate at a slower rate than uninfected cells, the proliferation of uninfected hepatocytes contributes to increase the infection, but the proliferation of infected hepatocytes acts to reduce the infection from the long-term perspective. Furthermore, it is also observed that the differences between the outcomes of regular and irregular proliferations are substantial and noteworthy.
id: 2111.08001
submitter: Sathyanarayanan Aakur
authors: Sathyanarayanan N. Aakur, Vineela Indla, Vennela Indla, Sai Narayanan, Arunkumar Bagavathi, Vishalini Laguduva Ramnath, Akhilesh Ramachandran
title: Metagenome2Vec: Building Contextualized Representations for Scalable Metagenome Analysis
comments: To appear in DMBIH Workshop at ICDM 2021
journal-ref: null
doi: null
report-no: null
categories: q-bio.GN cs.AI cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Advances in next-generation metagenome sequencing have the potential to revolutionize the point-of-care diagnosis of novel pathogen infections, which could help prevent potential widespread transmission of diseases. Given the high volume of metagenome sequences, there is a need for scalable frameworks to analyze and segment metagenome sequences from clinical samples, which can be highly imbalanced. There is an increased need for learning robust representations from metagenome reads since pathogens within a family can have highly similar genome structures (some more than 90%) and hence enable the segmentation and identification of novel pathogen sequences with limited labeled data. In this work, we propose Metagenome2Vec - a contextualized representation that captures the global structural properties inherent in metagenome data and local contextualized properties through self-supervised representation learning. We show that the learned representations can help detect six (6) related pathogens from clinical samples with less than 100 labeled sequences. Extensive experiments on simulated and clinical metagenome data show that the proposed representation encodes compositional properties that can generalize beyond annotations to segment novel pathogens in an unsupervised setting.
versions: [ { "created": "Tue, 9 Nov 2021 23:21:10 GMT", "version": "v1" } ]
update_date: 2021-11-17
authors_parsed: [ [ "Aakur", "Sathyanarayanan N.", "" ], [ "Indla", "Vineela", "" ], [ "Indla", "Vennela", "" ], [ "Narayanan", "Sai", "" ], [ "Bagavathi", "Arunkumar", "" ], [ "Ramnath", "Vishalini Laguduva", "" ], [ "Ramachandran", ...
abstract: Advances in next-generation metagenome sequencing have the potential to revolutionize the point-of-care diagnosis of novel pathogen infections, which could help prevent potential widespread transmission of diseases. Given the high volume of metagenome sequences, there is a need for scalable frameworks to analyze and segment metagenome sequences from clinical samples, which can be highly imbalanced. There is an increased need for learning robust representations from metagenome reads since pathogens within a family can have highly similar genome structures (some more than 90%) and hence enable the segmentation and identification of novel pathogen sequences with limited labeled data. In this work, we propose Metagenome2Vec - a contextualized representation that captures the global structural properties inherent in metagenome data and local contextualized properties through self-supervised representation learning. We show that the learned representations can help detect six (6) related pathogens from clinical samples with less than 100 labeled sequences. Extensive experiments on simulated and clinical metagenome data show that the proposed representation encodes compositional properties that can generalize beyond annotations to segment novel pathogens in an unsupervised setting.
id: 1609.08083
submitter: Kieran Fox
authors: Kieran C.R. Fox, Nicholas S. Fitz, Peter B. Reiner
title: The multiplicity of memory enhancement: Practical and ethical implications of the diverse neural substrates underlying human memory systems
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The neural basis of human memory is incredibly complex. We argue that the diversity of neural systems underlying various forms of memory suggests that any discussion of enhancing 'memory' per se is too broad, thus obfuscating the biopolitical debate about human enhancement. Memory can be differentiated into at least four major (and several minor) systems with largely dissociable (i.e., non-overlapping) neural substrates. We outline each system, and discuss both the practical and the ethical implications of these diverse neural substrates. In practice, distinct neural bases imply the possibility, and likely the necessity, of specific approaches for the safe and effective enhancement of various memory systems. In the debate over the ethical and social implications of enhancement technologies, this fine-grained perspective clarifies - and may partially mitigate - certain common concerns in enhancement debates, including issues related to safety, fairness, coercion, and authenticity. While many researchers certainly appreciate the neurobiological complexity of memory, the political debate tends to revolve around a monolithic one-size-fits-all conception. The overall project - exploring how human enhancement technologies affect society - stands to benefit from a deeper appreciation of memory's neurobiological diversity.
versions: [ { "created": "Mon, 26 Sep 2016 17:17:04 GMT", "version": "v1" } ]
update_date: 2016-09-27
authors_parsed: [ [ "Fox", "Kieran C. R.", "" ], [ "Fitz", "Nicholas S.", "" ], [ "Reiner", "Peter B.", "" ] ]
abstract: The neural basis of human memory is incredibly complex. We argue that the diversity of neural systems underlying various forms of memory suggests that any discussion of enhancing 'memory' per se is too broad, thus obfuscating the biopolitical debate about human enhancement. Memory can be differentiated into at least four major (and several minor) systems with largely dissociable (i.e., non-overlapping) neural substrates. We outline each system, and discuss both the practical and the ethical implications of these diverse neural substrates. In practice, distinct neural bases imply the possibility, and likely the necessity, of specific approaches for the safe and effective enhancement of various memory systems. In the debate over the ethical and social implications of enhancement technologies, this fine-grained perspective clarifies - and may partially mitigate - certain common concerns in enhancement debates, including issues related to safety, fairness, coercion, and authenticity. While many researchers certainly appreciate the neurobiological complexity of memory, the political debate tends to revolve around a monolithic one-size-fits-all conception. The overall project - exploring how human enhancement technologies affect society - stands to benefit from a deeper appreciation of memory's neurobiological diversity.
id: 1808.09546
submitter: Joshua M. Deutsch
authors: Stephen E Martin, Matthew E Brunner, and Joshua M Deutsch
title: Spontaneous Circulation of Active Microtubules Confined by Optical Traps
comments: 9 pages, 6 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.BM cond-mat.soft physics.bio-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We propose an experiment to demonstrate spontaneous ordering and symmetry breaking of kinesin-driven microtubules confined to an optical trap. Calculations involving the feasibility of such an experiment are first performed which analyze the power needed to confine microtubules and address heating concerns. We then present the results of first-principles simulations of active microtubules confined in such a trap and analyze the types of motion observed by the microtubules as well as the velocity of the surrounding fluid, both near the trap and in the far-field. We find three distinct phases characterized by breaking of distinct symmetries and also analyze the power spectrum of the angular momenta of polymers to further quantify the differences between these phases. Under the correct conditions, microtubules were found to spontaneously align with one another and circle the trap in one direction.
versions: [ { "created": "Tue, 28 Aug 2018 21:11:11 GMT", "version": "v1" } ]
update_date: 2018-08-30
authors_parsed: [ [ "Martin", "Stephen E", "" ], [ "Brunner", "Matthew E", "" ], [ "Deutsch", "Joshua M", "" ] ]
abstract: We propose an experiment to demonstrate spontaneous ordering and symmetry breaking of kinesin-driven microtubules confined to an optical trap. Calculations involving the feasibility of such an experiment are first performed which analyze the power needed to confine microtubules and address heating concerns. We then present the results of first-principles simulations of active microtubules confined in such a trap and analyze the types of motion observed by the microtubules as well as the velocity of the surrounding fluid, both near the trap and in the far-field. We find three distinct phases characterized by breaking of distinct symmetries and also analyze the power spectrum of the angular momenta of polymers to further quantify the differences between these phases. Under the correct conditions, microtubules were found to spontaneously align with one another and circle the trap in one direction.
id: 2010.10107
submitter: Jason Swedlow
authors: Jason R. Swedlow, Pasi Kankaanp\"a\"a, Ugis Sarkans, Wojtek Goscinski, Graham Galloway, Ryan P. Sullivan, Claire M. Brown, Chris Wood, Antje Keppler, Ben Loos, Sara Zullino, Dario Livio Longo, Silvio Aime, Shuichi Onami
title: A Global View of Standards for Open Image Data Formats and Repositories
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.OT
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Biological and biomedical imaging datasets record the constitution, architecture and dynamics of living organisms across several orders of magnitude of space and time. Imaging technologies are now used throughout the life and biomedical sciences to achieve discovery and understanding of biological mechanisms in the basic sciences as well as assessment, diagnosis and therapeutic intervention in clinical trials and animal and human medicine. The universal application and use of imaging raises an important question and opportunity: what is the value and ultimate destination of biological and medical imaging data? In the last few years, several informatics and data science technologies have matured sufficiently so that routine publication of these datasets is now possible. Participants in Global BioImaging from 15 countries and all populated continents have agreed on the need for recommendations and guidelines for the establishment of image data repositories and the formats they use for delivering data to the global scientific community. This white paper presents a shared, global view of criteria for these common, globally applicable guidelines and provisional proposals for open tools and resources that are available now and can provide a foundation for future development.
versions: [ { "created": "Tue, 20 Oct 2020 08:00:19 GMT", "version": "v1" } ]
update_date: 2020-10-21
authors_parsed: [ [ "Swedlow", "Jason R.", "" ], [ "Kankaanpää", "Pasi", "" ], [ "Sarkans", "Ugis", "" ], [ "Goscinski", "Wojtek", "" ], [ "Galloway", "Graham", "" ], [ "Sullivan", "Ryan P.", "" ], [ "Brown", "Claire M.", "" ...
abstract: Biological and biomedical imaging datasets record the constitution, architecture and dynamics of living organisms across several orders of magnitude of space and time. Imaging technologies are now used throughout the life and biomedical sciences to achieve discovery and understanding of biological mechanisms in the basic sciences as well as assessment, diagnosis and therapeutic intervention in clinical trials and animal and human medicine. The universal application and use of imaging raises an important question and opportunity: what is the value and ultimate destination of biological and medical imaging data? In the last few years, several informatics and data science technologies have matured sufficiently so that routine publication of these datasets is now possible. Participants in Global BioImaging from 15 countries and all populated continents have agreed on the need for recommendations and guidelines for the establishment of image data repositories and the formats they use for delivering data to the global scientific community. This white paper presents a shared, global view of criteria for these common, globally applicable guidelines and provisional proposals for open tools and resources that are available now and can provide a foundation for future development.
1802.10080
Masaki Watabe
Masaki Watabe, Satya N. V. Arjunan, Wei Xiang Chew, Kazunari Kaizu and Koichi Takahashi
Simulation of live-cell imaging system reveals hidden uncertainties in cooperative binding measurements
110 pages, 96 figures
1 July 2019 issue of Physical Review E (Vol. 100, No. 1)
10.1103/PhysRevE.100.010402
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a computational method to quantitatively evaluate the systematic uncertainties that arise from undetectable sources in biological measurements using live-cell imaging techniques. We then demonstrate this method in measuring biological cooperativity of molecular binding networks: in particular, ligand molecules binding to cell surface receptor proteins. Our results show how the non-statistical uncertainties lead to invalid identification of the measured cooperativity. Through this computational scheme, the biological interpretation can be more objectively evaluated and understood under a specific experimental configuration of interest.
[ { "created": "Tue, 27 Feb 2018 00:29:20 GMT", "version": "v1" }, { "created": "Wed, 7 Mar 2018 01:14:11 GMT", "version": "v2" }, { "created": "Tue, 20 Mar 2018 04:51:19 GMT", "version": "v3" }, { "created": "Wed, 15 Aug 2018 07:29:05 GMT", "version": "v4" }, { "cr...
2019-07-09
[ [ "Watabe", "Masaki", "" ], [ "Arjunan", "Satya N. V.", "" ], [ "Chew", "Wei Xiang", "" ], [ "Kaizu", "Kazunari", "" ], [ "Takahashi", "Koichi", "" ] ]
We propose a computational method to quantitatively evaluate the systematic uncertainties that arise from undetectable sources in biological measurements using live-cell imaging techniques. We then demonstrate this method in measuring biological cooperativity of molecular binding networks: in particular, ligand molecules binding to cell surface receptor proteins. Our results show how the non-statistical uncertainties lead to invalid identification of the measured cooperativity. Through this computational scheme, the biological interpretation can be more objectively evaluated and understood under a specific experimental configuration of interest.
1909.05114
Jannis Born
Jannis Born, Matteo Manica, Ali Oskooei, Joris Cadow, Karsten Borgwardt, Mar\'ia Rodr\'iguez Mart\'inez
PaccMann$^{RL}$: Designing anticancer drugs from transcriptomic data via reinforcement learning
18 pages total (12 pages main text, 4 pages references, 11 pages appendix) 8 figures
International Conference on Research in Computational Molecular Biology 2020
10.1007/978-3-030-45257-5_18
null
q-bio.BM cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
With the advent of deep generative models in computational chemistry, in silico anticancer drug design has undergone an unprecedented transformation. While state-of-the-art deep learning approaches have shown potential in generating compounds with desired chemical properties, they disregard the genetic profile and properties of the target disease. Here, we introduce the first generative model capable of tailoring anticancer compounds for a specific biomolecular profile. Using an RL framework, the transcriptomic profiles of cancer cells are used as a context for the generation of candidate molecules. Our molecule generator combines two separately pretrained variational autoencoders (VAEs) - the first VAE encodes transcriptomic profiles into a smooth latent space, which in turn is used to condition a second VAE to generate novel molecular structures on the given transcriptomic profile. The generative process is optimized through PaccMann, a previously developed drug sensitivity prediction model, to obtain effective anticancer compounds for the given context (i.e., transcriptomic profile). We demonstrate how the molecule generation can be biased towards compounds with high predicted inhibitory effect against individual cell lines or specific cancer sites. We verify our approach by investigating candidate drugs generated against specific cancer types and find the highest structural similarity to existing compounds with known efficacy against these cancer types. We envision our approach to transform in silico anticancer drug design by leveraging the biomolecular characteristics of the disease in order to increase success rates in lead compound discovery.
[ { "created": "Thu, 29 Aug 2019 18:27:27 GMT", "version": "v1" }, { "created": "Wed, 6 Nov 2019 10:08:35 GMT", "version": "v2" }, { "created": "Fri, 27 Mar 2020 21:08:45 GMT", "version": "v3" }, { "created": "Thu, 16 Apr 2020 18:25:28 GMT", "version": "v4" } ]
2020-11-24
[ [ "Born", "Jannis", "" ], [ "Manica", "Matteo", "" ], [ "Oskooei", "Ali", "" ], [ "Cadow", "Joris", "" ], [ "Borgwardt", "Karsten", "" ], [ "Martínez", "María Rodríguez", "" ] ]
With the advent of deep generative models in computational chemistry, in silico anticancer drug design has undergone an unprecedented transformation. While state-of-the-art deep learning approaches have shown potential in generating compounds with desired chemical properties, they disregard the genetic profile and properties of the target disease. Here, we introduce the first generative model capable of tailoring anticancer compounds for a specific biomolecular profile. Using an RL framework, the transcriptomic profiles of cancer cells are used as a context for the generation of candidate molecules. Our molecule generator combines two separately pretrained variational autoencoders (VAEs) - the first VAE encodes transcriptomic profiles into a smooth latent space, which in turn is used to condition a second VAE to generate novel molecular structures on the given transcriptomic profile. The generative process is optimized through PaccMann, a previously developed drug sensitivity prediction model, to obtain effective anticancer compounds for the given context (i.e., transcriptomic profile). We demonstrate how the molecule generation can be biased towards compounds with high predicted inhibitory effect against individual cell lines or specific cancer sites. We verify our approach by investigating candidate drugs generated against specific cancer types and find the highest structural similarity to existing compounds with known efficacy against these cancer types. We envision our approach to transform in silico anticancer drug design by leveraging the biomolecular characteristics of the disease in order to increase success rates in lead compound discovery.
2401.08308
Alexandra Blenkinsop
Alexandra Blenkinsop, Nikos Pantazis, Evangelia Georgia Kostaki, Lysandros Sofocleous, Ard van Sighem, Daniela Bezemer, Thijs van de Laar, Marc van der Valk, Peter Reiss, Godelieve de Bree, Oliver Ratmann (on behalf of the HIV Transmission Elimination AMsterdam Initiative)
Sources of HIV infections among MSM with a migration background: a viral phylogenetic case study in Amsterdam, the Netherlands
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Background: Men and women with a migration background comprise an increasing proportion of incident HIV cases across Western Europe. Several studies indicate that a substantial proportion acquire HIV post-migration. Methods: We used partial HIV consensus sequences with linked demographic and clinical data from the opt-out ATHENA cohort of people with HIV in the Netherlands to quantify population-level sources of transmission to Dutch-born and foreign-born Amsterdam men who have sex with men (MSM) between 2010-2021. We identified phylogenetically and epidemiologically possible transmission pairs in local transmission chains and interpreted these in the context of estimated infection dates, quantifying transmission dynamics between sub-populations by world region of birth. Results: We estimate the majority of Amsterdam MSM who acquired their infection locally had a Dutch-born Amsterdam MSM source (56% [53-58%]). Dutch-born MSM were the predominant source population of infections among almost all foreign-born Amsterdam MSM sub-populations. Stratifying by two-year intervals indicated shifts in transmission dynamics, with a majority of infections originating from foreign-born MSM since 2018, although uncertainty ranges remained wide. Conclusions: In the context of declining HIV incidence among Amsterdam MSM, our data suggest that whilst native-born MSM have predominantly driven transmissions in 2010-2021, the contribution from foreign-born MSM living in Amsterdam is increasing.
[ { "created": "Tue, 16 Jan 2024 12:08:59 GMT", "version": "v1" } ]
2024-01-17
[ [ "Blenkinsop", "Alexandra", "", "on behalf\n of the HIV Transmission Elimination AMsterdam Initiative" ], [ "Pantazis", "Nikos", "", "on behalf\n of the HIV Transmission Elimination AMsterdam Initiative" ], [ "Kostaki", "Evangelia Georgia", "", "on behalf\n...
Background: Men and women with a migration background comprise an increasing proportion of incident HIV cases across Western Europe. Several studies indicate that a substantial proportion acquire HIV post-migration. Methods: We used partial HIV consensus sequences with linked demographic and clinical data from the opt-out ATHENA cohort of people with HIV in the Netherlands to quantify population-level sources of transmission to Dutch-born and foreign-born Amsterdam men who have sex with men (MSM) between 2010-2021. We identified phylogenetically and epidemiologically possible transmission pairs in local transmission chains and interpreted these in the context of estimated infection dates, quantifying transmission dynamics between sub-populations by world region of birth. Results: We estimate the majority of Amsterdam MSM who acquired their infection locally had a Dutch-born Amsterdam MSM source (56% [53-58%]). Dutch-born MSM were the predominant source population of infections among almost all foreign-born Amsterdam MSM sub-populations. Stratifying by two-year intervals indicated shifts in transmission dynamics, with a majority of infections originating from foreign-born MSM since 2018, although uncertainty ranges remained wide. Conclusions: In the context of declining HIV incidence among Amsterdam MSM, our data suggest that whilst native-born MSM have predominantly driven transmissions in 2010-2021, the contribution from foreign-born MSM living in Amsterdam is increasing.
2106.08476
Sage Malingen
Sage A Malingen, Kaitlyn Hood, Eric Lauga, Anette Hosoi, Thomas L Daniel
Fluid flow in the sarcomere
null
Archives of Biochemistry and Biophysics, 108923 (2021)
10.1016/j.abb.2021.108923
null
q-bio.SC physics.bio-ph physics.flu-dyn
http://creativecommons.org/licenses/by-nc-nd/4.0/
A highly organized and densely packed lattice of molecular machinery within the sarcomeres of muscle cells powers contraction. Although many of the proteins that drive contraction have been studied extensively, the mechanical impact of fluid shearing within the lattice of molecular machinery has received minimal attention. It was recently proposed that fluid flow augments substrate transport in the sarcomere; however, this analysis used analytical models of fluid flow in the molecular machinery that could not capture its full complexity. By building a finite element model of the sarcomere, we estimate the explicit flow field, and contrast it with analytical models. Our results demonstrate that viscous drag forces on sliding filaments are surprisingly small in contrast to the forces generated by single myosin molecular motors. This model also indicates that the energetic cost of fluid flow through viscous shearing with lattice proteins is likely minimal. The model also highlights a steep velocity gradient between sliding filaments and demonstrates that the maximal radial fluid velocity occurs near the tips of the filaments. To our knowledge, this is the first computational analysis of fluid flow within the highly structured sarcomere.
[ { "created": "Tue, 15 Jun 2021 22:57:21 GMT", "version": "v1" } ]
2021-06-22
[ [ "Malingen", "Sage A", "" ], [ "Hood", "Kaitlyn", "" ], [ "Lauga", "Eric", "" ], [ "Hosoi", "Anette", "" ], [ "Daniel", "Thomas L", "" ] ]
A highly organized and densely packed lattice of molecular machinery within the sarcomeres of muscle cells powers contraction. Although many of the proteins that drive contraction have been studied extensively, the mechanical impact of fluid shearing within the lattice of molecular machinery has received minimal attention. It was recently proposed that fluid flow augments substrate transport in the sarcomere; however, this analysis used analytical models of fluid flow in the molecular machinery that could not capture its full complexity. By building a finite element model of the sarcomere, we estimate the explicit flow field, and contrast it with analytical models. Our results demonstrate that viscous drag forces on sliding filaments are surprisingly small in contrast to the forces generated by single myosin molecular motors. This model also indicates that the energetic cost of fluid flow through viscous shearing with lattice proteins is likely minimal. The model also highlights a steep velocity gradient between sliding filaments and demonstrates that the maximal radial fluid velocity occurs near the tips of the filaments. To our knowledge, this is the first computational analysis of fluid flow within the highly structured sarcomere.
2005.04055
Luca Parma
L. Parma (1), N. F. Pelusio (1), E. Gisbert (2), M. A. Esteban (3), F. D'amico (4), M. Soverini (4), M. Candela (4), F. Dondi (1), P. P. Gatta (1), A. Bonaldo (1) ((1) Department of Veterinary Medical Sciences, University of Bologna, Ozzano Emilia, Italy; (2) IRTA, Programa Aquicultura, Sant Carles de la Rapita, Spain; (3) Department of Cell Biology and Histology, Faculty of Biology, Campus Regional de Excelencia Internacional Campus Mare Nostrum, University of Murcia, Spain; (4) Unit of Microbial Ecology of Health, Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy)
Effects of rearing density on growth, digestive conditions, welfare indicators and gut bacterial community of gilthead sea bream (Sparus aurata, L. 1758) fed different fishmeal and fish oil dietary levels
null
null
10.1016/j.aquaculture.2019.734854
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
In Mediterranean aquaculture, little research has examined the interaction between rearing density and dietary composition on main key performance indicators, physiological processes and gut bacterial community. A study was therefore undertaken to assess the growth response, digestive enzyme activity, humoral immunity of skin mucus, plasma biochemistry and gut microbiota of gilthead sea bream (Sparus aurata, L. 1758) reared at high (HD) and low (LD) final stocking densities and fed high (FM30FO15; 30 % fishmeal, FM, and 15 % fish oil, FO) and low (FM10FO3; 10 % FM and 3 % FO) FM and FO levels. Isonitrogenous and isolipidic extruded diets were fed to triplicate fish groups (initial weight: 96.2 g) to overfeeding over 98 days. The densities tested had no major effects on the overall growth and feed efficiency of sea bream reared at high or low dietary FM and FO levels. Results of digestive enzyme activity indicated a comparable digestive efficiency among rearing densities and within each dietary treatment. Plasma parameters related to nutritional and physiological conditions were not affected by rearing density under either nutritional condition; a similar observation was made through the determination of lysozyme, protease, antiprotease and total protein in skin mucus. For the first time in this species, the effect of rearing density on the gut bacterial community was studied. Different responses in relation to dietary treatment under HD and LD were detected. The low FM-FO diet kept the biodiversity of the gut bacterial community steady between LD and HD conditions, while fish fed the high FM-FO level showed reduced biodiversity at HD. According to these results, it seems feasible to rear gilthead sea bream at the on-growing phase at densities up to 36-44 kg/m3 with low or high FM-FO diets without negatively affecting growth, feed efficiency, welfare condition or the gut bacterial community.
[ { "created": "Wed, 8 Apr 2020 13:05:53 GMT", "version": "v1" }, { "created": "Fri, 15 May 2020 06:56:39 GMT", "version": "v2" } ]
2020-05-18
[ [ "Parma", "L.", "" ], [ "Pelusio", "N. F.", "" ], [ "Gisbert", "E.", "" ], [ "Esteban", "M. A.", "" ], [ "D'amico", "F.", "" ], [ "Soverini", "M.", "" ], [ "Candela", "M.", "" ], [ "Dondi", "F.",...
In Mediterranean aquaculture, little research has examined the interaction between rearing density and dietary composition on main key performance indicators, physiological processes and gut bacterial community. A study was therefore undertaken to assess the growth response, digestive enzyme activity, humoral immunity of skin mucus, plasma biochemistry and gut microbiota of gilthead sea bream (Sparus aurata, L. 1758) reared at high (HD) and low (LD) final stocking densities and fed high (FM30FO15; 30 % fishmeal, FM, and 15 % fish oil, FO) and low (FM10FO3; 10 % FM and 3 % FO) FM and FO levels. Isonitrogenous and isolipidic extruded diets were fed to triplicate fish groups (initial weight: 96.2 g) to overfeeding over 98 days. The densities tested had no major effects on the overall growth and feed efficiency of sea bream reared at high or low dietary FM and FO levels. Results of digestive enzyme activity indicated a comparable digestive efficiency among rearing densities and within each dietary treatment. Plasma parameters related to nutritional and physiological conditions were not affected by rearing density under either nutritional condition; a similar observation was made through the determination of lysozyme, protease, antiprotease and total protein in skin mucus. For the first time in this species, the effect of rearing density on the gut bacterial community was studied. Different responses in relation to dietary treatment under HD and LD were detected. The low FM-FO diet kept the biodiversity of the gut bacterial community steady between LD and HD conditions, while fish fed the high FM-FO level showed reduced biodiversity at HD. According to these results, it seems feasible to rear gilthead sea bream at the on-growing phase at densities up to 36-44 kg/m3 with low or high FM-FO diets without negatively affecting growth, feed efficiency, welfare condition or the gut bacterial community.
1212.4491
Vadim Volkov S
Vadim Volkov
Quantitative description of ion transport via plasma membrane of yeast and small cells
22 pages, 6 figures, 1 table
Front. Plant Sci. 6:425 (2015)
10.3389/fpls.2015.00425
null
q-bio.SC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling of ion transport via the plasma membrane requires identification and quantitative understanding of the involved processes. A brief characterization of the main ion transport systems of a yeast cell (Pma1, Ena1, TOK1, Nha1, Trk1, Trk2, non-selective cation conductance) and a determination of the exact number of molecules of each transporter per typical cell allow us to predict the corresponding ion flows. In this review, a comparison of ion transport in a small yeast cell and several animal cell types is provided. The importance of the cell volume-to-surface ratio is emphasized. The role of the cell wall and lipid rafts is discussed with respect to the required increase in spatial and temporal resolution of measurements. Conclusions are formulated to describe specific features of ion transport in a yeast cell. Potential directions of future research are outlined based on these assumptions.
[ { "created": "Tue, 18 Dec 2012 16:17:14 GMT", "version": "v1" }, { "created": "Sat, 16 Jan 2016 13:08:23 GMT", "version": "v2" } ]
2016-01-19
[ [ "Volkov", "Vadim", "" ] ]
Modeling of ion transport via the plasma membrane requires identification and quantitative understanding of the involved processes. A brief characterization of the main ion transport systems of a yeast cell (Pma1, Ena1, TOK1, Nha1, Trk1, Trk2, non-selective cation conductance) and a determination of the exact number of molecules of each transporter per typical cell allow us to predict the corresponding ion flows. In this review, a comparison of ion transport in a small yeast cell and several animal cell types is provided. The importance of the cell volume-to-surface ratio is emphasized. The role of the cell wall and lipid rafts is discussed with respect to the required increase in spatial and temporal resolution of measurements. Conclusions are formulated to describe specific features of ion transport in a yeast cell. Potential directions of future research are outlined based on these assumptions.
1104.2464
Kirill Korolev S
K. S. Korolev and David R. Nelson
Competition and cooperation in one-dimensional stepping stone models
8 pages, 4 figures
null
10.1103/PhysRevLett.107.088103
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cooperative mutualism is a major force driving evolution and sustaining ecosystems. Although the importance of spatial degrees of freedom and number fluctuations is well-known, their effects on mutualism are not fully understood. With range expansions of microbes in mind, we show that, even when mutualism confers a distinct selective advantage, it persists only in populations with high density and frequent migrations. When these parameters are reduced, mutualism is generically lost via a directed percolation process, with a phase diagram strongly influenced by an exceptional DP2 transition.
[ { "created": "Wed, 13 Apr 2011 12:15:48 GMT", "version": "v1" }, { "created": "Sun, 17 Apr 2011 21:06:01 GMT", "version": "v2" } ]
2015-05-27
[ [ "Korolev", "K. S.", "" ], [ "Nelson", "David R.", "" ] ]
Cooperative mutualism is a major force driving evolution and sustaining ecosystems. Although the importance of spatial degrees of freedom and number fluctuations is well-known, their effects on mutualism are not fully understood. With range expansions of microbes in mind, we show that, even when mutualism confers a distinct selective advantage, it persists only in populations with high density and frequent migrations. When these parameters are reduced, mutualism is generically lost via a directed percolation process, with a phase diagram strongly influenced by an exceptional DP2 transition.
2002.07112
Z.X. Hu
Zixin Hu, Qiyang Ge, Shudi Li, Li Jin and Momiao Xiong
Artificial Intelligence Forecasting of Covid-19 in China
14 pages, 5 figures, 1 table
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
BACKGROUND As an alternative to epidemiological models of the transmission dynamics of Covid-19 in China, we propose artificial intelligence (AI)-inspired methods for real-time forecasting of Covid-19 to estimate the size, length and ending time of Covid-19 across China. METHODS We developed a modified stacked auto-encoder for modeling the transmission dynamics of the epidemic. We applied this model to real-time forecasting of the confirmed cases of Covid-19 across China. The data were collected from January 11 to February 27, 2020 by WHO. We used the latent variables in the auto-encoder and clustering algorithms to group the provinces/cities in order to investigate the transmission structure. RESULTS We forecasted curves of cumulative confirmed cases of Covid-19 across China from Jan 20, 2020 to April 20, 2020. Using multiple-step forecasting, the estimated average errors of 6-step, 7-step, 8-step, 9-step and 10-step forecasting were 1.64%, 2.27%, 2.14%, 2.08% and 0.73%, respectively. We predicted that the time points at which the provinces/cities enter the plateau of the forecasted transmission dynamic curves varied, ranging from Jan 21 to April 19, 2020. The 34 provinces/cities were grouped into 9 clusters. CONCLUSIONS The accuracy of the AI-based methods for forecasting the trajectory of Covid-19 was high. We predicted that the epidemic of Covid-19 will be over by the middle of April. If the data are reliable and there are no second transmissions, we can accurately forecast the transmission dynamics of Covid-19 across the provinces/cities in China. The AI-inspired methods are a powerful tool for helping public health planning and policymaking.
[ { "created": "Mon, 17 Feb 2020 18:14:47 GMT", "version": "v1" }, { "created": "Sun, 1 Mar 2020 05:19:42 GMT", "version": "v2" } ]
2020-03-03
[ [ "Hu", "Zixin", "" ], [ "Ge", "Qiyang", "" ], [ "Li", "Shudi", "" ], [ "Jin", "Li", "" ], [ "Xiong", "Momiao", "" ] ]
BACKGROUND As an alternative to epidemiological models of the transmission dynamics of Covid-19 in China, we propose artificial intelligence (AI)-inspired methods for real-time forecasting of Covid-19 to estimate the size, length and ending time of Covid-19 across China. METHODS We developed a modified stacked auto-encoder for modeling the transmission dynamics of the epidemic. We applied this model to real-time forecasting of the confirmed cases of Covid-19 across China. The data were collected from January 11 to February 27, 2020 by WHO. We used the latent variables in the auto-encoder and clustering algorithms to group the provinces/cities in order to investigate the transmission structure. RESULTS We forecasted curves of cumulative confirmed cases of Covid-19 across China from Jan 20, 2020 to April 20, 2020. Using multiple-step forecasting, the estimated average errors of 6-step, 7-step, 8-step, 9-step and 10-step forecasting were 1.64%, 2.27%, 2.14%, 2.08% and 0.73%, respectively. We predicted that the time points at which the provinces/cities enter the plateau of the forecasted transmission dynamic curves varied, ranging from Jan 21 to April 19, 2020. The 34 provinces/cities were grouped into 9 clusters. CONCLUSIONS The accuracy of the AI-based methods for forecasting the trajectory of Covid-19 was high. We predicted that the epidemic of Covid-19 will be over by the middle of April. If the data are reliable and there are no second transmissions, we can accurately forecast the transmission dynamics of Covid-19 across the provinces/cities in China. The AI-inspired methods are a powerful tool for helping public health planning and policymaking.
2309.16684
Jiamin Wu
Jiamin Wu, He Cao, Yuan Yao
Leveraging Side Information for Ligand Conformation Generation using Diffusion-Based Approaches
null
null
null
null
q-bio.BM cs.LG physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ligand molecule conformation generation is a critical challenge in drug discovery. Deep learning models have been developed to tackle this problem, particularly through the use of generative models in recent years. However, these models often generate conformations that lack meaningful structure and randomness due to the absence of essential side information. Examples of such side information include the chemical and geometric features of the target protein, ligand-target compound interactions, and ligand chemical properties. Without these constraints, the generated conformations may not be suitable for further selection and design of new drugs. To address this limitation, we propose a novel method for generating ligand conformations that leverages side information and incorporates flexible constraints into standard diffusion models. Drawing inspiration from the concept of message passing, we introduce a ligand-target message passing block, a mechanism that facilitates the exchange of information between target nodes and ligand nodes, thereby incorporating target node features. To capture non-covalent interactions, we introduce ligand-target compound inter- and intra-edges. To further improve the biological relevance of the generated conformations, we train energy models using scalar chemical features. These models guide the progress of the standard Denoising Diffusion Probabilistic Models, resulting in more biologically meaningful conformations. We evaluate the performance of SIDEGEN using the PDBBind-2020 dataset, comparing it against other methods. The results demonstrate improvements in both Aligned RMSD and Ligand RMSD evaluations. Specifically, our model outperforms GeoDiff (trained on PDBBind-2020) by 20% in terms of the median aligned RMSD metric.
[ { "created": "Wed, 2 Aug 2023 09:56:47 GMT", "version": "v1" } ]
2023-10-02
[ [ "Wu", "Jiamin", "" ], [ "Cao", "He", "" ], [ "Yao", "Yuan", "" ] ]
Ligand molecule conformation generation is a critical challenge in drug discovery. Deep learning models have been developed to tackle this problem, particularly through the use of generative models in recent years. However, these models often generate conformations that lack meaningful structure and randomness due to the absence of essential side information. Examples of such side information include the chemical and geometric features of the target protein, ligand-target compound interactions, and ligand chemical properties. Without these constraints, the generated conformations may not be suitable for further selection and design of new drugs. To address this limitation, we propose a novel method for generating ligand conformations that leverages side information and incorporates flexible constraints into standard diffusion models. Drawing inspiration from the concept of message passing, we introduce a ligand-target message passing block, a mechanism that facilitates the exchange of information between target nodes and ligand nodes, thereby incorporating target node features. To capture non-covalent interactions, we introduce ligand-target compound inter- and intra-edges. To further improve the biological relevance of the generated conformations, we train energy models using scalar chemical features. These models guide the progress of the standard Denoising Diffusion Probabilistic Models, resulting in more biologically meaningful conformations. We evaluate the performance of SIDEGEN using the PDBBind-2020 dataset, comparing it against other methods. The results demonstrate improvements in both Aligned RMSD and Ligand RMSD evaluations. Specifically, our model outperforms GeoDiff (trained on PDBBind-2020) by 20% in terms of the median aligned RMSD metric.
1912.09676
Gimenez Olivier
Julie Louvrier, Julien Papa\"ix, Christophe Duchamp, Olivier Gimenez
A mechanistic-statistical species distribution model to explain and forecast wolf (Canis lupus) colonization in South-Eastern France
null
Spatial Statistics 36 (2020) 100428
10.1016/j.spasta.2020.100428
null
q-bio.PE q-bio.QM stat.AP
http://creativecommons.org/licenses/by-nc-sa/4.0/
Species distribution models (SDMs) are important statistical tools for ecologists to understand and predict species range. However, standard SDMs do not explicitly incorporate dynamic processes like dispersal. This limitation may lead to bias in inference about species distribution. Here, we adopt the theory of ecological diffusion that has recently been introduced in statistical ecology to incorporate spatio-temporal processes in ecological models. As a case study, we considered the wolf (Canis lupus) that has been recolonizing Eastern France naturally through dispersal from the Apennines since the early 90's. Using partial differential equations for modelling species diffusion and growth in a fragmented landscape, we develop a mechanistic-statistical spatio-temporal model accounting for ecological diffusion, logistic growth and imperfect species detection. We conduct a simulation study and show the ability of our model to i) estimate ecological parameters in various situations with contrasted species detection probability and number of surveyed sites and ii) forecast the distribution into the future. We found that the growth rate of the wolf population in France was explained by the proportion of forest cover, that diffusion was influenced by human density and that species detectability increased with increasing survey effort. Using the parameters estimated from the 2007-2015 period, we then forecasted wolf distribution in 2016 and found good agreement with the actual detections made that year. Our approach may be useful for managing species that interact with human activities to anticipate potential conflicts.
[ { "created": "Fri, 20 Dec 2019 07:54:11 GMT", "version": "v1" }, { "created": "Mon, 17 Feb 2020 12:10:50 GMT", "version": "v2" } ]
2021-08-27
[ [ "Louvrier", "Julie", "" ], [ "Papaïx", "Julien", "" ], [ "Duchamp", "Christophe", "" ], [ "Gimenez", "Olivier", "" ] ]
Species distribution models (SDMs) are important statistical tools for ecologists to understand and predict species range. However, standard SDMs do not explicitly incorporate dynamic processes like dispersal. This limitation may lead to bias in inference about species distribution. Here, we adopt the theory of ecological diffusion that has recently been introduced in statistical ecology to incorporate spatio-temporal processes in ecological models. As a case study, we considered the wolf (Canis lupus) that has been recolonizing Eastern France naturally through dispersal from the Apennines since the early 90's. Using partial differential equations for modelling species diffusion and growth in a fragmented landscape, we develop a mechanistic-statistical spatio-temporal model accounting for ecological diffusion, logistic growth and imperfect species detection. We conduct a simulation study and show the ability of our model to i) estimate ecological parameters in various situations with contrasted species detection probability and number of surveyed sites and ii) forecast the distribution into the future. We found that the growth rate of the wolf population in France was explained by the proportion of forest cover, that diffusion was influenced by human density and that species detectability increased with increasing survey effort. Using the parameters estimated from the 2007-2015 period, we then forecasted wolf distribution in 2016 and found good agreement with the actual detections made that year. Our approach may be useful for managing species that interact with human activities to anticipate potential conflicts.
2008.00973
Rajat Mittal
Rajat Mittal, Charles Meneveau and Wen Wu
A Mathematical Framework for Estimating Risk of Airborne Transmission of COVID-19 with Application to Face Mask Use and Social Distancing
null
null
10.1063/5.0025476
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A mathematical model for estimating the risk of airborne transmission of a respiratory infection such as COVID-19 is presented. The model employs basic concepts from fluid dynamics and incorporates the known scope of factors involved in the airborne transmission of such diseases. Simplicity in the mathematical form of the model is by design, so that it can serve not only as a common basis for scientific inquiry across disciplinary boundaries, but also be understandable by a broad audience outside science and academia. The caveats and limitations of the model are discussed in detail. The model is used to assess the protection from transmission afforded by face coverings made from a variety of fabrics. The reduction in transmission risk associated with increased physical distance between the host and the susceptible is also quantified by coupling the model with available data on scalar dispersion in canonical flows. Finally, the effect of the level of physical activity (or exercise intensity) of the host and the susceptible in enhancing transmission risk is also assessed.
[ { "created": "Mon, 3 Aug 2020 15:43:31 GMT", "version": "v1" }, { "created": "Sat, 15 Aug 2020 15:03:20 GMT", "version": "v2" }, { "created": "Wed, 2 Sep 2020 18:57:05 GMT", "version": "v3" }, { "created": "Sun, 20 Sep 2020 22:57:16 GMT", "version": "v4" } ]
2020-10-28
[ [ "Mittal", "Rajat", "" ], [ "Meneveau", "Charles", "" ], [ "Wu", "Wen", "" ] ]
A mathematical model for estimating the risk of airborne transmission of a respiratory infection such as COVID-19 is presented. The model employs basic concepts from fluid dynamics and incorporates the known scope of factors involved in the airborne transmission of such diseases. Simplicity in the mathematical form of the model is by design, so that it can serve not only as a common basis for scientific inquiry across disciplinary boundaries, but also be understandable by a broad audience outside science and academia. The caveats and limitations of the model are discussed in detail. The model is used to assess the protection from transmission afforded by face coverings made from a variety of fabrics. The reduction in transmission risk associated with increased physical distance between the host and the susceptible is also quantified by coupling the model with available data on scalar dispersion in canonical flows. Finally, the effect of the level of physical activity (or exercise intensity) of the host and the susceptible in enhancing transmission risk is also assessed.
1407.0895
Joris Paijmans
Joris Paijmans and Pieter Rein ten Wolde
The lower bound on the precision of transcriptional regulation
35 pages, 5 figures, 1 table and two appendixes
Phys. Rev. E 90 (2014) 032708
10.1103/PhysRevE.90.032708
null
q-bio.MN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The diffusive arrival of transcription factors at the promoter sites on the DNA sets a lower bound on how accurately a cell can regulate its protein levels. Using results from the literature on diffusion-influenced reactions, we derive an analytical expression for the lower bound on the precision of transcriptional regulation. In our theory, transcription factors can perform multiple rounds of 1D diffusion along the DNA and 3D diffusion in the cytoplasm before binding to the promoter. Comparing our expression for the lower bound on the precision against results from Green's Function Reaction Dynamics simulations shows that the theory is highly accurate under biologically relevant conditions. Our results demonstrate that, to an excellent approximation, the promoter switches between the transcription-factor bound and unbound state in a Markovian fashion. This remains true even in the presence of sliding, i.e. with 1D diffusion along the DNA. This has two important implications: (1) minimizing the noise in the promoter state is equivalent to minimizing the search time of transcription factors for their promoters; (2) the complicated dynamics of 3D diffusion in the cytoplasm and 1D diffusion along the DNA can be captured in a well-stirred model by renormalizing the promoter association and dissociation rates, making it possible to efficiently simulate the promoter dynamics using Gillespie simulations. Based on the recent experimental observation that sliding can speed up the promoter search by a factor of 4, our theory predicts that sliding can enhance the precision of transcriptional regulation by a factor of 2.
[ { "created": "Thu, 3 Jul 2014 12:46:43 GMT", "version": "v1" } ]
2014-10-24
[ [ "Paijmans", "Joris", "" ], [ "Wolde", "Pieter Rein ten", "" ] ]
The diffusive arrival of transcription factors at the promoter sites on the DNA sets a lower bound on how accurately a cell can regulate its protein levels. Using results from the literature on diffusion-influenced reactions, we derive an analytical expression for the lower bound on the precision of transcriptional regulation. In our theory, transcription factors can perform multiple rounds of 1D diffusion along the DNA and 3D diffusion in the cytoplasm before binding to the promoter. Comparing our expression for the lower bound on the precision against results from Green's Function Reaction Dynamics simulations shows that the theory is highly accurate under biologically relevant conditions. Our results demonstrate that, to an excellent approximation, the promoter switches between the transcription-factor bound and unbound state in a Markovian fashion. This remains true even in the presence of sliding, i.e. with 1D diffusion along the DNA. This has two important implications: (1) minimizing the noise in the promoter state is equivalent to minimizing the search time of transcription factors for their promoters; (2) the complicated dynamics of 3D diffusion in the cytoplasm and 1D diffusion along the DNA can be captured in a well-stirred model by renormalizing the promoter association and dissociation rates, making it possible to efficiently simulate the promoter dynamics using Gillespie simulations. Based on the recent experimental observation that sliding can speed up the promoter search by a factor of 4, our theory predicts that sliding can enhance the precision of transcriptional regulation by a factor of 2.
1401.3604
Hector Zenil
Hector Zenil, Narsis A. Kiani, Jesper Tegn\'er
Methods of Information Theory and Algorithmic Complexity for Network Biology
28 pages. Forthcoming in the journal Seminars in Cell and Developmental Biology
null
null
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We survey and introduce concepts and tools located at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic complexity quantify different local and global aspects of synthetic and biological data. We show examples such as the emergence of giant components in Erdos-Renyi random graphs, and the recovery of topological properties from numerical kinetic properties simulating gene expression data. We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs, characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labeled and unlabeled graphs and prove that the Kolmogorov complexity of a labeled graph is a good approximation of its unlabeled Kolmogorov complexity and thus a robust definition of graph complexity.
[ { "created": "Wed, 15 Jan 2014 14:34:00 GMT", "version": "v1" }, { "created": "Wed, 5 Feb 2014 17:36:26 GMT", "version": "v2" }, { "created": "Wed, 15 Oct 2014 10:17:00 GMT", "version": "v3" }, { "created": "Sat, 6 Dec 2014 21:51:24 GMT", "version": "v4" }, { "cre...
2015-12-14
[ [ "Zenil", "Hector", "" ], [ "Kiani", "Narsis A.", "" ], [ "Tegnér", "Jesper", "" ] ]
We survey and introduce concepts and tools located at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic complexity quantify different local and global aspects of synthetic and biological data. We show examples such as the emergence of giant components in Erdos-Renyi random graphs, and the recovery of topological properties from numerical kinetic properties simulating gene expression data. We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs, characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labeled and unlabeled graphs and prove that the Kolmogorov complexity of a labeled graph is a good approximation of its unlabeled Kolmogorov complexity and thus a robust definition of graph complexity.
1211.0462
Tommaso Biancalani
Alan J. McKane, Tommaso Biancalani and Tim Rogers
Stochastic pattern formation and spontaneous polarisation: the linear noise approximation and beyond
27 pages, 6 figures
null
null
null
q-bio.PE cond-mat.stat-mech nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We review the mathematical formalism underlying the modelling of stochasticity in biological systems. Beginning with a description of the system in terms of its basic constituents, we derive the mesoscopic equations governing the dynamics which generalise the more familiar macroscopic equations. We apply this formalism to the analysis of two specific noise-induced phenomena observed in biologically-inspired models. In the first example, we show how the stochastic amplification of a Turing instability gives rise to spatial and temporal patterns which may be understood within the linear noise approximation. The second example concerns the spontaneous emergence of cell polarity, where we make analytic progress by exploiting a separation of time-scales.
[ { "created": "Fri, 2 Nov 2012 14:49:12 GMT", "version": "v1" } ]
2012-11-05
[ [ "McKane", "Alan J.", "" ], [ "Biancalani", "Tommaso", "" ], [ "Rogers", "Tim", "" ] ]
We review the mathematical formalism underlying the modelling of stochasticity in biological systems. Beginning with a description of the system in terms of its basic constituents, we derive the mesoscopic equations governing the dynamics which generalise the more familiar macroscopic equations. We apply this formalism to the analysis of two specific noise-induced phenomena observed in biologically-inspired models. In the first example, we show how the stochastic amplification of a Turing instability gives rise to spatial and temporal patterns which may be understood within the linear noise approximation. The second example concerns the spontaneous emergence of cell polarity, where we make analytic progress by exploiting a separation of time-scales.
1708.01562
Jonathan Harrison
Jonathan U. Harrison and Ruth E. Baker
The impact of temporal sampling resolution on parameter inference for biological transport models
Published in PLOS Computational Biology
null
10.1371/journal.pcbi.1006235
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Imaging data has become widely available to study biological systems at various scales, for example the motile behaviour of bacteria or the transport of mRNA, and it has the potential to transform our understanding of key transport mechanisms. Often these imaging studies require us to compare biological species or mutants, and to do this we need to quantitatively characterise their behaviour. Mathematical models offer a quantitative description of a system that enables us to perform this comparison, but to relate these mechanistic mathematical models to imaging data, we need to estimate the parameters of the models. In this work, we study the impact of collecting data at different temporal resolutions on parameter inference for biological transport models by performing exact inference for simple velocity jump process models in a Bayesian framework. This issue is prominent in a host of studies because the majority of imaging technologies place constraints on the frequency with which images can be collected, and the discrete nature of observations can introduce errors into parameter estimates. In this work, we avoid such errors by formulating the velocity jump process model within a hidden states framework. This allows us to obtain estimates of the reorientation rate and noise amplitude for noisy observations of a simple velocity jump process. We demonstrate the sensitivity of these estimates to temporal variations in the sampling resolution and extent of measurement noise. We use our methodology to provide experimental guidelines for researchers aiming to characterise motile behaviour that can be described by a velocity jump process. In particular, we consider how experimental constraints resulting in a trade-off between temporal sampling resolution and observation noise may affect parameter estimates.
[ { "created": "Fri, 4 Aug 2017 15:45:08 GMT", "version": "v1" }, { "created": "Mon, 18 Dec 2017 13:45:54 GMT", "version": "v2" }, { "created": "Mon, 4 Jun 2018 08:00:58 GMT", "version": "v3" } ]
2018-06-05
[ [ "Harrison", "Jonathan U.", "" ], [ "Baker", "Ruth E.", "" ] ]
Imaging data has become widely available to study biological systems at various scales, for example the motile behaviour of bacteria or the transport of mRNA, and it has the potential to transform our understanding of key transport mechanisms. Often these imaging studies require us to compare biological species or mutants, and to do this we need to quantitatively characterise their behaviour. Mathematical models offer a quantitative description of a system that enables us to perform this comparison, but to relate these mechanistic mathematical models to imaging data, we need to estimate the parameters of the models. In this work, we study the impact of collecting data at different temporal resolutions on parameter inference for biological transport models by performing exact inference for simple velocity jump process models in a Bayesian framework. This issue is prominent in a host of studies because the majority of imaging technologies place constraints on the frequency with which images can be collected, and the discrete nature of observations can introduce errors into parameter estimates. In this work, we avoid such errors by formulating the velocity jump process model within a hidden states framework. This allows us to obtain estimates of the reorientation rate and noise amplitude for noisy observations of a simple velocity jump process. We demonstrate the sensitivity of these estimates to temporal variations in the sampling resolution and extent of measurement noise. We use our methodology to provide experimental guidelines for researchers aiming to characterise motile behaviour that can be described by a velocity jump process. In particular, we consider how experimental constraints resulting in a trade-off between temporal sampling resolution and observation noise may affect parameter estimates.
2311.08433
Sangwon Baek
Sangwon Baek, Seung Jun Lee
Clinical Characteristics and Laboratory Biomarkers in ICU-admitted Septic Patients with and without Bacteremia
This article is not the right fit to be published as preprint in arXiv
null
null
null
q-bio.QM cs.LG stat.AP
http://creativecommons.org/licenses/by/4.0/
Few studies have investigated the diagnostic utility of biomarkers for predicting bacteremia among septic patients admitted to intensive care units (ICU). Therefore, this study evaluated the predictive power of laboratory biomarkers in order to select high-performing markers and optimize the predictive model for bacteremia. This retrospective cross-sectional study was conducted at the ICU department of Gyeongsang National University Changwon Hospital in 2019. Adult patients meeting the SEPSIS-3 criteria (an increase in the sequential organ failure assessment score of 2 or more) with at least two sets of blood cultures were selected. Each collected variable was first analyzed independently to identify the significant predictors, which were then used to build the multivariable logistic regression (MLR) model. A total of 218 patients, including 48 cases of true bacteremia, were analyzed in this research. Both CRP and PCT showed a substantial area under the curve (AUC) for discriminating bacteremia among septic patients (0.757 and 0.845, respectively). To further enhance predictive accuracy, we combined PCT, bilirubin, neutrophil-lymphocyte ratio (NLR), platelets, lactic acid, erythrocyte sedimentation rate (ESR), and Glasgow Coma Scale (GCS) score to build a predictive model with an AUC of 0.907 (95% CI, 0.843 to 0.956). In addition, a strong association between bacteremia and mortality was found through survival analysis (p = 0.004). While PCT is certainly a useful index for distinguishing patients with and without bacteremia by itself, our MLR model indicates that the accuracy of bacteremia prediction improves substantially with the combined use of PCT, bilirubin, NLR, platelets, lactic acid, ESR, and GCS score.
[ { "created": "Tue, 14 Nov 2023 06:44:26 GMT", "version": "v1" }, { "created": "Thu, 16 Nov 2023 12:21:56 GMT", "version": "v2" } ]
2023-11-17
[ [ "Baek", "Sangwon", "" ], [ "Lee", "Seung Jun", "" ] ]
Few studies have investigated the diagnostic utility of biomarkers for predicting bacteremia among septic patients admitted to intensive care units (ICU). Therefore, this study evaluated the predictive power of laboratory biomarkers in order to select high-performing markers and optimize the predictive model for bacteremia. This retrospective cross-sectional study was conducted at the ICU department of Gyeongsang National University Changwon Hospital in 2019. Adult patients meeting the SEPSIS-3 criteria (an increase in the sequential organ failure assessment score of 2 or more) with at least two sets of blood cultures were selected. Each collected variable was first analyzed independently to identify the significant predictors, which were then used to build the multivariable logistic regression (MLR) model. A total of 218 patients, including 48 cases of true bacteremia, were analyzed in this research. Both CRP and PCT showed a substantial area under the curve (AUC) for discriminating bacteremia among septic patients (0.757 and 0.845, respectively). To further enhance predictive accuracy, we combined PCT, bilirubin, neutrophil-lymphocyte ratio (NLR), platelets, lactic acid, erythrocyte sedimentation rate (ESR), and Glasgow Coma Scale (GCS) score to build a predictive model with an AUC of 0.907 (95% CI, 0.843 to 0.956). In addition, a strong association between bacteremia and mortality was found through survival analysis (p = 0.004). While PCT is certainly a useful index for distinguishing patients with and without bacteremia by itself, our MLR model indicates that the accuracy of bacteremia prediction improves substantially with the combined use of PCT, bilirubin, NLR, platelets, lactic acid, ESR, and GCS score.
2005.06933
Katar\'ina Bo\v{d}ov\'a
Katarina Bodova and Richard Kollar
Emerging Polynomial Growth Trends in COVID-19 Pandemic Data and Their Reconciliation with Compartment Based Models
27 pages, 11 figures, 4 tables
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the reported data from the COVID-19 pandemic outbreak in January - May 2020 in 119 countries. We observe that the time series of active cases in individual countries (the difference of the total number of confirmed infections and the sum of the total number of reported deaths and recovered cases) display a strong agreement with polynomial growth and at a later epidemic stage also with a combined polynomial growth with exponential decay. Our results are also formulated in terms of compartment type mathematical models of epidemics. Within these models the universal scaling characterizing the observed regime in an advanced epidemic stage can be interpreted as an algebraic decay of the relative reproduction number $R_0$ as $T_M/t$, where $T_M$ is a constant and $t$ is the duration of the epidemic outbreak. We show how our findings can be applied to improve predictions of the reported pandemic data and estimate some epidemic parameters. Note that although the model shows a good agreement with the reported data we do not make any claims about the real size of the pandemics as the relation of the observed reported data to the total number of infected in the population is still unknown.
[ { "created": "Thu, 14 May 2020 13:05:38 GMT", "version": "v1" } ]
2020-05-15
[ [ "Bodova", "Katarina", "" ], [ "Kollar", "Richard", "" ] ]
We study the reported data from the COVID-19 pandemic outbreak in January - May 2020 in 119 countries. We observe that the time series of active cases in individual countries (the difference of the total number of confirmed infections and the sum of the total number of reported deaths and recovered cases) display a strong agreement with polynomial growth and at a later epidemic stage also with a combined polynomial growth with exponential decay. Our results are also formulated in terms of compartment type mathematical models of epidemics. Within these models the universal scaling characterizing the observed regime in an advanced epidemic stage can be interpreted as an algebraic decay of the relative reproduction number $R_0$ as $T_M/t$, where $T_M$ is a constant and $t$ is the duration of the epidemic outbreak. We show how our findings can be applied to improve predictions of the reported pandemic data and estimate some epidemic parameters. Note that although the model shows a good agreement with the reported data we do not make any claims about the real size of the pandemics as the relation of the observed reported data to the total number of infected in the population is still unknown.
2211.02891
Jinan Wang
Jinan Wang, Hung N. Do, Kushal Koirala, Yinglong Miao
Predicting biomolecular binding kinetics: A review
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Biomolecular binding kinetics including the association (kon) and dissociation (koff) rates are critical parameters for therapeutic design of small-molecule drugs, peptides and antibodies. Notably, drug molecule residence time or dissociation rate has been shown to correlate with their efficacies better than binding affinities. A wide range of modeling approaches including quantitative structure-kinetic relationship models, Molecular Dynamics simulations, enhanced sampling and Machine Learning have been developed to explore biomolecular binding and dissociation mechanisms and predict binding kinetic rates. Here, we review recent advances in computational modeling of biomolecular binding kinetics, with an outlook for future improvements.
[ { "created": "Sat, 5 Nov 2022 12:19:36 GMT", "version": "v1" } ]
2022-11-08
[ [ "Wang", "Jinan", "" ], [ "Do", "Hung N.", "" ], [ "Koirala", "Kushal", "" ], [ "Miao", "Yinglong", "" ] ]
Biomolecular binding kinetics including the association (kon) and dissociation (koff) rates are critical parameters for therapeutic design of small-molecule drugs, peptides and antibodies. Notably, drug molecule residence time or dissociation rate has been shown to correlate with their efficacies better than binding affinities. A wide range of modeling approaches including quantitative structure-kinetic relationship models, Molecular Dynamics simulations, enhanced sampling and Machine Learning have been developed to explore biomolecular binding and dissociation mechanisms and predict binding kinetic rates. Here, we review recent advances in computational modeling of biomolecular binding kinetics, with an outlook for future improvements.
2101.07082
Joost van Opheusden
Joost H.J. van Opheusden, Lia Hemerik
Some paradoxes in the Tilman model, how to avoid or accept them
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
In making models for biological systems one expects to grasp the biology and be able to use one's intuition to predict the outcome. This paper is about the discrepancy between what is expected and what is the outcome of the analysis of the model, a paradox. The Tilman model of consumers competing for resources has some aspects that may appear counterintuitive. Within the standard Tilman model for a single consumer and a single resource, when a very efficient consumer rapidly eats all available food, the resource density becomes zero, but if there is no food, how can the consumer survive? The paradox can be resolved by realising that in the short term indeed rapid consumption may lead to starvation and a decline in the consumer population, but in the long term a small resource and consumer density remain. A single consumer living on two essential nutrients leaves the density of the non-limiting nutrient above its critical level, so what is done with the extra food? We explain that food is not consumed just because it is available: in fact the additional food is not consumed at all, because the consumer has no use for it. For a model with two consumer species competing for a single nutrient one of the consumers will eventually disappear, even if the food economics of the surviving one is worse and there is still much food wasted. Both are inherent aspects of the model, and the paradox can be avoided if the difference between the consumer species is small, in which case it will take very long to reach equilibrium. We argue that in general an extended stability analysis, in which not only the asymptotically stable state is considered, but also the unstable steady states and all time scales involved in the transient dynamics, gives more insight into the dynamics involved and thus can help in avoiding apparent contradictions in ecological models, or accepting them.
[ { "created": "Mon, 18 Jan 2021 14:02:16 GMT", "version": "v1" }, { "created": "Thu, 4 Apr 2024 10:20:10 GMT", "version": "v2" } ]
2024-04-05
[ [ "van Opheusden", "Joost H. J.", "" ], [ "Hemerik", "Lia", "" ] ]
In making models for biological systems one expects to grasp the biology and be able to use one's intuition to predict the outcome. This paper is about the discrepancy between what is expected and what is the outcome of the analysis of the model, a paradox. The Tilman model of consumers competing for resources has some aspects that may appear counterintuitive. Within the standard Tilman model for a single consumer and a single resource, when a very efficient consumer rapidly eats all available food, the resource density becomes zero, but if there is no food, how can the consumer survive? The paradox can be resolved by realising that in the short term indeed rapid consumption may lead to starvation and a decline in the consumer population, but in the long term a small resource and consumer density remain. A single consumer living on two essential nutrients leaves the density of the non-limiting nutrient above its critical level, so what is done with the extra food? We explain that food is not consumed just because it is available: in fact the additional food is not consumed at all, because the consumer has no use for it. For a model with two consumer species competing for a single nutrient one of the consumers will eventually disappear, even if the food economics of the surviving one is worse and there is still much food wasted. Both are inherent aspects of the model, and the paradox can be avoided if the difference between the consumer species is small, in which case it will take very long to reach equilibrium. We argue that in general an extended stability analysis, in which not only the asymptotically stable state is considered, but also the unstable steady states and all time scales involved in the transient dynamics, gives more insight into the dynamics involved and thus can help in avoiding apparent contradictions in ecological models, or accepting them.
0902.3148
Claude Pasquier
Claude Pasquier, Stavros Hamodrakas
An hierarchical artificial neural network system for the classification of transmembrane proteins
null
Protein Engineering Design & Selection, 1999, 12 (8), pp.631-4
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents a simple artificial neural network which classifies proteins into two classes from their sequences alone: the membrane protein class and the non-membrane protein class. This may be important in the functional assignment and analysis of open reading frames (ORFs) identified in complete genomes and, especially, those ORFs that correspond to proteins with unknown function. The network described here has a simple hierarchical feed-forward topology and a limited number of neurons, which makes it very fast. By using only information contained in 11 protein sequences, the method was able to identify, with 100% accuracy, all membrane proteins with reliable topologies collected from several papers in the literature. Applied to a test set of 995 globular, water-soluble proteins, the neural network falsely classified 23 of them in the membrane protein class (97.7% correct assignment). The method was also applied with considerable success to the complete SWISS-PROT database and to ORFs of several complete genomes. The neural network developed was combined with the PRED-TMR algorithm (Pasquier,C., Promponas,V.J., Palaios,G.A., Hamodrakas,J.S. and Hamodrakas,S.J., 1999) in a new application package called PRED-TMR2. A WWW server running the PRED-TMR2 software is available at http://o2.db.uoa.gr/PRED-TMR2
[ { "created": "Wed, 18 Feb 2009 14:06:18 GMT", "version": "v1" }, { "created": "Mon, 9 May 2016 07:40:29 GMT", "version": "v2" } ]
2016-05-10
[ [ "Pasquier", "Claude", "" ], [ "Hamodrakas", "Stavros", "" ] ]
This work presents a simple artificial neural network which classifies proteins into two classes from their sequences alone: the membrane protein class and the non-membrane protein class. This may be important in the functional assignment and analysis of open reading frames (ORFs) identified in complete genomes and, especially, those ORFs that correspond to proteins with unknown function. The network described here has a simple hierarchical feed-forward topology and a limited number of neurons, which makes it very fast. By using only information contained in 11 protein sequences, the method was able to identify, with 100% accuracy, all membrane proteins with reliable topologies collected from several papers in the literature. Applied to a test set of 995 globular, water-soluble proteins, the neural network falsely classified 23 of them in the membrane protein class (97.7% correct assignment). The method was also applied with considerable success to the complete SWISS-PROT database and to ORFs of several complete genomes. The neural network developed was combined with the PRED-TMR algorithm (Pasquier,C., Promponas,V.J., Palaios,G.A., Hamodrakas,J.S. and Hamodrakas,S.J., 1999) in a new application package called PRED-TMR2. A WWW server running the PRED-TMR2 software is available at http://o2.db.uoa.gr/PRED-TMR2
2212.07105
Angelo Troina
Marco Aldinucci and Livio Bioglio and Cristina Calcagno and Mario Coppo and Ferruccio Damiani and Maurizio Drocco and Elena Grassi and Pablo Ram\'on and Eva Sciacca and Salvatore Spinella and Angelo Troina
Modelling Biological and Ecological Systems with the Calculus of Wrapped Compartments
arXiv admin note: text overlap with arXiv:1505.01985
null
null
null
q-bio.QM cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The modelling and analysis of biological systems has deep roots in Mathematics, specifically in the field of Ordinary Differential Equations. Alternative approaches based on formal calculi, often derived from process algebras or term rewriting systems, provide a quite complementary way to analyse the behaviour of biological systems. These calculi allow one to cope in a natural way with notions like compartments and membranes, which are not easy (sometimes impossible) to handle with purely numerical approaches, and they are often based on stochastic simulation methods. The Calculus of Wrapped Compartments is a framework based on stochastic multiset rewriting in a compartmentalised setting, used for the description of biological and ecological systems. We provide an extended presentation of the Calculus of Wrapped Compartments, sketch a few modelling guidelines to encode biological and ecological interactions, show how spatial properties can be handled within the framework, and define a hybrid simulation algorithm. Several applications in Biology and Ecology are proposed as modelling case studies.
[ { "created": "Wed, 14 Dec 2022 08:50:18 GMT", "version": "v1" } ]
2022-12-15
[ [ "Aldinucci", "Marco", "" ], [ "Bioglio", "Livio", "" ], [ "Calcagno", "Cristina", "" ], [ "Coppo", "Mario", "" ], [ "Damiani", "Ferruccio", "" ], [ "Drocco", "Maurizio", "" ], [ "Grassi", "Elena", "" ], ...
The modelling and analysis of biological systems has deep roots in Mathematics, specifically in the field of Ordinary Differential Equations. Alternative approaches based on formal calculi, often derived from process algebras or term rewriting systems, provide a quite complementary way to analyse the behaviour of biological systems. These calculi allow one to cope in a natural way with notions like compartments and membranes, which are not easy (sometimes impossible) to handle with purely numerical approaches, and they are often based on stochastic simulation methods. The Calculus of Wrapped Compartments is a framework based on stochastic multiset rewriting in a compartmentalised setting, used for the description of biological and ecological systems. We provide an extended presentation of the Calculus of Wrapped Compartments, sketch a few modelling guidelines to encode biological and ecological interactions, show how spatial properties can be handled within the framework, and define a hybrid simulation algorithm. Several applications in Biology and Ecology are proposed as modelling case studies.
2101.03470
Tom Chou
Mingtao Xia, Tom Chou
Kinetic theory for structured populations: application to stochastic sizer-timer models of cell proliferation
14 pages, 1 figure
null
10.1088/1751-8121/abf532
null
q-bio.PE cond-mat.stat-mech
http://creativecommons.org/licenses/by-nc-nd/4.0/
We derive the full kinetic equations describing the evolution of the probability density distribution for a structured population such as cells distributed according to their ages and sizes. The kinetic equations for such a "sizer-timer" model incorporate both demographic and individual cell growth rate stochasticities. Averages taken over the densities obeying the kinetic equations can be used to generate a second-order PDE that incorporates the growth rate stochasticity. On the other hand, marginalizing over the densities yields a modified birth-death process that shows how age and size influence demographic stochasticity. Our kinetic framework is thus a more complete model that subsumes both the deterministic PDE and birth-death master equation representations for structured populations.
[ { "created": "Sun, 10 Jan 2021 03:58:07 GMT", "version": "v1" } ]
2021-09-22
[ [ "Xia", "Mingtao", "" ], [ "Chou", "Tom", "" ] ]
We derive the full kinetic equations describing the evolution of the probability density distribution for a structured population such as cells distributed according to their ages and sizes. The kinetic equations for such a "sizer-timer" model incorporate both demographic and individual cell growth rate stochasticities. Averages taken over the densities obeying the kinetic equations can be used to generate a second-order PDE that incorporates the growth rate stochasticity. On the other hand, marginalizing over the densities yields a modified birth-death process that shows how age and size influence demographic stochasticity. Our kinetic framework is thus a more complete model that subsumes both the deterministic PDE and birth-death master equation representations for structured populations.
2311.08544
Jian Li
Jian Li, Greta Tuckute, Evelina Fedorenko, Brian L. Edlow, Adrian V. Dalca, Bruce Fischl
JOSA: Joint surface-based registration and atlas construction of brain geometry and function
A. V. Dalca and B. Fischl are co-senior authors with equal contribution. arXiv admin note: text overlap with arXiv:2303.01592
null
null
null
q-bio.NC cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Surface-based cortical registration is an important topic in medical image analysis and facilitates many downstream applications. Current approaches for cortical registration are mainly driven by geometric features, such as sulcal depth and curvature, and often assume that registration of folding patterns leads to alignment of brain function. However, functional variability of anatomically corresponding areas across subjects has been widely reported, particularly in higher-order cognitive areas. In this work, we present JOSA, a novel cortical registration framework that jointly models the mismatch between geometry and function while simultaneously learning an unbiased population-specific atlas. Using a semi-supervised training strategy, JOSA achieves registration performance superior to state-of-the-art methods in both geometry and function, without requiring functional data at inference. This learning framework can be extended to guide spherical registration with any auxiliary data that is available during training but difficult or impossible to obtain during inference, such as parcellations, architectonic identity, transcriptomic information, and molecular profiles. By recognizing the mismatch between geometry and function, JOSA provides new insights into the future development of registration methods using joint analysis of brain structure and function.
[ { "created": "Sun, 22 Oct 2023 02:16:48 GMT", "version": "v1" } ]
2023-11-16
[ [ "Li", "Jian", "" ], [ "Tuckute", "Greta", "" ], [ "Fedorenko", "Evelina", "" ], [ "Edlow", "Brian L.", "" ], [ "Dalca", "Adrian V.", "" ], [ "Fischl", "Bruce", "" ] ]
Surface-based cortical registration is an important topic in medical image analysis and facilitates many downstream applications. Current approaches for cortical registration are mainly driven by geometric features, such as sulcal depth and curvature, and often assume that registration of folding patterns leads to alignment of brain function. However, functional variability of anatomically corresponding areas across subjects has been widely reported, particularly in higher-order cognitive areas. In this work, we present JOSA, a novel cortical registration framework that jointly models the mismatch between geometry and function while simultaneously learning an unbiased population-specific atlas. Using a semi-supervised training strategy, JOSA achieves registration performance superior to state-of-the-art methods in both geometry and function, without requiring functional data at inference. This learning framework can be extended to guide spherical registration with any auxiliary data that is available during training but difficult or impossible to obtain during inference, such as parcellations, architectonic identity, transcriptomic information, and molecular profiles. By recognizing the mismatch between geometry and function, JOSA provides new insights into the future development of registration methods using joint analysis of brain structure and function.
1509.01434
Charles Perretti
Charles T. Perretti, Stephan B. Munch, Michael J. Fogarty, George Sugihara
Global evidence for non-random dynamics in fish recruitment
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding what controls apparently random fluctuations in fish recruitment is a major challenge in fisheries science. Our current inability to anticipate recruitment failures has led to costly management actions and in some cases complete fishery collapse. Time series observations of fish recruitment reflect an interplay of underlying population processes, environmental forcing, measurement error, and process error. Given that the error component is often very strong, an important unresolved question is whether any non-random signal can be uncovered in the annual fluctuations of recruitment time series. Here, we address this fundamental question in an analysis of 569 fish populations from a global database of recruitment. Using a nonparametric time series analysis method, we find overwhelming evidence for non-random dynamics operating in the recruitment process. Unlike previous explorations of this topic, our approach does not require the specification of a stock-recruitment model, and it can be used on recruitment estimates derived from a wide range of abundance estimation methods. The evidence for non-randomness is robust across a wide range of fish families and abundance estimation methods. We also find that the statistical support for non-randomness increases consistently with the number of observations of a fish stock. This result provides the encouraging news that with continued observations, and the appropriate covariates, we should ultimately be able to uncover the mechanistic drivers of fish recruitment.
[ { "created": "Fri, 4 Sep 2015 13:06:27 GMT", "version": "v1" } ]
2015-09-07
[ [ "Perretti", "Charles T.", "" ], [ "Munch", "Stephan B.", "" ], [ "Fogarty", "Michael J.", "" ], [ "Sugihara", "George", "" ] ]
Understanding what controls apparently random fluctuations in fish recruitment is a major challenge in fisheries science. Our current inability to anticipate recruitment failures has led to costly management actions and in some cases complete fishery collapse. Time series observations of fish recruitment reflect an interplay of underlying population processes, environmental forcing, measurement error, and process error. Given that the error component is often very strong, an important unresolved question is whether any non-random signal can be uncovered in the annual fluctuations of recruitment time series. Here, we address this fundamental question in an analysis of 569 fish populations from a global database of recruitment. Using a nonparametric time series analysis method, we find overwhelming evidence for non-random dynamics operating in the recruitment process. Unlike previous explorations of this topic, our approach does not require the specification of a stock-recruitment model, and it can be used on recruitment estimates derived from a wide range of abundance estimation methods. The evidence for non-randomness is robust across a wide range of fish families and abundance estimation methods. We also find that the statistical support for non-randomness increases consistently with the number of observations of a fish stock. This result provides the encouraging news that with continued observations, and the appropriate covariates, we should ultimately be able to uncover the mechanistic drivers of fish recruitment.
2303.07830
Tengjun Liu
Tengjun Liu, Yansong Chua, Yiwei Zhang, Yuxiao Ning, Pengfu Liu, Guihua Wan, Zijun Wan, Shaomin Zhang, Weidong Chen
Emergent Bio-Functional Similarities in a Cortical-Spike-Train-Decoding Spiking Neural Network Facilitate Predictions of Neural Computation
null
null
null
null
q-bio.NC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite their better bio-plausibility, goal-driven spiking neural networks (SNNs) have not achieved applicable performance in classifying biological spike trains, and have shown few bio-functional similarities compared to traditional artificial neural networks. In this study, we proposed the motorSRNN, a recurrent SNN topologically inspired by the neural motor circuit of primates. By employing the motorSRNN to decode spike trains from the primary motor cortex of monkeys, we achieved a good balance between classification accuracy and energy consumption. The motorSRNN communicated with the input by capturing and cultivating more cosine-tuning, an essential property of neurons in the motor cortex, and maintained its stability during training. Such training-induced cultivation and persistency of cosine-tuning was also observed in our monkeys. Moreover, the motorSRNN produced additional bio-functional similarities at the single-neuron, population, and circuit levels, demonstrating biological authenticity. Furthermore, ablation studies on the motorSRNN suggest that long-term stable feedback synapses contribute to the training-induced cultivation in the motor cortex. Besides these novel findings and predictions, we offer a new framework for building authentic models of neural computation.
[ { "created": "Tue, 14 Mar 2023 12:06:56 GMT", "version": "v1" } ]
2023-03-15
[ [ "Liu", "Tengjun", "" ], [ "Chua", "Yansong", "" ], [ "Zhang", "Yiwei", "" ], [ "Ning", "Yuxiao", "" ], [ "Liu", "Pengfu", "" ], [ "Wan", "Guihua", "" ], [ "Wan", "Zijun", "" ], [ "Zhang", "Shaom...
Despite their better bio-plausibility, goal-driven spiking neural networks (SNNs) have not achieved applicable performance in classifying biological spike trains, and have shown few bio-functional similarities compared to traditional artificial neural networks. In this study, we proposed the motorSRNN, a recurrent SNN topologically inspired by the neural motor circuit of primates. By employing the motorSRNN to decode spike trains from the primary motor cortex of monkeys, we achieved a good balance between classification accuracy and energy consumption. The motorSRNN communicated with the input by capturing and cultivating more cosine-tuning, an essential property of neurons in the motor cortex, and maintained its stability during training. Such training-induced cultivation and persistency of cosine-tuning was also observed in our monkeys. Moreover, the motorSRNN produced additional bio-functional similarities at the single-neuron, population, and circuit levels, demonstrating biological authenticity. Furthermore, ablation studies on the motorSRNN suggest that long-term stable feedback synapses contribute to the training-induced cultivation in the motor cortex. Besides these novel findings and predictions, we offer a new framework for building authentic models of neural computation.
2107.14016
Ron Togunov
Ron R. Togunov, Andrew E. Derocher, Nicholas J. Lunn, Marie Auger-M\'eth\'e
Characterising menotactic behaviours in movement data using hidden Markov models
null
null
10.1111/2041-210X.13681
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
1. Movement is the primary means by which animals obtain resources and avoid hazards. Most movement exhibits directional bias that is related to environmental features (taxis), such as the location of food patches, predators, ocean currents, or wind. Numerous behaviours with directional bias can be characterized by maintaining orientation at an angle relative to the environmental stimuli (menotaxis), including navigation relative to sunlight or magnetic fields and energy-conserving flight across wind. However, no statistical methods exist to flexibly classify and characterise such directional bias. 2. We propose a biased correlated random walk model that can identify menotactic behaviours by predicting turning angle as a trade-off between directional persistence and directional bias relative to environmental stimuli without making a priori assumptions about the angle of bias. We apply the model within the framework of a multi-state hidden Markov model (HMM) and describe methods to remedy information loss associated with coarse environmental data to improve the classification and parameterization of directional bias. 3. Using simulation studies, we illustrate how our method more accurately classifies behavioural states compared to conventional correlated random walk HMMs that do not incorporate directional bias. We illustrate the application of these methods by identifying cross wind olfactory foraging and drifting behaviour mediated by wind-driven sea ice drift in polar bears (Ursus maritimus) from movement data collected by satellite telemetry. 4. The extensions we propose can be readily applied to movement data to identify and characterize behaviours with directional bias toward any angle, and open up new avenues to investigate more mechanistic relationships between animal movement and the environment.
[ { "created": "Tue, 27 Jul 2021 20:14:24 GMT", "version": "v1" } ]
2021-07-30
[ [ "Togunov", "Ron R.", "" ], [ "Derocher", "Andrew E.", "" ], [ "Lunn", "Nicholas J.", "" ], [ "Auger-Méthé", "Marie", "" ] ]
1. Movement is the primary means by which animals obtain resources and avoid hazards. Most movement exhibits directional bias that is related to environmental features (taxis), such as the location of food patches, predators, ocean currents, or wind. Numerous behaviours with directional bias can be characterized by maintaining orientation at an angle relative to the environmental stimuli (menotaxis), including navigation relative to sunlight or magnetic fields and energy-conserving flight across wind. However, no statistical methods exist to flexibly classify and characterise such directional bias. 2. We propose a biased correlated random walk model that can identify menotactic behaviours by predicting turning angle as a trade-off between directional persistence and directional bias relative to environmental stimuli without making a priori assumptions about the angle of bias. We apply the model within the framework of a multi-state hidden Markov model (HMM) and describe methods to remedy information loss associated with coarse environmental data to improve the classification and parameterization of directional bias. 3. Using simulation studies, we illustrate how our method more accurately classifies behavioural states compared to conventional correlated random walk HMMs that do not incorporate directional bias. We illustrate the application of these methods by identifying cross wind olfactory foraging and drifting behaviour mediated by wind-driven sea ice drift in polar bears (Ursus maritimus) from movement data collected by satellite telemetry. 4. The extensions we propose can be readily applied to movement data to identify and characterize behaviours with directional bias toward any angle, and open up new avenues to investigate more mechanistic relationships between animal movement and the environment.
1409.2942
Gabriel Kreiman
Hanlin Tang, Calin Buia, Joseph Madsen, William S. Anderson, Gabriel Kreiman
A role for recurrent processing in object completion: neurophysiological, psychophysical and computational evidence
null
Neuron. 2014 Aug 6;83(3):736-48
10.1016/j.neuron.2014.06.017
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognition of objects from partial information presents a significant challenge for theories of vision because it requires spatial integration and extrapolation from prior knowledge. We combined neurophysiological recordings in human cortex with psychophysical measurements and computational modeling to investigate the mechanisms involved in object completion. We recorded intracranial field potentials from 1,699 electrodes in 18 epilepsy patients to measure the timing and selectivity of responses along human visual cortex to whole and partial objects. Responses along the ventral visual stream remained selective despite showing only 9-25% of the object. However, these visually selective signals emerged ~100 ms later for partial versus whole objects. The processing delays were particularly pronounced in higher visual areas within the ventral stream, suggesting the involvement of additional recurrent processing. In separate psychophysics experiments, disrupting this recurrent computation with a backward mask at ~75ms significantly impaired recognition of partial, but not whole, objects. Additionally, computational modeling shows that the performance of a purely bottom-up architecture is impaired by heavy occlusion and that this effect can be partially rescued via the incorporation of top-down connections. These results provide spatiotemporal constraints on theories of object recognition that involve recurrent processing to recognize objects from partial information.
[ { "created": "Wed, 10 Sep 2014 02:51:01 GMT", "version": "v1" } ]
2014-09-11
[ [ "Tang", "Hanlin", "" ], [ "Buia", "Calin", "" ], [ "Madsen", "Joseph", "" ], [ "Anderson", "William S.", "" ], [ "Kreiman", "Gabriel", "" ] ]
Recognition of objects from partial information presents a significant challenge for theories of vision because it requires spatial integration and extrapolation from prior knowledge. We combined neurophysiological recordings in human cortex with psychophysical measurements and computational modeling to investigate the mechanisms involved in object completion. We recorded intracranial field potentials from 1,699 electrodes in 18 epilepsy patients to measure the timing and selectivity of responses along human visual cortex to whole and partial objects. Responses along the ventral visual stream remained selective despite showing only 9-25% of the object. However, these visually selective signals emerged ~100 ms later for partial versus whole objects. The processing delays were particularly pronounced in higher visual areas within the ventral stream, suggesting the involvement of additional recurrent processing. In separate psychophysics experiments, disrupting this recurrent computation with a backward mask at ~75ms significantly impaired recognition of partial, but not whole, objects. Additionally, computational modeling shows that the performance of a purely bottom-up architecture is impaired by heavy occlusion and that this effect can be partially rescued via the incorporation of top-down connections. These results provide spatiotemporal constraints on theories of object recognition that involve recurrent processing to recognize objects from partial information.
1212.3583
Bj\"orn Zelinski
Bj\"orn Zelinski, Nina M\"uller, and Jan Kierfeld
Dynamics and length distribution of microtubules under force and confinement
null
Phys. Rev. E 86, 041918 (2012)
10.1103/PhysRevE.86.041918
null
q-bio.SC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the microtubule polymerization dynamics with catastrophe and rescue events for three different confinement scenarios, which mimic typical cellular environments: (i) The microtubule is confined by rigid and fixed walls, (ii) it grows under constant force, and (iii) it grows against an elastic obstacle with a linearly increasing force. We use realistic catastrophe models and analyze the microtubule dynamics, the resulting microtubule length distributions, and force generation by stochastic and mean field calculations; in addition, we perform stochastic simulations. We also investigate the force dynamics if growth parameters are perturbed in dilution experiments. Finally, we show the robustness of our results against changes of catastrophe models and load distribution factors.
[ { "created": "Fri, 14 Dec 2012 20:22:03 GMT", "version": "v1" } ]
2015-05-12
[ [ "Zelinski", "Björn", "" ], [ "Müller", "Nina", "" ], [ "Kierfeld", "Jan", "" ] ]
We investigate the microtubule polymerization dynamics with catastrophe and rescue events for three different confinement scenarios, which mimic typical cellular environments: (i) The microtubule is confined by rigid and fixed walls, (ii) it grows under constant force, and (iii) it grows against an elastic obstacle with a linearly increasing force. We use realistic catastrophe models and analyze the microtubule dynamics, the resulting microtubule length distributions, and force generation by stochastic and mean field calculations; in addition, we perform stochastic simulations. We also investigate the force dynamics if growth parameters are perturbed in dilution experiments. Finally, we show the robustness of our results against changes of catastrophe models and load distribution factors.
0801.2855
Attila Szolnoki
Gyorgy Szabo and Attila Szolnoki
Phase transitions induced by variation of invasion rates in spatial cyclic predator-prey models with four or six species
4 pages, 4 figures
Phys. Rev. E 77 (2008) 011906
10.1103/PhysRevE.77.011906
null
q-bio.PE physics.bio-ph
null
Cyclic predator-prey models with four or six species are studied on a square lattice when the invasion rates are varied. It is found that the cyclic invasions maintain a self-organizing pattern as long as the deviation of the invasion rate(s) from a uniform value does not exceed a threshold value. For larger deviations the system exhibits a continuous phase transition into a frozen distribution of odd (or even) label species.
[ { "created": "Fri, 18 Jan 2008 11:38:05 GMT", "version": "v1" } ]
2008-01-21
[ [ "Szabo", "Gyorgy", "" ], [ "Szolnoki", "Attila", "" ] ]
Cyclic predator-prey models with four or six species are studied on a square lattice when the invasion rates are varied. It is found that the cyclic invasions maintain a self-organizing pattern as long as the deviation of the invasion rate(s) from a uniform value does not exceed a threshold value. For larger deviations the system exhibits a continuous phase transition into a frozen distribution of odd (or even) label species.
2103.02081
Jim Wu
Jim Wu, Pankaj Mehta, and David Schwab
Understanding Species Abundance Distributions in Complex Ecosystems of Interacting Species
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Niche and neutral theory are two prevailing, yet much debated, ideas in ecology proposed to explain the patterns of biodiversity. Whereas niche theory emphasizes selective differences between species and interspecific interactions in shaping the community, neutral theory supposes functional equivalence between species and points to stochasticity as the primary driver of ecological dynamics. In this work, we draw a bridge between these two opposing theories. Starting from a Lotka-Volterra (LV) model with demographic noise and random symmetric interactions, we analytically derive the stationary population statistics and species abundance distribution (SAD). Using these results, we demonstrate that the model can exhibit three classes of SADs commonly found in niche and neutral theories and find conditions that allow an ecosystem to transition between these various regimes. Thus, we reconcile how neutral-like statistics may arise from a diverse community with niche differentiation.
[ { "created": "Tue, 2 Mar 2021 23:13:34 GMT", "version": "v1" }, { "created": "Thu, 4 Mar 2021 09:22:18 GMT", "version": "v2" } ]
2021-03-05
[ [ "Wu", "Jim", "" ], [ "Mehta", "Pankaj", "" ], [ "Schwab", "David", "" ] ]
Niche and neutral theory are two prevailing, yet much debated, ideas in ecology proposed to explain the patterns of biodiversity. Whereas niche theory emphasizes selective differences between species and interspecific interactions in shaping the community, neutral theory supposes functional equivalence between species and points to stochasticity as the primary driver of ecological dynamics. In this work, we draw a bridge between these two opposing theories. Starting from a Lotka-Volterra (LV) model with demographic noise and random symmetric interactions, we analytically derive the stationary population statistics and species abundance distribution (SAD). Using these results, we demonstrate that the model can exhibit three classes of SADs commonly found in niche and neutral theories and find conditions that allow an ecosystem to transition between these various regimes. Thus, we reconcile how neutral-like statistics may arise from a diverse community with niche differentiation.
2303.13591
Patricio Arru\'e Pa
Patricio Arru\'e, Kaveh Laksari, Nima Toosizadeh
Associating Frailty and Dynamic Dysregulation between Motor and Cardiac Autonomic Systems
16 pages, 3 tables, 4 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Frailty is a geriatric syndrome associated with the lack of physiological reserve and consequent adverse outcomes (therapy complications and death) in older adults. Recent research has shown associations between heart rate (HR) dynamics (HR changes during physical activity) and frailty. The goal of the present study was to determine the effect of frailty on the interconnection between motor and cardiac systems during a localized upper-extremity function (UEF) test. Fifty-six older adults aged 65 or older were recruited and performed the UEF task of rapid elbow flexion for 20 seconds with the right arm. Frailty was assessed using the Fried phenotype. Wearable gyroscopes and electrocardiography were used to measure motor function and HR dynamics. Using convergent cross-mapping (CCM), the interconnection between motor (angular displacement) and cardiac (HR) performance was assessed. A significantly weaker interconnection was observed among pre-frail and frail participants compared to non-frail individuals (p<0.01, effect size=0.81$\pm$0.08). Using logistic models, pre-frailty and frailty were identified with sensitivity and specificity of 82% to 89%, using motor, HR dynamics, and interconnection parameters. Findings suggested a strong association between cardiac-motor interconnection and frailty. Adding CCM parameters in a multimodal model may provide a promising measure of frailty.
[ { "created": "Thu, 23 Mar 2023 18:11:41 GMT", "version": "v1" } ]
2023-03-27
[ [ "Arrué", "Patricio", "" ], [ "Laksari", "Kaveh", "" ], [ "Toosizadeh", "Nima", "" ] ]
Frailty is a geriatric syndrome associated with the lack of physiological reserve and consequent adverse outcomes (therapy complications and death) in older adults. Recent research has shown associations between heart rate (HR) dynamics (HR changes during physical activity) and frailty. The goal of the present study was to determine the effect of frailty on the interconnection between motor and cardiac systems during a localized upper-extremity function (UEF) test. Fifty-six older adults aged 65 or older were recruited and performed the UEF task of rapid elbow flexion for 20 seconds with the right arm. Frailty was assessed using the Fried phenotype. Wearable gyroscopes and electrocardiography were used to measure motor function and HR dynamics. Using convergent cross-mapping (CCM), the interconnection between motor (angular displacement) and cardiac (HR) performance was assessed. A significantly weaker interconnection was observed among pre-frail and frail participants compared to non-frail individuals (p<0.01, effect size=0.81$\pm$0.08). Using logistic models, pre-frailty and frailty were identified with sensitivity and specificity of 82% to 89%, using motor, HR dynamics, and interconnection parameters. Findings suggested a strong association between cardiac-motor interconnection and frailty. Adding CCM parameters in a multimodal model may provide a promising measure of frailty.
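Convergent cross-mapping, the core coupling measure above, can be sketched with simplex projection on synthetic data. The embedding dimension, coupling strength, and driver system below are illustrative choices, not the study's gyroscope/ECG pipeline: here a logistic map `y` drives `x`, so cross-mapping from `x`'s delay embedding should recover `y`.

```python
import numpy as np

def ccm_skill(x, y, E=3, tau=1):
    """Cross-map skill of x -> y: reconstruct y from the delay embedding
    of x (simplex projection with E+1 neighbours) and return the Pearson
    correlation between the estimates and the true y."""
    n = len(x) - (E - 1) * tau
    # delay embedding: rows are [x_t, x_{t-tau}, ..., x_{t-(E-1)tau}]
    M = np.column_stack([x[(E - 1) * tau - j * tau : (E - 1) * tau - j * tau + n]
                         for j in range(E)])
    target = y[(E - 1) * tau:]
    est = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(M - M[i], axis=1)
        d[i] = np.inf                   # exclude the point itself
        nn = np.argsort(d)[:E + 1]      # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        w /= w.sum()
        est[i] = w @ target[nn]
    return float(np.corrcoef(est, target)[0, 1])

# coupled logistic maps: y drives x, so the x-manifold encodes y
x = np.empty(500); y = np.empty(500)
x[0], y[0] = 0.4, 0.2
for t in range(499):
    y[t + 1] = y[t] * (3.8 - 3.8 * y[t])
    x[t + 1] = x[t] * (3.7 - 3.7 * x[t] - 0.1 * y[t])
skill = ccm_skill(x, y)
```

A weaker cardiac-motor interconnection, as reported for frail participants, would correspond to a lower cross-map skill.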
2405.17960
Massimo Bilancioni
Massimo Bilancioni and Massimiliano Esposito
Elementary Flux Modes as CRN Gears for Free Energy Transduction
6 pages, 3 figures
null
null
null
q-bio.MN cond-mat.stat-mech
http://creativecommons.org/licenses/by/4.0/
We demonstrate that, for a chemical reaction network (CRN) engaged in energy transduction, its optimal operation from a thermodynamic efficiency standpoint is contingent upon its working conditions. Analogously to the bicycle gear system, CRNs have at their disposal several transducing mechanisms characterized by different yields. We highlight the critical role of the CRN's elementary flux modes in determining this "gearing" and their impact on maximizing energy transduction efficiency. Furthermore, we introduce an enzymatically regulated CRN, engineered to autonomously adjust its "gear", thereby optimizing its efficiency under different external conditions.
[ { "created": "Tue, 28 May 2024 08:41:09 GMT", "version": "v1" }, { "created": "Fri, 31 May 2024 15:48:59 GMT", "version": "v2" } ]
2024-06-03
[ [ "Bilancioni", "Massimo", "" ], [ "Esposito", "Massimiliano", "" ] ]
We demonstrate that, for a chemical reaction network (CRN) engaged in energy transduction, its optimal operation from a thermodynamic efficiency standpoint is contingent upon its working conditions. Analogously to the bicycle gear system, CRNs have at their disposal several transducing mechanisms characterized by different yields. We highlight the critical role of the CRN's elementary flux modes in determining this "gearing" and their impact on maximizing energy transduction efficiency. Furthermore, we introduce an enzymatically regulated CRN, engineered to autonomously adjust its "gear", thereby optimizing its efficiency under different external conditions.
2407.18336
Musa Azeem
Musa Azeem, Homayoun Valafar
Dihedral Angle Adherence: Evaluating Protein Structure Predictions in the Absence of Experimental Data
6 pages, 7 figures. Accepted to and to be published by BIOCOMP'24, The 25th International Conference on Bioinformatics & Computational Biology
null
null
null
q-bio.BM cs.CE
http://creativecommons.org/licenses/by/4.0/
Determining the 3D structures of proteins is essential in understanding their behavior in the cellular environment. Computational methods of predicting protein structures have advanced, but assessing prediction accuracy remains a challenge. The traditional method, RMSD, relies on experimentally determined structures and offers no insight into where predictions can be improved. We propose an alternative: analyzing dihedral angles, bypassing the need for the reference structure of an evaluated protein. Our method segments proteins into amino acid subsequences and searches for matches, comparing dihedral angles across numerous proteins to compute a metric using the Mahalanobis distance. Evaluated on many predictions, our approach correlates with RMSD and identifies areas for prediction enhancement. This method offers a promising route for accurate protein structure prediction assessment and improvement.
[ { "created": "Tue, 9 Jul 2024 16:54:36 GMT", "version": "v1" } ]
2024-07-29
[ [ "Azeem", "Musa", "" ], [ "Valafar", "Homayoun", "" ] ]
Determining the 3D structures of proteins is essential in understanding their behavior in the cellular environment. Computational methods of predicting protein structures have advanced, but assessing prediction accuracy remains a challenge. The traditional method, RMSD, relies on experimentally determined structures and offers no insight into where predictions can be improved. We propose an alternative: analyzing dihedral angles, bypassing the need for the reference structure of an evaluated protein. Our method segments proteins into amino acid subsequences and searches for matches, comparing dihedral angles across numerous proteins to compute a metric using the Mahalanobis distance. Evaluated on many predictions, our approach correlates with RMSD and identifies areas for prediction enhancement. This method offers a promising route for accurate protein structure prediction assessment and improvement.
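The Mahalanobis-distance scoring step can be illustrated in isolation. The helical cluster and the test (phi, psi) pairs below are made-up values; the actual method aggregates such distances over matched fragments across many proteins and would also need to handle the periodicity of dihedral angles, which this sketch ignores.

```python
import numpy as np

def mahalanobis_score(observed, matches):
    """Mahalanobis distance of an observed (phi, psi) pair from the
    empirical distribution of dihedral angles in matched subsequences."""
    mu = matches.mean(axis=0)
    cov = np.cov(matches, rowvar=False)
    diff = observed - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

rng = np.random.default_rng(0)
# hypothetical dihedral angles (degrees) from matched fragments,
# clustered around an alpha-helical region of the Ramachandran plot
matches = rng.normal([-60.0, -45.0], [8.0, 10.0], size=(200, 2))
near = mahalanobis_score(np.array([-62.0, -40.0]), matches)  # plausible pair
far = mahalanobis_score(np.array([60.0, 40.0]), matches)     # implausible pair
```

A large distance flags a residue whose predicted conformation deviates from what matched fragments suggest, which is the kind of local signal RMSD cannot provide.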
2002.11181
Timothy Oleskiw
Timothy D. Oleskiw, Wyeth Bair, Eric Shea-Brown, Nicolas Brunel
Firing rate of the leaky integrate-and-fire neuron with stochastic conductance-based synaptic inputs with short decay times
5 pages, 2 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We compute the firing rate of a leaky integrate-and-fire (LIF) neuron with stochastic conductance-based inputs in the limit when synaptic decay times are much shorter than the membrane time constant. A comparison of our analytical results to numeric simulations is presented for a range of biophysically-realistic parameters.
[ { "created": "Tue, 25 Feb 2020 21:26:40 GMT", "version": "v1" } ]
2020-02-27
[ [ "Oleskiw", "Timothy D.", "" ], [ "Bair", "Wyeth", "" ], [ "Shea-Brown", "Eric", "" ], [ "Brunel", "Nicolas", "" ] ]
We compute the firing rate of a leaky integrate-and-fire (LIF) neuron with stochastic conductance-based inputs in the limit when synaptic decay times are much shorter than the membrane time constant. A comparison of our analytical results to numeric simulations is presented for a range of biophysically-realistic parameters.
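The regime analyzed above (synaptic decay much shorter than the membrane time constant) can be probed with a direct simulation; the analytic rate formula is the paper's contribution and is not reproduced here. All parameter values below are illustrative placeholders, with a single Ornstein-Uhlenbeck excitatory conductance standing in for the stochastic conductance-based inputs.

```python
import numpy as np

def lif_firing_rate(T=5.0, dt=1e-4, tau_m=0.02, tau_s=0.002,
                    g0=0.5, sigma_g=0.3, E_syn=0.0,
                    v_th=-0.05, v_reset=-0.07, E_L=-0.07, seed=0):
    """Euler simulation of a leaky integrate-and-fire neuron driven by a
    stochastic (OU) excitatory conductance with short decay time tau_s.
    Returns the firing rate in Hz."""
    rng = np.random.default_rng(seed)
    v, g = E_L, g0
    spikes = 0
    for _ in range(int(T / dt)):
        # Ornstein-Uhlenbeck conductance (relative to leak), clipped at zero
        g += (g0 - g) * dt / tau_s + sigma_g * np.sqrt(2 * dt / tau_s) * rng.normal()
        g = max(g, 0.0)
        # membrane equation: leak plus conductance-based synaptic current
        v += (-(v - E_L) - g * (v - E_syn)) * dt / tau_m
        if v >= v_th:
            spikes += 1
            v = v_reset
    return spikes / T

rate = lif_firing_rate()
```

Sweeping `tau_s` toward zero in such a simulation is one way to check a short-decay-time analytical limit against numerics, as the abstract describes.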
1110.6250
Naoto Morikawa
Naoto Morikawa
A novel method for identification of local conformational changes in proteins
6 pages, 5 figures. Program ProteinEncoder and detailed data are freely available at http://www.genocript.com
null
null
null
q-bio.BM math.CO math.DG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Proteins are known to undergo conformational changes in the course of their functions. The changes in conformation are often attributable to a small fraction of residues within the protein. Therefore, identification of these variable regions is important for an understanding of protein function. Results: We propose a novel method for identification of local conformational changes in proteins. In our method, backbone conformations are encoded into a sequence of letters from a 16-letter alphabet (called D2 codes) to perform structural comparison. Since we do not use clustering analysis to encode local structures, the D2 codes not only provide an intuitively understandable description of protein structures, but also cover a wide variety of distortions. This paper shows that the D2 codes are better correlated with changes in the dihedral angles than a structural alphabet and a secondary structure description. In the case of the N37S mutant of HIV-1 protease, local conformational changes were captured by the D2 coding method more accurately than by other methods. The D2 coding also provided a reliable representation of the difference between NMR models of an HIV-1 protease mutant.
[ { "created": "Fri, 28 Oct 2011 04:51:36 GMT", "version": "v1" } ]
2011-10-31
[ [ "Morikawa", "Naoto", "" ] ]
Motivation: Proteins are known to undergo conformational changes in the course of their functions. The changes in conformation are often attributable to a small fraction of residues within the protein. Therefore, identification of these variable regions is important for an understanding of protein function. Results: We propose a novel method for identification of local conformational changes in proteins. In our method, backbone conformations are encoded into a sequence of letters from a 16-letter alphabet (called D2 codes) to perform structural comparison. Since we do not use clustering analysis to encode local structures, the D2 codes not only provide an intuitively understandable description of protein structures, but also cover a wide variety of distortions. This paper shows that the D2 codes are better correlated with changes in the dihedral angles than a structural alphabet and a secondary structure description. In the case of the N37S mutant of HIV-1 protease, local conformational changes were captured by the D2 coding method more accurately than by other methods. The D2 coding also provided a reliable representation of the difference between NMR models of an HIV-1 protease mutant.
1106.0269
Jason McEwen
A. Daducci, J. D. McEwen, D. Van De Ville, J.-Ph. Thiran, Y. Wiaux
Harmonic analysis of spherical sampling in diffusion MRI
1 page, 2 figures, 19th Annual Meeting of International Society for Magnetic Resonance in Medicine
null
null
null
q-bio.QM physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the last decade diffusion MRI has become a powerful tool to non-invasively study white-matter integrity in the brain. Recently many research groups have focused their attention on multi-shell spherical acquisitions with the aim of effectively mapping the diffusion signal with a lower number of q-space samples, hence enabling a crucial reduction of acquisition time. One of the quantities commonly studied in this context is the so-called orientation distribution function (ODF). In this setting, the spherical harmonic (SH) transform has gained a great deal of popularity thanks to its ability to perform convolution operations efficiently and accurately, such as the Funk-Radon transform notably required for ODF computation from q-space data. However, if the q-space signal is described with an unsuitable angular resolution at any b-value probed, aliasing (or interpolation) artifacts are unavoidably created. So far this aspect has been tackled empirically and, to our knowledge, no study has addressed this problem quantitatively. The aim of the present work is to study more theoretically the efficiency of multi-shell spherical sampling in diffusion MRI, in order to gain understanding of HYDI-like approaches, possibly paving the way to further optimization strategies.
[ { "created": "Wed, 1 Jun 2011 17:54:40 GMT", "version": "v1" } ]
2011-06-02
[ [ "Daducci", "A.", "" ], [ "McEwen", "J. D.", "" ], [ "Van De Ville", "D.", "" ], [ "Thiran", "J. -Ph.", "" ], [ "Wiaux", "Y.", "" ] ]
In the last decade diffusion MRI has become a powerful tool to non-invasively study white-matter integrity in the brain. Recently many research groups have focused their attention on multi-shell spherical acquisitions with the aim of effectively mapping the diffusion signal with a lower number of q-space samples, hence enabling a crucial reduction of acquisition time. One of the quantities commonly studied in this context is the so-called orientation distribution function (ODF). In this setting, the spherical harmonic (SH) transform has gained a great deal of popularity thanks to its ability to perform convolution operations efficiently and accurately, such as the Funk-Radon transform notably required for ODF computation from q-space data. However, if the q-space signal is described with an unsuitable angular resolution at any b-value probed, aliasing (or interpolation) artifacts are unavoidably created. So far this aspect has been tackled empirically and, to our knowledge, no study has addressed this problem quantitatively. The aim of the present work is to study more theoretically the efficiency of multi-shell spherical sampling in diffusion MRI, in order to gain understanding of HYDI-like approaches, possibly paving the way to further optimization strategies.
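The band-limit issue the abstract raises can be demonstrated in miniature with zonal harmonics only, i.e. Legendre polynomials in cos(theta). This is a hypothetical one-dimensional analogue of the full SH fit, not the paper's multi-shell machinery: a signal containing P_2 is recovered exactly when the fitted degree is high enough, while a degree-1 fit must misattribute that energy.

```python
import numpy as np

def fit_zonal(cos_theta, values, lmax):
    """Least-squares fit of zonal (m = 0) harmonic coefficients; the
    basis functions are Legendre polynomials P_l(cos theta)."""
    basis = np.column_stack(
        [np.polynomial.legendre.legval(cos_theta, np.eye(lmax + 1)[l])
         for l in range(lmax + 1)])
    coeffs, *_ = np.linalg.lstsq(basis, values, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
cos_theta = rng.uniform(-1, 1, 100)          # scattered samples on the sphere
# band-limited signal: 1 * P_0 + 0.5 * P_2
signal = 1.0 + 0.5 * np.polynomial.legendre.legval(cos_theta, [0, 0, 1])
exact = fit_zonal(cos_theta, signal, lmax=4)   # sufficient degree: exact recovery
aliased = fit_zonal(cos_theta, signal, lmax=1) # insufficient degree: P_2 energy leaks
```

In the full problem the same trade-off appears per shell: too few directions at a given b-value and the high-degree content of the signal aliases into the retained SH coefficients.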
1810.06255
Laura Moro
Giulia Pinton, Angelo Ferraro, Massimo Balma and Laura Moro
Specific low frequency electromagnetic fields induce epigenetic and functional changes in U937 cells
14 pg, 18 figures, 1 table. The paper has not been published elsewhere, Massimo Balma (author) was responsible for designing page layouts, in the revised version the copyright MB has been removed
null
null
null
q-bio.MN q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, we investigated the effects of specific low frequency electromagnetic field sequences on U937 cells, an in vitro model of human monocyte/macrophage differentiation. U937 cells were exposed to electromagnetic stimulation by means of the SyntheXer system using two similar sequences, XR-BC31 and XR-BC31/F. Each sequence was a time series of twenty-nine wave segments. Here, we report that exposure (4 days, once a day) of U937 cells to the XR-BC31 setting, but not to the XR-BC31/F, resulted in increased expression of the histone demethylase KDM6B along with a global reduction in histone H3 lysine 27 (H3K27) tri-methylation. Furthermore, exposure to the XR-BC31 sequence induced differentiation of U937 cells towards a macrophage-like phenotype displaying a KDM6B-dependent increase in expression and secretion of the anti-inflammatory interleukins (ILs), IL-10 and IL-4. Importantly, all the observed changes were highly dependent on the sequence's nature. Our results open a new way of interpreting the effects of low frequency electromagnetic fields observed in vivo. Indeed, it is conceivable that a specific low frequency electromagnetic field treatment may cause changes in chromatin accessibility and consequently in the expression of anti-inflammatory mediators and in cell differentiation.
[ { "created": "Mon, 15 Oct 2018 10:12:21 GMT", "version": "v1" } ]
2018-10-16
[ [ "Pinton", "Giulia", "" ], [ "Ferraro", "Angelo", "" ], [ "Balma", "Massimo", "" ], [ "Moro", "Laura", "" ] ]
In this study, we investigated the effects of specific low frequency electromagnetic field sequences on U937 cells, an in vitro model of human monocyte/macrophage differentiation. U937 cells were exposed to electromagnetic stimulation by means of the SyntheXer system using two similar sequences, XR-BC31 and XR-BC31/F. Each sequence was a time series of twenty-nine wave segments. Here, we report that exposure (4 days, once a day) of U937 cells to the XR-BC31 setting, but not to the XR-BC31/F, resulted in increased expression of the histone demethylase KDM6B along with a global reduction in histone H3 lysine 27 (H3K27) tri-methylation. Furthermore, exposure to the XR-BC31 sequence induced differentiation of U937 cells towards a macrophage-like phenotype displaying a KDM6B-dependent increase in expression and secretion of the anti-inflammatory interleukins (ILs), IL-10 and IL-4. Importantly, all the observed changes were highly dependent on the sequence's nature. Our results open a new way of interpreting the effects of low frequency electromagnetic fields observed in vivo. Indeed, it is conceivable that a specific low frequency electromagnetic field treatment may cause changes in chromatin accessibility and consequently in the expression of anti-inflammatory mediators and in cell differentiation.
0710.5423
Paolo Tieri
Paolo Tieri
The immune system: look who's talking
8 pages, from "Biocomplexity, a paradigm useful for other disciplines?", International Italo-Canadian workshop, Bologna, Italy, 13-15 July 2005. Added references for section X.5; corrected typos
null
null
null
q-bio.OT
null
Human language and its governing rules present a number of analogies with the organization and structure of communication and information management in living organisms. This chapter will provide a short general introduction about grammar, as well as a brief explanation on how linguistic approaches effectively contaminate scientific practice, and, finally, how they can also provide systems biology with further tools and paradigms to analyse emergent behaviours and interactions among the components of a biological system.
[ { "created": "Mon, 29 Oct 2007 13:27:23 GMT", "version": "v1" }, { "created": "Wed, 14 May 2008 10:28:19 GMT", "version": "v2" } ]
2008-05-14
[ [ "Tieri", "Paolo", "" ] ]
Human language and its governing rules present a number of analogies with the organization and structure of communication and information management in living organisms. This chapter will provide a short general introduction about grammar, as well as a brief explanation on how linguistic approaches effectively contaminate scientific practice, and, finally, how they can also provide systems biology with further tools and paradigms to analyse emergent behaviours and interactions among the components of a biological system.
1110.0452
Eugenio Urdapilleta
Eugenio Urdapilleta
Onset of negative interspike interval correlations in adapting neurons
12 pages (10 pages in the journal version), 6 figures, published in Phys. Rev. E; http://link.aps.org/doi/10.1103/PhysRevE.84.041904
Phys. Rev. E 84, 041904 (2011)
10.1103/PhysRevE.84.041904
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Negative serial correlations in single spike trains are an effective method to reduce the variability of spike counts. One of the factors contributing to the development of negative correlations between successive interspike intervals is the presence of adaptation currents. In this work, based on a hidden Markov model and a proper statistical description of conditional responses, we obtain analytically these correlations in an adequate dynamical neuron model resembling adaptation. We derive the serial correlation coefficients for arbitrary lags, under a small adaptation scenario. In this case, the behavior of correlations is universal and depends on the first-order statistical description of an exponentially driven time-inhomogeneous stochastic process.
[ { "created": "Mon, 3 Oct 2011 19:41:26 GMT", "version": "v1" } ]
2011-10-04
[ [ "Urdapilleta", "Eugenio", "" ] ]
Negative serial correlations in single spike trains are an effective method to reduce the variability of spike counts. One of the factors contributing to the development of negative correlations between successive interspike intervals is the presence of adaptation currents. In this work, based on a hidden Markov model and a proper statistical description of conditional responses, we obtain analytically these correlations in an adequate dynamical neuron model resembling adaptation. We derive the serial correlation coefficients for arbitrary lags, under a small adaptation scenario. In this case, the behavior of correlations is universal and depends on the first-order statistical description of an exponentially driven time-inhomogeneous stochastic process.
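The serial correlation coefficients themselves are straightforward to estimate from data. Below is a sketch on a synthetic AR(1)-like interspike-interval sequence, where a negative lag-1 coupling is a crude stand-in for the adaptation mechanism the paper treats analytically; the mean interval, coupling, and noise level are arbitrary.

```python
import numpy as np

def serial_corr(isi, lag):
    """Serial correlation coefficient rho_k between interspike
    intervals separated by `lag` positions."""
    a, b = isi[:-lag], isi[lag:]
    return float(np.corrcoef(a, b)[0, 1])

# toy adapting spike train: a long interval tends to be followed by a
# short one, as an adaptation current would produce
rng = np.random.default_rng(0)
n, phi = 5000, -0.4
eps = rng.normal(0.0, 0.1, n)
isi = np.empty(n)
isi[0] = 1.0
for i in range(1, n):
    isi[i] = 1.0 + phi * (isi[i - 1] - 1.0) + eps[i]
rho1 = serial_corr(isi, 1)   # negative: adjacent intervals anti-correlated
rho2 = serial_corr(isi, 2)   # smaller, positive echo at lag 2
```

Negative rho1 with a decaying alternating tail is exactly the signature that reduces long-window spike-count variability relative to a renewal process.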
1903.03919
Lam Ho
Lam Si Tung Ho, Vu Dinh, Frederick A. Matsen IV, Marc A. Suchard
On the convergence of the maximum likelihood estimator for the transition rate under a 2-state symmetric model
null
null
10.1007/s00285-019-01453-1
null
q-bio.PE math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Maximum likelihood estimators are used extensively to estimate unknown parameters of stochastic trait evolution models on phylogenetic trees. Although the MLE has been proven to converge to the true value in the independent-sample case, we cannot appeal to this result because trait values of different species are correlated due to shared evolutionary history. In this paper, we consider a $2$-state symmetric model for a single binary trait and investigate the theoretical properties of the MLE for the transition rate in the large-tree limit. Here, the large-tree limit is a theoretical scenario where the number of taxa increases to infinity and we can observe the trait values for all species. Specifically, we prove that the MLE converges to the true value under some regularity conditions. These conditions ensure that the tree shape is not too irregular, and they hold for many practical scenarios such as trees with bounded edges, trees generated from the Yule (pure birth) process, and trees generated from the coalescent point process. Our result also provides an upper bound for the distance between the MLE and the true value.
[ { "created": "Sun, 10 Mar 2019 04:20:33 GMT", "version": "v1" }, { "created": "Sat, 3 Aug 2019 19:18:12 GMT", "version": "v2" }, { "created": "Mon, 25 Nov 2019 03:13:19 GMT", "version": "v3" } ]
2019-11-26
[ [ "Ho", "Lam Si Tung", "" ], [ "Dinh", "Vu", "" ], [ "Matsen", "Frederick A.", "IV" ], [ "Suchard", "Marc A.", "" ] ]
Maximum likelihood estimators are used extensively to estimate unknown parameters of stochastic trait evolution models on phylogenetic trees. Although the MLE has been proven to converge to the true value in the independent-sample case, we cannot appeal to this result because trait values of different species are correlated due to shared evolutionary history. In this paper, we consider a $2$-state symmetric model for a single binary trait and investigate the theoretical properties of the MLE for the transition rate in the large-tree limit. Here, the large-tree limit is a theoretical scenario where the number of taxa increases to infinity and we can observe the trait values for all species. Specifically, we prove that the MLE converges to the true value under some regularity conditions. These conditions ensure that the tree shape is not too irregular, and they hold for many practical scenarios such as trees with bounded edges, trees generated from the Yule (pure birth) process, and trees generated from the coalescent point process. Our result also provides an upper bound for the distance between the MLE and the true value.
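The independent-sample caricature of this estimation problem, which the abstract notes does not directly apply on a tree because taxa are correlated, is easy to write down: on n independent branches of length t, the 2-state symmetric model gives endpoint-mismatch probability (1 - exp(-2*alpha*t))/2, and inverting the empirical mismatch frequency yields the MLE. Parameter values below are arbitrary.

```python
import numpy as np

def mle_rate(flips, n, t):
    """MLE of the transition rate alpha in the 2-state symmetric model,
    from the number of endpoint mismatches on n independent branches of
    length t, where P(mismatch) = (1 - exp(-2*alpha*t)) / 2."""
    p_hat = flips / n
    if p_hat >= 0.5:
        return np.inf  # saturation: the rate is not identifiable
    return -np.log(1.0 - 2.0 * p_hat) / (2.0 * t)

rng = np.random.default_rng(0)
alpha_true, t, n = 0.8, 0.5, 20000
p = 0.5 * (1.0 - np.exp(-2.0 * alpha_true * t))
flips = rng.binomial(n, p)           # simulated mismatch count
alpha_hat = mle_rate(flips, n, t)    # close to alpha_true for large n
```

The paper's contribution is precisely to show when an analogous convergence survives the branch-sharing correlations of a real phylogeny.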
q-bio/0310032
Anna Lin
Anna L. Lin, Bernward A. Mann, Gelsy Torres-Oviedo, Bryan Lincoln, Josef Kas, Harry L. Swinney
Localization and extinction of bacterial populations under inhomogeneous growth conditions
11 pages, 7 figures
null
10.1529/biophysj.103.034041
null
q-bio.PE physics.bio-ph
null
The transition from localized to systemic spreading of bacteria, viruses and other agents is a fundamental problem that spans medicine, ecology, biology and agriculture science. We have conducted experiments and simulations in a simple one-dimensional system to determine the spreading of bacterial populations that occurs for an inhomogeneous environment under the influence of external convection. Our system consists of a long channel with growth inhibited by uniform UV illumination except in a small ``oasis'', which is shielded from the UV light. To mimic blood flow or other flow past a localized infection, the oasis is moved with a constant velocity through the UV-illuminated ``desert''. The experiments are modeled with a convective reaction-diffusion equation. In both the experiment and model, localized or extinct populations are found to develop, depending on conditions, from an initially localized population. The model also yields states where the population grows everywhere. Further, the model reveals that the transitions between localized, extended, and extinct states are continuous and non-hysteretic. However, it does not capture the oscillations of the localized population that are observed in the experiment.
[ { "created": "Fri, 24 Oct 2003 18:30:25 GMT", "version": "v1" } ]
2009-11-10
[ [ "Lin", "Anna L.", "" ], [ "Mann", "Bernward A.", "" ], [ "Torres-Oviedo", "Gelsy", "" ], [ "Lincoln", "Bryan", "" ], [ "Kas", "Josef", "" ], [ "Swinney", "Harry L.", "" ] ]
The transition from localized to systemic spreading of bacteria, viruses and other agents is a fundamental problem that spans medicine, ecology, biology and agriculture science. We have conducted experiments and simulations in a simple one-dimensional system to determine the spreading of bacterial populations that occurs for an inhomogeneous environment under the influence of external convection. Our system consists of a long channel with growth inhibited by uniform UV illumination except in a small ``oasis'', which is shielded from the UV light. To mimic blood flow or other flow past a localized infection, the oasis is moved with a constant velocity through the UV-illuminated ``desert''. The experiments are modeled with a convective reaction-diffusion equation. In both the experiment and model, localized or extinct populations are found to develop, depending on conditions, from an initially localized population. The model also yields states where the population grows everywhere. Further, the model reveals that the transitions between localized, extended, and extinct states are continuous and non-hysteretic. However, it does not capture the oscillations of the localized population that are observed in the experiment.
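The convective reaction-diffusion model can be sketched with explicit finite differences in the frame comoving with the oasis: logistic growth inside the shielded region, UV-induced death outside, and upwind advection for the relative flow. The grid spacing, rates, and oasis width below are placeholder values, not fit to the experiment.

```python
import numpy as np

def evolve(v, steps=4000, nx=200, dx=1.0, dt=0.05, D=1.0,
           growth=1.0, death=0.5, oasis=(90, 110)):
    """Explicit scheme for u_t = D u_xx + v u_x + r(x) u (1 - u) in the
    oasis' comoving frame: r > 0 inside the oasis, r < 0 in the UV desert."""
    r = -death * np.ones(nx)
    r[oasis[0]:oasis[1]] = growth
    u = np.zeros(nx)
    u[oasis[0]:oasis[1]] = 0.5           # initial localized population
    for _ in range(steps):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        adv = (np.roll(u, -1) - u) / dx  # upwind difference for v > 0
        u = u + dt * (D * lap + v * adv + r * u * (1 - u))
        u = np.clip(u, 0.0, None)
        u[0] = u[-1] = 0.0               # absorbing ends of the channel
    return u

localized = evolve(v=0.0)   # stationary oasis: population persists
extinct = evolve(v=10.0)    # fast-moving oasis: population washed out
```

Sweeping `v` between these extremes in such a sketch traces out the continuous localized-to-extinct transition described above.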
1404.7723
Amir Szitenberg
Amir Szitenberg, Georgios Koutsovoulos, Mark L Blaxter and David H Lunt
The evolution of tyrosine-recombinase elements in Nematoda
18 pages
null
10.1371/journal.pone.0106630
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transposable elements can be categorised into DNA and RNA elements based on their mechanism of transposition. Tyrosine recombinase elements (YREs) are relatively rare and poorly understood, despite sharing characteristics with both DNA and RNA elements. Previously, the Nematoda have been reported to have a substantially different diversity of YREs compared to other animal phyla: the Dirs1-like YRE retrotransposon was encountered in most animal phyla but not in Nematoda, and a unique Pat1-like YRE retrotransposon has only been recorded from Nematoda. We explored the diversity of YREs in Nematoda by sampling broadly across the phylum and including 34 genomes representing the three classes within Nematoda. We developed a method to isolate and classify YREs based on both feature organization and phylogenetic relationships in an open and reproducible workflow. We also ensured that our phylogenetic approach to YRE classification identified truncated and degenerate elements, informatively increasing the number of elements sampled. We identified Dirs1-like elements (thought to be absent from Nematoda) in the nematode classes Enoplia and Dorylaimia indicating that nematode model species do not adequately represent the diversity of transposable elements in the phylum. Nematode Pat1-like elements were found to be a derived form of another pat1-like element that is present more widely in animals. Several sequence features used widely for the classification of YREs were found to be homoplasious, highlighting the need for a phylogenetically-based classification scheme. Nematode model species do not represent the diversity of transposable elements in the phylum.
[ { "created": "Wed, 30 Apr 2014 13:50:15 GMT", "version": "v1" }, { "created": "Mon, 12 May 2014 11:08:22 GMT", "version": "v2" } ]
2015-06-19
[ [ "Szitenberg", "Amir", "" ], [ "Koutsovoulos", "Georgios", "" ], [ "Blaxter", "Mark L", "" ], [ "Lunt", "David H", "" ] ]
Transposable elements can be categorised into DNA and RNA elements based on their mechanism of transposition. Tyrosine recombinase elements (YREs) are relatively rare and poorly understood, despite sharing characteristics with both DNA and RNA elements. Previously, the Nematoda have been reported to have a substantially different diversity of YREs compared to other animal phyla: the Dirs1-like YRE retrotransposon was encountered in most animal phyla but not in Nematoda, and a unique Pat1-like YRE retrotransposon has only been recorded from Nematoda. We explored the diversity of YREs in Nematoda by sampling broadly across the phylum and including 34 genomes representing the three classes within Nematoda. We developed a method to isolate and classify YREs based on both feature organization and phylogenetic relationships in an open and reproducible workflow. We also ensured that our phylogenetic approach to YRE classification identified truncated and degenerate elements, informatively increasing the number of elements sampled. We identified Dirs1-like elements (thought to be absent from Nematoda) in the nematode classes Enoplia and Dorylaimia indicating that nematode model species do not adequately represent the diversity of transposable elements in the phylum. Nematode Pat1-like elements were found to be a derived form of another pat1-like element that is present more widely in animals. Several sequence features used widely for the classification of YREs were found to be homoplasious, highlighting the need for a phylogenetically-based classification scheme. Nematode model species do not represent the diversity of transposable elements in the phylum.
1310.3197
Yaniv Erlich
Yaniv Erlich and Arvind Narayanan
Routes for breaching and protecting genetic privacy
Draft for comments
null
null
null
q-bio.GN cs.CR stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We are entering the era of ubiquitous genetic information for research, clinical care, and personal curiosity. Sharing these datasets is vital for rapid progress in understanding the genetic basis of human diseases. However, one growing concern is the ability to protect the genetic privacy of the data originators. Here, we technically map threats to genetic privacy and discuss potential mitigation strategies for privacy-preserving dissemination of genetic data.
[ { "created": "Fri, 11 Oct 2013 17:02:54 GMT", "version": "v1" } ]
2013-10-14
[ [ "Erlich", "Yaniv", "" ], [ "Narayanan", "Arvind", "" ] ]
We are entering the era of ubiquitous genetic information for research, clinical care, and personal curiosity. Sharing these datasets is vital for rapid progress in understanding the genetic basis of human diseases. However, one growing concern is the ability to protect the genetic privacy of the data originators. Here, we technically map threats to genetic privacy and discuss potential mitigation strategies for privacy-preserving dissemination of genetic data.
0708.0564
Eduardo Candelario-Jalil
E. Candelario-Jalil, D. Alvarez, A. Gonzalez-Falcon, M. Garcia-Cabrera, G. Martinez-Sanchez, N. Merino, A. Giuliani, O. S. Leon
Neuroprotective efficacy of nimesulide against hippocampal neuronal damage following transient forebrain ischemia
null
European Journal of Pharmacology 453(2-3): 189-195 (2002)
null
null
q-bio.TO
null
Cyclooxygenase-2 is involved in the inflammatory component of the ischemic cascade, playing an important role in the delayed progression of the brain damage. The present study evaluated the pharmacological effects of the selective cyclooxygenase-2 inhibitor nimesulide on delayed neuronal death of hippocampal CA1 neurons following transient global cerebral ischemia in gerbils. Administration of therapeutically relevant doses of nimesulide (3, 6 and 12 mg/kg; i.p.) 30 min before ischemia and at 6, 12, 24, 48 and 72 h after ischemia significantly (P<0.01) reduced hippocampal neuronal damage. Treatment with a single dose of nimesulide given 30 min before ischemia also resulted in a significant increase in the number of healthy neurons in the hippocampal CA1 sector 7 days after ischemia. Of interest is the finding that nimesulide rescued CA1 pyramidal neurons from ischemic death even when treatment was delayed until 24 h after ischemia (34+/-9% protection). The neuroprotective effect of nimesulide is still evident 30 days after the ischemic episode, providing the first experimental evidence that cyclooxygenase-2 inhibitors confer long-lasting neuroprotection. Oral administration of nimesulide was also able to significantly reduce brain damage, suggesting that protective effects are independent of the route of administration. The present study confirms the ability of cyclooxygenase-2 inhibitors to reduce brain damage induced by cerebral ischemia and indicates that nimesulide can provide protection when administered for up to 24 h post-ischemia.
[ { "created": "Fri, 3 Aug 2007 18:51:15 GMT", "version": "v1" } ]
2007-08-06
[ [ "Candelario-Jalil", "E.", "" ], [ "Alvarez", "D.", "" ], [ "Gonzalez-Falcon", "A.", "" ], [ "Garcia-Cabrera", "M.", "" ], [ "Martinez-Sanchez", "G.", "" ], [ "Merino", "N.", "" ], [ "Giuliani", "A.", "" ]...
Cyclooxygenase-2 is involved in the inflammatory component of the ischemic cascade, playing an important role in the delayed progression of the brain damage. The present study evaluated the pharmacological effects of the selective cyclooxygenase-2 inhibitor nimesulide on delayed neuronal death of hippocampal CA1 neurons following transient global cerebral ischemia in gerbils. Administration of therapeutically relevant doses of nimesulide (3, 6 and 12 mg/kg; i.p.) 30 min before ischemia and at 6, 12, 24, 48 and 72 h after ischemia significantly (P<0.01) reduced hippocampal neuronal damage. Treatment with a single dose of nimesulide given 30 min before ischemia also resulted in a significant increase in the number of healthy neurons in the hippocampal CA1 sector 7 days after ischemia. Of interest is the finding that nimesulide rescued CA1 pyramidal neurons from ischemic death even when treatment was delayed until 24 h after ischemia (34+/-9% protection). The neuroprotective effect of nimesulide is still evident 30 days after the ischemic episode, providing the first experimental evidence that cyclooxygenase-2 inhibitors confer long-lasting neuroprotection. Oral administration of nimesulide was also able to significantly reduce brain damage, suggesting that protective effects are independent of the route of administration. The present study confirms the ability of cyclooxygenase-2 inhibitors to reduce brain damage induced by cerebral ischemia and indicates that nimesulide can provide protection when administered for up to 24 h post-ischemia.
1908.01593
Pratik Shah
Aman Rana, Alarice Lowe, Marie Lithgow, Katharine Horback, Tyler Janovitz, Annacarolina Da Silva, Harrison Tsai, Vignesh Shanmugam, Hyung-Jin Yoon, Pratik Shah
High Accuracy Tumor Diagnoses and Benchmarking of Hematoxylin and Eosin Stained Prostate Core Biopsy Images Generated by Explainable Deep Neural Networks
null
JAMA Network. 2020;3(5):e205111
10.1001/jamanetworkopen.2020.5111
null
q-bio.QM cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Histopathological diagnosis of tumors in tissue biopsy after Hematoxylin and Eosin (H&E) staining is the gold standard for oncology care. H&E staining is slow and uses dyes, reagents and precious tissue samples that cannot be reused. Thousands of native nonstained RGB Whole Slide Image (RWSI) patches of prostate core tissue biopsies were registered with their H&E stained versions. Conditional Generative Adversarial Neural Networks (cGANs) that automate conversion of native nonstained RWSI to computational H&E stained images were then trained. High similarities between computational and H&E dye stained images with Structural Similarity Index (SSIM) 0.902, Pearson's Correlation Coefficient (CC) 0.962 and Peak Signal to Noise Ratio (PSNR) 22.821 dB were calculated. A second cGAN performed accurate computational destaining of H&E dye stained images back to their native nonstained form with SSIM 0.9, CC 0.963 and PSNR 25.646 dB. A single-blind study computed more than 95% pixel-by-pixel overlap between prostate tumor annotations on computationally stained images, provided by five board-certified MD pathologists, with those on H&E dye stained counterparts. We report the first visualization and explanation of neural network kernel activation maps during H&E staining and destaining of RGB images by cGANs. High similarities between kernel activation maps of computational and H&E stained images (Mean-Squared Errors <0.0005) provide additional mathematical and mechanistic validation of the staining system. Our neural network framework thus is automated, explainable and performs high precision H&E staining and destaining of low-cost native RGB images, and is computer vision and physician authenticated for rapid and accurate tumor diagnoses.
[ { "created": "Fri, 2 Aug 2019 13:25:07 GMT", "version": "v1" } ]
2020-11-12
[ [ "Rana", "Aman", "" ], [ "Lowe", "Alarice", "" ], [ "Lithgow", "Marie", "" ], [ "Horback", "Katharine", "" ], [ "Janovitz", "Tyler", "" ], [ "Da Silva", "Annacarolina", "" ], [ "Tsai", "Harrison", "" ], ...
Histopathological diagnosis of tumors in tissue biopsy after Hematoxylin and Eosin (H&E) staining is the gold standard for oncology care. H&E staining is slow and uses dyes, reagents and precious tissue samples that cannot be reused. Thousands of native nonstained RGB Whole Slide Image (RWSI) patches of prostate core tissue biopsies were registered with their H&E stained versions. Conditional Generative Adversarial Neural Networks (cGANs) that automate conversion of native nonstained RWSI to computational H&E stained images were then trained. High similarities between computational and H&E dye stained images with Structural Similarity Index (SSIM) 0.902, Pearson's Correlation Coefficient (CC) 0.962 and Peak Signal to Noise Ratio (PSNR) 22.821 dB were calculated. A second cGAN performed accurate computational destaining of H&E dye stained images back to their native nonstained form with SSIM 0.9, CC 0.963 and PSNR 25.646 dB. A single-blind study computed more than 95% pixel-by-pixel overlap between prostate tumor annotations on computationally stained images, provided by five board-certified MD pathologists, with those on H&E dye stained counterparts. We report the first visualization and explanation of neural network kernel activation maps during H&E staining and destaining of RGB images by cGANs. High similarities between kernel activation maps of computational and H&E stained images (Mean-Squared Errors <0.0005) provide additional mathematical and mechanistic validation of the staining system. Our neural network framework thus is automated, explainable and performs high precision H&E staining and destaining of low-cost native RGB images, and is computer vision and physician authenticated for rapid and accurate tumor diagnoses.
1305.4125
C.C. Alan Fung
C. C. Alan Fung and He Wang and Kin Lam and K. Y. Michael Wong and Si Wu
Resolution enhancement in neural networks with dynamical synapses
25 pages and 15 Figures
Front. Comput. Neurosci. 7:73. (2013)
10.3389/fncom.2013.00073
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventionally, information is represented by spike rates in the neural system. Here, we consider the ability of temporally modulated activities in neuronal networks to carry information beyond spike rates. These temporal modulations, commonly known as population spikes, are due to the presence of synaptic depression in a neuronal network model. We discuss its relevance to an experiment on transparent motions in macaque monkeys by Treue et al. in 2000. They found that if the moving directions of objects are too close, the firing rate profile will be very similar to that with one direction. As the difference in the moving directions of objects is large enough, the neuronal system would respond in such a way that the network enhances the resolution in the moving directions of the objects. In this paper, we propose that this behavior can be reproduced by neural networks with dynamical synapses when there are multiple external inputs. We will demonstrate how resolution enhancement can be achieved, and discuss the conditions under which temporally modulated activities are able to enhance information-processing performance in general.
[ { "created": "Fri, 17 May 2013 16:14:01 GMT", "version": "v1" } ]
2014-09-09
[ [ "Fung", "C. C. Alan", "" ], [ "Wang", "He", "" ], [ "Lam", "Kin", "" ], [ "Wong", "K. Y. Michael", "" ], [ "Wu", "Si", "" ] ]
Conventionally, information is represented by spike rates in the neural system. Here, we consider the ability of temporally modulated activities in neuronal networks to carry information beyond spike rates. These temporal modulations, commonly known as population spikes, are due to the presence of synaptic depression in a neuronal network model. We discuss its relevance to an experiment on transparent motions in macaque monkeys by Treue et al. in 2000. They found that if the moving directions of objects are too close, the firing rate profile will be very similar to that with one direction. As the difference in the moving directions of objects is large enough, the neuronal system would respond in such a way that the network enhances the resolution in the moving directions of the objects. In this paper, we propose that this behavior can be reproduced by neural networks with dynamical synapses when there are multiple external inputs. We will demonstrate how resolution enhancement can be achieved, and discuss the conditions under which temporally modulated activities are able to enhance information-processing performance in general.
1807.05301
Gilberto Nakamura
GM Nakamura, ND Gomes, GC Cardoso, and AS Martinez
Robust parameter determination in epidemic models with analytical descriptions of uncertainties
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compartmental equations are primary tools in disease spreading studies. Their predictions are accurate for large populations but disagree with empirical and simulated data for finite populations, where uncertainties become a relevant factor. Starting from the agent-based approach, we investigate the role of uncertainties and autocorrelation functions in the SIS epidemic model, including their relationship with epidemiological variables. We find new differential equations that take uncertainties into account. The findings provide improved predictions for the SIS model and can offer new insights for emerging diseases.
[ { "created": "Fri, 13 Jul 2018 22:41:11 GMT", "version": "v1" } ]
2018-07-18
[ [ "Nakamura", "GM", "" ], [ "Gomes", "ND", "" ], [ "Cardoso", "GC", "" ], [ "Martinez", "AS", "" ] ]
Compartmental equations are primary tools in disease spreading studies. Their predictions are accurate for large populations but disagree with empirical and simulated data for finite populations, where uncertainties become a relevant factor. Starting from the agent-based approach, we investigate the role of uncertainties and autocorrelation functions in the SIS epidemic model, including their relationship with epidemiological variables. We find new differential equations that take uncertainties into account. The findings provide improved predictions for the SIS model and can offer new insights for emerging diseases.
2407.11242
Heng Yang
Heng Yang, Ke Li
OmniGenome: Aligning RNA Sequences with Secondary Structures in Genomic Foundation Models
submitted to NeurIPS 2024, 19 pages
null
null
null
q-bio.GN cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The structures of RNA sequences play a vital role in various cellular processes, while existing genomic foundation models (FMs) have struggled with precise sequence-structure alignment, due to the complexity of exponential combinations of nucleotide bases. In this study, we introduce OmniGenome, a foundation model that addresses this critical challenge of sequence-structure alignment in RNA FMs. OmniGenome bridges the sequences with secondary structures using structure-contextualized modeling, enabling hard in-silico genomic tasks that existing FMs cannot handle, e.g., RNA design tasks. The results on two comprehensive genomic benchmarks show that OmniGenome achieves state-of-the-art performance on complex RNA subtasks. For example, OmniGenome solved 74% of complex puzzles, compared to SpliceBERT which solved only 3% of the puzzles. Moreover, OmniGenome solves most of the puzzles within $1$ hour, while the existing methods usually allocate $24$ hours for each puzzle. Overall, OmniGenome establishes a broad range of genomic application cases and offers profound insights into biological mechanisms from the perspective of sequence-structure alignment.
[ { "created": "Mon, 15 Jul 2024 21:10:40 GMT", "version": "v1" } ]
2024-07-17
[ [ "Yang", "Heng", "" ], [ "Li", "Ke", "" ] ]
The structures of RNA sequences play a vital role in various cellular processes, while existing genomic foundation models (FMs) have struggled with precise sequence-structure alignment, due to the complexity of exponential combinations of nucleotide bases. In this study, we introduce OmniGenome, a foundation model that addresses this critical challenge of sequence-structure alignment in RNA FMs. OmniGenome bridges the sequences with secondary structures using structure-contextualized modeling, enabling hard in-silico genomic tasks that existing FMs cannot handle, e.g., RNA design tasks. The results on two comprehensive genomic benchmarks show that OmniGenome achieves state-of-the-art performance on complex RNA subtasks. For example, OmniGenome solved 74% of complex puzzles, compared to SpliceBERT which solved only 3% of the puzzles. Moreover, OmniGenome solves most of the puzzles within $1$ hour, while the existing methods usually allocate $24$ hours for each puzzle. Overall, OmniGenome establishes a broad range of genomic application cases and offers profound insights into biological mechanisms from the perspective of sequence-structure alignment.
1304.6226
Massimo Cencini Dr.
Simone Pigolotti and Massimo Cencini
Species abundances and lifetimes: from neutral to niche-stabilized communities
10 pages, 5 figures (typos corrected and derivation of main results polished)
Journal of Theoretical Biology 338, 1--8 (2013)
10.1016/j.jtbi.2013.08.024
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a stochastic community model able to interpolate from a neutral regime to a niche partitioned regime upon varying a single parameter tuning the intensity of niche stabilization, namely the difference between intraspecific and interspecific competition. By means of a self-consistent approach, we obtain an analytical expression for the species abundance distribution, in excellent agreement with stochastic simulations of the model. In the neutral limit, the Fisher log-series is recovered, while upon increasing the stabilization strength the species abundance distribution develops a maximum for species at intermediate abundances, corresponding to the emergence of a carrying capacity. Numerical studies of species extinction-time distribution show that niche-stabilization strongly affects also the dynamical properties of the system by increasing the average species lifetimes, while suppressing their fluctuations. The results are discussed in view of the niche-neutral debate and of their potential relevance to field data.
[ { "created": "Tue, 23 Apr 2013 10:46:05 GMT", "version": "v1" }, { "created": "Mon, 16 Sep 2013 14:09:35 GMT", "version": "v2" } ]
2013-09-17
[ [ "Pigolotti", "Simone", "" ], [ "Cencini", "Massimo", "" ] ]
We study a stochastic community model able to interpolate from a neutral regime to a niche partitioned regime upon varying a single parameter tuning the intensity of niche stabilization, namely the difference between intraspecific and interspecific competition. By means of a self-consistent approach, we obtain an analytical expression for the species abundance distribution, in excellent agreement with stochastic simulations of the model. In the neutral limit, the Fisher log-series is recovered, while upon increasing the stabilization strength the species abundance distribution develops a maximum for species at intermediate abundances, corresponding to the emergence of a carrying capacity. Numerical studies of species extinction-time distribution show that niche-stabilization strongly affects also the dynamical properties of the system by increasing the average species lifetimes, while suppressing their fluctuations. The results are discussed in view of the niche-neutral debate and of their potential relevance to field data.
2107.12817
Christopher Miller
Ajay Khanna, David E. Larson, Sridhar Nonavinkere Srivatsan, Matthew Mosior, Travis E. Abbott, Susanna Kiwala, Timothy J. Ley, Eric J. Duncavage, Matthew J. Walter, Jason R. Walker, Obi L. Griffith, Malachi Griffith, Christopher A. Miller
Bam-readcount -- rapid generation of basepair-resolution sequence metrics
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Bam-readcount is a utility for generating low-level information about sequencing data at specific nucleotide positions. Originally designed to help filter genomic mutation calls, the metrics it outputs are useful as input for variant detection tools and for resolving ambiguity between variant callers. In addition, it has found broad applicability in diverse fields including tumor evolution, single-cell genomics, climate change ecology, and tracking community spread of SARS-CoV-2. Here we report on the release of version 1.0 of this tool, which adds CRAM support, among other improvements. It is released under a permissive MIT license and available at https://github.com/genome/bam-readcount.
[ { "created": "Tue, 27 Jul 2021 13:38:47 GMT", "version": "v1" } ]
2021-07-28
[ [ "Khanna", "Ajay", "" ], [ "Larson", "David E.", "" ], [ "Srivatsan", "Sridhar Nonavinkere", "" ], [ "Mosior", "Matthew", "" ], [ "Abbott", "Travis E.", "" ], [ "Kiwala", "Susanna", "" ], [ "Ley", "Timothy J.", ...
Bam-readcount is a utility for generating low-level information about sequencing data at specific nucleotide positions. Originally designed to help filter genomic mutation calls, the metrics it outputs are useful as input for variant detection tools and for resolving ambiguity between variant callers. In addition, it has found broad applicability in diverse fields including tumor evolution, single-cell genomics, climate change ecology, and tracking community spread of SARS-CoV-2. Here we report on the release of version 1.0 of this tool, which adds CRAM support, among other improvements. It is released under a permissive MIT license and available at https://github.com/genome/bam-readcount.
1702.04972
Miroslaw Latka
Wojciech Jernajczyk, Pawel Gosek, Miroslaw Latka, Klaudia Kozlowska, Lukasz Swiecicki, Bruce J. West
Alpha Wavelet Power as a Biomarker of Antidepressant Treatment Response in Bipolar Depression
19 pages, 3 figures
null
null
null
q-bio.NC physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is mounting evidence of a link between the properties of electroencephalograms (EEGs) of depressive patients and the outcome of pharmacotherapy. The goal of this study was to develop an EEG biomarker of antidepressant treatment response which would require only a single EEG measurement. We recorded resting, 21-channel EEG in 17 inpatients suffering from bipolar depression in eyes closed and eyes open conditions. The EEG measurement was performed at the end of the short washout period which followed previously unsuccessful pharmacotherapy. We calculated the normalized wavelet power of alpha rhythm using two referential montages and an average reference montage. In particular, in the occipital (O1, O2, Oz) channels the wavelet power of responders was up to 84% higher than that of nonresponders. Using a novel classification algorithm we were able to correctly predict the outcome of treatment with 90% sensitivity and 100% specificity. The proposed biomarker requires only a single EEG measurement and consequently is intrinsically different from biomarkers which exploit the changes in prefrontal EEG induced by pharmacotherapy over a given time.
[ { "created": "Thu, 16 Feb 2017 14:08:31 GMT", "version": "v1" } ]
2017-02-17
[ [ "Jernajczyk", "Wojciech", "" ], [ "Gosek", "Pawel", "" ], [ "Latka", "Miroslaw", "" ], [ "Kozlowska", "Klaudia", "" ], [ "Swiecicki", "Lukasz", "" ], [ "West", "Bruce J.", "" ] ]
There is mounting evidence of a link between the properties of electroencephalograms (EEGs) of depressive patients and the outcome of pharmacotherapy. The goal of this study was to develop an EEG biomarker of antidepressant treatment response which would require only a single EEG measurement. We recorded resting, 21-channel EEG in 17 inpatients suffering from bipolar depression in eyes closed and eyes open conditions. The EEG measurement was performed at the end of the short washout period which followed previously unsuccessful pharmacotherapy. We calculated the normalized wavelet power of alpha rhythm using two referential montages and an average reference montage. In particular, in the occipital (O1, O2, Oz) channels the wavelet power of responders was up to 84% higher than that of nonresponders. Using a novel classification algorithm we were able to correctly predict the outcome of treatment with 90% sensitivity and 100% specificity. The proposed biomarker requires only a single EEG measurement and consequently is intrinsically different from biomarkers which exploit the changes in prefrontal EEG induced by pharmacotherapy over a given time.
2203.05714
Ryan Cabeen
Ryan P. Cabeen, Joseph Mandeville, Fahmeed Hyder, Basavaraju G. Sanganahalli, Daniel R. Thedens, Ali Arbab, Shuning Huang, Adnan Bibic, Erendiz Tarakci, Jelena Mihailovic, Andreia Morais, Jessica Lamb, Karisma Nagarkatti, Arthur W. Toga, Patrick Lyden, Cenk Ayata
Computational Image-based Stroke Assessment for Evaluation of Cerebroprotectants with Longitudinal and Multi-site Preclinical MRI
null
null
null
null
q-bio.QM cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While ischemic stroke is a leading cause of death worldwide, there has been little success translating putative cerebroprotectants from rodent preclinical trials to human patients. We investigated computational image-based assessment tools for practical improvement of the quality, scalability, and outlook for large scale preclinical screening for potential therapeutic interventions in rodent models. We developed, evaluated, and deployed a pipeline for image-based stroke outcome quantification for the Stroke Preclinical Assessment Network (SPAN), a multi-site, multi-arm, multi-stage study evaluating a suite of cerebroprotectant interventions. Our fully automated pipeline combines state-of-the-art algorithmic and data analytic approaches to assess stroke outcomes from multi-parameter MRI data collected longitudinally from a rodent model of middle cerebral artery occlusion (MCAO), including measures of infarct volume, brain atrophy, midline shift, and data quality. We applied our approach to 1,368 scans and report population level results of lesion extent and longitudinal changes from injury. We validated our system by comparison with both manual annotations of coronal MRI slices and tissue sections from the same brain, using crowdsourcing from blinded stroke experts from the network. Our results demonstrate the efficacy and robustness of our image-based stroke assessments. The pipeline may provide a promising resource for ongoing rodent preclinical studies conducted by SPAN and other networks in the future.
[ { "created": "Fri, 11 Mar 2022 01:53:30 GMT", "version": "v1" }, { "created": "Wed, 29 Mar 2023 23:32:40 GMT", "version": "v2" } ]
2023-03-31
[ [ "Cabeen", "Ryan P.", "" ], [ "Mandeville", "Joseph", "" ], [ "Hyder", "Fahmeed", "" ], [ "Sanganahalli", "Basavaraju G.", "" ], [ "Thedens", "Daniel R.", "" ], [ "Arbab", "Ali", "" ], [ "Huang", "Shuning", ...
While ischemic stroke is a leading cause of death worldwide, there has been little success translating putative cerebroprotectants from rodent preclinical trials to human patients. We investigated computational image-based assessment tools for practical improvement of the quality, scalability, and outlook for large scale preclinical screening for potential therapeutic interventions in rodent models. We developed, evaluated, and deployed a pipeline for image-based stroke outcome quantification for the Stroke Preclinical Assessment Network (SPAN), a multi-site, multi-arm, multi-stage study evaluating a suite of cerebroprotectant interventions. Our fully automated pipeline combines state-of-the-art algorithmic and data analytic approaches to assess stroke outcomes from multi-parameter MRI data collected longitudinally from a rodent model of middle cerebral artery occlusion (MCAO), including measures of infarct volume, brain atrophy, midline shift, and data quality. We applied our approach to 1,368 scans and report population level results of lesion extent and longitudinal changes from injury. We validated our system by comparison with both manual annotations of coronal MRI slices and tissue sections from the same brain, using crowdsourcing from blinded stroke experts from the network. Our results demonstrate the efficacy and robustness of our image-based stroke assessments. The pipeline may provide a promising resource for ongoing rodent preclinical studies conducted by SPAN and other networks in the future.
1802.02425
Qi Liu
Qi Liu, Yuan Jiang, Zhaojiang Hou, Yulu Tian, Kejian He, Lan Fu, Hui Xu
Impact of Land Use on the DOM Composition in Different Seasons in a Subtropical River Flowing through the Region Undergoing Rapid Urbanization
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dissolved organic matter (DOM) composition in river ecosystems could reflect the human impacts on the river ecosystem, and plays an important role in the carbon cycling process. We collected water and phytoplankton samples at 107 sites in the Dongjiang River in two seasons to assess the impact of the sub-catchments land use structure on the DOM composition. The results showed that (1) the forested sub-catchments had higher humic-like C1 (16.45%) and C2 (25.04%) and lower protein-like C3 (22.57%) and C4 (35.95%) than urbanized and mixed forest-agriculture sub-catchments, while the urbanized sub-catchments showed an inverse trend (4.54%, 15.51%, 33.97% and 45.98%, respectively). (2) The significant variation in the proportion of C1 and C4 between the dry and rainy seasons was recorded in both the forested and the mixed forest-agriculture sub-catchments (p<0.01), but only C4 showed an obvious seasonal variation in the urbanized sub-catchments (p<0.01). While the DOM composition was mainly related to the proportion of urbanized land and forested land year-round (p<0.01), it had stronger correlation with forested land in the dry season and urbanized land in the rainy season. (3) No significant correlation between the DOM composition and the agricultural land proportion was found in either season (p>0.05). Our findings indicated that the DOM composition was strongly dependent on the land use structure of the sub-catchments and varied seasonally, but the seasonal variation pattern could be disturbed by human activities in the extensively urbanized catchments. Notably, the higher C4 proportion compared with those in temperate rivers implied a larger amount of CO2 released from the subtropical rivers into the atmosphere when considering bioavailability.
[ { "created": "Wed, 7 Feb 2018 14:10:16 GMT", "version": "v1" } ]
2018-02-08
[ [ "Liu", "Qi", "" ], [ "Jiang", "Yuan", "" ], [ "Hou", "Zhaojiang", "" ], [ "Tian", "Yulu", "" ], [ "He", "Kejian", "" ], [ "Fu", "Lan", "" ], [ "Xu", "Hui", "" ] ]
The dissolved organic matter (DOM) composition in river ecosystems could reflect the human impacts on the river ecosystem, and plays an important role in the carbon cycling process. We collected water and phytoplankton samples at 107 sites in the Dongjiang River in two seasons to assess the impact of the sub-catchments land use structure on the DOM composition. The results showed that (1) the forested sub-catchments had higher humic-like C1 (16.45%) and C2 (25.04%) and lower protein-like C3 (22.57%) and C4 (35.95%) than urbanized and mixed forest-agriculture sub-catchments, while the urbanized sub-catchments showed an inverse trend (4.54%, 15.51%, 33.97% and 45.98%, respectively). (2) The significant variation in the proportion of C1 and C4 between the dry and rainy seasons was recorded in both the forested and the mixed forest-agriculture sub-catchments (p<0.01), but only C4 showed an obvious seasonal variation in the urbanized sub-catchments (p<0.01). While the DOM composition was mainly related to the proportion of urbanized land and forested land year-round (p<0.01), it had stronger correlation with forested land in the dry season and urbanized land in the rainy season. (3) No significant correlation between the DOM composition and the agricultural land proportion was found in either season (p>0.05). Our findings indicated that the DOM composition was strongly dependent on the land use structure of the sub-catchments and varied seasonally, but the seasonal variation pattern could be disturbed by human activities in the extensively urbanized catchments. Notably, the higher C4 proportion compared with those in temperate rivers implied a larger amount of CO2 released from the subtropical rivers into the atmosphere when considering bioavailability.
2202.13884
Yuri Pirola
Paola Bonizzoni (1), Matteo Costantini (1), Clelia De Felice (2), Alessia Petescia (1), Yuri Pirola (1), Marco Previtali (1), Raffaella Rizzi (1), Jens Stoye (3), Rocco Zaccagnino (2), Rosalba Zizza (2) ((1) University of Milano-Bicocca, (2) University of Salerno, (3) University of Bielefeld)
Numeric Lyndon-based feature embedding of sequencing reads for machine learning approaches
null
Information Sciences 607 (2022) 458-476
10.1016/j.ins.2022.06.005
null
q-bio.GN cs.FL cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Feature embedding methods have been proposed in the literature to represent sequences as numeric vectors to be used in some bioinformatics investigations, such as family classification and protein structure prediction. Recent theoretical results showed that the well-known Lyndon factorization preserves common factors in overlapping strings. Surprisingly, the fingerprint of a sequencing read, which is the sequence of lengths of consecutive factors in variants of the Lyndon factorization of the read, is effective in preserving sequence similarities, suggesting it as a basis for the definition of novel representations of sequencing reads. We propose a novel feature embedding method for Next-Generation Sequencing (NGS) data using the notion of fingerprint. We provide a theoretical and experimental framework to estimate the behaviour of fingerprints and of the $k$-mers extracted from it, called $k$-fingers, as possible feature embeddings for sequencing reads. As a case study to assess the effectiveness of such embeddings, we use fingerprints to represent RNA-Seq reads and to assign them to the most likely gene from which they originated as fragments of transcripts of the gene. We provide an implementation of the proposed method in the tool lyn2vec, which produces Lyndon-based feature embeddings of sequencing reads.
[ { "created": "Mon, 28 Feb 2022 15:33:37 GMT", "version": "v1" }, { "created": "Thu, 2 Jun 2022 12:49:40 GMT", "version": "v2" } ]
2022-06-14
[ [ "Bonizzoni", "Paola", "" ], [ "Costantini", "Matteo", "" ], [ "De Felice", "Clelia", "" ], [ "Petescia", "Alessia", "" ], [ "Pirola", "Yuri", "" ], [ "Previtali", "Marco", "" ], [ "Rizzi", "Raffaella", "" ...
Feature embedding methods have been proposed in the literature to represent sequences as numeric vectors for use in bioinformatics investigations, such as family classification and protein structure prediction. Recent theoretical results showed that the well-known Lyndon factorization preserves common factors in overlapping strings. Surprisingly, the fingerprint of a sequencing read, which is the sequence of lengths of consecutive factors in variants of the Lyndon factorization of the read, is effective in preserving sequence similarities, suggesting it as a basis for the definition of novel representations of sequencing reads. We propose a novel feature embedding method for Next-Generation Sequencing (NGS) data using the notion of fingerprint. We provide a theoretical and experimental framework to estimate the behaviour of fingerprints, and of the $k$-mers extracted from them, called $k$-fingers, as possible feature embeddings for sequencing reads. As a case study to assess the effectiveness of such embeddings, we use fingerprints to represent RNA-Seq reads and to assign them to the most likely gene from which they originated as fragments of transcripts of the gene. We provide an implementation of the proposed method in the tool lyn2vec, which produces Lyndon-based feature embeddings of sequencing reads.
2310.17905
Sonja Billerbeck
Rianne C. Prins, Sonja Billerbeck
Small antimicrobial resistance proteins (SARPs): Small proteins conferring antimicrobial resistance
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Small open reading frames are understudied, as they have historically been excluded from genome annotations. However, evidence for the functional significance of small proteins in various cellular processes is accumulating. Proteins with fewer than 70 residues can also confer resistance to antimicrobial compounds, including intracellularly acting protein toxins, membrane-acting antimicrobial peptides, and various small-molecule antibiotics. Such proteins, herein coined Small Antimicrobial Resistance Proteins (SARPs), have emerged on evolutionary timescales or can be enriched from protein libraries using laboratory evolution. Our review consolidates existing knowledge on SARPs and highlights recent advancements in proteomics and genomics that reveal pervasive translation of unannotated genetic regions into small proteins that show features of known SARPs. The potential contribution of small proteins to antimicrobial resistance awaits exploration.
[ { "created": "Fri, 27 Oct 2023 05:37:56 GMT", "version": "v1" } ]
2023-10-30
[ [ "Prins", "Rianne C.", "" ], [ "Billerbeck", "Sonja", "" ] ]
Small open reading frames are understudied, as they have historically been excluded from genome annotations. However, evidence for the functional significance of small proteins in various cellular processes is accumulating. Proteins with fewer than 70 residues can also confer resistance to antimicrobial compounds, including intracellularly acting protein toxins, membrane-acting antimicrobial peptides, and various small-molecule antibiotics. Such proteins, herein coined Small Antimicrobial Resistance Proteins (SARPs), have emerged on evolutionary timescales or can be enriched from protein libraries using laboratory evolution. Our review consolidates existing knowledge on SARPs and highlights recent advancements in proteomics and genomics that reveal pervasive translation of unannotated genetic regions into small proteins that show features of known SARPs. The potential contribution of small proteins to antimicrobial resistance awaits exploration.
2307.10992
Itai Dattner
Itai Dattner
Modeling Motion Dynamics in Psychotherapy: a Dynamical Systems Approach
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
This study introduces a novel mechanistic modeling and statistical framework for analyzing motion energy dynamics within psychotherapy sessions. We transform raw motion energy data into an interpretable narrative of therapist-patient interactions, thereby revealing unique insights into the nature of these dynamics. Our methodology is established through three detailed case studies, each shedding light on the complexities of dyadic interactions. A key component of our approach is an analysis spanning four years of one therapist's sessions, allowing us to distinguish between trait-like and state-like dynamics. This research represents a significant advancement in the quantitative understanding of motion dynamics in psychotherapy, with the potential to substantially influence both future research and therapeutic practice.
[ { "created": "Wed, 12 Jul 2023 07:19:27 GMT", "version": "v1" } ]
2023-07-21
[ [ "Dattner", "Itai", "" ] ]
This study introduces a novel mechanistic modeling and statistical framework for analyzing motion energy dynamics within psychotherapy sessions. We transform raw motion energy data into an interpretable narrative of therapist-patient interactions, thereby revealing unique insights into the nature of these dynamics. Our methodology is established through three detailed case studies, each shedding light on the complexities of dyadic interactions. A key component of our approach is an analysis spanning four years of one therapist's sessions, allowing us to distinguish between trait-like and state-like dynamics. This research represents a significant advancement in the quantitative understanding of motion dynamics in psychotherapy, with the potential to substantially influence both future research and therapeutic practice.
1508.04952
Jose A Capitan
Jose A. Capitan, Susanna Manrubia
Demography-based adaptive network model reproduces the spatial organization of human linguistic groups
null
Phys. Rev. E 92, 062811 (2015)
10.1103/PhysRevE.92.062811
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The distribution of human linguistic groups presents a number of interesting and non-trivial patterns. The distributions of the number of speakers per language and the area each group covers follow log-normal distributions, while population and area fulfill an allometric relationship. The topology of networks of spatial contacts between different linguistic groups has been recently characterized, showing atypical properties of the degree distribution and clustering, among others. Human demography, spatial conflicts, and the construction of networks of contacts between linguistic groups are mutually dependent processes. Here we introduce an adaptive network model that takes all of them into account and successfully reproduces, using only four model parameters, not only those features of linguistic groups already described in the literature, but also correlations between demographic and topological properties uncovered in this work. Besides their relevance when modeling and understanding processes related to human biogeography, our adaptive network model admits a number of generalizations that broaden its scope and make it suitable to represent interactions between agents based on population dynamics and competition for space.
[ { "created": "Thu, 20 Aug 2015 10:54:14 GMT", "version": "v1" } ]
2015-12-16
[ [ "Capitan", "Jose A.", "" ], [ "Manrubia", "Susanna", "" ] ]
The distribution of human linguistic groups presents a number of interesting and non-trivial patterns. The distributions of the number of speakers per language and the area each group covers follow log-normal distributions, while population and area fulfill an allometric relationship. The topology of networks of spatial contacts between different linguistic groups has been recently characterized, showing atypical properties of the degree distribution and clustering, among others. Human demography, spatial conflicts, and the construction of networks of contacts between linguistic groups are mutually dependent processes. Here we introduce an adaptive network model that takes all of them into account and successfully reproduces, using only four model parameters, not only those features of linguistic groups already described in the literature, but also correlations between demographic and topological properties uncovered in this work. Besides their relevance when modeling and understanding processes related to human biogeography, our adaptive network model admits a number of generalizations that broaden its scope and make it suitable to represent interactions between agents based on population dynamics and competition for space.
1801.07585
Olivier Rivoire
Ivan Junier, Paul Fr\'emont, Olivier Rivoire
Universal and idiosyncratic characteristic lengths in bacterial genomes
null
null
10.1088/1478-3975/aab4ac
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In condensed matter physics, simplified descriptions are obtained by coarse-graining the features of a system at a certain characteristic length, defined as the typical length beyond which some properties are no longer correlated. From a physics standpoint, in vitro DNA has thus a characteristic length of 300 base pairs (bp), the Kuhn length of the molecule beyond which correlations in its orientations are typically lost. From a biology standpoint, in vivo DNA has a characteristic length of 1000 bp, the typical length of genes. Since bacteria live in very different physico-chemical conditions and since their genomes lack translational invariance, whether larger, universal characteristic lengths exist is a non-trivial question. Here, we examine this problem by leveraging the large number of fully sequenced genomes available in public databases. By analyzing GC content correlations and the evolutionary conservation of gene contexts (synteny) in hundreds of bacterial chromosomes, we conclude that a fundamental characteristic length around 10-20 kb can be defined. This characteristic length reflects elementary structures involved in the coordination of gene expression, which are present all along the genome of nearly all bacteria. Technically, reaching this conclusion required us to implement methods that are insensitive to the presence of large idiosyncratic genomic features, which may co-exist along these fundamental universal structures.
[ { "created": "Tue, 23 Jan 2018 14:48:00 GMT", "version": "v1" } ]
2018-04-18
[ [ "Junier", "Ivan", "" ], [ "Frémont", "Paul", "" ], [ "Rivoire", "Olivier", "" ] ]
In condensed matter physics, simplified descriptions are obtained by coarse-graining the features of a system at a certain characteristic length, defined as the typical length beyond which some properties are no longer correlated. From a physics standpoint, in vitro DNA has thus a characteristic length of 300 base pairs (bp), the Kuhn length of the molecule beyond which correlations in its orientations are typically lost. From a biology standpoint, in vivo DNA has a characteristic length of 1000 bp, the typical length of genes. Since bacteria live in very different physico-chemical conditions and since their genomes lack translational invariance, whether larger, universal characteristic lengths exist is a non-trivial question. Here, we examine this problem by leveraging the large number of fully sequenced genomes available in public databases. By analyzing GC content correlations and the evolutionary conservation of gene contexts (synteny) in hundreds of bacterial chromosomes, we conclude that a fundamental characteristic length around 10-20 kb can be defined. This characteristic length reflects elementary structures involved in the coordination of gene expression, which are present all along the genome of nearly all bacteria. Technically, reaching this conclusion required us to implement methods that are insensitive to the presence of large idiosyncratic genomic features, which may co-exist along these fundamental universal structures.
1603.06941
Katte Rao Toppaldoddi
Katte Rao Toppaldoddi
Perspective: Speculative role of Tmp21 mediated protein secretory pathway during endoplasmic reticulum (ER) stress induced chronic inflammation
2 pages
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By deploying myelofibrosis as the disease context, I wish to propose that increased availability of Tmp21 (an NFAT gene target) induces aberrant protein secretion from the ER, contributing to pathological consequences, which has not been elucidated before. Primary myelofibrosis is now mainly considered an advanced stage of BCR-ABL1-negative myeloproliferative neoplasms (MPN), which otherwise include polycythemia vera and essential thrombocythemia. Myelofibrosis is defined by increased insoluble collagen fiber deposition in the bone marrow and harbors chronic inflammation as an important component of disease progression.
[ { "created": "Tue, 22 Mar 2016 17:40:44 GMT", "version": "v1" } ]
2016-03-24
[ [ "Toppaldoddi", "Katte Rao", "" ] ]
By deploying myelofibrosis as the disease context, I wish to propose that increased availability of Tmp21 (an NFAT gene target) induces aberrant protein secretion from the ER, contributing to pathological consequences, which has not been elucidated before. Primary myelofibrosis is now mainly considered an advanced stage of BCR-ABL1-negative myeloproliferative neoplasms (MPN), which otherwise include polycythemia vera and essential thrombocythemia. Myelofibrosis is defined by increased insoluble collagen fiber deposition in the bone marrow and harbors chronic inflammation as an important component of disease progression.
2001.05369
Michael Wenske
Christian Engwer, Michael Wenske
Estimating the extent of glioblastoma invasion
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Glioblastoma Multiforme is a malignant brain tumor with poor prognosis. There have been numerous attempts to model the invasion of tumorous glioma cells via partial differential equations in the form of advection-diffusion-reaction equations. The patient-wise parametrisation of these models, and their validation via experimental data, have been found to be difficult, as time sequence measurements are generally missing. Also, the clinical interest lies in the actual (invisible) tumor extent for a particular MRI/DTI scan and not in a predictive estimate. Therefore we propose a stationalised approach to estimate the extent of glioblastoma (GBM) invasion at the time of a given MRI/DTI scan. The underlying dynamics can be derived from an instationary GBM model, falling into the wide class of advection-diffusion-reaction equations. The stationalisation is introduced via an analytical solution of the Fisher-KPP equation, the simplest model in the considered model class. We investigate the applicability in 1D and 2D, in the presence of inhomogeneous diffusion coefficients, and on a real 3D DTI-dataset.
[ { "created": "Wed, 15 Jan 2020 15:18:50 GMT", "version": "v1" } ]
2020-01-16
[ [ "Engwer", "Christian", "" ], [ "Wenske", "Michael", "" ] ]
Glioblastoma Multiforme is a malignant brain tumor with poor prognosis. There have been numerous attempts to model the invasion of tumorous glioma cells via partial differential equations in the form of advection-diffusion-reaction equations. The patient-wise parametrisation of these models, and their validation via experimental data, have been found to be difficult, as time sequence measurements are generally missing. Also, the clinical interest lies in the actual (invisible) tumor extent for a particular MRI/DTI scan and not in a predictive estimate. Therefore we propose a stationalised approach to estimate the extent of glioblastoma (GBM) invasion at the time of a given MRI/DTI scan. The underlying dynamics can be derived from an instationary GBM model, falling into the wide class of advection-diffusion-reaction equations. The stationalisation is introduced via an analytical solution of the Fisher-KPP equation, the simplest model in the considered model class. We investigate the applicability in 1D and 2D, in the presence of inhomogeneous diffusion coefficients, and on a real 3D DTI-dataset.
2106.07005
Vladimir Chechetkin R.
Vladimir R. Chechetkin and Vasily V. Lobzin
Evolving ribonucleocapsid assembly/packaging signals in the genomes of the human and animal coronaviruses: targeting, transmission and evolution
40 pages, 12 figures
Journal of Biomolecular Structure and Dynamics 2021
10.1080/07391102.2021.1958061
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The worldwide COVID-19 pandemic has strongly intensified the study of molecular mechanisms related to coronaviruses. The origin of coronaviruses and the risks of human-to-human, animal-to-human, and human-to-animal transmission of coronaviral infections can be understood only at a broader evolutionary level through detailed comparative studies. In this paper, we studied ribonucleocapsid assembly-packaging signals (RNAPS) in the genomes of all seven known pathogenic human coronaviruses, SARS-CoV, SARS-CoV-2, MERS-CoV, HCoV-OC43, HCoV-HKU1, HCoV-229E, and HCoV-NL63, and compared them with RNAPS in the genomes of related animal coronaviruses, including SARS-Bat-CoV, MERS-Camel-CoV, MHV, Bat-CoV MOP1, TGEV, and one of the camel alphacoronaviruses. RNAPS in the genomes of coronaviruses evolved due to weakly specific interactions between genomic RNA and N proteins in helical nucleocapsids. Combining transitional genome mapping and Jaccard correlation coefficients allows us to perform the analysis directly in terms of underlying motifs distributed over the genome. In all coronaviruses, RNAPS were distributed quasi-periodically over the genome, with a period of about 54 nt, biased to 57 nt and to 51 nt for genomes longer and shorter than that of SARS-CoV, respectively. The comparison with the experimentally verified packaging signals for MERS-CoV, MHV, and TGEV proved that the distribution of particular motifs is strongly correlated with the packaging signals. We also found that many motifs were highly conserved in both character and positioning on the genomes throughout the lineages, which makes them promising therapeutic targets. The mechanisms of encapsidation can affect recombination and co-infection as well.
[ { "created": "Sun, 13 Jun 2021 14:26:19 GMT", "version": "v1" }, { "created": "Sat, 31 Jul 2021 15:49:42 GMT", "version": "v2" } ]
2021-08-25
[ [ "Chechetkin", "Vladimir R.", "" ], [ "Lobzin", "Vasily V.", "" ] ]
The worldwide COVID-19 pandemic has strongly intensified the study of molecular mechanisms related to coronaviruses. The origin of coronaviruses and the risks of human-to-human, animal-to-human, and human-to-animal transmission of coronaviral infections can be understood only at a broader evolutionary level through detailed comparative studies. In this paper, we studied ribonucleocapsid assembly-packaging signals (RNAPS) in the genomes of all seven known pathogenic human coronaviruses, SARS-CoV, SARS-CoV-2, MERS-CoV, HCoV-OC43, HCoV-HKU1, HCoV-229E, and HCoV-NL63, and compared them with RNAPS in the genomes of related animal coronaviruses, including SARS-Bat-CoV, MERS-Camel-CoV, MHV, Bat-CoV MOP1, TGEV, and one of the camel alphacoronaviruses. RNAPS in the genomes of coronaviruses evolved due to weakly specific interactions between genomic RNA and N proteins in helical nucleocapsids. Combining transitional genome mapping and Jaccard correlation coefficients allows us to perform the analysis directly in terms of underlying motifs distributed over the genome. In all coronaviruses, RNAPS were distributed quasi-periodically over the genome, with a period of about 54 nt, biased to 57 nt and to 51 nt for genomes longer and shorter than that of SARS-CoV, respectively. The comparison with the experimentally verified packaging signals for MERS-CoV, MHV, and TGEV proved that the distribution of particular motifs is strongly correlated with the packaging signals. We also found that many motifs were highly conserved in both character and positioning on the genomes throughout the lineages, which makes them promising therapeutic targets. The mechanisms of encapsidation can affect recombination and co-infection as well.
1707.01623
Ji-Sung Kim
Ji-Sung Kim, Xin Gao, Andrey Rzhetsky
RIDDLE: Race and ethnicity Imputation from Disease history with Deep LEarning
null
PLOS Computational Biology 14(4): e1006106 (2018)
10.1371/journal.pcbi.1006106
null
q-bio.QM cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
Anonymized electronic medical records are an increasingly popular source of research data. However, these datasets often lack race and ethnicity information. This creates problems for researchers modeling human disease, as race and ethnicity are powerful confounders for many health exposures and treatment outcomes; race and ethnicity are closely linked to population-specific genetic variation. We showed that deep neural networks generate more accurate estimates for missing racial and ethnic information than competing methods (e.g., logistic regression, random forest). RIDDLE yielded significantly better classification performance across all metrics that were considered: accuracy, cross-entropy loss (error), and area under the curve for receiver operating characteristic plots (all $p < 10^{-6}$). We made specific efforts to interpret the trained neural network models to identify, quantify, and visualize medical features which are predictive of race and ethnicity. We used these characterizations of informative features to perform a systematic comparison of differential disease patterns by race and ethnicity. The fact that clinical histories are informative for imputing race and ethnicity could reflect (1) a skewed distribution of blue- and white-collar professions across racial and ethnic groups, (2) uneven accessibility and subjective importance of prophylactic health, (3) possible variation in lifestyle, such as dietary habits, and (4) differences in background genetic variation which predispose to diseases.
[ { "created": "Thu, 6 Jul 2017 03:03:57 GMT", "version": "v1" }, { "created": "Fri, 27 Apr 2018 21:21:47 GMT", "version": "v2" } ]
2018-05-01
[ [ "Kim", "Ji-Sung", "" ], [ "Gao", "Xin", "" ], [ "Rzhetsky", "Andrey", "" ] ]
Anonymized electronic medical records are an increasingly popular source of research data. However, these datasets often lack race and ethnicity information. This creates problems for researchers modeling human disease, as race and ethnicity are powerful confounders for many health exposures and treatment outcomes; race and ethnicity are closely linked to population-specific genetic variation. We showed that deep neural networks generate more accurate estimates for missing racial and ethnic information than competing methods (e.g., logistic regression, random forest). RIDDLE yielded significantly better classification performance across all metrics that were considered: accuracy, cross-entropy loss (error), and area under the curve for receiver operating characteristic plots (all $p < 10^{-6}$). We made specific efforts to interpret the trained neural network models to identify, quantify, and visualize medical features which are predictive of race and ethnicity. We used these characterizations of informative features to perform a systematic comparison of differential disease patterns by race and ethnicity. The fact that clinical histories are informative for imputing race and ethnicity could reflect (1) a skewed distribution of blue- and white-collar professions across racial and ethnic groups, (2) uneven accessibility and subjective importance of prophylactic health, (3) possible variation in lifestyle, such as dietary habits, and (4) differences in background genetic variation which predispose to diseases.
1412.5004
Aldana Mar\'ia Gonz\'alez Montoro
Aldana M. Gonz\'alez Montoro, Ricardo Cao, Nelson Espinosa, Javier Cudeiro, Jorge Mari\~no
Bootstrap testing for cross-correlation under low firing activity
22 pages, 7 figures
null
10.1007/s10827-015-0557-5
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new cross-correlation synchrony index for neural activity is proposed. The index is based on the integration of the kernel estimation of the cross-correlation function. It is used to test for the dynamic synchronization levels of spontaneous neural activity under two induced brain states: sleep-like and awake-like. Two bootstrap resampling plans are proposed to approximate the distribution of the test statistics. The results of the first bootstrap method indicate that it is useful to discern significant differences in the synchronization dynamics of brain states characterized by a neural activity with low firing rate. The second bootstrap method is useful to unveil subtle differences in the synchronization levels of the awake-like state, depending on the activation pathway.
[ { "created": "Tue, 16 Dec 2014 14:10:24 GMT", "version": "v1" }, { "created": "Thu, 16 Apr 2015 18:22:07 GMT", "version": "v2" } ]
2016-02-22
[ [ "Montoro", "Aldana M. González", "" ], [ "Cao", "Ricardo", "" ], [ "Espinosa", "Nelson", "" ], [ "Cudeiro", "Javier", "" ], [ "Mariño", "Jorge", "" ] ]
A new cross-correlation synchrony index for neural activity is proposed. The index is based on the integration of the kernel estimation of the cross-correlation function. It is used to test for the dynamic synchronization levels of spontaneous neural activity under two induced brain states: sleep-like and awake-like. Two bootstrap resampling plans are proposed to approximate the distribution of the test statistics. The results of the first bootstrap method indicate that it is useful to discern significant differences in the synchronization dynamics of brain states characterized by a neural activity with low firing rate. The second bootstrap method is useful to unveil subtle differences in the synchronization levels of the awake-like state, depending on the activation pathway.
2311.16160
Shikun Feng
Shikun Feng, Minghao Li, Yinjun Jia, Weiying Ma, Yanyan Lan
Protein-ligand binding representation learning from fine-grained interactions
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The binding between proteins and ligands plays a crucial role in drug discovery. Previous deep learning approaches have shown promising results over traditional computationally intensive methods, but suffer from poor generalization due to limited supervised data. In this paper, we propose to learn protein-ligand binding representations in a self-supervised manner. Different from existing pre-training approaches, which treat proteins and ligands individually, we emphasize discerning the intricate binding patterns from fine-grained interactions. Specifically, this self-supervised learning problem is formulated as a prediction of the conclusive binding complex structure given a pocket and ligand, with a Transformer-based interaction module, which naturally emulates the binding process. To ensure the representation of rich binding information, we introduce two pre-training tasks, i.e.~atomic pairwise distance map prediction and masked ligand reconstruction, which comprehensively model the fine-grained interactions from both structure and feature space. Extensive experiments have demonstrated the superiority of our method across various binding tasks, including protein-ligand affinity prediction, virtual screening and protein-ligand docking.
[ { "created": "Thu, 9 Nov 2023 01:33:09 GMT", "version": "v1" } ]
2023-11-29
[ [ "Feng", "Shikun", "" ], [ "Li", "Minghao", "" ], [ "Jia", "Yinjun", "" ], [ "Ma", "Weiying", "" ], [ "Lan", "Yanyan", "" ] ]
The binding between proteins and ligands plays a crucial role in drug discovery. Previous deep learning approaches have shown promising results over traditional computationally intensive methods, but suffer from poor generalization due to limited supervised data. In this paper, we propose to learn protein-ligand binding representations in a self-supervised manner. Different from existing pre-training approaches, which treat proteins and ligands individually, we emphasize discerning the intricate binding patterns from fine-grained interactions. Specifically, this self-supervised learning problem is formulated as a prediction of the conclusive binding complex structure given a pocket and ligand, with a Transformer-based interaction module, which naturally emulates the binding process. To ensure the representation of rich binding information, we introduce two pre-training tasks, i.e.~atomic pairwise distance map prediction and masked ligand reconstruction, which comprehensively model the fine-grained interactions from both structure and feature space. Extensive experiments have demonstrated the superiority of our method across various binding tasks, including protein-ligand affinity prediction, virtual screening and protein-ligand docking.
2112.10477
Laura Cifuentes Fontanals
Laura Cifuentes-Fontanals, Elisa Tonello, Heike Siebert
Control in Boolean networks with model checking
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding control mechanisms in biological systems plays a crucial role in important applications, for instance in cell reprogramming. Boolean modeling allows the identification of possible efficient strategies, helping to reduce the usually high and time-consuming experimental efforts. Available approaches to control strategy identification usually focus either on attractor or phenotype control, and are unable to deal with more complex control problems, for instance phenotype avoidance. They also fail to capture, in many situations, all possible minimal strategies, finding instead only sub-optimal solutions. In order to fill these gaps, we present a novel approach to control strategy identification in Boolean networks based on model checking. The method is guaranteed to identify all minimal control strategies, and provides maximal flexibility in the definition of the control target. We investigate the applicability of the approach by considering a range of control problems for different biological systems, comparing the results, where possible, to those obtained by alternative control methods.
[ { "created": "Mon, 20 Dec 2021 12:25:59 GMT", "version": "v1" } ]
2021-12-21
[ [ "Cifuentes-Fontanals", "Laura", "" ], [ "Tonello", "Elisa", "" ], [ "Siebert", "Heike", "" ] ]
Understanding control mechanisms in biological systems plays a crucial role in important applications, for instance in cell reprogramming. Boolean modeling allows the identification of possible efficient strategies, helping to reduce the usually high and time-consuming experimental efforts. Available approaches to control strategy identification usually focus either on attractor or phenotype control, and are unable to deal with more complex control problems, for instance phenotype avoidance. They also fail to capture, in many situations, all possible minimal strategies, finding instead only sub-optimal solutions. In order to fill these gaps, we present a novel approach to control strategy identification in Boolean networks based on model checking. The method is guaranteed to identify all minimal control strategies, and provides maximal flexibility in the definition of the control target. We investigate the applicability of the approach by considering a range of control problems for different biological systems, comparing the results, where possible, to those obtained by alternative control methods.
0802.1762
Denis Boyer
Denis Boyer, Octavio Miramontes, Gabriel Ramos-Fern\'andez
Evidence for biological L\'evy flights stands
6 page, 1 figure
null
null
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Edwards et al. [Nature 449, 1044-1048 (2007)] revisited well-known studies reporting power-laws in the frequency distribution of flight duration of wandering albatrosses, and concluded that no L\'evy process could model recent observations with higher resolution. Here we show that this re-analysis suffers from a conceptual misunderstanding, and that the new albatross data remain consistent with a biological L\'evy flight.
[ { "created": "Wed, 13 Feb 2008 04:59:57 GMT", "version": "v1" } ]
2008-02-14
[ [ "Boyer", "Denis", "" ], [ "Miramontes", "Octavio", "" ], [ "Ramos-Fernández", "Gabriel", "" ] ]
Edwards et al. [Nature 449, 1044-1048 (2007)] revisited well-known studies reporting power-laws in the frequency distribution of flight duration of wandering albatrosses, and concluded that no L\'evy process could model recent observations with higher resolution. Here we show that this re-analysis suffers from a conceptual misunderstanding, and that the new albatross data remain consistent with a biological L\'evy flight.
1710.03923
Muhammad Yousefnezhad
Muhammad Yousefnezhad and Daoqiang Zhang
Deep Hyperalignment
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA
null
null
null
q-bio.NC cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes Deep Hyperalignment (DHA) as a regularized, deep, and scalable extension of the Hyperalignment (HA) method, which is well-suited for applying functional alignment to fMRI datasets with nonlinearity, high dimensionality (broad ROI), and a large number of subjects. Unlike previous methods, DHA is not limited by a restricted fixed kernel function. Further, it uses a parametric approach, rank-$m$ Singular Value Decomposition (SVD), and stochastic gradient descent for optimization. Therefore, DHA has a suitable time complexity for large datasets and does not require the training data when it computes the functional alignment for a new subject. Experimental studies on multi-subject fMRI analysis confirm that the DHA method achieves superior performance to other state-of-the-art HA algorithms.
[ { "created": "Wed, 11 Oct 2017 06:21:45 GMT", "version": "v1" } ]
2017-10-12
[ [ "Yousefnezhad", "Muhammad", "" ], [ "Zhang", "Daoqiang", "" ] ]
This paper proposes Deep Hyperalignment (DHA) as a regularized, deep, and scalable extension of the Hyperalignment (HA) method, which is well-suited for applying functional alignment to fMRI datasets with nonlinearity, high dimensionality (broad ROI), and a large number of subjects. Unlike previous methods, DHA is not limited by a restricted fixed kernel function. Further, it uses a parametric approach, rank-$m$ Singular Value Decomposition (SVD), and stochastic gradient descent for optimization. Therefore, DHA has a suitable time complexity for large datasets and does not require the training data when it computes the functional alignment for a new subject. Experimental studies on multi-subject fMRI analysis confirm that the DHA method achieves superior performance to other state-of-the-art HA algorithms.
2109.02048
Francesco Fumarola
Francesco Fumarola, Bettina Hein, Kenneth D. Miller
Mechanisms for spontaneous symmetry breaking in developing visual cortex
33 pages, 11 figures
null
null
null
q-bio.NC physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
For the brain to recognize local orientations within images, neurons must spontaneously break the translation and rotation symmetry of their response functions -- an archetypal example of unsupervised learning. The dominant framework for unsupervised learning in biology is Hebb's principle, but how Hebbian learning could break such symmetries is a longstanding biophysical riddle. Theoretical studies agree that this should require inputs to visual cortex to invert the relative magnitude of their correlations at long distances. Empirical measurements have searched in vain for such an inversion, and report the opposite to be true. We formally approach the question through the hermitianization of a multi-layer model, which maps it into a problem of zero-temperature phase transitions. In the emerging phase diagram, both symmetries break spontaneously as long as (1) recurrent interactions are sufficiently long-range and (2) Hebbian competition is duly accounted for. The relevant mechanism for symmetry breaking is competition among connections sprouting from the same afferent cell. Such competition, and not the structure of the inputs, is capable of triggering the broken-symmetry phase required by image processing. We provide analytic predictions on the relative magnitudes of the relevant length-scales needed for this novel mechanism to occur. These results reconcile experimental observations to the Hebbian paradigm, shed light on a new mechanism for visual cortex development, and contribute to our growing understanding of the relationship between learning and symmetry breaking.
[ { "created": "Sun, 5 Sep 2021 12:00:51 GMT", "version": "v1" } ]
2021-09-07
[ [ "Fumarola", "Francesco", "" ], [ "Hein", "Bettina", "" ], [ "Miller", "Kenneth D.", "" ] ]
For the brain to recognize local orientations within images, neurons must spontaneously break the translation and rotation symmetry of their response functions -- an archetypal example of unsupervised learning. The dominant framework for unsupervised learning in biology is Hebb's principle, but how Hebbian learning could break such symmetries is a longstanding biophysical riddle. Theoretical studies agree that this should require inputs to visual cortex to invert the relative magnitude of their correlations at long distances. Empirical measurements have searched in vain for such an inversion, and report the opposite to be true. We formally approach the question through the hermitianization of a multi-layer model, which maps it into a problem of zero-temperature phase transitions. In the emerging phase diagram, both symmetries break spontaneously as long as (1) recurrent interactions are sufficiently long-range and (2) Hebbian competition is duly accounted for. The relevant mechanism for symmetry breaking is competition among connections sprouting from the same afferent cell. Such competition, and not the structure of the inputs, is capable of triggering the broken-symmetry phase required by image processing. We provide analytic predictions on the relative magnitudes of the relevant length-scales needed for this novel mechanism to occur. These results reconcile experimental observations to the Hebbian paradigm, shed light on a new mechanism for visual cortex development, and contribute to our growing understanding of the relationship between learning and symmetry breaking.
2109.02785
Subhashis Banerjee
Subhashis Banerjee and Sushmita Mitra and Lawrence O. Hall
Analysis of MRI Biomarkers for Brain Cancer Survival Prediction
null
null
null
null
q-bio.QM cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Prediction of Overall Survival (OS) of brain cancer patients from multi-modal MRI is a challenging field of research. Most of the existing literature on survival prediction is based on radiomic features, which consider neither non-biological factors nor the functional neurological status of the patient(s). Besides, the selection of an appropriate cut-off for survival and the presence of censored data create further problems. Application of deep learning models for OS prediction is also limited due to the lack of large annotated publicly available datasets. In this scenario we analyse the potential of two novel neuroimaging feature families, extracted from brain parcellation atlases and spatial habitats, along with classical radiomic and geometric features, to study their combined predictive power for analysing overall survival. A cross-validation strategy with grid search is proposed to simultaneously select and evaluate the most predictive feature subset based on its predictive power. A Cox Proportional Hazard (CoxPH) model is employed for univariate feature selection, followed by the prediction of patient-specific survival functions by three multivariate parsimonious models, viz. Coxnet, Random Survival Forests (RSF) and Survival SVM (SSVM). The brain cancer MRI data used for this research were taken from two open-access collections, TCGA-GBM and TCGA-LGG, available from The Cancer Imaging Archive (TCIA). Corresponding survival data for each patient were downloaded from The Cancer Genome Atlas (TCGA). A high cross-validation $C$-index score of $0.82 \pm 0.10$ was achieved using RSF with the best $24$ selected features. Age was found to be the most important biological predictor. There were $9$, $6$, $6$ and $2$ features selected from the parcellation, habitat, radiomic and region-based feature groups respectively.
[ { "created": "Fri, 3 Sep 2021 05:35:47 GMT", "version": "v1" } ]
2021-09-08
[ [ "Banerjee", "Subhashis", "" ], [ "Mitra", "Sushmita", "" ], [ "Hall", "Lawrence O.", "" ] ]
Prediction of Overall Survival (OS) of brain cancer patients from multi-modal MRI is a challenging field of research. Most of the existing literature on survival prediction is based on radiomic features, which consider neither non-biological factors nor the functional neurological status of the patient(s). Besides, the selection of an appropriate cut-off for survival and the presence of censored data create further problems. Application of deep learning models for OS prediction is also limited due to the lack of large annotated publicly available datasets. In this scenario we analyse the potential of two novel neuroimaging feature families, extracted from brain parcellation atlases and spatial habitats, along with classical radiomic and geometric features, to study their combined predictive power for analysing overall survival. A cross-validation strategy with grid search is proposed to simultaneously select and evaluate the most predictive feature subset based on its predictive power. A Cox Proportional Hazard (CoxPH) model is employed for univariate feature selection, followed by the prediction of patient-specific survival functions by three multivariate parsimonious models, viz. Coxnet, Random Survival Forests (RSF) and Survival SVM (SSVM). The brain cancer MRI data used for this research were taken from two open-access collections, TCGA-GBM and TCGA-LGG, available from The Cancer Imaging Archive (TCIA). Corresponding survival data for each patient were downloaded from The Cancer Genome Atlas (TCGA). A high cross-validation $C$-index score of $0.82 \pm 0.10$ was achieved using RSF with the best $24$ selected features. Age was found to be the most important biological predictor. There were $9$, $6$, $6$ and $2$ features selected from the parcellation, habitat, radiomic and region-based feature groups respectively.
1805.11816
Sylvain Gretchko
Sylvain Gretchko, Jessa Marley, Rebecca C. Tyson
The Effects of Climate Change on Predator-Prey Dynamics
17 pages, 13 figures
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this investigation we study the effects of climate change on predator-prey population dynamics. The interaction of predator and prey is described by the Variable Territory (VT) model with Allee effects in an Ordinary Differential Equation (ODE) framework. Climate influence is modeled in this ODE framework as an exogenous driver which affects the prey growth rate. Mathematically, this exogenous driver is a function of time, the climate function, that describes how favorable the climate is, on a scale from $-1$ to $+1$. Climate change has a number of effects; in this work, we focus on changes in climate variability, in particular, a decrease in the switching rate between good years and bad. With regard to the predator-prey populations, we focus on the conditions that lead to extinction. We developed a high-performance software tool to allow the systematic exploration of a wide range of climate scenarios at a high level of detail. Our results show that the predator-prey system is sensitive to the rapid drop in growth rate that accompanies a switch from good growth years to bad growth years. The amplitude of this negative variation as well as its duration are key factors, but more importantly, it is the moment when this negative change occurs during the predator-prey cycle that is critical for the survival of the species.
[ { "created": "Wed, 30 May 2018 05:46:29 GMT", "version": "v1" } ]
2018-05-31
[ [ "Gretchko", "Sylvain", "" ], [ "Marley", "Jessa", "" ], [ "Tyson", "Rebecca C.", "" ] ]
In this investigation we study the effects of climate change on predator-prey population dynamics. The interaction of predator and prey is described by the Variable Territory (VT) model with Allee effects in an Ordinary Differential Equation (ODE) framework. Climate influence is modeled in this ODE framework as an exogenous driver which affects the prey growth rate. Mathematically, this exogenous driver is a function of time, the climate function, that describes how favorable the climate is, on a scale from $-1$ to $+1$. Climate change has a number of effects; in this work, we focus on changes in climate variability, in particular, a decrease in the switching rate between good years and bad. With regard to the predator-prey populations, we focus on the conditions that lead to extinction. We developed a high-performance software tool to allow the systematic exploration of a wide range of climate scenarios at a high level of detail. Our results show that the predator-prey system is sensitive to the rapid drop in growth rate that accompanies a switch from good growth years to bad growth years. The amplitude of this negative variation as well as its duration are key factors, but more importantly, it is the moment when this negative change occurs during the predator-prey cycle that is critical for the survival of the species.
2211.03632
Johannes Wirtz
Johannes Wirtz
On the enumeration of leaf-labelled increasing trees with arbitrary node-degree
null
null
null
null
q-bio.PE math.CO
http://creativecommons.org/licenses/by-nc-sa/4.0/
We consider the counting problem of the number of \textit{leaf-labelled increasing trees}, where internal nodes may have an arbitrary number of descendants. The set of all such trees is a discrete representation of the genealogies obtained under certain population-genetic models such as multiple-merger coalescents. While the combinatorics of the binary trees among those are well understood, for the number of all trees only an approximate asymptotic formula is known. In this work, we validate this formula up to constant terms and compare the asymptotic behavior of the number of all leaf-labelled increasing trees to that of binary, ternary and quaternary trees.
[ { "created": "Mon, 7 Nov 2022 15:41:42 GMT", "version": "v1" } ]
2022-11-08
[ [ "Wirtz", "Johannes", "" ] ]
We consider the counting problem of the number of \textit{leaf-labelled increasing trees}, where internal nodes may have an arbitrary number of descendants. The set of all such trees is a discrete representation of the genealogies obtained under certain population-genetic models such as multiple-merger coalescents. While the combinatorics of the binary trees among those are well understood, for the number of all trees only an approximate asymptotic formula is known. In this work, we validate this formula up to constant terms and compare the asymptotic behavior of the number of all leaf-labelled increasing trees to that of binary, ternary and quaternary trees.
2010.08162
Jonathan King
Jonathan E. King, David Ryan Koes
SidechainNet: An All-Atom Protein Structure Dataset for Machine Learning
8 pages, 2 figures, 1 table, Accepted for the Machine Learning for Structural Biology Workshop at the 34th Conference on Neural Information Processing Systems (MLSB NeurIPS 2020)
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Despite recent advancements in deep learning methods for protein structure prediction and representation, little focus has been directed at the simultaneous inclusion and prediction of protein backbone and sidechain structure information. We present SidechainNet, a new dataset that directly extends the ProteinNet dataset. SidechainNet includes angle and atomic coordinate information capable of describing all heavy atoms of each protein structure. In this paper, we provide background information on the availability of protein structure data and the significance of ProteinNet. Thereafter, we argue for the potentially beneficial inclusion of sidechain information through SidechainNet, describe the process by which we organize SidechainNet, and provide a software package (https://github.com/jonathanking/sidechainnet) for data manipulation and training with machine learning models.
[ { "created": "Fri, 16 Oct 2020 04:38:28 GMT", "version": "v1" }, { "created": "Sun, 15 Nov 2020 07:42:27 GMT", "version": "v2" } ]
2020-11-17
[ [ "King", "Jonathan E.", "" ], [ "Koes", "David Ryan", "" ] ]
Despite recent advancements in deep learning methods for protein structure prediction and representation, little focus has been directed at the simultaneous inclusion and prediction of protein backbone and sidechain structure information. We present SidechainNet, a new dataset that directly extends the ProteinNet dataset. SidechainNet includes angle and atomic coordinate information capable of describing all heavy atoms of each protein structure. In this paper, we provide background information on the availability of protein structure data and the significance of ProteinNet. Thereafter, we argue for the potentially beneficial inclusion of sidechain information through SidechainNet, describe the process by which we organize SidechainNet, and provide a software package (https://github.com/jonathanking/sidechainnet) for data manipulation and training with machine learning models.
1409.0642
Ellen Baake
Ute Lenz, Sandra Kluth, Ellen Baake, Anton Wakolbinger
Looking down in the ancestral selection graph: A probabilistic approach to the common ancestor type distribution
Theoretical Population Biology, in press
Theor. Pop. Biol. 103 (2015), 27-37
10.1016/j.tpb.2015.01.005
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a (two-type) Wright-Fisher diffusion with directional selection and two-way mutation, let $x$ denote today's frequency of the beneficial type, and given $x$, let $h(x)$ be the probability that, among all individuals of today's population, the individual whose progeny will eventually take over in the population is of the beneficial type. Fearnhead [Fearnhead, P., 2002. The common ancestor at a nonneutral locus. J. Appl. Probab. 39, 38-54] and Taylor [Taylor, J. E., 2007. The common ancestor process for a Wright-Fisher diffusion. Electron. J. Probab. 12, 808-847] obtained a series representation for $h(x)$. We develop a construction that contains elements of both the ancestral selection graph and the lookdown construction and includes pruning of certain lines upon mutation. Besides being interesting in its own right, this construction allows a transparent derivation of the series coefficients of $h(x)$ and gives them a probabilistic meaning.
[ { "created": "Tue, 2 Sep 2014 09:35:14 GMT", "version": "v1" }, { "created": "Tue, 27 Jan 2015 17:06:44 GMT", "version": "v2" } ]
2015-07-10
[ [ "Lenz", "Ute", "" ], [ "Kluth", "Sandra", "" ], [ "Baake", "Ellen", "" ], [ "Wakolbinger", "Anton", "" ] ]
In a (two-type) Wright-Fisher diffusion with directional selection and two-way mutation, let $x$ denote today's frequency of the beneficial type, and given $x$, let $h(x)$ be the probability that, among all individuals of today's population, the individual whose progeny will eventually take over in the population is of the beneficial type. Fearnhead [Fearnhead, P., 2002. The common ancestor at a nonneutral locus. J. Appl. Probab. 39, 38-54] and Taylor [Taylor, J. E., 2007. The common ancestor process for a Wright-Fisher diffusion. Electron. J. Probab. 12, 808-847] obtained a series representation for $h(x)$. We develop a construction that contains elements of both the ancestral selection graph and the lookdown construction and includes pruning of certain lines upon mutation. Besides being interesting in its own right, this construction allows a transparent derivation of the series coefficients of $h(x)$ and gives them a probabilistic meaning.
1304.3681
Michael Manhart
Michael Manhart and Alexandre V. Morozov
A path-based approach to random walks on networks characterizes how proteins evolve new function
9 pages, 7 figures
Phys Rev Lett 111:088102, 2013
10.1103/PhysRevLett.111.088102
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a path-based approach to continuous-time random walks on networks with arbitrarily weighted edges. We describe an efficient numerical algorithm for calculating statistical properties of the stochastic path ensemble. After demonstrating our approach on two reaction rate problems, we present a biophysical model that describes how proteins evolve new functions while maintaining thermodynamic stability. We use our methodology to characterize dynamics of evolutionary adaptation, reproducing several key features observed in directed evolution experiments. We find that proteins generally fall into two qualitatively different regimes of adaptation depending on their binding and folding energetics.
[ { "created": "Fri, 12 Apr 2013 16:54:35 GMT", "version": "v1" } ]
2014-08-19
[ [ "Manhart", "Michael", "" ], [ "Morozov", "Alexandre V.", "" ] ]
We develop a path-based approach to continuous-time random walks on networks with arbitrarily weighted edges. We describe an efficient numerical algorithm for calculating statistical properties of the stochastic path ensemble. After demonstrating our approach on two reaction rate problems, we present a biophysical model that describes how proteins evolve new functions while maintaining thermodynamic stability. We use our methodology to characterize dynamics of evolutionary adaptation, reproducing several key features observed in directed evolution experiments. We find that proteins generally fall into two qualitatively different regimes of adaptation depending on their binding and folding energetics.
1508.02739
Kwang Hyun Ko
Kwang Hyun Ko
Origins of Bipedalism
null
null
null
null
q-bio.PE q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The following manuscript reviews various theories of bipedalism and provides a holistic answer to human evolution. There are two questions regarding bipedalism: i) why were the earliest hominins partially bipedal? and ii) why did hominins become increasingly bipedal over time and replace their less bipedal ancestors? To answer these questions, the prominent theories in the field, such as the savanna-based theory, the postural feeding hypothesis, and the provisioning model, are collectively examined. Because biological evolution is an example of trial and error and not a simple causation, there may be multiple answers to the evolution of bipedalism. The postural feeding hypothesis (reaching for food/balancing) provides an explanation for the partial bipedalism of the earliest hominins. The savanna-based theory describes how the largely bipedal hominins that started to settle on the ground became increasingly bipedal. The provisioning model (food-gathering/monogamy) explains questions arising after the postural feeding hypothesis and before the savanna-based theory in an evolutionary timeline. Indeed, there are no straight lines between the theories, and multiple forces could have pushed the evolution of bipedalism at different points. Finally, this manuscript states that the arboreal hominins that possessed ambiguous traits of bipedalism were eliminated through choice and selection. Using the biological analogy of the okapi and giraffe, I explain how one of the branches (Homo) became increasingly bipedal while the other (Pan) adapted to locomotion for forest life by narrowing the anatomical/biological focus in evolution.
[ { "created": "Sun, 9 Aug 2015 08:43:49 GMT", "version": "v1" }, { "created": "Sat, 17 Oct 2015 16:48:55 GMT", "version": "v2" } ]
2015-10-20
[ [ "Ko", "Kwang Hyun", "" ] ]
The following manuscript reviews various theories of bipedalism and provides a holistic answer to human evolution. There are two questions regarding bipedalism: i) why were the earliest hominins partially bipedal? and ii) why did hominins become increasingly bipedal over time and replace their less bipedal ancestors? To answer these questions, the prominent theories in the field, such as the savanna-based theory, the postural feeding hypothesis, and the provisioning model, are collectively examined. Because biological evolution is an example of trial and error and not a simple causation, there may be multiple answers to the evolution of bipedalism. The postural feeding hypothesis (reaching for food/balancing) provides an explanation for the partial bipedalism of the earliest hominins. The savanna-based theory describes how the largely bipedal hominins that started to settle on the ground became increasingly bipedal. The provisioning model (food-gathering/monogamy) explains questions arising after the postural feeding hypothesis and before the savanna-based theory in an evolutionary timeline. Indeed, there are no straight lines between the theories, and multiple forces could have pushed the evolution of bipedalism at different points. Finally, this manuscript states that the arboreal hominins that possessed ambiguous traits of bipedalism were eliminated through choice and selection. Using the biological analogy of the okapi and giraffe, I explain how one of the branches (Homo) became increasingly bipedal while the other (Pan) adapted to locomotion for forest life by narrowing the anatomical/biological focus in evolution.
1201.4987
Alireza Valizadeh
Mehdi Bayati and Alireza Valizadeh
Effect of synaptic plasticity in the structure and dynamics of disordered networks of coupled neurons
null
null
null
null
q-bio.NC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In an all-to-all network of integrate-and-fire oscillators in which there is disorder in the intrinsic firing rates of the neurons, we show that through spike-timing-dependent plasticity the links which have the faster oscillators as presynaptic tend to be strengthened, while the links originating from the slow-spiking neurons are weakened. The emergent effective flow of directed connections introduces the faster neurons as the more influential elements in the network and facilitates synchronization by decreasing the synaptic cost for the onset of synchronization.
[ { "created": "Tue, 24 Jan 2012 14:36:34 GMT", "version": "v1" } ]
2012-01-25
[ [ "Bayati", "Mehdi", "" ], [ "Valizadeh", "Alireza", "" ] ]
In an all-to-all network of integrate-and-fire oscillators in which there is disorder in the intrinsic firing rates of the neurons, we show that through spike-timing-dependent plasticity the links which have the faster oscillators as presynaptic tend to be strengthened, while the links originating from the slow-spiking neurons are weakened. The emergent effective flow of directed connections introduces the faster neurons as the more influential elements in the network and facilitates synchronization by decreasing the synaptic cost for the onset of synchronization.
2011.05471
Carla Sofia Carvalho
Carla Sofia Carvalho
A deep-learning classifier for cardiac arrhythmias
To appear in the IEEE BIBE 2020 conference proceedings (peer-reviewed)
null
null
null
q-bio.QM cs.LG physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We report on a method that classifies heart beats according to a set of 13 classes, including cardiac arrhythmias. The method localises the QRS peak complex to define each heart beat and uses a neural network to infer the patterns characteristic of each heart beat class. The best-performing neural network contains six one-dimensional convolutional layers and four dense layers, with the kernel sizes being multiples of the characteristic scale of the problem, thus resulting in a computationally fast and physically motivated neural network. For the same number of heart beat classes, our method yields better results with a considerably smaller neural network than previously published methods, which renders our method competitive for deployment in an internet-of-things solution.
[ { "created": "Wed, 11 Nov 2020 00:13:15 GMT", "version": "v1" } ]
2020-11-12
[ [ "Carvalho", "Carla Sofia", "" ] ]
We report on a method that classifies heart beats according to a set of 13 classes, including cardiac arrhythmias. The method localises the QRS peak complex to define each heart beat and uses a neural network to infer the patterns characteristic of each heart beat class. The best-performing neural network contains six one-dimensional convolutional layers and four dense layers, with the kernel sizes being multiples of the characteristic scale of the problem, thus resulting in a computationally fast and physically motivated neural network. For the same number of heart beat classes, our method yields better results with a considerably smaller neural network than previously published methods, which renders our method competitive for deployment in an internet-of-things solution.
2307.01320
Mengman Wei
Mengman Wei
In Silico Tools in PROTACs design
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
PROTACs, as a highly promising new therapeutic paradigm, have attracted widespread attention from the academic and pharmaceutical communities in recent years. To date, the design and validation of PROTAC molecules' druggability primarily rely on experimental approaches, making the development process of this kind of drug molecule time-consuming. Computer-aided tools for PROTAC design may offer a potential solution to expedite the design process and enhance its efficiency. This mini review briefly summarizes the in silico tools for PROTAC drug molecule design reported recently.
[ { "created": "Mon, 3 Jul 2023 19:49:47 GMT", "version": "v1" }, { "created": "Sun, 9 Jul 2023 14:58:42 GMT", "version": "v2" } ]
2023-07-11
[ [ "Wei", "Mengman", "" ] ]
PROTACs, as a highly promising new therapeutic paradigm, have attracted widespread attention from the academic and pharmaceutical communities in recent years. To date, the design and validation of PROTAC molecules' druggability primarily rely on experimental approaches, making the development process of this kind of drug molecule time-consuming. Computer-aided tools for PROTAC design may offer a potential solution to expedite the design process and enhance its efficiency. This mini review briefly summarizes the in silico tools for PROTAC drug molecule design reported recently.
1703.00547
Pradeep Kumar
Khanh Nguyen, Steven Murray, Jeffery A. Lewis, Pradeep Kumar
Morphology, cell division, and viability of Saccharomyces cerevisiae at high hydrostatic pressure
19 pages, 9 figures
null
null
null
q-bio.CB q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High hydrostatic pressure is commonly encountered in many environments, but the effects of high pressure on eukaryotic cells have been understudied. To understand the effects of hydrostatic pressure in the model eukaryote, Saccharomyces cerevisiae, we have performed quantitative experiments of cell division, cell morphology, and cell death under a wide range of pressures. We developed an automated image analysis method for quantification of the yeast budding index - a measure of cell cycle state - as well as a continuum model of budding to investigate the effect of pressure on cell division and cell morphology. We find that the budding index, the average cell size, and the eccentricity - a measure of how much the cell morphology varies from being elliptical - of the cells decrease with increasing pressure. Furthermore, high hydrostatic pressure led to a small but finite probability of cell death via both apoptosis and necrosis. Our experiments suggest that the decrease of the budding index arises from cellular arrest or death at the cell cycle checkpoints during different stages of cell division.
[ { "created": "Wed, 1 Mar 2017 23:38:59 GMT", "version": "v1" } ]
2017-03-03
[ [ "Nguyen", "Khanh", "" ], [ "Murray", "Steven", "" ], [ "Lewis", "Jeffery A.", "" ], [ "Kumar", "Pradeep", "" ] ]
High hydrostatic pressure is commonly encountered in many environments, but the effects of high pressure on eukaryotic cells have been understudied. To understand the effects of hydrostatic pressure in the model eukaryote, Saccharomyces cerevisiae, we have performed quantitative experiments of cell division, cell morphology, and cell death under a wide range of pressures. We developed an automated image analysis method for quantification of the yeast budding index - a measure of cell cycle state - as well as a continuum model of budding to investigate the effect of pressure on cell division and cell morphology. We find that the budding index, the average cell size, and the eccentricity - a measure of how much the cell morphology varies from being elliptical - of the cells decrease with increasing pressure. Furthermore, high hydrostatic pressure led to a small but finite probability of cell death via both apoptosis and necrosis. Our experiments suggest that the decrease in budding index arises from cellular arrest or death at the cell cycle checkpoints during different stages of cell division.
q-bio/0505004
Ramon Grima
Ramon Grima
Strong-coupling dynamics of a multi-cellular chemotactic system
4 pages, 2 figures
Physical Review Letters 95, 128103 (2005)
10.1103/PhysRevLett.95.128103
null
q-bio.CB physics.bio-ph
null
Chemical signaling is one of the ubiquitous mechanisms by which inter-cellular communication takes place at the microscopic level, particularly via chemotaxis. Such multi-cellular systems are popularly studied using continuum, mean-field equations. In this letter we study a stochastic model of chemotactic signaling. The Langevin formalism of the model makes it amenable to calculation via non-perturbative analysis, which enables a quantification of the effect of fluctuations on both the weak and strongly-coupled biological dynamics. In particular we show that (i) self-localization due to auto-chemotaxis is impossible; (ii) when aggregation occurs, the aggregate performs a random walk with a renormalized diffusion coefficient $D_R \propto \epsilon^{-2} N^{-3}$; and (iii) the stochastic model exhibits sharp transitions in cell motile behavior for negative chemotaxis, behavior which has no parallel in the mean-field Keller-Segel equations.
[ { "created": "Mon, 2 May 2005 23:01:53 GMT", "version": "v1" }, { "created": "Mon, 31 Jul 2006 22:19:59 GMT", "version": "v2" } ]
2009-11-11
[ [ "Grima", "Ramon", "" ] ]
Chemical signaling is one of the ubiquitous mechanisms by which inter-cellular communication takes place at the microscopic level, particularly via chemotaxis. Such multi-cellular systems are popularly studied using continuum, mean-field equations. In this letter we study a stochastic model of chemotactic signaling. The Langevin formalism of the model makes it amenable to calculation via non-perturbative analysis, which enables a quantification of the effect of fluctuations on both the weak and strongly-coupled biological dynamics. In particular we show that (i) self-localization due to auto-chemotaxis is impossible; (ii) when aggregation occurs, the aggregate performs a random walk with a renormalized diffusion coefficient $D_R \propto \epsilon^{-2} N^{-3}$; and (iii) the stochastic model exhibits sharp transitions in cell motile behavior for negative chemotaxis, behavior which has no parallel in the mean-field Keller-Segel equations.
1304.5519
Krzysztof Argasinski
Krzysztof Argasinski
The Dynamics of Sex Ratio Evolution: From the Gene Perspective to Multilevel Selection
3 figures
Argasinski K (2013) The Dynamics of Sex Ratio Evolution: From the Gene Perspective to Multilevel Selection. PLoS ONE 8(4): e60405. doi:10.1371/journal.pone.0060405
10.1371/journal.pone.0060405
null
q-bio.PE math.CA nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The new dynamical game theoretic model of sex ratio evolution emphasizes the role of males as passive carriers of sex ratio genes. This shows an inconsistency between population genetic models of sex ratio evolution and classical strategic models. In this work a novel technique of change of coordinates will be applied to the new model. This will reveal new aspects of the modelled phenomenon which cannot be shown or proven in the original formulation. The underlying goal is to describe the dynamics of selection of particular genes in the entire population, instead of in the same sex subpopulation, as in the previous paper and earlier population genetics approaches. This allows for analytical derivation of the unbiased strategic model from the model with rigorous non-simplified genetics. In effect, an alternative system of replicator equations is derived. It contains two subsystems: the first describes changes in gene frequencies (this is an alternative unbiased formalization of the Fisher-Dusing argument), whereas the second describes changes in the sex ratios in subpopulations of carriers of genes for each strategy. An intriguing analytical result of this work is that fitness of a gene depends on the current sex ratio in the subpopulation of its carriers, not on the encoded individual strategy. Thus, the argument of the gene fitness function is not constant but is determined by the trajectory of the sex ratio among carriers of that gene. This aspect of the modelled phenomenon cannot be revealed by the static analysis. Dynamics of the sex ratio among gene carriers is driven by a dynamic "tug of war" between female carriers expressing the encoded strategic trait value and random partners of male carriers expressing the average population strategy (a primary sex ratio). This mechanism can be called "double level selection". Therefore, the gene interest perspective leads to multi-level selection.
[ { "created": "Fri, 19 Apr 2013 19:50:56 GMT", "version": "v1" }, { "created": "Sun, 2 Mar 2014 23:00:13 GMT", "version": "v2" } ]
2014-03-05
[ [ "Argasinski", "Krzysztof", "" ] ]
The new dynamical game theoretic model of sex ratio evolution emphasizes the role of males as passive carriers of sex ratio genes. This shows an inconsistency between population genetic models of sex ratio evolution and classical strategic models. In this work a novel technique of change of coordinates will be applied to the new model. This will reveal new aspects of the modelled phenomenon which cannot be shown or proven in the original formulation. The underlying goal is to describe the dynamics of selection of particular genes in the entire population, instead of in the same sex subpopulation, as in the previous paper and earlier population genetics approaches. This allows for analytical derivation of the unbiased strategic model from the model with rigorous non-simplified genetics. In effect, an alternative system of replicator equations is derived. It contains two subsystems: the first describes changes in gene frequencies (this is an alternative unbiased formalization of the Fisher-Dusing argument), whereas the second describes changes in the sex ratios in subpopulations of carriers of genes for each strategy. An intriguing analytical result of this work is that fitness of a gene depends on the current sex ratio in the subpopulation of its carriers, not on the encoded individual strategy. Thus, the argument of the gene fitness function is not constant but is determined by the trajectory of the sex ratio among carriers of that gene. This aspect of the modelled phenomenon cannot be revealed by the static analysis. Dynamics of the sex ratio among gene carriers is driven by a dynamic "tug of war" between female carriers expressing the encoded strategic trait value and random partners of male carriers expressing the average population strategy (a primary sex ratio). This mechanism can be called "double level selection". Therefore, the gene interest perspective leads to multi-level selection.
q-bio/0402013
Byung Mook Weon
Byung Mook Weon
General functions for human survival and mortality
8 pages, 2 figures, submitted to Mechanisms of Ageing and Development
null
null
null
q-bio.PE
null
General functions for human survival and mortality may support the possibility of general mechanisms in human ageing. We discovered that the survival and mortality curves could be described very simply and accurately by the Weibull survival function with an age-dependent shape parameter. The age-dependence of the shape parameter determines the shape of the survival and mortality curves and reveals the nature of the ageing rate. In particular, the progression of the shape parameter with age may be explained by the increase of interaction among vital processes or the evolution of susceptibility to faults with age. Age-related diseases may be attributed to the evolution of susceptibility to faults with age.
[ { "created": "Sat, 7 Feb 2004 02:51:02 GMT", "version": "v1" } ]
2007-05-23
[ [ "Weon", "Byung Mook", "" ] ]
General functions for human survival and mortality may support the possibility of general mechanisms in human ageing. We discovered that the survival and mortality curves could be described very simply and accurately by the Weibull survival function with an age-dependent shape parameter. The age-dependence of the shape parameter determines the shape of the survival and mortality curves and reveals the nature of the ageing rate. In particular, the progression of the shape parameter with age may be explained by the increase of interaction among vital processes or the evolution of susceptibility to faults with age. Age-related diseases may be attributed to the evolution of susceptibility to faults with age.
1609.05513
Lee Susman
Lee Susman, Maryam Kohram, Harsh Vashistha, Jeffrey T. Nechleba, Hanna Salman and Naama Brenner
Statistical properties and dynamics of phenotype components in individual bacteria
This work is now available in a new submission; the content has changed significantly between versions, as has the author list
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cellular phenotype is characterized by different components such as cell size, protein content and cell cycle time. These are global variables that are the outcome of multiple internal microscopic processes. Accordingly, they display some universal statistical properties and scaling relations, such as distribution collapse and relation between moments. Cell size statistics and its relation to growth and division has been mostly studied separately from proteins and other cellular variables. Here we present experimental and theoretical analyses of these phenotype components in a unified framework that reveals their correlations and interactions inside the cell. We measure these components simultaneously in single cells over dozens of generations, quantify their correlations, and compare population to temporal statistics. We find that cell size and highly expressed proteins have very similar dynamics over growth and division cycles, which result in parallel statistical properties, both universal and individual. In particular, while distribution shapes of fluctuations along time are common to all cells and components, other properties are variable and remain distinct in individual cells for a surprisingly large number of generations. These include temporal averages of cell size and protein content, and the structure of their auto-correlation functions. We explore possible roles of the different components in controlling cell growth and division. We find that in order to stabilize exponential accumulation and division of all components across generations, coupled dynamics among them is required. Finally, we incorporate effective coupling within the cell cycle with a phenomenological mapping across consecutive cycles, and show that this model reproduces the entire array of experimental observations.
[ { "created": "Sun, 18 Sep 2016 16:37:35 GMT", "version": "v1" }, { "created": "Thu, 25 May 2017 11:19:20 GMT", "version": "v2" }, { "created": "Tue, 15 May 2018 11:37:52 GMT", "version": "v3" } ]
2018-05-16
[ [ "Susman", "Lee", "" ], [ "Kohram", "Maryam", "" ], [ "Vashistha", "Harsh", "" ], [ "Nechleba", "Jeffrey T.", "" ], [ "Salman", "Hanna", "" ], [ "Brenner", "Naama", "" ] ]
Cellular phenotype is characterized by different components such as cell size, protein content and cell cycle time. These are global variables that are the outcome of multiple internal microscopic processes. Accordingly, they display some universal statistical properties and scaling relations, such as distribution collapse and relation between moments. Cell size statistics and its relation to growth and division has been mostly studied separately from proteins and other cellular variables. Here we present experimental and theoretical analyses of these phenotype components in a unified framework that reveals their correlations and interactions inside the cell. We measure these components simultaneously in single cells over dozens of generations, quantify their correlations, and compare population to temporal statistics. We find that cell size and highly expressed proteins have very similar dynamics over growth and division cycles, which result in parallel statistical properties, both universal and individual. In particular, while distribution shapes of fluctuations along time are common to all cells and components, other properties are variable and remain distinct in individual cells for a surprisingly large number of generations. These include temporal averages of cell size and protein content, and the structure of their auto-correlation functions. We explore possible roles of the different components in controlling cell growth and division. We find that in order to stabilize exponential accumulation and division of all components across generations, coupled dynamics among them is required. Finally, we incorporate effective coupling within the cell cycle with a phenomenological mapping across consecutive cycles, and show that this model reproduces the entire array of experimental observations.
1207.1288
Guido Tiana
Guido Tiana and Carlo Camilloni
Ratcheted molecular-dynamics simulations identify efficiently the transition state of protein folding
null
null
10.1063/1.4769085
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The atomistic characterization of the transition state is a fundamental step to improve the understanding of the folding mechanism and the function of proteins. From a computational point of view, the identification of the conformations that make up the transition state is particularly cumbersome, mainly because of the large computational cost of generating a statistically-sound set of folding trajectories. Here we show that a biasing algorithm, based on the physics of the ratchet-and-pawl, can be used to identify efficiently the transition state. The basic idea is that the algorithmic ratchet exerts a force on the protein when it is climbing the free-energy barrier, while it is inactive when it is descending. The transition state can be identified as the point of the trajectory where the ratchet changes regime. Besides discussing this strategy in general terms, we test it within a protein model whose transition state can be studied independently by plain molecular dynamics simulations. Finally, we show its power in explicit-solvent simulations, obtaining and characterizing a set of transition-state conformations for ACBP and CI2.
[ { "created": "Thu, 5 Jul 2012 15:36:20 GMT", "version": "v1" } ]
2015-06-05
[ [ "Tiana", "Guido", "" ], [ "Camilloni", "Carlo", "" ] ]
The atomistic characterization of the transition state is a fundamental step to improve the understanding of the folding mechanism and the function of proteins. From a computational point of view, the identification of the conformations that make up the transition state is particularly cumbersome, mainly because of the large computational cost of generating a statistically-sound set of folding trajectories. Here we show that a biasing algorithm, based on the physics of the ratchet-and-pawl, can be used to identify efficiently the transition state. The basic idea is that the algorithmic ratchet exerts a force on the protein when it is climbing the free-energy barrier, while it is inactive when it is descending. The transition state can be identified as the point of the trajectory where the ratchet changes regime. Besides discussing this strategy in general terms, we test it within a protein model whose transition state can be studied independently by plain molecular dynamics simulations. Finally, we show its power in explicit-solvent simulations, obtaining and characterizing a set of transition-state conformations for ACBP and CI2.
2304.02896
Jacek Mi\c{e}kisz
Jacek Mi\k{e}kisz, Javad Mohamadichamgavi, and Jakub {\L}\k{a}cki
Phase transitions in the Prisoner's Dilemma game on scale-free networks
9 pages, 4 figures
BioPhysMath, 2024: 1, 9pp
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
We study stochastic dynamics of the Prisoner's Dilemma game on random Erd\"{o}s-R\'{e}nyi and Barab\'{a}si-Albert networks with a cost of maintaining a link between interacting players. Stochastic simulations show that when the cost increases, the population of players located on Barab\'{a}si-Albert network undergoes a sharp transition from an ordered state, where almost all players cooperate, to a state in which both cooperators and defectors coexist. At the critical cost, the population oscillates in time between these two states. Such a situation is not present in the Erd\"{o}s-R\'{e}nyi network. We provide some heuristic analytical arguments for the phase transition and the value of the critical cost in the Barab\'{a}si-Albert network.
[ { "created": "Thu, 6 Apr 2023 07:02:12 GMT", "version": "v1" }, { "created": "Mon, 12 Feb 2024 14:09:30 GMT", "version": "v2" } ]
2024-02-13
[ [ "Miękisz", "Jacek", "" ], [ "Mohamadichamgavi", "Javad", "" ], [ "Łącki", "Jakub", "" ] ]
We study stochastic dynamics of the Prisoner's Dilemma game on random Erd\"{o}s-R\'{e}nyi and Barab\'{a}si-Albert networks with a cost of maintaining a link between interacting players. Stochastic simulations show that when the cost increases, the population of players located on Barab\'{a}si-Albert network undergoes a sharp transition from an ordered state, where almost all players cooperate, to a state in which both cooperators and defectors coexist. At the critical cost, the population oscillates in time between these two states. Such a situation is not present in the Erd\"{o}s-R\'{e}nyi network. We provide some heuristic analytical arguments for the phase transition and the value of the critical cost in the Barab\'{a}si-Albert network.
2004.10377
Daniel Mas Montserrat
Daniel Mas Montserrat, Carlos Bustamante, Alexander Ioannidis
LAI-Net: Local-Ancestry Inference with Neural Networks
null
null
10.1109/ICASSP40776.2020.9053662
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Local-ancestry inference (LAI), also referred to as ancestry deconvolution, provides high-resolution ancestry estimation along the human genome. In both research and industry, LAI is emerging as a critical step in DNA sequence analysis with applications extending from polygenic risk scores (used to predict traits in embryos and disease risk in adults) to genome-wide association studies, and from pharmacogenomics to inference of human population history. While many LAI methods have been developed, advances in computing hardware (GPUs) combined with machine learning techniques, such as neural networks, are enabling the development of new methods that are fast, robust and easily shared and stored. In this paper we develop the first neural network based LAI method, named LAI-Net, providing competitive accuracy with state-of-the-art methods and robustness to missing or noisy data, while having a small number of layers.
[ { "created": "Wed, 22 Apr 2020 02:57:49 GMT", "version": "v1" } ]
2020-04-23
[ [ "Montserrat", "Daniel Mas", "" ], [ "Bustamante", "Carlos", "" ], [ "Ioannidis", "Alexander", "" ] ]
Local-ancestry inference (LAI), also referred to as ancestry deconvolution, provides high-resolution ancestry estimation along the human genome. In both research and industry, LAI is emerging as a critical step in DNA sequence analysis with applications extending from polygenic risk scores (used to predict traits in embryos and disease risk in adults) to genome-wide association studies, and from pharmacogenomics to inference of human population history. While many LAI methods have been developed, advances in computing hardware (GPUs) combined with machine learning techniques, such as neural networks, are enabling the development of new methods that are fast, robust and easily shared and stored. In this paper we develop the first neural network based LAI method, named LAI-Net, providing competitive accuracy with state-of-the-art methods and robustness to missing or noisy data, while having a small number of layers.
1209.4142
Manuel Beltr\'an del R\'io
Manuel Beltr\'an del R\'io, Christopher R. Stephens, David A. Rosenblueth
Is modularity the reason why recombination is so ubiquitous?
46 pages, 19 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Homologous recombination is an important operator in the evolution of biological organisms. However, there is still no clear, generally accepted understanding of why it exists and under what circumstances it is useful. In this paper we consider its utility in the context of an infinite population haploid model with selection and homologous recombination. We define utility in terms of two metrics - the increase in frequency of fit genotypes, and the increase in average population fitness, relative to those associated with selection only. Explicitly, we exhaustively explore the eight-dimensional parameter space of a two-locus two-allele system, showing, as a function of the landscape and the initial population, that recombination is beneficial in terms of our metrics in two distinct regimes: a landscape independent regime - the "search" regime - where recombination aids in the search for a fit genotype that is absent or at low frequency in the population; and the "modular" regime, associated with quasi-additive fitness landscapes with low epistasis, where recombination allows for the juxtaposition of fit "modules" or Building Blocks. Thus, we conclude that the ubiquity and utility of recombination is intimately associated with the existence of modularity in biological fitness landscapes.
[ { "created": "Wed, 19 Sep 2012 03:08:20 GMT", "version": "v1" }, { "created": "Tue, 25 Sep 2012 19:44:33 GMT", "version": "v2" }, { "created": "Wed, 6 Feb 2013 19:51:17 GMT", "version": "v3" } ]
2013-08-12
[ [ "del Río", "Manuel Beltrán", "" ], [ "Stephens", "Christopher R.", "" ], [ "Rosenblueth", "David A.", "" ] ]
Homologous recombination is an important operator in the evolution of biological organisms. However, there is still no clear, generally accepted understanding of why it exists and under what circumstances it is useful. In this paper we consider its utility in the context of an infinite population haploid model with selection and homologous recombination. We define utility in terms of two metrics - the increase in frequency of fit genotypes, and the increase in average population fitness, relative to those associated with selection only. Explicitly, we exhaustively explore the eight-dimensional parameter space of a two-locus two-allele system, showing, as a function of the landscape and the initial population, that recombination is beneficial in terms of our metrics in two distinct regimes: a landscape independent regime - the "search" regime - where recombination aids in the search for a fit genotype that is absent or at low frequency in the population; and the "modular" regime, associated with quasi-additive fitness landscapes with low epistasis, where recombination allows for the juxtaposition of fit "modules" or Building Blocks. Thus, we conclude that the ubiquity and utility of recombination is intimately associated with the existence of modularity in biological fitness landscapes.
q-bio/0503014
Francesco Rao
Francesco Rao, Giovanni Settanni, Enrico Guarnera, Amedeo Caflisch
Estimation of protein folding probability from equilibrium simulations
7 pages, 4 figures, supplementary material
null
10.1063/1.1893753
null
q-bio.BM cond-mat.soft q-bio.QM
null
The assumption that similar structures have similar folding probabilities ($p_{fold}$) leads naturally to a procedure to evaluate $p_{fold}$ for every snapshot saved along an equilibrium folding-unfolding trajectory of a structured peptide or protein. The procedure utilizes a structurally homogeneous clustering and does not require any additional simulation. It can be used to detect multiple folding pathways as shown for a three-stranded antiparallel $\beta$-sheet peptide investigated by implicit solvent molecular dynamics simulations.
[ { "created": "Thu, 10 Mar 2005 18:49:36 GMT", "version": "v1" } ]
2009-11-11
[ [ "Rao", "Francesco", "" ], [ "Settanni", "Giovanni", "" ], [ "Guarnera", "Enrico", "" ], [ "Caflisch", "Amedeo", "" ] ]
The assumption that similar structures have similar folding probabilities ($p_{fold}$) leads naturally to a procedure to evaluate $p_{fold}$ for every snapshot saved along an equilibrium folding-unfolding trajectory of a structured peptide or protein. The procedure utilizes a structurally homogeneous clustering and does not require any additional simulation. It can be used to detect multiple folding pathways as shown for a three-stranded antiparallel $\beta$-sheet peptide investigated by implicit solvent molecular dynamics simulations.
1801.02550
Max Souza
Fabio A. C. C. Chalub and Max O. Souza
From fixation probabilities to d-player games: an inverse problem in evolutionary dynamics
19 pages, 11 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The probability that the frequency of a particular trait will eventually become unity, the so-called fixation probability, is a central issue in the study of population evolution. Its computation, once we are given a stochastic finite population model without mutations and a (possibly frequency dependent) fitness function, is straightforward and it can be done in several ways. Nevertheless, despite the fact that the fixation probability is an important macroscopic property of the population, its precise knowledge does not give any clear information about the interaction patterns among individuals in the population. Here we address the inverse problem: From a given fixation pattern and population size, we want to infer what game is being played by the population. This is done by first exploiting the framework developed in FACC Chalub and MO Souza, J. Math. Biol. 75: 1735, 2017, which yields a fitness function that realises this fixation pattern in the Wright-Fisher model. This fitness function always exists, but it is not necessarily unique. Subsequently, we show that any such fitness function can be approximated, with arbitrary precision, using $d$-player game theory, provided $d$ is large enough. The pay-off matrix that emerges naturally from the approximating game will provide useful information about the individual interaction structure that is not itself apparent in the fixation pattern. We present extensive numerical support for our conclusions.
[ { "created": "Mon, 8 Jan 2018 16:48:52 GMT", "version": "v1" }, { "created": "Tue, 20 Nov 2018 19:37:13 GMT", "version": "v2" } ]
2018-11-22
[ [ "Chalub", "Fabio A. C. C.", "" ], [ "Souza", "Max O.", "" ] ]
The probability that the frequency of a particular trait will eventually become unity, the so-called fixation probability, is a central issue in the study of population evolution. Its computation, once we are given a stochastic finite population model without mutations and a (possibly frequency dependent) fitness function, is straightforward and it can be done in several ways. Nevertheless, despite the fact that the fixation probability is an important macroscopic property of the population, its precise knowledge does not give any clear information about the interaction patterns among individuals in the population. Here we address the inverse problem: From a given fixation pattern and population size, we want to infer what game is being played by the population. This is done by first exploiting the framework developed in FACC Chalub and MO Souza, J. Math. Biol. 75: 1735, 2017, which yields a fitness function that realises this fixation pattern in the Wright-Fisher model. This fitness function always exists, but it is not necessarily unique. Subsequently, we show that any such fitness function can be approximated, with arbitrary precision, using $d$-player game theory, provided $d$ is large enough. The pay-off matrix that emerges naturally from the approximating game will provide useful information about the individual interaction structure that is not itself apparent in the fixation pattern. We present extensive numerical support for our conclusions.