Feature          Type    Range
id               string  length 9-13
submitter        string  length 4-48
authors          string  length 4-9.62k
title            string  length 4-343
comments         string  length 2-480
journal-ref      string  length 9-309
doi              string  length 12-138
report-no        string  277 classes
categories       string  length 8-87
license          string  9 classes
orig_abstract    string  length 27-3.76k
versions         list    length 1-15
update_date      string  length 10-10
authors_parsed   list    length 1-147
abstract         string  length 24-3.75k
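The length bounds above lend themselves to simple programmatic validation. A minimal sketch (the feature names and bounds come from the table; the `in_range` helper itself is illustrative and not part of any dataset tooling):

```python
# Feature summary from the table above: name -> (type, min, max).
# For string features the bounds are string lengths; for list features,
# list lengths. Only a few features are shown here for brevity.
FEATURES = {
    "id": ("string", 9, 13),
    "submitter": ("string", 4, 48),
    "title": ("string", 4, 343),
    "versions": ("list", 1, 15),
    "update_date": ("string", 10, 10),
}

def in_range(name, value):
    """Check a field's string/list length against the summarized bounds."""
    _, lo, hi = FEATURES[name]
    return lo <= len(value) <= hi

print(in_range("id", "2401.13495"))          # 10 chars, within 9-13
print(in_range("update_date", "2024-06-10")) # exactly 10 chars
```

Both checks above pass for the first record; a too-short value (e.g. a 3-character title) would fail against the 4-343 bound.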
2401.13495
Alice Longhena
Alice Longhena, Martin Guillemaud, Mario Chavez
Detecting local perturbations of networks in a latent hyperbolic embedding space
null
Chaos 1 June 2024; 34 (6): 063117
10.1063/5.0199546
null
q-bio.QM physics.data-an
http://creativecommons.org/licenses/by/4.0/
Graph theoretical approaches have been proven to be effective in the characterization of connected systems, as well as in quantifying their dysfunction due to perturbation. In this paper, we show the advantage of a non-Euclidean (hyperbolic) representation of networks to identify local connectivity perturbations and to characterize the induced effects on a large scale. We propose two perturbation scores based on representations of the networks in a latent geometric space, obtained through an embedding onto the hyperbolic Poincar\'e disk. We numerically demonstrate that these methods are able to localize perturbations in networks with homogeneous or heterogeneous degree connectivity. We apply this framework to identify the most perturbed brain areas in epileptic patients following surgery. This study is conceived in the effort of developing more powerful tools to represent and analyze brain networks, and it is the first to apply geometric network embedding techniques to the case of epilepsy.
[ { "created": "Wed, 24 Jan 2024 14:42:19 GMT", "version": "v1" }, { "created": "Wed, 17 Apr 2024 15:17:30 GMT", "version": "v2" }, { "created": "Fri, 7 Jun 2024 14:52:45 GMT", "version": "v3" } ]
2024-06-10
[ [ "Longhena", "Alice", "" ], [ "Guillemaud", "Martin", "" ], [ "Chavez", "Mario", "" ] ]
Graph theoretical approaches have proven effective in the characterization of connected systems, as well as in quantifying their dysfunction due to perturbation. In this paper, we show the advantage of a non-Euclidean (hyperbolic) representation of networks for identifying local connectivity perturbations and for characterizing the induced effects on a large scale. We propose two perturbation scores based on representations of the networks in a latent geometric space, obtained through an embedding onto the hyperbolic Poincaré disk. We numerically demonstrate that these methods are able to localize perturbations in networks with homogeneous or heterogeneous degree connectivity. We apply this framework to identify the most perturbed brain areas in epileptic patients following surgery. This study is conceived as part of an effort to develop more powerful tools for representing and analyzing brain networks, and it is the first to apply geometric network embedding techniques to epilepsy.
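The `versions` and `authors_parsed` fields in each record are JSON-encoded lists and can be decoded with the standard library. A sketch using values copied from the record above (the parsing helpers are illustrative, not part of any dataset tooling):

```python
import json
from datetime import datetime

# Field values as they appear in the record above.
versions_raw = '[ { "created": "Wed, 24 Jan 2024 14:42:19 GMT", "version": "v1" } ]'
authors_raw = '[ [ "Longhena", "Alice", "" ], [ "Guillemaud", "Martin", "" ] ]'

versions = json.loads(versions_raw)
# "created" uses an RFC 2822-style timestamp; parse with an explicit format.
created = datetime.strptime(versions[0]["created"], "%a, %d %b %Y %H:%M:%S %Z")

# Each authors_parsed entry is [last, first, suffix]; some records append an
# affiliation as a fourth element (e.g. "NRC-Canada"), so index rather than unpack.
authors = [f"{a[1]} {a[0]}".strip() for a in json.loads(authors_raw)]

print(created.year)  # 2024
print(authors)       # ['Alice Longhena', 'Martin Guillemaud']
```

The same decoding applies to every record, since both columns are declared as lists in the feature summary.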
1704.07906
Chandre Dharma-wardana
M.W.C. Dharma-wardana (NRC-Canada)
Chronic Kidney Disease of Unknown aetiology (CKDu) and multiple-ion interactions in drinking water
14 pages, one figure
Environmental Geochemistry and Health 1st September (2017)
10.1007/s10653-017-0017-4
null
q-bio.TO cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent experimental work on the nephrotoxicity of contaminants in drinking water using laboratory mice, motivated by the need to understand the origin of chronic kidney disease of unknown aetiology is examined within our understanding of the hydration of ions and proteins. Qualitative considerations based on Hofmeister-type action of these ions, as well as quantitative electrochemical models for the Gibbs free-energy change for ion-pair formation are used to explain why Cd$^{2+}$ in the presence of F$^-$ and water hardness due to Mg$^{2+}$ ions (but not Ca$^{2+}$) can be expected to be more nephrotoxic, while AsO$_3^{3-}$ in the presence of F$^-$ and hardness may be expected to be less nephrotoxic. The analysis is applied to a variety of ionic species typically found in water to predict their likely combined electro-chemical action. These results clarify the origins of chronic kidney disease in the north-central province of Sri Lanka. The conclusion is further strengthened by a study of the dietary load of Cd and As, where the dietary loads are found to be safe, especially when the mitigating effects of micronutrient ionic forms of Zn and Se, as well as corrections for bio-availability are taken in to account. The resulting aetiological picture supports the views that F$^-$, Cd$^{2+}$ (to a lesser extent), and Mg$^{2+}$ ions found in stagnant household well water act together with enhanced toxicity, becoming the most likely causative factor of the disease. Similar incidence of CKDu found in other tropical climates may have similar geological origins.
[ { "created": "Fri, 7 Apr 2017 02:34:04 GMT", "version": "v1" } ]
2017-12-15
[ [ "Dharma-wardana", "M. W. C.", "", "NRC-Canada" ] ]
Recent experimental work on the nephrotoxicity of contaminants in drinking water using laboratory mice, motivated by the need to understand the origin of chronic kidney disease of unknown aetiology, is examined within our understanding of the hydration of ions and proteins. Qualitative considerations based on the Hofmeister-type action of these ions, as well as quantitative electrochemical models for the Gibbs free-energy change for ion-pair formation, are used to explain why Cd$^{2+}$ in the presence of F$^-$ and water hardness due to Mg$^{2+}$ ions (but not Ca$^{2+}$) can be expected to be more nephrotoxic, while AsO$_3^{3-}$ in the presence of F$^-$ and hardness may be expected to be less nephrotoxic. The analysis is applied to a variety of ionic species typically found in water to predict their likely combined electrochemical action. These results clarify the origins of chronic kidney disease in the north-central province of Sri Lanka. The conclusion is further strengthened by a study of the dietary load of Cd and As, where the dietary loads are found to be safe, especially when the mitigating effects of micronutrient ionic forms of Zn and Se, as well as corrections for bio-availability, are taken into account. The resulting aetiological picture supports the view that F$^-$, Cd$^{2+}$ (to a lesser extent), and Mg$^{2+}$ ions found in stagnant household well water act together with enhanced toxicity, becoming the most likely causative factor of the disease. A similar incidence of CKDu found in other tropical climates may have similar geological origins.
2004.08320
Vignayanandam Ravindernath Muddapu
Vignayanandam R. Muddapu, Karthik Vijayakumar, Keerthiga Ramakrishnan, V Srinivasa Chakravarthy
A Computational Model of Levodopa-Induced Toxicity in Substantia Nigra Pars Compacta in Parkinson's Disease
null
null
null
null
q-bio.NC q-bio.TO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Parkinson's disease (PD) is caused by the progressive loss of dopaminergic cells in substantia nigra pars compacta (SNc). The root cause of this cell loss in PD is still not decisively elucidated. A recent line of thinking traces the cause of PD neurodegeneration to metabolic deficiency. Due to exceptionally high energy demand, SNc neurons exhibit a higher basal metabolic rate and higher oxygen consumption rate, which results in oxidative stress. Recently, we have suggested that the excitotoxic loss of SNc cells might be due to energy deficiency occurring at different levels of neural hierarchy. Levodopa (LDOPA), a precursor of dopamine, which is used as a symptom-relieving treatment for PD, leads to outcomes that are both positive and negative. Several researchers suggested that LDOPA might be harmful to SNc cells due to oxidative stress. The role of LDOPA in the course of PD pathogenesis is still debatable. We hypothesize that energy deficiency can lead to LDOPA-induced toxicity (LIT) in two ways: by promoting dopamine-induced oxidative stress and by exacerbating excitotoxicity in SNc. We present a multiscale computational model of SNc-striatum system, which will help us in understanding the mechanism behind neurodegeneration postulated above and provides insights for developing disease-modifying therapeutics. It was observed that SNc terminals are more vulnerable to energy deficiency than SNc somas. During LDOPA therapy, it was observed that higher LDOPA dosage results in increased loss of somas and terminals in SNc. It was also observed that co-administration of LDOPA and glutathione (antioxidant) evades LDOPA-induced toxicity in SNc neurons. We show that our proposed model was able to capture LDOPA-induced toxicity in SNc, caused by energy deficiency.
[ { "created": "Wed, 1 Apr 2020 11:04:52 GMT", "version": "v1" } ]
2020-04-20
[ [ "Muddapu", "Vignayanandam R.", "" ], [ "Vijayakumar", "Karthik", "" ], [ "Ramakrishnan", "Keerthiga", "" ], [ "Chakravarthy", "V Srinivasa", "" ] ]
Parkinson's disease (PD) is caused by the progressive loss of dopaminergic cells in the substantia nigra pars compacta (SNc). The root cause of this cell loss in PD has still not been decisively elucidated. A recent line of thinking traces the cause of PD neurodegeneration to metabolic deficiency. Owing to their exceptionally high energy demand, SNc neurons exhibit a higher basal metabolic rate and a higher oxygen consumption rate, which results in oxidative stress. Recently, we have suggested that the excitotoxic loss of SNc cells might be due to energy deficiency occurring at different levels of the neural hierarchy. Levodopa (LDOPA), a precursor of dopamine used as a symptom-relieving treatment for PD, leads to outcomes that are both positive and negative. Several researchers have suggested that LDOPA might be harmful to SNc cells due to oxidative stress. The role of LDOPA in the course of PD pathogenesis is still debatable. We hypothesize that energy deficiency can lead to LDOPA-induced toxicity (LIT) in two ways: by promoting dopamine-induced oxidative stress and by exacerbating excitotoxicity in the SNc. We present a multiscale computational model of the SNc-striatum system, which will help us understand the mechanism behind the neurodegeneration postulated above and provides insights for developing disease-modifying therapeutics. We observed that SNc terminals are more vulnerable to energy deficiency than SNc somas. During LDOPA therapy, higher LDOPA dosages resulted in increased loss of somas and terminals in the SNc. We also observed that co-administration of LDOPA and glutathione (an antioxidant) averts LDOPA-induced toxicity in SNc neurons. We show that our proposed model is able to capture LDOPA-induced toxicity in the SNc caused by energy deficiency.
2308.07954
Hyun Park
Hyun Park, Parth Patel, Roland Haas, E. A. Huerta
APACE: AlphaFold2 and advanced computing as a service for accelerated discovery in biophysics
7 pages, 4 figures, 2 tables
Proceedings of the National Academy of Sciences, 121, 27, (2024)
10.1073/pnas.2311888121
null
q-bio.BM cs.AI cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The prediction of protein 3D structure from amino acid sequence is a computational grand challenge in biophysics, and plays a key role in robust protein structure prediction algorithms, from drug discovery to genome interpretation. The advent of AI models, such as AlphaFold, is revolutionizing applications that depend on robust protein structure prediction algorithms. To maximize the impact, and ease the usability, of these novel AI tools we introduce APACE, AlphaFold2 and advanced computing as a service, a novel computational framework that effectively handles this AI model and its TB-size database to conduct accelerated protein structure prediction analyses in modern supercomputing environments. We deployed APACE in the Delta and Polaris supercomputers, and quantified its performance for accurate protein structure predictions using four exemplar proteins: 6AWO, 6OAN, 7MEZ, and 6D6U. Using up to 300 ensembles, distributed across 200 NVIDIA A100 GPUs, we found that APACE is up to two orders of magnitude faster than off-the-self AlphaFold2 implementations, reducing time-to-solution from weeks to minutes. This computational approach may be readily linked with robotics laboratories to automate and accelerate scientific discovery.
[ { "created": "Tue, 15 Aug 2023 18:00:01 GMT", "version": "v1" }, { "created": "Mon, 1 Jul 2024 20:25:05 GMT", "version": "v2" } ]
2024-07-03
[ [ "Park", "Hyun", "" ], [ "Patel", "Parth", "" ], [ "Haas", "Roland", "" ], [ "Huerta", "E. A.", "" ] ]
The prediction of protein 3D structure from amino acid sequence is a computational grand challenge in biophysics and plays a key role in applications ranging from drug discovery to genome interpretation. The advent of AI models, such as AlphaFold, is revolutionizing applications that depend on robust protein structure prediction algorithms. To maximize the impact, and ease the usability, of these novel AI tools, we introduce APACE (AlphaFold2 and advanced computing as a service), a novel computational framework that effectively handles this AI model and its TB-size database to conduct accelerated protein structure prediction analyses in modern supercomputing environments. We deployed APACE on the Delta and Polaris supercomputers and quantified its performance for accurate protein structure predictions using four exemplar proteins: 6AWO, 6OAN, 7MEZ, and 6D6U. Using up to 300 ensembles, distributed across 200 NVIDIA A100 GPUs, we found that APACE is up to two orders of magnitude faster than off-the-shelf AlphaFold2 implementations, reducing time-to-solution from weeks to minutes. This computational approach may be readily linked with robotics laboratories to automate and accelerate scientific discovery.
1205.3435
Raphael Cerf
Rapha\"el Cerf
Critical population and error threshold on the sharp peak landscape for a Moran model
In the first version, there was a wrong use of correlation inequalities (application of Harris theorem to a discrete time process). This is fixed here with the help of an exponential estimate
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of this work is to propose a finite population counterpart to Eigen's model, which incorporates stochastic effects. We consider a Moran model describing the evolution of a population of size $m$ of chromosomes of length $\ell$ over an alphabet of cardinality $\kappa$. The mutation probability per locus is $q$. We deal only with the sharp peak landscape: the replication rate is $\sigma>1$ for the master sequence and 1 for the other sequences. We study the equilibrium distribution of the process in the regime where $\ell, m\to +\infty$, $q\to 0$, $\ell q \to a$, $m/\ell\to\alpha$. We obtain an equation $\alpha\phi(a)=\ln\kappa$ in the parameter space $(a,\alpha)$ separating the regime where the equilibrium population is totally random from the regime where a quasispecies is formed. We observe the existence of a critical population size necessary for a quasispecies to emerge and we recover the finite population counterpart of the error threshold. These results are supported by computer simulations.
[ { "created": "Tue, 15 May 2012 16:31:35 GMT", "version": "v1" }, { "created": "Tue, 23 Oct 2012 21:11:11 GMT", "version": "v2" } ]
2012-10-25
[ [ "Cerf", "Raphaël", "" ] ]
The goal of this work is to propose a finite population counterpart to Eigen's model, which incorporates stochastic effects. We consider a Moran model describing the evolution of a population of size $m$ of chromosomes of length $\ell$ over an alphabet of cardinality $\kappa$. The mutation probability per locus is $q$. We deal only with the sharp peak landscape: the replication rate is $\sigma>1$ for the master sequence and 1 for the other sequences. We study the equilibrium distribution of the process in the regime where $\ell, m\to +\infty$, $q\to 0$, $\ell q \to a$, $m/\ell\to\alpha$. We obtain an equation $\alpha\phi(a)=\ln\kappa$ in the parameter space $(a,\alpha)$ separating the regime where the equilibrium population is totally random from the regime where a quasispecies is formed. We observe the existence of a critical population size necessary for a quasispecies to emerge and we recover the finite population counterpart of the error threshold. These results are supported by computer simulations.
2311.00667
Aniruddha Acharya
Aniruddha Acharya
Development and application of SEM/EDS in biological, biomedical & nanotechnological research
32 pages, 5 figures, 1 table, unpublished work
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
This comprehensive review discusses the development of scanning electron microscopy and the application of this technology in different fields such as biology, nanobiotechnology and biomedical science. Besides being a tool for high resolution imaging of surface or topography, the technology is coupled with analytical techniques such as energy dispersive spectroscopy for elemental mapping. Since the commercialization of the technology, it has developed manifold and currently very high-resolution nano scale imaging is possible by this technology. The development of FIB-SEM has allowed three-dimensional imaging of materials while the development of cryostage allows imaging of hydrated biological samples. Though variable pressure or environmental SEM can be used for imaging hydrated samples, they cannot capture a high-resolution image. SBEM and ATUM-SEM has automated the sampling process while improved and more powerful software along with user-friendly computer interface has made image analysis faster and more reliable. This review presents one of the most widely used analytical techniques used across the globe for scientific investigation. The power and potential of SEM is expanding with the development of accessory technology.
[ { "created": "Wed, 1 Nov 2023 17:14:52 GMT", "version": "v1" } ]
2023-11-02
[ [ "Acharya", "Aniruddha", "" ] ]
This comprehensive review discusses the development of scanning electron microscopy (SEM) and its application in fields such as biology, nanobiotechnology, and biomedical science. Besides being a tool for high-resolution imaging of surfaces and topography, the technology is coupled with analytical techniques such as energy-dispersive spectroscopy for elemental mapping. Since its commercialization, the technology has developed manifold, and very high-resolution nanoscale imaging is now possible. The development of FIB-SEM has allowed three-dimensional imaging of materials, while the development of cryostages allows imaging of hydrated biological samples. Though variable-pressure or environmental SEM can be used for imaging hydrated samples, it cannot capture high-resolution images. SBEM and ATUM-SEM have automated the sampling process, while improved and more powerful software, along with user-friendly computer interfaces, has made image analysis faster and more reliable. This review presents one of the most widely used analytical techniques for scientific investigation across the globe. The power and potential of SEM are expanding with the development of accessory technology.
1206.3031
Mike Steel Prof.
Iain Martyn and Mike Steel
The impact and interplay of long and short branches on phylogenetic information content
20 pages, 2 figures
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In molecular systematics, evolutionary trees are reconstructed from sequences at the tips under simple models of site substitution. A central question is how much sequence data is required to reconstruct a tree accurately? The answer depends on the lengths of the branches (edges) of the tree, with very short and very long edges requiring long sequences for accurate tree inference, particularly when these branch lengths are arranged in certain ways. For four-taxon trees, the sequence length question was settled for the case of a rapid speciation event in the distant past. Here, we generalize this result and show that the same sequence length requirement holds even when the speciation event is recent, provided that at least one of the four taxa is distantly related to the others. However, this equivalence disappears if a molecular clock applies, since the length of the long outgroup edge becomes largely irrelevant in the estimation of the tree topology for a recent (but not a deep) divergence. We also show how our results can be extended to models in which substitution rates vary across sites, and to settings where more than four taxa are involved.
[ { "created": "Thu, 14 Jun 2012 08:17:15 GMT", "version": "v1" }, { "created": "Mon, 16 Jul 2012 20:16:19 GMT", "version": "v2" } ]
2012-07-18
[ [ "Martyn", "Iain", "" ], [ "Steel", "Mike", "" ] ]
In molecular systematics, evolutionary trees are reconstructed from sequences at the tips under simple models of site substitution. A central question is how much sequence data is required to reconstruct a tree accurately? The answer depends on the lengths of the branches (edges) of the tree, with very short and very long edges requiring long sequences for accurate tree inference, particularly when these branch lengths are arranged in certain ways. For four-taxon trees, the sequence length question was settled for the case of a rapid speciation event in the distant past. Here, we generalize this result and show that the same sequence length requirement holds even when the speciation event is recent, provided that at least one of the four taxa is distantly related to the others. However, this equivalence disappears if a molecular clock applies, since the length of the long outgroup edge becomes largely irrelevant in the estimation of the tree topology for a recent (but not a deep) divergence. We also show how our results can be extended to models in which substitution rates vary across sites, and to settings where more than four taxa are involved.
2307.04603
Jaemyung Lee
Jaemyung Lee, Kyeongtak Han, Jaehoon Kim, Hasun Yu, Youhan Lee
Solvent: A Framework for Protein Folding
preprint, 9pages
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Consistency and reliability are crucial for conducting AI research. Many famous research fields, such as object detection, have been compared and validated with solid benchmark frameworks. After AlphaFold2, the protein folding task has entered a new phase, and many methods are proposed based on the component of AlphaFold2. The importance of a unified research framework in protein folding contains implementations and benchmarks to consistently and fairly compare various approaches. To achieve this, we present Solvent, a protein folding framework that supports significant components of state-of-the-art models in the manner of an off-the-shelf interface Solvent contains different models implemented in a unified codebase and supports training and evaluation for defined models on the same dataset. We benchmark well-known algorithms and their components and provide experiments that give helpful insights into the protein structure modeling field. We hope that Solvent will increase the reliability and consistency of proposed models and give efficiency in both speed and costs, resulting in acceleration on protein folding modeling research. The code is available at https://github.com/kakaobrain/solvent, and the project will continue to be developed.
[ { "created": "Fri, 7 Jul 2023 09:01:42 GMT", "version": "v1" }, { "created": "Wed, 12 Jul 2023 05:18:51 GMT", "version": "v2" }, { "created": "Wed, 19 Jul 2023 05:43:44 GMT", "version": "v3" }, { "created": "Thu, 20 Jul 2023 00:49:13 GMT", "version": "v4" }, { "created": "Mon, 31 Jul 2023 05:29:16 GMT", "version": "v5" } ]
2023-08-01
[ [ "Lee", "Jaemyung", "" ], [ "Han", "Kyeongtak", "" ], [ "Kim", "Jaehoon", "" ], [ "Yu", "Hasun", "" ], [ "Lee", "Youhan", "" ] ]
Consistency and reliability are crucial for conducting AI research. Many well-known research fields, such as object detection, have been compared and validated with solid benchmark frameworks. After AlphaFold2, the protein folding task has entered a new phase, and many methods have been proposed based on components of AlphaFold2. A unified research framework for protein folding should contain implementations and benchmarks that allow various approaches to be compared consistently and fairly. To achieve this, we present Solvent, a protein folding framework that supports significant components of state-of-the-art models in the manner of an off-the-shelf interface. Solvent contains different models implemented in a unified codebase and supports training and evaluation of the defined models on the same dataset. We benchmark well-known algorithms and their components and provide experiments that give helpful insights into the protein structure modeling field. We hope that Solvent will increase the reliability and consistency of proposed models and improve efficiency in both speed and cost, accelerating research on protein folding modeling. The code is available at https://github.com/kakaobrain/solvent, and the project will continue to be developed.
1706.00125
Alyssa Morrow
Alyssa Morrow, Vaishaal Shankar, Devin Petersohn, Anthony Joseph, Benjamin Recht, Nir Yosef
Convolutional Kitchen Sinks for Transcription Factor Binding Site Prediction
5 pages, 2 tables, NIPS MLCB Workshop 2016
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a simple and efficient method for prediction of transcription factor binding sites from DNA sequence. Our method computes a random approximation of a convolutional kernel feature map from DNA sequence and then learns a linear model from the approximated feature map. Our method outperforms state-of-the-art deep learning methods on five out of six test datasets from the ENCODE consortium, while training in less than one eighth the time.
[ { "created": "Wed, 31 May 2017 23:39:11 GMT", "version": "v1" } ]
2017-06-02
[ [ "Morrow", "Alyssa", "" ], [ "Shankar", "Vaishaal", "" ], [ "Petersohn", "Devin", "" ], [ "Joseph", "Anthony", "" ], [ "Recht", "Benjamin", "" ], [ "Yosef", "Nir", "" ] ]
We present a simple and efficient method for prediction of transcription factor binding sites from DNA sequence. Our method computes a random approximation of a convolutional kernel feature map from DNA sequence and then learns a linear model from the approximated feature map. Our method outperforms state-of-the-art deep learning methods on five out of six test datasets from the ENCODE consortium, while training in less than one eighth the time.
1103.2397
Peter Ralph
Alistair N. Boettiger, Peter L. Ralph, Steven N. Evans
Transcriptional regulation: Effects of promoter proximal pausing on speed, synchrony and reliability
21 pages, 6 figures; to be published in PLoS Computational Biology
PLoS Comput Biol 7(5): e1001136 (2011)
10.1371/journal.pcbi.1001136
null
q-bio.MN math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent whole genome polymerase binding assays have shown that a large proportion of unexpressed genes have pre-assembled RNA pol II transcription initiation complex stably bound to their promoters. Some such promoter proximally paused genes are regulated at transcription elongation rather than at initiation; it has been proposed that this difference allows these genes to both express faster and achieve more synchronous expression across populations of cells, thus overcoming molecular "noise" arising from low copy number factors. It has been established experimentally that genes which are regulated at elongation tend to express faster and more synchronously; however, it has not been shown directly whether or not it is the change in the regulated step {\em per se} that causes this increase in speed and synchrony. We investigate this question by proposing and analyzing a continuous-time Markov chain model of polymerase complex assembly regulated at one of two steps: initial polymerase association with DNA, or release from a paused, transcribing state. Our analysis demonstrates that, over a wide range of physical parameters, increased speed and synchrony are functional consequences of elongation control. Further, we make new predictions about the effect of elongation regulation on the consistent control of total transcript number between cells, and identify which elements in the transcription induction pathway are most sensitive to molecular noise and thus may be most evolutionarily constrained. Our methods produce symbolic expressions for quantities of interest with reasonable computational effort and can be used to explore the interplay between interaction topology and molecular noise in a broader class of biochemical networks. We provide general-purpose code implementing these methods.
[ { "created": "Fri, 11 Mar 2011 23:43:31 GMT", "version": "v1" } ]
2015-03-19
[ [ "Boettiger", "Alistair N.", "" ], [ "Ralph", "Peter L.", "" ], [ "Evans", "Steven N.", "" ] ]
Recent whole genome polymerase binding assays have shown that a large proportion of unexpressed genes have pre-assembled RNA pol II transcription initiation complex stably bound to their promoters. Some such promoter proximally paused genes are regulated at transcription elongation rather than at initiation; it has been proposed that this difference allows these genes to both express faster and achieve more synchronous expression across populations of cells, thus overcoming molecular "noise" arising from low copy number factors. It has been established experimentally that genes which are regulated at elongation tend to express faster and more synchronously; however, it has not been shown directly whether or not it is the change in the regulated step per se that causes this increase in speed and synchrony. We investigate this question by proposing and analyzing a continuous-time Markov chain model of polymerase complex assembly regulated at one of two steps: initial polymerase association with DNA, or release from a paused, transcribing state. Our analysis demonstrates that, over a wide range of physical parameters, increased speed and synchrony are functional consequences of elongation control. Further, we make new predictions about the effect of elongation regulation on the consistent control of total transcript number between cells, and identify which elements in the transcription induction pathway are most sensitive to molecular noise and thus may be most evolutionarily constrained. Our methods produce symbolic expressions for quantities of interest with reasonable computational effort and can be used to explore the interplay between interaction topology and molecular noise in a broader class of biochemical networks. We provide general-purpose code implementing these methods.
1808.01951
Islem Rekik
Mayssa Soussia and Islem Rekik
A Review on Image- and Network-based Brain Data Analysis Techniques for Alzheimer's Disease Diagnosis Reveals a Gap in Developing Predictive Methods for Prognosis
MICCAI Connectomics in NeuroImaging Workshop (2018)
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unveiling pathological brain changes associated with Alzheimer's disease (AD) is a challenging task especially that people do not show symptoms of dementia until it is late. Over the past years, neuroimaging techniques paved the way for computer-based diagnosis and prognosis to facilitate the automation of medical decision support and help clinicians identify cognitively intact subjects that are at high-risk of developing AD. As a progressive neurodegenerative disorder, researchers investigated how AD affects the brain using different approaches: 1) image-based methods where mainly neuroimaging modalities are used to provide early AD biomarkers, and 2) network-based methods which focus on functional and structural brain connectivities to give insights into how AD alters brain wiring. In this study, we reviewed neuroimaging-based technical methods developed for AD and mild-cognitive impairment (MCI) classification and prediction tasks, selected by screening all MICCAI proceedings published between 2010 and 2016. We included papers that fit into image-based or network-based categories. The majority of papers focused on classifying MCI vs. AD brain states, which has enabled the discovery of discriminative or altered brain regions and connections. However, very few works aimed to predict MCI progression based on early neuroimaging-based observations. Despite the high importance of reliably identifying which early MCI patient will convert to AD, remain stable or reverse to normal over months/years, predictive models are still lagging behind.
[ { "created": "Mon, 6 Aug 2018 15:00:17 GMT", "version": "v1" } ]
2018-08-07
[ [ "Soussia", "Mayssa", "" ], [ "Rekik", "Islem", "" ] ]
Unveiling pathological brain changes associated with Alzheimer's disease (AD) is a challenging task, especially since people do not show symptoms of dementia until late in the disease. Over the past years, neuroimaging techniques have paved the way for computer-based diagnosis and prognosis to facilitate the automation of medical decision support and help clinicians identify cognitively intact subjects that are at high risk of developing AD. Because AD is a progressive neurodegenerative disorder, researchers have investigated how it affects the brain using different approaches: 1) image-based methods, where mainly neuroimaging modalities are used to provide early AD biomarkers, and 2) network-based methods, which focus on functional and structural brain connectivities to give insights into how AD alters brain wiring. In this study, we reviewed neuroimaging-based technical methods developed for AD and mild cognitive impairment (MCI) classification and prediction tasks, selected by screening all MICCAI proceedings published between 2010 and 2016. We included papers that fit into the image-based or network-based categories. The majority of papers focused on classifying MCI vs. AD brain states, which has enabled the discovery of discriminative or altered brain regions and connections. However, very few works aimed to predict MCI progression based on early neuroimaging-based observations. Despite the high importance of reliably identifying which early MCI patients will convert to AD, remain stable, or revert to normal over months/years, predictive models are still lagging behind.
2308.12735
Casper Asbj{\o}rn Eriksen
Casper Asbj{\o}rn Eriksen, Jakob Lykke Andersen, Rolf Fagerberg, Daniel Merkle
Reconciling Inconsistent Molecular Structures from Biochemical Databases
14 pages, 4 figures, accepted at ISBRA 2023
null
null
null
q-bio.BM cs.DB q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Information on the structure of molecules, retrieved via biochemical databases, plays a pivotal role in various disciplines, such as metabolomics, systems biology, and drug discovery. However, no such database can be complete, and the chemical structure for a given compound is not necessarily consistent between databases. This paper presents StructRecon, a novel tool for resolving unique and correct molecular structures from database identifiers. StructRecon traverses the cross-links between database entries in different databases to construct what we call an identifier graph, which offers a more complete view of the total information available on a particular compound across all the databases. In order to reconcile discrepancies between databases, we first present an extensible model for chemical structure which supports multiple independent levels of detail, allowing standardisation of the structure to be applied iteratively. In some cases, our standardisation approach results in multiple structures for a given compound, in which case a random walk-based algorithm is used to select the most likely structure among incompatible alternates. We applied StructRecon to the EColiCore2 model, resolving a unique chemical structure for 85.11 % of identifiers. StructRecon is open-source and modular, which enables the potential support for more databases in the future.
[ { "created": "Thu, 24 Aug 2023 12:26:20 GMT", "version": "v1" } ]
2023-08-25
[ [ "Eriksen", "Casper Asbjørn", "" ], [ "Andersen", "Jakob Lykke", "" ], [ "Fagerberg", "Rolf", "" ], [ "Merkle", "Daniel", "" ] ]
Information on the structure of molecules, retrieved via biochemical databases, plays a pivotal role in various disciplines, such as metabolomics, systems biology, and drug discovery. However, no such database can be complete, and the chemical structure for a given compound is not necessarily consistent between databases. This paper presents StructRecon, a novel tool for resolving unique and correct molecular structures from database identifiers. StructRecon traverses the cross-links between database entries in different databases to construct what we call an identifier graph, which offers a more complete view of the total information available on a particular compound across all the databases. In order to reconcile discrepancies between databases, we first present an extensible model for chemical structure which supports multiple independent levels of detail, allowing standardisation of the structure to be applied iteratively. In some cases, our standardisation approach results in multiple structures for a given compound, in which case a random walk-based algorithm is used to select the most likely structure among incompatible alternates. We applied StructRecon to the EColiCore2 model, resolving a unique chemical structure for 85.11 % of identifiers. StructRecon is open-source and modular, which enables the potential support for more databases in the future.
0712.1365
Alexei Vazquez
Alexei Vazquez
Population stratification using a statistical model on hypergraphs
7 pages, 6 figures
Phys. Rev. E 77, 066106 (2008)
10.1103/PhysRevE.77.066106
null
q-bio.PE cs.AI physics.data-an
null
Population stratification is a problem encountered in several areas of biology and public health. We tackle this problem by mapping a population and its elements attributes into a hypergraph, a natural extension of the concept of graph or network to encode associations among any number of elements. On this hypergraph, we construct a statistical model reflecting our intuition about how the elements attributes can emerge from a postulated population structure. Finally, we introduce the concept of stratification representativeness as a mean to identify the simplest stratification already containing most of the information about the population structure. We demonstrate the power of this framework stratifying an animal and a human population based on phenotypic and genotypic properties, respectively.
[ { "created": "Sun, 9 Dec 2007 20:53:45 GMT", "version": "v1" } ]
2009-11-13
[ [ "Vazquez", "Alexei", "" ] ]
Population stratification is a problem encountered in several areas of biology and public health. We tackle this problem by mapping a population and its elements' attributes into a hypergraph, a natural extension of the concept of a graph or network that encodes associations among any number of elements. On this hypergraph, we construct a statistical model reflecting our intuition about how the elements' attributes can emerge from a postulated population structure. Finally, we introduce the concept of stratification representativeness as a means of identifying the simplest stratification that already contains most of the information about the population structure. We demonstrate the power of this framework by stratifying an animal and a human population based on phenotypic and genotypic properties, respectively.
2311.07117
Kevin Sean Chen
Kevin S. Chen, Anuj K. Sharma, Jonathan W. Pillow, Andrew M. Leifer
Olfactory learning alters navigation strategies and behavioral variability in C. elegans
null
null
null
null
q-bio.NC physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Animals adjust their behavioral response to sensory input adaptively depending on past experiences. The flexible brain computation is crucial for survival and is of great interest in neuroscience. The nematode C. elegans modulates its navigation behavior depending on the association of odor butanone with food (appetitive training) or starvation (aversive training), and will then climb up the butanone gradient or ignore it, respectively. However, the exact change in navigation strategy in response to learning is still unknown. Here we study the learned odor navigation in worms by combining precise experimental measurement and a novel descriptive model of navigation. Our model consists of two known navigation strategies in worms: biased random walk and weathervaning. We infer weights on these strategies by applying the model to worm navigation trajectories and the exact odor concentration it experiences. Compared to naive worms, appetitive trained worms up-regulate the biased random walk strategy, and aversive trained worms down-regulate the weathervaning strategy. The statistical model provides prediction with $>90 \%$ accuracy of the past training condition given navigation data, which outperforms the classical chemotaxis metric. We find that the behavioral variability is altered by learning, such that worms are less variable after training compared to naive ones. The model further predicts the learning-dependent response and variability under optogenetic perturbation of the olfactory neuron AWC$^\mathrm{ON}$. Lastly, we investigate neural circuits downstream from AWC$^\mathrm{ON}$ that are differentially recruited for learned odor-guided navigation. Together, we provide a new paradigm to quantify flexible navigation algorithms and pinpoint the underlying neural substrates.
[ { "created": "Mon, 13 Nov 2023 07:21:22 GMT", "version": "v1" }, { "created": "Fri, 23 Feb 2024 05:42:33 GMT", "version": "v2" } ]
2024-02-26
[ [ "Chen", "Kevin S.", "" ], [ "Sharma", "Anuj K.", "" ], [ "Pillow", "Jonathan W.", "" ], [ "Leifer", "Andrew M.", "" ] ]
Animals adaptively adjust their behavioral response to sensory input depending on past experiences. This flexible brain computation is crucial for survival and is of great interest in neuroscience. The nematode C. elegans modulates its navigation behavior depending on the association of the odor butanone with food (appetitive training) or starvation (aversive training), and will then climb up the butanone gradient or ignore it, respectively. However, the exact change in navigation strategy in response to learning is still unknown. Here we study learned odor navigation in worms by combining precise experimental measurement and a novel descriptive model of navigation. Our model consists of two known navigation strategies in worms: biased random walk and weathervaning. We infer weights on these strategies by applying the model to worm navigation trajectories and the exact odor concentrations the worms experience. Compared to naive worms, appetitively trained worms up-regulate the biased random walk strategy, and aversively trained worms down-regulate the weathervaning strategy. The statistical model predicts the past training condition from navigation data with $>90 \%$ accuracy, outperforming the classical chemotaxis metric. We find that behavioral variability is altered by learning, such that worms are less variable after training than naive ones. The model further predicts the learning-dependent response and variability under optogenetic perturbation of the olfactory neuron AWC$^\mathrm{ON}$. Lastly, we investigate neural circuits downstream from AWC$^\mathrm{ON}$ that are differentially recruited for learned odor-guided navigation. Together, we provide a new paradigm to quantify flexible navigation algorithms and pinpoint the underlying neural substrates.
2407.14798
Harvey Wang
Harvey Wang, Selena Singh, Thomas Trappenberg, Abraham Nunes
An Information-Geometric Formulation of Pattern Separation and Evaluation of Existing Indices
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Pattern separation is a computational process by which dissimilar neural patterns are generated from similar input patterns. We present an information-geometric formulation of pattern separation, where a pattern separator is modelled as a family of statistical distributions on a manifold. Such a manifold maps an input (i.e. coordinates) to a probability distribution that generates firing patterns. Pattern separation occurs when small coordinate changes result in large distances between samples from the corresponding distributions. Under this formulation, we implement a two-neuron system whose probability law forms a 3-dimensional manifold with mutually orthogonal coordinates representing the neurons' marginal and correlational firing rates. We use this highly controlled system to examine the behaviour of spike train similarity indices commonly used in pattern separation research. We found that all indices (except scaling factor) were sensitive to relative differences in marginal firing rates, but no index adequately captured differences in spike trains that resulted from altering the correlation in activity between the two neurons. That is, existing pattern separation metrics appear (A) sensitive to patterns that are encoded by different neurons, but (B) insensitive to patterns that differ only in relative spike timing (e.g. synchrony between neurons in the ensemble).
[ { "created": "Sat, 20 Jul 2024 07:58:23 GMT", "version": "v1" } ]
2024-07-23
[ [ "Wang", "Harvey", "" ], [ "Singh", "Selena", "" ], [ "Trappenberg", "Thomas", "" ], [ "Nunes", "Abraham", "" ] ]
Pattern separation is a computational process by which dissimilar neural patterns are generated from similar input patterns. We present an information-geometric formulation of pattern separation, where a pattern separator is modelled as a family of statistical distributions on a manifold. Such a manifold maps an input (i.e. coordinates) to a probability distribution that generates firing patterns. Pattern separation occurs when small coordinate changes result in large distances between samples from the corresponding distributions. Under this formulation, we implement a two-neuron system whose probability law forms a 3-dimensional manifold with mutually orthogonal coordinates representing the neurons' marginal and correlational firing rates. We use this highly controlled system to examine the behaviour of spike train similarity indices commonly used in pattern separation research. We found that all indices (except scaling factor) were sensitive to relative differences in marginal firing rates, but no index adequately captured differences in spike trains that resulted from altering the correlation in activity between the two neurons. That is, existing pattern separation metrics appear (A) sensitive to patterns that are encoded by different neurons, but (B) insensitive to patterns that differ only in relative spike timing (e.g. synchrony between neurons in the ensemble).
1804.11195
Samir Farooq
Samir Farooq, Samuel J. Weisenthal, Melissa Trayhan, Robert J. White, Kristen Bush, Peter R. Mariuz, Martin S. Zand
Revealing patterns in HIV viral load data and classifying patients via a novel machine learning cluster summarization method
17 page paper with additional 10 pages of references and supplementary material. 7 figures and 9 supplementary figures
null
null
null
q-bio.QM cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
HIV RNA viral load (VL) is an important outcome variable in studies of HIV infected persons. There exists only a handful of methods which classify patients by viral load patterns. Most methods place limits on the use of viral load measurements, are often specific to a particular study design, and do not account for complex, temporal variation. To address this issue, we propose a set of four unambiguous computable characteristics (features) of time-varying HIV viral load patterns, along with a novel centroid-based classification algorithm, which we use to classify a population of 1,576 HIV positive clinic patients into one of five different viral load patterns (clusters) often found in the literature: durably suppressed viral load (DSVL), sustained low viral load (SLVL), sustained high viral load (SHVL), high viral load suppression (HVLS), and rebounding viral load (RVL). The centroid algorithm summarizes these clusters in terms of their centroids and radii. We show that this allows new viral load patterns to be assigned pattern membership based on the distance from the centroid relative to its radius, which we term radial normalization classification. This method has the benefit of providing an objective and quantitative method to assign viral load pattern membership with a concise and interpretable model that aids clinical decision making. This method also facilitates meta-analyses by providing computably distinct HIV categories. Finally we propose that this novel centroid algorithm could also be useful in the areas of cluster comparison for outcomes research and data reduction in machine learning.
[ { "created": "Wed, 25 Apr 2018 22:40:03 GMT", "version": "v1" } ]
2018-05-01
[ [ "Farooq", "Samir", "" ], [ "Weisenthal", "Samuel J.", "" ], [ "Trayhan", "Melissa", "" ], [ "White", "Robert J.", "" ], [ "Bush", "Kristen", "" ], [ "Mariuz", "Peter R.", "" ], [ "Zand", "Martin S.", "" ] ]
HIV RNA viral load (VL) is an important outcome variable in studies of HIV-infected persons. There exist only a handful of methods that classify patients by viral load patterns. Most methods place limits on the use of viral load measurements, are often specific to a particular study design, and do not account for complex, temporal variation. To address this issue, we propose a set of four unambiguous computable characteristics (features) of time-varying HIV viral load patterns, along with a novel centroid-based classification algorithm, which we use to classify a population of 1,576 HIV-positive clinic patients into one of five different viral load patterns (clusters) often found in the literature: durably suppressed viral load (DSVL), sustained low viral load (SLVL), sustained high viral load (SHVL), high viral load suppression (HVLS), and rebounding viral load (RVL). The centroid algorithm summarizes these clusters in terms of their centroids and radii. We show that this allows new viral load patterns to be assigned pattern membership based on the distance from the centroid relative to its radius, which we term radial normalization classification. This method has the benefit of providing an objective and quantitative way to assign viral load pattern membership with a concise and interpretable model that aids clinical decision making. This method also facilitates meta-analyses by providing computably distinct HIV categories. Finally, we propose that this novel centroid algorithm could also be useful in the areas of cluster comparison for outcomes research and data reduction in machine learning.
q-bio/0407037
Taguchi Y.-H.
Y.-h. Taguchi and Y. Oono
Relational patterns of gene expression via nonmetric multidimensional scaling analysis
16 pages, 7 figures, to appear in Bioinformatics
null
10.1093/bioinformatics/bti067
null
q-bio.GN q-bio.CB
null
Motivation:Microarray experiments result in large scale data sets that require extensive mining and refining to extract useful information. We demonstrate the usefulness of (nonmetric) multidimensional scaling (MDS) method in analyzing a large number of genes. Applying MDS to the microarray data is certainly not new, but the existing works are all on small numbers (< 100) of points to be analyzed. We have been developing an efficient novel algorithm for nonmetric multidimensional scaling (nMDS) analysis for very large data sets as a maximally unsupervised data mining device. We wish to demonstrate its usefulness in the context of bioinformatics (unraveling relational patterns among genes from time series data in this paper). Results: The Pearson correlation coefficient with its sign flipped is used to measure the dissimilarity of the gene activities in transcriptional response of cell-cycle-synchronized human fibroblasts to serum [Iyer {\it et al}., Science {\bf 283}, 83 (1999)]. These dissimilarity data have been analyzed with our nMDS algorithm to produce an almost circular relational pattern of the genes. The obtained pattern expresses a temporal order in the data in this example; the temporal expression pattern of the genes rotates along this circular arrangement and is related to the cell cycle. For the data we analyze in this paper we observe the following. If an appropriate preparation procedure is applied to the original data set, linear methods such as the principal component analysis (PCA) could achieve reasonable results, but without data preprocessing linear methods such as PCA cannot achieve a useful picture. Furthermore, even with an appropriate data preprocessing, the outcomes of linear procedures are not as clearcut as those by nMDS without preprocessing.
[ { "created": "Thu, 29 Jul 2004 06:48:09 GMT", "version": "v1" }, { "created": "Sat, 18 Sep 2004 20:46:21 GMT", "version": "v2" } ]
2009-09-29
[ [ "Taguchi", "Y. -h.", "" ], [ "Oono", "Y.", "" ] ]
Motivation: Microarray experiments result in large-scale data sets that require extensive mining and refining to extract useful information. We demonstrate the usefulness of the (nonmetric) multidimensional scaling (MDS) method in analyzing a large number of genes. Applying MDS to microarray data is certainly not new, but the existing works are all on small numbers (< 100) of points to be analyzed. We have been developing an efficient novel algorithm for nonmetric multidimensional scaling (nMDS) analysis of very large data sets as a maximally unsupervised data mining device. We wish to demonstrate its usefulness in the context of bioinformatics (unraveling relational patterns among genes from time series data in this paper). Results: The Pearson correlation coefficient with its sign flipped is used to measure the dissimilarity of the gene activities in the transcriptional response of cell-cycle-synchronized human fibroblasts to serum [Iyer {\it et al}., Science {\bf 283}, 83 (1999)]. These dissimilarity data have been analyzed with our nMDS algorithm to produce an almost circular relational pattern of the genes. The obtained pattern expresses a temporal order in the data in this example; the temporal expression pattern of the genes rotates along this circular arrangement and is related to the cell cycle. For the data we analyze in this paper, we observe the following. If an appropriate preparation procedure is applied to the original data set, linear methods such as principal component analysis (PCA) could achieve reasonable results, but without data preprocessing linear methods such as PCA cannot achieve a useful picture. Furthermore, even with appropriate data preprocessing, the outcomes of linear procedures are not as clear-cut as those of nMDS without preprocessing.
1201.0912
Hung-Chung Huang
Rongqing Xie, Gaochao Lin, and Hung-Chung Huang
Experimental Evidence Supporting a New "Osmosis Law & Theory" Derived New Formula that Improves van't Hoff Osmotic Pressure Equation
This is a revised and improved version providing proof of the experimental data and evidence for the validity of the new osmotic pressure formula described in this article (which was missing in the previous version).
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experimental data were used to support a new concept of osmotic force and a new osmotic law that can explain the osmotic process without the difficulties encountered with van't Hoff osmotic pressure theory. Derived new osmotic formula with curvilinear equation (via new osmotic law) overcomes the limitations and incompleteness of van't Hoff (linear) osmotic pressure equation, $\pi=(n/v)RT$, (for ideal dilute solution only). The application of this classical theory often resulted in contradiction regardless of miscellaneous explaining efforts. This is due to the lack of a scientific concept like "osmotic force" that we believe can elaborate the osmotic process. Via this new concept, the proposed new osmotic law and derived new osmotic pressure equation will greatly complete and improve the theoretical consistency within the scientific framework of osmosis.
[ { "created": "Mon, 2 Jan 2012 04:21:59 GMT", "version": "v1" }, { "created": "Sat, 31 Dec 2022 04:23:59 GMT", "version": "v2" } ]
2023-01-03
[ [ "Xie", "Rongqing", "" ], [ "Lin", "Gaochao", "" ], [ "Huang", "Hung-Chung", "" ] ]
Experimental data were used to support a new concept of osmotic force and a new osmotic law that can explain the osmotic process without the difficulties encountered with van't Hoff's osmotic pressure theory. The new osmotic formula with a curvilinear equation, derived via the new osmotic law, overcomes the limitations and incompleteness of the van't Hoff (linear) osmotic pressure equation, $\pi=(n/v)RT$, which holds for ideal dilute solutions only. The application of this classical theory has often resulted in contradictions despite miscellaneous attempts at explanation. This is due to the lack of a scientific concept such as the "osmotic force" that we believe can describe the osmotic process. Via this new concept, the proposed new osmotic law and the derived new osmotic pressure equation will greatly complete and improve the theoretical consistency within the scientific framework of osmosis.
1807.08570
Jorge P. Rodr\'iguez
Jorge P. Rodr\'iguez, Juan Fern\'andez-Gracia, Michele Thums, Mark A. Hindell, Ana M. M. Sequeira, Mark G. Meekan, Daniel P. Costa, Christophe Guinet, Robert G. Harcourt, Clive R. McMahon, Monica Muelbert, Carlos M. Duarte, V\'ictor M. Egu\'iluz
Big data analyses reveal patterns and drivers of the movements of southern elephant seals
18 pages, 5 figures, 6 supplementary figures
Sci. Rep. 7, 112 (2017)
10.1038/s41598-017-00165-0
null
q-bio.QM physics.bio-ph physics.data-an q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The growing number of large databases of animal tracking provides an opportunity for analyses of movement patterns at the scales of populations and even species. We used analytical approaches, developed to cope with big data, that require no a priori assumptions about the behaviour of the target agents, to analyse a pooled tracking dataset of 272 elephant seals (Mirounga leonina) in the Southern Ocean, that was comprised of >500,000 location estimates collected over more than a decade. Our analyses showed that the displacements of these seals were described by a truncated power law distribution across several spatial and temporal scales, with a clear signature of directed movement. This pattern was evident when analysing the aggregated tracks despite a wide diversity of individual trajectories. We also identified marine provinces that described the migratory and foraging habitats of these seals. Our analysis provides evidence for the presence of intrinsic drivers of movement, such as memory, that cannot be detected using common models of movement behaviour. These results highlight the potential for big data techniques to provide new insights into movement behaviour when applied to large datasets of animal tracking.
[ { "created": "Mon, 23 Jul 2018 12:47:19 GMT", "version": "v1" }, { "created": "Tue, 24 Jul 2018 09:02:02 GMT", "version": "v2" } ]
2018-07-25
[ [ "Rodríguez", "Jorge P.", "" ], [ "Fernández-Gracia", "Juan", "" ], [ "Thums", "Michele", "" ], [ "Hindell", "Mark A.", "" ], [ "Sequeira", "Ana M. M.", "" ], [ "Meekan", "Mark G.", "" ], [ "Costa", "Daniel P.", "" ], [ "Guinet", "Christophe", "" ], [ "Harcourt", "Robert G.", "" ], [ "McMahon", "Clive R.", "" ], [ "Muelbert", "Monica", "" ], [ "Duarte", "Carlos M.", "" ], [ "Eguíluz", "Víctor M.", "" ] ]
The growing number of large databases of animal tracking provides an opportunity for analyses of movement patterns at the scales of populations and even species. We used analytical approaches, developed to cope with big data, that require no a priori assumptions about the behaviour of the target agents, to analyse a pooled tracking dataset of 272 elephant seals (Mirounga leonina) in the Southern Ocean, comprising >500,000 location estimates collected over more than a decade. Our analyses showed that the displacements of these seals were described by a truncated power law distribution across several spatial and temporal scales, with a clear signature of directed movement. This pattern was evident when analysing the aggregated tracks despite a wide diversity of individual trajectories. We also identified marine provinces that described the migratory and foraging habitats of these seals. Our analysis provides evidence for the presence of intrinsic drivers of movement, such as memory, that cannot be detected using common models of movement behaviour. These results highlight the potential for big data techniques to provide new insights into movement behaviour when applied to large datasets of animal tracking.
1602.07207
Christian R\"over
Steffen Unkel, Christian R\"over, Nigel Stallard, Norbert Benda, Martin Posch, Sarah Zohar, Tim Friede
Systematic reviews in paediatric multiple sclerosis and Creutzfeldt-Jakob disease exemplify shortcomings in methods used to evaluate therapies in rare conditions
11 pages, 2 figures, 3 tables
Orphanet Journal of Rare Diseases, 11:16, 2016
10.1186/s13023-016-0402-6
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
BACKGROUND: Randomized controlled trials (RCTs) are the gold standard design of clinical research to assess interventions. However, RCTs cannot always be applied for practical or ethical reasons. To investigate the current practices in rare diseases, we review evaluations of therapeutic interventions in paediatric multiple sclerosis (MS) and Creutzfeldt-Jakob disease (CJD). In particular, we shed light on the endpoints used, the study designs implemented and the statistical methodologies applied. METHODS: We conducted literature searches to identify relevant primary studies. Data on study design, objectives, endpoints, patient characteristics, randomization and masking, type of intervention, control, withdrawals and statistical methodology were extracted from the selected studies. The risk of bias and the quality of the studies were assessed. RESULTS: Twelve (seven) primary studies on paediatric MS (CJD) were included in the qualitative synthesis. No double-blind, randomized placebo-controlled trial for evaluating interventions in paediatric MS has been published yet. Evidence from one open-label RCT is available. The observational studies are before-after studies or controlled studies. Three of the seven selected studies on CJD are RCTs, of which two received the maximum mark on the Oxford Quality Scale. Four trials are controlled observational studies. CONCLUSIONS: Evidence from double-blind RCTs on the efficacy of treatments appears to be variable between rare diseases. With regard to paediatric conditions it remains to be seen what impact regulators will have through e.g., paediatric investigation plans. Overall, there is space for improvement by using innovative trial designs and data analysis techniques.
[ { "created": "Sun, 21 Feb 2016 16:28:58 GMT", "version": "v1" } ]
2016-02-25
[ [ "Unkel", "Steffen", "" ], [ "Röver", "Christian", "" ], [ "Stallard", "Nigel", "" ], [ "Benda", "Norbert", "" ], [ "Posch", "Martin", "" ], [ "Zohar", "Sarah", "" ], [ "Friede", "Tim", "" ] ]
BACKGROUND: Randomized controlled trials (RCTs) are the gold standard design of clinical research to assess interventions. However, RCTs cannot always be applied for practical or ethical reasons. To investigate the current practices in rare diseases, we review evaluations of therapeutic interventions in paediatric multiple sclerosis (MS) and Creutzfeldt-Jakob disease (CJD). In particular, we shed light on the endpoints used, the study designs implemented and the statistical methodologies applied. METHODS: We conducted literature searches to identify relevant primary studies. Data on study design, objectives, endpoints, patient characteristics, randomization and masking, type of intervention, control, withdrawals and statistical methodology were extracted from the selected studies. The risk of bias and the quality of the studies were assessed. RESULTS: Twelve (seven) primary studies on paediatric MS (CJD) were included in the qualitative synthesis. No double-blind, randomized placebo-controlled trial for evaluating interventions in paediatric MS has been published yet. Evidence from one open-label RCT is available. The observational studies are before-after studies or controlled studies. Three of the seven selected studies on CJD are RCTs, of which two received the maximum mark on the Oxford Quality Scale. Four trials are controlled observational studies. CONCLUSIONS: Evidence from double-blind RCTs on the efficacy of treatments appears to be variable between rare diseases. With regard to paediatric conditions it remains to be seen what impact regulators will have through e.g., paediatric investigation plans. Overall, there is space for improvement by using innovative trial designs and data analysis techniques.
2208.01456
Ben Lonnqvist
Ben Lonnqvist, Harshitha Machiraju, Michael H. Herzog
A comment on Guo et al. [arXiv:2206.11228]
null
null
null
null
q-bio.NC cs.LG
http://creativecommons.org/licenses/by/4.0/
In a recent article, Guo et al. [arXiv:2206.11228] report that adversarially trained neural representations in deep networks may already be as robust as corresponding primate IT neural representations. While we find the paper's primary experiment illuminating, we have doubts about the interpretation and phrasing of the results presented in the paper.
[ { "created": "Tue, 2 Aug 2022 13:47:40 GMT", "version": "v1" } ]
2022-08-03
[ [ "Lonnqvist", "Ben", "" ], [ "Machiraju", "Harshitha", "" ], [ "Herzog", "Michael H.", "" ] ]
In a recent article, Guo et al. [arXiv:2206.11228] report that adversarially trained neural representations in deep networks may already be as robust as corresponding primate IT neural representations. While we find the paper's primary experiment illuminating, we have doubts about the interpretation and phrasing of the results presented in the paper.
1803.03146
Jade Shi
Jade Shi (EteRNA players), Rhiju Das, and Vijay S. Pande
SentRNA: Improving computational RNA design by incorporating a prior of human design strategies
27 pages (not including Supplementary Information), 9 figures, 7 tables
null
null
null
q-bio.QM cs.AI stat.ML
http://creativecommons.org/licenses/by/4.0/
Solving the RNA inverse folding problem is a critical prerequisite to RNA design, an emerging field in bioengineering with a broad range of applications from reaction catalysis to cancer therapy. Although significant progress has been made in developing machine-based inverse RNA folding algorithms, current approaches still have difficulty designing sequences for large or complex targets. On the other hand, human players of the online RNA design game EteRNA have consistently shown superior performance in this regard, being able to readily design sequences for targets that are challenging for machine algorithms. Here we present a novel approach to the RNA design problem, SentRNA, a design agent consisting of a fully-connected neural network trained end-to-end using human-designed RNA sequences. We show that through this approach, SentRNA can solve complex targets previously unsolvable by any machine-based approach and achieve state-of-the-art performance on two separate challenging test sets. Our results demonstrate that incorporating human design strategies into a design algorithm can significantly boost machine performance and suggests a new paradigm for machine-based RNA design.
[ { "created": "Thu, 8 Mar 2018 15:12:16 GMT", "version": "v1" }, { "created": "Wed, 6 Mar 2019 01:01:53 GMT", "version": "v2" } ]
2019-03-07
[ [ "Shi", "Jade", "", "EteRNA players" ], [ "Das", "Rhiju", "" ], [ "Pande", "Vijay S.", "" ] ]
Solving the RNA inverse folding problem is a critical prerequisite to RNA design, an emerging field in bioengineering with a broad range of applications from reaction catalysis to cancer therapy. Although significant progress has been made in developing machine-based inverse RNA folding algorithms, current approaches still have difficulty designing sequences for large or complex targets. On the other hand, human players of the online RNA design game EteRNA have consistently shown superior performance in this regard, being able to readily design sequences for targets that are challenging for machine algorithms. Here we present a novel approach to the RNA design problem, SentRNA, a design agent consisting of a fully-connected neural network trained end-to-end using human-designed RNA sequences. We show that through this approach, SentRNA can solve complex targets previously unsolvable by any machine-based approach and achieve state-of-the-art performance on two separate challenging test sets. Our results demonstrate that incorporating human design strategies into a design algorithm can significantly boost machine performance and suggests a new paradigm for machine-based RNA design.
2110.01219
Soojung Yang
Soojung Yang and Doyeong Hwang and Seul Lee and Seongok Ryu and Sung Ju Hwang
Hit and Lead Discovery with Explorative RL and Fragment-based Molecule Generation
To be published in NeurIPS 2021
null
null
null
q-bio.QM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Recently, utilizing reinforcement learning (RL) to generate molecules with desired properties has been highlighted as a promising strategy for drug design. A molecular docking program - a physical simulation that estimates protein-small molecule binding affinity - can be an ideal reward scoring function for RL, as it is a straightforward proxy of the therapeutic potential. Still, two imminent challenges exist for this task. First, the models often fail to generate chemically realistic and pharmacochemically acceptable molecules. Second, the docking score optimization is a difficult exploration problem that involves many local optima and less smooth surfaces with respect to molecular structure. To tackle these challenges, we propose a novel RL framework that generates pharmacochemically acceptable molecules with large docking scores. Our method - Fragment-based generative RL with Explorative Experience replay for Drug design (FREED) - constrains the generated molecules to a realistic and qualified chemical space and effectively explores the space to find drugs by coupling our fragment-based generation method and a novel error-prioritized experience replay (PER). We also show that our model performs well on both de novo and scaffold-based schemes. Our model produces molecules of higher quality compared to existing methods while achieving state-of-the-art performance on two of three targets in terms of the docking scores of the generated molecules. We further show with ablation studies that our method, predictive error-PER (FREED(PE)), significantly improves the model performance.
[ { "created": "Mon, 4 Oct 2021 07:21:00 GMT", "version": "v1" }, { "created": "Tue, 5 Oct 2021 15:22:33 GMT", "version": "v2" }, { "created": "Wed, 27 Oct 2021 03:44:26 GMT", "version": "v3" } ]
2021-10-28
[ [ "Yang", "Soojung", "" ], [ "Hwang", "Doyeong", "" ], [ "Lee", "Seul", "" ], [ "Ryu", "Seongok", "" ], [ "Hwang", "Sung Ju", "" ] ]
Recently, utilizing reinforcement learning (RL) to generate molecules with desired properties has been highlighted as a promising strategy for drug design. A molecular docking program - a physical simulation that estimates protein-small molecule binding affinity - can be an ideal reward scoring function for RL, as it is a straightforward proxy of the therapeutic potential. Still, two imminent challenges exist for this task. First, the models often fail to generate chemically realistic and pharmacochemically acceptable molecules. Second, the docking score optimization is a difficult exploration problem that involves many local optima and less smooth surfaces with respect to molecular structure. To tackle these challenges, we propose a novel RL framework that generates pharmacochemically acceptable molecules with large docking scores. Our method - Fragment-based generative RL with Explorative Experience replay for Drug design (FREED) - constrains the generated molecules to a realistic and qualified chemical space and effectively explores the space to find drugs by coupling our fragment-based generation method and a novel error-prioritized experience replay (PER). We also show that our model performs well on both de novo and scaffold-based schemes. Our model produces molecules of higher quality compared to existing methods while achieving state-of-the-art performance on two of three targets in terms of the docking scores of the generated molecules. We further show with ablation studies that our method, predictive error-PER (FREED(PE)), significantly improves the model performance.
1906.09465
David Murrugarra
David Murrugarra and Elena Dimitrova
Quantifying the Total Effect of Edge Interventions in Discrete Multistate Networks
10 pages, 8 figures
Automatica, 125, 109453, 2021
10.1016/j.automatica.2020.109453
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Developing efficient computational methods to assess the impact of external interventions on the dynamics of a network model is an important problem in systems biology. This paper focuses on quantifying the global changes that result from the application of an intervention to produce a desired effect, which we define as the total effect of the intervention. The type of mathematical models that we will consider are discrete dynamical systems which include the widely used Boolean networks and their generalizations. The potential interventions can be represented by a set of nodes and edges that can be manipulated to produce a desired effect on the system. We use a class of regulatory rules called nested canalizing functions that frequently appear in published models and were inspired by the concept of canalization in evolutionary biology. In this paper, we provide a polynomial normal form based on the canalizing properties of regulatory functions. Using this polynomial normal form, we give a set of formulas for counting the maximum number of transitions that will change in the state space upon an edge deletion in the wiring diagram. These formulas rely on the canalizing structure of the target function since the number of changed transitions depends on the canalizing layer that includes the input to be deleted. We also present computations on random networks to compare the exact number of changes with the upper bounds provided by our formulas. Finally, we provide statistics on the sharpness of these upper bounds in random networks.
[ { "created": "Sat, 22 Jun 2019 15:49:23 GMT", "version": "v1" }, { "created": "Sun, 24 Nov 2019 18:03:13 GMT", "version": "v2" }, { "created": "Tue, 21 Jul 2020 18:02:47 GMT", "version": "v3" }, { "created": "Sun, 11 Oct 2020 20:18:30 GMT", "version": "v4" } ]
2024-07-09
[ [ "Murrugarra", "David", "" ], [ "Dimitrova", "Elena", "" ] ]
Developing efficient computational methods to assess the impact of external interventions on the dynamics of a network model is an important problem in systems biology. This paper focuses on quantifying the global changes that result from the application of an intervention to produce a desired effect, which we define as the total effect of the intervention. The type of mathematical models that we will consider are discrete dynamical systems which include the widely used Boolean networks and their generalizations. The potential interventions can be represented by a set of nodes and edges that can be manipulated to produce a desired effect on the system. We use a class of regulatory rules called nested canalizing functions that frequently appear in published models and were inspired by the concept of canalization in evolutionary biology. In this paper, we provide a polynomial normal form based on the canalizing properties of regulatory functions. Using this polynomial normal form, we give a set of formulas for counting the maximum number of transitions that will change in the state space upon an edge deletion in the wiring diagram. These formulas rely on the canalizing structure of the target function since the number of changed transitions depends on the canalizing layer that includes the input to be deleted. We also present computations on random networks to compare the exact number of changes with the upper bounds provided by our formulas. Finally, we provide statistics on the sharpness of these upper bounds in random networks.
2407.00175
Paul Kirk
Leiv R{\o}nneberg, Vidhi Lalchand, Paul D. W. Kirk
Permutation invariant multi-output Gaussian Processes for drug combination prediction in cancer
null
null
null
null
q-bio.QM cs.LG stat.AP stat.ML
http://creativecommons.org/licenses/by/4.0/
Dose-response prediction in cancer is an active application field in machine learning. Using large libraries of \textit{in-vitro} drug sensitivity screens, the goal is to develop accurate predictive models that can be used to guide experimental design or inform treatment decisions. Building on previous work that makes use of permutation invariant multi-output Gaussian Processes in the context of dose-response prediction for drug combinations, we develop a variational approximation to these models. The variational approximation enables a more scalable model that provides uncertainty quantification and naturally handles missing data. Furthermore, we propose using a deep generative model to encode the chemical space in a continuous manner, enabling prediction for new drugs and new combinations. We demonstrate the performance of our model in a simple setting using a high-throughput dataset and show that the model is able to efficiently borrow information across outputs.
[ { "created": "Fri, 28 Jun 2024 18:28:38 GMT", "version": "v1" } ]
2024-07-02
[ [ "Rønneberg", "Leiv", "" ], [ "Lalchand", "Vidhi", "" ], [ "Kirk", "Paul D. W.", "" ] ]
Dose-response prediction in cancer is an active application field in machine learning. Using large libraries of \textit{in-vitro} drug sensitivity screens, the goal is to develop accurate predictive models that can be used to guide experimental design or inform treatment decisions. Building on previous work that makes use of permutation invariant multi-output Gaussian Processes in the context of dose-response prediction for drug combinations, we develop a variational approximation to these models. The variational approximation enables a more scalable model that provides uncertainty quantification and naturally handles missing data. Furthermore, we propose using a deep generative model to encode the chemical space in a continuous manner, enabling prediction for new drugs and new combinations. We demonstrate the performance of our model in a simple setting using a high-throughput dataset and show that the model is able to efficiently borrow information across outputs.
1309.6015
Heiko Enderling
Jan Poleszczuk, Heiko Enderling
A High-Performance Cellular Automaton Model of Tumor Growth with Dynamically Growing Domains
8 pages, 8 figures
null
null
null
q-bio.QM q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tumor growth from a single transformed cancer cell up to a clinically apparent mass spans many spatial and temporal orders of magnitude. Implementation of cellular automata simulations of such tumor growth can be straightforward but computing performance often counterbalances simplicity. Computationally convenient simulation times can be achieved by choosing appropriate data structures, memory and cell handling as well as domain setup. We propose a cellular automaton model of tumor growth with a domain that expands dynamically as the tumor population increases. We discuss memory access, data structures and implementation techniques that yield high-performance multi-scale Monte Carlo simulations of tumor growth. We present simulation results of the tumor growth model and discuss tumor properties that favor the proposed high-performance design.
[ { "created": "Tue, 24 Sep 2013 00:42:47 GMT", "version": "v1" } ]
2013-09-25
[ [ "Poleszczuk", "Jan", "" ], [ "Enderling", "Heiko", "" ] ]
Tumor growth from a single transformed cancer cell up to a clinically apparent mass spans many spatial and temporal orders of magnitude. Implementation of cellular automata simulations of such tumor growth can be straightforward but computing performance often counterbalances simplicity. Computationally convenient simulation times can be achieved by choosing appropriate data structures, memory and cell handling as well as domain setup. We propose a cellular automaton model of tumor growth with a domain that expands dynamically as the tumor population increases. We discuss memory access, data structures and implementation techniques that yield high-performance multi-scale Monte Carlo simulations of tumor growth. We present simulation results of the tumor growth model and discuss tumor properties that favor the proposed high-performance design.
q-bio/0503038
Hiroki Ueda M. D. . D.
Hiroki R. Ueda, John B. Hogenesch
Principles in the Evolution of Metabolic Networks
37 pages(15 pages for main text, 18 pages for supplementary information, 4 figures); 5 Supplementary Figures are omitted from this submission because of file size limitation (<1MB). This work was presented on March 15th 2004, at the closed meeting with Akutsu lab and Kanehisa lab in Institute for Chemical Research, Kyoto University
null
null
null
q-bio.MN q-bio.GN
null
Understanding design principles of complex cellular organization is one of the major challenges in biology. Recent analysis of the large-scale cellular organization has revealed the scale-free nature and robustness of metabolic and protein networks. However, the underlying evolutional process that creates such a cellular organization is not fully elucidated. To approach this problem, we analyzed the metabolic networks of 126 organisms, whose draft or complete genome sequences have been published. This analysis has revealed that the evolutional process of metabolic networks follows the same and surprisingly simple principles in Archaea, Bacteria and Eukaryotes; where highly linked metabolites change their chemical links more dynamically than less linked metabolites. Here we demonstrate that this rich-travel-more mechanism rather than the previously proposed rich-get-richer mechanism can generate the observed scale-free organization of metabolic networks. These findings illustrate universal principles in evolution of metabolic networks and suggest marked flexibility of metabolic network throughout evolution.
[ { "created": "Mon, 28 Mar 2005 16:31:20 GMT", "version": "v1" } ]
2007-05-23
[ [ "Ueda", "Hiroki R.", "" ], [ "Hogenesch", "John B.", "" ] ]
Understanding design principles of complex cellular organization is one of the major challenges in biology. Recent analysis of the large-scale cellular organization has revealed the scale-free nature and robustness of metabolic and protein networks. However, the underlying evolutional process that creates such a cellular organization is not fully elucidated. To approach this problem, we analyzed the metabolic networks of 126 organisms, whose draft or complete genome sequences have been published. This analysis has revealed that the evolutional process of metabolic networks follows the same and surprisingly simple principles in Archaea, Bacteria and Eukaryotes; where highly linked metabolites change their chemical links more dynamically than less linked metabolites. Here we demonstrate that this rich-travel-more mechanism rather than the previously proposed rich-get-richer mechanism can generate the observed scale-free organization of metabolic networks. These findings illustrate universal principles in evolution of metabolic networks and suggest marked flexibility of metabolic network throughout evolution.
0912.5179
Mauro Mobilia
Mauro Mobilia
Oscillatory Dynamics in Rock-Paper-Scissors Games with Mutations
25 pages, 9 figures. To appear in the Journal of Theoretical Biology
J. Theor. Biol. 264, 1-10 (2010)
10.1016/j.jtbi.2010.01.008
null
q-bio.PE cond-mat.stat-mech nlin.AO physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the oscillatory dynamics in the generic three-species rock-paper-scissors games with mutations. In the mean-field limit, different behaviors are found: (a) for high mutation rate, there is a stable interior fixed point with coexistence of all species; (b) for low mutation rates, there is a region of the parameter space characterized by a limit cycle resulting from a Hopf bifurcation; (c) in the absence of mutations, there is a region where heteroclinic cycles yield oscillations of large amplitude (not robust against noise). After a discussion on the main properties of the mean-field dynamics, we investigate the stochastic version of the model within an individual-based formulation. Demographic fluctuations are therefore naturally accounted and their effects are studied using a diffusion theory complemented by numerical simulations. It is thus shown that persistent erratic oscillations (quasi-cycles) of large amplitude emerge from a noise-induced resonance phenomenon. We also analytically and numerically compute the average escape time necessary to reach a (quasi-)cycle on which the system oscillates at a given amplitude.
[ { "created": "Mon, 28 Dec 2009 15:07:55 GMT", "version": "v1" }, { "created": "Tue, 26 Jan 2010 14:00:03 GMT", "version": "v2" } ]
2010-03-17
[ [ "Mobilia", "Mauro", "" ] ]
We study the oscillatory dynamics in the generic three-species rock-paper-scissors games with mutations. In the mean-field limit, different behaviors are found: (a) for high mutation rate, there is a stable interior fixed point with coexistence of all species; (b) for low mutation rates, there is a region of the parameter space characterized by a limit cycle resulting from a Hopf bifurcation; (c) in the absence of mutations, there is a region where heteroclinic cycles yield oscillations of large amplitude (not robust against noise). After a discussion on the main properties of the mean-field dynamics, we investigate the stochastic version of the model within an individual-based formulation. Demographic fluctuations are therefore naturally accounted and their effects are studied using a diffusion theory complemented by numerical simulations. It is thus shown that persistent erratic oscillations (quasi-cycles) of large amplitude emerge from a noise-induced resonance phenomenon. We also analytically and numerically compute the average escape time necessary to reach a (quasi-)cycle on which the system oscillates at a given amplitude.
2405.02853
Jiabao Ren
Chen Zhenzhen (1,2), Ren Jiabao (1,2), Duan Tingyu (3), Chen Ke (4), Hou Ruyi (5), Li Yimiao (5), Zeng Leixiao (5), Meng Xiaoxuan (6), Wu Yibo (7), Liu Yu (2), ((1) College of Science, Minzu University of China, Beijing, China, (2) School of Nursing, China Medical University, Shenyang, Liaoning Province, China, (3) Hebei Institute of Communications, Hebei, China, (4) Department of Social Science and Humanities, Harbin Medical University, Harbin, Heilongjiang Province, China, (5) School of Journalism and Communication, Renmin University of China, Beijing, China, (6) Tianjin Medical University, Tianjin, China, (7) School of Public Health, Peking University, Beijing, China)
Development and validation of a short form of the medication literacy scale for Chinese College Students
25 pages, 3 figures,3 tables
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-sa/4.0/
Medication literacy is integral to health literacy, pivotal for medication safety and adherence. It denotes an individual's capacity to discern, comprehend, and convey medication-related information. Existing scales, however, are time-consuming and predominantly cater to patients and community dwellers, necessitating a more succinct instrument. This study presents the development of a brief Medication Literacy Scale (MLS-14) utilizing classical test theory (CTT) and item response theory (IRT), targeting a college student demographic. The MLS-14's abbreviated version, a 6-item scale (MLS-SF), was distilled through CTT and IRT methodologies, engaging 2431 Chinese college students to scrutinize its psychometric properties. The MLS-SF demonstrated a Cronbach's {\alpha} of 0.765, with three extracted factors via exploratory factor analysis, accounting for 66% of the cumulative variance. All items exhibited factor loadings above 0.5. The scale's three-factor structure was substantiated through confirmatory factor analysis with satisfactory fit indices (chi2/df=5.11, RMSEA=0.063, GFI=0.990, AGFI=0.966, NFI=0.984, IFI=0.987, CFI=0.987). IRT modeling confirmed reasonable discrimination and location parameters for all items, free of differential item functioning (DIF) by gender. Except for items 4 and 10, the remaining items were informative at medium theta levels, indicating their utility in assessing medication literacy efficiently. The developed 6-item Medication Literacy Short Form (MLS-SF) proves to be a reliable and valid instrument for the expedited evaluation of college students' medication literacy, offering a valuable addition to the arsenal of health literacy assessment tools.
[ { "created": "Sun, 5 May 2024 08:56:54 GMT", "version": "v1" } ]
2024-05-07
[ [ "Zhenzhen", "Chen", "" ], [ "Jiabao", "Ren", "" ], [ "Tingyu", "Duan", "" ], [ "Ke", "Chen", "" ], [ "Ruyi", "Hou", "" ], [ "Yimiao", "Li", "" ], [ "Leixiao", "Zeng", "" ], [ "Xiaoxuan", "Meng", "" ], [ "Yibo", "Wu", "" ], [ "Yu", "Liu", "" ] ]
Medication literacy is integral to health literacy, pivotal for medication safety and adherence. It denotes an individual's capacity to discern, comprehend, and convey medication-related information. Existing scales, however, are time-consuming and predominantly cater to patients and community dwellers, necessitating a more succinct instrument. This study presents the development of a brief Medication Literacy Scale (MLS-14) utilizing classical test theory (CTT) and item response theory (IRT), targeting a college student demographic. The MLS-14's abbreviated version, a 6-item scale (MLS-SF), was distilled through CTT and IRT methodologies, engaging 2431 Chinese college students to scrutinize its psychometric properties. The MLS-SF demonstrated a Cronbach's {\alpha} of 0.765, with three extracted factors via exploratory factor analysis, accounting for 66% of the cumulative variance. All items exhibited factor loadings above 0.5. The scale's three-factor structure was substantiated through confirmatory factor analysis with satisfactory fit indices (chi2/df=5.11, RMSEA=0.063, GFI=0.990, AGFI=0.966, NFI=0.984, IFI=0.987, CFI=0.987). IRT modeling confirmed reasonable discrimination and location parameters for all items, free of differential item functioning (DIF) by gender. Except for items 4 and 10, the remaining items were informative at medium theta levels, indicating their utility in assessing medication literacy efficiently. The developed 6-item Medication Literacy Short Form (MLS-SF) proves to be a reliable and valid instrument for the expedited evaluation of college students' medication literacy, offering a valuable addition to the arsenal of health literacy assessment tools.
1808.00204
Heather Etchevers
Heather Etchevers (MMG, GMGF), Elisabeth Dupin, Nicole Le Douarin
The importance and impact of discoveries about neural crest fates
null
Development (Cambridge, England), Company of Biologists, In press
null
null
q-bio.CB q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We review here some of the historical highlights in exploratory studies of the vertebrate embryonic structure known as the neural crest. The study of the molecular properties of the cells that it produces, their migratory capacities and plasticity, and the still-growing list of tissues that depend on their presence for form and function, continue to enrich our understanding of congenital malformations, pediatric cancers but also of evolutionary biology.
[ { "created": "Wed, 1 Aug 2018 07:29:25 GMT", "version": "v1" } ]
2018-08-02
[ [ "Etchevers", "Heather", "", "MMG, GMGF" ], [ "Dupin", "Elisabeth", "" ], [ "Douarin", "Nicole Le", "" ] ]
We review here some of the historical highlights in exploratory studies of the vertebrate embryonic structure known as the neural crest. The study of the molecular properties of the cells that it produces, their migratory capacities and plasticity, and the still-growing list of tissues that depend on their presence for form and function, continue to enrich our understanding of congenital malformations, pediatric cancers but also of evolutionary biology.
1902.01267
Dafydd Gibbon
Dafydd Gibbon and Xuewei Lin
Rhythm Zone Theory: Speech Rhythms are Physical after all
15 pages, 9 figures, submitted
null
null
null
q-bio.NC cs.CL cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Speech rhythms have been dealt with in three main ways: from the introspective analyses of rhythm as a correlate of syllable and foot timing in linguistics and applied linguistics, through analyses of durations of segments of utterances associated with consonantal and vocalic properties, syllables, feet and words, to models of rhythms in speech production and perception as physical oscillations. The present study avoids introspection and human-filtered annotation methods and extends the signal processing paradigm of amplitude envelope spectrum analysis by adding an additional analytic step of edge detection, and postulating the co-existence of multiple speech rhythms in rhythm zones marked by identifiable edges (Rhythm Zone Theory, RZT). An exploratory investigation of the utility of RZT is conducted, suggesting that native and non-native readings of the same text are distinct sub-genres of read speech: a reading by a US native speaker and non-native readings by relatively low-performing Cantonese adult learners of English. The study concludes by noting that with the methods used, RZT can distinguish between the speech rhythms of well-defined sub-genres of native speaker reading vs. non-native learner reading, but needs further refinement in order to be applied to the paradoxically more complex speech of low-performing language learners, whose speech rhythms are co-determined by non-fluency and disfluency factors in addition to well-known linguistic factors of grammar, vocabulary and discourse constraints.
[ { "created": "Thu, 31 Jan 2019 20:49:17 GMT", "version": "v1" }, { "created": "Tue, 12 Mar 2019 19:01:22 GMT", "version": "v2" } ]
2019-03-14
[ [ "Gibbon", "Dafydd", "" ], [ "Lin", "Xuewei", "" ] ]
Speech rhythms have been dealt with in three main ways: from the introspective analyses of rhythm as a correlate of syllable and foot timing in linguistics and applied linguistics, through analyses of durations of segments of utterances associated with consonantal and vocalic properties, syllables, feet and words, to models of rhythms in speech production and perception as physical oscillations. The present study avoids introspection and human-filtered annotation methods and extends the signal processing paradigm of amplitude envelope spectrum analysis by adding an additional analytic step of edge detection, and postulating the co-existence of multiple speech rhythms in rhythm zones marked by identifiable edges (Rhythm Zone Theory, RZT). An exploratory investigation of the utility of RZT is conducted, suggesting that native and non-native readings of the same text are distinct sub-genres of read speech: a reading by a US native speaker and non-native readings by relatively low-performing Cantonese adult learners of English. The study concludes by noting that with the methods used, RZT can distinguish between the speech rhythms of well-defined sub-genres of native speaker reading vs. non-native learner reading, but needs further refinement in order to be applied to the paradoxically more complex speech of low-performing language learners, whose speech rhythms are co-determined by non-fluency and disfluency factors in addition to well-known linguistic factors of grammar, vocabulary and discourse constraints.
1711.06865
Yue Ren
Yue Ren and Johannes W. R. Martini and Jacinta Torres
Decoupled molecules with binding polynomials of bidegree (n,2)
18 pages, 8 figures
Journal of Mathematical Biology (2019) https://doi.org/10.1007/s00285-018-1295-x
null
null
q-bio.BM cs.SC math.AG physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a result on the number of decoupled molecules for systems binding two different types of ligands. In the case of $n$ and $2$ binding sites respectively, we show that, generically, there are $2(n!)^{2}$ decoupled molecules with the same binding polynomial. For molecules with more binding sites for the second ligand, we provide computational results.
[ { "created": "Sat, 18 Nov 2017 14:02:59 GMT", "version": "v1" } ]
2020-02-12
[ [ "Ren", "Yue", "" ], [ "Martini", "Johannes W. R.", "" ], [ "Torres", "Jacinta", "" ] ]
We present a result on the number of decoupled molecules for systems binding two different types of ligands. In the case of $n$ and $2$ binding sites respectively, we show that, generically, there are $2(n!)^{2}$ decoupled molecules with the same binding polynomial. For molecules with more binding sites for the second ligand, we provide computational results.
1510.03351
Martin Weigt
Eleonora De Leonardis, Benjamin Lutz, Sebastian Ratz, Simona Cocco, Remi Monasson, Alexander Schug, Martin Weigt
Direct-Coupling Analysis of nucleotide coevolution facilitates RNA secondary and tertiary structure prediction
22 pages, 8 figures, supplemental information available on the publisher's webpage (http://nar.oxfordjournals.org/content/early/2015/09/29/nar.gkv932.abstract)
Nucl. Acids Res. (2015) doi: 10.1093/nar/gkv932, First published online: September 29, 2015
10.1093/nar/gkv932
null
q-bio.BM physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Despite the biological importance of non-coding RNA, their structural characterization remains challenging. Making use of the rapidly growing sequence databases, we analyze nucleotide coevolution across homologous sequences via Direct-Coupling Analysis to detect nucleotide-nucleotide contacts. For a representative set of riboswitches, we show that the results of Direct-Coupling Analysis in combination with a generalized Nussinov algorithm systematically improve the results of RNA secondary structure prediction beyond traditional covariance approaches based on mutual information. Even more importantly, we show that the results of Direct-Coupling Analysis are enriched in tertiary structure contacts. By integrating these predictions into molecular modeling tools, systematically improved tertiary structure predictions can be obtained, as compared to using secondary structure information alone.
[ { "created": "Mon, 12 Oct 2015 16:17:04 GMT", "version": "v1" } ]
2015-10-13
[ [ "De Leonardis", "Eleonora", "" ], [ "Lutz", "Benjamin", "" ], [ "Ratz", "Sebastian", "" ], [ "Cocco", "Simona", "" ], [ "Monasson", "Remi", "" ], [ "Schug", "Alexander", "" ], [ "Weigt", "Martin", "" ] ]
Despite the biological importance of non-coding RNA, their structural characterization remains challenging. Making use of the rapidly growing sequence databases, we analyze nucleotide coevolution across homologous sequences via Direct-Coupling Analysis to detect nucleotide-nucleotide contacts. For a representative set of riboswitches, we show that the results of Direct-Coupling Analysis in combination with a generalized Nussinov algorithm systematically improve the results of RNA secondary structure prediction beyond traditional covariance approaches based on mutual information. Even more importantly, we show that the results of Direct-Coupling Analysis are enriched in tertiary structure contacts. By integrating these predictions into molecular modeling tools, systematically improved tertiary structure predictions can be obtained, as compared to using secondary structure information alone.
1006.0079
Denis Boyer
Denis Boyer and Peter D. Walsh
Modeling the mobility of living organisms in heterogeneous landscapes: Does memory improve foraging success?
14 pages, 4 figures, improved discussion
null
10.1098/rsta.2010.0275
null
q-bio.PE cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thanks to recent technological advances, it is now possible to track with an unprecedented precision and for long periods of time the movement patterns of many living organisms in their habitat. The increasing amount of data available on single trajectories offers the possibility of understanding how animals move and of testing basic movement models. Random walks have long represented the main description for micro-organisms and have also been useful to understand the foraging behaviour of large animals. Nevertheless, most vertebrates, in particular humans and other primates, rely on sophisticated cognitive tools such as spatial maps, episodic memory and travel cost discounting. These properties call for other modeling approaches of mobility patterns. We propose a foraging framework where a learning mobile agent uses a combination of memory-based and random steps. We investigate how advantageous it is to use memory for exploiting resources in heterogeneous and changing environments. An adequate balance of determinism and random exploration is found to maximize the foraging efficiency and to generate trajectories with an intricate spatio-temporal order. Based on this approach, we propose some tools for analysing the non-random nature of mobility patterns in general.
[ { "created": "Tue, 1 Jun 2010 08:24:26 GMT", "version": "v1" }, { "created": "Tue, 12 Oct 2010 18:21:38 GMT", "version": "v2" } ]
2015-05-19
[ [ "Boyer", "Denis", "" ], [ "Walsh", "Peter D.", "" ] ]
Thanks to recent technological advances, it is now possible to track with an unprecedented precision and for long periods of time the movement patterns of many living organisms in their habitat. The increasing amount of data available on single trajectories offers the possibility of understanding how animals move and of testing basic movement models. Random walks have long represented the main description for micro-organisms and have also been useful to understand the foraging behaviour of large animals. Nevertheless, most vertebrates, in particular humans and other primates, rely on sophisticated cognitive tools such as spatial maps, episodic memory and travel cost discounting. These properties call for other modeling approaches of mobility patterns. We propose a foraging framework where a learning mobile agent uses a combination of memory-based and random steps. We investigate how advantageous it is to use memory for exploiting resources in heterogeneous and changing environments. An adequate balance of determinism and random exploration is found to maximize the foraging efficiency and to generate trajectories with an intricate spatio-temporal order. Based on this approach, we propose some tools for analysing the non-random nature of mobility patterns in general.
1912.05625
Seonwoo Min
Seonwoo Min, Seunghyun Park, Siwon Kim, Hyun-Soo Choi, Byunghan Lee, Sungroh Yoon
Pre-Training of Deep Bidirectional Protein Sequence Representations with Structural Information
Published in IEEE Access 2021 (https://ieeexplore.ieee.org/document/9529198)
null
null
null
q-bio.BM cs.LG q-bio.GN stat.ML
http://creativecommons.org/licenses/by-nc-nd/4.0/
Bridging the exponentially growing gap between the numbers of unlabeled and labeled protein sequences, several studies adopted semi-supervised learning for protein sequence modeling. In these studies, models were pre-trained with a substantial amount of unlabeled data, and the representations were transferred to various downstream tasks. Most pre-training methods solely rely on language modeling and often exhibit limited performance. In this paper, we introduce a novel pre-training scheme called PLUS, which stands for Protein sequence representations Learned Using Structural information. PLUS consists of masked language modeling and a complementary protein-specific pre-training task, namely same-family prediction. PLUS can be used to pre-train various model architectures. In this work, we use PLUS to pre-train a bidirectional recurrent neural network and refer to the resulting model as PLUS-RNN. Our experiment results demonstrate that PLUS-RNN outperforms other models of similar size solely pre-trained with language modeling in six out of seven widely used protein biology tasks. Furthermore, we present the results from our qualitative interpretation analyses to illustrate the strengths of PLUS-RNN. PLUS provides a novel way to exploit evolutionary relationships among unlabeled proteins and is broadly applicable across a variety of protein biology tasks. We expect that the gap between the numbers of unlabeled and labeled proteins will continue to grow exponentially, and the proposed pre-training method will play a larger role.
[ { "created": "Mon, 25 Nov 2019 10:12:10 GMT", "version": "v1" }, { "created": "Mon, 3 Feb 2020 09:06:30 GMT", "version": "v2" }, { "created": "Sat, 25 Apr 2020 03:58:33 GMT", "version": "v3" }, { "created": "Thu, 16 Sep 2021 23:13:47 GMT", "version": "v4" } ]
2021-09-20
[ [ "Min", "Seonwoo", "" ], [ "Park", "Seunghyun", "" ], [ "Kim", "Siwon", "" ], [ "Choi", "Hyun-Soo", "" ], [ "Lee", "Byunghan", "" ], [ "Yoon", "Sungroh", "" ] ]
Bridging the exponentially growing gap between the numbers of unlabeled and labeled protein sequences, several studies adopted semi-supervised learning for protein sequence modeling. In these studies, models were pre-trained with a substantial amount of unlabeled data, and the representations were transferred to various downstream tasks. Most pre-training methods solely rely on language modeling and often exhibit limited performance. In this paper, we introduce a novel pre-training scheme called PLUS, which stands for Protein sequence representations Learned Using Structural information. PLUS consists of masked language modeling and a complementary protein-specific pre-training task, namely same-family prediction. PLUS can be used to pre-train various model architectures. In this work, we use PLUS to pre-train a bidirectional recurrent neural network and refer to the resulting model as PLUS-RNN. Our experiment results demonstrate that PLUS-RNN outperforms other models of similar size solely pre-trained with language modeling in six out of seven widely used protein biology tasks. Furthermore, we present the results from our qualitative interpretation analyses to illustrate the strengths of PLUS-RNN. PLUS provides a novel way to exploit evolutionary relationships among unlabeled proteins and is broadly applicable across a variety of protein biology tasks. We expect that the gap between the numbers of unlabeled and labeled proteins will continue to grow exponentially, and the proposed pre-training method will play a larger role.
1904.11136
Youshan Zhang
Youshan Zhang
Corticospinal Tract (CST) reconstruction based on fiber orientation distributions(FODs) tractography
null
2018 IEEE 18th International Conference on Bioinformatics and Bioengineering (BIBE), Taichung, 2018, pp. 305-310
10.1109/BIBE.2018.00066
null
q-bio.NC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Corticospinal Tract (CST) is a part of the pyramidal tract (PT), and it can innervate the voluntary movement of skeletal muscle through spinal interneurons (the 4th layer of the Rexed gray matter layers), and anterior horn motor neurons (which control trunk and proximal limb muscles). Spinal cord injury (SCI) is a highly disabling disease often caused by traffic accidents. The recovery of the CST and the functional reconstruction of spinal anterior horn motor neurons play an essential role in the treatment of SCI. However, the localization and reconstruction of the CST are still challenging issues; the accuracy of the geometric reconstruction can directly affect the results of the surgery. The main contribution of this paper is the reconstruction of the CST based on fiber orientation distributions (FODs) tractography. Differing from tensor-based tractography, in which the primary direction is a determined orientation, the direction in FOD-based tractography is determined probabilistically. Spherical harmonics (SPHARM) can be used to approximate the efficiency of FOD-based tractography. We manually delineate the three ROIs (the posterior limb of the internal capsule, the cerebral peduncle, and the anterior pontine area) using the ITK-SNAP software, and use the pipeline software to reconstruct both the left and right sides of the CST fibers. Our results demonstrate that FOD-based tractography can reveal more numerous and anatomically correct CST fiber bundles.
[ { "created": "Tue, 23 Apr 2019 16:19:06 GMT", "version": "v1" } ]
2019-04-26
[ [ "Zhang", "Youshan", "" ] ]
The Corticospinal Tract (CST) is a part of the pyramidal tract (PT), and it can innervate the voluntary movement of skeletal muscle through spinal interneurons (the 4th layer of the Rexed gray matter layers), and anterior horn motor neurons (which control trunk and proximal limb muscles). Spinal cord injury (SCI) is a highly disabling disease often caused by traffic accidents. The recovery of the CST and the functional reconstruction of spinal anterior horn motor neurons play an essential role in the treatment of SCI. However, the localization and reconstruction of the CST are still challenging issues; the accuracy of the geometric reconstruction can directly affect the results of the surgery. The main contribution of this paper is the reconstruction of the CST based on fiber orientation distributions (FODs) tractography. Differing from tensor-based tractography, in which the primary direction is a determined orientation, the direction in FOD-based tractography is determined probabilistically. Spherical harmonics (SPHARM) can be used to approximate the efficiency of FOD-based tractography. We manually delineate the three ROIs (the posterior limb of the internal capsule, the cerebral peduncle, and the anterior pontine area) using the ITK-SNAP software, and use the pipeline software to reconstruct both the left and right sides of the CST fibers. Our results demonstrate that FOD-based tractography can reveal more numerous and anatomically correct CST fiber bundles.
2301.00148
Zixiang Luo
Zixiang Luo, Kaining Peng, Zhichao Liang, Shengyuan Cai, Chenyu Xu, Dan Li, Yu Hu, Changsong Zhou, Quanying Liu
Mapping effective connectivity by virtually perturbing a surrogate brain
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Effective connectivity (EC), indicative of the causal interactions between brain regions, is fundamental to understanding information processing in the brain. Traditional approaches, which infer EC from neural responses to stimulations, are not suited for mapping whole-brain EC in humans due to their invasiveness and the limited spatial coverage of stimulations. To address this gap, we present Neural Perturbational Inference (NPI), a data-driven framework designed to map EC across the entire brain. NPI employs an artificial neural network trained to learn large-scale neural dynamics as a computational surrogate of the brain. NPI maps EC by perturbing each region of the surrogate brain and observing the resulting responses in the remaining regions. NPI captures the directionality, strength, and excitatory/inhibitory properties of EC on a brain-wide scale. Our validation of NPI, using models with established EC, shows its superiority over Granger Causality and Dynamic Causal Modeling. Applying NPI to resting-state fMRI data from diverse datasets reveals consistent and structurally supported EC. Applications on a disease-specific dataset highlight the potential of using personalized EC as biomarkers for neurological diseases. By transitioning from correlational to causal understandings of brain functionality, NPI marks a stride in decoding the brain's functional architecture and can facilitate neuroscience research and clinical applications.
[ { "created": "Sat, 31 Dec 2022 08:09:13 GMT", "version": "v1" }, { "created": "Tue, 21 Mar 2023 08:38:51 GMT", "version": "v2" }, { "created": "Thu, 14 Mar 2024 13:58:44 GMT", "version": "v3" } ]
2024-03-15
[ [ "Luo", "Zixiang", "" ], [ "Peng", "Kaining", "" ], [ "Liang", "Zhichao", "" ], [ "Cai", "Shengyuan", "" ], [ "Xu", "Chenyu", "" ], [ "Li", "Dan", "" ], [ "Hu", "Yu", "" ], [ "Zhou", "Changsong", "" ], [ "Liu", "Quanying", "" ] ]
Effective connectivity (EC), indicative of the causal interactions between brain regions, is fundamental to understanding information processing in the brain. Traditional approaches, which infer EC from neural responses to stimulations, are not suited for mapping whole-brain EC in humans due to their invasiveness and the limited spatial coverage of stimulations. To address this gap, we present Neural Perturbational Inference (NPI), a data-driven framework designed to map EC across the entire brain. NPI employs an artificial neural network trained to learn large-scale neural dynamics as a computational surrogate of the brain. NPI maps EC by perturbing each region of the surrogate brain and observing the resulting responses in the remaining regions. NPI captures the directionality, strength, and excitatory/inhibitory properties of EC on a brain-wide scale. Our validation of NPI, using models with established EC, shows its superiority over Granger Causality and Dynamic Causal Modeling. Applying NPI to resting-state fMRI data from diverse datasets reveals consistent and structurally supported EC. Applications on a disease-specific dataset highlight the potential of using personalized EC as biomarkers for neurological diseases. By transitioning from correlational to causal understandings of brain functionality, NPI marks a stride in decoding the brain's functional architecture and can facilitate neuroscience research and clinical applications.
q-bio/0605036
Nigel Goldenfeld
Kalin Vetsigian, Carl Woese and Nigel Goldenfeld (University of Illinois at Urbana-Champaign)
Collective evolution and the genetic code
null
null
10.1073/pnas.0603780103
null
q-bio.PE nlin.AO
null
A dynamical theory for the evolution of the genetic code is presented, which accounts for its universality and optimality. The central concept is that a variety of collective, but non-Darwinian, mechanisms likely to be present in early communal life generically lead to refinement and selection of innovation-sharing protocols, such as the genetic code. Our proposal is illustrated using a simplified computer model, and placed within the context of a sequence of transitions that early life may have made, prior to the emergence of vertical descent.
[ { "created": "Mon, 22 May 2006 16:52:27 GMT", "version": "v1" } ]
2009-11-13
[ [ "Vetsigian", "Kalin", "", "University of Illinois at Urbana-Champaign" ], [ "Woese", "Carl", "", "University of Illinois at Urbana-Champaign" ], [ "Goldenfeld", "Nigel", "", "University of Illinois at Urbana-Champaign" ] ]
A dynamical theory for the evolution of the genetic code is presented, which accounts for its universality and optimality. The central concept is that a variety of collective, but non-Darwinian, mechanisms likely to be present in early communal life generically lead to refinement and selection of innovation-sharing protocols, such as the genetic code. Our proposal is illustrated using a simplified computer model, and placed within the context of a sequence of transitions that early life may have made, prior to the emergence of vertical descent.
1612.00644
Peter Zeidman
Peter Zeidman, Edward Harry Silson, Dietrich Samuel Schwarzkopf, Chris Ian Baker, Will Penny
Bayesian Population Receptive Field Modelling
30 pages, 10 figures. Code available at https://github.com/pzeidman/BayespRF
null
10.1016/j.neuroimage.2017.09.008
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a probabilistic (Bayesian) framework and associated software toolbox for mapping population receptive fields (pRFs) based on fMRI data. This generic approach is intended to work with stimuli of any dimension and is demonstrated and validated in the context of 2D retinotopic mapping. The framework enables the experimenter to specify generative (encoding) models of fMRI timeseries, in which experimental manipulations enter a pRF model of neural activity, which in turn drives a nonlinear model of neurovascular coupling and Blood Oxygenation Level Dependent (BOLD) response. The neuronal and haemodynamic parameters are estimated together on a voxel-by-voxel or region-of-interest basis using a Bayesian estimation algorithm (variational Laplace). This offers several novel contributions to receptive field modelling. The variance / covariance of parameters are estimated, enabling receptive fields to be plotted while properly representing uncertainty about pRF size and location. Variability in the haemodynamic response across the brain is accounted for. Furthermore, the framework introduces formal hypothesis testing to pRF analysis, enabling competing models to be evaluated based on their model evidence (approximated by the variational free energy), which represents the optimal tradeoff between accuracy and complexity. Using simulations and empirical data, we found that parameters typically used to represent pRF size and neuronal scaling are strongly correlated, which should be taken into account when making inferences. We used the framework to compare the evidence for six variants of pRF model using 7T functional MRI data and we found a circular Difference of Gaussians (DoG) model to be the best explanation for our data overall. We hope this framework will prove useful for mapping stimulus spaces with any number of dimensions onto the anatomy of the brain.
[ { "created": "Fri, 2 Dec 2016 11:48:17 GMT", "version": "v1" } ]
2018-05-21
[ [ "Zeidman", "Peter", "" ], [ "Silson", "Edward Harry", "" ], [ "Schwarzkopf", "Dietrich Samuel", "" ], [ "Baker", "Chris Ian", "" ], [ "Penny", "Will", "" ] ]
We introduce a probabilistic (Bayesian) framework and associated software toolbox for mapping population receptive fields (pRFs) based on fMRI data. This generic approach is intended to work with stimuli of any dimension and is demonstrated and validated in the context of 2D retinotopic mapping. The framework enables the experimenter to specify generative (encoding) models of fMRI timeseries, in which experimental manipulations enter a pRF model of neural activity, which in turn drives a nonlinear model of neurovascular coupling and Blood Oxygenation Level Dependent (BOLD) response. The neuronal and haemodynamic parameters are estimated together on a voxel-by-voxel or region-of-interest basis using a Bayesian estimation algorithm (variational Laplace). This offers several novel contributions to receptive field modelling. The variance / covariance of parameters are estimated, enabling receptive fields to be plotted while properly representing uncertainty about pRF size and location. Variability in the haemodynamic response across the brain is accounted for. Furthermore, the framework introduces formal hypothesis testing to pRF analysis, enabling competing models to be evaluated based on their model evidence (approximated by the variational free energy), which represents the optimal tradeoff between accuracy and complexity. Using simulations and empirical data, we found that parameters typically used to represent pRF size and neuronal scaling are strongly correlated, which should be taken into account when making inferences. We used the framework to compare the evidence for six variants of pRF model using 7T functional MRI data and we found a circular Difference of Gaussians (DoG) model to be the best explanation for our data overall. We hope this framework will prove useful for mapping stimulus spaces with any number of dimensions onto the anatomy of the brain.
1803.11274
Matteo Manica
Matteo Manica, Joris Cadow, Roland Mathis and Mar\'ia Rodr\'iguez Mart\'inez
PIMKL: Pathway Induced Multiple Kernel Learning
null
npj Systems Biology and Applications (2019)
10.1038/s41540-019-0086-3
null
q-bio.MN stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reliable identification of molecular biomarkers is essential for accurate patient stratification. While state-of-the-art machine learning approaches for sample classification continue to push boundaries in terms of performance, most of these methods are not able to integrate different data types and lack generalization power, limiting their application in a clinical setting. Furthermore, many methods behave as black boxes, and we have very little understanding about the mechanisms that lead to the prediction. While opaqueness concerning machine behaviour might not be a problem in deterministic domains, in health care, providing explanations about the molecular factors and phenotypes that are driving the classification is crucial to build trust in the performance of the predictive system. We propose Pathway Induced Multiple Kernel Learning (PIMKL), a novel methodology to reliably classify samples that can also help gain insights into the molecular mechanisms that underlie the classification. PIMKL exploits prior knowledge in the form of a molecular interaction network and annotated gene sets, by optimizing a mixture of pathway-induced kernels using a Multiple Kernel Learning (MKL) algorithm, an approach that has demonstrated excellent performance in different machine learning applications. After optimizing the combination of kernels for prediction of a specific phenotype, the model provides a stable molecular signature that can be interpreted in the light of the ingested prior knowledge and that can be used in transfer learning tasks.
[ { "created": "Thu, 29 Mar 2018 22:28:51 GMT", "version": "v1" }, { "created": "Fri, 13 Apr 2018 13:20:51 GMT", "version": "v2" }, { "created": "Thu, 5 Jul 2018 14:29:15 GMT", "version": "v3" } ]
2019-11-07
[ [ "Manica", "Matteo", "" ], [ "Cadow", "Joris", "" ], [ "Mathis", "Roland", "" ], [ "Martínez", "María Rodríguez", "" ] ]
Reliable identification of molecular biomarkers is essential for accurate patient stratification. While state-of-the-art machine learning approaches for sample classification continue to push boundaries in terms of performance, most of these methods are not able to integrate different data types and lack generalization power, limiting their application in a clinical setting. Furthermore, many methods behave as black boxes, and we have very little understanding about the mechanisms that lead to the prediction. While opaqueness concerning machine behaviour might not be a problem in deterministic domains, in health care, providing explanations about the molecular factors and phenotypes that are driving the classification is crucial to build trust in the performance of the predictive system. We propose Pathway Induced Multiple Kernel Learning (PIMKL), a novel methodology to reliably classify samples that can also help gain insights into the molecular mechanisms that underlie the classification. PIMKL exploits prior knowledge in the form of a molecular interaction network and annotated gene sets, by optimizing a mixture of pathway-induced kernels using a Multiple Kernel Learning (MKL) algorithm, an approach that has demonstrated excellent performance in different machine learning applications. After optimizing the combination of kernels for prediction of a specific phenotype, the model provides a stable molecular signature that can be interpreted in the light of the ingested prior knowledge and that can be used in transfer learning tasks.
1306.2353
Wai Lim Ku
Wai Lim Ku, Michelle Girvan, Guo-Cheng Yuan, Francesco Sorrentino, Edward Ott
Modeling the dynamics of bivalent histone modifications
23 pages, 10 figures
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Epigenetic modifications to histones may promote either activation or repression of the transcription of nearby genes. Recent experimental studies show that the promoters of many lineage-control genes in stem cells have "bivalent domains" in which the nucleosomes contain both active (H3K4me3) and repressive (H3K27me3) marks. It is generally agreed that bivalent domains play an important role in stem cell differentiation, but the underlying mechanisms remain unclear. Here we formulate a mathematical model to investigate the dynamic properties of histone modification patterns. We then illustrate that our modeling framework can be used to capture key features of experimentally observed combinatorial chromatin states.
[ { "created": "Fri, 7 Jun 2013 15:52:11 GMT", "version": "v1" } ]
2013-06-12
[ [ "Ku", "Wai Lim", "" ], [ "Girvan", "Michelle", "" ], [ "Yuan", "Guo-Cheng", "" ], [ "Sorrentino", "Francesco", "" ], [ "Ott", "Edward", "" ] ]
Epigenetic modifications to histones may promote either activation or repression of the transcription of nearby genes. Recent experimental studies show that the promoters of many lineage-control genes in stem cells have "bivalent domains" in which the nucleosomes contain both active (H3K4me3) and repressive (H3K27me3) marks. It is generally agreed that bivalent domains play an important role in stem cell differentiation, but the underlying mechanisms remain unclear. Here we formulate a mathematical model to investigate the dynamic properties of histone modification patterns. We then illustrate that our modeling framework can be used to capture key features of experimentally observed combinatorial chromatin states.
1906.11398
James Hope Mr
James Hope, Zaid Aqrawe, Marshall Lim, Frederique Vanholsbeeck, Andrew McDaid
Increasing signal amplitude in electrical impedance tomography of neural activity using a parallel resistor inductor capacitor (RLC) circuit
18 pages, 14 figures, journal submission
null
10.1088/1741-2552/ab462b
null
q-bio.NC physics.ins-det
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objective: To increase the impedance signal amplitude produced during neural activity using a novel approach of implementing a parallel resistor inductor capacitor (RLC) circuit across the current source used in electrical impedance tomography (EIT) of peripheral nerve. Approach: Experiments were performed in vitro on sciatic nerve of Sprague-Dawley rats. Design of the RLC circuit was performed in electrical circuit modelling software, aided by in vitro impedance measurements on nerve and nerve cuff in the range 5 Hz to 50 kHz. Main results: The frequency range 17 +/- 1 kHz was selected for the RLC experiment. The RLC experiment was performed on four subjects using an RLC circuit designed to produce a resonant frequency of 17 kHz with a bandwidth of 3.6 kHz, and containing a 22 mH inductive element and a 3.45 nF capacitive element. With the RLC circuit connected, relative increases in the impedance signal (+/- 3sig noise) of 44 % (+/-15 %), 33 % (+/-30 %), 37 % (+/-8.6 %), and 16 % (+/-19 %) were produced. Significance: The increase in impedance signal amplitude at high frequencies, generated by the novel implementation of a parallel RLC circuit across the drive current, improves spatial resolution by increasing the number of parallel drive currents which can be implemented in a frequency division multiplexed (FDM) EIT system, and aids the long term goal of a real-time FDM EIT system by reducing the need for ensemble averaging.
[ { "created": "Thu, 27 Jun 2019 00:19:25 GMT", "version": "v1" } ]
2019-12-06
[ [ "Hope", "James", "" ], [ "Aqrawe", "Zaid", "" ], [ "Lim", "Marshall", "" ], [ "Vanholsbeeck", "Frederique", "" ], [ "McDaid", "Andrew", "" ] ]
Objective: To increase the impedance signal amplitude produced during neural activity using a novel approach of implementing a parallel resistor inductor capacitor (RLC) circuit across the current source used in electrical impedance tomography (EIT) of peripheral nerve. Approach: Experiments were performed in vitro on sciatic nerve of Sprague-Dawley rats. Design of the RLC circuit was performed in electrical circuit modelling software, aided by in vitro impedance measurements on nerve and nerve cuff in the range 5 Hz to 50 kHz. Main results: The frequency range 17 +/- 1 kHz was selected for the RLC experiment. The RLC experiment was performed on four subjects using an RLC circuit designed to produce a resonant frequency of 17 kHz with a bandwidth of 3.6 kHz, and containing a 22 mH inductive element and a 3.45 nF capacitive element. With the RLC circuit connected, relative increases in the impedance signal (+/- 3sig noise) of 44 % (+/-15 %), 33 % (+/-30 %), 37 % (+/-8.6 %), and 16 % (+/-19 %) were produced. Significance: The increase in impedance signal amplitude at high frequencies, generated by the novel implementation of a parallel RLC circuit across the drive current, improves spatial resolution by increasing the number of parallel drive currents which can be implemented in a frequency division multiplexed (FDM) EIT system, and aids the long term goal of a real-time FDM EIT system by reducing the need for ensemble averaging.
1301.1730
Pascal Grange
Pascal Grange, Michael Hawrylycz and Partha P. Mitra
Computational neuroanatomy and co-expression of genes in the adult mouse brain, analysis tools for the Allen Brain Atlas
25 pages, 8 figures, accepted in Quantitative Biology (2012) 0002
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We review quantitative methods and software developed to analyze genome-scale, brain-wide spatially-mapped gene-expression data. We expose new methods based on the underlying high-dimensional geometry of voxel space and gene space, and on simulations of the distribution of co-expression networks of a given size. We apply them to the Allen Atlas of the adult mouse brain, and to the co-expression network of a set of genes related to nicotine addiction retrieved from the NicSNP database. The computational methods are implemented in {\ttfamily{BrainGeneExpressionAnalysis}}, a Matlab toolbox available for download.
[ { "created": "Wed, 9 Jan 2013 01:03:10 GMT", "version": "v1" } ]
2013-01-10
[ [ "Grange", "Pascal", "" ], [ "Hawrylycz", "Michael", "" ], [ "Mitra", "Partha P.", "" ] ]
We review quantitative methods and software developed to analyze genome-scale, brain-wide spatially-mapped gene-expression data. We expose new methods based on the underlying high-dimensional geometry of voxel space and gene space, and on simulations of the distribution of co-expression networks of a given size. We apply them to the Allen Atlas of the adult mouse brain, and to the co-expression network of a set of genes related to nicotine addiction retrieved from the NicSNP database. The computational methods are implemented in BrainGeneExpressionAnalysis, a Matlab toolbox available for download.
2310.03269
Zeyuan Wang
Zeyuan Wang, Qiang Zhang, Keyan Ding, Ming Qin, Xiang Zhuang, Xiaotong Li, Huajun Chen
InstructProtein: Aligning Human and Protein Language via Knowledge Instruction
null
null
null
null
q-bio.BM cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) have revolutionized the field of natural language processing, but they fall short in comprehending biological sequences such as proteins. To address this challenge, we propose InstructProtein, an innovative LLM that possesses bidirectional generation capabilities in both human and protein languages: (i) taking a protein sequence as input to predict its textual function description and (ii) using natural language to prompt protein sequence generation. To achieve this, we first pre-train an LLM on both protein and natural language corpora, enabling it to comprehend individual languages. Then supervised instruction tuning is employed to facilitate the alignment of these two distinct languages. Herein, we introduce a knowledge graph-based instruction generation framework to construct a high-quality instruction dataset, addressing annotation imbalance and instruction deficits in existing protein-text corpus. In particular, the instructions inherit the structural relations between proteins and function annotations in knowledge graphs, which empowers our model to engage in the causal modeling of protein functions, akin to the chain-of-thought processes in natural languages. Extensive experiments on bidirectional protein-text generation tasks show that InstructProtein outperforms state-of-the-art LLMs by large margins. Moreover, InstructProtein serves as a pioneering step towards text-based protein function prediction and sequence design, effectively bridging the gap between protein and human language understanding.
[ { "created": "Thu, 5 Oct 2023 02:45:39 GMT", "version": "v1" } ]
2023-10-06
[ [ "Wang", "Zeyuan", "" ], [ "Zhang", "Qiang", "" ], [ "Ding", "Keyan", "" ], [ "Qin", "Ming", "" ], [ "Zhuang", "Xiang", "" ], [ "Li", "Xiaotong", "" ], [ "Chen", "Huajun", "" ] ]
Large Language Models (LLMs) have revolutionized the field of natural language processing, but they fall short in comprehending biological sequences such as proteins. To address this challenge, we propose InstructProtein, an innovative LLM that possesses bidirectional generation capabilities in both human and protein languages: (i) taking a protein sequence as input to predict its textual function description and (ii) using natural language to prompt protein sequence generation. To achieve this, we first pre-train an LLM on both protein and natural language corpora, enabling it to comprehend individual languages. Then supervised instruction tuning is employed to facilitate the alignment of these two distinct languages. Herein, we introduce a knowledge graph-based instruction generation framework to construct a high-quality instruction dataset, addressing annotation imbalance and instruction deficits in existing protein-text corpora. In particular, the instructions inherit the structural relations between proteins and function annotations in knowledge graphs, which empowers our model to engage in the causal modeling of protein functions, akin to the chain-of-thought processes in natural languages. Extensive experiments on bidirectional protein-text generation tasks show that InstructProtein outperforms state-of-the-art LLMs by large margins. Moreover, InstructProtein serves as a pioneering step towards text-based protein function prediction and sequence design, effectively bridging the gap between protein and human language understanding.
1601.07415
Thomas House
Frank Ball and Thomas House
Heterogeneous network epidemics: real-time growth, variance and extinction of infection
30 pages, 4 figures, Journal of Mathematical Biology, 2017
null
10.1007/s00285-016-1092-3
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have seen a large amount of interest in epidemics on networks as a way of representing the complex structure of contacts capable of spreading infections through the modern human population. The configuration model is a popular choice in theoretical studies since it combines the ability to specify the distribution of the number of contacts (degree) with analytical tractability. Here we consider the early real-time behaviour of the Markovian SIR epidemic model on a configuration model network using a multi-type branching process. We find closed-form analytic expressions for the mean and variance of the number of infectious individuals as a function of time and the degree of the initially infected individual(s), and write down a system of differential equations for the probability of extinction that are numerically fast compared to Monte Carlo simulation. We show that these quantities are all sensitive to the degree distribution - in particular we confirm that the mean prevalence of infection depends on the first two moments of the degree distribution and the variance in prevalence depends on the first three moments of the degree distribution. In contrast to most existing analytic approaches, the accuracy of these results does not depend on having a large number of infectious individuals, meaning that in the large population limit they would be asymptotically exact even for one initial infectious individual.
[ { "created": "Wed, 27 Jan 2016 15:41:23 GMT", "version": "v1" }, { "created": "Fri, 20 Jan 2017 15:54:12 GMT", "version": "v2" } ]
2017-01-23
[ [ "Ball", "Frank", "" ], [ "House", "Thomas", "" ] ]
Recent years have seen a large amount of interest in epidemics on networks as a way of representing the complex structure of contacts capable of spreading infections through the modern human population. The configuration model is a popular choice in theoretical studies since it combines the ability to specify the distribution of the number of contacts (degree) with analytical tractability. Here we consider the early real-time behaviour of the Markovian SIR epidemic model on a configuration model network using a multi-type branching process. We find closed-form analytic expressions for the mean and variance of the number of infectious individuals as a function of time and the degree of the initially infected individual(s), and write down a system of differential equations for the probability of extinction that are numerically fast compared to Monte Carlo simulation. We show that these quantities are all sensitive to the degree distribution - in particular we confirm that the mean prevalence of infection depends on the first two moments of the degree distribution and the variance in prevalence depends on the first three moments of the degree distribution. In contrast to most existing analytic approaches, the accuracy of these results does not depend on having a large number of infectious individuals, meaning that in the large population limit they would be asymptotically exact even for one initial infectious individual.
2102.02669
Xiaoyu Zhang
Xiaoyu Zhang, Yuting Xing, Kai Sun, Yike Guo
OmiEmbed: a unified multi-task deep learning framework for multi-omics data
14 pages, 8 figures, 7 tables
Cancers 2021, 13(12), 3047
10.3390/cancers13123047
null
q-bio.GN cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-dimensional omics data contains intrinsic biomedical information that is crucial for personalised medicine. Nevertheless, it is challenging to capture them from the genome-wide data due to the large number of molecular features and small number of available samples, which is also called 'the curse of dimensionality' in machine learning. To tackle this problem and pave the way for machine learning aided precision medicine, we proposed a unified multi-task deep learning framework named OmiEmbed to capture biomedical information from high-dimensional omics data with the deep embedding and downstream task modules. The deep embedding module learnt an omics embedding that mapped multiple omics data types into a latent space with lower dimensionality. Based on the new representation of multi-omics data, different downstream task modules were trained simultaneously and efficiently with the multi-task strategy to predict the comprehensive phenotype profile of each sample. OmiEmbed support multiple tasks for omics data including dimensionality reduction, tumour type classification, multi-omics integration, demographic and clinical feature reconstruction, and survival prediction. The framework outperformed other methods on all three types of downstream tasks and achieved better performance with the multi-task strategy comparing to training them individually. OmiEmbed is a powerful and unified framework that can be widely adapted to various application of high-dimensional omics data and has a great potential to facilitate more accurate and personalised clinical decision making.
[ { "created": "Wed, 3 Feb 2021 07:34:29 GMT", "version": "v1" }, { "created": "Tue, 18 May 2021 15:45:00 GMT", "version": "v2" } ]
2021-06-22
[ [ "Zhang", "Xiaoyu", "" ], [ "Xing", "Yuting", "" ], [ "Sun", "Kai", "" ], [ "Guo", "Yike", "" ] ]
High-dimensional omics data contain intrinsic biomedical information that is crucial for personalised medicine. Nevertheless, it is challenging to capture it from genome-wide data due to the large number of molecular features and the small number of available samples, a problem also called 'the curse of dimensionality' in machine learning. To tackle this problem and pave the way for machine-learning-aided precision medicine, we proposed a unified multi-task deep learning framework named OmiEmbed to capture biomedical information from high-dimensional omics data with deep embedding and downstream task modules. The deep embedding module learnt an omics embedding that mapped multiple omics data types into a latent space with lower dimensionality. Based on the new representation of multi-omics data, different downstream task modules were trained simultaneously and efficiently with the multi-task strategy to predict the comprehensive phenotype profile of each sample. OmiEmbed supports multiple tasks for omics data, including dimensionality reduction, tumour type classification, multi-omics integration, demographic and clinical feature reconstruction, and survival prediction. The framework outperformed other methods on all three types of downstream tasks and achieved better performance with the multi-task strategy compared to training them individually. OmiEmbed is a powerful and unified framework that can be widely adapted to various applications of high-dimensional omics data and has great potential to facilitate more accurate and personalised clinical decision making.
1504.00283
Joshua Weitz
Hayriye Gulbudak, Joshua S. Weitz
A Touch of Sleep: Biophysical Model of Contact-mediated Dormancy of Archaea by Viruses
8 pages, 4 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The canonical view of the interactions between viruses and their microbial hosts presumes that changes in host and virus fate require the initiation of infection of a host by a virus. That is, first virus particles diffuse randomly outside of host cells, then the virus genome enters the target host cell, and only then do intracellular dynamics and regulation of virus and host cell fate unfold. Intracellular dynamics may lead to the death of the host cell and release of viruses, to the elimination of the virus genome through cellular defense mechanisms, or the integration of the virus genome with the host as a chromosomal or extra-chromosomal element. Here we revisit this canonical view, inspired by recent experimental findings of Bautista and colleagues (mBio, 2015) in which the majority of target host cells can be induced into a dormant state when exposed to either active or de-activated viruses, even when viruses are present at low relative titer. We propose that both the qualitative phenomena and the quantitative time-scales of dormancy induction can be reconciled given the hypothesis that cellular physiology can be altered by contact on the surface of host cells rather than strictly by infection. We develop a biophysical model of contact-mediated dynamics involving virus particles and target cells. We show how in this model virus particles can catalyze - extracellularly - cellular transformations amongst many cells, even if they ultimately infect only one (or none). We discuss implications of the present biophysical model relevant to the study of virus-microbe interactions more generally.
[ { "created": "Wed, 1 Apr 2015 16:35:34 GMT", "version": "v1" } ]
2015-04-02
[ [ "Gulbudak", "Hayriye", "" ], [ "Weitz", "Joshua S.", "" ] ]
The canonical view of the interactions between viruses and their microbial hosts presumes that changes in host and virus fate require the initiation of infection of a host by a virus. That is, first virus particles diffuse randomly outside of host cells, then the virus genome enters the target host cell, and only then do intracellular dynamics and regulation of virus and host cell fate unfold. Intracellular dynamics may lead to the death of the host cell and release of viruses, to the elimination of the virus genome through cellular defense mechanisms, or the integration of the virus genome with the host as a chromosomal or extra-chromosomal element. Here we revisit this canonical view, inspired by recent experimental findings of Bautista and colleagues (mBio, 2015) in which the majority of target host cells can be induced into a dormant state when exposed to either active or de-activated viruses, even when viruses are present at low relative titer. We propose that both the qualitative phenomena and the quantitative time-scales of dormancy induction can be reconciled given the hypothesis that cellular physiology can be altered by contact on the surface of host cells rather than strictly by infection. We develop a biophysical model of contact-mediated dynamics involving virus particles and target cells. We show how in this model virus particles can catalyze - extracellularly - cellular transformations amongst many cells, even if they ultimately infect only one (or none). We discuss implications of the present biophysical model relevant to the study of virus-microbe interactions more generally.
2304.05823
Nan Rosemary Ke
Nan Rosemary Ke, Sara-Jane Dunn, Jorg Bornschein, Silvia Chiappa, Melanie Rey, Jean-Baptiste Lespiau, Albin Cassirer, Jane Wang, Theophane Weber, David Barrett, Matthew Botvinick, Anirudh Goyal, Mike Mozer, Danilo Rezende
DiscoGen: Learning to Discover Gene Regulatory Networks
null
null
null
null
q-bio.MN cs.LG q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Accurately inferring Gene Regulatory Networks (GRNs) is a critical and challenging task in biology. GRNs model the activatory and inhibitory interactions between genes and are inherently causal in nature. To accurately identify GRNs, perturbational data is required. However, most GRN discovery methods only operate on observational data. Recent advances in neural network-based causal discovery methods have significantly improved causal discovery, including handling interventional data, improvements in performance and scalability. However, applying state-of-the-art (SOTA) causal discovery methods in biology poses challenges, such as noisy data and a large number of samples. Thus, adapting the causal discovery methods is necessary to handle these challenges. In this paper, we introduce DiscoGen, a neural network-based GRN discovery method that can denoise gene expression measurements and handle interventional data. We demonstrate that our model outperforms SOTA neural network-based causal discovery methods.
[ { "created": "Wed, 12 Apr 2023 13:02:49 GMT", "version": "v1" } ]
2023-04-13
[ [ "Ke", "Nan Rosemary", "" ], [ "Dunn", "Sara-Jane", "" ], [ "Bornschein", "Jorg", "" ], [ "Chiappa", "Silvia", "" ], [ "Rey", "Melanie", "" ], [ "Lespiau", "Jean-Baptiste", "" ], [ "Cassirer", "Albin", "" ], [ "Wang", "Jane", "" ], [ "Weber", "Theophane", "" ], [ "Barrett", "David", "" ], [ "Botvinick", "Matthew", "" ], [ "Goyal", "Anirudh", "" ], [ "Mozer", "Mike", "" ], [ "Rezende", "Danilo", "" ] ]
Accurately inferring Gene Regulatory Networks (GRNs) is a critical and challenging task in biology. GRNs model the activatory and inhibitory interactions between genes and are inherently causal in nature. To accurately identify GRNs, perturbational data is required. However, most GRN discovery methods only operate on observational data. Recent advances in neural network-based causal discovery methods have significantly improved causal discovery, including handling interventional data, improvements in performance and scalability. However, applying state-of-the-art (SOTA) causal discovery methods in biology poses challenges, such as noisy data and a large number of samples. Thus, adapting the causal discovery methods is necessary to handle these challenges. In this paper, we introduce DiscoGen, a neural network-based GRN discovery method that can denoise gene expression measurements and handle interventional data. We demonstrate that our model outperforms SOTA neural network-based causal discovery methods.
1906.07899
Tomasz Rutkowski
Tomasz M. Rutkowski and Marcin Koculak and Masato S. Abe and Mihoko Otake-Matsuura
Brain correlates of task-load and dementia elucidation with tensor machine learning using oddball BCI paradigm
In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8578-8582, May 2019
null
10.1109/ICASSP.2019.8682387
null
q-bio.NC cs.LG eess.SP
http://creativecommons.org/licenses/by-nc-sa/4.0/
Dementia in the elderly has recently become the most usual cause of cognitive decline. The proliferation of dementia cases in aging societies creates a remarkable economic as well as medical problems in many communities worldwide. A recently published report by The World Health Organization (WHO) estimates that about 47 million people are suffering from dementia-related neurocognitive declines worldwide. The number of dementia cases is predicted by 2050 to triple, which requires the creation of an AI-based technology application to support interventions with early screening for subsequent mental wellbeing checking as well as preservation with digital-pharma (the so-called beyond a pill) therapeutical approaches. We present an attempt and exploratory results of brain signal (EEG) classification to establish digital biomarkers for dementia stage elucidation. We discuss a comparison of various machine learning approaches for automatic event-related potentials (ERPs) classification of a high and low task-load sound stimulus recognition. These ERPs are similar to those in dementia. The proposed winning method using tensor-based machine learning in a deep fully connected neural network setting is a step forward to develop AI-based approaches for a subsequent application for subjective- and mild-cognitive impairment (SCI and MCI) diagnostics.
[ { "created": "Wed, 19 Jun 2019 03:43:39 GMT", "version": "v1" } ]
2019-06-20
[ [ "Rutkowski", "Tomasz M.", "" ], [ "Koculak", "Marcin", "" ], [ "Abe", "Masato S.", "" ], [ "Otake-Matsuura", "Mihoko", "" ] ]
Dementia in the elderly has recently become the most common cause of cognitive decline. The proliferation of dementia cases in aging societies creates remarkable economic as well as medical problems in many communities worldwide. A recently published report by the World Health Organization (WHO) estimates that about 47 million people are suffering from dementia-related neurocognitive decline worldwide. The number of dementia cases is predicted to triple by 2050, which requires the creation of AI-based technology applications to support interventions with early screening for subsequent mental wellbeing monitoring as well as preservation with digital-pharma (the so-called "beyond a pill") therapeutic approaches. We present an attempt and exploratory results of brain signal (EEG) classification to establish digital biomarkers for dementia stage elucidation. We discuss a comparison of various machine learning approaches for automatic classification of event-related potentials (ERPs) in high and low task-load sound stimulus recognition. These ERPs are similar to those observed in dementia. The proposed winning method, using tensor-based machine learning in a deep fully connected neural network setting, is a step forward in developing AI-based approaches for subsequent application to subjective- and mild-cognitive-impairment (SCI and MCI) diagnostics.
1606.07748
Cameron Browne
Cameron J. Browne
Global Properties of Nested Network Model with Application to Multi-Epitope HIV/CTL Dynamics
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical modeling and analysis can provide insight on the dynamics of ecosystems which maintain biodiversity in the face of competitive and prey-predator interactions. Of primary interests are the underlying structure and features which stabilize diverse ecological networks. Recently Korytowski and Smith [17] proved that a perfectly nested infection network, along with appropriate life history trade-offs, leads to coexistence and persistence of bacteria-phage communities in a chemostat model. In this article, we generalize their model in order to apply it to the within-host dynamics virus and immune response, in particular HIV and CTL (Cytotoxic T Lymphocyte) cells. Our model can produce a diverse hierarchy of viral and immune populations, built through sequential viral escape from dominant immune responses and rise in subdominant immune responses, consistent with observed patterns of HIV/CTL evolution. We find a Lyapunov function for the system which leads to rigorous characterization of persistent viral and immune variants, and informs upon equilibria stability and global dynamics. Results are interpreted in the context of within-host HIV/CTL evolution and numerical simulations are provided.
[ { "created": "Fri, 24 Jun 2016 16:31:02 GMT", "version": "v1" }, { "created": "Sun, 1 Jan 2017 20:13:54 GMT", "version": "v2" } ]
2017-01-03
[ [ "Browne", "Cameron J.", "" ] ]
Mathematical modeling and analysis can provide insight into the dynamics of ecosystems which maintain biodiversity in the face of competitive and prey-predator interactions. Of primary interest are the underlying structure and features which stabilize diverse ecological networks. Recently Korytowski and Smith [17] proved that a perfectly nested infection network, along with appropriate life history trade-offs, leads to coexistence and persistence of bacteria-phage communities in a chemostat model. In this article, we generalize their model in order to apply it to the within-host dynamics of virus and immune response, in particular HIV and CTL (Cytotoxic T Lymphocyte) cells. Our model can produce a diverse hierarchy of viral and immune populations, built through sequential viral escape from dominant immune responses and rise in subdominant immune responses, consistent with observed patterns of HIV/CTL evolution. We find a Lyapunov function for the system which leads to rigorous characterization of persistent viral and immune variants, and informs upon equilibria stability and global dynamics. Results are interpreted in the context of within-host HIV/CTL evolution and numerical simulations are provided.
1407.2234
Matthew Turner
Daniel J. G. Pearce and Matthew S. Turner
Density regulation in strictly metric-free swarms
null
null
10.1088/1367-2630/16/8/082002
null
q-bio.QM cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is now experimental evidence that nearest-neighbour interactions in flocks of birds are metric free, i.e. they have no characteristic interaction length scale. However, models that involve interactions between neighbours that are assigned topologically are naturally invariant under spatial expansion, supporting a continuous reduction in density towards zero, unless additional cohesive interactions are introduced or the density is artificially controlled, e.g. via a finite system size. We propose a solution that involves a metric-free motional bias on those individuals that are topologically identified to be on an edge of the swarm. This model has only two primary control parameters, one controlling the relative strength of stochastic noise to the degree of co-alignment and another controlling the degree of the motional bias for those on the edge, relative to the tendency to co-align. We find a novel power-law scaling of the real-space density with the number of individuals N as well as a familiar order-to-disorder transition.
[ { "created": "Mon, 7 Jul 2014 15:40:00 GMT", "version": "v1" } ]
2015-06-22
[ [ "Pearce", "Daniel J. G.", "" ], [ "Turner", "Matthew S.", "" ] ]
There is now experimental evidence that nearest-neighbour interactions in flocks of birds are metric free, i.e. they have no characteristic interaction length scale. However, models that involve interactions between neighbours that are assigned topologically are naturally invariant under spatial expansion, supporting a continuous reduction in density towards zero, unless additional cohesive interactions are introduced or the density is artificially controlled, e.g. via a finite system size. We propose a solution that involves a metric-free motional bias on those individuals that are topologically identified to be on an edge of the swarm. This model has only two primary control parameters, one controlling the relative strength of stochastic noise to the degree of co-alignment and another controlling the degree of the motional bias for those on the edge, relative to the tendency to co-align. We find a novel power-law scaling of the real-space density with the number of individuals N as well as a familiar order-to-disorder transition.
2306.03696
William Jacobs
Yaxin An, Michael A. Webb, William M. Jacobs
Active learning of the thermodynamics-dynamics tradeoff in protein condensates
null
Science Advances 10, adj2448 2024
10.1126/sciadv.adj2448
null
q-bio.BM cond-mat.soft physics.bio-ph physics.comp-ph
http://creativecommons.org/licenses/by/4.0/
Phase-separated biomolecular condensates exhibit a wide range of dynamical properties, which depend on the sequences of the constituent proteins and RNAs. However, it is unclear to what extent condensate dynamics can be tuned without also changing the thermodynamic properties that govern phase separation. Using coarse-grained simulations of intrinsically disordered proteins, we show that the dynamics and thermodynamics of homopolymer condensates are strongly correlated, with increased condensate stability being coincident with low mobilities and high viscosities. We then apply an "active learning" strategy to identify heteropolymer sequences that break this correlation. This data-driven approach and accompanying analysis reveal how heterogeneous amino-acid compositions and non-uniform sequence patterning map to a range of independently tunable dynamical and thermodynamic properties of biomolecular condensates. Our results highlight key molecular determinants governing the physical properties of biomolecular condensates and establish design rules for the development of stimuli-responsive biomaterials.
[ { "created": "Tue, 6 Jun 2023 14:09:38 GMT", "version": "v1" }, { "created": "Tue, 24 Oct 2023 14:40:15 GMT", "version": "v2" }, { "created": "Sat, 9 Dec 2023 16:15:44 GMT", "version": "v3" } ]
2024-07-31
[ [ "An", "Yaxin", "" ], [ "Webb", "Michael A.", "" ], [ "Jacobs", "William M.", "" ] ]
Phase-separated biomolecular condensates exhibit a wide range of dynamical properties, which depend on the sequences of the constituent proteins and RNAs. However, it is unclear to what extent condensate dynamics can be tuned without also changing the thermodynamic properties that govern phase separation. Using coarse-grained simulations of intrinsically disordered proteins, we show that the dynamics and thermodynamics of homopolymer condensates are strongly correlated, with increased condensate stability being coincident with low mobilities and high viscosities. We then apply an "active learning" strategy to identify heteropolymer sequences that break this correlation. This data-driven approach and accompanying analysis reveal how heterogeneous amino-acid compositions and non-uniform sequence patterning map to a range of independently tunable dynamical and thermodynamic properties of biomolecular condensates. Our results highlight key molecular determinants governing the physical properties of biomolecular condensates and establish design rules for the development of stimuli-responsive biomaterials.
1309.1898
Hong Qian
Hong Qian
Fitness and entropy production in a cell population dynamics with epigenetic phenotype switching
16 pages
Quantitative Biology, vol. 2, 47-53 (2014)
10.1007/s40484-014-0028-4
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by recent understandings in the stochastic natures of gene expression, biochemical signaling, and spontaneous reversible epigenetic switchings, we study a simple deterministic cell population dynamics in which subpopulations grow with different rates and individual cells can bi-directionally switch between a small number of different epigenetic phenotypes. Two theories in the past, the population dynamics and thermodynamics of master equations, separatedly defined two important concepts in mathematical terms: the {\em fitness} in the former and the (non-adiabatic) {\em entropy production} in the latter. Both play important roles in the evolution of the cell population dynamics. The switching sustains the variations among the subpopulation growth thus continuous natural selection. As a form of Price's equation, the fitness increases with ($i$) natural selection through variations and $(ii)$ a positive covariance between the per capita growth and switching, which represents a Lamarchian-like behavior. A negative covariance balances the natural selection in a fitness steady state | "the red queen" scenario. At the same time the growth keeps the proportions of subpopulations away from the "intrinsic" switching equilibrium of individual cells, thus leads to a continous entropy production. A covariance, between the per capita growth rate and the "chemical potential" of subpopulation, counter-acts the entropy production. Analytical results are obtained for the limiting cases of growth dominating switching and vice versa.
[ { "created": "Sat, 7 Sep 2013 19:46:41 GMT", "version": "v1" }, { "created": "Fri, 1 Aug 2014 04:53:00 GMT", "version": "v2" } ]
2014-08-04
[ [ "Qian", "Hong", "" ] ]
Motivated by recent understanding of the stochastic nature of gene expression, biochemical signaling, and spontaneous reversible epigenetic switching, we study a simple deterministic cell population dynamics in which subpopulations grow with different rates and individual cells can bi-directionally switch between a small number of different epigenetic phenotypes. Two theories in the past, the population dynamics and thermodynamics of master equations, separately defined two important concepts in mathematical terms: the {\em fitness} in the former and the (non-adiabatic) {\em entropy production} in the latter. Both play important roles in the evolution of the cell population dynamics. The switching sustains the variations among the subpopulation growth rates, and thus continuous natural selection. As a form of Price's equation, the fitness increases with ($i$) natural selection through variations and ($ii$) a positive covariance between the per capita growth and switching, which represents a Lamarckian-like behavior. A negative covariance balances the natural selection in a fitness steady state: the "red queen" scenario. At the same time, the growth keeps the proportions of subpopulations away from the "intrinsic" switching equilibrium of individual cells, thus leading to continuous entropy production. A covariance, between the per capita growth rate and the "chemical potential" of a subpopulation, counteracts the entropy production. Analytical results are obtained for the limiting cases of growth dominating switching and vice versa.
1411.0395
Robert M. Nowak
Robert M. Nowak
Assembly of repetitive regions using next-generation sequencing data
11 pages, 5 figures, 6 tables. The C++ sources, the Python scripts and the additional data are available at http://dnaasm.sourceforge.org
null
10.1016/j.bbe.2014.12.001
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High read depth can be used to assemble short sequence repeats. Existing genome assemblers fail in repetitive regions longer than the average read. I propose a new algorithm for DNA assembly which uses the relative frequency of reads to properly reconstruct repetitive sequences. The mathematical model shows the upper limits on the accuracy of the results as a function of read coverage. For high coverage, the estimation error depends linearly on the repetitive sequence length and is inversely proportional to the sequencing coverage. The algorithm requires the high read depth provided by next-generation sequencers and could use existing data. Tests on errorless reads, generated in silico from several model genomes, showed properly reconstructed repetitive sequences where existing assemblers fail.
[ { "created": "Mon, 3 Nov 2014 08:51:38 GMT", "version": "v1" } ]
2015-01-08
[ [ "Nowak", "Robert M.", "" ] ]
High read depth can be used to assemble short sequence repeats. Existing genome assemblers fail in repetitive regions longer than the average read. I propose a new algorithm for DNA assembly which uses the relative frequency of reads to properly reconstruct repetitive sequences. The mathematical model shows the upper limits on the accuracy of the results as a function of read coverage. For high coverage, the estimation error depends linearly on the repetitive sequence length and is inversely proportional to the sequencing coverage. The algorithm requires the high read depth provided by next-generation sequencers and could use existing data. Tests on errorless reads, generated in silico from several model genomes, showed properly reconstructed repetitive sequences where existing assemblers fail.
1308.3843
Jonathan Doye
Jonathan P.K. Doye, Thomas E. Ouldridge, Ard A. Louis, Flavio Romano, Petr Sulc, Christian Matek, Benedict E.K. Snodin, Lorenzo Rovigatti, John S. Schreck, Ryan M. Harrison and William P.J. Smith
Coarse-graining DNA for simulations of DNA nanotechnology
20 pages, 9 figures
Phys. Chem. Chem. Phys. 15, 20395-20414 (2013)
10.1039/C3CP53545B
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To simulate long time and length scale processes involving DNA it is necessary to use a coarse-grained description. Here we provide an overview of different approaches to such coarse graining, focussing on those at the nucleotide level that allow the self-assembly processes associated with DNA nanotechnology to be studied. OxDNA, our recently-developed coarse-grained DNA model, is particularly suited to this task, and has opened up this field to systematic study by simulations. We illustrate some of the range of DNA nanotechnology systems to which the model is being applied, as well as the insights it can provide into fundamental biophysical properties of DNA.
[ { "created": "Sun, 18 Aug 2013 08:41:39 GMT", "version": "v1" } ]
2017-09-13
[ [ "Doye", "Jonathan P. K.", "" ], [ "Ouldridge", "Thomas E.", "" ], [ "Louis", "Ard A.", "" ], [ "Romano", "Flavio", "" ], [ "Sulc", "Petr", "" ], [ "Matek", "Christian", "" ], [ "Snodin", "Benedict E. K.", "" ], [ "Rovigatti", "Lorenzo", "" ], [ "Schreck", "John S.", "" ], [ "Harrison", "Ryan M.", "" ], [ "Smith", "William P. J.", "" ] ]
To simulate long time and length scale processes involving DNA it is necessary to use a coarse-grained description. Here we provide an overview of different approaches to such coarse graining, focussing on those at the nucleotide level that allow the self-assembly processes associated with DNA nanotechnology to be studied. OxDNA, our recently-developed coarse-grained DNA model, is particularly suited to this task, and has opened up this field to systematic study by simulations. We illustrate some of the range of DNA nanotechnology systems to which the model is being applied, as well as the insights it can provide into fundamental biophysical properties of DNA.
1803.09107
Moti Salti
Moti Salti, Asaf Harel, Sebastien Marti
Conscious Perception: Time for an Update?
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the neural mechanisms underlying subjective representation has become a central endeavor in cognitive neuroscience. In theories of conscious perception, a stimulus gaining conscious access is usually considered a discrete neuronal event to be characterized in time or space, sometimes referred to as a 'conscious episode'. Surprisingly, the alternative hypothesis, according to which conscious perception is a dynamic process, has rarely been considered. Here, we discuss this hypothesis and envisage its implications. We show how it can reconcile inconsistent empirical findings on the timing of the neural correlates of consciousness (NCCs) and make testable predictions. According to this hypothesis, a stimulus is consciously perceived for as long as it is recoded to fit an ongoing stream composed of all other perceived stimuli. We suggest that this 'updating' process is governed by at least three factors: (1) context, (2) stimulus saliency, and (3) the observer's goal. Finally, this framework forces us to reconsider the typical distinction between conscious and unconscious information processing.
[ { "created": "Sat, 24 Mar 2018 13:25:26 GMT", "version": "v1" } ]
2018-03-28
[ [ "Salti", "Moti", "" ], [ "Harel", "Asaf", "" ], [ "Marti", "Sebastien", "" ] ]
Understanding the neural mechanisms underlying subjective representation has become a central endeavor in cognitive neuroscience. In theories of conscious perception, a stimulus gaining conscious access is usually considered a discrete neuronal event to be characterized in time or space, sometimes referred to as a 'conscious episode'. Surprisingly, the alternative hypothesis, according to which conscious perception is a dynamic process, has rarely been considered. Here, we discuss this hypothesis and envisage its implications. We show how it can reconcile inconsistent empirical findings on the timing of the neural correlates of consciousness (NCCs) and make testable predictions. According to this hypothesis, a stimulus is consciously perceived for as long as it is recoded to fit an ongoing stream composed of all other perceived stimuli. We suggest that this 'updating' process is governed by at least three factors: (1) context, (2) stimulus saliency, and (3) the observer's goal. Finally, this framework forces us to reconsider the typical distinction between conscious and unconscious information processing.
1902.01210
Liliana Camarillo Rodriguez
L. Camarillo-Rodriguez, Z.J. Waldman, I. Orosz, J. Stein, S. Das, R. Gorniak, A.D. Sharan, R. Gross, B.C. Lega, K. Zaghloul, B.C. Jobst, K.A. Davis, P.A. Wanda, G. Worrell, M.R. Sperling, S.A. Weiss
Epileptiform spikes in specific left temporal and mesial temporal structures disrupt verbal episodic memory encoding
All of the co-authors of this article agree to withdraw it because it is not yet ready for submission
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Patients diagnosed with epilepsy experience cognitive dysfunction that may be due to a transient cognitive/memory impairment (TCI/TMI) caused by spontaneous epileptiform spikes. We asked, in a cohort of 166 adult patients with medically refractory focal epilepsy, whether spikes in specific neuroanatomical regions during verbal episodic memory encoding would significantly decrease the probability of recall. Using a na\"ive Bayesian machine learning model, we found that the probability of correct word recall decreased significantly by 11.9% when spikes occurred in left Brodmann area 21 (BA21) (p<0.001), by 49.7% in left BA38 (p=0.01), by 32.2% in right BA38 (p<0.001), and by 21.4% in left BA36 (p<0.01). We also examined the influence of the seizure-onset zone and the language-dominant hemisphere on this effect. Our results demonstrate that spontaneous epileptiform spikes produce a large-effect TCI/TMI in brain regions known to be important in semantic processing and episodic memory. Thus, memory impairment in patients with epilepsy may be attributable to cellular events associated with abnormal inter-ictal electrical events.
[ { "created": "Thu, 31 Jan 2019 23:12:14 GMT", "version": "v1" }, { "created": "Mon, 18 Feb 2019 01:40:53 GMT", "version": "v2" } ]
2019-02-19
[ [ "Camarillo-Rodriguez", "L.", "" ], [ "Waldman", "Z. J.", "" ], [ "Orosz", "I.", "" ], [ "Stein", "J.", "" ], [ "Das", "S.", "" ], [ "Gorniak", "R.", "" ], [ "Sharan", "A. D.", "" ], [ "Gross", "R.", "" ], [ "Lega", "B. C.", "" ], [ "Zaghloul", "K.", "" ], [ "Jobst", "B. C.", "" ], [ "Davis", "K. A.", "" ], [ "Wanda", "P. A.", "" ], [ "Worrell", "G.", "" ], [ "Sperling", "M. R.", "" ], [ "Weiss", "S. A.", "" ] ]
Patients diagnosed with epilepsy experience cognitive dysfunction that may be due to a transient cognitive/memory impairment (TCI/TMI) caused by spontaneous epileptiform spikes. We asked, in a cohort of 166 adult patients with medically refractory focal epilepsy, whether spikes in specific neuroanatomical regions during verbal episodic memory encoding would significantly decrease the probability of recall. Using a na\"ive Bayesian machine learning model, we found that the probability of correct word recall decreased significantly by 11.9% when spikes occurred in left Brodmann area 21 (BA21) (p<0.001), by 49.7% in left BA38 (p=0.01), by 32.2% in right BA38 (p<0.001), and by 21.4% in left BA36 (p<0.01). We also examined the influence of the seizure-onset zone and the language-dominant hemisphere on this effect. Our results demonstrate that spontaneous epileptiform spikes produce a large-effect TCI/TMI in brain regions known to be important in semantic processing and episodic memory. Thus, memory impairment in patients with epilepsy may be attributable to cellular events associated with abnormal inter-ictal electrical events.
q-bio/0409039
Ken Kiyono
Ken Kiyono, Zbigniew R. Struzik, Naoko Aoyagi, Seiichiro Sakata, Junichiro Hayano, Yoshiharu Yamamoto
Critical Scale-invariance in Healthy Human Heart Rate
9 pages, 3 figures. Phys. Rev. Lett., to appear (2004)
null
10.1103/PhysRevLett.93.178103
null
q-bio.TO
null
We demonstrate the robust scale-invariance in the probability density function (PDF) of detrended healthy human heart rate increments, which is preserved not only in a quiescent condition, but also in a dynamic state where the mean level of heart rate is dramatically changing. This scale-independent and fractal structure is markedly different from the scale-dependent PDF evolution observed in a turbulent-like, cascade heart rate model. These results strongly support the view that healthy human heart rate is controlled to converge continually to a critical state.
[ { "created": "Thu, 30 Sep 2004 23:46:04 GMT", "version": "v1" } ]
2009-11-10
[ [ "Kiyono", "Ken", "" ], [ "Struzik", "Zbigniew R.", "" ], [ "Aoyagi", "Naoko", "" ], [ "Sakata", "Seiichiro", "" ], [ "Hayano", "Junichiro", "" ], [ "Yamamoto", "Yoshiharu", "" ] ]
We demonstrate the robust scale-invariance in the probability density function (PDF) of detrended healthy human heart rate increments, which is preserved not only in a quiescent condition, but also in a dynamic state where the mean level of heart rate is dramatically changing. This scale-independent and fractal structure is markedly different from the scale-dependent PDF evolution observed in a turbulent-like, cascade heart rate model. These results strongly support the view that healthy human heart rate is controlled to converge continually to a critical state.
2004.05256
Hao Tian
Hao Tian and Peng Tao
Deciphering the Protein Motion of S1 Subunit in SARS-CoV-2 Spike Glycoprotein Through Integrated Computational Methods
null
null
null
null
q-bio.BM q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a major worldwide public health emergency that has infected over $1.5$ million people. The partially open state of the S1 subunit in the spike glycoprotein is considered vital for infection of the host cell and represents a key target for neutralizing antibodies. However, the mechanism of the transition from the closed state to the partially open state remains unclear. Here, we applied a combination of a Markov state model, transition path theory, and a random forest to analyze the S1 motion. Our results reveal a complete conformational pathway of the receptor-binding domain, from buried, through partially open, to detached states. We also numerically confirmed the transition probabilities between those states. Based on the asymmetry in both the dynamical behavior and backbone C$\alpha$ importance, we further suggest a relation between the chains in the trimeric spike protein, which may help in vaccine design and antibody neutralization.
[ { "created": "Fri, 10 Apr 2020 23:27:55 GMT", "version": "v1" } ]
2020-04-14
[ [ "Tian", "Hao", "" ], [ "Tao", "Peng", "" ] ]
The novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a major worldwide public health emergency that has infected over $1.5$ million people. The partially open state of the S1 subunit in the spike glycoprotein is considered vital for infection of the host cell and represents a key target for neutralizing antibodies. However, the mechanism of the transition from the closed state to the partially open state remains unclear. Here, we applied a combination of a Markov state model, transition path theory, and a random forest to analyze the S1 motion. Our results reveal a complete conformational pathway of the receptor-binding domain, from buried, through partially open, to detached states. We also numerically confirmed the transition probabilities between those states. Based on the asymmetry in both the dynamical behavior and backbone C$\alpha$ importance, we further suggest a relation between the chains in the trimeric spike protein, which may help in vaccine design and antibody neutralization.
1001.3813
David Morrison
David A. Morrison
How and where to look for tRNAs in Metazoan mitochondrial genomes, and what you might find when you get there
27 pages, including 1 Table and 9 Figures, plus 6 online Appendices
null
null
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to locate and annotate mitochondrial genes is an important practical issue, given the rapidly increasing number of mitogenomes appearing in the public databases. Unfortunately, tRNA genes in Metazoan mitochondria have proved to be problematic because they often vary in number (genes missing or duplicated) and also in the secondary structure of the transcribed tRNAs (T or D arms missing). I have performed a series of comparative analyses of the tRNA genes of a broad range of Metazoan mitogenomes in order to address this issue. I conclude that no single computer program is necessarily capable of finding all of the tRNA genes in any given mitogenome, and that use of both the ARWEN and DOGMA programs is sometimes necessary because they produce complementary false negatives. There are apparently a very large number of erroneous annotations in the databased mitogenome sequences, including missed genes, wrongly annotated locations, false complements, and inconsistent criteria for assigning the 5' and 3' boundaries; and I have listed many of these. The extent of overlap between genes is often greatly exaggerated due to inconsistent annotations, although notable overlaps involving tRNAs are apparently real. Finally, three novel hypotheses were examined and found to have support from the comparative analyses: (1) some organisms have mitogenomic locations that simultaneously code for multiple tRNAs; (2) some organisms have mitogenomic locations that simultaneously code for tRNAs and proteins (but not rRNAs); and (3) one group of nematodes has several genes that code for tRNAs lacking both the D and T arms.
[ { "created": "Thu, 21 Jan 2010 14:33:36 GMT", "version": "v1" }, { "created": "Fri, 3 Feb 2012 13:46:19 GMT", "version": "v2" } ]
2012-02-06
[ [ "Morrison", "David A.", "" ] ]
The ability to locate and annotate mitochondrial genes is an important practical issue, given the rapidly increasing number of mitogenomes appearing in the public databases. Unfortunately, tRNA genes in Metazoan mitochondria have proved to be problematic because they often vary in number (genes missing or duplicated) and also in the secondary structure of the transcribed tRNAs (T or D arms missing). I have performed a series of comparative analyses of the tRNA genes of a broad range of Metazoan mitogenomes in order to address this issue. I conclude that no single computer program is necessarily capable of finding all of the tRNA genes in any given mitogenome, and that use of both the ARWEN and DOGMA programs is sometimes necessary because they produce complementary false negatives. There are apparently a very large number of erroneous annotations in the databased mitogenome sequences, including missed genes, wrongly annotated locations, false complements, and inconsistent criteria for assigning the 5' and 3' boundaries; and I have listed many of these. The extent of overlap between genes is often greatly exaggerated due to inconsistent annotations, although notable overlaps involving tRNAs are apparently real. Finally, three novel hypotheses were examined and found to have support from the comparative analyses: (1) some organisms have mitogenomic locations that simultaneously code for multiple tRNAs; (2) some organisms have mitogenomic locations that simultaneously code for tRNAs and proteins (but not rRNAs); and (3) one group of nematodes has several genes that code for tRNAs lacking both the D and T arms.
0808.2660
Mike Steel Prof.
Mike Steel
A basic limitation on inferring phylogenies by pairwise sequence comparisons
13 pages, 2 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distance-based approaches in phylogenetics, such as Neighbor-Joining, are a fast and popular approach for building trees. These methods take pairs of sequences and from them construct a value that, in expectation, is additive under a stochastic model of site substitution. Most models assume a distribution of rates across sites, often based on a gamma distribution. Provided the (shape) parameter of this distribution is known, the method can correctly reconstruct the tree. However, if the shape parameter is not known, then we show that topologically different trees, with different shape parameters and associated positive branch lengths, can lead to exactly matching distributions on pairwise site patterns between all pairs of taxa. Thus, one could not distinguish between the two trees using pairs of sequences without some prior knowledge of the shape parameter. More surprisingly, this can happen for {\em any} choice of distinct shape parameters on the two trees, and thus the result is not peculiar to a particular or contrived selection of the shape parameters. On a positive note, we point out known conditions where identifiability can be restored (namely, when the branch lengths are clocklike, or if methods such as maximum likelihood are used).
[ { "created": "Tue, 19 Aug 2008 21:25:42 GMT", "version": "v1" } ]
2008-08-21
[ [ "Steel", "Mike", "" ] ]
Distance-based approaches in phylogenetics, such as Neighbor-Joining, are a fast and popular approach for building trees. These methods take pairs of sequences and from them construct a value that, in expectation, is additive under a stochastic model of site substitution. Most models assume a distribution of rates across sites, often based on a gamma distribution. Provided the (shape) parameter of this distribution is known, the method can correctly reconstruct the tree. However, if the shape parameter is not known, then we show that topologically different trees, with different shape parameters and associated positive branch lengths, can lead to exactly matching distributions on pairwise site patterns between all pairs of taxa. Thus, one could not distinguish between the two trees using pairs of sequences without some prior knowledge of the shape parameter. More surprisingly, this can happen for {\em any} choice of distinct shape parameters on the two trees, and thus the result is not peculiar to a particular or contrived selection of the shape parameters. On a positive note, we point out known conditions where identifiability can be restored (namely, when the branch lengths are clocklike, or if methods such as maximum likelihood are used).
1704.02533
Kieran Fox
Jessica R. Andrews-Hanna, Zachary C. Irving, Kieran C.R. Fox, R. Nathan Spreng, Kalina Christoff
The Neuroscience of Spontaneous Thought: An Evolving, Interdisciplinary Field
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An often-overlooked characteristic of the human mind is its propensity to wander. Despite growing interest in the science of mind-wandering, most studies operationalize mind-wandering by its task-unrelated contents, which may be orthogonal to the processes constraining how thoughts are evoked and unfold over time. In this chapter, we emphasize the importance of incorporating such processes into current definitions of mind-wandering, and propose that mind-wandering and other forms of spontaneous thought (such as dreaming and creativity) are mental states that arise and transition relatively freely due to an absence of constraints on cognition. We review existing psychological, philosophical, and neuroscientific research on spontaneous thought through the lens of this framework, and call for additional research into the dynamic properties of the mind and brain.
[ { "created": "Sat, 8 Apr 2017 20:16:58 GMT", "version": "v1" } ]
2017-04-11
[ [ "Andrews-Hanna", "Jessica R.", "" ], [ "Irving", "Zachary C.", "" ], [ "Fox", "Kieran C. R.", "" ], [ "Spreng", "R. Nathan", "" ], [ "Christoff", "Kalina", "" ] ]
An often-overlooked characteristic of the human mind is its propensity to wander. Despite growing interest in the science of mind-wandering, most studies operationalize mind-wandering by its task-unrelated contents, which may be orthogonal to the processes constraining how thoughts are evoked and unfold over time. In this chapter, we emphasize the importance of incorporating such processes into current definitions of mind-wandering, and propose that mind-wandering and other forms of spontaneous thought (such as dreaming and creativity) are mental states that arise and transition relatively freely due to an absence of constraints on cognition. We review existing psychological, philosophical, and neuroscientific research on spontaneous thought through the lens of this framework, and call for additional research into the dynamic properties of the mind and brain.
2006.09932
Shashank Subramanian
Shashank Subramanian, Klaudius Scheufele, Naveen Himthani, George Biros
Multiatlas Calibration of Biophysical Brain Tumor Growth Models with Mass Effect
Provisionally accepted to MICCAI 2020
null
null
null
q-bio.QM cs.CE physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a 3D fully-automatic method for the calibration of partial differential equation (PDE) models of glioblastoma (GBM) growth with mass effect, the deformation of brain tissue due to the tumor. We quantify the mass effect, tumor proliferation, tumor migration, and the localized tumor initial condition from a single multiparametric magnetic resonance imaging (mpMRI) patient scan. The model is a reaction-advection-diffusion PDE coupled with linear elasticity equations to capture mass effect. Calibration from a single scan is notoriously difficult because the precancerous (healthy) brain anatomy is unknown. To solve this inherently ill-posed and ill-conditioned optimization problem, we introduce a novel inversion scheme that uses multiple brain atlases as proxies for the healthy precancer patient brain, resulting in robust and reliable parameter estimation. We apply our method to both synthetic and clinical datasets representative of the heterogeneous spatial landscape typically observed in glioblastomas to demonstrate the validity and performance of our methods. On the synthetic data, we report calibration errors (due to the ill-posedness and our solution scheme) in the 10\%-20\% range. On the clinical data, we report good quantitative agreement with the observed tumor and qualitative agreement with the mass effect (for which we do not have a ground truth). Our method uses a minimal set of parameters and provides both global and local quantitative measures of tumor infiltration and mass effect.
[ { "created": "Wed, 17 Jun 2020 15:24:05 GMT", "version": "v1" } ]
2020-06-18
[ [ "Subramanian", "Shashank", "" ], [ "Scheufele", "Klaudius", "" ], [ "Himthani", "Naveen", "" ], [ "Biros", "George", "" ] ]
We present a 3D fully-automatic method for the calibration of partial differential equation (PDE) models of glioblastoma (GBM) growth with mass effect, the deformation of brain tissue due to the tumor. We quantify the mass effect, tumor proliferation, tumor migration, and the localized tumor initial condition from a single multiparametric magnetic resonance imaging (mpMRI) patient scan. The model is a reaction-advection-diffusion PDE coupled with linear elasticity equations to capture mass effect. Calibration from a single scan is notoriously difficult because the precancerous (healthy) brain anatomy is unknown. To solve this inherently ill-posed and ill-conditioned optimization problem, we introduce a novel inversion scheme that uses multiple brain atlases as proxies for the healthy precancer patient brain, resulting in robust and reliable parameter estimation. We apply our method to both synthetic and clinical datasets representative of the heterogeneous spatial landscape typically observed in glioblastomas to demonstrate the validity and performance of our methods. On the synthetic data, we report calibration errors (due to the ill-posedness and our solution scheme) in the 10\%-20\% range. On the clinical data, we report good quantitative agreement with the observed tumor and qualitative agreement with the mass effect (for which we do not have a ground truth). Our method uses a minimal set of parameters and provides both global and local quantitative measures of tumor infiltration and mass effect.
2303.06071
Azra Bihorac
Esra Adiyeke, Yuanfang Ren, Ziyuan Guan, Matthew M. Ruppert, Parisa Rashidi, Azra Bihorac, Tezcan Ozrazgat-Baslanti
Clinical Courses of Acute Kidney Injury in Hospitalized Patients: A Multistate Analysis
null
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objectives: We aim to quantify longitudinal acute kidney injury (AKI) trajectories and to describe transitions through progressing and recovery states and outcomes among hospitalized patients using multistate models. Methods: In this large, longitudinal cohort study, 138,449 adult patients admitted to a quaternary care hospital between 2012 and 2019 were staged based on Kidney Disease: Improving Global Outcomes serum creatinine criteria for the first 14 days of their hospital stay. We fit multistate models to estimate the probability of being in a certain clinical state at a given time after entering each one of the AKI stages. We investigated the effects of selected variables on transition rates via Cox proportional hazards regression models. Results: Twenty percent of hospitalized encounters (49,325/246,964) had AKI; among patients with AKI, 66% had Stage 1 AKI, 18% had Stage 2 AKI, and 17% had Stage 3 AKI with or without RRT. At seven days following Stage 1 AKI, 69% (95% confidence interval [CI]: 68.8%-70.5%) were either resolved to No AKI or discharged, while smaller proportions of recovery (26.8%, 95% CI: 26.1%-27.5%) and discharge (17.4%, 95% CI: 16.8%-18.0%) were observed following Stage 2 AKI. At 14 days following Stage 1 AKI, patients with more frail conditions (Charlson comorbidity index greater than or equal to 3 and a prolonged ICU stay) had a lower proportion of transitioning to the No AKI or discharge states. Discussion: Multistate analyses showed that the majority of Stage 2 and higher severity AKI episodes did not resolve within seven days; therefore, strategies preventing the persistence or progression of AKI would contribute to patients' quality of life. Conclusions: We demonstrate the multistate modeling framework's utility as a mechanism for a better understanding of the clinical course of AKI, with the potential to facilitate treatment and resource planning.
[ { "created": "Wed, 8 Mar 2023 19:06:39 GMT", "version": "v1" } ]
2023-03-13
[ [ "Adiyeke", "Esra", "" ], [ "Ren", "Yuanfang", "" ], [ "Guan", "Ziyuan", "" ], [ "Ruppert", "Matthew M.", "" ], [ "Rashidi", "Parisa", "" ], [ "Bihorac", "Azra", "" ], [ "Ozrazgat-Baslanti", "Tezcan", "" ] ]
Objectives: We aim to quantify longitudinal acute kidney injury (AKI) trajectories and to describe transitions through progressing and recovery states and outcomes among hospitalized patients using multistate models. Methods: In this large, longitudinal cohort study, 138,449 adult patients admitted to a quaternary care hospital between 2012 and 2019 were staged based on Kidney Disease: Improving Global Outcomes serum creatinine criteria for the first 14 days of their hospital stay. We fit multistate models to estimate the probability of being in a certain clinical state at a given time after entering each one of the AKI stages. We investigated the effects of selected variables on transition rates via Cox proportional hazards regression models. Results: Twenty percent of hospitalized encounters (49,325/246,964) had AKI; among patients with AKI, 66% had Stage 1 AKI, 18% had Stage 2 AKI, and 17% had Stage 3 AKI with or without RRT. At seven days following Stage 1 AKI, 69% (95% confidence interval [CI]: 68.8%-70.5%) were either resolved to No AKI or discharged, while smaller proportions of recovery (26.8%, 95% CI: 26.1%-27.5%) and discharge (17.4%, 95% CI: 16.8%-18.0%) were observed following Stage 2 AKI. At 14 days following Stage 1 AKI, patients with more frail conditions (Charlson comorbidity index greater than or equal to 3 and a prolonged ICU stay) had a lower proportion of transitioning to the No AKI or discharge states. Discussion: Multistate analyses showed that the majority of Stage 2 and higher severity AKI episodes did not resolve within seven days; therefore, strategies preventing the persistence or progression of AKI would contribute to patients' quality of life. Conclusions: We demonstrate the multistate modeling framework's utility as a mechanism for a better understanding of the clinical course of AKI, with the potential to facilitate treatment and resource planning.
1412.1243
Willy Rodr\'iguez
Olivier Mazet, Willy Rodr\'iguez, Loun\`es Chikhi
Demographic inference using genetic data from a single individual: separating population size variation from population structure
40 pages, 8 figures
null
null
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapid development of sequencing technologies represents new opportunities for population genetics research. It is expected that genomic data will increase our ability to reconstruct the history of populations. While this increase in genetic information will likely help biologists and anthropologists to reconstruct the demographic history of populations, it also represents new challenges. Recent work has shown that structured populations generate signals of population size change. As a consequence it is often difficult to determine whether demographic events such as expansions or contractions (bottlenecks) inferred from genetic data are real or due to the fact that populations are structured in nature. Given that few inferential methods allow us to account for that structure, and that genomic data will necessarily increase the precision of parameter estimates, it is important to develop new approaches. In the present study we analyse two demographic models. The first is a model of instantaneous population size change whereas the second is the classical symmetric island model. We (i) re-derive the distribution of coalescence times under the two models for a sample of size two, (ii) use a maximum likelihood approach to estimate the parameters of these models, (iii) validate this estimation procedure under a wide array of parameter combinations, (iv) implement and validate a model choice procedure by using a Kolmogorov-Smirnov test. Altogether we show that it is possible to estimate parameters under several models and perform efficient model choice using genetic data from a single diploid individual.
[ { "created": "Wed, 3 Dec 2014 09:28:29 GMT", "version": "v1" } ]
2014-12-04
[ [ "Mazet", "Olivier", "" ], [ "Rodríguez", "Willy", "" ], [ "Chikhi", "Lounès", "" ] ]
The rapid development of sequencing technologies represents new opportunities for population genetics research. It is expected that genomic data will increase our ability to reconstruct the history of populations. While this increase in genetic information will likely help biologists and anthropologists to reconstruct the demographic history of populations, it also represents new challenges. Recent work has shown that structured populations generate signals of population size change. As a consequence it is often difficult to determine whether demographic events such as expansions or contractions (bottlenecks) inferred from genetic data are real or due to the fact that populations are structured in nature. Given that few inferential methods allow us to account for that structure, and that genomic data will necessarily increase the precision of parameter estimates, it is important to develop new approaches. In the present study we analyse two demographic models. The first is a model of instantaneous population size change whereas the second is the classical symmetric island model. We (i) re-derive the distribution of coalescence times under the two models for a sample of size two, (ii) use a maximum likelihood approach to estimate the parameters of these models, (iii) validate this estimation procedure under a wide array of parameter combinations, (iv) implement and validate a model choice procedure by using a Kolmogorov-Smirnov test. Altogether we show that it is possible to estimate parameters under several models and perform efficient model choice using genetic data from a single diploid individual.
1802.08279
Sarah Solomon
Sarah H. Solomon, John D. Medaglia, and Sharon L. Thompson-Schill
Implementing a Concept Network Model
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The same concept can mean different things or be instantiated in different forms depending on context, suggesting a degree of flexibility within the conceptual system. We propose that a compositional network model can be used to capture and predict this flexibility. We modeled individual concepts (e.g., BANANA, BOTTLE) as graph-theoretical networks, in which properties (e.g., YELLOW, SWEET) were represented as nodes and their associations as edges. In this framework, networks capture the within-concept statistics that reflect how properties correlate with each other across instances of a concept. We ran a classification analysis using graph eigendecomposition to validate these models, and find that these models can successfully discriminate between object concepts. We then computed formal measures from these concept networks and explored their relationship to conceptual structure. We find that diversity coefficients and core-periphery structure can be interpreted as network-based measures of conceptual flexibility and stability, respectively. These results support the feasibility of a concept network framework and highlight its ability to formally capture important characteristics of the conceptual system.
[ { "created": "Thu, 22 Feb 2018 19:49:59 GMT", "version": "v1" }, { "created": "Mon, 14 May 2018 16:29:43 GMT", "version": "v2" }, { "created": "Tue, 22 May 2018 16:14:17 GMT", "version": "v3" }, { "created": "Wed, 20 Mar 2019 16:04:03 GMT", "version": "v4" } ]
2019-03-21
[ [ "Solomon", "Sarah H.", "" ], [ "Medaglia", "John D.", "" ], [ "Thompson-Schill", "Sharon L.", "" ] ]
The same concept can mean different things or be instantiated in different forms depending on context, suggesting a degree of flexibility within the conceptual system. We propose that a compositional network model can be used to capture and predict this flexibility. We modeled individual concepts (e.g., BANANA, BOTTLE) as graph-theoretical networks, in which properties (e.g., YELLOW, SWEET) were represented as nodes and their associations as edges. In this framework, networks capture the within-concept statistics that reflect how properties correlate with each other across instances of a concept. We ran a classification analysis using graph eigendecomposition to validate these models, and find that these models can successfully discriminate between object concepts. We then computed formal measures from these concept networks and explored their relationship to conceptual structure. We find that diversity coefficients and core-periphery structure can be interpreted as network-based measures of conceptual flexibility and stability, respectively. These results support the feasibility of a concept network framework and highlight its ability to formally capture important characteristics of the conceptual system.
2108.00994
Michael Assaf
Jason Hindes, Michael Assaf and Ira B. Schwartz
Extreme outbreak dynamics in epidemic models
8 pages, 3 figures; to appear in Physical Review Letters (2022)
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by recent epidemic outbreaks, including those of COVID-19, we solve the canonical problem of calculating the dynamics and likelihood of extensive outbreaks in a population within a large class of stochastic epidemic models with demographic noise, including the Susceptible-Infected-Recovered (SIR) model and its general extensions. In the limit of large populations, we compute the probability distribution for all extensive outbreaks, including those that entail unusually large or small (extreme) proportions of the population infected. Our approach reveals that, unlike other well-known examples of rare events occurring in discrete-state stochastic systems, the statistics of extreme outbreaks emanate from a full continuum of Hamiltonian paths, each satisfying unique boundary conditions with a conserved probability flux.
[ { "created": "Mon, 2 Aug 2021 15:48:03 GMT", "version": "v1" }, { "created": "Fri, 28 Jan 2022 12:01:50 GMT", "version": "v2" } ]
2022-01-31
[ [ "Hindes", "Jason", "" ], [ "Assaf", "Michael", "" ], [ "Schwartz", "Ira B.", "" ] ]
Motivated by recent epidemic outbreaks, including those of COVID-19, we solve the canonical problem of calculating the dynamics and likelihood of extensive outbreaks in a population within a large class of stochastic epidemic models with demographic noise, including the Susceptible-Infected-Recovered (SIR) model and its general extensions. In the limit of large populations, we compute the probability distribution for all extensive outbreaks, including those that entail unusually large or small (extreme) proportions of the population infected. Our approach reveals that, unlike other well-known examples of rare events occurring in discrete-state stochastic systems, the statistics of extreme outbreaks emanate from a full continuum of Hamiltonian paths, each satisfying unique boundary conditions with a conserved probability flux.
1904.05238
Paul Reiser
Paul A. Reiser
A Physical Model for Self-Similar Seashells
34 pages, 5 figures
null
null
null
q-bio.QM
http://creativecommons.org/publicdomain/zero/1.0/
This paper presents a simple physical model for self-similar (gnomonic, or first-order) seashell growth which is expressed in coordinate-free terms. The shell is expressed as the solution of a differential equation which expresses the growth dynamics, and may be used to investigate shell growth from both the local viewpoint of the organism building it and moving with the shell opening (aperture), as well as that of a researcher making global measurements upon a complete motionless shell. Coordinate systems needed to express the global and local descriptions of the shell are chosen. The parameters of growth, or their information equivalent, remain constant in the local system, and are used by the organism to build the shell, and are likely mirrored in the DNA of the organism building it. The transformations between local and global representations are provided. The global model of Cortie, which is very similar to the present model, is expressed in terms of the present model, and the global parameters provided by Cortie for various species of mollusk may be used to calculate the equivalent local parameters. Mathematica code is provided to implement these transformations, as well as to plot the shells using both global and local parameters.
[ { "created": "Fri, 5 Apr 2019 04:46:13 GMT", "version": "v1" } ]
2019-04-11
[ [ "Reiser", "Paul A.", "" ] ]
This paper presents a simple physical model for self-similar (gnomonic, or first-order) seashell growth which is expressed in coordinate-free terms. The shell is expressed as the solution of a differential equation which expresses the growth dynamics, and may be used to investigate shell growth from both the local viewpoint of the organism building it and moving with the shell opening (aperture), as well as that of a researcher making global measurements upon a complete motionless shell. Coordinate systems needed to express the global and local descriptions of the shell are chosen. The parameters of growth, or their information equivalent, remain constant in the local system, and are used by the organism to build the shell, and are likely mirrored in the DNA of the organism building it. The transformations between local and global representations are provided. The global model of Cortie, which is very similar to the present model, is expressed in terms of the present model, and the global parameters provided by Cortie for various species of mollusk may be used to calculate the equivalent local parameters. Mathematica code is provided to implement these transformations, as well as to plot the shells using both global and local parameters.
2005.12993
Donghui Yan
Donghui Yan, Ying Xu, Pei Wang
Estimating the Number of Infected Cases in COVID-19 Pandemic
20 pages, 10 figures
null
null
null
q-bio.PE physics.soc-ph stat.ME
http://creativecommons.org/licenses/by-nc-nd/4.0/
The COVID-19 pandemic has caused major disturbance to human life. An important reason behind the widespread social anxiety is the huge uncertainty about the pandemic. A fundamental uncertainty is how many or what percentage of people have been infected. There are published and frequently updated data on various statistics of the pandemic, at local, country or global level. However, due to various reasons, many cases were not included in those reported numbers. We propose a structured approach for the estimation of the number of unreported cases, where we distinguish cases that arrive late in the reported numbers and those who had mild or no symptoms and thus were not captured by any medical system at all. We use post-report data for the estimation of the former and population matching to the latter. We estimate that the reported number of infected cases in the US should be corrected by multiplying a factor of 220.54% as of Apr 20, 2020, while the infection ratio out of the US population is estimated to be 0.53%, implying a case mortality rate at 2.85% which is close to the 3.4% suggested by the WHO in Mar 2020. Towards the end of the summer of 2020, the overall infection ratio of the US rises to 2.49% while the case mortality decreases to 2.09%, and the ratio of asymptomatic cases out of all infected cases reduces from the pre-summer 35-40% to around 20-25%.
[ { "created": "Sun, 24 May 2020 22:19:43 GMT", "version": "v1" }, { "created": "Wed, 3 Mar 2021 13:53:05 GMT", "version": "v2" } ]
2021-03-04
[ [ "Yan", "Donghui", "" ], [ "Xu", "Ying", "" ], [ "Wang", "Pei", "" ] ]
The COVID-19 pandemic has caused major disturbance to human life. An important reason behind the widespread social anxiety is the huge uncertainty about the pandemic. A fundamental uncertainty is how many or what percentage of people have been infected. There are published and frequently updated data on various statistics of the pandemic, at local, country or global level. However, due to various reasons, many cases were not included in those reported numbers. We propose a structured approach for the estimation of the number of unreported cases, where we distinguish cases that arrive late in the reported numbers and those who had mild or no symptoms and thus were not captured by any medical system at all. We use post-report data for the estimation of the former and population matching to the latter. We estimate that the reported number of infected cases in the US should be corrected by multiplying a factor of 220.54% as of Apr 20, 2020, while the infection ratio out of the US population is estimated to be 0.53%, implying a case mortality rate at 2.85% which is close to the 3.4% suggested by the WHO in Mar 2020. Towards the end of the summer of 2020, the overall infection ratio of the US rises to 2.49% while the case mortality decreases to 2.09%, and the ratio of asymptomatic cases out of all infected cases reduces from the pre-summer 35-40% to around 20-25%.
q-bio/0702029
P. Grassberger
Kim Baskerville, Peter Grassberger, and Maya Paczuski
Graph animals, subgraph sampling and motif search in large networks
14 pages, includes 16 figures (color); version 2: several minor changes; to be published in Phys. Rev. E
null
10.1103/PhysRevE.76.036107
null
q-bio.MN cond-mat.stat-mech physics.bio-ph
null
We generalize a sampling algorithm for lattice animals (connected clusters on a regular lattice) to a Monte Carlo algorithm for `graph animals', i.e. connected subgraphs in arbitrary networks. As with the algorithm in [N. Kashtan et al., Bioinformatics 20, 1746 (2004)], it provides a weighted sample, but the computation of the weights is much faster (linear in the size of subgraphs, instead of super-exponential). This allows subgraphs with up to ten or more nodes to be sampled with very high statistics, from arbitrarily large networks. Using this together with a heuristic algorithm for rapidly classifying isomorphic graphs, we present results for two protein interaction networks obtained using the TAP high throughput method: one of Escherichia coli with 230 nodes and 695 links, and one for yeast (Saccharomyces cerevisiae) with roughly ten times more nodes and links. We find in both cases that most connected subgraphs are strong motifs (Z-scores >10) or anti-motifs (Z-scores <-10) when the null model is the ensemble of networks with fixed degree sequence. Strong differences appear between the two networks, with dominant motifs in E. coli being (nearly) bipartite graphs and having many pairs of nodes which connect to the same neighbors, while dominant motifs in yeast tend towards completeness or contain large cliques. We also explore a number of methods that do not rely on measurements of Z-scores or comparisons with null models. For instance, we discuss the influence of specific complexes like the 26S proteasome in yeast, where a small number of complexes dominate the $k$-cores with large k and have a decisive effect on the strongest motifs with 6 to 8 nodes. We also present Zipf plots of counts versus rank. They show broad distributions that are not power laws, in contrast to the case when disconnected subgraphs are included.
[ { "created": "Tue, 13 Feb 2007 22:52:43 GMT", "version": "v1" }, { "created": "Fri, 22 Jun 2007 21:35:24 GMT", "version": "v2" } ]
2009-11-13
[ [ "Baskerville", "Kim", "" ], [ "Grassberger", "Peter", "" ], [ "Paczuski", "Maya", "" ] ]
We generalize a sampling algorithm for lattice animals (connected clusters on a regular lattice) to a Monte Carlo algorithm for `graph animals', i.e. connected subgraphs in arbitrary networks. As with the algorithm in [N. Kashtan et al., Bioinformatics 20, 1746 (2004)], it provides a weighted sample, but the computation of the weights is much faster (linear in the size of subgraphs, instead of super-exponential). This allows subgraphs with up to ten or more nodes to be sampled with very high statistics, from arbitrarily large networks. Using this together with a heuristic algorithm for rapidly classifying isomorphic graphs, we present results for two protein interaction networks obtained using the TAP high throughput method: one of Escherichia coli with 230 nodes and 695 links, and one for yeast (Saccharomyces cerevisiae) with roughly ten times more nodes and links. We find in both cases that most connected subgraphs are strong motifs (Z-scores >10) or anti-motifs (Z-scores <-10) when the null model is the ensemble of networks with fixed degree sequence. Strong differences appear between the two networks, with dominant motifs in E. coli being (nearly) bipartite graphs and having many pairs of nodes which connect to the same neighbors, while dominant motifs in yeast tend towards completeness or contain large cliques. We also explore a number of methods that do not rely on measurements of Z-scores or comparisons with null models. For instance, we discuss the influence of specific complexes like the 26S proteasome in yeast, where a small number of complexes dominate the $k$-cores with large k and have a decisive effect on the strongest motifs with 6 to 8 nodes. We also present Zipf plots of counts versus rank. They show broad distributions that are not power laws, in contrast to the case when disconnected subgraphs are included.
1510.04738
Umut G\"u\c{c}l\"u
Umut G\"u\c{c}l\"u, Marcel A. J. van Gerven
Semantic vector space models predict neural responses to complex visual stimuli
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Encoding models have as their objective to predict neural responses to naturalistic stimuli with the aim of elucidating how sensory information is represented in the brain. This prediction is achieved by representing the stimulus in terms of a suitable feature space and using this feature space to linearly predict observed neural responses. Here, we investigate to what extent semantic vector space models can be used to predict neural responses to complex visual stimuli. We show that these models provide good predictions of neural responses in downstream visual areas, improving significantly over a low-level control model based on Gabor wavelet pyramids. The outlined approach provides a new way to model and map high-level semantic representations across cortex.
[ { "created": "Thu, 15 Oct 2015 22:52:42 GMT", "version": "v1" } ]
2015-10-19
[ [ "Güçlü", "Umut", "" ], [ "van Gerven", "Marcel A. J.", "" ] ]
Encoding models have as their objective to predict neural responses to naturalistic stimuli with the aim of elucidating how sensory information is represented in the brain. This prediction is achieved by representing the stimulus in terms of a suitable feature space and using this feature space to linearly predict observed neural responses. Here, we investigate to what extent semantic vector space models can be used to predict neural responses to complex visual stimuli. We show that these models provide good predictions of neural responses in downstream visual areas, improving significantly over a low-level control model based on Gabor wavelet pyramids. The outlined approach provides a new way to model and map high-level semantic representations across cortex.
2110.10105
Giulia Laura Celora
Giulia L. Celora, Helen M. Byrne, P.G. Kevrekidis
Spatio-temporal modelling of phenotypic heterogeneity in tumour tissues and its impact on radiotherapy treatment
null
null
10.1016/j.jtbi.2022.111248
null
q-bio.CB nlin.PS physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
We present a mathematical model that describes how tumour heterogeneity evolves in a tissue slice that is oxygenated by a single blood vessel. Phenotype is identified with the stemness level of a cell, $s$, that determines its proliferative capacity, apoptosis propensity and response to treatment. Our study is based on numerical bifurcation analysis and dynamical simulations of a system of coupled non-local (in phenotypic space) partial differential equations that links the phenotypic evolution of the tumour cells to local oxygen levels in the tissue. In our formulation, we consider a 1D geometry where oxygen is supplied by a blood vessel located on the domain boundary and consumed by the tumour cells as it diffuses through the tissue. For biologically relevant parameter values, the system exhibits multiple steady states; in particular, depending on the initial conditions, the tumour is either eliminated ("tumour-extinction") or it persists ("tumour-invasion"). We conclude by using the model to investigate tumour responses to radiotherapy (RT), and focus on establishing which RT strategies can eliminate the tumour. Numerical simulations reveal how phenotypic heterogeneity evolves during treatment and highlight the critical role of tissue oxygen levels on the efficacy of radiation protocols that are commonly used clinically.
[ { "created": "Tue, 19 Oct 2021 16:59:30 GMT", "version": "v1" } ]
2023-11-14
[ [ "Celora", "Giulia L.", "" ], [ "Byrne", "Helen M.", "" ], [ "Kevrekidis", "P. G.", "" ] ]
We present a mathematical model that describes how tumour heterogeneity evolves in a tissue slice that is oxygenated by a single blood vessel. Phenotype is identified with the stemness level of a cell, $s$, that determines its proliferative capacity, apoptosis propensity and response to treatment. Our study is based on numerical bifurcation analysis and dynamical simulations of a system of coupled non-local (in phenotypic space) partial differential equations that links the phenotypic evolution of the tumour cells to local oxygen levels in the tissue. In our formulation, we consider a 1D geometry where oxygen is supplied by a blood vessel located on the domain boundary and consumed by the tumour cells as it diffuses through the tissue. For biologically relevant parameter values, the system exhibits multiple steady states; in particular, depending on the initial conditions, the tumour is either eliminated ("tumour-extinction") or it persists ("tumour-invasion"). We conclude by using the model to investigate tumour responses to radiotherapy (RT), and focus on establishing which RT strategies can eliminate the tumour. Numerical simulations reveal how phenotypic heterogeneity evolves during treatment and highlight the critical role of tissue oxygen levels on the efficacy of radiation protocols that are commonly used clinically.
2312.14249
Yingzhou Lu
Yingzhou Lu, Minjie Shen, Yue Zhao, Chenhao Li, Fan Meng, Xiao Wang, David Herrington, Yue Wang, Tim Fu, Capucine Van Rechem
GenoCraft: A Comprehensive, User-Friendly Web-Based Platform for High-Throughput Omics Data Analysis and Visualization
null
null
null
null
q-bio.GN cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The surge in high-throughput omics data has reshaped the landscape of biological research, underlining the need for powerful, user-friendly data analysis and interpretation tools. This paper presents GenoCraft, a web-based comprehensive software solution designed to handle the entire pipeline of omics data processing. GenoCraft offers a unified platform featuring advanced bioinformatics tools, covering all aspects of omics data analysis. It encompasses a range of functionalities, such as normalization, quality control, differential analysis, network analysis, pathway analysis, and diverse visualization techniques. This software makes state-of-the-art omics data analysis more accessible to a wider range of users. With GenoCraft, researchers and data scientists have access to an array of cutting-edge bioinformatics tools under a user-friendly interface, making it a valuable resource for managing and analyzing large-scale omics data. The API with an interactive web interface is publicly available at https://genocraft.stanford.edu/. We also release all the codes in https://github.com/futianfan/GenoCraft.
[ { "created": "Thu, 21 Dec 2023 19:06:34 GMT", "version": "v1" } ]
2023-12-25
[ [ "Lu", "Yingzhou", "" ], [ "Shen", "Minjie", "" ], [ "Zhao", "Yue", "" ], [ "Li", "Chenhao", "" ], [ "Meng", "Fan", "" ], [ "Wang", "Xiao", "" ], [ "Herrington", "David", "" ], [ "Wang", "Yue", "" ], [ "Fu", "Tim", "" ], [ "Van Rechem", "Capucine", "" ] ]
The surge in high-throughput omics data has reshaped the landscape of biological research, underlining the need for powerful, user-friendly data analysis and interpretation tools. This paper presents GenoCraft, a web-based comprehensive software solution designed to handle the entire pipeline of omics data processing. GenoCraft offers a unified platform featuring advanced bioinformatics tools, covering all aspects of omics data analysis. It encompasses a range of functionalities, such as normalization, quality control, differential analysis, network analysis, pathway analysis, and diverse visualization techniques. This software makes state-of-the-art omics data analysis more accessible to a wider range of users. With GenoCraft, researchers and data scientists have access to an array of cutting-edge bioinformatics tools under a user-friendly interface, making it a valuable resource for managing and analyzing large-scale omics data. The API with an interactive web interface is publicly available at https://genocraft.stanford.edu/. We also release all the codes in https://github.com/futianfan/GenoCraft.
2007.00159
Niayesh Afshordi
Niayesh Afshordi (U-Waterloo/Perimeter), Benjamin Holder (GVSU/U-Waterloo), Mohammad Bahrami, and Daniel Lichtblau (Wolfram Research)
Diverse local epidemics reveal the distinct effects of population density, demographics, climate, depletion of susceptibles, and intervention in the first wave of COVID-19 in the United States
28 pages, 17 figures, COVID-19 cloud simulations and resources available at https://nafshordi.com/covid/ and https://wolfr.am/COVID19Dash
null
null
null
q-bio.PE physics.soc-ph q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
The SARS-CoV-2 pandemic has caused significant mortality and morbidity worldwide, sparing almost no community. As the disease will likely remain a threat for years to come, an understanding of the precise influences of human demographics and settlement, as well as the dynamic factors of climate, susceptible depletion, and intervention, on the spread of localized epidemics will be vital for mounting an effective response. We consider the entire set of local epidemics in the United States; a broad selection of demographic, population density, and climate factors; and local mobility data, tracking social distancing interventions, to determine the key factors driving the spread and containment of the virus. Assuming first a linear model for the rate of exponential growth (or decay) in cases/mortality, we find that population-weighted density, humidity, and median age dominate the dynamics of growth and decline, once interventions are accounted for. A focus on distinct metropolitan areas suggests that some locales benefited from the timing of a nearly simultaneous nationwide shutdown, and/or the regional climate conditions in mid-March; while others suffered significant outbreaks prior to intervention. Using a first-principles model of the infection spread, we then develop predictions for the impact of the relaxation of social distancing and local climate conditions. A few regions, where a significant fraction of the population was infected, show evidence that the epidemic has partially resolved via depletion of the susceptible population (i.e., "herd immunity"), while most regions in the United States remain overwhelmingly susceptible. These results will be important for optimal management of intervention strategies, which can be facilitated using our online dashboard.
[ { "created": "Wed, 1 Jul 2020 00:19:39 GMT", "version": "v1" } ]
2020-07-02
[ [ "Afshordi", "Niayesh", "", "U-Waterloo/Perimeter" ], [ "Holder", "Benjamin", "", "GVSU/U-Waterloo" ], [ "Bahrami", "Mohammad", "", "Wolfram Research" ], [ "Lichtblau", "Daniel", "", "Wolfram Research" ] ]
The SARS-CoV-2 pandemic has caused significant mortality and morbidity worldwide, sparing almost no community. As the disease will likely remain a threat for years to come, an understanding of the precise influences of human demographics and settlement, as well as the dynamic factors of climate, susceptible depletion, and intervention, on the spread of localized epidemics will be vital for mounting an effective response. We consider the entire set of local epidemics in the United States; a broad selection of demographic, population density, and climate factors; and local mobility data, tracking social distancing interventions, to determine the key factors driving the spread and containment of the virus. Assuming first a linear model for the rate of exponential growth (or decay) in cases/mortality, we find that population-weighted density, humidity, and median age dominate the dynamics of growth and decline, once interventions are accounted for. A focus on distinct metropolitan areas suggests that some locales benefited from the timing of a nearly simultaneous nationwide shutdown, and/or the regional climate conditions in mid-March; while others suffered significant outbreaks prior to intervention. Using a first-principles model of the infection spread, we then develop predictions for the impact of the relaxation of social distancing and local climate conditions. A few regions, where a significant fraction of the population was infected, show evidence that the epidemic has partially resolved via depletion of the susceptible population (i.e., "herd immunity"), while most regions in the United States remain overwhelmingly susceptible. These results will be important for optimal management of intervention strategies, which can be facilitated using our online dashboard.
1708.08916
Erfan Sayyari
Erfan Sayyari and Siavash Mirarab
Testing for polytomies in phylogenetic species trees using quartet frequencies
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic species trees typically represent the speciation history as a bifurcating tree. Speciation events that simultaneously create more than two descendants, thereby creating polytomies in the phylogeny, are possible. Moreover, the inability to resolve relationships is often shown as a (soft) polytomy. Both types of polytomies have been traditionally studied in the context of gene tree reconstruction from sequence data. However, polytomies in the species tree cannot be detected or ruled out without considering gene tree discordance. In this paper, we describe a statistical test based on properties of the multi-species coalescent model to test the null hypothesis that a branch in an estimated species tree should be replaced by a polytomy. On both simulated and biological datasets, we show that the null hypothesis is rejected for all but the shortest branches, and in most cases, it is retained for true polytomies. The test, available as part of the ASTRAL package, can help systematists decide whether their datasets are sufficient to resolve specific relationships of interest.
[ { "created": "Tue, 29 Aug 2017 02:25:07 GMT", "version": "v1" }, { "created": "Thu, 31 Aug 2017 04:08:47 GMT", "version": "v2" }, { "created": "Sat, 2 Dec 2017 01:22:44 GMT", "version": "v3" }, { "created": "Wed, 7 Feb 2018 03:52:24 GMT", "version": "v4" } ]
2018-02-08
[ [ "Sayyari", "Erfan", "" ], [ "Mirarab", "Siavash", "" ] ]
Phylogenetic species trees typically represent the speciation history as a bifurcating tree. Speciation events that simultaneously create more than two descendants, thereby creating polytomies in the phylogeny, are possible. Moreover, the inability to resolve relationships is often shown as a (soft) polytomy. Both types of polytomies have been traditionally studied in the context of gene tree reconstruction from sequence data. However, polytomies in the species tree cannot be detected or ruled out without considering gene tree discordance. In this paper, we describe a statistical test based on properties of the multi-species coalescent model to test the null hypothesis that a branch in an estimated species tree should be replaced by a polytomy. On both simulated and biological datasets, we show that the null hypothesis is rejected for all but the shortest branches, and in most cases, it is retained for true polytomies. The test, available as part of the ASTRAL package, can help systematists decide whether their datasets are sufficient to resolve specific relationships of interest.
1412.1399
Lutz Brusch
Lionel Foret, Lutz Brusch and Frank J\"ulicher
Theory of cargo and membrane trafficking
review, 11 pages, 3 figures
null
null
null
q-bio.SC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Endocytosis underlies many cellular functions including signaling and nutrient uptake. The endocytosed cargo gets redistributed across a dynamic network of endosomes undergoing fusion and fission. Here, a theoretical approach is reviewed which can explain how the microscopic properties of endosome interactions cause the emergent macroscopic properties of cargo trafficking in the endosomal network. Predictions by the theory have been tested experimentally and include the inference of dependencies and parameter values of the microscopic processes. This theory could also be used to infer mechanisms of signal-trafficking crosstalk. It is applicable to in vivo systems since fixed samples at few time points suffice as input data.
[ { "created": "Wed, 3 Dec 2014 16:53:11 GMT", "version": "v1" } ]
2014-12-04
[ [ "Foret", "Lionel", "" ], [ "Brusch", "Lutz", "" ], [ "Jülicher", "Frank", "" ] ]
Endocytosis underlies many cellular functions including signaling and nutrient uptake. The endocytosed cargo gets redistributed across a dynamic network of endosomes undergoing fusion and fission. Here, a theoretical approach is reviewed which can explain how the microscopic properties of endosome interactions cause the emergent macroscopic properties of cargo trafficking in the endosomal network. Predictions by the theory have been tested experimentally and include the inference of dependencies and parameter values of the microscopic processes. This theory could also be used to infer mechanisms of signal-trafficking crosstalk. It is applicable to in vivo systems since fixed samples at few time points suffice as input data.
2406.00735
Jiahan Li
Jiahan Li, Chaoran Cheng, Zuofan Wu, Ruihan Guo, Shitong Luo, Zhizhou Ren, Jian Peng, Jianzhu Ma
Full-Atom Peptide Design based on Multi-modal Flow Matching
ICML 2024
null
null
null
q-bio.BM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Peptides, short chains of amino acid residues, play a vital role in numerous biological processes by interacting with other target molecules, offering substantial potential in drug discovery. In this work, we present PepFlow, the first multi-modal deep generative model grounded in the flow-matching framework for the design of full-atom peptides that target specific protein receptors. Drawing inspiration from the crucial roles of residue backbone orientations and side-chain dynamics in protein-peptide interactions, we characterize the peptide structure using rigid backbone frames within the $\mathrm{SE}(3)$ manifold and side-chain angles on high-dimensional tori. Furthermore, we represent discrete residue types in the peptide sequence as categorical distributions on the probability simplex. By learning the joint distributions of each modality using derived flows and vector fields on corresponding manifolds, our method excels in the fine-grained design of full-atom peptides. Harnessing the multi-modal paradigm, our approach adeptly tackles various tasks such as fix-backbone sequence design and side-chain packing through partial sampling. Through meticulously crafted experiments, we demonstrate that PepFlow exhibits superior performance in comprehensive benchmarks, highlighting its significant potential in computational peptide design and analysis.
[ { "created": "Sun, 2 Jun 2024 12:59:54 GMT", "version": "v1" } ]
2024-06-04
[ [ "Li", "Jiahan", "" ], [ "Cheng", "Chaoran", "" ], [ "Wu", "Zuofan", "" ], [ "Guo", "Ruihan", "" ], [ "Luo", "Shitong", "" ], [ "Ren", "Zhizhou", "" ], [ "Peng", "Jian", "" ], [ "Ma", "Jianzhu", "" ] ]
Peptides, short chains of amino acid residues, play a vital role in numerous biological processes by interacting with other target molecules, offering substantial potential in drug discovery. In this work, we present PepFlow, the first multi-modal deep generative model grounded in the flow-matching framework for the design of full-atom peptides that target specific protein receptors. Drawing inspiration from the crucial roles of residue backbone orientations and side-chain dynamics in protein-peptide interactions, we characterize the peptide structure using rigid backbone frames within the $\mathrm{SE}(3)$ manifold and side-chain angles on high-dimensional tori. Furthermore, we represent discrete residue types in the peptide sequence as categorical distributions on the probability simplex. By learning the joint distributions of each modality using derived flows and vector fields on corresponding manifolds, our method excels in the fine-grained design of full-atom peptides. Harnessing the multi-modal paradigm, our approach adeptly tackles various tasks such as fix-backbone sequence design and side-chain packing through partial sampling. Through meticulously crafted experiments, we demonstrate that PepFlow exhibits superior performance in comprehensive benchmarks, highlighting its significant potential in computational peptide design and analysis.
2205.12629
Erida Gjini
Ermanda Dekaj and Erida Gjini
Pneumococcus and the stress-gradient hypothesis: a trade-off links $R_0$ and susceptibility to co-colonization across countries
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Modern molecular technologies have revolutionized our understanding of bacterial epidemiology, but reported data across different settings remain under-integrated in common theoretical frameworks. Pneumococcus serotype co-colonization, caused by the polymorphic bacterium Streptococcus pneumoniae, has been increasingly investigated in recent years. While the global genomic diversity and serotype distribution of S. pneumoniae are well-characterized, there is limited information on how co-colonization patterns vary globally, critical for understanding bacterial evolution and dynamics. Gathering a rich dataset of cross-sectional pneumococcal colonization studies in the literature, we quantified patterns of transmission intensity and co-colonization prevalence in children populations across 17 geographic locations. Fitting these data to an SIS model with co-colonization under the assumption of similarity among interacting strains, our analysis reveals strong patterns of negative co-variation between transmission intensity ($R_0$) and susceptibility to co-colonization ($k$). In support of the stress-gradient hypothesis in ecology (SGH), pneumococcus serotypes appear to compete more in high-transmission settings and less in low-transmission settings, a trade-off which ultimately leads to a conserved ratio of single to co-colonization $\mu=1/(R_0-1)k$. Within our mathematical model, such conservation suggests preservation of 'stability-diversity-complexity' regimes in multi-strain coexistence. We find no major study differences in serotype composition, pointing to underlying adaptation of the same set of serotypes across environments. Our work highlights that understanding pneumococcus transmission patterns from global epidemiological data can benefit from simple analytical approaches that account for quasi-neutrality among strains, co-colonization, as well as variable environmental adaptation.
[ { "created": "Wed, 25 May 2022 10:08:47 GMT", "version": "v1" } ]
2022-05-26
[ [ "Dekaj", "Ermanda", "" ], [ "Gjini", "Erida", "" ] ]
Modern molecular technologies have revolutionized our understanding of bacterial epidemiology, but reported data across different settings remain under-integrated in common theoretical frameworks. Pneumococcus serotype co-colonization, caused by the polymorphic bacterium Streptococcus pneumoniae, has been increasingly investigated in recent years. While the global genomic diversity and serotype distribution of S. pneumoniae are well-characterized, there is limited information on how co-colonization patterns vary globally, critical for understanding bacterial evolution and dynamics. Gathering a rich dataset of cross-sectional pneumococcal colonization studies in the literature, we quantified patterns of transmission intensity and co-colonization prevalence in children populations across 17 geographic locations. Fitting these data to an SIS model with co-colonization under the assumption of similarity among interacting strains, our analysis reveals strong patterns of negative co-variation between transmission intensity ($R_0$) and susceptibility to co-colonization ($k$). In support of the stress-gradient hypothesis in ecology (SGH), pneumococcus serotypes appear to compete more in high-transmission settings and less in low-transmission settings, a trade-off which ultimately leads to a conserved ratio of single to co-colonization $\mu=1/(R_0-1)k$. Within our mathematical model, such conservation suggests preservation of 'stability-diversity-complexity' regimes in multi-strain coexistence. We find no major study differences in serotype composition, pointing to underlying adaptation of the same set of serotypes across environments. Our work highlights that understanding pneumococcus transmission patterns from global epidemiological data can benefit from simple analytical approaches that account for quasi-neutrality among strains, co-colonization, as well as variable environmental adaptation.
1703.04182
Alexander Vasilyev
Alexander Yurievich Vasilyev
Optimal control of eye-movements during visual search
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of optimal oculomotor control during the execution of visual search tasks. We introduce a computational model of human eye movements, which takes into account various constraints of the human visual and oculomotor systems. In the model, the choice of the subsequent fixation location is posed as a problem of stochastic optimal control, which relies on reinforcement learning methods. We show that if biological constraints are taken into account, the trajectories simulated under the learned policy share both basic statistical properties and scaling behaviour with human eye movements. We validated our model simulations with human psychophysical eye-tracking experiments.
[ { "created": "Sun, 12 Mar 2017 21:56:29 GMT", "version": "v1" }, { "created": "Wed, 15 Mar 2017 10:23:08 GMT", "version": "v2" }, { "created": "Sun, 19 Mar 2017 16:59:39 GMT", "version": "v3" }, { "created": "Thu, 13 Apr 2017 14:50:03 GMT", "version": "v4" }, { "created": "Thu, 14 Sep 2017 21:39:50 GMT", "version": "v5" }, { "created": "Wed, 4 Apr 2018 14:24:03 GMT", "version": "v6" }, { "created": "Tue, 28 Aug 2018 20:42:53 GMT", "version": "v7" } ]
2018-08-30
[ [ "Vasilyev", "Alexander Yurievich", "" ] ]
We study the problem of optimal oculomotor control during the execution of visual search tasks. We introduce a computational model of human eye movements, which takes into account various constraints of the human visual and oculomotor systems. In the model, the choice of the subsequent fixation location is posed as a problem of stochastic optimal control, which relies on reinforcement learning methods. We show that if biological constraints are taken into account, the trajectories simulated under the learned policy share both basic statistical properties and scaling behaviour with human eye movements. We validated our model simulations with human psychophysical eye-tracking experiments.
1801.01853
Trang-Anh Estelle Nghiem
Trang-Anh Nghiem, Bartosz Telenczuk, Olivier Marre, Alain Destexhe, Ulisse Ferrari
Maximum entropy models reveal the excitatory and inhibitory correlation structures in cortical neuronal activity
17 pages, 11 figures (including 5 supplementary)
Phys. Rev. E 98, 012402 (2018)
10.1103/PhysRevE.98.012402
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Maximum Entropy models can be inferred from large data-sets to uncover how collective dynamics emerge from local interactions. Here, such models are employed to investigate neurons recorded by multielectrode arrays in the human and monkey cortex. Taking advantage of the separation of excitatory and inhibitory neuron types, we construct a model including this distinction. This approach allows to shed light upon differences between excitatory and inhibitory activity across different brain states such as wakefulness and deep sleep, in agreement with previous findings. Additionally, Maximum Entropy models can also unveil novel features of neuronal interactions, which are found to be dominated by pairwise interactions during wakefulness, but are population-wide during deep sleep. In particular, inhibitory neurons are observed to be strongly tuned to the inhibitory population. Overall, we demonstrate Maximum Entropy models can be useful to analyze data-sets with classified neuron types, and to reveal the respective roles of excitatory and inhibitory neurons in organizing coherent dynamics in the cerebral cortex.
[ { "created": "Fri, 5 Jan 2018 17:49:05 GMT", "version": "v1" }, { "created": "Mon, 15 Jan 2018 17:34:49 GMT", "version": "v2" }, { "created": "Tue, 10 Jul 2018 13:25:59 GMT", "version": "v3" } ]
2018-07-11
[ [ "Nghiem", "Trang-Anh", "" ], [ "Telenczuk", "Bartosz", "" ], [ "Marre", "Olivier", "" ], [ "Destexhe", "Alain", "" ], [ "Ferrari", "Ulisse", "" ] ]
Maximum Entropy models can be inferred from large data-sets to uncover how collective dynamics emerge from local interactions. Here, such models are employed to investigate neurons recorded by multielectrode arrays in the human and monkey cortex. Taking advantage of the separation of excitatory and inhibitory neuron types, we construct a model including this distinction. This approach allows to shed light upon differences between excitatory and inhibitory activity across different brain states such as wakefulness and deep sleep, in agreement with previous findings. Additionally, Maximum Entropy models can also unveil novel features of neuronal interactions, which are found to be dominated by pairwise interactions during wakefulness, but are population-wide during deep sleep. In particular, inhibitory neurons are observed to be strongly tuned to the inhibitory population. Overall, we demonstrate Maximum Entropy models can be useful to analyze data-sets with classified neuron types, and to reveal the respective roles of excitatory and inhibitory neurons in organizing coherent dynamics in the cerebral cortex.
1904.10124
Christos Skiadas H
Christos H Skiadas and Charilaos Skiadas
Relation of the Weibull Shape Parameter with the Healthy Life Years Lost Estimates: Analytic Derivation and Estimation from an Extended Life Table
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Matsushita et al (1992) made an interesting finding. They observed that the shape parameter of the Weibull model presented systematic changes over time and age when applied to mortality data for males and females in Japan. They also estimated that this parameter was smaller in the 1891-1898 data in Japan compared to the 1980 mortality data, and they presented an illustrative figure for females where the values of the shape parameter are shown on the diagram close to the corresponding survival curves. However, they did not provide an analytical explanation of this behavior of the shape parameter of the Weibull model. The cumulative hazard of this model can express the additive process of applying a force in a material for enough time before cracking. Turning to human data, the Weibull model and the cumulative hazard can express the additive process by which disabilities and diseases affect the human organism during the life span, leading to healthy life years lost. In this paper we analytically derive a more general model of survival-mortality in which we estimate a parameter related to the Healthy Life Years Lost (HLYL), leading to the Weibull model and the corresponding shape parameter as a specific case. We have also demonstrated that the results found for the general HLYL parameter we propose are similar to those provided by the World Health Organization for the Healthy Life Expectancy (HALE) and the corresponding HLYL estimates. An analytic derivation of the mathematical formulas is presented along with an easy-to-apply Excel program. This program is an extension of the classical life table including four more columns to estimate the cumulative mortality, the average mortality, the person life years lost, and finally the HLYL parameter bx. The latest versions of this program appear on the Demographics2019 website
[ { "created": "Tue, 23 Apr 2019 02:41:55 GMT", "version": "v1" } ]
2019-04-24
[ [ "Skiadas", "Christos H", "" ], [ "Skiadas", "Charilaos", "" ] ]
Matsushita et al (1992) made an interesting finding. They observed that the shape parameter of the Weibull model presented systematic changes over time and age when applied to mortality data for males and females in Japan. They also estimated that this parameter was smaller in the 1891-1898 data in Japan compared to the 1980 mortality data, and they presented an illustrative figure for females where the values of the shape parameter are shown on the diagram close to the corresponding survival curves. However, they did not provide an analytical explanation of this behavior of the shape parameter of the Weibull model. The cumulative hazard of this model can express the additive process of applying a force in a material for enough time before cracking. Turning to human data, the Weibull model and the cumulative hazard can express the additive process by which disabilities and diseases affect the human organism during the life span, leading to healthy life years lost. In this paper we analytically derive a more general model of survival-mortality in which we estimate a parameter related to the Healthy Life Years Lost (HLYL), leading to the Weibull model and the corresponding shape parameter as a specific case. We have also demonstrated that the results found for the general HLYL parameter we propose are similar to those provided by the World Health Organization for the Healthy Life Expectancy (HALE) and the corresponding HLYL estimates. An analytic derivation of the mathematical formulas is presented along with an easy-to-apply Excel program. This program is an extension of the classical life table including four more columns to estimate the cumulative mortality, the average mortality, the person life years lost, and finally the HLYL parameter bx. The latest versions of this program appear on the Demographics2019 website
0809.0110
Francesc Rossell\'o
Gabriel Cardona, Merce Llabres, Francesc Rossello, Gabriel Valiente
On Nakhleh's latest metric for phylogenetic networks
15 pages
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/3.0/
We prove that Nakhleh's latest dissimilarity measure for phylogenetic networks is a metric on the classes of tree-child phylogenetic networks, of semi-binary time consistent tree-sibling phylogenetic networks, and of multi-labeled phylogenetic trees. We also prove that it distinguishes phylogenetic networks with different reduced versions. In this way, it becomes the dissimilarity measure for phylogenetic networks with the strongest separation power available so far.
[ { "created": "Sun, 31 Aug 2008 10:27:04 GMT", "version": "v1" } ]
2008-09-02
[ [ "Cardona", "Gabriel", "" ], [ "Llabres", "Merce", "" ], [ "Rossello", "Francesc", "" ], [ "Valiente", "Gabriel", "" ] ]
We prove that Nakhleh's latest dissimilarity measure for phylogenetic networks is a metric on the classes of tree-child phylogenetic networks, of semi-binary time consistent tree-sibling phylogenetic networks, and of multi-labeled phylogenetic trees. We also prove that it distinguishes phylogenetic networks with different reduced versions. In this way, it becomes the dissimilarity measure for phylogenetic networks with the strongest separation power available so far.
1505.06726
Sergey Denisov
Olena Tkachenko, Juzar Thingna, Sergey Denisov, Vasily Zaburdaev, and Peter H\"anggi
Seasonal Floquet states in a game-driven evolutionary dynamics
5 pages + 4 figures + supplementary material included
null
null
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mating preferences of many biological species are not constant but season-dependent. Within the framework of evolutionary game theory this can be modeled with two finite opposite-sex populations playing against each other following the rules that are periodically changing. By combining Floquet theory and the concept of quasi-stationary distributions, we reveal existence of metastable time-periodic states in the evolution of finite game-driven populations. The evolutionary Floquet states correspond to time-periodic probability flows in the strategy space which cannot be resolved within the mean-field framework. The lifetime of metastable Floquet states increases with the size $N$ of populations so that they become attractors in the limit $N \rightarrow \infty$.
[ { "created": "Mon, 25 May 2015 19:54:56 GMT", "version": "v1" }, { "created": "Fri, 29 May 2015 19:37:57 GMT", "version": "v2" }, { "created": "Wed, 12 Aug 2015 13:05:18 GMT", "version": "v3" } ]
2015-08-13
[ [ "Tkachenko", "Olena", "" ], [ "Thingna", "Juzar", "" ], [ "Denisov", "Sergey", "" ], [ "Zaburdaev", "Vasily", "" ], [ "Hänggi", "Peter", "" ] ]
Mating preferences of many biological species are not constant but season-dependent. Within the framework of evolutionary game theory this can be modeled with two finite opposite-sex populations playing against each other following the rules that are periodically changing. By combining Floquet theory and the concept of quasi-stationary distributions, we reveal existence of metastable time-periodic states in the evolution of finite game-driven populations. The evolutionary Floquet states correspond to time-periodic probability flows in the strategy space which cannot be resolved within the mean-field framework. The lifetime of metastable Floquet states increases with the size $N$ of populations so that they become attractors in the limit $N \rightarrow \infty$.
1404.0196
Salva Duran-Nebreda
Salva Duran-Nebreda and Ricard V. Sol\'e
Emergence of multicellularity in a model of cell growth, death and aggregation under size-dependent selection
7 pages, 5 figures
null
10.1098/rsif.2014.0982
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How multicellular life forms evolved out from unicellular ones constitutes a major problem in our understanding of the evolution of our biosphere. A recent set of experiments involving yeast cell populations has shown that selection for faster sedimenting cells leads to the appearance of stable aggregates of cells that are able to split into smaller clusters. It was suggested that the observed evolutionary patterns could be the result of evolved programs affecting cell death. Here we show, using a simple model of cell-cell interactions and evolving adhesion rates, that the observed patterns in cluster size and localized mortality can be easily interpreted in terms of waste accumulation and toxicity driven apoptosis. This simple mechanism would have played a key role in the early evolution of multicellular life forms based on both aggregative and clonal development. The potential extensions of this work and its implications for natural and synthetic multicellularity are discussed.
[ { "created": "Tue, 1 Apr 2014 10:53:32 GMT", "version": "v1" }, { "created": "Sun, 8 Nov 2015 17:11:39 GMT", "version": "v2" } ]
2015-11-10
[ [ "Duran-Nebreda", "Salva", "" ], [ "Solé", "Ricard V.", "" ] ]
How multicellular life forms evolved out from unicellular ones constitutes a major problem in our understanding of the evolution of our biosphere. A recent set of experiments involving yeast cell populations has shown that selection for faster sedimenting cells leads to the appearance of stable aggregates of cells that are able to split into smaller clusters. It was suggested that the observed evolutionary patterns could be the result of evolved programs affecting cell death. Here we show, using a simple model of cell-cell interactions and evolving adhesion rates, that the observed patterns in cluster size and localized mortality can be easily interpreted in terms of waste accumulation and toxicity driven apoptosis. This simple mechanism would have played a key role in the early evolution of multicellular life forms based on both aggregative and clonal development. The potential extensions of this work and its implications for natural and synthetic multicellularity are discussed.
q-bio/0401031
Paul Smolen
Paul Smolen, Paul E. Hardin, Brian S. Lo, Douglas A. Baxter, John H. Byrne
Simulation of Drosophila Circadian Oscillations, Mutations, and Light Responses by a Model with VRI, PDP-1, and CLK
Accepted to Biophysical Journal, 1/16/04. Single PDF file, 7 figures at end
null
10.1016/S0006-3495(04)74332-5
null
q-bio.MN q-bio.SC
null
A model of Drosophila circadian rhythm generation was developed to represent feedback loops based on transcriptional regulation of per, Clk (dclock), Pdp-1, and vri (vrille). The model postulates that histone acetylation kinetics make transcriptional activation a nonlinear function of [CLK]. Such a nonlinearity is essential to simulate robust circadian oscillations of transcription in our model and in previous models. Simulations suggest two positive feedback loops involving Clk are not essential for oscillations, because oscillations of [PER] were preserved when Clk, vri, or Pdp-1 expression was fixed. Eliminating the negative feedback loop in which PER represses per expression abolished oscillations. Simulations of per or Clk null mutations and of vri, Clk, or Pdp-1 heterozygous null mutations altered model behavior in ways similar to experimental data. The model simulated a photic phase-response curve resembling experimental curves, and oscillations entrained to simulated light-dark cycles. The model makes experimental predictions, some of which could be tested in transgenic Drosophila.
[ { "created": "Fri, 23 Jan 2004 20:09:48 GMT", "version": "v1" } ]
2009-11-10
[ [ "Smolen", "Paul", "" ], [ "Hardin", "Paul E.", "" ], [ "Lo", "Brian S.", "" ], [ "Baxter", "Douglas A.", "" ], [ "Byrne", "John H.", "" ] ]
A model of Drosophila circadian rhythm generation was developed to represent feedback loops based on transcriptional regulation of per, Clk (dclock), Pdp-1, and vri (vrille). The model postulates that histone acetylation kinetics make transcriptional activation a nonlinear function of [CLK]. Such a nonlinearity is essential to simulate robust circadian oscillations of transcription in our model and in previous models. Simulations suggest two positive feedback loops involving Clk are not essential for oscillations, because oscillations of [PER] were preserved when Clk, vri, or Pdp-1 expression was fixed. Eliminating the negative feedback loop in which PER represses per expression abolished oscillations. Simulations of per or Clk null mutations and of vri, Clk, or Pdp-1 heterozygous null mutations altered model behavior in ways similar to experimental data. The model simulated a photic phase-response curve resembling experimental curves, and oscillations entrained to simulated light-dark cycles. The model makes experimental predictions, some of which could be tested in transgenic Drosophila.
2403.02603
Dong Chen
Dong Chen, Gengzhuo Liu, Hongyan Du, Junjie Wee, Rui Wang, Jiahui Chen, Jana Shen, and Guo-Wei Wei
Drug resistance revealed by in silico deep mutational scanning and mutation tracker
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As COVID-19 enters its fifth year, it continues to pose a significant global health threat, with the constantly mutating SARS-CoV-2 virus challenging drug effectiveness. A comprehensive understanding of virus-drug interactions is essential for predicting and improving drug effectiveness, especially in combating drug resistance during the pandemic. In response, the Path Laplacian Transformer-based Prospective Analysis Framework (PLFormer-PAF) has been proposed, integrating historical data analysis and predictive modeling strategies. This dual-strategy approach utilizes path topology to transform protein-ligand complexes into topological sequences, enabling the use of advanced large language models for analyzing protein-ligand interactions and enhancing its reliability with factual insights garnered from historical data. It has shown unparalleled performance in predicting binding affinity tasks across various benchmarks, including specific evaluations related to SARS-CoV-2, and assesses the impact of virus mutations on drug efficacy, offering crucial insights into potential drug resistance. The predictions align with observed mutation patterns in SARS-CoV-2, indicating that the widespread use of the Pfizer drug has led to viral evolution and reduced drug efficacy. PLFormer-PAF's capabilities extend beyond identifying drug-resistant strains, positioning it as a key tool in drug discovery research and the development of new therapeutic strategies against fast-mutating viruses like COVID-19.
[ { "created": "Tue, 5 Mar 2024 02:35:47 GMT", "version": "v1" } ]
2024-03-06
[ [ "Chen", "Dong", "" ], [ "Liu", "Gengzhuo", "" ], [ "Du", "Hongyan", "" ], [ "Wee", "Junjie", "" ], [ "Wang", "Rui", "" ], [ "Chen", "Jiahui", "" ], [ "Shen", "Jana", "" ], [ "Wei", "Guo-Wei", "" ] ]
As COVID-19 enters its fifth year, it continues to pose a significant global health threat, with the constantly mutating SARS-CoV-2 virus challenging drug effectiveness. A comprehensive understanding of virus-drug interactions is essential for predicting and improving drug effectiveness, especially in combating drug resistance during the pandemic. In response, the Path Laplacian Transformer-based Prospective Analysis Framework (PLFormer-PAF) has been proposed, integrating historical data analysis and predictive modeling strategies. This dual-strategy approach utilizes path topology to transform protein-ligand complexes into topological sequences, enabling the use of advanced large language models for analyzing protein-ligand interactions and enhancing its reliability with factual insights garnered from historical data. It has shown unparalleled performance in predicting binding affinity tasks across various benchmarks, including specific evaluations related to SARS-CoV-2, and assesses the impact of virus mutations on drug efficacy, offering crucial insights into potential drug resistance. The predictions align with observed mutation patterns in SARS-CoV-2, indicating that the widespread use of the Pfizer drug has led to viral evolution and reduced drug efficacy. PLFormer-PAF's capabilities extend beyond identifying drug-resistant strains, positioning it as a key tool in drug discovery research and the development of new therapeutic strategies against fast-mutating viruses like COVID-19.
0807.1061
Michel Aoun
Michel Aoun, Gilbert Charles, Annick Hourmant
Micropropagation of three genotypes of Indian mustard [Brassica juncea (L.) Czern.] using seedling-derived transverse thin cell layer (tTCL) explants
12 pages, 2 figures and 2 tables
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Micropropagation of three genotypes of Indian mustard [\textit{Brassica juncea} (L.) Czern.] using 7-day-old seedling-derived transverse thin cell layer (tTCL) explants was accomplished. The genotype, explant source and addition of silver nitrate to the medium significantly influenced shoot bud induction. MS medium with 26.6 $\mu$M of 6-benzylaminopurine (BAP) and 3.22 $\mu$M of 1-naphthaleneacetic acid (NAA) was identical (in the case of cotyledon tTCLs, whatever the organ) and superior for the induction of buds (in the cases of petiole tTCL explants of genotypes 1 and 2 and hypocotyl tTCL explants of genotypes 1 and 3) to medium with 53.3 $\mu$M of BAP and 3.22 $\mu$M of NAA. However, 53.3 $\mu$M of BAP was superior to 26.6 $\mu$M for the induction of buds in the presence of the same concentration of NAA for petiole tTCL explants of genotype 3 and hypocotyl tTCL explants of genotype 2. The addition of silver nitrate significantly enhanced the rate of shoot induction in all genotypes. Cotyledon-derived tTCL explants exhibited the highest shoot bud induction potential and were followed by petiole- and hypocotyl-derived ones. Addition of 10 $\mu$M of silver nitrate to BAP- and NAA-supplemented medium induced higher-frequency shoot bud induction (up to 100 %), with the highest mean of 4.45 shoots per cotyledon-derived tTCL explant obtained with genotype 2. Regenerated shoots were rooted on MS basal medium without PGRs, which induced rooting in 99 % of shoots. The plantlets were established in greenhouse conditions with 99 % survival, flowered normally and set seeds.
[ { "created": "Mon, 7 Jul 2008 16:17:23 GMT", "version": "v1" } ]
2008-07-08
[ [ "Aoun", "Michel", "" ], [ "Charles", "Gilbert", "" ], [ "Hourmant", "Annick", "" ] ]
Micropropagation of three genotypes of Indian mustard [\textit{Brassica juncea} (L.) Czern.] using 7-day-old seedling-derived transverse thin cell layer (tTCL) explants was accomplished. The genotype, explant source and addition of silver nitrate to the medium significantly influenced shoot bud induction. MS medium with 26.6 $\mu$M of 6-benzylaminopurine (BAP) and 3.22 $\mu$M of 1-naphthaleneacetic acid (NAA) was identical (in the case of cotyledon tTCLs, whatever the organ) and superior for the induction of buds (in the cases of petiole tTCL explants of genotypes 1 and 2 and hypocotyl tTCL explants of genotypes 1 and 3) to medium with 53.3 $\mu$M of BAP and 3.22 $\mu$M of NAA. However, 53.3 $\mu$M of BAP was superior to 26.6 $\mu$M for the induction of buds in the presence of the same concentration of NAA for petiole tTCL explants of genotype 3 and hypocotyl tTCL explants of genotype 2. The addition of silver nitrate significantly enhanced the rate of shoot induction in all genotypes. Cotyledon-derived tTCL explants exhibited the highest shoot bud induction potential and were followed by petiole- and hypocotyl-derived ones. Addition of 10 $\mu$M of silver nitrate to BAP- and NAA-supplemented medium induced higher-frequency shoot bud induction (up to 100 %), with the highest mean of 4.45 shoots per cotyledon-derived tTCL explant obtained with genotype 2. Regenerated shoots were rooted on MS basal medium without PGRs, which induced rooting in 99 % of shoots. The plantlets were established in greenhouse conditions with 99 % survival, flowered normally and set seeds.
1506.06925
Tamar Friedlander
Tamar Friedlander and Roshan Prizak and C\u{a}lin C. Guet and Nicholas H. Barton and Ga\v{s}per Tka\v{c}ik
Intrinsic limits to gene regulation by global crosstalk
null
Nature Communications 7:12307 (2016)
10.1038/ncomms12307
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene regulation relies on the specificity of transcription factor (TF) - DNA interactions. In equilibrium, limited specificity may lead to crosstalk: a regulatory state in which a gene is either incorrectly activated due to noncognate TF-DNA interactions or remains erroneously inactive. We present a tractable biophysical model of global crosstalk, where many genes are simultaneously regulated by many TFs. We show that in the simplest regulatory scenario, a lower bound on crosstalk severity can be analytically derived solely from the number of (co)regulated genes and a suitable parameter that describes binding site similarity. Estimates show that crosstalk could present a significant challenge for organisms with low-specificity TFs, such as metazoans, unless they use appropriate regulation schemes. Strong cooperativity substantially decreases crosstalk, while joint regulation by activators and repressors, surprisingly, does not; moreover, certain microscopic details about promoter architecture emerge as globally important determinants of crosstalk strength. Our results suggest that crosstalk imposes a new type of global constraint on the functioning and evolution of regulatory networks, which is qualitatively distinct from the known constraints acting at the level of individual gene regulatory elements.
[ { "created": "Tue, 23 Jun 2015 09:43:17 GMT", "version": "v1" }, { "created": "Tue, 3 May 2016 15:21:59 GMT", "version": "v2" } ]
2016-10-31
[ [ "Friedlander", "Tamar", "" ], [ "Prizak", "Roshan", "" ], [ "Guet", "Călin C.", "" ], [ "Barton", "Nicholas H.", "" ], [ "Tkačik", "Gašper", "" ] ]
Gene regulation relies on the specificity of transcription factor (TF) - DNA interactions. In equilibrium, limited specificity may lead to crosstalk: a regulatory state in which a gene is either incorrectly activated due to noncognate TF-DNA interactions or remains erroneously inactive. We present a tractable biophysical model of global crosstalk, where many genes are simultaneously regulated by many TFs. We show that in the simplest regulatory scenario, a lower bound on crosstalk severity can be analytically derived solely from the number of (co)regulated genes and a suitable parameter that describes binding site similarity. Estimates show that crosstalk could present a significant challenge for organisms with low-specificity TFs, such as metazoans, unless they use appropriate regulation schemes. Strong cooperativity substantially decreases crosstalk, while joint regulation by activators and repressors, surprisingly, does not; moreover, certain microscopic details about promoter architecture emerge as globally important determinants of crosstalk strength. Our results suggest that crosstalk imposes a new type of global constraint on the functioning and evolution of regulatory networks, which is qualitatively distinct from the known constraints acting at the level of individual gene regulatory elements.
0801.3832
Michael Schnabel
Michael Schnabel, Matthias Kaschube and Fred Wolf
Pinwheel stability, pattern selection and the geometry of visual space
5 pages, 5 figures
null
null
null
q-bio.NC nlin.PS physics.bio-ph
null
It has been proposed that the dynamical stability of topological defects in the visual cortex reflects the Euclidean symmetry of the visual world. We analyze defect stability and pattern selection in a generalized Swift-Hohenberg model of visual cortical development symmetric under the Euclidean group E(2). Euclidean symmetry strongly influences the geometry and multistability of model solutions but does not directly affect defect stability.
[ { "created": "Thu, 24 Jan 2008 20:42:26 GMT", "version": "v1" }, { "created": "Thu, 24 Jan 2008 23:07:20 GMT", "version": "v2" } ]
2008-01-30
[ [ "Schnabel", "Michael", "" ], [ "Kaschube", "Matthias", "" ], [ "Wolf", "Fred", "" ] ]
It has been proposed that the dynamical stability of topological defects in the visual cortex reflects the Euclidean symmetry of the visual world. We analyze defect stability and pattern selection in a generalized Swift-Hohenberg model of visual cortical development symmetric under the Euclidean group E(2). Euclidean symmetry strongly influences the geometry and multistability of model solutions but does not directly affect defect stability.
1802.04347
Yuriria Cortes-Poza
Yuriria Cortes Poza, Pablo Padilla Longoria, Elena Alvarez Buylla
Spatial dynamics of flower organ formation
32 pages, 11 figures
null
null
null
q-bio.TO q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the emergence of biological structures and their changes is a complex problem. On a biochemical level, it is based on gene regulatory networks (GRN) consisting of interactions between the genes responsible for cell differentiation, coupled at a larger scale with external factors. In this work we provide a systematic methodological framework to construct Waddington's epigenetic landscape of the GRN involved in cellular determination during the early stages of development of angiosperms. As a specific example we consider the flower of the plant \textit{Arabidopsis thaliana}. Our model, which is based on experimental data, accurately recovers the spatial configuration of the flower during cell fate determination, not only for the wild type, but for its homeotic mutants as well. The method developed in this project is general enough to be used in the study of the genotype-phenotype relationship in other living organisms.
[ { "created": "Mon, 29 Jan 2018 18:26:01 GMT", "version": "v1" } ]
2018-02-14
[ [ "Poza", "Yuriria Cortes", "" ], [ "Longoria", "Pablo Padilla", "" ], [ "Buylla", "Elena Alvarez", "" ] ]
Understanding the emergence of biological structures and their changes is a complex problem. On a biochemical level, it is based on gene regulatory networks (GRN) consisting of interactions between the genes responsible for cell differentiation, coupled at a larger scale with external factors. In this work we provide a systematic methodological framework to construct Waddington's epigenetic landscape of the GRN involved in cellular determination during the early stages of development of angiosperms. As a specific example we consider the flower of the plant \textit{Arabidopsis thaliana}. Our model, which is based on experimental data, accurately recovers the spatial configuration of the flower during cell fate determination, not only for the wild type, but for its homeotic mutants as well. The method developed in this project is general enough to be used in the study of the genotype-phenotype relationship in other living organisms.
1304.1985
Jiankui He
Jiankui He, Luwen Ning, Yin Tong
Origins and evolutionary genomics of the novel 2013 avian-origin H7N9 influenza A virus in China: Early findings
8 pages, 5 figures, 2 table
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In March and early April 2013, a new avian-origin influenza A (H7N9) virus (A-OIV) emerged in eastern China. During the first week of April, 18 infection cases were confirmed, and 6 people had died since March. This virus has caused global concern as a potential pandemic threat. Here we use evolutionary analysis to reconstruct the origins and early development of the A-OIV viruses. We found that A-OIV was derived from a reassortment of three avian flu virus strains, and substantial mutations have been detected. Our results highlight the need for systematic surveillance of influenza in birds, and provide evidence that the mixing of new genetic elements in avian hosts can result in the emergence of viruses with pandemic potential in humans.
[ { "created": "Sun, 7 Apr 2013 11:42:46 GMT", "version": "v1" }, { "created": "Tue, 9 Apr 2013 13:31:28 GMT", "version": "v2" } ]
2013-04-10
[ [ "He", "Jiankui", "" ], [ "Ning", "Luwen", "" ], [ "Tong", "Yin", "" ] ]
In March and early April 2013, a new avian-origin influenza A (H7N9) virus (A-OIV) emerged in eastern China. During the first week of April, 18 infection cases were confirmed, and 6 people had died since March. This virus has caused global concern as a potential pandemic threat. Here we use evolutionary analysis to reconstruct the origins and early development of the A-OIV viruses. We found that A-OIV was derived from a reassortment of three avian flu virus strains, and substantial mutations have been detected. Our results highlight the need for systematic surveillance of influenza in birds, and provide evidence that the mixing of new genetic elements in avian hosts can result in the emergence of viruses with pandemic potential in humans.
2405.00333
Sarah Vollert
Sarah A. Vollert, Christopher Drovandi, Matthew P. Adams
Reevaluating coexistence and stability in ecosystem networks to address ecological transients: methods and implications
null
null
null
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by/4.0/
Representing ecosystems at equilibrium has been foundational for building ecological theories, forecasting species populations and planning conservation actions. The equilibrium "balance of nature" ideal suggests that populations will eventually stabilise to a coexisting balance of species. However, a growing body of literature argues that the equilibrium ideal is inappropriate for ecosystems. Here, we develop and demonstrate a new framework for representing ecosystems without considering equilibrium dynamics. Instead, far more pragmatic ecosystem models are constructed by considering population trajectories, regardless of whether they exhibit equilibrium or transient (i.e. non-equilibrium) behaviour. This novel framework maximally utilises readily available, but often overlooked, knowledge from field observations and expert elicitation, rather than relying on theoretical ecosystem properties. We developed innovative Bayesian algorithms to generate ecosystem models in this new statistical framework, without excessive computational burden. Our results reveal that our pragmatic framework could have a dramatic impact on conservation decision-making and enhance the realism of ecosystem models and forecasts.
[ { "created": "Wed, 1 May 2024 05:52:15 GMT", "version": "v1" } ]
2024-05-02
[ [ "Vollert", "Sarah A.", "" ], [ "Drovandi", "Christopher", "" ], [ "Adams", "Matthew P.", "" ] ]
Representing ecosystems at equilibrium has been foundational for building ecological theories, forecasting species populations and planning conservation actions. The equilibrium "balance of nature" ideal suggests that populations will eventually stabilise to a coexisting balance of species. However, a growing body of literature argues that the equilibrium ideal is inappropriate for ecosystems. Here, we develop and demonstrate a new framework for representing ecosystems without considering equilibrium dynamics. Instead, far more pragmatic ecosystem models are constructed by considering population trajectories, regardless of whether they exhibit equilibrium or transient (i.e. non-equilibrium) behaviour. This novel framework maximally utilises readily available, but often overlooked, knowledge from field observations and expert elicitation, rather than relying on theoretical ecosystem properties. We developed innovative Bayesian algorithms to generate ecosystem models in this new statistical framework, without excessive computational burden. Our results reveal that our pragmatic framework could have a dramatic impact on conservation decision-making and enhance the realism of ecosystem models and forecasts.
2204.04573
David Vulakh
Elchanan Mossel, David Vulakh
Efficient Reconstruction of Stochastic Pedigrees: Some Steps From Theory to Practice
To appear in PSB 2023
PSB '23: Proceedings of the 2023 Pacific Symposium on Biocomputing. November 2022. Pp. 133-144
10.1142/9789811270611_0013
null
q-bio.PE cs.DS cs.LG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In an extant population, how much information do extant individuals provide on the pedigree of their ancestors? Recent work by Kim, Mossel, Ramnarayan and Turner (2020) studied this question under a number of simplifying assumptions, including random mating, fixed length inheritance blocks and sufficiently large founding population. They showed that under these conditions if the average number of offspring is a sufficiently large constant, then it is possible to recover a large fraction of the pedigree structure and genetic content by an algorithm they named REC-GEN. We are interested in studying the performance of REC-GEN on simulated data generated according to the model. As a first step, we improve the running time of the algorithm. However, we observe that even the faster version of the algorithm does not do well in any simulations at recovering the pedigree beyond 2 generations. We claim that this is due to the inbreeding present in any setting where the algorithm can be run, even on simulated data. To support the claim we show that a main step of the algorithm, called ancestral reconstruction, performs accurately in an idealized setting with no inbreeding but performs poorly in random mating populations. To overcome the poor behavior of REC-GEN we introduce a Belief-Propagation-based heuristic that accounts for the inbreeding and performs much better in our simulations.
[ { "created": "Sun, 10 Apr 2022 01:08:39 GMT", "version": "v1" }, { "created": "Sun, 25 Sep 2022 01:26:17 GMT", "version": "v2" } ]
2022-11-29
[ [ "Mossel", "Elchanan", "" ], [ "Vulakh", "David", "" ] ]
In an extant population, how much information do extant individuals provide on the pedigree of their ancestors? Recent work by Kim, Mossel, Ramnarayan and Turner (2020) studied this question under a number of simplifying assumptions, including random mating, fixed length inheritance blocks and sufficiently large founding population. They showed that under these conditions if the average number of offspring is a sufficiently large constant, then it is possible to recover a large fraction of the pedigree structure and genetic content by an algorithm they named REC-GEN. We are interested in studying the performance of REC-GEN on simulated data generated according to the model. As a first step, we improve the running time of the algorithm. However, we observe that even the faster version of the algorithm does not do well in any simulations at recovering the pedigree beyond 2 generations. We claim that this is due to the inbreeding present in any setting where the algorithm can be run, even on simulated data. To support the claim we show that a main step of the algorithm, called ancestral reconstruction, performs accurately in an idealized setting with no inbreeding but performs poorly in random mating populations. To overcome the poor behavior of REC-GEN we introduce a Belief-Propagation-based heuristic that accounts for the inbreeding and performs much better in our simulations.
1809.05621
Ludmil Zikatanov
Katherine Y. Zipp, Yangqingxiang Wu, Kaiyi Wu, and Ludmil T. Zikatanov
Optimal spatial-dynamic management to minimize the damages caused by aquatic invasive species
10 pages, 4 algorithms
null
null
null
q-bio.PE math.NA math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Invasive species have been recognized as a leading threat to biodiversity. In particular, lakes are especially affected by species invasions because they are closed systems sensitive to disruption. Accurately controlling the spread of invasive species requires solving a complex spatial-dynamic optimization problem. In this work we propose a novel framework for determining the optimal management strategy to maximize the value of a lake system net of damages from invasive species, including an endogenous diffusion mechanism for the spread of invasive species through boaters' trips between lakes. The proposed method includes a combined global iterative process which determines the optimal number of trips to each lake in each season and the spatial-dynamic optimal boat ramp fee.
[ { "created": "Sat, 15 Sep 2018 00:37:54 GMT", "version": "v1" } ]
2018-09-18
[ [ "Zipp", "Katherine Y.", "" ], [ "Wu", "Yangqingxiang", "" ], [ "Wu", "Kaiyi", "" ], [ "Zikatanov", "Ludmil T.", "" ] ]
Invasive species have been recognized as a leading threat to biodiversity. In particular, lakes are especially affected by species invasions because they are closed systems sensitive to disruption. Accurately controlling the spread of invasive species requires solving a complex spatial-dynamic optimization problem. In this work we propose a novel framework for determining the optimal management strategy to maximize the value of a lake system net of damages from invasive species, including an endogenous diffusion mechanism for the spread of invasive species through boaters' trips between lakes. The proposed method includes a combined global iterative process which determines the optimal number of trips to each lake in each season and the spatial-dynamic optimal boat ramp fee.
1504.01298
Christian Lyngby Vestergaard
Christian L. Vestergaard, Mathieu G\'enois
Temporal Gillespie algorithm: Fast simulation of contagion processes on time-varying networks
Minor changes and updates to references
PLoS Comput. Biol. 11 (2015) e1004579
10.1371/journal.pcbi.1004579
null
q-bio.QM physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling.
[ { "created": "Fri, 3 Apr 2015 15:55:03 GMT", "version": "v1" }, { "created": "Wed, 9 Sep 2015 14:38:48 GMT", "version": "v2" }, { "created": "Mon, 26 Oct 2015 08:50:54 GMT", "version": "v3" } ]
2015-11-09
[ [ "Vestergaard", "Christian L.", "" ], [ "Génois", "Mathieu", "" ] ]
Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling.
2406.05058
Yuan Yin
Yuan Yin, Jennifer A. Flegg, Mark B. Flegg
Accurate stochastic simulation algorithm for multiscale models of infectious diseases
23 pages, 7 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
In the infectious disease literature, significant effort has been devoted to studying dynamics at a single scale. For example, compartmental models describing population-level dynamics are often formulated using differential equations. In cases where small numbers or noise play a crucial role, these differential equations are replaced with memoryless Markovian models, where discrete individuals can be members of a compartment and transition stochastically. Classic stochastic simulation algorithms, such as Gillespie's algorithm and the next reaction method, can be employed to solve these Markovian models exactly. The intricate coupling between models at different scales underscores the importance of multiscale modelling in infectious diseases. However, several computational challenges arise when the multiscale model becomes non-Markovian. In this paper, we address these challenges by developing a novel exact stochastic simulation algorithm. We apply it to a showcase multiscale system where all individuals share the same deterministic within-host model while the population-level dynamics are governed by a stochastic formulation. We demonstrate that as long as the within-host information is harvested at a reasonable resolution, the novel algorithm we develop will always be accurate. Moreover, the novel algorithm we develop is general and can be easily applied to other multiscale models in (or outside) the realm of infectious diseases.
[ { "created": "Fri, 7 Jun 2024 16:28:19 GMT", "version": "v1" } ]
2024-06-10
[ [ "Yin", "Yuan", "" ], [ "Flegg", "Jennifer A.", "" ], [ "Flegg", "Mark B.", "" ] ]
In the infectious disease literature, significant effort has been devoted to studying dynamics at a single scale. For example, compartmental models describing population-level dynamics are often formulated using differential equations. In cases where small numbers or noise play a crucial role, these differential equations are replaced with memoryless Markovian models, where discrete individuals can be members of a compartment and transition stochastically. Classic stochastic simulation algorithms, such as Gillespie's algorithm and the next reaction method, can be employed to solve these Markovian models exactly. The intricate coupling between models at different scales underscores the importance of multiscale modelling in infectious diseases. However, several computational challenges arise when the multiscale model becomes non-Markovian. In this paper, we address these challenges by developing a novel exact stochastic simulation algorithm. We apply it to a showcase multiscale system where all individuals share the same deterministic within-host model while the population-level dynamics are governed by a stochastic formulation. We demonstrate that as long as the within-host information is harvested at a reasonable resolution, the novel algorithm we develop will always be accurate. Moreover, the novel algorithm we develop is general and can be easily applied to other multiscale models in (or outside) the realm of infectious diseases.
1501.01338
Richard McMurtrey
Richard J. McMurtrey
Patterned and Functionalized Nanofiber Scaffolds in Three-Dimensional Hydrogel Constructs Enhance Neurite Outgrowth and Directional Control
null
J Neural Eng. 2014 Dec;11(6):066009
10.1088/1741-2560/11/6/066009
null
q-bio.TO
http://creativecommons.org/licenses/by/3.0/
Neural tissue engineering holds incredible potential to restore functional capabilities to damaged neural tissue. It was hypothesized that patterned and functionalized nanofiber scaffolds could control neurite direction and enhance neurite outgrowth. Aligned nanofibers were created according to a mathematical model and were shown to enable significant control over the direction of neurite outgrowth in both two-dimensional (2D) and three-dimensional (3D) neuronal cultures. Laminin-functionalized nanofibers in 3D hyaluronic acid (HA) hydrogels enabled significant alignment of neurites with nanofibers, enabled significant neurite tracking of nanofibers, and significantly increased the distance over which neurites could extend. This work demonstrates the ability to create unique 3D neural tissue constructs using a combined system of hydrogel and nanofiber scaffolding. Importantly, patterned and biofunctionalized nanofiber scaffolds that can control direction and increase length of neurite outgrowth in three dimensions hold much potential for neural tissue engineering. This approach offers advancements in the development of implantable neural tissue constructs that enable control of neural development and reproduction of neuroanatomical pathways, with the ultimate goal being the achievement of functional neural regeneration.
[ { "created": "Wed, 7 Jan 2015 00:16:22 GMT", "version": "v1" } ]
2016-01-05
[ [ "McMurtrey", "Richard J.", "" ] ]
Neural tissue engineering holds incredible potential to restore functional capabilities to damaged neural tissue. It was hypothesized that patterned and functionalized nanofiber scaffolds could control neurite direction and enhance neurite outgrowth. Aligned nanofibers were created according to a mathematical model and were shown to enable significant control over the direction of neurite outgrowth in both two-dimensional (2D) and three-dimensional (3D) neuronal cultures. Laminin-functionalized nanofibers in 3D hyaluronic acid (HA) hydrogels enabled significant alignment of neurites with nanofibers, enabled significant neurite tracking of nanofibers, and significantly increased the distance over which neurites could extend. This work demonstrates the ability to create unique 3D neural tissue constructs using a combined system of hydrogel and nanofiber scaffolding. Importantly, patterned and biofunctionalized nanofiber scaffolds that can control direction and increase length of neurite outgrowth in three dimensions hold much potential for neural tissue engineering. This approach offers advancements in the development of implantable neural tissue constructs that enable control of neural development and reproduction of neuroanatomical pathways, with the ultimate goal being the achievement of functional neural regeneration.
2003.08913
Giulia Bassignana
Giulia Bassignana, Jennifer Fransson, Vincent Henry, Olivier Colliot, Violetta Zujovic, Fabrizio De Vico Fallani
Step-wise target controllability identifies dysregulated pathways of macrophage networks in multiple sclerosis
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying the nodes that have the potential to influence the state of a network is a relevant question for many complex systems. In many applications it is often essential to test the ability of an individual node to control a specific target subset of the network. In biological networks, this might provide valuable information on how single genes regulate the expression of specific groups of molecules in the cell. Taking into account these constraints, we propose an optimized heuristic based on the Kalman rank condition to quantify the centrality of a node as the number of target nodes it can control. By introducing a hierarchy among the nodes in the target set, and performing a step-wise search, we ensure for sparse and directed networks the identification of a controllable driver-target configuration with significantly reduced space and time complexity. We show how the method works for simple network configurations, then we use it to characterize the inflammatory pathways in molecular gene networks associated with macrophage dysfunction in patients with multiple sclerosis. Results indicate that the targeted secreted molecules can in general be controlled by a large number of driver nodes (51%) involved in different cell functions, i.e. sensing, signaling and transcription. However, during the inflammatory response only a moderate fraction of all the possible driver-target pairs are significantly coactivated, as measured by gene expression data obtained from human blood samples. Notably, they differ between multiple sclerosis patients and healthy controls, and we find that this is related to the presence of dysregulated genes along the controllable walks. Our method, which we name step-wise target controllability, represents a practical solution to identify controllable driver-target configurations in directed complex networks and test their relevance from a functional perspective.
[ { "created": "Thu, 19 Mar 2020 17:24:07 GMT", "version": "v1" }, { "created": "Fri, 20 Mar 2020 14:54:43 GMT", "version": "v2" }, { "created": "Tue, 24 Mar 2020 10:12:25 GMT", "version": "v3" }, { "created": "Mon, 7 Dec 2020 14:01:57 GMT", "version": "v4" } ]
2020-12-08
[ [ "Bassignana", "Giulia", "" ], [ "Fransson", "Jennifer", "" ], [ "Henry", "Vincent", "" ], [ "Colliot", "Olivier", "" ], [ "Zujovic", "Violetta", "" ], [ "Fallani", "Fabrizio De Vico", "" ] ]
Identifying the nodes that have the potential to influence the state of a network is a relevant question for many complex systems. In many applications it is often essential to test the ability of an individual node to control a specific target subset of the network. In biological networks, this might provide precious information on how single genes regulate the expression of specific groups of molecules in the cell. Taking into account these constraints, we propose an optimized heuristic based on the Kalman rank condition to quantify the centrality of a node as the number of target nodes it can control. By introducing a hierarchy among the nodes in the target set and performing a step-wise search, we ensure, for sparse and directed networks, the identification of a controllable driver-target configuration with significantly reduced space and time complexity. We show how the method works for simple network configurations, and then use it to characterize the inflammatory pathways in molecular gene networks associated with macrophage dysfunction in patients with multiple sclerosis. Results indicate that the targeted secreted molecules can in general be controlled by a large number of driver nodes (51%) involved in different cell functions, i.e., sensing, signaling and transcription. However, during the inflammatory response only a moderate fraction of all the possible driver-target pairs are significantly coactivated, as measured by gene expression data obtained from human blood samples. Notably, they differ between multiple sclerosis patients and healthy controls, and we find that this is related to the presence of dysregulated genes along the controllable walks. Our method, which we name step-wise target controllability, represents a practical solution to identify controllable driver-target configurations in directed complex networks and test their relevance from a functional perspective.
1507.02716
Pierre Sacr\'e
Pierre Sacr\'e, Sridevi V. Sarma, Yun Guan, William S. Anderson
Electrical neurostimulation for chronic pain: on selective relay of sensory neural activities in myelinated nerve fibers
null
null
10.1109/EMBC.2015.7319444
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chronic pain affects about 100 million adults in the US. Despite the great need for them, neuropharmacology and neurostimulation therapies for chronic pain have been associated with suboptimal efficacy and limited long-term success, as their mechanisms of action are unclear. Moreover, current computational models of pain transmission suffer from several limitations. In particular, dorsal column models do not include the fundamental underlying sensory activity traveling in these nerve fibers. We developed a simple simulation test bed of electrical neurostimulation of myelinated nerve fibers with underlying sensory activity. This paper reports our findings so far. Interactions between stimulation-evoked and underlying activities are mainly due to collisions of action potentials and losses of excitability due to the refractory period following an action potential. In addition, intuitively, the reliability of sensory activity decreases as the stimulation frequency increases. This first step opens the door to a better understanding of pain transmission and its modulation by neurostimulation therapies.
[ { "created": "Thu, 9 Jul 2015 21:11:41 GMT", "version": "v1" } ]
2024-05-03
[ [ "Sacré", "Pierre", "" ], [ "Sarma", "Sridevi V.", "" ], [ "Guan", "Yun", "" ], [ "Anderson", "William S.", "" ] ]
Chronic pain affects about 100 million adults in the US. Despite the great need for them, neuropharmacology and neurostimulation therapies for chronic pain have been associated with suboptimal efficacy and limited long-term success, as their mechanisms of action are unclear. Moreover, current computational models of pain transmission suffer from several limitations. In particular, dorsal column models do not include the fundamental underlying sensory activity traveling in these nerve fibers. We developed a simple simulation test bed of electrical neurostimulation of myelinated nerve fibers with underlying sensory activity. This paper reports our findings so far. Interactions between stimulation-evoked and underlying activities are mainly due to collisions of action potentials and losses of excitability due to the refractory period following an action potential. In addition, intuitively, the reliability of sensory activity decreases as the stimulation frequency increases. This first step opens the door to a better understanding of pain transmission and its modulation by neurostimulation therapies.
2312.12062
Rati Sharma
Binil Shyam T V and Rati Sharma
mRNA translation from a unidirectional traffic perspective
39 pages, 5 figures, review article
null
null
null
q-bio.SC cond-mat.soft q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
mRNA translation is a crucial process that leads to protein synthesis in living cells. It is therefore a process that needs to work optimally for a cell to stay healthy and alive. With advancements in microscopy and novel experimental techniques, many of the intricate details of the translation mechanism are now known. However, the why and how of this mechanism are still ill understood, and it therefore remains an active area of research. Theoretical studies of mRNA translation typically view it in terms of the Totally Asymmetric Simple Exclusion Process (TASEP). Various works have used the TASEP model to study a wide range of phenomena and factors affecting translation, such as ribosome traffic on an mRNA under noisy (codon-dependent or otherwise) conditions, ribosome stalling, premature termination, ribosome reinitiation and dropoff, codon-dependent elongation, and competition among mRNAs for ribosomes, among others. In this review, we recount the history and physics of the translation process in terms of the TASEP framework. In particular, we discuss the viability and evolution of this model and its limitations, while also formulating the reasons behind its success. Finally, we identify gaps in the existing literature and suggest possible extensions and applications that will lead to a better understanding of ribosome traffic on the mRNA.
[ { "created": "Tue, 19 Dec 2023 11:28:24 GMT", "version": "v1" } ]
2023-12-20
[ [ "T", "Binil Shyam", "V" ], [ "Sharma", "Rati", "" ] ]
mRNA translation is a crucial process that leads to protein synthesis in living cells. It is therefore a process that needs to work optimally for a cell to stay healthy and alive. With advancements in microscopy and novel experimental techniques, many of the intricate details of the translation mechanism are now known. However, the why and how of this mechanism are still ill understood, and it therefore remains an active area of research. Theoretical studies of mRNA translation typically view it in terms of the Totally Asymmetric Simple Exclusion Process (TASEP). Various works have used the TASEP model to study a wide range of phenomena and factors affecting translation, such as ribosome traffic on an mRNA under noisy (codon-dependent or otherwise) conditions, ribosome stalling, premature termination, ribosome reinitiation and dropoff, codon-dependent elongation, and competition among mRNAs for ribosomes, among others. In this review, we recount the history and physics of the translation process in terms of the TASEP framework. In particular, we discuss the viability and evolution of this model and its limitations, while also formulating the reasons behind its success. Finally, we identify gaps in the existing literature and suggest possible extensions and applications that will lead to a better understanding of ribosome traffic on the mRNA.