Dataset schema (arXiv metadata; per column: type and observed value range):

- id: string, 9-13 chars
- submitter: string, 4-48 chars
- authors: string, 4-9.62k chars
- title: string, 4-343 chars
- comments: string, 2-480 chars
- journal-ref: string, 9-309 chars
- doi: string, 12-138 chars
- report-no: categorical string, 277 distinct values
- categories: string, 8-87 chars
- license: categorical string, 9 distinct values
- orig_abstract: string, 27-3.76k chars
- versions: list, 1-15 items
- update_date: string, fixed length 10 chars
- authors_parsed: list, 1-147 items
- abstract: string, 24-3.75k chars
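The column list above is effectively a record type. Below is a minimal sketch of validating one record against it, assuming records arrive as Python dicts (e.g. parsed from JSON Lines); the helper name and sample record are illustrative, not part of any particular library.

```python
# Minimal sketch: check one arXiv-metadata record against the schema above.
# String columns may be null (None); the two list columns must be lists.

STRING_FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "update_date", "abstract",
]
LIST_FIELDS = ["versions", "authors_parsed"]

def validate_record(rec: dict) -> list[str]:
    """Return a list of problems found (empty means the record looks well-formed)."""
    problems = []
    for field in STRING_FIELDS:
        value = rec.get(field)
        if value is not None and not isinstance(value, str):
            problems.append(f"{field}: expected str or None, got {type(value).__name__}")
    for field in LIST_FIELDS:
        if not isinstance(rec.get(field), list):
            problems.append(f"{field}: expected list")
    return problems

record = {
    "id": "q-bio/0412007",
    "title": "Folding thermodynamics of peptides",
    "versions": [{"created": "Fri, 3 Dec 2004 17:54:21 GMT", "version": "v1"}],
    "authors_parsed": [["Irbäck", "Anders", ""]],
}
print(validate_record(record))  # → []
```

Missing string fields read back as None and pass, matching the many null-valued columns in the records below.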
id: q-bio/0412007
submitter: Anders Irbäck
authors: Anders Irbäck, Sandipan Mohanty
title: Folding thermodynamics of peptides
comments: 26 pages, 10 figures
journal-ref: Biophys. J. 88 (2005) 1560-1569
doi: 10.1529/biophysj.104.050427
report-no: LU TP 04-28
categories: q-bio.BM
license: null
orig_abstract: A simplified interaction potential for protein folding studies at the atomic level is discussed and tested on a set of peptides with about 20 residues each. The test set contains both alpha-helical (Trp cage, Fs) and beta-sheet (GB1p, GB1m2, GB1m3, Betanova, LLM) peptides. The model, which is entirely sequence-based, is able to fold these different peptides for one and the same choice of model parameters. Furthermore, the melting behavior of the peptides is in good quantitative agreement with experimental data. Apparent folded populations obtained using different observables are compared, and are found to be very different for some of the peptides (e.g., Betanova). In other cases (in particular, GB1m2 and GB1m3), the different estimates agree reasonably well, indicating a more two-state-like melting behavior.
versions: [ { "created": "Fri, 3 Dec 2004 17:54:21 GMT", "version": "v1" } ]
update_date: 2009-11-10
authors_parsed: [ [ "Irbäck", "Anders", "" ], [ "Mohanty", "Sandipan", "" ] ]
abstract: identical to orig_abstract.
id: 2003.05405
submitter: Arthur Mensch
authors: Kamalaker Dadi (PARIETAL), Gaël Varoquaux (PARIETAL), Antonia Machlouzarides-Shalit (PARIETAL), Krzysztof J. Gorgolewski, Demian Wassermann (PARIETAL), Bertrand Thirion (PARIETAL), Arthur Mensch (DMA, PARIETAL)
title: Fine-grain atlases of functional modes for fMRI analysis
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC eess.SP stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Population imaging has markedly increased the size of functional-imaging datasets, shedding new light on the neural basis of inter-individual differences. Analyzing these large data entails new scalability challenges, computational and statistical. For this reason, brain images are typically summarized in a few signals, for instance reducing voxel-level measures with brain atlases or functional modes. A good choice of the corresponding brain networks is important, as most data analyses start from these reduced signals. We contribute finely-resolved atlases of functional modes, comprising from 64 to 1024 networks. These dictionaries of functional modes (DiFuMo) are trained on millions of fMRI functional brain volumes of total size 2.4TB, spanning 27 studies and many research groups. We demonstrate the benefits of extracting reduced signals on our fine-grain atlases for many classic functional data analysis pipelines: stimuli decoding from 12,334 brain responses, standard GLM analysis of fMRI across sessions and individuals, extraction of resting-state functional-connectome biomarkers for 2,500 individuals, and data compression and meta-analysis over more than 15,000 statistical maps. In each of these analysis scenarios, we compare the performance of our functional atlases with that of other popular references, and with a simple voxel-level analysis. Results highlight the importance of using high-dimensional "soft" functional atlases to represent and analyse brain activity while capturing its functional gradients. Analyses on high-dimensional modes achieve statistical performance similar to that at the voxel level, but with much reduced computational cost and higher interpretability. In addition to making them available, we provide meaningful names for these modes, based on their anatomical location, which will facilitate the reporting of results.
versions: [ { "created": "Thu, 5 Mar 2020 12:04:12 GMT", "version": "v1" } ]
update_date: 2020-03-12
authors_parsed: [ [ "Dadi", "Kamalaker", "", "PARIETAL" ], [ "Varoquaux", "Gaël", "", "PARIETAL" ], [ "Machlouzarides-Shalit", "Antonia", "", "PARIETAL" ], [ "Gorgolewski", "Krzysztof J.", "", "PARIETAL" ], [ "Wassermann", "Demian", "", "PARIETAL" ], [ "Thirion", "Bertrand", "", "PARIETAL" ], [ "Mensch", "Arthur", "", "DMA, PARIETAL" ] ]
abstract: identical to orig_abstract.
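The authors_parsed entries in these records follow a [last, first, suffix] layout, with any affiliation appended as a trailing element (e.g. "PARIETAL" above). A minimal sketch of rebuilding display names from that layout; the layout itself is inferred from the examples here, not from a published spec.

```python
# Sketch: rebuild "First Last (Affiliation)" strings from authors_parsed entries.
# Assumed entry layout: [last, first, suffix, *affiliations].

def format_author(entry: list[str]) -> str:
    # Pad short entries so unpacking always sees at least [last, first, suffix].
    last, first, suffix, *affils = entry + [""] * (3 - len(entry))
    name = " ".join(part for part in (first, last, suffix) if part)
    if any(affils):
        name += " (" + ", ".join(a for a in affils if a) + ")"
    return name

authors = [["Dadi", "Kamalaker", "", "PARIETAL"],
           ["Mensch", "Arthur", "", "DMA, PARIETAL"]]
print("; ".join(format_author(a) for a in authors))
# → Kamalaker Dadi (PARIETAL); Arthur Mensch (DMA, PARIETAL)
```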
id: q-bio/0512003
submitter: Ole Steuernagel
authors: Ole Steuernagel, Daniel Polani
title: Optimal strategies for fighting persistent bugs
comments: 6 pages, 6 figures
journal-ref: null
doi: 10.1109/TEVC.2010.2040181
report-no: null
categories: q-bio.OT q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Some microbial organisms are known to randomly slip into and out of hibernation, irrespective of environmental conditions [1]. In a (genetically) uniform population a typically very small subpopulation becomes metabolically inactive whereas the majority subpopulation remains active and grows. Bacteria such as E. coli, Staphylococcus aureus (MRSA-superbug), Mycobacterium tuberculosis, and Pseudomonas aeruginosa [1-3] show persistence. It can render bacteria less vulnerable in adverse environments [1, 4, 5] and their effective eradication through medication more difficult [2, 3, 6]. Here we show that medication treatment regimes may have to be modified when persistence is taken into account and characterize optimal approaches assuming that the total medication dose is constrained. The determining factors are cumulative toxicity, eradication power of the medication and bacterial response timescales. Persistent organisms have to be fought using tailored eradication strategies which display two fundamental characteristics. Ideally, the treatment time should be significantly longer than in the case of persistence with the medication uniformly spread out over time; however, if treatment time has to be limited, then the application of medication has to be concentrated towards the beginning and end of the treatment. These findings deviate from current clinical practice, and may therefore help to optimize and simplify treatments. Our use of multi-objective optimization [7] to map out the optimal strategies can be generalized to other related problems.
versions: [ { "created": "Thu, 1 Dec 2005 22:25:03 GMT", "version": "v1" }, { "created": "Mon, 12 Jul 2010 17:29:21 GMT", "version": "v2" } ]
update_date: 2010-07-13
authors_parsed: [ [ "Steuernagel", "Ole", "" ], [ "Polani", "Daniel", "" ] ]
abstract: identical to orig_abstract.
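Each versions entry carries an RFC 2822 "created" timestamp, as in the v1/v2 pair above, which the Python standard library parses directly. A small sketch of ordering a record's versions and measuring the gap between first and last revision:

```python
# Sketch: order a record's versions by submission time and compute the
# revision gap. The "created" strings are RFC 2822 dates, so the stdlib
# email.utils parser handles them (returning timezone-aware datetimes).
from email.utils import parsedate_to_datetime

versions = [
    {"created": "Thu, 1 Dec 2005 22:25:03 GMT", "version": "v1"},
    {"created": "Mon, 12 Jul 2010 17:29:21 GMT", "version": "v2"},
]
ordered = sorted(versions, key=lambda v: parsedate_to_datetime(v["created"]))
span = (parsedate_to_datetime(ordered[-1]["created"])
        - parsedate_to_datetime(ordered[0]["created"]))
print(ordered[-1]["version"], span.days)  # → v2 1683
```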
id: 1904.00431
submitter: Sumeet Agarwal
authors: Aditi Jha and Sumeet Agarwal
title: Do Deep Neural Networks Model Nonlinear Compositionality in the Neural Representation of Human-Object Interactions?
comments: 4 pages, 2 figures; presented at CCN 2019
journal-ref: null
doi: 10.32470/CCN.2019.1269-0
report-no: null
categories: q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Visual scene understanding often requires the processing of human-object interactions. Here we seek to explore if and how well Deep Neural Network (DNN) models capture features similar to the brain's representation of humans, objects, and their interactions. We investigate brain regions which process human-, object-, or interaction-specific information, and establish correspondences between them and DNN features. Our results suggest that we can infer the selectivity of these regions to particular visual stimuli using DNN representations. We also map features from the DNN to the regions, thus linking the DNN representations to those found in specific parts of the visual cortex. In particular, our results suggest that a typical DNN representation contains encoding of compositional information for human-object interactions which goes beyond a linear combination of the encodings for the two components, thus suggesting that DNNs may be able to model this important property of biological vision.
versions: [ { "created": "Sun, 31 Mar 2019 15:07:48 GMT", "version": "v1" }, { "created": "Wed, 6 Nov 2019 17:14:38 GMT", "version": "v2" } ]
update_date: 2019-11-07
authors_parsed: [ [ "Jha", "Aditi", "" ], [ "Agarwal", "Sumeet", "" ] ]
abstract: identical to orig_abstract.
id: 1610.04134
submitter: John Medaglia
authors: John D. Medaglia, Perry Zurn, Walter Sinnott-Armstrong, Danielle S. Bassett
title: Mind Control as a Guide for the Mind
comments: 8 pages, 5 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The human brain is a complex network that supports mental function. The nascent field of network neuroscience applies tools from mathematics to neuroimaging data in the hopes of shedding light on cognitive function. A critical question arising from these empirical studies is how to modulate a human brain network to treat cognitive deficits or enhance mental abilities. While historically a number of tools have been employed to modulate mental states (such as cognitive behavioral therapy and brain stimulation), theoretical frameworks to guide these interventions - and to optimize them for clinical use - are fundamentally lacking. One promising and as-yet underexplored approach lies in a sub-discipline of engineering known as network control theory. Here, we posit that network control fundamentally relates to mind control, and that this relationship highlights important areas for future empirical research and opportunities to translate knowledge in practical domains. We clarify the conceptual intersection between neuroanatomy, cognition, and control engineering in the context of network neuroscience. Finally, we discuss the challenges, ethics, and promises of mind control.
versions: [ { "created": "Thu, 13 Oct 2016 15:42:13 GMT", "version": "v1" }, { "created": "Tue, 25 Apr 2017 17:49:25 GMT", "version": "v2" } ]
update_date: 2017-04-26
authors_parsed: [ [ "Medaglia", "John D.", "" ], [ "Zurn", "Perry", "" ], [ "Sinnott-Armstrong", "Walter", "" ], [ "Bassett", "Danielle S.", "" ] ]
abstract: identical to orig_abstract.
id: 1309.6277
submitter: William J. Tyler
authors: Jerel Mueller and William J. Tyler
title: A Quantitative Overview of Biophysical Forces Governing Neural Function
comments: 13 pages
journal-ref: null
doi: 10.1088/1478-3975/11/5/051001
report-no: null
categories: q-bio.NC physics.bio-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The Hodgkin-Huxley (HH) model is the currently accepted formalism of neuronal excitability. However, the HH model does not capture a number of biophysical behaviors associated with action potentials or propagating nerve impulses. Physical mechanisms underlying these processes, such as reversible heat transfer and axonal swelling, have been separately investigated and compartmentally modeled to indicate the nervous system is not purely electrical or biochemical. Rather, mechanical forces and principles of thermodynamics also govern neuronal excitability and signaling. To advance our understanding of neural function and dysfunction, compartmentalized analyses of electrical, chemical, and mechanical processes need to be reevaluated and integrated into more comprehensive theories. The present quantitative perspective is intended to broaden the awareness of known biophysical phenomena, which are often overlooked in neuroscience. By starting to consider the collective influence of the biophysical forces influencing neural function, new paradigms can be applied to the characterization and manipulation of nervous systems.
versions: [ { "created": "Tue, 24 Sep 2013 18:19:50 GMT", "version": "v1" }, { "created": "Wed, 25 Sep 2013 13:18:54 GMT", "version": "v2" } ]
update_date: 2015-06-17
authors_parsed: [ [ "Mueller", "Jerel", "" ], [ "Tyler", "William J.", "" ] ]
abstract: identical to orig_abstract.
id: 2310.12188
submitter: Aram Mohammed
authors: Aram Akram Mohammed
title: Role of Cold Storage in Rooting of Stem Cuttings: A Review
comments: null
journal-ref: null
doi: 10.9734/AJAAR/2023/v21i3416
report-no: null
categories: q-bio.OT
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Cold storage is a strategy used mainly for ornamental cuttings to prolong the production period and to provide the cuttings with the best condition at planting time. Results obtained from cold storage of carnations and chrysanthemums confirm that herbaceous cuttings can be stored for about two months at between 0 C and 4 C, with results similar to those of cuttings planted directly without storage. Hardwood cuttings, on the other hand, are stored at low temperature to improve rooting ability by reducing the rooting inhibitors present in dormant cuttings taken during the dormant season (late fall or early winter). Accordingly, a decline in rooting inhibitors has been observed in hardwood cuttings of deciduous trees and Vitis spp. subjected to low temperature for around 2-6 months, and rooting was consequently improved in these species. Cold storage of cuttings of some coniferous trees also increased rooting at a high rate. Other factors, such as cultivar and storage method, may interact with cold storage duration and temperature as well. This review article outlines the findings of studies on cold storage of cuttings of different species.
versions: [ { "created": "Wed, 18 Oct 2023 09:39:14 GMT", "version": "v1" } ]
update_date: 2023-10-20
authors_parsed: [ [ "Mohammed", "Aram Akram", "" ] ]
abstract: identical to orig_abstract.
id: 2110.09567
submitter: Adib Khazaee
authors: Adib Khazaee and Fakhteh Ghanbarnejad
title: Effects of measures on phase transitions in two cooperative susceptible-infectious-recovered dynamics
comments: null
journal-ref: Phys. Rev. E 105, 034311 (2022)
doi: 10.1103/PhysRevE.105.034311
report-no: null
categories: q-bio.PE cond-mat.stat-mech
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: In recent studies, it has been shown that a cooperative interaction in a co-infection spread can lead to a discontinuous transition at a decreased threshold. Here, we investigate the effects of immunization, at a rate proportional to the extent of the infection, on the phase transitions of a cooperative co-infection. We use the mean-field approximation to illustrate how measures that remove a portion of the susceptible compartment, like vaccination, with high enough rates can change discontinuous transitions in two coupled susceptible-infectious-recovered dynamics into continuous ones while increasing the threshold of transitions. First, we introduce vaccination with a fixed rate into a symmetric spread of two diseases and investigate the numerical results. Second, we set the rate of measures proportional to the size of the infectious compartment and scrutinize the dynamics. We solve the equations numerically and analytically and probe the transitions for a wide range of parameters. We also determine transition points from the analytical solutions. Third, we adopt a heterogeneous mean-field approach to include heterogeneity and asymmetry in the dynamics and check whether the results for the homogeneous symmetric case still stand.
versions: [ { "created": "Mon, 18 Oct 2021 18:36:04 GMT", "version": "v1" }, { "created": "Fri, 29 Apr 2022 13:14:03 GMT", "version": "v2" } ]
update_date: 2022-05-02
authors_parsed: [ [ "Khazaee", "Adib", "" ], [ "Ghanbarnejad", "Fakhteh", "" ] ]
abstract: identical to orig_abstract.
id: 1807.04745
submitter: Garren Gaut
authors: Garren Gaut, Xiangrui Li, Brandon Turner, William A. Cunningham, Zhong-Lin Lu, Mark Steyvers
title: Predicting Task and Subject Differences with Functional Connectivity and BOLD Variability
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Previous research has found that functional connectivity (FC) can accurately predict the identity of a subject performing a task and the type of task being performed. We replicate these results using a large dataset collected at the OSU Center for Cognitive and Behavioral Brain Imaging. We also introduce a novel perspective on task and subject identity prediction: BOLD Variability (BV). Conceptually, BV is a region-specific measure based on the variance within each brain region. BV is simple to compute, interpret, and visualize. We show that both FC and BV are predictive of task and subject, even across scanning sessions separated by multiple years. Subject differences rather than task differences account for the majority of changes in BV and FC. Similar to results in FC, we show that BV is reduced during cognitive tasks relative to rest.
versions: [ { "created": "Thu, 12 Jul 2018 17:53:57 GMT", "version": "v1" } ]
update_date: 2018-07-13
authors_parsed: [ [ "Gaut", "Garren", "" ], [ "Li", "Xiangrui", "" ], [ "Turner", "Brandon", "" ], [ "Cunningham", "William A.", "" ], [ "Lu", "Zhong-Lin", "" ], [ "Steyvers", "Mark", "" ] ]
abstract: identical to orig_abstract.
id: 1810.07983
submitter: Kavita Jain
authors: Kavita Jain and Wolfgang Stephan
title: Modes of rapid polygenic adaptation
comments: null
journal-ref: Mol. Biol. Evol. 34 (12):3169-3175 (2017)
doi: null
report-no: null
categories: q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Many experimental and field studies have shown that adaptation can occur very rapidly. Two qualitatively different modes of fast adaptation have been proposed: selective sweeps wherein large shifts in the allele frequencies occur at a few loci and evolution via small changes in the allele frequencies at many loci. While the first process has been thoroughly investigated within the framework of population genetics, the latter is based on quantitative genetics and is much less understood. Here we summarize results from our recent theoretical studies of a quantitative genetic model of polygenic adaptation that makes explicit reference to population genetics to bridge the gap between the two frameworks. Our key results are that polygenic adaptation may be a rapid process and can proceed via subtle or dramatic changes in the allele frequency depending on the sizes of the phenotypic effects relative to a threshold value. We also discuss how the signals of polygenic selection may be detected in the genome. While powerful methods are available to identify signatures of selective sweeps at loci controlling quantitative traits, the development of statistical tests for detecting small shifts of allele frequencies at quantitative trait loci is still in its infancy.
versions: [ { "created": "Thu, 18 Oct 2018 10:30:34 GMT", "version": "v1" } ]
update_date: 2018-10-19
authors_parsed: [ [ "Jain", "Kavita", "" ], [ "Stephan", "Wolfgang", "" ] ]
abstract: identical to orig_abstract.
id: 1609.00491
submitter: Leonardo L. Gollo
authors: Leonardo L. Gollo, James A. Roberts, Luca Cocchi
title: Mapping how local perturbations influence systems-level brain dynamics
comments: 41 pages, 9 figures
journal-ref: Neuroimage 160:97-112 (2017)
doi: 10.1016/j.neuroimage.2017.01.057
report-no: null
categories: q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The human brain exhibits a relatively stable spatiotemporal organization that supports brain function and can be manipulated via local brain stimulation. Such perturbations to local cortical dynamics are globally integrated by distinct neural systems. However, it remains unclear how and why local changes in neural activity affect large-scale system dynamics. Here, we briefly review empirical and computational studies addressing how localized perturbations affect brain activity. We then systematically analyze a model of large-scale brain dynamics, assessing how localized changes in brain activity at the different sites affect whole-brain dynamics. We find that local stimulation induces changes in brain activity that can be summarized by relatively smooth tuning curves, which relate a region's effectiveness as a stimulation site to its position within the cortical hierarchy. Our results also support the notion that brain hubs, operating in a slower regime, are more resilient to focal perturbations and critically contribute to maintaining stability in global brain dynamics. In contrast, perturbations of peripheral regions, characterized by faster activity, have greater impact on functional connectivity. As a parallel with this region-level result, we also find that peripheral systems such as the visual and sensorimotor networks were more affected by local perturbations than high-level systems such as the cingulo-opercular network. Our results highlight the importance of a periphery-to-core hierarchy in determining the effect of local stimulation on the brain network. We also provide novel resources to orient empirical work aiming at manipulating functional connectivity using non-invasive brain stimulation.
versions: [ { "created": "Fri, 2 Sep 2016 07:51:27 GMT", "version": "v1" } ]
update_date: 2018-01-19
authors_parsed: [ [ "Gollo", "Leonardo L.", "" ], [ "Roberts", "James A.", "" ], [ "Cocchi", "Luca", "" ] ]
abstract: identical to orig_abstract.
1404.2133
Joseba Dalmau
Joseba Dalmau
Convergence of a Moran model to Eigen's quasispecies model
null
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove that a Moran model converges in probability to Eigen's quasispecies model in the infinite population limit.
[ { "created": "Tue, 8 Apr 2014 13:59:13 GMT", "version": "v1" } ]
2014-04-09
[ [ "Dalmau", "Joseba", "" ] ]
We prove that a Moran model converges in probability to Eigen's quasispecies model in the infinite population limit.
2401.10303
Nicolas Weidberg
Nicolas Weidberg, Wayne Goschen, Jennifer M. Jackson, Paula Pattrick, Christopher D. McQuaid, Francesca Porri
Fine scale depth regulation of invertebrate larvae around coastal fronts
null
Limnology and Oceanography. 64 - 2, pp. 785 - 802, 2019
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Vertical migrations of zooplankters have been widely described, but their active movements through shallow, highly dynamic water columns within the inner shelf may be more complex and difficult to characterize. In this study, invertebrate larvae, currents, and hydrographic variables were sampled at different depths during and after the presence of fronts on three different cruises off the southern coast of South Africa. Internal wave dynamics were observed in the hydrographic data set but also through satellite imagery, although strong surface convergent currents were absent and thermal stratification was weak. During the first two cruises, fronts were more conspicuous and they preceded strong onshore currents at depth which developed with the rising tide. Vertical distributions of larvae changed accordingly, with higher abundances at these deep layers once the front disappeared. The third cruise was carried out during slack tides, the front was not conspicuous, deep strong onshore currents did not occur afterward and larval distributions did not change consistently through time. Overall, the vertical distributions of many larval taxa matched the vertical profiles of shoreward currents and multivariate analyses revealed that these flows structured the larval community, which was influenced by neither temperature nor chlorophyll. Thus, the ability to regulate active vertical positioning may enhance shoreward advection and determine nearshore larval distributions.
[ { "created": "Thu, 18 Jan 2024 11:53:58 GMT", "version": "v1" } ]
2024-01-22
[ [ "Weidberg", "Nicolas", "" ], [ "Goschen", "Wayne", "" ], [ "Jackson", "Jennifer M.", "" ], [ "Pattrick", "Paula", "" ], [ "McQuaid", "Christopher D.", "" ], [ "Porri", "Francesca", "" ] ]
Vertical migrations of zooplankters have been widely described, but their active movements through shallow, highly dynamic water columns within the inner shelf may be more complex and difficult to characterize. In this study, invertebrate larvae, currents, and hydrographic variables were sampled at different depths during and after the presence of fronts on three different cruises off the southern coast of South Africa. Internal wave dynamics were observed in the hydrographic data set but also through satellite imagery, although strong surface convergent currents were absent and thermal stratification was weak. During the first two cruises, fronts were more conspicuous and they preceded strong onshore currents at depth which developed with the rising tide. Vertical distributions of larvae changed accordingly, with higher abundances at these deep layers once the front disappeared. The third cruise was carried out during slack tides, the front was not conspicuous, deep strong onshore currents did not occur afterward and larval distributions did not change consistently through time. Overall, the vertical distributions of many larval taxa matched the vertical profiles of shoreward currents and multivariate analyses revealed that these flows structured the larval community, which was influenced by neither temperature nor chlorophyll. Thus, the ability to regulate active vertical positioning may enhance shoreward advection and determine nearshore larval distributions.
2403.05602
Gilchan Park
Gilchan Park, Sean McCorkle, Carlos Soto, Ian Blaby, Shinjae Yoo
Extracting Protein-Protein Interactions (PPIs) from Biomedical Literature using Attention-based Relational Context Information
10 pages, 3 figures, 7 tables, 2022 IEEE International Conference on Big Data (Big Data)
In 2022 IEEE Big Data, pp. 2052-2061 (2022)
10.1109/BigData55660.2022.10021099
null
q-bio.BM cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Because protein-protein interactions (PPIs) are crucial to understanding living systems, harvesting these data is essential to probe disease development and discern gene/protein functions and biological processes. Some curated datasets contain PPI data derived from the literature and other sources (e.g., IntAct, BioGrid, DIP, and HPRD). However, they are far from exhaustive, and their maintenance is a labor-intensive process. On the other hand, machine learning methods to automate PPI knowledge extraction from the scientific literature have been limited by a shortage of appropriate annotated data. This work presents a unified, multi-source PPI corpus with vetted interaction definitions augmented by binary interaction type labels and a Transformer-based deep learning method that exploits entities' relational context information for relation representation to improve relation classification performance. The model's performance is evaluated on four widely studied biomedical relation extraction datasets, as well as this work's target PPI datasets, to observe the effectiveness of the representation for relation extraction tasks on various data. Results show the model outperforms prior state-of-the-art models. The code and data are available at: https://github.com/BNLNLP/PPI-Relation-Extraction
[ { "created": "Fri, 8 Mar 2024 01:43:21 GMT", "version": "v1" } ]
2024-03-12
[ [ "Park", "Gilchan", "" ], [ "McCorkle", "Sean", "" ], [ "Soto", "Carlos", "" ], [ "Blaby", "Ian", "" ], [ "Yoo", "Shinjae", "" ] ]
Because protein-protein interactions (PPIs) are crucial to understanding living systems, harvesting these data is essential to probe disease development and discern gene/protein functions and biological processes. Some curated datasets contain PPI data derived from the literature and other sources (e.g., IntAct, BioGrid, DIP, and HPRD). However, they are far from exhaustive, and their maintenance is a labor-intensive process. On the other hand, machine learning methods to automate PPI knowledge extraction from the scientific literature have been limited by a shortage of appropriate annotated data. This work presents a unified, multi-source PPI corpus with vetted interaction definitions augmented by binary interaction type labels and a Transformer-based deep learning method that exploits entities' relational context information for relation representation to improve relation classification performance. The model's performance is evaluated on four widely studied biomedical relation extraction datasets, as well as this work's target PPI datasets, to observe the effectiveness of the representation for relation extraction tasks on various data. Results show the model outperforms prior state-of-the-art models. The code and data are available at: https://github.com/BNLNLP/PPI-Relation-Extraction
1109.6558
Celia Blanco
Celia Blanco and David Hochberg
Chiral symmetry breaking via crystallization of the glycine and \alpha-amino acid system: a mathematical model
null
Phys. Chem. Chem. Phys., 2011, 13, 12920-12934
10.1039/C1CP21011D
null
q-bio.BM physics.chem-ph q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce and numerically solve a mathematical model of the experimentally established mechanisms responsible for the symmetry breaking transition observed in the chiral crystallization experiments reported by I. Weissbuch, L. Addadi, L. Leiserowitz and M. Lahav, J. Am. Chem. Soc. 110 (1988), 561-567. The mathematical model is based on five basic processes: (1) The formation of achiral glycine clusters in solution, (2) The nucleation of oriented glycine crystals at the air/water interface in the presence of hydrophobic amino acids, (3) A kinetic orienting effect which inhibits crystal growth, (4) The enantioselective occlusion of the amino acids from solution, and (5) The growth of oriented host glycine crystals at the interface. We translate these processes into differential rate equations. We first study the model with the orienting process (2) without (3) and then combine both allowing us to make detailed comparisons of both orienting effects which actually act in unison in the experiment. Numerical results indicate that the model can yield a high percentage orientation of the mixed crystals at the interface and the consequent resolution of the initially racemic mixture of amino acids in solution. The model thus leads to separation of enantiomeric territories, the generation and amplification of optical activity by enantioselective occlusion of chiral additives through chiral surfaces of glycine crystals.
[ { "created": "Thu, 29 Sep 2011 15:27:46 GMT", "version": "v1" } ]
2017-09-13
[ [ "Blanco", "Celia", "" ], [ "Hochberg", "David", "" ] ]
We introduce and numerically solve a mathematical model of the experimentally established mechanisms responsible for the symmetry breaking transition observed in the chiral crystallization experiments reported by I. Weissbuch, L. Addadi, L. Leiserowitz and M. Lahav, J. Am. Chem. Soc. 110 (1988), 561-567. The mathematical model is based on five basic processes: (1) The formation of achiral glycine clusters in solution, (2) The nucleation of oriented glycine crystals at the air/water interface in the presence of hydrophobic amino acids, (3) A kinetic orienting effect which inhibits crystal growth, (4) The enantioselective occlusion of the amino acids from solution, and (5) The growth of oriented host glycine crystals at the interface. We translate these processes into differential rate equations. We first study the model with the orienting process (2) without (3) and then combine both allowing us to make detailed comparisons of both orienting effects which actually act in unison in the experiment. Numerical results indicate that the model can yield a high percentage orientation of the mixed crystals at the interface and the consequent resolution of the initially racemic mixture of amino acids in solution. The model thus leads to separation of enantiomeric territories, the generation and amplification of optical activity by enantioselective occlusion of chiral additives through chiral surfaces of glycine crystals.
1701.01079
Irwin Kuntz
Irwin Kuntz
Selection and Coalescence in a Finite State Model
27 pages, 14 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To introduce selection into a model of coalescence, I explore the use of modified integer partitions that allow the identification of a preferred lineage. I show that a partition-partition transition matrix, along with Monte Carlo discrete time kinetics, treats both the neutral case and a wide range of positive and negative selection pressures for small population sizes. Selection pressure causes multiple collisions per generation, short coalescence times, increased lengths of terminal branches, increased tree asymmetry, and dependence of coalescence times on the logarithm of population size. These features are consistent with higher order coalescences that permit multiple collisions per generation. While the treatment is exact in terms of the simplified Wright-Fisher model used, it is not easily extended to large population size. Keywords: Selection, Coalescence, Integer Partitions, Multiple Collisions, Tree Asymmetry.
[ { "created": "Wed, 4 Jan 2017 17:14:53 GMT", "version": "v1" } ]
2017-01-05
[ [ "Kuntz", "Irwin", "" ] ]
To introduce selection into a model of coalescence, I explore the use of modified integer partitions that allow the identification of a preferred lineage. I show that a partition-partition transition matrix, along with Monte Carlo discrete time kinetics, treats both the neutral case and a wide range of positive and negative selection pressures for small population sizes. Selection pressure causes multiple collisions per generation, short coalescence times, increased lengths of terminal branches, increased tree asymmetry, and dependence of coalescence times on the logarithm of population size. These features are consistent with higher order coalescences that permit multiple collisions per generation. While the treatment is exact in terms of the simplified Wright-Fisher model used, it is not easily extended to large population size. Keywords: Selection, Coalescence, Integer Partitions, Multiple Collisions, Tree Asymmetry.
1601.07655
Kyle Gustafson
Kyle B Gustafson, Basil S. Bayati, Philip A. Eckhoff
Fractional diffusion emulates a human mobility network during a simulated disease outbreak
26 pages, 4 figures
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
From footpaths to flight routes, human mobility networks facilitate the spread of communicable diseases. Control and elimination efforts depend on characterizing these networks in terms of connections and flux rates of individuals between contact nodes. In some cases, transport can be parameterized with gravity-type models or approximated by a diffusive random walk. As an alternative, we have isolated intranational commercial air traffic as a case study for the utility of non-diffusive, heavy-tailed transport models. We implemented new stochastic simulations of a prototypical influenza-like infection, focusing on the dense, highly-connected United States air travel network. We show that mobility on this network can be described mainly by a power law, in agreement with previous studies. Remarkably, we find that the global evolution of an outbreak on this network is accurately reproduced by a two-parameter space-fractional diffusion equation, such that those parameters are determined by the air travel network.
[ { "created": "Thu, 28 Jan 2016 06:04:55 GMT", "version": "v1" } ]
2016-01-29
[ [ "Gustafson", "Kyle B", "" ], [ "Bayati", "Basil S.", "" ], [ "Eckhoff", "Philip A.", "" ] ]
From footpaths to flight routes, human mobility networks facilitate the spread of communicable diseases. Control and elimination efforts depend on characterizing these networks in terms of connections and flux rates of individuals between contact nodes. In some cases, transport can be parameterized with gravity-type models or approximated by a diffusive random walk. As an alternative, we have isolated intranational commercial air traffic as a case study for the utility of non-diffusive, heavy-tailed transport models. We implemented new stochastic simulations of a prototypical influenza-like infection, focusing on the dense, highly-connected United States air travel network. We show that mobility on this network can be described mainly by a power law, in agreement with previous studies. Remarkably, we find that the global evolution of an outbreak on this network is accurately reproduced by a two-parameter space-fractional diffusion equation, such that those parameters are determined by the air travel network.
2401.01059
Weixin Xie
Weixin Xie, Jianhang Zhang, Qin Xie, Chaojun Gong, Youjun Xu, Luhua Lai, Jianfeng Pei
Accelerating Discovery of Novel and Bioactive Ligands With Pharmacophore-Informed Generative Models
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep generative models have made significant advances in accelerating drug discovery by generating bioactive chemicals against desired targets. Nevertheless, most generated compounds that have been validated for potent bioactivity often exhibit structural novelty levels that fall short of satisfaction, thereby providing limited inspiration to human medicinal chemists. The challenge faced by generative models lies in their ability to produce compounds that are both bioactive and novel, rather than merely making minor modifications to known actives present in the training set. Recognizing the utility of pharmacophores in facilitating scaffold hopping, we developed TransPharmer, an innovative generative model that integrates ligand-based interpretable pharmacophore fingerprints with a generative pre-trained transformer (GPT) for de novo molecule generation. TransPharmer demonstrates superior performance across tasks involving unconditioned distribution learning, de novo generation and scaffold elaboration under pharmacophoric constraints. Its distinct exploration mode within the local chemical space renders it particularly useful for scaffold hopping, producing compounds that are structurally novel while pharmaceutically related. The efficacy of TransPharmer is validated through two case studies involving the dopamine receptor D2 (DRD2) and polo-like kinase 1 (PLK1). Notably in the case of PLK1, three out of four synthesized designed compounds exhibit submicromolar activities, with the most potent one, IIP0943, demonstrating a potency of 5.1 nM. Featuring a new scaffold of 4-(benzo[b]thiophen-7-yloxy)pyrimidine, IIP0943 also exhibits high selectivity for PLK1. It was demonstrated that TransPharmer is a powerful tool for discovery of novel and bioactive ligands.
[ { "created": "Tue, 2 Jan 2024 06:34:58 GMT", "version": "v1" } ]
2024-01-03
[ [ "Xie", "Weixin", "" ], [ "Zhang", "Jianhang", "" ], [ "Xie", "Qin", "" ], [ "Gong", "Chaojun", "" ], [ "Xu", "Youjun", "" ], [ "Lai", "Luhua", "" ], [ "Pei", "Jianfeng", "" ] ]
Deep generative models have made significant advances in accelerating drug discovery by generating bioactive chemicals against desired targets. Nevertheless, most generated compounds that have been validated for potent bioactivity often exhibit structural novelty levels that fall short of satisfaction, thereby providing limited inspiration to human medicinal chemists. The challenge faced by generative models lies in their ability to produce compounds that are both bioactive and novel, rather than merely making minor modifications to known actives present in the training set. Recognizing the utility of pharmacophores in facilitating scaffold hopping, we developed TransPharmer, an innovative generative model that integrates ligand-based interpretable pharmacophore fingerprints with a generative pre-trained transformer (GPT) for de novo molecule generation. TransPharmer demonstrates superior performance across tasks involving unconditioned distribution learning, de novo generation and scaffold elaboration under pharmacophoric constraints. Its distinct exploration mode within the local chemical space renders it particularly useful for scaffold hopping, producing compounds that are structurally novel while pharmaceutically related. The efficacy of TransPharmer is validated through two case studies involving the dopamine receptor D2 (DRD2) and polo-like kinase 1 (PLK1). Notably in the case of PLK1, three out of four synthesized designed compounds exhibit submicromolar activities, with the most potent one, IIP0943, demonstrating a potency of 5.1 nM. Featuring a new scaffold of 4-(benzo[b]thiophen-7-yloxy)pyrimidine, IIP0943 also exhibits high selectivity for PLK1. It was demonstrated that TransPharmer is a powerful tool for discovery of novel and bioactive ligands.
2208.09290
Boris Rusakov
Boris Rusakov
Concepts as Elementary Constituents of Human Consciousness
11 pages
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
It is asserted that consciousness functionally is a vision. Biologically it is a sensation of work of a brain that converts external and internal signals into visual output. However, as we well know, human consciousness includes meaning (interpretation), and therefore functionally possesses an additional level of complexity. It consists of concepts which are its minimal components (elementary constituents). A concept is an abstract (i.e. non-existent in nature) common property of different observables, as perceived by humans. It is an irreducible entity since it cannot be divided into any other functional components. At the biological and physical level each concept is a unique sensation encoded in a nervous system as a sensory and visual image. Concepts evolve throughout one's lifetime. We continuously create and acquire new concepts, develop and expand the existing ones, and eliminate and abandon some others. 'I' ('self') is one of the concepts. To acquire a concept one has to learn (to teach one's nervous system) feeling or sensing it, and to connect it to existing concepts, if any. We make a conjecture that both the creation and the acquisition of a concept is a phase transition. The ability to generate and acquire concepts is unique to humans and is the only principal difference between human and animal. We offer scenarios for how this ability could have been acquired by our predecessors, thus making them humans. It is suggested that the animal's brain, being a processor of visual signals that converts them into internal images, has developed this ability as a result of "guessing" or imagining details that are missing in the actual visible picture.
[ { "created": "Thu, 18 Aug 2022 13:37:36 GMT", "version": "v1" }, { "created": "Tue, 11 Oct 2022 14:25:46 GMT", "version": "v2" } ]
2022-10-12
[ [ "Rusakov", "Boris", "" ] ]
It is asserted that consciousness functionally is a vision. Biologically it is a sensation of work of a brain that converts external and internal signals into visual output. However, as we well know, human consciousness includes meaning (interpretation), and therefore functionally possesses an additional level of complexity. It consists of concepts which are its minimal components (elementary constituents). A concept is an abstract (i.e. non-existent in nature) common property of different observables, as perceived by humans. It is an irreducible entity since it cannot be divided into any other functional components. At the biological and physical level each concept is a unique sensation encoded in a nervous system as a sensory and visual image. Concepts evolve throughout one's lifetime. We continuously create and acquire new concepts, develop and expand the existing ones, and eliminate and abandon some others. 'I' ('self') is one of the concepts. To acquire a concept one has to learn (to teach one's nervous system) feeling or sensing it, and to connect it to existing concepts, if any. We make a conjecture that both the creation and the acquisition of a concept is a phase transition. The ability to generate and acquire concepts is unique to humans and is the only principal difference between human and animal. We offer scenarios for how this ability could have been acquired by our predecessors, thus making them humans. It is suggested that the animal's brain, being a processor of visual signals that converts them into internal images, has developed this ability as a result of "guessing" or imagining details that are missing in the actual visible picture.
2205.05897
Shantanu Jain
The Critical Assessment of Genome Interpretation Consortium
CAGI, the Critical Assessment of Genome Interpretation, establishes progress and prospects for computational genetic variant interpretation methods
For supplementary material, see https://github.com/genomeinterpretation/CAGI50
Genome Biology 25.1 (2024) 1-46
10.1186/s13059-023-03113-6
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-sa/4.0/
The Critical Assessment of Genome Interpretation (CAGI) aims to advance the state of the art for computational prediction of genetic variant impact, particularly those relevant to disease. The five complete editions of the CAGI community experiment comprised 50 challenges, in which participants made blind predictions of phenotypes from genetic data, and these were evaluated by independent assessors. Overall, results show that while current methods are imperfect, they have major utility for research and clinical applications. Missense variant interpretation methods are able to estimate biochemical effects with increasing accuracy. Performance is particularly strong for clinical pathogenic variants, including some difficult-to-diagnose cases, and extends to interpretation of cancer-related variants. Assessment of methods for regulatory variants and complex trait disease risk is less definitive, and indicates performance potentially suitable for auxiliary use in the clinic. Emerging methods and increasingly large, robust datasets for training and assessment promise further progress ahead.
[ { "created": "Thu, 12 May 2022 06:34:39 GMT", "version": "v1" } ]
2024-06-19
[ [ "Consortium", "The Critical Assessment of Genome Interpretation", "" ] ]
The Critical Assessment of Genome Interpretation (CAGI) aims to advance the state of the art for computational prediction of genetic variant impact, particularly those relevant to disease. The five complete editions of the CAGI community experiment comprised 50 challenges, in which participants made blind predictions of phenotypes from genetic data, and these were evaluated by independent assessors. Overall, results show that while current methods are imperfect, they have major utility for research and clinical applications. Missense variant interpretation methods are able to estimate biochemical effects with increasing accuracy. Performance is particularly strong for clinical pathogenic variants, including some difficult-to-diagnose cases, and extends to interpretation of cancer-related variants. Assessment of methods for regulatory variants and complex trait disease risk is less definitive, and indicates performance potentially suitable for auxiliary use in the clinic. Emerging methods and increasingly large, robust datasets for training and assessment promise further progress ahead.
q-bio/0607049
Kate Davison
K. Davison, P. M. Dolukhanov, G. R. Sarson, A. Shukurov, G. I. Zaitseva
Multiple Sources of the European Neolithic: Mathematical Modelling Constrained by Radiocarbon Dates
Submitted to Quaternary International for publication. 17 pages, 4 Figures
null
null
null
q-bio.PE q-bio.QM
null
We present a mathematical model, based on the compilation and statistical processing of radiocarbon dates, of the transition from the Mesolithic to the Neolithic, from about 7,000 to 4,000 BC in Europe. The arrival of the Neolithic is traditionally associated with the establishment of farming-based economies; yet in considerable areas of north-eastern Europe it is linked with the beginning of pottery-making in the context of foraging-type communities. Archaeological evidence, radiocarbon dates and genetic markers are consistent with the spread of farming from a source in the Near East. However, farming was less important in the East; the Eastern and Western Neolithic have distinct signatures. We use a population dynamics model to suggest that this distinction can be attributed to the presence of two waves of advance, one from the Near East, and another through Eastern Europe. Thus, we provide a quantitative framework in which a unified interpretation of the Western and Eastern Neolithic can be developed.
[ { "created": "Thu, 27 Jul 2006 09:53:16 GMT", "version": "v1" }, { "created": "Mon, 10 Sep 2007 13:02:49 GMT", "version": "v2" } ]
2007-09-10
[ [ "Davison", "K.", "" ], [ "Dolukhanov", "P. M.", "" ], [ "Sarson", "G. R.", "" ], [ "Shukurov", "A.", "" ], [ "Zaitseva", "G. I.", "" ] ]
We present a mathematical model, based on the compilation and statistical processing of radiocarbon dates, of the transition from the Mesolithic to the Neolithic, from about 7,000 to 4,000 BC in Europe. The arrival of the Neolithic is traditionally associated with the establishment of farming-based economies; yet in considerable areas of north-eastern Europe it is linked with the beginning of pottery-making in the context of foraging-type communities. Archaeological evidence, radiocarbon dates and genetic markers are consistent with the spread of farming from a source in the Near East. However, farming was less important in the East; the Eastern and Western Neolithic have distinct signatures. We use a population dynamics model to suggest that this distinction can be attributed to the presence of two waves of advance, one from the Near East, and another through Eastern Europe. Thus, we provide a quantitative framework in which a unified interpretation of the Western and Eastern Neolithic can be developed.
1508.02172
Yucheng Hu
Yucheng Hu and John Lowengrub
Collective Properties of a Transcription Initiation Model under Varying Environment
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamics of gene transcription is tightly regulated in eukaryotes. Recent experiments have revealed various kinds of transcriptional dynamics, such as RNA polymerase II pausing, that involve regulation at the transcription initiation stage, and the choice of different regulation patterns is closely related to the physiological functions of the target gene. Here we consider a simplified model of transcription initiation, a process including the assembly of the transcription complex and the pausing and releasing of the RNA polymerase II. Focusing on the collective behaviors on a population level, we explore potential regulatory functions this model can offer. These functions include fast and synchronized response to environmental change, or long-term memory about the transcriptional status. As a proof of concept we also show that, by selecting different control mechanisms cells can adapt to different environments. These findings may help us better understand the design principles of transcriptional regulation.
[ { "created": "Mon, 10 Aug 2015 09:02:15 GMT", "version": "v1" } ]
2015-08-11
[ [ "Hu", "Yucheng", "" ], [ "Lowengrub", "John", "" ] ]
The dynamics of gene transcription is tightly regulated in eukaryotes. Recent experiments have revealed various kinds of transcriptional dynamics, such as RNA polymerase II pausing, that involve regulation at the transcription initiation stage, and the choice of different regulation patterns is closely related to the physiological functions of the target gene. Here we consider a simplified model of transcription initiation, a process including the assembly of the transcription complex and the pausing and releasing of the RNA polymerase II. Focusing on the collective behaviors on a population level, we explore potential regulatory functions this model can offer. These functions include fast and synchronized response to environmental change, or long-term memory about the transcriptional status. As a proof of concept we also show that, by selecting different control mechanisms cells can adapt to different environments. These findings may help us better understand the design principles of transcriptional regulation.
1705.03092
Ricardo Ruiz Baier I
A. Gizzi, A. Loppini, R. Ruiz-Baier, A. Ippolito, A. Camassa, A. La Camera, E. Emmi, L. Di Perna, V. Garofalo, C. Cherubini, S. Filippi
Nonlinear diffusion & thermo-electric coupling in a two-variable model of cardiac action potential
null
Chaos (2017)
10.1063/1.4999610
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work reports the results of the theoretical investigation of nonlinear dynamics and spiral wave breakup in a generalized two-variable model of cardiac action potential accounting for thermo-electric coupling and diffusion nonlinearities. As customary in excitable media, the common Q10 and Moore factors are used to describe thermo-electric feedback in a 10-degrees range. Motivated by the porous nature of the cardiac tissue, in this study we also propose a nonlinear Fickian flux formulated by Taylor expanding the voltage dependent diffusion coefficient up to quadratic terms. A fine tuning of the diffusive parameters is performed a priori to match the conduction velocity of the equivalent cable model. The resulting combined effects are then studied by numerically simulating different stimulation protocols on a one-dimensional cable. Model features are compared in terms of action potential morphology, restitution curves, frequency spectra and spatio-temporal phase differences. Two-dimensional long-run simulations are finally performed to characterize spiral breakup during sustained fibrillation at different thermal states. Temperature and nonlinear diffusion effects are found to impact the repolarization phase of the action potential wave with non-monotone patterns and to increase the propensity of arrhythmogenesis.
[ { "created": "Mon, 8 May 2017 21:21:00 GMT", "version": "v1" } ]
2018-11-07
[ [ "Gizzi", "A.", "" ], [ "Loppini", "A.", "" ], [ "Ruiz-Baier", "R.", "" ], [ "Ippolito", "A.", "" ], [ "Camassa", "A.", "" ], [ "La Camera", "A.", "" ], [ "Emmi", "E.", "" ], [ "Di Perna", "L.", "" ], [ "Garofalo", "V.", "" ], [ "Cherubini", "C.", "" ], [ "Filippi", "S.", "" ] ]
This work reports the results of the theoretical investigation of nonlinear dynamics and spiral wave breakup in a generalized two-variable model of cardiac action potential accounting for thermo-electric coupling and diffusion nonlinearities. As customary in excitable media, the common Q10 and Moore factors are used to describe thermo-electric feedback in a 10-degrees range. Motivated by the porous nature of the cardiac tissue, in this study we also propose a nonlinear Fickian flux formulated by Taylor expanding the voltage dependent diffusion coefficient up to quadratic terms. A fine tuning of the diffusive parameters is performed a priori to match the conduction velocity of the equivalent cable model. The resulting combined effects are then studied by numerically simulating different stimulation protocols on a one-dimensional cable. Model features are compared in terms of action potential morphology, restitution curves, frequency spectra and spatio-temporal phase differences. Two-dimensional long-run simulations are finally performed to characterize spiral breakup during sustained fibrillation at different thermal states. Temperature and nonlinear diffusion effects are found to impact the repolarization phase of the action potential wave with non-monotone patterns and to increase the propensity of arrhythmogenesis.
1303.2333
Bin Ao
Zhitong Bing, Bin Ao, Yanan Zhang, Fengling Wang, Caiyong Ye, Jinpeng He, Jintu Sun, Jie Xiong, Nan Ding, Xiao-fei Gao, Ji Qi, Sheng Zhang, Guangming Zhou, Lei Yang
Warburg Effect due to Exposure to Different Types of Radiation
null
null
null
null
q-bio.TO q-bio.MN
http://creativecommons.org/licenses/by-nc-sa/3.0/
Cancer cells maintain a high level of aerobic glycolysis (the Warburg effect), which is associated with their rapid proliferation. Many studies have reported that the suppression of glycolysis and activation of oxidative phosphorylation can repress the growth of cancer cells through regulation of key regulators. Could the Warburg effect of cancer cells be switched by other environmental stimuli? Herein, we report an interesting phenomenon in which cells alternated between glycolysis and mitochondrial respiration depending on the type of radiation they were exposed to. We observed enhanced glycolysis and mitochondrial respiration in HeLa cells exposed to 2-Gy X-ray and 2-Gy carbon ion radiation, respectively. This discovery may provide novel insights for tumor therapy.
[ { "created": "Sun, 10 Mar 2013 15:47:57 GMT", "version": "v1" } ]
2013-03-12
[ [ "Bing", "Zhitong", "" ], [ "Ao", "Bin", "" ], [ "Zhang", "Yanan", "" ], [ "Wang", "Fengling", "" ], [ "Ye", "Caiyong", "" ], [ "He", "Jinpeng", "" ], [ "Sun", "Jintu", "" ], [ "Xiong", "Jie", "" ], [ "Ding", "Nan", "" ], [ "Gao", "Xiao-fei", "" ], [ "Qi", "Ji", "" ], [ "Zhang", "Sheng", "" ], [ "Zhou", "Guangming", "" ], [ "Yang", "Lei", "" ] ]
Cancer cells maintain a high level of aerobic glycolysis (the Warburg effect), which is associated with their rapid proliferation. Many studies have reported that the suppression of glycolysis and activation of oxidative phosphorylation can repress the growth of cancer cells through regulation of key regulators. Could the Warburg effect of cancer cells be switched by other environmental stimuli? Herein, we report an interesting phenomenon in which cells alternated between glycolysis and mitochondrial respiration depending on the type of radiation they were exposed to. We observed enhanced glycolysis and mitochondrial respiration in HeLa cells exposed to 2-Gy X-ray and 2-Gy carbon ion radiation, respectively. This discovery may provide novel insights for tumor therapy.
1607.06860
Anatolij Gelimson
Anatolij Gelimson, Kun Zhao, Calvin K. Lee, W. Till Kranz, Gerard C. L. Wong, Ramin Golestanian
Multicellular self-organization of P. aeruginosa due to interactions with secreted trails
10 pages
Phys. Rev. Lett. 117, 178102 (2016)
10.1103/PhysRevLett.117.178102
null
q-bio.CB cond-mat.soft physics.bio-ph q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Guided movement in response to slowly diffusing polymeric trails provides a unique mechanism for self-organization of some microorganisms. To elucidate how this signaling route leads to microcolony formation, we experimentally probe the trajectory and orientation of Pseudomonas aeruginosa that propel themselves on a surface using type IV pili motility appendages, which preferentially attach to deposited exopolysaccharides. We construct a stochastic model by analyzing single-bacterium trajectories, and show that the resulting theoretical prediction for the many-body behavior of the bacteria is in quantitative agreement with our experimental characterization of how cells explore the surface via a power law strategy.
[ { "created": "Fri, 22 Jul 2016 23:15:42 GMT", "version": "v1" }, { "created": "Mon, 19 Sep 2016 20:14:47 GMT", "version": "v2" } ]
2016-10-26
[ [ "Gelimson", "Anatolij", "" ], [ "Zhao", "Kun", "" ], [ "Lee", "Calvin K.", "" ], [ "Kranz", "W. Till", "" ], [ "Wong", "Gerard C. L.", "" ], [ "Golestanian", "Ramin", "" ] ]
Guided movement in response to slowly diffusing polymeric trails provides a unique mechanism for self-organization of some microorganisms. To elucidate how this signaling route leads to microcolony formation, we experimentally probe the trajectory and orientation of Pseudomonas aeruginosa that propel themselves on a surface using type IV pili motility appendages, which preferentially attach to deposited exopolysaccharides. We construct a stochastic model by analyzing single-bacterium trajectories, and show that the resulting theoretical prediction for the many-body behavior of the bacteria is in quantitative agreement with our experimental characterization of how cells explore the surface via a power law strategy.
1810.07429
Jochen Kerdels
Jochen Kerdels and Gabriele Peters
A Survey of Entorhinal Grid Cell Properties
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
About a decade ago grid cells were discovered in the medial entorhinal cortex of the rat. Their peculiar firing patterns, which correlate with periodic locations in the environment, led to the early hypothesis that grid cells may provide some form of metric for space. Subsequent research has since uncovered a wealth of new insights into the characteristics of grid cells and their neural neighborhood, the parahippocampal-hippocampal region, calling for a revision and refinement of earlier grid cell models. This survey paper aims to provide a comprehensive summary of grid cell research published in the past decade. It focuses on the functional characteristics of grid cells such as the influence of external cues or the alignment to environmental geometry, but also provides a basic overview of the underlying neural substrate.
[ { "created": "Wed, 17 Oct 2018 08:36:07 GMT", "version": "v1" } ]
2018-10-18
[ [ "Kerdels", "Jochen", "" ], [ "Peters", "Gabriele", "" ] ]
About a decade ago grid cells were discovered in the medial entorhinal cortex of the rat. Their peculiar firing patterns, which correlate with periodic locations in the environment, led to the early hypothesis that grid cells may provide some form of metric for space. Subsequent research has since uncovered a wealth of new insights into the characteristics of grid cells and their neural neighborhood, the parahippocampal-hippocampal region, calling for a revision and refinement of earlier grid cell models. This survey paper aims to provide a comprehensive summary of grid cell research published in the past decade. It focuses on the functional characteristics of grid cells such as the influence of external cues or the alignment to environmental geometry, but also provides a basic overview of the underlying neural substrate.
q-bio/0502042
Rui Dilao
Rui Dilao, Tiago Domingos and Elman M. Shahverdiev
Harvesting in a resource dependent age structured Leslie type population model
26 pages, 4 figures
Math. Biosciences vol. 189 (2004) 141-151
null
null
q-bio.PE
null
We analyse the effect of harvesting in a resource dependent age structured population model, deriving the conditions for the existence of a stable steady state as a function of fertility coefficients, harvesting mortality and carrying capacity of the resources. Under the effect of proportional harvest, we give a sufficient condition for a population to become extinct, and we show that the magnitude of proportional harvest depends on the resources available to the population. We show that the harvesting yield can be periodic, quasi-periodic or chaotic, depending on the dynamics of the harvested population. For populations with large fertility numbers, small harvesting mortality leads to abrupt extinction, but larger harvesting mortality leads to controlled population numbers by avoiding over-consumption of resources. Harvesting can be a strategy in order to stabilise periodic or quasi-periodic oscillations in the number of individuals of a population.
[ { "created": "Fri, 25 Feb 2005 22:28:54 GMT", "version": "v1" } ]
2007-05-23
[ [ "Dilao", "Rui", "" ], [ "Domingos", "Tiago", "" ], [ "Shahverdiev", "Elman M.", "" ] ]
We analyse the effect of harvesting in a resource dependent age structured population model, deriving the conditions for the existence of a stable steady state as a function of fertility coefficients, harvesting mortality and carrying capacity of the resources. Under the effect of proportional harvest, we give a sufficient condition for a population to become extinct, and we show that the magnitude of proportional harvest depends on the resources available to the population. We show that the harvesting yield can be periodic, quasi-periodic or chaotic, depending on the dynamics of the harvested population. For populations with large fertility numbers, small harvesting mortality leads to abrupt extinction, but larger harvesting mortality leads to controlled population numbers by avoiding over-consumption of resources. Harvesting can be a strategy in order to stabilise periodic or quasi-periodic oscillations in the number of individuals of a population.
1709.00152
Christopher Calderon
Christopher P. Calderon, Austin L. Daniels and Theodore W. Randolph
Using Deep Convolutional Neural Networks to Circumvent Morphological Feature Specification when Classifying Subvisible Protein Aggregates from Micro-Flow Images
9 pages, 7 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Flow-Imaging Microscopy (FIM) is commonly used in both academia and industry to characterize subvisible particles (those $\le 25 \mu m$ in size) in protein therapeutics. Pharmaceutical companies are required to record vast volumes of FIM data on protein therapeutic products, but are only mandated under US FDA regulations (i.e., USP $\big \langle 788 \big \rangle$) to control the number of particles exceeding $10$ and $25 \mu m$ in delivered products. Hence, a vast amount of digital images are available to analyze. Current state-of-the-art methods rely on a relatively low-dimensional list of "morphological features" to characterize particles, but these methods ignore an enormous amount of information encoded in the existing large digital image repositories. Deep Convolutional Neural Networks (CNNs or "ConvNets") have demonstrated the ability to extract predictive information from raw macroscopic image data without requiring the selection or specification of "morphological features" in a variety of tasks. However, the heterogeneity, polydispersity of protein therapeutics, and optical phenomena associated with subvisible FIM particle measurements introduce new challenges regarding the application of CNNs to FIM image analysis. In this article, we demonstrate a supervised learning technique leveraging CNNs to extract information from raw images in order to predict the process conditions or stress states (freeze-thaw, mechanical shaking, etc.) that produced a variety of different protein images. We demonstrate that our new classifier (in combination with a sample "image pooling" strategy) can obtain nearly perfect predictions using as few as 20 FIM images from a given protein formulation in a variety of scenarios of relevance to protein therapeutics quality control and process monitoring.
[ { "created": "Fri, 1 Sep 2017 04:36:45 GMT", "version": "v1" } ]
2017-09-04
[ [ "Calderon", "Christopher P.", "" ], [ "Daniels", "Austin L.", "" ], [ "Randolph", "Theodore W.", "" ] ]
Flow-Imaging Microscopy (FIM) is commonly used in both academia and industry to characterize subvisible particles (those $\le 25 \mu m$ in size) in protein therapeutics. Pharmaceutical companies are required to record vast volumes of FIM data on protein therapeutic products, but are only mandated under US FDA regulations (i.e., USP $\big \langle 788 \big \rangle$) to control the number of particles exceeding $10$ and $25 \mu m$ in delivered products. Hence, a vast amount of digital images are available to analyze. Current state-of-the-art methods rely on a relatively low-dimensional list of "morphological features" to characterize particles, but these methods ignore an enormous amount of information encoded in the existing large digital image repositories. Deep Convolutional Neural Networks (CNNs or "ConvNets") have demonstrated the ability to extract predictive information from raw macroscopic image data without requiring the selection or specification of "morphological features" in a variety of tasks. However, the heterogeneity, polydispersity of protein therapeutics, and optical phenomena associated with subvisible FIM particle measurements introduce new challenges regarding the application of CNNs to FIM image analysis. In this article, we demonstrate a supervised learning technique leveraging CNNs to extract information from raw images in order to predict the process conditions or stress states (freeze-thaw, mechanical shaking, etc.) that produced a variety of different protein images. We demonstrate that our new classifier (in combination with a sample "image pooling" strategy) can obtain nearly perfect predictions using as few as 20 FIM images from a given protein formulation in a variety of scenarios of relevance to protein therapeutics quality control and process monitoring.
2006.03766
Rudy Kusdiantara
A. Hasan, H. Susanto, V.R. Tjahjono, R. Kusdiantara, E.R.M. Putri, P. Hadisoemarto, N. Nuraini
A new estimation method for COVID-19 time-varying reproduction number using active cases
https://www.nature.com/articles/s41598-022-10723-w#citeas
Sci Rep 12, 6675 (2022)
10.1038/s41598-022-10723-w
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new method to estimate the time-varying effective (or instantaneous) reproduction number of the novel coronavirus disease (COVID-19). The method is based on a discrete-time stochastic augmented compartmental model that describes the virus transmission. A two-stage estimation method, which combines the Extended Kalman Filter (EKF) to estimate the reported state variables (active and removed cases) and a low pass filter based on a rational transfer function to remove short term fluctuations of the reported cases, is used with case uncertainties that are assumed to follow a Gaussian distribution. Our method does not require information regarding serial intervals, which makes the estimation procedure simpler without reducing the quality of the estimate. We show that the proposed method is comparable to common approaches, e.g., age-structured and new cases based sequential Bayesian models. We also apply it to COVID-19 cases in the Scandinavian countries: Denmark, Sweden, and Norway, where the positive rates were below 5\% recommended by WHO.
[ { "created": "Sat, 6 Jun 2020 03:02:29 GMT", "version": "v1" }, { "created": "Tue, 9 Jun 2020 11:41:43 GMT", "version": "v2" }, { "created": "Tue, 26 Apr 2022 00:00:14 GMT", "version": "v3" } ]
2022-04-27
[ [ "Hasan", "A.", "" ], [ "Susanto", "H.", "" ], [ "Tjahjono", "V. R.", "" ], [ "Kusdiantara", "R.", "" ], [ "Putri", "E. R. M.", "" ], [ "Hadisoemarto", "P.", "" ], [ "Nuraini", "N.", "" ] ]
We propose a new method to estimate the time-varying effective (or instantaneous) reproduction number of the novel coronavirus disease (COVID-19). The method is based on a discrete-time stochastic augmented compartmental model that describes the virus transmission. A two-stage estimation method, which combines the Extended Kalman Filter (EKF) to estimate the reported state variables (active and removed cases) and a low pass filter based on a rational transfer function to remove short term fluctuations of the reported cases, is used with case uncertainties that are assumed to follow a Gaussian distribution. Our method does not require information regarding serial intervals, which makes the estimation procedure simpler without reducing the quality of the estimate. We show that the proposed method is comparable to common approaches, e.g., age-structured and new cases based sequential Bayesian models. We also apply it to COVID-19 cases in the Scandinavian countries: Denmark, Sweden, and Norway, where the positive rates were below 5\% recommended by WHO.
1401.7972
Charalampos A. Chrysanthakopoulos
Charalampos A. Chrysanthakopoulos
Predicting the Next Maxima Incidents of the Seasonally Forced SEIR Epidemic Model
6 pages, 3 figures, 2 tables, typos added
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper aims at predicting the next maxima values of the state variables of the seasonal SEIR epidemic model and their in-between time intervals. Lorenz's method of analogues is applied on the attractor formed by the maxima of the corresponding state variables. It is found that both quantities are characterized by a high degree of predictability in the case of the chaotic regime of the parameter space.
[ { "created": "Thu, 30 Jan 2014 20:23:24 GMT", "version": "v1" }, { "created": "Fri, 31 Jan 2014 14:06:50 GMT", "version": "v2" } ]
2014-02-03
[ [ "Chrysanthakopoulos", "Charalampos A.", "" ] ]
This paper aims at predicting the next maxima values of the state variables of the seasonal SEIR epidemic model and their in-between time intervals. Lorenz's method of analogues is applied on the attractor formed by the maxima of the corresponding state variables. It is found that both quantities are characterized by a high degree of predictability in the case of the chaotic regime of the parameter space.
2001.05534
C\'edric Beaulac
C\'edric Beaulac, Jeffrey S. Rosenthal, Qinglin Pei, Debra Friedman, Suzanne Wolden and David Hodgson
An evaluation of machine learning techniques to predict the outcome of children treated for Hodgkin-Lymphoma on the AHOD0031 trial: A report from the Children's Oncology Group
null
Applied Artificial Intelligence 2020
10.1080/08839514.2020.1815151
null
q-bio.QM stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this manuscript we analyze a data set containing information on children with Hodgkin Lymphoma (HL) enrolled on a clinical trial. Treatments received and survival status were collected together with other covariates such as demographics and clinical measurements. Our main task is to explore the potential of machine learning (ML) algorithms in a survival analysis context in order to improve over the Cox Proportional Hazard (CoxPH) model. We discuss the weaknesses of the CoxPH model we would like to improve upon and then we introduce multiple algorithms, from well-established ones to state-of-the-art models, that solve these issues. We then compare every model according to the concordance index and the brier score. Finally, we produce a series of recommendations, based on our experience, for practitioners that would like to benefit from the recent advances in artificial intelligence.
[ { "created": "Wed, 15 Jan 2020 20:03:26 GMT", "version": "v1" }, { "created": "Fri, 26 Mar 2021 17:43:56 GMT", "version": "v2" } ]
2021-03-29
[ [ "Beaulac", "Cédric", "" ], [ "Rosenthal", "Jeffrey S.", "" ], [ "Pei", "Qinglin", "" ], [ "Friedman", "Debra", "" ], [ "Wolden", "Suzanne", "" ], [ "Hodgson", "David", "" ] ]
In this manuscript we analyze a data set containing information on children with Hodgkin Lymphoma (HL) enrolled on a clinical trial. Treatments received and survival status were collected together with other covariates such as demographics and clinical measurements. Our main task is to explore the potential of machine learning (ML) algorithms in a survival analysis context in order to improve over the Cox Proportional Hazard (CoxPH) model. We discuss the weaknesses of the CoxPH model we would like to improve upon and then we introduce multiple algorithms, from well-established ones to state-of-the-art models, that solve these issues. We then compare every model according to the concordance index and the brier score. Finally, we produce a series of recommendations, based on our experience, for practitioners that would like to benefit from the recent advances in artificial intelligence.
2002.00418
Keji Liu
Yu Chen, Jin Cheng, Yu Jiang and Keji Liu
A Time Delay Dynamical Model for Outbreak of 2019-nCoV and the Parameter Identification
11 pages, 7 figures. arXiv admin note: text overlap with arXiv:2002.02590
null
null
null
q-bio.PE math.DS physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel dynamical system with time delay to describe the outbreak of 2019-nCoV in China. One typical feature of this epidemic is that it can spread during the latent period, which is therefore described by a time delay process in the differential equations. The accumulated numbers of classified populations are employed as variables, which is consistent with the official data and facilitates the parameter identification. Numerical methods for the prediction of the outbreak of 2019-nCoV and for parameter identification are provided, and the numerical results show that the novel dynamical system can predict the outbreak trend well so far. Based on the numerical simulations, we suggest that transmission between individuals should be strictly controlled by the government through a high isolation rate.
[ { "created": "Sun, 2 Feb 2020 15:56:12 GMT", "version": "v1" }, { "created": "Mon, 10 Feb 2020 12:45:10 GMT", "version": "v2" }, { "created": "Thu, 13 Feb 2020 04:18:12 GMT", "version": "v3" } ]
2020-02-14
[ [ "Chen", "Yu", "" ], [ "Cheng", "Jin", "" ], [ "Jiang", "Yu", "" ], [ "Liu", "Keji", "" ] ]
In this paper, we propose a novel dynamical system with time delay to describe the outbreak of 2019-nCoV in China. One typical feature of this epidemic is that it can spread during the latent period, which is therefore described by a time delay process in the differential equations. The accumulated numbers of classified populations are employed as variables, which is consistent with the official data and facilitates the parameter identification. Numerical methods for the prediction of the outbreak of 2019-nCoV and for parameter identification are provided, and the numerical results show that the novel dynamical system can predict the outbreak trend well so far. Based on the numerical simulations, we suggest that transmission between individuals should be strictly controlled by the government through a high isolation rate.
2005.03425
Stuart Hastings
S. P. Hastings and M. M Sussman
On the initiation of spiral waves in excitable media
null
null
null
null
q-bio.QM nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Excitable media are systems which are at rest in the absence of external input but which respond to a sufficiently strong stimulus by sending a wave of "excitation" across the medium. Examples include cardiac and cortical tissue, and in each of these examples rotating spiral waves have been observed and associated with abnormalities. How such waves are initiated is not well understood. In this numerical study of a standard mathematical model of excitable media, we obtain spirals and other oscillatory patterns by a method, simple in design, which had previously been ruled out. We analyze the early stages of this process and show that long term stable oscillatory behavior, including spiral waves, can start with very simple initial conditions, such as two small spots of excitation, and no subsequent input. Thus, there are no refractory cells in the initial condition. For this model, even random spots of stimulation result in periodic rotating patterns relatively often, leading us to suggest that this could happen in living tissue.
[ { "created": "Thu, 7 May 2020 12:50:13 GMT", "version": "v1" }, { "created": "Sun, 17 May 2020 18:05:57 GMT", "version": "v2" }, { "created": "Sat, 23 May 2020 20:06:41 GMT", "version": "v3" }, { "created": "Wed, 30 Dec 2020 15:24:41 GMT", "version": "v4" } ]
2021-01-01
[ [ "Hastings", "S. P.", "" ], [ "Sussman", "M. M", "" ] ]
Excitable media are systems which are at rest in the absence of external input but which respond to a sufficiently strong stimulus by sending a wave of "excitation" across the medium. Examples include cardiac and cortical tissue, and in each of these examples rotating spiral waves have been observed and associated with abnormalities. How such waves are initiated is not well understood. In this numerical study of a standard mathematical model of excitable media, we obtain spirals and other oscillatory patterns by a method, simple in design, which had previously been ruled out. We analyze the early stages of this process and show that long term stable oscillatory behavior, including spiral waves, can start with very simple initial conditions, such as two small spots of excitation, and no subsequent input. Thus, there are no refractory cells in the initial condition. For this model, even random spots of stimulation result in periodic rotating patterns relatively often, leading us to suggest that this could happen in living tissue.
2106.16219
Seongmin Park
Linda Q. Yu, Seongmin A. Park, Sarah C. Sweigart, Erie D. Boorman, Matthew R. Nassar
Do grid codes afford generalization and flexible decision-making?
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Behavioral flexibility is the ability to learn from previous experiences and to plan appropriate actions in a changing or novel environment. Successful behavioral adaptation depends on internal models the brain builds to represent the relational structure of an abstract task. Emerging evidence suggests that the well-known roles of the hippocampus and entorhinal cortex (HC-EC) in integrating spatial relationships into cognitive maps can be extended to map the transition structure between states in non-spatial abstract tasks. However, what the EC grid-codes actually compute to afford generalization remains elusive. We introduce two non-exclusive ideas regarding what grid-codes may represent to afford higher-level cognition. One idea is that grid-codes are eigenvectors of the successor representation (SR) learned online during a task. This view assumes that the grid codes serve as an efficient basis function for learning and representing experienced relationships between entities. Subsequently, the grid codes facilitate generalization in novel contexts such as when the goal changes. The second idea is that the grid-codes reflect the inferred global task structure. This view assumes that the grid-code represents a structural code that is factorized from specific sensory content, enabling structural information to be transferred across tasks. Subsequently, the brain could afford one-shot inferences without requiring experience. The ability to generalize experiences and make appropriate decisions in novel situations is critical for both animals and machines. Here we review proposed computations of the grid-code in the brain, which is potentially critical to behavioral flexibility.
[ { "created": "Wed, 30 Jun 2021 17:18:09 GMT", "version": "v1" } ]
2021-07-01
[ [ "Yu", "Linda Q.", "" ], [ "Park", "Seongmin A.", "" ], [ "Sweigart", "Sarah C.", "" ], [ "Boorman", "Erie D.", "" ], [ "Nassar", "Matthew R.", "" ] ]
Behavioral flexibility is the ability to learn from previous experiences and to plan appropriate actions in a changing or novel environment. Successful behavioral adaptation depends on internal models the brain builds to represent the relational structure of an abstract task. Emerging evidence suggests that the well-known roles of the hippocampus and entorhinal cortex (HC-EC) in integrating spatial relationships into cognitive maps can be extended to map the transition structure between states in non-spatial abstract tasks. However, what the EC grid-codes actually compute to afford generalization remains elusive. We introduce two non-exclusive ideas regarding what grid-codes may represent to afford higher-level cognition. One idea is that grid-codes are eigenvectors of the successor representation (SR) learned online during a task. This view assumes that the grid codes serve as an efficient basis function for learning and representing experienced relationships between entities. Subsequently, the grid codes facilitate generalization in novel contexts such as when the goal changes. The second idea is that the grid-codes reflect the inferred global task structure. This view assumes that the grid-code represents a structural code that is factorized from specific sensory content, enabling structural information to be transferred across tasks. Subsequently, the brain could afford one-shot inferences without requiring experience. The ability to generalize experiences and make appropriate decisions in novel situations is critical for both animals and machines. Here we review proposed computations of the grid-code in the brain, which is potentially critical to behavioral flexibility.
2402.16901
Duan Chenrui
ChenRui Duan, Zelin Zang, Yongjie Xu, Hang He, Zihan Liu, Zijia Song, Ju-Sheng Zheng, Stan Z. Li
FGBERT: Function-Driven Pre-trained Gene Language Model for Metagenomics
null
null
null
null
q-bio.GN cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metagenomic data, comprising mixed multi-species genomes, are prevalent in diverse environments like oceans and soils, significantly impacting human health and ecological functions. However, current research relies on K-mer representations, limiting the capture of structurally relevant gene contexts. To address these limitations and further our understanding of complex relationships between metagenomic sequences and their functions, we introduce a protein-based gene representation as a context-aware and structure-relevant tokenizer. Our approach includes Masked Gene Modeling (MGM) for gene group-level pre-training, providing insights into inter-gene contextual information, and Triple Enhanced Metagenomic Contrastive Learning (TEM-CL) for gene-level pre-training to model gene sequence-function relationships. MGM and TEM-CL constitute our novel metagenomic language model FGBERT, pre-trained on 100 million metagenomic sequences. We demonstrate the superiority of our proposed FGBERT on eight datasets.
[ { "created": "Sat, 24 Feb 2024 13:13:17 GMT", "version": "v1" } ]
2024-02-28
[ [ "Duan", "ChenRui", "" ], [ "Zang", "Zelin", "" ], [ "Xu", "Yongjie", "" ], [ "He", "Hang", "" ], [ "Liu", "Zihan", "" ], [ "Song", "Zijia", "" ], [ "Zheng", "Ju-Sheng", "" ], [ "Li", "Stan Z.", "" ] ]
Metagenomic data, comprising mixed multi-species genomes, are prevalent in diverse environments like oceans and soils, significantly impacting human health and ecological functions. However, current research relies on K-mer representations, limiting the capture of structurally relevant gene contexts. To address these limitations and further our understanding of complex relationships between metagenomic sequences and their functions, we introduce a protein-based gene representation as a context-aware and structure-relevant tokenizer. Our approach includes Masked Gene Modeling (MGM) for gene group-level pre-training, providing insights into inter-gene contextual information, and Triple Enhanced Metagenomic Contrastive Learning (TEM-CL) for gene-level pre-training to model gene sequence-function relationships. MGM and TEM-CL constitute our novel metagenomic language model FGBERT, pre-trained on 100 million metagenomic sequences. We demonstrate the superiority of our proposed FGBERT on eight datasets.
1303.4201
Thomas House
Christopher A. Rhodes and Thomas House
The rate of convergence to early asymptotic behaviour in age-structured epidemic models
9 pages, 2 figures
Published in Theoretical Population Biology 85 (2013) 58-62
10.1016/j.tpb.2013.02.003
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Age structure is incorporated in many types of epidemic model. Often it is convenient to assume that such models converge to early asymptotic behaviour quickly, before the susceptible population has been appreciably depleted. We make use of dynamical systems theory to show that for some reasonable parameter values, this convergence can be slow. Such a possibility should therefore be considered when parameterising age-structured epidemic models.
[ { "created": "Mon, 18 Mar 2013 10:40:53 GMT", "version": "v1" } ]
2013-03-19
[ [ "Rhodes", "Christopher A.", "" ], [ "House", "Thomas", "" ] ]
Age structure is incorporated in many types of epidemic model. Often it is convenient to assume that such models converge to early asymptotic behaviour quickly, before the susceptible population has been appreciably depleted. We make use of dynamical systems theory to show that for some reasonable parameter values, this convergence can be slow. Such a possibility should therefore be considered when parameterising age-structured epidemic models.
1402.0990
Darka Labavi\'c
Darka Labavic and Hildegard Meyer-Ortmanns
A simple mechanism for controlling the onset and arrest of collective oscillations in genetic circuits
null
null
null
null
q-bio.MN nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a system of dynamical units, each of which shows excitable or oscillatory behavior, depending on the choice of parameters. When we couple these units with repressive bonds, we can control the duration of collective oscillations for an intermediate period between collective fixed-point behavior. The control mechanism works by monotonically increasing a single bifurcation parameter. Both the onset and arrest of oscillations are due to bifurcations. Depending on the coupling strength, the network topology and the tuning speed, our numerical simulations reveal a rich dynamics out-of-equilibrium with multiple inherent time scales, long transients towards the stationary states and interesting transient patterns like self-organized pacemakers. Zooming into the transition regime, we pursue the arrest of oscillations along the arms of spirals. We point out possible relations to the genetic network of segmentation clocks.
[ { "created": "Wed, 5 Feb 2014 09:45:43 GMT", "version": "v1" } ]
2014-02-06
[ [ "Labavic", "Darka", "" ], [ "Meyer-Ortmanns", "Hildegard", "" ] ]
We study a system of dynamical units, each of which shows excitable or oscillatory behavior, depending on the choice of parameters. When we couple these units with repressive bonds, we can control the duration of collective oscillations for an intermediate period between collective fixed-point behavior. The control mechanism works by monotonically increasing a single bifurcation parameter. Both the onset and arrest of oscillations are due to bifurcations. Depending on the coupling strength, the network topology and the tuning speed, our numerical simulations reveal a rich dynamics out-of-equilibrium with multiple inherent time scales, long transients towards the stationary states and interesting transient patterns like self-organized pacemakers. Zooming into the transition regime, we pursue the arrest of oscillations along the arms of spirals. We point out possible relations to the genetic network of segmentation clocks.
2403.01578
Tu Luan
Tu Luan, Victoria Cepeda, Bo Liu, Zac Bowen, Ujjwal Ayyangar, Mathieu Almeida, Christopher M. Hill, Sergey Koren, Todd J. Treangen, Adam Porter, Mihai Pop
MetaCompass: Reference-guided Assembly of Metagenomes
18 pages, 6 figures, 3 tables, one supplementary material
null
null
null
q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Metagenomic studies have primarily relied on de novo assembly for reconstructing genes and genomes from microbial mixtures. While reference-guided approaches have been employed in the assembly of single organisms, they have not been used in a metagenomic context. Here we describe the first effective approach for reference-guided metagenomic assembly that can complement and improve upon de novo metagenomic assembly methods for certain organisms. Such approaches will be increasingly useful as more genomes are sequenced and made publicly available.
[ { "created": "Sun, 3 Mar 2024 18:00:46 GMT", "version": "v1" } ]
2024-03-05
[ [ "Luan", "Tu", "" ], [ "Cepeda", "Victoria", "" ], [ "Liu", "Bo", "" ], [ "Bowen", "Zac", "" ], [ "Ayyangar", "Ujjwal", "" ], [ "Almeida", "Mathieu", "" ], [ "Hill", "Christopher M.", "" ], [ "Koren", "Sergey", "" ], [ "Treangen", "Todd J.", "" ], [ "Porter", "Adam", "" ], [ "Pop", "Mihai", "" ] ]
Metagenomic studies have primarily relied on de novo assembly for reconstructing genes and genomes from microbial mixtures. While reference-guided approaches have been employed in the assembly of single organisms, they have not been used in a metagenomic context. Here we describe the first effective approach for reference-guided metagenomic assembly that can complement and improve upon de novo metagenomic assembly methods for certain organisms. Such approaches will be increasingly useful as more genomes are sequenced and made publicly available.
2311.09340
Charles Semple
Katharina T. Huber and Simone Linz and Vincent Moulton and Charles Semple
Phylogenetic trees defined by at most three characters
21 pages, 9 figures
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In evolutionary biology, phylogenetic trees are commonly inferred from a set of characters (partitions) of a collection of biological entities (e.g., species or individuals in a population). Such characters naturally arise from molecular sequences or morphological data. Interestingly, it has been known for some time that any binary phylogenetic tree can be (convexly) defined by a set of at most four characters, and that there are binary phylogenetic trees for which three characters are not enough. Thus, it is of interest to characterise those phylogenetic trees that are defined by a set of at most three characters. In this paper, we provide such a characterisation, in particular proving that a binary phylogenetic tree $T$ is defined by a set of at most three characters precisely if $T$ has no internal subtree isomorphic to a certain tree.
[ { "created": "Wed, 15 Nov 2023 19:58:42 GMT", "version": "v1" } ]
2023-11-17
[ [ "Huber", "Katharina T.", "" ], [ "Linz", "Simone", "" ], [ "Moulton", "Vincent", "" ], [ "Semple", "Charles", "" ] ]
In evolutionary biology, phylogenetic trees are commonly inferred from a set of characters (partitions) of a collection of biological entities (e.g., species or individuals in a population). Such characters naturally arise from molecular sequences or morphological data. Interestingly, it has been known for some time that any binary phylogenetic tree can be (convexly) defined by a set of at most four characters, and that there are binary phylogenetic trees for which three characters are not enough. Thus, it is of interest to characterise those phylogenetic trees that are defined by a set of at most three characters. In this paper, we provide such a characterisation, in particular proving that a binary phylogenetic tree $T$ is defined by a set of at most three characters precisely if $T$ has no internal subtree isomorphic to a certain tree.
1903.05610
Paul Smolen
Paul Smolen, Douglas A. Baxter, John H. Byrne
How Can Memories Last for Days, Years, or a Lifetime? Proposed Mechanisms for Maintaining Synaptic Potentiation and Memory
null
Learn. Mem. (2019) 26: 133-150
10.1101/lm.049395.119
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
With memory encoding reliant on persistent changes in the properties of synapses, a key question is how can memories be maintained from days to months or a lifetime given molecular turnover? It is likely that positive feedback loops are necessary to persistently maintain the strength of synapses that participate in encoding. Such feedback may occur within signal-transduction cascades and/or the regulation of translation, and it may occur within specific subcellular compartments or within neuronal networks. Not surprisingly, numerous positive feedback loops have been proposed. Some posited loops operate at the level of biochemical signal transduction cascades, such as persistent activation of calcium/calmodulin kinase II or protein kinase M. Another level consists of feedback loops involving transcriptional, epigenetic and translational pathways, and autocrine actions of growth factors such as BDNF. Finally, at the neuronal network level, recurrent reactivation of cell assemblies encoding memories is likely to be essential for late maintenance of memory. These levels are not isolated, but linked by shared components of feedback loops. Here, we review characteristics of some commonly discussed feedback loops proposed to underlie the maintenance of memory and long-term synaptic plasticity, assess evidence for and against their necessity, and suggest experiments that could further delineate the dynamics of these feedback loops. We also discuss crosstalk between proposed loops, and ways in which such interaction can facilitate the rapidity and robustness of memory formation and storage.
[ { "created": "Wed, 13 Mar 2019 17:16:52 GMT", "version": "v1" }, { "created": "Fri, 5 Apr 2019 17:02:01 GMT", "version": "v2" }, { "created": "Tue, 16 Apr 2019 21:58:51 GMT", "version": "v3" } ]
2019-04-18
[ [ "Smolen", "Paul", "" ], [ "Baxter", "Douglas A.", "" ], [ "Byrne", "John H.", "" ] ]
With memory encoding reliant on persistent changes in the properties of synapses, a key question is how can memories be maintained from days to months or a lifetime given molecular turnover? It is likely that positive feedback loops are necessary to persistently maintain the strength of synapses that participate in encoding. Such feedback may occur within signal-transduction cascades and/or the regulation of translation, and it may occur within specific subcellular compartments or within neuronal networks. Not surprisingly, numerous positive feedback loops have been proposed. Some posited loops operate at the level of biochemical signal transduction cascades, such as persistent activation of calcium/calmodulin kinase II or protein kinase M. Another level consists of feedback loops involving transcriptional, epigenetic and translational pathways, and autocrine actions of growth factors such as BDNF. Finally, at the neuronal network level, recurrent reactivation of cell assemblies encoding memories is likely to be essential for late maintenance of memory. These levels are not isolated, but linked by shared components of feedback loops. Here, we review characteristics of some commonly discussed feedback loops proposed to underlie the maintenance of memory and long-term synaptic plasticity, assess evidence for and against their necessity, and suggest experiments that could further delineate the dynamics of these feedback loops. We also discuss crosstalk between proposed loops, and ways in which such interaction can facilitate the rapidity and robustness of memory formation and storage.
0901.4701
Michele Caselle
M.Osella and M.Caselle
Entropic contributions to the splicing process
15 pages, 6 figures. Extended version, accepted for publication in Physical Biology
null
10.1088/1478-3975/6/4/046018
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been recently argued that the depletion attraction may play an important role in different aspects of the cellular organization, ranging from the organization of transcriptional activity in transcription factories to the formation of the nuclear bodies. In this paper we suggest a new application of these ideas in the context of the splicing process, a crucial step of messenger RNA maturation in Eukaryotes. We shall show that entropy effects and the resulting depletion attraction may explain the relevance of the aspecific intron length variable in the choice of the splice-site recognition modality. On top of that, some qualitative features of the genome architecture of higher Eukaryotes can find an evolutionary realistic motivation in the light of our model.
[ { "created": "Thu, 29 Jan 2009 15:07:55 GMT", "version": "v1" }, { "created": "Mon, 26 Oct 2009 15:55:16 GMT", "version": "v2" } ]
2015-05-13
[ [ "Osella", "M.", "" ], [ "Caselle", "M.", "" ] ]
It has been recently argued that the depletion attraction may play an important role in different aspects of the cellular organization, ranging from the organization of transcriptional activity in transcription factories to the formation of the nuclear bodies. In this paper we suggest a new application of these ideas in the context of the splicing process, a crucial step of messenger RNA maturation in Eukaryotes. We shall show that entropy effects and the resulting depletion attraction may explain the relevance of the aspecific intron length variable in the choice of the splice-site recognition modality. On top of that, some qualitative features of the genome architecture of higher Eukaryotes can find an evolutionary realistic motivation in the light of our model.
2102.06178
Marissa Renardy
Marissa Renardy (1), Denise Kirschner (1), Marisa Eisenberg (2) ((1) University of Michigan Medical School, Department of Microbiology and Immunology, (2) University of Michigan, Department of Epidemiology)
Structural identifiability analysis of PDEs: A case study in continuous age-structured epidemic models
27 pages, 3 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational and mathematical models rely heavily on estimated parameter values for model development. Identifiability analysis determines how well the parameters of a model can be estimated from experimental data. Identifiability analysis is crucial for interpreting and determining confidence in model parameter values and to provide biologically relevant predictions. Structural identifiability analysis, in which one assumes data to be noiseless and arbitrarily fine-grained, has been extensively studied in the context of ordinary differential equation (ODE) models, but has not yet been widely explored for age-structured partial differential equation (PDE) models. These models present additional difficulties due to increased number of variables and partial derivatives as well as the presence of boundary conditions. In this work, we establish a pipeline for structural identifiability analysis of age-structured PDE models using a differential algebra framework and derive identifiability results for specific age-structured models. We use epidemic models to demonstrate this framework because of their wide-spread use in many different diseases and for the corresponding parallel work previously done for ODEs. In our application of the identifiability analysis pipeline, we focus on a Susceptible-Exposed-Infected model for which we compare identifiability results for a PDE and corresponding ODE system and explore effects of age-dependent parameters on identifiability. We also show how practical identifiability analysis can be applied in this example.
[ { "created": "Thu, 11 Feb 2021 18:43:30 GMT", "version": "v1" } ]
2021-02-12
[ [ "Renardy", "Marissa", "" ], [ "Kirschner", "Denise", "" ], [ "Eisenberg", "Marisa", "" ] ]
Computational and mathematical models rely heavily on estimated parameter values for model development. Identifiability analysis determines how well the parameters of a model can be estimated from experimental data. Identifiability analysis is crucial for interpreting and determining confidence in model parameter values and to provide biologically relevant predictions. Structural identifiability analysis, in which one assumes data to be noiseless and arbitrarily fine-grained, has been extensively studied in the context of ordinary differential equation (ODE) models, but has not yet been widely explored for age-structured partial differential equation (PDE) models. These models present additional difficulties due to increased number of variables and partial derivatives as well as the presence of boundary conditions. In this work, we establish a pipeline for structural identifiability analysis of age-structured PDE models using a differential algebra framework and derive identifiability results for specific age-structured models. We use epidemic models to demonstrate this framework because of their wide-spread use in many different diseases and for the corresponding parallel work previously done for ODEs. In our application of the identifiability analysis pipeline, we focus on a Susceptible-Exposed-Infected model for which we compare identifiability results for a PDE and corresponding ODE system and explore effects of age-dependent parameters on identifiability. We also show how practical identifiability analysis can be applied in this example.
2102.03568
Thierry Mora
Meriem Bensouda Koraichi, Maximilian Puelma Touzel, Andrea Mazzolini, Thierry Mora, Aleksandra M. Walczak
NoisET: Noise learning and Expansion detection of T-cell receptors
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-throughput sequencing of T- and B-cell receptors makes it possible to track immune repertoires across time, in different tissues, in acute and chronic diseases and in healthy individuals. However, quantitative comparison between repertoires is confounded by variability in the read count of each receptor clonotype due to sampling, library preparation, and expression noise. We review methods for accounting for both biological and experimental noise and present an easy-to-use python package NoisET that implements and generalizes a previously developed Bayesian method. It can be used to learn experimental noise models for repertoire sequencing from replicates, and to detect responding clones following a stimulus. We test the package on different repertoire sequencing technologies and datasets. We review how such approaches have been used to identify responding clonotypes in vaccination and disease data. Availability: NoisET is freely available to use with source code at github.com/statbiophys/NoisET.
[ { "created": "Sat, 6 Feb 2021 11:42:44 GMT", "version": "v1" }, { "created": "Sun, 17 Jul 2022 20:16:01 GMT", "version": "v2" } ]
2022-07-19
[ [ "Koraichi", "Meriem Bensouda", "" ], [ "Touzel", "Maximilian Puelma", "" ], [ "Mazzolini", "Andrea", "" ], [ "Mora", "Thierry", "" ], [ "Walczak", "Aleksandra M.", "" ] ]
High-throughput sequencing of T- and B-cell receptors makes it possible to track immune repertoires across time, in different tissues, in acute and chronic diseases and in healthy individuals. However, quantitative comparison between repertoires is confounded by variability in the read count of each receptor clonotype due to sampling, library preparation, and expression noise. We review methods for accounting for both biological and experimental noise and present an easy-to-use python package NoisET that implements and generalizes a previously developed Bayesian method. It can be used to learn experimental noise models for repertoire sequencing from replicates, and to detect responding clones following a stimulus. We test the package on different repertoire sequencing technologies and datasets. We review how such approaches have been used to identify responding clonotypes in vaccination and disease data. Availability: NoisET is freely available to use with source code at github.com/statbiophys/NoisET.
1305.1706
Thorsten Pr\"ustel
Thorsten Pr\"ustel and Martin Meier-Schellersheim
Exact Green's function of the reversible ABCD reaction in two space dimensions
9 pages, 1 figure
null
null
null
q-bio.QM physics.chem-ph
http://creativecommons.org/licenses/publicdomain/
We derive an exact expression for the Green's functions in the time domain of the reversible diffusion-influenced ABCD reaction $A+B\leftrightarrow C+D$ in two space dimensions. Furthermore, we calculate the corresponding survival and reaction probabilities. The obtained expressions should prove useful for the study of reversible membrane-bound reactions in cell biology and can serve as a useful ingredient of enhanced stochastic particle-based simulation algorithms.
[ { "created": "Wed, 8 May 2013 03:26:16 GMT", "version": "v1" } ]
2013-05-09
[ [ "Prüstel", "Thorsten", "" ], [ "Meier-Schellersheim", "Martin", "" ] ]
We derive an exact expression for the Green's functions in the time domain of the reversible diffusion-influenced ABCD reaction $A+B\leftrightarrow C+D$ in two space dimensions. Furthermore, we calculate the corresponding survival and reaction probabilities. The obtained expressions should prove useful for the study of reversible membrane-bound reactions in cell biology and can serve as a useful ingredient of enhanced stochastic particle-based simulation algorithms.
0907.4680
Reid Ginoza
Reid Ginoza, Andrew Mugler
Network motifs come in sets: correlations in the randomization process
null
Phys. Rev. E 82, 011921 (2010)
10.1103/PhysRevE.82.011921
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The identification of motifs--subgraphs that appear significantly more often in a particular network than in an ensemble of randomized networks--has become a ubiquitous method for uncovering potentially important subunits within networks drawn from a wide variety of fields. We find that the most common algorithms used to generate the ensemble from the real network change subgraph counts in a highly correlated manner, so that one subgraph's status as a motif may not be independent from the statuses of the other subgraphs. We demonstrate this effect for the problem of 3- and 4-node motif identification in the transcriptional regulatory networks of E. coli and S. cerevisiae in which randomized networks are generated via an edge-swapping algorithm (Milo et al., Science 298:824, 2002). We show that correlations among 3-node subgraphs are easily interpreted, and we present an information-theoretic tool that may be used to identify correlations among subgraphs of any size.
[ { "created": "Mon, 27 Jul 2009 16:13:43 GMT", "version": "v1" } ]
2010-08-27
[ [ "Ginoza", "Reid", "" ], [ "Mugler", "Andrew", "" ] ]
The identification of motifs--subgraphs that appear significantly more often in a particular network than in an ensemble of randomized networks--has become a ubiquitous method for uncovering potentially important subunits within networks drawn from a wide variety of fields. We find that the most common algorithms used to generate the ensemble from the real network change subgraph counts in a highly correlated manner, so that one subgraph's status as a motif may not be independent from the statuses of the other subgraphs. We demonstrate this effect for the problem of 3- and 4-node motif identification in the transcriptional regulatory networks of E. coli and S. cerevisiae in which randomized networks are generated via an edge-swapping algorithm (Milo et al., Science 298:824, 2002). We show that correlations among 3-node subgraphs are easily interpreted, and we present an information-theoretic tool that may be used to identify correlations among subgraphs of any size.
1412.0176
Nathan Baker
Guo Wei Wei, Nathan A. Baker
Differential geometry-based solvation and electrolyte transport models for biomolecular modeling: a review
null
null
10.1201/b21343-15
null
q-bio.BM physics.bio-ph physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This chapter reviews the differential geometry-based solvation and electrolyte transport for biomolecular solvation that have been developed over the past decade. A key component of these methods is the differential geometry of surfaces theory, as applied to the solvent-solute boundary. In these approaches, the solvent-solute boundary is determined by a variational principle that determines the major physical observables of interest, for example, biomolecular surface area, enclosed volume, electrostatic potential, ion density, electron density, etc. Recently, differential geometry theory has been used to define the surfaces that separate the microscopic (solute) domains for biomolecules from the macroscopic (solvent) domains. In these approaches, the microscopic domains are modeled with atomistic or quantum mechanical descriptions, while continuum mechanics models (including fluid mechanics, elastic mechanics, and continuum electrostatics) are applied to the macroscopic domains. This multiphysics description is integrated through an energy functional formalism and the resulting Euler-Lagrange equation is employed to derive a variety of governing partial differential equations for different solvation and transport processes; e.g., the Laplace-Beltrami equation for the solvent-solute interface, Poisson or Poisson-Boltzmann equations for electrostatic potentials, the Nernst-Planck equation for ion densities, and the Kohn-Sham equation for solute electron density. Extensive validation of these models has been carried out over hundreds of molecules, including proteins and ion channels, and the experimental data have been compared in terms of solvation energies, voltage-current curves, and density distributions. We also propose a new quantum model for electrolyte transport.
[ { "created": "Sun, 30 Nov 2014 03:43:18 GMT", "version": "v1" } ]
2017-09-14
[ [ "Wei", "Guo Wei", "" ], [ "Baker", "Nathan A.", "" ] ]
This chapter reviews the differential geometry-based solvation and electrolyte transport for biomolecular solvation that have been developed over the past decade. A key component of these methods is the differential geometry of surfaces theory, as applied to the solvent-solute boundary. In these approaches, the solvent-solute boundary is determined by a variational principle that determines the major physical observables of interest, for example, biomolecular surface area, enclosed volume, electrostatic potential, ion density, electron density, etc. Recently, differential geometry theory has been used to define the surfaces that separate the microscopic (solute) domains for biomolecules from the macroscopic (solvent) domains. In these approaches, the microscopic domains are modeled with atomistic or quantum mechanical descriptions, while continuum mechanics models (including fluid mechanics, elastic mechanics, and continuum electrostatics) are applied to the macroscopic domains. This multiphysics description is integrated through an energy functional formalism and the resulting Euler-Lagrange equation is employed to derive a variety of governing partial differential equations for different solvation and transport processes; e.g., the Laplace-Beltrami equation for the solvent-solute interface, Poisson or Poisson-Boltzmann equations for electrostatic potentials, the Nernst-Planck equation for ion densities, and the Kohn-Sham equation for solute electron density. Extensive validation of these models has been carried out over hundreds of molecules, including proteins and ion channels, and the experimental data have been compared in terms of solvation energies, voltage-current curves, and density distributions. We also propose a new quantum model for electrolyte transport.
2405.14887
Delfim F. M. Torres
Meriem Boukhobza, Amar Debbouche, Lingeshwaran Shangerganesh, Delfim F. M. Torres
Modeling the dynamics of the Hepatitis B virus via a variable-order discrete system
This is a preprint whose final form is published in 'Chaos, Solitons and Fractals' (see https://doi.org/10.1016/j.chaos.2024.114987)
Chaos Solitons Fractals 184 (2024), Art. 114987, 8pp
10.1016/j.chaos.2024.114987
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the dynamics of the hepatitis B virus by integrating variable-order calculus and discrete analysis. Specifically, we utilize the Caputo variable-order difference operator in this study. To establish the existence and uniqueness results of the model, we employ a fixed-point technique. Furthermore, we prove that the model exhibits bounded and positive solutions. Additionally, we explore the local stability of the proposed model by determining the basic reproduction number. Finally, we present several numerical simulations to illustrate the richness of our results.
[ { "created": "Wed, 15 May 2024 19:47:00 GMT", "version": "v1" } ]
2024-05-27
[ [ "Boukhobza", "Meriem", "" ], [ "Debbouche", "Amar", "" ], [ "Shangerganesh", "Lingeshwaran", "" ], [ "Torres", "Delfim F. M.", "" ] ]
We investigate the dynamics of the hepatitis B virus by integrating variable-order calculus and discrete analysis. Specifically, we utilize the Caputo variable-order difference operator in this study. To establish the existence and uniqueness results of the model, we employ a fixed-point technique. Furthermore, we prove that the model exhibits bounded and positive solutions. Additionally, we explore the local stability of the proposed model by determining the basic reproduction number. Finally, we present several numerical simulations to illustrate the richness of our results.
1503.02974
Matthew Wade
Matthew J. Wade and Thomas P. Curtis and Russell J. Davenport
Modelling Computational Resources for Next Generation Sequencing Bioinformatics Analysis of 16S rRNA Samples
23 pages, 8 figures
null
null
null
q-bio.GN cs.CE cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the rapidly evolving domain of next generation sequencing and bioinformatics analysis, data generation is one aspect that is increasing at a concomitant rate. The burden associated with processing large amounts of sequencing data has emphasised the need to allocate sufficient computing resources to complete analyses in the shortest possible time with manageable and predictable costs. A novel method for predicting time to completion for a popular bioinformatics software (QIIME) was developed using key variables characteristic of the input data assumed to impact processing time. Multiple Linear Regression models were developed to determine run time for two denoising algorithms and a general bioinformatics pipeline. The models were able to accurately predict clock time for denoising sequences from a naturally assembled community dataset, but not an artificial community. Speedup and efficiency tests for AmpliconNoise also highlighted that caution was needed when allocating resources for parallel processing of data. Accurate modelling of computational processing time using easily measurable predictors can assist NGS analysts in determining resource requirements for bioinformatics software and pipelines. Whilst demonstrated on a specific group of scripts, the methodology can be extended to encompass other packages running on multiple architectures, either in parallel or sequentially.
[ { "created": "Tue, 10 Mar 2015 16:18:57 GMT", "version": "v1" } ]
2015-03-11
[ [ "Wade", "Matthew J.", "" ], [ "Curtis", "Thomas P.", "" ], [ "Davenport", "Russell J.", "" ] ]
In the rapidly evolving domain of next generation sequencing and bioinformatics analysis, data generation is one aspect that is increasing at a concomitant rate. The burden associated with processing large amounts of sequencing data has emphasised the need to allocate sufficient computing resources to complete analyses in the shortest possible time with manageable and predictable costs. A novel method for predicting time to completion for a popular bioinformatics software (QIIME) was developed using key variables characteristic of the input data assumed to impact processing time. Multiple Linear Regression models were developed to determine run time for two denoising algorithms and a general bioinformatics pipeline. The models were able to accurately predict clock time for denoising sequences from a naturally assembled community dataset, but not an artificial community. Speedup and efficiency tests for AmpliconNoise also highlighted that caution was needed when allocating resources for parallel processing of data. Accurate modelling of computational processing time using easily measurable predictors can assist NGS analysts in determining resource requirements for bioinformatics software and pipelines. Whilst demonstrated on a specific group of scripts, the methodology can be extended to encompass other packages running on multiple architectures, either in parallel or sequentially.
1311.0753
Carsten Allefeld
Carsten Allefeld, Chun Siong Soon, Carsten Bogler, Jakob Heinzle, John-Dylan Haynes
Sequential dependencies between trials in free choice tasks
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In two previous experiments we investigated the neural precursors of subjects' "free" choices for one of two options (pressing one of two buttons, and choosing between adding and subtracting numbers). In these experiments the distribution of sequence lengths was taken as an approximate indicator of the randomness (or lack of sequential dependency) of the choice sequences. However, this method is limited in its ability to reveal sequential dependencies. Here we present a more detailed individual-subject analysis and conclude that despite the presence of significant sequential dependencies, the subjects' behavior still approximates randomness, as measured by an entropy rate (on pooled data) of 0.940 bit / trial and 0.965 bit / trial in the two experiments. We also provide the raw single-subject behavioral data.
[ { "created": "Mon, 4 Nov 2013 16:24:27 GMT", "version": "v1" } ]
2013-11-05
[ [ "Allefeld", "Carsten", "" ], [ "Soon", "Chun Siong", "" ], [ "Bogler", "Carsten", "" ], [ "Heinzle", "Jakob", "" ], [ "Haynes", "John-Dylan", "" ] ]
In two previous experiments we investigated the neural precursors of subjects' "free" choices for one of two options (pressing one of two buttons, and choosing between adding and subtracting numbers). In these experiments the distribution of sequence lengths was taken as an approximate indicator of the randomness (or lack of sequential dependency) of the choice sequences. However, this method is limited in its ability to reveal sequential dependencies. Here we present a more detailed individual-subject analysis and conclude that despite the presence of significant sequential dependencies, the subjects' behavior still approximates randomness, as measured by an entropy rate (on pooled data) of 0.940 bit / trial and 0.965 bit / trial in the two experiments. We also provide the raw single-subject behavioral data.
2402.06880
Zheyu Wen
Zheyu Wen, Ali Ghafouri and George Biros
A single-snapshot inverse solver for two-species graph model of tau pathology spreading in human Alzheimer disease
10 pages, 6 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a method that uses a two-species ordinary differential equation (ODE) biophysical model to characterize misfolded tau (or simply tau) protein spreading in Alzheimer disease (AD) and calibrates it from clinical data. The unknown model parameters are the initial condition (IC) for tau and three scalar parameters representing the migration, proliferation, and clearance of tau proteins. Driven by imaging data, these parameters are estimated by formulating a constrained optimization problem with a sparsity regularization for the IC. This optimization problem is solved with a projection-based quasi-Newton algorithm. We investigate the sensitivity of our method to different algorithm parameters. We evaluate the performance of our method on both synthetic and clinical data. The latter comprises cases from the AD Neuroimaging Initiative (ADNI) and Harvard Aging Brain Study (HABS) datasets: 455 cognitively normal (CN), 212 mild cognitive impairment (MCI), and 45 AD subjects. We compare the performance of our approach to the commonly used Fisher-Kolmogorov (FK) model with a fixed IC at the entorhinal cortex (EC). Our method demonstrates an average improvement of 25.7% in relative error compared to the FK model on the AD dataset. HFK also achieves an R-squared score of 0.664 when fitting the AD data, compared with 0.55 for the FK model. Furthermore, for cases that have longitudinal data, we estimate a subject-specific AD onset time.
[ { "created": "Sat, 10 Feb 2024 04:34:34 GMT", "version": "v1" } ]
2024-02-13
[ [ "Wen", "Zheyu", "" ], [ "Ghafouri", "Ali", "" ], [ "Biros", "George", "" ] ]
We propose a method that uses a two-species ordinary differential equation (ODE) biophysical model to characterize misfolded tau (or simply tau) protein spreading in Alzheimer disease (AD) and calibrates it from clinical data. The unknown model parameters are the initial condition (IC) for tau and three scalar parameters representing the migration, proliferation, and clearance of tau proteins. Driven by imaging data, these parameters are estimated by formulating a constrained optimization problem with a sparsity regularization for the IC. This optimization problem is solved with a projection-based quasi-Newton algorithm. We investigate the sensitivity of our method to different algorithm parameters. We evaluate the performance of our method on both synthetic and clinical data. The latter comprises cases from the AD Neuroimaging Initiative (ADNI) and Harvard Aging Brain Study (HABS) datasets: 455 cognitively normal (CN), 212 mild cognitive impairment (MCI), and 45 AD subjects. We compare the performance of our approach to the commonly used Fisher-Kolmogorov (FK) model with a fixed IC at the entorhinal cortex (EC). Our method demonstrates an average improvement of 25.7% in relative error compared to the FK model on the AD dataset. HFK also achieves an R-squared score of 0.664 when fitting the AD data, compared with 0.55 for the FK model. Furthermore, for cases that have longitudinal data, we estimate a subject-specific AD onset time.
1410.4620
Hyekyoung Lee
Hyekyoung Lee and Hyejin Kang and Moo K. Chung and Seonhee Lim and Bung-Nyun Kim and Dong Soo Lee
Integrated multimodal network approach to PET and MRI based on multidimensional persistent homology
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finding the underlying relationships among multiple imaging modalities in a coherent fashion is one of the challenging problems in multimodal analysis. In this study, we propose a novel multimodal network approach based on multidimensional persistent homology. In this extension of the previous threshold-free method of persistent homology, we visualize and discriminate the topological change of integrated brain networks by varying not only the threshold but also the mixing ratio between two different imaging modalities. Moreover, we also propose an integration method for multimodal networks, called one-dimensional projection, with a specific mixing ratio between modalities. We applied the proposed methods to PET and MRI data from 21 autism spectrum disorder (ASD) children and 10 pediatric control subjects. From the results, we found that the brain networks of ASD children and controls differ significantly, with ASD showing asymmetrical changes of connected structures between PET and MRI. The integrated MRI and PET networks showed that ASD children had weaker connections than controls within the visual cortex, between dorsal and ventral parts of the temporal pole, between frontal and parietal regions, and between the left perisylvian and other brain regions. These results provide a multidimensional homological understanding of disease-related PET and MRI networks that discloses the network association with ASD.
[ { "created": "Fri, 17 Oct 2014 03:04:32 GMT", "version": "v1" }, { "created": "Thu, 25 Feb 2016 03:00:27 GMT", "version": "v2" } ]
2016-02-26
[ [ "Lee", "Hyekyoung", "" ], [ "Kang", "Hyejin", "" ], [ "Chung", "Moo K.", "" ], [ "Lim", "Seonhee", "" ], [ "Kim", "Bung-Nyun", "" ], [ "Lee", "Dong Soo", "" ] ]
Finding the underlying relationships among multiple imaging modalities in a coherent fashion is one of the challenging problems in multimodal analysis. In this study, we propose a novel multimodal network approach based on multidimensional persistent homology. In this extension of the previous threshold-free method of persistent homology, we visualize and discriminate the topological change of integrated brain networks by varying not only the threshold but also the mixing ratio between two different imaging modalities. Moreover, we also propose an integration method for multimodal networks, called one-dimensional projection, with a specific mixing ratio between modalities. We applied the proposed methods to PET and MRI data from 21 autism spectrum disorder (ASD) children and 10 pediatric control subjects. From the results, we found that the brain networks of ASD children and controls differ significantly, with ASD showing asymmetrical changes of connected structures between PET and MRI. The integrated MRI and PET networks showed that ASD children had weaker connections than controls within the visual cortex, between dorsal and ventral parts of the temporal pole, between frontal and parietal regions, and between the left perisylvian and other brain regions. These results provide a multidimensional homological understanding of disease-related PET and MRI networks that discloses the network association with ASD.
1802.05317
Pawel Kulakowski
Jakub Kmiecik, Pawel Kulakowski, Krzysztof Wojcik and Andrzej Jajszczyk
Communication via FRET in Nanonetworks of Mobile Proteins
This is the extended version of the paper presented later (in a shorter version) at NanoCom 2016
3rd ACM International Conference on Nanoscale Computing and Communication, article no. 32, New York, USA, 28-30 September 2016
10.1145/2967446.2967477
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A practical, biologically motivated case of protein complexes (immunoglobulin G and FcRII receptors) moving on the surface of mast cells, which are common components of the immune system, is investigated. Proteins are considered as nanomachines creating a nanonetwork. Accurate molecular models of the proteins and the fluorophores which act as their nanoantennas are used to simulate the communication between the nanomachines when they are close to each other. The theory of diffusion-based Brownian motion is applied to model the movements of the proteins. It is assumed that fluorophore molecules send and receive signals using Forster Resonance Energy Transfer. The probability of efficient signal transfer and the respective bit error rate are calculated and discussed.
[ { "created": "Wed, 14 Feb 2018 20:50:05 GMT", "version": "v1" } ]
2018-02-16
[ [ "Kmiecik", "Jakub", "" ], [ "Kulakowski", "Pawel", "" ], [ "Wojcik", "Krzysztof", "" ], [ "Jajszczyk", "Andrzej", "" ] ]
A practical, biologically motivated case of protein complexes (immunoglobulin G and FcRII receptors) moving on the surface of mast cells, which are common components of the immune system, is investigated. Proteins are considered as nanomachines creating a nanonetwork. Accurate molecular models of the proteins and the fluorophores which act as their nanoantennas are used to simulate the communication between the nanomachines when they are close to each other. The theory of diffusion-based Brownian motion is applied to model the movements of the proteins. It is assumed that fluorophore molecules send and receive signals using Forster Resonance Energy Transfer. The probability of efficient signal transfer and the respective bit error rate are calculated and discussed.
2405.07622
Fr\'ed\'eric Dreyer
Daniel Cutting, Fr\'ed\'eric A. Dreyer, David Errington, Constantin Schneider, Charlotte M. Deane
De novo antibody design with SE(3) diffusion
20 pages, 11 figures, 4 tables, model weights and samples available at https://zenodo.org/records/11184374
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
We introduce IgDiff, an antibody variable domain diffusion model based on a general protein backbone diffusion framework which was extended to handle multiple chains. Assessing the designability and novelty of the structures generated with our model, we find that IgDiff produces highly designable antibodies that can contain novel binding regions. The backbone dihedral angles of sampled structures show good agreement with a reference antibody distribution. We verify these designed antibodies experimentally and find that all express with high yield. Finally, we compare our model with a state-of-the-art generative backbone diffusion model on a range of antibody design tasks, such as the design of the complementarity determining regions or the pairing of a light chain to an existing heavy chain, and show improved properties and designability.
[ { "created": "Mon, 13 May 2024 10:27:17 GMT", "version": "v1" } ]
2024-05-14
[ [ "Cutting", "Daniel", "" ], [ "Dreyer", "Frédéric A.", "" ], [ "Errington", "David", "" ], [ "Schneider", "Constantin", "" ], [ "Deane", "Charlotte M.", "" ] ]
We introduce IgDiff, an antibody variable domain diffusion model based on a general protein backbone diffusion framework which was extended to handle multiple chains. Assessing the designability and novelty of the structures generated with our model, we find that IgDiff produces highly designable antibodies that can contain novel binding regions. The backbone dihedral angles of sampled structures show good agreement with a reference antibody distribution. We verify these designed antibodies experimentally and find that all express with high yield. Finally, we compare our model with a state-of-the-art generative backbone diffusion model on a range of antibody design tasks, such as the design of the complementarity determining regions or the pairing of a light chain to an existing heavy chain, and show improved properties and designability.
2009.07466
Lin Yang
Jiacheng Li, Xiaoliang Ma, Hongchi Zhang, Chengyu Hou, Liping Shi, Shuai Guo, Chenchen Liao, Bing Zheng, Lin Ye, Lin Yang, Xiaodong He
The role of hydrophobic interactions in folding of $\beta$-sheets
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exploring the protein-folding problem has been a long-standing challenge in molecular biology. Protein folding is highly dependent on the folding of secondary structures as the way to pave a native folding pathway. Here, we demonstrate that a feature of a large hydrophobic surface area covering most side-chains on one side or the other of adjacent $\beta$-strands of a $\beta$-sheet prevails in almost all experimentally determined $\beta$-sheets, indicating that folding of $\beta$-sheets is most likely triggered by multistage hydrophobic interactions among neighboring side-chains of unfolded polypeptides, enabling $\beta$-sheets to fold reproducibly following explicit physical folding codes in aqueous environments. $\beta$-turns often contain five types of residues characterized by relatively small exposed hydrophobic proportions of their side-chains, which is explained by the ability of these residues to block the hydrophobic effect among neighboring side-chains in sequence. The temperature dependence of the folding of $\beta$-sheets is thus attributed to the temperature dependence of the strength of hydrophobicity. The hydrophobic-effect-based mechanism responsible for $\beta$-sheet folding is verified by bioinformatics analyses of thousands of results available from experiments. The folding codes in the amino acid sequence that dictate formation of a $\beta$-hairpin can be deciphered by evaluating hydrophobic interactions among the side-chains of an unfolded polypeptide from a $\beta$-strand-like thermodynamically metastable state.
[ { "created": "Wed, 16 Sep 2020 04:33:58 GMT", "version": "v1" } ]
2020-09-17
[ [ "Li", "Jiacheng", "" ], [ "Ma", "Xiaoliang", "" ], [ "Zhang", "Hongchi", "" ], [ "Hou", "Chengyu", "" ], [ "Shi", "Liping", "" ], [ "Guo", "Shuai", "" ], [ "Liao", "Chenchen", "" ], [ "Zheng", "Bing", "" ], [ "Ye", "Lin", "" ], [ "Yang", "Lin", "" ], [ "He", "Xiaodong", "" ] ]
Exploring the protein-folding problem has been a long-standing challenge in molecular biology. Protein folding is highly dependent on the folding of secondary structures as the way to pave a native folding pathway. Here, we demonstrate that a feature of a large hydrophobic surface area covering most side-chains on one side or the other of adjacent $\beta$-strands of a $\beta$-sheet prevails in almost all experimentally determined $\beta$-sheets, indicating that folding of $\beta$-sheets is most likely triggered by multistage hydrophobic interactions among neighboring side-chains of unfolded polypeptides, enabling $\beta$-sheets to fold reproducibly following explicit physical folding codes in aqueous environments. $\beta$-turns often contain five types of residues characterized by relatively small exposed hydrophobic proportions of their side-chains, which is explained by the ability of these residues to block the hydrophobic effect among neighboring side-chains in sequence. The temperature dependence of the folding of $\beta$-sheets is thus attributed to the temperature dependence of the strength of hydrophobicity. The hydrophobic-effect-based mechanism responsible for $\beta$-sheet folding is verified by bioinformatics analyses of thousands of results available from experiments. The folding codes in the amino acid sequence that dictate formation of a $\beta$-hairpin can be deciphered by evaluating hydrophobic interactions among the side-chains of an unfolded polypeptide from a $\beta$-strand-like thermodynamically metastable state.
1307.2404
P. Grassberger
Li Chen, Fakhteh Ghanbarnejad, Weiran Cai, and Peter Grassberger
Outbreaks of coinfections: the critical role of cooperativity
5 pages, including 5 figures
EPL 104 (2013) 50001
10.1209/0295-5075/104/50001
null
q-bio.PE cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling epidemic dynamics plays an important role in studying how diseases spread, predicting their future course, and designing strategies to control them. In this letter, we introduce a model of SIR (susceptible-infected-removed) type which explicitly incorporates the effect of {\it cooperative coinfection}. More precisely, each individual can get infected by two different diseases, and an individual already infected with one disease has an increased probability to get infected by the other. Depending on the amount of this increase, we observe different threshold scenarios. Apart from the standard continuous phase transition for single disease outbreaks, we observe continuous transitions where both diseases must coexist, but also discontinuous transitions are observed, where a finite fraction of the population is already affected by both diseases at the threshold. All our results are obtained in a mean field model using rate equations, but we argue that they should hold also in more general frameworks.
[ { "created": "Tue, 9 Jul 2013 11:19:21 GMT", "version": "v1" }, { "created": "Mon, 16 Dec 2013 15:17:53 GMT", "version": "v2" } ]
2013-12-17
[ [ "Chen", "Li", "" ], [ "Ghanbarnejad", "Fakhteh", "" ], [ "Cai", "Weiran", "" ], [ "Grassberger", "Peter", "" ] ]
Modeling epidemic dynamics plays an important role in studying how diseases spread, predicting their future course, and designing strategies to control them. In this letter, we introduce a model of SIR (susceptible-infected-removed) type which explicitly incorporates the effect of {\it cooperative coinfection}. More precisely, each individual can get infected by two different diseases, and an individual already infected with one disease has an increased probability to get infected by the other. Depending on the amount of this increase, we observe different threshold scenarios. Apart from the standard continuous phase transition for single disease outbreaks, we observe continuous transitions where both diseases must coexist, but also discontinuous transitions are observed, where a finite fraction of the population is already affected by both diseases at the threshold. All our results are obtained in a mean field model using rate equations, but we argue that they should hold also in more general frameworks.
q-bio/0501009
Francisco Coutinho DPhil.
L.F.Lopez, F.A.B.Coutinho, M.N.Burattini and E.Massad
A schematic age-structured compartment model of the impact of antiretroviral therapy on HIV incidence and prevalence
29 pages, 14 figures
null
null
null
q-bio.PE q-bio.QM
null
A simple deterministic model is proposed to represent the basic aspects concerning the effects of different antiretroviral treatment schedules on HIV incidence and prevalence of affected populations. The model mimics current treatment guidelines applied in Brazil. However, the model does not intend to fit the data with any acceptable degree of accuracy since uncertainties on the values of the parameters and on the precise effect of the treatment put some limits on the practical implications of our model from which only orders of magnitude and some qualitative effects can be deduced. So, this paper intends to provide a conceptual and mechanistic understanding of the possible long term effects of treatment on the dynamics of HIV transmission. According to the model, the effect of the treatment depends on the level of sexual activity of the subpopulations considered, being more pronounced on the subpopulations with the highest sexual activity levels. Also, inefficient treatment can be prejudicial depending on the level of sexual activity and on the capacity to provide adequate treatment coverages to the population affected.
[ { "created": "Thu, 6 Jan 2005 16:49:46 GMT", "version": "v1" } ]
2009-09-29
[ [ "Lopez", "L. F.", "" ], [ "Coutinho", "F. A. B.", "" ], [ "Burattini", "M. N.", "" ], [ "Massad", "E.", "" ] ]
A simple deterministic model is proposed to represent the basic aspects concerning the effects of different antiretroviral treatment schedules on HIV incidence and prevalence of affected populations. The model mimics current treatment guidelines applied in Brazil. However, the model does not intend to fit the data with any acceptable degree of accuracy since uncertainties on the values of the parameters and on the precise effect of the treatment put some limits on the practical implications of our model from which only orders of magnitude and some qualitative effects can be deduced. So, this paper intends to provide a conceptual and mechanistic understanding of the possible long term effects of treatment on the dynamics of HIV transmission. According to the model, the effect of the treatment depends on the level of sexual activity of the subpopulations considered, being more pronounced on the subpopulations with the highest sexual activity levels. Also, inefficient treatment can be prejudicial depending on the level of sexual activity and on the capacity to provide adequate treatment coverages to the population affected.
2303.06470
Samuel Goldman
Samuel Goldman, John Bradshaw, Jiayi Xin, and Connor W. Coley
Prefix-Tree Decoding for Predicting Mass Spectra from Molecules
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Computational predictions of mass spectra from molecules have enabled the discovery of clinically relevant metabolites. However, such predictive tools are still limited as they occupy one of two extremes, either operating (a) by fragmenting molecules combinatorially with overly rigid constraints on potential rearrangements and poor time complexity or (b) by decoding lossy and nonphysical discretized spectra vectors. In this work, we use a new intermediate strategy for predicting mass spectra from molecules by treating mass spectra as sets of molecular formulae, which are themselves multisets of atoms. After first encoding an input molecular graph, we decode a set of molecular subformulae, each of which specifies a predicted peak in the mass spectrum, the intensities of which are predicted by a second model. Our key insight is to overcome the combinatorial possibilities for molecular subformulae by decoding the formula set using a prefix tree structure, atom-type by atom-type, representing a general method for ordered multiset decoding. We show promising empirical results on mass spectra prediction tasks.
[ { "created": "Sat, 11 Mar 2023 17:44:28 GMT", "version": "v1" }, { "created": "Sun, 22 Oct 2023 00:56:23 GMT", "version": "v2" }, { "created": "Sun, 3 Dec 2023 22:29:11 GMT", "version": "v3" } ]
2023-12-05
[ [ "Goldman", "Samuel", "" ], [ "Bradshaw", "John", "" ], [ "Xin", "Jiayi", "" ], [ "Coley", "Connor W.", "" ] ]
Computational predictions of mass spectra from molecules have enabled the discovery of clinically relevant metabolites. However, such predictive tools are still limited as they occupy one of two extremes, either operating (a) by fragmenting molecules combinatorially with overly rigid constraints on potential rearrangements and poor time complexity or (b) by decoding lossy and nonphysical discretized spectra vectors. In this work, we use a new intermediate strategy for predicting mass spectra from molecules by treating mass spectra as sets of molecular formulae, which are themselves multisets of atoms. After first encoding an input molecular graph, we decode a set of molecular subformulae, each of which specifies a predicted peak in the mass spectrum, the intensities of which are predicted by a second model. Our key insight is to overcome the combinatorial possibilities for molecular subformulae by decoding the formula set using a prefix tree structure, atom-type by atom-type, representing a general method for ordered multiset decoding. We show promising empirical results on mass spectra prediction tasks.
1711.09242
Dmytro Guzenko
Dmytro Guzenko and Sergei V. Strelkov
Granular clustering of de novo protein models
This is a pre-copyedited, author-produced version of an article accepted for publication in Bioinformatics following peer review. The version of record is available online at: https://academic.oup.com/bioinformatics/article/33/3/390/2525725
Bioinformatics, Volume 33, Issue 3, 1 February 2017, Pages 390-396
10.1093/bioinformatics/btw628
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern algorithms for de novo prediction of protein structures typically output multiple full-length models (decoys) rather than a single solution. Subsequent clustering of such decoys is used both to gauge the success of the modelling and to decide on the most native-like conformation. At the same time, partial protein models are sufficient for some applications, such as crystallographic phasing by molecular replacement (MR) in particular, provided these models represent a certain part of the target structure with reasonable accuracy. Here we propose a novel clustering algorithm that natively operates in the space of partial models through an approach known as granular clustering (GC). The algorithm is based on growing local similarities found in a pool of initial decoys. We demonstrate that the resulting clusters of partial models provide substantially more accurate structural detail on the target protein than those obtained upon a global alignment of decoys. As a result, the partial models output by our GC algorithm are also much more effective towards the MR procedure, compared to the models produced by existing software. The source code is freely available at https://github.com/biocryst/gc
[ { "created": "Sat, 25 Nov 2017 14:43:04 GMT", "version": "v1" } ]
2017-12-12
[ [ "Guzenko", "Dmytro", "" ], [ "Strelkov", "Sergei V.", "" ] ]
Modern algorithms for de novo prediction of protein structures typically output multiple full-length models (decoys) rather than a single solution. Subsequent clustering of such decoys is used both to gauge the success of the modelling and to decide on the most native-like conformation. At the same time, partial protein models are sufficient for some applications, such as crystallographic phasing by molecular replacement (MR) in particular, provided these models represent a certain part of the target structure with reasonable accuracy. Here we propose a novel clustering algorithm that natively operates in the space of partial models through an approach known as granular clustering (GC). The algorithm is based on growing local similarities found in a pool of initial decoys. We demonstrate that the resulting clusters of partial models provide substantially more accurate structural detail on the target protein than those obtained upon a global alignment of decoys. As a result, the partial models output by our GC algorithm are also much more effective towards the MR procedure, compared to the models produced by existing software. The source code is freely available at https://github.com/biocryst/gc
2406.01143
Dolores Bernenko
Dolores Bernenko, Meng Li, Hampus M{\aa}nefjord, Samuel Jansson, Anna Runemark, Carsten Kirkeby, Mikkel Brydegaard
Insect Diversity Estimation in Polarimetric Lidar
24 pages, 9 figures
null
null
null
q-bio.QM physics.ins-det
http://creativecommons.org/licenses/by-nc-nd/4.0/
Identification of insects in flight is a particular challenge for ecologists in several settings, with no other method able to count and classify insects at the pace of entomological lidar. Thus, it can play a unique role as a non-intrusive diagnostic tool to assess insect biodiversity, inform planning, and evaluate mitigation efforts aimed at tackling declines in insect abundance and diversity. While the species richness of co-existing insects can reach tens of thousands, to date, photonic sensors and lidars can differentiate roughly one hundred signal types. This taxonomic specificity, or number of discernible signal types, is currently limited by instrumentation and algorithm sophistication. In this study we report 32,533 observations of wild flying insects along a 500-meter transect. We report the benefits of lidar polarization bands for differentiating species and compare the performance of two unsupervised clustering algorithms, namely Hierarchical Cluster Analysis and Gaussian Mixture Model. We demonstrate that polarimetric properties could be partially predicted even with unpolarized light, thus polarimetric lidar bands provide only a minor improvement in specificity. Finally, we use physical properties of the clustered observations, such as wing beat frequency, daily activity patterns, and spatial distribution, to establish a lower bound for the number of species represented by the differentiated signal types.
[ { "created": "Mon, 3 Jun 2024 09:36:30 GMT", "version": "v1" } ]
2024-06-04
[ [ "Bernenko", "Dolores", "" ], [ "Li", "Meng", "" ], [ "Månefjord", "Hampus", "" ], [ "Jansson", "Samuel", "" ], [ "Runemark", "Anna", "" ], [ "Kirkeby", "Carsten", "" ], [ "Brydegaard", "Mikkel", "" ] ]
Identification of insects in flight is a particular challenge for ecologists in several settings, with no other method able to count and classify insects at the pace of entomological lidar. Thus, it can play a unique role as a non-intrusive diagnostic tool to assess insect biodiversity, inform planning, and evaluate mitigation efforts aimed at tackling declines in insect abundance and diversity. While the species richness of co-existing insects can reach tens of thousands, to date, photonic sensors and lidars can differentiate roughly one hundred signal types. This taxonomic specificity, or number of discernible signal types, is currently limited by instrumentation and algorithm sophistication. In this study we report 32,533 observations of wild flying insects along a 500-meter transect. We report the benefits of lidar polarization bands for differentiating species and compare the performance of two unsupervised clustering algorithms, namely Hierarchical Cluster Analysis and Gaussian Mixture Model. We demonstrate that polarimetric properties could be partially predicted even with unpolarized light, thus polarimetric lidar bands provide only a minor improvement in specificity. Finally, we use physical properties of the clustered observations, such as wing beat frequency, daily activity patterns, and spatial distribution, to establish a lower bound for the number of species represented by the differentiated signal types.
1405.2951
Cengiz Pehlevan
Tao Hu, Zaid J. Towfic, Cengiz Pehlevan, Alex Genkin, Dmitri B. Chklovskii
A Neuron as a Signal Processing Device
2013 Asilomar Conference on Signals, Systems and Computers, see http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6810296
null
10.1109/ACSSC.2013.6810296
null
q-bio.NC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A neuron is a basic physiological and computational unit of the brain. While much is known about the physiological properties of a neuron, its computational role is poorly understood. Here we propose to view a neuron as a signal processing device that represents the incoming streaming data matrix as a sparse vector of synaptic weights scaled by an outgoing sparse activity vector. Formally, a neuron minimizes a cost function comprising a cumulative squared representation error and regularization terms. We derive an online algorithm that minimizes such a cost function by alternating between minimization with respect to activity and with respect to synaptic weights. The steps of this algorithm reproduce well-known physiological properties of a neuron, such as weighted summation and leaky integration of synaptic inputs, as well as an Oja-like, but parameter-free, synaptic learning rule. Our theoretical framework makes several predictions, some of which can be verified by existing data, while others require further experiments. Such a framework should allow modeling the function of neuronal circuits without necessarily measuring all the microscopic biophysical parameters, as well as facilitate the design of neuromorphic electronics.
[ { "created": "Mon, 12 May 2014 20:43:33 GMT", "version": "v1" } ]
2014-05-14
[ [ "Hu", "Tao", "" ], [ "Towfic", "Zaid J.", "" ], [ "Pehlevan", "Cengiz", "" ], [ "Genkin", "Alex", "" ], [ "Chklovskii", "Dmitri B.", "" ] ]
A neuron is a basic physiological and computational unit of the brain. While much is known about the physiological properties of a neuron, its computational role is poorly understood. Here we propose to view a neuron as a signal processing device that represents the incoming streaming data matrix as a sparse vector of synaptic weights scaled by an outgoing sparse activity vector. Formally, a neuron minimizes a cost function comprising a cumulative squared representation error and regularization terms. We derive an online algorithm that minimizes such a cost function by alternating between minimization with respect to activity and with respect to synaptic weights. The steps of this algorithm reproduce well-known physiological properties of a neuron, such as weighted summation and leaky integration of synaptic inputs, as well as an Oja-like, but parameter-free, synaptic learning rule. Our theoretical framework makes several predictions, some of which can be verified by existing data, while others require further experiments. Such a framework should allow modeling the function of neuronal circuits without necessarily measuring all the microscopic biophysical parameters, as well as facilitate the design of neuromorphic electronics.
2209.13729
Jaime Cofre
Jaime Cofre
The Neoplasia as embryological phenomenon and its implication in the animal evolution and the origin of cancer. II. The neoplastic process as an evolutionary engine
49 pages, 2 figures, Keywords: Cancer; Neoplasia; Evolution; Embryology; Physics; Morphogenesis
null
null
null
q-bio.TO physics.bio-ph q-bio.PE
http://creativecommons.org/licenses/by/4.0/
In this article, I put forward the idea that the neoplastic process (NP) has deep evolutionary roots and make specific predictions about the connection between cancer and the formation of the first embryo, which allowed for the evolutionary radiation of metazoans. My main hypothesis is that the NP is at the heart of cellular mechanisms responsible for animal morphogenesis and, given its embryological basis, also at the center of animal evolution. It is thus understood that NP-associated mechanisms are deeply rooted in evolutionary history and tied to the formation of the first animal embryo. In my consideration of these arguments, I expound on how cancer biology is perfectly intertwined with evolutionary biology. I describe essential cellular components of unicellular holozoans that served as a basis for the formation of the neoplastic functional module (NFM) and its subsequent exaptation, which brought forth two great biophysical revolutions within the first embryo. Finally, I examine the role of Physics in the modeling of the NFM and its contribution to morphogenesis to reveal the totipotency of the zygote.
[ { "created": "Tue, 27 Sep 2022 22:40:21 GMT", "version": "v1" } ]
2022-09-29
[ [ "Cofre", "Jaime", "" ] ]
In this article, I put forward the idea that the neoplastic process (NP) has deep evolutionary roots and make specific predictions about the connection between cancer and the formation of the first embryo, which allowed for the evolutionary radiation of metazoans. My main hypothesis is that the NP is at the heart of cellular mechanisms responsible for animal morphogenesis and, given its embryological basis, also at the center of animal evolution. It is thus understood that NP-associated mechanisms are deeply rooted in evolutionary history and tied to the formation of the first animal embryo. In my consideration of these arguments, I expound on how cancer biology is perfectly intertwined with evolutionary biology. I describe essential cellular components of unicellular holozoans that served as a basis for the formation of the neoplastic functional module (NFM) and its subsequent exaptation, which brought forth two great biophysical revolutions within the first embryo. Finally, I examine the role of Physics in the modeling of the NFM and its contribution to morphogenesis to reveal the totipotency of the zygote.
2401.14208
Agnieszka Pregowska
Zofia Rudnicka, Klaudia Proniewska, Mark Perkins, Agnieszka Pregowska
Health Digital Twins Supported by Artificial Intelligence-based Algorithms and Extended Reality in Cardiology
null
https://www.mdpi.com/2079-9292/13/5/866
10.3390/electronics13050866
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, significant efforts have been made to create Health Digital Twins (HDTs), digital twins for clinical applications. Heart modeling is one of the fastest-growing fields, which favors the effective application of HDTs. The clinical application of HDTs will be increasingly widespread in the future of healthcare services and has a huge potential to form part of the mainstream in medicine. However, it requires the development of both models and algorithms for the analysis of medical data, and advances in Artificial Intelligence (AI) based algorithms have already revolutionized image segmentation processes. Precise segmentation of lesions may contribute to an efficient diagnostics process and a more effective selection of targeted therapy. In this paper, a brief overview of recent achievements in HDT technologies in the field of cardiology, including interventional cardiology, was conducted. HDTs were studied taking into account the application of Extended Reality (XR) and AI, as well as data security, technical risks, and ethics-related issues. Special emphasis was put on automatic segmentation issues. It appears that improvements in data processing will focus on automatic segmentation of medical imaging in addition to three-dimensional (3D) pictures to reconstruct the anatomy of the heart and torso that can be displayed in XR-based devices. This will contribute to the development of effective heart diagnostics. The combination of AI, XR, and an HDT-based solution will help to avoid technical errors and serve as a universal methodology in the development of personalized cardiology. Additionally, we describe potential applications, limitations, and further research directions.
[ { "created": "Thu, 25 Jan 2024 14:46:52 GMT", "version": "v1" } ]
2024-03-21
[ [ "Rudnicka", "Zofia", "" ], [ "Proniewska", "Klaudia", "" ], [ "Perkins", "Mark", "" ], [ "Pregowska", "Agnieszka", "" ] ]
Recently, significant efforts have been made to create Health Digital Twins (HDTs), digital twins for clinical applications. Heart modeling is one of the fastest-growing fields, which favors the effective application of HDTs. The clinical application of HDTs will be increasingly widespread in the future of healthcare services and has a huge potential to form part of the mainstream in medicine. However, it requires the development of both models and algorithms for the analysis of medical data, and advances in Artificial Intelligence (AI) based algorithms have already revolutionized image segmentation processes. Precise segmentation of lesions may contribute to an efficient diagnostics process and a more effective selection of targeted therapy. In this paper, a brief overview of recent achievements in HDT technologies in the field of cardiology, including interventional cardiology, was conducted. HDTs were studied taking into account the application of Extended Reality (XR) and AI, as well as data security, technical risks, and ethics-related issues. Special emphasis was put on automatic segmentation issues. It appears that improvements in data processing will focus on automatic segmentation of medical imaging in addition to three-dimensional (3D) pictures to reconstruct the anatomy of the heart and torso that can be displayed in XR-based devices. This will contribute to the development of effective heart diagnostics. The combination of AI, XR, and an HDT-based solution will help to avoid technical errors and serve as a universal methodology in the development of personalized cardiology. Additionally, we describe potential applications, limitations, and further research directions.
q-bio/0402036
Sebastien Neukirch
Sebastien Neukirch
Getting DNA twist rigidity from single molecule experiments
null
null
10.1103/PhysRevLett.93.198107
null
q-bio.BM cond-mat.dis-nn cond-mat.stat-mech
null
We use an elastic rod model with contact to study the extension versus rotation diagrams of single supercoiled DNA molecules. We reproduce quantitatively the supercoiling response of overtwisted DNA and, using experimental data, we obtain an estimate of the effective supercoiling radius and of the twist rigidity of B-DNA. We find that, unlike the bending rigidity, the twist rigidity of DNA seems to vary widely with the nature and concentration of the salt buffer in which it is immersed.
[ { "created": "Tue, 17 Feb 2004 22:12:04 GMT", "version": "v1" } ]
2013-05-29
[ [ "Neukirch", "Sebastien", "" ] ]
We use an elastic rod model with contact to study the extension versus rotation diagrams of single supercoiled DNA molecules. We reproduce quantitatively the supercoiling response of overtwisted DNA and, using experimental data, we obtain an estimate of the effective supercoiling radius and of the twist rigidity of B-DNA. We find that, unlike the bending rigidity, the twist rigidity of DNA seems to vary widely with the nature and concentration of the salt buffer in which it is immersed.
1412.2390
Panagiotis Papasaikas
Panagiotis Papasaikas, J. Ramon Tejedor, Luisa Vigevani and Juan Valcarcel
Functional Splicing Network Reveals Extensive Regulatory Potential of the Core Spliceosomal Machinery
Main text 17 pages + Supplementary Information 23 pages v2 added missing pdf metadata
MolCell. 2015 Jan 8;57(1):7-22
10.1016/j.molcel.2014.10.030
null
q-bio.MN q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pre-mRNA splicing relies on the poorly understood dynamic interplay between >150 protein components of the spliceosome. The steps at which splicing can be regulated remain largely unknown. We systematically analyzed the effect of knocking down the components of the splicing machinery on alternative splicing events relevant for cell proliferation and apoptosis and used this information to reconstruct a network of functional interactions. The network accurately captures known physical and functional associations and identifies new ones, revealing remarkable regulatory potential of core spliceosomal components, related to the order and duration of their recruitment during spliceosome assembly. In contrast with standard models of regulation at early steps of splice site recognition, factors involved in catalytic activation of the spliceosome display regulatory properties. The network also sheds light on the antagonism between hnRNP C and U2AF, and on targets of antitumor drugs, and can be widely used to identify mechanisms of splicing regulation.
[ { "created": "Sun, 7 Dec 2014 19:45:42 GMT", "version": "v1" }, { "created": "Wed, 10 Dec 2014 14:57:32 GMT", "version": "v2" } ]
2015-01-19
[ [ "Papasaikas", "Panagiotis", "" ], [ "Tejedor", "J. Ramon", "" ], [ "Vigevani", "Luisa", "" ], [ "Valcarcel", "Juan", "" ] ]
Pre-mRNA splicing relies on the poorly understood dynamic interplay between >150 protein components of the spliceosome. The steps at which splicing can be regulated remain largely unknown. We systematically analyzed the effect of knocking down the components of the splicing machinery on alternative splicing events relevant for cell proliferation and apoptosis and used this information to reconstruct a network of functional interactions. The network accurately captures known physical and functional associations and identifies new ones, revealing remarkable regulatory potential of core spliceosomal components, related to the order and duration of their recruitment during spliceosome assembly. In contrast with standard models of regulation at early steps of splice site recognition, factors involved in catalytic activation of the spliceosome display regulatory properties. The network also sheds light on the antagonism between hnRNP C and U2AF, and on targets of antitumor drugs, and can be widely used to identify mechanisms of splicing regulation.
2210.03882
Musaddiq Al Ali
Musaddiq Al Ali, Amjad Y. Sahib, Muazez Al Ali
Investigation of Applying Quantum Neural Network of Early-Stage Breast Cancer Detection
null
null
null
null
q-bio.QM cs.IT math.IT math.QA
http://creativecommons.org/licenses/by-nc-nd/4.0/
Due to the heavy burden on medical institutes, computer-aided image diagnostics (CAD) has been gaining importance in diagnostic medicine, helping medical staff provide better service for patients. Breast cancer is a fatal disease that can be treated successfully if it is detected early. The quantum neural network (QNN) has been introduced by many researchers around the world and presented recently by research corporations such as Microsoft, Google, and IBM. In this paper, we try to answer the question of whether the QNN can be an effective method for mass-scale early breast cancer detection. This paper is dedicated to drawing a baseline for examining the QNN, and the results showed a promising opportunity to use it for mass-scale screening using a fully functional quantum computer.
[ { "created": "Sat, 8 Oct 2022 02:19:45 GMT", "version": "v1" }, { "created": "Thu, 23 Mar 2023 00:21:53 GMT", "version": "v2" } ]
2023-03-24
[ [ "Ali", "Musaddiq Al", "" ], [ "Sahib", "Amjad Y.", "" ], [ "Ali", "Muazez Al", "" ] ]
Due to the heavy burden on medical institutes, computer-aided image diagnostics (CAD) has been gaining importance in diagnostic medicine, helping medical staff provide better service for patients. Breast cancer is a fatal disease that can be treated successfully if it is detected early. The quantum neural network (QNN) has been introduced by many researchers around the world and presented recently by research corporations such as Microsoft, Google, and IBM. In this paper, we try to answer the question of whether the QNN can be an effective method for mass-scale early breast cancer detection. This paper is dedicated to drawing a baseline for examining the QNN, and the results showed a promising opportunity to use it for mass-scale screening using a fully functional quantum computer.
1706.09813
Augusto Gonzalez
Augusto Gonzalez, Joan Nieves, Maria Luisa Bringas Vega and Pedro Valdes Sosa
Gene expression rearrangements denoting changes in the biological state
null
Sci Rep 11, 8470 (2021)
10.1038/s41598-021-87764-0
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many situations, the gene expression signature is a unique marker of the biological state. We study the modification of the gene expression distribution function when the biological state of a system experiences a change. This change may be the result of a selective pressure, as in the Long Term Evolution Experiment with E. coli populations, the progression to Alzheimer's disease in aged brains, or the progression from a normal tissue to the cancer state. The first two cases seem to belong to a class of transitions where the initial and final states are relatively close to each other, and the distribution function for the differential expressions is short-ranged, with a tail of only a few dozen strongly varying genes. In the latter case, cancer, the initial and final states are far apart and separated by a low-fitness barrier. The distribution function shows a very heavy tail, with thousands of silenced and over-expressed genes. We characterize the biological states by means of their principal component representations, and the expression distribution functions by their maximal and minimal differential expression values and the exponents of the Pareto laws describing the tails.
[ { "created": "Thu, 29 Jun 2017 15:40:19 GMT", "version": "v1" }, { "created": "Tue, 4 Jul 2017 16:26:14 GMT", "version": "v2" }, { "created": "Thu, 2 Jul 2020 12:32:13 GMT", "version": "v3" } ]
2022-05-19
[ [ "Gonzalez", "Augusto", "" ], [ "Nieves", "Joan", "" ], [ "Vega", "Maria Luisa Bringas", "" ], [ "Sosa", "Pedro Valdes", "" ] ]
In many situations, the gene expression signature is a unique marker of the biological state. We study the modification of the gene expression distribution function when the biological state of a system experiences a change. This change may be the result of a selective pressure, as in the Long Term Evolution Experiment with E. coli populations, the progression to Alzheimer's disease in aged brains, or the progression from a normal tissue to the cancer state. The first two cases seem to belong to a class of transitions where the initial and final states are relatively close to each other, and the distribution function for the differential expressions is short-ranged, with a tail of only a few dozen strongly varying genes. In the latter case, cancer, the initial and final states are far apart and separated by a low-fitness barrier. The distribution function shows a very heavy tail, with thousands of silenced and over-expressed genes. We characterize the biological states by means of their principal component representations, and the expression distribution functions by their maximal and minimal differential expression values and the exponents of the Pareto laws describing the tails.
2101.12191
Charles Puelz
Seong Woo Han, Charles Puelz, Craig G. Rusin, Daniel J. Penny, Ryan Coleman, Charles S. Peskin
Computer simulation of surgical interventions for the treatment of refractory pulmonary hypertension
null
null
null
null
q-bio.TO math.DS physics.flu-dyn
http://creativecommons.org/licenses/by/4.0/
This paper describes computer models of three interventions used for treating refractory pulmonary hypertension (RPH). These procedures create either an atrial septal defect, a ventricular septal defect, or, in the case of a Potts shunt, a patent ductus arteriosus. The aim in all three cases is to generate a right-to-left shunt, allowing for either pressure or volume unloading of the right side of the heart in the setting of right ventricular failure, while maintaining cardiac output. These shunts are created, however, at the expense of introducing de-oxygenated blood into the systemic circulation, thereby lowering the systemic arterial oxygen saturation. The models developed in this paper are based on compartmental descriptions of human hemodynamics and oxygen transport. An important parameter included in our models is the cross-sectional area of the surgically created defect. Numerical simulations are performed to compare different interventions and various shunt sizes and to assess their impact on hemodynamic variables and oxygen saturations. We also create a model for exercise and use it to study exercise tolerance in simulated pre-intervention and post-intervention RPH patients.
[ { "created": "Thu, 28 Jan 2021 18:53:25 GMT", "version": "v1" }, { "created": "Thu, 25 Nov 2021 01:26:39 GMT", "version": "v2" } ]
2021-11-29
[ [ "Han", "Seong Woo", "" ], [ "Puelz", "Charles", "" ], [ "Rusin", "Craig G.", "" ], [ "Penny", "Daniel J.", "" ], [ "Coleman", "Ryan", "" ], [ "Peskin", "Charles S.", "" ] ]
This paper describes computer models of three interventions used for treating refractory pulmonary hypertension (RPH). These procedures create either an atrial septal defect, a ventricular septal defect, or, in the case of a Potts shunt, a patent ductus arteriosus. The aim in all three cases is to generate a right-to-left shunt, allowing for either pressure or volume unloading of the right side of the heart in the setting of right ventricular failure, while maintaining cardiac output. These shunts are created, however, at the expense of introducing de-oxygenated blood into the systemic circulation, thereby lowering the systemic arterial oxygen saturation. The models developed in this paper are based on compartmental descriptions of human hemodynamics and oxygen transport. An important parameter included in our models is the cross-sectional area of the surgically created defect. Numerical simulations are performed to compare different interventions and various shunt sizes and to assess their impact on hemodynamic variables and oxygen saturations. We also create a model for exercise and use it to study exercise tolerance in simulated pre-intervention and post-intervention RPH patients.
2108.09499
Denis Schapiro
Denis Schapiro, Clarence Yapp, Artem Sokolov, Sheila M. Reynolds, Yu-An Chen, Damir Sudar, Yubin Xie, Jeremy L. Muhlich, Raquel Arias-Camison, Sarah Arena, Adam J. Taylor, Milen Nikolov, Madison Tyler, Jia-Ren Lin, Erik A. Burlingame, Human Tumor Atlas Network, Young H. Chang, Samouil L Farhi, V\'esteinn Thorsson, Nithya Venkatamohan, Julia L. Drewes, Dana Pe'er, David A. Gutman, Markus D. Herrmann, Nils Gehlenborg, Peter Bankhead, Joseph T. Roland, John M. Herndon, Michael P. Snyder, Michael Angelo, Garry Nolan, Jason R. Swedlow, Nikolaus Schultz, Daniel T. Merrick, Sarah A. Mazzilli, Ethan Cerami, Scott J. Rodig, Sandro Santagata and Peter K. Sorger
MITI Minimum Information guidelines for highly multiplexed tissue images
null
null
null
null
q-bio.OT eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
The imminent release of tissue atlases combining multi-channel microscopy with single cell sequencing and other omics data from normal and diseased specimens creates an urgent need for data and metadata standards that guide data deposition, curation and release. We describe a Minimum Information about highly multiplexed Tissue Imaging (MITI) standard that applies best practices developed for genomics and other microscopy data to highly multiplexed tissue images and traditional histology.
[ { "created": "Sat, 21 Aug 2021 12:17:45 GMT", "version": "v1" }, { "created": "Wed, 23 Feb 2022 17:06:09 GMT", "version": "v2" } ]
2022-02-24
[ [ "Schapiro", "Denis", "" ], [ "Yapp", "Clarence", "" ], [ "Sokolov", "Artem", "" ], [ "Reynolds", "Sheila M.", "" ], [ "Chen", "Yu-An", "" ], [ "Sudar", "Damir", "" ], [ "Xie", "Yubin", "" ], [ "Muhlich", "Jeremy L.", "" ], [ "Arias-Camison", "Raquel", "" ], [ "Arena", "Sarah", "" ], [ "Taylor", "Adam J.", "" ], [ "Nikolov", "Milen", "" ], [ "Tyler", "Madison", "" ], [ "Lin", "Jia-Ren", "" ], [ "Burlingame", "Erik A.", "" ], [ "Network", "Human Tumor Atlas", "" ], [ "Chang", "Young H.", "" ], [ "Farhi", "Samouil L", "" ], [ "Thorsson", "Vésteinn", "" ], [ "Venkatamohan", "Nithya", "" ], [ "Drewes", "Julia L.", "" ], [ "Pe'er", "Dana", "" ], [ "Gutman", "David A.", "" ], [ "Herrmann", "Markus D.", "" ], [ "Gehlenborg", "Nils", "" ], [ "Bankhead", "Peter", "" ], [ "Roland", "Joseph T.", "" ], [ "Herndon", "John M.", "" ], [ "Snyder", "Michael P.", "" ], [ "Angelo", "Michael", "" ], [ "Nolan", "Garry", "" ], [ "Swedlow", "Jason R.", "" ], [ "Schultz", "Nikolaus", "" ], [ "Merrick", "Daniel T.", "" ], [ "Mazzilli", "Sarah A.", "" ], [ "Cerami", "Ethan", "" ], [ "Rodig", "Scott J.", "" ], [ "Santagata", "Sandro", "" ], [ "Sorger", "Peter K.", "" ] ]
The imminent release of tissue atlases combining multi-channel microscopy with single cell sequencing and other omics data from normal and diseased specimens creates an urgent need for data and metadata standards that guide data deposition, curation and release. We describe a Minimum Information about highly multiplexed Tissue Imaging (MITI) standard that applies best practices developed for genomics and other microscopy data to highly multiplexed tissue images and traditional histology.
0810.4168
Martin Rosvall
C. T. Bergstrom and M. Rosvall
The transmission sense of information
7 pages, 4 figures
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biologists rely heavily on the language of information, coding, and transmission that is commonplace in the field of information theory as developed by Claude Shannon, but there is open debate about whether such language is anything more than facile metaphor. Philosophers of biology have argued that when biologists talk about information in genes and in evolution, they are not talking about the sort of information that Shannon's theory addresses. First, philosophers have suggested that Shannon theory is only useful for developing a shallow notion of correlation, the so-called "causal sense" of information. Second they typically argue that in genetics and evolutionary biology, information language is used in a "semantic sense," whereas semantics are deliberately omitted from Shannon theory. Neither critique is well-founded. Here we propose an alternative to the causal and semantic senses of information: a transmission sense of information, in which an object X conveys information if the function of X is to reduce, by virtue of its sequence properties, uncertainty on the part of an agent who observes X. The transmission sense not only captures much of what biologists intend when they talk about information in genes, but also brings Shannon's theory back to the fore. By taking the viewpoint of a communications engineer and focusing on the decision problem of how information is to be packaged for transport, this approach resolves several problems that have plagued the information concept in biology, and highlights a number of important features of the way that information is encoded, stored, and transmitted as genetic sequence.
[ { "created": "Wed, 22 Oct 2008 22:09:02 GMT", "version": "v1" } ]
2008-10-24
[ [ "Bergstrom", "C. T.", "" ], [ "Rosvall", "M.", "" ] ]
Biologists rely heavily on the language of information, coding, and transmission that is commonplace in the field of information theory as developed by Claude Shannon, but there is open debate about whether such language is anything more than facile metaphor. Philosophers of biology have argued that when biologists talk about information in genes and in evolution, they are not talking about the sort of information that Shannon's theory addresses. First, philosophers have suggested that Shannon theory is only useful for developing a shallow notion of correlation, the so-called "causal sense" of information. Second they typically argue that in genetics and evolutionary biology, information language is used in a "semantic sense," whereas semantics are deliberately omitted from Shannon theory. Neither critique is well-founded. Here we propose an alternative to the causal and semantic senses of information: a transmission sense of information, in which an object X conveys information if the function of X is to reduce, by virtue of its sequence properties, uncertainty on the part of an agent who observes X. The transmission sense not only captures much of what biologists intend when they talk about information in genes, but also brings Shannon's theory back to the fore. By taking the viewpoint of a communications engineer and focusing on the decision problem of how information is to be packaged for transport, this approach resolves several problems that have plagued the information concept in biology, and highlights a number of important features of the way that information is encoded, stored, and transmitted as genetic sequence.
1702.07460
Tal Einav
Manuel Razo-Mejia, Stephanie L. Barnes, Nathan M. Belliveau, Griffin Chure, Tal Einav, Mitchell Lewis, Rob Phillips
Tuning transcriptional regulation through signaling: A predictive theory of allosteric induction
Substantial revisions for resubmission (3 new figures, significantly elaborated discussion); added Professor Mitchell Lewis as another author for his continuing contributions to the project
null
null
null
q-bio.SC
http://creativecommons.org/licenses/by/4.0/
Allosteric regulation is found across all domains of life, yet we still lack simple, predictive theories that directly link the experimentally tunable parameters of a system to its input-output response. To that end, we present a general theory of allosteric transcriptional regulation using the Monod-Wyman-Changeux model. We rigorously test this model using the ubiquitous simple repression motif in bacteria by first predicting the behavior of strains that span a large range of repressor copy numbers and DNA binding strengths and then constructing and measuring their response. Our model not only accurately captures the induction profiles of these strains but also enables us to derive analytic expressions for key properties such as the dynamic range and $[EC_{50}]$. Finally, we derive an expression for the free energy of allosteric repressors which enables us to collapse our experimental data onto a single master curve that captures the diverse phenomenology of the induction profiles.
[ { "created": "Fri, 24 Feb 2017 04:12:20 GMT", "version": "v1" }, { "created": "Wed, 21 Jun 2017 21:32:21 GMT", "version": "v2" } ]
2017-06-23
[ [ "Razo-Mejia", "Manuel", "" ], [ "Barnes", "Stephanie L.", "" ], [ "Belliveau", "Nathan M.", "" ], [ "Chure", "Griffin", "" ], [ "Einav", "Tal", "" ], [ "Lewis", "Mitchell", "" ], [ "Phillips", "Rob", "" ] ]
Allosteric regulation is found across all domains of life, yet we still lack simple, predictive theories that directly link the experimentally tunable parameters of a system to its input-output response. To that end, we present a general theory of allosteric transcriptional regulation using the Monod-Wyman-Changeux model. We rigorously test this model using the ubiquitous simple repression motif in bacteria by first predicting the behavior of strains that span a large range of repressor copy numbers and DNA binding strengths and then constructing and measuring their response. Our model not only accurately captures the induction profiles of these strains but also enables us to derive analytic expressions for key properties such as the dynamic range and $[EC_{50}]$. Finally, we derive an expression for the free energy of allosteric repressors which enables us to collapse our experimental data onto a single master curve that captures the diverse phenomenology of the induction profiles.
1404.0630
Paulo Bandiera-Paiva
Paulo Bandiera-Paiva, Jackson C. Lima and Marcelo R.S. Briones
G-protein coupled receptor subfamily identification using phylogenetic comparison of gene and species trees
null
null
null
null
q-bio.PE q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most approaches to prediction of protein function from primary structure are based on similarity between the query sequence and sequences of known function. This approach, however, disregards the occurrence of gene duplication (paralogy) or convergent evolution of the genes. The analysis of correlated proteins that share a common domain, taking into consideration the evolutionary history of the genes under study, may provide a more reliable annotation of predicted proteins with unknown function. A computer program that enables real-time comparison of 'gene trees' with 'species trees' was developed. The Phylogenetic Genome Annotator (PGA) performs a profile-based multiple sequence alignment of a set of sequences that share a common domain to generate a phylogenetic gene tree, which is compared to the species phylogeny inferred from aligned ribosomal RNA data. The correlated protein domains are then displayed side-by-side with the phylogeny of the corresponding species. The statistical support of gene clusters (branches) is given by the quartet puzzling method. This analysis readily discriminates paralogs from orthologs, enabling the identification of proteins originated by gene duplications and the prediction of possible functional divergence in groups of similar sequences. The tool was tested in three distinct subfamilies of the G-protein coupled receptor superfamily. In the analysed datasets, the paralogy prediction agreed with the known subfamily grouping, suggesting that subfamily divergence was facilitated by duplication events in the ancestral nodes.
[ { "created": "Wed, 2 Apr 2014 17:43:42 GMT", "version": "v1" } ]
2014-04-03
[ [ "Bandiera-Paiva", "Paulo", "" ], [ "Lima", "Jackson C.", "" ], [ "Briones", "Marcelo R. S.", "" ] ]
Most approaches to prediction of protein function from primary structure are based on similarity between the query sequence and sequences of known function. This approach, however, disregards the occurrence of gene duplication (paralogy) or convergent evolution of the genes. The analysis of correlated proteins that share a common domain, taking into consideration the evolutionary history of the genes under study, may provide a more reliable annotation of predicted proteins with unknown function. A computer program that enables real-time comparison of 'gene trees' with 'species trees' was developed. The Phylogenetic Genome Annotator (PGA) performs a profile-based multiple sequence alignment of a set of sequences that share a common domain to generate a phylogenetic gene tree, which is compared to the species phylogeny inferred from aligned ribosomal RNA data. The correlated protein domains are then displayed side-by-side with the phylogeny of the corresponding species. The statistical support of gene clusters (branches) is given by the quartet puzzling method. This analysis readily discriminates paralogs from orthologs, enabling the identification of proteins originating from gene duplications and the prediction of possible functional divergence in groups of similar sequences. The tool was tested on three distinct subfamilies of the G-protein coupled receptor superfamily. In the analysed datasets, the paralogy prediction agreed with the known subfamily grouping, suggesting that subfamily divergence was facilitated by duplication events in the ancestral nodes.
1511.08317
Pierre Peterlongo
Maillet Nicolas, Collet Guillaume, Vanier Thomas, Lavenier Dominique, Pierre Peterlongo
Commet: comparing and combining multiple metagenomic datasets
IEEE BIBM 2014, Nov 2014, Belfast, United Kingdom. 2014
null
null
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metagenomics offers a way to analyze biotopes at the genomic level and to reach functional and taxonomical conclusions. The bio-analyses of large metagenomic projects face critical limitations: complex metagenomes cannot be assembled, and the taxonomical or functional annotations cover much less than the real biological diversity. This motivated the development of de novo metagenomic read comparison approaches to extract the information contained in metagenomic datasets. However, these new approaches either do not scale up to large metagenomic projects or generate a large number of sizable intermediate and result files. We introduce Commet ("COmpare Multiple METagenomes"), a method that provides a similarity overview across all datasets of large metagenomic projects. Directly from non-assembled reads, all-against-all comparisons are performed through an efficient indexing strategy. Then, results are stored as bit vectors, a compressed representation of read files, which can be further combined into read subsets by common logical operations. Finally, Commet computes a clustering of the metagenomic datasets, which is visualized as dendrograms and heatmaps. Availability: http://github.com/pierrepeterlongo/commet
[ { "created": "Thu, 26 Nov 2015 08:18:04 GMT", "version": "v1" } ]
2015-11-30
[ [ "Nicolas", "Maillet", "" ], [ "Guillaume", "Collet", "" ], [ "Thomas", "Vanier", "" ], [ "Dominique", "Lavenier", "" ], [ "Peterlongo", "Pierre", "" ] ]
Metagenomics offers a way to analyze biotopes at the genomic level and to reach functional and taxonomical conclusions. The bio-analyses of large metagenomic projects face critical limitations: complex metagenomes cannot be assembled, and the taxonomical or functional annotations cover much less than the real biological diversity. This motivated the development of de novo metagenomic read comparison approaches to extract the information contained in metagenomic datasets. However, these new approaches either do not scale up to large metagenomic projects or generate a large number of sizable intermediate and result files. We introduce Commet ("COmpare Multiple METagenomes"), a method that provides a similarity overview across all datasets of large metagenomic projects. Directly from non-assembled reads, all-against-all comparisons are performed through an efficient indexing strategy. Then, results are stored as bit vectors, a compressed representation of read files, which can be further combined into read subsets by common logical operations. Finally, Commet computes a clustering of the metagenomic datasets, which is visualized as dendrograms and heatmaps. Availability: http://github.com/pierrepeterlongo/commet
1501.02454
Wolfram Liebermeister
Wolfram Liebermeister and Elad Noor
The enzyme cost of given metabolic flux distributions, as a function of logarithmic metabolite levels, is convex
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Enzyme costs play a major role in the choice of metabolic routes, both in evolution and bioengineering. Given desired fluxes, necessary enzyme levels can be estimated based on known rate laws and on a principle of minimal enzyme cost. With logarithmic metabolite levels as free variables, enzyme cost functions and constraints in optimality and sampling problems can be handled easily. The set of feasible metabolite profiles forms a polytope in log-concentration space, whose points represent all possible steady states of a kinetic model. We show that enzyme cost is a convex function on this polytope. This makes enzyme cost minimization - finding optimal enzyme profiles and corresponding metabolite profiles that realize a desired flux at a minimal cost - a convex optimization problem.
[ { "created": "Sun, 11 Jan 2015 12:41:09 GMT", "version": "v1" } ]
2015-01-13
[ [ "Liebermeister", "Wolfram", "" ], [ "Noor", "Elad", "" ] ]
Enzyme costs play a major role in the choice of metabolic routes, both in evolution and bioengineering. Given desired fluxes, necessary enzyme levels can be estimated based on known rate laws and on a principle of minimal enzyme cost. With logarithmic metabolite levels as free variables, enzyme cost functions and constraints in optimality and sampling problems can be handled easily. The set of feasible metabolite profiles forms a polytope in log-concentration space, whose points represent all possible steady states of a kinetic model. We show that enzyme cost is a convex function on this polytope. This makes enzyme cost minimization - finding optimal enzyme profiles and corresponding metabolite profiles that realize a desired flux at a minimal cost - a convex optimization problem.
2312.12482
Guy Doron
Guy Doron, Sam Genway, Mark Roberts and Sai Jasti
New Horizons: Pioneering Pharmaceutical R&D with Generative AI from lab to the clinic -- an industry perspective
21 pages, 4 figures
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
The rapid advance of generative AI is reshaping the strategic vision for R&D across industries. The unique challenges of pharmaceutical R&D will see applications of generative AI deliver value along the entire value chain from early discovery to regulatory approval. This perspective reviews these challenges and takes a three-horizon approach to explore the generative AI applications already delivering impact, the disruptive opportunities which are just around the corner, and the longer-term transformation which will shape the future of the industry. Selected applications are reviewed for their potential to drive increased productivity, accelerate timelines, improve the quality of research, data and decision making, and support a sustainable future for the industry. Recommendations are given for Pharma R&D leaders developing a generative AI strategy today which will lay the groundwork for getting real value from the technology and safeguarding future growth. Generative AI is today providing new, efficient routes to accessing and combining organisational data to drive productivity. Next, this impact will reach clinical development, enhancing the patient experience, driving operational efficiency, and unlocking digital innovation to better tackle the future burden of disease. Looking to the furthest horizon, rapid acquisition of rich multi-omics data, which capture the 'language of life', in combination with next generation AI technologies will allow organisations to close the loop around phases of the pipeline through rapid, automated generation and testing of hypotheses from bench to bedside. This provides a vision for the future of R&D with sustainability at the core, with reduced timescales and reduced dependency on resources, while offering new hope to patients to treat the untreatable and ultimately cure diseases.
[ { "created": "Tue, 19 Dec 2023 16:04:07 GMT", "version": "v1" } ]
2023-12-21
[ [ "Doron", "Guy", "" ], [ "Genway", "Sam", "" ], [ "Roberts", "Mark", "" ], [ "Jasti", "Sai", "" ] ]
The rapid advance of generative AI is reshaping the strategic vision for R&D across industries. The unique challenges of pharmaceutical R&D will see applications of generative AI deliver value along the entire value chain from early discovery to regulatory approval. This perspective reviews these challenges and takes a three-horizon approach to explore the generative AI applications already delivering impact, the disruptive opportunities which are just around the corner, and the longer-term transformation which will shape the future of the industry. Selected applications are reviewed for their potential to drive increased productivity, accelerate timelines, improve the quality of research, data and decision making, and support a sustainable future for the industry. Recommendations are given for Pharma R&D leaders developing a generative AI strategy today which will lay the groundwork for getting real value from the technology and safeguarding future growth. Generative AI is today providing new, efficient routes to accessing and combining organisational data to drive productivity. Next, this impact will reach clinical development, enhancing the patient experience, driving operational efficiency, and unlocking digital innovation to better tackle the future burden of disease. Looking to the furthest horizon, rapid acquisition of rich multi-omics data, which capture the 'language of life', in combination with next generation AI technologies will allow organisations to close the loop around phases of the pipeline through rapid, automated generation and testing of hypotheses from bench to bedside. This provides a vision for the future of R&D with sustainability at the core, with reduced timescales and reduced dependency on resources, while offering new hope to patients to treat the untreatable and ultimately cure diseases.
1808.07888
Sebastian Schreiber
Michel Bena\"im and Sebastian J. Schreiber
Persistence and extinction for stochastic ecological models with internal and external variables
34 pages, 3 figures
null
null
null
q-bio.PE math.DS math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamics of species' densities depend both on internal and external variables. Internal variables include frequencies of individuals exhibiting different phenotypes or living in different spatial locations. External variables include abiotic factors or non-focal species. These internal or external variables may fluctuate due to stochastic fluctuations in environmental conditions. We prove theorems for stochastic persistence and exclusion for stochastic ecological difference equations accounting for internal and external variables. Specifically, we use a stochastic analog of average Lyapunov functions to develop sufficient and necessary conditions for (i) all population densities spending little time at low densities, and (ii) population trajectories asymptotically approaching the extinction set with positive probability. For (i) and (ii), respectively, we provide quantitative estimates on the fraction of time that the system is near the extinction set, and the probability of asymptotic extinction as a function of the initial state of the system. Furthermore, we provide lower bounds for the expected time to escape neighborhoods of the extinction set. To illustrate the applicability of our results, we analyze stochastic models of evolutionary games, Lotka-Volterra dynamics, trait evolution, and spatially structured disease dynamics. Our analysis of these models demonstrates environmental stochasticity facilitates coexistence of strategies in the hawk-dove game, but inhibits coexistence in the rock-paper-scissors game and a Lotka-Volterra predator-prey model. Furthermore, environmental fluctuations with positive auto-correlations can promote persistence of evolving populations and persistence of diseases in patchy landscapes. While our results help close the gap between the persistence theories for deterministic and stochastic systems, we highlight challenges for future research.
[ { "created": "Thu, 23 Aug 2018 18:01:17 GMT", "version": "v1" }, { "created": "Thu, 13 Sep 2018 20:49:10 GMT", "version": "v2" }, { "created": "Tue, 26 Mar 2019 18:29:36 GMT", "version": "v3" } ]
2019-03-28
[ [ "Benaïm", "Michel", "" ], [ "Schreiber", "Sebastian J.", "" ] ]
The dynamics of species' densities depend both on internal and external variables. Internal variables include frequencies of individuals exhibiting different phenotypes or living in different spatial locations. External variables include abiotic factors or non-focal species. These internal or external variables may fluctuate due to stochastic fluctuations in environmental conditions. We prove theorems for stochastic persistence and exclusion for stochastic ecological difference equations accounting for internal and external variables. Specifically, we use a stochastic analog of average Lyapunov functions to develop sufficient and necessary conditions for (i) all population densities spending little time at low densities, and (ii) population trajectories asymptotically approaching the extinction set with positive probability. For (i) and (ii), respectively, we provide quantitative estimates on the fraction of time that the system is near the extinction set, and the probability of asymptotic extinction as a function of the initial state of the system. Furthermore, we provide lower bounds for the expected time to escape neighborhoods of the extinction set. To illustrate the applicability of our results, we analyze stochastic models of evolutionary games, Lotka-Volterra dynamics, trait evolution, and spatially structured disease dynamics. Our analysis of these models demonstrates environmental stochasticity facilitates coexistence of strategies in the hawk-dove game, but inhibits coexistence in the rock-paper-scissors game and a Lotka-Volterra predator-prey model. Furthermore, environmental fluctuations with positive auto-correlations can promote persistence of evolving populations and persistence of diseases in patchy landscapes. While our results help close the gap between the persistence theories for deterministic and stochastic systems, we highlight challenges for future research.
2111.11558
Eli Newby
Eli Newby, Jorge G\'omez Tejda Za\~nudo, R\'eka Albert
Structure-based approach to identifying small sets of driver nodes in biological networks
22 pages for main text 4 pages for supplementary materials, 10 figures. Supplementary tables S5 and S6 are ancillary files
null
10.1063/5.0080843
null
q-bio.QM cond-mat.dis-nn
http://creativecommons.org/licenses/by/4.0/
In network control theory, driving all the nodes in the Feedback Vertex Set (FVS) forces the network into one of its attractors (long-term dynamic behaviors). The FVS is often composed of more nodes than can be realistically manipulated in a system; for example, only up to three nodes can be controlled in intracellular networks, while their FVS may contain more than 10 nodes. Thus, we developed an approach to rank subsets of the FVS on Boolean models of intracellular networks using topological, dynamics-independent measures. We investigated the use of seven topological prediction measures sorted into three categories -- centrality measures, propagation measures, and cycle-based measures. Using each measure, every subset was ranked and then evaluated against two dynamics-based metrics that measure the ability of interventions to drive the system towards or away from its attractors: To Control and Away Control. After examining an array of biological networks, we found that the FVS subsets that ranked at the top according to the propagation metrics can most effectively control the network. This result was independently corroborated on a second array of different Boolean models of biological networks. Consequently, overriding the entire FVS is not required to drive a biological network to one of its attractors, and this method provides a way to reliably identify effective FVS subsets without knowledge of the network's dynamics.
[ { "created": "Mon, 22 Nov 2021 22:20:34 GMT", "version": "v1" } ]
2022-06-15
[ [ "Newby", "Eli", "" ], [ "Zañudo", "Jorge Gómez Tejda", "" ], [ "Albert", "Réka", "" ] ]
In network control theory, driving all the nodes in the Feedback Vertex Set (FVS) forces the network into one of its attractors (long-term dynamic behaviors). The FVS is often composed of more nodes than can be realistically manipulated in a system; for example, only up to three nodes can be controlled in intracellular networks, while their FVS may contain more than 10 nodes. Thus, we developed an approach to rank subsets of the FVS on Boolean models of intracellular networks using topological, dynamics-independent measures. We investigated the use of seven topological prediction measures sorted into three categories -- centrality measures, propagation measures, and cycle-based measures. Using each measure, every subset was ranked and then evaluated against two dynamics-based metrics that measure the ability of interventions to drive the system towards or away from its attractors: To Control and Away Control. After examining an array of biological networks, we found that the FVS subsets that ranked at the top according to the propagation metrics can most effectively control the network. This result was independently corroborated on a second array of different Boolean models of biological networks. Consequently, overriding the entire FVS is not required to drive a biological network to one of its attractors, and this method provides a way to reliably identify effective FVS subsets without knowledge of the network's dynamics.
2111.08430
Henrik Sahlin Pettersen
Henrik Sahlin Pettersen, Ilya Belevich, Elin Synn{\o}ve R{\o}yset, Erik Smistad, Eija Jokitalo, Ingerid Reinertsen, Ingunn Bakke, Andr\'e Pedersen
Code-free development and deployment of deep segmentation models for digital pathology
18 pages, 4 figures, 2 tables
null
null
null
q-bio.QM cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Application of deep learning on histopathological whole slide images (WSIs) holds promise of improving diagnostic efficiency and reproducibility but is largely dependent on the ability to write computer code or purchase commercial solutions. We present a code-free pipeline utilizing free-to-use, open-source software (QuPath, DeepMIB, and FastPathology) for creating and deploying deep learning-based segmentation models for computational pathology. We demonstrate the pipeline on a use case of separating epithelium from stroma in colonic mucosa. A dataset of 251 annotated WSIs, comprising 140 hematoxylin-eosin (HE)-stained and 111 CD3 immunostained colon biopsy WSIs, was developed through active learning using the pipeline. On a hold-out test set of 36 HE and 21 CD3-stained WSIs, a mean intersection over union score of 96.6% and 95.3%, respectively, was achieved on epithelium segmentation. We demonstrate pathologist-level segmentation accuracy and clinically acceptable runtime performance and show that pathologists without programming experience can create near state-of-the-art segmentation solutions for histopathological WSIs using only free-to-use software. The study further demonstrates the strength of open-source solutions in their ability to create generalizable, open pipelines, from which trained models and predictions can seamlessly be exported in open formats and thereby used in external solutions. All scripts, trained models, a video tutorial, and the full dataset of 251 WSIs with ~31k epithelium annotations are made openly available at https://github.com/andreped/NoCodeSeg to accelerate research in the field.
[ { "created": "Tue, 16 Nov 2021 13:08:05 GMT", "version": "v1" } ]
2021-11-17
[ [ "Pettersen", "Henrik Sahlin", "" ], [ "Belevich", "Ilya", "" ], [ "Røyset", "Elin Synnøve", "" ], [ "Smistad", "Erik", "" ], [ "Jokitalo", "Eija", "" ], [ "Reinertsen", "Ingerid", "" ], [ "Bakke", "Ingunn", "" ], [ "Pedersen", "André", "" ] ]
Application of deep learning on histopathological whole slide images (WSIs) holds promise of improving diagnostic efficiency and reproducibility but is largely dependent on the ability to write computer code or purchase commercial solutions. We present a code-free pipeline utilizing free-to-use, open-source software (QuPath, DeepMIB, and FastPathology) for creating and deploying deep learning-based segmentation models for computational pathology. We demonstrate the pipeline on a use case of separating epithelium from stroma in colonic mucosa. A dataset of 251 annotated WSIs, comprising 140 hematoxylin-eosin (HE)-stained and 111 CD3 immunostained colon biopsy WSIs, was developed through active learning using the pipeline. On a hold-out test set of 36 HE and 21 CD3-stained WSIs, a mean intersection over union score of 96.6% and 95.3%, respectively, was achieved on epithelium segmentation. We demonstrate pathologist-level segmentation accuracy and clinically acceptable runtime performance and show that pathologists without programming experience can create near state-of-the-art segmentation solutions for histopathological WSIs using only free-to-use software. The study further demonstrates the strength of open-source solutions in their ability to create generalizable, open pipelines, from which trained models and predictions can seamlessly be exported in open formats and thereby used in external solutions. All scripts, trained models, a video tutorial, and the full dataset of 251 WSIs with ~31k epithelium annotations are made openly available at https://github.com/andreped/NoCodeSeg to accelerate research in the field.
1201.2845
Ueli Rutishauser
Ueli Rutishauser, Jean-Jacques Slotine, Rodney J. Douglas
Competition through selective inhibitory synchrony
in press at Neural computation; 4 figures
Neural Comput. 2012 Aug;24(8):2033-52
10.1162/NECO_a_00304
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Models of cortical neuronal circuits commonly depend on inhibitory feedback to control gain, provide signal normalization, and to selectively amplify signals using winner-take-all (WTA) dynamics. Such models generally assume that excitatory and inhibitory neurons are able to interact easily, because their axons and dendrites are co-localized in the same small volume. However, quantitative neuroanatomical studies of the dimensions of axonal and dendritic trees of neurons in the neocortex show that this co-localization assumption is not valid. In this paper we describe a simple modification to the WTA circuit design that permits the effects of distributed inhibitory neurons to be coupled through synchronization, and so allows a single WTA to be distributed widely in cortical space, well beyond the arborization of any single inhibitory neuron, and even across different cortical areas. We prove by non-linear contraction analysis, and demonstrate by simulation that distributed WTA sub-systems combined by such inhibitory synchrony are inherently stable. We show analytically that synchronization is substantially faster than winner selection. This circuit mechanism allows networks of independent WTAs to fully or partially compete with each other.
[ { "created": "Fri, 13 Jan 2012 14:16:51 GMT", "version": "v1" }, { "created": "Tue, 3 Apr 2012 10:22:29 GMT", "version": "v2" } ]
2018-01-16
[ [ "Rutishauser", "Ueli", "" ], [ "Slotine", "Jean-Jacques", "" ], [ "Douglas", "Rodney J.", "" ] ]
Models of cortical neuronal circuits commonly depend on inhibitory feedback to control gain, provide signal normalization, and to selectively amplify signals using winner-take-all (WTA) dynamics. Such models generally assume that excitatory and inhibitory neurons are able to interact easily, because their axons and dendrites are co-localized in the same small volume. However, quantitative neuroanatomical studies of the dimensions of axonal and dendritic trees of neurons in the neocortex show that this co-localization assumption is not valid. In this paper we describe a simple modification to the WTA circuit design that permits the effects of distributed inhibitory neurons to be coupled through synchronization, and so allows a single WTA to be distributed widely in cortical space, well beyond the arborization of any single inhibitory neuron, and even across different cortical areas. We prove by non-linear contraction analysis, and demonstrate by simulation that distributed WTA sub-systems combined by such inhibitory synchrony are inherently stable. We show analytically that synchronization is substantially faster than winner selection. This circuit mechanism allows networks of independent WTAs to fully or partially compete with each other.
2004.02136
Nicolo' Savioli
Nicol\`o Savioli
One-shot screening of potential peptide ligands on HR1 domain in COVID-19 glycosylated spike (S) protein with deep siamese network
11 pages, 5 figures, 1 Table, added reference, revisited the introduction
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
The novel coronavirus (2019-nCoV) has been declared a new international health emergency, and no specific drug has yet been identified. Several methods are currently being evaluated, such as inhibitors of the protease and of the glycosylated spike (S) protein, which outlines the main fusion site between coronavirus and host cells. Nevertheless, the Heptad Repeat 1 (HR1) domain on the glycosylated spike (S) protein is the region with the least mutability and therefore the most encouraging target for new inhibitor drugs. The novelty of the proposed approach, compared to others, lies in the precise training of a deep neural network toward the 2019-nCoV virus. A Siamese Neural Network (SNN) has been trained to distinguish the whole 2019-nCoV protein sequence from those of two different virus families, HIV-1 and Ebola. In this way, the present deep learning system has precise knowledge of peptide linkage within the 2019-nCoV protein structure and, unlike other works, is not trivially trained on public datasets that provide no ligand-peptide information for 2019-nCoV. Notably, the SNN shows a sensitivity of $83\%$ in peptide affinity classification, where $3027$ peptides from the SATPdb bank have been tested against the specific region HR1 of 2019-nCoV, exhibiting an affinity of $93\%$ for the peptidyl-prolyl cis-trans isomerase (PPIase) peptide. This affinity between PPIase and HR1 can open new horizons of research, since several scientific papers have already shown that the CsA immunosuppression drug, a main inhibitor of PPIase, suppresses the reproduction of different CoV viruses, including SARS-CoV and MERS-CoV. Finally, to ensure scientific reproducibility, code and data have been made public at the following link: https://github.com/bionick87/2019-nCoV
[ { "created": "Sun, 5 Apr 2020 09:35:41 GMT", "version": "v1" }, { "created": "Tue, 7 Apr 2020 16:07:15 GMT", "version": "v2" }, { "created": "Sat, 11 Apr 2020 09:23:54 GMT", "version": "v3" } ]
2020-04-14
[ [ "Savioli", "Nicolò", "" ] ]
The novel coronavirus (2019-nCoV) has been declared a new international health emergency, and no specific drug has yet been identified. Several methods are currently being evaluated, such as inhibitors of the protease and of the glycosylated spike (S) protein, which outlines the main fusion site between coronavirus and host cells. Nevertheless, the Heptad Repeat 1 (HR1) domain on the glycosylated spike (S) protein is the region with the least mutability and therefore the most encouraging target for new inhibitor drugs. The novelty of the proposed approach, compared to others, lies in the precise training of a deep neural network toward the 2019-nCoV virus. A Siamese Neural Network (SNN) has been trained to distinguish the whole 2019-nCoV protein sequence from those of two different virus families, HIV-1 and Ebola. In this way, the present deep learning system has precise knowledge of peptide linkage within the 2019-nCoV protein structure and, unlike other works, is not trivially trained on public datasets that provide no ligand-peptide information for 2019-nCoV. Notably, the SNN shows a sensitivity of $83\%$ in peptide affinity classification, where $3027$ peptides from the SATPdb bank have been tested against the specific region HR1 of 2019-nCoV, exhibiting an affinity of $93\%$ for the peptidyl-prolyl cis-trans isomerase (PPIase) peptide. This affinity between PPIase and HR1 can open new horizons of research, since several scientific papers have already shown that the CsA immunosuppression drug, a main inhibitor of PPIase, suppresses the reproduction of different CoV viruses, including SARS-CoV and MERS-CoV. Finally, to ensure scientific reproducibility, code and data have been made public at the following link: https://github.com/bionick87/2019-nCoV
2106.07622
Lu Zhang
Lu Zhang, Xiaowei Yu, Yanjun Lyu, Li Wang, Dajiang Zhu
Representative Functional Connectivity Learning for Multiple Clinical groups in Alzheimer's Disease
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mild cognitive impairment (MCI) is a high-risk dementia condition which progresses to probable Alzheimer's disease (AD) at approximately 10% to 15% per year. Characterization of group-level differences between two subtypes of MCI - stable MCI (sMCI) and progressive MCI (pMCI) - is the key step to understanding the mechanisms of MCI progression and enabling a possible delay of the transition from MCI to AD. Functional connectivity (FC) is considered a promising way to study MCI progression, since it may show alterations even in preclinical stages and provide substrates for AD progression. However, the representative FC patterns during AD development for different clinical groups, especially for sMCI and pMCI, have been understudied. In this work, we integrated an autoencoder and multi-class classification into a single deep model and successfully learned a set of clinical-group-related feature vectors. Specifically, we trained two non-linear mappings which realized the mutual transformations between the original FC space and the feature space. By mapping the learned clinical-group-related feature vectors to the original FC space, representative FCs were constructed for each group. Moreover, based on these feature vectors, our model achieves a high classification accuracy - 68% for multi-class classification (NC vs SMC vs sMCI vs pMCI vs AD).
[ { "created": "Mon, 14 Jun 2021 17:27:54 GMT", "version": "v1" } ]
2021-06-15
[ [ "Zhang", "Lu", "" ], [ "Yu", "Xiaowei", "" ], [ "Lyu", "Yanjun", "" ], [ "Wang", "Li", "" ], [ "Zhu", "Dajiang", "" ] ]
Mild cognitive impairment (MCI) is a high-risk dementia condition which progresses to probable Alzheimer's disease (AD) at approximately 10% to 15% per year. Characterization of group-level differences between two subtypes of MCI - stable MCI (sMCI) and progressive MCI (pMCI) - is the key step to understanding the mechanisms of MCI progression and enabling a possible delay of the transition from MCI to AD. Functional connectivity (FC) is considered a promising way to study MCI progression, since it may show alterations even in preclinical stages and provide substrates for AD progression. However, the representative FC patterns during AD development for different clinical groups, especially for sMCI and pMCI, have been understudied. In this work, we integrated an autoencoder and multi-class classification into a single deep model and successfully learned a set of clinical-group-related feature vectors. Specifically, we trained two non-linear mappings which realized the mutual transformations between the original FC space and the feature space. By mapping the learned clinical-group-related feature vectors to the original FC space, representative FCs were constructed for each group. Moreover, based on these feature vectors, our model achieves a high classification accuracy - 68% for multi-class classification (NC vs SMC vs sMCI vs pMCI vs AD).
1908.09653
Yipeng Song
Yipeng Song
Fusing heterogeneous data sets
PhD thesis, 173 pages, 60 figures
null
null
null
q-bio.GN cs.LG stat.ME stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In systems biology, it is common to measure biochemical entities at different levels of the same biological system. One of the central problems for the data fusion of such data sets is the heterogeneity of the data. This thesis discusses two types of heterogeneity. The first one is the type of data, such as metabolomics, proteomics and RNAseq data in genomics. These different omics data reflect the properties of the studied biological system from different perspectives. The second one is the type of scale, which indicates the measurements obtained at different scales, such as binary, ordinal, interval and ratio-scaled variables. In this thesis, we developed several statistical methods capable to fuse data sets of these two types of heterogeneity. The advantages of the proposed methods in comparison with other approaches are assessed using comprehensive simulations as well as the analysis of real biological data sets.
[ { "created": "Fri, 23 Aug 2019 12:20:04 GMT", "version": "v1" } ]
2019-08-27
[ [ "Song", "Yipeng", "" ] ]
In systems biology, it is common to measure biochemical entities at different levels of the same biological system. One of the central problems for the data fusion of such data sets is the heterogeneity of the data. This thesis discusses two types of heterogeneity. The first one is the type of data, such as metabolomics, proteomics and RNAseq data in genomics. These different omics data reflect the properties of the studied biological system from different perspectives. The second one is the type of scale, which indicates the measurements obtained at different scales, such as binary, ordinal, interval and ratio-scaled variables. In this thesis, we developed several statistical methods capable of fusing data sets of these two types of heterogeneity. The advantages of the proposed methods in comparison with other approaches are assessed using comprehensive simulations as well as the analysis of real biological data sets.
2303.10642
Zhanwei Du
Yuan Bai, Zengyang Shao, Xiao Zhang, Ruohan Chen, Lin Wang, Sheikh Taslim Ali, Tianmu Chen, Eric H. Y. Lau, Dong-Yan Jin, Zhanwei Du
Reproduction number of SARS-CoV-2 Omicron variants, China, December 2022-January 2023
null
null
null
null
q-bio.PE cs.SI
http://creativecommons.org/licenses/by-nc-nd/4.0/
China adjusted the zero-COVID strategy in late 2022, triggering an unprecedented Omicron wave. We estimated the time-varying reproduction numbers of 32 provincial-level administrative divisions from December 2022 to January 2023. We found that the pooled estimate of initial reproduction numbers is 4.74 (95% CI: 4.41, 5.07).
[ { "created": "Sun, 19 Mar 2023 12:26:51 GMT", "version": "v1" } ]
2023-03-21
[ [ "Bai", "Yuan", "" ], [ "Shao", "Zengyang", "" ], [ "Zhang", "Xiao", "" ], [ "Chen", "Ruohan", "" ], [ "Wang", "Lin", "" ], [ "Ali", "Sheikh Taslim", "" ], [ "Chen", "Tianmu", "" ], [ "Lau", "Eric H. Y.", "" ], [ "Jin", "Dong-Yan", "" ], [ "Du", "Zhanwei", "" ] ]
China adjusted the zero-COVID strategy in late 2022, triggering an unprecedented Omicron wave. We estimated the time-varying reproduction numbers of 32 provincial-level administrative divisions from December 2022 to January 2023. We found that the pooled estimate of initial reproduction numbers is 4.74 (95% CI: 4.41, 5.07).
1402.1077
Ricardo Martinez-Garcia
Ricardo Martinez-Garcia, Justin M. Calabrese, E. Hernandez-Garcia, C. Lopez
Minimal mechanisms for vegetation patterns in semiarid regions
8 pages, 4 figures
Phil. Trans. R. Soc. A 28 October 2014 vol. 372 no. 2027 20140068
10.1098/rsta.2014.0068
null
q-bio.PE nlin.PS physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The minimal ecological requirements for formation of regular vegetation patterns in semiarid systems have been recently questioned. Against the general belief that a combination of facilitative and competitive interactions is necessary, recent theoretical studies suggest that, under broad conditions, nonlocal competition among plants alone may induce patterns. In this paper, we review results along this line, presenting a series of models that yield spatial patterns when finite-range competition is the only driving force. A preliminary derivation of this type of model from a more detailed one that considers water-biomass dynamics is also presented. Keywords: Vegetation patterns, nonlocal interactions
[ { "created": "Wed, 5 Feb 2014 16:23:43 GMT", "version": "v1" }, { "created": "Mon, 8 Sep 2014 13:00:40 GMT", "version": "v2" } ]
2014-09-24
[ [ "Martinez-Garcia", "Ricardo", "" ], [ "Calabrese", "Justin M.", "" ], [ "Hernandez-Garcia", "E.", "" ], [ "Lopez", "C.", "" ] ]
The minimal ecological requirements for formation of regular vegetation patterns in semiarid systems have been recently questioned. Against the general belief that a combination of facilitative and competitive interactions is necessary, recent theoretical studies suggest that, under broad conditions, nonlocal competition among plants alone may induce patterns. In this paper, we review results along this line, presenting a series of models that yield spatial patterns when finite-range competition is the only driving force. A preliminary derivation of this type of model from a more detailed one that considers water-biomass dynamics is also presented. Keywords: Vegetation patterns, nonlocal interactions
2111.14459
Yuki Koyanagi
J{\o}rgen Ellegaard Andersen, Yuki Koyanagi, Jakob Toudahl Nielsen and Rasmus Villemoes
Prediction of H-Bond Rotations from Protein H-Bond Topology
24 pages, 15 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
H-bonds are known to play an important role in the folding of proteins into three-dimensional structures, which in turn determine their diverse functions. The conformations around H-bonds are important, in that they can be non-local along the backbone and are therefore not captured by the methods such as Ramachandran plots. We study the relationship between the geometry of H-bonds in proteins, expressed as a spatial rotation between the two bonded peptide units, and their topology, expressed as a subgraph of the protein fatgraph. We describe two experiments to predict H-bond rotations from their corresponding subgraphs. The first method is based on sequence alignment between sequences of the signed lengths of H-bonds measured along the backbone. The second method is based on finding an exact match between the descriptions of subgraphs around H-bonds. We find that 88.14% of the predictions lie inside the ball, centred around the true rotation, occupying just 1% of the volume of the rotation space SO(3).
[ { "created": "Mon, 29 Nov 2021 11:15:44 GMT", "version": "v1" } ]
2021-11-30
[ [ "Andersen", "Jørgen Ellegaard", "" ], [ "Koyanagi", "Yuki", "" ], [ "Nielsen", "Jakob Toudahl", "" ], [ "Villemoes", "Rasmus", "" ] ]
H-bonds are known to play an important role in the folding of proteins into three-dimensional structures, which in turn determine their diverse functions. The conformations around H-bonds are important, in that they can be non-local along the backbone and are therefore not captured by methods such as Ramachandran plots. We study the relationship between the geometry of H-bonds in proteins, expressed as a spatial rotation between the two bonded peptide units, and their topology, expressed as a subgraph of the protein fatgraph. We describe two experiments to predict H-bond rotations from their corresponding subgraphs. The first method is based on sequence alignment between sequences of the signed lengths of H-bonds measured along the backbone. The second method is based on finding an exact match between the descriptions of subgraphs around H-bonds. We find that 88.14% of the predictions lie inside the ball centred on the true rotation, which occupies just 1% of the volume of the rotation space SO(3).
1902.07795
Martin L\"offler
Martin L\"offler, Pia Schneider, Sigrid Schuh-Hofer, Sandra Kamping, Katrin Usai, Rolf-Detlef Treede, Frauke Nees, Herta Flor
Stress-induced hyperalgesia instead of analgesia in patients with chronic musculoskeletal pain
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many individuals with chronic musculoskeletal pain (CMP) show impairments in their pain-modulatory capacity. Although stress plays an important role in chronic pain, it is not known if stress-induced analgesia (SIA) is affected in patients with CMP. We investigated SIA in 22 patients with CMP and 18 pain-free participants. Pain thresholds, pain tolerance and suprathreshold pain ratings were examined before and after a cognitive stressor that typically induces pain reduction (SIA). Whereas the controls displayed a significant increase in pain threshold in response to the stressor, the patients with CMP showed no analgesia. In addition, increased pain intensity ratings after the stressor indicated hyperalgesia (SIH) in the patients with CMP compared to controls. An exploratory analysis showed no significant association of SIA or SIH with spatial pain extent. We did not observe significant changes in pain tolerance or pain unpleasantness ratings after the stressor in patients with CMP or controls. Our data suggest that altered stress-induced pain modulation is an important mechanism involved in CMP. Future studies need to clarify the psychobiological mechanisms of these stress-induced alterations in pain processing and determine the role of contributing factors such as early childhood trauma, catastrophizing, comorbidity with mental disorders and genetic predisposition.
[ { "created": "Wed, 20 Feb 2019 22:18:46 GMT", "version": "v1" }, { "created": "Fri, 2 Dec 2022 09:29:55 GMT", "version": "v2" } ]
2022-12-05
[ [ "Löffler", "Martin", "" ], [ "Schneider", "Pia", "" ], [ "Schuh-Hofer", "Sigrid", "" ], [ "Kamping", "Sandra", "" ], [ "Usai", "Katrin", "" ], [ "Treede", "Rolf-Detlef", "" ], [ "Nees", "Frauke", "" ], [ "Flor", "Herta", "" ] ]
Many individuals with chronic musculoskeletal pain (CMP) show impairments in their pain-modulatory capacity. Although stress plays an important role in chronic pain, it is not known if stress-induced analgesia (SIA) is affected in patients with CMP. We investigated SIA in 22 patients with CMP and 18 pain-free participants. Pain thresholds, pain tolerance and suprathreshold pain ratings were examined before and after a cognitive stressor that typically induces pain reduction (SIA). Whereas the controls displayed a significant increase in pain threshold in response to the stressor, the patients with CMP showed no analgesia. In addition, increased pain intensity ratings after the stressor indicated hyperalgesia (SIH) in the patients with CMP compared to controls. An exploratory analysis showed no significant association of SIA or SIH with spatial pain extent. We did not observe significant changes in pain tolerance or pain unpleasantness ratings after the stressor in patients with CMP or controls. Our data suggest that altered stress-induced pain modulation is an important mechanism involved in CMP. Future studies need to clarify the psychobiological mechanisms of these stress-induced alterations in pain processing and determine the role of contributing factors such as early childhood trauma, catastrophizing, comorbidity with mental disorders and genetic predisposition.
1308.1352
Paul Gribble
Jeremy D Wong, Elizabeth T Wilson, Dinant A Kistemaker, Paul L Gribble
Bimanual proprioception: are two hands better than one?
null
J Neurophysiol. 2014 Mar;111(6):1362-8
10.1152/jn.00537.2013
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information about the position of an object that is held in both hands, such as a golf club or a tennis racquet, is transmitted to the human central nervous system from peripheral sensors in both left and right arms. How does the brain combine these two sources of information? Using a robot to move participant's passive limbs, we performed psychophysical estimates of proprioceptive function for each limb independently, and again when subjects grasped the robot handle with both arms. We compared empirical estimates of bimanual proprioception to several models from the sensory integration literature: some that propose a combination of signals from the left and right arms (such as a Bayesian maximum-likelihood estimate), and some that propose using unimanual signals alone. Our results are consistent with the hypothesis that the nervous system both has knowledge of, and uses the limb with the best proprioceptive acuity for bimanual proprioception. Surprisingly, a Bayesian model that postulates optimal combination of sensory signals could not predict empirically observed bimanual acuity. These findings suggest that while the central nervous system seems to have information about the relative sensory acuity of each limb, it uses this information in a rather rudimentary fashion, essentially ignoring information from the less reliable limb.
[ { "created": "Tue, 6 Aug 2013 17:18:53 GMT", "version": "v1" } ]
2014-04-10
[ [ "Wong", "Jeremy D", "" ], [ "Wilson", "Elizabeth T", "" ], [ "Kistemaker", "Dinant A", "" ], [ "Gribble", "Paul L", "" ] ]
Information about the position of an object that is held in both hands, such as a golf club or a tennis racquet, is transmitted to the human central nervous system from peripheral sensors in both left and right arms. How does the brain combine these two sources of information? Using a robot to move participants' passive limbs, we performed psychophysical estimates of proprioceptive function for each limb independently, and again when subjects grasped the robot handle with both arms. We compared empirical estimates of bimanual proprioception to several models from the sensory integration literature: some that propose a combination of signals from the left and right arms (such as a Bayesian maximum-likelihood estimate), and some that propose using unimanual signals alone. Our results are consistent with the hypothesis that the nervous system both has knowledge of, and uses, the limb with the best proprioceptive acuity for bimanual proprioception. Surprisingly, a Bayesian model that postulates optimal combination of sensory signals could not predict empirically observed bimanual acuity. These findings suggest that while the central nervous system seems to have information about the relative sensory acuity of each limb, it uses this information in a rather rudimentary fashion, essentially ignoring information from the less reliable limb.
0902.2912
Stefan Engblom
Stefan Engblom
The URDME manual Version 1.4
The latest version of URDME is available from http://www.urdme.org
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have developed URDME, a general software for simulation of stochastic reaction-diffusion processes on unstructured meshes. This allows for a more flexible handling of complicated geometries and curved boundaries compared to simulations on structured, cartesian meshes. The underlying algorithm is the next subvolume method, extended to unstructured meshes by obtaining jump coefficients from a finite element formulation of the corresponding macroscopic equation. This manual describes version 1.4 of the software. Refer to www.urdme.org for the latest updates.
[ { "created": "Tue, 17 Feb 2009 12:39:25 GMT", "version": "v1" }, { "created": "Thu, 24 Mar 2011 07:35:05 GMT", "version": "v2" }, { "created": "Tue, 18 Dec 2012 10:41:12 GMT", "version": "v3" }, { "created": "Tue, 14 Mar 2017 18:31:17 GMT", "version": "v4" }, { "created": "Mon, 27 Jan 2020 08:11:13 GMT", "version": "v5" } ]
2020-01-28
[ [ "Engblom", "Stefan", "" ] ]
We have developed URDME, a general software for simulation of stochastic reaction-diffusion processes on unstructured meshes. This allows for a more flexible handling of complicated geometries and curved boundaries compared to simulations on structured, Cartesian meshes. The underlying algorithm is the next subvolume method, extended to unstructured meshes by obtaining jump coefficients from a finite element formulation of the corresponding macroscopic equation. This manual describes version 1.4 of the software. Refer to www.urdme.org for the latest updates.
1603.04054
Duc Nguyen
Duc D. Nguyen and Bao Wang and Guo-wei Wei
Accurate, robust and reliable calculations of Poisson-Boltzmann binding energies
26 pages, 7 figures
null
null
null
q-bio.BM math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Poisson-Boltzmann (PB) model is one of the most popular implicit solvent models in biophysical modeling and computation. The ability of providing accurate and reliable PB estimation of electrostatic solvation free energy, $\Delta G_{\text{el}}$, and binding free energy, $\Delta\Delta G_{\text{el}}$, is of tremendous significance to computational biophysics and biochemistry. Recently, it has been warned in the literature (Journal of Chemical Theory and Computation 2013, 9, 3677-3685) that the widely used grid spacing of $0.5$ \AA $ $ produces unacceptable errors in $\Delta\Delta G_{\text{el}}$ estimation with the solvent exclude surface (SES). In this work, we investigate the grid dependence of our PB solver (MIBPB) with SESs for estimating both electrostatic solvation free energies and electrostatic binding free energies. It is found that the relative absolute error of $\Delta G_{\text{el}}$ obtained at the grid spacing of $1.0$ \AA $ $ compared to $\Delta G_{\text{el}}$ at $0.2$ \AA $ $ averaged over 153 molecules is less than 0.2\%. Our results indicate that the use of grid spacing $0.6$ \AA $ $ ensures accuracy and reliability in $\Delta\Delta G_{\text{el}}$ calculation. In fact, the grid spacing of $1.1$ \AA $ $ appears to deliver adequate accuracy for high throughput screening.
[ { "created": "Sun, 13 Mar 2016 17:51:27 GMT", "version": "v1" }, { "created": "Thu, 9 Jun 2016 15:05:37 GMT", "version": "v2" } ]
2016-06-10
[ [ "Nguyen", "Duc D.", "" ], [ "Wang", "Bao", "" ], [ "Wei", "Guo-wei", "" ] ]
The Poisson-Boltzmann (PB) model is one of the most popular implicit solvent models in biophysical modeling and computation. The ability to provide accurate and reliable PB estimates of the electrostatic solvation free energy, $\Delta G_{\text{el}}$, and binding free energy, $\Delta\Delta G_{\text{el}}$, is of tremendous significance to computational biophysics and biochemistry. Recently, it has been warned in the literature (Journal of Chemical Theory and Computation 2013, 9, 3677-3685) that the widely used grid spacing of $0.5$ \AA $ $ produces unacceptable errors in $\Delta\Delta G_{\text{el}}$ estimation with the solvent excluded surface (SES). In this work, we investigate the grid dependence of our PB solver (MIBPB) with SESs for estimating both electrostatic solvation free energies and electrostatic binding free energies. It is found that the relative absolute error of $\Delta G_{\text{el}}$ obtained at the grid spacing of $1.0$ \AA $ $ compared to $\Delta G_{\text{el}}$ at $0.2$ \AA $ $, averaged over 153 molecules, is less than 0.2\%. Our results indicate that the use of a grid spacing of $0.6$ \AA $ $ ensures accuracy and reliability in $\Delta\Delta G_{\text{el}}$ calculations. In fact, a grid spacing of $1.1$ \AA $ $ appears to deliver adequate accuracy for high throughput screening.
1805.00605
Hao Wang
Hao Wang, Jiahui Wang, Xin Yuan Thow, Chengkuo Lee
The first principle of neural circuit and the general Circuit-Probability theory
null
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new neural circuit is proposed by considering the myelin as an inductor. This new neural circuit can explain why the lump-parameter circuit used in previous C-P theory is valid. Meanwhile, it provides a new explanation of the biological function of myelin for neural signal propagation. Furthermore, a new model for magnetic nerve stimulation is built and all phenomena in magnetic nerve stimulation can be well explained. Based on this model, the coil structure can be optimized.
[ { "created": "Wed, 2 May 2018 02:33:05 GMT", "version": "v1" }, { "created": "Mon, 28 May 2018 07:21:11 GMT", "version": "v2" }, { "created": "Sat, 2 Jun 2018 03:26:51 GMT", "version": "v3" } ]
2018-06-05
[ [ "Wang", "Hao", "" ], [ "Wang", "Jiahui", "" ], [ "Thow", "Xin Yuan", "" ], [ "Lee", "Chengkuo", "" ] ]
A new neural circuit is proposed by considering myelin as an inductor. This new neural circuit can explain why the lump-parameter circuit used in the previous C-P theory is valid. Meanwhile, it provides a new explanation of the biological function of myelin in neural signal propagation. Furthermore, a new model for magnetic nerve stimulation is built, and all phenomena in magnetic nerve stimulation can be well explained. Based on this model, the coil structure can be optimized.
2004.11851
Janin Heuer
Timo de Wolff, Dirk Pfl\"uger, Michael Rehme, Janin Heuer and Martin-Immanuel Bittner
Evaluation of Pool-based Testing Approaches to Enable Population-wide Screening for COVID-19
Revision; 16 pages, 3 figures, 2 tables, 2 supplementary figures
null
10.1371/journal.pone.0243692
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Rapid testing for an infection is paramount during a pandemic to prevent continued viral spread and excess morbidity and mortality. This study aimed to determine whether alternative testing strategies based on sample pooling can increase the speed and throughput of screening for SARS-CoV-2. Methods: A mathematical modelling approach was chosen to simulate six different testing strategies based on key input parameters (infection rate, test characteristics, population size, testing capacity etc.). The situations in five countries (US, DE, UK, IT and SG) currently experiencing COVID-19 outbreaks were simulated to reflect a broad variety of population sizes and testing capacities. The primary study outcome measurements that were finalised prior to any data collection were time and number of tests required; number of cases identified; and number of false positives. Findings: The performance of all tested methods depends on the input parameters, i.e. the specific circumstances of a screening campaign. To screen one tenth of each country's population at an infection rate of 1% - e.g. when prioritising frontline medical staff and public workers -, realistic optimised testing strategies enable such a campaign to be completed in ca. 29 days in the US, 71 in the UK, 25 in Singapore, 17 in Italy and 10 in Germany (ca. eight times faster compared to individual testing). When infection rates are considerably lower, or when employing an optimal, yet logistically more complex pooling method, the gains are more pronounced. Pool-based approaches also reduces the number of false positive diagnoses by 50%. Interpretation: The results of this study provide a clear rationale for adoption of pool-based testing strategies to increase speed and throughput of testing for SARS-CoV-2. The current individual testing approach unnecessarily wastes valuable time and resources.
[ { "created": "Fri, 24 Apr 2020 16:51:43 GMT", "version": "v1" }, { "created": "Thu, 8 Oct 2020 13:57:07 GMT", "version": "v2" } ]
2021-01-27
[ [ "de Wolff", "Timo", "" ], [ "Pflüger", "Dirk", "" ], [ "Rehme", "Michael", "" ], [ "Heuer", "Janin", "" ], [ "Bittner", "Martin-Immanuel", "" ] ]
Background: Rapid testing for an infection is paramount during a pandemic to prevent continued viral spread and excess morbidity and mortality. This study aimed to determine whether alternative testing strategies based on sample pooling can increase the speed and throughput of screening for SARS-CoV-2. Methods: A mathematical modelling approach was chosen to simulate six different testing strategies based on key input parameters (infection rate, test characteristics, population size, testing capacity etc.). The situations in five countries (US, DE, UK, IT and SG) currently experiencing COVID-19 outbreaks were simulated to reflect a broad variety of population sizes and testing capacities. The primary study outcome measurements that were finalised prior to any data collection were time and number of tests required; number of cases identified; and number of false positives. Findings: The performance of all tested methods depends on the input parameters, i.e. the specific circumstances of a screening campaign. To screen one tenth of each country's population at an infection rate of 1% - e.g. when prioritising frontline medical staff and public workers -, realistic optimised testing strategies enable such a campaign to be completed in ca. 29 days in the US, 71 in the UK, 25 in Singapore, 17 in Italy and 10 in Germany (ca. eight times faster compared to individual testing). When infection rates are considerably lower, or when employing an optimal, yet logistically more complex pooling method, the gains are more pronounced. Pool-based approaches also reduce the number of false positive diagnoses by 50%. Interpretation: The results of this study provide a clear rationale for adoption of pool-based testing strategies to increase speed and throughput of testing for SARS-CoV-2. The current individual testing approach unnecessarily wastes valuable time and resources.
2103.15125
Lester Beltran
Lester Beltran
Quantum Bose-Einstein Statistics for Indistinguishable Concepts in Human Language
12 pages, 5 figures
null
null
null
q-bio.NC cs.CL quant-ph
http://creativecommons.org/licenses/by/4.0/
We investigate the hypothesis that within a combination of a 'number concept' plus a 'substantive concept', such as 'eleven animals,' the identity and indistinguishability present on the level of the concepts, i.e., all eleven animals are identical and indistinguishable, gives rise to a statistical structure of the Bose-Einstein type similar to how Bose-Einstein statistics is present for identical and indistinguishable quantum particles. We proceed by identifying evidence for this hypothesis by extracting the statistical data from the World-Wide-Web utilizing the Google Search tool. By using the Kullback-Leibler divergence method, we then compare the obtained distribution with the Maxwell-Boltzmann as well as with the Bose-Einstein distributions and show that the Bose-Einstein's provides a better fit as compared to the Maxwell-Boltzmanns.
[ { "created": "Sun, 28 Mar 2021 13:07:12 GMT", "version": "v1" } ]
2021-03-30
[ [ "Beltran", "Lester", "" ] ]
We investigate the hypothesis that within a combination of a 'number concept' plus a 'substantive concept', such as 'eleven animals', the identity and indistinguishability present on the level of the concepts, i.e., all eleven animals are identical and indistinguishable, give rise to a statistical structure of the Bose-Einstein type, similar to how Bose-Einstein statistics arises for identical and indistinguishable quantum particles. We proceed by identifying evidence for this hypothesis by extracting the statistical data from the World Wide Web utilizing the Google Search tool. Using the Kullback-Leibler divergence method, we then compare the obtained distribution with both the Maxwell-Boltzmann and the Bose-Einstein distributions and show that the Bose-Einstein distribution provides a better fit than the Maxwell-Boltzmann.
2007.07445
Michael Small
Michael Small and Orlando Porras and Michael Little and David Cavanagh and Harry Nicholas
Modelling remote epidemic transmission in Western Australia and implications for pandemic response
17 pages, 7 figures, preprint
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop an agent-based model of disease transmission in remote communities in Western Australia. Despite extreme isolation, we show that the movement of people amongst a large number of small but isolated communities has the effect of causing transmission to spread quickly. Significant movement between remote communities, and regional and urban centres allows for infection to quickly spread to and then among these remote communities. Our conclusions are based on two characteristic features of remote communities in Western Australia: (1) high mobility of people amongst these communities, and (2) relatively high proportion of travellers from very small communities to major population centres. In models of infection initiated in the state capital, Perth, these remote communities are collectively and uniquely vulnerable. Our model and analysis does not account for possibly heightened impact due to preexisting conditions, such additional assumptions would only make the projections of this model more dire. We advocate stringent monitoring and control of movement to prevent significant impact on the indigenous population of Western Australia.
[ { "created": "Wed, 15 Jul 2020 02:41:21 GMT", "version": "v1" } ]
2020-07-16
[ [ "Small", "Michael", "" ], [ "Porras", "Orlando", "" ], [ "Little", "Michael", "" ], [ "Cavanagh", "David", "" ], [ "Nicholas", "Harry", "" ] ]
We develop an agent-based model of disease transmission in remote communities in Western Australia. Despite extreme isolation, we show that the movement of people amongst a large number of small but isolated communities has the effect of causing transmission to spread quickly. Significant movement between remote communities and regional and urban centres allows infection to quickly spread to, and then among, these remote communities. Our conclusions are based on two characteristic features of remote communities in Western Australia: (1) high mobility of people amongst these communities, and (2) a relatively high proportion of travellers from very small communities to major population centres. In models of infection initiated in the state capital, Perth, these remote communities are collectively and uniquely vulnerable. Our model and analysis do not account for a possibly heightened impact due to preexisting conditions; such additional assumptions would only make the projections of this model more dire. We advocate stringent monitoring and control of movement to prevent significant impact on the indigenous population of Western Australia.
2405.15841
Anna Maltsev
Anna V. Maltsev, Yasir Z. Barlas, Adina Hazan, Rui Zhang, Michela Ottolia, Joshua I. Goldhaber
Dual network structure of the AV node
14 pages, 8 figures at the end of the manuscript, two videos and three datasets in the Source folder
null
null
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological systems, particularly the brain, are frequently analyzed as networks, conveying mechanistic insights into their function and pathophysiology. This is the first study of a functional network of cardiac tissue. We use calcium imaging to obtain two functional networks in a subsidiary but essential pacemaker of the heart, the atrioventricular node (AVN). The AVN is a small cellular structure with dual functions: a) to delay the pacemaker signal passing from the sinoatrial node (SAN) to the ventricles, and b) to serve as a back-up pacemaker should the primary SAN pacemaker fail. Failure of the AVN can lead to syncope and death. We found that the shortest path lengths and clustering coefficients of the AVN are remarkably similar to those of the brain. The network is ``small-world," thus optimized for energy use vs transmission efficiency. We further study the network properties of AVN tissue with knock-out of the sodium-calcium exchange transporter. In this case, the average shortest path-lengths remained nearly unchanged showing network resilience, while the clustering coefficient was somewhat reduced, similar to schizophrenia in brain networks. When we removed the global action potential using principal component analysis (PCA) in wild-type model, the network lost its ``small-world" characteristics with less information-passing efficiency due to longer shortest path lengths but more robust signal propagation resulting from higher clustering. These two wild-type networks (with and without global action potential) may correspond to fast and slow conduction pathways. Laslty, a one-parameter non-linear preferential attachment model is a good fit to all three AVN networks.
[ { "created": "Fri, 24 May 2024 16:06:15 GMT", "version": "v1" }, { "created": "Thu, 30 May 2024 10:11:54 GMT", "version": "v2" } ]
2024-05-31
[ [ "Maltsev", "Anna V.", "" ], [ "Barlas", "Yasir Z.", "" ], [ "Hazan", "Adina", "" ], [ "Zhang", "Rui", "" ], [ "Ottolia", "Michela", "" ], [ "Goldhaber", "Joshua I.", "" ] ]
Biological systems, particularly the brain, are frequently analyzed as networks, conveying mechanistic insights into their function and pathophysiology. This is the first study of a functional network of cardiac tissue. We use calcium imaging to obtain two functional networks in a subsidiary but essential pacemaker of the heart, the atrioventricular node (AVN). The AVN is a small cellular structure with dual functions: a) to delay the pacemaker signal passing from the sinoatrial node (SAN) to the ventricles, and b) to serve as a back-up pacemaker should the primary SAN pacemaker fail. Failure of the AVN can lead to syncope and death. We found that the shortest path lengths and clustering coefficients of the AVN are remarkably similar to those of the brain. The network is ``small-world," thus optimized for energy use vs transmission efficiency. We further study the network properties of AVN tissue with knock-out of the sodium-calcium exchange transporter. In this case, the average shortest path-lengths remained nearly unchanged, showing network resilience, while the clustering coefficient was somewhat reduced, similar to schizophrenia in brain networks. When we removed the global action potential using principal component analysis (PCA) in the wild-type model, the network lost its ``small-world" characteristics with less information-passing efficiency due to longer shortest path lengths but more robust signal propagation resulting from higher clustering. These two wild-type networks (with and without global action potential) may correspond to fast and slow conduction pathways. Lastly, a one-parameter non-linear preferential attachment model is a good fit to all three AVN networks.
1210.3632
Dante Chialvo
Dante R. Chialvo
Critical brain dynamics at large scale
In "Criticality in Neural Systems", Niebur E, Plenz D, Schuster HG. (eds.) 2013 (in press)
null
null
null
q-bio.NC cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Highly correlated brain dynamics produces synchronized states with no behavioral value, while weakly correlated dynamics prevent information flow. In between these states, the unique dynamical features of the critical state endow the brain with properties which are fundamental for adaptive behavior. We discuss the idea put forward two decades ago by Per Bak that the working brain stays at an intermediate (critical) regime characterized by power-law correlations. This proposal is now supported by a wide body of empirical evidence at different scales demonstrating that the spatiotemporal brain dynamics exhibit key signatures of critical dynamics, previously recognized in other complex systems. The rationale behind this program is discussed in these notes, followed by an account of the most recent results.
[ { "created": "Fri, 12 Oct 2012 20:31:40 GMT", "version": "v1" } ]
2012-10-16
[ [ "Chialvo", "Dante R.", "" ] ]
Highly correlated brain dynamics produces synchronized states with no behavioral value, while weakly correlated dynamics prevent information flow. In between these states, the unique dynamical features of the critical state endow the brain with properties which are fundamental for adaptive behavior. We discuss the idea put forward two decades ago by Per Bak that the working brain stays at an intermediate (critical) regime characterized by power-law correlations. This proposal is now supported by a wide body of empirical evidence at different scales demonstrating that the spatiotemporal brain dynamics exhibit key signatures of critical dynamics, previously recognized in other complex systems. The rationale behind this program is discussed in these notes, followed by an account of the most recent results.
2007.14957
Feng Huang
Feng Huang, Ming Cao, and Long Wang
Learning enables adaptation in cooperation for multi-player stochastic games
null
J. R. Soc. Interface 17: 20200639 (2020)
10.1098/rsif.2020.0639
null
q-bio.PE cs.GT cs.MA physics.bio-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interactions among individuals in natural populations often occur in a dynamically changing environment. Understanding the role of environmental variation in population dynamics has long been a central topic in theoretical ecology and population biology. However, the key question of how individuals, in the middle of challenging social dilemmas (e.g., the "tragedy of the commons"), modulate their behaviors to adapt to the fluctuation of the environment has not yet been addressed satisfactorily. Utilizing evolutionary game theory and stochastic games, we develop a game-theoretical framework that incorporates the adaptive mechanism of reinforcement learning to investigate whether cooperative behaviors can evolve in the ever-changing group interaction environment. When the action choices of players are just slightly influenced by past reinforcements, we construct an analytical condition to determine whether cooperation can be favored over defection. Intuitively, this condition reveals why and how the environment can mediate cooperative dilemmas. Under our model architecture, we also compare this learning mechanism with two non-learning decision rules, and we find that learning significantly improves the propensity for cooperation in weak social dilemmas, and, in sharp contrast, hinders cooperation in strong social dilemmas. Our results suggest that in complex social-ecological dilemmas, learning enables the adaptation of individuals to varying environments.
[ { "created": "Wed, 29 Jul 2020 17:01:24 GMT", "version": "v1" } ]
2021-05-18
[ [ "Huang", "Feng", "" ], [ "Cao", "Ming", "" ], [ "Wang", "Long", "" ] ]
Interactions among individuals in natural populations often occur in a dynamically changing environment. Understanding the role of environmental variation in population dynamics has long been a central topic in theoretical ecology and population biology. However, the key question of how individuals, in the middle of challenging social dilemmas (e.g., the "tragedy of the commons"), modulate their behaviors to adapt to the fluctuation of the environment has not yet been addressed satisfactorily. Utilizing evolutionary game theory and stochastic games, we develop a game-theoretical framework that incorporates the adaptive mechanism of reinforcement learning to investigate whether cooperative behaviors can evolve in the ever-changing group interaction environment. When the action choices of players are just slightly influenced by past reinforcements, we construct an analytical condition to determine whether cooperation can be favored over defection. Intuitively, this condition reveals why and how the environment can mediate cooperative dilemmas. Under our model architecture, we also compare this learning mechanism with two non-learning decision rules, and we find that learning significantly improves the propensity for cooperation in weak social dilemmas, and, in sharp contrast, hinders cooperation in strong social dilemmas. Our results suggest that in complex social-ecological dilemmas, learning enables the adaptation of individuals to varying environments.
1506.02424
Yue Wang
Yue Wang
Algorithms for determining transposons in gene sequences
null
null
null
null
q-bio.GN cs.CE cs.DS
http://creativecommons.org/licenses/by/4.0/
Some genes can change their relative locations in a genome. Thus for different individuals of the same species, the orders of genes might be different. Such jumping genes are called transposons. A practical problem is to determine transposons in given gene sequences. Through an intuitive rule, we transform the biological problem of determining transposons into a rigorous mathematical problem of determining the longest common subsequence. Depending on whether the gene sequence is linear (each sequence has a fixed head and tail) or circular (we can choose any gene as the head, and the previous one is the tail), and whether genes have multiple copies, we classify the problem of determining transposons into four scenarios: (1) linear sequences without duplicated genes; (2) circular sequences without duplicated genes; (3) linear sequences with duplicated genes; (4) circular sequences with duplicated genes. With the help of graph theory, we design fast algorithms for different scenarios. We also derive some results that might be of theoretical interest in combinatorics.
[ { "created": "Mon, 8 Jun 2015 09:58:19 GMT", "version": "v1" }, { "created": "Mon, 13 Jun 2022 06:56:26 GMT", "version": "v2" }, { "created": "Wed, 22 Jun 2022 19:26:26 GMT", "version": "v3" }, { "created": "Thu, 1 Sep 2022 06:06:29 GMT", "version": "v4" } ]
2022-09-02
[ [ "Wang", "Yue", "" ] ]
Some genes can change their relative locations in a genome. Thus for different individuals of the same species, the orders of genes might be different. Such jumping genes are called transposons. A practical problem is to determine transposons in given gene sequences. Through an intuitive rule, we transform the biological problem of determining transposons into a rigorous mathematical problem of determining the longest common subsequence. Depending on whether the gene sequence is linear (each sequence has a fixed head and tail) or circular (we can choose any gene as the head, and the previous one is the tail), and whether genes have multiple copies, we classify the problem of determining transposons into four scenarios: (1) linear sequences without duplicated genes; (2) circular sequences without duplicated genes; (3) linear sequences with duplicated genes; (4) circular sequences with duplicated genes. With the help of graph theory, we design fast algorithms for different scenarios. We also derive some results that might be of theoretical interest in combinatorics.
1902.09308
Paulo Cabrita Ph.D.
Paulo Cabrita
Holocrine Secretion and Kino Flow in Angiosperms: Their Role and Physiological Advantages in Plant Defence Mechanisms
69 pages, 7 figures, 2 Tables
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Kinos are plant exudates, rich in polyphenols, produced by several angiosperms in reaction to damage. They flow out of kino veins, schizolysigenous ducts composing an anatomically distinct continuous system of tangentially anastomosing lacunae produced by the vascular cambium, which encircle the plant. Kino is secreted holocrinously into the vein lumen by a cambiform epithelium lined by suberized cells that separate kino veins from the surrounding axial parenchyma. A model describing kino flow in eucalypts is presented to investigate how vein distribution and structure, as well as kino holocrine loading, crystallization, and viscosity affect flow. Considering viscosity, vein anatomy, and a time-dependent holocrine loading of kino, the unsteady Stokes equation was applied. Qualitatively, kino flow is similar to resin flow. There is an increase in flow towards the vein open end, and both pressure and flow depend on vein dimensions, kino properties and holocrine loading. However, kino veins present a much smaller specific resistance to flow compared to resin ducts. Also, unlike resin loading in conifers, holocrine kino loading is not pressure-driven. The pressure and pressure gradient required to drive an equally fast flow are smaller than what is observed on the resin ducts of conifers. These results agree with previous observations on some angiosperms and suggest that holocrinous gum flow may have lower metabolic energy costs; thus presenting physiological advantages and possibly constituting an evolutionary step of angiosperms in using internal secretory systems in plant defence mechanisms compared to resin flow in conifers. Understanding how these physiological and morphological parameters affect kino flow might be useful for selecting species and developing more sustainable and economically viable methods of tapping gum and gum resin in angiosperms.
[ { "created": "Wed, 20 Feb 2019 13:08:19 GMT", "version": "v1" }, { "created": "Tue, 26 May 2020 17:12:07 GMT", "version": "v2" } ]
2020-05-27
[ [ "Cabrita", "Paulo", "" ] ]
Kinos are plant exudates, rich in polyphenols, produced by several angiosperms in reaction to damage. They flow out of kino veins, schizolysigenous ducts composing an anatomically distinct continuous system of tangentially anastomosing lacunae produced by the vascular cambium, which encircle the plant. Kino is secreted holocrinously into the vein lumen by a cambiform epithelium lined by suberized cells that separate kino veins from the surrounding axial parenchyma. A model describing kino flow in eucalypts is presented to investigate how vein distribution and structure, as well as kino holocrine loading, crystallization, and viscosity affect flow. Considering viscosity, vein anatomy, and a time-dependent holocrine loading of kino, the unsteady Stokes equation was applied. Qualitatively, kino flow is similar to resin flow. There is an increase in flow towards the vein open end, and both pressure and flow depend on vein dimensions, kino properties and holocrine loading. However, kino veins present a much smaller specific resistance to flow compared to resin ducts. Also, unlike resin loading in conifers, holocrine kino loading is not pressure-driven. The pressure and pressure gradient required to drive an equally fast flow are smaller than what is observed on the resin ducts of conifers. These results agree with previous observations on some angiosperms and suggest that holocrinous gum flow may have lower metabolic energy costs; thus presenting physiological advantages and possibly constituting an evolutionary step of angiosperms in using internal secretory systems in plant defence mechanisms compared to resin flow in conifers. Understanding how these physiological and morphological parameters affect kino flow might be useful for selecting species and developing more sustainable and economically viable methods of tapping gum and gum resin in angiosperms.
1507.03572
Leonard Harris
Leonard A. Harris, Justin S. Hogg, Jose-Juan Tapia, John A. P. Sekar, Sanjana A. Gupta, Ilya Korsunsky, Arshi Arora, Dipak Barua, Robert P. Sheehan, and James R. Faeder
BioNetGen 2.2: Advances in Rule-Based Modeling
3 pages, 1 figure, 1 supplementary text file. Supplementary text includes a brief discussion of the RK-PLA along with a performance analysis, two tables listing all new actions/arguments added in BioNetGen 2.2, and the "BioNetGen Quick Reference Guide". Accepted for publication in Bioinformatics
Bioinformatics 32, 3366-3368 (2016)
10.1093/bioinformatics/btw469
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
BioNetGen is an open-source software package for rule-based modeling of complex biochemical systems. Version 2.2 of the software introduces numerous new features for both model specification and simulation. Here, we report on these additions, discussing how they facilitate the construction, simulation, and analysis of larger and more complex models than previously possible.
[ { "created": "Thu, 2 Jul 2015 09:05:27 GMT", "version": "v1" }, { "created": "Fri, 1 Jul 2016 08:09:05 GMT", "version": "v2" } ]
2016-11-21
[ [ "Harris", "Leonard A.", "" ], [ "Hogg", "Justin S.", "" ], [ "Tapia", "Jose-Juan", "" ], [ "Sekar", "John A. P.", "" ], [ "Gupta", "Sanjana A.", "" ], [ "Korsunsky", "Ilya", "" ], [ "Arora", "Arshi", "" ], [ "Barua", "Dipak", "" ], [ "Sheehan", "Robert P.", "" ], [ "Faeder", "James R.", "" ] ]
BioNetGen is an open-source software package for rule-based modeling of complex biochemical systems. Version 2.2 of the software introduces numerous new features for both model specification and simulation. Here, we report on these additions, discussing how they facilitate the construction, simulation, and analysis of larger and more complex models than previously possible.
0805.2071
Luis G. Moyano
Luis G. Moyano and Angel S\'anchez
Evolving learning rules and emergence of cooperation in spatial Prisoner's Dilemma
Final version, to appear in J. Theor. Biol
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the evolutionary Prisoner's Dilemma (PD) game, agents play with each other and update their strategies in every generation according to some microscopic dynamical rule. In its spatial version, agents do not play with every other agent but, instead, interact only with their neighbors, thus mimicking the existence of a social or contact network that defines who interacts with whom. In this work, we explore evolutionary, spatial PD systems consisting of two types of agents, each with a certain update (reproduction, learning) rule. We investigate two different scenarios: in the first case, update rules remain fixed for the entire evolution of the system; in the second case, agents update both strategy and update rule in every generation. We show that in a well-mixed population the evolutionary outcome is always full defection. We subsequently focus on two-strategy competition with nearest-neighbor interactions on the contact network and synchronized update of strategies. Our results show that, for an important range of the parameters of the game, the final state of the system is largely different from that arising from the usual setup of a single, fixed dynamical rule. Furthermore, the results are also very different if update rules are fixed or evolve with the strategies. In this respect, we have studied representative update rules, finding that some of them may become extinct while others prevail. We describe the new and rich variety of final outcomes that arise from this co-evolutionary dynamics. We include examples of other neighborhoods and asynchronous updating that confirm the robustness of our conclusions. Our results pave the way to an evolutionary rationale for modelling social interactions through game theory with a preferred set of update rules.
[ { "created": "Wed, 14 May 2008 14:27:11 GMT", "version": "v1" }, { "created": "Thu, 9 Oct 2008 07:26:05 GMT", "version": "v2" }, { "created": "Sun, 8 Mar 2009 20:58:42 GMT", "version": "v3" } ]
2009-03-08
[ [ "Moyano", "Luis G.", "" ], [ "Sánchez", "Angel", "" ] ]
In the evolutionary Prisoner's Dilemma (PD) game, agents play with each other and update their strategies in every generation according to some microscopic dynamical rule. In its spatial version, agents do not play with every other agent but, instead, interact only with their neighbors, thus mimicking the existence of a social or contact network that defines who interacts with whom. In this work, we explore evolutionary, spatial PD systems consisting of two types of agents, each with a certain update (reproduction, learning) rule. We investigate two different scenarios: in the first case, update rules remain fixed for the entire evolution of the system; in the second case, agents update both strategy and update rule in every generation. We show that in a well-mixed population the evolutionary outcome is always full defection. We subsequently focus on two-strategy competition with nearest-neighbor interactions on the contact network and synchronized update of strategies. Our results show that, for an important range of the parameters of the game, the final state of the system is largely different from that arising from the usual setup of a single, fixed dynamical rule. Furthermore, the results are also very different if update rules are fixed or evolve with the strategies. In this respect, we have studied representative update rules, finding that some of them may become extinct while others prevail. We describe the new and rich variety of final outcomes that arise from this co-evolutionary dynamics. We include examples of other neighborhoods and asynchronous updating that confirm the robustness of our conclusions. Our results pave the way to an evolutionary rationale for modelling social interactions through game theory with a preferred set of update rules.
1909.11791
Sajad Mousavi
Sajad Mousavi, Atiyeh Fotoohinasab and Fatemeh Afghah
Single-modal and Multi-modal False Arrhythmia Alarm Reduction using Attention-based Convolutional and Recurrent Neural Networks
null
null
10.1371/journal.pone.0226990
null
q-bio.QM cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study proposes a deep learning model that effectively suppresses the false alarms in the intensive care units (ICUs) without ignoring the true alarms using single- and multimodal biosignals. Most of the current work in the literature is either rule-based methods, requiring prior knowledge of arrhythmia analysis to build rules, or classical machine learning approaches, depending on hand-engineered features. In this work, we apply convolutional neural networks to automatically extract time-invariant features, an attention mechanism to put more emphasis on the important regions of the input segmented signal(s) that are more likely to contribute to an alarm, and long short-term memory units to capture the temporal information presented in the signal segments. We trained our method efficiently using a two-step training algorithm (i.e., pre-training and fine-tuning the proposed network) on the dataset provided by the PhysioNet Computing in Cardiology Challenge 2015. The evaluation results demonstrate that the proposed method obtains better results compared to other existing algorithms for the false alarm reduction task in ICUs. The proposed method achieves a sensitivity of 93.88% and a specificity of 92.05% for the alarm classification, considering three different signals. In addition, our experiments for 5 separate alarm types lead to significant results, where we just consider a single-lead ECG (e.g., a sensitivity of 90.71%, a specificity of 88.30%, and an AUC of 89.51 for the alarm type of Ventricular Tachycardia arrhythmia).
[ { "created": "Wed, 25 Sep 2019 22:00:26 GMT", "version": "v1" } ]
2020-07-01
[ [ "Mousavi", "Sajad", "" ], [ "Fotoohinasab", "Atiyeh", "" ], [ "Afghah", "Fatemeh", "" ] ]
This study proposes a deep learning model that effectively suppresses the false alarms in the intensive care units (ICUs) without ignoring the true alarms using single- and multimodal biosignals. Most of the current work in the literature is either rule-based methods, requiring prior knowledge of arrhythmia analysis to build rules, or classical machine learning approaches, depending on hand-engineered features. In this work, we apply convolutional neural networks to automatically extract time-invariant features, an attention mechanism to put more emphasis on the important regions of the input segmented signal(s) that are more likely to contribute to an alarm, and long short-term memory units to capture the temporal information presented in the signal segments. We trained our method efficiently using a two-step training algorithm (i.e., pre-training and fine-tuning the proposed network) on the dataset provided by the PhysioNet Computing in Cardiology Challenge 2015. The evaluation results demonstrate that the proposed method obtains better results compared to other existing algorithms for the false alarm reduction task in ICUs. The proposed method achieves a sensitivity of 93.88% and a specificity of 92.05% for the alarm classification, considering three different signals. In addition, our experiments for 5 separate alarm types lead to significant results, where we just consider a single-lead ECG (e.g., a sensitivity of 90.71%, a specificity of 88.30%, and an AUC of 89.51 for the alarm type of Ventricular Tachycardia arrhythmia).