Dataset columns (name, type, observed size range):

  id              string, length 9-13
  submitter       string, length 4-48
  authors         string, length 4-9.62k
  title           string, length 4-343
  comments        string, length 2-480
  journal-ref     string, length 9-309
  doi             string, length 12-138
  report-no       string, 277 distinct values
  categories      string, length 8-87
  license         string, 9 distinct values
  orig_abstract   string, length 27-3.76k
  versions        list, length 1-15
  update_date     string, length 10 (fixed)
  authors_parsed  list, length 1-147
  abstract        string, length 24-3.75k
2304.12725
Rajesh Kumar
Varun Nair (1), Gavish Uppal (1), Saurav Bharadwaj (1), Ruchi Sinha (2), Manjit Kaur (3) and Rajesh Kumar (1). ((1) Department of Biomedical Engineering, Indian Institute of Technology Ropar, Punjab-140001, India. (2) Department of Pathology, All India Institute of Medical Sciences Patna, Bihar -801507, India. (3) Department of Pathology, All India Institute of Medical Sciences Bathinda, Punjab-151001, India)
Quantitative analysis of collagen remodeling in pancreatic lesions using computationally translated collagen images derived from brightfield microscopy images
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Changes in stromal collagen play a crucial role in the pathogenesis and progression of pancreatic intraepithelial neoplasia (PanIN) to pancreatic ductal adenocarcinoma (PDAC), yet misdiagnosis of PanIN is common because its symptoms and subsequent evaluations resemble those of chronic pancreatitis (CP). To visualize fibrillar collagen in tissues, second harmonic generation microscopy is now used as a gold standard in stromal-based research analyses. However, a technique that can quantitatively analyze fibrillar collagen directly on standard H&E-stained slides could (i) remove the need for specialized and costly equipment or labels, (ii) supplement conventional histopathological insights, and (iii) potentially be integrated within the standard histopathology workflow. In this study, whole-core brightfield H&E-stained images of pancreatic tissues were computationally translated into new collagen images. Subsequently, collagen characteristics of PDAC, PanIN, CP, and normal pancreatic tissues (control) were extracted and compared. The highest alignment (p < 0.01, R2 = 0.2594) was observed in PDAC cores compared to the remaining three groups, while the lowest fiber density (p < 0.0001, R2 = 0.3569) was observed in normal tissue cores. Moreover, collagen area and fiber length showed the highest areas under the curve (0.83 and 0.81, respectively) in discriminating neoplastic from non-neoplastic tissues based on their receiver operating characteristics. The study demonstrates that computationally generated collagen images can provide a quantitative assessment of collagen remodeling in pancreatic lesions. Such cross-modality image synthesis may further lead to better histopathological and tissue-microenvironment insights without the need for specialized imaging equipment or labels.
[ { "created": "Tue, 25 Apr 2023 11:14:04 GMT", "version": "v1" } ]
2023-04-26
[ [ "Nair", "Varun", "" ], [ "Uppal", "Gavish", "" ], [ "Bharadwaj", "Saurav", "" ], [ "Sinha", "Ruchi", "" ], [ "Kaur", "Manjit", "" ], [ "Kumar", "Rajesh", "" ], [ ".", "", "" ] ]
Changes in stromal collagen play a crucial role in the pathogenesis and progression of pancreatic intraepithelial neoplasia (PanIN) to pancreatic ductal adenocarcinoma (PDAC), yet misdiagnosis of PanIN is common because its symptoms and subsequent evaluations resemble those of chronic pancreatitis (CP). To visualize fibrillar collagen in tissues, second harmonic generation microscopy is now used as a gold standard in stromal-based research analyses. However, a technique that can quantitatively analyze fibrillar collagen directly on standard H&E-stained slides could (i) remove the need for specialized and costly equipment or labels, (ii) supplement conventional histopathological insights, and (iii) potentially be integrated within the standard histopathology workflow. In this study, whole-core brightfield H&E-stained images of pancreatic tissues were computationally translated into new collagen images. Subsequently, collagen characteristics of PDAC, PanIN, CP, and normal pancreatic tissues (control) were extracted and compared. The highest alignment (p < 0.01, R2 = 0.2594) was observed in PDAC cores compared to the remaining three groups, while the lowest fiber density (p < 0.0001, R2 = 0.3569) was observed in normal tissue cores. Moreover, collagen area and fiber length showed the highest areas under the curve (0.83 and 0.81, respectively) in discriminating neoplastic from non-neoplastic tissues based on their receiver operating characteristics. The study demonstrates that computationally generated collagen images can provide a quantitative assessment of collagen remodeling in pancreatic lesions. Such cross-modality image synthesis may further lead to better histopathological and tissue-microenvironment insights without the need for specialized imaging equipment or labels.
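The abstract above reports areas under the ROC curve (0.83 and 0.81) for single collagen features. As an illustration of what that statistic means (the numbers below are made up, not the paper's data), the AUC for a scalar feature equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative case, i.e. the normalized Mann-Whitney U statistic:

```python
# AUC via the rank (Mann-Whitney) formulation: count pairwise "wins" of
# positive scores over negative scores, with ties counted as one half.
def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical "collagen area" values for neoplastic vs non-neoplastic cores.
neoplastic = [0.9, 0.8, 0.7, 0.6]
non_neoplastic = [0.5, 0.65, 0.3, 0.2]
```

With perfectly separated groups the statistic is 1.0; with identical score distributions it sits near 0.5, which is why values such as 0.83 indicate useful but imperfect discrimination.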
2003.03988
Nasir Ahmad
Nasir Ahmad, Luca Ambrogioni, Marcel A. J. van Gerven
Overcoming the Weight Transport Problem via Spike-Timing-Dependent Weight Inference
20 pages, 6 figures
null
null
null
q-bio.NC cs.LG
http://creativecommons.org/licenses/by/4.0/
We propose a solution to the weight transport problem, which questions the biological plausibility of the backpropagation algorithm. We derive our method from a theoretical analysis of the (approximate) dynamics of leaky integrate-and-fire neurons. We show that using spike timing alone outcompetes existing biologically plausible methods for synaptic weight inference in spiking neural network models. Furthermore, our proposed method is more flexible: it applies to any spiking neuron model, is conservative in the number of parameters required for implementation, and can be deployed in an online fashion with minimal computational overhead. These features, together with its biological plausibility, make it an attractive mechanism underlying weight inference at single synapses.
[ { "created": "Mon, 9 Mar 2020 09:26:23 GMT", "version": "v1" }, { "created": "Wed, 10 Jun 2020 08:19:32 GMT", "version": "v2" }, { "created": "Mon, 2 Nov 2020 09:33:39 GMT", "version": "v3" }, { "created": "Wed, 11 Aug 2021 13:25:03 GMT", "version": "v4" } ]
2021-08-12
[ [ "Ahmad", "Nasir", "" ], [ "Ambrogioni", "Luca", "" ], [ "van Gerven", "Marcel A. J.", "" ] ]
We propose a solution to the weight transport problem, which questions the biological plausibility of the backpropagation algorithm. We derive our method from a theoretical analysis of the (approximate) dynamics of leaky integrate-and-fire neurons. We show that using spike timing alone outcompetes existing biologically plausible methods for synaptic weight inference in spiking neural network models. Furthermore, our proposed method is more flexible: it applies to any spiking neuron model, is conservative in the number of parameters required for implementation, and can be deployed in an online fashion with minimal computational overhead. These features, together with its biological plausibility, make it an attractive mechanism underlying weight inference at single synapses.
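The abstract above builds on leaky integrate-and-fire (LIF) dynamics. As context only, here is the standard textbook LIF neuron (not the authors' weight-inference method); all parameter values are conventional placeholders:

```python
# Minimal leaky integrate-and-fire neuron: forward-Euler integration of
#   tau * dV/dt = -(V - V_rest) + R * I,
# with a spike recorded and the potential reset whenever V crosses threshold.
def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0, v_reset=-65.0,
                 v_thresh=-50.0, resistance=10.0, steps=1000):
    v = v_rest
    spike_times = []
    for step in range(steps):
        v += dt / tau * (-(v - v_rest) + resistance * current)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times
```

With these parameters an input current of 2.0 drives the steady-state potential above threshold and produces regular spiking, while a current of 1.0 asymptotes below threshold and produces none; the spike *timing* under varying input is the quantity the paper's method exploits.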
2407.04816
Christian Quaia
Christian Quaia and Richard J Krauzlis
Object recognition in primates: What can early visual areas contribute?
null
null
null
null
q-bio.NC cs.CV cs.NE
http://creativecommons.org/publicdomain/zero/1.0/
If neuroscientists were asked which brain area is responsible for object recognition in primates, most would probably answer infero-temporal (IT) cortex. While IT is likely responsible for fine discriminations, and it is accordingly dominated by foveal visual inputs, there is more to object recognition than fine discrimination. Importantly, foveation of an object of interest usually requires recognizing, with reasonable confidence, its presence in the periphery. Arguably, IT plays a secondary role in such peripheral recognition, and other visual areas might instead be more critical. To investigate how signals carried by early visual processing areas (such as LGN and V1) could be used for object recognition in the periphery, we focused here on the task of distinguishing faces from non-faces. We tested how sensitive various models were to nuisance parameters, such as changes in scale and orientation of the image, and the type of image background. We found that a model of V1 simple or complex cells could provide quite reliable information, resulting in performance better than 80% in realistic scenarios. An LGN model performed considerably worse. Because peripheral recognition is both crucial to enable fine recognition (by bringing an object of interest on the fovea), and probably sufficient to account for a considerable fraction of our daily recognition-guided behavior, we think that the current focus on area IT and foveal processing is too narrow. We propose that rather than a hierarchical system with IT-like properties as its primary aim, object recognition should be seen as a parallel process, with high-accuracy foveal modules operating in parallel with lower-accuracy and faster modules that can operate across the visual field.
[ { "created": "Fri, 5 Jul 2024 18:57:09 GMT", "version": "v1" } ]
2024-07-09
[ [ "Quaia", "Christian", "" ], [ "Krauzlis", "Richard J", "" ] ]
If neuroscientists were asked which brain area is responsible for object recognition in primates, most would probably answer infero-temporal (IT) cortex. While IT is likely responsible for fine discriminations, and it is accordingly dominated by foveal visual inputs, there is more to object recognition than fine discrimination. Importantly, foveation of an object of interest usually requires recognizing, with reasonable confidence, its presence in the periphery. Arguably, IT plays a secondary role in such peripheral recognition, and other visual areas might instead be more critical. To investigate how signals carried by early visual processing areas (such as LGN and V1) could be used for object recognition in the periphery, we focused here on the task of distinguishing faces from non-faces. We tested how sensitive various models were to nuisance parameters, such as changes in scale and orientation of the image, and the type of image background. We found that a model of V1 simple or complex cells could provide quite reliable information, resulting in performance better than 80% in realistic scenarios. An LGN model performed considerably worse. Because peripheral recognition is both crucial to enable fine recognition (by bringing an object of interest on the fovea), and probably sufficient to account for a considerable fraction of our daily recognition-guided behavior, we think that the current focus on area IT and foveal processing is too narrow. We propose that rather than a hierarchical system with IT-like properties as its primary aim, object recognition should be seen as a parallel process, with high-accuracy foveal modules operating in parallel with lower-accuracy and faster modules that can operate across the visual field.
2405.04912
Samuel Hoffman
Jerret Ross, Brian Belgodere, Samuel C. Hoffman, Vijil Chenthamarakshan, Youssef Mroueh, Payel Das
GP-MoLFormer: A Foundation Model For Molecular Generation
null
null
null
null
q-bio.BM cs.LG physics.chem-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Transformer-based models trained on large, general-purpose datasets of molecular strings have recently emerged as a powerful tool for modeling various structure-property relations. Inspired by this success, here we extend the paradigm of training chemical language transformers on large-scale chemical datasets to generative tasks. Specifically, we propose GP-MoLFormer, an autoregressive molecular string generator trained on more than 1.1B chemical SMILES. GP-MoLFormer uses a 46.8M-parameter transformer decoder with linear attention and rotary positional encodings as the base architecture. We explore the utility of GP-MoLFormer in generating novel, valid, and unique SMILES. Impressively, we find GP-MoLFormer is able to generate a significant fraction of novel, valid, and unique SMILES even when the number of generated molecules is in the 10 billion range and the reference set exceeds a billion. We also find strong memorization of training data in GP-MoLFormer generations, which has so far remained unexplored for chemical language models. Our analyses reveal that training-data memorization and novelty in generations are both affected by the quality of the training data; duplication bias in the training data can enhance memorization at the cost of lowering novelty. We evaluate GP-MoLFormer's utility against existing baselines on three different tasks: de novo generation, scaffold-constrained molecular decoration, and unconstrained property-guided optimization. While the first two are handled with no additional training, we propose a parameter-efficient fine-tuning method for the last task, which uses property-ordered molecular pairs as input. We call this new approach pair-tuning. Our results show GP-MoLFormer performs better than or comparably to baselines across all three tasks, demonstrating its general utility.
[ { "created": "Thu, 4 Apr 2024 16:20:06 GMT", "version": "v1" } ]
2024-05-09
[ [ "Ross", "Jerret", "" ], [ "Belgodere", "Brian", "" ], [ "Hoffman", "Samuel C.", "" ], [ "Chenthamarakshan", "Vijil", "" ], [ "Mroueh", "Youssef", "" ], [ "Das", "Payel", "" ] ]
Transformer-based models trained on large, general-purpose datasets of molecular strings have recently emerged as a powerful tool for modeling various structure-property relations. Inspired by this success, here we extend the paradigm of training chemical language transformers on large-scale chemical datasets to generative tasks. Specifically, we propose GP-MoLFormer, an autoregressive molecular string generator trained on more than 1.1B chemical SMILES. GP-MoLFormer uses a 46.8M-parameter transformer decoder with linear attention and rotary positional encodings as the base architecture. We explore the utility of GP-MoLFormer in generating novel, valid, and unique SMILES. Impressively, we find GP-MoLFormer is able to generate a significant fraction of novel, valid, and unique SMILES even when the number of generated molecules is in the 10 billion range and the reference set exceeds a billion. We also find strong memorization of training data in GP-MoLFormer generations, which has so far remained unexplored for chemical language models. Our analyses reveal that training-data memorization and novelty in generations are both affected by the quality of the training data; duplication bias in the training data can enhance memorization at the cost of lowering novelty. We evaluate GP-MoLFormer's utility against existing baselines on three different tasks: de novo generation, scaffold-constrained molecular decoration, and unconstrained property-guided optimization. While the first two are handled with no additional training, we propose a parameter-efficient fine-tuning method for the last task, which uses property-ordered molecular pairs as input. We call this new approach pair-tuning. Our results show GP-MoLFormer performs better than or comparably to baselines across all three tasks, demonstrating its general utility.
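The uniqueness and novelty metrics mentioned in the abstract above reduce to set bookkeeping; validity checking would additionally need a chemistry toolkit such as RDKit and is omitted here. A minimal sketch with small hypothetical SMILES strings (not model output):

```python
# Uniqueness = distinct generations / total generations.
# Novelty    = distinct generations not present in the reference (training) set,
#              as a fraction of the distinct generations.
def generation_metrics(generated, reference):
    unique = set(generated)
    novel = unique - set(reference)
    return {
        "unique_frac": len(unique) / len(generated),
        "novel_frac": len(novel) / len(unique),
    }

generated = ["CCO", "CCO", "c1ccccc1", "CC(=O)O", "CCN"]
reference = {"CCO", "CC(=O)O"}   # stands in for the billion-scale training set
```

At the paper's scale the reference set would be streamed or hashed rather than held as a plain Python set, but the definitions are the same.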
1811.04827
Chad M. Topaz
M. Ulmer, Lori Ziegelmeier, Chad M. Topaz
Assessing biological models using topological data analysis
18 pages, 6 figures, submitted to PLOS One
null
null
null
q-bio.QM math.AT nlin.AO q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We use topological data analysis as a tool to analyze the fit of mathematical models to experimental data. This study is built on data obtained from motion tracking groups of aphids in [Nilsen et al., PLOS One, 2013] and two random walk models that were proposed to describe the data. One model incorporates social interactions between the insects, and the second model is a control model that excludes these interactions. We compare data from each model to data from experiment by performing statistical tests based on three different sets of measures. First, we use time series of order parameters commonly used in collective motion studies. These order parameters measure the overall polarization and angular momentum of the group, and do not rely on a priori knowledge of the models that produced the data. Second, we use order parameter time series that do rely on a priori knowledge, namely average distance to nearest neighbor and percentage of aphids moving. Third, we use computational persistent homology to calculate topological signatures of the data. Analysis of the a priori order parameters indicates that the interactive model better describes the experimental data than the control model does. The topological approach performs as well as these a priori order parameters and better than the other order parameters, suggesting the utility of the topological approach in the absence of specific knowledge of mechanisms underlying the data.
[ { "created": "Mon, 12 Nov 2018 16:11:58 GMT", "version": "v1" } ]
2018-11-13
[ [ "Ulmer", "M.", "" ], [ "Ziegelmeier", "Lori", "" ], [ "Topaz", "Chad M.", "" ] ]
We use topological data analysis as a tool to analyze the fit of mathematical models to experimental data. This study is built on data obtained from motion tracking groups of aphids in [Nilsen et al., PLOS One, 2013] and two random walk models that were proposed to describe the data. One model incorporates social interactions between the insects, and the second model is a control model that excludes these interactions. We compare data from each model to data from experiment by performing statistical tests based on three different sets of measures. First, we use time series of order parameters commonly used in collective motion studies. These order parameters measure the overall polarization and angular momentum of the group, and do not rely on a priori knowledge of the models that produced the data. Second, we use order parameter time series that do rely on a priori knowledge, namely average distance to nearest neighbor and percentage of aphids moving. Third, we use computational persistent homology to calculate topological signatures of the data. Analysis of the a priori order parameters indicates that the interactive model better describes the experimental data than the control model does. The topological approach performs as well as these a priori order parameters and better than the other order parameters, suggesting the utility of the topological approach in the absence of specific knowledge of mechanisms underlying the data.
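One of the model-free order parameters named in the abstract above, group polarization, has a simple closed form: the magnitude of the mean heading unit vector, which is 1 for perfectly aligned motion and near 0 for incoherent motion. A sketch with made-up heading angles (radians), not the aphid-tracking data:

```python
import math

# Polarization order parameter: |mean of unit heading vectors|.
def polarization(headings):
    cx = sum(math.cos(t) for t in headings) / len(headings)
    cy = sum(math.sin(t) for t in headings) / len(headings)
    return math.hypot(cx, cy)

aligned = [0.0, 0.05, -0.05, 0.02]                    # near-common heading
scattered = [0.0, math.pi / 2, math.pi, -math.pi / 2]  # headings that cancel
```

Time series of this quantity (and of angular momentum, its rotational analogue) are what the statistical tests in the study compare between model and experiment.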
2004.04025
Sourendu Gupta
Sourendu Gupta, R. Shankar
Estimating the number of COVID-19 infections in Indian hot-spots using fatality data
null
null
null
TIFR/TH/20-10
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In India the COVID-19 infected population has not yet been accurately established. As always in the early stages of any epidemic, the need to test serious cases first has meant that the population with asymptomatic or mild sub-clinical symptoms has not yet been analyzed. Using counts of fatalities and previously estimated parameters for the progress of the disease, we give statistical estimates of the infected population. The doubling time is a crucial unknown input parameter that affects these estimates and may differ strongly from one geographical location to another. We suggest a method for estimating epidemiological parameters for COVID-19 in different locations within a few days, adding to the information required for gauging the success of public health interventions.
[ { "created": "Tue, 7 Apr 2020 10:11:03 GMT", "version": "v1" } ]
2020-04-09
[ [ "Gupta", "Sourendu", "" ], [ "Shankar", "R.", "" ] ]
In India the COVID-19 infected population has not yet been accurately established. As always in the early stages of any epidemic, the need to test serious cases first has meant that the population with asymptomatic or mild sub-clinical symptoms has not yet been analyzed. Using counts of fatalities and previously estimated parameters for the progress of the disease, we give statistical estimates of the infected population. The doubling time is a crucial unknown input parameter that affects these estimates and may differ strongly from one geographical location to another. We suggest a method for estimating epidemiological parameters for COVID-19 in different locations within a few days, adding to the information required for gauging the success of public health interventions.
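The idea sketched in the abstract above can be written as a back-of-envelope calculation: deaths observed today reflect infections that occurred roughly one death-delay earlier, so dividing by an infection fatality ratio and projecting forward by the doubling time gives a crude estimate of current infections. All parameter values below are hypothetical placeholders, not the paper's estimates:

```python
# Crude fatality-based infection estimate:
#   infections_now ~ (deaths / IFR) * 2 ** (death_delay / doubling_time)
def estimate_infections(deaths, ifr, death_delay_days, doubling_days):
    infections_at_delay = deaths / ifr              # infections ~delay days ago
    growth = 2 ** (death_delay_days / doubling_days)
    return infections_at_delay * growth

# 10 deaths, 1% IFR, 14-day delay, 7-day doubling time (all illustrative).
est = estimate_infections(deaths=10, ifr=0.01, death_delay_days=14, doubling_days=7)
```

The strong sensitivity to `doubling_days` in the exponent is exactly why the paper treats the doubling time as the crucial unknown parameter.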
1803.04152
Lucas Patty
C.H. Lucas Patty and David A. Luo and Frans Snik and Freek Ariese and Wybren Jan Buma and Inge Loes ten Kate and Rob J.M. van Spanning and William B. Sparks and Thomas A. Germer and Győző Garab and Michael W. Kudenov
Imaging linear and circular polarization features in leaves with complete Mueller matrix polarimetry
40 pages, 13 figures
null
10.1016/j.bbagen.2018.03.005
null
q-bio.BM physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Spectropolarimetry of intact plant leaves makes it possible to probe the molecular architecture of vegetation photosynthesis in a non-invasive and non-destructive way and, as such, can offer a wealth of physiological information. In addition to the molecular signals due to the photosynthetic machinery, the cell structure and its arrangement within a leaf can create and modify polarization signals. Using Mueller matrix polarimetry with rotating retarder modulation, we have visualized spatial variations in polarization in transmission around the chlorophyll a absorbance band from 650 nm to 710 nm. We show linear and circular polarization measurements of maple leaves and cultivated maize leaves and discuss the corresponding Mueller matrices and the Mueller matrix decompositions, which show distinct features in diattenuation, polarizance, retardance and depolarization. Importantly, while normal leaf tissue shows a typical split signal with both a negative and a positive peak in the induced fractional circular polarization and circular dichroism, the signals close to the veins only display a negative band. The results are similar to the negative band as reported earlier for single macrodomains. We discuss the possible role of the chloroplast orientation around the veins as a cause of this phenomenon. Systematic artefacts are ruled out as three independent measurements by different instruments gave similar results. These results provide better insight into circular polarization measurements on whole leaves and options for vegetation remote sensing using circular polarization.
[ { "created": "Mon, 12 Mar 2018 08:33:53 GMT", "version": "v1" } ]
2018-03-13
[ [ "Patty", "C. H. Lucas", "" ], [ "Luo", "David A.", "" ], [ "Snik", "Frans", "" ], [ "Ariese", "Freek", "" ], [ "Buma", "Wybren Jan", "" ], [ "Kate", "Inge Loes ten", "" ], [ "van Spanning", "Rob J. M.", "" ], [ "Sparks", "William B.", "" ], [ "Germer", "Thomas A.", "" ], [ "Garab", "Győző", "" ], [ "Kudenov", "Michael W.", "" ] ]
Spectropolarimetry of intact plant leaves makes it possible to probe the molecular architecture of vegetation photosynthesis in a non-invasive and non-destructive way and, as such, can offer a wealth of physiological information. In addition to the molecular signals due to the photosynthetic machinery, the cell structure and its arrangement within a leaf can create and modify polarization signals. Using Mueller matrix polarimetry with rotating retarder modulation, we have visualized spatial variations in polarization in transmission around the chlorophyll a absorbance band from 650 nm to 710 nm. We show linear and circular polarization measurements of maple leaves and cultivated maize leaves and discuss the corresponding Mueller matrices and the Mueller matrix decompositions, which show distinct features in diattenuation, polarizance, retardance and depolarization. Importantly, while normal leaf tissue shows a typical split signal with both a negative and a positive peak in the induced fractional circular polarization and circular dichroism, the signals close to the veins only display a negative band. The results are similar to the negative band as reported earlier for single macrodomains. We discuss the possible role of the chloroplast orientation around the veins as a cause of this phenomenon. Systematic artefacts are ruled out as three independent measurements by different instruments gave similar results. These results provide better insight into circular polarization measurements on whole leaves and options for vegetation remote sensing using circular polarization.
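For readers unfamiliar with the Mueller formalism used in the abstract above: a 4x4 Mueller matrix maps an input Stokes vector [I, Q, U, V] to an output one. The textbook example below uses the matrix of an ideal horizontal linear polarizer; it is purely illustrative, since the measured leaf matrices in the paper are far from this ideal form:

```python
# Stokes-Mueller bookkeeping: output Stokes vector = Mueller matrix x input.
def apply_mueller(matrix, stokes):
    return [sum(m * s for m, s in zip(row, stokes)) for row in matrix]

# Mueller matrix of an ideal horizontal linear polarizer.
H_POLARIZER = [
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]

unpolarized = [1.0, 0.0, 0.0, 0.0]   # intensity 1, no net polarization
out = apply_mueller(H_POLARIZER, unpolarized)
# Half the intensity survives (I = 0.5) and the output is fully linearly
# polarized (Q = I), i.e. degree of polarization 1.
```

Quantities such as diattenuation, polarizance and depolarization in the abstract are read off from (decompositions of) exactly this kind of matrix.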
1807.10374
Leslie Valiant
Leslie Valiant
Towards Identifying the Systems-Level Primitives of Cortex by In-Circuit Testing
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The hypothesis considered here is that cognition is based on a small set of systems-level computational primitives that are defined at a level higher than single neurons. It is pointed out that for one such set of primitives, whose quantitative effectiveness has been demonstrated by analysis and computer simulation, emerging technologies for stimulation and recording are making it possible to test directly whether cortex is capable of performing them.
[ { "created": "Thu, 26 Jul 2018 21:37:02 GMT", "version": "v1" } ]
2018-07-30
[ [ "Valiant", "Leslie", "" ] ]
The hypothesis considered here is that cognition is based on a small set of systems-level computational primitives that are defined at a level higher than single neurons. It is pointed out that for one such set of primitives, whose quantitative effectiveness has been demonstrated by analysis and computer simulation, emerging technologies for stimulation and recording are making it possible to test directly whether cortex is capable of performing them.
2311.09470
Maxime Clenet
Maxime Clenet, François Massol, Jamal Najim
Impact of a block structure on the Lotka-Volterra model
36 pages, 12 figures
null
null
null
q-bio.PE math.PR
http://creativecommons.org/licenses/by/4.0/
The Lotka-Volterra (LV) model is a simple, robust, and versatile model used to describe large interacting systems such as food webs or microbiomes. The model consists of $n$ coupled differential equations linking the abundances of $n$ different species. We consider a large random interaction matrix with independent entries and a block variance profile. The $i$th diagonal block represents the intra-community interaction in community $i$, while the off-diagonal blocks represent the inter-community interactions. The variance remains constant within each block, but may vary across blocks. We investigate the important case of two communities of interacting species and study how interactions affect their respective equilibria. We also characterize the equilibrium in terms of feasibility (i.e., whether there exists an equilibrium with all species at non-zero abundances) and of an attrition phenomenon (some species may vanish) within each community. Information about the general case of $b$ communities ($b > 2$) is provided in the appendix.
[ { "created": "Thu, 16 Nov 2023 00:20:52 GMT", "version": "v1" }, { "created": "Tue, 25 Jun 2024 22:30:29 GMT", "version": "v2" } ]
2024-06-27
[ [ "Clenet", "Maxime", "" ], [ "Massol", "François", "" ], [ "Najim", "Jamal", "" ] ]
The Lotka-Volterra (LV) model is a simple, robust, and versatile model used to describe large interacting systems such as food webs or microbiomes. The model consists of $n$ coupled differential equations linking the abundances of $n$ different species. We consider a large random interaction matrix with independent entries and a block variance profile. The $i$th diagonal block represents the intra-community interaction in community $i$, while the off-diagonal blocks represent the inter-community interactions. The variance remains constant within each block, but may vary across blocks. We investigate the important case of two communities of interacting species and study how interactions affect their respective equilibria. We also characterize the equilibrium in terms of feasibility (i.e., whether there exists an equilibrium with all species at non-zero abundances) and of an attrition phenomenon (some species may vanish) within each community. Information about the general case of $b$ communities ($b > 2$) is provided in the appendix.
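A minimal two-species instance of the LV system described in the abstract above (illustrative only; the paper studies $n$ species with a random block-structured interaction matrix). With logistic self-regulation and weak mutual competition, the system has a feasible equilibrium that forward-Euler integration approaches:

```python
# dx_i/dt = x_i * (1 - x_i + sum_{j != i} a[i][j] * x_j)
# For a[0][1] = a[1][0] = -0.5 the feasible equilibrium is x* = (2/3, 2/3):
# both species coexist at non-zero abundance (no attrition in this regime).
def lv_step(x, a, dt):
    n = len(x)
    return [
        x[i] * (1.0 + dt * (1.0 - x[i] + sum(a[i][j] * x[j]
                                             for j in range(n) if j != i)))
        for i in range(n)
    ]

a = [[0.0, -0.5], [-0.5, 0.0]]   # symmetric competitive interactions
x = [0.1, 0.1]                   # small initial abundances
for _ in range(5000):            # dt = 0.01, so total time t = 50
    x = lv_step(x, a, dt=0.01)
```

Stronger competition (entries below -1) would instead drive one species toward zero, which is the attrition phenomenon the paper quantifies for random block-structured matrices.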
2005.11254
Mahmudul Hasan
Md Sorwer Alam Parvez, Kazi Faizul Azim, Abdus Shukur Imran, Topu Raihan, Aklima Begum, Tasfia Saiyara Shammi, Sabbir Howlader, Farhana Rumzum Bhuiyan, Mahmudul Hasan
Virtual Screening of Plant Metabolites against Main protease, RNA-dependent RNA polymerase and Spike protein of SARS-CoV-2: Therapeutics option of COVID-19
29 pages, 5 figures
null
null
null
q-bio.GN q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
COVID-19, a serious respiratory complication caused by SARS-CoV-2, has become a global threat to the human healthcare system. The present study evaluated 117 approved plant-derived therapeutics against the main protease protein (MPP), RNA-dependent RNA polymerase (RdRp), and spike (S) protein of SARS-CoV-2, including drug-surface analysis, using molecular docking in a drug-repurposing approach. The molecular interaction study revealed that Rifampin (-16.3 kcal/mol) was the topmost inhibitor of the MPP, while Azobechalcone was the most potent plant therapeutic for blocking the RdRp (-15.9 kcal/mol) and S (-14.4 kcal/mol) proteins of SARS-CoV-2. Comparative analysis of all docking results identified Azobechalcone, Rifampin, Isolophirachalcone, Tetrandrine, and Fangchinoline as the most promising inhibitory plant compounds for targeting the key proteins of SARS-CoV-2. Amino acid positions H41, C145, and M165 of the MPP played crucial roles in the drug-surface interaction, whereas F368, L371, L372, A375, W509, L514, and Y515 were pivotal for the RdRp. In addition, the drug-interaction surfaces of the S protein showed similar patterns with all of its strongest inhibitors. ADME analysis further strengthened the case for the screened plant therapeutics as potent drug candidates against SARS-CoV-2 with high drug-likeness.
[ { "created": "Fri, 22 May 2020 16:03:49 GMT", "version": "v1" } ]
2020-05-25
[ [ "Parvez", "Md Sorwer Alam", "" ], [ "Azim", "Kazi Faizul", "" ], [ "Imran", "Abdus Shukur", "" ], [ "Raihan", "Topu", "" ], [ "Begum", "Aklima", "" ], [ "Shammi", "Tasfia Saiyara", "" ], [ "Howlader", "Sabbir", "" ], [ "Bhuiyan", "Farhana Rumzum", "" ], [ "Hasan", "Mahmudul", "" ] ]
COVID-19, a serious respiratory complication caused by SARS-CoV-2, has become a global threat to the human healthcare system. The present study evaluated 117 approved plant-derived therapeutics against the main protease protein (MPP), RNA-dependent RNA polymerase (RdRp), and spike (S) protein of SARS-CoV-2, including drug-surface analysis, using molecular docking in a drug-repurposing approach. The molecular interaction study revealed that Rifampin (-16.3 kcal/mol) was the topmost inhibitor of the MPP, while Azobechalcone was the most potent plant therapeutic for blocking the RdRp (-15.9 kcal/mol) and S (-14.4 kcal/mol) proteins of SARS-CoV-2. Comparative analysis of all docking results identified Azobechalcone, Rifampin, Isolophirachalcone, Tetrandrine, and Fangchinoline as the most promising inhibitory plant compounds for targeting the key proteins of SARS-CoV-2. Amino acid positions H41, C145, and M165 of the MPP played crucial roles in the drug-surface interaction, whereas F368, L371, L372, A375, W509, L514, and Y515 were pivotal for the RdRp. In addition, the drug-interaction surfaces of the S protein showed similar patterns with all of its strongest inhibitors. ADME analysis further strengthened the case for the screened plant therapeutics as potent drug candidates against SARS-CoV-2 with high drug-likeness.
2302.13283
Giuseppe Tronci
Charles Brooker, Giuseppe Tronci
A collagen-based theranostic wound dressing with visual, long-lasting infection detection capability
21 pages, 9 figures, 1 table; accepted in International Journal of Biological Macromolecules
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Continuous wound monitoring is one strategy to minimise infection severity and inform prompt variations in therapeutic care following infection diagnosis. However, integration of this functionality in therapeutic wound dressings is still challenging. We hypothesised that a theranostic dressing could be realised by integrating a collagen-based wound contact layer with previously demonstrated wound healing capability, and a halochromic dye, i.e. bromothymol blue (BTB), undergoing colour change following infection-associated pH changes (pH: 5-6 --> >7). Two different BTB integration strategies, i.e. electrospinning and drop-casting, were pursued to introduce long-lasting visual infection detection capability through retention of BTB within the dressing. Both systems had an average BTB loading efficiency of 99 wt.% and displayed a colour change within one minute of contact with simulated wound fluid. Drop-cast samples retained up to 85 wt.% of BTB after 96 hours in a near-infected wound environment, in contrast to the fibre-bearing prototypes, which released over 80 wt.% of BTB over the same time period. An increase in collagen denaturation temperature (DSC) and red shifts (ATR-FTIR) suggests the formation of secondary interactions between the collagen-based hydrogel and the BTB, which are believed to account for the long-lasting dye confinement and durable dressing colour change. Given the high L929 fibroblast viability in drop-cast sample extracts (92%, 7 days), the presented multiscale design is simple, cell- and regulatory-friendly, and compliant with industrial scale-up. This design, therefore, offers a new platform for the development of theranostic dressings enabling accelerated wound healing and prompt infection diagnosis.
[ { "created": "Sun, 26 Feb 2023 10:13:23 GMT", "version": "v1" } ]
2023-02-28
[ [ "Brooker", "Charles", "" ], [ "Tronci", "Giuseppe", "" ] ]
Continuous wound monitoring is one strategy to minimise infection severity and inform prompt variations in therapeutic care following infection diagnosis. However, integration of this functionality in therapeutic wound dressings is still challenging. We hypothesised that a theranostic dressing could be realised by integrating a collagen-based wound contact layer with previously demonstrated wound healing capability, and a halochromic dye, i.e. bromothymol blue (BTB), undergoing colour change following infection-associated pH changes (pH: 5-6 --> >7). Two different BTB integration strategies, i.e. electrospinning and drop-casting, were pursued to introduce long-lasting visual infection detection capability through retention of BTB within the dressing. Both systems had an average BTB loading efficiency of 99 wt.% and displayed a colour change within one minute of contact with simulated wound fluid. Drop-cast samples retained up to 85 wt.% of BTB after 96 hours in a near-infected wound environment, in contrast to the fibre-bearing prototypes, which released over 80 wt.% of BTB over the same time period. An increase in collagen denaturation temperature (DSC) and red shifts (ATR-FTIR) suggests the formation of secondary interactions between the collagen-based hydrogel and the BTB, which are believed to account for the long-lasting dye confinement and durable dressing colour change. Given the high L929 fibroblast viability in drop-cast sample extracts (92%, 7 days), the presented multiscale design is simple, cell- and regulatory-friendly, and compliant with industrial scale-up. This design, therefore, offers a new platform for the development of theranostic dressings enabling accelerated wound healing and prompt infection diagnosis.
1702.01436
Mareike Fischer
Lina Herbst and Mareike Fischer
Ancestral sequence reconstruction with Maximum Parsimony
7 figures, 18 pages
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say $a$, at a particular site in order for MP to unambiguously return $a$ as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
[ { "created": "Sun, 5 Feb 2017 18:11:26 GMT", "version": "v1" } ]
2017-02-07
[ [ "Herbst", "Lina", "" ], [ "Fischer", "Mareike", "" ] ]
One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say $a$, at a particular site in order for MP to unambiguously return $a$ as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
1510.06621
Macha Nikolski
Hayssam Soueidan and Macha Nikolski
Machine learning for metagenomics: methods and tools
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Owing to the complexity and variability of metagenomic studies, modern machine learning approaches have seen increased usage to answer a variety of questions encompassing the full range of metagenomic NGS data analysis. We review here the contribution of machine learning techniques for the field of metagenomics, by presenting known successful approaches in a unified framework. This review focuses on five important metagenomic problems: OTU-clustering, binning, taxonomic profiling and assignment, comparative metagenomics and gene prediction. For each of these problems, we identify the most prominent methods, summarize the machine learning approaches used and put them into perspective of similar methods. We conclude our review looking further ahead at the challenge posed by the analysis of interactions within microbial communities and different environments, in a field one could call "integrative metagenomics".
[ { "created": "Thu, 22 Oct 2015 13:31:40 GMT", "version": "v1" }, { "created": "Tue, 8 Mar 2016 15:18:38 GMT", "version": "v2" } ]
2016-03-09
[ [ "Soueidan", "Hayssam", "" ], [ "Nikolski", "Macha", "" ] ]
Owing to the complexity and variability of metagenomic studies, modern machine learning approaches have seen increased usage to answer a variety of questions encompassing the full range of metagenomic NGS data analysis. We review here the contribution of machine learning techniques for the field of metagenomics, by presenting known successful approaches in a unified framework. This review focuses on five important metagenomic problems: OTU-clustering, binning, taxonomic profiling and assignment, comparative metagenomics and gene prediction. For each of these problems, we identify the most prominent methods, summarize the machine learning approaches used and put them into perspective of similar methods. We conclude our review looking further ahead at the challenge posed by the analysis of interactions within microbial communities and different environments, in a field one could call "integrative metagenomics".
2306.08752
Ton Ta
Yujie Gao, Malay Banerjee, Ton Viet Ta
Dynamics of infectious diseases in predator-prey populations: a stochastic model, sustainability, and invariant measure
null
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces an innovative model for infectious diseases in predator-prey populations. We not only prove the existence of global non-negative solutions but also establish essential criteria for the system's decline and sustainability. Furthermore, we demonstrate the presence of a Borel invariant measure, adding a new dimension to our understanding of the system. To illustrate the practical implications of our findings, we present numerical results. With our model's comprehensive approach, we aim to provide valuable insights into the dynamics of infectious diseases and their impact on predator-prey populations.
[ { "created": "Wed, 14 Jun 2023 21:23:52 GMT", "version": "v1" } ]
2023-06-16
[ [ "Gao", "Yujie", "" ], [ "Banerjee", "Malay", "" ], [ "Ta", "Ton Viet", "" ] ]
This paper introduces an innovative model for infectious diseases in predator-prey populations. We not only prove the existence of global non-negative solutions but also establish essential criteria for the system's decline and sustainability. Furthermore, we demonstrate the presence of a Borel invariant measure, adding a new dimension to our understanding of the system. To illustrate the practical implications of our findings, we present numerical results. With our model's comprehensive approach, we aim to provide valuable insights into the dynamics of infectious diseases and their impact on predator-prey populations.
q-bio/0508006
Weidong Huang
Weidong Huang, Chundu Wu, Bingjia Xiao, Weidong Xia
Computational Fluid Dynamic Approach for Biological System Modeling
13 pages; 5 figures
null
null
null
q-bio.QM q-bio.PE
null
Various biological system models have been proposed in systems biology, which are based on the complex reaction kinetics of various components. These models are not practical because we lack kinetic information. In this paper, it is found that the enzymatic and multi-order reaction rates are often controlled by the transport of the reactants in biological systems. A Computational Fluid Dynamic (CFD) approach, which is based on transport of the components and kinetics of biological reactions, is introduced for biological system modeling. We apply this approach to a biological wastewater treatment system for the study of the metabolism of organic carbon substrates and the microbial population. The results show that the CFD model coupled with reaction kinetics is more accurate and more feasible than kinetic models for biological system modeling.
[ { "created": "Tue, 2 Aug 2005 16:36:34 GMT", "version": "v1" } ]
2011-02-19
[ [ "Huang", "Weidong", "" ], [ "Wu", "Chundu", "" ], [ "Xiao", "Bingjia", "" ], [ "Xia", "Weidong", "" ] ]
Various biological system models have been proposed in systems biology, which are based on the complex reaction kinetics of various components. These models are not practical because we lack kinetic information. In this paper, it is found that the enzymatic and multi-order reaction rates are often controlled by the transport of the reactants in biological systems. A Computational Fluid Dynamic (CFD) approach, which is based on transport of the components and kinetics of biological reactions, is introduced for biological system modeling. We apply this approach to a biological wastewater treatment system for the study of the metabolism of organic carbon substrates and the microbial population. The results show that the CFD model coupled with reaction kinetics is more accurate and more feasible than kinetic models for biological system modeling.
2109.07645
Jos\'e Luis P\'erez J.L. P\'erez
Mar\'ia Emilia Caballero, Adri\'an Gonz\'alez Casanova, Jos\'e Luis P\'erez
Two-type branching processes with immigration, and the structured coalescents
null
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a population constituted by two types of individuals; each of them can produce offspring in two different islands (as a particular case the islands can be interpreted as active or dormant individuals). We model the evolution of the population of each type using a two-type Feller diffusion with immigration, and we study the frequency of one of the types, in each island, when the total population size in each island is forced to be constant at a dense set of times. This leads to the solution of a SDE which we call the asymmetric two-island frequency process. We derive properties of this process and obtain a large population limit when the total size of each island tends to infinity. Additionally, we compute the fluctuations of the process around its deterministic limit. We establish conditions under which the asymmetric two-island frequency process has a moment dual. The dual is a continuous-time two-dimensional Markov chain that can be interpreted in terms of mutation, branching, pairwise branching, coalescence, and a novel mixed selection-migration term. Also, we conduct a stability analysis of the limiting deterministic dynamical system and present some numerical results to study fixation and a new form of balancing selection. When restricting to the seedbank model, we observe that some combinations of the parameters lead to balancing selection. Besides finding yet another way in which genetic reservoirs increase the genetic variability, we find that if a population that sustains a seedbank competes with one that does not, the seed producers will have a selective advantage if they reproduce faster, but will not have a selective disadvantage if they reproduce slower: their worst case scenario is balancing selection.
[ { "created": "Thu, 16 Sep 2021 00:55:44 GMT", "version": "v1" }, { "created": "Thu, 2 May 2024 02:11:07 GMT", "version": "v2" } ]
2024-05-03
[ [ "Caballero", "María Emilia", "" ], [ "Casanova", "Adrián González", "" ], [ "Pérez", "José Luis", "" ] ]
We consider a population constituted by two types of individuals; each of them can produce offspring in two different islands (as a particular case the islands can be interpreted as active or dormant individuals). We model the evolution of the population of each type using a two-type Feller diffusion with immigration, and we study the frequency of one of the types, in each island, when the total population size in each island is forced to be constant at a dense set of times. This leads to the solution of a SDE which we call the asymmetric two-island frequency process. We derive properties of this process and obtain a large population limit when the total size of each island tends to infinity. Additionally, we compute the fluctuations of the process around its deterministic limit. We establish conditions under which the asymmetric two-island frequency process has a moment dual. The dual is a continuous-time two-dimensional Markov chain that can be interpreted in terms of mutation, branching, pairwise branching, coalescence, and a novel mixed selection-migration term. Also, we conduct a stability analysis of the limiting deterministic dynamical system and present some numerical results to study fixation and a new form of balancing selection. When restricting to the seedbank model, we observe that some combinations of the parameters lead to balancing selection. Besides finding yet another way in which genetic reservoirs increase the genetic variability, we find that if a population that sustains a seedbank competes with one that does not, the seed producers will have a selective advantage if they reproduce faster, but will not have a selective disadvantage if they reproduce slower: their worst case scenario is balancing selection.
q-bio/0503010
Petter Holme
Petter Holme and Mikael Huss
Role-similarity based functional prediction in networked systems: Application to the yeast proteome
null
J. Roy. Soc. Interface 2, 327 (2005)
10.1098/rsif.2005.0046
null
q-bio.MN cond-mat.other q-bio.OT
null
We propose a general method to predict functions of vertices where: 1. The wiring of the network is somehow related to the vertex functionality. 2. A fraction of the vertices are functionally classified. The method is influenced by role-similarity measures of social network analysis. The two versions of our prediction scheme are tested on model networks where the functions of the vertices are designed to match their network surroundings. We also apply these methods to the proteome of the yeast Saccharomyces cerevisiae and find the results compatible with more specialized methods.
[ { "created": "Sun, 6 Mar 2005 18:52:49 GMT", "version": "v1" } ]
2007-05-23
[ [ "Holme", "Petter", "" ], [ "Huss", "Mikael", "" ] ]
We propose a general method to predict functions of vertices where: 1. The wiring of the network is somehow related to the vertex functionality. 2. A fraction of the vertices are functionally classified. The method is influenced by role-similarity measures of social network analysis. The two versions of our prediction scheme are tested on model networks where the functions of the vertices are designed to match their network surroundings. We also apply these methods to the proteome of the yeast Saccharomyces cerevisiae and find the results compatible with more specialized methods.
1101.6054
Fernando Rozenblit
Fernando Rozenblit and Mauro Copelli
Collective oscillations of excitable elements: order parameters, bistability and the role of stochasticity
19 pages, 7 figures
J. Stat. Mech. (2011) P01012
10.1088/1742-5468/2011/01/P01012
null
q-bio.NC cond-mat.dis-nn cond-mat.stat-mech nlin.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the effects of a probabilistic refractory period in the collective behavior of coupled discrete-time excitable cells (SIRS-like cellular automata). Using mean-field analysis and simulations, we show that a synchronized phase with stable collective oscillations exists even with non-deterministic refractory periods. Moreover, further increasing the coupling strength leads to a reentrant transition, where the synchronized phase loses stability. In an intermediate regime, we also observe bistability (and consequently hysteresis) between a synchronized phase and an active but incoherent phase without oscillations. The onset of the oscillations appears in the mean-field equations as a Neimark-Sacker bifurcation, the nature of which (i.e. super- or subcritical) is determined by the first Lyapunov coefficient. This allows us to determine the borders of the oscillating and of the bistable regions. The mean-field prediction thus obtained agrees quantitatively with simulations of complete graphs and, for random graphs, qualitatively predicts the overall structure of the phase diagram. The latter can be obtained from simulations by defining an order parameter q suited for detecting collective oscillations of excitable elements. We briefly review other commonly used order parameters and show (via data collapse) that q satisfies the expected finite size scaling relations.
[ { "created": "Mon, 31 Jan 2011 19:28:46 GMT", "version": "v1" } ]
2015-03-18
[ [ "Rozenblit", "Fernando", "" ], [ "Copelli", "Mauro", "" ] ]
We study the effects of a probabilistic refractory period in the collective behavior of coupled discrete-time excitable cells (SIRS-like cellular automata). Using mean-field analysis and simulations, we show that a synchronized phase with stable collective oscillations exists even with non-deterministic refractory periods. Moreover, further increasing the coupling strength leads to a reentrant transition, where the synchronized phase loses stability. In an intermediate regime, we also observe bistability (and consequently hysteresis) between a synchronized phase and an active but incoherent phase without oscillations. The onset of the oscillations appears in the mean-field equations as a Neimark-Sacker bifurcation, the nature of which (i.e. super- or subcritical) is determined by the first Lyapunov coefficient. This allows us to determine the borders of the oscillating and of the bistable regions. The mean-field prediction thus obtained agrees quantitatively with simulations of complete graphs and, for random graphs, qualitatively predicts the overall structure of the phase diagram. The latter can be obtained from simulations by defining an order parameter q suited for detecting collective oscillations of excitable elements. We briefly review other commonly used order parameters and show (via data collapse) that q satisfies the expected finite size scaling relations.
1905.02621
Bertrand Roehner
Alex Bois, Eduardo M. Garcia-Roger, Elim Hong, Stefan Hutzler, Ali Irannezhad, Abdelkrim Mannioui, Peter Richmond, Bertrand M. Roehner, Stephane Tronche
Infant mortality across species. A global probe of congenital abnormalities
31 pages, 16 figures
null
10.1016/j.physa.2019.122308
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Infant mortality, by which we understand the postnatal stage during which mortality is declining, is a manifestation and embodiment of congenital abnormalities. Severe defects will translate into death occurring shortly after birth whereas slighter anomalies may contribute to death much later, possibly only in adult age. While for many species birth defects would be nearly impossible to identify, infant mortality provides a convenient global assessment. In the present paper we examine a broad range of species from mammals to fish to gastropods to insects. One of the objectives of our comparative analysis is to test a conjecture suggested by reliability engineering according to which the frequency of defects tends to increase together with the complexity of organisms. For that purpose, we set up experiments specially designed to measure infant mortality. In particular, we study two species commonly used as model species in biological laboratories, namely the zebrafish Danio rerio and the rotifer Brachionus plicatilis. For the second, whose number of cells is about a hundred times smaller than for the first, we find as expected that the screening effect of the infant phase is of much smaller amplitude. Our analysis also raises a number of challenging questions for which further investigation is necessary. For instance, why is the infant death rate of beetles and mollusks falling off exponentially rather than as a power law as observed for most other species? A possible research agenda is discussed in the conclusion of the paper.
[ { "created": "Sun, 5 May 2019 14:52:11 GMT", "version": "v1" } ]
2019-10-02
[ [ "Bois", "Alex", "" ], [ "Garcia-Roger", "Eduardo M.", "" ], [ "Hong", "Elim", "" ], [ "Hutzler", "Stefan", "" ], [ "Irannezhad", "Ali", "" ], [ "Mannioui", "Abdelkrim", "" ], [ "Richmond", "Peter", "" ], [ "Roehner", "Bertrand M.", "" ], [ "Tronche", "Stephane", "" ] ]
Infant mortality, by which we understand the postnatal stage during which mortality is declining, is a manifestation and embodiment of congenital abnormalities. Severe defects will translate into death occurring shortly after birth whereas slighter anomalies may contribute to death much later, possibly only in adult age. While for many species birth defects would be nearly impossible to identify, infant mortality provides a convenient global assessment. In the present paper we examine a broad range of species from mammals to fish to gastropods to insects. One of the objectives of our comparative analysis is to test a conjecture suggested by reliability engineering according to which the frequency of defects tends to increase together with the complexity of organisms. For that purpose, we set up experiments specially designed to measure infant mortality. In particular, we study two species commonly used as model species in biological laboratories, namely the zebrafish Danio rerio and the rotifer Brachionus plicatilis. For the second, whose number of cells is about a hundred times smaller than for the first, we find as expected that the screening effect of the infant phase is of much smaller amplitude. Our analysis also raises a number of challenging questions for which further investigation is necessary. For instance, why is the infant death rate of beetles and mollusks falling off exponentially rather than as a power law as observed for most other species? A possible research agenda is discussed in the conclusion of the paper.
2004.14331
Shaoxiong Sun
Shaoxiong Sun, Amos Folarin, Yatharth Ranjan, Zulqarnain Rashid, Pauline Conde, Callum Stewart, Nicholas Cummins, Faith Matcham, Gloria Dalla Costa, Sara Simblett, Letizia Leocani, Per Soelberg S{\o}rensen, Mathias Buron, Ana Isabel Guerrero, Ana Zabalza, Brenda WJH Penninx, Femke Lamers, Sara Siddi, Josep Maria Haro, Inez Myin-Germeys, Aki Rintala, Til Wykes, Vaibhav A. Narayan, Giancarlo Comi, Matthew Hotopf, Richard JB Dobson (on behalf of the RADAR-CNS consortium)
Using smartphones and wearable devices to monitor behavioural changes during COVID-19
null
null
10.2196/19992
null
q-bio.QM cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We aimed to explore the utility of the recently developed open-source mobile health platform RADAR-base as a toolbox to rapidly test the effect and response to NPIs aimed at limiting the spread of COVID-19. We analysed data extracted from smartphone and wearable devices and managed by the RADAR-base from 1062 participants recruited in Italy, Spain, Denmark, the UK, and the Netherlands. We derived nine features on a daily basis including time spent at home, maximum distance travelled from home, maximum number of Bluetooth-enabled nearby devices (as a proxy for physical distancing), step count, average heart rate, sleep duration, bedtime, phone unlock duration, and social app use duration. We performed Kruskal-Wallis tests followed by post-hoc Dunns tests to assess differences in these features among baseline, pre-, and during-lockdown periods. We also studied behavioural differences by age, gender, body mass index (BMI), and educational background. We were able to quantify expected changes in time spent at home, distance travelled, and the number of nearby Bluetooth-enabled devices between pre- and during-lockdown periods. We saw reduced sociality as measured through mobility features, and increased virtual sociality through phone usage. People were more active on their phones, spending more time using social media apps, particularly around major news events. Furthermore, participants had lower heart rate, went to bed later, and slept more. We also found that young people had longer homestay than older people during lockdown and fewer daily steps. Although there was no significant difference between the high and low BMI groups in time spent at home, the low BMI group walked more. RADAR-base can be used to rapidly quantify and provide a holistic view of behavioural changes in response to public health interventions as a result of infectious outbreaks such as COVID-19.
[ { "created": "Wed, 29 Apr 2020 16:58:39 GMT", "version": "v1" }, { "created": "Fri, 1 May 2020 10:19:12 GMT", "version": "v2" }, { "created": "Wed, 22 Jul 2020 13:52:35 GMT", "version": "v3" } ]
2020-10-05
[ [ "Sun", "Shaoxiong", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Folarin", "Amos", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Ranjan", "Yatharth", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Rashid", "Zulqarnain", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Conde", "Pauline", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Stewart", "Callum", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Cummins", "Nicholas", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Matcham", "Faith", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Costa", "Gloria Dalla", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Simblett", "Sara", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Leocani", "Letizia", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Sørensen", "Per Soelberg", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Buron", "Mathias", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Guerrero", "Ana Isabel", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Zabalza", "Ana", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Penninx", "Brenda WJH", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Lamers", "Femke", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Siddi", "Sara", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Haro", "Josep Maria", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Myin-Germeys", "Inez", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Rintala", "Aki", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Wykes", "Til", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Narayan", "Vaibhav A.", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Comi", "Giancarlo", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Hotopf", "Matthew", "", "on\n behalf of the RADAR-CNS consortium" ], [ "Dobson", "Richard JB", "", "on\n behalf of the RADAR-CNS consortium" ] ]
We aimed to explore the utility of the recently developed open-source mobile health platform RADAR-base as a toolbox to rapidly test the effect and response to NPIs aimed at limiting the spread of COVID-19. We analysed data extracted from smartphone and wearable devices and managed by the RADAR-base from 1062 participants recruited in Italy, Spain, Denmark, the UK, and the Netherlands. We derived nine features on a daily basis including time spent at home, maximum distance travelled from home, maximum number of Bluetooth-enabled nearby devices (as a proxy for physical distancing), step count, average heart rate, sleep duration, bedtime, phone unlock duration, and social app use duration. We performed Kruskal-Wallis tests followed by post-hoc Dunns tests to assess differences in these features among baseline, pre-, and during-lockdown periods. We also studied behavioural differences by age, gender, body mass index (BMI), and educational background. We were able to quantify expected changes in time spent at home, distance travelled, and the number of nearby Bluetooth-enabled devices between pre- and during-lockdown periods. We saw reduced sociality as measured through mobility features, and increased virtual sociality through phone usage. People were more active on their phones, spending more time using social media apps, particularly around major news events. Furthermore, participants had lower heart rate, went to bed later, and slept more. We also found that young people had longer homestay than older people during lockdown and fewer daily steps. Although there was no significant difference between the high and low BMI groups in time spent at home, the low BMI group walked more. RADAR-base can be used to rapidly quantify and provide a holistic view of behavioural changes in response to public health interventions as a result of infectious outbreaks such as COVID-19.
0901.0181
Osamu Narikiyo
Tatsuro Yamashita and Osamu Narikiyo
Physical Model for the Evolution of the Genetic Code
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a physical model to describe the mechanisms of two major scenarios of genetic code evolution, the codon capture and ambiguous intermediate scenarios, in a consistent manner. We sketch the lowest-dimensional version of our model, a minimal model, by introducing a physical quantity, the codon level. On the basis of the hierarchical structure of the codon levels, the two scenarios are classified into two different routes of the evolutionary process. In the case of the ambiguous intermediate scenario, we perform a simulation implementing cost selection of amino acids and confirm a rapid transition of the code change. Such rapidity mitigates the discomfort of the non-unique translation of the code at the intermediate state, which is the weakness of this scenario.
[ { "created": "Thu, 1 Jan 2009 15:42:19 GMT", "version": "v1" } ]
2009-09-30
[ [ "Yamashita", "Tatsuro", "" ], [ "Narikiyo", "Osamu", "" ] ]
We propose a physical model to describe the mechanisms of two major scenarios of genetic code evolution, the codon capture and ambiguous intermediate scenarios, in a consistent manner. We sketch the lowest-dimensional version of our model, a minimal model, by introducing a physical quantity, the codon level. On the basis of the hierarchical structure of the codon levels, the two scenarios are classified into two different routes of the evolutionary process. In the case of the ambiguous intermediate scenario, we perform a simulation implementing cost selection of amino acids and confirm a rapid transition of the code change. Such rapidity mitigates the discomfort of the non-unique translation of the code at the intermediate state, which is the weakness of this scenario.
2006.11988
Joseph Paul Cohen
Joseph Paul Cohen and Paul Morrison and Lan Dao and Karsten Roth and Tim Q Duong and Marzyeh Ghassemi
COVID-19 Image Data Collection: Prospective Predictions Are the Future
Accepted for publication at the Journal of Machine Learning for Biomedical Imaging (MELBA) https://melba-journal.org. Code for baseline experiments can be found here: https://github.com/mlmed/covid-baselines
null
10.59275/j.melba.2020-48g7
null
q-bio.QM cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
Across the world's coronavirus disease 2019 (COVID-19) hot spots, the need to streamline patient diagnosis and management has become more pressing than ever. As one of the main imaging tools, chest X-rays (CXRs) are common, fast, non-invasive, relatively cheap, and can potentially be used at the bedside to monitor the progression of the disease. This paper describes the first public COVID-19 image data collection as well as a preliminary exploration of possible use cases for the data. This dataset currently contains hundreds of frontal-view X-rays and is the largest public resource for COVID-19 image and prognostic data, making it a necessary resource for developing and evaluating tools to aid in the treatment of COVID-19. It was manually aggregated from publication figures as well as various web-based repositories into a machine learning (ML) friendly format with accompanying dataloader code. We collected frontal- and lateral-view imagery and metadata such as the time since first symptoms, intensive care unit (ICU) status, survival status, intubation status, and hospital location. We present multiple possible use cases for the data, such as predicting the need for the ICU, predicting patient survival, and understanding a patient's trajectory during treatment. Data can be accessed here: https://github.com/ieee8023/covid-chestxray-dataset
[ { "created": "Mon, 22 Jun 2020 03:20:36 GMT", "version": "v1" }, { "created": "Thu, 1 Oct 2020 20:31:04 GMT", "version": "v2" }, { "created": "Mon, 14 Dec 2020 18:52:43 GMT", "version": "v3" } ]
2023-06-29
[ [ "Cohen", "Joseph Paul", "" ], [ "Morrison", "Paul", "" ], [ "Dao", "Lan", "" ], [ "Roth", "Karsten", "" ], [ "Duong", "Tim Q", "" ], [ "Ghassemi", "Marzyeh", "" ] ]
Across the world's coronavirus disease 2019 (COVID-19) hot spots, the need to streamline patient diagnosis and management has become more pressing than ever. As one of the main imaging tools, chest X-rays (CXRs) are common, fast, non-invasive, relatively cheap, and can potentially be used at the bedside to monitor the progression of the disease. This paper describes the first public COVID-19 image data collection as well as a preliminary exploration of possible use cases for the data. This dataset currently contains hundreds of frontal-view X-rays and is the largest public resource for COVID-19 image and prognostic data, making it a necessary resource for developing and evaluating tools to aid in the treatment of COVID-19. It was manually aggregated from publication figures as well as various web-based repositories into a machine learning (ML) friendly format with accompanying dataloader code. We collected frontal- and lateral-view imagery and metadata such as the time since first symptoms, intensive care unit (ICU) status, survival status, intubation status, and hospital location. We present multiple possible use cases for the data, such as predicting the need for the ICU, predicting patient survival, and understanding a patient's trajectory during treatment. Data can be accessed here: https://github.com/ieee8023/covid-chestxray-dataset
2301.03659
Nan Zheng
Nan Zheng, Ying Jiang, Shan Jiang, Jongwoon Kim, Yueming Li, Ji-Xin Cheng, Xiaoting Jia and Chen Yang
Multifunctional fiber-based optoacoustic emitter for non-genetic bidirectional neural communication
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A bidirectional brain interface with both "write" and "read" functions can be an important tool for fundamental studies and potential clinical treatments for neurological diseases. Here we report a miniaturized multifunctional fiber-based optoacoustic emitter (mFOE) that is the first to integrate simultaneous non-genetic optoacoustic stimulation for "write" and electrophysiology recording of neural circuits for "read". The non-genetic feature addresses the challenges of the viral transfection required by optogenetics in primates and humans. The orthogonality between optoacoustic waves and electrical fields provides a solution to avoid interference between electrical stimulation and recording. We first validated the non-genetic stimulation function of the mFOE in cultured rat neurons using calcium imaging. In vivo application of the mFOE for successful simultaneous optoacoustic stimulation and electrical recording of brain activity was confirmed in mouse hippocampus in both acute and chronic applications for up to 1 month. Minimal brain tissue damage was confirmed after these applications. The capability of non-genetic neural stimulation and recording enabled by the mFOE opens up new possibilities for the investigation of neural circuits and brings new insights into the study of ultrasound neurostimulation.
[ { "created": "Mon, 9 Jan 2023 20:04:50 GMT", "version": "v1" } ]
2023-01-11
[ [ "Zheng", "Nan", "" ], [ "Jiang", "Ying", "" ], [ "Jiang", "Shan", "" ], [ "Kim", "Jongwoon", "" ], [ "Li", "Yueming", "" ], [ "Cheng", "Ji-Xin", "" ], [ "Jia", "Xiaoting", "" ], [ "Yang", "Chen", "" ] ]
A bidirectional brain interface with both "write" and "read" functions can be an important tool for fundamental studies and potential clinical treatments for neurological diseases. Here we report a miniaturized multifunctional fiber-based optoacoustic emitter (mFOE) that is the first to integrate simultaneous non-genetic optoacoustic stimulation for "write" and electrophysiology recording of neural circuits for "read". The non-genetic feature addresses the challenges of the viral transfection required by optogenetics in primates and humans. The orthogonality between optoacoustic waves and electrical fields provides a solution to avoid interference between electrical stimulation and recording. We first validated the non-genetic stimulation function of the mFOE in cultured rat neurons using calcium imaging. In vivo application of the mFOE for successful simultaneous optoacoustic stimulation and electrical recording of brain activity was confirmed in mouse hippocampus in both acute and chronic applications for up to 1 month. Minimal brain tissue damage was confirmed after these applications. The capability of non-genetic neural stimulation and recording enabled by the mFOE opens up new possibilities for the investigation of neural circuits and brings new insights into the study of ultrasound neurostimulation.
1906.12090
Simone Pigolotti
Simone Pigolotti, Mogens H. Jensen, Yinxiu Zhan, Guido Tiana
Bifractal nature of chromosome contact maps
9 pages, 5 figures, accepted for publication in Physical Review Research
Phys. Rev. Research 2, 043078 (2020)
10.1103/PhysRevResearch.2.043078
null
q-bio.BM cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern biological techniques such as Hi-C permit the measurement of probabilities that different chromosomal regions are close in space. These probabilities can be visualised as matrices called contact maps. In this paper, we introduce a multifractal analysis of chromosomal contact maps. Our analysis reveals that Hi-C maps are bifractal, i.e. complex geometrical objects characterized by two distinct fractal dimensions. To rationalize this observation, we introduce a model that describes chromosomes as a hierarchical set of nested domains and we solve it exactly. The predicted multifractal spectrum is in excellent quantitative agreement with experimental data. Moreover, we show that our theory yields a more robust estimation of the scaling exponent of the contact probability than existing methods. By applying this method to experimental data, we detect subtle conformational changes among chromosomes during differentiation of human stem cells.
[ { "created": "Fri, 28 Jun 2019 08:34:15 GMT", "version": "v1" }, { "created": "Mon, 9 Mar 2020 02:08:06 GMT", "version": "v2" }, { "created": "Tue, 13 Oct 2020 06:19:00 GMT", "version": "v3" } ]
2020-10-21
[ [ "Pigolotti", "Simone", "" ], [ "Jensen", "Mogens H.", "" ], [ "Zhan", "Yinxiu", "" ], [ "Tiana", "Guido", "" ] ]
Modern biological techniques such as Hi-C permit the measurement of probabilities that different chromosomal regions are close in space. These probabilities can be visualised as matrices called contact maps. In this paper, we introduce a multifractal analysis of chromosomal contact maps. Our analysis reveals that Hi-C maps are bifractal, i.e. complex geometrical objects characterized by two distinct fractal dimensions. To rationalize this observation, we introduce a model that describes chromosomes as a hierarchical set of nested domains and we solve it exactly. The predicted multifractal spectrum is in excellent quantitative agreement with experimental data. Moreover, we show that our theory yields a more robust estimation of the scaling exponent of the contact probability than existing methods. By applying this method to experimental data, we detect subtle conformational changes among chromosomes during differentiation of human stem cells.
1411.1400
Si-Wei Qiu
Siwei Qiu and Carson Chow
Field theory for biophysical neural networks
7 pages, 3 figures, The 32nd International Symposium on Lattice Field Theory 23-28 June, 2014 Columbia University New York, NY
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human brain is a complex system composed of a network of hundreds of billions of discrete neurons that are coupled through time dependent synapses. Simulating the entire brain is a daunting challenge. Here, we show how ideas from quantum field theory can be used to construct an effective reduced theory, which may be analyzed with lattice computations. We give some examples of how the formalism can be applied to biophysically plausible neural network models.
[ { "created": "Wed, 5 Nov 2014 04:52:06 GMT", "version": "v1" } ]
2014-11-07
[ [ "Qiu", "Siwei", "" ], [ "Chow", "Carson", "" ] ]
The human brain is a complex system composed of a network of hundreds of billions of discrete neurons that are coupled through time dependent synapses. Simulating the entire brain is a daunting challenge. Here, we show how ideas from quantum field theory can be used to construct an effective reduced theory, which may be analyzed with lattice computations. We give some examples of how the formalism can be applied to biophysically plausible neural network models.
1403.4264
Adam Auton
Adam Auton, Simon Myers, Gil McVean
Identifying recombination hotspots using population genetic data
3 pages, 1 figure
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Recombination rates vary considerably at the fine scale within mammalian genomes, with the majority of recombination occurring within hotspots of ~2 kb in width. We present a method for inferring the location of recombination hotspots from patterns of linkage disequilibrium within samples of population genetic data. Results: Using simulations, we show that our method has a hotspot detection power of approximately 50-60%, depending on the magnitude of the hotspot. The false positive rate is between 0.24 and 0.56 false positives per Mb for data typical of humans. Availability: http://github.com/auton1/LDhot
[ { "created": "Mon, 17 Mar 2014 20:17:15 GMT", "version": "v1" } ]
2014-03-19
[ [ "Auton", "Adam", "" ], [ "Myers", "Simon", "" ], [ "McVean", "Gil", "" ] ]
Motivation: Recombination rates vary considerably at the fine scale within mammalian genomes, with the majority of recombination occurring within hotspots of ~2 kb in width. We present a method for inferring the location of recombination hotspots from patterns of linkage disequilibrium within samples of population genetic data. Results: Using simulations, we show that our method has a hotspot detection power of approximately 50-60%, depending on the magnitude of the hotspot. The false positive rate is between 0.24 and 0.56 false positives per Mb for data typical of humans. Availability: http://github.com/auton1/LDhot
2004.06291
Calvin Tsay
Calvin Tsay, Fernando Lejarza, Mark A. Stadtherr, Michael Baldea
Modeling, state estimation, and optimal control for the US COVID-19 outbreak
Updated with available data through April 16, 2020
Scientific Reports (2020) 10:10711
10.1038/s41598-020-67459-8
null
q-bio.PE math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The novel coronavirus SARS-CoV-2 and resulting COVID-19 disease have had an unprecedented spread and continue to cause an increasing number of fatalities worldwide. While vaccines are still under development, social distancing, extensive testing, and quarantining of confirmed infected subjects remain the most effective measures to contain the pandemic. These measures carry a significant socioeconomic cost. In this work, we introduce a novel optimization-based decision-making framework for managing the COVID-19 outbreak in the US. This includes modeling the dynamics of affected populations, estimating the model parameters and hidden states from data, and an optimal control strategy for sequencing social distancing and testing events such that the number of infections is minimized. The analysis of our extensive computational efforts reveals that social distancing and quarantining are most effective when implemented early, with quarantining of confirmed infected subjects having a much higher impact. Further, we find that "on-off" policies alternating between strict social distancing and relaxing such restrictions can be effective at "flattening" the curve while likely minimizing social and economic cost.
[ { "created": "Tue, 14 Apr 2020 03:44:11 GMT", "version": "v1" }, { "created": "Sat, 18 Apr 2020 04:33:21 GMT", "version": "v2" } ]
2020-07-02
[ [ "Tsay", "Calvin", "" ], [ "Lejarza", "Fernando", "" ], [ "Stadtherr", "Mark A.", "" ], [ "Baldea", "Michael", "" ] ]
The novel coronavirus SARS-CoV-2 and resulting COVID-19 disease have had an unprecedented spread and continue to cause an increasing number of fatalities worldwide. While vaccines are still under development, social distancing, extensive testing, and quarantining of confirmed infected subjects remain the most effective measures to contain the pandemic. These measures carry a significant socioeconomic cost. In this work, we introduce a novel optimization-based decision-making framework for managing the COVID-19 outbreak in the US. This includes modeling the dynamics of affected populations, estimating the model parameters and hidden states from data, and an optimal control strategy for sequencing social distancing and testing events such that the number of infections is minimized. The analysis of our extensive computational efforts reveals that social distancing and quarantining are most effective when implemented early, with quarantining of confirmed infected subjects having a much higher impact. Further, we find that "on-off" policies alternating between strict social distancing and relaxing such restrictions can be effective at "flattening" the curve while likely minimizing social and economic cost.
2207.03826
Maxime Lenormand
Cl\'ementine Pr\'eau, Julien Tournebize, Maxime Lenormand, Samuel Alleaume, V\'eronique Gouy Boussada and Sandra Luque
Habitat connectivity in agricultural landscapes improving multi-functionality of constructed wetlands as nature-based solutions
26 pages, 4 figures
Ecological Engineering 182, 106725 (2022)
10.1016/j.ecoleng.2022.106725
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The prevention of biodiversity loss in agricultural landscapes to protect ecosystem stability and functions is of major importance in itself and for the maintenance of associated ecosystem services. Intensive agriculture leads to a loss in species richness and homogenization of species pools as well as the fragmentation of natural habitats and groundwater pollution. Constructed wetlands stand as nature-based solutions (NBS) to buffer the degradation of water quality by intercepting the transfer of particles, nutrients and pesticides between crops and surface waters. In karstic watersheds, where sinkholes short-cut surface water directly to groundwater and increase water resource vulnerability, constructed wetlands are recommended to mitigate agricultural pollutants. Constructed wetlands also have the potential to improve landscape connectivity by providing refuge and breeding sites for wildlife, especially for amphibians. We propose here a methodology to identify optimal locations for water pollution mitigation using constructed wetlands from the perspective of habitat connectivity. We use ecological niche modelling at the regional scale to model the potential of habitat suitability for nine amphibian species, and to infer how the landscape impedes species movements. We combine those results with graph theory to identify connectivity priorities at the operational scale of an agricultural catchment area. Our framework allowed us to identify optimal areas from the point of view of the species, and to analyze the effect of multifunctional constructed wetlands aiming both to reduce water pollution and to improve the overall connectivity of amphibian species habitat. More generally, we show the potential of habitat connectivity assessment to improve the multifunctionality of NBS for pollution mitigation.
[ { "created": "Fri, 8 Jul 2022 11:21:10 GMT", "version": "v1" } ]
2022-07-11
[ [ "Préau", "Clémentine", "" ], [ "Tournebize", "Julien", "" ], [ "Lenormand", "Maxime", "" ], [ "Alleaume", "Samuel", "" ], [ "Boussada", "Véronique Gouy", "" ], [ "Luque", "Sandra", "" ] ]
The prevention of biodiversity loss in agricultural landscapes to protect ecosystem stability and functions is of major importance in itself and for the maintenance of associated ecosystem services. Intensive agriculture leads to a loss in species richness and homogenization of species pools as well as the fragmentation of natural habitats and groundwater pollution. Constructed wetlands stand as nature-based solutions (NBS) to buffer the degradation of water quality by intercepting the transfer of particles, nutrients and pesticides between crops and surface waters. In karstic watersheds, where sinkholes short-cut surface water directly to groundwater and increase water resource vulnerability, constructed wetlands are recommended to mitigate agricultural pollutants. Constructed wetlands also have the potential to improve landscape connectivity by providing refuge and breeding sites for wildlife, especially for amphibians. We propose here a methodology to identify optimal locations for water pollution mitigation using constructed wetlands from the perspective of habitat connectivity. We use ecological niche modelling at the regional scale to model the potential of habitat suitability for nine amphibian species, and to infer how the landscape impedes species movements. We combine those results with graph theory to identify connectivity priorities at the operational scale of an agricultural catchment area. Our framework allowed us to identify optimal areas from the point of view of the species, and to analyze the effect of multifunctional constructed wetlands aiming both to reduce water pollution and to improve the overall connectivity of amphibian species habitat. More generally, we show the potential of habitat connectivity assessment to improve the multifunctionality of NBS for pollution mitigation.
0806.4576
Nico Stollenwerk
Maira Aguiar, Nico Stollenwerk, Bob W. Kooi
Torus bifurcations, isolas and chaotic attractors in a simple dengue model with ADE and temporary cross immunity
13 pages, 4 fugures; proceedings of CMMSE 2008, ISBN 978-84-612-1982-7
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyse an epidemiological model of competing strains of pathogens, and hence differences in transmission for first versus secondary infection due to interaction of the strains with previously acquired immunities, as has been described for dengue fever (in dengue known as antibody-dependent enhancement, ADE). Such models show a rich variety of dynamics through bifurcations up to deterministic chaos. Including temporary cross-immunity even enlarges the parameter range of such chaotic attractors, and also gives rise to various coexisting attractors, which are difficult to identify by standard numerical bifurcation programs using continuation methods. A combination of techniques, including classical bifurcation plots and Lyapunov exponent spectra, has to be applied in comparison to gain further insight into such dynamical structures. Here we present for the first time multi-parameter studies in a range of biologically plausible values for dengue. The multi-strain interaction with the immune system is expected to also have implications for the epidemiology of other diseases.
[ { "created": "Fri, 27 Jun 2008 18:40:23 GMT", "version": "v1" } ]
2008-06-30
[ [ "Aguiar", "Maira", "" ], [ "Stollenwerk", "Nico", "" ], [ "Kooi", "Bob W.", "" ] ]
We analyse an epidemiological model of competing strains of pathogens, and hence differences in transmission for first versus secondary infection due to interaction of the strains with previously acquired immunities, as has been described for dengue fever (in dengue known as antibody-dependent enhancement, ADE). Such models show a rich variety of dynamics through bifurcations up to deterministic chaos. Including temporary cross-immunity even enlarges the parameter range of such chaotic attractors, and also gives rise to various coexisting attractors, which are difficult to identify by standard numerical bifurcation programs using continuation methods. A combination of techniques, including classical bifurcation plots and Lyapunov exponent spectra, has to be applied in comparison to gain further insight into such dynamical structures. Here we present for the first time multi-parameter studies in a range of biologically plausible values for dengue. The multi-strain interaction with the immune system is expected to also have implications for the epidemiology of other diseases.
q-bio/0504033
Isaac Hubner
Isaac A. Hubner and Eugene I. Shakhnovich
Geometric and physical considerations for realistic protein models
null
null
10.1103/PhysRevE.72.022901
null
q-bio.BM
null
Protein structure is generally conceptualized as the global arrangement of smaller, local motifs of helices, sheets, and loops. These regular, recurring secondary structural elements have well-understood and standardized definitions in terms of amino acid backbone geometry and the manner in which hydrogen bonding requirements are satisfied. Recently, "tube" models have been proposed to explain protein secondary structure in terms of the geometrically optimal packing of a featureless cylinder. However, atomically detailed simulations demonstrate that such packing considerations alone are insufficient for defining secondary structure; both excluded volume and hydrogen bonding must be explicitly modeled for helix formation. These results have fundamental implications for the construction and interpretation of realistic and meaningful biomacromolecular models.
[ { "created": "Fri, 29 Apr 2005 00:02:59 GMT", "version": "v1" } ]
2009-11-11
[ [ "Hubner", "Isaac A.", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
Protein structure is generally conceptualized as the global arrangement of smaller, local motifs of helices, sheets, and loops. These regular, recurring secondary structural elements have well-understood and standardized definitions in terms of amino acid backbone geometry and the manner in which hydrogen bonding requirements are satisfied. Recently, "tube" models have been proposed to explain protein secondary structure in terms of the geometrically optimal packing of a featureless cylinder. However, atomically detailed simulations demonstrate that such packing considerations alone are insufficient for defining secondary structure; both excluded volume and hydrogen bonding must be explicitly modeled for helix formation. These results have fundamental implications for the construction and interpretation of realistic and meaningful biomacromolecular models.
2108.04176
Qin Wang
Qin Wang, Jun Wei, Boyuan Wang, Zhen Li, Sheng Wang, Shuguang Cu
Adaptive Residue-wise Profile Fusion for Low Homologous Protein Secondary Structure Prediction Using External Knowledge
Accepted in IJCAI-21
null
null
null
q-bio.QM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Protein secondary structure prediction (PSSP) is essential for protein function analysis. However, for low homologous proteins, PSSP suffers from insufficient input features. In this paper, we explicitly import external self-supervised knowledge for low homologous PSSP under the guidance of residue-wise profile fusion. In practice, we first demonstrate the superiority of the profile over the Position-Specific Scoring Matrix (PSSM) for low homologous PSSP. Based on this observation, we introduce novel self-supervised BERT features as the pseudo profile, which implicitly involves the residue distribution in all discovered native sequences as complementary features. Furthermore, a novel residue-wise attention is specially designed to adaptively fuse different features (i.e., the original low-quality profile and the BERT-based pseudo profile), which not only takes full advantage of each feature but also avoids noise disturbance. Besides, a feature consistency loss is proposed to accelerate model learning at multiple semantic levels. Extensive experiments confirm that our method outperforms the state-of-the-art (i.e., by 4.7% for extremely low homologous cases on the BC40 dataset).
[ { "created": "Thu, 5 Aug 2021 08:31:43 GMT", "version": "v1" } ]
2021-08-10
[ [ "Wang", "Qin", "" ], [ "Wei", "Jun", "" ], [ "Wang", "Boyuan", "" ], [ "Li", "Zhen", "" ], [ "Wang", "Sheng", "" ], [ "Cu", "Shuguang", "" ] ]
Protein secondary structure prediction (PSSP) is essential for protein function analysis. However, for low homologous proteins, PSSP suffers from insufficient input features. In this paper, we explicitly import external self-supervised knowledge for low homologous PSSP under the guidance of residue-wise profile fusion. In practice, we first demonstrate the superiority of the profile over the Position-Specific Scoring Matrix (PSSM) for low homologous PSSP. Based on this observation, we introduce novel self-supervised BERT features as the pseudo profile, which implicitly involves the residue distribution in all discovered native sequences as complementary features. Furthermore, a novel residue-wise attention is specially designed to adaptively fuse different features (i.e., the original low-quality profile and the BERT-based pseudo profile), which not only takes full advantage of each feature but also avoids noise disturbance. Besides, a feature consistency loss is proposed to accelerate model learning at multiple semantic levels. Extensive experiments confirm that our method outperforms the state-of-the-art (i.e., by 4.7% for extremely low homologous cases on the BC40 dataset).
2103.17131
Ugo Bardi
Ilaria Perissi, Ugo Bardi
The Sixth Law of Stupidity: A Biophysical Interpretation of Carlo Cipolla's Stupidity Laws
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Carlo Cipolla's stupidity quadrant and his five laws of stupidity were first proposed in 1976. Presented in a humorous tone by the author, these concepts nevertheless describe very serious features of the interactions among human beings. Here, we propose a new interpretation of Cipolla's ideas in a biophysical framework, using the well-known predator-prey or "Lotka-Volterra" model. We find that there is indeed a correspondence between Cipolla's approach, based on economics, and biophysical economics. On the basis of this examination, we propose a sixth law of stupidity, additional to the five proposed by Cipolla. The law states that humans are the stupidest species in the ecosystem.
[ { "created": "Wed, 31 Mar 2021 14:54:09 GMT", "version": "v1" } ]
2021-04-01
[ [ "Perissi", "Ilaria", "" ], [ "Bardi", "Ugo", "" ] ]
Carlo Cipolla's stupidity quadrant and his five laws of stupidity were first proposed in 1976. Presented in a humorous tone by the author, these concepts nevertheless describe very serious features of the interactions among human beings. Here, we propose a new interpretation of Cipolla's ideas in a biophysical framework, using the well-known predator-prey or "Lotka-Volterra" model. We find that there is indeed a correspondence between Cipolla's approach, based on economics, and biophysical economics. On the basis of this examination, we propose a sixth law of stupidity, additional to the five proposed by Cipolla. The law states that humans are the stupidest species in the ecosystem.
2012.05368
Radostin Simitev
Peter Mortensen, Hao Gao, Godfrey Smith and Radostin D. Simitev
Action potential propagation and block in a model of atrial tissue with myocyte-fibroblast coupling
To appear in Mathematical Medicine and Biology: A Journal of the IMA (ISSN (Online):1477-8602). Accepted for publication on 2020-12-08
Mathematical Medicine and Biology: A Journal of the IMA 38(1), 106-131 (2021)
10.1093/imammb/dqaa014
null
q-bio.TO nlin.PS physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The electrical coupling between myocytes and fibroblasts and the spatial distribution of fibroblasts within myocardial tissues are significant factors in triggering and sustaining cardiac arrhythmias but their roles are poorly understood. This article describes both direct numerical simulations and an asymptotic theory of propagation and block of electrical excitation in a model of atrial tissue with myocyte-fibroblast coupling. In particular, three idealised fibroblast distributions are introduced: uniform distribution, fibroblast barrier and myocyte strait, all believed to be constituent blocks of realistic fibroblast distributions. Primary action potential biomarkers including conduction velocity, peak potential and triangulation index are estimated from direct simulations in all cases. Propagation block is found to occur at certain critical values of the parameters defining each idealised fibroblast distribution and these critical values are accurately determined. An asymptotic theory proposed earlier is extended and applied to the case of a uniform fibroblast distribution. Biomarker values are obtained from hybrid analytical-numerical solutions of coupled fast-time and slow-time periodic boundary value problems and compare well to direct numerical simulations. The boundary of absolute refractoriness is determined solely by the fast-time problem and is found to depend on the values of the myocyte potential and on the slow inactivation variable of the sodium current ahead of the propagating pulse. In turn, these quantities are estimated from the slow-time problem using a regular perturbation expansion to find the steady state of the coupled myocyte-fibroblast kinetics. The asymptotic theory gives a simple analytical expression that captures with remarkable accuracy the block of propagation in the presence of fibroblasts.
[ { "created": "Wed, 9 Dec 2020 23:33:01 GMT", "version": "v1" } ]
2021-05-11
[ [ "Mortensen", "Peter", "" ], [ "Gao", "Hao", "" ], [ "Smith", "Godfrey", "" ], [ "Simitev", "Radostin D.", "" ] ]
The electrical coupling between myocytes and fibroblasts and the spatial distribution of fibroblasts within myocardial tissues are significant factors in triggering and sustaining cardiac arrhythmias, but their roles are poorly understood. This article describes both direct numerical simulations and an asymptotic theory of propagation and block of electrical excitation in a model of atrial tissue with myocyte-fibroblast coupling. In particular, three idealised fibroblast distributions are introduced: uniform distribution, fibroblast barrier and myocyte strait, all believed to be constituent blocks of realistic fibroblast distributions. Primary action potential biomarkers including conduction velocity, peak potential and triangulation index are estimated from direct simulations in all cases. Propagation block is found to occur at certain critical values of the parameters defining each idealised fibroblast distribution and these critical values are accurately determined. An asymptotic theory proposed earlier is extended and applied to the case of a uniform fibroblast distribution. Biomarker values are obtained from hybrid analytical-numerical solutions of coupled fast-time and slow-time periodic boundary value problems and compare well to direct numerical simulations. The boundary of absolute refractoriness is determined solely by the fast-time problem and is found to depend on the values of the myocyte potential and on the slow inactivation variable of the sodium current ahead of the propagating pulse. In turn, these quantities are estimated from the slow-time problem using a regular perturbation expansion to find the steady state of the coupled myocyte-fibroblast kinetics. The asymptotic theory gives a simple analytical expression that captures with remarkable accuracy the block of propagation in the presence of fibroblasts.
2008.12184
Jeffrey Doser
Jeffrey W. Doser, Aaron S. Weed, Elise F. Zipkin, Kathryn M. Miller, Andrew O. Finley
Trends in bird abundance differ among protected forests but not bird guilds
39 pages, 6 figures
Ecological Applications (2021)
10.1002/eap.2377
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Improved monitoring and associated inferential tools to efficiently identify declining bird populations, particularly of rare or sparsely distributed species, are key to informed conservation and management across large spatio-temporal regions. We assess abundance trends for 106 bird species in a network of eight national park forests located within the northeast USA from 2006-2019 using a novel hierarchical model. We develop a multi-species, multi-region removal sampling model that shares information across species and parks to enable inference on rare species and sparsely sampled parks and to evaluate the effects of local forest structure. Trends in bird abundance over time varied widely across parks, but species showed similar trends within parks. Three parks (Acadia, Marsh-Billings-Rockefeller, and Morristown) decreased in bird abundance across all species, while three parks (Saratoga, Roosevelt-Vanderbilt, and Weir-Farm) increased in abundance. Bird abundance peaked at medium levels of basal area and high levels of percent forest and forest regeneration, with percent forest having the largest effect. Variation in these effects across parks could be a result of differences in forest structural stage and diversity. Our novel hierarchical model enables estimates of abundance at the network, park, guild, and species levels. We found large variation in abundance trends across parks but not across bird guilds, suggesting that local forest condition may have a broad and consistent effect on the entire bird community within a given park. Management should target the three parks with overall decreasing trends in bird abundance to further identify what specific factors are driving observed declines across the bird community. Understanding how bird communities respond to local forest structure and other stressors is crucial for informed and lasting management.
[ { "created": "Thu, 27 Aug 2020 15:22:56 GMT", "version": "v1" }, { "created": "Thu, 6 May 2021 10:44:10 GMT", "version": "v2" } ]
2021-09-07
[ [ "Doser", "Jeffrey W.", "" ], [ "Weed", "Aaron S.", "" ], [ "Zipkin", "Elise F.", "" ], [ "Miller", "Kathryn M.", "" ], [ "Finley", "Andrew O.", "" ] ]
Improved monitoring and associated inferential tools to efficiently identify declining bird populations, particularly of rare or sparsely distributed species, are key to informed conservation and management across large spatio-temporal regions. We assess abundance trends for 106 bird species in a network of eight national park forests located within the northeast USA from 2006-2019 using a novel hierarchical model. We develop a multi-species, multi-region removal sampling model that shares information across species and parks to enable inference on rare species and sparsely sampled parks and to evaluate the effects of local forest structure. Trends in bird abundance over time varied widely across parks, but species showed similar trends within parks. Three parks (Acadia, Marsh-Billings-Rockefeller, and Morristown) decreased in bird abundance across all species, while three parks (Saratoga, Roosevelt-Vanderbilt, and Weir-Farm) increased in abundance. Bird abundance peaked at medium levels of basal area and high levels of percent forest and forest regeneration, with percent forest having the largest effect. Variation in these effects across parks could be a result of differences in forest structural stage and diversity. Our novel hierarchical model enables estimates of abundance at the network, park, guild, and species levels. We found large variation in abundance trends across parks but not across bird guilds, suggesting that local forest condition may have a broad and consistent effect on the entire bird community within a given park. Management should target the three parks with overall decreasing trends in bird abundance to further identify what specific factors are driving observed declines across the bird community. Understanding how bird communities respond to local forest structure and other stressors is crucial for informed and lasting management.
2210.04627
Nima Dehghani
Nima Dehghani
Systematizing Cellular Complexity: A Hilbertian Approach To Biological Problems
null
null
null
null
q-bio.OT physics.bio-ph
http://creativecommons.org/licenses/by-sa/4.0/
Examining individual components of cellular systems has been successful in uncovering molecular reactions and interactions. However, the challenge lies in integrating these components into a comprehensive system-scale map. This difficulty arises due to factors such as missing links (unknown variables), overlooked nonlinearities in high-dimensional parameter space, downplayed natural noisiness and stochasticity, and a lack of focus on causal influence and temporal dynamics. Composite static and phenomenological descriptions, while appearing complicated, lack the essence of what makes the biological systems truly "complex". The formalization of system-level problems is therefore important in constructing a meta-theory of biology. Addressing fundamental aspects of cellular regulation, adaptability, and noise management is vital for understanding the robustness and functionality of biological systems. These aspects encapsulate the challenges that cells face in maintaining stability, responding to environmental changes, and harnessing noise for functionality. This work examines these key problems that cells must solve, serving as a template for such formalization and as a step towards the axiomatization of biological investigations. Through a detailed exploration of cellular mechanisms, particularly homeostatic configuration, ion channels and harnessing noise, this paper aims to illustrate complex concepts and theories in a tangible context, providing a bridge between abstract theoretical frameworks and concrete biological phenomena.
[ { "created": "Wed, 5 Oct 2022 18:15:04 GMT", "version": "v1" }, { "created": "Mon, 29 May 2023 23:27:30 GMT", "version": "v2" }, { "created": "Thu, 4 Jan 2024 19:47:18 GMT", "version": "v3" }, { "created": "Tue, 2 Jul 2024 01:02:46 GMT", "version": "v4" } ]
2024-07-03
[ [ "Dehghani", "Nima", "" ] ]
Examining individual components of cellular systems has been successful in uncovering molecular reactions and interactions. However, the challenge lies in integrating these components into a comprehensive system-scale map. This difficulty arises due to factors such as missing links (unknown variables), overlooked nonlinearities in high-dimensional parameter space, downplayed natural noisiness and stochasticity, and a lack of focus on causal influence and temporal dynamics. Composite static and phenomenological descriptions, while appearing complicated, lack the essence of what makes the biological systems truly "complex". The formalization of system-level problems is therefore important in constructing a meta-theory of biology. Addressing fundamental aspects of cellular regulation, adaptability, and noise management is vital for understanding the robustness and functionality of biological systems. These aspects encapsulate the challenges that cells face in maintaining stability, responding to environmental changes, and harnessing noise for functionality. This work examines these key problems that cells must solve, serving as a template for such formalization and as a step towards the axiomatization of biological investigations. Through a detailed exploration of cellular mechanisms, particularly homeostatic configuration, ion channels and harnessing noise, this paper aims to illustrate complex concepts and theories in a tangible context, providing a bridge between abstract theoretical frameworks and concrete biological phenomena.
1909.11338
Erwin Frey
J. Cremer and A. Melbinger and K. Wienand and T. Henriquez and H. Jung and E. Frey
Cooperation in Microbial Populations: Theory and Experimental Model Systems
Review article, 88 pages, 14 figures
null
null
null
q-bio.PE physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Cooperative behavior, the costly provision of benefits to others, is common across all domains of life. This review article discusses cooperative behavior in the microbial world, mediated by the exchange of extracellular products called public goods. We focus on model species for which the production of a public good and the related growth disadvantage for the producing cells are well described. To unveil the biological and ecological factors promoting the emergence and stability of cooperative traits we take an interdisciplinary perspective and review insights gained from both mathematical models and well-controlled experimental model systems. Ecologically, we include crucial aspects of the microbial life cycle into our analysis and particularly consider population structures where an ensemble of local communities (sub populations) continuously emerge, grow, and disappear again. Biologically, we explicitly consider the synthesis and regulation of public good production. The discussion of the theoretical approaches includes general evolutionary concepts, population dynamics, and evolutionary game theory. As a specific but generic biological example we consider populations of Pseudomonas putida and its regulation and utilization of pyoverdines, iron scavenging molecules. The review closes with an overview on cooperation in spatially extended systems and also provides a critical assessment of the insights gained from the experimental and theoretical studies discussed. Current challenges and important new research opportunities are discussed, including the biochemical regulation of public goods, more realistic ecological scenarios resembling native environments, cell to cell signalling, and multi-species communities.
[ { "created": "Wed, 25 Sep 2019 08:30:41 GMT", "version": "v1" } ]
2019-09-26
[ [ "Cremer", "J.", "" ], [ "Melbinger", "A.", "" ], [ "Wienand", "K.", "" ], [ "Henriquez", "T.", "" ], [ "Jung", "H.", "" ], [ "Frey", "E.", "" ] ]
Cooperative behavior, the costly provision of benefits to others, is common across all domains of life. This review article discusses cooperative behavior in the microbial world, mediated by the exchange of extracellular products called public goods. We focus on model species for which the production of a public good and the related growth disadvantage for the producing cells are well described. To unveil the biological and ecological factors promoting the emergence and stability of cooperative traits we take an interdisciplinary perspective and review insights gained from both mathematical models and well-controlled experimental model systems. Ecologically, we include crucial aspects of the microbial life cycle into our analysis and particularly consider population structures where an ensemble of local communities (sub populations) continuously emerge, grow, and disappear again. Biologically, we explicitly consider the synthesis and regulation of public good production. The discussion of the theoretical approaches includes general evolutionary concepts, population dynamics, and evolutionary game theory. As a specific but generic biological example we consider populations of Pseudomonas putida and its regulation and utilization of pyoverdines, iron scavenging molecules. The review closes with an overview on cooperation in spatially extended systems and also provides a critical assessment of the insights gained from the experimental and theoretical studies discussed. Current challenges and important new research opportunities are discussed, including the biochemical regulation of public goods, more realistic ecological scenarios resembling native environments, cell to cell signalling, and multi-species communities.
1301.1569
Jacopo Grilli
Samir Suweis, Jacopo Grilli and Amos Maritan
Effects of Mixing Interaction Types on Ecological Community Stability
12 pages, 5 figures
null
null
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, a remarkable theoretical effort has been made to understand stability and complexity in ecological communities. The non-random structure of real ecological interaction networks has been recognized as one key ingredient contributing to the coexistence of high complexity and stability in real ecosystems. However, most theoretical studies have considered communities with only one interaction type (either antagonistic, competitive, or mutualistic). Recently, a theoretical analysis of multiple interaction types in ecological systems has been proposed, concluding that: a) a mixture of antagonistic and mutualistic interactions stabilizes the system with respect to the less realistic case of only one interaction type; b) complexity, measured in terms of the number of interactions and the number of species, increases stability in systems with different types of interactions. By introducing new theoretical investigations and analyzing 21 empirical data sets representing mutualistic plant-pollinator networks, we show that those conclusions are incorrect. We prove that the positive complexity-stability effect observed in systems with different kinds of interactions is a mere consequence of a rescaling of the interaction strengths, and thus unrelated to the mixing of interaction types.
[ { "created": "Tue, 8 Jan 2013 15:48:50 GMT", "version": "v1" } ]
2013-01-09
[ [ "Suweis", "Samir", "" ], [ "Grilli", "Jacopo", "" ], [ "Maritan", "Amos", "" ] ]
In recent years, a remarkable theoretical effort has been made to understand stability and complexity in ecological communities. The non-random structure of real ecological interaction networks has been recognized as one key ingredient contributing to the coexistence of high complexity and stability in real ecosystems. However, most theoretical studies have considered communities with only one interaction type (either antagonistic, competitive, or mutualistic). Recently, a theoretical analysis of multiple interaction types in ecological systems has been proposed, concluding that: a) a mixture of antagonistic and mutualistic interactions stabilizes the system with respect to the less realistic case of only one interaction type; b) complexity, measured in terms of the number of interactions and the number of species, increases stability in systems with different types of interactions. By introducing new theoretical investigations and analyzing 21 empirical data sets representing mutualistic plant-pollinator networks, we show that those conclusions are incorrect. We prove that the positive complexity-stability effect observed in systems with different kinds of interactions is a mere consequence of a rescaling of the interaction strengths, and thus unrelated to the mixing of interaction types.
1609.01270
Mikhail Tikhonov
Mikhail Tikhonov and Remi Monasson
A collective phase in resource competition in a highly diverse ecosystem
5 pages, 3 figures + Supplementary Material
Phys. Rev. Lett. 118, 048103 (2017)
10.1103/PhysRevLett.118.048103
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Organisms shape their own environment, which in turn affects their survival. This feedback becomes especially important for communities containing a large number of species; however, few existing approaches allow studying this regime, except in simulations. Here, we use methods of statistical physics to analytically solve a classic ecological model of resource competition introduced by MacArthur in 1969. We show that the non-intuitive phenomenology of highly diverse ecosystems includes a phase where the environment constructed by the community becomes fully decoupled from the outside world.
[ { "created": "Mon, 5 Sep 2016 19:54:15 GMT", "version": "v1" } ]
2017-02-01
[ [ "Tikhonov", "Mikhail", "" ], [ "Monasson", "Remi", "" ] ]
Organisms shape their own environment, which in turn affects their survival. This feedback becomes especially important for communities containing a large number of species; however, few existing approaches allow studying this regime, except in simulations. Here, we use methods of statistical physics to analytically solve a classic ecological model of resource competition introduced by MacArthur in 1969. We show that the non-intuitive phenomenology of highly diverse ecosystems includes a phase where the environment constructed by the community becomes fully decoupled from the outside world.
1604.04800
Ehtibar Dzhafarov
Victor H. Cervantes and Ehtibar N. Dzhafarov
Exploration of Contextuality in a Psychophysical Double-Detection Experiment
to appear in Lecture Notes in Computer Science, based on Quantum Interaction 2016 conference; version 2 is a minor revision
null
null
null
q-bio.NC math.PR quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Contextuality-by-Default (CbD) theory allows one to separate contextuality from context-dependent errors and violations of selective influences (aka "no-signaling" or "no-disturbance" principles). This makes the theory especially applicable to behavioral systems, where violations of selective influences are ubiquitous. For cyclic systems with binary random variables, CbD provides necessary and sufficient conditions for noncontextuality, and these conditions are known to be breached in certain quantum systems. We apply the theory of cyclic systems to a psychophysical double-detection experiment, in which observers were asked to determine presence or absence of a signal property in each of two simultaneously presented stimuli. The results, as in all other behavioral and social systems previously analyzed, indicate lack of contextuality. The role of context in double-detection is confined to lack of selectiveness: the distribution of responses to one of the stimuli is influenced by the state of the other stimulus.
[ { "created": "Sat, 16 Apr 2016 21:12:29 GMT", "version": "v1" }, { "created": "Wed, 24 Aug 2016 02:51:53 GMT", "version": "v2" } ]
2016-08-25
[ [ "Cervantes", "Victor H.", "" ], [ "Dzhafarov", "Ehtibar N.", "" ] ]
The Contextuality-by-Default (CbD) theory allows one to separate contextuality from context-dependent errors and violations of selective influences (aka "no-signaling" or "no-disturbance" principles). This makes the theory especially applicable to behavioral systems, where violations of selective influences are ubiquitous. For cyclic systems with binary random variables, CbD provides necessary and sufficient conditions for noncontextuality, and these conditions are known to be breached in certain quantum systems. We apply the theory of cyclic systems to a psychophysical double-detection experiment, in which observers were asked to determine presence or absence of a signal property in each of two simultaneously presented stimuli. The results, as in all other behavioral and social systems previously analyzed, indicate lack of contextuality. The role of context in double-detection is confined to lack of selectiveness: the distribution of responses to one of the stimuli is influenced by the state of the other stimulus.
1605.04718
Eduard Campillo-Funollet
Eduard Campillo-Funollet, Chandrasekhar Venkataraman and Anotida Madzvamuse
A Bayesian approach to parameter identification with an application to Turing systems
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a Bayesian methodology for infinite as well as finite dimensional parameter identification for partial differential equation models. The Bayesian framework provides a rigorous mathematical framework for incorporating prior knowledge on uncertainty in the observations and the parameters themselves, resulting in an approximation of the full probability distribution for the parameters, given the data. Although the numerical approximation of the full probability distribution is computationally expensive, parallelised algorithms can make many practically relevant problems computationally feasible. The probability distribution not only provides estimates for the values of the parameters, but also provides information about the inferability of parameters and the sensitivity of the model. This information is crucial when a mathematical model is used to study the outcome of real-world experiments. Keeping in mind the applicability of our approach to tackle real-world practical problems with data from experiments, in this initial proof of concept work, we apply this theoretical and computational framework to parameter identification for a well studied semilinear reaction-diffusion system with activator-depleted reaction kinetics, posed on evolving and stationary domains.
[ { "created": "Mon, 16 May 2016 10:29:39 GMT", "version": "v1" } ]
2016-05-17
[ [ "Campillo-Funollet", "Eduard", "" ], [ "Venkataraman", "Chandrasekhar", "" ], [ "Madzvamuse", "Anotida", "" ] ]
We present a Bayesian methodology for infinite as well as finite dimensional parameter identification for partial differential equation models. The Bayesian framework provides a rigorous mathematical framework for incorporating prior knowledge on uncertainty in the observations and the parameters themselves, resulting in an approximation of the full probability distribution for the parameters, given the data. Although the numerical approximation of the full probability distribution is computationally expensive, parallelised algorithms can make many practically relevant problems computationally feasible. The probability distribution not only provides estimates for the values of the parameters, but also provides information about the inferability of parameters and the sensitivity of the model. This information is crucial when a mathematical model is used to study the outcome of real-world experiments. Keeping in mind the applicability of our approach to tackle real-world practical problems with data from experiments, in this initial proof of concept work, we apply this theoretical and computational framework to parameter identification for a well studied semilinear reaction-diffusion system with activator-depleted reaction kinetics, posed on evolving and stationary domains.
2009.12984
Konstantin Kalitin
Konstantin Y. Kalitin, Alexey A. Nevzorov, Denis A. Babkov, Alexander A. Spasov, Olga Y. Mukha
Deep learning analysis of intracranial EEG for recognizing drug effects and mechanisms of action
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Drug-target interaction (DTI) prediction has become a foundational task in drug repositioning, polypharmacology, drug discovery, as well as drug resistance and side-effect prediction. DTI identification using machine learning is gaining popularity in these research areas. Through the years, numerous deep learning methods have been proposed for DTI prediction. Nevertheless, prediction accuracy and efficiency remain key challenges. Pharmaco-electroencephalogram (pharmaco-EEG) is considered valuable in the development of central nervous system-active drugs. Quantitative EEG analysis demonstrates high reliability in studying the effects of drugs on the brain. Earlier preclinical pharmaco-EEG studies showed that different types of drugs can be classified according to their mechanism of action on neural activity. Here, we propose a convolutional neural network for EEG-mediated DTI prediction. This new approach can explain the mechanisms underlying complicated drug actions, as it allows the identification of similarities in the mechanisms of action and effects of psychotropic drugs.
[ { "created": "Mon, 28 Sep 2020 00:01:04 GMT", "version": "v1" }, { "created": "Sun, 12 Nov 2023 13:23:04 GMT", "version": "v2" }, { "created": "Thu, 16 Nov 2023 07:54:46 GMT", "version": "v3" } ]
2023-11-17
[ [ "Kalitin", "Konstantin Y.", "" ], [ "Nevzorov", "Alexey A.", "" ], [ "Babkov", "Denis A.", "" ], [ "Spasov", "Alexander A.", "" ], [ "Mukha", "Olga Y.", "" ] ]
Drug-target interaction (DTI) prediction has become a foundational task in drug repositioning, polypharmacology, drug discovery, as well as drug resistance and side-effect prediction. DTI identification using machine learning is gaining popularity in these research areas. Through the years, numerous deep learning methods have been proposed for DTI prediction. Nevertheless, prediction accuracy and efficiency remain key challenges. Pharmaco-electroencephalogram (pharmaco-EEG) is considered valuable in the development of central nervous system-active drugs. Quantitative EEG analysis demonstrates high reliability in studying the effects of drugs on the brain. Earlier preclinical pharmaco-EEG studies showed that different types of drugs can be classified according to their mechanism of action on neural activity. Here, we propose a convolutional neural network for EEG-mediated DTI prediction. This new approach can explain the mechanisms underlying complicated drug actions, as it allows the identification of similarities in the mechanisms of action and effects of psychotropic drugs.
0911.0589
Jorge S\'a Martins S
S. Cebrat, D. Stauffer, J.S. Sa Martins, S. Moss de Oliveira, and P.M.C. de Oliveira
Modelling survival and allele complementation in the evolution of genomes with polymorphic loci
20 pages, 11 figures
null
null
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have simulated the evolution of sexually reproducing populations composed of individuals represented by diploid genomes. A series of eight bits formed an allele occupying one of 128 loci of one haploid genome (chromosome). The environment required a specific activity of each locus, this being the sum of the activities of both alleles located at the corresponding loci on two chromosomes. This activity is represented by the number of bits set to zero. In a constant environment the best-fitted individuals were homozygous, with allele activities corresponding to half of the environment requirement for a locus (in a diploid genome, two alleles at corresponding loci produced a proper activity). Changing the environment under a relatively low recombination rate promotes generation of more polymorphic alleles. In the heterozygous loci, alleles of different activities complement each other, fulfilling the environment requirements. Nevertheless, the genetic pool of populations evolves in the direction of a very restricted number of complementing haplotypes, and a fast-changing environment kills the population. If simulations start with all loci heterozygous, they stay heterozygous for a long time.
[ { "created": "Tue, 3 Nov 2009 15:17:50 GMT", "version": "v1" } ]
2009-11-04
[ [ "Cebrat", "S.", "" ], [ "Stauffer", "D.", "" ], [ "Martins", "J. S. Sa", "" ], [ "de Oliveira", "S. Moss", "" ], [ "de Oliveira", "P. M. C.", "" ] ]
We have simulated the evolution of sexually reproducing populations composed of individuals represented by diploid genomes. A series of eight bits formed an allele occupying one of 128 loci of one haploid genome (chromosome). The environment required a specific activity of each locus, this being the sum of the activities of both alleles located at the corresponding loci on two chromosomes. This activity is represented by the number of bits set to zero. In a constant environment the best-fitted individuals were homozygous, with allele activities corresponding to half of the environment requirement for a locus (in a diploid genome, two alleles at corresponding loci produced a proper activity). Changing the environment under a relatively low recombination rate promotes generation of more polymorphic alleles. In the heterozygous loci, alleles of different activities complement each other, fulfilling the environment requirements. Nevertheless, the genetic pool of populations evolves in the direction of a very restricted number of complementing haplotypes, and a fast-changing environment kills the population. If simulations start with all loci heterozygous, they stay heterozygous for a long time.
1903.01257
Tyler Meadows
Tyler Meadows, Marion Weedermann, Gail S.K. Wolkowicz
Global analysis of a simplified model of anaerobic digestion and a new result for the chemostat
22 pages, 7 figures
null
10.1137/18M1198788
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A. Bornh\"oft, R. Hanke-Rauschenbach, and K. Sundmacher, [Nonlinear Dyn., 73 (2013), pp. 535-549] introduced a qualitative simplification to the ADM1 model for anaerobic digestion. We obtain global results for this model by first analyzing the limiting system, a model of single species growth in the chemostat in which the response function is non-monotone and the species decay rate is included. Using a Lyapunov function argument and the theory of asymptotically autonomous systems, we prove that even in the parameter regime where there is bistability, no periodic orbits exist and every solution converges to one of the equilibrium points. We then describe two algorithms for stochastically perturbing the parameters of the model. Simulations done with these two algorithms are compared with simulations done using the Gillespie and tau-leaping algorithms. They illustrate the severe impact environmental factors may have on anaerobic digestion in the transient phase.
[ { "created": "Mon, 4 Mar 2019 14:16:41 GMT", "version": "v1" } ]
2019-04-15
[ [ "Meadows", "Tyler", "" ], [ "Weedermann", "Marion", "" ], [ "Wolkowicz", "Gail S. K.", "" ] ]
A. Bornh\"oft, R. Hanke-Rauschenbach, and K. Sundmacher, [Nonlinear Dyn., 73 (2013), pp. 535-549] introduced a qualitative simplification to the ADM1 model for anaerobic digestion. We obtain global results for this model by first analyzing the limiting system, a model of single species growth in the chemostat in which the response function is non-monotone and the species decay rate is included. Using a Lyapunov function argument and the theory of asymptotically autonomous systems, we prove that even in the parameter regime where there is bistability, no periodic orbits exist and every solution converges to one of the equilibrium points. We then describe two algorithms for stochastically perturbing the parameters of the model. Simulations done with these two algorithms are compared with simulations done using the Gillespie and tau-leaping algorithms. They illustrate the severe impact environmental factors may have on anaerobic digestion in the transient phase.
q-bio/0606036
Herv\'e Isambert
K. Evlampiev & H. Isambert
Evolution of Protein Interaction Networks by Whole Genome Duplication and Domain Shuffling
8 pages, 4 figures
null
null
null
q-bio.MN
null
Successive whole genome duplications have recently been firmly established in all major eukaryote kingdoms. It is not clear, however, how such a dramatic evolutionary process has contributed to shaping the large-scale topology of protein-protein interaction (PPI) networks. We propose and analytically solve a generic model of PPI network evolution under successive whole genome duplications. This demonstrates that the observed scale-free degree distributions and conserved multi-protein complexes may have concomitantly arisen from i) intrinsic exponential dynamics of PPI network evolution and ii) asymmetric divergence of gene duplicates. This requirement of asymmetric divergence is in fact "spontaneously" fulfilled at the level of protein-binding domains. In addition, domain shuffling of multi-domain proteins is shown to provide a powerful combinatorial source of PPI network innovation, while preserving essential structures of the underlying single-domain interaction network. Finally, large-scale features of PPI networks reflecting the "combinatorial logic" behind direct and indirect protein interactions are well reproduced numerically with only two adjusted parameters of clear biological significance.
[ { "created": "Sun, 25 Jun 2006 23:58:39 GMT", "version": "v1" } ]
2007-05-23
[ [ "Evlampiev", "K.", "" ], [ "Isambert", "H.", "" ] ]
Successive whole genome duplications have recently been firmly established in all major eukaryote kingdoms. It is not clear, however, how such a dramatic evolutionary process has contributed to shaping the large-scale topology of protein-protein interaction (PPI) networks. We propose and analytically solve a generic model of PPI network evolution under successive whole genome duplications. This demonstrates that the observed scale-free degree distributions and conserved multi-protein complexes may have concomitantly arisen from i) intrinsic exponential dynamics of PPI network evolution and ii) asymmetric divergence of gene duplicates. This requirement of asymmetric divergence is in fact "spontaneously" fulfilled at the level of protein-binding domains. In addition, domain shuffling of multi-domain proteins is shown to provide a powerful combinatorial source of PPI network innovation, while preserving essential structures of the underlying single-domain interaction network. Finally, large-scale features of PPI networks reflecting the "combinatorial logic" behind direct and indirect protein interactions are well reproduced numerically with only two adjusted parameters of clear biological significance.
1503.05939
Benedikt Bauer
Benedikt Bauer, Chaitanya S. Gokhale
Repeatability of evolution on epistatic landscapes
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolution is a dynamic process. The two classical forces of evolution are mutation and selection. Assuming small mutation rates, evolution can be predicted based solely on the fitness differences between phenotypes. Predicting an evolutionary process under varying mutation rates as well as varying fitness is still an open question. Experimental procedures, however, do include these complexities along with fluctuating population sizes and stochastic events such as extinctions. We investigate the mutational path probabilities of systems having epistatic effects on both fitness and mutation rates using a theoretical and computational framework. In contrast to previous models, we do not limit ourselves to the typical strong selection, weak mutation (SSWM) regime or to fixed population sizes. Rather, we allow epistatic interactions to also affect mutation rates. This can lead to qualitatively non-trivial dynamics. Pathways that are negligible in the SSWM regime can overcome fitness valleys and become accessible. This finding has the potential to extend the traditional predictions based on the SSWM foundation and bring us closer to what is observed in experimental systems.
[ { "created": "Thu, 19 Mar 2015 20:34:40 GMT", "version": "v1" } ]
2015-03-23
[ [ "Bauer", "Benedikt", "" ], [ "Gokhale", "Chaitanya S.", "" ] ]
Evolution is a dynamic process. The two classical forces of evolution are mutation and selection. Assuming small mutation rates, evolution can be predicted based solely on the fitness differences between phenotypes. Predicting an evolutionary process under varying mutation rates as well as varying fitness is still an open question. Experimental procedures, however, do include these complexities along with fluctuating population sizes and stochastic events such as extinctions. We investigate the mutational path probabilities of systems having epistatic effects on both fitness and mutation rates using a theoretical and computational framework. In contrast to previous models, we do not limit ourselves to the typical strong selection, weak mutation (SSWM) regime or to fixed population sizes. Rather, we allow epistatic interactions to also affect mutation rates. This can lead to qualitatively non-trivial dynamics. Pathways that are negligible in the SSWM regime can overcome fitness valleys and become accessible. This finding has the potential to extend the traditional predictions based on the SSWM foundation and bring us closer to what is observed in experimental systems.
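The mutational path probabilities discussed above can be made concrete in the classical SSWM baseline that the paper generalizes: each step of a path is a beneficial single-locus substitution, chosen with probability proportional to its fixation probability. The sketch below uses Kimura's fixation probability on a hypothetical two-locus landscape with a fitness valley; the landscape values and population size are illustrative, and candidate moves are restricted to loci on the direct path.

```python
import math
from itertools import permutations

def fixation_prob(s, N):
    """Kimura's fixation probability for a mutant with selection coefficient s
    in a population of size N (the neutral limit 1/N handles s ~ 0)."""
    if abs(s) < 1e-12:
        return 1.0 / N
    return (1.0 - math.exp(-2.0 * s)) / (1.0 - math.exp(-2.0 * N * s))

def sswm_path_probs(fitness, N, start, end):
    """Probability of each direct mutational path from start to end under
    SSWM: at each step, only fitness-increasing single-locus changes are
    allowed, weighted by their fixation probabilities."""
    L = len(start)
    flips = [i for i in range(L) if start[i] != end[i]]
    probs = {}
    for order in permutations(flips):
        p, g = 1.0, list(start)
        for step_idx, i in enumerate(order):
            # candidate moves still needed on this path, kept only if beneficial
            weights = []
            for j in order[step_idx:]:
                h = list(g); h[j] = end[j]
                s = fitness[tuple(h)] - fitness[tuple(g)]
                weights.append((j, fixation_prob(s, N) if s > 0 else 0.0))
            total = sum(w for _, w in weights)
            w_i = dict(weights)[i]
            if total == 0 or w_i == 0:
                p = 0.0
                break
            p *= w_i / total
            g[i] = end[i]
        probs[order] = p
    return probs

# Hypothetical two-locus landscape with a fitness valley through genotype (0, 1).
fitness = {(0, 0): 1.0, (1, 0): 1.1, (0, 1): 0.9, (1, 1): 1.3}
probs = sswm_path_probs(fitness, N=1000, start=(0, 0), end=(1, 1))
```

On this landscape the intermediate (0, 1) is deleterious, so under SSWM all probability concentrates on the path through (1, 0); valley-crossing paths like the one through (0, 1) are exactly the kind the paper shows can become accessible once mutation rates themselves vary.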
2310.00681
Seonghwan Seo
Seonghwan Seo and Woo Youn Kim
PharmacoNet: Accelerating Large-Scale Virtual Screening by Deep Pharmacophore Modeling
21 pages, 5 figures
NeurIPS 2023 Workshop on New Frontiers of AI for Drug Discovery and Development
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
As the size of accessible compound libraries expands to over 10 billion, the need for more efficient structure-based virtual screening methods is emerging. Different pre-screening methods have been developed for rapid screening, but there is still a lack of structure-based methods applicable to various proteins that perform protein-ligand binding conformation prediction and scoring in an extremely short time. Here, we describe for the first time a deep-learning framework for structure-based pharmacophore modeling to address this challenge. We frame pharmacophore modeling as an instance segmentation problem to determine each protein hotspot and the location of corresponding pharmacophores, and protein-ligand binding pose prediction as a graph-matching problem. PharmacoNet is significantly faster than state-of-the-art structure-based approaches, yet reasonably accurate with a simple scoring function. Furthermore, we show the promising result that PharmacoNet effectively retains hit candidates even under high pre-screening filtration rates. Overall, our study uncovers the hitherto untapped potential of a pharmacophore modeling approach in deep learning-based drug discovery.
[ { "created": "Sun, 1 Oct 2023 14:13:09 GMT", "version": "v1" }, { "created": "Wed, 4 Oct 2023 07:29:20 GMT", "version": "v2" }, { "created": "Mon, 18 Dec 2023 06:03:08 GMT", "version": "v3" } ]
2023-12-19
[ [ "Seo", "Seonghwan", "" ], [ "Kim", "Woo Youn", "" ] ]
As the size of accessible compound libraries expands to over 10 billion, the need for more efficient structure-based virtual screening methods is emerging. Different pre-screening methods have been developed for rapid screening, but there is still a lack of structure-based methods applicable to various proteins that perform protein-ligand binding conformation prediction and scoring in an extremely short time. Here, we describe for the first time a deep-learning framework for structure-based pharmacophore modeling to address this challenge. We frame pharmacophore modeling as an instance segmentation problem to determine each protein hotspot and the location of corresponding pharmacophores, and protein-ligand binding pose prediction as a graph-matching problem. PharmacoNet is significantly faster than state-of-the-art structure-based approaches, yet reasonably accurate with a simple scoring function. Furthermore, we show the promising result that PharmacoNet effectively retains hit candidates even under high pre-screening filtration rates. Overall, our study uncovers the hitherto untapped potential of a pharmacophore modeling approach in deep learning-based drug discovery.
1012.4978
Salvatore De Martino
E. De Lauro, S. De Martino, S. De Siena, and V. Giorno
Stochastic origin of Gompertzian growths
null
null
null
null
q-bio.QM nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work addresses the problem of the origin of the logarithmic character of Gompertzian growth. We show that the macroscopic, deterministic Gompertz equation describes the evolution from the initial state to the final stationary value of the median of a log-normally distributed, stochastic process. Moreover, by exploiting a stochastic variational principle, we account for the self-regulating feature of Gompertzian growths provided by self-consistent feedback of relative density variations. This well-defined conceptual framework shows its usefulness by allowing a reliable control of the growth by external actions.
[ { "created": "Wed, 22 Dec 2010 14:00:43 GMT", "version": "v1" } ]
2010-12-23
[ [ "De Lauro", "E.", "" ], [ "De Martino", "S.", "" ], [ "De Siena", "S.", "" ], [ "Giorno", "V.", "" ] ]
This work addresses the problem of the origin of the logarithmic character of Gompertzian growth. We show that the macroscopic, deterministic Gompertz equation describes the evolution from the initial state to the final stationary value of the median of a log-normally distributed, stochastic process. Moreover, by exploiting a stochastic variational principle, we account for the self-regulating feature of Gompertzian growths provided by self-consistent feedback of relative density variations. This well-defined conceptual framework shows its usefulness by allowing a reliable control of the growth by external actions.
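For reference, the deterministic Gompertz equation discussed above is commonly written dN/dt = a N ln(K/N), with closed-form solution N(t) = K (N0/K)^(exp(-a t)). The sketch below checks a forward-Euler integration against that closed form; the parameter values are illustrative, not taken from the paper.

```python
import math

def gompertz_closed_form(n0, K, a, t):
    """Closed-form solution of dN/dt = a*N*ln(K/N): N(t) = K*(N0/K)**exp(-a*t)."""
    return K * (n0 / K) ** math.exp(-a * t)

def gompertz_euler(n0, K, a, t_end, dt=1e-4):
    """Forward-Euler integration of the deterministic Gompertz growth equation."""
    n, t = n0, 0.0
    while t < t_end - 1e-12:
        n += dt * a * n * math.log(K / n)
        t += dt
    return n

# With these illustrative parameters the exact value is ~536.2,
# approaching the carrying capacity K = 1000 as t grows.
n_exact = gompertz_closed_form(10.0, 1000.0, 0.5, 4.0)
n_num = gompertz_euler(10.0, 1000.0, 0.5, 4.0)
```

The logarithmic character the paper traces to a log-normal stochastic process is visible in the closed form: the deviation of ln N(t) from ln K decays exponentially at rate a.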
2401.16190
Tao Hu
Tao Hu, Joshua Freeze, Prerna Singh, Justin Kim, Yingnan Song, Hao Wu, Juhwan Lee, Sadeer Al-Kindi, Sanjay Rajagopalan, David L. Wilson, Ammar Hoori
AI prediction of cardiovascular events using opportunistic epicardial adipose tissue assessments from CT calcium score
7 pages, 1 central illustration, 6 figures, 5 tables
null
null
null
q-bio.QM cs.AI
http://creativecommons.org/licenses/by/4.0/
Background: Recent studies have used basic epicardial adipose tissue (EAT) assessments (e.g., volume and mean HU) to predict risk of atherosclerosis-related, major adverse cardiovascular events (MACE). Objectives: Create novel, hand-crafted EAT features, 'fat-omics', to capture the pathophysiology of EAT and improve MACE prediction. Methods: We segmented EAT using a previously-validated deep learning method with optional manual correction. We extracted 148 radiomic features (morphological, spatial, and intensity) and used Cox elastic-net for feature reduction and prediction of MACE. Results: Traditional fat features gave marginal prediction (EAT-volume/EAT-mean-HU/BMI gave C-index 0.53/0.55/0.57, respectively). Significant improvement was obtained with 15 fat-omics features (C-index=0.69, test set). High-risk features included volume-of-voxels-having-elevated-HU-[-50, -30-HU] and HU-negative-skewness, both of which assess high HU, which has been implicated in fat inflammation. Other high-risk features include kurtosis-of-EAT-thickness, reflecting the heterogeneity of thicknesses, and EAT-volume-in-the-top-25%-of-the-heart, emphasizing adipose near the proximal coronary arteries. Kaplan-Meier plots of Cox-identified, high- and low-risk patients were well separated at the median of the fat-omics risk, with the high-risk group having an HR 2.4 times that of the low-risk group (P<0.001). Conclusion: Preliminary findings indicate an opportunity to use more finely tuned, explainable assessments of EAT for improved cardiovascular risk prediction.
[ { "created": "Mon, 29 Jan 2024 14:42:06 GMT", "version": "v1" } ]
2024-01-31
[ [ "Hu", "Tao", "" ], [ "Freeze", "Joshua", "" ], [ "Singh", "Prerna", "" ], [ "Kim", "Justin", "" ], [ "Song", "Yingnan", "" ], [ "Wu", "Hao", "" ], [ "Lee", "Juhwan", "" ], [ "Al-Kindi", "Sadeer", "" ], [ "Rajagopalan", "Sanjay", "" ], [ "Wilson", "David L.", "" ], [ "Hoori", "Ammar", "" ] ]
Background: Recent studies have used basic epicardial adipose tissue (EAT) assessments (e.g., volume and mean HU) to predict risk of atherosclerosis-related, major adverse cardiovascular events (MACE). Objectives: Create novel, hand-crafted EAT features, 'fat-omics', to capture the pathophysiology of EAT and improve MACE prediction. Methods: We segmented EAT using a previously-validated deep learning method with optional manual correction. We extracted 148 radiomic features (morphological, spatial, and intensity) and used Cox elastic-net for feature reduction and prediction of MACE. Results: Traditional fat features gave marginal prediction (EAT-volume/EAT-mean-HU/BMI gave C-index 0.53/0.55/0.57, respectively). Significant improvement was obtained with 15 fat-omics features (C-index=0.69, test set). High-risk features included volume-of-voxels-having-elevated-HU-[-50, -30-HU] and HU-negative-skewness, both of which assess high HU, which has been implicated in fat inflammation. Other high-risk features include kurtosis-of-EAT-thickness, reflecting the heterogeneity of thicknesses, and EAT-volume-in-the-top-25%-of-the-heart, emphasizing adipose near the proximal coronary arteries. Kaplan-Meier plots of Cox-identified, high- and low-risk patients were well separated at the median of the fat-omics risk, with the high-risk group having an HR 2.4 times that of the low-risk group (P<0.001). Conclusion: Preliminary findings indicate an opportunity to use more finely tuned, explainable assessments of EAT for improved cardiovascular risk prediction.
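The Kaplan-Meier survival curves used above to compare high- and low-risk groups come from the product-limit estimator, which needs only follow-up times and censoring flags. A minimal sketch on toy data (illustrative, not the paper's cohort):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier (product-limit) survival estimate from follow-up times and
    event flags (event=1 means the event occurred, 0 means censored).

    Returns (event_times, survival_probabilities) at each distinct event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, out_t, out_s = 1.0, [], []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = 0  # events observed at time t
        c = 0  # subjects censored at time t
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                d += 1
            else:
                c += 1
            i += 1
        if d > 0:
            s *= 1.0 - d / n_at_risk   # product-limit update
            out_t.append(t)
            out_s.append(s)
        n_at_risk -= d + c
    return out_t, out_s

# Toy cohort: events at times 1, 3, 5; censoring at times 2 and 4.
t, s = kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 0, 1])
```

On the toy data the curve drops to 4/5, then 8/15, then 0, showing how censored subjects leave the risk set without forcing a drop in the curve.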
1704.04681
Andrei Khrennikov Yu
Andrei Khrennikov and Ekaterina Yurova
Automaton model of protein: dynamics of conformational and functional states
null
Progress in Biophysics and Molecular Biology 130, Part A, 2-14 (2017)
null
null
q-bio.BM cs.IT math.IT quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this conceptual paper we propose to explore the analogy between the ontic/epistemic description of quantum phenomena and the interrelation between the dynamics of conformational and functional states of proteins. Another new idea is to apply the theory of automata to model the latter dynamics. In our model, a protein's behavior is modeled with the aid of two dynamical systems, ontic and epistemic, which describe the evolution of conformational and functional states of proteins, respectively. The epistemic automaton is constructed from the ontic automaton on the basis of a functional (observational) equivalence relation on the space of ontic states. This is reminiscent of a few approaches to emergent quantum mechanics in which a quantum (epistemic) state is treated as representing a class of prequantum (ontic) states. This approach does not match the standard {\it protein structure-function paradigm}. However, it is well suited to modeling the behavior of intrinsically disordered proteins. Mathematically, the space of a protein's ontic states (conformational states) is modeled with the aid of $p$-adic numbers or, more generally, ultrametric spaces encoding the internal hierarchical structure of proteins. The connection with the theory of $p$-adic dynamical systems is briefly discussed.
[ { "created": "Sat, 15 Apr 2017 19:29:24 GMT", "version": "v1" } ]
2018-07-18
[ [ "Khrennikov", "Andrei", "" ], [ "Yurova", "Ekaterina", "" ] ]
In this conceptual paper we propose to explore the analogy between the ontic/epistemic description of quantum phenomena and the interrelation between the dynamics of conformational and functional states of proteins. Another new idea is to apply the theory of automata to model the latter dynamics. In our model, a protein's behavior is modeled with the aid of two dynamical systems, ontic and epistemic, which describe the evolution of conformational and functional states of proteins, respectively. The epistemic automaton is constructed from the ontic automaton on the basis of a functional (observational) equivalence relation on the space of ontic states. This is reminiscent of a few approaches to emergent quantum mechanics in which a quantum (epistemic) state is treated as representing a class of prequantum (ontic) states. This approach does not match the standard {\it protein structure-function paradigm}. However, it is well suited to modeling the behavior of intrinsically disordered proteins. Mathematically, the space of a protein's ontic states (conformational states) is modeled with the aid of $p$-adic numbers or, more generally, ultrametric spaces encoding the internal hierarchical structure of proteins. The connection with the theory of $p$-adic dynamical systems is briefly discussed.
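The construction described above, an epistemic automaton obtained from an ontic automaton by quotienting states under an observational equivalence, can be sketched directly. The four-state cycle and two-valued readout below are hypothetical stand-ins for conformational dynamics and a functional observable, not structures taken from the paper.

```python
def quotient_automaton(states, step, label):
    """Build the 'epistemic' automaton as the quotient of an 'ontic' automaton
    by the equivalence 'same observable label': each equivalence class maps to
    the set of classes reachable in one ontic step from any of its members.

    states: iterable of ontic states; step: state -> next state;
    label: state -> observable (functional) value.
    """
    classes = {}
    for s in states:
        classes.setdefault(label(s), set()).add(s)
    transitions = {}
    for obs, members in classes.items():
        transitions[obs] = {label(step(s)) for s in members}
    return transitions

# Hypothetical 4-state conformational cycle observed through a 2-valued
# functional readout ('folded' vs 'unfolded').
states = [0, 1, 2, 3]
step = lambda s: (s + 1) % 4                      # deterministic ontic dynamics
label = lambda s: "folded" if s < 2 else "unfolded"
epistemic = quotient_automaton(states, step, label)
```

Note how the deterministic ontic dynamics become nondeterministic at the epistemic level: each observable class can step to either class, which is precisely the coarse-graining effect that makes the epistemic description weaker than the ontic one.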
1009.4172
Joshua Goldwyn
Joshua H. Goldwyn, Nikita S. Imennov, Michael Famulare, and Eric Shea-Brown
On stochastic differential equation models for ion channel noise in Hodgkin-Huxley neurons
19 pages, including 6 figures and 3 appendices
null
10.1103/PhysRevE.83.041908
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The random transitions of ion channels between conducting and non-conducting states generate a source of internal fluctuations in a neuron, known as channel noise. The standard method for modeling fluctuations in the states of ion channels uses continuous-time Markov chains nonlinearly coupled to a differential equation for voltage. Beginning with the work of Fox and Lu, there have been attempts to generate simpler models that use stochastic differential equations (SDEs) to approximate the stochastic spiking activity produced by Markov chain models. Recent numerical investigations, however, have raised doubts that SDE models can preserve the stochastic dynamics of Markov chain models. We analyze three SDE models that have been proposed as approximations to the Markov chain model: one that describes the states of the ion channels and two that describe the states of the ion channel subunits. We show that the former channel-based approach can capture the distribution of channel noise and its effect on spiking in a Hodgkin-Huxley neuron model to a degree not previously demonstrated, but the latter two subunit-based approaches cannot. Our analysis provides intuitive and mathematical explanations for why this is the case: the temporal correlation in the channel noise is determined by the combinatorics of bundling subunits into channels, and the subunit-based approaches do not correctly account for this structure. Our study therefore confirms and elucidates the findings of previous numerical investigations of subunit-based SDE models. Moreover, it presents the first evidence that Markov chain models of the nonlinear, stochastic dynamics of neural membranes can be accurately approximated by SDEs. This finding opens a door to future modeling work using SDE techniques to further illuminate the effects of ion channel fluctuations on electrically active cells.
[ { "created": "Tue, 21 Sep 2010 18:56:58 GMT", "version": "v1" } ]
2011-04-27
[ [ "Goldwyn", "Joshua H.", "" ], [ "Imennov", "Nikita S.", "" ], [ "Famulare", "Michael", "" ], [ "Shea-Brown", "Eric", "" ] ]
The random transitions of ion channels between conducting and non-conducting states generate a source of internal fluctuations in a neuron, known as channel noise. The standard method for modeling fluctuations in the states of ion channels uses continuous-time Markov chains nonlinearly coupled to a differential equation for voltage. Beginning with the work of Fox and Lu, there have been attempts to generate simpler models that use stochastic differential equations (SDEs) to approximate the stochastic spiking activity produced by Markov chain models. Recent numerical investigations, however, have raised doubts that SDE models can preserve the stochastic dynamics of Markov chain models. We analyze three SDE models that have been proposed as approximations to the Markov chain model: one that describes the states of the ion channels and two that describe the states of the ion channel subunits. We show that the former channel-based approach can capture the distribution of channel noise and its effect on spiking in a Hodgkin-Huxley neuron model to a degree not previously demonstrated, but the latter two subunit-based approaches cannot. Our analysis provides intuitive and mathematical explanations for why this is the case: the temporal correlation in the channel noise is determined by the combinatorics of bundling subunits into channels, and the subunit-based approaches do not correctly account for this structure. Our study therefore confirms and elucidates the findings of previous numerical investigations of subunit-based SDE models. Moreover, it presents the first evidence that Markov chain models of the nonlinear, stochastic dynamics of neural membranes can be accurately approximated by SDEs. This finding opens a door to future modeling work using SDE techniques to further illuminate the effects of ion channel fluctuations on electrically active cells.
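The SDE approximations discussed above replace discrete Markov-chain channel flips with a Langevin equation for the open fraction. The sketch below integrates a deliberately simplified Ornstein-Uhlenbeck version (a fixed steady state and time constant instead of the voltage-dependent Hodgkin-Huxley rates, and an ad-hoc noise amplitude) with the Euler-Maruyama method, just to show the hallmark 1/sqrt(N) scaling of channel noise with channel count N.

```python
import math
import random

def simulate_channel_fraction(x_inf, tau, sigma, n_channels, t_end, dt, rng):
    """Euler-Maruyama integration of a toy Langevin model for the open
    fraction x of a pool of two-state ion channels:

        dx = (x_inf - x)/tau dt + (sigma / sqrt(N)) dW,

    with the noise scaled by 1/sqrt(N) so fluctuations shrink as the channel
    count N grows. The fraction is clipped to the physical range [0, 1].
    """
    x = x_inf
    for _ in range(int(t_end / dt)):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Wiener increment
        x += (x_inf - x) / tau * dt + sigma / math.sqrt(n_channels) * dw
        x = min(1.0, max(0.0, x))
    return x

rng = random.Random(1)
# More channels -> smaller stationary fluctuations around x_inf.
few = [simulate_channel_fraction(0.5, 1.0, 0.2, 10, 5.0, 0.01, rng) for _ in range(300)]
many = [simulate_channel_fraction(0.5, 1.0, 0.2, 1000, 5.0, 0.01, rng) for _ in range(300)]
var_few = sum((x - 0.5) ** 2 for x in few) / len(few)
var_many = sum((x - 0.5) ** 2 for x in many) / len(many)
```

The empirical variances should differ by roughly the ratio of channel counts, mirroring why channel noise matters most in small patches of membrane.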
1705.11146
Friedemann Zenke
Friedemann Zenke and Surya Ganguli
SuperSpike: Supervised learning in multi-layer spiking neural networks
null
null
10.1162/neco_a_01086
null
q-bio.NC cs.LG cs.NE stat.ML
http://creativecommons.org/licenses/by/4.0/
A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in-vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in-silico. Here we revisit the problem of supervised learning in temporally coding multi-layer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three factor learning rule capable of training multi-layer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike-time patterns.
[ { "created": "Wed, 31 May 2017 15:31:26 GMT", "version": "v1" }, { "created": "Sat, 14 Oct 2017 15:08:04 GMT", "version": "v2" } ]
2018-05-31
[ [ "Zenke", "Friedemann", "" ], [ "Ganguli", "Surya", "" ] ]
A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in-vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in-silico. Here we revisit the problem of supervised learning in temporally coding multi-layer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three factor learning rule capable of training multi-layer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike-time patterns.
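The surrogate gradient at the heart of the approach above replaces the ill-defined derivative of the hard spike threshold with a smooth pseudo-derivative; SuperSpike uses a fast-sigmoid-shaped surrogate. A minimal sketch, with illustrative threshold and steepness values rather than the paper's:

```python
def surrogate_spike_grad(u, threshold=1.0, beta=10.0):
    """Surrogate derivative for the hard spike nonlinearity S(u) = 1[u > threshold].

    The true derivative is zero almost everywhere (and undefined at threshold);
    the fast-sigmoid surrogate (1 + beta*|u - threshold|)**-2 replaces it with
    a smooth bump peaked at the firing threshold.
    """
    return 1.0 / (1.0 + beta * abs(u - threshold)) ** 2

# Membrane potentials below, at, and above threshold.
grads = [surrogate_spike_grad(u) for u in (0.0, 1.0, 2.0)]
```

During training, this bump multiplies the backpropagated error wherever a neuron's membrane potential is near threshold, letting gradient information flow through otherwise non-differentiable spikes.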
2308.11167
Kapil Panda
Kapil Panda, Anirudh Mazumder
Predicting Dosage of Immunosuppressant Drugs After Kidney Transplantation Using Machine Learning
5 pages, 4 figures; In proceedings of MIT IEEE URTC
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
While kidney transplants are seen as the best treatment option for patients with end-stage renal disease and kidney failure, the organ's health depends on the dosage of immunosuppressant drugs post-transplantation. Due to the dosage variance based on each patient's unique physiology, nephrologists face numerous difficulties when determining the precise dosage needed for each patient. Therefore, in this research we aim to devise a machine learning algorithm to forecast the dosage of immunosuppressant drugs needed for different patients after kidney transplantation. Utilizing a random forest algorithm, the devised model is able to achieve accurate measurements for patient drug dosages.
[ { "created": "Tue, 22 Aug 2023 03:52:29 GMT", "version": "v1" }, { "created": "Mon, 18 Sep 2023 03:04:12 GMT", "version": "v2" } ]
2023-09-19
[ [ "Panda", "Kapil", "" ], [ "Mazumder", "Anirudh", "" ] ]
While kidney transplants are seen as the best treatment option for patients with end-stage renal disease and kidney failure, the organ's health depends on the dosage of immunosuppressant drugs post-transplantation. Due to the dosage variance based on each patient's unique physiology, nephrologists face numerous difficulties when determining the precise dosage needed for each patient. Therefore, in this research we aim to devise a machine learning algorithm to forecast the dosage of immunosuppressant drugs needed for different patients after kidney transplantation. Utilizing a random forest algorithm, the devised model is able to achieve accurate measurements for patient drug dosages.
1707.04362
Bala Krishnamoorthy
Methun Kamruzzaman, Ananth Kalyanaraman, Bala Krishnamoorthy, Stefan Hey, and Patrick Schnable
Hyppo-X: A Scalable Exploratory Framework for Analyzing Complex Phenomics Data
Substantially expanded from previous version. Now illustrating interesting flares and paths on two different data sets
null
null
null
q-bio.QM cs.CG math.AT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phenomics is an emerging branch of modern biology that uses high-throughput phenotyping tools to capture multiple environmental and phenotypic traits, often at massive spatial and temporal scales. The resulting high-dimensional data represent a treasure trove of information for providing an in-depth understanding of how multiple factors interact and contribute to the overall growth and behavior of different genotypes. However, computational tools that can parse through such complex data and aid in extracting plausible hypotheses are currently lacking. In this paper, we present Hyppo-X, a new algorithmic approach to visually explore complex phenomics data and in the process characterize the role of environment on phenotypic traits. We model the problem as one of unsupervised structure discovery, and use emerging principles from algebraic topology and graph theory for discovering higher-order structures of complex phenomics data. We present open-source software with interactive visualization capabilities to facilitate data navigation and hypothesis formulation. We test and evaluate Hyppo-X on two real-world plant (maize) data sets. Our results demonstrate the ability of our approach to delineate divergent subpopulation-level behavior. Notably, our approach shows how environmental factors could influence phenotypic behavior, and how that effect varies across different genotypes and different time scales. To the best of our knowledge, this effort provides one of the first approaches to systematically formalize the problem of hypothesis extraction for phenomics data. Considering the infancy of the phenomics field, tools that help users explore complex data and extract plausible hypotheses in a data-guided manner will be critical to future advancements in the use of such data.
[ { "created": "Fri, 14 Jul 2017 01:03:04 GMT", "version": "v1" }, { "created": "Wed, 5 Jun 2019 10:01:18 GMT", "version": "v2" } ]
2019-06-06
[ [ "Kamruzzaman", "Methun", "" ], [ "Kalyanaraman", "Ananth", "" ], [ "Krishnamoorthy", "Bala", "" ], [ "Hey", "Stefan", "" ], [ "Schnable", "Patrick", "" ] ]
Phenomics is an emerging branch of modern biology that uses high-throughput phenotyping tools to capture multiple environmental and phenotypic traits, often at massive spatial and temporal scales. The resulting high-dimensional data represent a treasure trove of information for providing an in-depth understanding of how multiple factors interact and contribute to the overall growth and behavior of different genotypes. However, computational tools that can parse through such complex data and aid in extracting plausible hypotheses are currently lacking. In this paper, we present Hyppo-X, a new algorithmic approach to visually explore complex phenomics data and in the process characterize the role of environment on phenotypic traits. We model the problem as one of unsupervised structure discovery, and use emerging principles from algebraic topology and graph theory for discovering higher-order structures of complex phenomics data. We present open-source software with interactive visualization capabilities to facilitate data navigation and hypothesis formulation. We test and evaluate Hyppo-X on two real-world plant (maize) data sets. Our results demonstrate the ability of our approach to delineate divergent subpopulation-level behavior. Notably, our approach shows how environmental factors could influence phenotypic behavior, and how that effect varies across different genotypes and different time scales. To the best of our knowledge, this effort provides one of the first approaches to systematically formalize the problem of hypothesis extraction for phenomics data. Considering the infancy of the phenomics field, tools that help users explore complex data and extract plausible hypotheses in a data-guided manner will be critical to future advancements in the use of such data.
1906.00676
Caglar Cakan
Caglar Cakan (1, 2) and Klaus Obermayer (1, 2) ((1) Department of Software Engineering and Theoretical Computer Science, Technische Universit\"at Berlin, Germany, (2) Bernstein Center for Computational Neuroscience Berlin, Germany)
Biophysically grounded mean-field models of neural populations under electrical stimulation
A Python package with an implementation of the AdEx mean-field model can be found at https://github.com/neurolib-dev/neurolib - code for simulation and data analysis can be found at https://github.com/caglarcakan/stimulus_neural_populations
PLOS Comput. Biol. 16, e1007822 (2020)
10.1371/journal.pcbi.1007822
null
q-bio.NC cond-mat.dis-nn nlin.AO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Electrical stimulation of neural systems is a key tool for understanding neural dynamics and ultimately for developing clinical treatments. Many applications of electrical stimulation affect large populations of neurons. However, computational models of large networks of spiking neurons are inherently hard to simulate and analyze. We evaluate a reduced mean-field model of excitatory and inhibitory adaptive exponential integrate-and-fire (AdEx) neurons which can be used to efficiently study the effects of electrical stimulation on large neural populations. The rich dynamical properties of this basic cortical model are described in detail and validated using large network simulations. Bifurcation diagrams reflecting the network's state reveal asynchronous up- and down-states, bistable regimes, and oscillatory regions corresponding to fast excitation-inhibition and slow excitation-adaptation feedback loops. The biophysical parameters of the AdEx neuron can be coupled to an electric field with realistic field strengths which then can be propagated up to the population description. We show how on the edge of bifurcation, direct electrical inputs cause network state transitions, such as turning on and off oscillations of the population rate. Oscillatory input can frequency-entrain and phase-lock endogenous oscillations. Relatively weak electric field strengths on the order of 1 V/m are able to produce these effects, indicating that field effects are strongly amplified in the network. The effects of time-varying external stimulation are well-predicted by the mean-field model, further underpinning the utility of low-dimensional neural mass models.
[ { "created": "Mon, 3 Jun 2019 09:57:04 GMT", "version": "v1" }, { "created": "Thu, 25 Jul 2019 10:34:26 GMT", "version": "v2" }, { "created": "Sun, 26 Jan 2020 12:51:26 GMT", "version": "v3" }, { "created": "Wed, 29 Jan 2020 10:44:05 GMT", "version": "v4" }, { "created": "Wed, 10 Jun 2020 10:09:41 GMT", "version": "v5" }, { "created": "Thu, 9 Jul 2020 07:59:31 GMT", "version": "v6" }, { "created": "Tue, 17 Nov 2020 14:54:26 GMT", "version": "v7" } ]
2020-11-18
[ [ "Cakan", "Caglar", "" ], [ "Obermayer", "Klaus", "" ] ]
Electrical stimulation of neural systems is a key tool for understanding neural dynamics and ultimately for developing clinical treatments. Many applications of electrical stimulation affect large populations of neurons. However, computational models of large networks of spiking neurons are inherently hard to simulate and analyze. We evaluate a reduced mean-field model of excitatory and inhibitory adaptive exponential integrate-and-fire (AdEx) neurons which can be used to efficiently study the effects of electrical stimulation on large neural populations. The rich dynamical properties of this basic cortical model are described in detail and validated using large network simulations. Bifurcation diagrams reflecting the network's state reveal asynchronous up- and down-states, bistable regimes, and oscillatory regions corresponding to fast excitation-inhibition and slow excitation-adaptation feedback loops. The biophysical parameters of the AdEx neuron can be coupled to an electric field with realistic field strengths which then can be propagated up to the population description. We show how on the edge of bifurcation, direct electrical inputs cause network state transitions, such as turning on and off oscillations of the population rate. Oscillatory input can frequency-entrain and phase-lock endogenous oscillations. Relatively weak electric field strengths on the order of 1 V/m are able to produce these effects, indicating that field effects are strongly amplified in the network. The effects of time-varying external stimulation are well-predicted by the mean-field model, further underpinning the utility of low-dimensional neural mass models.
q-bio/0502021
Ugo Bastolla
Ugo Bastolla, Michael L\"assig, Susanna C. Manrubia and Angelo Valleriani
Biodiversity in model ecosystems, I: Coexistence conditions for competing species
null
null
null
null
q-bio.PE
null
This is the first of two papers where we discuss the limits imposed by competition on the biodiversity of species communities. In this first paper we study the coexistence of competing species at the fixed point of population dynamic equations. For many simple models, this imposes a limit on the width of the productivity distribution, which is more severe the more diverse the ecosystem is (Chesson, 1994). Here we review and generalize this analysis, beyond the ``mean-field''-like approximation of the competition matrix used in previous works, and extend it to structured food webs. In all cases analysed, we obtain qualitatively similar relations between biodiversity and competition: the narrower the productivity distribution is, the more species can stably coexist. We discuss how this result, considered together with environmental fluctuations, limits the maximal biodiversity that a trophic level can host.
[ { "created": "Sat, 19 Feb 2005 15:49:35 GMT", "version": "v1" } ]
2007-05-23
[ [ "Bastolla", "Ugo", "" ], [ "Lässig", "Michael", "" ], [ "Manrubia", "Susanna C.", "" ], [ "Valleriani", "Angelo", "" ] ]
This is the first of two papers where we discuss the limits imposed by competition on the biodiversity of species communities. In this first paper we study the coexistence of competing species at the fixed point of population dynamic equations. For many simple models, this imposes a limit on the width of the productivity distribution, which is more severe the more diverse the ecosystem is (Chesson, 1994). Here we review and generalize this analysis, beyond the ``mean-field''-like approximation of the competition matrix used in previous works, and extend it to structured food webs. In all cases analysed, we obtain qualitatively similar relations between biodiversity and competition: the narrower the productivity distribution is, the more species can stably coexist. We discuss how this result, considered together with environmental fluctuations, limits the maximal biodiversity that a trophic level can host.
2312.14685
Laurent Perrinet
Hugo J. Ladret, Christian Casanova, Laurent Udo Perrinet
Kernel Heterogeneity Improves Sparseness of Natural Images Representations
null
null
null
null
q-bio.NC cs.NE
http://creativecommons.org/licenses/by-sa/4.0/
Both biological and artificial neural networks inherently balance their performance with their operational cost, which constrains their computational abilities. Typically, an efficient neuromorphic neural network is one that learns representations that reduce the redundancies and dimensionality of its input. This is for instance achieved in sparse coding, and sparse representations derived from natural images yield representations that are heterogeneous, both in their sampling of input features and in the variance of those features. Here, we investigated the connection between natural images' structure, particularly oriented features, and their corresponding sparse codes. We showed that representations of input features scattered across multiple levels of variance substantially improve the sparseness and resilience of sparse codes, at the cost of reconstruction performance. This echoes the structure of the model's input, allowing the model to account for the heterogeneously aleatoric structures of natural images. We demonstrate that learning kernels from natural images produces heterogeneity by balancing between approximate and dense representations, which improves all reconstruction metrics. Using a parametrized control of the kernels' heterogeneity used by a convolutional sparse coding algorithm, we show that heterogeneity emphasizes sparseness, while homogeneity improves representation granularity. In a broader context, these encoding strategies can serve as inputs to deep convolutional neural networks. We prove that such variance-encoded sparse image datasets enhance computational efficiency, emphasizing the benefits of kernel heterogeneity to leverage naturalistic and variant input structures and possible applications to improve the throughput of neuromorphic hardware.
[ { "created": "Fri, 22 Dec 2023 13:36:27 GMT", "version": "v1" } ]
2023-12-25
[ [ "Ladret", "Hugo J.", "" ], [ "Casanova", "Christian", "" ], [ "Perrinet", "Laurent Udo", "" ] ]
Both biological and artificial neural networks inherently balance their performance with their operational cost, which constrains their computational abilities. Typically, an efficient neuromorphic neural network is one that learns representations that reduce the redundancies and dimensionality of its input. This is for instance achieved in sparse coding, and sparse representations derived from natural images yield representations that are heterogeneous, both in their sampling of input features and in the variance of those features. Here, we investigated the connection between natural images' structure, particularly oriented features, and their corresponding sparse codes. We showed that representations of input features scattered across multiple levels of variance substantially improve the sparseness and resilience of sparse codes, at the cost of reconstruction performance. This echoes the structure of the model's input, allowing the model to account for the heterogeneously aleatoric structures of natural images. We demonstrate that learning kernels from natural images produces heterogeneity by balancing between approximate and dense representations, which improves all reconstruction metrics. Using a parametrized control of the kernels' heterogeneity used by a convolutional sparse coding algorithm, we show that heterogeneity emphasizes sparseness, while homogeneity improves representation granularity. In a broader context, these encoding strategies can serve as inputs to deep convolutional neural networks. We prove that such variance-encoded sparse image datasets enhance computational efficiency, emphasizing the benefits of kernel heterogeneity to leverage naturalistic and variant input structures and possible applications to improve the throughput of neuromorphic hardware.
1706.05256
Vitalii Akimenko
Vitalii Akimenko, Cyril Piou
Two-phase Age-Structured Model of Solitarious and Gregarious Locust Population Dynamics
null
null
10.1002/mma.4947
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we study a nonlinear age-structured model of locust population dynamics with variable time of egg incubation that describes the phase polyphenism and behaviour of desert locust, Schistocerca gregaria. The analysis of asymptotic stability of trivial and nontrivial equilibria of the autonomous system allows us to derive the conditions and understand the particularities of bidirectional phase transitions between the solitarious and gregarious phases. Simulation of locust population dynamics with different conditions of phase transitions for autonomous and non-autonomous dynamical systems of our two-phase age-structured competitive model with time delay reproduces the behavioural features of documented outbreak dynamics of Schistocerca gregaria.
[ { "created": "Fri, 16 Jun 2017 12:52:21 GMT", "version": "v1" } ]
2018-12-26
[ [ "Akimenko", "Vitalii", "" ], [ "Piou", "Cyril", "" ] ]
In this paper we study a nonlinear age-structured model of locust population dynamics with variable time of egg incubation that describes the phase polyphenism and behaviour of desert locust, Schistocerca gregaria. The analysis of asymptotic stability of trivial and nontrivial equilibria of the autonomous system allows us to derive the conditions and understand the particularities of bidirectional phase transitions between the solitarious and gregarious phases. Simulation of locust population dynamics with different conditions of phase transitions for autonomous and non-autonomous dynamical systems of our two-phase age-structured competitive model with time delay reproduces the behavioural features of documented outbreak dynamics of Schistocerca gregaria.
1909.11899
Po-Ya Hsu
Po-Ya Hsu
Dynamic Parameter Estimation of Brain Mechanisms
null
null
null
null
q-bio.QM cs.CE math.DS q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Demystifying effective connectivity among neuronal populations has become the trend to understand the brain mechanisms of Parkinson's disease, schizophrenia, mild traumatic brain injury, and many other unlisted neurological diseases. Dynamic modeling is a state-of-the-art approach to explore various connectivities among neuronal populations corresponding to different electrophysiological responses. Through estimating the parameters in the dynamic models, including the strengths and propagation delays of the electrophysiological signals, the discovery of the underlying connectivities can lead to the elucidation of functional brain mechanisms. In this report, we survey six dynamic models that describe the intrinsic function of a single neuronal/subneuronal population and three effective network estimation methods that can trace the connections among the neuronal/subneuronal populations. The six dynamic models are event related potential, local field potential, conductance-based neural mass model, mean field model, neural field model, and canonical micro-circuits; the three effective network estimation approaches are dynamic causal modeling, structural causal model, and vector autoregression. Subsequently, we discuss dynamic parameter estimation methods including variational Bayesian, particle filtering, Metropolis-Hastings algorithm, Gauss-Newton algorithm, collocation method, and constrained optimization. We summarize the merits and drawbacks of each model, network estimation approach, and parameter estimation method. In addition, we demonstrate an exemplary effective network estimation problem statement. Last, we identify possible future work and challenges to develop an elevated package.
[ { "created": "Thu, 26 Sep 2019 05:28:40 GMT", "version": "v1" } ]
2019-09-27
[ [ "Hsu", "Po-Ya", "" ] ]
Demystifying effective connectivity among neuronal populations has become the trend to understand the brain mechanisms of Parkinson's disease, schizophrenia, mild traumatic brain injury, and many other unlisted neurological diseases. Dynamic modeling is a state-of-the-art approach to explore various connectivities among neuronal populations corresponding to different electrophysiological responses. Through estimating the parameters in the dynamic models, including the strengths and propagation delays of the electrophysiological signals, the discovery of the underlying connectivities can lead to the elucidation of functional brain mechanisms. In this report, we survey six dynamic models that describe the intrinsic function of a single neuronal/subneuronal population and three effective network estimation methods that can trace the connections among the neuronal/subneuronal populations. The six dynamic models are event related potential, local field potential, conductance-based neural mass model, mean field model, neural field model, and canonical micro-circuits; the three effective network estimation approaches are dynamic causal modeling, structural causal model, and vector autoregression. Subsequently, we discuss dynamic parameter estimation methods including variational Bayesian, particle filtering, Metropolis-Hastings algorithm, Gauss-Newton algorithm, collocation method, and constrained optimization. We summarize the merits and drawbacks of each model, network estimation approach, and parameter estimation method. In addition, we demonstrate an exemplary effective network estimation problem statement. Last, we identify possible future work and challenges to develop an elevated package.
2107.07582
Venet Osmani
Behrooz Mamandipoor, Wesley Yeung, Louis Agha-Mir-Salim, David J. Stone, Venet Osmani, Leo Anthony Celi
Prediction of Blood Lactate Values in Critically Ill Patients: A Retrospective Multi-center Cohort Study
15 pages, 6 Appendices
J Clin Monit Comput. 2021 PMID: 34224051
10.1007/s10877-021-00739-4
null
q-bio.QM cs.CY cs.LG stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose. Elevations in initially obtained serum lactate levels are strong predictors of mortality in critically ill patients. Identifying patients whose serum lactate levels are more likely to increase can alert physicians to intensify care and guide them in how frequently to repeat the blood test. We investigate whether machine learning models can predict subsequent serum lactate changes. Methods. We investigated serum lactate change prediction using the MIMIC-III and eICU-CRD datasets in internal as well as external validation of the eICU cohort on the MIMIC-III cohort. Three subgroups were defined based on the initial lactate levels: i) normal group (<2 mmol/L), ii) mild group (2-4 mmol/L), and iii) severe group (>4 mmol/L). Outcomes were defined based on increase or decrease of serum lactate levels between the groups. We also performed sensitivity analysis by defining the outcome as lactate change of >10% and furthermore investigated the influence of the time interval between subsequent lactate measurements on predictive performance. Results. The LSTM models were able to predict deterioration of serum lactate values of MIMIC-III patients with an AUC of 0.77 (95% CI 0.762-0.771) for the normal group, 0.77 (95% CI 0.768-0.772) for the mild group, and 0.85 (95% CI 0.840-0.851) for the severe group, with a slightly lower performance in the external validation. Conclusion. The LSTM demonstrated good discrimination of patients who had deterioration in serum lactate levels. Clinical studies are needed to evaluate whether utilization of a clinical decision support tool based on these results could positively impact decision-making and patient outcomes.
[ { "created": "Wed, 7 Jul 2021 09:46:47 GMT", "version": "v1" } ]
2021-07-19
[ [ "Mamandipoor", "Behrooz", "" ], [ "Yeung", "Wesley", "" ], [ "Agha-Mir-Salim", "Louis", "" ], [ "Stone", "David J.", "" ], [ "Osmani", "Venet", "" ], [ "Celi", "Leo Anthony", "" ] ]
Purpose. Elevations in initially obtained serum lactate levels are strong predictors of mortality in critically ill patients. Identifying patients whose serum lactate levels are more likely to increase can alert physicians to intensify care and guide them in how frequently to repeat the blood test. We investigate whether machine learning models can predict subsequent serum lactate changes. Methods. We investigated serum lactate change prediction using the MIMIC-III and eICU-CRD datasets in internal as well as external validation of the eICU cohort on the MIMIC-III cohort. Three subgroups were defined based on the initial lactate levels: i) normal group (<2 mmol/L), ii) mild group (2-4 mmol/L), and iii) severe group (>4 mmol/L). Outcomes were defined based on increase or decrease of serum lactate levels between the groups. We also performed sensitivity analysis by defining the outcome as lactate change of >10% and furthermore investigated the influence of the time interval between subsequent lactate measurements on predictive performance. Results. The LSTM models were able to predict deterioration of serum lactate values of MIMIC-III patients with an AUC of 0.77 (95% CI 0.762-0.771) for the normal group, 0.77 (95% CI 0.768-0.772) for the mild group, and 0.85 (95% CI 0.840-0.851) for the severe group, with a slightly lower performance in the external validation. Conclusion. The LSTM demonstrated good discrimination of patients who had deterioration in serum lactate levels. Clinical studies are needed to evaluate whether utilization of a clinical decision support tool based on these results could positively impact decision-making and patient outcomes.
1904.00874
Andreas Brechtel
Andreas Brechtel, Thilo Gross, Barbara Drossel
Far-ranging generalist top predators enhance the stability of meta-foodwebs
null
Scientific Reportsvolume 9, Article number: 12268 (2019)
10.1038/s41598-019-48731-y
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying stabilizing factors in foodwebs is a long-standing challenge with wide implications for community ecology and conservation. Here, we investigate the stability of spatially resolved meta-foodwebs with far-ranging super-predators for whom the whole meta-foodweb appears to be a single habitat. By combining generalised modelling with a master stability function approach, we are able to efficiently explore the asymptotic stability of large classes of realistic many-patch meta-foodwebs. We show that meta-foodwebs with far-ranging top predators are more stable than those with localized top predators. Moreover, adding far-ranging generalist top predators to a system can have a net stabilizing effect, despite increasing the food web size. These results highlight the importance of top predator conservation.
[ { "created": "Mon, 1 Apr 2019 14:17:31 GMT", "version": "v1" } ]
2019-09-13
[ [ "Brechtel", "Andreas", "" ], [ "Gross", "Thilo", "" ], [ "Drossel", "Barbara", "" ] ]
Identifying stabilizing factors in foodwebs is a long-standing challenge with wide implications for community ecology and conservation. Here, we investigate the stability of spatially resolved meta-foodwebs with far-ranging super-predators for whom the whole meta-foodweb appears to be a single habitat. By combining generalised modelling with a master stability function approach, we are able to efficiently explore the asymptotic stability of large classes of realistic many-patch meta-foodwebs. We show that meta-foodwebs with far-ranging top predators are more stable than those with localized top predators. Moreover, adding far-ranging generalist top predators to a system can have a net stabilizing effect, despite increasing the food web size. These results highlight the importance of top predator conservation.
2307.00033
Shyam Kumar Sudhakar
Isha Thombre, Pavan Kumar Perepu, Shyam Kumar Sudhakar
Application of data engineering approaches to address challenges in microbiome data for optimal medical decision-making
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
The human gut microbiota is known to contribute to numerous physiological functions of the body and is also implicated in a myriad of pathological conditions. Prolific research work in the past few decades has yielded valuable information regarding the relative taxonomic distribution of gut microbiota. Unfortunately, microbiome data suffer from class imbalance and high dimensionality issues that must be addressed. In this study, we have implemented data engineering algorithms to address the above-mentioned issues inherent to microbiome data. Four standard machine learning classifiers (logistic regression (LR), support vector machines (SVM), random forests (RF), and extreme gradient boosting (XGB) decision trees) were implemented on a previously published dataset. The issue of class imbalance and high dimensionality of the data was addressed through the synthetic minority oversampling technique (SMOTE) and principal component analysis (PCA). Our results indicate that ensemble classifiers (RF and XGB decision trees) exhibit superior classification accuracy in predicting the host phenotype. The application of PCA significantly reduced testing time while maintaining high classification accuracy. The highest classification accuracy was obtained at the species level for most classifiers. The prototype employed in the study addresses the issues inherent to microbiome datasets and could be highly beneficial for providing personalized medicine.
[ { "created": "Fri, 30 Jun 2023 05:36:39 GMT", "version": "v1" }, { "created": "Tue, 11 Jul 2023 11:01:42 GMT", "version": "v2" } ]
2023-07-12
[ [ "Thombre", "Isha", "" ], [ "Perepu", "Pavan Kumar", "" ], [ "Sudhakar", "Shyam Kumar", "" ] ]
The human gut microbiota is known to contribute to numerous physiological functions of the body and is also implicated in a myriad of pathological conditions. Prolific research work in the past few decades has yielded valuable information regarding the relative taxonomic distribution of gut microbiota. Unfortunately, microbiome data suffer from class imbalance and high dimensionality issues that must be addressed. In this study, we have implemented data engineering algorithms to address the above-mentioned issues inherent to microbiome data. Four standard machine learning classifiers (logistic regression (LR), support vector machines (SVM), random forests (RF), and extreme gradient boosting (XGB) decision trees) were implemented on a previously published dataset. The issue of class imbalance and high dimensionality of the data was addressed through the synthetic minority oversampling technique (SMOTE) and principal component analysis (PCA). Our results indicate that ensemble classifiers (RF and XGB decision trees) exhibit superior classification accuracy in predicting the host phenotype. The application of PCA significantly reduced testing time while maintaining high classification accuracy. The highest classification accuracy was obtained at the species level for most classifiers. The prototype employed in the study addresses the issues inherent to microbiome datasets and could be highly beneficial for providing personalized medicine.
2012.06608
Jesus Malo
Jesus Malo, Jose Juan Esteve-Taboada, Guillermo Aguilar, Marianne Maertens, Felix A. Wichmann
Estimating the contribution of early and late noise in vision from psychophysical data
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
In many psychophysical detection and discrimination tasks human performance is thought to be limited by internal or inner noise when neuronal activity is converted into an overt behavioural response. It is unclear, however, to what extent the behaviourally limiting inner noise arises from early noise in the photoreceptors and the retina, or from late noise in cortex at or immediately prior to the decision stage. Presumably, the behaviourally limiting inner noise is a non-trivial combination of both early and late noises. Here we propose a method to quantify the contributions of early and late noise purely from psychophysical data. Our analysis generalizes classical results for linear systems (Burgess and Colborne, 1988) by combining the theory of noise propagation through a nonlinear network (Ahumada, 1987) with the expressions to obtain the perceptual metric along the nonlinear network (Malo and Simoncelli, 2006; Laparra et al., 2010). We show that from threshold-only data the relative contribution of early and late noise can only be determined if the experiments include substantial external noise in some of the stimuli used during experiments. If experimenters collected full psychometric functions, however, then early and late noise sources can be quantified even in the absence of external noise. Our psychophysical estimate of the magnitude of the early noise assuming a standard cascade of linear and nonlinear model stages is substantially lower than the noise in cone photocurrents computed via an accurate model of retinal physiology (Brainard and Wandell, 2020, ISETBIO). This is consistent with the idea that one of the fundamental tasks of early vision is to reduce the comparatively large retinal noise.
[ { "created": "Fri, 11 Dec 2020 19:25:46 GMT", "version": "v1" }, { "created": "Mon, 13 May 2024 14:47:36 GMT", "version": "v2" } ]
2024-05-14
[ [ "Malo", "Jesus", "" ], [ "Esteve-Taboada", "Jose Juan", "" ], [ "Aguilar", "Guillermo", "" ], [ "Maertens", "Marianne", "" ], [ "Wichmann", "Felix A.", "" ] ]
In many psychophysical detection and discrimination tasks human performance is thought to be limited by internal or inner noise when neuronal activity is converted into an overt behavioural response. It is unclear, however, to what extent the behaviourally limiting inner noise arises from early noise in the photoreceptors and the retina, or from late noise in cortex at or immediately prior to the decision stage. Presumably, the behaviourally limiting inner noise is a non-trivial combination of both early and late noises. Here we propose a method to quantify the contributions of early and late noise purely from psychophysical data. Our analysis generalizes classical results for linear systems (Burgess and Colborne, 1988) by combining the theory of noise propagation through a nonlinear network (Ahumada, 1987) with the expressions to obtain the perceptual metric along the nonlinear network (Malo and Simoncelli, 2006; Laparra et al., 2010). We show that from threshold-only data the relative contribution of early and late noise can only be determined if the experiments include substantial external noise in some of the stimuli used during experiments. If experimenters collected full psychometric functions, however, then early and late noise sources can be quantified even in the absence of external noise. Our psychophysical estimate of the magnitude of the early noise assuming a standard cascade of linear and nonlinear model stages is substantially lower than the noise in cone photocurrents computed via an accurate model of retinal physiology (Brainard and Wandell, 2020, ISETBIO). This is consistent with the idea that one of the fundamental tasks of early vision is to reduce the comparatively large retinal noise.
2110.14329
Pengyi Yang
Pengyi Yang, Hao Huang, Chunlei Liu
Feature selection revisited in the single-cell era
null
Genome Biology 22, 321 (2021)
10.1186/s13059-021-02544-3
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Feature selection techniques are essential for high-dimensional data analysis. In the last two decades, their popularity has been fuelled by the increasing availability of high-throughput biomolecular data where high-dimensionality is a common data property. Recent advances in biotechnologies enable global profiling of various molecular and cellular features at single-cell resolution, resulting in large-scale datasets with increased complexity. These technological developments have led to a resurgence in feature selection research and application in the single-cell field. Here, we revisit feature selection techniques and summarise recent developments. We review their versatile application to a range of single-cell data types including those generated from traditional cytometry and imaging technologies and the latest array of single-cell omics technologies. We highlight some of the challenges and future directions on which feature selection could have a significant impact. Finally, we consider the scalability and make general recommendations on the utility of each type of feature selection method. We hope this review serves as a reference point to stimulate future research and application of feature selection in the single-cell era.
[ { "created": "Wed, 27 Oct 2021 10:18:20 GMT", "version": "v1" } ]
2024-01-18
[ [ "Yang", "Pengyi", "" ], [ "Huang", "Hao", "" ], [ "Liu", "Chunlei", "" ] ]
Feature selection techniques are essential for high-dimensional data analysis. In the last two decades, their popularity has been fuelled by the increasing availability of high-throughput biomolecular data where high-dimensionality is a common data property. Recent advances in biotechnologies enable global profiling of various molecular and cellular features at single-cell resolution, resulting in large-scale datasets with increased complexity. These technological developments have led to a resurgence in feature selection research and application in the single-cell field. Here, we revisit feature selection techniques and summarise recent developments. We review their versatile application to a range of single-cell data types including those generated from traditional cytometry and imaging technologies and the latest array of single-cell omics technologies. We highlight some of the challenges and future directions on which feature selection could have a significant impact. Finally, we consider the scalability and make general recommendations on the utility of each type of feature selection method. We hope this review serves as a reference point to stimulate future research and application of feature selection in the single-cell era.
1604.06358
Diala Abu Awad
Diala Abu Awad and Viet-Chi Tran and Sylvain Billiard
Perennial life-histories and demographic advantages may play contradictory roles in the evolution of plant mating systems
28 pages, 5 figures, 2 tables
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When predicting the fate and consequences of recurring deleterious mutations in self-fertilising populations, most models make the assumption that populations have discrete non-overlapping generations. This makes them biologically irrelevant when considering perennial species with overlapping generations and where mating occurs independently of the age group. Previous models studying the effect of perennial life-histories on the genetic properties of populations in the presence of self-fertilisation have done so considering age-dependent selection and have found that, contrary to empirical observations, perennial populations should exhibit lower levels of inbreeding depression. Here we propose a simple deterministic model in continuous time with selection at different fitness traits and feedback between population fitness and size. We find that a perennial life-history can result in high levels of inbreeding depression in spite of inbreeding, due to higher frequencies of heterozygous individuals at the adult stage. We also propose that there may be demographic advantages for self-fertilisation that are independent of reproductive success.
[ { "created": "Thu, 21 Apr 2016 15:33:30 GMT", "version": "v1" } ]
2016-04-22
[ [ "Awad", "Diala Abu", "" ], [ "Tran", "Viet-Chi", "" ], [ "Billiard", "Sylvain", "" ] ]
When predicting the fate and consequences of recurring deleterious mutations in self-fertilising populations most models developed make the assumption that populations have discrete non-overlapping generations. This makes them biologically irrelevant when considering perennial species with over-lapping generations and where mating occurs independently of the age group. Previous models studying the effect of perennial life-histories on the genetic properties of populations in the presence of self-fertilisation have done so considering age-dependent selection and have found that, contrary to empirical observations, perennial populations should exhibit lower levels of inbreeding depression. Here we propose a simple deterministic model in continuous time with selection at different fitness traits and feedback between population fitness and size. We find that a perennial life-history can result in high levels of inbreeding depression in spite of inbreeding, due to higher frequencies of heterozygous individuals at the adult stage. We also propose that there may be demographic advantages for self-fertilisation that are independent of reproductive success.
1709.00883
Fabrizio Pucci Dr.
Fabrizio Pucci and Marianne Rooman
Insights into the relation between noise and biological complexity
5 pages, 3 figures
Phys. Rev. E 98, 012137 (2018)
10.1103/PhysRevE.98.012137
null
q-bio.MN math.DS physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding under which conditions the increase of systems complexity is evolutionary advantageous, and how this trend is related to the modulation of the intrinsic noise, are fascinating issues of utmost importance for synthetic and systems biology. To get insights into these matters, we analyzed chemical reaction networks with different topologies and degrees of complexity, interacting or not with the environment. We showed that the global level of fluctuations at the steady state, as measured by the sum of the Fano factors of the number of molecules of all species, is directly related to the topology of the network. For systems with zero deficiency, this sum is constant and equal to the rank of the network. For higher deficiencies, we observed an increase or decrease of the fluctuation levels according to the values of the reaction fluxes that link internal species, multiplied by the associated stoichiometry. We showed that the noise is reduced when the fluxes all flow towards the species of higher complexity, whereas it is amplified when the fluxes are directed towards lower complexity species.
[ { "created": "Mon, 4 Sep 2017 09:57:12 GMT", "version": "v1" } ]
2018-08-01
[ [ "Pucci", "Fabrizio", "" ], [ "Rooman", "Marianne", "" ] ]
Understanding under which conditions the increase of systems complexity is evolutionary advantageous, and how this trend is related to the modulation of the intrinsic noise, are fascinating issues of utmost importance for synthetic and systems biology. To get insights into these matters, we analyzed chemical reaction networks with different topologies and degrees of complexity, interacting or not with the environment. We showed that the global level of fluctuations at the steady state, as measured by the sum of the Fano factors of the number of molecules of all species, is directly related to the topology of the network. For systems with zero deficiency, this sum is constant and equal to the rank of the network. For higher deficiencies, we observed an increase or decrease of the fluctuation levels according to the values of the reaction fluxes that link internal species, multiplied by the associated stoichiometry. We showed that the noise is reduced when the fluxes all flow towards the species of higher complexity, whereas it is amplified when the fluxes are directed towards lower complexity species.
1904.02049
Tarik Gouhier
Tarik C. Gouhier and Pradeep Pillai
No evidence of fish biodiversity effects on coral reef ecosystem functioning across scales
null
Frontiers in Ecology and Evolution 7 (2019) 212
10.3389/fevo.2019.00212
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We demonstrate that the conclusions drawn by Lefcheck et al. (2019) regarding the positive effects of fish diversity on coral reef ecosystem functioning across scales are flawed because of a series of conceptual and statistical issues that include spurious correlations, the conflation of population size and species diversity effects and a failure to recognize that observing a biodiversity effect at multiple sites is not equivalent to observing it at multiple scales.
[ { "created": "Wed, 3 Apr 2019 15:15:05 GMT", "version": "v1" }, { "created": "Sun, 7 Apr 2019 15:29:00 GMT", "version": "v2" } ]
2019-06-07
[ [ "Gouhier", "Tarik C.", "" ], [ "Pillai", "Pradeep", "" ] ]
We demonstrate that the conclusions drawn by Lefcheck et al. (2019) regarding the positive effects of fish diversity on coral reef ecosystem functioning across scales are flawed because of a series of conceptual and statistical issues that include spurious correlations, the conflation of population size and species diversity effects and a failure to recognize that observing a biodiversity effect at multiple sites is not equivalent to observing it at multiple scales.
1710.00960
Prakash Narayan PhD
Ping Zhou, Anne Hwang, Christopher Shi, Edward Zhu, Farha Naaz, Zainab Rasheed, Michelle Liu, Lindsey S. Jung, Jingsong Li, Kai Jiang, Latha Paka, Michael A. Yamin, Itzhak D. Goldberg and Prakash Narayan
A Circulating Biomarker-based Framework for Diagnosis of Hepatocellular Carcinoma in a Clinically Relevant Model of Non-alcoholic Steatohepatitis; An OAD to NASH
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although cirrhosis is a key risk factor for the development of hepatocellular carcinoma (HCC), mounting evidence indicates that in a subset of patients presenting with non-alcoholic steatohepatitis (NASH), HCC manifests in the absence of cirrhosis. Given the sheer size of the non-alcoholic fatty liver disease (NAFLD) epidemic, and the dismal prognosis associated with late-stage primary liver cancer, there is an urgent need for HCC surveillance in the NASH patient. In the present study, adult male mice randomized to control diet or a fast food diet (FFD) were followed for up to 14 mo and serum level of a panel of HCC-relevant biomarkers was compared with liver biopsies at 3 and 14 mo. Both NAFLD Activity Score (NAS) and hepatic hydroxyproline content were elevated at 3 and 14 mo on FFD. Picrosirius red staining of liver sections revealed a filigree pattern of fibrillar collagen deposition with no cirrhosis at 14 mo on FFD. Nevertheless, 46% of animals bore one or more tumors on their livers confirmed as HCC in hematoxylin-eosin-stained liver sections. Receiver operating characteristic (ROC) curves analysis for serum levels of the HCC biomarkers osteopontin (OPN), alpha-fetoprotein (AFP) and Dickkopf-1 (DKK1) returned concordance-statistic/area under ROC curve of > 0.89. These data suggest that serum levels of OPN (threshold, 218 ng/mL; sensitivity, 82%; specificity, 86%), AFP (136 ng/mL; 91%; 97%) and DKK1 (2.4 ng/mL; 82%; 81%) are diagnostic for HCC in a clinically relevant model of NASH
[ { "created": "Tue, 3 Oct 2017 02:14:22 GMT", "version": "v1" }, { "created": "Thu, 5 Oct 2017 15:25:18 GMT", "version": "v2" }, { "created": "Thu, 12 Oct 2017 14:28:02 GMT", "version": "v3" }, { "created": "Sun, 19 Nov 2017 23:52:11 GMT", "version": "v4" } ]
2017-11-21
[ [ "Zhou", "Ping", "" ], [ "Hwang", "Anne", "" ], [ "Shi", "Christopher", "" ], [ "Zhu", "Edward", "" ], [ "Naaz", "Farha", "" ], [ "Rasheed", "Zainab", "" ], [ "Liu", "Michelle", "" ], [ "Jung", "Lindsey S.", "" ], [ "Li", "Jingsong", "" ], [ "Jiang", "Kai", "" ], [ "Paka", "Latha", "" ], [ "Yamin", "Michael A.", "" ], [ "Goldberg", "Itzhak D.", "" ], [ "Narayan", "Prakash", "" ] ]
Although cirrhosis is a key risk factor for the development of hepatocellular carcinoma (HCC), mounting evidence indicates that in a subset of patients presenting with non-alcoholic steatohepatitis (NASH), HCC manifests in the absence of cirrhosis. Given the sheer size of the non-alcoholic fatty liver disease (NAFLD) epidemic, and the dismal prognosis associated with late-stage primary liver cancer, there is an urgent need for HCC surveillance in the NASH patient. In the present study, adult male mice randomized to control diet or a fast food diet (FFD) were followed for up to 14 mo and serum level of a panel of HCC-relevant biomarkers was compared with liver biopsies at 3 and 14 mo. Both NAFLD Activity Score (NAS) and hepatic hydroxyproline content were elevated at 3 and 14 mo on FFD. Picrosirius red staining of liver sections revealed a filigree pattern of fibrillar collagen deposition with no cirrhosis at 14 mo on FFD. Nevertheless, 46% of animals bore one or more tumors on their livers confirmed as HCC in hematoxylin-eosin-stained liver sections. Receiver operating characteristic (ROC) curves analysis for serum levels of the HCC biomarkers osteopontin (OPN), alpha-fetoprotein (AFP) and Dickkopf-1 (DKK1) returned concordance-statistic/area under ROC curve of > 0.89. These data suggest that serum levels of OPN (threshold, 218 ng/mL; sensitivity, 82%; specificity, 86%), AFP (136 ng/mL; 91%; 97%) and DKK1 (2.4 ng/mL; 82%; 81%) are diagnostic for HCC in a clinically relevant model of NASH
2005.13112
Cristian Tomasetti
Haley Grant Yifan Zhang, Lu Li, Yan Wang, Satomi Kawamoto, Sophie P\'enisson, Daniel F. Fouladi, Shahab Shayesteh, Alejandra Blanco, Saeed Ghandili, Eva Zinreich, Jefferson S. Graves, Seyoun Park, Scott Kern, Jody Hooper, Alan L. Yuille, Elliot K Fishman, Linda Chu, Cristian Tomasetti
Organ size increases with obesity and correlates with cancer risk
null
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Obesity significantly increases cancer risk in various organs. Although this has been recognized for decades, the mechanism through which this happens has never been explained. Here, we show that the volumes of kidneys, pancreas, and liver are strongly correlated (median correlation = 0.625; P-value < 10^-47) with the body mass index (BMI) of an individual. We also find a significant relationship between the increase in organ volume and the increase in cancer risk (P-value < 10^-12). These results provide a mechanism explaining why obese individuals have higher cancer risk in several organs: the larger the organ volume, the more cells at risk of becoming cancerous. These findings are important for a better understanding of the effects obesity has on cancer risk and, more generally, for the development of better preventive strategies to limit the mortality caused by obesity.
[ { "created": "Wed, 27 May 2020 01:26:50 GMT", "version": "v1" } ]
2020-05-28
[ [ "Zhang", "Haley Grant Yifan", "" ], [ "Li", "Lu", "" ], [ "Wang", "Yan", "" ], [ "Kawamoto", "Satomi", "" ], [ "Pénisson", "Sophie", "" ], [ "Fouladi", "Daniel F.", "" ], [ "Shayesteh", "Shahab", "" ], [ "Blanco", "Alejandra", "" ], [ "Ghandili", "Saeed", "" ], [ "Zinreich", "Eva", "" ], [ "Graves", "Jefferson S.", "" ], [ "Park", "Seyoun", "" ], [ "Kern", "Scott", "" ], [ "Hooper", "Jody", "" ], [ "Yuille", "Alan L.", "" ], [ "Fishman", "Elliot K", "" ], [ "Chu", "Linda", "" ], [ "Tomasetti", "Cristian", "" ] ]
Obesity significantly increases cancer risk in various organs. Although this has been recognized for decades, the mechanism through which this happens has never been explained. Here, we show that the volumes of kidneys, pancreas, and liver are strongly correlated (median correlation = 0.625; P-value < 10^-47) with the body mass index (BMI) of an individual. We also find a significant relationship between the increase in organ volume and the increase in cancer risk (P-value < 10^-12). These results provide a mechanism explaining why obese individuals have higher cancer risk in several organs: the larger the organ volume, the more cells at risk of becoming cancerous. These findings are important for a better understanding of the effects obesity has on cancer risk and, more generally, for the development of better preventive strategies to limit the mortality caused by obesity.
1802.06507
Joaquin Goni
Uttara Tipnis, Enrico Amico, Mario Ventresca, Joaquin Goni
Modeling communication processes in the human connectome through cooperative learning
22 pages, 4 tables, 6 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Communication processes within the human brain at different cognitive states are neither well understood nor completely characterized. We assess communication processes in the human connectome using an ant colony-inspired cooperative learning algorithm, starting from a source with no a priori information about the network topology, and cooperatively searching for the target through a pheromone-inspired model. This framework relies on two parameters, namely pheromone perception and edge perception, to define the cognizance and subsequent behaviour of the ants on the network and, overall, the communication processes happening between source and target nodes. Simulations obtained through different configurations allow the identification of path-ensembles that are involved in the communication between node pairs. These path-ensembles may contain different numbers of paths depending on the perception parameters and the node pair. In order to assess the different communication regimes displayed in the simulations and their associations with functional connectivity, we introduce two network measurements, effective path-length and arrival rate. These communication features are tested as individual as well as combined predictors of functional connectivity during different tasks. Finally, different communication regimes are found in different specialized functional networks. Overall, this framework may be used as a test-bed for different communication regimes on top of an underlying topology.
[ { "created": "Mon, 19 Feb 2018 02:58:34 GMT", "version": "v1" } ]
2018-02-20
[ [ "Tipnis", "Uttara", "" ], [ "Amico", "Enrico", "" ], [ "Ventresca", "Mario", "" ], [ "Goni", "Joaquin", "" ] ]
Communication processes within the human brain at different cognitive states are neither well understood nor completely characterized. We assess communication processes in the human connectome using an ant colony-inspired cooperative learning algorithm, starting from a source with no a priori information about the network topology, and cooperatively searching for the target through a pheromone-inspired model. This framework relies on two parameters, namely pheromone perception and edge perception, to define the cognizance and subsequent behaviour of the ants on the network and, overall, the communication processes happening between source and target nodes. Simulations obtained through different configurations allow the identification of path-ensembles that are involved in the communication between node pairs. These path-ensembles may contain different numbers of paths depending on the perception parameters and the node pair. In order to assess the different communication regimes displayed in the simulations and their associations with functional connectivity, we introduce two network measurements, effective path-length and arrival rate. These communication features are tested as individual as well as combined predictors of functional connectivity during different tasks. Finally, different communication regimes are found in different specialized functional networks. Overall, this framework may be used as a test-bed for different communication regimes on top of an underlying topology.
2203.08312
Fernanda Ribeiro
Fernanda L. Ribeiro, Steffen Bollmann, Ross Cunnington, and Alexander M. Puckett
An explainability framework for cortical surface-based deep learning
null
null
null
null
q-bio.NC cs.CV q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
The emergence of explainability methods has enabled a better comprehension of how deep neural networks operate through concepts that are easily understood and implemented by the end user. While most explainability methods have been designed for traditional deep learning, some have been further developed for geometric deep learning, in which data are predominantly represented as graphs. These representations are regularly derived from medical imaging data, particularly in the field of neuroimaging, in which graphs are used to represent brain structural and functional wiring patterns (brain connectomes) and cortical surface models are used to represent the anatomical structure of the brain. Although explainability techniques have been developed for identifying important vertices (brain areas) and features for graph classification, these methods are still lacking for more complex tasks, such as surface-based modality transfer (or vertex-wise regression). Here, we address the need for surface-based explainability approaches by developing a framework for cortical surface-based deep learning, providing a transparent system for modality transfer tasks. First, we adapted a perturbation-based approach for use with surface data. Then, we applied our perturbation-based method to investigate the key features and vertices used by a geometric deep learning model developed to predict brain function from anatomy directly on a cortical surface model. We show that our explainability framework is not only able to identify important features and their spatial location but that it is also reliable and valid.
[ { "created": "Tue, 15 Mar 2022 23:16:49 GMT", "version": "v1" } ]
2022-03-17
[ [ "Ribeiro", "Fernanda L.", "" ], [ "Bollmann", "Steffen", "" ], [ "Cunnington", "Ross", "" ], [ "Puckett", "Alexander M.", "" ] ]
The emergence of explainability methods has enabled a better comprehension of how deep neural networks operate through concepts that are easily understood and implemented by the end user. While most explainability methods have been designed for traditional deep learning, some have been further developed for geometric deep learning, in which data are predominantly represented as graphs. These representations are regularly derived from medical imaging data, particularly in the field of neuroimaging, in which graphs are used to represent brain structural and functional wiring patterns (brain connectomes) and cortical surface models are used to represent the anatomical structure of the brain. Although explainability techniques have been developed for identifying important vertices (brain areas) and features for graph classification, these methods are still lacking for more complex tasks, such as surface-based modality transfer (or vertex-wise regression). Here, we address the need for surface-based explainability approaches by developing a framework for cortical surface-based deep learning, providing a transparent system for modality transfer tasks. First, we adapted a perturbation-based approach for use with surface data. Then, we applied our perturbation-based method to investigate the key features and vertices used by a geometric deep learning model developed to predict brain function from anatomy directly on a cortical surface model. We show that our explainability framework is not only able to identify important features and their spatial location but that it is also reliable and valid.
2309.03348
Longbin Zeng
Longbin Zeng, Fengjian Feng, Wenlian Lu
A General Description of Criticality in Neural Network Models
arXiv admin note: substantial text overlap with arXiv:1803.06180 by other authors
null
null
null
q-bio.NC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent experimental observations have supported the hypothesis that the cerebral cortex operates in a dynamical regime near criticality, where the neuronal network exhibits a mixture of ordered and disordered patterns. However, a comprehensive study of how criticality emerges and how to reproduce it is still lacking. In this study, we investigate coupled networks with conductance-based neurons and illustrate the co-existence of different spiking patterns, including asynchronous irregular (AI) firing and synchronous regular (SR) states, along with a scale-invariant neuronal avalanche phenomenon (criticality). We show that fast-acting synaptic coupling can evoke neuronal avalanches in the mean-dominated regime but has little effect in the fluctuation-dominated regime. In a narrow region of parameter space, the network exhibits avalanche dynamics with power-law avalanche size and duration distributions. We conclude that three stages may be responsible for reproducing the synchronized bursting: mean-dominated subthreshold dynamics, fast initiation of a spike event, and time-delayed inhibitory cancellation. Remarkably, we illustrate the mechanisms underlying critical avalanches in the presence of noise, which can be explained as a stochastic crossing state around the Hopf bifurcation under the mean-dominated regime. Moreover, we apply the ensemble Kalman filter to determine and track effective connections for the neuronal network. The method is validated on noisy synthetic BOLD signals and could exactly reproduce the corresponding critical network activity. Our results provide a special perspective to understand and model the criticality, which can be useful for large-scale modeling and computation of brain dynamics.
[ { "created": "Fri, 25 Aug 2023 07:15:19 GMT", "version": "v1" } ]
2023-09-08
[ [ "Zeng", "Longbin", "" ], [ "Feng", "Fengjian", "" ], [ "Lu", "Wenlian", "" ] ]
Recent experimental observations have supported the hypothesis that the cerebral cortex operates in a dynamical regime near criticality, where the neuronal network exhibits a mixture of ordered and disordered patterns. However, a comprehensive study of how criticality emerges and how to reproduce it is still lacking. In this study, we investigate coupled networks with conductance-based neurons and illustrate the co-existence of different spiking patterns, including asynchronous irregular (AI) firing and synchronous regular (SR) states, along with a scale-invariant neuronal avalanche phenomenon (criticality). We show that fast-acting synaptic coupling can evoke neuronal avalanches in the mean-dominated regime but has little effect in the fluctuation-dominated regime. In a narrow region of parameter space, the network exhibits avalanche dynamics with power-law avalanche size and duration distributions. We conclude that three stages may be responsible for reproducing the synchronized bursting: mean-dominated subthreshold dynamics, fast initiation of a spike event, and time-delayed inhibitory cancellation. Remarkably, we illustrate the mechanisms underlying critical avalanches in the presence of noise, which can be explained as a stochastic crossing state around the Hopf bifurcation under the mean-dominated regime. Moreover, we apply the ensemble Kalman filter to determine and track effective connections for the neuronal network. The method is validated on noisy synthetic BOLD signals and could exactly reproduce the corresponding critical network activity. Our results provide a special perspective to understand and model the criticality, which can be useful for large-scale modeling and computation of brain dynamics.
1510.03982
Philippe Robert
Sarah Eugene (LJLL, RAP), Wei-Feng Xue, Philippe Robert (RAP), Marie Doumic-Jauffret (MAMBA, LJLL)
Insights into the variability of nucleated amyloid polymerization by a minimalistic model of stochastic protein assembly
null
null
10.1063/1.4947472
null
q-bio.CB math.PR physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-assembly of proteins into amyloid aggregates is an important biological phenomenon associated with human diseases such as Alzheimer's disease. Amyloid fibrils also have potential applications in nano-engineering of biomaterials. The kinetics of amyloid assembly show an exponential growth phase preceded by a lag phase, variable in duration as seen in bulk experiments and experiments that mimic the small volumes of cells. Here, to investigate the origins and the properties of the observed variability in the lag phase of amyloid assembly currently not accounted for by deterministic nucleation dependent mechanisms, we formulate a new stochastic minimal model that is capable of describing the characteristics of amyloid growth curves despite its simplicity. We then solve the stochastic differential equations of our model and give mathematical proof of a central limit theorem for the sample growth trajectories of the nucleated aggregation process. These results give an asymptotic description for our simple model, from which closed form analytical results capable of describing and predicting the variability of nucleated amyloid assembly were derived. We also demonstrate the application of our results to inform experiments in a conceptually friendly and clear fashion. Our model offers a new perspective and paves the way for a new and efficient approach on extracting vital information regarding the key initial events of amyloid formation.
[ { "created": "Wed, 14 Oct 2015 07:26:49 GMT", "version": "v1" } ]
2016-05-25
[ [ "Eugene", "Sarah", "", "LJLL, RAP" ], [ "Xue", "Wei-Feng", "", "RAP" ], [ "Robert", "Philippe", "", "RAP" ], [ "Doumic-Jauffret", "Marie", "", "MAMBA, LJLL" ] ]
Self-assembly of proteins into amyloid aggregates is an important biological phenomenon associated with human diseases such as Alzheimer's disease. Amyloid fibrils also have potential applications in nano-engineering of biomaterials. The kinetics of amyloid assembly show an exponential growth phase preceded by a lag phase, variable in duration as seen in bulk experiments and experiments that mimic the small volumes of cells. Here, to investigate the origins and the properties of the observed variability in the lag phase of amyloid assembly currently not accounted for by deterministic nucleation dependent mechanisms, we formulate a new stochastic minimal model that is capable of describing the characteristics of amyloid growth curves despite its simplicity. We then solve the stochastic differential equations of our model and give mathematical proof of a central limit theorem for the sample growth trajectories of the nucleated aggregation process. These results give an asymptotic description for our simple model, from which closed form analytical results capable of describing and predicting the variability of nucleated amyloid assembly were derived. We also demonstrate the application of our results to inform experiments in a conceptually friendly and clear fashion. Our model offers a new perspective and paves the way for a new and efficient approach on extracting vital information regarding the key initial events of amyloid formation.
1508.00073
Liane Gabora
Liane Gabora and Nicole Carbert
A Study and Preliminary Model of Cross-Domain Influences on Creativity
6 pages
(2015). In R. Dale, C. Jennings, P. Maglio, T. Matlock, D. Noelle, A. Warlaumont & J. Yashimi (Eds.), Proceedings of the 37th annual meeting of Cognitive Science Society (pp. 758-763). Austin TX: Cognitive Science Society
null
null
q-bio.NC q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper takes a two-pronged approach to investigate the phenomenon of cross-domain influence on creativity. We present a study in which creative individuals were asked to list influences on their creative work. More than half the listed influences were unrelated to their creative domain, thus demonstrating empirically that cross-domain influence is widespread. We then present a preliminary model of exaptation, a form of cross-domain influence on creativity in which a different context suggests a new use for an existing item, using an example from the study.
[ { "created": "Sat, 1 Aug 2015 04:39:16 GMT", "version": "v1" }, { "created": "Mon, 15 Jul 2019 21:23:48 GMT", "version": "v2" } ]
2019-07-17
[ [ "Gabora", "Liane", "" ], [ "Carbert", "Nicole", "" ] ]
This paper takes a two-pronged approach to investigate the phenomenon of cross-domain influence on creativity. We present a study in which creative individuals were asked to list influences on their creative work. More than half the listed influences were unrelated to their creative domain, thus demonstrating empirically that cross-domain influence is widespread. We then present a preliminary model of exaptation, a form of cross-domain influence on creativity in which a different context suggests a new use for an existing item, using an example from the study.
2307.02173
K. Anton Feenstra
Bas Stringer, Annika Jacobsen, Qingzhen Hou, Hans de Ferrante, Olga Ivanova, Katharina Waury, Jose Gavald\'a-Garci\'a, Sanne Abeln, K. Anton Feenstra
Function Prediction
editorial responsability: K. Anton Feenstra, Sanne Abeln. This chapter is part of the book "Introduction to Protein Structural Bioinformatics". The Preface arXiv:1801.09442 contains links to all the (published) chapters. The update adds available arxiv hyperlinks for the chapters
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory level book for the field of Structural Bioinformatics. This book aims to give an introduction into Structural Bioinformatics, which is where the previous topics meet to explore three dimensional protein structures through computational analysis. We provide an overview of existing computational techniques, to validate, simulate, predict and analyse protein structures. More importantly, it will aim to provide practical knowledge about how and when to use such techniques. We will consider proteins from three major vantage points: Protein structure quantification, Protein structure prediction, and Protein simulation & dynamics. There are still huge gaps in understanding the molecular function of proteins. This raises the question on how we may predict protein function, when little to no knowledge from direct experiments is available. Protein function is a broad concept which spans different scales: from quantum scale effects for catalyzing enzymatic reactions, to phenotypes that manifest at the organism level. In fact, many of these functional scales are entirely different research areas. Here, we will consider prediction of a smaller range of functions, roughly spanning the protein residue-level up to the pathway level. We will give a conceptual overview of which functional aspects of proteins we can predict, which methods are currently available, and how well they work in practice.
[ { "created": "Wed, 5 Jul 2023 10:13:04 GMT", "version": "v1" }, { "created": "Thu, 6 Jul 2023 18:07:31 GMT", "version": "v2" } ]
2023-07-10
[ [ "Stringer", "Bas", "" ], [ "Jacobsen", "Annika", "" ], [ "Hou", "Qingzhen", "" ], [ "de Ferrante", "Hans", "" ], [ "Ivanova", "Olga", "" ], [ "Waury", "Katharina", "" ], [ "Gavaldá-Garciá", "Jose", "" ], [ "Abeln", "Sanne", "" ], [ "Feenstra", "K. Anton", "" ] ]
While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory level book for the field of Structural Bioinformatics. This book aims to give an introduction into Structural Bioinformatics, which is where the previous topics meet to explore three dimensional protein structures through computational analysis. We provide an overview of existing computational techniques, to validate, simulate, predict and analyse protein structures. More importantly, it will aim to provide practical knowledge about how and when to use such techniques. We will consider proteins from three major vantage points: Protein structure quantification, Protein structure prediction, and Protein simulation & dynamics. There are still huge gaps in understanding the molecular function of proteins. This raises the question on how we may predict protein function, when little to no knowledge from direct experiments is available. Protein function is a broad concept which spans different scales: from quantum scale effects for catalyzing enzymatic reactions, to phenotypes that manifest at the organism level. In fact, many of these functional scales are entirely different research areas. Here, we will consider prediction of a smaller range of functions, roughly spanning the protein residue-level up to the pathway level. We will give a conceptual overview of which functional aspects of proteins we can predict, which methods are currently available, and how well they work in practice.
1310.3408
Stephanie Johnson
Stephanie Johnson, Yi-Ju Chen, Rob Phillips
Poly(dA:dT)-rich DNAs are highly flexible in the context of DNA looping
Published version and Supporting Information available at: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0075799
PLOS ONE 8(10): e75799
10.1371/journal.pone.0075799
null
q-bio.BM
http://creativecommons.org/licenses/by/3.0/
Large-scale DNA deformation is ubiquitous in transcriptional regulation in prokaryotes and eukaryotes alike. Though much is known about how transcription factors and constellations of binding sites dictate where and how gene regulation will occur, less is known about the role played by the intervening DNA. In this work we explore the effect of sequence flexibility on transcription factor-mediated DNA looping, by drawing on sequences identified in nucleosome formation and ligase-mediated cyclization assays as being especially favorable for or resistant to large deformations. We examine a poly(dA:dT)-rich, nucleosome-repelling sequence that is often thought to belong to a class of highly inflexible DNAs; two strong nucleosome positioning sequences that share a set of particular sequence features common to nucleosome-preferring DNAs; and a CG-rich sequence representative of high G+C-content genomic regions that correlate with high nucleosome occupancy in vivo. To measure the flexibility of these sequences in the context of DNA looping, we combine the in vitro single-molecule tethered particle motion assay, a canonical looping protein, and a statistical mechanical model that allows us to quantitatively relate the looping probability to the looping free energy. We show that, in contrast to the case of nucleosome occupancy, G+C content does not positively correlate with looping probability, and that despite sharing sequence features that are thought to determine nucleosome affinity, the two strong nucleosome positioning sequences behave markedly dissimilarly in the context of looping. Most surprisingly, the poly(dA:dT)-rich DNA that is often characterized as highly inflexible in fact exhibits one of the highest propensities for looping that we have measured.
[ { "created": "Sat, 12 Oct 2013 17:31:05 GMT", "version": "v1" } ]
2013-10-15
[ [ "Johnson", "Stephanie", "" ], [ "Chen", "Yi-Ju", "" ], [ "Phillips", "Rob", "" ] ]
Large-scale DNA deformation is ubiquitous in transcriptional regulation in prokaryotes and eukaryotes alike. Though much is known about how transcription factors and constellations of binding sites dictate where and how gene regulation will occur, less is known about the role played by the intervening DNA. In this work we explore the effect of sequence flexibility on transcription factor-mediated DNA looping, by drawing on sequences identified in nucleosome formation and ligase-mediated cyclization assays as being especially favorable for or resistant to large deformations. We examine a poly(dA:dT)-rich, nucleosome-repelling sequence that is often thought to belong to a class of highly inflexible DNAs; two strong nucleosome positioning sequences that share a set of particular sequence features common to nucleosome-preferring DNAs; and a CG-rich sequence representative of high G+C-content genomic regions that correlate with high nucleosome occupancy in vivo. To measure the flexibility of these sequences in the context of DNA looping, we combine the in vitro single-molecule tethered particle motion assay, a canonical looping protein, and a statistical mechanical model that allows us to quantitatively relate the looping probability to the looping free energy. We show that, in contrast to the case of nucleosome occupancy, G+C content does not positively correlate with looping probability, and that despite sharing sequence features that are thought to determine nucleosome affinity, the two strong nucleosome positioning sequences behave markedly dissimilarly in the context of looping. Most surprisingly, the poly(dA:dT)-rich DNA that is often characterized as highly inflexible in fact exhibits one of the highest propensities for looping that we have measured.
1109.2923
Guy Shinar PhD
Guy Shinar and Martin Feinberg
Concordant Chemical Reaction Networks
60 pages, 0 figures
null
null
null
q-bio.MN math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a large class of chemical reaction networks, those endowed with a subtle structural property called concordance. We show that the class of concordant networks coincides precisely with the class of networks which, when taken with any weakly monotonic kinetics, invariably give rise to kinetic systems that are injective --- a quality that, among other things, precludes the possibility of switch-like transitions between distinct positive steady states. We also provide persistence characteristics of concordant networks, instability implications of discordance, and consequences of stronger variants of concordance. Some of our results are in the spirit of recent ones by Banaji and Craciun, but here we do not require that every species suffer a degradation reaction. This is especially important in studying biochemical networks, for which it is rare to have all species degrade.
[ { "created": "Tue, 13 Sep 2011 20:57:18 GMT", "version": "v1" } ]
2011-09-15
[ [ "Shinar", "Guy", "" ], [ "Feinberg", "Martin", "" ] ]
We describe a large class of chemical reaction networks, those endowed with a subtle structural property called concordance. We show that the class of concordant networks coincides precisely with the class of networks which, when taken with any weakly monotonic kinetics, invariably give rise to kinetic systems that are injective --- a quality that, among other things, precludes the possibility of switch-like transitions between distinct positive steady states. We also provide persistence characteristics of concordant networks, instability implications of discordance, and consequences of stronger variants of concordance. Some of our results are in the spirit of recent ones by Banaji and Craciun, but here we do not require that every species suffer a degradation reaction. This is especially important in studying biochemical networks, for which it is rare to have all species degrade.
2109.03351
Benjamin Peters
Benjamin Peters, Nikolaus Kriegeskorte
Capturing the objects of vision with neural networks
25 pages, 5 figures
null
null
null
q-bio.NC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human visual perception carves a scene at its physical joints, decomposing the world into objects, which are selectively attended, tracked, and predicted as we engage our surroundings. Object representations emancipate perception from the sensory input, enabling us to keep in mind that which is out of sight and to use perceptual content as a basis for action and symbolic cognition. Human behavioral studies have documented how object representations emerge through grouping, amodal completion, proto-objects, and object files. Deep neural network (DNN) models of visual object recognition, by contrast, remain largely tethered to the sensory input, despite achieving human-level performance at labeling objects. Here, we review related work in both fields and examine how these fields can help each other. The cognitive literature provides a starting point for the development of new experimental tasks that reveal mechanisms of human object perception and serve as benchmarks driving development of deep neural network models that will put the object into object recognition.
[ { "created": "Tue, 7 Sep 2021 21:49:53 GMT", "version": "v1" } ]
2021-09-09
[ [ "Peters", "Benjamin", "" ], [ "Kriegeskorte", "Nikolaus", "" ] ]
Human visual perception carves a scene at its physical joints, decomposing the world into objects, which are selectively attended, tracked, and predicted as we engage our surroundings. Object representations emancipate perception from the sensory input, enabling us to keep in mind that which is out of sight and to use perceptual content as a basis for action and symbolic cognition. Human behavioral studies have documented how object representations emerge through grouping, amodal completion, proto-objects, and object files. Deep neural network (DNN) models of visual object recognition, by contrast, remain largely tethered to the sensory input, despite achieving human-level performance at labeling objects. Here, we review related work in both fields and examine how these fields can help each other. The cognitive literature provides a starting point for the development of new experimental tasks that reveal mechanisms of human object perception and serve as benchmarks driving development of deep neural network models that will put the object into object recognition.
2012.08088
Joseph Santos-Sacchi
Joseph Santos-Sacchi, Dhasakumar Navaratnam and Winston Tan
State dependent effects on the frequency response of prestin real and imaginary components of nonlinear capacitance
16 pages, 11 figures
Scientific Reports volume 11, Article number: 16149 (2021)
10.1038/s41598-021-95121-4
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
The outer hair cell (OHC) membrane harbors a voltage-dependent protein, prestin (SLC26a5), in high density, whose charge movement is evidenced as a nonlinear capacitance (NLC). NLC is bell-shaped, with its peak occurring at a voltage, Vh, where sensor charge is equally distributed across the plasma membrane. Thus, Vh provides information on the conformational state of prestin. Vh is sensitive to membrane tension, shifting to positive voltage as tension increases and is the basis for considering prestin piezoelectric (PZE). NLC can be deconstructed into real and imaginary components that report on charge movements in phase or 90 degrees out of phase with AC voltage. Here we show in membrane macro-patches of the OHC that there is a partial trade-off in the magnitude of real and imaginary components as interrogation frequency increases, as predicted by a recent PZE model (Rabbitt, 2020). However, we find similar behavior in a simple kinetic model of prestin that lacks piezoelectric coupling, the meno presto model. At a particular frequency, Fis, the complex component magnitudes intersect. Using this metric, Fis, which depends on the frequency response of each complex component, we find that initial Vh influences Fis; thus, by categorizing patches into groups of different Vh, (above and below -30 mV) we find that Fis is lower for the negative Vh group. We also find that the effect of membrane tension on complex NLC is dependent, but differentially so, on initial Vh. Whereas the negative group exhibits shifts to higher frequencies for increasing tension, the opposite occurs for the positive group. Despite complex component trade-offs, the low-pass roll-off in absolute magnitude of NLC, which varies little with our perturbations and is indicative of diminishing total charge movement, poses a challenge for a role of voltage-driven prestin in cochlear amplification at very high frequencies.
[ { "created": "Tue, 15 Dec 2020 04:54:47 GMT", "version": "v1" }, { "created": "Wed, 30 Dec 2020 00:52:13 GMT", "version": "v2" } ]
2021-08-27
[ [ "Santos-Sacchi", "Joseph", "" ], [ "Navaratnam", "Dhasakumar", "" ], [ "Tan", "Winston", "" ] ]
The outer hair cell (OHC) membrane harbors a voltage-dependent protein, prestin (SLC26a5), in high density, whose charge movement is evidenced as a nonlinear capacitance (NLC). NLC is bell-shaped, with its peak occurring at a voltage, Vh, where sensor charge is equally distributed across the plasma membrane. Thus, Vh provides information on the conformational state of prestin. Vh is sensitive to membrane tension, shifting to positive voltage as tension increases and is the basis for considering prestin piezoelectric (PZE). NLC can be deconstructed into real and imaginary components that report on charge movements in phase or 90 degrees out of phase with AC voltage. Here we show in membrane macro-patches of the OHC that there is a partial trade-off in the magnitude of real and imaginary components as interrogation frequency increases, as predicted by a recent PZE model (Rabbitt, 2020). However, we find similar behavior in a simple kinetic model of prestin that lacks piezoelectric coupling, the meno presto model. At a particular frequency, Fis, the complex component magnitudes intersect. Using this metric, Fis, which depends on the frequency response of each complex component, we find that initial Vh influences Fis; thus, by categorizing patches into groups of different Vh, (above and below -30 mV) we find that Fis is lower for the negative Vh group. We also find that the effect of membrane tension on complex NLC is dependent, but differentially so, on initial Vh. Whereas the negative group exhibits shifts to higher frequencies for increasing tension, the opposite occurs for the positive group. Despite complex component trade-offs, the low-pass roll-off in absolute magnitude of NLC, which varies little with our perturbations and is indicative of diminishing total charge movement, poses a challenge for a role of voltage-driven prestin in cochlear amplification at very high frequencies.
1806.08613
Joachim Krug
Su-Chan Park, Philipp Klatt and Joachim Krug
Rare beneficial mutations cannot halt Muller's ratchet in spatial populations
7 pages, 6 figures
null
10.1209/0295-5075/123/48001
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Muller's ratchet describes the irreversible accumulation of deleterious mutations in asexual populations. In well-mixed populations the speed of fitness decline is exponentially small in the population size, and any positive rate of beneficial mutations is sufficient to reverse the ratchet in large populations. The behavior is fundamentally different in populations with spatial structure, because the speed of the ratchet remains nonzero in the infinite size limit when the deleterious mutation rate exceeds a critical value. Based on the relation between the spatial ratchet and directed percolation, we develop a scaling theory incorporating both deleterious and beneficial mutations. The theory is verified by extensive simulations in one and two dimensions.
[ { "created": "Fri, 22 Jun 2018 11:54:18 GMT", "version": "v1" } ]
2018-09-26
[ [ "Park", "Su-Chan", "" ], [ "Klatt", "Philipp", "" ], [ "Krug", "Joachim", "" ] ]
Muller's ratchet describes the irreversible accumulation of deleterious mutations in asexual populations. In well-mixed populations the speed of fitness decline is exponentially small in the population size, and any positive rate of beneficial mutations is sufficient to reverse the ratchet in large populations. The behavior is fundamentally different in populations with spatial structure, because the speed of the ratchet remains nonzero in the infinite size limit when the deleterious mutation rate exceeds a critical value. Based on the relation between the spatial ratchet and directed percolation, we develop a scaling theory incorporating both deleterious and beneficial mutations. The theory is verified by extensive simulations in one and two dimensions.
0912.1387
Jan Hoh
William F. Heinz, Jeffrey L. Werbin, Eaton Lattman, Jan H. Hoh
Computing spatial information from Fourier coefficient distributions
23 Pages, 11 Figures (whereof 4 pages and 4 figures in Auxiliary Material)
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an approach to computing spatial information based on Fourier coefficient distributions. The Fourier transform (FT) of an image contains a complete description of the image, and the values of the FT coefficients are uniquely associated with that image. For an image where the distribution of pixels is uncorrelated, the FT coefficients are normally distributed and uncorrelated. Further, the probability distribution for the FT coefficients of such an image can readily be obtained by Parseval's theorem. We take advantage of these properties to compute the spatial information in an image by determining the probability of each coefficient (both real and imaginary parts) in the FT, then using the Shannon formalism to calculate information. By using the probability distribution obtained from Parseval's theorem, an effective distance from the completely uncorrelated or most uncertain case is obtained. The resulting quantity is an information computed in k-space (kSI). This approach provides a robust, facile and highly flexible framework for quantifying spatial information in images and other types of data (of arbitrary dimensions). The kSI metric is tested on a 2D Ising ferromagnet, and the temperature-dependent phase transition is accurately determined from the spatial information in configurations of the system.
[ { "created": "Tue, 8 Dec 2009 02:06:31 GMT", "version": "v1" } ]
2009-12-09
[ [ "Heinz", "William F.", "" ], [ "Werbin", "Jeffrey L.", "" ], [ "Lattman", "Eaton", "" ], [ "Hoh", "Jan H.", "" ] ]
We present an approach to computing spatial information based on Fourier coefficient distributions. The Fourier transform (FT) of an image contains a complete description of the image, and the values of the FT coefficients are uniquely associated with that image. For an image where the distribution of pixels is uncorrelated, the FT coefficients are normally distributed and uncorrelated. Further, the probability distribution for the FT coefficients of such an image can readily be obtained by Parseval's theorem. We take advantage of these properties to compute the spatial information in an image by determining the probability of each coefficient (both real and imaginary parts) in the FT, then using the Shannon formalism to calculate information. By using the probability distribution obtained from Parseval's theorem, an effective distance from the completely uncorrelated or most uncertain case is obtained. The resulting quantity is an information computed in k-space (kSI). This approach provides a robust, facile and highly flexible framework for quantifying spatial information in images and other types of data (of arbitrary dimensions). The kSI metric is tested on a 2D Ising ferromagnet, and the temperature-dependent phase transition is accurately determined from the spatial information in configurations of the system.
2107.10108
Gustavo Machado
Nicolas C. Cardenas, Abagael L. Sykes, Francisco P. N. Lopes, Gustavo Machado
Multiple species animal movements: network properties, disease dynamic and the impact of targeted control actions
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Infectious diseases in livestock are well-known to infect multiple hosts and persist through the combination of within- and between-host transmission pathways. Uncertainty remains about the epidemic consequences of a disease being introduced on farms with more than one susceptible host. Here we describe multi-host contact networks to elucidate the potential of disease spread among farms with multiple species. Four years of between-farm animal movement data for bovine, swine, small ruminant, and multi-host farms were described through both static and time-series networks; the in-going and out-going contact chains were also calculated. We use the proposed stochastic multilevel model to simulate scenarios in which infection was seeded into single-host and multi-host farms, to estimate epidemic trajectories and simulate network-based control actions to assess the reduction of secondarily infected farms. Our analysis showed that the swine network was more connected than the cattle and small ruminant networks in the temporal network view. The small ruminant network was disconnected on its own; however, its interaction with the networks of other hosts enabled the spread of disease throughout the combined network. Independently of the initially infected host, secondary infections were observed across all species. We showed that targeting the top 3.25% of the farms ranked by degree could reduce the total number of infected farms to below 70% at the end of the simulation period. In conclusion, we demonstrated the potential of the multi-host network in disease propagation; therefore, it is important to consider the observed multi-host movement dynamics when designing surveillance and preparedness control strategies.
[ { "created": "Wed, 21 Jul 2021 14:39:17 GMT", "version": "v1" }, { "created": "Fri, 14 Jan 2022 15:45:38 GMT", "version": "v2" } ]
2022-01-17
[ [ "Cardenas", "Nicolas C.", "" ], [ "Sykes", "Abagael L.", "" ], [ "Lopes", "Francisco P. N.", "" ], [ "Machado", "Gustavo", "" ] ]
Infectious diseases in livestock are well-known to infect multiple hosts and persist through the combination of within- and between-host transmission pathways. Uncertainty remains about the epidemic consequences of a disease being introduced on farms with more than one susceptible host. Here we describe multi-host contact networks to elucidate the potential of disease spread among farms with multiple species. Four years of between-farm animal movement data for bovine, swine, small ruminant, and multi-host farms were described through both static and time-series networks; the in-going and out-going contact chains were also calculated. We use the proposed stochastic multilevel model to simulate scenarios in which infection was seeded into single-host and multi-host farms, to estimate epidemic trajectories and simulate network-based control actions to assess the reduction of secondarily infected farms. Our analysis showed that the swine network was more connected than the cattle and small ruminant networks in the temporal network view. The small ruminant network was disconnected on its own; however, its interaction with the networks of other hosts enabled the spread of disease throughout the combined network. Independently of the initially infected host, secondary infections were observed across all species. We showed that targeting the top 3.25% of the farms ranked by degree could reduce the total number of infected farms to below 70% at the end of the simulation period. In conclusion, we demonstrated the potential of the multi-host network in disease propagation; therefore, it is important to consider the observed multi-host movement dynamics when designing surveillance and preparedness control strategies.
2211.09862
Joel Shor
Anastasiya Belyaeva, Joel Shor, Daniel E. Cook, Kishwar Shafin, Daniel Liu, Armin T\"opfer, Aaron M. Wenger, William J. Rowell, Howard Yang, Alexey Kolesnikov, Cory Y. McLean, Maria Nattestad, Andrew Carroll, Pi-Chuan Chang
Knowledge distillation for fast and accurate DNA sequence correction
null
Learning Meaningful Representations of Life, NeurIPS 2022 workshop oral paper
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by/4.0/
Accurate genome sequencing can improve our understanding of biology and the genetic basis of disease. The standard approach for generating DNA sequences from PacBio instruments relies on HMM-based models. Here, we introduce Distilled DeepConsensus - a distilled transformer-encoder model for sequence correction, which improves upon the HMM-based methods with runtime constraints in mind. Distilled DeepConsensus is 1.3x faster and 1.5x smaller than its larger counterpart while improving the yield of high quality reads (Q30) over the HMM-based method by 1.69x (vs. 1.73x for larger model). With improved accuracy of genomic sequences, Distilled DeepConsensus improves downstream applications of genomic sequence analysis such as reducing variant calling errors by 39% (34% for larger model) and improving genome assembly quality by 3.8% (4.2% for larger model). We show that the representations learned by Distilled DeepConsensus are similar between faster and slower models.
[ { "created": "Thu, 17 Nov 2022 19:54:18 GMT", "version": "v1" } ]
2022-11-21
[ [ "Belyaeva", "Anastasiya", "" ], [ "Shor", "Joel", "" ], [ "Cook", "Daniel E.", "" ], [ "Shafin", "Kishwar", "" ], [ "Liu", "Daniel", "" ], [ "Töpfer", "Armin", "" ], [ "Wenger", "Aaron M.", "" ], [ "Rowell", "William J.", "" ], [ "Yang", "Howard", "" ], [ "Kolesnikov", "Alexey", "" ], [ "McLean", "Cory Y.", "" ], [ "Nattestad", "Maria", "" ], [ "Carroll", "Andrew", "" ], [ "Chang", "Pi-Chuan", "" ] ]
Accurate genome sequencing can improve our understanding of biology and the genetic basis of disease. The standard approach for generating DNA sequences from PacBio instruments relies on HMM-based models. Here, we introduce Distilled DeepConsensus - a distilled transformer-encoder model for sequence correction, which improves upon the HMM-based methods with runtime constraints in mind. Distilled DeepConsensus is 1.3x faster and 1.5x smaller than its larger counterpart while improving the yield of high quality reads (Q30) over the HMM-based method by 1.69x (vs. 1.73x for larger model). With improved accuracy of genomic sequences, Distilled DeepConsensus improves downstream applications of genomic sequence analysis such as reducing variant calling errors by 39% (34% for larger model) and improving genome assembly quality by 3.8% (4.2% for larger model). We show that the representations learned by Distilled DeepConsensus are similar between faster and slower models.
2404.11759
Jinzhi Lei
Xue Liu, Yue Deng, Jingying Huang, Yuhong Zhang, Jinzhi Lei
Modelling infectious disease transmission dynamics in conference environments: An individual-based approach
25 pages; 8 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
The global public health landscape is perpetually challenged by the looming threat of infectious diseases. Central to addressing this concern is the imperative to prevent and manage disease transmission during pandemics, particularly in unique settings. This study addresses the transmission dynamics of infectious diseases within conference venues, presenting a computational model designed to simulate transmission processes within a condensed timeframe (one day), beginning with sporadic cases. Our model intricately captures the activities of individual attendees within the conference venue, encompassing meetings, rest intervals, and meal breaks. While meetings entail proximity seating, rest and lunch periods allow attendees to interact with diverse individuals. Moreover, the restroom environment poses an additional avenue for potential infection transmission. Employing an individual-based model, we meticulously replicated the transmission dynamics of infectious diseases, with a specific emphasis on close-contact interactions between infected and susceptible individuals. Through comprehensive analysis of model simulations, we elucidated the intricacies of disease transmission dynamics within conference settings and assessed the efficacy of control strategies to curb disease dissemination. Ultimately, our study proffers a numerical framework for assessing the risk of infectious disease transmission during short-duration conferences, furnishing conference organizers with valuable insights to inform the implementation of targeted prevention and control measures.
[ { "created": "Wed, 17 Apr 2024 21:35:05 GMT", "version": "v1" } ]
2024-04-19
[ [ "Liu", "Xue", "" ], [ "Deng", "Yue", "" ], [ "Huang", "Jingying", "" ], [ "Zhang", "Yuhong", "" ], [ "Lei", "Jinzhi", "" ] ]
The global public health landscape is perpetually challenged by the looming threat of infectious diseases. Central to addressing this concern is the imperative to prevent and manage disease transmission during pandemics, particularly in unique settings. This study addresses the transmission dynamics of infectious diseases within conference venues, presenting a computational model designed to simulate transmission processes within a condensed timeframe (one day), beginning with sporadic cases. Our model intricately captures the activities of individual attendees within the conference venue, encompassing meetings, rest intervals, and meal breaks. While meetings entail proximity seating, rest and lunch periods allow attendees to interact with diverse individuals. Moreover, the restroom environment poses an additional avenue for potential infection transmission. Employing an individual-based model, we meticulously replicated the transmission dynamics of infectious diseases, with a specific emphasis on close-contact interactions between infected and susceptible individuals. Through comprehensive analysis of model simulations, we elucidated the intricacies of disease transmission dynamics within conference settings and assessed the efficacy of control strategies to curb disease dissemination. Ultimately, our study proffers a numerical framework for assessing the risk of infectious disease transmission during short-duration conferences, furnishing conference organizers with valuable insights to inform the implementation of targeted prevention and control measures.
2405.00070
Nisha Pillai
Nisha Pillai, Bindu Nanduri, Michael J Rothrock Jr., Zhiqian Chen, Mahalingam Ramkumar
Bayesian-Guided Generation of Synthetic Microbiomes with Minimized Pathogenicity
null
The 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE EMBC), 2024
null
null
q-bio.QM cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synthetic microbiomes offer new possibilities for modulating microbiota, to address the barriers in multidrug resistance (MDR) research. We present a Bayesian optimization approach to enable efficient searching over the space of synthetic microbiome variants to identify candidates predictive of reduced MDR. Microbiome datasets were encoded into a low-dimensional latent space using autoencoders. Sampling from this space allowed generation of synthetic microbiome signatures. Bayesian optimization was then implemented to select variants for biological screening to maximize identification of designs with restricted MDR pathogens based on minimal samples. Four acquisition functions were evaluated: expected improvement, upper confidence bound, Thompson sampling, and probability of improvement. Based on each strategy, synthetic samples were prioritized according to their MDR detection. Expected improvement, upper confidence bound, and probability of improvement consistently produced synthetic microbiome candidates with significantly fewer searches than Thompson sampling. By combining deep latent space mapping and Bayesian learning for efficient guided screening, this study demonstrated the feasibility of creating bespoke synthetic microbiomes with customized MDR profiles.
[ { "created": "Mon, 29 Apr 2024 21:30:30 GMT", "version": "v1" } ]
2024-05-02
[ [ "Pillai", "Nisha", "" ], [ "Nanduri", "Bindu", "" ], [ "Rothrock", "Michael J", "Jr." ], [ "Chen", "Zhiqian", "" ], [ "Ramkumar", "Mahalingam", "" ] ]
Synthetic microbiomes offer new possibilities for modulating microbiota, to address the barriers in multidrug resistance (MDR) research. We present a Bayesian optimization approach to enable efficient searching over the space of synthetic microbiome variants to identify candidates predictive of reduced MDR. Microbiome datasets were encoded into a low-dimensional latent space using autoencoders. Sampling from this space allowed generation of synthetic microbiome signatures. Bayesian optimization was then implemented to select variants for biological screening to maximize identification of designs with restricted MDR pathogens based on minimal samples. Four acquisition functions were evaluated: expected improvement, upper confidence bound, Thompson sampling, and probability of improvement. Based on each strategy, synthetic samples were prioritized according to their MDR detection. Expected improvement, upper confidence bound, and probability of improvement consistently produced synthetic microbiome candidates with significantly fewer searches than Thompson sampling. By combining deep latent space mapping and Bayesian learning for efficient guided screening, this study demonstrated the feasibility of creating bespoke synthetic microbiomes with customized MDR profiles.
1208.0518
Simon Powers
Simon T. Powers and Alexandra S. Penn and Richard A. Watson
The efficacy of group selection is increased by coexistence dynamics within groups
pp. 498-505 in Bullock S., Noble J., Watson R.A., Bedau M.A. (eds.) Artificial Life XI: Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems. MIT Press (2008). 8 pages, 7 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Selection on the level of loosely associated groups has been suggested as a route towards the evolution of cooperation between individuals and the subsequent formation of higher-level biological entities. Such group selection explanations remain problematic, however, due to the narrow range of parameters under which they can overturn within-group selection that favours selfish behaviour. In principle, individual selection could act on such parameters so as to strengthen the force of between-group selection and hence increase cooperation and individual fitness, as illustrated in our previous work. However, such a process cannot operate in parameter regions where group selection effects are totally absent, since there would be no selective gradient to follow. One key parameter, which when increased often rapidly causes group selection effects to tend to zero, is initial group size, for when groups are formed randomly then even moderately sized groups lack significant variance in their composition. However, the consequent restriction of any group selection effect to small sized groups is derived from models that assume selfish types will competitively exclude their more cooperative counterparts at within-group equilibrium. In such cases, diversity in the migrant pool can tend to zero and accordingly variance in group composition cannot be generated. In contrast, we show that if within-group dynamics lead to a stable coexistence of selfish and cooperative types, then the range of group sizes showing some effect of group selection is much larger.
[ { "created": "Thu, 2 Aug 2012 15:36:12 GMT", "version": "v1" } ]
2012-08-03
[ [ "Powers", "Simon T.", "" ], [ "Penn", "Alexandra S.", "" ], [ "Watson", "Richard A.", "" ] ]
Selection on the level of loosely associated groups has been suggested as a route towards the evolution of cooperation between individuals and the subsequent formation of higher-level biological entities. Such group selection explanations remain problematic, however, due to the narrow range of parameters under which they can overturn within-group selection that favours selfish behaviour. In principle, individual selection could act on such parameters so as to strengthen the force of between-group selection and hence increase cooperation and individual fitness, as illustrated in our previous work. However, such a process cannot operate in parameter regions where group selection effects are totally absent, since there would be no selective gradient to follow. One key parameter, which when increased often rapidly causes group selection effects to tend to zero, is initial group size, for when groups are formed randomly then even moderately sized groups lack significant variance in their composition. However, the consequent restriction of any group selection effect to small sized groups is derived from models that assume selfish types will competitively exclude their more cooperative counterparts at within-group equilibrium. In such cases, diversity in the migrant pool can tend to zero and accordingly variance in group composition cannot be generated. In contrast, we show that if within-group dynamics lead to a stable coexistence of selfish and cooperative types, then the range of group sizes showing some effect of group selection is much larger.
2004.11882
Sergiy Perepelytsya
S.M. Perepelytsya (1), J. Uli\v{c}n\'y (2), S.N. Volkov (1) ((1) Bogolyubov Institute for Theoretical Physics of the National Academy of Sciences of Ukraine, (2) University of P. J. \v{S}af\'arik in Ko\v{s}ice, Institute of Physics at Faculty of Science \'UFV PF UPJ\v{S})
Molecular dynamics study of the competitive binding of hydrogen peroxide and water molecules with the DNA phosphate groups
13 pages, 6 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hydrogen peroxide is present in the living cell at small concentrations that increase under the action of heavy ion beams in the process of anticancer therapy. The interactions of hydrogen peroxide with DNA, proteins and other biological molecules are poorly understood. In the present work, the competitive binding of hydrogen peroxide and water molecules with the DNA double helix backbone has been studied using the molecular dynamics method. The simulations have been carried out for the DNA double helix in a water solution with hydrogen peroxide molecules and Na$^{+}$ counterions. The obtained radial distribution functions of counterions, H$_2$O$_2$ and H$_2$O molecules with respect to the oxygen atoms of the DNA phosphate groups have been used for the analysis of the formation of different complexes. The calculated mean residence times show that a hydrogen peroxide molecule stays at least twice as long near a phosphate group (up to 7 ps) as a water molecule does (about 3 ps). The hydrogen peroxide molecules form more stable complexes with the phosphate groups of the DNA backbone than water molecules do.
[ { "created": "Fri, 24 Apr 2020 17:47:15 GMT", "version": "v1" } ]
2020-04-27
[ [ "Perepelytsya", "S. M.", "" ], [ "Uličný", "J.", "" ], [ "Volkov", "S. N.", "" ] ]
Hydrogen peroxide is present in the living cell at small concentrations that increase under the action of heavy ion beams in the process of anticancer therapy. The interactions of hydrogen peroxide with DNA, proteins and other biological molecules are poorly understood. In the present work, the competitive binding of hydrogen peroxide and water molecules with the DNA double helix backbone has been studied using the molecular dynamics method. The simulations have been carried out for the DNA double helix in a water solution with hydrogen peroxide molecules and Na$^{+}$ counterions. The obtained radial distribution functions of counterions, H$_2$O$_2$ and H$_2$O molecules with respect to the oxygen atoms of the DNA phosphate groups have been used for the analysis of the formation of different complexes. The calculated mean residence times show that a hydrogen peroxide molecule stays at least twice as long near a phosphate group (up to 7 ps) as a water molecule does (about 3 ps). The hydrogen peroxide molecules form more stable complexes with the phosphate groups of the DNA backbone than water molecules do.
1809.06672
Krzysztof Bartoszek
Krzysztof Bartoszek
The phylogenetic effective sample size and jumps
null
K. Bartoszek (2018). "The phylogenetic effective sample size and jumps". In: Mathematica Applicanda (Matematyka Stosowana) 46(1): 25-33
10.14708/ma.v46i1.6368
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The phylogenetic effective sample size is a parameter that has as its goal the quantification of the amount of independent signal in a phylogenetically correlated sample. It was studied for Brownian motion and Ornstein-Uhlenbeck models of trait evolution. Here, we study this composite parameter when the trait is allowed to jump at speciation points of the phylogeny. Our numerical study indicates that there is a non-trivial limit as the effect of jumps grows. The limit depends on the value of the drift parameter of the Ornstein-Uhlenbeck process.
[ { "created": "Tue, 18 Sep 2018 12:37:57 GMT", "version": "v1" } ]
2018-09-19
[ [ "Bartoszek", "Krzysztof", "" ] ]
The phylogenetic effective sample size is a parameter that has as its goal the quantification of the amount of independent signal in a phylogenetically correlated sample. It was studied for Brownian motion and Ornstein-Uhlenbeck models of trait evolution. Here, we study this composite parameter when the trait is allowed to jump at speciation points of the phylogeny. Our numerical study indicates that there is a non-trivial limit as the effect of jumps grows. The limit depends on the value of the drift parameter of the Ornstein-Uhlenbeck process.
0805.1561
R. A. J. van Elburg
Ronald A.J. van Elburg, Arjen van Ooyen
Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models
11 pages with 2 figures included in main text
Neural Computation, July 2009, Vol. 21, No. 7, Pages 1913-1930
10.1162/neco.2009.07-08-815
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has recently been introduced by Carnevale and Hines. This integration scheme imposes non-physiological constraints on the time constants of the synaptic currents it attempts to model which hamper the general applicability. This paper addresses this problem in two ways. First, we provide physical arguments to show why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. This proof rests on a generalization of the Carnevale-Hines lemma, which is a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems including receptor-neurotransmitter dissociation followed by channel closing. We show that this lemma can be generalized and subsequently used for lifting most of the original constraints on the time constants. Thus we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse type combinations than is apparent from the original treatment.
[ { "created": "Mon, 12 May 2008 08:36:29 GMT", "version": "v1" } ]
2010-10-05
[ [ "van Elburg", "Ronald A. J.", "" ], [ "van Ooyen", "Arjen", "" ] ]
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has recently been introduced by Carnevale and Hines. This integration scheme imposes non-physiological constraints on the time constants of the synaptic currents it attempts to model which hamper the general applicability. This paper addresses this problem in two ways. First, we provide physical arguments to show why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. This proof rests on a generalization of the Carnevale-Hines lemma, which is a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems including receptor-neurotransmitter dissociation followed by channel closing. We show that this lemma can be generalized and subsequently used for lifting most of the original constraints on the time constants. Thus we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse type combinations than is apparent from the original treatment.
q-bio/0410024
Jens Christian Claussen
Jens Christian Claussen (University Kiel)
Offdiagonal Complexity: A computationally quick complexity measure for graphs and networks
12 pages, revised version, to appear in Physica A
Physica A 375, 365 (2007)
10.1016/j.physa.2006.08.067
null
q-bio.MN q-bio.QM
null
A vast variety of biological, social, and economic networks shows topologies drastically differing from random graphs; yet the quantitative characterization remains unsatisfactory from a conceptual point of view. Motivated by the discussion of small scale-free networks, a biased link distribution entropy is defined, which takes an extremum for a power law distribution. This approach is extended to the node-node link cross-distribution, whose nondiagonal elements characterize the graph structure beyond link distribution, cluster coefficient and average path length. From here a simple (and computationally cheap) complexity measure can be defined. This Offdiagonal Complexity (OdC) is proposed as a novel measure to characterize the complexity of an undirected graph, or network. While both for regular lattices and fully connected networks OdC is zero, it takes a moderately low value for a random graph and shows high values for apparently complex structures such as scale-free networks and hierarchical trees. The Offdiagonal Complexity approach is applied to the Helicobacter pylori protein interaction network and randomly rewired surrogates.
[ { "created": "Wed, 20 Oct 2004 07:25:42 GMT", "version": "v1" }, { "created": "Wed, 23 Aug 2006 16:48:42 GMT", "version": "v2" } ]
2010-03-11
[ [ "Claussen", "Jens Christian", "", "University Kiel" ] ]
A vast variety of biological, social, and economic networks shows topologies drastically differing from random graphs; yet the quantitative characterization remains unsatisfactory from a conceptual point of view. Motivated by the discussion of small scale-free networks, a biased link distribution entropy is defined, which takes an extremum for a power law distribution. This approach is extended to the node-node link cross-distribution, whose nondiagonal elements characterize the graph structure beyond link distribution, cluster coefficient and average path length. From here a simple (and computationally cheap) complexity measure can be defined. This Offdiagonal Complexity (OdC) is proposed as a novel measure to characterize the complexity of an undirected graph, or network. While both for regular lattices and fully connected networks OdC is zero, it takes a moderately low value for a random graph and shows high values for apparently complex structures such as scale-free networks and hierarchical trees. The Offdiagonal Complexity approach is applied to the Helicobacter pylori protein interaction network and randomly rewired surrogates.
2101.05136
Sara Mohammad Taheri Mrs
Jeremy Zucker, Kaushal Paneri, Sara Mohammad-Taheri, Somya Bhargava, Pallavi Kolambkar, Craig Bakker, Jeremy Teuton, Charles Tapley Hoyt, Kristie Oxford, Robert Ness and Olga Vitek
Leveraging Structured Biological Knowledge for Counterfactual Inference: a Case Study of Viral Pathogenesis
In proceeding of IEEE, Transactions on Big Data
null
null
null
q-bio.QM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Counterfactual inference is a useful tool for comparing outcomes of interventions on complex systems. It requires us to represent the system in the form of a structural causal model, complete with a causal diagram, probabilistic assumptions on exogenous variables, and functional assignments. Specifying such models can be extremely difficult in practice. The process requires substantial domain expertise, and does not scale easily to large systems, multiple systems, or novel system modifications. At the same time, many application domains, such as molecular biology, are rich in structured causal knowledge that is qualitative in nature. This manuscript proposes a general approach for querying a causal biological knowledge graph, and converting the qualitative result into a quantitative structural causal model that can learn from data to answer the question. We demonstrate the feasibility, accuracy and versatility of this approach using two case studies in systems biology. The first demonstrates the appropriateness of the underlying assumptions and the accuracy of the results. The second demonstrates the versatility of the approach by querying a knowledge base for the molecular determinants of a severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)-induced cytokine storm, and performing counterfactual inference to estimate the causal effect of medical countermeasures for severely ill patients.
[ { "created": "Wed, 13 Jan 2021 15:32:17 GMT", "version": "v1" } ]
2021-01-14
[ [ "Zucker", "Jeremy", "" ], [ "Paneri", "Kaushal", "" ], [ "Mohammad-Taheri", "Sara", "" ], [ "Bhargava", "Somya", "" ], [ "Kolambkar", "Pallavi", "" ], [ "Bakker", "Craig", "" ], [ "Teuton", "Jeremy", "" ], [ "Hoyt", "Charles Tapley", "" ], [ "Oxford", "Kristie", "" ], [ "Ness", "Robert", "" ], [ "Vitek", "Olga", "" ] ]
Counterfactual inference is a useful tool for comparing outcomes of interventions on complex systems. It requires us to represent the system in the form of a structural causal model, complete with a causal diagram, probabilistic assumptions on exogenous variables, and functional assignments. Specifying such models can be extremely difficult in practice. The process requires substantial domain expertise, and does not scale easily to large systems, multiple systems, or novel system modifications. At the same time, many application domains, such as molecular biology, are rich in structured causal knowledge that is qualitative in nature. This manuscript proposes a general approach for querying a causal biological knowledge graph, and converting the qualitative result into a quantitative structural causal model that can learn from data to answer the question. We demonstrate the feasibility, accuracy and versatility of this approach using two case studies in systems biology. The first demonstrates the appropriateness of the underlying assumptions and the accuracy of the results. The second demonstrates the versatility of the approach by querying a knowledge base for the molecular determinants of a severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)-induced cytokine storm, and performing counterfactual inference to estimate the causal effect of medical countermeasures for severely ill patients.
1808.03160
Sudeepto Bhattacharya Dr
Shashankaditya Upadhyay, Tamali Mondal, Prasad A. Pathak, Arijit Roy, Girish Agrawal, Sudeepto Bhattacharya
A network theoretic study of potential movement and spread of Lantana camara in Rajaji Tiger Reserve, India
null
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecosystems are often under threat from invasive species which, through their invasion dynamics, create ecological networks to spread. We present preliminary results using a technique of GIS coupled with complex network analysis to model the movement and spread of Lantana camara in Rajaji Tiger Reserve (RTR), India, where prey species are being affected by habitat degradation due to Lantana invasion. Understanding the spatio-temporal aspects of the spread mechanism is essential for better management in the region. The objective of the present study is to develop insight into some key characteristics of the regulatory mechanism for lantana spread inside RTR. Lantana mapping was carried out by field observations along multiple transects and plots, and the data generated were used as input for MaxEnt modelling to identify land patches in the study area that are favourable for lantana growth. The patch information so obtained is integrated with a raster map generated by identifying different topographical features in the study area which are favourable for lantana growth. The integrated data are analysed from a complex network perspective, where relatively dense potential lantana distribution patches are considered as vertices, connected by relatively sparse lantana continuities, identified as edges. The network centrality analysis reveals key patches in the study area that play specialized roles in the spread of lantana over a large region. Hubs in the lantana network are primarily identified as dry seasonal river beds, and their management is proposed as a vital strategy to contain lantana invasion. The lantana network is found to exhibit small-world architecture with a well-formed community structure. We infer that the above properties of the lantana network make a major contribution to regulating the rapid infestation and even spread of the plant through the entire region of study.
[ { "created": "Wed, 8 Aug 2018 08:20:25 GMT", "version": "v1" } ]
2018-08-10
[ [ "Upadhyay", "Shashankaditya", "" ], [ "Mondal", "Tamali", "" ], [ "Pathak", "Prasad A.", "" ], [ "Roy", "Arijit", "" ], [ "Agrawal", "Girish", "" ], [ "Bhattacharya", "Sudeepto", "" ] ]
Ecosystems are often under threat from invasive species which, through their invasion dynamics, create ecological networks to spread. We present preliminary results using a technique of GIS coupled with complex network analysis to model the movement and spread of Lantana camara in Rajaji Tiger Reserve (RTR), India, where prey species are being affected by habitat degradation due to Lantana invasion. Understanding the spatio-temporal aspects of the spread mechanism is essential for better management in the region. The objective of the present study is to develop insight into some key characteristics of the regulatory mechanism for lantana spread inside RTR. Lantana mapping was carried out by field observations along multiple transects and plots, and the data generated were used as input for MaxEnt modelling to identify land patches in the study area that are favourable for lantana growth. The patch information so obtained is integrated with a raster map generated by identifying different topographical features in the study area which are favourable for lantana growth. The integrated data are analysed from a complex network perspective, where relatively dense potential lantana distribution patches are considered as vertices, connected by relatively sparse lantana continuities, identified as edges. The network centrality analysis reveals key patches in the study area that play specialized roles in the spread of lantana over a large region. Hubs in the lantana network are primarily identified as dry seasonal river beds, and their management is proposed as a vital strategy to contain lantana invasion. The lantana network is found to exhibit small-world architecture with a well-formed community structure. We infer that the above properties of the lantana network make a major contribution to regulating the rapid infestation and even spread of the plant through the entire region of study.
1903.05449
Abhijeet Sonawane Mr.
Abhijeet R. Sonawane, Scott T. Weiss, Kimberly Glass, and Amitabh Sharma
Network Medicine in the age of biomedical big data
Review Article
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Network medicine is an emerging area of research dealing with molecular and genetic interactions, network biomarkers of disease, and therapeutic target discovery. Large-scale biomedical data generation offers a unique opportunity to assess the effect and impact of cellular heterogeneity and environmental perturbations on the observed phenotype. Marrying the two, network medicine with biomedical data provides a framework to build meaningful models and extract impactful results at a network level. In this review, we survey existing network types and biomedical data sources. More importantly, we delve into ways in which the network medicine approach, aided by phenotype-specific biomedical data, can be gainfully applied. We provide three paradigms, mainly dealing with three major biological network archetypes: protein-protein interaction, expression-based, and gene regulatory networks. For each of these paradigms, we discuss a broad overview of philosophies under which various network methods work. We also provide a few examples in each paradigm as a test case of its successful application. Finally, we delineate several opportunities and challenges in the field of network medicine. Taken together, the understanding gained from combining biomedical data with networks can be useful for characterizing disease etiologies and identifying therapeutic targets, which, in turn, will lead to better preventive medicine with translational impacts on personalized healthcare.
[ { "created": "Wed, 13 Mar 2019 12:31:44 GMT", "version": "v1" } ]
2019-03-14
[ [ "Sonawane", "Abhijeet R.", "" ], [ "Weiss", "Scott T.", "" ], [ "Glass", "Kimberly", "" ], [ "Sharma", "Amitabh", "" ] ]
Network medicine is an emerging area of research dealing with molecular and genetic interactions, network biomarkers of disease, and therapeutic target discovery. Large-scale biomedical data generation offers a unique opportunity to assess the effect and impact of cellular heterogeneity and environmental perturbations on the observed phenotype. Marrying the two, network medicine with biomedical data provides a framework to build meaningful models and extract impactful results at a network level. In this review, we survey existing network types and biomedical data sources. More importantly, we delve into ways in which the network medicine approach, aided by phenotype-specific biomedical data, can be gainfully applied. We provide three paradigms, mainly dealing with three major biological network archetypes: protein-protein interaction, expression-based, and gene regulatory networks. For each of these paradigms, we discuss a broad overview of philosophies under which various network methods work. We also provide a few examples in each paradigm as a test case of its successful application. Finally, we delineate several opportunities and challenges in the field of network medicine. Taken together, the understanding gained from combining biomedical data with networks can be useful for characterizing disease etiologies and identifying therapeutic targets, which, in turn, will lead to better preventive medicine with translational impacts on personalized healthcare.
2102.12651
Marcelo Lopes Pereira Junior
M. L. Pereira J\'unior, R. T. de Sousa Junior, G. D. Amvame Nze, W. F. Giozza, and L. A. Ribeiro J\'unior
Evaluation of Peppermint Leaf Flavonoids as SARS-CoV-2 Spike Receptor-Binding Domain Attachment Inhibitors to the Human ACE2 Receptor: A Molecular Docking Study
11 pages, 06 figures and 02 tables
null
null
null
q-bio.BM physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Virtual screening is a computational technique widely used for identifying small molecules which are most likely to bind to a protein target. Here, we performed a molecular docking study to propose potential candidates to prevent the RBD/ACE2 attachment. These candidates are sixteen different flavonoids present in the peppermint leaf. Results showed that Luteolin 7-O-neohesperidoside is the peppermint flavonoid with the highest binding affinity for the RBD/ACE2 complex (about -9.18 kcal/mol). On the other hand, Sakuranetin presented the lowest affinity (about -6.38 kcal/mol). Binding affinities of the other peppermint flavonoids ranged from -6.44 kcal/mol up to -9.05 kcal/mol. The binding site surface analysis showed pocket-like regions on the RBD/ACE2 complex that yield several interactions (mostly hydrogen bonds) between the flavonoid and the amino acid residues of the proteins. This study can open channels for the understanding of the roles of flavonoids against COVID-19 infection.
[ { "created": "Thu, 25 Feb 2021 03:02:41 GMT", "version": "v1" }, { "created": "Wed, 22 Sep 2021 01:52:16 GMT", "version": "v2" } ]
2021-09-23
[ [ "Júnior", "M. L. Pereira", "" ], [ "Junior", "R. T. de Sousa", "" ], [ "Nze", "G. D. Amvame", "" ], [ "Giozza", "W. F.", "" ], [ "Júnior", "L. A. Ribeiro", "" ] ]
Virtual screening is a computational technique widely used for identifying small molecules which are most likely to bind to a protein target. Here, we performed a molecular docking study to propose potential candidates to prevent the RBD/ACE2 attachment. These candidates are sixteen different flavonoids present in the peppermint leaf. Results showed that Luteolin 7-O-neohesperidoside is the peppermint flavonoid with the highest binding affinity for the RBD/ACE2 complex (about -9.18 kcal/mol). On the other hand, Sakuranetin presented the lowest affinity (about -6.38 kcal/mol). Binding affinities of the other peppermint flavonoids ranged from -6.44 kcal/mol up to -9.05 kcal/mol. The binding site surface analysis showed pocket-like regions on the RBD/ACE2 complex that yield several interactions (mostly hydrogen bonds) between the flavonoid and the amino acid residues of the proteins. This study can open channels for the understanding of the roles of flavonoids against COVID-19 infection.
1402.3065
Joachim Krug
Johannes Neidhart, Ivan G. Szendro and Joachim Krug
Adaptation in tunably rugged fitness landscapes: The Rough Mount Fuji Model
43 pages, 12 figures; revised version with new results on the number of fitness maxima
Genetics 198: 699-721 (2014)
10.1534/genetics.114.167668
null
q-bio.PE cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Much of the current theory of adaptation is based on Gillespie's mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyse a simple fitness landscape model with tunable ruggedness based on the Rough Mount Fuji (RMF) model originally introduced by Aita et al. [Biopolymers 54:64-79 (2000)] in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulae for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM model. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage [Miller et al., Genetics 187:185-202 (2011)].
[ { "created": "Thu, 13 Feb 2014 08:55:02 GMT", "version": "v1" }, { "created": "Fri, 1 Aug 2014 09:39:00 GMT", "version": "v2" } ]
2014-11-11
[ [ "Neidhart", "Johannes", "" ], [ "Szendro", "Ivan G.", "" ], [ "Krug", "Joachim", "" ] ]
Much of the current theory of adaptation is based on Gillespie's mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyse a simple fitness landscape model with tunable ruggedness based on the Rough Mount Fuji (RMF) model originally introduced by Aita et al. [Biopolymers 54:64-79 (2000)] in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulae for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM model. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage [Miller et al., Genetics 187:185-202 (2011)].
2403.04820
Rajesh Kumar
Saurav Bharadwaj, Akshita Midha, Shikha Sharma, Gurupkar Singh Sidhu, Rajesh Kumar
Optical Screening of Citrus Leaf Diseases Using Label-Free Spectroscopic Tools: A Review
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Citrus diseases pose threats to citrus farming and result in economic losses worldwide. Nucleic acid- and serology-based detection methods and immunochromatographic assays are commonly used, but these laboratory tests are laborious and expensive and may be subject to cross-reaction and contamination. Modern optical spectroscopic techniques offer a promising alternative as they are label-free, sensitive, rapid, non-destructive, and demonstrate the potential for incorporation into an autonomous system for disease detection in citrus orchards. Nevertheless, the majority of optical spectroscopic methods for citrus disease detection are still in the trial phases and require additional efforts to be established as efficient and commercially viable methods. The review presents an overview of fundamental working principles and the state of the art, and explains the applications and limitations of optical spectroscopy techniques, including the spectroscopic imaging approach (hyperspectral imaging), in the identification of diseases in citrus plants. The review highlights (1) the technical specifications of optical spectroscopic tools that can potentially be utilized in field measurements, (2) their applications in screening citrus diseases through leaf spectroscopy, and (3) their benefits and limitations, including future insights into label-free identification of citrus diseases. Moreover, the role of artificial intelligence is reviewed as a potentially effective tool for spectral analysis, enabling more accurate detection of infected citrus leaves even before the appearance of visual symptoms by leveraging compositional, morphological, and chemometric characteristics of the plant leaves. The review aims to encourage stakeholders to enhance the development and commercialization of field-based, label-free optical tools for the rapid and early-stage screening of citrus diseases in plants.
[ { "created": "Thu, 7 Mar 2024 14:00:39 GMT", "version": "v1" } ]
2024-03-11
[ [ "Bharadwaj", "Saurav", "" ], [ "Midha", "Akshita", "" ], [ "Sharma", "Shikha", "" ], [ "Sidhu", "Gurupkar Singh", "" ], [ "Kumar", "Rajesh", "" ] ]
Citrus diseases pose threats to citrus farming and result in economic losses worldwide. Nucleic acid- and serology-based detection methods and immunochromatographic assays are commonly used, but these laboratory tests are laborious, expensive, and may be subject to cross-reaction and contamination. Modern optical spectroscopic techniques offer a promising alternative as they are label-free, sensitive, rapid, non-destructive, and demonstrate the potential for incorporation into an autonomous system for disease detection in citrus orchards. Nevertheless, the majority of optical spectroscopic methods for citrus disease detection are still in the trial phases and require additional efforts to be established as efficient and commercially viable methods. The review presents an overview of fundamental working principles and the state of the art, and explains the applications and limitations of optical spectroscopy techniques, including the spectroscopic imaging approach (hyperspectral imaging), in the identification of diseases in citrus plants. The review highlights (1) the technical specifications of optical spectroscopic tools that can potentially be utilized in field measurements, (2) their applications in screening citrus diseases through leaf spectroscopy, and (3) their benefits and limitations, including future insights into label-free identification of citrus diseases. Moreover, the role of artificial intelligence is reviewed as a potentially effective tool for spectral analysis, enabling more accurate detection of infected citrus leaves even before the appearance of visual symptoms by leveraging compositional, morphological, and chemometric characteristics of the plant leaves. The review aims to encourage stakeholders to enhance the development and commercialization of field-based, label-free optical tools for the rapid and early-stage screening of citrus diseases in plants.
1802.08744
Jakub Otwinowski
Jakub Otwinowski
Biophysical inference of epistasis and the effects of mutations on protein stability and function
null
null
10.1093/molbev/msy141
null
q-bio.BM q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the relationship between protein sequence, function, and stability is a fundamental problem in biology. While high-throughput methods have produced large numbers of sequence-function pairs, functional assays do not distinguish whether mutations directly affect function or destabilize the protein. Here, we introduce a statistical method to infer the underlying biophysics from a high-throughput binding assay by combining information from many mutated variants. We fit a thermodynamic model describing the bound, unbound, and unfolded states to high-quality data of protein G domain B1 binding to IgG-Fc. We infer an energy landscape with distinct folding and binding energies for each substitution, providing a detailed view of how mutations affect binding and stability across the protein. We accurately infer the folding energy of each variant in physical units, validated by independent data, whereas previous high-throughput methods could only measure indirect changes in stability. While we assume an additive sequence-energy relationship, the binding fraction is epistatic due to its non-linear relation to energy. Despite having no epistasis in energy, our model explains much of the observed epistasis in binding fraction, with the remaining epistasis identifying conformationally dynamic regions.
[ { "created": "Fri, 23 Feb 2018 21:50:37 GMT", "version": "v1" }, { "created": "Fri, 30 Mar 2018 23:59:39 GMT", "version": "v2" } ]
2018-12-03
[ [ "Otwinowski", "Jakub", "" ] ]
Understanding the relationship between protein sequence, function, and stability is a fundamental problem in biology. While high-throughput methods have produced large numbers of sequence-function pairs, functional assays do not distinguish whether mutations directly affect function or destabilize the protein. Here, we introduce a statistical method to infer the underlying biophysics from a high-throughput binding assay by combining information from many mutated variants. We fit a thermodynamic model describing the bound, unbound, and unfolded states to high-quality data of protein G domain B1 binding to IgG-Fc. We infer an energy landscape with distinct folding and binding energies for each substitution, providing a detailed view of how mutations affect binding and stability across the protein. We accurately infer the folding energy of each variant in physical units, validated by independent data, whereas previous high-throughput methods could only measure indirect changes in stability. While we assume an additive sequence-energy relationship, the binding fraction is epistatic due to its non-linear relation to energy. Despite having no epistasis in energy, our model explains much of the observed epistasis in binding fraction, with the remaining epistasis identifying conformationally dynamic regions.
1611.06784
Luis Enrique Correa Rocha Dr
Luis E C Rocha, Vikramjit Singh, Markus Esch, Tom Lenaerts, Mikael Stenhem, Fredrik Liljeros, Anna Thorson
Modeling contact networks of patients and MRSA spread in Swedish hospitals
null
Scientific Reports 10, 2020, 9336
10.1038/s41598-020-66270-9
null
q-bio.PE physics.pop-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Methicillin-resistant Staphylococcus aureus (MRSA) is a difficult-to-treat infection that, in the European Union alone, affects about 150,000 patients and causes extra costs of 380 million euros annually to health-care systems. Increasing efforts have been taken to mitigate the epidemics and to avoid potential outbreaks in low-endemic settings. Understanding the population dynamics of MRSA through modeling is essential to identify the causal mechanisms driving the epidemics and to generalize conclusions to different contexts. We develop an innovative high-resolution spatiotemporal contact network model of interactions between patients to reproduce the hospital population of Stockholm County in Sweden and simulate the spread of MRSA within this population. Our model captures the spatial and temporal heterogeneity caused by human behavior and by the dynamics of mobility within wards and hospitals. We estimate that in this population the epidemic threshold is at about 0.008. We also identify that these heterogeneous contact patterns cause the emergence of super-spreader patients and a polynomial growth of the epidemic curve. We finally study the effect of standard intervention control strategies and identify that screening is more effective than improved hygiene at producing smaller or null outbreaks.
[ { "created": "Mon, 21 Nov 2016 13:52:31 GMT", "version": "v1" } ]
2021-07-07
[ [ "Rocha", "Luis E C", "" ], [ "Singh", "Vikramjit", "" ], [ "Esch", "Markus", "" ], [ "Lenaerts", "Tom", "" ], [ "Stenhem", "Mikael", "" ], [ "Liljeros", "Fredrik", "" ], [ "Thorson", "Anna", "" ] ]
Methicillin-resistant Staphylococcus aureus (MRSA) is a difficult-to-treat infection that, in the European Union alone, affects about 150,000 patients and causes extra costs of 380 million euros annually to health-care systems. Increasing efforts have been taken to mitigate the epidemics and to avoid potential outbreaks in low-endemic settings. Understanding the population dynamics of MRSA through modeling is essential to identify the causal mechanisms driving the epidemics and to generalize conclusions to different contexts. We develop an innovative high-resolution spatiotemporal contact network model of interactions between patients to reproduce the hospital population of Stockholm County in Sweden and simulate the spread of MRSA within this population. Our model captures the spatial and temporal heterogeneity caused by human behavior and by the dynamics of mobility within wards and hospitals. We estimate that in this population the epidemic threshold is at about 0.008. We also identify that these heterogeneous contact patterns cause the emergence of super-spreader patients and a polynomial growth of the epidemic curve. We finally study the effect of standard intervention control strategies and identify that screening is more effective than improved hygiene at producing smaller or null outbreaks.
1411.0140
Peter Clote Peter Clote
Peter Clote
Expected degree for RNA secondary structure networks
25 pages, 5 figures, 5 tables
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Consider the network of all secondary structures of a given RNA sequence, where nodes are connected when the corresponding structures have base pair distance one. The expected degree of the network is the average number of neighbors, where the average may be computed with respect to either the uniform or the Boltzmann probability. Here we describe the first algorithm, RNAexpNumNbors, that can compute the expected number of neighbors, or expected network degree, of an input sequence. For RNA sequences from the Rfam database, the expected degree is significantly less than that of the CMFE structure, defined to have minimum free energy over all structures consistent with the Rfam consensus structure. The expected degree of structural RNAs, such as purine riboswitches, paradoxically appears to be smaller than that of random RNA, yet the difference between the degree of the MFE structure and the expected degree is larger than that of random RNA. Expected degree does not seem to correlate with standard structural diversity measures of RNA, such as positional entropy, ensemble defect, etc. The program {\tt RNAexpNumNbors} is written in C, runs in cubic time and quadratic space, and is publicly available at http://bioinformatics.bc.edu/clotelab/RNAexpNumNbors.
[ { "created": "Sat, 1 Nov 2014 17:13:16 GMT", "version": "v1" } ]
2014-11-04
[ [ "Clote", "Peter", "" ] ]
Consider the network of all secondary structures of a given RNA sequence, where nodes are connected when the corresponding structures have base pair distance one. The expected degree of the network is the average number of neighbors, where the average may be computed with respect to either the uniform or the Boltzmann probability. Here we describe the first algorithm, RNAexpNumNbors, that can compute the expected number of neighbors, or expected network degree, of an input sequence. For RNA sequences from the Rfam database, the expected degree is significantly less than that of the CMFE structure, defined to have minimum free energy over all structures consistent with the Rfam consensus structure. The expected degree of structural RNAs, such as purine riboswitches, paradoxically appears to be smaller than that of random RNA, yet the difference between the degree of the MFE structure and the expected degree is larger than that of random RNA. Expected degree does not seem to correlate with standard structural diversity measures of RNA, such as positional entropy, ensemble defect, etc. The program {\tt RNAexpNumNbors} is written in C, runs in cubic time and quadratic space, and is publicly available at http://bioinformatics.bc.edu/clotelab/RNAexpNumNbors.
2012.05998
Redouane Qesmi
Redouane Qesmi and Aayah Hammoumi
Lifting Lockdown Control Measure Assessment: From Finite to Infinite-dimensional Epidemic Models for COVID-19
This is a preprint of a paper whose final and definite form is published in Analysis of Infectious Disease Problems (Covid-19) and Their Global Impact, Springer Nature Singapore Pte Ltd. Submitted August 14, 2020; revised September 18, 2020; accepted October 8, 2020
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by-nc-nd/4.0/
The main focus of this chapter is on public health control strategies, which are currently the main way to mitigate the COVID-19 pandemic. We introduce and compare compartmental models of increasing complexity for COVID-19 transmission to describe the dynamics of the disease spread. We begin by considering an SEAIR model including basic characteristics related to COVID-19. Next, we shall pay attention to age-structured modeling to emphasize the role of age groups in the disease spread. A model with constant delay is also formulated to show the impact of the latency period on the severity of COVID-19. Since there is evidence that for COVID-19 disease, important relationships exist between what is happening in the host and what is occurring at the population level, we shall link the basic model to in-host dynamics through the so-called threshold-type delay models. Finally, we will include demographic effects in the most complex models and we will conduct rigorous bifurcation analysis to quantify possible factors responsible for disease progression.
[ { "created": "Thu, 10 Dec 2020 21:56:31 GMT", "version": "v1" } ]
2020-12-14
[ [ "Qesmi", "Redouane", "" ], [ "Hammoumi", "Aayah", "" ] ]
The main focus of this chapter is on public health control strategies, which are currently the main way to mitigate the COVID-19 pandemic. We introduce and compare compartmental models of increasing complexity for COVID-19 transmission to describe the dynamics of the disease spread. We begin by considering an SEAIR model including basic characteristics related to COVID-19. Next, we shall pay attention to age-structured modeling to emphasize the role of age groups in the disease spread. A model with constant delay is also formulated to show the impact of the latency period on the severity of COVID-19. Since there is evidence that for COVID-19 disease, important relationships exist between what is happening in the host and what is occurring at the population level, we shall link the basic model to in-host dynamics through the so-called threshold-type delay models. Finally, we will include demographic effects in the most complex models and we will conduct rigorous bifurcation analysis to quantify possible factors responsible for disease progression.
2108.01198
Roberto Mor\'an-Tovar
Roberto Mor\'an-Tovar, Henning Gruell, Florian Klein, Michael L\"assig
Stochasticity of infectious outbreaks and consequences for optimal interventions
null
null
10.1088/1751-8121/ac88a6
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Global strategies to contain a pandemic, such as social distancing and protective measures, are designed to reduce the overall transmission rate between individuals. Despite such measures, essential institutions, including hospitals, schools, and food-producing plants, remain focal points of local outbreaks. Here we develop a model for the stochastic outbreak dynamics in such local communities. We derive analytical expressions for the probability of containment of the outbreak, which is complementary to the probability of seeding a deterministically growing epidemic. This probability depends on the statistics of the intra-community contact network and the initial conditions, in particular, on the contact degree of patient zero. Based on this model, we suggest surveillance protocols by which individuals are tested proportionally to their degree in the contact network. We characterize the efficacy of contact-based protocols as a function of the epidemiological and the contact network parameters, and show numerically that such protocols outperform random testing.
[ { "created": "Mon, 2 Aug 2021 22:29:20 GMT", "version": "v1" }, { "created": "Mon, 15 Nov 2021 06:49:38 GMT", "version": "v2" }, { "created": "Tue, 10 May 2022 09:38:42 GMT", "version": "v3" }, { "created": "Sun, 31 Jul 2022 19:45:36 GMT", "version": "v4" } ]
2022-09-07
[ [ "Morán-Tovar", "Roberto", "" ], [ "Gruell", "Henning", "" ], [ "Klein", "Florian", "" ], [ "Lässig", "Michael", "" ] ]
Global strategies to contain a pandemic, such as social distancing and protective measures, are designed to reduce the overall transmission rate between individuals. Despite such measures, essential institutions, including hospitals, schools, and food-producing plants, remain focal points of local outbreaks. Here we develop a model for the stochastic outbreak dynamics in such local communities. We derive analytical expressions for the probability of containment of the outbreak, which is complementary to the probability of seeding a deterministically growing epidemic. This probability depends on the statistics of the intra-community contact network and the initial conditions, in particular, on the contact degree of patient zero. Based on this model, we suggest surveillance protocols by which individuals are tested proportionally to their degree in the contact network. We characterize the efficacy of contact-based protocols as a function of the epidemiological and the contact network parameters, and show numerically that such protocols outperform random testing.