id              stringlengths   9 – 13
submitter       stringlengths   4 – 48
authors         stringlengths   4 – 9.62k
title           stringlengths   4 – 343
comments        stringlengths   2 – 480
journal-ref     stringlengths   9 – 309
doi             stringlengths   12 – 138
report-no       stringclasses   277 values
categories      stringlengths   8 – 87
license         stringclasses   9 values
orig_abstract   stringlengths   27 – 3.76k
versions        listlengths     1 – 15
update_date     stringlengths   10 – 10
authors_parsed  listlengths     1 – 147
abstract        stringlengths   24 – 3.75k
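In the sample rows that follow, the list-valued fields (`versions`, `authors_parsed`) appear as JSON text and `categories` is a space-separated list of arXiv category codes. A minimal Python sketch of parsing one such row — assuming the fields arrive as JSON-encoded strings, as they do in this flattened dump; a native dataset loader may already deserialize them into lists:

```python
import json

# Field values copied from one sample row (arXiv id 1304.4763).
record = {
    "id": "1304.4763",
    "submitter": "Thomas House",
    "categories": "q-bio.PE math.PR",
    "versions": '[ { "created": "Wed, 17 Apr 2013 10:47:02 GMT", "version": "v1" } ]',
    "authors_parsed": '[ [ "Graham", "Matthew", "" ], [ "House", "Thomas", "" ] ]',
}

# `categories` is a space-separated list of arXiv category codes.
categories = record["categories"].split()

# Parse the JSON-encoded list fields back into Python structures.
versions = json.loads(record["versions"])
authors = json.loads(record["authors_parsed"])

# Each author entry is [family, given, suffix]; rebuild display names,
# dropping empty components.
names = [" ".join(p for p in (given, family, suffix) if p)
         for family, given, suffix in authors]

print(categories)               # ['q-bio.PE', 'math.PR']
print(names)                    # ['Matthew Graham', 'Thomas House']
print(versions[-1]["version"])  # 'v1'  (latest version on record)
```

The `[family, given, suffix]` ordering is inferred from the sample rows themselves (e.g. `[ "Torres", "Delfim F. M.", "" ]`), where the first element matches the surname in the `authors` string.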
2010.09418
Delfim F. M. Torres
Faical Ndairou, Ivan Area, Delfim F. M. Torres
Mathematical Modeling of Japanese Encephalitis Under Aquatic Environmental Effects
This is a preprint of a paper whose final and definite form is published Open Access in 'Mathematics' (ISSN 2227-7390), available at [https://www.mdpi.com/journal/mathematics]. Submitted: 10-Sept-2020; Revised: 8-Oct-2020; Accepted to Mathematics (ISSN 2227-7390): 19-Oct-2020
Mathematics 8 (2020), no. 11, Art. 1880, 14 pp
10.3390/math8111880
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a mathematical model for the spread of Japanese encephalitis, with emphasis on environmental effects on the aquatic phase of mosquitoes. The model is shown to be biologically well-posed and to have a biologically and ecologically meaningful disease-free equilibrium point. Local stability is analyzed in terms of the basic reproduction number, and numerical simulations are presented and discussed.
[ { "created": "Mon, 19 Oct 2020 12:28:39 GMT", "version": "v1" } ]
2020-11-02
[ [ "Ndairou", "Faical", "" ], [ "Area", "Ivan", "" ], [ "Torres", "Delfim F. M.", "" ] ]
We propose a mathematical model for the spread of Japanese encephalitis, with emphasis on environmental effects on the aquatic phase of mosquitoes. The model is shown to be biologically well-posed and to have a biologically and ecologically meaningful disease-free equilibrium point. Local stability is analyzed in terms of the basic reproduction number, and numerical simulations are presented and discussed.
1304.4763
Thomas House
Matthew Graham and Thomas House
Dynamics of Stochastic Epidemics on Heterogeneous Networks
22 pages, 3 figures
null
10.1007/s00285-013-0679-1
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Epidemic models currently play a central role in our attempts to understand and control infectious diseases. Here, we derive a model for the diffusion limit of stochastic susceptible-infectious-removed (SIR) epidemic dynamics on a heterogeneous network. Using this, we consider analytically the early asymptotic exponential growth phase of such epidemics, showing how the higher order moments of the network degree distribution enter into the stochastic behaviour of the epidemic. We find that the first three moments of the network degree distribution are needed to specify the variance in disease prevalence fully, meaning that the skewness of the degree distribution affects the variance of the prevalence of infection. We compare these asymptotic results to simulation and find a close agreement for city-sized populations.
[ { "created": "Wed, 17 Apr 2013 10:47:02 GMT", "version": "v1" } ]
2013-09-30
[ [ "Graham", "Matthew", "" ], [ "House", "Thomas", "" ] ]
Epidemic models currently play a central role in our attempts to understand and control infectious diseases. Here, we derive a model for the diffusion limit of stochastic susceptible-infectious-removed (SIR) epidemic dynamics on a heterogeneous network. Using this, we consider analytically the early asymptotic exponential growth phase of such epidemics, showing how the higher order moments of the network degree distribution enter into the stochastic behaviour of the epidemic. We find that the first three moments of the network degree distribution are needed to specify the variance in disease prevalence fully, meaning that the skewness of the degree distribution affects the variance of the prevalence of infection. We compare these asymptotic results to simulation and find a close agreement for city-sized populations.
1407.1499
Thomas Ouldridge
Thomas E. Ouldridge and Pieter Rein ten Wolde
The robustness of proofreading to crowding-induced pseudo-processivity in the MAPK pathway
To appear in Biophys. J.
Biophys. J. 107, 2425--2435, 2014
10.1016/j.bpj.2014.10.020
null
q-bio.MN physics.bio-ph q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Double phosphorylation of protein kinases is a common feature of signalling cascades. This motif may reduce cross-talk between signalling pathways, as the second phosphorylation site allows for proofreading, especially when phosphorylation is distributive rather than processive. Recent studies suggest that phosphorylation can be `pseudo-processive' in the crowded cellular environment, as rebinding after the first phosphorylation is enhanced by slow diffusion. Here, we use a simple model with unsaturated reactants to show that specificity for one substrate over another drops as rebinding increases and pseudo-processive behavior becomes possible. However, this loss of specificity with increased rebinding is typically also observed if two distinct enzyme species are required for phosphorylation, i.e. when the system is necessarily distributive. Thus the loss of specificity is due to an intrinsic reduction in selectivity with increased rebinding, which benefits inefficient reactions, rather than pseudo-processivity itself. We also show that proofreading can remain effective when the intended signalling pathway exhibits high levels of rebinding-induced pseudo-processivity, unlike other proposed advantages of the dual phosphorylation motif.
[ { "created": "Sun, 6 Jul 2014 14:34:03 GMT", "version": "v1" }, { "created": "Fri, 7 Nov 2014 12:36:37 GMT", "version": "v2" } ]
2015-05-26
[ [ "Ouldridge", "Thomas E.", "" ], [ "Wolde", "Pieter Rein ten", "" ] ]
Double phosphorylation of protein kinases is a common feature of signalling cascades. This motif may reduce cross-talk between signalling pathways, as the second phosphorylation site allows for proofreading, especially when phosphorylation is distributive rather than processive. Recent studies suggest that phosphorylation can be `pseudo-processive' in the crowded cellular environment, as rebinding after the first phosphorylation is enhanced by slow diffusion. Here, we use a simple model with unsaturated reactants to show that specificity for one substrate over another drops as rebinding increases and pseudo-processive behavior becomes possible. However, this loss of specificity with increased rebinding is typically also observed if two distinct enzyme species are required for phosphorylation, i.e. when the system is necessarily distributive. Thus the loss of specificity is due to an intrinsic reduction in selectivity with increased rebinding, which benefits inefficient reactions, rather than pseudo-processivity itself. We also show that proofreading can remain effective when the intended signalling pathway exhibits high levels of rebinding-induced pseudo-processivity, unlike other proposed advantages of the dual phosphorylation motif.
2311.15201
Jintao Zhu
Jintao Zhu, Zhonghui Gu, Jianfeng Pei, Luhua Lai
DiffBindFR: An SE(3) Equivariant Network for Flexible Protein-Ligand Docking
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular docking, a key technique in structure-based drug design, plays pivotal roles in protein-ligand interaction modeling, hit identification and optimization, in which accurate prediction of the protein-ligand binding mode is essential. Conventional docking approaches perform well in redocking tasks with a known protein binding pocket conformation in the complex state. However, in real-world docking scenarios, where the protein binding conformation for a new ligand is not known, accurately modeling the binding complex structure remains challenging, as flexible docking is computationally expensive and inaccurate. Typical deep learning-based docking methods do not explicitly consider protein side chain conformations and fail to ensure physical plausibility and detailed atomic interactions. In this study, we present DiffBindFR, a full-atom diffusion-based flexible docking model that operates over the product space of ligand overall movements and flexibility and pocket side chain torsion changes. We show that DiffBindFR has higher accuracy in producing native-like binding structures with physically plausible and detailed interactions than available docking methods. Furthermore, on Apo and AlphaFold2-modeled structures, DiffBindFR demonstrates superior advantages in accurate ligand binding pose and protein binding conformation prediction, making it suitable for Apo and AlphaFold2 structure-based drug design. DiffBindFR provides a powerful flexible docking tool for modeling accurate protein-ligand binding structures.
[ { "created": "Sun, 26 Nov 2023 05:46:19 GMT", "version": "v1" }, { "created": "Fri, 15 Dec 2023 14:44:52 GMT", "version": "v2" }, { "created": "Tue, 19 Dec 2023 06:54:06 GMT", "version": "v3" } ]
2023-12-20
[ [ "Zhu", "Jintao", "" ], [ "Gu", "Zhonghui", "" ], [ "Pei", "Jianfeng", "" ], [ "Lai", "Luhua", "" ] ]
Molecular docking, a key technique in structure-based drug design, plays pivotal roles in protein-ligand interaction modeling, hit identification and optimization, in which accurate prediction of the protein-ligand binding mode is essential. Conventional docking approaches perform well in redocking tasks with a known protein binding pocket conformation in the complex state. However, in real-world docking scenarios, where the protein binding conformation for a new ligand is not known, accurately modeling the binding complex structure remains challenging, as flexible docking is computationally expensive and inaccurate. Typical deep learning-based docking methods do not explicitly consider protein side chain conformations and fail to ensure physical plausibility and detailed atomic interactions. In this study, we present DiffBindFR, a full-atom diffusion-based flexible docking model that operates over the product space of ligand overall movements and flexibility and pocket side chain torsion changes. We show that DiffBindFR has higher accuracy in producing native-like binding structures with physically plausible and detailed interactions than available docking methods. Furthermore, on Apo and AlphaFold2-modeled structures, DiffBindFR demonstrates superior advantages in accurate ligand binding pose and protein binding conformation prediction, making it suitable for Apo and AlphaFold2 structure-based drug design. DiffBindFR provides a powerful flexible docking tool for modeling accurate protein-ligand binding structures.
1805.04538
Michele Monti
Michele Monti, David K. Lubensky and Pieter Rein ten Wolde
Theory of circadian metabolism
17 Pages, 6 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many organisms repartition their proteome in a circadian fashion in response to the daily nutrient changes in their environment. A striking example is provided by cyanobacteria, which perform photosynthesis during the day to fix carbon. These organisms not only face the challenge of rewiring their proteome every 12 hours, but also the necessity of storing the fixed carbon in the form of glycogen to fuel processes during the night. In this manuscript, we extend the framework developed by Hwa and coworkers (Scott et al., Science 330, 1099 (2010)) for quantifying the relationship between growth and proteome composition to circadian metabolism. We then apply this framework to investigate the circadian metabolism of the cyanobacterium Cyanothece, which not only fixes carbon during the day, but also nitrogen during the night, storing it in the polymer cyanophycin. Our analysis reveals that the need to store carbon and nitrogen tends to generate an extreme growth strategy, in which the cells predominantly grow during the day, as observed experimentally. This strategy maximizes the growth rate over 24 hours, and can be quantitatively understood by the bacterial growth laws. Our analysis also shows that the slow relaxation of the proteome, arising from the slow growth rate, puts a severe constraint on implementing this optimal strategy. Yet, the capacity to estimate the time of day, enabled by the circadian clock, makes it possible to anticipate the daily changes in the environment and mount a response ahead of time. This significantly enhances the growth rate by counteracting the detrimental effects of the slow proteome relaxation.
[ { "created": "Fri, 11 May 2018 18:08:36 GMT", "version": "v1" } ]
2018-05-15
[ [ "Monti", "Michele", "" ], [ "Lubensky", "David K.", "" ], [ "Wolde", "Pieter Rein ten", "" ] ]
Many organisms repartition their proteome in a circadian fashion in response to the daily nutrient changes in their environment. A striking example is provided by cyanobacteria, which perform photosynthesis during the day to fix carbon. These organisms not only face the challenge of rewiring their proteome every 12 hours, but also the necessity of storing the fixed carbon in the form of glycogen to fuel processes during the night. In this manuscript, we extend the framework developed by Hwa and coworkers (Scott et al., Science 330, 1099 (2010)) for quantifying the relationship between growth and proteome composition to circadian metabolism. We then apply this framework to investigate the circadian metabolism of the cyanobacterium Cyanothece, which not only fixes carbon during the day, but also nitrogen during the night, storing it in the polymer cyanophycin. Our analysis reveals that the need to store carbon and nitrogen tends to generate an extreme growth strategy, in which the cells predominantly grow during the day, as observed experimentally. This strategy maximizes the growth rate over 24 hours, and can be quantitatively understood by the bacterial growth laws. Our analysis also shows that the slow relaxation of the proteome, arising from the slow growth rate, puts a severe constraint on implementing this optimal strategy. Yet, the capacity to estimate the time of day, enabled by the circadian clock, makes it possible to anticipate the daily changes in the environment and mount a response ahead of time. This significantly enhances the growth rate by counteracting the detrimental effects of the slow proteome relaxation.
2401.15190
Laura Di Domenico
Laura Di Domenico, Eugenio Valdano and Vittoria Colizza
Limited data on infectious disease distribution exposes ambiguity in epidemic modeling choices
null
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Traditional disease transmission models assume that the infectious period is exponentially distributed with a recovery rate fixed in time and across individuals. This assumption provides analytical and computational advantages; however, it is often unrealistic. Efforts in modeling non-exponentially distributed infectious periods are either limited to special cases or lead to unsolvable models. Also, the link between empirical data (infectious period distribution) and the modeling needs (corresponding recovery rates) is not clearly understood. Here we introduce a mapping of an arbitrary distribution of infectious periods into a distribution of recovery rates. We show that the same infectious period distribution at the population level can be reproduced by two modeling schemes -- host-based and population-based -- depending on the individual response to the infection, and aggregated empirical data cannot easily discriminate the correct scheme. Besides being conceptually different, the two schemes also lead to different epidemic trajectories. Although sharing the same behavior close to the disease-free equilibrium, the host-based scheme deviates from the expected epidemic when reaching the endemic equilibrium of an SIS transmission model, while the population-based scheme turns out to be equivalent to assuming a homogeneous recovery rate. We show this through analytical computations and stochastic epidemic simulations on a contact network, using both generative network models and empirical contact data. It is therefore possible to reproduce heterogeneous infectious periods in network-based transmission models; however, the resulting prevalence is sensitive to the modeling choice for the interpretation of the empirically collected data on infection duration. In the absence of higher-resolution data, studies should acknowledge such deviations in the epidemic predictions.
[ { "created": "Fri, 26 Jan 2024 20:24:02 GMT", "version": "v1" } ]
2024-01-30
[ [ "Di Domenico", "Laura", "" ], [ "Valdano", "Eugenio", "" ], [ "Colizza", "Vittoria", "" ] ]
Traditional disease transmission models assume that the infectious period is exponentially distributed with a recovery rate fixed in time and across individuals. This assumption provides analytical and computational advantages; however, it is often unrealistic. Efforts in modeling non-exponentially distributed infectious periods are either limited to special cases or lead to unsolvable models. Also, the link between empirical data (infectious period distribution) and the modeling needs (corresponding recovery rates) is not clearly understood. Here we introduce a mapping of an arbitrary distribution of infectious periods into a distribution of recovery rates. We show that the same infectious period distribution at the population level can be reproduced by two modeling schemes -- host-based and population-based -- depending on the individual response to the infection, and aggregated empirical data cannot easily discriminate the correct scheme. Besides being conceptually different, the two schemes also lead to different epidemic trajectories. Although sharing the same behavior close to the disease-free equilibrium, the host-based scheme deviates from the expected epidemic when reaching the endemic equilibrium of an SIS transmission model, while the population-based scheme turns out to be equivalent to assuming a homogeneous recovery rate. We show this through analytical computations and stochastic epidemic simulations on a contact network, using both generative network models and empirical contact data. It is therefore possible to reproduce heterogeneous infectious periods in network-based transmission models; however, the resulting prevalence is sensitive to the modeling choice for the interpretation of the empirically collected data on infection duration. In the absence of higher-resolution data, studies should acknowledge such deviations in the epidemic predictions.
2311.10640
Renjiu Hu
Renjiu Hu, Qihao Zhang, Pascal Spincemaille, Thanh D. Nguyen, Yi Wang
Multi-delay arterial spin-labeled perfusion estimation with biophysics simulation and deep learning
32 pages, 5 figures
null
null
null
q-bio.QM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Purpose: To develop a biophysics-based method for estimating perfusion Q from arterial spin labeling (ASL) images using deep learning. Methods: A 3D U-Net (QTMnet) was trained to estimate perfusion from 4D tracer propagation images. The network was trained and tested on simulated 4D tracer concentration data based on artificial vasculature structures generated by the constrained constructive optimization (CCO) method. The trained network was further tested on a synthetic brain ASL image based on a vasculature network extracted from magnetic resonance (MR) angiography. The estimations from both the trained network and a conventional kinetic model were compared in ASL images acquired from eight healthy volunteers. Results: QTMnet accurately reconstructed perfusion Q from concentration data. The relative error on the synthetic brain ASL image was 7.04% for perfusion Q, lower than the error using the single-delay ASL model (25.15% for Q) and the multi-delay ASL model (12.62% for perfusion Q). Conclusion: QTMnet provides accurate estimation of perfusion parameters and is a promising approach for a clinical ASL MRI image processing pipeline.
[ { "created": "Fri, 17 Nov 2023 16:55:14 GMT", "version": "v1" } ]
2023-11-20
[ [ "Hu", "Renjiu", "" ], [ "Zhang", "Qihao", "" ], [ "Spincemaille", "Pascal", "" ], [ "Nguyen", "Thanh D.", "" ], [ "Wang", "Yi", "" ] ]
Purpose: To develop a biophysics-based method for estimating perfusion Q from arterial spin labeling (ASL) images using deep learning. Methods: A 3D U-Net (QTMnet) was trained to estimate perfusion from 4D tracer propagation images. The network was trained and tested on simulated 4D tracer concentration data based on artificial vasculature structures generated by the constrained constructive optimization (CCO) method. The trained network was further tested on a synthetic brain ASL image based on a vasculature network extracted from magnetic resonance (MR) angiography. The estimations from both the trained network and a conventional kinetic model were compared in ASL images acquired from eight healthy volunteers. Results: QTMnet accurately reconstructed perfusion Q from concentration data. The relative error on the synthetic brain ASL image was 7.04% for perfusion Q, lower than the error using the single-delay ASL model (25.15% for Q) and the multi-delay ASL model (12.62% for perfusion Q). Conclusion: QTMnet provides accurate estimation of perfusion parameters and is a promising approach for a clinical ASL MRI image processing pipeline.
1708.07508
Eero Satuvuori
Eero Satuvuori and Thomas Kreuz
Which spike train distance is most suitable for distinguishing rate and temporal coding?
14 pages, 6 Figures, 1 Table
null
null
null
q-bio.NC physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: It is commonly assumed in neuronal coding that repeated presentations of a stimulus to a coding neuron elicit similar responses. One common way to assess similarity is via spike train distances. These can be divided into spike-resolved, such as the Victor-Purpura and the van Rossum distance, and time-resolved, e.g. the ISI-, the SPIKE- and the RI-SPIKE-distance. New Method: We use independent steady-rate Poisson processes as surrogates for spike trains with fixed rate and no timing information to address two basic questions: How does the sensitivity of the different spike train distances to temporal coding depend on the rates of the two processes, and how do the distances deal with very low rates? Results: Spike-resolved distances always contain rate information, even for parameters indicating time coding. This is an issue for reasonably high rates but beneficial for very low rates. In contrast, the operational range for detecting time coding of time-resolved distances is superior at normal rates, but these measures produce artefacts at very low rates. The RI-SPIKE-distance is the only measure that is sensitive to timing information only. Comparison with Existing Methods: While our results on rate-dependent expectation values for the spike-resolved distances agree with \citet{Chicharro11}, we here go one step further and specifically investigate applicability for very low rates. Conclusions: The most appropriate measure depends on the rates of the data being analysed. Accordingly, we summarize our results in one table that allows an easy selection of the preferred measure for any kind of data.
[ { "created": "Wed, 23 Aug 2017 21:53:15 GMT", "version": "v1" }, { "created": "Tue, 7 Nov 2017 18:14:38 GMT", "version": "v2" }, { "created": "Wed, 21 Feb 2018 08:29:15 GMT", "version": "v3" } ]
2018-02-22
[ [ "Satuvuori", "Eero", "" ], [ "Kreuz", "Thomas", "" ] ]
Background: It is commonly assumed in neuronal coding that repeated presentations of a stimulus to a coding neuron elicit similar responses. One common way to assess similarity is via spike train distances. These can be divided into spike-resolved, such as the Victor-Purpura and the van Rossum distance, and time-resolved, e.g. the ISI-, the SPIKE- and the RI-SPIKE-distance. New Method: We use independent steady-rate Poisson processes as surrogates for spike trains with fixed rate and no timing information to address two basic questions: How does the sensitivity of the different spike train distances to temporal coding depend on the rates of the two processes, and how do the distances deal with very low rates? Results: Spike-resolved distances always contain rate information, even for parameters indicating time coding. This is an issue for reasonably high rates but beneficial for very low rates. In contrast, the operational range for detecting time coding of time-resolved distances is superior at normal rates, but these measures produce artefacts at very low rates. The RI-SPIKE-distance is the only measure that is sensitive to timing information only. Comparison with Existing Methods: While our results on rate-dependent expectation values for the spike-resolved distances agree with \citet{Chicharro11}, we here go one step further and specifically investigate applicability for very low rates. Conclusions: The most appropriate measure depends on the rates of the data being analysed. Accordingly, we summarize our results in one table that allows an easy selection of the preferred measure for any kind of data.
2405.11394
Ying Choon Wu
Chi-Yuan Chang, Chieh Hsu, Ying Choon Wu, Siwen Wang, Darin Tsui, Tzyy-Ping Jung
Online Mental Stress Detection Using Frontal-channel EEG Recordings in a Classroom Scenario
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Objective: To investigate the effects of different approaches to EEG preprocessing, channel montage selection, and model architecture on the performance of an online-capable stress detection algorithm in a classroom scenario. Methods: This analysis used EEG data from a longitudinal stress and fatigue study conducted among university students. Their self-reported stress ratings during each class session were the basis for classifying EEG recordings into either normal or elevated stress states. We used a data-processing pipeline that combined Artifact Subspace Reconstruction (ASR) and an Independent Component Analysis (ICA)-based method to achieve online artifact removal. We compared the performance of a Linear Discriminant Analysis (LDA) and a 4-layer neural network as classifiers. We opted for accuracy, balanced accuracy, and F1 score as the metrics for assessing performance. We examined the impact of varying numbers of input channels using different channel montages. Additionally, we explored different window lengths and step sizes during online evaluation. Results: Our online artifact removal method achieved performance comparable to the offline ICA method in both offline and online evaluations. Balanced accuracies of 77% and 78% in an imbalanced binary classification were observed when using the 11-frontal-channel LDA model with the proposed artifact removal method. Moreover, the model performance remained intact when changing the channel montage from 30 full-scalp channels to just 11 frontal channels. During the online evaluation, we achieved the highest balanced accuracy (78%) with a window length of 20 seconds and a step size of 1 second. Significance: This study comprehensively investigates the deployment of stress detection in real-world scenarios. The findings of this study provide insight into the development of daily mental stress monitoring.
[ { "created": "Sat, 18 May 2024 21:08:50 GMT", "version": "v1" } ]
2024-05-21
[ [ "Chang", "Chi-Yuan", "" ], [ "Hsu", "Chieh", "" ], [ "Wu", "Ying Choon", "" ], [ "Wang", "Siwen", "" ], [ "Tsui", "Darin", "" ], [ "Jung", "Tzyy-Ping", "" ] ]
Objective: To investigate the effects of different approaches to EEG preprocessing, channel montage selection, and model architecture on the performance of an online-capable stress detection algorithm in a classroom scenario. Methods: This analysis used EEG data from a longitudinal stress and fatigue study conducted among university students. Their self-reported stress ratings during each class session were the basis for classifying EEG recordings into either normal or elevated stress states. We used a data-processing pipeline that combined Artifact Subspace Reconstruction (ASR) and an Independent Component Analysis (ICA)-based method to achieve online artifact removal. We compared the performance of a Linear Discriminant Analysis (LDA) and a 4-layer neural network as classifiers. We opted for accuracy, balanced accuracy, and F1 score as the metrics for assessing performance. We examined the impact of varying numbers of input channels using different channel montages. Additionally, we explored different window lengths and step sizes during online evaluation. Results: Our online artifact removal method achieved performance comparable to the offline ICA method in both offline and online evaluations. Balanced accuracies of 77% and 78% in an imbalanced binary classification were observed when using the 11-frontal-channel LDA model with the proposed artifact removal method. Moreover, the model performance remained intact when changing the channel montage from 30 full-scalp channels to just 11 frontal channels. During the online evaluation, we achieved the highest balanced accuracy (78%) with a window length of 20 seconds and a step size of 1 second. Significance: This study comprehensively investigates the deployment of stress detection in real-world scenarios. The findings of this study provide insight into the development of daily mental stress monitoring.
1801.06252
Guillaume Lamoureux
Georgy Derevyanko, Sergei Grudinin, Yoshua Bengio, and Guillaume Lamoureux
Deep convolutional networks for quality assessment of protein folds
8 pages
null
10.1093/bioinformatics/bty494
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The computational prediction of a protein structure from its sequence generally relies on a method to assess the quality of protein models. Most assessment methods rank candidate models using heavily engineered structural features, defined as complex functions of the atomic coordinates. However, very few methods have attempted to learn these features directly from the data. We show that deep convolutional networks can be used to predict the ranking of model structures solely on the basis of their raw three-dimensional atomic densities, without any feature tuning. We develop a deep neural network that performs on par with state-of-the-art algorithms from the literature. The network is trained on decoys from the CASP7 to CASP10 datasets and its performance is tested on the CASP11 dataset. On the CASP11 stage 2 dataset, it achieves a loss of 0.064, whereas the best performing method achieves a loss of 0.063. Additional testing on decoys from the CASP12, CAMEO, and 3DRobot datasets confirms that the network performs consistently well across a variety of protein structures. While the network learns to assess structural decoys globally and does not rely on any predefined features, it can be analyzed to show that it implicitly identifies regions that deviate from the native structure.
[ { "created": "Thu, 18 Jan 2018 23:37:27 GMT", "version": "v1" } ]
2018-11-26
[ [ "Derevyanko", "Georgy", "" ], [ "Grudinin", "Sergei", "" ], [ "Bengio", "Yoshua", "" ], [ "Lamoureux", "Guillaume", "" ] ]
The computational prediction of a protein structure from its sequence generally relies on a method to assess the quality of protein models. Most assessment methods rank candidate models using heavily engineered structural features, defined as complex functions of the atomic coordinates. However, very few methods have attempted to learn these features directly from the data. We show that deep convolutional networks can be used to predict the ranking of model structures solely on the basis of their raw three-dimensional atomic densities, without any feature tuning. We develop a deep neural network that performs on par with state-of-the-art algorithms from the literature. The network is trained on decoys from the CASP7 to CASP10 datasets and its performance is tested on the CASP11 dataset. On the CASP11 stage 2 dataset, it achieves a loss of 0.064, whereas the best performing method achieves a loss of 0.063. Additional testing on decoys from the CASP12, CAMEO, and 3DRobot datasets confirms that the network performs consistently well across a variety of protein structures. While the network learns to assess structural decoys globally and does not rely on any predefined features, it can be analyzed to show that it implicitly identifies regions that deviate from the native structure.
2312.14939
Byung-Hoon Kim M.D. Ph.D.
Byung-Hoon Kim, Jungwon Choi, EungGu Yun, Kyungsang Kim, Xiang Li, Juho Lee
Large-scale Graph Representation Learning of Dynamic Brain Connectome with Transformers
NeurIPS 2023 Temporal Graph Learning Workshop
null
null
null
q-bio.NC cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Transformers have recently been successful in various graph representation learning tasks, providing a number of advantages over message-passing Graph Neural Networks. Utilizing Graph Transformers for learning the representation of the brain functional connectivity network is also gaining interest. However, studies to date have overlooked the temporal dynamics of functional connectivity, which fluctuates over time. Here, we propose a method for learning the representation of dynamic functional connectivity with Graph Transformers. Specifically, we define the connectome embedding, which holds the position, structure, and time information of the functional connectivity graph, and use Transformers to learn its representation across time. We perform experiments with over 50,000 resting-state fMRI samples obtained from three datasets, the largest amount of fMRI data used in such studies to date. The experimental results show that our proposed method outperforms other competitive baselines in gender classification and age regression tasks based on the functional connectivity extracted from the fMRI data.
[ { "created": "Mon, 4 Dec 2023 16:08:44 GMT", "version": "v1" } ]
2023-12-27
[ [ "Kim", "Byung-Hoon", "" ], [ "Choi", "Jungwon", "" ], [ "Yun", "EungGu", "" ], [ "Kim", "Kyungsang", "" ], [ "Li", "Xiang", "" ], [ "Lee", "Juho", "" ] ]
Graph Transformers have recently been successful in various graph representation learning tasks, providing a number of advantages over message-passing Graph Neural Networks. Utilizing Graph Transformers for learning the representation of the brain functional connectivity network is also gaining interest. However, studies to date have overlooked the temporal dynamics of functional connectivity, which fluctuates over time. Here, we propose a method for learning the representation of dynamic functional connectivity with Graph Transformers. Specifically, we define the connectome embedding, which holds the position, structure, and time information of the functional connectivity graph, and use Transformers to learn its representation across time. We perform experiments with over 50,000 resting-state fMRI samples obtained from three datasets, the largest amount of fMRI data used in such studies to date. The experimental results show that our proposed method outperforms other competitive baselines in gender classification and age regression tasks based on the functional connectivity extracted from the fMRI data.
0901.1464
Wojciech Waga
Agnieszka Laszkiewicz, Przemyslaw Biecek, Katarzyna Bonkowska, Stanislaw Cebrat
Evolution of the Age Structured Populations and Demography
35 pages, 19 figures
null
null
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a Monte Carlo simulation method for modelling population evolution based on the Penna model. Individuals in the populations are represented by their diploid genomes. Genes expressed after the minimum reproduction age are under weaker selection pressure and accumulate more mutations than those expressed before the minimum reproduction age. The resulting gradient of defective genes determines the ageing of individuals, and the simulated age-structured populations are very similar to natural, sexually reproducing populations. The genetic structure of a population depends on how random death affects the population. Improved medical care and healthier lifestyles are responsible for the increase in human life expectancy during the last century. By introducing noise into the relations between genotype, phenotype, and environment, it is possible to simulate other effects, such as the role of immunological systems and of maternal care. One of the most interesting results was the evolution of sex chromosomes. Placing the male sex determinants on one chromosome of a pair of sex chromosomes is enough to condemn it to shrinking if the population is panmictic (random mating is assumed). If males are indispensable for taking care of their offspring and have to be faithful to their females, the male sex chromosome does not shrink.
[ { "created": "Sun, 11 Jan 2009 20:19:47 GMT", "version": "v1" } ]
2009-01-13
[ [ "Laszkiewicz", "Agnieszka", "" ], [ "Biecek", "Przemyslaw", "" ], [ "Bonkowska", "Katarzyna", "" ], [ "Cebrat", "Stanislaw", "" ] ]
We describe a Monte Carlo simulation method for modelling population evolution based on the Penna model. Individuals in the populations are represented by their diploid genomes. Genes expressed after the minimum reproduction age are under weaker selection pressure and accumulate more mutations than those expressed before the minimum reproduction age. The resulting gradient of defective genes determines the ageing of individuals, and the simulated age-structured populations are very similar to natural, sexually reproducing populations. The genetic structure of a population depends on how random death affects the population. Improved medical care and healthier lifestyles are responsible for the increase in human life expectancy during the last century. By introducing noise into the relations between genotype, phenotype, and environment, it is possible to simulate other effects, such as the role of immunological systems and of maternal care. One of the most interesting results was the evolution of sex chromosomes. Placing the male sex determinants on one chromosome of a pair of sex chromosomes is enough to condemn it to shrinking if the population is panmictic (random mating is assumed). If males are indispensable for taking care of their offspring and have to be faithful to their females, the male sex chromosome does not shrink.
1203.0844
A Derzsi
A. Derzsi, Z. Neda
A seed-diffusion model for tropical tree diversity patterns
12 pages, 5 figures
null
10.1016/j.physa.2012.05.008
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diversity patterns of tree species in a tropical forest community are approached by a simple lattice model and investigated by Monte Carlo simulations using a backtracking method. Our spatially explicit neutral model is based on a simple statistical physics process, namely the diffusion of seeds. The model has three parameters: the speciation rate, the size of the meta-community in which the studied tree community is embedded, and the average survival time of the seeds. By extensive computer simulations we aim to reproduce relevant statistical measures derived from the experimental data of the Barro Colorado Island tree census of 1995. The first two parameters of the model are fixed to known values characteristic of the studied community, yielding a model with only one freely adjustable parameter. As a result, the average number of species in the considered territory, the relative species abundance distribution, the species-area relationship, and the spatial auto-correlation function of the individuals in abundant species are simultaneously fitted with only one parameter, the average survival time of the seeds.
[ { "created": "Mon, 5 Mar 2012 09:51:03 GMT", "version": "v1" } ]
2015-06-04
[ [ "Derzsi", "A.", "" ], [ "Neda", "Z.", "" ] ]
Diversity patterns of tree species in a tropical forest community are approached by a simple lattice model and investigated by Monte Carlo simulations using a backtracking method. Our spatially explicit neutral model is based on a simple statistical physics process, namely the diffusion of seeds. The model has three parameters: the speciation rate, the size of the meta-community in which the studied tree community is embedded, and the average survival time of the seeds. By extensive computer simulations we aim to reproduce relevant statistical measures derived from the experimental data of the Barro Colorado Island tree census of 1995. The first two parameters of the model are fixed to known values characteristic of the studied community, yielding a model with only one freely adjustable parameter. As a result, the average number of species in the considered territory, the relative species abundance distribution, the species-area relationship, and the spatial auto-correlation function of the individuals in abundant species are simultaneously fitted with only one parameter, the average survival time of the seeds.
2207.11695
Jun He Prof.
Zhong-Xue Gao, Tian-Tian Li, Han-Yu Jiang, and Jun He
Calcium oscillation on homogeneous and heterogeneous networks of ryanodine receptor
14 pages, 8 figures, to be published in Phys. Rev. E
null
10.1103/PhysRevE.107.024402
null
q-bio.SC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Calcium oscillation is an important form of calcium homeostasis, the imbalance of which is a key mechanism in the initiation and progression of many major diseases. The formation and maintenance of calcium homeostasis are closely related to the spatial distribution of calcium channels. In the current paper, a theoretical framework is established by abstracting the spatial distribution of the calcium channels as a nonlinear biological complex network, with calcium channels as nodes and Ca$^{2+}$ as edges. A dynamical model for a RyR is adopted to investigate the effect of spatial distribution on calcium oscillation. The mean-field model can be well reproduced from the complete graph and the dense Erd\"os-R\'enyi network. The synchronization of RyRs is found to be important for generating a global calcium oscillation. The clique graph with a cluster structure cannot produce a global oscillation due to the failure of synchronization between clusters. A more realistic geometric network is constructed in a two-dimensional plane based on experimental information about the RyR arrangement of clusters and the frequency distribution of cluster sizes. Unlike the clique graph, the geometric network can generate a global oscillation with reasonable parameters. The simulations also suggest that the existence of small clusters and rogue RyRs plays an important role in maintaining the global calcium oscillation by keeping large clusters synchronized. Such results support a heterogeneous distribution of RyRs in clusters of different sizes, which is helpful for understanding recent observations with super-resolution nanoscale imaging techniques. The current theoretical framework can also be extended to investigate other phenomena in calcium signal transduction.
[ { "created": "Sun, 24 Jul 2022 08:44:47 GMT", "version": "v1" }, { "created": "Wed, 1 Feb 2023 10:02:55 GMT", "version": "v2" } ]
2023-02-22
[ [ "Gao", "Zhong-Xue", "" ], [ "Li", "Tian-Tian", "" ], [ "Jiang", "Han-Yu", "" ], [ "He", "Jun", "" ] ]
Calcium oscillation is an important form of calcium homeostasis, the imbalance of which is a key mechanism in the initiation and progression of many major diseases. The formation and maintenance of calcium homeostasis are closely related to the spatial distribution of calcium channels. In the current paper, a theoretical framework is established by abstracting the spatial distribution of the calcium channels as a nonlinear biological complex network, with calcium channels as nodes and Ca$^{2+}$ as edges. A dynamical model for a RyR is adopted to investigate the effect of spatial distribution on calcium oscillation. The mean-field model can be well reproduced from the complete graph and the dense Erd\"os-R\'enyi network. The synchronization of RyRs is found to be important for generating a global calcium oscillation. The clique graph with a cluster structure cannot produce a global oscillation due to the failure of synchronization between clusters. A more realistic geometric network is constructed in a two-dimensional plane based on experimental information about the RyR arrangement of clusters and the frequency distribution of cluster sizes. Unlike the clique graph, the geometric network can generate a global oscillation with reasonable parameters. The simulations also suggest that the existence of small clusters and rogue RyRs plays an important role in maintaining the global calcium oscillation by keeping large clusters synchronized. Such results support a heterogeneous distribution of RyRs in clusters of different sizes, which is helpful for understanding recent observations with super-resolution nanoscale imaging techniques. The current theoretical framework can also be extended to investigate other phenomena in calcium signal transduction.
1602.00808
Fotini Pallikari
Fotini Pallikari
The balancing effect in brain-machine interaction
14 pages, 5 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The meta-analysis of Intangible Brain Machine Interaction (IMMI) data with random number generators is re-evaluated through the application of rigorous and recognized mathematical tools. The current analysis shows that the statistical average of the true RNG-IMMI data is not shifted from chance by direct mental intervention, thus refuting the IMMI hypothesis. A facet of this general statistical behavior of true RNG-IMMI data is the statistical balancing of scores observed in IMMI tests in which binary testing conditions are adopted. The actual dynamics that had been supporting the elusive IMMI effect are shown to be related to the psychology of the experimenters. The implications of the refutation of the IMMI hypothesis, especially for associated phenomena, are also discussed.
[ { "created": "Tue, 2 Feb 2016 07:09:42 GMT", "version": "v1" } ]
2016-02-03
[ [ "Pallikari", "Fotini", "" ] ]
The meta-analysis of Intangible Brain Machine Interaction (IMMI) data with random number generators is re-evaluated through the application of rigorous and recognized mathematical tools. The current analysis shows that the statistical average of the true RNG-IMMI data is not shifted from chance by direct mental intervention, thus refuting the IMMI hypothesis. A facet of this general statistical behavior of true RNG-IMMI data is the statistical balancing of scores observed in IMMI tests in which binary testing conditions are adopted. The actual dynamics that had been supporting the elusive IMMI effect are shown to be related to the psychology of the experimenters. The implications of the refutation of the IMMI hypothesis, especially for associated phenomena, are also discussed.
1302.6988
Nicola Fameli
Jeremy G. Hoskins, Nicola Fameli, Cornelis van Breemen
Gold particle quantification in immuno-gold labelled sections for transmission electron microscopy
5 pages, 2 figures, working document
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We briefly outline an algorithm for accurate quantification of specific binding of gold particles to fixed biological tissue samples prepared for immuno-transmission electron microscopy (TEM). The algorithm is based on existing protocols for rational accounting of colloidal gold particles used in secondary antibodies for immuno-gold labeling.
[ { "created": "Wed, 27 Feb 2013 20:50:08 GMT", "version": "v1" } ]
2013-02-28
[ [ "Hoskins", "Jeremy G.", "" ], [ "Fameli", "Nicola", "" ], [ "van Breemen", "Cornelis", "" ] ]
We briefly outline an algorithm for accurate quantification of specific binding of gold particles to fixed biological tissue samples prepared for immuno-transmission electron microscopy (TEM). The algorithm is based on existing protocols for rational accounting of colloidal gold particles used in secondary antibodies for immuno-gold labeling.
2102.09802
Bishal Chhetri
Bishal Chhetri, D.k.k. Vamsi, S Balasubramanian, Carani B Sanjeevi
Optimal Vaccination and Treatment Strategies in Reduction of COVID-19 Burden
21 pages, 16 figures
null
null
null
q-bio.PE math.DS physics.soc-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this study, we formulate a mathematical model incorporating age-specific transmission dynamics of COVID-19 to evaluate the role of vaccination and treatment strategies in reducing the size of the COVID-19 burden. We first establish the positivity and boundedness of the solutions of the model and calculate the basic reproduction number. We then formulate an optimal control problem with vaccination and treatment as control variables. Optimal vaccination and treatment policies are analysed for different values of the weight constant associated with the cost of vaccination and for different transmissibility levels. The findings suggest that the combined strategy (vaccination and treatment) works best in minimizing infection and disease-induced mortality. To reduce COVID-19 infections and COVID-19-induced deaths to the greatest extent, the optimal control strategy should be prioritized for the population aged over 40 years. Little difference was found between individual and combined strategies in the case of a mild epidemic ($R_0 \in (0, 2)$). For higher values of $R_0$ ($R_0 \in (2, 10)$), the combined strategy was found to be best in terms of minimizing overall infection. The infection curves for varying vaccine efficacies were also analysed, and it was found that higher vaccine efficacy resulted in fewer infections and COVID-induced deaths.
[ { "created": "Fri, 19 Feb 2021 08:34:55 GMT", "version": "v1" } ]
2021-02-22
[ [ "Chhetri", "Bishal", "" ], [ "Vamsi", "D. k. k.", "" ], [ "Balasubramanian", "S", "" ], [ "Sanjeevi", "Carani B", "" ] ]
In this study, we formulate a mathematical model incorporating age-specific transmission dynamics of COVID-19 to evaluate the role of vaccination and treatment strategies in reducing the size of the COVID-19 burden. We first establish the positivity and boundedness of the solutions of the model and calculate the basic reproduction number. We then formulate an optimal control problem with vaccination and treatment as control variables. Optimal vaccination and treatment policies are analysed for different values of the weight constant associated with the cost of vaccination and for different transmissibility levels. The findings suggest that the combined strategy (vaccination and treatment) works best in minimizing infection and disease-induced mortality. To reduce COVID-19 infections and COVID-19-induced deaths to the greatest extent, the optimal control strategy should be prioritized for the population aged over 40 years. Little difference was found between individual and combined strategies in the case of a mild epidemic ($R_0 \in (0, 2)$). For higher values of $R_0$ ($R_0 \in (2, 10)$), the combined strategy was found to be best in terms of minimizing overall infection. The infection curves for varying vaccine efficacies were also analysed, and it was found that higher vaccine efficacy resulted in fewer infections and COVID-induced deaths.
1611.01448
Seyed Aidin Sajedi
Seyed Aidin Sajedi and Fahimeh Abdollahi
Multiple Sclerosis and Geomagnetic Disturbances: Investigating a Potentially Important Environmental Risk Factor
29 pages, 2 figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple sclerosis (MS) is one of the most common disabling neurological disorders in young adults. Despite much research, the cause of the disease remains largely unknown. All evidence indicates that environmental risk factors play key roles in its etiology. Various hypotheses about the presumed disease risk factors have been posited to date; however, none has succeeded in explaining all features of MS. The aim of this article is to introduce the concept of the newly proposed "geomagnetic disturbance hypothesis of MS" and its ability to explain the special features of the disease, in order to encourage medical geologists and other biomedical researchers to contribute to this area of research.
[ { "created": "Mon, 27 Jun 2016 18:56:47 GMT", "version": "v1" }, { "created": "Fri, 25 Nov 2016 15:58:50 GMT", "version": "v2" } ]
2016-11-28
[ [ "Sajedi", "Seyed Aidin", "" ], [ "Abdollahi", "Fahimeh", "" ] ]
Multiple sclerosis (MS) is one of the most common disabling neurological disorders in young adults. Despite much research, the cause of the disease remains largely unknown. All evidence indicates that environmental risk factors play key roles in its etiology. Various hypotheses about the presumed disease risk factors have been posited to date; however, none has succeeded in explaining all features of MS. The aim of this article is to introduce the concept of the newly proposed "geomagnetic disturbance hypothesis of MS" and its ability to explain the special features of the disease, in order to encourage medical geologists and other biomedical researchers to contribute to this area of research.
1307.4417
Von Bing Yap
Von Bing Yap
Re-imagining the Hardy-Weinberg Law
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Under random mating, a progeny's alleles are independently sampled from the parental gene pools. Here is a new proof which avoids the usual algebraic complexity, based on a restated Mendel's First Law. Another simplified proof along the old approach led to the discovery that allelic independence can hold under random mating and fertility selection. The theoretical existence and number of such selection coefficients are established.
[ { "created": "Tue, 16 Jul 2013 20:28:06 GMT", "version": "v1" }, { "created": "Tue, 10 Sep 2013 14:41:14 GMT", "version": "v2" }, { "created": "Wed, 16 Dec 2015 04:34:13 GMT", "version": "v3" } ]
2015-12-17
[ [ "Yap", "Von Bing", "" ] ]
Under random mating, a progeny's alleles are independently sampled from the parental gene pools. Here is a new proof which avoids the usual algebraic complexity, based on a restated Mendel's First Law. Another simplified proof along the old approach led to the discovery that allelic independence can hold under random mating and fertility selection. The theoretical existence and number of such selection coefficients are established.
1411.0595
Xiaoxi Dong
Xiaoxi Dong, Anatoly Yambartsev, Stephen Ramsey, Lina Thomas, Natalia Shulzhenko, Andrey Morgun
Reverse enGENEering of regulatory networks from Big Data: a guide for a biologist
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Omics technologies enable unbiased investigation of biological systems through massively parallel sequence acquisition or molecular measurements, bringing the life sciences into the era of Big Data. A central challenge posed by such omics datasets is how to transform this data into biological knowledge. For example, how to use this data to answer questions such as: which functional pathways are involved in cell differentiation? Which genes should we target to stop cancer? Network analysis is a powerful and general approach to solve this problem, consisting of two fundamental stages, network reconstruction and network interrogation. Herein, we provide an overview of network analysis, including a step-by-step guide on how to perform and use this approach to investigate a biological question. In this guide, we also include the software packages that we and others employ for each of the steps of a network analysis workflow.
[ { "created": "Mon, 3 Nov 2014 18:23:17 GMT", "version": "v1" } ]
2014-11-04
[ [ "Dong", "Xiaoxi", "" ], [ "Yambartsev", "Anatoly", "" ], [ "Ramsey", "Stephen", "" ], [ "Thomas", "Lina", "" ], [ "Shulzhenko", "Natalia", "" ], [ "Morgun", "Andrey", "" ] ]
Omics technologies enable unbiased investigation of biological systems through massively parallel sequence acquisition or molecular measurements, bringing the life sciences into the era of Big Data. A central challenge posed by such omics datasets is how to transform this data into biological knowledge. For example, how to use this data to answer questions such as: which functional pathways are involved in cell differentiation? Which genes should we target to stop cancer? Network analysis is a powerful and general approach to solve this problem, consisting of two fundamental stages, network reconstruction and network interrogation. Herein, we provide an overview of network analysis, including a step-by-step guide on how to perform and use this approach to investigate a biological question. In this guide, we also include the software packages that we and others employ for each of the steps of a network analysis workflow.
1806.02310
Kelath Murali Manoj
Kelath Murali Manoj, Abhinav Parashar
Unveiling ADP-binding sites and channels in respiratory complexes: Validation of Murburn concept as a holistic explanation for oxidative phosphorylation
96 pages, 9 figures and three tables in the main manuscript, along with a supplementary information file
Journal of Biomolecular Structure and Dynamics, 2018
10.1080/07391102.2018.1552896
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mitochondrial oxidative phosphorylation (mOxPhos) makes ATP, the energy currency of life. Chemiosmosis, a proton-centric mechanism, advocates that Complex V harnesses a transmembrane potential (TMP) for ATP synthesis. This perception of cellular respiration requires oxygen to stay tethered at Complex IV (an association inhibited by cyanide), and diffusible reactive oxygen species (DROS) are considered wasteful and toxic products. With new mechanistic insights into heme and flavin enzymes, an oxygen- or DROS-centric explanation (called the murburn concept) was recently proposed for mOxPhos. In the new mechanism, TMP is not directly harnessed, protons are a rate-limiting reactant, and DROS within the matrix serve as the chemical coupling agents that directly link NADH oxidation with ATP synthesis. Herein, we report multiple ADP-binding sites and solvent-accessible DROS channels in respiratory proteins, which validate the oxygen- or DROS-centric power generation (ATP synthesis) system in mOxPhos. Since cyanide's heme-binding Kd is high (mM), low doses (uM) of cyanide are lethal because cyanide disrupts DROS dynamics in mOxPhos. This critical study also provides comprehensive arguments against Mitchell's and Boyer's explanations and extensive support for holistic perspectives on mOxPhos based on the murburn concept.
[ { "created": "Wed, 6 Jun 2018 17:17:49 GMT", "version": "v1" } ]
2018-12-21
[ [ "Manoj", "Kelath Murali", "" ], [ "Parashar", "Abhinav", "" ] ]
Mitochondrial oxidative phosphorylation (mOxPhos) makes ATP, the energy currency of life. Chemiosmosis, a proton-centric mechanism, advocates that Complex V harnesses a transmembrane potential (TMP) for ATP synthesis. This perception of cellular respiration requires oxygen to stay tethered at Complex IV (an association inhibited by cyanide), and diffusible reactive oxygen species (DROS) are considered wasteful and toxic products. With new mechanistic insights into heme and flavin enzymes, an oxygen- or DROS-centric explanation (called the murburn concept) was recently proposed for mOxPhos. In the new mechanism, TMP is not directly harnessed, protons are a rate-limiting reactant, and DROS within the matrix serve as the chemical coupling agents that directly link NADH oxidation with ATP synthesis. Herein, we report multiple ADP-binding sites and solvent-accessible DROS channels in respiratory proteins, which validate the oxygen- or DROS-centric power generation (ATP synthesis) system in mOxPhos. Since cyanide's heme-binding Kd is high (mM), low doses (uM) of cyanide are lethal because cyanide disrupts DROS dynamics in mOxPhos. This critical study also provides comprehensive arguments against Mitchell's and Boyer's explanations and extensive support for holistic perspectives on mOxPhos based on the murburn concept.
0910.1068
Ariel Fern\'andez
Ariel Fernandez and Hugo Fort
Catastrophic Phase Transitions and Early Warnings in a Spatial Ecological Model
null
J. Stat. Mech. (2009) P09014
10.1088/1742-5468/2009/09/P09014
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gradual changes in exploitation, nutrient loading, etc. produce shifts between alternative stable states (ASS) in ecosystems which, quite often, are not smooth but abrupt or catastrophic. Early warnings of such catastrophic regime shifts are fundamental for designing management protocols for ecosystems. Here we study the spatial version of a popular ecological model, involving a logistically growing single species subject to exploitation, which is known to exhibit ASS. Spatial heterogeneity is introduced by a carrying capacity parameter varying from cell to cell in a regular lattice. Transport of biomass among cells is included in the form of diffusion. We investigate whether different quantities from statistical mechanics -like the variance, the two-point correlation function and the patchiness- may serve as early warnings of catastrophic phase transitions between the ASS. In particular, we find that the patch-size distribution follows a power law when the system is close to the catastrophic transition. We also provide links between spatial and temporal indicators and analyze how the interplay between diffusion and spatial heterogeneity may affect the earliness of each of the observables. We find that possible remedial procedures, which can be followed after these early signals, are more effective as the diffusion becomes lower. Finally, we comment on similarities and differences between these catastrophic shifts and paradigmatic thermodynamic phase transitions like the liquid-vapour change of state for a fluid like water.
[ { "created": "Tue, 6 Oct 2009 18:20:40 GMT", "version": "v1" } ]
2009-10-07
[ [ "Fernandez", "Ariel", "" ], [ "Fort", "Hugo", "" ] ]
Gradual changes in exploitation, nutrient loading, etc. produce shifts between alternative stable states (ASS) in ecosystems which, quite often, are not smooth but abrupt or catastrophic. Early warnings of such catastrophic regime shifts are fundamental for designing management protocols for ecosystems. Here we study the spatial version of a popular ecological model, involving a logistically growing single species subject to exploitation, which is known to exhibit ASS. Spatial heterogeneity is introduced by a carrying capacity parameter varying from cell to cell in a regular lattice. Transport of biomass among cells is included in the form of diffusion. We investigate whether different quantities from statistical mechanics -like the variance, the two-point correlation function and the patchiness- may serve as early warnings of catastrophic phase transitions between the ASS. In particular, we find that the patch-size distribution follows a power law when the system is close to the catastrophic transition. We also provide links between spatial and temporal indicators and analyze how the interplay between diffusion and spatial heterogeneity may affect the earliness of each of the observables. We find that possible remedial procedures, which can be followed after these early signals, are more effective as the diffusion becomes lower. Finally, we comment on similarities and differences between these catastrophic shifts and paradigmatic thermodynamic phase transitions like the liquid-vapour change of state for a fluid like water.
1305.2083
Darren Wong
Darren CJ Wong, Crystal Sweetman, Damian P Drew, Christopher M Ford
VTCdb: A transcriptomics & co-expression database for the crop species Vitis vinifera (grapevine)
21 pages, 5 figures, 2 tables
BMC Genomics 2013, 14:882
10.1186/1471-2164-14-882
null
q-bio.MN q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene co-expression databases in model plants such as Arabidopsis and rice have been extensively developed and utilised for predicting gene function and identifying functional modules for underlying biological processes. However, such tools are less available for non-model plants such as grapevine. We have constructed a gene co-expression database, VTCdb (http://vtcdb.adelaide.edu.au/Home.aspx) that offers an online platform for transcriptional regulatory inference in the cultivated grapevine. Using a condition-independent approach, the grapevine co-expression network was constructed using 352 publicly available microarray datasets from a diverse experimental series, profiling approximately 9000 genes (40% of the predicted grapevine transcriptome). Using correlation rank transformation and graph-clustering, we have identified modules putatively involved in several fundamental biological processes such as photosynthesis, secondary metabolism and stress responses. Inter-module network connections revealed a higher-level organization of function, in which densely connected modules often participated in related biological processes and had similar expression profiles in grapevine. The database enables users to query genes, modules or biological processes of interest. Querying a gene will give a ranked list of co-expressed genes, functional annotations and information on the associated module. Alternatively, browsing modules of interest will retrieve information regarding Gene Ontology and Mapman enriched terms, as well as tissue- and condition-specific patterns of gene expression within the module. Furthermore, the database features interactive network visualisation via CytoscapeWeb. VTCdb should aid researchers in their prioritization of gene candidates for further study towards the understanding of biological processes related to many aspects of grapevine development and metabolism.
[ { "created": "Thu, 9 May 2013 13:15:30 GMT", "version": "v1" } ]
2013-12-20
[ [ "Wong", "Darren CJ", "" ], [ "Sweetman", "Crystal", "" ], [ "Drew", "Damian P", "" ], [ "Ford", "Christopher M", "" ] ]
Gene co-expression databases in model plants such as Arabidopsis and rice have been extensively developed and utilised for predicting gene function and identifying functional modules for underlying biological processes. However, such tools are less available for non-model plants such as grapevine. We have constructed a gene co-expression database, VTCdb (http://vtcdb.adelaide.edu.au/Home.aspx) that offers an online platform for transcriptional regulatory inference in the cultivated grapevine. Using a condition-independent approach, the grapevine co-expression network was constructed using 352 publicly available microarray datasets from a diverse experimental series, profiling approximately 9000 genes (40% of the predicted grapevine transcriptome). Using correlation rank transformation and graph-clustering, we have identified modules putatively involved in several fundamental biological processes such as photosynthesis, secondary metabolism and stress responses. Inter-module network connections revealed a higher-level organization of function, in which densely connected modules often participated in related biological processes and had similar expression profiles in grapevine. The database enables users to query genes, modules or biological processes of interest. Querying a gene will give a ranked list of co-expressed genes, functional annotations and information on the associated module. Alternatively, browsing modules of interest will retrieve information regarding Gene Ontology and Mapman enriched terms, as well as tissue- and condition-specific patterns of gene expression within the module. Furthermore, the database features interactive network visualisation via CytoscapeWeb. VTCdb should aid researchers in their prioritization of gene candidates for further study towards the understanding of biological processes related to many aspects of grapevine development and metabolism.
1512.05213
Paolo Sibani
Christian Walther Andersen and Paolo Sibani
The Tangled Nature Model of evolutionary dynamics reconsidered: structural and dynamical effects of trait inheritance
11 pages, 10 figures Improved figures and text. To be published in Phys. Rev. E
Phys. Rev. E 93, 052410 (2016)
10.1103/PhysRevE.93.052410
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Tangled Nature Model of biological and cultural evolution features interacting agents which compete for limited resources and reproduce in an error prone fashion and at a rate depending on the `tangle' of interactions they maintain with others. The set of interactions linking a TNM individual to others is key to its reproductive success and arguably constitutes its most important property. Yet, in many studies, the interactions of an individual and those of its mutated offspring are unrelated, a rather unrealistic feature corresponding to a point mutation turning a giraffe into an elephant. To bring out the structural and dynamical effects of trait inheritance, we introduce and numerically analyze a family of TNM models where a positive integer $K$ parametrises correlations between the interactions of an agent and those of its mutated offspring. For $K=1$ a single point mutation randomizes all the interactions, while increasing $K$ up to the length of the genome ensures an increasing level of trait inheritance. We show that the distribution of the interactions generated by our rule is nearly independent of the value of $K$. Changing $K$ strengthens the core structure of the ecology, leads to population abundance distributions which are better approximated by log-normal probability densities and increases the probability that a species extant at time $t_{\rm w}$ is also extant at a later time $t$. In particular, survival probabilities are shown to decay as powers of the ratio $t/t_{\rm w}$, similarly to the pure aging behaviour approximately describing glassy systems of physical origin. Increasing the value of $K$ decreases the numerical value of the decay exponent of the power law, which is a clear quantitative dynamical effect of trait inheritance.
[ { "created": "Wed, 16 Dec 2015 15:33:27 GMT", "version": "v1" }, { "created": "Fri, 6 May 2016 12:41:46 GMT", "version": "v2" } ]
2016-05-25
[ [ "Andersen", "Christian Walther", "" ], [ "Sibani", "Paolo", "" ] ]
The Tangled Nature Model of biological and cultural evolution features interacting agents which compete for limited resources and reproduce in an error prone fashion and at a rate depending on the `tangle' of interactions they maintain with others. The set of interactions linking a TNM individual to others is key to its reproductive success and arguably constitutes its most important property. Yet, in many studies, the interactions of an individual and those of its mutated offspring are unrelated, a rather unrealistic feature corresponding to a point mutation turning a giraffe into an elephant. To bring out the structural and dynamical effects of trait inheritance, we introduce and numerically analyze a family of TNM models where a positive integer $K$ parametrises correlations between the interactions of an agent and those of its mutated offspring. For $K=1$ a single point mutation randomizes all the interactions, while increasing $K$ up to the length of the genome ensures an increasing level of trait inheritance. We show that the distribution of the interactions generated by our rule is nearly independent of the value of $K$. Changing $K$ strengthens the core structure of the ecology, leads to population abundance distributions which are better approximated by log-normal probability densities and increases the probability that a species extant at time $t_{\rm w}$ is also extant at a later time $t$. In particular, survival probabilities are shown to decay as powers of the ratio $t/t_{\rm w}$, similarly to the pure aging behaviour approximately describing glassy systems of physical origin. Increasing the value of $K$ decreases the numerical value of the decay exponent of the power law, which is a clear quantitative dynamical effect of trait inheritance.
2103.08225
Richard P. Sear
Richard P Sear
Liquid-liquid phase separation, biomolecular condensates, puncta, non-stoichiometric supramolecular assemblies, membraneless organelles, and bacterial chemotaxis are best understood as emergent phenomena with switch-like behaviour
4 pages, 2 figures
null
null
null
q-bio.CB q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Liquid-liquid phase separation (LLPS) is currently of great interest in cell biology. LLPS is an example of what is called an emergent phenomenon -- an idea that comes from condensed-matter physics. Emergent phenomena have the characteristic feature of having a switch-like response. I show that the Hill equation of biochemistry can be used as a simple model of strongly cooperative, switch-like, behaviour. One result is that a switch-like response requires relatively few molecules, even ten gives a strongly switch-like response. Thus if a biological function enabled by LLPS relies on LLPS to provide a switch-like response to a stimulus, then condensates large enough to be visible in optical microscopy are not needed.
[ { "created": "Mon, 15 Mar 2021 09:15:38 GMT", "version": "v1" } ]
2021-03-16
[ [ "Sear", "Richard P", "" ] ]
Liquid-liquid phase separation (LLPS) is currently of great interest in cell biology. LLPS is an example of what is called an emergent phenomenon -- an idea that comes from condensed-matter physics. Emergent phenomena have the characteristic feature of having a switch-like response. I show that the Hill equation of biochemistry can be used as a simple model of strongly cooperative, switch-like, behaviour. One result is that a switch-like response requires relatively few molecules, even ten gives a strongly switch-like response. Thus if a biological function enabled by LLPS relies on LLPS to provide a switch-like response to a stimulus, then condensates large enough to be visible in optical microscopy are not needed.
q-bio/0511007
Liane Gabora
Liane Gabora & Diederik Aerts
Evolution as context-driven actualization of potential: Toward an interdisciplinary theory of change of state
19 pages, 1 figure
Gabora, L. & Aerts, D. (2005). Evolution as context-driven actualization of potential: Toward an interdisciplinary theory of change of state. Interdisciplinary Science Reviews, 30(1), 69-88
10.1179/030801805X25873
null
q-bio.PE nlin.AO q-bio.OT quant-ph
null
It is increasingly evident that there is more to biological evolution than natural selection; moreover, the concept of evolution is not limited to biology. We propose an integrative framework for characterizing how entities evolve, in which evolution is viewed as a process of context-driven actualization of potential (CAP). Processes of change differ according to the degree of nondeterminism, and the degree to which they are sensitive to, internalize, and depend upon a particular context. The approach enables us to embed phenomena across disciplines into a broad conceptual framework. We give examples of insights into physics, biology, culture and cognition that derive from this unifying framework.
[ { "created": "Sat, 5 Nov 2005 06:16:59 GMT", "version": "v1" } ]
2007-05-23
[ [ "Gabora", "Liane", "" ], [ "Aerts", "Diederik", "" ] ]
It is increasingly evident that there is more to biological evolution than natural selection; moreover, the concept of evolution is not limited to biology. We propose an integrative framework for characterizing how entities evolve, in which evolution is viewed as a process of context-driven actualization of potential (CAP). Processes of change differ according to the degree of nondeterminism, and the degree to which they are sensitive to, internalize, and depend upon a particular context. The approach enables us to embed phenomena across disciplines into a broad conceptual framework. We give examples of insights into physics, biology, culture and cognition that derive from this unifying framework.
1602.02569
Tom\'as Revilla
Tom\'as A. Revilla and Vlastimil K\v{r}ivan
Pollinator foraging flexibility and coexistence of competing plants
29 pages, 7 main figures, 2 appendix figures
PLoS ONE 11(8): e0160076 (2016)
10.1371/journal.pone.0160076
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We use optimal foraging theory to study coexistence between two plant species and a generalist pollinator. We compare conditions for plant coexistence for non-adaptive vs. adaptive pollinators that adjust their foraging strategy to maximize fitness. When pollinators have fixed preferences, we show that plant coexistence typically requires both weak competition between plants for resources (e.g., space or nutrients) and pollinator preferences that are not too biased in favour of either plant. We also show how plant coexistence is promoted by indirect facilitation via the pollinator. When pollinators are adaptive foragers, pollinator's diet maximizes pollinator's fitness measured as the per capita population growth rate. Simulations show that this has two conflicting consequences for plant coexistence. On the one hand, when competition between pollinators is weak, adaptation favours pollinator specialization on the more profitable plant which increases asymmetries in plant competition and makes their coexistence less likely. On the other hand, when competition between pollinators is strong, adaptation promotes generalism, which facilitates plant coexistence. In addition, adaptive foraging allows pollinators to survive sudden loss of the preferred plant host, thus preventing further collapse of the entire community. Keywords: mutualism, competition, optimal foraging, evolutionarily stable strategy, coexistence, adaptation rate
[ { "created": "Mon, 8 Feb 2016 13:59:12 GMT", "version": "v1" }, { "created": "Thu, 18 Feb 2016 14:30:27 GMT", "version": "v2" }, { "created": "Wed, 10 Aug 2016 06:45:35 GMT", "version": "v3" } ]
2016-08-22
[ [ "Revilla", "Tomás A.", "" ], [ "Křivan", "Vlastimil", "" ] ]
We use optimal foraging theory to study coexistence between two plant species and a generalist pollinator. We compare conditions for plant coexistence for non-adaptive vs. adaptive pollinators that adjust their foraging strategy to maximize fitness. When pollinators have fixed preferences, we show that plant coexistence typically requires both weak competition between plants for resources (e.g., space or nutrients) and pollinator preferences that are not too biased in favour of either plant. We also show how plant coexistence is promoted by indirect facilitation via the pollinator. When pollinators are adaptive foragers, pollinator's diet maximizes pollinator's fitness measured as the per capita population growth rate. Simulations show that this has two conflicting consequences for plant coexistence. On the one hand, when competition between pollinators is weak, adaptation favours pollinator specialization on the more profitable plant which increases asymmetries in plant competition and makes their coexistence less likely. On the other hand, when competition between pollinators is strong, adaptation promotes generalism, which facilitates plant coexistence. In addition, adaptive foraging allows pollinators to survive sudden loss of the preferred plant host, thus preventing further collapse of the entire community. Keywords: mutualism, competition, optimal foraging, evolutionarily stable strategy, coexistence, adaptation rate
0909.2832
Marianne Rooman
Peter van der Gulik, Serge Massar, Dimitri Gilis, Harry Buhrman, Marianne Rooman
The first peptides: the evolutionary transition between prebiotic amino acids and early proteins
22 pages, 2 figures, J. Theor. Biol. (2009) in press
Journal of Theoretical Biology 261 (2009) 531-539
null
null
q-bio.BM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The issues we attempt to tackle here are what the first peptides did look like when they emerged on the primitive earth, and what simple catalytic activities they fulfilled. We conjecture that the early functional peptides were short (3 to 8 amino acids long), were made of those amino acids, Gly, Ala, Val and Asp, that are abundantly produced in many prebiotic synthesis experiments and observed in meteorites, and that the neutralization of Asp's negative charge is achieved by metal ions. We further assume that some traces of these prebiotic peptides still exist, in the form of active sites in present-day proteins. Searching these proteins for prebiotic peptide candidates led us to identify three main classes of motifs, bound mainly to Mg^{2+} ions: D(F/Y)DGD corresponding to the active site in RNA polymerases, DGD(G/A)D present in some kinds of mutases, and DAKVGDGD in dihydroxyacetone kinase. All three motifs contain a DGD submotif, which is suggested to be the common ancestor of all active peptides. Moreover, all three manipulate phosphate groups, which was probably a very important biological function in the very first stages of life. The statistical significance of our results is supported by the frequency of these motifs in today's proteins, which is three times higher than expected by chance, with a P-value of 3 10^{-2}. The implications of our findings in the context of the appearance of life and the possibility of an experimental validation are discussed.
[ { "created": "Tue, 15 Sep 2009 15:55:46 GMT", "version": "v1" } ]
2009-11-03
[ [ "van der Gulik", "Peter", "" ], [ "Massar", "Serge", "" ], [ "Gilis", "Dimitri", "" ], [ "Buhrman", "Harry", "" ], [ "Rooman", "Marianne", "" ] ]
The issues we attempt to tackle here are what the first peptides did look like when they emerged on the primitive earth, and what simple catalytic activities they fulfilled. We conjecture that the early functional peptides were short (3 to 8 amino acids long), were made of those amino acids, Gly, Ala, Val and Asp, that are abundantly produced in many prebiotic synthesis experiments and observed in meteorites, and that the neutralization of Asp's negative charge is achieved by metal ions. We further assume that some traces of these prebiotic peptides still exist, in the form of active sites in present-day proteins. Searching these proteins for prebiotic peptide candidates led us to identify three main classes of motifs, bound mainly to Mg^{2+} ions: D(F/Y)DGD corresponding to the active site in RNA polymerases, DGD(G/A)D present in some kinds of mutases, and DAKVGDGD in dihydroxyacetone kinase. All three motifs contain a DGD submotif, which is suggested to be the common ancestor of all active peptides. Moreover, all three manipulate phosphate groups, which was probably a very important biological function in the very first stages of life. The statistical significance of our results is supported by the frequency of these motifs in today's proteins, which is three times higher than expected by chance, with a P-value of 3 10^{-2}. The implications of our findings in the context of the appearance of life and the possibility of an experimental validation are discussed.
0709.0823
Carsten Marr
Carsten Marr, Mark Mueller-Linow, Marc-Thorsten Huett
Reply to ''Comment on 'Regularizing Capacity of Metabolic Networks' ''
2 pages, 2 figures
null
10.1103/PhysRevE.77.023902
null
q-bio.MN nlin.CG
null
In a recent paper [C. Marr, M. Mueller-Linow, and M.-T. Huett, Phys. Rev. E 75, 041917 (2007)] we discuss the pronounced potential of real metabolic network topologies, compared to randomized counterparts, to regularize complex binary dynamics. In their comment [P. Holme and M. Huss, arXiv:0705.4084v1], Holme and Huss criticize our approach and repeat our study with more realistic dynamics, where stylized reaction kinetics are implemented on sets of pairwise reactions. The authors find no dynamic difference between the reaction sets recreated from the metabolic networks and randomized counterparts. We reproduce the author's observation and find that their algorithm leads to a dynamical fragmentation and thus eliminates the topological information contained in the graphs. Hence, their approach cannot rule out a connection between the topology of metabolic networks and the ubiquity of steady states.
[ { "created": "Thu, 6 Sep 2007 10:35:31 GMT", "version": "v1" } ]
2009-11-13
[ [ "Marr", "Carsten", "" ], [ "Mueller-Linow", "Mark", "" ], [ "Huett", "Marc-Thorsten", "" ] ]
In a recent paper [C. Marr, M. Mueller-Linow, and M.-T. Huett, Phys. Rev. E 75, 041917 (2007)] we discuss the pronounced potential of real metabolic network topologies, compared to randomized counterparts, to regularize complex binary dynamics. In their comment [P. Holme and M. Huss, arXiv:0705.4084v1], Holme and Huss criticize our approach and repeat our study with more realistic dynamics, where stylized reaction kinetics are implemented on sets of pairwise reactions. The authors find no dynamic difference between the reaction sets recreated from the metabolic networks and randomized counterparts. We reproduce the author's observation and find that their algorithm leads to a dynamical fragmentation and thus eliminates the topological information contained in the graphs. Hence, their approach cannot rule out a connection between the topology of metabolic networks and the ubiquity of steady states.
2306.04665
Jozsef Prechl
Jozsef Prechl
Statistical thermodynamics of self-organization of the binding energy super-landscape in the adaptive immune system
19 pages, 7 figures
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-nd/4.0/
The humoral adaptive immune system, from the physical perspective, can be regarded as a self-organized antibody binding energy landscape. In biological terms it means the system organizes its repertoire of antigen binding molecules so as to maintain its integrity. In this article I reason that the super-landscape created by the fusion of binding energy landscapes can be described by the distribution of interaction energies and a deformation parameter of thermodynamic potentials in the system. This deformation parameter characterizes the adaptive network of interactions in the system and the asymmetry of generalized logistic distributions in immunoassays. Overall, statistical thermodynamics approaches could provide a deeper theoretical insight into the dynamical self-organization of the adaptive immune system and into the interpretation of experimental results.
[ { "created": "Wed, 7 Jun 2023 11:18:52 GMT", "version": "v1" }, { "created": "Tue, 13 Jun 2023 14:31:41 GMT", "version": "v2" }, { "created": "Wed, 25 Oct 2023 13:47:41 GMT", "version": "v3" }, { "created": "Mon, 8 Apr 2024 10:57:31 GMT", "version": "v4" }, { "created": "Tue, 30 Jul 2024 14:16:37 GMT", "version": "v5" } ]
2024-07-31
[ [ "Prechl", "Jozsef", "" ] ]
The humoral adaptive immune system, from the physical perspective, can be regarded as a self-organized antibody binding energy landscape. In biological terms it means the system organizes its repertoire of antigen binding molecules so as to maintain its integrity. In this article I reason that the super-landscape created by the fusion of binding energy landscapes can be described by the distribution of interaction energies and a deformation parameter of thermodynamic potentials in the system. This deformation parameter characterizes the adaptive network of interactions in the system and the asymmetry of generalized logistic distributions in immunoassays. Overall, statistical thermodynamics approaches could provide a deeper theoretical insight into the dynamical self-organization of the adaptive immune system and into the interpretation of experimental results.
2203.09591
James Tee
James Tee, Giorgio M. Vitetta and Desmond P. Taylor
Advances in Shannon-Based Communications and Computations Approaches to Understanding Information Processing in the Brain
null
null
null
null
q-bio.NC cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article serves as a supplement to the recently published call for participation in a Research Topic [1] that is timed to commemorate the 75th anniversary of Shannon's pioneering 1948 paper [2]. Here, we include some citations of key and relevant literature, which reflect our opinions/perspectives on the proposed topic, and serve as guidance to potential submissions.
[ { "created": "Thu, 17 Mar 2022 20:09:49 GMT", "version": "v1" } ]
2022-03-21
[ [ "Tee", "James", "" ], [ "Vitetta", "Giorgio M.", "" ], [ "Taylor", "Desmond P.", "" ] ]
This article serves as a supplement to the recently published call for participation in a Research Topic [1] that is timed to commemorate the 75th anniversary of Shannon's pioneering 1948 paper [2]. Here, we include some citations of key and relevant literature, which reflect our opinions/perspectives on the proposed topic, and serve as guidance to potential submissions.
2106.07490
Spyridon Chavlis Ph.D.
Spyridon Chavlis and Panayiota Poirazi
Drawing Inspiration from Biological Dendrites to Empower Artificial Neural Networks
12 pages, 1 figure, opinion article
Current Opinion in Neurobiology 70 (2021): 1-10
10.1016/j.conb.2021.04.007
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
This article highlights specific features of biological neurons and their dendritic trees, whose adoption may help advance artificial neural networks used in various machine learning applications. Advancements could take the form of increased computational capabilities and/or reduced power consumption. Proposed features include dendritic anatomy, dendritic nonlinearities, and compartmentalized plasticity rules, all of which shape learning and information processing in biological networks. We discuss the computational benefits provided by these features in biological neurons and suggest ways to adopt them in artificial neurons in order to exploit the respective benefits in machine learning.
[ { "created": "Mon, 14 Jun 2021 15:20:24 GMT", "version": "v1" } ]
2021-06-15
[ [ "Chavlis", "Spyridon", "" ], [ "Poirazi", "Panayiota", "" ] ]
This article highlights specific features of biological neurons and their dendritic trees, whose adoption may help advance artificial neural networks used in various machine learning applications. Advancements could take the form of increased computational capabilities and/or reduced power consumption. Proposed features include dendritic anatomy, dendritic nonlinearities, and compartmentalized plasticity rules, all of which shape learning and information processing in biological networks. We discuss the computational benefits provided by these features in biological neurons and suggest ways to adopt them in artificial neurons in order to exploit the respective benefits in machine learning.
q-bio/0504006
Karen Luz Burgoa K. Luz-Burgoa
K. Luz-Burgoa
Computer Simulations to Study Sympatric Speciation Processes
Doctoral Thesis defended on 28th March, 2005 at the "Universidade Federal Fluminense" Federal University in Niteroi, Brazil. Version in Portuguese
null
null
null
q-bio.PE q-bio.QM
null
We perform simulations based on the Penna model for biological ageing, now with the purpose of studying sympatric speciation, that is, the division of a single species into two or more populations, reproductively isolated, but without any physical barrier separating them. For that we introduce a new kind of competition among the individuals, using a modified Verhulst factor. The new competition depends on some specific phenotypic characteristic of each individual, which is represented by a pair of bitstrings. These strings are read in parallel and have no age structure. In this way, each individual genome consists of two parts. The first one has an age-structure and is related to the appearance of inherited diseases; the second part is not structured and takes into account the competition for the available resources. We also introduce sexual selection into the model, making use of another non-structured and independent pair of bitstrings. In this thesis we present three different models; two of them use, besides the competition, a sudden change in the ecology to obtain speciation. They were motivated by the speciation process observed in the Darwin finches, a family of birds that inhabits the Galapagos Islands, and also by that observed in the cichlids, a family of fish that lives in the Nicaragua Lakes and in the Vitoria Lake, in Africa. The third model does not use any ecological change: sympatric speciation is obtained depending only on the strength of competition among individuals with similar phenotypic characteristics.
[ { "created": "Tue, 5 Apr 2005 12:50:50 GMT", "version": "v1" } ]
2007-05-23
[ [ "Luz-Burgoa", "K.", "" ] ]
We perform simulations based on the Penna model for biological ageing, now with the purpose of studying sympatric speciation, that is, the division of a single species into two or more populations, reproductively isolated, but without any physical barrier separating them. For that we introduce a new kind of competition among the individuals, using a modified Verhulst factor. The new competition depends on some specific phenotypic characteristic of each individual, which is represented by a pair of bitstrings. These strings are read in parallel and have no age structure. In this way, each individual genome consists of two parts. The first one has an age-structure and is related to the appearance of inherited diseases; the second part is not structured and takes into account the competition for the available resources. We also introduce sexual selection into the model, making use of another non-structured and independent pair of bitstrings. In this thesis we present three different models; two of them use, besides the competition, a sudden change in the ecology to obtain speciation. They were motivated by the speciation process observed in the Darwin finches, a family of birds that inhabits the Galapagos Islands, and also by that observed in the cichlids, a family of fish that lives in the Nicaragua Lakes and in the Vitoria Lake, in Africa. The third model does not use any ecological change: sympatric speciation is obtained depending only on the strength of competition among individuals with similar phenotypic characteristics.
1409.3272
Benjamin de Bivort
Julien F. Ayroles, Sean M. Buchanan, Chelsea Jenney, Kyobi Skutt-Kakaria, Jennifer Grenier, Andrew G. Clark, Daniel L. Hartl, Benjamin L. de Bivort
Behavioral individuality reveals genetic control of phenotypic variability
13 pages, 9 figures, 2 tables
null
10.1073/pnas.1503830112
null
q-bio.GN
http://creativecommons.org/licenses/by/3.0/
Variability is ubiquitous in nature and a fundamental feature of complex systems. Few studies, however, have investigated variance itself as a trait under genetic control. By focusing primarily on trait means and ignoring the effect of alternative alleles on trait variability, we may be missing an important axis of genetic variation contributing to phenotypic differences among individuals. To study genetic effects on individual-to-individual phenotypic variability (or intragenotypic variability), we used a panel of Drosophila inbred lines and focused on locomotor handedness, in an assay optimized to measure variability. We discovered that some lines had consistently high levels of intragenotypic variability among individuals while others had low levels. We demonstrate that the degree of variability is itself heritable. Using a genome-wide association study (GWAS) for the degree of intragenotypic variability as the phenotype across lines, we identified several genes expressed in the brain that affect variability in handedness without affecting the mean. One of these genes, Ten-a, implicated a neuropil in the central complex of the fly brain as influencing the magnitude of behavioral variability, a brain region involved in sensory integration and locomotor coordination. We have validated these results using genetic deficiencies, null alleles, and inducible RNAi transgenes. This study reveals the constellation of phenotypes that can arise from a single genotype and it shows that different genetic backgrounds differ dramatically in their propensity for phenotypic variability. Because traditional mean-focused GWASs ignore the contribution of variability to overall phenotypic variation, current methods may miss important links between genotype and phenotype.
[ { "created": "Wed, 10 Sep 2014 22:53:25 GMT", "version": "v1" } ]
2016-02-17
[ [ "Ayroles", "Julien F.", "" ], [ "Buchanan", "Sean M.", "" ], [ "Jenney", "Chelsea", "" ], [ "Skutt-Kakaria", "Kyobi", "" ], [ "Grenier", "Jennifer", "" ], [ "Clark", "Andrew G.", "" ], [ "Hartl", "Daniel L.", "" ], [ "de Bivort", "Benjamin L.", "" ] ]
Variability is ubiquitous in nature and a fundamental feature of complex systems. Few studies, however, have investigated variance itself as a trait under genetic control. By focusing primarily on trait means and ignoring the effect of alternative alleles on trait variability, we may be missing an important axis of genetic variation contributing to phenotypic differences among individuals. To study genetic effects on individual-to-individual phenotypic variability (or intragenotypic variability), we used a panel of Drosophila inbred lines and focused on locomotor handedness, in an assay optimized to measure variability. We discovered that some lines had consistently high levels of intragenotypic variability among individuals while others had low levels. We demonstrate that the degree of variability is itself heritable. Using a genome-wide association study (GWAS) for the degree of intragenotypic variability as the phenotype across lines, we identified several genes expressed in the brain that affect variability in handedness without affecting the mean. One of these genes, Ten-a, implicated a neuropil in the central complex of the fly brain as influencing the magnitude of behavioral variability, a brain region involved in sensory integration and locomotor coordination. We have validated these results using genetic deficiencies, null alleles, and inducible RNAi transgenes. This study reveals the constellation of phenotypes that can arise from a single genotype and it shows that different genetic backgrounds differ dramatically in their propensity for phenotypic variability. Because traditional mean-focused GWASs ignore the contribution of variability to overall phenotypic variation, current methods may miss important links between genotype and phenotype.
1101.0723
Jongmin Kim
Pakpoom Subsoontorn, Jongmin Kim, Erik Winfree
Bistability of an In Vitro Synthetic Autoregulatory Switch
21 pages, 9 figures, journal version in preparation
ACS Synthetic Biology 2012
10.1021/sb300018h
null
q-bio.MN q-bio.BM q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The construction of synthetic biochemical circuits is an essential step for developing quantitative understanding of information processing in natural organisms. Here, we report construction and analysis of an in vitro circuit with positive autoregulation that consists of just four synthetic DNA strands and three enzymes, bacteriophage T7 RNA polymerase, Escherichia coli ribonuclease (RNase) H, and RNase R. The modularity of the DNA switch template allowed a rational design of a synthetic DNA switch regulated by its RNA output acting as a transcription activator. We verified that the thermodynamic and kinetic constraints dictated by the sequence design criteria were enough to experimentally achieve the intended dynamics: a transcription activator configured to regulate its own production. Although only RNase H is necessary to achieve bistability of switch states, RNase R is necessary to maintain stable RNA signal levels and to control incomplete degradation products. A simple mathematical model was used to fit ensemble parameters for the training set of experimental results and was then directly applied to predict time-courses of switch dynamics and sensitivity to parameter variations with reasonable agreement. The positive autoregulation switches can be used to provide constant input signals and store outputs of biochemical networks and are potentially useful for chemical control applications.
[ { "created": "Tue, 4 Jan 2011 14:23:40 GMT", "version": "v1" } ]
2012-06-28
[ [ "Subsoontorn", "Pakpoom", "" ], [ "Kim", "Jongmin", "" ], [ "Winfree", "Erik", "" ] ]
The construction of synthetic biochemical circuits is an essential step for developing quantitative understanding of information processing in natural organisms. Here, we report construction and analysis of an in vitro circuit with positive autoregulation that consists of just four synthetic DNA strands and three enzymes, bacteriophage T7 RNA polymerase, Escherichia coli ribonuclease (RNase) H, and RNase R. The modularity of the DNA switch template allowed a rational design of a synthetic DNA switch regulated by its RNA output acting as a transcription activator. We verified that the thermodynamic and kinetic constraints dictated by the sequence design criteria were enough to experimentally achieve the intended dynamics: a transcription activator configured to regulate its own production. Although only RNase H is necessary to achieve bistability of switch states, RNase R is necessary to maintain stable RNA signal levels and to control incomplete degradation products. A simple mathematical model was used to fit ensemble parameters for the training set of experimental results and was then directly applied to predict time-courses of switch dynamics and sensitivity to parameter variations with reasonable agreement. The positive autoregulation switches can be used to provide constant input signals and store outputs of biochemical networks and are potentially useful for chemical control applications.
2404.17804
Rama Hoetzlein
Rama Carl Hoetzlein
Flock2: A model for orientation-based social flocking
15 pages (22 /w appendices & refs), 16 images. Source code: https://github.com/ramakarl/Flock2. Supplementary video: https://youtu.be/GGF28-t4wAY
J. Theoretical Biology, vol 593, p.111880, 2024
10.1016/j.jtbi.2024.111880
null
q-bio.QM physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
The aerial flocking of birds, or murmurations, has fascinated observers while presenting many challenges to behavioral study and simulation. We examine how the periphery of murmurations remains well bounded and cohesive. We also investigate agitation waves, which occur when a flock is disturbed, developing a plausible model for how they might emerge spontaneously. To understand these behaviors a new model is presented for orientation-based social flocking. Previous methods model inter-bird dynamics by considering the neighborhood around each bird, and introducing forces for avoidance, alignment, and cohesion as three dimensional vectors that alter acceleration. Our method introduces orientation-based social flocking that treats social influences from neighbors more realistically as a desire to turn, indirectly controlling the heading in an aerodynamic model. While our model can be applied to any flocking social bird we simulate flocks of starlings, Sturnus vulgaris, and demonstrate the possibility of orientation waves in the absence of predators. Our model exhibits spherical and ovoidal flock shapes matching observation. Comparisons of our model to Reynolds' on energy consumption and frequency analysis demonstrate more realistic motions, significantly less energy use in turning, and a plausible mechanism for emergent orientation waves.
[ { "created": "Sat, 27 Apr 2024 07:03:12 GMT", "version": "v1" }, { "created": "Fri, 19 Jul 2024 17:14:48 GMT", "version": "v2" } ]
2024-07-22
[ [ "Hoetzlein", "Rama Carl", "" ] ]
The aerial flocking of birds, or murmurations, has fascinated observers while presenting many challenges to behavioral study and simulation. We examine how the periphery of murmurations remains well bounded and cohesive. We also investigate agitation waves, which occur when a flock is disturbed, developing a plausible model for how they might emerge spontaneously. To understand these behaviors a new model is presented for orientation-based social flocking. Previous methods model inter-bird dynamics by considering the neighborhood around each bird, and introducing forces for avoidance, alignment, and cohesion as three dimensional vectors that alter acceleration. Our method introduces orientation-based social flocking that treats social influences from neighbors more realistically as a desire to turn, indirectly controlling the heading in an aerodynamic model. While our model can be applied to any flocking social bird we simulate flocks of starlings, Sturnus vulgaris, and demonstrate the possibility of orientation waves in the absence of predators. Our model exhibits spherical and ovoidal flock shapes matching observation. Comparisons of our model to Reynolds' on energy consumption and frequency analysis demonstrate more realistic motions, significantly less energy use in turning, and a plausible mechanism for emergent orientation waves.
1907.04965
Farzaneh Zokaee
Farzaneh Zokaee, Mingzhe Zhang, Lei Jiang
FindeR: Accelerating FM-Index-based Exact Pattern Matching in Genomic Sequences through ReRAM technology
null
null
null
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genomics is the critical key to enabling precision medicine, ensuring global food security and enforcing wildlife conservation. The massive genomic data produced by various genome sequencing technologies presents a significant challenge for genome analysis. Because of errors from sequencing machines and genetic variations, approximate pattern matching (APM) is a must for practical genome analysis. Recent work proposes FPGA, ASIC and even process-in-memory-based accelerators to boost the APM throughput by accelerating dynamic-programming-based algorithms (e.g., Smith-Waterman). However, existing accelerators lack the efficient hardware acceleration for the exact pattern matching (EPM) that is an even more critical and essential function widely used in almost every step of genome analysis including assembly, alignment, annotation and compression. State-of-the-art genome analysis adopts the FM-Index that augments the space-efficient BWT with additional data structures permitting fast EPM operations. But the FM-Index is notorious for poor spatial locality and massive random memory accesses. In this paper, we propose a ReRAM-based process-in-memory architecture, FindeR, to enhance the FM-Index EPM search throughput in genomic sequences. We build a reliable and energy-efficient Hamming distance unit to accelerate the computing kernel of FM-Index search using commodity ReRAM chips without introducing extra CMOS logic. We further architect a full-fledged FM-Index search pipeline and improve its search throughput by lightweight scheduling on the NVDIMM. We also create a system library for programmers to invoke FindeR to perform EPMs in genome analysis. Compared to state-of-the-art accelerators, FindeR improves the FM-Index search throughput by $83\%\sim 30K\times$ and throughput per Watt by $3.5\times\sim 42.5K\times$.
[ { "created": "Thu, 11 Jul 2019 00:50:11 GMT", "version": "v1" }, { "created": "Tue, 1 Oct 2019 19:18:11 GMT", "version": "v2" } ]
2019-10-03
[ [ "Zokaee", "Farzaneh", "" ], [ "Zhang", "Mingzhe", "" ], [ "Jiang", "Lei", "" ] ]
Genomics is the critical key to enabling precision medicine, ensuring global food security and enforcing wildlife conservation. The massive genomic data produced by various genome sequencing technologies presents a significant challenge for genome analysis. Because of errors from sequencing machines and genetic variations, approximate pattern matching (APM) is a must for practical genome analysis. Recent work proposes FPGA, ASIC and even process-in-memory-based accelerators to boost the APM throughput by accelerating dynamic-programming-based algorithms (e.g., Smith-Waterman). However, existing accelerators lack the efficient hardware acceleration for the exact pattern matching (EPM) that is an even more critical and essential function widely used in almost every step of genome analysis including assembly, alignment, annotation and compression. State-of-the-art genome analysis adopts the FM-Index that augments the space-efficient BWT with additional data structures permitting fast EPM operations. But the FM-Index is notorious for poor spatial locality and massive random memory accesses. In this paper, we propose a ReRAM-based process-in-memory architecture, FindeR, to enhance the FM-Index EPM search throughput in genomic sequences. We build a reliable and energy-efficient Hamming distance unit to accelerate the computing kernel of FM-Index search using commodity ReRAM chips without introducing extra CMOS logic. We further architect a full-fledged FM-Index search pipeline and improve its search throughput by lightweight scheduling on the NVDIMM. We also create a system library for programmers to invoke FindeR to perform EPMs in genome analysis. Compared to state-of-the-art accelerators, FindeR improves the FM-Index search throughput by $83\%\sim 30K\times$ and throughput per Watt by $3.5\times\sim 42.5K\times$.
2308.11671
Jacob Deasy
Jacob Deasy, Ron Schwessinger, Ferran Gonzalez, Stephen Young, Kim Branson
Generalising sequence models for epigenome predictions with tissue and assay embeddings
null
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Sequence modelling approaches for epigenetic profile prediction have recently expanded in terms of sequence length, model size, and profile diversity. However, current models cannot infer on many experimentally feasible tissue and assay pairs due to poor usage of contextual information, limiting $\textit{in silico}$ understanding of regulatory genomics. We demonstrate that strong correlation can be achieved across a large range of experimental conditions by integrating tissue and assay embeddings into a Contextualised Genomic Network (CGN). In contrast to previous approaches, we enhance long-range sequence embeddings with contextual information in the input space, rather than expanding the output space. We exhibit the efficacy of our approach across a broad set of epigenetic profiles and provide the first insights into the effect of genetic variants on epigenetic sequence model training. Our general approach to context integration exceeds state of the art in multiple settings while employing a more rigorous validation procedure.
[ { "created": "Tue, 22 Aug 2023 10:34:19 GMT", "version": "v1" } ]
2023-08-24
[ [ "Deasy", "Jacob", "" ], [ "Schwessinger", "Ron", "" ], [ "Gonzalez", "Ferran", "" ], [ "Young", "Stephen", "" ], [ "Branson", "Kim", "" ] ]
Sequence modelling approaches for epigenetic profile prediction have recently expanded in terms of sequence length, model size, and profile diversity. However, current models cannot infer on many experimentally feasible tissue and assay pairs due to poor usage of contextual information, limiting $\textit{in silico}$ understanding of regulatory genomics. We demonstrate that strong correlation can be achieved across a large range of experimental conditions by integrating tissue and assay embeddings into a Contextualised Genomic Network (CGN). In contrast to previous approaches, we enhance long-range sequence embeddings with contextual information in the input space, rather than expanding the output space. We exhibit the efficacy of our approach across a broad set of epigenetic profiles and provide the first insights into the effect of genetic variants on epigenetic sequence model training. Our general approach to context integration exceeds state of the art in multiple settings while employing a more rigorous validation procedure.
2012.12850
Axel Brandenburg
Axel Brandenburg
Homochirality: a prerequisite or consequence of life?
33 pages, 18 figures, to appear as Chapter 4 of "Prebiotic Chemistry and the Origin of Life", ed. A. Neubeck, & S. McMahon, Springer
Prebiotic Chemistry and the Origin of Life, edited by Neubeck, Anna; McMahon, Sean. ISBN: 978-3-030-81039-9. Cham: Springer International Publishing, 2021, pp. 87-115
10.1007/978-3-030-81039-9_4
NORDITA-2020-136
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Many of the building blocks of life such as amino acids and nucleotides are chiral, i.e., different from their mirror image. Contemporary life selects and synthesizes only one of two possible handednesses. In an abiotic environment, however, there are usually equally many left- and right-handed molecules. If homochirality was a prerequisite of life, there must have been physical or chemical circumstances that led to the selection of a certain preference. Conversely, if it was a consequence of life, we must identify possible pathways for accomplishing a transition from a racemic to a homochiral chemistry. After a discussion of the observational evidence, I will review ideas where homochirality of any handedness could emerge as a consequence of the first polymerization events of nucleotides in an emerging RNA world. These mechanisms are not limited to nucleotides, but can also occur for peptides, as a precursor to the RNA world. The question of homochirality is, in this sense, intimately tied to the origin of life. Future Mars missions may be able to detect biomolecules of extant or extinct life. We will therefore also discuss possible experimental setups for determining the chirality of primitive life forms in situ on Mars.
[ { "created": "Tue, 15 Dec 2020 20:03:27 GMT", "version": "v1" } ]
2022-09-20
[ [ "Brandenburg", "Axel", "" ] ]
Many of the building blocks of life such as amino acids and nucleotides are chiral, i.e., different from their mirror image. Contemporary life selects and synthesizes only one of two possible handednesses. In an abiotic environment, however, there are usually equally many left- and right-handed molecules. If homochirality was a prerequisite of life, there must have been physical or chemical circumstances that led to the selection of a certain preference. Conversely, if it was a consequence of life, we must identify possible pathways for accomplishing a transition from a racemic to a homochiral chemistry. After a discussion of the observational evidence, I will review ideas where homochirality of any handedness could emerge as a consequence of the first polymerization events of nucleotides in an emerging RNA world. These mechanisms are not limited to nucleotides, but can also occur for peptides, as a precursor to the RNA world. The question of homochirality is, in this sense, intimately tied to the origin of life. Future Mars missions may be able to detect biomolecules of extant or extinct life. We will therefore also discuss possible experimental setups for determining the chirality of primitive life forms in situ on Mars.
1310.6556
Andrea De Martino
Francesco Alessandro Massucci, Mauro DiNuzzo, Federico Giove, Bruno Maraviglia, Isaac Perez Castillo, Enzo Marinari, Andrea De Martino
Energy metabolism and glutamate-glutamine cycle in the brain: a stoichiometric modeling perspective
21 pages, incl. supporting text and tables
BMC Systems Biology 7:103 (2013)
10.1186/1752-0509-7-103
null
q-bio.MN cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The energetics of cerebral activity critically relies on the functional and metabolic interactions between neurons and astrocytes. Important open questions include the relation between neuronal versus astrocytic energy demand, glucose uptake and intercellular lactate transfer, as well as their dependence on the level of activity. We have developed a large-scale, constraint-based network model of the metabolic partnership between astrocytes and glutamatergic neurons that allows for a quantitative appraisal of the extent to which stoichiometry alone drives the energetics of the system. We find that the velocity of the glutamate-glutamine cycle ($V_{cyc}$) explains part of the uncoupling between glucose and oxygen utilization at increasing $V_{cyc}$ levels. Thus, we are able to characterize different activation states in terms of the tissue oxygen-glucose index (OGI). Calculations show that glucose is taken up and metabolized according to cellular energy requirements, and that partitioning of the sugar between different cell types is not significantly affected by $V_{cyc}$. Furthermore, both the direction and magnitude of the lactate shuttle between neurons and astrocytes turn out to depend on the relative cell glucose uptake while being roughly independent of $V_{cyc}$. These findings suggest that, in absence of ad hoc activity-related constraints on neuronal and astrocytic metabolism, the glutamate-glutamine cycle does not control the relative energy demand of neurons and astrocytes, and hence their glucose uptake and lactate exchange.
[ { "created": "Thu, 24 Oct 2013 10:40:29 GMT", "version": "v1" } ]
2013-10-25
[ [ "Massucci", "Francesco Alessandro", "" ], [ "DiNuzzo", "Mauro", "" ], [ "Giove", "Federico", "" ], [ "Maraviglia", "Bruno", "" ], [ "Castillo", "Isaac Perez", "" ], [ "Marinari", "Enzo", "" ], [ "De Martino", "Andrea", "" ] ]
The energetics of cerebral activity critically relies on the functional and metabolic interactions between neurons and astrocytes. Important open questions include the relation between neuronal versus astrocytic energy demand, glucose uptake and intercellular lactate transfer, as well as their dependence on the level of activity. We have developed a large-scale, constraint-based network model of the metabolic partnership between astrocytes and glutamatergic neurons that allows for a quantitative appraisal of the extent to which stoichiometry alone drives the energetics of the system. We find that the velocity of the glutamate-glutamine cycle ($V_{cyc}$) explains part of the uncoupling between glucose and oxygen utilization at increasing $V_{cyc}$ levels. Thus, we are able to characterize different activation states in terms of the tissue oxygen-glucose index (OGI). Calculations show that glucose is taken up and metabolized according to cellular energy requirements, and that partitioning of the sugar between different cell types is not significantly affected by $V_{cyc}$. Furthermore, both the direction and magnitude of the lactate shuttle between neurons and astrocytes turn out to depend on the relative cell glucose uptake while being roughly independent of $V_{cyc}$. These findings suggest that, in absence of ad hoc activity-related constraints on neuronal and astrocytic metabolism, the glutamate-glutamine cycle does not control the relative energy demand of neurons and astrocytes, and hence their glucose uptake and lactate exchange.
1712.09275
Shumpei Ujiyama
Shumpei Ujiyama, Kazuki Tsuji
Controlling invasive ant species: a theoretical strategy for efficient monitoring in the early stage of invasion
Revised the manuscript
Scientific Reports. (2018) 8:8033
10.1038/s41598-018-26406-4
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Invasion by the red imported fire ant, Solenopsis invicta Buren, has destructive effects on native biodiversity, agriculture, and public health. This ant's aggressive foraging behaviour and high reproductive capability have enabled its establishment of wild populations in most regions into which it has been imported. An important aspect of eradication is thorough nest monitoring and destruction during early invasion to prevent range expansion. The question is: How intense must monitoring be on temporal and spatial scales to eradicate the fire ant? Assuming that the ant was introduced into a region and that monitoring was conducted immediately after nest detection in an effort to detect all other potentially established nests, we developed a mathematical model to investigate detection rates. Setting the monitoring limit to three years, the detection rate was maximized when monitoring was conducted shifting bait trap locations and setting them at intervals of 30 m for each monitoring. Monitoring should be conducted in a radius of at least 4 km around the source nest, or wider, depending on how late a nest is found. For ease of application, we also derived equations for finding the minimum bait interval required in an arbitrary ant species for thorough monitoring.
[ { "created": "Tue, 26 Dec 2017 14:39:32 GMT", "version": "v1" }, { "created": "Sun, 21 Jan 2018 07:11:30 GMT", "version": "v2" }, { "created": "Thu, 19 Apr 2018 08:13:19 GMT", "version": "v3" } ]
2018-05-31
[ [ "Ujiyama", "Shumpei", "" ], [ "Tsuji", "Kazuki", "" ] ]
Invasion by the red imported fire ant, Solenopsis invicta Buren, has destructive effects on native biodiversity, agriculture, and public health. This ant's aggressive foraging behaviour and high reproductive capability have enabled its establishment of wild populations in most regions into which it has been imported. An important aspect of eradication is thorough nest monitoring and destruction during early invasion to prevent range expansion. The question is: How intense must monitoring be on temporal and spatial scales to eradicate the fire ant? Assuming that the ant was introduced into a region and that monitoring was conducted immediately after nest detection in an effort to detect all other potentially established nests, we developed a mathematical model to investigate detection rates. Setting the monitoring limit to three years, the detection rate was maximized when monitoring was conducted shifting bait trap locations and setting them at intervals of 30 m for each monitoring. Monitoring should be conducted in a radius of at least 4 km around the source nest, or wider, depending on how late a nest is found. For ease of application, we also derived equations for finding the minimum bait interval required in an arbitrary ant species for thorough monitoring.
2210.07392
Thomas Shultz
Thomas R. Shultz, Ardavan S. Nobandegani, Zilong Wang
A Neural Model of Number Comparison with Surprisingly Robust Generalization
14 pages, 9 figures, 2 tables
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a relatively simple computational neural-network model of number comparison. Training on comparisons of the integers 1-9 enables the model to efficiently and accurately simulate a wide range of phenomena, including distance and ratio effects and robust generalization to multidigit integers, negative numbers, and decimal numbers. An accompanying logical model of number comparison provides further insights into the workings of number comparison and its relation to the Arabic number system. These models provide a rational basis for the psychology of number comparison and the ability of neural networks to efficiently learn a powerful system with robust generalization.
[ { "created": "Thu, 13 Oct 2022 22:19:08 GMT", "version": "v1" } ]
2022-10-17
[ [ "Shultz", "Thomas R.", "" ], [ "Nobandegani", "Ardavan S.", "" ], [ "Wang", "Zilong", "" ] ]
We propose a relatively simple computational neural-network model of number comparison. Training on comparisons of the integers 1-9 enables the model to efficiently and accurately simulate a wide range of phenomena, including distance and ratio effects and robust generalization to multidigit integers, negative numbers, and decimal numbers. An accompanying logical model of number comparison provides further insights into the workings of number comparison and its relation to the Arabic number system. These models provide a rational basis for the psychology of number comparison and the ability of neural networks to efficiently learn a powerful system with robust generalization.
2405.00513
Yuran Zhu
Yuran Zhu, Guanhua Wang, Yuning Gu, Walter Zhao, Jiahao Lu, Junqing Zhu, Christina J. MacAskill, Andrew Dupuis, Mark A. Griswold, Dan Ma, Chris A. Flask, Xin Yu
3D MR Fingerprinting for Dynamic Contrast-Enhanced Imaging of Whole Mouse Brain
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Quantitative MRI enables direct quantification of contrast agent concentrations in contrast-enhanced scans. However, the lengthy scan times required by conventional methods are inadequate for tracking contrast agent transport dynamically in mouse brain. We developed a 3D MR fingerprinting (MRF) method for simultaneous T1 and T2 mapping across the whole mouse brain with 4.3-min temporal resolution. We designed a 3D MRF sequence with variable acquisition segment lengths and magnetization preparations on a 9.4T preclinical MRI scanner. Model-based reconstruction approaches were employed to improve the accuracy and speed of MRF acquisition. The method's accuracy for T1 and T2 measurements was validated in vitro, while its repeatability of T1 and T2 measurements was evaluated in vivo (n=3). The utility of the 3D MRF sequence for dynamic tracking of intracisternally infused Gd-DTPA in the whole mouse brain was demonstrated (n=5). Phantom studies confirmed accurate T1 and T2 measurements by 3D MRF with an undersampling factor up to 48. Dynamic contrast-enhanced (DCE) MRF scans achieved a spatial resolution of 192 x 192 x 500 um3 and a temporal resolution of 4.3 min, allowing for the analysis and comparison of dynamic changes in concentration and transport kinetics of intracisternally infused Gd-DTPA across brain regions. The sequence also enabled highly repeatable, high-resolution T1 and T2 mapping of the whole mouse brain (192 x 192 x 250 um3) in 30 min. We present the first dynamic and multi-parametric approach for quantitatively tracking contrast agent transport in the mouse brain using 3D MRF.
[ { "created": "Wed, 1 May 2024 13:47:13 GMT", "version": "v1" }, { "created": "Mon, 5 Aug 2024 14:46:06 GMT", "version": "v2" } ]
2024-08-06
[ [ "Zhu", "Yuran", "" ], [ "Wang", "Guanhua", "" ], [ "Gu", "Yuning", "" ], [ "Zhao", "Walter", "" ], [ "Lu", "Jiahao", "" ], [ "Zhu", "Junqing", "" ], [ "MacAskill", "Christina J.", "" ], [ "Dupuis", "Andrew", "" ], [ "Griswold", "Mark A.", "" ], [ "Ma", "Dan", "" ], [ "Flask", "Chris A.", "" ], [ "Yu", "Xin", "" ] ]
Quantitative MRI enables direct quantification of contrast agent concentrations in contrast-enhanced scans. However, the lengthy scan times required by conventional methods are inadequate for tracking contrast agent transport dynamically in mouse brain. We developed a 3D MR fingerprinting (MRF) method for simultaneous T1 and T2 mapping across the whole mouse brain with 4.3-min temporal resolution. We designed a 3D MRF sequence with variable acquisition segment lengths and magnetization preparations on a 9.4T preclinical MRI scanner. Model-based reconstruction approaches were employed to improve the accuracy and speed of MRF acquisition. The method's accuracy for T1 and T2 measurements was validated in vitro, while its repeatability of T1 and T2 measurements was evaluated in vivo (n=3). The utility of the 3D MRF sequence for dynamic tracking of intracisternally infused Gd-DTPA in the whole mouse brain was demonstrated (n=5). Phantom studies confirmed accurate T1 and T2 measurements by 3D MRF with an undersampling factor up to 48. Dynamic contrast-enhanced (DCE) MRF scans achieved a spatial resolution of 192 x 192 x 500 um3 and a temporal resolution of 4.3 min, allowing for the analysis and comparison of dynamic changes in concentration and transport kinetics of intracisternally infused Gd-DTPA across brain regions. The sequence also enabled highly repeatable, high-resolution T1 and T2 mapping of the whole mouse brain (192 x 192 x 250 um3) in 30 min. We present the first dynamic and multi-parametric approach for quantitatively tracking contrast agent transport in the mouse brain using 3D MRF.
2311.06074
Elena Pastorelli
Elena Pastorelli, Alper Yegenoglu, Nicole Kolodziej, Willem Wybo, Francesco Simula, Sandra Diaz, Johan Frederik Storm, Pier Stanislao Paolucci
Two-compartment neuronal spiking model expressing brain-state specific apical-amplification, -isolation and -drive regimes
23 pages, 9 figures (29 single images), 4 tables, paper
null
null
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mounting experimental evidence suggests that brain-state-specific neural mechanisms, supported by connectomic architectures, play a crucial role in integrating past and contextual knowledge with the current, incoming flow of evidence (e.g., from sensory systems). These mechanisms operate across multiple spatial and temporal scales, necessitating dedicated support at the levels of individual neurons and synapses. A notable feature within the neocortex is the structure of large, deep pyramidal neurons, which exhibit a distinctive separation between an apical dendritic compartment and a basal dendritic/perisomatic compartment. This separation is characterized by distinct patterns of incoming connections and brain-state-specific activation mechanisms, namely, apical amplification, isolation, and drive, which are associated with wakefulness, deeper NREM sleep stages, and REM sleep, respectively. The cognitive roles of apical mechanisms have been demonstrated in behaving animals. In contrast, classical models of learning in spiking networks are based on single-compartment neurons, lacking the ability to describe the integration of apical and basal/somatic information. This work aims to provide the computational community with a two-compartment spiking neuron model that incorporates features essential for supporting brain-state-specific learning. This model includes a piece-wise linear transfer function (ThetaPlanes) at the highest abstraction level, making it suitable for use in large-scale bio-inspired artificial intelligence systems. A machine learning evolutionary algorithm, guided by a set of fitness functions, selected the parameters that define neurons expressing the desired apical mechanisms.
[ { "created": "Fri, 10 Nov 2023 14:16:46 GMT", "version": "v1" }, { "created": "Tue, 26 Mar 2024 13:26:31 GMT", "version": "v2" } ]
2024-03-27
[ [ "Pastorelli", "Elena", "" ], [ "Yegenoglu", "Alper", "" ], [ "Kolodziej", "Nicole", "" ], [ "Wybo", "Willem", "" ], [ "Simula", "Francesco", "" ], [ "Diaz", "Sandra", "" ], [ "Storm", "Johan Frederik", "" ], [ "Paolucci", "Pier Stanislao", "" ] ]
Mounting experimental evidence suggests that brain-state-specific neural mechanisms, supported by connectomic architectures, play a crucial role in integrating past and contextual knowledge with the current, incoming flow of evidence (e.g., from sensory systems). These mechanisms operate across multiple spatial and temporal scales, necessitating dedicated support at the levels of individual neurons and synapses. A notable feature within the neocortex is the structure of large, deep pyramidal neurons, which exhibit a distinctive separation between an apical dendritic compartment and a basal dendritic/perisomatic compartment. This separation is characterized by distinct patterns of incoming connections and brain-state-specific activation mechanisms, namely, apical amplification, isolation, and drive, which are associated with wakefulness, deeper NREM sleep stages, and REM sleep, respectively. The cognitive roles of apical mechanisms have been demonstrated in behaving animals. In contrast, classical models of learning in spiking networks are based on single-compartment neurons, lacking the ability to describe the integration of apical and basal/somatic information. This work aims to provide the computational community with a two-compartment spiking neuron model that incorporates features essential for supporting brain-state-specific learning. This model includes a piece-wise linear transfer function (ThetaPlanes) at the highest abstraction level, making it suitable for use in large-scale bio-inspired artificial intelligence systems. A machine learning evolutionary algorithm, guided by a set of fitness functions, selected the parameters that define neurons expressing the desired apical mechanisms.
1312.4152
Tom\'as Revilla
Tom\'as A. Revilla
Numerical responses in resource-based mutualisms: a time scale approach
null
Journal of Theoretical Biology. Vol. 378, pp. 39-46 (2015)
10.1016/j.jtbi.2015.04.012
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In mutualisms where there is exchange of resources for resources, or resources for services, the resources are typically short lived compared with the lives of the organisms that produce and make use of them. This fact allows a separation of time scales, by which the numerical response of one species with respect to the abundance of another can be derived mechanistically. These responses can account for intra-specific competition, due to the partition of the resources provided by mutualists, thereby connecting competition theory and mutualism at a microscopic level. It is also possible to derive saturating responses in the case of species that provide resources but expect a service in return (e.g. pollination, seed dispersal) instead of food or nutrients. In both situations, competition and saturation have the same underlying cause, which is that the generation of resources occurs at a finite velocity per individual of the providing species, but their depletion happens much faster due to the acceleration in growth rates that characterizes mutualism. The resulting models can display all the basic features seen in many models of facultative and obligate mutualisms, and they can be generalized from species pairs to larger communities. The parameters of the numerical responses can be related with quantities that can in principle be measured, and that can be related by trade-offs, which can be useful for studying the evolution of mutualisms. Keywords: mutualism, resources, services, steady-state, functional and numerical response
[ { "created": "Sun, 15 Dec 2013 14:48:09 GMT", "version": "v1" }, { "created": "Wed, 20 Jul 2016 14:18:17 GMT", "version": "v2" } ]
2016-07-21
[ [ "Revilla", "Tomás A.", "" ] ]
In mutualisms where there is exchange of resources for resources, or resources for services, the resources are typically short lived compared with the lives of the organisms that produce and make use of them. This fact allows a separation of time scales, by which the numerical response of one species with respect to the abundance of another can be derived mechanistically. These responses can account for intra-specific competition, due to the partition of the resources provided by mutualists, thereby connecting competition theory and mutualism at a microscopic level. It is also possible to derive saturating responses in the case of species that provide resources but expect a service in return (e.g. pollination, seed dispersal) instead of food or nutrients. In both situations, competition and saturation have the same underlying cause, which is that the generation of resources occurs at a finite velocity per individual of the providing species, but their depletion happens much faster due to the acceleration in growth rates that characterizes mutualism. The resulting models can display all the basic features seen in many models of facultative and obligate mutualisms, and they can be generalized from species pairs to larger communities. The parameters of the numerical responses can be related with quantities that can in principle be measured, and that can be related by trade-offs, which can be useful for studying the evolution of mutualisms. Keywords: mutualism, resources, services, steady-state, functional and numerical response
2005.06526
Viet Chi Tran
Arthur Charpentier, Romuald Elie, Mathieu Lauri\`ere, Viet Chi Tran
COVID-19 pandemic control: balancing detection policy and lockdown intervention under ICU sustainability
null
null
null
null
q-bio.PE cs.SY eess.SY physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider here an extended SIR model, including several features of the recent COVID-19 outbreak: in particular the infected and recovered individuals can either be detected (+) or undetected (-) and we also integrate an intensive care unit (ICU) capacity. Our model enables a tractable quantitative analysis of the optimal policy for the control of the epidemic dynamics using both lockdown and detection intervention levers. With parametric specification based on literature on COVID-19, we investigate the sensitivities of various quantities on the optimal strategies, taking into account the subtle trade-off between the sanitary and the socio-economic cost of the pandemic, together with the limited capacity level of ICU. We identify the optimal lockdown policy as an intervention structured in 4 successive phases: First a quick and strong lockdown intervention to stop the exponential growth of the contagion; second a short transition phase to reduce the prevalence of the virus; third a long period with full ICU capacity and stable virus prevalence; finally a return to normal social interactions with disappearance of the virus. The optimal scenario hereby avoids the second wave of infection, provided the lockdown is released sufficiently slowly. We also provide optimal intervention measures with increasing ICU capacity, as well as optimization over the effort on detection of infectious and immune individuals. Whenever massive resources are introduced to detect infected individuals, the pressure on social distancing can be released, whereas the impact of detection of immune individuals reveals to be more moderate.
[ { "created": "Wed, 13 May 2020 18:48:31 GMT", "version": "v1" }, { "created": "Thu, 21 May 2020 12:45:56 GMT", "version": "v2" }, { "created": "Fri, 22 May 2020 01:15:53 GMT", "version": "v3" } ]
2020-05-25
[ [ "Charpentier", "Arthur", "" ], [ "Elie", "Romuald", "" ], [ "Laurière", "Mathieu", "" ], [ "Tran", "Viet Chi", "" ] ]
We consider here an extended SIR model, including several features of the recent COVID-19 outbreak: in particular the infected and recovered individuals can either be detected (+) or undetected (-) and we also integrate an intensive care unit (ICU) capacity. Our model enables a tractable quantitative analysis of the optimal policy for the control of the epidemic dynamics using both lockdown and detection intervention levers. With parametric specification based on literature on COVID-19, we investigate the sensitivities of various quantities on the optimal strategies, taking into account the subtle trade-off between the sanitary and the socio-economic cost of the pandemic, together with the limited capacity level of ICU. We identify the optimal lockdown policy as an intervention structured in 4 successive phases: First a quick and strong lockdown intervention to stop the exponential growth of the contagion; second a short transition phase to reduce the prevalence of the virus; third a long period with full ICU capacity and stable virus prevalence; finally a return to normal social interactions with disappearance of the virus. The optimal scenario hereby avoids the second wave of infection, provided the lockdown is released sufficiently slowly. We also provide optimal intervention measures with increasing ICU capacity, as well as optimization over the effort on detection of infectious and immune individuals. Whenever massive resources are introduced to detect infected individuals, the pressure on social distancing can be released, whereas the impact of detection of immune individuals reveals to be more moderate.
1001.0647
Benjamin Torben-Nielsen
Benjamin Torben-Nielsen, Marylka Uusisaari, Klaus M. Stiefel
A comparison of methods to determine neuronal phase-response curves
PDFLatex, 16 pages, 7 figures.
Front. Neuroinform. 4:6
10.3389/fninf.2010.00006
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The phase-response curve (PRC) is an important tool to determine the excitability type of single neurons, which has consequences for their synchronizing properties. We review five methods to compute the PRC from both model data and experimental data and compare the numerically obtained results from each method. The main difference between the methods lies in their reliability, which is influenced by the fluctuations in the spiking data and the number of spikes available for analysis. We discuss the significance of our results and provide guidelines to choose the best method based on the available data.
[ { "created": "Tue, 5 Jan 2010 08:19:36 GMT", "version": "v1" }, { "created": "Sun, 10 Jan 2010 17:26:36 GMT", "version": "v2" }, { "created": "Fri, 26 Mar 2010 14:10:03 GMT", "version": "v3" } ]
2010-03-29
[ [ "Torben-Nielsen", "Benjamin", "" ], [ "Uusisaari", "Marylka", "" ], [ "Stiefel", "Klaus M.", "" ] ]
The phase-response curve (PRC) is an important tool to determine the excitability type of single neurons, which has consequences for their synchronizing properties. We review five methods to compute the PRC from both model data and experimental data and compare the numerically obtained results from each method. The main difference between the methods lies in their reliability, which is influenced by the fluctuations in the spiking data and the number of spikes available for analysis. We discuss the significance of our results and provide guidelines to choose the best method based on the available data.
1901.07666
Chi-Sing Ho
Chi-Sing Ho, Neal Jean, Catherine A. Hogan, Lena Blackmon, Stefanie S. Jeffrey, Mark Holodniy, Niaz Banaei, Amr A. E. Saleh, Stefano Ermon, and Jennifer Dionne
Rapid identification of pathogenic bacteria using Raman spectroscopy and deep learning
null
Nature Communications 10, 4927 (2019)
10.1038/s41467-019-12898-9
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rapid identification of bacteria is essential to prevent the spread of infectious disease, help combat antimicrobial resistance, and improve patient outcomes. Raman optical spectroscopy promises to combine bacterial detection, identification, and antibiotic susceptibility testing in a single step. However, achieving clinically relevant speeds and accuracies remains challenging due to the weak Raman signal from bacterial cells and the large number of bacterial species and phenotypes. By amassing the largest known dataset of bacterial Raman spectra, we are able to apply state-of-the-art deep learning approaches to identify 30 of the most common bacterial pathogens from noisy Raman spectra, achieving antibiotic treatment identification accuracies of 99.0$\pm$0.1%. This novel approach distinguishes between methicillin-resistant and -susceptible isolates of Staphylococcus aureus (MRSA and MSSA) as well as a pair of isogenic MRSA and MSSA that are genetically identical apart from deletion of the mecA resistance gene, indicating the potential for culture-free detection of antibiotic resistance. Results from initial clinical validation are promising: using just 10 bacterial spectra from each of 25 isolates, we achieve 99.0$\pm$1.9% species identification accuracy. Our combined Raman-deep learning system represents an important proof-of-concept for rapid, culture-free identification of bacterial isolates and antibiotic resistance and could be readily extended for diagnostics on blood, urine, and sputum.
[ { "created": "Wed, 23 Jan 2019 00:47:42 GMT", "version": "v1" }, { "created": "Tue, 5 Nov 2019 22:31:57 GMT", "version": "v2" } ]
2019-11-07
[ [ "Ho", "Chi-Sing", "" ], [ "Jean", "Neal", "" ], [ "Hogan", "Catherine A.", "" ], [ "Blackmon", "Lena", "" ], [ "Jeffrey", "Stefanie S.", "" ], [ "Holodniy", "Mark", "" ], [ "Banaei", "Niaz", "" ], [ "Saleh", "Amr A. E.", "" ], [ "Ermon", "Stefano", "" ], [ "Dionne", "Jennifer", "" ] ]
Rapid identification of bacteria is essential to prevent the spread of infectious disease, help combat antimicrobial resistance, and improve patient outcomes. Raman optical spectroscopy promises to combine bacterial detection, identification, and antibiotic susceptibility testing in a single step. However, achieving clinically relevant speeds and accuracies remains challenging due to the weak Raman signal from bacterial cells and the large number of bacterial species and phenotypes. By amassing the largest known dataset of bacterial Raman spectra, we are able to apply state-of-the-art deep learning approaches to identify 30 of the most common bacterial pathogens from noisy Raman spectra, achieving antibiotic treatment identification accuracies of 99.0$\pm$0.1%. This novel approach distinguishes between methicillin-resistant and -susceptible isolates of Staphylococcus aureus (MRSA and MSSA) as well as a pair of isogenic MRSA and MSSA that are genetically identical apart from deletion of the mecA resistance gene, indicating the potential for culture-free detection of antibiotic resistance. Results from initial clinical validation are promising: using just 10 bacterial spectra from each of 25 isolates, we achieve 99.0$\pm$1.9% species identification accuracy. Our combined Raman-deep learning system represents an important proof-of-concept for rapid, culture-free identification of bacterial isolates and antibiotic resistance and could be readily extended for diagnostics on blood, urine, and sputum.
2103.04356
Leonardo Alexandre
Leonardo Alexandre, Rafael S. Costa, Rui Henriques
DI2: prior-free and multi-item discretization of biomedical data and its applications
null
null
null
null
q-bio.QM cs.SC
http://creativecommons.org/licenses/by/4.0/
Motivation: A considerable number of data mining approaches for biomedical data analysis, including state-of-the-art associative models, require a form of data discretization. Although diverse discretization approaches have been proposed, they generally work under a strict set of statistical assumptions which are arguably insufficient to handle the diversity and heterogeneity of clinical and molecular variables within a given dataset. In addition, although an increasing number of symbolic approaches in bioinformatics are able to assign multiple items to values occurring near discretization boundaries for superior robustness, there are no reference principles on how to perform multi-item discretizations. Results: In this study, an unsupervised discretization method, DI2, for variables with arbitrarily skewed distributions is proposed. DI2 provides robust guarantees of generalization by placing data corrections using the Kolmogorov-Smirnov test before statistically fitting distribution candidates. DI2 further supports multi-item assignments. Results gathered from biomedical data show its relevance to improve classic discretization choices. Software: available at https://github.com/JupitersMight/DI2
[ { "created": "Sun, 7 Mar 2021 13:45:30 GMT", "version": "v1" } ]
2021-03-09
[ [ "Alexandre", "Leonardo", "" ], [ "Costa", "Rafael S.", "" ], [ "Henriques", "Rui", "" ] ]
Motivation: A considerable number of data mining approaches for biomedical data analysis, including state-of-the-art associative models, require a form of data discretization. Although diverse discretization approaches have been proposed, they generally work under a strict set of statistical assumptions which are arguably insufficient to handle the diversity and heterogeneity of clinical and molecular variables within a given dataset. In addition, although an increasing number of symbolic approaches in bioinformatics are able to assign multiple items to values occurring near discretization boundaries for superior robustness, there are no reference principles on how to perform multi-item discretizations. Results: In this study, an unsupervised discretization method, DI2, for variables with arbitrarily skewed distributions is proposed. DI2 provides robust guarantees of generalization by placing data corrections using the Kolmogorov-Smirnov test before statistically fitting distribution candidates. DI2 further supports multi-item assignments. Results gathered from biomedical data show its relevance to improve classic discretization choices. Software: available at https://github.com/JupitersMight/DI2
1711.05630
Delfim F. M. Torres
Faical Ndairou, Ivan Area, Juan J. Nieto, Cristiana J. Silva, Delfim F. M. Torres
Mathematical modeling of Zika disease in pregnant women and newborns with microcephaly in Brazil
This is a preprint of a paper whose final and definite form is with 'Mathematical Methods in the Applied Sciences', ISSN 0170-4214. Submitted Aug 10, 2017; Revised Nov 13, 2017; accepted for publication Nov 14, 2017
Math. Methods Appl. Sci. 41 (2018), no. 18, 8929--8941
10.1002/mma.4702
null
q-bio.PE math.CA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new mathematical model for the spread of Zika virus. Special attention is paid to the transmission of microcephaly. Numerical simulations show the accuracy of the model with respect to the Zika outbreak that occurred in Brazil.
[ { "created": "Tue, 14 Nov 2017 14:16:53 GMT", "version": "v1" } ]
2018-11-30
[ [ "Ndairou", "Faical", "" ], [ "Area", "Ivan", "" ], [ "Nieto", "Juan J.", "" ], [ "Silva", "Cristiana J.", "" ], [ "Torres", "Delfim F. M.", "" ] ]
We propose a new mathematical model for the spread of Zika virus. Special attention is paid to the transmission of microcephaly. Numerical simulations show the accuracy of the model with respect to the Zika outbreak that occurred in Brazil.
2306.11609
Xell Brunet Guasch
Meritxell Brunet Guasch, P. L. Krapivsky and Tibor Antal
Error-induced extinction in a multi-type critical birth-death process
34 pages, 7 figures
null
null
null
q-bio.PE math.PR
http://creativecommons.org/licenses/by/4.0/
Extreme mutation rates in microbes and cancer cells can result in error-induced extinction (EEX), where every descendant cell eventually acquires a lethal mutation. In this work, we investigate critical birth-death processes with $n$ distinct types as a birth-death model of EEX in a growing population. Each type-$i$ cell divides independently $(i)\to(i)+(i)$ or mutates $(i)\to(i+1)$ at the same rate. The total number of cells grows exponentially as a Yule process until a cell of type-$n$ appears; cells of this type can only die, at rate one. This makes the whole process critical and hence, after the exponentially growing phase, eventually all cells die with probability one. We present large-time asymptotic results for the general $n$-type critical birth-death process. We find that the mass function of the number of cells of type-$k$ has algebraic and stationary tail $(\text{size})^{-1-\chi_k}$, with $\chi_k=2^{1-k}$, for $k=2,\dots,n$, in sharp contrast to the exponential tail of the first type. The same exponents describe the tail of the asymptotic survival probability $(\text{time})^{-\chi_n}$. We present applications of the results for studying extinction due to intolerable mutation rates in biological populations.
[ { "created": "Tue, 20 Jun 2023 15:37:52 GMT", "version": "v1" }, { "created": "Fri, 5 Jul 2024 14:42:36 GMT", "version": "v2" } ]
2024-07-08
[ [ "Guasch", "Meritxell Brunet", "" ], [ "Krapivsky", "P. L.", "" ], [ "Antal", "Tibor", "" ] ]
Extreme mutation rates in microbes and cancer cells can result in error-induced extinction (EEX), where every descendant cell eventually acquires a lethal mutation. In this work, we investigate critical birth-death processes with $n$ distinct types as a birth-death model of EEX in a growing population. Each type-$i$ cell divides independently $(i)\to(i)+(i)$ or mutates $(i)\to(i+1)$ at the same rate. The total number of cells grows exponentially as a Yule process until a cell of type-$n$ appears; cells of this type can only die, at rate one. This makes the whole process critical and hence, after the exponentially growing phase, eventually all cells die with probability one. We present large-time asymptotic results for the general $n$-type critical birth-death process. We find that the mass function of the number of cells of type-$k$ has algebraic and stationary tail $(\text{size})^{-1-\chi_k}$, with $\chi_k=2^{1-k}$, for $k=2,\dots,n$, in sharp contrast to the exponential tail of the first type. The same exponents describe the tail of the asymptotic survival probability $(\text{time})^{-\chi_n}$. We present applications of the results for studying extinction due to intolerable mutation rates in biological populations.
2407.01574
Gabriel Ducrocq
Gabriel Ducrocq, Lukas Grunewald, Sebastian Westenhoff, Fredrik Lindsten
cryoSPHERE: Single-particle heterogeneous reconstruction from cryo EM
null
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
The three-dimensional structure of a protein plays a key role in determining its function. Methods like AlphaFold have revolutionized protein structure prediction based only on the amino-acid sequence. However, proteins often appear in multiple different conformations, and it is highly relevant to resolve the full conformational distribution. Single-particle cryo-electron microscopy (cryo EM) is a powerful tool for capturing a large number of images of a given protein, frequently in different conformations (referred to as particles). The images are, however, very noisy projections of the protein, and traditional methods for cryo EM reconstruction are limited to recovering a single, or a few, conformations. In this paper, we introduce cryoSPHERE, a deep learning method that takes as input a nominal protein structure, e.g. from AlphaFold, learns how to divide it into segments, and how to move these as approximately rigid bodies to fit the different conformations present in the cryo EM dataset. This formulation is shown to provide enough constraints to recover meaningful reconstructions of single protein structures. This is illustrated in three examples where we show consistent improvements over the current state-of-the-art for heterogeneous reconstruction.
[ { "created": "Wed, 29 May 2024 15:12:19 GMT", "version": "v1" } ]
2024-07-03
[ [ "Ducrocq", "Gabriel", "" ], [ "Grunewald", "Lukas", "" ], [ "Westenhoff", "Sebastian", "" ], [ "Lindsten", "Fredrik", "" ] ]
The three-dimensional structure of a protein plays a key role in determining its function. Methods like AlphaFold have revolutionized protein structure prediction based only on the amino-acid sequence. However, proteins often appear in multiple different conformations, and it is highly relevant to resolve the full conformational distribution. Single-particle cryo-electron microscopy (cryo EM) is a powerful tool for capturing a large number of images of a given protein, frequently in different conformations (referred to as particles). The images are, however, very noisy projections of the protein, and traditional methods for cryo EM reconstruction are limited to recovering a single, or a few, conformations. In this paper, we introduce cryoSPHERE, a deep learning method that takes as input a nominal protein structure, e.g. from AlphaFold, learns how to divide it into segments, and how to move these as approximately rigid bodies to fit the different conformations present in the cryo EM dataset. This formulation is shown to provide enough constraints to recover meaningful reconstructions of single protein structures. This is illustrated in three examples where we show consistent improvements over the current state-of-the-art for heterogeneous reconstruction.
2210.17401
Li Kun
Li Kun and Hu Wenbin
TransEDRP: Dual Transformer model with Edge Embedded for Drug Response Prediction
8 pages, 5 figures, 4 tables
null
null
null
q-bio.BM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
GNN-based methods have achieved excellent results on drug response prediction tasks in recent years. Traditional GNN methods use only the atoms in a drug molecule as nodes to obtain the representation of the molecular graph through node information passing, whereas methods using the transformer can only extract information about the nodes. However, the covalent bonding and chirality of a drug molecule have a great influence on its pharmacological properties, and this information is implied in the chemical bonds formed by the edges between the atoms. In addition, CNN methods for modelling cell-line genomics sequences can only perceive local rather than global information about the sequence. To solve the above problems, we propose a decoupled dual transformer structure with edge embedding for drug response prediction (TransEDRP), which represents the cell-line genomics and the drug respectively. For the drug branch, we encode the chemical bond information within the molecule as the embedding of the edges in the molecular graph and extract the global structural and biochemical information of the drug molecule using a graph transformer. For the cell-line genomics branch, we use the multi-headed attention mechanism to globally represent the genomics sequence. Finally, the drug and genomics branches, which are different modalities, are fused to predict IC50 values through a transformer layer and a fully connected layer. Extensive experiments show that our method outperforms the current mainstream approaches on all evaluation indicators.
[ { "created": "Sun, 23 Oct 2022 11:00:43 GMT", "version": "v1" } ]
2022-11-01
[ [ "Kun", "Li", "" ], [ "Wenbin", "Hu", "" ] ]
GNN-based methods have achieved excellent results on drug response prediction tasks in recent years. Traditional GNN methods use only the atoms in a drug molecule as nodes to obtain the representation of the molecular graph through node information passing, whereas methods using the transformer can only extract information about the nodes. However, the covalent bonding and chirality of a drug molecule have a great influence on its pharmacological properties, and this information is implied in the chemical bonds formed by the edges between the atoms. In addition, CNN methods for modelling cell-line genomics sequences can only perceive local rather than global information about the sequence. To solve the above problems, we propose a decoupled dual transformer structure with edge embedding for drug response prediction (TransEDRP), which represents the cell-line genomics and the drug respectively. For the drug branch, we encode the chemical bond information within the molecule as the embedding of the edges in the molecular graph and extract the global structural and biochemical information of the drug molecule using a graph transformer. For the cell-line genomics branch, we use the multi-headed attention mechanism to globally represent the genomics sequence. Finally, the drug and genomics branches, which are different modalities, are fused to predict IC50 values through a transformer layer and a fully connected layer. Extensive experiments show that our method outperforms the current mainstream approaches on all evaluation indicators.
1403.6854
Zachary Szpiech
Zachary A Szpiech and Ryan D Hernandez
Selscan: an efficient multi-threaded program to perform EHH-based scans for positive selection
5 pages, 2 tables, 1 figure
Molecular Biology and Evolution 31: 2824-2827
10.1093/molbev/msu211
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Haplotype-based scans to detect natural selection are useful to identify recent or ongoing positive selection in genomes. As both real and simulated genomic datasets grow larger, spanning thousands of samples and millions of markers, there is a need for a fast and efficient implementation of these scans for general use. Here we present selscan, an efficient multi-threaded application that implements Extended Haplotype Homozygosity (EHH), Integrated Haplotype Score (iHS), and Cross-population Extended Haplotype Homozygosity (XPEHH). selscan accepts phased genotypes in multiple formats, including TPED, and performs extremely well on both simulated and real data and over an order of magnitude faster than existing available implementations. It calculates iHS on chromosome 22 (22,147 loci) across 204 CEU haplotypes in 353s on one thread (33s on 16 threads) and calculates XPEHH for the same data relative to 210 YRI haplotypes in 578s on one thread (52s on 16 threads). Source code and binaries (Windows, OSX and Linux) are available at https://github.com/szpiech/selscan .
[ { "created": "Wed, 26 Mar 2014 20:45:32 GMT", "version": "v1" }, { "created": "Wed, 2 Apr 2014 22:23:19 GMT", "version": "v2" }, { "created": "Mon, 19 May 2014 17:01:08 GMT", "version": "v3" }, { "created": "Thu, 26 Jun 2014 16:56:24 GMT", "version": "v4" }, { "created": "Mon, 7 Jul 2014 15:23:47 GMT", "version": "v5" }, { "created": "Fri, 11 Jul 2014 15:01:41 GMT", "version": "v6" } ]
2014-10-03
[ [ "Szpiech", "Zachary A", "" ], [ "Hernandez", "Ryan D", "" ] ]
Haplotype-based scans to detect natural selection are useful to identify recent or ongoing positive selection in genomes. As both real and simulated genomic datasets grow larger, spanning thousands of samples and millions of markers, there is a need for a fast and efficient implementation of these scans for general use. Here we present selscan, an efficient multi-threaded application that implements Extended Haplotype Homozygosity (EHH), Integrated Haplotype Score (iHS), and Cross-population Extended Haplotype Homozygosity (XPEHH). selscan accepts phased genotypes in multiple formats, including TPED, and performs extremely well on both simulated and real data and over an order of magnitude faster than existing available implementations. It calculates iHS on chromosome 22 (22,147 loci) across 204 CEU haplotypes in 353s on one thread (33s on 16 threads) and calculates XPEHH for the same data relative to 210 YRI haplotypes in 578s on one thread (52s on 16 threads). Source code and binaries (Windows, OSX and Linux) are available at https://github.com/szpiech/selscan .
2406.14358
Yuannan Li
Yuannan Li, Shan Xu, Jia Liu
The neural correlates of logical-mathematical symbol systems processing resemble that of spatial cognition more than natural language processing
null
null
null
null
q-bio.NC cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
The ability to manipulate logical-mathematical symbols (LMS), encompassing tasks such as calculation, reasoning, and programming, is a cognitive skill arguably unique to humans. Considering the relatively recent emergence of this ability in human evolutionary history, it has been suggested that LMS processing may build upon more fundamental cognitive systems, possibly through neuronal recycling. Previous studies have pinpointed two primary candidates, natural language processing and spatial cognition. Existing comparisons between these domains largely relied on task-level comparison, which may be confounded by task idiosyncrasy. The present study instead compared the neural correlates at the domain level with both automated meta-analysis and synthesized maps based on three representative LMS tasks, reasoning, calculation, and mental programming. Our results revealed a more substantial cortical overlap between LMS processing and spatial cognition, in contrast to language processing. Furthermore, in regions activated by both spatial and language processing, the multivariate activation pattern for LMS processing exhibited greater multivariate similarity to spatial cognition than to language processing. A hierarchical clustering analysis further indicated that typical LMS tasks were indistinguishable from spatial cognition tasks at the neural level, suggesting an inherent connection between these two cognitive processes. Taken together, our findings support the hypothesis that spatial cognition is likely the basis of LMS processing, which may shed light on the limitations of large language models in logical reasoning, particularly those trained exclusively on textual data without explicit emphasis on spatial content.
[ { "created": "Thu, 20 Jun 2024 14:31:09 GMT", "version": "v1" } ]
2024-06-21
[ [ "Li", "Yuannan", "" ], [ "Xu", "Shan", "" ], [ "Liu", "Jia", "" ] ]
The ability to manipulate logical-mathematical symbols (LMS), encompassing tasks such as calculation, reasoning, and programming, is a cognitive skill arguably unique to humans. Considering the relatively recent emergence of this ability in human evolutionary history, it has been suggested that LMS processing may build upon more fundamental cognitive systems, possibly through neuronal recycling. Previous studies have pinpointed two primary candidates, natural language processing and spatial cognition. Existing comparisons between these domains largely relied on task-level comparison, which may be confounded by task idiosyncrasy. The present study instead compared the neural correlates at the domain level with both automated meta-analysis and synthesized maps based on three representative LMS tasks, reasoning, calculation, and mental programming. Our results revealed a more substantial cortical overlap between LMS processing and spatial cognition, in contrast to language processing. Furthermore, in regions activated by both spatial and language processing, the multivariate activation pattern for LMS processing exhibited greater multivariate similarity to spatial cognition than to language processing. A hierarchical clustering analysis further indicated that typical LMS tasks were indistinguishable from spatial cognition tasks at the neural level, suggesting an inherent connection between these two cognitive processes. Taken together, our findings support the hypothesis that spatial cognition is likely the basis of LMS processing, which may shed light on the limitations of large language models in logical reasoning, particularly those trained exclusively on textual data without explicit emphasis on spatial content.
2003.01053
Michael Fanous
Michael Fanous, Megan P. Caputo, Young Jae Lee, Laurie A. Rund, Catherine Best-Popescu, Mikhail E. Kandel, Rodney W. Johnson, Tapas Das, Matthew J. Kuchan, Gabriel Popescu
Quantifying myelin in brain tissue using color spatial light interference microscopy (cSLIM)
null
null
10.1371/journal.pone.0241084
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deficient myelination of the brain is associated with neurodevelopmental delays, particularly in high-risk infants, such as those born small in relation to their gestational age (SGA). New methods are needed to further study this condition. Here, we employ Color Spatial Light Interference Microscopy (cSLIM), which uses a brightfield objective and RGB camera to generate pathlength-maps with nanoscale sensitivity in conjunction with a regular brightfield image. Using tissue sections stained with Luxol Fast Blue, the myelin structures were segmented from a brightfield image. Using a binary mask, those portions were quantitatively analyzed in the corresponding phase maps. We first used the CLARITY method to remove tissue lipids and validate the sensitivity of cSLIM to lipid content. We then applied cSLIM to brain histology slices. These specimens are from a previous MRI study, which demonstrated that appropriate for gestational age (AGA) piglets have increased internal capsule myelination (ICM) compared to small for gestational age (SGA) piglets and that a hydrolyzed fat diet improved ICM in both. The identity of samples was blinded until after statistical analyses.
[ { "created": "Mon, 2 Mar 2020 17:38:39 GMT", "version": "v1" } ]
2021-01-27
[ [ "Fanous", "Michael", "" ], [ "Caputo", "Megan P.", "" ], [ "Lee", "Young Jae", "" ], [ "Rund", "Laurie A.", "" ], [ "Best-Popescu", "Catherine", "" ], [ "Kandel", "Mikhail E.", "" ], [ "Johnson", "Rodney W.", "" ], [ "Das", "Tapas", "" ], [ "Kuchan", "Matthew J.", "" ], [ "Popescu", "Gabriel", "" ] ]
Deficient myelination of the brain is associated with neurodevelopmental delays, particularly in high-risk infants, such as those born small in relation to their gestational age (SGA). New methods are needed to further study this condition. Here, we employ Color Spatial Light Interference Microscopy (cSLIM), which uses a brightfield objective and RGB camera to generate pathlength-maps with nanoscale sensitivity in conjunction with a regular brightfield image. Using tissue sections stained with Luxol Fast Blue, the myelin structures were segmented from a brightfield image. Using a binary mask, those portions were quantitatively analyzed in the corresponding phase maps. We first used the CLARITY method to remove tissue lipids and validate the sensitivity of cSLIM to lipid content. We then applied cSLIM to brain histology slices. These specimens are from a previous MRI study, which demonstrated that appropriate for gestational age (AGA) piglets have increased internal capsule myelination (ICM) compared to small for gestational age (SGA) piglets and that a hydrolyzed fat diet improved ICM in both. The identity of samples was blinded until after statistical analyses.
2105.01999
Hyun Youk
Lars Koopmans, Hyun Youk
Predictive landscapes hidden beneath biological cellular automata
Invited perspective in honor of Hans Frauenfelder: to appear in upcoming special issue titled "Impact of Landscapes in Biology" in Journal of Biological Physics
null
null
null
q-bio.QM nlin.CG nlin.PS physics.bio-ph q-bio.CB
http://creativecommons.org/licenses/by-nc-nd/4.0/
To celebrate Hans Frauenfelder's achievements, we examine energy(-like) "landscapes" for complex living systems. Energy landscapes summarize all possible dynamics of some physical systems. Energy(-like) landscapes can explain some biomolecular processes, including gene expression and, as Frauenfelder showed, protein folding. But energy-like landscapes and existing frameworks like statistical mechanics seem impractical for describing many living systems. Difficulties stem from living systems being high dimensional, nonlinear, and governed by many, tightly coupled constituents that are noisy. The predominant modeling approach is devising differential equations that are tailored to each living system. This ad hoc approach faces the notorious "parameter problem": models have numerous nonlinear, mathematical functions with unknown parameter values, even for describing just a few intracellular processes. One cannot measure many intracellular parameters or can only measure them as snapshots in time. Another modeling approach uses cellular automata to represent living systems as discrete dynamical systems with binary variables. Quantitative (Hamiltonian-based) rules can dictate cellular automata (e.g., Cellular Potts Model). But numerous biological features, in current practice, are qualitatively described rather than quantitatively (e.g., gene is (highly) expressed or not (highly) expressed). Cellular automata governed by verbal rules are useful representations for living systems and can mitigate the parameter problem. However, they can yield complex dynamics that are difficult to understand because many of the existing mathematical tools and theorems apply to continuous but not discrete dynamical systems. Recent studies found ways to overcome this challenge by discovering a predictive "landscape" that yields low-dimensional representations of cellular automata dynamics. We review these studies.
[ { "created": "Wed, 5 May 2021 11:54:32 GMT", "version": "v1" }, { "created": "Thu, 14 Oct 2021 22:35:25 GMT", "version": "v2" } ]
2021-10-18
[ [ "Koopmans", "Lars", "" ], [ "Youk", "Hyun", "" ] ]
To celebrate Hans Frauenfelder's achievements, we examine energy(-like) "landscapes" for complex living systems. Energy landscapes summarize all possible dynamics of some physical systems. Energy(-like) landscapes can explain some biomolecular processes, including gene expression and, as Frauenfelder showed, protein folding. But energy-like landscapes and existing frameworks like statistical mechanics seem impractical for describing many living systems. Difficulties stem from living systems being high dimensional, nonlinear, and governed by many, tightly coupled constituents that are noisy. The predominant modeling approach is devising differential equations that are tailored to each living system. This ad hoc approach faces the notorious "parameter problem": models have numerous nonlinear, mathematical functions with unknown parameter values, even for describing just a few intracellular processes. One cannot measure many intracellular parameters or can only measure them as snapshots in time. Another modeling approach uses cellular automata to represent living systems as discrete dynamical systems with binary variables. Quantitative (Hamiltonian-based) rules can dictate cellular automata (e.g., Cellular Potts Model). But numerous biological features, in current practice, are qualitatively described rather than quantitatively (e.g., gene is (highly) expressed or not (highly) expressed). Cellular automata governed by verbal rules are useful representations for living systems and can mitigate the parameter problem. However, they can yield complex dynamics that are difficult to understand because many of the existing mathematical tools and theorems apply to continuous but not discrete dynamical systems. Recent studies found ways to overcome this challenge by discovering a predictive "landscape" that yields low-dimensional representations of cellular automata dynamics. We review these studies.
2305.16104
David Waxman
Konstantinos Mavreas and David Waxman
Information encoded in gene-frequency trajectories
30 pages, 3 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
In this work we present a systematic mathematical approximation scheme that exposes the way that information about the evolutionary forces of selection and random genetic drift is encoded in gene-frequency trajectories. We determine approximate, time-dependent, gene-frequency trajectory statistics, assuming additive selection. We use the probability of fixation to test and illustrate the approximation scheme introduced. For the case where the strength of selection and the effective population size have constant values, we show how a standard result for the probability of fixation, under the diffusion approximation, systematically emerges when increasing numbers of approximate trajectory statistics are taken into account. We then provide examples of how time-dependent parameters influence gene-frequency statistics.
[ { "created": "Thu, 25 May 2023 14:34:45 GMT", "version": "v1" } ]
2023-05-26
[ [ "Mavreas", "Konstantinos", "" ], [ "Waxman", "David", "" ] ]
In this work we present a systematic mathematical approximation scheme that exposes the way that information about the evolutionary forces of selection and random genetic drift is encoded in gene-frequency trajectories. We determine approximate, time-dependent, gene-frequency trajectory statistics, assuming additive selection. We use the probability of fixation to test and illustrate the approximation scheme introduced. For the case where the strength of selection and the effective population size have constant values, we show how a standard result for the probability of fixation, under the diffusion approximation, systematically emerges when increasing numbers of approximate trajectory statistics are taken into account. We then provide examples of how time-dependent parameters influence gene-frequency statistics.
q-bio/0406036
Helmut Schiessel
Boris Mergell, Ralf Everaers and Helmut Schiessel
Nucleosome interactions in chromatin: fiber stiffening and hairpin formation
11 pages, 5 figures, Phys. Rev. E, in press
null
10.1103/PhysRevE.70.011915
null
q-bio.SC q-bio.BM
null
We use Monte Carlo simulations to study attractive and excluded volume interactions between nucleosome core particles in 30 nm-chromatin fibers. The nucleosomes are treated as disk-like objects having an excluded volume and short range attraction modelled by a variant of the Gay-Berne potential. The nucleosomes are connected via bendable and twistable linker DNA in the crossed linker fashion. We investigate the influence of the nucleosomal excluded volume on the stiffness of the fiber. For parameter values that correspond to chicken erythrocyte chromatin we find that the persistence length is governed to a large extent by that excluded volume whereas the soft linker backbone elasticity plays only a minor role. We further find that internucleosomal attraction can induce the formation of hairpin configurations. Tension-induced opening of such configurations into straight fibers manifests itself in a quasi-plateau in the force-extension curve that resembles results from recent micromanipulation experiments. Such hairpins may play a role in the formation of higher order structures in chromosomes like chromonema fibers.
[ { "created": "Wed, 16 Jun 2004 20:12:39 GMT", "version": "v1" } ]
2009-11-10
[ [ "Mergell", "Boris", "" ], [ "Everaers", "Ralf", "" ], [ "Schiessel", "Helmut", "" ] ]
We use Monte Carlo simulations to study attractive and excluded volume interactions between nucleosome core particles in 30 nm-chromatin fibers. The nucleosomes are treated as disk-like objects having an excluded volume and short range attraction modelled by a variant of the Gay-Berne potential. The nucleosomes are connected via bendable and twistable linker DNA in the crossed linker fashion. We investigate the influence of the nucleosomal excluded volume on the stiffness of the fiber. For parameter values that correspond to chicken erythrocyte chromatin we find that the persistence length is governed to a large extent by that excluded volume whereas the soft linker backbone elasticity plays only a minor role. We further find that internucleosomal attraction can induce the formation of hairpin configurations. Tension-induced opening of such configurations into straight fibers manifests itself in a quasi-plateau in the force-extension curve that resembles results from recent micromanipulation experiments. Such hairpins may play a role in the formation of higher order structures in chromosomes like chromonema fibers.
1809.05095
Akram Yazdani PhD
Akram Yazdani
Effect of Blast Exposure on Gene-Gene Interactions
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Repeated exposure to low-level blast may initiate a range of adverse health problems such as traumatic brain injury (TBI). Although many studies have successfully identified genes associated with TBI, the cellular mechanisms underpinning TBI are not fully elucidated. In this study, we investigated the underlying relationships among genes by constructing transcript Bayesian networks using RNA-seq data. The data for pre- and post-blast transcripts, which were collected on 33 individuals in an Army training program, combined with our systems approach provide a unique opportunity to investigate the effect of blast-wave exposure on gene-gene interactions. Digging into the networks, we identified four subnetworks related to the immune system and inflammatory process that are disrupted due to the exposure. Among genes with relatively high fold change in their transcript expression level, ATP6V1G1, B2M, BCL2A1, PELI, S100A8, TRIM58 and ZNF654 showed a major impact on the dysregulation of the gene-gene interactions. This study reveals how repeated exposures to traumatic conditions increase the level of fold change of transcript expression and hypothesizes new targets for further experimental studies.
[ { "created": "Thu, 13 Sep 2018 15:44:02 GMT", "version": "v1" }, { "created": "Fri, 9 Nov 2018 20:55:45 GMT", "version": "v2" } ]
2018-11-13
[ [ "Yazdani", "Akram", "" ] ]
Repeated exposure to low-level blast may initiate a range of adverse health problems such as traumatic brain injury (TBI). Although many studies have successfully identified genes associated with TBI, the cellular mechanisms underpinning TBI are not fully elucidated. In this study, we investigated the underlying relationships among genes by constructing transcript Bayesian networks using RNA-seq data. The data for pre- and post-blast transcripts, which were collected on 33 individuals in an Army training program, combined with our systems approach provide a unique opportunity to investigate the effect of blast-wave exposure on gene-gene interactions. Digging into the networks, we identified four subnetworks related to the immune system and inflammatory process that are disrupted due to the exposure. Among genes with relatively high fold change in their transcript expression level, ATP6V1G1, B2M, BCL2A1, PELI, S100A8, TRIM58 and ZNF654 showed a major impact on the dysregulation of the gene-gene interactions. This study reveals how repeated exposures to traumatic conditions increase the level of fold change of transcript expression and hypothesizes new targets for further experimental studies.
1411.4978
Marc Lefranc
Jingkui Wang (PhLAM), Benjamin Pfeuty (PhLAM), Quentin Thommen (PhLAM), Carmen Romano (SUPA), Marc Lefranc (PhLAM)
Minimal model of transcriptional elongation processes with pauses
null
Physical Review E, American Physical Society, 2014, 90, pp.050701(R)
10.1103/PhysRevE.90.050701
null
q-bio.QM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fundamental biological processes such as transcription and translation, where a genetic sequence is sequentially read by a macromolecule, have been well described by a classical model of non-equilibrium statistical physics, the totally asymmetric simple exclusion process (TASEP). This model describes particles hopping between sites of a one-dimensional lattice, with the particle current determining the transcription or translation rate. An open problem is how to analyze a TASEP where particles can pause randomly, as has been observed during transcription. In this work, we report that surprisingly, a simple mean-field model predicts well the particle current for all values of the average pause duration, using a simple description of blocking behind paused particles.
[ { "created": "Tue, 18 Nov 2014 19:42:11 GMT", "version": "v1" } ]
2019-08-15
[ [ "Wang", "Jingkui", "", "PhLAM" ], [ "Pfeuty", "Benjamin", "", "PhLAM" ], [ "Thommen", "Quentin", "", "PhLAM" ], [ "Romano", "Carmen", "", "SUPA" ], [ "Lefranc", "Marc", "", "PhLAM" ] ]
Fundamental biological processes such as transcription and translation, where a genetic sequence is sequentially read by a macromolecule, have been well described by a classical model of non-equilibrium statistical physics, the totally asymmetric simple exclusion process (TASEP). This model describes particles hopping between sites of a one-dimensional lattice, with the particle current determining the transcription or translation rate. An open problem is how to analyze a TASEP where particles can pause randomly, as has been observed during transcription. In this work, we report that surprisingly, a simple mean-field model predicts well the particle current for all values of the average pause duration, using a simple description of blocking behind paused particles.
2402.04286
Qing Li
Qing Li, Zhihang Hu, Yixuan Wang, Lei Li, Yimin Fan, Irwin King, Le Song, Yu Li
Progress and Opportunities of Foundation Models in Bioinformatics
27 pages, 3 figures, 2 tables
null
null
null
q-bio.QM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bioinformatics has witnessed a paradigm shift with the increasing integration of artificial intelligence (AI), particularly through the adoption of foundation models (FMs). These AI techniques have rapidly advanced, addressing historical challenges in bioinformatics such as the scarcity of annotated data and the presence of data noise. FMs are particularly adept at handling large-scale, unlabeled data, a common scenario in biological contexts due to the time-consuming and costly nature of experimentally determining labeled data. This characteristic has allowed FMs to excel and achieve notable results in various downstream validation tasks, demonstrating their ability to represent diverse biological entities effectively. Undoubtedly, FMs have ushered in a new era in computational biology, especially in the realm of deep learning. The primary goal of this survey is to conduct a systematic investigation and summary of FMs in bioinformatics, tracing their evolution, current research status, and the methodologies employed. Central to our focus is the application of FMs to specific biological problems, aiming to guide the research community in choosing appropriate FMs for their research needs. We delve into the specifics of the problem at hand including sequence analysis, structure prediction, function annotation, and multimodal integration, comparing the structures and advancements against traditional methods. Furthermore, the review analyses challenges and limitations faced by FMs in biology, such as data noise, model explainability, and potential biases. Finally, we outline potential development paths and strategies for FMs in future biological research, setting the stage for continued innovation and application in this rapidly evolving field. This comprehensive review serves not only as an academic resource but also as a roadmap for future explorations and applications of FMs in biology.
[ { "created": "Tue, 6 Feb 2024 02:29:17 GMT", "version": "v1" } ]
2024-02-08
[ [ "Li", "Qing", "" ], [ "Hu", "Zhihang", "" ], [ "Wang", "Yixuan", "" ], [ "Li", "Lei", "" ], [ "Fan", "Yimin", "" ], [ "King", "Irwin", "" ], [ "Song", "Le", "" ], [ "Li", "Yu", "" ] ]
Bioinformatics has witnessed a paradigm shift with the increasing integration of artificial intelligence (AI), particularly through the adoption of foundation models (FMs). These AI techniques have rapidly advanced, addressing historical challenges in bioinformatics such as the scarcity of annotated data and the presence of data noise. FMs are particularly adept at handling large-scale, unlabeled data, a common scenario in biological contexts due to the time-consuming and costly nature of experimentally determining labeled data. This characteristic has allowed FMs to excel and achieve notable results in various downstream validation tasks, demonstrating their ability to represent diverse biological entities effectively. Undoubtedly, FMs have ushered in a new era in computational biology, especially in the realm of deep learning. The primary goal of this survey is to conduct a systematic investigation and summary of FMs in bioinformatics, tracing their evolution, current research status, and the methodologies employed. Central to our focus is the application of FMs to specific biological problems, aiming to guide the research community in choosing appropriate FMs for their research needs. We delve into the specifics of the problem at hand including sequence analysis, structure prediction, function annotation, and multimodal integration, comparing the structures and advancements against traditional methods. Furthermore, the review analyses challenges and limitations faced by FMs in biology, such as data noise, model explainability, and potential biases. Finally, we outline potential development paths and strategies for FMs in future biological research, setting the stage for continued innovation and application in this rapidly evolving field. This comprehensive review serves not only as an academic resource but also as a roadmap for future explorations and applications of FMs in biology.
1301.1541
Agnes Noy
Agnes Noy and Ramin Golestanian
Reply to Comment on 'Length Scale Dependence of DNA Mechanical Properties'
Revised version to appear in Phys. Rev. Lett
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reply to Comment on 'Length Scale Dependence of DNA Mechanical Properties'
[ { "created": "Tue, 8 Jan 2013 14:21:08 GMT", "version": "v1" }, { "created": "Mon, 14 Oct 2013 13:36:57 GMT", "version": "v2" } ]
2013-10-15
[ [ "Noy", "Agnes", "" ], [ "Golestanian", "Ramin", "" ] ]
Reply to Comment on 'Length Scale Dependence of DNA Mechanical Properties'
1801.02435
Simon Benhamou
Simon Benhamou
Mean squared displacement and sinuosity of three-dimensional random search movements
7 pages for main text, 2 pages for appendix, 1 figure
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Correlated random walks (CRW) have long been used as a null model for an animal's random search movement in two dimensions (2D). An increasing number of studies focus on animals' movement in three dimensions (3D), but the key properties of CRW, such as the way the mean squared displacement is related to the path length, are well known only in 1D and 2D. In this paper I derive such properties for 3D CRW, in a way consistent with the expression of these properties in 2D. This should allow 3D CRW to act as a null model when analyzing actual 3D movements, similarly to what is done in 2D.
[ { "created": "Mon, 8 Jan 2018 14:24:49 GMT", "version": "v1" }, { "created": "Fri, 16 Feb 2018 11:09:11 GMT", "version": "v2" }, { "created": "Sun, 25 Feb 2018 12:25:04 GMT", "version": "v3" }, { "created": "Fri, 8 Jun 2018 16:34:52 GMT", "version": "v4" }, { "created": "Thu, 10 Jan 2019 12:28:44 GMT", "version": "v5" } ]
2019-01-11
[ [ "Benhamou", "Simon", "" ] ]
Correlated random walks (CRW) have long been used as a null model for an animal's random search movement in two dimensions (2D). An increasing number of studies focus on animals' movement in three dimensions (3D), but the key properties of CRW, such as the way the mean squared displacement is related to the path length, are well known only in 1D and 2D. In this paper I derive such properties for 3D CRW, in a way consistent with the expression of these properties in 2D. This should allow 3D CRW to act as a null model when analyzing actual 3D movements, similarly to what is done in 2D.
2008.11790
Daniel Berman
Daniel S. Berman (1), Craig Howser (1), Thomas Mehoke (1), Jared D. Evans (1) ((1) Johns Hopkins Applied Physics Laboratory, Laurel, United States)
MutaGAN: A Seq2seq GAN Framework to Predict Mutations of Evolving Protein Populations
28 pages, 9 figures, 2 tables, Daniel S. Berman and Craig Howser contributed equally to this work. This paper was submitted to Artificial Intelligence
null
null
null
q-bio.QM cs.LG stat.ML
http://creativecommons.org/licenses/by-sa/4.0/
The ability to predict the evolution of a pathogen would significantly improve the ability to control, prevent, and treat disease. Despite significant progress in other problem spaces, deep learning has yet to contribute to the issue of predicting mutations of evolving populations. To address this gap, we developed a novel machine learning framework using generative adversarial networks (GANs) with recurrent neural networks (RNNs) to accurately predict genetic mutations and evolution of future biological populations. Using a generalized time-reversible phylogenetic model of protein evolution with bootstrapped maximum likelihood tree estimation, we trained a sequence-to-sequence generator within an adversarial framework, named MutaGAN, to generate complete protein sequences augmented with possible mutations of future virus populations. Influenza virus sequences were identified as an ideal test case for this deep learning framework because it is a significant human pathogen with new strains emerging annually and global surveillance efforts have generated a large amount of publicly available data from the National Center for Biotechnology Information's (NCBI) Influenza Virus Resource (IVR). MutaGAN generated "child" sequences from a given "parent" protein sequence with a median Levenshtein distance of 2.00 amino acids. Additionally, the generator was able to augment the majority of parent proteins with at least one mutation identified within the global influenza virus population. These results demonstrate the power of the MutaGAN framework to aid in pathogen forecasting with implications for broad utility in evolutionary prediction for any protein population.
[ { "created": "Wed, 26 Aug 2020 20:20:30 GMT", "version": "v1" } ]
2020-08-28
[ [ "Berman", "Daniel S.", "" ], [ "Howser", "Craig", "" ], [ "Mehoke", "Thomas", "" ], [ "Evans", "Jared D.", "" ] ]
The ability to predict the evolution of a pathogen would significantly improve the ability to control, prevent, and treat disease. Despite significant progress in other problem spaces, deep learning has yet to contribute to the issue of predicting mutations of evolving populations. To address this gap, we developed a novel machine learning framework using generative adversarial networks (GANs) with recurrent neural networks (RNNs) to accurately predict genetic mutations and evolution of future biological populations. Using a generalized time-reversible phylogenetic model of protein evolution with bootstrapped maximum likelihood tree estimation, we trained a sequence-to-sequence generator within an adversarial framework, named MutaGAN, to generate complete protein sequences augmented with possible mutations of future virus populations. Influenza virus sequences were identified as an ideal test case for this deep learning framework because it is a significant human pathogen with new strains emerging annually and global surveillance efforts have generated a large amount of publicly available data from the National Center for Biotechnology Information's (NCBI) Influenza Virus Resource (IVR). MutaGAN generated "child" sequences from a given "parent" protein sequence with a median Levenshtein distance of 2.00 amino acids. Additionally, the generator was able to augment the majority of parent proteins with at least one mutation identified within the global influenza virus population. These results demonstrate the power of the MutaGAN framework to aid in pathogen forecasting with implications for broad utility in evolutionary prediction for any protein population.
2304.06592
Bastian Alt
Derya Alt{\i}ntan, Bastian Alt, Heinz Koeppl
Bayesian Inference for Jump-Diffusion Approximations of Biochemical Reaction Networks
null
null
null
null
q-bio.QM q-bio.MN stat.ME stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biochemical reaction networks are an amalgamation of reactions where each reaction represents the interaction of different species. Generally, these networks exhibit a multi-scale behavior caused by the high variability in reaction rates and abundances of species. The so-called jump-diffusion approximation is a valuable tool in the modeling of such systems. The approximation is constructed by partitioning the reactions of the network into a fast subgroup and a slow subgroup. This enables the modeling of the dynamics using a Langevin equation for the fast group, while a Markov jump process model is kept for the dynamics of the slow group. Most often, biochemical processes are poorly characterized in terms of parameters and population states. As a result, methods for estimating hidden quantities are of significant interest. In this paper, we develop a tractable Bayesian inference algorithm based on Markov chain Monte Carlo. The presented blocked Gibbs particle smoothing algorithm utilizes a sequential Monte Carlo method to estimate the latent states and performs distinct Gibbs steps for the parameters of a biochemical reaction network, by exploiting a jump-diffusion approximation model. The presented blocked Gibbs sampler is based on the two distinct steps of state inference and parameter inference. We estimate states via a continuous-time forward-filtering backward-smoothing procedure in the state inference step. By utilizing bootstrap particle filtering within a backward-smoothing procedure, we sample a smoothing trajectory. For estimating the hidden parameters, we utilize a separate Markov chain Monte Carlo sampler within the Gibbs sampler that uses the path-wise continuous-time representation of the reaction counters. Finally, the algorithm is numerically evaluated for a partially observed multi-scale birth-death process example.
[ { "created": "Thu, 13 Apr 2023 14:57:22 GMT", "version": "v1" } ]
2023-04-14
[ [ "Altıntan", "Derya", "" ], [ "Alt", "Bastian", "" ], [ "Koeppl", "Heinz", "" ] ]
Biochemical reaction networks are an amalgamation of reactions where each reaction represents the interaction of different species. Generally, these networks exhibit a multi-scale behavior caused by the high variability in reaction rates and abundances of species. The so-called jump-diffusion approximation is a valuable tool in the modeling of such systems. The approximation is constructed by partitioning the reactions of the network into a fast subgroup and a slow subgroup. This enables the modeling of the dynamics using a Langevin equation for the fast group, while a Markov jump process model is kept for the dynamics of the slow group. Most often, biochemical processes are poorly characterized in terms of parameters and population states. As a result, methods for estimating hidden quantities are of significant interest. In this paper, we develop a tractable Bayesian inference algorithm based on Markov chain Monte Carlo. The presented blocked Gibbs particle smoothing algorithm utilizes a sequential Monte Carlo method to estimate the latent states and performs distinct Gibbs steps for the parameters of a biochemical reaction network, by exploiting a jump-diffusion approximation model. The presented blocked Gibbs sampler is based on the two distinct steps of state inference and parameter inference. We estimate states via a continuous-time forward-filtering backward-smoothing procedure in the state inference step. By utilizing bootstrap particle filtering within a backward-smoothing procedure, we sample a smoothing trajectory. For estimating the hidden parameters, we utilize a separate Markov chain Monte Carlo sampler within the Gibbs sampler that uses the path-wise continuous-time representation of the reaction counters. Finally, the algorithm is numerically evaluated for a partially observed multi-scale birth-death process example.
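The partially observed birth-death benchmark mentioned at the end of this abstract can be simulated exactly with Gillespie's stochastic simulation algorithm; a stdlib-only sketch generating the kind of reference trajectory such inference methods are evaluated on (the rate values are illustrative, not taken from the paper):

```python
import random

def gillespie_birth_death(x0, birth, death, t_max, rng):
    """Exact stochastic simulation (Gillespie SSA) of a birth-death
    process: X -> X+1 at rate `birth`, X -> X-1 at rate `death * X`."""
    t, x = 0.0, x0
    path = [(t, x)]
    while t < t_max:
        a_birth = birth
        a_death = death * x
        a_total = a_birth + a_death
        if a_total == 0.0:
            break
        t += rng.expovariate(a_total)          # waiting time to next event
        if t > t_max:
            break
        if rng.random() * a_total < a_birth:   # pick which reaction fires
            x += 1
        else:
            x -= 1
        path.append((t, x))
    return path

rng = random.Random(0)
path = gillespie_birth_death(10, birth=5.0, death=0.5, t_max=50.0, rng=rng)
```

With these illustrative rates the stationary mean is birth/death = 10, so the trajectory fluctuates around its initial state.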
1807.04190
Tomokazu Konishi
Tomokazu Konishi
Concerns regarding the deterioration of objectivity in molecular biology
13 pages, 4 figures, a review article
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Scientific objectivity was not a problem in the early days of molecular biology. However, relativism seems to have invaded some areas of the field, damaging the objectivity of its analyses. This review reports on the status of this issue by investigating a number of cases.
[ { "created": "Tue, 10 Jul 2018 02:54:49 GMT", "version": "v1" } ]
2018-07-12
[ [ "Konishi", "Tomokazu", "" ] ]
Scientific objectivity was not a problem in the early days of molecular biology. However, relativism seems to have invaded some areas of the field, damaging the objectivity of its analyses. This review reports on the status of this issue by investigating a number of cases.
2202.10605
Lin Yang
Lin Yang, Shuai Guo, Chengyu Hou, Chencheng Liao, Jiacheng Li, Liping Shi, Xiaoliang Ma, Shenda Jiang, Bing Zheng, Yi Fang, Lin Ye, Xiaodong He
Space Layout of Low-entropy Hydration Shells Guides Protein Binding
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Protein-protein binding enables orderly and lawful biological self-organization, and is therefore considered a miracle of nature. Protein-protein binding is steered by electrostatic forces, hydrogen bonding, van der Waals force, and hydrophobic interactions. Among these physical forces, only the hydrophobic interactions can be considered as long-range intermolecular attractions between proteins in intracellular and extracellular fluid. Low-entropy regions of hydration shells around proteins drive hydrophobic attraction among them, which essentially coordinates protein-protein docking in the rotational-conformational space of mutual orientations at the guidance stage of the binding. Here, an innovative method was developed for identifying the low-entropy regions of hydration shells of given proteins, and we discovered that the largest low-entropy regions of hydration shells on proteins typically cover the binding sites. According to an analysis of determined protein complex structures, shape matching between the largest low-entropy hydration shell region of a protein and that of its partner at the binding sites is revealed as a regular pattern. Protein-protein binding is thus found to be mainly guided by hydrophobic collapse between the shape-matched low-entropy hydration shells, as verified by bioinformatics analyses of hundreds of structures of protein complexes. A simple algorithm is developed to precisely predict protein binding sites.
[ { "created": "Tue, 22 Feb 2022 01:13:38 GMT", "version": "v1" } ]
2022-02-23
[ [ "Yang", "Lin", "" ], [ "Guo", "Shuai", "" ], [ "Hou", "Chengyu", "" ], [ "Liao", "Chencheng", "" ], [ "Li", "Jiacheng", "" ], [ "Shi", "Liping", "" ], [ "Ma", "Xiaoliang", "" ], [ "Jiang", "Shenda", "" ], [ "Zheng", "Bing", "" ], [ "Fang", "Yi", "" ], [ "Ye", "Lin", "" ], [ "He", "Xiaodong", "" ] ]
Protein-protein binding enables orderly and lawful biological self-organization, and is therefore considered a miracle of nature. Protein-protein binding is steered by electrostatic forces, hydrogen bonding, van der Waals force, and hydrophobic interactions. Among these physical forces, only the hydrophobic interactions can be considered as long-range intermolecular attractions between proteins in intracellular and extracellular fluid. Low-entropy regions of hydration shells around proteins drive hydrophobic attraction among them, which essentially coordinates protein-protein docking in the rotational-conformational space of mutual orientations at the guidance stage of the binding. Here, an innovative method was developed for identifying the low-entropy regions of hydration shells of given proteins, and we discovered that the largest low-entropy regions of hydration shells on proteins typically cover the binding sites. According to an analysis of determined protein complex structures, shape matching between the largest low-entropy hydration shell region of a protein and that of its partner at the binding sites is revealed as a regular pattern. Protein-protein binding is thus found to be mainly guided by hydrophobic collapse between the shape-matched low-entropy hydration shells, as verified by bioinformatics analyses of hundreds of structures of protein complexes. A simple algorithm is developed to precisely predict protein binding sites.
2210.03190
Francis Banville
Francis Banville (1, 2 and 3), Dominique Gravel (2 and 3) and Timoth\'ee Poisot (1 and 3) ((1) Universit\'e de Montr\'eal, (2) Universit\'e de Sherbrooke, (3) Quebec Centre for Biodiversity Science)
What constrains food webs? A maximum entropy framework for predicting their structure with minimal biases
null
null
null
null
q-bio.QM q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Food webs are complex ecological networks whose structure is both ecologically and statistically constrained, with many network properties being correlated with each other. Despite the recognition of these invariable relationships in food webs, the use of the principle of maximum entropy (MaxEnt) in network ecology is still rare. This is surprising considering that MaxEnt is a statistical tool precisely designed for understanding and predicting many different types of constrained systems. Precisely, this principle asserts that the least-biased probability distribution of a system's property, constrained by prior knowledge about that system, is the one with maximum information entropy. Here we show how MaxEnt can be used to derive many food-web properties both analytically and heuristically. First, we show how the joint degree distribution (the joint probability distribution of the numbers of prey and predators for each species in the network) can be derived analytically using the number of species and the number of interactions in food webs. Second, we present a heuristic and flexible approach of finding a network's adjacency matrix (the network's representation in matrix format) based on simulated annealing and SVD entropy. We built two heuristic models using the connectance and the joint degree sequence as statistical constraints, respectively. We compared both models' predictions against corresponding null and neutral models commonly used in network ecology using open access data of terrestrial and aquatic food webs sampled globally. We found that the heuristic model constrained by the joint degree sequence was a good predictor of many measures of food-web structure, especially the nestedness and motifs distribution. Specifically, our results suggest that the structure of terrestrial and aquatic food webs is mainly driven by their joint degree distribution.
[ { "created": "Thu, 6 Oct 2022 20:06:31 GMT", "version": "v1" }, { "created": "Fri, 13 Jan 2023 20:54:11 GMT", "version": "v2" } ]
2023-01-18
[ [ "Banville", "Francis", "", "1, 2 and 3" ], [ "Gravel", "Dominique", "", "2 and 3" ], [ "Poisot", "Timothée", "", "1 and 3" ] ]
Food webs are complex ecological networks whose structure is both ecologically and statistically constrained, with many network properties being correlated with each other. Despite the recognition of these invariable relationships in food webs, the use of the principle of maximum entropy (MaxEnt) in network ecology is still rare. This is surprising considering that MaxEnt is a statistical tool precisely designed for understanding and predicting many different types of constrained systems. Precisely, this principle asserts that the least-biased probability distribution of a system's property, constrained by prior knowledge about that system, is the one with maximum information entropy. Here we show how MaxEnt can be used to derive many food-web properties both analytically and heuristically. First, we show how the joint degree distribution (the joint probability distribution of the numbers of prey and predators for each species in the network) can be derived analytically using the number of species and the number of interactions in food webs. Second, we present a heuristic and flexible approach of finding a network's adjacency matrix (the network's representation in matrix format) based on simulated annealing and SVD entropy. We built two heuristic models using the connectance and the joint degree sequence as statistical constraints, respectively. We compared both models' predictions against corresponding null and neutral models commonly used in network ecology using open access data of terrestrial and aquatic food webs sampled globally. We found that the heuristic model constrained by the joint degree sequence was a good predictor of many measures of food-web structure, especially the nestedness and motifs distribution. Specifically, our results suggest that the structure of terrestrial and aquatic food webs is mainly driven by their joint degree distribution.
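The heuristic search described above scores candidate adjacency matrices with SVD entropy. One common definition is the Shannon entropy of the normalized singular-value spectrum, rescaled to [0, 1]; a NumPy sketch (the exact normalization used in the paper may differ):

```python
import numpy as np

def svd_entropy(A: np.ndarray) -> float:
    """Shannon entropy of the normalized singular-value spectrum of a
    matrix, rescaled to [0, 1] by the maximum attainable entropy."""
    s = np.linalg.svd(A, compute_uv=False)
    s = s[s > 1e-12]          # drop numerically-zero singular values
    p = s / s.sum()
    if len(p) == 1:
        return 0.0            # a rank-1 spectrum carries no entropy
    return float(-(p * np.log(p)).sum() / np.log(len(p)))
```

A flat spectrum (e.g. the identity matrix) gives entropy 1, while a rank-deficient, highly structured matrix scores lower, which is what lets simulated annealing rank candidate adjacency matrices.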
2203.00707
Evan Anderson
Evan D. Anderson, Ramsey Wilcox, Anuj Nayak, Christopher Zwilling, Pablo Robles-Granda, Been Kim, Lav R. Varshney, Aron K. Barbey
Advanced Methods for Connectome-Based Predictive Modeling of Human Intelligence: A Novel Approach Based on Individual Differences in Cortical Topography
6 pages, 2 figures, workshop paper at NeurIPS 2021 AI for Science Workshop
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-sa/4.0/
Individual differences in human intelligence can be modeled and predicted from in vivo neurobiological connectivity. Many established modeling frameworks for predicting intelligence, however, discard higher-order information about individual differences in brain network topology, and show only moderate performance when generalized to make predictions in out-of-sample subjects. In this paper, we propose that connectome-based predictive modeling, a common predictive modeling framework for neuroscience data, can be productively modified to incorporate information about brain network topology and individual differences via the incorporation of bagged decision trees and the network based statistic. These modifications produce a novel predictive modeling framework that leverages individual differences in cortical tractography to generate accurate regression predictions of intelligence scores. Network topology-based feature selection provides for natively interpretable networks as input features, increasing the model's explainability. Investigating the proposed modeling framework's efficacy, we find that advanced connectome-based predictive modeling generates neuroscience predictions that account for a significantly greater proportion of variance in general intelligence scores than previously established methods, advancing our scientific understanding of the network architecture that underlies human intelligence.
[ { "created": "Tue, 1 Mar 2022 19:05:55 GMT", "version": "v1" }, { "created": "Thu, 3 Mar 2022 20:48:13 GMT", "version": "v2" } ]
2022-03-07
[ [ "Anderson", "Evan D.", "" ], [ "Wilcox", "Ramsey", "" ], [ "Nayak", "Anuj", "" ], [ "Zwilling", "Christopher", "" ], [ "Robles-Granda", "Pablo", "" ], [ "Kim", "Been", "" ], [ "Varshney", "Lav R.", "" ], [ "Barbey", "Aron K.", "" ] ]
Individual differences in human intelligence can be modeled and predicted from in vivo neurobiological connectivity. Many established modeling frameworks for predicting intelligence, however, discard higher-order information about individual differences in brain network topology, and show only moderate performance when generalized to make predictions in out-of-sample subjects. In this paper, we propose that connectome-based predictive modeling, a common predictive modeling framework for neuroscience data, can be productively modified to incorporate information about brain network topology and individual differences via the incorporation of bagged decision trees and the network based statistic. These modifications produce a novel predictive modeling framework that leverages individual differences in cortical tractography to generate accurate regression predictions of intelligence scores. Network topology-based feature selection provides for natively interpretable networks as input features, increasing the model's explainability. Investigating the proposed modeling framework's efficacy, we find that advanced connectome-based predictive modeling generates neuroscience predictions that account for a significantly greater proportion of variance in general intelligence scores than previously established methods, advancing our scientific understanding of the network architecture that underlies human intelligence.
1612.01030
Alexandre Drouin
Alexandre Drouin, Fr\'ed\'eric Raymond, Ga\"el Letarte St-Pierre, Mario Marchand, Jacques Corbeil, Fran\c{c}ois Laviolette
Large scale modeling of antimicrobial resistance with interpretable classifiers
Peer-reviewed and accepted for presentation at the Machine Learning for Health Workshop, NIPS 2016, Barcelona, Spain
null
null
null
q-bio.GN cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Antimicrobial resistance is an important public health concern that has implications in the practice of medicine worldwide. Accurately predicting resistance phenotypes from genome sequences shows great promise in promoting better use of antimicrobial agents, by determining which antibiotics are likely to be effective in specific clinical cases. In healthcare, this would allow for the design of treatment plans tailored for specific individuals, likely resulting in better clinical outcomes for patients with bacterial infections. In this work, we present the recent work of Drouin et al. (2016) on using Set Covering Machines to learn highly interpretable models of antibiotic resistance and complement it by providing a large scale application of their method to the entire PATRIC database. We report prediction results for 36 new datasets and present the Kover AMR platform, a new web-based tool allowing the visualization and interpretation of the generated models.
[ { "created": "Sat, 3 Dec 2016 22:52:44 GMT", "version": "v1" } ]
2016-12-06
[ [ "Drouin", "Alexandre", "" ], [ "Raymond", "Frédéric", "" ], [ "St-Pierre", "Gaël Letarte", "" ], [ "Marchand", "Mario", "" ], [ "Corbeil", "Jacques", "" ], [ "Laviolette", "François", "" ] ]
Antimicrobial resistance is an important public health concern that has implications in the practice of medicine worldwide. Accurately predicting resistance phenotypes from genome sequences shows great promise in promoting better use of antimicrobial agents, by determining which antibiotics are likely to be effective in specific clinical cases. In healthcare, this would allow for the design of treatment plans tailored for specific individuals, likely resulting in better clinical outcomes for patients with bacterial infections. In this work, we present the recent work of Drouin et al. (2016) on using Set Covering Machines to learn highly interpretable models of antibiotic resistance and complement it by providing a large scale application of their method to the entire PATRIC database. We report prediction results for 36 new datasets and present the Kover AMR platform, a new web-based tool allowing the visualization and interpretation of the generated models.
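Set Covering Machines build sparse conjunctions of rules with a greedy procedure closely related to the classic greedy set-cover heuristic. As intuition only, a stdlib sketch of plain greedy set cover (this is not the SCM learning algorithm itself, and the example sets are made up):

```python
def greedy_set_cover(universe, subsets):
    """Greedy minimum-set-cover heuristic: repeatedly pick the subset
    that covers the most still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("universe cannot be covered")
        chosen.append(set(best))
        uncovered -= set(best)
    return chosen
```

In the SCM analogy, each "subset" is a k-mer rule and the "universe" is the set of training genomes the rule must classify correctly; greediness is what makes the learned models short and interpretable.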
1803.05866
Katharina Huber
Manuel Lafond and Nadia El-Mabrouk and Katharina T. Huber and Vincent Moulton
The complexity of comparing multiply-labelled trees by extending phylogenetic-tree metrics
31 pages, 6 figures
null
null
null
q-bio.PE cs.DS math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A multilabeled tree (or MUL-tree) is a rooted tree in which every leaf is labelled by an element from some set, but in which more than one leaf may be labelled by the same element of that set. In phylogenetics, such trees are used in biogeographical studies, to study the evolution of gene families, and also within approaches to construct phylogenetic networks. A multilabelled tree in which no leaf-labels are repeated is called a phylogenetic tree, and one in which every label is the same is also known as a tree-shape. In this paper, we consider the complexity of computing metrics on MUL-trees that are obtained by extending metrics on phylogenetic trees. In particular, by restricting our attention to tree-shapes, we show that computing the metric extension on MUL-trees is NP-complete for two well-known metrics on phylogenetic trees, namely, the path-difference and Robinson-Foulds distances. We also show that the extension of the Robinson-Foulds distance is fixed-parameter tractable with respect to the distance parameter. The path-difference complexity result allows us to also answer an open problem concerning the complexity of solving the quadratic assignment problem for two matrices that are a Robinson similarity and a Robinson dissimilarity, which we show to be NP-complete. We conclude by considering the extension of the maximum agreement subtree (MAST) distance from phylogenetic trees to MUL-trees. Although its extension to MUL-trees can be computed in polynomial time, we show that computing its natural generalization to more than two MUL-trees is NP-complete, although fixed-parameter tractable in the maximum degree when the number of given trees is bounded.
[ { "created": "Thu, 15 Mar 2018 17:00:24 GMT", "version": "v1" } ]
2018-03-16
[ [ "Lafond", "Manuel", "" ], [ "El-Mabrouk", "Nadia", "" ], [ "Huber", "Katharina T.", "" ], [ "Moulton", "Vincent", "" ] ]
A multilabeled tree (or MUL-tree) is a rooted tree in which every leaf is labelled by an element from some set, but in which more than one leaf may be labelled by the same element of that set. In phylogenetics, such trees are used in biogeographical studies, to study the evolution of gene families, and also within approaches to construct phylogenetic networks. A multilabelled tree in which no leaf-labels are repeated is called a phylogenetic tree, and one in which every label is the same is also known as a tree-shape. In this paper, we consider the complexity of computing metrics on MUL-trees that are obtained by extending metrics on phylogenetic trees. In particular, by restricting our attention to tree-shapes, we show that computing the metric extension on MUL-trees is NP-complete for two well-known metrics on phylogenetic trees, namely, the path-difference and Robinson-Foulds distances. We also show that the extension of the Robinson-Foulds distance is fixed-parameter tractable with respect to the distance parameter. The path-difference complexity result allows us to also answer an open problem concerning the complexity of solving the quadratic assignment problem for two matrices that are a Robinson similarity and a Robinson dissimilarity, which we show to be NP-complete. We conclude by considering the extension of the maximum agreement subtree (MAST) distance from phylogenetic trees to MUL-trees. Although its extension to MUL-trees can be computed in polynomial time, we show that computing its natural generalization to more than two MUL-trees is NP-complete, although fixed-parameter tractable in the maximum degree when the number of given trees is bounded.
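For rooted trees, the Robinson-Foulds distance discussed above is the size of the symmetric difference of the two trees' clade sets. A small sketch with trees given as nested tuples (the example topologies are made up):

```python
def clades(tree, out):
    """Record the leaf set of every internal node of a rooted tree given
    as nested tuples; returns the leaf set of `tree` itself."""
    if isinstance(tree, tuple):
        leaves = frozenset()
        for child in tree:
            leaves |= clades(child, out)
        out.add(leaves)
        return leaves
    return frozenset([tree])   # a leaf label

def robinson_foulds(t1, t2):
    """Size of the symmetric difference of the two trees' clade sets."""
    c1, c2 = set(), set()
    clades(t1, c1)
    clades(t2, c2)
    return len(c1 ^ c2)
```

On MUL-trees, repeated leaf labels collapse inside the frozensets, so this computation no longer characterizes the tree — which is precisely why extending such metrics to MUL-trees is nontrivial.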
1309.6273
Thomas Gregor
Laurent Abouchar, Mariela D. Petkova, Cynthia R. Steinhardt, and Thomas Gregor
Precision and reproducibility of macroscopic developmental patterns
5 pages, 3 figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Developmental processes in multicellular organisms occur far from equilibrium, yet produce complex patterns with astonishing reproducibility. We measure the precision and reproducibility of bilaterally symmetric fly wings across the natural range of genetic and environmental conditions and find that wing patterns are specified with identical spatial precision and are reproducible to within a single cell width. The early fly embryo operates at a similar degree of reproducibility, suggesting that the overall spatial precision of morphogenesis in Drosophila performs at the single cell level, arguably the physical limit of what a biological system can achieve.
[ { "created": "Tue, 24 Sep 2013 18:05:38 GMT", "version": "v1" } ]
2013-09-25
[ [ "Abouchar", "Laurent", "" ], [ "Petkova", "Mariela D.", "" ], [ "Steinhardt", "Cynthia R.", "" ], [ "Gregor", "Thomas", "" ] ]
Developmental processes in multicellular organisms occur far from equilibrium, yet produce complex patterns with astonishing reproducibility. We measure the precision and reproducibility of bilaterally symmetric fly wings across the natural range of genetic and environmental conditions and find that wing patterns are specified with identical spatial precision and are reproducible to within a single cell width. The early fly embryo operates at a similar degree of reproducibility, suggesting that the overall spatial precision of morphogenesis in Drosophila performs at the single cell level, arguably the physical limit of what a biological system can achieve.
2309.08126
Jiaxun Li
Jiaxun Li and Yanni Xiao
Analysis of a stochastic SIR model with media effects
null
null
null
null
q-bio.PE math.PR
http://creativecommons.org/licenses/by/4.0/
In this study, we investigate a stochastic SIR model with media effects. The uniqueness and existence of a global positive solution are studied. Sufficient conditions for the extinction and persistence of the disease are established. We obtain the basic reproduction number $R_0^S$ for the stochastic system, which can act as the threshold given small environmental noise. Note that large noise can induce disease extinction with probability 1, suggesting that environmental noise cannot be ignored when investigating threshold dynamics. Further, the inclusion of media-induced behaviour changes does not affect the threshold itself, which is similar to the conclusion of the deterministic models. However, numerical simulations suggest that media impacts induce a decline in disease infection.
[ { "created": "Fri, 15 Sep 2023 03:30:25 GMT", "version": "v1" } ]
2023-09-18
[ [ "Li", "Jiaxun", "" ], [ "Xiao", "Yanni", "" ] ]
In this study, we investigate a stochastic SIR model with media effects. The uniqueness and existence of a global positive solution are studied. Sufficient conditions for the extinction and persistence of the disease are established. We obtain the basic reproduction number $R_0^S$ for the stochastic system, which can act as the threshold given small environmental noise. Note that large noise can induce disease extinction with probability 1, suggesting that environmental noise cannot be ignored when investigating threshold dynamics. Further, the inclusion of media-induced behaviour changes does not affect the threshold itself, which is similar to the conclusion of the deterministic models. However, numerical simulations suggest that media impacts induce a decline in disease infection.
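A minimal Euler-Maruyama discretization of an SIR model with white noise on the transmission term illustrates the kind of system analyzed above (parameter values and noise placement are illustrative; the paper's exact formulation may differ):

```python
import random

def stochastic_sir(s, i, r, beta, gamma, sigma, dt, steps, rng):
    """Euler-Maruyama scheme for an SIR model whose transmission term
    beta*S*I is perturbed by white noise of intensity sigma."""
    path = [(s, i, r)]
    for _ in range(steps):
        dw = rng.gauss(0.0, dt ** 0.5)                 # Brownian increment
        flow = beta * s * i * dt + sigma * s * i * dw  # noisy new infections
        flow = max(0.0, min(flow, s))                  # keep fractions valid
        rec = min(gamma * i * dt, i)                   # recoveries
        s, i, r = s - flow, i + flow - rec, r + rec
        path.append((s, i, r))
    return path

rng = random.Random(1)
path = stochastic_sir(0.99, 0.01, 0.0, beta=0.5, gamma=0.1,
                      sigma=0.05, dt=0.1, steps=1000, rng=rng)
```

The scheme conserves the total population exactly, so S + I + R stays at 1 along the whole trajectory, and with these rates the deterministic threshold beta/gamma = 5 drives an outbreak despite the small noise.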
2006.02772
Fabrizio Pucci Dr.
Fabrizio Pucci, Philippe Bogaerts, Marianne Rooman
Modeling the molecular impact of SARS-CoV-2 infection on the renin-angiotensin system
22 pages, 5 figures. Revision 1 [section rearranged, typos fixed]
null
null
null
q-bio.MN
http://creativecommons.org/publicdomain/zero/1.0/
SARS-CoV-2 coronavirus infection is mediated by the binding of its spike protein to the angiotensin-converting enzyme 2 (ACE2), which plays a pivotal role in the renin-angiotensin system (RAS). The study of RAS dysregulation due to SARS-CoV-2 infection is fundamentally important for a better understanding of the pathogenic mechanisms and risk factors associated with COVID-19 coronavirus disease, and to design effective therapeutic strategies. In this context, we developed a mathematical model of RAS based on data regarding protein and peptide concentrations; the model was tested on clinical data from healthy normotensive and hypertensive individuals. We then used our model to analyze the impact of SARS-CoV-2 infection on RAS, which we modeled through a down-regulation of ACE2 as a function of viral load. We also used it to predict the effect of RAS-targeting drugs, such as RAS-blockers, human recombinant ACE2, and angiotensin 1-7 peptide, on COVID-19 patients; the model predicted an improvement of the clinical outcome for some drugs and a worsening for others.
[ { "created": "Thu, 4 Jun 2020 11:04:47 GMT", "version": "v1" }, { "created": "Sat, 1 Aug 2020 21:51:33 GMT", "version": "v2" } ]
2020-08-04
[ [ "Pucci", "Fabrizio", "" ], [ "Bogaerts", "Philippe", "" ], [ "Rooman", "Marianne", "" ] ]
SARS-CoV-2 coronavirus infection is mediated by the binding of its spike protein to the angiotensin-converting enzyme 2 (ACE2), which plays a pivotal role in the renin-angiotensin system (RAS). The study of RAS dysregulation due to SARS-CoV-2 infection is fundamentally important for a better understanding of the pathogenic mechanisms and risk factors associated with COVID-19 coronavirus disease, and to design effective therapeutic strategies. In this context, we developed a mathematical model of RAS based on data regarding protein and peptide concentrations; the model was tested on clinical data from healthy normotensive and hypertensive individuals. We then used our model to analyze the impact of SARS-CoV-2 infection on RAS, which we modeled through a down-regulation of ACE2 as a function of viral load. We also used it to predict the effect of RAS-targeting drugs, such as RAS-blockers, human recombinant ACE2, and angiotensin 1-7 peptide, on COVID-19 patients; the model predicted an improvement of the clinical outcome for some drugs and a worsening for others.
1006.3511
Stefano Luccioli
Stefano Luccioli and Antonio Politi
Collective behavior of heterogeneous neural networks
4 pages, 5 figures
null
10.1103/PhysRevLett.105.158104
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate a network of integrate-and-fire neurons characterized by a distribution of spiking frequencies. Upon increasing the coupling strength, the model exhibits a transition from an asynchronous regime to a nontrivial collective behavior. At variance with the Kuramoto model, (i) the macroscopic dynamics is irregular even in the thermodynamic limit, and (ii) the microscopic (single-neuron) evolution is linearly stable.
[ { "created": "Thu, 17 Jun 2010 16:35:07 GMT", "version": "v1" } ]
2015-05-19
[ [ "Luccioli", "Stefano", "" ], [ "Politi", "Antonio", "" ] ]
We investigate a network of integrate-and-fire neurons characterized by a distribution of spiking frequencies. Upon increasing the coupling strength, the model exhibits a transition from an asynchronous regime to a nontrivial collective behavior. At variance with the Kuramoto model, (i) the macroscopic dynamics is irregular even in the thermodynamic limit, and (ii) the microscopic (single-neuron) evolution is linearly stable.
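The building block of such networks is the leaky integrate-and-fire neuron; heterogeneous spiking frequencies arise from giving each neuron a different external drive. A single-neuron sketch with forward-Euler integration (parameter values are arbitrary, not from the paper):

```python
def lif_spike_times(i_ext, tau=20.0, v_th=1.0, v_reset=0.0, dt=0.1, t_max=500.0):
    """Leaky integrate-and-fire neuron, tau dV/dt = -V + I_ext,
    with V reset to v_reset on reaching threshold v_th.
    Returns the list of spike times."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += dt * (-v + i_ext) / tau
        if v >= v_th:
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes
```

A suprathreshold drive (i_ext > v_th) yields perfectly periodic firing, while subthreshold drive yields none; drawing i_ext from a distribution across neurons produces the heterogeneous frequency distribution the model above starts from.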
2401.02564
Jacob Luber
Parisa Boodaghi Malidarreh, Biraaj Rout, Mohammad Sadegh Nasr, Priyanshi Borad, Jillur Rahman Saurav, Jai Prakash Veerla, Kelli Fenelon, Theodora Koromila, Jacob M. Luber
Predicting Future States with Spatial Point Processes in Single Molecule Resolution Spatial Transcriptomics
null
null
null
null
q-bio.TO cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we introduce a pipeline based on Random Forest Regression to predict the future distribution of cells expressing the Sog-D gene (active cells) along both the anterior-to-posterior (AP) and dorsal-to-ventral (DV) axes of the Drosophila embryo during embryogenesis. This method provides insights into how cells and living organisms control gene expression in super-resolution whole-embryo spatial transcriptomics imaging at subcellular, single-molecule resolution. A Random Forest Regression model was used to predict the next stage's active distribution based on the previous one. To achieve this goal, we leveraged temporally resolved spatial point processes by including Ripley's K-function in conjunction with the cell's state at each stage of embryogenesis, and measured the average predictive accuracy of the active-cell distribution. This tool is analogous to RNA Velocity for spatially resolved developmental biology: from one data point we can predict future spatially resolved gene expression using features from the spatial point processes.
[ { "created": "Thu, 4 Jan 2024 22:37:56 GMT", "version": "v1" } ]
2024-01-08
[ [ "Malidarreh", "Parisa Boodaghi", "" ], [ "Rout", "Biraaj", "" ], [ "Nasr", "Mohammad Sadegh", "" ], [ "Borad", "Priyanshi", "" ], [ "Saurav", "Jillur Rahman", "" ], [ "Veerla", "Jai Prakash", "" ], [ "Fenelon", "Kelli", "" ], [ "Koromila", "Theodora", "" ], [ "Luber", "Jacob M.", "" ] ]
In this paper, we introduce a pipeline based on Random Forest Regression to predict the future distribution of cells expressing the Sog-D gene (active cells) along both the anterior-to-posterior (AP) and dorsal-to-ventral (DV) axes of the Drosophila embryo during embryogenesis. This method provides insights into how cells and living organisms control gene expression in super-resolution whole-embryo spatial transcriptomics imaging at sub-cellular, single-molecule resolution. A Random Forest Regression model was used to predict the active cell distribution of the next stage based on the previous one. To achieve this goal, we leveraged temporally resolved spatial point processes by including Ripley's K-function in conjunction with the cell's state in each stage of embryogenesis, and evaluated the average predictive accuracy of the active cell distribution. This tool is analogous to RNA Velocity for spatially resolved developmental biology: from one data point, we can predict future spatially resolved gene expression using features from the spatial point processes.
2005.01434
Ante Lojic Kapetanovic
Ante Loji\'c Kapetanovi\'c and Dragan Poljak
Modeling the Epidemic Outbreak and Dynamics of COVID-19 in Croatia
5 pages, 6 figures, to appear in the Proceedings of the SpliTech2020 conference
null
10.23919/SpliTech49282.2020.9243757
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The paper deals with modeling of the ongoing epidemic caused by Coronavirus disease 2019 (COVID-19) on the closed territory of the Republic of Croatia. Using the official public information on the number of confirmed infected, recovered and deceased individuals, a modified SEIR compartmental model is developed to describe the underlying dynamics of the epidemic. The fitted modified SEIR model provides a prediction of the disease progression in the near future, considering the strict control interventions by means of social distancing and quarantine for infected and at-risk individuals introduced at the beginning of the COVID-19 spread on 25 February by the Croatian Ministry of Health. Assuming the accuracy of the provided data and satisfactory representativeness of the model used, the basic reproduction number is derived. The obtained results portray potential positive developments and justify the stringent precautionary measures introduced by the Ministry of Health.
[ { "created": "Mon, 4 May 2020 12:43:12 GMT", "version": "v1" } ]
2020-11-17
[ [ "Kapetanović", "Ante Lojić", "" ], [ "Poljak", "Dragan", "" ] ]
The paper deals with modeling of the ongoing epidemic caused by Coronavirus disease 2019 (COVID-19) on the closed territory of the Republic of Croatia. Using the official public information on the number of confirmed infected, recovered and deceased individuals, a modified SEIR compartmental model is developed to describe the underlying dynamics of the epidemic. The fitted modified SEIR model provides a prediction of the disease progression in the near future, considering the strict control interventions by means of social distancing and quarantine for infected and at-risk individuals introduced at the beginning of the COVID-19 spread on 25 February by the Croatian Ministry of Health. Assuming the accuracy of the provided data and satisfactory representativeness of the model used, the basic reproduction number is derived. The obtained results portray potential positive developments and justify the stringent precautionary measures introduced by the Ministry of Health.
0811.4468
Tom Chou
Sarah A. Nowak, Buddhapriya Chakrabarti, Tom Chou, Ajay Gopinathan
Frequency-dependent Chemolocation and Chemotactic Target Selection
14 pages, 5 figures
Physical Biology, volume 7, 026003, (2010)
10.1088/1478-3975/7/2/026003
null
q-bio.CB q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chemotaxis is typically modeled in the context of cellular motion towards a static, exogenous source of chemoattractant. Here, we propose a time-dependent mechanism of chemotaxis in which a self-propelled particle ({\it e.g.}, a cell) releases a chemical that diffuses to fixed particles (targets) and signals the production of a second chemical by these targets. The particle then moves up concentration gradients of this second chemical, analogous to diffusive echolocation. When one target is present, we describe probe release strategies that optimize travel of the cell to the target. In the presence of multiple targets, the one selected by the cell depends on the strength and, interestingly, on the frequency of probe chemical release. Although involving an additional chemical signaling step, our chemical ``pinging'' hypothesis allows for greater flexibility in regulating target selection, as seen in a number of physical or biological realizations.
[ { "created": "Thu, 27 Nov 2008 04:49:45 GMT", "version": "v1" }, { "created": "Tue, 20 Jul 2010 01:10:46 GMT", "version": "v2" } ]
2015-05-13
[ [ "Nowak", "Sarah A.", "" ], [ "Chakrabarti", "Buddhapriya", "" ], [ "Chou", "Tom", "" ], [ "Gopinathan", "Ajay", "" ] ]
Chemotaxis is typically modeled in the context of cellular motion towards a static, exogenous source of chemoattractant. Here, we propose a time-dependent mechanism of chemotaxis in which a self-propelled particle ({\it e.g.}, a cell) releases a chemical that diffuses to fixed particles (targets) and signals the production of a second chemical by these targets. The particle then moves up concentration gradients of this second chemical, analogous to diffusive echolocation. When one target is present, we describe probe release strategies that optimize travel of the cell to the target. In the presence of multiple targets, the one selected by the cell depends on the strength and, interestingly, on the frequency of probe chemical release. Although involving an additional chemical signaling step, our chemical ``pinging'' hypothesis allows for greater flexibility in regulating target selection, as seen in a number of physical or biological realizations.
1507.05720
David Budden
David M. Budden and Edmund J. Crampin
Gene expression modelling across multiple cell-lines with MapReduce
10 pages, 3 figures
BMC Bioinformatics 2016 17:446
10.1186/s12859-016-1313-1
null
q-bio.QM cs.DC q-bio.GN stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the wealth of high-throughput sequencing data generated by recent large-scale consortia, predictive gene expression modelling has become an important tool for integrative analysis of transcriptomic and epigenetic data. However, sequencing data-sets are characteristically large, and previous modelling frameworks are typically inefficient and unable to leverage multi-core or distributed processing architectures. In this study, we detail an efficient and parallelised MapReduce implementation of gene expression modelling. We leverage the computational efficiency of this framework to provide an integrative analysis of over fifty histone modification data-sets across a variety of cancerous and non-cancerous cell-lines. Our results demonstrate that the genome-wide relationships between histone modifications and mRNA transcription are lineage, tissue and karyotype-invariant, and that models trained on matched epigenetic/transcriptomic data from non-cancerous cell-lines are able to predict cancerous expression with equivalent genome-wide fidelity.
[ { "created": "Tue, 21 Jul 2015 06:37:35 GMT", "version": "v1" } ]
2017-02-14
[ [ "Budden", "David M.", "" ], [ "Crampin", "Edmund J.", "" ] ]
With the wealth of high-throughput sequencing data generated by recent large-scale consortia, predictive gene expression modelling has become an important tool for integrative analysis of transcriptomic and epigenetic data. However, sequencing data-sets are characteristically large, and previous modelling frameworks are typically inefficient and unable to leverage multi-core or distributed processing architectures. In this study, we detail an efficient and parallelised MapReduce implementation of gene expression modelling. We leverage the computational efficiency of this framework to provide an integrative analysis of over fifty histone modification data-sets across a variety of cancerous and non-cancerous cell-lines. Our results demonstrate that the genome-wide relationships between histone modifications and mRNA transcription are lineage, tissue and karyotype-invariant, and that models trained on matched epigenetic/transcriptomic data from non-cancerous cell-lines are able to predict cancerous expression with equivalent genome-wide fidelity.
1504.03855
Irene Costantini
Irene Costantini, Jean-Pierre Ghobril, Antonino Paolo Di Giovanna, Anna Letizia Allegra Mascaro, Ludovico Silvestri, Marie Caroline M\"ullenbroich, Leonardo Onofri, Valerio Conti, Francesco Vanzi, Leonardo Sacconi, Renzo Guerrini, Henry Markram, Giulio Iannello and Francesco Saverio Pavone
A versatile clearing agent for multi-modal brain imaging
in Scientific Reports 2015
null
10.1038/srep09808
null
q-bio.NC physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extensive mapping of neuronal connections in the central nervous system requires high-throughput um-scale imaging of large volumes. In recent years, different approaches have been developed to overcome the limitations due to tissue light scattering. These methods are generally developed to improve the performance of a specific imaging modality, thus limiting comprehensive neuroanatomical exploration by multimodal optical techniques. Here, we introduce a versatile brain clearing agent (2,2'-thiodiethanol; TDE) suitable for various applications and imaging techniques. TDE is cost-efficient, water-soluble and of low viscosity and, more importantly, it preserves fluorescence, is compatible with immunostaining and does not cause deformations at the sub-cellular level. We demonstrate the effectiveness of this method in different applications: in fixed samples by imaging a whole mouse hippocampus with serial two-photon tomography; in combination with CLARITY by reconstructing an entire mouse brain with light sheet microscopy; and in translational research by imaging immunostained human dysplastic brain tissue.
[ { "created": "Wed, 15 Apr 2015 10:38:48 GMT", "version": "v1" }, { "created": "Thu, 16 Apr 2015 11:30:16 GMT", "version": "v2" } ]
2015-04-17
[ [ "Costantini", "Irene", "" ], [ "Ghobril", "Jean-Pierre", "" ], [ "Di Giovanna", "Antonino Paolo", "" ], [ "Mascaro", "Anna Letizia Allegra", "" ], [ "Silvestri", "Ludovico", "" ], [ "Müllenbroich", "Marie Caroline", "" ], [ "Onofri", "Leonardo", "" ], [ "Conti", "Valerio", "" ], [ "Vanzi", "Francesco", "" ], [ "Sacconi", "Leonardo", "" ], [ "Guerrini", "Renzo", "" ], [ "Markram", "Henry", "" ], [ "Iannello", "Giulio", "" ], [ "Pavone", "Francesco Saverio", "" ] ]
Extensive mapping of neuronal connections in the central nervous system requires high-throughput um-scale imaging of large volumes. In recent years, different approaches have been developed to overcome the limitations due to tissue light scattering. These methods are generally developed to improve the performance of a specific imaging modality, thus limiting comprehensive neuroanatomical exploration by multimodal optical techniques. Here, we introduce a versatile brain clearing agent (2,2'-thiodiethanol; TDE) suitable for various applications and imaging techniques. TDE is cost-efficient, water-soluble and of low viscosity and, more importantly, it preserves fluorescence, is compatible with immunostaining and does not cause deformations at the sub-cellular level. We demonstrate the effectiveness of this method in different applications: in fixed samples by imaging a whole mouse hippocampus with serial two-photon tomography; in combination with CLARITY by reconstructing an entire mouse brain with light sheet microscopy; and in translational research by imaging immunostained human dysplastic brain tissue.
2003.05003
Jingyuan Wang
Jingyuan Wang, Ke Tang, Kai Feng, Xin Li, Weifeng Lv, Kun Chen, Fei Wang
Impact of Temperature and Relative Humidity on the Transmission of COVID-19: A Modeling Study in China and the United States
null
BMJ Open 2021;11:e043863
10.1136/bmjopen-2020-043863
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objectives: We aim to assess the impact of temperature and relative humidity on the transmission of COVID-19 across communities after accounting for community-level factors such as demographics, socioeconomic status, and human mobility status. Design: A retrospective cross-sectional regression analysis via the Fama-MacBeth procedure is adopted. Setting: We use the data for COVID-19 daily symptom-onset cases for 100 Chinese cities and COVID-19 daily confirmed cases for 1,005 U.S. counties. Participants: A total of 69,498 cases in China and 740,843 cases in the U.S. are used for calculating the effective reproductive numbers. Primary outcome measures: Regression analysis of the impact of temperature and relative humidity on the effective reproductive number (R value). Results: Statistically significant negative correlations are found between temperature/relative humidity and the effective reproductive number (R value) in both China and the U.S. Conclusions: Higher temperature and higher relative humidity potentially suppress the transmission of COVID-19. Specifically, an increase in temperature by 1 degree Celsius is associated with a reduction in the R value of COVID-19 by 0.026 (95% CI [-0.0395,-0.0125]) in China and by 0.020 (95% CI [-0.0311, -0.0096]) in the U.S.; an increase in relative humidity by 1% is associated with a reduction in the R value by 0.0076 (95% CI [-0.0108,-0.0045]) in China and by 0.0080 (95% CI [-0.0150,-0.0010]) in the U.S. Therefore, the potential impact of temperature/relative humidity on the effective reproductive number alone is not strong enough to stop the pandemic.
[ { "created": "Mon, 9 Mar 2020 17:43:50 GMT", "version": "v1" }, { "created": "Fri, 13 Mar 2020 06:25:25 GMT", "version": "v2" }, { "created": "Fri, 3 Apr 2020 17:44:34 GMT", "version": "v3" }, { "created": "Fri, 22 May 2020 09:25:58 GMT", "version": "v4" }, { "created": "Sun, 30 May 2021 11:29:56 GMT", "version": "v5" } ]
2021-06-01
[ [ "Wang", "Jingyuan", "" ], [ "Tang", "Ke", "" ], [ "Feng", "Kai", "" ], [ "Li", "Xin", "" ], [ "Lv", "Weifeng", "" ], [ "Chen", "Kun", "" ], [ "Wang", "Fei", "" ] ]
Objectives: We aim to assess the impact of temperature and relative humidity on the transmission of COVID-19 across communities after accounting for community-level factors such as demographics, socioeconomic status, and human mobility status. Design: A retrospective cross-sectional regression analysis via the Fama-MacBeth procedure is adopted. Setting: We use the data for COVID-19 daily symptom-onset cases for 100 Chinese cities and COVID-19 daily confirmed cases for 1,005 U.S. counties. Participants: A total of 69,498 cases in China and 740,843 cases in the U.S. are used for calculating the effective reproductive numbers. Primary outcome measures: Regression analysis of the impact of temperature and relative humidity on the effective reproductive number (R value). Results: Statistically significant negative correlations are found between temperature/relative humidity and the effective reproductive number (R value) in both China and the U.S. Conclusions: Higher temperature and higher relative humidity potentially suppress the transmission of COVID-19. Specifically, an increase in temperature by 1 degree Celsius is associated with a reduction in the R value of COVID-19 by 0.026 (95% CI [-0.0395,-0.0125]) in China and by 0.020 (95% CI [-0.0311, -0.0096]) in the U.S.; an increase in relative humidity by 1% is associated with a reduction in the R value by 0.0076 (95% CI [-0.0108,-0.0045]) in China and by 0.0080 (95% CI [-0.0150,-0.0010]) in the U.S. Therefore, the potential impact of temperature/relative humidity on the effective reproductive number alone is not strong enough to stop the pandemic.
1309.1768
Valentina Agoni
Valentina Agoni
DNA evolved to minimize frameshift mutations
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Point mutations can certainly be dangerous, but what is worse than losing the reading frame? Did DNA evolve a strategy to limit frameshift mutations? Here we investigate whether DNA sequences effectively evolved a system to minimize frameshift mutations by analyzing the transcripts of proteins with high molecular weights.
[ { "created": "Tue, 3 Sep 2013 09:59:49 GMT", "version": "v1" } ]
2013-09-10
[ [ "Agoni", "Valentina", "" ] ]
Point mutations can certainly be dangerous, but what is worse than losing the reading frame? Did DNA evolve a strategy to limit frameshift mutations? Here we investigate whether DNA sequences effectively evolved a system to minimize frameshift mutations by analyzing the transcripts of proteins with high molecular weights.
2406.01613
Chao-Hui Huang
Chao-Hui Huang
QuST: QuPath Extension for Integrative Whole Slide Image and Spatial Transcriptomics Analysis
9 pages, 4 figures
null
null
null
q-bio.QM cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, various technologies have been introduced into digital pathology, including artificial intelligence (AI) driven methods, in both areas of pathological whole slide image (WSI) analysis and spatial transcriptomics (ST) analysis. AI-driven WSI analysis utilizes the power of deep learning (DL) and expands the field of view for histopathological image analysis. On the other hand, ST bridges the gap between tissue spatial analysis and biological signals, offering the possibility to understand spatial biology. However, a major bottleneck in DL-based WSI analysis is the preparation of training patterns, as hematoxylin & eosin (H&E) staining does not provide direct biological evidence, such as gene expression, for determining the category of a biological component. On the other hand, as of now, the resolution in ST is far beyond that of WSI, posing the challenge of further spatial analysis. Although various studies have cited the use of WSI analysis tools, including QuPath, in the context of ST analysis, their usage is primarily focused on initial image analysis, with other tools being utilized for more detailed transcriptomic analysis. As a result, the information hidden beneath the WSI has not yet been fully utilized to support ST analysis. To bridge this gap, we introduce QuST, a QuPath extension designed to bridge the gap between H&E WSI and ST analysis tasks. In this paper, we highlight the importance of integrating DL-based WSI analysis and ST analysis in understanding disease biology, as well as the challenges in integrating these modalities due to differences in data formats and analytical methods. The QuST source code is hosted on GitHub and documentation is available at (https://github.com/huangch/qust).
[ { "created": "Fri, 31 May 2024 00:08:09 GMT", "version": "v1" }, { "created": "Mon, 1 Jul 2024 19:40:18 GMT", "version": "v2" } ]
2024-07-03
[ [ "Huang", "Chao-Hui", "" ] ]
Recently, various technologies have been introduced into digital pathology, including artificial intelligence (AI) driven methods, in both areas of pathological whole slide image (WSI) analysis and spatial transcriptomics (ST) analysis. AI-driven WSI analysis utilizes the power of deep learning (DL) and expands the field of view for histopathological image analysis. On the other hand, ST bridges the gap between tissue spatial analysis and biological signals, offering the possibility to understand spatial biology. However, a major bottleneck in DL-based WSI analysis is the preparation of training patterns, as hematoxylin & eosin (H&E) staining does not provide direct biological evidence, such as gene expression, for determining the category of a biological component. On the other hand, as of now, the resolution in ST is far beyond that of WSI, posing the challenge of further spatial analysis. Although various studies have cited the use of WSI analysis tools, including QuPath, in the context of ST analysis, their usage is primarily focused on initial image analysis, with other tools being utilized for more detailed transcriptomic analysis. As a result, the information hidden beneath the WSI has not yet been fully utilized to support ST analysis. To bridge this gap, we introduce QuST, a QuPath extension designed to bridge the gap between H&E WSI and ST analysis tasks. In this paper, we highlight the importance of integrating DL-based WSI analysis and ST analysis in understanding disease biology, as well as the challenges in integrating these modalities due to differences in data formats and analytical methods. The QuST source code is hosted on GitHub and documentation is available at (https://github.com/huangch/qust).
1510.02813
Sarthok Sircar
Sarthok Sircar, Andrei Kotousov, Giang Nguyen and Anthony J. Roberts
Ligand mediated adhesive mechanics of two deformed spheres
Keywords: Bioadhesion, contact mechanics, surface deformation, binding kinetics, JKR theory, DMT theory. arXiv admin note: text overlap with arXiv:1504.05641
null
null
null
q-bio.CB cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A self-consistent model is developed to investigate the attachment/detachment kinetics of two soft, deformable microspheres with irregular surfaces and coated with flexible binding ligands. The model highlights how the microscale binding kinetics of these ligands, as well as the attractive/repulsive potential of the charged surface, affect the static deformed configuration of the spheres. It is shown that in the limit of a smooth, neutrally charged surface (i.e., the Debye length, $\kappa \rightarrow \infty $), interacting via elastic binders (i.e., the stiffness coefficient, $\lambda \rightarrow 0$), the adhesion mechanics approaches the regime of application of the JKR theory, and in this particular limit, the contact radius scales with the particle radius according to the scaling law $R_c\propto R^{\frac{2}{3}}$. We show that adhesion dominates in larger particles with a highly charged surface and with resilient binders. The normal stress distribution within the contact area varies with the binder stiffness coefficient, from a maximum at the center to a maximum at the periphery of the region. Surface heterogeneities result in diminished adhesion with a distinct reduction in the pull-off force, a larger separation gap, weaker normal stress and a limited area of adhesion. These results are in agreement with the published experimental findings.
[ { "created": "Fri, 9 Oct 2015 20:25:52 GMT", "version": "v1" } ]
2015-10-13
[ [ "Sircar", "Sarthok", "" ], [ "Kotousov", "Andrei", "" ], [ "Nguyen", "Giang", "" ], [ "Roberts", "Anthony J.", "" ] ]
A self-consistent model is developed to investigate the attachment/detachment kinetics of two soft, deformable microspheres with irregular surfaces and coated with flexible binding ligands. The model highlights how the microscale binding kinetics of these ligands, as well as the attractive/repulsive potential of the charged surface, affect the static deformed configuration of the spheres. It is shown that in the limit of a smooth, neutrally charged surface (i.e., the Debye length, $\kappa \rightarrow \infty $), interacting via elastic binders (i.e., the stiffness coefficient, $\lambda \rightarrow 0$), the adhesion mechanics approaches the regime of application of the JKR theory, and in this particular limit, the contact radius scales with the particle radius according to the scaling law $R_c\propto R^{\frac{2}{3}}$. We show that adhesion dominates in larger particles with a highly charged surface and with resilient binders. The normal stress distribution within the contact area varies with the binder stiffness coefficient, from a maximum at the center to a maximum at the periphery of the region. Surface heterogeneities result in diminished adhesion with a distinct reduction in the pull-off force, a larger separation gap, weaker normal stress and a limited area of adhesion. These results are in agreement with the published experimental findings.
1312.0851
Jane Elith
Jane Elith
Predicting distributions of invasive species
28 pages, this manuscript will later be published as a book chapter
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This chapter aims to inform a practitioner about current methods for predicting potential distributions of invasive species. It mostly addresses single species models, covering the conceptual bases, touching on mechanistic models, and then focusing on methods using species distribution records and environmental data to predict distributions. The commentary in this last section is oriented towards key issues that arise in fitting, and predicting with, these models (which include CLIMEX, MaxEnt and other regression methods). In other words, it is more about the process of thinking about the data and the modelling problem (which is a challenging one) than it is about one technique versus another. The discussion helps clarify the necessary steps and expertise for predicting distributions. Some researchers are optimistic that correlative models will predict with high precision; while that may be true for some species at some scales of evaluation, I believe that the issues discussed in this chapter show that substantial errors are reasonably likely. I am hopeful that ongoing developments will produce models better suited to the task and tools to help practitioners to better understand predictions and their uncertainties.
[ { "created": "Mon, 2 Dec 2013 04:09:06 GMT", "version": "v1" }, { "created": "Thu, 30 Jul 2015 06:08:22 GMT", "version": "v2" } ]
2015-07-31
[ [ "Elith", "Jane", "" ] ]
This chapter aims to inform a practitioner about current methods for predicting potential distributions of invasive species. It mostly addresses single species models, covering the conceptual bases, touching on mechanistic models, and then focusing on methods using species distribution records and environmental data to predict distributions. The commentary in this last section is oriented towards key issues that arise in fitting, and predicting with, these models (which include CLIMEX, MaxEnt and other regression methods). In other words, it is more about the process of thinking about the data and the modelling problem (which is a challenging one) than it is about one technique versus another. The discussion helps clarify the necessary steps and expertise for predicting distributions. Some researchers are optimistic that correlative models will predict with high precision; while that may be true for some species at some scales of evaluation, I believe that the issues discussed in this chapter show that substantial errors are reasonably likely. I am hopeful that ongoing developments will produce models better suited to the task and tools to help practitioners to better understand predictions and their uncertainties.
1204.3021
Ilmari Karonen
Ilmari Karonen
Coupling methods for efficient simulation of spatial population dynamics
9 pages, 5 figures; submitted to JTB
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Coupling is a widely used technique in the theoretical study of interacting stochastic processes. In this paper I present an example demonstrating its usefulness also in the efficient computer simulation of such processes. I first describe a basic coupling technique, applicable to all kinds of processes, which allows trading memory use for a limited speedup. Next, I describe a specialized variant of it, which can be used to speed up the simulation of certain kinds of processes satisfying a monotonicity criterion. This special algorithm increases the speed by several orders of magnitude with only a modest increase in memory usage.
[ { "created": "Fri, 13 Apr 2012 14:56:36 GMT", "version": "v1" } ]
2012-04-16
[ [ "Karonen", "Ilmari", "" ] ]
Coupling is a widely used technique in the theoretical study of interacting stochastic processes. In this paper I present an example demonstrating its usefulness also in the efficient computer simulation of such processes. I first describe a basic coupling technique, applicable to all kinds of processes, which allows trading memory use for a limited speedup. Next, I describe a specialized variant of it, which can be used to speed up the simulation of certain kinds of processes satisfying a monotonicity criterion. This special algorithm increases the speed by several orders of magnitude with only a modest increase in memory usage.
2108.04849
Julia Palacios
Julia A. Palacios, Anand Bhaskar, Filippo Disanto and Noah A. Rosenberg
Enumeration of binary trees compatible with a perfect phylogeny
30 pages, 10 figures
null
null
null
q-bio.PE math.CO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Evolutionary models used for describing molecular sequence variation suppose that at a non-recombining genomic segment, sequences share ancestry that can be represented as a genealogy--a rooted, binary, timed tree, with tips corresponding to individual sequences. Under the infinitely-many-sites mutation model, mutations are randomly superimposed along the branches of the genealogy, so that every mutation occurs at a chromosomal site that has not previously mutated; if a mutation occurs at an interior branch, then all individuals descending from that branch carry the mutation. The implication is that observed patterns of molecular variation from this model impose combinatorial constraints on the hidden state space of genealogies. In particular, observed molecular variation can be represented in the form of a perfect phylogeny, a tree structure that fully encodes the mutational differences among sequences. For a sample of n sequences, a perfect phylogeny might not possess n distinct leaves, and hence might be compatible with many possible binary tree structures that could describe the evolutionary relationships among the n sequences. Here, we investigate enumerative properties of the set of binary ranked and unranked tree shapes that are compatible with a perfect phylogeny, and hence, the binary ranked and unranked tree shapes conditioned on an observed pattern of mutations under the infinitely-many-sites mutation model. We provide a recursive enumeration of these shapes. We consider both perfect phylogenies that can be represented as binary and those that are multifurcating. The results have implications for computational aspects of the statistical inference of evolutionary parameters that underlie sets of molecular sequences.
[ { "created": "Tue, 10 Aug 2021 18:08:09 GMT", "version": "v1" } ]
2021-08-19
[ [ "Palacios", "Julia A.", "" ], [ "Bhaskar", "Anand", "" ], [ "Disanto", "Filippo", "" ], [ "Rosenberg", "Noah A.", "" ] ]
Evolutionary models used for describing molecular sequence variation suppose that at a non-recombining genomic segment, sequences share ancestry that can be represented as a genealogy--a rooted, binary, timed tree, with tips corresponding to individual sequences. Under the infinitely-many-sites mutation model, mutations are randomly superimposed along the branches of the genealogy, so that every mutation occurs at a chromosomal site that has not previously mutated; if a mutation occurs at an interior branch, then all individuals descending from that branch carry the mutation. The implication is that observed patterns of molecular variation from this model impose combinatorial constraints on the hidden state space of genealogies. In particular, observed molecular variation can be represented in the form of a perfect phylogeny, a tree structure that fully encodes the mutational differences among sequences. For a sample of n sequences, a perfect phylogeny might not possess n distinct leaves, and hence might be compatible with many possible binary tree structures that could describe the evolutionary relationships among the n sequences. Here, we investigate enumerative properties of the set of binary ranked and unranked tree shapes that are compatible with a perfect phylogeny, and hence, the binary ranked and unranked tree shapes conditioned on an observed pattern of mutations under the infinitely-many-sites mutation model. We provide a recursive enumeration of these shapes. We consider both perfect phylogenies that can be represented as binary and those that are multifurcating. The results have implications for computational aspects of the statistical inference of evolutionary parameters that underlie sets of molecular sequences.
1709.04548
Jeremy Sumner
Jonathan D. Mitchell, Jeremy G. Sumner, and Barbara R. Holland
Distinguishing between convergent evolution and violation of the molecular clock
12 pages, 3 figures
null
null
null
q-bio.PE math.ST q-bio.QM stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give a non-technical introduction to convergence-divergence models, a new modeling approach for phylogenetic data that allows for the usual divergence of species post speciation but also allows for species to converge, i.e. become more similar over time. By examining the $3$-taxon case in some detail we illustrate that phylogeneticists have been "spoiled" in the sense of not having to think about the structural parameters in their models by virtue of the strong assumption that evolution is treelike. We show that there are not always good statistical reasons to prefer the usual class of treelike models over more general convergence-divergence models. Specifically, we show that many $3$-taxon datasets can be equally well explained by supposing violation of the molecular clock due to change in the rate of evolution along different edges, or by keeping the assumption of a constant rate of evolution but instead assuming that evolution is not a purely divergent process. Given the abundance of evidence that evolution is not strictly treelike, our discussion is an illustration that as phylogeneticists we often need to think clearly about the structural form of the models we use.
[ { "created": "Wed, 13 Sep 2017 21:45:46 GMT", "version": "v1" } ]
2017-09-15
[ [ "Mitchell", "Jonathan D.", "" ], [ "Sumner", "Jeremy G.", "" ], [ "Holland", "Barbara R.", "" ] ]
We give a non-technical introduction to convergence-divergence models, a new modeling approach for phylogenetic data that allows for the usual divergence of species post speciation but also allows for species to converge, i.e. become more similar over time. By examining the $3$-taxon case in some detail we illustrate that phylogeneticists have been "spoiled" in the sense of not having to think about the structural parameters in their models by virtue of the strong assumption that evolution is treelike. We show that there are not always good statistical reasons to prefer the usual class of treelike models over more general convergence-divergence models. Specifically, we show that many $3$-taxon datasets can be equally well explained by supposing violation of the molecular clock due to change in the rate of evolution along different edges, or by keeping the assumption of a constant rate of evolution but instead assuming that evolution is not a purely divergent process. Given the abundance of evidence that evolution is not strictly treelike, our discussion is an illustration that as phylogeneticists we often need to think clearly about the structural form of the models we use.
1707.03779
Eda Koculi PhD
Anthony F.T. Moore, Aliana Lopez de Victoria and Eda Koculi
Action mechanism of DDX3X: An RNA helicase implicated in cancer propagation and viral infection
33 pages, 5 figures, 2 tables, 2 supplemental figures
null
10.1016/j.bpj.2016.11.437
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
DDX3X is a human DEAD-box RNA helicase implicated in many cancers and in viral progression. In addition to the RecA-like catalytic core, DDX3X contains N- and C-terminal domains. Here, we investigate the substrate and protein requirements to support the ATPase activity of a DDX3X construct lacking 80 residues from its C-terminal domain. Our data confirmed previous results that for an RNA molecule to support the ATPase activity of DDX3X it must contain a single-stranded-double-stranded region. We investigated protein and RNA structural reasons for this requirement. First, the RNA substrates consisting only of a double-helix were unable to support DDX3X binding. A single-stranded RNA substrate supported DDX3X binding, while an RNA substrate consisting of a single-stranded-double-stranded region not only supported the binding of DDX3X to RNA, but also promoted DDX3X trimer formation. Thus, the single-stranded-double-stranded RNA region is needed for DDX3X trimer formation, and trimer formation is required for ATPase activity. Interestingly, the dependence of ATP hydrolysis on the protein concentration suggests that the DDX3X trimer hydrolyzes only two molecules of ATP. Lastly, a DNA substrate that contains single-stranded-double-stranded regions does not support the ATPase activity of DDX3X.
[ { "created": "Wed, 12 Jul 2017 16:03:09 GMT", "version": "v1" } ]
2017-08-23
[ [ "Moore", "Anthony F. T.", "" ], [ "de Victoria", "Aliana Lopez", "" ], [ "Koculi", "Eda", "" ] ]
DDX3X is a human DEAD-box RNA helicase implicated in many cancers and in viral progression. In addition to the RecA-like catalytic core, DDX3X contains N- and C-terminal domains. Here, we investigate the substrate and protein requirements to support the ATPase activity of a DDX3X construct lacking 80 residues from its C-terminal domain. Our data confirmed previous results that for an RNA molecule to support the ATPase activity of DDX3X it must contain a single-stranded-double-stranded region. We investigated protein and RNA structural reasons for this requirement. First, the RNA substrates consisting only of a double-helix were unable to support DDX3X binding. A single-stranded RNA substrate supported DDX3X binding, while an RNA substrate consisting of a single-stranded-double-stranded region not only supported the binding of DDX3X to RNA, but also promoted DDX3X trimer formation. Thus, the single-stranded-double-stranded RNA region is needed for DDX3X trimer formation, and trimer formation is required for ATPase activity. Interestingly, the dependence of ATP hydrolysis on the protein concentration suggests that the DDX3X trimer hydrolyzes only two molecules of ATP. Lastly, a DNA substrate that contains single-stranded-double-stranded regions does not support the ATPase activity of DDX3X.
2404.10545
Mahnoor Gondal
Mahnoor N. Gondal, Saad Ur Rehman Shah, Arul M. Chinnaiyan, Marcin Cieslik
A Systematic Overview of Single-Cell Transcriptomics Databases, their Use cases, and Limitations
17 pages, 1 figure, 1 table
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Rapid advancements in high-throughput single-cell RNA-seq (scRNA-seq) technologies and experimental protocols have led to the generation of vast amounts of genomic data that populate several online databases and repositories. Here, we systematically examined large-scale scRNA-seq databases, categorizing them based on their scope and purpose, such as general databases, tissue-specific databases, disease-specific databases, cancer-focused databases, and cell type-focused databases. Next, we discuss the technical and methodological challenges associated with curating large-scale scRNA-seq databases, along with current computational solutions. We argue that understanding scRNA-seq databases, including their limitations and assumptions, is crucial for effectively utilizing this data to make robust discoveries and identify novel biological insights. Furthermore, we propose that bridging the gap between computational and wet lab scientists through user-friendly web-based platforms is needed for democratizing access to single-cell data. These platforms would facilitate interdisciplinary research, enabling researchers from various disciplines to collaborate effectively. This review underscores the importance of leveraging computational approaches to unravel the complexities of single-cell data and offers a promising direction for future research in the field.
[ { "created": "Mon, 15 Apr 2024 17:16:29 GMT", "version": "v1" } ]
2024-04-17
[ [ "Gondal", "Mahnoor N.", "" ], [ "Shah", "Saad Ur Rehman", "" ], [ "Chinnaiyan", "Arul M.", "" ], [ "Cieslik", "Marcin", "" ] ]
Rapid advancements in high-throughput single-cell RNA-seq (scRNA-seq) technologies and experimental protocols have led to the generation of vast amounts of genomic data that populate several online databases and repositories. Here, we systematically examined large-scale scRNA-seq databases, categorizing them based on their scope and purpose, such as general databases, tissue-specific databases, disease-specific databases, cancer-focused databases, and cell type-focused databases. Next, we discuss the technical and methodological challenges associated with curating large-scale scRNA-seq databases, along with current computational solutions. We argue that understanding scRNA-seq databases, including their limitations and assumptions, is crucial for effectively utilizing this data to make robust discoveries and identify novel biological insights. Furthermore, we propose that bridging the gap between computational and wet lab scientists through user-friendly web-based platforms is needed for democratizing access to single-cell data. These platforms would facilitate interdisciplinary research, enabling researchers from various disciplines to collaborate effectively. This review underscores the importance of leveraging computational approaches to unravel the complexities of single-cell data and offers a promising direction for future research in the field.
0912.1548
Kenneth Ho
Kenneth L. Ho, Heather A. Harrington
Bistability in Apoptosis by Receptor Clustering
Accepted by PLoS Comput Biol
PLoS Comput. Biol. 6 (10): e1000956, 2010
10.1371/journal.pcbi.1000956
null
q-bio.CB q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Apoptosis is a highly regulated cell death mechanism involved in many physiological processes. A key component of extrinsically activated apoptosis is the death receptor Fas, which, on binding to its cognate ligand FasL, oligomerizes to form the death-inducing signaling complex. Motivated by recent experimental data, we propose a mathematical model of death ligand-receptor dynamics where FasL acts as a clustering agent for Fas, which forms locally stable signaling platforms through proximity-induced receptor interactions. Significantly, the model exhibits hysteresis, providing an upstream mechanism for bistability and robustness. At low receptor concentrations, the bistability is contingent on the trimerism of FasL. Moreover, irreversible bistability, representing a committed cell death decision, emerges at high concentrations, which may be achieved through receptor pre-association or localization onto membrane lipid rafts. Thus, our model provides a novel theory for these observed biological phenomena within the unified context of bistability. Importantly, as Fas interactions initiate the extrinsic apoptotic pathway, our model also suggests a mechanism by which cells may function as bistable life/death switches independently of any such dynamics in their downstream components. Our results highlight the role of death receptors in deciding cell fate and add to the signal processing capabilities attributed to receptor clustering.
[ { "created": "Tue, 8 Dec 2009 17:24:33 GMT", "version": "v1" }, { "created": "Wed, 9 Dec 2009 03:46:24 GMT", "version": "v2" }, { "created": "Wed, 8 Sep 2010 22:50:04 GMT", "version": "v3" } ]
2014-04-10
[ [ "Ho", "Kenneth L.", "" ], [ "Harrington", "Heather A.", "" ] ]
Apoptosis is a highly regulated cell death mechanism involved in many physiological processes. A key component of extrinsically activated apoptosis is the death receptor Fas, which, on binding to its cognate ligand FasL, oligomerizes to form the death-inducing signaling complex. Motivated by recent experimental data, we propose a mathematical model of death ligand-receptor dynamics where FasL acts as a clustering agent for Fas, which forms locally stable signaling platforms through proximity-induced receptor interactions. Significantly, the model exhibits hysteresis, providing an upstream mechanism for bistability and robustness. At low receptor concentrations, the bistability is contingent on the trimerism of FasL. Moreover, irreversible bistability, representing a committed cell death decision, emerges at high concentrations, which may be achieved through receptor pre-association or localization onto membrane lipid rafts. Thus, our model provides a novel theory for these observed biological phenomena within the unified context of bistability. Importantly, as Fas interactions initiate the extrinsic apoptotic pathway, our model also suggests a mechanism by which cells may function as bistable life/death switches independently of any such dynamics in their downstream components. Our results highlight the role of death receptors in deciding cell fate and add to the signal processing capabilities attributed to receptor clustering.
2212.06501
Valeriia Demareva
Valeriia Demareva, Valeriia Viakhireva, Irina Zayceva, Inna Isakova, Yana Okhrimchuk, Karina Zueva, Andrey Demarev, Marina Zhukova, Nikolay Nazarov, and Julia Edelevaa
Temporal dynamics of subjective sleepiness: A convergence analysis of two scales
36 pages, 6 figures, 3 tables
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While sleepiness assessment metrics were initially developed in medical research to study the effects of drugs on sleep, subjective sleepiness assessment is now widely used in both fundamental and applied studies. The Stanford Sleepiness Scale (SSS) and the Karolinska Sleepiness Scale (KSS) are often considered the gold standard in sleepiness research. However, only a few studies have applied both scales, and their convergence and specific features have not been sufficiently investigated. The present study aims to analyse the dynamics and convergence of subjective sleepiness as measured by the KSS and SSS in a population of adults. To achieve this, we present the Subjective Sleepiness Dynamics Dataset (SSDD), which collects evening and morning data on situational subjective sleepiness. A total of 208 adults participated in the experiment. Our findings suggest that sleepiness generally increased from the evening till night and was highest early in the morning. The SSS score appeared to be more sensitive to certain factors, such as the presence of a sleep disorder. The SSS and KSS scores strongly correlated with each other and converged on sleepiness assessment. However, the KSS showed a more even distribution of scores than the SSS. Currently, we are continuously expanding the SSDD.
[ { "created": "Tue, 13 Dec 2022 11:28:55 GMT", "version": "v1" }, { "created": "Wed, 22 Feb 2023 16:29:31 GMT", "version": "v2" }, { "created": "Wed, 1 Mar 2023 10:15:03 GMT", "version": "v3" } ]
2023-03-02
[ [ "Demareva", "Valeriia", "" ], [ "Viakhireva", "Valeriia", "" ], [ "Zayceva", "Irina", "" ], [ "Isakova", "Inna", "" ], [ "Okhrimchuk", "Yana", "" ], [ "Zueva", "Karina", "" ], [ "Demarev", "Andrey", "" ], [ "Zhukova", "Marina", "" ], [ "Nazarov", "Nikolay", "" ], [ "Edelevaa", "Julia", "" ] ]
While sleepiness assessment metrics were initially developed in medical research to study the effects of drugs on sleep, subjective sleepiness assessment is now widely used in both fundamental and applied studies. The Stanford Sleepiness Scale (SSS) and the Karolinska Sleepiness Scale (KSS) are often considered the gold standard in sleepiness research. However, only a few studies have applied both scales, and their convergence and specific features have not been sufficiently investigated. The present study aims to analyse the dynamics and convergence of subjective sleepiness as measured by the KSS and SSS in a population of adults. To achieve this, we present the Subjective Sleepiness Dynamics Dataset (SSDD), which collects evening and morning data on situational subjective sleepiness. A total of 208 adults participated in the experiment. Our findings suggest that sleepiness generally increased from the evening till night and was highest early in the morning. The SSS score appeared to be more sensitive to certain factors, such as the presence of a sleep disorder. The SSS and KSS scores strongly correlated with each other and converged on sleepiness assessment. However, the KSS showed a more even distribution of scores than the SSS. Currently, we are continuously expanding the SSDD.
q-bio/0512025
Robersy Sanchez
Robersy Sanchez and Ricardo Grau
The energy cost of protein messages lead to a new protein information law
13 pages, 1 figure. Article process
null
null
null
q-bio.BM q-bio.QM
null
By considering the energy cost of messages carried by proteins as proportional to their information content we found experimental proof that proteins from all living organisms tend to have their estimated semantic content of information per unit mass, statistically, close to a constant. Thus, in the message carried by proteins -to achieve minimum energy waste- the rate of information content per unit mass tends to be optimized in living organisms. The experimental evidence of this new information law resembles a marathon where highly optimized proteins correspond to advanced runners followed by a main bunch and the stragglers -lowly optimized proteins. Our results suggest the existence of a continuous optimization process that living organisms had to face, in which a compromise between biological functionality, economic feasibility and the survival requirements is established.
[ { "created": "Sun, 11 Dec 2005 01:55:06 GMT", "version": "v1" } ]
2007-05-23
[ [ "Sanchez", "Robersy", "" ], [ "Grau", "Ricardo", "" ] ]
By considering the energy cost of messages carried by proteins as proportional to their information content we found experimental proof that proteins from all living organisms tend to have their estimated semantic content of information per unit mass, statistically, close to a constant. Thus, in the message carried by proteins -to achieve minimum energy waste- the rate of information content per unit mass tends to be optimized in living organisms. The experimental evidence of this new information law resembles a marathon where highly optimized proteins correspond to advanced runners followed by a main bunch and the stragglers -lowly optimized proteins. Our results suggest the existence of a continuous optimization process that living organisms had to face, in which a compromise between biological functionality, economic feasibility and the survival requirements is established.
1512.07437
Adam Gudy\'s
Adam Gudys and Sebastian Deorowicz
QuickProbs 2: towards rapid construction of high-quality alignments of large protein families
12 pages, 7 figures
null
10.1038/srep41553
null
q-bio.QM cs.CE cs.DC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increasing size of sequence databases, caused by the development of high-throughput sequencing, poses one of the greatest challenges yet to multiple alignment algorithms. As we show, well-established techniques employed for increasing alignment quality, i.e., refinement and consistency, are ineffective when large protein families are of interest. We present QuickProbs 2, an algorithm for multiple sequence alignment. Based on probabilistic models, equipped with novel column-oriented refinement and selective consistency, it offers outstanding accuracy. When analysing hundreds of sequences, QuickProbs 2 is significantly better than Clustal Omega, the previous leader for processing numerous protein families. In the case of smaller sets, for which consistency-based methods are the best performing, QuickProbs 2 is also superior to the competitors. Due to the computational scalability of selective consistency and utilisation of massively parallel architectures, the presented algorithm is comparable to Clustal Omega in terms of execution time, and orders of magnitude faster than full consistency approaches, like MSAProbs or PicXAA. All these make QuickProbs 2 a useful tool for aligning families ranging from few, to hundreds of proteins. QuickProbs 2 is available at https://github.com/refresh-bio/QuickProbs.
[ { "created": "Wed, 23 Dec 2015 11:29:18 GMT", "version": "v1" }, { "created": "Tue, 30 Aug 2016 18:02:10 GMT", "version": "v2" } ]
2018-05-28
[ [ "Gudys", "Adam", "" ], [ "Deorowicz", "Sebastian", "" ] ]
The increasing size of sequence databases, caused by the development of high-throughput sequencing, poses one of the greatest challenges yet to multiple alignment algorithms. As we show, well-established techniques employed for increasing alignment quality, i.e., refinement and consistency, are ineffective when large protein families are of interest. We present QuickProbs 2, an algorithm for multiple sequence alignment. Based on probabilistic models, equipped with novel column-oriented refinement and selective consistency, it offers outstanding accuracy. When analysing hundreds of sequences, QuickProbs 2 is significantly better than Clustal Omega, the previous leader for processing numerous protein families. In the case of smaller sets, for which consistency-based methods are the best performing, QuickProbs 2 is also superior to the competitors. Due to the computational scalability of selective consistency and utilisation of massively parallel architectures, the presented algorithm is comparable to Clustal Omega in terms of execution time, and orders of magnitude faster than full consistency approaches, like MSAProbs or PicXAA. All these make QuickProbs 2 a useful tool for aligning families ranging from few, to hundreds of proteins. QuickProbs 2 is available at https://github.com/refresh-bio/QuickProbs.
1801.04515
Ueli Rutishauser
Ueli Rutishauser, Jean-Jacques Slotine, Rodney J. Douglas
Solving constraint-satisfaction problems with distributed neocortical-like neuronal networks
Accepted manuscript, in press, Neural Computation (2018)
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and Sudoku on this substrate, and provide mathematical proofs that guarantee these graph coloring problems will converge to a solution. The network is composed of non-saturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space driven through the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by non-linear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation, and also offer insight into the computational role of dual inhibitory mechanisms in neural circuits.
[ { "created": "Sun, 14 Jan 2018 05:44:36 GMT", "version": "v1" } ]
2018-01-16
[ [ "Rutishauser", "Ueli", "" ], [ "Slotine", "Jean-Jacques", "" ], [ "Douglas", "Rodney J.", "" ] ]
Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and Sudoku on this substrate, and provide mathematical proofs that guarantee these graph coloring problems will converge to a solution. The network is composed of non-saturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space driven through the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by non-linear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation, and also offer insight into the computational role of dual inhibitory mechanisms in neural circuits.
0907.3840
Anna Ochab-Marcinek
Anna Ochab-Marcinek, Ewa Gudowska-Nowak, Elena Nasonova, Sylvia Ritter
Modelling radiation-induced cell cycle delays
19 pages, 11 figures, accepted for publication in Radiation and Environmental Biophysics
Published online: Radiation and Environmental Biophysics, 2009. The original publication is available at http://www.springerlink.com/content/l33041651m10815w/
10.1007/s00411-009-0239-7
null
q-bio.CB q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ionizing radiation is known to delay the cell cycle progression. In particular, after particle exposure, significant delays have been observed, and it has been shown that the extent of delay affects the expression of damage such as chromosome aberrations. Thus, to predict how cells respond to ionizing radiation and to derive reliable estimates of radiation risks, information about radiation-induced cell cycle perturbations is required. In the present study we describe and apply a method for retrieval of information about the time-course of all cell cycle phases from experimental data on the mitotic index only. We study the progression of mammalian cells through the cell cycle after exposure. The analysis reveals a prolonged block of damaged cells in the G2 phase. Furthermore, by performing an error analysis on simulated data valuable information for the design of experimental studies has been obtained. The analysis showed that the number of cells analyzed in an experimental sample should be at least 100 to obtain a relative error less than 20%.
[ { "created": "Wed, 22 Jul 2009 13:20:31 GMT", "version": "v1" } ]
2009-08-24
[ [ "Ochab-Marcinek", "Anna", "" ], [ "Gudowska-Nowak", "Ewa", "" ], [ "Nasonova", "Elena", "" ], [ "Ritter", "Sylvia", "" ] ]
Ionizing radiation is known to delay the cell cycle progression. In particular, after particle exposure, significant delays have been observed, and it has been shown that the extent of delay affects the expression of damage such as chromosome aberrations. Thus, to predict how cells respond to ionizing radiation and to derive reliable estimates of radiation risks, information about radiation-induced cell cycle perturbations is required. In the present study we describe and apply a method for retrieval of information about the time-course of all cell cycle phases from experimental data on the mitotic index only. We study the progression of mammalian cells through the cell cycle after exposure. The analysis reveals a prolonged block of damaged cells in the G2 phase. Furthermore, by performing an error analysis on simulated data valuable information for the design of experimental studies has been obtained. The analysis showed that the number of cells analyzed in an experimental sample should be at least 100 to obtain a relative error less than 20%.
2312.15665
Yongkang Wang
Yongkang Wang, Xuan Liu, Feng Huang, Zhankun Xiong, Wen Zhang
A Multi-Modal Contrastive Diffusion Model for Therapeutic Peptide Generation
This paper is accepted by AAAI 2024
null
null
null
q-bio.QM cs.LG q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Therapeutic peptides represent a unique class of pharmaceutical agents crucial for the treatment of human diseases. Recently, deep generative models have exhibited remarkable potential for generating therapeutic peptides, but they only utilize sequence or structure information alone, which hinders the performance in generation. In this study, we propose a Multi-Modal Contrastive Diffusion model (MMCD), fusing both sequence and structure modalities in a diffusion framework to co-generate novel peptide sequences and structures. Specifically, MMCD constructs the sequence-modal and structure-modal diffusion models, respectively, and devises a multi-modal contrastive learning strategy with inter-contrastive and intra-contrastive in each diffusion timestep, aiming to capture the consistency between two modalities and boost model performance. The inter-contrastive aligns sequences and structures of peptides by maximizing the agreement of their embeddings, while the intra-contrastive differentiates therapeutic and non-therapeutic peptides by maximizing the disagreement of their sequence/structure embeddings simultaneously. The extensive experiments demonstrate that MMCD performs better than other state-of-the-art deep generative methods in generating therapeutic peptides across various metrics, including antimicrobial/anticancer score, diversity, and peptide-docking.
[ { "created": "Mon, 25 Dec 2023 09:20:26 GMT", "version": "v1" }, { "created": "Thu, 4 Jan 2024 02:32:33 GMT", "version": "v2" } ]
2024-01-05
[ [ "Wang", "Yongkang", "" ], [ "Liu", "Xuan", "" ], [ "Huang", "Feng", "" ], [ "Xiong", "Zhankun", "" ], [ "Zhang", "Wen", "" ] ]
Therapeutic peptides represent a unique class of pharmaceutical agents crucial for the treatment of human diseases. Recently, deep generative models have exhibited remarkable potential for generating therapeutic peptides, but they only utilize sequence or structure information alone, which hinders the performance in generation. In this study, we propose a Multi-Modal Contrastive Diffusion model (MMCD), fusing both sequence and structure modalities in a diffusion framework to co-generate novel peptide sequences and structures. Specifically, MMCD constructs the sequence-modal and structure-modal diffusion models, respectively, and devises a multi-modal contrastive learning strategy with inter-contrastive and intra-contrastive in each diffusion timestep, aiming to capture the consistency between two modalities and boost model performance. The inter-contrastive aligns sequences and structures of peptides by maximizing the agreement of their embeddings, while the intra-contrastive differentiates therapeutic and non-therapeutic peptides by maximizing the disagreement of their sequence/structure embeddings simultaneously. The extensive experiments demonstrate that MMCD performs better than other state-of-the-art deep generative methods in generating therapeutic peptides across various metrics, including antimicrobial/anticancer score, diversity, and peptide-docking.
1211.5646
Michael Deem
Jeong-Man Park, Liang Ren Niestemski, and Michael W. Deem
Quasispecies Theory for Evolution of Modularity
21 pages, 4 figures; presentation reordered; to appear in Phys. Rev. E
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological systems are modular, and this modularity evolves over time and in different environments. A number of observations have been made of increased modularity in biological systems under increased environmental pressure. We here develop a quasispecies theory for the dynamics of modularity in populations of these systems. We show how the steady-state fitness in a randomly changing environment can be computed. We derive a fluctuation dissipation relation for the rate of change of modularity and use it to derive a relationship between rate of environmental changes and rate of growth of modularity. We also find a principle of least action for the evolved modularity at steady state. Finally, we compare our predictions to simulations of protein evolution and find them to be consistent.
[ { "created": "Sat, 24 Nov 2012 04:37:20 GMT", "version": "v1" }, { "created": "Fri, 1 Aug 2014 22:33:43 GMT", "version": "v2" }, { "created": "Fri, 29 Aug 2014 20:35:54 GMT", "version": "v3" }, { "created": "Mon, 5 Jan 2015 23:34:19 GMT", "version": "v4" } ]
2015-01-07
[ [ "Park", "Jeong-Man", "" ], [ "Niestemski", "Liang Ren", "" ], [ "Deem", "Michael W.", "" ] ]
Biological systems are modular, and this modularity evolves over time and in different environments. A number of observations have been made of increased modularity in biological systems under increased environmental pressure. We here develop a quasispecies theory for the dynamics of modularity in populations of these systems. We show how the steady-state fitness in a randomly changing environment can be computed. We derive a fluctuation dissipation relation for the rate of change of modularity and use it to derive a relationship between rate of environmental changes and rate of growth of modularity. We also find a principle of least action for the evolved modularity at steady state. Finally, we compare our predictions to simulations of protein evolution and find them to be consistent.
1303.1018
Tilo Schwalger
Tilo Schwalger and Benjamin Lindner
Patterns of interval correlations in neural oscillators with adaptation
Without the typo occurring in the journal publication (page 2, symbol $v_T$ has been misprinted in the reference below)
Front. Comput. Neurosci. 2013; 7:164
10.3389/fncom.2013.00164
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural firing is often subject to negative feedback by adaptation currents. These currents can induce strong correlations among the time intervals between spikes. Here we study analytically the interval correlations of a broad class of noisy neural oscillators with spike-triggered adaptation of arbitrary strength and time scale. Our weak-noise theory provides a general relation between the correlations and the phase-response curve (PRC) of the oscillator, proves anti-correlations between neighboring intervals for adapting neurons with type I PRC and identifies a single order parameter that determines the qualitative pattern of correlations. Monotonically decaying or oscillating correlation structures can be related to qualitatively different voltage traces after spiking, which can be explained by the phase plane geometry. At high firing rates, the long-term variability of the spike train associated with the cumulative interval correlations becomes small, independent of model details. Our results are verified by comparison with stochastic simulations of the exponential, leaky, and generalized integrate-and-fire models with adaptation.
[ { "created": "Tue, 5 Mar 2013 12:45:24 GMT", "version": "v1" }, { "created": "Wed, 26 Apr 2017 07:08:30 GMT", "version": "v2" } ]
2017-04-27
[ [ "Schwalger", "Tilo", "" ], [ "Lindner", "Benjamin", "" ] ]
Neural firing is often subject to negative feedback by adaptation currents. These currents can induce strong correlations among the time intervals between spikes. Here we study analytically the interval correlations of a broad class of noisy neural oscillators with spike-triggered adaptation of arbitrary strength and time scale. Our weak-noise theory provides a general relation between the correlations and the phase-response curve (PRC) of the oscillator, proves anti-correlations between neighboring intervals for adapting neurons with type I PRC and identifies a single order parameter that determines the qualitative pattern of correlations. Monotonically decaying or oscillating correlation structures can be related to qualitatively different voltage traces after spiking, which can be explained by the phase plane geometry. At high firing rates, the long-term variability of the spike train associated with the cumulative interval correlations becomes small, independent of model details. Our results are verified by comparison with stochastic simulations of the exponential, leaky, and generalized integrate-and-fire models with adaptation.