Dataset schema (column name, type, and length range or number of classes):

id:             stringlengths (9 to 13)
submitter:      stringlengths (4 to 48)
authors:        stringlengths (4 to 9.62k)
title:          stringlengths (4 to 343)
comments:       stringlengths (2 to 480)
journal-ref:    stringlengths (9 to 309)
doi:            stringlengths (12 to 138)
report-no:      stringclasses (277 values)
categories:     stringlengths (8 to 87)
license:        stringclasses (9 values)
orig_abstract:  stringlengths (27 to 3.76k)
versions:       listlengths (1 to 15)
update_date:    stringlengths (10 to 10)
authors_parsed: listlengths (1 to 147)
abstract:       stringlengths (24 to 3.75k)
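The column listing above implies one record per row with these fifteen fields. A minimal sketch in Python of how such a record can be handled, assuming each record is a plain dict keyed by the column names (values copied from the first row below; the abstract fields and the dataset's actual storage format are omitted here):

```python
# One record of the metadata schema, as a plain dict (a sketch, not the
# dataset's own loader; values taken from the first row of this dump).
record = {
    "id": "1201.3884",
    "submitter": "Francesc Rossello",
    "authors": "Arnau Mir, Francesc Rossello",
    "title": "Two results on expected values of imbalance indices of phylogenetic trees",
    "comments": "11 pages",
    "journal-ref": None,
    "doi": None,
    "report-no": None,
    "categories": "q-bio.PE",
    "license": "http://creativecommons.org/licenses/by/3.0/",
    "versions": [{"created": "Wed, 18 Jan 2012 19:16:27 GMT", "version": "v1"}],
    "update_date": "2012-01-19",
    "authors_parsed": [["Mir", "Arnau", ""], ["Rossello", "Francesc", ""]],
}

# "categories" is a space-separated list of arXiv category codes;
# the first entry is conventionally the primary category.
primary_category = record["categories"].split()[0]

# "authors_parsed" stores [last, first, suffix] triples; rebuild display names.
names = [f"{first} {last}".strip() for last, first, _ in record["authors_parsed"]]

print(primary_category)  # q-bio.PE
print(names)             # ['Arnau Mir', 'Francesc Rossello']
```

This also illustrates why the schema reports `listlengths` for `versions` and `authors_parsed`: both are JSON arrays whose length varies per record (1 to 15 and 1 to 147 above).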
id: 1201.3884
submitter: Francesc Rosselló
authors: Arnau Mir, Francesc Rossello
title: Two results on expected values of imbalance indices of phylogenetic trees
comments: 11 pages
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE
license: http://creativecommons.org/licenses/by/3.0/
orig_abstract: We compute an explicit formula for the expected value of the Colless index of a phylogenetic tree generated under the Yule model, and an explicit formula for the expected value of the Sackin index of a phylogenetic tree generated under the uniform model.
versions: [ { "created": "Wed, 18 Jan 2012 19:16:27 GMT", "version": "v1" } ]
update_date: 2012-01-19
authors_parsed: [ [ "Mir", "Arnau", "" ], [ "Rossello", "Francesc", "" ] ]
abstract: We compute an explicit formula for the expected value of the Colless index of a phylogenetic tree generated under the Yule model, and an explicit formula for the expected value of the Sackin index of a phylogenetic tree generated under the uniform model.
id: 1202.1581
submitter: Christos Skiadas H
authors: Christos H. Skiadas
title: The Health State Function, the Force of Mortality and other Characteristics resulting from the First Exit Time Theory applied to Life Table Data
comments: 20 pages, 12 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: In this paper we summarize the main parts of the first exit time theory developed in connection to the life table data and the resulting theoretical and applied issues. Several new tools arise from the development of this theory and especially the Health State Function and some important characteristics of this function. Special attention has being done in the presentation of the health state function along with the well established theory for the Force of Mortality and the related applications as are the life tables and the estimation of life expectancies. A main part of this work is the formulation of the appropriate non-linear analysis program including a model which provides an almost perfect fit to life table data. This model, proposed in 1995 is now expanded as to include the mortality excess for the age group from 15-30 years. A version of the program is given in Excel and provided at the website: http://www.cmsim.net
versions: [ { "created": "Wed, 8 Feb 2012 02:15:40 GMT", "version": "v1" } ]
update_date: 2012-02-09
authors_parsed: [ [ "Skiadas", "Christos H.", "" ] ]
abstract: In this paper we summarize the main parts of the first exit time theory developed in connection with life table data and the resulting theoretical and applied issues. Several new tools arise from the development of this theory, especially the Health State Function and some important characteristics of this function. Special attention is given to the presentation of the health state function, along with the well-established theory for the Force of Mortality and related applications such as life tables and the estimation of life expectancies. A main part of this work is the formulation of the appropriate non-linear analysis program, including a model which provides an almost perfect fit to life table data. This model, proposed in 1995, is now expanded to include the mortality excess for the age group from 15 to 30 years. A version of the program is given in Excel and provided at the website: http://www.cmsim.net
id: 1908.07935
submitter: Catarina Moreira
authors: Lauren Fell and Shahram Dehdashti and Peter Bruza and Catarina Moreira
title: An Experimental Protocol to Derive and Validate a Quantum Model of Decision-Making
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC quant-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: This study utilises an experiment famous in quantum physics, the Stern-Gerlach experiment, to inform the structure of an experimental protocol from which a quantum cognitive decision model can be developed. The 'quantumness' of this model is tested by computing a discrete quasi-probabilistic Wigner function. Based on theory from quantum physics, our hypothesis is that the Stern-Gerlach protocol will admit negative values in the Wigner function, thus signalling that the cognitive decision model is quantum. A crowdsourced experiment of two images was used to collect decisions around three questions related to image trustworthiness. The resultant data was used to instantiate the quantum model and compute the Wigner function. Negative values in the Wigner functions of both images were encountered, thus substantiating our hypothesis. Findings also revealed that the quantum cognitive model was a more accurate predictor of decisions when compared to predictions computed using Bayes' rule.
versions: [ { "created": "Tue, 30 Jul 2019 10:21:54 GMT", "version": "v1" } ]
update_date: 2019-08-22
authors_parsed: [ [ "Fell", "Lauren", "" ], [ "Dehdashti", "Shahram", "" ], [ "Bruza", "Peter", "" ], [ "Moreira", "Catarina", "" ] ]
abstract: This study utilises an experiment famous in quantum physics, the Stern-Gerlach experiment, to inform the structure of an experimental protocol from which a quantum cognitive decision model can be developed. The 'quantumness' of this model is tested by computing a discrete quasi-probabilistic Wigner function. Based on theory from quantum physics, our hypothesis is that the Stern-Gerlach protocol will admit negative values in the Wigner function, thus signalling that the cognitive decision model is quantum. A crowdsourced experiment of two images was used to collect decisions around three questions related to image trustworthiness. The resultant data was used to instantiate the quantum model and compute the Wigner function. Negative values in the Wigner functions of both images were encountered, thus substantiating our hypothesis. Findings also revealed that the quantum cognitive model was a more accurate predictor of decisions when compared to predictions computed using Bayes' rule.
id: 2012.05716
submitter: Jake Taylor-King
authors: Thomas Gaudelet, Ben Day, Arian R. Jamasb, Jyothish Soman, Cristian Regep, Gertrude Liu, Jeremy B. R. Hayter, Richard Vickers, Charles Roberts, Jian Tang, David Roblin, Tom L. Blundell, Michael M. Bronstein, Jake P. Taylor-King
title: Utilising Graph Machine Learning within Drug Discovery and Development
comments: 19 pages, 7 figures, 2 tables
journal-ref: null
doi: null
report-no: null
categories: q-bio.QM cs.LG
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
orig_abstract: Graph Machine Learning (GML) is receiving growing interest within the pharmaceutical and biotechnology industries for its ability to model biomolecular structures, the functional relationships between them, and integrate multi-omic datasets - amongst other data types. Herein, we present a multidisciplinary academic-industrial review of the topic within the context of drug discovery and development. After introducing key terms and modelling approaches, we move chronologically through the drug development pipeline to identify and summarise work incorporating: target identification, design of small molecules and biologics, and drug repurposing. Whilst the field is still emerging, key milestones including repurposed drugs entering in vivo studies, suggest graph machine learning will become a modelling framework of choice within biomedical machine learning.
versions: [ { "created": "Wed, 9 Dec 2020 10:12:33 GMT", "version": "v1" }, { "created": "Wed, 10 Feb 2021 17:13:24 GMT", "version": "v2" } ]
update_date: 2021-02-11
authors_parsed: [ [ "Gaudelet", "Thomas", "" ], [ "Day", "Ben", "" ], [ "Jamasb", "Arian R.", "" ], [ "Soman", "Jyothish", "" ], [ "Regep", "Cristian", "" ], [ "Liu", "Gertrude", "" ], [ "Hayter", "Jeremy B. R.", "" ], [ "Vickers", "Richard", "" ], [ "Roberts", "Charles", "" ], [ "Tang", "Jian", "" ], [ "Roblin", "David", "" ], [ "Blundell", "Tom L.", "" ], [ "Bronstein", "Michael M.", "" ], [ "Taylor-King", "Jake P.", "" ] ]
abstract: Graph Machine Learning (GML) is receiving growing interest within the pharmaceutical and biotechnology industries for its ability to model biomolecular structures, the functional relationships between them, and to integrate multi-omic datasets, amongst other data types. Herein, we present a multidisciplinary academic-industrial review of the topic within the context of drug discovery and development. After introducing key terms and modelling approaches, we move chronologically through the drug development pipeline to identify and summarise work incorporating: target identification, design of small molecules and biologics, and drug repurposing. Whilst the field is still emerging, key milestones, including repurposed drugs entering in vivo studies, suggest that graph machine learning will become a modelling framework of choice within biomedical machine learning.
id: 2102.07653
submitter: Charalambos Chrysostomou
authors: Charalambos Chrysostomou, Harris Partaourides and Huseyin Seker
title: Prediction of Influenza A virus infections in humans using an Artificial Neural Network learning approach
comments: null
journal-ref: 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 1186--1189, 2017, IEEE
doi: 10.1109/EMBC.2017.8037042
report-no: null
categories: q-bio.QM
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: The Influenza type A virus can be considered as one of the most severe viruses that can infect multiple species with often fatal consequences to the hosts. The Haemagglutinin (HA) gene of the virus has the potential to be a target for antiviral drug development realised through accurate identification of its sub-types and possible the targeted hosts. In this paper, to accurately predict if an Influenza type A virus has the capability to infect human hosts, by using only the HA gene, is therefore developed and tested. The predictive model follows three main steps; (i) decoding the protein sequences into numerical signals using EIIP amino acid scale, (ii) analysing these sequences by using Discrete Fourier Transform (DFT) and extracting DFT-based features, (iii) using a predictive model, based on Artificial Neural Networks and using the features generated by DFT. In this analysis, from the Influenza Research Database, 30724, 18236 and 8157 HA protein sequences were collected for Human, Avian and Swine, respectively. Given this set of the proteins, the proposed method yielded 97.36% (+- 0.04%), 97.26% (+- 0.26%), 0.978 (+- 0.004), 0.963 (+- 0.005) and 0.945 (+- 0.005) for the training accuracy validation accuracy, precision, recall and Mathews Correlation Coefficient (MCC) respectively, based on a 10-fold cross-validation. The classification model generated by using one of the largest dataset, if not the largest, yields promising results that could lead to early detection of such species and help develop precautionary measurements for possible human infections.
versions: [ { "created": "Fri, 12 Feb 2021 07:09:12 GMT", "version": "v1" } ]
update_date: 2021-02-16
authors_parsed: [ [ "Chrysostomou", "Charalambos", "" ], [ "Partaourides", "Harris", "" ], [ "Seker", "Huseyin", "" ] ]
abstract: The Influenza type A virus can be considered one of the most severe viruses, able to infect multiple species with often fatal consequences to the hosts. The Haemagglutinin (HA) gene of the virus has the potential to be a target for antiviral drug development, realised through accurate identification of its sub-types and possibly the targeted hosts. In this paper, a predictive model that uses only the HA gene to determine whether an Influenza type A virus can infect human hosts is therefore developed and tested. The model follows three main steps: (i) decoding the protein sequences into numerical signals using the EIIP amino acid scale; (ii) analysing these sequences by using the Discrete Fourier Transform (DFT) and extracting DFT-based features; (iii) applying a predictive model, based on Artificial Neural Networks, to the features generated by the DFT. In this analysis, 30724, 18236 and 8157 HA protein sequences were collected from the Influenza Research Database for Human, Avian and Swine, respectively. Given this set of proteins, the proposed method yielded 97.36% (+- 0.04%), 97.26% (+- 0.26%), 0.978 (+- 0.004), 0.963 (+- 0.005) and 0.945 (+- 0.005) for the training accuracy, validation accuracy, precision, recall and Matthews Correlation Coefficient (MCC), respectively, based on a 10-fold cross-validation. The classification model, generated by using one of the largest datasets, if not the largest, yields promising results that could lead to early detection of such species and help develop precautionary measures against possible human infections.
id: 2004.04387
submitter: Yi Zhang
authors: Yi Zhang (1,3), Hanwen Tian (1), Yinglong Zhang (2), Yiping Chen (3) ((1) College of Geography and Environmental Science, Northwest Normal University, China, (2) Zhejiang Provincial Institute of Communications Planning, Design & Research Co., Ltd., China, (3) Institute of Earth Environment, Chinese Academy of Sciences, China)
title: Is the epidemic spread related to GDP? Visualizing the distribution of COVID-19 in Chinese Mainland
comments: 7 pages, 1 figure
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE physics.soc-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: In December 2019, COVID-19 were detected in Wuhan City, Hubei Province of China. SARS-CoV-2 rapidly spread to the whole Chinese mainland with the people during the Chinese Spring Festival Travel Rush. As of 19 February 2020, 74576 confirmed cases of COVID-19 had been reported in Chinese Mainland. What kind of cities have more confirmed cases, and is there any relationship between GDP and confirmed cases? In this study, we explored the relationship between the confirmed cases of COVID-19 and GDP at the prefectural-level, found a positive correlation between them. This finding warns high GDP areas should pay more prevention and control efforts when an epidemic outbreak, as they have greater risks than other areas nearby.
versions: [ { "created": "Thu, 9 Apr 2020 07:07:43 GMT", "version": "v1" } ]
update_date: 2020-04-10
authors_parsed: [ [ "Zhang", "Yi", "" ], [ "Tian", "Hanwen", "" ], [ "Zhang", "Yinglong", "" ], [ "Chen", "Yiping", "" ] ]
abstract: In December 2019, COVID-19 was detected in Wuhan City, Hubei Province of China. SARS-CoV-2 rapidly spread across the whole Chinese mainland with travellers during the Chinese Spring Festival Travel Rush. As of 19 February 2020, 74576 confirmed cases of COVID-19 had been reported in the Chinese Mainland. What kinds of cities have more confirmed cases, and is there any relationship between GDP and confirmed cases? In this study, we explored the relationship between confirmed cases of COVID-19 and GDP at the prefectural level and found a positive correlation between them. This finding warns that high-GDP areas should invest more in prevention and control during an epidemic outbreak, as they face greater risks than other areas nearby.
id: 1206.3004
submitter: Thierry Rabilloud
authors: Thierry Rabilloud (LCBM)
title: Silver Staining of 2D Electrophoresis Gels
comments: arXiv admin note: substantial text overlap with arXiv:0904.3535
journal-ref: Methods in Molecular Biology -Clifton then Totowa- 893 (2012) 61-73
doi: 10.1007/978-1-61779-885-6_5
report-no: null
categories: q-bio.GN
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Silver staining is used to detect proteins after electrophoretic separation on polyacrylamide gels. It -combines excellent sensitivity (in the low nanogram range) with the use of very simple and cheap equipment and chemicals. For its use in proteomics, two important additional features must be considered, compatibility with mass spectrometry and quantitative response. Both features are discussed in this chapter, and optimized silver staining protocols are proposed.
versions: [ { "created": "Thu, 14 Jun 2012 04:57:45 GMT", "version": "v1" } ]
update_date: 2012-06-15
authors_parsed: [ [ "Rabilloud", "Thierry", "", "LCBM" ] ]
abstract: Silver staining is used to detect proteins after electrophoretic separation on polyacrylamide gels. It combines excellent sensitivity (in the low nanogram range) with the use of very simple and cheap equipment and chemicals. For its use in proteomics, two important additional features must be considered: compatibility with mass spectrometry and quantitative response. Both features are discussed in this chapter, and optimized silver staining protocols are proposed.
id: 1311.1301
submitter: Santiago Laplagne
authors: Massimo Andreatta, Santiago Laplagne, Shuai Cheng Li and Stephen Smale
title: Prediction of residue-residue contacts from protein families using similarity kernels and least squares regularization
comments: 16 pages
journal-ref: null
doi: null
report-no: null
categories: q-bio.BM math.NA q-bio.QM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: One of the most challenging and long-standing problems in computational biology is the prediction of three-dimensional protein structure from amino acid sequence. A promising approach to infer spatial proximity between residues is the study of evolutionary covariance from multiple sequence alignments, especially in light of recent algorithmic improvements and the fast growing size of sequence databases. In this paper, we present a simple, fast and accurate algorithm for the prediction of residue-residue contacts based on regularized least squares. The basic assumption is that spatially proximal residues in a protein coevolve to maintain the physicochemical complementarity of the amino acids involved in the contact. Our regularized inversion of the sample covariance matrix allows the computation of partial correlations between pairs of residues, thereby removing the effect of spurious transitive correlations. The method also accounts for low number of observations by means of a regularization parameter that depends on the effective number of sequences in the alignment. When tested on a set of protein families from Pfam, we found the RLS algorithm to have performance comparable to state-of-the-art methods for contact prediction, while at the same time being faster and conceptually simpler.
versions: [ { "created": "Wed, 6 Nov 2013 07:35:48 GMT", "version": "v1" }, { "created": "Fri, 3 Jan 2014 06:38:14 GMT", "version": "v2" }, { "created": "Fri, 25 Apr 2014 03:23:21 GMT", "version": "v3" } ]
update_date: 2014-04-28
authors_parsed: [ [ "Andreatta", "Massimo", "" ], [ "Laplagne", "Santiago", "" ], [ "Li", "Shuai Cheng", "" ], [ "Smale", "Stephen", "" ] ]
abstract: One of the most challenging and long-standing problems in computational biology is the prediction of three-dimensional protein structure from amino acid sequence. A promising approach to infer spatial proximity between residues is the study of evolutionary covariance from multiple sequence alignments, especially in light of recent algorithmic improvements and the fast-growing size of sequence databases. In this paper, we present a simple, fast and accurate algorithm for the prediction of residue-residue contacts based on regularized least squares. The basic assumption is that spatially proximal residues in a protein coevolve to maintain the physicochemical complementarity of the amino acids involved in the contact. Our regularized inversion of the sample covariance matrix allows the computation of partial correlations between pairs of residues, thereby removing the effect of spurious transitive correlations. The method also accounts for a low number of observations by means of a regularization parameter that depends on the effective number of sequences in the alignment. When tested on a set of protein families from Pfam, we found the RLS algorithm to have performance comparable to state-of-the-art methods for contact prediction, while at the same time being faster and conceptually simpler.
id: 1910.07122
submitter: Hui Xue PhD
authors: Hui Xue, Ethan Tseng, Kristopher D Knott, Tushar Kotecha, Louise Brown, Sven Plein, Marianna Fontana, James C Moon, Peter Kellman
title: Automated Detection of Left Ventricle in Arterial Input Function Images for Inline Perfusion Mapping using Deep Learning: A study of 15,000 Patients
comments: Accepted by Magnetic Resonance in Medicine on March 30, 2020
journal-ref: null
doi: 10.1002/mrm.27954
report-no: null
categories: q-bio.QM cs.CV eess.IV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Quantification of myocardial perfusion has the potential to improve detection of regional and global flow reduction. Significant effort has been made to automate the workflow, where one essential step is the arterial input function (AIF) extraction. Since failure here invalidates quantification, high accuracy is required. For this purpose, this study presents a robust AIF detection method using the convolutional neural net (CNN) model. CNN models were trained by assembling 25,027 scans (N=12,984 patients) from three hospitals, seven scanners. A test set of 5,721 scans (N=2,805 patients) evaluated model performance. The 2D+T AIF time series was inputted into CNN. Two variations were investigated: a) Two Classes (2CS) for background and foreground (LV mask); b) Three Classes (3CS) for background, foreground LV and RV. Final model was deployed on MR scanners via the Gadgetron InlineAI. Model loading time on MR scanner was ~340ms and applying it took ~180ms. The 3CS model successfully detect LV for 99.98% of all test cases (1 failed out of 5,721 cases). The mean Dice ratio for 3CS was 0.87+/-0.08 with 92.0% of all test cases having Dice ratio >0.75, while the 2CS model gave lower Dice of 0.82+/-0.22 (P<1e-5). Extracted AIF signals using CNN were further compared to manual ground-truth for foot-time, peak-time, first-pass duration, peak value and area-under-curve. No significant differences were found for all features (P>0.2). This study proposed, validated, and deployed a robust CNN solution to detect the LV for the extraction of the AIF signal used in fully automated perfusion flow mapping. A very large data cohort was assembled and resulting models were deployed to MR scanners for fully inline AI in clinical hospitals.
versions: [ { "created": "Wed, 16 Oct 2019 01:22:38 GMT", "version": "v1" }, { "created": "Mon, 6 Apr 2020 21:41:36 GMT", "version": "v2" } ]
update_date: 2020-05-11
authors_parsed: [ [ "Xue", "Hui", "" ], [ "Tseng", "Ethan", "" ], [ "Knott", "Kristopher D", "" ], [ "Kotecha", "Tushar", "" ], [ "Brown", "Louise", "" ], [ "Plein", "Sven", "" ], [ "Fontana", "Marianna", "" ], [ "Moon", "James C", "" ], [ "Kellman", "Peter", "" ] ]
abstract: Quantification of myocardial perfusion has the potential to improve detection of regional and global flow reduction. Significant effort has been made to automate the workflow, where one essential step is the arterial input function (AIF) extraction. Since failure here invalidates quantification, high accuracy is required. For this purpose, this study presents a robust AIF detection method using a convolutional neural network (CNN) model. CNN models were trained by assembling 25,027 scans (N=12,984 patients) from three hospitals and seven scanners. A test set of 5,721 scans (N=2,805 patients) evaluated model performance. The 2D+T AIF time series was input into the CNN. Two variations were investigated: a) Two Classes (2CS) for background and foreground (LV mask); b) Three Classes (3CS) for background, foreground LV and RV. The final model was deployed on MR scanners via the Gadgetron InlineAI. Model loading time on the MR scanner was ~340ms and applying it took ~180ms. The 3CS model successfully detected the LV in 99.98% of all test cases (1 failure out of 5,721 cases). The mean Dice ratio for 3CS was 0.87+/-0.08 with 92.0% of all test cases having a Dice ratio >0.75, while the 2CS model gave a lower Dice of 0.82+/-0.22 (P<1e-5). AIF signals extracted using the CNN were further compared to manual ground-truth for foot-time, peak-time, first-pass duration, peak value and area-under-curve. No significant differences were found for any feature (P>0.2). This study proposed, validated, and deployed a robust CNN solution to detect the LV for the extraction of the AIF signal used in fully automated perfusion flow mapping. A very large data cohort was assembled and the resulting models were deployed to MR scanners for fully inline AI in clinical hospitals.
id: 0804.0759
submitter: Matthias Keil
authors: Matthias S. Keil
title: Does Face Image Statistics Predict a Preferred Spatial Frequency for Human Face Processing?
comments: 6 pages, 11 figures, submitted to a peer-reviewed journal
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://creativecommons.org/licenses/by-nc-sa/3.0/
orig_abstract: Psychophysical experiments suggested a relative importance of a narrow band of spatial frequencies for recognition of face identity in humans. There exists, however, no conclusive evidence of why it is that such frequencies are preferred. To address this question, I examined the amplitude spectra of a large number of face images, and observed that face spectra generally fall off steeper with spatial frequency compared to ordinary natural images. When external face features (like hair) are suppressed, then whitening of the corresponding mean amplitude spectra revealed higher response amplitudes at those spatial frequencies which are deemed important for processing face identity. The results presented here therefore provide support for that face processing characteristics match corresponding stimulus properties.
versions: [ { "created": "Fri, 4 Apr 2008 15:33:48 GMT", "version": "v1" } ]
update_date: 2008-04-07
authors_parsed: [ [ "Keil", "Matthias S.", "" ] ]
abstract: Psychophysical experiments have suggested a relative importance of a narrow band of spatial frequencies for recognition of face identity in humans. There exists, however, no conclusive evidence of why such frequencies are preferred. To address this question, I examined the amplitude spectra of a large number of face images, and observed that face spectra generally fall off more steeply with spatial frequency than those of ordinary natural images. When external face features (like hair) are suppressed, whitening of the corresponding mean amplitude spectra revealed higher response amplitudes at those spatial frequencies which are deemed important for processing face identity. The results presented here therefore support the idea that face processing characteristics match corresponding stimulus properties.
id: 2312.15252
submitter: Min Li
authors: Zhangli Lu, Chuqi Lei, Kaili Wang, Libo Qin, Jing Tang, Min Li
title: DTIAM: A unified framework for predicting drug-target interactions, binding affinities and activation/inhibition mechanisms
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.BM cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Accurate and robust prediction of drug-target interactions (DTIs) plays a vital role in drug discovery. Despite extensive efforts have been invested in predicting novel DTIs, existing approaches still suffer from insufficient labeled data and cold start problems. More importantly, there is currently a lack of studies focusing on elucidating the mechanism of action (MoA) between drugs and targets. Distinguishing the activation and inhibition mechanisms is critical and challenging in drug development. Here, we introduce a unified framework called DTIAM, which aims to predict interactions, binding affinities, and activation/inhibition mechanisms between drugs and targets. DTIAM learns drug and target representations from large amounts of label-free data through self-supervised pre-training, which accurately extracts the substructure and contextual information of drugs and targets, and thus benefits the downstream prediction based on these representations. DTIAM achieves substantial performance improvement over other state-of-the-art methods in all tasks, particularly in the cold start scenario. Moreover, independent validation demonstrates the strong generalization ability of DTIAM. All these results suggested that DTIAM can provide a practically useful tool for predicting novel DTIs and further distinguishing the MoA of candidate drugs. DTIAM, for the first time, provides a unified framework for accurate and robust prediction of drug-target interactions, binding affinities, and activation/inhibition mechanisms.
versions: [ { "created": "Sat, 23 Dec 2023 13:27:41 GMT", "version": "v1" } ]
update_date: 2023-12-27
authors_parsed: [ [ "Lu", "Zhangli", "" ], [ "Lei", "Chuqi", "" ], [ "Wang", "Kaili", "" ], [ "Qin", "Libo", "" ], [ "Tang", "Jing", "" ], [ "Li", "Min", "" ] ]
abstract: Accurate and robust prediction of drug-target interactions (DTIs) plays a vital role in drug discovery. Although extensive efforts have been invested in predicting novel DTIs, existing approaches still suffer from insufficient labeled data and cold start problems. More importantly, there is currently a lack of studies focusing on elucidating the mechanism of action (MoA) between drugs and targets. Distinguishing the activation and inhibition mechanisms is critical and challenging in drug development. Here, we introduce a unified framework called DTIAM, which aims to predict interactions, binding affinities, and activation/inhibition mechanisms between drugs and targets. DTIAM learns drug and target representations from large amounts of label-free data through self-supervised pre-training, which accurately extracts the substructure and contextual information of drugs and targets, and thus benefits the downstream prediction based on these representations. DTIAM achieves substantial performance improvement over other state-of-the-art methods in all tasks, particularly in the cold start scenario. Moreover, independent validation demonstrates the strong generalization ability of DTIAM. All these results suggest that DTIAM can provide a practically useful tool for predicting novel DTIs and further distinguishing the MoA of candidate drugs. DTIAM, for the first time, provides a unified framework for accurate and robust prediction of drug-target interactions, binding affinities, and activation/inhibition mechanisms.
id: 2004.01634
submitter: Dr. Ashok Kumar Mishra
authors: Ashok Kumar Mishra and Satya Prakash Tewari
title: In Silico Screening of Some Naturally Occurring Bioactive Compounds Predicts Potential Inhibitors against SARS-COV-2 (COVID-19) Protease
comments: This research is dedicated to the peoples who have lost the lives, been struggling for the lives, been working hard to save the lives as well as been frightened for the lives while fighting against the pandemic COVID-19 i.e. the all CORONA WARRIORS
journal-ref: null
doi: null
report-no: null
categories: q-bio.BM physics.bio-ph q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: SARS-COV-2 identified as COVID-19 in Wuhan city of China in the month of December, 2019 has now been declared as pandemic by World Health Organization whose transmission chain and cure both have emerged as a tough problem for the medical fraternity. The reports pertaining to the treatment of this pandemic are still lacking. We firmly believe that Nature itself provides a simple solution for any complicated problem created in it which motivated us to carry out In Silico investigations on some bioactive natural compounds reportedly found in the fruits and leaves of Anthocephalus Cadamba which is a miraculous plant found on the earth aiming to predict the potential inhibitors against aforesaid virus. Having modeled the ground state ligand structure of the such nine natural compounds applying density functional theory at B3LYP/631+G (d, p) level we have performed their molecular docking with SARS-COV-2 protease to calculate the binding affinity as well as to screen the binding at S-protein site during ligand-protein interactions. Out of these nine studied naturally occurring compounds; Oleanic Acid has been appeared to be potential inhibitor for COVID-19 followed by Ursolic Acid, IsoVallesiachotamine,Vallesiachotamine,Cadambine,Vincosamide-N-Oxide, Isodihydroamino-cadambine, Pentyle Ester of Chlorogenic Acid and D-Myo-Inositol. Hence these bioactive natural compounds or their structural analogs may be explored as anti-COVID19 drug agent which will be possessing the peculiar feature of cost-less synthesis and less or no side effect due to their natural occurrence. The solubility and solvent-effect related to the phytochemicals may be the point of concern. The In-vivo investigations on these proposed natural compounds or on their structural analogs are invited for designing and developing the potential medicine/vaccine for the treatment of COVID-19 pandemic.
versions: [ { "created": "Fri, 3 Apr 2020 15:57:13 GMT", "version": "v1" } ]
update_date: 2020-04-06
authors_parsed: [ [ "Mishra", "Ashok Kumar", "" ], [ "Tewari", "Satya Prakash", "" ] ]
abstract: SARS-CoV-2, identified as the cause of COVID-19 in the city of Wuhan, China, in December 2019, has now been declared a pandemic by the World Health Organization; both its transmission chain and its cure have emerged as tough problems for the medical fraternity. Reports pertaining to the treatment of this pandemic are still lacking. We firmly believe that Nature itself provides simple solutions for complicated problems created in it, which motivated us to carry out in silico investigations of some bioactive natural compounds reportedly found in the fruits and leaves of Anthocephalus cadamba, aiming to predict potential inhibitors against the aforesaid virus. Having modeled the ground-state ligand structures of the nine such natural compounds by applying density functional theory at the B3LYP/6-31+G(d,p) level, we performed their molecular docking with the SARS-CoV-2 protease to calculate binding affinities as well as to screen binding at the S-protein site during ligand-protein interactions. Of these nine naturally occurring compounds, Oleanic Acid appears to be a potential inhibitor for COVID-19, followed by Ursolic Acid, IsoVallesiachotamine, Vallesiachotamine, Cadambine, Vincosamide-N-Oxide, Isodihydroamino-cadambine, the Pentyl Ester of Chlorogenic Acid and D-Myo-Inositol. Hence these bioactive natural compounds or their structural analogs may be explored as anti-COVID-19 drug agents, with the peculiar advantages of low-cost synthesis and few or no side effects due to their natural occurrence. The solubility and solvent effects related to the phytochemicals may be a point of concern. In vivo investigations of these proposed natural compounds or their structural analogs are invited for designing and developing potential medicines/vaccines for the treatment of the COVID-19 pandemic.
0803.2678
Garrit Jentsch
Garrit Jentsch and Reiner Kree
Spreading of EGF Receptor Activity into EGF-free Regions and Molecular Therapies of Cancer
null
null
null
null
q-bio.MN q-bio.SC
http://creativecommons.org/licenses/by-nc-sa/3.0/
The primary activation of the epidermal growth factor receptor (EGFR) has become a prominent target for molecular therapies against several forms of cancer. But despite considerable progress during the last years, many of its aspects remain poorly understood. Experiments on lateral spreading of receptor activity into ligand-free regions challenge the current standard models of EGFR activation. Here, we propose and study a theoretical model, which explains spreading into ligand-free regions without introducing any new, unknown kinetic parameters. The model exhibits bistability of activity, induced by a generic reaction mechanism, which consists of activation via dimerization and deactivation via a Michaelis-Menten reaction. It possesses slow propagating front solutions and faster initial transients. We analyze relevant experiments and find that they are in quantitative accordance with the fast initial modes of spreading, but not with the slow propagating front. We point out that lateral spreading of activity is linked to pathological levels of persistent receptor activity as observed in cancer cells and exemplify uses of this link for the design and quick evaluation of molecular therapies targeting primary activation of EGFR.
[ { "created": "Tue, 18 Mar 2008 17:30:26 GMT", "version": "v1" } ]
2008-03-19
[ [ "Jentsch", "Garrit", "" ], [ "Kree", "Reiner", "" ] ]
The primary activation of the epidermal growth factor receptor (EGFR) has become a prominent target for molecular therapies against several forms of cancer. But despite considerable progress during the last years, many of its aspects remain poorly understood. Experiments on lateral spreading of receptor activity into ligand-free regions challenge the current standard models of EGFR activation. Here, we propose and study a theoretical model, which explains spreading into ligand-free regions without introducing any new, unknown kinetic parameters. The model exhibits bistability of activity, induced by a generic reaction mechanism, which consists of activation via dimerization and deactivation via a Michaelis-Menten reaction. It possesses slow propagating front solutions and faster initial transients. We analyze relevant experiments and find that they are in quantitative accordance with the fast initial modes of spreading, but not with the slow propagating front. We point out that lateral spreading of activity is linked to pathological levels of persistent receptor activity as observed in cancer cells and exemplify uses of this link for the design and quick evaluation of molecular therapies targeting primary activation of EGFR.
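The generic reaction mechanism named in this abstract, activation via dimerization and deactivation via a Michaelis-Menten reaction, can be caricatured as a one-variable rate equation that indeed exhibits bistability. The sketch below is illustrative only: the rate constants and the exact functional form are assumptions, not values from the paper.

```python
# Minimal bistable sketch of the abstract's generic mechanism:
# activation proportional to active*inactive receptors (dimerization),
# deactivation via a Michaelis-Menten term. Parameters are illustrative.
def simulate(a0, k_on=1.0, v_max=0.2, km=0.05, dt=0.01, steps=20000):
    a = a0  # fraction of active receptors
    for _ in range(steps):
        activation = k_on * a * (1.0 - a)    # dimer-mediated activation
        deactivation = v_max * a / (km + a)  # Michaelis-Menten deactivation
        a += dt * (activation - deactivation)
    return a

low = simulate(0.10)   # below the unstable threshold (~0.2): decays to 0
high = simulate(0.30)  # above the threshold: settles at the high state (~0.75)
```

With these assumed parameters the fixed points are a = 0 (stable), 0.2 (unstable), and 0.75 (stable), so small perturbations die out while larger ones latch into persistent activity, the hallmark of bistability.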
2301.09568
Amarpal Sahota
Amarpal Sahota, Amber Roguski, Matthew W. Jones, Michal Rolinski, Alan Whone, Raul Santos-Rodriguez, Zahraa S. Abdallah
Interpretable Classification of Early Stage Parkinson's Disease from EEG
null
null
null
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting Parkinson's Disease in its early stages using EEG data presents a significant challenge. This paper introduces a novel approach, representing EEG data as a 15-variate series of bandpower and peak frequency values/coefficients. The hypothesis is that this representation captures essential information from the noisy EEG signal, improving disease detection. Statistical features extracted from this representation are utilised as input for interpretable machine learning models, specifically Decision Tree and AdaBoost classifiers. Our classification pipeline is deployed within our proposed framework which enables high-importance data types and brain regions for classification to be identified. Interestingly, our analysis reveals that while there is no significant regional importance, the N1 sleep data type exhibits statistically significant predictive power (p < 0.01) for early-stage Parkinson's Disease classification. AdaBoost classifiers trained on the N1 data type consistently outperform baseline models, achieving over 80% accuracy and recall. Our classification pipeline statistically significantly outperforms baseline models indicating that the model has acquired useful information. Paired with the interpretability (ability to view feature importance's) of our pipeline this enables us to generate meaningful insights into the classification of early stage Parkinson's with our N1 models. In Future, these models could be deployed in the real world - the results presented in this paper indicate that more than 3 in 4 early-stage Parkinson's cases would be captured with our pipeline.
[ { "created": "Fri, 20 Jan 2023 16:11:02 GMT", "version": "v1" }, { "created": "Fri, 8 Dec 2023 10:34:59 GMT", "version": "v2" } ]
2023-12-11
[ [ "Sahota", "Amarpal", "" ], [ "Roguski", "Amber", "" ], [ "Jones", "Matthew W.", "" ], [ "Rolinski", "Michal", "" ], [ "Whone", "Alan", "" ], [ "Santos-Rodriguez", "Raul", "" ], [ "Abdallah", "Zahraa S.", "" ] ]
Detecting Parkinson's Disease in its early stages using EEG data presents a significant challenge. This paper introduces a novel approach, representing EEG data as a 15-variate series of bandpower and peak frequency values/coefficients. The hypothesis is that this representation captures essential information from the noisy EEG signal, improving disease detection. Statistical features extracted from this representation are utilised as input for interpretable machine learning models, specifically Decision Tree and AdaBoost classifiers. Our classification pipeline is deployed within our proposed framework which enables high-importance data types and brain regions for classification to be identified. Interestingly, our analysis reveals that while there is no significant regional importance, the N1 sleep data type exhibits statistically significant predictive power (p < 0.01) for early-stage Parkinson's Disease classification. AdaBoost classifiers trained on the N1 data type consistently outperform baseline models, achieving over 80% accuracy and recall. Our classification pipeline statistically significantly outperforms baseline models, indicating that the model has acquired useful information. Paired with the interpretability of our pipeline (the ability to view feature importances), this enables us to generate meaningful insights into the classification of early-stage Parkinson's with our N1 models. In the future, these models could be deployed in the real world - the results presented in this paper indicate that more than 3 in 4 early-stage Parkinson's cases would be captured with our pipeline.
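The representation above is built from bandpower values per frequency band. The paper's exact 15-variate construction is not specified here, but the basic operation, summing discrete-Fourier power over a band, can be sketched without dependencies (the naive O(N^2) DFT below is for illustration; real pipelines would use an FFT):

```python
import cmath, math

def bandpower(x, fs, f_lo, f_hi):
    """Sum of DFT power over the band [f_lo, f_hi] Hz (naive O(N^2) DFT)."""
    n = len(x)
    total = 0.0
    for k in range(n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            xk = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            total += abs(xk) ** 2
    return total

fs = 100.0  # sampling rate in Hz
sig = [math.sin(2 * math.pi * 10.0 * t / fs) for t in range(200)]  # 10 Hz tone
alpha = bandpower(sig, fs, 8.0, 12.0)   # alpha band contains the tone
beta = bandpower(sig, fs, 13.0, 30.0)   # beta band should be ~0
```

A 10 Hz test tone lands almost entirely in the alpha band, which is the sanity check one would run before feeding such features to a classifier.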
2004.06220
Rebecca E. Morrison
Rebecca E. Morrison, Americo Cunha Jr
Embedded model discrepancy: A case study of Zika modeling
9 pages, 7 figures
Chaos (2020)
10.1063/5.0005204
vol. 30, pp. 051103
q-bio.PE cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical models of epidemiological systems enable investigation of and predictions about potential disease outbreaks. However, commonly used models are often highly simplified representations of incredibly complex systems. Because of these simplifications, the model output, of say new cases of a disease over time, or when an epidemic will occur, may be inconsistent with available data. In this case, we must improve the model, especially if we plan to make decisions based on it that could affect human health and safety, but direct improvements are often beyond our reach. In this work, we explore this problem through a case study of the Zika outbreak in Brazil in 2016. We propose an embedded discrepancy operator---a modification to the model equations that requires modest information about the system and is calibrated by all relevant data. We show that the new enriched model demonstrates greatly increased consistency with real data. Moreover, the method is general enough to easily apply to many other mathematical models in epidemiology.
[ { "created": "Mon, 13 Apr 2020 22:12:10 GMT", "version": "v1" } ]
2023-06-28
[ [ "Morrison", "Rebecca E.", "" ], [ "Cunha", "Americo", "Jr" ] ]
Mathematical models of epidemiological systems enable investigation of and predictions about potential disease outbreaks. However, commonly used models are often highly simplified representations of incredibly complex systems. Because of these simplifications, the model output, of say new cases of a disease over time, or when an epidemic will occur, may be inconsistent with available data. In this case, we must improve the model, especially if we plan to make decisions based on it that could affect human health and safety, but direct improvements are often beyond our reach. In this work, we explore this problem through a case study of the Zika outbreak in Brazil in 2016. We propose an embedded discrepancy operator---a modification to the model equations that requires modest information about the system and is calibrated by all relevant data. We show that the new enriched model demonstrates greatly increased consistency with real data. Moreover, the method is general enough to easily apply to many other mathematical models in epidemiology.
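The idea of an embedded discrepancy operator, a state-dependent correction added to the model equations and calibrated against data, can be sketched on a plain SIR model. The paper's actual Zika model and operator form are richer; everything below, including the correction coefficient, is an illustrative assumption.

```python
def sir(beta, gamma, delta, s0=0.99, i0=0.01, dt=0.1, t_end=200.0):
    """Euler-integrated SIR; delta(s, i) is an embedded discrepancy term
    added to di/dt and balanced in dr/dt so s + i + r stays conserved."""
    s, i, r = s0, i0, 0.0
    for _ in range(int(t_end / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i + delta(s, i)
        dr = gamma * i - delta(s, i)
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return s, i, r

base = sir(0.3, 0.1, lambda s, i: 0.0)            # unmodified model
enriched = sir(0.3, 0.1, lambda s, i: -0.02 * i)  # made-up calibrated correction
```

In practice the coefficients inside `delta` would be inferred from case data; here the extra removal term simply shrinks the epidemic, leaving a larger susceptible fraction at the end.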
2212.09450
Danqing Wang
Danqing Wang, Zeyu Wen, Fei Ye, Lei Li, Hao Zhou
Accelerating Antimicrobial Peptide Discovery with Latent Structure
KDD 2023
null
10.1145/3580305.3599249
null
q-bio.BM cs.CE cs.LG
http://creativecommons.org/licenses/by/4.0/
Antimicrobial peptides (AMPs) are promising therapeutic approaches against drug-resistant pathogens. Recently, deep generative models are used to discover new AMPs. However, previous studies mainly focus on peptide sequence attributes and do not consider crucial structure information. In this paper, we propose a latent sequence-structure model for designing AMPs (LSSAMP). LSSAMP exploits multi-scale vector quantization in the latent space to represent secondary structures (e.g. alpha helix and beta sheet). By sampling in the latent space, LSSAMP can simultaneously generate peptides with ideal sequence attributes and secondary structures. Experimental results show that the peptides generated by LSSAMP have a high probability of antimicrobial activity. Our wet laboratory experiments verified that two of the 21 candidates exhibit strong antimicrobial activity. The code is released at https://github.com/dqwang122/LSSAMP.
[ { "created": "Mon, 28 Nov 2022 06:43:32 GMT", "version": "v1" }, { "created": "Mon, 21 Aug 2023 00:36:44 GMT", "version": "v2" } ]
2023-08-22
[ [ "Wang", "Danqing", "" ], [ "Wen", "Zeyu", "" ], [ "Ye", "Fei", "" ], [ "Li", "Lei", "" ], [ "Zhou", "Hao", "" ] ]
Antimicrobial peptides (AMPs) are promising therapeutic approaches against drug-resistant pathogens. Recently, deep generative models have been used to discover new AMPs. However, previous studies mainly focus on peptide sequence attributes and do not consider crucial structure information. In this paper, we propose a latent sequence-structure model for designing AMPs (LSSAMP). LSSAMP exploits multi-scale vector quantization in the latent space to represent secondary structures (e.g. alpha helix and beta sheet). By sampling in the latent space, LSSAMP can simultaneously generate peptides with ideal sequence attributes and secondary structures. Experimental results show that the peptides generated by LSSAMP have a high probability of antimicrobial activity. Our wet laboratory experiments verified that two of the 21 candidates exhibit strong antimicrobial activity. The code is released at https://github.com/dqwang122/LSSAMP.
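The core operation behind the latent vector quantization mentioned above, snapping a continuous latent vector to its nearest codebook entry, can be sketched independently of the model. The codebook values here are made up; in LSSAMP the codes would stand in for secondary-structure elements.

```python
def quantize(z, codebook):
    """Return (index, code) of the codebook vector nearest to z (squared L2)."""
    best, best_d = 0, float("inf")
    for idx, c in enumerate(codebook):
        d = sum((zi - ci) ** 2 for zi, ci in zip(z, c))
        if d < best_d:
            best, best_d = idx, d
    return best, codebook[best]

# Toy 2-D codebook standing in for secondary-structure codes (illustrative)
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
idx, code = quantize((0.9, 0.1), codebook)  # nearest entry is (1.0, 0.0)
```

During training, real VQ models also pass gradients through this non-differentiable step (e.g. with a straight-through estimator), which is omitted from this sketch.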
2211.13544
William Dorrell Mr
William Dorrell, Maria Yuffa, Peter Latham
Meta-Learning the Inductive Biases of Simple Neural Circuits
14 pages, 12 figures
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:8389-8402, 2023
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Training data is always finite, making it unclear how to generalise to unseen situations. But, animals do generalise, wielding Occam's razor to select a parsimonious explanation of their observations. How they do this is called their inductive bias, and it is implicitly built into the operation of animals' neural circuits. This relationship between an observed circuit and its inductive bias is a useful explanatory window for neuroscience, allowing design choices to be understood normatively. However, it is generally very difficult to map circuit structure to inductive bias. Here, we present a neural network tool to bridge this gap. The tool meta-learns the inductive bias by learning functions that a neural circuit finds easy to generalise, since easy-to-generalise functions are exactly those the circuit chooses to explain incomplete data. In systems with analytically known inductive bias, i.e. linear and kernel regression, our tool recovers it. Generally, we show it can flexibly extract inductive biases from supervised learners, including spiking neural networks, and show how it could be applied to real animals. Finally, we use our tool to interpret recent connectomic data illustrating its intended use: understanding the role of circuit features through the resulting inductive bias.
[ { "created": "Thu, 24 Nov 2022 11:34:10 GMT", "version": "v1" }, { "created": "Fri, 17 Feb 2023 04:26:43 GMT", "version": "v2" }, { "created": "Thu, 13 Jul 2023 10:03:26 GMT", "version": "v3" } ]
2023-07-20
[ [ "Dorrell", "William", "" ], [ "Yuffa", "Maria", "" ], [ "Latham", "Peter", "" ] ]
Training data is always finite, making it unclear how to generalise to unseen situations. But, animals do generalise, wielding Occam's razor to select a parsimonious explanation of their observations. How they do this is called their inductive bias, and it is implicitly built into the operation of animals' neural circuits. This relationship between an observed circuit and its inductive bias is a useful explanatory window for neuroscience, allowing design choices to be understood normatively. However, it is generally very difficult to map circuit structure to inductive bias. Here, we present a neural network tool to bridge this gap. The tool meta-learns the inductive bias by learning functions that a neural circuit finds easy to generalise, since easy-to-generalise functions are exactly those the circuit chooses to explain incomplete data. In systems with analytically known inductive bias, i.e. linear and kernel regression, our tool recovers it. Generally, we show it can flexibly extract inductive biases from supervised learners, including spiking neural networks, and show how it could be applied to real animals. Finally, we use our tool to interpret recent connectomic data illustrating its intended use: understanding the role of circuit features through the resulting inductive bias.
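The premise above, that two learners fitting the same finite data can generalise differently and that those differences expose each learner's inductive bias, can be made concrete with two toy regressors. Both fit the training set perfectly; only their predictions on unseen inputs reveal the bias. Everything here is illustrative, not the paper's tool.

```python
def fit_linear(xs, ys):
    """Least squares through the origin: a 'linear' inductive bias."""
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return lambda x: slope * x

def fit_nearest(xs, ys):
    """1-nearest-neighbour: a 'locally constant' inductive bias."""
    return lambda x: ys[min(range(len(xs)), key=lambda i: abs(xs[i] - x))]

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # consistent with y = 2x
lin, nn = fit_linear(xs, ys), fit_nearest(xs, ys)
# Both agree on the training data but extrapolate differently at x = 10:
lin_pred, nn_pred = lin(10.0), nn(10.0)      # 20.0 vs 6.0
```

The meta-learning tool in the abstract generalises exactly this probe: instead of hand-picking test inputs, it learns which functions a given circuit finds easy to generalise.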
2208.10495
Bangwei Guo
Bangwei Guo, Xingyu Li, Jitendra Jonnagaddala, Hong Zhang, Xu Steven Xu
Predicting microsatellite instability and key biomarkers in colorectal cancer from H&E-stained images: Achieving SOTA predictive performance with fewer data using Swin Transformer
null
null
null
null
q-bio.QM cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial intelligence (AI) models have been developed for predicting clinically relevant biomarkers, including microsatellite instability (MSI), for colorectal cancers (CRC). However, the current deep-learning networks are data-hungry and require large training datasets, which are often lacking in the medical domain. In this study, based on the latest Hierarchical Vision Transformer using Shifted Windows (Swin-T), we developed an efficient workflow for biomarkers in CRC (MSI, hypermutation, chromosomal instability, CpG island methylator phenotype, BRAF, and TP53 mutation) that only required relatively small datasets, but achieved the state-of-the-art (SOTA) predictive performance. Our Swin-T workflow not only substantially outperformed published models in an intra-study cross-validation experiment using TCGA-CRC-DX dataset (N = 462), but also showed excellent generalizability in cross-study external validation and delivered a SOTA AUROC of 0.90 for MSI using the MCO dataset for training (N = 1065) and the same TCGA-CRC-DX for testing. Similar performance (AUROC=0.91) was achieved by Echle and colleagues using approximately 8000 training samples (ResNet18) on the same testing dataset. Swin-T was extremely efficient using small training datasets and exhibits robust predictive performance with only 200-500 training samples. These data indicate that Swin-T may be 5-10 times more efficient than the current state-of-the-art algorithms for MSI based on ResNet18 and ShuffleNet. Furthermore, the Swin-T models showed promise as pre-screening tests for MSI status and BRAF mutation status, which could exclude and reduce the samples before the subsequent standard testing in a cascading diagnostic workflow to allow turnaround time reduction and cost saving.
[ { "created": "Mon, 22 Aug 2022 02:32:30 GMT", "version": "v1" }, { "created": "Mon, 12 Sep 2022 03:18:01 GMT", "version": "v2" } ]
2022-09-13
[ [ "Guo", "Bangwei", "" ], [ "Li", "Xingyu", "" ], [ "Jonnagaddala", "Jitendra", "" ], [ "Zhang", "Hong", "" ], [ "Xu", "Xu Steven", "" ] ]
Artificial intelligence (AI) models have been developed for predicting clinically relevant biomarkers, including microsatellite instability (MSI), for colorectal cancers (CRC). However, the current deep-learning networks are data-hungry and require large training datasets, which are often lacking in the medical domain. In this study, based on the latest Hierarchical Vision Transformer using Shifted Windows (Swin-T), we developed an efficient workflow for biomarkers in CRC (MSI, hypermutation, chromosomal instability, CpG island methylator phenotype, BRAF, and TP53 mutation) that only required relatively small datasets, but achieved the state-of-the-art (SOTA) predictive performance. Our Swin-T workflow not only substantially outperformed published models in an intra-study cross-validation experiment using the TCGA-CRC-DX dataset (N = 462), but also showed excellent generalizability in cross-study external validation and delivered a SOTA AUROC of 0.90 for MSI using the MCO dataset for training (N = 1065) and the same TCGA-CRC-DX for testing. Similar performance (AUROC = 0.91) was achieved by Echle and colleagues using approximately 8000 training samples (ResNet18) on the same testing dataset. Swin-T was extremely efficient with small training datasets and exhibited robust predictive performance with only 200-500 training samples. These data indicate that Swin-T may be 5-10 times more efficient than the current state-of-the-art algorithms for MSI based on ResNet18 and ShuffleNet. Furthermore, the Swin-T models showed promise as pre-screening tests for MSI status and BRAF mutation status, which could exclude and reduce the samples before the subsequent standard testing in a cascading diagnostic workflow to allow turnaround time reduction and cost saving.
2306.03111
Minsu Kim
Minsu Kim, Federico Berto, Sungsoo Ahn, Jinkyoo Park
Bootstrapped Training of Score-Conditioned Generator for Offline Design of Biological Sequences
NeurIPS 2023, 19 pages, 5 figures
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of optimizing biological sequences, e.g., proteins, DNA, and RNA, to maximize a black-box score function that is only evaluated in an offline dataset. We propose a novel solution, bootstrapped training of score-conditioned generator (BootGen) algorithm. Our algorithm repeats a two-stage process. In the first stage, our algorithm trains the biological sequence generator with rank-based weights to enhance the accuracy of sequence generation based on high scores. The subsequent stage involves bootstrapping, which augments the training dataset with self-generated data labeled by a proxy score function. Our key idea is to align the score-based generation with a proxy score function, which distills the knowledge of the proxy score function to the generator. After training, we aggregate samples from multiple bootstrapped generators and proxies to produce a diverse design. Extensive experiments show that our method outperforms competitive baselines on biological sequential design tasks. We provide reproducible source code: \href{https://github.com/kaist-silab/bootgen}{https://github.com/kaist-silab/bootgen}.
[ { "created": "Mon, 5 Jun 2023 08:23:46 GMT", "version": "v1" }, { "created": "Fri, 22 Mar 2024 18:43:38 GMT", "version": "v2" } ]
2024-03-26
[ [ "Kim", "Minsu", "" ], [ "Berto", "Federico", "" ], [ "Ahn", "Sungsoo", "" ], [ "Park", "Jinkyoo", "" ] ]
We study the problem of optimizing biological sequences, e.g., proteins, DNA, and RNA, to maximize a black-box score function that is only evaluated in an offline dataset. We propose a novel solution, bootstrapped training of score-conditioned generator (BootGen) algorithm. Our algorithm repeats a two-stage process. In the first stage, our algorithm trains the biological sequence generator with rank-based weights to enhance the accuracy of sequence generation based on high scores. The subsequent stage involves bootstrapping, which augments the training dataset with self-generated data labeled by a proxy score function. Our key idea is to align the score-based generation with a proxy score function, which distills the knowledge of the proxy score function to the generator. After training, we aggregate samples from multiple bootstrapped generators and proxies to produce a diverse design. Extensive experiments show that our method outperforms competitive baselines on biological sequential design tasks. We provide reproducible source code: \href{https://github.com/kaist-silab/bootgen}{https://github.com/kaist-silab/bootgen}.
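A common form of the rank-based weighting mentioned above (used, e.g., in weighted retraining for design; the exact BootGen formula may differ) assigns each sequence a weight proportional to 1/(kN + rank), where rank 0 is the best score and k controls how sharply the best sequences dominate:

```python
def rank_weights(scores, k=0.01):
    """Normalized rank-based weights; higher score -> larger weight."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    raw = [0.0] * n
    for rank, i in enumerate(order):
        raw[i] = 1.0 / (k * n + rank)   # rank 0 = best-scoring sequence
    total = sum(raw)
    return [w / total for w in raw]

w = rank_weights([0.2, 0.9, 0.5])  # the 0.9-scoring sequence gets the largest weight
```

Because the weights depend only on rank, they are invariant to the scale of the proxy score, which makes the scheme robust when the proxy is retrained during bootstrapping.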
0812.2345
Oskar Hallatschek
Oskar Hallatschek, Pascal Hersen, Sharad Ramanathan and David R. Nelson
Genetic drift at expanding frontiers promotes gene segregation
Please visit http://www.pnas.org/content/104/50/19926.abstract for published article
PNAS 2007 104:19926-19930
10.1073/pnas.0710150104
null
q-bio.PE q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Competition between random genetic drift and natural selection plays a central role in evolution: Whereas non-beneficial mutations often prevail in small populations by chance, mutations that sweep through large populations typically confer a selective advantage. Here, however, we observe chance effects during range expansions that dramatically alter the gene pool even in large microbial populations. Initially well-mixed populations of two fluorescently labeled strains of Escherichia coli develop well-defined, sector-like regions with fractal boundaries in expanding colonies. The formation of these regions is driven by random fluctuations that originate in a thin band of pioneers at the expanding frontier. A comparison of bacterial and yeast colonies (Saccharomyces cerevisiae) suggests that this large-scale genetic sectoring is a generic phenomenon that may provide a detectable footprint of past range expansions.
[ { "created": "Fri, 12 Dec 2008 10:59:06 GMT", "version": "v1" } ]
2009-11-13
[ [ "Hallatschek", "Oskar", "" ], [ "Hersen", "Pascal", "" ], [ "Ramanathan", "Sharad", "" ], [ "Nelson", "David R.", "" ] ]
Competition between random genetic drift and natural selection plays a central role in evolution: Whereas non-beneficial mutations often prevail in small populations by chance, mutations that sweep through large populations typically confer a selective advantage. Here, however, we observe chance effects during range expansions that dramatically alter the gene pool even in large microbial populations. Initially well-mixed populations of two fluorescently labeled strains of Escherichia coli develop well-defined, sector-like regions with fractal boundaries in expanding colonies. The formation of these regions is driven by random fluctuations that originate in a thin band of pioneers at the expanding frontier. A comparison of bacterial and yeast colonies (Saccharomyces cerevisiae) suggests that this large-scale genetic sectoring is a generic phenomenon that may provide a detectable footprint of past range expansions.
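The sectoring mechanism described above, number fluctuations in a thin band of pioneers at the frontier, is often caricatured as a one-dimensional voter model along the front: each pioneer is replaced by a copy of a random neighbour, and strain boundaries can only merge, never split, so labelled strains coarsen into sectors. A dependency-free sketch (this is the standard caricature, not the authors' model):

```python
import random

def boundaries(front):
    """Count strain boundaries along the linear frontier."""
    return sum(1 for a, b in zip(front, front[1:]) if a != b)

def coarsen(front, updates, seed=0):
    """Sequential voter-model updates along a linear frontier."""
    rng = random.Random(seed)
    front = list(front)
    n = len(front)
    for _ in range(updates):
        i = rng.randrange(n)
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
        front[i] = front[rng.choice(nbrs)]  # copy a random neighbour
    return front

start = [i % 2 for i in range(100)]  # finely mixed strains: 99 boundaries
end = coarsen(start, 20000)
# boundaries(end) < boundaries(start): sectors only merge under drift
```

In the 2D colony, these boundary random walks along the expanding front trace out the fractal sector boundaries the abstract reports.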
1210.7583
Bertrand Servin
Mar\`ia In\`es Fariello, Simon Boitard, Hugo Naya, Magali SanCristobal, Bertrand Servin
Using haplotype differentiation among hierarchically structured populations for the detection of selection signatures
null
null
10.1534/genetics.112.147231
null
q-bio.PE q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The detection of molecular signatures of selection is one of the major concerns of modern population genetics. A widely used strategy in this context is to compare samples from several populations, and to look for genomic regions with outstanding genetic differentiation between these populations. Genetic differentiation is generally based on allele frequency differences between populations, which are measured by Fst or related statistics. Here we introduce a new statistic, denoted hapFLK, which focuses instead on the differences of haplotype frequencies between populations. In contrast to most existing statistics, hapFLK accounts for the hierarchical structure of the sampled populations. Using computer simulations, we show that each of these two features - the use of haplotype information and of the hierarchical structure of populations - significantly improves the detection power of selected loci, and that combining them in the hapFLK statistic provides even greater power. We also show that hapFLK is robust with respect to bottlenecks and migration and improves over existing approaches in many situations. Finally, we apply hapFLK to a set of six sheep breeds from Northern Europe, and identify seven regions under selection, which include already reported regions but also several new ones. We propose a method to help identifying the population(s) under selection in a detected region, which reveals that in many of these regions selection most likely occurred in more than one population. Furthermore, several of the detected regions correspond to incomplete sweeps, where the favourable haplotype is only at intermediate frequency in the population(s) under selection.
[ { "created": "Mon, 29 Oct 2012 07:53:00 GMT", "version": "v1" } ]
2013-01-24
[ [ "Fariello", "Marìa Inès", "" ], [ "Boitard", "Simon", "" ], [ "Naya", "Hugo", "" ], [ "SanCristobal", "Magali", "" ], [ "Servin", "Bertrand", "" ] ]
The detection of molecular signatures of selection is one of the major concerns of modern population genetics. A widely used strategy in this context is to compare samples from several populations, and to look for genomic regions with outstanding genetic differentiation between these populations. Genetic differentiation is generally based on allele frequency differences between populations, which are measured by Fst or related statistics. Here we introduce a new statistic, denoted hapFLK, which focuses instead on the differences of haplotype frequencies between populations. In contrast to most existing statistics, hapFLK accounts for the hierarchical structure of the sampled populations. Using computer simulations, we show that each of these two features - the use of haplotype information and of the hierarchical structure of populations - significantly improves the detection power of selected loci, and that combining them in the hapFLK statistic provides even greater power. We also show that hapFLK is robust with respect to bottlenecks and migration and improves over existing approaches in many situations. Finally, we apply hapFLK to a set of six sheep breeds from Northern Europe, and identify seven regions under selection, which include already reported regions but also several new ones. We propose a method to help identifying the population(s) under selection in a detected region, which reveals that in many of these regions selection most likely occurred in more than one population. Furthermore, several of the detected regions correspond to incomplete sweeps, where the favourable haplotype is only at intermediate frequency in the population(s) under selection.
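For contrast with the haplotype-based hapFLK statistic, the allele-frequency differentiation the abstract refers to is classically measured by Wright's Fst, e.g. (Ht - Hs)/Ht for a biallelic locus. A minimal sketch, assuming equal population sizes and ignoring the hierarchical-structure correction that distinguishes FLK/hapFLK:

```python
def fst(freqs):
    """Wright's Fst from per-population allele frequencies (equal sizes)."""
    p_bar = sum(freqs) / len(freqs)
    h_t = 2.0 * p_bar * (1.0 - p_bar)  # expected total heterozygosity
    h_s = sum(2.0 * p * (1.0 - p) for p in freqs) / len(freqs)  # mean within-pop
    return 0.0 if h_t == 0.0 else (h_t - h_s) / h_t

no_diff = fst([0.5, 0.5])  # identical populations -> 0
fixed = fst([0.0, 1.0])    # alternatively fixed populations -> 1
```

Outlier scans look for loci where this statistic (or a haplotype-frequency analogue, as in hapFLK) is larger than drift alone would predict.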
2308.12624
Jianan Li
Jianan Li, Liang Li, Shiyu Zhao
Predator-prey survival pressure is sufficient to evolve swarming behaviors
null
null
10.1088/1367-2630/acf33a
null
q-bio.PE cs.MA cs.NE cs.RO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The comprehension of how local interactions arise in global collective behavior is of utmost importance in both biological and physical research. Traditional agent-based models often rely on static rules that fail to capture the dynamic strategies of the biological world. Reinforcement learning has been proposed as a solution, but most previous methods adopt handcrafted reward functions that implicitly or explicitly encourage the emergence of swarming behaviors. In this study, we propose a minimal predator-prey coevolution framework based on mixed cooperative-competitive multiagent reinforcement learning, and adopt a reward function that is solely based on the fundamental survival pressure, that is, prey receive a reward of $-1$ if caught by predators while predators receive a reward of $+1$. Surprisingly, our analysis of this approach reveals an unexpectedly rich diversity of emergent behaviors for both prey and predators, including flocking and swirling behaviors for prey, as well as dispersion tactics, confusion, and marginal predation phenomena for predators. Overall, our study provides novel insights into the collective behavior of organisms and highlights the potential applications in swarm robotics.
[ { "created": "Thu, 24 Aug 2023 08:03:11 GMT", "version": "v1" } ]
2023-08-25
[ [ "Li", "Jianan", "" ], [ "Li", "Liang", "" ], [ "Zhao", "Shiyu", "" ] ]
The comprehension of how local interactions arise in global collective behavior is of utmost importance in both biological and physical research. Traditional agent-based models often rely on static rules that fail to capture the dynamic strategies of the biological world. Reinforcement learning has been proposed as a solution, but most previous methods adopt handcrafted reward functions that implicitly or explicitly encourage the emergence of swarming behaviors. In this study, we propose a minimal predator-prey coevolution framework based on mixed cooperative-competitive multiagent reinforcement learning, and adopt a reward function that is solely based on the fundamental survival pressure, that is, prey receive a reward of $-1$ if caught by predators while predators receive a reward of $+1$. Surprisingly, our analysis of this approach reveals an unexpectedly rich diversity of emergent behaviors for both prey and predators, including flocking and swirling behaviors for prey, as well as dispersion tactics, confusion, and marginal predation phenomena for predators. Overall, our study provides novel insights into the collective behavior of organisms and highlights the potential applications in swarm robotics.
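The reward in this abstract is deliberately minimal. As a sketch, the entire shaping-free signal reduces to a function of whether a prey agent was caught at the current step (the function and role names are made up; the paper's multiagent machinery is not reproduced here):

```python
def survival_reward(role, caught):
    """Pure survival pressure: no shaping terms of any kind.
    Prey pay -1 when caught; the catching predator gains +1."""
    if not caught:
        return 0.0
    return 1.0 if role == "predator" else -1.0

step_rewards = [survival_reward(r, c) for r, c in
                [("prey", False), ("prey", True), ("predator", True)]]
```

Every flocking, swirling, and confusion behavior reported in the abstract emerges from training against only this sparse signal, which is the study's central point.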
1612.04136
Julien Dervaux
Julien Dervaux (MSC), Vincent Noireaux, Albert Libchaber
Growth and instability of a phospholipid vesicle in a bath of fatty acids
null
null
null
null
q-bio.SC cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using a microfluidic trap, we study the behavior of individual phospholipid vesicles in contact with fatty acids. We show that spontaneous fatty acids insertion inside the bilayer is controlled by the vesicle size, osmotic pressure difference across the membrane and fatty acids concentration in the external bath. Depending on these parameters, vesicles can grow spherically or become unstable and fragment into several daughter vesicles. We establish the phase diagram for vesicle growth and we derive a simple thermodynamic model that reproduces the time evolution of the vesicle volume. Finally, we show that stable growth can be achieved on an artificial cell expressing a simple set of bacterial cytoskeletal proteins, paving the way toward artificial cell reproduction.
[ { "created": "Tue, 13 Dec 2016 13:03:54 GMT", "version": "v1" } ]
2016-12-14
[ [ "Dervaux", "Julien", "", "MSC" ], [ "Noireaux", "Vincent", "" ], [ "Libchaber", "Albert", "" ] ]
Using a microfluidic trap, we study the behavior of individual phospholipid vesicles in contact with fatty acids. We show that spontaneous fatty acid insertion into the bilayer is controlled by the vesicle size, the osmotic pressure difference across the membrane, and the fatty acid concentration in the external bath. Depending on these parameters, vesicles can grow spherically or become unstable and fragment into several daughter vesicles. We establish the phase diagram for vesicle growth and we derive a simple thermodynamic model that reproduces the time evolution of the vesicle volume. Finally, we show that stable growth can be achieved on an artificial cell expressing a simple set of bacterial cytoskeletal proteins, paving the way toward artificial cell reproduction.
q-bio/0507045
Hemant Bokil
Hemant Bokil, Bijan Pesaran, R.A. Andersen, Partha P. Mitra
A framework for detection and classification of events in neural activity
30 pages, 6 figures; this version submitted to IEEE Transactions on Biomedical Engineering
null
null
null
q-bio.NC q-bio.QM
null
We present a method for the real-time prediction of punctate events in neural activity, based on the time-frequency spectrum of the signal, applicable both to continuous processes such as local field potentials (LFP) and to spike trains. We test it on recordings of LFP and spiking activity acquired previously from the lateral intraparietal area (LIP) of macaque monkeys performing a memory-saccade task. In contrast to earlier work, where trials with known start times were classified, our method detects and classifies trials directly from the data. It provides a means to quantitatively compare and contrast the content of LFP signals and spike trains: we find that the detector performance based on the LFP matches the performance based on spike rates. The method should find application in the development of neural prosthetics based on the LFP signal. Our approach uses a new feature vector, which we call the 2D cepstrum.
[ { "created": "Fri, 29 Jul 2005 18:20:45 GMT", "version": "v1" }, { "created": "Thu, 20 Oct 2005 01:22:18 GMT", "version": "v2" } ]
2007-05-23
[ [ "Bokil", "Hemant", "" ], [ "Pesaran", "Bijan", "" ], [ "Andersen", "R. A.", "" ], [ "Mitra", "Partha P.", "" ] ]
We present a method for the real-time prediction of punctate events in neural activity, based on the time-frequency spectrum of the signal, applicable both to continuous processes such as local field potentials (LFP) and to spike trains. We test it on recordings of LFP and spiking activity acquired previously from the lateral intraparietal area (LIP) of macaque monkeys performing a memory-saccade task. In contrast to earlier work, where trials with known start times were classified, our method detects and classifies trials directly from the data. It provides a means to quantitatively compare and contrast the content of LFP signals and spike trains: we find that the detector performance based on the LFP matches the performance based on spike rates. The method should find application in the development of neural prosthetics based on the LFP signal. Our approach uses a new feature vector, which we call the 2D cepstrum.
2204.09673
Tiago Lubiana
Tiago Lubiana, Paola Roncaglia, Christopher J. Mungall, Ellen M. Quardokus, Joshua D. Fortriede, David Osumi-Sutherland and Alexander D. Diehl
Guidelines for reporting cell types: the MIRACL standard
8 pages, 1 figure, 1 table
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Cell types are at the root of modern biology, and describing them is a core task of the Human Cell Atlas project. Surprisingly, there are no standards for reporting new cell types, leading to a gap between classes mentioned in biomedical literature and the Cell Ontology, the primary registry of cell types. Here we introduce the Minimal Information Reporting About a CelL (MIRACL) standard, a guideline for describing cell types alongside scientific articles. In a MIRACL sheet, authors organize a label, a diagnostic description, a taxon, an anatomical structure, and a parent cell class for each cell type of interest. The MIRACL standard bridges the gap between wet-lab researchers and ontologists, facilitating the integration of biomedical knowledge into ontologies and artificial intelligence systems.
[ { "created": "Mon, 18 Apr 2022 18:50:13 GMT", "version": "v1" }, { "created": "Wed, 25 May 2022 14:20:43 GMT", "version": "v2" } ]
2022-05-26
[ [ "Lubiana", "Tiago", "" ], [ "Roncaglia", "Paola", "" ], [ "Mungall", "Christopher J.", "" ], [ "Quardokus", "Ellen M.", "" ], [ "Fortriede", "Joshua D.", "" ], [ "Osumi-Sutherland", "David", "" ], [ "Diehl", "Alexander D.", "" ] ]
Cell types are at the root of modern biology, and describing them is a core task of the Human Cell Atlas project. Surprisingly, there are no standards for reporting new cell types, leading to a gap between classes mentioned in biomedical literature and the Cell Ontology, the primary registry of cell types. Here we introduce the Minimal Information Reporting About a CelL (MIRACL) standard, a guideline for describing cell types alongside scientific articles. In a MIRACL sheet, authors organize a label, a diagnostic description, a taxon, an anatomical structure, and a parent cell class for each cell type of interest. The MIRACL standard bridges the gap between wet-lab researchers and ontologists, facilitating the integration of biomedical knowledge into ontologies and artificial intelligence systems.
1205.6438
Gasper Tkacik
Einat Granot-Atedgi and Ga\v{s}per Tka\v{c}ik and Ronen Segev and Elad Schneidman
Stimulus-dependent maximum entropy models of neural population codes
11 pages, 7 figures
PLoS Comput Biol 9 (2013): e1002922
10.1371/journal.pcbi.1002922
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. To be able to infer a model for this distribution from large-scale neural recordings, we introduce a stimulus-dependent maximum entropy (SDME) model---a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. The model is able to capture the single-cell response properties as well as the correlations in neural spiking due to shared stimulus and due to effective neuron-to-neuron connections. Here we show that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. As a result, the SDME model gives a more accurate account of single cell responses and in particular outperforms uncoupled models in reproducing the distributions of codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like surprise and information transmission in a neural population.
[ { "created": "Tue, 29 May 2012 18:03:26 GMT", "version": "v1" } ]
2013-06-14
[ [ "Granot-Atedgi", "Einat", "" ], [ "Tkačik", "Gašper", "" ], [ "Segev", "Ronen", "" ], [ "Schneidman", "Elad", "" ] ]
Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. To be able to infer a model for this distribution from large-scale neural recordings, we introduce a stimulus-dependent maximum entropy (SDME) model---a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. The model is able to capture the single-cell response properties as well as the correlations in neural spiking due to shared stimulus and due to effective neuron-to-neuron connections. Here we show that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. As a result, the SDME model gives a more accurate account of single cell responses and in particular outperforms uncoupled models in reproducing the distributions of codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like surprise and information transmission in a neural population.
1801.00494
Po-Yi Ho
Po-Yi Ho, Jie Lin, Ariel Amir
Modeling cell size regulation: From single-cell level statistics to molecular mechanisms and population level effects
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most microorganisms regulate their cell size. We review here some of the mathematical formulations of the problem of cell size regulation. We focus on coarse-grained stochastic models and the statistics they generate. We review the biologically relevant insights obtained from these models. We then describe cell cycle regulation and their molecular implementations, protein number regulation, and population growth, all in relation to size regulation. Finally, we discuss several future directions for developing understanding beyond phenomenological models of cell size regulation.
[ { "created": "Mon, 1 Jan 2018 19:00:02 GMT", "version": "v1" } ]
2018-01-03
[ [ "Ho", "Po-Yi", "" ], [ "Lin", "Jie", "" ], [ "Amir", "Ariel", "" ] ]
Most microorganisms regulate their cell size. We review here some of the mathematical formulations of the problem of cell size regulation. We focus on coarse-grained stochastic models and the statistics they generate. We review the biologically relevant insights obtained from these models. We then describe cell cycle regulation and their molecular implementations, protein number regulation, and population growth, all in relation to size regulation. Finally, we discuss several future directions for developing understanding beyond phenomenological models of cell size regulation.
1406.7060
Aparna Rai
Aparna Rai, A. Vipin Menon and Sarika Jalan
Randomness and preserved patterns in cancer network
21 pages, 9 figures
Sci. Rep. 4: 6368 (2014)
10.1038/srep06368
null
q-bio.MN cond-mat.dis-nn nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Breast cancer has been reported to account for the most cases among all female cancers to date. In order to gain a deeper insight into the complexities of the disease, we analyze the breast cancer network and its normal counterpart at the proteomic level. While the short range correlations in the eigenvalues exhibiting universality provide evidence of the importance of random connections in the underlying networks, the long range correlations along with the localization properties reveal insightful structural patterns involving functionally important proteins. The analysis provides a benchmark for designing drugs which can target a subgraph instead of individual proteins.
[ { "created": "Fri, 27 Jun 2014 04:51:45 GMT", "version": "v1" }, { "created": "Tue, 4 Nov 2014 07:27:51 GMT", "version": "v2" } ]
2017-04-05
[ [ "Rai", "Aparna", "" ], [ "Menon", "A. Vipin", "" ], [ "Jalan", "Sarika", "" ] ]
Breast cancer has been reported to account for the most cases among all female cancers to date. In order to gain a deeper insight into the complexities of the disease, we analyze the breast cancer network and its normal counterpart at the proteomic level. While the short range correlations in the eigenvalues exhibiting universality provide evidence of the importance of random connections in the underlying networks, the long range correlations along with the localization properties reveal insightful structural patterns involving functionally important proteins. The analysis provides a benchmark for designing drugs which can target a subgraph instead of individual proteins.
1602.08268
Marc Hellmuth
Marc Hellmuth and Nicolas Wieseke
Construction of Gene and Species Trees from Sequence Data incl. Orthologs, Paralogs, and Xenologs
null
null
null
null
q-bio.PE cs.DS q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic reconstruction aims at finding plausible hypotheses of the evolutionary history of genes or species based on genomic sequence information. The distinction of orthologous genes (genes that have a common ancestry and diverged after a speciation) is crucial and lies at the heart of many genomic studies. However, existing methods that rely only on 1:1 orthologs to infer species trees are strongly restricted to a small set of allowed genes that provide information about the species tree. The use of larger gene sets that additionally include non-orthologous genes (e.g. so-called paralogous or xenologous genes) considerably increases the information about the evolutionary history of the respective species. In this work, we introduce a novel method to compute species phylogenies based on sequence data including orthologs, paralogs or even xenologs.
[ { "created": "Fri, 26 Feb 2016 10:23:26 GMT", "version": "v1" } ]
2016-02-29
[ [ "Hellmuth", "Marc", "" ], [ "Wieseke", "Nicolas", "" ] ]
Phylogenetic reconstruction aims at finding plausible hypotheses of the evolutionary history of genes or species based on genomic sequence information. The distinction of orthologous genes (genes that have a common ancestry and diverged after a speciation) is crucial and lies at the heart of many genomic studies. However, existing methods that rely only on 1:1 orthologs to infer species trees are strongly restricted to a small set of allowed genes that provide information about the species tree. The use of larger gene sets that additionally include non-orthologous genes (e.g. so-called paralogous or xenologous genes) considerably increases the information about the evolutionary history of the respective species. In this work, we introduce a novel method to compute species phylogenies based on sequence data including orthologs, paralogs or even xenologs.
2308.08618
Brian Weiser
Venkat D. Abbaraju, Tamaraty L. Robinson, Brian P. Weiser
Modeling Biphasic, Non-Sigmoidal Dose-Response Relationships: Comparison of Brain-Cousens and Cedergreen Models for a Biochemical Dataset
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Biphasic, non-sigmoidal dose-response relationships are frequently observed in biochemistry and pharmacology, but they are not always analyzed with appropriate statistical methods. Here, we examine curve fitting methods for "hormetic" dose-response relationships where low and high doses of an effector produce opposite responses. We provide the full dataset used for modeling, and we provide the code for analyzing the dataset in SAS using two established mathematical models of hormesis, the Brain-Cousens model and the Cedergreen model. We show how to obtain and interpret curve parameters such as the ED50 that arise from modeling, and we discuss how curve parameters might change in a predictable manner when the conditions of the dose-response assay are altered. In addition to modeling the raw dataset that we provide, we also model the dataset after applying common normalization techniques, and we indicate how this affects the parameters that are associated with the fit curves. The Brain-Cousens and Cedergreen models that we used for curve fitting were similarly effective at capturing quantitative information about the biphasic dose-response relationships.
[ { "created": "Wed, 16 Aug 2023 18:22:37 GMT", "version": "v1" } ]
2023-08-21
[ [ "Abbaraju", "Venkat D.", "" ], [ "Robinson", "Tamaraty L.", "" ], [ "Weiser", "Brian P.", "" ] ]
Biphasic, non-sigmoidal dose-response relationships are frequently observed in biochemistry and pharmacology, but they are not always analyzed with appropriate statistical methods. Here, we examine curve fitting methods for "hormetic" dose-response relationships where low and high doses of an effector produce opposite responses. We provide the full dataset used for modeling, and we provide the code for analyzing the dataset in SAS using two established mathematical models of hormesis, the Brain-Cousens model and the Cedergreen model. We show how to obtain and interpret curve parameters such as the ED50 that arise from modeling, and we discuss how curve parameters might change in a predictable manner when the conditions of the dose-response assay are altered. In addition to modeling the raw dataset that we provide, we also model the dataset after applying common normalization techniques, and we indicate how this affects the parameters that are associated with the fit curves. The Brain-Cousens and Cedergreen models that we used for curve fitting were similarly effective at capturing quantitative information about the biphasic dose-response relationships.
1512.03784
Hao Yu
Chenxia Gu, Shaotong Wang, Hao Yu
A Chessboard Model of Human Brain and One Application on Memory Capacity
21 pages, 7 figures, 1 table
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The famous claim that we use only about 10% of the brain's capacity has recently been challenged. Researchers argue that we are likely to use the whole brain, against the 10% claim. Evidence and results from studies and experiments on memory in neuroscience lead to the conclusion that if the remaining 90% of the brain were not used, then many neural pathways would degenerate. What is memory? How does the brain function? What would be the limit of memory capacity? This article provides a model built upon the physiological and neurological characteristics of the human brain, which could give theoretical support and scientific explanations for some phenomena. It may not only have theoretical significance in neuroscience, but could also be practically useful in filling the gap between natural and machine intelligence.
[ { "created": "Thu, 3 Dec 2015 09:02:32 GMT", "version": "v1" }, { "created": "Fri, 29 Jan 2016 11:06:34 GMT", "version": "v2" } ]
2016-02-01
[ [ "Gu", "Chenxia", "" ], [ "Wang", "Shaotong", "" ], [ "Yu", "Hao", "" ] ]
The famous claim that we use only about 10% of the brain's capacity has recently been challenged. Researchers argue that we are likely to use the whole brain, against the 10% claim. Evidence and results from studies and experiments on memory in neuroscience lead to the conclusion that if the remaining 90% of the brain were not used, then many neural pathways would degenerate. What is memory? How does the brain function? What would be the limit of memory capacity? This article provides a model built upon the physiological and neurological characteristics of the human brain, which could give theoretical support and scientific explanations for some phenomena. It may not only have theoretical significance in neuroscience, but could also be practically useful in filling the gap between natural and machine intelligence.
1601.00334
Elliot Martin
Elliot A. Martin, Jaroslav Hlinka, J\"orn Davidsen
Pairwise Network Information and Nonlinear Correlations
null
Phys. Rev. E 94, 040301 (2016)
10.1103/PhysRevE.94.040301
null
q-bio.NC physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstructing the structural connectivity between interacting units from observed activity is a challenge across many different disciplines. The fundamental first step is to establish whether or to what extent the interactions between the units can be considered pairwise and, thus, can be modeled as an interaction network with simple links corresponding to pairwise interactions. In principle this can be determined by comparing the maximum entropy given the bivariate probability distributions to the true joint entropy. In many practical cases this is not an option since the bivariate distributions needed may not be reliably estimated, or the optimization is too computationally expensive. Here we present an approach that allows one to use mutual informations as a proxy for the bivariate distributions. This has the advantage of being less computationally expensive and easier to estimate. We achieve this by introducing a novel entropy maximization scheme that is based on conditioning on entropies and mutual informations. This renders our approach typically superior to other methods based on linear approximations. The advantages of the proposed method are documented using oscillator networks and a resting-state human brain network as generic relevant examples.
[ { "created": "Sun, 3 Jan 2016 20:10:09 GMT", "version": "v1" }, { "created": "Thu, 29 Sep 2016 05:14:05 GMT", "version": "v2" } ]
2016-11-02
[ [ "Martin", "Elliot A.", "" ], [ "Hlinka", "Jaroslav", "" ], [ "Davidsen", "Jörn", "" ] ]
Reconstructing the structural connectivity between interacting units from observed activity is a challenge across many different disciplines. The fundamental first step is to establish whether or to what extent the interactions between the units can be considered pairwise and, thus, can be modeled as an interaction network with simple links corresponding to pairwise interactions. In principle this can be determined by comparing the maximum entropy given the bivariate probability distributions to the true joint entropy. In many practical cases this is not an option since the bivariate distributions needed may not be reliably estimated, or the optimization is too computationally expensive. Here we present an approach that allows one to use mutual informations as a proxy for the bivariate distributions. This has the advantage of being less computationally expensive and easier to estimate. We achieve this by introducing a novel entropy maximization scheme that is based on conditioning on entropies and mutual informations. This renders our approach typically superior to other methods based on linear approximations. The advantages of the proposed method are documented using oscillator networks and a resting-state human brain network as generic relevant examples.
1909.09847
Hyodong Lee
Hyodong Lee, James J. DiCarlo
Topographic Deep Artificial Neural Networks (TDANNs) predict face selectivity topography in primate inferior temporal (IT) cortex
2018 Conference on Cognitive Computational Neuroscience
null
10.32470/CCN.2018.1085-0
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep convolutional neural networks are biologically driven models that resemble the hierarchical structure of primate visual cortex and are the current best predictors of the neural responses measured along the ventral stream. However, the networks lack topographic properties that are present in the visual cortex, such as orientation maps in primary visual cortex and category-selective maps in inferior temporal (IT) cortex. In this work, the minimum wiring cost constraint was approximated as an additional learning rule in order to generate topographic maps of the networks. We found that our topographic deep artificial neural networks (ANNs) can reproduce the category selectivity maps of the primate IT cortex.
[ { "created": "Sat, 21 Sep 2019 15:53:24 GMT", "version": "v1" } ]
2019-09-24
[ [ "Lee", "Hyodong", "" ], [ "DiCarlo", "James J.", "" ] ]
Deep convolutional neural networks are biologically driven models that resemble the hierarchical structure of primate visual cortex and are the current best predictors of the neural responses measured along the ventral stream. However, the networks lack topographic properties that are present in the visual cortex, such as orientation maps in primary visual cortex and category-selective maps in inferior temporal (IT) cortex. In this work, the minimum wiring cost constraint was approximated as an additional learning rule in order to generate topographic maps of the networks. We found that our topographic deep artificial neural networks (ANNs) can reproduce the category selectivity maps of the primate IT cortex.
1411.6880
William Gray Roncal
William Gray Roncal, Dean M. Kleissas, Joshua T. Vogelstein, Priya Manavalan, Kunal Lillaney, Michael Pekala, Randal Burns, R. Jacob Vogelstein, Carey E. Priebe, Mark A. Chevillet, Gregory D. Hager
An Automated Images-to-Graphs Framework for High Resolution Connectomics
13 pages, first two authors contributed equally. V2: added additional experiments and clarifications; added information on infrastructure and pipeline environment
null
null
null
q-bio.QM cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstructing a map of neuronal connectivity is a critical challenge in contemporary neuroscience. Recent advances in high-throughput serial section electron microscopy (EM) have produced massive 3D image volumes of nanoscale brain tissue for the first time. The resolution of EM allows for individual neurons and their synaptic connections to be directly observed. Recovering neuronal networks by manually tracing each neuronal process at this scale is unmanageable, and therefore researchers are developing automated image processing modules. Thus far, state-of-the-art algorithms focus only on the solution to a particular task (e.g., neuron segmentation or synapse identification). In this manuscript we present the first fully automated images-to-graphs pipeline (i.e., a pipeline that begins with an imaged volume of neural tissue and produces a brain graph without any human interaction). To evaluate overall performance and select the best parameters and methods, we also develop a metric to assess the quality of the output graphs. We evaluate a set of algorithms and parameters, searching possible operating points to identify the best available brain graph for our assessment metric. Finally, we deploy a reference end-to-end version of the pipeline on a large, publicly available data set. This provides a baseline result and framework for community analysis and future algorithm development and testing. All code and data derivatives have been made publicly available toward eventually unlocking new biofidelic computational primitives and understanding of neuropathologies.
[ { "created": "Tue, 25 Nov 2014 14:37:47 GMT", "version": "v1" }, { "created": "Thu, 30 Apr 2015 06:04:10 GMT", "version": "v2" } ]
2015-05-01
[ [ "Roncal", "William Gray", "" ], [ "Kleissas", "Dean M.", "" ], [ "Vogelstein", "Joshua T.", "" ], [ "Manavalan", "Priya", "" ], [ "Lillaney", "Kunal", "" ], [ "Pekala", "Michael", "" ], [ "Burns", "Randal", "" ], [ "Vogelstein", "R. Jacob", "" ], [ "Priebe", "Carey E.", "" ], [ "Chevillet", "Mark A.", "" ], [ "Hager", "Gregory D.", "" ] ]
Reconstructing a map of neuronal connectivity is a critical challenge in contemporary neuroscience. Recent advances in high-throughput serial section electron microscopy (EM) have produced massive 3D image volumes of nanoscale brain tissue for the first time. The resolution of EM allows for individual neurons and their synaptic connections to be directly observed. Recovering neuronal networks by manually tracing each neuronal process at this scale is unmanageable, and therefore researchers are developing automated image processing modules. Thus far, state-of-the-art algorithms focus only on the solution to a particular task (e.g., neuron segmentation or synapse identification). In this manuscript we present the first fully automated images-to-graphs pipeline (i.e., a pipeline that begins with an imaged volume of neural tissue and produces a brain graph without any human interaction). To evaluate overall performance and select the best parameters and methods, we also develop a metric to assess the quality of the output graphs. We evaluate a set of algorithms and parameters, searching possible operating points to identify the best available brain graph for our assessment metric. Finally, we deploy a reference end-to-end version of the pipeline on a large, publicly available data set. This provides a baseline result and framework for community analysis and future algorithm development and testing. All code and data derivatives have been made publicly available toward eventually unlocking new biofidelic computational primitives and understanding of neuropathologies.
2304.04239
Jumpei Yamagishi
Jumpei F. Yamagishi and Kunihiko Kaneko
Universal Transitions between Growth and Dormancy via Intermediate Complex Formation
6+6 pages, 3+6 figures
null
null
null
q-bio.CB physics.bio-ph
http://creativecommons.org/licenses/by-sa/4.0/
A simple cell model consisting of a catalytic reaction network with intermediate complex formation is numerically studied. As nutrients are depleted, the transition from the exponential growth phase to the growth-arrested dormant phase occurs along with hysteresis and a lag time for growth recovery. This transition is caused by the accumulation of intermediate complexes, leading to the jamming of reactions and the diversification of components. These properties are generic in random reaction networks, as supported by dynamical systems analyses of corresponding mean-field models.
[ { "created": "Sun, 9 Apr 2023 13:55:45 GMT", "version": "v1" } ]
2023-04-11
[ [ "Yamagishi", "Jumpei F.", "" ], [ "Kaneko", "Kunihiko", "" ] ]
A simple cell model consisting of a catalytic reaction network with intermediate complex formation is numerically studied. As nutrients are depleted, the transition from the exponential growth phase to the growth-arrested dormant phase occurs along with hysteresis and a lag time for growth recovery. This transition is caused by the accumulation of intermediate complexes, leading to the jamming of reactions and the diversification of components. These properties are generic in random reaction networks, as supported by dynamical systems analyses of corresponding mean-field models.
2303.16829
Siddharth Kackar
Siddharth Kackar
Neural spikes as rare events
null
null
null
null
q-bio.NC cond-mat.stat-mech
http://creativecommons.org/licenses/by/4.0/
We consider the information transmission problem in neurons and its possible implications for learning in neural networks. Our approach is based on recent developments in statistical physics and complexity science. We also develop a method to select statistically significant neural responses from the background activity and consider its wider applications. This would support temporal coding theory as a model for neural coding.
[ { "created": "Wed, 29 Mar 2023 16:25:50 GMT", "version": "v1" }, { "created": "Tue, 25 Jul 2023 10:30:12 GMT", "version": "v10" }, { "created": "Sun, 11 Feb 2024 13:06:10 GMT", "version": "v11" }, { "created": "Thu, 29 Feb 2024 07:12:04 GMT", "version": "v12" }, { "created": "Sun, 31 Mar 2024 03:42:11 GMT", "version": "v13" }, { "created": "Mon, 10 Jun 2024 02:27:37 GMT", "version": "v14" }, { "created": "Mon, 8 Jul 2024 13:23:43 GMT", "version": "v15" }, { "created": "Sat, 1 Apr 2023 15:37:22 GMT", "version": "v2" }, { "created": "Mon, 8 May 2023 07:48:50 GMT", "version": "v3" }, { "created": "Mon, 29 May 2023 17:33:42 GMT", "version": "v4" }, { "created": "Wed, 31 May 2023 11:25:31 GMT", "version": "v5" }, { "created": "Thu, 1 Jun 2023 17:34:15 GMT", "version": "v6" }, { "created": "Sun, 11 Jun 2023 13:14:51 GMT", "version": "v7" }, { "created": "Sat, 24 Jun 2023 13:58:30 GMT", "version": "v8" }, { "created": "Tue, 4 Jul 2023 10:34:31 GMT", "version": "v9" } ]
2024-07-09
[ [ "Kackar", "Siddharth", "" ] ]
We consider the information transmission problem in neurons and its possible implications for learning in neural networks. Our approach is based on recent developments in statistical physics and complexity science. We also develop a method to select statistically significant neural responses from the background activity and consider its wider applications. This would support temporal coding theory as a model for neural coding.
2101.07926
Irena Papst
Irena Papst, Kevin P. O'Keeffe, Steven H. Strogatz
Modeling the interplay between seasonal flu outcomes and individual vaccination decisions
20 pages, 6 figures
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
Seasonal influenza presents an ongoing challenge to public health. The rapid evolution of the flu virus necessitates annual vaccination campaigns, but the decision to get vaccinated or not in a given year is largely voluntary, at least in the United States, and many people decide against it. In early attempts to model these yearly flu vaccine decisions, it was often assumed that individuals behave rationally, and do so with perfect information -- assumptions that allowed the techniques of classical economics and game theory to be applied. However, the usual assumptions are contradicted by the emerging empirical evidence about human decision-making behavior in this context. We develop a simple model of coupled disease spread and vaccination dynamics that instead incorporates experimental observations from social psychology to model annual vaccine decision-making more realistically. We investigate population-level effects of these new decision-making assumptions, with the goal of understanding whether the population can self-organize into a state of herd immunity, and if so, under what conditions. Our model agrees with established results while also revealing more subtle population-level behavior, including biennial oscillations about the herd immunity threshold.
[ { "created": "Wed, 20 Jan 2021 02:05:00 GMT", "version": "v1" } ]
2021-01-21
[ [ "Papst", "Irena", "" ], [ "O'Keeffe", "Kevin P.", "" ], [ "Strogatz", "Steven H.", "" ] ]
Seasonal influenza presents an ongoing challenge to public health. The rapid evolution of the flu virus necessitates annual vaccination campaigns, but the decision to get vaccinated or not in a given year is largely voluntary, at least in the United States, and many people decide against it. In early attempts to model these yearly flu vaccine decisions, it was often assumed that individuals behave rationally, and do so with perfect information -- assumptions that allowed the techniques of classical economics and game theory to be applied. However, the usual assumptions are contradicted by the emerging empirical evidence about human decision-making behavior in this context. We develop a simple model of coupled disease spread and vaccination dynamics that instead incorporates experimental observations from social psychology to model annual vaccine decision-making more realistically. We investigate population-level effects of these new decision-making assumptions, with the goal of understanding whether the population can self-organize into a state of herd immunity, and if so, under what conditions. Our model agrees with established results while also revealing more subtle population-level behavior, including biennial oscillations about the herd immunity threshold.
1703.09990
Saeed Reza Kheradpisheh
Matin N. Ashtiani, Saeed Reza Kheradpisheh, Timoth\'ee Masquelier, Mohammad Ganjtabesh
Object categorization in finer levels requires higher spatial frequencies, and therefore takes longer
null
Frontiers in Psychology 2017
10.3389/fpsyg.2017.01261
null
q-bio.NC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the "entry" level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had an advantage over the two others; a claim that has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass filtered images to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects in each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that the different spatial frequency information had different effects on object categorization in each level. In the absence of high frequency information, subordinate and basic level categorization are performed inaccurately, while superordinate level categorization is performed well. This means that low frequency information is sufficient for the superordinate level, but not for the basic and subordinate levels. These finer levels require high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect.
[ { "created": "Wed, 29 Mar 2017 12:03:21 GMT", "version": "v1" } ]
2018-03-12
[ [ "Ashtiani", "Matin N.", "" ], [ "Kheradpisheh", "Saeed Reza", "" ], [ "Masquelier", "Timothée", "" ], [ "Ganjtabesh", "Mohammad", "" ] ]
The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the "entry" level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had an advantage over the two others; a claim that has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass filtered images to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects in each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that the different spatial frequency information had different effects on object categorization in each level. In the absence of high frequency information, subordinate and basic level categorization are performed inaccurately, while superordinate level categorization is performed well. This means that low frequency information is sufficient for the superordinate level, but not for the basic and subordinate levels. These finer levels require high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect.
2201.08447
Celine Vanhee
Antoine Francotte, Raphael Esson, Eric Abachin, Melissa Vanhamme, Alexandre Dobly, Bruce Carpick, Sylvie Uhlrich, Jean-Fran\c{c}ois Dierick, Celine Vanhee
Development and validation of a targeted LC-MS/MS quantitation method to monitor cell culture expression of tetanus neurotoxin during vaccine production
manuscript accepted for publication by talanta (DOI 10.1016/j.talanta.2021.122883). In total
Talanta vol. 236 (2022): 122883
10.1016/j.talanta.2021.122883
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
The tetanus neurotoxin (TeNT) is one of the most toxic proteins known to man; prior to the use of the vaccine against the TeNT-producing bacterium Clostridium tetani, infection resulted in a 20 % mortality rate. The clinical detrimental effects of tetanus have decreased immensely since the introduction of global vaccination programs, which depend on sustainable vaccine production. One of the major critical points in the manufacturing of these vaccines is the stable and reproducible production of high levels of toxin by the bacterial seed strains. In order to minimize time loss, the amount of TeNT is often monitored during and at the end of the bacterial culturing. The different methods that are currently available to assess the amount of TeNT in the bacterial medium suffer from variability, lack of sensitivity, and/or require specific antibodies. In accordance with the consistency approach and the three Rs (3Rs), both aiming to reduce the use of animals for testing, in-process monitoring of TeNT production could benefit from animal- and antibody-free analytical tools. In this paper, we describe the development and validation of a new and reliable antibody-free targeted LC-MS/MS method that is able to identify and quantify the amount of TeNT present in the bacterial medium during the different production time points, up to the harvesting of the TeNT just prior to further upstream purification and detoxification. The quantitation method, validated according to ICH guidelines and by the application of the total error approach, was utilized to assess the amount of TeNT present in the cell culture medium of two TeNT production batches during different steps in the vaccine production process prior to the generation of the toxoid. The amount of TeNT generated under different physical stress conditions applied during bacterial culture was also monitored.
[ { "created": "Thu, 20 Jan 2022 20:45:58 GMT", "version": "v1" } ]
2022-01-24
[ [ "Francotte", "Antoine", "" ], [ "Esson", "Raphael", "" ], [ "Abachin", "Eric", "" ], [ "Vanhamme", "Melissa", "" ], [ "Dobly", "Alexandre", "" ], [ "Carpick", "Bruce", "" ], [ "Uhlrich", "Sylvie", "" ], [ "Dierick", "Jean-François", "" ], [ "Vanhee", "Celine", "" ] ]
The tetanus neurotoxin (TeNT) is one of the most toxic proteins known to man; prior to the use of the vaccine against the TeNT-producing bacterium Clostridium tetani, infection resulted in a 20 % mortality rate. The clinical detrimental effects of tetanus have decreased immensely since the introduction of global vaccination programs, which depend on sustainable vaccine production. One of the major critical points in the manufacturing of these vaccines is the stable and reproducible production of high levels of toxin by the bacterial seed strains. In order to minimize time loss, the amount of TeNT is often monitored during and at the end of the bacterial culturing. The different methods that are currently available to assess the amount of TeNT in the bacterial medium suffer from variability, lack of sensitivity, and/or require specific antibodies. In accordance with the consistency approach and the three Rs (3Rs), both aiming to reduce the use of animals for testing, in-process monitoring of TeNT production could benefit from animal- and antibody-free analytical tools. In this paper, we describe the development and validation of a new and reliable antibody-free targeted LC-MS/MS method that is able to identify and quantify the amount of TeNT present in the bacterial medium during the different production time points, up to the harvesting of the TeNT just prior to further upstream purification and detoxification. The quantitation method, validated according to ICH guidelines and by the application of the total error approach, was utilized to assess the amount of TeNT present in the cell culture medium of two TeNT production batches during different steps in the vaccine production process prior to the generation of the toxoid. The amount of TeNT generated under different physical stress conditions applied during bacterial culture was also monitored.
1212.4114
Jeremy Draghi
Jeremy A. Draghi and Joshua B. Plotkin
Selection biases the prevalence and type of epistasis along adaptive trajectories
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The contribution to an organism's phenotype from one genetic locus may depend upon the status of other loci. Such epistatic interactions among loci are now recognized as fundamental to shaping the process of adaptation in evolving populations. Although little is known about the structure of epistasis in most organisms, recent experiments with bacterial populations have concluded that antagonistic interactions abound and tend to decelerate the pace of adaptation over time. Here, we use a broad class of mathematical fitness landscapes to examine how natural selection biases the mutations that substitute during evolution based on their epistatic interactions. We find that, even when beneficial mutations are rare, these biases are strong and change substantially throughout the course of adaptation. In particular, epistasis is less prevalent than the neutral expectation early in adaptation and much more prevalent later, with a concomitant shift from predominantly antagonistic interactions early in adaptation to synergistic and sign epistasis later in adaptation. We observe the same patterns when re-analyzing data from a recent microbial evolution experiment. Since these biases depend on the population size and other parameters, they must be quantified before we can hope to use experimental data to infer an organism's underlying fitness landscape or to understand the role of epistasis in shaping its adaptation. In particular, we show that when the order of substitutions is not known to an experimentalist, then standard methods of analysis may suggest that epistasis retards adaptation when in fact it accelerates it.
[ { "created": "Mon, 17 Dec 2012 19:31:32 GMT", "version": "v1" } ]
2012-12-18
[ [ "Draghi", "Jeremy A.", "" ], [ "Plotkin", "Joshua B.", "" ] ]
The contribution to an organism's phenotype from one genetic locus may depend upon the status of other loci. Such epistatic interactions among loci are now recognized as fundamental to shaping the process of adaptation in evolving populations. Although little is known about the structure of epistasis in most organisms, recent experiments with bacterial populations have concluded that antagonistic interactions abound and tend to decelerate the pace of adaptation over time. Here, we use a broad class of mathematical fitness landscapes to examine how natural selection biases the mutations that substitute during evolution based on their epistatic interactions. We find that, even when beneficial mutations are rare, these biases are strong and change substantially throughout the course of adaptation. In particular, epistasis is less prevalent than the neutral expectation early in adaptation and much more prevalent later, with a concomitant shift from predominantly antagonistic interactions early in adaptation to synergistic and sign epistasis later in adaptation. We observe the same patterns when re-analyzing data from a recent microbial evolution experiment. Since these biases depend on the population size and other parameters, they must be quantified before we can hope to use experimental data to infer an organism's underlying fitness landscape or to understand the role of epistasis in shaping its adaptation. In particular, we show that when the order of substitutions is not known to an experimentalist, then standard methods of analysis may suggest that epistasis retards adaptation when in fact it accelerates it.
2007.09093
Stefan Legewie
Uddipan Sarma, Lorenz Hexemer, Uchenna Alex Anyaegbunam, Stefan Legewie
Modelling cellular signalling variability based on single-cell data: the TGFb/SMAD signaling pathway
null
null
null
null
q-bio.MN q-bio.CB q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-genetic heterogeneity is key to cellular decisions, as even genetically identical cells respond in very different ways to the same external stimulus, e.g., during cell differentiation or therapeutic treatment of disease. Strong heterogeneity is typically already observed at the level of signaling pathways that are the first sensors of external inputs and transmit information to the nucleus where decisions are made. Since heterogeneity arises from random fluctuations of cellular components, mathematical models are required to fully describe the phenomenon and to understand the dynamics of heterogeneous cell populations. Here, we review the experimental and theoretical literature on cellular signaling heterogeneity, with special focus on the TGFb/SMAD signaling pathway.
[ { "created": "Fri, 17 Jul 2020 16:18:40 GMT", "version": "v1" } ]
2020-07-20
[ [ "Sarma", "Uddipan", "" ], [ "Hexemer", "Lorenz", "" ], [ "Anyaegbunam", "Uchenna Alex", "" ], [ "Legewie", "Stefan", "" ] ]
Non-genetic heterogeneity is key to cellular decisions, as even genetically identical cells respond in very different ways to the same external stimulus, e.g., during cell differentiation or therapeutic treatment of disease. Strong heterogeneity is typically already observed at the level of signaling pathways that are the first sensors of external inputs and transmit information to the nucleus where decisions are made. Since heterogeneity arises from random fluctuations of cellular components, mathematical models are required to fully describe the phenomenon and to understand the dynamics of heterogeneous cell populations. Here, we review the experimental and theoretical literature on cellular signaling heterogeneity, with special focus on the TGFb/SMAD signaling pathway.
2208.03139
Amin Gasmi
Amin Gasmi (SOFNNA)
Machine Learning and Bioinformatics for Diagnosis Analysis of Obesity Spectrum Disorders
null
null
null
null
q-bio.QM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Globally, the number of obese patients has doubled due to sedentary lifestyles and improper dieting. This tremendous increase has altered human genetics and health. According to the World Health Organization, life expectancy dropped from 80 to 75 years, as obese people struggle with different chronic diseases. This report will address the problems of obesity in children and adults using ML datasets to feature, predict, and analyze the causes of obesity. By engaging neural ML networks, we will explore neural control using diffusion tensor imaging to consider body fat, BMI, and waist \& hip ratio circumference of obese patients. To predict the present and future causes of obesity with ML, we will discuss ML techniques like decision trees, SVM, RF, GBM, LASSO, BN, and ANN, and use datasets to implement the stated algorithms. Different theoretical literature from ML \& Bioinformatics experts' experiments will be outlined in this report, while making recommendations on how to advance ML for predicting obesity and other chronic diseases.
[ { "created": "Fri, 5 Aug 2022 13:07:27 GMT", "version": "v1" } ]
2022-08-08
[ [ "Gasmi", "Amin", "", "SOFNNA" ] ]
Globally, the number of obese patients has doubled due to sedentary lifestyles and improper dieting. This tremendous increase has altered human genetics and health. According to the World Health Organization, life expectancy dropped from 80 to 75 years, as obese people struggle with different chronic diseases. This report will address the problems of obesity in children and adults using ML datasets to feature, predict, and analyze the causes of obesity. By engaging neural ML networks, we will explore neural control using diffusion tensor imaging to consider body fat, BMI, and waist \& hip ratio circumference of obese patients. To predict the present and future causes of obesity with ML, we will discuss ML techniques like decision trees, SVM, RF, GBM, LASSO, BN, and ANN, and use datasets to implement the stated algorithms. Different theoretical literature from ML \& Bioinformatics experts' experiments will be outlined in this report, while making recommendations on how to advance ML for predicting obesity and other chronic diseases.
1210.0234
Marion Scheepers
Jacob Herlin, Anna Nelson and Marion Scheepers
Using Ciliate Operations to construct Chromosome Phylogenies
31 pages, 14 figures. Preliminary report
null
null
REUG01
q-bio.GN cs.CE cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop an algorithm based on three basic DNA editing operations suggested by a model for ciliate micronuclear decryption, to transform a given permutation into another. The number of ciliate operations performed by our algorithm during such a transformation is taken to be the distance between two such permutations. Applying well-known clustering methods to such distance functions enables one to determine phylogenies among the items to which the distance functions apply. As an application of these ideas we explore the relationships among the chromosomes of eight fruitfly (drosophila) species, using the well-known UPGMA algorithm on the distance function provided by our algorithm.
[ { "created": "Sun, 30 Sep 2012 19:57:35 GMT", "version": "v1" }, { "created": "Sun, 28 Oct 2012 21:41:56 GMT", "version": "v2" }, { "created": "Tue, 13 Aug 2013 14:39:04 GMT", "version": "v3" }, { "created": "Tue, 7 Jan 2014 08:56:24 GMT", "version": "v4" } ]
2014-01-08
[ [ "Herlin", "Jacob", "" ], [ "Nelson", "Anna", "" ], [ "Scheepers", "Marion", "" ] ]
We develop an algorithm based on three basic DNA editing operations suggested by a model for ciliate micronuclear decryption, to transform a given permutation into another. The number of ciliate operations performed by our algorithm during such a transformation is taken to be the distance between two such permutations. Applying well-known clustering methods to such distance functions enables one to determine phylogenies among the items to which the distance functions apply. As an application of these ideas we explore the relationships among the chromosomes of eight fruitfly (drosophila) species, using the well-known UPGMA algorithm on the distance function provided by our algorithm.
2111.08686
Laurent H\'ebert-Dufresne
Laurent H\'ebert-Dufresne, Jean-Gabriel Young, Jamie Bedson, Laura A. Skrip, Danielle Pedi, Mohamed F. Jalloh, Bastian Raulier, Olivier Lapointe-Gagn\'e, Amara Jambai, Antoine Allard and Benjamin M. Althouse
The network epidemiology of an Ebola epidemic
See the anciliary file for our Supplementary Information document
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Connecting the different scales of epidemic dynamics, from individuals to communities to nations, remains one of the main challenges of disease modeling. Here, we revisit one of the largest public health efforts deployed against a localized epidemic: the 2014-2016 Ebola Virus Disease (EVD) epidemic in Sierra Leone. We leverage the data collected by the surveillance and contact tracing protocols of the Sierra Leone Ministry of Health and Sanitation, the US Centers for Disease Control and Prevention, and other responding partners to validate a network epidemiology framework connecting the population (incidence), community (local forecasts), and individual (secondary infections) scales of disease transmission. In doing so, we gain a better understanding of what brought the EVD epidemic to an end: Reduction of introduction in new clusters (primary cases), and not reduction in local transmission patterns (secondary infections). We also find that the first 90 days of the epidemic contained enough information to produce probabilistic forecasts of EVD cases; forecasts which we show are confirmed independently by both disease surveillance and contact tracing. Altogether, using data available two months before the start of the international support to the local response, network epidemiology could have inferred heterogeneity in local transmissions, the risk for superspreading events, and probabilistic forecasts of eventual cases per community. We expect that our framework will help connect large data collection efforts with individual behavior, and help reduce uncertainty during health emergencies and emerging epidemics.
[ { "created": "Tue, 16 Nov 2021 18:42:14 GMT", "version": "v1" } ]
2021-11-17
[ [ "Hébert-Dufresne", "Laurent", "" ], [ "Young", "Jean-Gabriel", "" ], [ "Bedson", "Jamie", "" ], [ "Skrip", "Laura A.", "" ], [ "Pedi", "Danielle", "" ], [ "Jalloh", "Mohamed F.", "" ], [ "Raulier", "Bastian", "" ], [ "Lapointe-Gagné", "Olivier", "" ], [ "Jambai", "Amara", "" ], [ "Allard", "Antoine", "" ], [ "Althouse", "Benjamin M.", "" ] ]
Connecting the different scales of epidemic dynamics, from individuals to communities to nations, remains one of the main challenges of disease modeling. Here, we revisit one of the largest public health efforts deployed against a localized epidemic: the 2014-2016 Ebola Virus Disease (EVD) epidemic in Sierra Leone. We leverage the data collected by the surveillance and contact tracing protocols of the Sierra Leone Ministry of Health and Sanitation, the US Centers for Disease Control and Prevention, and other responding partners to validate a network epidemiology framework connecting the population (incidence), community (local forecasts), and individual (secondary infections) scales of disease transmission. In doing so, we gain a better understanding of what brought the EVD epidemic to an end: Reduction of introduction in new clusters (primary cases), and not reduction in local transmission patterns (secondary infections). We also find that the first 90 days of the epidemic contained enough information to produce probabilistic forecasts of EVD cases; forecasts which we show are confirmed independently by both disease surveillance and contact tracing. Altogether, using data available two months before the start of the international support to the local response, network epidemiology could have inferred heterogeneity in local transmissions, the risk for superspreading events, and probabilistic forecasts of eventual cases per community. We expect that our framework will help connect large data collection efforts with individual behavior, and help reduce uncertainty during health emergencies and emerging epidemics.
1106.2574
Matthew Reichl
Matthew D. Reichl, Kevin E. Bassler
Canalization in the Critical States of Highly Connected Networks of Competing Boolean Nodes
8 pages, 5 figures
null
10.1103/PhysRevE.84.056103
null
q-bio.MN cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Canalization is a classic concept in Developmental Biology that is thought to be an important feature of evolving systems. In a Boolean network it is a form of network robustness in which a subset of the input signals control the behavior of a node regardless of the remaining input. It has been shown that Boolean networks can become canalized if they evolve through a frustrated competition between nodes. This was demonstrated for large networks in which each node had K=3 inputs. Those networks evolve to a critical steady-state at the border of two phases of dynamical behavior. Moreover, the evolution of these networks was shown to be associated with the symmetry of the evolutionary dynamics. We extend these results to the more highly connected K>3 cases and show that similar canalized critical steady states emerge with the same associated dynamical symmetry, but only if the evolutionary dynamics is biased toward homogeneous Boolean functions.
[ { "created": "Mon, 13 Jun 2011 23:37:27 GMT", "version": "v1" } ]
2015-05-28
[ [ "Reichl", "Matthew D.", "" ], [ "Bassler", "Kevin E.", "" ] ]
Canalization is a classic concept in Developmental Biology that is thought to be an important feature of evolving systems. In a Boolean network it is a form of network robustness in which a subset of the input signals control the behavior of a node regardless of the remaining input. It has been shown that Boolean networks can become canalized if they evolve through a frustrated competition between nodes. This was demonstrated for large networks in which each node had K=3 inputs. Those networks evolve to a critical steady-state at the border of two phases of dynamical behavior. Moreover, the evolution of these networks was shown to be associated with the symmetry of the evolutionary dynamics. We extend these results to the more highly connected K>3 cases and show that similar canalized critical steady states emerge with the same associated dynamical symmetry, but only if the evolutionary dynamics is biased toward homogeneous Boolean functions.
q-bio/0611081
Thierry Rabilloud
Mireille Chevallet, Pierre Lescuyer, H\'el\`ene Diemer, Alain van Dorsselaer, Emmanuelle Leize-Wagner, Thierry Rabilloud
Alterations of the mitochondrial proteome caused by the absence of mitochondrial DNA: A proteomic view
website publisher: http://www3.interscience.wiley.com/
Electrophoresis 27 (04/2006) 1574-83
10.1002/elps.200500704
null
q-bio.GN
null
The proper functioning of mitochondria requires that both the mitochondrial and the nuclear genome are functional. To investigate the importance of the mitochondrial genome, which encodes only 13 subunits of the respiratory complexes, the mitochondrial rRNAs and a few tRNAs, we performed a comparative study on the 143B cell line and on its Rho-0 counterpart, i.e., devoid of mitochondrial DNA. Quantitative differences were found, of course in the respiratory complexes subunits, but also in the mitochondrial translation apparatus, mainly mitochondrial ribosomal proteins, and in the ion and protein import system, i.e., including membrane proteins. Various mitochondrial metabolic processes were also altered, especially electron transfer proteins and some dehydrogenases, but quite often on a few proteins for each pathway. This study also showed variations in some hypothetical or poorly characterized proteins, suggesting a mitochondrial localization for these proteins. Examples include a stomatin-like protein and a protein sharing homologies with bacterial proteins implicated in tyrosine catabolism. Proteins involved in apoptosis control are also found modulated in Rho-0 mitochondria.
[ { "created": "Fri, 24 Nov 2006 08:11:56 GMT", "version": "v1" } ]
2016-08-16
[ [ "Chevallet", "Mireille", "" ], [ "Lescuyer", "Pierre", "" ], [ "Diemer", "Hélène", "" ], [ "van Dorsselaer", "Alain", "" ], [ "Leize-Wagner", "Emmanuelle", "" ], [ "Rabilloud", "Thierry", "" ] ]
The proper functioning of mitochondria requires that both the mitochondrial and the nuclear genome are functional. To investigate the importance of the mitochondrial genome, which encodes only 13 subunits of the respiratory complexes, the mitochondrial rRNAs and a few tRNAs, we performed a comparative study on the 143B cell line and on its Rho-0 counterpart, i.e., devoid of mitochondrial DNA. Quantitative differences were found, of course in the respiratory complexes subunits, but also in the mitochondrial translation apparatus, mainly mitochondrial ribosomal proteins, and in the ion and protein import system, i.e., including membrane proteins. Various mitochondrial metabolic processes were also altered, especially electron transfer proteins and some dehydrogenases, but quite often on a few proteins for each pathway. This study also showed variations in some hypothetical or poorly characterized proteins, suggesting a mitochondrial localization for these proteins. Examples include a stomatin-like protein and a protein sharing homologies with bacterial proteins implicated in tyrosine catabolism. Proteins involved in apoptosis control are also found modulated in Rho-0 mitochondria.
2004.12453
Alan Akil
Alan Eric Akil, Robert Rosenbaum and Kre\v{s}imir Josi\'c
Synaptic Plasticity in Correlated Balanced Networks
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory--inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance, and in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a general theory of plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly but significantly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.
[ { "created": "Sun, 26 Apr 2020 18:40:46 GMT", "version": "v1" } ]
2020-04-28
[ [ "Akil", "Alan Eric", "" ], [ "Rosenbaum", "Robert", "" ], [ "Josić", "Krešimir", "" ] ]
The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory--inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance, and in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a general theory of plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly but significantly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.
2312.07306
Shubham Krishna
Shubham Krishna, Carsten Lemmen, Serra \"Orey, Jennifer Rehren, Julien Di Pane, Moritz Mathis, Miriam P\"uts, Sascha Hokamp, Himansu Pradhan, Matthias Hasenbein, J\"urgen Scheffran, Kai Wirtz
Interactive effects of multiple stressors in coastal ecosystems
null
null
null
null
q-bio.QM q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Coastal ecosystems are increasingly experiencing anthropogenic pressures such as climate heating, CO2 increase, metal and organic pollution, overfishing and resource extraction. Some resulting stressors are more direct like fisheries, others more indirect like ocean acidification, yet they jointly affect marine biota, communities and entire ecosystems. While single-stressor effects have been widely investigated, the interactive effects of multiple stressors on ecosystems are less researched. In this study, we review the literature on multiple stressors and their interactive effects in coastal environments across organisms. We classify the interactions into three categories: synergistic, additive, and antagonistic. We found phytoplankton and mollusks to be the most studied taxonomic groups. The stressor combinations of climate warming, ocean acidification, eutrophication, and metal pollution are the most critical for coastal ecosystems as they exacerbate adverse effects on physiological traits such as growth rate, basal respiration, and size. Phytoplankton appears to be most sensitive to interactions between metal and nutrient pollution. In nutrient-enriched environments, the presence of metals considerably affects the uptake of nutrients, and increases respiration costs and toxin production in phytoplankton. For mollusks, warming and low pH are the most lethal stressors. The combined effect of heat stress and ocean acidification leads to decreased growth rate, shell size, and acid-base regulation capacity in mollusks. However, for a holistic understanding of how coastal food webs will evolve with ongoing changes, we suggest more research on ecosystem-level responses. This can be achieved by combining in-situ observations from controlled environments (e.g. mesocosm experiments) with modelling approaches.
[ { "created": "Tue, 12 Dec 2023 14:25:26 GMT", "version": "v1" } ]
2023-12-13
[ [ "Krishna", "Shubham", "" ], [ "Lemmen", "Carsten", "" ], [ "Örey", "Serra", "" ], [ "Rehren", "Jennifer", "" ], [ "Di Pane", "Julien", "" ], [ "Mathis", "Moritz", "" ], [ "Püts", "Miriam", "" ], [ "Hokamp", "Sascha", "" ], [ "Pradhan", "Himansu", "" ], [ "Hasenbein", "Matthias", "" ], [ "Scheffran", "Jürgen", "" ], [ "Wirtz", "Kai", "" ] ]
Coastal ecosystems are increasingly experiencing anthropogenic pressures such as climate heating, CO2 increase, metal and organic pollution, overfishing and resource extraction. Some resulting stressors are more direct like fisheries, others more indirect like ocean acidification, yet they jointly affect marine biota, communities and entire ecosystems. While single-stressor effects have been widely investigated, the interactive effects of multiple stressors on ecosystems are less researched. In this study, we review the literature on multiple stressors and their interactive effects in coastal environments across organisms. We classify the interactions into three categories: synergistic, additive, and antagonistic. We found phytoplankton and mollusks to be the most studied taxonomic groups. The stressor combinations of climate warming, ocean acidification, eutrophication, and metal pollution are the most critical for coastal ecosystems as they exacerbate adverse effects on physiological traits such as growth rate, basal respiration, and size. Phytoplankton appears to be most sensitive to interactions between metal and nutrient pollution. In nutrient-enriched environments, the presence of metals considerably affects the uptake of nutrients, and increases respiration costs and toxin production in phytoplankton. For mollusks, warming and low pH are the most lethal stressors. The combined effect of heat stress and ocean acidification leads to decreased growth rate, shell size, and acid-base regulation capacity in mollusks. However, for a holistic understanding of how coastal food webs will evolve with ongoing changes, we suggest more research on ecosystem-level responses. This can be achieved by combining in-situ observations from controlled environments (e.g. mesocosm experiments) with modelling approaches.
1407.3880
Taoyang Wu
Sha Zhu and Cuong Than and Taoyang Wu
Clades and clans: a comparison study of two evolutionary models
21 pages
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Yule-Harding-Kingman (YHK) model and the proportional to distinguishable arrangements (PDA) model are two binary tree generating models that are widely used in evolutionary biology. Understanding the distributions of clade sizes under these two models provides valuable insights into macro-evolutionary processes, and is important in hypothesis testing and Bayesian analyses in phylogenetics. Here we show that these distributions are log-convex, which implies that very large clades or very small clades are more likely to occur under these two models. Moreover, we prove that there exists a critical value $\kappa(n)$ for each $n\geqslant 4$ such that for a given clade with size $k$, the probability that this clade is contained in a random tree with $n$ leaves generated under the YHK model is higher than that under the PDA model if $1<k<\kappa(n)$, and lower if $\kappa(n)<k<n$. Finally, we extend our results to binary unrooted trees, and obtain similar results for the distributions of clan sizes.
[ { "created": "Tue, 15 Jul 2014 04:36:32 GMT", "version": "v1" } ]
2014-07-16
[ [ "Zhu", "Sha", "" ], [ "Than", "Cuong", "" ], [ "Wu", "Taoyang", "" ] ]
The Yule-Harding-Kingman (YHK) model and the proportional to distinguishable arrangements (PDA) model are two binary tree generating models that are widely used in evolutionary biology. Understanding the distributions of clade sizes under these two models provides valuable insights into macro-evolutionary processes, and is important in hypothesis testing and Bayesian analyses in phylogenetics. Here we show that these distributions are log-convex, which implies that very large clades or very small clades are more likely to occur under these two models. Moreover, we prove that there exists a critical value $\kappa(n)$ for each $n\geqslant 4$ such that for a given clade with size $k$, the probability that this clade is contained in a random tree with $n$ leaves generated under the YHK model is higher than that under the PDA model if $1<k<\kappa(n)$, and lower if $\kappa(n)<k<n$. Finally, we extend our results to binary unrooted trees, and obtain similar results for the distributions of clan sizes.
2203.11753
Benjamin Walker
Benjamin J. Walker, Adriana T. Dawes
Modelling mechanically dominated vasculature development
21 pages; 8 figures; 1 table
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vascular networks play a key role in the development, function, and survival of many organisms, facilitating transport of nutrients and other critical factors within and between systems. The development of these vessel networks has been thoroughly explored in a variety of in vivo, in vitro and in silico contexts. However, the role of interactions between the growing vasculature and its environment remains largely unresolved, particularly concerning mechanical effects. Motivated by this gap in understanding, we develop a computational framework that is tailored to exploring the role of the mechanical environment on the formation of vascular networks. Here, we describe, document, implement, and explore an agent-based modelling framework, resolving the growth of individual vessels and seeking to capture phenomenology and intuitive qualitative mechanisms. In our explorations, we demonstrate that such a model can successfully reproduce familiar network structures, whilst highlighting the roles that mechanical influences could play in vascular development. For instance, we illustrate how an external substrate could act as an effective shared memory for the periodic regrowth of vasculature. We also observe the emergence of a nuanced collective behaviour and clustered vessel growth, which results from mechanical characteristics of the external environment.
[ { "created": "Tue, 22 Mar 2022 14:08:53 GMT", "version": "v1" } ]
2022-03-23
[ [ "Walker", "Benjamin J.", "" ], [ "Dawes", "Adriana T.", "" ] ]
Vascular networks play a key role in the development, function, and survival of many organisms, facilitating transport of nutrients and other critical factors within and between systems. The development of these vessel networks has been thoroughly explored in a variety of in vivo, in vitro and in silico contexts. However, the role of interactions between the growing vasculature and its environment remains largely unresolved, particularly concerning mechanical effects. Motivated by this gap in understanding, we develop a computational framework that is tailored to exploring the role of the mechanical environment on the formation of vascular networks. Here, we describe, document, implement, and explore an agent-based modelling framework, resolving the growth of individual vessels and seeking to capture phenomenology and intuitive qualitative mechanisms. In our explorations, we demonstrate that such a model can successfully reproduce familiar network structures, whilst highlighting the roles that mechanical influences could play in vascular development. For instance, we illustrate how an external substrate could act as an effective shared memory for the periodic regrowth of vasculature. We also observe the emergence of a nuanced collective behaviour and clustered vessel growth, which results from mechanical characteristics of the external environment.
2403.17984
Roshna Faeq Kakbra
Roshna Faeq Kakbra
Effect of seaweed, moringa leaf extract and biofertilizer on growth, yield and fruit quality of cucumber (Cucumis sativus L.) under greenhouse condition
63 pages, Master's thesis
null
null
null
q-bio.TO
http://creativecommons.org/publicdomain/zero/1.0/
This factorial experiment was conducted in a greenhouse during the period of May 3, 2021 to August 5, 2021 at the research farm belonging to the Horticulture Department, College of Agricultural Engineering Sciences, University of Sulaimani, Sulaimani, Iraq. The experiment was designed to study the effect of some biostimulants, individually and in combination, on cucumber plant performance under greenhouse conditions, and to compare the results with those of chemical fertilizer application. The treatments consisted of a control (without adding any kind of biostimulant), a recommended dose of 100% chemical fertilizers (RDCF), seaweed extracts (SE), moringa leaf extract (MLE), a bacterial-based biostimulant of Fulzym-plus (FP), which contains Bacillus subtilis and Pseudomonas putida, (SE+MLE), (SE+FP), (MLE+FP), and (SE+MLE+FP). The experiment was laid out in a simple RCBD with 3 replications. The results showed that the application of different biostimulants, individually and in combination, significantly improved the root growth characteristics. The highest values of lateral root number per plant, lateral root length, lateral root diameter and root system dry weight were recorded with the application of the recommended dose of chemical fertilizer (RDCF). However, this treatment did not differ substantially from the triple combination of the tested biostimulants (SE+FP+MLE) in any of the studied root characteristics. In addition, untreated plants registered the minimum values of all the mentioned characters.
[ { "created": "Mon, 25 Mar 2024 06:58:44 GMT", "version": "v1" }, { "created": "Fri, 12 Apr 2024 19:28:23 GMT", "version": "v2" } ]
2024-04-16
[ [ "Kakbra", "Roshna Faeq", "" ] ]
This factorial experiment was conducted in a greenhouse during the period of May 3, 2021 to August 5, 2021 at the research farm belonging to the Horticulture Department, College of Agricultural Engineering Sciences, University of Sulaimani, Sulaimani, Iraq. The experiment was designed to study the effect of some biostimulants, individually and in combination, on cucumber plant performance under greenhouse conditions, and to compare the results with those of chemical fertilizer application. The treatments consisted of a control (without adding any kind of biostimulant), a recommended dose of 100% chemical fertilizers (RDCF), seaweed extracts (SE), moringa leaf extract (MLE), a bacterial-based biostimulant of Fulzym-plus (FP), which contains Bacillus subtilis and Pseudomonas putida, (SE+MLE), (SE+FP), (MLE+FP), and (SE+MLE+FP). The experiment was laid out in a simple RCBD with 3 replications. The results showed that the application of different biostimulants, individually and in combination, significantly improved the root growth characteristics. The highest values of lateral root number per plant, lateral root length, lateral root diameter and root system dry weight were recorded with the application of the recommended dose of chemical fertilizer (RDCF). However, this treatment did not differ substantially from the triple combination of the tested biostimulants (SE+FP+MLE) in any of the studied root characteristics. In addition, untreated plants registered the minimum values of all the mentioned characters.
1412.3151
Vince Grolmusz
Balazs Szalkai, Csaba Kerepesi, Balint Varga, Vince Grolmusz
The Budapest Reference Connectome Server v2.0
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The connectomes of different human brains are pairwise distinct: we cannot talk about an abstract "graph of the brain". Two typical connectomes, however, have quite a few common graph edges that may describe the same connections between the same cortical areas. The Budapest Reference Connectome Server Ver. 2.0 (http://connectome.pitgroup.org) generates the common edges of the connectomes of 96 distinct cortexes, each with 1015 vertices, computed from 96 MRI data sets of the Human Connectome Project. The user may set numerous parameters for the identification and filtering of common edges, and the graphs are downloadable in both csv and GraphML formats; both formats carry the anatomical annotations of the vertices, generated by the Freesurfer program. The resulting consensus graph is also automatically visualized in a 3D rotating brain model on the website. The consensus graphs, generated with various parameter settings, can be used as reference connectomes based on different, independent MRI images, therefore they may serve as reduced-error, low-noise, robust graph representations of the human brain.
[ { "created": "Tue, 9 Dec 2014 22:59:59 GMT", "version": "v1" }, { "created": "Tue, 6 Jan 2015 17:22:17 GMT", "version": "v2" } ]
2015-01-07
[ [ "Szalkai", "Balazs", "" ], [ "Kerepesi", "Csaba", "" ], [ "Varga", "Balint", "" ], [ "Grolmusz", "Vince", "" ] ]
The connectomes of different human brains are pairwise distinct: we cannot talk about an abstract "graph of the brain". Two typical connectomes, however, have quite a few common graph edges that may describe the same connections between the same cortical areas. The Budapest Reference Connectome Server Ver. 2.0 (http://connectome.pitgroup.org) generates the common edges of the connectomes of 96 distinct cortexes, each with 1015 vertices, computed from 96 MRI data sets of the Human Connectome Project. The user may set numerous parameters for the identification and filtering of common edges, and the graphs are downloadable in both csv and GraphML formats; both formats carry the anatomical annotations of the vertices, generated by the Freesurfer program. The resulting consensus graph is also automatically visualized in a 3D rotating brain model on the website. The consensus graphs, generated with various parameter settings, can be used as reference connectomes based on different, independent MRI images, therefore they may serve as reduced-error, low-noise, robust graph representations of the human brain.
1506.05352
Adriaan (Ard) A. Louis
Kamaludin Dingle, Steffen Schaper, and Ard A. Louis
The structure of the genotype-phenotype map strongly constrains the evolution of non-coding RNA
new version with some small tweaks to clarify the conclusions and some minor corrections
null
null
null
q-bio.PE q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The prevalence of neutral mutations implies that biological systems typically have many more genotypes than phenotypes. But can the way that genotypes are distributed over phenotypes determine evolutionary outcomes? Answering such questions is difficult because the number of genotypes can be hyper-astronomically large. By solving the genotype-phenotype (GP) map for RNA secondary structure for systems up to length $L=126$ nucleotides (where the set of all possible RNA strands would weigh more than the mass of the visible universe) we show that the GP map strongly constrains the evolution of non-coding RNA (ncRNA). Simple random sampling over genotypes predicts the distribution of properties such as the mutational robustness or the number of stems per secondary structure found in naturally occurring ncRNA with surprising accuracy. Since we ignore natural selection, this strikingly close correspondence with the mapping suggests that structures allowing for functionality are easily discovered, despite the enormous size of the genetic spaces. The mapping is extremely biased: the majority of genotypes map to an exponentially small portion of the morphospace of all biophysically possible structures. Such strong constraints provide a non-adaptive explanation for the convergent evolution of structures such as the hammerhead ribozyme. These results present a particularly clear example of bias in the arrival of variation strongly shaping evolutionary outcomes and may be relevant to Mayr's distinction between proximate and ultimate causes in evolutionary biology.
[ { "created": "Wed, 17 Jun 2015 14:53:28 GMT", "version": "v1" }, { "created": "Fri, 25 Sep 2015 04:45:23 GMT", "version": "v2" } ]
2015-09-28
[ [ "Dingle", "Kamaludin", "" ], [ "Schaper", "Steffen", "" ], [ "Louis", "Ard A.", "" ] ]
The prevalence of neutral mutations implies that biological systems typically have many more genotypes than phenotypes. But can the way that genotypes are distributed over phenotypes determine evolutionary outcomes? Answering such questions is difficult because the number of genotypes can be hyper-astronomically large. By solving the genotype-phenotype (GP) map for RNA secondary structure for systems up to length $L=126$ nucleotides (where the set of all possible RNA strands would weigh more than the mass of the visible universe) we show that the GP map strongly constrains the evolution of non-coding RNA (ncRNA). Simple random sampling over genotypes predicts the distribution of properties such as the mutational robustness or the number of stems per secondary structure found in naturally occurring ncRNA with surprising accuracy. Since we ignore natural selection, this strikingly close correspondence with the mapping suggests that structures allowing for functionality are easily discovered, despite the enormous size of the genetic spaces. The mapping is extremely biased: the majority of genotypes map to an exponentially small portion of the morphospace of all biophysically possible structures. Such strong constraints provide a non-adaptive explanation for the convergent evolution of structures such as the hammerhead ribozyme. These results present a particularly clear example of bias in the arrival of variation strongly shaping evolutionary outcomes and may be relevant to Mayr's distinction between proximate and ultimate causes in evolutionary biology.
1309.7643
Zhizhen Zhao
Zhizhen Zhao and Amit Singer
Rotationally Invariant Image Representation for Viewing Direction Classification in Cryo-EM
null
null
null
null
q-bio.BM cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new rotationally invariant viewing angle classification method for identifying, among a large number of Cryo-EM projection images, similar views without prior knowledge of the molecule. Our rotationally invariant features are based on the bispectrum. Each image is denoised and compressed using steerable principal component analysis (PCA) such that rotating an image is equivalent to phase shifting the expansion coefficients. Thus we are able to extend the theory of bispectrum of 1D periodic signals to 2D images. The randomized PCA algorithm is then used to efficiently reduce the dimensionality of the bispectrum coefficients, enabling fast computation of the similarity between any pair of images. The nearest neighbors provide an initial classification of similar viewing angles. In this way, rotational alignment is only performed for images with their nearest neighbors. The initial nearest neighbor classification and alignment are further improved by a new classification method called vector diffusion maps. Our pipeline for viewing angle classification and alignment is experimentally shown to be faster and more accurate than reference-free alignment with rotationally invariant K-means clustering, MSA/MRA 2D classification, and their modern approximations.
[ { "created": "Sun, 29 Sep 2013 19:24:51 GMT", "version": "v1" }, { "created": "Fri, 14 Feb 2014 21:06:18 GMT", "version": "v2" }, { "created": "Sat, 1 Mar 2014 00:56:22 GMT", "version": "v3" }, { "created": "Mon, 17 Mar 2014 23:34:52 GMT", "version": "v4" } ]
2014-03-19
[ [ "Zhao", "Zhizhen", "" ], [ "Singer", "Amit", "" ] ]
We introduce a new rotationally invariant viewing angle classification method for identifying, among a large number of Cryo-EM projection images, similar views without prior knowledge of the molecule. Our rotationally invariant features are based on the bispectrum. Each image is denoised and compressed using steerable principal component analysis (PCA) such that rotating an image is equivalent to phase shifting the expansion coefficients. Thus we are able to extend the theory of bispectrum of 1D periodic signals to 2D images. The randomized PCA algorithm is then used to efficiently reduce the dimensionality of the bispectrum coefficients, enabling fast computation of the similarity between any pair of images. The nearest neighbors provide an initial classification of similar viewing angles. In this way, rotational alignment is only performed for images with their nearest neighbors. The initial nearest neighbor classification and alignment are further improved by a new classification method called vector diffusion maps. Our pipeline for viewing angle classification and alignment is experimentally shown to be faster and more accurate than reference-free alignment with rotationally invariant K-means clustering, MSA/MRA 2D classification, and their modern approximations.
1602.08881
Christopher Buckley
Christopher L. Buckley, Satohiro Tajima, Toru Yanagawa, Kana Takakura, Yasuo Nagasaka, Naotaka Fujii, and Taro Toyoizumi
Brain State Control by Closed-Loop Environmental Feedback
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brain state regulates sensory processing and motor control for adaptive behavior. Internal mechanisms of brain state control are well studied, but the role of external modulation from the environment is not well understood. Here, we examined the role of closed-loop environmental (CLE) feedback, in comparison to open-loop sensory input, on brain state and behavior in diverse vertebrate systems. In fictively swimming zebrafish, CLE feedback for optomotor stability controlled brain state by reducing coherent neuronal activity. The role of CLE feedback in brain state was also shown in a model of rodent active whisking, where brief interruptions in this feedback enhanced signal-to-noise ratio for detecting touch. Finally, in monkey visual fixation, artificial CLE feedback suppressed stimulus-specific neuronal activity and improved behavioral performance. Our findings show that the environment mediates continuous closed-loop feedback that controls neuronal gain, regulating brain state, and that brain function is an emergent property of brain-environment interactions.
[ { "created": "Mon, 29 Feb 2016 09:35:07 GMT", "version": "v1" } ]
2016-03-01
[ [ "Buckley", "Christopher L.", "" ], [ "Tajima", "Satohiro", "" ], [ "Yanagawa", "Toru", "" ], [ "Takakura", "Kana", "" ], [ "Nagasaka", "Yasuo", "" ], [ "Fujii", "Naotaka", "" ], [ "Toyoizumi", "Taro", "" ] ]
Brain state regulates sensory processing and motor control for adaptive behavior. Internal mechanisms of brain state control are well studied, but the role of external modulation from the environment is not well understood. Here, we examined the role of closed-loop environmental (CLE) feedback, in comparison to open-loop sensory input, on brain state and behavior in diverse vertebrate systems. In fictively swimming zebrafish, CLE feedback for optomotor stability controlled brain state by reducing coherent neuronal activity. The role of CLE feedback in brain state was also shown in a model of rodent active whisking, where brief interruptions in this feedback enhanced signal-to-noise ratio for detecting touch. Finally, in monkey visual fixation, artificial CLE feedback suppressed stimulus-specific neuronal activity and improved behavioral performance. Our findings show that the environment mediates continuous closed-loop feedback that controls neuronal gain, regulating brain state, and that brain function is an emergent property of brain-environment interactions.
1401.3203
Alan Braslau
Jean-Louis Sikorav, Alan Braslau, and Arach Goldar
Foundations of biology
14 pages,2 tables
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is often stated that there are no laws in biology, where everything is contingent and could have been otherwise, being solely the result of historical accidents. Furthermore, the customary introduction of fundamental biological entities such as individual organisms, cells, genes, catalysts and motors remains largely descriptive; constructive approaches involving deductive reasoning appear, in comparison, almost absent. As a consequence, both the logical content and principles of biology need to be reconsidered. The present article describes an inquiry into the foundations of biology. The foundations of biology are built in terms of elements, logic and principles, using both the language and the general methods employed in other disciplines. This approach assumes the existence of a certain unity of human knowledge that transcends discipline boundaries. Leibniz's principle of sufficient reason is revised through the introduction of the complementary concepts of symmetry and asymmetry and of necessity and contingency. This is used to explain how these four concepts are involved in the elaboration of theories or laws of nature. Four fundamental theories of biology are then identified: cell theory, Darwin's theory of natural selection, an informational theory of life (which includes Mendel's theory of inheritance) and a physico-chemical theory of life. Atomism and deductive reasoning are shown to enter into the elaboration of the concepts of natural selection, individual living organisms, cells and their reproduction, genes as well as catalysts and motors. This work contributes to clarify the philosophical and logical structure of biology and its major theories. This should ultimately lead to a better understanding of the origin of life, of system and synthetic biology, and of artificial life.
[ { "created": "Mon, 13 Jan 2014 14:12:10 GMT", "version": "v1" }, { "created": "Tue, 15 Apr 2014 13:36:48 GMT", "version": "v2" } ]
2014-04-16
[ [ "Sikorav", "Jean-Louis", "" ], [ "Braslau", "Alan", "" ], [ "Goldar", "Arach", "" ] ]
It is often stated that there are no laws in biology, where everything is contingent and could have been otherwise, being solely the result of historical accidents. Furthermore, the customary introduction of fundamental biological entities such as individual organisms, cells, genes, catalysts and motors remains largely descriptive; constructive approaches involving deductive reasoning appear, in comparison, almost absent. As a consequence, both the logical content and principles of biology need to be reconsidered. The present article describes an inquiry into the foundations of biology. The foundations of biology are built in terms of elements, logic and principles, using both the language and the general methods employed in other disciplines. This approach assumes the existence of a certain unity of human knowledge that transcends discipline boundaries. Leibniz's principle of sufficient reason is revised through the introduction of the complementary concepts of symmetry and asymmetry and of necessity and contingency. This is used to explain how these four concepts are involved in the elaboration of theories or laws of nature. Four fundamental theories of biology are then identified: cell theory, Darwin's theory of natural selection, an informational theory of life (which includes Mendel's theory of inheritance) and a physico-chemical theory of life. Atomism and deductive reasoning are shown to enter into the elaboration of the concepts of natural selection, individual living organisms, cells and their reproduction, genes as well as catalysts and motors. This work contributes to clarify the philosophical and logical structure of biology and its major theories. This should ultimately lead to a better understanding of the origin of life, of system and synthetic biology, and of artificial life.
1605.09710
Michael Schaub
Benjamin R. C. Amor, Michael T. Schaub, Sophia N. Yaliraki, Mauricio Barahona
Prediction of allosteric sites and mediating interactions through bond-to-bond propensities
30 pages, including 17 pages main text + 13 pages supplementary information. 7 Figures and 3 tables (main) + 5 Figures and 6 tables (supplementary)
Nature Communications 7, Article number: 12477 (2016)
10.1038/ncomms12477
null
q-bio.BM physics.bio-ph physics.data-an q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Allosteric regulation is central to many biochemical processes. Allosteric sites provide a target to fine-tune protein activity, yet we lack computational methods to predict them. Here, we present an efficient graph-theoretical approach for identifying allosteric sites and the mediating interactions that connect them to the active site. Using an atomistic graph with edges weighted by covalent and non-covalent bond energies, we obtain a bond-to-bond propensity that quantifies the effect of instantaneous bond fluctuations propagating through the protein. We use this propensity to detect the sites and communication pathways most strongly linked to the active site, assessing their significance through quantile regression and comparison against a reference set of 100 generic proteins. We exemplify our method in detail with three well-studied allosteric proteins: caspase-1, CheY, and h-Ras, correctly predicting the location of the allosteric site and identifying key allosteric interactions. Consistent prediction of allosteric sites is then attained in a further set of 17 proteins known to exhibit allostery. Because our propensity measure runs in almost linear time, it offers a scalable approach to high-throughput searches for candidate allosteric sites.
[ { "created": "Tue, 31 May 2016 16:56:23 GMT", "version": "v1" } ]
2016-08-29
[ [ "Amor", "Benjamin R. C.", "" ], [ "Schaub", "Michael T.", "" ], [ "Yaliraki", "Sophia N.", "" ], [ "Barahona", "Mauricio", "" ] ]
Allosteric regulation is central to many biochemical processes. Allosteric sites provide a target to fine-tune protein activity, yet we lack computational methods to predict them. Here, we present an efficient graph-theoretical approach for identifying allosteric sites and the mediating interactions that connect them to the active site. Using an atomistic graph with edges weighted by covalent and non-covalent bond energies, we obtain a bond-to-bond propensity that quantifies the effect of instantaneous bond fluctuations propagating through the protein. We use this propensity to detect the sites and communication pathways most strongly linked to the active site, assessing their significance through quantile regression and comparison against a reference set of 100 generic proteins. We exemplify our method in detail with three well-studied allosteric proteins: caspase-1, CheY, and h-Ras, correctly predicting the location of the allosteric site and identifying key allosteric interactions. Consistent prediction of allosteric sites is then attained in a further set of 17 proteins known to exhibit allostery. Because our propensity measure runs in almost linear time, it offers a scalable approach to high-throughput searches for candidate allosteric sites.
2302.00767
Anton Orlichenko
Anton Orlichenko, Grant Daly, Ziyu Zhou, Anqi Liu, Hui Shen, Hong-Wen Deng, Yu-Ping Wang
ImageNomer: description of a functional connectivity and omics analysis tool and case study identifying a race confound
11 pages
null
null
null
q-bio.PE cs.LG q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most packages for the analysis of fMRI-based functional connectivity (FC) and genomic data are used with a programming language interface, lacking an easy-to-navigate GUI frontend. This exacerbates two problems found in these types of data: demographic confounds and quality control in the face of high dimensionality of features. The reason is that it is too slow and cumbersome to use a programming interface to create all the necessary visualizations required to identify all correlations, confounding effects, or quality control problems in a dataset. To remedy this situation, we have developed ImageNomer, a data visualization and analysis tool that allows inspection of both subject-level and cohort-level demographic, genomic, and imaging features. The software is Python-based, runs in a self-contained Docker image, and contains a browser-based GUI frontend. We demonstrate the usefulness of ImageNomer by identifying an unexpected race confound when predicting achievement scores in the Philadelphia Neurodevelopmental Cohort (PNC) dataset. In the past, many studies have attempted to use FC to identify achievement-related features in fMRI. Using ImageNomer, we find a clear potential for confounding effects of race. Using correlation analysis in the ImageNomer software, we show that FCs correlated with Wide Range Achievement Test (WRAT) score are in fact more highly correlated with race. Investigating further, we find that whereas both FC and SNP (genomic) features can account for 10-15\% of WRAT score variation, this predictive ability disappears when controlling for race. In this work, we demonstrate the advantage of our ImageNomer GUI tool in data exploration and confound detection. Additionally, this work identifies race as a strong confound in FC data and casts doubt on the possibility of finding unbiased achievement-related features in fMRI and SNP data of healthy adolescents.
[ { "created": "Wed, 1 Feb 2023 21:32:24 GMT", "version": "v1" }, { "created": "Wed, 11 Oct 2023 18:23:59 GMT", "version": "v2" } ]
2023-10-13
[ [ "Orlichenko", "Anton", "" ], [ "Daly", "Grant", "" ], [ "Zhou", "Ziyu", "" ], [ "Liu", "Anqi", "" ], [ "Shen", "Hui", "" ], [ "Deng", "Hong-Wen", "" ], [ "Wang", "Yu-Ping", "" ] ]
Most packages for the analysis of fMRI-based functional connectivity (FC) and genomic data are used with a programming language interface, lacking an easy-to-navigate GUI frontend. This exacerbates two problems found in these types of data: demographic confounds and quality control in the face of high dimensionality of features. The reason is that it is too slow and cumbersome to use a programming interface to create all the necessary visualizations required to identify all correlations, confounding effects, or quality control problems in a dataset. To remedy this situation, we have developed ImageNomer, a data visualization and analysis tool that allows inspection of both subject-level and cohort-level demographic, genomic, and imaging features. The software is Python-based, runs in a self-contained Docker image, and contains a browser-based GUI frontend. We demonstrate the usefulness of ImageNomer by identifying an unexpected race confound when predicting achievement scores in the Philadelphia Neurodevelopmental Cohort (PNC) dataset. In the past, many studies have attempted to use FC to identify achievement-related features in fMRI. Using ImageNomer, we find a clear potential for confounding effects of race. Using correlation analysis in the ImageNomer software, we show that FCs correlated with Wide Range Achievement Test (WRAT) score are in fact more highly correlated with race. Investigating further, we find that whereas both FC and SNP (genomic) features can account for 10-15\% of WRAT score variation, this predictive ability disappears when controlling for race. In this work, we demonstrate the advantage of our ImageNomer GUI tool in data exploration and confound detection. Additionally, this work identifies race as a strong confound in FC data and casts doubt on the possibility of finding unbiased achievement-related features in fMRI and SNP data of healthy adolescents.
2405.00255
Jessica Dafflon
Jessica Dafflon, Dustin Moraczewski, Eric Earl, Dylan M. Nielson, Gabriel Loewinger, Patrick McClure, Adam G. Thomas, and Francisco Pereira
Reliability and predictability of phenotype information from functional connectivity in large imaging datasets
null
null
null
null
q-bio.NC
http://creativecommons.org/publicdomain/zero/1.0/
One of the central objectives of contemporary neuroimaging research is to create predictive models that can disentangle the connection between patterns of functional connectivity across the entire brain and various behavioral traits. Previous studies have shown that models trained to predict behavioral features from the individual's functional connectivity have modest to poor performance. In this study, we trained models that predict observable individual traits (phenotypes) and their corresponding singular value decomposition (SVD) representations - herein referred to as latent phenotypes - from resting state functional connectivity. For this task, we predicted phenotypes in two large neuroimaging datasets: the Human Connectome Project (HCP) and the Philadelphia Neurodevelopmental Cohort (PNC). We illustrate the importance of regressing out confounds, which could significantly influence phenotype prediction. Our findings reveal that both phenotypes and their corresponding latent phenotypes yield similar predictive performance. Interestingly, only the first five latent phenotypes were reliably identified, and using just these reliable latent phenotypes for predicting phenotypes yielded a similar performance to using all latent phenotypes. This suggests that the predictable information is present in the first latent phenotypes, allowing the remainder to be filtered out without any harm in performance. This study sheds light on the intricate relationship between functional connectivity and the predictability and reliability of phenotypic information, with potential implications for enhancing predictive modeling in the realm of neuroimaging research.
[ { "created": "Wed, 1 May 2024 00:00:07 GMT", "version": "v1" } ]
2024-05-02
[ [ "Dafflon", "Jessica", "" ], [ "Moraczewski", "Dustin", "" ], [ "Earl", "Eric", "" ], [ "Nielson", "Dylan M.", "" ], [ "Loewinger", "Gabriel", "" ], [ "McClure", "Patrick", "" ], [ "Thomas", "Adam G.", "" ], [ "Pereira", "Francisco", "" ] ]
One of the central objectives of contemporary neuroimaging research is to create predictive models that can disentangle the connection between patterns of functional connectivity across the entire brain and various behavioral traits. Previous studies have shown that models trained to predict behavioral features from the individual's functional connectivity have modest to poor performance. In this study, we trained models that predict observable individual traits (phenotypes) and their corresponding singular value decomposition (SVD) representations - herein referred to as latent phenotypes - from resting state functional connectivity. For this task, we predicted phenotypes in two large neuroimaging datasets: the Human Connectome Project (HCP) and the Philadelphia Neurodevelopmental Cohort (PNC). We illustrate the importance of regressing out confounds, which could significantly influence phenotype prediction. Our findings reveal that both phenotypes and their corresponding latent phenotypes yield similar predictive performance. Interestingly, only the first five latent phenotypes were reliably identified, and using just these reliable latent phenotypes for predicting phenotypes yielded a similar performance to using all latent phenotypes. This suggests that the predictable information is present in the first latent phenotypes, allowing the remainder to be filtered out without any harm in performance. This study sheds light on the intricate relationship between functional connectivity and the predictability and reliability of phenotypic information, with potential implications for enhancing predictive modeling in the realm of neuroimaging research.
1703.04554
Derdei Bichara
Derdei Bichara and Abderrahman Iggidr
Multi-Patch and Multi-Group Epidemic Models: A New Framework
29 pages, 10 figures
Journal of Mathematical Biology, 2017
10.1007/s00285-017-1191-9
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a multi-patch and multi-group model that captures the dynamics of an infectious disease when the host is structured into an arbitrary number of groups and interacts in an arbitrary number of patches where the infection takes place. In this framework, we model host mobility, which depends on epidemiological status, by a Lagrangian approach. This framework is applied to a general SEIRS model and the basic reproduction number $\mathcal{R_0}$ is derived. The effects of heterogeneity in groups, patches and mobility patterns on $\mathcal{R_0}$ and disease prevalence are explored. Our results show that for a fixed number of groups, the basic reproduction number increases with respect to the number of patches and the host mobility patterns. Moreover, when the mobility matrix of susceptible individuals is of rank one, the basic reproduction number is explicitly determined and is found to be independent of the latter. The cases where mobility matrices are of rank one capture important modeling scenarios. Additionally, we study the global analysis of equilibria for some special cases. Numerical simulations are carried out to showcase the ramifications of mobility pattern matrices on disease prevalence and the basic reproduction number.
[ { "created": "Mon, 13 Mar 2017 21:19:39 GMT", "version": "v1" }, { "created": "Wed, 22 Nov 2017 02:42:33 GMT", "version": "v2" } ]
2017-11-23
[ [ "Bichara", "Derdei", "" ], [ "Iggidr", "Abderrahman", "" ] ]
We develop a multi-patch and multi-group model that captures the dynamics of an infectious disease when the host is structured into an arbitrary number of groups and interacts in an arbitrary number of patches where the infection takes place. In this framework, we model host mobility, which depends on epidemiological status, by a Lagrangian approach. This framework is applied to a general SEIRS model and the basic reproduction number $\mathcal{R_0}$ is derived. The effects of heterogeneity in groups, patches and mobility patterns on $\mathcal{R_0}$ and disease prevalence are explored. Our results show that for a fixed number of groups, the basic reproduction number increases with respect to the number of patches and the host mobility patterns. Moreover, when the mobility matrix of susceptible individuals is of rank one, the basic reproduction number is explicitly determined and is found to be independent of the latter. The cases where mobility matrices are of rank one capture important modeling scenarios. Additionally, we study the global analysis of equilibria for some special cases. Numerical simulations are carried out to showcase the ramifications of mobility pattern matrices on disease prevalence and the basic reproduction number.
2209.03718
Adrien Doerig
Adrien Doerig, Rowan Sommers, Katja Seeliger, Blake Richards, Jenann Ismael, Grace Lindsay, Konrad Kording, Talia Konkle, Marcel A. J. Van Gerven, Nikolaus Kriegeskorte and Tim C. Kietzmann
The neuroconnectionist research programme
23 pages, 4 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Artificial Neural Networks (ANNs) inspired by biology are beginning to be widely used to model behavioral and neural data, an approach we call neuroconnectionism. ANNs have been lauded as the current best models of information processing in the brain, but also criticized for failing to account for basic cognitive functions. We propose that arguing about the successes and failures of a restricted set of current ANNs is the wrong approach to assess the promise of neuroconnectionism. Instead, we take inspiration from the philosophy of science, and in particular from Lakatos, who showed that the core of scientific research programmes is often not directly falsifiable, but should be assessed by its capacity to generate novel insights. Following this view, we present neuroconnectionism as a cohesive large-scale research programme centered around ANNs as a computational language for expressing falsifiable theories about brain computation. We describe the core of the programme, the underlying computational framework and its tools for testing specific neuroscientific hypotheses. Taking a longitudinal view, we review past and present neuroconnectionist projects and their responses to challenges, and argue that the research programme is highly progressive, generating new and otherwise unreachable insights into the workings of the brain.
[ { "created": "Thu, 8 Sep 2022 11:24:08 GMT", "version": "v1" } ]
2022-09-09
[ [ "Doerig", "Adrien", "" ], [ "Sommers", "Rowan", "" ], [ "Seeliger", "Katja", "" ], [ "Richards", "Blake", "" ], [ "Ismael", "Jenann", "" ], [ "Lindsay", "Grace", "" ], [ "Kording", "Konrad", "" ], [ "Konkle", "Talia", "" ], [ "Van Gerven", "Marcel A. J.", "" ], [ "Kriegeskorte", "Nikolaus", "" ], [ "Kietzmann", "Tim C.", "" ] ]
Artificial Neural Networks (ANNs) inspired by biology are beginning to be widely used to model behavioral and neural data, an approach we call neuroconnectionism. ANNs have been lauded as the current best models of information processing in the brain, but also criticized for failing to account for basic cognitive functions. We propose that arguing about the successes and failures of a restricted set of current ANNs is the wrong approach to assess the promise of neuroconnectionism. Instead, we take inspiration from the philosophy of science, and in particular from Lakatos, who showed that the core of scientific research programmes is often not directly falsifiable, but should be assessed by its capacity to generate novel insights. Following this view, we present neuroconnectionism as a cohesive large-scale research programme centered around ANNs as a computational language for expressing falsifiable theories about brain computation. We describe the core of the programme, the underlying computational framework and its tools for testing specific neuroscientific hypotheses. Taking a longitudinal view, we review past and present neuroconnectionist projects and their responses to challenges, and argue that the research programme is highly progressive, generating new and otherwise unreachable insights into the workings of the brain.
1807.09481
Andreas Mayer
Andreas Mayer, Yaojun Zhang, Alan S. Perelson, Ned S. Wingreen
Regulation of T cell expansion by antigen presentation dynamics
null
PNAS 2019, 116 (13) 5914-5919
10.1073/pnas.1812800116
null
q-bio.PE q-bio.CB
http://creativecommons.org/licenses/by-sa/4.0/
An essential feature of the adaptive immune system is the proliferation of antigen-specific lymphocytes during an immune reaction to form a large pool of effector cells. This proliferation must be regulated to ensure an effective response to infection while avoiding immunopathology. Recent experiments in mice have demonstrated that the expansion of a specific clone of T cells in response to cognate antigen obeys a striking inverse power law with respect to the initial number of T cells. Here, we show that such a relationship arises naturally from a model in which T cell expansion is limited by decaying levels of presented antigen. The same model also accounts for the observed dependence of T cell expansion on affinity for antigen and on the kinetics of antigen administration. Extending the model to address expansion of multiple T cell clones competing for antigen, we find that higher affinity clones can suppress the proliferation of lower affinity clones, thereby promoting the specificity of the response. Employing the model to derive optimal vaccination protocols, we find that exponentially increasing antigen doses can achieve a nearly optimized response. We thus conclude that the dynamics of presented antigen is a key regulator of both the size and specificity of the adaptive immune response.
[ { "created": "Wed, 25 Jul 2018 08:39:44 GMT", "version": "v1" } ]
2021-01-06
[ [ "Mayer", "Andreas", "" ], [ "Zhang", "Yaojun", "" ], [ "Perelson", "Alan S.", "" ], [ "Wingreen", "Ned S.", "" ] ]
An essential feature of the adaptive immune system is the proliferation of antigen-specific lymphocytes during an immune reaction to form a large pool of effector cells. This proliferation must be regulated to ensure an effective response to infection while avoiding immunopathology. Recent experiments in mice have demonstrated that the expansion of a specific clone of T cells in response to cognate antigen obeys a striking inverse power law with respect to the initial number of T cells. Here, we show that such a relationship arises naturally from a model in which T cell expansion is limited by decaying levels of presented antigen. The same model also accounts for the observed dependence of T cell expansion on affinity for antigen and on the kinetics of antigen administration. Extending the model to address expansion of multiple T cell clones competing for antigen, we find that higher affinity clones can suppress the proliferation of lower affinity clones, thereby promoting the specificity of the response. Employing the model to derive optimal vaccination protocols, we find that exponentially increasing antigen doses can achieve a nearly optimized response. We thus conclude that the dynamics of presented antigen is a key regulator of both the size and specificity of the adaptive immune response.
2401.03277
Luca Cattelani
Luca Cattelani, Giusy del Giudice, Angela Serra, Michele Fratello, Laura Aliisa Saarim\"aki, Vittorio Fortino, Antonio Federico, Periklis Tsiros, Marika Mannerstr\"om, Tarja Toimela, Tommaso Serchi, Iseult Lynch, Philip Doganis, Haralambos Sarimveis, Dario Greco
Quantitative in vitro to in vivo extrapolation for human toxicology and drug development
We authors have decided to withdraw the preprint from arXiv at this time. This decision stems from our desire to revisit and refine certain specific points in our work
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional animal testing for toxicity is expensive, time-consuming, ethically questioned, sometimes inaccurate because of the necessity to extrapolate from animal to human, and in most cases not formally validated according to modern standards. This is driving regulatory bodies and companies in backing alternative methods focusing on in silico and in vitro approaches. These are complex to implement and validate, and their wide adoption is not yet established despite legal directives providing an imperative. It is difficult to link a cell-level response to effects on a whole organism, but the advances in high-throughput toxicogenomics towards elucidating the mechanism of action of substances are gradually reducing this gap and fostering the adoption of Next Generation Safety Assessment approaches. Quantitative in vitro to in vivo extrapolation (QIVIVE) methods hold the promise to reveal how to use in vitro -omics data to predict the potential for in vivo toxicity. They could improve lead compound prioritisation, reduce time and costs, including in the number of animal lives, and help with the complexity of extrapolating between species. We provide a description of the QIVIVE state of the art, including how the problems of dosing and timing are being approached, how in silico simulation can take into account the variability of individuals, and how multiple techniques can be integrated to face complex tasks like the prediction of long-term toxicity, including a close look into the open problems and challenges ahead.
[ { "created": "Sat, 6 Jan 2024 18:32:19 GMT", "version": "v1" }, { "created": "Wed, 10 Jan 2024 14:09:43 GMT", "version": "v2" } ]
2024-01-11
[ [ "Cattelani", "Luca", "" ], [ "del Giudice", "Giusy", "" ], [ "Serra", "Angela", "" ], [ "Fratello", "Michele", "" ], [ "Saarimäki", "Laura Aliisa", "" ], [ "Fortino", "Vittorio", "" ], [ "Federico", "Antonio", "" ], [ "Tsiros", "Periklis", "" ], [ "Mannerström", "Marika", "" ], [ "Toimela", "Tarja", "" ], [ "Serchi", "Tommaso", "" ], [ "Lynch", "Iseult", "" ], [ "Doganis", "Philip", "" ], [ "Sarimveis", "Haralambos", "" ], [ "Greco", "Dario", "" ] ]
Traditional animal testing for toxicity is expensive, time-consuming, ethically questioned, sometimes inaccurate because of the necessity to extrapolate from animal to human, and in most cases not formally validated according to modern standards. This is driving regulatory bodies and companies in backing alternative methods focusing on in silico and in vitro approaches. These are complex to implement and validate, and their wide adoption is not yet established despite legal directives providing an imperative. It is difficult to link a cell-level response to effects on a whole organism, but the advances in high-throughput toxicogenomics towards elucidating the mechanism of action of substances are gradually reducing this gap and fostering the adoption of Next Generation Safety Assessment approaches. Quantitative in vitro to in vivo extrapolation (QIVIVE) methods hold the promise to reveal how to use in vitro -omics data to predict the potential for in vivo toxicity. They could improve lead compound prioritisation, reduce time and costs, including in the number of animal lives, and help with the complexity of extrapolating between species. We provide a description of the QIVIVE state of the art, including how the problems of dosing and timing are being approached, how in silico simulation can take into account the variability of individuals, and how multiple techniques can be integrated to face complex tasks like the prediction of long-term toxicity, including a close look into the open problems and challenges ahead.
q-bio/0403035
Byung Mook Weon
Byung Mook Weon
Demographic trajectories for supercentenarians
null
null
null
null
q-bio.PE
null
A fundamental question in aging research concerns the demographic trajectories at the highest ages, especially for supercentenarians (persons aged 110 or more). We wish to demonstrate that the Weon model enables scientists to describe the demographic trajectories for supercentenarians. We evaluate the average survival data from eight modern countries and the valid and complete data for supercentenarians from the International Database on Longevity (Robine and Vaupel, (2002) North American Actuarial Journal 6, 54-63). The results suggest that the Weon model predicts the maximum longevity to exist around ages 120-130, which indicates that there is an intrinsic limit to human longevity, and that the Weon model allows the best possible description of the demographic trajectories for supercentenarians.
[ { "created": "Thu, 25 Mar 2004 04:13:27 GMT", "version": "v1" } ]
2007-05-23
[ [ "Weon", "Byung Mook", "" ] ]
A fundamental question in aging research concerns the demographic trajectories at the highest ages, especially for supercentenarians (persons aged 110 or more). We wish to demonstrate that the Weon model enables scientists to describe the demographic trajectories for supercentenarians. We evaluate the average survival data from eight modern countries and the valid and complete data for supercentenarians from the International Database on Longevity (Robine and Vaupel, (2002) North American Actuarial Journal 6, 54-63). The results suggest that the Weon model predicts the maximum longevity to exist around ages 120-130, which indicates that there is an intrinsic limit to human longevity, and that the Weon model allows the best possible description of the demographic trajectories for supercentenarians.
2005.07661
Maria Soledad Aronna
M. Soledad Aronna, Roberto Guglielmi, Lucas M. Moschen
A model for COVID-19 with isolation, quarantine and testing as control measures
null
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article we propose a compartmental model for the dynamics of Coronavirus Disease 2019 (COVID-19). We take into account the presence of asymptomatic infections and the main policies that have been adopted so far to contain the epidemic: isolation (or social distancing) of a portion of the population, quarantine for confirmed cases and testing. We model isolation by separating the population into two groups: one composed of key workers who keep working during the pandemic and have a usual contact rate, and a second group consisting of people who are enforced/recommended to stay at home. We refer to quarantine as strict isolation, and it is applied to confirmed infected cases. In the proposed model, the proportion of people in isolation, the level of contact reduction and the testing rate are control parameters that can vary in time, representing policies that evolve in different stages. We obtain an explicit expression for the basic reproduction number $\mathcal{R}_0$ in terms of the parameters of the disease and of the control policies. In this way we can quantify the effect that isolation and testing have in the evolution of the epidemic. We present a series of simulations to illustrate different realistic scenarios. From the expression of $\mathcal{R}_0$ and the simulations we conclude that isolation (social distancing) and testing among asymptomatic cases are fundamental actions to control the epidemic, and the stricter these measures are and the sooner they are implemented, the more lives can be saved. Additionally, we show that people that remain in isolation significantly reduce their probability of contagion, so risk groups should be recommended to maintain a low contact rate during the course of the epidemic.
[ { "created": "Fri, 15 May 2020 17:20:59 GMT", "version": "v1" } ]
2020-05-18
[ [ "Aronna", "M. Soledad", "" ], [ "Guglielmi", "Roberto", "" ], [ "Moschen", "Lucas M.", "" ] ]
In this article we propose a compartmental model for the dynamics of Coronavirus Disease 2019 (COVID-19). We take into account the presence of asymptomatic infections and the main policies that have been adopted so far to contain the epidemic: isolation (or social distancing) of a portion of the population, quarantine for confirmed cases and testing. We model isolation by separating the population into two groups: one composed of key workers who keep working during the pandemic and have a usual contact rate, and a second group consisting of people who are enforced/recommended to stay at home. We refer to quarantine as strict isolation, and it is applied to confirmed infected cases. In the proposed model, the proportion of people in isolation, the level of contact reduction and the testing rate are control parameters that can vary in time, representing policies that evolve in different stages. We obtain an explicit expression for the basic reproduction number $\mathcal{R}_0$ in terms of the parameters of the disease and of the control policies. In this way we can quantify the effect that isolation and testing have in the evolution of the epidemic. We present a series of simulations to illustrate different realistic scenarios. From the expression of $\mathcal{R}_0$ and the simulations we conclude that isolation (social distancing) and testing among asymptomatic cases are fundamental actions to control the epidemic, and the stricter these measures are and the sooner they are implemented, the more lives can be saved. Additionally, we show that people that remain in isolation significantly reduce their probability of contagion, so risk groups should be recommended to maintain a low contact rate during the course of the epidemic.
0707.1579
Yurie Okabe
Yurie Okabe and Masaki Sasai
Stable stochastic dynamics in yeast cell cycle
main text, 2 supporting texts, 3 supplementary tables
null
10.1529/biophysj.107.109991
null
q-bio.MN
null
Chemical reactions in cells are subject to intense stochastic fluctuations. An important question is how the fundamental physiological behavior of the cell is kept stable against those noisy perturbations. In this paper a stochastic model of the cell cycle of budding yeast is constructed to analyze the effects of noise on the cell cycle oscillation. The model predicts intense noise in levels of mRNAs and proteins, and the simulated protein levels explain the observed statistical tendency of noise in populations of synchronous and asynchronous cells. In spite of intense noise in levels of proteins and mRNAs, the cell cycle is stable enough to bring the largely perturbed cells back to the physiological cyclic oscillation. The model shows that consecutively appearing fixed points are the origin of this stability of the cell cycle.
[ { "created": "Wed, 11 Jul 2007 08:50:50 GMT", "version": "v1" } ]
2009-11-13
[ [ "Okabe", "Yurie", "" ], [ "Sasai", "Masaki", "" ] ]
Chemical reactions in the cell are subject to intense stochastic fluctuations. An important question is how the fundamental physiological behavior of the cell is kept stable against those noisy perturbations. In this paper a stochastic model of the cell cycle of budding yeast is constructed to analyze the effects of noise on the cell cycle oscillation. The model predicts intense noise in levels of mRNAs and proteins, and the simulated protein levels explain the observed statistical tendency of noise in populations of synchronous and asynchronous cells. In spite of intense noise in levels of proteins and mRNAs, the cell cycle is stable enough to bring largely perturbed cells back to the physiological cyclic oscillation. The model shows that consecutively appearing fixed points are the origin of this stability of the cell cycle.
0711.4498
Raja Paul
Raja Paul
Flow-correlated dilution of a regular network leads to a percolating network during tumor induced angiogenesis
15 pages, 12 figures
null
null
null
q-bio.CB q-bio.QM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a simplified stochastic model for the vascularization of a growing tumor, incorporating the formation of new blood vessels at the tumor periphery as well as their regression in the tumor center. The resulting morphology of the tumor vasculature differs drastically from the original one. We demonstrate that the probabilistic vessel collapse has to be correlated with the blood shear force in order to yield percolating network structures. The resulting tumor vasculature displays fractal properties. Fractal dimension, microvascular density (MVD), blood flow and shear force have been computed for a wide range of parameters.
[ { "created": "Tue, 27 Nov 2007 23:30:28 GMT", "version": "v1" }, { "created": "Mon, 22 Sep 2008 22:55:48 GMT", "version": "v2" }, { "created": "Fri, 26 Sep 2008 02:29:21 GMT", "version": "v3" }, { "created": "Wed, 16 Sep 2009 12:36:51 GMT", "version": "v4" } ]
2009-09-16
[ [ "Paul", "Raja", "" ] ]
We study a simplified stochastic model for the vascularization of a growing tumor, incorporating the formation of new blood vessels at the tumor periphery as well as their regression in the tumor center. The resulting morphology of the tumor vasculature differs drastically from the original one. We demonstrate that the probabilistic vessel collapse has to be correlated with the blood shear force in order to yield percolating network structures. The resulting tumor vasculature displays fractal properties. Fractal dimension, microvascular density (MVD), blood flow and shear force have been computed for a wide range of parameters.
1306.4167
Tommaso Biancalani
Tommaso Biancalani, Louise Dyson and Alan J. McKane
Noise-Induced Bistable States and Their Mean Switching Time in Foraging Colonies
8 pages, 5 figures. See also a "light-hearted" introduction: http://www.youtube.com/watch?v=m37Fe4qjeZk
null
10.1103/PhysRevLett.112.038101
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate a type of bistability where noise not only causes transitions between stable states, but also constructs the states themselves. We focus on the experimentally well-studied system of ants choosing between two food sources to illustrate the essential points, but the ideas are more general. The mean time for switching between the two bistable states of the system is calculated. This suggests a procedure for estimating, in a real system, the critical population size above which bistability ceases to occur.
[ { "created": "Tue, 18 Jun 2013 12:46:22 GMT", "version": "v1" }, { "created": "Wed, 19 Jun 2013 15:11:23 GMT", "version": "v2" }, { "created": "Fri, 3 Jan 2014 15:58:09 GMT", "version": "v3" } ]
2015-06-16
[ [ "Biancalani", "Tommaso", "" ], [ "Dyson", "Louise", "" ], [ "McKane", "Alan J.", "" ] ]
We investigate a type of bistability where noise not only causes transitions between stable states, but also constructs the states themselves. We focus on the experimentally well-studied system of ants choosing between two food sources to illustrate the essential points, but the ideas are more general. The mean time for switching between the two bistable states of the system is calculated. This suggests a procedure for estimating, in a real system, the critical population size above which bistability ceases to occur.
2306.04501
Dozie Iwuh
Dozie Iwuh
Quantum Brain Dynamics. A Possibility of Having a Quantum Interpretation of the Brain
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
What Quantum Brain Dynamics (QBD) considers is not just these other functions of the brain, since they can be well analyzed with the workings of classical mechanics (even though they still admit a quantum description). It rather considers two specific functions above all else: consciousness and memory. QBD umbrella-covers aspects of quantum brain analysis such as quantum-consciousness, quantum-mind and quantum-brain. The inspiration behind the Quantum Interpretation of the Brain (QIB) is traceable to the 1944 article written by E. Schrodinger, What is Life, in which he presents how a living organism evades decay to equilibrium by the fact of negentropy: life, in its ordered macroscopic state, is created in an environment of disorder, which moves against the second law of thermodynamics. The life that is created and sustained arises from an interaction that the organism engages in with the environment. This interaction is microscopic, albeit quantum; it is an interaction that underscores the reality of quantum entanglement (which also plays host to the superposition of quantum states). The quantum interpretation of the brain is nascent yet burgeoning, as it might be that necessary tool required for a better articulation and comprehension of the brain.
[ { "created": "Mon, 5 Jun 2023 10:45:22 GMT", "version": "v1" } ]
2023-06-08
[ [ "Iwuh", "Dozie", "" ] ]
What Quantum Brain Dynamics (QBD) considers is not just these other functions of the brain, since they can be well analyzed with the workings of classical mechanics (even though they still admit a quantum description). It rather considers two specific functions above all else: consciousness and memory. QBD umbrella-covers aspects of quantum brain analysis such as quantum-consciousness, quantum-mind and quantum-brain. The inspiration behind the Quantum Interpretation of the Brain (QIB) is traceable to the 1944 article written by E. Schrodinger, What is Life, in which he presents how a living organism evades decay to equilibrium by the fact of negentropy: life, in its ordered macroscopic state, is created in an environment of disorder, which moves against the second law of thermodynamics. The life that is created and sustained arises from an interaction that the organism engages in with the environment. This interaction is microscopic, albeit quantum; it is an interaction that underscores the reality of quantum entanglement (which also plays host to the superposition of quantum states). The quantum interpretation of the brain is nascent yet burgeoning, as it might be that necessary tool required for a better articulation and comprehension of the brain.
2401.00381
Hui Wei Dr.
Hui Wei, Chenyue Feng, Jianning Zhang
Modeling of Memory Mechanisms in Cerebral Cortex and Simulation of Storage Performance
null
null
null
null
q-bio.NC cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
At the intersection of computation and cognitive science, graph theory is utilized as a formalized description of complex relationships and structures. Traditional graph models are often static, lacking dynamic and autonomous behavioral patterns. They rely on algorithms with a global view, significantly differing from biological neural networks, in which, to simulate information storage and retrieval processes, the limitations of centralized algorithms must be overcome. This study introduces a directed graph model that equips each node with adaptive learning and decision-making capabilities, thereby facilitating decentralized dynamic information storage and modeling and simulation of the brain's memory process. We abstract different storage instances as directed graph paths, transforming the storage of information into the assignment, discrimination, and extraction of different paths. To address writing and reading challenges, each node has a personalized adaptive learning ability. A storage algorithm without a God's eye view is developed, where each node uses its limited neighborhood information to facilitate the extension, formation, solidification, and awakening of directed graph paths, achieving competitive, reciprocal, and sustainable utilization of limited resources. Storage behavior occurs in each node, with adaptive learning behaviors of nodes concretized in a microcircuit centered around a variable resistor, simulating the electrophysiological behavior of neurons. Under the constraints of neurobiology on the anatomy and electrophysiology of biological neural networks, this model offers a plausible explanation for the mechanism of memory realization, providing a comprehensive, system-level experimental validation of the memory trace theory.
[ { "created": "Sun, 31 Dec 2023 03:27:20 GMT", "version": "v1" }, { "created": "Fri, 7 Jun 2024 23:48:04 GMT", "version": "v2" } ]
2024-06-11
[ [ "Wei", "Hui", "" ], [ "Feng", "Chenyue", "" ], [ "Zhang", "Jianning", "" ] ]
At the intersection of computation and cognitive science, graph theory is utilized as a formalized description of complex relationships and structures. Traditional graph models are often static, lacking dynamic and autonomous behavioral patterns. They rely on algorithms with a global view, significantly differing from biological neural networks, in which, to simulate information storage and retrieval processes, the limitations of centralized algorithms must be overcome. This study introduces a directed graph model that equips each node with adaptive learning and decision-making capabilities, thereby facilitating decentralized dynamic information storage and modeling and simulation of the brain's memory process. We abstract different storage instances as directed graph paths, transforming the storage of information into the assignment, discrimination, and extraction of different paths. To address writing and reading challenges, each node has a personalized adaptive learning ability. A storage algorithm without a God's eye view is developed, where each node uses its limited neighborhood information to facilitate the extension, formation, solidification, and awakening of directed graph paths, achieving competitive, reciprocal, and sustainable utilization of limited resources. Storage behavior occurs in each node, with adaptive learning behaviors of nodes concretized in a microcircuit centered around a variable resistor, simulating the electrophysiological behavior of neurons. Under the constraints of neurobiology on the anatomy and electrophysiology of biological neural networks, this model offers a plausible explanation for the mechanism of memory realization, providing a comprehensive, system-level experimental validation of the memory trace theory.
2306.16432
Andrea Tosin
Tommaso Lorenzi, Elisa Paparelli, Andrea Tosin
Modelling coevolutionary dynamics in heterogeneous SI epidemiological systems across scales
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a new structured compartmental model for the coevolutionary dynamics between susceptible and infectious individuals in heterogeneous SI epidemiological systems. In this model, the susceptible compartment is structured by a continuous variable that represents the level of resistance to infection of susceptible individuals, while the infectious compartment is structured by a continuous variable that represents the viral load of infectious individuals. We first formulate an individual-based model wherein the dynamics of single individuals is described through stochastic processes, which permits a fine-grain representation of individual dynamics and captures stochastic variability in evolutionary trajectories amongst individuals. Next we formally derive the mesoscopic counterpart of this model, which consists of a system of coupled integro-differential equations for the population density functions of susceptible and infectious individuals. Then we consider an appropriately rescaled version of this system and we carry out formal asymptotic analysis to derive the corresponding macroscopic model, which comprises a system of coupled ordinary differential equations for the proportions of susceptible and infectious individuals, the mean level of resistance to infection of susceptible individuals, and the mean viral load of infectious individuals. Overall, this leads to a coherent mathematical representation of the coevolutionary dynamics between susceptible and infectious individuals across scales. We provide well-posedness results for the mesoscopic and macroscopic models, and we show that there is excellent agreement between analytical results on the long-time behaviour of the components of the solution to the macroscopic model, the results of Monte Carlo simulations of the individual-based model, and numerical solutions of the macroscopic model.
[ { "created": "Wed, 28 Jun 2023 16:41:10 GMT", "version": "v1" }, { "created": "Thu, 14 Mar 2024 15:18:43 GMT", "version": "v2" } ]
2024-03-15
[ [ "Lorenzi", "Tommaso", "" ], [ "Paparelli", "Elisa", "" ], [ "Tosin", "Andrea", "" ] ]
We develop a new structured compartmental model for the coevolutionary dynamics between susceptible and infectious individuals in heterogeneous SI epidemiological systems. In this model, the susceptible compartment is structured by a continuous variable that represents the level of resistance to infection of susceptible individuals, while the infectious compartment is structured by a continuous variable that represents the viral load of infectious individuals. We first formulate an individual-based model wherein the dynamics of single individuals is described through stochastic processes, which permits a fine-grain representation of individual dynamics and captures stochastic variability in evolutionary trajectories amongst individuals. Next we formally derive the mesoscopic counterpart of this model, which consists of a system of coupled integro-differential equations for the population density functions of susceptible and infectious individuals. Then we consider an appropriately rescaled version of this system and we carry out formal asymptotic analysis to derive the corresponding macroscopic model, which comprises a system of coupled ordinary differential equations for the proportions of susceptible and infectious individuals, the mean level of resistance to infection of susceptible individuals, and the mean viral load of infectious individuals. Overall, this leads to a coherent mathematical representation of the coevolutionary dynamics between susceptible and infectious individuals across scales. We provide well-posedness results for the mesoscopic and macroscopic models, and we show that there is excellent agreement between analytical results on the long-time behaviour of the components of the solution to the macroscopic model, the results of Monte Carlo simulations of the individual-based model, and numerical solutions of the macroscopic model.
q-bio/0605024
Steven Kelk
Leo van Iersel, Judith Keijsper, Steven Kelk, Leen Stougie
Shorelines of islands of tractability: Algorithms for parsimony and minimum perfect phylogeny haplotyping problems
Updated version of our "Beaches of Islands of Tractability..." paper (which appeared in WABI2006 and as a Technische Universiteit Eindhoven technical report.) This version contains new approximation results and has been submitted (Jan 2007) to IEEE/TCBB journal (IEEE/ACM Transactions on Computational Biology and Bioinformatics.)
null
null
null
q-bio.OT q-bio.PE
null
The problem Parsimony Haplotyping (PH) asks for the smallest set of haplotypes which can explain a given set of genotypes, and the problem Minimum Perfect Phylogeny Haplotyping (MPPH) asks for the smallest such set which also allows the haplotypes to be embedded in a perfect phylogeny, an evolutionary tree with biologically-motivated restrictions. For PH, we extend recent work by further mapping the interface between ``easy'' and ``hard'' instances, within the framework of (k,l)-bounded instances where the number of 2's per column and row of the input matrix is restricted. By exploring, in the same way, the tractability frontier of MPPH we provide the first concrete, positive results for this problem, and the algorithms underpinning these results offer new insights about how MPPH might be further tackled in the future. In addition, we construct for both PH and MPPH polynomial time approximation algorithms, based on properties of the columns of the input matrix. We conclude with an overview of intriguing open problems in PH and MPPH.
[ { "created": "Tue, 16 May 2006 16:03:01 GMT", "version": "v1" }, { "created": "Mon, 26 Jun 2006 08:11:47 GMT", "version": "v2" }, { "created": "Fri, 12 Jan 2007 09:26:44 GMT", "version": "v3" } ]
2007-05-23
[ [ "van Iersel", "Leo", "" ], [ "Keijsper", "Judith", "" ], [ "Kelk", "Steven", "" ], [ "Stougie", "Leen", "" ] ]
The problem Parsimony Haplotyping (PH) asks for the smallest set of haplotypes which can explain a given set of genotypes, and the problem Minimum Perfect Phylogeny Haplotyping (MPPH) asks for the smallest such set which also allows the haplotypes to be embedded in a perfect phylogeny, an evolutionary tree with biologically-motivated restrictions. For PH, we extend recent work by further mapping the interface between ``easy'' and ``hard'' instances, within the framework of (k,l)-bounded instances where the number of 2's per column and row of the input matrix is restricted. By exploring, in the same way, the tractability frontier of MPPH we provide the first concrete, positive results for this problem, and the algorithms underpinning these results offer new insights about how MPPH might be further tackled in the future. In addition, we construct for both PH and MPPH polynomial time approximation algorithms, based on properties of the columns of the input matrix. We conclude with an overview of intriguing open problems in PH and MPPH.
2208.12811
Wu Xinxing
Xinxing Wu and Chong Peng and Gregory Jicha and Donna Wilcock and Qiang Cheng
PRIME: Uncovering Circadian Oscillation Patterns and Associations with AD in Untimed Genome-wide Gene Expression across Multiple Brain Regions
10 pages
null
null
null
q-bio.GN cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The disruption of circadian rhythm is a cardinal symptom for Alzheimer's disease (AD) patients. The full circadian rhythm orchestration of gene expression in the human brain and its inherent associations with AD remain largely unknown. We present a novel comprehensive approach, PRIME, to detect and analyze rhythmic oscillation patterns in untimed high-dimensional gene expression data across multiple datasets. To demonstrate the utility of PRIME, firstly, we validate it by a time course expression dataset from mouse liver as a cross-species and cross-organ validation. Then, we apply it to study oscillation patterns in untimed genome-wide gene expression from 19 human brain regions of controls and AD patients. Our findings reveal clear, synchronized oscillation patterns in 15 pairs of brain regions of controls, while these oscillation patterns either disappear or dim in AD. It is worth noting that PRIME discovers the circadian rhythmic patterns without requiring the sample's timestamps. The codes for PRIME, along with codes to reproduce the figures in this paper, are available at https://github.com/xinxingwu-uk/PRIME.
[ { "created": "Thu, 25 Aug 2022 21:47:22 GMT", "version": "v1" } ]
2022-09-26
[ [ "Wu", "Xinxing", "" ], [ "Peng", "Chong", "" ], [ "Jicha", "Gregory", "" ], [ "Wilcock", "Donna", "" ], [ "Cheng", "Qiang", "" ] ]
The disruption of circadian rhythm is a cardinal symptom for Alzheimer's disease (AD) patients. The full circadian rhythm orchestration of gene expression in the human brain and its inherent associations with AD remain largely unknown. We present a novel comprehensive approach, PRIME, to detect and analyze rhythmic oscillation patterns in untimed high-dimensional gene expression data across multiple datasets. To demonstrate the utility of PRIME, firstly, we validate it by a time course expression dataset from mouse liver as a cross-species and cross-organ validation. Then, we apply it to study oscillation patterns in untimed genome-wide gene expression from 19 human brain regions of controls and AD patients. Our findings reveal clear, synchronized oscillation patterns in 15 pairs of brain regions of controls, while these oscillation patterns either disappear or dim in AD. It is worth noting that PRIME discovers the circadian rhythmic patterns without requiring the sample's timestamps. The codes for PRIME, along with codes to reproduce the figures in this paper, are available at https://github.com/xinxingwu-uk/PRIME.
q-bio/0510005
J\'er\^ome Benoit
Jerome Benoit, Ana Nunes and Margarida Telo da Gama
Pair Approximation Models for Disease Spread
6 pages, 3 figures, LaTeX2e+SVJour+AmSLaTeX, NEXTSigmaPhi 2005; metadata title corrected wrt paper title
Eur. Phys. J. B 50 (2006), no. 1-2, 177--181
10.1140/epjb/e2006-00096-x
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a Susceptible-Infective-Recovered (SIR) model, where the mechanism for the renewal of susceptibles is demographic, on a ring with next nearest neighbour interactions, and a family of correlated pair approximations (CPA), parametrized by a measure of the relative contributions of loops and open triplets of the sites involved in the infection process. We have found that the phase diagram of the CPA, at fixed coordination number, changes qualitatively as the relative weight of the loops increases, from the phase diagram of the uncorrelated pair approximation to phase diagrams typical of one-dimensional systems. In addition, we have performed computer simulations of the same model and shown that while the CPA with a constant correlation parameter cannot describe the global behaviour of the model, a reasonable description of the endemic equilibria as well as of the phase diagram may be obtained by allowing the parameter to depend on the demographic rate.
[ { "created": "Mon, 3 Oct 2005 08:55:29 GMT", "version": "v1" }, { "created": "Tue, 11 Oct 2005 18:50:39 GMT", "version": "v2" }, { "created": "Thu, 24 Nov 2005 20:09:26 GMT", "version": "v3" }, { "created": "Tue, 7 Feb 2006 20:49:30 GMT", "version": "v4" }, { "created": "Fri, 31 May 2013 22:50:46 GMT", "version": "v5" } ]
2013-06-04
[ [ "Benoit", "Jerome", "" ], [ "Nunes", "Ana", "" ], [ "da Gama", "Margarida Telo", "" ] ]
We consider a Susceptible-Infective-Recovered (SIR) model, where the mechanism for the renewal of susceptibles is demographic, on a ring with next nearest neighbour interactions, and a family of correlated pair approximations (CPA), parametrized by a measure of the relative contributions of loops and open triplets of the sites involved in the infection process. We have found that the phase diagram of the CPA, at fixed coordination number, changes qualitatively as the relative weight of the loops increases, from the phase diagram of the uncorrelated pair approximation to phase diagrams typical of one-dimensional systems. In addition, we have performed computer simulations of the same model and shown that while the CPA with a constant correlation parameter cannot describe the global behaviour of the model, a reasonable description of the endemic equilibria as well as of the phase diagram may be obtained by allowing the parameter to depend on the demographic rate.
1703.05414
Andrew Sornborger
Andrew T. Sornborger and James D. Lauderdale
A Multitaper, Causal Decomposition for Stochastic, Multivariate Time Series: Application to High-Frequency Calcium Imaging Data
This invited paper was presented at the Asilomar 50th Conference on Signals, Systems, and Computers
null
null
null
q-bio.NC physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural data analysis has increasingly incorporated causal information to study circuit connectivity. Dimensional reduction forms the basis of most analyses of large multivariate time series. Here, we present a new, multitaper-based decomposition for stochastic, multivariate time series that acts on the covariance of the time series at all lags, $C(\tau)$, as opposed to standard methods that decompose the time series, $\mathbf{X}(t)$, using only information at zero-lag. In both simulated and neural imaging examples, we demonstrate that methods that neglect the full causal structure may be discarding important dynamical information in a time series.
[ { "created": "Wed, 15 Mar 2017 22:54:15 GMT", "version": "v1" } ]
2017-03-17
[ [ "Sornborger", "Andrew T.", "" ], [ "Lauderdale", "James D.", "" ] ]
Neural data analysis has increasingly incorporated causal information to study circuit connectivity. Dimensional reduction forms the basis of most analyses of large multivariate time series. Here, we present a new, multitaper-based decomposition for stochastic, multivariate time series that acts on the covariance of the time series at all lags, $C(\tau)$, as opposed to standard methods that decompose the time series, $\mathbf{X}(t)$, using only information at zero-lag. In both simulated and neural imaging examples, we demonstrate that methods that neglect the full causal structure may be discarding important dynamical information in a time series.
2108.04936
James Stone Dr
James V Stone
Using Information Theory to Measure Psychophysical Performance
null
null
null
null
q-bio.NC cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Most psychophysical experiments discard half the data collected. Specifically, experiments discard reaction time data, and use binary responses (e.g. yes/no) to measure performance. Here, Shannon's information theory is used to define Shannon competence $s'$, which depends on the mutual information between stimulus strength (e.g. luminance) and a combination of reaction times and binary responses. Mutual information is the entropy of the joint distribution of responses minus the residual entropy after a model has been fitted to these responses. Here, this model is instantiated as a proportional rate diffusion model, with the additional innovation that the full covariance structure of responses is taken into account. Results suggest information associated with reaction times is independent of (i.e. additional to) information associated with binary responses, and that reaction time and binary responses together provide substantially more than the sum of their individual contributions (i.e. they act synergistically). Consequently, the additional information supplied by reaction times suggests that using combined reaction time and binary responses requires fewer stimulus presentations, without loss of precision in psychophysical parameters. Finally, because $s'$ takes account of both reaction time and binary responses (in contrast to $d'$), $s'$ is immune to speed-accuracy trade-offs, which vary between observers and experimental designs.
[ { "created": "Tue, 10 Aug 2021 21:47:57 GMT", "version": "v1" }, { "created": "Thu, 12 Aug 2021 09:03:51 GMT", "version": "v2" }, { "created": "Sat, 11 Dec 2021 12:18:45 GMT", "version": "v3" } ]
2021-12-14
[ [ "Stone", "James V", "" ] ]
Most psychophysical experiments discard half the data collected. Specifically, experiments discard reaction time data, and use binary responses (e.g. yes/no) to measure performance. Here, Shannon's information theory is used to define Shannon competence $s'$, which depends on the mutual information between stimulus strength (e.g. luminance) and a combination of reaction times and binary responses. Mutual information is the entropy of the joint distribution of responses minus the residual entropy after a model has been fitted to these responses. Here, this model is instantiated as a proportional rate diffusion model, with the additional innovation that the full covariance structure of responses is taken into account. Results suggest information associated with reaction times is independent of (i.e. additional to) information associated with binary responses, and that reaction time and binary responses together provide substantially more than the sum of their individual contributions (i.e. they act synergistically). Consequently, the additional information supplied by reaction times suggests that using combined reaction time and binary responses requires fewer stimulus presentations, without loss of precision in psychophysical parameters. Finally, because $s'$ takes account of both reaction time and binary responses (in contrast to $d'$), $s'$ is immune to speed-accuracy trade-offs, which vary between observers and experimental designs.
1501.04986
Istv\'an Mikl\'os
Joseph L. Herman and Adrienn Szab\'o and Istv\'an Mikl\'os and Jotun Hein
Approximate statistical alignment by iterative sampling of substitution matrices
null
null
null
null
q-bio.QM cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We outline a procedure for jointly sampling substitution matrices and multiple sequence alignments, according to an approximate posterior distribution, using an MCMC-based algorithm. This procedure provides an efficient and simple method by which to generate alternative alignments according to their expected accuracy, and allows appropriate parameters for substitution matrices to be selected in an automated fashion. In the cases considered here, the sampled alignments with the highest likelihood have an accuracy consistently higher than alignments generated using the standard BLOSUM62 matrix.
[ { "created": "Mon, 19 Jan 2015 09:19:51 GMT", "version": "v1" } ]
2015-01-22
[ [ "Herman", "Joseph L.", "" ], [ "Szabó", "Adrienn", "" ], [ "Miklós", "István", "" ], [ "Hein", "Jotun", "" ] ]
We outline a procedure for jointly sampling substitution matrices and multiple sequence alignments, according to an approximate posterior distribution, using an MCMC-based algorithm. This procedure provides an efficient and simple method by which to generate alternative alignments according to their expected accuracy, and allows appropriate parameters for substitution matrices to be selected in an automated fashion. In the cases considered here, the sampled alignments with the highest likelihood have an accuracy consistently higher than alignments generated using the standard BLOSUM62 matrix.
1506.06392
Jan Hasenauer
Jan Hasenauer, Nick Jagiella, Sabrina Hross, and Fabian J. Theis
Data-driven modelling of biological multi-scale processes
This manuscript will appear in the Journal of Coupled Systems and Multiscale Dynamics (American Scientific Publishers)
null
null
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological processes involve a variety of spatial and temporal scales. A holistic understanding of many biological processes therefore requires multi-scale models which capture the relevant properties on all these scales. In this manuscript we review mathematical modelling approaches used to describe the individual spatial scales and how they are integrated into holistic models. We discuss the relation between spatial and temporal scales and its implications for multi-scale modelling. Based upon this overview of state-of-the-art modelling approaches, we formulate key challenges in mathematical and computational modelling of biological multi-scale and multi-physics processes. In particular, we consider the availability of analysis tools for multi-scale models and model-based multi-scale data integration. We provide a compact review of methods for model-based data integration and model-based hypothesis testing. Furthermore, novel approaches and recent trends are discussed, including computation time reduction using reduced order and surrogate models, which contribute to the solution of inference problems. We conclude the manuscript by providing a few ideas for the development of tailored multi-scale inference methods.
[ { "created": "Sun, 21 Jun 2015 17:23:22 GMT", "version": "v1" } ]
2015-06-23
[ [ "Hasenauer", "Jan", "" ], [ "Jagiella", "Nick", "" ], [ "Hross", "Sabrina", "" ], [ "Theis", "Fabian J.", "" ] ]
Biological processes involve a variety of spatial and temporal scales. A holistic understanding of many biological processes therefore requires multi-scale models which capture the relevant properties on all these scales. In this manuscript we review mathematical modelling approaches used to describe the individual spatial scales and how they are integrated into holistic models. We discuss the relation between spatial and temporal scales and its implications for multi-scale modelling. Based upon this overview of state-of-the-art modelling approaches, we formulate key challenges in mathematical and computational modelling of biological multi-scale and multi-physics processes. In particular, we consider the availability of analysis tools for multi-scale models and model-based multi-scale data integration. We provide a compact review of methods for model-based data integration and model-based hypothesis testing. Furthermore, novel approaches and recent trends are discussed, including computation time reduction using reduced order and surrogate models, which contribute to the solution of inference problems. We conclude the manuscript by providing a few ideas for the development of tailored multi-scale inference methods.
q-bio/0703050
Vahid Rezania
Vahid Rezania, Jack Tuszynski
From a quantum mechanical description of the assembly processes in microtubules to their semiclassical nonlinear dynamics
20 pages, no figures, to appear in Quantum Biosystems
Quantum Biosystems (2007) Vol. 1, 1 - 20
null
null
q-bio.CB q-bio.BM q-bio.NC
null
In this paper a quantum mechanical description of the assembly/disassembly process for microtubules is proposed. We introduce creation and annihilation operators that raise or lower the microtubule length by a tubulin layer. Following that, the Hamiltonian and corresponding equations of motion are derived that describe the dynamics of microtubules. These Heisenberg-type equations are then transformed to semi-classical equations using the method of coherent structures. The latter equations are very similar to the phenomenological equations that describe dynamic instability of microtubules in a tubulin solution.
[ { "created": "Thu, 22 Mar 2007 20:22:25 GMT", "version": "v1" } ]
2007-05-23
[ [ "Rezania", "Vahid", "" ], [ "Tuszynski", "Jack", "" ] ]
In this paper a quantum mechanical description of the assembly/disassembly process for microtubules is proposed. We introduce creation and annihilation operators that raise or lower the microtubule length by a tubulin layer. Following that, the Hamiltonian and corresponding equations of motion are derived that describe the dynamics of microtubules. These Heisenberg-type equations are then transformed to semi-classical equations using the method of coherent structures. The latter equations are very similar to the phenomenological equations that describe dynamic instability of microtubules in a tubulin solution.
2301.12975
Florian Poydenot
Florian Poydenot and Alice Lebreton and Jacques Haiech and Bruno Andreotti
At the crossroads of epidemiology and biology: bridging the gap between SARS-CoV-2 viral strain properties and epidemic wave characteristics
26 pages, 5 figures; submitted to Biochimie
null
null
null
q-bio.PE physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
The COVID-19 pandemic has given rise to numerous articles from different scientific fields (epidemiology, virology, immunology, airflow physics...) without any effort to link these different insights. In this review, we aim to establish relationships between epidemiological data and the characteristics of the virus strain responsible for the epidemic wave concerned. We have carried out this study on the Wuhan, Alpha, Delta and Omicron strains, allowing us to illustrate the evolution of the relationships we have highlighted according to these different viral strains. We addressed the following questions: 1) How can the mean infectious dose (one quantum, by definition in epidemiology) be measured and expressed as an amount of viral RNA molecules (in genome units, GU) or as a number of replicative viral particles (in plaque-forming units, PFU)? 2) How many infectious quanta are exhaled by an infected person per unit of time? 3) How many infectious quanta are exhaled, on average, integrated over the whole contagious period? 4) How do these quantities relate to the epidemic reproduction rate R as measured in epidemiology, and to the viral load, as measured by molecular biological methods? 5) How has the infectious dose evolved with the different strains of SARS-CoV-2? We make use of state-of-the-art modelling, reviewed and explained in the appendix of the article (Supplemental Information, SI), to answer these questions using data from the literature in both epidemiology and virology. We have considered the modification of these relationships according to the vaccination status of the population. We hope that this work will allow a better integration of data from different fields (virology, epidemiology, and immunology) to anticipate the evolution of the epidemic in the case of COVID-19, but also in respiratory pathologies transmissible in an airborne manner.
[ { "created": "Mon, 30 Jan 2023 15:22:36 GMT", "version": "v1" } ]
2023-01-31
[ [ "Poydenot", "Florian", "" ], [ "Lebreton", "Alice", "" ], [ "Haiech", "Jacques", "" ], [ "Andreotti", "Bruno", "" ] ]
The COVID-19 pandemic has given rise to numerous articles from different scientific fields (epidemiology, virology, immunology, airflow physics...) without any effort to link these different insights. In this review, we aim to establish relationships between epidemiological data and the characteristics of the virus strain responsible for the epidemic wave concerned. We have carried out this study on the Wuhan, Alpha, Delta and Omicron strains, allowing us to illustrate the evolution of the relationships we have highlighted according to these different viral strains. We addressed the following questions: 1) How can the mean infectious dose (one quantum, by definition in epidemiology) be measured and expressed as an amount of viral RNA molecules (in genome units, GU) or as a number of replicative viral particles (in plaque-forming units, PFU)? 2) How many infectious quanta are exhaled by an infected person per unit of time? 3) How many infectious quanta are exhaled, on average, integrated over the whole contagious period? 4) How do these quantities relate to the epidemic reproduction rate R as measured in epidemiology, and to the viral load, as measured by molecular biological methods? 5) How has the infectious dose evolved with the different strains of SARS-CoV-2? We make use of state-of-the-art modelling, reviewed and explained in the appendix of the article (Supplemental Information, SI), to answer these questions using data from the literature in both epidemiology and virology. We have considered the modification of these relationships according to the vaccination status of the population. We hope that this work will allow a better integration of data from different fields (virology, epidemiology, and immunology) to anticipate the evolution of the epidemic in the case of COVID-19, but also in respiratory pathologies transmissible in an airborne manner.
1006.1212
Stefan Auer SA
Stefan Auer and Dimo Kashchiev
Phase Diagram of alpha-Helical and beta-Sheet Forming Peptides
null
S. Auer and D. Kashchiev, Phys. Rev. Lett., 104, 168105 (2010)
10.1103/PhysRevLett.104.168105
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The intrinsic property of proteins to form structural motifs such as alpha-helices and beta-sheets leads to a complex phase behavior in which proteins can assemble into various types of aggregates including crystals, liquidlike phases of unfolded or natively folded proteins, and amyloid fibrils. Here we use a coarse-grained protein model that enables us to perform Monte Carlo simulations for determining the phase diagram of natively folded alpha-helical and unfolded beta-sheet forming peptides. The simulations reveal the existence of various metastable peptide phases. The liquidlike phases are metastable with respect to the fibrillar phases, and there is a hierarchy of metastability.
[ { "created": "Mon, 7 Jun 2010 09:35:10 GMT", "version": "v1" } ]
2010-06-08
[ [ "Auer", "Stefan", "" ], [ "Kashchiev", "Dimo", "" ] ]
The intrinsic property of proteins to form structural motifs such as alpha-helices and beta-sheets leads to a complex phase behavior in which proteins can assemble into various types of aggregates including crystals, liquidlike phases of unfolded or natively folded proteins, and amyloid fibrils. Here we use a coarse-grained protein model that enables us to perform Monte Carlo simulations for determining the phase diagram of natively folded alpha-helical and unfolded beta-sheet forming peptides. The simulations reveal the existence of various metastable peptide phases. The liquidlike phases are metastable with respect to the fibrillar phases, and there is a hierarchy of metastability.
1911.12656
Yuval Harel
Yuval Harel, Ron Meir
Optimal Multivariate Tuning with Neuron-Level and Population-Level Energy Constraints
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optimality principles have been useful in explaining many aspects of biological systems. In the context of neural encoding in sensory areas, optimality is naturally formulated in a Bayesian setting, as neural tuning which minimizes mean decoding error. Many works optimize Fisher information, which approximates the Minimum Mean Square Error (MMSE) of the optimal decoder for long encoding times, but may be misleading for short encoding times. We study MMSE-optimal neural encoding of a multivariate stimulus by uniform populations of spiking neurons, under firing rate constraints for each neuron as well as for the entire population. We show that the population-level constraint is essential for the formulation of a well-posed problem having finite optimal tuning widths, and optimal tuning aligns with the principal components of the prior distribution. Numerical evaluation of the two-dimensional case shows that encoding only the dimension with higher variance is optimal for short encoding times. We also compare direct MMSE optimization to optimization of several proxies to MMSE, namely Fisher information, Maximum Likelihood estimation error, and the Bayesian Cram\'er-Rao bound. We find that optimization of these measures yields qualitatively misleading results regarding MMSE-optimal tuning and its dependence on encoding time and energy constraints.
[ { "created": "Thu, 28 Nov 2019 11:54:29 GMT", "version": "v1" } ]
2019-12-02
[ [ "Harel", "Yuval", "" ], [ "Meir", "Ron", "" ] ]
Optimality principles have been useful in explaining many aspects of biological systems. In the context of neural encoding in sensory areas, optimality is naturally formulated in a Bayesian setting, as neural tuning which minimizes mean decoding error. Many works optimize Fisher information, which approximates the Minimum Mean Square Error (MMSE) of the optimal decoder for long encoding times, but may be misleading for short encoding times. We study MMSE-optimal neural encoding of a multivariate stimulus by uniform populations of spiking neurons, under firing rate constraints for each neuron as well as for the entire population. We show that the population-level constraint is essential for the formulation of a well-posed problem having finite optimal tuning widths, and optimal tuning aligns with the principal components of the prior distribution. Numerical evaluation of the two-dimensional case shows that encoding only the dimension with higher variance is optimal for short encoding times. We also compare direct MMSE optimization to optimization of several proxies to MMSE, namely Fisher information, Maximum Likelihood estimation error, and the Bayesian Cram\'er-Rao bound. We find that optimization of these measures yields qualitatively misleading results regarding MMSE-optimal tuning and its dependence on encoding time and energy constraints.
1908.01548
Maria Masoliver
Cristian Estarellas, Maria Masoliver, Cristina Masoller and Claudio Mirasso
Characterizing signal encoding and transmission in class I and class II neurons via ordinal time-series analysis
null
null
10.1063/1.5121257
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neurons encode and transmit information in spike sequences. However, despite the effort devoted to quantifying their information content, little progress has been made in this regard. Here we use a nonlinear method of time-series analysis (known as ordinal analysis) to compare the statistics of spike sequences generated by applying an input signal to the neuronal model of Morris-Lecar. In particular we consider two different regimes for the neurons which lead to two classes of excitability: class I, where the frequency-current curve is continuous, and class II, where the frequency-current curve is discontinuous. By applying ordinal analysis to sequences of inter-spike-intervals (ISIs) our goals are (1) to investigate whether different neuron types can generate spike sequences which have similar symbolic properties; (2) to gain a deeper understanding of the effects that electrical (diffusive) and excitatory chemical (i.e., excitatory synapse) couplings have; and (3) to compare, when a small--amplitude periodic signal is applied to one of the neurons, how the signal features (amplitude and frequency) are encoded and transmitted in the generated ISI sequences for both class I and class II type neurons and electrical or chemical couplings. We find that depending on the frequency, specific combinations of neuron/class and coupling-type allow a more effective encoding, or a more effective transmission of the signal.
[ { "created": "Mon, 5 Aug 2019 10:23:40 GMT", "version": "v1" } ]
2020-02-19
[ [ "Estarellas", "Cristian", "" ], [ "Masoliver", "Maria", "" ], [ "Masoller", "Cristina", "" ], [ "Mirasso", "Claudio", "" ] ]
Neurons encode and transmit information in spike sequences. However, despite the effort devoted to quantifying their information content, little progress has been made in this regard. Here we use a nonlinear method of time-series analysis (known as ordinal analysis) to compare the statistics of spike sequences generated by applying an input signal to the neuronal model of Morris-Lecar. In particular we consider two different regimes for the neurons which lead to two classes of excitability: class I, where the frequency-current curve is continuous, and class II, where the frequency-current curve is discontinuous. By applying ordinal analysis to sequences of inter-spike-intervals (ISIs) our goals are (1) to investigate whether different neuron types can generate spike sequences which have similar symbolic properties; (2) to gain a deeper understanding of the effects that electrical (diffusive) and excitatory chemical (i.e., excitatory synapse) couplings have; and (3) to compare, when a small--amplitude periodic signal is applied to one of the neurons, how the signal features (amplitude and frequency) are encoded and transmitted in the generated ISI sequences for both class I and class II type neurons and electrical or chemical couplings. We find that depending on the frequency, specific combinations of neuron/class and coupling-type allow a more effective encoding, or a more effective transmission of the signal.
1704.08321
Zoya Leonenko
L Dindia, J Murray, Erin Faught, T Davis, Zoya Leonenko, M Vijayan
Novel Nongenomic Signaling by Glucocorticoid May Involve Changes to Liver Membrane Order in Rainbow Trout
null
null
null
null
q-bio.CB physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stress-induced glucocorticoid elevation is a highly conserved response among vertebrates. This facilitates stress adaptation and the mode of action involves activation of the intracellular glucocorticoid receptor leading to the modulation of target gene expression. However, this genomic effect is slow acting and, therefore, a role for glucocorticoid in the rapid response to stress is unclear. Here we show that stress levels of cortisol, the primary glucocorticoid in teleosts, rapidly fluidize rainbow trout (Oncorhynchus mykiss) liver plasma membranes in vitro. This involved incorporation of the steroid into the lipid domains, as cortisol coupled to a membrane impermeable peptide moiety, did not affect membrane order. Studies confirmed that cortisol, but not sex steroids, increases liver plasma membrane fluidity. Atomic force microscopy revealed cortisol mediated changes to membrane surface topography and viscoelasticity confirming changes to membrane order. Treating trout hepatocytes with stress levels of cortisol led to the modulation of cell signaling pathways, including the phosphorylation status of putative PKA, PKC and AKT substrate proteins within 10 minutes. The phosphorylation by protein kinases in the presence of cortisol was consistent with that seen with benzyl alcohol, a known membrane fluidizer. Our results suggest that biophysical changes to plasma membrane properties, triggered by stressor induced glucocorticoid elevation, act as a nonspecific stress response and may rapidly modulate acute stress-signaling pathways.
[ { "created": "Wed, 26 Apr 2017 19:38:37 GMT", "version": "v1" } ]
2017-04-28
[ [ "Dindia", "L", "" ], [ "Murray", "J", "" ], [ "Faught", "Erin", "" ], [ "Davis", "T", "" ], [ "Leonenko", "Zoya", "" ], [ "Vijayan", "M", "" ] ]
Stress-induced glucocorticoid elevation is a highly conserved response among vertebrates. This facilitates stress adaptation and the mode of action involves activation of the intracellular glucocorticoid receptor leading to the modulation of target gene expression. However, this genomic effect is slow acting and, therefore, a role for glucocorticoid in the rapid response to stress is unclear. Here we show that stress levels of cortisol, the primary glucocorticoid in teleosts, rapidly fluidize rainbow trout (Oncorhynchus mykiss) liver plasma membranes in vitro. This involved incorporation of the steroid into the lipid domains, as cortisol coupled to a membrane impermeable peptide moiety, did not affect membrane order. Studies confirmed that cortisol, but not sex steroids, increases liver plasma membrane fluidity. Atomic force microscopy revealed cortisol mediated changes to membrane surface topography and viscoelasticity confirming changes to membrane order. Treating trout hepatocytes with stress levels of cortisol led to the modulation of cell signaling pathways, including the phosphorylation status of putative PKA, PKC and AKT substrate proteins within 10 minutes. The phosphorylation by protein kinases in the presence of cortisol was consistent with that seen with benzyl alcohol, a known membrane fluidizer. Our results suggest that biophysical changes to plasma membrane properties, triggered by stressor induced glucocorticoid elevation, act as a nonspecific stress response and may rapidly modulate acute stress-signaling pathways.
1503.02536
Helio M. de Oliveira
E.A. Bouton, H.M. de Oliveira, R.M. Campello de Souza and N.S. Santos-Magalhaes
Genomic Imaging Based on Codongrams and a^2grams
7 pages, 3 figures
WSEAS Trans. on Biology and Biomedicine, vol.1, n.2, pp.255-260, April 2004
null
null
q-bio.OT cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces new tools for genomic signal processing, which can assist in extracting genomic attributes or describing biologically meaningful features embedded in DNA. The codongrams and a^2grams are offered as an alternative to spectrograms and scalograms. Twenty different a^2grams are defined for a genome, one for each amino acid (valgram is an a^2gram for valine; alagram is an a^2gram for alanine and so on). They provide information about the distribution and occurrence of the investigated amino acid. In particular, the metgram can be used to find potential start positions of genes within a genome. This approach can help implement a new diagnostic test for genetic diseases by providing a type of DNA medical imaging.
[ { "created": "Thu, 5 Mar 2015 18:24:32 GMT", "version": "v1" } ]
2015-03-10
[ [ "Bouton", "E. A.", "" ], [ "de Oliveira", "H. M.", "" ], [ "de Souza", "R. M. Campello", "" ], [ "Santos-Magalhaes", "N. S.", "" ] ]
This paper introduces new tools for genomic signal processing, which can assist in extracting genomic attributes or describing biologically meaningful features embedded in DNA. The codongrams and a^2grams are offered as an alternative to spectrograms and scalograms. Twenty different a^2grams are defined for a genome, one for each amino acid (valgram is an a^2gram for valine; alagram is an a^2gram for alanine and so on). They provide information about the distribution and occurrence of the investigated amino acid. In particular, the metgram can be used to find potential start positions of genes within a genome. This approach can help implement a new diagnostic test for genetic diseases by providing a type of DNA medical imaging.
1812.03363
David Mehler
David Marc Anton Mehler, Konrad Paul Kording
The lure of misleading causal statements in functional connectivity research
37 pages, 2 figures. Code and simulated data available on: https://osf.io/9cs8p/
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
As neuroscientists we want to understand how causal interactions or mechanisms within the brain give rise to perception, cognition, and behavior. It is typical to estimate interaction effects from measured activity using statistical techniques such as functional connectivity, Granger Causality, or information flow, whose outcomes are often falsely treated as revealing mechanistic insight. Since these statistical techniques fit models to low-dimensional measurements from brains, they ignore the fact that brain activity is high-dimensional. Here we focus on the obvious confound of common inputs: the countless unobserved variables likely have more influence than the few observed ones. Any given observed correlation can be explained by an infinite set of causal models that take into account the unobserved variables. Therefore, correlations within massively undersampled measurements tell us little about mechanisms. We argue that these mis-inferences of causality from correlation are augmented by an implicit redefinition of words that suggest mechanisms, such as connectivity, causality, and flow.
[ { "created": "Sat, 8 Dec 2018 18:21:07 GMT", "version": "v1" }, { "created": "Wed, 21 Oct 2020 11:21:23 GMT", "version": "v2" }, { "created": "Fri, 23 Oct 2020 09:50:39 GMT", "version": "v3" } ]
2020-10-26
[ [ "Mehler", "David Marc Anton", "" ], [ "Kording", "Konrad Paul", "" ] ]
As neuroscientists we want to understand how causal interactions or mechanisms within the brain give rise to perception, cognition, and behavior. It is typical to estimate interaction effects from measured activity using statistical techniques such as functional connectivity, Granger Causality, or information flow, whose outcomes are often falsely treated as revealing mechanistic insight. Since these statistical techniques fit models to low-dimensional measurements from brains, they ignore the fact that brain activity is high-dimensional. Here we focus on the obvious confound of common inputs: the countless unobserved variables likely have more influence than the few observed ones. Any given observed correlation can be explained by an infinite set of causal models that take into account the unobserved variables. Therefore, correlations within massively undersampled measurements tell us little about mechanisms. We argue that these mis-inferences of causality from correlation are augmented by an implicit redefinition of words that suggest mechanisms, such as connectivity, causality, and flow.
q-bio/0604029
Sandeep Krishna
Sandeep Krishna, Anna M. C. Andersson, Szabolcs Semsey, Kim Sneppen
Structure and function of negative feedback loops at the interface of genetic and metabolic networks
8 pages, 4 figures
Nucleic Acids Res., 34: 2455 - 2462 (2006).
null
null
q-bio.MN cond-mat.other
null
The molecular network in an organism consists of transcription/translation regulation, protein-protein interactions/modifications and a metabolic network, together forming a system that allows the cell to respond sensibly to the multiple signal molecules that exist in its environment. A key part of this overall system of molecular regulation is therefore the interface between the genetic and the metabolic network. A motif that occurs very often at this interface is a negative feedback loop used to regulate the level of the signal molecules. In this work we use mathematical models to investigate the steady state and dynamical behaviour of different negative feedback loops. We show, in particular, that feedback loops where the signal molecule does not cause the dissociation of the transcription factor from the DNA respond faster than loops where the molecule acts by sequestering transcription factors off the DNA. We use three examples, the bet, mer and lac systems in E. coli, to illustrate the behaviour of such feedback loops.
[ { "created": "Mon, 24 Apr 2006 16:23:30 GMT", "version": "v1" } ]
2007-05-23
[ [ "Krishna", "Sandeep", "" ], [ "Andersson", "Anna M. C.", "" ], [ "Semsey", "Szabolcs", "" ], [ "Sneppen", "Kim", "" ] ]
The molecular network in an organism consists of transcription/translation regulation, protein-protein interactions/modifications and a metabolic network, together forming a system that allows the cell to respond sensibly to the multiple signal molecules that exist in its environment. A key part of this overall system of molecular regulation is therefore the interface between the genetic and the metabolic network. A motif that occurs very often at this interface is a negative feedback loop used to regulate the level of the signal molecules. In this work we use mathematical models to investigate the steady state and dynamical behaviour of different negative feedback loops. We show, in particular, that feedback loops where the signal molecule does not cause the dissociation of the transcription factor from the DNA respond faster than loops where the molecule acts by sequestering transcription factors off the DNA. We use three examples, the bet, mer and lac systems in E. coli, to illustrate the behaviour of such feedback loops.
1608.08007
Edgar Altszyler
Edgar Altszyler, Alejandra Ventura, Alejandro Colman-Lerner and Ariel Chernomoretz
Ultrasensitivity on signaling cascades revisited: Linking local and global ultrasensitivity estimations
null
PLoS ONE 12(6), 2017
10.1371/journal.pone.0180083
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ultrasensitive response motifs, which are capable of converting graded stimuli into binary responses, are very well conserved in signal transduction networks. Although it has been shown that a cascade arrangement of multiple ultrasensitive modules can produce an enhancement of the system's ultrasensitivity, how the combination of layers affects the cascade's ultrasensitivity remains an open question for the general case. Here we introduced a methodology that allowed us to determine the presence of sequestration effects and to quantify the relative contribution of each module to the overall cascade's ultrasensitivity. The proposed analysis framework provides a natural link between global and local ultrasensitivity descriptors and is particularly well-suited to characterize and better understand mathematical models used to study real biological systems. As a case study we considered three mathematical models introduced by O'Shaughnessy et al. to study a tunable synthetic MAPK cascade, and showed how our methodology might help modelers to better understand modeling alternatives.
[ { "created": "Mon, 29 Aug 2016 11:52:17 GMT", "version": "v1" }, { "created": "Thu, 1 Sep 2016 13:45:11 GMT", "version": "v2" }, { "created": "Mon, 3 Apr 2017 15:03:28 GMT", "version": "v3" } ]
2018-09-03
[ [ "Altszyler", "Edgar", "" ], [ "Ventura", "Alejandra", "" ], [ "Colman-Lerner", "Alejandro", "" ], [ "Chernomoretz", "Ariel", "" ] ]
Ultrasensitive response motifs, which are capable of converting graded stimuli into binary responses, are very well conserved in signal transduction networks. Although it has been shown that a cascade arrangement of multiple ultrasensitive modules can produce an enhancement of the system's ultrasensitivity, how the combination of layers affects the cascade's ultrasensitivity remains an open question for the general case. Here we introduced a methodology that allowed us to determine the presence of sequestration effects and to quantify the relative contribution of each module to the overall cascade's ultrasensitivity. The proposed analysis framework provides a natural link between global and local ultrasensitivity descriptors and is particularly well-suited to characterize and better understand mathematical models used to study real biological systems. As a case study we considered three mathematical models introduced by O'Shaughnessy et al. to study a tunable synthetic MAPK cascade, and showed how our methodology might help modelers to better understand modeling alternatives.
0802.3854
Giuseppe Vitiello
Walter J. Freeman and Giuseppe Vitiello
Vortices in brain waves
null
null
null
null
q-bio.NC cond-mat.other q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interactions by mutual excitation in neural populations in human and animal brains create a mesoscopic order parameter that is recorded in brain waves (electroencephalogram, EEG). Spatially and spectrally distributed oscillations are imposed on the background activity by inhibitory feedback in the gamma range (30-80 Hz). Beats recur at theta rates (3-7 Hz), at which the order parameter transiently approaches zero and microscopic activity becomes disordered. After these null spikes, the order parameter resurges and initiates a frame bearing a mesoscopic spatial pattern of gamma amplitude modulation that governs the microscopic activity, and that is correlated with behavior. The brain waves also reveal a spatial pattern of phase modulation in the form of a cone. Using the formalism of the dissipative many-body model of brain, we describe the null spikes and the accompanying phase cones as vortices.
[ { "created": "Tue, 26 Feb 2008 17:31:09 GMT", "version": "v1" } ]
2008-03-08
[ [ "Freeman", "Walter J.", "" ], [ "Vitiello", "Giuseppe", "" ] ]
Interactions by mutual excitation in neural populations in human and animal brains create a mesoscopic order parameter that is recorded in brain waves (electroencephalogram, EEG). Spatially and spectrally distributed oscillations are imposed on the background activity by inhibitory feedback in the gamma range (30-80 Hz). Beats recur at theta rates (3-7 Hz), at which the order parameter transiently approaches zero and microscopic activity becomes disordered. After these null spikes, the order parameter resurges and initiates a frame bearing a mesoscopic spatial pattern of gamma amplitude modulation that governs the microscopic activity, and that is correlated with behavior. The brain waves also reveal a spatial pattern of phase modulation in the form of a cone. Using the formalism of the dissipative many-body model of brain, we describe the null spikes and the accompanying phase cones as vortices.
1402.2392
Bart Haegeman
Bart Haegeman, Michel Loreau
General relationships between consumer dispersal, resource dispersal and metacommunity diversity
Main text: 15 pages, 4 figures. Supplement: 25 pages, 12 figures
Ecol. Lett. 17, 175--184 (2014)
10.1111/ele.12214
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the central questions of metacommunity theory is how dispersal of organisms affects species diversity. Here we show that the diversity-dispersal relationship should not be studied in isolation of other abiotic and biotic flows in the metacommunity. We study a mechanistic metacommunity model in which consumer species compete for an abiotic or biotic resource. We consider both consumer species specialized to a habitat patch, and generalist species capable of using the resource throughout the metacommunity. We present analytical results for different limiting values of consumer dispersal and resource dispersal, and complement these results with simulations for intermediate dispersal values. Our analysis reveals generic patterns for the combined effects of consumer and resource dispersal on the metacommunity diversity of consumer species, and shows that hump-shaped relationships between local diversity and dispersal are not universal. Diversity-dispersal relationships can also be monotonically increasing or multimodal. Our work is a new step towards a general theory of metacommunity diversity integrating dispersal at multiple trophic levels.
[ { "created": "Tue, 11 Feb 2014 08:20:01 GMT", "version": "v1" } ]
2014-02-12
[ [ "Haegeman", "Bart", "" ], [ "Loreau", "Michel", "" ] ]
One of the central questions of metacommunity theory is how dispersal of organisms affects species diversity. Here we show that the diversity-dispersal relationship should not be studied in isolation of other abiotic and biotic flows in the metacommunity. We study a mechanistic metacommunity model in which consumer species compete for an abiotic or biotic resource. We consider both consumer species specialized to a habitat patch, and generalist species capable of using the resource throughout the metacommunity. We present analytical results for different limiting values of consumer dispersal and resource dispersal, and complement these results with simulations for intermediate dispersal values. Our analysis reveals generic patterns for the combined effects of consumer and resource dispersal on the metacommunity diversity of consumer species, and shows that hump-shaped relationships between local diversity and dispersal are not universal. Diversity-dispersal relationships can also be monotonically increasing or multimodal. Our work is a new step towards a general theory of metacommunity diversity integrating dispersal at multiple trophic levels.
2311.10669
Laia Barjuan Ballabriga
Laia Barjuan, Jordi Soriano, and M. \'Angeles Serrano
Optimal navigability of weighted human brain connectomes in physical space
49 pages (10 main text, 39 Supplementary Material)
null
null
null
q-bio.NC physics.bio-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The architecture of the human connectome supports efficient communication protocols relying either on distances between brain regions or on the intensities of connections. However, none of these protocols combines information about the two or reaches full efficiency. Here, we introduce a continuous spectrum of decentralized routing strategies that combine link weights and the spatial embedding of connectomes to transmit signals. We applied the protocols to individual connectomes in two cohorts, and to cohort archetypes designed to capture weighted connectivity properties. We found that there is an intermediate region, a sweet spot, in which navigation achieves maximum communication efficiency at low transmission cost. Interestingly, this phenomenon is robust and independent of the particular configuration of weights. Our results indicate that the intensity and topology of neural connections and brain geometry interplay to boost communicability, fundamental to support effective responses to external and internal stimuli and the diversity of brain functions.
[ { "created": "Fri, 17 Nov 2023 17:43:02 GMT", "version": "v1" } ]
2023-11-20
[ [ "Barjuan", "Laia", "" ], [ "Soriano", "Jordi", "" ], [ "Serrano", "M. Ángeles", "" ] ]
The architecture of the human connectome supports efficient communication protocols relying either on distances between brain regions or on the intensities of connections. However, none of these protocols combines information about the two or reaches full efficiency. Here, we introduce a continuous spectrum of decentralized routing strategies that combine link weights and the spatial embedding of connectomes to transmit signals. We applied the protocols to individual connectomes in two cohorts, and to cohort archetypes designed to capture weighted connectivity properties. We found that there is an intermediate region, a sweet spot, in which navigation achieves maximum communication efficiency at low transmission cost. Interestingly, this phenomenon is robust and independent of the particular configuration of weights. Our results indicate that the intensity and topology of neural connections and brain geometry interplay to boost communicability, fundamental to support effective responses to external and internal stimuli and the diversity of brain functions.
2101.08905
Helena Bordini De Lucas
Helena B. Lucas, Steven L. Bressler, Fernanda S. Matias and Osvaldo A. Rosso
A symbolic information approach to characterize response-related differences in cortical activity during a Go/No-Go task
null
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How the brain processes information from external stimuli in order to perceive the world and act on it is one of the greatest questions in neuroscience. To address this question, different time series analysis techniques have been employed to characterize the statistical properties of brain signals during cognitive tasks. Typically, response-specific processes are addressed by comparing the time course of average event-related potentials in different trial types. Here we analyze monkey Local Field Potentials data during a visual pattern discrimination Go/No-Go task in the light of information theory quantifiers. We show that the Bandt-Pompe symbolization methodology to calculate entropy and complexity of data is a useful tool to distinguish response-related differences between Go and No-Go trials. We propose to use an asymmetry index to statistically validate trial type differences. Moreover, by using the multi-scale approach and embedding time delays to downsample the data, we can estimate the important time scales in which the relevant information is being processed.
[ { "created": "Fri, 22 Jan 2021 01:15:06 GMT", "version": "v1" } ]
2021-01-25
[ [ "Lucas", "Helena B.", "" ], [ "Bressler", "Steven L.", "" ], [ "Matias", "Fernanda S.", "" ], [ "Rosso", "Osvaldo A.", "" ] ]
How the brain processes information from external stimuli in order to perceive the world and act on it is one of the greatest questions in neuroscience. To address this question, different time series analysis techniques have been employed to characterize the statistical properties of brain signals during cognitive tasks. Typically, response-specific processes are addressed by comparing the time course of average event-related potentials in different trial types. Here we analyze monkey Local Field Potentials data during a visual pattern discrimination Go/No-Go task in the light of information theory quantifiers. We show that the Bandt-Pompe symbolization methodology to calculate entropy and complexity of data is a useful tool to distinguish response-related differences between Go and No-Go trials. We propose to use an asymmetry index to statistically validate trial type differences. Moreover, by using the multi-scale approach and embedding time delays to downsample the data, we can estimate the important time scales in which the relevant information is being processed.
1311.6326
Simon Aeschbacher
Simon Aeschbacher and Reinhard Buerger
The effect of linkage on establishment and survival of locally beneficial mutations
This is a revised version based on comments and suggestions by S. Yeaman and S. Flaxman
Genetics 197, 317-336 (2014)
10.1534/genetics.114.163477
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When organisms adapt to spatially heterogeneous environments, selection may drive divergence at multiple genes. If populations under divergent selection also exchange migrants, we expect genetic differentiation to be high at selected loci, relative to the baseline caused by migration and genetic drift. Indeed, empirical studies have found peaks of putatively adaptive differentiation. These are highly variable in length, some of them extending over several hundreds of thousands of base pairs. How can such 'islands of divergence' be explained? Physical linkage produces elevated levels of differentiation at loci close to genes under selection. However, whether this is enough to account for the observed patterns of divergence is not well understood. Here, we investigate the fate of a locally beneficial mutation that arises in linkage to an existing migration-selection polymorphism and derive two important quantities: the probability that the mutation becomes established, and the expected time to its extinction. We find that intermediate levels of recombination are sometimes favourable, and that physical linkage can lead to strongly elevated invasion probabilities and extinction times. We provide a rule of thumb for when this is the case. Moreover, we quantify the long-term effect of polygenic local adaptation on linked neutral variation.
[ { "created": "Mon, 25 Nov 2013 15:09:38 GMT", "version": "v1" }, { "created": "Wed, 27 Nov 2013 10:16:47 GMT", "version": "v2" }, { "created": "Wed, 4 Dec 2013 14:29:19 GMT", "version": "v3" }, { "created": "Sat, 7 Dec 2013 22:02:39 GMT", "version": "v4" }, { "created": "Wed, 26 Feb 2014 01:00:11 GMT", "version": "v5" } ]
2014-07-21
[ [ "Aeschbacher", "Simon", "" ], [ "Buerger", "Reinhard", "" ] ]
When organisms adapt to spatially heterogeneous environments, selection may drive divergence at multiple genes. If populations under divergent selection also exchange migrants, we expect genetic differentiation to be high at selected loci, relative to the baseline caused by migration and genetic drift. Indeed, empirical studies have found peaks of putatively adaptive differentiation. These are highly variable in length, some of them extending over several hundreds of thousands of base pairs. How can such 'islands of divergence' be explained? Physical linkage produces elevated levels of differentiation at loci close to genes under selection. However, whether this is enough to account for the observed patterns of divergence is not well understood. Here, we investigate the fate of a locally beneficial mutation that arises in linkage to an existing migration-selection polymorphism and derive two important quantities: the probability that the mutation becomes established, and the expected time to its extinction. We find that intermediate levels of recombination are sometimes favourable, and that physical linkage can lead to strongly elevated invasion probabilities and extinction times. We provide a rule of thumb for when this is the case. Moreover, we quantify the long-term effect of polygenic local adaptation on linked neutral variation.
2304.14648
David Eriksson Eriksson
Kevin Luxem and David Eriksson
Ecologically mapped neuronal identity: Towards standardizing activity across heterogeneous experiments
16 Pages
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
The brain's diversity of neurons enables a rich behavioral repertoire and flexible adaptation to new situations. Assuming that the ecological pressure has optimized this neuronal variety, we propose exploiting na\"ive behavior to map the neuronal identity. Here we investigate the feasibility of identifying neurons "ecologically" using their activation for natural behavioral and environmental parameters. Such a neuronal ECO-marker might give a finer granularity than possible with genetic or molecular markers, thereby facilitating the comparison of the functional characteristics of individual neurons across animals. In contrast to a potential mapping using artificial stimuli and trained behavior which have an unlimited parameter space, an ecological mapping is experimentally feasible since it is bounded by the ecology. Home-cage environment is an excellent basis for this ECO-mapping covering an extensive behavioral repertoire and since home-cage behavior is similar across laboratories. We review the possibility of adding area-specific environmental enrichment and automatized behavioral tasks to identify neurons in specific brain areas. In this work, we focus on the visual cortex, motor cortex, prefrontal cortex, and hippocampus. Fundamental to achieving this identification is to take advantage of state-of-the-art behavioral tracking, sensory stimulation protocols, and the plethora of creative behavioral solutions for rodents. We find that motor areas might be easiest to address, followed by prefrontal, hippocampal, and visual areas. The possibility of acquiring a near-complete ecological identification with minimal animal handling, minimal constraints on the main experiment, and data compatibility across laboratories might outweigh the necessity of implanting electrodes or imaging devices.
[ { "created": "Fri, 28 Apr 2023 06:29:30 GMT", "version": "v1" } ]
2023-05-01
[ [ "Luxem", "Kevin", "" ], [ "Eriksson", "David", "" ] ]
The brain's diversity of neurons enables a rich behavioral repertoire and flexible adaptation to new situations. Assuming that the ecological pressure has optimized this neuronal variety, we propose exploiting na\"ive behavior to map the neuronal identity. Here we investigate the feasibility of identifying neurons "ecologically" using their activation for natural behavioral and environmental parameters. Such a neuronal ECO-marker might give a finer granularity than possible with genetic or molecular markers, thereby facilitating the comparison of the functional characteristics of individual neurons across animals. In contrast to a potential mapping using artificial stimuli and trained behavior which have an unlimited parameter space, an ecological mapping is experimentally feasible since it is bounded by the ecology. Home-cage environment is an excellent basis for this ECO-mapping covering an extensive behavioral repertoire and since home-cage behavior is similar across laboratories. We review the possibility of adding area-specific environmental enrichment and automatized behavioral tasks to identify neurons in specific brain areas. In this work, we focus on the visual cortex, motor cortex, prefrontal cortex, and hippocampus. Fundamental to achieving this identification is to take advantage of state-of-the-art behavioral tracking, sensory stimulation protocols, and the plethora of creative behavioral solutions for rodents. We find that motor areas might be easiest to address, followed by prefrontal, hippocampal, and visual areas. The possibility of acquiring a near-complete ecological identification with minimal animal handling, minimal constraints on the main experiment, and data compatibility across laboratories might outweigh the necessity of implanting electrodes or imaging devices.
q-bio/0412042
Luciano da Fontoura Costa
Luciano da Fontoura Costa, Matheus Palhares Viana and Marcelo E. Beletti
The complex channel networks of bone structure
3 pages, 1 figure, The following article has been submitted to Applied Physics Letters. If it is published, it will be found online at http://apl.aip.org/
Appl. Phys. Lett. 88, 033903 (2006)
10.1063/1.2166473
null
q-bio.TO cond-mat.dis-nn physics.bio-ph q-bio.QM
null
Bone structure in mammals involves a complex network of channels (Havers and Volkmann channels) required to nourish the bone marrow cells. This work describes how three-dimensional reconstructions of such systems can be obtained and represented in terms of complex networks. Three important findings are reported: (i) the fact that the channel branching density resembles a power law implies the existence of distribution hubs; (ii) the conditional node degree density indicates a clear tendency of connection between nodes with degrees 2 and 4; and (iii) the application of the recently introduced concept of hierarchical clustering coefficient allows the identification of typical scales of channel redistribution. A series of important biological insights is drawn and discussed.
[ { "created": "Wed, 22 Dec 2004 19:26:34 GMT", "version": "v1" } ]
2007-09-19
[ [ "Costa", "Luciano da Fontoura", "" ], [ "Viana", "Matheus Palhares", "" ], [ "Beletti", "Marcelo E.", "" ] ]
Bone structure in mammals involves a complex network of channels (Havers and Volkmann channels) required to nourish the bone marrow cells. This work describes how three-dimensional reconstructions of such systems can be obtained and represented in terms of complex networks. Three important findings are reported: (i) the fact that the channel branching density resembles a power law implies the existence of distribution hubs; (ii) the conditional node degree density indicates a clear tendency of connection between nodes with degrees 2 and 4; and (iii) the application of the recently introduced concept of hierarchical clustering coefficient allows the identification of typical scales of channel redistribution. A series of important biological insights is drawn and discussed.
2401.13467
Sacha van Albada
Eli J M\"uller and Sacha J van Albada and Jong-Won Kim and Peter A Robinson
Unified neural field theory of brain dynamics underlying oscillations in Parkinson's disease and generalized epilepsies
null
Journal of Theoretical Biology (2017) 428, 132-146
10.1016/j.jtbi.2017.06.016
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
The mechanisms underlying pathologically synchronized neural oscillations in Parkinson's disease (PD) and generalized epilepsies are jointly explored via a neural field model of the corticothalamic-basal ganglia (CTBG) system. The basal ganglia (BG) are approximated as a single effective population and their roles in modulating oscillatory corticothalamic (CT) dynamics and vice versa are analyzed. Besides normal EEG rhythms, enhanced activity around 4 Hz and 20 Hz exists in the model, consistent with characteristic frequencies in PD. These rhythms result from resonances in loops between the BG and CT populations, analogous to those underlying epileptic oscillations in a previous CT model. Dopamine depletion is argued to weaken the dampening of these resonances in PD, and network connections explain the significant coherence between BG, thalamic, and cortical activity around 4-8 Hz and 20 Hz. Parallels between the afferent and efferent connection sites of the thalamic reticular nucleus (TRN) and BG predict low dopamine to correspond to a reduced likelihood of tonic-clonic (grand mal) seizures, agreeing with experimental findings. Further, the model predicts an increased likelihood of absence (petit mal) seizure resulting from low dopamine levels matching experimental findings. Suppression of absence seizure activity is shown when afferent and efferent BG connections to the CT system are strengthened, consistent with other CTBG modeling studies. The BG are demonstrated to suppress activity of the CTBG system near tonic-clonic seizure states, providing insight into the reported efficacy of current treatments in BG circuits. Sleep states of the TRN are also found to suppress pathological PD activity matching observations. Overall, the findings demonstrate strong parallels between coherent oscillations in generalized epilepsies and PD, and provide insights into possible comorbidities.
[ { "created": "Wed, 24 Jan 2024 14:11:20 GMT", "version": "v1" } ]
2024-01-25
[ [ "Müller", "Eli J", "" ], [ "van Albada", "Sacha J", "" ], [ "Kim", "Jong-Won", "" ], [ "Robinson", "Peter A", "" ] ]
The mechanisms underlying pathologically synchronized neural oscillations in Parkinson's disease (PD) and generalized epilepsies are jointly explored via a neural field model of the corticothalamic-basal ganglia (CTBG) system. The basal ganglia (BG) are approximated as a single effective population and their roles in modulating oscillatory corticothalamic (CT) dynamics and vice versa are analyzed. Besides normal EEG rhythms, enhanced activity around 4 Hz and 20 Hz exists in the model, consistent with characteristic frequencies in PD. These rhythms result from resonances in loops between the BG and CT populations, analogous to those underlying epileptic oscillations in a previous CT model. Dopamine depletion is argued to weaken the dampening of these resonances in PD, and network connections explain the significant coherence between BG, thalamic, and cortical activity around 4-8 Hz and 20 Hz. Parallels between the afferent and efferent connection sites of the thalamic reticular nucleus (TRN) and BG predict low dopamine to correspond to a reduced likelihood of tonic-clonic (grand mal) seizures, agreeing with experimental findings. Further, the model predicts an increased likelihood of absence (petit mal) seizure resulting from low dopamine levels matching experimental findings. Suppression of absence seizure activity is shown when afferent and efferent BG connections to the CT system are strengthened, consistent with other CTBG modeling studies. The BG are demonstrated to suppress activity of the CTBG system near tonic-clonic seizure states, providing insight into the reported efficacy of current treatments in BG circuits. Sleep states of the TRN are also found to suppress pathological PD activity matching observations. Overall, the findings demonstrate strong parallels between coherent oscillations in generalized epilepsies and PD, and provide insights into possible comorbidities.
2305.07947
Ferenc A. Bartha
Richmond Opoku-Sarkodie, Ferenc A. Bartha, M\'onika Polner, and Gergely R\"ost
Bifurcation analysis of waning-boosting epidemiological models with repeat infections and varying immunity periods
null
null
null
null
q-bio.PE cs.NA math.DS math.NA
http://creativecommons.org/licenses/by-nc-sa/4.0/
We consider the SIRWJS epidemiological model that includes the waning and boosting of immunity via secondary infections. We carry out combined analytical and numerical investigations of the dynamics. The formulae describing the existence and stability of equilibria are derived. Combining this analysis with numerical continuation techniques, we construct global bifurcation diagrams with respect to several epidemiological parameters. The bifurcation analysis reveals a very rich structure of possible global dynamics. We show that backward bifurcation is possible at the critical value of the basic reproduction number, $\mathcal{R}_0 = 1$. Furthermore, we find stability switches and Hopf bifurcations from steady states forming multiple endemic bubbles, and saddle-node bifurcations of periodic orbits. Regions of bistability are also found, where either two stable steady states, or a stable steady state and a stable periodic orbit coexist. This work provides insight into the rich and complicated infectious disease dynamics that can emerge from the waning and boosting of immunity.
[ { "created": "Sat, 13 May 2023 15:30:19 GMT", "version": "v1" } ]
2023-05-16
[ [ "Opoku-Sarkodie", "Richmond", "" ], [ "Bartha", "Ferenc A.", "" ], [ "Polner", "Mónika", "" ], [ "Röst", "Gergely", "" ] ]
We consider the SIRWJS epidemiological model that includes the waning and boosting of immunity via secondary infections. We carry out combined analytical and numerical investigations of the dynamics. The formulae describing the existence and stability of equilibria are derived. Combining this analysis with numerical continuation techniques, we construct global bifurcation diagrams with respect to several epidemiological parameters. The bifurcation analysis reveals a very rich structure of possible global dynamics. We show that backward bifurcation is possible at the critical value of the basic reproduction number, $\mathcal{R}_0 = 1$. Furthermore, we find stability switches and Hopf bifurcations from steady states forming multiple endemic bubbles, and saddle-node bifurcations of periodic orbits. Regions of bistability are also found, where either two stable steady states, or a stable steady state and a stable periodic orbit coexist. This work provides insight into the rich and complicated infectious disease dynamics that can emerge from the waning and boosting of immunity.
2002.10497
Roland Langrock
Brett T. McClintock, Roland Langrock, Olivier Gimenez, Emmanuelle Cam, David L. Borchers, Richard Glennie, Toby A. Patterson
Uncovering ecological state dynamics with hidden Markov models
null
null
10.1111/ele.13610
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecological systems can often be characterised by changes among a finite set of underlying states pertaining to individuals, populations, communities, or entire ecosystems through time. Owing to the inherent difficulty of empirical field studies, ecological state dynamics operating at any level of this hierarchy can often be unobservable or "hidden". Ecologists must therefore often contend with incomplete or indirect observations that are somehow related to these underlying processes. By formally disentangling state and observation processes based on simple yet powerful mathematical properties that can be used to describe many ecological phenomena, hidden Markov models (HMMs) can facilitate inferences about complex system state dynamics that might otherwise be intractable. However, while HMMs are routinely applied in other disciplines, they have only recently begun to gain traction within the broader ecological community. We provide a gentle introduction to HMMs, establish some common terminology, and review the immense scope of HMMs for applied ecological research. We also provide a supplemental tutorial on some of the more technical aspects of HMM implementation and interpretation. By illustrating how practitioners can use a simple conceptual template to customise HMMs for their specific systems of interest, revealing methodological links between existing applications, and highlighting some practical considerations and limitations of these approaches, our goal is to help establish HMMs as a fundamental inferential tool for ecologists.
[ { "created": "Mon, 24 Feb 2020 19:26:48 GMT", "version": "v1" }, { "created": "Tue, 14 Jul 2020 20:58:01 GMT", "version": "v2" } ]
2020-11-12
[ [ "McClintock", "Brett T.", "" ], [ "Langrock", "Roland", "" ], [ "Gimenez", "Olivier", "" ], [ "Cam", "Emmanuelle", "" ], [ "Borchers", "David L.", "" ], [ "Glennie", "Richard", "" ], [ "Patterson", "Toby A.", "" ] ]
Ecological systems can often be characterised by changes among a finite set of underlying states pertaining to individuals, populations, communities, or entire ecosystems through time. Owing to the inherent difficulty of empirical field studies, ecological state dynamics operating at any level of this hierarchy can often be unobservable or "hidden". Ecologists must therefore often contend with incomplete or indirect observations that are somehow related to these underlying processes. By formally disentangling state and observation processes based on simple yet powerful mathematical properties that can be used to describe many ecological phenomena, hidden Markov models (HMMs) can facilitate inferences about complex system state dynamics that might otherwise be intractable. However, while HMMs are routinely applied in other disciplines, they have only recently begun to gain traction within the broader ecological community. We provide a gentle introduction to HMMs, establish some common terminology, and review the immense scope of HMMs for applied ecological research. We also provide a supplemental tutorial on some of the more technical aspects of HMM implementation and interpretation. By illustrating how practitioners can use a simple conceptual template to customise HMMs for their specific systems of interest, revealing methodological links between existing applications, and highlighting some practical considerations and limitations of these approaches, our goal is to help establish HMMs as a fundamental inferential tool for ecologists.
1806.00935
Masayo Inoue
Masayo Inoue and Kunihiko Kaneko
Cooperative reliable response from sloppy gene-expression dynamics
null
null
10.1209/0295-5075/124/38002
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene expression dynamics satisfying given input-output relationships were investigated by evolving the networks for an optimal response. We found three types of networks and corresponding dynamics, depending on the sensitivity of gene expression dynamics: direct response with straight paths, amplified response by a feed-forward network, and cooperative response with a complex network. When the sensitivity of each gene's response is low and expression dynamics is sloppy, the last type is selected, in which many genes respond collectively to inputs, with local-excitation and global-inhibition structures. The result provides an insight into how a reliable response is achieved with unreliable units, and on why complex networks with many genes are adopted in cells.
[ { "created": "Mon, 4 Jun 2018 02:59:29 GMT", "version": "v1" } ]
2018-12-26
[ [ "Inoue", "Masayo", "" ], [ "Kaneko", "Kunihiko", "" ] ]
Gene expression dynamics satisfying given input-output relationships were investigated by evolving the networks for an optimal response. We found three types of networks and corresponding dynamics, depending on the sensitivity of gene expression dynamics: direct response with straight paths, amplified response by a feed-forward network, and cooperative response with a complex network. When the sensitivity of each gene's response is low and expression dynamics is sloppy, the last type is selected, in which many genes respond collectively to inputs, with local-excitation and global-inhibition structures. The result provides an insight into how a reliable response is achieved with unreliable units, and on why complex networks with many genes are adopted in cells.
1606.00495
Ada Yan
Ada W. C. Yan, Pengxing Cao, Jane M. Heffernan, Jodie McVernon, Kylie M. Quinn, Nicole L. La Gruta, Karen L. Laurie, James M. McCaw
Modelling cross-reactivity and memory in the cellular adaptive immune response to influenza infection in the host
35 pages, 12 figures
null
10.1016/j.jtbi.2016.11.008
null
q-bio.PE q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The cellular adaptive immune response plays a key role in resolving influenza infection. Experiments where individuals are successively infected with different strains within a short timeframe provide insight into the underlying viral dynamics and the role of a cross-reactive immune response in resolving an acute infection. We construct a mathematical model of within-host influenza viral dynamics including three possible factors which determine the strength of the cross-reactive cellular adaptive immune response: the initial naive T cell number, the avidity of the interaction between T cells and the epitopes presented by infected cells, and the epitope abundance per infected cell. Our model explains the experimentally observed shortening of a second infection when cross-reactivity is present, and shows that memory in the cellular adaptive immune response is necessary to protect against a second infection.
[ { "created": "Wed, 1 Jun 2016 23:08:26 GMT", "version": "v1" }, { "created": "Tue, 22 Nov 2016 23:34:36 GMT", "version": "v2" } ]
2016-11-24
[ [ "Yan", "Ada W. C.", "" ], [ "Cao", "Pengxing", "" ], [ "Heffernan", "Jane M.", "" ], [ "McVernon", "Jodie", "" ], [ "Quinn", "Kylie M.", "" ], [ "La Gruta", "Nicole L.", "" ], [ "Laurie", "Karen L.", "" ], [ "McCaw", "James M.", "" ] ]
The cellular adaptive immune response plays a key role in resolving influenza infection. Experiments where individuals are successively infected with different strains within a short timeframe provide insight into the underlying viral dynamics and the role of a cross-reactive immune response in resolving an acute infection. We construct a mathematical model of within-host influenza viral dynamics including three possible factors which determine the strength of the cross-reactive cellular adaptive immune response: the initial naive T cell number, the avidity of the interaction between T cells and the epitopes presented by infected cells, and the epitope abundance per infected cell. Our model explains the experimentally observed shortening of a second infection when cross-reactivity is present, and shows that memory in the cellular adaptive immune response is necessary to protect against a second infection.