Dataset columns (field: type, value range):
id: string, length 9 to 13
submitter: string, length 4 to 48
authors: string, length 4 to 9.62k
title: string, length 4 to 343
comments: string, length 2 to 480
journal-ref: string, length 9 to 309
doi: string, length 12 to 138
report-no: string, 277 classes
categories: string, length 8 to 87
license: string, 9 classes
orig_abstract: string, length 27 to 3.76k
versions: list, length 1 to 15
update_date: string, length 10 to 10
authors_parsed: list, length 1 to 147
abstract: string, length 24 to 3.75k
2403.00854
Aldo Faisal
Lauren Stumpf and Balasundaram Kadirvelu and Sigourney Waibel and A. Aldo Faisal
Speaker-Independent Dysarthria Severity Classification using Self-Supervised Transformers and Multi-Task Learning
17 pages, 2 tables, 4 main figures, 2 supplemental figures, prepared for journal submission
null
null
null
q-bio.NC cs.AI cs.CL cs.LG cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Dysarthria, a condition resulting from impaired control of the speech muscles due to neurological disorders, significantly impacts the communication and quality of life of patients. The condition's complexity, reliance on human scoring and varied presentations make its assessment and management challenging. This study presents a transformer-based framework for automatically assessing dysarthria severity from raw speech data. It offers an objective, repeatable, accessible, standardised and cost-effective alternative to traditional methods that require human expert assessors. We develop a transformer framework, called Speaker-Agnostic Latent Regularisation (SALR), incorporating a multi-task learning objective and contrastive learning for speaker-independent multi-class dysarthria severity classification. The multi-task framework is designed to reduce reliance on speaker-specific characteristics and address the intrinsic intra-class variability of dysarthric speech. Evaluated on the Universal Access Speech dataset using leave-one-speaker-out cross-validation, our model demonstrated superior performance over traditional machine learning approaches, with an accuracy of $70.48\%$ and an F1 score of $59.23\%$. Our SALR model also exceeded the previous benchmark for AI-based classification, which used support vector machines, by $16.58\%$. We open the black box of our model by visualising the latent space, where we observe that the model substantially reduces speaker-specific cues and amplifies task-specific ones, thereby demonstrating its robustness. In conclusion, SALR establishes a new benchmark in speaker-independent multi-class dysarthria severity classification using generative AI. We discuss the potential implications of our findings for broader clinical applications in automated dysarthria severity assessment.
[ { "created": "Thu, 29 Feb 2024 18:30:52 GMT", "version": "v1" } ]
2024-03-05
[ [ "Stumpf", "Lauren", "" ], [ "Kadirvelu", "Balasundaram", "" ], [ "Waibel", "Sigourney", "" ], [ "Faisal", "A. Aldo", "" ] ]
Dysarthria, a condition resulting from impaired control of the speech muscles due to neurological disorders, significantly impacts the communication and quality of life of patients. The condition's complexity, reliance on human scoring and varied presentations make its assessment and management challenging. This study presents a transformer-based framework for automatically assessing dysarthria severity from raw speech data. It offers an objective, repeatable, accessible, standardised and cost-effective alternative to traditional methods that require human expert assessors. We develop a transformer framework, called Speaker-Agnostic Latent Regularisation (SALR), incorporating a multi-task learning objective and contrastive learning for speaker-independent multi-class dysarthria severity classification. The multi-task framework is designed to reduce reliance on speaker-specific characteristics and address the intrinsic intra-class variability of dysarthric speech. Evaluated on the Universal Access Speech dataset using leave-one-speaker-out cross-validation, our model demonstrated superior performance over traditional machine learning approaches, with an accuracy of $70.48\%$ and an F1 score of $59.23\%$. Our SALR model also exceeded the previous benchmark for AI-based classification, which used support vector machines, by $16.58\%$. We open the black box of our model by visualising the latent space, where we observe that the model substantially reduces speaker-specific cues and amplifies task-specific ones, thereby demonstrating its robustness. In conclusion, SALR establishes a new benchmark in speaker-independent multi-class dysarthria severity classification using generative AI. We discuss the potential implications of our findings for broader clinical applications in automated dysarthria severity assessment.
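The leave-one-speaker-out evaluation protocol mentioned in this record can be sketched as follows. This is a minimal illustration of the splitting logic only, not the authors' SALR pipeline; the `(speaker_id, features, label)` sample format and the toy data are hypothetical.

```python
def leave_one_speaker_out(samples):
    """Yield one (held_out, train, test) fold per speaker, where every
    utterance of the held-out speaker goes to the test set.
    `samples` is a list of (speaker_id, features, label) tuples
    (a hypothetical format, not the dataset's actual one)."""
    speakers = sorted({spk for spk, _, _ in samples})
    for held_out in speakers:
        train = [x for x in samples if x[0] != held_out]
        test = [x for x in samples if x[0] == held_out]
        yield held_out, train, test

# Toy data: 3 speakers, 2 utterances each.
data = [("spk1", [0.1], "mild"), ("spk1", [0.2], "mild"),
        ("spk2", [0.8], "severe"), ("spk2", [0.9], "severe"),
        ("spk3", [0.5], "moderate"), ("spk3", [0.4], "moderate")]

for spk, train, test in leave_one_speaker_out(data):
    # No utterance of the held-out speaker ever appears in training,
    # which is what makes the evaluation speaker-independent.
    assert all(x[0] != spk for x in train)
    assert all(x[0] == spk for x in test)
```

In practice one would fit the classifier on each fold's `train` split and report metrics aggregated over the test splits.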
2212.06834
Zeyneb Kurt Dr.
Olalekan Ogundipe, Zeyneb Kurt, Wai Lok Woo
Deep Neural Networks integrating genomics and histopathological images for predicting stages and survival time-to-event in colon cancer
21 pages, 5 figures, 4 tables
null
null
null
q-bio.QM cs.CV cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
There exists unexplained diverse variation within the predefined colon cancer stages when only features from either genomics or histopathological whole-slide images are used as prognostic factors. Unraveling this variation would bring about improvements in staging and treatment outcomes. Hence, motivated by the advancement of deep neural network libraries and by the different structures and factors within genomic datasets, we aggregate atypical patterns in histopathological images with diverse carcinogenic expression from mRNA, miRNA and DNA methylation as an integrative input source into an ensemble deep neural network for colon cancer stage classification and for stratifying samples into low- or high-risk survival groups. Our ensemble deep convolutional neural network model shows improved performance in stage classification on the integrated dataset. The fused input features return an area under the receiver operating characteristic curve (AUC ROC) of 0.95, compared with AUC ROC values of 0.71 and 0.68 obtained when only genomic or image features, respectively, are used for stage classification. The extracted features were also used to split the patients into low- or high-risk survival groups. Among the 2548 fused features, 1695 showed statistically significant survival probability differences between the two risk groups defined by the extracted features.
[ { "created": "Tue, 13 Dec 2022 16:12:45 GMT", "version": "v1" } ]
2022-12-15
[ [ "Ogundipe", "Olalekan", "" ], [ "Kurt", "Zeyneb", "" ], [ "Woo", "Wai Lok", "" ] ]
There exists unexplained diverse variation within the predefined colon cancer stages when only features from either genomics or histopathological whole-slide images are used as prognostic factors. Unraveling this variation would bring about improvements in staging and treatment outcomes. Hence, motivated by the advancement of deep neural network libraries and by the different structures and factors within genomic datasets, we aggregate atypical patterns in histopathological images with diverse carcinogenic expression from mRNA, miRNA and DNA methylation as an integrative input source into an ensemble deep neural network for colon cancer stage classification and for stratifying samples into low- or high-risk survival groups. Our ensemble deep convolutional neural network model shows improved performance in stage classification on the integrated dataset. The fused input features return an area under the receiver operating characteristic curve (AUC ROC) of 0.95, compared with AUC ROC values of 0.71 and 0.68 obtained when only genomic or image features, respectively, are used for stage classification. The extracted features were also used to split the patients into low- or high-risk survival groups. Among the 2548 fused features, 1695 showed statistically significant survival probability differences between the two risk groups defined by the extracted features.
1612.07846
Daniel Durstewitz
Daniel Durstewitz
A State Space Approach for Piecewise-Linear Recurrent Neural Networks for Reconstructing Nonlinear Dynamics from Neural Measurements
null
null
10.1371/journal.pcbi.1005542
null
q-bio.NC cs.NE q-bio.QM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The computational properties of neural systems are often thought to be implemented in terms of their network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit (MSU) recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a state space representation of the dynamics, but would wish to have access to its governing equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, the approach is applied to MSU recordings from the rodent anterior cingulate cortex obtained during performance of a classical working memory task, delayed alternation. A model with 5 states turned out to be sufficient to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable recovery of the relevant dynamics underlying observed neuronal time series, and directly link them to computational properties.
[ { "created": "Fri, 23 Dec 2016 01:01:52 GMT", "version": "v1" } ]
2017-07-05
[ [ "Durstewitz", "Daniel", "" ] ]
The computational properties of neural systems are often thought to be implemented in terms of their network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit (MSU) recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a state space representation of the dynamics, but would wish to have access to its governing equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, the approach is applied to MSU recordings from the rodent anterior cingulate cortex obtained during performance of a classical working memory task, delayed alternation. A model with 5 states turned out to be sufficient to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable recovery of the relevant dynamics underlying observed neuronal time series, and directly link them to computational properties.
1412.8065
John Barton
John P. Barton and Eduardo D. Sontag
Remarks on the energy costs of insulators in enzymatic cascades
10 pages, 4 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The connection between optimal biological function and energy use, measured for example by the rate of metabolite consumption, is a current topic of interest in the systems biology literature which has been explored in several different contexts. In [J. P. Barton and E. D. Sontag, Biophys. J. 104, 6 (2013)], we related the metabolic cost of enzymatic futile cycles with their capacity to act as insulators which facilitate modular interconnections in biochemical networks. There we analyzed a simple model system in which a signal molecule regulates the transcription of one or more target proteins by interacting with their promoters. In this note, we consider the case of a protein with an active and an inactive form, and whose activation is controlled by the signal molecule. As in the original case, higher rates of energy consumption are required for better insulator performance.
[ { "created": "Sat, 27 Dec 2014 15:47:38 GMT", "version": "v1" } ]
2014-12-30
[ [ "Barton", "John P.", "" ], [ "Sontag", "Eduardo D.", "" ] ]
The connection between optimal biological function and energy use, measured for example by the rate of metabolite consumption, is a current topic of interest in the systems biology literature which has been explored in several different contexts. In [J. P. Barton and E. D. Sontag, Biophys. J. 104, 6 (2013)], we related the metabolic cost of enzymatic futile cycles with their capacity to act as insulators which facilitate modular interconnections in biochemical networks. There we analyzed a simple model system in which a signal molecule regulates the transcription of one or more target proteins by interacting with their promoters. In this note, we consider the case of a protein with an active and an inactive form, and whose activation is controlled by the signal molecule. As in the original case, higher rates of energy consumption are required for better insulator performance.
q-bio/0402011
Byung Mook Weon
Byung Mook Weon
Analysis of trends in human longevity by new model
30 pages, 9 figures, submitted to Demographic Research <see http://www.demographic-research.org>
null
null
null
q-bio.PE
null
Trends in human longevity are puzzling, especially when considering the limits of human longevity. Partially, the conflicting assertions are based upon demographic evidence and the interpretation of survival and mortality curves using the Gompertz model and the Weibull model; these models are sometimes considered to be incomplete in describing the entire curves. In this paper a new model is proposed to take the place of the traditional models. We directly analysed the rectangularity (the parts of the curves being shaped like a rectangle) of survival curves for 17 countries and for 1876-2001 in Switzerland (it being one of the longest-lived countries) with a new model. This model is derived from the Weibull survival function and is simply described by two parameters, in which the shape parameter indicates 'rectangularity' and characteristic life indicates the duration for survival to be 'exp(-1)'. The shape parameter is essentially a function of age and it distinguishes humans from technical devices. We find that although characteristic life has increased up to the present time, the slope of the shape parameter for middle age has been saturated in recent decades and that the rectangularity above characteristic life has been suppressed, suggesting there are ultimate limits to human longevity. The new model and subsequent findings will contribute greatly to the interpretation and comprehension of our knowledge on the human ageing processes.
[ { "created": "Thu, 5 Feb 2004 17:47:07 GMT", "version": "v1" }, { "created": "Wed, 25 Feb 2004 10:29:38 GMT", "version": "v2" } ]
2007-05-23
[ [ "Weon", "Byung Mook", "" ] ]
Trends in human longevity are puzzling, especially when considering the limits of human longevity. Partially, the conflicting assertions are based upon demographic evidence and the interpretation of survival and mortality curves using the Gompertz model and the Weibull model; these models are sometimes considered to be incomplete in describing the entire curves. In this paper a new model is proposed to take the place of the traditional models. We directly analysed the rectangularity (the parts of the curves being shaped like a rectangle) of survival curves for 17 countries and for 1876-2001 in Switzerland (it being one of the longest-lived countries) with a new model. This model is derived from the Weibull survival function and is simply described by two parameters, in which the shape parameter indicates 'rectangularity' and characteristic life indicates the duration for survival to be 'exp(-1)'. The shape parameter is essentially a function of age and it distinguishes humans from technical devices. We find that although characteristic life has increased up to the present time, the slope of the shape parameter for middle age has been saturated in recent decades and that the rectangularity above characteristic life has been suppressed, suggesting there are ultimate limits to human longevity. The new model and subsequent findings will contribute greatly to the interpretation and comprehension of our knowledge on the human ageing processes.
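The two-parameter Weibull survival function underlying this record's model is simple to compute directly. The sketch below shows the standard form $S(t) = \exp(-(t/\eta)^\beta)$, where $\eta$ is the characteristic life (at which $S = e^{-1}$, matching the abstract's definition) and $\beta$ is the shape parameter; the abstract's extension treats the shape parameter as a function of age, which is not modeled here. The parameter values are illustrative only.

```python
import math

def weibull_survival(t, shape, characteristic_life):
    """Weibull survival fraction S(t) = exp(-(t / eta)^beta).
    At t == characteristic_life, S(t) == exp(-1), which is how the
    abstract defines characteristic life."""
    return math.exp(-((t / characteristic_life) ** shape))

# A larger shape parameter makes the survival curve more
# 'rectangular': survival stays near 1 until characteristic life,
# then drops steeply. Values below are toy parameters.
s_low = weibull_survival(60, shape=2.0, characteristic_life=80)
s_high = weibull_survival(60, shape=10.0, characteristic_life=80)
```

Comparing `s_low` and `s_high` at the same age shows the rectangularization effect the abstract analyses: before characteristic life, the high-shape curve retains a larger surviving fraction.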
1207.6251
Daniel Soudry
Daniel Soudry, Ron Meir
Spiking input-output relation for general biophysical neuron models
Soudry D and Meir R (2014) The neuronal response at extended timescales: a linearized spiking input--output relation. Front. Comput. Neurosci. 8:29. doi: 10.3389/fncom.2014.00029. This version of the paper is now obsolete. For the updated and published version - see comments
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cortical neurons include many sub-cellular processes, operating at multiple timescales, which may affect their response to stimulation through non-linear and stochastic interaction with ion channels and ionic concentrations. Since new processes are constantly being discovered, biophysical neuron models increasingly become "too complex to be useful" yet "too simple to be realistic". A fundamental open question in theoretical neuroscience pertains to how this deadlock may be resolved. In order to tackle this problem, we first define the notion of an "excitable neuron model". Then we analytically derive the input-output relation of such neuronal models, relating input spike trains to output spikes based on known biophysical properties. Thus we obtain closed-form expressions for the mean firing rates, all second order statistics (input-state-output correlation and spectra) and construct optimal linear estimators for the neuronal response and internal state. These results are guaranteed to hold, given a few generic assumptions, for any stochastic biophysical neuron model (with an arbitrary number of slow kinetic processes) under general sparse stimulation. This solution suggests that the common simplifying approach that ignores much of the complexity of the neuron might actually be unnecessary and even deleterious in some cases. Specifically, the stochasticity of ion channels and the temporal sparseness of inputs are exactly what rendered our analysis tractable, allowing us to incorporate slow kinetics.
[ { "created": "Thu, 26 Jul 2012 12:21:55 GMT", "version": "v1" }, { "created": "Wed, 1 Aug 2012 11:33:45 GMT", "version": "v2" }, { "created": "Sun, 19 Aug 2012 20:19:29 GMT", "version": "v3" }, { "created": "Tue, 29 Apr 2014 22:29:03 GMT", "version": "v4" } ]
2014-05-01
[ [ "Soudry", "Daniel", "" ], [ "Meir", "Ron", "" ] ]
Cortical neurons include many sub-cellular processes, operating at multiple timescales, which may affect their response to stimulation through non-linear and stochastic interaction with ion channels and ionic concentrations. Since new processes are constantly being discovered, biophysical neuron models increasingly become "too complex to be useful" yet "too simple to be realistic". A fundamental open question in theoretical neuroscience pertains to how this deadlock may be resolved. In order to tackle this problem, we first define the notion of an "excitable neuron model". Then we analytically derive the input-output relation of such neuronal models, relating input spike trains to output spikes based on known biophysical properties. Thus we obtain closed-form expressions for the mean firing rates, all second order statistics (input-state-output correlation and spectra) and construct optimal linear estimators for the neuronal response and internal state. These results are guaranteed to hold, given a few generic assumptions, for any stochastic biophysical neuron model (with an arbitrary number of slow kinetic processes) under general sparse stimulation. This solution suggests that the common simplifying approach that ignores much of the complexity of the neuron might actually be unnecessary and even deleterious in some cases. Specifically, the stochasticity of ion channels and the temporal sparseness of inputs are exactly what rendered our analysis tractable, allowing us to incorporate slow kinetics.
1902.03188
Ehsan Hajiramezanali
Ehsan Hajiramezanali, Mahdi Imani, Ulisses Braga-Neto, Xiaoning Qian, and Edward R Dougherty
Scalable optimal Bayesian classification of single-cell trajectories under regulatory model uncertainty
null
BMC Genomics 2019
null
null
q-bio.GN stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single-cell gene expression measurements offer opportunities for deriving a mechanistic understanding of complex diseases, including cancer. However, due to the complex regulatory machinery of the cell, gene regulatory network (GRN) model inference based on such data still manifests significant uncertainty. The goal of this paper is to develop optimal classification of single-cell trajectories accounting for potential model uncertainty. Partially-observed Boolean dynamical systems (POBDS) are used for modeling gene regulatory networks observed through noisy gene-expression data. We derive the exact optimal Bayesian classifier (OBC) for binary classification of single-cell trajectories. The application of the OBC becomes impractical for large GRNs, due to computational and memory requirements. To address this, we introduce a particle-based single-cell classification method that is highly scalable for large GRNs with much lower complexity than the optimal solution. The performance of the proposed particle-based method is demonstrated through numerical experiments using a POBDS model of the well-known T-cell large granular lymphocyte (T-LGL) leukemia network with noisy time-series gene-expression data.
[ { "created": "Fri, 8 Feb 2019 16:51:34 GMT", "version": "v1" } ]
2019-02-11
[ [ "Hajiramezanali", "Ehsan", "" ], [ "Imani", "Mahdi", "" ], [ "Braga-Neto", "Ulisses", "" ], [ "Qian", "Xiaoning", "" ], [ "Dougherty", "Edward R", "" ] ]
Single-cell gene expression measurements offer opportunities for deriving a mechanistic understanding of complex diseases, including cancer. However, due to the complex regulatory machinery of the cell, gene regulatory network (GRN) model inference based on such data still manifests significant uncertainty. The goal of this paper is to develop optimal classification of single-cell trajectories accounting for potential model uncertainty. Partially-observed Boolean dynamical systems (POBDS) are used for modeling gene regulatory networks observed through noisy gene-expression data. We derive the exact optimal Bayesian classifier (OBC) for binary classification of single-cell trajectories. The application of the OBC becomes impractical for large GRNs, due to computational and memory requirements. To address this, we introduce a particle-based single-cell classification method that is highly scalable for large GRNs with much lower complexity than the optimal solution. The performance of the proposed particle-based method is demonstrated through numerical experiments using a POBDS model of the well-known T-cell large granular lymphocyte (T-LGL) leukemia network with noisy time-series gene-expression data.
2009.14699
Edward D Lee
Edward D. Lee, Christopher P. Kempes, Geoffrey B. West
Dynamics of growth, death, and resource competition in sessile organisms
null
null
10.1073/pnas.2020424118
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Population-level scaling in ecological systems arises from individual growth and death with competitive constraints. We build on a minimal dynamical model of metabolic growth where the tension between individual growth and mortality determines population size distribution. We include resource competition based on shared capture area separately. By varying relative rates of growth, death, and competitive attrition, we connect regular and random spatial patterns across sessile organisms from forests to ants, termites, and fairy circles. Then, we consider transient temporal dynamics in the context of asymmetric competition that primarily weakens the smaller of two competitors such as canopy shading or large colony dominance. When such competition couples slow timescales of growth with fast competitive death, it generates population shock waves similar to those observed in forest demographic data. Our minimal quantitative theory unifies spatiotemporal patterns across sessile organisms through local competition mediated by the laws of metabolic growth which in turn result from long-term evolutionary dynamics.
[ { "created": "Wed, 30 Sep 2020 14:27:53 GMT", "version": "v1" }, { "created": "Fri, 9 Oct 2020 00:22:37 GMT", "version": "v2" } ]
2022-05-11
[ [ "Lee", "Edward D.", "" ], [ "Kempes", "Christopher P.", "" ], [ "West", "Geoffrey B.", "" ] ]
Population-level scaling in ecological systems arises from individual growth and death with competitive constraints. We build on a minimal dynamical model of metabolic growth where the tension between individual growth and mortality determines population size distribution. We include resource competition based on shared capture area separately. By varying relative rates of growth, death, and competitive attrition, we connect regular and random spatial patterns across sessile organisms from forests to ants, termites, and fairy circles. Then, we consider transient temporal dynamics in the context of asymmetric competition that primarily weakens the smaller of two competitors such as canopy shading or large colony dominance. When such competition couples slow timescales of growth with fast competitive death, it generates population shock waves similar to those observed in forest demographic data. Our minimal quantitative theory unifies spatiotemporal patterns across sessile organisms through local competition mediated by the laws of metabolic growth which in turn result from long-term evolutionary dynamics.
1703.10996
Frank Poelwijk
Frank J. Poelwijk, Rama Ranganathan
The relation between alignment covariance and background-averaged epistasis
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Epistasis, or the context-dependence of the effects of mutations, limits our ability to predict the functional impact of combinations of mutations, and ultimately our ability to predict evolutionary trajectories. Information about the context-dependence of mutations can essentially be obtained in two ways: first, by experimentally measuring the functional effects of combinations of mutations and calculating the epistatic contributions directly; and second, by statistical analysis of the frequencies and co-occurrences of protein residues in a multiple sequence alignment of protein homologs. In this manuscript, we derive the mathematical relationship between epistasis calculated on the basis of functional measurements, and the covariance calculated from a multiple sequence alignment. There is no one-to-one mapping between covariance and epistatic terms: covariance implies epistasis, but epistasis does not necessarily lead to covariance, indicating that covariance in itself is not the directly relevant quantity for functional prediction. Having calculated epistatic contributions from the alignment, we can directly obtain a functional prediction from the alignment statistics by applying a Walsh-Hadamard transform, fully analogous to the transformation that reconstructs functional data from measured epistatic contributions. This embedding into the Hadamard framework is directly relevant for solidifying our theoretical understanding of statistical methods that predict function and three-dimensional structure from natural alignments.
[ { "created": "Fri, 31 Mar 2017 17:40:08 GMT", "version": "v1" } ]
2017-04-03
[ [ "Poelwijk", "Frank J.", "" ], [ "Ranganathan", "Rama", "" ] ]
Epistasis, or the context-dependence of the effects of mutations, limits our ability to predict the functional impact of combinations of mutations, and ultimately our ability to predict evolutionary trajectories. Information about the context-dependence of mutations can essentially be obtained in two ways: first, by experimentally measuring the functional effects of combinations of mutations and calculating the epistatic contributions directly; and second, by statistical analysis of the frequencies and co-occurrences of protein residues in a multiple sequence alignment of protein homologs. In this manuscript, we derive the mathematical relationship between epistasis calculated on the basis of functional measurements, and the covariance calculated from a multiple sequence alignment. There is no one-to-one mapping between covariance and epistatic terms: covariance implies epistasis, but epistasis does not necessarily lead to covariance, indicating that covariance in itself is not the directly relevant quantity for functional prediction. Having calculated epistatic contributions from the alignment, we can directly obtain a functional prediction from the alignment statistics by applying a Walsh-Hadamard transform, fully analogous to the transformation that reconstructs functional data from measured epistatic contributions. This embedding into the Hadamard framework is directly relevant for solidifying our theoretical understanding of statistical methods that predict function and three-dimensional structure from natural alignments.
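The Walsh-Hadamard transform invoked in this record can be illustrated on a two-locus example. The sketch below uses the standard unnormalised fast transform and toy phenotype values; sign and normalisation conventions (and the background-averaging) in the paper itself may differ, so treat this only as showing how the transform exposes a non-additive (epistatic) interaction term.

```python
def walsh_hadamard(y):
    """Unnormalised fast Walsh-Hadamard transform of a sequence whose
    length is a power of two; returns a new list."""
    y = list(y)
    n = len(y)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = y[j], y[j + h]
                y[j], y[j + h] = a + b, a - b
        h *= 2
    return y

# Phenotypes of the four genotypes 00, 01, 10, 11 (toy values),
# indexed in binary order. If single-mutant effects combine
# additively, the pairwise (last) transform coefficient vanishes.
additive = [0.0, 1.0, 2.0, 3.0]   # 3.0 == 1.0 + 2.0: no epistasis
epistatic = [0.0, 1.0, 2.0, 5.0]  # 5.0 != 1.0 + 2.0: epistasis
```

Applying `walsh_hadamard` to `additive` gives a zero pairwise coefficient, while `epistatic` yields a nonzero one, which is the sense in which the transform recovers epistatic contributions from complete functional measurements.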
2302.07648
Georgia Smith
Georgia Smith and Yishi Wang
Atrial Fibrillation Detection Using RR-Intervals for Application in Photoplethysmographs
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Atrial fibrillation is a common form of irregular heart rhythm that can be very dangerous. Our primary goal is to analyze atrial fibrillation data within ECGs and develop a model based only on RR-intervals, the times between heartbeats, to create a real-time atrial fibrillation classifier that could be implemented in common heart-rate monitors on the market today. Physionet's MIT-BIH Atrial Fibrillation Database \cite{goldberger2000physiobank} and 2017 Challenge Database \cite{clifford2017af} were used to identify patterns of atrial fibrillation and to test classification models. These two datasets are very different. The MIT-BIH database contains long samples taken with a medical-grade device; it is not useful for simulating a consumer device, but it is useful for atrial fibrillation pattern detection. The 2017 Challenge database includes short ($<60$ sec) samples taken with a portable device and reveals many of the challenges of real-time atrial fibrillation classification. We developed multiple SVM models with three sets of extracted features as predictor variables, which gave us moderately high accuracies at low computational cost. With the robust filtering techniques already applied in many photoplethysmograph-based consumer heart-rate monitors, this method can be used to develop a reliable real-time model for atrial fibrillation detection in consumer-grade heart-rate monitors.
[ { "created": "Mon, 13 Feb 2023 20:28:41 GMT", "version": "v1" } ]
2023-02-16
[ [ "Smith", "Georgia", "" ], [ "Wang", "Yishi", "" ] ]
Atrial Fibrillation is a common form of irregular heart rhythm that can be very dangerous. Our primary goal is to analyze Atrial Fibrillation data within ECGs to develop a model based only on RR-Intervals, or the lengths between heartbeats, to create a real-time classification model for Atrial Fibrillation to be implemented in common heart-rate monitors on the market today. PhysioNet's MIT-BIH Atrial Fibrillation Database \cite{goldberger2000physiobank} and 2017 Challenge Database \cite{clifford2017af} were used to identify patterns of Atrial Fibrillation and to test classification models. These two datasets are very different. The MIT-BIH database contains long samples taken with a medical-grade device, which is not useful for simulating a consumer device, but is useful for Atrial Fibrillation pattern detection. The 2017 Challenge database includes short ($<60$ sec) samples taken with a portable device and reveals many of the challenges of Atrial Fibrillation classification in a real-time device. We developed multiple SVM models with three sets of extracted features as predictor variables, which gave us moderately high accuracies with low computational intensity. With robust filtering techniques already applied in many Photoplethysmograph-based consumer heart-rate monitors, this method can be used to develop a reliable real-time model for Atrial Fibrillation detection in consumer-grade heart-rate monitors.
1902.03977
Andres Ospina-Alvarez Dr.
Andres Ospina-Alvarez, Silvia de Juan, Josep Al\'os, Gotzon Basterretxea, Alexandre Alonso-Fern\'andez, Guillermo Follana-Bern\'a, Miquel Palmer, and Ignacio A. Catal\'an
MPA network design based on graph network theory and emergent properties of larval dispersal
8 figures, 3 tables, 1 Supplementary material (including 4 table; 3 figures and supplementary methods)
Mar. Ecol. Prog. Ser. 650:309-326 (2020)
10.3354/meps13399
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Despite the recognised effectiveness of networks of Marine Protected Areas (MPAs) as a biodiversity conservation instrument, nowadays MPA network design frequently disregards the importance of connectivity patterns. In the case of sedentary marine populations, connectivity stems not only from the stochastic nature of the physical environment that affects early-life-stage dispersal, but also from the spawning stock attributes that affect the reproductive output (e.g., passive eggs and larvae) and its survivorship. Early-life stages are virtually impossible to track in the ocean. Therefore, numerical ocean current simulations coupled to egg and larval Lagrangian transport models remain the most common approach for the assessment of marine larval connectivity. Inferred larval connectivity may be different depending on the type of connectivity considered; consequently, the prioritisation of sites for marine populations' conservation might also differ. Here, we introduce a framework for evaluating and designing MPA networks based on the identification of connectivity hotspots using graph theoretic analysis. We use as a case study a network of open-access areas and MPAs off Mallorca Island (Spain), and test its effectiveness for the protection of the painted comber Serranus scriba. Outputs from network analysis are used to: (1) identify critical areas for improving overall larval connectivity; (2) assess the impact of species' biological parameters in network connectivity; and (3) explore alternative MPA configurations to improve average network connectivity. Results demonstrate the potential of graph theory to identify non-trivial egg/larval dispersal patterns and emerging collective properties of the MPA network which are relevant for increasing protection efficiency.
[ { "created": "Mon, 11 Feb 2019 16:36:10 GMT", "version": "v1" }, { "created": "Wed, 27 Feb 2019 10:22:01 GMT", "version": "v2" }, { "created": "Thu, 7 Mar 2019 15:37:14 GMT", "version": "v3" }, { "created": "Thu, 14 Mar 2019 05:35:25 GMT", "version": "v4" }, { "created": "Tue, 21 Apr 2020 19:35:14 GMT", "version": "v5" }, { "created": "Mon, 6 Jul 2020 17:03:18 GMT", "version": "v6" } ]
2021-02-11
[ [ "Ospina-Alvarez", "Andres", "" ], [ "de Juan", "Silvia", "" ], [ "Alós", "Josep", "" ], [ "Basterretxea", "Gotzon", "" ], [ "Alonso-Fernández", "Alexandre", "" ], [ "Follana-Berná", "Guillermo", "" ], [ "Palmer", "Miquel", "" ], [ "Catalán", "Ignacio A.", "" ] ]
Despite the recognised effectiveness of networks of Marine Protected Areas (MPAs) as a biodiversity conservation instrument, nowadays MPA network design frequently disregards the importance of connectivity patterns. In the case of sedentary marine populations, connectivity stems not only from the stochastic nature of the physical environment that affects early-life-stage dispersal, but also from the spawning stock attributes that affect the reproductive output (e.g., passive eggs and larvae) and its survivorship. Early-life stages are virtually impossible to track in the ocean. Therefore, numerical ocean current simulations coupled to egg and larval Lagrangian transport models remain the most common approach for the assessment of marine larval connectivity. Inferred larval connectivity may be different depending on the type of connectivity considered; consequently, the prioritisation of sites for marine populations' conservation might also differ. Here, we introduce a framework for evaluating and designing MPA networks based on the identification of connectivity hotspots using graph theoretic analysis. We use as a case study a network of open-access areas and MPAs off Mallorca Island (Spain), and test its effectiveness for the protection of the painted comber Serranus scriba. Outputs from network analysis are used to: (1) identify critical areas for improving overall larval connectivity; (2) assess the impact of species' biological parameters in network connectivity; and (3) explore alternative MPA configurations to improve average network connectivity. Results demonstrate the potential of graph theory to identify non-trivial egg/larval dispersal patterns and emerging collective properties of the MPA network which are relevant for increasing protection efficiency.
1003.4674
Gustav Delius
Jose A. Capitan and Gustav W. Delius
A scale-invariant model of marine population dynamics
Same as published version in Phys.Rev.E. except for a correction in the appendix of the coefficients in the Fokker-Planck equation (A8). 18 pages, 8 figures
Phys. Rev. E 81, 061901 (2010)
10.1103/PhysRevE.81.061901
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A striking feature of the marine ecosystem is the regularity in its size spectrum: the abundance of organisms as a function of their weight approximately follows a power law over almost ten orders of magnitude. We interpret this as evidence that the population dynamics in the ocean is approximately scale-invariant. We use this invariance in the construction and solution of a size-structured dynamical population model. Starting from a Markov model encoding the basic processes of predation, reproduction, maintenance respiration and intrinsic mortality, we derive a partial integro-differential equation describing the dependence of abundance on weight and time. Our model represents an extension of the jump-growth model and hence also of earlier models based on the McKendrick--von Foerster equation. The model is scale-invariant provided the rate functions of the stochastic processes have certain scaling properties. We determine the steady-state power law solution, whose exponent is determined by the relative scaling between the rates of the density-dependent processes (predation) and the rates of the density-independent processes (reproduction, maintenance, mortality). We study the stability of the steady-state against small perturbations and find that inclusion of maintenance respiration and reproduction in the model has a strong stabilising effect. Furthermore, the steady state is unstable against a change in the overall population density unless the reproduction rate exceeds a certain threshold.
[ { "created": "Wed, 24 Mar 2010 15:55:54 GMT", "version": "v1" }, { "created": "Sun, 30 May 2010 07:54:47 GMT", "version": "v2" }, { "created": "Thu, 16 Sep 2010 10:10:16 GMT", "version": "v3" } ]
2010-09-17
[ [ "Capitan", "Jose A.", "" ], [ "Delius", "Gustav W.", "" ] ]
A striking feature of the marine ecosystem is the regularity in its size spectrum: the abundance of organisms as a function of their weight approximately follows a power law over almost ten orders of magnitude. We interpret this as evidence that the population dynamics in the ocean is approximately scale-invariant. We use this invariance in the construction and solution of a size-structured dynamical population model. Starting from a Markov model encoding the basic processes of predation, reproduction, maintenance respiration and intrinsic mortality, we derive a partial integro-differential equation describing the dependence of abundance on weight and time. Our model represents an extension of the jump-growth model and hence also of earlier models based on the McKendrick--von Foerster equation. The model is scale-invariant provided the rate functions of the stochastic processes have certain scaling properties. We determine the steady-state power law solution, whose exponent is determined by the relative scaling between the rates of the density-dependent processes (predation) and the rates of the density-independent processes (reproduction, maintenance, mortality). We study the stability of the steady-state against small perturbations and find that inclusion of maintenance respiration and reproduction in the model has a strong stabilising effect. Furthermore, the steady state is unstable against a change in the overall population density unless the reproduction rate exceeds a certain threshold.
1407.0387
Saikat Chatterjee
Saikat Chatterjee, David Koslicki, Siyuan Dong, Nicolas Innocenti, Lu Cheng, Yueheng Lan, Mikko Vehkaper\"a, Mikael Skoglund, Lars K. Rasmussen, Erik Aurell, Jukka Corander
SEK: Sparsity exploiting $k$-mer-based estimation of bacterial community composition
10 pages
null
10.1093/bioinformatics/btu320
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Estimation of bacterial community composition from a high-throughput sequenced sample is an important task in metagenomics applications. Since the sample sequence data typically harbors reads of variable lengths and different levels of biological and technical noise, accurate statistical analysis of such data is challenging. Currently popular estimation methods are typically very time-consuming in a desktop computing environment. Results: Using sparsity-enforcing methods from the general sparse signal processing field (such as compressed sensing), we derive a solution to the community composition estimation problem by a simultaneous assignment of all sample reads to a pre-processed reference database. A general statistical model based on kernel density estimation techniques is introduced for the assignment task and the model solution is obtained using convex optimization tools. Further, we design a greedy algorithm to obtain a fast solution. Our approach offers a reasonably fast community composition estimation method which is shown to be more robust to input data variation than a recently introduced related method. Availability: A platform-independent Matlab implementation of the method is freely available at http://www.ee.kth.se/ctsoftware; source code that does not require access to Matlab is currently being tested and will be made available later through the above website.
[ { "created": "Tue, 1 Jul 2014 10:46:59 GMT", "version": "v1" } ]
2014-10-02
[ [ "Chatterjee", "Saikat", "" ], [ "Koslicki", "David", "" ], [ "Dong", "Siyuan", "" ], [ "Innocenti", "Nicolas", "" ], [ "Cheng", "Lu", "" ], [ "Lan", "Yueheng", "" ], [ "Vehkaperä", "Mikko", "" ], [ "Skoglund", "Mikael", "" ], [ "Rasmussen", "Lars K.", "" ], [ "Aurell", "Erik", "" ], [ "Corander", "Jukka", "" ] ]
Motivation: Estimation of bacterial community composition from a high-throughput sequenced sample is an important task in metagenomics applications. Since the sample sequence data typically harbors reads of variable lengths and different levels of biological and technical noise, accurate statistical analysis of such data is challenging. Currently popular estimation methods are typically very time-consuming in a desktop computing environment. Results: Using sparsity-enforcing methods from the general sparse signal processing field (such as compressed sensing), we derive a solution to the community composition estimation problem by a simultaneous assignment of all sample reads to a pre-processed reference database. A general statistical model based on kernel density estimation techniques is introduced for the assignment task and the model solution is obtained using convex optimization tools. Further, we design a greedy algorithm to obtain a fast solution. Our approach offers a reasonably fast community composition estimation method which is shown to be more robust to input data variation than a recently introduced related method. Availability: A platform-independent Matlab implementation of the method is freely available at http://www.ee.kth.se/ctsoftware; source code that does not require access to Matlab is currently being tested and will be made available later through the above website.
2305.07948
Jacob Beal
Jacob Beal
Flow Cytometry Quantification of Transient Transfections in Mammalian Cells
23 pages, 13 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Flow cytometry is a powerful quantitative assay supporting high-throughput collection of single-cell data with a high dynamic range. For flow cytometry to yield reproducible data with a quantitative relationship to the underlying biology, however, 1) appropriate process controls must be collected along with experimental samples, 2) these process controls must be used for unit calibration and quality control, and 3) the data must be analyzed using appropriate statistics. To this end, this article describes methods for quantitative flow cytometry through addition of process controls and analyses, thereby enabling better development, modeling, and debugging of engineered biological organisms. The methods described here have specifically been developed in the context of transient transfections in mammalian cells, but may in many cases be adaptable to other categories of transfection and other types of cells.
[ { "created": "Sat, 13 May 2023 15:35:54 GMT", "version": "v1" } ]
2023-05-16
[ [ "Beal", "Jacob", "" ] ]
Flow cytometry is a powerful quantitative assay supporting high-throughput collection of single-cell data with a high dynamic range. For flow cytometry to yield reproducible data with a quantitative relationship to the underlying biology, however, 1) appropriate process controls must be collected along with experimental samples, 2) these process controls must be used for unit calibration and quality control, and 3) the data must be analyzed using appropriate statistics. To this end, this article describes methods for quantitative flow cytometry through addition of process controls and analyses, thereby enabling better development, modeling, and debugging of engineered biological organisms. The methods described here have specifically been developed in the context of transient transfections in mammalian cells, but may in many cases be adaptable to other categories of transfection and other types of cells.
2111.11373
Anne-Florence Bitbol
Andonis Gerardos, Nicola Dietler and Anne-Florence Bitbol
Correlations from structure and phylogeny combine constructively in the inference of protein partners from sequences
33 pages, 20 figures
PLoS Comput. Biol. 18(5): e1010147 (2022)
10.1371/journal.pcbi.1010147
null
q-bio.BM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inferring protein-protein interactions from sequences is an important task in computational biology. Recent methods based on Direct Coupling Analysis (DCA) or Mutual Information (MI) allow one to find interaction partners among paralogs of two protein families. Does successful inference mainly rely on correlations from structural contacts or from phylogeny, or both? Do these two types of signal combine constructively or hinder each other? To address these questions, we generate and analyze synthetic data produced using a minimal model that allows us to control the amounts of structural constraints and phylogeny. We show that correlations from these two sources combine constructively to increase the performance of partner inference by DCA or MI. Furthermore, signal from phylogeny can rescue partner inference when signal from contacts becomes less informative, including in the realistic case where inter-protein contacts are restricted to a small subset of sites. We also demonstrate that DCA-inferred couplings between non-contact pairs of sites improve partner inference in the presence of strong phylogeny, while deteriorating it otherwise. Moreover, restricting to non-contact pairs of sites preserves inference performance in the presence of strong phylogeny. In a natural data set, as well as in realistic synthetic data based on it, we find that non-contact pairs of sites contribute positively to partner inference performance, and that restricting to them preserves performance, evidencing an important role of phylogeny.
[ { "created": "Mon, 22 Nov 2021 17:31:40 GMT", "version": "v1" }, { "created": "Mon, 25 Apr 2022 12:12:17 GMT", "version": "v2" }, { "created": "Tue, 17 May 2022 20:01:01 GMT", "version": "v3" } ]
2022-05-19
[ [ "Gerardos", "Andonis", "" ], [ "Dietler", "Nicola", "" ], [ "Bitbol", "Anne-Florence", "" ] ]
Inferring protein-protein interactions from sequences is an important task in computational biology. Recent methods based on Direct Coupling Analysis (DCA) or Mutual Information (MI) allow one to find interaction partners among paralogs of two protein families. Does successful inference mainly rely on correlations from structural contacts or from phylogeny, or both? Do these two types of signal combine constructively or hinder each other? To address these questions, we generate and analyze synthetic data produced using a minimal model that allows us to control the amounts of structural constraints and phylogeny. We show that correlations from these two sources combine constructively to increase the performance of partner inference by DCA or MI. Furthermore, signal from phylogeny can rescue partner inference when signal from contacts becomes less informative, including in the realistic case where inter-protein contacts are restricted to a small subset of sites. We also demonstrate that DCA-inferred couplings between non-contact pairs of sites improve partner inference in the presence of strong phylogeny, while deteriorating it otherwise. Moreover, restricting to non-contact pairs of sites preserves inference performance in the presence of strong phylogeny. In a natural data set, as well as in realistic synthetic data based on it, we find that non-contact pairs of sites contribute positively to partner inference performance, and that restricting to them preserves performance, evidencing an important role of phylogeny.
1404.7728
Samuel Johnson
Samuel Johnson, Virginia Dom\'inguez-Garc\'ia, Luca Donetti, and Miguel A. Mu\~noz
Trophic coherence determines food-web stability
Manuscript plus Supporting Information. To appear in PNAS
PNAS (2014) 111: 17923-17928
10.1073/pnas.1409077111
null
q-bio.PE cond-mat.stat-mech math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Why are large, complex ecosystems stable? Both theory and simulations of current models predict the onset of instability with growing size and complexity, so for decades it has been conjectured that ecosystems must have some unidentified structural property exempting them from this outcome. We show that 'trophic coherence' -- a hitherto ignored feature of food webs which current structural models fail to reproduce -- is a better statistical predictor of linear stability than size or complexity. Furthermore, we prove that a maximally coherent network with constant interaction strengths will always be linearly stable. We also propose a simple model which, by correctly capturing the trophic coherence of food webs, accurately reproduces their stability and other basic structural features. Most remarkably, our model shows that stability can increase with size and complexity. This suggests a key to May's Paradox, and a range of opportunities and concerns for biodiversity conservation.
[ { "created": "Wed, 30 Apr 2014 13:59:03 GMT", "version": "v1" }, { "created": "Tue, 16 Sep 2014 12:41:21 GMT", "version": "v2" }, { "created": "Mon, 24 Nov 2014 15:03:31 GMT", "version": "v3" } ]
2016-08-11
[ [ "Johnson", "Samuel", "" ], [ "Domínguez-García", "Virginia", "" ], [ "Donetti", "Luca", "" ], [ "Muñoz", "Miguel A.", "" ] ]
Why are large, complex ecosystems stable? Both theory and simulations of current models predict the onset of instability with growing size and complexity, so for decades it has been conjectured that ecosystems must have some unidentified structural property exempting them from this outcome. We show that 'trophic coherence' -- a hitherto ignored feature of food webs which current structural models fail to reproduce -- is a better statistical predictor of linear stability than size or complexity. Furthermore, we prove that a maximally coherent network with constant interaction strengths will always be linearly stable. We also propose a simple model which, by correctly capturing the trophic coherence of food webs, accurately reproduces their stability and other basic structural features. Most remarkably, our model shows that stability can increase with size and complexity. This suggests a key to May's Paradox, and a range of opportunities and concerns for biodiversity conservation.
1512.03660
Raphael Cl\'ement
Raphael Clement
Stephane Leduc and the vital exception in the Life Sciences
null
null
null
null
q-bio.OT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Embryogenesis, the process by which an organism forms and develops, has long been and still is a major field of investigation in the natural sciences. By which means, which forces, are embryonic cells and tissues assembled, deformed, and eventually organized into an animal? Because embryogenesis deeply questions our understanding of the mechanisms of life, it has motivated many scientific theories and philosophies over the course of history. While genetics now seems to have emerged as a natural background to study embryogenesis, it was intuited long ago that it should also rely on mechanical forces and on the physical properties of cells and tissues. In the early 20th century, Stephane Leduc proposed that biology was merely a subset of fluid physics, and argued that biology should focus on how forces act on living matter. Rejecting vitalism and life-specific approaches, he designed naive experiments based on osmosis and diffusion to mimic shapes and phenomena found in living systems, in order to identify physical mechanisms that could support the development of the embryo. While Leduc's ideas then had some impact in the field, notably on the later acclaimed D'Arcy Thompson, they fell into oblivion during the later 20th century. In this article I give an overview of Stephane Leduc's physical approach to life, and show that the paradigm that he introduced, although long forsaken, becomes more and more topical today, as developmental biology increasingly turns to physics and self-organization theories to study the mechanisms of embryogenesis. His story, I suggest, bears witness to our reluctance to abandon life-specific approaches in biology.
[ { "created": "Fri, 11 Dec 2015 14:36:29 GMT", "version": "v1" }, { "created": "Mon, 4 Jan 2016 16:42:26 GMT", "version": "v2" }, { "created": "Mon, 11 Jan 2016 13:27:56 GMT", "version": "v3" }, { "created": "Fri, 4 Mar 2016 15:05:55 GMT", "version": "v4" } ]
2016-03-07
[ [ "Clement", "Raphael", "" ] ]
Embryogenesis, the process by which an organism forms and develops, has long been and still is a major field of investigation in the natural sciences. By which means, which forces, are embryonic cells and tissues assembled, deformed, and eventually organized into an animal? Because embryogenesis deeply questions our understanding of the mechanisms of life, it has motivated many scientific theories and philosophies over the course of history. While genetics now seems to have emerged as a natural background to study embryogenesis, it was intuited long ago that it should also rely on mechanical forces and on the physical properties of cells and tissues. In the early 20th century, Stephane Leduc proposed that biology was merely a subset of fluid physics, and argued that biology should focus on how forces act on living matter. Rejecting vitalism and life-specific approaches, he designed naive experiments based on osmosis and diffusion to mimic shapes and phenomena found in living systems, in order to identify physical mechanisms that could support the development of the embryo. While Leduc's ideas then had some impact in the field, notably on the later acclaimed D'Arcy Thompson, they fell into oblivion during the later 20th century. In this article I give an overview of Stephane Leduc's physical approach to life, and show that the paradigm that he introduced, although long forsaken, becomes more and more topical today, as developmental biology increasingly turns to physics and self-organization theories to study the mechanisms of embryogenesis. His story, I suggest, bears witness to our reluctance to abandon life-specific approaches in biology.
1306.1235
Dmitri Finkelshtein L
Dmitri Finkelshtein, Yuri Kondratiev, Oleksandr Kutoviy, Stanislav Molchanov, Elena Zhizhina
Density behavior of spatial birth-and-death stochastic evolution of mutating genotypes under selection rates
14 pages
Russian Journal of Mathematical Physics, 2014, 21 (4), p. 450-459
10.1134/S1061920814040037
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the birth-and-death stochastic evolution of genotypes of different lengths. The genotypes may mutate, which provides a stochastic change of lengths according to a free diffusion law. The birth and death rates are length-dependent, which corresponds to a selection effect. We study the asymptotic behavior of the density of an infinite collection of genotypes. The cases of space-homogeneous and space-heterogeneous densities are considered.
[ { "created": "Wed, 5 Jun 2013 20:00:47 GMT", "version": "v1" } ]
2015-06-16
[ [ "Finkelshtein", "Dmitri", "" ], [ "Kondratiev", "Yuri", "" ], [ "Kutoviy", "Oleksandr", "" ], [ "Molchanov", "Stanislav", "" ], [ "Zhizhina", "Elena", "" ] ]
We consider the birth-and-death stochastic evolution of genotypes of different lengths. The genotypes may mutate, which provides a stochastic change of lengths according to a free diffusion law. The birth and death rates are length-dependent, which corresponds to a selection effect. We study the asymptotic behavior of the density of an infinite collection of genotypes. The cases of space-homogeneous and space-heterogeneous densities are considered.
2008.12455
Lana Garmire
Lana X Garmire
Strategies to integrate multi-omics data for patient survival prediction
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by-sa/4.0/
Genomics, especially multi-omics, has made precision medicine feasible. A complete and publicly accessible multi-omics resource with clinical outcomes, such as The Cancer Genome Atlas (TCGA), is a great test bed for developing computational methods that integrate multi-omics data to predict patient cancer phenotypes. We have been utilizing TCGA multi-omics data to predict cancer patient survival, using a variety of approaches, including prior biological knowledge (such as pathways) and, more recently, deep-learning methods. Over time, we have developed methods such as Cox-nnet, DeepProg, and two-stage Cox-nnet to address the challenges due to multi-omics and multi-modality. Despite the limited sample sizes (hundreds to thousands) in the training datasets, as well as the heterogeneous nature of human populations, these methods have shown significance and robustness at predicting patient survival in independent population cohorts. In the following, we describe in detail these methodologies, the modeling results, and important biological insights revealed by these methods.
[ { "created": "Fri, 28 Aug 2020 03:08:02 GMT", "version": "v1" } ]
2020-08-31
[ [ "Garmire", "Lana X", "" ] ]
Genomics, especially multi-omics, has made precision medicine feasible. A complete and publicly accessible multi-omics resource with clinical outcomes, such as The Cancer Genome Atlas (TCGA), is a great test bed for developing computational methods that integrate multi-omics data to predict patient cancer phenotypes. We have been utilizing TCGA multi-omics data to predict cancer patient survival, using a variety of approaches, including prior biological knowledge (such as pathways) and, more recently, deep-learning methods. Over time, we have developed methods such as Cox-nnet, DeepProg, and two-stage Cox-nnet to address the challenges due to multi-omics and multi-modality. Despite the limited sample sizes (hundreds to thousands) in the training datasets, as well as the heterogeneous nature of human populations, these methods have shown significance and robustness at predicting patient survival in independent population cohorts. In the following, we describe in detail these methodologies, the modeling results, and important biological insights revealed by these methods.
2403.15926
Yuri A. Dabaghian
C. Hoffman, J. Cheng, R. Morales, D. Ji, Y. Dabaghian
Altered patterning of neural activity in a tauopathy mouse model
17 pages, plus supplementary material
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Alzheimer's disease (AD) is a complex neurodegenerative condition that manifests at multiple levels and involves a spectrum of abnormalities ranging from the cellular to the cognitive. Here, we investigate the impact of AD-related tau-pathology on hippocampal circuits in mice engaged in spatial navigation, and study changes of neuronal firing and dynamics of extracellular fields. While most studies are based on analyzing instantaneous or time-averaged characteristics of neuronal activity, we focus on intermediate timescales -- spike trains and waveforms of oscillatory potentials, which we consider as single entities. We find that, in healthy mice, spike arrangements and wave patterns (series of crests or troughs) are coupled to the animal's location, speed, and acceleration. In contrast, in tau-mice, neural activity is structurally disarrayed: brainwave cadence is detached from locomotion, spatial selectivity is lost, the spike flow is scrambled. Importantly, these alterations start early and accumulate with age, which exposes a progressive disinvolvement of the hippocampal circuit in spatial navigation. These features highlight qualitatively different neurodynamics than the ones provided by conventional analyses, and are more salient, thus revealing a new level of hippocampal circuit disruption.
[ { "created": "Sat, 23 Mar 2024 20:17:06 GMT", "version": "v1" } ]
2024-03-26
[ [ "Hoffman", "C.", "" ], [ "Cheng", "J.", "" ], [ "Morales", "R.", "" ], [ "Ji", "D.", "" ], [ "Dabaghian", "Y.", "" ] ]
Alzheimer's disease (AD) is a complex neurodegenerative condition that manifests at multiple levels and involves a spectrum of abnormalities ranging from the cellular to the cognitive. Here, we investigate the impact of AD-related tau-pathology on hippocampal circuits in mice engaged in spatial navigation, and study changes of neuronal firing and dynamics of extracellular fields. While most studies are based on analyzing instantaneous or time-averaged characteristics of neuronal activity, we focus on intermediate timescales -- spike trains and waveforms of oscillatory potentials, which we consider as single entities. We find that, in healthy mice, spike arrangements and wave patterns (series of crests or troughs) are coupled to the animal's location, speed, and acceleration. In contrast, in tau-mice, neural activity is structurally disarrayed: brainwave cadence is detached from locomotion, spatial selectivity is lost, the spike flow is scrambled. Importantly, these alterations start early and accumulate with age, which exposes a progressive disinvolvement of the hippocampal circuit in spatial navigation. These features highlight qualitatively different neurodynamics than the ones provided by conventional analyses, and are more salient, thus revealing a new level of hippocampal circuit disruption.
1711.08032
Alexander Seeholzer
Alexander Seeholzer, Moritz Deger, Wulfram Gerstner
Efficient low-dimensional approximation of continuous attractor networks
23 pages, 6 figures, 3 tables. A previous version of this article was published as a thesis chapter of the first author
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Continuous "bump" attractors are an established model of cortical working memory for continuous variables and can be implemented using various neuron and network models. Here, we develop a generalizable approach for the approximation of bump states of continuous attractor networks implemented in networks of both rate-based and spiking neurons. The method relies on a low-dimensional parametrization of the spatial shape of firing rates, allowing efficient numerical optimization methods to be applied. Using our theory, we can establish a mapping between network structure and attractor properties that allows the prediction of the effects of network parameters on the steady-state firing rate profile and the existence of bumps, and, vice versa, the fine-tuning of a network to produce bumps of a given shape.
[ { "created": "Tue, 21 Nov 2017 20:47:18 GMT", "version": "v1" } ]
2017-11-23
[ [ "Seeholzer", "Alexander", "" ], [ "Deger", "Moritz", "" ], [ "Gerstner", "Wulfram", "" ] ]
Continuous "bump" attractors are an established model of cortical working memory for continuous variables and can be implemented using various neuron and network models. Here, we develop a generalizable approach for the approximation of bump states of continuous attractor networks implemented in networks of both rate-based and spiking neurons. The method relies on a low-dimensional parametrization of the spatial shape of firing rates, allowing efficient numerical optimization methods to be applied. Using our theory, we can establish a mapping between network structure and attractor properties that allows the prediction of the effects of network parameters on the steady-state firing rate profile and the existence of bumps, and, vice versa, the fine-tuning of a network to produce bumps of a given shape.
1504.07343
Daniel Villela
Daniel A. M. Villela
An analysis of the vectorial capacity using moment-generating functions
8 pages
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a technique for analyzing the stochastic structure of the vectorial capacity using moment-generating functions. In this formulation, for an infectious disease transmitted by a vector, we obtain the generating function for the distribution of the number of infectious contacts (e.g., infectious bites from mosquitoes) between vectors and humans after contact with a single infected individual. This approach permits us to derive the moments of the distribution in general and, under some conditions, the distribution function of the vectorial capacity. A stochastic modeling framework is helpful for analyzing the dynamics of disease spreading, such as performing sensitivity analysis.
[ { "created": "Tue, 28 Apr 2015 04:45:13 GMT", "version": "v1" } ]
2015-04-29
[ [ "Villela", "Daniel A. M.", "" ] ]
This paper describes a technique for analyzing the stochastic structure of the vectorial capacity using moment-generating functions. In this formulation, for an infectious disease transmitted by a vector, we obtain the generating function for the distribution of the number of infectious contacts (e.g., infectious bites from mosquitoes) between vectors and humans after contact with a single infected individual. This approach permits us to derive the moments of the distribution in general and, under some conditions, the distribution function of the vectorial capacity. A stochastic modeling framework is helpful for analyzing the dynamics of disease spreading, such as performing sensitivity analysis.
q-bio/0404025
Byung Mook Weon
Byung Mook Weon
Introduction to new demographic model for humans
12 pages, 1 figure
null
null
null
q-bio.PE
null
The Gompertz model has, since 1825, significantly contributed to the interpretation of ageing in the biological and social sciences. However, modern research findings make clear that the Gompertz model does not successfully describe whole demographic trajectories. In this letter, a new demographic model is introduced specifically to describe human demographic trajectories, for example, for Sweden (2002). The new model is derived from the Weibull model with an age-dependent shape parameter, which seems to indicate the dynamical aspects of biological systems for longevity. We discuss the origin of the age-dependent shape parameter. Finally, the new model presented here has significant potential to change our understanding of the definition of maximum longevity.
[ { "created": "Thu, 22 Apr 2004 08:31:49 GMT", "version": "v1" } ]
2007-05-23
[ [ "Weon", "Byung Mook", "" ] ]
The Gompertz model has, since 1825, significantly contributed to the interpretation of ageing in the biological and social sciences. However, modern research findings make clear that the Gompertz model does not successfully describe whole demographic trajectories. In this letter, a new demographic model is introduced specifically to describe human demographic trajectories, for example, for Sweden (2002). The new model is derived from the Weibull model with an age-dependent shape parameter, which seems to indicate the dynamical aspects of biological systems for longevity. We discuss the origin of the age-dependent shape parameter. Finally, the new model presented here has significant potential to change our understanding of the definition of maximum longevity.
1312.3669
Antonio Bianconi
Nicola Poccia, Gaetano Campi, Alessandro Ricci, Alessandra S. Caporale, Emanuela Di Cola, Thomas A. Hawkins, Antonio Bianconi
Changes of statistical structural fluctuations unveils an early compacted degraded stage of PNS myelin
16 pages, 6 figures
Scientific Reports 4, 5430 (2014)
10.1038/srep05430
null
q-bio.BM physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Degradation of the myelin sheath is a common pathology underlying demyelinating neurological diseases, from Multiple Sclerosis to Leukodystrophies. Although large malformations of myelin ultrastructure in the advanced stages of Wallerian degradation are known, its subtle structural variations at early stages of demyelination remain poorly characterized. This is partly due to the lack of suitable, non-invasive experimental probes possessing sufficient resolution to detect the degradation. Here we report the feasibility of applying an innovative non-invasive local structure experimental approach for imaging the changes of statistical structural fluctuations in the first stage of myelin degeneration. Scanning micro X-ray diffraction, using advances in synchrotron X-ray beam focusing and fast data collection, paired with spatial statistical analysis, has been used to unveil temporal changes in the myelin structure of dissected nerves following extraction of the Xenopus laevis sciatic nerve. The early myelin degeneration is a specific ordered compacted phase preceding the swollen myelin phase of Wallerian degradation. Our demonstration of the feasibility of the statistical analysis of SmXRD measurements on biological tissue paves the way for further structural investigations of degradation and death of neurons and other cells and tissues in diverse pathological states where nanoscale structural changes may be uncovered.
[ { "created": "Thu, 12 Dec 2013 22:59:56 GMT", "version": "v1" }, { "created": "Mon, 9 Jun 2014 13:54:12 GMT", "version": "v2" } ]
2014-07-15
[ [ "Poccia", "Nicola", "" ], [ "Campi", "Gaetano", "" ], [ "Ricci", "Alessandro", "" ], [ "Caporale", "Alessandra S.", "" ], [ "Di Cola", "Emanuela", "" ], [ "Hawkins", "Thomas A.", "" ], [ "Bianconi", "Antonio", "" ] ]
Degradation of the myelin sheath is a common pathology underlying demyelinating neurological diseases, from Multiple Sclerosis to Leukodystrophies. Although large malformations of myelin ultrastructure in the advanced stages of Wallerian degradation are known, its subtle structural variations at early stages of demyelination remain poorly characterized. This is partly due to the lack of suitable, non-invasive experimental probes possessing sufficient resolution to detect the degradation. Here we report the feasibility of applying an innovative non-invasive local structure experimental approach for imaging the changes of statistical structural fluctuations in the first stage of myelin degeneration. Scanning micro X-ray diffraction, using advances in synchrotron X-ray beam focusing and fast data collection, paired with spatial statistical analysis, has been used to unveil temporal changes in the myelin structure of dissected nerves following extraction of the Xenopus laevis sciatic nerve. The early myelin degeneration is a specific ordered compacted phase preceding the swollen myelin phase of Wallerian degradation. Our demonstration of the feasibility of the statistical analysis of SmXRD measurements on biological tissue paves the way for further structural investigations of degradation and death of neurons and other cells and tissues in diverse pathological states where nanoscale structural changes may be uncovered.
1207.3288
Andy Royle
J. Andrew Royle and Richard B. Chandler
Integrating Resource Selection Information with Spatial Capture-Recapture
null
null
null
null
q-bio.QM stat.AP
http://creativecommons.org/licenses/publicdomain/
Understanding space usage and resource selection is a primary focus of many studies of animal populations. Usually, such studies are based on location data obtained from telemetry, and resource selection functions (RSF) are used for inference. Another important focus of wildlife research is estimation and modeling population size and density. Recently developed spatial capture-recapture (SCR) models accomplish this objective using individual encounter history data with auxiliary spatial information on location of capture. SCR models include encounter probability functions that are intuitively related to RSFs, but to date, no one has extended SCR models to allow for explicit inference about space usage and resource selection. We develop a statistical framework for jointly modeling space usage, resource selection, and population density by integrating SCR data, such as from camera traps, mist-nets, or conventional catch-traps, with resource selection data from telemetered individuals. We provide a framework for estimation based on marginal likelihood, wherein we estimate simultaneously the parameters of the SCR and RSF models. Our method leads to increases in precision for estimating population density and parameters of ordinary SCR models. Importantly, we also find that SCR models alone can estimate parameters of resource selection functions and, as such, SCR methods can be used as the sole source for studying space-usage; however, precision will be higher when telemetry data are available. Finally, we find that SCR models using standard symmetric and stationary encounter probability models produce biased estimates of density when animal space usage is related to a landscape covariate. Therefore, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture-recapture methods.
[ { "created": "Fri, 13 Jul 2012 16:05:40 GMT", "version": "v1" } ]
2012-07-16
[ [ "Royle", "J. Andrew", "" ], [ "Chandler", "Richard B.", "" ] ]
Understanding space usage and resource selection is a primary focus of many studies of animal populations. Usually, such studies are based on location data obtained from telemetry, and resource selection functions (RSF) are used for inference. Another important focus of wildlife research is estimation and modeling population size and density. Recently developed spatial capture-recapture (SCR) models accomplish this objective using individual encounter history data with auxiliary spatial information on location of capture. SCR models include encounter probability functions that are intuitively related to RSFs, but to date, no one has extended SCR models to allow for explicit inference about space usage and resource selection. We develop a statistical framework for jointly modeling space usage, resource selection, and population density by integrating SCR data, such as from camera traps, mist-nets, or conventional catch-traps, with resource selection data from telemetered individuals. We provide a framework for estimation based on marginal likelihood, wherein we estimate simultaneously the parameters of the SCR and RSF models. Our method leads to increases in precision for estimating population density and parameters of ordinary SCR models. Importantly, we also find that SCR models alone can estimate parameters of resource selection functions and, as such, SCR methods can be used as the sole source for studying space-usage; however, precision will be higher when telemetry data are available. Finally, we find that SCR models using standard symmetric and stationary encounter probability models produce biased estimates of density when animal space usage is related to a landscape covariate. Therefore, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture-recapture methods.
2310.13898
Andy Lin
Andy Lin, Cameron Torres, Errett C. Hobbs, Jaydeep Bardhan, Stephen B. Aley, Charles T. Spencer, Karen L. Taylor, and Tony Chiang
Computational and Systems Biology Advances to Enable Bioagent-Agnostic Signatures
null
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Enumerated threat agent lists have long driven biodefense priorities. The global SARS-CoV-2 pandemic demonstrated the limitations of searching for known threat agents as compared to a more agnostic approach. Recent technological advances are enabling agent-agnostic biodefense, especially through the integration of multi-modal observations of host-pathogen interactions directed by a human immunological model. Although well-developed technical assays exist for many aspects of human-pathogen interaction, the analytic methods and pipelines to combine and holistically interpret the results of such assays are immature and require further investments to exploit new technologies. In this manuscript, we discuss potential immunologically based bioagent-agnostic approaches and the computational tool gaps the community should prioritize filling.
[ { "created": "Sat, 21 Oct 2023 03:32:18 GMT", "version": "v1" }, { "created": "Mon, 30 Oct 2023 19:57:24 GMT", "version": "v2" }, { "created": "Wed, 28 Feb 2024 18:29:08 GMT", "version": "v3" } ]
2024-02-29
[ [ "Lin", "Andy", "" ], [ "Torres", "Cameron", "" ], [ "Hobbs", "Errett C.", "" ], [ "Bardhan", "Jaydeep", "" ], [ "Aley", "Stephen B.", "" ], [ "Spencer", "Charles T.", "" ], [ "Taylor", "Karen L.", "" ], [ "Chiang", "Tony", "" ] ]
Enumerated threat agent lists have long driven biodefense priorities. The global SARS-CoV-2 pandemic demonstrated the limitations of searching for known threat agents as compared to a more agnostic approach. Recent technological advances are enabling agent-agnostic biodefense, especially through the integration of multi-modal observations of host-pathogen interactions directed by a human immunological model. Although well-developed technical assays exist for many aspects of human-pathogen interaction, the analytic methods and pipelines to combine and holistically interpret the results of such assays are immature and require further investments to exploit new technologies. In this manuscript, we discuss potential immunologically based bioagent-agnostic approaches and the computational tool gaps the community should prioritize filling.
1012.0337
Yoichiro Mori
Yoichiro Mori, Alexandra Jilkine, Leah Edelstein-Keshet
Asymptotic and bifurcation analysis of wave-pinning in a reaction-diffusion model for cell polarization
Manuscript submitted April 4, 2010 to SIAM Journal of Applied Mathematics, under review
null
null
null
q-bio.CB math.DS nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe and analyze a bistable reaction-diffusion (RD) model for two interconverting chemical species that exhibits a phenomenon of wave-pinning: a wave of activation of one of the species is initiated at one end of the domain, moves into the domain, decelerates, and eventually stops inside the domain, forming a stationary front. The second ("inactive") species is depleted in this process. This behavior arises in a model for chemical polarization of a cell by Rho GTPases in response to stimulation. The initially spatially homogeneous concentration profile (representative of a resting cell) develops into an asymmetric stationary front profile (typical of a polarized cell). Wave-pinning here is based on three properties: (1) mass conservation in a finite domain, (2) nonlinear reaction kinetics allowing for multiple stable steady states, and (3) a sufficiently large difference in diffusion of the two species. Using matched asymptotic analysis, we explain the mathematical basis of wave-pinning, and predict the speed and pinned position of the wave. An analysis of the bifurcation of the pinned front solution reveals how the wave-pinning regime depends on parameters such as rates of diffusion and total mass of the species. We describe two ways in which the pinned solution can be lost depending on the details of the reaction kinetics: a saddle-node or a pitchfork bifurcation.
[ { "created": "Wed, 1 Dec 2010 22:08:28 GMT", "version": "v1" } ]
2010-12-03
[ [ "Mori", "Yoichiro", "" ], [ "Jilkine", "Alexandra", "" ], [ "Edelstein-Keshet", "Leah", "" ] ]
We describe and analyze a bistable reaction-diffusion (RD) model for two interconverting chemical species that exhibits a phenomenon of wave-pinning: a wave of activation of one of the species is initiated at one end of the domain, moves into the domain, decelerates, and eventually stops inside the domain, forming a stationary front. The second ("inactive") species is depleted in this process. This behavior arises in a model for chemical polarization of a cell by Rho GTPases in response to stimulation. The initially spatially homogeneous concentration profile (representative of a resting cell) develops into an asymmetric stationary front profile (typical of a polarized cell). Wave-pinning here is based on three properties: (1) mass conservation in a finite domain, (2) nonlinear reaction kinetics allowing for multiple stable steady states, and (3) a sufficiently large difference in diffusion of the two species. Using matched asymptotic analysis, we explain the mathematical basis of wave-pinning, and predict the speed and pinned position of the wave. An analysis of the bifurcation of the pinned front solution reveals how the wave-pinning regime depends on parameters such as rates of diffusion and total mass of the species. We describe two ways in which the pinned solution can be lost depending on the details of the reaction kinetics: a saddle-node or a pitchfork bifurcation.
1801.07086
Xiang-Yi Li
Xiang-Yi Li and Hanna Kokko
Sex-biased dispersal: a review of the theory
null
null
10.1111/brv.12475
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dispersal is ubiquitous throughout the tree of life: factors selecting for dispersal include kin competition, inbreeding avoidance and spatiotemporal variation in resources or habitat suitability. These factors differ in whether they promote male and female dispersal equally strongly, and often selection on dispersal of one sex depends on how much the other disperses. For example, for inbreeding avoidance it can be sufficient that one sex disperses away from the natal site. Attempts to understand sex-specific dispersal evolution have created a rich body of theoretical literature, which we review here. We highlight an interesting gap between empirical and theoretical literature. The former associates different patterns of sex-biased dispersal with mating systems, such as female-biased dispersal in monogamous birds and male-biased dispersal in polygynous mammals. The predominant explanation is traceable back to Greenwood's (1980) ideas of how successful philopatric or dispersing individuals are at gaining mates or resources required to attract them. Theory, however, has developed surprisingly independently of these ideas: predominant ideas in theoretical work track how immigration and emigration change relatedness patterns and alleviate competition for limiting resources, typically considered sexually distinct, with breeding sites and fertilisable females limiting reproductive success for females and males, respectively. We show that the link between mating system and sex-biased dispersal is far from resolved: there are studies showing that mating systems matter, but the oft-stated association between polygyny and male-biased dispersal is not a straightforward theoretical expectation... (full abstract in the PDF)
[ { "created": "Mon, 22 Jan 2018 13:40:42 GMT", "version": "v1" }, { "created": "Wed, 31 Jan 2018 19:39:15 GMT", "version": "v2" }, { "created": "Mon, 23 Jul 2018 11:46:51 GMT", "version": "v3" }, { "created": "Mon, 24 Sep 2018 20:30:16 GMT", "version": "v4" }, { "created": "Tue, 27 Nov 2018 07:52:37 GMT", "version": "v5" } ]
2018-11-28
[ [ "Li", "Xiang-Yi", "" ], [ "Kokko", "Hanna", "" ] ]
Dispersal is ubiquitous throughout the tree of life: factors selecting for dispersal include kin competition, inbreeding avoidance and spatiotemporal variation in resources or habitat suitability. These factors differ in whether they promote male and female dispersal equally strongly, and often selection on dispersal of one sex depends on how much the other disperses. For example, for inbreeding avoidance it can be sufficient that one sex disperses away from the natal site. Attempts to understand sex-specific dispersal evolution have created a rich body of theoretical literature, which we review here. We highlight an interesting gap between empirical and theoretical literature. The former associates different patterns of sex-biased dispersal with mating systems, such as female-biased dispersal in monogamous birds and male-biased dispersal in polygynous mammals. The predominant explanation is traceable back to Greenwood's (1980) ideas of how successful philopatric or dispersing individuals are at gaining mates or resources required to attract them. Theory, however, has developed surprisingly independently of these ideas: predominant ideas in theoretical work track how immigration and emigration change relatedness patterns and alleviate competition for limiting resources, typically considered sexually distinct, with breeding sites and fertilisable females limiting reproductive success for females and males, respectively. We show that the link between mating system and sex-biased dispersal is far from resolved: there are studies showing that mating systems matter, but the oft-stated association between polygyny and male-biased dispersal is not a straightforward theoretical expectation... (full abstract in the PDF)
1512.04781
Brian Williams Dr
Andrew Black, Janie Kriel, Michael Mitchley and Brian G. Williams
The burden of HIV in a Public Hospital in Johannesburg, South Africa
Three pages
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
South Africa has the greatest number of people living with HIV in the world, but the direct impact of this on the public health system has not been directly measured. Using data from the Chris Hani Baragwanath Hospital, the largest hospital in the Southern Hemisphere, collected between January 2006 and December 2009, we demonstrate directly the scale of the impact of HIV on mortality in health services in the public sector in South Africa. During the period under investigation 14,431 people died in the hospital's medical wards, an average of 11 deaths each day. Of those that died, 64 per cent of men and 82 per cent of women were HIV positive. Between the ages of 30 and 40, 94 per cent of men and 96 per cent of women of those that died were HIV-positive. These data reflect not only the extraordinary mortality directly attributable to the epidemic of HIV but also the massive burden placed on the health services at a time when triple combination therapy was available and these HIV-related deaths could have been averted.
[ { "created": "Tue, 15 Dec 2015 13:38:33 GMT", "version": "v1" } ]
2015-12-16
[ [ "Black", "Andrew", "" ], [ "Kriel", "Janie", "" ], [ "Mitchley", "Michael", "" ], [ "Williams", "Brian G.", "" ] ]
South Africa has the greatest number of people living with HIV in the world, but the direct impact of this on the public health system has not been directly measured. Using data from the Chris Hani Baragwanath Hospital, the largest hospital in the Southern Hemisphere, collected between January 2006 and December 2009, we demonstrate directly the scale of the impact of HIV on mortality in health services in the public sector in South Africa. During the period under investigation 14,431 people died in the hospital's medical wards, an average of 11 deaths each day. Of those that died, 64 per cent of men and 82 per cent of women were HIV positive. Between the ages of 30 and 40, 94 per cent of men and 96 per cent of women of those that died were HIV-positive. These data reflect not only the extraordinary mortality directly attributable to the epidemic of HIV but also the massive burden placed on the health services at a time when triple combination therapy was available and these HIV-related deaths could have been averted.
1809.09555
Johannes Falk
Johannes Falk, Leo Bronstein, Maleen Hanst, Barbara Drossel, Heinz Koeppl
Context in Synthetic Biology: Memory Effects of Environments with Mono-molecular Reactions
14 pages, 6 figures Accepted Version
J. Chem. Phys. 150, 024106 (2019)
10.1063/1.5053816
null
q-bio.MN cond-mat.stat-mech physics.bio-ph physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synthetic biology aims at designing modular genetic circuits that can be assembled according to the desired function. When embedded in a cell, a circuit module becomes a small subnetwork within a larger environmental network, and its dynamics is therefore affected by potentially unknown interactions with the environment. It is well-known that the presence of the environment not only causes extrinsic noise but also memory effects, which means that the dynamics of the subnetwork is affected by its past states via a memory function that is characteristic of the environment. We study several generic scenarios for the coupling between a small module and a larger environment, with the environment consisting of a chain of mono-molecular reactions. By mapping the dynamics of this coupled system onto random walks, we are able to give exact analytical expressions for the arising memory functions. Hence, our results give insights into the possible types of memory functions and thereby help to better predict subnetwork dynamics.
[ { "created": "Tue, 25 Sep 2018 15:39:23 GMT", "version": "v1" }, { "created": "Fri, 11 Jan 2019 09:20:19 GMT", "version": "v2" } ]
2019-01-14
[ [ "Falk", "Johannes", "" ], [ "Bronstein", "Leo", "" ], [ "Hanst", "Maleen", "" ], [ "Drossel", "Barbara", "" ], [ "Koeppl", "Heinz", "" ] ]
Synthetic biology aims at designing modular genetic circuits that can be assembled according to the desired function. When embedded in a cell, a circuit module becomes a small subnetwork within a larger environmental network, and its dynamics is therefore affected by potentially unknown interactions with the environment. It is well-known that the presence of the environment not only causes extrinsic noise but also memory effects, which means that the dynamics of the subnetwork is affected by its past states via a memory function that is characteristic of the environment. We study several generic scenarios for the coupling between a small module and a larger environment, with the environment consisting of a chain of mono-molecular reactions. By mapping the dynamics of this coupled system onto random walks, we are able to give exact analytical expressions for the arising memory functions. Hence, our results give insights into the possible types of memory functions and thereby help to better predict subnetwork dynamics.
2305.14355
Anagha P
Anagha P and Selvakumar R
Hypergraph representation in brain network analysis
10 pages
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
This paper studies the hypergraph representation of the brain network, based on its functional regions, as a tool for investigating the network's functional aspects. We introduce a new parameter that measures how many multifunctioning regions each function contains and, thereby, the correlation of other functions with each function.
[ { "created": "Fri, 5 May 2023 05:35:01 GMT", "version": "v1" }, { "created": "Tue, 19 Dec 2023 14:15:21 GMT", "version": "v2" } ]
2023-12-20
[ [ "P", "Anagha", "" ], [ "R", "Selvakumar", "" ] ]
This paper studies the hypergraph representation of the brain network, based on its functional regions, as a tool for investigating the network's functional aspects. We introduce a new parameter that measures how many multifunctioning regions each function contains and, thereby, the correlation of other functions with each function.
1903.06540
Atte Aalto
Atte Aalto and Jorge Goncalves
Linear system identification from ensemble snapshot observations
null
null
null
null
q-bio.QM stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Developments in transcriptomics techniques have created a large demand for tailored computational methods for modelling gene expression dynamics from experimental data. Recently, so-called single-cell experiments have revolutionised genetic studies. These experiments yield gene expression data at single-cell resolution for a large number of cells at a time. However, the cells are destroyed in the measurement process, and so the data consist of snapshots of an ensemble evolving over time, instead of time series. The problem studied in this article is how such data can be used in modelling gene regulatory dynamics. Two different paradigms are studied for linear system identification. The first is based on tracking the evolution of the distribution of cells over time. The second is based on the so-called pseudotime concept, identifying a common trajectory through the state space, along which cells propagate with different rates. Therefore, at any given time, the population contains cells in different stages of the trajectory. The resulting methods are compared in numerical experiments.
[ { "created": "Fri, 15 Mar 2019 13:24:12 GMT", "version": "v1" } ]
2019-03-18
[ [ "Aalto", "Atte", "" ], [ "Goncalves", "Jorge", "" ] ]
Developments in transcriptomics techniques have created a large demand for tailored computational methods for modelling gene expression dynamics from experimental data. Recently, so-called single-cell experiments have revolutionised genetic studies. These experiments yield gene expression data at single-cell resolution for a large number of cells at a time. However, the cells are destroyed in the measurement process, and so the data consist of snapshots of an ensemble evolving over time, instead of time series. The problem studied in this article is how such data can be used in modelling gene regulatory dynamics. Two different paradigms are studied for linear system identification. The first is based on tracking the evolution of the distribution of cells over time. The second is based on the so-called pseudotime concept, identifying a common trajectory through the state space, along which cells propagate with different rates. Therefore, at any given time, the population contains cells in different stages of the trajectory. The resulting methods are compared in numerical experiments.
1408.5216
John Bechhoefer
Michel G. Gauthier, Antoine Dub\'e, and John Bechhoefer
Numerical modeling of inhomogeneous DNA replication kinetics
This paper was written in 2011 as a chapter for a book project that, ultimately, failed to attract enough chapters to be published. We have not updated the references or discussion to take into account the significant amount of work done in this field since then. See, for example, A. Baker and J. Bechhoefer, arXiv:1312.4590 / Phys. Rev. E 89, 032703 (2014) for more recent developments
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a calculation technique for modeling inhomogeneous DNA replication kinetics, where replication factors such as initiation rates or fork speeds can change with both position and time. We can use our model to simulate data sets obtained by molecular combing, a widely used experimental technique for probing replication. We can also infer information about the replication program by fitting our model to experimental data sets and also test the efficacy of planned experiments by fitting our model to simulated data sets. We consider asynchronous data sets and illustrate how a lack of synchrony affects replication profiles. In addition to combing data, our technique is well-adapted to microarray-based studies of replication.
[ { "created": "Fri, 22 Aug 2014 06:45:43 GMT", "version": "v1" } ]
2014-08-25
[ [ "Gauthier", "Michel G.", "" ], [ "Dubé", "Antoine", "" ], [ "Bechhoefer", "John", "" ] ]
We present a calculation technique for modeling inhomogeneous DNA replication kinetics, where replication factors such as initiation rates or fork speeds can change with both position and time. We can use our model to simulate data sets obtained by molecular combing, a widely used experimental technique for probing replication. We can also infer information about the replication program by fitting our model to experimental data sets and also test the efficacy of planned experiments by fitting our model to simulated data sets. We consider asynchronous data sets and illustrate how a lack of synchrony affects replication profiles. In addition to combing data, our technique is well-adapted to microarray-based studies of replication.
1701.06184
Matthew Turner
S. Alex Rautu, G. Rowlands and M. S. Turner
Recycling controls membrane domains
5 pages, 4 figures
EPL 121 (2018) 58004
10.1209/0295-5075/121/58004
null
q-bio.SC cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the coarsening of strongly microphase-separated membrane domains in the presence of recycling of material. We study the dynamics of the domain size distribution under both scale-free and size-dependent recycling. Closed-form solutions for the steady-state distributions and their associated central moments are obtained in both cases. Moreover, for the size-independent case, the time evolution of the moments is analytically calculated, which provides us with exact results for their corresponding relaxation times. Since these moments and relaxation times are measurable quantities, the biophysically significant free parameters in our model may be determined by comparison with experimental data.
[ { "created": "Sun, 22 Jan 2017 16:26:21 GMT", "version": "v1" } ]
2018-05-11
[ [ "Rautu", "S. Alex", "" ], [ "Rowlands", "G.", "" ], [ "Turner", "M. S.", "" ] ]
We study the coarsening of strongly microphase-separated membrane domains in the presence of recycling of material. We study the dynamics of the domain size distribution under both scale-free and size-dependent recycling. Closed-form solutions for the steady-state distributions and their associated central moments are obtained in both cases. Moreover, for the size-independent case, the time evolution of the moments is analytically calculated, which provides us with exact results for their corresponding relaxation times. Since these moments and relaxation times are measurable quantities, the biophysically significant free parameters in our model may be determined by comparison with experimental data.
1510.02679
Svetlana Kurushina
V.T. Volov, V.V. Volov
Investigation of functional systems of the psychic self-organization based on the method of basal matrix
35 pages, 3 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An innovative matrix method for the analysis of basal emotions in individuals is presented. Matrix criteria for the stability of the psycho-emotional state have been formulated. The developed method has been validated on myographic data from individuals, and the results have been compared with psychological assessments.
[ { "created": "Fri, 9 Oct 2015 14:09:37 GMT", "version": "v1" } ]
2015-10-12
[ [ "Volov", "V. T.", "" ], [ "Volov", "V. V.", "" ] ]
An innovative matrix method for the analysis of basal emotions in individuals is presented. Matrix criteria for the stability of the psycho-emotional state have been formulated. The developed method has been validated on myographic data from individuals, and the results have been compared with psychological assessments.
1707.05639
Erkki Somersalo Dr.
Daniela Calvetti, Annalisa Pascarella, Francesca Pitolli, Erkki Somersalo, Barbara Vantaggi
Brain activity mapping from MEG data via a hierarchical Bayesian algorithm with automatic depth weighting: sensitivity and specificity analysis
null
null
null
null
q-bio.NC math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A recently proposed IAS MEG inverse solver algorithm, based on the coupling of a hierarchical Bayesian model with a computationally efficient Krylov subspace linear solver, has been shown to perform well for both superficial and deep brain sources. However, a systematic study of its sensitivity and specificity as a function of the activity location is still missing. We propose novel statistical protocols to quantify the performance of MEG inverse solvers, focusing in particular on their sensitivity and specificity in identifying active brain regions. We use these protocols for a systematic study of the sensitivity and specificity of the IAS MEG inverse solver, comparing the performance with three standard inversion methods, wMNE, dSPM, and sLORETA. To avoid the bias of anecdotal tests towards a particular algorithm, the proposed protocols are Monte Carlo sampling based, generating an ensemble of activity patches in each brain region identified in a given atlas. The sensitivity is measured by how much, on average, the reconstructed activity is concentrated in the brain region of the simulated active patch. The specificity analysis is based on Bayes factors, interpreting the estimated current activity as data for testing the hypothesis that the active brain region is correctly identified, vs. the hypothesis of any erroneous attribution. The methodology allows the presence of a single or several simultaneous activity regions, without assuming knowledge of the number of active regions. The testing protocols suggest that the IAS solver performs well in terms of sensitivity and specificity both with cortical and subcortical activity estimation.
[ { "created": "Sat, 15 Jul 2017 17:17:08 GMT", "version": "v1" } ]
2017-07-19
[ [ "Calvetti", "Daniela", "" ], [ "Pascarella", "Annalisa", "" ], [ "Pitolli", "Francesca", "" ], [ "Somersalo", "Erkki", "" ], [ "Vantaggi", "Barbara", "" ] ]
A recently proposed IAS MEG inverse solver algorithm, based on the coupling of a hierarchical Bayesian model with a computationally efficient Krylov subspace linear solver, has been shown to perform well for both superficial and deep brain sources. However, a systematic study of its sensitivity and specificity as a function of the activity location is still missing. We propose novel statistical protocols to quantify the performance of MEG inverse solvers, focusing in particular on their sensitivity and specificity in identifying active brain regions. We use these protocols for a systematic study of the sensitivity and specificity of the IAS MEG inverse solver, comparing the performance with three standard inversion methods, wMNE, dSPM, and sLORETA. To avoid the bias of anecdotal tests towards a particular algorithm, the proposed protocols are Monte Carlo sampling based, generating an ensemble of activity patches in each brain region identified in a given atlas. The sensitivity is measured by how much, on average, the reconstructed activity is concentrated in the brain region of the simulated active patch. The specificity analysis is based on Bayes factors, interpreting the estimated current activity as data for testing the hypothesis that the active brain region is correctly identified, vs. the hypothesis of any erroneous attribution. The methodology allows the presence of a single or several simultaneous activity regions, without assuming knowledge of the number of active regions. The testing protocols suggest that the IAS solver performs well in terms of sensitivity and specificity both with cortical and subcortical activity estimation.
2403.00875
Rui Sun
Rui Sun, Lirong Wu, Haitao Lin, Yufei Huang, Stan Z. Li
Enhancing Protein Predictive Models via Proteins Data Augmentation: A Benchmark and New Directions
null
null
null
null
q-bio.QM cs.AI cs.LG q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Augmentation is an effective alternative to utilize the small amount of labeled protein data. However, most of the existing work focuses on designing new architectures or pre-training tasks, and relatively little work has studied data augmentation for proteins. This paper extends data augmentation techniques previously used for images and texts to proteins and then benchmarks these techniques on a variety of protein-related tasks, providing the first comprehensive evaluation of protein augmentation. Furthermore, we propose two novel semantic-level protein augmentation methods, namely Integrated Gradients Substitution and Back Translation Substitution, which enable protein semantic-aware augmentation through saliency detection and biological knowledge. Finally, we integrate extended and proposed augmentations into an augmentation pool and propose a simple but effective framework, namely Automated Protein Augmentation (APA), which can adaptively select the most suitable augmentation combinations for different tasks. Extensive experiments have shown that APA enhances the performance of five protein-related tasks by an average of 10.55% across three architectures compared to vanilla implementations without augmentation, highlighting its potential to make a great impact on the field.
[ { "created": "Fri, 1 Mar 2024 07:58:29 GMT", "version": "v1" } ]
2024-03-05
[ [ "Sun", "Rui", "" ], [ "Wu", "Lirong", "" ], [ "Lin", "Haitao", "" ], [ "Huang", "Yufei", "" ], [ "Li", "Stan Z.", "" ] ]
Augmentation is an effective alternative to utilize the small amount of labeled protein data. However, most of the existing work focuses on designing new architectures or pre-training tasks, and relatively little work has studied data augmentation for proteins. This paper extends data augmentation techniques previously used for images and texts to proteins and then benchmarks these techniques on a variety of protein-related tasks, providing the first comprehensive evaluation of protein augmentation. Furthermore, we propose two novel semantic-level protein augmentation methods, namely Integrated Gradients Substitution and Back Translation Substitution, which enable protein semantic-aware augmentation through saliency detection and biological knowledge. Finally, we integrate extended and proposed augmentations into an augmentation pool and propose a simple but effective framework, namely Automated Protein Augmentation (APA), which can adaptively select the most suitable augmentation combinations for different tasks. Extensive experiments have shown that APA enhances the performance of five protein-related tasks by an average of 10.55% across three architectures compared to vanilla implementations without augmentation, highlighting its potential to make a great impact on the field.
2303.06154
Wolfgang Fuhl
Wolfgang Fuhl, Susanne Zabel, Kay Nieselt
Resource saving taxonomy classification with k-mer distributions and machine learning
null
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by/4.0/
Modern high throughput sequencing technologies like metagenomic sequencing generate millions of sequences which have to be classified based on their taxonomic rank. Modern approaches either apply local alignment and comparison to existing data sets like MMseqs2 or use deep neural networks as it is done in DeepMicrobes and BERTax. Alignment-based approaches are costly in terms of runtime, especially since databases get larger and larger. For the deep learning-based approaches, specialized hardware is necessary for a computation, which consumes large amounts of energy. In this paper, we propose to use $k$-mer distributions obtained from DNA as features to classify its taxonomic origin using machine learning approaches like the subspace $k$-nearest neighbors algorithm, neural networks or bagged decision trees. In addition, we propose a feature space data set balancing approach, which allows reducing the data set for training and improves the performance of the classifiers. By comparing performance, time, and memory consumption of our approach to those of state-of-the-art algorithms (BERTax and MMseqs2) using several datasets, we show that our approach improves the classification on the genus level and achieves comparable results for the superkingdom and phylum level. Link: https://es-cloud.cs.uni-tuebingen.de/d/8e2ab8c3fdd444e1a135/?p=%2FTaxonomyClassification&mode=list
[ { "created": "Fri, 10 Mar 2023 08:01:08 GMT", "version": "v1" } ]
2023-03-14
[ [ "Fuhl", "Wolfgang", "" ], [ "Zabel", "Susanne", "" ], [ "Nieselt", "Kay", "" ] ]
Modern high throughput sequencing technologies like metagenomic sequencing generate millions of sequences which have to be classified based on their taxonomic rank. Modern approaches either apply local alignment and comparison to existing data sets like MMseqs2 or use deep neural networks as it is done in DeepMicrobes and BERTax. Alignment-based approaches are costly in terms of runtime, especially since databases get larger and larger. For the deep learning-based approaches, specialized hardware is necessary for a computation, which consumes large amounts of energy. In this paper, we propose to use $k$-mer distributions obtained from DNA as features to classify its taxonomic origin using machine learning approaches like the subspace $k$-nearest neighbors algorithm, neural networks or bagged decision trees. In addition, we propose a feature space data set balancing approach, which allows reducing the data set for training and improves the performance of the classifiers. By comparing performance, time, and memory consumption of our approach to those of state-of-the-art algorithms (BERTax and MMseqs2) using several datasets, we show that our approach improves the classification on the genus level and achieves comparable results for the superkingdom and phylum level. Link: https://es-cloud.cs.uni-tuebingen.de/d/8e2ab8c3fdd444e1a135/?p=%2FTaxonomyClassification&mode=list
0709.2031
Cheong Xin Chan
Cheong Xin Chan, Robert G. Beiko and Mark A. Ragan
Genetic transfer in Staphylococcus: a case study of 13 genomes
34 pages, 7 figures, 4 tables
null
null
null
q-bio.GN q-bio.PE
null
The widespread presence of antibiotic resistance and virulence among Staphylococcus isolates has been attributed to lateral genetic transfer (LGT) between different strains or species. However, there has been very little study of the extent of LGT in Staphylococcus species using a phylogenetic approach, particularly of the units of such genetic transfer. Here we report the first systematic study of the units of genetic transfer in 13 Staphylococcus genomes, using a rigorous phylogenetic approach. We found clear evidence of LGT in 26.1% of the 1354 homologous gene families examined, and possibly more in another 17.9% of the total families. Within-gene and whole-gene transfer contribute almost equally to the discordance of these gene families against a reference phylogeny. Comparing genetic transfer in single-copy and in multi-copy gene families, we found little functional bias in cases of within-gene (fragmentary) genetic transfer but substantial functional bias in cases of whole-gene (non-fragmentary) genetic transfer, and we observed a higher frequency of LGT in multi-copy gene families. Our results demonstrate that LGT and gene duplication play an important part among the factors that contribute to functional innovation in staphylococcal genomes.
[ { "created": "Thu, 13 Sep 2007 12:24:53 GMT", "version": "v1" }, { "created": "Tue, 29 Jan 2008 10:24:31 GMT", "version": "v2" } ]
2008-01-29
[ [ "Chan", "Cheong Xin", "" ], [ "Beiko", "Robert G.", "" ], [ "Ragan", "Mark A.", "" ] ]
The widespread presence of antibiotic resistance and virulence among Staphylococcus isolates has been attributed to lateral genetic transfer (LGT) between different strains or species. However, there has been very little study of the extent of LGT in Staphylococcus species using a phylogenetic approach, particularly of the units of such genetic transfer. Here we report the first systematic study of the units of genetic transfer in 13 Staphylococcus genomes, using a rigorous phylogenetic approach. We found clear evidence of LGT in 26.1% of the 1354 homologous gene families examined, and possibly more in another 17.9% of the total families. Within-gene and whole-gene transfer contribute almost equally to the discordance of these gene families against a reference phylogeny. Comparing genetic transfer in single-copy and in multi-copy gene families, we found little functional bias in cases of within-gene (fragmentary) genetic transfer but substantial functional bias in cases of whole-gene (non-fragmentary) genetic transfer, and we observed a higher frequency of LGT in multi-copy gene families. Our results demonstrate that LGT and gene duplication play an important part among the factors that contribute to functional innovation in staphylococcal genomes.
1404.5516
Yi Ming Zou
Yi Ming Zou
Boolean Networks with Multi-Expressions and Parameters
A version of this paper appeared in IEEE Transactions on Computational Biology and Bioinformatics
IEEE Transactions on Computational Biology and Bioinformatics, 10(3) (2013), 584 - 592
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To model biological systems using networks, it is desirable to allow more than two levels of expression for the nodes and to allow the introduction of parameters. Various modeling and simulation methods addressing these needs using Boolean models, both synchronous and asynchronous, have been proposed in the literature. However, analytical study of these more general Boolean networks models is lagging. This paper aims to develop a concise theory for these different Boolean logic based modeling methods. Boolean models for networks where each node can have more than two levels of expression and Boolean models with parameters are defined algebraically with examples provided. Certain classes of random asynchronous Boolean networks and deterministic moduli asynchronous Boolean networks are investigated in detail using the setting introduced in this paper. The derived theorems provide a clear picture for the attractor structures of these asynchronous Boolean networks.
[ { "created": "Tue, 22 Apr 2014 14:48:45 GMT", "version": "v1" } ]
2014-04-23
[ [ "Zou", "Yi Ming", "" ] ]
To model biological systems using networks, it is desirable to allow more than two levels of expression for the nodes and to allow the introduction of parameters. Various modeling and simulation methods addressing these needs using Boolean models, both synchronous and asynchronous, have been proposed in the literature. However, analytical study of these more general Boolean networks models is lagging. This paper aims to develop a concise theory for these different Boolean logic based modeling methods. Boolean models for networks where each node can have more than two levels of expression and Boolean models with parameters are defined algebraically with examples provided. Certain classes of random asynchronous Boolean networks and deterministic moduli asynchronous Boolean networks are investigated in detail using the setting introduced in this paper. The derived theorems provide a clear picture for the attractor structures of these asynchronous Boolean networks.
q-bio/0502008
Axel Brandenburg
A. Brandenburg, A.C. Andersen, M. Nilsson
Dissociation in a polymerization model of homochirality
16 pages, 6 figures, submitted to Orig. Life Evol. Biosph
Orig. Life Evol. Biosph. 35, 507-521 (2005)
10.1007/s11084-005-5757-y
NORDITA-2005-10
q-bio.BM cond-mat.other q-bio.OT
null
A fully self-contained model of homochirality is presented that contains the effects of both polymerization and dissociation. The dissociation fragments are assumed to replenish the substrate from which new monomers can grow and undergo new polymerization. The mean length of isotactic polymers is found to grow slowly with the normalized total number of corresponding building blocks. Alternatively, if one assumes that the dissociation fragments themselves can polymerize further, then this corresponds to a strong source of short polymers, and an unrealistically short average length of only 3. By contrast, without dissociation, isotactic polymers become infinitely long.
[ { "created": "Tue, 8 Feb 2005 07:57:00 GMT", "version": "v1" } ]
2007-05-23
[ [ "Brandenburg", "A.", "" ], [ "Andersen", "A. C.", "" ], [ "Nilsson", "M.", "" ] ]
A fully self-contained model of homochirality is presented that contains the effects of both polymerization and dissociation. The dissociation fragments are assumed to replenish the substrate from which new monomers can grow and undergo new polymerization. The mean length of isotactic polymers is found to grow slowly with the normalized total number of corresponding building blocks. Alternatively, if one assumes that the dissociation fragments themselves can polymerize further, then this corresponds to a strong source of short polymers, and an unrealistically short average length of only 3. By contrast, without dissociation, isotactic polymers become infinitely long.
2105.02805
Christoforos Papasavvas
Christoforos Papasavvas, Peter Neal Taylor, Yujiang Wang
Long-term changes in functional connectivity predict responses to intracranial stimulation of the human brain
Article, 7 figures
null
null
null
q-bio.NC q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Targeted electrical stimulation of the brain perturbs neural networks and modulates their rhythmic activity both at the site of stimulation and at remote brain regions. Understanding, or even predicting, this neuromodulatory effect is crucial for any therapeutic use of brain stimulation. To this end, we analyzed the stimulation responses in 131 stimulation sessions across 66 patients with focal epilepsy recorded through intracranial EEG (iEEG). We considered functional and structural connectivity features as predictors of the response at every iEEG contact. Taking advantage of multiple recordings over days, we also investigated how slow changes in interictal functional connectivity (FC) ahead of the stimulation relate to stimulation responses. The results reveal that, indeed, this long-term variability of FC exhibits strong association with the stimulation-induced increases in delta and theta band power. Furthermore, we show through cross-validation that long-term variability of FC improves prediction of responses above the performance of spatial predictors alone. These findings can enhance the patient-specific design of effective neuromodulatory protocols for therapeutic interventions.
[ { "created": "Thu, 6 May 2021 16:51:08 GMT", "version": "v1" }, { "created": "Mon, 28 Jun 2021 09:46:13 GMT", "version": "v2" } ]
2021-06-29
[ [ "Papasavvas", "Christoforos", "" ], [ "Taylor", "Peter Neal", "" ], [ "Wang", "Yujiang", "" ] ]
Targeted electrical stimulation of the brain perturbs neural networks and modulates their rhythmic activity both at the site of stimulation and at remote brain regions. Understanding, or even predicting, this neuromodulatory effect is crucial for any therapeutic use of brain stimulation. To this end, we analyzed the stimulation responses in 131 stimulation sessions across 66 patients with focal epilepsy recorded through intracranial EEG (iEEG). We considered functional and structural connectivity features as predictors of the response at every iEEG contact. Taking advantage of multiple recordings over days, we also investigated how slow changes in interictal functional connectivity (FC) ahead of the stimulation relate to stimulation responses. The results reveal that, indeed, this long-term variability of FC exhibits strong association with the stimulation-induced increases in delta and theta band power. Furthermore, we show through cross-validation that long-term variability of FC improves prediction of responses above the performance of spatial predictors alone. These findings can enhance the patient-specific design of effective neuromodulatory protocols for therapeutic interventions.
1704.02406
Petter Holme
Petter Holme, Luis E C Rocha
Impact of misinformation in temporal network epidemiology
null
Net Sci 7 (2019) 52-69
10.1017/nws.2018.28
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the impact of misinformation about the contact structure on the ability to predict disease outbreaks. We base our study on 31 empirical temporal networks and tune the frequencies of errors in the node identities or timestamps of contacts. We find that for both these spreading scenarios, the maximal misprediction of both the outbreak size and time to extinction follows a stretched exponential convergence as a function of the error frequency. We furthermore determine the temporal-network structural factors influencing the parameters of this convergence.
[ { "created": "Sat, 8 Apr 2017 00:12:10 GMT", "version": "v1" }, { "created": "Tue, 27 Jun 2017 01:56:01 GMT", "version": "v2" } ]
2019-05-01
[ [ "Holme", "Petter", "" ], [ "Rocha", "Luis E C", "" ] ]
We investigate the impact of misinformation about the contact structure on the ability to predict disease outbreaks. We base our study on 31 empirical temporal networks and tune the frequencies of errors in the node identities or timestamps of contacts. We find that for both these spreading scenarios, the maximal misprediction of both the outbreak size and time to extinction follows a stretched exponential convergence as a function of the error frequency. We furthermore determine the temporal-network structural factors influencing the parameters of this convergence.
2301.10914
Chen-Gia Tsai
Chia-Wei Li, Tzu-Han Cheng, Chen-Gia Tsai
Music Enhances Activity in the Hypothalamus, Brainstem, and Anterior Cerebellum during Script-Driven Imagery of Affective Scenes
25 pages, 5 figures
Neuropsychologia 133, 107073. Epub 2019 Apr 24
10.1016/j.neuropsychologia.2019.04.014
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Music is frequently used to establish atmosphere and to enhance/alter emotion in dramas and films. During music listening, visual imagery is a common mechanism underlying emotion induction. The present functional magnetic resonance imaging (fMRI) study examined the neural substrates of the emotional processing of music and imagined scene. A factorial design was used with factors emotion valence (positive; negative) and music (withoutMUSIC: script-driven imagery of emotional scenes; withMUSIC: script-driven imagery of emotional scenes and simultaneously listening to affectively congruent music). The baseline condition was imagery of neutral scenes in the absence of music. Eleven females and five males participated in this fMRI study. The contrasts of positive and negative withoutMUSIC conditions minus the baseline (imagery of neutral scenes) showed no significant activation. When comparing the withMUSIC to withoutMUSIC conditions, activity in a number of emotion-related regions was observed, including the temporal pole (TP), amygdala, hippocampus, hypothalamus, anterior ventral tegmental area (VTA), locus coeruleus, and anterior cerebellum. We hypothesized that the TP may integrate music and the imagined scene to extract socioemotional significance, initiating the subcortical structures to generate subjective feelings and bodily responses. For the withMUSIC conditions, negative emotions were associated with enhanced activation in the posterior VTA compared to positive emotions. Our findings replicated and extended previous research which suggests that different subregions of the VTA are sensitive to rewarding and aversive stimuli. Taken together, this study suggests that emotional music embedded in an imagined scenario is a salient social signal that prompts preparation of approach/avoidance behaviours and emotional responses in listeners.
[ { "created": "Thu, 26 Jan 2023 03:01:47 GMT", "version": "v1" } ]
2023-01-27
[ [ "Li", "Chia-Wei", "" ], [ "Cheng", "Tzu-Han", "" ], [ "Tsai", "Chen-Gia", "" ] ]
Music is frequently used to establish atmosphere and to enhance/alter emotion in dramas and films. During music listening, visual imagery is a common mechanism underlying emotion induction. The present functional magnetic resonance imaging (fMRI) study examined the neural substrates of the emotional processing of music and imagined scene. A factorial design was used with factors emotion valence (positive; negative) and music (withoutMUSIC: script-driven imagery of emotional scenes; withMUSIC: script-driven imagery of emotional scenes and simultaneously listening to affectively congruent music). The baseline condition was imagery of neutral scenes in the absence of music. Eleven females and five males participated in this fMRI study. The contrasts of positive and negative withoutMUSIC conditions minus the baseline (imagery of neutral scenes) showed no significant activation. When comparing the withMUSIC to withoutMUSIC conditions, activity in a number of emotion-related regions was observed, including the temporal pole (TP), amygdala, hippocampus, hypothalamus, anterior ventral tegmental area (VTA), locus coeruleus, and anterior cerebellum. We hypothesized that the TP may integrate music and the imagined scene to extract socioemotional significance, initiating the subcortical structures to generate subjective feelings and bodily responses. For the withMUSIC conditions, negative emotions were associated with enhanced activation in the posterior VTA compared to positive emotions. Our findings replicated and extended previous research which suggests that different subregions of the VTA are sensitive to rewarding and aversive stimuli. Taken together, this study suggests that emotional music embedded in an imagined scenario is a salient social signal that prompts preparation of approach/avoidance behaviours and emotional responses in listeners.
2103.06027
Mattia Zanella
M. Zanella, C. Bardelli, G. Dimarco, S. Deandrea, P. Perotti, M. Azzi, S. Figini, G. Toscani
A data-driven epidemic model with social structure for understanding the COVID-19 infection on a heavily affected Italian Province
null
null
null
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, using a detailed dataset furnished by National Health Authorities concerning the Province of Pavia (Lombardy, Italy), we propose to determine the essential features of the ongoing COVID-19 pandemic in terms of contact dynamics. Our contribution is devoted to providing a possible planning of the needs of medical infrastructures in the Pavia Province and to suggesting different scenarios for the vaccination campaign which may help in reducing the fatalities and/or the number of infected in the population. The proposed research combines a new mathematical description of the spread of an infectious disease, which takes into account both age and average daily social contacts, with a detailed analysis of the dataset of all traced infected individuals in the Province of Pavia. This information is used to develop a data-driven model in which calibration and feeding of the model are extensively used. The epidemiological evolution is obtained by relying on an approach based on statistical mechanics. This leads to studying the evolution over time of a system of probability distributions characterizing the age and social contacts of the population. One of the main outcomes shows that, as expected, the spread of the disease is closely related to the mean number of contacts of individuals. The model permits forecasting, through an uncertainty quantification approach and over a short time horizon, the average number and the confidence bands of expected hospitalized patients classified by age, and testing different options for an effective vaccination campaign with age-decreasing priority.
[ { "created": "Wed, 10 Mar 2021 12:59:06 GMT", "version": "v1" }, { "created": "Thu, 1 Jul 2021 19:50:36 GMT", "version": "v2" } ]
2021-07-05
[ [ "Zanella", "M.", "" ], [ "Bardelli", "C.", "" ], [ "Dimarco", "G.", "" ], [ "Deandrea", "S.", "" ], [ "Perotti", "P.", "" ], [ "Azzi", "M.", "" ], [ "Figini", "S.", "" ], [ "Toscani", "G.", "" ] ]
In this work, using a detailed dataset furnished by the National Health Authorities concerning the Province of Pavia (Lombardy, Italy), we propose to determine the essential features of the ongoing COVID-19 pandemic in terms of contact dynamics. Our contribution is devoted to providing a possible planning of the needs of medical infrastructures in the Pavia Province and to suggesting different scenarios for the vaccination campaign which could help reduce the fatalities and/or the number of infected in the population. The proposed research combines a new mathematical description of the spread of an infectious disease, which takes into account both age and average daily social contacts, with a detailed analysis of the dataset of all traced infected individuals in the Province of Pavia. This information is used to develop a data-driven model in which calibration and feeding of the model are extensively used. The epidemiological evolution is obtained by relying on an approach based on statistical mechanics. This leads to studying the evolution over time of a system of probability distributions characterizing the age and social contacts of the population. One of the main outcomes shows that, as expected, the spread of the disease is closely related to the mean number of contacts of individuals. Thanks to an uncertainty quantification approach, the model permits forecasting, over a short time horizon, the average number and the confidence bands of expected hospitalizations classified by age, and testing different options for an effective vaccination campaign with age-decreasing priority.
1708.05070
Randal Olson
Randal S. Olson, William La Cava, Zairah Mustahsan, Akshay Varik, Jason H. Moore
Data-driven Advice for Applying Machine Learning to Bioinformatics Problems
12 pages, 5 figures, 4 tables. To be published in the proceedings of PSB 2018. Randal S. Olson and William La Cava contributed equally as co-first authors
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the bioinformatics field grows, it must keep pace not only with new data but with new algorithms. Here we contribute a thorough analysis of 13 state-of-the-art, commonly used machine learning algorithms on a set of 165 publicly available classification problems in order to provide data-driven algorithm recommendations to current researchers. We present a number of statistical and visual comparisons of algorithm performance and quantify the effect of model selection and algorithm tuning for each algorithm and dataset. The analysis culminates in the recommendation of five algorithms with hyperparameters that maximize classifier performance across the tested problems, as well as general guidelines for applying machine learning to supervised classification problems.
[ { "created": "Tue, 8 Aug 2017 21:41:48 GMT", "version": "v1" }, { "created": "Sun, 7 Jan 2018 19:08:53 GMT", "version": "v2" } ]
2018-01-09
[ [ "Olson", "Randal S.", "" ], [ "La Cava", "William", "" ], [ "Mustahsan", "Zairah", "" ], [ "Varik", "Akshay", "" ], [ "Moore", "Jason H.", "" ] ]
As the bioinformatics field grows, it must keep pace not only with new data but with new algorithms. Here we contribute a thorough analysis of 13 state-of-the-art, commonly used machine learning algorithms on a set of 165 publicly available classification problems in order to provide data-driven algorithm recommendations to current researchers. We present a number of statistical and visual comparisons of algorithm performance and quantify the effect of model selection and algorithm tuning for each algorithm and dataset. The analysis culminates in the recommendation of five algorithms with hyperparameters that maximize classifier performance across the tested problems, as well as general guidelines for applying machine learning to supervised classification problems.
0902.3146
Claude Pasquier
Claude Pasquier, Vasilis Promponas, Stavros Hamodrakas
PRED-CLASS: cascading neural networks for generalized protein classification and genome-wide applications
null
Proteins Structure Function and Bioinformatics 44, 3 (2001) 361-9
10.1002/prot.1101
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A cascading system of hierarchical, artificial neural networks (named PRED-CLASS) is presented for the generalized classification of proteins into four distinct classes-transmembrane, fibrous, globular, and mixed-from information solely encoded in their amino acid sequences. The architecture of the individual component networks is kept very simple, reducing the number of free parameters (network synaptic weights) for faster training, improved generalization, and the avoidance of data overfitting. Capturing information from as few as 50 protein sequences spread among the four target classes (6 transmembrane, 10 fibrous, 13 globular, and 17 mixed), PRED-CLASS was able to obtain 371 correct predictions out of a set of 387 proteins (success rate approximately 96%) unambiguously assigned into one of the target classes. The application of PRED-CLASS to several test sets and complete proteomes of several organisms demonstrates that such a method could serve as a valuable tool in the annotation of genomic open reading frames with no functional assignment or as a preliminary step in fold recognition and ab initio structure prediction methods. Detailed results obtained for various data sets and completed genomes, along with a web server running the PRED-CLASS algorithm, can be accessed over the World Wide Web at http://o2.biol.uoa.gr/PRED-CLASS
[ { "created": "Wed, 18 Feb 2009 14:03:49 GMT", "version": "v1" } ]
2009-02-19
[ [ "Pasquier", "Claude", "" ], [ "Promponas", "Vasilis", "" ], [ "Hamodrakas", "Stavros", "" ] ]
A cascading system of hierarchical, artificial neural networks (named PRED-CLASS) is presented for the generalized classification of proteins into four distinct classes-transmembrane, fibrous, globular, and mixed-from information solely encoded in their amino acid sequences. The architecture of the individual component networks is kept very simple, reducing the number of free parameters (network synaptic weights) for faster training, improved generalization, and the avoidance of data overfitting. Capturing information from as few as 50 protein sequences spread among the four target classes (6 transmembrane, 10 fibrous, 13 globular, and 17 mixed), PRED-CLASS was able to obtain 371 correct predictions out of a set of 387 proteins (success rate approximately 96%) unambiguously assigned into one of the target classes. The application of PRED-CLASS to several test sets and complete proteomes of several organisms demonstrates that such a method could serve as a valuable tool in the annotation of genomic open reading frames with no functional assignment or as a preliminary step in fold recognition and ab initio structure prediction methods. Detailed results obtained for various data sets and completed genomes, along with a web server running the PRED-CLASS algorithm, can be accessed over the World Wide Web at http://o2.biol.uoa.gr/PRED-CLASS
1102.5295
Ulrich S. Schwarz
Achim Besser (1), Julien Colombelli (2), Ernst H. K. Stelzer (2) and Ulrich S. Schwarz (1) ((1) Heidelberg University, (2) EMBL Heidelberg)
Viscoelastic response of contractile filament bundles
Revtex with 24 pages, 7 Postscript figures included, accepted for publication in Phys. Rev. E
null
10.1103/PhysRevE.83.051902
null
q-bio.SC cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The actin cytoskeleton of adherent tissue cells often condenses into filament bundles contracted by myosin motors, so-called stress fibers, which play a crucial role in the mechanical interaction of cells with their environment. Stress fibers are usually attached to their environment at the endpoints, but possibly also along their whole length. We introduce a theoretical model for such contractile filament bundles which combines passive viscoelasticity with active contractility. The model equations are solved analytically for two different types of boundary conditions. A free boundary corresponds to stress fiber contraction dynamics after laser surgery and results in good agreement with experimental data. Imposing cyclically varying boundary forces allows us to calculate the complex modulus of a single stress fiber.
[ { "created": "Fri, 25 Feb 2011 17:44:07 GMT", "version": "v1" } ]
2015-05-27
[ [ "Besser", "Achim", "", "Heidelberg University" ], [ "Colombelli", "Julien", "", "EMBL Heidelberg" ], [ "Stelzer", "Ernst H. K.", "", "EMBL Heidelberg" ], [ "Schwarz", "Ulrich S.", "", "Heidelberg University" ] ]
The actin cytoskeleton of adherent tissue cells often condenses into filament bundles contracted by myosin motors, so-called stress fibers, which play a crucial role in the mechanical interaction of cells with their environment. Stress fibers are usually attached to their environment at the endpoints, but possibly also along their whole length. We introduce a theoretical model for such contractile filament bundles which combines passive viscoelasticity with active contractility. The model equations are solved analytically for two different types of boundary conditions. A free boundary corresponds to stress fiber contraction dynamics after laser surgery and results in good agreement with experimental data. Imposing cyclically varying boundary forces allows us to calculate the complex modulus of a single stress fiber.
2003.12670
Michael Taynnan Barros
Michael Taynnan Barros, Harun Siljak, Peter Mullen, Constantinos Papadias, Jari Hyttinen and Nicola Marchetti
Objective Multi-variable Classification and Inference of Biological Neuronal Networks
null
null
null
null
q-bio.NC cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classification of biological neuron types and networks poses challenges to the full understanding of the brain's organisation and functioning. In this paper, we develop a novel objective classification model of biological neuronal types and networks based on the communication metrics of neurons. This presents advantages over existing approaches, since the mutual information or the delay between neurons obtained from spike trains constitute more abundant data compared to conventional morphological data. We first designed two open-access supporting computational platforms of various neuronal circuits from the Blue Brain Project realistic models, named Neurpy and Neurgen. Then we investigated how the concept of network tomography could be applied to cortical neuronal circuits for morphological, topological and electrical classification of neurons. We fed the simulated data to many different classifiers (including SVM, Decision Trees, Random Forest, and Artificial Neural Networks), classifying the specific cell type (and sub-group types) and achieving accuracies of up to 70\%. Inference of biological network structures using network tomography reached up to 65\% accuracy. We also analysed the recall, precision and F1 score of the classification of five layers, 25 cell m-types, and 14 cell e-types. Our research not only contributes to existing classification efforts but sets the roadmap for future usage of cellular-scaled brain-machine interfaces for in-vivo objective classification of neurons as a sensing mechanism of the brain's structure.
[ { "created": "Sat, 28 Mar 2020 00:25:49 GMT", "version": "v1" } ]
2020-03-31
[ [ "Barros", "Michael Taynnan", "" ], [ "Siljak", "Harun", "" ], [ "Mullen", "Peter", "" ], [ "Papadias", "Constantinos", "" ], [ "Hyttinen", "Jari", "" ], [ "Marchetti", "Nicola", "" ] ]
Classification of biological neuron types and networks poses challenges to the full understanding of the brain's organisation and functioning. In this paper, we develop a novel objective classification model of biological neuronal types and networks based on the communication metrics of neurons. This presents advantages over existing approaches, since the mutual information or the delay between neurons obtained from spike trains constitute more abundant data compared to conventional morphological data. We first designed two open-access supporting computational platforms of various neuronal circuits from the Blue Brain Project realistic models, named Neurpy and Neurgen. Then we investigated how the concept of network tomography could be applied to cortical neuronal circuits for morphological, topological and electrical classification of neurons. We fed the simulated data to many different classifiers (including SVM, Decision Trees, Random Forest, and Artificial Neural Networks), classifying the specific cell type (and sub-group types) and achieving accuracies of up to 70\%. Inference of biological network structures using network tomography reached up to 65\% accuracy. We also analysed the recall, precision and F1 score of the classification of five layers, 25 cell m-types, and 14 cell e-types. Our research not only contributes to existing classification efforts but sets the roadmap for future usage of cellular-scaled brain-machine interfaces for in-vivo objective classification of neurons as a sensing mechanism of the brain's structure.
0807.3038
Jan Biro Dr
Jan C Biro
Design and Production of Specifically and with High Affinity Reacting Peptides
18 pages including 7 figures
null
null
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: A partially random target selection method was developed to design and produce affinity reagents (targets) to any protein query. It is based on the recent concept of the Proteomic Code (for review see Biro, 2007 [1]), which suggests that a significant number of amino acids in specifically interacting proteins are coded by partially complementary codons. This means that the 1st and 3rd residues of codons coding many co-locating amino acids are complementary, while the 2nd may, but need not, be complementary: e.g. the 5'-AXG-3'/3'-CXT-5' codon pair, where X is any nucleotide. Results: A mixture of 45-residue-long, reverse, partially complementary oligonucleotide sequences (target pool) was synthesized against selected epitopes of query mRNA sequences. The 2nd codon residues were randomized. The target oligonucleotide pool was inserted into vectors, expressed, and the protein products were screened for affinity to the query in a Bacterial Two-Hybrid System. The best clones were used for larger-scale protein syntheses and characterization. It was possible to design and produce specifically and with high affinity reacting (Kd: 100 nM) oligopeptide reagents to GAL4 query oligopeptides. Conclusions: Second codon residue randomization is a promising method to design and produce affinity peptides to any protein sequence. The method has the potential to be a rapid, inexpensive, high-throughput, non-immunoglobulin-based alternative to current in vivo antibody generating procedures.
[ { "created": "Sun, 20 Jul 2008 23:05:53 GMT", "version": "v1" } ]
2008-07-22
[ [ "Biro", "Jan C", "" ] ]
Background: A partially random target selection method was developed to design and produce affinity reagents (targets) to any protein query. It is based on the recent concept of the Proteomic Code (for review see Biro, 2007 [1]), which suggests that a significant number of amino acids in specifically interacting proteins are coded by partially complementary codons. This means that the 1st and 3rd residues of codons coding many co-locating amino acids are complementary, while the 2nd may, but need not, be complementary: e.g. the 5'-AXG-3'/3'-CXT-5' codon pair, where X is any nucleotide. Results: A mixture of 45-residue-long, reverse, partially complementary oligonucleotide sequences (target pool) was synthesized against selected epitopes of query mRNA sequences. The 2nd codon residues were randomized. The target oligonucleotide pool was inserted into vectors, expressed, and the protein products were screened for affinity to the query in a Bacterial Two-Hybrid System. The best clones were used for larger-scale protein syntheses and characterization. It was possible to design and produce specifically and with high affinity reacting (Kd: 100 nM) oligopeptide reagents to GAL4 query oligopeptides. Conclusions: Second codon residue randomization is a promising method to design and produce affinity peptides to any protein sequence. The method has the potential to be a rapid, inexpensive, high-throughput, non-immunoglobulin-based alternative to current in vivo antibody generating procedures.
2007.01411
Gregory Kozyreff
Gregory Kozyreff
Hospitalization dynamics during the first COVID-19 pandemic wave: SIR modelling compared to Belgium, France, Italy, Switzerland and New York City data
null
Infectious Disease Modelling, Vol. 6, 2021, Pages 398-404
10.1016/j.idm.2021.01.006
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using the classical Susceptible-Infected-Recovered epidemiological model, an analytical formula is derived for the number of beds occupied by Covid-19 patients. The analytical curve is fitted to data in Belgium, France, New York City and Switzerland, with a correlation coefficient exceeding 98.8%, suggesting that finer models are unnecessary with such macroscopic data. The fitting is used to extract estimates of the doubling time in the ascending phase of the epidemic, the mean recovery time and, for those who require medical intervention, the mean hospitalization time. Large variations can be observed among different outbreaks.
[ { "created": "Thu, 2 Jul 2020 22:12:17 GMT", "version": "v1" }, { "created": "Fri, 19 Feb 2021 13:47:46 GMT", "version": "v2" } ]
2021-02-22
[ [ "Kozyreff", "Gregory", "" ] ]
Using the classical Susceptible-Infected-Recovered epidemiological model, an analytical formula is derived for the number of beds occupied by Covid-19 patients. The analytical curve is fitted to data in Belgium, France, New York City and Switzerland, with a correlation coefficient exceeding 98.8%, suggesting that finer models are unnecessary with such macroscopic data. The fitting is used to extract estimates of the doubling time in the ascending phase of the epidemic, the mean recovery time and, for those who require medical intervention, the mean hospitalization time. Large variations can be observed among different outbreaks.
1707.08774
Stephen Rush PhD
Pavel Petrov, Stephen T Rush, Zhichun Zhai, Christine H Lee, Peter T Kim, and Giseon Heo
Topological Data Analysis of Clostridioides difficile Infection and Fecal Microbiota Transplantation
20 pages, 8 figures
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational topologists recently developed a method, called persistent homology, to analyze data presented in terms of similarity or dissimilarity. Indeed, persistent homology studies the evolution of topological features in terms of a single index, and is able to capture higher-order features beyond the usual clustering techniques. There are three descriptive statistics of persistent homology, namely the barcode, the persistence diagram and, more recently, the persistence landscape. The persistence landscape is useful for statistical inference as it belongs to a space of $p$-integrable functions, a separable Banach space. We apply tools from both computational topology and statistics to DNA sequences taken from Clostridioides difficile infected patients treated with an experimental fecal microbiota transplantation. Our statistical and topological data analyses are able to detect interesting patterns among patients and donors. They also provide visualization of DNA sequences in the form of clusters and loops.
[ { "created": "Thu, 27 Jul 2017 08:28:15 GMT", "version": "v1" }, { "created": "Mon, 31 Jul 2017 07:09:27 GMT", "version": "v2" } ]
2017-08-01
[ [ "Petrov", "Pavel", "" ], [ "Rush", "Stephen T", "" ], [ "Zhai", "Zhichun", "" ], [ "Lee", "Christine H", "" ], [ "Kim", "Peter T", "" ], [ "Heo", "Giseon", "" ] ]
Computational topologists recently developed a method, called persistent homology, to analyze data presented in terms of similarity or dissimilarity. Indeed, persistent homology studies the evolution of topological features in terms of a single index, and is able to capture higher-order features beyond the usual clustering techniques. There are three descriptive statistics of persistent homology, namely the barcode, the persistence diagram and, more recently, the persistence landscape. The persistence landscape is useful for statistical inference as it belongs to a space of $p$-integrable functions, a separable Banach space. We apply tools from both computational topology and statistics to DNA sequences taken from Clostridioides difficile infected patients treated with an experimental fecal microbiota transplantation. Our statistical and topological data analyses are able to detect interesting patterns among patients and donors. They also provide visualization of DNA sequences in the form of clusters and loops.
2309.07671
Giovanni Di Liberto
Giovanni M. Di Liberto, Aaron Nidiffer, Michael J. Crosse, Nathaniel Zuk, Stephanie Haro, Giorgia Cantisani, Martin M. Winchester, Aoife Igoe, Ross McCrann, Satwik Chandra, Edmund C. Lalor, Giacomo Baruzzo
A standardised open science framework for sharing and re-analysing neural data acquired to continuous stimuli
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Neurophysiology research has demonstrated that it is possible and valuable to investigate sensory processing in scenarios involving continuous sensory streams, such as speech and music. Over the past 10 years or so, novel analytic frameworks combined with the growing participation in data sharing have led to a surge of publicly available datasets involving continuous sensory experiments. However, open science efforts in this domain of research remain scattered, lacking a cohesive set of guidelines. This paper presents an end-to-end open science framework for the storage, analysis, sharing, and re-analysis of neural data recorded during continuous sensory experiments. The framework has been designed to interface easily with existing toolboxes, such as EelBrain, NapLib, MNE, and the mTRF-Toolbox. We present guidelines by taking both the user view (how to rapidly re-analyse existing data) and the experimenter view (how to store, analyse, and share), making the process as straightforward and accessible as possible for all users. Additionally, we introduce a web-based data browser that enables the effortless replication of published results and data re-analysis.
[ { "created": "Thu, 14 Sep 2023 12:34:34 GMT", "version": "v1" }, { "created": "Tue, 19 Sep 2023 17:03:31 GMT", "version": "v2" }, { "created": "Tue, 13 Feb 2024 20:10:11 GMT", "version": "v3" } ]
2024-02-15
[ [ "Di Liberto", "Giovanni M.", "" ], [ "Nidiffer", "Aaron", "" ], [ "Crosse", "Michael J.", "" ], [ "Zuk", "Nathaniel", "" ], [ "Haro", "Stephanie", "" ], [ "Cantisani", "Giorgia", "" ], [ "Winchester", "Martin M.", "" ], [ "Igoe", "Aoife", "" ], [ "McCrann", "Ross", "" ], [ "Chandra", "Satwik", "" ], [ "Lalor", "Edmund C.", "" ], [ "Baruzzo", "Giacomo", "" ] ]
Neurophysiology research has demonstrated that it is possible and valuable to investigate sensory processing in scenarios involving continuous sensory streams, such as speech and music. Over the past 10 years or so, novel analytic frameworks combined with the growing participation in data sharing have led to a surge of publicly available datasets involving continuous sensory experiments. However, open science efforts in this domain of research remain scattered, lacking a cohesive set of guidelines. This paper presents an end-to-end open science framework for the storage, analysis, sharing, and re-analysis of neural data recorded during continuous sensory experiments. The framework has been designed to interface easily with existing toolboxes, such as EelBrain, NapLib, MNE, and the mTRF-Toolbox. We present guidelines by taking both the user view (how to rapidly re-analyse existing data) and the experimenter view (how to store, analyse, and share), making the process as straightforward and accessible as possible for all users. Additionally, we introduce a web-based data browser that enables the effortless replication of published results and data re-analysis.
1403.1244
Robin Allaby
Robin G. Allaby, Dorian Q. Fuller, James L. Kitchen
The limits of selection under plant domestication
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Plant domestication involved a process of selection through human agency of a series of traits collectively termed the domestication syndrome. Current debate concerns the pace at which domesticated plants emerged from cultivated wild populations and how many genes were involved. Here we present simulations that test how many genes could have been involved by considering the cost of selection. We demonstrate that the selection load that can be endured by populations increases with decreasing selection coefficients and greater numbers of loci, down to values of about s = 0.005, causing a driving force that increases the number of loci under selection. As the number of loci under selection increases, an effect of co-selection increases, resulting in individual unlinked loci being fixed more rapidly in out-crossing populations and representing a second driving force to increase the number of loci under selection. In inbreeding systems, co-selection results in interference and reduced rates of fixation but does not reduce the size of the selection load that can be endured. These driving forces result in an optimum pace of genome evolution in which 50-100 loci are the most that could be under selection in a cultivation regime. Furthermore, the simulations do not preclude the existence of selective sweeps but demonstrate that they come at a cost to the selection load that can be endured and consequently a reduction in the capacity of plants to adapt to new environments, which may contribute to explaining why selective sweeps have been so rarely detected in genome studies.
[ { "created": "Wed, 5 Mar 2014 20:33:48 GMT", "version": "v1" } ]
2014-03-06
[ [ "Allaby", "Robin G.", "" ], [ "Fuller", "Dorian Q.", "" ], [ "Kitchen", "James L.", "" ] ]
Plant domestication involved a process of selection through human agency of a series of traits collectively termed the domestication syndrome. Current debate concerns the pace at which domesticated plants emerged from cultivated wild populations and how many genes were involved. Here we present simulations that test how many genes could have been involved by considering the cost of selection. We demonstrate that the selection load that can be endured by populations increases with decreasing selection coefficients and greater numbers of loci, down to values of about s = 0.005, causing a driving force that increases the number of loci under selection. As the number of loci under selection increases, an effect of co-selection increases, resulting in individual unlinked loci being fixed more rapidly in out-crossing populations and representing a second driving force to increase the number of loci under selection. In inbreeding systems, co-selection results in interference and reduced rates of fixation but does not reduce the size of the selection load that can be endured. These driving forces result in an optimum pace of genome evolution in which 50-100 loci are the most that could be under selection in a cultivation regime. Furthermore, the simulations do not preclude the existence of selective sweeps but demonstrate that they come at a cost to the selection load that can be endured and consequently a reduction in the capacity of plants to adapt to new environments, which may contribute to explaining why selective sweeps have been so rarely detected in genome studies.
1309.7275
Stefano Allesina
Stefano Allesina, Elizabeth Sander, Matthew J. Smith, Si Tang
Superelliptical laws for complex networks
28 pages, 16 figures, 2 tables
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
All dynamical systems of biological interest--be they food webs, regulation of genes, or contacts between healthy and infectious individuals--have complex network structure. Wigner's semicircular law and Girko's circular law describe the eigenvalues of systems whose structure is a fully connected network. However, these laws fail for systems with complex network structure. Here we show that in these cases the eigenvalues are described by superellipses. We also develop a new method to analytically estimate the dominant eigenvalue of complex networks.
[ { "created": "Thu, 26 Sep 2013 16:37:35 GMT", "version": "v1" }, { "created": "Thu, 7 Nov 2013 17:58:30 GMT", "version": "v2" } ]
2013-11-08
[ [ "Allesina", "Stefano", "" ], [ "Sander", "Elizabeth", "" ], [ "Smith", "Matthew J.", "" ], [ "Tang", "Si", "" ] ]
All dynamical systems of biological interest--be they food webs, regulation of genes, or contacts between healthy and infectious individuals--have complex network structure. Wigner's semicircular law and Girko's circular law describe the eigenvalues of systems whose structure is a fully connected network. However, these laws fail for systems with complex network structure. Here we show that in these cases the eigenvalues are described by superellipses. We also develop a new method to analytically estimate the dominant eigenvalue of complex networks.
2109.12551
Thomas Fink
Thomas M. A. Fink and Ryan Hannam
Biological logics are restricted
6 pages
null
null
null
q-bio.MN cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Networks of gene regulation govern morphogenesis, determine cell identity and regulate cell function. But we have little understanding, at the local level, of which logics are biologically preferred or even permitted. To solve this puzzle, we studied the consequences of a fundamental aspect of gene regulatory networks: genes and transcription factors talk to each other but not themselves. Remarkably, this bipartite structure severely restricts the number of logical dependencies that a gene can have on other genes. We developed a theory for the number of permitted logics for different regulatory building blocks of genes and transcription factors. We tested our predictions against a simulation of the 19 simplest building blocks, and found complete agreement. The restricted range of biological logics is a key insight into how information is processed at the genetic level. It constrains global network function and makes it easier to reverse engineer regulatory networks from observed behavior.
[ { "created": "Sun, 26 Sep 2021 10:07:40 GMT", "version": "v1" }, { "created": "Wed, 31 Aug 2022 18:50:54 GMT", "version": "v2" } ]
2022-09-02
[ [ "Fink", "Thomas M. A.", "" ], [ "Hannam", "Ryan", "" ] ]
Networks of gene regulation govern morphogenesis, determine cell identity and regulate cell function. But we have little understanding, at the local level, of which logics are biologically preferred or even permitted. To solve this puzzle, we studied the consequences of a fundamental aspect of gene regulatory networks: genes and transcription factors talk to each other but not themselves. Remarkably, this bipartite structure severely restricts the number of logical dependencies that a gene can have on other genes. We developed a theory for the number of permitted logics for different regulatory building blocks of genes and transcription factors. We tested our predictions against a simulation of the 19 simplest building blocks, and found complete agreement. The restricted range of biological logics is a key insight into how information is processed at the genetic level. It constrains global network function and makes it easier to reverse engineer regulatory networks from observed behavior.
1309.3337
Alberto d'Onofrio
Alberto d'Onofrio
A general framework for modeling tumor-immune system competition and immunotherapy: mathematical analysis and biomedical inferences
22 pages, 13 figures
Physica D, 208, 220-235 (2005)
10.1016/j.physd.2005.06.032
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we propose and investigate a family of models, which admits as particular cases some well-known mathematical models of tumor-immune system interaction, with the additional assumption that the influx of immune system cells may be a function of the number of cancer cells. Constant, periodic and impulsive therapies (as well as the non-perturbed system) are investigated both analytically for the general family and, by using the model by Kuznetsov et al. (V. A. Kuznetsov, I. A. Makalkin, M. A. Taylor and A. S. Perelson. Nonlinear dynamics of immunogenic tumors: Parameter estimation and global bifurcation analysis. Bulletin of Mathematical Biology 56(2): 295-321, (1994)), via numerical simulations. Simulations seem to show that the shape of the function modeling the therapy is a crucial factor only for very high values of the therapy period $T$, whereas for realistic values of $T$, the eradication of the cancer cells depends on the mean values of the therapy term. Finally, some medical inferences are proposed.
[ { "created": "Fri, 13 Sep 2013 00:00:01 GMT", "version": "v1" } ]
2015-06-17
[ [ "d'Onofrio", "Alberto", "" ] ]
In this work we propose and investigate a family of models, which admits as particular cases some well-known mathematical models of tumor-immune system interaction, with the additional assumption that the influx of immune system cells may be a function of the number of cancer cells. Constant, periodic and impulsive therapies (as well as the non-perturbed system) are investigated both analytically for the general family and, by using the model by Kuznetsov et al. (V. A. Kuznetsov, I. A. Makalkin, M. A. Taylor and A. S. Perelson. Nonlinear dynamics of immunogenic tumors: Parameter estimation and global bifurcation analysis. Bulletin of Mathematical Biology 56(2): 295-321, (1994)), via numerical simulations. Simulations seem to show that the shape of the function modeling the therapy is a crucial factor only for very high values of the therapy period $T$, whereas for realistic values of $T$, the eradication of the cancer cells depends on the mean values of the therapy term. Finally, some medical inferences are proposed.
q-bio/0411007
Jonathan Coe
J.B.Coe and Y.Mao
Analytical solution of a generalized Penna model
6 figures
Physical Review E 67 61909 (2003)
10.1103/PhysRevE.67.061909
null
q-bio.PE
null
In 1995, T. J. Penna introduced a simple model of biological aging. A modified Penna model has been demonstrated to exhibit behaviour of real-life systems including catastrophic senescence in salmon and a mortality plateau at advanced ages. We present a general steady-state, analytic solution to the Penna model, able to deal with arbitrary birth and survivability functions. This solution is employed to solve standard variant Penna models studied by simulation. Different Verhulst factors regulating both the birth rate and external death rate are considered.
[ { "created": "Mon, 1 Nov 2004 11:56:43 GMT", "version": "v1" } ]
2007-05-23
[ [ "Coe", "J. B.", "" ], [ "Mao", "Y.", "" ] ]
In 1995, T. J. Penna introduced a simple model of biological aging. A modified Penna model has been demonstrated to exhibit behaviour of real-life systems including catastrophic senescence in salmon and a mortality plateau at advanced ages. We present a general steady-state, analytic solution to the Penna model, able to deal with arbitrary birth and survivability functions. This solution is employed to solve standard variant Penna models studied by simulation. Different Verhulst factors regulating both the birth rate and external death rate are considered.
2112.05098
Xin Wang
Ju Kang, Shijie Zhang, Yiyuan Niu, Fan Zhong, Xin Wang
Intraspecific predator interference promotes biodiversity in ecosystems
Main text 14 pages, 3 figures. Appendices 34 pages, 15 Appendix-figures
null
10.7554/eLife.93115.1
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Explaining biodiversity is a fundamental issue in ecology. A long-standing puzzle lies in the paradox of the plankton: many species of plankton feeding on a limited variety of resources coexist, apparently flouting the competitive exclusion principle (CEP), which holds that the number of predator (consumer) species cannot exceed that of the resources at a steady state. Here, we present a mechanistic model and demonstrate that intraspecific interference among the consumers enables a plethora of consumer species to coexist at constant population densities with only one or a handful of resource species. This facilitated biodiversity is resistant to stochasticity, either with the stochastic simulation algorithm or individual-based modeling. Our model naturally explains the classical experiments that invalidate the CEP, quantitatively illustrates the universal S-shaped pattern of the rank-abundance curves across a wide range of ecological communities, and can be broadly used to resolve the mystery of biodiversity in many natural ecosystems.
[ { "created": "Thu, 9 Dec 2021 18:37:48 GMT", "version": "v1" }, { "created": "Mon, 29 Aug 2022 14:30:49 GMT", "version": "v2" }, { "created": "Wed, 19 Jul 2023 12:48:39 GMT", "version": "v3" }, { "created": "Thu, 14 Dec 2023 15:39:30 GMT", "version": "v4" }, { "created": "Tue, 30 Apr 2024 14:34:21 GMT", "version": "v5" } ]
2024-05-01
[ [ "Kang", "Ju", "" ], [ "Zhang", "Shijie", "" ], [ "Niu", "Yiyuan", "" ], [ "Zhong", "Fan", "" ], [ "Wang", "Xin", "" ] ]
Explaining biodiversity is a fundamental issue in ecology. A long-standing puzzle lies in the paradox of the plankton: many species of plankton feeding on a limited variety of resources coexist, apparently flouting the competitive exclusion principle (CEP), which holds that the number of predator (consumer) species cannot exceed that of the resources at a steady state. Here, we present a mechanistic model and demonstrate that intraspecific interference among the consumers enables a plethora of consumer species to coexist at constant population densities with only one or a handful of resource species. This facilitated biodiversity is resistant to stochasticity, either with the stochastic simulation algorithm or individual-based modeling. Our model naturally explains the classical experiments that invalidate the CEP, quantitatively illustrates the universal S-shaped pattern of the rank-abundance curves across a wide range of ecological communities, and can be broadly used to resolve the mystery of biodiversity in many natural ecosystems.
2312.13660
ghazi bouaziz
Abderrahim Derouiche (LAAS, UT3), Ghazi Bouaziz (LAAS, UT3), Damien Brulin (UT2J, LAAS), Eric Campo (UT2J, LAAS), Antoine Piau
Empowering health in aging: Innovation in undernutrition detection and prevention through comprehensive monitoring
null
IADIS International Journal on Computer Science and Information Systems, 2023, 18 (2), pp.93-112
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Addressing the health challenges faced by the aging population, particularly undernutrition, is of paramount importance, given the significant representation of older individuals in society. Undernutrition arises from an imbalance between nutritional intake and energy expenditure, making its diagnosis crucial. Advances in technology have allowed greater precision and efficiency in biomarker measurements, making it easier to detect undernutrition in the elderly. This article introduces an innovative system developed as part of the CART initiative at Toulouse University Hospital in France. This system takes a comprehensive approach to monitoring health and well-being, collecting data that can provide insights, shape health outcomes, and even predict them. A key focus of this system is on identifying nutrition-related behaviors. By integrating quantitative and clinical assessments, which include biannual nutritional evaluations as well as physical and physiological measurements such as mobility and weight, this approach improves the diagnosis and prevention of undernutrition risks. It offers a more holistic perspective aligned with physiological standards. An example is given using data collected from an elderly person followed at home for three months. We believe that this advance could make a significant contribution to the overall improvement of health and well-being, particularly in the elderly population.
[ { "created": "Thu, 21 Dec 2023 08:32:05 GMT", "version": "v1" } ]
2023-12-22
[ [ "Derouiche", "Abderrahim", "", "LAAS, UT3" ], [ "Bouaziz", "Ghazi", "", "LAAS, UT3" ], [ "Brulin", "Damien", "", "UT2J, LAAS" ], [ "Campo", "Eric", "", "UT2J, LAAS" ], [ "Piau", "Antoine", "" ] ]
Addressing the health challenges faced by the aging population, particularly undernutrition, is of paramount importance, given the significant representation of older individuals in society. Undernutrition arises from an imbalance between nutritional intake and energy expenditure, making its diagnosis crucial. Advances in technology have allowed greater precision and efficiency in biomarker measurements, making it easier to detect undernutrition in the elderly. This article introduces an innovative system developed as part of the CART initiative at Toulouse University Hospital in France. This system takes a comprehensive approach to monitoring health and well-being, collecting data that can provide insights, shape health outcomes, and even predict them. A key focus of this system is on identifying nutrition-related behaviors. By integrating quantitative and clinical assessments, which include biannual nutritional evaluations as well as physical and physiological measurements such as mobility and weight, this approach improves the diagnosis and prevention of undernutrition risks. It offers a more holistic perspective aligned with physiological standards. An example is given using data collected from an elderly person followed at home for three months. We believe that this advance could make a significant contribution to the overall improvement of health and well-being, particularly in the elderly population.
1808.08749
Ryosuke Omori
Yukihiko Nakata, Ryosuke Omori
$\mathcal{R}_{0}$ fails to predict the outbreak potential in the presence of natural-boosting immunity
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time-varying susceptibility of hosts at the individual level, due to waning and boosting of immunity, is known to induce rich long-term behavior in disease transmission dynamics. Meanwhile, the impact of time-varying heterogeneity of host susceptibility on the short-term behavior of epidemics is not well studied, even though most of the available epidemiological data describe short-term epidemics. Here we constructed a parsimonious mathematical model describing short-term transmission dynamics that takes into account natural-boosting immunity by reinfection, and obtained the explicit solution for our model. We found that our system shows "the delayed epidemic", in which the epidemic takes off after an initial negative slope of the epidemic curve, in addition to the common classifications in the standard SIR model, i.e., "no epidemic" for $\mathcal{R}_{0}\leq1$ or a normal epidemic for $\mathcal{R}_{0}>1$. Employing the explicit solution, we derived the condition for each classification.
[ { "created": "Mon, 27 Aug 2018 09:14:17 GMT", "version": "v1" } ]
2018-08-28
[ [ "Nakata", "Yukihiko", "" ], [ "Omori", "Ryosuke", "" ] ]
Time-varying susceptibility of hosts at the individual level, due to waning and boosting of immunity, is known to induce rich long-term behavior in disease transmission dynamics. Meanwhile, the impact of time-varying heterogeneity of host susceptibility on the short-term behavior of epidemics is not well studied, even though most of the available epidemiological data describe short-term epidemics. Here we constructed a parsimonious mathematical model describing short-term transmission dynamics that takes into account natural-boosting immunity by reinfection, and obtained the explicit solution for our model. We found that our system shows "the delayed epidemic", in which the epidemic takes off after an initial negative slope of the epidemic curve, in addition to the common classifications in the standard SIR model, i.e., "no epidemic" for $\mathcal{R}_{0}\leq1$ or a normal epidemic for $\mathcal{R}_{0}>1$. Employing the explicit solution, we derived the condition for each classification.
2205.07673
Junkang Wei
Junkang Wei, Jin Xiao, Siyuan Chen, Licheng Zong, Xin Gao, Yu Li
ProNet DB: A proteome-wise database for protein surface property representations and RNA-binding profiles
12 pages, 6 figures
null
null
null
q-bio.QM q-bio.BM q-bio.MN
http://creativecommons.org/licenses/by-nc-nd/4.0/
The rapid growth in the number of experimental and predicted protein structures, and the increasing complexity of those structures, challenges users in computational biology to utilize the structural information and protein surface property representations. Recently, AlphaFold2 released comprehensive proteomes of various species, and protein surface property representation plays a crucial role in protein-molecule interaction prediction, such as protein-protein interaction, protein-nucleic acid interaction, and protein-compound interaction. Here, we propose the first comprehensive database, namely ProNet DB, which incorporates multiple protein surface representations and RNA-binding landscapes for more than 326,175 protein structures, covering 16 model organism proteomes from the AlphaFold Protein Structure Database (AlphaFold DB) and experimentally validated protein structures deposited in the Protein Data Bank (PDB). For each protein, we provide the original protein structure, surface property representations including hydrophobicity, charge distribution, hydrogen bonds and interacting face, and the RNA-binding landscape, including RNA binding sites and RNA binding preference. To interpret the protein surface property representations and RNA-binding landscape intuitively, we also integrate Mol* and Online 3D Viewer to visualize the representations on the protein surface. The pre-computed features are available to users instantaneously and support developments in computational biology, including molecular mechanism exploration, geometry-based drug discovery and novel therapeutics development. The server is now available at https://proj.cse.cuhk.edu.hk/aihlab/pronet/.
[ { "created": "Mon, 16 May 2022 13:40:33 GMT", "version": "v1" }, { "created": "Mon, 7 Aug 2023 15:41:40 GMT", "version": "v2" } ]
2023-08-08
[ [ "Wei", "Junkang", "" ], [ "Xiao", "Jin", "" ], [ "Chen", "Siyuan", "" ], [ "Zong", "Licheng", "" ], [ "Gao", "Xin", "" ], [ "Li", "Yu", "" ] ]
The rapid growth in the number of experimental and predicted protein structures, and the increasing complexity of those structures, challenges users in computational biology to utilize the structural information and protein surface property representations. Recently, AlphaFold2 released comprehensive proteomes of various species, and protein surface property representation plays a crucial role in protein-molecule interaction prediction, such as protein-protein interaction, protein-nucleic acid interaction, and protein-compound interaction. Here, we propose the first comprehensive database, namely ProNet DB, which incorporates multiple protein surface representations and RNA-binding landscapes for more than 326,175 protein structures, covering 16 model organism proteomes from the AlphaFold Protein Structure Database (AlphaFold DB) and experimentally validated protein structures deposited in the Protein Data Bank (PDB). For each protein, we provide the original protein structure, surface property representations including hydrophobicity, charge distribution, hydrogen bonds and interacting face, and the RNA-binding landscape, including RNA binding sites and RNA binding preference. To interpret the protein surface property representations and RNA-binding landscape intuitively, we also integrate Mol* and Online 3D Viewer to visualize the representations on the protein surface. The pre-computed features are available to users instantaneously and support developments in computational biology, including molecular mechanism exploration, geometry-based drug discovery and novel therapeutics development. The server is now available at https://proj.cse.cuhk.edu.hk/aihlab/pronet/.
1611.06618
Nan Xu
Nan Xu, Peter C. Doershuck
3D Reconstruction of Heterogeneous Virus Particles with Statistical Geometric Symmetry
null
null
null
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In 3-D reconstruction problems, the image data obtained from cryo-electron microscopy are projections of many heterogeneous instances of the object under study (e.g., a virus). When the object is heterogeneous but has an overall symmetry, it is natural to describe the object as stochastic with symmetrical statistics. This paper presents a maximum likelihood reconstruction approach which allows each object to lack symmetry while constraining the {\it statistics} of the ensemble of objects to have symmetry. The algorithm is demonstrated on bacteriophage HK97 and is contrasted with an existing algorithm in which each object, while still heterogeneous, has the symmetry. Reconstruction results show that the proposed algorithm eliminates long-standing distortions in previous heterogeneity calculations associated with symmetry axes, and provides estimates that make more biological sense than those of existing algorithms.
[ { "created": "Mon, 21 Nov 2016 00:38:23 GMT", "version": "v1" } ]
2016-11-22
[ [ "Xu", "Nan", "" ], [ "Doershuck", "Peter C.", "" ] ]
In 3-D reconstruction problems, the image data obtained from cryo-electron microscopy are projections of many heterogeneous instances of the object under study (e.g., a virus). When the object is heterogeneous but has an overall symmetry, it is natural to describe the object as stochastic with symmetrical statistics. This paper presents a maximum likelihood reconstruction approach which allows each object to lack symmetry while constraining the {\it statistics} of the ensemble of objects to have symmetry. The algorithm is demonstrated on bacteriophage HK97 and is contrasted with an existing algorithm in which each object, while still heterogeneous, has the symmetry. Reconstruction results show that the proposed algorithm eliminates long-standing distortions in previous heterogeneity calculations associated with symmetry axes, and provides estimates that make more biological sense than those of existing algorithms.
1601.06962
Didier Fass
Didier Fass (MOSEL), Franck Gechter (IRTES - SET)
Towards a Theory for Bio - Cyber Physical Systems Modelling
HCI International 2015, Aug 2015, Los Angeles, United States. LNCS 9184, 2015, LNCS - Digital Human Modeling and applications in Health, Safety, Ergonomics and Risk Management: Human Modelling (Part I)
null
10.1007/978-3-319-21073-5_25
null
q-bio.QM cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Currently, Cyber-Physical Systems (CPS) represent a great challenge for automatic control and smart systems engineering on both theoretical and practical levels. Designing CPS requires approaches involving multidisciplinary competences. Although they are designed to be autonomous, CPS present a degree of uncertainty, which requires interaction with humans for engineering, monitoring, controlling, performing operational maintenance, etc. This human-CPS interaction led naturally to the human-in-the-loop (HITL) concept. Nevertheless, this HITL concept, which stems from a reductionist point of view, exhibits limitations due to the different natures of the systems involved. As opposed to this classical approach, we propose, in this paper, a model of Bio-CPS (i.e., systems based on an integration of computational elements within biological systems) grounded in theoretical biology, physics and computer science and based on the key concept of human systems integration.
[ { "created": "Tue, 26 Jan 2016 10:14:49 GMT", "version": "v1" } ]
2016-01-27
[ [ "Fass", "Didier", "", "MOSEL" ], [ "Gechter", "Franck", "", "IRTES - SET" ] ]
Currently, Cyber-Physical Systems (CPS) represent a great challenge for automatic control and smart systems engineering on both theoretical and practical levels. Designing CPS requires approaches involving multidisciplinary competences. Although they are designed to be autonomous, CPS present a degree of uncertainty, which requires interaction with humans for engineering, monitoring, controlling, performing operational maintenance, etc. This human-CPS interaction led naturally to the human-in-the-loop (HITL) concept. Nevertheless, this HITL concept, which stems from a reductionist point of view, exhibits limitations due to the different natures of the systems involved. As opposed to this classical approach, we propose, in this paper, a model of Bio-CPS (i.e., systems based on an integration of computational elements within biological systems) grounded in theoretical biology, physics and computer science and based on the key concept of human systems integration.
1412.0363
Natasha Cayco Gajic
Alex Cayco-Gajic and Joel Zylberberg and Eric Shea-Brown
Impact of triplet correlations on neural population codes
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Which statistical features of spiking activity matter for how stimuli are encoded in neural populations? A vast body of work has explored how firing rates in individual cells and correlations in the spikes of cell pairs impact coding. But little is known about how higher-order correlations, which describe simultaneous firing in triplets and larger ensembles of cells, impact encoded stimulus information. Here, we take a first step toward closing this gap. We vary triplet correlations in small (~10 cell) neural populations while keeping single cell and pairwise statistics fixed at typically reported values. For each value of triplet correlations, we estimate the performance of the neural population on a two-stimulus discrimination task. We identify a predominant way that such triplet correlations can strongly enhance coding: if triplet correlations differ for the two stimuli, they skew the response distributions of the two stimuli apart from each other, separating them and making them easier to distinguish. This coding benefit does not occur when both stimuli elicit similar triplet correlations. These results indicate that higher-order correlations could have a strong effect on population coding. Finally, we calculate how many samples are necessary to accurately measure spiking correlations of this type, providing an estimate of the necessary recording times in experiments.
[ { "created": "Mon, 1 Dec 2014 07:07:25 GMT", "version": "v1" } ]
2014-12-02
[ [ "Cayco-Gajic", "Alex", "" ], [ "Zylberberg", "Joel", "" ], [ "Shea-Brown", "Eric", "" ] ]
Which statistical features of spiking activity matter for how stimuli are encoded in neural populations? A vast body of work has explored how firing rates in individual cells and correlations in the spikes of cell pairs impact coding. But little is known about how higher-order correlations, which describe simultaneous firing in triplets and larger ensembles of cells, impact encoded stimulus information. Here, we take a first step toward closing this gap. We vary triplet correlations in small (~10 cell) neural populations while keeping single cell and pairwise statistics fixed at typically reported values. For each value of triplet correlations, we estimate the performance of the neural population on a two-stimulus discrimination task. We identify a predominant way that such triplet correlations can strongly enhance coding: if triplet correlations differ for the two stimuli, they skew the response distributions of the two stimuli apart from each other, separating them and making them easier to distinguish. This coding benefit does not occur when both stimuli elicit similar triplet correlations. These results indicate that higher-order correlations could have a strong effect on population coding. Finally, we calculate how many samples are necessary to accurately measure spiking correlations of this type, providing an estimate of the necessary recording times in experiments.
2007.15917
Peter Gawthrop
Peter J. Gawthrop
Bond Graphs Unify Stoichiometric Analysis and Thermodynamics
null
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by-nc-sa/4.0/
Whole-cell modelling is constrained by the laws of nature in general and the laws of thermodynamics in particular. This paper shows how one prolific source of information, stoichiometric models of biomolecular systems, can be integrated with thermodynamic principles using the bond graph approach to network thermodynamics.
[ { "created": "Fri, 31 Jul 2020 09:24:50 GMT", "version": "v1" } ]
2020-08-03
[ [ "Gawthrop", "Peter J.", "" ] ]
Whole-cell modelling is constrained by the laws of nature in general and the laws of thermodynamics in particular. This paper shows how one prolific source of information, stoichiometric models of biomolecular systems, can be integrated with thermodynamic principles using the bond graph approach to network thermodynamics.
q-bio/0702020
Brigitte Gaillard
Valerie Dufour (DEPE-IPHC), Olivier Pascalis, Odile Petit (DEPE-IPHC)
Face processing limitation to own species in primates: a comparative study in brown capuchins, Tonkean macaques and humans
null
Behav. Processes 73 (07/2006) 107-113
10.1016/j.beproc.2006.04.006
null
q-bio.PE
null
Most primates live in social groups whose survival and stability depend on individuals' abilities to create strong social relationships with other group members. The existence of these groups requires identifying individuals and assigning to each of them a social status. Individual recognition can be achieved through vocalizations but also through faces. In humans, an efficient system for the processing of own-species faces exists. This specialization is achieved through experience with faces of conspecifics during development and leads to the loss of the ability to process faces from other primate species. We hypothesize that a similar mechanism exists in social primates. We investigated face processing in one Old World species (genus Macaca) and in one New World species (genus Cebus). Our results show the same advantage for own-species face recognition in all tested subjects. This work suggests, in all species tested, the existence of a common trait inherited from the primate ancestor: an efficient system to identify individual faces of the own species only.
[ { "created": "Fri, 9 Feb 2007 14:27:41 GMT", "version": "v1" } ]
2007-05-23
[ [ "Dufour", "Valerie", "", "DEPE-IPHC" ], [ "Pascalis", "Olivier", "", "DEPE-IPHC" ], [ "Petit", "Odile", "", "DEPE-IPHC" ] ]
Most primates live in social groups whose survival and stability depend on individuals' abilities to create strong social relationships with other group members. The existence of these groups requires identifying individuals and assigning to each of them a social status. Individual recognition can be achieved through vocalizations but also through faces. In humans, an efficient system for the processing of own-species faces exists. This specialization is achieved through experience with faces of conspecifics during development and leads to the loss of the ability to process faces from other primate species. We hypothesize that a similar mechanism exists in social primates. We investigated face processing in one Old World species (genus Macaca) and in one New World species (genus Cebus). Our results show the same advantage for own-species face recognition in all tested subjects. This work suggests, in all species tested, the existence of a common trait inherited from the primate ancestor: an efficient system to identify individual faces of the own species only.
1602.06132
Sergey Volkov N.
Sergey N. Volkov
Mechanism of Threshold Elongation of DNA Macromolecule
null
null
null
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The mechanism of threshold elongation (overstretching) of DNA macromolecules under the action of an external force is studied within the framework of a phenomenological approach. The treatment takes into account that double-stranded DNA is a polymorphic macromolecule with a set of metastable states. According to the proposed mechanism, DNA threshold elongation can take place as a cooperative structural transition of the macromolecule to a metastable form, which is stabilized by the external force. For the description of DNA overstretching, a model including external (stretching) and internal (conformational) displacement components is constructed. Both components are assumed to be coupled on the pathway of double-helix transformations. It is shown that, under the force action, DNA deformation proceeds in two stages. The first is a restructuring of the double helix, which leads to the formation of conformational bistability in the DNA macromolecule. In the second stage, the conformational transition and the deformation induced by it consistently cover the macromolecule as a threshold process. A comparison of the calculated characteristics of the DNA overstretching process with experimental data shows good agreement. Analysis of the results obtained, together with the available literature data, allows us to conclude that the overstretching transition in the double helix is a dynamical process and can spread along the DNA chain. At the same time, in DNA with A$\cdot$T-rich content, due to large dissipation, the overstretching process leads to a force-induced melting transition and should have a nearly static character.
[ { "created": "Fri, 19 Feb 2016 12:58:34 GMT", "version": "v1" } ]
2016-02-22
[ [ "Volkov", "Sergey N.", "" ] ]
The mechanism of threshold elongation (overstretching) of DNA macromolecules under the action of an external force is studied within the framework of a phenomenological approach. The treatment takes into account that double-stranded DNA is a polymorphic macromolecule with a set of metastable states. According to the proposed mechanism, DNA threshold elongation can take place as a cooperative structural transition of the macromolecule to a metastable form, which is stabilized by the external force. For the description of DNA overstretching, a model including external (stretching) and internal (conformational) displacement components is constructed. Both components are assumed to be coupled on the pathway of double-helix transformations. It is shown that, under the force action, DNA deformation proceeds in two stages. The first is a restructuring of the double helix, which leads to the formation of conformational bistability in the DNA macromolecule. In the second stage, the conformational transition and the deformation induced by it consistently cover the macromolecule as a threshold process. A comparison of the calculated characteristics of the DNA overstretching process with experimental data shows good agreement. Analysis of the results obtained, together with the available literature data, allows us to conclude that the overstretching transition in the double helix is a dynamical process and can spread along the DNA chain. At the same time, in DNA with A$\cdot$T-rich content, due to large dissipation, the overstretching process leads to a force-induced melting transition and should have a nearly static character.
1503.06094
Gianluca Martelloni
Gianluca Martelloni, Alisa Santarlasci, Franco Bagnoli, Giacomo Santini
Modeling ant battles by means of a diffusion-limited Gillespie algorithm
null
Theoretical Biology Forum 107(1-2), 2014:57-76
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose two modeling approaches to describe the dynamics of ant battles, starting from laboratory experiments on the behavior of two ant species, the invasive Lasius neglectus and the autochthonous Lasius paralienus. This work is mainly motivated by the need for realistic models to predict the interaction dynamics of invasive species. The two species exhibit different fighting strategies. In order to describe the observed battle dynamics, we start by building a chemical model that treats the ants and the fighting groups (for instance, two ants of one species and one of the other) as chemical species. From the chemical equations we deduce a system of differential equations, whose parameters are estimated by minimizing the difference between the experimental data and the model output. We model the fluctuations observed in the experiments by means of a standard Gillespie algorithm. To better reproduce the observed behavior, we then adopt a spatial agent-based model, in which ants not engaged in fighting groups move randomly (diffusion) among compartments, while the Gillespie algorithm models the reactions inside each compartment.
[ { "created": "Fri, 20 Mar 2015 14:52:12 GMT", "version": "v1" } ]
2015-03-23
[ [ "Martelloni", "Gianluca", "" ], [ "Santarlasci", "Alisa", "" ], [ "Bagnoli", "Franco", "" ], [ "Santini", "Giacomo", "" ] ]
We propose two modeling approaches to describe the dynamics of ant battles, starting from laboratory experiments on the behavior of two ant species, the invasive Lasius neglectus and the autochthonous Lasius paralienus. This work is mainly motivated by the need for realistic models to predict the interaction dynamics of invasive species. The two species exhibit different fighting strategies. In order to describe the observed battle dynamics, we start by building a chemical model that treats the ants and the fighting groups (for instance, two ants of one species and one of the other) as chemical species. From the chemical equations we deduce a system of differential equations, whose parameters are estimated by minimizing the difference between the experimental data and the model output. We model the fluctuations observed in the experiments by means of a standard Gillespie algorithm. To better reproduce the observed behavior, we then adopt a spatial agent-based model, in which ants not engaged in fighting groups move randomly (diffusion) among compartments, while the Gillespie algorithm models the reactions inside each compartment.
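The stochastic treatment in the abstract above rests on the Gillespie algorithm (exponential waiting times, reaction chosen proportional to its propensity). The following is a minimal sketch of such a simulation for a two-species battle; the simple kill reactions A + B -> A and A + B -> B, the mass-action propensities, and all parameter values are illustrative assumptions, not the paper's full reaction set.

```python
import random

def gillespie_battle(nA, nB, kA, kB, seed=0):
    """Minimal Gillespie (SSA) simulation of a two-species 'battle':
    A + B -> A at rate kA (an A kills a B), A + B -> B at rate kB.
    Both rates are assumed positive. Returns (elapsed_time, nA, nB)
    when one side is extinct."""
    rng = random.Random(seed)
    t = 0.0
    while nA > 0 and nB > 0:
        aA = kA * nA * nB            # propensity: an A kills a B
        aB = kB * nA * nB            # propensity: a B kills an A
        total = aA + aB
        t += rng.expovariate(total)  # exponential waiting time to next event
        if rng.random() * total < aA:
            nB -= 1                  # reaction chosen proportional to propensity
        else:
            nA -= 1
    return t, nA, nB

t_end, survA, survB = gillespie_battle(20, 20, 1.0, 1.0)
```

Because every event removes exactly one ant, the loop terminates in at most nA + nB - 1 events; the spatial agent-based variant described in the abstract would additionally move idle ants between compartments before each Gillespie step.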
2407.03392
Agust Egilsson
Agust Egilsson
M5: A Whole Genome Bacterial Encoder at Single Nucleotide Resolution
13 pages, 5 figures
null
null
null
q-bio.QM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A linear attention mechanism is described that extends the context length of an encoder-only transformer, called M5 in this report, to a multi-million-nucleotide, single-nucleotide-resolution foundation model pretrained on bacterial whole genomes. The linear attention mechanism tightly approximates full quadratic attention and has a simple, lightweight implementation for the case where the key-query embedding dimensionality is low. The M5-small model is trained and tested entirely on one A100 GPU with 40 GB of memory, handling up to 196K nucleotides during training and 2M nucleotides during testing. We evaluate the M5-small model and record notable performance improvements as whole-genome bacterial sequence length increases, and we demonstrate that the full multi-head attention approximation remains stable as sequence length grows.
[ { "created": "Wed, 3 Jul 2024 15:30:44 GMT", "version": "v1" } ]
2024-07-08
[ [ "Egilsson", "Agust", "" ] ]
A linear attention mechanism is described that extends the context length of an encoder-only transformer, called M5 in this report, to a multi-million-nucleotide, single-nucleotide-resolution foundation model pretrained on bacterial whole genomes. The linear attention mechanism tightly approximates full quadratic attention and has a simple, lightweight implementation for the case where the key-query embedding dimensionality is low. The M5-small model is trained and tested entirely on one A100 GPU with 40 GB of memory, handling up to 196K nucleotides during training and 2M nucleotides during testing. We evaluate the M5-small model and record notable performance improvements as whole-genome bacterial sequence length increases, and we demonstrate that the full multi-head attention approximation remains stable as sequence length grows.
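The key idea in the abstract above, replacing quadratic softmax attention with a linear mechanism when the key-query dimensionality is low, can be sketched as follows. The feature map phi(x) = elu(x) + 1 is a common choice borrowed from the linear-transformer literature and is an assumption here, not necessarily M5's kernel; the point of the sketch is only that keys and values are summarised once (cost linear in sequence length) instead of forming an n x n score matrix.

```python
import math

def linear_attention(Q, K, V):
    """Linear attention sketch: out = phi(Q) (phi(K)^T V) / norm, with
    phi(x) = elu(x) + 1 keeping all scores positive. Q, K are n x d lists,
    V is an n x d_v list; cost is O(n * d * d_v) rather than O(n^2)."""
    phi = lambda row: [x + 1.0 if x > 0 else math.exp(x) for x in row]  # elu(x)+1
    Qf, Kf = [phi(q) for q in Q], [phi(k) for k in K]
    n, d, dv = len(Kf), len(Kf[0]), len(V[0])
    # Summarise all keys/values once: S = phi(K)^T V (d x d_v), z = column sums of phi(K)
    S = [[sum(Kf[i][a] * V[i][b] for i in range(n)) for b in range(dv)]
         for a in range(d)]
    z = [sum(Kf[i][a] for i in range(n)) for a in range(d)]
    out = []
    for q in Qf:
        norm = sum(qa * za for qa, za in zip(q, z))   # per-query normaliser
        out.append([sum(qa * S[a][b] for a, qa in enumerate(q)) / norm
                    for b in range(dv)])
    return out
```

Since the implicit attention weights are positive and normalised, each output row is a convex combination of the value rows, which gives a cheap sanity check on any implementation.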
2307.03498
Patrick Vincent Lubenia
Patrick Vincent N. Lubenia, Eduardo R. Mendoza, Angelyn R. Lao
Comparative Analysis of Kinetic Realizations of Insulin Signaling
32 pages, 1 figure
null
null
null
q-bio.MN
http://creativecommons.org/publicdomain/zero/1.0/
Several studies have developed dynamical models to understand the underlying mechanisms of insulin signaling, a signaling cascade that leads to the translocation of glucose, the human body's main source of energy. Reaction network analysis allows us to extract properties of dynamical systems without depending on their model parameter values. This study compares insulin signaling in the healthy state (INSMS, INSulin Metabolic Signaling) and in type 2 diabetes (INRES, INsulin RESistance) using reaction network analysis. The analysis uses network decomposition to identify the different subsystems involved in insulin signaling (e.g., insulin receptor binding and recycling, GLUT4 translocation, and the ERK signaling pathway). Results show that INSMS and INRES are similar with respect to some network, structo-kinetic, and kinetic properties. Their differences, however, provide insights into what happens when insulin resistance occurs. First, the variation in the number of species involved in INSMS and INRES suggests that when irregularities occur in the insulin signaling pathway, other complexes (and, hence, other processes) become involved, characterizing insulin resistance. Second, the loss of concordance exhibited by INRES suggests a less restrictive interplay between the species involved in insulin signaling, leading to unusual activities in the signaling cascade. Lastly, the loss of absolute concentration robustness of GLUT4 in INRES may signify that the transporter has become unreliable in shuttling glucose into the cell, inhibiting efficient cellular energy production. This study also suggests possible applications of the resulting equilibria parametrization and network decomposition to potentially establish absolute concentration robustness in a species.
[ { "created": "Fri, 7 Jul 2023 10:25:54 GMT", "version": "v1" }, { "created": "Tue, 24 Oct 2023 02:16:19 GMT", "version": "v2" } ]
2023-10-25
[ [ "Lubenia", "Patrick Vincent N.", "" ], [ "Mendoza", "Eduardo R.", "" ], [ "Lao", "Angelyn R.", "" ] ]
Several studies have developed dynamical models to understand the underlying mechanisms of insulin signaling, a signaling cascade that leads to the translocation of glucose, the human body's main source of energy. Reaction network analysis allows us to extract properties of dynamical systems without depending on their model parameter values. This study compares insulin signaling in the healthy state (INSMS, INSulin Metabolic Signaling) and in type 2 diabetes (INRES, INsulin RESistance) using reaction network analysis. The analysis uses network decomposition to identify the different subsystems involved in insulin signaling (e.g., insulin receptor binding and recycling, GLUT4 translocation, and the ERK signaling pathway). Results show that INSMS and INRES are similar with respect to some network, structo-kinetic, and kinetic properties. Their differences, however, provide insights into what happens when insulin resistance occurs. First, the variation in the number of species involved in INSMS and INRES suggests that when irregularities occur in the insulin signaling pathway, other complexes (and, hence, other processes) become involved, characterizing insulin resistance. Second, the loss of concordance exhibited by INRES suggests a less restrictive interplay between the species involved in insulin signaling, leading to unusual activities in the signaling cascade. Lastly, the loss of absolute concentration robustness of GLUT4 in INRES may signify that the transporter has become unreliable in shuttling glucose into the cell, inhibiting efficient cellular energy production. This study also suggests possible applications of the resulting equilibria parametrization and network decomposition to potentially establish absolute concentration robustness in a species.
2204.12611
Jin Xu
Jin Xu, Jessie Jiang, Herbert M. Sauro
SBMLDiagrams: A python package to process and visualize SBML layout and render
null
null
10.1093/bioinformatics/btac730
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Summary: The Systems Biology Markup Language (SBML) is an extensible standard format for exchanging biochemical models. One of the extensions for SBML is the SBML Layout and Render package. This allows modelers to describe a biochemical model as a pathway diagram. However, up to now there has been little support to help users easily add and retrieve such information from SBML. In this application note, we describe a new Python package called SBMLDiagrams. This package allows a user to add layout and render information or retrieve it using a straightforward Python API. The package uses skia-python to support the rendering of the diagrams, allowing export to common formats such as PNG or PDF. Availability: SBMLDiagrams is publicly available and licensed under the liberal MIT open-source license. The package is available for all major platforms. The source code has been deposited at GitHub (github.com/sys-bio/SBMLDiagrams). Users can install the package using the standard pip installation mechanism: pip install SBMLDiagrams. Contact: hsauro@uw.edu.
[ { "created": "Tue, 26 Apr 2022 22:01:35 GMT", "version": "v1" }, { "created": "Mon, 14 Nov 2022 22:03:16 GMT", "version": "v2" } ]
2022-11-16
[ [ "Xu", "Jin", "" ], [ "Jiang", "Jessie", "" ], [ "Sauro", "Herbert M.", "" ] ]
Summary: The Systems Biology Markup Language (SBML) is an extensible standard format for exchanging biochemical models. One of the extensions for SBML is the SBML Layout and Render package. This allows modelers to describe a biochemical model as a pathway diagram. However, up to now there has been little support to help users easily add and retrieve such information from SBML. In this application note, we describe a new Python package called SBMLDiagrams. This package allows a user to add layout and render information or retrieve it using a straightforward Python API. The package uses skia-python to support the rendering of the diagrams, allowing export to common formats such as PNG or PDF. Availability: SBMLDiagrams is publicly available and licensed under the liberal MIT open-source license. The package is available for all major platforms. The source code has been deposited at GitHub (github.com/sys-bio/SBMLDiagrams). Users can install the package using the standard pip installation mechanism: pip install SBMLDiagrams. Contact: hsauro@uw.edu.
2211.00772
Nan Xi
Nan Miles Xi and Angelos Vasilopoulos
Tuning hyperparameters of doublet-detection methods for single-cell RNA sequencing data
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
The existence of doublets in single-cell RNA sequencing (scRNA-seq) data poses a great challenge in downstream data analysis. Computational doublet-detection methods have been developed to remove doublets from scRNA-seq data. Yet, the default hyperparameter settings of those methods may not provide optimal performance. Here, we propose a strategy to tune hyperparameters for a cutting-edge doublet-detection method. We utilize a full factorial design to explore the relationship between hyperparameters and detection accuracy on 16 real scRNA-seq datasets. The optimal hyperparameters are obtained by a response surface model and convex optimization. We show that the optimal hyperparameters provide top performance across scRNA-seq datasets under various biological conditions. Our tuning strategy can be applied to other computational doublet-detection methods. It also offers insights into hyperparameter tuning for broader computational methods in scRNA-seq data analysis.
[ { "created": "Tue, 1 Nov 2022 22:24:39 GMT", "version": "v1" }, { "created": "Mon, 6 Feb 2023 04:24:27 GMT", "version": "v2" } ]
2023-02-07
[ [ "Xi", "Nan Miles", "" ], [ "Vasilopoulos", "Angelos", "" ] ]
The existence of doublets in single-cell RNA sequencing (scRNA-seq) data poses a great challenge in downstream data analysis. Computational doublet-detection methods have been developed to remove doublets from scRNA-seq data. Yet, the default hyperparameter settings of those methods may not provide optimal performance. Here, we propose a strategy to tune hyperparameters for a cutting-edge doublet-detection method. We utilize a full factorial design to explore the relationship between hyperparameters and detection accuracy on 16 real scRNA-seq datasets. The optimal hyperparameters are obtained by a response surface model and convex optimization. We show that the optimal hyperparameters provide top performance across scRNA-seq datasets under various biological conditions. Our tuning strategy can be applied to other computational doublet-detection methods. It also offers insights into hyperparameter tuning for broader computational methods in scRNA-seq data analysis.
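The tuning strategy in the abstract above starts from a full factorial design over hyperparameter levels, i.e. evaluating every combination and modelling the response. A generic sketch of the factorial-search step follows; the hyperparameter names (`k`, `threshold`) and the synthetic score function are illustrative stand-ins for a doublet-detection method's settings and its measured accuracy, and the paper's response-surface fit and convex optimisation are not reproduced here.

```python
import itertools

def full_factorial_best(levels, score_fn):
    """Evaluate every combination of hyperparameter levels (a full
    factorial design) and return (best_score, best_setting).
    levels: {name: [level, ...]}; score_fn: setting dict -> float."""
    grid = [dict(zip(levels, combo))
            for combo in itertools.product(*levels.values())]
    scored = [(score_fn(g), g) for g in grid]
    return max(scored, key=lambda t: t[0])

# Toy response surface with a peak at k=20, threshold=0.5 (synthetic, not real data)
score = lambda g: -((g["k"] - 20) ** 2) - 100.0 * (g["threshold"] - 0.5) ** 2
best_score, best = full_factorial_best(
    {"k": [10, 20, 30], "threshold": [0.3, 0.5, 0.7]}, score)
```

In the paper's setting the `score_fn` evaluations would be fitted with a quadratic response surface model, whose optimum (possibly between grid points) is then found by convex optimisation.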
2005.08603
Leendert Remmelzwaal
Leendert A Remmelzwaal, Amit K Mishra, George F R Ellis
Brain-inspired Distributed Cognitive Architecture
null
null
null
null
q-bio.NC cs.NE
http://creativecommons.org/licenses/by-sa/4.0/
In this paper we present a brain-inspired cognitive architecture that incorporates sensory processing, classification, contextual prediction, and emotional tagging. The cognitive architecture is implemented as three modular web servers, meaning that it can be deployed centrally or across a network of servers. The experiments reveal two distinct modes of behaviour, namely high- and low-salience modes of operation, which closely model attention in the brain. In addition to modelling the cortex, we demonstrate that a bio-inspired architecture introduces processing efficiencies. The software has been published as an open-source platform and can be easily extended by future research teams. This research lays the foundations for bio-realistic attention direction and sensory selection, and we believe it is a key step towards achieving a bio-realistic artificial intelligence system.
[ { "created": "Mon, 18 May 2020 11:38:32 GMT", "version": "v1" } ]
2020-05-19
[ [ "Remmelzwaal", "Leendert A", "" ], [ "Mishra", "Amit K", "" ], [ "Ellis", "George F R", "" ] ]
In this paper we present a brain-inspired cognitive architecture that incorporates sensory processing, classification, contextual prediction, and emotional tagging. The cognitive architecture is implemented as three modular web servers, meaning that it can be deployed centrally or across a network of servers. The experiments reveal two distinct modes of behaviour, namely high- and low-salience modes of operation, which closely model attention in the brain. In addition to modelling the cortex, we demonstrate that a bio-inspired architecture introduces processing efficiencies. The software has been published as an open-source platform and can be easily extended by future research teams. This research lays the foundations for bio-realistic attention direction and sensory selection, and we believe it is a key step towards achieving a bio-realistic artificial intelligence system.
2008.01971
Lorenzo Zino
Mengbin Ye, Lorenzo Zino, Alessandro Rizzo and Ming Cao
Game-theoretic modeling of collective decision-making during epidemics
Under Review
Phys. Rev. E 104, 024314 (2021)
10.1103/PhysRevE.104.024314
null
q-bio.PE cs.GT cs.SY eess.SY math.DS physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The spreading dynamics of an epidemic and the collective behavioral pattern of the population over which it spreads are deeply intertwined, and the latter can critically shape the outcome of the former. Motivated by this, we design a parsimonious game-theoretic behavioral-epidemic model in which an interplay of realistic factors shapes the co-evolution of individual decision-making and epidemics on a network. Although such co-evolution is deeply intertwined in the real world, existing models schematize population behavior as instantaneously reactive and are thus unable to capture human behavior over the long term. Our model offers a unified framework to model and predict complex emergent phenomena, including successful collective responses, periodic oscillations, and resurgent epidemic outbreaks. The framework also allows one to assess the effectiveness of different policy interventions in ensuring a collective response that successfully eradicates the outbreak. Two case studies, inspired by real-world diseases, are presented to illustrate the potential of the proposed model.
[ { "created": "Wed, 5 Aug 2020 07:37:48 GMT", "version": "v1" }, { "created": "Thu, 19 Nov 2020 13:08:12 GMT", "version": "v2" }, { "created": "Thu, 13 May 2021 09:47:17 GMT", "version": "v3" }, { "created": "Mon, 19 Jul 2021 10:11:23 GMT", "version": "v4" } ]
2021-08-25
[ [ "Ye", "Mengbin", "" ], [ "Zino", "Lorenzo", "" ], [ "Rizzo", "Alessandro", "" ], [ "Cao", "Ming", "" ] ]
The spreading dynamics of an epidemic and the collective behavioral pattern of the population over which it spreads are deeply intertwined, and the latter can critically shape the outcome of the former. Motivated by this, we design a parsimonious game-theoretic behavioral-epidemic model in which an interplay of realistic factors shapes the co-evolution of individual decision-making and epidemics on a network. Although such co-evolution is deeply intertwined in the real world, existing models schematize population behavior as instantaneously reactive and are thus unable to capture human behavior over the long term. Our model offers a unified framework to model and predict complex emergent phenomena, including successful collective responses, periodic oscillations, and resurgent epidemic outbreaks. The framework also allows one to assess the effectiveness of different policy interventions in ensuring a collective response that successfully eradicates the outbreak. Two case studies, inspired by real-world diseases, are presented to illustrate the potential of the proposed model.
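The coupling described above, disease dynamics feeding into decision-making and back, can be illustrated with a toy system: SIS spreading whose transmission rate depends on the fraction of protectors, and replicator dynamics for protection adoption driven by the infection level. Everything concrete here (SIS rather than the paper's model, the 80% transmission cut, the payoff structure, all parameter values) is a generic assumption for illustration only.

```python
def step(x, s, i, beta, gamma, c_inf, c_prot, dt=0.01):
    """One Euler step of a toy coupled behaviour-epidemic model.
    x: fraction adopting self-protection (replicator dynamics);
    s, i: susceptible/infected fractions (SIS spreading, s + i = 1)."""
    beta_eff = beta * (1.0 - 0.8 * x)            # protection cuts transmission by 80%
    di = beta_eff * s * i - gamma * i            # SIS infection flow
    dx = x * (1.0 - x) * (c_inf * i - c_prot)    # protect when infection risk is costly
    return x + dt * dx, s - dt * di, i + dt * di

x, s, i = 0.1, 0.9, 0.1
for _ in range(2000):
    x, s, i = step(x, s, i, beta=0.5, gamma=0.2, c_inf=5.0, c_prot=0.5)
```

Even this caricature exhibits the feedback loop the abstract emphasises: rising infection raises the payoff for protecting, which lowers the effective transmission rate, which in turn changes the infection level that drives behaviour.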
1111.6573
Nicholas Cain
Nicholas Cain, Andrea K. Barreiro, Michael Shadlen, Eric Shea-Brown
Neural integrators for decision making: A favorable tradeoff between robustness and sensitivity
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A key step in many perceptual decision tasks is the integration of sensory inputs over time, but fundamental questions remain about how this is accomplished in neural circuits. One possibility is to balance decay modes of membranes and synapses with recurrent excitation. To allow integration over long timescales, however, this balance must be precise; this is known as the fine tuning problem. The need for fine tuning can be overcome via a ratchet-like mechanism, in which momentary inputs must be above a preset limit to be registered by the circuit. The degree of this ratcheting embodies a tradeoff between sensitivity to the input stream and robustness against parameter mistuning. The goal of our study is to analyze the consequences of this tradeoff for decision making performance. For concreteness, we focus on the well-studied random dot motion discrimination task. For stimulus parameters constrained by experimental data, we find that loss of sensitivity to inputs has surprisingly little cost for decision performance. This leads robust integrators to performance gains when feedback becomes mistuned. Moreover, we find that substantially robust and mistuned integrator models remain consistent with chronometric and accuracy functions found in experiments. We explain our findings via sequential analysis of the momentary and integrated signals, and discuss their implication: robust integrators may be surprisingly well-suited to subserve the basic function of evidence integration in many cognitive tasks.
[ { "created": "Mon, 28 Nov 2011 20:23:29 GMT", "version": "v1" } ]
2011-11-29
[ [ "Cain", "Nicholas", "" ], [ "Barreiro", "Andrea K.", "" ], [ "Shadlen", "Michael", "" ], [ "Shea-Brown", "Eric", "" ] ]
A key step in many perceptual decision tasks is the integration of sensory inputs over time, but fundamental questions remain about how this is accomplished in neural circuits. One possibility is to balance decay modes of membranes and synapses with recurrent excitation. To allow integration over long timescales, however, this balance must be precise; this is known as the fine tuning problem. The need for fine tuning can be overcome via a ratchet-like mechanism, in which momentary inputs must be above a preset limit to be registered by the circuit. The degree of this ratcheting embodies a tradeoff between sensitivity to the input stream and robustness against parameter mistuning. The goal of our study is to analyze the consequences of this tradeoff for decision making performance. For concreteness, we focus on the well-studied random dot motion discrimination task. For stimulus parameters constrained by experimental data, we find that loss of sensitivity to inputs has surprisingly little cost for decision performance. This leads robust integrators to performance gains when feedback becomes mistuned. Moreover, we find that substantially robust and mistuned integrator models remain consistent with chronometric and accuracy functions found in experiments. We explain our findings via sequential analysis of the momentary and integrated signals, and discuss their implication: robust integrators may be surprisingly well-suited to subserve the basic function of evidence integration in many cognitive tasks.
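The ratchet mechanism described above, registering only momentary inputs above a preset limit, can be caricatured in a few lines; this is a sketch of the sensitivity/robustness tradeoff, not the paper's neural circuit model, and the threshold name `theta` is an assumption.

```python
def ratchet_integrate(inputs, theta):
    """Accumulate evidence while ignoring momentary inputs whose magnitude
    is below the preset limit theta. Larger theta trades sensitivity to
    weak evidence for robustness against parameter mistuning (which would
    otherwise make a finely tuned integrator drift)."""
    total = 0.0
    for u in inputs:
        if abs(u) >= theta:   # only supra-threshold samples are registered
            total += u
    return total

stream = [0.1, 0.5, -0.2, 0.6, 0.05]
fine = ratchet_integrate(stream, 0.0)    # perfect integrator: sums everything
robust = ratchet_integrate(stream, 0.3)  # ratchet: keeps only 0.5 and 0.6
```

The paper's finding, in these terms, is that for realistic stimulus statistics the evidence discarded by a moderate `theta` costs surprisingly little decision accuracy.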
2311.09229
Srilekha Mamidala
Srilekha Mamidala
Developing a Novel Holistic, Personalized Dementia Risk Prediction Model via Integration of Machine Learning and Network Systems Biology Approaches
null
null
null
null
q-bio.NC cs.CY cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The prevalence of dementia has increased over time as global life expectancy improves and populations age. An individual's risk of developing dementia is influenced by various genetic, lifestyle, and environmental factors, among others. Predicting dementia risk may enable individuals to employ mitigation strategies or lifestyle changes to delay dementia onset. Current computational approaches to dementia prediction return risk based only on narrow categories of variables and do not account for interactions between different risk variables. The proposed framework takes a holistic approach to dementia risk prediction and is the first to integrate tabular environmental pollution and lifestyle factor data from various sources with network systems biology-based genetic data. LightGBM gradient boosting was employed to validate the included factors. The approach models interactions between variables through an original weighted integration method coined Sysable, and multiple machine learning models were trained to reduce reliance on any single model. The developed approach surpassed existing dementia risk prediction approaches, with a sensitivity of 85%, specificity of 99%, geometric accuracy of 92%, and AUROC of 91.7%. A transfer learning model was implemented as well. De-biasing algorithms were run on the model via the AI Fairness 360 library, and the effects of demographic disparities on dementia prevalence were analyzed to highlight areas in need and promote equitable, accessible care. The resulting model was additionally integrated into a user-friendly app providing holistic predictions and personalized risk-mitigation strategies, supporting holistic computational dementia risk prediction for clinical use.
[ { "created": "Wed, 4 Oct 2023 02:47:29 GMT", "version": "v1" }, { "created": "Wed, 10 Jan 2024 21:08:59 GMT", "version": "v2" } ]
2024-01-12
[ [ "Mamidala", "Srilekha", "" ] ]
The prevalence of dementia has increased over time as global life expectancy improves and populations age. An individual's risk of developing dementia is influenced by various genetic, lifestyle, and environmental factors, among others. Predicting dementia risk may enable individuals to employ mitigation strategies or lifestyle changes to delay dementia onset. Current computational approaches to dementia prediction return risk based only on narrow categories of variables and do not account for interactions between different risk variables. The proposed framework takes a holistic approach to dementia risk prediction and is the first to integrate tabular environmental pollution and lifestyle factor data from various sources with network systems biology-based genetic data. LightGBM gradient boosting was employed to validate the included factors. The approach models interactions between variables through an original weighted integration method coined Sysable, and multiple machine learning models were trained to reduce reliance on any single model. The developed approach surpassed existing dementia risk prediction approaches, with a sensitivity of 85%, specificity of 99%, geometric accuracy of 92%, and AUROC of 91.7%. A transfer learning model was implemented as well. De-biasing algorithms were run on the model via the AI Fairness 360 library, and the effects of demographic disparities on dementia prevalence were analyzed to highlight areas in need and promote equitable, accessible care. The resulting model was additionally integrated into a user-friendly app providing holistic predictions and personalized risk-mitigation strategies, supporting holistic computational dementia risk prediction for clinical use.
2009.00228
Tom Chou
Yue Wang, Renaud Dessalles, and Tom Chou
Modeling the impact of birth control policies on China's population and age: effects of delayed births and minimum birth age constraints
17 pages, 9 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
We consider age-structured models with an imposed refractory period between births. These models can be used to formulate alternative population control strategies to China's one-child policy. By allowing any number of births, but with an imposed delay between births, we show how the total population can be decreased and how a relatively older age distribution can be generated. This delay represents a more "continuous" form of population management for which the strict one-child policy is a limiting case. Such a policy approach could be more easily accepted by society. Our analyses provide an initial framework for studying demographics and how social constraints influence population structure.
[ { "created": "Tue, 1 Sep 2020 05:02:02 GMT", "version": "v1" }, { "created": "Sat, 2 Apr 2022 01:09:42 GMT", "version": "v2" } ]
2022-04-05
[ [ "Wang", "Yue", "" ], [ "Dessalles", "Renaud", "" ], [ "Chou", "Tom", "" ] ]
We consider age-structured models with an imposed refractory period between births. These models can be used to formulate alternative population control strategies to China's one-child policy. By allowing any number of births, but with an imposed delay between births, we show how the total population can be decreased and how a relatively older age distribution can be generated. This delay represents a more "continuous" form of population management for which the strict one-child policy is a limiting case. Such a policy approach could be more easily accepted by society. Our analyses provide an initial framework for studying demographics and how social constraints influence population structure.
1604.03041
Davide Michieletto
Davide Michieletto, Davide Marenduzzo and Ajazul H. Wani
Chromosome-wide simulations uncover folding pathway and 3D organization of interphase chromosomes
20 pages; text + SI; submitted version; supplementary movies can be found at http://www2.ph.ed.ac.uk/~dmichiel/
null
null
null
q-bio.SC cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Three-dimensional interphase organization of metazoan genomes has been linked to cellular identity. However, the principles governing 3D interphase genome architecture and its faithful transmission through disruptive cell-cycle events, such as mitosis, are not fully understood. Using Brownian dynamics simulations of Drosophila chromosome 3R up to time scales of minutes, we show that the chromatin-binding profile of Polycomb repressive complex 1 robustly predicts a subset of topologically associated domains (TADs), and that inclusion of other factors recapitulates the profile of all TADs, as observed experimentally. Our simulations show that chromosome 3R attains its interphase organization from the mitotic state by a two-step process in which the formation of local TADs is followed by long-range interactions. Our model also explains statistical features and tracks the assembly kinetics of Polycomb subnuclear clusters. In conclusion, our approach can be used to predict structural and kinetic features of 3D chromosome folding and its associated proteins on biologically relevant genomic and time scales.
[ { "created": "Mon, 11 Apr 2016 17:36:32 GMT", "version": "v1" } ]
2016-04-12
[ [ "Michieletto", "Davide", "" ], [ "Marenduzzo", "Davide", "" ], [ "Wani", "Ajazul H.", "" ] ]
Three-dimensional interphase organization of metazoan genomes has been linked to cellular identity. However, the principles governing 3D interphase genome architecture and its faithful transmission through disruptive cell-cycle events, such as mitosis, are not fully understood. Using Brownian dynamics simulations of Drosophila chromosome 3R up to time scales of minutes, we show that the chromatin-binding profile of Polycomb repressive complex 1 robustly predicts a subset of topologically associated domains (TADs), and that inclusion of other factors recapitulates the profile of all TADs, as observed experimentally. Our simulations show that chromosome 3R attains its interphase organization from the mitotic state by a two-step process in which the formation of local TADs is followed by long-range interactions. Our model also explains statistical features and tracks the assembly kinetics of Polycomb subnuclear clusters. In conclusion, our approach can be used to predict structural and kinetic features of 3D chromosome folding and its associated proteins on biologically relevant genomic and time scales.
1708.06922
Suman Kumar Banik
Ayan Biswas and Suman K Banik
Interplay of synergy and redundancy in diamond motif
Revised version, 23 pages, 8 figures
Chaos 28 (2018) 103102
10.1063/1.5044606
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The formalism of partial information decomposition provides independent, non-overlapping components of the total information that a set of source variables carries about a target variable. These components are recognised as unique, synergistic, and redundant information. The metric of net synergy, defined as the difference between synergistic and redundant information, can detect synergy, redundancy, and informational independence among stochastic variables, and it can be quantified, as done here, using appropriate combinations of different Shannon mutual information terms. Applying this metric to network motifs whose nodes represent different biochemical species involved in information sharing uncovers a rich store of interesting results. In the current study, we use this formalism to obtain a comprehensive understanding of the relative information processing mechanisms in a diamond motif and two of its sub-motifs, namely the bifurcation and integration motifs embedded within the diamond motif. The emerging patterns of synergy and redundancy, and their effective contributions towards ensuring high-fidelity information transmission, are compared between the sub-motifs and the corresponding independent motifs (bifurcation and integration). In this context, the crucial roles played by the various time scales and activation coefficients in the network topologies are especially emphasised. We show that the origin of synergy and redundancy in information transmission can be physically justified by decomposing the diamond motif into bifurcation and integration motifs.
[ { "created": "Wed, 23 Aug 2017 09:07:48 GMT", "version": "v1" }, { "created": "Fri, 14 Sep 2018 05:21:55 GMT", "version": "v2" } ]
2018-10-09
[ [ "Biswas", "Ayan", "" ], [ "Banik", "Suman K", "" ] ]
The formalism of partial information decomposition provides independent or non-overlapping components constituting the total information content provided by a set of source variables about the target variable. These components are recognised as unique information, synergistic information and redundant information. The metric of net synergy, conceived as the difference between synergistic and redundant information, is capable of detecting synergy, redundancy and information independence among stochastic variables. It can be quantified, as is done here, using appropriate combinations of different Shannon mutual information terms. Utilisation of such a metric in network motifs, with the nodes representing different biochemical species involved in information sharing, uncovers a rich store of interesting results. In the current study, we make use of this formalism to obtain a comprehensive understanding of the relative information processing mechanisms in a diamond motif and two of its sub-motifs, namely the bifurcation and integration motifs embedded within the diamond motif. The emerging patterns of synergy and redundancy and their effective contribution towards ensuring high-fidelity information transmission are duly compared in the sub-motifs and the independent motifs (bifurcation and integration). In this context, the crucial roles played by various time scales and activation coefficients in the network topologies are especially emphasised. We show that the origin of synergy and redundancy in information transmission can be physically justified by decomposing the diamond motif into bifurcation and integration motifs.
1410.0208
Estelle Pitard
G. Huth, A. Lesne, F. Munoz, E. Pitard
Correlated percolation models of structured habitat in ecology
null
Physica A, 416, 290 (2014)
10.1016/j.physa.2014.08.006
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Percolation offers acknowledged models of random media when the relevant medium characteristics can be described as a binary feature. However, when considering habitat modeling in ecology, a natural constraint comes from nearest-neighbor correlations between the suitable/unsuitable states of the spatial units forming the habitat. Such constraints are also relevant in the physics of aggregation where underlying processes may lead to a form of correlated percolation. However, in ecology, the processes leading to habitat correlations are in general not known or very complex. As proposed by Hiebeler [Ecology {\bf 81}, 1629 (2000)], these correlations can be captured in a lattice model by an observable aggregation parameter $q$, supplementing the density $p$ of suitable sites. We investigate this model as an instance of correlated percolation. We analyze the phase diagram of the percolation transition and compute the cluster size distribution, the pair-connectedness function $C(r)$ and the correlation function $g(r)$. We find that while $g(r)$ displays a power-law decrease associated with long-range correlations in a wide domain of parameter values, critical properties are compatible with the universality class of uncorrelated percolation. We contrast the correlation structures obtained respectively for the correlated percolation model and for the Ising model, and show that the diversity of habitat configurations generated by the Hiebeler model is richer than the archetypal Ising model. We also find that emergent structural properties are peculiar to the implemented algorithm, calling into question the notion of a well-defined model of aggregated habitat. We conclude that the choice of model and algorithm has strong consequences on what insights ecological studies can get using such models of species habitat.
[ { "created": "Wed, 1 Oct 2014 12:55:28 GMT", "version": "v1" } ]
2015-06-23
[ [ "Huth", "G.", "" ], [ "Lesne", "A.", "" ], [ "Munoz", "F.", "" ], [ "Pitard", "E.", "" ] ]
Percolation offers acknowledged models of random media when the relevant medium characteristics can be described as a binary feature. However, when considering habitat modeling in ecology, a natural constraint comes from nearest-neighbor correlations between the suitable/unsuitable states of the spatial units forming the habitat. Such constraints are also relevant in the physics of aggregation where underlying processes may lead to a form of correlated percolation. However, in ecology, the processes leading to habitat correlations are in general not known or very complex. As proposed by Hiebeler [Ecology {\bf 81}, 1629 (2000)], these correlations can be captured in a lattice model by an observable aggregation parameter $q$, supplementing the density $p$ of suitable sites. We investigate this model as an instance of correlated percolation. We analyze the phase diagram of the percolation transition and compute the cluster size distribution, the pair-connectedness function $C(r)$ and the correlation function $g(r)$. We find that while $g(r)$ displays a power-law decrease associated with long-range correlations in a wide domain of parameter values, critical properties are compatible with the universality class of uncorrelated percolation. We contrast the correlation structures obtained respectively for the correlated percolation model and for the Ising model, and show that the diversity of habitat configurations generated by the Hiebeler model is richer than the archetypal Ising model. We also find that emergent structural properties are peculiar to the implemented algorithm, calling into question the notion of a well-defined model of aggregated habitat. We conclude that the choice of model and algorithm has strong consequences on what insights ecological studies can get using such models of species habitat.
2406.19744
Yang Tan
Yang Tan, Jia Zheng, Liang Hong, Bingxin Zhou
ProtSolM: Protein Solubility Prediction with Multi-modal Features
10 pages, 7 figures, 9 tables
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Understanding protein solubility is essential for their functional applications. Computational methods for predicting protein solubility are crucial for reducing experimental costs and enhancing the efficiency and success rates of protein engineering. Existing methods either construct a supervised learning scheme on small-scale datasets with manually processed physicochemical properties, or blindly apply pre-trained protein language models to extract amino acid interaction information. The scale and quality of available training datasets leave significant room for improvement in terms of accuracy and generalization. To address these research gaps, we propose ProtSolM, a novel deep learning method that combines pre-training and fine-tuning schemes for protein solubility prediction. ProtSolM integrates information from multiple dimensions, including physicochemical properties, amino acid sequences, and protein backbone structures. Our model is trained using PDBSol, the largest solubility dataset that we have constructed. PDBSol includes over $60,000$ protein sequences and structures. We provide a comprehensive leaderboard of existing statistical learning and deep learning methods on independent datasets with computational and experimental labels. ProtSolM achieved state-of-the-art performance across various evaluation metrics, demonstrating its potential to significantly advance the accuracy of protein solubility prediction.
[ { "created": "Fri, 28 Jun 2024 08:31:46 GMT", "version": "v1" } ]
2024-07-01
[ [ "Tan", "Yang", "" ], [ "Zheng", "Jia", "" ], [ "Hong", "Liang", "" ], [ "Zhou", "Bingxin", "" ] ]
Understanding protein solubility is essential for their functional applications. Computational methods for predicting protein solubility are crucial for reducing experimental costs and enhancing the efficiency and success rates of protein engineering. Existing methods either construct a supervised learning scheme on small-scale datasets with manually processed physicochemical properties, or blindly apply pre-trained protein language models to extract amino acid interaction information. The scale and quality of available training datasets leave significant room for improvement in terms of accuracy and generalization. To address these research gaps, we propose ProtSolM, a novel deep learning method that combines pre-training and fine-tuning schemes for protein solubility prediction. ProtSolM integrates information from multiple dimensions, including physicochemical properties, amino acid sequences, and protein backbone structures. Our model is trained using PDBSol, the largest solubility dataset that we have constructed. PDBSol includes over $60,000$ protein sequences and structures. We provide a comprehensive leaderboard of existing statistical learning and deep learning methods on independent datasets with computational and experimental labels. ProtSolM achieved state-of-the-art performance across various evaluation metrics, demonstrating its potential to significantly advance the accuracy of protein solubility prediction.
0708.2931
Douglas Galvao
Fernando Sato, Scheila F. Braga, Helio F. dos Santos, and Douglas S. Galvao
Structure-Activity Relationship Investigation of Some New Tetracyclines by Electronic Index Methodology
18 pages, 8 figures
null
null
null
q-bio.BM physics.bio-ph
null
Tetracyclines are an old class of molecules that constitute broad-spectrum antibiotics. Since the first members of the tetracycline family were isolated, the clinical importance of these compounds as therapeutic and prophylactic agents against a wide range of infections has stimulated efforts to define their mode of action as inhibitors of bacterial reproduction. We used three SAR methodologies for the analysis of the biological activity of a set of 104 tetracycline compounds. Our calculations were carried out using the semi-empirical Austin Method One (AM1) and Parametric Method 3 (PM3). The Electronic Indices Methodology (EIM), Principal Component Analysis (PCA) and Artificial Neural Networks (ANN) were applied to the classification of 14 old and 90 newly proposed derivatives of tetracyclines. Our results make evident the importance of EIM descriptors in pattern recognition and also show that the EIM can be effectively used to predict the biological activity of tetracyclines.
[ { "created": "Tue, 21 Aug 2007 21:28:41 GMT", "version": "v1" } ]
2007-09-13
[ [ "Sato", "Fernando", "" ], [ "Braga", "Scheila F.", "" ], [ "Santos", "Helio F. dos", "" ], [ "Galvao", "Douglas S.", "" ] ]
Tetracyclines are an old class of molecules that constitute broad-spectrum antibiotics. Since the first members of the tetracycline family were isolated, the clinical importance of these compounds as therapeutic and prophylactic agents against a wide range of infections has stimulated efforts to define their mode of action as inhibitors of bacterial reproduction. We used three SAR methodologies for the analysis of the biological activity of a set of 104 tetracycline compounds. Our calculations were carried out using the semi-empirical Austin Method One (AM1) and Parametric Method 3 (PM3). The Electronic Indices Methodology (EIM), Principal Component Analysis (PCA) and Artificial Neural Networks (ANN) were applied to the classification of 14 old and 90 newly proposed derivatives of tetracyclines. Our results make evident the importance of EIM descriptors in pattern recognition and also show that the EIM can be effectively used to predict the biological activity of tetracyclines.
1606.07160
Jonathan Mitchell Mr
Jonathan Mitchell
Distinguishing Convergence on Phylogenetic Networks
PhD Thesis
null
null
null
q-bio.PE math.AG math.RT q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We compare the phylogenetic tensors for various trees and networks for two, three and four taxa. If the probability spaces between one tree or network and another are not identical then there will be phylogenetic tensors that could have arisen on one but not the other. We call these two trees or networks distinguishable from each other. We show that for the binary symmetric model there are no two-taxon trees and networks that are distinguishable from each other; however, there are three-taxon trees and networks that are distinguishable from each other. We compare the time parameters for the phylogenetic tensors for various taxon label permutations on a given tree or network. If the time parameters on one taxon label permutation, expressed in terms of the other taxon label permutation, are all non-negative then we say that the two taxon label permutations are not network identifiable from each other. We show that some taxon label permutations are network identifiable from each other. We show that some four-taxon networks satisfy the four-point condition. Of the two "shapes" of four-taxon rooted trees, one is defined by the cluster b,c,d, labelling taxa alphabetically from left to right. The network with this shape and convergence between the two taxa with the root as their most recent common ancestor satisfies the four-point condition. The phylogenetic tensors contain polynomial equations that cannot be easily solved for trees or networks with four or more taxa. We show how methods from algebraic geometry, such as Gr\"obner bases, can be used to solve the polynomial equations. We show that some four-taxon trees and networks can be distinguished from each other.
[ { "created": "Thu, 23 Jun 2016 01:54:55 GMT", "version": "v1" } ]
2016-06-24
[ [ "Mitchell", "Jonathan", "" ] ]
We compare the phylogenetic tensors for various trees and networks for two, three and four taxa. If the probability spaces between one tree or network and another are not identical then there will be phylogenetic tensors that could have arisen on one but not the other. We call these two trees or networks distinguishable from each other. We show that for the binary symmetric model there are no two-taxon trees and networks that are distinguishable from each other; however, there are three-taxon trees and networks that are distinguishable from each other. We compare the time parameters for the phylogenetic tensors for various taxon label permutations on a given tree or network. If the time parameters on one taxon label permutation, expressed in terms of the other taxon label permutation, are all non-negative then we say that the two taxon label permutations are not network identifiable from each other. We show that some taxon label permutations are network identifiable from each other. We show that some four-taxon networks satisfy the four-point condition. Of the two "shapes" of four-taxon rooted trees, one is defined by the cluster b,c,d, labelling taxa alphabetically from left to right. The network with this shape and convergence between the two taxa with the root as their most recent common ancestor satisfies the four-point condition. The phylogenetic tensors contain polynomial equations that cannot be easily solved for trees or networks with four or more taxa. We show how methods from algebraic geometry, such as Gr\"obner bases, can be used to solve the polynomial equations. We show that some four-taxon trees and networks can be distinguished from each other.
2201.01018
Jiayu Shang
Jiayu Shang and Yanni Sun
CHERRY: a Computational metHod for accuratE pRediction of virus-pRokarYotic interactions using a graph encoder-decoder model
20 pages, 14 figures
Briefings in Bioinformatics, Volume 23, Issue 5, September 2022, bbac182
10.1093/bib/bbac182
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Prokaryotic viruses, which infect bacteria and archaea, are key players in microbial communities. Predicting the hosts of prokaryotic viruses helps decipher the dynamic relationships between microbes. Experimental methods for host prediction cannot keep pace with the fast accumulation of sequenced phages. Thus, there is a need for computational host prediction. Despite some promising results, computational host prediction remains a challenge because of the limited number of known interactions and the sheer number of phages sequenced by high-throughput technologies. The state-of-the-art methods can only achieve 43\% accuracy at the species level. In this work, we formulate host prediction as link prediction in a knowledge graph that integrates multiple protein- and DNA-based sequence features. Our implementation, named CHERRY, can be applied to predict hosts for newly discovered viruses and to identify viruses infecting targeted bacteria. We demonstrated the utility of CHERRY for both applications and compared its performance with 11 popular host prediction methods. To the best of our knowledge, CHERRY has the highest accuracy in identifying virus-prokaryote interactions. It outperforms all the existing methods at the species level with an accuracy increase of 37\%. In addition, CHERRY's performance on short contigs is more stable than that of other tools.
[ { "created": "Tue, 4 Jan 2022 07:32:00 GMT", "version": "v1" }, { "created": "Fri, 13 May 2022 06:04:33 GMT", "version": "v2" } ]
2023-01-02
[ [ "Shang", "Jiayu", "" ], [ "Sun", "Yanni", "" ] ]
Prokaryotic viruses, which infect bacteria and archaea, are key players in microbial communities. Predicting the hosts of prokaryotic viruses helps decipher the dynamic relationships between microbes. Experimental methods for host prediction cannot keep pace with the fast accumulation of sequenced phages. Thus, there is a need for computational host prediction. Despite some promising results, computational host prediction remains a challenge because of the limited number of known interactions and the sheer number of phages sequenced by high-throughput technologies. The state-of-the-art methods can only achieve 43\% accuracy at the species level. In this work, we formulate host prediction as link prediction in a knowledge graph that integrates multiple protein- and DNA-based sequence features. Our implementation, named CHERRY, can be applied to predict hosts for newly discovered viruses and to identify viruses infecting targeted bacteria. We demonstrated the utility of CHERRY for both applications and compared its performance with 11 popular host prediction methods. To the best of our knowledge, CHERRY has the highest accuracy in identifying virus-prokaryote interactions. It outperforms all the existing methods at the species level with an accuracy increase of 37\%. In addition, CHERRY's performance on short contigs is more stable than that of other tools.
2106.00790
SueYeon Chung
SueYeon Chung
Statistical Mechanics of Neural Processing of Object Manifolds
PhD thesis, Harvard University, Cambridge, Massachusetts, USA. 2017. Some chapters report joint work
null
null
null
q-bio.NC cond-mat.dis-nn cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Invariant object recognition is one of the most fundamental cognitive tasks performed by the brain. In the neural state space, different objects with stimulus variabilities are represented as different manifolds. In this geometrical perspective, object recognition becomes the problem of linearly separating different object manifolds. In the feedforward visual hierarchy, it has been suggested that the object manifold representations are reformatted across the layers to become more linearly separable. Thus, a complete theory of perception requires characterizing the ability of linear readout networks to classify object manifolds from variable neural responses. A theory of the perceptron of isolated points was pioneered by E. Gardner, who formulated it as a statistical mechanics problem and analyzed it using replica theory. In this thesis, we generalize Gardner's analysis and establish a theory of linear classification of manifolds synthesizing statistical and geometric properties of high-dimensional signals. [..] Next, we generalize our theory further to linear classification of general perceptual manifolds, such as point clouds. We identify that the capacity of a manifold is determined by its effective radius, R_M, and effective dimension, D_M. Finally, we show extensions relevant for applications to real data, incorporating correlated manifolds, heterogeneous manifold geometries, sparse labels and nonlinear classifications. Then, we demonstrate how object-based manifolds transform in standard deep networks. This thesis lays the groundwork for a computational theory of neuronal processing of objects, providing quantitative measures for the linear separability of object manifolds. We hope this theory will provide new insights into the computational principles underlying the processing of sensory representations in biological and artificial neural networks.
[ { "created": "Tue, 1 Jun 2021 20:49:14 GMT", "version": "v1" } ]
2021-06-03
[ [ "Chung", "SueYeon", "" ] ]
Invariant object recognition is one of the most fundamental cognitive tasks performed by the brain. In the neural state space, different objects with stimulus variabilities are represented as different manifolds. In this geometrical perspective, object recognition becomes the problem of linearly separating different object manifolds. In the feedforward visual hierarchy, it has been suggested that the object manifold representations are reformatted across the layers to become more linearly separable. Thus, a complete theory of perception requires characterizing the ability of linear readout networks to classify object manifolds from variable neural responses. A theory of the perceptron of isolated points was pioneered by E. Gardner, who formulated it as a statistical mechanics problem and analyzed it using replica theory. In this thesis, we generalize Gardner's analysis and establish a theory of linear classification of manifolds synthesizing statistical and geometric properties of high-dimensional signals. [..] Next, we generalize our theory further to linear classification of general perceptual manifolds, such as point clouds. We identify that the capacity of a manifold is determined by its effective radius, R_M, and effective dimension, D_M. Finally, we show extensions relevant for applications to real data, incorporating correlated manifolds, heterogeneous manifold geometries, sparse labels and nonlinear classifications. Then, we demonstrate how object-based manifolds transform in standard deep networks. This thesis lays the groundwork for a computational theory of neuronal processing of objects, providing quantitative measures for the linear separability of object manifolds. We hope this theory will provide new insights into the computational principles underlying the processing of sensory representations in biological and artificial neural networks.
2408.05967
Manuel Baum
Manuel Baum, Theresa Roessler, Antonio J. Osuna-Mascar\'o, Alice Auersperg, Oliver Brock
Mechanical problem solving in Goffin's cockatoos -- Towards modeling complex behavior
Accepted for publication at journal Adaptive Behavior with SAGE publishing
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Research continues to accumulate evidence that Goffin's cockatoos (Cacatua goffiniana) can solve a wide range of mechanical problems, such as tool use, tool manufacture, and solving mechanical puzzles. However, the proximate mechanisms underlying this adaptive behavior are largely unknown. In this study, we analyze how three Goffin's cockatoos learn to solve a specific mechanical puzzle, a lockbox. The observed behavior results from the interaction between a complex environment (the lockbox) and different processes that jointly govern the animals' behavior. We thus jointly analyze the parrots' (1) engagement, (2) sensorimotor skill learning, and (3) action selection. We find that none of these aspects alone could explain the animals' behavioral adaptation and that a plausible model of proximate mechanisms (including adaptation) should thus address these aspects jointly. We accompany this analysis with a discussion of methods that may be used to identify such mechanisms. A major point we want to make is that it is implausible to reliably identify a detailed model from the limited data of one or a few studies. Instead, we advocate for a coarser approach that first establishes constraints on proximate mechanisms before specific, detailed models are formulated. We exercise this idea on the data we present in this study.
[ { "created": "Mon, 12 Aug 2024 07:46:10 GMT", "version": "v1" } ]
2024-08-13
[ [ "Baum", "Manuel", "" ], [ "Roessler", "Theresa", "" ], [ "Osuna-Mascaró", "Antonio J.", "" ], [ "Auersperg", "Alice", "" ], [ "Brock", "Oliver", "" ] ]
Research continues to accumulate evidence that Goffin's cockatoos (Cacatua goffiniana) can solve a wide range of mechanical problems, such as tool use, tool manufacture, and solving mechanical puzzles. However, the proximate mechanisms underlying this adaptive behavior are largely unknown. In this study, we analyze how three Goffin's cockatoos learn to solve a specific mechanical puzzle, a lockbox. The observed behavior results from the interaction between a complex environment (the lockbox) and different processes that jointly govern the animals' behavior. We thus jointly analyze the parrots' (1) engagement, (2) sensorimotor skill learning, and (3) action selection. We find that none of these aspects alone could explain the animals' behavioral adaptation and that a plausible model of proximate mechanisms (including adaptation) should thus address these aspects jointly. We accompany this analysis with a discussion of methods that may be used to identify such mechanisms. A major point we want to make is that it is implausible to reliably identify a detailed model from the limited data of one or a few studies. Instead, we advocate for a coarser approach that first establishes constraints on proximate mechanisms before specific, detailed models are formulated. We exercise this idea on the data we present in this study.
q-bio/0601032
Can Ozan Tan Mr.
Uygar Ozesmi
Fuzzy Cognitive Maps Of Local People Impacted By Dam Construction: Their Demands Regarding Resettlement
23 pages, 4 tables, 7 figures
null
null
null
q-bio.NC q-bio.QM
null
Fuzzy cognitive mapping was used to understand the wants and desires of local people before resettlement. Variables that the affected people think will increase their welfare during and after dam construction were determined. Simulations were done with their cumulative social cognitive map to determine which policy options would most increase their welfare. The construction of roads, job opportunities, advance payment of condemnation value, and schools are central variables that had the greatest effect on increasing people's income and welfare. The synergistic effects of variables demonstrated that the implementation of different policies not only adds cumulatively to the people's welfare but also has an increased effect.
[ { "created": "Sat, 21 Jan 2006 19:20:07 GMT", "version": "v1" } ]
2007-05-23
[ [ "Ozesmi", "Uygar", "" ] ]
Fuzzy cognitive mapping was used to understand the wants and desires of local people before resettlement. Variables that the affected people think will increase their welfare during and after dam construction were determined. Simulations were done with their cumulative social cognitive map to determine which policy options would most increase their welfare. The construction of roads, job opportunities, advance payment of condemnation value, and schools are central variables that had the greatest effect on increasing people's income and welfare. The synergistic effects of variables demonstrated that the implementation of different policies not only adds cumulatively to the people's welfare but also has an increased effect.
1803.01826
Navit Dori
N. Dori, H. Behar, H. Brot, and Y. Louzoun
Family-size variability grows with collapse rate in Birth-Death-Catastrophe model
6 pages, 4 figures, plus Supplemental Material: 8 pages, 6 figures
Phys. Rev. E 98, 012416 (2018)
10.1103/PhysRevE.98.012416
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Forest-fire and avalanche models support the notion that frequent catastrophes prevent the growth of very large populations and as such prevent rare large-scale catastrophes. We show that this notion is not universal. A new model class leads to a paradigm shift in the influence of catastrophes on the family-size distribution of sub-populations. We study a simple population dynamics model where individuals, as well as whole families, may die with a constant probability, accompanied by a logistic population growth model. We compute the characteristics of the family-size distribution in steady state and the phase diagram of the steady-state distribution, and show that the family- and catastrophe-size variances increase with the catastrophe frequency, which is the opposite of common intuition. Frequent catastrophes are balanced by a larger net growth rate in surviving families, leading to the exponential growth of these families. When the catastrophe rate is further increased, a second phase transition to extinction occurs, when the rate at which new families are created is lower than the rate at which they are destroyed by catastrophes.
[ { "created": "Mon, 5 Mar 2018 18:43:51 GMT", "version": "v1" } ]
2018-08-08
[ [ "Dori", "N.", "" ], [ "Behar", "H.", "" ], [ "Brot", "H.", "" ], [ "Louzoun", "Y.", "" ] ]
Forest-fire and avalanche models support the notion that frequent catastrophes prevent the growth of very large populations and as such prevent rare large-scale catastrophes. We show that this notion is not universal. A new model class leads to a paradigm shift in the influence of catastrophes on the family-size distribution of sub-populations. We study a simple population dynamics model where individuals, as well as whole families, may die with a constant probability, accompanied by a logistic population growth model. We compute the characteristics of the family-size distribution in steady state and the phase diagram of the steady-state distribution, and show that the family- and catastrophe-size variances increase with the catastrophe frequency, which is the opposite of common intuition. Frequent catastrophes are balanced by a larger net growth rate in surviving families, leading to the exponential growth of these families. When the catastrophe rate is further increased, a second phase transition to extinction occurs, when the rate at which new families are created is lower than the rate at which they are destroyed by catastrophes.
0910.2903
Andrea Giansanti (Mr.)
Antonio Deiana, Andrea Giansanti
Combining predictors of natively unfolded proteins to detect a twilight zone between order and disorder in generic datasets
The title has been changed to make more clear the content of the paper, and some previously misprinted formulas have been fixed. A slightly different version of this manuscript has been submitted to BMC Bioinformatics
null
null
null
q-bio.BM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Natively unfolded proteins lack a well-defined three-dimensional structure but have important biological functions, suggesting a re-assignment of the structure-function paradigm. Many proteins have amino acid compositions compatible with both the folded and unfolded states, and belong to a twilight zone between order and disorder. This makes a dichotomic classification of protein sequences into folded and natively unfolded ones difficult. In this methodological paper dichotomic folding indexes are considered: hydrophobicity-charge, mean packing, mean pairwise energy, Poodle-W and a new global index, called here gVSL2, based on the local disorder predictor VSL2. The performance of these indexes is evaluated on different datasets. Poodle-W, gVSL2 and mean pairwise energy have good performance and stability on all the datasets considered and are combined into a strictly unanimous combination score SSU, which leaves proteins unclassified when the consensus of all combined indexes is not reached. The unclassified proteins: i) belong to an overlap region in the vector space of amino acid compositions occupied by both folded and unfolded proteins; ii) are composed of approximately the same number of order-promoting and disorder-promoting amino acids; iii) have a mean flexibility intermediate between that of folded and that of unfolded proteins. These proteins plausibly have physical properties intermediate between those of folded and those of natively unfolded proteins, and their structural properties and evolutionary history are worth investigating.
[ { "created": "Thu, 15 Oct 2009 15:18:00 GMT", "version": "v1" }, { "created": "Mon, 19 Oct 2009 10:17:44 GMT", "version": "v2" } ]
2016-09-08
[ [ "Deiana", "Antonio", "" ], [ "Giansanti", "Andrea", "" ] ]
Natively unfolded proteins lack a well-defined three-dimensional structure but have important biological functions, suggesting a re-assignment of the structure-function paradigm. Many proteins have amino-acid compositions compatible with both the folded and unfolded states, and belong to a twilight zone between order and disorder. This makes a dichotomic classification of protein sequences into folded and natively unfolded ones difficult. In this methodological paper, dichotomic folding indexes are considered: hydrophobicity-charge, mean packing, mean pairwise energy, Poodle-W and a new global index, here called gVSL2, based on the local disorder predictor VSL2. The performance of these indexes is evaluated on different datasets. Poodle-W, gVSL2 and mean pairwise energy have good performance and stability in all the datasets considered and are combined into a strictly unanimous combination score SSU, which leaves proteins unclassified when consensus among all combined indexes is not reached. The unclassified proteins: i) belong to an overlap region in the vector space of amino-acid compositions occupied by both folded and unfolded proteins; ii) are composed of approximately the same number of order-promoting and disorder-promoting amino acids; iii) have a mean flexibility intermediate between that of folded and that of unfolded proteins. These proteins plausibly have physical properties intermediate between those of folded and those of natively unfolded proteins, and their structural properties and evolutionary history are worth investigating.
2405.12144
Yihan Wu
Yihan Wu, Tao Chang, Siliang Chen, Xiaodong Niu, Yu Li, Yuan Fang, Lei Yang, Yixuan Zong, Yaoxin Yang, Yuehua Li, Mengsong Wang, Wen Yang, Yixuan Wu, Chen Fu, Xia Fang, Yuxin Quan, Xilin Peng, Qiang Sun, Marc M. Van Hulle, Yanhui Liu, Ning Jiang, Dario Farina, Yuan Yang, Jiayuan He, and Qing Mao
Alterations of electrocortical activity during hand movements induced by motor cortex glioma
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Glioma cells can reshape functional neuronal networks by hijacking neuronal synapses, leading to partial or complete neurological dysfunction. These mechanisms have been previously explored for language functions. However, the impact of glioma on sensorimotor functions is still unknown. Therefore, we recruited a control group of patients with unaffected motor cortex and a group of patients with glioma-infiltrated motor cortex, and recorded high-density electrocortical signals during finger movement tasks. The results showed that glioma suppresses task-related synchronization in the high-gamma band and reduces the power across all frequency bands. The resulting atypical motor information transmission model with discrete signaling pathways and delayed responses disrupts the stability of neuronal encoding patterns for finger movement kinematics across various temporal-spatial scales. These findings demonstrate that gliomas functionally invade neural circuits within the motor cortex. This result advances our understanding of motor function processing in chronic disease states, which is important to advance the surgical strategies and neurorehabilitation approaches for patients with malignant gliomas.
[ { "created": "Mon, 20 May 2024 16:13:07 GMT", "version": "v1" } ]
2024-05-21
[ [ "Wu", "Yihan", "" ], [ "Chang", "Tao", "" ], [ "Chen", "Siliang", "" ], [ "Niu", "Xiaodong", "" ], [ "Li", "Yu", "" ], [ "Fang", "Yuan", "" ], [ "Yang", "Lei", "" ], [ "Zong", "Yixuan", "" ], [ "Yang", "Yaoxin", "" ], [ "Li", "Yuehua", "" ], [ "Wang", "Mengsong", "" ], [ "Yang", "Wen", "" ], [ "Wu", "Yixuan", "" ], [ "Fu", "Chen", "" ], [ "Fang", "Xia", "" ], [ "Quan", "Yuxin", "" ], [ "Peng", "Xilin", "" ], [ "Sun", "Qiang", "" ], [ "Van Hulle", "Marc M.", "" ], [ "Liu", "Yanhui", "" ], [ "Jiang", "Ning", "" ], [ "Farina", "Dario", "" ], [ "Yang", "Yuan", "" ], [ "He", "Jiayuan", "" ], [ "Mao", "Qing", "" ] ]
Glioma cells can reshape functional neuronal networks by hijacking neuronal synapses, leading to partial or complete neurological dysfunction. These mechanisms have been previously explored for language functions. However, the impact of glioma on sensorimotor functions is still unknown. Therefore, we recruited a control group of patients with unaffected motor cortex and a group of patients with glioma-infiltrated motor cortex, and recorded high-density electrocortical signals during finger movement tasks. The results showed that glioma suppresses task-related synchronization in the high-gamma band and reduces the power across all frequency bands. The resulting atypical motor information transmission model with discrete signaling pathways and delayed responses disrupts the stability of neuronal encoding patterns for finger movement kinematics across various temporal-spatial scales. These findings demonstrate that gliomas functionally invade neural circuits within the motor cortex. This result advances our understanding of motor function processing in chronic disease states, which is important to advance the surgical strategies and neurorehabilitation approaches for patients with malignant gliomas.
1809.06530
Liane Gabora
Burton Voorhees, Dwight Read, and Liane Gabora
Identity, Kinship, and the Evolution of Cooperation
55 pages; Accepted for publication in Current Anthropology
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extensive cooperation among biologically unrelated individuals is uniquely human and much current research attempts to explain this fact. We draw upon social, cultural, and psychological aspects of human uniqueness to present an integrated theory of human cooperation that explains aspects of human cooperation that are problematic for other theories (e.g., defector invasion avoidance, preferential assortment to exclude free riders, and the second order free rider problem). We propose that the evolution of human cooperative behavior required (1) a capacity for self-sustained, self-referential thought manifested as an integrated worldview, including a sense of identity and point of view, and (2) the cultural formation of kinship-based social organizational systems within which social identities can be established and transmitted through enculturation. Human cooperative behavior arose, we argue, through the acquisition of a culturally grounded social identity that included the expectation of cooperation among kin. This identity is linked to basic survival instincts by emotions that are mentally experienced as culture laden feelings. As a consequence, individuals are motivated to cooperate with those perceived culturally as kin, while deviations from expected social behavior are experienced as threatening to one's social identity, leading to punishment of those seen as violating cultural expectations regarding socially proper behavior.
[ { "created": "Tue, 18 Sep 2018 04:50:02 GMT", "version": "v1" } ]
2018-09-19
[ [ "Voorhees", "Burton", "" ], [ "Read", "Dwight", "" ], [ "Gabora", "Liane", "" ] ]
Extensive cooperation among biologically unrelated individuals is uniquely human and much current research attempts to explain this fact. We draw upon social, cultural, and psychological aspects of human uniqueness to present an integrated theory of human cooperation that explains aspects of human cooperation that are problematic for other theories (e.g., defector invasion avoidance, preferential assortment to exclude free riders, and the second order free rider problem). We propose that the evolution of human cooperative behavior required (1) a capacity for self-sustained, self-referential thought manifested as an integrated worldview, including a sense of identity and point of view, and (2) the cultural formation of kinship-based social organizational systems within which social identities can be established and transmitted through enculturation. Human cooperative behavior arose, we argue, through the acquisition of a culturally grounded social identity that included the expectation of cooperation among kin. This identity is linked to basic survival instincts by emotions that are mentally experienced as culture laden feelings. As a consequence, individuals are motivated to cooperate with those perceived culturally as kin, while deviations from expected social behavior are experienced as threatening to one's social identity, leading to punishment of those seen as violating cultural expectations regarding socially proper behavior.
1201.0535
Gianluca Gabbriellini
Gianluca Gabbriellini
Non Standard Finite Difference Scheme for Mutualistic Interaction Description
12 pages, 4 figures
International Journal of Difference Equations, ISSN 0973-6069, Volume 9, N. 2, pp. 147-161 (2014)
null
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the more interesting themes in mathematical ecology is the description of the mutualistic interaction between two interacting species. Based on the continuous-time model developed by Holland and DeAngelis (2009) for the description of consumer-resource mutualism, this work applies the Mickens Non-Standard Finite Difference method to transform the continuous-time scheme into a discrete-time one. It is proved that the Mickens scheme is dynamically consistent with the original one regardless of the step sizes used in numerical simulations, in contrast to the forward Euler method, which shows numerical instabilities when the step size exceeds a critical value.
[ { "created": "Mon, 2 Jan 2012 21:13:41 GMT", "version": "v1" }, { "created": "Fri, 2 Jan 2015 12:12:33 GMT", "version": "v2" } ]
2015-01-05
[ [ "Gabbriellini", "Gianluca", "" ] ]
One of the more interesting themes in mathematical ecology is the description of the mutualistic interaction between two interacting species. Based on the continuous-time model developed by Holland and DeAngelis (2009) for the description of consumer-resource mutualism, this work applies the Mickens Non-Standard Finite Difference method to transform the continuous-time scheme into a discrete-time one. It is proved that the Mickens scheme is dynamically consistent with the original one regardless of the step sizes used in numerical simulations, in contrast to the forward Euler method, which shows numerical instabilities when the step size exceeds a critical value.
2405.09216
Alexander Ioannidis
Consuelo D. Quinto-Cort\'es, Carmina Barberena Jonas, Sof\'ia Vieyra-S\'anchez, Stephen Oppenheimer, Ram Gonz\'alez-Buenfil, Kathryn Auckland, Kathryn Robson, Tom Parks, J. V\'ictor Moreno-Mayar, Javier Blanco-Portillo, Julian R. Homburger, Genevieve L. Wojcik, Alissa L. Severson, Jonathan S. Friedlaender, Francoise Friedlaender, Angela Allen, Stephen Allen, Mark Stoneking, Adrian V. S. Hill, George Aho, George Koki, William Pomat, Carlos D. Bustamante, Maude Phipps, Alexander J. Mentzer, Andr\'es Moreno-Estrada, Alexander G. Ioannidis
The Genomic Landscape of Oceania
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Encompassing regions that were amongst the first inhabited by humans following the out-of-Africa expansion, hosting populations with the highest levels of archaic hominid introgression, and including Pacific islands that are the most isolated inhabited locations on the planet, Oceania has a rich, but understudied, human genomic landscape. Here we describe the first region-wide analysis of genome-wide data from population groups spanning Oceania and its surroundings, from island and peninsular southeast Asia to Papua New Guinea, east across the Pacific through Melanesia, Micronesia, and Polynesia, and west across the Indian Ocean to related island populations in the Andamans and Madagascar. In total we generate and analyze genome-wide data from 981 individuals from 92 different populations, 58 separate islands, and 30 countries, representing the most expansive study of Pacific genetics to date. In each sample we disentangle the Papuan and more recent Austronesian ancestries, which have admixed in various proportions across this region, using ancestry-specific analyses, and characterize the distinct patterns of settlement, migration, and archaic introgression separately in these two ancestries. We also focus on the patterns of clinically relevant genetic variation across Oceania--a landscape rippled with strong founder effects and island-specific genetic drift in allele frequencies--providing an atlas for the development of precision genetic health strategies in this understudied region of the world.
[ { "created": "Wed, 15 May 2024 09:50:43 GMT", "version": "v1" } ]
2024-05-16
[ [ "Quinto-Cortés", "Consuelo D.", "" ], [ "Jonas", "Carmina Barberena", "" ], [ "Vieyra-Sánchez", "Sofía", "" ], [ "Oppenheimer", "Stephen", "" ], [ "González-Buenfil", "Ram", "" ], [ "Auckland", "Kathryn", "" ], [ "Robson", "Kathryn", "" ], [ "Parks", "Tom", "" ], [ "Moreno-Mayar", "J. Víctor", "" ], [ "Blanco-Portillo", "Javier", "" ], [ "Homburger", "Julian R.", "" ], [ "Wojcik", "Genevieve L.", "" ], [ "Severson", "Alissa L.", "" ], [ "Friedlaender", "Jonathan S.", "" ], [ "Friedlaender", "Francoise", "" ], [ "Allen", "Angela", "" ], [ "Allen", "Stephen", "" ], [ "Stoneking", "Mark", "" ], [ "Hill", "Adrian V. S.", "" ], [ "Aho", "George", "" ], [ "Koki", "George", "" ], [ "Pomat", "William", "" ], [ "Bustamante", "Carlos D.", "" ], [ "Phipps", "Maude", "" ], [ "Mentzer", "Alexander J.", "" ], [ "Moreno-Estrada", "Andrés", "" ], [ "Ioannidis", "Alexander G.", "" ] ]
Encompassing regions that were amongst the first inhabited by humans following the out-of-Africa expansion, hosting populations with the highest levels of archaic hominid introgression, and including Pacific islands that are the most isolated inhabited locations on the planet, Oceania has a rich, but understudied, human genomic landscape. Here we describe the first region-wide analysis of genome-wide data from population groups spanning Oceania and its surroundings, from island and peninsular southeast Asia to Papua New Guinea, east across the Pacific through Melanesia, Micronesia, and Polynesia, and west across the Indian Ocean to related island populations in the Andamans and Madagascar. In total we generate and analyze genome-wide data from 981 individuals from 92 different populations, 58 separate islands, and 30 countries, representing the most expansive study of Pacific genetics to date. In each sample we disentangle the Papuan and more recent Austronesian ancestries, which have admixed in various proportions across this region, using ancestry-specific analyses, and characterize the distinct patterns of settlement, migration, and archaic introgression separately in these two ancestries. We also focus on the patterns of clinically relevant genetic variation across Oceania--a landscape rippled with strong founder effects and island-specific genetic drift in allele frequencies--providing an atlas for the development of precision genetic health strategies in this understudied region of the world.
1902.09026
Jaroslav Hor\'a\v{c}ek
Jaroslav Hor\'a\v{c}ek, V\'aclav Kouck\'y and Milan Hlad\'ik
Contribution of Interval Linear Algebra to the Ongoing Discussions on Multiple Breath Washout Test
The paper summarizes our hypotheses relevant to the area of the Multiple Breath Washout test, which are based on the application of interval linear algebra
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the interval least squares approach to estimating/fitting data with interval uncertainties is introduced. The solution of this problem is discussed from the perspective of interval linear algebra. Using interval linear algebra carefully, it is possible to significantly speed up the computation in specialized cases. The interval least squares approach is then applied to a lung function testing method, the Multiple Breath Washout test (MBW), where it is used for the algebraic handling of uncertainties arising during the measurement. Surprisingly, it sheds new light on various aspects of this procedure: it shows that the precision of currently used sensors does not allow verified prediction. Moreover, it proves the most commonly used curve for modelling the nitrogen washout process from the lung to be wrong. This insight contributes to the ongoing discussions on the possibility of predicting clinically relevant indices (e.g., LCI).
[ { "created": "Sun, 24 Feb 2019 22:22:19 GMT", "version": "v1" } ]
2019-02-26
[ [ "Horáček", "Jaroslav", "" ], [ "Koucký", "Václav", "" ], [ "Hladík", "Milan", "" ] ]
In this paper, the interval least squares approach to estimating/fitting data with interval uncertainties is introduced. The solution of this problem is discussed from the perspective of interval linear algebra. Using interval linear algebra carefully, it is possible to significantly speed up the computation in specialized cases. The interval least squares approach is then applied to a lung function testing method, the Multiple Breath Washout test (MBW), where it is used for the algebraic handling of uncertainties arising during the measurement. Surprisingly, it sheds new light on various aspects of this procedure: it shows that the precision of currently used sensors does not allow verified prediction. Moreover, it proves the most commonly used curve for modelling the nitrogen washout process from the lung to be wrong. This insight contributes to the ongoing discussions on the possibility of predicting clinically relevant indices (e.g., LCI).
1902.05899
Jamila Andoh
Yuanyuan Lyu, Francesca Zidda, Stefan Radev, Hongcai Liu, Xiaoli Guo, Shanbao Tong, Herta Flor, Jamila Andoh
Gamma band oscillations reflect sensory and affective dimensions of pain
33 pages, 6 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pain is a multidimensional process that can be modulated by emotions; however, the mechanisms underlying this modulation are unknown. We used pictures with different emotional valence (negative, positive, neutral) as primes and applied electrical painful stimuli as targets to healthy participants. We assessed pain intensity and unpleasantness ratings and recorded electroencephalograms (EEG). We found that pain unpleasantness ratings, but not pain intensity ratings, were modulated by emotion, with increased ratings for negative and decreased ratings for positive pictures. We also found two consecutive gamma-band oscillations (GBOs) related to pain processing in time-frequency analyses of the EEG signals. An early GBO had a cortical distribution contralateral to the painful stimulus, and its amplitude was positively correlated with intensity and unpleasantness ratings, but not with prime valence. The late GBO had a centroparietal distribution, and its amplitude was larger for negative compared to neutral and positive pictures. The emotional modulation effect (negative versus positive) on the late GBO amplitude was positively correlated with pain unpleasantness. The early GBO might reflect overall pain perception, possibly involving the thalamocortical circuit, while the late GBO might be related to the affective dimension of pain and top-down processes.
[ { "created": "Fri, 15 Feb 2019 17:44:36 GMT", "version": "v1" } ]
2019-02-18
[ [ "Lyu", "Yuanyuan", "" ], [ "Zidda", "Francesca", "" ], [ "Radev", "Stefan", "" ], [ "Liu", "Hongcai", "" ], [ "Guo", "Xiaoli", "" ], [ "Tong", "Shanbao", "" ], [ "Flor", "Herta", "" ], [ "Andoh", "Jamila", "" ] ]
Pain is a multidimensional process that can be modulated by emotions; however, the mechanisms underlying this modulation are unknown. We used pictures with different emotional valence (negative, positive, neutral) as primes and applied electrical painful stimuli as targets to healthy participants. We assessed pain intensity and unpleasantness ratings and recorded electroencephalograms (EEG). We found that pain unpleasantness ratings, but not pain intensity ratings, were modulated by emotion, with increased ratings for negative and decreased ratings for positive pictures. We also found two consecutive gamma-band oscillations (GBOs) related to pain processing in time-frequency analyses of the EEG signals. An early GBO had a cortical distribution contralateral to the painful stimulus, and its amplitude was positively correlated with intensity and unpleasantness ratings, but not with prime valence. The late GBO had a centroparietal distribution, and its amplitude was larger for negative compared to neutral and positive pictures. The emotional modulation effect (negative versus positive) on the late GBO amplitude was positively correlated with pain unpleasantness. The early GBO might reflect overall pain perception, possibly involving the thalamocortical circuit, while the late GBO might be related to the affective dimension of pain and top-down processes.
2101.11143
Arthur Prat-Carrabin
Arthur Prat-Carrabin, Robert C. Wilson, Jonathan D. Cohen, Rava Azeredo da Silveira
Human Inference in Changing Environments With Temporal Structure
59 pages, 21 figures, and 6 tables
Psychological Review (2021), 128(5), 879-912
10.1037/rev0000276
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
To make informed decisions in natural environments that change over time, humans must update their beliefs as new observations are gathered. Studies exploring human inference as a dynamical process that unfolds in time have focused on situations in which the statistics of observations are history-independent. Yet temporal structure is everywhere in nature, and yields history-dependent observations. Do humans modify their inference processes depending on the latent temporal statistics of their observations? We investigate this question experimentally and theoretically using a change-point inference task. We show that humans adapt their inference process to fine aspects of the temporal structure in the statistics of stimuli. As such, humans behave qualitatively in a Bayesian fashion, but, quantitatively, deviate away from optimality. Perhaps more importantly, humans behave suboptimally in that their responses are not deterministic, but variable. We show that this variability itself is modulated by the temporal statistics of stimuli. To elucidate the cognitive algorithm that yields this behavior, we investigate a broad array of existing and new models that characterize different sources of suboptimal deviations away from Bayesian inference. While models with 'output noise' that corrupts the response-selection process are natural candidates, human behavior is best described by sampling-based inference models, in which the main ingredient is a compressed approximation of the posterior, represented through a modest set of random samples and updated over time. This result comes to complement a growing literature on sample-based representation and learning in humans.
[ { "created": "Wed, 27 Jan 2021 00:31:46 GMT", "version": "v1" } ]
2022-03-03
[ [ "Prat-Carrabin", "Arthur", "" ], [ "Wilson", "Robert C.", "" ], [ "Cohen", "Jonathan D.", "" ], [ "da Silveira", "Rava Azeredo", "" ] ]
To make informed decisions in natural environments that change over time, humans must update their beliefs as new observations are gathered. Studies exploring human inference as a dynamical process that unfolds in time have focused on situations in which the statistics of observations are history-independent. Yet temporal structure is everywhere in nature, and yields history-dependent observations. Do humans modify their inference processes depending on the latent temporal statistics of their observations? We investigate this question experimentally and theoretically using a change-point inference task. We show that humans adapt their inference process to fine aspects of the temporal structure in the statistics of stimuli. As such, humans behave qualitatively in a Bayesian fashion, but, quantitatively, deviate away from optimality. Perhaps more importantly, humans behave suboptimally in that their responses are not deterministic, but variable. We show that this variability itself is modulated by the temporal statistics of stimuli. To elucidate the cognitive algorithm that yields this behavior, we investigate a broad array of existing and new models that characterize different sources of suboptimal deviations away from Bayesian inference. While models with 'output noise' that corrupts the response-selection process are natural candidates, human behavior is best described by sampling-based inference models, in which the main ingredient is a compressed approximation of the posterior, represented through a modest set of random samples and updated over time. This result comes to complement a growing literature on sample-based representation and learning in humans.
q-bio/0407013
Yixin Guo
Yixin Guo and Carson C. Chow
Existence and Stability of Standing Pulses in Neural Networks : I Existence
31 pages, 29 figures, submitted to SIAM Journal on Applied Dynamical Systems
null
10.1137/040609483
null
q-bio.NC q-bio.QM
null
We consider the existence of standing pulse solutions of a neural network integro-differential equation. These pulses are bistable with the zero state and may be an analogue for short term memory in the brain. The network consists of a single-layer of neurons synaptically connected by lateral inhibition. Our work extends the classic Amari result by considering a non-saturating gain function. We consider a specific connectivity function where the existence conditions for single-pulses can be reduced to the solution of an algebraic system. In addition to the two localized pulse solutions found by Amari, we find that three or more pulses can coexist. We also show the existence of nonconvex ``dimpled'' pulses and double pulses. We map out the pulse shapes and maximum firing rates for different connection weights and gain functions.
[ { "created": "Thu, 8 Jul 2004 05:46:17 GMT", "version": "v1" } ]
2009-11-10
[ [ "Guo", "Yixin", "" ], [ "Chow", "Carson C.", "" ] ]
We consider the existence of standing pulse solutions of a neural network integro-differential equation. These pulses are bistable with the zero state and may be an analogue for short term memory in the brain. The network consists of a single-layer of neurons synaptically connected by lateral inhibition. Our work extends the classic Amari result by considering a non-saturating gain function. We consider a specific connectivity function where the existence conditions for single-pulses can be reduced to the solution of an algebraic system. In addition to the two localized pulse solutions found by Amari, we find that three or more pulses can coexist. We also show the existence of nonconvex ``dimpled'' pulses and double pulses. We map out the pulse shapes and maximum firing rates for different connection weights and gain functions.
2405.04540
Michele Farisco
Michele Farisco, Kathinka Evers, Jean-Pierre Changeux
Is artificial consciousness achievable? Lessons from the human brain
null
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by/4.0/
Here we analyse the question of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relation to consciousness as a reference model. This kind of analysis reveals several structural and functional features of the human brain that appear to be key for reaching human-like complex conscious experience, and that current research on Artificial Intelligence (AI) should take into account in its attempt to develop systems capable of conscious processing. We argue that, even if AI is limited in its ability to emulate human consciousness for both intrinsic (structural and architectural) and extrinsic (related to the current stage of scientific and technological knowledge) reasons, taking inspiration from those characteristics of the brain that make conscious processing possible and/or modulate it is a potentially promising strategy towards developing conscious AI. Also, it is theoretically possible that AI research can develop partial or potentially alternative forms of consciousness that are qualitatively different from the human one, and that may be more or less sophisticated depending on the perspective. Therefore, we recommend neuroscience-inspired caution in talking about artificial consciousness: since using the same word, consciousness, for humans and AI becomes ambiguous and potentially misleading, we propose to clearly specify what is common and what differs in AI conscious processing relative to full human conscious experience.
[ { "created": "Thu, 18 Apr 2024 12:59:44 GMT", "version": "v1" }, { "created": "Mon, 29 Jul 2024 17:55:17 GMT", "version": "v2" } ]
2024-07-30
[ [ "Farisco", "Michele", "" ], [ "Evers", "Kathinka", "" ], [ "Changeux", "Jean-Pierre", "" ] ]
Here we analyse the question of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relation to consciousness as a reference model. This kind of analysis reveals several structural and functional features of the human brain that appear to be key for reaching human-like complex conscious experience, and that current research on Artificial Intelligence (AI) should take into account in its attempt to develop systems capable of conscious processing. We argue that, even if AI is limited in its ability to emulate human consciousness for both intrinsic (structural and architectural) and extrinsic (related to the current stage of scientific and technological knowledge) reasons, taking inspiration from those characteristics of the brain that make conscious processing possible and/or modulate it is a potentially promising strategy towards developing conscious AI. Also, it is theoretically possible that AI research can develop partial or potentially alternative forms of consciousness that are qualitatively different from the human one, and that may be more or less sophisticated depending on the perspective. Therefore, we recommend neuroscience-inspired caution in talking about artificial consciousness: since using the same word, consciousness, for humans and AI becomes ambiguous and potentially misleading, we propose to clearly specify what is common and what differs in AI conscious processing relative to full human conscious experience.
2407.05857
Jonathan Swinton
Jonathan Swinton
Disk-stacking models are consistent with Fibonacci and non-Fibonacci structure in sunflowers
null
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper investigates a model of plant organ placement motivated by the appearance of large Fibonacci numbers in phyllotaxis, and provides the first large-scale empirical validation of this model. Specifically, it evaluates the ability of Schwendener disk-stacking models to generate the parastichy patterns seen in a large dataset of sunflower seedheads. We find that features of this data that the models can account for include a predominance of Fibonacci counts, usually as a pair of left and right counts on a single seedhead; a smaller but detectable frequency of Lucas and double-Fibonacci numbers; a comparable frequency of Fibonacci numbers plus or minus one; and occurrences of pairs of roughly equal but non-Fibonacci counts in a `columnar' structure. A further observation in the dataset was an occasional lack of rotational symmetry in the parastichy spirals, and this paper demonstrates these in the model for the first time. Schwendener disk-stacking models allow Fibonacci structure by ensuring that a parameter of the model corresponding to the speed of plant growth is kept small enough. While many other models can exhibit Fibonacci structure, usually by specifying a rotation parameter to extremely high precision, no other model has accounted for the further, non-Fibonacci features in the observed data. The Schwendener model produces these naturally in the region of parameter space just beyond where the Fibonacci structure breaks down, without any further parameter fitting. We also introduce stochasticity into the model and show that while it can be responsible for the appearance of columnar structure, the disordered dynamics of the deterministic system near the critical region can also generate this structure.
[ { "created": "Mon, 8 Jul 2024 12:05:53 GMT", "version": "v1" }, { "created": "Tue, 16 Jul 2024 11:39:17 GMT", "version": "v2" } ]
2024-07-17
[ [ "Swinton", "Jonathan", "" ] ]
This paper investigates a model of plant organ placement motivated by the appearance of large Fibonacci numbers in phyllotaxis, and provides the first large-scale empirical validation of this model. Specifically, it evaluates the ability of Schwendener disk-stacking models to generate the parastichy patterns seen in a large dataset of sunflower seedheads. We find that features of this data that the models can account for include a predominance of Fibonacci counts, usually as a pair of left and right counts on a single seedhead; a smaller but detectable frequency of Lucas and double-Fibonacci numbers; a comparable frequency of Fibonacci numbers plus or minus one; and occurrences of pairs of roughly equal but non-Fibonacci counts in a `columnar' structure. A further observation in the dataset was an occasional lack of rotational symmetry in the parastichy spirals, and this paper demonstrates these in the model for the first time. Schwendener disk-stacking models allow Fibonacci structure by ensuring that a parameter of the model corresponding to the speed of plant growth is kept small enough. While many other models can exhibit Fibonacci structure, usually by specifying a rotation parameter to extremely high precision, no other model has accounted for the further, non-Fibonacci features in the observed data. The Schwendener model produces these naturally in the region of parameter space just beyond where the Fibonacci structure breaks down, without any further parameter fitting. We also introduce stochasticity into the model and show that while it can be responsible for the appearance of columnar structure, the disordered dynamics of the deterministic system near the critical region can also generate this structure.