Dataset columns (name: type, length/size range):

id: stringlengths (9 to 13)
submitter: stringlengths (4 to 48)
authors: stringlengths (4 to 9.62k)
title: stringlengths (4 to 343)
comments: stringlengths (2 to 480)
journal-ref: stringlengths (9 to 309)
doi: stringlengths (12 to 138)
report-no: stringclasses (277 values)
categories: stringlengths (8 to 87)
license: stringclasses (9 values)
orig_abstract: stringlengths (27 to 3.76k)
versions: listlengths (1 to 15)
update_date: stringlengths (10 to 10)
authors_parsed: listlengths (1 to 147)
abstract: stringlengths (24 to 3.75k)
q-bio/0312030
Francois Coppex
Francois Coppex, Michel Droz, Adam Lipowski
Extinction dynamics of Lotka-Volterra ecosystems on evolving networks
8 pages, 6 eps figures included
Phys. Rev. E 69, 061901 (2004)
10.1103/PhysRevE.69.061901
null
q-bio.PE cond-mat.stat-mech
null
We study a model of a multi-species ecosystem described by Lotka-Volterra-like equations. Interactions among species form a network whose evolution is determined by the dynamics of the model. Numerical simulations show a power-law distribution of intervals between extinctions, but only for ecosystems with sufficient variability of species and with networks of connectivity above a certain threshold that is very close to the percolation threshold of the network. The effect of slow environmental changes on extinction dynamics, the degree distribution of the network of interspecies interactions, and some emergent properties of our model are also examined.
[ { "created": "Fri, 19 Dec 2003 09:40:19 GMT", "version": "v1" }, { "created": "Wed, 2 Jun 2004 08:18:17 GMT", "version": "v2" } ]
2007-05-23
[ [ "Coppex", "Francois", "" ], [ "Droz", "Michel", "" ], [ "Lipowski", "Adam", "" ] ]
We study a model of a multi-species ecosystem described by Lotka-Volterra-like equations. Interactions among species form a network whose evolution is determined by the dynamics of the model. Numerical simulations show a power-law distribution of intervals between extinctions, but only for ecosystems with sufficient variability of species and with networks of connectivity above a certain threshold that is very close to the percolation threshold of the network. The effect of slow environmental changes on extinction dynamics, the degree distribution of the network of interspecies interactions, and some emergent properties of our model are also examined.
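A minimal sketch of this class of model, assuming a generalized Lotka-Volterra form with random sparse interactions and replacement of extinct species; the parameter values and the replacement rule are illustrative, not the paper's:

```python
import numpy as np

# Illustrative generalized Lotka-Volterra dynamics on an evolving random
# network: extinct species are replaced by new ones with fresh links, so
# the interaction network co-evolves with the population dynamics.
rng = np.random.default_rng(0)
N, p, dt, eps = 50, 0.2, 0.01, 1e-6       # species, link density, time step, extinction cutoff
A = rng.normal(0, 0.2, (N, N)) * (rng.random((N, N)) < p)
np.fill_diagonal(A, -1.0)                 # self-limitation keeps densities bounded
r = rng.uniform(0.5, 1.5, N)              # growth rates
x = rng.uniform(0.1, 1.0, N)              # initial densities

extinction_times = []
for step in range(100_000):
    x += dt * x * (r + A @ x)             # dx_i/dt = x_i (r_i + sum_j A_ij x_j)
    for i in np.flatnonzero(x < eps):     # extinction: replace species i
        extinction_times.append(step * dt)
        x[i] = rng.uniform(0.1, 1.0)
        A[i, :] = rng.normal(0, 0.2, N) * (rng.random(N) < p)
        A[:, i] = rng.normal(0, 0.2, N) * (rng.random(N) < p)
        A[i, i] = -1.0

intervals = np.diff(extinction_times)     # inter-extinction intervals
```

Histogramming `intervals` on log-log axes is the kind of diagnostic used to look for the power-law regime the abstract describes.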
2306.03399
Haoyu Cheng
Haoyu Cheng, Mobin Asri, Julian Lucas, Sergey Koren, and Heng Li
Scalable telomere-to-telomere assembly for diploid and polyploid genomes with double graph
14 pages, 4 figures
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Despite recent advances in the length and the accuracy of long-read data, building haplotype-resolved genome assemblies from telomere to telomere still requires considerable computational resources. In this study, we present an efficient de novo assembly algorithm that combines multiple sequencing technologies to scale up population-wide telomere-to-telomere assemblies. By utilizing twenty-two human and two plant genomes, we demonstrate that our algorithm is around an order of magnitude cheaper than existing methods, while producing better diploid and haploid assemblies. Notably, our algorithm is the only feasible solution to the haplotype-resolved assembly of polyploid genomes.
[ { "created": "Tue, 6 Jun 2023 04:29:12 GMT", "version": "v1" } ]
2023-06-07
[ [ "Cheng", "Haoyu", "" ], [ "Asri", "Mobin", "" ], [ "Lucas", "Julian", "" ], [ "Koren", "Sergey", "" ], [ "Li", "Heng", "" ] ]
Despite recent advances in the length and the accuracy of long-read data, building haplotype-resolved genome assemblies from telomere to telomere still requires considerable computational resources. In this study, we present an efficient de novo assembly algorithm that combines multiple sequencing technologies to scale up population-wide telomere-to-telomere assemblies. By utilizing twenty-two human and two plant genomes, we demonstrate that our algorithm is around an order of magnitude cheaper than existing methods, while producing better diploid and haploid assemblies. Notably, our algorithm is the only feasible solution to the haplotype-resolved assembly of polyploid genomes.
1608.02375
Dan Wang
Dan Wang, Shuaicheng Li, Fei Guo, Lusheng Wang
Core-genome scaffold comparison reveals the prevalence that inversion events are associated with pairs of inverted repeats
8 pages, 11 figures
null
null
null
q-bio.GN cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Genome rearrangement plays an important role in evolutionary biology and has profound impacts on phenotype in organisms ranging from microbes to humans. The mechanisms for genome rearrangement events remain unclear. Many comparisons have been conducted among different species. To reveal the mechanisms for rearrangement events, comparison of different individuals/strains within the same species or genus (pan-genomes) is more helpful, since they are much closer to each other. Results: We study the mechanism for inversion events via core-genome scaffold comparison of different strains within the same species. We focus on two kinds of bacteria, Pseudomonas aeruginosa and Escherichia coli, and investigate the inversion events among different strains of the same species. We find an interesting phenomenon: long (larger than 10,000 bp) inversion regions are flanked by a pair of Inverted Repeats (IRs) (with lengths ranging from 385 bp to 27476 bp), which are often Insertion Sequences (ISs). This mechanism can also explain why breakpoint reuse occurs in inversion events. We study the prevalence of the phenomenon and find that it is a major mechanism for inversions. The other observation is that for different rearrangement events such as transposition and inverted block interchange, the two ends of the swapped regions are also associated with repeats, so that after the rearrangement operations the two ends of the swapped regions remain unchanged. To our knowledge, this is the first time such a phenomenon is reported for a transposition event.
[ { "created": "Mon, 8 Aug 2016 10:36:36 GMT", "version": "v1" } ]
2016-08-09
[ [ "Wang", "Dan", "" ], [ "Li", "Shuaicheng", "" ], [ "Guo", "Fei", "" ], [ "Wang", "Lusheng", "" ] ]
Motivation: Genome rearrangement plays an important role in evolutionary biology and has profound impacts on phenotype in organisms ranging from microbes to humans. The mechanisms for genome rearrangement events remain unclear. Many comparisons have been conducted among different species. To reveal the mechanisms for rearrangement events, comparison of different individuals/strains within the same species or genus (pan-genomes) is more helpful, since they are much closer to each other. Results: We study the mechanism for inversion events via core-genome scaffold comparison of different strains within the same species. We focus on two kinds of bacteria, Pseudomonas aeruginosa and Escherichia coli, and investigate the inversion events among different strains of the same species. We find an interesting phenomenon: long (larger than 10,000 bp) inversion regions are flanked by a pair of Inverted Repeats (IRs) (with lengths ranging from 385 bp to 27476 bp), which are often Insertion Sequences (ISs). This mechanism can also explain why breakpoint reuse occurs in inversion events. We study the prevalence of the phenomenon and find that it is a major mechanism for inversions. The other observation is that for different rearrangement events such as transposition and inverted block interchange, the two ends of the swapped regions are also associated with repeats, so that after the rearrangement operations the two ends of the swapped regions remain unchanged. To our knowledge, this is the first time such a phenomenon is reported for a transposition event.
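The core test in such an analysis is easy to state in code. A sketch (my illustration, not the paper's pipeline) of checking whether a candidate inversion region is flanked by a pair of inverted repeats, i.e. whether a stretch upstream of the region reappears downstream as its reverse complement; the length bounds echo the 385-27476 bp range reported above:

```python
# Illustrative check: is genome[start:end] flanked by an IR pair, i.e. does
# the upstream flank reappear downstream as its reverse complement?
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s: str) -> str:
    return s.translate(COMP)[::-1]

def flanked_by_inverted_repeats(genome: str, start: int, end: int,
                                min_len: int = 385, max_len: int = 28_000) -> bool:
    for L in range(max_len, min_len - 1, -1):     # longest IR first
        left = genome[max(0, start - L):start]
        right = genome[end:end + L]
        if len(left) == L and len(right) == L and left == revcomp(right):
            return True
    return False
```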
1902.03216
Gaoxiang Zhou
Gaoxiang Zhou, Kai-Wen Liang, Natasa Miskov-Zivanov
Intervention Pathway Discovery via Context-Dependent Dynamic Sensitivity Analysis
null
null
null
null
q-bio.MN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Sensitivity analysis of biological system models can significantly contribute to identifying and explaining the influence of internal or external changes on the model and its elements. We propose here a comprehensive framework to study the sensitivity of intra-cellular networks and to identify key intervention pathways, by performing both static and dynamic sensitivity analysis. While the static sensitivity analysis focuses on the impact of network topology and update functions, the dynamic analysis accounts for context-dependent transient state distributions. To study sensitivity, we use discrete models, where each element is represented as a discrete variable and assigned an update rule, which is a function of the element's known direct and indirect regulators. Our sensitivity analysis framework allows for assessing the effect of context on individual element sensitivity, as well as on element criticality in reaching preferred outcomes. The framework also enables discovery of the most influential pathways in the model that are essential for satisfying important system properties and thus could be used for interventions. We discuss the role of nine different network attributes in identifying key elements and intervention pathways, and evaluate their performance using a model checking method. Finally, we apply our methods to the model of naive T cell differentiation, and further demonstrate the importance of context-based sensitivity analysis in identifying the most influential elements and pathways.
[ { "created": "Fri, 8 Feb 2019 18:27:24 GMT", "version": "v1" } ]
2019-02-11
[ [ "Zhou", "Gaoxiang", "" ], [ "Liang", "Kai-Wen", "" ], [ "Miskov-Zivanov", "Natasa", "" ] ]
Sensitivity analysis of biological system models can significantly contribute to identifying and explaining the influence of internal or external changes on the model and its elements. We propose here a comprehensive framework to study the sensitivity of intra-cellular networks and to identify key intervention pathways, by performing both static and dynamic sensitivity analysis. While the static sensitivity analysis focuses on the impact of network topology and update functions, the dynamic analysis accounts for context-dependent transient state distributions. To study sensitivity, we use discrete models, where each element is represented as a discrete variable and assigned an update rule, which is a function of the element's known direct and indirect regulators. Our sensitivity analysis framework allows for assessing the effect of context on individual element sensitivity, as well as on element criticality in reaching preferred outcomes. The framework also enables discovery of the most influential pathways in the model that are essential for satisfying important system properties and thus could be used for interventions. We discuss the role of nine different network attributes in identifying key elements and intervention pathways, and evaluate their performance using a model checking method. Finally, we apply our methods to the model of naive T cell differentiation, and further demonstrate the importance of context-based sensitivity analysis in identifying the most influential elements and pathways.
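As one concrete reading of the static part of such an analysis (a toy formulation of mine, not one of the paper's nine attributes), the sensitivity of a Boolean update rule to one regulator can be scored as the fraction of regulator states in which flipping that regulator flips the output:

```python
from itertools import product

# Toy static sensitivity for a Boolean update rule: how often does flipping
# input i change the output, over all regulator states?
def sensitivity(update, n_inputs, i):
    flips = 0
    for state in product([0, 1], repeat=n_inputs):
        flipped = list(state)
        flipped[i] ^= 1
        flips += update(state) != update(tuple(flipped))
    return flips / 2 ** n_inputs

# Example rule: X* = (A and B) or C, with regulators (A, B, C).
rule = lambda s: (s[0] and s[1]) or s[2]
print([sensitivity(rule, 3, i) for i in range(3)])  # C is the most influential input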
q-bio/0401042
Thomas R. Weikl
Thomas R. Weikl and Reinhard Lipowsky
Mechanisms of pattern formation during T cell adhesion
12 pages, 8 figures
null
10.1529/biophysj.104.045609
null
q-bio.SC cond-mat.stat-mech q-bio.CB
null
T cells form intriguing patterns during adhesion to antigen-presenting cells. The patterns at the cell-cell contact zone are composed of two types of domains, which contain either the short TCR/MHCp receptor-ligand complexes or the longer LFA-1/ICAM-1 complexes. The final pattern consists of a central TCR/MHCp domain surrounded by a ring-shaped LFA-1/ICAM-1 domain, while the characteristic pattern formed at intermediate times is inverted, with TCR/MHCp complexes at the periphery of the contact zone and LFA-1/ICAM-1 complexes in the center. In this article, we present a statistical-mechanical model of cell adhesion and propose a novel mechanism for T cell pattern formation. Our mechanism for the formation of the intermediate inverted pattern is based (i) on the initial nucleation of numerous TCR/MHCp microdomains, and (ii) on the diffusion of free receptors and ligands into the contact zone. Due to this inward diffusion, TCR/MHCp microdomains at the rim of the contact zone grow faster and form an intermediate peripheral ring for sufficiently large TCR/MHCp concentrations. In agreement with experiments, we find that the formation of the final pattern with a central TCR/MHCp domain requires active cytoskeletal transport processes. Without active transport, the intermediate inverted pattern seems to be metastable in our model, which might explain patterns observed during natural killer (NK) cell adhesion. At smaller TCR/MHCp complex concentrations, we observe a different regime of pattern formation with intermediate multifocal TCR/MHCp patterns which resemble experimental patterns found during thymocyte adhesion.
[ { "created": "Wed, 28 Jan 2004 15:51:43 GMT", "version": "v1" } ]
2009-11-10
[ [ "Weikl", "Thomas R.", "" ], [ "Lipowsky", "Reinhard", "" ] ]
T cells form intriguing patterns during adhesion to antigen-presenting cells. The patterns at the cell-cell contact zone are composed of two types of domains, which contain either the short TCR/MHCp receptor-ligand complexes or the longer LFA-1/ICAM-1 complexes. The final pattern consists of a central TCR/MHCp domain surrounded by a ring-shaped LFA-1/ICAM-1 domain, while the characteristic pattern formed at intermediate times is inverted, with TCR/MHCp complexes at the periphery of the contact zone and LFA-1/ICAM-1 complexes in the center. In this article, we present a statistical-mechanical model of cell adhesion and propose a novel mechanism for T cell pattern formation. Our mechanism for the formation of the intermediate inverted pattern is based (i) on the initial nucleation of numerous TCR/MHCp microdomains, and (ii) on the diffusion of free receptors and ligands into the contact zone. Due to this inward diffusion, TCR/MHCp microdomains at the rim of the contact zone grow faster and form an intermediate peripheral ring for sufficiently large TCR/MHCp concentrations. In agreement with experiments, we find that the formation of the final pattern with a central TCR/MHCp domain requires active cytoskeletal transport processes. Without active transport, the intermediate inverted pattern seems to be metastable in our model, which might explain patterns observed during natural killer (NK) cell adhesion. At smaller TCR/MHCp complex concentrations, we observe a different regime of pattern formation with intermediate multifocal TCR/MHCp patterns which resemble experimental patterns found during thymocyte adhesion.
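For orientation, statistical-mechanical adhesion models of this kind are typically built on a lattice Hamiltonian of the following general form (a sketch in my own notation, not the paper's exact model):

```latex
% l_i: local membrane separation at lattice site i
% n_i, m_i: occupation numbers for TCR/MHCp and LFA-1/ICAM-1 complexes
% \Delta_d: discretized Laplacian; \kappa: membrane bending rigidity
\mathcal{H}[l, n, m] \;=\; \sum_i \Big[
    \tfrac{\kappa}{2}\,\big(\Delta_d\, l_i\big)^2
    + n_i\, V_{\mathrm{TCR}}(l_i)
    + m_i\, V_{\mathrm{LFA}}(l_i)
\Big]
```

Because the two binding wells sit at incompatible membrane separations (roughly 15 nm for TCR/MHCp versus 40 nm for LFA-1/ICAM-1), a single membrane patch cannot accommodate both complex types, and the bending term drives their segregation into domains.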
1203.0222
Carsten Lemmen
Carsten Lemmen and Kai W. Wirtz
On the sensitivity of the simulated European Neolithic transition to climate extremes
Revised version submitted to the Journal of Archaeological Science, special issue on The World Reshaped: impacts of the Neolithic transition. 10 pages, 4 figures, 1 table + supplementary material
null
10.1016/j.jas.2012.10.023
null
q-bio.PE cs.MA math.DS physics.geo-ph
http://creativecommons.org/licenses/by-nc-sa/3.0/
Was the spread of agropastoralism from the Fertile Crescent throughout Europe influenced by extreme climate events, or was it independent of climate? We here generate idealized climate events using palaeoclimate records. In a mathematical model of regional sociocultural development, these events disturb the subsistence base of simulated forager and farmer societies. We evaluate the regional simulated transition timings and durations against a published large set of radiocarbon dates for western Eurasia; the model is able to realistically hindcast much of the inhomogeneous space-time evolution of regional Neolithic transitions. Our study shows that the consideration of climate events improves the simulation of typical lags between cultural complexes, but that the overall difference to a model without climate events is not significant. Climate events may not have been as important for early sociocultural dynamics as endogenous factors.
[ { "created": "Thu, 1 Mar 2012 15:48:59 GMT", "version": "v1" }, { "created": "Sat, 11 Aug 2012 06:02:18 GMT", "version": "v2" } ]
2012-11-01
[ [ "Lemmen", "Carsten", "" ], [ "Wirtz", "Kai W.", "" ] ]
Was the spread of agropastoralism from the Fertile Crescent throughout Europe influenced by extreme climate events, or was it independent of climate? We here generate idealized climate events using palaeoclimate records. In a mathematical model of regional sociocultural development, these events disturb the subsistence base of simulated forager and farmer societies. We evaluate the regional simulated transition timings and durations against a published large set of radiocarbon dates for western Eurasia; the model is able to realistically hindcast much of the inhomogeneous space-time evolution of regional Neolithic transitions. Our study shows that the consideration of climate events improves the simulation of typical lags between cultural complexes, but that the overall difference to a model without climate events is not significant. Climate events may not have been as important for early sociocultural dynamics as endogenous factors.
q-bio/0412014
Tobias Bollenbach
T. Bollenbach, K. Kruse, P. Pantazis, M. Gonz\'alez-Gait\'an, and F. J\"ulicher
Robust formation of morphogen gradients
null
Physical Review Letters 94, 018103 (2005)
10.1103/PhysRevLett.94.018103
null
q-bio.OT physics.bio-ph
null
We discuss the formation of graded morphogen profiles in a cell layer by nonlinear transport phenomena, important for patterning developing organisms. We focus on a process termed transcytosis, where morphogen transport results from binding of ligands to receptors on the cell surface, incorporation into the cell and subsequent externalization. Starting from a microscopic model, we derive effective transport equations. We show that, in contrast to morphogen transport by extracellular diffusion, transcytosis leads to robust ligand profiles which are insensitive to the rate of ligand production.
[ { "created": "Wed, 8 Dec 2004 14:53:16 GMT", "version": "v1" } ]
2007-05-23
[ [ "Bollenbach", "T.", "" ], [ "Kruse", "K.", "" ], [ "Pantazis", "P.", "" ], [ "González-Gaitán", "M.", "" ], [ "Jülicher", "F.", "" ] ]
We discuss the formation of graded morphogen profiles in a cell layer by nonlinear transport phenomena, important for patterning developing organisms. We focus on a process termed transcytosis, where morphogen transport results from binding of ligands to receptors on the cell surface, incorporation into the cell and subsequent externalization. Starting from a microscopic model, we derive effective transport equations. We show that, in contrast to morphogen transport by extracellular diffusion, transcytosis leads to robust ligand profiles which are insensitive to the rate of ligand production.
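The flavor of the coarse-grained result can be shown in one line (a generic sketch of the class of equations meant by "effective transport equations"; the paper's derivation fixes the precise form of the coefficients):

```latex
% c(x,t): free morphogen concentration; D(c): effective, concentration-
% dependent transport coefficient inherited from receptor binding,
% internalization and re-release
\partial_t c \;=\; \partial_x \big[ D(c)\, \partial_x c \big]
```

The robustness claim is then that receptor saturation makes the steady-state profile of this nonlinear equation nearly independent of the flux imposed at the source, in contrast to plain extracellular diffusion with constant D.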
1606.01932
Warren Lord
Warren M. Lord, Jie Sun, Nicholas T. Ouellette, and Erik M. Bollt
Inference of Causal Information Flow in Collective Animal Behavior
To appear in TMBMC special issue in honor of Claude Shannon's 100th Birthday
null
10.1109/TMBMC.2016.2632099
null
q-bio.QM cs.IT math.DS math.IT q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding and even defining what constitutes animal interactions remains a challenging problem. Correlational tools may be inappropriate for detecting communication between a set of many agents exhibiting nonlinear behavior. A different approach is to define coordinated motions in terms of an information theoretic channel of direct causal information flow. In this work, we consider time series data obtained by an experimental protocol of optical tracking of the insect species Chironomus riparius. The data constitute reconstructed 3-D flight trajectories and kinematics of the insects. We present an application of the optimal causation entropy (oCSE) principle to identify direct causal relationships or information channels among the insects. The collection of channels inferred by oCSE describes a network of information flow within the swarm. We find that information channels with a long spatial range are more common than expected under the assumption that causal information flows should be spatially localized. The tools developed herein are general and applicable to the inference and study of intercommunication networks in a wide variety of natural settings.
[ { "created": "Fri, 3 Jun 2016 00:53:28 GMT", "version": "v1" }, { "created": "Thu, 29 Dec 2016 21:43:11 GMT", "version": "v2" } ]
2017-01-02
[ [ "Lord", "Warren M.", "" ], [ "Sun", "Jie", "" ], [ "Ouellette", "Nicholas T.", "" ], [ "Bollt", "Erik M.", "" ] ]
Understanding and even defining what constitutes animal interactions remains a challenging problem. Correlational tools may be inappropriate for detecting communication between a set of many agents exhibiting nonlinear behavior. A different approach is to define coordinated motions in terms of an information theoretic channel of direct causal information flow. In this work, we consider time series data obtained by an experimental protocol of optical tracking of the insect species Chironomus riparius. The data constitute reconstructed 3-D flight trajectories and kinematics of the insects. We present an application of the optimal causation entropy (oCSE) principle to identify direct causal relationships or information channels among the insects. The collection of channels inferred by oCSE describes a network of information flow within the swarm. We find that information channels with a long spatial range are more common than expected under the assumption that causal information flows should be spatially localized. The tools developed herein are general and applicable to the inference and study of intercommunication networks in a wide variety of natural settings.
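A compressed sketch of the forward ("aggregative discovery") stage of an oCSE-style search, with a plug-in binned estimator of conditional mutual information. The published method also includes a backward removal stage and a permutation-based significance test; the estimator and threshold here are illustrative:

```python
import numpy as np

def cmi(y, x, z=None, bins=4):
    """Plug-in conditional mutual information I(y; x | z) from binned data."""
    def ent(*cols):
        joint, _ = np.histogramdd(np.column_stack(cols), bins=bins)
        p = joint[joint > 0] / joint.sum()
        return -(p * np.log(p)).sum()
    if z is None:
        return ent(y) + ent(x) - ent(y, x)
    return ent(y, *z) + ent(x, *z) - ent(y, x, *z) - ent(*z)

def forward_ocse(Y_next, X_now, threshold=0.05):
    """Greedy 'aggregative discovery': repeatedly add the source with the
    largest causation entropy C_{X->Y|Z} = I(Y_next; X | selected)."""
    selected = []
    while True:
        rest = [j for j in range(X_now.shape[1]) if j not in selected]
        z = [X_now[:, j] for j in selected] or None
        gains = {j: cmi(Y_next, X_now[:, j], z) for j in rest}
        if not gains or max(gains.values()) < threshold:
            return selected
        selected.append(max(gains, key=gains.get))

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
Y_next = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=5000)
print(forward_ocse(Y_next, X))            # typically selects [0, 1]
```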
q-bio/0611089
Alain Destexhe
R. Brette, M. Rudolph, T. Carnevale, M. Hines, D. Beeman, J. M. Bower, M. Diesmann, A. Morrison, P. H. Goodman, F. C. Harris Jr., M. Zirpe, T. Natschlager, D. Pecevski, B. Ermentrout, M. Djurfeldt, A. Lansner, O. Rochel, T. Vieville, E. Muller, A. P. Davison, S. El Boustani, A. Destexhe
Simulation of networks of spiking neurons: A review of tools and strategies
49 pages, 24 figures, 1 table; review article, Journal of Computational Neuroscience, in press (2007)
Journal of Computational Neuroscience 2007 Dec;23(3):349-98. Epub 2007 Jul 12
null
null
q-bio.NC
null
We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We give an overview of the different simulators and simulation environments presently available (restricted to those that are freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with an aim to allow the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley-type and integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks.
[ { "created": "Tue, 28 Nov 2006 14:35:19 GMT", "version": "v1" }, { "created": "Thu, 12 Apr 2007 21:41:07 GMT", "version": "v2" } ]
2008-01-26
[ [ "Brette", "R.", "" ], [ "Rudolph", "M.", "" ], [ "Carnevale", "T.", "" ], [ "Hines", "M.", "" ], [ "Beeman", "D.", "" ], [ "Bower", "J. M.", "" ], [ "Diesmann", "M.", "" ], [ "Morrison", "A.", "" ], [ "Goodman", "P. H.", "" ], [ "Harris", "F. C.", "Jr." ], [ "Zirpe", "M.", "" ], [ "Natschlager", "T.", "" ], [ "Pecevski", "D.", "" ], [ "Ermentrout", "B.", "" ], [ "Djurfeldt", "M.", "" ], [ "Lansner", "A.", "" ], [ "Rochel", "O.", "" ], [ "Vieville", "T.", "" ], [ "Muller", "E.", "" ], [ "Davison", "A. P.", "" ], [ "Boustani", "S. El", "" ], [ "Destexhe", "A.", "" ] ]
We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We give an overview of the different simulators and simulation environments presently available (restricted to those that are freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with an aim to allow the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley-type and integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks.
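As a minimal illustration of the clock-driven strategy compared in the review (a toy sketch, not code from any of the reviewed simulators): leaky integrate-and-fire neurons advanced on a fixed time grid with current-based, instantaneous synapses.

```python
import numpy as np

def simulate_lif(W, I_ext, T=1.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Clock-driven LIF network: W[i, j] is the kick neuron j gives neuron i."""
    n = W.shape[0]
    v = np.zeros(n)
    spikes = []                               # (time, neuron) pairs
    for step in range(int(T / dt)):
        v += dt * (-v / tau + I_ext)          # Euler step for the membrane equation
        fired = v >= v_th
        if fired.any():
            spikes += [(step * dt, i) for i in np.flatnonzero(fired)]
            v += W[:, fired].sum(axis=1)      # current-based, delta-pulse synapses
            v[fired] = v_reset
    return spikes

rng = np.random.default_rng(0)
spikes = simulate_lif(W=0.05 * rng.random((50, 50)), I_ext=60.0)
```

An event-driven simulator would instead advance the state analytically between spikes, replacing the O(T/dt) grid with a spike queue; the review's benchmarks compare exactly these trade-offs.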
2101.04081
Apostolos Gkatzionis
Apostolos Gkatzionis, Stephen Burgess and Paul J. Newcombe
Statistical Methods for cis-Mendelian Randomization with Two-sample Summary-level Data
39 pages (33 main text + 6 supplement), 4 figures, 7 tables
null
null
null
q-bio.QM q-bio.GN stat.AP
http://creativecommons.org/licenses/by/4.0/
Mendelian randomization is the use of genetic variants to assess the existence of a causal relationship between a risk factor and an outcome of interest. Here, we focus on two-sample summary-data Mendelian randomization analyses with many correlated variants from a single gene region, and particularly on cis-Mendelian randomization studies which use protein expression as a risk factor. Such studies must rely on a small, curated set of variants from the studied region; using all variants in the region requires inverting an ill-conditioned genetic correlation matrix and results in numerically unstable causal effect estimates. We review methods for variable selection and estimation in cis-Mendelian randomization with summary-level data, ranging from stepwise pruning and conditional analysis to principal components analysis, factor analysis and Bayesian variable selection. In a simulation study, we show that the various methods have a comparable performance in analyses with large sample sizes and strong genetic instruments. However, when weak instrument bias is suspected, factor analysis and Bayesian variable selection produce more reliable inferences than simple pruning approaches, which are often used in practice. We conclude by examining two case studies, assessing the effects of LDL-cholesterol and serum testosterone on coronary heart disease risk using variants in the HMGCR and SHBG gene regions respectively.
[ { "created": "Mon, 11 Jan 2021 18:23:04 GMT", "version": "v1" }, { "created": "Thu, 15 Sep 2022 10:22:10 GMT", "version": "v2" } ]
2022-09-16
[ [ "Gkatzionis", "Apostolos", "" ], [ "Burgess", "Stephen", "" ], [ "Newcombe", "Paul J.", "" ] ]
Mendelian randomization is the use of genetic variants to assess the existence of a causal relationship between a risk factor and an outcome of interest. Here, we focus on two-sample summary-data Mendelian randomization analyses with many correlated variants from a single gene region, and particularly on cis-Mendelian randomization studies which use protein expression as a risk factor. Such studies must rely on a small, curated set of variants from the studied region; using all variants in the region requires inverting an ill-conditioned genetic correlation matrix and results in numerically unstable causal effect estimates. We review methods for variable selection and estimation in cis-Mendelian randomization with summary-level data, ranging from stepwise pruning and conditional analysis to principal components analysis, factor analysis and Bayesian variable selection. In a simulation study, we show that the various methods have a comparable performance in analyses with large sample sizes and strong genetic instruments. However, when weak instrument bias is suspected, factor analysis and Bayesian variable selection produce more reliable inferences than simple pruning approaches, which are often used in practice. We conclude by examining two case studies, assessing the effects of LDL-cholesterol and serum testosterone on coronary heart disease risk using variants in the HMGCR and SHBG gene regions respectively.
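In code, the numerical-instability point is concrete. A sketch of the generalized IVW estimator for correlated variants and of the PCA work-around reviewed in the paper (standard forms from this literature; the variable names are mine, and the instrument-strength weighting used to build the PCA matrix is one published choice):

```python
import numpy as np

def ivw_correlated(bx, by, se_y, rho):
    """Generalized IVW with variant correlation matrix rho.

    Solving against omega is the numerically fragile step: with many
    correlated variants, rho is near-singular and estimates become unstable.
    """
    omega = np.outer(se_y, se_y) * rho       # covariance of variant-outcome betas
    w = np.linalg.solve(omega, bx)
    return (w @ by) / (w @ bx)

def ivw_pca(bx, by, se_y, rho, var_explained=0.99):
    """Project the summary statistics onto leading principal components first."""
    weights = np.abs(bx) / se_y              # instrument-strength weighting
    psi = np.outer(weights, weights) * rho
    vals, vecs = np.linalg.eigh(psi)
    order = np.argsort(vals)[::-1]
    k = int(np.searchsorted(np.cumsum(vals[order]) / vals.sum(), var_explained)) + 1
    P = vecs[:, order[:k]]                   # n_variants x k loadings
    omega = np.outer(se_y, se_y) * rho
    bx_p, by_p, omega_p = P.T @ bx, P.T @ by, P.T @ omega @ P
    w = np.linalg.solve(omega_p, bx_p)
    return (w @ by_p) / (w @ bx_p)
```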
2403.02724
Peng Li
Lingmin Zhan, Yuanyuan Zhang, Yingdong Wang, Aoyi Wang, Caiping Cheng, Jinzhong Zhao, Wuxia Zhang, Peng Lia, Jianxin Chen
A genome-scale deep learning model to predict gene expression changes of genetic perturbations from multiplex biological networks
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Systematic characterization of the biological effects of genetic perturbations is essential to the application of molecular biology and biomedicine. However, experimentally exhausting genetic perturbations on a genome-wide scale is challenging. Here, we present TranscriptionNet, a deep learning model that integrates multiple biological networks to systematically predict transcriptional profiles for three types of genetic perturbations, based on the transcriptional profiles induced by genetic perturbations in the L1000 project: RNA interference (RNAi), clustered regularly interspaced short palindromic repeats (CRISPR) and overexpression (OE). TranscriptionNet performs better than existing approaches in predicting induced gene expression changes for all three types of genetic perturbations. TranscriptionNet can predict transcriptional profiles for all genes in existing biological networks and increases the coverage of perturbational gene expression changes for each type of genetic perturbation from a few thousand to 26,945 genes. TranscriptionNet demonstrates strong generalization ability when comparing predicted and true gene expression changes on different external tasks. Overall, TranscriptionNet can systematically predict the transcriptional consequences of perturbing genes on a genome-wide scale and thus holds promise for systematically detecting gene function and enhancing drug development and target discovery.
[ { "created": "Tue, 5 Mar 2024 07:31:46 GMT", "version": "v1" } ]
2024-03-06
[ [ "Zhan", "Lingmin", "" ], [ "Zhang", "Yuanyuan", "" ], [ "Wang", "Yingdong", "" ], [ "Wang", "Aoyi", "" ], [ "Cheng", "Caiping", "" ], [ "Zhao", "Jinzhong", "" ], [ "Zhang", "Wuxia", "" ], [ "Lia", "Peng", "" ], [ "Chen", "Jianxin", "" ] ]
Systematic characterization of the biological effects of genetic perturbations is essential to the application of molecular biology and biomedicine. However, experimentally exhausting genetic perturbations on a genome-wide scale is challenging. Here, we present TranscriptionNet, a deep learning model that integrates multiple biological networks to systematically predict transcriptional profiles for three types of genetic perturbations, based on the transcriptional profiles induced by genetic perturbations in the L1000 project: RNA interference (RNAi), clustered regularly interspaced short palindromic repeats (CRISPR) and overexpression (OE). TranscriptionNet performs better than existing approaches in predicting induced gene expression changes for all three types of genetic perturbations. TranscriptionNet can predict transcriptional profiles for all genes in existing biological networks and increases the coverage of perturbational gene expression changes for each type of genetic perturbation from a few thousand to 26,945 genes. TranscriptionNet demonstrates strong generalization ability when comparing predicted and true gene expression changes on different external tasks. Overall, TranscriptionNet can systematically predict the transcriptional consequences of perturbing genes on a genome-wide scale and thus holds promise for systematically detecting gene function and enhancing drug development and target discovery.
1804.01342
Alexander Gorban
Alexander N. Gorban and Nurdan \c{C}abuko\v{g}lu
Mobility cost and degenerated diffusion in kinesis models
The final version submitted to the journal
Ecological Complexity 36 (2018), 16-21
10.1016/j.ecocom.2018.06.007
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
A new critical effect is predicted in population dispersal. It is based on the fact that the trade-off between the advantages of mobility and the cost of mobility breaks down with a significant deterioration in living conditions. The recently developed model of purposeful kinesis (Gorban \& \c{C}abuko\v{g}lu, Ecological Complexity 33, 2018) is based on the "let well enough alone" idea: mobility decreases for a high reproduction coefficient and, therefore, animals stay longer in good conditions and leave bad conditions more quickly. Mobility has a cost, which should be measured in the changes of the reproduction coefficient. Introducing the cost of mobility into the reproduction coefficient leads to an equation for mobility. It can be solved in closed form using the Lambert $W$-function. Surprisingly, the "let well enough alone" models with a simple linear cost of mobility have an intrinsic phase transition: when conditions worsen, the mobility increases up to some critical value of the reproduction coefficient. For worse conditions, there is no solution for mobility. We interpret this critical effect as the complete loss of mobility, that is, degeneration of diffusion. Qualitatively, this means that mobility increases with worsening conditions up to some limit, and after that, mobility is nullified.
[ { "created": "Wed, 4 Apr 2018 10:59:32 GMT", "version": "v1" }, { "created": "Wed, 16 May 2018 08:52:53 GMT", "version": "v2" }, { "created": "Thu, 28 Feb 2019 08:18:08 GMT", "version": "v3" } ]
2019-03-01
[ [ "Gorban", "Alexander N.", "" ], [ "Çabukoǧlu", "Nurdan", "" ] ]
A new critical effect is predicted in population dispersal. It is based on the fact that the trade-off between the advantages of mobility and the cost of mobility breaks down with a significant deterioration in living conditions. The recently developed model of purposeful kinesis (Gorban \& \c{C}abuko\v{g}lu, Ecological Complexity 33, 2018) is based on the "let well enough alone" idea: mobility decreases for a high reproduction coefficient and, therefore, animals stay longer in good conditions and leave bad conditions more quickly. Mobility has a cost, which should be measured in the changes of the reproduction coefficient. Introducing the cost of mobility into the reproduction coefficient leads to an equation for mobility. It can be solved in closed form using the Lambert $W$-function. Surprisingly, the "let well enough alone" models with a simple linear cost of mobility have an intrinsic phase transition: when conditions worsen, the mobility increases up to some critical value of the reproduction coefficient. For worse conditions, there is no solution for mobility. We interpret this critical effect as the complete loss of mobility, that is, degeneration of diffusion. Qualitatively, this means that mobility increases with worsening conditions up to some limit, and after that, mobility is nullified.
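The closed form is a pleasant exercise. Under one natural parametrization consistent with the abstract (mine, not necessarily the paper's notation), mobility obeys D = D0·exp(-(r - cD)), which rearranges to a Lambert-W solution with exactly the advertised critical point:

```python
import numpy as np
from scipy.special import lambertw

# "Let well enough alone" mobility D = D0 * exp(-r_eff), with a linear
# mobility cost in the reproduction coefficient: r_eff = r - c*D.
# The self-consistency equation D = D0 * exp(-(r - c*D)) rearranges to
# (-c*D) * exp(-c*D) = -c*D0*exp(-r), hence D = -W(-c*D0*exp(-r)) / c.
# The principal branch of W is real only while c*D0*exp(-r) <= 1/e.
def mobility(r, D0=1.0, c=0.5):
    arg = -c * D0 * np.exp(-r)
    if arg < -np.exp(-1.0):
        return None                       # past the critical point: no solution
    return float(np.real(-lambertw(arg, k=0) / c))

for r in (2.0, 1.0, 0.5, 0.2):
    print(r, mobility(r))                 # mobility grows as r drops, then vanishes
```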
1210.4322
Vladimir Chechetkin R.
V. R. Chechetkin and V.V. Lobzin
Stability of the genetic code and optimal parameters of amino acids
9 pages, 3 figures
Journal of Theoretical Biology V. 269, Pp. 57-63, 2011
10.1016/j.jtbi.2010.10.015
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The standard genetic code is known to be much more efficient in minimizing the adverse effects of misreading errors and one-point mutations than a random code with the same structure, i.e. the same number of codons coding for each particular amino acid. We study the inverse problem: how the code structure affects the optimal physico-chemical parameters of amino acids ensuring the highest stability of the genetic code. It is shown that the choice of two or more amino acids with given properties determines all the others unambiguously. In this sense the code structure strictly determines the optimal parameters of amino acids. In a code with the structure of the standard genetic code, the resulting values for hydrophobicity obtained in the leave-one-out scheme and in the scheme with fixed maximum and minimum parameters correlate significantly with the natural scale. This indicates the co-evolution of the genetic code and the physico-chemical properties of amino acids.
[ { "created": "Tue, 16 Oct 2012 09:16:48 GMT", "version": "v1" } ]
2012-10-17
[ [ "Chechetkin", "V. R.", "" ], [ "Lobzin", "V. V.", "" ] ]
The standard genetic code is known to be much more efficient in minimizing the adverse effects of misreading errors and one-point mutations than a random code with the same structure, i.e. the same number of codons coding for each particular amino acid. We study the inverse problem: how the code structure affects the optimal physico-chemical parameters of amino acids ensuring the highest stability of the genetic code. It is shown that the choice of two or more amino acids with given properties determines all the others unambiguously. In this sense the code structure strictly determines the optimal parameters of amino acids. In a code with the structure of the standard genetic code, the resulting values for hydrophobicity obtained in the leave-one-out scheme and in the scheme with fixed maximum and minimum parameters correlate significantly with the natural scale. This indicates the co-evolution of the genetic code and the physico-chemical properties of amino acids.
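One way to see why fixing two amino acids pins down all the others (my reconstruction of the inverse problem from the abstract, assuming symmetric misreading weights): stability is a quadratic form, and its stationarity conditions form a discrete harmonic system.

```latex
% a(c): amino acid encoded by codon c;  x_a: physico-chemical parameter;
% w_{cc'}: misreading weight between codons c and c' (assumed symmetric)
\Phi(x) \;=\; \sum_{c,\,c'} w_{cc'}\,\big(x_{a(c)} - x_{a(c')}\big)^2,
\qquad
\frac{\partial \Phi}{\partial x_a} = 0
\;\Longrightarrow\;
x_a \;=\; \frac{\sum_b W_{ab}\, x_b}{\sum_b W_{ab}},
\quad
W_{ab} \;=\; \sum_{a(c)=a,\; a(c')=b} w_{cc'}
```

On a connected codon graph this linear system has a unique solution once a few values are clamped, which is the sense in which the code structure determines the optimal parameters.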
0803.2085
Conrad Burden
Sylvain Foret, Susan R. Wilson, Conrad J. Burden
Empirical distribution of k-word matches in biological sequences
23 pages, 10 figures
Pattern Recognition 42 (2009) 539-548
10.1016/j.patcog.2008.06.026
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study focuses on an alignment-free sequence comparison method: the number of words of length k shared between two sequences, also known as the D_2 statistic. The advantages of the use of this statistic over alignment-based methods are firstly that it does not assume that homologous segments are contiguous, and secondly that the algorithm is computationally extremely fast, the runtime being proportional to the size of the sequence under scrutiny. Existing applications of the D_2 statistic include the clustering of related sequences in large EST databases such as the STACK database. Such applications have typically relied on heuristics without any statistical basis. Rigorous statistical characterisations of the distribution of D_2 have subsequently been undertaken, but have focussed on the distribution's asymptotic behaviour, leaving the distribution of D_2 uncharacterised for most practical cases. The work presented here bridges these two worlds to give usable approximations of the distribution of D_2 for ranges of parameters most frequently encountered in the study of biological sequences.
[ { "created": "Fri, 14 Mar 2008 05:14:55 GMT", "version": "v1" } ]
2009-09-08
[ [ "Foret", "Sylvain", "" ], [ "Wilson", "Susan R.", "" ], [ "Burden", "Conrad J.", "" ] ]
This study focuses on an alignment-free sequence comparison method: the number of words of length k shared between two sequences, also known as the D_2 statistic. The advantages of the use of this statistic over alignment-based methods are firstly that it does not assume that homologous segments are contiguous, and secondly that the algorithm is computationally extremely fast, the runtime being proportional to the size of the sequence under scrutiny. Existing applications of the D_2 statistic include the clustering of related sequences in large EST databases such as the STACK database. Such applications have typically relied on heuristics without any statistical basis. Rigorous statistical characterisations of the distribution of D_2 have subsequently been undertaken, but have focussed on the distribution's asymptotic behaviour, leaving the distribution of D_2 uncharacterised for most practical cases. The work presented here bridges these two worlds to give usable approximations of the distribution of D_2 for ranges of parameters most frequently encountered in the study of biological sequences.
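The statistic itself is elementary to compute; its distribution is the hard part. The standard definition, as an executable reference:

```python
from collections import Counter

# D_2: the number of k-word matches between two sequences, equivalently the
# inner product of their k-mer count vectors.
def d2(seq_a: str, seq_b: str, k: int) -> int:
    count_a = Counter(seq_a[i:i + k] for i in range(len(seq_a) - k + 1))
    count_b = Counter(seq_b[i:i + k] for i in range(len(seq_b) - k + 1))
    return sum(n * count_b[w] for w, n in count_a.items())

print(d2("ACGTACGT", "CGTACG", 3))   # 6 shared 3-word matches
```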
1502.01061
Robert Noble
Robert Noble, Oliver Kaltz, Michael E Hochberg
Statistical interpretations and new findings on Variation in Cancer Risk Among Tissues
17 pages
null
null
null
q-bio.PE q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tomasetti and Vogelstein (2015a) find that the incidence of a set of cancer types is correlated with the total number of normal stem cell divisions. Here, we separate the effects of standing stem cell number (i.e., organ or tissue size) and per stem cell lifetime replication rate. We show that each has a statistically significant and independent effect on explaining variation in cancer incidence over the 31 cases considered by Tomasetti and Vogelstein. When considering the total number of stem cell divisions and when removing cases associated with disease or carcinogens, we find that cancer incidence attains a plateau of approximately 0.6% incidence for the cases considered by these authors. We further demonstrate that grouping by anatomical site explains most of the remaining variation in risk between cancer types. This new analysis suggests that cancer risk depends not only on the number of stem cell divisions but varies enormously ($\sim$10,000 times) depending on the stem cell's environment. Future research should investigate how tissue characteristics (anatomical site, type, size, stem cell divisions) explain cancer incidence over a wider range of cancers, to what extent different tissues express specific protective mechanisms, and whether any differential protection can be attributed to natural selection.
[ { "created": "Tue, 3 Feb 2015 22:58:02 GMT", "version": "v1" } ]
2015-02-05
[ [ "Noble", "Robert", "" ], [ "Kaltz", "Oliver", "" ], [ "Hochberg", "Michael E", "" ] ]
Tomasetti and Vogelstein (2015a) find that the incidence of a set of cancer types is correlated with the total number of normal stem cell divisions. Here, we separate the effects of standing stem cell number (i.e., organ or tissue size) and per stem cell lifetime replication rate. We show that each has a statistically significant and independent effect on explaining variation in cancer incidence over the 31 cases considered by Tomasetti and Vogelstein. When considering the total number of stem cell divisions and when removing cases associated with disease or carcinogens, we find that cancer incidence attains a plateau of approximately 0.6% incidence for the cases considered by these authors. We further demonstrate that grouping by anatomical site explains most of the remaining variation in risk between cancer types. This new analysis suggests that cancer risk depends not only on the number of stem cell divisions but varies enormously ($\sim$10,000 times) depending on the stem cell's environment. Future research should investigate how tissue characteristics (anatomical site, type, size, stem cell divisions) explain cancer incidence over a wider range of cancers, to what extent different tissues express specific protective mechanisms, and whether any differential protection can be attributed to natural selection.
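Separating the two factors amounts to a two-predictor regression on log scales, since total lifetime stem cell divisions factor as (stem cell number) x (divisions per stem cell). A sketch on synthetic numbers (the data and coefficients below are made up purely to show the shape of the analysis, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 31
log_nstem = rng.uniform(4, 12, n)       # log stem cell number (synthetic)
log_rate = rng.uniform(-1, 3, n)        # log divisions per stem cell (synthetic)
log_risk = 0.5 * log_nstem + 0.4 * log_rate + rng.normal(0, 0.5, n)

# Regress log-risk on both factors to test their independent contributions.
X = np.column_stack([np.ones(n), log_nstem, log_rate])
beta, *_ = np.linalg.lstsq(X, log_risk, rcond=None)
print(dict(zip(["intercept", "b_nstem", "b_rate"], beta.round(2))))
```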
q-bio/0309023
Bijan Pesaran
B. Pesaran and P. P. Mitra
IDL Signal Processing Library 1.0
13 IDL .pro files, 1 .html file, 1 .ps file, 1 license file. Download the source for the IDL files (save as .tar.gz) Read idl_lib.ps for instructions on use. Originally submitted to the neuro-sys archive which was never publicly announced (was 9801001)
null
null
CMP-001
q-bio.QM
null
We make available a library of documented IDL .pro files, as well as a shareable object library that allows IDL to call routines from LAPACK. The routines are for use in the spectral analysis of time series data. The primary focus of these routines is David Thomson's multitaper methods, but a whole range of functions will be made available in future revisions of the submission. At present, routines are provided to carry out the following operations: calculate prolate spheroidal sequences and eigenvalues, project time series into frequency bands, calculate spectral estimates with or without moving windows, and calculate the cross-coherence between two time series as a function of frequency, as well as the coherence between frequencies for a single time series.
[ { "created": "Tue, 20 Jan 1998 18:38:17 GMT", "version": "v1" } ]
2007-05-23
[ [ "Pesaran", "B.", "" ], [ "Mitra", "P. P.", "" ] ]
We make available a library of documented IDL .pro files, as well as a shareable object library that allows IDL to call routines from LAPACK. The routines are for use in the spectral analysis of time series data. The primary focus of these routines is David Thomson's multitaper methods, but a whole range of functions will be made available in future revisions of the submission. At present, routines are provided to carry out the following operations: calculate prolate spheroidal sequences and eigenvalues, project time series into frequency bands, calculate spectral estimates with or without moving windows, and calculate the cross-coherence between two time series as a function of frequency, as well as the coherence between frequencies for a single time series.
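For readers outside IDL, the core computation the library packages is compactly expressed in Python (a sketch of the standard Thomson multitaper estimate using SciPy's Slepian tapers, not a port of the .pro files):

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, nw=4.0, k=7, fs=1.0):
    """Average the periodograms of x tapered by the first k Slepian sequences."""
    n = len(x)
    tapers = dpss(n, NW=nw, Kmax=k)               # k x n prolate spheroidal tapers
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectra.mean(axis=0) / fs

x = np.sin(2 * np.pi * 0.1 * np.arange(1024)) + np.random.default_rng(0).normal(size=1024)
freqs, psd = multitaper_psd(x)                    # peak near frequency 0.1
```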
1605.06925
Lorenz K. Muller
Lorenz K. Muller and Giacomo Indiveri
Neural Sampling by Irregular Gating Inhibition of Spiking Neurons and Attractor Networks
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A long tradition in theoretical neuroscience casts sensory processing in the brain as the process of inferring the maximally consistent interpretations of imperfect sensory input. Recently it has been shown that gamma-band inhibition can enable neural attractor networks to approximately carry out such a sampling mechanism. In this paper we propose a novel neural network model based on irregular gating inhibition, show analytically how it implements a Markov chain Monte Carlo (MCMC) sampler, and describe how it can be used to model networks both of neural attractors and of single spiking neurons. Finally we show how this model, applied to spiking neurons, gives rise to a new putative mechanism that could be used to implement stochastic synaptic weights in biological neural networks and in neuromorphic hardware.
[ { "created": "Mon, 23 May 2016 07:54:46 GMT", "version": "v1" }, { "created": "Thu, 8 Sep 2016 07:13:07 GMT", "version": "v2" }, { "created": "Thu, 31 Aug 2017 08:43:49 GMT", "version": "v3" }, { "created": "Fri, 1 Sep 2017 06:39:55 GMT", "version": "v4" } ]
2017-09-04
[ [ "Muller", "Lorenz K.", "" ], [ "Indiveri", "Giacomo", "" ] ]
A long tradition in theoretical neuroscience casts sensory processing in the brain as the process of inferring the maximally consistent interpretations of imperfect sensory input. Recently it has been shown that gamma-band inhibition can enable neural attractor networks to approximately carry out such a sampling mechanism. In this paper we propose a novel neural network model based on irregular gating inhibition, show analytically how it implements a Markov chain Monte Carlo (MCMC) sampler, and describe how it can be used to model networks both of neural attractors and of single spiking neurons. Finally we show how this model, applied to spiking neurons, gives rise to a new putative mechanism that could be used to implement stochastic synaptic weights in biological neural networks and in neuromorphic hardware.
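The underlying sampling picture can be shown with a generic toy (plain Gibbs sampling in a binary stochastic network; the paper's specific contribution, irregular gating inhibition, replaces the random update schedule used here):

```python
import numpy as np

def gibbs_sample(W, b, steps=10_000, seed=0):
    """Gibbs sampling over binary states s in {0,1}^n for the Boltzmann
    distribution p(s) ~ exp(s@W@s/2 + b@s), W symmetric with zero diagonal."""
    rng = np.random.default_rng(seed)
    n = len(b)
    s = rng.integers(0, 2, n)
    samples = []
    for _ in range(steps):
        i = rng.integers(n)                          # the unit "released" this step
        drive = W[i] @ s + b[i]                      # net input from the other units
        s[i] = rng.random() < 1.0 / (1.0 + np.exp(-drive))
        samples.append(s.copy())
    return np.array(samples)

J = np.array([[0.0, 1.5], [1.5, 0.0]])               # two mutually excitatory units
print(gibbs_sample(J, b=np.array([-1.0, -1.0])).mean(axis=0))
```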
1707.01974
Dervis Vural
Aylin Acun, Dervis Can Vural, Pinar Zorlutuna
A Tissue Engineered Model of Aging: Interdependence and Cooperative Effects in Failing Tissues
null
null
null
null
q-bio.TO q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aging remains a fundamental open problem in modern biology. Although there exist a number of theories on aging on the cellular scale, nearly nothing is known about how microscopic failures cascade to macroscopic failures of tissues, organs and ultimately the organism. The goal of this work is to bridge microscopic cell failure to macroscopic manifestations of aging. We use tissue engineered constructs to control the cellular-level damage and cell-cell distance in individual tissues to establish the role of complex interdependence and interactions between cells in aging tissues. We found that while microscopic mechanisms drive aging, the interdependency between cells plays a major role in tissue death, providing evidence on how cellular aging is connected to its higher systemic consequences.
[ { "created": "Thu, 6 Jul 2017 21:38:39 GMT", "version": "v1" } ]
2017-07-10
[ [ "Acun", "Aylin", "" ], [ "Vural", "Dervis Can", "" ], [ "Zorlutuna", "Pinar", "" ] ]
Aging remains a fundamental open problem in modern biology. Although there exist a number of theories on aging on the cellular scale, nearly nothing is known about how microscopic failures cascade to macroscopic failures of tissues, organs and ultimately the organism. The goal of this work is to bridge microscopic cell failure to macroscopic manifestations of aging. We use tissue engineered constructs to control the cellular-level damage and cell-cell distance in individual tissues to establish the role of complex interdependence and interactions between cells in aging tissues. We found that while microscopic mechanisms drive aging, the interdependency between cells plays a major role in tissue death, providing evidence on how cellular aging is connected to its higher systemic consequences.
1309.0267
Wolfram Liebermeister
Wolfram Liebermeister
Structural thermokinetic modelling
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Translating metabolic networks into dynamic models is difficult if kinetic constants are unknown. Structural Kinetic Modelling (SKM) replaces reaction elasticities by independent random numbers. Here I propose a variant that accounts for reversible reactions and thermodynamics: in Structural Thermokinetic Modelling (STM), correlated elasticities are computed from enzyme saturation values and thermodynamic forces, which are physically independent. STM relies on a dependency schema in which basic variables can be sampled, fitted to data, or optimised, while all other variables are computed from them. Probability distributions in the dependency schema define a model ensemble, which leads to probabilistic predictions even if data are scarce. STM highlights the importance of variabilities, dependencies and covariances of biological variables. By choosing or sampling the basic variables, we can convert metabolic networks into kinetic models with consistent reversible rate laws. Metabolic control coefficients obtained from these models can tell us about metabolic dynamics, including responses and optimal adaptations to perturbations as well as enzyme synergies, metabolite correlations, and metabolic fluctuations arising from chemical noise. By comparing model variants with different network structures, fluxes, thermodynamic forces, regulation, or types of rate laws, we can quantify the effects of these model features. To showcase STM, I study metabolic control, metabolic fluctuations, and enzyme synergies, and how they are shaped by thermodynamic forces. Thermodynamics can be used to obtain more precise predictions of flux control, enzyme synergies, correlated flux and metabolite variations, and of the emergence and propagation of metabolic noise.
[ { "created": "Sun, 1 Sep 2013 21:35:06 GMT", "version": "v1" }, { "created": "Mon, 7 Mar 2022 09:51:43 GMT", "version": "v2" } ]
2022-03-08
[ [ "Liebermeister", "Wolfram", "" ] ]
Translating metabolic networks into dynamic models is difficult if kinetic constants are unknown. Structural Kinetic Modelling (SKM) replaces reaction elasticities by independent random numbers. Here I propose a variant that accounts for reversible reactions and thermodynamics: in Structural Thermokinetic Modelling (STM), correlated elasticities are computed from enzyme saturation values and thermodynamic forces, which are physically independent. STM relies on a dependency schema in which basic variables can be sampled, fitted to data, or optimised, while all other variables are computed from them. Probability distributions in the dependency schema define a model ensemble, which leads to probabilistic predictions even if data are scarce. STM highlights the importance of variabilities, dependencies and covariances of biological variables. By choosing or sampling the basic variables, we can convert metabolic networks into kinetic models with consistent reversible rate laws. Metabolic control coefficients obtained from these models can tell us about metabolic dynamics, including responses and optimal adaptations to perturbations as well as enzyme synergies, metabolite correlations, and metabolic fluctuations arising from chemical noise. By comparing model variants with different network structures, fluxes, thermodynamic forces, regulation, or types of rate laws, we can quantify the effects of these model features. To showcase STM, I study metabolic control, metabolic fluctuations, and enzyme synergies, and how they are shaped by thermodynamic forces. Thermodynamics can be used to obtain more precise predictions of flux control, enzyme synergies, correlated flux and metabolite variations, and of the emergence and propagation of metabolic noise.
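A worked special case shows how thermodynamic forces constrain elasticities (a standard mass-action calculation consistent with, but much simpler than, STM's general rate laws):

```latex
% Reversible reaction S <-> P with v^+ = k^+ s, v^- = k^- p,
% net flux j = v^+ - v^-, and thermodynamic force \theta = \ln(v^+/v^-) > 0:
\varepsilon_s \;=\; \frac{\partial \ln j}{\partial \ln s}
\;=\; \frac{v^+}{v^+ - v^-}
\;=\; \frac{1}{1 - e^{-\theta}}
```

Near equilibrium (theta -> 0) the elasticity diverges, while far from equilibrium (theta -> infinity) it tends to 1; sampling saturations and forces as independent basic variables therefore yields correlated, thermodynamically consistent elasticities rather than independent random numbers.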
2401.17328
Farnoush Shishehbori
Farnoush Shishehbori, Zainab Awan
Enhancing Cardiovascular Disease Risk Prediction with Machine Learning Models
46 pages (including references), 5 Figures
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Cardiovascular disease remains a leading global cause of mortality, necessitating accurate risk prediction tools. Traditional methods, such as QRISK and the Framingham heart score, exhibit limitations in their ability to incorporate comprehensive patient data, potentially resulting in incomplete risk factor consideration. To address these shortcomings, this study conducts a meticulous review focusing on the application of machine learning models to enhance predictive accuracy. Machine learning models, such as support vector machines and random forests, as well as deep learning techniques like convolutional neural networks and recurrent neural networks, have emerged as promising alternatives. These models offer superior performance, accommodating a broader spectrum of variables and providing precise subgroup-specific predictions. While machine learning integration holds promise for enhancing risk assessment, it presents challenges such as data requirements and computational constraints. Additionally, large language models have revolutionised healthcare applications, augmenting diagnostic precision and patient care. This study examines the core aspects of cardiovascular disease event risk and presents a thorough review of traditional and machine learning models, alongside deep learning techniques, for improved accuracy. It offers a comprehensive survey of relevant datasets, critically compares ML models with conventional approaches, and synthesises key findings, highlighting their implications for clinical practice. Furthermore, the potential of machine learning and large language models in cardiovascular medicine is undeniable. However, rigorous validation and optimisation are imperative before widespread application in healthcare. This integration promises more accurate and personalised cardiovascular care.
[ { "created": "Mon, 29 Jan 2024 19:08:33 GMT", "version": "v1" }, { "created": "Thu, 8 Feb 2024 10:42:53 GMT", "version": "v2" }, { "created": "Fri, 9 Feb 2024 16:13:09 GMT", "version": "v3" } ]
2024-02-12
[ [ "Shishehbori", "Farnoush", "" ], [ "Awan", "Zainab", "" ] ]
Cardiovascular disease remains a leading global cause of mortality, necessitating accurate risk prediction tools. Traditional methods, such as QRISK and the Framingham heart score, exhibit limitations in their ability to incorporate comprehensive patient data, potentially resulting in incomplete risk factor consideration. To address these shortcomings, this study conducts a meticulous review focusing on the application of machine learning models to enhance predictive accuracy. Machine learning models, such as support vector machines and random forests, as well as deep learning techniques like convolutional neural networks and recurrent neural networks, have emerged as promising alternatives. These models offer superior performance, accommodating a broader spectrum of variables and providing precise subgroup-specific predictions. While machine learning integration holds promise for enhancing risk assessment, it presents challenges such as data requirements and computational constraints. Additionally, large language models have revolutionised healthcare applications, augmenting diagnostic precision and patient care. This study examines the core aspects of cardiovascular disease event risk and presents a thorough review of traditional and machine learning models, alongside deep learning techniques, for improved accuracy. It offers a comprehensive survey of relevant datasets, critically compares ML models with conventional approaches, and synthesises key findings, highlighting their implications for clinical practice. Furthermore, the potential of machine learning and large language models in cardiovascular medicine is undeniable. However, rigorous validation and optimisation are imperative before widespread application in healthcare. This integration promises more accurate and personalised cardiovascular care.
2105.01167
Qi Su
Qi Su, Joshua. B Plotkin
Evolution of cooperation with asymmetric social interactions
40 pages, 11 figures
null
10.1073/pnas.2113468118
null
q-bio.PE math.DS physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
How cooperation emerges in human societies is both an evolutionary enigma, and a practical problem with tangible implications for societal health. Population structure has long been recognized as a catalyst for cooperation because local interactions enable reciprocity. Analysis of this phenomenon typically assumes bi-directional social interactions, even though real-world interactions are often uni-directional. Uni-directional interactions -- where one individual has the opportunity to contribute altruistically to another, but not conversely -- arise in real-world populations as the result of organizational hierarchies, social stratification, popularity effects, and endogenous mechanisms of network growth. Here we expand the theory of cooperation in structured populations to account for both uni- and bi-directional social interactions. Even though directed interactions remove the opportunity for reciprocity, we find that cooperation can nonetheless be favored in directed social networks and that cooperation is provably maximized for networks with an intermediate proportion of directed interactions, as observed in many empirical settings. We also identify two simple structural motifs that allow efficient modification of interaction directionality to promote cooperation by orders of magnitude. We discuss how our results relate to the concepts of generalized and indirect reciprocity.
[ { "created": "Mon, 3 May 2021 20:57:10 GMT", "version": "v1" }, { "created": "Thu, 20 May 2021 19:50:15 GMT", "version": "v2" } ]
2022-01-06
[ [ "Su", "Qi", "" ], [ "Plotkin", "Joshua. B", "" ] ]
How cooperation emerges in human societies is both an evolutionary enigma, and a practical problem with tangible implications for societal health. Population structure has long been recognized as a catalyst for cooperation because local interactions enable reciprocity. Analysis of this phenomenon typically assumes bi-directional social interactions, even though real-world interactions are often uni-directional. Uni-directional interactions -- where one individual has the opportunity to contribute altruistically to another, but not conversely -- arise in real-world populations as the result of organizational hierarchies, social stratification, popularity effects, and endogenous mechanisms of network growth. Here we expand the theory of cooperation in structured populations to account for both uni- and bi-directional social interactions. Even though directed interactions remove the opportunity for reciprocity, we find that cooperation can nonetheless be favored in directed social networks and that cooperation is provably maximized for networks with an intermediate proportion of directed interactions, as observed in many empirical settings. We also identify two simple structural motifs that allow efficient modification of interaction directionality to promote cooperation by orders of magnitude. We discuss how our results relate to the concepts of generalized and indirect reciprocity.
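A hedged simulation sketch of the setting described above, not the paper's exact process: a donation game on a random graph in which a fraction p of links is made uni-directional, with Fermi (pairwise-imitation) updating. The population size, degree, benefit/cost values, and update rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, b, c, p = 100, 4, 3.0, 1.0, 0.5        # p = fraction of links made one-way
A = np.zeros((N, N), dtype=int)              # A[i, j] = 1 means i can donate to j
for i in range(N):
    for j in rng.choice([x for x in range(N) if x != i], k, replace=False):
        A[i, j] = A[j, i] = 1                # start from bi-directional links
drop = (rng.random((N, N)) < p) & (np.triu(A, 1) > 0)
A[drop.T] = 0                                # selected pairs become i -> j only

s = rng.integers(0, 2, N)                    # 1 = cooperator, 0 = defector

def payoffs(s):
    received = b * (A.T @ s)                 # benefits from cooperating in-neighbors
    paid = c * s * A.sum(axis=1)             # cost per out-neighbor if cooperating
    return received - paid

for _ in range(5000):                        # Fermi pairwise-imitation dynamics
    i = rng.integers(N)
    in_nbrs = np.nonzero(A[:, i])[0]         # i can only observe/imitate in-neighbors
    if in_nbrs.size == 0:
        continue
    j = rng.choice(in_nbrs)
    f = payoffs(s)
    if rng.random() < 1 / (1 + np.exp(f[i] - f[j])):
        s[i] = s[j]
print("final cooperator fraction:", s.mean())
```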
1702.06405
David Caldwell
David J. Caldwell, Jing Wu, Kaitlyn Casimo, Jeffrey G. Ojemann, Rajesh P.N. Rao
Interactive Web Application for Exploring Matrices of Neural Connectivity
4 pages, IEEE NER 2017
null
10.1109/NER.2017.8008287
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present here a browser-based application for visualizing patterns of connectivity in 3D stacked data matrices with large numbers of pairwise relations. Visualizing a connectivity matrix, looking for trends and patterns, and dynamically manipulating these values is a challenge for scientists from diverse fields, including neuroscience and genomics. In particular, high-dimensional neural data include those acquired via electroencephalography (EEG), electrocorticography (ECoG), magnetoencephalography (MEG), and functional MRI. Neural connectivity data contains multivariate attributes for each edge between different brain regions, which motivated our lightweight, open source, easy-to-use visualization tool for the exploration of these connectivity matrices to highlight connections of interest. Here we present a client-side, mobile-compatible visualization tool written entirely in HTML5/JavaScript that allows in-browser manipulation of user-defined files for exploration of brain connectivity. Visualizations can highlight different aspects of the data simultaneously across different dimensions. Input files are in JSON format, and custom Python scripts have been written to parse MATLAB or Python data files into JSON-loadable format. We demonstrate the analysis of connectivity data acquired via human ECoG recordings as a domain-specific implementation of our application. We envision applications for this interactive tool in fields seeking to visualize pairwise connectivity.
[ { "created": "Tue, 21 Feb 2017 14:36:17 GMT", "version": "v1" } ]
2018-01-04
[ [ "Caldwell", "David J.", "" ], [ "Wu", "Jing", "" ], [ "Casimo", "Kaitlyn", "" ], [ "Ojemann", "Jeffrey G.", "" ], [ "Rao", "Rajesh P. N.", "" ] ]
We present here a browser-based application for visualizing patterns of connectivity in 3D stacked data matrices with large numbers of pairwise relations. Visualizing a connectivity matrix, looking for trends and patterns, and dynamically manipulating these values is a challenge for scientists from diverse fields, including neuroscience and genomics. In particular, high-dimensional neural data include those acquired via electroencephalography (EEG), electrocorticography (ECoG), magnetoencephalography (MEG), and functional MRI. Neural connectivity data contains multivariate attributes for each edge between different brain regions, which motivated our lightweight, open source, easy-to-use visualization tool for the exploration of these connectivity matrices to highlight connections of interest. Here we present a client-side, mobile-compatible visualization tool written entirely in HTML5/JavaScript that allows in-browser manipulation of user-defined files for exploration of brain connectivity. Visualizations can highlight different aspects of the data simultaneously across different dimensions. Input files are in JSON format, and custom Python scripts have been written to parse MATLAB or Python data files into JSON-loadable format. We demonstrate the analysis of connectivity data acquired via human ECoG recordings as a domain-specific implementation of our application. We envision applications for this interactive tool in fields seeking to visualize pairwise connectivity.
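A short sketch of the conversion step the abstract mentions (parsing Python data into a JSON-loadable format). The field names "labels" and "matrix" are assumptions for illustration; they are not the tool's documented schema.

```python
import json
import numpy as np

conn = np.random.rand(8, 8)          # stand-in for an 8-channel ECoG connectivity matrix
conn = (conn + conn.T) / 2           # symmetrize for an undirected measure
payload = {
    "labels": [f"elec{i}" for i in range(conn.shape[0])],
    "matrix": conn.round(4).tolist(),  # lists are JSON-serializable; ndarrays are not
}
with open("connectivity.json", "w") as fh:
    json.dump(payload, fh, indent=2)
```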
q-bio/0509007
Isaac Hubner
Isaac A. Hubner, Eric J. Deeds, and Eugene I. Shakhnovich
High resolution protein folding with a transferable potential
submitted to PNAS 2005-03-16
null
10.1073/pnas.0502181102
null
q-bio.BM
null
A generalized computational method for folding proteins with a fully transferable potential and geometrically realistic all-atom model is presented and tested on seven different helix bundle proteins. The protocol, which includes graph-theoretical analysis of the ensemble of resulting folded conformations, was systematically applied and consistently produced structure predictions of approximately 3 Angstroms without any knowledge of the native state. To measure and understand the significance of the results, extensive control simulations were conducted. Graph theoretic analysis provides a means for systematically identifying the native fold and provides physical insight, conceptually linking the results to modern theoretical views of protein folding. In addition to presenting a method for prediction of structure and folding mechanism, our model suggests that an accurate all-atom amino acid representation coupled with a physically reasonable atomic interaction potential (that does not require optimization to the test set) and hydrogen bonding are essential features for a realistic protein model.
[ { "created": "Wed, 7 Sep 2005 14:24:27 GMT", "version": "v1" } ]
2009-11-11
[ [ "Hubner", "Isaac A.", "" ], [ "Deeds", "Eric J.", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
A generalized computational method for folding proteins with a fully transferable potential and geometrically realistic all-atom model is presented and tested on seven different helix bundle proteins. The protocol, which includes graph-theoretical analysis of the ensemble of resulting folded conformations, was systematically applied and consistently produced structure predictions of approximately 3 Angstroms without any knowledge of the native state. To measure and understand the significance of the results, extensive control simulations were conducted. Graph theoretic analysis provides a means for systematically identifying the native fold and provides physical insight, conceptually linking the results to modern theoretical views of protein folding. In addition to presenting a method for prediction of structure and folding mechanism, our model suggests that an accurate all-atom amino acid representation coupled with a physically reasonable atomic interaction potential (that does not require optimization to the test set) and hydrogen bonding are essential features for a realistic protein model.
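A hedged sketch of a graph-theoretic selection step of the sort described above: treat conformations as nodes, connect pairs below an RMSD cutoff, and take the most-connected structure as the native-fold candidate. The random RMSD matrix, the 3 Angstrom cutoff, and the max-degree criterion are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n_conf = 50
rmsd = rng.uniform(1.0, 8.0, (n_conf, n_conf))  # placeholder pairwise RMSDs (Angstroms)
rmsd = (rmsd + rmsd.T) / 2
np.fill_diagonal(rmsd, 0.0)

G = nx.Graph()
G.add_nodes_from(range(n_conf))
G.add_edges_from((i, j) for i in range(n_conf) for j in range(i + 1, n_conf)
                 if rmsd[i, j] < 3.0)           # structurally similar pairs
candidate = max(G.degree, key=lambda kv: kv[1])[0]
print("native-state candidate: conformation", candidate)
```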
1007.0327
Michael Assaf
Michael Assaf and Mauro Mobilia
Large Fluctuations and Fixation in Evolutionary Games
17 pages, 10 figures, to appear in JSTAT
J. Stat. Mech., (2010) P09009
10.1088/1742-5468/2010/09/P09009
null
q-bio.PE cond-mat.stat-mech nlin.AO q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study large fluctuations in evolutionary games belonging to the coordination and anti-coordination classes. The dynamics of these games, modeling cooperation dilemmas, is characterized by a coexistence fixed point separating two absorbing states. We are particularly interested in the problem of fixation that refers to the possibility that a few mutants take over the entire population. Here, the fixation phenomenon is induced by large fluctuations and is investigated by a semi-classical WKB (Wentzel-Kramers-Brillouin) theory generalized to treat stochastic systems possessing multiple absorbing states. Importantly, this method allows us to analyze the combined influence of selection and random fluctuations on the evolutionary dynamics \textit{beyond} the weak selection limit often considered in previous works. We accurately compute, including pre-exponential factors, the probability distribution function in the long-lived coexistence state and the mean fixation time necessary for a few mutants to take over the entire population in anti-coordination games, and also the fixation probability in the coordination class. Our analytical results compare excellently with extensive numerical simulations. Furthermore, we demonstrate that our treatment is superior to the Fokker-Planck approximation when the selection intensity is finite.
[ { "created": "Fri, 2 Jul 2010 10:03:34 GMT", "version": "v1" }, { "created": "Wed, 25 Aug 2010 12:42:45 GMT", "version": "v2" }, { "created": "Thu, 26 Aug 2010 10:36:31 GMT", "version": "v3" } ]
2015-05-19
[ [ "Assaf", "Michael", "" ], [ "Mobilia", "Mauro", "" ] ]
We study large fluctuations in evolutionary games belonging to the coordination and anti-coordination classes. The dynamics of these games, modeling cooperation dilemmas, is characterized by a coexistence fixed point separating two absorbing states. We are particularly interested in the problem of fixation that refers to the possibility that a few mutants take over the entire population. Here, the fixation phenomenon is induced by large fluctuations and is investigated by a semi-classical WKB (Wentzel-Kramers-Brillouin) theory generalized to treat stochastic systems possessing multiple absorbing states. Importantly, this method allows us to analyze the combined influence of selection and random fluctuations on the evolutionary dynamics \textit{beyond} the weak selection limit often considered in previous works. We accurately compute, including pre-exponential factors, the probability distribution function in the long-lived coexistence state and the mean fixation time necessary for a few mutants to take over the entire population in anti-coordination games, and also the fixation probability in the coordination class. Our analytical results compare excellently with extensive numerical simulations. Furthermore, we demonstrate that our treatment is superior to the Fokker-Planck approximation when the selection intensity is finite.
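For intuition, a brute-force counterpart to the quantities the WKB theory computes analytically: a frequency-dependent Moran process for a 2x2 coordination game, estimating the fixation probability of a single mutant by simulation. Payoff entries and population size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50
a, b_, c, d = 1.05, 0.95, 1.0, 1.0   # coordination game: mutants disfavored when rare

def moran_step(n):
    fA = (a*(n - 1) + b_*(N - n)) / (N - 1)   # mean payoff of a mutant
    fB = (c*n + d*(N - n - 1)) / (N - 1)      # mean payoff of a resident
    birth_is_A = rng.random() < n*fA / (n*fA + (N - n)*fB)  # payoff-biased birth
    death_is_A = rng.random() < n / N                       # uniform death
    return n + int(birth_is_A) - int(death_is_A)

fixations, trials = 0, 2000
for _ in range(trials):
    n = 1
    while 0 < n < N:
        n = moran_step(n)
    fixations += (n == N)
print("fixation probability:", fixations / trials, "(neutral baseline:", 1/N, ")")
```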
1907.00230
Alessandro Torcini Dr
Hongjie Bi, Marco Segneri, Matteo di Volo, Alessandro Torcini
Coexistence of fast and slow gamma oscillations in one population of inhibitory spiking neurons
20 pages, 14 figures
Phys. Rev. Research 2, 013042 (2020)
10.1103/PhysRevResearch.2.013042
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Oscillations are a hallmark of neural population activity in various brain regions with a spectrum covering a wide range of frequencies. Within this spectrum gamma oscillations have received particular attention due to their ubiquitous nature and to their correlation with higher brain functions. Recently, it has been reported that gamma oscillations in the hippocampus of behaving rodents are segregated into two distinct frequency bands: slow and fast. These two gamma rhythms correspond to different states of the network, but their origin has not yet been clarified. Here, we show theoretically and numerically that a single inhibitory population can give rise to coexisting slow and fast gamma rhythms corresponding to collective oscillations of a balanced spiking network. The slow and fast gamma rhythms are generated via two different mechanisms: the fast one being driven by the coordinated tonic neural firing and the slow one by endogenous fluctuations due to irregular neural activity. We show that almost instantaneous stimulations can switch the collective gamma oscillations from slow to fast and vice versa. Furthermore, to make closer contact with the experimental observations, we consider the modulation of the gamma rhythms induced by a slower (theta) rhythm driving the network dynamics. In this context, depending on the strength of the forcing, we observe phase-amplitude and phase-phase coupling between the fast and slow gamma oscillations and the theta forcing. Phase-phase coupling reveals different theta-phase preferences for the two coexisting gamma rhythms.
[ { "created": "Sat, 29 Jun 2019 16:09:48 GMT", "version": "v1" } ]
2020-01-22
[ [ "Bi", "Hongjie", "" ], [ "Segneri", "Marco", "" ], [ "di Volo", "Matteo", "" ], [ "Torcini", "Alessandro", "" ] ]
Oscillations are a hallmark of neural population activity in various brain regions with a spectrum covering a wide range of frequencies. Within this spectrum gamma oscillations have received particular attention due to their ubiquitous nature and to their correlation with higher brain functions. Recently, it has been reported that gamma oscillations in the hippocampus of behaving rodents are segregated into two distinct frequency bands: slow and fast. These two gamma rhythms correspond to different states of the network, but their origin has not yet been clarified. Here, we show theoretically and numerically that a single inhibitory population can give rise to coexisting slow and fast gamma rhythms corresponding to collective oscillations of a balanced spiking network. The slow and fast gamma rhythms are generated via two different mechanisms: the fast one being driven by the coordinated tonic neural firing and the slow one by endogenous fluctuations due to irregular neural activity. We show that almost instantaneous stimulations can switch the collective gamma oscillations from slow to fast and vice versa. Furthermore, to make closer contact with the experimental observations, we consider the modulation of the gamma rhythms induced by a slower (theta) rhythm driving the network dynamics. In this context, depending on the strength of the forcing, we observe phase-amplitude and phase-phase coupling between the fast and slow gamma oscillations and the theta forcing. Phase-phase coupling reveals different theta-phase preferences for the two coexisting gamma rhythms.
1301.7734
Pablo Cordero
Pablo Cordero, Wipapat Kladwang, Christopher C. VanLang and Rhiju Das
A mutate-and-map protocol for inferring base pairs in structured RNA
22 pages, 5 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chemical mapping is a widespread technique for structural analysis of nucleic acids in which a molecule's reactivity to different probes is quantified at single-nucleotide resolution and used to constrain structural modeling. This experimental framework has been extensively revisited in the past decade with new strategies for high-throughput read-outs, chemical modification, and rapid data analysis. Recently, we have coupled the technique to high-throughput mutagenesis. Point mutations of a base-paired nucleotide can lead to exposure of not only that nucleotide but also its interaction partner. Carrying out the mutation and mapping for the entire system gives an experimental approximation of the molecule's contact map. Here, we give our in-house protocol for this mutate-and-map strategy, based on 96-well capillary electrophoresis, and we provide practical tips on interpreting the data to infer nucleic acid structure.
[ { "created": "Thu, 31 Jan 2013 19:58:16 GMT", "version": "v1" } ]
2013-02-01
[ [ "Cordero", "Pablo", "" ], [ "Kladwang", "Wipapat", "" ], [ "VanLang", "Christopher C.", "" ], [ "Das", "Rhiju", "" ] ]
Chemical mapping is a widespread technique for structural analysis of nucleic acids in which a molecule's reactivity to different probes is quantified at single-nucleotide resolution and used to constrain structural modeling. This experimental framework has been extensively revisited in the past decade with new strategies for high-throughput read-outs, chemical modification, and rapid data analysis. Recently, we have coupled the technique to high-throughput mutagenesis. Point mutations of a base-paired nucleotide can lead to exposure of not only that nucleotide but also its interaction partner. Carrying out the mutation and mapping for the entire system gives an experimental approximation of the molecule's contact map. Here, we give our in-house protocol for this mutate-and-map strategy, based on 96-well capillary electrophoresis, and we provide practical tips on interpreting the data to infer nucleic acid structure.
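A hedged sketch of one way to interpret mutate-and-map data, not the protocol's prescribed analysis: Z-score each mutant's reactivity profile so that outlier positions suggest the mutated nucleotide's pairing partner. The synthetic matrix and the Z > 3 threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_mut, n_nt = 20, 80
react = rng.normal(1.0, 0.2, (n_mut, n_nt))   # rows = mutants, cols = nucleotides
react[5, 60] += 2.0   # pretend mutating position 5 exposes its partner at 60

# Per-row Z-scores highlight positions with anomalous reactivity changes.
z = (react - react.mean(axis=1, keepdims=True)) / react.std(axis=1, keepdims=True)
hits = np.argwhere(z > 3.0)  # candidate (mutant, partner) contacts
print(hits)
```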
2204.07186
Naoki Hiratani
Naoki Hiratani, Haim Sompolinsky
Optimal quadratic binding for relational reasoning in vector symbolic neural architectures
32 pages, 9 figures
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by/4.0/
The binding operation is fundamental to many cognitive processes, such as cognitive map formation, relational reasoning, and language comprehension. In these processes, two different modalities, such as location and objects, events and their contextual cues, and words and their roles, need to be bound together, but little is known about the underlying neural mechanisms. Previous works introduced a binding model based on quadratic functions of bound pairs, followed by vector summation of multiple pairs. Based on this framework, we address the following questions: Which classes of quadratic matrices are optimal for decoding relational structures? And what is the resultant accuracy? We introduce a new class of binding matrices based on a matrix representation of octonion algebra, an eight-dimensional extension of complex numbers. We show that these matrices enable a more accurate unbinding than previously known methods when a small number of pairs are present. Moreover, numerical optimization of a binding operator converges to this octonion binding. We also show that when there are a large number of bound pairs, however, a random quadratic binding performs as well as the octonion and previously-proposed binding methods. This study thus provides new insight into potential neural mechanisms of binding operations in the brain.
[ { "created": "Thu, 14 Apr 2022 18:41:27 GMT", "version": "v1" } ]
2022-04-18
[ [ "Hiratani", "Naoki", "" ], [ "Sompolinsky", "Haim", "" ] ]
The binding operation is fundamental to many cognitive processes, such as cognitive map formation, relational reasoning, and language comprehension. In these processes, two different modalities, such as location and objects, events and their contextual cues, and words and their roles, need to be bound together, but little is known about the underlying neural mechanisms. Previous works introduced a binding model based on quadratic functions of bound pairs, followed by vector summation of multiple pairs. Based on this framework, we address the following questions: Which classes of quadratic matrices are optimal for decoding relational structures? And what is the resultant accuracy? We introduce a new class of binding matrices based on a matrix representation of octonion algebra, an eight-dimensional extension of complex numbers. We show that these matrices enable a more accurate unbinding than previously known methods when a small number of pairs are present. Moreover, numerical optimization of a binding operator converges to this octonion binding. We also show that when there are a large number of bound pairs, however, a random quadratic binding performs as well as the octonion and previously-proposed binding methods. This study thus provides new insight into potential neural mechanisms of binding operations in the brain.
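A minimal sketch of the random quadratic binding baseline mentioned above: bind x and y through a random bilinear map and unbind y by least squares once x is known. Dimensions, the Gaussian operator, and the recovery rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 32
M = rng.normal(0, 1/np.sqrt(d*d), (d, d*d))   # random quadratic binding operator
x, y = rng.normal(0, 1, d), rng.normal(0, 1, d)

z = M @ np.kron(x, y)                          # bound representation of the pair

# Unbinding: z = M (x kron I) y is linear in y once x is known, so solve for y.
B = M @ np.kron(x.reshape(-1, 1), np.eye(d))   # (d, d) effective matrix
y_hat, *_ = np.linalg.lstsq(B, z, rcond=None)
print("recovery error:", np.linalg.norm(y - y_hat) / np.linalg.norm(y))
# Superposing several bound pairs into one z adds crosstalk, which is where the
# different binding matrices compared in the paper start to differ.
```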
1409.1933
Alessandro Torcini Dr
Stefano Luccioli, Eshel Ben-Jacob, Ari Barzilai, Paolo Bonifazi, Alessandro Torcini
Clique of functional hubs orchestrates population bursts in developmentally regulated neural networks
39 pages, 15 figures, to appear in PLOS Computational Biology
PLoS Comput Biol 10(9) (2014) e1003823
10.1371/journal.pcbi.1003823
null
q-bio.NC cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has recently been discovered that single neuron stimulation can impact network dynamics in immature and adult neuronal circuits. Here we report a novel mechanism which can explain, in neuronal circuits at an early stage of development, the peculiar role played by a few specific neurons in promoting/arresting the population activity. For this purpose, we consider a standard neuronal network model, with short-term synaptic plasticity, whose population activity is characterized by bursting behavior. The addition of developmentally inspired constraints and correlations in the distribution of the neuronal connectivities and excitabilities leads to the emergence of functional hub neurons, whose stimulation/deletion is critical for the network activity. Functional hubs form a clique, where a precise sequential activation of the neurons is essential to ignite collective events without any need for a specific topological architecture. Unsupervised time-lagged firings of supra-threshold cells, in connection with coordinated entrainments of near-threshold neurons, are the key ingredients to orchestrate the population activity.
[ { "created": "Fri, 5 Sep 2014 20:11:32 GMT", "version": "v1" } ]
2015-04-14
[ [ "Luccioli", "Stefano", "" ], [ "Ben-Jacob", "Eshel", "" ], [ "Barzilai", "Ari", "" ], [ "Bonifazi", "Paolo", "" ], [ "Torcini", "Alessandro", "" ] ]
It has recently been discovered that single neuron stimulation can impact network dynamics in immature and adult neuronal circuits. Here we report a novel mechanism which can explain, in neuronal circuits at an early stage of development, the peculiar role played by a few specific neurons in promoting/arresting the population activity. For this purpose, we consider a standard neuronal network model, with short-term synaptic plasticity, whose population activity is characterized by bursting behavior. The addition of developmentally inspired constraints and correlations in the distribution of the neuronal connectivities and excitabilities leads to the emergence of functional hub neurons, whose stimulation/deletion is critical for the network activity. Functional hubs form a clique, where a precise sequential activation of the neurons is essential to ignite collective events without any need for a specific topological architecture. Unsupervised time-lagged firings of supra-threshold cells, in connection with coordinated entrainments of near-threshold neurons, are the key ingredients to orchestrate the population activity.
2003.14284
Alfonso M. Ganan-Calvo
Alfonso M. Ganan-Calvo and Juan A. Hernandez Ramos
The fractal time growth of COVID-19 pandemic: an accurate self-similar model, and urgent conclusions
null
null
null
null
q-bio.PE nlin.AO physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Currently available data on the worldwide impact of the COVID-19 pandemic have been analyzed using dimensional analysis and self-similarity hypotheses. We show that the time series of infected population and deaths of the most impacted and unprepared countries exhibit an asymptotic power law behavior, compatible with the propagation of a signal in a fractal network. We propose a model which predicts an asymptotically self-similar expansion of deaths in time before containment, and the final death toll under total containment measures, as a function of the delay in taking those measures after the expansion is observed. The physics of the model resembles the expansion of a flame in a homogeneous domain with a fractal dimension 3.75. After containment measures are taken, the natural fractal structure of the network is drastically altered and a secondary evolution is observed. This evolution, akin to the homogeneous combustion in a static isolated enclosure with a final quenching, has a characteristic time of 20.1 days, according to available data of the pandemic behavior in China. The proposed model is remarkably consistent with available data, which supports the simplifying hypotheses made in the model. A universal formulation for a quarantine as a function of that delay is also proposed.
[ { "created": "Tue, 31 Mar 2020 15:17:54 GMT", "version": "v1" } ]
2020-04-01
[ [ "Ganan-Calvo", "Alfonso M.", "" ], [ "Ramos", "Juan A. Hernandez", "" ] ]
Currently available data on the worldwide impact of the COVID-19 pandemic have been analyzed using dimensional analysis and self-similarity hypotheses. We show that the time series of infected population and deaths of the most impacted and unprepared countries exhibit an asymptotic power law behavior, compatible with the propagation of a signal in a fractal network. We propose a model which predicts an asymptotically self-similar expansion of deaths in time before containment, and the final death toll under total containment measures, as a function of the delay in taking those measures after the expansion is observed. The physics of the model resembles the expansion of a flame in a homogeneous domain with a fractal dimension 3.75. After containment measures are taken, the natural fractal structure of the network is drastically altered and a secondary evolution is observed. This evolution, akin to the homogeneous combustion in a static isolated enclosure with a final quenching, has a characteristic time of 20.1 days, according to available data of the pandemic behavior in China. The proposed model is remarkably consistent with available data, which supports the simplifying hypotheses made in the model. A universal formulation for a quarantine as a function of that delay is also proposed.
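A small sketch of the paper's starting observation, estimating a power-law exponent from early cumulative-death counts by a log-log linear fit. The synthetic series (built with exponent 3.75, the fractal dimension quoted above) is illustrative only.

```python
import numpy as np

t = np.arange(1, 31)   # days since a fixed reference count
deaths = 5.0 * t**3.75 * np.exp(np.random.default_rng(6).normal(0, 0.05, t.size))
slope, intercept = np.polyfit(np.log(t), np.log(deaths), 1)
print("fitted power-law exponent:", round(slope, 2))   # ~3.75 by construction
```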
2407.03441
Simon Coetzee
Simon G. Coetzee and Dennis J. Hazelett
MotifbreakR v2: extended capability and database integration
4 pages of text, 1 figure with 2 panels, 12 total pages. Source code, documentation, and tutorials are available on Bioconductor at https://bioconductor.org/packages/release/bioc/html/motifbreakR.html and GitHub at https://github.com/Simon-Coetzee/motifBreakR
null
null
null
q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by-sa/4.0/
MotifbreakR is a software tool that scans genetic variants against position weight matrices of transcription factors (TF) to determine the potential for the disruption of TF binding at the site of the variant. It leverages the Bioconductor suite of software packages and annotations to operate across a diverse array of genomes and motif databases. MotifbreakR was initially developed to interrogate the effect of single nucleotide variants (common and rare SNVs) on potential TF binding sites; in motifbreakR v2, we have updated the functionality. New features include the ability to query other types of more complex genetic variants, such as short insertions and deletions (indels). This function allows modeling a more extensive array of variants that may have more significant effects on TF binding. Additionally, while TF binding is based partly on sequence preference, predictions of TF binding based on sequence preference alone can indicate many more potential binding events than observed. Adding information from DNA-binding sequencing datasets lends confidence to motif disruption prediction by demonstrating TF binding in cell lines and tissue types. Therefore, motifbreakR implements querying the ReMap2022 database for evidence that a TF matching the disrupted motif binds over the disrupting variant. Finally, in addition to the existing interface, we have implemented an R/Shiny graphical user interface in motifbreakR to simplify and enhance access for researchers with different skill sets.
[ { "created": "Wed, 3 Jul 2024 18:34:03 GMT", "version": "v1" } ]
2024-07-08
[ [ "Coetzee", "Simon G.", "" ], [ "Hazelett", "Dennis J.", "" ] ]
MotifbreakR is a software tool that scans genetic variants against position weight matrices of transcription factors (TF) to determine the potential for the disruption of TF binding at the site of the variant. It leverages the Bioconductor suite of software packages and annotations to operate across a diverse array of genomes and motif databases. MotifbreakR was initially developed to interrogate the effect of single nucleotide variants (common and rare SNVs) on potential TF binding sites; in motifbreakR v2, we have updated the functionality. New features include the ability to query other types of more complex genetic variants, such as short insertions and deletions (indels). This function allows modeling a more extensive array of variants that may have more significant effects on TF binding. Additionally, while TF binding is based partly on sequence preference, predictions of TF binding based on sequence preference alone can indicate many more potential binding events than observed. Adding information from DNA-binding sequencing datasets lends confidence to motif disruption prediction by demonstrating TF binding in cell lines and tissue types. Therefore, motifbreakR implements querying the ReMap2022 database for evidence that a TF matching the disrupted motif binds over the disrupting variant. Finally, in addition to the existing interface, we have implemented an R/Shiny graphical user interface in motifbreakR to simplify and enhance access for researchers with different skill sets.
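A generic sketch of the scoring idea that motif-disruption tools of this kind implement (shown here in Python, not motifbreakR's actual R API): score the reference and alternate alleles against a position weight matrix and report the log-odds difference. The toy PWM is an assumption.

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}
pwm = np.array([            # rows = A/C/G/T, columns = motif positions
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.1, 0.1, 0.7],
]).T                        # transpose to (position, base); consensus is ACGT

def score(seq):
    # Log-odds against a uniform background of 0.25 per base.
    return sum(np.log2(pwm[i, BASES[b]] / 0.25) for i, b in enumerate(seq))

ref, alt = "ACGT", "ACTT"   # SNV at the third base (G -> T)
print("delta log-odds:", score(alt) - score(ref))  # negative = predicted disruption
```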
2008.11491
Sam Blakeman
Sam Blakeman, Denis Mareschal
Selective Particle Attention: Visual Feature-Based Attention in Deep Reinforcement Learning
null
null
null
null
q-bio.NC cs.CV cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human brain uses selective attention to filter perceptual input so that only the components that are useful for behaviour are processed using its limited computational resources. We focus on one particular form of visual attention known as feature-based attention, which is concerned with identifying features of the visual input that are important for the current task regardless of their spatial location. Visual feature-based attention has been proposed to improve the efficiency of Reinforcement Learning (RL) by reducing the dimensionality of state representations and guiding learning towards relevant features. Despite achieving human level performance in complex perceptual-motor tasks, Deep RL algorithms have been consistently criticised for their poor efficiency and lack of flexibility. Visual feature-based attention therefore represents one option for addressing these criticisms. Nevertheless, it is still an open question how the brain is able to learn which features to attend to during RL. To help answer this question we propose a novel algorithm, termed Selective Particle Attention (SPA), which imbues a Deep RL agent with the ability to perform selective feature-based attention. SPA learns which combinations of features to attend to based on their bottom-up saliency and how accurately they predict future reward. We evaluate SPA on a multiple choice task and a 2D video game that both involve raw pixel input and dynamic changes to the task structure. We show various benefits of SPA over approaches that naively attend to either all or random subsets of features. Our results demonstrate (1) how visual feature-based attention in Deep RL models can improve their learning efficiency and ability to deal with sudden changes in task structure and (2) that particle filters may represent a viable computational account of how visual feature-based attention occurs in the brain.
[ { "created": "Wed, 26 Aug 2020 11:07:50 GMT", "version": "v1" } ]
2020-08-31
[ [ "Blakeman", "Sam", "" ], [ "Mareschal", "Denis", "" ] ]
The human brain uses selective attention to filter perceptual input so that only the components that are useful for behaviour are processed using its limited computational resources. We focus on one particular form of visual attention known as feature-based attention, which is concerned with identifying features of the visual input that are important for the current task regardless of their spatial location. Visual feature-based attention has been proposed to improve the efficiency of Reinforcement Learning (RL) by reducing the dimensionality of state representations and guiding learning towards relevant features. Despite achieving human level performance in complex perceptual-motor tasks, Deep RL algorithms have been consistently criticised for their poor efficiency and lack of flexibility. Visual feature-based attention therefore represents one option for addressing these criticisms. Nevertheless, it is still an open question how the brain is able to learn which features to attend to during RL. To help answer this question we propose a novel algorithm, termed Selective Particle Attention (SPA), which imbues a Deep RL agent with the ability to perform selective feature-based attention. SPA learns which combinations of features to attend to based on their bottom-up saliency and how accurately they predict future reward. We evaluate SPA on a multiple choice task and a 2D video game that both involve raw pixel input and dynamic changes to the task structure. We show various benefits of SPA over approaches that naively attend to either all or random subsets of features. Our results demonstrate (1) how visual feature-based attention in Deep RL models can improve their learning efficiency and ability to deal with sudden changes in task structure and (2) that particle filters may represent a viable computational account of how visual feature-based attention occurs in the brain.
1902.08511
David Papo
David Papo
Gauging functional brain activity: from distinguishability to accessibility
7 pages, 0 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Standard neuroimaging techniques provide non-invasive access not only to human brain anatomy but also to its physiology. The activity recorded with these techniques is generally called functional imaging, but what is observed per se is an instance of dynamics, from which functional brain activity should be extracted. Distinguishing between bare dynamics and genuine function is a highly non-trivial task, but a crucially important one when comparing experimental observations and interpreting their significance. Here we illustrate how the ability of neuroimaging to extract genuine functional brain activity is bounded by the structure of functional representations. To do so, we first provide a simple definition of functional brain activity from a system-level brain imaging perspective. We then review how the properties of the space on which brain activity is represented allow defining relations ranging from distinguishability to accessibility of observed imaging data. We show how these properties result from the structure defined on dynamical data and dynamics-to-function projections, and consider some implications that the way and extent to which these are defined have for the interpretation of experimental data from standard system-level brain recording techniques.
[ { "created": "Fri, 22 Feb 2019 14:39:40 GMT", "version": "v1" } ]
2019-02-25
[ [ "Papo", "David", "" ] ]
Standard neuroimaging techniques provide non-invasive access not only to human brain anatomy but also to its physiology. The activity recorded with these techniques is generally called functional imaging, but what is observed per se is an instance of dynamics, from which functional brain activity should be extracted. Distinguishing between bare dynamics and genuine function is a highly non-trivial task, but a crucially important one when comparing experimental observations and interpreting their significance. Here we illustrate how the ability of neuroimaging to extract genuine functional brain activity is bounded by the structure of functional representations. To do so, we first provide a simple definition of functional brain activity from a system-level brain imaging perspective. We then review how the properties of the space on which brain activity is represented allow defining relations ranging from distinguishability to accessibility of observed imaging data. We show how these properties result from the structure defined on dynamical data and dynamics-to-function projections, and consider some implications that the way and extent to which these are defined have for the interpretation of experimental data from standard system-level brain recording techniques.
2004.11841
Felix Sattler
Felix Sattler, Jackie Ma, Patrick Wagner, David Neumann, Markus Wenzel, Ralf Sch\"afer, Wojciech Samek, Klaus-Robert M\"uller, Thomas Wiegand
Risk Estimation of SARS-CoV-2 Transmission from Bluetooth Low Energy Measurements
null
null
null
null
q-bio.QM cs.LG q-bio.PE stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Digital contact tracing approaches based on Bluetooth low energy (BLE) have the potential to efficiently contain and delay outbreaks of infectious diseases such as the ongoing SARS-CoV-2 pandemic. In this work we propose a novel machine learning based approach to reliably detect subjects that have spent enough time in close proximity to be at risk of being infected. Our study is an important proof of concept that will aid the battery of epidemiological policies aiming to slow down the rapid spread of COVID-19.
[ { "created": "Wed, 22 Apr 2020 20:10:35 GMT", "version": "v1" } ]
2020-04-27
[ [ "Sattler", "Felix", "" ], [ "Ma", "Jackie", "" ], [ "Wagner", "Patrick", "" ], [ "Neumann", "David", "" ], [ "Wenzel", "Markus", "" ], [ "Schäfer", "Ralf", "" ], [ "Samek", "Wojciech", "" ], [ "Müller", "Klaus-Robert", "" ], [ "Wiegand", "Thomas", "" ] ]
Digital contact tracing approaches based on Bluetooth low energy (BLE) have the potential to efficiently contain and delay outbreaks of infectious diseases such as the ongoing SARS-CoV-2 pandemic. In this work we propose a novel machine learning based approach to reliably detect subjects that have spent enough time in close proximity to be at risk of being infected. Our study is an important proof of concept that will aid the battery of epidemiological policies aiming to slow down the rapid spread of COVID-19.
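For context, a hedged sketch of the measurement model underlying BLE proximity tracing: convert received signal strength (RSSI) to a rough distance with a log-distance path-loss model and flag sustained close contact. The constants and thresholds are common textbook assumptions; the paper itself replaces such fixed rules with a learned classifier.

```python
import numpy as np

P0, n_exp = -60.0, 2.0      # assumed RSSI at 1 m and path-loss exponent

def rssi_to_distance(rssi):
    # Log-distance path-loss model: rssi = P0 - 10 * n_exp * log10(d).
    return 10 ** ((P0 - rssi) / (10 * n_exp))

rssi_trace = np.array([-58, -62, -65, -63, -61, -70, -64])  # one sample/minute
close = rssi_to_distance(rssi_trace) < 2.0
print("minutes within ~2 m:", int(close.sum()), "-> at risk:", close.sum() >= 5)
```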
1406.6424
Corey Bradshaw
Richard Frankham, Corey J. A. Bradshaw, Barry W. Brook
50/500 or 100/1000 debate is not about the time frame - Reply to Rosenfeld
5 pages, 0 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Letter from Rosenfeld (2014, Biological Conservation) in response to Jamieson and Allendorf (2012, Trends in Ecology and Evolution) and Frankham et al. (2014, Biological Conservation) and related papers is misleading in places and requires clarification and correction. We provide those here.
[ { "created": "Wed, 25 Jun 2014 00:41:47 GMT", "version": "v1" }, { "created": "Tue, 1 Jul 2014 01:10:01 GMT", "version": "v2" } ]
2014-07-02
[ [ "Frankham", "Richard", "" ], [ "Bradshaw", "Corey J. A.", "" ], [ "Brook", "Barry W.", "" ] ]
The Letter from Rosenfeld (2014, Biological Conservation) in response to Jamieson and Allendorf (2012, Trends in Ecology and Evolution) and Frankham et al. (2014, Biological Conservation) and related papers is misleading in places and requires clarification and correction. We provide those here.
1511.00255
Carina Curto
Carina Curto and Nora Youngs
Neural ring homomorphisms and maps between neural codes
15 pages, 2 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural codes are binary codes that are used for information processing and representation in the brain. In previous work, we have shown how an algebraic structure, called the {\it neural ring}, can be used to efficiently encode geometric and combinatorial properties of a neural code [1]. In this work, we consider maps between neural codes and the associated homomorphisms of their neural rings. In order to ensure that these maps are meaningful and preserve relevant structure, we find that we need additional constraints on the ring homomorphisms. This motivates us to define {\it neural ring homomorphisms}. Our main results characterize all code maps corresponding to neural ring homomorphisms as compositions of 5 elementary code maps. As an application, we find that neural ring homomorphisms behave nicely with respect to convexity. In particular, if $\mathcal{C}$ and $\mathcal{D}$ are convex codes, the existence of a surjective code map $\mathcal{C}\rightarrow \mathcal{D}$ with a corresponding neural ring homomorphism implies that the minimal embedding dimensions satisfy $d(\mathcal{D}) \leq d(\mathcal{C})$.
[ { "created": "Sun, 1 Nov 2015 14:29:03 GMT", "version": "v1" }, { "created": "Thu, 1 Nov 2018 16:40:08 GMT", "version": "v2" }, { "created": "Wed, 13 Feb 2019 18:33:31 GMT", "version": "v3" } ]
2019-02-14
[ [ "Curto", "Carina", "" ], [ "Youngs", "Nora", "" ] ]
Neural codes are binary codes that are used for information processing and representation in the brain. In previous work, we have shown how an algebraic structure, called the {\it neural ring}, can be used to efficiently encode geometric and combinatorial properties of a neural code [1]. In this work, we consider maps between neural codes and the associated homomorphisms of their neural rings. In order to ensure that these maps are meaningful and preserve relevant structure, we find that we need additional constraints on the ring homomorphisms. This motivates us to define {\it neural ring homomorphisms}. Our main results characterize all code maps corresponding to neural ring homomorphisms as compositions of 5 elementary code maps. As an application, we find that neural ring homomorphisms behave nicely with respect to convexity. In particular, if $\mathcal{C}$ and $\mathcal{D}$ are convex codes, the existence of a surjective code map $\mathcal{C}\rightarrow \mathcal{D}$ with a corresponding neural ring homomorphism implies that the minimal embedding dimensions satisfy $d(\mathcal{D}) \leq d(\mathcal{C})$.
1412.3893
Ian Ochs
Ian E. Ochs and Michael M. Desai
The competition between simple and complex evolutionary trajectories in asexual populations
8 pages, 3 figures
BMC Evolutionary Biology 2015, 15:55
10.1186/s12862-015-0334-0
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
On rugged fitness landscapes where sign epistasis is common, adaptation can often involve either individually beneficial "uphill" mutations or more complex mutational trajectories involving fitness valleys or plateaus. The dynamics of the evolutionary process determine the probability that evolution will take any specific path among a variety of competing possible trajectories. Understanding this evolutionary choice is essential if we are to understand the outcomes and predictability of adaptation on rugged landscapes. We present a simple model to analyze the probability that evolution will eschew immediately uphill paths in favor of crossing fitness valleys or plateaus that lead to higher fitness but less accessible genotypes. We calculate how this probability depends on the population size, mutation rates, and relevant selection pressures, and compare our analytical results to Wright-Fisher simulations. We find that the probability of valley crossing depends nonmonotonically on population size: intermediate size populations are most likely to follow a "greedy" strategy of acquiring immediately beneficial mutations even if they lead to evolutionary dead ends, while larger and smaller populations are more likely to cross fitness valleys to reach distant advantageous genotypes. We explicitly identify the boundaries between these different regimes in terms of the relevant evolutionary parameters. Above a certain threshold population size, we show that the degree of evolutionary "foresight" depends only on a single simple combination of the relevant parameters.
[ { "created": "Fri, 12 Dec 2014 05:35:40 GMT", "version": "v1" } ]
2015-10-06
[ [ "Ochs", "Ian E.", "" ], [ "Desai", "Michael M.", "" ] ]
On rugged fitness landscapes where sign epistasis is common, adaptation can often involve either individually beneficial "uphill" mutations or more complex mutational trajectories involving fitness valleys or plateaus. The dynamics of the evolutionary process determine the probability that evolution will take any specific path among a variety of competing possible trajectories. Understanding this evolutionary choice is essential if we are to understand the outcomes and predictability of adaptation on rugged landscapes. We present a simple model to analyze the probability that evolution will eschew immediately uphill paths in favor of crossing fitness valleys or plateaus that lead to higher fitness but less accessible genotypes. We calculate how this probability depends on the population size, mutation rates, and relevant selection pressures, and compare our analytical results to Wright-Fisher simulations. We find that the probability of valley crossing depends nonmonotonically on population size: intermediate size populations are most likely to follow a "greedy" strategy of acquiring immediately beneficial mutations even if they lead to evolutionary dead ends, while larger and smaller populations are more likely to cross fitness valleys to reach distant advantageous genotypes. We explicitly identify the boundaries between these different regimes in terms of the relevant evolutionary parameters. Above a certain threshold population size, we show that the degree of evolutionary "foresight" depends only on a single simple combination of the relevant parameters.
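A compact Wright-Fisher sketch of the competition described above: a population can either fix an immediately beneficial uphill mutant or tunnel through a deleterious intermediate to a fitter double mutant. All fitnesses and mutation rates are illustrative assumptions, not the paper's parameter choices.

```python
import numpy as np

rng = np.random.default_rng(7)
N, mu = 1000, 1e-4
fit = np.array([1.0, 1.02, 0.99, 1.10])   # wt, uphill, valley intermediate, double mutant

def run():
    counts = np.array([N, 0, 0, 0])
    while counts[1] < N and counts[3] < N:
        w = counts * fit
        counts = rng.multinomial(N, w / w.sum())       # selection + drift
        # Mutation: wt -> uphill, wt -> intermediate, intermediate -> double.
        m1 = rng.binomial(counts[0], mu); counts[0] -= m1; counts[1] += m1
        m2 = rng.binomial(counts[0], mu); counts[0] -= m2; counts[2] += m2
        m3 = rng.binomial(counts[2], mu); counts[2] -= m3; counts[3] += m3
    return 3 if counts[3] == N else 1

wins = [run() for _ in range(20)]
print("double-mutant wins:", wins.count(3), "/ 20")
```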
1404.5252
Wolfram Liebermeister
Wolfram Liebermeister
Enzyme economy and metabolic control
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The metabolic state of a cell, comprising fluxes, metabolite concentrations and enzyme levels, is shaped by a compromise between metabolic benefit and enzyme cost. This hypothesis and its consequences can be studied by computational models and using a theory of metabolic value. In optimal metabolic states, any increase of an enzyme level must improve the metabolic performance to justify its own cost, so each active enzyme must contribute to the cell's benefit by producing valuable products. This principle of value production leads to variation rules that relate metabolic fluxes and reaction elasticities to enzyme costs. Metabolic value theory provides a language to describe this. It postulates a balance of local values, which I derive here from concepts of metabolic control theory. Economic state variables, called economic potentials and loads, describe how metabolites, reactions, and enzymes contribute to metabolic performance. Economic potentials describe the indirect value of metabolite production, while economic loads describe the indirect value of metabolite concentrations. These economic variables, and others, are linked by local balance equations. These laws for optimal metabolic states define conditions for metabolic fluxes that hold for a wide range of rate laws. To produce metabolic value, fluxes run from lower to higher economic potentials, must be free of futile cycles, and satisfy a principle of minimal weighted fluxes. Given an economical flux mode, one can systematically construct kinetic models in which all enzymes have positive effects on metabolic performance.
[ { "created": "Mon, 21 Apr 2014 17:45:13 GMT", "version": "v1" }, { "created": "Tue, 4 Oct 2022 08:00:18 GMT", "version": "v2" } ]
2022-10-05
[ [ "Liebermeister", "Wolfram", "" ] ]
The metabolic state of a cell, comprising fluxes, metabolite concentrations and enzyme levels, is shaped by a compromise between metabolic benefit and enzyme cost. This hypothesis and its consequences can be studied by computational models and using a theory of metabolic value. In optimal metabolic states, any increase of an enzyme level must improve the metabolic performance to justify its own cost, so each active enzyme must contribute to the cell's benefit by producing valuable products. This principle of value production leads to variation rules that relate metabolic fluxes and reaction elasticities to enzyme costs. Metabolic value theory provides a language to describe this. It postulates a balance of local values, which I derive here from concepts of metabolic control theory. Economic state variables, called economic potentials and loads, describe how metabolites, reactions, and enzymes contribute to metabolic performance. Economic potentials describe the indirect value of metabolite production, while economic loads describe the indirect value of metabolite concentrations. These economic variables, and others, are linked by local balance equations. These laws for optimal metabolic states define conditions for metabolic fluxes that hold for a wide range of rate laws. To produce metabolic value, fluxes run from lower to higher economic potentials, must be free of futile cycles, and satisfy a principle of minimal weighted fluxes. Given an economical flux mode, one can systematically construct kinetic models in which all enzymes have positive effects on metabolic performance.
2211.09705
Divyanshu Aggarwal
Divyanshu Aggarwal and Yasha Hasija
A Review of Deep Learning Techniques for Protein Function Prediction
null
2021 2nd International Conference for Emerging Technology (INCET) Belgaum, India. May 21-23, 2021
null
null
q-bio.BM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Deep Learning and big data have shown tremendous success in bioinformatics and computational biology in recent years; artificial intelligence methods have also significantly contributed to the task of protein function classification. This review paper analyzes the recent developments in approaches for the task of predicting protein function using deep learning. We explain the importance of determining protein function and why automating this task is crucial. Then, after reviewing the widely used deep learning techniques for this task, we continue our review and highlight the emergence of the modern state-of-the-art (SOTA) deep learning models which have achieved groundbreaking results in the fields of computer vision, natural language processing and multi-modal learning in the last few years. We hope that this review will provide a broad view of the current role and advances of deep learning in biological sciences, especially in protein function prediction tasks, and encourage new researchers to contribute to this area.
[ { "created": "Thu, 27 Oct 2022 20:30:25 GMT", "version": "v1" } ]
2022-11-18
[ [ "Aggarwal", "Divyanshu", "" ], [ "Hasija", "Yasha", "" ] ]
Deep Learning and big data have shown tremendous success in bioinformatics and computational biology in recent years; artificial intelligence methods have also significantly contributed to the task of protein function classification. This review paper analyzes the recent developments in approaches for the task of predicting protein function using deep learning. We explain the importance of determining protein function and why automating this task is crucial. Then, after reviewing the widely used deep learning techniques for this task, we continue our review and highlight the emergence of the modern state-of-the-art (SOTA) deep learning models which have achieved groundbreaking results in the fields of computer vision, natural language processing and multi-modal learning in the last few years. We hope that this review will provide a broad view of the current role and advances of deep learning in biological sciences, especially in protein function prediction tasks, and encourage new researchers to contribute to this area.
0907.4386
Leonard M. Sander
David A. Kessler and Leonard M. Sander
Fluctuations and Dispersal Rates in Population Dynamics
4 pages, 3 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dispersal of species to find a more favorable habitat is important in population dynamics. Dispersal rates evolve in response to the relative success of different dispersal strategies. In a simplified deterministic treatment (J. Dockery, V. Hutson, K. Mischaikow, et al., J. Math. Bio. 37, 61 (1998)) of two species which differ only in their dispersal rates, the slow species always dominates. We demonstrate that fluctuations can change this conclusion and can lead to dominance by the fast species or to coexistence, depending on parameters. We discuss two different effects of fluctuations, and show that our results are consistent with more complex treatments that find that selected dispersal rates are not monotonic with the cost of migration.
[ { "created": "Fri, 24 Jul 2009 21:50:33 GMT", "version": "v1" } ]
2009-07-28
[ [ "Kessler", "David A.", "" ], [ "Sander", "Leonard M.", "" ] ]
Dispersal of species to find a more favorable habitat is important in population dynamics. Dispersal rates evolve in response to the relative success of different dispersal strategies. In a simplified deterministic treatment (J. Dockery, V. Hutson, K. Mischaikow, et al., J. Math. Bio. 37, 61 (1998)) of two species which differ only in their dispersal rates, the slow species always dominates. We demonstrate that fluctuations can change this conclusion and can lead to dominance by the fast species or to coexistence, depending on parameters. We discuss two different effects of fluctuations, and show that our results are consistent with more complex treatments that find that selected dispersal rates are not monotonic with the cost of migration.
1910.08784
James McIntosh
J. R. McIntosh, P. Sajda
Estimation of phase in EEG rhythms for real-time applications
null
null
10.1088/1741-2552/ab8683
null
q-bio.QM eess.SP q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objective. We identify two linked problems related to estimating the phase of the alpha rhythm when the signal after a specific event is unknown (real-time case), or corrupted (offline analysis). We propose methods to estimate the phase prior to such events. Approach. Machine learning is used to mimic a non-causal signal-processing chain with a purely causal one. Main results. We demonstrate the ability of these methods to estimate instantaneous phase from an electroencephalography signal subjected to very minor pre-processing with higher accuracy than more standard signal-processing methods. Significance. Phase estimation of EEG rhythms is a challenge due to non-stationarity and low signal-to-noise ratio. The methods presented enable scientists and engineers to achieve relatively low error by optimizing causal phase estimation on a non-causally processed signal for real-time experiments and offline analysis.
[ { "created": "Sat, 19 Oct 2019 14:56:26 GMT", "version": "v1" } ]
2020-04-07
[ [ "McIntosh", "J. R.", "" ], [ "Sajda", "P.", "" ] ]
Objective. We identify two linked problems related to estimating the phase of the alpha rhythm when the signal after a specific event is unknown (real-time case), or corrupted (offline analysis). We propose methods to estimate the phase prior to such events. Approach. Machine learning is used to mimic a non-causal signal-processing chain with a purely causal one. Main results. We demonstrate the ability of these methods to estimate instantaneous phase from an electroencephalography signal subjected to very minor pre-processing with higher accuracy than more standard signal-processing methods. Significance. Phase estimation of EEG rhythms is a challenge due to non-stationarity and low signal-to-noise ratio. The methods presented enable scientists and engineers to achieve relatively low error by optimizing causal phase estimation on a non-causally processed signal for real-time experiments and offline analysis.
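A minimal sketch of the causal/non-causal gap the abstract describes: the offline reference phase uses zero-phase filtering plus the Hilbert transform over the whole record, while a causal filter introduces lag. Filter settings are illustrative; note the Hilbert step is itself non-causal, which is precisely why the paper learns a causal estimator instead.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, lfilter

fs, T = 250.0, 10.0
t = np.arange(0, T, 1/fs)
rng = np.random.default_rng(8)
eeg = np.sin(2*np.pi*10*t) + 0.8*rng.normal(size=t.size)   # 10 Hz "alpha" + noise

b, a = butter(4, [8/(fs/2), 12/(fs/2)], btype="band")      # alpha-band filter
phase_noncausal = np.angle(hilbert(filtfilt(b, a, eeg)))   # zero-phase offline reference
phase_causal = np.angle(hilbert(lfilter(b, a, eeg)))       # causal filter, lagged

err = np.angle(np.exp(1j*(phase_causal - phase_noncausal)))  # wrapped phase error
print("circular RMS phase error (rad):", np.sqrt(np.mean(err**2)).round(2))
```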
1903.05590
Jennifer Creaser
Jennifer Creaser, Peter Ashwin, Claire Postlethwaite, and Juliane Britz
Noisy network attractor models for transitions between EEG microstates
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The brain is intrinsically organized into large-scale networks that constantly re-organize on multiple timescales, even when the brain is at rest. The timing of these dynamics is crucial for sensation, perception, cognition and ultimately consciousness, but the underlying dynamics governing the constant reorganization and switching between networks are not yet well understood. Functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) provide anatomical and temporal information about the resting-state networks (RSNs), respectively. EEG microstates are brief periods of stable scalp topography, and four distinct configurations with characteristic switching patterns between them are reliably identified at rest. Microstates have been identified as the electrophysiological correlate of fMRI-defined RSNs; this link could be established because EEG microstate sequences are scale-free and have long-range temporal correlations. This property is crucial for any approach to modeling EEG microstates. This paper proposes a novel modeling approach for microstates: we consider nonlinear stochastic differential equations (SDEs) that exhibit a noisy network attractor between nodes that represent the microstates. Using a single-layer network between four states, we can reproduce the transition probabilities between microstates but not the heavy-tailed residence time distributions. Introducing a two-layer network with a hidden layer gives the flexibility to capture these heavy tails and their long-range temporal correlations. We fit these models to capture the statistical properties of microstate sequences from EEG data recorded inside and outside the MRI scanner and show that the processing required to separate the EEG signal from the fMRI machine noise results in a loss of information, which is reflected in differences in the long tail of the dwell-time distributions.
[ { "created": "Wed, 13 Mar 2019 16:40:59 GMT", "version": "v1" } ]
2019-03-14
[ [ "Creaser", "Jennifer", "" ], [ "Ashwin", "Peter", "" ], [ "Postlethwaite", "Claire", "" ], [ "Britz", "Juliane", "" ] ]
The brain is intrinsically organized into large-scale networks that constantly re-organize on multiple timescales, even when the brain is at rest. The timing of these dynamics is crucial for sensation, perception, cognition and ultimately consciousness, but the underlying dynamics governing the constant reorganization and switching between networks are not yet well understood. Functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) provide anatomical and temporal information about the resting-state networks (RSNs), respectively. EEG microstates are brief periods of stable scalp topography, and four distinct configurations with characteristic switching patterns between them are reliably identified at rest. Microstates have been identified as the electrophysiological correlate of fMRI-defined RSNs; this link could be established because EEG microstate sequences are scale-free and have long-range temporal correlations. This property is crucial for any approach to modeling EEG microstates. This paper proposes a novel modeling approach for microstates: we consider nonlinear stochastic differential equations (SDEs) that exhibit a noisy network attractor between nodes that represent the microstates. Using a single-layer network between four states, we can reproduce the transition probabilities between microstates but not the heavy-tailed residence time distributions. Introducing a two-layer network with a hidden layer gives the flexibility to capture these heavy tails and their long-range temporal correlations. We fit these models to capture the statistical properties of microstate sequences from EEG data recorded inside and outside the MRI scanner and show that the processing required to separate the EEG signal from the fMRI machine noise results in a loss of information, which is reflected in differences in the long tail of the dwell-time distributions.
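For readers who want to experiment, here is a generic Euler-Maruyama integrator for an SDE of the kind used in noisy-attractor modeling; the toy double-well drift below stands in for, and is much simpler than, the authors' four-state network attractor.

```python
import numpy as np

def euler_maruyama(drift, x0, sigma, dt, n_steps, rng):
    """Integrate dx = drift(x) dt + sigma dW with the Euler-Maruyama scheme."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.standard_normal() * np.sqrt(dt)      # Wiener increment
        x[k + 1] = x[k] + drift(x[k]) * dt + sigma * dw
    return x

rng = np.random.default_rng(1)
path = euler_maruyama(lambda x: x - x**3, 0.1, 0.3, 1e-3, 50_000, rng)
# Dwell times near each well (x ~ +1, x ~ -1) can then be collected and
# compared against heavy-tailed residence-time distributions like those above.
```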
1404.6684
Domenico Gatti
Greg W. Clark, Sharon H. Ackerman, Elisabeth R. Tillier, Domenico L. Gatti
Multidimensional mutual information methods for the analysis of covariation in multiple sequence alignments
21 pages, 4 figures, 1 table, supporting information containing 2 additional figures is included at the end of the manuscript
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several methods are available for the detection of covarying positions from a multiple sequence alignment (MSA). If the MSA contains a large number of sequences, information about the proximities between residues derived from covariation maps can be sufficient to predict a protein fold. If the structure is already known, information on the covarying positions can be valuable to understand the protein mechanism. In this study we have sought to determine whether a multivariate extension of traditional mutual information (MI) can be an additional tool to study covariation. The performance of two multidimensional MI (mdMI) methods, designed to remove the effect of ternary/quaternary interdependencies, was tested with a set of 9 MSAs each containing <400 sequences, and was shown to be comparable to that of methods based on maximum entropy/pseudolikelihood statistical models of protein sequences. However, while all the methods tested detected a similar number of covarying pairs among the residues separated by < 8 {\AA} in the reference X-ray structures, there was on average less than 65% overlap between the top scoring pairs detected by methods that are based on different principles. We have also attempted to identify whether the difference in performance among methods is due to different efficiency in removing covariation originating from chains of structural contacts. We found that the reason why methods that derive partial correlation between the columns of a MSA provide a better recognition of close contacts is not because they remove chaining effects, but because they filter out the correlation between distant residues that originates from general fitness constraints. In contrast we found that true chaining effects are expressions of real physical perturbations that propagate inside proteins, and therefore are not removed by the derivation of partial correlation between variables.
[ { "created": "Sat, 26 Apr 2014 21:10:57 GMT", "version": "v1" } ]
2014-04-29
[ [ "Clark", "Greg W.", "" ], [ "Ackerman", "Sharon H.", "" ], [ "Tillier", "Elisabeth R.", "" ], [ "Gatti", "Domenico L.", "" ] ]
Several methods are available for the detection of covarying positions from a multiple sequence alignment (MSA). If the MSA contains a large number of sequences, information about the proximities between residues derived from covariation maps can be sufficient to predict a protein fold. If the structure is already known, information on the covarying positions can be valuable to understand the protein mechanism. In this study we have sought to determine whether a multivariate extension of traditional mutual information (MI) can be an additional tool to study covariation. The performance of two multidimensional MI (mdMI) methods, designed to remove the effect of ternary/quaternary interdependencies, was tested with a set of 9 MSAs each containing <400 sequences, and was shown to be comparable to that of methods based on maximum entropy/pseudolikelihood statistical models of protein sequences. However, while all the methods tested detected a similar number of covarying pairs among the residues separated by < 8 {\AA} in the reference X-ray structures, there was on average less than 65% overlap between the top scoring pairs detected by methods that are based on different principles. We have also attempted to identify whether the difference in performance among methods is due to different efficiency in removing covariation originating from chains of structural contacts. We found that the reason why methods that derive partial correlation between the columns of a MSA provide a better recognition of close contacts is not because they remove chaining effects, but because they filter out the correlation between distant residues that originates from general fitness constraints. In contrast we found that true chaining effects are expressions of real physical perturbations that propagate inside proteins, and therefore are not removed by the derivation of partial correlation between variables.
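A minimal sketch of the plain two-dimensional MI between two alignment columns, the quantity that the multidimensional (mdMI) corrections build on; the toy alignment is invented, and no finite-sample or chaining corrections are applied.

```python
import numpy as np
from collections import Counter

def column_mi(col_i, col_j):
    """Mutual information (bits) between two MSA columns."""
    n = len(col_i)
    pi, pj = Counter(col_i), Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    return sum((c / n) * np.log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

msa = ["ACDA", "ACDA", "GCEA", "GTEA"]        # toy alignment, rows = sequences
cols = list(zip(*msa))                         # columns of the alignment
print(column_mi(cols[0], cols[2]))             # perfectly covarying -> 1.0 bit
```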
1407.5328
Dean Wyatte
Dean Wyatte
What happens next and when "next" happens: Mechanisms of spatial and temporal prediction
Doctoral thesis (May 2014)
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The physics of the environment provide a rich spatiotemporal structure for our experience. Objects move in predictable ways and their features and identity remain stable across time and space. How does the brain leverage this structure to make predictions about and learn from the environment? This thesis describes research centered around a mechanistic description of sensory prediction called LeabraTI (TI: Temporal Integration) that explains precisely how predictive processing is accomplished in neocortical microcircuits. The fundamental prediction of LeabraTI is that predictions and sensations are interleaved across the same neural tissue at an overall rate of 10 Hz, corresponding to the widely studied alpha rhythm of posterior cortex. Experiments described herein tested this prediction by manipulating the spatiotemporal properties of three-dimensional object stimuli in a laboratory setting. EEG results indicated that predictions were subserved by ~10 Hz oscillations that reliably tracked the onset of stimuli and differentiated between spatially predictable and unpredictable object sequences. There was a behavioral advantage for combined spatial and temporal predictability for discrimination of unlearned objects, but prolonged study of objects under this combined predictability context impaired discriminability relative to other learning contexts. This counterintuitive pattern of results was accounted for by a neural network model that learned three-dimensional viewpoint invariance with LeabraTI's spatiotemporal prediction rule. Synaptic weight scaling from prolonged learning built viewpoint invariance, but led to confusion between ambiguous views of objects, producing slightly lower performance on average. Overall, this work advances a biological architecture for sensory prediction accompanied by empirical evidence that supports learning of realistic time- and space-varying inputs.
[ { "created": "Sun, 20 Jul 2014 19:11:51 GMT", "version": "v1" } ]
2014-07-22
[ [ "Wyatte", "Dean", "" ] ]
The physics of the environment provide a rich spatiotemporal structure for our experience. Objects move in predictable ways and their features and identity remain stable across time and space. How does the brain leverage this structure to make predictions about and learn from the environment? This thesis describes research centered around a mechanistic description of sensory prediction called LeabraTI (TI: Temporal Integration) that explains precisely how predictive processing is accomplished in neocortical microcircuits. The fundamental prediction of LeabraTI is that predictions and sensations are interleaved across the same neural tissue at an overall rate of 10 Hz, corresponding to the widely studied alpha rhythm of posterior cortex. Experiments described herein tested this prediction by manipulating the spatiotemporal properties of three-dimensional object stimuli in a laboratory setting. EEG results indicated that predictions were subserved by ~10 Hz oscillations that reliably tracked the onset of stimuli and differentiated between spatially predictable and unpredictable object sequences. There was a behavioral advantage for combined spatial and temporal predictability for discrimination of unlearned objects, but prolonged study of objects under this combined predictability context impaired discriminability relative to other learning contexts. This counterintuitive pattern of results was accounted for by a neural network model that learned three-dimensional viewpoint invariance with LeabraTI's spatiotemporal prediction rule. Synaptic weight scaling from prolonged learning built viewpoint invariance, but led to confusion between ambiguous views of objects, producing slightly lower performance on average. Overall, this work advances a biological architecture for sensory prediction accompanied by empirical evidence that supports learning of realistic time- and space-varying inputs.
2009.07479
Anisleidy Gonz\'alez-Mitjans
A. Gonz\'alez-Mitjans, D. Paz-Linares, A. Areces-Gonzalez, M. Li, Y. Wang, ML. Bringas-Vega, and P.A Vald\'es-Sosa
Accurate and Efficient Simulation of Very High-Dimensional Neural Mass Models with Distributed-Delay Connectome Tensors
12 pages, 6 figures, 2 tables
null
null
null
q-bio.NC cs.CE cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces methods and a novel toolbox that efficiently integrates any high-dimensional Neural Mass Models (NMMs) specified by two essential components. The first is the set of nonlinear Random Differential Equations (RDEs) of the dynamics of each neural mass. The second is the highly sparse three-dimensional Connectome Tensor (CT) that encodes the strength of the connections and the delays of information transfer along the axons of each connection. Semi-analytical integration of the RDEs is done with the Local Linearization scheme for each neural mass model, which is the only scheme guaranteeing dynamical fidelity to the original continuous-time nonlinear dynamics. It also seamlessly allows modeling distributed-delay CTs with any level of complexity or realism, as shown by the Moore-Penrose diagram of the algorithm. We achieve high computational efficiency by using a tensor representation of the model that leverages semi-analytic expressions to integrate the RDEs underlying the NMM. We discretize the state equation with Local Linearization via an algebraic formulation. This approach increases numerical integration speed and efficiency, a crucial aspect of large-scale NMM simulations. To illustrate the usefulness of the toolbox, we simulate both a single Zetterberg-Jansen-Rit (ZJR) cortical column and an interconnected population of such columns. These examples illustrate the consequences of modifying the CT in these models, especially by introducing distributed delays. We provide an open-source Matlab live script for the toolbox.
[ { "created": "Wed, 16 Sep 2020 05:55:17 GMT", "version": "v1" }, { "created": "Fri, 25 Sep 2020 02:17:54 GMT", "version": "v2" }, { "created": "Tue, 23 Feb 2021 07:02:09 GMT", "version": "v3" }, { "created": "Fri, 14 May 2021 05:54:11 GMT", "version": "v4" }, { "created": "Fri, 17 Dec 2021 07:28:09 GMT", "version": "v5" }, { "created": "Fri, 10 Jun 2022 03:05:22 GMT", "version": "v6" } ]
2022-06-13
[ [ "González-Mitjans", "A.", "" ], [ "Paz-Linares", "D.", "" ], [ "Areces-Gonzalez", "A.", "" ], [ "Li", "M.", "" ], [ "Wang", "Y.", "" ], [ "Bringas-Vega", "ML.", "" ], [ "Valdés-Sosa", "P. A", "" ] ]
This paper introduces methods and a novel toolbox that efficiently integrates any high-dimensional Neural Mass Models (NMMs) specified by two essential components. The first is the set of nonlinear Random Differential Equations (RDEs) of the dynamics of each neural mass. The second is the highly sparse three-dimensional Connectome Tensor (CT) that encodes the strength of the connections and the delays of information transfer along the axons of each connection. Semi-analytical integration of the RDEs is done with the Local Linearization scheme for each neural mass model, which is the only scheme guaranteeing dynamical fidelity to the original continuous-time nonlinear dynamics. It also seamlessly allows modeling distributed-delay CTs with any level of complexity or realism, as shown by the Moore-Penrose diagram of the algorithm. We achieve high computational efficiency by using a tensor representation of the model that leverages semi-analytic expressions to integrate the RDEs underlying the NMM. We discretize the state equation with Local Linearization via an algebraic formulation. This approach increases numerical integration speed and efficiency, a crucial aspect of large-scale NMM simulations. To illustrate the usefulness of the toolbox, we simulate both a single Zetterberg-Jansen-Rit (ZJR) cortical column and an interconnected population of such columns. These examples illustrate the consequences of modifying the CT in these models, especially by introducing distributed delays. We provide an open-source Matlab live script for the toolbox.
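The toolbox itself is Matlab, but the Local Linearization step it builds on is easy to sketch; below is a generic one-step LL update, x_{k+1} = x_k + J^{-1}(e^{Jh} - I) f(x_k), shown in Python for a toy two-dimensional ODE rather than the tensorized NMM implementation.

```python
import numpy as np
from scipy.linalg import expm, solve

def ll_step(f, jac, x, h):
    """One Local Linearization step: x + J^{-1} (e^{Jh} - I) f(x)."""
    J = jac(x)
    return x + solve(J, (expm(J * h) - np.eye(len(x))) @ f(x))

f = lambda x: np.array([x[1], -x[0] - 0.2 * x[1]])    # damped oscillator (toy)
jac = lambda x: np.array([[0.0, 1.0], [-1.0, -0.2]])  # Jacobian of f (constant here)
x = np.array([1.0, 0.0])
for _ in range(1000):                                  # integrate to t = 10
    x = ll_step(f, jac, x, 0.01)
print(x)
```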
1812.03783
Noemi Kurt
Jochen Blath, Adri\'an Gonz\'alez Casanova, Noemi Kurt, Maite Wilke-Berenguer
The seed bank coalescent with simultaneous switching
null
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new Wright-Fisher type model for seed banks incorporating "simultaneous switching", which is motivated by recent work on microbial dormancy. We show that the simultaneous switching mechanism leads to a new jump-diffusion limit for the scaled frequency processes, extending the classical Wright-Fisher and seed bank diffusion limits. We further establish a new dual coalescent structure with multiple activation and deactivation events of lineages. While this seems reminiscent of multiple merger events in general exchangeable coalescents, it actually leads to an entirely new class of coalescent processes with unique qualitative and quantitative behaviour. To illustrate this, we provide a novel kind of condition for coming down from infinity for these coalescents using recent results of Griffiths.
[ { "created": "Mon, 10 Dec 2018 13:36:30 GMT", "version": "v1" }, { "created": "Fri, 21 Dec 2018 10:24:57 GMT", "version": "v2" } ]
2018-12-24
[ [ "Blath", "Jochen", "" ], [ "Casanova", "Adrián González", "" ], [ "Kurt", "Noemi", "" ], [ "Wilke-Berenguer", "Maite", "" ] ]
We introduce a new Wright-Fisher type model for seed banks incorporating "simultaneous switching", which is motivated by recent work on microbial dormancy. We show that the simultaneous switching mechanism leads to a new jump-diffusion limit for the scaled frequency processes, extending the classical Wright-Fisher and seed bank diffusion limits. We further establish a new dual coalescent structure with multiple activation and deactivation events of lineages. While this seems reminiscent of multiple merger events in general exchangeable coalescents, it actually leads to an entirely new class of coalescent processes with unique qualitative and quantitative behaviour. To illustrate this, we provide a novel kind of condition for coming down from infinity for these coalescents using recent results of Griffiths.
2204.11747
Chottiwatt Jittprasong
Chottiwatt Jittprasong (Biomedical Robotics Laboratory, Department of Biomedical Engineering, City University of Hong Kong)
A feasibility study proposal of the predictive model to enable the prediction of population susceptibility to COVID-19 by analysis of vaccine utilization for advising deployment of a booster dose
4 pages with 7 figures, pdfLaTeX
null
null
null
q-bio.PE cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
With the present highly infectious dominant SARS-CoV-2 strain, B.1.1.529 or Omicron, spreading around the globe, there is concern that the COVID-19 pandemic will not end soon and that it will be a race against time until a more contagious and virulent variant emerges. One of the most promising approaches for preventing virus propagation is to maintain continuously high vaccination efficacy among the population, thereby strengthening the population protective effect and preventing the majority of infections in the vaccinated population, as is known to occur frequently with the Omicron variant. Countries must structure vaccination programs in accordance with their populations' susceptibility to infection, optimizing vaccination efforts by delivering vaccines progressively enough to protect the majority of the population. We present a feasibility study proposal for maintaining optimal continuous vaccination by assessing the susceptible population, the decline of vaccine efficacy in the population, and advising booster dose deployment to maintain the population's protective efficacy through the use of a predictive model. Numerous studies have analyzed vaccine utilization; however, very little research has been conducted to substantiate the optimal deployment of booster-dose vaccination with the help of a predictive model based on machine learning algorithms.
[ { "created": "Mon, 25 Apr 2022 16:05:59 GMT", "version": "v1" } ]
2022-04-26
[ [ "Jittprasong", "Chottiwatt", "", "Biomedical Robotics Laboratory, Department of\n Biomedical Engineering, City University of Hong Kong" ] ]
With the present highly infectious dominant SARS-CoV-2 strain, B.1.1.529 or Omicron, spreading around the globe, there is concern that the COVID-19 pandemic will not end soon and that it will be a race against time until a more contagious and virulent variant emerges. One of the most promising approaches for preventing virus propagation is to maintain continuously high vaccination efficacy among the population, thereby strengthening the population protective effect and preventing the majority of infections in the vaccinated population, as is known to occur frequently with the Omicron variant. Countries must structure vaccination programs in accordance with their populations' susceptibility to infection, optimizing vaccination efforts by delivering vaccines progressively enough to protect the majority of the population. We present a feasibility study proposal for maintaining optimal continuous vaccination by assessing the susceptible population, the decline of vaccine efficacy in the population, and advising booster dose deployment to maintain the population's protective efficacy through the use of a predictive model. Numerous studies have analyzed vaccine utilization; however, very little research has been conducted to substantiate the optimal deployment of booster-dose vaccination with the help of a predictive model based on machine learning algorithms.
2004.07937
Syed Muhammad Usman
Syed Muhammad Usman, Shahzad Latif, Arshad Beg
Principle components analysis for seizures prediction using wavelet transform
null
null
null
null
q-bio.NC cs.LG eess.SP stat.ML
http://creativecommons.org/licenses/by/4.0/
Epilepsy is a disease in which frequent seizures occur due to abnormal activity of neurons. Patients affected by this disease can be treated with the help of medicines or surgical procedures. However, neither of these methods is fully effective. The only way to treat epilepsy patients effectively is to predict a seizure before its onset. It has been observed that abnormal activity in brain signals starts before the occurrence of a seizure, in what is known as the preictal state. Many researchers have proposed machine learning models for the prediction of epileptic seizures by detecting the start of the preictal state. However, pre-processing, feature extraction and classification remain great challenges in the prediction of the preictal state. Therefore, we propose a model that uses common spatial pattern filtering and the wavelet transform for preprocessing, principal component analysis for feature extraction, and support vector machines for detecting the preictal state. We have applied our model to 23 subjects, and an average sensitivity of 93.1% has been observed for 84 seizures.
[ { "created": "Mon, 9 Mar 2020 04:32:57 GMT", "version": "v1" } ]
2020-04-20
[ [ "Usman", "Syed Muhammad", "" ], [ "Latif", "Shahzad", "" ], [ "Beg", "Arshad", "" ] ]
Epilepsy is a disease in which frequent seizures occur due to abnormal activity of neurons. Patients affected by this disease can be treated with the help of medicines or surgical procedures. However, neither of these methods is fully effective. The only way to treat epilepsy patients effectively is to predict a seizure before its onset. It has been observed that abnormal activity in brain signals starts before the occurrence of a seizure, in what is known as the preictal state. Many researchers have proposed machine learning models for the prediction of epileptic seizures by detecting the start of the preictal state. However, pre-processing, feature extraction and classification remain great challenges in the prediction of the preictal state. Therefore, we propose a model that uses common spatial pattern filtering and the wavelet transform for preprocessing, principal component analysis for feature extraction, and support vector machines for detecting the preictal state. We have applied our model to 23 subjects, and an average sensitivity of 93.1% has been observed for 84 seizures.
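A compressed, hedged sketch of the described pipeline (wavelet features -> PCA -> SVM); the PyWavelets feature extraction stands in for the paper's exact preprocessing (which also includes common spatial pattern filtering), and the windows and labels below are synthetic.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wavelet_features(window, wavelet="db4", level=4):
    """Simple per-subband statistics of a discrete wavelet decomposition."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.concatenate([[c.std(), np.abs(c).mean()] for c in coeffs])

rng = np.random.default_rng(0)
X = np.stack([wavelet_features(rng.standard_normal(512)) for _ in range(100)])
y = rng.integers(0, 2, size=100)               # 1 = preictal, 0 = interictal (fake)
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X, y)                                  # evaluate with proper CV in practice
```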
1907.10790
Katsuyoshi Matsushita
Katsuyoshi Matsushita and Kazuya Horibe and Naoya Kamamoto and Koichi Fujimoto
Cell Motion Alignment as Polarity Memory Effect
6 pages, 3 figures
null
10.7566/JPSJ.88.103801
null
q-bio.CB cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The clarification of the motion alignment mechanism in collective cell migration is an important issue in both physics and biology. In analogy with the self-propelled disk, the polarity memory effect of eukaryotic cells is a fundamental candidate for this alignment mechanism. In the present paper, we theoretically examine the polarity memory effect for the motion alignment of cells on the basis of the cellular Potts model. We show that the polarity memory effect can align the motion of cells. We also find that the polarity memory effect emerges when the persistence length of cell trajectories is longer than the average cell-cell distance.
[ { "created": "Thu, 25 Jul 2019 01:44:15 GMT", "version": "v1" } ]
2019-10-02
[ [ "Matsushita", "Katsuyoshi", "" ], [ "Horibe", "Kazuya", "" ], [ "Kamamoto", "Naoya", "" ], [ "Fujimoto", "Koichi", "" ] ]
The clarification of the motion alignment mechanism in collective cell migration is an important issue in both physics and biology. In analogy with the self-propelled disk, the polarity memory effect of eukaryotic cells is a fundamental candidate for this alignment mechanism. In the present paper, we theoretically examine the polarity memory effect for the motion alignment of cells on the basis of the cellular Potts model. We show that the polarity memory effect can align the motion of cells. We also find that the polarity memory effect emerges when the persistence length of cell trajectories is longer than the average cell-cell distance.
q-bio/0611044
Nils Becker
Nils B. Becker and Ralf Everaers
DNA: From rigid base-pairs to semiflexible polymers
13 pages, 6 figures, 6 tables
null
10.1103/PhysRevE.76.021923
null
q-bio.BM
null
The sequence-dependent elasticity of double-helical DNA on a nm length scale can be captured by the rigid base-pair model, whose strains are the relative position and orientation of adjacent base-pairs. Corresponding elastic potentials have been obtained from all-atom MD simulation and from high-resolution structural data. On the scale of a hundred nm, DNA is successfully described by a continuous worm-like chain model with homogeneous elastic properties characterized by a set of four elastic constants, which have been directly measured in single-molecule experiments. We present here a theory that links these experiments on different scales, by systematically coarse-graining the rigid base-pair model for random sequence DNA to an effective worm-like chain description. The average helical geometry of the molecule is exactly taken into account in our approach. We find that the available microscopic parameter sets predict qualitatively similar mesoscopic parameters. The thermal bending and twisting persistence lengths computed from MD data are 42 and 48 nm, respectively. The static persistence lengths are generally much higher, in agreement with cyclization experiments. All microscopic parameter sets predict negative twist-stretch coupling. The variability and anisotropy of bending stiffness in short random chains lead to non-Gaussian bend angle distributions, but become unimportant after two helical turns.
[ { "created": "Wed, 15 Nov 2006 09:13:40 GMT", "version": "v1" } ]
2013-05-29
[ [ "Becker", "Nils B.", "" ], [ "Everaers", "Ralf", "" ] ]
The sequence-dependent elasticity of double-helical DNA on a nm length scale can be captured by the rigid base-pair model, whose strains are the relative position and orientation of adjacent base-pairs. Corresponding elastic potentials have been obtained from all-atom MD simulation and from high-resolution structural data. On the scale of a hundred nm, DNA is successfully described by a continuous worm-like chain model with homogeneous elastic properties characterized by a set of four elastic constants, which have been directly measured in single-molecule experiments. We present here a theory that links these experiments on different scales, by systematically coarse-graining the rigid base-pair model for random sequence DNA to an effective worm-like chain description. The average helical geometry of the molecule is exactly taken into account in our approach. We find that the available microscopic parameter sets predict qualitatively similar mesoscopic parameters. The thermal bending and twisting persistence lengths computed from MD data are 42 and 48 nm, respectively. The static persistence lengths are generally much higher, in agreement with cyclization experiments. All microscopic parameter sets predict negative twist-stretch coupling. The variability and anisotropy of bending stiffness in short random chains lead to non-Gaussian bend angle distributions, but become unimportant after two helical turns.
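A standard estimator linking the two scales discussed above: the bending persistence length read off from the decay of tangent-tangent correlations of a discretized chain. The chain below is a synthetic 2D worm-like-chain sample, not MD or crystallographic data.

```python
import numpy as np

rng = np.random.default_rng(4)
lp_true, b, n = 50.0, 0.34, 4000                   # nm, rise per bp (nm), segments
theta = rng.normal(0.0, np.sqrt(b / lp_true), n)   # Gaussian bend per step (2D toy)
angles = np.cumsum(theta)
t = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # unit tangent vectors

ks = np.arange(1, 200)
corr = np.array([np.mean(np.sum(t[:-k] * t[k:], axis=1)) for k in ks])
# In 2D, <t(0).t(s)> = exp(-s / (2 lp)), so the log-slope gives lp directly.
slope = np.polyfit(b * ks, np.log(corr), 1)[0]
print(-1.0 / (2.0 * slope))                        # estimate of lp_true (~50 nm)
```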
1704.08994
Jian-Jun Shu
Jian-Jun Shu, Kian-Yan Yong
Fourier-based classification of protein secondary structures
null
Biochemical and Biophysical Research Communications, Vol. 485, No. 4, pp. 731-735, 2017
10.1016/j.bbrc.2017.02.117
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The correct prediction of protein secondary structures is one of the key issues in predicting the correct protein folded shape, which is used for determining gene function. Existing methods make use of amino acid properties as indices to classify protein secondary structures, but are faced with a significant number of misclassifications. The paper presents a technique for the classification of protein secondary structures based on protein "signal-plotting" and the use of the Fourier technique for digital signal processing. New indices are proposed to classify protein secondary structures by analyzing hydrophobicity profiles. The approach is simple and straightforward. Results show that more types of protein secondary structures can be classified by means of these newly proposed indices.
[ { "created": "Fri, 28 Apr 2017 16:20:50 GMT", "version": "v1" } ]
2017-05-01
[ [ "Shu", "Jian-Jun", "" ], [ "Yong", "Kian-Yan", "" ] ]
The correct prediction of protein secondary structures is one of the key issues in predicting the correct protein folded shape, which is used for determining gene function. Existing methods make use of amino acid properties as indices to classify protein secondary structures, but are faced with a significant number of misclassifications. The paper presents a technique for the classification of protein secondary structures based on protein "signal-plotting" and the use of the Fourier technique for digital signal processing. New indices are proposed to classify protein secondary structures by analyzing hydrophobicity profiles. The approach is simple and straightforward. Results show that more types of protein secondary structures can be classified by means of these newly proposed indices.
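A small illustration of the "signal-plotting" idea: turn a sequence into a hydrophobicity profile and read the dominant period from its Fourier spectrum. The Kyte-Doolittle subset and the toy repeat are assumptions; for real alpha-helices one looks for power near a period of about 3.6 residues.

```python
import numpy as np

KD = {"L": 3.8, "A": 1.8, "K": -3.9, "S": -0.8}   # Kyte-Doolittle subset (assumed scale)
seq = "LAKKS" * 8                                  # toy repeat with period 5
h = np.array([KD[a] for a in seq])
h -= h.mean()                                      # remove the DC component
power = np.abs(np.fft.rfft(h)) ** 2
freqs = np.fft.rfftfreq(len(h))                    # in cycles per residue
peak = np.argmax(power[1:]) + 1                    # skip the zero frequency
print("dominant period:", 1 / freqs[peak], "residues")   # ~5.0 for this toy
```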
1611.04193
Stuart Hagler
Stuart Hagler
Patterns of Selection of Human Movements III: Energy Efficiency, Mechanical Advantage, and Walking Gait
21 pages, 5 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human movements are physical processes combining the classical mechanics of the human body moving in space and the biomechanics of the muscles generating the forces acting on the body under sophisticated sensory-motor control. One way to characterize movement performance is through measures of energy efficiency that relate the mechanical energy of the body and the metabolic energy expended by the muscles. We expect the practical utility of such measures to be greater when human subjects execute movements that maximize energy efficiency. We therefore seek to understand if and when subjects select movements that maximize energy efficiency. We proceed using a model-based approach to describe movements which perform a task requiring the body to add or remove external mechanical work to or from an object. We use the specific example of walking gaits doing external mechanical work by pulling a cart, and estimate the relationship between the avg. walking speed and avg. step length. In the limit where no external work is done, we find that the estimated maximum-energy-efficiency walking gait is much slower than the walking gaits healthy adults typically select. We then modify the situation of the walking gait by introducing an idealized mechanical device that creates an adjustable mechanical advantage. The walking gaits that maximize energy efficiency using the optimal mechanical advantage are again much slower than the walking gaits healthy adults typically select. We finally modify the situation so that the avg. walking speed is fixed and derive the pattern of the avg. step length and mechanical advantage that maximize energy efficiency.
[ { "created": "Sun, 13 Nov 2016 21:16:15 GMT", "version": "v1" }, { "created": "Thu, 24 Nov 2016 22:50:17 GMT", "version": "v2" }, { "created": "Sun, 9 Dec 2018 19:52:31 GMT", "version": "v3" } ]
2018-12-11
[ [ "Hagler", "Stuart", "" ] ]
Human movements are physical processes combining the classical mechanics of the human body moving in space and the biomechanics of the muscles generating the forces acting on the body under sophisticated sensory-motor control. One way to characterize movement performance is through measures of energy efficiency that relate the mechanical energy of the body and the metabolic energy expended by the muscles. We expect the practical utility of such measures to be greater when human subjects execute movements that maximize energy efficiency. We therefore seek to understand if and when subjects select movements that maximize energy efficiency. We proceed using a model-based approach to describe movements which perform a task requiring the body to add or remove external mechanical work to or from an object. We use the specific example of walking gaits doing external mechanical work by pulling a cart, and estimate the relationship between the avg. walking speed and avg. step length. In the limit where no external work is done, we find that the estimated maximum-energy-efficiency walking gait is much slower than the walking gaits healthy adults typically select. We then modify the situation of the walking gait by introducing an idealized mechanical device that creates an adjustable mechanical advantage. The walking gaits that maximize energy efficiency using the optimal mechanical advantage are again much slower than the walking gaits healthy adults typically select. We finally modify the situation so that the avg. walking speed is fixed and derive the pattern of the avg. step length and mechanical advantage that maximize energy efficiency.
2303.16209
Kyoungmin Min
Myeonghun Lee and Kyoungmin Min
AmorProt: Amino Acid Molecular Fingerprints Repurposing based Protein Fingerprint
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
As protein therapeutics play an important role in almost all medical fields, numerous studies have been conducted on proteins using artificial intelligence. Artificial intelligence has enabled data-driven predictions without the need for expensive experiments. Nevertheless, unlike the various molecular fingerprint algorithms that have been developed, protein fingerprint algorithms have rarely been studied. In this study, we proposed the amino acid molecular fingerprints repurposing based protein (AmorProt) fingerprint, a protein sequence representation method that effectively uses the molecular fingerprints corresponding to the 20 amino acids. Subsequently, the performances of tree-based machine learning and artificial neural network models were compared using (1) amyloid classification and (2) isoelectric point regression. Finally, the applicability and advantages of the developed platform were demonstrated through a case study and the following experiments: (3) comparison of dataset dependence with feature-based methods; (4) feature importance analysis; and (5) protein space analysis. Consequently, the significantly improved model performance and dataset-independent versatility of the AmorProt fingerprint were verified. The results revealed that the current protein representation method can be applied to various fields related to proteins, such as predicting their fundamental properties or their interactions with ligands.
[ { "created": "Mon, 27 Mar 2023 23:57:47 GMT", "version": "v1" } ]
2023-03-30
[ [ "Lee", "Myeonghun", "" ], [ "Min", "Kyoungmin", "" ] ]
As protein therapeutics play an important role in almost all medical fields, numerous studies have been conducted on proteins using artificial intelligence. Artificial intelligence has enabled data-driven predictions without the need for expensive experiments. Nevertheless, unlike the various molecular fingerprint algorithms that have been developed, protein fingerprint algorithms have rarely been studied. In this study, we proposed the amino acid molecular fingerprints repurposing based protein (AmorProt) fingerprint, a protein sequence representation method that effectively uses the molecular fingerprints corresponding to the 20 amino acids. Subsequently, the performances of tree-based machine learning and artificial neural network models were compared using (1) amyloid classification and (2) isoelectric point regression. Finally, the applicability and advantages of the developed platform were demonstrated through a case study and the following experiments: (3) comparison of dataset dependence with feature-based methods; (4) feature importance analysis; and (5) protein space analysis. Consequently, the significantly improved model performance and dataset-independent versatility of the AmorProt fingerprint were verified. The results revealed that the current protein representation method can be applied to various fields related to proteins, such as predicting their fundamental properties or their interactions with ligands.
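An illustrative sketch of the core idea, not the paper's exact recipe: represent a protein by combining per-amino-acid molecular fingerprints (RDKit Morgan fingerprints here). The two-letter SMILES subset and the position-weighted sum are assumptions made for brevity.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

AA_SMILES = {"G": "NCC(=O)O", "A": "N[C@@H](C)C(=O)O"}   # toy two-residue subset

def aa_fingerprint(smiles, n_bits=256):
    """Morgan (circular) fingerprint of a single amino acid as a float vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
    return np.array(list(fp), dtype=float)

def protein_fingerprint(seq, n_bits=256):
    """Combine residue fingerprints; the 1/sqrt(position) weight is assumed."""
    vec = np.zeros(n_bits)
    for pos, aa in enumerate(seq, start=1):
        vec += aa_fingerprint(AA_SMILES[aa], n_bits) / np.sqrt(pos)
    return vec

print(protein_fingerprint("GAGA")[:8])
```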
q-bio/0312040
Peng-Ye Wang
Ping Xie, Shuo-Xing Dou, Peng-Ye Wang
A model for processivity of molecular motors
10 pages, 7 figures
Chinese Physics, 13, 1569 (2004)
10.1088/1009-1963/13/9/036
null
q-bio.BM
null
We propose a two-dimensional model for a complete description of the dynamics of molecular motors, including both the processive movement along track filaments and the dissociation from the filaments. The theoretical results on the distributions of the run length and dwell time at a given ATP concentration, the dependences of mean run length, mean dwell time and mean velocity on ATP concentration and load are in good agreement with the previous experimental results.
[ { "created": "Thu, 25 Dec 2003 09:58:52 GMT", "version": "v1" } ]
2009-11-10
[ [ "Xie", "Ping", "" ], [ "Dou", "Shuo-Xing", "" ], [ "Wang", "Peng-Ye", "" ] ]
We propose a two-dimensional model for a complete description of the dynamics of molecular motors, including both the processive movement along track filaments and the dissociation from the filaments. The theoretical results on the distributions of the run length and dwell time at a given ATP concentration, the dependences of mean run length, mean dwell time and mean velocity on ATP concentration and load are in good agreement with the previous experimental results.
1402.0511
Charles Fisher
Charles K. Fisher, Pankaj Mehta
Identifying Keystone Species in the Human Gut Microbiome from Metagenomic Timeseries using Sparse Linear Regression
null
null
10.1371/journal.pone.0102451
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human-associated microbial communities exert tremendous influence over human health and disease. With modern metagenomic sequencing methods it is possible to follow the relative abundance of microbes in a community over time. These microbial communities exhibit rich ecological dynamics and an important goal of microbial ecology is to infer the interactions between species from sequence data. Any algorithm for inferring species interactions must overcome three obstacles: 1) a correlation between the abundances of two species does not imply that those species are interacting, 2) the sum constraint on the relative abundances obtained from metagenomic studies makes it difficult to infer the parameters in timeseries models, and 3) errors due to experimental uncertainty, or mis-assignment of sequencing reads into operational taxonomic units, bias inferences of species interactions. Here we introduce an approach, Learning Interactions from MIcrobial Time Series (LIMITS), that overcomes these obstacles. LIMITS uses sparse linear regression with bootstrap aggregation to infer a discrete-time Lotka-Volterra model for microbial dynamics. We tested LIMITS on synthetic data and showed that it could reliably infer the topology of the inter-species ecological interactions. We then used LIMITS to characterize the species interactions in the gut microbiomes of two individuals and found that the interaction networks varied significantly between individuals. Furthermore, we found that the interaction networks of the two individuals are dominated by distinct "keystone species", Bacteroides fragilis and Bacteroides stercoris, that have a disproportionate influence on the structure of the gut microbiome even though they are only found in moderate abundance. Based on our results, we hypothesize that the abundances of certain keystone species may be responsible for individuality in the human gut microbiome.
[ { "created": "Mon, 3 Feb 2014 21:00:29 GMT", "version": "v1" } ]
2015-06-18
[ [ "Fisher", "Charles K.", "" ], [ "Mehta", "Pankaj", "" ] ]
Human-associated microbial communities exert tremendous influence over human health and disease. With modern metagenomic sequencing methods it is possible to follow the relative abundance of microbes in a community over time. These microbial communities exhibit rich ecological dynamics and an important goal of microbial ecology is to infer the interactions between species from sequence data. Any algorithm for inferring species interactions must overcome three obstacles: 1) a correlation between the abundances of two species does not imply that those species are interacting, 2) the sum constraint on the relative abundances obtained from metagenomic studies makes it difficult to infer the parameters in timeseries models, and 3) errors due to experimental uncertainty, or mis-assignment of sequencing reads into operational taxonomic units, bias inferences of species interactions. Here we introduce an approach, Learning Interactions from MIcrobial Time Series (LIMITS), that overcomes these obstacles. LIMITS uses sparse linear regression with bootstrap aggregation to infer a discrete-time Lotka-Volterra model for microbial dynamics. We tested LIMITS on synthetic data and showed that it could reliably infer the topology of the inter-species ecological interactions. We then used LIMITS to characterize the species interactions in the gut microbiomes of two individuals and found that the interaction networks varied significantly between individuals. Furthermore, we found that the interaction networks of the two individuals are dominated by distinct "keystone species", Bacteroides fragilis and Bacteroides stercoris, that have a disproportionate influence on the structure of the gut microbiome even though they are only found in moderate abundance. Based on our results, we hypothesize that the abundances of certain keystone species may be responsible for individuality in the human gut microbiome.
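A compressed sketch of the LIMITS idea: regress log-abundance increments on abundances with sparse (lasso) regression and bag over bootstrap resamples. The published algorithm uses stepwise variable-selection details not reproduced here, and the data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

def infer_interactions(x, alpha=0.01, n_boot=50, seed=0):
    """x: (T, S) positive abundance series -> bagged (S, S) interaction matrix."""
    rng = np.random.default_rng(seed)
    y = np.diff(np.log(x), axis=0)            # discrete-time Lotka-Volterra response
    X = x[:-1]                                # abundances at the previous step
    mats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y)) # bootstrap resample of time points
        mats.append(Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_)
    return np.mean(mats, axis=0)              # row i: inferred drivers of species i

x = np.random.default_rng(1).lognormal(0.0, 0.5, size=(200, 3))
print(infer_interactions(x).round(3))         # near-zero for this structureless toy
```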
1810.01452
Arni S.R. Srinivasa Rao
Arni S.R. Srinivasa Rao
A Partition Theorem for a Randomly Selected Large Population
12 pages, 4 figures. A new result in population dynamics
Acta Biotheoretica (Springer) 2021
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
We state and prove a theorem on the partitioning of a randomly selected large population into stationary and non-stationary components by using a property of the stationary population identity. Applications of this theorem for practical purposes are summarized at the end.
[ { "created": "Tue, 2 Oct 2018 18:47:37 GMT", "version": "v1" }, { "created": "Mon, 22 Oct 2018 02:07:38 GMT", "version": "v2" }, { "created": "Thu, 1 Jul 2021 17:16:42 GMT", "version": "v3" } ]
2021-10-20
[ [ "Rao", "Arni S. R. Srinivasa", "" ] ]
We state and prove a theorem on the partitioning of a randomly selected large population into stationary and non-stationary components by using a property of the stationary population identity. Applications of this theorem for practical purposes are summarized at the end.
1201.2900
Mikhail Peslyak
Mikhail Peslyak
Model of pathogenesis of psoriasis. Part 2. Local processes
English edition e1.3, Russia, Moscow, MYPE, 2012, 110 pages, 30 figures, ISBN 9785905504044
null
null
null
q-bio.CB q-bio.TO
http://creativecommons.org/licenses/by-nc-sa/3.0/
An analytical review of experimental and theoretical studies on the pathogenesis of psoriatic disease is carried out. A new model of pathogenesis - the skin reaction to the systemic psoriatic process (SPP) - is formulated. ... Psoriatic inflammation is regarded as a reaction of the skin immune system to the activity of Mo-R and DC-R recruited into the derma from the blood flow. They contain Y-antigen and, on reaching the derma, can be transformed into mature maDC-Y, present this antigen to Y-specific T-lymphocytes, and activate them. Y-antigen is a part of the interpeptide bridge IB-Y. Therefore, the skin immune system can incorrectly interpret Y-antigen presentation as a sign of external PsB-infection and switch on one of the mechanisms of protection against bacterial infection - epidermal hyperproliferation. A psoriatic plaque can be initiated only during the action of a local inflammatory process LP2 in the derma causing not only an innate but also an adaptive response. In particular, this is possible with LP2(IN) - open trauma of the derma - or with LP2(HPV) - HPV-carriage of keratinocytes. The level of Y-priming (the presence and concentration of Y-specific T-lymphocytes in the prepsoriatic derma and in lymph nodes) also determines the possibility of psoriatic plaque initiation. The existence and severity of a psoriatic plaque are determined by the intensity of Y-antigen influx into the derma (inside Mo-R and DC-R). ... The severity of a plaque is aggravated by LP2-inflammation if it persists after plaque initiation. New Mo-T, DC-T (incl. Mo-R, DC-R) and Y-specific T-lymphocytes are constantly attracted into plaques from the blood flow, and so support vicious cycles. Only when SPP severity decreases do these vicious cycles weaken, and natural remission of plaques takes place, up to their complete disappearance. A detailed analysis comparing the new model of pathogenesis with five other previously published models is carried out. Part 1: arXiv:1110.0584
[ { "created": "Fri, 13 Jan 2012 17:31:10 GMT", "version": "v1" }, { "created": "Sat, 14 Apr 2012 16:47:02 GMT", "version": "v2" } ]
2012-04-20
[ [ "Peslyak", "Mikhail", "" ] ]
An analytical review of experimental and theoretical studies on the pathogenesis of psoriatic disease is carried out. A new model of pathogenesis - the skin reaction to the systemic psoriatic process (SPP) - is formulated. ... Psoriatic inflammation is regarded as a reaction of the skin immune system to the activity of Mo-R and DC-R recruited into the derma from the blood flow. They contain Y-antigen and, on reaching the derma, can be transformed into mature maDC-Y, present this antigen to Y-specific T-lymphocytes, and activate them. Y-antigen is a part of the interpeptide bridge IB-Y. Therefore, the skin immune system can incorrectly interpret Y-antigen presentation as a sign of external PsB-infection and switch on one of the mechanisms of protection against bacterial infection - epidermal hyperproliferation. A psoriatic plaque can be initiated only during the action of a local inflammatory process LP2 in the derma causing not only an innate but also an adaptive response. In particular, this is possible with LP2(IN) - open trauma of the derma - or with LP2(HPV) - HPV-carriage of keratinocytes. The level of Y-priming (the presence and concentration of Y-specific T-lymphocytes in the prepsoriatic derma and in lymph nodes) also determines the possibility of psoriatic plaque initiation. The existence and severity of a psoriatic plaque are determined by the intensity of Y-antigen influx into the derma (inside Mo-R and DC-R). ... The severity of a plaque is aggravated by LP2-inflammation if it persists after plaque initiation. New Mo-T, DC-T (incl. Mo-R, DC-R) and Y-specific T-lymphocytes are constantly attracted into plaques from the blood flow, and so support vicious cycles. Only when SPP severity decreases do these vicious cycles weaken, and natural remission of plaques takes place, up to their complete disappearance. A detailed analysis comparing the new model of pathogenesis with five other previously published models is carried out. Part 1: arXiv:1110.0584
0911.1844
Anne Feltz
Norbert Weiss, Christophe Arnoult, Anne Feltz (NEURO), Michel De Waard
Contribution of the kinetics of G protein dissociation to the characteristic modifications of N-type calcium channel activity
null
Neuroscience Research 56, 3 (2006) 332-43
10.1016/j.neures.2006.08.002
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Direct G protein inhibition of N-type calcium channels is recognized by characteristic biophysical modifications. In this study, we quantify and simulate the contribution of G protein dissociation to the phenotype of G protein-regulated whole-cell currents. Based on the observation that the voltage-dependence of the time constant of recovery from G protein inhibition is correlated with the voltage-dependence of channel opening, we depict all G protein effects by a simple kinetic scheme. All landmark modifications in calcium currents, except inhibition, can be successfully described using three simple biophysical parameters (extent of block, extent of recovery, and time constant of recovery). Modifications of these parameters by auxiliary beta subunits are at the origin of differences in N-type channel regulation by G proteins. The simulation data illustrate that channel reluctance can occur as the result of an experimental bias linked to the variable extent of G protein dissociation when peak currents are measured at various membrane potentials. To produce alterations in channel kinetics, the two most important parameters are the extents of initial block and recovery. These data emphasize the contribution of the degree and kinetics of G protein dissociation in the modification of N-type currents.
[ { "created": "Tue, 10 Nov 2009 07:28:18 GMT", "version": "v1" } ]
2009-11-11
[ [ "Weiss", "Norbert", "", "NEURO" ], [ "Arnoult", "Christophe", "", "NEURO" ], [ "Feltz", "Anne", "", "NEURO" ], [ "De Waard", "Michel", "" ] ]
Direct G protein inhibition of N-type calcium channels is recognized by characteristic biophysical modifications. In this study, we quantify and simulate the contribution of G protein dissociation to the phenotype of G protein-regulated whole-cell currents. Based on the observation that the voltage-dependence of the time constant of recovery from G protein inhibition is correlated with the voltage-dependence of channel opening, we depict all G protein effects by a simple kinetic scheme. All landmark modifications in calcium currents, except inhibition, can be successfully described using three simple biophysical parameters (extent of block, extent of recovery, and time constant of recovery). Modifications of these parameters by auxiliary beta subunits are at the origin of differences in N-type channel regulation by G proteins. The simulation data illustrate that channel reluctance can occur as the result of an experimental bias linked to the variable extent of G protein dissociation when peak currents are measured at various membrane potentials. To produce alterations in channel kinetics, the two most important parameters are the extents of initial block and recovery. These data emphasize the contribution of the degree and kinetics of G protein dissociation in the modification of N-type currents.
2407.19328
Nicholas Dimonaco
Nicholas J. Dimonaco
PyamilySeq: A Python Tool for Interpretable Gene (Re)Clustering and Pangenomic Inference Across Species and Genera
See here for installation and use: https://pypi.org/project/PyamilySeq/
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
PyamilySeq is a Python-based tool designed for interpretable gene clustering and pangenomic inference, supporting analyses at both species and genus levels. It facilitates the clustering of gene sequences into families based on sequence similarity using CD-HIT, and can take the output of tried-and-tested sequence clustering tools such as CD-HIT, BLAST, DIAMOND, and MMseqs2. PyamilySeq is distinctive in its ability to integrate new sequences into existing clusters, providing a robust framework for iterative analysis while preserving the original clusters, which is useful when reannotating genomes. In addition to the standard Species mode, which, as with other tools, performs core-gene analysis across a species range, PyamilySeq can be run in Genus mode, where it detects the presence of gene families shared across multiple genera. These features enhance the tool's applicability for ongoing and past genomic studies and comparative analyses. PyamilySeq generates comprehensive outputs, including gene presence-absence matrices and aligned sequence data, enabling downstream analysis and interpretation of the identified gene groups and pangenomic data.
[ { "created": "Sat, 27 Jul 2024 19:32:35 GMT", "version": "v1" } ]
2024-07-30
[ [ "Dimonaco", "Nicholas J.", "" ] ]
PyamilySeq is a Python-based tool designed for interpretable gene clustering and pangenomic inference, supporting analyses at both species and genus levels. It facilitates the clustering of gene sequences into families based on sequence similarity using CD-HIT, and can take the output of tried-and-tested sequence clustering tools such as CD-HIT, BLAST, DIAMOND, and MMseqs2. PyamilySeq is distinctive in its ability to integrate new sequences into existing clusters, providing a robust framework for iterative analysis while preserving the original clusters, which is useful when reannotating genomes. In addition to the standard Species mode, which, as with other tools, performs core-gene analysis across a species range, PyamilySeq can be run in Genus mode, where it detects the presence of gene families shared across multiple genera. These features enhance the tool's applicability for ongoing and past genomic studies and comparative analyses. PyamilySeq generates comprehensive outputs, including gene presence-absence matrices and aligned sequence data, enabling downstream analysis and interpretation of the identified gene groups and pangenomic data.
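A sketch of the downstream bookkeeping such a tool performs: turning gene-family cluster assignments into a genome-by-family presence/absence matrix and flagging core families. The cluster input format below is invented for illustration and is not PyamilySeq's actual file format.

```python
import pandas as pd

clusters = {                                   # family -> member genes ("genome|gene")
    "fam1": ["gA|0001", "gB|0007", "gC|0003"],
    "fam2": ["gA|0002", "gB|0010"],
    "fam3": ["gC|0042"],
}
rows = [(gene.split("|")[0], fam)
        for fam, genes in clusters.items() for gene in genes]
df = pd.DataFrame(rows, columns=["genome", "family"])
pam = pd.crosstab(df["genome"], df["family"]).clip(upper=1)   # presence/absence
core = pam.columns[pam.sum() == pam.shape[0]]  # families present in every genome
print(pam, "\ncore families:", list(core))
```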
1810.03282
Suman Kumar Banik
Mintu Nandi, Ayan Biswas, Suman K Banik and Pinaki Chaudhury
Information processing in a simple one-step cascade
26 pages, 5 figures
null
10.1103/PhysRevE.98.042310
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using the formalism of information theory, we analyze the mechanism of information transduction in a simple one-step signaling cascade S$\rightarrow$X representing a gene regulatory network. Approximating the signaling channel to be Gaussian, we describe the dynamics using Langevin equations. Upon discretization, we calculate the associated second moments for linear and nonlinear regulation of the output by the input, which follows a birth-death process. While the mutual information between the input and the output characterizes the channel capacity, the Fano factor of the output gives a clear idea of how internal and external fluctuations assemble at the output level. To quantify the contribution of the present state of the input to predicting the future output, the transfer entropy is computed. We find that a higher transfer entropy is accompanied by a greater magnitude of external fluctuations (quantified by the Fano factor of the output) propagating from the input to the output. We notice that a low input population, characterized by the number of signaling molecules S and fluctuating relatively slowly compared with its downstream (target) species X, is maximally able to predict (as quantified by the transfer entropy) the future state of the output. Our computations also reveal that, with an increasingly linear input-output interaction, all three metrics - mutual information, Fano factor and transfer entropy - achieve relatively larger magnitudes.
[ { "created": "Mon, 8 Oct 2018 06:44:40 GMT", "version": "v1" } ]
2018-11-14
[ [ "Nandi", "Mintu", "" ], [ "Biswas", "Ayan", "" ], [ "Banik", "Suman K", "" ], [ "Chaudhury", "Pinaki", "" ] ]
Using the formalism of information theory, we analyze the mechanism of information transduction in a simple one-step signaling cascade S$\rightarrow$X representing the gene regulatory network. Approximating the signaling channel to be Gaussian, we describe the dynamics using Langevin equations. Upon discretization, we calculate the associated second moments for linear and nonlinear regulation of the output by the input, which follows the birth-death process. While mutual information between the input and the output characterizes the channel capacity, the Fano factor of the output gives a clear idea of how internal and external fluctuations assemble at the output level. To quantify the contribution of the present state of the input to predicting the future output, transfer entropy is computed. We find that a higher transfer entropy is accompanied by a greater magnitude of external fluctuations (quantified by the Fano factor of the output) propagating from the input to the output. We notice that a low input population, characterized by the number of signaling molecules S and fluctuating relatively slowly compared to its downstream (target) species X, is maximally able to predict (as quantified by transfer entropy) the future state of the output. Our computations also reveal that with an increasingly linear input-output interaction, all three metrics (mutual information, Fano factor, and transfer entropy) achieve relatively larger magnitudes.
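The Fano factor discussed in the abstract above can be estimated directly from a stochastic simulation of the cascade. Below is a minimal Gillespie sketch of S -> X with linear regulation; the rate constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
ks, mus, kx, mux = 10.0, 1.0, 5.0, 1.0   # birth/death rates (assumed values)
S, X, t, t_end, burn_in = 0, 0, 0.0, 500.0, 50.0
samples = []

while t < t_end:
    rates = np.array([ks, mus * S, kx * S, mux * X])
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    event = rng.choice(4, p=rates / total)
    if event == 0:   S += 1
    elif event == 1: S -= 1
    elif event == 2: X += 1
    else:            X -= 1
    if t > burn_in:
        samples.append(X)

x = np.array(samples)
# event-weighted estimate; a time-weighted average would be more precise
print("Fano factor of X:", x.var() / x.mean())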
2107.13709
Tom Chou
Mingtao Xia, Lucas B\"ottcher, Tom Chou
Controlling epidemics through optimal allocation of test kits and vaccine doses across networks
13 pages, 8 figures, Submitted to IEEE Transactions on Network Science and Engineering
IEEE Trans. Netw. Sci. Eng. 9, 1422-1436 (2022)
10.1109/TNSE.2022.3144624
null
q-bio.PE cs.SI math.OC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Efficient testing and vaccination protocols are critical aspects of epidemic management. To study the optimal allocation of limited testing and vaccination resources in a heterogeneous contact network of interacting susceptible, recovered, and infected individuals, we present a degree-based testing and vaccination model for which we use control-theoretic methods to derive optimal testing and vaccination policies. Within our framework, we find that optimal intervention policies first target high-degree nodes before shifting to lower-degree nodes in a time-dependent manner. Using such optimal policies, it is possible to delay outbreaks and reduce incidence rates to a greater extent than uniform and reinforcement-learning-based interventions, particularly on certain scale-free networks.
[ { "created": "Thu, 29 Jul 2021 02:23:45 GMT", "version": "v1" }, { "created": "Sat, 31 Jul 2021 01:57:19 GMT", "version": "v2" } ]
2022-06-22
[ [ "Xia", "Mingtao", "" ], [ "Böttcher", "Lucas", "" ], [ "Chou", "Tom", "" ] ]
Efficient testing and vaccination protocols are critical aspects of epidemic management. To study the optimal allocation of limited testing and vaccination resources in a heterogeneous contact network of interacting susceptible, recovered, and infected individuals, we present a degree-based testing and vaccination model for which we use control-theoretic methods to derive optimal testing and vaccination policies. Within our framework, we find that optimal intervention policies first target high-degree nodes before shifting to lower-degree nodes in a time-dependent manner. Using such optimal policies, it is possible to delay outbreaks and reduce incidence rates to a greater extent than uniform and reinforcement-learning-based interventions, particularly on certain scale-free networks.
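The "target high-degree nodes first" finding above can be caricatured with a static greedy rule. The paper's actual policies are time-dependent and derived via optimal control; this sketch only captures the hub-first intuition, on an assumed scale-free graph.

```python
import networkx as nx

def allocate_doses(G, susceptible, n_doses):
    """Give a limited number of doses to the highest-degree susceptible nodes."""
    ranked = sorted(susceptible, key=G.degree, reverse=True)
    return set(ranked[:n_doses])

G = nx.barabasi_albert_graph(1000, 3, seed=0)   # assumed contact network
chosen = allocate_doses(G, set(G.nodes), n_doses=50)
print(sorted(G.degree(v) for v in chosen)[-5:])  # the largest hubs get covered
```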
1712.04223
Andrew Francis
Andrew Francis and Vincent Moulton
Identifiability of tree-child phylogenetic networks under a probabilistic recombination-mutation model of evolution
18 pages, 4 figures
Journal of Theoretical Biology, Volume 446, 7 June 2018, Pages 160-167
10.1016/j.jtbi.2018.03.011
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic networks are an extension of phylogenetic trees which are used to represent evolutionary histories in which reticulation events (such as recombination and hybridization) have occurred. A central question for such networks is that of identifiability, which essentially asks: under what circumstances can we reliably identify the phylogenetic network that gave rise to the observed data? Recently, identifiability results have appeared for networks relative to a model of sequence evolution that generalizes the standard Markov models used for phylogenetic trees. However, these results are quite limited in terms of the complexity of the networks that are considered. In this paper, by introducing an alternative probabilistic model for evolution along a network that is based on some ground-breaking work by Thatte for pedigrees, we are able to obtain an identifiability result for a much larger class of phylogenetic networks (essentially the class of so-called tree-child networks). To prove our main theorem, we derive some new results for identifying tree-child networks combinatorially, and then adapt some techniques developed by Thatte for pedigrees to show that our combinatorial results imply identifiability in the probabilistic setting. We hope that the introduction of our new model for networks could lead to new approaches to reliably construct phylogenetic networks.
[ { "created": "Tue, 12 Dec 2017 10:51:45 GMT", "version": "v1" } ]
2018-03-28
[ [ "Francis", "Andrew", "" ], [ "Moulton", "Vincent", "" ] ]
Phylogenetic networks are an extension of phylogenetic trees which are used to represent evolutionary histories in which reticulation events (such as recombination and hybridization) have occurred. A central question for such networks is that of identifiability, which essentially asks: under what circumstances can we reliably identify the phylogenetic network that gave rise to the observed data? Recently, identifiability results have appeared for networks relative to a model of sequence evolution that generalizes the standard Markov models used for phylogenetic trees. However, these results are quite limited in terms of the complexity of the networks that are considered. In this paper, by introducing an alternative probabilistic model for evolution along a network that is based on some ground-breaking work by Thatte for pedigrees, we are able to obtain an identifiability result for a much larger class of phylogenetic networks (essentially the class of so-called tree-child networks). To prove our main theorem, we derive some new results for identifying tree-child networks combinatorially, and then adapt some techniques developed by Thatte for pedigrees to show that our combinatorial results imply identifiability in the probabilistic setting. We hope that the introduction of our new model for networks could lead to new approaches to reliably construct phylogenetic networks.
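For readers unfamiliar with the tree-child condition invoked above: a network is tree-child when every internal vertex has at least one child that is not a reticulation (i.e., a child with in-degree one). A small combinatorial check, sketched under that standard definition:

```python
from collections import defaultdict

def is_tree_child(edges):
    """True if every internal vertex of the DAG has a non-reticulate child."""
    children = defaultdict(list)
    indeg = defaultdict(int)
    for u, v in edges:
        children[u].append(v)
        indeg[v] += 1
    return all(any(indeg[v] <= 1 for v in kids)
               for kids in children.values())

# root r; h is a reticulation with parents a and b (hypothetical labels):
edges = [("r", "a"), ("r", "b"), ("a", "h"), ("b", "h"),
         ("a", "x"), ("b", "y"), ("h", "z")]
print(is_tree_child(edges))  # True: every internal vertex has a tree child
```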
1801.08366
Stephen Coombes
S Coombes, Y-M Lai, M Sayli and R Thul
Networks of piecewise linear neural mass models
null
null
null
null
q-bio.NC math.DS nlin.AO
http://creativecommons.org/licenses/by/4.0/
Neural mass models are ubiquitous in large-scale brain modelling. At the node level they are written in terms of a set of ODEs with a nonlinearity that is typically sigmoidal in shape. Using structural data from brain atlases they may be connected into a network to investigate the emergence of functional dynamic states, such as synchrony. With the simple restriction of the classic sigmoidal nonlinearity to a piecewise linear caricature we show that the famous Wilson-Cowan neural mass model can be analysed at both the node and network level. The construction of periodic orbits at the node level is achieved by patching together matrix exponential solutions, and stability is determined using Floquet theory. For networks with interactions described by circulant matrices, we show that the stability of the synchronous state can be determined in terms of a low-dimensional Floquet problem parameterised by the eigenvalues of the interaction matrix. This network Floquet problem is readily solved using linear algebra, to predict the onset of spatio-temporal network patterns arising from a synchronous instability. We consider the case of a discontinuous choice for the node nonlinearity, namely the replacement of the sigmoid by a Heaviside nonlinearity. This gives rise to a continuous-time switching network. At the node level this allows for the existence of unstable sliding periodic orbits, which we construct. The stability of a periodic orbit is now treated with a modification of Floquet theory to track the evolution of small perturbations through switching manifolds via saltation matrices. At the network level the stability analysis of the synchronous state is considerably more challenging. Here we report on the use of ideas originally developed for the study of Glass networks to treat the stability of periodic network states in neural mass models with discontinuous interactions.
[ { "created": "Thu, 25 Jan 2018 11:55:24 GMT", "version": "v1" } ]
2018-01-26
[ [ "Coombes", "S", "" ], [ "Lai", "Y-M", "" ], [ "Sayli", "M", "" ], [ "Thul", "R", "" ] ]
Neural mass models are ubiquitous in large-scale brain modelling. At the node level they are written in terms of a set of ODEs with a nonlinearity that is typically sigmoidal in shape. Using structural data from brain atlases they may be connected into a network to investigate the emergence of functional dynamic states, such as synchrony. With the simple restriction of the classic sigmoidal nonlinearity to a piecewise linear caricature we show that the famous Wilson-Cowan neural mass model can be analysed at both the node and network level. The construction of periodic orbits at the node level is achieved by patching together matrix exponential solutions, and stability is determined using Floquet theory. For networks with interactions described by circulant matrices, we show that the stability of the synchronous state can be determined in terms of a low-dimensional Floquet problem parameterised by the eigenvalues of the interaction matrix. This network Floquet problem is readily solved using linear algebra, to predict the onset of spatio-temporal network patterns arising from a synchronous instability. We consider the case of a discontinuous choice for the node nonlinearity, namely the replacement of the sigmoid by a Heaviside nonlinearity. This gives rise to a continuous-time switching network. At the node level this allows for the existence of unstable sliding periodic orbits, which we construct. The stability of a periodic orbit is now treated with a modification of Floquet theory to track the evolution of small perturbations through switching manifolds via saltation matrices. At the network level the stability analysis of the synchronous state is considerably more challenging. Here we report on the use of ideas originally developed for the study of Glass networks to treat the stability of periodic network states in neural mass models with discontinuous interactions.
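A minimal numerical sketch of the discontinuous (Heaviside) Wilson-Cowan node described above; all parameter values are illustrative assumptions. Between switching events the flow is linear, which is what makes the matrix-exponential construction of periodic orbits possible; here we simply Euler-step through the switches.

```python
import numpy as np

def f(x, theta=0.5):
    """Heaviside firing rate."""
    return 1.0 if x >= theta else 0.0

# illustrative parameters (assumptions, not taken from the paper)
wEE, wEI, wIE, wII = 10.0, 10.0, 10.0, 2.0
P, Q, tau = -2.0, -4.0, 0.6
u, v, dt = 0.6, 0.0, 1e-3

for step in range(20000):
    du = -u + wEE * f(u) - wEI * f(v) + P
    dv = (-v + wIE * f(u) - wII * f(v) + Q) / tau
    u, v = u + dt * du, v + dt * dv

print(u, v)  # final state of the excitatory/inhibitory (u, v) trajectory
```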
1112.1733
Petter Holme
Sungmin Lee, Petter Holme, Zhi-Xi Wu
Cooperation, structure and hierarchy in multiadaptive games
null
Phys. Rev. E. 84, 061148 (2011)
10.1103/PhysRevE.84.061148
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Game-theoretical models where the rules of the game and the interaction structure both coevolve with the game dynamics -- multiadaptive games -- capture very flexible situations where cooperation among selfish agents can emerge. In this work, we discuss a multiadaptive model presented in a recent Letter [Phys. Rev. Lett. 106, 028702 (2011)], and generalizations of it. The model captures a non-equilibrium situation where social unrest increases the incentive to cooperate and, simultaneously, agents are partly free to influence with whom they interact. First, we investigate the details of how the feedback from the behavior of agents determines the emergence of cooperation and hierarchical contact structures. We also study the stability of the system to different types of noise, and find that different regions of parameter space show very different responses. Some types of noise can destroy an all-cooperator state. If, on the other hand, hubs are stable, then so is the all-C state. Finally, we investigate the dependence on the ratio between the timescales of strategy updates and the evolution of the interaction structure. We find that comparatively fast strategy dynamics is a prerequisite for the emergence of cooperation.
[ { "created": "Wed, 7 Dec 2011 23:30:13 GMT", "version": "v1" } ]
2012-10-10
[ [ "Lee", "Sungmin", "" ], [ "Holme", "Petter", "" ], [ "Wu", "Zhi-Xi", "" ] ]
Game-theoretical models where the rules of the game and the interaction structure both coevolve with the game dynamics -- multiadaptive games -- capture very flexible situations where cooperation among selfish agents can emerge. In this work, we discuss a multiadaptive model presented in a recent Letter [Phys. Rev. Lett. 106, 028702 (2011)], and generalizations of it. The model captures a non-equilibrium situation where social unrest increases the incentive to cooperate and, simultaneously, agents are partly free to influence with whom they interact. First, we investigate the details of how the feedback from the behavior of agents determines the emergence of cooperation and hierarchical contact structures. We also study the stability of the system to different types of noise, and find that different regions of parameter space show very different responses. Some types of noise can destroy an all-cooperator state. If, on the other hand, hubs are stable, then so is the all-C state. Finally, we investigate the dependence on the ratio between the timescales of strategy updates and the evolution of the interaction structure. We find that comparatively fast strategy dynamics is a prerequisite for the emergence of cooperation.
2112.13273
Seyedeh Sajedeh Mousavi Dr
S. Sajedeh Mousavi, Mohammad Jalil Zorriehzahra
High Expression of CDK1 and NDC80 Predicts Poor Prognosis of Bladder Cancer
Talk at the 4th International Biotechnology Congress of the Islamic Republic of Iran (2021)
null
null
null
q-bio.GN q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Background: Bladder cancer is the 10th most common cancer worldwide, and its prevalence is increasing, especially in developing countries. Objective: In the present study, we employed gene expression profiles from the GSE163209 data set in the GEO database to identify potential molecular and genetic markers in BC patients. Methods: The data set comprised 217 samples, with 113 stage Ta tumor tissue samples and 104 stage T1 tumor tissue samples. The top 766 genes were chosen. A P-value < 0.0001 and |logFC| = 1 were used as the cutoff criteria for defining DEGs. Moreover, the MCODE and cytoHubba plugins were employed to produce a module and detect 20 hub genes in these DEGs. We used GO and KEGG pathway enrichment analyses to gain a better understanding of these DEGs. Results: The KEGG pathway enrichment results indicated that the top genes were mainly involved in systemic lupus erythematosus, alcoholism, and viral carcinogenesis. SLE activation in the renal glomeruli could explain the connection between this disease pathway and bladder cancer, and according to our results and previous research, heavy alcohol intake can increase the risk of BC in males and in particular populations. Conclusion: Based on our hub genes, we can consider CDK1 and NDC80 as bladder cancer biomarkers. Little research has been done on the effect of these genes on bladder cancer.
[ { "created": "Sat, 25 Dec 2021 19:25:47 GMT", "version": "v1" } ]
2021-12-28
[ [ "Mousavi", "S. Sajedeh", "" ], [ "Zorriehzahra", "Mohammad Jalil", "" ] ]
Background: Bladder cancer is the 10th most common cancer worldwide, and its prevalence is increasing, especially in developing countries. Objective: In the present study, we employed gene expression profiles from the GSE163209 data set in the GEO database to identify potential molecular and genetic markers in BC patients. Methods: The data set comprised 217 samples, with 113 stage Ta tumor tissue samples and 104 stage T1 tumor tissue samples. The top 766 genes were chosen. A P-value < 0.0001 and |logFC| = 1 were used as the cutoff criteria for defining DEGs. Moreover, the MCODE and cytoHubba plugins were employed to produce a module and detect 20 hub genes in these DEGs. We used GO and KEGG pathway enrichment analyses to gain a better understanding of these DEGs. Results: The KEGG pathway enrichment results indicated that the top genes were mainly involved in systemic lupus erythematosus, alcoholism, and viral carcinogenesis. SLE activation in the renal glomeruli could explain the connection between this disease pathway and bladder cancer, and according to our results and previous research, heavy alcohol intake can increase the risk of BC in males and in particular populations. Conclusion: Based on our hub genes, we can consider CDK1 and NDC80 as bladder cancer biomarkers. Little research has been done on the effect of these genes on bladder cancer.
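The DEG cutoff quoted in the Methods translates into a one-line filter. A sketch with pandas; the column names follow a typical limma/GEO2R-style table and are assumptions, as are the toy values.

```python
import pandas as pd

df = pd.DataFrame({
    "gene":    ["CDK1", "NDC80", "GAPDH"],
    "P.Value": [1e-6,   5e-5,    0.3],
    "logFC":   [1.8,    -1.2,    0.1],
})
degs = df[(df["P.Value"] < 0.0001) & (df["logFC"].abs() >= 1)]
print(degs["gene"].tolist())  # ['CDK1', 'NDC80']
```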
1310.5202
Omur Arslan
Omur Arslan, Dan P. Guralnik and Daniel E. Koditschek
Discriminative Measures for Comparison of Phylogenetic Trees
24 pages, 7 figures, 1 table, a new graph-theoretic formulation of the NNI navigation dissimilarity
null
null
null
q-bio.PE cs.CE cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we introduce and study three new measures for efficient discriminative comparison of phylogenetic trees. The NNI navigation dissimilarity $d_{nav}$ counts the steps along a "combing" of the Nearest Neighbor Interchange (NNI) graph of binary hierarchies, providing an efficient approximation to the (NP-hard) NNI distance in terms of "edit length". At the same time, a closed form formula for $d_{nav}$ presents it as a weighted count of pairwise incompatibilities between clusters, lending it the character of an edge dissimilarity measure as well. A relaxation of this formula to a simple count yields another measure on all trees --- the crossing dissimilarity $d_{CM}$. Both dissimilarities are symmetric and positive definite (vanish only between identical trees) on binary hierarchies but they fail to satisfy the triangle inequality. Nevertheless, both are bounded below by the widely used Robinson-Foulds metric and bounded above by a closely related true metric, the cluster-cardinality metric $d_{CC}$. We show that each of the three proposed new dissimilarities is computable in time $O(n^2)$ in the number of leaves $n$, and conclude the paper with a brief numerical exploration of the distribution over tree space of these dissimilarities in comparison with the Robinson-Foulds metric and the more recently introduced matching-split distance.
[ { "created": "Sat, 19 Oct 2013 05:03:23 GMT", "version": "v1" }, { "created": "Tue, 20 Oct 2015 14:10:43 GMT", "version": "v2" } ]
2015-10-21
[ [ "Arslan", "Omur", "" ], [ "Guralnik", "Dan P.", "" ], [ "Koditschek", "Daniel E.", "" ] ]
In this paper we introduce and study three new measures for efficient discriminative comparison of phylogenetic trees. The NNI navigation dissimilarity $d_{nav}$ counts the steps along a "combing" of the Nearest Neighbor Interchange (NNI) graph of binary hierarchies, providing an efficient approximation to the (NP-hard) NNI distance in terms of "edit length". At the same time, a closed form formula for $d_{nav}$ presents it as a weighted count of pairwise incompatibilities between clusters, lending it the character of an edge dissimilarity measure as well. A relaxation of this formula to a simple count yields another measure on all trees --- the crossing dissimilarity $d_{CM}$. Both dissimilarities are symmetric and positive definite (vanish only between identical trees) on binary hierarchies but they fail to satisfy the triangle inequality. Nevertheless, both are bounded below by the widely used Robinson-Foulds metric and bounded above by a closely related true metric, the cluster-cardinality metric $d_{CC}$. We show that each of the three proposed new dissimilarities is computable in time $O(n^2)$ in the number of leaves $n$, and conclude the paper with a brief numerical exploration of the distribution over tree space of these dissimilarities in comparison with the Robinson-Foulds metric and the more recently introduced matching-split distance.
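For context, the Robinson-Foulds metric that bounds the new dissimilarities from below can be computed from rooted trees encoded as sets of clusters. A sketch of that standard measure (not of $d_{nav}$, $d_{CM}$, or $d_{CC}$ themselves):

```python
def robinson_foulds(clusters_a, clusters_b):
    """Number of clusters present in exactly one of the two trees."""
    return len(clusters_a ^ clusters_b)

# each cluster is the frozenset of leaf labels below an internal vertex
t1 = {frozenset("ab"), frozenset("abc"), frozenset("abcd")}
t2 = {frozenset("ab"), frozenset("abd"), frozenset("abcd")}
print(robinson_foulds(t1, t2))  # 2: the clusters {a,b,c} and {a,b,d} disagree
```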
2307.03415
Oliver Eales
Oliver Eales, Steven Riley
Differences between the true reproduction number and the apparent reproduction number of an epidemic time series
null
null
null
null
q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
The time-varying reproduction number $R(t)$ measures the number of new infections per infectious individual and is closely correlated with the time series of infection incidence by definition. The timings of actual infections are rarely known, and analysis of epidemics usually relies on time series data for other outcomes such as symptom onset. A common implicit assumption, when estimating $R(t)$ from an epidemic time series, is that $R(t)$ has the same relationship with these downstream outcomes as it does with the time series of incidence. However, this assumption is unlikely to be valid given that most epidemic time series are not perfect proxies of incidence. Rather they represent convolutions of incidence with uncertain delay distributions. Here we define the apparent time-varying reproduction number, $R_A(t)$, the reproduction number calculated from a downstream epidemic time series, and demonstrate how differences between $R_A(t)$ and $R(t)$ depend on the convolution function. The mean of the convolution function sets a time offset between the two signals, whilst the variance of the convolution function introduces a relative distortion between them. We present the convolution functions of epidemic time series that were available during the SARS-CoV-2 pandemic. Infection prevalence, measured by random sampling studies, presents fewer biases than other epidemic time series. Here we show that, additionally, the mean and variance of its convolution function were similar to those obtained from traditional surveillance based on mass testing, and could be reduced using more frequent testing or stricter thresholds for positivity. Infection prevalence studies continue to be a versatile tool for tracking the temporal trends of $R(t)$, and with additional refinements to their study protocol, will be of even greater utility during any future epidemics or pandemics.
[ { "created": "Fri, 7 Jul 2023 06:53:25 GMT", "version": "v1" } ]
2023-07-10
[ [ "Eales", "Oliver", "" ], [ "Riley", "Steven", "" ] ]
The time-varying reproduction number $R(t)$ measures the number of new infections per infectious individual and is closely correlated with the time series of infection incidence by definition. The timings of actual infections are rarely known, and analysis of epidemics usually relies on time series data for other outcomes such as symptom onset. A common implicit assumption, when estimating $R(t)$ from an epidemic time series, is that $R(t)$ has the same relationship with these downstream outcomes as it does with the time series of incidence. However, this assumption is unlikely to be valid given that most epidemic time series are not perfect proxies of incidence. Rather they represent convolutions of incidence with uncertain delay distributions. Here we define the apparent time-varying reproduction number, $R_A(t)$, the reproduction number calculated from a downstream epidemic time series, and demonstrate how differences between $R_A(t)$ and $R(t)$ depend on the convolution function. The mean of the convolution function sets a time offset between the two signals, whilst the variance of the convolution function introduces a relative distortion between them. We present the convolution functions of epidemic time series that were available during the SARS-CoV-2 pandemic. Infection prevalence, measured by random sampling studies, presents fewer biases than other epidemic time series. Here we show that, additionally, the mean and variance of its convolution function were similar to those obtained from traditional surveillance based on mass testing, and could be reduced using more frequent testing or stricter thresholds for positivity. Infection prevalence studies continue to be a versatile tool for tracking the temporal trends of $R(t)$, and with additional refinements to their study protocol, will be of even greater utility during any future epidemics or pandemics.
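The distinction between $R(t)$ and $R_A(t)$ can be reproduced numerically: generate incidence from a renewal equation, convolve it with a delay distribution, and apply the same renewal estimator to both series. In the sketch below the generation-interval and delay distributions are assumptions chosen for illustration.

```python
import numpy as np

T = 120
w = np.array([0.2, 0.5, 0.3])           # generation-interval pmf (assumed)
delay = np.array([0.1, 0.3, 0.4, 0.2])  # infection-to-report delay (assumed)

R_true = 1.5 - 0.01 * np.arange(T)      # slowly declining R(t)
I = np.zeros(T)
I[:3] = 10.0
for t in range(3, T):
    I[t] = R_true[t] * (w @ I[[t - 1, t - 2, t - 3]])

O = np.convolve(I, delay)[:T]           # observed, delayed time series

def renewal_R(x):
    """Renewal estimate R(t) = x(t) / sum_s w(s) x(t-s)."""
    return np.array([x[t] / (w @ x[[t - 1, t - 2, t - 3]]) for t in range(3, T)])

print("max |R(t) - R_A(t)|:", np.abs(renewal_R(I) - renewal_R(O)).max())
```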
0806.3823
Francois Bonneton
Fran\c{c}ois Bonneton (IGFL), Arnaud Chaumot (UR BELY), Vincent Laudet (IGFL)
Annotation of Tribolium nuclear receptors reveals an evolutionary overacceleration of a network controlling the ecdysone cascade
null
Insect Biochemistry and Molecular Biology 4, 38 (2008) 416-429
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Tribolium genome contains 21 nuclear receptors, representing all of the six known subfamilies. When compared to other species, this first complete set for a Coleoptera reveals a strong conservation of the number and identity of nuclear receptors in holometabolous insects. Two novelties are observed: the atypical NR0 gene knirps is present only in brachyceran flies, while the NR2E6 gene is found only in Tribolium and in Apis. Using a quantitative analysis of the evolutionary rate, we discovered that nuclear receptors could be divided into two groups. In one group of 13 proteins, the rates follow the trend of the Mecopterida genome-wide acceleration. In a second group of five nuclear receptors, all acting together at the top of the ecdysone cascade, we observed an overacceleration of the evolutionary rate during the early divergence of Mecopterida. We thus extended our analysis to the twelve classic ecdysone transcriptional regulators and found that six of them (ECR, USP, HR3, E75, HR4 and Kr-h1) underwent an overacceleration at the base of the Mecopterida lineage. By contrast, E74, E93, BR, HR39, FTZ-F1 and E78 do not show this divergence. We suggest that coevolution occurred within a network of regulators that control the ecdysone cascade. The advent of Tribolium as a powerful model should allow a better understanding of this evolution.
[ { "created": "Tue, 24 Jun 2008 06:20:29 GMT", "version": "v1" } ]
2008-12-18
[ [ "Bonneton", "François", "", "IGFL" ], [ "Chaumot", "Arnaud", "", "UR BELY" ], [ "Laudet", "Vincent", "", "IGFL" ] ]
The Tribolium genome contains 21 nuclear receptors, representing all of the six known subfamilies. When compared to other species, this first complete set for a Coleoptera reveals a strong conservation of the number and identity of nuclear receptors in holometabolous insects. Two novelties are observed: the atypical NR0 gene knirps is present only in brachyceran flies, while the NR2E6 gene is found only in Tribolium and in Apis. Using a quantitative analysis of the evolutionary rate, we discovered that nuclear receptors could be divided into two groups. In one group of 13 proteins, the rates follow the trend of the Mecopterida genome-wide acceleration. In a second group of five nuclear receptors, all acting together at the top of the ecdysone cascade, we observed an overacceleration of the evolutionary rate during the early divergence of Mecopterida. We thus extended our analysis to the twelve classic ecdysone transcriptional regulators and found that six of them (ECR, USP, HR3, E75, HR4 and Kr-h1) underwent an overacceleration at the base of the Mecopterida lineage. By contrast, E74, E93, BR, HR39, FTZ-F1 and E78 do not show this divergence. We suggest that coevolution occurred within a network of regulators that control the ecdysone cascade. The advent of Tribolium as a powerful model should allow a better understanding of this evolution.
1907.01681
Fabio Chalub
Fabio A. C. C. Chalub and L\'eonard Monsaingeon and Ana Margarida Ribeiro and Max O. Souza
Gradient flow formulations of discrete and continuous evolutionary models: a unifying perspective
null
null
null
null
q-bio.PE math.AP math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider three classical models of biological evolution: (i) the Moran process, an example of a reducible Markov Chain; (ii) the Kimura Equation, a particular case of a degenerate Fokker-Planck Diffusion; (iii) the Replicator Equation, a paradigm in Evolutionary Game Theory. While these approaches are not completely equivalent, they are intimately connected, since (ii) is the diffusion approximation of (i), and (iii) is obtained from (ii) in an appropriate limit. It is well known that the Replicator Dynamics for two strategies is a gradient flow with respect to the celebrated Shahshahani distance. We reformulate the Moran process and the Kimura Equation as gradient flows, and in the sequel we discuss conditions such that the associated gradient structures converge: (i) to (ii) and (ii) to (iii). This provides a geometric characterisation of these evolutionary processes and a reformulation of the above examples as time minimization of free energy functionals.
[ { "created": "Tue, 2 Jul 2019 23:37:47 GMT", "version": "v1" }, { "created": "Thu, 8 Oct 2020 15:53:44 GMT", "version": "v2" } ]
2020-10-09
[ [ "Chalub", "Fabio A. C. C.", "" ], [ "Monsaingeon", "Léonard", "" ], [ "Ribeiro", "Ana Margarida", "" ], [ "Souza", "Max O.", "" ] ]
We consider three classical models of biological evolution: (i) the Moran process, an example of a reducible Markov Chain; (ii) the Kimura Equation, a particular case of a degenerate Fokker-Planck Diffusion; (iii) the Replicator Equation, a paradigm in Evolutionary Game Theory. While these approaches are not completely equivalent, they are intimately connected, since (ii) is the diffusion approximation of (i), and (iii) is obtained from (ii) in an appropriate limit. It is well known that the Replicator Dynamics for two strategies is a gradient flow with respect to the celebrated Shahshahani distance. We reformulate the Moran process and the Kimura Equation as gradient flows, and in the sequel we discuss conditions such that the associated gradient structures converge: (i) to (ii) and (ii) to (iii). This provides a geometric characterisation of these evolutionary processes and a reformulation of the above examples as time minimization of free energy functionals.
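The two-strategy replicator dynamics mentioned above reduces to the scalar ODE $\dot{x} = x(1-x)(f_1(x) - f_2(x))$, the Shahshahani gradient flow for potential games. A minimal sketch with an assumed coordination-game payoff matrix:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])  # assumed coordination-game payoffs

def replicator_step(x, dt=1e-2):
    f1 = A[0, 0] * x + A[0, 1] * (1 - x)   # payoff of strategy 1
    f2 = A[1, 0] * x + A[1, 1] * (1 - x)   # payoff of strategy 2
    return x + dt * x * (1 - x) * (f1 - f2)

x = 0.4
for _ in range(5000):
    x = replicator_step(x)
print(round(x, 4))  # tends to 1: the basin boundary sits at x = 1/3 here
```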
1809.03866
Divine Wanduku (Dr. )
Divine Wanduku
The stochastic permanence of malaria, and the existence of a stationary distribution for a class of malaria models
International Journal of Biomathematics, 2020. arXiv admin note: text overlap with arXiv:1808.09842
null
10.1142/S1793524520500242
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper investigates the stochastic permanence of malaria and the existence of a stationary distribution for the stochastic process describing the disease dynamics over a sufficiently long time. The malaria system is highly random, with fluctuations from the disease transmission and natural death rates, which are expressed as independent white noise processes in a family of stochastic differential equation epidemic models. Other sources of variability in the malaria dynamics are the random incubation and naturally acquired immunity periods of malaria. Improved analytical techniques and local martingale characterizations are applied to describe the character of the sample paths of the solution process of the system in the neighborhood of an endemic equilibrium. Emphasis is placed on examining the impacts of (1) the sources of variability, namely the disease transmission and natural death rates, and (2) the intensities of the white noise processes in the system on the stochastic permanence of malaria, and on the existence of the stationary distribution for the solution process over a sufficiently long time. Numerical simulation examples are presented to illuminate the persistence and stochastic permanence of malaria, and to numerically approximate the stationary distribution of the states of the solution process.
[ { "created": "Sat, 8 Sep 2018 18:15:41 GMT", "version": "v1" } ]
2020-05-05
[ [ "Wanduku", "Divine", "" ] ]
This paper investigates the stochastic permanence of malaria and the existence of a stationary distribution for the stochastic process describing the disease dynamics over a sufficiently long time. The malaria system is highly random, with fluctuations from the disease transmission and natural death rates, which are expressed as independent white noise processes in a family of stochastic differential equation epidemic models. Other sources of variability in the malaria dynamics are the random incubation and naturally acquired immunity periods of malaria. Improved analytical techniques and local martingale characterizations are applied to describe the character of the sample paths of the solution process of the system in the neighborhood of an endemic equilibrium. Emphasis is placed on examining the impacts of (1) the sources of variability, namely the disease transmission and natural death rates, and (2) the intensities of the white noise processes in the system on the stochastic permanence of malaria, and on the existence of the stationary distribution for the solution process over a sufficiently long time. Numerical simulation examples are presented to illuminate the persistence and stochastic permanence of malaria, and to numerically approximate the stationary distribution of the states of the solution process.
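An Euler-Maruyama sketch of the kind of stochastic epidemic SDE studied above, with white noise perturbing the transmission term of a toy SIS-type model; the model form and all parameters are illustrative assumptions, not the paper's malaria system.

```python
import numpy as np

rng = np.random.default_rng(7)
beta, gamma, sigma = 0.6, 0.3, 0.15   # transmission, recovery, noise intensity
dt, n_steps = 1e-2, 50_000
I = 0.05                              # initial infected fraction
path = np.empty(n_steps)
for k in range(n_steps):
    drift = beta * I * (1 - I) - gamma * I
    noise = sigma * I * (1 - I) * np.sqrt(dt) * rng.standard_normal()
    I = min(max(I + dt * drift + noise, 0.0), 1.0)
    path[k] = I
print("long-run mean infected fraction:", path[n_steps // 2:].mean())
```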
0710.5625
Daniel Rockmore
Gregory Leibon, Daniel Rockmore, Martin Pollak
A simple computational method for the identification of disease-associated loci in complex, incomplete pedigrees
20 pages, 9 figures
null
null
null
q-bio.GN q-bio.QM
null
We present an approach, called the "Shadow Method," for the identification of disease loci from dense genetic marker maps in complex, potentially incomplete pedigrees. "Shadow" is a simple method based on an analysis of the patterns of obligate meiotic recombination events in genotypic data. This method can be applied to any high-density marker map and was specifically designed to exploit the fact that extremely dense marker maps are becoming more readily available. We also describe how to interpret the results and associate meaningful P-values with them. Shadow has significant advantages over traditional parametric linkage analysis methods in that it can be readily applied even in cases in which the topology of a pedigree or pedigrees can only be partially determined. In addition, Shadow is robust to variability in a range of parameters and in particular does not require prior knowledge of mode of inheritance, penetrance or clinical misdiagnosis rate. Shadow can be used for any SNP data, but is especially effective when applied to dense samplings. Our primary example uses data from Affymetrix 100k SNPChip samples in which we illustrate our approach by analyzing simulated data as well as genome-wide SNP data from two pedigrees with inherited forms of kidney failure, one of which is compared with a typical LOD score analysis.
[ { "created": "Tue, 30 Oct 2007 02:27:11 GMT", "version": "v1" } ]
2007-10-31
[ [ "Leibon", "Gregory", "" ], [ "Rockmore", "Daniel", "" ], [ "Pollak", "Martin", "" ] ]
We present an approach, called the "Shadow Method," for the identification of disease loci from dense genetic marker maps in complex, potentially incomplete pedigrees. "Shadow" is a simple method based on an analysis of the patterns of obligate meiotic recombination events in genotypic data. This method can be applied to any high-density marker map and was specifically designed to exploit the fact that extremely dense marker maps are becoming more readily available. We also describe how to interpret the results and associate meaningful P-values with them. Shadow has significant advantages over traditional parametric linkage analysis methods in that it can be readily applied even in cases in which the topology of a pedigree or pedigrees can only be partially determined. In addition, Shadow is robust to variability in a range of parameters and in particular does not require prior knowledge of mode of inheritance, penetrance or clinical misdiagnosis rate. Shadow can be used for any SNP data, but is especially effective when applied to dense samplings. Our primary example uses data from Affymetrix 100k SNPChip samples in which we illustrate our approach by analyzing simulated data as well as genome-wide SNP data from two pedigrees with inherited forms of kidney failure, one of which is compared with a typical LOD score analysis.
2211.03051
Christopher Fusco
Christopher Fusco, Angel Allen
Multilayer Perceptron Network Discriminates Larval Zebrafish Genotype using Behaviour
Preprint
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Zebrafish are a common model organism used to identify new disease therapeutics. High-throughput drug screens can be performed on larval zebrafish in multi-well plates by observing changes in behaviour following a treatment. Analysis of this behaviour can be difficult, however, due to the high dimensionality of the data obtained. Statistical analysis of individual statistics (such as the distance travelled) is generally not powerful enough to detect meaningful differences between treatment groups. Here, we propose a method for classifying zebrafish models of Parkinson's disease by genotype at 5 days old. Using a set of 2D behavioural features, we train a multi-layer perceptron neural network. We further show that the use of integrated gradients can give insight into the impact of each behaviour feature on genotype classifications by the model. In this way, we provide a novel pipeline for classifying zebrafish larvae, beginning with feature preparation and ending with an impact analysis of said features.
[ { "created": "Sun, 6 Nov 2022 07:36:31 GMT", "version": "v1" }, { "created": "Tue, 8 Nov 2022 01:48:45 GMT", "version": "v2" } ]
2022-11-09
[ [ "Fusco", "Christopher", "" ], [ "Allen", "Angel", "" ] ]
Zebrafish are a common model organism used to identify new disease therapeutics. High-throughput drug screens can be performed on larval zebrafish in multi-well plates by observing changes in behaviour following a treatment. Analysis of this behaviour can be difficult, however, due to the high dimensionality of the data obtained. Statistical analysis of individual statistics (such as the distance travelled) is generally not powerful enough to detect meaningful differences between treatment groups. Here, we propose a method for classifying zebrafish models of Parkinson's disease by genotype at 5 days old. Using a set of 2D behavioural features, we train a multi-layer perceptron neural network. We further show that the use of integrated gradients can give insight into the impact of each behaviour feature on genotype classifications by the model. In this way, we provide a novel pipeline for classifying zebrafish larvae, beginning with feature preparation and ending with an impact analysis of said features.
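A minimal PyTorch sketch of the pipeline's two ingredients, an MLP over behavioural features and integrated-gradients attribution; the feature count, architecture, and baseline choice are assumptions for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(),
                      nn.Linear(32, 2))           # 2 genotype classes (assumed)

def integrated_gradients(model, x, target, steps=64):
    """Riemann approximation of integrated gradients from a zero baseline."""
    baseline = torch.zeros_like(x)
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    points = baseline + alphas * (x - baseline)   # straight-line path
    points.requires_grad_(True)
    logits = model(points)[:, target]
    grads = torch.autograd.grad(logits.sum(), points)[0]
    return (x - baseline) * grads.mean(dim=0)

x = torch.randn(10)                               # one feature vector (stand-in)
attribution = integrated_gradients(model, x, target=1)
print(attribution.shape)  # one attribution score per behavioural feature
```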
2005.12513
Fernanda Ribeiro
Fernanda L. Ribeiro, Steffen Bollmann, Alexander M. Puckett
DeepRetinotopy: Predicting the Functional Organization of Human Visual Cortex from Structural MRI Data using Geometric Deep Learning
null
null
10.1101/2020.02.11.934471
MIDL/2020/ExtendedAbstract/Nw_trRFjPE
q-bio.NC cs.CV cs.LG q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Whether it be in a man-made machine or a biological system, form and function are often directly related. In the latter, however, this particular relationship is often unclear due to the intricate nature of biology. Here we developed a geometric deep learning model capable of exploiting the actual structure of the cortex to learn the complex relationship between brain function and anatomy from structural and functional MRI data. Our model was not only able to predict the functional organization of human visual cortex from anatomical properties alone, but it was also able to predict nuanced variations across individuals.
[ { "created": "Tue, 26 May 2020 04:54:31 GMT", "version": "v1" } ]
2020-05-27
[ [ "Ribeiro", "Fernanda L.", "" ], [ "Bollmann", "Steffen", "" ], [ "Puckett", "Alexander M.", "" ] ]
Whether it be in a man-made machine or a biological system, form and function are often directly related. In the latter, however, this particular relationship is often unclear due to the intricate nature of biology. Here we developed a geometric deep learning model capable of exploiting the actual structure of the cortex to learn the complex relationship between brain function and anatomy from structural and functional MRI data. Our model was not only able to predict the functional organization of human visual cortex from anatomical properties alone, but it was also able to predict nuanced variations across individuals.
2307.09169
Zihan Liu
Zihan Liu, Jiaqi Wang, Yun Luo, Shuang Zhao, Wenbin Li, Stan Z. Li
Efficient Prediction of Peptide Self-assembly through Sequential and Graphical Encoding
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, there has been an explosion of research on the application of deep learning to the prediction of various peptide properties, due to the significant development and market potential of peptides. Molecular dynamics has enabled the efficient collection of large peptide datasets, providing reliable training data for deep learning. However, the lack of systematic analysis of the peptide encoding, which is essential for AI-assisted peptide-related tasks, makes it an urgent problem to be solved for the improvement of prediction accuracy. To address this issue, we first collect a high-quality, colossal simulation dataset of peptide self-assembly containing over 62,000 samples generated by coarse-grained molecular dynamics (CGMD). Then, we systematically investigate the effect of peptide encoding of amino acids into sequences and molecular graphs using state-of-the-art sequential (i.e., RNN, LSTM, and Transformer) and structural deep learning models (i.e., GCN, GAT, and GraphSAGE), on the accuracy of peptide self-assembly prediction, an essential physicochemical process prior to any peptide-related applications. Extensive benchmarking studies have proven Transformer to be the most powerful sequence-encoding-based deep learning model, pushing the limit of peptide self-assembly prediction to decapeptides. In summary, this work provides a comprehensive benchmark analysis of peptide encoding with advanced deep learning models, serving as a guide for a wide range of peptide-related predictions such as isoelectric points, hydration free energy, etc.
[ { "created": "Mon, 17 Jul 2023 00:43:33 GMT", "version": "v1" } ]
2023-07-19
[ [ "Liu", "Zihan", "" ], [ "Wang", "Jiaqi", "" ], [ "Luo", "Yun", "" ], [ "Zhao", "Shuang", "" ], [ "Li", "Wenbin", "" ], [ "Li", "Stan Z.", "" ] ]
In recent years, there has been an explosion of research on the application of deep learning to the prediction of various peptide properties, due to the significant development and market potential of peptides. Molecular dynamics has enabled the efficient collection of large peptide datasets, providing reliable training data for deep learning. However, the lack of systematic analysis of the peptide encoding, which is essential for AI-assisted peptide-related tasks, makes it an urgent problem to be solved for the improvement of prediction accuracy. To address this issue, we first collect a high-quality, colossal simulation dataset of peptide self-assembly containing over 62,000 samples generated by coarse-grained molecular dynamics (CGMD). Then, we systematically investigate the effect of peptide encoding of amino acids into sequences and molecular graphs using state-of-the-art sequential (i.e., RNN, LSTM, and Transformer) and structural deep learning models (i.e., GCN, GAT, and GraphSAGE), on the accuracy of peptide self-assembly prediction, an essential physicochemical process prior to any peptide-related applications. Extensive benchmarking studies have proven Transformer to be the most powerful sequence-encoding-based deep learning model, pushing the limit of peptide self-assembly prediction to decapeptides. In summary, this work provides a comprehensive benchmark analysis of peptide encoding with advanced deep learning models, serving as a guide for a wide range of peptide-related predictions such as isoelectric points, hydration free energy, etc.
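The sequential side of the encoding comparison above starts from something like a one-hot amino-acid encoding, which an RNN/LSTM/Transformer then consumes. An illustrative sketch, not the paper's code:

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"          # the 20 standard amino acids
IDX = {a: i for i, a in enumerate(AA)}

def one_hot(seq):
    """Encode a peptide as a (length x 20) one-hot array."""
    x = np.zeros((len(seq), len(AA)), dtype=np.float32)
    for pos, aa in enumerate(seq):
        x[pos, IDX[aa]] = 1.0
    return x

print(one_hot("FFKLVFF").shape)      # (7, 20) for an example heptapeptide
```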
2010.12857
William McCorkindale Mr.
William McCorkindale, Carl Poelking, Alpha A. Lee
Investigating 3D Atomic Environments for Enhanced QSAR
null
null
null
null
q-bio.QM cs.LG physics.comp-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Predicting bioactivity and physical properties of molecules is a longstanding challenge in drug design. Most approaches use molecular descriptors based on a 2D representation of molecules as a graph of atoms and bonds, abstracting away the molecular shape. A difficulty in accounting for 3D shape is in designing molecular descriptors that can precisely capture molecular shape while remaining invariant to rotations/translations. We describe a novel alignment-free 3D QSAR method using Smooth Overlap of Atomic Positions (SOAP), a well-established formalism developed for interpolating potential energy surfaces. We show that this approach rigorously describes local 3D atomic environments to compare molecular shapes in a principled manner. This method performs competitively with traditional fingerprint-based approaches as well as state-of-the-art graph neural networks on pIC$_{50}$ ligand-binding prediction in both random and scaffold split scenarios. We illustrate the utility of SOAP descriptors by showing that its inclusion in ensembling diverse representations statistically improves performance, demonstrating that incorporating 3D atomic environments could lead to enhanced QSAR for cheminformatics.
[ { "created": "Sat, 24 Oct 2020 10:04:48 GMT", "version": "v1" } ]
2020-10-27
[ [ "McCorkindale", "William", "" ], [ "Poelking", "Carl", "" ], [ "Lee", "Alpha A.", "" ] ]
Predicting bioactivity and physical properties of molecules is a longstanding challenge in drug design. Most approaches use molecular descriptors based on a 2D representation of molecules as a graph of atoms and bonds, abstracting away the molecular shape. A difficulty in accounting for 3D shape is in designing molecular descriptors that can precisely capture molecular shape while remaining invariant to rotations/translations. We describe a novel alignment-free 3D QSAR method using Smooth Overlap of Atomic Positions (SOAP), a well-established formalism developed for interpolating potential energy surfaces. We show that this approach rigorously describes local 3D atomic environments to compare molecular shapes in a principled manner. This method performs competitively with traditional fingerprint-based approaches as well as state-of-the-art graph neural networks on pIC$_{50}$ ligand-binding prediction in both random and scaffold split scenarios. We illustrate the utility of SOAP descriptors by showing that its inclusion in ensembling diverse representations statistically improves performance, demonstrating that incorporating 3D atomic environments could lead to enhanced QSAR for cheminformatics.
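A sketch of the alignment-free workflow described above, assuming per-atom SOAP vectors have already been computed (e.g., with a descriptor library such as dscribe): averaging atomic environments yields one rotation-invariant molecular vector, which then feeds a ridge regressor. The arrays below are random stand-ins, not real descriptors or pIC50 data.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_mols, n_atoms, dim = 200, 30, 64
per_atom = rng.normal(size=(n_mols, n_atoms, dim))  # stand-in SOAP vectors
X = per_atom.mean(axis=1)                           # one vector per molecule
y = X @ rng.normal(size=dim) + 0.1 * rng.normal(size=n_mols)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("R^2 on held-out molecules:", round(model.score(X_te, y_te), 3))
```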
1412.0291
Viola Priesemann
Michael Wibral, Joseph T. Lizier, Viola Priesemann
Bits from Biology for Computational Intelligence
null
Frontiers in Robotics and AI, 2:5 (2015)
10.3389/frobt.2015.00005
null
q-bio.NC cs.IT math.IT physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational intelligence is broadly defined as biologically-inspired computing. Usually, inspiration is drawn from neural systems. This article shows how to analyze neural systems using information theory to obtain constraints that help identify the algorithms run by such systems and the information they represent. Algorithms and representations identified information-theoretically may then guide the design of biologically inspired computing systems (BICS). The material covered includes the necessary introduction to information theory and the estimation of information theoretic quantities from neural data. We then show how to analyze the information encoded in a system about its environment, and also discuss recent methodological developments on the question of how much information each agent carries about the environment either uniquely, or redundantly or synergistically together with others. Last, we introduce the framework of local information dynamics, where information processing is decomposed into component processes of information storage, transfer, and modification -- locally in space and time. We close by discussing example applications of these measures to neural data and other complex systems.
[ { "created": "Sun, 30 Nov 2014 21:47:15 GMT", "version": "v1" } ]
2018-05-11
[ [ "Wibral", "Michael", "" ], [ "Lizier", "Joseph T.", "" ], [ "Priesemann", "Viola", "" ] ]
Computational intelligence is broadly defined as biologically-inspired computing. Usually, inspiration is drawn from neural systems. This article shows how to analyze neural systems using information theory to obtain constraints that help identify the algorithms run by such systems and the information they represent. Algorithms and representations identified information-theoretically may then guide the design of biologically inspired computing systems (BICS). The material covered includes the necessary introduction to information theory and the estimation of information theoretic quantities from neural data. We then show how to analyze the information encoded in a system about its environment, and also discuss recent methodological developments on the question of how much information each agent carries about the environment either uniquely, or redundantly or synergistically together with others. Last, we introduce the framework of local information dynamics, where information processing is decomposed into component processes of information storage, transfer, and modification -- locally in space and time. We close by discussing example applications of these measures to neural data and other complex systems.
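Transfer entropy, one of the information-dynamics quantities surveyed above, admits a short plug-in estimator for discrete series. The sketch below uses binary states and history length one; real analyses need longer histories and bias correction.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in TE_{Y->X} in bits, history length 1."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))
    pairs_xx = Counter(zip(x[1:], x[:-1]))
    pairs_xy = Counter(zip(x[:-1], y[:-1]))
    marg_x = Counter(x[:-1])
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_xy = c / pairs_xy[(x0, y0)]
        p_cond_x = pairs_xx[(x1, x0)] / marg_x[x0]
        te += p_joint * np.log2(p_cond_xy / p_cond_x)
    return te

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 10_000)
x = np.roll(y, 1)                  # x copies y with a one-step lag
print(transfer_entropy(x, y))      # ~1 bit: y's present determines x's next state
```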
1408.0463
Peter O. Fedichev
Valeria Kogan, Ivan Molodtcov, Leonid I. Menshikov, Robert J. Shmookler Reis and Peter Fedichev
Stability analysis of a model gene network links aging, stress resistance, and negligible senescence
8 pages, 2 figures
Scientific Reports 5, Article number: 13589 (2015)
10.1038/srep13589
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several animal species are considered to exhibit what is called negligible senescence, i.e. they do not show signs of functional decline or any increase of mortality with age, and do not have measurable reductions in reproductive capacity with age. Recent studies in Naked Mole Rat (NMR) and long-lived sea urchin showed that the level of gene expression changes with age is lower than in other organisms. These phenotypic observations correlate well with exceptional endurance of NMR tissues to various genotoxic stresses. Therefore, the lifelong transcriptional stability of an organism may be a key determinant of longevity. However, the exact relation between genetic network stability, stress-resistance and aging has not been defined. We analyze the stability of a simple genetic-network model of a living organism under the influence of external and endogenous factors. We demonstrate that under most common circumstances a gene network is inherently unstable and suffers from exponential accumulation of gene-regulation deviations leading to death. However, should the repair systems be sufficiently effective, the gene network can stabilize so that gene damage remains constrained along with mortality of the organism, which may then enjoy a remarkable degree of stability over very long times. We clarify the relation between stress-resistance and aging and suggest that stabilization of the genetic network may provide a mathematical explanation of the Gompertz equation describing the relationship between age and mortality in many species, and of the apparently negligible senescence observed in exceptionally long-lived animals. The model may support a range of applications, such as systematic searches for therapeutics to extend lifespan and healthspan.
[ { "created": "Sun, 3 Aug 2014 07:14:09 GMT", "version": "v1" } ]
2015-10-15
[ [ "Kogan", "Valeria", "" ], [ "Molodtcov", "Ivan", "" ], [ "Menshikov", "Leonid I.", "" ], [ "Reis", "Robert J. Shmookler", "" ], [ "Fedichev", "Peter", "" ] ]
Several animal species are considered to exhibit what is called negligible senescence, i.e. they do not show signs of functional decline or any increase of mortality with age, and do not have measurable reductions in reproductive capacity with age. Recent studies in Naked Mole Rat (NMR) and long-lived sea urchin showed that the level of gene expression changes with age is lower than in other organisms. These phenotypic observations correlate well with exceptional endurance of NMR tissues to various genotoxic stresses. Therefore, the lifelong transcriptional stability of an organism may be a key determinant of longevity. However, the exact relation between genetic network stability, stress-resistance and aging has not been defined. We analyze the stability of a simple genetic-network model of a living organism under the influence of external and endogenous factors. We demonstrate that under most common circumstances a gene network is inherently unstable and suffers from exponential accumulation of gene-regulation deviations leading to death. However, should the repair systems be sufficiently effective, the gene network can stabilize so that gene damage remains constrained along with mortality of the organism, which may then enjoy a remarkable degree of stability over very long times. We clarify the relation between stress-resistance and aging and suggest that stabilization of the genetic network may provide a mathematical explanation of the Gompertz equation describing the relationship between age and mortality in many species, and of the apparently negligible senescence observed in exceptionally long-lived animals. The model may support a range of applications, such as systematic searches for therapeutics to extend lifespan and healthspan.
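A toy version of the stability argument above: let regulatory damage $D$ obey $dD/dt = a + (b - r)D$, with $b$ an error-amplification rate and $r$ a repair rate, so damage grows exponentially when repair is too weak ($r < b$) and saturates otherwise. The equation and parameters are illustrative assumptions, not the paper's model.

```python
import numpy as np

def damage_trajectory(a, b, r, t):
    """Closed-form solution of D' = a + (b - r) D with D(0) = 0."""
    k = b - r
    if abs(k) < 1e-12:
        return a * t
    return (a / k) * (np.exp(k * t) - 1.0)

t = np.linspace(0, 50, 6)
print("unstable  (b > r):", damage_trajectory(0.01, 0.3, 0.1, t).round(2))
print("stabilized (r > b):", damage_trajectory(0.01, 0.1, 0.3, t).round(4))
```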
1611.03952
Christoph Adami
A. Gupta and C. Adami
Shared information between residues is sufficient to detect pair-wise epistasis in a protein
4 pages, 1 figure. To appear in PLoS Genetics
PLoS Genetics 12 (2016) e1006471
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a comment on our manuscript "Strong selection significantly increases epistatic interactions in the long-term evolution of a protein", Dr. Crona challenges our assertion that shared entropy (that is, information) between two residues implies epistasis between those residues, by constructing an explicit example of three loci (say A, B, and C), where A and B are epistatically linked (leading to shared entropy between A and B), and A and C also interact epistatically (leading to shared entropy between A and C), so that loci B and C are correlated (share entropy).
[ { "created": "Sat, 12 Nov 2016 04:18:18 GMT", "version": "v1" } ]
2016-12-07
[ [ "Gupta", "A.", "" ], [ "Adami", "C.", "" ] ]
In a comment on our manuscript "Strong selection significantly increases epistatic interactions in the long-term evolution of a protein", Dr. Crona challenges our assertion that shared entropy (that is, information) between two residues implies epistasis between those residues, by constructing an explicit example of three loci (say A, B, and C), where A and B are epistatically linked (leading to shared entropy between A and B), and A and C also interact epistatically (leading to shared entropy between A and C), so that loci B and C are correlated (share entropy).
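The "shared entropy" statistic at issue in this exchange is the mutual information between alignment columns. A plug-in sketch on a toy alignment (real use needs many sequences and bias correction):

```python
import numpy as np
from collections import Counter

def column_mi(col_a, col_b):
    """Plug-in mutual information (bits) between two alignment columns."""
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    return sum((c / n) * np.log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

alignment = ["AKLV", "AKLV", "GRLV", "GRIV"]  # toy sequences (rows)
cols = list(zip(*alignment))                  # columns 0..3
print(column_mi(cols[0], cols[1]))            # 1.0 bit: sites 0 and 1 covary
```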
1406.0399
Nikolai Slavov
Nikolai Slavov, Stefan Semrau, Edoardo Airoldi, Bogdan Budnik, Alexander van Oudenaarden
Differential stoichiometry among core ribosomal proteins
31 pages, 8 figures
Cell Reports 13: 865 - 873, 2015
10.1016/j.celrep.2015.09.056
null
q-bio.GN q-bio.BM q-bio.MN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the regulation and structure of ribosomes is essential to understanding protein synthesis and its deregulation in disease. While ribosomes are believed to have a fixed stoichiometry among their core ribosomal proteins (RPs), some experiments suggest a more variable composition. Testing such variability requires direct and precise quantification of RPs. We used mass spectrometry to directly quantify RPs across monosomes and polysomes of mouse embryonic stem cells (ESC) and budding yeast. Our data show that the stoichiometry among core RPs in wild-type yeast cells and ESC depends both on the growth conditions and on the number of ribosomes bound per mRNA. Furthermore, we find that the fitness of cells with a deleted RP-gene is inversely proportional to the enrichment of the corresponding RP in polysomes. Together, our findings support the existence of ribosomes with distinct protein composition and physiological function.
[ { "created": "Mon, 2 Jun 2014 14:57:33 GMT", "version": "v1" }, { "created": "Wed, 15 Apr 2015 12:17:10 GMT", "version": "v2" } ]
2015-11-24
[ [ "Slavov", "Nikolai", "" ], [ "Semrau", "Sefan", "" ], [ "Airoldi", "Edoardo", "" ], [ "Budnik", "Bogdan", "" ], [ "van Oudenaarden", "Alexander", "" ] ]
Understanding the regulation and structure of ribosomes is essential to understanding protein synthesis and its deregulation in disease. While ribosomes are believed to have a fixed stoichiometry among their core ribosomal proteins (RPs), some experiments suggest a more variable composition. Testing such variability requires direct and precise quantification of RPs. We used mass-spectrometry to directly quantify RPs across monosomes and polysomes of mouse embryonic stem cells (ESC) and budding yeast. Our data show that the stoichiometry among core RPs in wild-type yeast cells and ESC depends both on the growth conditions and on the number of ribosomes bound per mRNA. Furthermore, we find that the fitness of cells with a deleted RP-gene is inversely proportional to the enrichment of the corresponding RP in polysomes. Together, our findings support the existence of ribosomes with distinct protein composition and physiological function.
1702.06977
Carsten Lemmen
Carsten Lemmen and Detlef Gronenborn
The Diffusion of Humans and Cultures in the Course of the Spread of Farming
20 pages, 5 figures, submitted to Diffusive Spreading in Nature, Technology and Society, edited by Armin Bunde, J\"urgen Caro, J\"org K\"arger, Gero Vogl, Chapter 17
In: Bunde A., Caro J., K\"arger J., Vogl G. (eds) Diffusive Spreading in Nature, Technology and Society. Springer, Cham
10.1007/978-3-319-67798-9_17
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
The most profound change in the relationship between humans and their environment was the introduction of agriculture and pastoralism. [....] For an understanding of the expansion process, it appears appropriate to apply a diffusive model. Broadly, these numerical modeling approaches can be categorized as correlative, continuous, and discrete. Common to all approaches is the comparison to collections of radiocarbon data that show the apparent wave of advance of the transition to farming. However, these data sets differ in entry density and data quality. Often they disregard local and regional specifics and research gaps, or dating uncertainties. Thus, most of these databases may only be used on a very general, broad scale. One of the pitfalls of using irregularly spaced or irregularly documented radiocarbon data becomes evident from the map generated by Fort (this volume, Chapter 16): while the general east-west and south-north trends become evident, some areas appear as having undergone anomalously early transitions to farming. This may be due to faulty entries in the database or regional problems with radiocarbon dating, if not unnoticed or undocumented laboratory mistakes.
[ { "created": "Wed, 22 Feb 2017 19:28:18 GMT", "version": "v1" } ]
2018-05-10
[ [ "Lemmen", "Carsten", "" ], [ "Gronenborn", "Detlef", "" ] ]
The most profound change in the relationship between humans and their environment was the introduction of agriculture and pastoralism. [....] For an understanding of the expansion process, it appears appropriate to apply a diffusive model. Broadly, these numerical modeling approaches can be categorized as correlative, continuous, and discrete. Common to all approaches is the comparison to collections of radiocarbon data that show the apparent wave of advance of the transition to farming. However, these data sets differ in entry density and data quality. Often they disregard local and regional specifics and research gaps, or dating uncertainties. Thus, most of these databases may only be used on a very general, broad scale. One of the pitfalls of using irregularly spaced or irregularly documented radiocarbon data becomes evident from the map generated by Fort (this volume, Chapter 16): while the general east-west and south-north trends become evident, some areas appear as having undergone anomalously early transitions to farming. This may be due to faulty entries in the database or regional problems with radiocarbon dating, if not unnoticed or undocumented laboratory mistakes.
2109.11616
K.S. Joseph
D.W. Rurak, M.Y. Shen and K.S. Joseph
Fetal oxygen delivery and consumption and blood gases in relation to gestational age
88 pages, 26 figures, 233 references
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-nd/4.0/
Fetal oxygen delivery and consumption and blood gases in relation to gestational age. Oxygen crosses the placenta by diffusion and placental permeability to O$_2$ is high. Thus, the fetus receives adequate amounts, but vascular Po$_2$ is much lower than after birth. Studies of sustained fetal hypoxemia and acute 40-45% hemorrhage show that hypoxemia is not tolerated whereas hemorrhage is. This suggests that if fetal Po$_2$ falls markedly, O$_2$ diffusion from blood to tissue is impaired. Uterine blood and umbilical blood flows/fetal weight fall progressively with advancing gestation. This results in fetal hypoxemia, an increase in Pco$_2$, and decrease in pH. This decreases fetal O$_2$ delivery, and in fetal lambs and horses there is a decrease in fetal O$_2$ consumption. The decrease in O$_2$ demands is linked to a decrease in fetal breathing and body movements and growth rate. The decrease in fetal motility is due to an increase in fetal plasma PGE2 concentration, which begins at ~120 days GA in sheep and is due to the prepartum rise in fetal cortisol. Also, adenosine administration to fetal lambs decreases fetal breathing and REM sleep and the plasma adenosine concentration increases in late gestation. The fetal plasma levels of neurosteroids, which suppress fetal motility, increase with advancing gestation. The prepartum cortisol rise also inhibits fetal growth. In normal pregnancies, these mechanisms operate effectively to maintain an appropriate balance between fetal oxygen consumption and delivery. However, in pregnancies with complications that either further reduce O$_2$ delivery or increase fetal O$_2$ demands, the mismatch between O$_2$ delivery and consumption may worsen, leading to IUGR, hypoxic organ damage or stillbirth.
[ { "created": "Thu, 23 Sep 2021 19:41:14 GMT", "version": "v1" } ]
2021-09-27
[ [ "Rurak", "D. W.", "" ], [ "Shen", "M. Y.", "" ], [ "Joseph", "K. S.", "" ] ]
Fetal oxygen delivery and consumption and blood gases in relation to gestational age. Oxygen crosses the placenta by diffusion and placental permeability to O$_2$ is high. Thus, the fetus receives adequate amounts, but vascular Po$_2$ is much lower than after birth. Studies of sustained fetal hypoxemia and acute 40-45% hemorrhage show that hypoxemia is not tolerated whereas hemorrhage is. This suggests that if fetal Po$_2$ falls markedly, O$_2$ diffusion from blood to tissue is impaired. Uterine blood and umbilical blood flows/fetal weight fall progressively with advancing gestation. This results in fetal hypoxemia, an increase in Pco$_2$, and decrease in pH. This decreases fetal O$_2$ delivery, and in fetal lambs and horses there is a decrease in fetal O$_2$ consumption. The decrease in O$_2$ demands is linked to a decrease in fetal breathing and body movements and growth rate. The decrease in fetal motility is due to an increase in fetal plasma PGE2 concentration, which begins at ~120 days GA in sheep and is due to the prepartum rise in fetal cortisol. Also, adenosine administration to fetal lambs decreases fetal breathing and REM sleep and the plasma adenosine concentration increases in late gestation. The fetal plasma levels of neurosteroids, which suppress fetal motility, increase with advancing gestation. The prepartum cortisol rise also inhibits fetal growth. In normal pregnancies, these mechanisms operate effectively to maintain an appropriate balance between fetal oxygen consumption and delivery. However, in pregnancies with complications that either further reduce O$_2$ delivery or increase fetal O$_2$ demands, the mismatch between O$_2$ delivery and consumption may worsen, leading to IUGR, hypoxic organ damage or stillbirth.
1110.1739
Graciano Dieck Kattas
Graciano Dieck Kattas, Xiao-Ke Xu and Michael Small
Dynamical modeling of collective behavior from pigeon flight data: flock cohesion and dispersion
null
null
10.1371/journal.pcbi.1002449
null
q-bio.OT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several models of flocking have been promoted based on simulations with qualitatively naturalistic behavior. In this paper we provide the first direct application of computational modeling methods to infer flocking behavior from experimental field data. We show that this approach is able to infer general rules for interaction, or lack of interaction, among members of a flock or, more generally, any community. Using experimental field measurements of homing pigeons in flight we demonstrate the existence of a basic distance-dependent attraction/repulsion relationship and show that this rule is sufficient to explain collective behavior observed in nature. Positional data of individuals over time are used as input to a computational algorithm capable of building complex nonlinear functions that can represent the system behavior. Topological nearest neighbor interactions are considered to characterize the components within this model. The efficacy of this method is demonstrated with simulated noisy data generated from the classical (two dimensional) Vicsek model. When applied to experimental data from homing pigeon flights we show that the more complex three dimensional models are capable of predicting and simulating trajectories, as well as exhibiting realistic collective dynamics. The simulations of the reconstructed models are used to extract properties of the collective behavior in pigeons, and how it is affected by changing the initial conditions of the system. Our results demonstrate that this approach may be applied to construct models capable of simulating trajectories and collective dynamics using experimental field measurements of herd movement. From these models, the behavior of the individual agents (animals) may be inferred.
[ { "created": "Sat, 8 Oct 2011 14:08:23 GMT", "version": "v1" } ]
2015-05-30
[ [ "Kattas", "Graciano Dieck", "" ], [ "Xu", "Xiao-Ke", "" ], [ "Small", "Michael", "" ] ]
Several models of flocking have been promoted based on simulations with qualitatively naturalistic behavior. In this paper we provide the first direct application of computational modeling methods to infer flocking behavior from experimental field data. We show that this approach is able to infer general rules for interaction, or lack of interaction, among members of a flock or, more generally, any community. Using experimental field measurements of homing pigeons in flight we demonstrate the existence of a basic distance-dependent attraction/repulsion relationship and show that this rule is sufficient to explain collective behavior observed in nature. Positional data of individuals over time are used as input to a computational algorithm capable of building complex nonlinear functions that can represent the system behavior. Topological nearest neighbor interactions are considered to characterize the components within this model. The efficacy of this method is demonstrated with simulated noisy data generated from the classical (two dimensional) Vicsek model. When applied to experimental data from homing pigeon flights we show that the more complex three dimensional models are capable of predicting and simulating trajectories, as well as exhibiting realistic collective dynamics. The simulations of the reconstructed models are used to extract properties of the collective behavior in pigeons, and how it is affected by changing the initial conditions of the system. Our results demonstrate that this approach may be applied to construct models capable of simulating trajectories and collective dynamics using experimental field measurements of herd movement. From these models, the behavior of the individual agents (animals) may be inferred.
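Editor's sketch of the kind of rule reported: a distance-dependent attraction/repulsion acting between topological nearest neighbours. The functional form and constants below are illustrative assumptions, not the fitted model.

```python
import numpy as np

# Toy update rule: attraction beyond an equilibrium distance r_eq, repulsion
# inside it, summed over each bird's topological (rank-based) neighbours.
def pairwise_velocity_update(positions, r_eq=2.0, k=0.1, n_neighbors=3):
    n = len(positions)
    new_v = np.zeros_like(positions)
    for i in range(n):
        d = np.linalg.norm(positions - positions[i], axis=1)
        neighbours = np.argsort(d)[1:n_neighbors + 1]   # skip self at rank 0
        for j in neighbours:
            rij = positions[j] - positions[i]
            dist = np.linalg.norm(rij)
            # positive (attractive) beyond r_eq, negative (repulsive) inside
            new_v[i] += k * (dist - r_eq) * rij / dist
    return new_v

pos = np.random.default_rng(0).uniform(0, 10, size=(10, 2))
print(pairwise_velocity_update(pos)[:3])
```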
2110.11339
Li Liu
Hengyang Wang, Xianghao Zhan, Li Liu, Asif Ullah, Huiyan Li, Han Gao, You Wang, Guang Li
Unsupervised cross-user adaptation in taste sensation recognition based on surface electromyography with conformal prediction and domain regularized component analysis
null
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human taste sensation can be qualitatively described with surface electromyography. However, the pattern recognition models trained on one subject (the source domain) do not generalize well to other subjects (the target domain). To improve the generalizability and transferability of taste sensation models developed with sEMG data, two methods were innovatively applied in this study: domain regularized component analysis (DRCA) and conformal prediction with shrunken centroids (CPSC). The effectiveness of these two methods was investigated independently in an unlabeled data augmentation process with the unlabeled data from the target domain, and the same cross-user adaptation pipeline was applied to six subjects. The results show that DRCA improved the classification accuracy on six subjects (p < 0.05), compared with the baseline models trained only with the source domain data, while CPSC did not guarantee an accuracy improvement. Furthermore, the combination of DRCA and CPSC presented statistically significant improvement (p < 0.05) in classification accuracy on six subjects. The proposed strategy combining DRCA and CPSC showed its effectiveness in addressing the cross-user data distribution drift in sEMG-based taste sensation recognition applications. It also shows potential for broader cross-user adaptation applications.
[ { "created": "Wed, 20 Oct 2021 09:11:14 GMT", "version": "v1" }, { "created": "Sat, 11 Dec 2021 12:40:39 GMT", "version": "v2" } ]
2021-12-14
[ [ "Wang", "Hengyang", "" ], [ "Zhan", "Xianghao", "" ], [ "Liu", "Li", "" ], [ "Ullah", "Asif", "" ], [ "Li", "Huiyan", "" ], [ "Gao", "Han", "" ], [ "Wang", "You", "" ], [ "Li", "Guang", "" ] ]
Human taste sensation can be qualitatively described with surface electromyography. However, the pattern recognition models trained on one subject (the source domain) do not generalize well to other subjects (the target domain). To improve the generalizability and transferability of taste sensation models developed with sEMG data, two methods were innovatively applied in this study: domain regularized component analysis (DRCA) and conformal prediction with shrunken centroids (CPSC). The effectiveness of these two methods was investigated independently in an unlabeled data augmentation process with the unlabeled data from the target domain, and the same cross-user adaptation pipeline was applied to six subjects. The results show that DRCA improved the classification accuracy on six subjects (p < 0.05), compared with the baseline models trained only with the source domain data, while CPSC did not guarantee an accuracy improvement. Furthermore, the combination of DRCA and CPSC presented statistically significant improvement (p < 0.05) in classification accuracy on six subjects. The proposed strategy combining DRCA and CPSC showed its effectiveness in addressing the cross-user data distribution drift in sEMG-based taste sensation recognition applications. It also shows potential for broader cross-user adaptation applications.
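Editor's sketch of the conformal-prediction component, simplified: CPSC assigns each candidate label a p-value from a centroid-based nonconformity score, so unreliable pseudo-labels can be rejected during unlabeled data augmentation. The data and the exact score below are invented simplifications of the paper's method.

```python
import numpy as np

# Conformal p-value with a centroid-distance nonconformity measure: the test
# score is ranked against the calibration scores of the candidate class.
def conformal_p_value(x, X_cal, y_cal, label):
    X_class = X_cal[y_cal == label]
    centroid = X_class.mean(axis=0)
    cal_scores = np.linalg.norm(X_class - centroid, axis=1)
    test_score = np.linalg.norm(x - centroid)
    return (np.sum(cal_scores >= test_score) + 1) / (len(cal_scores) + 1)

rng = np.random.default_rng(0)
X_cal = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
y_cal = np.array([0] * 50 + [1] * 50)
x_new = rng.normal(0, 1, 8)                         # drawn from class 0
print(conformal_p_value(x_new, X_cal, y_cal, 0))    # typically large
print(conformal_p_value(x_new, X_cal, y_cal, 1))    # typically small
```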
2404.10854
Matthew Andres Moreno
Matthew Andres Moreno
Methods to Estimate Cryptic Sequence Complexity
null
null
null
null
q-bio.PE cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complexity is a signature quality of interest in artificial life systems. Alongside other dimensions of assessment, it is common to quantify genome sites that contribute to fitness as a complexity measure. However, limitations to the sensitivity of fitness assays in models with implicit replication criteria involving rich biotic interactions introduce the possibility of difficult-to-detect ``cryptic'' adaptive sites, which contribute small fitness effects below the threshold of individual detectability or involve epistatic redundancies. Here, we propose three knockout-based assay procedures designed to quantify cryptic adaptive sites within digital genomes. We report initial tests of these methods on a simple genome model with explicitly configured site fitness effects. In these limited tests, estimation results reflect ground truth cryptic sequence complexities well. The presented work provides initial steps toward the development of new methods and software tools that improve the resolution, rigor, and tractability of complexity analyses across alife systems, particularly those requiring expensive in situ assessments of organism fitness.
[ { "created": "Tue, 16 Apr 2024 19:04:03 GMT", "version": "v1" }, { "created": "Fri, 31 May 2024 13:59:27 GMT", "version": "v2" } ]
2024-06-03
[ [ "Moreno", "Matthew Andres", "" ] ]
Complexity is a signature quality of interest in artificial life systems. Alongside other dimensions of assessment, it is common to quantify genome sites that contribute to fitness as a complexity measure. However, limitations to the sensitivity of fitness assays in models with implicit replication criteria involving rich biotic interactions introduce the possibility of difficult-to-detect ``cryptic'' adaptive sites, which contribute small fitness effects below the threshold of individual detectability or involve epistatic redundancies. Here, we propose three knockout-based assay procedures designed to quantify cryptic adaptive sites within digital genomes. We report initial tests of these methods on a simple genome model with explicitly configured site fitness effects. In these limited tests, estimation results reflect ground truth cryptic sequence complexities well. The presented work provides initial steps toward the development of new methods and software tools that improve the resolution, rigor, and tractability of complexity analyses across alife systems, particularly those requiring expensive in situ assessments of organism fitness.
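Editor's sketch of the baseline knockout logic such procedures build on (site effects invented): single-site knockouts split sites into detectably adaptive ones and cryptic candidates whose individual effect falls below the assay threshold; the paper's procedures extend this idea with joint knockouts.

```python
# Toy fitness landscape: site 0 has a large effect, sites 1-3 tiny additive
# effects that individually fall below the detection threshold.
def fitness(genome):
    effects = [0.5, 1e-4, 1e-4, 1e-4]
    return sum(e for site, e in enumerate(effects) if genome[site] == "1")

def classify_sites(genome, threshold=1e-3):
    base = fitness(genome)
    detectable, cryptic = [], []
    for i in range(len(genome)):
        knockout = genome[:i] + "0" + genome[i + 1:]   # neutral filler at site i
        (detectable if base - fitness(knockout) > threshold else cryptic).append(i)
    return detectable, cryptic

print(classify_sites("1111"))   # ([0], [1, 2, 3])
```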
1307.2461
Martin Kapun PhD
Martin Kapun, Hester van Schalkwyk, Bryant McAllister, Thomas Flatt and Christian Schl\"otterer
Inference of chromosomal inversion dynamics from Pool-Seq data in natural and laboratory populations of Drosophila melanogaster
31 pages, 4 main figures, 1 main table, 7 supporting figures, 11 supporting tables
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sequencing of pools of individuals (Pool-Seq) represents a reliable and cost-effective approach for estimating genome-wide SNP and transposable element insertion frequencies. However, Pool-Seq does not provide direct information on haplotypes so that, for example, obtaining inversion frequencies has not been possible until now. Here, we have developed a new set of diagnostic marker SNPs for 7 cosmopolitan inversions in Drosophila melanogaster that can be used to infer inversion frequencies from Pool-Seq data. We applied our novel marker set to Pool-Seq data from an experimental evolution study and from North American and Australian latitudinal clines. In the experimental evolution data, we find evidence that positive selection has driven the frequencies of In(3R)C and In(3R)Mo to increase over time. In the clinal data, we confirm the existence of frequency clines for In(2L)t, In(3L)P and In(3R)Payne in both North America and Australia and detect a previously unknown latitudinal cline for In(3R)Mo in North America. The inversion markers developed here provide a versatile and robust tool for characterizing inversion frequencies and their dynamics in Pool-Seq data from diverse D. melanogaster populations.
[ { "created": "Tue, 9 Jul 2013 14:04:06 GMT", "version": "v1" } ]
2013-07-10
[ [ "Kapun", "Martin", "" ], [ "van Schalkwyk", "Hester", "" ], [ "McAllister", "Bryant", "" ], [ "Flatt", "Thomas", "" ], [ "Schlötterer", "Christian", "" ] ]
Sequencing of pools of individuals (Pool-Seq) represents a reliable and cost-effective approach for estimating genome-wide SNP and transposable element insertion frequencies. However, Pool-Seq does not provide direct information on haplotypes so that, for example, obtaining inversion frequencies has not been possible until now. Here, we have developed a new set of diagnostic marker SNPs for 7 cosmopolitan inversions in Drosophila melanogaster that can be used to infer inversion frequencies from Pool-Seq data. We applied our novel marker set to Pool-Seq data from an experimental evolution study and from North American and Australian latitudinal clines. In the experimental evolution data, we find evidence that positive selection has driven the frequencies of In(3R)C and In(3R)Mo to increase over time. In the clinal data, we confirm the existence of frequency clines for In(2L)t, In(3L)P and In(3R)Payne in both North America and Australia and detect a previously unknown latitudinal cline for In(3R)Mo in North America. The inversion markers developed here provide a versatile and robust tool for characterizing inversion frequencies and their dynamics in Pool-Seq data from diverse D. melanogaster populations.
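Editor's sketch of how diagnostic markers translate into an inversion-frequency estimate from Pool-Seq allele counts; the counts are invented and the median-over-markers rule is an illustrative simplification.

```python
import numpy as np

# Inversion-specific marker alleles are (ideally) fixed inside the inverted
# arrangement and absent from the standard one, so the pool frequency of each
# marker allele estimates the inversion frequency; the median is robust to
# the occasional misbehaving marker.
def inversion_frequency(alt_counts, depths):
    freqs = np.asarray(alt_counts, dtype=float) / np.asarray(depths, dtype=float)
    return float(np.median(freqs))

print(inversion_frequency([12, 15, 9, 14], [60, 62, 58, 61]))   # about 0.21
```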
1707.03360
Haozhe Shan
Haozhe Shan, Peggy Mason
Unsupervised identification of rat behavioral motifs across timescales
9 pages, 6 figures
NeurIPS 2020 Learning Meaningful Representations of Life (LMRL) workshop
null
null
q-bio.QM stat.ML
http://creativecommons.org/licenses/by/4.0/
Behaviors of several laboratory animals can be modeled as sequences of stereotyped behaviors, or behavioral motifs. However, identifying such motifs is a challenging problem. Behaviors have a multi-scale structure: the animal can be simultaneously performing a small-scale motif and a large-scale one (e.g. \textit{chewing} and \textit{feeding}). Motifs are compositional: a large-scale motif is a chain of smaller-scale ones, folded in (some behavioral) space in a specific manner. We demonstrate an approach that captures these structures, using rat locomotor data as an example. From the same dataset, we used a preprocessing procedure to create different versions, each describing motifs of a different scale. We then trained several Hidden Markov Models (HMMs) in parallel, one for each dataset version. This approach essentially forced each HMM to learn motifs on a different scale, allowing us to capture behavioral structures lost in previous approaches. By comparing HMMs with models representing different null hypotheses, we found that rat locomotion was composed of distinct motifs from the second scale to the minute scale. We found that transitions between motifs were modulated by rats' location in the environment, leading to non-Markovian transitions. To test the ethological relevance of motifs we discovered, we compared their usage between rats with differences in a high-level trait, prosociality. We found that these rats had distinct motif repertoires, suggesting that motif usage statistics can be used to infer internal states of rats. Our method is therefore an efficient way to discover multi-scale, compositional structures in animal behaviors. It may also be applied as a sensitive assay for internal states.
[ { "created": "Tue, 11 Jul 2017 16:55:48 GMT", "version": "v1" }, { "created": "Wed, 23 Aug 2017 16:13:48 GMT", "version": "v2" }, { "created": "Wed, 1 Jul 2020 14:51:26 GMT", "version": "v3" } ]
2021-05-12
[ [ "Shan", "Haozhe", "" ], [ "Mason", "Peggy", "" ] ]
Behaviors of several laboratory animals can be modeled as sequences of stereotyped behaviors, or behavioral motifs. However, identifying such motifs is a challenging problem. Behaviors have a multi-scale structure: the animal can be simultaneously performing a small-scale motif and a large-scale one (e.g. \textit{chewing} and \textit{feeding}). Motifs are compositional: a large-scale motif is a chain of smaller-scale ones, folded in (some behavioral) space in a specific manner. We demonstrate an approach that captures these structures, using rat locomotor data as an example. From the same dataset, we used a preprocessing procedure to create different versions, each describing motifs of a different scale. We then trained several Hidden Markov Models (HMMs) in parallel, one for each dataset version. This approach essentially forced each HMM to learn motifs on a different scale, allowing us to capture behavioral structures lost in previous approaches. By comparing HMMs with models representing different null hypotheses, we found that rat locomotion was composed of distinct motifs from the second scale to the minute scale. We found that transitions between motifs were modulated by rats' location in the environment, leading to non-Markovian transitions. To test the ethological relevance of motifs we discovered, we compared their usage between rats with differences in a high-level trait, prosociality. We found that these rats had distinct motif repertoires, suggesting that motif usage statistics can be used to infer internal states of rats. Our method is therefore an efficient way to discover multi-scale, compositional structures in animal behaviors. It may also be applied as a sensitive assay for internal states.
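Editor's sketch of the parallel-HMM idea using the third-party hmmlearn package: the same trajectory is coarse-grained to several timescales (here by block-averaging, one possible preprocessing choice, not necessarily the paper's) and one HMM is fitted per version, so each model is forced to learn motifs at its own scale.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # third-party: pip install hmmlearn

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=(6000, 2)), axis=0)   # toy 2-D locomotor track

def block_average(data, k):
    """Coarse-grain a (T, d) trajectory by averaging blocks of k frames."""
    n = (len(data) // k) * k
    return data[:n].reshape(-1, k, data.shape[1]).mean(axis=1)

models = {}
for k in (1, 10, 100):                    # fast, intermediate, slow scales
    coarse = block_average(x, k)
    models[k] = GaussianHMM(n_components=4, n_iter=50).fit(coarse)
    print(k, models[k].score(coarse))     # per-scale model log-likelihood
```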
2004.02069
Nikolai Slavov
Nikolai Slavov
Single-cell protein analysis by mass-spectrometry
keywords: single-cell analysis; single-cell proteomics; mass-spectrometry; isobaric carrier; sample preparation; systems biology
Current Opinion in Chemical Biology (2020)
10.1016/j.cbpa.2020.04.018
null
q-bio.QM q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Human physiology and pathology arise from the coordinated interactions of diverse single cells. However, analyzing single cells has been limited by the low sensitivity and throughput of analytical methods. DNA sequencing has recently made such analysis feasible for nucleic acids, but single-cell protein analysis remains limited. Mass-spectrometry is the most powerful method for protein analysis, but its application to single cells faces three major challenges: Efficiently delivering proteins/peptides to MS detectors, identifying their sequences, and scaling the analysis to many thousands of single cells. These challenges have motivated corresponding solutions, including SCoPE-design multiplexing and clean, automated, and miniaturized sample preparation. Synergistically applied, these solutions enable quantifying thousands of proteins across many single cells and establish a solid foundation for further advances. Building upon this foundation, the SCoPE concept will enable analyzing subcellular organelles and post-translational modifications while increases in multiplexing capabilities will increase the throughput and decrease cost.
[ { "created": "Sun, 5 Apr 2020 02:05:20 GMT", "version": "v1" }, { "created": "Thu, 23 Apr 2020 17:18:09 GMT", "version": "v2" }, { "created": "Sat, 20 Jun 2020 19:51:36 GMT", "version": "v3" } ]
2020-06-23
[ [ "Slavov", "Nikolai", "" ] ]
Human physiology and pathology arise from the coordinated interactions of diverse single cells. However, analyzing single cells has been limited by the low sensitivity and throughput of analytical methods. DNA sequencing has recently made such analysis feasible for nucleic acids, but single-cell protein analysis remains limited. Mass-spectrometry is the most powerful method for protein analysis, but its application to single cells faces three major challenges: Efficiently delivering proteins/peptides to MS detectors, identifying their sequences, and scaling the analysis to many thousands of single cells. These challenges have motivated corresponding solutions, including SCoPE-design multiplexing and clean, automated, and miniaturized sample preparation. Synergistically applied, these solutions enable quantifying thousands of proteins across many single cells and establish a solid foundation for further advances. Building upon this foundation, the SCoPE concept will enable analyzing subcellular organelles and post-translational modifications while increases in multiplexing capabilities will increase the throughput and decrease cost.
1502.05667
Giovanni Bussi
Sandro Bottaro, Francesco Di Palma and Giovanni Bussi
Towards de novo RNA 3D structure prediction
Accepted for publication on RNA & Disease
RNA & Disease 2, e544 (2015)
10.14800/rd.544
null
q-bio.BM physics.bio-ph physics.chem-ph physics.comp-ph
http://creativecommons.org/licenses/by/3.0/
RNA is a fundamental class of biomolecules that mediate a large variety of molecular processes within the cell. Computational algorithms can be of great help in understanding the RNA structure-function relationship. One of the main challenges in this field is the development of structure-prediction algorithms, which aim at the prediction of the three-dimensional (3D) native fold from the sole knowledge of the sequence. In a recent paper, we have introduced a scoring function for RNA structure prediction. Here, we analyze in detail the performance of the method, we underline strengths and shortcomings, and we discuss the results with respect to state-of-the-art techniques. These observations provide a starting point for improving current methodologies, thus paving the way for more accurate approaches to RNA 3D structure prediction.
[ { "created": "Thu, 19 Feb 2015 18:42:46 GMT", "version": "v1" } ]
2015-02-20
[ [ "Bottaro", "Sandro", "" ], [ "Di Palma", "Francesco", "" ], [ "Bussi", "Giovanni", "" ] ]
RNA is a fundamental class of biomolecules that mediate a large variety of molecular processes within the cell. Computational algorithms can be of great help in understanding the RNA structure-function relationship. One of the main challenges in this field is the development of structure-prediction algorithms, which aim at the prediction of the three-dimensional (3D) native fold from the sole knowledge of the sequence. In a recent paper, we have introduced a scoring function for RNA structure prediction. Here, we analyze in detail the performance of the method, we underline strengths and shortcomings, and we discuss the results with respect to state-of-the-art techniques. These observations provide a starting point for improving current methodologies, thus paving the way for more accurate approaches to RNA 3D structure prediction.
1409.1542
Norman Poh
Norman Poh, Andrew McGovern and Simon de Lusignan
Towards automated identification of changes in laboratory measurement of renal function: implications for longitudinal research and observing trends in glomerular filtration rate (GFR)
null
null
null
TR-14-03
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Introduction: Kidney function is reported using estimates of glomerular filtration rate (eGFR). However, eGFR values are recorded without reference to the creatinine (SCr) assays used to derive them, and newer assays were introduced at different time points across laboratories in the UK. These changes may cause systematic bias in eGFR reported in routinely collected data, even though laboratory-reported eGFR values have a correction factor applied. Design: An algorithm to detect changes in SCr measurement that affect the eGFR calculation method, by comparing the mapping of SCr values onto eGFR values across a time series of paired eGFR and SCr measurements. Setting: Routinely collected primary care data from 20,000 people with the richest renal function data from the Quality Improvement in Chronic Kidney Disease (QICKD) trial. Results: The algorithm identified a change in eGFR calculation method in 80 (63%) of the 127 included practices. This change was identified in 4,736 (23.7%) patient time series analysed. This change in calibration method was found to cause a significant step change in reported eGFR values, producing a systematic bias. eGFR values could not be recalibrated by applying the Modification of Diet in Renal Disease (MDRD) equation to the laboratory-reported SCr values. Conclusions: This algorithm can identify laboratory changes in eGFR calculation methods and changes in SCr assays. Failure to account for these changes may misconstrue renal function changes over time. Researchers using routine eGFR data should account for these effects.
[ { "created": "Wed, 3 Sep 2014 09:53:20 GMT", "version": "v1" } ]
2014-09-05
[ [ "Poh", "Norman", "" ], [ "McGovern", "Andrew", "" ], [ "de Lusignan", "Simon", "" ] ]
Introduction: Kidney function is reported using estimates of glomerular filtration rate (eGFR). However, eGFR values are recorded without reference to the creatinine (SCr) assays used to derive them, and newer assays were introduced at different time points across laboratories in the UK. These changes may cause systematic bias in eGFR reported in routinely collected data, even though laboratory-reported eGFR values have a correction factor applied. Design: An algorithm to detect changes in SCr measurement that affect the eGFR calculation method, by comparing the mapping of SCr values onto eGFR values across a time series of paired eGFR and SCr measurements. Setting: Routinely collected primary care data from 20,000 people with the richest renal function data from the Quality Improvement in Chronic Kidney Disease (QICKD) trial. Results: The algorithm identified a change in eGFR calculation method in 80 (63%) of the 127 included practices. This change was identified in 4,736 (23.7%) patient time series analysed. This change in calibration method was found to cause a significant step change in reported eGFR values, producing a systematic bias. eGFR values could not be recalibrated by applying the Modification of Diet in Renal Disease (MDRD) equation to the laboratory-reported SCr values. Conclusions: This algorithm can identify laboratory changes in eGFR calculation methods and changes in SCr assays. Failure to account for these changes may misconstrue renal function changes over time. Researchers using routine eGFR data should account for these effects.
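Editor's sketch of the underlying check: recompute eGFR from SCr with the 4-variable MDRD study equation (SCr in mg/dL) and look for a step change in the residuals against the lab-reported series. All patient values below are invented.

```python
# 4-variable MDRD study equation: eGFR = 175 * SCr^-1.154 * age^-0.203,
# times 0.742 if female and 1.212 if Black (SCr in mg/dL).
def mdrd_egfr(scr_mg_dl, age, female=False, black=False):
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

reported = [58.0, 57.5, 58.2, 63.9, 64.4]   # invented lab-reported eGFR series
scr      = [1.20, 1.21, 1.19, 1.20, 1.19]   # invented paired SCr, mg/dL
recomputed = [mdrd_egfr(s, age=65, female=True) for s in scr]
residuals = [(r - c) / c for r, c in zip(reported, recomputed)]
print(residuals)   # a sustained jump mid-series would flag a method change
```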
1911.05663
Rohan Gala
Rohan Gala, Nathan Gouwens, Zizhen Yao, Agata Budzillo, Osnat Penn, Bosiljka Tasic, Gabe Murphy, Hongkui Zeng, Uygar S\"umb\"ul
A coupled autoencoder approach for multi-modal analysis of cell types
Main text : 10 pages, 5 figures. Supp text : 6 pages, 3 figures
null
null
null
q-bio.NC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent developments in high throughput profiling of individual neurons have spurred data driven exploration of the idea that there exist natural groupings of neurons referred to as cell types. The promise of this idea is that the immense complexity of brain circuits can be reduced, and effectively studied by means of interactions between cell types. While clustering of neuron populations based on a particular data modality can be used to define cell types, such definitions are often inconsistent across different characterization modalities. We pose this issue of cross-modal alignment as an optimization problem and develop an approach based on coupled training of autoencoders as a framework for such analyses. We apply this framework to a Patch-seq dataset consisting of transcriptomic and electrophysiological profiles for the same set of neurons to study consistency of representations across modalities, and evaluate cross-modal data prediction ability. We explore the problem where only a subset of neurons is characterized with more than one modality, and demonstrate that representations learned by coupled autoencoders can be used to identify types sampled only by a single modality.
[ { "created": "Wed, 6 Nov 2019 00:58:02 GMT", "version": "v1" } ]
2019-11-14
[ [ "Gala", "Rohan", "" ], [ "Gouwens", "Nathan", "" ], [ "Yao", "Zizhen", "" ], [ "Budzillo", "Agata", "" ], [ "Penn", "Osnat", "" ], [ "Tasic", "Bosiljka", "" ], [ "Murphy", "Gabe", "" ], [ "Zeng", "Hongkui", "" ], [ "Sümbül", "Uygar", "" ] ]
Recent developments in high throughput profiling of individual neurons have spurred data driven exploration of the idea that there exist natural groupings of neurons referred to as cell types. The promise of this idea is that the immense complexity of brain circuits can be reduced, and effectively studied by means of interactions between cell types. While clustering of neuron populations based on a particular data modality can be used to define cell types, such definitions are often inconsistent across different characterization modalities. We pose this issue of cross-modal alignment as an optimization problem and develop an approach based on coupled training of autoencoders as a framework for such analyses. We apply this framework to a Patch-seq dataset consisting of transcriptomic and electrophysiological profiles for the same set of neurons to study consistency of representations across modalities, and evaluate cross-modal data prediction ability. We explore the problem where only a subset of neurons is characterized with more than one modality, and demonstrate that representations learned by coupled autoencoders can be used to identify types sampled only by a single modality.
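Editor's sketch of a coupled-autoencoder setup in PyTorch: one autoencoder per modality plus a coupling penalty that pulls the two latent codes of the same neuron together. Layer sizes and the loss weighting are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, hidden=64):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(), nn.Linear(hidden, d_out))

class CoupledAE(nn.Module):
    """One encoder/decoder pair per modality (t: transcriptomic, e: ephys)."""
    def __init__(self, d_t, d_e, d_z=8):
        super().__init__()
        self.enc_t, self.dec_t = mlp(d_t, d_z), mlp(d_z, d_t)
        self.enc_e, self.dec_e = mlp(d_e, d_z), mlp(d_z, d_e)

    def forward(self, x_t, x_e):
        z_t, z_e = self.enc_t(x_t), self.enc_e(x_e)
        return z_t, z_e, self.dec_t(z_t), self.dec_e(z_e)

model = CoupledAE(d_t=100, d_e=40)
x_t, x_e = torch.randn(32, 100), torch.randn(32, 40)   # toy paired batch
z_t, z_e, r_t, r_e = model(x_t, x_e)
loss = (nn.functional.mse_loss(r_t, x_t) + nn.functional.mse_loss(r_e, x_e)
        + 1.0 * nn.functional.mse_loss(z_t, z_e))      # coupling term
loss.backward()
```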
2208.09871
Mattia Miotto
Greta Grassmann, Lorenzo Di Rienzo, Giorgio Gosti, Marco Leonetti, Giancarlo Ruocco, Mattia Miotto, Edoardo Milanetti
Electrostatic complementarity at the interface drives transient protein-protein interactions
16 pages, 5 figures, 3 tables
Sci Rep 13, 10207 (2023)
10.1038/s41598-023-37130-z
null
q-bio.BM physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Understanding the molecular mechanisms driving the binding between bio-molecules is a crucial challenge in molecular biology. In this respect, characteristics like the preferentially hydrophobic composition of the binding interfaces, the role of van der Waals interactions (short range forces), and the consequent shape complementarity between the interacting molecular surfaces are well established. However, no consensus has yet been reached on how and how much electrostatics participates in the various stages of protein-protein interactions. Here, we perform extensive analyses on a large dataset of protein complexes for which both experimental binding affinity and pH data were available. We found that (i) although different classes of dimers do not present marked differences in the amino acid composition and charge disposition in the binding region, (ii) homodimers with identical binding regions show higher electrostatic compatibility with respect to both homodimers with non-identical binding regions and heterodimers. The level of electrostatic compatibility also varies with the pH of the complex, reaching the lowest values for low pH. Interestingly, (iii) shape and electrostatic complementarity behave oppositely when one stratifies the complexes by their binding affinity. Conversely, complexes with low values of binding affinity exploit Coulombic complementarity to acquire specificity, suggesting that electrostatic complementarity may play a greater role in transient (or less stable) complexes. In light of these results, (iv) we provide a fast and efficient method to measure electrostatic complementarity without the need of knowing the complex structure. Expanding the electrostatic potential on a basis of 2D orthogonal polynomials, we can discriminate between transient and permanent protein complexes with an AUC of the ROC of 0.8.
[ { "created": "Sun, 21 Aug 2022 11:55:07 GMT", "version": "v1" } ]
2023-10-23
[ [ "Grassmann", "Greta", "" ], [ "Di Rienzo", "Lorenzo", "" ], [ "Gosti", "Giorgio", "" ], [ "Leonetti", "Marco", "" ], [ "Ruocco", "Giancarlo", "" ], [ "Miotto", "Mattia", "" ], [ "Milanetti", "Edoardo", "" ] ]
Understanding the molecular mechanisms driving the binding between bio-molecules is a crucial challenge in molecular biology. In this respect, characteristics like the preferentially hydrophobic composition of the binding interfaces, the role of van der Waals interactions (short range forces), and the consequent shape complementarity between the interacting molecular surfaces are well established. However, no consensus has yet been reached on how and how much electrostatics participates in the various stages of protein-protein interactions. Here, we perform extensive analyses on a large dataset of protein complexes for which both experimental binding affinity and pH data were available. We found that (i) although different classes of dimers do not present marked differences in the amino acid composition and charge disposition in the binding region, (ii) homodimers with identical binding regions show higher electrostatic compatibility with respect to both homodimers with non-identical binding regions and heterodimers. The level of electrostatic compatibility also varies with the pH of the complex, reaching the lowest values for low pH. Interestingly, (iii) shape and electrostatic complementarity behave oppositely when one stratifies the complexes by their binding affinity. Conversely, complexes with low values of binding affinity exploit Coulombic complementarity to acquire specificity, suggesting that electrostatic complementarity may play a greater role in transient (or less stable) complexes. In light of these results, (iv) we provide a fast and efficient method to measure electrostatic complementarity without the need of knowing the complex structure. Expanding the electrostatic potential on a basis of 2D orthogonal polynomials, we can discriminate between transient and permanent protein complexes with an AUC of the ROC of 0.8.
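Editor's sketch of one simple way to score electrostatic complementarity at an interface: the negative correlation between potentials sampled at matched surface points. This is a toy stand-in for the paper's 2D orthogonal-polynomial expansion; the values below are invented.

```python
import numpy as np

# Complementary patches carry opposite potentials at matched points, so the
# negative Pearson correlation is near +1 for a well-matched interface.
def electrostatic_complementarity(phi_a, phi_b):
    return -np.corrcoef(phi_a, phi_b)[0, 1]

phi_a = np.array([ 1.2,  0.8, -0.3,  0.5, -1.0])   # potential on patch A
phi_b = np.array([-1.1, -0.7,  0.4, -0.6,  0.9])   # potential on patch B
print(electrostatic_complementarity(phi_a, phi_b))  # close to 1
```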
2207.07215
Mohsen Annabestani
Ali Olyanasab, Zahra Meskar, Mohsen Annabestani, Ali Mousavi Shaegh, and Mehdi Fardmanesh
Warp and Weft Wiring method for rapid, modifiable, self-aligned, and bonding-free fabrication of multi-electrode microfluidic sensors
null
null
null
null
q-bio.QM physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The need for rapid fabrication of microfluidic devices has become increasingly critical as microfluidics become part of biomedical sensors. Using Warp and Weft Wiring (WWW) of copper wires, this paper presents a novel low-cost method for rapid, self-aligned, bonding-free, and modifiable fabrication of multi-electrode microfluidic sensors. All the proposed features are valuable for the development of Point-of-Care Tests (POCTs), whereas most conventional methods rarely make it out of research labs and play no role in POCT development. As an experimental proof of concept, the proposed chip was fabricated and tested in two sets of experiments that demonstrated potential applications in water quality management, hygiene, biomedical impedance measurement, cell analysis, flow cytometry, etc.
[ { "created": "Thu, 14 Jul 2022 21:42:49 GMT", "version": "v1" } ]
2022-07-18
[ [ "Olyanasab", "Ali", "" ], [ "Meskar", "Zahra", "" ], [ "Annabestani", "Mohsen", "" ], [ "Shaegh", "Ali Mousavi", "" ], [ "Fardmanesh", "Mehdi", "" ] ]
The need for rapid fabrication of microfluidic devices has become increasingly critical as microfluidics become part of biomedical sensors. Using Warp and Weft Wiring (WWW) of copper wires, this paper presents a novel low-cost method for rapid, self-aligned, bonding-free, and modifiable fabrication of multi-electrode microfluidic sensors. All the proposed features are valuable for the development of Point-of-Care Tests (POCTs), whereas most conventional methods rarely make it out of research labs and play no role in POCT development. As an experimental proof of concept, the proposed chip was fabricated and tested in two sets of experiments that demonstrated potential applications in water quality management, hygiene, biomedical impedance measurement, cell analysis, flow cytometry, etc.
2101.11929
Shiva Rudraraju
Debabrata Auddya, Xiaoxuan Zhang, Rahul Gulati, Ritvik Vasan, Krishna Garikipati, Padmini Rangamani, Shiva Rudraraju
Biomembranes undergo complex, non-axisymmetric deformations governed by Kirchhoff-Love kinematics and revealed by a three dimensional computational framework
null
null
10.1098/rspa.2021.0246
null
q-bio.QM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biomembranes play a central role in various phenomena like locomotion of cells, cell-cell interactions, packaging of nutrients, and in maintaining organelle morphology and functionality. During these processes, the membranes undergo significant morphological changes through deformation, scission, and fusion. Modeling the underlying mechanics of such morphological changes has traditionally relied on reduced order axisymmetric representations of membrane geometry and deformation. Axisymmetric representations, while robust and extensively deployed, suffer from their inability to model symmetry breaking deformations and structural bifurcations. To address this limitation, a 3D computational mechanics framework for high fidelity modeling of biomembrane deformation is presented. The proposed framework brings together Kirchhoff-Love thin-shell kinematics, Helfrich-energy based mechanics, and state-of-the-art numerical techniques for modeling deformation of surface geometries. Lipid bilayers are represented as spline-based surfaces immersed in a 3D space; this enables modeling of a wide spectrum of membrane geometries, boundary conditions, and deformations that are physically admissible in a 3D space. The mathematical basis of the framework and its numerical machinery are presented, and their utility is demonstrated by modeling 3 classical, yet non-trivial, membrane problems: formation of tubular shapes and their lateral constriction, Piezo1-induced membrane footprint generation and gating response, and the budding of membranes by protein coats during endocytosis. For each problem, the full 3D membrane deformation is captured, potential symmetry-breaking deformation paths identified, and various case studies of boundary and load conditions are presented. Using the endocytic vesicle budding as a case study, we also present a "phase diagram" for its symmetric and broken-symmetry states.
[ { "created": "Thu, 28 Jan 2021 11:04:49 GMT", "version": "v1" } ]
2021-12-01
[ [ "Auddya", "Debabrata", "" ], [ "Zhang", "Xiaoxuan", "" ], [ "Gulati", "Rahul", "" ], [ "Vasan", "Ritvik", "" ], [ "Garikipati", "Krishna", "" ], [ "Rangamani", "Padmini", "" ], [ "Rudraraju", "Shiva", "" ] ]
Biomembranes play a central role in various phenomena like locomotion of cells, cell-cell interactions, packaging of nutrients, and in maintaining organelle morphology and functionality. During these processes, the membranes undergo significant morphological changes through deformation, scission, and fusion. Modeling the underlying mechanics of such morphological changes has traditionally relied on reduced order axisymmetric representations of membrane geometry and deformation. Axisymmetric representations, while robust and extensively deployed, suffer from their inability to model symmetry breaking deformations and structural bifurcations. To address this limitation, a 3D computational mechanics framework for high fidelity modeling of biomembrane deformation is presented. The proposed framework brings together Kirchhoff-Love thin-shell kinematics, Helfrich-energy based mechanics, and state-of-the-art numerical techniques for modeling deformation of surface geometries. Lipid bilayers are represented as spline-based surfaces immersed in a 3D space; this enables modeling of a wide spectrum of membrane geometries, boundary conditions, and deformations that are physically admissible in a 3D space. The mathematical basis of the framework and its numerical machinery are presented, and their utility is demonstrated by modeling 3 classical, yet non-trivial, membrane problems: formation of tubular shapes and their lateral constriction, Piezo1-induced membrane footprint generation and gating response, and the budding of membranes by protein coats during endocytosis. For each problem, the full 3D membrane deformation is captured, potential symmetry-breaking deformation paths identified, and various case studies of boundary and load conditions are presented. Using the endocytic vesicle budding as a case study, we also present a "phase diagram" for its symmetric and broken-symmetry states.
2311.12040
Xiaoqiong Xia
Xiaoqiong Xia, Chaoyu Zhu, Yuqi Shan, Fan Zhong, and Lei Liu
TransCDR: a deep learning model for enhancing the generalizability of cancer drug response prediction through transfer learning and multimodal data fusion for drug representation
8 figures
null
null
null
q-bio.QM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate and robust drug response prediction is of utmost importance in precision medicine. Although many models have been developed to utilize the representations of drugs and cancer cell lines for predicting cancer drug responses (CDR), their performance can be improved by addressing issues such as insufficient data modality, suboptimal fusion algorithms, and poor generalizability for novel drugs or cell lines. We introduce TransCDR, which uses transfer learning to learn drug representations and fuses multi-modality features of drugs and cell lines by a self-attention mechanism, to predict the IC50 values or sensitive states of drugs on cell lines. We are the first to systematically evaluate the generalization of the CDR prediction model to novel (i.e., never-before-seen) compound scaffolds and cell line clusters. TransCDR shows better generalizability than 8 state-of-the-art models. TransCDR outperforms its 5 variants that train drug encoders (i.e., RNN and AttentiveFP) from scratch under various scenarios. The most critical contributors among multiple drug notations and omics profiles are Extended Connectivity Fingerprint and genetic mutation. Additionally, the attention-based fusion module further enhances the predictive performance of TransCDR. TransCDR, trained on the GDSC dataset, demonstrates strong predictive performance on the external testing set CCLE. It is also utilized to predict missing CDRs on GDSC. Moreover, we investigate the biological mechanisms underlying drug response by classifying 7,675 patients from TCGA into drug-sensitive or drug-resistant groups, followed by a Gene Set Enrichment Analysis. TransCDR emerges as a potent tool with significant potential in drug response prediction. The source code and data can be accessed at https://github.com/XiaoqiongXia/TransCDR.
[ { "created": "Fri, 17 Nov 2023 14:55:12 GMT", "version": "v1" } ]
2023-11-22
[ [ "Xia", "Xiaoqiong", "" ], [ "Zhu", "Chaoyu", "" ], [ "Shan", "Yuqi", "" ], [ "Zhong", "Fan", "" ], [ "Liu", "Lei", "" ] ]
Accurate and robust drug response prediction is of utmost importance in precision medicine. Although many models have been developed to utilize the representations of drugs and cancer cell lines for predicting cancer drug responses (CDR), their performance can be improved by addressing issues such as insufficient data modality, suboptimal fusion algorithms, and poor generalizability for novel drugs or cell lines. We introduce TransCDR, which uses transfer learning to learn drug representations and fuses multi-modality features of drugs and cell lines by a self-attention mechanism, to predict the IC50 values or sensitive states of drugs on cell lines. We are the first to systematically evaluate the generalization of the CDR prediction model to novel (i.e., never-before-seen) compound scaffolds and cell line clusters. TransCDR shows better generalizability than 8 state-of-the-art models. TransCDR outperforms its 5 variants that train drug encoders (i.e., RNN and AttentiveFP) from scratch under various scenarios. The most critical contributors among multiple drug notations and omics profiles are Extended Connectivity Fingerprint and genetic mutation. Additionally, the attention-based fusion module further enhances the predictive performance of TransCDR. TransCDR, trained on the GDSC dataset, demonstrates strong predictive performance on the external testing set CCLE. It is also utilized to predict missing CDRs on GDSC. Moreover, we investigate the biological mechanisms underlying drug response by classifying 7,675 patients from TCGA into drug-sensitive or drug-resistant groups, followed by a Gene Set Enrichment Analysis. TransCDR emerges as a potent tool with significant potential in drug response prediction. The source code and data can be accessed at https://github.com/XiaoqiongXia/TransCDR.
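Editor's sketch of an attention-based fusion module of the kind described: drug representations and omics profiles enter as tokens, self-attention mixes them, and a small head predicts IC50. Dimensions and layer choices are illustrative, not TransCDR's actual architecture.

```python
import torch
import torch.nn as nn

class AttnFusionCDR(nn.Module):
    """Self-attention over modality tokens, pooled into an IC50 regression."""
    def __init__(self, d=128, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, tokens):                 # tokens: (batch, n_tokens, d)
        fused, _ = self.attn(tokens, tokens, tokens)
        return self.head(fused.mean(dim=1))    # pooled -> predicted IC50

# Toy tokens standing in for fingerprint, sequence, expression, and mutation
# embeddings, each already projected to a shared width of 128.
drug_fp, drug_seq, expr, mut = (torch.randn(8, 1, 128) for _ in range(4))
model = AttnFusionCDR()
print(model(torch.cat([drug_fp, drug_seq, expr, mut], dim=1)).shape)  # (8, 1)
```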
1702.05288
Roland Kr\"amer
Ulrich Warttinger, Roland Kr\"amer
Instant determination of the potential biomarker heparan sulfate in human plasma by a mix-and-read fluorescence assay
18 pages, 5 figures, 2 schemes, 5 tables
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heparan sulfate (HS) is a linear, polydisperse sulfated polysaccharide belonging to the glycosaminoglycan family. HS proteoglycans are ubiquitously found at the cell surface and extracellular matrix in animal species. HS is involved in the interaction with a wide variety of proteins and the regulation of many biological activities. In certain pathologic conditions, expression and shedding of HS proteoglycans is overregulated, or enzymatic degradation of HS in lysosomes is deficient, both leading to excess circulating free HS chains in blood plasma. HS has therefore been suggested as a biomarker for various severe disease states. The structural heterogeneity makes the quantification of heparan sulfate in complex matrices such as human plasma challenging. HS plasma levels are usually quantified by either disaccharide analysis or enzyme-linked immunosorbent assay (ELISA). Both methods require time-consuming multistep protocols. We describe here the instant detection of heparan sulfate in spiked plasma samples by the Heparin Red Kit, a commercial mix-and-read fluorescence microplate assay. The method enables HS quantification in the low microgram per mL range without sample pretreatment. Heparin Red appears to be sufficiently sensitive for the detection of highly elevated HS levels as reported for mucopolysaccharidosis, graft versus host disease after transplantation, dengue infection or septic shock. This study is a significant step toward the development of a convenient and fast method for the quantification of HS in human plasma, with the potential to simplify the detection and advance the acceptance of HS as a biomarker.
[ { "created": "Fri, 17 Feb 2017 10:19:36 GMT", "version": "v1" } ]
2017-02-20
[ [ "Warttinger", "Ulrich", "" ], [ "Krämer", "Roland", "" ] ]
Heparan sulfate (HS) is a linear, polydisperse sulfated polysaccharide belonging to the glycosaminoglycan family. HS proteoglycans are ubiquitously found at the cell surface and extracellular matrix in animal species. HS is involved in the interaction with a wide variety of proteins and the regulation of many biological activities. In certain pathologic conditions, expression and shedding of HS proteoglycans is overregulated, or enzymatic degradation of HS in lysosomes is deficient, both leading to excess circulating free HS chains in blood plasma. HS has therefore been suggested as a biomarker for various severe disease states. The structural heterogeneity makes the quantification of heparan sulfate in complex matrices such as human plasma challenging. HS plasma levels are usually quantified by either disaccharide analysis or enzyme-linked immunosorbent assay (ELISA). Both methods require time-consuming multistep protocols. We describe here the instant detection of heparan sulfate in spiked plasma samples by the Heparin Red Kit, a commercial mix-and-read fluorescence microplate assay. The method enables HS quantification in the low microgram per mL range without sample pretreatment. Heparin Red appears to be sufficiently sensitive for the detection of highly elevated HS levels as reported for mucopolysaccharidosis, graft versus host disease after transplantation, dengue infection or septic shock. This study is a significant step toward the development of a convenient and fast method for the quantification of HS in human plasma, with the potential to simplify the detection and advance the acceptance of HS as a biomarker.
1810.12663
Michael Adamer
Michael F Adamer, Heather A Harrington, Eamonn A Gaffney, Thomas E Woolley
Coloured Noise from Stochastic Inflows in Reaction-Diffusion Systems
31 pages, 8 figures
null
null
null
q-bio.QM physics.bio-ph q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a framework for investigating coloured noise in reaction-diffusion systems. We start by considering a deterministic reaction-diffusion equation and show how external forcing can cause temporally correlated or coloured noise. Here, the main source of external noise is considered to be fluctuations in the parameter values representing the inflow of particles to the system. First, we determine which reaction systems, driven by extrinsic noise, can admit only one steady state, so that effects, such as stochastic switching, are precluded from our analysis. To analyse the steady state behaviour of reaction systems, even if the parameter values are changing, necessitates a parameter-free approach, which has been central to algebraic analysis in chemical reaction network theory. To identify suitable models we use tools from real algebraic geometry that link the network structure to its dynamical properties. We then make a connection to internal noise models and show how power spectral methods can be used to predict stochastically driven patterns in systems with coloured noise. In simple cases we show that the power spectrum of the coloured noise process and the power spectrum of the reaction-diffusion system modelled with white noise multiply to give the power spectrum of the coloured noise reaction-diffusion system.
[ { "created": "Tue, 30 Oct 2018 11:19:58 GMT", "version": "v1" }, { "created": "Fri, 30 Nov 2018 13:14:15 GMT", "version": "v2" } ]
2018-12-03
[ [ "Adamer", "Michael F", "" ], [ "Harrington", "Heather A", "" ], [ "Gaffney", "Eamonn A", "" ], [ "Woolley", "Thomas E", "" ] ]
In this paper we present a framework for investigating coloured noise in reaction-diffusion systems. We start by considering a deterministic reaction-diffusion equation and show how external forcing can cause temporally correlated or coloured noise. Here, the main source of external noise is considered to be fluctuations in the parameter values representing the inflow of particles to the system. First, we determine which reaction systems, driven by extrinsic noise, can admit only one steady state, so that effects such as stochastic switching are precluded from our analysis. Analysing the steady-state behaviour of reaction systems even while parameter values change necessitates a parameter-free approach, which has been central to algebraic analysis in chemical reaction network theory. To identify suitable models we use tools from real algebraic geometry that link the network structure to its dynamical properties. We then make a connection to internal noise models and show how power spectral methods can be used to predict stochastically driven patterns in systems with coloured noise. In simple cases we show that the power spectrum of the coloured noise process and the power spectrum of the reaction-diffusion system modelled with white noise multiply to give the power spectrum of the coloured noise reaction-diffusion system.
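For the simplest linear case, the multiplication rule stated at the end of the abstract can be checked numerically: a single relaxing mode du/dt = -k*u + eta(t) driven by Ornstein-Uhlenbeck (coloured) noise has spectrum S_u(w) = S_eta(w) / (k^2 + w^2), i.e. the coloured-noise spectrum times the white-noise-driven response. The toy model and all parameter values below are our own illustration, not the paper's reaction networks.

```python
# Numerical check of the spectral multiplication rule for a linear mode
#     du/dt = -k*u + eta(t),
# with eta an Ornstein-Uhlenbeck process (correlation time tau). Analytic
# two-sided spectra: S_eta(w) = sigma^2 tau^2 / (1 + (w tau)^2) and
# S_u(w) = S_eta(w) / (k^2 + w^2); Welch returns a one-sided PSD (factor 2).
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
dt, n, k, tau, sigma = 1e-3, 400_000, 1.0, 0.1, 1.0

eta = np.zeros(n)
u = np.zeros(n)
kicks = sigma * np.sqrt(dt) * rng.standard_normal(n)
for i in range(n - 1):  # Euler-Maruyama integration
    eta[i + 1] = eta[i] - eta[i] / tau * dt + kicks[i]
    u[i + 1] = u[i] + (-k * u[i] + eta[i]) * dt

f, P_sim = welch(u, fs=1.0 / dt, nperseg=1 << 14)
w = 2.0 * np.pi * f[1:]
P_theory = 2.0 * (sigma**2 * tau**2 / (1.0 + (w * tau) ** 2)) / (k**2 + w**2)
# The ratio should hover around 1 at low frequencies.
print("median sim/theory ratio:", np.median(P_sim[1:101] / P_theory[:100]))
```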
1704.00940
Antonino Sciarrino
A. Sciarrino and P. Sorba
Symmetry and Minimum Principle at the Basis of the Genetic Code
To appear in BIOMAT 2016, 326 - 362, 2017
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The importance of the notion of symmetry in physics is well established: could it also be the case for the genetic code? In this spirit, a model of the genetic code based on continuous symmetries, entitled the "Crystal Basis Model", was proposed a few years ago. The present paper reviews the model, some of its first applications, and its recent developments. After a motivated presentation of our mathematical model, we illustrate its pertinence by applying it to the elaboration and verification of sum rules for codon usage probabilities, as well as to establishing relations, and making some predictions, concerning the physical-chemical properties of amino acids. Then, defining in this context a "bio-spin" structure for the nucleotides and codons, the interaction within a codon-anticodon pair can simply be represented by a (bio) spin-spin potential. This approach constitutes the second part of the paper where, by imposing a minimum energy principle, an analysis of the evolution of the genetic code can be performed, in good agreement with the generally accepted scheme. A more precise study of this interaction model provides information on codon bias that is consistent with data.
[ { "created": "Tue, 4 Apr 2017 10:17:00 GMT", "version": "v1" } ]
2017-04-05
[ [ "Sciarrino", "A.", "" ], [ "Sorba", "P.", "" ] ]
The importance of the notion of symmetry in physics is well established: could it also be the case for the genetic code? In this spirit, a model of the genetic code based on continuous symmetries, entitled the "Crystal Basis Model", was proposed a few years ago. The present paper reviews the model, some of its first applications, and its recent developments. After a motivated presentation of our mathematical model, we illustrate its pertinence by applying it to the elaboration and verification of sum rules for codon usage probabilities, as well as to establishing relations, and making some predictions, concerning the physical-chemical properties of amino acids. Then, defining in this context a "bio-spin" structure for the nucleotides and codons, the interaction within a codon-anticodon pair can simply be represented by a (bio) spin-spin potential. This approach constitutes the second part of the paper where, by imposing a minimum energy principle, an analysis of the evolution of the genetic code can be performed, in good agreement with the generally accepted scheme. A more precise study of this interaction model provides information on codon bias that is consistent with data.
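The sum rules mentioned above constrain codon usage probabilities within synonymous families; the model-specific sum rules are not reproduced here, but the following generic sketch shows how those input probabilities are estimated from a coding sequence. The sequence and family labels are illustrative placeholders.

```python
# Generic estimation of codon usage probabilities within synonymous
# families -- the quantities constrained by the sum rules of the abstract.
# The crystal basis model's specific sum rules are not reproduced here.
from collections import Counter

seq = "ATGGCTGCCGCAGCGTTATTGCTTCTGTAA"  # tiny placeholder coding sequence
codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
counts = Counter(codons)

# Two synonymous families as examples: four-fold alanine, six-fold leucine.
families = {
    "Ala": ["GCT", "GCC", "GCA", "GCG"],
    "Leu": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
}
for aa, fam in families.items():
    total = sum(counts[c] for c in fam)
    probs = {c: counts[c] / total for c in fam}
    # Within each family the probabilities sum to 1 by construction.
    print(aa, {c: round(p, 2) for c, p in probs.items()})
```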
1504.00698
Richard McMurtrey
Richard J. McMurtrey
Novel Advancements in Three-Dimensional Neural Tissue Engineering and Regenerative Medicine
null
Neural Regen. Res. 2015;10(3):352-4
10.4103/1673-5374.153674
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neurological diseases and injuries present some of the greatest challenges in modern medicine, often causing irreversible, lifelong burdens for the people they afflict. These diagnoses have devastating consequences for millions of people each year, and yet there are currently no therapies or interventions that can repair the structure of neural circuits and restore neural tissue function in the brain and spinal cord. Despite the challenges of overcoming these limitations, there are many new approaches under development that hold much promise. Neural tissue engineering aims to restore and influence the function of damaged or diseased neural tissue, generally through the use of stem cells and biomaterials. In this paper, several new 3D tissue constructs and designs are described for functional reconstruction of neural architecture. With the use of induced pluripotent stem cells or induced neuronal cells, these 3D constructs could then be studied as regional models of the central nervous system or could one day be implanted as autologous grafts into damaged sites of the nervous system in order to restore neural function, particularly for damaged sites of the spinal cord, areas of stroke infarction, tumor resection sites, peripheral nerve injuries, or areas of neurodegeneration.
[ { "created": "Thu, 2 Apr 2015 21:59:26 GMT", "version": "v1" } ]
2015-04-06
[ [ "McMurtrey", "Richard J.", "" ] ]
Neurological diseases and injuries present some of the greatest challenges in modern medicine, often causing irreversible, lifelong burdens for the people they afflict. These diagnoses have devastating consequences for millions of people each year, and yet there are currently no therapies or interventions that can repair the structure of neural circuits and restore neural tissue function in the brain and spinal cord. Despite the challenges of overcoming these limitations, there are many new approaches under development that hold much promise. Neural tissue engineering aims to restore and influence the function of damaged or diseased neural tissue, generally through the use of stem cells and biomaterials. In this paper, several new 3D tissue constructs and designs are described for functional reconstruction of neural architecture. With the use of induced pluripotent stem cells or induced neuronal cells, these 3D constructs could then be studied as regional models of the central nervous system or could one day be implanted as autologous grafts into damaged sites of the nervous system in order to restore neural function, particularly for damaged sites of the spinal cord, areas of stroke infarction, tumor resection sites, peripheral nerve injuries, or areas of neurodegeneration.
1412.6325
Robert Leech
Peter J. Hellyer, Barbara Jachs, Robert Leech, Claudia Clopath
Local inhibitory plasticity tunes global brain dynamics and allows the emergence of functional brain networks
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rich, spontaneous brain activity has been observed across a range of temporal and spatial scales. These dynamics are thought to be important for efficient neural functioning. Experimental evidence suggests that these neural dynamics are maintained across a variety of cognitive states, in response to alterations of the environment and to changes in brain configuration (e.g., across individuals, development and in many neurological disorders). This suggests that the brain has evolved mechanisms to stabilize dynamics and maintain them across a range of situations. Here, we employ a local homeostatic inhibitory plasticity mechanism, balancing inhibitory and excitatory activity in a model of macroscopic brain activity based on white-matter structural connectivity. We demonstrate that the addition of homeostatic plasticity regulates network activity and allows for the emergence of rich, spontaneous dynamics across a range of brain configurations. Furthermore, the presence of homeostatic plasticity maximises the overlap between empirical and simulated patterns of functional connectivity. Therefore, this work presents a simple, local, biologically plausible inhibitory mechanism that allows stable dynamics to emerge in the brain and which facilitates the formation of functional connectivity networks.
[ { "created": "Fri, 19 Dec 2014 13:08:04 GMT", "version": "v1" }, { "created": "Tue, 20 Jan 2015 00:02:22 GMT", "version": "v2" } ]
2015-01-21
[ [ "Hellyer", "Peter J.", "" ], [ "Jachs", "Barbara", "" ], [ "Leech", "Robert", "" ], [ "Clopath", "Claudia", "" ] ]
Rich, spontaneous brain activity has been observed across a range of temporal and spatial scales. These dynamics are thought to be important for efficient neural functioning. Experimental evidence suggests that these neural dynamics are maintained across a variety of cognitive states, in response to alterations of the environment and to changes in brain configuration (e.g., across individuals, development and in many neurological disorders). This suggests that the brain has evolved mechanisms to stabilize dynamics and maintain them across a range of situations. Here, we employ a local homeostatic inhibitory plasticity mechanism, balancing inhibitory and excitatory activity in a model of macroscopic brain activity based on white-matter structural connectivity. We demonstrate that the addition of homeostatic plasticity regulates network activity and allows for the emergence of rich, spontaneous dynamics across a range of brain configurations. Furthermore, the presence of homeostatic plasticity maximises the overlap between empirical and simulated patterns of functional connectivity. Therefore, this work presents a simple, local, biologically plausible inhibitory mechanism that allows stable dynamics to emerge in the brain and which facilitates the formation of functional connectivity networks.
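A minimal rate-based reduction of one widely used local homeostatic inhibitory plasticity rule (in the spirit of Vogels et al., 2011) is sketched below: the inhibitory weight onto the excitatory population grows when excitatory activity exceeds a target rate and shrinks otherwise. This two-population toy model and its parameters are our own illustration; the paper's whole-brain model, built on white-matter structural connectivity, may use a different specific rule.

```python
# Two-population rate model (excitatory E, inhibitory I) with a local
# homeostatic rule on the inhibitory weight onto E: presynaptic inhibitory
# activity gates an update whose sign follows the deviation of the
# postsynaptic rate from a target rho0 (Vogels et al.-style). Illustrative
# reduction only, not the paper's structural-connectivity model.

def relu(x):
    return max(x, 0.0)

dt, T = 1e-3, 60.0
w_ee, w_ei, eta, rho0, I_ext = 1.2, 1.0, 5e-3, 5.0, 10.0
r_e, r_i = 0.0, 0.0
w_ie = 0.5  # plastic inhibitory weight onto E

for _ in range(int(T / dt)):
    r_e += dt * (-r_e + relu(w_ee * r_e - w_ie * r_i + I_ext))
    r_i += dt * (-r_i + relu(w_ei * r_e))
    w_ie += dt * eta * r_i * (r_e - rho0)  # local homeostatic update

# E settles near the target rate once inhibition is tuned.
print(f"steady E rate ~ {r_e:.2f} (target {rho0}), learned w_ie ~ {w_ie:.2f}")
```

At the fixed point r_e = rho0 this rule predicts w_ie = (w_ee*rho0 + I_ext - rho0) / (w_ei*rho0) = 2.2 for these illustrative parameters, which the simulation approaches.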
1907.05064
Francesc Rossell\'o
Tom\'as M. Coronado, Mareike Fischer, Lina Herbst, Francesc Rossell\'o, Kristina Wicke
On the minimum value of the Colless index and the bifurcating trees that achieve it
61 pages. This paper is the result of merging our previous preprints arXiv:1903.11670 [q-bio.PE] and arXiv:1904.09771 [math.CO] into a single joint manuscript. Several proofs are new
null
null
null
q-bio.PE cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Measures of tree balance play an important role in the analysis of phylogenetic trees. One of the oldest and most popular indices in this regard is the Colless index for rooted bifurcating trees, introduced by Colless (1982). While many of its statistical properties under different probabilistic models for phylogenetic trees have already been established, little is known about its minimum value and the trees that achieve it. In this manuscript, we fill this gap in the literature. To begin with, we derive both recursive and closed expressions for the minimum Colless index of a tree with $n$ leaves. Surprisingly, these expressions show a connection between the minimum Colless index and the so-called Blancmange curve, a fractal curve. We then fully characterize the tree shapes that achieve this minimum value and we introduce both an algorithm to generate them and a recurrence to count them. After focusing on two extremal classes of trees with minimum Colless index (the maximally balanced trees and the "greedy from the bottom" trees), we conclude by showing that all trees with minimum Colless index also have minimum Sackin index, another popular balance index.
[ { "created": "Thu, 11 Jul 2019 09:07:29 GMT", "version": "v1" }, { "created": "Mon, 17 Feb 2020 17:31:14 GMT", "version": "v2" } ]
2020-02-18
[ [ "Coronado", "Tomás M.", "" ], [ "Fischer", "Mareike", "" ], [ "Herbst", "Lina", "" ], [ "Rosselló", "Francesc", "" ], [ "Wicke", "Kristina", "" ] ]
Measures of tree balance play an important role in the analysis of phylogenetic trees. One of the oldest and most popular indices in this regard is the Colless index for rooted bifurcating trees, introduced by Colless (1982). While many of its statistical properties under different probabilistic models for phylogenetic trees have already been established, little is known about its minimum value and the trees that achieve it. In this manuscript, we fill this gap in the literature. To begin with, we derive both recursive and closed expressions for the minimum Colless index of a tree with $n$ leaves. Surprisingly, these expressions show a connection between the minimum Colless index and the so-called Blancmange curve, a fractal curve. We then fully characterize the tree shapes that achieve this minimum value and we introduce both an algorithm to generate them and a recurrence to count them. After focusing on two extremal classes of trees with minimum Colless index (the maximally balanced trees and the "greedy from the bottom" trees), we conclude by showing that all trees with minimum Colless index also have minimum Sackin index, another popular balance index.
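The minimum Colless index studied here can be written down directly from the definition (the Colless index sums, over internal nodes, the absolute difference between the leaf counts of the two child subtrees): split the n leaves at the root into a + (n - a), pay n - 2a for a <= n/2, and recurse. The memoized brute-force sketch below is our own illustration of that quantity; the paper derives much more efficient recursive and closed expressions.

```python
# Minimum Colless index over rooted bifurcating trees with n leaves,
# computed by brute force from the definition. Illustration only; the
# paper gives faster closed/recursive formulas (tied to the Blancmange
# curve) and characterizes the trees attaining the minimum.
from functools import lru_cache

@lru_cache(maxsize=None)
def min_colless(n: int) -> int:
    if n <= 2:  # a single leaf or a cherry is perfectly balanced
        return 0
    # Root split a + (n - a) with a <= n // 2 costs n - 2a >= 0.
    return min(min_colless(a) + min_colless(n - a) + (n - 2 * a)
               for a in range(1, n // 2 + 1))

print([min_colless(n) for n in range(1, 17)])
# Powers of two admit fully balanced trees, hence index 0:
assert min_colless(8) == 0 and min_colless(16) == 0
```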
1603.04023
Greg Stephens
Onno D Broekmans, Jarlath B Rodgers, William S Ryu and Greg J Stephens
Resolving coiled shapes reveals new reorientation behaviors in C. elegans
14 pages and 8 figures, including supplementary information
eLife 2016;5:e17227
10.7554/eLife.17227
null
q-bio.QM physics.bio-ph q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We exploit the reduced space of C. elegans postures to develop a novel tracking algorithm which captures both simple shapes and self-occluding coils, an important, yet unexplored, component of worm behavior. We apply our algorithm to show that visually complex, coiled sequences are a superposition of two simpler patterns: the body wave dynamics and a head-curvature pulse. We demonstrate the precise coiled dynamics of an escape response and uncover new behaviors in spontaneous, large-amplitude coils; deep reorientations occur through classical Omega-shaped postures and also through larger, new postural excitations which we label here as delta-turns. We find that omega and delta turns occur independently, the serpentine analog of a random left-right step, suggesting a distinct triggering mechanism. We also show that omega and delta turns display approximately equal rates and adapt to food-free conditions on a similar timescale, a simple strategy to avoid navigational bias.
[ { "created": "Sun, 13 Mar 2016 12:38:01 GMT", "version": "v1" } ]
2016-11-01
[ [ "Broekmans", "Onno D", "" ], [ "Rodgers", "Jarlath B", "" ], [ "Ryu", "William S", "" ], [ "Stephens", "Greg J", "" ] ]
We exploit the reduced space of C. elegans postures to develop a novel tracking algorithm which captures both simple shapes and self-occluding coils, an important, yet unexplored, component of worm behavior. We apply our algorithm to show that visually complex, coiled sequences are a superposition of two simpler patterns: the body wave dynamics and a head-curvature pulse. We demonstrate the precise coiled dynamics of an escape response and uncover new behaviors in spontaneous, large-amplitude coils; deep reorientations occur through classical Omega-shaped postures and also through larger, new postural excitations which we label here as delta-turns. We find that omega and delta turns occur independently, the serpentine analog of a random left-right step, suggesting a distinct triggering mechanism. We also show that omega and delta turns display approximately equal rates and adapt to food-free conditions on a similar timescale, a simple strategy to avoid navigational bias.
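The "reduced space of C. elegans postures" exploited here is commonly built by principal component analysis of body tangent angles (the "eigenworm" decomposition). The sketch below runs that decomposition on synthetic traveling-wave data standing in for real tracking output; the paper's contribution, resolving self-occluding coiled shapes, happens upstream of this step and is not reproduced.

```python
# "Eigenworm"-style posture decomposition: PCA on tangent-angle profiles
# of the body centerline. Synthetic sinusoidal data stand in for real
# tracking output; the paper's coil-resolving tracker is not reproduced.
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_segments = 5000, 100
s = np.linspace(0.0, 1.0, n_segments)  # position along the body
t = np.arange(n_frames) * 0.02         # time (s)

# Synthetic undulation: a traveling body wave plus measurement noise.
theta = (0.6 * np.sin(2 * np.pi * (3.0 * s[None, :] - 0.5 * t[:, None]))
         + 0.05 * rng.standard_normal((n_frames, n_segments)))

# PCA via SVD of the mean-centered angle matrix; rows of vt are the modes.
theta_c = theta - theta.mean(axis=0)
_, sing, vt = np.linalg.svd(theta_c, full_matrices=False)
var_explained = sing**2 / np.sum(sing**2)
print("variance captured by first 4 modes:", var_explained[:4].round(3))

# Project each frame onto the leading modes: low-dimensional posture space.
coords = theta_c @ vt[:4].T
print("posture-space trajectory shape:", coords.shape)  # (5000, 4)
```

A traveling wave decomposes into two quadrature spatial modes, so the first two components should capture nearly all of the variance in this synthetic example.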