Dataset fields (name: type, observed range across the dataset). Each record below lists these fields in this order, one value per line, with "null" where a field is empty.

id: string, 9–13 characters
submitter: string, 4–48 characters
authors: string, 4–9.62k characters
title: string, 4–343 characters
comments: string, 2–480 characters
journal-ref: string, 9–309 characters
doi: string, 12–138 characters
report-no: string, 277 distinct values
categories: string, 8–87 characters
license: string, 9 distinct values
orig_abstract: string, 27–3.76k characters
versions: list, 1–15 items
update_date: string, 10 characters
authors_parsed: list, 1–147 items
abstract: string, 24–3.75k characters
q-bio/0603026
Adiel Loinger
Azi Lipshtat, Adiel Loinger, Nathalie Q. Balaban and Ofer Biham
Genetic Toggle Switch Without Cooperative Binding
10 pages, 4 figures
null
10.1103/PhysRevLett.96.188101
null
q-bio.MN
null
Genetic switch systems with mutual repression of two transcription factors are studied using deterministic and stochastic methods. Numerous studies have concluded that cooperative binding is a necessary condition for the emergence of bistability in these systems. Here we show that for a range of biologically relevant conditions, a suitable combination of network structure and stochastic effects gives rise to bistability even without cooperative binding.
[ { "created": "Wed, 22 Mar 2006 14:46:16 GMT", "version": "v1" } ]
2009-11-13
[ [ "Lipshtat", "Azi", "" ], [ "Loinger", "Adiel", "" ], [ "Balaban", "Nathalie Q.", "" ], [ "Biham", "Ofer", "" ] ]
Genetic switch systems with mutual repression of two transcription factors are studied using deterministic and stochastic methods. Numerous studies have concluded that cooperative binding is a necessary condition for the emergence of bistability in these systems. Here we show that for a range of biologically relevant conditions, a suitable combination of network structure and stochastic effects gives rise to bistability even without cooperative binding.
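As a rough illustration of the kind of stochastic simulation this abstract refers to, the sketch below runs a minimal Gillespie simulation of a mutual-repression switch in which each repressor binds the other gene's promoter as a single molecule (no cooperativity). The reaction set and all rate constants are assumptions made for illustration, not the model or parameters of Lipshtat et al.

```python
# Minimal Gillespie sketch of a mutual-repression switch without cooperative
# binding. All reactions and rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

g, d = 50.0, 1.0      # protein production and degradation rates (assumed)
kb, ku = 1.0, 0.1     # promoter binding / unbinding rates (assumed)

def gillespie(t_max=200.0):
    nA = nB = 0          # free protein copy numbers
    freeA = freeB = 1    # 1 if the promoter of gene A (resp. B) is unoccupied
    t, times, a_traj, b_traj = 0.0, [0.0], [0], [0]
    while t < t_max:
        rates = np.array([
            g * freeA,         # produce A
            g * freeB,         # produce B
            d * nA,            # degrade A
            d * nB,            # degrade B
            kb * nA * freeB,   # an A molecule binds the promoter of B
            kb * nB * freeA,   # a B molecule binds the promoter of A
            ku * (1 - freeB),  # A unbinds from the promoter of B
            ku * (1 - freeA),  # B unbinds from the promoter of A
        ])
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)
        r = rng.choice(len(rates), p=rates / total)
        if r == 0:   nA += 1
        elif r == 1: nB += 1
        elif r == 2: nA -= 1
        elif r == 3: nB -= 1
        elif r == 4: nA -= 1; freeB = 0
        elif r == 5: nB -= 1; freeA = 0
        elif r == 6: nA += 1; freeB = 1
        else:        nB += 1; freeA = 1
        times.append(t); a_traj.append(nA); b_traj.append(nB)
    return np.array(times), np.array(a_traj), np.array(b_traj)

t, a, b = gillespie()
print("fraction of sampled events with A > B:", np.mean(a > b))
```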
2201.07075
Mario Martinez-Saito
Mario Martinez-Saito
Discrete scaling and criticality in a chain of adaptive excitable integrators
null
null
10.1016/j.chaos.2022.112574
null
q-bio.NC nlin.AO
http://creativecommons.org/licenses/by/4.0/
We describe a chain of unidirectionally coupled adaptive excitable elements slowly driven by a stochastic process from one end and open at the other end, as a minimal toy model of unresolved irreducible uncertainty in a system performing inference through a hierarchical model. Threshold potentials adapt slowly to ensure sensitivity without being wasteful. Activity and energy are released as intermittent avalanches of pulses with a discrete scaling distribution largely independent of the exogenous input form. Subthreshold activities and threshold potentials exhibit Lorentzian temporal spectra, with a power-law range determined by position in the chain. Subthreshold bistability closely resembles empirical measurements of intracellular membrane potential. We suggest that critical cortical cascades emerge from a trade-off between metabolic power consumption and performance requirements in a critical world, and that the temporal scaling patterns of brain electrophysiological recordings ensue from weighted linear combinations of subthreshold activities and pulses from different hierarchy levels.
[ { "created": "Tue, 18 Jan 2022 15:59:23 GMT", "version": "v1" }, { "created": "Wed, 29 Jun 2022 18:37:08 GMT", "version": "v10" }, { "created": "Wed, 19 Jan 2022 15:55:39 GMT", "version": "v2" }, { "created": "Sun, 23 Jan 2022 13:59:47 GMT", "version": "v3" }, { "...
2022-09-14
[ [ "Martinez-Saito", "Mario", "" ] ]
We describe a chain of unidirectionally coupled adaptive excitable elements slowly driven by a stochastic process from one end and open at the other end, as a minimal toy model of unresolved irreducible uncertainty in a system performing inference through a hierarchical model. Threshold potentials adapt slowly to ensure sensitivity without being wasteful. Activity and energy are released as intermittent avalanches of pulses with a discrete scaling distribution largely independent of the exogenous input form. Subthreshold activities and threshold potentials exhibit Lorentzian temporal spectra, with a power-law range determined by position in the chain. Subthreshold bistability closely resembles empirical measurements of intracellular membrane potential. We suggest that critical cortical cascades emerge from a trade-off between metabolic power consumption and performance requirements in a critical world, and that the temporal scaling patterns of brain electrophysiological recordings ensue from weighted linear combinations of subthreshold activities and pulses from different hierarchy levels.
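A toy discrete-time version of the architecture sketched in this abstract (a unidirectional chain of excitable integrators with slowly adapting thresholds, driven stochastically at one end) might look like the following; the update rules and constants are assumptions for illustration only, not the author's model.

```python
# Toy chain of excitable integrators with slowly adapting thresholds,
# driven stochastically at one end. All constants are assumed.
import numpy as np

rng = np.random.default_rng(1)
n_units, n_steps = 5, 50_000
leak, eta = 0.001, 1e-3          # weak leak; slow threshold adaptation
v = np.zeros(n_units)            # subthreshold activities
theta = np.ones(n_units)         # adaptive firing thresholds
pulses = np.zeros((n_steps, n_units), dtype=bool)

for t in range(n_steps):
    drive = np.zeros(n_units)
    drive[0] = rng.exponential(0.002)        # slow stochastic drive at one end
    v = (1.0 - leak) * v + drive
    fired = v >= theta
    for i in np.flatnonzero(fired):
        if i + 1 < n_units:
            v[i + 1] += v[i]                 # pass accumulated activity downstream
        v[i] = 0.0                           # firing empties the unit
    pulses[t] = fired
    # thresholds creep down while silent and step up after a pulse
    theta += eta * (fired.astype(float) - 0.01 * (~fired))
    theta = np.clip(theta, 0.05, None)

print("pulse rate per unit:", pulses.mean(axis=0))
```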
1801.08108
Matthias Keil
Matthias S. Keil, Elisenda Roca-Moreno, and Angel Rodriguez-Vazquez
A neural model of the locust visual system for detection of object approaches with real-world scenes
Originally published in the Proceedings of the Fourth IASTED International Conference on Visualization, Imaging, and Image Processing, September 6-8, 2004, Marbella, Spain (see http://www.actapress.com/Abstract.aspx?paperId=18773)
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the central nervous systems of animals like pigeons and locusts, neurons were identified which signal objects approaching the animal on a direct collision course. Unraveling the neural circuitry for collision avoidance, and identifying the underlying computational principles, is promising for building vision-based neuromorphic architectures, which in the near future could find applications in cars or planes. At present, no published model is available for robust detection of approaching objects under real-world conditions. Here we present a computational architecture for signalling impending collisions, based on known anatomical data of the locust \emph{lobula giant movement detector} (LGMD) neuron. Our model shows robust performance even in adverse situations, such as with approaching low-contrast objects, or with highly textured and moving backgrounds. We furthermore discuss which components need to be added to our model to convert it into a full-fledged real-world-environment collision detector. KEYWORDS: Locust, LGMD, collision detection, lateral inhibition, diffusion, ON-OFF-pathways, neuronal dynamics, computer vision, image processing
[ { "created": "Wed, 24 Jan 2018 18:13:12 GMT", "version": "v1" } ]
2018-01-25
[ [ "Keil", "Matthias S.", "" ], [ "Roca-Moreno", "Elisenda", "" ], [ "Rodriguez-Vazquez", "Angel", "" ] ]
In the central nervous systems of animals like pigeons and locusts, neurons were identified which signal objects approaching the animal on a direct collision course. Unraveling the neural circuitry for collision avoidance, and identifying the underlying computational principles, is promising for building vision-based neuromorphic architectures, which in the near future could find applications in cars or planes. At present, no published model is available for robust detection of approaching objects under real-world conditions. Here we present a computational architecture for signalling impending collisions, based on known anatomical data of the locust \emph{lobula giant movement detector} (LGMD) neuron. Our model shows robust performance even in adverse situations, such as with approaching low-contrast objects, or with highly textured and moving backgrounds. We furthermore discuss which components need to be added to our model to convert it into a full-fledged real-world-environment collision detector. KEYWORDS: Locust, LGMD, collision detection, lateral inhibition, diffusion, ON-OFF-pathways, neuronal dynamics, computer vision, image processing
2311.02471
Sam Subbey
Salah Alrabeei, Talal Rahman, Sam Subbey
Efficient Large-Scale Simulation of Fish Schooling Behavior Using Voronoi Tessellations and Fuzzy Clustering
11 pages
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
This paper introduces an efficient approach to reduce the computational cost of simulating collective behaviors, such as fish schooling, using Individual-Based Models (IBMs). The proposed technique employs adaptive and dynamic load-balancing domain partitioning, which utilizes unsupervised machine-learning models to cluster a large number of simulated individuals into sub-schools based on their spatial-temporal locations. It also utilizes Voronoi tessellations to construct non-overlapping simulation subdomains. This approach minimizes agent-to-agent communication and balances the load both spatially and temporally, ultimately resulting in reduced computational complexity. Experimental simulations demonstrate that this partitioning approach outperforms the standard regular grid-based domain decomposition, achieving a reduction in computational cost while maintaining spatial and temporal load balance. The approach presented in this paper has the potential to be applied to other collective behavior simulations requiring large-scale simulations with a substantial number of individuals.
[ { "created": "Sat, 4 Nov 2023 18:11:09 GMT", "version": "v1" } ]
2023-11-07
[ [ "Alrabeei", "Salah", "" ], [ "Rahman", "Talal", "" ], [ "Subbey", "Sam", "" ] ]
This paper introduces an efficient approach to reduce the computational cost of simulating collective behaviors, such as fish schooling, using Individual-Based Models (IBMs). The proposed technique employs adaptive and dynamic load-balancing domain partitioning, which utilizes unsupervised machine-learning models to cluster a large number of simulated individuals into sub-schools based on their spatial-temporal locations. It also utilizes Voronoi tessellations to construct non-overlapping simulation subdomains. This approach minimizes agent-to-agent communication and balances the load both spatially and temporally, ultimately resulting in reduced computational complexity. Experimental simulations demonstrate that this partitioning approach outperforms the standard regular grid-based domain decomposition, achieving a reduction in computational cost while maintaining spatial and temporal load balance. The approach presented in this paper has the potential to be applied to other collective behavior simulations requiring large-scale simulations with a substantial number of individuals.
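The partitioning idea in this abstract can be illustrated with standard tools: cluster agent positions into sub-schools and use a Voronoi tessellation of the cluster centroids as non-overlapping subdomains. In the sketch below, KMeans stands in for the fuzzy clustering step, and all positions, sizes, and seeds are made up for illustration.

```python
# Sketch: cluster fake fish positions into sub-schools, then treat the Voronoi
# cells of the cluster centroids as non-overlapping simulation subdomains.
import numpy as np
from scipy.spatial import Voronoi
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# fake fish positions: three loose schools in a 2-D domain
positions = np.vstack([
    rng.normal(loc=c, scale=5.0, size=(2_000, 2))
    for c in [(0.0, 0.0), (50.0, 10.0), (20.0, 60.0)]
])

n_subdomains = 8
km = KMeans(n_clusters=n_subdomains, n_init=10, random_state=0).fit(positions)
labels, centroids = km.labels_, km.cluster_centers_

# The Voronoi cells of the centroids define the subdomains; assigning each
# agent to its nearest centroid places it in exactly one cell.
vor = Voronoi(centroids)
load = np.bincount(labels, minlength=n_subdomains)
print("agents per subdomain:", load)
print("load imbalance (max/mean):", load.max() / load.mean())
print("neighbouring subdomain pairs:", len(vor.ridge_points))
```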
2306.13300
Jun Chen
Jun Chen, Jordy O Rodriguez Rincon, Gloria DeGrandi-Hoffman, Jennifer Fewell, Jon Harrison and Yun Kang
Impacts of seasonality and parasitism on honey bee population dynamics
null
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The honeybee plays an extremely important role in ecosystem stability and diversity and in the production of bee pollinated crops. Honey bees and other pollinators are under threat from the combined effects of nutritional stress, parasitism, pesticides, and climate change that impact the timing, duration, and variability of seasonal events. To understand how parasitism and seasonality influence honey bee colonies separately and interactively, we developed a non-autonomous nonlinear honeybee-parasite interaction differential equation model that incorporates seasonality into the egg-laying rate of the queen. Our theoretical results show that parasitism negatively impacts the honey bee population either by decreasing colony size or destabilizing population dynamics through supercritical or subcritical Hopf-bifurcations depending on conditions. Our bifurcation analysis and simulations suggest that seasonality alone may have positive or negative impacts on the survival of honey bee colonies. More specifically, our study indicates that (1) the timing of the maximum egg-laying rate seems to determine when seasonality has positive or negative impacts; and (2) when the period of seasonality is large it can lead to the colony collapsing. Our study further suggests that the synergistic influences of parasitism and seasonality can lead to complicated dynamics that may positively and negatively impact the honey bee colony's survival. Our work partially uncovers the intrinsic effects of climate change and parasites, which potentially provide essential insights into how best to maintain or improve a honey bee colony's health.
[ { "created": "Fri, 23 Jun 2023 05:35:24 GMT", "version": "v1" } ]
2023-06-26
[ [ "Chen", "Jun", "" ], [ "Rincon", "Jordy O Rodriguez", "" ], [ "DeGrandi-Hoffman", "Gloria", "" ], [ "Fewell", "Jennifer", "" ], [ "Harrison", "Jon", "" ], [ "Kang", "Yun", "" ] ]
The honeybee plays an extremely important role in ecosystem stability and diversity and in the production of bee pollinated crops. Honey bees and other pollinators are under threat from the combined effects of nutritional stress, parasitism, pesticides, and climate change that impact the timing, duration, and variability of seasonal events. To understand how parasitism and seasonality influence honey bee colonies separately and interactively, we developed a non-autonomous nonlinear honeybee-parasite interaction differential equation model that incorporates seasonality into the egg-laying rate of the queen. Our theoretical results show that parasitism negatively impacts the honey bee population either by decreasing colony size or destabilizing population dynamics through supercritical or subcritical Hopf-bifurcations depending on conditions. Our bifurcation analysis and simulations suggest that seasonality alone may have positive or negative impacts on the survival of honey bee colonies. More specifically, our study indicates that (1) the timing of the maximum egg-laying rate seems to determine when seasonality has positive or negative impacts; and (2) when the period of seasonality is large it can lead to the colony collapsing. Our study further suggests that the synergistic influences of parasitism and seasonality can lead to complicated dynamics that may positively and negatively impact the honey bee colony's survival. Our work partially uncovers the intrinsic effects of climate change and parasites, which potentially provide essential insights into how best to maintain or improve a honey bee colony's health.
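A minimal non-autonomous ODE of the kind described above, with a sinusoidal egg-laying rate, can be integrated with SciPy as in the sketch below; the functional forms and parameter values are illustrative assumptions, not the equations or parameters of Chen et al.

```python
# Minimal bee-parasite model with a seasonal egg-laying rate (assumed forms).
import numpy as np
from scipy.integrate import solve_ivp

def egg_laying(t, base=1500.0, amplitude=0.8, period=365.0, phase=100.0):
    # seasonal egg-laying rate of the queen (eggs/day), peaking once per year
    return base * (1.0 + amplitude * np.cos(2.0 * np.pi * (t - phase) / period))

def rhs(t, y, death_bee=0.04, death_par=0.08, attack=1e-5, half_sat=5000.0):
    bees, parasites = y
    recruitment = egg_laying(t) * bees / (half_sat + bees)   # brood needs workers
    dbees = recruitment - death_bee * bees - attack * bees * parasites
    dpar = attack * bees * parasites - death_par * parasites
    return [dbees, dpar]

sol = solve_ivp(rhs, (0.0, 5 * 365.0), [10_000.0, 50.0], max_step=1.0)
print("colony size after 5 years:", sol.y[0, -1])
print("parasite load after 5 years:", sol.y[1, -1])
```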
q-bio/0610005
Shin-Ichiro Nishimura
Shin I. Nishimura and Masaki Sasai
Modulation of the reaction rate of regulating protein induces large morphological and motional change of amoebic cell
17 pages including 4 figures, latex source, Journal of Theoretical Biology (in press)
null
null
null
q-bio.CB
null
Morphologies of moving amoebae are categorized into two types. One is the ``neutrophil'' type, in which the long axis of the cell roughly coincides with its moving direction. This type of cell extends a leading edge at the front and retracts a narrow tail at the rear, a shape that has often been drawn as the typical amoeba in textbooks. The other is the ``keratocyte'' type, with widespread lamellipodia along the front arc; the short axis of the cell in this type roughly coincides with its moving direction. In order to understand what kind of molecular feature causes conversion between the two types of morphologies, and how the two typical morphologies are maintained, a mathematical model of amoebic cells is developed. This model describes cell movement and the intracellular reactions of activator, inhibitor and actin filaments in a unified way. It is found that the production rate of the activator is a key factor in the conversion between the two types. The model also explains the observation that keratocyte-type cells tend to move rapidly along a straight line. Neutrophil-type cells move along a straight line when the moving velocity is small, but they show fluctuating motions deviating from a line when they move as fast as keratocyte-type cells. Efficient energy consumption in neutrophil-type cells is predicted.
[ { "created": "Mon, 2 Oct 2006 20:01:31 GMT", "version": "v1" } ]
2007-05-23
[ [ "Nishimura", "Shin I.", "" ], [ "Sasai", "Masaki", "" ] ]
Morphologies of moving amoebae are categorized into two types. One is the ``neutrophil'' type, in which the long axis of the cell roughly coincides with its moving direction. This type of cell extends a leading edge at the front and retracts a narrow tail at the rear, a shape that has often been drawn as the typical amoeba in textbooks. The other is the ``keratocyte'' type, with widespread lamellipodia along the front arc; the short axis of the cell in this type roughly coincides with its moving direction. In order to understand what kind of molecular feature causes conversion between the two types of morphologies, and how the two typical morphologies are maintained, a mathematical model of amoebic cells is developed. This model describes cell movement and the intracellular reactions of activator, inhibitor and actin filaments in a unified way. It is found that the production rate of the activator is a key factor in the conversion between the two types. The model also explains the observation that keratocyte-type cells tend to move rapidly along a straight line. Neutrophil-type cells move along a straight line when the moving velocity is small, but they show fluctuating motions deviating from a line when they move as fast as keratocyte-type cells. Efficient energy consumption in neutrophil-type cells is predicted.
2403.07147
Akram Yazdani PhD
Akram Yazdani, Maureen Samms-Vaughan, Sepideh Saroukhani, Jan Bressler, Manouchehr Hessabi, Amirali Tahanan, Megan L. Grove, Tanja Gangnus, Vasanta Putluri, Abu Hena Mostafa Kamal, Nagireddy Putluri, Katherine A. Loveland, Mohammad H. Rahbar
Metabolomic profiles in Jamaican children with and without autism spectrum disorder
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition with a wide range of behavioral and cognitive impairments. While genetic and environmental factors are known to contribute to its etiology, the underlying metabolic perturbations associated with ASD which can potentially connect genetic and environmental factors, remain poorly understood. Therefore, we conducted a metabolomic case-control study and performed a comprehensive analysis to identify significant alterations in metabolite profiles between children with ASD and typically developing (TD) controls. The objective of this study is to elucidate potential metabolomic signatures associated with ASD in children and identify specific metabolites that may serve as biomarkers for the disorder. We conducted metabolomic profiling on plasma samples from participants in the second phase of Epidemiological Research on Autism in Jamaica, a cohort of 200 children with ASD and 200 TD controls (2-8 years old). Using high-throughput liquid chromatography-mass spectrometry techniques, we performed a targeted metabolite analysis, encompassing amino acids, lipids, carbohydrates, and other key metabolic compounds. After quality control and imputation of missing values, we performed univariable and multivariable analysis using normalized metabolites while adjusting for covariates, age, sex, socioeconomic status, and child's parish of birth. Our findings revealed unique metabolic patterns in children with ASD for four metabolites compared to TD controls. Notably, three of these metabolites were fatty acids, including myristoleic acid, eicosatetraenoic acid, and octadecenoic acid. Additionally, the amino acid sarcosine exhibited a significant association with ASD. These findings highlight the role of metabolites in the etiology of ASD and suggest opportunities for the development of targeted interventions.
[ { "created": "Mon, 11 Mar 2024 20:34:24 GMT", "version": "v1" } ]
2024-03-13
[ [ "Yazdani", "Akram", "" ], [ "Samms-Vaughan", "Maureen", "" ], [ "Saroukhani", "Sepideh", "" ], [ "Bressler", "Jan", "" ], [ "Hessabi", "Manouchehr", "" ], [ "Tahanan", "Amirali", "" ], [ "Grove", "Megan L.", ...
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition with a wide range of behavioral and cognitive impairments. While genetic and environmental factors are known to contribute to its etiology, the underlying metabolic perturbations associated with ASD which can potentially connect genetic and environmental factors, remain poorly understood. Therefore, we conducted a metabolomic case-control study and performed a comprehensive analysis to identify significant alterations in metabolite profiles between children with ASD and typically developing (TD) controls. The objective of this study is to elucidate potential metabolomic signatures associated with ASD in children and identify specific metabolites that may serve as biomarkers for the disorder. We conducted metabolomic profiling on plasma samples from participants in the second phase of Epidemiological Research on Autism in Jamaica, a cohort of 200 children with ASD and 200 TD controls (2-8 years old). Using high-throughput liquid chromatography-mass spectrometry techniques, we performed a targeted metabolite analysis, encompassing amino acids, lipids, carbohydrates, and other key metabolic compounds. After quality control and imputation of missing values, we performed univariable and multivariable analysis using normalized metabolites while adjusting for covariates, age, sex, socioeconomic status, and child's parish of birth. Our findings revealed unique metabolic patterns in children with ASD for four metabolites compared to TD controls. Notably, three of these metabolites were fatty acids, including myristoleic acid, eicosatetraenoic acid, and octadecenoic acid. Additionally, the amino acid sarcosine exhibited a significant association with ASD. These findings highlight the role of metabolites in the etiology of ASD and suggest opportunities for the development of targeted interventions.
1507.07039
Kimberly Glass
Kimberly Glass, John Quackenbush, Jeremy Kepner
High Performance Computing of Gene Regulatory Networks using a Message-Passing Model
null
null
10.1109/HPEC.2015.7322475
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene regulatory network reconstruction is a fundamental problem in computational biology. We recently developed an algorithm, called PANDA (Passing Attributes Between Networks for Data Assimilation), that integrates multiple sources of 'omics data and estimates regulatory network models. This approach was initially implemented in the C++ programming language and has since been applied to a number of biological systems. In our current research we are beginning to expand the algorithm to incorporate larger and more diverse data-sets, to reconstruct networks that contain increasing numbers of elements, and to build not only single network models, but sets of networks. In order to accomplish these "Big Data" applications, it has become critical that we increase the computational efficiency of the PANDA implementation. In this paper we show how to recast PANDA's similarity equations as matrix operations. This allows us to implement a highly readable version of the algorithm using the MATLAB/Octave programming language. We find that the resulting M-code is much shorter (103 compared to 1128 lines) and more easily modifiable for potential future applications. The new implementation also runs significantly faster, with increasing efficiency as the network models increase in size. Tests comparing the C-code and M-code versions of PANDA demonstrate that this speed-up is on the order of 20-80 times for networks of similar dimensions to those we find in current biological applications.
[ { "created": "Fri, 24 Jul 2015 22:51:11 GMT", "version": "v1" } ]
2017-04-18
[ [ "Glass", "Kimberly", "" ], [ "Quackenbush", "John", "" ], [ "Kepner", "Jeremy", "" ] ]
Gene regulatory network reconstruction is a fundamental problem in computational biology. We recently developed an algorithm, called PANDA (Passing Attributes Between Networks for Data Assimilation), that integrates multiple sources of 'omics data and estimates regulatory network models. This approach was initially implemented in the C++ programming language and has since been applied to a number of biological systems. In our current research we are beginning to expand the algorithm to incorporate larger and more diverse data-sets, to reconstruct networks that contain increasing numbers of elements, and to build not only single network models, but sets of networks. In order to accomplish these "Big Data" applications, it has become critical that we increase the computational efficiency of the PANDA implementation. In this paper we show how to recast PANDA's similarity equations as matrix operations. This allows us to implement a highly readable version of the algorithm using the MATLAB/Octave programming language. We find that the resulting M-code is much shorter (103 compared to 1128 lines) and more easily modifiable for potential future applications. The new implementation also runs significantly faster, with increasing efficiency as the network models increase in size. Tests comparing the C-code and M-code versions of PANDA demonstrate that this speed-up is on the order of 20-80 times for networks of similar dimensions to those we find in current biological applications.
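The core point of this abstract, recasting per-pair similarity loops as dense matrix operations, can be illustrated in a few lines. The Tanimoto-style similarity below is a generic example and not necessarily PANDA's exact update equations; the input matrices are fake data.

```python
# A per-pair similarity computed with nested loops versus the same quantity
# expressed as whole-array operations (generic example, not PANDA's formula).
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(100, 200))   # e.g. TF-by-gene regulatory scores (fake)
C = rng.normal(size=(200, 150))   # e.g. gene-by-condition expression scores (fake)

def similarity_loops(R, C):
    # reference implementation with explicit loops over all pairs
    out = np.empty((R.shape[0], C.shape[1]))
    for i in range(R.shape[0]):
        for j in range(C.shape[1]):
            dot = R[i] @ C[:, j]
            out[i, j] = dot / np.sqrt(R[i] @ R[i] + C[:, j] @ C[:, j] - abs(dot))
    return out

def similarity_matrix(R, C):
    # the same quantity as a few dense matrix operations
    dot = R @ C
    norm_r = np.sum(R * R, axis=1)[:, None]
    norm_c = np.sum(C * C, axis=0)[None, :]
    return dot / np.sqrt(norm_r + norm_c - np.abs(dot))

assert np.allclose(similarity_loops(R, C), similarity_matrix(R, C))
print("matrix form matches the loop form")
```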
2408.05298
Mariah C. Boudreau
Mariah C. Boudreau, Jamie A. Cohen, Laurent H\'ebert-Dufresne
Within-host infection dynamics with master equations and the method of moments: A case study of human papillomavirus in the epithelium
15 pages, 8 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Master equations provide researchers with the ability to track the distribution over possible states of a system. From these equations, we can summarize the temporal dynamics through a method of moments. These distributions and their moments capture the stochastic nature of a system, which is essential to study infectious diseases. In this paper, we define the states of the system to be the number of infected cells of a given type in the epithelium, the hollow organ tissue in the human body. Epithelium found in the cervix provides a location for viral infections to live and persist, such as human papillomavirus (HPV). HPV is a highly transmissible disease which most commonly affects biological females and has the potential to progress into cervical cancer. By defining a master equation model which tracks the infected cell layer dynamics, information on disease extinction, progression, and viral output can be derived from the method of moments. From this methodology and the outcomes we glean from it, we aim to inform differing states of HPV infected cells, and assess the effects of structural information for each outcome.
[ { "created": "Fri, 9 Aug 2024 18:48:06 GMT", "version": "v1" } ]
2024-08-13
[ [ "Boudreau", "Mariah C.", "" ], [ "Cohen", "Jamie A.", "" ], [ "Hébert-Dufresne", "Laurent", "" ] ]
Master equations provide researchers with the ability to track the distribution over possible states of a system. From these equations, we can summarize the temporal dynamics through a method of moments. These distributions and their moments capture the stochastic nature of a system, which is essential to study infectious diseases. In this paper, we define the states of the system to be the number of infected cells of a given type in the epithelium, the hollow organ tissue in the human body. Epithelium found in the cervix provides a location for viral infections to live and persist, such as human papillomavirus (HPV). HPV is a highly transmissible disease which most commonly affects biological females and has the potential to progress into cervical cancer. By defining a master equation model which tracks the infected cell layer dynamics, information on disease extinction, progression, and viral output can be derived from the method of moments. From this methodology and the outcomes we glean from it, we aim to inform differing states of HPV infected cells, and assess the effects of structural information for each outcome.
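The master-equation-plus-moments workflow described above can be sketched on a much simpler system: a linear birth-death process for the number of infected cells, whose truncated master equation is integrated directly and then summarized by its first two moments. Rates and the truncation level are illustrative assumptions, not the authors' HPV model.

```python
# Truncated master equation for a linear birth-death process, summarized by
# its mean and variance (illustrative rates, not the HPV model).
import numpy as np
from scipy.integrate import solve_ivp

birth, death, n_max = 0.9, 1.0, 200    # per-cell infection and clearance rates

def master_rhs(t, p):
    n = np.arange(n_max + 1)
    dp = -(birth + death) * n * p          # probability outflow from each state
    dp[1:] += birth * n[:-1] * p[:-1]      # inflow from state n-1 (new infection)
    dp[:-1] += death * n[1:] * p[1:]       # inflow from state n+1 (clearance)
    return dp

p0 = np.zeros(n_max + 1)
p0[5] = 1.0                                # start with 5 infected cells
sol = solve_ivp(master_rhs, (0.0, 10.0), p0, max_step=0.05)

n = np.arange(n_max + 1)
mean = sol.y.T @ n
var = sol.y.T @ n**2 - mean**2
print("extinction probability at t=10:", sol.y[0, -1])
print("mean and variance at t=10:", mean[-1], var[-1])
```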
1811.11846
Jesse Meyer
Jesse G. Meyer
Fast Proteome Identification and Quantification from Data-Dependent Acquisition - Tandem Mass Spectrometry using Free Software Tools
null
Methods and Protocols 2019
10.3390/mps2010008
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Identification of nearly all proteins in a system using data-dependent acquisition (DDA) mass spectrometry has become routine for simple organisms, such as bacteria and yeast. Still, quantification of the identified proteins may be a complex process and require multiple different software packages. This protocol describes identification and label-free quantification of proteins from bottom-up proteomics experiments. This method can be used to quantify all the detectable proteins in any DDA dataset collected with high-resolution precursor scans. This protocol may be used to quantify proteome remodeling in response to a drug treatment or a gene knockout. Notably, the method uses the latest and fastest freely-available software, and the entire protocol can be completed in a few hours with data from organisms with relatively small genomes, such as yeast or bacteria.
[ { "created": "Wed, 28 Nov 2018 21:29:54 GMT", "version": "v1" } ]
2019-01-21
[ [ "Meyer", "Jesse G.", "" ] ]
Identification of nearly all proteins in a system using data-dependent acquisition (DDA) mass spectrometry has become routine for simple organisms, such as bacteria and yeast. Still, quantification of the identified proteins may be a complex process and require multiple different software packages. This protocol describes identification and label-free quantification of proteins from bottom-up proteomics experiments. This method can be used to quantify all the detectable proteins in any DDA dataset collected with high-resolution precursor scans. This protocol may be used to quantify proteome remodeling in response to a drug treatment or a gene knockout. Notably, the method uses the latest and fastest freely-available software, and the entire protocol can be completed in a few hours with data from organisms with relatively small genomes, such as yeast or bacteria.
2202.03534
Lucas Czech
Lucas Czech, Alexandros Stamatakis, Micah Dunthorn, Pierre Barbera
Metagenomic Analysis using Phylogenetic Placement -- A Review of the First Decade
null
null
10.3389/fbinf.2022.871393
null
q-bio.PE q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Phylogenetic placement refers to a family of tools and methods to analyze, visualize, and interpret the tsunami of metagenomic sequencing data generated by high-throughput sequencing. Compared to alternative (e.g., similarity-based) methods, it puts metabarcoding sequences into a phylogenetic context using a set of known reference sequences and taking evolutionary history into account. Thereby, one can increase the accuracy of metagenomic surveys and eliminate the requirement for having exact or close matches with existing sequence databases. Phylogenetic placement constitutes a valuable analysis tool per se, but also entails a plethora of downstream tools to interpret its results. A common use case is to analyze species communities obtained from metagenomic sequencing, for example via taxonomic assignment, diversity quantification, sample comparison, and identification of correlations with environmental variables. In this review, we provide an overview of the methods developed during the first ten years. In particular, the goals of this review are (i) to motivate the usage of phylogenetic placement and illustrate some of its use cases, (ii) to outline the full workflow, from raw sequences to publishable figures, including best practices, (iii) to introduce the most common tools and methods and their capabilities, (iv) to point out common placement pitfalls and misconceptions, (v) to showcase typical placement-based analyses, and how they can help to analyze, visualize, and interpret phylogenetic placement data.
[ { "created": "Mon, 7 Feb 2022 21:50:54 GMT", "version": "v1" }, { "created": "Fri, 18 Mar 2022 22:41:23 GMT", "version": "v2" } ]
2022-09-27
[ [ "Czech", "Lucas", "" ], [ "Stamatakis", "Alexandros", "" ], [ "Dunthorn", "Micah", "" ], [ "Barbera", "Pierre", "" ] ]
Phylogenetic placement refers to a family of tools and methods to analyze, visualize, and interpret the tsunami of metagenomic sequencing data generated by high-throughput sequencing. Compared to alternative (e.g., similarity-based) methods, it puts metabarcoding sequences into a phylogenetic context using a set of known reference sequences and taking evolutionary history into account. Thereby, one can increase the accuracy of metagenomic surveys and eliminate the requirement for having exact or close matches with existing sequence databases. Phylogenetic placement constitutes a valuable analysis tool per se, but also entails a plethora of downstream tools to interpret its results. A common use case is to analyze species communities obtained from metagenomic sequencing, for example via taxonomic assignment, diversity quantification, sample comparison, and identification of correlations with environmental variables. In this review, we provide an overview of the methods developed during the first ten years. In particular, the goals of this review are (i) to motivate the usage of phylogenetic placement and illustrate some of its use cases, (ii) to outline the full workflow, from raw sequences to publishable figures, including best practices, (iii) to introduce the most common tools and methods and their capabilities, (iv) to point out common placement pitfalls and misconceptions, (v) to showcase typical placement-based analyses, and how they can help to analyze, visualize, and interpret phylogenetic placement data.
1012.1611
Alexander Bershadskii
A. Bershadskii
Broken chaotic clocks of brain neurons and depression
null
null
null
null
q-bio.NC cond-mat.dis-nn nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Irregular spiking time series obtained in vitro and in vivo from single brain neurons of different types of rats are analyzed by mapping them to telegraph signals. Since the neural information is coded in the lengths of the interspike intervals and their positions on the time axis, this mapping is the most direct way to map a spike train into a signal that allows a proper application of Fourier transform methods. The analysis shows that the firing of healthy neurons has periodic and chaotic deterministic clocks, while for rats representing a genetic animal model of human depression these neuronal clocks might be broken, which results in decoherence between the firing of the depressive neurons. Since depression is usually accompanied by a narrowing of consciousness, this specific decoherence can also be considered a cause of the narrowing of consciousness. This suggestion is further supported by the observation of large-scale chaotic coherence of the posterior piriform and entorhinal cortices' electrical activity at the transition from anesthesia to the waking state with full consciousness.
[ { "created": "Tue, 7 Dec 2010 21:20:40 GMT", "version": "v1" }, { "created": "Sat, 5 Mar 2011 17:28:20 GMT", "version": "v2" } ]
2015-03-17
[ [ "Bershadskii", "A.", "" ] ]
Irregular spiking time series obtained in vitro and in vivo from single brain neurons of different types of rats are analyzed by mapping them to telegraph signals. Since the neural information is coded in the lengths of the interspike intervals and their positions on the time axis, this mapping is the most direct way to map a spike train into a signal that allows a proper application of Fourier transform methods. The analysis shows that the firing of healthy neurons has periodic and chaotic deterministic clocks, while for rats representing a genetic animal model of human depression these neuronal clocks might be broken, which results in decoherence between the firing of the depressive neurons. Since depression is usually accompanied by a narrowing of consciousness, this specific decoherence can also be considered a cause of the narrowing of consciousness. This suggestion is further supported by the observation of large-scale chaotic coherence of the posterior piriform and entorhinal cortices' electrical activity at the transition from anesthesia to the waking state with full consciousness.
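The mapping described in this abstract, from a spike train to a telegraph signal followed by Fourier analysis, can be sketched as below; the synthetic interspike intervals are only a stand-in for real recordings.

```python
# Map a spike train to a +/-1 telegraph signal that flips at every spike,
# then inspect its power spectrum (synthetic interspike intervals).
import numpy as np

rng = np.random.default_rng(3)
isi = rng.gamma(shape=2.0, scale=0.05, size=2_000)   # fake interspike intervals (s)
spike_times = np.cumsum(isi)

dt = 0.001                                            # sampling step (s)
t = np.arange(0.0, spike_times[-1], dt)
flips = np.searchsorted(spike_times, t)               # spikes that occurred before t
signal = np.where(flips % 2 == 0, 1.0, -1.0)          # telegraph: sign flips at spikes

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))**2
freqs = np.fft.rfftfreq(signal.size, d=dt)
peak = freqs[1:][np.argmax(spectrum[1:])]             # skip the zero-frequency bin
print("dominant frequency (Hz):", peak)
```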
q-bio/0703013
Ophir Flomenbom
Ophir Flomenbom and Robert J. Silbey
Utilizing the information content in two-state trajectories
The file contains: main text (+4 figures), supporting information (+9 figures), poster (1 page)
Proc. Natl. Acad. Sci. USA 103, 10907-10910 (2006)
10.1073/pnas.0604546103
null
q-bio.QM
null
The signal from many single-molecule experiments monitoring molecular processes, such as enzyme turnover via fluorescence and the opening and closing of ion channels via the flux of ions, consists of a time series of stochastic on and off (or open and closed) periods, termed a two-state trajectory. This signal reflects the dynamics in the underlying multi-substate on-off kinetic scheme (KS) of the process. The determination of the underlying KS is difficult and sometimes even impossible due to the loss of information in the mapping of the multi-dimensional KS onto two dimensions. Here we introduce a new procedure that efficiently and optimally relates the signal to all equivalent underlying KSs. This procedure partitions the space of KSs into canonical (unique) forms that can handle any KS, and obtains the topology and other details of the canonical form from the data without the need for fitting. Relationships are also established that connect the data and the topology of the canonical form to the on-off connectivity of a KS. The suggested canonical forms constitute a powerful tool in discriminating between KSs. Based on our approach, the upper bound on the information content in two-state trajectories is determined.
[ { "created": "Mon, 5 Mar 2007 21:19:10 GMT", "version": "v1" } ]
2007-05-23
[ [ "Flomenbom", "Ophir", "" ], [ "Silbey", "Robert J.", "" ] ]
The signal from many single-molecule experiments monitoring molecular processes, such as enzyme turnover via fluorescence and the opening and closing of ion channels via the flux of ions, consists of a time series of stochastic on and off (or open and closed) periods, termed a two-state trajectory. This signal reflects the dynamics in the underlying multi-substate on-off kinetic scheme (KS) of the process. The determination of the underlying KS is difficult and sometimes even impossible due to the loss of information in the mapping of the multi-dimensional KS onto two dimensions. Here we introduce a new procedure that efficiently and optimally relates the signal to all equivalent underlying KSs. This procedure partitions the space of KSs into canonical (unique) forms that can handle any KS, and obtains the topology and other details of the canonical form from the data without the need for fitting. Relationships are also established that connect the data and the topology of the canonical form to the on-off connectivity of a KS. The suggested canonical forms constitute a powerful tool in discriminating between KSs. Based on our approach, the upper bound on the information content in two-state trajectories is determined.
1403.2878
Thomas House
Thomas House
Non-Markovian stochastic epidemics in extremely heterogeneous populations
10 pages, 1 figure
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A feature often observed in epidemiological networks is significant heterogeneity in degree. A popular modelling approach to this has been to consider large populations with highly heterogeneous discrete contact rates. This paper defines an individual-level non-Markovian stochastic process that converges on standard ODE models of such populations in the appropriate asymptotic limit. A generalised Sellke construction is derived for this model, and this is then used to consider final outcomes in the case where heterogeneity follows a truncated Zipf distribution.
[ { "created": "Wed, 12 Mar 2014 10:36:29 GMT", "version": "v1" } ]
2014-03-13
[ [ "House", "Thomas", "" ] ]
A feature often observed in epidemiological networks is significant heterogeneity in degree. A popular modelling approach to this has been to consider large populations with highly heterogeneous discrete contact rates. This paper defines an individual-level non-Markovian stochastic process that converges on standard ODE models of such populations in the appropriate asymptotic limit. A generalised Sellke construction is derived for this model, and this is then used to consider final outcomes in the case where heterogeneity follows a truncated Zipf distribution.
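A Sellke-style final-size calculation with contact rates drawn from a truncated Zipf distribution can be sketched as follows; the particular way heterogeneity enters here (each individual's rate scales both its susceptibility and its infectivity) is an assumption made for illustration, not necessarily the paper's generalized construction.

```python
# Crude Sellke-style final-size calculation with truncated-Zipf contact rates.
import numpy as np

rng = np.random.default_rng(7)
N, k_max, alpha = 10_000, 50, 2.0
beta, gamma = 2.0, 1.0                         # transmission and recovery rates

k_vals = np.arange(1, k_max + 1)
probs = k_vals**(-alpha) / np.sum(k_vals**(-alpha))
k = rng.choice(k_vals, size=N, p=probs).astype(float)   # truncated Zipf contact rates

# Sellke thresholds: individual i becomes infected once the infection pressure
# it experiences (scaled by its own rate k_i) exceeds its exponential threshold.
Q = rng.exponential(1.0, size=N)
infectivity = beta * k / (gamma * k.mean() * N)          # pressure added per case

infected = np.zeros(N, dtype=bool)
infected[rng.choice(N, size=10, replace=False)] = True   # initial cases
pressure = infectivity[infected].sum()
while True:
    newly = (~infected) & (k * pressure >= Q)
    if not newly.any():
        break
    infected |= newly
    pressure += infectivity[newly].sum()

print("final size fraction:", infected.mean())
```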
2110.00668
Harrison Ritz
Harrison Ritz, Xiamin Leng, and Amitai Shenhav
Cognitive control as a multivariate optimization problem
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-sa/4.0/
Research has characterized the various forms cognitive control can take, including enhancement of goal-relevant information, suppression of goal-irrelevant information, and overall inhibition of potential responses, and has identified computations and neural circuits that underpin this multitude of control types. Studies have also identified a wide range of situations that elicit adjustments in control allocation (e.g., those eliciting signals indicating an error or increased processing conflict), but the rules governing when a given situation will give rise to a given control adjustment remain poorly understood. Significant progress has recently been made on this front by casting the allocation of control as a decision-making problem, and developing unifying and normative models that prescribe when and how a change in incentives and task demands will result in changes in a given form of control. Despite their successes, these models, and the experiments that have been developed to test them, have yet to face their greatest challenge: deciding how to allocate control across the multiplicity of control signals that one could engage at any given time. Here, we will lay out the complexities of the inverse problem inherent to cognitive control allocation, and their close parallels to inverse problems within motor control (e.g., choosing between redundant limb movements). We discuss existing solutions to motor control's inverse problems drawn from optimal control theory, which have proposed that effort costs act to regularize actions and transform motor planning into a well-posed problem. These same principles may help shed light on how our brains optimize over complex control configurations, while providing a new normative perspective on the origins of mental effort.
[ { "created": "Fri, 1 Oct 2021 22:14:51 GMT", "version": "v1" }, { "created": "Mon, 3 Jan 2022 04:43:50 GMT", "version": "v2" } ]
2022-01-04
[ [ "Ritz", "Harrison", "" ], [ "Leng", "Xiamin", "" ], [ "Shenhav", "Amitai", "" ] ]
Research has characterized the various forms cognitive control can take, including enhancement of goal-relevant information, suppression of goal-irrelevant information, and overall inhibition of potential responses, and has identified computations and neural circuits that underpin this multitude of control types. Studies have also identified a wide range of situations that elicit adjustments in control allocation (e.g., those eliciting signals indicating an error or increased processing conflict), but the rules governing when a given situation will give rise to a given control adjustment remain poorly understood. Significant progress has recently been made on this front by casting the allocation of control as a decision-making problem, and developing unifying and normative models that prescribe when and how a change in incentives and task demands will result in changes in a given form of control. Despite their successes, these models, and the experiments that have been developed to test them, have yet to face their greatest challenge: deciding how to allocate control across the multiplicity of control signals that one could engage at any given time. Here, we will lay out the complexities of the inverse problem inherent to cognitive control allocation, and their close parallels to inverse problems within motor control (e.g., choosing between redundant limb movements). We discuss existing solutions to motor control's inverse problems drawn from optimal control theory, which have proposed that effort costs act to regularize actions and transform motor planning into a well-posed problem. These same principles may help shed light on how our brains optimize over complex control configurations, while providing a new normative perspective on the origins of mental effort.
q-bio/0605016
Ellen Baake
Natali Zint, Ellen Baake and Frank den Hollander
How T-cells use large deviations to recognize foreign antigens
16 pages, 6 figures; minor revision, new simulations; J Math Biol., in press
J. Math. Biol. 57 (2008), 841-861
null
null
q-bio.SC math.PR q-bio.MN
null
A stochastic model for the activation of T-cells is analysed. T-cells are part of the immune system and recognize foreign antigens against a background of the body's own molecules. The model under consideration is a slight generalization of a model introduced by Van den Berg, Rand and Burroughs in 2001, and is capable of explaining how this recognition works on the basis of rare stochastic events. With the help of a refined large deviation theorem and numerical evaluation it is shown that, for a wide range of parameters, T-cells can distinguish reliably between foreign antigens and self-antigens.
[ { "created": "Thu, 11 May 2006 09:01:59 GMT", "version": "v1" }, { "created": "Thu, 15 May 2008 08:47:37 GMT", "version": "v2" } ]
2009-02-19
[ [ "Zint", "Natali", "" ], [ "Baake", "Ellen", "" ], [ "Hollander", "Frank den", "" ] ]
A stochastic model for the activation of T-cells is analysed. T-cells are part of the immune system and recognize foreign antigens against a background of the body's own molecules. The model under consideration is a slight generalization of a model introduced by Van den Berg, Rand and Burroughs in 2001, and is capable of explaining how this recognition works on the basis of rare stochastic events. With the help of a refined large deviation theorem and numerical evaluation it is shown that, for a wide range of parameters, T-cells can distinguish reliably between foreign antigens and self-antigens.
1311.1171
Stuart Borrett
David E. Hines, Jessica A. Lisa, Bongkeun Song, Craig R. Tobias, Stuart R. Borrett
Estimating the effects of sea level rise on coupled estuarine nitrogen cycling processes through comparative network analysis
18 pages, 2 tables, 8 figures, 1 web appendix with two ecosystem network models
Marine Ecology Progress Series 524: 137-154
10.3354/meps11187
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nitrogen (N) removal from estuaries is driven in part by sedimentary microbial processes. The processes of denitrification and anaerobic ammonium oxidation (anammox) remove N from estuaries by producing di-nitrogen gas, and each can be coupled to N recycling pathways such as nitrification and dissimilatory nitrate reduction to ammonium (DNRA). Environmental conditions in estuaries influence sedimentary N cycling processes; therefore, seawater intrusion may affect the coupling of N cycling processes in the freshwater portions of estuaries. This study investigated the potential effects of seawater intrusion on these process couplings through a comparative modeling approach. We applied environ analysis, a form of ecological network analysis, to two N cycling mass-balance network models constructed at freshwater (oligohaline) and saltwater (polyhaline) sites in the Cape Fear River Estuary, North Carolina. We used a space-for-time substitution to predict the effects of seawater intrusion on the sedimentary N cycle. Further, we conducted an uncertainty analysis using linear inverse modeling to evaluate the effects of parameterization uncertainty on model results. Nitrification coupled to both denitrification and anammox was 2.5 times greater in the oligohaline model, while DNRA coupled to anammox was 2.7 times greater in the polyhaline model. However, the total amount of N2 gas produced relative to the nitrogen inputs to each network was 4.7% and 4.6% at the oligohaline and polyhaline sites, respectively. These findings suggest that changes in water chemistry from seawater intrusion may favor direct over coupled nitrogen removal, but may not substantially change the N removal capacity of the sedimentary microbial processes.
[ { "created": "Tue, 5 Nov 2013 19:32:07 GMT", "version": "v1" } ]
2015-04-16
[ [ "Hines", "David E.", "" ], [ "Lisa", "Jessica A.", "" ], [ "Song", "Bongkeun", "" ], [ "Tobias", "Craig R.", "" ], [ "Borrett", "Stuart R.", "" ] ]
Nitrogen (N) removal from estuaries is driven in part by sedimentary microbial processes. The processes of denitrification and anaerobic ammonium oxidation (anammox) remove N from estuaries by producing di-nitrogen gas, and each can be coupled to N recycling pathways such as nitrification and dissimilatory nitrate reduction to ammonium (DNRA). Environmental conditions in estuaries influence sedimentary N cycling processes; therefore, seawater intrusion may affect the coupling of N cycling processes in the freshwater portions of estuaries. This study investigated the potential effects of seawater intrusion on these process couplings through a comparative modeling approach. We applied environ analysis, a form of ecological network analysis, to two N cycling mass-balance network models constructed at freshwater (oligohaline) and saltwater (polyhaline) sites in the Cape Fear River Estuary, North Carolina. We used a space-for-time substitution to predict the effects of seawater intrusion on the sedimentary N cycle. Further, we conducted an uncertainty analysis using linear inverse modeling to evaluate the effects of parameterization uncertainty on model results. Nitrification coupled to both denitrification and anammox was 2.5 times greater in the oligohaline model, while DNRA coupled to anammox was 2.7 times greater in the polyhaline model. However, the total amount of N2 gas produced relative to the nitrogen inputs to each network was 4.7% and 4.6% at the oligohaline and polyhaline sites, respectively. These findings suggest that changes in water chemistry from seawater intrusion may favor direct over coupled nitrogen removal, but may not substantially change the N removal capacity of the sedimentary microbial processes.
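The network-analysis machinery behind environ analysis can be illustrated on a toy flow network: form the dimensionless direct-flow matrix G from a steady-state flow matrix and boundary inputs, then compute the integral (direct plus indirect) flow matrix N = (I - G)^(-1). The three-compartment flows below are made up for illustration, not the Cape Fear River Estuary models.

```python
# Toy throughflow / integral-flow calculation in the style of environ analysis.
import numpy as np

# F[i, j] = steady-state flow from compartment j to compartment i
F = np.array([[0.0, 0.0, 0.0],
              [8.0, 0.0, 0.0],
              [0.0, 5.0, 0.0]])
z = np.array([10.0, 0.0, 0.0])           # boundary inputs
exports = np.array([2.0, 3.0, 5.0])      # boundary outputs

T = F.sum(axis=1) + z                    # compartment throughflows (inflow side)
assert np.allclose(F.sum(axis=0) + exports, T)   # steady-state balance check

G = F / T[None, :]                       # flow as a fraction of donor throughflow
N = np.linalg.inv(np.eye(3) - G)         # integral flow intensity matrix
print("throughflows:", T)
print("integral flow matrix:\n", N)
```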
2310.15857
Sean Brown
Sean M. Brown, Christopher Mayer-Bacon, Stephen Freeland
Xeno Amino Acids: A look into biochemistry as we don't know it
Submitted to Life (ISSN 2075-1729), 26 pages (without references), 8 figures, 1 table, 1 box
Life, 13(12), 2281 (2023)
10.3390/life13122281
null
q-bio.BM q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Would another origin of life resemble Earth's biochemical use of amino acids? Here we review current knowledge at three levels: 1) Could other classes of chemical structure serve as building blocks for biopolymer structure and catalysis? Amino acids now seem both readily available to, and a plausible chemical attractor for, life as we don't know it. Amino acids thus remain important and tractable targets for astrobiological research. 2) If amino acids are used, would we expect the same L-alpha-structural subclass used by life? Despite numerous ideas, it is not clear why life favors L-enantiomers. It seems clearer, however, why life on Earth uses the shortest possible (alpha-) amino acid backbone, and why each carries only one side chain. However, assertions that other backbones are physicochemically impossible have relaxed into arguments that they are disadvantageous. 3) Would we expect a similar set of side chains to those within the genetic code? Not only do many plausible alternatives exist, but evidence also exists for both evolutionary advantage and physicochemical constraint among those encoded by life. Overall, as focus shifts from amino acids as a chemical class to specific side chains used by post-LUCA biology, the probable role of physicochemical constraint diminishes relative to that of biological evolution. Exciting opportunities now present themselves for laboratory work and computing to explore how changing the amino acid alphabet alters the universe of protein folds. Near-term milestones include: a) expanding evidence about amino acids as attractors within chemical evolution; b) extending characterization of other backbones relative to biological proteins; c) merging computing and laboratory explorations of structures and functions unlocked by xeno peptides.
[ { "created": "Tue, 24 Oct 2023 14:15:55 GMT", "version": "v1" }, { "created": "Mon, 30 Oct 2023 14:39:08 GMT", "version": "v2" } ]
2024-06-04
[ [ "Brown", "Sean M.", "" ], [ "Mayer-Bacon", "Christopher", "" ], [ "Freeland", "Stephen", "" ] ]
Would another origin of life resemble Earth's biochemical use of amino acids? Here we review current knowledge at three levels: 1) Could other classes of chemical structure serve as building blocks for biopolymer structure and catalysis? Amino acids now seem both readily available to, and a plausible chemical attractor for, life as we don't know it. Amino acids thus remain important and tractable targets for astrobiological research. 2) If amino acids are used, would we expect the same L-alpha-structural subclass used by life? Despite numerous ideas, it is not clear why life favors L-enantiomers. It seems clearer, however, why life on Earth uses the shortest possible (alpha-) amino acid backbone, and why each carries only one side chain. However, assertions that other backbones are physicochemically impossible have relaxed into arguments that they are disadvantageous. 3) Would we expect a similar set of side chains to those within the genetic code? Not only do many plausible alternatives exist, but evidence also exists for both evolutionary advantage and physicochemical constraint among those encoded by life. Overall, as focus shifts from amino acids as a chemical class to specific side chains used by post-LUCA biology, the probable role of physicochemical constraint diminishes relative to that of biological evolution. Exciting opportunities now present themselves for laboratory work and computing to explore how changing the amino acid alphabet alters the universe of protein folds. Near-term milestones include: a) expanding evidence about amino acids as attractors within chemical evolution; b) extending characterization of other backbones relative to biological proteins; c) merging computing and laboratory explorations of structures and functions unlocked by xeno peptides.
1009.2470
Marta Luksza
Marta {\L}uksza, Michael L\"assig and Johannes Berg
Significance analysis and statistical mechanics: an application to clustering
to appear in Phys. Rev. Lett
null
10.1103/PhysRevLett.105.220601
null
q-bio.MN cond-mat.stat-mech q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the statistical significance of structures in random data: Given a set of vectors and a measure of mutual similarity, how likely is it that a subset of these vectors forms a cluster with enhanced similarity among its elements? The computation of this cluster p-value for randomly distributed vectors is mapped onto a well-defined problem of statistical mechanics. We solve this problem analytically, establishing a connection between the physics of quenched disorder and multiple testing statistics in clustering and related problems. In an application to gene expression data, we find a remarkable link between the statistical significance of a cluster and the functional relationships between its genes.
[ { "created": "Mon, 13 Sep 2010 18:25:05 GMT", "version": "v1" } ]
2015-05-19
[ [ "Łuksza", "Marta", "" ], [ "Lässig", "Michael", "" ], [ "Berg", "Johannes", "" ] ]
This paper addresses the statistical significance of structures in random data: Given a set of vectors and a measure of mutual similarity, how likely is it that a subset of these vectors forms a cluster with enhanced similarity among its elements? The computation of this cluster p-value for randomly distributed vectors is mapped onto a well-defined problem of statistical mechanics. We solve this problem analytically, establishing a connection between the physics of quenched disorder and multiple testing statistics in clustering and related problems. In an application to gene expression data, we find a remarkable link between the statistical significance of a cluster and the functional relationships between its genes.
1412.5982
Anna Ochab-Marcinek
Anna Ochab-Marcinek and Marcin Tabaka
Transcriptional leakage versus noise: A simple mechanism of conversion between binary and graded response in autoregulated genes
8 pages, 6 figures, published in Physical Review E, http://link.aps.org/doi/10.1103/PhysRevE.91.012704 , copyright APS. Added missing explanations of the symbols in Tab. 1. Corrected "inactive", "active" in the text. Added missing "transcriptional" in the abstract
null
10.1103/PhysRevE.91.012704
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the response of an autoregulated gene to a range of concentrations of signal molecules. We show that transcriptional leakage and noise due to translational bursting have the opposite effects. In a positively autoregulated gene, increasing the noise converts the response from graded to binary, while increasing the leakage converts the response from binary to graded. Our findings support the hypothesis that, being a common phenomenon, leaky expression may be a relatively easy way for evolutionary tuning of the type of gene response without changing the type of regulation from positive to negative.
[ { "created": "Thu, 18 Dec 2014 18:12:01 GMT", "version": "v1" }, { "created": "Fri, 9 Jan 2015 14:43:12 GMT", "version": "v2" }, { "created": "Fri, 16 Jan 2015 16:48:52 GMT", "version": "v3" } ]
2015-01-19
[ [ "Ochab-Marcinek", "Anna", "" ], [ "Tabaka", "Marcin", "" ] ]
We study the response of an autoregulated gene to a range of concentrations of signal molecules. We show that transcriptional leakage and noise due to translational bursting have the opposite effects. In a positively autoregulated gene, increasing the noise converts the response from graded to binary, while increasing the leakage converts the response from binary to graded. Our findings support the hypothesis that, being a common phenomenon, leaky expression may be a relatively easy way for evolutionary tuning of the type of gene response without changing the type of regulation from positive to negative.
0904.2327
Rainer Klages
Peter Dieterich (1), Rainer Klages (2), Roland Preuss (3), Albrecht Schwab (4) ((1) TU Dresden, Germany (2) Queen Mary University of London, UK (3) Max-Planck-Institute for Plasma Physics, Garching, Germany (4) University of Muenster, Germany)
Anomalous dynamics of cell migration
20 pages, 3 figures, 1 table
PNAS 105, 459 (2008)
10.1073/pnas.0707603105
null
q-bio.CB cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cell movement, for example during embryogenesis or tumor metastasis, is a complex dynamical process resulting from an intricate interplay of multiple components of the cellular migration machinery. At first sight, the paths of migrating cells resemble those of thermally driven Brownian particles. However, cell migration is an active biological process putting a characterization in terms of normal Brownian motion into question. By analyzing the trajectories of wildtype and mutated epithelial (MDCK-F) cells we show experimentally that anomalous dynamics characterizes cell migration. A superdiffusive increase of the mean squared displacement, non-Gaussian spatial probability distributions, and power-law decays of the velocity autocorrelations are the basis for this interpretation. Almost all results can be explained with a fractional Klein-Kramers equation allowing the quantitative classification of cell migration by a few parameters. Thereby it discloses the influence and relative importance of individual components of the cellular migration apparatus to the behavior of the cell as a whole.
[ { "created": "Wed, 15 Apr 2009 14:11:48 GMT", "version": "v1" } ]
2009-04-17
[ [ "Dieterich", "Peter", "" ], [ "Klages", "Rainer", "" ], [ "Preuss", "Roland", "" ], [ "Schwab", "Albrecht", "" ] ]
Cell movement, for example during embryogenesis or tumor metastasis, is a complex dynamical process resulting from an intricate interplay of multiple components of the cellular migration machinery. At first sight, the paths of migrating cells resemble those of thermally driven Brownian particles. However, cell migration is an active biological process putting a characterization in terms of normal Brownian motion into question. By analyzing the trajectories of wildtype and mutated epithelial (MDCK-F) cells we show experimentally that anomalous dynamics characterizes cell migration. A superdiffusive increase of the mean squared displacement, non-Gaussian spatial probability distributions, and power-law decays of the velocity autocorrelations are the basis for this interpretation. Almost all results can be explained with a fractional Klein-Kramers equation allowing the quantitative classification of cell migration by a few parameters. Thereby it discloses the influence and relative importance of individual components of the cellular migration apparatus to the behavior of the cell as a whole.
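The superdiffusive mean squared displacement reported here is straightforward to check from recorded positions. The sketch below (plain NumPy, not the fractional Klein-Kramers analysis) computes the time-averaged MSD of one 2D trajectory and its scaling exponent; variable names are illustrative.

import numpy as np

def time_averaged_msd(positions):
    # `positions`: (T, 2) array of x, y coordinates sampled at equal time intervals.
    T = len(positions)
    lags = np.arange(1, T)
    out = np.empty(T - 1)
    for i, lag in enumerate(lags):
        disp = positions[lag:] - positions[:-lag]
        out[i] = (disp ** 2).sum(axis=1).mean()
    return lags, out

# MSD ~ t^beta: beta = 1 is ordinary Brownian motion, beta > 1 signals superdiffusion.
# lags, m = time_averaged_msd(track)
# beta = np.polyfit(np.log(lags), np.log(m), 1)[0]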
2201.02703
Pablo Carlos L\'opez Dr.
Pablo Carlos L\'opez V\'azquez and Gilberto S\'anchez Gonz\'alez and Jorge Mart\'inez Ortega and Renato Salom\'on Arroyo Duarte
Stochastic epidemiological model: Modeling the SARS-CoV-2 spreading in Mexico
14 pages, 7 figures
null
10.1371/journal.pone.0275216
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by/4.0/
In this paper we model the spreading of SARS-CoV-2 in Mexico by introducing a new stochastic approximation constructed from first principles, structured on the basis of a Latent-Infectious-(Recovered or Deceased) (LI(RD)) compartmental approximation, where the number of new infected individuals caused by a single infectious individual per unit time (a day) is a random variable of a Poisson distribution whose parameter is modulated through a weight-like time-dependent function. The weight function introduces a time dependence to the average number of new infections and, as we will show, this information can be extracted from empirical data, which gives the model self-consistency and provides a tool to study periodic patterns encoded in the epidemiological dynamics.
[ { "created": "Fri, 7 Jan 2022 22:54:44 GMT", "version": "v1" } ]
2023-01-11
[ [ "Vázquez", "Pablo Carlos López", "" ], [ "González", "Gilberto Sánchez", "" ], [ "Ortega", "Jorge Martínez", "" ], [ "Duarte", "Renato Salomón Arroyo", "" ] ]
In this paper we model the spreading of SARS-CoV-2 in Mexico by introducing a new stochastic approximation constructed from first principles, structured on the basis of a Latent-Infectious-(Recovered or Deceased) (LI(RD)) compartmental approximation, where the number of new infected individuals caused by a single infectious individual per unit time (a day) is a random variable of a Poisson distribution whose parameter is modulated through a weight-like time-dependent function. The weight function introduces a time dependence to the average number of new infections and, as we will show, this information can be extracted from empirical data, which gives the model self-consistency and provides a tool to study periodic patterns encoded in the epidemiological dynamics.
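The core mechanism, daily new infections drawn from a Poisson distribution whose mean is modulated by a time-dependent weight, can be sketched as follows. The one-day infectious period, the weight function and the parameter values are placeholder assumptions of this sketch, not the fitted model for Mexico.

import numpy as np

def simulate_daily_infections(days, r0, weight, i0=10, seed=None):
    # Each infectious individual generates Poisson(r0 * weight(t)) new cases per day.
    rng = np.random.default_rng(seed)
    infectious = i0
    new_cases = []
    for t in range(days):
        cases = rng.poisson(infectious * r0 * weight(t))
        new_cases.append(cases)
        infectious = cases  # crude one-day infectious period (an assumption of this sketch)
    return np.array(new_cases)

# Example weight with a slow decay plus a weekly pattern:
# series = simulate_daily_infections(
#     120, r0=1.2, weight=lambda t: np.exp(-t / 200) * (1 + 0.2 * np.sin(2 * np.pi * t / 7)))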
1110.1412
Eleni Katifori
Eleni Katifori and Marcelo O. Magnasco
Quantifying loopy network architectures
17 pages, 8 figures. During preparation of this manuscript the authors became aware of the work of Mileyko at al., concurrently submitted for publication
null
10.1371/journal.pone.0037994
null
q-bio.QM cond-mat.stat-mech nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of methods have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the Asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.
[ { "created": "Thu, 6 Oct 2011 23:26:57 GMT", "version": "v1" } ]
2015-05-30
[ [ "Katifori", "Eleni", "" ], [ "Magnasco", "Marcelo O.", "" ] ]
Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of methods have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the Asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.
2306.08564
Raja Marjieh
Raja Marjieh, Nori Jacoby, Joshua C. Peterson, Thomas L. Griffiths
The Universal Law of Generalization Holds for Naturalistic Stimuli
36 pages, 6 figures
null
null
null
q-bio.NC cs.AI stat.AP
http://creativecommons.org/licenses/by/4.0/
Shepard's universal law of generalization is a remarkable hypothesis about how intelligent organisms should perceive similarity. In its broadest form, the universal law states that the level of perceived similarity between a pair of stimuli should decay as a concave function of their distance when embedded in an appropriate psychological space. While extensively studied, evidence in support of the universal law has relied on low-dimensional stimuli and small stimulus sets that are very different from their real-world counterparts. This is largely because pairwise comparisons -- as required for similarity judgments -- scale quadratically in the number of stimuli. We provide direct evidence for the universal law in a naturalistic high-dimensional regime by analyzing an existing dataset of 214,200 human similarity judgments and a newly collected dataset of 390,819 human generalization judgments (N=2406 US participants) across three sets of natural images.
[ { "created": "Wed, 14 Jun 2023 15:17:48 GMT", "version": "v1" } ]
2023-06-16
[ [ "Marjieh", "Raja", "" ], [ "Jacoby", "Nori", "" ], [ "Peterson", "Joshua C.", "" ], [ "Griffiths", "Thomas L.", "" ] ]
Shepard's universal law of generalization is a remarkable hypothesis about how intelligent organisms should perceive similarity. In its broadest form, the universal law states that the level of perceived similarity between a pair of stimuli should decay as a concave function of their distance when embedded in an appropriate psychological space. While extensively studied, evidence in support of the universal law has relied on low-dimensional stimuli and small stimulus sets that are very different from their real-world counterparts. This is largely because pairwise comparisons -- as required for similarity judgments -- scale quadratically in the number of stimuli. We provide direct evidence for the universal law in a naturalistic high-dimensional regime by analyzing an existing dataset of 214,200 human similarity judgments and a newly collected dataset of 390,819 human generalization judgments (N=2406 US participants) across three sets of natural images.
1806.02893
William T Redman
William T Redman
An O(n) method of calculating Kendall correlations of spike trains
7 pages, 1 figure, 1 table
PLoS ONE (2019) 14(2): e0212190
10.1371/journal.pone.0212190
null
q-bio.QM q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The ability to record from increasingly large numbers of neurons, and the increasing attention being paid to large scale neural network simulations, demands computationally fast algorithms to compute relevant statistical measures. We present an O(n) algorithm for calculating the Kendall correlation of spike trains, a correlation measure that is becoming especially recognized as an important tool in neuroscience. We show that our method is around 50 times faster than the O(n ln n) method which is a current standard for quickly computing the Kendall correlation. In addition to providing a faster algorithm, we emphasize the role that taking the specific nature of spike trains had on reducing the run time. We imagine that there are many other useful algorithms that can be even more significantly sped up when taking this into consideration. A MATLAB function executing the method described here has been made freely available on-line.
[ { "created": "Thu, 7 Jun 2018 20:49:07 GMT", "version": "v1" }, { "created": "Thu, 23 Jan 2020 06:38:40 GMT", "version": "v2" }, { "created": "Tue, 25 Feb 2020 05:35:41 GMT", "version": "v3" } ]
2020-02-26
[ [ "Redman", "William T", "" ] ]
The ability to record from increasingly large numbers of neurons, and the increasing attention being paid to large scale neural network simulations, demands computationally fast algorithms to compute relevant statistical measures. We present an O(n) algorithm for calculating the Kendall correlation of spike trains, a correlation measure that is becoming especially recognized as an important tool in neuroscience. We show that our method is around 50 times faster than the O(n ln n) method which is a current standard for quickly computing the Kendall correlation. In addition to providing a faster algorithm, we emphasize the role that taking the specific nature of spike trains had on reducing the run time. We imagine that there are many other useful algorithms that can be even more significantly sped up when taking this into consideration. A MATLAB function executing the method described here has been made freely available on-line.
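Binned spike trains are binary, which is what makes an O(n) computation possible: concordant, discordant and tied pairs all follow from the four joint counts of the 2x2 contingency table. The sketch below implements that counting for Kendall's tau-b; it illustrates the idea and is not the authors' released MATLAB function.

import numpy as np

def kendall_tau_binary(x, y):
    # Kendall tau-b for two binary (0/1) vectors of equal length, in O(n) time.
    x = np.asarray(x, dtype=bool)
    y = np.asarray(y, dtype=bool)
    n = x.size
    n11 = int(np.sum(x & y))
    n10 = int(np.sum(x & ~y))
    n01 = int(np.sum(~x & y))
    n00 = int(np.sum(~x & ~y))
    concordant = n11 * n00           # one pair member is (1,1), the other (0,0)
    discordant = n10 * n01           # one pair member is (1,0), the other (0,1)
    n_pairs = n * (n - 1) / 2
    ties_x = (n11 + n10) * (n11 + n10 - 1) / 2 + (n01 + n00) * (n01 + n00 - 1) / 2
    ties_y = (n11 + n01) * (n11 + n01 - 1) / 2 + (n10 + n00) * (n10 + n00 - 1) / 2
    denom = np.sqrt((n_pairs - ties_x) * (n_pairs - ties_y))
    return (concordant - discordant) / denom if denom > 0 else 0.0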
2108.06610
Harisankar Sadasivan
Tim Dunn, Harisankar Sadasivan, Jack Wadden, Kush Goliya, Kuan-Yu Chen, Reetuparna Das, David Blaauw, Satish Narayanasamy
SquiggleFilter: An Accelerator for Portable Virus Detection
https://micro2021ae.hotcrp.com/paper/12?cap=012aOJj-0U08_9o
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-sa/4.0/
The MinION is a recent-to-market handheld nanopore sequencer. It can be used to determine the whole genome of a target virus in a biological sample. Its Read Until feature allows us to skip sequencing a majority of non-target reads (DNA/RNA fragments), which constitutes more than 99% of all reads in a typical sample. However, it does not have any on-board computing, which significantly limits its portability. We analyze the performance of a Read Until metagenomic pipeline for detecting target viruses and identifying strain-specific mutations. We find new sources of performance bottlenecks (basecaller in classification of a read) that are not addressed by past genomics accelerators. We present SquiggleFilter, a novel hardware accelerated dynamic time warping (DTW) based filter that directly analyzes MinION's raw squiggles and filters everything except target viral reads, thereby avoiding the expensive basecalling step. We show that our 14.3W 13.25mm2 accelerator has 274X greater throughput and 3481X lower latency than existing GPU-based solutions while consuming half the power, enabling Read Until for the next generation of nanopore sequencers.
[ { "created": "Sat, 14 Aug 2021 20:35:27 GMT", "version": "v1" }, { "created": "Thu, 23 Sep 2021 16:10:09 GMT", "version": "v2" } ]
2021-09-24
[ [ "Dunn", "Tim", "" ], [ "Sadasivan", "Harisankar", "" ], [ "Wadden", "Jack", "" ], [ "Goliya", "Kush", "" ], [ "Chen", "Kuan-Yu", "" ], [ "Das", "Reetuparna", "" ], [ "Blaauw", "David", "" ], [ "Nara...
The MinION is a recent-to-market handheld nanopore sequencer. It can be used to determine the whole genome of a target virus in a biological sample. Its Read Until feature allows us to skip sequencing a majority of non-target reads (DNA/RNA fragments), which constitutes more than 99% of all reads in a typical sample. However, it does not have any on-board computing, which significantly limits its portability. We analyze the performance of a Read Until metagenomic pipeline for detecting target viruses and identifying strain-specific mutations. We find new sources of performance bottlenecks (basecaller in classification of a read) that are not addressed by past genomics accelerators. We present SquiggleFilter, a novel hardware accelerated dynamic time warping (DTW) based filter that directly analyzes MinION's raw squiggles and filters everything except target viral reads, thereby avoiding the expensive basecalling step. We show that our 14.3W 13.25mm2 accelerator has 274X greater throughput and 3481X lower latency than existing GPU-based solutions while consuming half the power, enabling Read Until for the next generation of nanopore sequencers.
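SquiggleFilter's decision rule is built on dynamic time warping between the raw current signal and a viral reference. A textbook O(nm) DTW in plain Python is sketched below for orientation; the accelerator itself uses a hardware-friendly subsequence variant and normalization steps that this sketch does not capture.

import numpy as np

def dtw_distance(query, reference):
    # Classic dynamic-time-warping distance between two 1D signals.
    n, m = len(query), len(reference)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(query[i - 1] - reference[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# A Read Until style decision would keep sequencing a read whose distance to the
# viral reference falls below a threshold and eject it otherwise.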
1707.07145
C\'esar Parra-Rojas
C\'esar Parra-Rojas, Alan J. McKane
Reduction of a metapopulation genetic model to an effective one island model
16 pages, 4 figures. Supplementary material: 22 pages, 3 figures
Europhys. Lett. 122, 18001 (2018)
10.1209/0295-5075/122/18001
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore a model of metapopulation genetics which is based on a more ecologically motivated approach than is frequently used in population genetics. The size of the population is regulated by competition between individuals, rather than by artificially imposing a fixed population size. The increased complexity of the model is managed by employing techniques often used in the physical sciences, namely exploiting time-scale separation to eliminate fast variables and then constructing an effective model from the slow modes. Remarkably, an initial model with 2$\mathcal{D}$ variables, where $\mathcal{D}$ is the number of islands in the metapopulation, can be reduced to a model with a single variable. We analyze this effective model and show that the predictions for the probability of fixation of the alleles and the mean time to fixation agree well with those found from numerical simulations of the original model.
[ { "created": "Sat, 22 Jul 2017 11:58:12 GMT", "version": "v1" }, { "created": "Sun, 27 May 2018 11:00:41 GMT", "version": "v2" } ]
2018-05-29
[ [ "Parra-Rojas", "César", "" ], [ "McKane", "Alan J.", "" ] ]
We explore a model of metapopulation genetics which is based on a more ecologically motivated approach than is frequently used in population genetics. The size of the population is regulated by competition between individuals, rather than by artificially imposing a fixed population size. The increased complexity of the model is managed by employing techniques often used in the physical sciences, namely exploiting time-scale separation to eliminate fast variables and then constructing an effective model from the slow modes. Remarkably, an initial model with 2$\mathcal{D}$ variables, where $\mathcal{D}$ is the number of islands in the metapopulation, can be reduced to a model with a single variable. We analyze this effective model and show that the predictions for the probability of fixation of the alleles and the mean time to fixation agree well with those found from numerical simulations of the original model.
1912.05057
Julia Shore
Julia A. Shore, Barbara R. Holland, Jeremy G. Sumner, Kay Nieselt and Peter R. Wills
The ancient Operational Code is embedded in the amino acid substitution matrix and aaRS phylogenies
null
Journal of molecular evolution, 1-15 (2019)
10.1007/s00239-019-09918-z
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The underlying structure of the canonical amino acid substitution matrix (aaSM) is examined by considering stepwise improvements in the differential recognition of amino acids according to their chemical properties during the branching history of the two aminoacyl-tRNA synthetase (aaRS) superfamilies. The evolutionary expansion of the genetic code is described by a simple parameterization of the aaSM, in which (i) the number of distinguishable amino acid types, (ii) the matrix dimension, and (iii) the number of parameters, each increases by one for each bifurcation in an aaRS phylogeny. Parameterized matrices corresponding to trees in which the size of an amino acid sidechain is the only discernible property behind its categorization as a substrate, exclusively for a Class I or II aaRS, provide a significantly better fit to empirically determined aaSM than trees with random bifurcation patterns. A second split between polar and nonpolar amino acids in each Class effects a vastly greater further improvement. The earliest Class-separated epochs in the phylogenies of the aaRS reflect these enzymes' capability to distinguish tRNAs through the recognition of acceptor stem identity elements via the minor (Class I) and major (Class II) helical grooves, which is how the ancient Operational Code functioned. The advent of tRNA recognition using the anticodon loop supports the evolution of the optimal map of amino acid chemistry found in the later Genetic Code, an essentially digital categorization, in which polarity is the major functional property, compensating for the unrefined, haphazard differentiation of amino acids achieved by the Operational Code.
[ { "created": "Wed, 11 Dec 2019 00:06:30 GMT", "version": "v1" } ]
2019-12-12
[ [ "Shore", "Julia A.", "" ], [ "Holland", "Barbara R.", "" ], [ "Sumner", "Jeremy G.", "" ], [ "Nieselt", "Kay", "" ], [ "Wills", "Peter R.", "" ] ]
The underlying structure of the canonical amino acid substitution matrix (aaSM) is examined by considering stepwise improvements in the differential recognition of amino acids according to their chemical properties during the branching history of the two aminoacyl-tRNA synthetase (aaRS) superfamilies. The evolutionary expansion of the genetic code is described by a simple parameterization of the aaSM, in which (i) the number of distinguishable amino acid types, (ii) the matrix dimension, and (iii) the number of parameters, each increases by one for each bifurcation in an aaRS phylogeny. Parameterized matrices corresponding to trees in which the size of an amino acid sidechain is the only discernible property behind its categorization as a substrate, exclusively for a Class I or II aaRS, provide a significantly better fit to empirically determined aaSM than trees with random bifurcation patterns. A second split between polar and nonpolar amino acids in each Class effects a vastly greater further improvement. The earliest Class-separated epochs in the phylogenies of the aaRS reflect these enzymes' capability to distinguish tRNAs through the recognition of acceptor stem identity elements via the minor (Class I) and major (Class II) helical grooves, which is how the ancient Operational Code functioned. The advent of tRNA recognition using the anticodon loop supports the evolution of the optimal map of amino acid chemistry found in the later Genetic Code, an essentially digital categorization, in which polarity is the major functional property, compensating for the unrefined, haphazard differentiation of amino acids achieved by the Operational Code.
q-bio/0604012
Serge Smidtas
Serge Smidtas, Vincent Schachter, Francois Kepes
The adaptive filter of the yeast galactose pathway
null
J. Theor. Biol. 2005
null
null
q-bio.MN q-bio.CB
null
In the yeast Saccharomyces cerevisiae, the interplay between galactose, Gal3p, Gal80p and Gal4p determines the transcriptional status of the genes required for galactose utilization. After an increase in galactose concentration, galactose molecules bind onto Gal3p. This event leads via Gal80p to the activation of Gal4p, which then induces GAL3 and GAL80 gene transcription. Here we propose a qualitative dynamic model, whereby these molecular interaction events represent the first two stages of a functional feedback loop that closes with the capture of activated Gal4p by newly synthesized Gal3p and Gal80p, decreasing transcriptional activation and creating again the protein complex that can bind incoming galactose molecules. Based on the differential time scales of faster protein interactions versus slower biosynthetic steps, this feedback loop functions as a derivative filter where galactose is the input step signal, and released Gal4p is the output derivative signal. One advantage of such a derivative filter is that GAL genes are expressed in proportion to the cellular requirement. Furthermore, this filter adaptively protects the cellular receptors from saturation by galactose, allowing cells to remain sensitive to variations in galactose concentrations rather than to absolute concentrations. Finally, this feedback loop, by allowing phosphorylation of some active Gal4p, may be essential to initiate the subsequent long-term response.
[ { "created": "Mon, 10 Apr 2006 11:02:16 GMT", "version": "v1" } ]
2007-05-23
[ [ "Smidtas", "Serge", "" ], [ "Schachter", "Vincent", "" ], [ "Kepes", "Francois", "" ] ]
In the yeast Saccharomyces cerevisiae, the interplay between galactose, Gal3p, Gal80p and Gal4p determines the transcriptional status of the genes required for galactose utilization. After an increase in galactose concentration, galactose molecules bind onto Gal3p. This event leads via Gal80p to the activation of Gal4p, which then induces GAL3 and GAL80 gene transcription. Here we propose a qualitative dynamic model, whereby these molecular interaction events represent the first two stages of a functional feedback loop that closes with the capture of activated Gal4p by newly synthesized Gal3p and Gal80p, decreasing transcriptional activation and creating again the protein complex that can bind incoming galactose molecules. Based on the differential time scales of faster protein interactions versus slower biosynthetic steps, this feedback loop functions as a derivative filter where galactose is the input step signal, and released Gal4p is the output derivative signal. One advantage of such a derivative filter is that GAL genes are expressed in proportion to the cellular requirement. Furthermore, this filter adaptively protects the cellular receptors from saturation by galactose, allowing cells to remain sensitive to variations in galactose concentrations rather than to absolute concentrations. Finally, this feedback loop, by allowing phosphorylation of some active Gal4p, may be essential to initiate the subsequent long-term response.
1902.01399
Samuel St-Jean
Samuel St-Jean, Maxime Chamberland, Max A. Viergever, Alexander Leemans
Reducing variability in along-tract analysis with diffusion profile realignment
v4: peer-reviewed round 2; v3: deleted some old text from before peer-review which was mistakenly included; v2: peer-reviewed version; v1: preprint as submitted to journal NeuroImage
NeuroImage, 2019, ISSN 1053-8119
10.1016/j.neuroimage.2019.06.016
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion weighted MRI (dMRI) provides a non invasive virtual reconstruction of the brain's white matter structures through tractography. Analyzing dMRI measures along the trajectory of white matter bundles can provide a more specific investigation than considering a region of interest or tract-averaged measurements. However, performing group analyses with this along-tract strategy requires correspondence between points of tract pathways across subjects. This is usually achieved by creating a new common space where the representative streamlines from every subject are resampled to the same number of points. If the underlying anatomy of some subjects was altered due to, e.g. disease or developmental changes, such information might be lost by resampling to a fixed number of points. In this work, we propose to address the issue of possible misalignment, which might be present even after resampling, by realigning the representative streamline of each subject in this 1D space with a new method, coined diffusion profile realignment (DPR). Experiments on synthetic datasets show that DPR reduces the coefficient of variation for the mean diffusivity, fractional anisotropy and apparent fiber density when compared to the unaligned case. Using 100 in vivo datasets from the HCP, we simulated changes in mean diffusivity, fractional anisotropy and apparent fiber density. Pairwise Student's t-tests between these altered subjects and the original subjects indicate that regional changes are identified after realignment with the DPR algorithm, while preserving differences previously detected in the unaligned case. This new correction strategy contributes to revealing effects of interest which might be hidden by misalignment and has the potential to improve the specificity in longitudinal population studies beyond the traditional region of interest based analysis and along-tract analysis workflows.
[ { "created": "Mon, 4 Feb 2019 17:45:34 GMT", "version": "v1" }, { "created": "Wed, 27 Mar 2019 15:17:57 GMT", "version": "v2" }, { "created": "Fri, 29 Mar 2019 10:24:05 GMT", "version": "v3" }, { "created": "Wed, 8 May 2019 08:47:05 GMT", "version": "v4" } ]
2019-06-21
[ [ "St-Jean", "Samuel", "" ], [ "Chamberland", "Maxime", "" ], [ "Viergever", "Max A.", "" ], [ "Leemans", "Alexander", "" ] ]
Diffusion weighted MRI (dMRI) provides a non invasive virtual reconstruction of the brain's white matter structures through tractography. Analyzing dMRI measures along the trajectory of white matter bundles can provide a more specific investigation than considering a region of interest or tract-averaged measurements. However, performing group analyses with this along-tract strategy requires correspondence between points of tract pathways across subjects. This is usually achieved by creating a new common space where the representative streamlines from every subject are resampled to the same number of points. If the underlying anatomy of some subjects was altered due to, e.g. disease or developmental changes, such information might be lost by resampling to a fixed number of points. In this work, we propose to address the issue of possible misalignment, which might be present even after resampling, by realigning the representative streamline of each subject in this 1D space with a new method, coined diffusion profile realignment (DPR). Experiments on synthetic datasets show that DPR reduces the coefficient of variation for the mean diffusivity, fractional anisotropy and apparent fiber density when compared to the unaligned case. Using 100 in vivo datasets from the HCP, we simulated changes in mean diffusivity, fractional anisotropy and apparent fiber density. Pairwise Student's t-tests between these altered subjects and the original subjects indicate that regional changes are identified after realignment with the DPR algorithm, while preserving differences previously detected in the unaligned case. This new correction strategy contributes to revealing effects of interest which might be hidden by misalignment and has the potential to improve the specificity in longitudinal population studies beyond the traditional region of interest based analysis and along-tract analysis workflows.
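One simple way to see why realignment matters is a toy 1D shift search: slide a subject's along-tract profile against a reference and keep the integer shift with the highest correlation. This is only an illustration under assumed equal-length, resampled profiles, not the diffusion profile realignment (DPR) algorithm itself.

import numpy as np

def best_integer_shift(profile, reference, max_shift=20):
    # Returns the shift (in resampled points) maximizing the Pearson correlation
    # between the overlapping parts of `profile` and `reference` (equal-length 1D arrays).
    best_s, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        a = profile[s:] if s >= 0 else profile[:s]
        b = reference[:len(a)] if s >= 0 else reference[-s:]
        if len(a) < 2:
            continue
        corr = np.corrcoef(a, b)[0, 1]
        if corr > best_corr:
            best_s, best_corr = s, corr
    return best_s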
0709.2646
Seth Sullivant
Niko Beerenwinkel, Seth Sullivant
Markov models for accumulating mutations
21 pages, 8 figures
null
null
null
q-bio.PE math.CO
null
We introduce and analyze a waiting time model for the accumulation of genetic changes. The continuous time conjunctive Bayesian network is defined by a partially ordered set of mutations and by the rate of fixation of each mutation. The partial order encodes constraints on the order in which mutations can fixate in the population, shedding light on the mutational pathways underlying the evolutionary process. We study a censored version of the model and derive equations for an EM algorithm to perform maximum likelihood estimation of the model parameters. We also show how to select the maximum likelihood poset. The model is applied to genetic data from different cancers and from drug resistant HIV samples, indicating implications for diagnosis and treatment.
[ { "created": "Mon, 17 Sep 2007 14:25:42 GMT", "version": "v1" } ]
2007-09-18
[ [ "Beerenwinkel", "Niko", "" ], [ "Sullivant", "Seth", "" ] ]
We introduce and analyze a waiting time model for the accumulation of genetic changes. The continuous time conjunctive Bayesian network is defined by a partially ordered set of mutations and by the rate of fixation of each mutation. The partial order encodes constraints on the order in which mutations can fixate in the population, shedding light on the mutational pathways underlying the evolutionary process. We study a censored version of the model and derive equations for an EM algorithm to perform maximum likelihood estimation of the model parameters. We also show how to select the maximum likelihood poset. The model is applied to genetic data from different cancers and from drug resistant HIV samples, indicating implications for diagnosis and treatment.
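A minimal forward simulation conveys the constraint structure of the continuous-time model: a mutation can only fixate once all of its parents in the poset have fixated, after an exponential waiting time with its own rate. The poset and rates below are illustrative, and the EM-based parameter estimation developed in the paper is not shown.

import random

def simulate_ctcbn(parents, rates, t_max, seed=0):
    # parents[m]: mutations that must be fixed before mutation m becomes possible.
    # rates[m]: exponential fixation rate of m once it is possible.
    rng = random.Random(seed)
    fixed = {}   # mutation -> fixation time
    t = 0.0
    while True:
        possible = [m for m in rates
                    if m not in fixed and all(p in fixed for p in parents[m])]
        if not possible:
            break
        # Competing exponential clocks; memorylessness makes per-step resampling valid.
        waits = {m: rng.expovariate(rates[m]) for m in possible}
        m_next = min(waits, key=waits.get)
        t += waits[m_next]
        if t > t_max:
            break
        fixed[m_next] = t
    return fixed

# Example poset: B and C both require A; D requires both B and C.
# simulate_ctcbn({'A': [], 'B': ['A'], 'C': ['A'], 'D': ['B', 'C']},
#                {'A': 1.0, 'B': 0.5, 'C': 0.7, 'D': 0.3}, t_max=20.0)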
1608.02795
Guido Tiana
Y. Zhan, L. Giorgetti, G. Tiana
The looping probability of random heteropolymers helps to understand the scaling properties of biopolymers
null
Phys. Rev. E 94, 032402 (2016)
10.1103/PhysRevE.94.032402
null
q-bio.QM cond-mat.soft q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Random heteropolymers are a minimal description of biopolymers and can provide a theoretical framework to investigate the formation of loops in biophysical experiments. A two-state model provides a consistent and robust way to study the scaling properties of loop formation in polymers of the size of typical biological systems. Combining it with self-adjusting simulated-tempering simulations, we can calculate numerically the looping properties of several realizations of the random interactions within the chain. Differently from homopolymers, random heteropolymers display at different temperatures a continuous set of scaling exponents. The necessity of using self-averaging quantities makes finite-size effects dominant at low temperatures even for long polymers, shadowing the length-independent character of looping probability expected in analogy with homopolymeric globules. This could provide a simple explanation for the small scaling exponents found in experiments, for example in chromosome folding.
[ { "created": "Tue, 9 Aug 2016 13:04:34 GMT", "version": "v1" } ]
2016-09-28
[ [ "Zhan", "Y.", "" ], [ "Giorgetti", "L.", "" ], [ "Tiana", "G.", "" ] ]
Random heteropolymers are a minimal description of biopolymers and can provide a theoretical framework to investigate the formation of loops in biophysical experiments. A two-state model provides a consistent and robust way to study the scaling properties of loop formation in polymers of the size of typical biological systems. Combining it with self-adjusting simulated-tempering simulations, we can calculate numerically the looping properties of several realizations of the random interactions within the chain. Differently from homopolymers, random heteropolymers display at different temperatures a continuous set of scaling exponents. The necessity of using self-averaging quantities makes finite-size effects dominant at low temperatures even for long polymers, shadowing the length-independent character of looping probability expected in analogy with homopolymeric globules. This could provide a simple explanation for the small scaling exponents found in experiments, for example in chromosome folding.
1704.02168
Todd Parsons
Todd L. Parsons
Invasion probabilities, hitting times, and some fluctuation theory for the stochastic logistic process
null
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider excursions for a class of stochastic processes describing a population of discrete individuals experiencing density-limited growth, such that the population has a finite carrying capacity and behaves qualitatively like the classical logistic model when the carrying capacity is large. Being discrete and stochastic, however, our population nonetheless goes extinct in finite time. We present results concerning the maximum of the population prior to extinction in the large population limit, from which we obtain establishment probabilities and upper bounds for the process, as well as estimates for the waiting time to establishment and extinction. As a consequence, we show that conditional upon establishment, the stochastic logistic process will with high probability greatly exceed carrying capacity an arbitrary number of times prior to extinction.
[ { "created": "Fri, 7 Apr 2017 10:22:21 GMT", "version": "v1" } ]
2017-04-10
[ [ "Parsons", "Todd L.", "" ] ]
We consider excursions for a class of stochastic processes describing a population of discrete individuals experiencing density-limited growth, such that the population has a finite carrying capacity and behaves qualitatively like the classical logistic model when the carrying capacity is large. Being discrete and stochastic, however, our population nonetheless goes extinct in finite time. We present results concerning the maximum of the population prior to extinction in the large population limit, from which we obtain establishment probabilities and upper bounds for the process, as well as estimates for the waiting time to establishment and extinction. As a consequence, we show that conditional upon establishment, the stochastic logistic process will with high probability greatly exceed carrying capacity an arbitrary number of times prior to extinction.
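A direct way to explore such excursions numerically is a Gillespie simulation of a birth-death chain with density-limited death. The parameterization below, with births and deaths balancing at n = K, is one common choice and not necessarily the exact model class analyzed in the paper.

import numpy as np

def gillespie_logistic(n0, b, d, K, t_max, seed=None):
    # Per-capita birth rate b; per-capita death rate d + (b - d) * n / K,
    # so the deterministic limit is logistic with carrying capacity K.
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, sizes = [t], [n]
    while t < t_max and n > 0:
        birth = b * n
        death = n * (d + (b - d) * n / K)
        total = birth + death
        t += rng.exponential(1.0 / total)
        n += 1 if rng.random() < birth / total else -1
        times.append(t)
        sizes.append(n)
    return np.array(times), np.array(sizes)

# e.g. times, sizes = gillespie_logistic(n0=10, b=2.0, d=1.0, K=100, t_max=500.0)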
0902.2918
Johannes Berg
Franck Stauffer and Johannes Berg
Adaptive gene regulatory networks
5 pages RevTex
null
10.1209/0295-5075/88/48004
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Regulatory interactions between genes show a large amount of cross-species variability, even when the underlying functions are conserved: There are many ways to achieve the same function. Here we investigate the ability of regulatory networks to reproduce given expression levels within a simple model of gene regulation. We find an exponentially large space of regulatory networks compatible with a given set of expression levels, giving rise to an extensive entropy of networks. Typical realisations of regulatory networks are found to share a bias towards symmetric interactions, in line with empirical evidence.
[ { "created": "Tue, 17 Feb 2009 13:02:42 GMT", "version": "v1" } ]
2015-05-13
[ [ "Stauffer", "Franck", "" ], [ "Berg", "Johannes", "" ] ]
Regulatory interactions between genes show a large amount of cross-species variability, even when the underlying functions are conserved: There are many ways to achieve the same function. Here we investigate the ability of regulatory networks to reproduce given expression levels within a simple model of gene regulation. We find an exponentially large space of regulatory networks compatible with a given set of expression levels, giving rise to an extensive entropy of networks. Typical realisations of regulatory networks are found to share a bias towards symmetric interactions, in line with empirical evidence.
1510.00576
Tobias Galla
Julie Eatock, Yen Ting Lin, Eugene T. Y. Chang, Tobias Galla, Richard H. Clayton
Assessing Measures of Atrial Fibrillation Clustering via Stochastic Models of Episode Recurrence and Disease Progression
4 pages, 4 figures, submitted to Computing in Cardiology 2015
null
null
null
q-bio.TO q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Atrial fibrillation (AF) is a leading cause of morbidity and mortality. AF prevalence increases with age, which is attributed to pathophysiological changes that aid AF initiation and perpetuation. Current state-of-the-art models are only capable of simulating short periods of atrial activity at high spatial resolution, whilst the majority of clinical recordings are based on infrequent temporal datasets of limited spatial resolution. Being able to estimate disease progression informed by both modelling and clinical data would be of significant interest. In addition, an analysis of the temporal distribution of recorded fibrillation episodes (AF density) can provide insights into recurrence patterns. We present an initial analysis of the AF density measure using a simplified idealised stochastic model of a binary time series representing AF episodes. The future aim of this work is to develop robust clinical measures of progression which will be tested on models that generate long-term synthetic data. These measures would then be of clinical interest in deciding treatment strategies.
[ { "created": "Fri, 2 Oct 2015 12:29:21 GMT", "version": "v1" } ]
2015-10-05
[ [ "Eatock", "Julie", "" ], [ "Lin", "Yen Ting", "" ], [ "Chang", "Eugene T. Y.", "" ], [ "Galla", "Tobias", "" ], [ "Clayton", "Richard H.", "" ] ]
Atrial fibrillation (AF) is a leading cause of morbidity and mortality. AF prevalence increases with age, which is attributed to pathophysiological changes that aid AF initiation and perpetuation. Current state-of-the-art models are only capable of simulating short periods of atrial activity at high spatial resolution, whilst the majority of clinical recordings are based on infrequent temporal datasets of limited spatial resolution. Being able to estimate disease progression informed by both modelling and clinical data would be of significant interest. In addition, an analysis of the temporal distribution of recorded fibrillation episodes (AF density) can provide insights into recurrence patterns. We present an initial analysis of the AF density measure using a simplified idealised stochastic model of a binary time series representing AF episodes. The future aim of this work is to develop robust clinical measures of progression which will be tested on models that generate long-term synthetic data. These measures would then be of clinical interest in deciding treatment strategies.
2004.13777
Vikram Singh
Spandan Kumar, Bhanu Sharma, Vikram Singh
Modelling the role of media induced fear conditioning in mitigating post-lockdown COVID-19 pandemic: perspectives on India
21 pages, 8 figures, 1 table
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Several countries that have been successful in constraining the severity of the COVID-19 pandemic via "lockdown" are now considering slowly ending it, mainly because of enormous socio-economic side-effects. An abrupt ending of lockdown can increase the basic reproductive number and undo everything; therefore, carefully designed exit strategies are needed to sustain its benefits after upliftment. To study the role of fear conditioning in mitigating the spread of COVID-19 in the post-lockdown phase, in this work we propose an age- and social-contact-structure dependent Susceptible, Feared, Exposed, Infected and Recovered (SFEIR) model. Simulating the SFEIR model on the Indian population with fear conditioning via mass media (such as television, community radio, the internet and print media) along with positive reinforcement, it is found that an increase in the fraction of feared people results in a significant decrease in the growth of the infected population. The present study suggests that, during the post-lockdown phase, media-induced fear conditioning in conjunction with closure of schools for about one more year can serve as an important non-pharmaceutical intervention to substantially mitigate this pandemic in India. The proposed SFEIR model, by quantifying the influence of media in inducing fear conditioning, underlines the importance of community-driven changes in country-specific mitigation of COVID-19 spread in the post-lockdown phase.
[ { "created": "Tue, 28 Apr 2020 19:05:04 GMT", "version": "v1" }, { "created": "Mon, 25 May 2020 15:25:42 GMT", "version": "v2" } ]
2020-05-26
[ [ "Kumar", "Spandan", "" ], [ "Sharma", "Bhanu", "" ], [ "Singh", "Vikram", "" ] ]
Several countries that have been successful in constraining the severity of the COVID-19 pandemic via "lockdown" are now considering slowly ending it, mainly because of enormous socio-economic side-effects. An abrupt ending of lockdown can increase the basic reproductive number and undo everything; therefore, carefully designed exit strategies are needed to sustain its benefits after upliftment. To study the role of fear conditioning in mitigating the spread of COVID-19 in the post-lockdown phase, in this work we propose an age- and social-contact-structure dependent Susceptible, Feared, Exposed, Infected and Recovered (SFEIR) model. Simulating the SFEIR model on the Indian population with fear conditioning via mass media (such as television, community radio, the internet and print media) along with positive reinforcement, it is found that an increase in the fraction of feared people results in a significant decrease in the growth of the infected population. The present study suggests that, during the post-lockdown phase, media-induced fear conditioning in conjunction with closure of schools for about one more year can serve as an important non-pharmaceutical intervention to substantially mitigate this pandemic in India. The proposed SFEIR model, by quantifying the influence of media in inducing fear conditioning, underlines the importance of community-driven changes in country-specific mitigation of COVID-19 spread in the post-lockdown phase.
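The compartment structure can be sketched with a deliberately simplified, homogeneous Euler integration; the coupling terms and parameter values below are illustrative guesses, not the age- and contact-structured equations of the paper.

import numpy as np

def sfeir_step(state, params, dt):
    # S: susceptible, F: feared (reduced contacts), E: exposed, I: infectious, R: removed.
    S, F, E, I, R = state
    beta, kappa, phi, rho, sigma, gamma = params
    N = S + F + E + I + R
    exp_S = beta * S * I / N            # transmission to unafraid susceptibles
    exp_F = kappa * beta * F * I / N    # reduced transmission to feared people (kappa < 1)
    fear = phi * S * I / N              # fear induced by media coverage of prevalence
    dS = -exp_S - fear + rho * F        # rho: rate at which fear wanes
    dF = fear - rho * F - exp_F
    dE = exp_S + exp_F - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return state + dt * np.array([dS, dF, dE, dI, dR])

# state = np.array([1.3e9 - 200.0, 0.0, 100.0, 100.0, 0.0])
# for _ in range(3650):  # one year at dt = 0.1 day
#     state = sfeir_step(state, params=(0.4, 0.3, 0.05, 0.02, 1/5, 1/7), dt=0.1)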
2304.01799
Gavin Mischler
Gavin Mischler, Vinay Raghavan, Menoua Keshishian, Nima Mesgarani
naplib-python: Neural Acoustic Data Processing and Analysis Tools in Python
9 pages including references, 1 table, 1 figure
null
10.1016/j.simpa.2023.100541
null
q-bio.NC q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recently, the computational neuroscience community has pushed for more transparent and reproducible methods across the field. In the interest of unifying the domain of auditory neuroscience, naplib-python provides an intuitive and general data structure for handling all neural recordings and stimuli, as well as extensive preprocessing, feature extraction, and analysis tools which operate on that data structure. The package removes many of the complications associated with this domain, such as varying trial durations and multi-modal stimuli, and provides a general-purpose analysis framework that interfaces easily with existing toolboxes used in the field.
[ { "created": "Tue, 4 Apr 2023 13:56:32 GMT", "version": "v1" } ]
2023-09-20
[ [ "Mischler", "Gavin", "" ], [ "Raghavan", "Vinay", "" ], [ "Keshishian", "Menoua", "" ], [ "Mesgarani", "Nima", "" ] ]
Recently, the computational neuroscience community has pushed for more transparent and reproducible methods across the field. In the interest of unifying the domain of auditory neuroscience, naplib-python provides an intuitive and general data structure for handling all neural recordings and stimuli, as well as extensive preprocessing, feature extraction, and analysis tools which operate on that data structure. The package removes many of the complications associated with this domain, such as varying trial durations and multi-modal stimuli, and provides a general-purpose analysis framework that interfaces easily with existing toolboxes used in the field.
2312.01186
Shuxian Zou
Shuxian Zou, Hui Li, Shentong Mo, Xingyi Cheng, Eric Xing, Le Song
Linker-Tuning: Optimizing Continuous Prompts for Heterodimeric Protein Prediction
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting the structure of interacting chains is crucial for understanding biological systems and developing new drugs. Large-scale pre-trained Protein Language Models (PLMs), such as ESM2, have shown impressive abilities in extracting biologically meaningful representations for protein structure prediction. In this paper, we show that ESMFold, which has been successful in computing accurate atomic structures for single-chain proteins, can be adapted to predict the heterodimer structures in a lightweight manner. We propose Linker-tuning, which learns a continuous prompt to connect the two chains in a dimer before running it as a single sequence in ESMFold. Experiment results show that our method successfully predicts 56.98% of interfaces on the i.i.d. heterodimer test set, with an absolute improvement of +12.79% over the ESMFold-Linker baseline. Furthermore, our model can generalize well to the out-of-distribution (OOD) test set HeteroTest2 and two antibody test sets Fab and Fv while being $9\times$ faster than AF-Multimer.
[ { "created": "Sat, 2 Dec 2023 17:24:45 GMT", "version": "v1" } ]
2023-12-05
[ [ "Zou", "Shuxian", "" ], [ "Li", "Hui", "" ], [ "Mo", "Shentong", "" ], [ "Cheng", "Xingyi", "" ], [ "Xing", "Eric", "" ], [ "Song", "Le", "" ] ]
Predicting the structure of interacting chains is crucial for understanding biological systems and developing new drugs. Large-scale pre-trained Protein Language Models (PLMs), such as ESM2, have shown impressive abilities in extracting biologically meaningful representations for protein structure prediction. In this paper, we show that ESMFold, which has been successful in computing accurate atomic structures for single-chain proteins, can be adapted to predict the heterodimer structures in a lightweight manner. We propose Linker-tuning, which learns a continuous prompt to connect the two chains in a dimer before running it as a single sequence in ESMFold. Experiment results show that our method successfully predicts 56.98% of interfaces on the i.i.d. heterodimer test set, with an absolute improvement of +12.79% over the ESMFold-Linker baseline. Furthermore, our model can generalize well to the out-of-distribution (OOD) test set HeteroTest2 and two antibody test sets Fab and Fv while being $9\times$ faster than AF-Multimer.
1505.04518
Christopher Marriott
Chris Marriott and Jobran Chebib
Emergence-focused design in complex system simulation
European Conference on Artificial Life 2015 - York, UK
null
null
null
q-bio.PE cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Emergence is a phenomenon taken for granted in science but also still not well understood. We have developed a model of artificial genetic evolution intended to allow for emergence on genetic, population and social levels. We present the details of the current state of our environment, agent, and reproductive models. In developing our models we have relied on a principle of using non-linear systems to model as many systems as possible including mutation and recombination, gene-environment interaction, agent metabolism, agent survival, resource gathering and sexual reproduction. In this paper we review the genetic dynamics that have emerged in our system including genotype-phenotype divergence, genetic drift, pseudogenes, and gene duplication. We conclude that emergence-focused design in complex system simulation is necessary to reproduce the multilevel emergence seen in the natural world.
[ { "created": "Mon, 18 May 2015 05:42:38 GMT", "version": "v1" } ]
2015-05-19
[ [ "Marriott", "Chris", "" ], [ "Chebib", "Jobran", "" ] ]
Emergence is a phenomenon taken for granted in science but also still not well understood. We have developed a model of artificial genetic evolution intended to allow for emergence on genetic, population and social levels. We present the details of the current state of our environment, agent, and reproductive models. In developing our models we have relied on a principle of using non-linear systems to model as many systems as possible including mutation and recombination, gene-environment interaction, agent metabolism, agent survival, resource gathering and sexual reproduction. In this paper we review the genetic dynamics that have emerged in our system including genotype-phenotype divergence, genetic drift, pseudogenes, and gene duplication. We conclude that emergence-focused design in complex system simulation is necessary to reproduce the multilevel emergence seen in the natural world.
q-bio/0411015
Wentian Li
Wentian Li, Dirk Holste
An Unusual 500,000 Bases Long Oscillation of Guanine and Cytosine Content in Human Chromosome 21
15 pages (figures included), 5 figures
Computational Biology and Chemistry, 28(5-6): 393-399 (2004)
10.1016/j.compbiolchem.2004.09.011
q-bio.GN/0411015
q-bio.GN q-bio.QM
null
An oscillation with a period of around 500 kb in guanine and cytosine content (GC%) is observed in the DNA sequence of human chromosome 21. This oscillation is localized in the rightmost one-eighth region of the chromosome, from 43.5 Mb to 46.5 Mb. Five cycles of oscillation are observed in this region with six GC-rich peaks and five GC-poor valleys. The GC-poor valleys comprise regions with low density of CpG islands and, alternating between the two DNA strands, low gene density regions. Consequently, the long-range oscillation of GC% results in spacing patterns of both CpG island density and, to a lesser extent, gene densities.
[ { "created": "Wed, 3 Nov 2004 20:20:06 GMT", "version": "v1" } ]
2007-05-23
[ [ "Li", "Wentian", "" ], [ "Holste", "Dirk", "" ] ]
An oscillation with a period of around 500 kb in guanine and cytosine content (GC%) is observed in the DNA sequence of human chromosome 21. This oscillation is localized in the rightmost one-eighth region of the chromosome, from 43.5 Mb to 46.5 Mb. Five cycles of oscillation are observed in this region with six GC-rich peaks and five GC-poor valleys. The GC-poor valleys comprise regions with low density of CpG islands and, alternating between the two DNA strands, low gene density regions. Consequently, the long-range oscillation of GC% results in spacing patterns of both CpG island density and, to a lesser extent, gene densities.
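The underlying quantity is straightforward to recompute: GC% in sliding windows along the sequence, from which a roughly 500 kb periodicity would show up in the 43.5-46.5 Mb region. Window and step sizes below are illustrative.

def gc_profile(seq, window=100_000, step=10_000):
    # GC% in sliding windows along a DNA sequence string; windows of only N's are skipped.
    seq = seq.upper()
    positions, gc = [], []
    for start in range(0, len(seq) - window + 1, step):
        win = seq[start:start + window]
        g_c = win.count('G') + win.count('C')
        a_t = win.count('A') + win.count('T')
        if g_c + a_t == 0:
            continue
        positions.append(start + window // 2)
        gc.append(100.0 * g_c / (g_c + a_t))
    return positions, gc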
1412.2779
Guowei Wei
Kelin Xia and Guo-Wei Wei
Persistent homology analysis of protein structure, flexibility and folding
22 figures, 82 references
International Journal for Numerical Methods in Biomedical Engineering, 30, 814-844 (2014)
10.1002/cnm.2655
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proteins are the most important biomolecules for living organisms. The understanding of protein structure, function, dynamics and transport is one of the most challenging tasks in biological science. In the present work, persistent homology is, for the first time, introduced for extracting molecular topological fingerprints (MTFs) based on the persistence of molecular topological invariants. MTFs are utilized for protein characterization, identification and classification. The method of slicing is proposed to track the geometric origin of protein topological invariants. Both all-atom and coarse-grained representations of MTFs are constructed. A new cutoff-like filtration is proposed to shed light on the optimal cutoff distance in elastic network models. Based on the correlation between protein compactness, rigidity and connectivity, we propose an accumulated bar length generated from persistent topological invariants for the quantitative modeling of protein flexibility. To this end, a correlation matrix based filtration is developed. This approach gives rise to an accurate prediction of the optimal characteristic distance used in protein B-factor analysis. Finally, MTFs are employed to characterize protein topological evolution during protein folding and quantitatively predict the protein folding stability. An excellent consistency between our persistent homology prediction and molecular dynamics simulation is found. This work reveals the topology-function relationship of proteins.
[ { "created": "Mon, 8 Dec 2014 21:24:50 GMT", "version": "v1" } ]
2014-12-10
[ [ "Xia", "Kelin", "" ], [ "Wei", "Guo-Wei", "" ] ]
Proteins are the most important biomolecules for living organisms. The understanding of protein structure, function, dynamics and transport is one of the most challenging tasks in biological science. In the present work, persistent homology is, for the first time, introduced for extracting molecular topological fingerprints (MTFs) based on the persistence of molecular topological invariants. MTFs are utilized for protein characterization, identification and classification. The method of slicing is proposed to track the geometric origin of protein topological invariants. Both all-atom and coarse-grained representations of MTFs are constructed. A new cutoff-like filtration is proposed to shed light on the optimal cutoff distance in elastic network models. Based on the correlation between protein compactness, rigidity and connectivity, we propose an accumulated bar length generated from persistent topological invariants for the quantitative modeling of protein flexibility. To this end, a correlation matrix based filtration is developed. This approach gives rise to an accurate prediction of the optimal characteristic distance used in protein B-factor analysis. Finally, MTFs are employed to characterize protein topological evolution during protein folding and quantitatively predict the protein folding stability. An excellent consistency between our persistent homology prediction and molecular dynamics simulation is found. This work reveals the topology-function relationship of proteins.
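The accumulated-bar-length idea mentioned above can be illustrated with a generic Vietoris-Rips persistence computation on a point cloud such as C-alpha coordinates. The sketch assumes the third-party `ripser` package and does not reproduce the paper's own filtrations (the cutoff-like and correlation-matrix filtrations); it simply sums the lengths of finite persistence bars.

```python
# Sketch: Vietoris-Rips persistence of a point cloud (e.g., C-alpha coordinates)
# and an "accumulated bar length" summary. Uses the third-party `ripser`
# package (pip install ripser) -- an assumption, not the authors' own code.
import numpy as np
from ripser import ripser

def accumulated_bar_length(points, maxdim=1):
    dgms = ripser(points, maxdim=maxdim)["dgms"]
    total = 0.0
    for dgm in dgms:
        for birth, death in dgm:
            if np.isfinite(death):          # skip the infinite H0 bar
                total += death - birth
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_coords = rng.normal(size=(60, 3))   # stand-in for atomic coordinates
    print("accumulated bar length:", accumulated_bar_length(toy_coords))
```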
0902.2970
Stephen Willson
Stephen J. Willson
Regular networks are determined by their trees
16 pages
IEEE/ACM Transactions on Computational Biology and Bioinformatics 8 (2011) 785-796
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A rooted acyclic digraph N with labelled leaves displays a tree T when there exists a way to select a unique parent of each hybrid vertex resulting in the tree T. Let Tr(N) denote the set of all trees displayed by the network N. In general, there may be many other networks M such that Tr(M) = Tr(N). A network is regular if it is isomorphic with its cover digraph. This paper shows that if N is regular, there is a procedure to reconstruct N given Tr(N). Hence if N and M are regular networks and Tr(N) = Tr(M), it follows that N = M, proving that a regular network is uniquely determined by its displayed trees.
[ { "created": "Tue, 17 Feb 2009 18:56:10 GMT", "version": "v1" } ]
2015-01-30
[ [ "Willson", "Stephen J.", "" ] ]
A rooted acyclic digraph N with labelled leaves displays a tree T when there exists a way to select a unique parent of each hybrid vertex resulting in the tree T. Let Tr(N) denote the set of all trees displayed by the network N. In general, there may be many other networks M such that Tr(M) = Tr(N). A network is regular if it is isomorphic with its cover digraph. This paper shows that if N is regular, there is a procedure to reconstruct N given Tr(N). Hence if N and M are regular networks and Tr(N) = Tr(M), it follows that N = M, proving that a regular network is uniquely determined by its displayed trees.
q-bio/0609044
Jose Vilar
Leonor Saiz and Jose M. G. Vilar
Stochastic dynamics of macromolecular-assembly networks
Open Access article available at http://www.nature.com/msb/journal/v2/n1/full/msb4100061.html
Nature/EMBO Molecular Systems Biology 2, art. no. msb4100061, pp. 2006.0024 (2006)
10.1038/msb4100061
null
q-bio.MN cond-mat.soft physics.bio-ph q-bio.SC
null
The formation and regulation of macromolecular complexes provides the backbone of most cellular processes, including gene regulation and signal transduction. The inherent complexity of assembling macromolecular structures makes current computational methods strongly limited for understanding how the physical interactions between cellular components give rise to systemic properties of cells. Here we present a stochastic approach to study the dynamics of networks formed by macromolecular complexes in terms of the molecular interactions of their components. Exploiting key thermodynamic concepts, this approach makes it possible to both estimate reaction rates and incorporate the resulting assembly dynamics into the stochastic kinetics of cellular networks. As prototype systems, we consider the lac operon and phage lambda induction switches, which rely on the formation of DNA loops by proteins and on the integration of these protein-DNA complexes into intracellular networks. This cross-scale approach offers an effective starting point to move forward from network diagrams, such as those of protein-protein and DNA-protein interaction networks, to the actual dynamics of cellular processes.
[ { "created": "Tue, 26 Sep 2006 16:28:27 GMT", "version": "v1" }, { "created": "Tue, 26 Sep 2006 20:47:00 GMT", "version": "v2" } ]
2007-05-23
[ [ "Saiz", "Leonor", "" ], [ "Vilar", "Jose M. G.", "" ] ]
The formation and regulation of macromolecular complexes provides the backbone of most cellular processes, including gene regulation and signal transduction. The inherent complexity of assembling macromolecular structures makes current computational methods strongly limited for understanding how the physical interactions between cellular components give rise to systemic properties of cells. Here we present a stochastic approach to study the dynamics of networks formed by macromolecular complexes in terms of the molecular interactions of their components. Exploiting key thermodynamic concepts, this approach makes it possible to both estimate reaction rates and incorporate the resulting assembly dynamics into the stochastic kinetics of cellular networks. As prototype systems, we consider the lac operon and phage lambda induction switches, which rely on the formation of DNA loops by proteins and on the integration of these protein-DNA complexes into intracellular networks. This cross-scale approach offers an effective starting point to move forward from network diagrams, such as those of protein-protein and DNA-protein interaction networks, to the actual dynamics of cellular processes.
1708.03765
Romulus Breban
Pavel Polyakov, C\'ecile Souty, Pierre-Yves B\"oelle and Romulus Breban
Spatial heterogeneity analyses identify limitations of epidemic alert systems: Monitoring influenza-like illness in France
24 pages, 1 table, 4 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Surveillance data used by epidemic alert systems are typically fully aggregated in space. However, epidemics may be spatially heterogeneous, undergoing distinct dynamics in distinct regions of the surveillance area. We unveil this in retrospective analyses by classifying incidence time series. We use Pearson correlation to quantify the similarity between local time series and then classify them using modularity maximization. The surveillance area is thus divided into regions with different incidence patterns. We analyzed 31 years of data on influenza-like illness from the French system Sentinelles and found spatial heterogeneity in 19/31 influenza seasons. However, distinct epidemic regions could be identified only 4-5 weeks after the nationwide alert. The impact of spatial heterogeneity on influenza epidemiology was complex. First, when the nationwide alert was triggered, 32-41% of the administrative regions were experiencing an epidemic, while the others were not. Second, the nationwide alert was timely for the whole surveillance area, but, subsequently, regions experienced distinct epidemic dynamics. Third, the epidemic dynamics were homogeneous in space. Spatial heterogeneity analyses can provide the timing of the epidemic peak and end in various regions, helping to tailor disease monitoring and control.
[ { "created": "Sat, 12 Aug 2017 11:13:14 GMT", "version": "v1" }, { "created": "Thu, 29 Mar 2018 08:01:49 GMT", "version": "v2" } ]
2018-03-30
[ [ "Polyakov", "Pavel", "" ], [ "Souty", "Cécile", "" ], [ "Böelle", "Pierre-Yves", "" ], [ "Breban", "Romulus", "" ] ]
Surveillance data used by epidemic alert systems are typically fully aggregated in space. However, epidemics may be spatially heterogeneous, undergoing distinct dynamics in distinct regions of the surveillance area. We unveil this in retrospective analyses by classifying incidence time series. We use Pearson correlation to quantify the similarity between local time series and then classify them using modularity maximization. The surveillance area is thus divided into regions with different incidence patterns. We analyzed 31 years of data on influenza-like illness from the French system Sentinelles and found spatial heterogeneity in 19/31 influenza seasons. However, distinct epidemic regions could be identified only 4-5 weeks after the nationwide alert. The impact of spatial heterogeneity on influenza epidemiology was complex. First, when the nationwide alert was triggered, 32-41% of the administrative regions were experiencing an epidemic, while the others were not. Second, the nationwide alert was timely for the whole surveillance area, but, subsequently, regions experienced distinct epidemic dynamics. Third, the epidemic dynamics were homogeneous in space. Spatial heterogeneity analyses can provide the timing of the epidemic peak and end in various regions, helping to tailor disease monitoring and control.
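The classification recipe summarized above (Pearson correlation between regional incidence series, followed by modularity maximization) can be sketched as follows. NetworkX's greedy modularity heuristic is used here as a stand-in for the paper's exact community-detection step, and the synthetic series are purely illustrative.

```python
# Sketch: correlate regional incidence time series and group regions by
# modularity maximization. The greedy heuristic and synthetic data are
# illustrative assumptions, not the paper's exact procedure.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def cluster_regions(series, names):
    """series: (n_regions, n_weeks) array; names: list of region labels."""
    corr = np.corrcoef(series)
    g = nx.Graph()
    g.add_nodes_from(names)
    n = len(names)
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] > 0:                      # keep positive similarity only
                g.add_edge(names[i], names[j], weight=corr[i, j])
    return [set(c) for c in greedy_modularity_communities(g, weight="weight")]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(52)
    early = np.sin(2 * np.pi * t / 52) + 0.1 * rng.normal(size=(3, 52))
    late = np.sin(2 * np.pi * (t - 26) / 52) + 0.1 * rng.normal(size=(3, 52))
    data = np.vstack([early, late])
    print(cluster_regions(data, [f"region{i}" for i in range(6)]))
```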
1204.0997
Gautier Stoll
Gautier Stoll, Eric Viara, Emmanuel Barillot, Laurence Calzone
Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm
10 pages, 9 figures
null
null
null
q-bio.MN physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article presents an algorithm that allows modeling of biological networks in a qualitative framework with continuous time. Mathematical modeling is used as a systems biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and to predict the effect of perturbations. We propose a modeling approach that is intrinsically continuous in time. The algorithm presented here fills the gap between qualitative and quantitative modeling. It is based on a continuous-time Markov process applied to a Boolean state space. In order to describe the temporal evolution, we explicitly specify the transition rates for each node. For that purpose, we built a language that can be seen as a generalization of Boolean equations. The values of the transition rates have a natural interpretation: each rate is the inverse of the time for the corresponding transition to occur. Mathematically, this approach can be translated into a set of ordinary differential equations on probability distributions; therefore, it can be seen as an approach in between quantitative and qualitative. We developed C++ software, MaBoSS, that is able to simulate such a system by applying kinetic Monte-Carlo (the Gillespie algorithm) in the Boolean state space. This software, parallelized and optimized, computes the temporal evolution of probability distributions and can also estimate stationary distributions. Applications of Boolean kinetic Monte-Carlo have been demonstrated for two qualitative models: a toy model and a published p53/Mdm2 model. Our approach allows us to describe kinetic phenomena which were difficult to handle in the original models. In particular, transient effects are represented by time-dependent probability distributions, interpretable in terms of cell populations.
[ { "created": "Wed, 4 Apr 2012 16:47:33 GMT", "version": "v1" }, { "created": "Tue, 29 May 2012 13:51:13 GMT", "version": "v2" } ]
2012-05-30
[ [ "Stoll", "Gautier", "" ], [ "Viara", "Eric", "" ], [ "Barillot", "Emmanuel", "" ], [ "Calzone", "Laurence", "" ] ]
This article presents an algorithm that allows modeling of biological networks in a qualitative framework with continuous time. Mathematical modeling is used as a systems biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and to predict the effect of perturbations. We propose a modeling approach that is intrinsically continuous in time. The algorithm presented here fills the gap between qualitative and quantitative modeling. It is based on a continuous-time Markov process applied to a Boolean state space. In order to describe the temporal evolution, we explicitly specify the transition rates for each node. For that purpose, we built a language that can be seen as a generalization of Boolean equations. The values of the transition rates have a natural interpretation: each rate is the inverse of the time for the corresponding transition to occur. Mathematically, this approach can be translated into a set of ordinary differential equations on probability distributions; therefore, it can be seen as an approach in between quantitative and qualitative. We developed C++ software, MaBoSS, that is able to simulate such a system by applying kinetic Monte-Carlo (the Gillespie algorithm) in the Boolean state space. This software, parallelized and optimized, computes the temporal evolution of probability distributions and can also estimate stationary distributions. Applications of Boolean kinetic Monte-Carlo have been demonstrated for two qualitative models: a toy model and a published p53/Mdm2 model. Our approach allows us to describe kinetic phenomena which were difficult to handle in the original models. In particular, transient effects are represented by time-dependent probability distributions, interpretable in terms of cell populations.
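The continuous-time Boolean scheme described above can be illustrated with a toy kinetic Monte-Carlo (Gillespie) simulation on a two-node Boolean state space. This is not MaBoSS itself; the flip-rate functions and the tiny mutual-repression example are illustrative assumptions.

```python
# Toy illustration: Gillespie (kinetic Monte-Carlo) dynamics on a Boolean
# state space, with node flip rates that depend on the current state.
import random

def simulate(state, rates, t_max, rng):
    """state: dict node -> 0/1; rates: dict node -> function(state) -> flip rate."""
    t, trajectory = 0.0, [(0.0, dict(state))]
    while t < t_max:
        propensities = {n: r(state) for n, r in rates.items()}
        total = sum(propensities.values())
        if total == 0:
            break                                   # absorbing (fixed-point) state
        t += rng.expovariate(total)
        # choose which node flips, proportional to its rate
        pick, acc = rng.random() * total, 0.0
        for node, p in propensities.items():
            acc += p
            if pick <= acc:
                state[node] = 1 - state[node]
                break
        trajectory.append((t, dict(state)))
    return trajectory

if __name__ == "__main__":
    rng = random.Random(0)
    # two mutually repressing nodes: a node tends to switch on only if the other is off
    rates = {
        "A": lambda s: 1.0 if (s["A"] == 0 and s["B"] == 0) else (0.1 if s["A"] == 1 else 0.0),
        "B": lambda s: 1.0 if (s["B"] == 0 and s["A"] == 0) else (0.1 if s["B"] == 1 else 0.0),
    }
    for t, s in simulate({"A": 0, "B": 0}, rates, t_max=20.0, rng=rng)[:10]:
        print(f"t={t:6.3f}  A={s['A']}  B={s['B']}")
```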
2407.10832
Alexandria Volkening
Alexandria Volkening
Methods for quantifying self-organization in biology: a forward-looking survey and tutorial
Tutorial survey on methods for quantifying biological patterns
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
From flocking birds to schooling fish, organisms interact to form collective dynamics across the natural world. Self-organization is present at smaller scales as well: cells interact and move during development to produce patterns in fish skin, and wound healing relies on cell migration. Across these examples, scientists are interested in shedding light on the individual behaviors informing spatial group dynamics and in predicting the patterns that will emerge under altered agent interactions. One challenge to these goals is that images of self-organization -- whether empirical or generated by models -- are qualitative. To get around this, there are many methods for transforming qualitative pattern data into quantitative information. In this tutorial chapter, I survey some methods for quantifying self-organization, including order parameters, pair correlation functions, and techniques from topological data analysis. I also discuss some places that I see as especially promising for quantitative data, modeling, and data-driven approaches to continue meeting in the future.
[ { "created": "Mon, 15 Jul 2024 15:43:16 GMT", "version": "v1" } ]
2024-07-16
[ [ "Volkening", "Alexandria", "" ] ]
From flocking birds to schooling fish, organisms interact to form collective dynamics across the natural world. Self-organization is present at smaller scales as well: cells interact and move during development to produce patterns in fish skin, and wound healing relies on cell migration. Across these examples, scientists are interested in shedding light on the individual behaviors informing spatial group dynamics and in predicting the patterns that will emerge under altered agent interactions. One challenge to these goals is that images of self-organization -- whether empirical or generated by models -- are qualitative. To get around this, there are many methods for transforming qualitative pattern data into quantitative information. In this tutorial chapter, I survey some methods for quantifying self-organization, including order parameters, pair correlation functions, and techniques from topological data analysis. I also discuss some places that I see as especially promising for quantitative data, modeling, and data-driven approaches to continue meeting in the future.
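One of the summaries named above, the pair correlation function, can be sketched for a two-dimensional point pattern as follows. The normalization assumes a unit square with no edge correction, which is a simplification relative to the careful estimators used in practice.

```python
# Minimal sketch: isotropic pair correlation function g(r) for points in a
# unit square. No edge correction is applied (a simplification), so values
# at larger r are biased slightly downward.
import numpy as np

def pair_correlation(points, r_edges):
    n = len(points)
    density = n / 1.0                          # unit-square area
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]             # unique pairs
    counts, _ = np.histogram(d, bins=r_edges)
    r_lo, r_hi = r_edges[:-1], r_edges[1:]
    shell_area = np.pi * (r_hi**2 - r_lo**2)
    expected = density * shell_area * n / 2.0  # pairs expected under randomness
    return counts / expected

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pts = rng.random((400, 2))                 # complete spatial randomness
    g = pair_correlation(pts, np.linspace(0.01, 0.2, 11))
    print(np.round(g, 2))                      # near 1 at small r for a random pattern
```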
2304.12693
Neil Scheidwasser
Matthew J Penn, Neil Scheidwasser, Mark P Khurana, David A Duch\^ene, Christl A Donnelly, Samir Bhatt
Phylo2Vec: a vector representation for binary trees
36 pages, 9 figures, 1 table, 2 supplementary figures
null
null
null
q-bio.PE cs.LG q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Binary phylogenetic trees inferred from biological data are central to understanding the shared history among evolutionary units. However, inferring the placement of latent nodes in a tree is NP-hard and thus computationally expensive. State-of-the-art methods rely on carefully designed heuristics for tree search. These methods use different data structures for easy manipulation (e.g., classes in object-oriented programming languages) and readable representation of trees (e.g., Newick-format strings). Here, we present Phylo2Vec, a parsimonious encoding for phylogenetic trees that serves as a unified approach for both manipulating and representing phylogenetic trees. Phylo2Vec maps any binary tree with $n$ leaves to a unique integer vector of length $n-1$. The advantages of Phylo2Vec are fourfold: (i) fast tree sampling, (ii) compressed tree representation compared to a Newick string, (iii) quick and unambiguous verification if two binary trees are identical topologically, and (iv) systematic ability to traverse tree space in very large or small jumps. As a proof of concept, we use Phylo2Vec for maximum likelihood inference on five real-world datasets and show that a simple hill-climbing-based optimisation scheme can efficiently traverse the vastness of tree space from a random to an optimal tree.
[ { "created": "Tue, 25 Apr 2023 09:54:35 GMT", "version": "v1" }, { "created": "Fri, 1 Dec 2023 08:26:28 GMT", "version": "v2" }, { "created": "Fri, 10 May 2024 14:31:10 GMT", "version": "v3" } ]
2024-05-13
[ [ "Penn", "Matthew J", "" ], [ "Scheidwasser", "Neil", "" ], [ "Khurana", "Mark P", "" ], [ "Duchêne", "David A", "" ], [ "Donnelly", "Christl A", "" ], [ "Bhatt", "Samir", "" ] ]
Binary phylogenetic trees inferred from biological data are central to understanding the shared history among evolutionary units. However, inferring the placement of latent nodes in a tree is NP-hard and thus computationally expensive. State-of-the-art methods rely on carefully designed heuristics for tree search. These methods use different data structures for easy manipulation (e.g., classes in object-oriented programming languages) and readable representation of trees (e.g., Newick-format strings). Here, we present Phylo2Vec, a parsimonious encoding for phylogenetic trees that serves as a unified approach for both manipulating and representing phylogenetic trees. Phylo2Vec maps any binary tree with $n$ leaves to a unique integer vector of length $n-1$. The advantages of Phylo2Vec are fourfold: (i) fast tree sampling, (ii) compressed tree representation compared to a Newick string, (iii) quick and unambiguous verification if two binary trees are identical topologically, and (iv) systematic ability to traverse tree space in very large or small jumps. As a proof of concept, we use Phylo2Vec for maximum likelihood inference on five real-world datasets and show that a simple hill-climbing-based optimisation scheme can efficiently traverse the vastness of tree space from a random to an optimal tree.
2007.09800
Joaquin Salas
Joaqu\'in Salas
Improving the Estimation of the COVID-19 Effective Reproduction Number using Nowcasting
11 pages, 5 figures
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the interactions between people increase, the impending menace of COVID-19 outbreaks materializes, and there is an inclination to apply lockdowns. In this context, it is essential to have easy-to-use indicators for people to use as a reference. The basic reproduction number of confirmed positives, $R_t$, fulfills such a role. This document proposes a data-driven approach to nowcast $R_t$ based on previous observations' statistical behavior. As more information arrives, the method naturally becomes more precise about the final count of confirmed positives. Our method's strength is that it is based on the self-reported onset of symptoms, in contrast to other methods that use the daily report's count to infer this quantity. We show that our approach may be the foundation for determining useful epidemic-tracking indicators.
[ { "created": "Sun, 19 Jul 2020 22:17:26 GMT", "version": "v1" }, { "created": "Mon, 25 Jan 2021 05:57:54 GMT", "version": "v2" } ]
2021-01-26
[ [ "Salas", "Joaquín", "" ] ]
As the interactions between people increase, the impending menace of COVID-19 outbreaks materializes, and there is an inclination to apply lockdowns. In this context, it is essential to have easy-to-use indicators for people to use as a reference. The basic reproduction number of confirmed positives, $R_t$, fulfills such a role. This document proposes a data-driven approach to nowcast $R_t$ based on previous observations' statistical behavior. As more information arrives, the method naturally becomes more precise about the final count of confirmed positives. Our method's strength is that it is based on the self-reported onset of symptoms, in contrast to other methods that use the daily report's count to infer this quantity. We show that our approach may be the foundation for determining useful epidemic-tracking indicators.
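A generic renewal-equation estimate of $R_t$ from onset-dated case counts, $R_t = I_t / \sum_s w_s I_{t-s}$, gives a feel for the quantity being nowcast. The sketch below is not the paper's nowcasting-corrected procedure, and the serial-interval weights are illustrative assumptions.

```python
# Sketch: renewal-equation estimate R_t = I_t / sum_s w_s I_{t-s} from a
# daily series of case counts by symptom onset. Weights are illustrative.
import numpy as np

def rt_renewal(incidence, si_weights):
    incidence = np.asarray(incidence, dtype=float)
    w = np.asarray(si_weights, dtype=float)
    w = w / w.sum()
    rt = np.full(len(incidence), np.nan)
    for t in range(len(w), len(incidence)):
        force = np.dot(w, incidence[t - len(w):t][::-1])  # sum_s w_s * I_{t-s}
        if force > 0:
            rt[t] = incidence[t] / force
    return rt

if __name__ == "__main__":
    cases = [10, 12, 15, 18, 22, 27, 33, 40, 49, 60, 73, 89]
    serial_interval = [0.2, 0.4, 0.25, 0.1, 0.05]      # illustrative pmf, s = 1..5
    print(np.round(rt_renewal(cases, serial_interval), 2))
```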
1709.05429
Hector Zenil
Hector Zenil, Narsis A. Kiani, Francesco Marabita, Yue Deng, Szabolcs Elias, Angelika Schmidt, Gordon Ball, Jesper Tegn\'er
An Algorithmic Information Calculus for Causal Discovery and Reprogramming Systems
50 pages with Supplementary Information and Extended Figures. The Online Algorithmic Complexity Calculator implements the methods in this paper: http://complexitycalculator.com/ Animated video available at: https://youtu.be/ufzq2p5tVLI
null
null
null
q-bio.OT cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We demonstrate that the algorithmic information content of a system is deeply connected to its potential dynamics, thus affording an avenue for moving systems in the information-theoretic space and controlling them in the phase space. To this end we performed experiments and validated the results on (1) a very large set of small graphs, (2) a number of larger networks with different topologies, and (3) biological networks from a widely studied and validated genetic network (E. coli) as well as on a significant number of differentiating (Th17) and differentiated human cells from high-quality databases (Harvard's CellNet) with results conforming to experimentally validated biological data. Based on these results we introduce a conceptual framework, a model-based interventional calculus and a reprogrammability measure with which to steer, manipulate, and reconstruct the dynamics of non-linear dynamical systems from partial and disordered observations. The method consists in finding and applying a series of controlled interventions to a dynamical system to estimate how its algorithmic information content is affected when every one of its elements is perturbed. The approach represents an alternative to numerical simulation and statistical approaches for inferring causal mechanistic/generative models and finding first principles. We demonstrate the framework's capabilities by reconstructing the phase space of some discrete dynamical systems (cellular automata) as a case study and reconstructing their generating rules. We thus advance tools for reprogramming artificial and living systems without full knowledge or access to the system's actual kinetic equations or probability distributions, yielding a suite of universal and parameter-free algorithms of wide applicability spanning causation, dimension reduction, feature selection and model generation.
[ { "created": "Fri, 15 Sep 2017 22:41:38 GMT", "version": "v1" }, { "created": "Thu, 15 Mar 2018 22:48:09 GMT", "version": "v10" }, { "created": "Thu, 5 Apr 2018 15:38:29 GMT", "version": "v11" }, { "created": "Tue, 19 Sep 2017 01:01:24 GMT", "version": "v2" }, { "...
2018-04-06
[ [ "Zenil", "Hector", "" ], [ "Kiani", "Narsis A.", "" ], [ "Marabita", "Francesco", "" ], [ "Deng", "Yue", "" ], [ "Elias", "Szabolcs", "" ], [ "Schmidt", "Angelika", "" ], [ "Ball", "Gordon", "" ], [ ...
We demonstrate that the algorithmic information content of a system is deeply connected to its potential dynamics, thus affording an avenue for moving systems in the information-theoretic space and controlling them in the phase space. To this end we performed experiments and validated the results on (1) a very large set of small graphs, (2) a number of larger networks with different topologies, and (3) biological networks from a widely studied and validated genetic network (E. coli) as well as on a significant number of differentiating (Th17) and differentiated human cells from high-quality databases (Harvard's CellNet) with results conforming to experimentally validated biological data. Based on these results we introduce a conceptual framework, a model-based interventional calculus and a reprogrammability measure with which to steer, manipulate, and reconstruct the dynamics of non-linear dynamical systems from partial and disordered observations. The method consists in finding and applying a series of controlled interventions to a dynamical system to estimate how its algorithmic information content is affected when every one of its elements is perturbed. The approach represents an alternative to numerical simulation and statistical approaches for inferring causal mechanistic/generative models and finding first principles. We demonstrate the framework's capabilities by reconstructing the phase space of some discrete dynamical systems (cellular automata) as a case study and reconstructing their generating rules. We thus advance tools for reprogramming artificial and living systems without full knowledge or access to the system's actual kinetic equations or probability distributions, yielding a suite of universal and parameter-free algorithms of wide applicability spanning causation, dimension reduction, feature selection and model generation.
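A crude way to illustrate the perturbation idea above is to score each edge of a network by how much its removal changes a compressibility estimate of the adjacency matrix. The zlib-based proxy below is only a rough stand-in for the algorithmic-complexity estimators (CTM/BDM) used by the authors.

```python
# Illustration only: rank edges by the change in a lossless-compression
# proxy of the adjacency matrix when each edge is removed. zlib is a crude
# stand-in for proper algorithmic-complexity estimators.
import zlib
import numpy as np

def compressed_size(adj):
    bits = "".join(str(int(v)) for v in adj.flatten())
    return len(zlib.compress(bits.encode()))

def edge_perturbation_scores(adj):
    base = compressed_size(adj)
    scores = {}
    for i, j in zip(*np.nonzero(np.triu(adj, k=1))):
        mod = adj.copy()
        mod[i, j] = mod[j, i] = 0               # delete one undirected edge
        scores[(int(i), int(j))] = compressed_size(mod) - base
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    a = (rng.random((12, 12)) < 0.25).astype(int)
    a = np.triu(a, 1)
    a = a + a.T                                 # random undirected graph
    for edge, delta in sorted(edge_perturbation_scores(a).items(),
                              key=lambda kv: kv[1])[:5]:
        print(edge, delta)
```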
1603.05897
Manlio De Domenico
Manlio De Domenico, Shuntaro Sasai and Alex Arenas
Mapping multiplex hubs in human functional brain network
11 pages, 8 figures, 2 tables
Front. Neurosci. 10, 326 (2016)
10.3389/fnins.2016.00326
null
q-bio.NC cond-mat.dis-nn physics.bio-ph physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Typical brain networks consist of many peripheral regions and a few highly central ones, i.e. hubs, playing key functional roles in cerebral inter-regional interactions. Studies have shown that networks, obtained from the analysis of specific frequency components of brain activity, present peculiar architectures with unique profiles of region centrality. However, the identification of hubs in networks built from different frequency bands simultaneously is still a challenging problem, remaining largely unexplored. Here we identify each frequency component with one layer of a multiplex network and face this challenge by exploiting the recent advances in the analysis of multiplex topologies. First, we show that each frequency band carries unique topological information, fundamental to accurately model brain functional networks. We then demonstrate that hubs in the multiplex network, in general different from those obtained after discarding or aggregating the measured signals as usual, provide a more accurate map of the brain's most important functional regions, allowing us to distinguish between healthy and schizophrenic populations better than conventional network approaches.
[ { "created": "Fri, 18 Mar 2016 15:53:07 GMT", "version": "v1" } ]
2016-07-19
[ [ "De Domenico", "Manlio", "" ], [ "Sasai", "Shuntaro", "" ], [ "Arenas", "Alex", "" ] ]
Typical brain networks consist of many peripheral regions and a few highly central ones, i.e. hubs, playing key functional roles in cerebral inter-regional interactions. Studies have shown that networks, obtained from the analysis of specific frequency components of brain activity, present peculiar architectures with unique profiles of region centrality. However, the identification of hubs in networks built from different frequency bands simultaneously is still a challenging problem, remaining largely unexplored. Here we identify each frequency component with one layer of a multiplex network and face this challenge by exploiting the recent advances in the analysis of multiplex topologies. First, we show that each frequency band carries unique topological information, fundamental to accurately model brain functional networks. We then demonstrate that hubs in the multiplex network, in general different from those obtained after discarding or aggregating the measured signals as usual, provide a more accurate map of the brain's most important functional regions, allowing us to distinguish between healthy and schizophrenic populations better than conventional network approaches.
2208.00530
Nikolai Slavov
Michael J. MacCoss, Javier Alfaro, Meni Wanunu, Danielle A. Faivre, and Nikolai Slavov
Sampling the proteome by emerging single-molecule and mass-spectrometry methods
Recorded presentation: https://youtu.be/w0IOgJrrvNM
Nat Methods 20, 339--346 (2023)
10.1038/s41592-023-01802-5
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Mammalian cells have about 30,000-fold more protein molecules than mRNA molecules. This larger number of molecules and the associated larger dynamic range have major implications in the development of proteomics technologies. We examine these implications for both liquid chromatography-tandem mass spectrometry (LC-MS/MS) and single-molecule counting and provide estimates on how many molecules are routinely measured in proteomics experiments by LC-MS/MS. We review strategies that have been helpful for counting billions of protein molecules by LC-MS/MS and suggest that these strategies can benefit single-molecule methods, especially in mitigating the challenges of the wide dynamic range of the proteome. We also examine the theoretical possibilities for scaling up single-molecule and mass spectrometry proteomics approaches to quantifying the billions of protein molecules that make up the proteomes of our cells.
[ { "created": "Sun, 31 Jul 2022 21:59:40 GMT", "version": "v1" }, { "created": "Fri, 27 Jan 2023 21:52:08 GMT", "version": "v2" } ]
2023-03-14
[ [ "MacCoss", "Michael J.", "" ], [ "Alfaro", "Javier", "" ], [ "Wanunu", "Meni", "" ], [ "Faivre", "Danielle A.", "" ], [ "Slavov", "Nikolai", "" ] ]
Mammalian cells have about 30,000-fold more protein molecules than mRNA molecules. This larger number of molecules and the associated larger dynamic range have major implications in the development of proteomics technologies. We examine these implications for both liquid chromatography-tandem mass spectrometry (LC-MS/MS) and single-molecule counting and provide estimates on how many molecules are routinely measured in proteomics experiments by LC-MS/MS. We review strategies that have been helpful for counting billions of protein molecules by LC-MS/MS and suggest that these strategies can benefit single-molecule methods, especially in mitigating the challenges of the wide dynamic range of the proteome. We also examine the theoretical possibilities for scaling up single-molecule and mass spectrometry proteomics approaches to quantifying the billions of protein molecules that make up the proteomes of our cells.
2311.03131
Ilya Auslender
Ilya Auslender, Giorgio Letti, Yasaman Heydari, Clara Zaccaria, Lorenzo Pavesi
Decoding Neuronal Networks: A Reservoir Computing Approach for Predicting Connectivity and Functionality
Submitted version
null
null
null
q-bio.QM cs.LG physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this study, we address the challenge of analyzing electrophysiological measurements in neuronal networks. Our computational model, based on the Reservoir Computing Network (RCN) architecture, deciphers spatio-temporal data obtained from electrophysiological measurements of neuronal cultures. By reconstructing the network structure on a macroscopic scale, we reveal the connectivity between neuronal units. Notably, our model outperforms common methods like Cross-Correlation and Transfer-Entropy in predicting the network's connectivity map. Furthermore, we experimentally validate its ability to forecast network responses to specific inputs, including localized optogenetic stimuli.
[ { "created": "Mon, 6 Nov 2023 14:28:11 GMT", "version": "v1" }, { "created": "Tue, 23 Jan 2024 17:29:54 GMT", "version": "v2" }, { "created": "Tue, 5 Mar 2024 10:25:03 GMT", "version": "v3" } ]
2024-03-06
[ [ "Auslender", "Ilya", "" ], [ "Letti", "Giorgio", "" ], [ "Heydari", "Yasaman", "" ], [ "Zaccaria", "Clara", "" ], [ "Pavesi", "Lorenzo", "" ] ]
In this study, we address the challenge of analyzing electrophysiological measurements in neuronal networks. Our computational model, based on the Reservoir Computing Network (RCN) architecture, deciphers spatio-temporal data obtained from electrophysiological measurements of neuronal cultures. By reconstructing the network structure on a macroscopic scale, we reveal the connectivity between neuronal units. Notably, our model outperforms common methods like Cross-Correlation and Transfer-Entropy in predicting the network's connectivity map. Furthermore, we experimentally validate its ability to forecast network responses to specific inputs, including localized optogenetic stimuli.
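A minimal echo state network conveys the reservoir-computing ingredient mentioned above: a fixed random recurrent reservoir driven by an input signal, with a ridge-regression readout trained to predict a target. The hyperparameters and the toy task are illustrative assumptions; this is not the authors' connectivity-inference pipeline.

```python
# Generic echo state network sketch: random reservoir + ridge readout.
import numpy as np

class EchoState:
    def __init__(self, n_in, n_res=200, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.uniform(-0.5, 0.5, (n_res, n_res))
        w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))  # echo-state scaling
        self.w, self.n_res = w, n_res

    def run(self, u):                       # u: (T, n_in) -> states (T, n_res)
        x, states = np.zeros(self.n_res), []
        for u_t in u:
            x = np.tanh(self.w @ x + self.w_in @ u_t)
            states.append(x.copy())
        return np.array(states)

    def fit_readout(self, u, y, ridge=1e-4):
        s = self.run(u)
        self.w_out = np.linalg.solve(s.T @ s + ridge * np.eye(self.n_res), s.T @ y)

    def predict(self, u):
        return self.run(u) @ self.w_out

if __name__ == "__main__":
    t = np.linspace(0, 20 * np.pi, 2000)[:, None]
    u, y = np.sin(t), np.sin(t + 0.5)       # toy task: predict a phase-shifted signal
    esn = EchoState(n_in=1)
    esn.fit_readout(u[:1500], y[:1500])
    err = np.mean((esn.predict(u[1500:]) - y[1500:]) ** 2)
    print("test MSE:", float(err))
```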
1901.06794
Chunmei Feng
Jin-Xing Liu, Chun-Mei Feng, Xiang-Zhen Kong, Yong Xu
Dual Graph-Laplacian PCA: A Closed-Form Solution for Bi-clustering to Find "Checkerboard" Structures on Gene Expression Data
This manuscript was submitted to IEEE Transactions on Knowledge and Data Engineering on 12/01/2017. 9 pages, 3 figures
null
null
null
q-bio.GN cs.CE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the context of cancer, internal "checkerboard" structures are normally found in the matrices of gene expression data, which correspond to genes that are significantly up- or down-regulated in patients with specific types of tumors. In this paper, we propose a novel method, called dual graph-regularization principal component analysis (DGPCA). The main innovation of this method is that it simultaneously considers the internal geometric structures of the condition manifold and the gene manifold. Specifically, we obtain principal components (PCs) to represent the data and approximate the cluster membership indicators through Laplacian embedding. This new method is endowed with internal geometric structures, such as the condition manifold and gene manifold, which are both suitable for bi-clustering. A closed-form solution is provided for DGPCA. We apply this new method to simultaneously cluster genes and conditions (e.g., different samples) with the aim of finding internal "checkerboard" structures on gene expression data, if they exist. Then, we use this new method to identify regulatory genes under the particular conditions and to compare the results with those of other state-of-the-art PCA-based methods. Promising results on gene expression data have been verified by extensive experiments.
[ { "created": "Mon, 21 Jan 2019 05:43:31 GMT", "version": "v1" } ]
2019-01-23
[ [ "Liu", "Jin-Xing", "" ], [ "Feng", "Chun-Mei", "" ], [ "Kong", "Xiang-Zhen", "" ], [ "Xu", "Yong", "" ] ]
In the context of cancer, internal "checkerboard" structures are normally found in the matrices of gene expression data, which correspond to genes that are significantly up- or down-regulated in patients with specific types of tumors. In this paper, we propose a novel method, called dual graph-regularization principal component analysis (DGPCA). The main innovation of this method is that it simultaneously considers the internal geometric structures of the condition manifold and the gene manifold. Specifically, we obtain principal components (PCs) to represent the data and approximate the cluster membership indicators through Laplacian embedding. This new method is endowed with internal geometric structures, such as the condition manifold and gene manifold, which are both suitable for bi-clustering. A closed-form solution is provided for DGPCA. We apply this new method to simultaneously cluster genes and conditions (e.g., different samples) with the aim of finding internal "checkerboard" structures on gene expression data, if they exist. Then, we use this new method to identify regulatory genes under the particular conditions and to compare the results with those of other state-of-the-art PCA-based methods. Promising results on gene expression data have been verified by extensive experiments.
1705.06911
Taikai Takeda
Taikai Takeda, Michiaki Hamada
Beyond similarity assessment: Selecting the optimal model for sequence alignment via the Factorized Asymptotic Bayesian algorithm
This article has been accepted for publication in Bioinformatics Published by Oxford University Press
Bioinformatics, 2017, btx643
10.1093/bioinformatics/btx643
null
q-bio.QM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pair Hidden Markov Models (PHMMs) are probabilistic models used for pairwise sequence alignment, a quintessential problem in bioinformatics. PHMMs include three types of hidden states: match, insertion and deletion. Most previous studies have used one or two hidden states for each PHMM state type. However, few studies have examined the number of states suitable for representing sequence data or improving alignment accuracy. We developed a novel method to select superior models (including the number of hidden states) for PHMM. Our method selects models with the highest posterior probability using Factorized Information Criteria (FIC), which is widely utilised in model selection for probabilistic models with hidden variables. Our simulations indicated this method has excellent model selection capabilities with slightly improved alignment accuracy. We applied our method to DNA datasets from 5 and 28 species, ultimately selecting more complex models than those used in previous studies.
[ { "created": "Fri, 19 May 2017 09:49:59 GMT", "version": "v1" }, { "created": "Sun, 15 Oct 2017 06:52:17 GMT", "version": "v2" } ]
2017-10-17
[ [ "Takeda", "Taikai", "" ], [ "Hamada", "Michiaki", "" ] ]
Pair Hidden Markov Models (PHMMs) are probabilistic models used for pairwise sequence alignment, a quintessential problem in bioinformatics. PHMMs include three types of hidden states: match, insertion and deletion. Most previous studies have used one or two hidden states for each PHMM state type. However, few studies have examined the number of states suitable for representing sequence data or improving alignment accuracy. We developed a novel method to select superior models (including the number of hidden states) for PHMM. Our method selects models with the highest posterior probability using Factorized Information Criteria (FIC), which is widely utilised in model selection for probabilistic models with hidden variables. Our simulations indicated this method has excellent model selection capabilities with slightly improved alignment accuracy. We applied our method to DNA datasets from 5 and 28 species, ultimately selecting more complex models than those used in previous studies.
1305.6231
Kristina Crona
Kristina Crona, Devin Greene, Miriam Barlow
Evolutionary Predictability and Complications with Additivity
null
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adaptation is a central topic in theoretical biology, of practical importance for analyzing drug resistance mutations. Several authors have used arguments based on extreme value theory in their work on adaptation. There are complications with these approaches if fitness is additive (meaning that fitness effects of mutations sum), or whenever there is more additivity than what one would expect in an uncorrelated fitness landscape. However, the approaches have been used in published work, even in situations with substantial amounts of additivity. In particular, extreme value theory has been used in discussions on evolutionary predictability. We say that evolution is predictable if the use of a particular drug at different locations tends to lead to the same resistance mutations. Evolutionary predictability depends on the probabilities of mutational trajectories. Arguments about probabilities based on extreme value theory can be misleading. Additivity may cause errors in estimates of the probabilities of some mutational trajectories by a factor of 20 even for rather small examples. We show that additivity gives systematic errors so as to exaggerate the differences between the most and the least likely trajectory. As a result of this bias, evolution may appear more predictable than it is. From a broader perspective, our results suggest that approaches which depend on the Orr-Gillespie theory are likely to give misleading results for realistic fitness landscapes whenever one considers adaptation in several steps.
[ { "created": "Mon, 27 May 2013 14:17:41 GMT", "version": "v1" }, { "created": "Sat, 14 Dec 2013 19:26:53 GMT", "version": "v2" } ]
2013-12-17
[ [ "Crona", "Kristina", "" ], [ "Greene", "Devin", "" ], [ "Barlow", "Miriam", "" ] ]
Adaptation is a central topic in theoretical biology, of practical importance for analyzing drug resistance mutations. Several authors have used arguments based on extreme value theory in their work on adaptation. There are complications with these approaches if fitness is additive (meaning that fitness effects of mutations sum), or whenever there is more additivity than what one would expect in an uncorrelated fitness landscape. However, the approaches have been used in published work, even in situations with substantial amounts of additivity. In particular, extreme value theory has been used in discussions on evolutionary predictability. We say that evolution is predictable if the use of a particular drug at different locations tends to lead to the same resistance mutations. Evolutionary predictability depends on the probabilities of mutational trajectories. Arguments about probabilities based on extreme value theory can be misleading. Additivity may cause errors in estimates of the probabilities of some mutational trajectories by a factor of 20 even for rather small examples. We show that additivity gives systematic errors so as to exaggerate the differences between the most and the least likely trajectory. As a result of this bias, evolution may appear more predictable than it is. From a broader perspective, our results suggest that approaches which depend on the Orr-Gillespie theory are likely to give misleading results for realistic fitness landscapes whenever one considers adaptation in several steps.
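Trajectory probabilities of the kind discussed above can be illustrated on a small fitness landscape under a common simplification: the next mutation to fix is chosen among fitness-increasing single steps with probability proportional to its fitness gain. The additive three-locus landscape below is a toy example, not the model analyzed in the paper.

```python
# Toy calculation of mutational-trajectory probabilities on a small fitness
# landscape. At each step, the next fixed mutation is drawn among accessible
# (fitness-increasing) single steps with probability proportional to its gain.
from itertools import permutations

def trajectory_probs(fitness, n_loci):
    probs = {}
    for order in permutations(range(n_loci)):
        genotype, p = (0,) * n_loci, 1.0
        for locus in order:
            gains = {}
            for j in range(n_loci):
                if genotype[j] == 0:
                    nxt = tuple(1 if k == j else g for k, g in enumerate(genotype))
                    gain = fitness[nxt] - fitness[genotype]
                    if gain > 0:
                        gains[j] = gain
            if locus not in gains:          # required step not accessible
                p = 0.0
                break
            p *= gains[locus] / sum(gains.values())
            genotype = tuple(1 if k == locus else g for k, g in enumerate(genotype))
        probs[order] = p
    return probs

if __name__ == "__main__":
    # additive landscape on 3 loci with effects 0.1, 0.2, 0.3 (illustrative)
    effects = [0.1, 0.2, 0.3]
    fitness = {g: 1.0 + sum(e for e, bit in zip(effects, g) if bit)
               for g in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]}
    for order, p in sorted(trajectory_probs(fitness, 3).items(), key=lambda kv: -kv[1]):
        print(order, round(p, 4))
```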
2104.05057
Etienne Racine
Etienne Racine, Nicholas C. Coops, Jean B\'egin, Mari Myllym\"aki
Tree species, crown cover, and age as determinants of the vertical distribution of airborne LiDAR returns
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Light detection and ranging (LiDAR) provides information on the vertical structure of forest stands, enabling detailed and extensive ecosystem study. The vertical structure is often summarized by scalar features and data-reduction techniques that limit the interpretation of results. Instead, we quantified the influence of three variables, species, crown cover, and age, on the vertical distribution of airborne LiDAR returns from forest stands. We studied 5,428 regular, even-aged stands in Quebec (Canada) with five dominant species: balsam fir (Abies balsamea (L.) Mill.), paper birch (Betula papyrifera Marsh), black spruce (Picea mariana (Mill.) BSP), white spruce (Picea glauca Moench) and aspen (Populus tremuloides Michx.). We modeled the vertical distribution against the three variables using a functional general linear model and a novel nonparametric graphical test of significance. Results indicate that LiDAR returns from aspen stands had the most uniform vertical distribution. Balsam fir and white birch distributions were similar and centered at around 50% of the stand height, and black spruce and white spruce distributions were skewed to below 30% of stand height (p<0.001). Increased crown cover concentrated the distributions around 50% of stand height. Increasing age gradually shifted the distributions higher in the stand for stands younger than 70 years, before plateauing and slowly declining at 90-120 years. Results suggest that the vertical distributions of LiDAR returns depend on the three variables studied.
[ { "created": "Sun, 11 Apr 2021 17:15:47 GMT", "version": "v1" }, { "created": "Wed, 2 Jun 2021 01:40:44 GMT", "version": "v2" } ]
2021-06-03
[ [ "Racine", "Etienne", "" ], [ "Coops", "Nicholas C.", "" ], [ "Bégin", "Jean", "" ], [ "Myllymäki", "Mari", "" ] ]
Light detection and ranging (LiDAR) provides information on the vertical structure of forest stands, enabling detailed and extensive ecosystem study. The vertical structure is often summarized by scalar features and data-reduction techniques that limit the interpretation of results. Instead, we quantified the influence of three variables, species, crown cover, and age, on the vertical distribution of airborne LiDAR returns from forest stands. We studied 5,428 regular, even-aged stands in Quebec (Canada) with five dominant species: balsam fir (Abies balsamea (L.) Mill.), paper birch (Betula papyrifera Marsh), black spruce (Picea mariana (Mill.) BSP), white spruce (Picea glauca Moench) and aspen (Populus tremuloides Michx.). We modeled the vertical distribution against the three variables using a functional general linear model and a novel nonparametric graphical test of significance. Results indicate that LiDAR returns from aspen stands had the most uniform vertical distribution. Balsam fir and white birch distributions were similar and centered at around 50% of the stand height, and black spruce and white spruce distributions were skewed to below 30% of stand height (p<0.001). Increased crown cover concentrated the distributions around 50% of stand height. Increasing age gradually shifted the distributions higher in the stand for stands younger than 70 years, before plateauing and slowly declining at 90-120 years. Results suggest that the vertical distributions of LiDAR returns depend on the three variables studied.
1511.09062
Viet Chi Tran
Sylvain Billiard, Pierre Collet, R\'egis Ferri\`ere, Sylvie M\'el\'eard, Viet Chi Tran
The effect of competition and horizontal trait inheritance on invasion, fixation and polymorphism
1 Electronic Supplementary Material
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Horizontal transfer (HT) of heritable information or `traits' (carried by genetic elements, endosymbionts, or culture) is widespread among living organisms. Yet current ecological and evolutionary theory addressing HT is limited. We present a modeling framework for the dynamics of two populations that compete for resources and exchange horizontally (transfer) an otherwise vertically inherited trait. Competition influences individual demographics, affecting population size, which feeds back on the dynamics of transfer. We capture this feedback with a stochastic individual-based model, from which we derive a deterministic approximation for large populations. The interaction between horizontal transfer and competition makes possible the stable (or bi-stable) polymorphic maintenance of deleterious traits (including costly plasmids). When transfer rates are of a general density-dependent form, transfer stochasticity contributes strongly to population fluctuations. For an initially rare trait, we describe the probabilistic dynamics of invasion and fixation. Acceleration of fixation by HT is faster when competition is weak in the resident population. Thus, HT can have a major impact on the distribution of mutational effects that are fixed, and our model provides a basis for a general theory of the influence of HT on eco-evolutionary dynamics and adaptation.
[ { "created": "Sun, 29 Nov 2015 18:55:30 GMT", "version": "v1" } ]
2015-12-01
[ [ "Billiard", "Sylvain", "" ], [ "Collet", "Pierre", "" ], [ "Ferrière", "Régis", "" ], [ "Méléard", "Sylvie", "" ], [ "Tran", "Viet Chi", "" ] ]
Horizontal transfer (HT) of heritable information or `traits' (carried by genetic elements, endosymbionts, or culture) is widespread among living organisms. Yet current ecological and evolutionary theory addressing HT is limited. We present a modeling framework for the dynamics of two populations that compete for resources and exchange horizontally (transfer) an otherwise vertically inherited trait. Competition influences individual demographics, affecting population size, which feeds back on the dynamics of transfer. We capture this feedback with a stochastic individual-based model, from which we derive a deterministic approximation for large populations. The interaction between horizontal transfer and competition makes possible the stable (or bi-stable) polymorphic maintenance of deleterious traits (including costly plasmids). When transfer rates are of a general density-dependent form, transfer stochasticity contributes strongly to population fluctuations. For an initially rare trait, we describe the probabilistic dynamics of invasion and fixation. Acceleration of fixation by HT is faster when competition is weak in the resident population. Thus, HT can have a major impact on the distribution of mutational effects that are fixed, and our model provides a basis for a general theory of the influence of HT on eco-evolutionary dynamics and adaptation.
1211.4911
Justin Yeakel
Justin D. Yeakel, Paulo R. Guimaraes Jr, Herve Bocherens, Paul L. Koch
The impact of climate change on the structure of Pleistocene mammoth steppe food webs
null
null
10.1098/rspb.2013.0239
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Species interactions shape predator-prey networks, impacting community structure and, potentially, ecological dynamics. It is likely that global climatic perturbations that occur over long periods of time have a significant impact on species interaction patterns. However, observations of how these patterns change over time are typically limited to extant communities, which is particularly problematic for communities with long-lived species. Here we integrate stable isotope analysis and network theory to reconstruct patterns of trophic interactions for six independent mammalian communities that inhabited mammoth steppe environments spanning western Europe to eastern Alaska during the Pleistocene. We use a Bayesian mixing model to quantify the proportional contribution of prey to the diets of local predators, and assess how the structure of trophic interactions changed across space and the Last Glacial Maximum (LGM), a global climatic event that severely impacted mammoth steppe communities. We find that large felids had diets that were more constrained than other co-occurring predators, and largely influenced by an increase in {\it Rangifer} abundance after the LGM. Moreover, the structural organization of Beringian and European communities strongly differed: compared to Europe, species interactions in Beringian communities before the LGM were highly compartmentalized, or modular. This modularity was lost during the LGM, and partially recovered after the glacial retreat, and we suggest that changes in modularity among predators and prey may have been driven by geographic insularity.
[ { "created": "Wed, 21 Nov 2012 01:43:00 GMT", "version": "v1" }, { "created": "Tue, 29 Jan 2013 01:25:36 GMT", "version": "v2" } ]
2015-03-13
[ [ "Yeakel", "Justin D.", "" ], [ "Guimaraes", "Paulo R.", "Jr" ], [ "Bocherens", "Herve", "" ], [ "Koch", "Paul L.", "" ] ]
Species interactions shape predator-prey networks, impacting community structure and, potentially, ecological dynamics. It is likely that global climatic perturbations that occur over long periods of time have a significant impact on species interaction patterns. However, observations of how these patterns change over time are typically limited to extant communities, which is particularly problematic for communities with long-lived species. Here we integrate stable isotope analysis and network theory to reconstruct patterns of trophic interactions for six independent mammalian communities that inhabited mammoth steppe environments spanning western Europe to eastern Alaska during the Pleistocene. We use a Bayesian mixing model to quantify the proportional contribution of prey to the diets of local predators, and assess how the structure of trophic interactions changed across space and the Last Glacial Maximum (LGM), a global climatic event that severely impacted mammoth steppe communities. We find that large felids had diets that were more constrained than other co-occurring predators, and largely influenced by an increase in {\it Rangifer} abundance after the LGM. Moreover, the structural organization of Beringian and European communities strongly differed: compared to Europe, species interactions in Beringian communities before the LGM were highly compartmentalized, or modular. This modularity was lost during the LGM, and partially recovered after the glacial retreat, and we suggest that changes in modularity among predators and prey may have been driven by geographic insularity.
1712.03377
Roland Kr\"amer
Ulrich Warttinger, Christina Giese, Roland Kr\"amer
Comparison of Heparin Red, Azure A and Toluidine Blue assays for direct quantification of heparins in human plasma
15 pages, 4 figures, 3 schemes, 1 table
null
null
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heparins are sulfated polysaccharides that have tremendous clinical importance as anticoagulant drugs. Monitoring of heparin blood levels can improve patient safety. In clinical practice, heparins are monitored indirectly by their inhibitory effect on coagulation proteases. Drawbacks of these established methods have stimulated the development of simple direct detection methods with cationic dyes that change absorbance or fluorescence upon binding of polyanionic heparin. Very few such dyes or assay kits, however, are commercially and widely available to a broad community of researchers and clinicians. This study compares the performance of three commercial dyes for the direct quantification of unfractionated heparin and the widely used low-molecular-weight heparin enoxaparin. Two traditional metachromatic dyes, Azure A and Toluidine Blue, and the more recently developed fluorescent dye Heparin Red were applied in a mix-and-read microplate assay to the same heparin-spiked human plasma samples. In the clinically most relevant concentration range below 1 IU (international units) per mL, only Heparin Red is a useful tool for the determination of both heparins. Heparin Red is at least 9 times more sensitive than the metachromatic dyes, which cannot reliably quantify the heparins in this concentration range. Unfractionated heparin levels between 2 and 10 IU per mL can be determined by all dyes, Heparin Red being the most sensitive.
[ { "created": "Sat, 9 Dec 2017 11:41:12 GMT", "version": "v1" } ]
2017-12-12
[ [ "Warttinger", "Ulrich", "" ], [ "Giese", "Christina", "" ], [ "Krämer", "Roland", "" ] ]
Heparins are sulfated polysaccharides that have tremendous clinical importance as anticoagulant drugs. Monitoring of heparin blood levels can improve patient safety. In clinical practice, heparins are monitored indirectly by their inhibitory effect on coagulation proteases. Drawbacks of these established methods have stimulated the development of simple direct detection methods with cationic dyes that change absorbance or fluorescence upon binding of polyanionic heparin. Very few such dyes or assay kits, however, are commercially and widely available to a broad community of researchers and clinicians. This study compares the performance of three commercial dyes for the direct quantification of unfractionated heparin and the widely used low-molecular-weight heparin enoxaparin. Two traditional metachromatic dyes, Azure A and Toluidine Blue, and the more recently developed fluorescent dye Heparin Red were applied in a mix-and-read microplate assay to the same heparin-spiked human plasma samples. In the clinically most relevant concentration range below 1 IU (international units) per mL, only Heparin Red is a useful tool for the determination of both heparins. Heparin Red is at least 9 times more sensitive than the metachromatic dyes, which cannot reliably quantify the heparins in this concentration range. Unfractionated heparin levels between 2 and 10 IU per mL can be determined by all dyes, Heparin Red being the most sensitive.
1202.0428
Philipp Germann
Philipp Germann, Dzianis Menshykau, Simon Tanaka and Dagmar Iber
Simulating Organogenesis in COMSOL
Proceedings of COMSOL Conference, Stuttgart 2011
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Organogenesis is a tightly regulated process that has been studied experimentally for decades. Computational models can help to integrate available knowledge and to better understand the underlying regulatory logic. We are currently studying mechanistic models for the development of limbs, lungs, kidneys, and bone. We have tested a number of alternative methods to solve our spatio-temporal differential equation models of reaction-diffusion type on growing domains of realistic shape, among them finite elements in COMSOL Multiphysics. Given the large number of variables (up to fifteen), the sharp domain boundaries, the travelling wave character of some solutions, and the stiffness of the reactions, we are facing numerous numerical challenges. To test new ideas efficiently we have developed a strategy to optimize simulation times in COMSOL.
[ { "created": "Thu, 2 Feb 2012 13:34:05 GMT", "version": "v1" } ]
2012-02-03
[ [ "Germann", "Philipp", "" ], [ "Menshykau", "Dzianis", "" ], [ "Tanaka", "Simon", "" ], [ "Iber", "Dagmar", "" ] ]
Organogenesis is a tightly regulated process that has been studied experimentally for decades. Computational models can help to integrate available knowledge and to better understand the underlying regulatory logic. We are currently studying mechanistic models for the development of limbs, lungs, kidneys, and bone. We have tested a number of alternative methods to solve our spatio-temporal differential equation models of reaction-diffusion type on growing domains of realistic shape, among them finite elements in COMSOL Multiphysics. Given the large number of variables (up to fifteen), the sharp domain boundaries, the travelling wave character of some solutions, and the stiffness of the reactions, we are facing numerous numerical challenges. To test new ideas efficiently we have developed a strategy to optimize simulation times in COMSOL.
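For readers unfamiliar with the class of model referenced above, a minimal 1D reaction-diffusion sketch (generic Schnakenberg kinetics, explicit finite differences) follows. It only illustrates the type of spatio-temporal model; the study itself uses finite elements in COMSOL Multiphysics on growing, realistically shaped domains.

```python
# Minimal 1D reaction-diffusion sketch (Schnakenberg kinetics) using an
# explicit finite-difference scheme. This only illustrates the class of
# model; it is not the paper's model or solver.
import numpy as np

n, L = 200, 1.0
dx = L / n
dt = 1e-5                      # small step for explicit stability
Du, Dv = 1e-5, 1e-3            # diffusion coefficients (arbitrary units)
a, b = 0.1, 0.9                # Schnakenberg parameters

rng = np.random.default_rng(0)
u = (a + b) + 0.01 * rng.standard_normal(n)
v = b / (a + b) ** 2 + 0.01 * rng.standard_normal(n)

def laplacian(f):
    # zero-flux (Neumann) boundaries via edge padding
    fp = np.pad(f, 1, mode="edge")
    return (fp[2:] - 2 * fp[1:-1] + fp[:-2]) / dx**2

for _ in range(200_000):
    ru = a - u + u**2 * v
    rv = b - u**2 * v
    u += dt * (Du * laplacian(u) + ru)
    v += dt * (Dv * laplacian(v) + rv)

print("final u range:", u.min(), u.max())
```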
2211.11808
Lana Garmire
Lana X. Garmire, Yijun Li, Qianhui Huang, Chuan Xu, Sarah Teichmann, Naftali Kaminski, Matteo Pellegrini, Quan Nguyen, Andrew E. Teschendorff
Challenges and perspectives in computational deconvolution of genomics data
null
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Deciphering cell type heterogeneity is crucial for systematically understanding tissue homeostasis and its dysregulation in diseases. Computational deconvolution is an efficient approach for estimating cell type abundances from a variety of omics data. Despite significant methodological progress in computational deconvolution in recent years, challenges are still outstanding. Here we outline four significant challenges related to computational deconvolution: the quality of the reference data, the generation of ground truth data, the limitations of computational methodologies, and benchmarking design and implementation. Finally, we make recommendations on reference data generation, new directions for computational methodologies, and strategies to promote rigorous benchmarking.
[ { "created": "Mon, 21 Nov 2022 19:18:06 GMT", "version": "v1" }, { "created": "Sat, 2 Sep 2023 16:51:48 GMT", "version": "v2" } ]
2023-09-06
[ [ "Garmire", "Lana X.", "" ], [ "Li", "Yijun", "" ], [ "Huang", "Qianhui", "" ], [ "Xu", "Chuan", "" ], [ "Teichmann", "Sarah", "" ], [ "Kaminski", "Naftali", "" ], [ "Pellegrini", "Matteo", "" ], [ "...
Deciphering cell type heterogeneity is crucial for systematically understanding tissue homeostasis and its dysregulation in diseases. Computational deconvolution is an efficient approach for estimating cell type abundances from a variety of omics data. Despite significant methodological progress in computational deconvolution in recent years, challenges are still outstanding. Here we outline four significant challenges related to computational deconvolution: the quality of the reference data, the generation of ground truth data, the limitations of computational methodologies, and benchmarking design and implementation. Finally, we make recommendations on reference data generation, new directions for computational methodologies, and strategies to promote rigorous benchmarking.
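One common formulation of reference-based deconvolution is non-negative least squares of a bulk profile against a cell-type signature matrix. The sketch below illustrates that generic idea on synthetic data; it is not a specific method endorsed or benchmarked in the perspective above.

```python
# Minimal reference-based deconvolution sketch: estimate cell-type
# fractions from a bulk profile by non-negative least squares against a
# signature matrix. Synthetic numbers; not a specific published method.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_genes, n_types = 500, 4

# Signature matrix S (genes x cell types) and true mixing fractions.
S = rng.gamma(shape=2.0, scale=1.0, size=(n_genes, n_types))
true_frac = np.array([0.5, 0.3, 0.15, 0.05])

# Simulated bulk profile = S @ fractions + noise.
bulk = S @ true_frac + 0.05 * rng.standard_normal(n_genes)

coef, _ = nnls(S, bulk)            # non-negative coefficients
est_frac = coef / coef.sum()       # renormalise to proportions

print("true:     ", true_frac)
print("estimated:", np.round(est_frac, 3))
```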
2210.16577
Jean-Baptiste Camps
Jean-Baptiste Camps and Julien Randon-Furling
Lost Manuscripts and Extinct Texts: A Dynamic Model of Cultural Transmission
null
null
null
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How did written works evolve, disappear or survive down through the ages? In this paper, we propose a unified, formal framework for two fundamental questions in the study of the transmission of texts: how much was lost or preserved from all works of the past, and why do their genealogies (their ``phylogenetic trees'') present the very peculiar shapes that we observe or, more precisely, reconstruct? We argue here that these questions share similarities with those encountered in evolutionary biology, and can be described in terms of ``genetic'' drift and ``natural'' selection. Through agent-based models, we show that such properties as have been observed by philologists since the 1800s can be simulated and confronted with data gathered for ancient and medieval texts across Europe, in order to obtain plausible estimates of the number of works and manuscripts that existed and were lost.
[ { "created": "Sat, 29 Oct 2022 11:47:07 GMT", "version": "v1" } ]
2022-11-01
[ [ "Camps", "Jean-Baptiste", "" ], [ "Randon-Furling", "Julien", "" ] ]
How did written works evolve, disappear or survive down through the ages? In this paper, we propose a unified, formal framework for two fundamental questions in the study of the transmission of texts: how much was lost or preserved from all works of the past, and why do their genealogies (their ``phylogenetic trees'') present the very peculiar shapes that we observe or, more precisely, reconstruct? We argue here that these questions share similarities with those encountered in evolutionary biology, and can be described in terms of ``genetic'' drift and ``natural'' selection. Through agent-based models, we show that such properties as have been observed by philologists since the 1800s can be simulated and confronted with data gathered for ancient and medieval texts across Europe, in order to obtain plausible estimates of the number of works and manuscripts that existed and were lost.
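A toy birth-death sketch of manuscript transmission, in the spirit of the agent-based models mentioned above: each surviving copy may be recopied or destroyed in every generation. The copy and loss probabilities are arbitrary illustrations, not the parameters estimated in the paper.

```python
# Toy birth-death simulation of manuscript transmission: in each
# generation a surviving copy may be recopied or destroyed. Rates are
# arbitrary illustrations, not the parameters used in the study above.
import random

def simulate_tradition(p_copy=0.45, p_loss=0.50, generations=40, seed=None):
    """Return the number of copies surviving after a fixed number of
    generations, starting from a single exemplar."""
    rng = random.Random(seed)
    copies = 1
    for _ in range(generations):
        next_copies = 0
        for _ in range(copies):
            if rng.random() < p_loss:
                continue                  # this witness is destroyed
            next_copies += 1              # it survives...
            if rng.random() < p_copy:
                next_copies += 1          # ...and may be recopied once
        copies = next_copies
        if copies == 0:
            break
    return copies

n_works = 10_000
survivors = [simulate_tradition(seed=i) for i in range(n_works)]
extinct = sum(1 for s in survivors if s == 0)
print(f"extinct traditions: {extinct / n_works:.1%}")
print(f"mean surviving copies (all works): {sum(survivors) / n_works:.2f}")
```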
1811.12245
Greg Murray
Greg Murray, Catherine Orr, Jamie E. M. Byrne, Matthew E. Hughes, Susan L. Rossell, Sheri L. Johnson
Effect of time of day on reward circuitry: Further thoughts on methods, prompted by Steel et al 2018
9 pages, 1 figure
null
null
null
q-bio.NC
http://creativecommons.org/publicdomain/zero/1.0/
The interplay between circadian and reward function is well understood in animal models, and is of growing interest as an aetiological explanation in psychopathologies. Circadian modulation of reward function has been demonstrated in human behavioural data, but understanding at the neural level is limited. In 2017, our group published results of a first step in addressing this deficit, demonstrating a diurnal rhythm in fMRI-measured reward activation. In 2018, Steel et al. wrote a constructive critique of our findings, and the aim of this paper is to outline how future research could improve on our first proof-of-concept study. Key challenges include establishing divergent and convergent validity (by addressing non-reward neural variation, and testing for absence of variation in threat-related pathways), preregistration and power analysis to protect against false positives, a wider range of fMRI methods (to directly test our post-hoc hypothesis of some form of reward prediction error, and multiple phases of reward), the parallel collection of behavioural data (particularly self-reported positive affect, and actigraphically measured activity) to illuminate the nature of the reward activation across the day, and some attempt to parse out circadian versus homeostatic/masking influences on any observed diurnal rhythm in neural reward activation.
[ { "created": "Wed, 14 Nov 2018 01:55:57 GMT", "version": "v1" }, { "created": "Tue, 11 Dec 2018 05:05:49 GMT", "version": "v2" } ]
2018-12-12
[ [ "Murray", "Greg", "" ], [ "Orr", "Catherine", "" ], [ "Byrne", "Jamie E. M.", "" ], [ "Hughes", "Matthew E.", "" ], [ "Rossell", "Susan L.", "" ], [ "Johnson", "Sheri L.", "" ] ]
The interplay between circadian and reward function is well understood in animal models, and is of growing interest as an aetiological explanation in psychopathologies. Circadian modulation of reward function has been demonstrated in human behavioural data, but understanding at the neural level is limited. In 2017, our group published results of a first step in addressing this deficit, demonstrating a diurnal rhythm in fMRI-measured reward activation. In 2018, Steel et al. wrote a constructive critique of our findings, and the aim of this paper is to outline how future research could improve on our first proof-of-concept study. Key challenges include establishing divergent and convergent validity (by addressing non-reward neural variation, and testing for absence of variation in threat-related pathways), preregistration and power analysis to protect against false positives, a wider range of fMRI methods (to directly test our post-hoc hypothesis of some form of reward prediction error, and multiple phases of reward), the parallel collection of behavioural data (particularly self-reported positive affect, and actigraphically measured activity) to illuminate the nature of the reward activation across the day, and some attempt to parse out circadian versus homeostatic/masking influences on any observed diurnal rhythm in neural reward activation.
2206.07966
Yu Qin
Yu Qin and Alex Sheremet
Mesoscopic Collective Activity in Excitatory Neural Fields: Governing Equations
27 pages, 7 figures
null
null
null
q-bio.QM q-bio.NC
http://creativecommons.org/licenses/by/4.0/
In this study we derive the governing equations for mesoscopic collective activity in the cortex, starting from the generic Hodgkin-Huxley equations for microscopic cell dynamics. For simplicity, and to maintain focus on the essential elements of the derivation, the discussion is confined to excitatory neural fields. The fundamental assumption of the procedure is that mesoscale processes are macroscopic with respect to cell-scale activity, and emerge as the average behavior of a large population of cells. Because of their duration, action-potential details are assumed not observable at mesoscale; the essential mesoscopic function of action potentials is to redistribute energy in the neural field. The Hodgkin-Huxley dynamical model is first reduced to a set of equations that describe subthreshold dynamics. An ensemble average over a cell population then produces a closed system of equations involving two mesoscopic state variables: the density of kinetic energy J, carried by sodium ionic currents, and the excitability H of the neural field, which could be described as the average state of gating variable h. The resulting model essentially represents a subthreshold process, and the dynamical role of the firing rate is naturally reassessed as describing energy transfers. The linear properties of the equations are consistent with expectations for the dynamics of excitatory neural fields: the system supports oscillations of progressive waves, with shorter waves typically having higher frequencies, propagating slower, and decaying faster. Extending the derivation to include more complex cell dynamics (e.g., including other ionic channels such as calcium channels) and multiple-type, excitatory-inhibitory neural fields is straightforward, and will be presented elsewhere.
[ { "created": "Thu, 16 Jun 2022 07:13:33 GMT", "version": "v1" }, { "created": "Wed, 6 Jul 2022 17:50:09 GMT", "version": "v2" } ]
2022-07-07
[ [ "Qin", "Yu", "" ], [ "Sheremet", "Alex", "" ] ]
In this study we derive the governing equations for mesoscopic collective activity in the cortex, starting from the generic Hodgkin-Huxley equations for microscopic cell dynamics. For simplicity, and to maintain focus on the essential elements of the derivation, the discussion is confined to excitatory neural fields. The fundamental assumption of the procedure is that mesoscale processes are macroscopic with respect to cell-scale activity, and emerge as the average behavior of a large population of cells. Because of their duration, action-potential details are assumed not observable at mesoscale; the essential mesoscopic function of action potentials is to redistribute energy in the neural field. The Hodgkin-Huxley dynamical model is first reduced to a set of equations that describe subthreshold dynamics. An ensemble average over a cell population then produces a closed system of equations involving two mesoscopic state variables: the density of kinetic energy J, carried by sodium ionic currents, and the excitability H of the neural field, which could be described as the average state of gating variable h. The resulting model essentially represents a subthreshold process, and the dynamical role of the firing rate is naturally reassessed as describing energy transfers. The linear properties of the equations are consistent with expectations for the dynamics of excitatory neural fields: the system supports oscillations of progressive waves, with shorter waves typically having higher frequencies, propagating slower, and decaying faster. Extending the derivation to include more complex cell dynamics (e.g., including other ionic channels such as calcium channels) and multiple-type, excitatory-inhibitory neural fields is straightforward, and will be presented elsewhere.
2405.07837
Troy Shinbrot
Troy Shinbrot and Wise Young
Why Decussate? Topological Constraints on 3D Wiring
15 pages, 8 figures
The Anatomical Record 291.10 (2008) 1278-1292
10.1002/ar.20731
null
q-bio.NC cond-mat.dis-nn math.GT
http://creativecommons.org/licenses/by-nc-sa/4.0/
Many vertebrate motor and sensory systems decussate, or cross the midline to the opposite side of the body. The successful crossing of millions of axons during development requires a complex of tightly controlled regulatory processes. Because these processes have evolved in many distinct systems and organisms, it seems reasonable to presume that decussation confers a significant functional advantage. Yet if this is so, the nature of this advantage is not understood. In this article, we examine constraints imposed by topology on the ways that a three-dimensional processor and environment can be wired together in a continuous, somatotopic way. We show that as the number of wiring connections grows, decussated arrangements become overwhelmingly more robust against wiring errors than seemingly simpler same-sided wiring schemes. These results provide a predictive approach for understanding how 3D networks must be wired if they are to be robust, and therefore have implications both for future large-scale computational networks and for complex bio-medical devices.
[ { "created": "Mon, 13 May 2024 15:24:11 GMT", "version": "v1" } ]
2024-05-14
[ [ "Shinbrot", "Troy", "" ], [ "Young", "Wise", "" ] ]
Many vertebrate motor and sensory systems decussate, or cross the midline to the opposite side of the body. The successful crossing of millions of axons during development requires a complex of tightly controlled regulatory processes. Because these processes have evolved in many distinct systems and organisms, it seems reasonable to presume that decussation confers a significant functional advantage. Yet if this is so, the nature of this advantage is not understood. In this article, we examine constraints imposed by topology on the ways that a three-dimensional processor and environment can be wired together in a continuous, somatotopic way. We show that as the number of wiring connections grows, decussated arrangements become overwhelmingly more robust against wiring errors than seemingly simpler same-sided wiring schemes. These results provide a predictive approach for understanding how 3D networks must be wired if they are to be robust, and therefore have implications both for future large-scale computational networks and for complex bio-medical devices.
2004.00553
Antonio Della Cioppa
I. De Falco, A. Della Cioppa, U. Scafuri, and E. Tarantino
Coronavirus Covid-19 spreading in Italy: optimizing an epidemiological model with dynamic social distancing through Differential Evolution
null
null
null
null
q-bio.PE cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The aim of this paper is to apply a recent epidemiological model, namely SEIR with Social Distancing (SEIR--SD), extended here through the definition of a social distancing function varying over time, to assess the situation related to the spreading of the coronavirus Covid--19 in Italy and in two of its most important regions, i.e., Lombardy and Campania. To profitably use this model, the most suitable values of its parameters must be found. The estimation of the SEIR--SD model parameters is carried out here using Differential Evolution, a heuristic optimization technique. In this way, we are able to evaluate for each of the three above-mentioned scenarios the daily number of infectious cases from today until the end of virus spreading, the day(s) on which this number will reach its peak, and the day on which the number of infected cases will become very close to zero.
[ { "created": "Wed, 1 Apr 2020 16:32:58 GMT", "version": "v1" }, { "created": "Thu, 2 Apr 2020 17:15:00 GMT", "version": "v2" }, { "created": "Sat, 4 Apr 2020 17:50:17 GMT", "version": "v3" } ]
2020-04-07
[ [ "De Falco", "I.", "" ], [ "Della Cioppa", "A.", "" ], [ "Scafuri", "U.", "" ], [ "Tarantino", "E.", "" ] ]
The aim of this paper is to apply a recent epidemiological model, namely SEIR with Social Distancing (SEIR--SD), extended here through the definition of a social distancing function varying over time, to assess the situation related to the spreading of the coronavirus Covid--19 in Italy and in two of its most important regions, i.e., Lombardy and Campania. To profitably use this model, the most suitable values of its parameters must be found. The estimation of the SEIR--SD model parameters is carried out here using Differential Evolution, a heuristic optimization technique. In this way, we are able to evaluate for each of the three above-mentioned scenarios the daily number of infectious cases from today until the end of virus spreading, the day(s) on which this number will reach its peak, and the day on which the number of infected cases will become very close to zero.
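A generic sketch of the fitting strategy described above: an SEIR model whose transmission rate is modulated by a time-varying social-distancing factor, with parameters estimated by Differential Evolution. The synthetic "observed" series, the logistic form of the distancing function, and all parameter bounds are placeholders, not the paper's data or choices.

```python
# Sketch of fitting an SEIR model with a time-varying social-distancing
# factor to daily infectious counts using Differential Evolution.
# Synthetic data and bounds are placeholders only.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

N = 1e7                                   # population size (assumed)
t_obs = np.arange(0, 60)
# Placeholder "observed" infectious counts (would be real case data).
observed = 200 * np.exp(0.15 * t_obs) / (1 + np.exp(0.2 * (t_obs - 35)))

def seir_sd(t, y, beta0, k, t0, sigma, gamma):
    S, E, I, R = y
    sd = 1.0 / (1.0 + np.exp(k * (t - t0)))   # distancing ramps contacts down
    beta = beta0 * sd
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

def loss(params):
    beta0, k, t0, sigma, gamma, I0 = params
    y0 = [N - 2 * I0, I0, I0, 0.0]
    sol = solve_ivp(seir_sd, (t_obs[0], t_obs[-1]), y0, t_eval=t_obs,
                    args=(beta0, k, t0, sigma, gamma), rtol=1e-6)
    if not sol.success:
        return 1e12
    return np.mean((sol.y[2] - observed) ** 2)

bounds = [(0.1, 1.5), (0.01, 1.0), (5, 55), (0.1, 1.0), (0.05, 1.0), (1, 500)]
result = differential_evolution(loss, bounds, seed=0, maxiter=50, tol=1e-6)
print("fitted parameters:", np.round(result.x, 3))
```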
1711.11314
Vince Grolmusz
Mate Fellner and Balint Varga and Vince Grolmusz
The Frequent Subgraphs of the Connectome of the Human Brain
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In mapping the human structural connectome, we are in a very fortunate situation: one can compute and compare graphs, describing the cerebral connections between the very same, anatomically identified small regions of the gray matter among hundreds of human subjects. The comparison of these graphs has led to numerous recent results, such as (i) the discovery that women's connectomes have deeper and richer connectivity-related graph parameters than those of men, (ii) the description of more and less conservatively connected lobes and cerebral regions, and (iii) the discovery of the phenomenon of the Consensus Connectome Dynamics. Today one of the greatest challenges of brain science is the description and modeling of the circuitry of the human brain. For this goal, we need to identify sub-circuits that are present in almost all human subjects and those which are much less frequent: the former sub-circuits most probably have functions with general importance, the latter sub-circuits are probably related to the individual variability of the brain structure and functions. The present contribution describes the frequent connected subgraphs (instead of sub-circuits) of at most 6 edges in the human brain. We analyze these frequent graphs and also examine sex differences in these graphs: we demonstrate numerous connected subgraphs that are more frequent in the female or in the male connectome. While our results describe subgraphs, instead of sub-circuits, we need to note that all macroscopic sub-circuits correspond to an underlying connected subgraph. Our data source is the public release of the Human Connectome Project, and we apply data from 426 human subjects in this study.
[ { "created": "Thu, 30 Nov 2017 10:45:43 GMT", "version": "v1" } ]
2017-12-01
[ [ "Fellner", "Mate", "" ], [ "Varga", "Balint", "" ], [ "Grolmusz", "Vince", "" ] ]
In mapping the human structural connectome, we are in a very fortunate situation: one can compute and compare graphs, describing the cerebral connections between the very same, anatomically identified small regions of the gray matter among hundreds of human subjects. The comparison of these graphs has led to numerous recent results, such as (i) the discovery that women's connectomes have deeper and richer connectivity-related graph parameters than those of men, (ii) the description of more and less conservatively connected lobes and cerebral regions, and (iii) the discovery of the phenomenon of the Consensus Connectome Dynamics. Today one of the greatest challenges of brain science is the description and modeling of the circuitry of the human brain. For this goal, we need to identify sub-circuits that are present in almost all human subjects and those which are much less frequent: the former sub-circuits most probably have functions with general importance, the latter sub-circuits are probably related to the individual variability of the brain structure and functions. The present contribution describes the frequent connected subgraphs (instead of sub-circuits) of at most 6 edges in the human brain. We analyze these frequent graphs and also examine sex differences in these graphs: we demonstrate numerous connected subgraphs that are more frequent in the female or in the male connectome. While our results describe subgraphs, instead of sub-circuits, we need to note that all macroscopic sub-circuits correspond to an underlying connected subgraph. Our data source is the public release of the Human Connectome Project, and we apply data from 426 human subjects in this study.
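Because the braingraphs above share the same anatomically labelled nodes across subjects, the frequency of a small connected subgraph reduces to counting the subjects whose graph contains all of its edges. The sketch below illustrates that counting step with hypothetical region names and toy graphs; it is not the enumeration pipeline used in the study.

```python
# Sketch: frequency of a candidate connected subgraph across a set of
# braingraphs that share the same anatomically labelled nodes. Region
# names and edge lists are hypothetical placeholders.
import networkx as nx

def load_subject_graphs():
    """Placeholder for loading per-subject connectomes; here we build a
    few toy graphs over the same labelled nodes."""
    edge_sets = [
        [("precuneus_L", "precuneus_R"), ("precuneus_L", "hippocampus_L"),
         ("hippocampus_L", "thalamus_L")],
        [("precuneus_L", "precuneus_R"), ("hippocampus_L", "thalamus_L")],
        [("precuneus_L", "precuneus_R"), ("precuneus_L", "hippocampus_L"),
         ("hippocampus_L", "thalamus_L"), ("thalamus_L", "thalamus_R")],
    ]
    return [nx.Graph(edges) for edges in edge_sets]

def subgraph_frequency(pattern_edges, graphs):
    """Fraction of graphs containing every edge of the pattern (nodes are
    identified by label, so no isomorphism search is needed)."""
    hits = sum(all(g.has_edge(u, v) for u, v in pattern_edges) for g in graphs)
    return hits / len(graphs)

graphs = load_subject_graphs()
pattern = [("precuneus_L", "precuneus_R"), ("precuneus_L", "hippocampus_L")]
print(f"pattern frequency: {subgraph_frequency(pattern, graphs):.2f}")
```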
0904.2500
Edward O'Brien Jr.
Edward P. O'Brien, Bernard R. Brooks, and Dave Thirumalai
Molecular origin of constant m-values, denatured state collapse, and residue-dependent transition midpoints in globular proteins
41 pages, 10 figures
null
null
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experiments show that for many two-state folders the free energy of the native state DG_ND([C]) changes linearly as the denaturant concentration [C] is varied. The slope, m = d DG_ND([C])/d[C], is nearly constant. The m-value is associated with the difference in the surface area between the native (N) and the denatured (D) state, which should be a function of DR_g^2, the difference in the square of the radius of gyration between the D and N states. Single-molecule experiments show that the denatured state undergoes an equilibrium collapse transition as [C] decreases, which implies that m should also be [C]-dependent. We resolve the conundrum between constant m-values and [C]-dependent changes in Rg using molecular simulations of a coarse-grained representation of protein L, and the Molecular Transfer Model, for which the equilibrium folding can be accurately calculated as a function of denaturant concentration. We find that over a large range of denaturant concentration (> 3 M) the m-value is a constant, whereas under strongly renaturing conditions (< 3 M) it depends on [C]. The m-value is a constant above [C] > 3 M because the [C]-dependent changes in the surface area of the backbone groups, which make the largest contribution to m, are relatively small in the denatured state. The burial of the backbone gives rise to substantial surface area changes below [C] < 3 M, leading to collapse in the denatured state. The transition midpoints of individual residues vary significantly even though global folding can be described as an all-or-none transition. Collapse is driven by the loss of favorable residue-solvent interactions and a concomitant increase in the strength of intrapeptide interactions with decreasing [C]. These interactions are non-uniformly distributed throughout the native structure of protein L.
[ { "created": "Thu, 16 Apr 2009 14:44:44 GMT", "version": "v1" } ]
2009-04-20
[ [ "O'Brien", "Edward P.", "" ], [ "Brooks", "Bernard R.", "" ], [ "Thirumalai", "Dave", "" ] ]
Experiments show that for many two-state folders the free energy of the native state DG_ND([C]) changes linearly as the denaturant concentration [C] is varied. The slope, m = d DG_ND([C])/d[C], is nearly constant. The m-value is associated with the difference in the surface area between the native (N) and the denatured (D) state, which should be a function of DR_g^2, the difference in the square of the radius of gyration between the D and N states. Single-molecule experiments show that the denatured state undergoes an equilibrium collapse transition as [C] decreases, which implies that m should also be [C]-dependent. We resolve the conundrum between constant m-values and [C]-dependent changes in Rg using molecular simulations of a coarse-grained representation of protein L, and the Molecular Transfer Model, for which the equilibrium folding can be accurately calculated as a function of denaturant concentration. We find that over a large range of denaturant concentration (> 3 M) the m-value is a constant, whereas under strongly renaturing conditions (< 3 M) it depends on [C]. The m-value is a constant above [C] > 3 M because the [C]-dependent changes in the surface area of the backbone groups, which make the largest contribution to m, are relatively small in the denatured state. The burial of the backbone gives rise to substantial surface area changes below [C] < 3 M, leading to collapse in the denatured state. The transition midpoints of individual residues vary significantly even though global folding can be described as an all-or-none transition. Collapse is driven by the loss of favorable residue-solvent interactions and a concomitant increase in the strength of intrapeptide interactions with decreasing [C]. These interactions are non-uniformly distributed throughout the native structure of protein L.
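For reference, the linear free-energy relation implied by the abstract above can be written out as follows. The sign convention varies between studies, and the proportionality to surface-area change is the standard textbook interpretation rather than a result quoted from the paper.

```latex
% Linear dependence of native-state stability on denaturant concentration
\begin{align}
  \Delta G_{ND}([C]) &= \Delta G_{ND}(0) + m\,[C],
  &
  m &= \frac{\mathrm{d}\,\Delta G_{ND}([C])}{\mathrm{d}[C]},
\end{align}
% with m commonly taken to scale with the change in solvent-accessible
% surface area between the denatured (D) and native (N) states.
```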
2012.02734
Arthur Genthon
Arthur Genthon and David Lacoste
Universal constraints on selection strength in lineage trees
null
Phys. Rev. Research 3, 023187 (2021)
10.1103/PhysRevResearch.3.023187
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We obtain general inequalities constraining the difference between the average of an arbitrary function of a phenotypic trait, which includes the fitness landscape of the trait itself, in the presence or in the absence of natural selection. These inequalities imply bounds on the strength of selection, which can be measured from the statistics of trait values and divisions along lineages. The upper bound is related to recent generalizations of linear response relations in Stochastic Thermodynamics, and shares common features with Fisher's fundamental theorem of natural selection, and with its generalization by Price, although they define different measures of selection. The lower bound follows from recent improvements on Jensen's inequality, and both bounds depend on the variability of the fitness landscape. We illustrate our results using numerical simulations of growing cell colonies and experimental data from time-lapse microscopy of bacterial cell colonies.
[ { "created": "Fri, 4 Dec 2020 17:27:44 GMT", "version": "v1" }, { "created": "Tue, 9 Mar 2021 08:43:34 GMT", "version": "v2" } ]
2021-06-16
[ [ "Genthon", "Arthur", "" ], [ "Lacoste", "David", "" ] ]
We obtain general inequalities constraining the difference between the average of an arbitrary function of a phenotypic trait, which includes the fitness landscape of the trait itself, in the presence or in the absence of natural selection. These inequalities imply bounds on the strength of selection, which can be measured from the statistics of trait values and divisions along lineages. The upper bound is related to recent generalizations of linear response relations in Stochastic Thermodynamics, and shares common features with Fisher's fundamental theorem of natural selection, and with its generalization by Price, although they define different measures of selection. The lower bound follows from recent improvements on Jensen's inequality, and both bounds depend on the variability of the fitness landscape. We illustrate our results using numerical simulations of growing cell colonies and experimental data from time-lapse microscopy of bacterial cell colonies.
1609.07000
Bartosz Rozycki
Bartosz Rozycki and Marek Cieplak
Stiffness of the C-terminal disordered linker affects the geometry of the active site in endoglucanase Cel8A
accepted for publication in Molecular BioSystems (September 22, 2016)
null
10.1039/C6MB00606J
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cellulosomes are complex multi-enzyme machineries which efficiently degrade plant cell-wall polysaccharides. The multiple domains of the cellulosome proteins are often tethered together by intrinsically disordered regions. The properties and functions of these disordered linkers are not well understood. In this work, we study endoglucanase Cel8A, which is a relevant enzymatic component of the cellulosomes of Clostridium thermocellum. We use both all-atom and coarse-grained simulations to investigate how the equilibrium conformations of the catalytic domain of Cel8A are affected by the disordered linker at its C terminus. We find that when the endoglucanase is bound to its substrate, the effective stiffness of the linker can influence the distances between groups of amino-acid residues throughout the entire enzymatic domain. In particular, variations in the linker stiffness can lead to small changes in the geometry of the active-site cleft. We suggest that such geometrical changes may, in turn, have an effect on the catalytic activity of the enzyme.
[ { "created": "Thu, 22 Sep 2016 14:40:33 GMT", "version": "v1" } ]
2016-09-23
[ [ "Rozycki", "Bartosz", "" ], [ "Cieplak", "Marek", "" ] ]
Cellulosomes are complex multi-enzyme machineries which efficiently degrade plant cell-wall polysaccharides. The multiple domains of the cellulosome proteins are often tethered together by intrinsically disordered regions. The properties and functions of these disordered linkers are not well understood. In this work, we study endoglucanase Cel8A, which is a relevant enzymatic component of the cellulosomes of Clostridium thermocellum. We use both all-atom and coarse-grained simulations to investigate how the equilibrium conformations of the catalytic domain of Cel8A are affected by the disordered linker at its C terminus. We find that when the endoglucanase is bound to its substrate, the effective stiffness of the linker can influence the distances between groups of amino-acid residues throughout the entire enzymatic domain. In particular, variations in the linker stiffness can lead to small changes in the geometry of the active-site cleft. We suggest that such geometrical changes may, in turn, have an effect on the catalytic activity of the enzyme.
1708.03666
Sarah Sauv\'e
Sarah A. Sauv\'e and Marcus T. Pearce
Attention but not musical training affects auditory streaming
36 pages, 6 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While musicians generally perform better than non-musicians in various auditory discrimination tasks, effects of specific instrumental training have received little attention. The effects of instrument-specific musical training on auditory grouping in the context of stream segregation are investigated here in three experiments. In Experiment 1a, participants listened to sequences of ABA tones and indicated when they heard a change in rhythm. This change is caused by the manipulation of the B tones' timbre and indexes a change in perception from integration to segregation, or vice versa. While it was expected that musicians would detect a change in rhythm earlier when their own instrument was involved, no such pattern was observed. In Experiment 1b, designed to control for potential expectation effects in Experiment 1a, participants heard sequences of static ABA tones and reported their initial perceptions, whether the sequence was integrated or segregated. Results show that participants tend to initially perceive these static sequences as segregated, and that perception is influenced by similarity between the timbres involved. Finally, in Experiment 2 violinists and flautists located mistuned notes in an interleaved melody paradigm containing a violin and a flute melody. Performance did not depend on the instrument the participant played but rather which melody their attention was directed to. Taken together, results from the three experiments suggest that the specific instrument one practices does not have an influence on auditory grouping, but attentional mechanisms are necessary for processing auditory scenes.
[ { "created": "Fri, 11 Aug 2017 19:12:46 GMT", "version": "v1" } ]
2017-08-15
[ [ "Sauvé", "Sarah A.", "" ], [ "Pearce", "Marcus T.", "" ] ]
While musicians generally perform better than non-musicians in various auditory discrimination tasks, effects of specific instrumental training have received little attention. The effects of instrument-specific musical training on auditory grouping in the context of stream segregation are investigated here in three experiments. In Experiment 1a, participants listened to sequences of ABA tones and indicated when they heard a change in rhythm. This change is caused by the manipulation of the B tones' timbre and indexes a change in perception from integration to segregation, or vice versa. While it was expected that musicians would detect a change in rhythm earlier when their own instrument was involved, no such pattern was observed. In Experiment 1b, designed to control for potential expectation effects in Experiment 1a, participants heard sequences of static ABA tones and reported their initial perceptions, whether the sequence was integrated or segregated. Results show that participants tend to initially perceive these static sequences as segregated, and that perception is influenced by similarity between the timbres involved. Finally, in Experiment 2 violinists and flautists located mistuned notes in an interleaved melody paradigm containing a violin and a flute melody. Performance did not depend on the instrument the participant played but rather which melody their attention was directed to. Taken together, results from the three experiments suggest that the specific instrument one practices does not have an influence on auditory grouping, but attentional mechanisms are necessary for processing auditory scenes.
2004.04474
Lorenzo Vannucci
Lorenzo Vannucci, Maria Pasquini, Cristina Spalletti, Matteo Caleo, Silvestro Micera, Cecilia Laschi, Egidio Falotico
Towards in-silico robotic post-stroke rehabilitation for mice
7 pages, 9 figures. To be published in the 2019 IEEE International Conference on Cyborg and Bionic Systems
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The possibility of simulating in detail in-vivo experiments could be highly beneficial to the neuroscientific community. It could easily allow for preliminary testing of different experimental conditions without having to be constrained by factors such as training of the subjects or resting times between experimental trials. In order to achieve this, the simulation of the environment, of the subject and of the neural system, should be as accurate as possible. Unfortunately, it is not possible to completely simulate physical systems, alongside their neural counterparts, without greatly increasing the computational cost of the simulation. For this reason, it is crucial to limit the simulation to all physical and neural areas that are involved in the experiment. We propose that using a combination of data analysis and simulated models is beneficial in determining the minimal subset of entities that have to be included in the simulation to replicate the in-vivo experiment. In particular, we focused on a pulling task performed by mice on a robotic platform before and after lesion of the central nervous system. Here, we show that, while it is possible to replicate the behaviour of the healthy mouse just by including models of the mouse forelimb, spinal cord, and recording of the rostral forelimb area (RFA), it is not possible to reproduce the behaviour of the post-stroke mouse. This can give us insights on what other elements would be needed to replicate the complete experiment.
[ { "created": "Thu, 9 Apr 2020 10:47:04 GMT", "version": "v1" } ]
2020-04-10
[ [ "Vannucci", "Lorenzo", "" ], [ "Pasquini", "Maria", "" ], [ "Spalletti", "Cristina", "" ], [ "Caleo", "Matteo", "" ], [ "Micera", "Silvestro", "" ], [ "Laschi", "Cecilia", "" ], [ "Falotico", "Egidio", "" ...
The possibility of simulating in detail in-vivo experiments could be highly beneficial to the neuroscientific community. It could easily allow for preliminary testing of different experimental conditions without having to be constrained by factors such as training of the subjects or resting times between experimental trials. In order to achieve this, the simulation of the environment, of the subject and of the neural system, should be as accurate as possible. Unfortunately, it is not possible to completely simulate physical systems, alongside their neural counterparts, without greatly increasing the computational cost of the simulation. For this reason, it is crucial to limit the simulation to all physical and neural areas that are involved in the experiment. We propose that using a combination of data analysis and simulated models is beneficial in determining the minimal subset of entities that have to be included in the simulation to replicate the in-vivo experiment. In particular, we focused on a pulling task performed by mice on a robotic platform before and after lesion of the central nervous system. Here, we show that, while it is possible to replicate the behaviour of the healthy mouse just by including models of the mouse forelimb, spinal cord, and recording of the rostral forelimb area (RFA), it is not possible to reproduce the behaviour of the post-stroke mouse. This can give us insights on what other elements would be needed to replicate the complete experiment.
2309.07766
Benjamin Hayden
W. Jeffrey Johnston, Justin M. Fine, Seng Bum Michael Yoo, R. Becket Ebitz, and Benjamin Y. Hayden
Semi-orthogonal subspaces for value mediate a tradeoff between binding and generalization
arXiv admin note: substantial text overlap with arXiv:2205.06769
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
When choosing between options, we must associate their values with the action needed to select them. We hypothesize that the brain solves this binding problem through neural population subspaces. To test this hypothesis, we examined neuronal responses in five reward-sensitive regions in macaques performing a risky choice task with sequential offers. Surprisingly, in all areas, the neural population encoded the values of offers presented on the left and right in distinct subspaces. We show that the encoding we observe is sufficient to bind the values of the offers to their respective positions in space while preserving abstract value information, which may be important for rapid learning and generalization to novel contexts. Moreover, after both offers have been presented, all areas encode the value of the first and second offers in orthogonal subspaces. In this case as well, the orthogonalization provides binding. Our binding-by-subspace hypothesis makes two novel predictions borne out by the data. First, behavioral errors should correlate with putative spatial (but not temporal) misbinding in the neural representation. Second, the specific representational geometry that we observe across animals also indicates that behavioral errors should increase when offers have low or high values, compared to when they have medium values, even when controlling for value difference. Together, these results support the idea that the brain makes use of semi-orthogonal subspaces to bind features together.
[ { "created": "Thu, 14 Sep 2023 14:54:25 GMT", "version": "v1" } ]
2023-09-15
[ [ "Johnston", "W. Jeffrey", "" ], [ "Fine", "Justin M.", "" ], [ "Yoo", "Seng Bum Michael", "" ], [ "Ebitz", "R. Becket", "" ], [ "Hayden", "Benjamin Y.", "" ] ]
When choosing between options, we must associate their values with the action needed to select them. We hypothesize that the brain solves this binding problem through neural population subspaces. To test this hypothesis, we examined neuronal responses in five reward-sensitive regions in macaques performing a risky choice task with sequential offers. Surprisingly, in all areas, the neural population encoded the values of offers presented on the left and right in distinct subspaces. We show that the encoding we observe is sufficient to bind the values of the offers to their respective positions in space while preserving abstract value information, which may be important for rapid learning and generalization to novel contexts. Moreover, after both offers have been presented, all areas encode the value of the first and second offers in orthogonal subspaces. In this case as well, the orthogonalization provides binding. Our binding-by-subspace hypothesis makes two novel predictions borne out by the data. First, behavioral errors should correlate with putative spatial (but not temporal) misbinding in the neural representation. Second, the specific representational geometry that we observe across animals also indicates that behavioral errors should increase when offers have low or high values, compared to when they have medium values, even when controlling for value difference. Together, these results support the idea that the brain makes use of semi-orthogonal subspaces to bind features together.
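One standard way to ask whether two value-coding subspaces are aligned, semi-orthogonal, or orthogonal is to compute the principal angles between them. The sketch below uses random placeholder "coding directions" in place of regression weights estimated from real population recordings; it illustrates the measurement, not the specific analyses of the paper above.

```python
# Sketch: principal angles between two putative value-coding subspaces
# of a neural population. The "coding directions" below are random
# placeholders standing in for regression weights estimated from data.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(2)
n_neurons = 100

# Columns span the subspace coding the value of offer 1 / offer 2
# (e.g., slopes from regressing firing rates on offer value).
subspace_1 = rng.standard_normal((n_neurons, 2))
subspace_2 = rng.standard_normal((n_neurons, 2))

angles = np.degrees(subspace_angles(subspace_1, subspace_2))
print("principal angles (deg):", np.round(angles, 1))
# Angles near 90 deg indicate (semi-)orthogonal coding; near 0 deg,
# a shared, fully aligned value representation.
```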
2004.00218
Sukrit Gupta
Satya P. Singh, Lipo Wang, Sukrit Gupta, Haveesh Goli, Parasuraman Padmanabhan and Bal\'azs Guly\'as
3D Deep Learning on Medical Images: A Review
Published in Sensors Journal (https://www.mdpi.com/1424-8220/20/18/5097)
Sensors 2020, 20, 5097
null
null
q-bio.QM cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The rapid advancements in machine learning, graphics processing technologies and the availability of medical imaging data have led to a rapid increase in the use of deep learning models in the medical domain. This was accelerated by the rapid advancements in convolutional neural network (CNN) based architectures, which were adopted by the medical imaging community to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN was developed from its machine learning roots, provide a brief mathematical description of 3D CNNs, and describe the preprocessing steps required for medical images before feeding them to 3D CNNs. We review the significant research in the field of 3D medical imaging analysis using 3D CNNs (and their variants) in different medical areas such as classification, segmentation, detection and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and the use of deep learning models in general) and possible future trends in the field.
[ { "created": "Wed, 1 Apr 2020 03:56:48 GMT", "version": "v1" }, { "created": "Sat, 9 May 2020 05:26:19 GMT", "version": "v2" }, { "created": "Sat, 11 Jul 2020 04:28:29 GMT", "version": "v3" }, { "created": "Tue, 13 Oct 2020 08:38:19 GMT", "version": "v4" } ]
2020-10-14
[ [ "Singh", "Satya P.", "" ], [ "Wang", "Lipo", "" ], [ "Gupta", "Sukrit", "" ], [ "Goli", "Haveesh", "" ], [ "Padmanabhan", "Parasuraman", "" ], [ "Gulyás", "Balázs", "" ] ]
The rapid advancements in machine learning, graphics processing technologies and the availability of medical imaging data have led to a rapid increase in the use of deep learning models in the medical domain. This was accelerated by the rapid advancements in convolutional neural network (CNN) based architectures, which were adopted by the medical imaging community to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN was developed from its machine learning roots, provide a brief mathematical description of 3D CNNs, and describe the preprocessing steps required for medical images before feeding them to 3D CNNs. We review the significant research in the field of 3D medical imaging analysis using 3D CNNs (and their variants) in different medical areas such as classification, segmentation, detection and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and the use of deep learning models in general) and possible future trends in the field.
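A toy 3D CNN classifier in PyTorch, showing the basic building blocks the review above surveys (3D convolution, pooling, global average pooling, linear head). It is a generic illustration, not an architecture taken from the review.

```python
# Minimal 3D CNN classifier sketch in PyTorch. A generic toy model,
# not a specific architecture from the review above.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self, in_channels=1, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),   # global average pool -> (N, 32, 1, 1, 1)
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Example: a batch of 4 single-channel volumes of size 32^3.
model = Tiny3DCNN()
volumes = torch.randn(4, 1, 32, 32, 32)
print(model(volumes).shape)   # -> torch.Size([4, 2])
```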
1511.09426
Cengiz Pehlevan
Cengiz Pehlevan, Dmitri B. Chklovskii
A Normative Theory of Adaptive Dimensionality Reduction in Neural Networks
Advances in Neural Information Processing Systems (NIPS), 2015
null
null
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To make sense of the world our brains must analyze high-dimensional datasets streamed by our sensory organs. Because such analysis begins with dimensionality reduction, modelling early sensory processing requires biologically plausible online dimensionality reduction algorithms. Recently, we derived such an algorithm, termed similarity matching, from a Multidimensional Scaling (MDS) objective function. However, in the existing algorithm, the number of output dimensions is set a priori by the number of output neurons and cannot be changed. Because the number of informative dimensions in sensory inputs is variable there is a need for adaptive dimensionality reduction. Here, we derive biologically plausible dimensionality reduction algorithms which adapt the number of output dimensions to the eigenspectrum of the input covariance matrix. We formulate three objective functions which, in the offline setting, are optimized by the projections of the input dataset onto its principal subspace scaled by the eigenvalues of the output covariance matrix. In turn, the output eigenvalues are computed as i) soft-thresholded, ii) hard-thresholded, iii) equalized thresholded eigenvalues of the input covariance matrix. In the online setting, we derive the three corresponding adaptive algorithms and map them onto the dynamics of neuronal activity in networks with biologically plausible local learning rules. Remarkably, in the last two networks, neurons are divided into two classes which we identify with principal neurons and interneurons in biological circuits.
[ { "created": "Mon, 30 Nov 2015 18:45:30 GMT", "version": "v1" }, { "created": "Tue, 26 Jan 2016 18:44:23 GMT", "version": "v2" } ]
2016-01-27
[ [ "Pehlevan", "Cengiz", "" ], [ "Chklovskii", "Dmitri B.", "" ] ]
To make sense of the world our brains must analyze high-dimensional datasets streamed by our sensory organs. Because such analysis begins with dimensionality reduction, modelling early sensory processing requires biologically plausible online dimensionality reduction algorithms. Recently, we derived such an algorithm, termed similarity matching, from a Multidimensional Scaling (MDS) objective function. However, in the existing algorithm, the number of output dimensions is set a priori by the number of output neurons and cannot be changed. Because the number of informative dimensions in sensory inputs is variable there is a need for adaptive dimensionality reduction. Here, we derive biologically plausible dimensionality reduction algorithms which adapt the number of output dimensions to the eigenspectrum of the input covariance matrix. We formulate three objective functions which, in the offline setting, are optimized by the projections of the input dataset onto its principal subspace scaled by the eigenvalues of the output covariance matrix. In turn, the output eigenvalues are computed as i) soft-thresholded, ii) hard-thresholded, iii) equalized thresholded eigenvalues of the input covariance matrix. In the online setting, we derive the three corresponding adaptive algorithms and map them onto the dynamics of neuronal activity in networks with biologically plausible local learning rules. Remarkably, in the last two networks, neurons are divided into two classes which we identify with principal neurons and interneurons in biological circuits.
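A speculative offline reading of the soft-thresholding variant described above: project centred data onto the principal eigenvectors and rescale each component so that its variance equals the soft-thresholded input eigenvalue, so the output dimensionality adapts to the spectrum. This is only one plausible interpretation for illustration; it is not the authors' derivation or their online neural-network algorithm.

```python
# Offline sketch of adaptive dimensionality reduction by eigenvalue
# soft-thresholding. One plausible offline reading of the abstract above,
# not the authors' online neural-network algorithm.
import numpy as np

def soft_threshold_pca(X, alpha):
    """X: (samples, features), centred. alpha: threshold on eigenvalues."""
    C = X.T @ X / X.shape[0]                  # input covariance
    eigval, eigvec = np.linalg.eigh(C)        # ascending order
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]
    out_val = np.maximum(eigval - alpha, 0.0) # soft-thresholded spectrum
    keep = out_val > 0
    # scale so that the variance of each output equals out_val
    scale = np.sqrt(out_val[keep] / eigval[keep])
    return (X @ eigvec[:, keep]) * scale      # (samples, adaptive_dim)

rng = np.random.default_rng(3)
X = rng.standard_normal((1000, 20)) @ np.diag(np.linspace(3.0, 0.1, 20))
X -= X.mean(axis=0)
Y = soft_threshold_pca(X, alpha=1.0)
print("output dimensionality adapts to the spectrum:", Y.shape[1])
```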
2204.02731
Emily SC Ching Prof.
Chumin Sun, K.C. Lin, C.Y. Yeung, Emily S.C. Ching, Yu-Ting Huang, Pik-Yin Lai, C.K. Chan
Revealing directed effective connectivity of cortical neuronal networks from measurements
null
null
10.1103/PhysRevE.105.044406
null
q-bio.NC physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
In the study of biological networks, one of the major challenges is to understand the relationships between network structure and dynamics. In this paper, we model in vitro cortical neuronal cultures as stochastic dynamical systems and apply a method that reconstructs directed networks from dynamics [Ching and Tam, Phys. Rev. E 95, 010301(R), 2017] to reveal directed effective connectivity, namely the directed links and synaptic weights, of the neuronal cultures from voltage measurements recorded by a multielectrode array. The effective connectivity so obtained reproduces several features of cortical regions in rats and monkeys and has network properties similar to those of the synaptic network of the nematode C. elegans, the only organism whose entire nervous system has been mapped out as of today. The distribution of the incoming degree is bimodal and the distributions of the average incoming and outgoing synaptic strength are non-Gaussian with long tails. The effective connectivity captures different information from the commonly studied functional connectivity, estimated using statistical correlation between spiking activities. The average synaptic strengths of excitatory incoming and outgoing links are found to increase with the spiking activity in the estimated effective connectivity but not in the functional connectivity estimated using the same sets of voltage measurements. These results thus demonstrate that the reconstructed effective connectivity can capture the general properties of synaptic connections and better reveal relationships between network structure and dynamics.
[ { "created": "Wed, 6 Apr 2022 11:15:42 GMT", "version": "v1" } ]
2022-05-04
[ [ "Sun", "Chumin", "" ], [ "Lin", "K. C.", "" ], [ "Yeung", "C. Y.", "" ], [ "Ching", "Emily S. C.", "" ], [ "Huang", "Yu-Ting", "" ], [ "Lai", "Pik-Yin", "" ], [ "Chan", "C. K.", "" ] ]
In the study of biological networks, one of the major challenges is to understand the relationships between network structure and dynamics. In this paper, we model in vitro cortical neuronal cultures as stochastic dynamical systems and apply a method that reconstructs directed networks from dynamics [Ching and Tam, Phys. Rev. E 95, 010301(R), 2017] to reveal directed effective connectivity, namely the directed links and synaptic weights, of the neuronal cultures from voltage measurements recorded by a multielectrode array. The effective connectivity so obtained reproduces several features of cortical regions in rats and monkeys and has network properties similar to those of the synaptic network of the nematode C. elegans, the only organism whose entire nervous system has been mapped out as of today. The distribution of the incoming degree is bimodal and the distributions of the average incoming and outgoing synaptic strength are non-Gaussian with long tails. The effective connectivity captures different information from the commonly studied functional connectivity, estimated using statistical correlation between spiking activities. The average synaptic strengths of excitatory incoming and outgoing links are found to increase with the spiking activity in the estimated effective connectivity but not in the functional connectivity estimated using the same sets of voltage measurements. These results thus demonstrate that the reconstructed effective connectivity can capture the general properties of synaptic connections and better reveal relationships between network structure and dynamics.
1610.01193
Jing Xu
Jing Xu, Stephen J. King, Maryse Lapierre-Landry, Brian Nemec
Interplay between velocity and travel distance of kinesin-based transport in the presence of tau
null
Biophysical Journal, 105, L23-5 (2013)
10.1016/j.bpj.2013.10.006
null
q-bio.BM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although the disease-relevant microtubule-associated protein tau is known to severely inhibit kinesin-based transport in vitro, the potential mechanisms for reversing this detrimental effect to maintain healthy transport in cells remain unknown. Here we report the unambiguous upregulation of multiple-kinesin travel distance despite the presence of tau, via decreased single-kinesin velocity. Interestingly, the presence of tau also modestly reduced cargo velocity in multiple-kinesin transport, and our stochastic simulations indicate that the tau-mediated reduction in single-kinesin travel underlies this observation. Taken together, our observations highlight a nontrivial interplay between velocity and travel distance for kinesin transport, and suggest that single-kinesin velocity is a promising experimental handle for tuning the effect of tau on multiple-kinesin travel distance.
[ { "created": "Tue, 4 Oct 2016 20:39:06 GMT", "version": "v1" } ]
2017-07-26
[ [ "Xu", "Jing", "" ], [ "King", "Stephen J.", "" ], [ "Lapierre-Landry", "Maryse", "" ], [ "Nemec", "Brian", "" ] ]
Although the disease-relevant microtubule-associated protein tau is known to severely inhibit kinesin-based transport in vitro, the potential mechanisms for reversing this detrimental effect to maintain healthy transport in cells remain unknown. Here we report the unambiguous upregulation of multiple-kinesin travel distance despite the presence of tau, via decreased single-kinesin velocity. Interestingly, the presence of tau also modestly reduced cargo velocity in multiple-kinesin transport, and our stochastic simulations indicate that the tau-mediated reduction in single-kinesin travel underlies this observation. Taken together, our observations highlight a nontrivial interplay between velocity and travel distance for kinesin transport, and suggest that single-kinesin velocity is a promising experimental handle for tuning the effect of tau on multiple-kinesin travel distance.
q-bio/0602004
Illes Farkas
Balazs Adamcsek, Gergely Palla, Illes J. Farkas, Imre Derenyi, Tamas Vicsek
CFinder: Locating cliques and overlapping modules in biological networks
The free academic research software CFinder used for this publication is available at the website of the publication: http://angel.elte.hu/clustering
Bioinformatics 22, 1021-1023 (2006)
10.1093/bioinformatics/btl039
null
q-bio.MN q-bio.GN
null
Summary: Most cellular tasks are performed not by individual proteins, but by groups of functionally associated proteins, often referred to as modules. In a protein association network, modules appear as groups of densely interconnected nodes, also called communities or clusters. These modules often overlap with each other and form a network of their own, in which nodes (links) represent the modules (overlaps). We introduce CFinder, a fast program locating and visualizing overlapping, densely interconnected groups of nodes in undirected graphs, and allowing the user to easily navigate between the original graph and the web of these groups. We show that in gene (protein) association networks CFinder can be used to predict the function(s) of a single protein and to discover novel modules. CFinder is also very efficient for locating the cliques of large sparse graphs. Availability: CFinder (for Windows, Linux, and Macintosh) and its manual can be downloaded from http://angel.elte.hu/clustering. Contact: cfinder@angel.elte.hu
[ { "created": "Sat, 4 Feb 2006 10:20:25 GMT", "version": "v1" } ]
2007-05-23
[ [ "Adamcsek", "Balazs", "" ], [ "Palla", "Gergely", "" ], [ "Farkas", "Illes J.", "" ], [ "Derenyi", "Imre", "" ], [ "Vicsek", "Tamas", "" ] ]
Summary: Most cellular tasks are performed not by individual proteins, but by groups of functionally associated proteins, often referred to as modules. In a protein association network, modules appear as groups of densely interconnected nodes, also called communities or clusters. These modules often overlap with each other and form a network of their own, in which nodes (links) represent the modules (overlaps). We introduce CFinder, a fast program locating and visualizing overlapping, densely interconnected groups of nodes in undirected graphs, and allowing the user to easily navigate between the original graph and the web of these groups. We show that in gene (protein) association networks CFinder can be used to predict the function(s) of a single protein and to discover novel modules. CFinder is also very efficient for locating the cliques of large sparse graphs. Availability: CFinder (for Windows, Linux, and Macintosh) and its manual can be downloaded from http://angel.elte.hu/clustering. Contact: cfinder@angel.elte.hu
0910.1577
Rajesh Karmakar
Rajesh Karmakar
Conversion of graded to binary response in an activator-repressor system
12 pages, Accepted for publication in Physical Review E
Phys. Rev. E 81, 021905 (2010)
10.1103/PhysRevE.81.021905
null
q-bio.MN cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Appropriate regulation of gene expression is essential to ensure that protein synthesis occurs in a selective manner. The control of transcription is the most dominant type of regulation mediated by a complex of molecules such as transcription factors. In general, regulatory molecules are of two types: activator and repressor. Activators promote the initiation of transcription whereas repressors inhibit transcription. In many cases, they regulate the gene transcription on binding the promoter mutually exclusively and the observed gene expression response is either graded or binary. In experiments, the gene expression response is quantified by the amount of proteins produced on varying the concentration of external inducer molecules in the cell. In this paper, we study a gene regulatory network where activators and repressors both bind the same promoter mutually exclusively. The network is modeled by assuming that the gene can be in three possible states: repressed, unregulated and active. An exact analytical expression for the steady-state probability distribution of protein levels is then derived. The exact result helps to explain the experimental observations that in the presence of activator molecules the response is graded at all inducer levels whereas in the presence of both activator and repressor molecules, the response is graded at low and high inducer levels and binary at an intermediate inducer level.
[ { "created": "Thu, 8 Oct 2009 19:39:19 GMT", "version": "v1" }, { "created": "Tue, 19 Jan 2010 06:54:46 GMT", "version": "v2" } ]
2010-02-04
[ [ "Karmakar", "Rajesh", "" ] ]
Appropriate regulation of gene expression is essential to ensure that protein synthesis occurs in a selective manner. The control of transcription is the most dominant type of regulation mediated by a complex of molecules such as transcription factors. In general, regulatory molecules are of two types: activator and repressor. Activators promote the initiation of transcription whereas repressors inhibit transcription. In many cases, they regulate the gene transcription on binding the promoter mutually exclusively and the observed gene expression response is either graded or binary. In experiments, the gene expression response is quantified by the amount of proteins produced on varying the concentration of external inducer molecules in the cell. In this paper, we study a gene regulatory network where activators and repressors both bind the same promoter mutually exclusively. The network is modeled by assuming that the gene can be in three possible states: repressed, unregulated and active. An exact analytical expression for the steady-state probability distribution of protein levels is then derived. The exact result helps to explain the experimental observations that in the presence of activator molecules the response is graded at all inducer levels whereas in the presence of both activator and repressor molecules, the response is graded at low and high inducer levels and binary at an intermediate inducer level.
2308.11665
Cole Mathis
OoLEN (Origin of Life Early-career Network), Silke Asche, Carla Bautista, David Boulesteix, Alexandre Champagne-Ruel, Cole Mathis, Omer Markovitch, Zhen Peng, Alyssa Adams, Avinash Vicholous Dass, Arnaud Buch, Eloi Camprubi, Enrico Sandro Colizzi, Stephanie Col\'on-Santos, Hannah Dromiack, Valentina Erastova, Amanda Garcia, Ghjuvan Grimaud, Aaron Halpern, Stuart A Harrison, Se\'an F. Jordan, Tony Z Jia, Amit Kahana, Artemy Kolchinsky, Odin Moron-Garcia, Ryo Mizuuchi, Jingbo Nan, Yuliia Orlova, Ben K. D. Pearce, Klaus Paschek, Martina Preiner, Silvana Pinna, Eduardo Rodr\'iguez-Rom\'an, Loraine Schwander, Siddhant Sharma, Harrison B. Smith, Andrey Vieira, Joana C. Xavier
What it takes to solve the Origin(s) of Life: An integrated review of techniques
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the origin(s) of life (OoL) is a fundamental challenge for science in the 21st century. Research on OoL spans many disciplines, including chemistry, physics, biology, planetary sciences, computer science, mathematics and philosophy. The sheer number of different scientific perspectives relevant to the problem has resulted in the coexistence of diverse tools, techniques, data, and software in OoL studies. This has made communication between the disciplines relevant to the OoL extremely difficult because the interpretation of data, analyses, or standards of evidence can vary dramatically. Here, we hope to bridge this wide field of study by providing common ground via the consolidation of tools and techniques rather than positing a unifying view on how life emerges. We review the common tools and techniques that have been used significantly in OoL studies in recent years. In particular, we aim to identify which information is most relevant for comparing and integrating the results of experimental analyses into mathematical and computational models. This review aims to provide a baseline expectation and understanding of technical aspects of origins research, rather than being a primer on any particular topic. As such, it spans broadly -- from analytical chemistry to mathematical models -- and highlights areas of future work that will benefit from a multidisciplinary approach to tackling the mystery of life's origin. Ultimately, we hope to empower a new generation of OoL scientists by reviewing how they can investigate life's origin, rather than dictating how to think about the problem.
[ { "created": "Tue, 22 Aug 2023 04:46:19 GMT", "version": "v1" }, { "created": "Thu, 24 Aug 2023 23:12:27 GMT", "version": "v2" } ]
2023-08-29
[ [ "OoLEN", "", "", "Origin of Life Early-career Network" ], [ "Asche", "Silke", "" ], [ "Bautista", "Carla", "" ], [ "Boulesteix", "David", "" ], [ "Champagne-Ruel", "Alexandre", "" ], [ "Mathis", "Cole", "" ], [ ...
Understanding the origin(s) of life (OoL) is a fundamental challenge for science in the 21st century. Research on OoL spans many disciplines, including chemistry, physics, biology, planetary sciences, computer science, mathematics and philosophy. The sheer number of different scientific perspectives relevant to the problem has resulted in the coexistence of diverse tools, techniques, data, and software in OoL studies. This has made communication between the disciplines relevant to the OoL extremely difficult because the interpretation of data, analyses, or standards of evidence can vary dramatically. Here, we hope to bridge this wide field of study by providing common ground via the consolidation of tools and techniques rather than positing a unifying view on how life emerges. We review the common tools and techniques that have been used significantly in OoL studies in recent years. In particular, we aim to identify which information is most relevant for comparing and integrating the results of experimental analyses into mathematical and computational models. This review aims to provide a baseline expectation and understanding of technical aspects of origins research, rather than being a primer on any particular topic. As such, it spans broadly -- from analytical chemistry to mathematical models -- and highlights areas of future work that will benefit from a multidisciplinary approach to tackling the mystery of life's origin. Ultimately, we hope to empower a new generation of OoL scientists by reviewing how they can investigate life's origin, rather than dictating how to think about the problem.
0910.4915
Per Arne Rikvold
Per Arne Rikvold (Florida State University)
Complex dynamics in coevolution models with ratio-dependent functional response
19 pages
Ecol. Complex. 6, 443-452 (2009)
10.1016/j.ecocom.2009.08.007
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore the complex dynamical behavior of two simple predator-prey models of biological coevolution that on the ecological level account for interspecific and intraspecific competition, as well as adaptive foraging behavior. The underlying individual-based population dynamics are based on a ratio-dependent functional response [W.M. Getz, J. Theor. Biol. 108, 623 (1984)]. Analytical results for fixed-point population sizes in some simple communities are derived and discussed. In long kinetic Monte Carlo simulations we find quite robust, approximate 1/f noise in species diversity and population sizes, as well as power-law distributions for the lifetimes of individual species and the durations of periods of relative evolutionary stasis. Adaptive foraging enhances coexistence of species and produces a metastable low-diversity phase and a stable high-diversity phase.
[ { "created": "Mon, 26 Oct 2009 16:06:47 GMT", "version": "v1" } ]
2009-11-28
[ [ "Rikvold", "Per Arne", "", "Florida State University" ] ]
We explore the complex dynamical behavior of two simple predator-prey models of biological coevolution that on the ecological level account for interspecific and intraspecific competition, as well as adaptive foraging behavior. The underlying individual-based population dynamics are based on a ratio-dependent functional response [W.M. Getz, J. Theor. Biol. 108, 623 (1984)]. Analytical results for fixed-point population sizes in some simple communities are derived and discussed. In long kinetic Monte Carlo simulations we find quite robust, approximate 1/f noise in species diversity and population sizes, as well as power-law distributions for the lifetimes of individual species and the durations of periods of relative evolutionary stasis. Adaptive foraging enhances coexistence of species and produces a metastable low-diversity phase and a stable high-diversity phase.
1603.02062
Erich Schmid
Erich W. Schmid
Extracellular stimulation of nerve cells with electric current spikes induced by voltage steps
17 pages, 9 figures
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new stimulation paradigm is presented for the stimulation of nerve cells by extracellular electric currents. In the new paradigm stimulation is achieved with the current spike induced by a voltage step whenever the voltage step is applied to a live biological tissue. By experimental evidence and theoretical arguments, it is shown that this spike is well suited for the stimulation of nerve cells. Stimulation of the human tongue is used for proof of principle. Charge injection thresholds are measured for various voltages. The time-profile of the current spike used in the experiment has a half-width of about 1 microsecond. The decay of the spike is non-exponential. The spike has at least three distinctly different phases. A Maxwell phase is followed by a charge-rearrangement phase. Charging of cell membranes is completed in a third phase. All three phases contribute to depolarization or hyperpolarization of cell membranes. Due to the short duration of the spike the charge transfer is very small. The activation time (time of no return) of nerve cell membranes leading to an action potential is measured and found to be unexpectedly short. It can become as short as 3 microseconds for a voltage step of 10 V or higher.
[ { "created": "Sat, 27 Feb 2016 15:22:33 GMT", "version": "v1" } ]
2016-03-08
[ [ "Schmid", "Erich W.", "" ] ]
A new stimulation paradigm is presented for the stimulation of nerve cells by extracellular electric currents. In the new paradigm stimulation is achieved with the current spike induced by a voltage step whenever the voltage step is applied to a live biological tissue. By experimental evidence and theoretical arguments, it is shown that this spike is well suited for the stimulation of nerve cells. Stimulation of the human tongue is used for proof of principle. Charge injection thresholds are measured for various voltages. The time-profile of the current spike used in the experiment has a half-width of about 1 microsecond. The decay of the spike is non-exponential. The spike has at least three distinctly different phases. A Maxwell phase is followed by a charge-rearrangement phase. Charging of cell membranes is completed in a third phase. All three phases contribute to depolarization or hyperpolarization of cell membranes. Due to the short duration of the spike the charge transfer is very small. The activation time (time of no return) of nerve cell membranes leading to an action potential is measured and found to be unexpectedly short. It can become as short as 3 microseconds for a voltage step of 10 V or higher.
1404.6668
Sayak Mukherjee
Sayak Mukherjee, Kristin E. Weimer, Sang-Cheol Seok, Will C. Ray, C. Jayaprakash, Veronica J. Vieland, W. Edward Swords, Jayajit Das
Host-to-host variation of ecological interactions in polymicrobial infections
39 pages, 6 figures
null
10.1088/1478-3975/12/1/016003
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Host-to-host variability with respect to interactions between microorganisms and multicellular hosts is commonly observed in infection and in homeostasis. However, the majority of mechanistic models used in analyzing host-microorganism relationships, as well as most of the ecological theories proposed to explain co-evolution of host and microbes, are based on averages across a host population. By assuming that observed variations are random and independent, these models overlook the role of inter-host differences. Here we analyze mechanisms underlying host-to-host variations, using the well-characterized experimental infection model of polymicrobial otitis media (OM) in chinchillas, in combination with population dynamic models and a Maximum Entropy (MaxEnt) based inference scheme. We find that the nature of the interactions among bacterial species critically regulates host-to-host variations of these interactions. Surprisingly, seemingly unrelated phenomena, such as the efficiency of individual bacterial species in utilizing nutrients for growth and the microbe-specific host immune response, can become interdependent in a host population. The latter finding suggests a potential mechanism that could lead to selection of specific strains of bacterial species during the coevolution of the host immune response and the bacterial species.
[ { "created": "Sat, 26 Apr 2014 18:38:30 GMT", "version": "v1" } ]
2015-06-19
[ [ "Mukherjee", "Sayak", "" ], [ "Weimer", "Kristin E.", "" ], [ "Seok", "Sang-Cheol", "" ], [ "Ray", "Will C.", "" ], [ "Jayaprakash", "C.", "" ], [ "Vieland", "Veronica J.", "" ], [ "Swords", "W. Edward", ""...
Host-to-host variability with respect to interactions between microorganisms and multicellular hosts is commonly observed in infection and in homeostasis. However, the majority of mechanistic models used in analyzing host-microorganism relationships, as well as most of the ecological theories proposed to explain co-evolution of host and microbes, are based on averages across a host population. By assuming that observed variations are random and independent, these models overlook the role of inter-host differences. Here we analyze mechanisms underlying host-to-host variations, using the well-characterized experimental infection model of polymicrobial otitis media (OM) in chinchillas, in combination with population dynamic models and a Maximum Entropy (MaxEnt) based inference scheme. We find that the nature of the interactions among bacterial species critically regulates host-to-host variations of these interactions. Surprisingly, seemingly unrelated phenomena, such as the efficiency of individual bacterial species in utilizing nutrients for growth and the microbe-specific host immune response, can become interdependent in a host population. The latter finding suggests a potential mechanism that could lead to selection of specific strains of bacterial species during the coevolution of the host immune response and the bacterial species.
q-bio/0407042
Francisco Guinea
F. Guinea, V. A. A. Jansen, N. Stollenwerk
Statistics of infections with diversity in the pathogenicity
null
null
null
null
q-bio.PE cond-mat.stat-mech q-bio.QM
null
The statistics of outbreaks in a model for the propagation of meningococcal diseases are analyzed, taking into account the possibility that the population is fragmented into weakly connected patches. It is shown that, depending on the size of the sample studied, the ratio between the variance and the mean of infected cases can vary from one (Poisson statistics) to the inverse of the infection rate.
[ { "created": "Fri, 30 Jul 2004 12:27:53 GMT", "version": "v1" } ]
2007-05-23
[ [ "Guinea", "F.", "" ], [ "Jansen", "V. A. A.", "" ], [ "Stollenwerk", "N.", "" ] ]
The statistics of outbreaks in a model for the propagation of meningococcal diseases are analyzed, taking into account the possibility that the population is fragmented into weakly connected patches. It is shown that, depending on the size of the sample studied, the ratio between the variance and the mean of infected cases can vary from one (Poisson statistics) to the inverse of the infection rate.
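The record above hinges on a single summary statistic, the variance-to-mean ratio of infected case counts. As a purely illustrative aside (not the authors' meningococcal model), the Python sketch below shows how pooling counts from patches with heterogeneous infection rates pushes that ratio above the Poisson value of one; all rates and sample sizes are invented for the example.

```python
# Not the authors' epidemic model: a small numerical illustration of the
# variance-to-mean ratio of case counts, and of how pooling patches with
# heterogeneous infection rates pushes it above the Poisson value of one.
import numpy as np

rng = np.random.default_rng(6)

def var_to_mean(counts):
    return counts.var() / counts.mean()

# homogeneous population: one Poisson rate -> ratio close to 1
homogeneous = rng.poisson(lam=2.0, size=10_000)

# fragmented population: each patch has its own rate -> overdispersed counts
patch_rates = rng.gamma(0.5, 4.0, size=10_000)   # mean rate 2.0, high spread
fragmented = rng.poisson(lam=patch_rates)

print(var_to_mean(homogeneous), var_to_mean(fragmented))
```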
2406.14842
Hui Ma
Hui Ma and Kai Chen
Online t-SNE for single-cell RNA-seq
null
null
null
null
q-bio.GN cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the sequential sample arrival, changing experiment conditions, and evolution of knowledge, the demand to continually visualize evolving structures of sequential and diverse single-cell RNA-sequencing (scRNA-seq) data becomes indispensable. However, as one of the state-of-the-art visualization and analysis methods for scRNA-seq, t-distributed stochastic neighbor embedding (t-SNE) merely visualizes static scRNA-seq data offline and fails to meet the demand well. To address these challenges, we introduce online t-SNE to seamlessly integrate sequential scRNA-seq data. Online t-SNE achieves this by leveraging the embedding space of old samples, exploring the embedding space of new samples, and aligning the two embedding spaces on the fly. Consequently, online t-SNE dramatically enables the continual discovery of new structures and high-quality visualization of new scRNA-seq data without retraining from scratch. We showcase the formidable visualization capabilities of online t-SNE across diverse sequential scRNA-seq datasets.
[ { "created": "Fri, 21 Jun 2024 03:02:45 GMT", "version": "v1" } ]
2024-06-24
[ [ "Ma", "Hui", "" ], [ "Chen", "Kai", "" ] ]
Due to the sequential sample arrival, changing experiment conditions, and evolution of knowledge, the demand to continually visualize evolving structures of sequential and diverse single-cell RNA-sequencing (scRNA-seq) data becomes indispensable. However, as one of the state-of-the-art visualization and analysis methods for scRNA-seq, t-distributed stochastic neighbor embedding (t-SNE) merely visualizes static scRNA-seq data offline and fails to meet the demand well. To address these challenges, we introduce online t-SNE to seamlessly integrate sequential scRNA-seq data. Online t-SNE achieves this by leveraging the embedding space of old samples, exploring the embedding space of new samples, and aligning the two embedding spaces on the fly. Consequently, online t-SNE dramatically enables the continual discovery of new structures and high-quality visualization of new scRNA-seq data without retraining from scratch. We showcase the formidable visualization capabilities of online t-SNE across diverse sequential scRNA-seq datasets.
2404.15387
Alan Inglis
Alan Inglis, Andrew Parnell, Natarajan Subramani, Fiona Doohan
Machine Learning Applied to the Detection of Mycotoxin in Food: A Review
39 pages, 8 figures, review paper
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mycotoxins, toxic secondary metabolites produced by certain fungi, pose significant threats to global food safety and public health. These compounds can contaminate a variety of crops, leading to economic losses and health risks to both humans and animals. Traditional lab analysis methods for mycotoxin detection can be time-consuming and may not always be suitable for large-scale screenings. However, in recent years, machine learning (ML) methods have gained popularity for use in the detection of mycotoxins and in the food safety industry in general, due to their accurate and timely predictions. We provide a systematic review on some of the recent ML applications for detecting/predicting the presence of mycotoxin on a variety of food ingredients, highlighting their advantages, challenges, and potential for future advancements. We address the need for reproducibility and transparency in ML research through open access to data and code. An observation from our findings is the frequent lack of detailed reporting on hyperparameters in many studies as well as a lack of open source code, which raises concerns about the reproducibility and optimisation of the ML models used. The findings reveal that while the majority of studies predominantly utilised neural networks for mycotoxin detection, there was a notable diversity in the types of neural network architectures employed, with convolutional neural networks being the most popular.
[ { "created": "Tue, 23 Apr 2024 14:13:31 GMT", "version": "v1" } ]
2024-04-25
[ [ "Inglis", "Alan", "" ], [ "Parnell", "Andrew", "" ], [ "Subramani", "Natarajan", "" ], [ "Doohan", "Fiona", "" ] ]
Mycotoxins, toxic secondary metabolites produced by certain fungi, pose significant threats to global food safety and public health. These compounds can contaminate a variety of crops, leading to economic losses and health risks to both humans and animals. Traditional lab analysis methods for mycotoxin detection can be time-consuming and may not always be suitable for large-scale screenings. However, in recent years, machine learning (ML) methods have gained popularity for use in the detection of mycotoxins and in the food safety industry in general, due to their accurate and timely predictions. We provide a systematic review on some of the recent ML applications for detecting/predicting the presence of mycotoxin on a variety of food ingredients, highlighting their advantages, challenges, and potential for future advancements. We address the need for reproducibility and transparency in ML research through open access to data and code. An observation from our findings is the frequent lack of detailed reporting on hyperparameters in many studies as well as a lack of open source code, which raises concerns about the reproducibility and optimisation of the ML models used. The findings reveal that while the majority of studies predominantly utilised neural networks for mycotoxin detection, there was a notable diversity in the types of neural network architectures employed, with convolutional neural networks being the most popular.
1404.4548
Jonathan Potts
Jonathan R. Potts, Mark A. Lewis
A mathematical approach to territorial pattern formation
Note: this is a pre-print version and may contain small errors. Please see the published version if possible
The American Mathematical Monthly (2014) 121(9):754-770
10.4169/amer.math.monthly.121.09.754
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Territorial behaviour is widespread in the animal kingdom, with creatures seeking to gain parts of space for their exclusive use. It arises through a complicated interplay of many different behavioural features. Extracting and quantifying the processes that give rise to territorial patterns requires both mathematical models of movement and interaction mechanisms, together with statistical techniques for rigorously extracting parameters from data. Here, we give a brisk, pedagogical overview of the techniques so far developed to tackle the problem of territory formation. We give some examples of what has already been achieved using these techniques, together with pointers as to where we believe the future lies in this area of study. This progress is a single example of a major aim for 21st century science: to construct quantitatively predictive theory for ecological systems.
[ { "created": "Thu, 17 Apr 2014 14:57:07 GMT", "version": "v1" }, { "created": "Fri, 23 Jan 2015 07:30:36 GMT", "version": "v2" } ]
2015-01-26
[ [ "Potts", "Jonathan R.", "" ], [ "Lewis", "Mark A.", "" ] ]
Territorial behaviour is widespread in the animal kingdom, with creatures seeking to gain parts of space for their exclusive use. It arises through a complicated interplay of many different behavioural features. Extracting and quantifying the processes that give rise to territorial patterns requires both mathematical models of movement and interaction mechanisms, together with statistical techniques for rigorously extracting parameters from data. Here, we give a brisk, pedagogical overview of the techniques so far developed to tackle the problem of territory formation. We give some examples of what has already been achieved using these techniques, together with pointers as to where we believe the future lies in this area of study. This progress is a single example of a major aim for 21st century science: to construct quantitatively predictive theory for ecological systems.
2407.15028
Roel Ceballos
Julienne Kate N. Kintanar, Roel F. Ceballos
Statistical Models for Outbreak Detection of Measles in North Cotabato, Philippines
null
Mindanao Journal of Science and Technology, 22(1) (2024)
null
null
q-bio.QM stat.AP stat.CO stat.ME
http://creativecommons.org/licenses/by-nc-sa/4.0/
A measles outbreak occurs when the number of cases of measles in the population exceeds the typical level. Outbreaks that are not detected and managed early can increase mortality and morbidity and incur costs from activities responding to these events. The number of measles cases in the Province of North Cotabato, Philippines, was used in this study. Weekly reported cases of measles from January 2016 to December 2021 were provided by the Epidemiology and Surveillance Unit of the North Cotabato Provincial Health Office. Several integer-valued autoregressive (INAR) time series models were used to explore the possibility of detecting and identifying measles outbreaks in the province along with the classical ARIMA model. These models were evaluated based on goodness of fit, measles outbreak detection accuracy, and timeliness. The results of this study confirmed that INAR models have the conceptual advantage over ARIMA since the latter produces non-integer forecasts, which are not realistic for count data such as measles cases. Among the INAR models, the ZINGINAR (1) model was recommended for having a good model fit and timely and accurate detection of outbreaks. Furthermore, policymakers and decision-makers from relevant government agencies can use the ZINGINAR (1) model to improve disease surveillance and implement preventive measures against contagious diseases beforehand.
[ { "created": "Sun, 21 Jul 2024 01:25:51 GMT", "version": "v1" } ]
2024-07-23
[ [ "Kintanar", "Julienne Kate N.", "" ], [ "Ceballos", "Roel F.", "" ] ]
A measles outbreak occurs when the number of cases of measles in the population exceeds the typical level. Outbreaks that are not detected and managed early can increase mortality and morbidity and incur costs from activities responding to these events. The number of measles cases in the Province of North Cotabato, Philippines, was used in this study. Weekly reported cases of measles from January 2016 to December 2021 were provided by the Epidemiology and Surveillance Unit of the North Cotabato Provincial Health Office. Several integer-valued autoregressive (INAR) time series models were used to explore the possibility of detecting and identifying measles outbreaks in the province along with the classical ARIMA model. These models were evaluated based on goodness of fit, measles outbreak detection accuracy, and timeliness. The results of this study confirmed that INAR models have the conceptual advantage over ARIMA since the latter produces non-integer forecasts, which are not realistic for count data such as measles cases. Among the INAR models, the ZINGINAR (1) model was recommended for having a good model fit and timely and accurate detection of outbreaks. Furthermore, policymakers and decision-makers from relevant government agencies can use the ZINGINAR (1) model to improve disease surveillance and implement preventive measures against contagious diseases beforehand.
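For readers unfamiliar with the count-data models cited in the record above, the following sketch simulates and fits a generic Poisson INAR(1) process (binomial thinning plus Poisson innovations). It is not the ZINGINAR(1) model recommended by the authors, and the parameter names and values are illustrative assumptions only.

```python
# Illustrative sketch: a generic Poisson INAR(1) count-time-series model
# (binomial thinning plus Poisson innovations), not the paper's ZINGINAR(1).
import numpy as np

rng = np.random.default_rng(0)

def simulate_inar1(alpha, lam, n, x0=5):
    """X_t = alpha o X_{t-1} + eps_t, with binomial thinning 'o' and Poisson eps."""
    x = np.empty(n, dtype=int)
    x[0] = x0
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)   # thinning of last week's count
        x[t] = survivors + rng.poisson(lam)         # new (innovation) cases
    return x

def fit_inar1_cls(x):
    """Conditional least squares: regress X_t on X_{t-1}."""
    x_prev, x_curr = x[:-1], x[1:]
    alpha_hat = np.cov(x_curr, x_prev, bias=True)[0, 1] / np.var(x_prev)
    lam_hat = x_curr.mean() - alpha_hat * x_prev.mean()
    return alpha_hat, lam_hat

weekly_cases = simulate_inar1(alpha=0.4, lam=3.0, n=312)   # roughly six years of weeks
print(fit_inar1_cls(weekly_cases))
```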
1504.07382
Francois Saint-Antonin
Francois Saint-Antonin
Reply to Mills et al.: Oceanic Anoxic Event, a mechanism for selecting animals with the ability to survive hypoxic conditions
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/3.0/
It is generally considered that animal life was triggered by the rise of oxygen levels. Based on experiments evaluating the minimum range of oxygen levels at which sponges can survive, Mills and coauthors (doi:10.1073/pnas.1400547111) defend the opposite view. However, the authors do not demonstrate that "animal life was not triggered by the oxygen rise" is the only possible and unique conclusion from their observation. In this reply, it is suggested that a mechanism to explain the ability of sponges to survive in low-oxygen conditions is Oceanic Anoxic Events. These lead to oxygen depletion, and a series of them would selectively favor animals able to survive at low oxygen levels. Thus, the origin of the ability of marine animals to survive in low-oxygen conditions remains to be clarified.
[ { "created": "Tue, 28 Apr 2015 08:52:06 GMT", "version": "v1" } ]
2015-04-29
[ [ "Saint-Antonin", "Francois", "" ] ]
It is generally considered that animal life was triggered by the rise of oxygen levels. Based on experiments evaluating the minimum range of oxygen levels at which sponges can survive, Mills and coauthors (doi:10.1073/pnas.1400547111) defend the opposite view. However, the authors do not demonstrate that "animal life was not triggered by the oxygen rise" is the only possible and unique conclusion from their observation. In this reply, it is suggested that a mechanism to explain the ability of sponges to survive in low-oxygen conditions is Oceanic Anoxic Events. These lead to oxygen depletion, and a series of them would selectively favor animals able to survive at low oxygen levels. Thus, the origin of the ability of marine animals to survive in low-oxygen conditions remains to be clarified.
1601.07041
Ciprian Palaghianu Dr.
Ciprian Palaghianu, Marian Dragoi
Patterns of mast fruiting - a stochastic approach
6 pages, 3 figures
Journal of Landscape Management, 6 (2), 56-61 (2015)
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mast fruiting represents a synchronous population behaviour which can spread over large landscape areas. This reproductive pattern is generally perceived as a synchronous, periodic production of large seed crops and has significant practical importance for natural forest regeneration, in order to synchronize cuttings. The mechanisms of masting are still debated and models of this phenomenon are uncommon, so a stochastic approach can cast significant light on some particular aspects. Trees manage to get synchronized and coordinate their reproductive routines. But is it possible that trees get synchronized by chance, completely at random? Using a Monte Carlo simulation of seeding years and a theoretical masting pattern, a stochastic analysis is performed in order to assess the chance of random mast fruiting. Two populations of 100 trees, with fruiting periodicities of 2-3 years and 4-6 years, were set up and the fruiting dynamics were simulated for 100 years. The results show that periodicity itself cannot induce the masting effect by chance, but periodicity mathematically influences the reproductive pattern.
[ { "created": "Sun, 24 Jan 2016 10:27:46 GMT", "version": "v1" } ]
2016-02-12
[ [ "Palaghianu", "Ciprian", "" ], [ "Dragoi", "Marian", "" ] ]
Mast fruiting represents a synchronous population behaviour which can spread over large landscape areas. This reproductive pattern is generally perceived as a synchronous, periodic production of large seed crops and has significant practical importance for natural forest regeneration, in order to synchronize cuttings. The mechanisms of masting are still debated and models of this phenomenon are uncommon, so a stochastic approach can cast significant light on some particular aspects. Trees manage to get synchronized and coordinate their reproductive routines. But is it possible that trees get synchronized by chance, completely at random? Using a Monte Carlo simulation of seeding years and a theoretical masting pattern, a stochastic analysis is performed in order to assess the chance of random mast fruiting. Two populations of 100 trees, with fruiting periodicities of 2-3 years and 4-6 years, were set up and the fruiting dynamics were simulated for 100 years. The results show that periodicity itself cannot induce the masting effect by chance, but periodicity mathematically influences the reproductive pattern.
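A minimal version of the Monte Carlo experiment described in the record above might look like the sketch below: 100 trees fruit independently with a 2-3 year periodicity, and we count how often a large fraction of them happens to fruit in the same year purely by chance. The 80% mast-year threshold and the random starting phases are our own assumptions, not the authors' exact setup.

```python
# Minimal sketch of a chance-synchronization Monte Carlo experiment:
# each tree fruits independently every 2-3 years, and we ask how often a
# large fraction of the population fruits in the same year by accident.
import numpy as np

rng = np.random.default_rng(1)
n_trees, n_years = 100, 100

def simulate_population(min_gap=2, max_gap=3):
    fruiting = np.zeros((n_trees, n_years), dtype=bool)
    for tree in range(n_trees):
        year = rng.integers(0, max_gap)                 # random starting phase
        while year < n_years:
            fruiting[tree, year] = True
            year += rng.integers(min_gap, max_gap + 1)  # gap to next fruiting year
    return fruiting

frac_fruiting = simulate_population().mean(axis=0)      # fraction of trees per year
mast_years = (frac_fruiting > 0.8).sum()                # 80% threshold is arbitrary
print(f"apparent mast years out of {n_years}: {mast_years}")
```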
0710.3944
Swanand Gore
Swanand Gore and Tom Blundell
Crystallographic modelling of protein loops and their heterogeneity with Rappertk
null
null
null
null
q-bio.BM
null
Background. All-atom crystallographic refinement of proteins is a laborious, manually driven procedure, as a result of which alternative and multiconformer interpretations are not routinely investigated. Results. We describe efficient loop sampling procedures in Rappertk and demonstrate that single loops in proteins can be automatically and accurately modelled with few positional restraints. Loops constructed with a composite CNS/Rappertk protocol consistently have better Rfree than those with CNS alone. This approach is extended to a more realistic scenario where there are often large positional uncertainties in loops along with small imperfections in the secondary structural framework. Both ensemble and collection methods are used to estimate the structural heterogeneity of loop regions. Conclusion. Apart from benchmarking Rappertk for the all-atom protein refinement task, this work also demonstrates its utility in both aspects of loop modelling - building a single conformer and estimating the structural heterogeneity that the loops can exhibit.
[ { "created": "Sun, 21 Oct 2007 15:42:20 GMT", "version": "v1" } ]
2007-10-23
[ [ "Gore", "Swanand", "" ], [ "Blundell", "Tom", "" ] ]
Background. All-atom crystallographic refinement of proteins is a laborious, manually driven procedure, as a result of which alternative and multiconformer interpretations are not routinely investigated. Results. We describe efficient loop sampling procedures in Rappertk and demonstrate that single loops in proteins can be automatically and accurately modelled with few positional restraints. Loops constructed with a composite CNS/Rappertk protocol consistently have better Rfree than those with CNS alone. This approach is extended to a more realistic scenario where there are often large positional uncertainties in loops along with small imperfections in the secondary structural framework. Both ensemble and collection methods are used to estimate the structural heterogeneity of loop regions. Conclusion. Apart from benchmarking Rappertk for the all-atom protein refinement task, this work also demonstrates its utility in both aspects of loop modelling - building a single conformer and estimating the structural heterogeneity that the loops can exhibit.
2103.01307
Vasily Romanchak
Vasily Romanchak
About solving the Fechner-Stevens problem
7 pages
null
10.13140/RG.2.2.16725.76002
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we prove that the Fechner and Stevens laws are equivalent (coincide up to isomorphism). Therefore, the problem does not exist.
[ { "created": "Fri, 26 Feb 2021 14:49:49 GMT", "version": "v1" } ]
2021-03-03
[ [ "Romanchak", "Vasily", "" ] ]
In this paper, we prove that the Fechner and Stevens laws are equivalent (coincide up to isomorphism). Therefore, the problem does not exist.
1711.04495
Chiranjib Patra MR
Chiranjib Patra
Geo-spatial Monitoring Of Infectious Diseases By Unmanned Aerial Vehicles
This paper was presented at GeoMundus 2017 (http://www.geomundus.org/2017/) and was one of the winners of a travel grant for the presentation of an abstract at the Institute for GeoInformatics, Munster, Germany
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent developments in unmanned aerial vehicle (UAV) technology have paved the way for numerous applications in diverse cross-disciplinary fields. One of the main features of UAVs is their portability in terms of size, which allows them to navigate through fairly hostile environments and collect data. This data collection leads to interpretation of behavior and predictability through the kinds of analysis offered by data science. The application of UAVs to monitoring population and climate geography is well documented, but their use for studying germs in the atmosphere is poorly documented or absent. Since air remains one of the main media of transmission of germs, there must be some kind of signature specific to a particular kind of germ. Using this as a cue, the present communication presents a hypothetical model for studying the spread of disease. This model can help epidemiologists understand the mechanisms of microbial traffic, such as flu being transferred within the same species or across species, spatial diffusion such as human travel patterns, and newly recognized diseases such as various types of flu and vector-borne diseases like malaria and dengue. The model also covers related scenarios such as global climate change and the political-ecological emergence of aerially transmitted diseases.
[ { "created": "Mon, 13 Nov 2017 10:02:42 GMT", "version": "v1" } ]
2017-11-15
[ [ "Patra", "Chiranjib", "" ] ]
Recent developments in unmanned aerial vehicle (UAV) technology have paved the way for numerous applications in diverse cross-disciplinary fields. One of the main features of UAVs is their portability in terms of size, which allows them to navigate through fairly hostile environments and collect data. This data collection leads to interpretation of behavior and predictability through the kinds of analysis offered by data science. The application of UAVs to monitoring population and climate geography is well documented, but their use for studying germs in the atmosphere is poorly documented or absent. Since air remains one of the main media of transmission of germs, there must be some kind of signature specific to a particular kind of germ. Using this as a cue, the present communication presents a hypothetical model for studying the spread of disease. This model can help epidemiologists understand the mechanisms of microbial traffic, such as flu being transferred within the same species or across species, spatial diffusion such as human travel patterns, and newly recognized diseases such as various types of flu and vector-borne diseases like malaria and dengue. The model also covers related scenarios such as global climate change and the political-ecological emergence of aerially transmitted diseases.
q-bio/0703018
Greg Stephens
Luis M. A. Bettencourt, Greg J. Stephens, Michael I. Ham, and Guenter W. Gross
The functional structure of cortical neuronal networks grown in vitro
12 pages, 5 figures
Phys. Rev. E 75, 021915 (2007)
10.1103/PhysRevE.75.021915
LAUR-06-5040
q-bio.NC cond-mat.dis-nn
null
We apply an information theoretic treatment of action potential time series measured with microelectrode arrays to estimate the connectivity of mammalian neuronal cell assemblies grown {\it in vitro}. We infer connectivity between two neurons via the measurement of the mutual information between their spike trains. In addition we measure higher point multi-informations between any two spike trains conditional on the activity of a third cell, as a means to identify and distinguish classes of functional connectivity among three neurons. The use of a conditional three-cell measure removes some interpretational shortcomings of the pairwise mutual information and sheds light on the functional connectivity arrangements of any three cells. We analyze the resultant connectivity graphs in light of other complex networks and demonstrate that, despite their {\it ex vivo} development, the connectivity maps derived from cultured neural assemblies are similar to other biological networks and display nontrivial structure in clustering coefficient, network diameter and assortative mixing. Specifically we show that these networks are weakly disassortative small world graphs, which differ significantly in their structure from randomized graphs with the same degree. We expect our analysis to be useful in identifying the computational motifs of a wide variety of complex networks, derived from time series data.
[ { "created": "Wed, 7 Mar 2007 16:08:52 GMT", "version": "v1" } ]
2007-05-23
[ [ "Bettencourt", "Luis M. A.", "" ], [ "Stephens", "Greg J.", "" ], [ "Ham", "Michael I.", "" ], [ "Gross", "Guenter W.", "" ] ]
We apply an information theoretic treatment of action potential time series measured with microelectrode arrays to estimate the connectivity of mammalian neuronal cell assemblies grown {\it in vitro}. We infer connectivity between two neurons via the measurement of the mutual information between their spike trains. In addition we measure higher point multi-informations between any two spike trains conditional on the activity of a third cell, as a means to identify and distinguish classes of functional connectivity among three neurons. The use of a conditional three-cell measure removes some interpretational shortcomings of the pairwise mutual information and sheds light on the functional connectivity arrangements of any three cells. We analyze the resultant connectivity graphs in light of other complex networks and demonstrate that, despite their {\it ex vivo} development, the connectivity maps derived from cultured neural assemblies are similar to other biological networks and display nontrivial structure in clustering coefficient, network diameter and assortative mixing. Specifically we show that these networks are weakly disassortative small world graphs, which differ significantly in their structure from randomized graphs with the same degree. We expect our analysis to be useful in identifying the computational motifs of a wide variety of complex networks, derived from time series data.
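To make the pairwise part of the information-theoretic treatment above concrete, here is a rough plug-in estimate of the mutual information between two spike trains after binning and binarising them. It is a simplification of the paper's analysis; the bin width and the toy spike trains are arbitrary.

```python
# Rough plug-in pairwise mutual-information estimate between two spike trains,
# after binning and binarising. A simplification of the treatment above.
import numpy as np

def binarise(spike_times, t_max, bin_width=0.01):
    bins = np.arange(0.0, t_max + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=bins)
    return (counts > 0).astype(int)

def mutual_information(x, y):
    """Plug-in MI (in bits) between two equal-length binary sequences."""
    joint = np.zeros((2, 2))
    for a, b in zip(x, y):
        joint[a, b] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            if joint[a, b] > 0:
                mi += joint[a, b] * np.log2(joint[a, b] / (px[a] * py[b]))
    return mi

# toy example: two partially correlated spike trains over 10 seconds
rng = np.random.default_rng(2)
t1 = np.sort(rng.uniform(0, 10, 200))
t2 = np.sort(np.concatenate([t1[:100] + 0.001, rng.uniform(0, 10, 100)]))
print(mutual_information(binarise(t1, 10), binarise(t2, 10)))
```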
2103.10166
Alessandro Lameiras Koerich
Bernardo B. Gatto, Juan G. Colonna, Eulanda M. dos Santos, Alessandro L. Koerich, Kazuhiro Fukui
Discriminative Singular Spectrum Classifier with Applications on Bioacoustic Signal Recognition
15 pages
null
null
null
q-bio.QM cs.LG cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic analysis of bioacoustic signals is a fundamental tool to evaluate the vitality of our planet. Frogs and bees, for instance, may act like biological sensors providing information about environmental changes. This task is fundamental for ecological monitoring but still includes many challenges, such as nonuniform signal length processing, degraded target signals due to environmental noise, and the scarcity of labeled samples for training machine learning models. To tackle these challenges, we present a bioacoustic signal classifier equipped with a discriminative mechanism to extract useful features for analysis and classification efficiently. The proposed classifier does not require a large amount of training data and handles nonuniform signal length natively. Unlike current bioacoustic recognition methods, which are task-oriented, the proposed model relies on transforming the input signals into vector subspaces generated by applying Singular Spectrum Analysis (SSA). Then, a subspace is designed to expose discriminative features. The proposed model offers end-to-end capabilities, which is desirable in modern machine learning systems. This formulation provides a segmentation-free and noise-tolerant approach to represent and classify bioacoustic signals and a highly compact signal descriptor inherited from SSA. The validity of the proposed method is verified using three challenging bioacoustic datasets containing anuran, bee, and mosquito species. Experimental results on three bioacoustic datasets have shown the competitive performance of the proposed method compared to commonly employed methods for bioacoustics signal classification in terms of accuracy.
[ { "created": "Thu, 18 Mar 2021 11:01:21 GMT", "version": "v1" } ]
2021-03-19
[ [ "Gatto", "Bernardo B.", "" ], [ "Colonna", "Juan G.", "" ], [ "Santos", "Eulanda M. dos", "" ], [ "Koerich", "Alessandro L.", "" ], [ "Fukui", "Kazuhiro", "" ] ]
Automatic analysis of bioacoustic signals is a fundamental tool to evaluate the vitality of our planet. Frogs and bees, for instance, may act like biological sensors providing information about environmental changes. This task is fundamental for ecological monitoring but still includes many challenges, such as nonuniform signal length processing, degraded target signals due to environmental noise, and the scarcity of labeled samples for training machine learning models. To tackle these challenges, we present a bioacoustic signal classifier equipped with a discriminative mechanism to extract useful features for analysis and classification efficiently. The proposed classifier does not require a large amount of training data and handles nonuniform signal length natively. Unlike current bioacoustic recognition methods, which are task-oriented, the proposed model relies on transforming the input signals into vector subspaces generated by applying Singular Spectrum Analysis (SSA). Then, a subspace is designed to expose discriminative features. The proposed model offers end-to-end capabilities, which is desirable in modern machine learning systems. This formulation provides a segmentation-free and noise-tolerant approach to represent and classify bioacoustic signals and a highly compact signal descriptor inherited from SSA. The validity of the proposed method is verified using three challenging bioacoustic datasets containing anuran, bee, and mosquito species. Experimental results on three bioacoustic datasets have shown the competitive performance of the proposed method compared to commonly employed methods for bioacoustics signal classification in terms of accuracy.
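The subspace representation mentioned in the record above can be sketched as follows: a 1-D signal is embedded as a Hankel (trajectory) matrix, and the leading left singular vectors of its SVD serve as a compact subspace descriptor that can be compared across signals via principal angles. The discriminative learning and classification stages of the paper are not reproduced here; the window length and subspace dimension are arbitrary choices.

```python
# Sketch of the SSA embedding step: trajectory (Hankel) matrix -> SVD ->
# leading left singular vectors as a subspace descriptor. The paper's
# discriminative and classification stages are not reproduced.
import numpy as np

def ssa_subspace(signal, window=40, dim=5):
    n = len(signal)
    # columns are overlapping windows of the signal (trajectory/Hankel matrix)
    traj = np.column_stack([signal[i:i + window] for i in range(n - window + 1)])
    u, _, _ = np.linalg.svd(traj, full_matrices=False)
    return u[:, :dim]                      # orthonormal basis, window x dim

def subspace_similarity(u1, u2):
    """Mean squared cosine of the principal angles between two subspaces."""
    sv = np.linalg.svd(u1.T @ u2, compute_uv=False)
    return float(np.mean(sv ** 2))

rng = np.random.default_rng(3)
call_a = np.sin(0.30 * np.arange(500)) + 0.1 * rng.standard_normal(500)
call_b = np.sin(0.31 * np.arange(500)) + 0.1 * rng.standard_normal(500)
print(subspace_similarity(ssa_subspace(call_a), ssa_subspace(call_b)))
```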
1901.05315
Divine Wanduku
Divine Wanduku
On a family of stochastic SVIR influenza epidemic models and maximum likelihood estimation
Math. Model. in Health, Social and Appl. Sci., Springer
null
10.1007/978-981-15-2286-4_2
null
q-bio.PE stat.ME
http://creativecommons.org/licenses/by-nc-sa/4.0/
This study presents a family of stochastic models for the dynamics of influenza in a closed human population. We consider treatment for the disease in the form of vaccination, and incorporate the periods of effectiveness of the vaccine and infectiousness for the individuals in the population. Our model is an SVIR model, with trinomial transition probabilities, where all individuals who recover from the disease acquire permanent natural immunity against the strain of the disease. Special SVIR models in the family are presented, based on the structure of the probability of getting infection and vaccination at any instant. The methods of maximum likelihood and expectation maximization are derived for the parameters of the chain. Moreover, estimators for some epidemiological assessment parameters, such as the basic reproduction number, are computed. Numerical simulation examples are presented for the model.
[ { "created": "Tue, 15 Jan 2019 16:15:03 GMT", "version": "v1" } ]
2020-05-05
[ [ "Wanduku", "Divine", "" ] ]
This study presents a family of stochastic models for the dynamics of influenza in a closed human population. We consider treatment for the disease in the form of vaccination, and incorporate the periods of effectiveness of the vaccine and infectiousness for the individuals in the population. Our model is an SVIR model, with trinomial transition probabilities, where all individuals who recover from the disease acquire permanent natural immunity against the strain of the disease. Special SVIR models in the family are presented, based on the structure of the probability of getting infection and vaccination at any instant. The methods of maximum likelihood and expectation maximization are derived for the parameters of the chain. Moreover, estimators for some epidemiological assessment parameters, such as the basic reproduction number, are computed. Numerical simulation examples are presented for the model.
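As a schematic companion to the family of models above, the sketch below runs a toy discrete-time SVIR chain in which each susceptible either stays susceptible, is vaccinated, or is infected at every step (a trinomial transition). Waning vaccine protection and the paper's exact transition probabilities and estimators are not modelled, and all rates are made-up illustration values.

```python
# Toy discrete-time SVIR chain with trinomial transitions out of the
# susceptible class (stay / vaccinated / infected). Schematic only: not the
# paper's model, and all rates below are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)

def step(S, V, I, R, beta=0.3, vacc=0.02, gamma=0.2):
    N = S + V + I + R
    p_inf = 1.0 - np.exp(-beta * I / N)            # per-susceptible infection prob
    p_stay = max(0.0, 1.0 - p_inf - vacc)
    new_inf, new_vacc, _ = rng.multinomial(S, [p_inf, vacc, p_stay])
    recoveries = rng.binomial(I, gamma)            # recovered gain permanent immunity
    return (S - new_inf - new_vacc, V + new_vacc,
            I + new_inf - recoveries, R + recoveries)

state = (990, 0, 10, 0)
for _ in range(120):                               # 120 time steps
    state = step(*state)
print("final (S, V, I, R):", state)
```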
1706.04117
Khaled Sayed
Khaled Sayed, Cheryl A. Telmer, Adam A. Butchy, and Natasa Miskov-Zivanov
Recipes for Translating Big Data Machine Reading to Executable Cellular Signaling Models
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the tremendous increase in the amount of biological literature, developing automated methods for extracting big data from papers, building models and explaining big mechanisms becomes a necessity. We describe here our approach to translating machine reading outputs, obtained by reading biological signaling literature, to discrete models of cellular networks. We use outputs from three different reading engines, and describe our approach to translating their different features, using examples from reading cancer literature. We also outline several issues that still arise when assembling cellular network models from state-of-the-art reading engines. Finally, we illustrate the details of our approach with a case study in pancreatic cancer.
[ { "created": "Tue, 13 Jun 2017 15:21:22 GMT", "version": "v1" } ]
2017-06-14
[ [ "Sayed", "Khaled", "" ], [ "Telmer", "Cheryl A.", "" ], [ "Butchy", "Adam A.", "" ], [ "Miskov-Zivanov", "Natasa", "" ] ]
With the tremendous increase in the amount of biological literature, developing automated methods for extracting big data from papers, building models and explaining big mechanisms becomes a necessity. We describe here our approach to translating machine reading outputs, obtained by reading biological signaling literature, to discrete models of cellular networks. We use outputs from three different reading engines, and describe our approach to translating their different features, using examples from reading cancer literature. We also outline several issues that still arise when assembling cellular network models from state-of-the-art reading engines. Finally, we illustrate the details of our approach with a case study in pancreatic cancer.
1507.02148
Lu Xie
Lu Xie, Gregory R. Smith, Russell Schwartz
Derivative-free optimization of rate parameters of capsid assembly models from bulk in vitro data
null
null
null
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The assembly of virus capsids from free coat proteins proceeds by a complicated cascade of association and dissociation steps, the great majority of which cannot be directly experimentally observed. This has made capsid assembly a rich field for computational models to attempt to fill the gaps in what is experimentally observable. Nonetheless, accurate simulation predictions depend on accurate models and there are substantial obstacles to model inference for such systems. Here, we describe progress in learning parameters for capsid assembly systems, particularly kinetic rate constants of coat-coat interactions, by computationally fitting simulations to experimental data. We previously developed an approach to learn rate parameters of coat-coat interactions by minimizing the deviation between real and simulated light scattering data monitoring bulk capsid assembly in vitro. This is a difficult data-fitting problem, however, because of the high computational cost of simulating assembly trajectories, the stochastic noise inherent to the models, and the limited and noisy data available for fitting. Here we show that a newer class of methods, based on derivative-free optimization (DFO), can more quickly and precisely learn physical parameters from static light scattering data. We further explore how the advantages of the approaches might be affected by alternative data sources through simulation of a model of time-resolved mass spectrometry data, an alternative technology for monitoring bulk capsid assembly that can be expected to provide much richer data. The results show that advances in both the data and the algorithms can improve model inference, with rich data leading to high-quality fits for all methods, but DFO methods showing substantial advantages for the less informative data sources more representative of current experimental practice.
[ { "created": "Tue, 7 Jul 2015 18:09:47 GMT", "version": "v1" } ]
2015-07-09
[ [ "Xie", "Lu", "" ], [ "Smith", "Gregory R.", "" ], [ "Schwartz", "Russell", "" ] ]
The assembly of virus capsids from free coat proteins proceeds by a complicated cascade of association and dissociation steps, the great majority of which cannot be directly experimentally observed. This has made capsid assembly a rich field for computational models to attempt to fill the gaps in what is experimentally observable. Nonetheless, accurate simulation predictions depend on accurate models and there are substantial obstacles to model inference for such systems. Here, we describe progress in learning parameters for capsid assembly systems, particularly kinetic rate constants of coat-coat interactions, by computationally fitting simulations to experimental data. We previously developed an approach to learn rate parameters of coat-coat interactions by minimizing the deviation between real and simulated light scattering data monitoring bulk capsid assembly in vitro. This is a difficult data-fitting problem, however, because of the high computational cost of simulating assembly trajectories, the stochastic noise inherent to the models, and the limited and noisy data available for fitting. Here we show that a newer class of methods, based on derivative-free optimization (DFO), can more quickly and precisely learn physical parameters from static light scattering data. We further explore how the advantages of the approaches might be affected by alternative data sources through simulation of a model of time-resolved mass spectrometry data, an alternative technology for monitoring bulk capsid assembly that can be expected to provide much richer data. The results show that advances in both the data and the algorithms can improve model inference, with rich data leading to high-quality fits for all methods, but DFO methods showing substantial advantages for the less informative data sources more representative of current experimental practice.
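The general fitting setup described above can be illustrated with a toy stand-in for the capsid assembly simulator: a two-parameter saturating kinetics curve is fitted to noisy synthetic "light scattering" data using Nelder-Mead, a derivative-free optimizer available in SciPy. In the real problem the toy model would be replaced by a stochastic assembly simulation, which is precisely why derivative-free methods are attractive.

```python
# Toy stand-in for fitting rate parameters to bulk kinetics data with a
# derivative-free optimizer (Nelder-Mead). The saturating curve below is an
# illustration only, not the authors' capsid assembly simulator.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
t = np.linspace(0, 60, 121)                        # time points (minutes)

def toy_model(params, t):
    amplitude, rate = params
    return amplitude * (1.0 - np.exp(-rate * t))   # saturating assembly signal

true_params = (1.0, 0.08)
data = toy_model(true_params, t) + 0.02 * rng.standard_normal(t.size)

def objective(params):
    return np.sum((toy_model(params, t) - data) ** 2)

result = minimize(objective, x0=[0.5, 0.5], method="Nelder-Mead")
print("fitted (amplitude, rate):", result.x)
```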
1807.03696
Peter Taylor
Nishant Sinha, Yujiang Wang, Justin Dauwels, Marcus Kaiser, Thomas Thesen, Rob Forsyth, Peter Neal Taylor
Computer modelling of connectivity change suggests epileptogenesis mechanisms in idiopathic generalised epilepsy
null
NeuroImage.Clinical 21 (2019) 101655
10.1016/j.nicl.2019.101655
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Patients with idiopathic generalised epilepsy (IGE) typically have normal conventional magnetic resonance imaging (MRI), hence MRI based diagnosis is challenging. Anatomical abnormalities underlying brain dysfunctions in IGE are unclear and their relation to the pathomechanisms of epileptogenesis is poorly understood. In this study, we applied connectometry, an advanced quantitative neuroimaging technique for investigating localised changes in white-matter tissue. Analysing white matter structures of 32 subjects, we incorporated our findings in a computational model of seizure dynamics to suggest a plausible mechanism of epileptogenesis. Patients with IGE have significant bilateral alterations in major white-matter fascicles. In the cingulum, fornix, and superior longitudinal fasciculus, tract integrity is compromised, whereas in specific parts of tracts between thalamus and the precentral gyrus, tract integrity is enhanced in patients. Combining these alterations in a logistic regression model, we computed the decision boundary that discriminated patients and controls. The computational model, informed with the findings on the tract abnormalities, specifically highlighted the importance of enhanced cortico-reticular connections along with impaired cortico-cortical connections in inducing pathological seizure-like dynamics. We emphasise taking directionality of brain connectivity into consideration towards understanding the pathological mechanisms; this is possible by combining neuroimaging and computational modelling. Our imaging evidence of structural alterations suggests the loss of cortico-cortical and enhancement of cortico-thalamic fibre integrity in IGE. We further suggest that impaired connectivity from cortical regions to the thalamic reticular nucleus offers a therapeutic target for selectively modifying the brain circuit for reversing the mechanisms leading to epileptogenesis.
[ { "created": "Tue, 10 Jul 2018 15:11:19 GMT", "version": "v1" }, { "created": "Sat, 10 Nov 2018 09:31:25 GMT", "version": "v2" } ]
2020-09-30
[ [ "Sinha", "Nishant", "" ], [ "Wang", "Yujiang", "" ], [ "Dauwels", "Justin", "" ], [ "Kaiser", "Marcus", "" ], [ "Thesen", "Thomas", "" ], [ "Forsyth", "Rob", "" ], [ "Taylor", "Peter Neal", "" ] ]
Patients with idiopathic generalised epilepsy (IGE) typically have normal conventional magnetic resonance imaging (MRI), hence MRI-based diagnosis is challenging. Anatomical abnormalities underlying brain dysfunctions in IGE are unclear, and their relation to the pathomechanisms of epileptogenesis is poorly understood. In this study, we applied connectometry, an advanced quantitative neuroimaging technique for investigating localised changes in white-matter tissue. Analysing the white-matter structures of 32 subjects, we incorporated our findings into a computational model of seizure dynamics to suggest a plausible mechanism of epileptogenesis. Patients with IGE have significant bilateral alterations in major white-matter fascicles. In the cingulum, fornix, and superior longitudinal fasciculus, tract integrity is compromised, whereas in specific parts of the tracts between the thalamus and the precentral gyrus, tract integrity is enhanced in patients. Combining these alterations in a logistic regression model, we computed the decision boundary that discriminated patients from controls. The computational model, informed by the findings on the tract abnormalities, specifically highlighted the importance of enhanced cortico-reticular connections, along with impaired cortico-cortical connections, in inducing pathological seizure-like dynamics. We emphasise taking the directionality of brain connectivity into consideration to understand the pathological mechanisms; this is possible by combining neuroimaging and computational modelling. Our imaging evidence of structural alterations suggests the loss of cortico-cortical and the enhancement of cortico-thalamic fibre integrity in IGE. We further suggest that impaired connectivity from cortical regions to the thalamic reticular nucleus offers a therapeutic target for selectively modifying the brain circuit so as to reverse the mechanisms leading to epileptogenesis.
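A minimal sketch of the logistic-regression step described in this abstract, separating patients from controls on tract-integrity features; the two feature definitions and the synthetic measurements are illustrative assumptions, not the study's data.

```python
# Minimal sketch (not the authors' code) of combining tract-integrity alterations
# in a logistic regression that separates patients from controls. Feature names
# and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients, n_controls = 16, 16           # 32 subjects in total, as in the study design

# Two hypothetical features per subject: mean integrity of cortico-cortical tracts
# (reduced in patients) and of thalamo-precentral tracts (enhanced in patients).
patients = np.column_stack([rng.normal(0.45, 0.05, n_patients),
                            rng.normal(0.60, 0.05, n_patients)])
controls = np.column_stack([rng.normal(0.55, 0.05, n_controls),
                            rng.normal(0.50, 0.05, n_controls)])
X = np.vstack([patients, controls])
y = np.r_[np.ones(n_patients), np.zeros(n_controls)]

clf = LogisticRegression().fit(X, y)
# The decision boundary is the line w0*x0 + w1*x1 + b = 0 in feature space.
w, b = clf.coef_[0], clf.intercept_[0]
print("boundary: %.2f * cortico-cortical + %.2f * thalamo-precentral + %.2f = 0" % (w[0], w[1], b))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=4).mean())
```

With only 32 subjects, the cross-validated accuracy, rather than the training fit, is the quantity worth reporting for such a boundary.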
1405.3021
Thorsten Prüstel
Thorsten Prüstel and Martin Meier-Schellersheim
General theory of area reactivity models: rate coefficients, binding probabilities and all that
null
null
null
null
q-bio.QM cond-mat.stat-mech
http://creativecommons.org/licenses/publicdomain/
We further develop the general theory of the area reactivity model, which provides an alternative description of the diffusion-influenced reaction of an isolated receptor-ligand pair in terms of a generalized Feynman-Kac equation. We analyze both the irreversible and the reversible reaction and derive the equations of motion for the survival and separation probabilities. Furthermore, we discuss the notion of a time-dependent rate coefficient within the alternative model and obtain a number of relations between the rate coefficient, the survival and separation probabilities, and the reaction rate. Finally, we calculate asymptotic and approximate expressions for the (irreversible) rate coefficient, the binding probability, and the average lifetime of the bound state, and discuss on- and off-rates in this context. Throughout our treatment, we point out similarities and differences between the area and the classical contact reactivity models. The presented analysis and the obtained results provide a theoretical framework that will facilitate the comparison of experiments with model predictions.
[ { "created": "Tue, 13 May 2014 03:12:43 GMT", "version": "v1" } ]
2014-05-14
[ [ "Prüstel", "Thorsten", "" ], [ "Meier-Schellersheim", "Martin", "" ] ]
We further develop the general theory of the area reactivity model, which provides an alternative description of the diffusion-influenced reaction of an isolated receptor-ligand pair in terms of a generalized Feynman-Kac equation. We analyze both the irreversible and the reversible reaction and derive the equations of motion for the survival and separation probabilities. Furthermore, we discuss the notion of a time-dependent rate coefficient within the alternative model and obtain a number of relations between the rate coefficient, the survival and separation probabilities, and the reaction rate. Finally, we calculate asymptotic and approximate expressions for the (irreversible) rate coefficient, the binding probability, and the average lifetime of the bound state, and discuss on- and off-rates in this context. Throughout our treatment, we point out similarities and differences between the area and the classical contact reactivity models. The presented analysis and the obtained results provide a theoretical framework that will facilitate the comparison of experiments with model predictions.
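For orientation, a generic sketch of the Doi-type sink formulation on which area reactivity models are built; the notation (D, \kappa_a, a) and the specific irreversible form shown here are assumptions for illustration, not necessarily the paper's exact equations.

```latex
% Generic sketch of the irreversible area reactivity (Doi-type) model; notation is
% assumed, not taken from the paper. The pair density p(r,t) diffuses and is
% absorbed only while the pair lies inside the reaction area of radius a:
\begin{align}
  \frac{\partial p(r,t)}{\partial t}
      &= D \nabla^{2} p(r,t) \;-\; \kappa_{a}\,\Theta(a - r)\, p(r,t), \\
  S(t) &= \int p(r,t)\,\mathrm{d}^{2}r ,
  \qquad
  k(t) \;=\; \kappa_{a}\!\int_{r<a} p(r,t)\,\mathrm{d}^{2}r \;=\; -\,\frac{\mathrm{d}S(t)}{\mathrm{d}t},
\end{align}
% where D is the relative diffusion constant, \kappa_a the intrinsic association
% constant, \Theta the Heaviside step function, S(t) the survival probability of
% the unbound pair, and k(t) the time-dependent rate coefficient.
```

The second identity follows from integrating the first equation over all space, assuming the diffusive flux vanishes at infinity; a reversible variant adds a dissociation channel that reinjects bound pairs into the region r < a.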