Dataset columns (type; length range or number of classes):
id: string (length 9-13)
submitter: string (length 4-48)
authors: string (length 4-9.62k)
title: string (length 4-343)
comments: string (length 2-480)
journal-ref: string (length 9-309)
doi: string (length 12-138)
report-no: string (277 classes)
categories: string (length 8-87)
license: string (9 classes)
orig_abstract: string (length 27-3.76k)
versions: list (length 1-15)
update_date: string (length 10-10)
authors_parsed: list (length 1-147)
abstract: string (length 24-3.75k)
2303.09094
Masod Sadipour
Masod Sadipour, Ali N. Azadani
The measurement of bovine pericardium density and its implications on leaflet stress distribution in bioprosthetic heart valves
13 pages, 7 figures
null
10.1007/s13239-023-00692-0
null
q-bio.TO
http://creativecommons.org/publicdomain/zero/1.0/
Purpose: Bioprosthetic Heart Valves (BHVs) are currently in widespread use with promising outcomes. Computational modeling provides a framework for quantitatively describing BHVs in the preclinical phase. To obtain reliable solutions in computational modeling, it is essential to consider accurate leaflet properties such as mechanical properties and density. Bovine pericardium (BP) is widely used as BHV leaflets. Previous computational studies assume BP density to be close to the density of water or blood. However, BP leaflets undergo multiple treatments such as fixation and anti-calcification. The present study aims to measure the density of the BP used in BHVs and determine its effect on leaflet stress distribution. Methods: We determined the density of eight square BP samples laser cut from Edwards BP patches. The weight of specimens was measured using an A&D Analytical Balance, and volume was measured by high-resolution imaging. Finite element models of a BHV similar to PERIMOUNT Magna were developed in ABAQUS. Results: The average density value of the BP samples was 1410 kg/m3. In the acceleration phase of a cardiac cycle, the maximum stress value reached 1.89 MPa for a density value of 1410 kg/m3, and 2.47 MPa for a density of 1000 kg/m3 (30.7% difference). In the deceleration phase, the maximum stress value reached 713 and 669 kPa, respectively. Conclusion: Stress distribution and deformation of BHV leaflets are dependent upon the magnitude of density. Ascertaining an accurate value for the density of BHV leaflets is essential for computational models.
[ { "created": "Thu, 16 Mar 2023 05:38:54 GMT", "version": "v1" } ]
2024-01-23
[ [ "Sadipour", "Masod", "" ], [ "Azadani", "Ali N.", "" ] ]
Purpose: Bioprosthetic Heart Valves (BHVs) are currently in widespread use with promising outcomes. Computational modeling provides a framework for quantitatively describing BHVs in the preclinical phase. To obtain reliable solutions in computational modeling, it is essential to consider accurate leaflet properties such as mechanical properties and density. Bovine pericardium (BP) is widely used as BHV leaflets. Previous computational studies assume BP density to be close to the density of water or blood. However, BP leaflets undergo multiple treatments such as fixation and anti-calcification. The present study aims to measure the density of the BP used in BHVs and determine its effect on leaflet stress distribution. Methods: We determined the density of eight square BP samples laser cut from Edwards BP patches. The weight of specimens was measured using an A&D Analytical Balance, and volume was measured by high-resolution imaging. Finite element models of a BHV similar to PERIMOUNT Magna were developed in ABAQUS. Results: The average density value of the BP samples was 1410 kg/m3. In the acceleration phase of a cardiac cycle, the maximum stress value reached 1.89 MPa for a density value of 1410 kg/m3, and 2.47 MPa for a density of 1000 kg/m3 (30.7% difference). In the deceleration phase, the maximum stress value reached 713 and 669 kPa, respectively. Conclusion: Stress distribution and deformation of BHV leaflets are dependent upon the magnitude of density. Ascertaining an accurate value for the density of BHV leaflets is essential for computational models.
2009.11241
Xuan Guo
Xuan Guo, Shichao Feng
Deep learning for peptide identification from metaproteomics datasets
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metaproteomics is becoming widely used in microbiome research for gaining insights into the functional state of the microbial community. Current metaproteomics studies are generally based on high-throughput tandem mass spectrometry (MS/MS) coupled with liquid chromatography. The identification of peptides and proteins from MS data involves the computational procedure of searching MS/MS spectra against a predefined protein sequence database and assigning top-scored peptides to spectra. Existing computational tools are still far from being able to extract all the information out of large MS/MS datasets acquired from metaproteome samples. In this paper, we propose a deep-learning-based algorithm, called DeepFilter, for improving the rate of confident peptide identifications from a collection of tandem mass spectra. Compared with other post-processing tools, including Percolator, Q-ranker, PeptideProphet, and Iprophet, DeepFilter identified 20% and 10% more peptide-spectrum-matches and proteins, respectively, on marine microbial and soil microbial metaproteome samples at a false discovery rate of 1%.
[ { "created": "Wed, 23 Sep 2020 16:25:22 GMT", "version": "v1" } ]
2020-09-24
[ [ "Guo", "Xuan", "" ], [ "Feng", "Shichao", "" ] ]
Metaproteomics is becoming widely used in microbiome research for gaining insights into the functional state of the microbial community. Current metaproteomics studies are generally based on high-throughput tandem mass spectrometry (MS/MS) coupled with liquid chromatography. The identification of peptides and proteins from MS data involves the computational procedure of searching MS/MS spectra against a predefined protein sequence database and assigning top-scored peptides to spectra. Existing computational tools are still far from being able to extract all the information out of large MS/MS datasets acquired from metaproteome samples. In this paper, we propose a deep-learning-based algorithm, called DeepFilter, for improving the rate of confident peptide identifications from a collection of tandem mass spectra. Compared with other post-processing tools, including Percolator, Q-ranker, PeptideProphet, and Iprophet, DeepFilter identified 20% and 10% more peptide-spectrum-matches and proteins, respectively, on marine microbial and soil microbial metaproteome samples at a false discovery rate of 1%.
2101.01752
Giuseppe Tronci
Heather E. Owston, Katrina M. Moisley, Giuseppe Tronci, Stephen J. Russell, Peter V. Giannoudis, Elena Jones
Induced Periosteum-Mimicking Membrane with Cell Barrier and Multipotential Stromal Cell (MSC) Homing Functionalities
null
null
10.3390/ijms21155233
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
The current management of critical size bone defects (CSBDs) remains challenging and requires multiple surgeries. To reduce the number of surgeries, wrapping a biodegradable fibrous membrane around the defect to contain the graft and carry biological stimulants for repair is highly desirable. Poly(epsilon-caprolactone) (PCL) can be utilised to realise nonwoven fibrous barrier-like structures through free surface electrospinning (FSE). Human periosteum and induced membrane (IM) samples informed the development of an FSE membrane to support platelet lysate (PL) absorption, multipotential stromal cells (MSC) growth, and the prevention of cell migration. Although thinner than IM, periosteum presented a more mature vascular system with a significantly larger blood vessel diameter. The electrospun membrane (PCL3%-E) exhibited randomly configured nanoscale fibres that were successfully customised to introduce pores of increased diameter, without compromising tensile properties. Additional to the PL absorption and release capabilities needed for MSC attraction and growth, PCL3%-E also provided a favourable surface for the proliferation and alignment of periosteum- and bone marrow derived-MSCs, whilst possessing a barrier function to cell migration. These results demonstrate the development of a promising biodegradable barrier membrane enabling PL release and MSC colonisation, two key functionalities needed for the in situ formation of a transitional periosteum-like structure, enabling movement towards single-surgery CSBD reconstruction.
[ { "created": "Tue, 5 Jan 2021 19:36:50 GMT", "version": "v1" } ]
2021-01-07
[ [ "Owston", "Heather E.", "" ], [ "Moisley", "Katrina M.", "" ], [ "Tronci", "Giuseppe", "" ], [ "Russell", "Stephen J.", "" ], [ "Giannoudis", "Peter V.", "" ], [ "Jones", "Elena", "" ] ]
The current management of critical size bone defects (CSBDs) remains challenging and requires multiple surgeries. To reduce the number of surgeries, wrapping a biodegradable fibrous membrane around the defect to contain the graft and carry biological stimulants for repair is highly desirable. Poly(epsilon-caprolactone) (PCL) can be utilised to realise nonwoven fibrous barrier-like structures through free surface electrospinning (FSE). Human periosteum and induced membrane (IM) samples informed the development of an FSE membrane to support platelet lysate (PL) absorption, multipotential stromal cells (MSC) growth, and the prevention of cell migration. Although thinner than IM, periosteum presented a more mature vascular system with a significantly larger blood vessel diameter. The electrospun membrane (PCL3%-E) exhibited randomly configured nanoscale fibres that were successfully customised to introduce pores of increased diameter, without compromising tensile properties. Additional to the PL absorption and release capabilities needed for MSC attraction and growth, PCL3%-E also provided a favourable surface for the proliferation and alignment of periosteum- and bone marrow derived-MSCs, whilst possessing a barrier function to cell migration. These results demonstrate the development of a promising biodegradable barrier membrane enabling PL release and MSC colonisation, two key functionalities needed for the in situ formation of a transitional periosteum-like structure, enabling movement towards single-surgery CSBD reconstruction.
1909.00404
Alexander Spirov
Victoria Yu. Samuta, Alexander V. Spirov
Quantitative analysis of the dynamics of maternal gradients of the early Drosophila embryo
19 pages, in Russian
null
null
null
q-bio.QM q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The predetermination, formation, and maintenance of the primary morphogenetic gradient (the bicoid gradient) of the early Drosophila embryo involve many interrelated processes. Here we focus on a systems-biology analysis of the processes of redistribution of bicoid mRNA in an early embryo. The results of a quantitative analysis of experimental data, together with the results of their dynamic modeling, substantiate the role of active transport in the redistribution of bicoid mRNA.
[ { "created": "Sun, 1 Sep 2019 14:00:30 GMT", "version": "v1" } ]
2019-09-04
[ [ "Samuta", "Victoria Yu.", "" ], [ "Spirov", "Alexander V.", "" ] ]
The predetermination, formation, and maintenance of the primary morphogenetic gradient (the bicoid gradient) of the early Drosophila embryo involve many interrelated processes. Here we focus on a systems-biology analysis of the processes of redistribution of bicoid mRNA in an early embryo. The results of a quantitative analysis of experimental data, together with the results of their dynamic modeling, substantiate the role of active transport in the redistribution of bicoid mRNA.
2104.11852
Tiberiu Tesileanu
Tiberiu Tesileanu, Siavash Golkar, Samaneh Nasiri, Anirvan M. Sengupta, Dmitri B. Chklovskii
Neural circuits for dynamics-based segmentation of time series
v2.1; 34 pages, 14 figures
null
null
null
q-bio.NC physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
The brain must extract behaviorally relevant latent variables from the signals streamed by the sensory organs. Such latent variables are often encoded in the dynamics that generated the signal rather than in the specific realization of the waveform. Therefore, one problem faced by the brain is to segment time series based on underlying dynamics. We present two algorithms for performing this segmentation task that are biologically plausible, which we define as acting in a streaming setting and all learning rules being local. One algorithm is model-based and can be derived from an optimization problem involving a mixture of autoregressive processes. This algorithm relies on feedback in the form of a prediction error, and can also be used for forecasting future samples. In some brain regions, such as the retina, the feedback connections necessary to use the prediction error for learning are absent. For this case, we propose a second, model-free algorithm that uses a running estimate of the autocorrelation structure of the signal to perform the segmentation. We show that both algorithms do well when tasked with segmenting signals drawn from autoregressive models with piecewise-constant parameters. In particular, the segmentation accuracy is similar to that obtained from oracle-like methods in which the ground-truth parameters of the autoregressive models are known. We also test our methods on datasets generated by alternating snippets of voice recordings. We provide implementations of our algorithms at https://github.com/ttesileanu/bio-time-series.
[ { "created": "Sat, 24 Apr 2021 01:54:27 GMT", "version": "v1" }, { "created": "Wed, 29 Sep 2021 18:59:27 GMT", "version": "v2" }, { "created": "Tue, 5 Oct 2021 21:11:50 GMT", "version": "v3" } ]
2021-10-07
[ [ "Tesileanu", "Tiberiu", "" ], [ "Golkar", "Siavash", "" ], [ "Nasiri", "Samaneh", "" ], [ "Sengupta", "Anirvan M.", "" ], [ "Chklovskii", "Dmitri B.", "" ] ]
The brain must extract behaviorally relevant latent variables from the signals streamed by the sensory organs. Such latent variables are often encoded in the dynamics that generated the signal rather than in the specific realization of the waveform. Therefore, one problem faced by the brain is to segment time series based on underlying dynamics. We present two algorithms for performing this segmentation task that are biologically plausible, which we define as acting in a streaming setting and all learning rules being local. One algorithm is model-based and can be derived from an optimization problem involving a mixture of autoregressive processes. This algorithm relies on feedback in the form of a prediction error, and can also be used for forecasting future samples. In some brain regions, such as the retina, the feedback connections necessary to use the prediction error for learning are absent. For this case, we propose a second, model-free algorithm that uses a running estimate of the autocorrelation structure of the signal to perform the segmentation. We show that both algorithms do well when tasked with segmenting signals drawn from autoregressive models with piecewise-constant parameters. In particular, the segmentation accuracy is similar to that obtained from oracle-like methods in which the ground-truth parameters of the autoregressive models are known. We also test our methods on datasets generated by alternating snippets of voice recordings. We provide implementations of our algorithms at https://github.com/ttesileanu/bio-time-series.
1806.11557
Francis Aweda
O. A. Falaiye and F. O. Aweda
Mineralogical Characteristics of Harmattan Dust Across Jos (North Central) and Potiskum (North Eastern) Cities of Nigeria
18 pages, 7 figures
null
null
null
q-bio.OT astro-ph.EP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The trace metal and mineralogical composition of harmattan dust was investigated using samples collected at Jos (9 55'N, 8 55'E) and Potiskum (11 43'N, 11 02'E) in clean Petri dishes and plastic bowls of 10 cm diameter and analyzed by PIXE and AAS, with the aim of characterizing the mineralogical and elemental composition of the harmattan dust in Nigeria. Thirteen trace elements, Na, K, Ca, Mg, Fe, Cd, Zn, Mn, Cu, Si, Al, Ti, and Zr, were determined, and their concentrations were evaluated in different proportions. Minerals such as Quartz [SiO2], Corundum [Al2O3], Hematite [Fe2O3], Lime [CaO], Periclase [MgO], Rutile [TiO2], Zincite [MnO], Bunsenite [NiO], Cuprite [Cu2O], Zincite [ZnO], Baddeleyite [ZrO2], Litharge [PbO], Monazite [P2O5], Montrodydite [HgO] and Petzite [Au2O3] were also determined in different concentrations. The particle weights of the samples for the residential and commercial areas were calculated to be 18.95 g/m2 and 19.25 g/m2 at Jos and 24.24 g/m2 and 2515 g/m2 at Potiskum, respectively. The results show that the harmattan dust that blows across the two stations in Nigeria contains a wide range of elements and minerals.
[ { "created": "Tue, 19 Jun 2018 21:14:03 GMT", "version": "v1" } ]
2018-07-02
[ [ "Falaiye", "O. A.", "" ], [ "Aweda", "F. O", "" ] ]
The trace metal and mineralogical composition of harmattan dust was investigated using samples collected at Jos (9 55'N, 8 55'E) and Potiskum (11 43'N, 11 02'E) in clean Petri dishes and plastic bowls of 10 cm diameter and analyzed by PIXE and AAS, with the aim of characterizing the mineralogical and elemental composition of the harmattan dust in Nigeria. Thirteen trace elements, Na, K, Ca, Mg, Fe, Cd, Zn, Mn, Cu, Si, Al, Ti, and Zr, were determined, and their concentrations were evaluated in different proportions. Minerals such as Quartz [SiO2], Corundum [Al2O3], Hematite [Fe2O3], Lime [CaO], Periclase [MgO], Rutile [TiO2], Zincite [MnO], Bunsenite [NiO], Cuprite [Cu2O], Zincite [ZnO], Baddeleyite [ZrO2], Litharge [PbO], Monazite [P2O5], Montrodydite [HgO] and Petzite [Au2O3] were also determined in different concentrations. The particle weights of the samples for the residential and commercial areas were calculated to be 18.95 g/m2 and 19.25 g/m2 at Jos and 24.24 g/m2 and 2515 g/m2 at Potiskum, respectively. The results show that the harmattan dust that blows across the two stations in Nigeria contains a wide range of elements and minerals.
1908.01876
Patrick Holmes
Patrick D. Holmes, Shannon M. Danforth, Xiao-Yu Fu, Talia Y. Moore, and Ram Vasudevan
Characterizing the limits of human stability during motion: perturbative experiment validates a model-based approach for the Sit-to-Stand task
19 pages, 9 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Falls affect a growing number of the population each year. Clinical methods to identify those at greatest risk for falls usually evaluate individuals while they perform specific motions such as balancing or Sit-to-Stand (STS). Unfortunately these techniques have been shown to have poor predictive power and are unable to identify the magnitude, direction, and timing of perturbations that can cause an individual to lose stability during motion. To address this limitation, the recently proposed Stability Basin (SB) aims to characterize the set of perturbations that will cause an individual to fall under a specific motor control strategy. The SB is defined as the set of configurations that do not lead to failure for an individual under their chosen control strategy. This paper presents a novel method to compute the SB and the first experimental validation of the SB with an 11-person perturbative STS experiment involving forwards or backwards pulls from a motor-driven cable. The individually-constructed SBs are used to identify when a trial fails, i.e., when an individual must switch control strategies (indicated by a step or sit) to recover from a perturbation. The constructed SBs correctly predict the outcome of trials where failure was observed with over 90% accuracy, and correctly predict the outcome of successful trials with over 95% accuracy. The SB was compared to three other methods and was found to estimate the stable region with over 45% more accuracy in all cases. This study demonstrates that SBs offer a novel model-based approach for quantifying stability during motion, which could be used in physical therapy for individuals at risk of falling.
[ { "created": "Mon, 5 Aug 2019 21:54:52 GMT", "version": "v1" } ]
2019-08-07
[ [ "Holmes", "Patrick D.", "" ], [ "Danforth", "Shannon M.", "" ], [ "Fu", "Xiao-Yu", "" ], [ "Moore", "Talia Y.", "" ], [ "Vasudevan", "Ram", "" ] ]
Falls affect a growing number of the population each year. Clinical methods to identify those at greatest risk for falls usually evaluate individuals while they perform specific motions such as balancing or Sit-to-Stand (STS). Unfortunately these techniques have been shown to have poor predictive power and are unable to identify the magnitude, direction, and timing of perturbations that can cause an individual to lose stability during motion. To address this limitation, the recently proposed Stability Basin (SB) aims to characterize the set of perturbations that will cause an individual to fall under a specific motor control strategy. The SB is defined as the set of configurations that do not lead to failure for an individual under their chosen control strategy. This paper presents a novel method to compute the SB and the first experimental validation of the SB with an 11-person perturbative STS experiment involving forwards or backwards pulls from a motor-driven cable. The individually-constructed SBs are used to identify when a trial fails, i.e., when an individual must switch control strategies (indicated by a step or sit) to recover from a perturbation. The constructed SBs correctly predict the outcome of trials where failure was observed with over 90% accuracy, and correctly predict the outcome of successful trials with over 95% accuracy. The SB was compared to three other methods and was found to estimate the stable region with over 45% more accuracy in all cases. This study demonstrates that SBs offer a novel model-based approach for quantifying stability during motion, which could be used in physical therapy for individuals at risk of falling.
2007.02855
Gilberto Nakamura
Gilberto Nakamura, Basil Grammaticos, Christophe Deroulers, Mathilde Badoual
Effective epidemic model for COVID-19 using accumulated deaths
20 pages, 7 figures
null
10.1016/j.chaos.2021.110667
null
q-bio.PE physics.bio-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The severe acute respiratory syndrome COVID-19 has been at the center of the ongoing global health crisis in 2020. The high prevalence of mild cases facilitates sub-notification outside hospital environments, and the number of those who are or have been infected remains largely unknown, leading to poor estimates of the crude mortality rate of the disease. Here we use a simple model to describe the number of accumulated deaths caused by COVID-19. The close connection between the proposed model and an approximate solution of the SIR model provides a system of equations whose solutions are robust estimates of epidemiological parameters. We find that the crude mortality varies between $10^{-4}$ and $10^{-3}$ depending on the severity of the outbreak, which is lower than previous estimates obtained from laboratory-confirmed patients. We also estimate quantities of practical interest such as the basic reproduction number and the expected number of deaths in the asymptotic limit with and without social distancing measures and lockdowns, which allow us to measure the efficiency of these interventions.
[ { "created": "Mon, 6 Jul 2020 16:13:45 GMT", "version": "v1" } ]
2021-05-26
[ [ "Nakamura", "Gilberto", "" ], [ "Grammaticos", "Basil", "" ], [ "Deroulers", "Christophe", "" ], [ "Badoual", "Mathilde", "" ] ]
The severe acute respiratory syndrome COVID-19 has been at the center of the ongoing global health crisis in 2020. The high prevalence of mild cases facilitates sub-notification outside hospital environments, and the number of those who are or have been infected remains largely unknown, leading to poor estimates of the crude mortality rate of the disease. Here we use a simple model to describe the number of accumulated deaths caused by COVID-19. The close connection between the proposed model and an approximate solution of the SIR model provides a system of equations whose solutions are robust estimates of epidemiological parameters. We find that the crude mortality varies between $10^{-4}$ and $10^{-3}$ depending on the severity of the outbreak, which is lower than previous estimates obtained from laboratory-confirmed patients. We also estimate quantities of practical interest such as the basic reproduction number and the expected number of deaths in the asymptotic limit with and without social distancing measures and lockdowns, which allow us to measure the efficiency of these interventions.
2402.17252
Thomas-Otavio Peulen
Thomas-Otavio Peulen (1,2,3,4), Katherina Hemmen (4), Annemarie Greife (5), Benjamin M. Webb (1,2,3), Suren Felekyan (5), Andrej Sali (1,2,3), Claus A. M. Seidel (5), Hugo Sanabria (6), Katrin G. Heinze (4) ((1) Department of Bioengineering and Therapeutic Sciences, University of California, San Francisco, (2) Department of Pharmaceutical Chemistry, University of California, San Francisco, San Francisco, California, United States, (3) Quantitative Biosciences Institute (QBI), University of California, San Francisco, San Francisco, California, United States, (4) Rudolf Virchow Center for Integrative and Translational Bioimaging, University of W\"urzburg, W\"urzburg, Germany, (5) Chair of Molecular Physical Chemistry, Heinrich-Heine University, D\"usseldorf, Germany, (6) Department of Physics & Astronomy, Clemson University, Clemson, United States)
tttrlib: modular software for integrating fluorescence spectroscopy, imaging, and molecular modeling
null
null
null
null
q-bio.QM physics.bio-ph
http://creativecommons.org/licenses/by-sa/4.0/
We introduce software for reading, writing, and processing fluorescence single-molecule and image spectroscopy data and for developing analysis pipelines that unify various spectroscopic analysis tools. Our software can be used for processing multiple experiment types, e.g., time-resolved single-molecule (sm) spectroscopy, laser scanning microscopy, fluorescence correlation spectroscopy, and image correlation spectroscopy. The software is file-format agnostic and processes and outputs multiple time-resolved data formats. Our software thereby eliminates the need for data conversion and mitigates data archiving issues.
[ { "created": "Tue, 27 Feb 2024 06:53:18 GMT", "version": "v1" } ]
2024-02-28
[ [ "Peulen", "Thomas-Otavio", "" ], [ "Hemmen", "Katherina", "" ], [ "Greife", "Annemarie", "" ], [ "Webb", "Benjamin M.", "" ], [ "Felekyan", "Suren", "" ], [ "Sali", "Andrej", "" ], [ "Seidel", "Claus A. M.", "" ], [ "Sanabria", "Hugo", "" ], [ "Heinze", "Katrin G.", "" ] ]
We introduce software for reading, writing, and processing fluorescence single-molecule and image spectroscopy data and for developing analysis pipelines that unify various spectroscopic analysis tools. Our software can be used for processing multiple experiment types, e.g., time-resolved single-molecule (sm) spectroscopy, laser scanning microscopy, fluorescence correlation spectroscopy, and image correlation spectroscopy. The software is file-format agnostic and processes and outputs multiple time-resolved data formats. Our software thereby eliminates the need for data conversion and mitigates data archiving issues.
1912.08735
Can Firtina
Jeremie S. Kim, Can Firtina, Meryem Banu Cavlak, Damla Senol Cali, Mohammed Alser, Nastaran Hajinazar, Can Alkan, Onur Mutlu
AirLift: A Fast and Comprehensive Technique for Remapping Alignments between Reference Genomes
null
null
null
null
q-bio.GN cs.CE
http://creativecommons.org/licenses/by/4.0/
As genome sequencing tools and techniques improve, researchers are able to incrementally assemble more accurate reference genomes, which enable sensitivity in read mapping and downstream analysis such as variant calling. A more sensitive downstream analysis is critical for a better understanding of the genome donor (e.g., health characteristics). Therefore, read sets from sequenced samples should ideally be mapped to the latest available reference genome that represents the most relevant population. Unfortunately, the increasingly large amount of available genomic data makes it prohibitively expensive to fully re-map each read set to its respective reference genome every time the reference is updated. There are several tools that attempt to accelerate the process of updating a read data set from one reference to another (i.e., remapping). However, if a read maps to a region in the old reference that does not appear with a reasonable degree of similarity in the new reference, the read cannot be remapped. We find that, as a result of this drawback, a significant portion of annotations are lost when using state-of-the-art remapping tools. To address this major limitation in existing tools, we propose AirLift, a fast and comprehensive technique for remapping alignments from one genome to another. Compared to the state-of-the-art method for remapping reads (i.e., full mapping), AirLift reduces the overall execution time to remap read sets between two reference genome versions by up to 27.4x. We validate our remapping results with GATK and find that AirLift provides high accuracy in identifying ground truth SNP/INDEL variants.
[ { "created": "Wed, 18 Dec 2019 16:58:27 GMT", "version": "v1" }, { "created": "Wed, 17 Feb 2021 00:07:40 GMT", "version": "v2" }, { "created": "Fri, 12 Aug 2022 04:38:54 GMT", "version": "v3" }, { "created": "Mon, 21 Nov 2022 13:12:27 GMT", "version": "v4" } ]
2022-11-22
[ [ "Kim", "Jeremie S.", "" ], [ "Firtina", "Can", "" ], [ "Cavlak", "Meryem Banu", "" ], [ "Cali", "Damla Senol", "" ], [ "Alser", "Mohammed", "" ], [ "Hajinazar", "Nastaran", "" ], [ "Alkan", "Can", "" ], [ "Mutlu", "Onur", "" ] ]
As genome sequencing tools and techniques improve, researchers are able to incrementally assemble more accurate reference genomes, which enable sensitivity in read mapping and downstream analysis such as variant calling. A more sensitive downstream analysis is critical for a better understanding of the genome donor (e.g., health characteristics). Therefore, read sets from sequenced samples should ideally be mapped to the latest available reference genome that represents the most relevant population. Unfortunately, the increasingly large amount of available genomic data makes it prohibitively expensive to fully re-map each read set to its respective reference genome every time the reference is updated. There are several tools that attempt to accelerate the process of updating a read data set from one reference to another (i.e., remapping). However, if a read maps to a region in the old reference that does not appear with a reasonable degree of similarity in the new reference, the read cannot be remapped. We find that, as a result of this drawback, a significant portion of annotations are lost when using state-of-the-art remapping tools. To address this major limitation in existing tools, we propose AirLift, a fast and comprehensive technique for remapping alignments from one genome to another. Compared to the state-of-the-art method for remapping reads (i.e., full mapping), AirLift reduces the overall execution time to remap read sets between two reference genome versions by up to 27.4x. We validate our remapping results with GATK and find that AirLift provides high accuracy in identifying ground truth SNP/INDEL variants.
2004.14787
Biman Bagchi
Saumyak Mukherjee, Sayantan Mondal and Biman Bagchi
Dynamical Theory and Cellular Automata Simulations of Pandemic Spread: Understanding Different Temporal Patterns of Infections
11 pages, 11 figures, 3 tables
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Here we propose and implement a generalized mathematical model to find the time evolution of population in infectious diseases and apply the model to study the recent COVID-19 pandemic. Our model at the core is a non-local generalization of the widely used Kermack-McKendrick (KM) model where the susceptible (S) population evolves into two other categories, namely infectives (I) and removed (R). This is the well-known SIR model, in which we further divide both S and I into high- and low-risk categories. We first formulate a set of non-local dynamical equations for the time evolution of distinct population distributions under this categorization in an attempt to describe the general scenario of infectious disease progression. We then solve the non-linear coupled differential equations (i) numerically by the method of propagation, and (ii) by a more flexible and versatile cellular automata (CA) simulation which provides a coarse-grained description of the generalized non-local model. In order to account for multiple factors such as the role of spreaders before containment, we introduce a time-dependent rate which appears to be essential to explain the sudden spikes before the plateau observed in many cases (for example, China). We demonstrate how this generalized approach allows us to handle the effects of (i) time-dependence of the rate constants of spread, (ii) different population density, (iii) the age ratio, (iv) quarantine, (v) lockdown, and (vi) social distancing. Our study allows us to make certain predictions regarding the nature of spread with respect to several external parameters, treated as control variables. Analysis of the model clearly shows that due to the strong heterogeneity in the epidemic process originating from the distribution of initial infectives, the theory must be local in character but at the same time connect to a global perspective.
[ { "created": "Thu, 30 Apr 2020 14:02:49 GMT", "version": "v1" } ]
2020-05-01
[ [ "Mukherjee", "Saumyak", "" ], [ "Mondal", "Sayantan", "" ], [ "Bagchi", "Biman", "" ] ]
Here we propose and implement a generalized mathematical model to find the time evolution of population in infectious diseases and apply the model to study the recent COVID-19 pandemic. Our model at the core is a non-local generalization of the widely used Kermack-McKendrick (KM) model where the susceptible (S) population evolves into two other categories, namely infectives (I) and removed (R). This is the well-known SIR model, in which we further divide both S and I into high- and low-risk categories. We first formulate a set of non-local dynamical equations for the time evolution of distinct population distributions under this categorization in an attempt to describe the general scenario of infectious disease progression. We then solve the non-linear coupled differential equations (i) numerically by the method of propagation, and (ii) by a more flexible and versatile cellular automata (CA) simulation which provides a coarse-grained description of the generalized non-local model. In order to account for multiple factors such as the role of spreaders before containment, we introduce a time-dependent rate which appears to be essential to explain the sudden spikes before the plateau observed in many cases (for example, China). We demonstrate how this generalized approach allows us to handle the effects of (i) time-dependence of the rate constants of spread, (ii) different population density, (iii) the age ratio, (iv) quarantine, (v) lockdown, and (vi) social distancing. Our study allows us to make certain predictions regarding the nature of spread with respect to several external parameters, treated as control variables. Analysis of the model clearly shows that due to the strong heterogeneity in the epidemic process originating from the distribution of initial infectives, the theory must be local in character but at the same time connect to a global perspective.
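The S-to-I-to-R flow with a time-dependent rate constant described above can be sketched with a forward-Euler integration of the classical (local) SIR equations. The rates, the containment time, and the step size below are illustrative, and the non-local and risk-category structure of the paper's model is omitted:

```python
def simulate_sir(beta_of_t, gamma=0.1, s0=0.99, i0=0.01, dt=0.01, steps=20000):
    """Forward-Euler SIR integration with a time-dependent rate beta(t)."""
    s, i, r = s0, i0, 0.0
    for k in range(steps):
        new_infections = beta_of_t(k * dt) * s * i * dt
        recoveries = gamma * i * dt
        s = s - new_infections
        i = i + new_infections - recoveries
        r = r + recoveries
    return s, i, r

# A containment-style rate that drops after t = 50 (illustrative values):
s, i, r = simulate_sir(lambda t: 0.5 if t < 50 else 0.1)
assert abs(s + i + r - 1.0) < 1e-6   # the total population is conserved
assert r > 0.5                       # most of the population was infected
```

Passing a different `beta_of_t` is how one would probe the lockdown and social-distancing effects listed in the abstract.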
1706.05568
Mareike Fischer
Michelle Galla and Kristina Wicke and Mareike Fischer
On the statistical inconsistency of Maximum Parsimony for $k$-tuple-site data
null
null
null
null
q-bio.PE math.CO math.PR stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the main aims of phylogenetics is to reconstruct the "Tree of Life". In this respect, different methods and criteria are used to analyze DNA sequences of different species and to compare them in order to derive the evolutionary relationships of these species. Maximum Parsimony is one such criterion for tree reconstruction, and it is the one we will use in this paper. However, it is well-known that tree reconstruction methods can lead to wrong relationship estimates. One typical problem of Maximum Parsimony is long branch attraction, which can lead to statistical inconsistency. In this work, we will consider a blockwise approach to alignment analysis, namely so-called $k$-tuple analyses. For four taxa it has already been shown that $k$-tuple-based analyses are statistically inconsistent if and only if the standard character-based (site-based) analyses are statistically inconsistent. So, in the four-taxon case, going from individual sites to $k$-tuples does not lead to any improvement. However, real biological analyses often consider more than only four taxa. Therefore, we analyze the case of five taxa for $2$- and $3$-tuple-site data and consider alphabets with two and four elements. We show that the equivalence of single-site data and $k$-tuple-site data then no longer holds. Even so, we can show that Maximum Parsimony is statistically inconsistent for $k$-tuple-site data and five taxa.
[ { "created": "Sat, 17 Jun 2017 18:03:27 GMT", "version": "v1" }, { "created": "Thu, 14 Dec 2017 21:32:47 GMT", "version": "v2" }, { "created": "Thu, 4 Oct 2018 07:58:45 GMT", "version": "v3" } ]
2018-10-05
[ [ "Galla", "Michelle", "" ], [ "Wicke", "Kristina", "" ], [ "Fischer", "Mareike", "" ] ]
One of the main aims of phylogenetics is to reconstruct the "Tree of Life". In this respect, different methods and criteria are used to analyze DNA sequences of different species and to compare them in order to derive the evolutionary relationships of these species. Maximum Parsimony is one such criterion for tree reconstruction, and it is the one we will use in this paper. However, it is well-known that tree reconstruction methods can lead to wrong relationship estimates. One typical problem of Maximum Parsimony is long branch attraction, which can lead to statistical inconsistency. In this work, we will consider a blockwise approach to alignment analysis, namely so-called $k$-tuple analyses. For four taxa it has already been shown that $k$-tuple-based analyses are statistically inconsistent if and only if the standard character-based (site-based) analyses are statistically inconsistent. So, in the four-taxon case, going from individual sites to $k$-tuples does not lead to any improvement. However, real biological analyses often consider more than only four taxa. Therefore, we analyze the case of five taxa for $2$- and $3$-tuple-site data and consider alphabets with two and four elements. We show that the equivalence of single-site data and $k$-tuple-site data then no longer holds. Even so, we can show that Maximum Parsimony is statistically inconsistent for $k$-tuple-site data and five taxa.
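The Maximum Parsimony criterion discussed above scores a fixed tree with Fitch's small-parsimony algorithm. The five-taxon tree and binary character below are illustrative, not the paper's $k$-tuple construction:

```python
def fitch(tree, leaf_state):
    """Return (state set, cost) for a node; cost is the parsimony score."""
    if isinstance(tree, str):                      # leaf: a named taxon
        return {leaf_state[tree]}, 0
    (ls, lc), (rs, rc) = (fitch(child, leaf_state) for child in tree)
    if ls & rs:
        return ls & rs, lc + rc                    # states agree: no change
    return ls | rs, lc + rc + 1                    # disagreement: one change

# Five taxa, one binary character, tree ((a,b),((c,d),e)):
tree = (("a", "b"), (("c", "d"), "e"))
states = {"a": "0", "b": "0", "c": "1", "d": "1", "e": "0"}
_, score = fitch(tree, states)
assert score == 1    # a single 0->1 change explains this character
```

A $k$-tuple analysis would feed blocks of adjacent sites, rather than single characters, through the same scoring machinery.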
q-bio/0506018
Antonio T. Costa Jr
Monica F. B. Moreira, Marcio P. Dantas, A. T. Costa Jr
Spontaneous natural selection in a model for spatially distributed interacting populations
RevTeX4, 8 figures
null
null
null
q-bio.PE
null
We present an individual-based model for two interacting populations diffusing on lattices in which a strong natural selection develops spontaneously. The models combine traditional local predator-prey dynamics with random walks. Individuals' mobility is considered an inherited trait. Small variations upon inheritance, mimicking mutations, provide variability on which natural selection may act. Although the dynamic rules defining the models do not explicitly favor any mobility values, we found that the average mobility of both populations tends to be maximized in various situations. In some situations there is evidence of polymorphism, indicated by an adaptive landscape with many local maxima. We provide evidence relating selective pressure for high mobility with pattern formation.
[ { "created": "Wed, 15 Jun 2005 21:39:33 GMT", "version": "v1" } ]
2007-05-23
[ [ "Moreira", "Monica F. B.", "" ], [ "Dantas", "Marcio P.", "" ], [ "Costa", "A. T.", "Jr" ] ]
We present an individual-based model for two interacting populations diffusing on lattices in which a strong natural selection develops spontaneously. The models combine traditional local predator-prey dynamics with random walks. Individuals' mobility is considered an inherited trait. Small variations upon inheritance, mimicking mutations, provide variability on which natural selection may act. Although the dynamic rules defining the models do not explicitly favor any mobility values, we found that the average mobility of both populations tends to be maximized in various situations. In some situations there is evidence of polymorphism, indicated by an adaptive landscape with many local maxima. We provide evidence relating selective pressure for high mobility with pattern formation.
1902.01257
Vesna Vuksanovic
Vesna Vuksanovi\'c
Cortical thickness and functional networks modules by cortical lobes
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
This study aims to investigate the topological organization of cortical thickness and functional networks by cortical lobes. First, I demonstrated the modular organization of these networks by the frontal, temporal, parietal and occipital divisions of the cortical surface. Second, I mapped the overlapping edges of cortical thickness and functional networks for positive and negative correlations. Finally, I showed that overlapping positive edges map onto within-lobe cortical interactions and negative edges onto between-lobe interactions.
[ { "created": "Mon, 4 Feb 2019 15:39:10 GMT", "version": "v1" }, { "created": "Thu, 13 Jun 2019 09:16:27 GMT", "version": "v2" } ]
2019-06-14
[ [ "Vuksanović", "Vesna", "" ] ]
This study aims to investigate the topological organization of cortical thickness and functional networks by cortical lobes. First, I demonstrated the modular organization of these networks by the frontal, temporal, parietal and occipital divisions of the cortical surface. Second, I mapped the overlapping edges of cortical thickness and functional networks for positive and negative correlations. Finally, I showed that overlapping positive edges map onto within-lobe cortical interactions and negative edges onto between-lobe interactions.
2006.03420
Robert Worden
R.P. Worden
Is there a wave excitation in the Thalamus?
11 pages
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes that the thalamus is the site of a wave excitation, whose function is to represent the locations of things around the animal. Neurons couple to the wave as transmitters and receivers. The wave acts as an analogue representation of local space. This has benefits over a purely neural representation of space. Several lines of evidence support this hypothesis, both theoretical, concerning efficient Bayesian inference in the brain, and empirical, concerning the neuro-anatomy of the thalamus. Across all species, the most basic function of the brain is to coordinate movements in space. To represent positions in space only by neural firing rates would be complex and inefficient. It is possible that the brain represents 3D space in a direct and natural way, by a 3D wave.
[ { "created": "Wed, 20 May 2020 14:28:02 GMT", "version": "v1" } ]
2020-06-08
[ [ "Worden", "R. P.", "" ] ]
This paper proposes that the thalamus is the site of a wave excitation, whose function is to represent the locations of things around the animal. Neurons couple to the wave as transmitters and receivers. The wave acts as an analogue representation of local space. This has benefits over a purely neural representation of space. Several lines of evidence support this hypothesis, both theoretical, concerning efficient Bayesian inference in the brain, and empirical, concerning the neuro-anatomy of the thalamus. Across all species, the most basic function of the brain is to coordinate movements in space. To represent positions in space only by neural firing rates would be complex and inefficient. It is possible that the brain represents 3D space in a direct and natural way, by a 3D wave.
2404.09059
Rayanne Luke
Prajakta Bedekar and Rayanne A. Luke and Anthony J. Kearsley
Prevalence estimation methods for time-dependent antibody kinetics of infected and vaccinated individuals: a graph-theoretic approach
27 pages, 7 figures
null
null
null
q-bio.PE math.PR physics.bio-ph q-bio.QM stat.ME
http://creativecommons.org/licenses/by/4.0/
Immune events such as infection, vaccination, and a combination of the two result in distinct time-dependent antibody responses in affected individuals. These responses and event prevalences combine non-trivially to govern antibody levels sampled from a population. Time-dependence and disease prevalence pose considerable modeling challenges that need to be addressed to provide a rigorous mathematical underpinning of the underlying biology. We propose a time-inhomogeneous Markov chain model for event-to-event transitions coupled with a probabilistic framework for antibody kinetics and demonstrate its use in a setting in which individuals can be infected or vaccinated but not both. We prove the equivalency of this approach to the framework developed in our previous work. Synthetic data are used to demonstrate the modeling process and conduct prevalence estimation via transition probability matrices. This approach is ideal to model sequences of infections and vaccinations, or personal trajectories in a population, making it an important first step towards a mathematical characterization of reinfection, vaccination boosting, and cross-events of infection after vaccination or vice versa.
[ { "created": "Sat, 13 Apr 2024 18:43:59 GMT", "version": "v1" } ]
2024-04-16
[ [ "Bedekar", "Prajakta", "" ], [ "Luke", "Rayanne A.", "" ], [ "Kearsley", "Anthony J.", "" ] ]
Immune events such as infection, vaccination, and a combination of the two result in distinct time-dependent antibody responses in affected individuals. These responses and event prevalences combine non-trivially to govern antibody levels sampled from a population. Time-dependence and disease prevalence pose considerable modeling challenges that need to be addressed to provide a rigorous mathematical underpinning of the underlying biology. We propose a time-inhomogeneous Markov chain model for event-to-event transitions coupled with a probabilistic framework for antibody kinetics and demonstrate its use in a setting in which individuals can be infected or vaccinated but not both. We prove the equivalency of this approach to the framework developed in our previous work. Synthetic data are used to demonstrate the modeling process and conduct prevalence estimation via transition probability matrices. This approach is ideal to model sequences of infections and vaccinations, or personal trajectories in a population, making it an important first step towards a mathematical characterization of reinfection, vaccination boosting, and cross-events of infection after vaccination or vice versa.
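The core operation of the framework above, propagating a population distribution through time-dependent transition probability matrices, can be sketched as follows. The three states and the rate functions are hypothetical stand-ins, not the paper's model:

```python
def step(dist, P):
    """One step of d_{t+1} = d_t P for a row-stochastic matrix P."""
    return [sum(dist[i] * P[i][j] for i in range(len(dist)))
            for j in range(len(P[0]))]

def P_t(t):
    """Hypothetical rates: infection risk decays, vaccination uptake grows."""
    infect, vaccinate = 0.10 / (1 + t), min(0.05 * t, 0.5)
    return [[1.0 - infect - vaccinate, infect, vaccinate],
            [0.0, 1.0, 0.0],   # "infected" is absorbing in this toy chain
            [0.0, 0.0, 1.0]]   # "vaccinated" is absorbing in this toy chain

dist = [1.0, 0.0, 0.0]         # everyone starts naive
for t in range(10):
    dist = step(dist, P_t(t))
assert abs(sum(dist) - 1.0) < 1e-9   # probability is conserved
assert dist[2] > dist[1]             # vaccination dominates at later times
```

Allowing transitions out of the absorbing states is what a reinfection or boosting extension, as mentioned in the abstract, would require.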
1706.02327
Christian Negre
Christian F. A. Negre, Uriel N. Morzan, Heidi Hendrickson, Rhitankar Pal, George P. Lisi, J. Patrick Loria, Ivan Rivalta, Junming Ho, Victor S. Batista
Eigenvector Centrality Distribution for Characterization of Protein Allosteric Pathways
null
null
10.1073/pnas.1810452115
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining the principal energy pathways for allosteric communication in biomolecules, which occur as a result of thermal motion, remains challenging due to the intrinsic complexity of the systems involved. Graph theory provides an approach for making sense of such complexity, where allosteric proteins can be represented as networks of amino acids. In this work, we establish the eigenvector centrality metric in terms of the mutual information, as a means of elucidating the allosteric mechanism that regulates the enzymatic activity of proteins. Moreover, we propose a strategy to characterize the range of the physical interactions that underlie the allosteric process. In particular, the well-known enzyme imidazole glycerol phosphate synthase (IGPS) is utilized to test the proposed methodology. The eigenvector centrality measurement successfully describes the allosteric pathways of IGPS, and allows us to pinpoint key amino acids in terms of their relevance in the momentum transfer process. The resulting insight can be utilized for refining the control of IGPS activity, widening the scope for its engineering. Furthermore, we propose a new centrality metric quantifying the relevance of the surroundings of each residue. In addition, the proposed technique is validated against experimental solution NMR measurements, yielding fully consistent results. Overall, the methodologies proposed in the present work constitute a powerful and cost-effective strategy to gain insight into the allosteric mechanism of proteins.
[ { "created": "Wed, 7 Jun 2017 18:32:59 GMT", "version": "v1" }, { "created": "Mon, 25 Jun 2018 15:33:04 GMT", "version": "v2" } ]
2022-05-04
[ [ "Negre", "Christian F. A.", "" ], [ "Morzan", "Uriel N.", "" ], [ "Hendrickson", "Heidi", "" ], [ "Pal", "Rhitankar", "" ], [ "Lisi", "George P.", "" ], [ "Loria", "J. Patrick", "" ], [ "Rivalta", "Ivan", "" ], [ "Ho", "Junming", "" ], [ "Batista", "Victor S.", "" ] ]
Determining the principal energy pathways for allosteric communication in biomolecules, which occur as a result of thermal motion, remains challenging due to the intrinsic complexity of the systems involved. Graph theory provides an approach for making sense of such complexity, where allosteric proteins can be represented as networks of amino acids. In this work, we establish the eigenvector centrality metric in terms of the mutual information, as a means of elucidating the allosteric mechanism that regulates the enzymatic activity of proteins. Moreover, we propose a strategy to characterize the range of the physical interactions that underlie the allosteric process. In particular, the well-known enzyme imidazole glycerol phosphate synthase (IGPS) is utilized to test the proposed methodology. The eigenvector centrality measurement successfully describes the allosteric pathways of IGPS, and allows us to pinpoint key amino acids in terms of their relevance in the momentum transfer process. The resulting insight can be utilized for refining the control of IGPS activity, widening the scope for its engineering. Furthermore, we propose a new centrality metric quantifying the relevance of the surroundings of each residue. In addition, the proposed technique is validated against experimental solution NMR measurements, yielding fully consistent results. Overall, the methodologies proposed in the present work constitute a powerful and cost-effective strategy to gain insight into the allosteric mechanism of proteins.
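Eigenvector centrality, the metric used above, can be computed by power iteration on an adjacency matrix. The paper weights edges by mutual information between residues; the toy 0/1 matrix below merely stands in:

```python
def eigenvector_centrality(A, iters=200):
    """Power iteration for the principal eigenvector of adjacency matrix A."""
    n = len(A)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        v = [x / total for x in w]
    return v

# Toy residue network: residue 0 contacts everyone, residues 1 and 2 also
# touch each other, residue 3 is peripheral.
A = [[0, 1, 1, 1],
     [1, 0, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0]]
c = eigenvector_centrality(A)
assert c[0] == max(c)   # the hub residue is the most central
assert c[3] == min(c)   # the peripheral residue is the least central
```

With mutual-information weights in place of 0/1 entries, the same iteration ranks residues by their dynamical coupling, which is the sense in which the paper pinpoints key amino acids.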
2002.10372
Mauro Mobilia
Ami Taitelbaum, Robert West, Michael Assaf, Mauro Mobilia
Population Dynamics in a Changing Environment: Random versus Periodic Switching
22 pages, 7 figures: main text (6 pages, 3 figures) followed by Supplementary Material (16 pages, 4 figures). Published in Physical Review Letters. Additional supporting resources available at https://figshare.com/articles/Supplementary_Material/12613370
Phys. Rev. Lett. 125, 048105 (2020)
10.1103/PhysRevLett.125.048105
null
q-bio.PE cond-mat.stat-mech nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Environmental changes greatly influence the evolution of populations. Here, we study the dynamics of a population of two strains, one growing slightly faster than the other, competing for resources in a time-varying binary environment modeled by a carrying capacity switching either randomly or periodically between states of abundance and scarcity. The population dynamics is characterized by demographic noise (birth and death events) coupled to a varying environment. We elucidate the similarities and differences of the evolution subject to a stochastically- and periodically-varying environment. Importantly, the population size distribution is generally found to be broader under intermediate and fast random switching than under periodic variations, which results in markedly different asymptotic behaviors between the fixation probability of random and periodic switching. We also determine the detailed conditions under which the fixation probability of the slow strain is maximal.
[ { "created": "Mon, 24 Feb 2020 16:57:17 GMT", "version": "v1" }, { "created": "Thu, 9 Jul 2020 00:22:13 GMT", "version": "v2" }, { "created": "Mon, 27 Jul 2020 14:26:40 GMT", "version": "v3" } ]
2020-07-28
[ [ "Taitelbaum", "Ami", "" ], [ "West", "Robert", "" ], [ "Assaf", "Michael", "" ], [ "Mobilia", "Mauro", "" ] ]
Environmental changes greatly influence the evolution of populations. Here, we study the dynamics of a population of two strains, one growing slightly faster than the other, competing for resources in a time-varying binary environment modeled by a carrying capacity switching either randomly or periodically between states of abundance and scarcity. The population dynamics is characterized by demographic noise (birth and death events) coupled to a varying environment. We elucidate the similarities and differences of the evolution subject to a stochastically- and periodically-varying environment. Importantly, the population size distribution is generally found to be broader under intermediate and fast random switching than under periodic variations, which results in markedly different asymptotic behaviors between the fixation probability of random and periodic switching. We also determine the detailed conditions under which the fixation probability of the slow strain is maximal.
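The binary switching environment described above can be sketched as logistic growth under a carrying capacity that flips at random between scarcity and abundance. Demographic noise and the two-strain competition are omitted, and all parameter values are illustrative:

```python
import random

def simulate(switch_rate=0.1, capacities=(50.0, 500.0), r=1.0,
             dt=0.01, T=100.0, seed=1):
    """Logistic growth; K flips between scarcity and abundance at random."""
    rng = random.Random(seed)
    n, k = 100.0, capacities[1]
    for _ in range(int(T / dt)):
        if rng.random() < switch_rate * dt:       # environmental switch
            k = capacities[0] if k == capacities[1] else capacities[1]
        n += r * n * (1.0 - n / k) * dt           # logistic step toward K
    return n

n = simulate()
assert 49.0 < n < 501.0   # the population tracks between the two capacities
```

Replacing the random switch with a deterministic flip every `1 / switch_rate` time units gives the periodic variant the paper contrasts against.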
1205.6867
Frederick Matsen IV
Frederick A. Matsen, Aaron Gallagher, Connor McCoy
Minimizing the average distance to a closest leaf in a phylogenetic tree
Please contact us with any comments or questions!
null
null
null
q-bio.PE cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When performing an analysis on a collection of molecular sequences, it can be convenient to reduce the number of sequences under consideration while maintaining some characteristic of a larger collection of sequences. For example, one may wish to select a subset of high-quality sequences that represent the diversity of a larger collection of sequences. One may also wish to specialize a large database of characterized "reference sequences" to a smaller subset that is as close as possible on average to a collection of "query sequences" of interest. Such a representative subset can be useful whenever one wishes to find a set of reference sequences that is appropriate to use for comparative analysis of environmentally-derived sequences, such as for selecting "reference tree" sequences for phylogenetic placement of metagenomic reads. In this paper we formalize these problems in terms of the minimization of the Average Distance to the Closest Leaf (ADCL) and investigate algorithms to perform the relevant minimization. We show that the greedy algorithm is not effective, show that a variant of the Partitioning Among Medoids (PAM) heuristic gets stuck in local minima, and develop an exact dynamic programming approach. Using this exact program we note that the performance of PAM appears to be good for simulated trees, and is faster than the exact algorithm for small trees. On the other hand, the exact program gives solutions for all numbers of leaves less than or equal to the given desired number of leaves, while PAM only gives a solution for the pre-specified number of leaves. Via application to real data, we show that the ADCL criterion chooses chimeric sequences less often than random subsets, while the maximization of phylogenetic diversity chooses them more often than random. These algorithms have been implemented in publicly available software.
[ { "created": "Thu, 31 May 2012 01:41:37 GMT", "version": "v1" }, { "created": "Fri, 31 Aug 2012 18:05:58 GMT", "version": "v2" } ]
2012-09-03
[ [ "Matsen", "Frederick A.", "" ], [ "Gallagher", "Aaron", "" ], [ "McCoy", "Connor", "" ] ]
When performing an analysis on a collection of molecular sequences, it can be convenient to reduce the number of sequences under consideration while maintaining some characteristic of a larger collection of sequences. For example, one may wish to select a subset of high-quality sequences that represent the diversity of a larger collection of sequences. One may also wish to specialize a large database of characterized "reference sequences" to a smaller subset that is as close as possible on average to a collection of "query sequences" of interest. Such a representative subset can be useful whenever one wishes to find a set of reference sequences that is appropriate to use for comparative analysis of environmentally-derived sequences, such as for selecting "reference tree" sequences for phylogenetic placement of metagenomic reads. In this paper we formalize these problems in terms of the minimization of the Average Distance to the Closest Leaf (ADCL) and investigate algorithms to perform the relevant minimization. We show that the greedy algorithm is not effective, show that a variant of the Partitioning Among Medoids (PAM) heuristic gets stuck in local minima, and develop an exact dynamic programming approach. Using this exact program we note that the performance of PAM appears to be good for simulated trees, and is faster than the exact algorithm for small trees. On the other hand, the exact program gives solutions for all numbers of leaves less than or equal to the given desired number of leaves, while PAM only gives a solution for the pre-specified number of leaves. Via application to real data, we show that the ADCL criterion chooses chimeric sequences less often than random subsets, while the maximization of phylogenetic diversity chooses them more often than random. These algorithms have been implemented in publicly available software.
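The ADCL objective defined above can be computed directly from pairwise leaf distances, and for tiny trees the minimizing subset can be found exhaustively; the paper's PAM and dynamic-programming algorithms exist precisely because this brute force does not scale. The distance matrix below is illustrative:

```python
from itertools import combinations

def adcl(D, subset, leaves):
    """Average over all leaves of the distance to the closest chosen leaf."""
    return sum(min(D[leaf][s] for s in subset) for leaf in leaves) / len(leaves)

# Hypothetical path distances between four leaves of a balanced tree:
leaves = ["a", "b", "c", "d"]
D = {"a": {"a": 0, "b": 2, "c": 6, "d": 6},
     "b": {"a": 2, "b": 0, "c": 6, "d": 6},
     "c": {"a": 6, "b": 6, "c": 0, "d": 2},
     "d": {"a": 6, "b": 6, "c": 2, "d": 0}}
best = min(combinations(leaves, 2), key=lambda s: adcl(D, s, leaves))
# Any pair spanning both cherries achieves the optimal ADCL of 1:
assert adcl(D, best, leaves) == 1.0
assert set(best) in [{"a", "c"}, {"a", "d"}, {"b", "c"}, {"b", "d"}]
```

The exhaustive search here illustrates why the greedy heuristic can fail: picking the two closest or most central leaves first can land in a single cherry and miss the diversity-spanning optimum.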
2404.10807
Joseph Marsh
Benjamin J. Livesey, Mihaly Badonyi, Mafalda Dias, Jonathan Frazer, Sushant Kumar, Kresten Lindorff-Larsen, David M. McCandlish, Rose Orenbuch, Courtney A. Shearer, Lara Muffley, Julia Foreman, Andrew M. Glazer, Ben Lehner, Debora S. Marks, Frederick P. Roth, Alan F. Rubin, Lea M. Starita and Joseph A. Marsh
Guidelines for releasing a variant effect predictor
14 pages, 1 figure
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Computational methods for assessing the likely impacts of mutations, known as variant effect predictors (VEPs), are widely used in the assessment and interpretation of human genetic variation, as well as in other applications like protein engineering. Many different VEPs have been released to date, and there is tremendous variability in their underlying algorithms and outputs, and in the ways in which the methodologies and predictions are shared. This leads to considerable challenges for end users in knowing which VEPs to use and how to use them. Here, to address these issues, we provide guidelines and recommendations for the release of novel VEPs. Emphasising open-source availability, transparent methodologies, clear variant effect score interpretations, standardised scales, accessible predictions, and rigorous training data disclosure, we aim to improve the usability and interpretability of VEPs, and promote their integration into analysis and evaluation pipelines. We also provide a large, categorised list of currently available VEPs, aiming to facilitate the discovery and encourage the usage of novel methods within the scientific community.
[ { "created": "Tue, 16 Apr 2024 13:37:07 GMT", "version": "v1" } ]
2024-04-18
[ [ "Livesey", "Benjamin J.", "" ], [ "Badonyi", "Mihaly", "" ], [ "Dias", "Mafalda", "" ], [ "Frazer", "Jonathan", "" ], [ "Kumar", "Sushant", "" ], [ "Lindorff-Larsen", "Kresten", "" ], [ "McCandlish", "David M.", "" ], [ "Orenbuch", "Rose", "" ], [ "Shearer", "Courtney A.", "" ], [ "Muffley", "Lara", "" ], [ "Foreman", "Julia", "" ], [ "Glazer", "Andrew M.", "" ], [ "Lehner", "Ben", "" ], [ "Marks", "Debora S.", "" ], [ "Roth", "Frederick P.", "" ], [ "Rubin", "Alan F.", "" ], [ "Starita", "Lea M.", "" ], [ "Marsh", "Joseph A.", "" ] ]
Computational methods for assessing the likely impacts of mutations, known as variant effect predictors (VEPs), are widely used in the assessment and interpretation of human genetic variation, as well as in other applications like protein engineering. Many different VEPs have been released to date, and there is tremendous variability in their underlying algorithms and outputs, and in the ways in which the methodologies and predictions are shared. This leads to considerable challenges for end users in knowing which VEPs to use and how to use them. Here, to address these issues, we provide guidelines and recommendations for the release of novel VEPs. Emphasising open-source availability, transparent methodologies, clear variant effect score interpretations, standardised scales, accessible predictions, and rigorous training data disclosure, we aim to improve the usability and interpretability of VEPs, and promote their integration into analysis and evaluation pipelines. We also provide a large, categorised list of currently available VEPs, aiming to facilitate the discovery and encourage the usage of novel methods within the scientific community.
1503.00529
Sergei Maslov
Sergei Maslov and Kim Sneppen
Diversity waves in collapse-driven population dynamics
15 pages (including SI), 6 figures + 7 supplementary figures
null
10.1371/journal.pcbi.1004440
null
q-bio.PE nlin.AO physics.soc-ph q-fin.EC q-fin.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Populations of species in ecosystems are often constrained by the availability of resources within their environment. In effect this means that the growth of one population needs to be balanced by a comparable reduction in the populations of others. In neutral models of biodiversity all populations are assumed to change incrementally due to stochastic births and deaths of individuals. Here we propose and model another redistribution mechanism driven by abrupt and severe collapses of the entire population of a single species, freeing up resources for the remaining ones. This mechanism may be relevant, e.g., for communities of bacteria, with strain-specific collapses caused by invading bacteriophages, or for other ecosystems where infectious diseases play an important role. The emergent dynamics of our system is cyclic "diversity waves" triggered by collapses of globally dominating populations. The population diversity peaks at the beginning of each wave and exponentially decreases afterwards. Species abundances are characterized by a bimodal time-aggregated distribution, with the lower peak formed by populations of recently collapsed or newly introduced species, while the upper peak is formed by species that have not yet collapsed in the current wave. In most waves both upper and lower peaks are composed of several smaller peaks. This self-organized hierarchical peak structure has a long-term memory transmitted across several waves. It gives rise to a scale-free tail of the time-aggregated population distribution with a universal exponent of 1.7. We show that diversity wave dynamics is robust with respect to variations in the rules of our model, such as diffusion between multiple environments, species-specific growth and extinction rates, and bet-hedging strategies.
[ { "created": "Mon, 2 Mar 2015 14:03:39 GMT", "version": "v1" }, { "created": "Tue, 14 Jul 2015 18:39:20 GMT", "version": "v2" } ]
2016-02-17
[ [ "Maslov", "Sergei", "" ], [ "Sneppen", "Kim", "" ] ]
Populations of species in ecosystems are often constrained by the availability of resources within their environment. In effect, this means that growth of one population needs to be balanced by a comparable reduction in the populations of others. In neutral models of biodiversity, all populations are assumed to change incrementally due to stochastic births and deaths of individuals. Here we propose and model another redistribution mechanism, driven by abrupt and severe collapses of the entire population of a single species, freeing up resources for the remaining ones. This mechanism may be relevant, e.g., for communities of bacteria, with strain-specific collapses caused, e.g., by invading bacteriophages, or for other ecosystems where infectious diseases play an important role. The emergent dynamics of our system is cyclic "diversity waves" triggered by collapses of globally dominating populations. The population diversity peaks at the beginning of each wave and decreases exponentially afterwards. Species abundances are characterized by a bimodal time-aggregated distribution, with the lower peak formed by populations of recently collapsed or newly introduced species, while the upper peak is formed by species that have not yet collapsed in the current wave. In most waves, both the upper and lower peaks are composed of several smaller peaks. This self-organized hierarchical peak structure has a long-term memory transmitted across several waves. It gives rise to a scale-free tail of the time-aggregated population distribution with a universal exponent of 1.7. We show that diversity wave dynamics is robust with respect to variations in the rules of our model, such as diffusion between multiple environments, species-specific growth and extinction rates, and bet-hedging strategies.
2004.07440
Masaki Watabe
Masaki Watabe, Hideaki Yoshimura, Satya N. V. Arjunan, Kazunari Kaizu, and Koichi Takahashi
Signaling activations through G-protein-coupled-receptor aggregations
6 pages, 4 figures
Phys. Rev. E 102, 032413 (2020)
10.1103/PhysRevE.102.032413
null
q-bio.MN q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Eukaryotic cells transmit extracellular signal information to cellular interiors through the formation of a ternary complex made up of a ligand (or agonist), G-protein, and G-protein coupled receptor (GPCR). Previously formalized theories of ternary complex formation have mainly assumed that observable states of receptors can only take the form of monomers. Here, we propose a multiary complex model of GPCR signaling activations via the vector representation of various unobserved aggregated receptor states. Our results from model simulations imply that receptor aggregation processes can govern cooperative effects in a regime inaccessible by previous theories. In particular, we show how the affinity of ligand-receptor binding can be largely varied by various oligomer formations in the low concentration range of G-protein stimulus.
[ { "created": "Thu, 16 Apr 2020 03:51:21 GMT", "version": "v1" }, { "created": "Tue, 22 Sep 2020 23:10:22 GMT", "version": "v2" } ]
2020-09-24
[ [ "Watabe", "Masaki", "" ], [ "Yoshimura", "Hideaki", "" ], [ "Arjunan", "Satya N. V.", "" ], [ "Kaizu", "Kazunari", "" ], [ "Takahashi", "Koichi", "" ] ]
Eukaryotic cells transmit extracellular signal information to cellular interiors through the formation of a ternary complex made up of a ligand (or agonist), G-protein, and G-protein coupled receptor (GPCR). Previously formalized theories of ternary complex formation have mainly assumed that observable states of receptors can only take the form of monomers. Here, we propose a multiary complex model of GPCR signaling activations via the vector representation of various unobserved aggregated receptor states. Our results from model simulations imply that receptor aggregation processes can govern cooperative effects in a regime inaccessible by previous theories. In particular, we show how the affinity of ligand-receptor binding can be largely varied by various oligomer formations in the low concentration range of G-protein stimulus.
q-bio/0611024
Iraziet Charret
I. C. Charret and M. V. Carneiro
Spontaneous emergence of spatial patterns in a predator-prey model
17 pages and 15 figures
PRE v.76. e061902, 2007
10.1103/PhysRevE.76.061902
null
q-bio.PE
null
We present studies of an individual-based model of three interacting populations whose individuals are mobile on a 2D lattice. We focus on pattern formation in the spatial distributions of the populations. Also relevant is the relationship between pattern formation and features of the populations' time series. Our model displays travelling-wave solutions, clustering, and uniform distributions, all related to the parameter values. We also observed that the regeneration rate, the parameter associated with the primary level of the trophic chain (the plants), regulated the presence of predators as well as the type of spatial configuration.
[ { "created": "Tue, 7 Nov 2006 13:37:35 GMT", "version": "v1" } ]
2007-12-18
[ [ "Charret", "I. C.", "" ], [ "Carneiro", "M. V.", "" ] ]
We present studies of an individual-based model of three interacting populations whose individuals are mobile on a 2D lattice. We focus on pattern formation in the spatial distributions of the populations. Also relevant is the relationship between pattern formation and features of the populations' time series. Our model displays travelling-wave solutions, clustering, and uniform distributions, all related to the parameter values. We also observed that the regeneration rate, the parameter associated with the primary level of the trophic chain (the plants), regulated the presence of predators as well as the type of spatial configuration.
2003.00009
Carlo Vittorio Cannistraci
Yan Ge, Philipp Rosendahl, Claudio Dur\'an, Nicole T\"opfner, Sara Ciucci, Jochen Guck, and Carlo Vittorio Cannistraci
Cell Mechanics Based Computational Classification of Red Blood Cells Via Machine Intelligence Applied to Morpho-Rheological Markers
13 pages, 3 figures, 4 tables
IEEE/ACM Trans. Comput. Biol. Bioinform (2019)
10.1109/TCBB.2019.2945762
null
q-bio.QM cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Despite fluorescent cell-labelling being widely employed in biomedical studies, some of its drawbacks are inevitable, with unsuitable fluorescent probes or probes inducing a functional change being the main limitations. Consequently, the demand for and development of label-free methodologies to classify cells is strong, and its impact on precision medicine is relevant. Towards this end, high-throughput techniques for cell mechanical phenotyping have been proposed to obtain a multidimensional biophysical characterization of single cells. With this motivation, our goal here is to investigate the extent to which an unsupervised machine learning methodology, which is applied exclusively on morpho-rheological markers obtained by real-time deformability and fluorescence cytometry (RT-FDC), can address the difficult task of providing label-free discrimination of reticulocytes from mature red blood cells. We focused on this problem, since the characterization of reticulocytes (their percentage and cellular features) in the blood is vital in multiple human disease conditions, especially bone-marrow disorders such as anemia and leukemia. Our approach reports promising label-free results in the classification of reticulocytes from mature red blood cells, and it represents a step forward in the development of high-throughput morpho-rheological-based methodologies for the computational categorization of single cells. Moreover, our methodology can serve not only as an alternative but also as a complementary method to integrate with existing cell-labelling techniques.
[ { "created": "Mon, 2 Mar 2020 15:11:46 GMT", "version": "v1" } ]
2020-03-03
[ [ "Ge", "Yan", "" ], [ "Rosendahl", "Philipp", "" ], [ "Durán", "Claudio", "" ], [ "Töpfner", "Nicole", "" ], [ "Ciucci", "Sara", "" ], [ "Guck", "Jochen", "" ], [ "Cannistraci", "Carlo Vittorio", "" ] ]
Despite fluorescent cell-labelling being widely employed in biomedical studies, some of its drawbacks are inevitable, with unsuitable fluorescent probes or probes inducing a functional change being the main limitations. Consequently, the demand for and development of label-free methodologies to classify cells is strong, and its impact on precision medicine is relevant. Towards this end, high-throughput techniques for cell mechanical phenotyping have been proposed to obtain a multidimensional biophysical characterization of single cells. With this motivation, our goal here is to investigate the extent to which an unsupervised machine learning methodology, which is applied exclusively on morpho-rheological markers obtained by real-time deformability and fluorescence cytometry (RT-FDC), can address the difficult task of providing label-free discrimination of reticulocytes from mature red blood cells. We focused on this problem, since the characterization of reticulocytes (their percentage and cellular features) in the blood is vital in multiple human disease conditions, especially bone-marrow disorders such as anemia and leukemia. Our approach reports promising label-free results in the classification of reticulocytes from mature red blood cells, and it represents a step forward in the development of high-throughput morpho-rheological-based methodologies for the computational categorization of single cells. Moreover, our methodology can serve not only as an alternative but also as a complementary method to integrate with existing cell-labelling techniques.
2401.06294
Clement Soubrier
Cl\'ement Soubrier, Eric Foxall, Luca Ciandrini, Khanh Dao Duc
Optimal control of ribosome population for gene expression under periodic nutrient intake
16 pages (plus 13 pages of appendices). 4 figures (plus 1 figure and 1 table in appendices). Submitted to the Journal of The Royal Society Interface
J. R. Soc. Interface 21: 20230652
10.1098/rsif.2023.0652
null
q-bio.SC
http://creativecommons.org/licenses/by/4.0/
Translation of proteins is a fundamental part of gene expression that is mediated by ribosomes. As ribosomes significantly contribute to both cellular mass and energy consumption, achieving efficient management of the ribosome population is also crucial to metabolism and growth. Inspired by biological evidence for nutrient-dependent mechanisms that control both ribosome active degradation and genesis, we introduce a dynamical model of protein production that includes the dynamics of resources and control over the ribosome population. Under the hypothesis that active degradation and biogenesis are optimal for maximizing and maintaining protein production, we aim to qualitatively reproduce empirical observations of the ribosome population dynamics. Upon formulating the associated optimization problem, we first analytically study the stability and global behaviour of solutions under constant resource input, and characterize the extent of oscillations and the convergence rate to a global equilibrium. We further use these results to simplify and solve the problem under a quasi-static approximation. Using biophysical parameter values, we find that optimal control solutions lead to both control mechanisms and the ribosome population switching between periods of feeding and fasting, suggesting that the intense regulation of the ribosome population observed in experiments makes it possible to maximize and maintain protein production. Finally, we identify a range of control values over which such a regime can be observed, depending on the intensity of fasting.
[ { "created": "Thu, 11 Jan 2024 23:27:13 GMT", "version": "v1" } ]
2024-04-18
[ [ "Soubrier", "Clément", "" ], [ "Foxall", "Eric", "" ], [ "Ciandrini", "Luca", "" ], [ "Duc", "Khanh Dao", "" ] ]
Translation of proteins is a fundamental part of gene expression that is mediated by ribosomes. As ribosomes significantly contribute to both cellular mass and energy consumption, achieving efficient management of the ribosome population is also crucial to metabolism and growth. Inspired by biological evidence for nutrient-dependent mechanisms that control both ribosome active degradation and genesis, we introduce a dynamical model of protein production that includes the dynamics of resources and control over the ribosome population. Under the hypothesis that active degradation and biogenesis are optimal for maximizing and maintaining protein production, we aim to qualitatively reproduce empirical observations of the ribosome population dynamics. Upon formulating the associated optimization problem, we first analytically study the stability and global behaviour of solutions under constant resource input, and characterize the extent of oscillations and the convergence rate to a global equilibrium. We further use these results to simplify and solve the problem under a quasi-static approximation. Using biophysical parameter values, we find that optimal control solutions lead to both control mechanisms and the ribosome population switching between periods of feeding and fasting, suggesting that the intense regulation of the ribosome population observed in experiments makes it possible to maximize and maintain protein production. Finally, we identify a range of control values over which such a regime can be observed, depending on the intensity of fasting.
2311.17066
Ke Yuhe
Yuhe Ke, Matilda Swee Sun Tang, Celestine Jia Ling Loh, Hairil Rizal Abdullah, Nicholas Brian Shannon
Cluster trajectory of SOFA score in predicting mortality in sepsis
26 pages, 4 figures, 2 tables
null
null
null
q-bio.QM cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Objective: Sepsis is a life-threatening condition. The Sequential Organ Failure Assessment (SOFA) score is commonly used to assess organ dysfunction and predict ICU mortality, but it is taken as a static measurement and fails to capture dynamic changes. This study aims to investigate the relationship between dynamic changes in SOFA scores over the first 72 hours of ICU admission and patient outcomes. Design, setting, and participants: 3,253 patients in the Medical Information Mart for Intensive Care IV database who met the sepsis-3 criteria and were admitted from the emergency department with at least 72 hours of ICU admission and full-active resuscitation status were analysed. Group-based trajectory modelling with dynamic time warping and k-means clustering identified distinct trajectory patterns in dynamic SOFA scores. They were subsequently compared using Python. Main outcome measures: Outcomes including hospital and ICU mortality, length of stay in hospital and ICU, and readmission during hospital stay were collected. Discharge time from ICU to wards was recorded, with cut-offs taken at 7 and 14 days. Results: Four clusters were identified: A (consistently low SOFA scores), B (rapid increase followed by a decline in SOFA scores), C (higher baseline scores with gradual improvement), and D (persistently elevated scores). Cluster D had the longest ICU and hospital stays and the highest ICU and hospital mortality. Discharge rates from ICU were similar for Clusters A and B, while Cluster C had initially comparable rates but a slower transition to the ward. Conclusion: Monitoring dynamic changes in SOFA score is valuable for assessing sepsis severity and treatment responsiveness.
[ { "created": "Thu, 23 Nov 2023 12:29:00 GMT", "version": "v1" } ]
2023-11-30
[ [ "Ke", "Yuhe", "" ], [ "Tang", "Matilda Swee Sun", "" ], [ "Loh", "Celestine Jia Ling", "" ], [ "Abdullah", "Hairil Rizal", "" ], [ "Shannon", "Nicholas Brian", "" ] ]
Objective: Sepsis is a life-threatening condition. The Sequential Organ Failure Assessment (SOFA) score is commonly used to assess organ dysfunction and predict ICU mortality, but it is taken as a static measurement and fails to capture dynamic changes. This study aims to investigate the relationship between dynamic changes in SOFA scores over the first 72 hours of ICU admission and patient outcomes. Design, setting, and participants: 3,253 patients in the Medical Information Mart for Intensive Care IV database who met the sepsis-3 criteria and were admitted from the emergency department with at least 72 hours of ICU admission and full-active resuscitation status were analysed. Group-based trajectory modelling with dynamic time warping and k-means clustering identified distinct trajectory patterns in dynamic SOFA scores. They were subsequently compared using Python. Main outcome measures: Outcomes including hospital and ICU mortality, length of stay in hospital and ICU, and readmission during hospital stay were collected. Discharge time from ICU to wards was recorded, with cut-offs taken at 7 and 14 days. Results: Four clusters were identified: A (consistently low SOFA scores), B (rapid increase followed by a decline in SOFA scores), C (higher baseline scores with gradual improvement), and D (persistently elevated scores). Cluster D had the longest ICU and hospital stays and the highest ICU and hospital mortality. Discharge rates from ICU were similar for Clusters A and B, while Cluster C had initially comparable rates but a slower transition to the ward. Conclusion: Monitoring dynamic changes in SOFA score is valuable for assessing sepsis severity and treatment responsiveness.
1401.1755
Tobias Reichenbach
Alexander Dobrinevski, Mikko Alava, Tobias Reichenbach, Erwin Frey
Mobility-Dependent Selection of Competing Strategy Associations
9 pages, six figures; accepted in Physical Review E
Phys. Rev. E 89, 012721 (2014)
10.1103/PhysRevE.89.012721
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Standard models of population dynamics focus on the interaction, survival, and extinction of the competing species individually. Real ecological systems, however, are characterized by an abundance of species (or strategies, in the terminology of evolutionary game theory) that form intricate, complex interaction networks. The description of the ensuing dynamics may be aided by studying associations of certain strategies rather than individual ones. Here we show how such a higher-level description can yield fruitful insight. Motivated by different strains of colicinogenic Escherichia coli bacteria, we investigate a four-strategy system which contains a three-strategy cycle and a neutral alliance of two strategies. We find that the stochastic, spatial model exhibits a mobility-dependent selection of either the three-strategy cycle or of the neutral pair. We analyze this intriguing phenomenon numerically and analytically.
[ { "created": "Wed, 8 Jan 2014 17:11:20 GMT", "version": "v1" } ]
2014-05-27
[ [ "Dobrinevski", "Alexander", "" ], [ "Alava", "Mikko", "" ], [ "Reichenbach", "Tobias", "" ], [ "Frey", "Erwin", "" ] ]
Standard models of population dynamics focus on the interaction, survival, and extinction of the competing species individually. Real ecological systems, however, are characterized by an abundance of species (or strategies, in the terminology of evolutionary game theory) that form intricate, complex interaction networks. The description of the ensuing dynamics may be aided by studying associations of certain strategies rather than individual ones. Here we show how such a higher-level description can yield fruitful insight. Motivated by different strains of colicinogenic Escherichia coli bacteria, we investigate a four-strategy system which contains a three-strategy cycle and a neutral alliance of two strategies. We find that the stochastic, spatial model exhibits a mobility-dependent selection of either the three-strategy cycle or of the neutral pair. We analyze this intriguing phenomenon numerically and analytically.
2011.10126
Brandon Hayes
Brandon H Hayes, Mathieu Andraud, Luis G Salazar, Nicolas Rose, and Timothee Vergne
Mechanistic modeling of African swine fever: A systematic review
19 pages, 3 figures, 4 tables. Accepted to Preventive Veterinary Medicine. Revised from previous preprint to include models published through Dec 2020
null
10.1016/j.prevetmed.2021.105358
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The spread of African swine fever (ASF) poses a grave threat to the global swine industry. Understanding transmission dynamics, such as through mechanistic modeling, is critical for designing effective control strategies. Articles were examined across multiple epidemiological and model characteristics. Model filiation was determined through creation of a neighbor-joining tree using phylogenetic software. Of the 34 articles qualifying for inclusion, four main modelling objectives were identified: estimating transmission parameters (11 studies), assessing determinants of transmission (7), examining consequences of hypothetical outbreaks (5), and assessing alternative control strategies (11). Estimated transmission parameters varied widely, as did parameter assumptions between models. Uncertainties in epidemiological and ecological parameters were usually accounted for to assess the impact on the modeled infection trajectory. Almost all models are host-specific, being developed for either domestic pigs or wild boar, despite the fact that spillover events between domestic pigs and wild boar are evidenced to play an important role in ASF outbreaks. The development of models incorporating such transmission routes is crucial. All compared control strategies were defined a priori, and future models should be built to identify optimal contributions across many control methods. Further, control strategies were examined in competition, as opposed to how they would be synergistically implemented. While comparing strategies is beneficial for identifying a rank-order efficacy of control methods, this structure does not necessarily determine the most effective combination of all available strategies. For ASFV models to effectively support decision-making in controlling ASFV globally, these modelling limitations need to be addressed.
[ { "created": "Thu, 19 Nov 2020 22:11:14 GMT", "version": "v1" }, { "created": "Thu, 29 Apr 2021 07:44:11 GMT", "version": "v2" } ]
2021-04-30
[ [ "Hayes", "Brandon H", "" ], [ "Andraud", "Mathieu", "" ], [ "Salazar", "Luis G", "" ], [ "Rose", "Nicolas", "" ], [ "Vergne", "Timothee", "" ] ]
The spread of African swine fever (ASF) poses a grave threat to the global swine industry. Understanding transmission dynamics, such as through mechanistic modeling, is critical for designing effective control strategies. Articles were examined across multiple epidemiological and model characteristics. Model filiation was determined through creation of a neighbor-joining tree using phylogenetic software. Of the 34 articles qualifying for inclusion, four main modelling objectives were identified: estimating transmission parameters (11 studies), assessing determinants of transmission (7), examining consequences of hypothetical outbreaks (5), and assessing alternative control strategies (11). Estimated transmission parameters varied widely, as did parameter assumptions between models. Uncertainties in epidemiological and ecological parameters were usually accounted for to assess the impact on the modeled infection trajectory. Almost all models are host-specific, being developed for either domestic pigs or wild boar, despite the fact that spillover events between domestic pigs and wild boar are evidenced to play an important role in ASF outbreaks. The development of models incorporating such transmission routes is crucial. All compared control strategies were defined a priori, and future models should be built to identify optimal contributions across many control methods. Further, control strategies were examined in competition, as opposed to how they would be synergistically implemented. While comparing strategies is beneficial for identifying a rank-order efficacy of control methods, this structure does not necessarily determine the most effective combination of all available strategies. For ASFV models to effectively support decision-making in controlling ASFV globally, these modelling limitations need to be addressed.
1208.0029
Tobias Reichenbach
Tobias Reichenbach, Aleksandra Stefanovic, Fumiaki Nin, A. J. Hudspeth
Waves on Reissner's membrane: a mechanism for the propagation of otoacoustic emissions from the cochlea
30 pages, 6 figures, and Supplemental information
Cell Reports 1, 374-384 (2012)
10.1016/j.celrep.2012.02.013
null
q-bio.TO physics.bio-ph physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sound is detected and converted into electrical signals within the ear. The cochlea not only acts as a passive detector of sound, however, but can also produce tones itself. These otoacoustic emissions are a striking manifestation of the cochlea's mechanical active process. A controversy remains over how these mechanical signals propagate back to the middle ear, from which they are emitted as sound. Here we combine theoretical and experimental studies to show that mechanical signals can be transmitted by waves on Reissner's membrane, an elastic structure within the cochlea. We develop a theory for wave propagation on Reissner's membrane and its role in otoacoustic emissions. Employing a scanning laser interferometer, we measure traveling waves on Reissner's membrane in the gerbil, guinea pig, and chinchilla. The results accord with the theory and thus support a role for Reissner's membrane in otoacoustic emissions.
[ { "created": "Tue, 31 Jul 2012 20:49:32 GMT", "version": "v1" } ]
2012-08-02
[ [ "Reichenbach", "Tobias", "" ], [ "Stefanovic", "Aleksandra", "" ], [ "Nin", "Fumiaki", "" ], [ "Hudspeth", "A. J.", "" ] ]
Sound is detected and converted into electrical signals within the ear. The cochlea not only acts as a passive detector of sound, however, but can also produce tones itself. These otoacoustic emissions are a striking manifestation of the cochlea's mechanical active process. A controversy remains over how these mechanical signals propagate back to the middle ear, from which they are emitted as sound. Here we combine theoretical and experimental studies to show that mechanical signals can be transmitted by waves on Reissner's membrane, an elastic structure within the cochlea. We develop a theory for wave propagation on Reissner's membrane and its role in otoacoustic emissions. Employing a scanning laser interferometer, we measure traveling waves on Reissner's membrane in the gerbil, guinea pig, and chinchilla. The results accord with the theory and thus support a role for Reissner's membrane in otoacoustic emissions.
2003.01409
Niceto R. Luque
Francisco Naveros, Niceto R. Luque, Eduardo Ros, Angelo Arleo
VOR Adaptation on a Humanoid iCub Robot Using a Spiking Cerebellar Model
null
null
10.1109/TCYB.2019.2899246
null
q-bio.NC cs.NE cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We embed a spiking cerebellar model within an adaptive real-time (RT) control loop that is able to operate a real robotic body (iCub) when performing different vestibulo-ocular reflex (VOR) tasks. The spiking neural network computation, including event- and time-driven neural dynamics, neural activity, and spike-timing dependent plasticity (STDP) mechanisms, leads to a nondeterministic computation time caused by the neural activity volleys encountered during cerebellar simulation. This nondeterministic computation time motivates the integration of an RT supervisor module that is able to ensure a well-orchestrated neural computation time and robot operation. In fact, our neurorobotic experimental setup (VOR) benefits from the biological sensory motor delay between the cerebellum and the body to buffer the computational overloads as well as providing flexibility in adjusting the neural computation time and RT operation. The RT supervisor module provides for incremental countermeasures that dynamically slow down or speed up the cerebellar simulation by either halting the simulation or disabling certain neural computation features (i.e., STDP mechanisms, spike propagation, and neural updates) to cope with the RT constraints imposed by the real robot operation. This neurorobotic experimental setup is applied to different horizontal and vertical VOR adaptive tasks that are widely used by the neuroscientific community to address cerebellar functioning. We aim to elucidate the manner in which the combination of the cerebellar neural substrate and the distributed plasticity shapes the cerebellar neural activity to mediate motor adaptation. This paper underlines the need for a two-stage learning process to facilitate VOR acquisition.
[ { "created": "Tue, 3 Mar 2020 09:48:15 GMT", "version": "v1" }, { "created": "Tue, 31 Mar 2020 07:26:00 GMT", "version": "v2" } ]
2020-04-01
[ [ "Naveros", "Francisco", "" ], [ "Luque", "Niceto R.", "" ], [ "Ros", "Eduardo", "" ], [ "Arleo", "Angelo", "" ] ]
We embed a spiking cerebellar model within an adaptive real-time (RT) control loop that is able to operate a real robotic body (iCub) when performing different vestibulo-ocular reflex (VOR) tasks. The spiking neural network computation, including event- and time-driven neural dynamics, neural activity, and spike-timing dependent plasticity (STDP) mechanisms, leads to a nondeterministic computation time caused by the neural activity volleys encountered during cerebellar simulation. This nondeterministic computation time motivates the integration of an RT supervisor module that is able to ensure a well-orchestrated neural computation time and robot operation. In fact, our neurorobotic experimental setup (VOR) benefits from the biological sensory motor delay between the cerebellum and the body to buffer the computational overloads as well as providing flexibility in adjusting the neural computation time and RT operation. The RT supervisor module provides for incremental countermeasures that dynamically slow down or speed up the cerebellar simulation by either halting the simulation or disabling certain neural computation features (i.e., STDP mechanisms, spike propagation, and neural updates) to cope with the RT constraints imposed by the real robot operation. This neurorobotic experimental setup is applied to different horizontal and vertical VOR adaptive tasks that are widely used by the neuroscientific community to address cerebellar functioning. We aim to elucidate the manner in which the combination of the cerebellar neural substrate and the distributed plasticity shapes the cerebellar neural activity to mediate motor adaptation. This paper underlines the need for a two-stage learning process to facilitate VOR acquisition.
1311.2665
Chen Jia
Chen Jia, Daquan Jiang, Minping Qian
An allosteric model of the inositol trisphosphate receptor with nonequilibrium binding
23 pages, 5 figures
Physical Biology, 11(5):056001, 2014
10.1088/1478-3975/11/5/056001
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The inositol trisphosphate receptor (IPR) is a crucial ion channel that regulates the Ca$^{2+}$ influx from the endoplasmic reticulum (ER) to the cytoplasm. A thorough study of the IPR channel contributes to a better understanding of calcium oscillations and waves. It has long been observed that the IPR channel is a typical biological system which performs adaptation. However, recent advances on the physical essence of adaptation show that adaptation systems with a negative feedback mechanism, such as the IPR channel, must break detailed balance and always operate out of equilibrium with energy dissipation. Almost all previous IPR models are equilibrium models assuming detailed balance and thus violate the physical essence of adaptation. In this article, we constructed a nonequilibrium allosteric model of single IPR channels based on the patch-clamp experimental data obtained from the IPR in the outer membranes of isolated nuclei of the \emph{Xenopus} oocyte. It turns out that our model reproduces the patch-clamp experimental data reasonably well and produces both the correct steady-state and dynamic properties of the channel. Particularly, our model successfully describes the complicated bimodal [Ca$^{2+}$] dependence of the mean open duration at high [IP$_3$], a steady-state behavior which fails to be correctly described in previous IPR models. Finally, we used the patch-clamp experimental data to validate that the IPR channel indeed breaks detailed balance and thus is a nonequilibrium system which consumes energy.
[ { "created": "Tue, 12 Nov 2013 03:23:05 GMT", "version": "v1" }, { "created": "Fri, 4 Jul 2014 15:21:35 GMT", "version": "v2" } ]
2014-08-28
[ [ "Jia", "Chen", "" ], [ "Jiang", "Daquan", "" ], [ "Qian", "Minping", "" ] ]
The inositol trisphosphate receptor (IPR) is a crucial ion channel that regulates the Ca$^{2+}$ influx from the endoplasmic reticulum (ER) to the cytoplasm. A thorough study of the IPR channel contributes to a better understanding of calcium oscillations and waves. It has long been observed that the IPR channel is a typical biological system that performs adaptation. However, recent advances regarding the physical essence of adaptation show that adaptation systems with a negative feedback mechanism, such as the IPR channel, must break detailed balance and always operate out of equilibrium with energy dissipation. Almost all previous IPR models are equilibrium models assuming detailed balance and thus violate the physical essence of adaptation. In this article, we construct a nonequilibrium allosteric model of single IPR channels based on the patch-clamp experimental data obtained from the IPR in the outer membranes of isolated nuclei of the \emph{Xenopus} oocyte. Our model reproduces the patch-clamp experimental data reasonably well and produces the correct steady-state and dynamic properties of the channel. In particular, our model successfully describes the complicated bimodal [Ca$^{2+}$] dependence of the mean open duration at high [IP$_3$], a steady-state behavior that previous IPR models fail to describe correctly. Finally, we use the patch-clamp experimental data to validate that the IPR channel indeed breaks detailed balance and is thus a nonequilibrium system that consumes energy.
2102.12602
Guillermo Lorenzo
Guillermo Lorenzo, David A. Hormuth II, Angela M. Jarrett, Ernesto A. B. F. Lima, Shashank Subramanian, George Biros, J. Tinsley Oden, Thomas J. R. Hughes, and Thomas E. Yankeelov
Quantitative in vivo imaging to enable tumor forecasting and treatment optimization
null
null
null
null
q-bio.TO cs.CE q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Current clinical decision-making in oncology relies on averages of large patient populations to both assess tumor status and treatment outcomes. However, cancers exhibit an inherent evolving heterogeneity that requires an individual approach based on rigorous and precise predictions of cancer growth and treatment response. To this end, we advocate the use of quantitative in vivo imaging data to calibrate mathematical models for the personalized forecasting of tumor development. In this chapter, we summarize the main data types available from both common and emerging in vivo medical imaging technologies, and how these data can be used to obtain patient-specific parameters for common mathematical models of cancer. We then outline computational methods designed to solve these models, thereby enabling their use for producing personalized tumor forecasts in silico, which, ultimately, can be used to not only predict response, but also optimize treatment. Finally, we discuss the main barriers to making the above paradigm a clinical reality.
[ { "created": "Wed, 24 Feb 2021 23:32:48 GMT", "version": "v1" } ]
2021-02-26
[ [ "Lorenzo", "Guillermo", "" ], [ "Hormuth", "David A.", "II" ], [ "Jarrett", "Angela M.", "" ], [ "Lima", "Ernesto A. B. F.", "" ], [ "Subramanian", "Shashank", "" ], [ "Biros", "George", "" ], [ "Oden", "J. Tinsley", "" ], [ "Hughes", "Thomas J. R.", "" ], [ "Yankeelov", "Thomas E.", "" ] ]
Current clinical decision-making in oncology relies on averages of large patient populations to both assess tumor status and treatment outcomes. However, cancers exhibit an inherent evolving heterogeneity that requires an individual approach based on rigorous and precise predictions of cancer growth and treatment response. To this end, we advocate the use of quantitative in vivo imaging data to calibrate mathematical models for the personalized forecasting of tumor development. In this chapter, we summarize the main data types available from both common and emerging in vivo medical imaging technologies, and how these data can be used to obtain patient-specific parameters for common mathematical models of cancer. We then outline computational methods designed to solve these models, thereby enabling their use for producing personalized tumor forecasts in silico, which, ultimately, can be used to not only predict response, but also optimize treatment. Finally, we discuss the main barriers to making the above paradigm a clinical reality.
1802.00102
Laura Schaposnik
Laura P. Schaposnik, Anlin Zhang
Modeling epidemics on d-cliqued graphs
11 pages, 16 figures
Letters in Biomathematics, Vol. 5, Iss. 1, 2018
10.1080/23737867.2017.1419080
null
q-bio.PE cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since social interactions have been shown to lead to symmetric clusters, we propose here that symmetries play a key role in epidemic modeling. Mathematical models on d-ary tree graphs were recently shown to be particularly effective for modeling epidemics in simple networks [Seibold & Callender, 2016]. To account for symmetric relations, we generalize this to a new type of networks modeled on d-cliqued tree graphs, which are obtained by adding edges to regular d-trees to form d-cliques. This setting gives a more realistic model for epidemic outbreaks originating, for example, within a family or classroom and which could reach a population by transmission via children in schools. Specifically, we quantify how an infection starting in a clique (e.g. family) can reach other cliques through the body of the graph (e.g. public places). Moreover, we propose and study the notion of a safe zone, a subset that has a negligible probability of infection.
[ { "created": "Wed, 31 Jan 2018 23:35:04 GMT", "version": "v1" } ]
2019-05-24
[ [ "Schaposnik", "Laura P.", "" ], [ "Zhang", "Anlin", "" ] ]
Since social interactions have been shown to lead to symmetric clusters, we propose here that symmetries play a key role in epidemic modeling. Mathematical models on d-ary tree graphs were recently shown to be particularly effective for modeling epidemics in simple networks [Seibold & Callender, 2016]. To account for symmetric relations, we generalize this to a new type of networks modeled on d-cliqued tree graphs, which are obtained by adding edges to regular d-trees to form d-cliques. This setting gives a more realistic model for epidemic outbreaks originating, for example, within a family or classroom and which could reach a population by transmission via children in schools. Specifically, we quantify how an infection starting in a clique (e.g. family) can reach other cliques through the body of the graph (e.g. public places). Moreover, we propose and study the notion of a safe zone, a subset that has a negligible probability of infection.
2109.01571
Karl Grieshop
Karl Grieshop, Eddie K.H. Ho, Katja R. Kasimatis
Dominance reversals: The resolution of genetic conflict and maintenance of genetic variation
Review paper with some original theory, 1 Figure, 1 table, 1 box, and Supporting Information (including 11 Figures and 2 tables)
Proceedings of the Royal Society B, 291(2018), p.20232816 (2024)
10.1098/rspb.2023.2816
null
q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Beneficial reversals of dominance reduce the costs of genetic trade-offs and can enable selection to maintain genetic variation for fitness. Beneficial dominance reversals are characterized by the beneficial allele for a given context (e.g. habitat, developmental stage, trait, or sex) being dominant in that context but recessive where deleterious. This context-dependence at least partially mitigates the fitness consequence of heterozygotes carrying one non-beneficial allele for their context and can result in balancing selection that maintains alternative alleles. Dominance reversals are theoretically plausible and are supported by mounting empirical evidence. Here we highlight the importance of beneficial dominance reversals as a mechanism for the mitigation of genetic conflict and review the theory and empirical evidence for them. We identify some areas in need of further research and development and outline three methods (dominance ordination, allele-specific expression, and allele-specific ATAC-Seq) that could facilitate the identification of antagonistic genetic variation. There is ample scope for the development of new empirical methods as well as reanalysis of existing data through the lens of dominance reversals. A greater focus on this topic will expand our understanding of the mechanisms that resolve genetic conflict and whether they maintain genetic variation.
[ { "created": "Fri, 3 Sep 2021 14:52:31 GMT", "version": "v1" }, { "created": "Sat, 15 Oct 2022 16:15:07 GMT", "version": "v2" }, { "created": "Wed, 13 Dec 2023 13:08:56 GMT", "version": "v3" }, { "created": "Fri, 26 Jan 2024 09:32:50 GMT", "version": "v4" } ]
2024-03-28
[ [ "Grieshop", "Karl", "" ], [ "Ho", "Eddie K. H.", "" ], [ "Kasimatis", "Katja R.", "" ] ]
Beneficial reversals of dominance reduce the costs of genetic trade-offs and can enable selection to maintain genetic variation for fitness. Beneficial dominance reversals are characterized by the beneficial allele for a given context (e.g. habitat, developmental stage, trait, or sex) being dominant in that context but recessive where deleterious. This context-dependence at least partially mitigates the fitness consequence of heterozygotes carrying one non-beneficial allele for their context and can result in balancing selection that maintains alternative alleles. Dominance reversals are theoretically plausible and are supported by mounting empirical evidence. Here we highlight the importance of beneficial dominance reversals as a mechanism for the mitigation of genetic conflict and review the theory and empirical evidence for them. We identify some areas in need of further research and development and outline three methods (dominance ordination, allele-specific expression, and allele-specific ATAC-Seq) that could facilitate the identification of antagonistic genetic variation. There is ample scope for the development of new empirical methods as well as reanalysis of existing data through the lens of dominance reversals. A greater focus on this topic will expand our understanding of the mechanisms that resolve genetic conflict and whether they maintain genetic variation.
2201.11748
Roberto Rojas-Cessa
Jorge Medina, Roberto Rojas-Cessa, and Vatcharapan Umpaichitra
Reducing COVID-19 Cases and Deaths by Applying Blockchain in Vaccination Rollout Management
Peer reviewed
in IEEE Open Journal of Engineering in Medicine and Biology, vol. 2, pp. 249-255, 2021
10.1109/OJEMB.2021.3093774
null
q-bio.QM cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Because a fast vaccination rollout against coronavirus disease 2019 (COVID-19) is critical to restoring daily life and avoiding virus mutations, it is tempting to adopt a relaxed vaccination-administration management system. However, a robust management system can support the enforcement of preventive measures and, in turn, reduce incidence and deaths. Here, we model a trustworthy and reliable blockchain-based management system for vaccine distribution by extending the Susceptible-Exposed-Infected-Recovered (SEIR) model. The model includes prevention measures such as mask-wearing, social distancing, vaccination rate, and vaccination efficiency. It also considers negative social behavior, such as violations of social distancing and attempts to use illegitimate vaccination proofs. By evaluating the model, we show that the proposed system can reduce up to 2.5 million cases and half a million deaths in the most demanding scenarios.
[ { "created": "Thu, 27 Jan 2022 18:31:41 GMT", "version": "v1" }, { "created": "Wed, 16 Mar 2022 13:29:18 GMT", "version": "v2" } ]
2022-03-17
[ [ "Medina", "Jorge", "" ], [ "Rojas-Cessa", "Roberto", "" ], [ "Umpaichitra", "Vatcharapan", "" ] ]
Because a fast vaccination rollout against coronavirus disease 2019 (COVID-19) is critical to restoring daily life and avoiding virus mutations, it is tempting to adopt a relaxed vaccination-administration management system. However, a robust management system can support the enforcement of preventive measures and, in turn, reduce incidence and deaths. Here, we model a trustworthy and reliable blockchain-based management system for vaccine distribution by extending the Susceptible-Exposed-Infected-Recovered (SEIR) model. The model includes prevention measures such as mask-wearing, social distancing, vaccination rate, and vaccination efficiency. It also considers negative social behavior, such as violations of social distancing and attempts to use illegitimate vaccination proofs. By evaluating the model, we show that the proposed system can reduce up to 2.5 million cases and half a million deaths in the most demanding scenarios.
1907.04953
Da-Young Lee
Da-Young Lee, Dong-Yeop Na, Yong Seok Heo, and Guo-Liang Wang
Digital image quantification of rice sheath blight: Optimized segmentation and automatic classification
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Rapid and accurate phenotypic screening of rice germplasms is crucial in the search for sources of rice sheath blight (ShB) resistance. However, visual and/or caliper-based estimations of coalescing, necrotic ShB disease lesions are time-consuming, labor-intensive, and subject to human rater subjectivity. Here, we propose the use of RGB images and image processing techniques to quantify ShB disease progression in terms of lesion height and diseased area. Specifically, we developed a pixel color- and coordinate-based K-Means Clustering (PCC-KMC) algorithm utilizing the Mahalanobis metric, aimed at accurate segmentation of symptomatic and non-symptomatic regions within rice stem images. The performance of PCC-KMC was evaluated using Lin's concordance correlation coefficient by comparing its results to visual measurements of ShB lesion height and to lesion/diseased area measured using ImageJ. Low bias and high precision were observed for absolute lesion height (bias=0.93, precision=0.94) and absolute symptomatic area (bias=0.98, precision=0.97). Moreover, we introduced a convolutional neural network (CNN) for the automatic annotation of clusters, termed PCC-KMC-CNN. Our CNN was trained on an 85%:15% training/testing split of a total of 168 ShB-infected stem sample images, achieving 92% accuracy and a loss of 0.21. PCC-KMC-CNN also showed high accuracy and precision for absolute lesion height (bias=0.86, precision=0.90) and absolute diseased area (bias=0.99, precision=0.97). These results demonstrate that the present methodology has great potential to replace traditional visual-based ShB disease severity assessment.
[ { "created": "Wed, 10 Jul 2019 23:11:27 GMT", "version": "v1" }, { "created": "Tue, 13 Apr 2021 06:04:03 GMT", "version": "v2" } ]
2021-04-14
[ [ "Lee", "Da-Young", "" ], [ "Na", "Dong-Yeop", "" ], [ "Heo", "Yong Seok", "" ], [ "Wang", "Guo-Liang", "" ] ]
Rapid and accurate phenotypic screening of rice germplasms is crucial in the search for sources of rice sheath blight (ShB) resistance. However, visual and/or caliper-based estimations of coalescing, necrotic ShB disease lesions are time-consuming, labor-intensive, and subject to human rater subjectivity. Here, we propose the use of RGB images and image processing techniques to quantify ShB disease progression in terms of lesion height and diseased area. Specifically, we developed a pixel color- and coordinate-based K-Means Clustering (PCC-KMC) algorithm utilizing the Mahalanobis metric, aimed at accurate segmentation of symptomatic and non-symptomatic regions within rice stem images. The performance of PCC-KMC was evaluated using Lin's concordance correlation coefficient by comparing its results to visual measurements of ShB lesion height and to lesion/diseased area measured using ImageJ. Low bias and high precision were observed for absolute lesion height (bias=0.93, precision=0.94) and absolute symptomatic area (bias=0.98, precision=0.97). Moreover, we introduced a convolutional neural network (CNN) for the automatic annotation of clusters, termed PCC-KMC-CNN. Our CNN was trained on an 85%:15% training/testing split of a total of 168 ShB-infected stem sample images, achieving 92% accuracy and a loss of 0.21. PCC-KMC-CNN also showed high accuracy and precision for absolute lesion height (bias=0.86, precision=0.90) and absolute diseased area (bias=0.99, precision=0.97). These results demonstrate that the present methodology has great potential to replace traditional visual-based ShB disease severity assessment.
2006.09818
Nathaniel Barlow
Steven J. Weinstein, Morgan S. Holland, Kelly E. Rogers, Nathaniel S. Barlow
Analytic solution of the SEIR epidemic model via asymptotic approximant
original version had substantial text overlap with arXiv:2004.07833; this is now less so
null
10.1016/j.physd.2020.132633
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by-sa/4.0/
An analytic solution is obtained to the SEIR Epidemic Model. The solution is created by constructing a single second-order nonlinear differential equation in $\ln S$ and analytically continuing its divergent power series solution such that it matches the correct long-time exponential damping of the epidemic model. This is achieved through an asymptotic approximant (Barlow et al., 2017, Q. Jl Mech. Appl. Math, 70 (1), 21-48) in the form of a modified symmetric Pad\'e approximant that incorporates this damping. The utility of the analytical form is demonstrated through its application to the COVID-19 pandemic.
[ { "created": "Fri, 12 Jun 2020 20:18:44 GMT", "version": "v1" }, { "created": "Tue, 30 Jun 2020 01:52:51 GMT", "version": "v2" } ]
2020-07-01
[ [ "Weinstein", "Steven J.", "" ], [ "Holland", "Morgan S.", "" ], [ "Rogers", "Kelly E.", "" ], [ "Barlow", "Nathaniel S.", "" ] ]
An analytic solution is obtained to the SEIR Epidemic Model. The solution is created by constructing a single second-order nonlinear differential equation in $\ln S$ and analytically continuing its divergent power series solution such that it matches the correct long-time exponential damping of the epidemic model. This is achieved through an asymptotic approximant (Barlow et al., 2017, Q. Jl Mech. Appl. Math, 70 (1), 21-48) in the form of a modified symmetric Pad\'e approximant that incorporates this damping. The utility of the analytical form is demonstrated through its application to the COVID-19 pandemic.
1108.4642
Steven Kelk
Steven Kelk
A note on efficient computation of hybridization number via softwired clusters
null
null
null
null
q-bio.PE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Here we present a new fixed-parameter tractable algorithm to compute the hybridization number r of two rooted binary phylogenetic trees on taxon set X in time (6r)^r.poly(n), where n=|X|. The novelty of this approach is that it avoids the use of Maximum Acyclic Agreement Forests (MAAFs) and instead exploits the equivalence of the problem with a related problem from the softwired clusters literature. This offers an alternative perspective on the underlying combinatorial structure of the hybridization number problem.
[ { "created": "Tue, 23 Aug 2011 15:54:02 GMT", "version": "v1" } ]
2011-08-24
[ [ "Kelk", "Steven", "" ] ]
Here we present a new fixed-parameter tractable algorithm to compute the hybridization number r of two rooted binary phylogenetic trees on taxon set X in time (6r)^r.poly(n), where n=|X|. The novelty of this approach is that it avoids the use of Maximum Acyclic Agreement Forests (MAAFs) and instead exploits the equivalence of the problem with a related problem from the softwired clusters literature. This offers an alternative perspective on the underlying combinatorial structure of the hybridization number problem.
2102.06490
Michael Grinfeld
Michael Grinfeld, Nigel Mottram and Jozsef Farkas
A General Model of Structured Cell Kinetics
8 pages, 1 figure
null
null
null
q-bio.CB nlin.PS
http://creativecommons.org/licenses/by/4.0/
We present a modelling framework for the dynamics of cells structured by the concentration of a micromolecule they contain. We derive general equations for the evolution of the cell population and of the extracellular concentration of the molecule, and apply this approach to models of silicosis and quorum sensing in Gram-negative bacteria.
[ { "created": "Fri, 12 Feb 2021 12:44:56 GMT", "version": "v1" } ]
2021-02-15
[ [ "Grinfeld", "Michael", "" ], [ "Mottram", "Nigel", "" ], [ "Farkas", "Jozsef", "" ] ]
We present a modelling framework for the dynamics of cells structured by the concentration of a micromolecule they contain. We derive general equations for the evolution of the cell population and of the extracellular concentration of the molecule, and apply this approach to models of silicosis and quorum sensing in Gram-negative bacteria.
1703.10677
Yang Chen
Yang Chen, Dong-Jie Zhao, Chao Song, Wei-He Liu, Zi-Yang Wang, Zhong-Yi Wang, Guiliang Tang, and Lan Huang
Detecting causality in Plant electrical signal by a hybrid causal analysis approach
12 figures
null
null
null
q-bio.NC q-bio.TO
http://creativecommons.org/licenses/by-nc-sa/4.0/
At present, the multi-electrode array (MEA) approach and optical recording allow us to acquire plant electrical activity with high spatio-temporal resolution. To understand the dynamic information flow of the electrical signaling system and estimate the effective connectivity, we propose combining two causality analysis approaches, Granger causality and transfer entropy, which complement each other in measuring the dynamic effective connectivity of the complex system. Our findings at three qualitatively different levels of plant bioelectrical activity reveal the direction of information flow and dynamic, complex causal connectivity, and in particular indicate that information flows not only along the longitudinal section but also spreads across the transverse section.
[ { "created": "Wed, 22 Mar 2017 02:11:45 GMT", "version": "v1" } ]
2017-04-03
[ [ "Chen", "Yang", "" ], [ "Zhao", "Dong-Jie", "" ], [ "Song", "Chao", "" ], [ "Liu", "Wei-He", "" ], [ "Wang", "Zi-Yang", "" ], [ "Wang", "Zhong-Yi", "" ], [ "Tang", "Guiliang", "" ], [ "Huang", "Lan", "" ] ]
At present, the multi-electrode array (MEA) approach and optical recording allow us to acquire plant electrical activity with high spatio-temporal resolution. To understand the dynamic information flow of the electrical signaling system and estimate the effective connectivity, we propose combining two causality analysis approaches, Granger causality and transfer entropy, which complement each other in measuring the dynamic effective connectivity of the complex system. Our findings at three qualitatively different levels of plant bioelectrical activity reveal the direction of information flow and dynamic, complex causal connectivity, and in particular indicate that information flows not only along the longitudinal section but also spreads across the transverse section.
0811.3464
Elfi Kraka
Sushilee Ranganathan, Dmitry Izotov, Elfi Kraka, and Dieter Cremer
Classification of Supersecondary Structures in Proteins Using the Automated Protein Structure Analysis Method
40 pages, 5 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Automated Protein Structure Analysis (APSA) method is used for the classification of supersecondary structures. The basis for the classification is the encoding of three-dimensional (3D) residue conformations into a 16-letter code (3D-1D projection). It is shown that the letter code of the protein makes it possible to reconstruct its overall shape without ambiguity (1D-3D translation). Accordingly, the letter code is used for the development of classification rules that distinguish supersecondary structures by the properties of their turns and the orientation of the flanking helix or strand structures. The orientations of turn and flanking structures are collected in an octant system that helps to specify 196 supersecondary groups for the (alpha,alpha)-, (alpha,beta)-, (beta,alpha)-, and (beta,beta)-classes. 391 protein chains yielding 2499 supersecondary structures were analyzed. Frequently occurring supersecondary structures are identified with the help of the octant classification system and explained on the basis of their letter and classification codes.
[ { "created": "Fri, 21 Nov 2008 04:24:40 GMT", "version": "v1" }, { "created": "Mon, 24 Nov 2008 20:52:14 GMT", "version": "v2" } ]
2008-11-24
[ [ "Ranganathan", "Sushilee", "" ], [ "Izotov", "Dmitry", "" ], [ "Kraka", "Elfi", "" ], [ "Cremer", "Dieter", "" ] ]
The Automated Protein Structure Analysis (APSA) method is used for the classification of supersecondary structures. The basis for the classification is the encoding of three-dimensional (3D) residue conformations into a 16-letter code (3D-1D projection). It is shown that the letter code of the protein makes it possible to reconstruct its overall shape without ambiguity (1D-3D translation). Accordingly, the letter code is used for the development of classification rules that distinguish supersecondary structures by the properties of their turns and the orientation of the flanking helix or strand structures. The orientations of turn and flanking structures are collected in an octant system that helps to specify 196 supersecondary groups for the (alpha,alpha)-, (alpha,beta)-, (beta,alpha)-, and (beta,beta)-classes. 391 protein chains yielding 2499 supersecondary structures were analyzed. Frequently occurring supersecondary structures are identified with the help of the octant classification system and explained on the basis of their letter and classification codes.
1205.2358
Thomas R. Weikl
Thomas R. Weikl and David D. Boehr
Conformational selection and induced changes along the catalytic cycle of E. coli DHFR
14 pages, 8 figures, 2 tables
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein function often involves changes between different conformations. Central questions are how these conformational changes are coupled to the binding or catalytic processes during which they occur, and how they affect the catalytic rates of enzymes. An important model system is the enzyme dihydrofolate reductase (DHFR) from E. coli, which exhibits characteristic conformational changes of the active-site loop during the catalytic step and during unbinding of the product. In this article, we present a general kinetic framework that can be used (1) to identify the ordering of events in the coupling of conformational changes, binding and catalysis and (2) to determine the rates of the substeps of coupled processes from a combined analysis of NMR R2 relaxation dispersion experiments and traditional enzyme kinetics measurements. We apply this framework to E. coli DHFR and find that the conformational change during product unbinding follows a conformational-selection mechanism, i.e. the conformational change occurs predominantly prior to unbinding. The conformational change during the catalytic step, in contrast, is an induced change, i.e. the change occurs after the chemical reaction. We propose that the reason for these conformational changes, which are absent in human and other vertebrate DHFRs, is robustness of the catalytic rate against large pH variations and changes to substrate/product concentrations in E. coli.
[ { "created": "Thu, 10 May 2012 19:25:53 GMT", "version": "v1" } ]
2012-05-11
[ [ "Weikl", "Thomas R.", "" ], [ "Boehr", "David D.", "" ] ]
Protein function often involves changes between different conformations. Central questions are how these conformational changes are coupled to the binding or catalytic processes during which they occur, and how they affect the catalytic rates of enzymes. An important model system is the enzyme dihydrofolate reductase (DHFR) from E. coli, which exhibits characteristic conformational changes of the active-site loop during the catalytic step and during unbinding of the product. In this article, we present a general kinetic framework that can be used (1) to identify the ordering of events in the coupling of conformational changes, binding and catalysis and (2) to determine the rates of the substeps of coupled processes from a combined analysis of NMR R2 relaxation dispersion experiments and traditional enzyme kinetics measurements. We apply this framework to E. coli DHFR and find that the conformational change during product unbinding follows a conformational-selection mechanism, i.e. the conformational change occurs predominantly prior to unbinding. The conformational change during the catalytic step, in contrast, is an induced change, i.e. the change occurs after the chemical reaction. We propose that the reason for these conformational changes, which are absent in human and other vertebrate DHFRs, is robustness of the catalytic rate against large pH variations and changes to substrate/product concentrations in E. coli.
2311.12080
Alaa Sadiq
Alaa M. Sadeq
Exploring the Relationship Between COVID-19 Induced Economic Downturn and Women's Nutritional Health Disparities
null
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
This study explores how the COVID-19 pandemic's economic impact has exacerbated nutritional health disparities among women. It sought to understand the effects of economic challenges on women's dietary choices and access to nutritious food across different socioeconomic groups. Using a mixed-methods approach, the research combined quantitative data from health and economic records with qualitative insights from interviews with diverse women. The study analyzed trends in nutritional health and economic factors before and after the pandemic and gathered personal accounts regarding nutrition and economic difficulties during this period. Findings showed a clear link between the economic downturn and deteriorating nutritional health, particularly in low-income and marginalized groups. These women reported decreased access to healthy foods and an increased dependence on less nutritious options due to budget constraints, leading to a decline in dietary quality. This trend was less evident in higher-income groups, highlighting stark disparities. The pandemic intensified pre-existing nutritional inequalities, with the most vulnerable groups facing greater adverse effects. However, community support and public health measures provided some relief. In summary, the pandemic's economic repercussions have indirectly impaired women's nutritional health, especially among the socioeconomically disadvantaged. This highlights the necessity for tailored nutritional interventions and economic policies focused on safeguarding women's health.
[ { "created": "Mon, 20 Nov 2023 09:10:25 GMT", "version": "v1" } ]
2023-11-22
[ [ "Sadeq", "Alaa M.", "" ] ]
This study explores how the COVID-19 pandemic's economic impact has exacerbated nutritional health disparities among women. It sought to understand the effects of economic challenges on women's dietary choices and access to nutritious food across different socioeconomic groups. Using a mixed-methods approach, the research combined quantitative data from health and economic records with qualitative insights from interviews with diverse women. The study analyzed trends in nutritional health and economic factors before and after the pandemic and gathered personal accounts regarding nutrition and economic difficulties during this period. Findings showed a clear link between the economic downturn and deteriorating nutritional health, particularly in low-income and marginalized groups. These women reported decreased access to healthy foods and an increased dependence on less nutritious options due to budget constraints, leading to a decline in dietary quality. This trend was less evident in higher-income groups, highlighting stark disparities. The pandemic intensified pre-existing nutritional inequalities, with the most vulnerable groups facing greater adverse effects. However, community support and public health measures provided some relief. In summary, the pandemic's economic repercussions have indirectly impaired women's nutritional health, especially among the socioeconomically disadvantaged. This highlights the necessity for tailored nutritional interventions and economic policies focused on safeguarding women's health.
1307.7872
Claudius Gros
Claudius Gros
Generating functionals for guided self-organization
To be published in "Guided Self-Organization: Inception", Springer Series on Emergence, Complexity and Computation, M. Prokopenko (ed), Proceedings of Fifth International Workshop on Guided Self-Organization, Sydney 2012
M. Prokopenko (ed.), Guided Self-Organization: Inception, 53-66, Springer (2014)
null
null
q-bio.NC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time evolution equations for dynamical systems can often be derived from generating functionals. Examples are Newton's equations of motion in classical dynamics which can be generated within the Lagrange or the Hamiltonian formalism. We propose that generating functionals for self-organizing complex systems offer several advantages. Generating functionals allow to formulate complex dynamical systems systematically and the results obtained are typically valid for classes of complex systems, as defined by the type of their respective generating functionals. The generated dynamical systems tend, in addition, to be minimal, containing only few free and undetermined parameters. We point out that two or more generating functionals may be used to define a complex system and that multiple generating functionals may not, and should not, be combined into a single overall objective function. We provide and discuss examples in terms of adapting neural networks.
[ { "created": "Tue, 30 Jul 2013 08:53:04 GMT", "version": "v1" } ]
2014-04-23
[ [ "Gros", "Claudius", "" ] ]
Time evolution equations for dynamical systems can often be derived from generating functionals. Examples are Newton's equations of motion in classical dynamics which can be generated within the Lagrange or the Hamiltonian formalism. We propose that generating functionals for self-organizing complex systems offer several advantages. Generating functionals allow to formulate complex dynamical systems systematically and the results obtained are typically valid for classes of complex systems, as defined by the type of their respective generating functionals. The generated dynamical systems tend, in addition, to be minimal, containing only few free and undetermined parameters. We point out that two or more generating functionals may be used to define a complex system and that multiple generating functionals may not, and should not, be combined into a single overall objective function. We provide and discuss examples in terms of adapting neural networks.
1211.2458
Paul H\'eroux
Su Dong, Paul Heroux
Survey of Extra-Low Frequency and Very-Low Frequency Magnetic Fields in Cell Culture Incubators
59 pages, 20 figures, 7 appendices
null
null
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A typical cell culture CO2 incubator was probed in detail to document the pattern of 60-Hz magnetic fields (MFs) inside the unit, as well as the ability of the incubator to attenuate environmental MFs. Subsequently, a survey of 46 cell culture incubators was performed. The survey measured MFs outside and inside the incubators, the frequency spectrum between 5 and 2000 Hz, and variations over time of the 60-Hz MF. Our measurements show an uneven spatial distribution, reflecting electronic and electrical components hidden within the walls. Attenuation of environmental MFs varied between 18 % and 33 %, signalling easy penetration into the units. MF levels, frequency spectra and variations over time were very different from one unit to the next. All 46 incubators surveyed had an average field greater than 0.2 microT; among them, 39 (85 %) had an average field greater than 1 microT. There is substantial work to be done in improving control over the MF environment of in vitro experiments in bio-medicine, particularly if they involve cancer cells.
[ { "created": "Sun, 11 Nov 2012 20:40:00 GMT", "version": "v1" } ]
2012-11-13
[ [ "Dong", "Su", "" ], [ "Heroux", "Paul", "" ] ]
A typical cell culture CO2 incubator was probed in detail to document the pattern of 60-Hz magnetic fields (MFs) inside the unit, as well as the ability of the incubator to attenuate environmental MFs. Subsequently, a survey of 46 cell culture incubators was performed. The survey measured MFs outside and inside the incubators, the frequency spectrum between 5 and 2000 Hz, and variations over time of the 60-Hz MF. Our measurements show an uneven spatial distribution, reflecting electronic and electrical components hidden within the walls. Attenuation of environmental MFs varied between 18 % and 33 %, signalling easy penetration into the units. MF levels, frequency spectra and variations over time were very different from one unit to the next. All 46 incubators surveyed had an average field greater than 0.2 microT; among them, 39 (85 %) had an average field greater than 1 microT. There is substantial work to be done in improving control over the MF environment of in vitro experiments in bio-medicine, particularly if they involve cancer cells.
2406.15987
Francis Motta
Francis C. Motta, Kevin McGoff, Breschine Cummins, Steven B. Haase
Generalized Measures of Population Synchrony
null
null
null
null
q-bio.PE math.PR q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Synchronized behavior among individuals is a ubiquitous feature of populations. Understanding mechanisms of (de)synchronization demands meaningful, interpretable, computable quantifications of synchrony, relevant to measurements that can be made of dynamic populations. Despite the importance to analyzing and modeling populations, existing notions of synchrony often lack rigorous definitions, may be specialized to a particular experimental system and/or measurement, or may have undesirable properties that limit their utility. We introduce a notion of synchrony for populations of individuals occupying a compact metric space that depends on the Fr\'{e}chet variance of the distribution of individuals. We establish several fundamental and desirable mathematical properties of this synchrony measure, including continuity and invariance to metric scaling. We establish a general approximation result that controls the disparity between synchrony in the true space and the synchrony observed through a discretization of state space, as may occur when observable states are limited by measurement constraints. We develop efficient algorithms to compute synchrony in a variety of state spaces, including all finite state spaces and empirical distributions on the circle, and provide accessible implementations in an open-source Python module. To demonstrate the usefulness of the synchrony measure in biological applications, we investigate several biologically relevant models of mechanisms that can alter the dynamics of synchrony over time, and reanalyze published data concerning the dynamics of the intraerythrocytic developmental cycles of $\textit{Plasmodium}$ parasites. We anticipate that the rigorous definition of population synchrony and the mathematical and biological results presented here will be broadly useful in analyzing and modeling populations in a variety of contexts.
[ { "created": "Sun, 23 Jun 2024 02:46:43 GMT", "version": "v1" } ]
2024-06-25
[ [ "Motta", "Francis C.", "" ], [ "McGoff", "Kevin", "" ], [ "Cummins", "Breschine", "" ], [ "Haase", "Steven B.", "" ] ]
Synchronized behavior among individuals is a ubiquitous feature of populations. Understanding mechanisms of (de)synchronization demands meaningful, interpretable, computable quantifications of synchrony, relevant to measurements that can be made of dynamic populations. Despite the importance to analyzing and modeling populations, existing notions of synchrony often lack rigorous definitions, may be specialized to a particular experimental system and/or measurement, or may have undesirable properties that limit their utility. We introduce a notion of synchrony for populations of individuals occupying a compact metric space that depends on the Fr\'{e}chet variance of the distribution of individuals. We establish several fundamental and desirable mathematical properties of this synchrony measure, including continuity and invariance to metric scaling. We establish a general approximation result that controls the disparity between synchrony in the true space and the synchrony observed through a discretization of state space, as may occur when observable states are limited by measurement constraints. We develop efficient algorithms to compute synchrony in a variety of state spaces, including all finite state spaces and empirical distributions on the circle, and provide accessible implementations in an open-source Python module. To demonstrate the usefulness of the synchrony measure in biological applications, we investigate several biologically relevant models of mechanisms that can alter the dynamics of synchrony over time, and reanalyze published data concerning the dynamics of the intraerythrocytic developmental cycles of $\textit{Plasmodium}$ parasites. We anticipate that the rigorous definition of population synchrony and the mathematical and biological results presented here will be broadly useful in analyzing and modeling populations in a variety of contexts.
2106.10234
Yingce Xia
Jinhua Zhu, Yingce Xia, Tao Qin, Wengang Zhou, Houqiang Li, Tie-Yan Liu
Dual-view Molecule Pre-training
Add new results of retrosynthesis
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inspired by its success in natural language processing and computer vision, pre-training has attracted substantial attention in cheminformatics and bioinformatics, especially for molecule based tasks. A molecule can be represented by either a graph (where atoms are connected by bonds) or a SMILES sequence (where depth-first-search is applied to the molecular graph with specific rules). Existing works on molecule pre-training use either graph representations only or SMILES representations only. In this work, we propose to leverage both the representations and design a new pre-training algorithm, dual-view molecule pre-training (briefly, DMP), that can effectively combine the strengths of both types of molecule representations. The model of DMP consists of two branches: a Transformer branch that takes the SMILES sequence of a molecule as input, and a GNN branch that takes a molecular graph as input. The training of DMP contains three tasks: (1) predicting masked tokens in a SMILES sequence by the Transformer branch, (2) predicting masked atoms in a molecular graph by the GNN branch, and (3) maximizing the consistency between the two high-level representations output by the Transformer and GNN branches separately. After pre-training, we can use either the Transformer branch (this one is recommended according to empirical results), the GNN branch, or both for downstream tasks. DMP is tested on nine molecular property prediction tasks and achieves state-of-the-art performances on seven of them. Furthermore, we test DMP on three retrosynthesis tasks and achieve state-of-the-art results on them.
[ { "created": "Thu, 17 Jun 2021 03:58:38 GMT", "version": "v1" }, { "created": "Wed, 13 Oct 2021 03:09:28 GMT", "version": "v2" } ]
2021-10-14
[ [ "Zhu", "Jinhua", "" ], [ "Xia", "Yingce", "" ], [ "Qin", "Tao", "" ], [ "Zhou", "Wengang", "" ], [ "Li", "Houqiang", "" ], [ "Liu", "Tie-Yan", "" ] ]
Inspired by its success in natural language processing and computer vision, pre-training has attracted substantial attention in cheminformatics and bioinformatics, especially for molecule based tasks. A molecule can be represented by either a graph (where atoms are connected by bonds) or a SMILES sequence (where depth-first-search is applied to the molecular graph with specific rules). Existing works on molecule pre-training use either graph representations only or SMILES representations only. In this work, we propose to leverage both the representations and design a new pre-training algorithm, dual-view molecule pre-training (briefly, DMP), that can effectively combine the strengths of both types of molecule representations. The model of DMP consists of two branches: a Transformer branch that takes the SMILES sequence of a molecule as input, and a GNN branch that takes a molecular graph as input. The training of DMP contains three tasks: (1) predicting masked tokens in a SMILES sequence by the Transformer branch, (2) predicting masked atoms in a molecular graph by the GNN branch, and (3) maximizing the consistency between the two high-level representations output by the Transformer and GNN branches separately. After pre-training, we can use either the Transformer branch (this one is recommended according to empirical results), the GNN branch, or both for downstream tasks. DMP is tested on nine molecular property prediction tasks and achieves state-of-the-art performances on seven of them. Furthermore, we test DMP on three retrosynthesis tasks and achieve state-of-the-art results on them.
2002.09062
Weiqi Ji
Weiqi Ji and Sili Deng
Autonomous Discovery of Unknown Reaction Pathways from Data by Chemical Reaction Neural Network
null
The Journal of Physical Chemistry A, 2021
10.1021/acs.jpca.0c09316
null
q-bio.MN cs.LG physics.chem-ph stat.ML
http://creativecommons.org/licenses/by-nc-nd/4.0/
Chemical reactions occur in energy, environmental, biological, and many other natural systems, and the inference of the reaction networks is essential to understand and design the chemical processes in engineering and life sciences. Yet, revealing the reaction pathways for complex systems and processes is still challenging due to the lack of knowledge of the involved species and reactions. Here, we present a neural network approach that autonomously discovers reaction pathways from the time-resolved species concentration data. The proposed Chemical Reaction Neural Network (CRNN), by design, satisfies the fundamental physics laws, including the Law of Mass Action and the Arrhenius Law. Consequently, the CRNN is physically interpretable such that the reaction pathways can be interpreted, and the kinetic parameters can be quantified simultaneously from the weights of the neural network. The inference of the chemical pathways is accomplished by training the CRNN with species concentration data via stochastic gradient descent. We demonstrate the successful implementations and the robustness of the approach in elucidating the chemical reaction pathways of several chemical engineering and biochemical systems. The autonomous inference by the CRNN approach precludes the need for expert knowledge in proposing candidate networks and addresses the curse of dimensionality in complex systems. The physical interpretability also makes the CRNN capable of not only fitting the data for a given system but also developing knowledge of unknown pathways that could be generalized to similar chemical systems.
[ { "created": "Thu, 20 Feb 2020 23:36:46 GMT", "version": "v1" }, { "created": "Fri, 8 Jan 2021 22:18:36 GMT", "version": "v2" } ]
2021-01-22
[ [ "Ji", "Weiqi", "" ], [ "Deng", "Sili", "" ] ]
Chemical reactions occur in energy, environmental, biological, and many other natural systems, and the inference of the reaction networks is essential to understand and design the chemical processes in engineering and life sciences. Yet, revealing the reaction pathways for complex systems and processes is still challenging due to the lack of knowledge of the involved species and reactions. Here, we present a neural network approach that autonomously discovers reaction pathways from the time-resolved species concentration data. The proposed Chemical Reaction Neural Network (CRNN), by design, satisfies the fundamental physics laws, including the Law of Mass Action and the Arrhenius Law. Consequently, the CRNN is physically interpretable such that the reaction pathways can be interpreted, and the kinetic parameters can be quantified simultaneously from the weights of the neural network. The inference of the chemical pathways is accomplished by training the CRNN with species concentration data via stochastic gradient descent. We demonstrate the successful implementations and the robustness of the approach in elucidating the chemical reaction pathways of several chemical engineering and biochemical systems. The autonomous inference by the CRNN approach precludes the need for expert knowledge in proposing candidate networks and addresses the curse of dimensionality in complex systems. The physical interpretability also makes the CRNN capable of not only fitting the data for a given system but also developing knowledge of unknown pathways that could be generalized to similar chemical systems.
1607.08552
Tobias K\"uhn
Tobias K\"uhn, Moritz Helias
Locking of correlated neural activity to ongoing oscillations
57 pages, 12 figures, published version
PLoS Comput Biol 13(6): e1005534 (2017)
10.1371/journal.pcbi.1005534
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Population-wide oscillations are ubiquitously observed in mesoscopic signals of cortical activity. In these network states a global oscillatory cycle modulates the propensity of neurons to fire. Synchronous activation of neurons has been hypothesized to be a separate channel of signal processing information in the brain. A salient question is therefore if and how oscillations interact with spike synchrony and in how far these channels can be considered separate. Experiments indeed showed that correlated spiking co-modulates with the static firing rate and is also tightly locked to the phase of beta-oscillations. While the dependence of correlations on the mean rate is well understood in feed-forward networks, it remains unclear why and by which mechanisms correlations tightly lock to an oscillatory cycle. We here demonstrate that such correlated activation of pairs of neurons is qualitatively explained by periodically-driven random networks. We identify the mechanisms by which covariances depend on a driving periodic stimulus. Mean-field theory combined with linear response theory yields closed-form expressions for the cyclostationary mean activities and pairwise zero-time-lag covariances of binary recurrent random networks. Two distinct mechanisms cause time-dependent covariances: the modulation of the susceptibility of single neurons (via the external input and network feedback) and the time-varying variances of single unit activities. For some parameters, the effectively inhibitory recurrent feedback leads to resonant covariances even if mean activities show non-resonant behavior. Our analytical results open the question of time-modulated synchronous activity to a quantitative analysis.
[ { "created": "Thu, 28 Jul 2016 18:09:49 GMT", "version": "v1" }, { "created": "Fri, 21 Oct 2016 09:58:28 GMT", "version": "v2" }, { "created": "Mon, 3 Jul 2017 15:25:12 GMT", "version": "v3" } ]
2017-07-04
[ [ "Kühn", "Tobias", "" ], [ "Helias", "Moritz", "" ] ]
Population-wide oscillations are ubiquitously observed in mesoscopic signals of cortical activity. In these network states a global oscillatory cycle modulates the propensity of neurons to fire. Synchronous activation of neurons has been hypothesized to be a separate channel of signal processing information in the brain. A salient question is therefore if and how oscillations interact with spike synchrony and in how far these channels can be considered separate. Experiments indeed showed that correlated spiking co-modulates with the static firing rate and is also tightly locked to the phase of beta-oscillations. While the dependence of correlations on the mean rate is well understood in feed-forward networks, it remains unclear why and by which mechanisms correlations tightly lock to an oscillatory cycle. We here demonstrate that such correlated activation of pairs of neurons is qualitatively explained by periodically-driven random networks. We identify the mechanisms by which covariances depend on a driving periodic stimulus. Mean-field theory combined with linear response theory yields closed-form expressions for the cyclostationary mean activities and pairwise zero-time-lag covariances of binary recurrent random networks. Two distinct mechanisms cause time-dependent covariances: the modulation of the susceptibility of single neurons (via the external input and network feedback) and the time-varying variances of single unit activities. For some parameters, the effectively inhibitory recurrent feedback leads to resonant covariances even if mean activities show non-resonant behavior. Our analytical results open the question of time-modulated synchronous activity to a quantitative analysis.
1811.08068
Xiaochen Liu
X. Liu, P. Sanz-Leon, P. A. Robinson
Gamma-Band Correlations in Primary Visual Cortex
23 pages; 12 figures
Phys. Rev. E 101, 042406 (2020)
10.1103/PhysRevE.101.042406
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural field theory is used to quantitatively analyze the two-dimensional spatiotemporal correlation properties of gamma-band (30 -- 70 Hz) oscillations evoked by stimuli arriving at the primary visual cortex (V1), and modulated by patchy connectivities that depend on orientation preference (OP). Correlation functions are derived analytically under different stimulus and measurement conditions. The predictions reproduce a range of published experimental results, including the existence of two-point oscillatory temporal cross-correlations with zero time-lag between neurons with similar OP, the influence of spatial separation of neurons on the strength of the correlations, and the effects of differing stimulus orientations.
[ { "created": "Tue, 20 Nov 2018 04:28:28 GMT", "version": "v1" } ]
2021-03-19
[ [ "Liu", "X.", "" ], [ "Sanz-Leon", "P.", "" ], [ "Robinson", "P. A.", "" ] ]
Neural field theory is used to quantitatively analyze the two-dimensional spatiotemporal correlation properties of gamma-band (30 -- 70 Hz) oscillations evoked by stimuli arriving at the primary visual cortex (V1), and modulated by patchy connectivities that depend on orientation preference (OP). Correlation functions are derived analytically under different stimulus and measurement conditions. The predictions reproduce a range of published experimental results, including the existence of two-point oscillatory temporal cross-correlations with zero time-lag between neurons with similar OP, the influence of spatial separation of neurons on the strength of the correlations, and the effects of differing stimulus orientations.
2007.16018
Daniele Marinazzo
Sebastiano Stramaglia, Tomas Scagliarini, Bryan C. Daniels, and Daniele Marinazzo
Quantifying dynamical high-order interdependencies from the O-information: an application to neural spiking dynamics
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
We address the problem of efficiently and informatively quantifying how multiplets of variables carry information about the future of the dynamical system they belong to. In particular we want to identify groups of variables carrying redundant or synergistic information, and track how the size and the composition of these multiplets changes as the collective behavior of the system evolves. In order to afford a parsimonious expansion of shared information, and at the same time control for lagged interactions and common effect, we develop a dynamical, conditioned version of the O-information, a framework recently proposed to quantify high-order interdependencies via multivariate extension of the mutual information. We thus obtain an expansion of the transfer entropy in which synergistic and redundant effects are separated. We apply this framework to a dataset of spiking neurons from a monkey performing a perceptual discrimination task. The method identifies synergistic multiplets that include neurons previously categorized as containing little relevant information individually.
[ { "created": "Fri, 31 Jul 2020 12:34:19 GMT", "version": "v1" } ]
2020-08-03
[ [ "Stramaglia", "Sebastiano", "" ], [ "Scagliarini", "Tomas", "" ], [ "Daniels", "Bryan C.", "" ], [ "Marinazzo", "Daniele", "" ] ]
We address the problem of efficiently and informatively quantifying how multiplets of variables carry information about the future of the dynamical system they belong to. In particular we want to identify groups of variables carrying redundant or synergistic information, and track how the size and the composition of these multiplets changes as the collective behavior of the system evolves. In order to afford a parsimonious expansion of shared information, and at the same time control for lagged interactions and common effect, we develop a dynamical, conditioned version of the O-information, a framework recently proposed to quantify high-order interdependencies via multivariate extension of the mutual information. We thus obtain an expansion of the transfer entropy in which synergistic and redundant effects are separated. We apply this framework to a dataset of spiking neurons from a monkey performing a perceptual discrimination task. The method identifies synergistic multiplets that include neurons previously categorized as containing little relevant information individually.
1706.03835
Christian Meisel
Christian Meisel, Kimberlyn Bailey, Peter Achermann and Dietmar Plenz
Decline of long-range temporal correlations in the human brain during sustained wakefulness
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sleep is crucial for daytime functioning, cognitive performance and general well-being. These aspects of daily life are known to be impaired after extended wake, yet, the underlying neuronal correlates have been difficult to identify. Accumulating evidence suggests that normal functioning of the brain is characterized by long-range temporal correlations (LRTCs) in cortex, which are supportive for decision-making and working memory tasks. Here we assess LRTCs in resting state human EEG data during a 40-hour sleep deprivation experiment by evaluating the decay in autocorrelation and the scaling exponent of the detrended fluctuation analysis from EEG amplitude fluctuations. We find with both measures that LRTCs decline as sleep deprivation progresses. This decline becomes evident when taking changes in signal power into appropriate consideration. Our results demonstrate the importance of sleep to maintain LRTCs in the human brain. In complex networks, LRTCs naturally emerge in the vicinity of a critical state. The observation of declining LRTCs during wake thus provides additional support for our hypothesis that sleep reorganizes cortical networks towards critical dynamics for optimal functioning.
[ { "created": "Mon, 12 Jun 2017 20:15:43 GMT", "version": "v1" } ]
2017-06-14
[ [ "Meisel", "Christian", "" ], [ "Bailey", "Kimberlyn", "" ], [ "Achermann", "Peter", "" ], [ "Plenz", "Dietmar", "" ] ]
Sleep is crucial for daytime functioning, cognitive performance and general well-being. These aspects of daily life are known to be impaired after extended wake, yet, the underlying neuronal correlates have been difficult to identify. Accumulating evidence suggests that normal functioning of the brain is characterized by long-range temporal correlations (LRTCs) in cortex, which are supportive for decision-making and working memory tasks. Here we assess LRTCs in resting state human EEG data during a 40-hour sleep deprivation experiment by evaluating the decay in autocorrelation and the scaling exponent of the detrended fluctuation analysis from EEG amplitude fluctuations. We find with both measures that LRTCs decline as sleep deprivation progresses. This decline becomes evident when taking changes in signal power into appropriate consideration. Our results demonstrate the importance of sleep to maintain LRTCs in the human brain. In complex networks, LRTCs naturally emerge in the vicinity of a critical state. The observation of declining LRTCs during wake thus provides additional support for our hypothesis that sleep reorganizes cortical networks towards critical dynamics for optimal functioning.
q-bio/0504025
Edwin Wang Dr.
Edwin Wang and Enrico Purisima
Self-organization of gene regulatory network motifs enriched with short transcript's half-life transcription factors
Trends Genet (in press), main text 1, supplementary notes 1, 40 pages, 7 tables, 4 figs, minor modifications
Trends Genet, 21:492-495, 2005
10.1016/j.tig.2005.06.013
null
q-bio.MN
null
Network motifs, the recurring regulatory structural patterns in networks, are able to self-organize to produce networks. Three major motifs, the feedforward loop, the single input module and the bi-fan, are found in gene regulatory networks. The large ratio of genes to transcription factors (TFs) in genomes leads to a sharing of TFs by motifs and is sufficient to result in network self-organization. We find a common design principle of these motifs: short transcript's half-life (THL) TFs are significantly enriched in motifs and hubs. This enrichment becomes one of the driving forces for the emergence of the network scale-free topology and allows the network to quickly adapt to environmental changes. Most feedforward loops and bi-fans contain at least one short THL TF, which can be seen as a criterion for self-assembling these motifs. We have classified the motifs according to their short THL TF content. We show that the percentage of the different motif subtypes varies in different cellular conditions.
[ { "created": "Mon, 18 Apr 2005 23:20:19 GMT", "version": "v1" }, { "created": "Mon, 25 Apr 2005 22:36:25 GMT", "version": "v2" } ]
2007-05-23
[ [ "Wang", "Edwin", "" ], [ "Purisima", "Enrico", "" ] ]
Network motifs, the recurring regulatory structural patterns in networks, are able to self-organize to produce networks. Three major motifs, the feedforward loop, the single input module and the bi-fan, are found in gene regulatory networks. The large ratio of genes to transcription factors (TFs) in genomes leads to a sharing of TFs by motifs and is sufficient to result in network self-organization. We find a common design principle of these motifs: short transcript's half-life (THL) TFs are significantly enriched in motifs and hubs. This enrichment becomes one of the driving forces for the emergence of the network scale-free topology and allows the network to quickly adapt to environmental changes. Most feedforward loops and bi-fans contain at least one short THL TF, which can be seen as a criterion for self-assembling these motifs. We have classified the motifs according to their short THL TF content. We show that the percentage of the different motif subtypes varies in different cellular conditions.
q-bio/0701031
Miroslaw Dudek
Miroslaw R. Dudek
Lotka-Volterra population model of genetic evolution
10 pages, 6 figures
null
null
null
q-bio.PE q-bio.OT
null
A deterministic model of an age-structured population with genetics analogous to the discrete time Penna model of genetic evolution is constructed on the basis of the Lotka-Volterra scheme. It is shown that if, as in the Penna model, genetic information is represented by the fraction of defective genes in the population, the population numbers for each specific individual's age are represented by exactly the same functions of age in both models. This gives us a new possibility to consider multi-species evolution without using the detailed microscopic Penna model. We discuss a particular case of the predator-prey system representing an ecosystem consisting of a limited amount of energy resources consumed by the age-structured species living in this ecosystem. Then, the increase in number of the individuals in the population under consideration depends on the available energy resources, the shape of the distribution function of defective genes in the population and the fertility age. We show that these parameters determine the trend toward equilibrium of the whole ecosystem.
[ { "created": "Sat, 20 Jan 2007 15:54:21 GMT", "version": "v1" }, { "created": "Sun, 28 Jan 2007 08:18:45 GMT", "version": "v2" } ]
2007-05-23
[ [ "Dudek", "Miroslaw R.", "" ] ]
A deterministic model of an age-structured population with genetics analogous to the discrete time Penna model of genetic evolution is constructed on the basis of the Lotka-Volterra scheme. It is shown that if, as in the Penna model, genetic information is represented by the fraction of defective genes in the population, the population numbers for each specific individual's age are represented by exactly the same functions of age in both models. This gives us a new possibility to consider multi-species evolution without using the detailed microscopic Penna model. We discuss a particular case of the predator-prey system representing an ecosystem consisting of a limited amount of energy resources consumed by the age-structured species living in this ecosystem. Then, the increase in the number of individuals in the population under consideration depends on the available energy resources, the shape of the distribution function of defective genes in the population and the fertility age. We show that these parameters determine the trend toward equilibrium of the whole ecosystem.
1911.04585
Ralf Engbert
Johan Chandra, Andre Kruegel, Ralf Engbert
Modulation of oculomotor control during reading of mirrored and inverted texts
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The interplay between cognitive and oculomotor processes during reading can be explored when the spatial layout of text deviates from the typical display. In this study, we investigate various eye-movement measures during reading of text with experimentally manipulated layout (word-wise and letter-wise mirrored-reversed text as well as inverted and scrambled text). While typical findings (e.g., longer mean fixation times, shorter mean saccades lengths) in reading manipulated texts compared to normal texts were reported in earlier work, little is known about changes of oculomotor targeting observed in within-word landing positions under the above text layouts. Here we carry out precise analyses of landing positions and find substantial changes in the so-called launch-site effect in addition to the expected overall slow-down of reading performance. Specifically, during reading of our manipulated text conditions with reversed letter order (against overall reading direction), we find a reduced launch-site effect, while in all other manipulated text conditions, we observe an increased launch-site effect. Our results clearly indicate that the oculomotor system is highly adaptive when confronted with unusual reading conditions.
[ { "created": "Mon, 11 Nov 2019 22:21:22 GMT", "version": "v1" } ]
2019-11-13
[ [ "Chandra", "Johan", "" ], [ "Kruegel", "Andre", "" ], [ "Engbert", "Ralf", "" ] ]
The interplay between cognitive and oculomotor processes during reading can be explored when the spatial layout of text deviates from the typical display. In this study, we investigate various eye-movement measures during reading of text with experimentally manipulated layout (word-wise and letter-wise mirrored-reversed text as well as inverted and scrambled text). While typical findings (e.g., longer mean fixation times, shorter mean saccades lengths) in reading manipulated texts compared to normal texts were reported in earlier work, little is known about changes of oculomotor targeting observed in within-word landing positions under the above text layouts. Here we carry out precise analyses of landing positions and find substantial changes in the so-called launch-site effect in addition to the expected overall slow-down of reading performance. Specifically, during reading of our manipulated text conditions with reversed letter order (against overall reading direction), we find a reduced launch-site effect, while in all other manipulated text conditions, we observe an increased launch-site effect. Our results clearly indicate that the oculomotor system is highly adaptive when confronted with unusual reading conditions.
2102.00316
Michael Wilkinson
Michael Wilkinson, David Yllanes and Greg Huber
Polysomally Protected Viruses
14 pages, 4 figures, Physical Biology, in press
Phys. Biol. 18 046009 (2021)
10.1088/1478-3975/abf5b5
null
q-bio.SC q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is conceivable that an RNA virus could use a polysome, that is, a string of ribosomes covering the RNA strand, to protect the genetic material from degradation inside a host cell. This paper discusses how such a virus might operate, and how its presence might be detected by ribosome profiling. There are two possible forms for such a polysomally protected virus, depending upon whether just the forward strand or both the forward and complementary strands can be encased by ribosomes (these will be termed type 1 and type 2, respectively). It is argued that in the type 2 case the viral RNA would evolve an ambigrammatic property, whereby the viral genes are free of stop codons in a reverse reading frame (with forward and reverse codons aligned). Recent observations of ribosome profiles of ambigrammatic narnavirus sequences are consistent with our predictions for the type 2 case.
[ { "created": "Sat, 30 Jan 2021 21:34:11 GMT", "version": "v1" }, { "created": "Thu, 8 Apr 2021 21:50:34 GMT", "version": "v2" } ]
2021-06-24
[ [ "Wilkinson", "Michael", "" ], [ "Yllanes", "David", "" ], [ "Huber", "Greg", "" ] ]
It is conceivable that an RNA virus could use a polysome, that is, a string of ribosomes covering the RNA strand, to protect the genetic material from degradation inside a host cell. This paper discusses how such a virus might operate, and how its presence might be detected by ribosome profiling. There are two possible forms for such a polysomally protected virus, depending upon whether just the forward strand or both the forward and complementary strands can be encased by ribosomes (these will be termed type 1 and type 2, respectively). It is argued that in the type 2 case the viral RNA would evolve an ambigrammatic property, whereby the viral genes are free of stop codons in a reverse reading frame (with forward and reverse codons aligned). Recent observations of ribosome profiles of ambigrammatic narnavirus sequences are consistent with our predictions for the type 2 case.
2104.04161
Simone Pigolotti
Anzhelika Koldaeva, Hsieh-Fu Tsai, Amy Q. Shen, and Simone Pigolotti
Population genetics in microchannels
20 pages, 9 figures, combined main text + SI
Proc. Natl. Acad. Sci. 119(12), e2120821119 (2022)
10.1073/pnas.2120821119
null
q-bio.PE cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spatial constraints such as rigid barriers affect the dynamics of cell populations, potentially altering the course of natural evolution. In this paper, we investigate the population genetics of Escherichia coli proliferating in microchannels with open ends. Our analysis is based on a population model in which reproducing cells shift entire lanes of cells towards the open ends of the channel. The model predicts that diversity is lost very rapidly within lanes, but at a much slower pace among lanes. As a consequence, two mixed, neutral E. coli strains competing in a microchannel must organize into an ordered regular stripe pattern in the course of a few generations. These predictions are in quantitative agreement with our experiments. We also demonstrate that random mutations appearing in the middle of the channel are much more likely to reach fixation than those occurring elsewhere. Our results illustrate fundamental mechanisms of microbial evolution in spatially confined space.
[ { "created": "Fri, 9 Apr 2021 02:33:01 GMT", "version": "v1" }, { "created": "Tue, 28 Sep 2021 06:11:01 GMT", "version": "v2" }, { "created": "Fri, 25 Mar 2022 08:02:11 GMT", "version": "v3" } ]
2022-03-28
[ [ "Koldaeva", "Anzhelika", "" ], [ "Tsai", "Hsieh-Fu", "" ], [ "Shen", "Amy Q.", "" ], [ "Pigolotti", "Simone", "" ] ]
Spatial constraints such as rigid barriers affect the dynamics of cell populations, potentially altering the course of natural evolution. In this paper, we investigate the population genetics of Escherichia coli proliferating in microchannels with open ends. Our analysis is based on a population model in which reproducing cells shift entire lanes of cells towards the open ends of the channel. The model predicts that diversity is lost very rapidly within lanes, but at a much slower pace among lanes. As a consequence, two mixed, neutral E. coli strains competing in a microchannel must organize into an ordered regular stripe pattern in the course of a few generations. These predictions are in quantitative agreement with our experiments. We also demonstrate that random mutations appearing in the middle of the channel are much more likely to reach fixation than those occurring elsewhere. Our results illustrate fundamental mechanisms of microbial evolution in spatially confined space.
2002.02642
Narendra Dixit
Rajat Desikan, Rustom Antia, Narendra M. Dixit
The weakest link bridging germinal center B cells and follicular dendritic cells limits antibody affinity maturation
null
BioEssays, 2021
10.1002/bies.202000159
null
q-bio.PE q-bio.BM q-bio.CB
http://creativecommons.org/licenses/by-nc-sa/4.0/
The affinity of antibodies (Abs) produced in vivo for their target antigens (Ags) is typically well below the maximum affinity possible. Nearly 25 years ago, Foote and Eisen explained how an 'affinity ceiling' could arise from constraints associated with the acquisition of soluble antigen by B cells. However, recent studies have shown that B cells in germinal centers (where Ab affinity maturation occurs) acquire Ag not in soluble form but presented as receptor-bound immune complexes on follicular dendritic cells (FDCs). How the affinity ceiling arises in such a scenario is unclear. Here, we argue that the ceiling arises from the weakest link of the chain of protein complexes that bridges B cells and FDCs and is broken during Ag acquisition. This hypothesis explains the affinity ceiling realized in vivo and suggests that strengthening the weakest link could raise the ceiling and improve Ab responses.
[ { "created": "Fri, 7 Feb 2020 06:47:25 GMT", "version": "v1" } ]
2021-02-16
[ [ "Desikan", "Rajat", "" ], [ "Antia", "Rustom", "" ], [ "Dixit", "Narendra M.", "" ] ]
The affinity of antibodies (Abs) produced in vivo for their target antigens (Ags) is typically well below the maximum affinity possible. Nearly 25 years ago, Foote and Eisen explained how an 'affinity ceiling' could arise from constraints associated with the acquisition of soluble antigen by B cells. However, recent studies have shown that B cells in germinal centers (where Ab affinity maturation occurs) acquire Ag not in soluble form but presented as receptor-bound immune complexes on follicular dendritic cells (FDCs). How the affinity ceiling arises in such a scenario is unclear. Here, we argue that the ceiling arises from the weakest link of the chain of protein complexes that bridges B cells and FDCs and is broken during Ag acquisition. This hypothesis explains the affinity ceiling realized in vivo and suggests that strengthening the weakest link could raise the ceiling and improve Ab responses.
1605.01219
Michele Caselle
Antonio Rosanova, Alberto Colliva, Matteo Osella, Michele Caselle
Modelling the evolution of transcription factor binding preferences in complex eukaryotes
14 pages, 5 figures. Minor changes. Final version, accepted for publication
Sci Rep. 2017 Aug 8;7(1):7596
10.1038/s41598-017-07761-0
null
q-bio.GN q-bio.MN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transcription factors (TFs) exert their regulatory action by binding to DNA with specific sequence preferences. However, different TFs can partially share their binding sequences due to their common evolutionary origin. This `redundancy' of binding defines a way of organizing TFs in `motif families' by grouping TFs with similar binding preferences. Since these ultimately define the TF target genes, the motif family organization entails information about the structure of transcriptional regulation as it has been shaped by evolution. Focusing on the human TF repertoire, we show that a one-parameter evolutionary model of the Birth-Death-Innovation type can explain the empirical partition of TFs into motif families, and allows us to highlight the relevant evolutionary forces at the origin of this organization. Moreover, the model allows us to pinpoint a few deviations from the neutral scenario it assumes: three over-expanded families (including HOX and FOX genes), a set of `singleton' TFs for which duplication seems to be selected against, and a higher-than-average rate of diversification of the binding preferences of TFs with a Zinc Finger DNA binding domain. Finally, a comparison of the TF motif family organization in different eukaryotic species suggests an increase of redundancy of binding with organism complexity.
[ { "created": "Wed, 4 May 2016 10:52:25 GMT", "version": "v1" }, { "created": "Tue, 4 Dec 2018 16:05:21 GMT", "version": "v2" } ]
2018-12-05
[ [ "Rosanova", "Antonio", "" ], [ "Colliva", "Alberto", "" ], [ "Osella", "Matteo", "" ], [ "Caselle", "Michele", "" ] ]
Transcription factors (TFs) exert their regulatory action by binding to DNA with specific sequence preferences. However, different TFs can partially share their binding sequences due to their common evolutionary origin. This `redundancy' of binding defines a way of organizing TFs in `motif families' by grouping TFs with similar binding preferences. Since these ultimately define the TF target genes, the motif family organization entails information about the structure of transcriptional regulation as it has been shaped by evolution. Focusing on the human TF repertoire, we show that a one-parameter evolutionary model of the Birth-Death-Innovation type can explain the empirical partition of TFs into motif families, and allows us to highlight the relevant evolutionary forces at the origin of this organization. Moreover, the model allows us to pinpoint a few deviations from the neutral scenario it assumes: three over-expanded families (including HOX and FOX genes), a set of `singleton' TFs for which duplication seems to be selected against, and a higher-than-average rate of diversification of the binding preferences of TFs with a Zinc Finger DNA binding domain. Finally, a comparison of the TF motif family organization in different eukaryotic species suggests an increase of redundancy of binding with organism complexity.
0901.1598
Frederick Matsen IV
Frederick A. Matsen
constNJ: an algorithm to reconstruct sets of phylogenetic trees satisfying pairwise topological constraints
Please contact me with any questions or comments!
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces constNJ, the first algorithm for phylogenetic reconstruction of sets of trees with constrained pairwise rooted subtree-prune regraft (rSPR) distance. We are motivated by the problem of constructing sets of trees which must fit into a recombination, hybridization, or similar network. Rather than first finding a set of trees which are optimal according to a phylogenetic criterion (e.g. likelihood or parsimony) and then attempting to fit them into a network, constNJ estimates the trees while enforcing specified rSPR distance constraints. The primary input for constNJ is a collection of distance matrices derived from sequence blocks which are assumed to have evolved in a tree-like manner, such as blocks of an alignment which do not contain any recombination breakpoints. The other input is a set of rSPR constraints for any set of pairs of trees. ConstNJ is consistent and a strict generalization of the neighbor-joining algorithm; it uses the new notion of "maximum agreement partitions" to assure that the resulting trees satisfy the given rSPR distance constraints.
[ { "created": "Mon, 12 Jan 2009 15:46:13 GMT", "version": "v1" }, { "created": "Tue, 20 Jan 2009 19:41:29 GMT", "version": "v2" } ]
2009-09-30
[ [ "Matsen", "Frederick A.", "" ] ]
This paper introduces constNJ, the first algorithm for phylogenetic reconstruction of sets of trees with constrained pairwise rooted subtree-prune regraft (rSPR) distance. We are motivated by the problem of constructing sets of trees which must fit into a recombination, hybridization, or similar network. Rather than first finding a set of trees which are optimal according to a phylogenetic criterion (e.g. likelihood or parsimony) and then attempting to fit them into a network, constNJ estimates the trees while enforcing specified rSPR distance constraints. The primary input for constNJ is a collection of distance matrices derived from sequence blocks which are assumed to have evolved in a tree-like manner, such as blocks of an alignment which do not contain any recombination breakpoints. The other input is a set of rSPR constraints for any set of pairs of trees. ConstNJ is consistent and a strict generalization of the neighbor-joining algorithm; it uses the new notion of "maximum agreement partitions" to assure that the resulting trees satisfy the given rSPR distance constraints.
1812.02467
Jessie Renton
Jessie Renton and Karen M. Page
Evolution of cooperation on an epithelium
19 pages, 8 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cooperation is prevalent in nature, not only in the context of social interactions within the animal kingdom, but also on the cellular level. In cancer, for example, tumour cells can cooperate by producing growth factors. The evolution of cooperation has traditionally been studied for well-mixed populations under the framework of evolutionary game theory, and more recently for structured populations using evolutionary graph theory. The population structures arising due to cellular arrangement in tissues, however, are dynamic and thus cannot be accurately represented by either of these frameworks. In this work we compare the conditions for cooperative success in an epithelium modelled using evolutionary graph theory, to those in a mechanical model of an epithelium - the Voronoi tessellation model. Crucially, in this latter model cells are able to move, and birth and death are not spatially coupled. We calculate fixation probabilities in the Voronoi tessellation model through simulation and an approximate analytic technique and show that this leads to stronger promotion of cooperation in comparison with the evolutionary graph theory model.
[ { "created": "Thu, 6 Dec 2018 11:22:28 GMT", "version": "v1" } ]
2018-12-07
[ [ "Renton", "Jessie", "" ], [ "Page", "Karen M.", "" ] ]
Cooperation is prevalent in nature, not only in the context of social interactions within the animal kingdom, but also on the cellular level. In cancer, for example, tumour cells can cooperate by producing growth factors. The evolution of cooperation has traditionally been studied for well-mixed populations under the framework of evolutionary game theory, and more recently for structured populations using evolutionary graph theory. The population structures arising due to cellular arrangement in tissues, however, are dynamic and thus cannot be accurately represented by either of these frameworks. In this work we compare the conditions for cooperative success in an epithelium modelled using evolutionary graph theory, to those in a mechanical model of an epithelium - the Voronoi tessellation model. Crucially, in this latter model cells are able to move, and birth and death are not spatially coupled. We calculate fixation probabilities in the Voronoi tessellation model through simulation and an approximate analytic technique and show that this leads to stronger promotion of cooperation in comparison with the evolutionary graph theory model.
2005.11454
Aydogan Ozcan
Calvin Brown, Derek Tseng, Paige M. K. Larkin, Susan Realegeno, Leanne Mortimer, Arjun Subramonian, Dino Di Carlo, Omai B. Garner, Aydogan Ozcan
An Automated, Cost-Effective Optical System for Accelerated Anti-microbial Susceptibility Testing (AST) using Deep Learning
13 Pages, 6 Figures, 1 Table
ACS Photonics (2020)
10.1021/acsphotonics.0c00841
null
q-bio.QM physics.app-ph physics.ins-det physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Antimicrobial susceptibility testing (AST) is a standard clinical procedure used to quantify antimicrobial resistance (AMR). Currently, the gold standard method requires incubation for 18-24 h and subsequent inspection for growth by a trained medical technologist. We demonstrate an automated, cost-effective optical system that delivers early AST results, minimizing incubation time and eliminating human errors, while remaining compatible with standard phenotypic assay workflow. The system is composed of cost-effective components and eliminates the need for optomechanical scanning. A neural network processes the captured optical intensity information from an array of fiber optic cables to determine whether bacterial growth has occurred in each well of a 96-well microplate. When the system was blindly tested on isolates from 33 patients with Staphylococcus aureus infections, 95.03% of all the wells containing growth were correctly identified using our neural network, with an average of 5.72 h of incubation time required to identify growth. 90% of all wells (growth and no-growth) were correctly classified after 7 h, and 95% after 10.5 h. Our deep learning-based optical system met the FDA-defined criteria for essential and categorical agreements for all 14 antibiotics tested after an average of 6.13 h and 6.98 h, respectively. Furthermore, our system met the FDA criteria for major and very major error rates for 11 of 12 possible drugs after an average of 4.02 h, and 9 of 13 possible drugs after an average of 9.39 h, respectively. This system could enable faster, inexpensive, automated AST, especially in resource limited settings, helping to mitigate the rise of global AMR.
[ { "created": "Sat, 23 May 2020 02:38:26 GMT", "version": "v1" } ]
2020-07-17
[ [ "Brown", "Calvin", "" ], [ "Tseng", "Derek", "" ], [ "Larkin", "Paige M. K.", "" ], [ "Realegeno", "Susan", "" ], [ "Mortimer", "Leanne", "" ], [ "Subramonian", "Arjun", "" ], [ "Di Carlo", "Dino", "" ], [ "Garner", "Omai B.", "" ], [ "Ozcan", "Aydogan", "" ] ]
Antimicrobial susceptibility testing (AST) is a standard clinical procedure used to quantify antimicrobial resistance (AMR). Currently, the gold standard method requires incubation for 18-24 h and subsequent inspection for growth by a trained medical technologist. We demonstrate an automated, cost-effective optical system that delivers early AST results, minimizing incubation time and eliminating human errors, while remaining compatible with standard phenotypic assay workflow. The system is composed of cost-effective components and eliminates the need for optomechanical scanning. A neural network processes the captured optical intensity information from an array of fiber optic cables to determine whether bacterial growth has occurred in each well of a 96-well microplate. When the system was blindly tested on isolates from 33 patients with Staphylococcus aureus infections, 95.03% of all the wells containing growth were correctly identified using our neural network, with an average of 5.72 h of incubation time required to identify growth. 90% of all wells (growth and no-growth) were correctly classified after 7 h, and 95% after 10.5 h. Our deep learning-based optical system met the FDA-defined criteria for essential and categorical agreements for all 14 antibiotics tested after an average of 6.13 h and 6.98 h, respectively. Furthermore, our system met the FDA criteria for major and very major error rates for 11 of 12 possible drugs after an average of 4.02 h, and 9 of 13 possible drugs after an average of 9.39 h, respectively. This system could enable faster, inexpensive, automated AST, especially in resource limited settings, helping to mitigate the rise of global AMR.
1007.5022
Tsvi Tlusty
Ilan Breskin, Jordi Soriano, Elisha Moses, and Tsvi Tlusty
Percolation in living neural networks
PACS numbers: 87.18.Sn, 87.19.La, 64.60.Ak http://www.weizmann.ac.il/complex/tlusty/papers/PhysRevLett2006.pdf
Breskin I Soriano J Moses E & Tlusty T Percolation in Living Neural Networks Phys Rev Lett 97 188102-4 (2006)
10.1103/PhysRevLett.97.188102
null
q-bio.NC cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study living neural networks by measuring the neurons' response to a global electrical stimulation. Neural connectivity is lowered by reducing the synaptic strength, chemically blocking neurotransmitter receptors. We use a graph-theoretic approach to show that the connectivity undergoes a percolation transition. This occurs as the giant component disintegrates, characterized by a power law with critical exponent $\beta \simeq 0.65$. The exponent is independent of the balance between excitatory and inhibitory neurons and indicates that the degree distribution is Gaussian rather than scale-free.
[ { "created": "Wed, 28 Jul 2010 16:12:39 GMT", "version": "v1" } ]
2010-07-30
[ [ "Breskin", "Ilan", "" ], [ "Soriano", "Jordi", "" ], [ "Moses", "Elisha", "" ], [ "Tlusty", "Tsvi", "" ] ]
We study living neural networks by measuring the neurons' response to a global electrical stimulation. Neural connectivity is lowered by reducing the synaptic strength, chemically blocking neurotransmitter receptors. We use a graph-theoretic approach to show that the connectivity undergoes a percolation transition. This occurs as the giant component disintegrates, characterized by a power law with critical exponent $\beta \simeq 0.65$. The exponent is independent of the balance between excitatory and inhibitory neurons and indicates that the degree distribution is Gaussian rather than scale-free.
1910.05881
Taiping Zeng
Taiping Zeng, XiaoLi Li, and Bailu Si
Bayesian Integration of Multi-resolutional Grid Codes for Spatial Cognition
arXiv admin note: text overlap with arXiv:1910.04590
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fourier-like summation of several grid cell modules with different spatial frequencies in the medial entorhinal cortex (MEC) has long been proposed to form the contours of place firing fields. Recent experiments largely, but not completely, support this theory. Place fields are markedly expanded by inactivation of dorsal MEC, which fits the hypothesis. However, contrary to the prediction, inactivation of ventral MEC also weakly broadens the spatial place firing patterns. In this study, we derive a model that maps grid spatial frequencies, represented by Gaussian profiles, to a 1D place field by Bayesian inference, and further provide theoretical explanations for the expansion of place fields and predictions for the alignment of grid components. To understand the information transfer between neocortex, entorhinal cortex, and hippocampus, we propose a spatial memory indexing theory, building on the hippocampal indexing theory, to investigate how neural dynamics work in the entorhinal-hippocampal circuit. The inputs of place cells in CA3 converge from three grid modules with different grid spacings in layer II of MEC through a Bayesian mechanism. We use a robot system to test the Fourier hypothesis and the spatial memory indexing theory, and validate our proposed entorhinal-hippocampal model. We then demonstrate its cognitive mapping capability on the KITTI odometry benchmark dataset. Results suggest that our model provides a rational theoretical explanation for the biological experimental results. Results also show that the proposed model is robust for simultaneous localization and mapping (SLAM) in large-scale environments. Our model provides theoretical support for the Fourier hypothesis within a general Bayesian mechanism, which may pertain to other neural systems beyond spatial cognition.
[ { "created": "Mon, 14 Oct 2019 01:43:11 GMT", "version": "v1" } ]
2019-10-15
[ [ "Zeng", "Taiping", "" ], [ "Li", "XiaoLi", "" ], [ "Si", "Bailu", "" ] ]
Fourier-like summation of several grid cell modules with different spatial frequencies in the medial entorhinal cortex (MEC) has long been proposed to form the contours of place firing fields. Recent experiments largely, but not completely, support this theory. Place fields are markedly expanded by inactivation of dorsal MEC, which fits the hypothesis. However, contrary to the prediction, inactivation of ventral MEC also weakly broadens the spatial place firing patterns. In this study, we derive a model that maps grid spatial frequencies, represented by Gaussian profiles, to a 1D place field by Bayesian inference, and further provide theoretical explanations for the expansion of place fields and predictions for the alignment of grid components. To understand the information transfer between neocortex, entorhinal cortex, and hippocampus, we propose a spatial memory indexing theory, building on the hippocampal indexing theory, to investigate how neural dynamics work in the entorhinal-hippocampal circuit. The inputs of place cells in CA3 converge from three grid modules with different grid spacings in layer II of MEC through a Bayesian mechanism. We use a robot system to test the Fourier hypothesis and the spatial memory indexing theory, and validate our proposed entorhinal-hippocampal model. We then demonstrate its cognitive mapping capability on the KITTI odometry benchmark dataset. Results suggest that our model provides a rational theoretical explanation for the biological experimental results. Results also show that the proposed model is robust for simultaneous localization and mapping (SLAM) in large-scale environments. Our model provides theoretical support for the Fourier hypothesis within a general Bayesian mechanism, which may pertain to other neural systems beyond spatial cognition.
1405.1610
Nico Riedel
Nico Riedel, Bhavin S. Khatri, Michael L\"assig, Johannes Berg
Multiple-line inference of selection on quantitative traits
21 pages, 11 figures; to appear in Genetics
Genetics 201 (1), 305-322 (2015)
10.1534/genetics.115.178988
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trait differences between species may be attributable to natural selection. However, quantifying the strength of evidence for selection acting on a particular trait is a difficult task. Here we develop a population-genetic test for selection acting on a quantitative trait which is based on multiple-line crosses. We show that using multiple lines increases both the power and the scope of selection inference. First, a test based on three or more lines detects selection with strongly increased statistical significance, and we show explicitly how the sensitivity of the test depends on the number of lines. Second, a multiple-line test allows us to distinguish different lineage-specific selection scenarios. Our analytical results are complemented by extensive numerical simulations. We then apply the multiple-line test to QTL data on floral character traits in plant species of the Mimulus genus and on photoperiodic traits in different maize strains, where we find signatures of lineage-specific selection not seen in a two-line test.
[ { "created": "Wed, 7 May 2014 14:10:29 GMT", "version": "v1" }, { "created": "Mon, 6 Jul 2015 09:28:36 GMT", "version": "v2" } ]
2017-08-09
[ [ "Riedel", "Nico", "" ], [ "Khatri", "Bhavin S.", "" ], [ "Lässig", "Michael", "" ], [ "Berg", "Johannes", "" ] ]
Trait differences between species may be attributable to natural selection. However, quantifying the strength of evidence for selection acting on a particular trait is a difficult task. Here we develop a population-genetic test for selection acting on a quantitative trait which is based on multiple-line crosses. We show that using multiple lines increases both the power and the scope of selection inference. First, a test based on three or more lines detects selection with strongly increased statistical significance, and we show explicitly how the sensitivity of the test depends on the number of lines. Second, a multiple-line test allows us to distinguish different lineage-specific selection scenarios. Our analytical results are complemented by extensive numerical simulations. We then apply the multiple-line test to QTL data on floral character traits in plant species of the Mimulus genus and on photoperiodic traits in different maize strains, where we find signatures of lineage-specific selection not seen in a two-line test.
1810.12016
Mattia Bramini
Mattia Bramini, Silvio Sacchetti, Andrea Armirotti, Anna Rocchi, Ester V\'azquez, Ver\'onica Le\'on Castellanos, Tiziano Bandiera, Fabrizia Cesca and Fabio Benfenati
Graphene oxide nanosheets disrupt lipid composition, Ca2+ homeostasis and synaptic transmission in primary cortical neurons
This document is the unedited Author's version of a Submitted Work that was subsequently accepted for publication in ACS Nano. To access the final edited and published work see https://pubs.acs.org/articlesonrequest/AOR-MGXEfuAxY43fnrHfBEuQ
ACS Nano 2016, 10, 7, 7154-7171
10.1021/acsnano.6b03438
null
q-bio.NC physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Graphene has the potential to make a very significant impact on society, with important applications in the biomedical field. The possibility to engineer graphene-based medical devices at the neuronal interface is of particular interest, making it imperative to determine the biocompatibility of graphene materials with neuronal cells. Here we conducted a comprehensive analysis of the effects of chronic and acute exposure of rat primary cortical neurons to few-layers pristine graphene (GR) and monolayer graphene oxide (GO) flakes. By combining a range of cell biology, microscopy, electrophysiology and omics approaches we characterized the graphene neuron interaction from the first steps of membrane contact and internalization to the long-term effects on cell viability, synaptic transmission and cell metabolism. GR/GO flakes are found in contact with the neuronal membrane, free in the cytoplasm and internalized through the endolysosomal pathway, with no significant impact on neuron viability. However, GO exposure selectively caused the inhibition of excitatory transmission, paralleled by a reduction in the number of excitatory synaptic contacts, and a concomitant enhancement of the inhibitory activity. This was accompanied by induction of autophagy, altered Ca2+ dynamics and by a downregulation of some of the main players in the regulation of Ca2+ homeostasis in both excitatory and inhibitory neurons. Our results show that, although graphene exposure does not impact on neuron viability, it does nevertheless have important effects on neuronal transmission and network functionality, thus warranting caution when planning to employ this material for neuro-biological applications.
[ { "created": "Mon, 29 Oct 2018 09:23:01 GMT", "version": "v1" } ]
2018-10-30
[ [ "Bramini", "Mattia", "" ], [ "Sacchetti", "Silvio", "" ], [ "Armirotti", "Andrea", "" ], [ "Rocchi", "Anna", "" ], [ "Vázquez", "Ester", "" ], [ "Castellanos", "Verónica León", "" ], [ "Bandiera", "Tiziano", "" ], [ "Cesca", "Fabrizia", "" ], [ "Benfenati", "Fabio", "" ] ]
Graphene has the potential to make a very significant impact on society, with important applications in the biomedical field. The possibility to engineer graphene-based medical devices at the neuronal interface is of particular interest, making it imperative to determine the biocompatibility of graphene materials with neuronal cells. Here we conducted a comprehensive analysis of the effects of chronic and acute exposure of rat primary cortical neurons to few-layer pristine graphene (GR) and monolayer graphene oxide (GO) flakes. By combining a range of cell biology, microscopy, electrophysiology and omics approaches we characterized the graphene-neuron interaction from the first steps of membrane contact and internalization to the long-term effects on cell viability, synaptic transmission and cell metabolism. GR/GO flakes are found in contact with the neuronal membrane, free in the cytoplasm and internalized through the endolysosomal pathway, with no significant impact on neuron viability. However, GO exposure selectively caused the inhibition of excitatory transmission, paralleled by a reduction in the number of excitatory synaptic contacts, and a concomitant enhancement of the inhibitory activity. This was accompanied by induction of autophagy, altered Ca2+ dynamics and a downregulation of some of the main players in the regulation of Ca2+ homeostasis in both excitatory and inhibitory neurons. Our results show that, although graphene exposure does not impact neuron viability, it does nevertheless have important effects on neuronal transmission and network functionality, thus warranting caution when planning to employ this material for neuro-biological applications.
2111.04326
Felix Kramer
Felix Kramer, Carl D. Modes
On biological flow networks: Antagonism between hydrodynamic and metabolic stimuli as driver of topological transitions
null
null
null
null
q-bio.TO nlin.AO physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
A plethora of computational models have been developed in recent decades to account for the morphogenesis of complex biological fluid networks, such as capillary beds. Contemporary adaptation models are based on optimization schemes where networks react and adapt toward given flow patterns. Doing so, a system reduces dissipation and network volume, thereby altering its final form. Yet, recent numeric studies on network morphogenesis, incorporating uptake of metabolites by the embedding tissue, have indicated the conventional approach to be insufficient. Here, we systematically study a hybrid-model which combines the network adaptation schemes intended to generate space-filling perfusion as well as optimal filtration of metabolites. As a result, we find hydrodynamic stimuli (wall-shear stress) and filtration based stimuli (uptake of metabolites) to be antagonistic as hydrodynamically optimized systems have suboptimal uptake qualities and vice versa. We show that a switch between different optimization regimes is typically accompanied with a complex transition between topologically redundant meshes and spanning trees. Depending on the metabolite demand and uptake capabilities of the adaptating networks, we are further able to demonstrate the existence of nullity re-entrant behavior and the development of compromised phenotypes such as dangling non-perfused vessels and bottlenecks.
[ { "created": "Mon, 8 Nov 2021 08:37:35 GMT", "version": "v1" }, { "created": "Tue, 9 Nov 2021 10:25:33 GMT", "version": "v2" }, { "created": "Mon, 15 Nov 2021 18:36:53 GMT", "version": "v3" } ]
2021-11-16
[ [ "Kramer", "Felix", "" ], [ "Modes", "Carl D.", "" ] ]
A plethora of computational models have been developed in recent decades to account for the morphogenesis of complex biological fluid networks, such as capillary beds. Contemporary adaptation models are based on optimization schemes where networks react and adapt toward given flow patterns. Doing so, a system reduces dissipation and network volume, thereby altering its final form. Yet, recent numerical studies on network morphogenesis, incorporating uptake of metabolites by the embedding tissue, have indicated the conventional approach to be insufficient. Here, we systematically study a hybrid model which combines the network adaptation schemes intended to generate space-filling perfusion as well as optimal filtration of metabolites. As a result, we find hydrodynamic stimuli (wall-shear stress) and filtration-based stimuli (uptake of metabolites) to be antagonistic, as hydrodynamically optimized systems have suboptimal uptake qualities and vice versa. We show that a switch between different optimization regimes is typically accompanied by a complex transition between topologically redundant meshes and spanning trees. Depending on the metabolite demand and uptake capabilities of the adapting networks, we are further able to demonstrate the existence of nullity re-entrant behavior and the development of compromised phenotypes such as dangling non-perfused vessels and bottlenecks.
2404.04300
Michael Plank
Caleb Sullivan, Pubudu Senanayake, Michael J. Plank
Quantifying age-specific household contacts in Aotearoa New Zealand for infectious disease modelling
null
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Accounting for population age structure and age-specific contact patterns is crucial for accurate modelling of human infectious disease dynamics and impact. A common approach is to use synthetic contact matrices, which estimate the number of contacts between individuals of different ages in specific settings. These contact matrices are frequently based on data collected from populations with very different demographic and socioeconomic characteristics from the population of interest. Here we use a comprehensive household composition dataset based on New Zealand census and administrative data to construct household contact matrices and a synthetic population that can be used for modelling. We investigate the behaviour of a compartment-based and an agent-based epidemic model parameterised using this data, compared to a commonly used synthetic contact matrix. We find that using the household composition data leads to lower attack rates in older age groups compared to using the synthetic contact matrix. This difference becomes larger when household transmission is more dominant relative to non-household transmission. In addition, explicitly account for household structure in an agent-based models leads to lower attack rates at all ages. We provide electronic versions of the synthetic population and household contact matrix for other researchers to use in infectious disease models.
[ { "created": "Thu, 4 Apr 2024 21:25:53 GMT", "version": "v1" } ]
2024-04-09
[ [ "Sullivan", "Caleb", "" ], [ "Senanayake", "Pubudu", "" ], [ "Plank", "Michael J.", "" ] ]
Accounting for population age structure and age-specific contact patterns is crucial for accurate modelling of human infectious disease dynamics and impact. A common approach is to use synthetic contact matrices, which estimate the number of contacts between individuals of different ages in specific settings. These contact matrices are frequently based on data collected from populations with very different demographic and socioeconomic characteristics from the population of interest. Here we use a comprehensive household composition dataset based on New Zealand census and administrative data to construct household contact matrices and a synthetic population that can be used for modelling. We investigate the behaviour of a compartment-based and an agent-based epidemic model parameterised using this data, compared to a commonly used synthetic contact matrix. We find that using the household composition data leads to lower attack rates in older age groups compared to using the synthetic contact matrix. This difference becomes larger when household transmission is more dominant relative to non-household transmission. In addition, explicitly accounting for household structure in an agent-based model leads to lower attack rates at all ages. We provide electronic versions of the synthetic population and household contact matrix for other researchers to use in infectious disease models.
1110.2519
Michele Bellingeri
Michele Bellingeri
Threshold Extinction in Food Webs
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding how an extinction event affects ecosystem is fundamental to biodiversity conservation. For this reason, food web response to species loss has been investigated in several ways in the last years. Several studies focused on secondary extinction due to biodiversity loss in a bottom-up perspective using in-silico extinction experiments in which a single species is removed at each step and the number of secondary extinctions is recorded. In these binary simulations a species goes secondarily extinct if it loses all its resource species, that is, when the energy intake is zero. This pure topological statement represents the best case scenario. In fact a consumer species could go extinct losing a certain fraction of the energy intake and the response of quantitative food webs to node loss could be very different with respect to simple binary predictions. The goal of this paper is to analyze how patterns of secondary extinctions change when higher species sensitivity are included in the analyses. In particular, we explored how food web secondary extinction, triggered by the removal of most connected nodes, varies as a function of the energy intake threshold assumed as the minimum needed for species persistence. As we will show, a very low increase of energy intake threshold stimulates a disproportionate growth of secondary extinction.
[ { "created": "Tue, 11 Oct 2011 22:14:05 GMT", "version": "v1" }, { "created": "Fri, 4 Nov 2011 13:53:51 GMT", "version": "v2" } ]
2011-11-07
[ [ "Bellingeri", "Michele", "" ] ]
Understanding how an extinction event affects an ecosystem is fundamental to biodiversity conservation. For this reason, food web response to species loss has been investigated in several ways in recent years. Several studies focused on secondary extinction due to biodiversity loss in a bottom-up perspective using in-silico extinction experiments in which a single species is removed at each step and the number of secondary extinctions is recorded. In these binary simulations a species goes secondarily extinct if it loses all its resource species, that is, when the energy intake is zero. This purely topological statement represents the best-case scenario. In fact, a consumer species could go extinct after losing a certain fraction of its energy intake, and the response of quantitative food webs to node loss could be very different from simple binary predictions. The goal of this paper is to analyze how patterns of secondary extinctions change when higher species sensitivities are included in the analyses. In particular, we explored how food web secondary extinction, triggered by the removal of most connected nodes, varies as a function of the energy intake threshold assumed as the minimum needed for species persistence. As we will show, a very small increase in the energy intake threshold stimulates a disproportionate growth of secondary extinction.
0709.4200
Philip M. Kim
Philip M. Kim, Jan O. Korbel, Xueying Chen, Mark B. Gerstein
Copy Number Variants and Segmental Duplications Show Different Formation Signatures
13 pages
null
null
null
q-bio.GN q-bio.QM
null
In addition to variation in terms of single nucleotide polymorphisms (SNPs), whole regions ranging from several kilobases up to a megabase in length differ in copy number among individuals. These differences are referred to as Copy Number Variants (CNVs) and extensive mapping of these is underway. Recent studies have highlighted their great prevalence in the human genome. Segmental Duplications (SDs) are long (>1kb) stretches of duplicated DNA with high sequence identity. First, we analyzed the co-localization of SDs and find that SDs are significantly co-localized with each other, resulting in a power-law distribution, which suggests a preferential attachment mechanism, i.e. existing SDs are likely to be involved in creating new ones nearby. Second, we look at the relationship of CNVs/SDs with various types of repeats. We we find that the previously recognized association of SDs with Alu elements is significantly stronger for older SDs and is sharply decreasing for younger ones. While it might be expected that the patterns should be similar for SDs and CNVs, we find, surprisingly, no association of CNVs with Alu elements. This trend is consistent with the decreasing correlation between Alu elements and younger SDs, the activity of Alu elements has been decreasing and by now it they seem no longer active. Furthermore, we find a striking association of SDs with processed pseudogenes suggesting that they may also have mediated SD formation. Moreover, find strong association with microsatellites for both SDs and CNVs that suggests a role for satellites in the formation of both.
[ { "created": "Wed, 26 Sep 2007 15:53:40 GMT", "version": "v1" } ]
2007-09-27
[ [ "Kim", "Philip M.", "" ], [ "Korbel", "Jan O.", "" ], [ "Chen", "Xueying", "" ], [ "Gerstein", "Mark B.", "" ] ]
In addition to variation in terms of single nucleotide polymorphisms (SNPs), whole regions ranging from several kilobases up to a megabase in length differ in copy number among individuals. These differences are referred to as Copy Number Variants (CNVs) and extensive mapping of these is underway. Recent studies have highlighted their great prevalence in the human genome. Segmental Duplications (SDs) are long (>1kb) stretches of duplicated DNA with high sequence identity. First, we analyzed the co-localization of SDs and find that SDs are significantly co-localized with each other, resulting in a power-law distribution, which suggests a preferential attachment mechanism, i.e. existing SDs are likely to be involved in creating new ones nearby. Second, we look at the relationship of CNVs/SDs with various types of repeats. We find that the previously recognized association of SDs with Alu elements is significantly stronger for older SDs and is sharply decreasing for younger ones. While it might be expected that the patterns should be similar for SDs and CNVs, we find, surprisingly, no association of CNVs with Alu elements. This trend is consistent with the decreasing correlation between Alu elements and younger SDs: the activity of Alu elements has been decreasing and by now they seem to be no longer active. Furthermore, we find a striking association of SDs with processed pseudogenes, suggesting that they may also have mediated SD formation. Moreover, we find a strong association with microsatellites for both SDs and CNVs, which suggests a role for satellites in the formation of both.
2401.00562
Tim Sziburis
Tim Sziburis, Susanne Blex, Tobias Glasmachers, Ioannis Iossifidis
Ruhr Hand Motion Catalog of Human Center-Out Transport Trajectories in 3D Task-Space Captured by a Redundant Measurement System
null
null
null
null
q-bio.QM eess.SP
http://creativecommons.org/licenses/by-nc-sa/4.0/
Neurological conditions are a major source of movement disorders. Motion modelling and variability analysis have the potential to identify pathology but require profound data. We introduce a systematic dataset of 3D center-out task-space trajectories of human hand transport movements in a natural setting. The transport tasks of this study consist of grasping a cylindric object from a unified start position and transporting it to one of nine target locations in unconstrained operational space. The measurement procedure is automatized to record ten trials per target location. With that, the dataset consists of 90 movement trajectories for each hand of 31 participants without known movement disorders. The participants are aged between 21 and 78 years, covering a wide range. Data are recorded redundantly by both an optical tracking system and an IMU sensor. As opposed to the stationary capturing system, the IMU can be considered as a portable, low-cost and energy-efficient alternative to be implemented on embedded systems.
[ { "created": "Sun, 31 Dec 2023 18:39:42 GMT", "version": "v1" } ]
2024-01-02
[ [ "Sziburis", "Tim", "" ], [ "Blex", "Susanne", "" ], [ "Glasmachers", "Tobias", "" ], [ "Iossifidis", "Ioannis", "" ] ]
Neurological conditions are a major source of movement disorders. Motion modelling and variability analysis have the potential to identify pathology but require profound data. We introduce a systematic dataset of 3D center-out task-space trajectories of human hand transport movements in a natural setting. The transport tasks of this study consist of grasping a cylindrical object from a unified start position and transporting it to one of nine target locations in unconstrained operational space. The measurement procedure is automated to record ten trials per target location. With that, the dataset consists of 90 movement trajectories for each hand of 31 participants without known movement disorders. The participants are aged between 21 and 78 years, covering a wide range. Data are recorded redundantly by both an optical tracking system and an IMU sensor. As opposed to the stationary capturing system, the IMU can be considered as a portable, low-cost and energy-efficient alternative to be implemented on embedded systems.
2104.02604
Mar\'ia Virginia Sabando Miss
Mar\'ia Virginia Sabando, Ignacio Ponzoni, Evangelos E. Milios, Axel J. Soto
Using Molecular Embeddings in QSAR Modeling: Does it Make a Difference?
null
Briefings in Bioinformatics, Volume 23, Issue 1, January 2022, bbab365
10.1093/bib/bbab365
null
q-bio.BM cs.LG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the consolidation of deep learning in drug discovery, several novel algorithms for learning molecular representations have been proposed. Despite the interest of the community in developing new methods for learning molecular embeddings and their theoretical benefits, comparing molecular embeddings with each other and with traditional representations is not straightforward, which in turn hinders the process of choosing a suitable representation for QSAR modeling. A reason behind this issue is the difficulty of conducting a fair and thorough comparison of the different existing embedding approaches, which requires numerous experiments on various datasets and training scenarios. To close this gap, we reviewed the literature on methods for molecular embeddings and reproduced three unsupervised and two supervised molecular embedding techniques recently proposed in the literature. We compared these five methods concerning their performance in QSAR scenarios using different classification and regression datasets. We also compared these representations to traditional molecular representations, namely molecular descriptors and fingerprints. As opposed to the expected outcome, our experimental setup consisting of over 25,000 trained models and statistical tests revealed that the predictive performance using molecular embeddings did not significantly surpass that of traditional representations. While supervised embeddings yielded competitive results compared to those using traditional molecular representations, unsupervised embeddings tended to perform worse than traditional representations. Our results highlight the need for conducting a careful comparison and analysis of the different embedding techniques prior to using them in drug design tasks, and motivate a discussion about the potential of molecular embeddings in computer-aided drug design.
[ { "created": "Sat, 20 Mar 2021 21:45:22 GMT", "version": "v1" }, { "created": "Wed, 28 Jul 2021 15:30:22 GMT", "version": "v2" } ]
2022-05-09
[ [ "Sabando", "María Virginia", "" ], [ "Ponzoni", "Ignacio", "" ], [ "Milios", "Evangelos E.", "" ], [ "Soto", "Axel J.", "" ] ]
With the consolidation of deep learning in drug discovery, several novel algorithms for learning molecular representations have been proposed. Despite the interest of the community in developing new methods for learning molecular embeddings and their theoretical benefits, comparing molecular embeddings with each other and with traditional representations is not straightforward, which in turn hinders the process of choosing a suitable representation for QSAR modeling. A reason behind this issue is the difficulty of conducting a fair and thorough comparison of the different existing embedding approaches, which requires numerous experiments on various datasets and training scenarios. To close this gap, we reviewed the literature on methods for molecular embeddings and reproduced three unsupervised and two supervised molecular embedding techniques recently proposed in the literature. We compared these five methods concerning their performance in QSAR scenarios using different classification and regression datasets. We also compared these representations to traditional molecular representations, namely molecular descriptors and fingerprints. As opposed to the expected outcome, our experimental setup consisting of over 25,000 trained models and statistical tests revealed that the predictive performance using molecular embeddings did not significantly surpass that of traditional representations. While supervised embeddings yielded competitive results compared to those using traditional molecular representations, unsupervised embeddings tended to perform worse than traditional representations. Our results highlight the need for conducting a careful comparison and analysis of the different embedding techniques prior to using them in drug design tasks, and motivate a discussion about the potential of molecular embeddings in computer-aided drug design.
1911.01174
Alexander L\"uck
Alexander L\"uck, Verena Wolf
Generalized Method of Moments Estimation for Stochastic Models of DNA Methylation Patterns
12 pages, 3 figures, 1 table
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With recent advances in sequencing technologies, large amounts of epigenomic data have become available and computational methods are contributing significantly to the progress of epigenetic research. As an orthogonal approach to methods based on machine learning, mechanistic modeling aims at a description of the mechanisms underlying epigenetic changes. Here, we propose an efficient method for parameter estimation for stochastic models that describe the dynamics of DNA methylation patterns over time. Our method is based on the Generalized Method of Moments (GMM) and gives results with an accuracy similar to that of maximum likelihood-based estimation approaches. However, in contrast to the latter, the GMM still allows an efficient and accurate calibration of parameters even if the complexity of the model is increased by considering longer methylation patterns. We show the usefulness of our method by applying it to hairpin bisulfite sequencing data from mouse ESCs for varying pattern lengths.
[ { "created": "Mon, 4 Nov 2019 13:01:24 GMT", "version": "v1" } ]
2019-11-05
[ [ "Lück", "Alexander", "" ], [ "Wolf", "Verena", "" ] ]
With recent advances in sequencing technologies, large amounts of epigenomic data have become available and computational methods are contributing significantly to the progress of epigenetic research. As an orthogonal approach to methods based on machine learning, mechanistic modeling aims at a description of the mechanisms underlying epigenetic changes. Here, we propose an efficient method for parameter estimation for stochastic models that describe the dynamics of DNA methylation patterns over time. Our method is based on the Generalized Method of Moments (GMM) and gives results with an accuracy similar to that of maximum likelihood-based estimation approaches. However, in contrast to the latter, the GMM still allows an efficient and accurate calibration of parameters even if the complexity of the model is increased by considering longer methylation patterns. We show the usefulness of our method by applying it to hairpin bisulfite sequencing data from mouse ESCs for varying pattern lengths.
2405.07123
Jordan Rozum
Austin M. Marcus, Jordan Rozum, Herbert Sizek, and Luis M. Rocha
CANA v1.0.0 and schematodes: efficient quantification of symmetry in Boolean automata
5 pages 1 figure (two images in figure)
null
null
null
q-bio.MN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
The biomolecular networks underpinning cell function exhibit canalization, or the buffering of fluctuations required to function in a noisy environment. One understudied putative mechanism for canalization is the functional equivalence of a biomolecular entity's regulators (e.g., among the transcription factors for a gene). In these discrete dynamical systems, activation and inhibition of biomolecular entities (e.g., transcription of genes) are modeled as the activity of coupled 2-state automata, and thus the equivalence of regulators can be studied using the theory of symmetry in discrete functions. To this end, we present a new exact algorithm for finding maximal symmetry groups among the inputs to discrete functions. We implement this algorithm in Rust as a Python package, schematodes. We include schematodes in the new CANA v1.0.0 release, an open source Python library for analyzing canalization in Boolean networks, which we also present here. We compare our exact method implemented in schematodes to the previously published inexact method used in earlier releases of CANA and find that schematodes significantly outperforms the prior method both in speed and accuracy. We also apply CANA v1.0.0 to study the symmetry properties of regulatory function from an ensemble of experimentally-supported Boolean networks from the Cell Collective. Using CANA v1.0.0, we find that the distribution of a previously reported symmetry parameter, $k_s/k$, is statistically significantly different in the Cell Collective than in random automata with the same in-degree and activation bias (Kolmogorov-Smirnov test, $p<0.001$). In particular, its spread is much wider than in our null model (IQR 0.31 vs IQR 0.20 with equal medians), demonstrating that the Cell Collective is enriched in functions with extreme symmetry or asymmetry.
[ { "created": "Sun, 12 May 2024 01:14:00 GMT", "version": "v1" } ]
2024-05-14
[ [ "Marcus", "Austin M.", "" ], [ "Rozum", "Jordan", "" ], [ "Sizek", "Herbert", "" ], [ "Rocha", "Luis M.", "" ] ]
The biomolecular networks underpinning cell function exhibit canalization, or the buffering of fluctuations required to function in a noisy environment. One understudied putative mechanism for canalization is the functional equivalence of a biomolecular entity's regulators (e.g., among the transcription factors for a gene). In these discrete dynamical systems, activation and inhibition of biomolecular entities (e.g., transcription of genes) are modeled as the activity of coupled 2-state automata, and thus the equivalence of regulators can be studied using the theory of symmetry in discrete functions. To this end, we present a new exact algorithm for finding maximal symmetry groups among the inputs to discrete functions. We implement this algorithm in Rust as a Python package, schematodes. We include schematodes in the new CANA v1.0.0 release, an open source Python library for analyzing canalization in Boolean networks, which we also present here. We compare our exact method implemented in schematodes to the previously published inexact method used in earlier releases of CANA and find that schematodes significantly outperforms the prior method both in speed and accuracy. We also apply CANA v1.0.0 to study the symmetry properties of regulatory function from an ensemble of experimentally-supported Boolean networks from the Cell Collective. Using CANA v1.0.0, we find that the distribution of a previously reported symmetry parameter, $k_s/k$, is statistically significantly different in the Cell Collective than in random automata with the same in-degree and activation bias (Kolmogorov-Smirnov test, $p<0.001$). In particular, its spread is much wider than in our null model (IQR 0.31 vs IQR 0.20 with equal medians), demonstrating that the Cell Collective is enriched in functions with extreme symmetry or asymmetry.
2106.13202
Ju An Park
Ju An Park, Vikram Voleti, Kathryn E. Thomas, Alexander Wong and Jason L. Deglint
SALT: Sea lice Adaptive Lattice Tracking -- An Unsupervised Approach to Generate an Improved Ocean Model
5 pages, 3 figures, 3 tables
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Warming oceans due to climate change are leading to increased numbers of ectoparasitic copepods, also known as sea lice, which can cause significant ecological loss to wild salmon populations and major economic loss to aquaculture sites. The main transport mechanism driving the spread of sea lice populations are near-surface ocean currents. Present strategies to estimate the distribution of sea lice larvae are computationally complex and limit full-scale analysis. Motivated to address this challenge, we propose SALT: Sea lice Adaptive Lattice Tracking approach for efficient estimation of sea lice dispersion and distribution in space and time. Specifically, an adaptive spatial mesh is generated by merging nodes in the lattice graph of the Ocean Model based on local ocean properties, thus enabling highly efficient graph representation. SALT demonstrates improved efficiency while maintaining consistent results with the standard method, using near-surface current data for Hardangerfjord, Norway. The proposed SALT technique shows promise for enhancing proactive aquaculture management through predictive modelling of sea lice infestation pressure maps in a changing climate.
[ { "created": "Thu, 24 Jun 2021 17:29:42 GMT", "version": "v1" } ]
2021-06-25
[ [ "Park", "Ju An", "" ], [ "Voleti", "Vikram", "" ], [ "Thomas", "Kathryn E.", "" ], [ "Wong", "Alexander", "" ], [ "Deglint", "Jason L.", "" ] ]
Warming oceans due to climate change are leading to increased numbers of ectoparasitic copepods, also known as sea lice, which can cause significant ecological loss to wild salmon populations and major economic loss to aquaculture sites. The main transport mechanism driving the spread of sea lice populations is near-surface ocean currents. Present strategies to estimate the distribution of sea lice larvae are computationally complex and limit full-scale analysis. Motivated to address this challenge, we propose SALT: Sea lice Adaptive Lattice Tracking approach for efficient estimation of sea lice dispersion and distribution in space and time. Specifically, an adaptive spatial mesh is generated by merging nodes in the lattice graph of the Ocean Model based on local ocean properties, thus enabling highly efficient graph representation. SALT demonstrates improved efficiency while maintaining consistent results with the standard method, using near-surface current data for Hardangerfjord, Norway. The proposed SALT technique shows promise for enhancing proactive aquaculture management through predictive modelling of sea lice infestation pressure maps in a changing climate.
1503.07116
Bartlomiej Waclaw Dr
Bartlomiej Waclaw, Ivana Bozic, Meredith E. Pittman, Ralph H. Hruban, Bert Vogelstein, Martin A. Nowak
Spatial model predicts dispersal and cell turnover cause reduced intra-tumor heterogeneity
37 pages, 14 figures
null
10.1038/nature14971
null
q-bio.PE q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most cancers in humans are large, measuring centimeters in diameter, composed of many billions of cells. An equivalent mass of normal cells would be highly heterogeneous as a result of the mutations that occur during each cell division. What is remarkable about cancers is their homogeneity - virtually every neoplastic cell within a large cancer contains the same core set of genetic alterations, with heterogeneity confined to mutations that have emerged after the last clonal expansions. How such clones expand within the spatially-constrained three dimensional architecture of a tumor, and come to dominate a large, pre-existing lesion, has never been explained. We here describe a model for tumor evolution that shows how short-range migration and cell turnover can account for rapid cell mixing inside the tumor. With it, we show that even a small selective advantage of a single cell within a large tumor allows the descendants of that cell to replace the precursor mass in a clinically relevant time frame. We also demonstrate that the same mechanisms can be responsible for the rapid onset of resistance to chemotherapy. Our model not only provides novel insights into spatial and temporal aspects of tumor growth but also suggests that targeting short range cellular migratory activity could have dramatic effects on tumor growth rates.
[ { "created": "Tue, 24 Mar 2015 17:22:54 GMT", "version": "v1" } ]
2016-02-17
[ [ "Waclaw", "Bartlomiej", "" ], [ "Bozic", "Ivana", "" ], [ "Pittman", "Meredith E.", "" ], [ "Hruban", "Ralph H.", "" ], [ "Vogelstein", "Bert", "" ], [ "Nowak", "Martin A.", "" ] ]
Most cancers in humans are large, measuring centimeters in diameter, composed of many billions of cells. An equivalent mass of normal cells would be highly heterogeneous as a result of the mutations that occur during each cell division. What is remarkable about cancers is their homogeneity - virtually every neoplastic cell within a large cancer contains the same core set of genetic alterations, with heterogeneity confined to mutations that have emerged after the last clonal expansions. How such clones expand within the spatially-constrained three dimensional architecture of a tumor, and come to dominate a large, pre-existing lesion, has never been explained. We here describe a model for tumor evolution that shows how short-range migration and cell turnover can account for rapid cell mixing inside the tumor. With it, we show that even a small selective advantage of a single cell within a large tumor allows the descendants of that cell to replace the precursor mass in a clinically relevant time frame. We also demonstrate that the same mechanisms can be responsible for the rapid onset of resistance to chemotherapy. Our model not only provides novel insights into spatial and temporal aspects of tumor growth but also suggests that targeting short range cellular migratory activity could have dramatic effects on tumor growth rates.
1301.1590
Hamidreza Chitsaz
Hamidreza Chitsaz and Elmirasadat Forouzmand and Gholamreza Haffari
An Efficient Algorithm for Upper Bound on the Partition Function of Nucleic Acids
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been shown that the minimum free energy structure for RNAs and RNA-RNA interaction is often incorrect due to inaccuracies in the energy parameters and inherent limitations of the energy model. In contrast, ensemble based quantities such as melting temperature and equilibrium concentrations can be more reliably predicted. Even structure prediction by sampling from the ensemble and clustering those structures by Sfold [7] has proven to be more reliable than minimum free energy structure prediction. The main obstacle for ensemble based approaches is the computational complexity of the partition function and base pairing probabilities. For instance, the space complexity of the partition function for RNA-RNA interaction is $O(n^4)$ and the time complexity is $O(n^6)$ which are prohibitively large [4,12]. Our goal in this paper is to give a fast algorithm, based on sparse folding, to calculate an upper bound on the partition function. Our work is based on the recent algorithm of Hazan and Jaakkola [10]. The space complexity of our algorithm is the same as that of sparse folding algorithms, and the time complexity of our algorithm is $O(MFE(n)\ell)$ for single RNA and $O(MFE(m, n)\ell)$ for RNA-RNA interaction in practice, in which $MFE$ is the running time of sparse folding and $\ell \leq n$ ($\ell \leq n + m$) is a sequence dependent parameter.
[ { "created": "Tue, 8 Jan 2013 16:58:28 GMT", "version": "v1" } ]
2013-01-09
[ [ "Chitsaz", "Hamidreza", "" ], [ "Forouzmand", "Elmirasadat", "" ], [ "Haffari", "Gholamreza", "" ] ]
It has been shown that the minimum free energy structure for RNAs and RNA-RNA interaction is often incorrect due to inaccuracies in the energy parameters and inherent limitations of the energy model. In contrast, ensemble based quantities such as melting temperature and equilibrium concentrations can be more reliably predicted. Even structure prediction by sampling from the ensemble and clustering those structures by Sfold [7] has proven to be more reliable than minimum free energy structure prediction. The main obstacle for ensemble based approaches is the computational complexity of the partition function and base pairing probabilities. For instance, the space complexity of the partition function for RNA-RNA interaction is $O(n^4)$ and the time complexity is $O(n^6)$ which are prohibitively large [4,12]. Our goal in this paper is to give a fast algorithm, based on sparse folding, to calculate an upper bound on the partition function. Our work is based on the recent algorithm of Hazan and Jaakkola [10]. The space complexity of our algorithm is the same as that of sparse folding algorithms, and the time complexity of our algorithm is $O(MFE(n)\ell)$ for single RNA and $O(MFE(m, n)\ell)$ for RNA-RNA interaction in practice, in which $MFE$ is the running time of sparse folding and $\ell \leq n$ ($\ell \leq n + m$) is a sequence dependent parameter.
1807.07566
Yu-Cheng Chen
Yu-Cheng Chen, Qiushu Chen, Xiaotian Tan, Grace Chen, Ingrid Bergin, Muhammad Nadeem Aslam, and Xudong Fan
Chromatin Laser Imaging Reveals Abnormal Nuclear Changes for Early Cancer Detection
null
null
null
null
q-bio.TO physics.bio-ph physics.optics q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We developed and applied rapid scanning laser-emission microscopy (LEM) to detect abnormal changes in cell nuclei for early diagnosis of cancer and cancer precursors. Regulation of chromatins is essential for genetic development and normal cell functions, while abnormal nuclear changes may lead to many diseases, in particular, cancer. The capability to detect abnormal changes in apparently normal tissues at a stage earlier than tumor development is critical for cancer prevention. Here we report using LEM to analyze colonic tissues from mice at-risk for colon cancer by detecting prepolyp nuclear abnormality. By imaging the lasing emissions from chromatins, we discovered that, despite the absence of observable lesions, polyps, or tumors under stereoscope, high-fat mice exhibited significantly lower lasing thresholds than low-fat mice. The low lasing threshold is, in fact, very similar to that of adenomas and is caused by abnormal cell proliferation and chromatin deregulation that can potentially lead to cancer. Our findings suggest that conventional methods, such as colonoscopy, may be insufficient to reveal hidden or early tumors under development. We envision that this work will provide new insights into LEM for early tumor detection in clinical diagnosis and fundamental biological and biomedical research of chromatin changes at the biomolecular level of cancer development.
[ { "created": "Thu, 19 Jul 2018 11:08:10 GMT", "version": "v1" } ]
2018-07-23
[ [ "Chen", "Yu-Cheng", "" ], [ "Chen", "Qiushu", "" ], [ "Tan", "Xiaotian", "" ], [ "Chen", "Grace", "" ], [ "Bergin", "Ingrid", "" ], [ "Aslam", "Muhammad Nadeem", "" ], [ "Fan", "Xudong", "" ] ]
We developed and applied rapid scanning laser-emission microscopy (LEM) to detect abnormal changes in cell nuclei for early diagnosis of cancer and cancer precursors. Regulation of chromatins is essential for genetic development and normal cell functions, while abnormal nuclear changes may lead to many diseases, in particular, cancer. The capability to detect abnormal changes in apparently normal tissues at a stage earlier than tumor development is critical for cancer prevention. Here we report using LEM to analyze colonic tissues from mice at-risk for colon cancer by detecting prepolyp nuclear abnormality. By imaging the lasing emissions from chromatins, we discovered that, despite the absence of observable lesions, polyps, or tumors under stereoscope, high-fat mice exhibited significantly lower lasing thresholds than low-fat mice. The low lasing threshold is, in fact, very similar to that of adenomas and is caused by abnormal cell proliferation and chromatin deregulation that can potentially lead to cancer. Our findings suggest that conventional methods, such as colonoscopy, may be insufficient to reveal hidden or early tumors under development. We envision that this work will provide new insights into LEM for early tumor detection in clinical diagnosis and fundamental biological and biomedical research of chromatin changes at the biomolecular level of cancer development.
1902.08555
Jes\'us Fern\'andez-S\'anchez
Marta Casanellas, Jes\'us Fern\'andez-S\'anchez, Jordi Roca-Lacostena
Embeddability and rate identifiability of Kimura 2-parameter matrices
20 pages; 10 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deciding whether a Markov matrix is embeddable (i.e. can be written as the exponential of a rate matrix) is an open problem even for $4\times 4$ matrices. We study the embedding problem and rate identifiability for the K80 model of nucleotide substitution. For these $4\times 4$ matrices, we fully characterize the set of embeddable K80 Markov matrices and the set of embeddable matrices for which rates are identifiable. In particular, we describe an open subset of embeddable matrices with non-identifiable rates. This set contains matrices with positive eigenvalues and also diagonal largest in column matrices, which might lead to consequences in parameter estimation in phylogenetics. Finally, we compute the relative volumes of embeddable K80 matrices and of embeddable matrices with identifiable rates. This study concludes the embedding problem for the more general model K81 and its submodels, which had been initiated by the last two authors in a separate work.
[ { "created": "Fri, 22 Feb 2019 16:43:07 GMT", "version": "v1" }, { "created": "Wed, 27 Nov 2019 09:53:46 GMT", "version": "v2" } ]
2019-11-28
[ [ "Casanellas", "Marta", "" ], [ "Fernández-Sánchez", "Jesús", "" ], [ "Roca-Lacostena", "Jordi", "" ] ]
Deciding whether a Markov matrix is embeddable (i.e. can be written as the exponential of a rate matrix) is an open problem even for $4\times 4$ matrices. We study the embedding problem and rate identifiability for the K80 model of nucleotide substitution. For these $4\times 4$ matrices, we fully characterize the set of embeddable K80 Markov matrices and the set of embeddable matrices for which rates are identifiable. In particular, we describe an open subset of embeddable matrices with non-identifiable rates. This set contains matrices with positive eigenvalues and also diagonal largest in column matrices, which might lead to consequences in parameter estimation in phylogenetics. Finally, we compute the relative volumes of embeddable K80 matrices and of embeddable matrices with identifiable rates. This study concludes the embedding problem for the more general model K81 and its submodels, which had been initiated by the last two authors in a separate work.
2106.05783
Leonardo Dalla Porta
Leonardo Dalla Porta, Daniel M. Castro, Mauro Copelli, Pedro V. Carelli, and Fernanda S. Matias
Feedforward and feedback influences through distinct frequency bands between two spiking-neuron networks
null
null
10.1103/PhysRevE.104.054404
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Several studies with brain signals suggested that bottom-up and top-down influences are exerted through distinct frequency bands among visual cortical areas. It has been recently shown that theta and gamma rhythms subserve feedforward, whereas the feedback influence is dominated by the alpha-beta rhythm in primates. A few theoretical models for reproducing these effects have been proposed so far. Here we show that a simple but biophysically plausible two-network motif composed of spiking-neuron models and chemical synapses can exhibit feedforward and feedback influences through distinct frequency bands. Differently from previous studies, this kind of model allows us to study directed influences not only at the population level, by using a proxy for the local field potential, but also at the cellular level, by using the neuronal spiking series.
[ { "created": "Thu, 10 Jun 2021 14:34:03 GMT", "version": "v1" } ]
2021-11-24
[ [ "Porta", "Leonardo Dalla", "" ], [ "Castro", "Daniel M.", "" ], [ "Copelli", "Mauro", "" ], [ "Carelli", "Pedro V.", "" ], [ "Matias", "Fernanda S.", "" ] ]
Several studies with brain signals suggested that bottom-up and top-down influences are exerted through distinct frequency bands among visual cortical areas. It has been recently shown that theta and gamma rhythms subserve feedforward, whereas the feedback influence is dominated by the alpha-beta rhythm in primates. A few theoretical models for reproducing these effects have been proposed so far. Here we show that a simple but biophysically plausible two-network motif composed of spiking-neuron models and chemical synapses can exhibit feedforward and feedback influences through distinct frequency bands. Differently from previous studies, this kind of model allows us to study directed influences not only at the population level, by using a proxy for the local field potential, but also at the cellular level, by using the neuronal spiking series.
1807.07127
Charo del Genio
Erin Connelly, Charo I. del Genio, Freya Harrison
Datamining a medieval medical text reveals patterns in ingredient choice that reflect biological activity against the causative agents of specified infections
27 pages, 4 figures
null
null
null
q-bio.QM cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The pharmacopeia used by physicians and lay people in medieval Europe has largely been dismissed as placebo or superstition. While we now recognise that some of the materia medica used by medieval physicians could have had useful biological properties, research in this area is limited by the labour-intensive process of searching and interpreting historical medical texts. Here, we demonstrate the potential power of turning medieval medical texts into contextualised electronic databases amenable to exploration by algorithm. We use established methodologies from network science to reveal statistically significant patterns in ingredient selection and usage in a key text, the fifteenth-century Lylye of Medicynes, focusing on remedies to treat symptoms of microbial infection. We discuss the potential that these patterns reflect rational medical decisions. In providing a worked example of data-driven textual analysis, we demonstrate the potential of this approach to encourage interdisciplinary collaboration and to shine a new light on the ethnopharmacology of historical medical texts.
[ { "created": "Wed, 18 Jul 2018 20:08:18 GMT", "version": "v1" } ]
2018-07-20
[ [ "Connelly", "Erin", "" ], [ "del Genio", "Charo I.", "" ], [ "Harrison", "Freya", "" ] ]
The pharmacopeia used by physicians and lay people in medieval Europe has largely been dismissed as placebo or superstition. While we now recognise that some of the materia medica used by medieval physicians could have had useful biological properties, research in this area is limited by the labour-intensive process of searching and interpreting historical medical texts. Here, we demonstrate the potential power of turning medieval medical texts into contextualised electronic databases amenable to exploration by algorithm. We use established methodologies from network science to reveal statistically significant patterns in ingredient selection and usage in a key text, the fifteenth-century Lylye of Medicynes, focusing on remedies to treat symptoms of microbial infection. We discuss the potential that these patterns reflect rational medical decisions. In providing a worked example of data-driven textual analysis, we demonstrate the potential of this approach to encourage interdisciplinary collaboration and to shine a new light on the ethnopharmacology of historical medical texts.
1109.5159
Michael Knudsen
Michael Knudsen, Elisenda Feliu, Carsten Wiuf
Exact Analysis of Intrinsic Qualitative Features of Phosphorelays using Mathematical Models
null
null
null
null
q-bio.MN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phosphorelays are a class of signaling mechanisms used by cells to respond to changes in their environment. Phosphorelays (of which two-component systems constitute a special case) are particularly abundant in prokaryotes and have been shown to be involved in many fundamental processes such as stress response, osmotic regulation, virulence, and chemotaxis. We develop a general model of phosphorelays extending existing models of phosphorelays and two-component systems. We analyze the model analytically under the assumption of mass-action kinetics and prove that a phosphorelay has a unique stable steady-state. Furthermore, we derive explicit functions relating stimulus to the response in any layer of a phosphorelay and show that a limited degree of ultrasensitivity (the ability to respond to changes in stimulus in a switch-like manner) in the bottom layer of a phosphorelay is an intrinsic feature which does not depend on any reaction rates or substrate amounts. On the other hand, we show how adjusting reaction rates and substrate amounts may lead to higher degrees of ultrasensitivity in intermediate layers. The explicit formulas also enable us to prove how the response changes with alterations in stimulus, kinetic parameters, and substrate amounts. Aside from providing biological insight, the formulas may also be used to avoid time-consuming simulations in numerical analyses and simulations.
[ { "created": "Fri, 23 Sep 2011 19:08:28 GMT", "version": "v1" } ]
2011-09-26
[ [ "Knudsen", "Michael", "" ], [ "Feliu", "Elisenda", "" ], [ "Wiuf", "Carsten", "" ] ]
Phosphorelays are a class of signaling mechanisms used by cells to respond to changes in their environment. Phosphorelays (of which two-component systems constitute a special case) are particularly abundant in prokaryotes and have been shown to be involved in many fundamental processes such as stress response, osmotic regulation, virulence, and chemotaxis. We develop a general model of phosphorelays extending existing models of phosphorelays and two-component systems. We analyze the model analytically under the assumption of mass-action kinetics and prove that a phosphorelay has a unique stable steady-state. Furthermore, we derive explicit functions relating stimulus to the response in any layer of a phosphorelay and show that a limited degree of ultrasensitivity (the ability to respond to changes in stimulus in a switch-like manner) in the bottom layer of a phosphorelay is an intrinsic feature which does not depend on any reaction rates or substrate amounts. On the other hand, we show how adjusting reaction rates and substrate amounts may lead to higher degrees of ultrasensitivity in intermediate layers. The explicit formulas also enable us to prove how the response changes with alterations in stimulus, kinetic parameters, and substrate amounts. Aside from providing biological insight, the formulas may also be used to avoid time-consuming simulations in numerical analyses and simulations.
1807.08686
Anita Mehta
Anita Mehta
Storing and retrieving long-term memories: cooperation and competition in synaptic dynamics
34 pages, 7 figures
Advances in Physics: X, 3:1, 755-789, (2018)
10.1080/23746149.2018.1480415
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We first review traditional approaches to memory storage and formation, drawing on the literature of quantitative neuroscience as well as statistical physics. These have generally focused on the fast dynamics of neurons; however, there is now an increasing emphasis on the slow dynamics of synapses, whose weight changes are held to be responsible for memory storage. An important first step in this direction was taken in the context of Fusi's cascade model, where complex synaptic architectures were invoked, in particular, to store long-term memories. No explicit synaptic dynamics were, however, invoked in that work. These were recently incorporated theoretically using the techniques used in agent-based modelling, and subsequently, models of competing and cooperating synapses were formulated. It was found that the key to the storage of long-term memories lay in the competitive dynamics of synapses. In this review, we focus on models of synaptic competition and cooperation, and look at the outstanding challenges that remain.
[ { "created": "Mon, 23 Jul 2018 15:54:33 GMT", "version": "v1" } ]
2018-07-24
[ [ "Mehta", "Anita", "" ] ]
We first review traditional approaches to memory storage and formation, drawing on the literature of quantitative neuroscience as well as statistical physics. These have generally focused on the fast dynamics of neurons; however, there is now an increasing emphasis on the slow dynamics of synapses, whose weight changes are held to be responsible for memory storage. An important first step in this direction was taken in the context of Fusi's cascade model, where complex synaptic architectures were invoked, in particular, to store long-term memories. No explicit synaptic dynamics were, however, invoked in that work. These were recently incorporated theoretically using the techniques used in agent-based modelling, and subsequently, models of competing and cooperating synapses were formulated. It was found that the key to the storage of long-term memories lay in the competitive dynamics of synapses. In this review, we focus on models of synaptic competition and cooperation, and look at the outstanding challenges that remain.
2008.01237
Haoyu Cheng
Haoyu Cheng, Gregory T Concepcion, Xiaowen Feng, Haowen Zhang and Heng Li
Haplotype-resolved de novo assembly with phased assembly graphs
11 pages, 3 figures, 3 tables
Nature Methods, 2021
10.1038/s41592-020-01056-5
null
q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Haplotype-resolved de novo assembly is the ultimate solution to the study of sequence variations in a genome. However, existing algorithms either collapse heterozygous alleles into one consensus copy or fail to cleanly separate the haplotypes to produce high-quality phased assemblies. Here we describe hifiasm, a new de novo assembler that takes advantage of long high-fidelity sequence reads to faithfully represent the haplotype information in a phased assembly graph. Unlike other graph-based assemblers that only aim to maintain the contiguity of one haplotype, hifiasm strives to preserve the contiguity of all haplotypes. This feature enables the development of a graph trio binning algorithm that greatly advances over standard trio binning. On three human and five non-human datasets, including California redwood with a $\sim$30-gigabase hexaploid genome, we show that hifiasm frequently delivers better assemblies than existing tools and consistently outperforms others on haplotype-resolved assembly.
[ { "created": "Mon, 3 Aug 2020 23:10:44 GMT", "version": "v1" } ]
2021-02-03
[ [ "Cheng", "Haoyu", "" ], [ "Concepcion", "Gregory T", "" ], [ "Feng", "Xiaowen", "" ], [ "Zhang", "Haowen", "" ], [ "Li", "Heng", "" ] ]
Haplotype-resolved de novo assembly is the ultimate solution to the study of sequence variations in a genome. However, existing algorithms either collapse heterozygous alleles into one consensus copy or fail to cleanly separate the haplotypes to produce high-quality phased assemblies. Here we describe hifiasm, a new de novo assembler that takes advantage of long high-fidelity sequence reads to faithfully represent the haplotype information in a phased assembly graph. Unlike other graph-based assemblers that only aim to maintain the contiguity of one haplotype, hifiasm strives to preserve the contiguity of all haplotypes. This feature enables the development of a graph trio binning algorithm that greatly advances over standard trio binning. On three human and five non-human datasets, including California redwood with a $\sim$30-gigabase hexaploid genome, we show that hifiasm frequently delivers better assemblies than existing tools and consistently outperforms others on haplotype-resolved assembly.
0809.0391
Masudul Haque
Alexey Mikaberidze, Masudul Haque
Survival benefits in mimicry: a quantitative framework
9 pages, 7 figures
Journal of Theoretical Biology, Vol. 259, pages 462-468 (2009)
10.1016/j.jtbi.2009.02.024
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mimicry is a resemblance between species that benefits at least one of the species. It is a ubiquitous evolutionary phenomenon particularly common among prey species, in which case the advantage involves better protection from predation. We formulate a mathematical description of mimicry among prey species, to investigate benefits and disadvantages of mimicry. The basic setup involves differential equations for quantities representing predator behavior, namely, the probabilities for attacking prey at the next encounter. Using this framework, we present new quantitative results, and also provide a unified description of a significant fraction of the quantitative mimicry literature. The new results include `temporary' mutualism between prey species, and an optimal density at which the survival benefit is greatest for the mimic. The formalism leads naturally to extensions in several directions, such as the evolution of mimicry, the interplay of mimicry with population dynamics, etc. We demonstrate this extensibility by presenting some explorations on spatiotemporal pattern dynamics.
[ { "created": "Tue, 2 Sep 2008 10:19:34 GMT", "version": "v1" } ]
2011-08-09
[ [ "Mikaberidze", "Alexey", "" ], [ "Haque", "Masudul", "" ] ]
Mimicry is a resemblance between species that benefits at least one of the species. It is a ubiquitous evolutionary phenomenon particularly common among prey species, in which case the advantage involves better protection from predation. We formulate a mathematical description of mimicry among prey species, to investigate benefits and disadvantages of mimicry. The basic setup involves differential equations for quantities representing predator behavior, namely, the probabilities for attacking prey at the next encounter. Using this framework, we present new quantitative results, and also provide a unified description of a significant fraction of the quantitative mimicry literature. The new results include `temporary' mutualism between prey species, and an optimal density at which the survival benefit is greatest for the mimic. The formalism leads naturally to extensions in several directions, such as the evolution of mimicry, the interplay of mimicry with population dynamics, etc. We demonstrate this extensibility by presenting some explorations on spatiotemporal pattern dynamics.
1307.8407
Ilya Zhbannikov
Ilya Y. Zhbannikov, Samuel S. Hunter, Matthew L. Settles and James A. Foster
SlopMap: a software application tool for quick and flexible identification of similar sequences using exact k-mer matching
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advent of Next-Generation (NG) sequencing, it has become possible to sequence an entire genome quickly and inexpensively. However, in some experiments one only needs to extract and assemble a portion of the sequence reads, for example when performing transcriptome studies, sequencing mitochondrial genomes, or characterizing exomes. With the raw DNA library of a complete genome it would appear to be a trivial problem to identify reads of interest. But it is not always easy to incorporate well-known tools such as BLAST, BLAT, Bowtie, and SOAP directly into a bioinformatics pipeline before the assembly stage, either due to incompatibility with the assembler's file inputs, or because it is desirable to incorporate information that must be extracted separately. For example, in order to incorporate flowgrams from a Roche 454 sequencer into the Newbler assembler it is necessary to first extract them from the original SFF files. We present SlopMap, a bioinformatics software utility which allows rapid identification of sequences similar to provided target sequences from either Roche 454 or Illumina DNA libraries. With a simple and intuitive command-line interface along with file output formats compatible with assembly programs, SlopMap can be directly embedded in a biological data processing pipeline without any additional programming work. In addition, SlopMap preserves the flowgram information needed by the Roche 454 assembler.
[ { "created": "Wed, 31 Jul 2013 18:06:05 GMT", "version": "v1" } ]
2013-08-01
[ [ "Zhbannikov", "Ilya Y.", "" ], [ "Hunter", "Samuel S.", "" ], [ "Settles", "Matthew L.", "" ], [ "Foster", "James A.", "" ] ]
With the advent of Next-Generation (NG) sequencing, it has become possible to sequence an entire genome quickly and inexpensively. However, in some experiments one only needs to extract and assemble a portion of the sequence reads, for example when performing transcriptome studies, sequencing mitochondrial genomes, or characterizing exomes. With the raw DNA library of a complete genome it would appear to be a trivial problem to identify reads of interest. But it is not always easy to incorporate well-known tools such as BLAST, BLAT, Bowtie, and SOAP directly into a bioinformatics pipeline before the assembly stage, either due to incompatibility with the assembler's file inputs, or because it is desirable to incorporate information that must be extracted separately. For example, in order to incorporate flowgrams from a Roche 454 sequencer into the Newbler assembler it is necessary to first extract them from the original SFF files. We present SlopMap, a bioinformatics software utility which allows rapid identification of sequences similar to provided target sequences from either Roche 454 or Illumina DNA libraries. With a simple and intuitive command-line interface along with file output formats compatible with assembly programs, SlopMap can be directly embedded in a biological data processing pipeline without any additional programming work. In addition, SlopMap preserves the flowgram information needed by the Roche 454 assembler.
2212.11367
Adam Tonks
Adam Tonks (1), Trevor Harris (2), Bo Li (1), William Brown (3), Rebecca Smith (3) ((1) Department of Statistics, University of Illinois at Urbana-Champaign, (2) Department of Statistics, Texas A&M University, (3) Department of Pathobiology, University of Illinois at Urbana-Champaign)
Forecasting West Nile Virus with Graph Neural Networks: Harnessing Spatial Dependence in Irregularly Sampled Geospatial Data
null
GeoHealth 8 (7), e2023GH000784
10.1029/2023GH000784
null
q-bio.PE cs.LG q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning methods have seen increased application to geospatial environmental problems, such as precipitation nowcasting, haze forecasting, and crop yield prediction. However, many of the machine learning methods applied to mosquito population and disease forecasting do not inherently take into account the underlying spatial structure of the given data. In our work, we apply a spatially aware graph neural network model consisting of GraphSAGE layers to forecast the presence of West Nile virus in Illinois, to aid mosquito surveillance and abatement efforts within the state. More generally, we show that graph neural networks applied to irregularly sampled geospatial data can exceed the performance of a range of baseline methods including logistic regression, XGBoost, and fully-connected neural networks.
[ { "created": "Wed, 21 Dec 2022 21:08:45 GMT", "version": "v1" } ]
2024-07-09
[ [ "Tonks", "Adam", "" ], [ "Harris", "Trevor", "" ], [ "Li", "Bo", "" ], [ "Brown", "William", "" ], [ "Smith", "Rebecca", "" ] ]
Machine learning methods have seen increased application to geospatial environmental problems, such as precipitation nowcasting, haze forecasting, and crop yield prediction. However, many of the machine learning methods applied to mosquito population and disease forecasting do not inherently take into account the underlying spatial structure of the given data. In our work, we apply a spatially aware graph neural network model consisting of GraphSAGE layers to forecast the presence of West Nile virus in Illinois, to aid mosquito surveillance and abatement efforts within the state. More generally, we show that graph neural networks applied to irregularly sampled geospatial data can exceed the performance of a range of baseline methods including logistic regression, XGBoost, and fully-connected neural networks.
1105.0866
Shivendra Tewari
Shivendra Tewari and Kaushik Majumdar
A Mathematical Model of Tripartite Synapse: Astrocyte Induced Synaptic Plasticity
42 pages, 14 figures, Journal of Biological Physics (to appear)
null
10.1007/s10867-012-9267-7
null
q-bio.NC math.DS q-bio.CB q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a biologically detailed mathematical model of tripartite synapses, where astrocytes modulate short-term synaptic plasticity. The model consists of a pre-synaptic bouton, a post-synaptic dendritic spine-head, a synaptic cleft and a peri-synaptic astrocyte controlling Ca2+ dynamics inside the synaptic bouton. This in turn controls glutamate release dynamics in the cleft. As a consequence of this, glutamate concentration in the cleft has been modeled, in which glutamate reuptake by astrocytes has also been incorporated. Finally, dendritic spine-head dynamics has been modeled. As an application, this model clearly shows synaptic potentiation in the hippocampal region, i.e., astrocyte Ca2+ mediates synaptic plasticity, which is in conformity with the majority of the recent findings (Perea & Araque, 2007; Henneberger et al., 2010; Navarrete et al., 2012).
[ { "created": "Wed, 4 May 2011 16:33:28 GMT", "version": "v1" }, { "created": "Sat, 17 Dec 2011 16:43:50 GMT", "version": "v2" }, { "created": "Mon, 12 Mar 2012 13:45:18 GMT", "version": "v3" } ]
2012-06-05
[ [ "Tewari", "Shivendra", "" ], [ "Majumdar", "Kaushik", "" ] ]
In this paper we present a biologically detailed mathematical model of tripartite synapses, where astrocytes modulate short-term synaptic plasticity. The model consists of a pre-synaptic bouton, a post-synaptic dendritic spine-head, a synaptic cleft and a peri-synaptic astrocyte controlling Ca2+ dynamics inside the synaptic bouton. This in turn controls glutamate release dynamics in the cleft. As a consequence of this, glutamate concentration in the cleft has been modeled, in which glutamate reuptake by astrocytes has also been incorporated. Finally, dendritic spine-head dynamics has been modeled. As an application, this model clearly shows synaptic potentiation in the hippocampal region, i.e., astrocyte Ca2+ mediates synaptic plasticity, which is in conformity with the majority of the recent findings (Perea & Araque, 2007; Henneberger et al., 2010; Navarrete et al., 2012).
1906.08317
Jonathan Desponds
Jonathan Desponds, Massimo Vergassola and Aleksandra M. Walczak
Hunchback promoters can readout morphogenetic positional information in less than a minute
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The first cell fate decisions in the developing fly embryo are made very rapidly: hunchback genes decide in a few minutes whether a given nucleus follows the anterior or the posterior developmental blueprint by reading out the positional information encoded in the Bicoid morphogen. This developmental system constitutes a prototypical instance of the broad spectrum of regulatory decision processes that combine speed and accuracy. Traditional arguments based on fixed-time sampling of Bicoid concentration indicate that an accurate readout is not possible within the short times observed experimentally. This raises the general issue of how speed-accuracy tradeoffs are achieved. Here, we compare fixed-time sampling strategies to decisions made on-the-fly, which are based on updating and comparing the likelihoods of being at an anterior or a posterior location. We found that these more efficient schemes can complete reliable cell fate decisions even within the very short embryological timescales. We discuss the influence of promoter architectures on the mean decision time and decision error rate and present concrete promoter architectures that allow for the fast readout of the morphogen. Lastly, we formulate explicit predictions for new experiments involving Bicoid mutants.
[ { "created": "Wed, 19 Jun 2019 19:20:11 GMT", "version": "v1" }, { "created": "Fri, 21 Jun 2019 17:49:01 GMT", "version": "v2" } ]
2019-06-24
[ [ "Desponds", "Jonathan", "" ], [ "Vergassola", "Massimo", "" ], [ "Walczak", "Aleksandra M.", "" ] ]
The first cell fate decisions in the developing fly embryo are made very rapidly: hunchback genes decide in a few minutes whether a given nucleus follows the anterior or the posterior developmental blueprint by reading out the positional information encoded in the Bicoid morphogen. This developmental system constitutes a prototypical instance of the broad spectrum of regulatory decision processes that combine speed and accuracy. Traditional arguments based on fixed-time sampling of Bicoid concentration indicate that an accurate readout is not possible within the short times observed experimentally. This raises the general issue of how speed-accuracy tradeoffs are achieved. Here, we compare fixed-time sampling strategies to decisions made on-the-fly, which are based on updating and comparing the likelihoods of being at an anterior or a posterior location. We found that these more efficient schemes can complete reliable cell fate decisions even within the very short embryological timescales. We discuss the influence of promoter architectures on the mean decision time and decision error rate and present concrete promoter architectures that allow for the fast readout of the morphogen. Lastly, we formulate explicit predictions for new experiments involving Bicoid mutants.
1608.03047
John Wentworth
Emma Wentworth and John Wentworth
Computational Limitations of First-Order Repressor Systems
null
null
null
null
q-bio.MN cs.SY math.DS q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Almost all current approaches for engineering modular logic components in synthetic biology use first-order regulators, including most CRISPR/CAS, TAL, zinc finger, and RNA interference systems. Many practitioners understand intuitively that second and higher order binding is necessary for scalability, and this is easy to show for single-input single-output systems. However, no study to date has analysed whether a more complex system, utilizing e.g. feedback or error correction, can produce scalable computation from first-order regulators. We prove here that first order repressor systems cannot support bistability. In the process, we introduce a function G to measure signal quality in molecular systems, and we show that G always decreases in dynamic feedback systems as well as static feed-forward logic cascades of first-order repressors. As a result, first order repressors cannot build memory or signal buffering elements. Finally, we suggest G as a potential new property for characterization of standard biological parts.
[ { "created": "Wed, 10 Aug 2016 05:00:30 GMT", "version": "v1" } ]
2016-08-11
[ [ "Wentworth", "Emma", "" ], [ "Wentworth", "John", "" ] ]
Almost all current approaches for engineering modular logic components in synthetic biology use first-order regulators, including most CRISPR/CAS, TAL, zinc finger, and RNA interference systems. Many practitioners understand intuitively that second and higher order binding is necessary for scalability, and this is easy to show for single-input single-output systems. However, no study to date has analysed whether a more complex system, utilizing e.g. feedback or error correction, can produce scalable computation from first-order regulators. We prove here that first order repressor systems cannot support bistability. In the process, we introduce a function G to measure signal quality in molecular systems, and we show that G always decreases in dynamic feedback systems as well as static feed-forward logic cascades of first-order repressors. As a result, first order repressors cannot build memory or signal buffering elements. Finally, we suggest G as a potential new property for characterization of standard biological parts.
2010.00214
Antoine Le Gall
B. Guilhas, J.C. Walter, J. Rech, G. David, N.-O. Walliser, J. Palmeri, C. Mathieu-Demaziere, A. Parmeggiani, J.Y. Bouet, A. Le Gall, M. Nollmann
ATP-driven separation of liquid phase condensates in bacteria
null
Molecular Cell, 2020, Pages 293-303.e4
10.1016/j.molcel.2020.06.034
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Liquid-liquid phase separated (LLPS) states are key to compartmentalising components in the absence of membranes; however, it is unclear whether LLPS condensates are actively and specifically organized in the sub-cellular space and by which mechanisms. Here, we address this question by focusing on the ParABS DNA segregation system, composed of a centromeric-like sequence (parS), a DNA-binding protein (ParB) and a motor (ParA). We show that parS and ParB associate to form nanometer-sized, round condensates. ParB molecules diffuse rapidly within the nucleoid volume, but display confined motions when trapped inside ParB condensates. Single ParB molecules are able to rapidly diffuse between different condensates, and nucleation is strongly favoured by parS. Notably, the ParA motor is required to prevent the fusion of ParB condensates. These results describe a novel active mechanism that splits, segregates and localises non-canonical LLPS condensates in the sub-cellular space.
[ { "created": "Thu, 1 Oct 2020 06:44:38 GMT", "version": "v1" } ]
2020-10-02
[ [ "Guilhas", "B.", "" ], [ "Walter", "J. C.", "" ], [ "Rech", "J.", "" ], [ "David", "G.", "" ], [ "Walliser", "N. -O.", "" ], [ "Palmeri", "J.", "" ], [ "Mathieu-Demaziere", "C.", "" ], [ "Parmeggiani", "A.", "" ], [ "Bouet", "J. Y.", "" ], [ "Gall", "A. Le", "" ], [ "Nollmann", "M.", "" ] ]
Liquid-liquid phase separated (LLPS) states are key to compartmentalising components in the absence of membranes; however, it is unclear whether LLPS condensates are actively and specifically organized in the sub-cellular space and by which mechanisms. Here, we address this question by focusing on the ParABS DNA segregation system, composed of a centromeric-like sequence (parS), a DNA-binding protein (ParB) and a motor (ParA). We show that parS and ParB associate to form nanometer-sized, round condensates. ParB molecules diffuse rapidly within the nucleoid volume, but display confined motions when trapped inside ParB condensates. Single ParB molecules are able to rapidly diffuse between different condensates, and nucleation is strongly favoured by parS. Notably, the ParA motor is required to prevent the fusion of ParB condensates. These results describe a novel active mechanism that splits, segregates and localises non-canonical LLPS condensates in the sub-cellular space.
1310.4522
Orjan Carlborg
Xia Shen, Simon Forsberg, Mats Pettersson, Zheya Sheng and Orjan Carlborg
Natural CMT2 variation is associated with genome-wide methylation changes and temperature adaptation
Rewrite to improve clarity of presentation. Results unchanged. 43 p, 3 main fig, 1 main table, 15 suppl fig, 2 suppl tables. Particular updates - New title - More detailed abstract and introduction - Updated results section for clarity and focus - Updated discussion connecting work to unpublished work in other research groups - Corrected typos - Updated references and acknowledgements
PLoS Genet 10(12): e1004842
10.1371/journal.pgen.1004842
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A central problem when studying adaptation to a new environment is the interplay between genetic variation and phenotypic plasticity. Arabidopsis thaliana has colonized a wide range of habitats across the world and it is therefore an attractive model for studying the genetic mechanisms underlying environmental adaptation 4,5. Here, we used publicly available data from two collections of A. thaliana accessions, covering the native range of the species, to identify loci associated with differences in climates at the sampling sites. To address the confounding between geographic location, climate and population structure, a new genome-wide association analysis method was developed that facilitates detection of potentially adaptive loci where the alternative alleles display different tolerable climate ranges. Sixteen novel such loci, many of which contained candidate genes with amino acid changes, were found including a strong association between Chromomethylase 2 (CMT2) and variability in seasonal temperatures. The reference allele dominated in areas with less seasonal variability in temperature, and the alternative allele, which disrupts genome-wide CHH-methylation, existed in both stable and variable regions. Our results link natural variation in CMT2, and differential genome-wide CHH methylation, to the distribution of A. thaliana accessions across habitats with different seasonal temperature variability. They also suggest a role for genetic regulation of epigenetic modifications in natural adaptation, potentially through differential allelic plasticity, and illustrate the importance of re-analyses of existing data using new analytical methods to obtain a more complete understanding of the mechanisms contributing to adaptation.
[ { "created": "Wed, 16 Oct 2013 21:27:07 GMT", "version": "v1" }, { "created": "Thu, 9 Jan 2014 10:18:27 GMT", "version": "v2" } ]
2014-12-22
[ [ "Shen", "Xia", "" ], [ "Forsberg", "Simon", "" ], [ "Pettersson", "Mats", "" ], [ "Sheng", "Zheya", "" ], [ "Carlborg", "Orjan", "" ] ]
A central problem when studying adaptation to a new environment is the interplay between genetic variation and phenotypic plasticity. Arabidopsis thaliana has colonized a wide range of habitats across the world and it is therefore an attractive model for studying the genetic mechanisms underlying environmental adaptation 4,5. Here, we used publicly available data from two collections of A. thaliana accessions, covering the native range of the species, to identify loci associated with differences in climates at the sampling sites. To address the confounding between geographic location, climate and population structure, a new genome-wide association analysis method was developed that facilitates detection of potentially adaptive loci where the alternative alleles display different tolerable climate ranges. Sixteen novel such loci, many of which contained candidate genes with amino acid changes, were found including a strong association between Chromomethylase 2 (CMT2) and variability in seasonal temperatures. The reference allele dominated in areas with less seasonal variability in temperature, and the alternative allele, which disrupts genome-wide CHH-methylation, existed in both stable and variable regions. Our results link natural variation in CMT2, and differential genome-wide CHH methylation, to the distribution of A. thaliana accessions across habitats with different seasonal temperature variability. They also suggest a role for genetic regulation of epigenetic modifications in natural adaptation, potentially through differential allelic plasticity, and illustrate the importance of re-analyses of existing data using new analytical methods to obtain a more complete understanding of the mechanisms contributing to adaptation.
1310.6590
Chuan-Chao Wang
Chuan-Chao Wang, Hui Li
Discovery of Phylogenetic Relevant Y-chromosome Variants in 1000 Genomes Project Data
11 pages, 14 figures
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current Y chromosome research is limited by the poor resolution of the Y chromosome phylogenetic tree. Entirely sequenced Y chromosomes from numerous human individuals have only recently become available through the advent of next-generation sequencing technology. The 1000 Genomes Project has sequenced Y chromosomes from more than 1000 males. Here, we analyzed the 1000 Genomes Project Y chromosome data of 1269 individuals and discovered about 25,000 phylogenetically relevant SNPs. These new markers are useful in the phylogeny of the Y chromosome and will lead to increased phylogenetic resolution for many Y chromosome studies.
[ { "created": "Thu, 24 Oct 2013 13:02:50 GMT", "version": "v1" } ]
2013-10-25
[ [ "Wang", "Chuan-Chao", "" ], [ "Li", "Hui", "" ] ]
Current Y chromosome research is limited by the poor resolution of the Y chromosome phylogenetic tree. Entirely sequenced Y chromosomes from numerous human individuals have only recently become available through the advent of next-generation sequencing technology. The 1000 Genomes Project has sequenced Y chromosomes from more than 1000 males. Here, we analyzed the 1000 Genomes Project Y chromosome data of 1269 individuals and discovered about 25,000 phylogenetically relevant SNPs. These new markers are useful in the phylogeny of the Y chromosome and will lead to increased phylogenetic resolution for many Y chromosome studies.
2304.00148
Samaila Jackson Yaga
S.J Yaga and F.W.O Saporu
A study of a deterministic model for meningitis epidemic
null
null
null
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by-nc-nd/4.0/
A compartmental deterministic model that allows (1) immunity from two stages of infection and carriage, and (2) disease-induced death, is used in studying the dynamics of the meningitis epidemic process in a closed population. It allows for a difference in the transmission rate of infection to a susceptible by a carrier and an infective. It is generalized to allow a proportion ({\phi}) of those susceptibles infected to progress directly to infectives in stage I. Both models are used in this study. The threshold conditions for the spread of carriers and infectives in stage I are derived for the two models. Sensitivity analysis is performed on the reproductive number derived from the next generation matrix. The case-carrier ratio profiles for various parameters and threshold values are shown. So also are the graphs of the total number ever infected as influenced by {\epsilon} and {\phi}. The infection transmission rate ({\beta}), the odds in favor of a carrier, over an infective, in transmitting an infection to a susceptible ({\epsilon}) and the carrier conversion rate ({\phi}) to an infective in stage I, are identified as key parameters that should be the subject of attention for any control intervention strategy. The case-carrier ratio profiles provide evidence of a critical case-carrier ratio attained before the number of reported cases grows to an epidemic level. They also provide visual evidence of epidemiological context, in this case, epidemic incidence (in the later part of the dry season) and endemic incidence (during the rainy season). Results for the total proportion ever infected suggest that the model in which {\phi}=0 can, in essence, adequately represent the generalized model for this study.
[ { "created": "Fri, 31 Mar 2023 21:54:48 GMT", "version": "v1" } ]
2023-04-04
[ [ "Yaga", "S. J", "" ], [ "Saporu", "F. W. O", "" ] ]
A compartmental deterministic model that allows (1) immunity from two stages of infection and carriage, and (2) disease-induced death, is used in studying the dynamics of the meningitis epidemic process in a closed population. It allows for a difference in the transmission rate of infection to a susceptible by a carrier and an infective. It is generalized to allow a proportion ({\phi}) of those susceptibles infected to progress directly to infectives in stage I. Both models are used in this study. The threshold conditions for the spread of carriers and infectives in stage I are derived for the two models. Sensitivity analysis is performed on the reproductive number derived from the next generation matrix. The case-carrier ratio profiles for various parameters and threshold values are shown. So also are the graphs of the total number ever infected as influenced by {\epsilon} and {\phi}. The infection transmission rate ({\beta}), the odds in favor of a carrier, over an infective, in transmitting an infection to a susceptible ({\epsilon}) and the carrier conversion rate ({\phi}) to an infective in stage I, are identified as key parameters that should be the subject of attention for any control intervention strategy. The case-carrier ratio profiles provide evidence of a critical case-carrier ratio attained before the number of reported cases grows to an epidemic level. They also provide visual evidence of epidemiological context, in this case, epidemic incidence (in the later part of the dry season) and endemic incidence (during the rainy season). Results for the total proportion ever infected suggest that the model in which {\phi}=0 can, in essence, adequately represent the generalized model for this study.
2107.01670
Emil Iftekhar
Emil Nafis Iftekhar, Viola Priesemann, Rudi Balling, Simon Bauer, Philippe Beutels, Andr\'e Calero Valdez, Sarah Cuschieri, Thomas Czypionka, Uga Dumpis, Enrico Glaab, Eva Grill, Claudia Hanson, Pirta Hotulainen, Peter Klimek, Mirjam Kretzschmar, Tyll Kr\"uger, Jenny Krutzinna, Nicola Low, Helena Machado, Carlos Martins, Martin McKee, Sebastian Bernd Mohr, Armin Nassehi, Matja\v{z} Perc, Elena Petelos, Martyn Pickersgill, Barbara Prainsack, Joacim Rockl\"ov, Eva Schernhammer, Anthony Staines, Ewa Szczurek, Sotirios Tsiodras, Steven Van Gucht, Peter Willeit
A look into the future of the COVID-19 pandemic in Europe: an expert consultation
Manuscript is accepted by The Lancet Regional Health - Europe as a Viewpoint article. Supplementary material can be accessed here: https://owncloud.gwdg.de/index.php/f/1439962756
Lancet Reg. Health Eur. 8, 100185 (2021)
10.1016/j.lanepe.2021.100185
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
How will the coronavirus disease 2019 (COVID-19) pandemic develop in the coming months and years? Based on an expert survey, we examine key aspects that are likely to influence COVID-19 in Europe. The future challenges and developments will strongly depend on the progress of national and global vaccination programs, the emergence and spread of variants of concern (VOCs), and public responses to nonpharmaceutical interventions (NPIs). In the short term, many people are still unvaccinated, VOCs continue to emerge and spread, and mobility and population mixing is expected to increase over the summer. Therefore, policies that lift restrictions too much and too early risk another damaging wave. This challenge remains despite the reduced opportunities for transmission due to vaccination progress and reduced indoor mixing in the summer. In autumn 2021, increased indoor activity might accelerate the spread again, but a necessary reintroduction of NPIs might be too slow. The incidence may strongly rise again, possibly filling intensive care units, if vaccination levels are not high enough. A moderate, adaptive level of NPIs will thus remain necessary. These epidemiological aspects are put into perspective with the economic, social, and health-related consequences and thereby provide a holistic perspective on the future of COVID-19.
[ { "created": "Sun, 4 Jul 2021 15:55:34 GMT", "version": "v1" }, { "created": "Fri, 23 Jul 2021 09:57:38 GMT", "version": "v2" } ]
2021-10-04
[ [ "Iftekhar", "Emil Nafis", "" ], [ "Priesemann", "Viola", "" ], [ "Balling", "Rudi", "" ], [ "Bauer", "Simon", "" ], [ "Beutels", "Philippe", "" ], [ "Valdez", "André Calero", "" ], [ "Cuschieri", "Sarah", "" ], [ "Czypionka", "Thomas", "" ], [ "Dumpis", "Uga", "" ], [ "Glaab", "Enrico", "" ], [ "Grill", "Eva", "" ], [ "Hanson", "Claudia", "" ], [ "Hotulainen", "Pirta", "" ], [ "Klimek", "Peter", "" ], [ "Kretzschmar", "Mirjam", "" ], [ "Krüger", "Tyll", "" ], [ "Krutzinna", "Jenny", "" ], [ "Low", "Nicola", "" ], [ "Machado", "Helena", "" ], [ "Martins", "Carlos", "" ], [ "McKee", "Martin", "" ], [ "Mohr", "Sebastian Bernd", "" ], [ "Nassehi", "Armin", "" ], [ "Perc", "Matjaž", "" ], [ "Petelos", "Elena", "" ], [ "Pickersgill", "Martyn", "" ], [ "Prainsack", "Barbara", "" ], [ "Rocklöv", "Joacim", "" ], [ "Schernhammer", "Eva", "" ], [ "Staines", "Anthony", "" ], [ "Szczurek", "Ewa", "" ], [ "Tsiodras", "Sotirios", "" ], [ "Van Gucht", "Steven", "" ], [ "Willeit", "Peter", "" ] ]
How will the coronavirus disease 2019 (COVID-19) pandemic develop in the coming months and years? Based on an expert survey, we examine key aspects that are likely to influence COVID-19 in Europe. The future challenges and developments will strongly depend on the progress of national and global vaccination programs, the emergence and spread of variants of concern (VOCs), and public responses to nonpharmaceutical interventions (NPIs). In the short term, many people are still unvaccinated, VOCs continue to emerge and spread, and mobility and population mixing is expected to increase over the summer. Therefore, policies that lift restrictions too much and too early risk another damaging wave. This challenge remains despite the reduced opportunities for transmission due to vaccination progress and reduced indoor mixing in the summer. In autumn 2021, increased indoor activity might accelerate the spread again, but a necessary reintroduction of NPIs might be too slow. The incidence may strongly rise again, possibly filling intensive care units, if vaccination levels are not high enough. A moderate, adaptive level of NPIs will thus remain necessary. These epidemiological aspects are put into perspective with the economic, social, and health-related consequences and thereby provide a holistic perspective on the future of COVID-19.
1302.0395
Vitaly Vodyanoy
T. Moore, I. Sorokulova, O. Pustovyy, L. Globa, D. Pascoe, M. Rudisill and Vitaly Vodyanoy
Microscopic and thermodynamic evaluation of vesicles shed by erythrocytes at elevated temperatures
18 pages, 7 figures, Submitted to the Journal of Thermal Biology on January 25, 2013
null
null
null
q-bio.TO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Erythrocytes and vesicles shed by erythrocytes from human and rat blood were collected and analyzed after temperature was elevated by physical exercise or by exposure to external heat. The images of erythrocytes and vesicles were analyzed with a light microscopy system with a spatial resolution better than 90 nm. The samples were observed in an aqueous environment and required no freezing, dehydration, staining, shadowing, marking or any other manipulation. Temperature elevation, whether passive or through exercise, resulted in a significant increase in the concentration of structurally transformed erythrocytes (echinocytes) and vesicles in blood. At a temperature of 37 °C, mean vesicle concentrations and diameters in human and rat blood were (1.50+-0.35)x10^6 and (1.4+-0.2)x10^6 vesicles/{\mu}L, and 0.365+-0.065 and 0.436+-0.03 {\mu}m, respectively. It was estimated that 80% of all vesicles found in human blood are smaller than 0.4 {\mu}m. Thermodynamic analysis of experimental and literature data showed that erythrocyte transformation, vesicle release and other associated processes are driven by entropy with enthalpy-entropy compensation. It is suggested that the physical state of the hydrated cell membrane is responsible for the compensation. The increase in vesicle number at elevated temperatures may be indicative of the level of heat stress and serve as a diagnostic of erythrocyte stability and human performance.
[ { "created": "Sat, 2 Feb 2013 15:58:44 GMT", "version": "v1" }, { "created": "Tue, 5 Feb 2013 16:13:31 GMT", "version": "v2" } ]
2013-02-06
[ [ "Moore", "T.", "" ], [ "Sorokulova", "I.", "" ], [ "Pustovyy", "O.", "" ], [ "Globa", "L.", "" ], [ "Pascoe", "D.", "" ], [ "Rudisill", "M.", "" ], [ "Vodyanoy", "Vitaly", "" ] ]
Erythrocytes and vesicles shed by erythrocytes from human and rat blood were collected and analyzed after temperature was elevated by physical exercise or by exposure to external heat. The images of erythrocytes and vesicles were analyzed with a light microscopy system with a spatial resolution better than 90 nm. The samples were observed in an aqueous environment and required no freezing, dehydration, staining, shadowing, marking or any other manipulation. Temperature elevation, whether passive or through exercise, resulted in a significant increase in the concentration of structurally transformed erythrocytes (echinocytes) and vesicles in blood. At a temperature of 37 °C, mean vesicle concentrations and diameters in human and rat blood were (1.50+-0.35)x10^6 and (1.4+-0.2)x10^6 vesicles/{\mu}L, and 0.365+-0.065 and 0.436+-0.03 {\mu}m, respectively. It was estimated that 80% of all vesicles found in human blood are smaller than 0.4 {\mu}m. Thermodynamic analysis of experimental and literature data showed that erythrocyte transformation, vesicle release and other associated processes are driven by entropy with enthalpy-entropy compensation. It is suggested that the physical state of the hydrated cell membrane is responsible for the compensation. The increase in vesicle number at elevated temperatures may be indicative of the level of heat stress and serve as a diagnostic of erythrocyte stability and human performance.
2403.05762
Xinyu Yu
Hongguang Pan and Xinyu Yu and Yong Yang
Lateral Control of Brain-Controlled Vehicle Based on SVM Probability Output Model
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The non-stationary characteristics of EEG signals and the individual differences among brain-computer interfaces (BCIs) lead to poor performance in the control process of brain-controlled vehicles (BCVs). In this paper, by combining a steady-state visual evoked potential (SSVEP) interactive interface, a brain instruction generation module and a vehicle lateral control module, a probabilistic output model based on a support vector machine (SVM) is proposed for BCV lateral control to improve driving performance. Firstly, a filter bank common spatial pattern (FBCSP) algorithm is introduced into the brain instruction generation module, which can improve the off-line decoding performance. Secondly, a sigmoid-fitting SVM (SF-SVM) is trained based on the sigmoid-fitting method and the lateral control module is developed, which can produce all commands in the form of probabilities instead of a specific single command. Finally, a pre-experiment and two road-keeping experiments are conducted. In the pre-experiment, the results show that the average highest off-line accuracy among subjects is 95.64\%, while in the online stage the average accuracy is only 84.44\%. In the road-keeping experiments, the task completion rate in the two designed scenes increased by 25.6\% and 20\%, respectively.
[ { "created": "Sat, 9 Mar 2024 02:15:06 GMT", "version": "v1" } ]
2024-03-12
[ [ "Pan", "Hongguang", "" ], [ "Yu", "Xinyu", "" ], [ "Yang", "Yong", "" ] ]
The non-stationary characteristics of EEG signals and the individual differences of brain-computer interfaces (BCIs) lead to poor performance in the control process of brain-controlled vehicles (BCVs). In this paper, by combining a steady-state visual evoked potential (SSVEP) interactive interface, a brain instruction generation module and a vehicle lateral control module, a probabilistic output model based on a support vector machine (SVM) is proposed for BCV lateral control to improve driving performance. Firstly, a filter bank common spatial pattern (FBCSP) algorithm is introduced into the brain instruction generation module, which improves the off-line decoding performance. Secondly, a sigmoid-fitting SVM (SF-SVM) is trained based on the sigmoid-fitting method and the lateral control module is developed, which produces all commands in the form of probabilities instead of a specific single command. Finally, a pre-experiment and two road-keeping experiments are conducted. In the pre-experiment, the results show that the average highest off-line accuracy among subjects is 95.64\%, while in the online stage the average accuracy is only 84.44\%. In the road-keeping experiments, the task completion rate in the two designed scenes increased by 25.6\% and 20\%, respectively.
q-bio/0505053
Christophe Pouzat
Matthieu Delescluse (LPC), Christophe Pouzat (LPC)
Efficient spike-sorting of multi-state neurons using inter-spike intervals information
25 pages, to be published in Journal of Neuroscience Methods
null
null
null
q-bio.QM math.ST physics.bio-ph physics.data-an stat.TH
null
We demonstrate the efficacy of a new spike-sorting method based on a Markov Chain Monte Carlo (MCMC) algorithm by applying it to real data recorded from Purkinje cells (PCs) in young rat cerebellar slices. This algorithm is unique in its capability to estimate and make use of the firing statistics as well as the spike amplitude dynamics of the recorded neurons. PCs exhibit multiple discharge states, giving rise to multimodal interspike interval (ISI) histograms and to correlations between successive ISIs. The amplitude of the spikes generated by a PC in an "active" state decreases, a feature typical of many neurons from both vertebrates and invertebrates. These two features constitute a major and recurrent problem for all the presently available spike-sorting methods. We first show that a Hidden Markov Model with 3 log-Normal states provides a flexible and satisfactory description of the complex firing of single PCs. We then incorporate this model into our previous MCMC-based spike-sorting algorithm (Pouzat et al, 2004, J. Neurophys. 91, 2910-2928) and test this new algorithm on multi-unit recordings of bursting PCs. We show that our method successfully classifies the bursty spike trains fired by PCs by using an independent single unit recording from a patch-clamp pipette.
[ { "created": "Fri, 27 May 2005 11:53:44 GMT", "version": "v1" } ]
2011-11-10
[ [ "Delescluse", "Matthieu", "", "LPC" ], [ "Pouzat", "Christophe", "", "LPC" ] ]
We demonstrate the efficacy of a new spike-sorting method based on a Markov Chain Monte Carlo (MCMC) algorithm by applying it to real data recorded from Purkinje cells (PCs) in young rat cerebellar slices. This algorithm is unique in its capability to estimate and make use of the firing statistics as well as the spike amplitude dynamics of the recorded neurons. PCs exhibit multiple discharge states, giving rise to multimodal interspike interval (ISI) histograms and to correlations between successive ISIs. The amplitude of the spikes generated by a PC in an "active" state decreases, a feature typical of many neurons from both vertebrates and invertebrates. These two features constitute a major and recurrent problem for all the presently available spike-sorting methods. We first show that a Hidden Markov Model with 3 log-Normal states provides a flexible and satisfactory description of the complex firing of single PCs. We then incorporate this model into our previous MCMC-based spike-sorting algorithm (Pouzat et al, 2004, J. Neurophys. 91, 2910-2928) and test this new algorithm on multi-unit recordings of bursting PCs. We show that our method successfully classifies the bursty spike trains fired by PCs by using an independent single unit recording from a patch-clamp pipette.
1711.09113
Michael Deem
Melia E. Bonomo and Michael W. Deem
How the other half lives: CRISPR-Cas's influence on bacteriophages
24 pages, 8 figures
Evolutionary Biology : Self, Non-Self Evolution, Species and Complex Traits, Evolution, Methods and Concepts, ISBN 978-3-319-61569-1, edited by Pierre Pontarotti, Springer Nature, September 2017, pp. 63-85
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
CRISPR-Cas is a genetic adaptive immune system unique to prokaryotic cells used to combat phage and plasmid threats. The host cell adapts by incorporating DNA sequences from invading phages or plasmids into its CRISPR locus as spacers. These spacers are expressed as mobile surveillance RNAs that direct CRISPR-associated (Cas) proteins to protect against subsequent attack by the same phages or plasmids. The threat from mobile genetic elements inevitably shapes the CRISPR loci of archaea and bacteria, and simultaneously the CRISPR-Cas immune system drives evolution of these invaders. Here we highlight our recent work, as well as that of others, that seeks to understand phage mechanisms of CRISPR-Cas evasion and conditions for population coexistence of phages with CRISPR-protected prokaryotes.
[ { "created": "Fri, 24 Nov 2017 19:29:35 GMT", "version": "v1" } ]
2017-11-28
[ [ "Bonomo", "Melia E.", "" ], [ "Deem", "Michael W.", "" ] ]
CRISPR-Cas is a genetic adaptive immune system unique to prokaryotic cells used to combat phage and plasmid threats. The host cell adapts by incorporating DNA sequences from invading phages or plasmids into its CRISPR locus as spacers. These spacers are expressed as mobile surveillance RNAs that direct CRISPR-associated (Cas) proteins to protect against subsequent attack by the same phages or plasmids. The threat from mobile genetic elements inevitably shapes the CRISPR loci of archaea and bacteria, and simultaneously the CRISPR-Cas immune system drives evolution of these invaders. Here we highlight our recent work, as well as that of others, that seeks to understand phage mechanisms of CRISPR-Cas evasion and conditions for population coexistence of phages with CRISPR-protected prokaryotes.
2211.09005
Florian Nill
Florian Nill
Endemic Oscillations for SARS-CoV-2 Omicron -- A SIRS model analysis
19 pages, 9 figures
Chaos, Solitons and Fractals 173 (2023) 113678
10.1016/j.chaos.2023.113678
null
q-bio.PE math.DS physics.soc-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
The SIRS model with constant vaccination and immunity waning rates is well known to show a transition from a disease-free to an endemic equilibrium as the basic reproduction number $r_0$ is raised above the threshold. It is shown that this model maps to Hethcote's classic endemic model originally published in 1973. In this way one obtains unifying formulas for a whole class of models showing endemic bifurcation. In particular, if the vaccination rate is smaller than the recovery rate and $r_- < r_0 < r_+$ for certain upper and lower bounds $r_\pm$, then trajectories spiral into the endemic equilibrium via damped infection waves. The latest data on the SARS-CoV-2 Omicron variant suggest that, according to this simplified model, continuous vaccination programs will not be capable of escaping the oscillating endemic phase. However, in view of the strong damping factors predicted by the model, in reality these oscillations will certainly be overruled by time-dependent contact behaviors.
[ { "created": "Wed, 16 Nov 2022 16:02:04 GMT", "version": "v1" }, { "created": "Tue, 30 May 2023 08:58:07 GMT", "version": "v2" } ]
2023-06-26
[ [ "Nill", "Florian", "" ] ]
The SIRS model with constant vaccination and immunity waning rates is well known to show a transition from a disease-free to an endemic equilibrium as the basic reproduction number $r_0$ is raised above the threshold. It is shown that this model maps to Hethcote's classic endemic model originally published in 1973. In this way one obtains unifying formulas for a whole class of models showing endemic bifurcation. In particular, if the vaccination rate is smaller than the recovery rate and $r_- < r_0 < r_+$ for certain upper and lower bounds $r_\pm$, then trajectories spiral into the endemic equilibrium via damped infection waves. The latest data on the SARS-CoV-2 Omicron variant suggest that, according to this simplified model, continuous vaccination programs will not be capable of escaping the oscillating endemic phase. However, in view of the strong damping factors predicted by the model, in reality these oscillations will certainly be overruled by time-dependent contact behaviors.