Dataset schema (per record; string lengths and list lengths give the observed min-max):
  id              string, length 9-13
  submitter       string, length 4-48
  authors         string, length 4-9.62k
  title           string, length 4-343
  comments        string, length 2-480
  journal-ref     string, length 9-309
  doi             string, length 12-138
  report-no       string, 277 classes
  categories      string, length 8-87
  license         string, 9 classes
  orig_abstract   string, length 27-3.76k
  versions        list, length 1-15
  update_date     string, length 10-10
  authors_parsed  list, length 1-147
  abstract        string, length 24-3.75k
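The per-record layout above can be sketched as a plain Python dict. The values below are copied from the first record in the dump (arXiv 1709.01371), with TeX accents resolved to Unicode and the two long abstract fields elided; the `validate` checks are illustrative assumptions read off the stated field types, not an official loader.

```python
import re

# One record from the dump (arXiv 1709.01371), laid out per the schema.
# "null" fields are represented as None; orig_abstract and abstract are
# elided here for brevity.
record = {
    "id": "1709.01371",
    "submitter": "Burkhard Morgenstern",
    "authors": "Burkhard Morgenstern, Svenja Schöbel, Chris-André Leimeister",
    "title": "Estimating phylogenetic distances between genomic sequences "
             "based on the length distribution of k-mismatch common substrings",
    "comments": None,
    "journal-ref": None,
    "doi": None,
    "report-no": None,
    "categories": "q-bio.PE q-bio.GN",
    "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "versions": [{"created": "Tue, 5 Sep 2017 13:23:46 GMT", "version": "v1"}],
    "update_date": "2017-09-06",
    "authors_parsed": [
        ["Morgenstern", "Burkhard", ""],
        ["Schöbel", "Svenja", ""],
        ["Leimeister", "Chris-André", ""],
    ],
}

def validate(rec):
    """Light structural checks derived from the schema's stated types."""
    # versions: listlengths 1..15
    assert isinstance(rec["versions"], list) and 1 <= len(rec["versions"]) <= 15
    # update_date: stringlengths exactly 10, i.e. an ISO date
    assert re.fullmatch(r"\d{4}-\d{2}-\d{2}", rec["update_date"])
    # authors_parsed: each entry is [surname, given names, suffix]
    assert all(len(author) == 3 for author in rec["authors_parsed"])
    # categories: space-separated arXiv category codes
    assert all("." in cat for cat in rec["categories"].split())
    return True

print(validate(record))
```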
1709.01371
Burkhard Morgenstern
Burkhard Morgenstern, Svenja Sch\"obel, Chris-Andr\'e Leimeister
Estimating phylogenetic distances between genomic sequences based on the length distribution of k-mismatch common substrings
null
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Various approaches to alignment-free sequence comparison are based on the length of exact or inexact word matches between two input sequences. Haubold {\em et al.} (2009) showed how the average number of substitutions between two DNA sequences can be estimated based on the average length of exact common substrings. In this paper, we study the length distribution of $k$-mismatch common substrings between two sequences. We show that the number of substitutions per position that have occurred since two sequences evolved from their last common ancestor can be estimated from the position of a local maximum in the length distribution of their $k$-mismatch common substrings.
[ { "created": "Tue, 5 Sep 2017 13:23:46 GMT", "version": "v1" } ]
2017-09-06
[ [ "Morgenstern", "Burkhard", "" ], [ "Schöbel", "Svenja", "" ], [ "Leimeister", "Chris-André", "" ] ]
Various approaches to alignment-free sequence comparison are based on the length of exact or inexact word matches between two input sequences. Haubold {\em et al.} (2009) showed how the average number of substitutions between two DNA sequences can be estimated based on the average length of exact common substrings. In this paper, we study the length distribution of $k$-mismatch common substrings between two sequences. We show that the number of substitutions per position that have occurred since two sequences evolved from their last common ancestor can be estimated from the position of a local maximum in the length distribution of their $k$-mismatch common substrings.
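The $k$-mismatch common-substring statistic this abstract builds on can be sketched naively as follows. This is a quadratic-time illustration under my own reading of the abstract; the function names are hypothetical, and the authors' actual implementation (which this record does not show) would use far more efficient string indexes.

```python
from collections import Counter

def match_length(s, t, i, j, k):
    """Length of the longest common prefix of s[i:] and t[j:]
    allowing at most k mismatching positions."""
    mismatches = 0
    length = 0
    while i + length < len(s) and j + length < len(t):
        if s[i + length] != t[j + length]:
            mismatches += 1
            if mismatches > k:
                break
        length += 1
    return length

def k_mismatch_lengths(s, t, k):
    """For each position i of s, the length of the longest k-mismatch
    common substring of s starting at i, maximised over all starts in t."""
    return [max(match_length(s, t, i, j, k) for j in range(len(t)))
            for i in range(len(s))]

# The abstract's estimator reads the substitution rate off a local maximum
# of the distribution of these lengths.
dist = Counter(k_mismatch_lengths("ACGTACGT", "ACGAACGT", 1))
```

With k = 0 this reduces to exact longest-common-substring lengths per position, the quantity underlying the Haubold et al. (2009) estimator mentioned above.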
2210.08888
Carl Whitfield
Carl A Whitfield, University of Manchester COVID-19 Modelling Group, Ian Hall
Modelling the impact of repeat asymptomatic testing policies for staff on SARS-CoV-2 transmission potential
74 pages, 6 tables, 14 figures and supplementary table
null
null
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by/4.0/
Repeat asymptomatic testing in order to identify and quarantine infectious individuals has become a widely-used intervention to control SARS-CoV-2 transmission. In some workplaces, and in particular health and social care settings with vulnerable patients, regular asymptomatic testing has been deployed to staff to reduce the likelihood of workplace outbreaks. We have developed a model based on data available in the literature to predict the potential impact of repeat asymptomatic testing on SARS-CoV-2 transmission. The results highlight features that are important to consider when modelling testing interventions, including population heterogeneity of infectiousness and correlation with test-positive probability, as well as adherence behaviours in response to policy. Furthermore, the model based on the reduction in transmission potential presented here can be used to parameterise existing epidemiological models without them having to explicitly simulate the testing process. Overall, we find that even with different model parameterisations, in theory, regular asymptomatic testing is likely to be a highly effective measure to reduce transmission in workplaces, subject to adherence.
[ { "created": "Mon, 17 Oct 2022 09:34:45 GMT", "version": "v1" } ]
2022-10-18
[ [ "Whitfield", "Carl A", "" ], [ "Group", "University of Manchester COVID-19 Modelling", "" ], [ "Hall", "Ian", "" ] ]
Repeat asymptomatic testing in order to identify and quarantine infectious individuals has become a widely-used intervention to control SARS-CoV-2 transmission. In some workplaces, and in particular health and social care settings with vulnerable patients, regular asymptomatic testing has been deployed to staff to reduce the likelihood of workplace outbreaks. We have developed a model based on data available in the literature to predict the potential impact of repeat asymptomatic testing on SARS-CoV-2 transmission. The results highlight features that are important to consider when modelling testing interventions, including population heterogeneity of infectiousness and correlation with test-positive probability, as well as adherence behaviours in response to policy. Furthermore, the model based on the reduction in transmission potential presented here can be used to parameterise existing epidemiological models without them having to explicitly simulate the testing process. Overall, we find that even with different model parameterisations, in theory, regular asymptomatic testing is likely to be a highly effective measure to reduce transmission in workplaces, subject to adherence.
2211.14855
Javier Andreu-Perez Dr
Maria Laura Filippetti, Javier Andreu-Perez, Carina de Klerk, Chloe Richmond, Silvia Rigato
Are advanced methods necessary to improve infant fNIRS data analysis? An assessment of baseline-corrected averaging, general linear model (GLM) and multivariate pattern analysis (MVPA) based approaches
null
Neuroimage, 2022, 119756
10.1016/j.neuroimage.2022.119756
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
In the last decade, fNIRS has provided a non-invasive method to investigate neural activation in developmental populations. Despite its increasing use in developmental cognitive neuroscience, there is little consistency or consensus on how to pre-process and analyse infant fNIRS data. With this registered report, we investigated the feasibility of applying more advanced statistical analyses to infant fNIRS data and compared the most commonly used baseline-corrected averaging, General Linear Model (GLM)-based univariate, and Multivariate Pattern Analysis (MVPA) approaches, to show how the conclusions one would draw based on these different analysis approaches converge or differ. The different analysis methods were tested using a face inversion paradigm where changes in brain activation in response to upright and inverted face stimuli were measured in thirty 4-to-6-month-old infants. By including more standard approaches together with recent machine learning techniques, we aim to inform the fNIRS community on alternative ways to analyse infant fNIRS datasets.
[ { "created": "Sun, 27 Nov 2022 15:18:29 GMT", "version": "v1" } ]
2022-11-29
[ [ "Filippetti", "Maria Laura", "" ], [ "Andreu-Perez", "Javier", "" ], [ "de Klerk", "Carina", "" ], [ "Richmond", "Chloe", "" ], [ "Rigato", "Silvia", "" ] ]
In the last decade, fNIRS has provided a non-invasive method to investigate neural activation in developmental populations. Despite its increasing use in developmental cognitive neuroscience, there is little consistency or consensus on how to pre-process and analyse infant fNIRS data. With this registered report, we investigated the feasibility of applying more advanced statistical analyses to infant fNIRS data and compared the most commonly used baseline-corrected averaging, General Linear Model (GLM)-based univariate, and Multivariate Pattern Analysis (MVPA) approaches, to show how the conclusions one would draw based on these different analysis approaches converge or differ. The different analysis methods were tested using a face inversion paradigm where changes in brain activation in response to upright and inverted face stimuli were measured in thirty 4-to-6-month-old infants. By including more standard approaches together with recent machine learning techniques, we aim to inform the fNIRS community on alternative ways to analyse infant fNIRS datasets.
q-bio/0402033
Jonathan Doye
Jonathan P. K. Doye, Ard A. Louis, Michele Vendruscolo
Inhibition of protein crystallization by evolutionary negative design
5 pages
Physical Biology 1, P9-P13 (2004)
10.1088/1478-3967/1/1/P02
null
q-bio.BM cond-mat.soft physics.bio-ph
null
In this perspective we address the question: why are proteins seemingly so hard to crystallize? We suggest that this is because of evolutionary negative design, i.e. proteins have evolved not to crystallize, because crystallization, as with any type of protein aggregation, compromises the viability of the cell. There is much evidence in the literature that supports this hypothesis, including the effect of mutations on the crystallizability of a protein, the correlations found in the properties of crystal contacts in bioinformatics databases, and the positive use of protein crystallization by bacteria and viruses.
[ { "created": "Mon, 16 Feb 2004 10:54:12 GMT", "version": "v1" } ]
2007-05-23
[ [ "Doye", "Jonathan P. K.", "" ], [ "Louis", "Ard A.", "" ], [ "Vendruscolo", "Michele", "" ] ]
In this perspective we address the question: why are proteins seemingly so hard to crystallize? We suggest that this is because of evolutionary negative design, i.e. proteins have evolved not to crystallize, because crystallization, as with any type of protein aggregation, compromises the viability of the cell. There is much evidence in the literature that supports this hypothesis, including the effect of mutations on the crystallizability of a protein, the correlations found in the properties of crystal contacts in bioinformatics databases, and the positive use of protein crystallization by bacteria and viruses.
1507.07270
Ilan Golani
A.Gomez-Marin, E. Oron, A.Gakamsky, D. Valente, Y. Benjamini, I. Golani
Searching for behavioral homologies: Shared generative rules for expansion and narrowing down of the locomotor repertoire in Arthropods and Vertebrates
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We use immobility as an origin and reference for the measurement of locomotor behavior; speed, the direction of walking and the direction of facing as the three degrees of freedom shaping fly locomotor behavior, and cocaine as the parameter inducing a progressive transition in and out of immobility. In this way we expose and quantify the generative rules that shape fruit fly locomotor behavior, which consist of a gradual narrowing down of the fly's locomotor freedom of movement during the transition into immobility and a precisely opposite expansion of freedom during the transition from immobility to normal behavior. The same generative rules of narrowing down and expansion apply to vertebrate behavior in a variety of contexts. Recent claims for deep homology between the vertebrate basal ganglia and the arthropod central complex, and neurochemical processes explaining the expansion of locomotor behavior in vertebrates, could guide the search for equivalent neurochemical processes that mediate locomotor narrowing down and expansion in arthropods. We argue that a methodology for isolating relevant measures and quantifying generative rules with a potential for discovering candidate behavioral homologies is already available, and we specify some of its essential features.
[ { "created": "Mon, 27 Jul 2015 00:39:34 GMT", "version": "v1" } ]
2015-07-28
[ [ "Gomez-Marin", "A.", "" ], [ "Oron", "E.", "" ], [ "Gakamsky", "A.", "" ], [ "Valente", "D.", "" ], [ "Benjamini", "Y.", "" ], [ "Golani", "I.", "" ] ]
We use immobility as an origin and reference for the measurement of locomotor behavior; speed, the direction of walking and the direction of facing as the three degrees of freedom shaping fly locomotor behavior, and cocaine as the parameter inducing a progressive transition in and out of immobility. In this way we expose and quantify the generative rules that shape fruit fly locomotor behavior, which consist of a gradual narrowing down of the fly's locomotor freedom of movement during the transition into immobility and a precisely opposite expansion of freedom during the transition from immobility to normal behavior. The same generative rules of narrowing down and expansion apply to vertebrate behavior in a variety of contexts. Recent claims for deep homology between the vertebrate basal ganglia and the arthropod central complex, and neurochemical processes explaining the expansion of locomotor behavior in vertebrates, could guide the search for equivalent neurochemical processes that mediate locomotor narrowing down and expansion in arthropods. We argue that a methodology for isolating relevant measures and quantifying generative rules with a potential for discovering candidate behavioral homologies is already available, and we specify some of its essential features.
0903.2987
Christian Meisel
Christian Meisel and Thilo Gross
Adaptive self-organization in a realistic neural network model
6 pages, 4 figures
Phys. Rev. E 80, 061917 (2009)
10.1103/PhysRevE.80.061917
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information processing in complex systems is often found to be maximally efficient close to critical states associated with phase transitions. It is therefore conceivable that also neural information processing operates close to criticality. This is further supported by the observation of power-law distributions, which are a hallmark of phase transitions. An important open question is how neural networks could remain close to a critical point while undergoing a continual change in the course of development, adaptation, learning, and more. An influential contribution was made by Bornholdt and Rohlf, introducing a generic mechanism of robust self-organized criticality in adaptive networks. Here, we address the question whether this mechanism is relevant for real neural networks. We show in a realistic model that spike-time-dependent synaptic plasticity can self-organize neural networks robustly toward criticality. Our model reproduces several empirical observations and makes testable predictions on the distribution of synaptic strength, relating them to the critical state of the network. These results suggest that the interplay between dynamics and topology may be essential for neural information processing.
[ { "created": "Tue, 17 Mar 2009 15:51:05 GMT", "version": "v1" }, { "created": "Wed, 26 May 2010 08:55:11 GMT", "version": "v2" } ]
2015-05-13
[ [ "Meisel", "Christian", "" ], [ "Gross", "Thilo", "" ] ]
Information processing in complex systems is often found to be maximally efficient close to critical states associated with phase transitions. It is therefore conceivable that also neural information processing operates close to criticality. This is further supported by the observation of power-law distributions, which are a hallmark of phase transitions. An important open question is how neural networks could remain close to a critical point while undergoing a continual change in the course of development, adaptation, learning, and more. An influential contribution was made by Bornholdt and Rohlf, introducing a generic mechanism of robust self-organized criticality in adaptive networks. Here, we address the question whether this mechanism is relevant for real neural networks. We show in a realistic model that spike-time-dependent synaptic plasticity can self-organize neural networks robustly toward criticality. Our model reproduces several empirical observations and makes testable predictions on the distribution of synaptic strength, relating them to the critical state of the network. These results suggest that the interplay between dynamics and topology may be essential for neural information processing.
2111.06126
Javier Andreu-Perez Dr
Abbas Salami, Javier Andreu-Perez, Helge Gillmeister
Symptoms of depersonalisation/derealisation disorder as measured by brain electrical activity: A systematic review
null
Neuroscience & Biobehavioral Reviews (2020)
10.1016/j.neubiorev.2020.08.011
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Depersonalisation/derealisation disorder (DPD) refers to frequent and persistent detachment from bodily self and disengagement from the outside world. As a dissociative disorder, DPD affects 1-2% of the population, but takes 7-12 years on average to be accurately diagnosed. In this systematic review, we comprehensively describe research targeting the neural correlates of core DPD symptoms, covering publications between 1992 and 2020 that have used electrophysiological techniques. The aim was to investigate the diagnostic potential of these relatively inexpensive and convenient neuroimaging tools. We review the EEG power spectrum, components of the event-related potential (ERP), as well as vestibular and heartbeat evoked potentials as likely electrophysiological biomarkers to study DPD symptoms. We argue that acute anxiety- or trauma-related impairments in the integration of interoceptive and exteroceptive signals play a key role in the formation of DPD symptoms, and that future research needs analysis methods that can take this integration into account. We suggest tools for prospective studies of electrophysiological DPD biomarkers, which are urgently needed to fully develop their diagnostic potential.
[ { "created": "Thu, 11 Nov 2021 10:09:22 GMT", "version": "v1" } ]
2021-11-12
[ [ "Salami", "Abbas", "" ], [ "Andreu-Perez", "Javier", "" ], [ "Gillmeister", "Helge", "" ] ]
Depersonalisation/derealisation disorder (DPD) refers to frequent and persistent detachment from bodily self and disengagement from the outside world. As a dissociative disorder, DPD affects 1-2% of the population, but takes 7-12 years on average to be accurately diagnosed. In this systematic review, we comprehensively describe research targeting the neural correlates of core DPD symptoms, covering publications between 1992 and 2020 that have used electrophysiological techniques. The aim was to investigate the diagnostic potential of these relatively inexpensive and convenient neuroimaging tools. We review the EEG power spectrum, components of the event-related potential (ERP), as well as vestibular and heartbeat evoked potentials as likely electrophysiological biomarkers to study DPD symptoms. We argue that acute anxiety- or trauma-related impairments in the integration of interoceptive and exteroceptive signals play a key role in the formation of DPD symptoms, and that future research needs analysis methods that can take this integration into account. We suggest tools for prospective studies of electrophysiological DPD biomarkers, which are urgently needed to fully develop their diagnostic potential.
1903.01005
Gergely R\"ost
Ruth E. Baker, P\'eter Boldog and Gergely R\"ost
Convergence of solutions in a mean-field model of go-or-grow type with reservation of sites for proliferation and cell cycle delay
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the mean-field approximation of an individual-based model describing cell motility and proliferation, which incorporates the volume exclusion principle, the go-or-grow hypothesis and an explicit cell cycle delay. To utilise the framework of on-lattice agent-based models, we make the assumption that cells enter mitosis only if they can secure an additional site for the daughter cell, in which case they occupy two lattice sites until the completion of mitosis. The mean-field model is expressed by a system of delay differential equations and includes variables such as the number of motile cells, proliferating cells, reserved sites and empty sites. We prove the convergence of biologically feasible solutions: eventually all available space will be filled by mobile cells, after an initial phase when the proliferating cell population is increasing then diminishing. By comparing the behaviour of the mean-field model for different parameter values and initial cell distributions, we illustrate that the total cell population may follow a logistic-type growth curve, or may grow in a step-function-like fashion.
[ { "created": "Sun, 3 Mar 2019 22:32:47 GMT", "version": "v1" } ]
2019-03-05
[ [ "Baker", "Ruth E.", "" ], [ "Boldog", "Péter", "" ], [ "Röst", "Gergely", "" ] ]
We consider the mean-field approximation of an individual-based model describing cell motility and proliferation, which incorporates the volume exclusion principle, the go-or-grow hypothesis and an explicit cell cycle delay. To utilise the framework of on-lattice agent-based models, we make the assumption that cells enter mitosis only if they can secure an additional site for the daughter cell, in which case they occupy two lattice sites until the completion of mitosis. The mean-field model is expressed by a system of delay differential equations and includes variables such as the number of motile cells, proliferating cells, reserved sites and empty sites. We prove the convergence of biologically feasible solutions: eventually all available space will be filled by mobile cells, after an initial phase when the proliferating cell population is increasing then diminishing. By comparing the behaviour of the mean-field model for different parameter values and initial cell distributions, we illustrate that the total cell population may follow a logistic-type growth curve, or may grow in a step-function-like fashion.
1512.06896
Morgan Craig
Morgan Craig and Antony R Humphries and Michael C Mackey
A mathematical model of granulopoiesis incorporating the negative feedback dynamics and kinetics of G-CSF/neutrophil binding and internalisation
null
Bulletin of Mathematical Biology, 78(12). 2304-2357 (2016)
10.1007/s11538-016-0179-8
null
q-bio.CB q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a physiological model of granulopoiesis which includes explicit modelling of the kinetics of the cytokine granulocyte colony-stimulating factor (G-CSF) incorporating both the freely circulating concentration and the concentration of the cytokine bound to mature neutrophils. G-CSF concentrations are used to directly regulate neutrophil production, with the rate of differentiation of stem cells to neutrophil precursors, the effective proliferation rate in mitosis, the maturation time, and the release rate from the mature marrow reservoir into circulation all dependent on the level of G-CSF in the system. The dependence of the maturation time on the cytokine concentration introduces a state-dependent delay into our differential equation model, and we show how this is derived from an age-structured partial differential equation model of the mitosis and maturation, and also detail the derivation of the rest of our model. The model and its estimated parameters are shown to successfully predict the neutrophil and G-CSF responses to a variety of treatment scenarios, including the combined administration of chemotherapy and exogenous G-CSF. This concomitant treatment was reproduced without any additional fitting to characterise drug-drug interactions.
[ { "created": "Mon, 21 Dec 2015 22:35:18 GMT", "version": "v1" } ]
2016-11-17
[ [ "Craig", "Morgan", "" ], [ "Humphries", "Antony R", "" ], [ "Mackey", "Michael C", "" ] ]
We develop a physiological model of granulopoiesis which includes explicit modelling of the kinetics of the cytokine granulocyte colony-stimulating factor (G-CSF) incorporating both the freely circulating concentration and the concentration of the cytokine bound to mature neutrophils. G-CSF concentrations are used to directly regulate neutrophil production, with the rate of differentiation of stem cells to neutrophil precursors, the effective proliferation rate in mitosis, the maturation time, and the release rate from the mature marrow reservoir into circulation all dependent on the level of G-CSF in the system. The dependence of the maturation time on the cytokine concentration introduces a state-dependent delay into our differential equation model, and we show how this is derived from an age-structured partial differential equation model of the mitosis and maturation, and also detail the derivation of the rest of our model. The model and its estimated parameters are shown to successfully predict the neutrophil and G-CSF responses to a variety of treatment scenarios, including the combined administration of chemotherapy and exogenous G-CSF. This concomitant treatment was reproduced without any additional fitting to characterise drug-drug interactions.
2302.01449
Mohammad Shifat E Rabbi
Mohammad Shifat E Rabbi, Natasha Ironside, John A Ozolek, Rajendra Singh, Liron Pantanowitz, Gustavo K Rohde
Transport-based morphometry of nuclear structures of digital pathology images in cancers
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Alterations in nuclear morphology are useful adjuncts and even diagnostic tools used by pathologists in the diagnosis and grading of many tumors, particularly malignant tumors. Large datasets such as TCGA and the Human Protein Atlas, in combination with emerging machine learning and statistical modeling methods, such as feature extraction and deep learning techniques, can be used to extract meaningful knowledge from images of nuclei, particularly from cancerous tumors. Here we describe a new technique based on the mathematics of optimal transport for modeling the information content related to nuclear chromatin structure directly from imaging data. In contrast to other techniques, our method represents the entire information content of each nucleus relative to a template nucleus using a transport-based morphometry (TBM) framework. We demonstrate the model is robust to different staining patterns and imaging protocols, and can be used to discover meaningful and interpretable information within and across datasets and cancer types. In particular, we demonstrate morphological differences capable of distinguishing nuclear features along the spectrum from benign to malignant categories of tumors across different cancer tissue types, including tumors derived from liver parenchyma, thyroid gland, lung mesothelium, and skin epithelium. We believe these proof-of-concept calculations demonstrate that the TBM framework can provide the quantitative measurements necessary for performing meaningful comparisons across a wide range of datasets and cancer types that can potentially enable numerous cancer studies, technologies, and clinical applications and help elevate the role of nuclear morphometry into a more quantitative science. The source code implementing our method is available at https://github.com/rohdelab/nuclear_morphometry.
[ { "created": "Thu, 2 Feb 2023 22:36:31 GMT", "version": "v1" } ]
2023-02-06
[ [ "Rabbi", "Mohammad Shifat E", "" ], [ "Ironside", "Natasha", "" ], [ "Ozolek", "John A", "" ], [ "Singh", "Rajendra", "" ], [ "Pantanowitz", "Liron", "" ], [ "Rohde", "Gustavo K", "" ] ]
Alterations in nuclear morphology are useful adjuncts and even diagnostic tools used by pathologists in the diagnosis and grading of many tumors, particularly malignant tumors. Large datasets such as TCGA and the Human Protein Atlas, in combination with emerging machine learning and statistical modeling methods, such as feature extraction and deep learning techniques, can be used to extract meaningful knowledge from images of nuclei, particularly from cancerous tumors. Here we describe a new technique based on the mathematics of optimal transport for modeling the information content related to nuclear chromatin structure directly from imaging data. In contrast to other techniques, our method represents the entire information content of each nucleus relative to a template nucleus using a transport-based morphometry (TBM) framework. We demonstrate the model is robust to different staining patterns and imaging protocols, and can be used to discover meaningful and interpretable information within and across datasets and cancer types. In particular, we demonstrate morphological differences capable of distinguishing nuclear features along the spectrum from benign to malignant categories of tumors across different cancer tissue types, including tumors derived from liver parenchyma, thyroid gland, lung mesothelium, and skin epithelium. We believe these proof-of-concept calculations demonstrate that the TBM framework can provide the quantitative measurements necessary for performing meaningful comparisons across a wide range of datasets and cancer types that can potentially enable numerous cancer studies, technologies, and clinical applications and help elevate the role of nuclear morphometry into a more quantitative science. The source code implementing our method is available at https://github.com/rohdelab/nuclear_morphometry.
1907.01288
Ahmed ELGazzar
Ahmed El Gazzar, Leonardo Cerliani, Guido van Wingen, Rajat Mani Thomas
Simple 1-D Convolutional Networks for Resting-State fMRI Based Classification in Autism
accepted for publication in IJCNN 2019
null
null
null
q-bio.NC cs.LG eess.IV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning methods are increasingly being used with neuroimaging data like structural and functional magnetic resonance imaging (MRI) to predict the diagnosis of neuropsychiatric and neurological disorders. For psychiatric disorders in particular, it is believed that one of the most promising modalities is resting-state functional MRI (rsfMRI), which captures the intrinsic connectivity between regions in the brain. Because rsfMRI data points are inherently high-dimensional (~1M), it is impossible to process the entire input in its raw form. In this paper, we propose a very simple transformation of the rsfMRI images that captures all of the temporal dynamics of the signal but sub-samples its spatial extent. As a result, we use a very simple 1-D convolutional network which is fast to train, requires minimal preprocessing and performs on par with the state-of-the-art on the classification of Autism spectrum disorders.
[ { "created": "Tue, 2 Jul 2019 10:35:25 GMT", "version": "v1" } ]
2019-07-03
[ [ "Gazzar", "Ahmed El", "" ], [ "Cerliani", "Leonardo", "" ], [ "van Wingen", "Guido", "" ], [ "Thomas", "Rajat Mani", "" ] ]
Deep learning methods are increasingly being used with neuroimaging data like structural and functional magnetic resonance imaging (MRI) to predict the diagnosis of neuropsychiatric and neurological disorders. For psychiatric disorders in particular, it is believed that one of the most promising modalities is resting-state functional MRI (rsfMRI), which captures the intrinsic connectivity between regions in the brain. Because rsfMRI data points are inherently high-dimensional (~1M), it is impossible to process the entire input in its raw form. In this paper, we propose a very simple transformation of the rsfMRI images that captures all of the temporal dynamics of the signal but sub-samples its spatial extent. As a result, we use a very simple 1-D convolutional network which is fast to train, requires minimal preprocessing and performs on par with the state-of-the-art on the classification of Autism spectrum disorders.
1906.08586
Qingnan Sun
Qingnan Sun, Marko V. Jankovic, Stavroula G. Mougiakakou
Reinforcement Learning-Based Adaptive Insulin Advisor for Individuals with Type 1 Diabetes Patients under Multiple Daily Injections Therapy
4 pages, 1 figure, 1 table, EMBC2019
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The existing adaptive basal-bolus advisor (ABBA) was further developed to benefit patients under insulin therapy with multiple daily injections (MDI). Three different in silico experiments were conducted with the DMMS.R simulator to validate the approach of combining self-monitoring of blood glucose (SMBG) with insulin injection devices (e.g. insulin pens), as used by the majority of type 1 diabetes patients under insulin therapy. The proposed approach outperforms the conventional method, as it increases the time spent within the target range and simultaneously reduces the risks of hyperglycaemic and hypoglycaemic events.
[ { "created": "Fri, 7 Jun 2019 13:59:09 GMT", "version": "v1" } ]
2019-06-21
[ [ "Sun", "Qingnan", "" ], [ "Jankovic", "Marko V.", "" ], [ "Mougiakakou", "Stavroula G.", "" ] ]
The existing adaptive basal-bolus advisor (ABBA) was further developed to benefit patients under insulin therapy with multiple daily injections (MDI). Three different in silico experiments were conducted with the DMMS.R simulator to validate the approach of combined use of self-monitoring of blood glucose (SMBG) and insulin injection devices, e.g. insulin pen, as are used by the majority of type 1 diabetes patients under insulin therapy. The proposed approach outperforms the conventional method, as it increases the time spent within the target range and simultaneously reduces the risks of hyperglycaemic and hypoglycaemic events.
2404.16196
Jakub Adamczyk
Jakub Adamczyk, Jakub Poziemski, Pawe{\l} Siedlecki
ApisTox: a new benchmark dataset for the classification of small molecules toxicity on honey bees
null
null
null
null
q-bio.QM cs.AI cs.LG q-bio.BM
http://creativecommons.org/licenses/by/4.0/
The global decline in bee populations poses significant risks to agriculture, biodiversity, and environmental stability. To bridge the gap in existing data, we introduce ApisTox, a comprehensive dataset focusing on the toxicity of pesticides to honey bees (Apis mellifera). This dataset combines and leverages data from existing sources such as ECOTOX and PPDB, providing an extensive, consistent, and curated collection that surpasses the previous datasets. ApisTox incorporates a wide array of data, including toxicity levels for chemicals, details such as the time of their publication in the literature, and identifiers linking them to external chemical databases. This dataset may serve as an important tool for environmental and agricultural research, but can also support the development of policies and practices aimed at minimizing harm to bee populations. Finally, ApisTox offers a unique resource for benchmarking molecular property prediction methods on agrochemical compounds, facilitating advancements in both environmental science and cheminformatics. This makes it a valuable tool for both academic research and practical applications in bee conservation.
[ { "created": "Wed, 24 Apr 2024 20:35:17 GMT", "version": "v1" } ]
2024-04-26
[ [ "Adamczyk", "Jakub", "" ], [ "Poziemski", "Jakub", "" ], [ "Siedlecki", "Paweł", "" ] ]
The global decline in bee populations poses significant risks to agriculture, biodiversity, and environmental stability. To bridge the gap in existing data, we introduce ApisTox, a comprehensive dataset focusing on the toxicity of pesticides to honey bees (Apis mellifera). This dataset combines and leverages data from existing sources such as ECOTOX and PPDB, providing an extensive, consistent, and curated collection that surpasses the previous datasets. ApisTox incorporates a wide array of data, including toxicity levels for chemicals, details such as the time of their publication in the literature, and identifiers linking them to external chemical databases. This dataset may serve as an important tool for environmental and agricultural research, but can also support the development of policies and practices aimed at minimizing harm to bee populations. Finally, ApisTox offers a unique resource for benchmarking molecular property prediction methods on agrochemical compounds, facilitating advancements in both environmental science and cheminformatics. This makes it a valuable tool for both academic research and practical applications in bee conservation.
1003.1304
Benedikt Obermayer
Benedikt Obermayer and Erwin Frey
Error thresholds for self- and cross-specific enzymatic replication
23 pages, 7 figures; final version as published
J. Theor. Biol. 267:653-662 (2010)
10.1016/j.jtbi.2010.09.016
LMU-ASC 17/10
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The information content of a non-enzymatic self-replicator is limited by Eigen's error threshold. Presumably, enzymatic replication can maintain higher complexity, but in a competitive environment such a replicator is faced with two problems related to its twofold role as enzyme and substrate: as enzyme, it should replicate itself rather than wastefully copy non-functional substrates, and as substrate it should preferably be replicated by superior enzymes instead of less-efficient mutants. Because specific recognition can enforce these propensities, we thoroughly analyze an idealized quasispecies model for enzymatic replication, with replication rates that are either a decreasing (self-specific) or increasing (cross-specific) function of the Hamming distance between the recognition or "tag" sequences of enzyme and substrate. We find that very weak self-specificity suffices to localize a population about a master sequence and thus to preserve its information, while simultaneous localization about complementary sequences in the cross-specific case is more challenging. A surprising result is that stronger specificity constraints allow longer recognition sequences, because the populations are better localized. Extrapolating from experimental data, we obtain rough quantitative estimates for the maximal length of the recognition or tag sequence that can be used to reliably discriminate appropriate and infeasible enzymes and substrates, respectively.
[ { "created": "Fri, 5 Mar 2010 16:40:18 GMT", "version": "v1" }, { "created": "Mon, 18 Oct 2010 14:57:16 GMT", "version": "v2" } ]
2010-10-19
[ [ "Obermayer", "Benedikt", "" ], [ "Frey", "Erwin", "" ] ]
The information content of a non-enzymatic self-replicator is limited by Eigen's error threshold. Presumably, enzymatic replication can maintain higher complexity, but in a competitive environment such a replicator is faced with two problems related to its twofold role as enzyme and substrate: as enzyme, it should replicate itself rather than wastefully copy non-functional substrates, and as substrate it should preferably be replicated by superior enzymes instead of less-efficient mutants. Because specific recognition can enforce these propensities, we thoroughly analyze an idealized quasispecies model for enzymatic replication, with replication rates that are either a decreasing (self-specific) or increasing (cross-specific) function of the Hamming distance between the recognition or "tag" sequences of enzyme and substrate. We find that very weak self-specificity suffices to localize a population about a master sequence and thus to preserve its information, while simultaneous localization about complementary sequences in the cross-specific case is more challenging. A surprising result is that stronger specificity constraints allow longer recognition sequences, because the populations are better localized. Extrapolating from experimental data, we obtain rough quantitative estimates for the maximal length of the recognition or tag sequence that can be used to reliably discriminate appropriate and infeasible enzymes and substrates, respectively.
2303.08758
Giorgio Gonnella
Giorgio Gonnella
EGC: a format for expressing prokaryotic genomes content expectations
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-nd/4.0/
The number of available genomes of prokaryotic organisms is rapidly growing, enabling comparative genomics studies. The comparison of genomes of organisms with a common phenotype, habitat or phylogeny often shows that these genomes share some common contents. Collecting rules expressing common genome traits depending on given factors is useful, as such rules could be used for quality control or for identifying interesting exceptions and formulating hypotheses. Automating the verification of such rules with computational tools requires the definition of a representation schema. In this study, we present EGC (Expected Genome Contents), a flat-text file format for the representation of expectation rules about the content of prokaryotic genomes. A parser for the EGC format has been implemented using the TextFormats software library, accompanied by a set of related Python packages.
[ { "created": "Wed, 15 Mar 2023 16:55:03 GMT", "version": "v1" }, { "created": "Wed, 10 May 2023 08:39:12 GMT", "version": "v2" } ]
2023-05-11
[ [ "Gonnella", "Giorgio", "" ] ]
The number of available genomes of prokaryotic organisms is rapidly growing, enabling comparative genomics studies. The comparison of genomes of organisms with a common phenotype, habitat or phylogeny often shows that these genomes share some common contents. Collecting rules expressing common genome traits depending on given factors is useful, as such rules could be used for quality control or for identifying interesting exceptions and formulating hypotheses. Automating the verification of such rules with computational tools requires the definition of a representation schema. In this study, we present EGC (Expected Genome Contents), a flat-text file format for the representation of expectation rules about the content of prokaryotic genomes. A parser for the EGC format has been implemented using the TextFormats software library, accompanied by a set of related Python packages.
1609.08035
Jayajit Das
Sayak Mukherjee, David Stewart, William Stewart, Lewis L. Lanier, Jayajit Das
Connecting the dots across time: Reconstruction of single cell signaling trajectories using time-stamped data
revised version, accepted for publication in Royal Society Open Science
null
null
null
q-bio.QM cond-mat.stat-mech cs.CG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single cell responses are shaped by the geometry of signaling kinetic trajectories carved in a multidimensional space spanned by signaling protein abundances. It is however challenging to assay a large number (>3) of signaling species in live-cell imaging, which makes it difficult to probe single cell signaling kinetic trajectories in large dimensions. Flow and mass cytometry techniques can measure a large number (4 - >40) of signaling species but are unable to track single cells. Thus cytometry experiments provide detailed time-stamped snapshots of single cell signaling kinetics. Is it possible to use the time-stamped cytometry data to reconstruct single cell signaling trajectories? Borrowing concepts of conserved and slow variables from non-equilibrium statistical physics, we develop an approach to reconstruct signaling trajectories using snapshot data by creating new variables that remain invariant or vary slowly during the signaling kinetics. We apply this approach to reconstruct trajectories using snapshot data obtained from in silico simulations and live-cell imaging measurements. The use of invariants and slow variables to reconstruct trajectories provides a radically different way to track objects using snapshot data. The approach is likely to have implications for solving matching problems in a wide range of disciplines.
[ { "created": "Mon, 26 Sep 2016 15:56:12 GMT", "version": "v1" }, { "created": "Thu, 20 Jul 2017 14:33:02 GMT", "version": "v2" } ]
2017-07-27
[ [ "Mukherjee", "Sayak", "" ], [ "Stewart", "David", "" ], [ "Stewart", "William", "" ], [ "Lanier", "Lewis L.", "" ], [ "Das", "Jayajit", "" ] ]
Single cell responses are shaped by the geometry of signaling kinetic trajectories carved in a multidimensional space spanned by signaling protein abundances. It is however challenging to assay a large number (>3) of signaling species in live-cell imaging, which makes it difficult to probe single cell signaling kinetic trajectories in large dimensions. Flow and mass cytometry techniques can measure a large number (4 - >40) of signaling species but are unable to track single cells. Thus cytometry experiments provide detailed time-stamped snapshots of single cell signaling kinetics. Is it possible to use the time-stamped cytometry data to reconstruct single cell signaling trajectories? Borrowing concepts of conserved and slow variables from non-equilibrium statistical physics, we develop an approach to reconstruct signaling trajectories using snapshot data by creating new variables that remain invariant or vary slowly during the signaling kinetics. We apply this approach to reconstruct trajectories using snapshot data obtained from in silico simulations and live-cell imaging measurements. The use of invariants and slow variables to reconstruct trajectories provides a radically different way to track objects using snapshot data. The approach is likely to have implications for solving matching problems in a wide range of disciplines.
1806.03140
Enrico Gavagnin
Enrico Gavagnin, Matthew J. Ford, Richard L. Mort, Tim Rogers and Christian A. Yates
The invasion speed of cell migration models with realistic cell cycle time distributions
null
null
null
null
q-bio.CB math.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cell proliferation is typically incorporated into stochastic mathematical models of cell migration by assuming that cell divisions occur after an exponentially distributed waiting time. Experimental observations, however, show that this assumption is often far from the real cell cycle time distribution (CCTD). Recent studies have suggested an alternative approach to modelling cell proliferation based on a multi-stage representation of the CCTD. In order to validate and parametrise these models, it is important to connect them to experimentally measurable quantities. In this paper we investigate the connection between the CCTD and the speed of the collective invasion. We first state a result for a general CCTD, which allows the computation of the invasion speed using the Laplace transform of the CCTD. We use this to deduce the range of speeds for the general case. We then focus on the more realistic case of multi-stage models, using both a stochastic agent-based model and a set of reaction-diffusion equations for the cells' average density. By studying the corresponding travelling wave solutions, we obtain an analytical expression for the speed of invasion for a general N-stage model with identical transition rates, in which case the resulting cell cycle times are Erlang distributed. We show that, for a general N-stage model, the Erlang distribution and the exponential distribution lead to the minimum and maximum invasion speed, respectively. This result allows us to determine the range of possible invasion speeds in terms of the average proliferation time for any multi-stage model.
[ { "created": "Fri, 8 Jun 2018 13:23:16 GMT", "version": "v1" } ]
2018-06-11
[ [ "Gavagnin", "Enrico", "" ], [ "Ford", "Matthew J.", "" ], [ "Mort", "Richard L.", "" ], [ "Rogers", "Tim", "" ], [ "Yates", "Christian A.", "" ] ]
Cell proliferation is typically incorporated into stochastic mathematical models of cell migration by assuming that cell divisions occur after an exponentially distributed waiting time. Experimental observations, however, show that this assumption is often far from the real cell cycle time distribution (CCTD). Recent studies have suggested an alternative approach to modelling cell proliferation based on a multi-stage representation of the CCTD. In order to validate and parametrise these models, it is important to connect them to experimentally measurable quantities. In this paper we investigate the connection between the CCTD and the speed of the collective invasion. We first state a result for a general CCTD, which allows the computation of the invasion speed using the Laplace transform of the CCTD. We use this to deduce the range of speeds for the general case. We then focus on the more realistic case of multi-stage models, using both a stochastic agent-based model and a set of reaction-diffusion equations for the cells' average density. By studying the corresponding travelling wave solutions, we obtain an analytical expression for the speed of invasion for a general N-stage model with identical transition rates, in which case the resulting cell cycle times are Erlang distributed. We show that, for a general N-stage model, the Erlang distribution and the exponential distribution lead to the minimum and maximum invasion speed, respectively. This result allows us to determine the range of possible invasion speeds in terms of the average proliferation time for any multi-stage model.
1002.0458
Andrea De Martino
A. De Martino, E. Marinari
The solution space of metabolic networks: producibility, robustness and fluctuations
10 pages, prepared for the Proceedings of the International Workshop on Statistical-Mechanical Informatics, March 7-10, 2010, Kyoto, Japan
null
10.1088/1742-6596/233/1/012019
null
q-bio.MN cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Flux analysis is a class of constraint-based approaches to the study of biochemical reaction networks: they are based on determining the reaction flux configurations compatible with given stoichiometric and thermodynamic constraints. One of its main areas of application is the study of cellular metabolic networks. We briefly and selectively review the main approaches to this problem and then, building on recent work, we provide a characterization of the productive capabilities of the metabolic network of the bacterium E. coli in a specified growth medium in terms of the producible biochemical species. While a robust and physiologically meaningful production profile clearly emerges (including biomass components, biomass products, waste etc.), the underlying constraints still allow for significant fluctuations even in key metabolites like ATP and, as a consequence, apparently lay the ground for very different growth scenarios.
[ { "created": "Tue, 2 Feb 2010 11:10:40 GMT", "version": "v1" } ]
2015-05-18
[ [ "De Martino", "A.", "" ], [ "Marinari", "E.", "" ] ]
Flux analysis is a class of constraint-based approaches to the study of biochemical reaction networks: they are based on determining the reaction flux configurations compatible with given stoichiometric and thermodynamic constraints. One of its main areas of application is the study of cellular metabolic networks. We briefly and selectively review the main approaches to this problem and then, building on recent work, we provide a characterization of the productive capabilities of the metabolic network of the bacterium E. coli in a specified growth medium in terms of the producible biochemical species. While a robust and physiologically meaningful production profile clearly emerges (including biomass components, biomass products, waste etc.), the underlying constraints still allow for significant fluctuations even in key metabolites like ATP and, as a consequence, apparently lay the ground for very different growth scenarios.
2310.14721
Francois Fages
Mathieu Hemery (Lifeware), Fran\c{c}ois Fages (Lifeware)
On a model of online analog computation in the cell with absolute functional robustness: algebraic characterization, function compiler and error control
arXiv admin note: substantial text overlap with arXiv:2206.09624
null
null
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Turing completeness of continuous Chemical Reaction Networks (CRNs) states that any computable real function can be computed by a continuous CRN on a finite set of molecular species, possibly restricted to elementary reactions, i.e. with at most two reactants and mass action law kinetics. In this paper, we introduce a more stringent notion of robust online analog computation, and Absolute Functional Robustness (AFR), for the CRNs that stabilize the concentration values of some output species to the result of one function of the input species concentrations, in a perfectly robust manner with respect to perturbations of both intermediate and output species. We prove that the set of real functions stabilized by a CRN with mass action law kinetics is precisely the set of real algebraic functions. Based on this result, we present a compiler which takes as input any algebraic function (defined by one polynomial and one point for selecting one branch of the algebraic curve defined by the polynomial) and generates an abstract CRN to stabilize it. Furthermore, we provide error bounds to estimate and control the error of an unperturbed system, under the assumption that the environment inputs are driven by k-Lipschitz functions.
[ { "created": "Mon, 23 Oct 2023 08:56:58 GMT", "version": "v1" } ]
2023-10-24
[ [ "Hemery", "Mathieu", "", "Lifeware" ], [ "Fages", "François", "", "Lifeware" ] ]
The Turing completeness of continuous Chemical Reaction Networks (CRNs) states that any computable real function can be computed by a continuous CRN on a finite set of molecular species, possibly restricted to elementary reactions, i.e. with at most two reactants and mass action law kinetics. In this paper, we introduce a more stringent notion of robust online analog computation, and Absolute Functional Robustness (AFR), for the CRNs that stabilize the concentration values of some output species to the result of one function of the input species concentrations, in a perfectly robust manner with respect to perturbations of both intermediate and output species. We prove that the set of real functions stabilized by a CRN with mass action law kinetics is precisely the set of real algebraic functions. Based on this result, we present a compiler which takes as input any algebraic function (defined by one polynomial and one point for selecting one branch of the algebraic curve defined by the polynomial) and generates an abstract CRN to stabilize it. Furthermore, we provide error bounds to estimate and control the error of an unperturbed system, under the assumption that the environment inputs are driven by k-Lipschitz functions.
2108.08163
Adrian Jones
Adrian Jones, Daoyu Zhang, Yuri Deigin and Steven C. Quay
Analysis of pangolin metagenomic datasets reveals significant contamination, raising concerns for pangolin CoV host attribution
55 pages, 15 figures
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Metagenomic datasets from pangolin tissue specimens have previously yielded SARS-related coronaviruses which show high homology in their receptor binding domain to SARS-CoV-2, suggesting a potential zoonotic source for this feature of the human virus, possibly via recombination (Liu et al. 2019, Lam et al. 2020, Xiao et al. 2020, Liu et al. 2020). Here we re-examine these published datasets. We report that only a few pangolin samples were found to contain coronavirus reads, and even then in low abundance, while other non-pangolin hosted viruses were present in higher abundance. We also discovered extensive contamination with human, rodent, and other mammalian gene sequences, which was a surprising finding. Furthermore, we uncovered a number of pangolin CoV sequences embedded in standard laboratory cloning vectors, which suggests the pangolin specimens could have been contaminated with sequences derived from synthetic biology experiments. Finally, we discover a third pangolin dataset (He et al. 2022) with low levels of SARSr-CoV sequences and unambiguous extensive contamination of several pangolin samples. For these reasons, we find it unlikely that the pangolins in question had a coronavirus infection while alive, and all current versions of the cited papers claiming a zoonotic infection of pangolins with a SARS-r CoV require substantial corrections and should be retracted until such corrections are made.
[ { "created": "Wed, 18 Aug 2021 14:14:32 GMT", "version": "v1" }, { "created": "Mon, 23 Aug 2021 12:14:13 GMT", "version": "v2" }, { "created": "Tue, 1 Mar 2022 10:44:02 GMT", "version": "v3" } ]
2022-03-02
[ [ "Jones", "Adrian", "" ], [ "Zhang", "Daoyu", "" ], [ "Deigin", "Yuri", "" ], [ "Quay", "Steven C.", "" ] ]
Metagenomic datasets from pangolin tissue specimens have previously yielded SARS-related coronaviruses which show high homology in their receptor binding domain to SARS-CoV-2, suggesting a potential zoonotic source for this feature of the human virus, possibly via recombination (Liu et al. 2019, Lam et al. 2020, Xiao et al. 2020, Liu et al. 2020). Here we re-examine these published datasets. We report that only a few pangolin samples were found to contain coronavirus reads, and even then in low abundance, while other non-pangolin hosted viruses were present in higher abundance. We also discovered extensive contamination with human, rodent, and other mammalian gene sequences, which was a surprising finding. Furthermore, we uncovered a number of pangolin CoV sequences embedded in standard laboratory cloning vectors, which suggests the pangolin specimens could have been contaminated with sequences derived from synthetic biology experiments. Finally, we discover a third pangolin dataset (He et al. 2022) with low levels of SARSr-CoV sequences and unambiguous extensive contamination of several pangolin samples. For these reasons, we find it unlikely that the pangolins in question had a coronavirus infection while alive, and all current versions of the cited papers claiming a zoonotic infection of pangolins with a SARS-r CoV require substantial corrections and should be retracted until such corrections are made.
q-bio/0311006
Dr. Paul J. Werbos
Paul J. Werbos
What do neural nets and quantum theory tell us about mind and reality?
17p.Transcript of plenary talk at international conference "Towards a Science of Consciousness," hosted by United Nations University. Tokyo, 1999
In K.Yasue, M.Jibu, T. Della Senta eds, No Matter, Never Mind, John Benjamins Books, 2001
null
null
q-bio.NC
null
This paper proposes an approach to framing and answering fundamental questions about consciousness. It argues that many of the more theoretical debates about consciousness, such as debates about "when does it begin?", are misplaced and meaningless, in part because "consciousness" as a word has many valid and interesting definitions, and in part because consciousness qua mind or intelligence (the main focus here) is a matter of degree or level, not a binary variable. It proposes that new mathematical work related to functional neural network designs -- designs so functional that they can be used in engineering -- is essential to a functional understanding of intelligence as such, and outlines some key mathematics as of 1999, citing earlier work for more details. Quantum theory is relevant, but not in the simple ways proposed in more popular philosophies.
[ { "created": "Fri, 7 Nov 2003 17:11:22 GMT", "version": "v1" } ]
2007-05-23
[ [ "Werbos", "Paul J.", "" ] ]
This paper proposes an approach to framing and answering fundamental questions about consciousness. It argues that many of the more theoretical debates about consciousness, such as debates about "when does it begin?", are misplaced and meaningless, in part because "consciousness" as a word has many valid and interesting definitions, and in part because consciousness qua mind or intelligence (the main focus here) is a matter of degree or level, not a binary variable. It proposes that new mathematical work related to functional neural network designs -- designs so functional that they can be used in engineering -- is essential to a functional understanding of intelligence as such, and outlines some key mathematics as of 1999, citing earlier work for more details. Quantum theory is relevant, but not in the simple ways proposed in more popular philosophies.
1701.03549
Karina Laneri
M\'onica Denham and Karina Laneri
Using efficient parallelization in Graphic Processing Units to parameterize stochastic fire propagation models
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fire propagation is a major concern in the world in general and in Argentinian northwestern Patagonia in particular, where every year hundreds of hectares are affected by both natural and anthropogenic forest fires. We developed an efficient cellular automata model in Graphic Processing Units (GPUs) to simulate fire propagation. The graphical advantages of GPUs were exploited by overlapping wind direction maps, as well as vegetation, slope and aspect maps, taking into account relevant landscape characteristics for fire propagation. Stochastic propagation was performed with a probability model that depends on aspect, slope, wind direction and vegetation type. Implementing a genetic algorithm search strategy, we show, using simulated fires, that we recover the five parameter values that characterize fire propagation. The efficiency of the fire simulation procedure allowed us to also estimate the fire ignition point when it is unknown, as well as its associated uncertainty, making this approach suitable for the analysis of fire spread based on maps of burned areas without knowing the point of origin of the fires or how they spread.
[ { "created": "Fri, 13 Jan 2017 02:28:59 GMT", "version": "v1" }, { "created": "Sat, 19 Aug 2017 04:42:41 GMT", "version": "v2" } ]
2017-08-22
[ [ "Denham", "Mónica", "" ], [ "Laneri", "Karina", "" ] ]
Fire propagation is a major concern in the world in general and in Argentinian northwestern Patagonia in particular, where every year hundreds of hectares are affected by both natural and anthropogenic forest fires. We developed an efficient cellular automata model in Graphic Processing Units (GPUs) to simulate fire propagation. The graphical advantages of GPUs were exploited by overlapping wind direction maps, as well as vegetation, slope and aspect maps, taking into account relevant landscape characteristics for fire propagation. Stochastic propagation was performed with a probability model that depends on aspect, slope, wind direction and vegetation type. Implementing a genetic algorithm search strategy, we show, using simulated fires, that we recover the five parameter values that characterize fire propagation. The efficiency of the fire simulation procedure allowed us to also estimate the fire ignition point when it is unknown, as well as its associated uncertainty, making this approach suitable for the analysis of fire spread based on maps of burned areas without knowing the point of origin of the fires or how they spread.
1212.0872
Daile Avila
Daile Avila, Rolando Cardenas, Osmel Martin
On the photosynthetic potential in the very Early Archean oceans
Accepted for publication in Origins of Life and Evolution of Biospheres
null
10.1007/s11084-012-9322-1
null
q-bio.PE physics.bio-ph q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we apply a mathematical model of photosynthesis to quantify the potential for photosynthetic life in the very Early Archean oceans. We assume the presence of oceanic blockers of ultraviolet radiation, specifically ferrous ions. For this scenario, our results suggest a potential for photosynthetic life greater than or similar to that in later eras/eons, such as the Late Archean and the current Phanerozoic eon.
[ { "created": "Tue, 4 Dec 2012 21:07:53 GMT", "version": "v1" } ]
2015-06-12
[ [ "Avila", "Daile", "" ], [ "Cardenas", "Rolando", "" ], [ "Martin", "Osmel", "" ] ]
In this work we apply a mathematical model of photosynthesis to quantify the potential for photosynthetic life in the very Early Archean oceans. We assume the presence of oceanic blockers of ultraviolet radiation, specifically ferrous ions. For this scenario, our results suggest a potential for photosynthetic life greater than or similar to that in later eras/eons, such as the Late Archean and the current Phanerozoic eon.
0706.1306
Alain Destexhe
Zuzanna Piwkowska, Martin Pospischil, Romain Brette, Julia Sliwa, Michelle Rudolph-Lilith, Thierry Bal and Alain Destexhe
Characterizing synaptic conductance fluctuations in cortical neurons and their influence on spike generation
9 figures, Journal of Neuroscience Methods (in press, 2008)
Journal of Neuroscience Methods 169: 302-322, 2008.
null
null
q-bio.NC
null
Cortical neurons are subject to sustained and irregular synaptic activity which causes important fluctuations of the membrane potential (Vm). We review here different methods to characterize this activity and its impact on spike generation. The simplified, fluctuating point-conductance model of synaptic activity provides the starting point of a variety of methods for the analysis of intracellular Vm recordings. In this model, the synaptic excitatory and inhibitory conductances are described by Gaussian-distributed stochastic variables, or colored conductance noise. The matching of experimentally recorded Vm distributions to an invertible theoretical expression derived from the model allows the extraction of parameters characterizing the synaptic conductance distributions. This analysis can be complemented by the matching of experimental Vm power spectral densities (PSDs) to a theoretical template, even though the unexpected scaling properties of experimental PSDs limit the precision of this latter approach. Building on this stochastic characterization of synaptic activity, we also propose methods to qualitatively and quantitatively evaluate spike-triggered averages of synaptic time-courses preceding spikes. This analysis points to an essential role for synaptic conductance variance in determining spike times. The presented methods are evaluated using controlled conductance injection in cortical neurons in vitro with the dynamic-clamp technique. We review their applications to the analysis of in vivo intracellular recordings in cat association cortex, which suggest a predominant role for inhibition in determining both sub- and supra-threshold dynamics of cortical neurons embedded in active networks.
[ { "created": "Sat, 9 Jun 2007 12:32:51 GMT", "version": "v1" }, { "created": "Thu, 15 Nov 2007 20:52:01 GMT", "version": "v2" }, { "created": "Thu, 15 Nov 2007 21:28:00 GMT", "version": "v3" } ]
2009-04-29
[ [ "Piwkowska", "Zuzanna", "" ], [ "Pospischil", "Martin", "" ], [ "Brette", "Romain", "" ], [ "Sliwa", "Julia", "" ], [ "Rudolph-Lilith", "Michelle", "" ], [ "Bal", "Thierry", "" ], [ "Destexhe", "Alain", "" ] ]
Cortical neurons are subject to sustained and irregular synaptic activity which causes important fluctuations of the membrane potential (Vm). We review here different methods to characterize this activity and its impact on spike generation. The simplified, fluctuating point-conductance model of synaptic activity provides the starting point of a variety of methods for the analysis of intracellular Vm recordings. In this model, the synaptic excitatory and inhibitory conductances are described by Gaussian-distributed stochastic variables, or colored conductance noise. The matching of experimentally recorded Vm distributions to an invertible theoretical expression derived from the model allows the extraction of parameters characterizing the synaptic conductance distributions. This analysis can be complemented by the matching of experimental Vm power spectral densities (PSDs) to a theoretical template, even though the unexpected scaling properties of experimental PSDs limit the precision of this latter approach. Building on this stochastic characterization of synaptic activity, we also propose methods to qualitatively and quantitatively evaluate spike-triggered averages of synaptic time-courses preceding spikes. This analysis points to an essential role for synaptic conductance variance in determining spike times. The presented methods are evaluated using controlled conductance injection in cortical neurons in vitro with the dynamic-clamp technique. We review their applications to the analysis of in vivo intracellular recordings in cat association cortex, which suggest a predominant role for inhibition in determining both sub- and supra-threshold dynamics of cortical neurons embedded in active networks.
2204.13321
Claudia Bank
Claudia Bank
Epistasis and Adaptation on Fitness Landscapes
25 pages; submitted to Annual Reviews in Ecology, Evolution, and Systematics
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Epistasis occurs when the effect of a mutation depends on its carrier's genetic background. Despite increasing evidence that epistasis for fitness is common, its role during evolution is contentious. Fitness landscapes, mappings of genotype or phenotype to fitness, capture the full extent and complexity of epistasis. Fitness landscape theory has shown how epistasis affects the course and the outcome of evolution. Moreover, by measuring the competitive fitness of sets of tens to thousands of connected genotypes, empirical fitness landscapes have shown that epistasis is frequent and depends on the fitness measure, the choice of mutations for the landscape, and the environment in which it was measured. Here, I review fitness landscape theory and experiments and their implications for the role of epistasis in adaptation. I discuss theoretical expectations in the light of empirical fitness landscapes and highlight open challenges and future directions towards integrating theory and data, and incorporating ecological factors.
[ { "created": "Thu, 28 Apr 2022 07:43:20 GMT", "version": "v1" }, { "created": "Thu, 5 May 2022 09:37:12 GMT", "version": "v2" }, { "created": "Fri, 10 Jun 2022 06:43:35 GMT", "version": "v3" } ]
2022-06-13
[ [ "Bank", "Claudia", "" ] ]
Epistasis occurs when the effect of a mutation depends on its carrier's genetic background. Despite increasing evidence that epistasis for fitness is common, its role during evolution is contentious. Fitness landscapes, mappings of genotype or phenotype to fitness, capture the full extent and complexity of epistasis. Fitness landscape theory has shown how epistasis affects the course and the outcome of evolution. Moreover, by measuring the competitive fitness of sets of tens to thousands of connected genotypes, empirical fitness landscapes have shown that epistasis is frequent and depends on the fitness measure, the choice of mutations for the landscape, and the environment in which it was measured. Here, I review fitness landscape theory and experiments and their implications for the role of epistasis in adaptation. I discuss theoretical expectations in the light of empirical fitness landscapes and highlight open challenges and future directions towards integrating theory and data, and incorporating ecological factors.
2402.06392
Juan Jim\'enez-S\'anchez
Juan Jim\'enez-S\'anchez (1), Carmen Ortega-Sabater (1), Philip K. Maini (2), V\'ictor M. P\'erez-Garc\'ia (1), Tommaso Lorenzi (3) ((1) Mathematical Oncology Laboratory, University of Castilla-La Mancha, Ciudad Real, Spain, (2) Wolfson Centre for Mathematical Biology, Mathematical Institute, University of Oxford, Oxford, United Kingdom, (3) Dipartimento di Scienze Matematiche, Politecnico di Torino, Turin, Italy)
A glance at evolvability: a theoretical analysis of its role in the evolutionary dynamics of cell populations
26 pages, 6 figures, 1 table
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Evolvability is defined as the ability of a population to generate heritable variation to facilitate its adaptation to new environments or selection pressures. In this article, we consider evolvability as a phenotypic trait subject to evolution and discuss its implications in the adaptation of cell populations. We explore the evolutionary dynamics of an actively proliferating population of cells subject to changes in their proliferative potential and their evolvability using a stochastic individual-based model and its deterministic continuum counterpart through numerical simulations of these models. We find robust adaptive trajectories that rely on cells with high evolvability rapidly exploring the phenotypic landscape and reaching the proliferative potential with the highest fitness. The strength of selection on the proliferative potential, and the cost associated with evolvability, can alter these trajectories such that, if both are sufficiently constraining, highly evolvable populations can become extinct in our individual-based model simulations. We explore the impact of this interaction at various scales, discussing its effects in undisturbed environments and also in disrupted contexts, such as cancer.
[ { "created": "Fri, 9 Feb 2024 13:15:32 GMT", "version": "v1" } ]
2024-02-12
[ [ "Jiménez-Sánchez", "Juan", "" ], [ "Ortega-Sabater", "Carmen", "" ], [ "Maini", "Philip K.", "" ], [ "Pérez-García", "Víctor M.", "" ], [ "Lorenzi", "Tommaso", "" ] ]
Evolvability is defined as the ability of a population to generate heritable variation to facilitate its adaptation to new environments or selection pressures. In this article, we consider evolvability as a phenotypic trait subject to evolution and discuss its implications in the adaptation of cell populations. We explore the evolutionary dynamics of an actively proliferating population of cells subject to changes in their proliferative potential and their evolvability using a stochastic individual-based model and its deterministic continuum counterpart through numerical simulations of these models. We find robust adaptive trajectories that rely on cells with high evolvability rapidly exploring the phenotypic landscape and reaching the proliferative potential with the highest fitness. The strength of selection on the proliferative potential, and the cost associated with evolvability, can alter these trajectories such that, if both are sufficiently constraining, highly evolvable populations can become extinct in our individual-based model simulations. We explore the impact of this interaction at various scales, discussing its effects in undisturbed environments and also in disrupted contexts, such as cancer.
2006.14158
Helena Stage
Helena B. Stage, Joseph Shingleton, Sanmitra Ghosh, Francesca Scarabel, Lorenzo Pellis, Thomas Finnie
Shut and re-open: the role of schools in the spread of COVID-19 in Europe
null
null
10.1098/rstb.2020.0277
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the effect of school closure and subsequent reopening on the transmission of COVID-19, by considering Denmark, Norway, Sweden, and German states as case studies. By comparing the growth rates in daily hospitalisations or confirmed cases under different interventions, we provide evidence that the effect of school closure is visible as a reduction in the growth rate approximately 9 days after implementation. Limited school attendance, such as older students sitting exams or the partial return of younger year groups, does not appear to significantly affect community transmission. A large-scale reopening of schools while controlling or suppressing the epidemic appears feasible in countries such as Denmark or Norway, where community transmission is generally low. However, school reopening can contribute to significant increases in the growth rate in countries like Germany, where community transmission is relatively high. Our findings underscore the need for a cautious evaluation of reopening strategies that ensure low classroom occupancy and a solid infrastructure to quickly identify and isolate new infections.
[ { "created": "Thu, 25 Jun 2020 03:56:36 GMT", "version": "v1" }, { "created": "Mon, 26 Apr 2021 09:59:17 GMT", "version": "v2" } ]
2021-11-18
[ [ "Stage", "Helena B.", "" ], [ "Shingleton", "Joseph", "" ], [ "Ghosh", "Sanmitra", "" ], [ "Scarabel", "Francesca", "" ], [ "Pellis", "Lorenzo", "" ], [ "Finnie", "Thomas", "" ] ]
We investigate the effect of school closure and subsequent reopening on the transmission of COVID-19, by considering Denmark, Norway, Sweden, and German states as case studies. By comparing the growth rates in daily hospitalisations or confirmed cases under different interventions, we provide evidence that the effect of school closure is visible as a reduction in the growth rate approximately 9 days after implementation. Limited school attendance, such as older students sitting exams or the partial return of younger year groups, does not appear to significantly affect community transmission. A large-scale reopening of schools while controlling or suppressing the epidemic appears feasible in countries such as Denmark or Norway, where community transmission is generally low. However, school reopening can contribute to significant increases in the growth rate in countries like Germany, where community transmission is relatively high. Our findings underscore the need for a cautious evaluation of reopening strategies that ensure low classroom occupancy and a solid infrastructure to quickly identify and isolate new infections.
2102.13210
Ugo Bastolla
Ugo Bastolla, Patrick Chambers, David Abia, Maria-Laura Garc\'ia-Bermejo and Manuel Fresno
Is Covid-19 severity associated with ACE2 degradation?
null
null
null
null
q-bio.TO q-bio.MN
http://creativecommons.org/licenses/by-nc-sa/4.0/
Covid-19 is particularly mild in children, and its severity escalates with age. Several theories have been proposed to explain these facts. In particular, it was proposed that the lower expression of the viral receptor ACE2 in children protects them from severe Covid-19. However, other works suggested an inverse relationship between ACE2 expression and disease severity. Here we try to reconcile these seemingly contradictory observations by noting that ACE2 is not monotonically related to age: it reaches a maximum at a young age that depends on the cell type and then decreases. This pattern is consistent with most existing data from humans and rodents, and it is expected to be more marked for ACE2 cell protein than for mRNA because of the increase with age of the protease TACE/ADAM17, which sheds ACE2 from the cell membrane into the serum. The negative relation between ACE2 levels and Covid-19 severity at old age is not paradoxical but is consistent with a mathematical model of virus propagation that predicts that higher viral receptor levels do not necessarily favour virus propagation and can even slow it down. More importantly, ACE2 is known to protect organs from chronic and acute inflammation, both of which are worsened by low ACE2 levels. Here we propose that ACE2 contributes essentially to reversing the inflammatory process by downregulating the pro-inflammatory peptides of the angiotensin and bradykinin systems, and that failure to reverse the inflammation triggered by SARS-CoV-2 may underlie both severe Covid-19 infection and its many post-infection manifestations, including the multisystem inflammatory syndrome in children (MIS-C). Within this view, lower severity in children despite lower ACE2 expression may be consistent with their higher expression of the alternative angiotensin II receptor ATR2 and, in general, of the anti-inflammatory arm of the Renin-Angiotensin System (RAS) at a young age.
[ { "created": "Mon, 22 Feb 2021 11:11:41 GMT", "version": "v1" } ]
2021-03-01
[ [ "Bastolla", "Ugo", "" ], [ "Chambers", "Patrick", "" ], [ "Abia", "David", "" ], [ "García-Bermejo", "Maria-Laura", "" ], [ "Fresno", "Manuel", "" ] ]
Covid-19 is particularly mild in children, and its severity escalates with age. Several theories have been proposed to explain these facts. In particular, it was proposed that the lower expression of the viral receptor ACE2 in children protects them from severe Covid-19. However, other works suggested an inverse relationship between ACE2 expression and disease severity. Here we try to reconcile these seemingly contradictory observations by noting that ACE2 is not monotonically related to age: it reaches a maximum at a young age that depends on the cell type and then decreases. This pattern is consistent with most existing data from humans and rodents, and it is expected to be more marked for ACE2 cell protein than for mRNA because of the increase with age of the protease TACE/ADAM17, which sheds ACE2 from the cell membrane into the serum. The negative relation between ACE2 levels and Covid-19 severity at old age is not paradoxical but is consistent with a mathematical model of virus propagation that predicts that higher viral receptor levels do not necessarily favour virus propagation and can even slow it down. More importantly, ACE2 is known to protect organs from chronic and acute inflammation, both of which are worsened by low ACE2 levels. Here we propose that ACE2 contributes essentially to reversing the inflammatory process by downregulating the pro-inflammatory peptides of the angiotensin and bradykinin systems, and that failure to reverse the inflammation triggered by SARS-CoV-2 may underlie both severe Covid-19 infection and its many post-infection manifestations, including the multisystem inflammatory syndrome in children (MIS-C). Within this view, lower severity in children despite lower ACE2 expression may be consistent with their higher expression of the alternative angiotensin II receptor ATR2 and, in general, of the anti-inflammatory arm of the Renin-Angiotensin System (RAS) at a young age.
2101.12210
David Medina-Ortiz Mr
Cristofer Quiroz, Yasna Barrera Saavedra, Benjam\'in Armijo-Galdames, Juan Amado-Hinojosa, \'Alvaro Olivera-Nappa, Anamaria Sanchez-Daza, and David Medina-Ortiz
Peptipedia: a comprehensive database for peptide research supported by Assembled predictive models and Data Mining approaches
null
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by/4.0/
Motivation: Peptides have attracted attention in this century due to their remarkable therapeutic properties. Computational tools are being developed to take advantage of existing information, encapsulating knowledge and making it available in a simple way for general public use. However, these are property-specific, redundant data systems that usually do not display the data in a clear way. In some cases, downloading the information is not even possible. This data needs to be available in a simple form for drug design and other biotechnological applications. Results: We developed Peptipedia, a user-friendly database and web application to search, characterise and analyse peptide sequences. Our tool integrates the information from thirty previously reported databases, making it the largest repository of peptides with recorded activities so far. In addition, we implemented a variety of services to increase our tool's usability. The significant differences between our tool and other existing alternatives constitute a substantial contribution to the development of biotechnological and bioengineering applications for peptides. Availability: Peptipedia is available for non-commercial use as an open-access software, licensed under the GNU General Public License, version GPL 3.0. The web platform is publicly available at pesb2.cl/peptipedia. Both the source code and sample datasets are available in the GitHub repository https://github.com/CristoferQ/PeptideDatabase. Contact: david.medina@cebib.cl, ana.sanchez@ing.uchile.cl
[ { "created": "Thu, 28 Jan 2021 10:59:51 GMT", "version": "v1" } ]
2021-02-01
[ [ "Quiroz", "Cristofer", "" ], [ "Saavedra", "Yasna Barrera", "" ], [ "Armijo-Galdames", "Benjamín", "" ], [ "Amado-Hinojosa", "Juan", "" ], [ "Olivera-Nappa", "Álvaro", "" ], [ "Sanchez-Daza", "Anamaria", "" ], [ "Medina-Ortiz", "David", "" ] ]
Motivation: Peptides have attracted attention in this century due to their remarkable therapeutic properties. Computational tools are being developed to take advantage of existing information, encapsulating knowledge and making it available in a simple way for general public use. However, these are property-specific, redundant data systems that usually do not display the data in a clear way. In some cases, downloading the information is not even possible. This data needs to be available in a simple form for drug design and other biotechnological applications. Results: We developed Peptipedia, a user-friendly database and web application to search, characterise and analyse peptide sequences. Our tool integrates the information from thirty previously reported databases, making it the largest repository of peptides with recorded activities so far. In addition, we implemented a variety of services to increase our tool's usability. The significant differences between our tool and other existing alternatives constitute a substantial contribution to the development of biotechnological and bioengineering applications for peptides. Availability: Peptipedia is available for non-commercial use as an open-access software, licensed under the GNU General Public License, version GPL 3.0. The web platform is publicly available at pesb2.cl/peptipedia. Both the source code and sample datasets are available in the GitHub repository https://github.com/CristoferQ/PeptideDatabase. Contact: david.medina@cebib.cl, ana.sanchez@ing.uchile.cl
0910.4943
Daniel Segre'
W. J. Riehl, P. L. Krapivsky, S. Redner and D. Segre
Signatures of arithmetic simplicity in metabolic network architecture
null
PLoS Comput Biol 6(4): e1000725 (2010)
10.1371/journal.pcbi.1000725
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metabolic networks perform some of the most fundamental functions in living cells, including energy transduction and building block biosynthesis. While these are the best characterized networks in living systems, understanding their evolutionary history and complex wiring constitutes one of the most fascinating open questions in biology, intimately related to the enigma of life's origin itself. Is the evolution of metabolism subject to general principles, beyond the unpredictable accumulation of multiple historical accidents? Here we search for such principles by applying to an artificial chemical universe some of the methodologies developed for the study of genome scale models of cellular metabolism. In particular, we use metabolic flux constraint-based models to exhaustively search for artificial chemistry pathways that can optimally perform an array of elementary metabolic functions. Despite the simplicity of the model employed, we find that the ensuing pathways display a surprisingly rich set of properties, including the existence of autocatalytic cycles and hierarchical modules, the appearance of universally preferable metabolites and reactions, and a logarithmic trend of pathway length as a function of input/output molecule size. Some of these properties can be derived analytically, borrowing methods previously used in cryptography. In addition, by mapping biochemical networks onto a simplified carbon atom reaction backbone, we find that several of the properties predicted by the artificial chemistry model hold for real metabolic networks. These findings suggest that optimality principles and arithmetic simplicity might lie beneath some aspects of biochemical complexity.
[ { "created": "Mon, 26 Oct 2009 19:47:41 GMT", "version": "v1" }, { "created": "Fri, 2 Apr 2010 14:49:02 GMT", "version": "v2" } ]
2010-04-05
[ [ "Riehl", "W. J.", "" ], [ "Krapivsky", "P. L.", "" ], [ "Redner", "S.", "" ], [ "Segre", "D.", "" ] ]
Metabolic networks perform some of the most fundamental functions in living cells, including energy transduction and building block biosynthesis. While these are the best characterized networks in living systems, understanding their evolutionary history and complex wiring constitutes one of the most fascinating open questions in biology, intimately related to the enigma of life's origin itself. Is the evolution of metabolism subject to general principles, beyond the unpredictable accumulation of multiple historical accidents? Here we search for such principles by applying to an artificial chemical universe some of the methodologies developed for the study of genome scale models of cellular metabolism. In particular, we use metabolic flux constraint-based models to exhaustively search for artificial chemistry pathways that can optimally perform an array of elementary metabolic functions. Despite the simplicity of the model employed, we find that the ensuing pathways display a surprisingly rich set of properties, including the existence of autocatalytic cycles and hierarchical modules, the appearance of universally preferable metabolites and reactions, and a logarithmic trend of pathway length as a function of input/output molecule size. Some of these properties can be derived analytically, borrowing methods previously used in cryptography. In addition, by mapping biochemical networks onto a simplified carbon atom reaction backbone, we find that several of the properties predicted by the artificial chemistry model hold for real metabolic networks. These findings suggest that optimality principles and arithmetic simplicity might lie beneath some aspects of biochemical complexity.
2201.07353
Heyrim Cho
Runpeng Li, Prativa Sahoo, Dongrui Wang, Qixuan Wang, Christine E. Brown, Russell C. Rockne, Heyrim Cho
Modeling interaction of Glioma cells and CAR T-cells considering multiple CAR T-cells bindings
12 pages, 9 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chimeric antigen receptor (CAR) T-cell based immunotherapy has shown its potential in treating blood cancers, and its application to solid tumors is currently being extensively investigated. For glioma brain tumors, various CAR T-cell targets include IL13Ra2, EGFRvIII, HER2, EphA2, GD2, B7-H3, and chlorotoxin. In this work, we are interested in developing a mathematical model of IL13Ra2-targeting CAR T-cells for treating glioma. We focus on extending the work of Kuznetsov et al. (1994) by considering the binding of multiple CAR T-cells to a single glioma cell and the dynamics of these multi-cellular conjugates. Our model describes experimentally observed CAR T-cell killing assay data more accurately than a model which does not consider cell binding. Moreover, we derive conditions on the CAR T-cell expansion rate that determine treatment success or failure. Finally, we show that our model captures distinct CAR T-cell killing dynamics at low, medium, and high antigen receptor densities in patient-derived brain tumor cells.
[ { "created": "Tue, 18 Jan 2022 23:21:05 GMT", "version": "v1" } ]
2022-01-20
[ [ "Li", "Runpeng", "" ], [ "Sahoo", "Prativa", "" ], [ "Wang", "Dongrui", "" ], [ "Wang", "Qixuan", "" ], [ "Brown", "Christine E.", "" ], [ "Rockne", "Russell C.", "" ], [ "Cho", "Heyrim", "" ] ]
Chimeric antigen receptor (CAR) T-cell based immunotherapy has shown its potential in treating blood cancers, and its application to solid tumors is currently being extensively investigated. For glioma brain tumors, various CAR T-cell targets include IL13Ra2, EGFRvIII, HER2, EphA2, GD2, B7-H3, and chlorotoxin. In this work, we are interested in developing a mathematical model of IL13Ra2-targeting CAR T-cells for treating glioma. We focus on extending the work of Kuznetsov et al. (1994) by considering the binding of multiple CAR T-cells to a single glioma cell and the dynamics of these multi-cellular conjugates. Our model describes experimentally observed CAR T-cell killing assay data more accurately than a model which does not consider cell binding. Moreover, we derive conditions on the CAR T-cell expansion rate that determine treatment success or failure. Finally, we show that our model captures distinct CAR T-cell killing dynamics at low, medium, and high antigen receptor densities in patient-derived brain tumor cells.
1908.08807
Badong Chen
Hao Wu, Ziyu Zhu, Jiayi Wang, Nanning Zheng, Badong Chen
An encoding framework with brain inner state for natural image identification
null
null
null
null
q-bio.NC cs.LG eess.IV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural encoding and decoding, which aim to characterize the relationship between stimuli and brain activities, have emerged as an important area in cognitive neuroscience. Traditional encoding models, which focus on feature extraction and mapping, treat the brain as an input-output mapper without inner states. In this work, inspired by the fact that the human brain acts like a state machine, we propose a novel encoding framework that combines information from both the external world and the inner state to predict brain activity. The framework comprises two parts: a forward encoding model that deals with visual stimuli and an inner state model that captures the influence of intrinsic connections in the brain. The forward model can be any traditional encoding model, making the framework flexible. The inner state model is a linear model that utilizes the information in the prediction residuals of the forward model. The proposed encoding framework achieves much better performance on natural image identification from fMRI responses than forward-only models. The identification accuracy decreases slightly as the dataset size increases, but remains relatively stable across different identification methods. The results confirm that the new encoding framework is effective and robust when used for brain decoding.
[ { "created": "Thu, 22 Aug 2019 13:41:49 GMT", "version": "v1" } ]
2019-08-26
[ [ "Wu", "Hao", "" ], [ "Zhu", "Ziyu", "" ], [ "Wang", "Jiayi", "" ], [ "Zheng", "Nanning", "" ], [ "Chen", "Badong", "" ] ]
Neural encoding and decoding, which aim to characterize the relationship between stimuli and brain activities, have emerged as an important area in cognitive neuroscience. Traditional encoding models, which focus on feature extraction and mapping, treat the brain as an input-output mapper without inner states. In this work, inspired by the fact that the human brain acts like a state machine, we propose a novel encoding framework that combines information from both the external world and the inner state to predict brain activity. The framework comprises two parts: a forward encoding model that deals with visual stimuli and an inner state model that captures the influence of intrinsic connections in the brain. The forward model can be any traditional encoding model, making the framework flexible. The inner state model is a linear model that utilizes the information in the prediction residuals of the forward model. The proposed encoding framework achieves much better performance on natural image identification from fMRI responses than forward-only models. The identification accuracy decreases slightly as the dataset size increases, but remains relatively stable across different identification methods. The results confirm that the new encoding framework is effective and robust when used for brain decoding.
2201.07590
Geir Storvik
Geir Storvik, Alfonso Diz-Lois Palomares, Solveig Engebretsen, Gunnar {\O}yvind Isaksson R{\o}, Kenth Eng{\o}-Monsen, Aja Br{\aa}then Kristoffersen, Birgitte Freiesleben de Blasio, Arnoldo Frigessi
A sequential Monte Carlo approach to estimate a time varying reproduction number in infectious disease models: the Covid-19 case
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
During its first months, the Covid-19 pandemic required most countries to implement complex sequences of non-pharmaceutical interventions, with the aim of controlling the transmission of the virus in the population. To be able to take rapid decisions, a detailed understanding of the current situation is necessary. Estimates of time-varying, instantaneous reproduction numbers represent a way to quantify viral transmission in real time. They are often defined through a mathematical compartmental model of the epidemic, like a stochastic SEIR model, whose parameters must be estimated from multiple time series of epidemiological data. Because of very high-dimensional parameter spaces (partly due to the stochasticity in the spread models) and incomplete and delayed data, inference is very challenging. We propose a state-space formalisation of the model and a sequential Monte Carlo approach which allow us to estimate a daily-varying reproduction number for the Covid-19 epidemic in Norway with sufficient precision, on the basis of daily hospitalisation and positive test incidences. The method is in regular use in Norway and is a powerful instrument for epidemic monitoring and management.
[ { "created": "Wed, 19 Jan 2022 13:26:47 GMT", "version": "v1" } ]
2023-02-27
[ [ "Storvik", "Geir", "" ], [ "Palomares", "Alfonso Diz-Lois", "" ], [ "Engebretsen", "Solveig", "" ], [ "Rø", "Gunnar Øyvind Isaksson", "" ], [ "Engø-Monsen", "Kenth", "" ], [ "Kristoffersen", "Aja Bråthen", "" ], [ "de Blasio", "Birgitte Freiesleben", "" ], [ "Frigessi", "Arnoldo", "" ] ]
During its first months, the Covid-19 pandemic required most countries to implement complex sequences of non-pharmaceutical interventions, with the aim of controlling the transmission of the virus in the population. To be able to take rapid decisions, a detailed understanding of the current situation is necessary. Estimates of time-varying, instantaneous reproduction numbers represent a way to quantify viral transmission in real time. They are often defined through a mathematical compartmental model of the epidemic, like a stochastic SEIR model, whose parameters must be estimated from multiple time series of epidemiological data. Because of very high-dimensional parameter spaces (partly due to the stochasticity in the spread models) and incomplete and delayed data, inference is very challenging. We propose a state-space formalisation of the model and a sequential Monte Carlo approach which allow us to estimate a daily-varying reproduction number for the Covid-19 epidemic in Norway with sufficient precision, on the basis of daily hospitalisation and positive test incidences. The method is in regular use in Norway and is a powerful instrument for epidemic monitoring and management.
2311.10383
Matthieu M. De Wit
Vicente Raja, Matthieu M. de Wit
Affordance switching in self-organizing brain-body-environment systems
To be published in: Mangalam, M., Hajnal, A., & Kelty-Stephen, D.G. (Eds.). (2024). The Modern Legacy of Gibson's Affordances for the Sciences of Organisms. Routledge, New York, NY
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
In the ecological approach to perception and action, information that specifies affordances is available in the energy arrays surrounding organisms, and this information is detected by organisms in order to perceptually guide their actions. At the behavioral scale, organisms responding to affordances are understood as self-organizing and reorganizing softly-assembled synergies. Within the ecological community, little effort has so far been devoted to studying this process at the neural scale, though interest in the topic is growing under the header of ecological neuroscience. From this perspective, switches between affordances may be conceptualized as transitions within brain-body-environment systems as a whole rather than under the control of a privileged (neural) scale. We discuss extant empirical research at the behavioral scale in support of this view as well as ongoing and planned work at the neural scale that attempts to further flesh out this view by characterizing the neural dynamics that are associated with these transitions while participants dynamically respond to affordances.
[ { "created": "Fri, 17 Nov 2023 08:20:11 GMT", "version": "v1" } ]
2023-11-20
[ [ "Raja", "Vicente", "" ], [ "de Wit", "Matthieu M.", "" ] ]
In the ecological approach to perception and action, information that specifies affordances is available in the energy arrays surrounding organisms, and this information is detected by organisms in order to perceptually guide their actions. At the behavioral scale, organisms responding to affordances are understood as self-organizing and reorganizing softly-assembled synergies. Within the ecological community, little effort has so far been devoted to studying this process at the neural scale, though interest in the topic is growing under the header of ecological neuroscience. From this perspective, switches between affordances may be conceptualized as transitions within brain-body-environment systems as a whole rather than under the control of a privileged (neural) scale. We discuss extant empirical research at the behavioral scale in support of this view as well as ongoing and planned work at the neural scale that attempts to further flesh out this view by characterizing the neural dynamics that are associated with these transitions while participants dynamically respond to affordances.
2405.04644
Giovanni Di Muccio
Lucia Coronel, Giovanni Di Muccio, Brad Rothberg, Alberto Giacomello, Vincenzo Carnevale
Lipid-mediated hydrophobic gating in the BK potassium channel
null
null
null
null
q-bio.BM physics.bio-ph physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The large-conductance, calcium-activated potassium (BK) channel lacks the typical intracellular bundle-crossing gate present in most ion channels of the 6TM family. This observation, initially inferred from Ca$^{2+}$-free-pore accessibility experiments and recently corroborated by a CryoEM structure of the non-conductive state, raises a puzzling question: how can gating occur in absence of steric hindrance? To answer this question, we carried out molecular simulations and accurate free energy calculations to obtain a microscopic picture of the sequence of events that, starting from a Ca$^{2+}$-free state leads to ion conduction upon Ca$^{2+}$ binding. Our results highlight an unexpected role for annular lipids, which turn out to be an integral part of the gating machinery. Due to the presence of fenestrations, the "closed" Ca$^{2+}$-free pore can be occupied by the methyl groups from the lipid alkyl chains. This dynamic occupancy triggers and stabilizes the nucleation of a vapor bubble into the inner pore cavity, thus hindering ion conduction. By contrast, Ca$^{2+}$ binding results into a displacement of these lipids outside the inner cavity, lowering the hydrophobicity of this region and thus allowing for pore hydration and conduction. This lipid-mediated hydrophobic gating rationalizes several seemingly problematic experimental observations, including the state-dependent pore accessibility of blockers.
[ { "created": "Tue, 7 May 2024 20:08:49 GMT", "version": "v1" } ]
2024-05-09
[ [ "Coronel", "Lucia", "" ], [ "Di Muccio", "Giovanni", "" ], [ "Rothberg", "Brad", "" ], [ "Giacomello", "Alberto", "" ], [ "Carnevale", "Vincenzo", "" ] ]
The large-conductance, calcium-activated potassium (BK) channel lacks the typical intracellular bundle-crossing gate present in most ion channels of the 6TM family. This observation, initially inferred from Ca$^{2+}$-free-pore accessibility experiments and recently corroborated by a CryoEM structure of the non-conductive state, raises a puzzling question: how can gating occur in absence of steric hindrance? To answer this question, we carried out molecular simulations and accurate free energy calculations to obtain a microscopic picture of the sequence of events that, starting from a Ca$^{2+}$-free state leads to ion conduction upon Ca$^{2+}$ binding. Our results highlight an unexpected role for annular lipids, which turn out to be an integral part of the gating machinery. Due to the presence of fenestrations, the "closed" Ca$^{2+}$-free pore can be occupied by the methyl groups from the lipid alkyl chains. This dynamic occupancy triggers and stabilizes the nucleation of a vapor bubble into the inner pore cavity, thus hindering ion conduction. By contrast, Ca$^{2+}$ binding results into a displacement of these lipids outside the inner cavity, lowering the hydrophobicity of this region and thus allowing for pore hydration and conduction. This lipid-mediated hydrophobic gating rationalizes several seemingly problematic experimental observations, including the state-dependent pore accessibility of blockers.
1406.6957
Ville Mustonen
Andrej Fischer, Ignacio Vazquez-Garcia and Ville Mustonen
The value of monitoring to control evolving populations
27 pages
null
10.1073/pnas.1409403112
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Populations can evolve in order to adapt to external changes. The capacity to evolve and adapt makes successful treatment of infectious diseases and cancer difficult. Indeed, therapy resistance has quickly become a key challenge for global health. Therefore, ideas of how to control evolving populations in order to overcome this threat are valuable. Here we use the mathematical concepts of stochastic optimal control to study what is needed to control evolving populations. Following established routes to calculate control strategies, we first study how a polymorphism can be maintained in a finite population by adaptively tuning selection. We then introduce a minimal model of drug resistance in a stochastically evolving cancer cell population and compute adaptive therapies, where decisions are based on monitoring the response of the tumor, which can outperform established therapy paradigms. For both case studies, we demonstrate the importance of high-resolution monitoring of the target population in order to achieve a given control objective: to control one must monitor.
[ { "created": "Thu, 26 Jun 2014 17:52:11 GMT", "version": "v1" } ]
2015-06-22
[ [ "Fischer", "Andrej", "" ], [ "Vazquez-Garcia", "Ignacio", "" ], [ "Mustonen", "Ville", "" ] ]
Populations can evolve in order to adapt to external changes. The capacity to evolve and adapt makes successful treatment of infectious diseases and cancer difficult. Indeed, therapy resistance has quickly become a key challenge for global health. Therefore, ideas of how to control evolving populations in order to overcome this threat are valuable. Here we use the mathematical concepts of stochastic optimal control to study what is needed to control evolving populations. Following established routes to calculate control strategies, we first study how a polymorphism can be maintained in a finite population by adaptively tuning selection. We then introduce a minimal model of drug resistance in a stochastically evolving cancer cell population and compute adaptive therapies, where decisions are based on monitoring the response of the tumor, which can outperform established therapy paradigms. For both case studies, we demonstrate the importance of high-resolution monitoring of the target population in order to achieve a given control objective: to control one must monitor.
q-bio/0310005
Rodrick Wallace
Rodrick Wallace, Deborah Wallace
Structured psychosocial stress and therapeutic failure
18 pages, 4 figures
null
null
null
q-bio.NC
null
Generalized language-of-thought arguments appropriate to interacting cognitive modules permit exploration of how disease states interact with medical treatment. The interpenetrating feedback between treatment and response to it creates a kind of idiotypic hall-of-mirrors generating a synergistic pattern of efficacy, treatment failure, adverse reactions, and patient noncompliance which, from a Rate Distortion perspective, embodies a distorted image of externally-imposed structured psychosocial stress. For the US, accelerating spatial and social diffusion of such stress enmeshes both dominant and subordinate populations in a linked system which will express itself, not only in an increasingly unhealthy society, but in the diffusion of therapeutic failure, including, but not limited to, drug-based treatments.
[ { "created": "Tue, 7 Oct 2003 18:00:50 GMT", "version": "v1" }, { "created": "Fri, 24 Oct 2003 18:43:55 GMT", "version": "v2" } ]
2007-05-23
[ [ "Wallace", "Rodrick", "" ], [ "Wallace", "Deborah", "" ] ]
Generalized language-of-thought arguments appropriate to interacting cognitive modules permit exploration of how disease states interact with medical treatment. The interpenetrating feedback between treatment and response to it creates a kind of idiotypic hall-of-mirrors generating a synergistic pattern of efficacy, treatment failure, adverse reactions, and patient noncompliance which, from a Rate Distortion perspective, embodies a distorted image of externally-imposed structured psychosocial stress. For the US, accelerating spatial and social diffusion of such stress enmeshes both dominant and subordinate populations in a linked system which will express itself, not only in an increasingly unhealthy society, but in the diffusion of therapeutic failure, including, but not limited to, drug-based treatments.
0806.0691
Vladimir Ivancevic
Tijana T. Ivancevic
Ricci Flow and Entropy Model for Avascular Tumor Growth and Decay Control
24 pages, 2 figures, Latex, revised
null
null
null
q-bio.CB q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prediction and control of cancer invasion is a vital problem in medical science. This paper proposes a modern geometric Ricci-flow- and entropy-based model for the control of avascular multicellular tumor spheroid growth and decay. As a tumor growth/decay control tool, a monoclonal antibody therapy is proposed. Keywords: avascular tumor growth and decay, multicellular tumor spheroid, Ricci flow and entropy, nonlinear heat equation, monoclonal antibody cancer therapy
[ { "created": "Wed, 4 Jun 2008 14:51:38 GMT", "version": "v1" }, { "created": "Thu, 5 Jun 2008 07:06:09 GMT", "version": "v2" }, { "created": "Tue, 10 Jun 2008 01:53:22 GMT", "version": "v3" }, { "created": "Sun, 31 Aug 2008 05:20:28 GMT", "version": "v4" } ]
2008-08-31
[ [ "Ivancevic", "Tijana T.", "" ] ]
Prediction and control of cancer invasion is a vital problem in medical science. This paper proposes a modern geometric Ricci-flow- and entropy-based model for the control of avascular multicellular tumor spheroid growth and decay. As a tumor growth/decay control tool, a monoclonal antibody therapy is proposed. Keywords: avascular tumor growth and decay, multicellular tumor spheroid, Ricci flow and entropy, nonlinear heat equation, monoclonal antibody cancer therapy
2209.15563
William Dorrell Mr
William Dorrell, Peter E. Latham, Timothy E.J. Behrens, James C.R. Whittington
Actionable Neural Representations: Grid Cells from Minimal Constraints
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
To afford flexible behaviour, the brain must build internal representations that mirror the structure of variables in the external world. For example, 2D space obeys rules: the same set of actions combine in the same way everywhere (step north, then south, and you won't have moved, wherever you start). We suggest the brain must represent this consistent meaning of actions across space, as it allows you to find new short-cuts and navigate in unfamiliar settings. We term this representation an `actionable representation'. We formulate actionable representations using group and representation theory, and show that, when combined with biological and functional constraints - non-negative firing, bounded neural activity, and precise coding - multiple modules of hexagonal grid cells are the optimal representation of 2D space. We support this claim with intuition, analytic justification, and simulations. Our analytic results normatively explain a set of surprising grid cell phenomena, and make testable predictions for future experiments. Lastly, we highlight the generality of our approach beyond just understanding 2D space. Our work characterises a new principle for understanding and designing flexible internal representations: they should be actionable, allowing animals and machines to predict the consequences of their actions, rather than just encode.
[ { "created": "Fri, 30 Sep 2022 16:20:51 GMT", "version": "v1" }, { "created": "Fri, 17 Feb 2023 05:11:04 GMT", "version": "v2" }, { "created": "Tue, 28 Feb 2023 21:05:06 GMT", "version": "v3" } ]
2023-03-02
[ [ "Dorrell", "William", "" ], [ "Latham", "Peter E.", "" ], [ "Behrens", "Timothy E. J.", "" ], [ "Whittington", "James C. R.", "" ] ]
To afford flexible behaviour, the brain must build internal representations that mirror the structure of variables in the external world. For example, 2D space obeys rules: the same set of actions combine in the same way everywhere (step north, then south, and you won't have moved, wherever you start). We suggest the brain must represent this consistent meaning of actions across space, as it allows you to find new short-cuts and navigate in unfamiliar settings. We term this representation an `actionable representation'. We formulate actionable representations using group and representation theory, and show that, when combined with biological and functional constraints - non-negative firing, bounded neural activity, and precise coding - multiple modules of hexagonal grid cells are the optimal representation of 2D space. We support this claim with intuition, analytic justification, and simulations. Our analytic results normatively explain a set of surprising grid cell phenomena, and make testable predictions for future experiments. Lastly, we highlight the generality of our approach beyond just understanding 2D space. Our work characterises a new principle for understanding and designing flexible internal representations: they should be actionable, allowing animals and machines to predict the consequences of their actions, rather than just encode.
1406.6912
Brian Williams Dr
Brian G. Williams
HIV and TB in Eastern and Southern Africa: Evidence for behaviour change and the impact of ART
null
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The United Nations Joint Programme on HIV/AIDS (UNAIDS) has set a target to ensure that 15 million HIV-positive people in the world would be receiving combination anti-retroviral treatment (ART) by 2015. This target is likely to be reached and new targets for 2020 and 2030 are needed. Eastern and Southern Africa (ESA) account for approximately half of all people living with HIV in the world and it will be especially important to set reachable and affordable targets for this region. In order to make future projections of HIV and TB prevalence, incidence and mortality assuming different levels of ART scale-up and coverage, it is first necessary to assess the current state of the epidemic. Here we review national data on the prevalence of HIV, the coverage of ART and the notification rates of TB to provide a firm basis for making future projections. We use the data to assess the extent to which behaviour change and ART have reduced the number of people living with HIV who remain infectious.
[ { "created": "Wed, 25 Jun 2014 19:15:25 GMT", "version": "v1" } ]
2014-06-27
[ [ "Williams", "Brian G.", "" ] ]
The United Nations Joint Programme on HIV/AIDS (UNAIDS) has set a target to ensure that 15 million HIV-positive people in the world would be receiving combination anti-retroviral treatment (ART) by 2015. This target is likely to be reached and new targets for 2020 and 2030 are needed. Eastern and Southern Africa (ESA) account for approximately half of all people living with HIV in the world and it will be especially important to set reachable and affordable targets for this region. In order to make future projections of HIV and TB prevalence, incidence and mortality assuming different levels of ART scale-up and coverage, it is first necessary to assess the current state of the epidemic. Here we review national data on the prevalence of HIV, the coverage of ART and the notification rates of TB to provide a firm basis for making future projections. We use the data to assess the extent to which behaviour change and ART have reduced the number of people living with HIV who remain infectious.
1402.0275
N. Michael Mayer
Norbert Michael Mayer
Aging as a mean to retain an adaptive mutation rate in mutagenesis with asymmetric reproduction
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper discusses a connection between asymmetric reproduction -- that is, reproduction in a parent-child relationship where the parent does not mutate during reproduction -- the fact that all non-viral lifeforms bear genes of their reproduction machinery, and how this could relate to evolutionary mechanisms behind aging. In a highly simplified model of the evolution process, rules are derived for when aging is an important factor of adaptation in the evolution process, which groups of life-forms necessarily have to age, and where exceptions from that rule are possible.
[ { "created": "Mon, 3 Feb 2014 03:28:37 GMT", "version": "v1" }, { "created": "Sat, 30 May 2015 06:17:09 GMT", "version": "v2" }, { "created": "Sun, 9 Aug 2015 10:24:22 GMT", "version": "v3" } ]
2015-08-11
[ [ "Mayer", "Norbert Michael", "" ] ]
The paper discusses a connection between asymmetric reproduction -- that is, reproduction in a parent-child relationship where the parent does not mutate during reproduction -- the fact that all non-viral lifeforms bear genes of their reproduction machinery, and how this could relate to evolutionary mechanisms behind aging. In a highly simplified model of the evolution process, rules are derived for when aging is an important factor of adaptation in the evolution process, which groups of life-forms necessarily have to age, and where exceptions from that rule are possible.
2104.10530
Agnese Barbensi
Agnese Barbensi, Naya Yerolemou, Oliver Vipond, Barbara I. Mahler, Pawel Dabrowski-Tumanski, Dimos Goundaroulis
A topological selection of folding pathways from native states of knotted proteins
5 figures, Supplementary information contained as an appendix
null
null
null
q-bio.BM math.GT
http://creativecommons.org/licenses/by/4.0/
Understanding the biological function of knots in proteins and their folding process is an open and challenging question in biology. Recent studies classify the topology and geometry of knotted proteins by analysing the distribution of a protein's planar projections using topological objects called knotoids. We approach the analysis of proteins with the same topology by introducing a topologically inspired statistical metric between their knotoid distributions. We detect geometric differences between trefoil proteins by characterising their entanglement and we recover a clustering by sequence similarity. By looking directly at the geometry and topology of their native states, we are able to probe different folding pathways for proteins forming open-ended trefoil knots. Interestingly, our pipeline reveals that the folding pathway of shallow knotted Carbonic Anhydrases involves the creation of a double-looped structure, differently from what was previously observed for deeply knotted trefoil proteins. We validate this with Molecular Dynamics simulations.
[ { "created": "Wed, 21 Apr 2021 13:41:54 GMT", "version": "v1" } ]
2021-04-22
[ [ "Barbensi", "Agnese", "" ], [ "Yerolemou", "Naya", "" ], [ "Vipond", "Oliver", "" ], [ "Mahler", "Barbara I.", "" ], [ "Dabrowski-Tumanski", "Pawel", "" ], [ "Goundaroulis", "Dimos", "" ] ]
Understanding the biological function of knots in proteins and their folding process is an open and challenging question in biology. Recent studies classify the topology and geometry of knotted proteins by analysing the distribution of a protein's planar projections using topological objects called knotoids. We approach the analysis of proteins with the same topology by introducing a topologically inspired statistical metric between their knotoid distributions. We detect geometric differences between trefoil proteins by characterising their entanglement and we recover a clustering by sequence similarity. By looking directly at the geometry and topology of their native states, we are able to probe different folding pathways for proteins forming open-ended trefoil knots. Interestingly, our pipeline reveals that the folding pathway of shallow knotted Carbonic Anhydrases involves the creation of a double-looped structure, differently from what was previously observed for deeply knotted trefoil proteins. We validate this with Molecular Dynamics simulations.
1711.00755
Sergei Maslov
Akshit Goyal and Sergei Maslov
Diversity, stability, and reproducibility in stochastically assembled microbial ecosystems
null
Phys. Rev. Lett. 120, 158102 (2018)
10.1103/PhysRevLett.120.158102
null
q-bio.PE cond-mat.stat-mech q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Microbial ecosystems are remarkably diverse, stable, and often consist of a balanced mixture of core and peripheral species. Here we propose a conceptual model exhibiting all these emergent properties in quantitative agreement with real ecosystem data, specifically species' abundance and prevalence distributions. Resource competition and metabolic commensalism drive stochastic ecosystem assembly in our model. We demonstrate that even when supplied with just one resource, ecosystems can exhibit high diversity, increasing stability, and partial reproducibility between samples.
[ { "created": "Thu, 2 Nov 2017 14:19:35 GMT", "version": "v1" } ]
2018-04-18
[ [ "Goyal", "Akshit", "" ], [ "Maslov", "Sergei", "" ] ]
Microbial ecosystems are remarkably diverse, stable, and often consist of a balanced mixture of core and peripheral species. Here we propose a conceptual model exhibiting all these emergent properties in quantitative agreement with real ecosystem data, specifically species' abundance and prevalence distributions. Resource competition and metabolic commensalism drive stochastic ecosystem assembly in our model. We demonstrate that even when supplied with just one resource, ecosystems can exhibit high diversity, increasing stability, and partial reproducibility between samples.
1411.1722
Marta Luksza
Marta {\L}uksza, Trevor Bedford, Michael L\"assig
Epidemiological and evolutionary analysis of the 2014 Ebola virus outbreak
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The 2014 epidemic of the Ebola virus is governed by a genetically diverse viral population. In the early Sierra Leone outbreak, a recent study has identified new mutations that generate genetically distinct sequence clades. Here we find evidence that major Sierra Leone clades have systematic differences in growth rate and reproduction number. If this growth heterogeneity remains stable, it will generate major shifts in clade frequencies and influence the overall epidemic dynamics on time scales within the current outbreak. Our method is based on simple summary statistics of clade growth, which can be inferred from genealogical trees with an underlying clade-specific birth-death model of the infection dynamics. This method can be used to perform realtime tracking of an evolving epidemic and identify emerging clades of epidemiological or evolutionary significance.
[ { "created": "Thu, 6 Nov 2014 19:59:37 GMT", "version": "v1" } ]
2014-11-07
[ [ "Łuksza", "Marta", "" ], [ "Bedford", "Trevor", "" ], [ "Lässig", "Michael", "" ] ]
The 2014 epidemic of the Ebola virus is governed by a genetically diverse viral population. In the early Sierra Leone outbreak, a recent study has identified new mutations that generate genetically distinct sequence clades. Here we find evidence that major Sierra Leone clades have systematic differences in growth rate and reproduction number. If this growth heterogeneity remains stable, it will generate major shifts in clade frequencies and influence the overall epidemic dynamics on time scales within the current outbreak. Our method is based on simple summary statistics of clade growth, which can be inferred from genealogical trees with an underlying clade-specific birth-death model of the infection dynamics. This method can be used to perform realtime tracking of an evolving epidemic and identify emerging clades of epidemiological or evolutionary significance.
1704.01733
Manoj Gopalkrishnan
Muppirala Viswa Virinchi, Abhishek Behera, Manoj Gopalkrishnan
A stochastic molecular scheme for an artificial cell to infer its environment from partial observations
12 pages, 1 figure
null
null
null
q-bio.MN cond-mat.stat-mech cs.IT math.IT nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The notion of entropy is shared between statistics and thermodynamics, and is fundamental to both disciplines. This makes statistical problems particularly suitable for reaction network implementations. In this paper we show how to perform a statistical operation known as Information Projection or E projection with stochastic mass-action kinetics. Our scheme encodes desired conditional distributions as the equilibrium distributions of reaction systems. To our knowledge, this is the first scheme to exploit the inherent stochasticity of reaction networks for information processing. We apply this to the problem of an artificial cell trying to infer its environment from partial observations.
[ { "created": "Thu, 6 Apr 2017 07:42:50 GMT", "version": "v1" } ]
2017-04-07
[ [ "Virinchi", "Muppirala Viswa", "" ], [ "Behera", "Abhishek", "" ], [ "Gopalkrishnan", "Manoj", "" ] ]
The notion of entropy is shared between statistics and thermodynamics, and is fundamental to both disciplines. This makes statistical problems particularly suitable for reaction network implementations. In this paper we show how to perform a statistical operation known as Information Projection or E projection with stochastic mass-action kinetics. Our scheme encodes desired conditional distributions as the equilibrium distributions of reaction systems. To our knowledge, this is the first scheme to exploit the inherent stochasticity of reaction networks for information processing. We apply this to the problem of an artificial cell trying to infer its environment from partial observations.
1608.08932
Mauro Mobilia
Mauro Mobilia, Alastair M. Rucklidge and Bartosz Szczesny
The Influence of Mobility Rate on Spiral Waves in Spatial Rock-Paper-Scissors Games
20 pages, 5 figures; published in Games
Games 7, 24 (2016)
10.3390/g7030024
null
q-bio.PE cond-mat.stat-mech nlin.PS physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a two-dimensional model of three species in rock-paper-scissors competition and study the self-organisation of the population into fascinating spiraling patterns. Within our individual-based metapopulation formulation, the population composition changes due to cyclic dominance (dominance-removal and dominance-replacement), mutations, and pair-exchange of neighboring individuals. Here, we study the influence of mobility on the emerging patterns and investigate when the pair-exchange rate is responsible for spiral waves to become elusive in stochastic lattice simulations. In particular, we show that the spiral waves predicted by the system's deterministic partial equations are found in lattice simulations only within a finite range of the mobility rate. We also report that in the absence of mutations and dominance-replacement, the resulting spiraling patterns are subject to convective instability and far-field breakup at low mobility rate. Possible applications of these resolution and far-field breakup phenomena are discussed.
[ { "created": "Wed, 31 Aug 2016 16:39:02 GMT", "version": "v1" }, { "created": "Mon, 12 Sep 2016 09:13:44 GMT", "version": "v2" } ]
2016-09-13
[ [ "Mobilia", "Mauro", "" ], [ "Rucklidge", "Alastair M.", "" ], [ "Szczesny", "Bartosz", "" ] ]
We consider a two-dimensional model of three species in rock-paper-scissors competition and study the self-organisation of the population into fascinating spiraling patterns. Within our individual-based metapopulation formulation, the population composition changes due to cyclic dominance (dominance-removal and dominance-replacement), mutations, and pair-exchange of neighboring individuals. Here, we study the influence of mobility on the emerging patterns and investigate when the pair-exchange rate is responsible for spiral waves to become elusive in stochastic lattice simulations. In particular, we show that the spiral waves predicted by the system's deterministic partial equations are found in lattice simulations only within a finite range of the mobility rate. We also report that in the absence of mutations and dominance-replacement, the resulting spiraling patterns are subject to convective instability and far-field breakup at low mobility rate. Possible applications of these resolution and far-field breakup phenomena are discussed.
1910.05673
Yusuke Himeoka
Yusuke Himeoka and Namiko Mitarai
Dynamics of bacterial populations under the feast-famine cycles
null
Phys. Rev. Research 2, 013372 (2020)
10.1103/PhysRevResearch.2.013372
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bacterial populations in natural conditions are expected to experience stochastic environmental fluctuations, and in addition, environments are affected by bacterial activities since they consume substrates and excrete various chemicals. We here study possible outcomes of population dynamics and evolution under the repeated cycle of substrate-rich conditions and starvation, called the "feast-famine cycle", by a simple stochastic model with the trade-off relationship between the growth rate and the growth yield or the death rate. In the model, the feast (substrate-rich) period is led by a stochastic substrate addition event, while the famine (starvation) period is evoked because bacteria use the supplied substrate. Under the repeated feast-famine cycle, the bacterial population tends to increase the growth rate, even though that tends to decrease the total population size due to the trade-off. Analysis of the model shows that the ratio between the growth rate and the death rate becomes the effective fitness of the population. Hence, the functional form of the trade-off between the growth and death rate determines if the bacterial population eventually goes extinct as an evolutionary consequence. We then show that the increase of the added substrate in the feast period can drive the extinction faster. Overall, the model sheds light on non-trivial possible outcomes under repeated feast-famine cycles.
[ { "created": "Sun, 13 Oct 2019 03:23:21 GMT", "version": "v1" }, { "created": "Tue, 15 Oct 2019 07:53:41 GMT", "version": "v2" }, { "created": "Tue, 28 Jan 2020 16:38:01 GMT", "version": "v3" }, { "created": "Tue, 10 Mar 2020 15:57:54 GMT", "version": "v4" } ]
2020-04-01
[ [ "Himeoka", "Yusuke", "" ], [ "Mitarai", "Namiko", "" ] ]
Bacterial populations in natural conditions are expected to experience stochastic environmental fluctuations, and in addition, environments are affected by bacterial activities since they consume substrates and excrete various chemicals. We here study possible outcomes of population dynamics and evolution under the repeated cycle of substrate-rich conditions and starvation, called the "feast-famine cycle", by a simple stochastic model with the trade-off relationship between the growth rate and the growth yield or the death rate. In the model, the feast (substrate-rich) period is led by a stochastic substrate addition event, while the famine (starvation) period is evoked because bacteria use the supplied substrate. Under the repeated feast-famine cycle, the bacterial population tends to increase the growth rate, even though that tends to decrease the total population size due to the trade-off. Analysis of the model shows that the ratio between the growth rate and the death rate becomes the effective fitness of the population. Hence, the functional form of the trade-off between the growth and death rate determines if the bacterial population eventually goes extinct as an evolutionary consequence. We then show that the increase of the added substrate in the feast period can drive the extinction faster. Overall, the model sheds light on non-trivial possible outcomes under repeated feast-famine cycles.
2201.03476
Guilherme Costa
Guilherme S. Costa, Wesley Cota and Silvio C. Ferreira
Data-driven approach in a compartmental epidemic model to assess undocumented infections
Revised version; 10 pages, 7 figures. Supplementary figures available as ancillary files
null
10.1016/j.chaos.2022.112520
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowcasting and forecasting of epidemic spreading rely on incidence series of reported cases to derive the fundamental epidemiological parameters for a given pathogen. Two relevant drawbacks for predictions are the unknown fractions of undocumented cases and levels of nonpharmacological interventions, which span highly heterogeneously across different places and times. We describe a simple data-driven approach using a compartmental model including asymptomatic and presymptomatic contagions that allows one to estimate both the level of undocumented infections and the value of the effective reproductive number R_t from time series of reported cases, deaths, and epidemiological parameters. The method was applied to epidemic series for COVID-19 across different municipalities in Brazil, allowing one to estimate the level of heterogeneity of under-reporting across different places. The reproductive number derived within the current framework is little sensitive to both diagnosis and infection rates during the asymptomatic states. The methods described here can be extended to more general cases if data is available and adapted to other epidemiological approaches and surveillance data.
[ { "created": "Mon, 10 Jan 2022 17:19:48 GMT", "version": "v1" }, { "created": "Tue, 12 Apr 2022 19:31:50 GMT", "version": "v2" } ]
2022-08-31
[ [ "Costa", "Guilherme S.", "" ], [ "Cota", "Wesley", "" ], [ "Ferreira", "Silvio C.", "" ] ]
Nowcasting and forecasting of epidemic spreading rely on incidence series of reported cases to derive the fundamental epidemiological parameters for a given pathogen. Two relevant drawbacks for predictions are the unknown fractions of undocumented cases and levels of nonpharmacological interventions, which span highly heterogeneously across different places and times. We describe a simple data-driven approach using a compartmental model including asymptomatic and presymptomatic contagions that allows one to estimate both the level of undocumented infections and the value of the effective reproductive number R_t from time series of reported cases, deaths, and epidemiological parameters. The method was applied to epidemic series for COVID-19 across different municipalities in Brazil, allowing one to estimate the level of heterogeneity of under-reporting across different places. The reproductive number derived within the current framework is little sensitive to both diagnosis and infection rates during the asymptomatic states. The methods described here can be extended to more general cases if data is available and adapted to other epidemiological approaches and surveillance data.
1404.0929
Heng Li
Heng Li
Towards Better Understanding of Artifacts in Variant Calling from High-Coverage Samples
Published version
Bioinformatics. 2014; 30:2843-51
10.1093/bioinformatics/btu356
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Whole-genome high-coverage sequencing has been widely used for personal and cancer genomics as well as in various research areas. However, in the absence of an unbiased whole-genome truth set, the global error rate of variant calls and the leading causal artifacts still remain unclear even given the great efforts in the evaluation of variant calling methods. Results: We made ten SNP and INDEL call sets with two read mappers and five variant callers, both on a haploid human genome and a diploid genome at a similar coverage. By investigating false heterozygous calls in the haploid genome, we identified the erroneous realignment in low-complexity regions and the incomplete reference genome with respect to the sample as the two major sources of errors, which press for continued improvements in these two areas. We estimated that the error rate of raw genotype calls is as high as 1 in 10-15kb, but the error rate of post-filtered calls is reduced to 1 in 100-200kb without significant compromise on the sensitivity. Availability: BWA-MEM alignment: http://bit.ly/1g8XqRt; Scripts: https://github.com/lh3/varcmp; Additional data: https://figshare.com/articles/Towards_better_understanding_of_artifacts_in_variating_calling_from_high_coverage_samples/981073
[ { "created": "Thu, 3 Apr 2014 14:24:59 GMT", "version": "v1" }, { "created": "Thu, 23 Jul 2015 01:35:22 GMT", "version": "v2" } ]
2018-07-27
[ [ "Li", "Heng", "" ] ]
Motivation: Whole-genome high-coverage sequencing has been widely used for personal and cancer genomics as well as in various research areas. However, in the absence of an unbiased whole-genome truth set, the global error rate of variant calls and the leading causal artifacts still remain unclear even given the great efforts in the evaluation of variant calling methods. Results: We made ten SNP and INDEL call sets with two read mappers and five variant callers, both on a haploid human genome and a diploid genome at a similar coverage. By investigating false heterozygous calls in the haploid genome, we identified the erroneous realignment in low-complexity regions and the incomplete reference genome with respect to the sample as the two major sources of errors, which press for continued improvements in these two areas. We estimated that the error rate of raw genotype calls is as high as 1 in 10-15kb, but the error rate of post-filtered calls is reduced to 1 in 100-200kb without significant compromise on the sensitivity. Availability: BWA-MEM alignment: http://bit.ly/1g8XqRt; Scripts: https://github.com/lh3/varcmp; Additional data: https://figshare.com/articles/Towards_better_understanding_of_artifacts_in_variating_calling_from_high_coverage_samples/981073
1511.06427
Makoto Fukushima
Makoto Fukushima, Richard F. Betzel, Ye He, Marcel A. de Reus, Martijn P. van den Heuvel, Xi-Nian Zuo, Olaf Sporns
Fluctuations between high- and low-modularity topology in time-resolved functional connectivity
Reorganized the paper; to appear in NeuroImage; arXiv abstract shortened to fit within character limits
NeuroImage, vol. 180, pp. 406-416, 2018
10.1016/j.neuroimage.2017.08.044
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modularity is an important topological attribute for functional brain networks. Recent studies have reported that modularity of functional networks varies not only across individuals being related to demographics and cognitive performance, but also within individuals co-occurring with fluctuations in network properties of functional connectivity, estimated over short time intervals. However, characteristics of these time-resolved functional networks during periods of high and low modularity have remained largely unexplored. In this study we investigate spatiotemporal properties of time-resolved networks in the high and low modularity periods during rest, with a particular focus on their spatial connectivity patterns, temporal homogeneity and test-retest reliability. We show that spatial connectivity patterns of time-resolved networks in the high and low modularity periods are represented by increased and decreased dissociation of the default mode network module from task-positive network modules, respectively. We also find that the instances of time-resolved functional connectivity sampled from within the high (low) modularity period are relatively homogeneous (heterogeneous) over time, indicating that during the low modularity period the default mode network interacts with other networks in a variable manner. We confirmed that the occurrence of the high and low modularity periods varies across individuals with moderate inter-session test-retest reliability and that it is correlated with previously-reported individual differences in the modularity of functional connectivity estimated over longer timescales. Our findings illustrate how time-resolved functional networks are spatiotemporally organized during periods of high and low modularity, allowing one to trace individual differences in long-timescale modularity to the variable occurrence of network configurations at shorter timescales.
[ { "created": "Thu, 19 Nov 2015 22:35:55 GMT", "version": "v1" }, { "created": "Wed, 24 Aug 2016 18:40:03 GMT", "version": "v2" }, { "created": "Tue, 22 Aug 2017 22:01:42 GMT", "version": "v3" } ]
2018-10-05
[ [ "Fukushima", "Makoto", "" ], [ "Betzel", "Richard F.", "" ], [ "He", "Ye", "" ], [ "de Reus", "Marcel A.", "" ], [ "Heuvel", "Martijn P. van den", "" ], [ "Zuo", "Xi-Nian", "" ], [ "Sporns", "Olaf", "" ] ]
Modularity is an important topological attribute for functional brain networks. Recent studies have reported that modularity of functional networks varies not only across individuals being related to demographics and cognitive performance, but also within individuals co-occurring with fluctuations in network properties of functional connectivity, estimated over short time intervals. However, characteristics of these time-resolved functional networks during periods of high and low modularity have remained largely unexplored. In this study we investigate spatiotemporal properties of time-resolved networks in the high and low modularity periods during rest, with a particular focus on their spatial connectivity patterns, temporal homogeneity and test-retest reliability. We show that spatial connectivity patterns of time-resolved networks in the high and low modularity periods are represented by increased and decreased dissociation of the default mode network module from task-positive network modules, respectively. We also find that the instances of time-resolved functional connectivity sampled from within the high (low) modularity period are relatively homogeneous (heterogeneous) over time, indicating that during the low modularity period the default mode network interacts with other networks in a variable manner. We confirmed that the occurrence of the high and low modularity periods varies across individuals with moderate inter-session test-retest reliability and that it is correlated with previously-reported individual differences in the modularity of functional connectivity estimated over longer timescales. Our findings illustrate how time-resolved functional networks are spatiotemporally organized during periods of high and low modularity, allowing one to trace individual differences in long-timescale modularity to the variable occurrence of network configurations at shorter timescales.
2004.03411
Ambrish Kumar Srivastava
Ambrish Kumar Srivastava, Abhishek Kumar, Neeraj Misra
On the Inhibition of COVID-19 Protease by Indian Herbal Plants: An In Silico Investigation
null
Journal of the Indian Chemical Society, 2022
10.1016/j.jics.2022.100640
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
COVID-19 has quickly spread across the globe, becoming a pandemic. This disease has a variable impact in different countries depending on their cultural norms, mitigation efforts and health infrastructure. In India, a majority of people rely upon traditional Indian medicine to treat human maladies due to its lower cost, easier availability and lack of side-effects. These medicines are made from herbal plants. This study aims to assess the Indian herbal plants in the pursuit of potential COVID-19 inhibitors using in silico approaches. We have considered 18 extracted compounds of 11 different species of these plants. Our calculated lipophilicity, aqueous solubility and binding affinity of the extracted compounds suggest that the inhibition potentials follow the order: harsingar > aloe vera > giloy > turmeric > neem > ashwagandha > red onion > tulsi > cannabis > black pepper. On comparing the binding affinity with hydroxychloroquine, we note that the inhibition potentials of the extracts of harsingar, aloe vera and giloy are very promising. Therefore, we believe that these findings will open further possibilities and accelerate the works towards finding an antidote for this malady.
[ { "created": "Sun, 5 Apr 2020 10:16:35 GMT", "version": "v1" } ]
2023-04-25
[ [ "Srivastava", "Ambrish Kumar", "" ], [ "Kumar", "Abhishek", "" ], [ "Misra", "Neeraj", "" ] ]
COVID-19 has quickly spread across the globe, becoming a pandemic. This disease has a variable impact in different countries depending on their cultural norms, mitigation efforts and health infrastructure. In India, a majority of people rely upon traditional Indian medicine to treat human maladies due to its lower cost, easier availability and lack of side-effects. These medicines are made from herbal plants. This study aims to assess the Indian herbal plants in the pursuit of potential COVID-19 inhibitors using in silico approaches. We have considered 18 extracted compounds of 11 different species of these plants. Our calculated lipophilicity, aqueous solubility and binding affinity of the extracted compounds suggest that the inhibition potentials follow the order: harsingar > aloe vera > giloy > turmeric > neem > ashwagandha > red onion > tulsi > cannabis > black pepper. On comparing the binding affinity with hydroxychloroquine, we note that the inhibition potentials of the extracts of harsingar, aloe vera and giloy are very promising. Therefore, we believe that these findings will open further possibilities and accelerate the works towards finding an antidote for this malady.
1904.10899
Laura Wadkin MMath
Laura E Wadkin, Sirio Orozco-Fuentes, Irina Neganova, Sanja Bojic, Alex Laude, Majlina Lako, Nicholas G Parker and Anvar Shukurov
Seeding hESCs to achieve optimal colony clonality
null
null
null
null
q-bio.CB physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human embryonic stem cells (hESCs) and induced pluripotent stem cells (iPSCs) have promising clinical applications which often rely on clonally-homogeneous cell populations. To achieve this, cross-contamination and merger of colonies should be avoided. This motivates us to experimentally study and quantitatively model the growth of hESC colonies. The colony population is unexpectedly found to be multi-modal. We associate these sub-populations with different numbers of founding cells, and predict their occurrence by considering the role of cell-cell interactions and cell behaviour on randomly seeded cells. We develop a multi-population stochastic exponential model for the colony population which captures our experimental observations, and apply this to calculate the timescales for colony merges and over which colony size no longer predicts the number of founding cells. These results can be used to achieve the best outcome for homogeneous colony growth from different cell seeding densities.
[ { "created": "Tue, 16 Apr 2019 18:58:38 GMT", "version": "v1" } ]
2019-04-25
[ [ "Wadkin", "Laura E", "" ], [ "Orozco-Fuentes", "Sirio", "" ], [ "Neganova", "Irina", "" ], [ "Bojic", "Sanja", "" ], [ "Laude", "Alex", "" ], [ "Lako", "Majlina", "" ], [ "Parker", "Nicholas G", "" ], [ "Shukurov", "Anvar", "" ] ]
Human embryonic stem cells (hESCs) and induced pluripotent stem cells (iPSCs) have promising clinical applications which often rely on clonally-homogeneous cell populations. To achieve this, cross-contamination and merger of colonies should be avoided. This motivates us to experimentally study and quantitatively model the growth of hESC colonies. The colony population is unexpectedly found to be multi-modal. We associate these sub-populations with different numbers of founding cells, and predict their occurrence by considering the role of cell-cell interactions and cell behaviour on randomly seeded cells. We develop a multi-population stochastic exponential model for the colony population which captures our experimental observations, and apply this to calculate the timescales for colony merges and over which colony size no longer predicts the number of founding cells. These results can be used to achieve the best outcome for homogeneous colony growth from different cell seeding densities.
1404.2681
Mike Steel Prof.
Elchanan Mossel and Mike Steel
Majority rule has transition ratio 4 on Yule trees under a 2-state symmetric model
6 pages, 1 figure
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inferring the ancestral state at the root of a phylogenetic tree from states observed at the leaves is a problem arising in evolutionary biology. The simplest technique -- majority rule -- estimates the root state by the most frequently occurring state at the leaves. Alternative methods -- such as maximum parsimony -- explicitly take the tree structure into account. Since either method can outperform the other on particular trees, it is useful to consider the accuracy of the methods on trees generated under some evolutionary null model, such as a Yule pure-birth model. In this short note, we answer a recently posed question concerning the performance of majority rule on Yule trees under a symmetric 2-state Markovian substitution model of character state change. We show that majority rule is accurate precisely when the ratio of the birth (speciation) rate of the Yule process to the substitution rate exceeds the value $4$. By contrast, maximum parsimony has been shown to be accurate only when this ratio is at least 6. Our proof relies on a second moment calculation, coupling, and a novel application of a reflection principle.
[ { "created": "Thu, 10 Apr 2014 03:43:10 GMT", "version": "v1" } ]
2014-04-11
[ [ "Mossel", "Elchanan", "" ], [ "Steel", "Mike", "" ] ]
Inferring the ancestral state at the root of a phylogenetic tree from states observed at the leaves is a problem arising in evolutionary biology. The simplest technique -- majority rule -- estimates the root state by the most frequently occurring state at the leaves. Alternative methods -- such as maximum parsimony -- explicitly take the tree structure into account. Since either method can outperform the other on particular trees, it is useful to consider the accuracy of the methods on trees generated under some evolutionary null model, such as a Yule pure-birth model. In this short note, we answer a recently posed question concerning the performance of majority rule on Yule trees under a symmetric 2-state Markovian substitution model of character state change. We show that majority rule is accurate precisely when the ratio of the birth (speciation) rate of the Yule process to the substitution rate exceeds the value $4$. By contrast, maximum parsimony has been shown to be accurate only when this ratio is at least 6. Our proof relies on a second moment calculation, coupling, and a novel application of a reflection principle.
q-bio/0502035
Roger Guimera
Roger Guimera and Luis A. Nunes Amaral
Functional cartography of complex metabolic networks
17 pages, 4 figures. Go to http://amaral.northwestern.edu for the PDF file of the reprint
Nature 433, 895-900 (2005)
10.1038/nature03288
null
q-bio.MN cond-mat.dis-nn
null
High-throughput techniques are leading to an explosive growth in the size of biological databases and creating the opportunity to revolutionize our understanding of life and disease. Interpretation of these data remains, however, a major scientific challenge. Here, we propose a methodology that enables us to extract and display information contained in complex networks. Specifically, we demonstrate that one can (i) find functional modules in complex networks, and (ii) classify nodes into universal roles according to their pattern of intra- and inter-module connections. The method thus yields a ``cartographic representation'' of complex networks. Metabolic networks are among the most challenging biological networks and, arguably, the ones with more potential for immediate applicability. We use our method to analyze the metabolic networks of twelve organisms from three different super-kingdoms. We find that, typically, 80% of the nodes are only connected to other nodes within their respective modules, and that nodes with different roles are affected by different evolutionary constraints and pressures. Remarkably, we find that low-degree metabolites that connect different modules are more conserved than hubs whose links are mostly within a single module.
[ { "created": "Wed, 23 Feb 2005 20:43:57 GMT", "version": "v1" } ]
2009-11-11
[ [ "Guimera", "Roger", "" ], [ "Amaral", "Luis A. Nunes", "" ] ]
High-throughput techniques are leading to an explosive growth in the size of biological databases and creating the opportunity to revolutionize our understanding of life and disease. Interpretation of these data remains, however, a major scientific challenge. Here, we propose a methodology that enables us to extract and display information contained in complex networks. Specifically, we demonstrate that one can (i) find functional modules in complex networks, and (ii) classify nodes into universal roles according to their pattern of intra- and inter-module connections. The method thus yields a ``cartographic representation'' of complex networks. Metabolic networks are among the most challenging biological networks and, arguably, the ones with more potential for immediate applicability. We use our method to analyze the metabolic networks of twelve organisms from three different super-kingdoms. We find that, typically, 80% of the nodes are only connected to other nodes within their respective modules, and that nodes with different roles are affected by different evolutionary constraints and pressures. Remarkably, we find that low-degree metabolites that connect different modules are more conserved than hubs whose links are mostly within a single module.
1306.2443
Charalambos Neophytou
Charalambos Neophytou, Filippos A. Aravanopoulos, Siegfried Fink, Aikaterini Dounavi
Interfertile oaks in an island environment. II. Limited hybridization between Quercus alnifolia Poech and Q. coccifera L. in a mixed stand
30 pages, 7 figures, author's accepted manuscript
European Journal of Forest Research 130 (2011): pp. 623-635
10.1007/s10342-010-0454-4
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hybridization and introgression between Quercus alnifolia Poech and Q. coccifera L. is studied by analyzing morphological traits, nuclear and chloroplast DNA markers. The study site is a mixed stand on Troodos Mountains (Cyprus) and the analyzed material includes both adult trees and progenies of specific mother trees. Multivariate analysis of morphological traits shows that the two species can be well distinguished using simple leaf morphometric parameters. A lower genetic diversity in Q. alnifolia than in Q. coccifera and a high interspecific differentiation between the two species are supported by an analysis of nuclear and chloroplast microsatellites. The intermediacy of the four designated hybrids is verified by both leaf morphometric and genetic data. Analysis of progeny arrays provides evidence that interspecific crossings are rare. This finding is further supported by limited introgression of chloroplast genomes. Reproductive barriers (e.g. asynchronous phenology, post-zygotic incompatibilities) might account for this result. A directionality of interspecific gene flow is indicated by a genetic assignment analysis of effective pollen clouds with Q. alnifolia acting as pollen donor. Differences in flowering phenology and species distribution in the stand may have influenced the direction of gene flow and the genetic differentiation among effective pollen clouds of different mother trees within species.
[ { "created": "Tue, 11 Jun 2013 07:47:44 GMT", "version": "v1" }, { "created": "Fri, 14 Jun 2013 16:49:46 GMT", "version": "v2" } ]
2013-06-17
[ [ "Neophytou", "Charalambos", "" ], [ "Aravanopoulos", "Filippos A.", "" ], [ "Fink", "Siegfried", "" ], [ "Dounavi", "Aikaterini", "" ] ]
Hybridization and introgression between Quercus alnifolia Poech and Q. coccifera L. is studied by analyzing morphological traits, nuclear and chloroplast DNA markers. The study site is a mixed stand on Troodos Mountains (Cyprus) and the analyzed material includes both adult trees and progenies of specific mother trees. Multivariate analysis of morphological traits shows that the two species can be well distinguished using simple leaf morphometric parameters. A lower genetic diversity in Q. alnifolia than in Q. coccifera and a high interspecific differentiation between the two species are supported by an analysis of nuclear and chloroplast microsatellites. The intermediacy of the four designated hybrids is verified by both leaf morphometric and genetic data. Analysis of progeny arrays provides evidence that interspecific crossings are rare. This finding is further supported by limited introgression of chloroplast genomes. Reproductive barriers (e.g. asynchronous phenology, post-zygotic incompatibilities) might account for this result. A directionality of interspecific gene flow is indicated by a genetic assignment analysis of effective pollen clouds with Q. alnifolia acting as pollen donor. Differences in flowering phenology and species distribution in the stand may have influenced the direction of gene flow and the genetic differentiation among effective pollen clouds of different mother trees within species.
2008.01176
Ezequiel Alvarez
Ezequiel Alvarez (ICAS, Argentina) and Franco Marsico (Health Ministry, Buenos Aires)
COVID-19 mild cases determination from correlating COVID-line calls to reported cases
7 pages, 2 figures
null
null
ICAS 051/20
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: One of the most challenging keys to understand COVID-19 evolution is to have a measure on those mild cases which are never tested because their few symptoms are soft and/or fade away soon. The problem is not only that they are difficult to identify and test, but also that it is believed that they may constitute the bulk of the cases and could be crucial in the pandemic equation. Methods: We present a novel algorithm to extract the number of these mild cases by correlating COVID-line calls to reported cases in given districts. The key assumption is to realize that, being a highly contagious disease, the number of calls by mild cases should be proportional to the number of reported cases, whereas a background of calls not related to infected people should be proportional to the district population. Results: We find that for Buenos Aires Province, in addition to the background, there are in signal 6.6 +/- 0.4 calls per each reported COVID-19 case. Using this we estimate in Buenos Aires Province 20 +/- 2 COVID-19 symptomatic cases for each one reported. Conclusions: A very simple algorithm that models the COVID-line calls as sum of signal plus background allows to estimate the crucial number of the rate of symptomatic to reported COVID-19 cases in a given district. The result from this method is an early and inexpensive estimate and should be contrasted to other methods such as serology and/or massive testing.
[ { "created": "Mon, 3 Aug 2020 20:17:53 GMT", "version": "v1" } ]
2020-08-05
[ [ "Alvarez", "Ezequiel", "", "ICAS, Argentina" ], [ "Marsico", "Franco", "", "Health\n Ministry, Buenos Aires" ] ]
Background: One of the most challenging keys to understand COVID-19 evolution is to have a measure on those mild cases which are never tested because their few symptoms are soft and/or fade away soon. The problem is not only that they are difficult to identify and test, but also that it is believed that they may constitute the bulk of the cases and could be crucial in the pandemic equation. Methods: We present a novel algorithm to extract the number of these mild cases by correlating COVID-line calls to reported cases in given districts. The key assumption is to realize that, being a highly contagious disease, the number of calls by mild cases should be proportional to the number of reported cases, whereas a background of calls not related to infected people should be proportional to the district population. Results: We find that for Buenos Aires Province, in addition to the background, there are in signal 6.6 +/- 0.4 calls per each reported COVID-19 case. Using this we estimate in Buenos Aires Province 20 +/- 2 COVID-19 symptomatic cases for each one reported. Conclusions: A very simple algorithm that models the COVID-line calls as sum of signal plus background allows to estimate the crucial number of the rate of symptomatic to reported COVID-19 cases in a given district. The result from this method is an early and inexpensive estimate and should be contrasted to other methods such as serology and/or massive testing.
1304.2149
Moritz Helias
Moritz Helias, Tom Tetzlaff, Markus Diesmann
The correlation structure of local cortical networks intrinsically results from recurrent dynamics
null
null
10.1371/journal.pcbi.1003428
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The co-occurrence of action potentials of pairs of neurons within short time intervals has long been known. Such synchronous events can appear time-locked to the behavior of an animal and also theoretical considerations argue for a functional role of synchrony. Early theoretical work tried to explain correlated activity by neurons transmitting common fluctuations due to shared inputs. This, however, overestimates correlations. Recently the recurrent connectivity of cortical networks was shown to be responsible for the observed low baseline correlations. Two different explanations were given: One argues that excitatory and inhibitory population activities closely follow the external inputs to the network, so that their effects on a pair of cells mutually cancel. Another explanation relies on negative recurrent feedback to suppress fluctuations in the population activity, equivalent to small correlations. In a biological neuronal network one expects both, external inputs and recurrence, to affect correlated activity. The present work extends the theoretical framework of correlations to include both contributions and explains their qualitative differences. Moreover the study shows that the arguments of fast tracking and recurrent feedback are not equivalent, only the latter correctly predicts the cell-type specific correlations.
[ { "created": "Mon, 8 Apr 2013 10:00:18 GMT", "version": "v1" }, { "created": "Sun, 14 Jul 2013 21:06:53 GMT", "version": "v2" }, { "created": "Fri, 13 Sep 2013 13:55:16 GMT", "version": "v3" } ]
2022-05-17
[ [ "Helias", "Moritz", "" ], [ "Tetzlaff", "Tom", "" ], [ "Diesmann", "Markus", "" ] ]
The co-occurrence of action potentials of pairs of neurons within short time intervals has long been known. Such synchronous events can appear time-locked to the behavior of an animal, and theoretical considerations also argue for a functional role of synchrony. Early theoretical work tried to explain correlated activity by neurons transmitting common fluctuations due to shared inputs. This, however, overestimates correlations. Recently the recurrent connectivity of cortical networks was shown to be responsible for the observed low baseline correlations. Two different explanations were given: One argues that excitatory and inhibitory population activities closely follow the external inputs to the network, so that their effects on a pair of cells mutually cancel. Another explanation relies on negative recurrent feedback to suppress fluctuations in the population activity, equivalent to small correlations. In a biological neuronal network one expects both external inputs and recurrence to affect correlated activity. The present work extends the theoretical framework of correlations to include both contributions and explains their qualitative differences. Moreover, the study shows that the arguments of fast tracking and recurrent feedback are not equivalent; only the latter correctly predicts the cell-type specific correlations.
1707.09295
Alexandre de Castro
Alexandre de Castro
A network model for clonal differentiation and immune memory
19 pages
Physica A: Statistical Mechanics and its Applications, Vol. 355(15). pp. 408-426 (2005)
10.1016/j.physa.2005.03.036
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A bit-string model that uses the technique of multi-spin coding was previously used to study the time evolution of the B-cell clone repertoire, in a paper by Lagreca, Almeida and Santos. In this work we extend that simplified model to independently include the role of antibody populations in the control of the immune response, producing mechanisms of differentiation and regulation in a more complete way. Although the antibodies have the same molecular shape as the B-cell receptors (BCR), they should present a different time evolution and thus should be treated separately. We have also studied a possible model for the network immune memory, suggesting a random memory regeneration that is self-perpetuating.
[ { "created": "Tue, 11 Jul 2017 21:55:48 GMT", "version": "v1" } ]
2017-07-31
[ [ "de Castro", "Alexandre", "" ] ]
A bit-string model that uses the technique of multi-spin coding was previously used to study the time evolution of the B-cell clone repertoire, in a paper by Lagreca, Almeida and Santos. In this work we extend that simplified model to independently include the role of antibody populations in the control of the immune response, producing mechanisms of differentiation and regulation in a more complete way. Although the antibodies have the same molecular shape as the B-cell receptors (BCR), they should present a different time evolution and thus should be treated separately. We have also studied a possible model for the network immune memory, suggesting a random memory regeneration that is self-perpetuating.
1409.4713
Mike Steel Prof.
Mike Steel
Reflections on the extinction-explosion dichotomy
12 pages, 0 figures
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A wide range of stochastic processes that model the growth and decline of populations exhibit a curious dichotomy: with certainty either the population goes extinct or its size tends to infinity. There is an elegant and classical theorem that explains why this dichotomy must hold under certain assumptions concerning the process. In this note, I explore how these assumptions might be relaxed further in order to obtain the same, or a similar, conclusion, and obtain both positive and negative results.
[ { "created": "Tue, 16 Sep 2014 17:49:20 GMT", "version": "v1" } ]
2014-09-17
[ [ "Steel", "Mike", "" ] ]
A wide range of stochastic processes that model the growth and decline of populations exhibit a curious dichotomy: with certainty either the population goes extinct or its size tends to infinity. There is an elegant and classical theorem that explains why this dichotomy must hold under certain assumptions concerning the process. In this note, I explore how these assumptions might be relaxed further in order to obtain the same, or a similar, conclusion, and obtain both positive and negative results.
2011.11070
Caroline Weis
Stefan Groha, Caroline Weis, Alexander Gusev, Bastian Rieck
Topological Data Analysis of copy number alterations in cancer
null
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by/4.0/
Identifying subgroups and properties of cancer biopsy samples is a crucial step towards obtaining precise diagnoses and being able to perform personalized treatment of cancer patients. Recent data collections provide a comprehensive characterization of cancer cell data, including genetic data on copy number alterations (CNAs). We explore the potential to capture the information contained in cancer genomic data using a novel topology-based approach that encodes each cancer sample as a persistence diagram of topological features, i.e., high-dimensional voids represented in the data. We find that this technique has the potential to extract meaningful low-dimensional representations of cancer somatic genetic data and demonstrate the viability of some applications, such as finding substructures in cancer data and comparing the similarity of cancer types.
[ { "created": "Sun, 22 Nov 2020 17:31:23 GMT", "version": "v1" }, { "created": "Thu, 22 Apr 2021 17:28:37 GMT", "version": "v2" } ]
2021-04-23
[ [ "Groha", "Stefan", "" ], [ "Weis", "Caroline", "" ], [ "Gusev", "Alexander", "" ], [ "Rieck", "Bastian", "" ] ]
Identifying subgroups and properties of cancer biopsy samples is a crucial step towards obtaining precise diagnoses and being able to perform personalized treatment of cancer patients. Recent data collections provide a comprehensive characterization of cancer cell data, including genetic data on copy number alterations (CNAs). We explore the potential to capture the information contained in cancer genomic data using a novel topology-based approach that encodes each cancer sample as a persistence diagram of topological features, i.e., high-dimensional voids represented in the data. We find that this technique has the potential to extract meaningful low-dimensional representations of cancer somatic genetic data and demonstrate the viability of some applications, such as finding substructures in cancer data and comparing the similarity of cancer types.
1310.6338
Randal Olson
Arend Hintze, Randal S. Olson, Christoph Adami, and Ralph Hertwig
Risk aversion as an evolutionary adaptation
18 pages, 7 figures
Scientific Reports 5 (2015) 8242
10.1038/srep08242
null
q-bio.PE cs.GT cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Risk aversion is a behavior universal to humans and animals alike. Economists have traditionally defined risk preferences by the curvature of the utility function. Psychologists and behavioral economists also make use of concepts such as loss aversion and probability weighting to model risk aversion. Neurophysiological evidence suggests that loss aversion has its origins in relatively ancient neural circuitries (e.g., ventral striatum). Could there thus be an evolutionary origin to risk avoidance? We study this question by evolving strategies that adapt to play the equivalent mean payoff gamble. We hypothesize that risk aversion in the equivalent mean payoff gamble is beneficial as an adaptation to living in small groups, and find that a preference for risk averse strategies only evolves in small populations of less than 1,000 individuals, while agents exhibit no such strategy preference in larger populations. Further, we discover that risk aversion can also evolve in larger populations, but only when the population is segmented into small groups of around 150 individuals. Finally, we observe that risk aversion only evolves when the gamble is a rare event that has a large impact on the individual's fitness. These findings align with earlier reports that humans lived in small groups for a large portion of their evolutionary history. As such, we suggest that rare, high-risk, high-payoff events such as mating and mate competition could have driven the evolution of risk averse behavior in humans living in small groups.
[ { "created": "Wed, 23 Oct 2013 19:27:10 GMT", "version": "v1" } ]
2015-11-18
[ [ "Hintze", "Arend", "" ], [ "Olson", "Randal S.", "" ], [ "Adami", "Christoph", "" ], [ "Hertwig", "Ralph", "" ] ]
Risk aversion is a behavior universal to humans and animals alike. Economists have traditionally defined risk preferences by the curvature of the utility function. Psychologists and behavioral economists also make use of concepts such as loss aversion and probability weighting to model risk aversion. Neurophysiological evidence suggests that loss aversion has its origins in relatively ancient neural circuitries (e.g., ventral striatum). Could there thus be an evolutionary origin to risk avoidance? We study this question by evolving strategies that adapt to play the equivalent mean payoff gamble. We hypothesize that risk aversion in the equivalent mean payoff gamble is beneficial as an adaptation to living in small groups, and find that a preference for risk averse strategies only evolves in small populations of less than 1,000 individuals, while agents exhibit no such strategy preference in larger populations. Further, we discover that risk aversion can also evolve in larger populations, but only when the population is segmented into small groups of around 150 individuals. Finally, we observe that risk aversion only evolves when the gamble is a rare event that has a large impact on the individual's fitness. These findings align with earlier reports that humans lived in small groups for a large portion of their evolutionary history. As such, we suggest that rare, high-risk, high-payoff events such as mating and mate competition could have driven the evolution of risk averse behavior in humans living in small groups.
1509.07775
Ling Xue Ms
Ling Xue, Carrie A. Manore, Panpim Thongsripong, James M. Hyman
Two-sex mosquito model for the persistence of Wolbachia
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wolbachia is a genus of endosymbiotic bacteria that can infect mosquitoes and reduce their ability to transmit dengue virus. Although the bacterium is transmitted vertically from infected mothers to their offspring, it can be difficult to establish an endemic infection in a wild mosquito population. We developed and analyzed an ordinary differential equation model to investigate the transmission dynamics of releasing Wolbachia-infected mosquitoes to establish an endemic infection in a population of wild uninfected mosquitoes. Our transmission model for the adult and aquatic-stage mosquitoes takes into account Wolbachia-induced fitness change and cytoplasmic incompatibility. We showed that, for a wide range of realistic parameter values, the basic reproduction number is less than one. Hence, the epidemic will die out if only a few Wolbachia-infected mosquitoes are introduced into the wild population. Even though the basic reproduction number is less than one, an endemic Wolbachia infection can be established if a sufficient number of infected mosquitoes are released. This threshold effect is created by a backward bifurcation with three coexisting equilibria: a stable zero-infection equilibrium, an intermediate-infection unstable endemic equilibrium, and a high-infection stable endemic equilibrium. We analyzed the impact of reducing the wild mosquito population before introducing the infected mosquitoes and observed that the most effective approach to establish the infection in the wild is based on reducing mosquitoes in both the adult and aquatic stages.
[ { "created": "Fri, 25 Sep 2015 16:21:07 GMT", "version": "v1" }, { "created": "Sun, 4 Oct 2015 01:57:34 GMT", "version": "v2" } ]
2015-10-06
[ [ "Xue", "Ling", "" ], [ "Manore", "Carrie A.", "" ], [ "Thongsripong", "Panpim", "" ], [ "Hyman", "James M.", "" ] ]
Wolbachia is a genus of endosymbiotic bacteria that can infect mosquitoes and reduce their ability to transmit dengue virus. Although the bacterium is transmitted vertically from infected mothers to their offspring, it can be difficult to establish an endemic infection in a wild mosquito population. We developed and analyzed an ordinary differential equation model to investigate the transmission dynamics of releasing Wolbachia-infected mosquitoes to establish an endemic infection in a population of wild uninfected mosquitoes. Our transmission model for the adult and aquatic-stage mosquitoes takes into account Wolbachia-induced fitness change and cytoplasmic incompatibility. We showed that, for a wide range of realistic parameter values, the basic reproduction number is less than one. Hence, the epidemic will die out if only a few Wolbachia-infected mosquitoes are introduced into the wild population. Even though the basic reproduction number is less than one, an endemic Wolbachia infection can be established if a sufficient number of infected mosquitoes are released. This threshold effect is created by a backward bifurcation with three coexisting equilibria: a stable zero-infection equilibrium, an intermediate-infection unstable endemic equilibrium, and a high-infection stable endemic equilibrium. We analyzed the impact of reducing the wild mosquito population before introducing the infected mosquitoes and observed that the most effective approach to establish the infection in the wild is based on reducing mosquitoes in both the adult and aquatic stages.
2306.15041
Misque Boswell
Priyanka Subash, Alex Gray, Misque Boswell, Samantha L. Cohen, Rachael Garner, Sana Salehi, Calvary Fisher, Samuel Hobel, Satrajit Ghosh, Yaroslav Halchenko, Benjamin Dichter, Russell A. Poldrack, Chris Markiewicz, Dora Hermes, Arnaud Delorme, Scott Makeig, Brendan Behan, Alana Sparks, Stephen R Arnott, Zhengjia Wang, John Magnotti, Michael S. Beauchamp, Nader Pouratian, Arthur W. Toga, Dominique Duncan
A Comparison of Neuroelectrophysiology Databases
22 pages, 6 figures, 5 tables
null
null
null
q-bio.QM cs.DB
http://creativecommons.org/licenses/by/4.0/
As data sharing has become more prevalent, three pillars - archives, standards, and analysis tools - have emerged as critical components in facilitating effective data sharing and collaboration. This paper compares four freely available intracranial neuroelectrophysiology data repositories: Data Archive for the BRAIN Initiative (DABI), Distributed Archives for Neurophysiology Data Integration (DANDI), OpenNeuro, and Brain-CODE. The aim of this review is to describe archives that provide researchers with tools to store, share, and reanalyze both human and non-human neurophysiology data based on criteria that are of interest to the neuroscientific community. The Brain Imaging Data Structure (BIDS) and Neurodata Without Borders (NWB) are utilized by these archives to make data more accessible to researchers by implementing a common standard. As the necessity for integrating large-scale analysis into data repository platforms continues to grow within the neuroscientific community, this article will highlight the various analytical and customizable tools developed within the chosen archives that may advance the field of neuroinformatics.
[ { "created": "Mon, 26 Jun 2023 19:59:57 GMT", "version": "v1" }, { "created": "Wed, 30 Aug 2023 23:19:33 GMT", "version": "v2" } ]
2023-09-01
[ [ "Subash", "Priyanka", "" ], [ "Gray", "Alex", "" ], [ "Boswell", "Misque", "" ], [ "Cohen", "Samantha L.", "" ], [ "Garner", "Rachael", "" ], [ "Salehi", "Sana", "" ], [ "Fisher", "Calvary", "" ], [ "Hobel", "Samuel", "" ], [ "Ghosh", "Satrajit", "" ], [ "Halchenko", "Yaroslav", "" ], [ "Dichter", "Benjamin", "" ], [ "Poldrack", "Russell A.", "" ], [ "Markiewicz", "Chris", "" ], [ "Hermes", "Dora", "" ], [ "Delorme", "Arnaud", "" ], [ "Makeig", "Scott", "" ], [ "Behan", "Brendan", "" ], [ "Sparks", "Alana", "" ], [ "Arnott", "Stephen R", "" ], [ "Wang", "Zhengjia", "" ], [ "Magnotti", "John", "" ], [ "Beauchamp", "Michael S.", "" ], [ "Pouratian", "Nader", "" ], [ "Toga", "Arthur W.", "" ], [ "Duncan", "Dominique", "" ] ]
As data sharing has become more prevalent, three pillars - archives, standards, and analysis tools - have emerged as critical components in facilitating effective data sharing and collaboration. This paper compares four freely available intracranial neuroelectrophysiology data repositories: Data Archive for the BRAIN Initiative (DABI), Distributed Archives for Neurophysiology Data Integration (DANDI), OpenNeuro, and Brain-CODE. The aim of this review is to describe archives that provide researchers with tools to store, share, and reanalyze both human and non-human neurophysiology data based on criteria that are of interest to the neuroscientific community. The Brain Imaging Data Structure (BIDS) and Neurodata Without Borders (NWB) are utilized by these archives to make data more accessible to researchers by implementing a common standard. As the necessity for integrating large-scale analysis into data repository platforms continues to grow within the neuroscientific community, this article will highlight the various analytical and customizable tools developed within the chosen archives that may advance the field of neuroinformatics.
2205.04868
Mal\'u Grave
Mal\'u Grave and Alex Viguerie and Gabriel F. Barros and Alessandro Reali and Roberto F. S. Andrade and Alvaro L.G.A. Coutinho
Modeling nonlocal behavior in epidemics via a reaction-diffusion system incorporating population movement along a network
null
null
10.1016/j.cma.2022.115541
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The outbreak of COVID-19, beginning in 2019 and continuing through the time of writing, has led to renewed interest in the mathematical modeling of infectious disease. Recent works have focused on partial differential equation (PDE) models, particularly reaction-diffusion models, able to describe the progression of an epidemic in both space and time. These studies have shown generally promising results in describing and predicting COVID-19 progression. However, people often travel long distances in short periods of time, leading to nonlocal transmission of the disease. Such contagion dynamics are not well-represented by diffusion alone. In contrast, ordinary differential equation (ODE) models may easily account for this behavior by considering disparate regions as nodes in a network, with the edges defining nonlocal transmission. In this work, we attempt to combine these modeling paradigms via the introduction of a network structure within a reaction-diffusion PDE system. This is achieved through the definition of a population-transfer operator, which couples disjoint and potentially distant geographic regions, facilitating nonlocal population movement between them. We provide analytical results demonstrating that this operator does not disrupt the physical consistency or mathematical well-posedness of the system, and verify these results through numerical experiments. We then use this technique to simulate the COVID-19 epidemic in the Brazilian region of Rio de Janeiro, showcasing its ability to capture important nonlocal behaviors, while maintaining the advantages of a reaction-diffusion model for describing local dynamics.
[ { "created": "Mon, 9 May 2022 14:16:01 GMT", "version": "v1" } ]
2022-10-26
[ [ "Grave", "Malú", "" ], [ "Viguerie", "Alex", "" ], [ "Barros", "Gabriel F.", "" ], [ "Reali", "Alessandro", "" ], [ "Andrade", "Roberto F. S.", "" ], [ "Coutinho", "Alvaro L. G. A.", "" ] ]
The outbreak of COVID-19, beginning in 2019 and continuing through the time of writing, has led to renewed interest in the mathematical modeling of infectious disease. Recent works have focused on partial differential equation (PDE) models, particularly reaction-diffusion models, able to describe the progression of an epidemic in both space and time. These studies have shown generally promising results in describing and predicting COVID-19 progression. However, people often travel long distances in short periods of time, leading to nonlocal transmission of the disease. Such contagion dynamics are not well-represented by diffusion alone. In contrast, ordinary differential equation (ODE) models may easily account for this behavior by considering disparate regions as nodes in a network, with the edges defining nonlocal transmission. In this work, we attempt to combine these modeling paradigms via the introduction of a network structure within a reaction-diffusion PDE system. This is achieved through the definition of a population-transfer operator, which couples disjoint and potentially distant geographic regions, facilitating nonlocal population movement between them. We provide analytical results demonstrating that this operator does not disrupt the physical consistency or mathematical well-posedness of the system, and verify these results through numerical experiments. We then use this technique to simulate the COVID-19 epidemic in the Brazilian region of Rio de Janeiro, showcasing its ability to capture important nonlocal behaviors, while maintaining the advantages of a reaction-diffusion model for describing local dynamics.
1811.08885
Jorge Vila
Jorge A. Vila
Guessing the upper bound free-energy difference between native-like structures
null
Physica A 533 (2019) 122053
10.1016/j.physa.2019.122053
null
q-bio.BM q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Use of a combination of statistical thermodynamics and the Gershgorin theorem enables us to guess, in the thermodynamic limit, a plausible value for the upper bound free-energy difference between native-like structures of monomeric globular proteins. Support for our result, in light of both the observed free-energy change between the native and denatured states and the microstability free-energy values obtained from the observed micro-unfolding tendency of nine globular proteins, is discussed here.
[ { "created": "Wed, 21 Nov 2018 18:58:08 GMT", "version": "v1" } ]
2019-09-30
[ [ "Vila", "Jorge A.", "" ] ]
Use of a combination of statistical thermodynamics and the Gershgorin theorem enables us to guess, in the thermodynamic limit, a plausible value for the upper bound free-energy difference between native-like structures of monomeric globular proteins. Support for our result, in light of both the observed free-energy change between the native and denatured states and the microstability free-energy values obtained from the observed micro-unfolding tendency of nine globular proteins, is discussed here.
1703.10231
Norichika Ogata
Hikoyu Suzuki and Norichika Ogata
A $4,000 Workstation for Mammalian Genome Assembly with Long Reads
4 pages, 1 figure, 1 table
null
null
null
q-bio.GN q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long-read sequencing has enabled the de novo assembly of several mammalian genomes, but at a high computational cost. Here, we demonstrate the de novo assembly of a mammalian genome using long reads on an efficient and inexpensive workstation.
[ { "created": "Mon, 20 Mar 2017 10:05:54 GMT", "version": "v1" } ]
2017-03-31
[ [ "Suzuki", "Hikoyu", "" ], [ "Ogata", "Norichika", "" ] ]
Long-read sequencing has enabled the de novo assembly of several mammalian genomes, but at a high computational cost. Here, we demonstrate the de novo assembly of a mammalian genome using long reads on an efficient and inexpensive workstation.
2011.13024
Elisa Castagnola Dr.
Elisa Castagnola, Sanitta Thongpang, Mieko Hirabayashi, Giorgio Nava, Surabhi Nimbalkar, Tri Nguyen, Sandra Lara, Alexis Oyawale, James Bunnell, Chet Moritz, Sam Kassegne
Glassy Carbon Microelectrode Arrays Enable Voltage-Peak Separated Simultaneous Detection of Dopamine and Serotonin Using Fast Scan Cyclic Voltammetry
null
null
10.1039/D1AN00425E
null
q-bio.BM q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Progress in real-time, simultaneous in vivo detection of multiple neurotransmitters will help accelerate advances in neuroscience research. The need for development of probes capable of stable electrochemical detection of rapid neurotransmitter fluctuations with high sensitivity and selectivity and sub-second temporal resolution has, therefore, become compelling. Additionally, a higher spatial resolution multi-channel capability is required to capture the complex neurotransmission dynamics across different brain regions. These research needs have inspired the introduction of glassy carbon (GC) microelectrode arrays on flexible polymer substrates through a carbon MEMS (C-MEMS) microfabrication process followed by a novel pattern transfer technique. These implantable GC microelectrodes offer unique advantages in the electrochemical detection of electroactive neurotransmitters through the presence of active carboxyl, carbonyl, and hydroxyl functional groups. In addition, they offer fast electron transfer kinetics, capacitive electrochemical behavior, and a wide electrochemical window. Here, we combine the use of these GC microelectrodes with the fast scan cyclic voltammetry (FSCV) technique to optimize the co-detection of dopamine and serotonin in vitro and in vivo. We demonstrate that using an optimized FSCV triangular waveform at scan rates lower than 700 V/s and holding and switching at potentials of 0.4 V and 1 V, respectively, it is possible to discriminate the voltage reduction and oxidation peaks of serotonin and dopamine, with serotonin contributing multiple distinct oxidation peaks. Taken together, our results present a compelling case for a carbon-based MEA platform rich with active functional groups that allows for repeatable and stable detection of multiple electroactive neurotransmitters at concentrations as low as 10 nM.
[ { "created": "Wed, 25 Nov 2020 21:17:01 GMT", "version": "v1" } ]
2021-07-14
[ [ "Castagnola", "Elisa", "" ], [ "Thongpang", "Sanitta", "" ], [ "Hirabayashi", "Mieko", "" ], [ "Nava", "Giorgio", "" ], [ "Nimbalkar", "Surabhi", "" ], [ "Nguyen", "Tri", "" ], [ "Lara", "Sandra", "" ], [ "Oyawale", "Alexis", "" ], [ "Bunnell", "James", "" ], [ "Moritz", "Chet", "" ], [ "Kassegne", "Sam", "" ] ]
Progress in real-time, simultaneous in vivo detection of multiple neurotransmitters will help accelerate advances in neuroscience research. The need for development of probes capable of stable electrochemical detection of rapid neurotransmitter fluctuations with high sensitivity and selectivity and sub-second temporal resolution has, therefore, become compelling. Additionally, a higher spatial resolution multi-channel capability is required to capture the complex neurotransmission dynamics across different brain regions. These research needs have inspired the introduction of glassy carbon (GC) microelectrode arrays on flexible polymer substrates through a carbon MEMS (C-MEMS) microfabrication process followed by a novel pattern transfer technique. These implantable GC microelectrodes offer unique advantages in the electrochemical detection of electroactive neurotransmitters through the presence of active carboxyl, carbonyl, and hydroxyl functional groups. In addition, they offer fast electron transfer kinetics, capacitive electrochemical behavior, and a wide electrochemical window. Here, we combine the use of these GC microelectrodes with the fast scan cyclic voltammetry (FSCV) technique to optimize the co-detection of dopamine and serotonin in vitro and in vivo. We demonstrate that using an optimized FSCV triangular waveform at scan rates lower than 700 V/s and holding and switching at potentials of 0.4 V and 1 V, respectively, it is possible to discriminate the voltage reduction and oxidation peaks of serotonin and dopamine, with serotonin contributing multiple distinct oxidation peaks. Taken together, our results present a compelling case for a carbon-based MEA platform rich with active functional groups that allows for repeatable and stable detection of multiple electroactive neurotransmitters at concentrations as low as 10 nM.
2306.11375
Carles Navarro
Carles Navarro, Maciej Majewski and Gianni de Fabritiis
Top-down machine learning of coarse-grained protein force-fields
null
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/publicdomain/zero/1.0/
Developing accurate and efficient coarse-grained representations of proteins is crucial for understanding their folding, function, and interactions over extended timescales. Our methodology involves simulating proteins with molecular dynamics and utilizing the resulting trajectories to train a neural network potential through differentiable trajectory reweighting. Remarkably, this method requires only the native conformation of proteins, eliminating the need for labeled data derived from extensive simulations or memory-intensive end-to-end differentiable simulations. Once trained, the model can be employed to run parallel molecular dynamics simulations and sample folding events for proteins both within and beyond the training distribution, showcasing its extrapolation capabilities. By applying Markov State Models, native-like conformations of the simulated proteins can be predicted from the coarse-grained simulations. Owing to its theoretical transferability and ability to use solely experimental static structures as training data, we anticipate that this approach will prove advantageous for developing new protein force fields and further advancing the study of protein dynamics, folding, and interactions.
[ { "created": "Tue, 20 Jun 2023 08:31:24 GMT", "version": "v1" }, { "created": "Wed, 21 Jun 2023 05:49:49 GMT", "version": "v2" }, { "created": "Tue, 27 Jun 2023 12:02:21 GMT", "version": "v3" }, { "created": "Tue, 10 Oct 2023 08:32:56 GMT", "version": "v4" } ]
2023-10-11
[ [ "Navarro", "Carles", "" ], [ "Majewski", "Maciej", "" ], [ "de Fabritiis", "Gianni", "" ] ]
Developing accurate and efficient coarse-grained representations of proteins is crucial for understanding their folding, function, and interactions over extended timescales. Our methodology involves simulating proteins with molecular dynamics and utilizing the resulting trajectories to train a neural network potential through differentiable trajectory reweighting. Remarkably, this method requires only the native conformation of proteins, eliminating the need for labeled data derived from extensive simulations or memory-intensive end-to-end differentiable simulations. Once trained, the model can be employed to run parallel molecular dynamics simulations and sample folding events for proteins both within and beyond the training distribution, showcasing its extrapolation capabilities. By applying Markov State Models, native-like conformations of the simulated proteins can be predicted from the coarse-grained simulations. Owing to its theoretical transferability and ability to use solely experimental static structures as training data, we anticipate that this approach will prove advantageous for developing new protein force fields and further advancing the study of protein dynamics, folding, and interactions.
2112.04106
Hao Wang
Xiunan Wang, Hao Wang, Pouria Ramazi, Kyeongah Nah, Mark Lewis
A hypothesis-free bridging of disease dynamics and non-pharmaceutical policies
null
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate prediction of the number of daily or weekly confirmed cases of COVID-19 is critical to the control of the pandemic. Existing mechanistic models nicely capture the disease dynamics. However, to forecast the future, they require the transmission rate to be known, limiting their prediction power. Typically, a hypothesis is made on the form of the transmission rate with respect to time. Yet the real form is too complex to be mechanistically modeled due to the unknown dynamics of many influential factors. We tackle this problem by using a hypothesis-free machine-learning algorithm to estimate the transmission rate from data on non-pharmaceutical policies, and in turn forecast the confirmed cases using a mechanistic disease model. More specifically, we build a hybrid model consisting of a mechanistic ordinary differential equation (ODE) model and a generalized boosting model (GBM). To calibrate the parameters, we develop an "inverse method" that obtains the transmission rate inversely in time from the other variables in the ODE model and then feed it into the GBM to connect with the policy data. The resulting model forecasted the number of daily confirmed cases up to 35 days in the future in the United States with an averaged mean absolute percentage error of 27%. Being partly data-driven, the method is more accurate than typical mechanistic models and meanwhile more intuitive, and possibly reliable, than purely data-based machine learning models. Moreover, it can identify the most informative predictive variables, which can be helpful in designing improved forecasters as well as informing policymakers.
[ { "created": "Wed, 8 Dec 2021 04:32:45 GMT", "version": "v1" }, { "created": "Tue, 8 Mar 2022 22:08:00 GMT", "version": "v2" } ]
2022-03-10
[ [ "Wang", "Xiunan", "" ], [ "Wang", "Hao", "" ], [ "Ramazi", "Pouria", "" ], [ "Nah", "Kyeongah", "" ], [ "Lewis", "Mark", "" ] ]
Accurate prediction of the number of daily or weekly confirmed cases of COVID-19 is critical to the control of the pandemic. Existing mechanistic models nicely capture the disease dynamics. However, to forecast the future, they require the transmission rate to be known, limiting their prediction power. Typically, a hypothesis is made on the form of the transmission rate with respect to time. Yet the real form is too complex to be mechanistically modeled due to the unknown dynamics of many influential factors. We tackle this problem by using a hypothesis-free machine-learning algorithm to estimate the transmission rate from data on non-pharmaceutical policies, and in turn forecast the confirmed cases using a mechanistic disease model. More specifically, we build a hybrid model consisting of a mechanistic ordinary differential equation (ODE) model and a generalized boosting model (GBM). To calibrate the parameters, we develop an "inverse method" that obtains the transmission rate inversely in time from the other variables in the ODE model and then feed it into the GBM to connect with the policy data. The resulting model forecasted the number of daily confirmed cases up to 35 days in the future in the United States with an averaged mean absolute percentage error of 27%. Being partly data-driven, the method is more accurate than typical mechanistic models and meanwhile more intuitive, and possibly reliable, than purely data-based machine learning models. Moreover, it can identify the most informative predictive variables, which can be helpful in designing improved forecasters as well as informing policymakers.
q-bio/0404001
Jan Karbowski
Jan Karbowski
Towards comparative theoretical neuroanatomy of the cerebral cortex
To be published in "Journal of Integrative Neuroscience"; special issue on 'Neuromorphic models'
null
null
null
q-bio.NC q-bio.TO
null
Despite differences in brain sizes and cognitive niches among mammals, their cerebral cortices possess many common features and regularities. These regularities have been a subject of experimental investigation in neuroanatomy for the last 100 years. It is believed that such studies may provide clues about cortical design principles and perhaps function. However, on the theoretical side there has been little interest, until recently, in studying these regularities quantitatively. This article reviews some attempts in this direction with an emphasis on neuronal connectivity. It is suggested that brain development is influenced by different functional/biochemical constraints with conflicting outcomes. Because of these conflicting constraints, it is hypothesized that the architecture of the cerebral cortex is shaped by some global optimization plan.
[ { "created": "Thu, 1 Apr 2004 03:23:04 GMT", "version": "v1" } ]
2007-05-23
[ [ "Karbowski", "Jan", "" ] ]
Despite differences in brain sizes and cognitive niches among mammals, their cerebral cortices possess many common features and regularities. These regularities have been a subject of experimental investigation in neuroanatomy for the last 100 years. It is believed that such studies may provide clues about cortical design principles and perhaps function. However, on the theoretical side there has been little interest, until recently, in studying these regularities quantitatively. This article reviews some attempts in this direction with an emphasis on neuronal connectivity. It is suggested that brain development is influenced by different functional/biochemical constraints with conflicting outcomes. Because of these conflicting constraints, it is hypothesized that the architecture of the cerebral cortex is shaped by some global optimization plan.
1612.08763
Alicia Dickenstein
Mercedes P\'erez Mill\'an, Alicia Dickenstein
The structure of MESSI biological systems
Several small improvements
null
null
null
q-bio.MN math.AG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a general framework for biological systems, called MESSI systems, that describe Modifications of type Enzyme-Substrate or Swap with Intermediates, and we prove general results based on the network structure. Many post-translational modification networks are MESSI systems. For example: the motifs in [Feliu and Wiuf (2012a)], sequential distributive and processive multisite phosphorylation networks, most of the examples in [Angeli et al. (2007)], phosphorylation cascades, two component systems as in [Kothamachu et al. (2015)], the bacterial EnvZ/OmpR network in [Shinar and Feinberg (2010)], and all linear networks. We show that, under mass-action kinetics, MESSI systems are conservative. We simplify the study of steady states of these systems by explicit elimination of intermediate complexes and we give conditions to ensure an explicit rational parametrization of the variety of steady states (inspired by [Feliu and Wiuf (2013a, 2013b), Thomson and Gunawardena (2009)]). We define an important subclass of MESSI systems with toric steady states [P\'erez Mill\'an et al. (2012)] and we give for MESSI systems with toric steady states an easy algorithm to determine the capacity for multistationarity. In this case, the algorithm provides rate constants for which multistationarity takes place, based on the theory of oriented matroids.
[ { "created": "Tue, 27 Dec 2016 22:09:23 GMT", "version": "v1" }, { "created": "Sun, 24 Sep 2017 20:33:37 GMT", "version": "v2" }, { "created": "Sat, 5 May 2018 22:44:02 GMT", "version": "v3" } ]
2018-05-08
[ [ "Millán", "Mercedes Pérez", "" ], [ "Dickenstein", "Alicia", "" ] ]
We introduce a general framework for biological systems, called MESSI systems, that describe Modifications of type Enzyme-Substrate or Swap with Intermediates, and we prove general results based on the network structure. Many post-translational modification networks are MESSI systems. For example: the motifs in [Feliu and Wiuf (2012a)], sequential distributive and processive multisite phosphorylation networks, most of the examples in [Angeli et al. (2007)], phosphorylation cascades, two component systems as in [Kothamachu et al. (2015)], the bacterial EnvZ/OmpR network in [Shinar and Feinberg (2010)], and all linear networks. We show that, under mass-action kinetics, MESSI systems are conservative. We simplify the study of steady states of these systems by explicit elimination of intermediate complexes and we give conditions to ensure an explicit rational parametrization of the variety of steady states (inspired by [Feliu and Wiuf (2013a, 2013b), Thomson and Gunawardena (2009)]). We define an important subclass of MESSI systems with toric steady states [P\'erez Mill\'an et al. (2012)] and we give for MESSI systems with toric steady states an easy algorithm to determine the capacity for multistationarity. In this case, the algorithm provides rate constants for which multistationarity takes place, based on the theory of oriented matroids.
1901.02420
Cristina Cornelio PhD
Cristina Cornelio, Lucrezia Furian, Antonio Nicolo' and Francesca Rossi
Using deceased-donor kidneys to initiate chains of living donor kidney paired donations: algorithms and experimentation
To be published in AIES 2019
null
null
null
q-bio.TO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We design a flexible algorithm that exploits deceased donor kidneys to initiate chains of living donor kidney paired donations, combining deceased and living donor allocation mechanisms to improve the quantity and quality of kidney transplants. The advantages of this approach have been measured using retrospective data on the pool of donor/recipient incompatible and desensitized pairs at the Padua University Hospital, the largest center for living donor kidney transplants in Italy. The experiments show a remarkable improvement in the number of patients with incompatible donors who could be transplanted, a decrease in the number of desensitization procedures, and an increase in the number of UT patients (that is, patients unlikely to be transplanted for immunological reasons) on the waiting list who could receive an organ.
[ { "created": "Mon, 17 Dec 2018 15:56:18 GMT", "version": "v1" } ]
2019-01-09
[ [ "Cornelio", "Cristina", "" ], [ "Furian", "Lucrezia", "" ], [ "Nicolo'", "Antonio", "" ], [ "Rossi", "Francesca", "" ] ]
We design a flexible algorithm that exploits deceased donor kidneys to initiate chains of living donor kidney paired donations, combining deceased and living donor allocation mechanisms to improve the quantity and quality of kidney transplants. The advantages of this approach have been measured using retrospective data on the pool of donor/recipient incompatible and desensitized pairs at the Padua University Hospital, the largest center for living donor kidney transplants in Italy. The experiments show a remarkable improvement in the number of patients with incompatible donors who could be transplanted, a decrease in the number of desensitization procedures, and an increase in the number of UT patients (that is, patients unlikely to be transplanted for immunological reasons) on the waiting list who could receive an organ.
1610.03930
Esteban Vargas Bernal
Esteban Vargas and Camilo Sanabria
Modeling sexual selection in T\'ungara frog and rationality of mate choice
17 pages, 6 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The males of the frog species Engystomops pustulosus produce simple and complex calls to lure females, as a form of intersexual selection. Complex calls lead males to greater reproductive success than simple calls do. However, complex calls are also more attractive to the main predator of these amphibians, the bat Trachops cirrhosus. Therefore, as M. Ryan suggests, the complexity of the calls lets the frogs maintain a trade-off between reproductive success and predation. In this paper, we first propose to model the proportion of simple to complex calls as a symmetric game of two strategies. We also propose a model with three strategies (simple callers, complex callers and quiet males), where we assess the effect of a male that keeps quiet and intercepts females, which would play a role in intrasexual selection. We analyze the stable points of the replicator equations of the models that we propose. Under the assumption that the decision of the males takes into account this trade-off between reproductive success and predation, our model reproduces the observed behavior reported in the literature with minimal assumptions on the parameters. From the three-strategies model, we verify that the quiet strategy can only coexist with the simple and complex strategies as long as the rate at which quiet males intercept females is high. We conclude that the reproductive strategy of the male frog Engystomops pustulosus is rational.
[ { "created": "Thu, 13 Oct 2016 03:56:08 GMT", "version": "v1" } ]
2016-10-14
[ [ "Vargas", "Esteban", "" ], [ "Sanabria", "Camilo", "" ] ]
The males of the frog species Engystomops pustulosus produce simple and complex calls to lure females, as a form of intersexual selection. Complex calls lead males to greater reproductive success than simple calls do. However, complex calls are also more attractive to the main predator of these amphibians, the bat Trachops cirrhosus. Therefore, as M. Ryan suggests, the complexity of the calls lets the frogs maintain a trade-off between reproductive success and predation. In this paper, we first propose to model the proportion of simple to complex calls as a symmetric game of two strategies. We also propose a model with three strategies (simple callers, complex callers and quiet males), where we assess the effect of a male that keeps quiet and intercepts females, which would play a role in intrasexual selection. We analyze the stable points of the replicator equations of the models that we propose. Under the assumption that the decision of the males takes into account this trade-off between reproductive success and predation, our model reproduces the observed behavior reported in the literature with minimal assumptions on the parameters. From the three-strategies model, we verify that the quiet strategy can only coexist with the simple and complex strategies as long as the rate at which quiet males intercept females is high. We conclude that the reproductive strategy of the male frog Engystomops pustulosus is rational.
1307.1583
Guido Tiana
Sara Lui and Guido Tiana
The network of stabilizing contacts in proteins studied by coevolutionary data
null
null
10.1063/1.4826096
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The primary structure of proteins, that is their sequence, represents one of the most abundant sets of experimental data concerning biomolecules. The study of correlations in families of co--evolving proteins by means of an inverse Ising--model approach allows one to obtain information on their native conformation. Following up on a recent development along this line, we optimize the algorithm to calculate effective energies between the residues, validating the approach both by back-calculating interaction energies in a model system and by predicting the free energies associated with mutations in real systems. Making use of these effective energies, we study the networks of interactions which stabilize the native conformation of some well--studied proteins, showing that they display different properties than the associated contact networks.
[ { "created": "Fri, 5 Jul 2013 11:22:28 GMT", "version": "v1" } ]
2015-06-16
[ [ "Lui", "Sara", "" ], [ "Tiana", "Guido", "" ] ]
The primary structure of proteins, that is their sequence, represents one of the most abundant sets of experimental data concerning biomolecules. The study of correlations in families of co--evolving proteins by means of an inverse Ising--model approach allows one to obtain information on their native conformation. Following up on a recent development along this line, we optimize the algorithm to calculate effective energies between the residues, validating the approach both by back-calculating interaction energies in a model system and by predicting the free energies associated with mutations in real systems. Making use of these effective energies, we study the networks of interactions which stabilize the native conformation of some well--studied proteins, showing that they display different properties than the associated contact networks.
1605.08909
Mih\'aly B\'anyai
Mih\'aly B\'anyai, Zsombor Koman, Gerg\H{o} Orb\'an
Response statistics dissect the contributions of different sources of variability to population activity in V1
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpret high dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and models of variability are often treated as a subject of choice. We investigate two competing models, the Doubly Stochastic Poisson (DSP) model assuming stochasticity at spike generation, and the Rectified Gaussian (RG) model that traces variability back to membrane potential variance, to analyze stimulus-dependent modulation of response statistics. Using a model of a pair of neurons, we demonstrate that the two models predict similar single-cell statistics. However, DSP and RG models have contradicting predictions on the joint statistics of spiking responses. In order to test the models against data, we build a population model to simulate stimulus change-related modulations in response statistics. We use unit recordings from the primary visual cortex of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with joint statistics. These results suggest that models using Poisson-like variability might fail to capture important properties of response statistics. We argue that membrane potential-level modelling of stochasticity provides an efficient strategy to model correlations.
[ { "created": "Sat, 28 May 2016 16:05:16 GMT", "version": "v1" } ]
2016-05-31
[ [ "Bányai", "Mihály", "" ], [ "Koman", "Zsombor", "" ], [ "Orbán", "Gergő", "" ] ]
Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpret high dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and models of variability are often treated as a subject of choice. We investigate two competing models, the Doubly Stochastic Poisson (DSP) model assuming stochasticity at spike generation, and the Rectified Gaussian (RG) model that traces variability back to membrane potential variance, to analyze stimulus-dependent modulation of response statistics. Using a model of a pair of neurons, we demonstrate that the two models predict similar single-cell statistics. However, DSP and RG models have contradicting predictions on the joint statistics of spiking responses. In order to test the models against data, we build a population model to simulate stimulus change-related modulations in response statistics. We use unit recordings from the primary visual cortex of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with joint statistics. These results suggest that models using Poisson-like variability might fail to capture important properties of response statistics. We argue that membrane potential-level modelling of stochasticity provides an efficient strategy to model correlations.
2102.11212
Brinnae Bent
Brinnae Bent, Maria Henriquez, Jessilyn Dunn
cgmquantify: Python and R packages for comprehensive analysis of interstitial glucose and glycemic variability from continuous glucose monitor data
10 pages, 2 figures, 1 table
null
10.1109/OJEMB.2021.3105816
null
q-bio.QM stat.ME
http://creativecommons.org/licenses/by-nc-sa/4.0/
Continuous glucose monitoring (CGM) systems provide real-time, dynamic glucose information by tracking interstitial glucose values throughout the day (typically values are recorded every 5 minutes). CGMs are commonly used in diabetes management by clinicians and patients and in research to understand how factors of longitudinal glucose and glucose variability relate to disease onset and severity and the efficacy of interventions. CGM data presents unique bioinformatic challenges because the data is longitudinal, temporal, and there are nearly infinite possible ways to summarize and use this data. There are over 20 metrics of glucose variability, no standardization of metrics, and little validation across studies. Here we present open source python and R packages called cgmquantify, which contains over 20 functions with over 25 clinically validated metrics of glucose and glucose variability and functions for visualizing longitudinal CGM data. This is expected to be useful for researchers and may provide additional insights to patients and clinicians about glucose patterns.
[ { "created": "Mon, 8 Feb 2021 17:21:48 GMT", "version": "v1" } ]
2021-10-05
[ [ "Bent", "Brinnae", "" ], [ "Henriquez", "Maria", "" ], [ "Dunn", "Jessilyn", "" ] ]
Continuous glucose monitoring (CGM) systems provide real-time, dynamic glucose information by tracking interstitial glucose values throughout the day (typically values are recorded every 5 minutes). CGMs are commonly used in diabetes management by clinicians and patients and in research to understand how factors of longitudinal glucose and glucose variability relate to disease onset and severity and the efficacy of interventions. CGM data presents unique bioinformatic challenges because the data is longitudinal, temporal, and there are nearly infinite possible ways to summarize and use this data. There are over 20 metrics of glucose variability, no standardization of metrics, and little validation across studies. Here we present open source python and R packages called cgmquantify, which contains over 20 functions with over 25 clinically validated metrics of glucose and glucose variability and functions for visualizing longitudinal CGM data. This is expected to be useful for researchers and may provide additional insights to patients and clinicians about glucose patterns.
1210.0754
Tony Lindeberg
Tony Lindeberg
Invariance of visual operations at the level of receptive fields
40 pages, 17 figures
PLoS ONE 8(7):e66990, 2013
10.1371/journal.pone.0066990
null
q-bio.NC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Receptive field profiles registered by cell recordings have shown that mammalian vision has developed receptive fields tuned to different sizes and orientations in the image domain as well as to different image velocities in space-time. This article presents a theoretical model by which families of idealized receptive field profiles can be derived mathematically from a small set of basic assumptions that correspond to structural properties of the environment. The article also presents a theory for how basic invariance properties to variations in scale, viewing direction and relative motion can be obtained from the output of such receptive fields, using complementary selection mechanisms that operate over the output of families of receptive fields tuned to different parameters. Thereby, the theory shows how basic invariance properties of a visual system can be obtained already at the level of receptive fields, and we can explain the different shapes of receptive field profiles found in biological vision from a requirement that the visual system should be invariant to the natural types of image transformations that occur in its environment.
[ { "created": "Tue, 2 Oct 2012 12:43:18 GMT", "version": "v1" } ]
2014-04-09
[ [ "Lindeberg", "Tony", "" ] ]
Receptive field profiles registered by cell recordings have shown that mammalian vision has developed receptive fields tuned to different sizes and orientations in the image domain as well as to different image velocities in space-time. This article presents a theoretical model by which families of idealized receptive field profiles can be derived mathematically from a small set of basic assumptions that correspond to structural properties of the environment. The article also presents a theory for how basic invariance properties to variations in scale, viewing direction and relative motion can be obtained from the output of such receptive fields, using complementary selection mechanisms that operate over the output of families of receptive fields tuned to different parameters. Thereby, the theory shows how basic invariance properties of a visual system can be obtained already at the level of receptive fields, and we can explain the different shapes of receptive field profiles found in biological vision from a requirement that the visual system should be invariant to the natural types of image transformations that occur in its environment.
1803.02287
Eberhard Korsching
H Buerger, F Boecker, J Packeisen, K Agelopoulos, K Poos, W Nadler, E Korsching
Analyzing the basic principles of tissue microarray data measuring the cooperative phenomena of marker proteins in invasive breast cancer
The DOVE Press Journal 'Open Access Bioinformatics' ceased publishing in May 2016
Open Access Bioinformatics, 2013, 5(1):1-21, 10.2147/OAB.S36565
10.2147/OAB.S36565
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The analysis of a protein-expression pattern from tissue microarray (TMA) data will not immediately give an answer on synergistic or antagonistic effects between the expression of the observed proteins. But contrary to apparent first impression, it is possible to reveal those cooperative phenomena from TMA data. We present here a largely assumption-free combinatorial analysis, related to correlation networks but with much less arbitrary constraints. A strong focus was put on the analysis of the basic data to analyze how the cooperative phenomena might be imprinted in the TMA data structure. The study design was based on two independent panels of 589 and 366 invasive breast cancer cases from different institutions, assembled on tissue microarrays. The combinatorial analysis generates an optimal rank ordering of protein-expression coherence. The outcome of the analysis corresponds to all the single observations scattered over several publications and integrates them in one context. This means all these scattered observations can also be deduced from one TMA experiment. A comprehensive statistical meta-analysis of the TMA data suggests the existence of a superposition of three basic coherence situations, and offers the opportunity to analyze these data properties with additional real-world data and synthetic data in more detail. The presented algorithm gives molecular pathologists a tool to extract dependency information from TMA data. Beyond this practical benefit, some light was shed on how dependency aspects might be imprinted into expression data. This will certainly foster the refinement of algorithms to reconstruct dependency networks. The implementation of the algorithm is at the moment not end-user suitable, but available on request.
[ { "created": "Tue, 6 Mar 2018 16:29:42 GMT", "version": "v1" } ]
2018-03-07
[ [ "Buerger", "H", "" ], [ "Boecker", "F", "" ], [ "Packeisen", "J", "" ], [ "Agelopoulos", "K", "" ], [ "Poos", "K", "" ], [ "Nadler", "W", "" ], [ "Korsching", "E", "" ] ]
The analysis of a protein-expression pattern from tissue microarray (TMA) data will not immediately give an answer on synergistic or antagonistic effects between the expression of the observed proteins. But contrary to apparent first impression, it is possible to reveal those cooperative phenomena from TMA data. We present here a largely assumption-free combinatorial analysis, related to correlation networks but with much less arbitrary constraints. A strong focus was put on the analysis of the basic data to analyze how the cooperative phenomena might be imprinted in the TMA data structure. The study design was based on two independent panels of 589 and 366 invasive breast cancer cases from different institutions, assembled on tissue microarrays. The combinatorial analysis generates an optimal rank ordering of protein-expression coherence. The outcome of the analysis corresponds to all the single observations scattered over several publications and integrates them in one context. This means all these scattered observations can also be deduced from one TMA experiment. A comprehensive statistical meta-analysis of the TMA data suggests the existence of a superposition of three basic coherence situations, and offers the opportunity to analyze these data properties with additional real-world data and synthetic data in more detail. The presented algorithm gives molecular pathologists a tool to extract dependency information from TMA data. Beyond this practical benefit, some light was shed on how dependency aspects might be imprinted into expression data. This will certainly foster the refinement of algorithms to reconstruct dependency networks. The implementation of the algorithm is at the moment not end-user suitable, but available on request.
0912.4283
Yuriy Shckorbatov G
Y.G. Shckorbatov, V.N. Pasiuga, V.A. Grabina, N.N. Kolchigin, D.D. Ivanchenko, V.N.Bykov
Human cell recovery after microwave irradiation
12 pages, 18 figures, 16 references
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cells of human buccal epithelium of 6 male donors were exposed to microwave radiation (frequency f=36.64 GHz, power density E = 10, 100, and 400 mcW/cm^2). Exposure time in all experiments was 10 seconds. Heterochromatin was stained by 2% orcein in 45% acetic acid. The stainability of cells with trypan blue (0.5%) and indigocarmine (5 mM) after 5 min of staining was investigated. Irradiation induced chromatin condensation (an increase in the number of heterochromatin granules) and an increase in membrane permeability to the vital dyes trypan blue and indigocarmine. Isolated human buccal cells showed the ability to recover from these changes. The number of heterochromatin granules returned to the initial level 0.5 hour (E = 10 mcW/cm^2) and 2 hours (E = 100 and 400 mcW/cm^2) after irradiation. Cell plasma membrane permeability recovered somewhat later, correspondingly 1 hour and 3 hours after irradiation.
[ { "created": "Mon, 21 Dec 2009 22:49:01 GMT", "version": "v1" } ]
2009-12-23
[ [ "Shckorbatov", "Y. G.", "" ], [ "Pasiuga", "V. N.", "" ], [ "Grabina", "V. A.", "" ], [ "Kolchigin", "N. N.", "" ], [ "Ivanchenko", "D. D.", "" ], [ "Bykov", "V. N.", "" ] ]
Cells of human buccal epithelium of 6 male donors were exposed to microwave radiation (frequency f=36.64 GHz, power density E = 10, 100, and 400 mcW/cm^2). Exposure time in all experiments was 10 seconds. Heterochromatin was stained by 2% orcein in 45% acetic acid. The stainability of cells with trypan blue (0.5%) and indigocarmine (5 mM) after 5 min of staining was investigated. Irradiation induced chromatin condensation (an increase in the number of heterochromatin granules) and an increase in membrane permeability to the vital dyes trypan blue and indigocarmine. Isolated human buccal cells showed the ability to recover from these changes. The number of heterochromatin granules returned to the initial level 0.5 hour (E = 10 mcW/cm^2) and 2 hours (E = 100 and 400 mcW/cm^2) after irradiation. Cell plasma membrane permeability recovered somewhat later, correspondingly 1 hour and 3 hours after irradiation.
2106.06929
Michael Kochen
Michael A. Kochen, Steven S. Andrews, H. Steven Wiley, Song Feng, Herbert M. Sauro
Dynamics and Sensitivity of Signaling Pathways
null
null
null
null
q-bio.MN q-bio.SC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Signaling pathways serve to communicate information about extracellular conditions into the cell, to both the nucleus and cytoplasmic processes to control cell responses. Genetic mutations in signaling network components are frequently associated with cancer and can result in cells acquiring an ability to divide and grow uncontrollably. Because signaling pathways play such a significant role in cancer initiation and advancement, their constituent proteins are attractive therapeutic targets. In this review, we discuss how signaling pathway modeling can assist with identifying effective drugs for treating diseases, such as cancer. An achievement that would facilitate the use of such models is their ability to identify controlling biochemical parameters in signaling pathways, such as molecular abundances and chemical reaction rates, because this would help determine effective points of attack by therapeutics.
[ { "created": "Sun, 13 Jun 2021 05:55:15 GMT", "version": "v1" } ]
2021-06-15
[ [ "Kochen", "Michael A.", "" ], [ "Andrews", "Steven S.", "" ], [ "Wiley", "H. Steven", "" ], [ "Feng", "Song", "" ], [ "Sauro", "Herbert M.", "" ] ]
Signaling pathways serve to communicate information about extracellular conditions into the cell, to both the nucleus and cytoplasmic processes to control cell responses. Genetic mutations in signaling network components are frequently associated with cancer and can result in cells acquiring an ability to divide and grow uncontrollably. Because signaling pathways play such a significant role in cancer initiation and advancement, their constituent proteins are attractive therapeutic targets. In this review, we discuss how signaling pathway modeling can assist with identifying effective drugs for treating diseases, such as cancer. An achievement that would facilitate the use of such models is their ability to identify controlling biochemical parameters in signaling pathways, such as molecular abundances and chemical reaction rates, because this would help determine effective points of attack by therapeutics.
1703.05826
Kelath Murali Manoj
Kelath Murali Manoj
Mitochondrial oxidative phosphorylation: Debunking the concepts of electron transport chain, proton pumps, chemiosmosis and rotary ATP synthesis
Main manuscript (including abstract, Tables & Figures, References, etc.) is 32 pages. 2 Tables & 2 Figures. Supplementary Information has 7 items, and the document (together with the main manuscript) adds up to 65 pages
Biochemistry Insights, 2018
10.1177/1178626418818442
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Herein (the first part of my work), I debunk the long-standing hypotheses that explain mitochondrial oxidative phosphorylation. Simple calculations point out that mitochondria are highly proton-deficient microcosms and therefore, elaborate proton pump machinery are not tenable. Further, other elements like the elaborate electron transport chain, chemiosmosis, rotary ATP synthesis, etc. are also critically evaluated to point out that such complicated systems are non-viable. The communication necessitates a new explanatory paradigm for cellular respiration. In the second part of my work, I have put forward a viable alternative explanatory paradigm for mitochondrial oxidative phosphorylation.
[ { "created": "Thu, 16 Mar 2017 21:21:27 GMT", "version": "v1" } ]
2018-12-21
[ [ "Manoj", "Kelath Murali", "" ] ]
Herein (the first part of my work), I debunk the long-standing hypotheses that explain mitochondrial oxidative phosphorylation. Simple calculations point out that mitochondria are highly proton-deficient microcosms and therefore, elaborate proton pump machinery are not tenable. Further, other elements like the elaborate electron transport chain, chemiosmosis, rotary ATP synthesis, etc. are also critically evaluated to point out that such complicated systems are non-viable. The communication necessitates a new explanatory paradigm for cellular respiration. In the second part of my work, I have put forward a viable alternative explanatory paradigm for mitochondrial oxidative phosphorylation.
2312.01527
Ishir Rao
Ishir Rao
NovoMol: Recurrent Neural Network for Orally Bioavailable Drug Design and Validation on PDGFR{\alpha} Receptor
21 pages, 10 figures, 4 tables, Submitted to Frontiers in Drug Discovery
null
null
null
q-bio.BM cs.AI q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Longer timelines and lower success rates of drug candidates limit the productivity of clinical trials in the pharmaceutical industry. Promising de novo drug design techniques help solve this by exploring a broader chemical space, efficiently generating new molecules, and providing improved therapies. However, optimizing for molecular characteristics found in approved oral drugs remains a challenge, limiting de novo usage. In this work, we propose NovoMol, a novel de novo method using recurrent neural networks to mass-generate drug molecules with high oral bioavailability, increasing clinical trial time efficiency. Molecules were optimized for desirable traits and ranked using the quantitative estimate of drug-likeness (QED). Generated molecules meeting QED's oral bioavailability threshold were used to retrain the neural network, and, after five training cycles, 76% of generated molecules passed this strict threshold and 96% passed the traditionally used Lipinski's Rule of Five. The trained model was then used to generate specific drug candidates for the cancer-related PDGFR{\alpha} receptor and 44% of generated candidates had better binding affinity than the current state-of-the-art drug, Imatinib (with a receptor binding affinity of -9.4 kcal/mol), and the best-generated candidate at -12.9 kcal/mol. NovoMol provides a time/cost-efficient AI-based de novo method offering promising drug candidates for clinical trials.
[ { "created": "Sun, 3 Dec 2023 22:52:11 GMT", "version": "v1" } ]
2023-12-05
[ [ "Rao", "Ishir", "" ] ]
Longer timelines and lower success rates of drug candidates limit the productivity of clinical trials in the pharmaceutical industry. Promising de novo drug design techniques help solve this by exploring a broader chemical space, efficiently generating new molecules, and providing improved therapies. However, optimizing for molecular characteristics found in approved oral drugs remains a challenge, limiting de novo usage. In this work, we propose NovoMol, a novel de novo method using recurrent neural networks to mass-generate drug molecules with high oral bioavailability, increasing clinical trial time efficiency. Molecules were optimized for desirable traits and ranked using the quantitative estimate of drug-likeness (QED). Generated molecules meeting QED's oral bioavailability threshold were used to retrain the neural network, and, after five training cycles, 76% of generated molecules passed this strict threshold and 96% passed the traditionally used Lipinski's Rule of Five. The trained model was then used to generate specific drug candidates for the cancer-related PDGFR{\alpha} receptor and 44% of generated candidates had better binding affinity than the current state-of-the-art drug, Imatinib (with a receptor binding affinity of -9.4 kcal/mol), and the best-generated candidate at -12.9 kcal/mol. NovoMol provides a time/cost-efficient AI-based de novo method offering promising drug candidates for clinical trials.
2206.06281
Edwin Dalmaijer
Edwin S. Dalmaijer
Cumulative culture spontaneously emerges in artificial navigators who are social and memory-guided
Code: https://github.com/esdalmaijer/artificial_navigators Data: https://doi.org/10.5281/zenodo.6944184
null
null
null
q-bio.PE cs.AI
http://creativecommons.org/licenses/by/4.0/
Cumulative cultural evolution occurs when adaptive innovations are passed down to consecutive generations through social learning. This process has shaped human technological innovation, but also occurs in non-human species. While it is traditionally argued that cumulative culture relies on high-fidelity social transmission and advanced cognitive skills, here I show that a much simpler system suffices. Cumulative culture spontaneously emerged in artificial agents who navigate with a minimal cognitive architecture of goal-direction, social proximity, and route memory. Within each generation, naive individuals benefitted from being paired with experienced navigators because they could follow previously established routes. Crucially, experienced navigators also benefitted from the presence of naive individuals through regression to the goal. As experienced agents followed their memorised path, their naive counterparts (unhindered by route memory) were more likely to err towards than away from the goal, and thus biased the pair in that direction. This improved route efficiency within each generation. In control experiments, cumulative culture was attenuated when agents' social proximity or route memory were lesioned, whereas eliminating goal-direction only reduced efficiency. These results demonstrate that cumulative cultural evolution occurs even in the absence of sophisticated communication or thought. One interpretation of this finding is that current definitions are too loose, and should be narrowed. An alternative conclusion is that rudimentary cumulative culture is an emergent property of systems that seek social proximity and have an imprecise memory capacity, providing a flexible complement to traditional evolutionary mechanisms.
[ { "created": "Mon, 13 Jun 2022 16:10:39 GMT", "version": "v1" }, { "created": "Wed, 29 Jun 2022 16:08:25 GMT", "version": "v2" }, { "created": "Tue, 25 Jul 2023 07:58:13 GMT", "version": "v3" } ]
2023-07-26
[ [ "Dalmaijer", "Edwin S.", "" ] ]
Cumulative cultural evolution occurs when adaptive innovations are passed down to consecutive generations through social learning. This process has shaped human technological innovation, but also occurs in non-human species. While it is traditionally argued that cumulative culture relies on high-fidelity social transmission and advanced cognitive skills, here I show that a much simpler system suffices. Cumulative culture spontaneously emerged in artificial agents who navigate with a minimal cognitive architecture of goal-direction, social proximity, and route memory. Within each generation, naive individuals benefitted from being paired with experienced navigators because they could follow previously established routes. Crucially, experienced navigators also benefitted from the presence of naive individuals through regression to the goal. As experienced agents followed their memorised path, their naive counterparts (unhindered by route memory) were more likely to err towards than away from the goal, and thus biased the pair in that direction. This improved route efficiency within each generation. In control experiments, cumulative culture was attenuated when agents' social proximity or route memory were lesioned, whereas eliminating goal-direction only reduced efficiency. These results demonstrate that cumulative cultural evolution occurs even in the absence of sophisticated communication or thought. One interpretation of this finding is that current definitions are too loose, and should be narrowed. An alternative conclusion is that rudimentary cumulative culture is an emergent property of systems that seek social proximity and have an imprecise memory capacity, providing a flexible complement to traditional evolutionary mechanisms.
q-bio/0606031
Arijit Bhattacharyay
A. Bhattacharyay, A. Trovato and F. Seno
Simple solvation potential for coarse-grained models of proteins
18 pages, 8 tables, 3 figures
null
null
null
q-bio.BM
null
We formulate a simple solvation potential based on a coarse-grained representation of amino acids with two spheres modeling the $C_\alpha$ atom and an effective side-chain centroid. The potential relies on a new method for estimating the buried area of residues, based on counting the effective number of burying neighbours in a suitable way. This latter quantity shows a good correlation with the buried area of residues computed from all-atom crystallographic structures. We check the discriminatory power of the solvation potential alone to identify the native fold of a protein from a set of decoys and show the potential to be considerably selective.
[ { "created": "Thu, 22 Jun 2006 18:11:50 GMT", "version": "v1" } ]
2007-05-23
[ [ "Bhattacharyay", "A.", "" ], [ "Trovato", "A.", "" ], [ "Seno", "F.", "" ] ]
We formulate a simple solvation potential based on a coarse-grained representation of amino acids with two spheres modeling the $C_\alpha$ atom and an effective side-chain centroid. The potential relies on a new method for estimating the buried area of residues, based on counting the effective number of burying neighbours in a suitable way. This latter quantity shows a good correlation with the buried area of residues computed from all-atom crystallographic structures. We check the discriminatory power of the solvation potential alone to identify the native fold of a protein from a set of decoys and show the potential to be considerably selective.
2309.15862
Vikram Jakkamsetti
Vikram Jakkamsetti, Clinton Broyles, Frank Buttafarro, Clint R. Myers, Sophia Kwong Myers
Human Behavior Plasticity Measured in Speech Epochs: A Proof-Of-Concept Study
Revised to include one new figure (one new analysis) and addition of relevant text, primarily in Methods, Results and Discussion. A typo for the formula for CV2 was corrected in Methods
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Human behavior training in improvisational theater has shown extensive behavioral and health benefits. Improved empathy measures in medical students, improved behavior outcomes in patients with autism and a reduced recidivism rate are among the many benefits attributed to improvisational theater training. However, measuring tangible outcomes of changed behavior is challenging and usually requires multiple sessions or months of training. One of the principal tenets of improvisational theater is to actively listen and collaboratively allow a scene partner to talk and provide equal input as needed. Here we measured human speech epochs and asked if a month of weekly improvisational theater training would be reflected in speech epoch assays. We found a significant decrease in speech epoch durations after one month of weekly training. There was no change in epoch durations on the same day, suggesting it to be a stable parameter over a day, but amenable to modulation over a month of training. The overall rhythm of speech epochs remained unchanged but durations of discrete low-frequency speech volleys increased. Moreover, training improved regularity of adjacent speech epoch durations, suggesting a better sharing of speaking focus by matching speech time to the previous speech epoch duration. Our assay provides a proof-of-concept study of empathy-relevant tractable speech epoch parameters associated with changes a month after human behavior training.
[ { "created": "Mon, 25 Sep 2023 20:42:20 GMT", "version": "v1" }, { "created": "Fri, 3 Nov 2023 16:09:08 GMT", "version": "v2" }, { "created": "Mon, 4 Dec 2023 15:52:10 GMT", "version": "v3" } ]
2023-12-05
[ [ "Jakkamsetti", "Vikram", "" ], [ "Broyles", "Clinton", "" ], [ "Buttafarro", "Frank", "" ], [ "Myers", "Clint R.", "" ], [ "Myers", "Sophia Kwong", "" ] ]
Human behavior training in improvisational theater has shown extensive behavioral and health benefits. Improved empathy measures in medical students, improved behavior outcomes in patients with autism and a reduced recidivism rate are among the many benefits attributed to improvisational theater training. However, measuring tangible outcomes of changed behavior is challenging and usually requires multiple sessions or months of training. One of the principal tenets of improvisational theater is to actively listen and collaboratively allow a scene partner to talk and provide equal input as needed. Here we measured human speech epochs and asked if a month of weekly improvisational theater training would be reflected in speech epoch assays. We found a significant decrease in speech epoch durations after one month of weekly training. There was no change in epoch durations on the same day, suggesting it to be a stable parameter over a day, but amenable to modulation over a month of training. The overall rhythm of speech epochs remained unchanged but durations of discrete low-frequency speech volleys increased. Moreover, training improved regularity of adjacent speech epoch durations, suggesting a better sharing of speaking focus by matching speech time to the previous speech epoch duration. Our assay provides a proof-of-concept study of empathy-relevant tractable speech epoch parameters associated with changes a month after human behavior training.
1310.7275
Emmanuelle Tognoli
Emmanuelle Tognoli and J. A. Scott Kelso
The coordination dynamics of social neuromarkers
24 pages, 6 figures
null
10.3389/fnhum.2015.00563
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social behavior is a complex integrative function that entails many aspects of the brain's sensory, cognitive, emotional and motor capacities. Neural processes are seldom simultaneous but occur according to precise temporal and coordinative choreographies within and between brains. Methods with good temporal resolution such as EEG can help to identify so-called "neuromarkers" of social function (Tognoli, et al., 2007) and aid in disentangling the dynamical architecture of social brains. We have studied neuromarkers and their dynamics during synchronic interactions in which pairs of subjects coordinate behavior spontaneously and intentionally (social coordination) and during diachronic transactions that required subjects to perceive or behave in turn (action observation and delayed imitation). We examined commonalities and differences in the neuromarkers that are recruited for both kinds of tasks. We found that the neuromarker landscape was task-specific: synchronic paradigms of social coordination revealed medial mu, alpha and the phi complex as contributing neuromarkers. Diachronic tasks recruited alpha as well, in addition to lateral mu rhythms and the newly discovered nu and kappa rhythms whose functional significance is still unclear. Social coordination, observation, and delayed imitation share commonality of context: in our experiments, subjects exchanged information through visual perception and moved in similar ways. Nonetheless, there was little overlap between the neuromarkers recruited for synchronic and diachronic tasks, a result that hints strongly of task-specific neural mechanisms for social behaviors. The only neuromarker that transcended both synchronic and diachronic social behaviors was the ubiquitous alpha rhythm, which appears to be a key signature of visually-mediated social behaviors. The present paper is both an entry point and a challenge...
[ { "created": "Sun, 27 Oct 2013 23:37:14 GMT", "version": "v1" } ]
2020-05-11
[ [ "Tognoli", "Emmanuelle", "" ], [ "Kelso", "J. A. Scott", "" ] ]
Social behavior is a complex integrative function that entails many aspects of the brain's sensory, cognitive, emotional and motor capacities. Neural processes are seldom simultaneous but occur according to precise temporal and coordinative choreographies within and between brains. Methods with good temporal resolution such as EEG can help to identify so-called "neuromarkers" of social function (Tognoli, et al., 2007) and aid in disentangling the dynamical architecture of social brains. We have studied neuromarkers and their dynamics during synchronic interactions in which pairs of subjects coordinate behavior spontaneously and intentionally (social coordination) and during diachronic transactions that required subjects to perceive or behave in turn (action observation and delayed imitation). We examined commonalities and differences in the neuromarkers that are recruited for both kinds of tasks. We found that the neuromarker landscape was task-specific: synchronic paradigms of social coordination revealed medial mu, alpha and the phi complex as contributing neuromarkers. Diachronic tasks recruited alpha as well, in addition to lateral mu rhythms and the newly discovered nu and kappa rhythms whose functional significance is still unclear. Social coordination, observation, and delayed imitation share commonality of context: in our experiments, subjects exchanged information through visual perception and moved in similar ways. Nonetheless, there was little overlap between the neuromarkers recruited for synchronic and diachronic tasks, a result that hints strongly of task-specific neural mechanisms for social behaviors. The only neuromarker that transcended both synchronic and diachronic social behaviors was the ubiquitous alpha rhythm, which appears to be a key signature of visually-mediated social behaviors. The present paper is both an entry point and a challenge...
2206.03823
Yanhua Xu
Yanhua Xu and Dominik Wojtczak
Multi-channel neural networks for predicting influenza A virus hosts and antigenic types
Accepted for publication at IC3K (KDIR) 2022
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Influenza occurs every season and occasionally causes pandemics. Despite its low mortality rate, influenza is a major public health concern, as it can be complicated by severe diseases like pneumonia. A fast, accurate and low-cost method to predict the origin host and subtype of influenza viruses could help reduce virus transmission and benefit resource-poor areas. In this work, we propose multi-channel neural networks to predict antigenic types and hosts of influenza A viruses with hemagglutinin and neuraminidase protein sequences. An integrated data set containing complete protein sequences was used to produce a pre-trained model, and two other data sets were used for testing the model's performance. One test set contained complete protein sequences, and another test set contained incomplete protein sequences. The results suggest that multi-channel neural networks are applicable and promising for predicting influenza A virus hosts and antigenic subtypes with complete and partial protein sequences.
[ { "created": "Wed, 8 Jun 2022 11:47:31 GMT", "version": "v1" }, { "created": "Fri, 15 Jul 2022 21:14:52 GMT", "version": "v2" }, { "created": "Fri, 29 Jul 2022 10:05:28 GMT", "version": "v3" } ]
2022-08-01
[ [ "Xu", "Yanhua", "" ], [ "Wojtczak", "Dominik", "" ] ]
Influenza occurs every season and occasionally causes pandemics. Despite its low mortality rate, influenza is a major public health concern, as it can be complicated by severe diseases like pneumonia. A fast, accurate and low-cost method to predict the origin host and subtype of influenza viruses could help reduce virus transmission and benefit resource-poor areas. In this work, we propose multi-channel neural networks to predict antigenic types and hosts of influenza A viruses with hemagglutinin and neuraminidase protein sequences. An integrated data set containing complete protein sequences was used to produce a pre-trained model, and two other data sets were used for testing the model's performance. One test set contained complete protein sequences, and another test set contained incomplete protein sequences. The results suggest that multi-channel neural networks are applicable and promising for predicting influenza A virus hosts and antigenic subtypes with complete and partial protein sequences.
1608.04287
Ovidiu Lipan
Cameron Ferwerda and Ovidiu Lipan
Splitting Nodes and Linking Channels: A Method for Assembling Biocircuits from Stochastic Elementary Units
Paper 8 pages with 10 figures; Supplemental Material 13 pages with 10 figures
Phys. Rev. E 94, 052404 (2016)
10.1103/PhysRevE.94.052404
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Akin to electric circuits, we construct biocircuits that are manipulated by cutting and assembling channels through which stochastic information flows. This diagrammatic manipulation allows us to create a method which constructs networks by joining building blocks selected so that (a) they cover only basic processes; (b) it is scalable to large networks; (c) the mean and variance-covariance from the Pauli master equation form a closed system and; (d) given the initial probability distribution, no special boundary conditions are necessary to solve the master equation. The method aims to help with both designing new synthetic signalling pathways and quantifying naturally existing regulatory networks.
[ { "created": "Mon, 15 Aug 2016 14:40:18 GMT", "version": "v1" } ]
2016-11-15
[ [ "Ferwerda", "Cameron", "" ], [ "Lipan", "Ovidiu", "" ] ]
Akin to electric circuits, we construct biocircuits that are manipulated by cutting and assembling channels through which stochastic information flows. This diagrammatic manipulation allows us to create a method which constructs networks by joining building blocks selected so that (a) they cover only basic processes; (b) it is scalable to large networks; (c) the mean and variance-covariance from the Pauli master equation form a closed system and; (d) given the initial probability distribution, no special boundary conditions are necessary to solve the master equation. The method aims to help with both designing new synthetic signalling pathways and quantifying naturally existing regulatory networks.
0909.3132
Wentian Li
Jan Freudengerb, Mingyi Wang, Yaning Yang, Wentian Li
Partial correlation analysis indicates causal relationships between GC-content, exon density and recombination rate in the human genome
null
BMC Bioinformatics, 10(suppl 1), S66 (2009)
10.1186/1471-2105-10-S1-S66
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
{\bf Background}: Several features are known to correlate with the GC-content in the human genome, including recombination rate, gene density and distance to telomere. However, by testing for pairwise correlation only, it is impossible to distinguish direct associations from indirect ones and to distinguish between causes and effects. {\bf Results}: We use partial correlations to construct partially directed graphs for the following four variables: GC-content, recombination rate, exon density and distance-to-telomere. Recombination rate and exon density are unconditionally uncorrelated, but become inversely correlated by conditioning on GC-content. This pattern indicates a model where recombination rate and exon density are two independent causes of GC-content variation. {\bf Conclusions}: Causal inference and graphical models are useful methods to understand genome evolution and the mechanisms of isochore evolution in the human genome.
[ { "created": "Thu, 17 Sep 2009 00:24:45 GMT", "version": "v1" } ]
2012-05-07
[ [ "Freudengerb", "Jan", "" ], [ "Wang", "Mingyi", "" ], [ "Yang", "Yaning", "" ], [ "Li", "Wentian", "" ] ]
{\bf Background}: Several features are known to correlate with the GC-content in the human genome, including recombination rate, gene density and distance to telomere. However, by testing for pairwise correlation only, it is impossible to distinguish direct associations from indirect ones and to distinguish between causes and effects. {\bf Results}: We use partial correlations to construct partially directed graphs for the following four variables: GC-content, recombination rate, exon density and distance-to-telomere. Recombination rate and exon density are unconditionally uncorrelated, but become inversely correlated by conditioning on GC-content. This pattern indicates a model where recombination rate and exon density are two independent causes of GC-content variation. {\bf Conclusions}: Causal inference and graphical models are useful methods to understand genome evolution and the mechanisms of isochore evolution in the human genome.
1907.07821
Yahya Karimipanah
Carson C. Chow and Yahya Karimipanah
Before and beyond the Wilson-Cowan equations
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Wilson-Cowan equations represent a landmark in the history of computational neuroscience. Among the insights Wilson and Cowan offered for neuroscience, they crystallized an approach to modeling neural dynamics and brain function. Although their iconic equations are used in various guises today, the ideas that led to their formulation and the relationship to other approaches are not well known. Here, we give a little context to some of the biological and theoretical concepts that led to the Wilson-Cowan equations and discuss how to extend beyond them.
[ { "created": "Thu, 18 Jul 2019 00:26:04 GMT", "version": "v1" }, { "created": "Wed, 31 Jul 2019 16:49:27 GMT", "version": "v2" } ]
2019-08-01
[ [ "Chow", "Carson C.", "" ], [ "Karimipanah", "Yahya", "" ] ]
The Wilson-Cowan equations represent a landmark in the history of computational neuroscience. Among the insights Wilson and Cowan offered for neuroscience, they crystallized an approach to modeling neural dynamics and brain function. Although their iconic equations are used in various guises today, the ideas that led to their formulation and the relationship to other approaches are not well known. Here, we give a little context to some of the biological and theoretical concepts that led to the Wilson-Cowan equations and discuss how to extend beyond them.
2305.06488
Sebastian Lobentanzer
Sebastian Lobentanzer, Shaohong Feng, The BioChatter Consortium, Andreas Maier, Cankun Wang, Jan Baumbach, Nils Krehl, Qin Ma and Julio Saez-Rodriguez
A Platform for the Biomedical Application of Large Language Models
31 pages, 3 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Current-generation Large Language Models (LLMs) have stirred enormous interest in recent months, yielding great potential for accessibility and automation, while simultaneously posing significant challenges and risk of misuse. To facilitate interfacing with LLMs in the biomedical space, while at the same time safeguarding their functionalities through sensible constraints, we propose a dedicated, open-source framework: BioChatter. Based on open-source software packages, we synergise the many functionalities that are currently developing around LLMs, such as knowledge integration / retrieval-augmented generation, model chaining, and benchmarking, resulting in an easy-to-use and inclusive framework for application in many use cases of biomedicine. We focus on robust and user-friendly implementation, including ways to deploy privacy-preserving local open-source LLMs. We demonstrate use cases via two multi-purpose web apps (https://chat.biocypher.org), and provide documentation, support, and an open community.
[ { "created": "Wed, 10 May 2023 22:36:27 GMT", "version": "v1" }, { "created": "Tue, 16 May 2023 17:29:26 GMT", "version": "v2" }, { "created": "Fri, 21 Jul 2023 14:02:08 GMT", "version": "v3" }, { "created": "Sat, 17 Feb 2024 08:45:30 GMT", "version": "v4" } ]
2024-02-20
[ [ "Lobentanzer", "Sebastian", "" ], [ "Feng", "Shaohong", "" ], [ "Consortium", "The BioChatter", "" ], [ "Maier", "Andreas", "" ], [ "Wang", "Cankun", "" ], [ "Baumbach", "Jan", "" ], [ "Krehl", "Nils", "" ], [ "Ma", "Qin", "" ], [ "Saez-Rodriguez", "Julio", "" ] ]
Current-generation Large Language Models (LLMs) have stirred enormous interest in recent months, yielding great potential for accessibility and automation, while simultaneously posing significant challenges and risk of misuse. To facilitate interfacing with LLMs in the biomedical space, while at the same time safeguarding their functionalities through sensible constraints, we propose a dedicated, open-source framework: BioChatter. Based on open-source software packages, we synergise the many functionalities that are currently developing around LLMs, such as knowledge integration / retrieval-augmented generation, model chaining, and benchmarking, resulting in an easy-to-use and inclusive framework for application in many use cases of biomedicine. We focus on robust and user-friendly implementation, including ways to deploy privacy-preserving local open-source LLMs. We demonstrate use cases via two multi-purpose web apps (https://chat.biocypher.org), and provide documentation, support, and an open community.
1908.09116
Markus D Schirmer
Ai Wern Chung and Markus D. Schirmer
Network Dependency Index Stratified Subnetwork Analysis of Functional Connectomes: An application to autism
null
null
10.1007/978-3-030-32391-2_13
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autism spectrum disorder (ASD) is a neurodevelopmental condition impacting high-level cognitive processing and social behavior. Recognizing the distributed nature of brain function, neuroscientists are exploiting the connectome to aid with the characterization of this complex disease. The human connectome has demonstrated the brain to be a highly organized system with a centralized core vital for effective function. As such, many have used this topological principle to not only assess core regions, but have stratified the remaining graph into subnetworks depending on their relation to the core. Subnetworks are then utilized to further understand the supporting role of more peripheral nodes with respect to the overall function in the network. A recently proposed framework for subnetwork definition is based on the network dependency index (NDI), a measure of a node's importance based on its contribution to overall efficiency in the network, and the derived subnetworks, or Tiers, have been shown to be largely stable across ages in structural networks. Here, we extend the NDI framework to test its efficacy against a number of experimental conditions. We first not only demonstrated NDI's feasibility on resting-state functional MRI data, but also its stability irrespective of the group connectome on which NDI was determined for various edge thresholds. Secondly, by comparing network theory measures of transitivity and efficiency, significant group differences were identified in NDI Tiers of greatest importance. This demonstrates the efficacy of utilizing NDI stratified subnetworks, which can help to improve our understanding of diseases and how they affect overall brain connectivity.
[ { "created": "Sat, 24 Aug 2019 09:44:39 GMT", "version": "v1" }, { "created": "Wed, 25 Sep 2019 09:06:06 GMT", "version": "v2" } ]
2019-09-26
[ [ "Chung", "Ai Wern", "" ], [ "Schirmer", "Markus D.", "" ] ]
Autism spectrum disorder (ASD) is a neurodevelopmental condition impacting high-level cognitive processing and social behavior. Recognizing the distributed nature of brain function, neuroscientists are exploiting the connectome to aid with the characterization of this complex disease. The human connectome has demonstrated the brain to be a highly organized system with a centralized core vital for effective function. As such, many have used this topological principle to not only assess core regions, but have stratified the remaining graph into subnetworks depending on their relation to the core. Subnetworks are then utilized to further understand the supporting role of more peripheral nodes with respect to the overall function in the network. A recently proposed framework for subnetwork definition is based on the network dependency index (NDI), a measure of a node's importance based on its contribution to overall efficiency in the network, and the derived subnetworks, or Tiers, have been shown to be largely stable across ages in structural networks. Here, we extend the NDI framework to test its efficacy against a number of experimental conditions. We first not only demonstrated NDI's feasibility on resting-state functional MRI data, but also its stability irrespective of the group connectome on which NDI was determined for various edge thresholds. Secondly, by comparing network theory measures of transitivity and efficiency, significant group differences were identified in NDI Tiers of greatest importance. This demonstrates the efficacy of utilizing NDI stratified subnetworks, which can help to improve our understanding of diseases and how they affect overall brain connectivity.
1402.6367
Steven Frank
Steven A. Frank
Microbial metabolism: optimal control of uptake versus synthesis
null
PeerJ 2:e267
10.7717/peerj.267
null
q-bio.PE q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Microbes require several complex organic molecules for growth. A species may obtain a required factor by taking up molecules released by other species or by synthesizing the molecule. The patterns of uptake and synthesis set a flow of resources through the multiple species that create a microbial community. This article analyzes a simple mathematical model of the tradeoff between uptake and synthesis. Key factors include the influx rate from external sources relative to the outflux rate, the rate of internal decay within cells, and the cost of synthesis. Aspects of demography also matter, such as cellular birth and death rates, the expected time course of a local resource flow, and the associated lifespan of the local population. Spatial patterns of genetic variability and differentiation between populations may also strongly influence the evolution of metabolic regulatory controls of individual species and thus the structuring of microbial communities. The widespread use of optimality approaches in recent work on microbial metabolism has ignored demography and genetic structure.
[ { "created": "Tue, 25 Feb 2014 23:02:18 GMT", "version": "v1" } ]
2014-02-27
[ [ "Frank", "Steven A.", "" ] ]
Microbes require several complex organic molecules for growth. A species may obtain a required factor by taking up molecules released by other species or by synthesizing the molecule. The patterns of uptake and synthesis set a flow of resources through the multiple species that create a microbial community. This article analyzes a simple mathematical model of the tradeoff between uptake and synthesis. Key factors include the influx rate from external sources relative to the outflux rate, the rate of internal decay within cells, and the cost of synthesis. Aspects of demography also matter, such as cellular birth and death rates, the expected time course of a local resource flow, and the associated lifespan of the local population. Spatial patterns of genetic variability and differentiation between populations may also strongly influence the evolution of metabolic regulatory controls of individual species and thus the structuring of microbial communities. The widespread use of optimality approaches in recent work on microbial metabolism has ignored demography and genetic structure.
2404.07150
Cristiano Capone
Cristiano Capone, Luca Falorsi, Maurizio Mattia
Adaptive behavior with stable synapses
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Behavioral changes in animals and humans, as a consequence of an error or a verbal instruction, can be extremely rapid. Improvements in behavioral performance are usually associated, in machine learning and reinforcement learning, with synaptic plasticity, and, in general, with changes and optimization of network parameters. However, such rapid changes are not coherent with the timescales of synaptic plasticity, suggesting that the mechanism responsible for that could be a dynamical network reconfiguration. In the last few years, similar capabilities have been observed in transformers, foundational architectures in the field of machine learning that are widely used in applications such as natural language and image processing. Transformers are capable of in-context learning, the ability to adapt and acquire new information dynamically within the context of the task or environment they are currently engaged in, without the need for significant changes to their underlying parameters. Building upon the notion of something unique within transformers enabling the emergence of this property, we claim that it could also be supported by input segregation and dendritic amplification, features extensively observed in biological networks. We propose an architecture composed of gain-modulated recurrent networks that excels at in-context learning, showing abilities inaccessible to standard networks.
[ { "created": "Wed, 10 Apr 2024 16:33:55 GMT", "version": "v1" }, { "created": "Tue, 28 May 2024 09:48:40 GMT", "version": "v2" } ]
2024-05-29
[ [ "Capone", "Cristiano", "" ], [ "Falorsi", "Luca", "" ], [ "Mattia", "Maurizio", "" ] ]
Behavioral changes in animals and humans, as a consequence of an error or a verbal instruction, can be extremely rapid. Improvements in behavioral performance are usually associated, in machine learning and reinforcement learning, with synaptic plasticity, and, in general, with changes and optimization of network parameters. However, such rapid changes are not coherent with the timescales of synaptic plasticity, suggesting that the mechanism responsible for that could be a dynamical network reconfiguration. In the last few years, similar capabilities have been observed in transformers, foundational architectures in the field of machine learning that are widely used in applications such as natural language and image processing. Transformers are capable of in-context learning, the ability to adapt and acquire new information dynamically within the context of the task or environment they are currently engaged in, without the need for significant changes to their underlying parameters. Building upon the notion of something unique within transformers enabling the emergence of this property, we claim that it could also be supported by input segregation and dendritic amplification, features extensively observed in biological networks. We propose an architecture composed of gain-modulated recurrent networks that excels at in-context learning, showing abilities inaccessible to standard networks.
1811.05001
H\'el\`ene Delano\"e-Ayari Ph.D.
S. Tlili, M. Durande, C. Gay, B. Ladoux, F. Graner and H. Delano\"e-Ayari
A migrating epithelial monolayer flows like a Maxwell viscoelastic liquid
17 pages, 15 figures
Phys. Rev. Lett. 125, 088102 (2020)
10.1103/PhysRevLett.125.088102
null
q-bio.TO physics.bio-ph q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We perform a bidimensional Stokes experiment in an active cellular material: an autonomously migrating monolayer of Madin-Darby Canine Kidney (MDCK) epithelial cells flows around a circular obstacle within a long and narrow channel, involving an interplay between cell shape changes and neighbour rearrangements. Based on image analysis of tissue flow and coarse-grained cell anisotropy, we determine the tissue strain rate, cell deformation and rearrangement rate fields, which are spatially heterogeneous. We find that the cell deformation and rearrangement rate fields correlate strongly, which is compatible with a Maxwell viscoelastic liquid behaviour (and not with a Kelvin-Voigt viscoelastic solid behaviour). The value of the associated relaxation time is measured as $\tau = 70 \pm 15$~min, is observed to be independent of obstacle size and division rate, and is increased by inhibiting myosin activity. In this experiment, the monolayer behaves as a flowing material with a Weissenberg number close to one which shows that both elastic and viscous effects can have comparable contributions in the process of collective cell migration.
[ { "created": "Mon, 12 Nov 2018 21:01:50 GMT", "version": "v1" }, { "created": "Mon, 27 Apr 2020 15:09:02 GMT", "version": "v2" } ]
2020-08-26
[ [ "Tlili", "S.", "" ], [ "Durande", "M.", "" ], [ "Gay", "C.", "" ], [ "Ladoux", "B.", "" ], [ "Graner", "F.", "" ], [ "Delanoë-Ayari", "H.", "" ] ]
We perform a bidimensional Stokes experiment in an active cellular material: an autonomously migrating monolayer of Madin-Darby Canine Kidney (MDCK) epithelial cells flows around a circular obstacle within a long and narrow channel, involving an interplay between cell shape changes and neighbour rearrangements. Based on image analysis of tissue flow and coarse-grained cell anisotropy, we determine the tissue strain rate, cell deformation and rearrangement rate fields, which are spatially heterogeneous. We find that the cell deformation and rearrangement rate fields correlate strongly, which is compatible with a Maxwell viscoelastic liquid behaviour (and not with a Kelvin-Voigt viscoelastic solid behaviour). The value of the associated relaxation time is measured as $\tau = 70 \pm 15$~min, is observed to be independent of obstacle size and division rate, and is increased by inhibiting myosin activity. In this experiment, the monolayer behaves as a flowing material with a Weissenberg number close to one which shows that both elastic and viscous effects can have comparable contributions in the process of collective cell migration.
0809.0285
Dietrich Stauffer
J.S. Sa Martins, D. Stauffer, P.M.C. de Oliveira, and S. Moss de Oliveira
Simulated self-organisation of death by inherited mutations
13 pages including all figures; draft, superfluous figures removed
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An agent-based computer simulation of death by inheritable mutations in a changing environment shows a maximal population, or avoids extinction, at some intermediate mutation rate of the individuals. Thus death seems needed to allow for evolution of the fittest, as required by a changing environment.
[ { "created": "Mon, 1 Sep 2008 17:32:17 GMT", "version": "v1" }, { "created": "Tue, 2 Sep 2008 06:25:07 GMT", "version": "v2" } ]
2008-09-02
[ [ "Martins", "J. S. Sa", "" ], [ "Stauffer", "D.", "" ], [ "de Oliveira", "P. M. C.", "" ], [ "de Oliveira", "S. Moss", "" ] ]
An agent-based computer simulation of death by inheritable mutations in a changing environment shows a maximal population, or avoids extinction, at some intermediate mutation rate of the individuals. Thus death seems needed to allow for evolution of the fittest, as required by a changing environment.
1912.07709
Alex Matlock
Alex Matlock, Anne Sentenac, Patrick C. Chaumet, Ji Yi, and Lei Tian
Inverse scattering for reflection intensity phase microscopy
25 pages, 5 figures
null
null
null
q-bio.QM eess.IV physics.comp-ph physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reflection phase imaging provides label-free, high-resolution characterization of biological samples, typically using interferometric-based techniques. Here, we investigate reflection phase microscopy from intensity-only measurements under diverse illumination. We evaluate the forward and inverse scattering model based on the first Born approximation for imaging scattering objects above a glass slide. Under this design, the measured field combines linear forward-scattering and height-dependent nonlinear back-scattering from the object that complicates object phase recovery. Using only the forward-scattering, we derive a linear inverse scattering model and evaluate this model's validity range in simulation and experiment using a standard reflection microscope modified with a programmable light source. Our method provides enhanced contrast of thin, weakly scattering samples that complement transmission techniques. This model provides a promising development for creating simplified intensity-based reflection quantitative phase imaging systems easily adoptable for biological research.
[ { "created": "Mon, 16 Dec 2019 21:22:23 GMT", "version": "v1" } ]
2019-12-18
[ [ "Matlock", "Alex", "" ], [ "Sentenac", "Anne", "" ], [ "Chaumet", "Patrick C.", "" ], [ "Yi", "Ji", "" ], [ "Tian", "Lei", "" ] ]
Reflection phase imaging provides label-free, high-resolution characterization of biological samples, typically using interferometric-based techniques. Here, we investigate reflection phase microscopy from intensity-only measurements under diverse illumination. We evaluate the forward and inverse scattering model based on the first Born approximation for imaging scattering objects above a glass slide. Under this design, the measured field combines linear forward-scattering and height-dependent nonlinear back-scattering from the object that complicates object phase recovery. Using only the forward-scattering, we derive a linear inverse scattering model and evaluate this model's validity range in simulation and experiment using a standard reflection microscope modified with a programmable light source. Our method provides enhanced contrast of thin, weakly scattering samples that complement transmission techniques. This model provides a promising development for creating simplified intensity-based reflection quantitative phase imaging systems easily adoptable for biological research.
2306.13449
Amos Maritan
Sandro Azaele and Amos Maritan
Large system population dynamics with non-Gaussian interactions
null
null
null
null
q-bio.PE cond-mat.dis-nn cond-mat.stat-mech
http://creativecommons.org/licenses/by-nc-nd/4.0/
We investigate the Generalized Lotka-Volterra (GLV) equations, a central model in theoretical ecology, where species interactions are assumed to be fixed over time and heterogeneous (quenched noise). Recent studies have suggested that the stability properties and abundance distributions of large disordered GLV systems depend, in the simplest scenario, solely on the mean and variance of the distribution of species interactions. However, empirical communities deviate from this level of universality. In this article, we present a generalized version of the dynamical mean field theory for non-Gaussian interactions that can be applied to various models, including the GLV equations. Our results show that the generalized mean field equations have solutions which depend on all cumulants of the distribution of species interactions, leading to a breakdown of universality. We leverage on this informative breakdown to extract microscopic interaction details from the macroscopic distribution of densities which are in agreement with empirical data. Specifically, in the case of sparse interactions, which we analytically investigate, we establish a simple relationship between the distribution of interactions and the distribution of species population densities.
[ { "created": "Fri, 23 Jun 2023 11:36:21 GMT", "version": "v1" } ]
2023-06-26
[ [ "Azaele", "Sandro", "" ], [ "Maritan", "Amos", "" ] ]
We investigate the Generalized Lotka-Volterra (GLV) equations, a central model in theoretical ecology, where species interactions are assumed to be fixed over time and heterogeneous (quenched noise). Recent studies have suggested that the stability properties and abundance distributions of large disordered GLV systems depend, in the simplest scenario, solely on the mean and variance of the distribution of species interactions. However, empirical communities deviate from this level of universality. In this article, we present a generalized version of the dynamical mean field theory for non-Gaussian interactions that can be applied to various models, including the GLV equations. Our results show that the generalized mean field equations have solutions which depend on all cumulants of the distribution of species interactions, leading to a breakdown of universality. We leverage on this informative breakdown to extract microscopic interaction details from the macroscopic distribution of densities which are in agreement with empirical data. Specifically, in the case of sparse interactions, which we analytically investigate, we establish a simple relationship between the distribution of interactions and the distribution of species population densities.
1106.4317
Dimitris Vavoulis
Dimitrios V. Vavoulis, Volko A. Straub, John A.D. Aston, Jianfeng Feng
A self-organizing state-space-model approach for parameter estimation in Hodgkin-Huxley-type models of single neurons
null
Vavoulis DV, Straub VA, Aston JAD, Feng J (2012) A Self-Organizing State-Space-Model Approach for Parameter Estimation in Hodgkin-Huxley-Type Models of Single Neurons. PLoS Comput Biol 8(3): e1002401
10.1371/journal.pcbi.1002401
null
q-bio.QM q-bio.NC
http://creativecommons.org/licenses/by/3.0/
Traditionally, parameter estimation in biophysical neuron and neural network models usually adopts a global search algorithm, often combined with a local search method in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied on a number of Hodgkin-Huxley models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents and measurement noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function and it was straightforward to apply on compartmental models and multiple data sets. Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a potentially useful tool in the construction of biophysical neuron models.
[ { "created": "Tue, 21 Jun 2011 20:04:52 GMT", "version": "v1" }, { "created": "Sat, 29 Oct 2011 23:18:59 GMT", "version": "v2" } ]
2012-03-05
[ [ "Vavoulis", "Dimitrios V.", "" ], [ "Straub", "Volko A.", "" ], [ "Aston", "John A. D.", "" ], [ "Feng", "Jianfeng", "" ] ]
Traditionally, parameter estimation in biophysical neuron and neural network models usually adopts a global search algorithm, often combined with a local search method in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied on a number of Hodgkin-Huxley models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents and measurement noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function and it was straightforward to apply on compartmental models and multiple data sets. Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a potentially useful tool in the construction of biophysical neuron models.
2402.06928
Richard Tj\"ornhammar
Richard Tj\"ornhammar
Happy and Immersive Clustering Segmentations of Biological Co-Expression Patterns
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
In this work, we present an approach for evaluating segmentation strategies and solving the biological problem of creating robust interpretable maps of biological data by employing Ward's agglomerative hierarchical clustering applied to coexpression coordinates to deduce a faithful representation of the input. We adopt and quantify two analyte-centric metrics named happiness and immersiveness, one for describing the suitability of a single analyte concerning the segmentation as well as a second metric for describing how well the segmentation catches the underlying data variation. We show that these two functions drive aggregation and segregation of segmentation respectively and can produce trustworthy segmentation solutions. We discover that the immersiveness metric exhibits higher-order phase transition properties in its derivative to cluster numbers. Finally, we find that the cluster representations and label annotations, in the case with clusters of high immersiveness, correspond to compositionally inferred labels with the highest specificity. The interconnectedness mirrors the potential relationships between cluster representations, label annotations, and inferred labels, emphasizing the intricate nature of biology and the representation of the specific expressions of gene products.
[ { "created": "Sat, 10 Feb 2024 11:35:03 GMT", "version": "v1" } ]
2024-02-13
[ [ "Tjörnhammar", "Richard", "" ] ]
In this work, we present an approach for evaluating segmentation strategies and solving the biological problem of creating robust interpretable maps of biological data by employing Ward's agglomerative hierarchical clustering applied to coexpression coordinates to deduce a faithful representation of the input. We adopt and quantify two analyte-centric metrics named happiness and immersiveness, one for describing the suitability of a single analyte concerning the segmentation as well as a second metric for describing how well the segmentation catches the underlying data variation. We show that these two functions drive aggregation and segregation of segmentation respectively and can produce trustworthy segmentation solutions. We discover that the immersiveness metric exhibits higher-order phase transition properties in its derivative to cluster numbers. Finally, we find that the cluster representations and label annotations, in the case with clusters of high immersiveness, correspond to compositionally inferred labels with the highest specificity. The interconnectedness mirrors the potential relationships between cluster representations, label annotations, and inferred labels, emphasizing the intricate nature of biology and the representation of the specific expressions of gene products.