Dataset columns (name — type: min to max, or number of classes):

id — stringlengths: 9 to 13
submitter — stringlengths: 4 to 48
authors — stringlengths: 4 to 9.62k
title — stringlengths: 4 to 343
comments — stringlengths: 2 to 480
journal-ref — stringlengths: 9 to 309
doi — stringlengths: 12 to 138
report-no — stringclasses: 277 values
categories — stringlengths: 8 to 87
license — stringclasses: 9 values
orig_abstract — stringlengths: 27 to 3.76k
versions — listlengths: 1 to 15
update_date — stringlengths: 10 to 10
authors_parsed — listlengths: 1 to 147
abstract — stringlengths: 24 to 3.75k
2401.03595
Yeeren Low
Yeeren I. Low
Quantifying T cell morphodynamics and migration in 3D collagen matrices
46 pages, 22 figures, 17 tables; clarifications and corrections
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
T cells undergo large shape changes (morphodynamics) when migrating. While progress has been made elucidating the molecular basis of cell migration, statistical characterization of morphodynamics and migration has been limited, particularly in physiologically realistic 3D environments. A previous study (H. Cavanagh et al., J. R. Soc. Interface 19: 20220081) found discrete states of dynamics as well as periodic oscillations of shape. However, we show that these results are due to artifacts of the analysis methods. Here, we present a revised analysis of the data, applying a method based on an underdamped Langevin equation. We find that different shape modes have different correlation times. We also find novel non-Gaussian effects. This study provides a framework in which quantitative comparisons of cell morphodynamics and migration can be made, e.g. between different biological conditions or mechanistic models.
[ { "created": "Sun, 7 Jan 2024 22:31:44 GMT", "version": "v1" }, { "created": "Wed, 31 Jan 2024 15:34:46 GMT", "version": "v2" }, { "created": "Mon, 3 Jun 2024 12:52:54 GMT", "version": "v3" }, { "created": "Fri, 28 Jun 2024 23:11:11 GMT", "version": "v4" }, { "created": "Tue, 16 Jul 2024 23:07:37 GMT", "version": "v5" }, { "created": "Thu, 8 Aug 2024 00:01:47 GMT", "version": "v6" } ]
2024-08-09
[ [ "Low", "Yeeren I.", "" ] ]
T cells undergo large shape changes (morphodynamics) when migrating. While progress has been made elucidating the molecular basis of cell migration, statistical characterization of morphodynamics and migration has been limited, particularly in physiologically realistic 3D environments. A previous study (H. Cavanagh et al., J. R. Soc. Interface 19: 20220081) found discrete states of dynamics as well as periodic oscillations of shape. However, we show that these results are due to artifacts of the analysis methods. Here, we present a revised analysis of the data, applying a method based on an underdamped Langevin equation. We find that different shape modes have different correlation times. We also find novel non-Gaussian effects. This study provides a framework in which quantitative comparisons of cell morphodynamics and migration can be made, e.g. between different biological conditions or mechanistic models.
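The analysis method here is "based on an underdamped Langevin equation"; a minimal sketch of what simulating such an equation looks like (Euler-Maruyama, with a harmonic restoring force and all parameter values invented for illustration, not the paper's fitted model):

```python
import numpy as np

# Euler-Maruyama integration of an underdamped Langevin equation,
#   m dv = -(gamma*v + k*x) dt + sigma dW,
# as an illustration of the model class; parameters are assumptions.
rng = np.random.default_rng(0)
m, gamma, k, sigma = 1.0, 0.5, 1.0, 0.3
dt, n_steps = 0.01, 10_000
x, v = 0.0, 0.0
xs = np.empty(n_steps)
for i in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))   # Wiener increment, variance dt
    v += (-(gamma * v + k * x) / m) * dt + (sigma / m) * dW
    x += v * dt
    xs[i] = x
```

The autocorrelation of `xs` would decay with a correlation time set by `gamma` and `k`, which is the kind of per-mode quantity the abstract reports.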
2007.07643
Peter Taylor
Tom W. Owen, Jane de Tisi, Sjoerd B. Vos, Gavin P. Winston, John S. Duncan, Yujiang Wang, Peter N. Taylor
Multivariate white matter alterations are associated with epilepsy duration
33 pages, 6 main figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Previous studies investigating associations between white matter alterations and duration of temporal lobe epilepsy (TLE) have shown differing results, and were typically limited to univariate analyses of tracts in isolation. In this study we apply a multivariate measure (the Mahalanobis distance) to capture the distinct ways white matter may differ in individual patients, and relate this to epilepsy duration. Diffusion MRI, from a cohort of 94 subjects (28 healthy controls, 33 left-TLE and 33 right-TLE), was used to assess associations between tract fractional anisotropy (FA) and epilepsy duration. Using ten white matter tracts, we analysed associations using traditional univariate analyses (z-scores) and a complementary multivariate approach (Mahalanobis distance), incorporating multiple white matter tracts into a single unified analysis. In patients with right-TLE, FA was not significantly associated with epilepsy duration for any tract studied in isolation. In patients with left-TLE, the FA of two limbic tracts (ipsilateral fornix, contralateral cingulum gyrus) was significantly negatively associated with epilepsy duration (Bonferroni corrected p<0.05). Using a multivariate approach we found significant ipsilateral positive associations with duration in both left- and right-TLE cohorts (left-TLE: Spearman's rho=0.487, right-TLE: Spearman's rho=0.422). Extrapolating our multivariate results to a duration of zero (i.e. at onset), we found no significant difference between patients and controls. Associations using the multivariate approach were more robust than univariate methods. The multivariate distance measure provides non-overlapping and more robust results than traditional univariate analyses. Future studies should consider adopting both frameworks into their analysis in order to ascertain a more complete understanding of epilepsy progression, regardless of laterality.
[ { "created": "Wed, 15 Jul 2020 12:00:15 GMT", "version": "v1" } ]
2020-07-16
[ [ "Owen", "Tom W.", "" ], [ "de Tisi", "Jane", "" ], [ "Vos", "Sjoerd B.", "" ], [ "Winston", "Gavin P.", "" ], [ "Duncan", "John S.", "" ], [ "Wang", "Yujiang", "" ], [ "Taylor", "Peter N.", "" ] ]
Previous studies investigating associations between white matter alterations and duration of temporal lobe epilepsy (TLE) have shown differing results, and were typically limited to univariate analyses of tracts in isolation. In this study we apply a multivariate measure (the Mahalanobis distance) to capture the distinct ways white matter may differ in individual patients, and relate this to epilepsy duration. Diffusion MRI, from a cohort of 94 subjects (28 healthy controls, 33 left-TLE and 33 right-TLE), was used to assess associations between tract fractional anisotropy (FA) and epilepsy duration. Using ten white matter tracts, we analysed associations using traditional univariate analyses (z-scores) and a complementary multivariate approach (Mahalanobis distance), incorporating multiple white matter tracts into a single unified analysis. In patients with right-TLE, FA was not significantly associated with epilepsy duration for any tract studied in isolation. In patients with left-TLE, the FA of two limbic tracts (ipsilateral fornix, contralateral cingulum gyrus) was significantly negatively associated with epilepsy duration (Bonferroni corrected p<0.05). Using a multivariate approach we found significant ipsilateral positive associations with duration in both left- and right-TLE cohorts (left-TLE: Spearman's rho=0.487, right-TLE: Spearman's rho=0.422). Extrapolating our multivariate results to a duration of zero (i.e. at onset), we found no significant difference between patients and controls. Associations using the multivariate approach were more robust than univariate methods. The multivariate distance measure provides non-overlapping and more robust results than traditional univariate analyses. Future studies should consider adopting both frameworks into their analysis in order to ascertain a more complete understanding of epilepsy progression, regardless of laterality.
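The multivariate measure used here, the Mahalanobis distance of a patient's tract-FA profile from the control distribution, can be sketched as follows. The cohort sizes mirror the abstract, but all FA values below are synthetic:

```python
import numpy as np

# Mahalanobis distance of one patient's ten-tract FA profile from the
# control distribution (synthetic data; real tract FA values differ).
rng = np.random.default_rng(1)
controls = rng.normal(0.45, 0.03, size=(28, 10))  # 28 controls x 10 tracts
patient = rng.normal(0.40, 0.03, size=10)         # one patient's FA per tract

mu = controls.mean(axis=0)
cov = np.cov(controls, rowvar=False)              # 10x10 tract covariance
diff = patient - mu
mahal = float(np.sqrt(diff @ np.linalg.solve(cov, diff)))
```

Unlike per-tract z-scores, this single number accounts for correlations between tracts, which is why the abstract calls it a unified analysis.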
1304.6623
Flora Bacelar Dra.
Flora S. Bacelar, Justin M. Calabrese, Em\'ilio Hern\'andez-Garc\'ia
Exploring the tug of war between positive and negative interactions among savanna trees: Competition, dispersal, and protection from fire
null
null
10.1016/j.ecocom.2013.11.007
null
q-bio.PE nlin.CG physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Savannas are characterized by a discontinuous tree layer superimposed on a continuous layer of grass. Identifying the mechanisms that facilitate this tree-grass coexistence has remained a persistent challenge in ecology and is known as the "savanna problem". In this work, we propose a model that combines a previous savanna model (Calabrese et al., 2010), which includes competitive interactions among trees and dispersal, with the Drossel-Schwabl forest fire model, thereby representing fire in a spatially explicit manner. The model is used to explore how the pattern of fire-spread, coupled with an explicit, fire-vulnerable tree life stage, affects tree density and spatial pattern. Tree density depends strongly on both fire frequency and tree-tree competition, although the fire frequency, which induces indirect interactions between trees and between trees and grass, appears to be the crucial factor controlling the tree-extinction transition in which the savanna becomes grassland. Depending on parameters, adult trees may arrange in different regular or clumped patterns, the latter of two different types (compact or open). Cluster-size distributions have fat tails, but clean power-law behavior is only attained in specific cases.
[ { "created": "Wed, 24 Apr 2013 15:16:33 GMT", "version": "v1" } ]
2014-01-30
[ [ "Bacelar", "Flora S.", "" ], [ "Calabrese", "Justin M.", "" ], [ "Hernández-García", "Emílio", "" ] ]
Savannas are characterized by a discontinuous tree layer superimposed on a continuous layer of grass. Identifying the mechanisms that facilitate this tree-grass coexistence has remained a persistent challenge in ecology and is known as the "savanna problem". In this work, we propose a model that combines a previous savanna model (Calabrese et al., 2010), which includes competitive interactions among trees and dispersal, with the Drossel-Schwabl forest fire model, thereby representing fire in a spatially explicit manner. The model is used to explore how the pattern of fire-spread, coupled with an explicit, fire-vulnerable tree life stage, affects tree density and spatial pattern. Tree density depends strongly on both fire frequency and tree-tree competition, although the fire frequency, which induces indirect interactions between trees and between trees and grass, appears to be the crucial factor controlling the tree-extinction transition in which the savanna becomes grassland. Depending on parameters, adult trees may arrange in different regular or clumped patterns, the latter of two different types (compact or open). Cluster-size distributions have fat tails, but clean power-law behavior is only attained in specific cases.
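The Drossel-Schwabl component referenced here can be sketched as a grid automaton: empty sites grow trees with probability p, and lightning strikes ignite a tree and (in the common instantaneous-burn simplification) consume its whole connected cluster. This is a toy stand-in, not the paper's coupled savanna model, and p and f are invented values:

```python
import numpy as np

# Minimal Drossel-Schwabl forest-fire step (0 = empty, 1 = tree) with
# instantaneous cluster burning; p, f, and grid size are toy choices.
rng = np.random.default_rng(3)

def step(grid, p=0.05, f=0.001):
    grid = grid.copy()
    # growth: empty sites become trees with probability p
    grid[(grid == 0) & (rng.random(grid.shape) < p)] = 1
    # lightning: each tree ignites with probability f, burning its cluster
    for site in np.argwhere((grid == 1) & (rng.random(grid.shape) < f)):
        stack = [tuple(site)]
        while stack:
            i, j = stack.pop()
            if grid[i, j] != 1:
                continue
            grid[i, j] = 0                      # tree burns down
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    stack.append((ni, nj))      # fire spreads to neighbours
    return grid

g = np.zeros((64, 64), dtype=int)
for _ in range(100):
    g = step(g)
```

The cluster-size distribution of `g` after many steps is what exhibits the fat tails discussed in the abstract.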
1706.05452
Stuart Hagler
Stuart Hagler
A General Optimal Control Model of Human Movement Patterns I: Walking Gait
30 pages, 2 figures
null
null
null
q-bio.QM cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The biomechanics of the human body gives subjects a high degree of freedom in how they can execute movement. Nevertheless, subjects exhibit regularity in their movement patterns. One way to account for this regularity is to suppose that subjects select movement trajectories that are optimal in some sense. We adopt the principle that human movements are optimal and develop a general model for human movement patterns that uses variational methods in the form of optimal control theory to calculate movement trajectories of the body. We find that in this approach a constant of the motion arises from the model, playing a role in the optimal control model analogous to the role that mechanical energy plays in classical physics. We illustrate how this approach works in practice by using it to develop a model of walking gait, making all the derivations and calculations in detail. We finally show that this optimal control model of walking gait recovers, in an appropriate limit, an existing model of walking gait which has been shown to provide good estimates of many observed characteristics of walking gait.
[ { "created": "Fri, 16 Jun 2017 23:05:13 GMT", "version": "v1" }, { "created": "Tue, 20 Jun 2017 00:58:20 GMT", "version": "v2" }, { "created": "Thu, 22 Jun 2017 15:04:34 GMT", "version": "v3" }, { "created": "Sun, 9 Dec 2018 20:31:01 GMT", "version": "v4" } ]
2018-12-11
[ [ "Hagler", "Stuart", "" ] ]
The biomechanics of the human body gives subjects a high degree of freedom in how they can execute movement. Nevertheless, subjects exhibit regularity in their movement patterns. One way to account for this regularity is to suppose that subjects select movement trajectories that are optimal in some sense. We adopt the principle that human movements are optimal and develop a general model for human movement patterns that uses variational methods in the form of optimal control theory to calculate movement trajectories of the body. We find that in this approach a constant of the motion arises from the model, playing a role in the optimal control model analogous to the role that mechanical energy plays in classical physics. We illustrate how this approach works in practice by using it to develop a model of walking gait, making all the derivations and calculations in detail. We finally show that this optimal control model of walking gait recovers, in an appropriate limit, an existing model of walking gait which has been shown to provide good estimates of many observed characteristics of walking gait.
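The "constant of the motion" analogous to mechanical energy is presumably an instance of a standard fact of optimal control (the paper's specific gait model may differ): for autonomous problems, the Pontryagin Hamiltonian is conserved along optimal trajectories.

```latex
% Autonomous problem: minimize J = \int_0^T L(x,u)\,dt  s.t.  \dot{x} = f(x,u).
% Pontryagin Hamiltonian:
H(x, p, u) = p^{\top} f(x, u) - L(x, u)
% Along an optimal trajectory, \dot{x} = H_p, \dot{p} = -H_x, H_u = 0, so
\frac{dH}{dt} = H_x \dot{x} + H_p \dot{p} + H_u \dot{u}
             = H_x H_p + H_p (-H_x) + 0 = 0 .
```

Because neither L nor f depends explicitly on time, H plays the same conserved-quantity role that energy plays for time-independent Lagrangians in classical mechanics.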
2111.15080
Tong Chen
Tong Chen, Sheng Wang
SurvODE: Extrapolating Gene Expression Distribution for Early Cancer Identification
12 pages, 6 figures
null
null
null
q-bio.GN cs.AI cs.LG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the increasingly available large-scale cancer genomics datasets, machine learning approaches have played an important role in revealing novel insights into cancer development. Existing methods have shown encouraging performance in identifying genes that are predictive for cancer survival, but are still limited in modeling the distribution over genes. Here, we proposed a novel method that can simulate the gene expression distribution at any given time point, including those that are out of the range of the observed time points. In order to model the irregular time series where each patient is one observation, we integrated a neural ordinary differential equation (neural ODE) with Cox regression into our framework. We evaluated our method on eight cancer types on TCGA and observed a substantial improvement over existing approaches. Our visualization results and further analysis indicate how our method can be used to simulate expression at the early cancer stage, offering the possibility for early cancer identification.
[ { "created": "Tue, 30 Nov 2021 02:49:11 GMT", "version": "v1" } ]
2021-12-01
[ [ "Chen", "Tong", "" ], [ "Wang", "Sheng", "" ] ]
With the increasingly available large-scale cancer genomics datasets, machine learning approaches have played an important role in revealing novel insights into cancer development. Existing methods have shown encouraging performance in identifying genes that are predictive for cancer survival, but are still limited in modeling the distribution over genes. Here, we proposed a novel method that can simulate the gene expression distribution at any given time point, including those that are out of the range of the observed time points. In order to model the irregular time series where each patient is one observation, we integrated a neural ordinary differential equation (neural ODE) with Cox regression into our framework. We evaluated our method on eight cancer types on TCGA and observed a substantial improvement over existing approaches. Our visualization results and further analysis indicate how our method can be used to simulate expression at the early cancer stage, offering the possibility for early cancer identification.
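The Cox regression component that the framework couples to the neural ODE maximizes a partial likelihood over risk sets; a minimal sketch with one covariate, no censoring, no tied event times, and invented data (the paper's multivariate, neural-ODE-parameterized version is more involved):

```python
import numpy as np

# Cox partial log-likelihood: for each event, the subject's log-hazard
# minus the log-sum of hazards over everyone still at risk.
times = np.array([2.0, 3.0, 5.0, 8.0])   # event times, sorted ascending
x = np.array([1.2, -0.4, 0.7, 0.1])      # one covariate per patient
beta = 0.5                               # assumed coefficient

loglik = 0.0
for i in range(len(times)):
    risk = x[i:]                         # patients still at risk at times[i]
    loglik += beta * x[i] - np.log(np.sum(np.exp(beta * risk)))
```

Fitting maximizes `loglik` over `beta`; in the paper's setup the covariates themselves are produced by the neural ODE at each patient's time point.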
1405.6172
Paulo Lucio
Paulo S. Lucio, Nicolas Degallier, Maria H. C. Spyrides, Cl\'audio M. S. e Silva, Julio C. B. da Silva, Helder J. F. da Silva, Geovane M\'aximo, Walter Junior, Michel Mesquita
The Dengue risk transmission during the FIFA 2014 World Cup
null
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dengue is a viral infection that can produce a severe fever and symptoms that may require hospitalization. It is transmitted between humans by the urban-adapted, day-biting Aedes mosquitoes and is therefore a particular problem in towns and cities. To explore this risk, we assessed the potential levels of exposure using a climate-driven model for dengue risk transmission in Brazil and records of its seasonal variation at the key sites. As with the weather, it is impractical to forecast the precise situation with regard to dengue in Brazil in 2014. One can, however, make informed guesses on the basis of averaged records of dengue in previous years. For the areas around the World Cup stadiums, these records show that the main dengue season will have passed before the World Cup is held in June and July. A risk nonetheless remains during these months in the Brazilian north and northeast; it is low but not negligible. In fact, the risk of an outbreak of dengue fever during the upcoming soccer World Cup in Brazil is not serious enough to warrant a high alert in the host cities, according to a reliable early warning system for the disease.
[ { "created": "Fri, 23 May 2014 18:45:05 GMT", "version": "v1" } ]
2014-05-26
[ [ "Lucio", "Paulo S.", "" ], [ "Degallier", "Nicolas", "" ], [ "Spyrides", "Maria H. C.", "" ], [ "Silva", "Cláudio M. S. e", "" ], [ "da Silva", "Julio C. B.", "" ], [ "da Silva", "Helder J. F.", "" ], [ "Máximo", "Geovane", "" ], [ "Junior", "Walter", "" ], [ "Mesquita", "Michel", "" ] ]
Dengue is a viral infection that can produce a severe fever and symptoms that may require hospitalization. It is transmitted between humans by the urban-adapted, day-biting Aedes mosquitoes and is therefore a particular problem in towns and cities. To explore this risk, we assessed the potential levels of exposure using a climate-driven model for dengue risk transmission in Brazil and records of its seasonal variation at the key sites. As with the weather, it is impractical to forecast the precise situation with regard to dengue in Brazil in 2014. One can, however, make informed guesses on the basis of averaged records of dengue in previous years. For the areas around the World Cup stadiums, these records show that the main dengue season will have passed before the World Cup is held in June and July. A risk nonetheless remains during these months in the Brazilian north and northeast; it is low but not negligible. In fact, the risk of an outbreak of dengue fever during the upcoming soccer World Cup in Brazil is not serious enough to warrant a high alert in the host cities, according to a reliable early warning system for the disease.
2110.05531
Kanupriya Goswami
Kanupriya Goswami, Arpana Sharma, Madhu Pruthi, Richa Gupta
Study of Drug Assimilation in Human System using Physics Informed Neural Networks
Incomplete research work with insufficient data and a lot of errors in results and language
null
null
null
q-bio.OT cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Differential equations play a pivotal role in the modern world, with applications ranging from science and engineering to ecology, economics, and finance, where they can be used to model many physical systems and processes. In this paper, we study two mathematical models of drug assimilation in the human system using Physics Informed Neural Networks (PINNs). In the first model, we consider the case of a single dose of drug in the human system, and in the second case, we consider a course of this drug taken at regular intervals. We have used the compartment diagram to model these cases. The resulting differential equations are solved using PINN, where we employ a feed-forward multilayer perceptron as a function approximator and the network parameters are tuned for minimum error. Further, the network is trained by finding the gradient of the error function with respect to the network parameters. We have employed DeepXDE, a python library for PINNs, to solve the simultaneous first-order differential equations describing the two models of drug assimilation. The results show a high degree of accuracy between the exact solution and the predicted solution, with the resulting error reaching 10^(-11) for the first model and 10^(-8) for the second model. This validates the use of PINN in solving any dynamical system.
[ { "created": "Fri, 8 Oct 2021 07:46:46 GMT", "version": "v1" }, { "created": "Thu, 15 Sep 2022 10:50:03 GMT", "version": "v2" } ]
2022-09-16
[ [ "Goswami", "Kanupriya", "" ], [ "Sharma", "Arpana", "" ], [ "Pruthi", "Madhu", "" ], [ "Gupta", "Richa", "" ] ]
Differential equations play a pivotal role in the modern world, with applications ranging from science and engineering to ecology, economics, and finance, where they can be used to model many physical systems and processes. In this paper, we study two mathematical models of drug assimilation in the human system using Physics Informed Neural Networks (PINNs). In the first model, we consider the case of a single dose of drug in the human system, and in the second case, we consider a course of this drug taken at regular intervals. We have used the compartment diagram to model these cases. The resulting differential equations are solved using PINN, where we employ a feed-forward multilayer perceptron as a function approximator and the network parameters are tuned for minimum error. Further, the network is trained by finding the gradient of the error function with respect to the network parameters. We have employed DeepXDE, a python library for PINNs, to solve the simultaneous first-order differential equations describing the two models of drug assimilation. The results show a high degree of accuracy between the exact solution and the predicted solution, with the resulting error reaching 10^(-11) for the first model and 10^(-8) for the second model. This validates the use of PINN in solving any dynamical system.
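The simplest single-dose compartment model reduces to dC/dt = -k*C with exact solution C(t) = C0*exp(-k*t), which is the kind of closed form the PINN's prediction is compared against. A sketch checking a forward-Euler solve against that closed form (k and C0 are assumed toy values, not the paper's):

```python
import numpy as np

# One-compartment elimination: dC/dt = -k*C, exact C(t) = C0*exp(-k*t).
k, C0, dt, T = 0.5, 1.0, 1e-3, 5.0
ts = np.arange(0.0, T, dt)
C = np.empty_like(ts)
C[0] = C0
for i in range(1, len(ts)):
    C[i] = C[i - 1] + dt * (-k * C[i - 1])   # forward-Euler step
exact = C0 * np.exp(-k * ts)
max_err = float(np.max(np.abs(C - exact)))
```

A PINN replaces the stepping scheme with a neural network C_theta(t) trained so that dC_theta/dt + k*C_theta vanishes at collocation points, which is how the very small errors quoted above are measured.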
1606.02372
Joaquin Rapela
Joaquin Rapela
Entrainment of traveling waves to rhythmic motor acts
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We hypothesized that rhythmic motor acts entrain neural oscillations in speech production brain regions. We tested this hypothesis in an experiment where a subject produced consonant-vowel (CV) syllables in a rhythmic fashion, while we performed ECoG recordings. Over the ventral sensorimotor cortex (vSMC) we detected significant concentration of phase across trials at the specific frequency of speech production. We also observed amplitude modulations. In addition we found significant coupling between the phase of brain oscillations at the frequency of speech production and their amplitude in the high-gamma range (i.e., phase-amplitude coupling, PAC). Furthermore, we saw that brain oscillations at the frequency of speech production organized as traveling waves (TWs), synchronized to the rhythm of speech production. It has been hypothesized that PAC is a mechanism to allow low-frequency oscillations to synchronize with high-frequency neural activity so that spiking occurs at behaviorally relevant times. If this hypothesis is true, when PAC coexists with TWs, we expect a specific organization of PAC curves. We observed this organization experimentally and verified that the peaks of high-gamma oscillations, and therefore spiking, occur at the same times across electrodes. Importantly, we observed that these spiking times were synchronized with the rhythm of speech production. To our knowledge, this is the first report of motor actions organizing (a) the phase coherence of low-frequency brain oscillations, (b) the coupling between the phase of these oscillations and the amplitude of high-frequency oscillations, and (c) TWs. It is also the first demonstration that TWs induce an organization of PAC so that spiking across spatial locations is synchronized to behaviorally relevant times.
[ { "created": "Wed, 8 Jun 2016 02:01:20 GMT", "version": "v1" } ]
2016-06-09
[ [ "Rapela", "Joaquin", "" ] ]
We hypothesized that rhythmic motor acts entrain neural oscillations in speech production brain regions. We tested this hypothesis in an experiment where a subject produced consonant-vowel (CV) syllables in a rhythmic fashion, while we performed ECoG recordings. Over the ventral sensorimotor cortex (vSMC) we detected significant concentration of phase across trials at the specific frequency of speech production. We also observed amplitude modulations. In addition we found significant coupling between the phase of brain oscillations at the frequency of speech production and their amplitude in the high-gamma range (i.e., phase-amplitude coupling, PAC). Furthermore, we saw that brain oscillations at the frequency of speech production organized as traveling waves (TWs), synchronized to the rhythm of speech production. It has been hypothesized that PAC is a mechanism to allow low-frequency oscillations to synchronize with high-frequency neural activity so that spiking occurs at behaviorally relevant times. If this hypothesis is true, when PAC coexists with TWs, we expect a specific organization of PAC curves. We observed this organization experimentally and verified that the peaks of high-gamma oscillations, and therefore spiking, occur at the same times across electrodes. Importantly, we observed that these spiking times were synchronized with the rhythm of speech production. To our knowledge, this is the first report of motor actions organizing (a) the phase coherence of low-frequency brain oscillations, (b) the coupling between the phase of these oscillations and the amplitude of high-frequency oscillations, and (c) TWs. It is also the first demonstration that TWs induce an organization of PAC so that spiking across spatial locations is synchronized to behaviorally relevant times.
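A PAC estimate of the kind described here bins high-frequency amplitude by low-frequency phase. This sketch uses fully synthetic signals and a simplified binning estimator as a stand-in for whatever estimator the study actually used:

```python
import numpy as np

# Phase-amplitude coupling sketch: a 40 Hz carrier whose amplitude is
# modulated by a 1 Hz "speech rhythm"; bin |signal| by the slow phase.
fs, T = 200, 10
t = np.arange(0, T, 1 / fs)
slow_phase = (2 * np.pi * 1.0 * t) % (2 * np.pi)   # 1 Hz phase, radians
amp = 1.0 - 0.5 * np.cos(slow_phase)               # amplitude tied to phase
fast = amp * np.sin(2 * np.pi * 40 * t)            # modulated 40 Hz carrier

bins = np.linspace(0, 2 * np.pi, 19)               # 18 phase bins
idx = np.digitize(slow_phase, bins) - 1
mean_amp = np.array([np.abs(fast)[idx == b].mean() for b in range(18)])
```

A flat `mean_amp` curve means no PAC; a peaked curve, as here, means the fast amplitude is concentrated at particular slow phases, and the phase of that peak is what shifts systematically across electrodes when a traveling wave is present.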
2101.03467
Tom Chou
Lucas B\"ottcher, Maria D'Orsogna, Tom Chou
Using excess deaths and testing statistics to improve estimates of COVID-19 mortalities
null
Eur. J. Epidemiol. 36, 545--558 (2021)
10.1007/s10654-021-00748-2
null
q-bio.QM physics.soc-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Factors such as non-uniform definitions of mortality, uncertainty in disease prevalence, and biased sampling complicate the quantification of fatality during an epidemic. Regardless of the employed fatality measure, the infected population and the number of infection-caused deaths need to be consistently estimated for comparing mortality across regions. We combine historical and current mortality data, a statistical testing model, and an SIR epidemic model, to improve estimation of mortality. We find that the average excess death across the entire US is 13$\%$ higher than the number of reported COVID-19 deaths. In some areas, such as New York City, the number of weekly deaths is about eight times higher than in previous years. Other countries such as Peru, Ecuador, Mexico, and Spain exhibit excess deaths significantly higher than their reported COVID-19 deaths. Conversely, we find negligible or negative excess deaths for part and all of 2020 for Denmark, Germany, and Norway.
[ { "created": "Sun, 10 Jan 2021 03:45:20 GMT", "version": "v1" } ]
2021-12-28
[ [ "Böttcher", "Lucas", "" ], [ "D'Orsogna", "Maria", "" ], [ "Chou", "Tom", "" ] ]
Factors such as non-uniform definitions of mortality, uncertainty in disease prevalence, and biased sampling complicate the quantification of fatality during an epidemic. Regardless of the employed fatality measure, the infected population and the number of infection-caused deaths need to be consistently estimated for comparing mortality across regions. We combine historical and current mortality data, a statistical testing model, and an SIR epidemic model, to improve estimation of mortality. We find that the average excess death across the entire US is 13$\%$ higher than the number of reported COVID-19 deaths. In some areas, such as New York City, the number of weekly deaths is about eight times higher than in previous years. Other countries such as Peru, Ecuador, Mexico, and Spain exhibit excess deaths significantly higher than their reported COVID-19 deaths. Conversely, we find negligible or negative excess deaths for part and all of 2020 for Denmark, Germany, and Norway.
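The excess-death quantity at the core of this analysis is observed deaths minus an expected baseline from prior years. A sketch with entirely invented numbers (the paper's baseline additionally models trends and reporting delays):

```python
# Excess deaths = observed deaths minus a prior-years baseline.
# All figures below are invented for illustration.
prior_years = [52000, 51500, 53000, 52500]   # annual deaths, earlier years
observed_2020 = 61000                        # deaths in the epidemic year
baseline = sum(prior_years) / len(prior_years)
excess = observed_2020 - baseline
reported_covid = 7500                        # officially attributed deaths
ratio = excess / reported_covid              # >1 suggests undercounting
```

A ratio above one, as in the US figure quoted in the abstract, indicates more excess deaths than officially attributed COVID-19 deaths; a negative excess, as for Denmark or Norway, indicates fewer deaths than the baseline predicts.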
1106.4199
Jean-Philippe Vert
Kevin Bleakley (INRIA Saclay - Ile de France), Jean-Philippe Vert (CBIO)
The group fused Lasso for multiple change-point detection
null
null
null
null
q-bio.QM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the group fused Lasso for detection of multiple change-points shared by a set of co-occurring one-dimensional signals. Change-points are detected by approximating the original signals with a constraint on the multidimensional total variation, leading to piecewise-constant approximations. Fast algorithms are proposed to solve the resulting optimization problems, either exactly or approximately. Conditions are given for consistency of both algorithms as the number of signals increases, and empirical evidence is provided to support the results on simulated and array comparative genomic hybridization data.
[ { "created": "Tue, 21 Jun 2011 13:34:43 GMT", "version": "v1" } ]
2011-06-23
[ [ "Bleakley", "Kevin", "", "INRIA Saclay - Ile de France" ], [ "Vert", "Jean-Philippe", "", "CBIO" ] ]
We present the group fused Lasso for detection of multiple change-points shared by a set of co-occurring one-dimensional signals. Change-points are detected by approximating the original signals with a constraint on the multidimensional total variation, leading to piecewise-constant approximations. Fast algorithms are proposed to solve the resulting optimization problems, either exactly or approximately. Conditions are given for consistency of both algorithms as the number of signals increases, and empirical evidence is provided to support the results on simulated and array comparative genomic hybridization data.
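The piecewise-constant approximation idea can be illustrated on the single-signal, single change-point special case: choose the split that minimizes within-segment squared error. This toy stand-in ignores the total-variation penalty and the multi-signal grouping that define the group fused Lasso proper:

```python
import numpy as np

# Toy change-point detection: exhaustively pick the split minimizing the
# two-segment least-squares cost. Synthetic signal with a jump at index 50.
rng = np.random.default_rng(2)
signal = np.concatenate([rng.normal(0, 0.1, 50), rng.normal(1, 0.1, 50)])

def best_split(y):
    costs = []
    for tau in range(1, len(y)):
        left, right = y[:tau], y[tau:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        costs.append(cost)
    return int(np.argmin(costs)) + 1          # index where the new segment starts

cp = best_split(signal)
```

The group fused Lasso replaces this exhaustive search with a convex TV-penalized problem over all signals jointly, so change-points shared across signals reinforce each other.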
1708.09416
Gianluca Gabbriellini
Gianluca Gabbriellini
Seasonal Effects on Honey Bee Population Dynamics: a Nonautonomous System of Difference Equations
16 pages, 8 figures
International Journal of Difference Equations, ISSN 0973-6069, Volume 12, Number 2, pp. 211-233 (2017)
null
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Honey bees play a role of unquestioned relevance in nature, and understanding the mechanisms affecting their population dynamics is of fundamental importance. As experimentally documented, the proper development of a colony is related to the nest temperature, which is maintained around its optimal value if the colony population is sufficiently large. The environmental temperature, the way in which it influences the nest temperature, and the colony population size are therefore variables closely linked to each other and deserve to be taken into account in a model that aims to describe the population dynamics. In the present study, as a first step, the continuous-time autonomous system proposed by Khoury, Myerscough and Barron (KMB) in 2011 was approximated by means of a nonstandard finite difference (NSFD) scheme in order to obtain a set of autonomous difference equations. Subsequently, with the aim of introducing seasonal effects, a nonautonomous version (NAKMB) was proposed and formulated in the discrete-time domain via an NSFD scheme, by introducing time-dependent formulations for the queen bee laying rate and the recruitment rate coefficients. By means of phase-plane analysis it was possible to deduce that, with an appropriate choice of the parameters, the NAKMB model admits both a limit cycle at nonzero population size and an equilibrium point marking colony collapse, depending on the initial population size.
[ { "created": "Wed, 30 Aug 2017 18:21:57 GMT", "version": "v1" } ]
2017-09-01
[ [ "Gabbriellini", "Gianluca", "" ] ]
Honey bees play a role of unquestioned relevance in nature, and understanding the mechanisms affecting their population dynamics is of fundamental importance. As experimentally documented, the proper development of a colony is related to the nest temperature, which is maintained around its optimal value if the colony population is sufficiently large. The environmental temperature, the way in which it influences the nest temperature, and the colony population size are therefore variables closely linked to each other and deserve to be taken into account in a model that aims to describe the population dynamics. In the present study, as a first step, the continuous-time autonomous system proposed by Khoury, Myerscough and Barron (KMB) in 2011 was approximated by means of a nonstandard finite difference (NSFD) scheme in order to obtain a set of autonomous difference equations. Subsequently, with the aim of introducing seasonal effects, a nonautonomous version (NAKMB) was proposed and formulated in the discrete-time domain via an NSFD scheme, by introducing time-dependent formulations for the queen bee laying rate and the recruitment rate coefficients. By means of phase-plane analysis it was possible to deduce that, with an appropriate choice of the parameters, the NAKMB model admits both a limit cycle at nonzero population size and an equilibrium point marking colony collapse, depending on the initial population size.
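The hive-bee/forager structure of the KMB model can be sketched with a plain forward difference. The recruitment function below follows the commonly cited form of the KMB model, but both the equations and every parameter value here should be treated as assumptions for illustration; an NSFD scheme, unlike this naive discretization, replaces the raw step size with a denominator function and chooses nonlocal terms to preserve positivity:

```python
import numpy as np

# Forward-difference sketch of KMB-style hive-bee (H) / forager (F)
# dynamics with social inhibition of recruitment; toy parameters.
L_rate, w = 2000.0, 27000.0          # laying rate, brood saturation
alpha, sigma, m = 0.25, 0.75, 0.24   # recruitment, inhibition, forager death
dt = 0.05
H, F = 8000.0, 8000.0
for _ in range(2000):
    N = H + F
    R = alpha - sigma * F / N        # recruitment rate (negative => reversion)
    dH = L_rate * N / (w + N) - H * R
    dF = H * R - m * F
    H += dt * dH
    F += dt * dF
```

Sweeping the initial (H, F) in such an iteration is the phase-plane exercise the abstract describes: some initial populations settle at a viable colony size while others decay toward the collapse equilibrium.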
q-bio/0503028
Ron Maimon
Ron Maimon
Computational Theory of Biological Function I
22 pages
null
null
null
q-bio.MN
null
This series presents an approach to mathematical biology which makes precise the function of biological molecules. Because biological systems compute, the theory is a general purpose computer language. I build a language for efficiently representing the function of protein-like molecules in a cell. The first paper only presents the kinematic part of the formalism, but this is already useful for representing large-scale protein networks. The full formalism allows us to investigate the properties of protein interaction models, ultimately yielding an estimate of the random-access memory of the proteins, a measure of their capacity for computation.
[ { "created": "Fri, 18 Mar 2005 02:37:21 GMT", "version": "v1" }, { "created": "Sun, 20 Mar 2005 21:14:50 GMT", "version": "v2" } ]
2007-05-23
[ [ "Maimon", "Ron", "" ] ]
This series presents an approach to mathematical biology which makes precise the function of biological molecules. Because biological systems compute, the theory is a general purpose computer language. I build a language for efficiently representing the function of protein-like molecules in a cell. The first paper only presents the kinematic part of the formalism, but this is already useful for representing large-scale protein networks. The full formalism allows us to investigate the properties of protein interaction models, ultimately yielding an estimate of the random-access memory of the proteins, a measure of their capacity for computation.
1309.2852
Kenna Lehmann
Kenna D. S. Lehmann, Brian W. Goldman, Ian Dworkin, David M. Bryson, Aaron P. Wagner
From Cues to Signals: Evolution of Interspecific Communication Via Aposematism and Mimicry in a Predator-Prey System
null
PLoS One 2014 9(3): e91783
10.1371/journal.pone.0091783
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current evolutionary theory suggests that many natural signaling systems evolved from preexisting cues. In aposematic systems, prey warning signals benefit both predator and prey. When the signal is highly beneficial, a third species often evolves to mimic the unpalatable species, exploiting the signaling system for its own protection. We investigated the evolutionary development of predator cue utilization and prey signaling in a digital predator-prey system in which mimicking prey could evolve to alter their appearance to resemble poison-free or poisonous prey. In predators, we observed rapid evolution of cue recognition (i.e. active behavioral responses) when presented with sufficiently poisonous prey. In addition, active signaling (i.e. mimicry) evolved in prey under all conditions that led to cue utilization. Thus we show that despite imperfect and dishonest signaling, given a high cost of consuming poisonous prey, complex systems of interspecific communication can evolve via predator cue recognition and prey signal manipulation. This provides evidence supporting hypotheses that cues may serve as stepping-stones in the evolution of more advanced communication systems and signals incorporating information about the environment.
[ { "created": "Wed, 11 Sep 2013 15:11:01 GMT", "version": "v1" } ]
2014-06-04
[ [ "Lehmann", "Kenna D. S.", "" ], [ "Goldman", "Brian W.", "" ], [ "Dworkin", "Ian", "" ], [ "Bryson", "David M.", "" ], [ "Wagner", "Aaron P.", "" ] ]
Current evolutionary theory suggests that many natural signaling systems evolved from preexisting cues. In aposematic systems, prey warning signals benefit both predator and prey. When the signal is highly beneficial, a third species often evolves to mimic the unpalatable species, exploiting the signaling system for its own protection. We investigated the evolutionary development of predator cue utilization and prey signaling in a digital predator-prey system in which mimicking prey could evolve to alter their appearance to resemble poison-free or poisonous prey. In predators, we observed rapid evolution of cue recognition (i.e. active behavioral responses) when presented with sufficiently poisonous prey. In addition, active signaling (i.e. mimicry) evolved in prey under all conditions that led to cue utilization. Thus we show that despite imperfect and dishonest signaling, given a high cost of consuming poisonous prey, complex systems of interspecific communication can evolve via predator cue recognition and prey signal manipulation. This provides evidence supporting hypotheses that cues may serve as stepping-stones in the evolution of more advanced communication systems and signals incorporating information about the environment.
0711.3421
Christian Hagendorf
Francois David, Christian Hagendorf, Kay Joerg Wiese
A growth model for RNA secondary structures
null
J. Stat. Mech. (2008) P04008
10.1088/1742-5468/2008/04/P04008
LPTENS 07/54, SPhT-T07/068
q-bio.BM cond-mat.stat-mech
null
A hierarchical model for the growth of planar arch structures for RNA secondary structures is presented, and shown to be equivalent to a tree-growth model. Both models can be solved analytically, giving access to scaling functions for large molecules, and corrections to scaling, checked by numerical simulations of up to 6500 bases. The equivalence of both models should be helpful in understanding more general tree-growth processes.
[ { "created": "Wed, 21 Nov 2007 17:51:06 GMT", "version": "v1" } ]
2008-04-11
[ [ "David", "Francois", "" ], [ "Hagendorf", "Christian", "" ], [ "Wiese", "Kay Joerg", "" ] ]
A hierarchical model for the growth of planar arch structures for RNA secondary structures is presented, and shown to be equivalent to a tree-growth model. Both models can be solved analytically, giving access to scaling functions for large molecules, and corrections to scaling, checked by numerical simulations of up to 6500 bases. The equivalence of both models should be helpful in understanding more general tree-growth processes.
1909.06845
Diederik Aerts
Diederik Aerts and Lester Beltran
Quantum Structure in Cognition: Human Language as a Boson Gas of Entangled Words
45 pages, 11 figures
Foundations of Science 25, pp. 755-802 (2020)
10.1007/s10699-019-09633-4
null
q-bio.NC cs.CL quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We model a piece of text of human language telling a story by means of the quantum structure describing a Bose gas in a state close to a Bose-Einstein condensate near absolute zero temperature. For this we introduce energy levels for the words (concepts) used in the story and we also introduce the new notion of 'cogniton' as the quantum of human thought. Words (concepts) are then cognitons in different energy states as it is the case for photons in different energy states, or states of different radiative frequency, when the considered boson gas is that of the quanta of the electromagnetic field. We show that Bose-Einstein statistics delivers a very good model for these pieces of texts telling stories, both for short stories and for long stories of the size of novels. We analyze an unexpected connection with Zipf's law in human language, the Zipf ranking relating to the energy levels of the words, and the Bose-Einstein graph coinciding with the Zipf graph. We investigate the issue of 'identity and indistinguishability' from this new perspective and conjecture that the way one can easily understand how two of 'the same concepts' are 'absolutely identical and indistinguishable' in human language is also the way in which quantum particles are absolutely identical and indistinguishable in physical reality, providing in this way new evidence for our conceptuality interpretation of quantum theory.
[ { "created": "Sun, 15 Sep 2019 17:40:57 GMT", "version": "v1" }, { "created": "Sat, 5 Oct 2019 17:15:59 GMT", "version": "v2" } ]
2023-02-27
[ [ "Aerts", "Diederik", "" ], [ "Beltran", "Lester", "" ] ]
We model a piece of text of human language telling a story by means of the quantum structure describing a Bose gas in a state close to a Bose-Einstein condensate near absolute zero temperature. For this we introduce energy levels for the words (concepts) used in the story and we also introduce the new notion of 'cogniton' as the quantum of human thought. Words (concepts) are then cognitons in different energy states as it is the case for photons in different energy states, or states of different radiative frequency, when the considered boson gas is that of the quanta of the electromagnetic field. We show that Bose-Einstein statistics delivers a very good model for these pieces of texts telling stories, both for short stories and for long stories of the size of novels. We analyze an unexpected connection with Zipf's law in human language, the Zipf ranking relating to the energy levels of the words, and the Bose-Einstein graph coinciding with the Zipf graph. We investigate the issue of 'identity and indistinguishability' from this new perspective and conjecture that the way one can easily understand how two of 'the same concepts' are 'absolutely identical and indistinguishable' in human language is also the way in which quantum particles are absolutely identical and indistinguishable in physical reality, providing in this way new evidence for our conceptuality interpretation of quantum theory.
2101.04137
Zied Ben Houidi
Zied Ben Houidi
The backpropagation-based recollection hypothesis: Backpropagated action potentials mediate recall, imagination, language understanding and naming
19 pages, 6 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Ever since the advent of the neuron doctrine more than a century ago, information processing in the brain has been widely believed to mainly follow the forward, pre- to post-synaptic direction. Challenging this prevalent view, in this paper we put forward the backpropagation-based recollection hypothesis: weak and fast-fading action potentials following the (highest-weight) post- to pre-synaptic backward pathways mediate explicit cue-based memory recall. This also includes the tasks of imagination, future episodic thinking, language understanding and associating names with various stimuli. These signals originate in highly invariant neurons, which uniquely respond to some specific stimuli (e.g. the image of a cat). They then travel backwards to reactivate the same populations of neurons that uniquely respond to these specific stimuli during perception, thus recreating "offline" a similar experience. After stating our hypothesis in detail, we challenge its assumptions through a thorough literature review. We find abundant evidence that supports most of its assumptions, including the existence of backpropagating signals that have interesting properties. We then leverage simulations based on existing spiking neural network models with STDP learning to show the computational feasibility of using such a mechanism to map the image of an object to its name with the same high accuracy as a state-of-the-art machine learning classifier. Although not yet a theory, we believe this hypothesis presents a paradigm shift that is worth investigating further: it opens the way, among others, to new interpretations of language acquisition and understanding, of the interplay between memory encoding and retrieval, and to reconciling the apparently opposed views of sparse coding and distributed representations.
[ { "created": "Mon, 11 Jan 2021 19:02:35 GMT", "version": "v1" }, { "created": "Thu, 14 Jan 2021 14:44:58 GMT", "version": "v2" }, { "created": "Mon, 13 Sep 2021 21:41:03 GMT", "version": "v3" } ]
2021-09-15
[ [ "Houidi", "Zied Ben", "" ] ]
Ever since the advent of the neuron doctrine more than a century ago, information processing in the brain has been widely believed to mainly follow the forward, pre- to post-synaptic direction. Challenging this prevalent view, in this paper we put forward the backpropagation-based recollection hypothesis: weak and fast-fading action potentials following the (highest-weight) post- to pre-synaptic backward pathways mediate explicit cue-based memory recall. This also includes the tasks of imagination, future episodic thinking, language understanding and associating names with various stimuli. These signals originate in highly invariant neurons, which uniquely respond to some specific stimuli (e.g. the image of a cat). They then travel backwards to reactivate the same populations of neurons that uniquely respond to these specific stimuli during perception, thus recreating "offline" a similar experience. After stating our hypothesis in detail, we challenge its assumptions through a thorough literature review. We find abundant evidence that supports most of its assumptions, including the existence of backpropagating signals that have interesting properties. We then leverage simulations based on existing spiking neural network models with STDP learning to show the computational feasibility of using such a mechanism to map the image of an object to its name with the same high accuracy as a state-of-the-art machine learning classifier. Although not yet a theory, we believe this hypothesis presents a paradigm shift that is worth investigating further: it opens the way, among others, to new interpretations of language acquisition and understanding, of the interplay between memory encoding and retrieval, and to reconciling the apparently opposed views of sparse coding and distributed representations.
2303.07163
Thierry Mora
Thierry Mora and Aleksandra M. Walczak
Towards a quantitative theory of tolerance
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A cornerstone of the classical view of tolerance is the elimination of self-reactive T cells during negative selection in the thymus. However, high-throughput T-cell receptor sequencing data has so far failed to detect substantial signatures of negative selection in the observed repertoires. In addition, quantitative estimates as well as recent experiments suggest that the elimination of self-reactive T cells is at best incomplete. We discuss several recent theoretical ideas that can explain tolerance while being consistent with these observations, including collective decision making through quorum sensing, and sensitivity to change through dynamic tuning and adaptation. We propose that a unified quantitative theory of tolerance should combine these elements to explain the plasticity of the immune system and its robustness to autoimmunity.
[ { "created": "Mon, 13 Mar 2023 15:07:04 GMT", "version": "v1" } ]
2023-03-14
[ [ "Mora", "Thierry", "" ], [ "Walczak", "Aleksandra M.", "" ] ]
A cornerstone of the classical view of tolerance is the elimination of self-reactive T cells during negative selection in the thymus. However, high-throughput T-cell receptor sequencing data has so far failed to detect substantial signatures of negative selection in the observed repertoires. In addition, quantitative estimates as well as recent experiments suggest that the elimination of self-reactive T cells is at best incomplete. We discuss several recent theoretical ideas that can explain tolerance while being consistent with these observations, including collective decision making through quorum sensing, and sensitivity to change through dynamic tuning and adaptation. We propose that a unified quantitative theory of tolerance should combine these elements to explain the plasticity of the immune system and its robustness to autoimmunity.
2106.06975
Mohsen Annabestani
Mohsen Annabestani, Pouria Esmaili-Dokht, Seyyed Ali Olianasab, Nooshin Orouji, Zeinab Alipour, Mohammad Hossein Sayad, Kimia Rajabi, Barbara Mazzolai, Mehdi Fardmanesh
A novel fully 3D, microfluidic-oriented, gel-based and low cost stretchable soft sensor
null
null
null
null
q-bio.QM cond-mat.soft cs.RO physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a novel fully 3D, microfluidic-oriented, gel-based, low-cost, and highly stretchable resistive sensor is presented. The proposed sensor can measure and discriminate stretch, twist, and pressure with a single device, a capability afforded by its fully 3D structure. Unlike previous sensors, all of which have used EGaIn as the conductive material, we use a low-cost, safe, and ubiquitous glycol-based gel instead. To demonstrate the functionality of the proposed sensor, FEM simulations and a set of designed experimental tests were carried out, which showed its linear, accurate, and durable operation. Finally, the sensor was put through its paces on the knee, elbow, and wrist of a female test subject. In addition, to evaluate the pressure-sensing functionality, a fully 3D active foot insole was developed, fabricated, and evaluated. All of the results show that the proposed sensor is promising for real-world applications such as rehabilitation, wearable devices, soft robotics, smart clothing, gait analysis, and AR/VR.
[ { "created": "Sun, 13 Jun 2021 12:33:10 GMT", "version": "v1" } ]
2021-06-15
[ [ "Annabestani", "Mohsen", "" ], [ "Esmaili-Dokht", "Pouria", "" ], [ "Olianasab", "Seyyed Ali", "" ], [ "Orouji", "Nooshin", "" ], [ "Alipour", "Zeinab", "" ], [ "Sayad", "Mohammad Hossein", "" ], [ "Rajabi", "Kimia", "" ], [ "Mazzolai", "Barbara", "" ], [ "Fardmanesh", "Mehdi", "" ] ]
In this paper, a novel fully 3D, microfluidic-oriented, gel-based, low-cost, and highly stretchable resistive sensor is presented. The proposed sensor can measure and discriminate stretch, twist, and pressure with a single device, a capability afforded by its fully 3D structure. Unlike previous sensors, all of which have used EGaIn as the conductive material, we use a low-cost, safe, and ubiquitous glycol-based gel instead. To demonstrate the functionality of the proposed sensor, FEM simulations and a set of designed experimental tests were carried out, which showed its linear, accurate, and durable operation. Finally, the sensor was put through its paces on the knee, elbow, and wrist of a female test subject. In addition, to evaluate the pressure-sensing functionality, a fully 3D active foot insole was developed, fabricated, and evaluated. All of the results show that the proposed sensor is promising for real-world applications such as rehabilitation, wearable devices, soft robotics, smart clothing, gait analysis, and AR/VR.
1504.07797
Sakuntala Chatterjee
Subrata Dev and Sakuntala Chatterjee
Optimal search in E.coli chemotaxis
null
Physical Review E, vol. 91, 042714 (2015)
10.1103/PhysRevE.91.042714
null
q-bio.CB cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study chemotaxis of a single {\sl E.coli} bacterium in a medium where the nutrient chemical is also undergoing diffusion and its concentration has the form of a Gaussian whose width increases with time. We measure the average first passage time of the bacterium at a region of high nutrient concentration. In the limit of very slow nutrient diffusion, the bacterium effectively experiences a Gaussian concentration profile with a fixed width. In this case we find that there exists an optimum width of the Gaussian when the average first passage time is minimum, {\sl i.e.}, the search process is most efficient. We verify the existence of the optimum width for the deterministic initial position of the bacterium and also for the stochastic initial position, drawn from uniform and steady state distributions. Our numerical simulation in a model of a non-Markovian random walker agrees well with our analytical calculations in a related coarse-grained model. We also present our simulation results for the case when the nutrient diffusion and bacterial motion occur over comparable time-scales and the bacterium senses a time-varying concentration field.
[ { "created": "Wed, 29 Apr 2015 10:32:11 GMT", "version": "v1" } ]
2015-04-30
[ [ "Dev", "Subrata", "" ], [ "Chatterjee", "Sakuntala", "" ] ]
We study chemotaxis of a single {\sl E.coli} bacterium in a medium where the nutrient chemical is also undergoing diffusion and its concentration has the form of a Gaussian whose width increases with time. We measure the average first passage time of the bacterium at a region of high nutrient concentration. In the limit of very slow nutrient diffusion, the bacterium effectively experiences a Gaussian concentration profile with a fixed width. In this case we find that there exists an optimum width of the Gaussian when the average first passage time is minimum, {\sl i.e.}, the search process is most efficient. We verify the existence of the optimum width for the deterministic initial position of the bacterium and also for the stochastic initial position, drawn from uniform and steady state distributions. Our numerical simulation in a model of a non-Markovian random walker agrees well with our analytical calculations in a related coarse-grained model. We also present our simulation results for the case when the nutrient diffusion and bacterial motion occur over comparable time-scales and the bacterium senses a time-varying concentration field.
1607.08507
Kamal Dhakal
Jingsong Zhou, Kamal Dhakal, Jianxun Yi
Mitochondrial Ca2+ uptake in skeletal muscle health and disease
null
null
10.1007/s11427-016-5089-3
null
q-bio.BM physics.bio-ph q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Muscle uses Ca2+ as a messenger to control contraction and relies on ATP to maintain the intracellular Ca2+ homeostasis. Mitochondria are the major sub-cellular organelle of ATP production. With a negative inner membrane potential, mitochondria take up Ca2+ from their surroundings, a process called mitochondrial Ca2+ uptake. Under physiological conditions, Ca2+ uptake into mitochondria promotes ATP production. Excessive uptake causes mitochondrial Ca2+ overload, which activates downstream adverse responses leading to cell dysfunction. Moreover, mitochondrial Ca2+ uptake could shape spatio-temporal patterns of intracellular Ca2+ signaling. Malfunction of mitochondrial Ca2+ uptake is implicated in muscle degeneration. Unlike non-excitable cells, mitochondria in muscle cells experience dramatic changes of intracellular Ca2+ levels. Besides the sudden elevation of Ca2+ level induced by action potentials, Ca2+ transients in muscle cells can be as short as a few milliseconds during a single twitch or as long as minutes during tetanic contraction, which raises the question whether mitochondrial Ca2+ uptake is fast and big enough to shape intracellular Ca2+ signaling during excitation-contraction coupling and creates technical challenges for quantification of the dynamic changes of Ca2+ inside mitochondria. This review focuses on characterization of mitochondrial Ca2+ uptake in skeletal muscle and its role in muscle physiology and diseases.
[ { "created": "Thu, 28 Jul 2016 15:47:22 GMT", "version": "v1" } ]
2016-07-29
[ [ "Zhou", "Jingsong", "" ], [ "Dhakal", "Kamal", "" ], [ "Yi", "Jianxun", "" ] ]
Muscle uses Ca2+ as a messenger to control contraction and relies on ATP to maintain the intracellular Ca2+ homeostasis. Mitochondria are the major sub-cellular organelle of ATP production. With a negative inner membrane potential, mitochondria take up Ca2+ from their surroundings, a process called mitochondrial Ca2+ uptake. Under physiological conditions, Ca2+ uptake into mitochondria promotes ATP production. Excessive uptake causes mitochondrial Ca2+ overload, which activates downstream adverse responses leading to cell dysfunction. Moreover, mitochondrial Ca2+ uptake could shape spatio-temporal patterns of intracellular Ca2+ signaling. Malfunction of mitochondrial Ca2+ uptake is implicated in muscle degeneration. Unlike non-excitable cells, mitochondria in muscle cells experience dramatic changes of intracellular Ca2+ levels. Besides the sudden elevation of Ca2+ level induced by action potentials, Ca2+ transients in muscle cells can be as short as a few milliseconds during a single twitch or as long as minutes during tetanic contraction, which raises the question whether mitochondrial Ca2+ uptake is fast and big enough to shape intracellular Ca2+ signaling during excitation-contraction coupling and creates technical challenges for quantification of the dynamic changes of Ca2+ inside mitochondria. This review focuses on characterization of mitochondrial Ca2+ uptake in skeletal muscle and its role in muscle physiology and diseases.
2310.15177
Alexander Ororbia
Alexander Ororbia, Mary Alexandria Kelly
A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization
Additional section on hopfield functionals and CogNGen's full free energy, basal ganglia sub-circuit diagram integrated
null
null
null
q-bio.NC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the last few years, large neural generative models, capable of synthesizing semantically rich passages of text or producing complex images, have emerged as a popular representation of what has come to be known as ``generative artificial intelligence'' (generative AI). Beyond opening the door to new opportunities as well as challenges for the domain of statistical machine learning, the rising popularity of generative AI brings with it interesting questions for Cognitive Science, which seeks to discover the nature of the processes that underpin minds and brains, as well as to understand how such functionality might be acquired and instantiated in a biological (or artificial) substrate. With this goal in mind, we argue that a promising research program lies in the crafting of cognitive architectures, a long-standing tradition of the field, cast fundamentally in terms of neuro-mimetic generative building blocks. Concretely, we discuss the COGnitive Neural GENerative system, one such architecture that casts the Common Model of Cognition in terms of Hebbian adaptation operating in service of optimizing a variational free energy functional.
[ { "created": "Sat, 14 Oct 2023 23:28:48 GMT", "version": "v1" }, { "created": "Sat, 4 Nov 2023 03:23:20 GMT", "version": "v2" } ]
2023-11-07
[ [ "Ororbia", "Alexander", "" ], [ "Kelly", "Mary Alexandria", "" ] ]
Over the last few years, large neural generative models, capable of synthesizing semantically rich passages of text or producing complex images, have emerged as a popular representation of what has come to be known as ``generative artificial intelligence'' (generative AI). Beyond opening the door to new opportunities as well as challenges for the domain of statistical machine learning, the rising popularity of generative AI brings with it interesting questions for Cognitive Science, which seeks to discover the nature of the processes that underpin minds and brains, as well as to understand how such functionality might be acquired and instantiated in a biological (or artificial) substrate. With this goal in mind, we argue that a promising research program lies in the crafting of cognitive architectures, a long-standing tradition of the field, cast fundamentally in terms of neuro-mimetic generative building blocks. Concretely, we discuss the COGnitive Neural GENerative system, one such architecture that casts the Common Model of Cognition in terms of Hebbian adaptation operating in service of optimizing a variational free energy functional.
2304.07412
Ziyu Zhao
Ziyu Zhao, Dae-Sung Hwangbo, Sumit Saurabh, Clark Rosensweig, Ravi Allada, William L. Kath and Rosemary Braun
Modeling Transient Changes in Circadian Rhythms
null
null
null
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The circadian clock can adapt itself to external cues, but the molecular mechanisms and regulatory networks governing the transient adjustment of circadian oscillations are still largely unknown. Here we consider the specific case of circadian oscillations transiently responding to a temperature change. Using a framework motivated by Floquet theory, we model the mRNA expression levels of the fat body of Drosophila melanogaster following a step change from 25C to 18C. Using this method, we infer the adaptation rates of individual genes as they adapt to the new temperature. To deal with the heteroskedastic noise and outliers present in the expression data, we employ quantile regression and the wild bootstrap for significance testing. Model selection with the finite-sample-corrected Akaike Information Criterion (AICc) is additionally performed for robust inference. We identify several genes with fast transition rates as potential sources of temperature-mediated responses in the circadian system of fruit flies, and the constructed network suggests that the proteasome may play important roles in governing these responses.
[ { "created": "Fri, 14 Apr 2023 22:11:16 GMT", "version": "v1" } ]
2023-04-18
[ [ "Zhao", "Ziyu", "" ], [ "Hwangbo", "Dae-Sung", "" ], [ "Saurabh", "Sumit", "" ], [ "Rosensweig", "Clark", "" ], [ "Allada", "Ravi", "" ], [ "Kath", "William L.", "" ], [ "Braun", "Rosemary", "" ] ]
The circadian clock can adapt itself to external cues, but the molecular mechanisms and regulatory networks governing the transient adjustment of circadian oscillations are still largely unknown. Here we consider the specific case of circadian oscillations transiently responding to a temperature change. Using a framework motivated by Floquet theory, we model the mRNA expression levels of the fat body of Drosophila melanogaster following a step change from 25C to 18C. Using this method, we infer the adaptation rates of individual genes as they adapt to the new temperature. To deal with the heteroskedastic noise and outliers present in the expression data, we employ quantile regression and the wild bootstrap for significance testing. Model selection with the finite-sample-corrected Akaike Information Criterion (AICc) is additionally performed for robust inference. We identify several genes with fast transition rates as potential sources of temperature-mediated responses in the circadian system of fruit flies, and the constructed network suggests that the proteasome may play important roles in governing these responses.
2305.02199
Anees Kazi
Anees Kazi, Jocelyn Mora, Bruce Fischl, Adrian V. Dalca, and Iman Aganj
Multi-Head Graph Convolutional Network for Structural Connectome Classification
null
null
null
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We tackle classification based on brain connectivity derived from diffusion magnetic resonance images. We propose a machine-learning model inspired by graph convolutional networks (GCNs), which takes a brain connectivity input graph and processes the data separately through a parallel GCN mechanism with multiple heads. The proposed network is a simple design that employs different heads involving graph convolutions focused on edges and nodes, capturing representations from the input data thoroughly. To test the ability of our model to extract complementary and representative features from brain connectivity data, we chose the task of sex classification. This quantifies the degree to which the connectome varies depending on the sex, which is important for improving our understanding of health and disease in both sexes. We show experiments on two publicly available datasets: PREVENT-AD (347 subjects) and OASIS3 (771 subjects). The proposed model demonstrates the highest performance compared to the existing machine-learning algorithms we tested, including classical methods and (graph and non-graph) deep learning. We provide a detailed analysis of each component of our model.
[ { "created": "Tue, 2 May 2023 15:04:30 GMT", "version": "v1" }, { "created": "Wed, 20 Sep 2023 15:03:08 GMT", "version": "v2" } ]
2023-09-21
[ [ "Kazi", "Anees", "" ], [ "Mora", "Jocelyn", "" ], [ "Fischl", "Bruce", "" ], [ "Dalca", "Adrian V.", "" ], [ "Aganj", "Iman", "" ] ]
We tackle classification based on brain connectivity derived from diffusion magnetic resonance images. We propose a machine-learning model inspired by graph convolutional networks (GCNs), which takes a brain connectivity input graph and processes the data separately through a parallel GCN mechanism with multiple heads. The proposed network is a simple design that employs different heads involving graph convolutions focused on edges and nodes, capturing representations from the input data thoroughly. To test the ability of our model to extract complementary and representative features from brain connectivity data, we chose the task of sex classification. This quantifies the degree to which the connectome varies depending on the sex, which is important for improving our understanding of health and disease in both sexes. We show experiments on two publicly available datasets: PREVENT-AD (347 subjects) and OASIS3 (771 subjects). The proposed model demonstrates the highest performance compared to the existing machine-learning algorithms we tested, including classical methods and (graph and non-graph) deep learning. We provide a detailed analysis of each component of our model.
2305.00006
Benjamin Hitz
Meenakshi S. Kagda, Bonita Lam, Casey Litton, Corinn Small, Cricket A. Sloan, Emma Spragins, Forrest Tanaka, Ian Whaling, Idan Gabdank, Ingrid Youngworth, J. Seth Strattan, Jason Hilton, Jennifer Jou, Jessica Au, Jin-Wook Lee, Kalina Andreeva, Keenan Graham, Khine Lin, Matt Simison, Otto Jolanki, Paul Sud, Pedro Assis, Philip Adenekan, Eric Douglas, Mingjie Li, Stuart Miyasato, Weiwei Zhong, Yunhai Luo, Zachary Myers, J. Michael Cherry and Benjamin C. Hitz
Data navigation on the ENCODE portal
null
null
null
null
q-bio.GN cs.DB
http://creativecommons.org/licenses/by/4.0/
Spanning two decades, the Encyclopaedia of DNA Elements (ENCODE) is a collaborative research project that aims to identify all the functional elements in the human and mouse genomes. To best serve the scientific community, all data generated by the consortium is shared through a web-portal (https://www.encodeproject.org/) with no access restrictions. The fourth and final phase of the project added a diverse set of new samples (including those associated with human disease), and a wide range of new assays aimed at detection, characterization and validation of functional genomic elements. The ENCODE data portal hosts results from over 23,000 functional genomics experiments, over 800 functional elements characterization experiments (including in vivo transgenic enhancer assays, reporter assays and CRISPR screens) along with over 60,000 results of computational and integrative analyses (including imputations, predictions and genome annotations). The ENCODE Data Coordination Center (DCC) is responsible for development and maintenance of the data portal, along with the implementation and utilisation of the ENCODE uniform processing pipelines to generate uniformly processed data. Here we report recent updates to the data portal. Specifically, we have completely redesigned the home page, improved search interface, added several new pages to highlight collections of biologically related data (deeply profiled cell lines, immune cells, Alzheimer's Disease, RNA-Protein interactions, degron matrix and a matrix of experiments organised by human donors), added single-cell experiments, and enhanced the cart interface for visualisation and download of user-selected datasets.
[ { "created": "Thu, 27 Apr 2023 23:45:12 GMT", "version": "v1" }, { "created": "Thu, 4 May 2023 17:33:07 GMT", "version": "v2" } ]
2023-05-05
[ [ "Kagda", "Meenakshi S.", "" ], [ "Lam", "Bonita", "" ], [ "Litton", "Casey", "" ], [ "Small", "Corinn", "" ], [ "Sloan", "Cricket A.", "" ], [ "Spragins", "Emma", "" ], [ "Tanaka", "Forrest", "" ], [ "Whaling", "Ian", "" ], [ "Gabdank", "Idan", "" ], [ "Youngworth", "Ingrid", "" ], [ "Strattan", "J. Seth", "" ], [ "Hilton", "Jason", "" ], [ "Jou", "Jennifer", "" ], [ "Au", "Jessica", "" ], [ "Lee", "Jin-Wook", "" ], [ "Andreeva", "Kalina", "" ], [ "Graham", "Keenan", "" ], [ "Lin", "Khine", "" ], [ "Simison", "Matt", "" ], [ "Jolanki", "Otto", "" ], [ "Sud", "Paul", "" ], [ "Assis", "Pedro", "" ], [ "Adenekan", "Philip", "" ], [ "Douglas", "Eric", "" ], [ "Li", "Mingjie", "" ], [ "Miyasato", "Stuart", "" ], [ "Zhong", "Weiwei", "" ], [ "Luo", "Yunhai", "" ], [ "Myers", "Zachary", "" ], [ "Cherry", "J. Michael", "" ], [ "Hitz", "Benjamin C.", "" ] ]
Spanning two decades, the Encyclopaedia of DNA Elements (ENCODE) is a collaborative research project that aims to identify all the functional elements in the human and mouse genomes. To best serve the scientific community, all data generated by the consortium is shared through a web-portal (https://www.encodeproject.org/) with no access restrictions. The fourth and final phase of the project added a diverse set of new samples (including those associated with human disease), and a wide range of new assays aimed at detection, characterization and validation of functional genomic elements. The ENCODE data portal hosts results from over 23,000 functional genomics experiments, over 800 functional elements characterization experiments (including in vivo transgenic enhancer assays, reporter assays and CRISPR screens) along with over 60,000 results of computational and integrative analyses (including imputations, predictions and genome annotations). The ENCODE Data Coordination Center (DCC) is responsible for development and maintenance of the data portal, along with the implementation and utilisation of the ENCODE uniform processing pipelines to generate uniformly processed data. Here we report recent updates to the data portal. Specifically, we have completely redesigned the home page, improved search interface, added several new pages to highlight collections of biologically related data (deeply profiled cell lines, immune cells, Alzheimer's Disease, RNA-Protein interactions, degron matrix and a matrix of experiments organised by human donors), added single-cell experiments, and enhanced the cart interface for visualisation and download of user-selected datasets.
2312.14285
Michael Kuoch
Michael Kuoch, Chi-Ning Chou, Nikhil Parthasarathy, Joel Dapello, James J. DiCarlo, Haim Sompolinsky, SueYeon Chung
Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds
To appear in the proceedings of the Conference on Parsimony and Learning (CPAL) 2024
null
null
null
q-bio.NC cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
Recently, growth in our understanding of the computations performed in both biological and artificial neural networks has largely been driven by either low-level mechanistic studies or global normative approaches. However, concrete methodologies for bridging the gap between these levels of abstraction remain elusive. In this work, we investigate the internal mechanisms of neural networks through the lens of neural population geometry, aiming to provide understanding at an intermediate level of abstraction, as a way to bridge that gap. Utilizing manifold capacity theory (MCT) from statistical physics and manifold alignment analysis (MAA) from high-dimensional statistics, we probe the underlying organization of task-dependent manifolds in deep neural networks and macaque neural recordings. Specifically, we quantitatively characterize how different learning objectives lead to differences in the organizational strategies of these models and demonstrate how these geometric analyses are connected to the decodability of task-relevant information. These analyses present a strong direction for bridging mechanistic and normative theories in neural networks through neural population geometry, potentially opening up many future research avenues in both machine learning and neuroscience.
[ { "created": "Thu, 21 Dec 2023 20:40:51 GMT", "version": "v1" } ]
2023-12-25
[ [ "Kuoch", "Michael", "" ], [ "Chou", "Chi-Ning", "" ], [ "Parthasarathy", "Nikhil", "" ], [ "Dapello", "Joel", "" ], [ "DiCarlo", "James J.", "" ], [ "Sompolinsky", "Haim", "" ], [ "Chung", "SueYeon", "" ] ]
Recently, growth in our understanding of the computations performed in both biological and artificial neural networks has largely been driven by either low-level mechanistic studies or global normative approaches. However, concrete methodologies for bridging the gap between these levels of abstraction remain elusive. In this work, we investigate the internal mechanisms of neural networks through the lens of neural population geometry, aiming to provide understanding at an intermediate level of abstraction, as a way to bridge that gap. Utilizing manifold capacity theory (MCT) from statistical physics and manifold alignment analysis (MAA) from high-dimensional statistics, we probe the underlying organization of task-dependent manifolds in deep neural networks and macaque neural recordings. Specifically, we quantitatively characterize how different learning objectives lead to differences in the organizational strategies of these models and demonstrate how these geometric analyses are connected to the decodability of task-relevant information. These analyses present a strong direction for bridging mechanistic and normative theories in neural networks through neural population geometry, potentially opening up many future research avenues in both machine learning and neuroscience.
1207.2072
Mauro Mobilia
Mauro Mobilia
Stochastic dynamics of the prisoner's dilemma with cooperation facilitators
10 pages, 5 figures. Final version, published in Physical Review E
Phys. Rev. E 86, 011134:1-9 (2012)
10.1103/PhysRevE.86.011134
null
q-bio.PE cond-mat.stat-mech nlin.AO physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the framework of the paradigmatic prisoner's dilemma, we investigate the evolutionary dynamics of social dilemmas in the presence of "cooperation facilitators". In our model, cooperators and defectors interact as in the classical prisoner's dilemma game, where selection favors defection. However, here the presence of a small number of cooperation facilitators enhances the fitness (reproductive potential) of cooperators, while it does not alter that of defectors. In a finite population of size N, the dynamics of the prisoner's dilemma with facilitators is characterized by the probability that cooperation takes over (fixation probability) and by the mean times to reach the absorbing states. These quantities are computed exactly and using Fokker-Planck equations. Our findings, corroborated by stochastic simulations, demonstrate that the influence of facilitators crucially depends on the difference between their density z and the game's cost-to-benefit ratio r. When z>r, the fixation of cooperators is likely in a large population and, under weak selection pressure, invasion and replacement of defection by cooperation is favored by selection if b(z-r)(1-z)>1/N, where 0<b<= 1 is the cooperation payoff benefit. When z<r, the fixation probability of cooperators is exponentially enhanced by the presence of facilitators but defection is the dominating strategy.
[ { "created": "Mon, 9 Jul 2012 15:09:09 GMT", "version": "v1" }, { "created": "Wed, 1 Aug 2012 14:33:36 GMT", "version": "v2" } ]
2012-08-02
[ [ "Mobilia", "Mauro", "" ] ]
In the framework of the paradigmatic prisoner's dilemma, we investigate the evolutionary dynamics of social dilemmas in the presence of "cooperation facilitators". In our model, cooperators and defectors interact as in the classical prisoner's dilemma game, where selection favors defection. However, here the presence of a small number of cooperation facilitators enhances the fitness (reproductive potential) of cooperators, while it does not alter that of defectors. In a finite population of size N, the dynamics of the prisoner's dilemma with facilitators is characterized by the probability that cooperation takes over (fixation probability) and by the mean times to reach the absorbing states. These quantities are computed exactly and using Fokker-Planck equations. Our findings, corroborated by stochastic simulations, demonstrate that the influence of facilitators crucially depends on the difference between their density z and the game's cost-to-benefit ratio r. When z>r, the fixation of cooperators is likely in a large population and, under weak selection pressure, invasion and replacement of defection by cooperation is favored by selection if b(z-r)(1-z)>1/N, where 0<b<= 1 is the cooperation payoff benefit. When z<r, the fixation probability of cooperators is exponentially enhanced by the presence of facilitators but defection is the dominating strategy.
1901.06967
Christoph M Augustin
Kristin L. Leskoske, Fran\c{c}oise M. Roelants, Anita Emmerstorfer-Augustin, Christoph M. Augustin, Edward P. Si, Jennifer M. Hill, Jeremy Thorner
Phosphorylation by the stress-activated MAPK Slt2 down-regulates the yeast TOR complex 2
This work was supported by National Institutes of Health (NIH) Predoctoral Traineeship GM07232 and a University of California at Berkeley MacArthur and Lakhan-Pal Graduate Fellowship to K.L.L., Erwin Schroedinger Fellowship J3787-B21 from the Austrian Science Fund to AE-A, Marie Sklodowska-Curie Action H2020-MSCA-IF-2016 InsiliCardio, GA 75083 to CMA, and NIH R01 research grant GM21841 to JT
Genes & Development, 32:1-15, 2018
10.1101/gad.318709.118
null
q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Saccharomyces cerevisiae target of rapamycin (TOR) complex 2 (TORC2) is an essential regulator of plasma membrane lipid and protein homeostasis. How TORC2 activity is modulated in response to changes in the status of the cell envelope is unclear. Here we document that TORC2 subunit Avo2 is a direct target of Slt2, the mitogen-activated protein kinase (MAPK) of the cell wall integrity pathway. Activation of Slt2 by overexpression of a constitutively active allele of an upstream Slt2 activator (Pkc1) or by auxin-induced degradation of a negative Slt2 regulator (Sln1) caused hyperphosphorylation of Avo2 at its MAPK phosphoacceptor sites in a Slt2-dependent manner and diminished TORC2-mediated phosphorylation of its major downstream effector, protein kinase Ypk1. Deletion of Avo2 or expression of a phosphomimetic Avo2 allele rendered cells sensitive to two stresses (myriocin treatment and elevated exogenous acetic acid) that the cell requires Ypk1 activation by TORC2 to survive. Thus, Avo2 is necessary for optimal TORC2 activity, and Slt2-mediated phosphorylation of Avo2 down-regulates TORC2 signaling. Compared with wild-type Avo2, phosphomimetic Avo2 shows significant displacement from the plasma membrane, suggesting that Slt2 inhibits TORC2 by promoting Avo2 dissociation. Our findings are the first demonstration that TORC2 function is regulated by MAPK-mediated phosphorylation.
[ { "created": "Mon, 21 Jan 2019 15:36:49 GMT", "version": "v1" } ]
2019-01-23
[ [ "Leskoske", "Kristin L.", "" ], [ "Roelants", "Françoise M.", "" ], [ "Emmerstorfer-Augustin", "Anita", "" ], [ "Augustin", "Christoph M.", "" ], [ "Si", "Edward P.", "" ], [ "Hill", "Jennifer M.", "" ], [ "Thorner", "Jeremy", "" ] ]
Saccharomyces cerevisiae target of rapamycin (TOR) complex 2 (TORC2) is an essential regulator of plasma membrane lipid and protein homeostasis. How TORC2 activity is modulated in response to changes in the status of the cell envelope is unclear. Here we document that TORC2 subunit Avo2 is a direct target of Slt2, the mitogen-activated protein kinase (MAPK) of the cell wall integrity pathway. Activation of Slt2 by overexpression of a constitutively active allele of an upstream Slt2 activator (Pkc1) or by auxin-induced degradation of a negative Slt2 regulator (Sln1) caused hyperphosphorylation of Avo2 at its MAPK phosphoacceptor sites in a Slt2-dependent manner and diminished TORC2-mediated phosphorylation of its major downstream effector, protein kinase Ypk1. Deletion of Avo2 or expression of a phosphomimetic Avo2 allele rendered cells sensitive to two stresses (myriocin treatment and elevated exogenous acetic acid) that the cell requires Ypk1 activation by TORC2 to survive. Thus, Avo2 is necessary for optimal TORC2 activity, and Slt2-mediated phosphorylation of Avo2 down-regulates TORC2 signaling. Compared with wild-type Avo2, phosphomimetic Avo2 shows significant displacement from the plasma membrane, suggesting that Slt2 inhibits TORC2 by promoting Avo2 dissociation. Our findings are the first demonstration that TORC2 function is regulated by MAPK-mediated phosphorylation.
1811.10172
Jumpei Yamagishi
Jumpei F Yamagishi, Nen Saito, and Kunihiko Kaneko
The advantage of leakage of essential metabolites and resultant symbiosis of diverse species
9 pages, 4 figures
Phys. Rev. Lett. 124, 048101 (2020)
10.1103/PhysRevLett.124.048101
null
q-bio.PE physics.bio-ph q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Microbial communities display extreme diversity. A variety of strains or species coexist even when limited by a single resource. It has been argued that metabolite secretion creates new niches and facilitates such diversity. Nonetheless, it is still a controversial topic why cells secrete even essential metabolites so often; in fact, even under isolation conditions, microbial cells secrete various metabolites, including those essential for their growth. First, we demonstrate that leaking essential metabolites can be advantageous. If the intracellular chemical reactions include multibody reactions like catalytic reactions, this advantageous leakage of essential metabolites is possible and indeed typical for most metabolic networks via "flux control" and "growth-dilution" mechanisms; the latter is a result of the balance between synthesis and growth-induced dilution with autocatalytic reactions. Counterintuitively, the mechanisms can work even when the supplied resource is scarce. Next, when such cells are crowded, the presence of another cell type, which consumes the leaked chemicals, is beneficial for both cell types, so that their coexistence enhances the growth of both. The latter part of the paper is devoted to the analysis of such an unusual form of symbiosis: "consumer" cell types benefit from the uptake of metabolites secreted by "leaker" cell types, and such consumption reduces the concentration of metabolites accumulated in the environment; this environmental change enables further secretion from the leaker cell types. This situation leads to frequency-dependent coexistence of several cell types, as supported by extensive simulations. A new look at the diversity in a microbial ecosystem is thus presented.
[ { "created": "Mon, 26 Nov 2018 04:11:27 GMT", "version": "v1" }, { "created": "Mon, 4 Mar 2019 12:24:04 GMT", "version": "v2" } ]
2020-02-05
[ [ "Yamagishi", "Jumpei F", "" ], [ "Saito", "Nen", "" ], [ "Kaneko", "Kunihiko", "" ] ]
Microbial communities display extreme diversity. A variety of strains or species coexist even when limited by a single resource. It has been argued that metabolite secretion creates new niches and facilitates such diversity. Nonetheless, it is still a controversial topic why cells secrete even essential metabolites so often; in fact, even under isolation conditions, microbial cells secrete various metabolites, including those essential for their growth. First, we demonstrate that leaking essential metabolites can be advantageous. If the intracellular chemical reactions include multibody reactions like catalytic reactions, this advantageous leakage of essential metabolites is possible and indeed typical for most metabolic networks via "flux control" and "growth-dilution" mechanisms; the latter is a result of the balance between synthesis and growth-induced dilution with autocatalytic reactions. Counterintuitively, the mechanisms can work even when the supplied resource is scarce. Next, when such cells are crowded, the presence of another cell type, which consumes the leaked chemicals, is beneficial for both cell types, so that their coexistence enhances the growth of both. The latter part of the paper is devoted to the analysis of such an unusual form of symbiosis: "consumer" cell types benefit from the uptake of metabolites secreted by "leaker" cell types, and such consumption reduces the concentration of metabolites accumulated in the environment; this environmental change enables further secretion from the leaker cell types. This situation leads to frequency-dependent coexistence of several cell types, as supported by extensive simulations. A new look at the diversity in a microbial ecosystem is thus presented.
1503.01233
Chloe Lerebours
C. Lerebours, P. R. Buenzli, S. Scheiner, P. Pivonka
A multiscale mechanobiological model of bone remodelling predicts site-specific bone loss in the femur during osteoporosis and mechanical disuse
25 pages, 10 figures
Biomech. Model. Mechanobiol. (2016) 15:43-67
10.1007/s10237-015-0705-x
null
q-bio.TO physics.bio-ph physics.med-ph q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a multiscale mechanobiological model of bone remodelling to investigate the site-specific evolution of bone volume fraction across the midshaft of a femur. The model includes hormonal regulation and biochemical coupling of bone cell populations, the influence of the microstructure on bone turnover rate, and mechanical adaptation of the tissue. Both microscopic and tissue-scale stress/strain states of the tissue are calculated from macroscopic loads by a combination of beam theory and micromechanical homogenisation. This model is applied to simulate the spatio-temporal evolution of a human midshaft femur scan subjected to two deregulating circumstances: (i) osteoporosis and (ii) mechanical disuse. Both simulated deregulations led to endocortical bone loss, cortical wall thinning and expansion of the medullary cavity, in accordance with experimental findings. Our model suggests that these observations are attributable to a large extent to the influence of the microstructure on bone turnover rate. Mechanical adaptation is found to help preserve intracortical bone matrix near the periosteum. Moreover, it leads to non-uniform cortical wall thickness due to the asymmetry of macroscopic loads introduced by the bending moment. The effect of mechanical adaptation near the endosteum can be greatly affected by whether the mechanical stimulus includes stress concentration effects or not.
[ { "created": "Wed, 4 Mar 2015 06:14:04 GMT", "version": "v1" } ]
2016-05-13
[ [ "Lerebours", "C.", "" ], [ "Buenzli", "P. R.", "" ], [ "Scheiner", "S.", "" ], [ "Pivonka", "P.", "" ] ]
We propose a multiscale mechanobiological model of bone remodelling to investigate the site-specific evolution of bone volume fraction across the midshaft of a femur. The model includes hormonal regulation and biochemical coupling of bone cell populations, the influence of the microstructure on bone turnover rate, and mechanical adaptation of the tissue. Both microscopic and tissue-scale stress/strain states of the tissue are calculated from macroscopic loads by a combination of beam theory and micromechanical homogenisation. This model is applied to simulate the spatio-temporal evolution of a human midshaft femur scan subjected to two deregulating circumstances: (i) osteoporosis and (ii) mechanical disuse. Both simulated deregulations led to endocortical bone loss, cortical wall thinning and expansion of the medullary cavity, in accordance with experimental findings. Our model suggests that these observations are attributable to a large extent to the influence of the microstructure on bone turnover rate. Mechanical adaptation is found to help preserve intracortical bone matrix near the periosteum. Moreover, it leads to non-uniform cortical wall thickness due to the asymmetry of macroscopic loads introduced by the bending moment. The effect of mechanical adaptation near the endosteum can be greatly affected by whether the mechanical stimulus includes stress concentration effects or not.
1711.03809
Richard Betzel
Richard F. Betzel, Danielle S. Bassett
The specificity and robustness of long-distance connections in weighted, interareal connectomes
18 pages, 8 figures
null
10.1073/pnas.1720186115
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brain areas' functional repertoires are shaped by their incoming and outgoing structural connections. In empirically measured networks, most connections are short, reflecting spatial and energetic constraints. Nonetheless, a small number of connections span long distances, consistent with the notion that the functionality of these connections must outweigh their cost. While the precise function of these long-distance connections is not known, the leading hypothesis is that they act to reduce the topological distance between brain areas and facilitate efficient interareal communication. However, this hypothesis implies a non-specificity of long-distance connections that we contend is unlikely. Instead, we propose that long-distance connections serve to diversify brain areas' inputs and outputs, thereby promoting complex dynamics. Through analysis of five interareal network datasets, we show that long-distance connections play only minor roles in reducing average interareal topological distance. In contrast, areas' long-distance and short-range neighbors exhibit marked differences in their connectivity profiles, suggesting that long-distance connections enhance dissimilarity between regional inputs and outputs. Next, we show that -- in isolation -- areas' long-distance connectivity profiles exhibit non-random levels of similarity, suggesting that the communication pathways formed by long connections exhibit redundancies that may serve to promote robustness. Finally, we use a linearization of Wilson-Cowan dynamics to simulate the covariance structure of neural activity and show that in the absence of long-distance connections, a common measure of functional diversity decreases. Collectively, our findings suggest that long-distance connections are necessary for supporting diverse and complex brain dynamics.
[ { "created": "Fri, 10 Nov 2017 13:23:09 GMT", "version": "v1" } ]
2022-06-08
[ [ "Betzel", "Richard F.", "" ], [ "Bassett", "Danielle S.", "" ] ]
Brain areas' functional repertoires are shaped by their incoming and outgoing structural connections. In empirically measured networks, most connections are short, reflecting spatial and energetic constraints. Nonetheless, a small number of connections span long distances, consistent with the notion that the functionality of these connections must outweigh their cost. While the precise function of these long-distance connections is not known, the leading hypothesis is that they act to reduce the topological distance between brain areas and facilitate efficient interareal communication. However, this hypothesis implies a non-specificity of long-distance connections that we contend is unlikely. Instead, we propose that long-distance connections serve to diversify brain areas' inputs and outputs, thereby promoting complex dynamics. Through analysis of five interareal network datasets, we show that long-distance connections play only minor roles in reducing average interareal topological distance. In contrast, areas' long-distance and short-range neighbors exhibit marked differences in their connectivity profiles, suggesting that long-distance connections enhance dissimilarity between regional inputs and outputs. Next, we show that -- in isolation -- areas' long-distance connectivity profiles exhibit non-random levels of similarity, suggesting that the communication pathways formed by long connections exhibit redundancies that may serve to promote robustness. Finally, we use a linearization of Wilson-Cowan dynamics to simulate the covariance structure of neural activity and show that in the absence of long-distance connections, a common measure of functional diversity decreases. Collectively, our findings suggest that long-distance connections are necessary for supporting diverse and complex brain dynamics.
1512.05602
Chloe Lerebours
C. Lerebours and P. R. Buenzli
Towards a cell-based mechanostat theory of bone: the need to account for osteocyte desensitisation and osteocyte replacement
13 pages, 6 figures
null
null
null
q-bio.CB q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bone's mechanostat theory describes the adaptation of bone tissues to their mechanical environment. Many experiments have investigated and observed such structural adaptation. However, there is still much uncertainty about how to define the reference mechanical state at which bone structure is adapted and stable. Clinical and experimental observations show that this reference state varies both in space and in time, over a wide range of timescales. We propose an osteocyte-based mechanostat theory that links various timescales of structural adaptation with various dynamic features of the osteocyte network in bone. This theory assumes that osteocytes are formed adapted to their current local mechanical environment through modulation of morphological and genotypic osteocyte properties involved in mechanical sensitivity. We distinguish two main types of physiological responses by which osteocytes subsequently modify the reference mechanical state. One is the replacement of osteocytes during bone remodelling, which occurs over the long timescales of bone turnover. The other is cell desensitisation responses, which occur more rapidly and reversibly during an osteocyte's lifetime. The novelty of this theory is to propose that long-lasting morphological and genotypic osteocyte properties provide a material basis for a long-term mechanical memory of bone that is gradually reset by bone remodelling. We test this theory by simulating long-term mechanical disuse (modelling spinal cord injury), and short-term mechanical loadings (modelling daily exercises) with a mathematical model. The consideration of osteocyte desensitisation and of osteocyte replacement by remodelling is able to capture the different phenomena and timescales observed during the mechanical adaptation of bone tissues, lending support to this theory.
[ { "created": "Thu, 17 Dec 2015 14:41:41 GMT", "version": "v1" }, { "created": "Fri, 1 Apr 2016 05:30:05 GMT", "version": "v2" } ]
2016-04-04
[ [ "Lerebours", "C.", "" ], [ "Buenzli", "P. R.", "" ] ]
Bone's mechanostat theory describes the adaptation of bone tissues to their mechanical environment. Many experiments have investigated and observed such structural adaptation. However, there is still much uncertainty about how to define the reference mechanical state at which bone structure is adapted and stable. Clinical and experimental observations show that this reference state varies both in space and in time, over a wide range of timescales. We propose an osteocyte-based mechanostat theory that links various timescales of structural adaptation with various dynamic features of the osteocyte network in bone. This theory assumes that osteocytes are formed adapted to their current local mechanical environment through modulation of morphological and genotypic osteocyte properties involved in mechanical sensitivity. We distinguish two main types of physiological responses by which osteocytes subsequently modify the reference mechanical state. One is the replacement of osteocytes during bone remodelling, which occurs over the long timescales of bone turnover. The other is cell desensitisation responses, which occur more rapidly and reversibly during an osteocyte's lifetime. The novelty of this theory is to propose that long-lasting morphological and genotypic osteocyte properties provide a material basis for a long-term mechanical memory of bone that is gradually reset by bone remodelling. We test this theory by simulating long-term mechanical disuse (modelling spinal cord injury), and short-term mechanical loadings (modelling daily exercises) with a mathematical model. The consideration of osteocyte desensitisation and of osteocyte replacement by remodelling is able to capture the different phenomena and timescales observed during the mechanical adaptation of bone tissues, lending support to this theory.
1812.01342
Milena \v{C}uki\'c Dr
Milena Cukic, Miodrag Stokic, Slavoljub Radenkovic, Milos Ljubisavljevic and Dragoljub Donald Pokrajac
The Shift in brain-state induced by tDCS: an EEG study
23 pages, 6 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transcranial direct current stimulation (tDCS) is known to have a modulatory effect on neural tissue that is polarity specific. It has also been shown that tDCS has a lasting effect in therapeutic applications. The main aim of the study was to examine the effects of tDCS on cortical dynamics by analyzing EEG recordings. We applied measures taken from Recurrence Quantification Analysis, Mean State Shift (MSS) and State Variance (SV), which were previously used to detect changes in brain-state dynamics after TMS. The studied cohort comprised 16 healthy subjects; all subjects received anodal and cathodal tDCS, given in two separate sessions on the same day. The EEG was recorded from 10 electrodes positioned over the left motor cortex and the mirroring right cortex, corresponding to the 10/20 standard. From three traces of recordings (pre, post1 and post2: before stimulation, immediately after, and 30 min after tDCS) we extracted five different intervals (T1-T5) comprising 500 samples each. After calculating MSS and SV on those epochs and testing statistically for a significant difference, we applied Principal Component Analysis (PCA) on the same time series to check whether the data are separable. The results show that tDCS exerts polarity-specific effects on the MSS, as shown by significantly lower MSS values after cathodal stimulation compared to anodal stimulation. Cathodal stimulation affected the SV, whereas anodal stimulation did not lead to detectable changes. We offer here for the first time an informative PCA visualization of the time effect of tDCS stimulation on brain-state shift. Further research is needed to elucidate how long that change can be detected and what neurobiological changes underlie that phenomenon.
[ { "created": "Tue, 4 Dec 2018 11:17:13 GMT", "version": "v1" } ]
2018-12-05
[ [ "Cukic", "Milena", "" ], [ "Stokic", "Miodrag", "" ], [ "Radenkovic", "Slavoljub", "" ], [ "Ljubisavljevic", "Milos", "" ], [ "Pokrajac", "Dragoljub Donald", "" ] ]
Transcranial direct current stimulation (tDCS) is known to have a modulatory effect on neural tissue that is polarity specific. It has also been shown that tDCS has a lasting effect in therapeutic applications. The main aim of the study was to examine the effects of tDCS on cortical dynamics by analyzing EEG recordings. We applied measures taken from Recurrence Quantification Analysis, Mean State Shift (MSS) and State Variance (SV), which were previously used to detect changes in brain-state dynamics after TMS. The studied cohort comprised 16 healthy subjects; all subjects received anodal and cathodal tDCS, given in two separate sessions on the same day. The EEG was recorded from 10 electrodes positioned over the left motor cortex and the mirroring right cortex, corresponding to the 10/20 standard. From three traces of recordings (pre, post1 and post2: before stimulation, immediately after, and 30 min after tDCS) we extracted five different intervals (T1-T5) comprising 500 samples each. After calculating MSS and SV on those epochs and testing statistically for a significant difference, we applied Principal Component Analysis (PCA) on the same time series to check whether the data are separable. The results show that tDCS exerts polarity-specific effects on the MSS, as shown by significantly lower MSS values after cathodal stimulation compared to anodal stimulation. Cathodal stimulation affected the SV, whereas anodal stimulation did not lead to detectable changes. We offer here for the first time an informative PCA visualization of the time effect of tDCS stimulation on brain-state shift. Further research is needed to elucidate how long that change can be detected and what neurobiological changes underlie that phenomenon.
0811.3587
Elfi Kraka
Sushilee Raganathan, Dmitry Izotov, Elfi Kraka, and Dieter Cremer
Automated and accurate protein structure description: Distribution of Ideal Secondary Structural Units in Natural Proteins
27 pages, 7 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new method for Automated Protein Structure Analysis (APSA) is derived, which simplifies the protein backbone to a smooth curve in 3-dimensional space. For the purpose of obtaining this smooth line, each amino acid is represented by its C$_{\alpha}$ atom, which serves as a suitable anchor point for a cubic spline fit. The backbone line is characterized by arc length $s$, curvature $\kappa(s)$, and torsion $\tau(s)$. The $\kappa(s)$ and $\tau(s)$ diagrams of the protein backbone suppress, because of the level of coarse graining applied, details of the bond framework of the backbone, but accurately reveal all secondary structure features of a protein. Advantages of APSA are its quantitative representation and analysis of 3-dimensional structure in the form of 2-dimensional curvature and torsion patterns, its easy visualization of complicated conformational features, and its general applicability. Typical differences between 3$_{10}$-, $\alpha$-, $\pi$-helices, and $\beta$-strands are quantified with the help of the $\kappa(s)$ and $\tau(s)$ diagrams. For a test set of 20 proteins, 63% of all helical residues and 48.5% of all extended residues are identified to be in ideal conformational environments with the help of APSA. APSA is compared with other methods for protein structure analysis, and its applicability to higher levels of protein structure is discussed.
[ { "created": "Fri, 21 Nov 2008 18:45:03 GMT", "version": "v1" }, { "created": "Mon, 24 Nov 2008 21:57:22 GMT", "version": "v2" } ]
2008-11-24
[ [ "Raganathan", "Sushilee", "" ], [ "Izotov", "Dmitry", "" ], [ "Kraka", "Elfi", "" ], [ "Cremer", "Dieter", "" ] ]
A new method for Automated Protein Structure Analysis (APSA) is derived, which simplifies the protein backbone to a smooth curve in 3-dimensional space. For the purpose of obtaining this smooth line, each amino acid is represented by its C$_{\alpha}$ atom, which serves as a suitable anchor point for a cubic spline fit. The backbone line is characterized by arc length $s$, curvature $\kappa(s)$, and torsion $\tau(s)$. The $\kappa(s)$ and $\tau(s)$ diagrams of the protein backbone suppress, because of the level of coarse graining applied, details of the bond framework of the backbone, but accurately reveal all secondary structure features of a protein. Advantages of APSA are its quantitative representation and analysis of 3-dimensional structure in the form of 2-dimensional curvature and torsion patterns, its easy visualization of complicated conformational features, and its general applicability. Typical differences between 3$_{10}$-, $\alpha$-, $\pi$-helices, and $\beta$-strands are quantified with the help of the $\kappa(s)$ and $\tau(s)$ diagrams. For a test set of 20 proteins, 63% of all helical residues and 48.5% of all extended residues are identified to be in ideal conformational environments with the help of APSA. APSA is compared with other methods for protein structure analysis, and its applicability to higher levels of protein structure is discussed.
q-bio/0505042
Paul Grayson
Paul Grayson, Alex Evilevitch, Mandar M. Inamdar, Prashant K. Purohit, William M. Gelbart, Charles M. Knobler, Rob Phillips
The effect of genome length on ejection forces in bacteriophage lambda
null
null
null
null
q-bio.BM
null
A variety of viruses tightly pack their genetic material into protein capsids that are barely large enough to enclose the genome. In particular, in bacteriophages, forces as high as 60 pN are encountered during packaging and ejection, produced by DNA bending elasticity and self-interactions. The high forces are believed to be important for the ejection process, though the extent of their involvement is not yet clear. As a result, there is a need for quantitative models and experiments that reveal the nature of the forces relevant to DNA ejection. Here we report measurements of the ejection forces for two different mutants of bacteriophage lambda, lambda b221cI26 and lambda cI60, which differ in genome length by ~30%. As expected for a force-driven ejection mechanism, the osmotic pressure at which DNA release is completely inhibited varies with the genome length: we find inhibition pressures of 15 atm and 25 atm, respectively, values that are in agreement with our theoretical calculations.
[ { "created": "Sat, 21 May 2005 18:29:33 GMT", "version": "v1" } ]
2007-05-23
[ [ "Grayson", "Paul", "" ], [ "Evilevitch", "Alex", "" ], [ "Inamdar", "Mandar M.", "" ], [ "Purohit", "Prashant K.", "" ], [ "Gelbart", "William M.", "" ], [ "Knobler", "Charles M.", "" ], [ "Phillips", "Rob", "" ] ]
A variety of viruses tightly pack their genetic material into protein capsids that are barely large enough to enclose the genome. In particular, in bacteriophages, forces as high as 60 pN are encountered during packaging and ejection, produced by DNA bending elasticity and self-interactions. The high forces are believed to be important for the ejection process, though the extent of their involvement is not yet clear. As a result, there is a need for quantitative models and experiments that reveal the nature of the forces relevant to DNA ejection. Here we report measurements of the ejection forces for two different mutants of bacteriophage lambda, lambda b221cI26 and lambda cI60, which differ in genome length by ~30%. As expected for a force-driven ejection mechanism, the osmotic pressure at which DNA release is completely inhibited varies with the genome length: we find inhibition pressures of 15 atm and 25 atm, respectively, values that are in agreement with our theoretical calculations.
2012.14504
Isaac Lucas-G\'omez
Isaac Lucas-G\'omez, Abelardo L\'opez-Fern\'andez, Brenda Karen Gonz\'alez-P\'erez, M. Rivas-Castillo Andrea, A. Valdez Calder\'on and Manuel A. Gayosso-Morales
Docking study for Protein Nsp-12 of SARS-CoV with Betalains and Alfa-Bisabolol
18 pages and 11 figures
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
The present health crisis tests the response of modern science and medicine in finding treatment for the new COVID-19 disease. The presentation on the world stage of antivirals such as remdesivir reflects the continuous investigation of biologically active molecules with multiple theoretical, computational and experimental tools. Diseases such as COVID-19 remind us that research into active ingredients for therapeutic purposes should cover all available sources, such as plants. In the present work, in silico tools, specifically docking studies, were used to evaluate the binding and inhibition capacity of an antiviral such as remdesivir on the NSP-12 protein of SARS-CoV, a polymerase that is key in the replication of the SARS-CoV virus. The results are then compared with a docking analysis of two natural products (alpha-bisabolol and betalain) with the SARS-CoV protein, in order to find more candidates for COVID-19 virus replication inhibitors, in addition to increasing the studies that help explain the specific mechanisms of the SARS-CoV-2 virus, remembering that we will have to live with the virus for an indefinite time from now on. Finally, natural products such as betalains may have inhibitory effects of a small order, but in conjunction with other synergistic active ingredients they may increase their inhibitory effect on the NSP-12 protein of SARS-CoV.
[ { "created": "Mon, 28 Dec 2020 22:07:20 GMT", "version": "v1" } ]
2021-01-01
[ [ "Lucas-Gómez", "Isaac", "" ], [ "López-Fernández", "Abelardo", "" ], [ "González-Pérez", "Brenda Karen", "" ], [ "Andrea", "M. Rivas-Castillo", "" ], [ "Calderón", "A. Valdez", "" ], [ "Gayosso-Morales", "Manuel A.", "" ] ]
The present health crisis tests the response of modern science and medicine in finding treatment for the new COVID-19 disease. The presentation on the world stage of antivirals such as remdesivir reflects the continuous investigation of biologically active molecules with multiple theoretical, computational and experimental tools. Diseases such as COVID-19 remind us that research into active ingredients for therapeutic purposes should cover all available sources, such as plants. In the present work, in silico tools, specifically docking studies, were used to evaluate the binding and inhibition capacity of an antiviral such as remdesivir on the NSP-12 protein of SARS-CoV, a polymerase that is key in the replication of the SARS-CoV virus. The results are then compared with a docking analysis of two natural products (alpha-bisabolol and betalain) with the SARS-CoV protein, in order to find more candidates for COVID-19 virus replication inhibitors, in addition to increasing the studies that help explain the specific mechanisms of the SARS-CoV-2 virus, remembering that we will have to live with the virus for an indefinite time from now on. Finally, natural products such as betalains may have inhibitory effects of a small order, but in conjunction with other synergistic active ingredients they may increase their inhibitory effect on the NSP-12 protein of SARS-CoV.
q-bio/0610026
Isaac Klapper
I. Klapper, P. Gilbert, B.P. Ayati, J. Dockery, P.S. Stewart
Senescence Can Explain Microbial Persistence
16 pages
null
null
null
q-bio.CB
null
It has been known for many years that small fractions of persister cells resist killing in many bacterial colony-antimicrobial confrontations. These persisters are not believed to be mutants. Rather it has been hypothesized that they are phenotypic variants. Current models allow cells to switch in and out of the persister phenotype. Here we suggest a different explanation, namely senescence, for persister formation. Using a mathematical model including age structure, we show that senescence provides a natural explanation for persister-related phenomena including the observations that persister fraction depends on growth phase in batch culture and dilution rate in continuous culture.
[ { "created": "Fri, 13 Oct 2006 21:06:38 GMT", "version": "v1" } ]
2007-05-23
[ [ "Klapper", "I.", "" ], [ "Gilbert", "P.", "" ], [ "Ayati", "B. P.", "" ], [ "Dockery", "J.", "" ], [ "Stewart", "P. S.", "" ] ]
It has been known for many years that small fractions of persister cells resist killing in many bacterial colony-antimicrobial confrontations. These persisters are not believed to be mutants. Rather it has been hypothesized that they are phenotypic variants. Current models allow cells to switch in and out of the persister phenotype. Here we suggest a different explanation, namely senescence, for persister formation. Using a mathematical model including age structure, we show that senescence provides a natural explanation for persister-related phenomena including the observations that persister fraction depends on growth phase in batch culture and dilution rate in continuous culture.
1709.06199
Roman Voronov
Q. L. Pham, D. Chege, T. Dijamco, J. Brito, E. Stein, N. A. N. Tong, S. Basuray, and R. S. Voronov
Cell Sequence and Mitosis Affect Fibroblast Directional Decision-Making during Chemotaxis in Tissue-Mimicking Microfluidic Mazes
see last page for supplemental materials
Pham, Q.L., Rodrigues, L.N., Maximov, M.A. et al. Cel. Mol. Bioeng. (2018)
10.1007/s12195-018-0551-x
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Directed fibroblast migration is central to highly proliferative processes in regenerative medicine and developmental biology, such as wound healing and embryogenesis. However, the mechanisms by which single fibroblasts affect each other's directional decisions, while chemotaxing in microscopic tissue pores, are not well understood. Therefore, we explored the effects of two types of relevant social interactions on fibroblast PDGF-BB-induced migration in microfluidic tissue-mimicking mazes: cell sequence and mitosis. Surprisingly, it was found that in both cases the cells display behavior that contradicts the chemoattractant gradient established in the maze. In the case of sequence, the cells do not like to take the same path through the maze as their predecessor when faced with a bifurcation. On the contrary, they tend to alternate - if a leading cell takes the shorter (steeper gradient) path, the cell following it chooses the longer (weaker gradient) path, and vice versa. Additionally, we found that when a mother cell divides, its two daughters go in opposite directions (even if it means migrating against the chemoattractant gradient and overcoming on-going cell traffic). Therefore, it is apparent that fibroblasts modify each other's directional decisions in a manner that is counter-intuitive to what is expected from classical chemotaxis theory. Consequently, accounting for these effects could lead to a better understanding of tissue generation in vivo, and result in more advanced engineered tissue products in vitro.
[ { "created": "Mon, 18 Sep 2017 23:28:01 GMT", "version": "v1" } ]
2018-08-29
[ [ "Pham", "Q. L.", "" ], [ "Chege", "D.", "" ], [ "Dijamco", "T.", "" ], [ "Brito", "J.", "" ], [ "Stein", "E.", "" ], [ "Tong", "N. A. N.", "" ], [ "Basuray", "S.", "" ], [ "Voronov", "R. S.", "" ] ]
Directed fibroblast migration is central to highly proliferative processes in regenerative medicine and developmental biology, such as wound healing and embryogenesis. However, the mechanisms by which single fibroblasts affect each other's directional decisions, while chemotaxing in microscopic tissue pores, are not well understood. Therefore, we explored the effects of two types of relevant social interactions on fibroblast PDGF-BB-induced migration in microfluidic tissue-mimicking mazes: cell sequence and mitosis. Surprisingly, it was found that in both cases the cells display behavior that contradicts the chemoattractant gradient established in the maze. In the case of sequence, the cells do not like to take the same path through the maze as their predecessor when faced with a bifurcation. On the contrary, they tend to alternate - if a leading cell takes the shorter (steeper gradient) path, the cell following it chooses the longer (weaker gradient) path, and vice versa. Additionally, we found that when a mother cell divides, its two daughters go in opposite directions (even if it means migrating against the chemoattractant gradient and overcoming on-going cell traffic). Therefore, it is apparent that fibroblasts modify each other's directional decisions in a manner that is counter-intuitive to what is expected from classical chemotaxis theory. Consequently, accounting for these effects could lead to a better understanding of tissue generation in vivo, and result in more advanced engineered tissue products in vitro.
2402.02766
Arsenii Onuchin Andreevich
Viktor Plusnin, Nadezhda Khoroshavkina, Ali Abonakour, Arsenii Onuchin, Viktoriya Manyukhina, Nadezhda Titova, Artem Kirsanov, Vasiliy Solodovnikov, Nikita Pospelov, Vladimir Sotskov
Multiple Neuronal Specializations Elicited By Socially Driven Recognition Of Food Odors
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
This study investigates the dynamics of non-spatial specializations in hippocampal place cells during exposure to novel environments. Hippocampal place cells, known for their role in spatial mapping, exhibit multi-modal responses to sensory cues. The research focuses on understanding how these cells adapt their specialization in response to novel stimuli, specifically examining non-spatial determinants such as odors and social interactions. Using a social-driven food odor recognition model in mice, the study records CA1 hippocampal neuron activity through miniscope imaging. The experimental design involves demonstrations of novel odors to mice, followed by observation sessions with food options. The analysis employs deep neural network tools for behavior tracking and the custom-developed INTENS software package for identifying neural specializations. Results indicate multiple specializations, particularly those related to odor, with differences observed between training and testing sessions. The findings suggest a temporal aspect to the formation of these specializations in novel conditions, necessitating further investigation for precise tracking.
[ { "created": "Mon, 5 Feb 2024 06:58:58 GMT", "version": "v1" } ]
2024-02-06
[ [ "Plusnin", "Viktor", "" ], [ "Khoroshavkina", "Nadezhda", "" ], [ "Abonakour", "Ali", "" ], [ "Onuchin", "Arsenii", "" ], [ "Manyukhina", "Viktoriya", "" ], [ "Titova", "Nadezhda", "" ], [ "Kirsanov", "Artem", "" ], [ "Solodovnikov", "Vasiliy", "" ], [ "Pospelov", "Nikita", "" ], [ "Sotskov", "Vladimir", "" ] ]
This study investigates the dynamics of non-spatial specializations in hippocampal place cells during exposure to novel environments. Hippocampal place cells, known for their role in spatial mapping, exhibit multi-modal responses to sensory cues. The research focuses on understanding how these cells adapt their specialization in response to novel stimuli, specifically examining non-spatial determinants such as odors and social interactions. Using a social-driven food odor recognition model in mice, the study records CA1 hippocampal neuron activity through miniscope imaging. The experimental design involves demonstrations of novel odors to mice, followed by observation sessions with food options. The analysis employs deep neural network tools for behavior tracking and the custom-developed INTENS software package for identifying neural specializations. Results indicate multiple specializations, particularly those related to odor, with differences observed between training and testing sessions. The findings suggest a temporal aspect to the formation of these specializations in novel conditions, necessitating further investigation for precise tracking.
2010.00959
August Balkema
Guus Balkema
Shielding the vulnerable in an epidemic: a numerical approach
13 + 6 pages, 11 figures
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The death toll for Covid-19 may be reduced by dividing the population into two classes, the vulnerable and the fit, with different lockdown regimes. Instead of one reproduction number there now are four parameters. These make it possible to quantify the effect of the social distancing measures. There is a simple stochastic model for epidemics in a two-type population. Apart from the sizes of the populations of the vulnerable and the fit, and the initial number of infected in the two classes, only the four reproduction parameters are needed to run the two-type Reed-Frost model. The program is simple and fast. On a PC it takes less than five minutes to do a hundred thousand simulations of the epidemic for a population of the size of the US. Epidemics are non-linear processes. Results may be counterintuitive. The average number of vulnerable persons infected by an infectious fit person is a crucial parameter of the epidemic in the two-type population. Intuitively this parameter should be small. However, simulations show that even if this parameter is small the death toll may be higher than without shielding. Under certain conditions, increasing the value of the parameter may reduce the death toll. The article addresses these blind spots in our intuition.
[ { "created": "Thu, 1 Oct 2020 07:07:55 GMT", "version": "v1" } ]
2020-10-05
[ [ "Balkema", "Guus", "" ] ]
The death toll for Covid-19 may be reduced by dividing the population into two classes, the vulnerable and the fit, with different lockdown regimes. Instead of one reproduction number there now are four parameters. These make it possible to quantify the effect of the social distancing measures. There is a simple stochastic model for epidemics in a two-type population. Apart from the sizes of the populations of the vulnerable and the fit, and the initial number of infected in the two classes, only the four reproduction parameters are needed to run the two-type Reed-Frost model. The program is simple and fast. On a PC it takes less than five minutes to do a hundred thousand simulations of the epidemic for a population of the size of the US. Epidemics are non-linear processes. Results may be counterintuitive. The average number of vulnerable persons infected by an infectious fit person is a crucial parameter of the epidemic in the two-type population. Intuitively this parameter should be small. However, simulations show that even if this parameter is small the death toll may be higher than without shielding. Under certain conditions, increasing the value of the parameter may reduce the death toll. The article addresses these blind spots in our intuition.
1404.6126
Markus Dahlem
Markus A. Dahlem and J\"urgen Kurths and Michel D. Ferrari and Kazuyuki Aihara and Marten Scheffer and Arne May
Understanding migraine using dynamical network biomarkers
4 pages, 1 figure
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Mathematical modeling approaches are becoming ever more established in clinical neuroscience. They provide insight that is key to understand complex interactions of network phenomena, in general, and interactions within the migraine generator network, in particular. Purpose: In this study, two recent modeling studies on migraine are set in the context of premonitory symptoms that are easy to confuse for trigger factors. This causality confusion is explained, if migraine attacks are initiated by a transition caused by a tipping point. Conclusion: We need to characterize the involved neuronal and autonomic subnetworks and their connections during all parts of the migraine cycle if we are ever to understand migraine. We predict that mathematical models have the potential to dismantle large and correlated fluctuations in such subnetworks as a dynamical network biomarker of migraine.
[ { "created": "Thu, 24 Apr 2014 14:10:27 GMT", "version": "v1" } ]
2014-04-25
[ [ "Dahlem", "Markus A.", "" ], [ "Kurths", "Jürgen", "" ], [ "Ferrari", "Michel D.", "" ], [ "Aihara", "Kazuyuki", "" ], [ "Scheffer", "Marten", "" ], [ "May", "Arne", "" ] ]
Background: Mathematical modeling approaches are becoming ever more established in clinical neuroscience. They provide insight that is key to understand complex interactions of network phenomena, in general, and interactions within the migraine generator network, in particular. Purpose: In this study, two recent modeling studies on migraine are set in the context of premonitory symptoms that are easy to confuse for trigger factors. This causality confusion is explained, if migraine attacks are initiated by a transition caused by a tipping point. Conclusion: We need to characterize the involved neuronal and autonomic subnetworks and their connections during all parts of the migraine cycle if we are ever to understand migraine. We predict that mathematical models have the potential to dismantle large and correlated fluctuations in such subnetworks as a dynamical network biomarker of migraine.
2202.04773
Jacob Russin
Jacob Russin, Maryam Zolfaghar, Seongmin A. Park, Erie Boorman, Randall C. O'Reilly
A Neural Network Model of Continual Learning with Cognitive Control
7 pages, 5 figures, paper accepted as a talk to CogSci 2022 (https://escholarship.org/uc/item/3gn3w58z)
CogSci 2022, 44
null
null
q-bio.NC cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
Neural networks struggle in continual learning settings from catastrophic forgetting: when trials are blocked, new learning can overwrite the learning from previous blocks. Humans learn effectively in these settings, in some cases even showing an advantage of blocking, suggesting the brain contains mechanisms to overcome this problem. Here, we build on previous work and show that neural networks equipped with a mechanism for cognitive control do not exhibit catastrophic forgetting when trials are blocked. We further show an advantage of blocking over interleaving when there is a bias for active maintenance in the control signal, implying a tradeoff between maintenance and the strength of control. Analyses of map-like representations learned by the networks provided additional insights into these mechanisms. Our work highlights the potential of cognitive control to aid continual learning in neural networks, and offers an explanation for the advantage of blocking that has been observed in humans.
[ { "created": "Wed, 9 Feb 2022 23:53:05 GMT", "version": "v1" }, { "created": "Thu, 3 Nov 2022 22:50:33 GMT", "version": "v2" } ]
2022-11-07
[ [ "Russin", "Jacob", "" ], [ "Zolfaghar", "Maryam", "" ], [ "Park", "Seongmin A.", "" ], [ "Boorman", "Erie", "" ], [ "O'Reilly", "Randall C.", "" ] ]
Neural networks struggle in continual learning settings from catastrophic forgetting: when trials are blocked, new learning can overwrite the learning from previous blocks. Humans learn effectively in these settings, in some cases even showing an advantage of blocking, suggesting the brain contains mechanisms to overcome this problem. Here, we build on previous work and show that neural networks equipped with a mechanism for cognitive control do not exhibit catastrophic forgetting when trials are blocked. We further show an advantage of blocking over interleaving when there is a bias for active maintenance in the control signal, implying a tradeoff between maintenance and the strength of control. Analyses of map-like representations learned by the networks provided additional insights into these mechanisms. Our work highlights the potential of cognitive control to aid continual learning in neural networks, and offers an explanation for the advantage of blocking that has been observed in humans.
q-bio/0410011
Andrea Giansanti (Mr.
Luca Ferraro, Andrea Giansanti, Giovanni Giuliano and Vittorio Rosato
Co-expression of statistically over-represented peptides in proteomes: a key to phylogeny ?
Minor typos corrected, partly rewritten conclusions and bibliography added
null
null
null
q-bio.MN q-bio.GN q-bio.PE
null
It is proposed that the co-expression of statistically significant motifs among the sequences of a proteome is a phylogenetic trait. From the co-expression matrix of such motifs in a group of prokaryotic proteomes a suitable definition of a phylogenetic distance is introduced and the corresponding distance matrix between proteomes is constructed. From the distance matrix a phylogenetic tree is inferred, following a standard procedure. It compares well with a reference tree deduced from a distance matrix obtained from the alignment of ribosomal RNA sequences. Our results are consistent with the hypothesis that biological evolution manifests itself with a modulation of basic correlations between shared peptides of short length, present in protein sequences. Moreover, the simple procedure we propose reconfirms that it is possible, sampling entire proteomes, to average the effects of lateral gene transfer and infer reasonable phylogenies.
[ { "created": "Sun, 10 Oct 2004 13:46:10 GMT", "version": "v1" }, { "created": "Tue, 14 Dec 2004 20:34:38 GMT", "version": "v2" } ]
2007-05-23
[ [ "Ferraro", "Luca", "" ], [ "Giansanti", "Andrea", "" ], [ "Giuliano", "Giovanni", "" ], [ "Rosato", "Vittorio", "" ] ]
It is proposed that the co-expression of statistically significant motifs among the sequences of a proteome is a phylogenetic trait. From the co-expression matrix of such motifs in a group of prokaryotic proteomes a suitable definition of a phylogenetic distance is introduced and the corresponding distance matrix between proteomes is constructed. From the distance matrix a phylogenetic tree is inferred, following a standard procedure. It compares well with a reference tree deduced from a distance matrix obtained from the alignment of ribosomal RNA sequences. Our results are consistent with the hypothesis that biological evolution manifests itself with a modulation of basic correlations between shared peptides of short length, present in protein sequences. Moreover, the simple procedure we propose reconfirms that it is possible, sampling entire proteomes, to average the effects of lateral gene transfer and infer reasonable phylogenies.
1503.03680
Santosh Tirunagari
Santosh Tirunagari, Norman Poh, Hajara Abdulrahman, Nawal Nemmour and David Windridge
Breast Cancer Data Analytics With Missing Values: A study on Ethnic, Age and Income Groups
The paper analyzes breast cancer data with missing values, where the missing values of ethnicity are imputed using a Naive Bayes classifier. Further, the data is analysed from a domain perspective as well, such as the effect of ethnicity, age, and income on breast cancer survival.
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An analysis of breast cancer incidences in women and the relationship between ethnicity and survival rate has been an ongoing study, with recorded incidences of missing values in the secondary data. In this paper, we study and report the results of breast cancer survival rate by ethnicity, age and income groups from the dataset collected for 53593 patients in South East England between the years 1998 and 2003. In addition, we also predict the missing values for the ethnic groups in the dataset. The principal findings in our study suggest that: 1) women of white ethnicity in South East England have the highest survival rate when compared to those of black ethnicity, 2) high income groups have higher survival rates than lower income groups, and 3) age groups between 80 and 95 have a lower survival rate.
[ { "created": "Thu, 12 Mar 2015 11:35:53 GMT", "version": "v1" } ]
2015-03-13
[ [ "Tirunagari", "Santosh", "" ], [ "Poh", "Norman", "" ], [ "Abdulrahman", "Hajara", "" ], [ "Nemmour", "Nawal", "" ], [ "Windridge", "David", "" ] ]
An analysis of breast cancer incidences in women and the relationship between ethnicity and survival rate has been an ongoing study, with recorded incidences of missing values in the secondary data. In this paper, we study and report the results of breast cancer survival rate by ethnicity, age and income groups from the dataset collected for 53593 patients in South East England between the years 1998 and 2003. In addition, we also predict the missing values for the ethnic groups in the dataset. The principal findings in our study suggest that: 1) women of white ethnicity in South East England have the highest survival rate when compared to those of black ethnicity, 2) high income groups have higher survival rates than lower income groups, and 3) age groups between 80 and 95 have a lower survival rate.
1709.09155
Wayne Hayes
Huy Pham, Emile Ramez Shehada, Shawna Stahlheber, Wayne B. Hayes
No cell left behind: automated physics-based tracking of {\em every} cell in a dense and growing colony
21 pages, 14 Figures, 1 Table
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A human watching a video of closely-packed cells can generally identify every individual cell, regardless of density and noise, but most currently-available cell-tracking software cannot. This is because the human brain automatically builds a physical model of the scene as it progresses, allowing it to readily distinguish cells from noise and not be unduly confused by overlapping cells. Here we introduce software that uses physical rules to create a simulation of the activity in a cell video, synchronizing itself with the video as the activity progresses. Because our simulation includes every individual cell, we are trivially able to track all cell movement, growth, and divisions. Our method is also particularly robust to noise without requiring any substantial image processing. We demonstrate the effectiveness of this method by tracking the motion and lineage tree of a densely-packed colony of cells that grows from 4 to more than 200 individuals.
[ { "created": "Tue, 26 Sep 2017 17:43:46 GMT", "version": "v1" } ]
2017-09-27
[ [ "Pham", "Huy", "" ], [ "Shehada", "Emile Ramez", "" ], [ "Stahlheber", "Shawna", "" ], [ "Hayes", "Wayne B.", "" ] ]
A human watching a video of closely-packed cells can generally identify every individual cell, regardless of density and noise, but most currently-available cell-tracking software cannot. This is because the human brain automatically builds a physical model of the scene as it progresses, allowing it to readily distinguish cells from noise and not be unduly confused by overlapping cells. Here we introduce software that uses physical rules to create a simulation of the activity in a cell video, synchronizing itself with the video as the activity progresses. Because our simulation includes every individual cell, we are trivially able to track all cell movement, growth, and divisions. Our method is also particularly robust to noise without requiring any substantial image processing. We demonstrate the effectiveness of this method by tracking the motion and lineage tree of a densely-packed colony of cells that grows from 4 to more than 200 individuals.
1006.0017
Teruhiko Yoneyama
Teruhiko Yoneyama and Mukkai S. Krishnamoorthy
Simulating the Spread of Pandemics with Different Origins Considering International Traffic
null
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/3.0/
Pandemics have the potential to cause immense disruption and damage to communities and societies. In this paper, we propose a hybrid model to determine how a pandemic spreads through the world. The model combines an SEIR-based model for local areas with a network model for global connections between countries. We simulate potential pandemics with different origins and find how the origin of a pandemic influences its impact on the world. We investigate a travelers network derived from real data, simulate 65 countries, and see how the pandemic spreads through the world from 14 different countries as origins. We compare the differences in terms of the impact in individual countries and the impact in the world. As a result, the impact in the world increases when the pandemic originates from the United States, India, or China.
[ { "created": "Tue, 11 May 2010 22:31:37 GMT", "version": "v1" } ]
2010-06-02
[ [ "Yoneyama", "Teruhiko", "" ], [ "Krishnamoorthy", "Mukkai S.", "" ] ]
Pandemics have the potential to cause immense disruption and damage to communities and societies. In this paper, we propose a hybrid model to determine how a pandemic spreads through the world. The model combines an SEIR-based model for local areas with a network model for global connections between countries. We simulate potential pandemics with different origins and find how the origin of a pandemic influences its impact on the world. We investigate a travelers network derived from real data, simulate 65 countries, and see how the pandemic spreads through the world from 14 different countries as origins. We compare the differences in terms of the impact in individual countries and the impact in the world. As a result, the impact in the world increases when the pandemic originates from the United States, India, or China.
0801.3435
Giuseppe Gaeta
G. Gaeta
A mean-field version of the Nicodemi-Prisco SSB model for X-chromosome inactivation
null
null
10.1142/S140292510900008X
null
q-bio.BM
null
Nicodemi and Prisco recently proposed a model for X-chromosome inactivation in mammals, explaining this phenomenon in terms of a spontaneous symmetry-breaking mechanism [{\it Phys. Rev. Lett.} 99 (2007), 108104]. Here we provide a mean-field version of their model.
[ { "created": "Tue, 22 Jan 2008 18:31:56 GMT", "version": "v1" }, { "created": "Tue, 4 Mar 2008 18:16:06 GMT", "version": "v2" } ]
2015-05-13
[ [ "Gaeta", "G.", "" ] ]
Nicodemi and Prisco recently proposed a model for X-chromosome inactivation in mammals, explaining this phenomenon in terms of a spontaneous symmetry-breaking mechanism [{\it Phys. Rev. Lett.} 99 (2007), 108104]. Here we provide a mean-field version of their model.
2304.02936
Xavier Richard
Xavier Richard, Beno\^it Richard, Christian Mazza, Jan Roelof van der Meer
Complete mathematical characterization of two simple toggle-switch biological systems
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prokaryotic gene expression is dynamic and noisy, and can lead to phenotypic variations. Previous work has shown that the selection of such variation can be modeled through basic reaction networks described by ODEs displaying bistable behavior. While previous mathematical studies have shown that mono- or bistable behavior depends on the rates of the reactions in the system, no analytical solution for the curves delineating the actual parameter conditions that result in mono- or bistability has so far been provided. In this work we provide the first explicit analytical solution for the boundary curve that separates the parameter space into domains where double positive and double negative feedback loops become bistable.
[ { "created": "Thu, 6 Apr 2023 08:50:36 GMT", "version": "v1" }, { "created": "Fri, 7 Apr 2023 09:47:48 GMT", "version": "v2" } ]
2023-04-10
[ [ "Richard", "Xavier", "" ], [ "Richard", "Benoît", "" ], [ "Mazza", "Christian", "" ], [ "van der Meer", "Jan Roelof", "" ] ]
Prokaryotic gene expression is dynamic and noisy, and can lead to phenotypic variations. Previous work has shown that the selection of such variation can be modeled through basic reaction networks described by ODEs displaying bistable behavior. While previous mathematical studies have shown that mono- or bistable behavior depends on the rates of the reactions in the system, no analytical solution for the curves delineating the actual parameter conditions that result in mono- or bistability has so far been provided. In this work we provide the first explicit analytical solution for the boundary curve that separates the parameter space into domains where double positive and double negative feedback loops become bistable.
2003.08149
Arnab Bhadra
Arnab Bhadra and Kalidas Y
Site2Vec: a reference frame invariant algorithm for vector embedding of protein-ligand binding sites
null
null
10.1088/2632-2153/abad88
null
q-bio.BM cs.CG cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein-ligand interactions are one of the fundamental types of molecular interactions in living systems. Ligands are small molecules that interact with protein molecules at specific regions on their surfaces called binding sites. Tasks such as assessment of protein functional similarity and detection of side effects of drugs need identification of similar binding sites of disparate proteins across diverse pathways. Machine learning methods for similarity assessment require feature descriptors of binding sites. Traditional methods based on hand-engineered motifs and atomic configurations are not scalable across several thousands of sites. In this regard, deep neural network algorithms are now deployed which can capture very complex input feature spaces. However, one fundamental challenge in applying deep learning to structures of binding sites is the input representation and the reference frame. We report here a novel algorithm, Site2Vec, that derives a reference-frame-invariant vector embedding of a protein-ligand binding site. The method is based on pairwise distances between representative points and chemical compositions in terms of constituent amino acids of a site. The vector embedding serves as a locality-sensitive hash function for proximity queries and determining similar sites. The method has been the top performer, with more than 95% quality scores, in extensive benchmarking studies carried out over 10 datasets and against 23 other site comparison methods. The algorithm serves for high-throughput processing and has been evaluated for stability with respect to reference frame shifts, coordinate perturbations and residue mutations. We provide Site2Vec as a stand-alone executable and a web service hosted at \url{http://services.iittp.ac.in/bioinfo/home}.
[ { "created": "Wed, 18 Mar 2020 10:56:27 GMT", "version": "v1" }, { "created": "Mon, 27 Jul 2020 10:30:14 GMT", "version": "v2" } ]
2020-08-11
[ [ "Bhadra", "Arnab", "" ], [ "Y", "Kalidas", "" ] ]
Protein-ligand interactions are one of the fundamental types of molecular interactions in living systems. Ligands are small molecules that interact with protein molecules at specific regions on their surfaces called binding sites. Tasks such as assessment of protein functional similarity and detection of side effects of drugs need identification of similar binding sites of disparate proteins across diverse pathways. Machine learning methods for similarity assessment require feature descriptors of binding sites. Traditional methods based on hand-engineered motifs and atomic configurations are not scalable across several thousands of sites. In this regard, deep neural network algorithms are now deployed which can capture very complex input feature spaces. However, one fundamental challenge in applying deep learning to structures of binding sites is the input representation and the reference frame. We report here a novel algorithm, Site2Vec, that derives a reference-frame-invariant vector embedding of a protein-ligand binding site. The method is based on pairwise distances between representative points and chemical compositions in terms of constituent amino acids of a site. The vector embedding serves as a locality-sensitive hash function for proximity queries and determining similar sites. The method has been the top performer, with more than 95% quality scores, in extensive benchmarking studies carried out over 10 datasets and against 23 other site comparison methods. The algorithm serves for high-throughput processing and has been evaluated for stability with respect to reference frame shifts, coordinate perturbations and residue mutations. We provide Site2Vec as a stand-alone executable and a web service hosted at \url{http://services.iittp.ac.in/bioinfo/home}.
1108.5399
Domenico Napoletani
D. Napoletani, E. Petricoin, D. C. Struppa
Geometric Path Integrals. A Language for Multiscale Biology and Systems Robustness
null
The Mathematical Legacy of Leon Ehrenpreis, Springer Proceedings in Mathematics, Volume 16, 2012, pp 247-260
10.1007/978-88-470-1947-8_16
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we suggest that, under suitable conditions, supervised learning can provide the basis to formulate at the microscopic level quantitative questions on the phenotype structure of multicellular organisms. The problem of explaining the robustness of the phenotype structure is rephrased as a real geometrical problem on a fixed domain. We further suggest a generalization of path integrals that reduces the problem of deciding whether a given molecular network can generate specific phenotypes to a numerical property of a robustness function with complex output, for which we give heuristic justification. Finally, we use our formalism to interpret a pointedly quantitative developmental biology problem on the allowed number of pairs of legs in centipedes.
[ { "created": "Fri, 26 Aug 2011 21:50:41 GMT", "version": "v1" } ]
2012-06-14
[ [ "Napoletani", "D.", "" ], [ "Petricoin", "E.", "" ], [ "Struppa", "D. C.", "" ] ]
In this paper we suggest that, under suitable conditions, supervised learning can provide the basis to formulate at the microscopic level quantitative questions on the phenotype structure of multicellular organisms. The problem of explaining the robustness of the phenotype structure is rephrased as a real geometrical problem on a fixed domain. We further suggest a generalization of path integrals that reduces the problem of deciding whether a given molecular network can generate specific phenotypes to a numerical property of a robustness function with complex output, for which we give heuristic justification. Finally, we use our formalism to interpret a pointedly quantitative developmental biology problem on the allowed number of pairs of legs in centipedes.
1807.00226
Phan Nguyen
Phan Nguyen and Rosemary Braun
Time-lagged Ordered Lasso for network inference
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate gene regulatory networks can be used to explain the emergence of different phenotypes, disease mechanisms, and other biological functions. Many methods have been proposed to infer networks from gene expression data but have been hampered by problems such as low sample size, inaccurate constraints, and incomplete characterizations of regulatory dynamics. Since expression regulation is dynamic, time-course data can be used to infer causality, but these datasets tend to be short or sparsely sampled. In addition, temporal methods typically assume that the expression of a gene at a time point depends on the expression of other genes at only the immediately preceding time point, while other methods include additional time points without any constraints to account for their temporal distance. These limitations can contribute to inaccurate networks with many missing and anomalous links. We adapted the time-lagged Ordered Lasso, a regularized regression method with temporal monotonicity constraints, for \textit{de novo} reconstruction. We also developed a semi-supervised method that embeds prior network information into the Ordered Lasso to discover novel regulatory dependencies in existing pathways. We evaluated these approaches on simulated data for a repressilator, time-course data from past DREAM challenges, and a HeLa cell cycle dataset to show that they can produce accurate networks subject to the dynamics and assumptions of the time-lagged Ordered Lasso regression.
[ { "created": "Sat, 30 Jun 2018 21:16:20 GMT", "version": "v1" }, { "created": "Mon, 10 Dec 2018 07:59:12 GMT", "version": "v2" } ]
2018-12-11
[ [ "Nguyen", "Phan", "" ], [ "Braun", "Rosemary", "" ] ]
Accurate gene regulatory networks can be used to explain the emergence of different phenotypes, disease mechanisms, and other biological functions. Many methods have been proposed to infer networks from gene expression data but have been hampered by problems such as low sample size, inaccurate constraints, and incomplete characterizations of regulatory dynamics. Since expression regulation is dynamic, time-course data can be used to infer causality, but these datasets tend to be short or sparsely sampled. In addition, temporal methods typically assume that the expression of a gene at a time point depends on the expression of other genes at only the immediately preceding time point, while other methods include additional time points without any constraints to account for their temporal distance. These limitations can contribute to inaccurate networks with many missing and anomalous links. We adapted the time-lagged Ordered Lasso, a regularized regression method with temporal monotonicity constraints, for \textit{de novo} reconstruction. We also developed a semi-supervised method that embeds prior network information into the Ordered Lasso to discover novel regulatory dependencies in existing pathways. We evaluated these approaches on simulated data for a repressilator, time-course data from past DREAM challenges, and a HeLa cell cycle dataset to show that they can produce accurate networks subject to the dynamics and assumptions of the time-lagged Ordered Lasso regression.
1808.01805
Paola Sessa
Arianna Schiano Lomoriello, Federica Meconi, Irene Rinaldi, and Paola Sessa
Out of sight out of mind: Perceived physical distance between the observer and someone in pain shapes observer's neural empathic reactions
42 pages, 5 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social and affective relations may shape empathy to others' affective states. Previous studies also revealed that people tend to form very different mental representations of stimuli on the basis of their physical distance. In this regard, embodied cognition proposes that different physical distances between individuals activate different interpersonal processing modes, such that close physical distance tends to activate the interpersonal processing mode typical of socially and affectively close relationships. In Experiment 1, two groups of participants were administered a pain decision task involving upright and inverted face stimuli painfully or neutrally stimulated, and we monitored their neural empathic reactions by means of the event-related potentials (ERPs) technique. Crucially, participants were presented with face stimuli of one of two possible sizes in order to manipulate retinal size and perceived physical distance, roughly corresponding to the close and far portions of social distance. ERP modulations compatible with an empathic reaction were observed only for the group exposed to face stimuli appearing to be at a close social distance from the participants. This reaction was absent in the group exposed to smaller stimuli corresponding to face stimuli observed from a far social distance. In Experiment 2, a different group of participants was engaged in a match-to-sample task involving the two-size upright face stimuli of Experiment 1 to test whether the modulation of the neural empathic reaction observed in Experiment 1 could be ascribable to differences in the ability to identify faces of the two different sizes. Results suggested that face stimuli of the two sizes were equally identifiable. In line with the Construal Level and Embodied Simulation theoretical frameworks, we conclude that perceived physical distance may shape empathy as well as social and affective distance.
[ { "created": "Mon, 6 Aug 2018 10:06:19 GMT", "version": "v1" } ]
2018-08-07
[ [ "Lomoriello", "Arianna Schiano", "" ], [ "Meconi", "Federica", "" ], [ "Rinaldi", "Irene", "" ], [ "Sessa", "Paola", "" ] ]
Social and affective relations may shape empathy to others' affective states. Previous studies also revealed that people tend to form very different mental representations of stimuli on the basis of their physical distance. In this regard, embodied cognition proposes that different physical distances between individuals activate different interpersonal processing modes, such that close physical distance tends to activate the interpersonal processing mode typical of socially and affectively close relationships. In Experiment 1, two groups of participants were administered a pain decision task involving upright and inverted face stimuli painfully or neutrally stimulated, and we monitored their neural empathic reactions by means of the event-related potentials (ERPs) technique. Crucially, participants were presented with face stimuli of one of two possible sizes in order to manipulate retinal size and perceived physical distance, roughly corresponding to the close and far portions of social distance. ERP modulations compatible with an empathic reaction were observed only for the group exposed to face stimuli appearing to be at a close social distance from the participants. This reaction was absent in the group exposed to smaller stimuli corresponding to face stimuli observed from a far social distance. In Experiment 2, a different group of participants was engaged in a match-to-sample task involving the two-size upright face stimuli of Experiment 1 to test whether the modulation of the neural empathic reaction observed in Experiment 1 could be ascribable to differences in the ability to identify faces of the two different sizes. Results suggested that face stimuli of the two sizes were equally identifiable. In line with the Construal Level and Embodied Simulation theoretical frameworks, we conclude that perceived physical distance may shape empathy as well as social and affective distance.
2301.05905
Andrew Leifer
Kevin S. Chen, Rui Wu, Marc H. Gershow, and Andrew M. Leifer
Continuous odor profile monitoring to study olfactory navigation in small animals
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Olfactory navigation is observed across species and plays a crucial role in locating resources for survival. In the laboratory, understanding the behavioral strategies and neural circuits underlying odor-taxis requires a detailed understanding of the animal's sensory environment. For small model organisms like C. elegans and larval D. melanogaster, controlling and measuring the odor environment experienced by the animal can be challenging, especially for airborne odors, which are subject to subtle effects from airflow, temperature variation, and from the odor's adhesion, adsorption or reemission. Here we present a method to flexibly control and precisely measure airborne odor concentration in an arena with agar while imaging animal behavior. Crucially and unlike previous methods, our method allows continuous monitoring of the odor profile during behavior. We construct stationary chemical landscapes in an odor flow chamber through spatially patterned odorized air. The odor concentration is measured with a spatially distributed array of digital gas sensors. Careful placement of the sensors allows the odor concentration across the arena to be accurately inferred and continuously monitored at all points in time. We use this approach to measure the precise odor concentration that each animal experiences as it undergoes chemotaxis behavior and report chemotaxis strategies for C. elegans and D. melanogaster larvae populations under different spatial odor landscapes.
[ { "created": "Sat, 14 Jan 2023 12:15:02 GMT", "version": "v1" } ]
2023-01-18
[ [ "Chen", "Kevin S.", "" ], [ "Wu", "Rui", "" ], [ "Gershow", "Marc H.", "" ], [ "Leifer", "Andrew M.", "" ] ]
Olfactory navigation is observed across species and plays a crucial role in locating resources for survival. In the laboratory, understanding the behavioral strategies and neural circuits underlying odor-taxis requires a detailed understanding of the animal's sensory environment. For small model organisms like C. elegans and larval D. melanogaster, controlling and measuring the odor environment experienced by the animal can be challenging, especially for airborne odors, which are subject to subtle effects from airflow, temperature variation, and from the odor's adhesion, adsorption or reemission. Here we present a method to flexibly control and precisely measure airborne odor concentration in an arena with agar while imaging animal behavior. Crucially and unlike previous methods, our method allows continuous monitoring of the odor profile during behavior. We construct stationary chemical landscapes in an odor flow chamber through spatially patterned odorized air. The odor concentration is measured with a spatially distributed array of digital gas sensors. Careful placement of the sensors allows the odor concentration across the arena to be accurately inferred and continuously monitored at all points in time. We use this approach to measure the precise odor concentration that each animal experiences as it undergoes chemotaxis behavior and report chemotaxis strategies for C. elegans and D. melanogaster larvae populations under different spatial odor landscapes.
1806.01675
Dimos Goundaroulis
Julien Dorier, Dimos Goundaroulis, Fabrizio Benedetti and Andrzej Stasiak
Knoto-ID: a tool to study the entanglement of open protein chains using the concept of knotoids
3 pages, 1 figure. This is the Authors Original Version of the article has been accepted for publication in Bioinformatics Published by Oxford University Press
Bioinformatics (2018)
10.1093/bioinformatics/bty365
null
q-bio.BM math.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The backbone of most proteins forms an open curve. To study their entanglement, a common strategy consists in searching for the presence of knots in their backbones using topological invariants. However, this approach requires closing the curve into a loop, which alters the geometry of the curve. Knoto-ID allows evaluating the entanglement of open curves without the need to close them, using the recent concept of knotoids, which is a generalization of classical knot theory to open curves. Knoto-ID can analyse the global topology of the full chain as well as the local topology, by exhaustively studying all subchains or only determining the knotted core. Knoto-ID makes it possible to localize topologically non-trivial protein folds that are not detected by informatics tools detecting knotted protein folds.
[ { "created": "Tue, 5 Jun 2018 13:16:15 GMT", "version": "v1" } ]
2018-06-06
[ [ "Dorier", "Julien", "" ], [ "Goundaroulis", "Dimos", "" ], [ "Benedetti", "Fabrizio", "" ], [ "Stasiak", "Andrzej", "" ] ]
The backbone of most proteins forms an open curve. To study their entanglement, a common strategy consists in searching for the presence of knots in their backbones using topological invariants. However, this approach requires closing the curve into a loop, which alters the geometry of the curve. Knoto-ID allows evaluating the entanglement of open curves without the need to close them, using the recent concept of knotoids, which is a generalization of classical knot theory to open curves. Knoto-ID can analyse the global topology of the full chain as well as the local topology, by exhaustively studying all subchains or only determining the knotted core. Knoto-ID makes it possible to localize topologically non-trivial protein folds that are not detected by informatics tools detecting knotted protein folds.
q-bio/0703016
Chad M. Topaz
Chad M. Topaz, Andrew J. Bernoff, Sheldon Logan, Wyatt Toolson
A model for rolling swarms of locusts
18 pages, 11 figures
null
10.1140/epjst/e2008-00633-y
null
q-bio.PE nlin.AO
null
We construct an individual-based kinematic model of rolling migratory locust swarms. The model incorporates social interactions, gravity, wind, and the effect of the impenetrable boundary formed by the ground. We study the model using numerical simulations and tools from statistical mechanics, namely the notion of H-stability. For a free-space swarm (no wind or gravity), as the number of locusts increases, it approaches a crystalline lattice of fixed density if it is H-stable, and in contrast becomes ever more dense if it is catastrophic. Numerical simulations suggest that whether or not a swarm rolls depends on the statistical mechanical properties of the corresponding free-space swarm. For a swarm that is H-stable in free space, gravity causes the group to land and form a crystalline lattice. Wind, in turn, smears the swarm out along the ground until all individuals are stationary. In contrast, for a swarm that is catastrophic in free space, gravity causes the group to land and form a bubble-like shape. In the presence of wind, the swarm migrates with a rolling motion similar to natural locust swarms. The rolling structure is similar to that observed by biologists, and includes a takeoff zone, a landing zone, and a stationary zone where grounded locusts can rest and feed.
[ { "created": "Tue, 6 Mar 2007 20:38:52 GMT", "version": "v1" } ]
2009-11-13
[ [ "Topaz", "Chad M.", "" ], [ "Bernoff", "Andrew J.", "" ], [ "Logan", "Sheldon", "" ], [ "Toolson", "Wyatt", "" ] ]
We construct an individual-based kinematic model of rolling migratory locust swarms. The model incorporates social interactions, gravity, wind, and the effect of the impenetrable boundary formed by the ground. We study the model using numerical simulations and tools from statistical mechanics, namely the notion of H-stability. For a free-space swarm (no wind or gravity), as the number of locusts increases, it approaches a crystalline lattice of fixed density if it is H-stable, and in contrast becomes ever more dense if it is catastrophic. Numerical simulations suggest that whether or not a swarm rolls depends on the statistical mechanical properties of the corresponding free-space swarm. For a swarm that is H-stable in free space, gravity causes the group to land and form a crystalline lattice. Wind, in turn, smears the swarm out along the ground until all individuals are stationary. In contrast, for a swarm that is catastrophic in free space, gravity causes the group to land and form a bubble-like shape. In the presence of wind, the swarm migrates with a rolling motion similar to natural locust swarms. The rolling structure is similar to that observed by biologists, and includes a takeoff zone, a landing zone, and a stationary zone where grounded locusts can rest and feed.
2408.02552
Fabio Sanchez PhD
Jimmy Calvo-Monge, Baltazar Espinoza, Fabio Sanchez
Interplay between Foraging Choices and Population Growth Dynamics
14 pages, 11 figures
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
In this study, we couple a population dynamics model with a model for optimal foraging to study the interdependence between individual-level cost-benefits and population-scale dynamics. Specifically, we study the logistic growth model, which provides insights into population dynamics under resource limitations. Unlike exponential growth, the logistic model incorporates the concept of carrying capacity, thus offering a more realistic depiction of biological populations as they near environmental limits. We aim to study the impact of individual-level incentives driving behavioral responses in a dynamic environment. Specifically, we explore the coupled dynamics between population density and individuals' foraging times. Our results yield insights into the effects of population size on individuals' optimal foraging efforts, which impacts the population's size.
[ { "created": "Mon, 5 Aug 2024 15:29:03 GMT", "version": "v1" } ]
2024-08-06
[ [ "Calvo-Monge", "Jimmy", "" ], [ "Espinoza", "Baltazar", "" ], [ "Sanchez", "Fabio", "" ] ]
In this study, we couple a population dynamics model with a model for optimal foraging to study the interdependence between individual-level cost-benefits and population-scale dynamics. Specifically, we study the logistic growth model, which provides insights into population dynamics under resource limitations. Unlike exponential growth, the logistic model incorporates the concept of carrying capacity, thus offering a more realistic depiction of biological populations as they near environmental limits. We aim to study the impact of individual-level incentives driving behavioral responses in a dynamic environment. Specifically, we explore the coupled dynamics between population density and individuals' foraging times. Our results yield insights into the effects of population size on individuals' optimal foraging efforts, which impacts the population's size.
2001.05720
Jose V Manjon
Jos\'e V. Manj\'on, Jose E. Romero, Roberto Vivo-Hernando, Gregorio Rubio-Navarro, Mar\'ia De la Iglesia-Vaya, Fernando Aparici-Robles and Pierrick Coup\'e
Deep ICE: A Deep learning approach for MRI Intracranial Cavity Extraction
(19 pages, 5 figures)
null
null
null
q-bio.QM eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic methods for measuring normalized regional brain volumes from MRI data are a key tool to help in the objective diagnostic and follow-up of many neurological diseases. To estimate such regional brain volumes, the intracranial cavity volume is commonly used for normalization. In this paper, we present an accurate and efficient approach to automatically segment the intracranial cavity using a volumetric 3D convolutional neural network and a new 3D patch extraction strategy specially adapted to deal with the traditional low number of training cases available in supervised segmentation and the memory limitations of modern GPUs. The proposed method is compared with recent state-of-the-art methods and the results show an excellent accuracy and improved performance in terms of computational burden.
[ { "created": "Thu, 16 Jan 2020 10:00:26 GMT", "version": "v1" } ]
2020-01-17
[ [ "Manjón", "José V.", "" ], [ "Romero", "Jose E.", "" ], [ "Vivo-Hernando", "Roberto", "" ], [ "Rubio-Navarro", "Gregorio", "" ], [ "De la Iglesia-Vaya", "María", "" ], [ "Aparici-Robles", "Fernando", "" ], [ "Coupé", "Pierrick", "" ] ]
Automatic methods for measuring normalized regional brain volumes from MRI data are a key tool to help in the objective diagnostic and follow-up of many neurological diseases. To estimate such regional brain volumes, the intracranial cavity volume is commonly used for normalization. In this paper, we present an accurate and efficient approach to automatically segment the intracranial cavity using a volumetric 3D convolutional neural network and a new 3D patch extraction strategy specially adapted to deal with the traditional low number of training cases available in supervised segmentation and the memory limitations of modern GPUs. The proposed method is compared with recent state-of-the-art methods and the results show an excellent accuracy and improved performance in terms of computational burden.
1910.05846
Emmanuel Afolabi Bakare E. A.
Bakare E.A., Are E.B., Abolarin O.E., Osanyinlusi S.A., Ngwu Benitho, and Ubaka Obiaderi N
Mathematical Modelling and Analysis of Transmission Dynamics of Lassa fever
23 pages, 6 figures
null
null
null
q-bio.PE stat.AP stat.CO
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this work, a periodically-forced seasonal non-autonomous system of non-linear ordinary differential equations is developed that captures the dynamics of Lassa fever transmission and seasonal variation in the birth of Mastomys rodents, where time was measured in days to capture seasonality. It was shown that the model is epidemiologically meaningful and mathematically well-posed by using the results from the qualitative properties of the solution of the model. It was established that in order to eliminate Lassa fever disease, treatments with Ribavirin must be provided early to reduce mortality, and other preventive measures like an educational campaign, community hygiene, isolation of infected humans, and culling/destruction of rodents must be applied to also reduce the morbidity of the disease. Finally, the obtained results gave a primer framework for planning and designing cost-effective strategies for good interventions in eliminating Lassa fever.
[ { "created": "Sun, 13 Oct 2019 22:20:12 GMT", "version": "v1" } ]
2019-10-15
[ [ "A.", "Bakare E.", "" ], [ "B.", "Are E.", "" ], [ "E.", "Abolarin O.", "" ], [ "A.", "Osanyinlusi S.", "" ], [ "Benitho", "Ngwu", "" ], [ "N", "Ubaka Obiaderi", "" ] ]
In this work, a periodically-forced seasonal non-autonomous system of non-linear ordinary differential equations is developed that captures the dynamics of Lassa fever transmission and seasonal variation in the birth of Mastomys rodents, where time was measured in days to capture seasonality. It was shown that the model is epidemiologically meaningful and mathematically well-posed by using the results from the qualitative properties of the solution of the model. It was established that in order to eliminate Lassa fever disease, treatments with Ribavirin must be provided early to reduce mortality, and other preventive measures like an educational campaign, community hygiene, isolation of infected humans, and culling/destruction of rodents must be applied to also reduce the morbidity of the disease. Finally, the obtained results gave a primer framework for planning and designing cost-effective strategies for good interventions in eliminating Lassa fever.
2301.08023
Chiara Villa
Luis Almeida, J\'er\^ome Denis, Nathalie Ferrand, Tommaso Lorenzi, Antonin Prunet, Mich\'ele Sabbah, Chiara Villa
Evolutionary dynamics of glucose-deprived cancer cells: insights from experimentally-informed mathematical modelling
Main manuscript: 14 pages, 4 figures. Supplementary material: 29 pages, 11 figures, 2 tables
null
10.1098/rsif.2023.0587
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Glucose is a primary energy source for cancer cells. Several lines of evidence support the idea that monocarboxylate transporters, such as MCT1, elicit metabolic reprogramming of cancer cells in glucose-poor environments, allowing them to reuse lactate, a byproduct of glucose metabolism, as an alternative energy source with serious consequences for disease progression. We employ a synergistic experimental and mathematical modelling approach to explore the evolutionary processes at the root of cancer cell adaptation to glucose deprivation, with particular focus on the mechanisms underlying the increase in MCT1 expression observed in glucose-deprived aggressive cancer cells. Data from in vitro experiments on breast cancer cells are used to inform and calibrate a mathematical model that comprises a partial integro-differential equation for the dynamics of a population of cancer cells structured by the level of MCT1 expression. Analytical and numerical results of this model suggest that environment-induced changes in MCT1 expression mediated by lactate-associated signalling pathways enable a prompt adaptive response of glucose-deprived cancer cells, whilst fluctuations in MCT1 expression due to epigenetic changes create the substrate for environmental selection to act upon, speeding up the selective sweep underlying cancer cell adaptation to glucose deprivation, and may constitute a long-term bet-hedging mechanism.
[ { "created": "Thu, 19 Jan 2023 11:46:46 GMT", "version": "v1" }, { "created": "Mon, 9 Oct 2023 14:44:04 GMT", "version": "v2" }, { "created": "Wed, 24 Jan 2024 13:15:13 GMT", "version": "v3" } ]
2024-01-25
[ [ "Almeida", "Luis", "" ], [ "Denis", "Jérôme", "" ], [ "Ferrand", "Nathalie", "" ], [ "Lorenzi", "Tommaso", "" ], [ "Prunet", "Antonin", "" ], [ "Sabbah", "Michéle", "" ], [ "Villa", "Chiara", "" ] ]
Glucose is a primary energy source for cancer cells. Several lines of evidence support the idea that monocarboxylate transporters, such as MCT1, elicit metabolic reprogramming of cancer cells in glucose-poor environments, allowing them to reuse lactate, a byproduct of glucose metabolism, as an alternative energy source with serious consequences for disease progression. We employ a synergistic experimental and mathematical modelling approach to explore the evolutionary processes at the root of cancer cell adaptation to glucose deprivation, with particular focus on the mechanisms underlying the increase in MCT1 expression observed in glucose-deprived aggressive cancer cells. Data from in vitro experiments on breast cancer cells are used to inform and calibrate a mathematical model that comprises a partial integro-differential equation for the dynamics of a population of cancer cells structured by the level of MCT1 expression. Analytical and numerical results of this model suggest that environment-induced changes in MCT1 expression mediated by lactate-associated signalling pathways enable a prompt adaptive response of glucose-deprived cancer cells, whilst fluctuations in MCT1 expression due to epigenetic changes create the substrate for environmental selection to act upon, speeding up the selective sweep underlying cancer cell adaptation to glucose deprivation, and may constitute a long-term bet-hedging mechanism.
2403.09673
Zhangyang Gao
Zhangyang Gao, Cheng Tan, Jue Wang, Yufei Huang, Lirong Wu, Stan Z. Li
FoldToken: Learning Protein Language via Vector Quantization and Beyond
null
null
null
null
q-bio.BM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Is there a foreign language describing protein sequences and structures simultaneously? Protein structures, represented by continuous 3D points, have long posed a challenge due to the contrasting modeling paradigms of discrete sequences. We introduce \textbf{FoldTokenizer} to represent protein sequence-structure as discrete symbols. This innovative approach involves projecting residue types and structures into a discrete space, guided by a reconstruction loss for information preservation. We refer to the learned discrete symbols as \textbf{FoldToken}, and the sequence of FoldTokens serves as a new protein language, transforming the protein sequence-structure into a unified modality. We apply the created protein language on general backbone inpainting and antibody design tasks, building the first GPT-style model (\textbf{FoldGPT}) for sequence-structure co-generation with promising results. Key to our success is the substantial enhancement of the vector quantization module, Soft Conditional Vector Quantization (\textbf{SoftCVQ}).
[ { "created": "Sun, 4 Feb 2024 12:18:51 GMT", "version": "v1" }, { "created": "Tue, 19 Mar 2024 05:29:23 GMT", "version": "v2" } ]
2024-03-20
[ [ "Gao", "Zhangyang", "" ], [ "Tan", "Cheng", "" ], [ "Wang", "Jue", "" ], [ "Huang", "Yufei", "" ], [ "Wu", "Lirong", "" ], [ "Li", "Stan Z.", "" ] ]
Is there a foreign language describing protein sequences and structures simultaneously? Protein structures, represented by continuous 3D points, have long posed a challenge due to the contrasting modeling paradigms of discrete sequences. We introduce \textbf{FoldTokenizer} to represent protein sequence-structure as discrete symbols. This innovative approach involves projecting residue types and structures into a discrete space, guided by a reconstruction loss for information preservation. We refer to the learned discrete symbols as \textbf{FoldToken}, and the sequence of FoldTokens serves as a new protein language, transforming the protein sequence-structure into a unified modality. We apply the created protein language on general backbone inpainting and antibody design tasks, building the first GPT-style model (\textbf{FoldGPT}) for sequence-structure co-generation with promising results. Key to our success is the substantial enhancement of the vector quantization module, Soft Conditional Vector Quantization (\textbf{SoftCVQ}).
1508.03667
Henry Arellano-P
H. Arellano-P. and J. O. Rangel-Ch
A solution for reducing high bias in estimates of stored carbon in tropical forests (aboveground biomass)
Write to author if the supplementary material is required
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
A nondestructive method for estimating the amount of carbon stored by individuals, communities, vegetation types, and coverages, as well as their volume and aboveground biomass, is presented. This methodology is based on information on carbon stocks obtained through three-dimensional analysis of tree architecture and artificial neural networks. This technique accurately incorporates the diversity of plant forms measured in plots, transects, and relev\'es. Stored carbon in any vegetation type is usually calculated as half the biomass of sampled individuals, estimated with allometric formulas. The most complete of these formulas incorporate diameter, height, and specific gravity of wood but do not consider the variation in carbon stored in different organs or different species, nor do they include information on the wide array of architectures present in different plant communities. To develop these allometric models, many individuals of different species must be sacrificed to identify and validate samples and to minimize error. It is common to find cutting-edge studies that encourage logging to improve estimates of carbon. In our approach we replace this destructive methodology with a new technique for quantifying global aboveground carbon. We demonstrate that carbon content in forest aboveground biomass in the pantropics could rise to 723.97 Pg C. This study shows that a reevaluation of climatic and ecological models is needed to move toward a better understanding of the adverse effects of climate change, deforestation, and degradation of tropical vegetation.
[ { "created": "Fri, 14 Aug 2015 21:19:29 GMT", "version": "v1" }, { "created": "Fri, 17 Jun 2016 14:27:49 GMT", "version": "v2" } ]
2016-06-20
[ [ "Arellano-P.", "H.", "" ], [ "Rangel-Ch", "J. O.", "" ] ]
A nondestructive method for estimating the amount of carbon stored by individuals, communities, vegetation types, and coverages, as well as their volume and aboveground biomass, is presented. This methodology is based on information on carbon stocks obtained through three-dimensional analysis of tree architecture and artificial neural networks. This technique accurately incorporates the diversity of plant forms measured in plots, transects, and relev\'es. Stored carbon in any vegetation type is usually calculated as half the biomass of sampled individuals, estimated with allometric formulas. The most complete of these formulas incorporate diameter, height, and specific gravity of wood but do not consider the variation in carbon stored in different organs or different species, nor do they include information on the wide array of architectures present in different plant communities. To develop these allometric models, many individuals of different species must be sacrificed to identify and validate samples and to minimize error. It is common to find cutting-edge studies that encourage logging to improve estimates of carbon. In our approach we replace this destructive methodology with a new technique for quantifying global aboveground carbon. We demonstrate that carbon content in forest aboveground biomass in the pantropics could rise to 723.97 Pg C. This study shows that a reevaluation of climatic and ecological models is needed to move toward a better understanding of the adverse effects of climate change, deforestation, and degradation of tropical vegetation.
2311.09411
Salman Mohamadi
Salman Mohamadi, Donald A. Adjeroh
Heteroskedasticity as a Signature of Association for Age-Related Genes
null
null
null
null
q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Human aging is a process controlled by both genetics and environment. Many studies have been conducted to identify a subset of genes related to aging from the human genome. Biologists implicitly categorize age-related genes into genes that cause aging and genes that are influenced by aging, which has resulted in both causal inference studies and association studies. While inference of association is better explored, causal inference, and computational causal inference in particular, remain less explored. In this work, we are primarily motivated to tackle the problem of identifying genes associated with aging, while having a brief look into genes with probable causal relations, both from a computational perspective. Specifically, we form a set of hypotheses and accordingly, introduce a data-tailored framework for inference. First we perform linear modeling on the expression values of age-related genes, and then examine the presence of heteroskedastic properties in the residual of the model. We evaluate this framework and our results suggest that, 1) presence of heteroskedasticity in these residuals is a potential signature of association for age-related genes, and 2) consistent heteroskedasticity along the human life span could imply some sort of causality. To our knowledge, along with identifying age-associated genes, this is the first work to propose a framework for computational causal inference on age-related genes, using a dataset of human dermal fibroblast gene expression data. Hence the results of our simple, yet effective approach can be used not only to assess future age-related genes, but also as a possible criterion to select new associative or potential causal genes with respect to aging.
[ { "created": "Wed, 15 Nov 2023 22:26:32 GMT", "version": "v1" } ]
2023-11-17
[ [ "Mohamadi", "Salman", "" ], [ "Adjeroh", "Donald A.", "" ] ]
Human aging is a process controlled by both genetics and environment. Many studies have been conducted to identify a subset of genes related to aging from the human genome. Biologists implicitly categorize age-related genes into genes that cause aging and genes that are influenced by aging, which has resulted in both causal inference studies and association studies. While inference of association is better explored, causal inference, and computational causal inference in particular, remain less explored. In this work, we are primarily motivated to tackle the problem of identifying genes associated with aging, while having a brief look into genes with probable causal relations, both from a computational perspective. Specifically, we form a set of hypotheses and accordingly, introduce a data-tailored framework for inference. First we perform linear modeling on the expression values of age-related genes, and then examine the presence of heteroskedastic properties in the residual of the model. We evaluate this framework and our results suggest that, 1) presence of heteroskedasticity in these residuals is a potential signature of association for age-related genes, and 2) consistent heteroskedasticity along the human life span could imply some sort of causality. To our knowledge, along with identifying age-associated genes, this is the first work to propose a framework for computational causal inference on age-related genes, using a dataset of human dermal fibroblast gene expression data. Hence the results of our simple, yet effective approach can be used not only to assess future age-related genes, but also as a possible criterion to select new associative or potential causal genes with respect to aging.
q-bio/0601020
Jie Liang
Jie Liang
Computation of protein geometry and its applications: Packing and function prediction
32 pages, 9 figures
null
10.1007/978-0-387-68372-0_6
null
q-bio.BM
null
This chapter discusses geometric models of biomolecules and geometric constructs, including the union of balls model, the weighted Voronoi diagram, the weighted Delaunay triangulation, and alpha shapes. These geometric constructs enable fast and analytical computation of shapes of biomolecules (including features such as voids and pockets) and metric properties (such as area and volume). The algorithms of Delaunay triangulation, computation of voids and pockets, as well as volume/area computation are also described. In addition, applications in packing analysis of protein structures and protein function prediction are also discussed.
[ { "created": "Sat, 14 Jan 2006 07:23:33 GMT", "version": "v1" } ]
2015-06-26
[ [ "Liang", "Jie", "" ] ]
This chapter discusses geometric models of biomolecules and geometric constructs, including the union of balls model, the weighted Voronoi diagram, the weighted Delaunay triangulation, and alpha shapes. These geometric constructs enable fast and analytical computation of shapes of biomolecules (including features such as voids and pockets) and metric properties (such as area and volume). The algorithms of Delaunay triangulation, computation of voids and pockets, as well as volume/area computation are also described. In addition, applications in packing analysis of protein structures and protein function prediction are also discussed.
2305.13427
Lucas Rudelt Mr
Lucas Rudelt, Daniel Gonz\'alez Marx, F. Paul Spitzner, Benjamin Cramer, Johannes Zierenberg, and Viola Priesemann
Signatures of hierarchical temporal processing in the mouse visual system
20 pages, 4 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
A core challenge for the brain is to process information across various timescales. This could be achieved by a hierarchical organization of temporal processing through intrinsic mechanisms (e.g., recurrent coupling or adaptation), but recent evidence from spike recordings of the rodent visual system seems to conflict with this hypothesis. Here, we used an optimized information-theoretic and classical autocorrelation analysis to show that information- and intrinsic timescales of spiking activity increase along the anatomical hierarchy of the mouse visual system, while information-theoretic predictability decreases. Moreover, the timescale hierarchy was invariant to the stimulus condition, whereas the decrease in predictability was strongest under natural movie stimulation. We could reproduce this effect in a basic recurrent network model with correlated sensory input. Our findings suggest that the rodent visual system indeed employs intrinsic mechanisms to achieve longer integration for higher cortical areas, while simultaneously reducing predictability for an efficient neural code.
[ { "created": "Mon, 22 May 2023 19:16:56 GMT", "version": "v1" }, { "created": "Wed, 17 Jan 2024 14:57:03 GMT", "version": "v2" } ]
2024-01-18
[ [ "Rudelt", "Lucas", "" ], [ "Marx", "Daniel González", "" ], [ "Spitzner", "F. Paul", "" ], [ "Cramer", "Benjamin", "" ], [ "Zierenberg", "Johannes", "" ], [ "Priesemann", "Viola", "" ] ]
A core challenge for the brain is to process information across various timescales. This could be achieved by a hierarchical organization of temporal processing through intrinsic mechanisms (e.g., recurrent coupling or adaptation), but recent evidence from spike recordings of the rodent visual system seems to conflict with this hypothesis. Here, we used an optimized information-theoretic and classical autocorrelation analysis to show that information- and intrinsic timescales of spiking activity increase along the anatomical hierarchy of the mouse visual system, while information-theoretic predictability decreases. Moreover, the timescale hierarchy was invariant to the stimulus condition, whereas the decrease in predictability was strongest under natural movie stimulation. We could reproduce this effect in a basic recurrent network model with correlated sensory input. Our findings suggest that the rodent visual system indeed employs intrinsic mechanisms to achieve longer integration for higher cortical areas, while simultaneously reducing predictability for an efficient neural code.
2106.07949
Yanshuai Tu
Yanshuai Tu, Duyan Ta, Zhong-Lin Lu, Yalin Wang
Topological Receptive Field Model for Human Retinotopic Mapping
Submitted to MICCAI 2021
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The mapping between visual inputs on the retina and neuronal activations in the visual cortex, i.e., retinotopic map, is an essential topic in vision science and neuroscience. Human retinotopic maps can be revealed by analyzing the functional magnetic resonance imaging (fMRI) signal responses to designed visual stimuli in vivo. Neurophysiology studies summarized that visual areas are topological (i.e., nearby neurons have receptive fields at nearby locations in the image). However, conventional fMRI-based analyses frequently generate non-topological results because they process fMRI signals on a voxel-wise basis, without considering the neighbor relations on the surface. Here we propose a topological receptive field (tRF) model which imposes the topological condition when decoding retinotopic fMRI signals. More specifically, we parametrized the cortical surface to a unit disk, characterized the topological condition by tRF, and employed an efficient scheme to solve the tRF model. We tested our framework on both synthetic and human fMRI data. Experimental results showed that the tRF model could remove the topological violations, improve model explaining power, and generate biologically plausible retinotopic maps. The proposed framework is general and can be applied to other sensory maps.
[ { "created": "Tue, 15 Jun 2021 08:04:35 GMT", "version": "v1" }, { "created": "Wed, 16 Jun 2021 00:46:07 GMT", "version": "v2" } ]
2021-06-17
[ [ "Tu", "Yanshuai", "" ], [ "Ta", "Duyan", "" ], [ "Lu", "Zhong-Lin", "" ], [ "Wang", "Yalin", "" ] ]
The mapping between visual inputs on the retina and neuronal activations in the visual cortex, i.e., retinotopic map, is an essential topic in vision science and neuroscience. Human retinotopic maps can be revealed by analyzing the functional magnetic resonance imaging (fMRI) signal responses to designed visual stimuli in vivo. Neurophysiology studies summarized that visual areas are topological (i.e., nearby neurons have receptive fields at nearby locations in the image). However, conventional fMRI-based analyses frequently generate non-topological results because they process fMRI signals on a voxel-wise basis, without considering the neighbor relations on the surface. Here we propose a topological receptive field (tRF) model which imposes the topological condition when decoding retinotopic fMRI signals. More specifically, we parametrized the cortical surface to a unit disk, characterized the topological condition by tRF, and employed an efficient scheme to solve the tRF model. We tested our framework on both synthetic and human fMRI data. Experimental results showed that the tRF model could remove the topological violations, improve model explaining power, and generate biologically plausible retinotopic maps. The proposed framework is general and can be applied to other sensory maps.
1910.02520
Shuohao Liao
Shuohao Liao
Analysis of tensor methods for stochastic models of gene regulatory networks
29 pages, 3 figures, 1 table; for associated project website, see http://people.maths.ox.ac.uk/liao/stobifan/index.html
null
null
null
q-bio.QM cs.NA math.NA q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The tensor-structured parametric analysis (TPA) has been recently developed for simulating and analysing stochastic behaviours of gene regulatory networks [Liao et al., 2015]. The method employs the Fokker-Planck approximation of the chemical master equation, and uses the Quantized Tensor Train (QTT) format, as a low-parametric tensor-structured representation of classical matrices and vectors, to approximate the high-dimensional stationary probability distribution. This paper presents a detailed error analysis of all approximation steps of the TPA regarding validity and accuracy, including modelling error, artificial boundary error, discretization error, tensor rounding error, and algebraic error. The error analysis is illustrated using computational examples, including the death-birth process and a 50-dimensional isomerization reaction chain.
[ { "created": "Sun, 6 Oct 2019 20:35:35 GMT", "version": "v1" } ]
2019-10-08
[ [ "Liao", "Shuohao", "" ] ]
The tensor-structured parametric analysis (TPA) has been recently developed for simulating and analysing stochastic behaviours of gene regulatory networks [Liao et al., 2015]. The method employs the Fokker-Planck approximation of the chemical master equation, and uses the Quantized Tensor Train (QTT) format, as a low-parametric tensor-structured representation of classical matrices and vectors, to approximate the high-dimensional stationary probability distribution. This paper presents a detailed error analysis of all approximation steps of the TPA regarding validity and accuracy, including modelling error, artificial boundary error, discretization error, tensor rounding error, and algebraic error. The error analysis is illustrated using computational examples, including the death-birth process and a 50-dimensional isomerization reaction chain.
1911.10801
Lukas Eigentler
Lukas Eigentler and Jonathan A Sherratt
Spatial self-organisation enables species coexistence in a model for savanna ecosystems
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The savanna biome is characterised by a continuous vegetation cover, comprised of herbaceous and woody plants. The coexistence of species in arid savannas, where water availability is the main limiting resource for plant growth, provides an apparent contradiction to the classical principle of competitive exclusion. Previous theoretical work using nonspatial models has focussed on the development of an understanding of coexistence mechanisms through the consideration of resource niche separation and ecosystem disturbances. In this paper, we propose that a spatial self-organisation principle, caused by a positive feedback between local vegetation growth and water redistribution, is sufficient for species coexistence in savanna ecosystems. We propose a spatiotemporal ecohydrological model of partial differential equations, based on the Klausmeier reaction-advection-diffusion model for vegetation patterns, to investigate the effects of spatial interactions on species coexistence on sloped terrain. Our results suggest that species coexistence is a possible model outcome, if a balance is kept between the species' average fitness (a measure of a species' competitive abilities in a spatially uniform setting) and their colonisation abilities. Spatial heterogeneities in resource availability are utilised by the superior coloniser (grasses), before it is outcompeted by the species of higher average fitness (trees). A stability analysis of the spatially nonuniform coexistence solutions further suggests that grasses act as ecosystem engineers and facilitate the formation of a continuous tree cover for precipitation levels unable to support a uniform tree density in the absence of a grass species.
[ { "created": "Mon, 25 Nov 2019 10:14:08 GMT", "version": "v1" } ]
2019-11-26
[ [ "Eigentler", "Lukas", "" ], [ "Sherratt", "Jonathan A", "" ] ]
The savanna biome is characterised by a continuous vegetation cover, comprised of herbaceous and woody plants. The coexistence of species in arid savannas, where water availability is the main limiting resource for plant growth, provides an apparent contradiction to the classical principle of competitive exclusion. Previous theoretical work using nonspatial models has focussed on the development of an understanding of coexistence mechanisms through the consideration of resource niche separation and ecosystem disturbances. In this paper, we propose that a spatial self-organisation principle, caused by a positive feedback between local vegetation growth and water redistribution, is sufficient for species coexistence in savanna ecosystems. We propose a spatiotemporal ecohydrological model of partial differential equations, based on the Klausmeier reaction-advection-diffusion model for vegetation patterns, to investigate the effects of spatial interactions on species coexistence on sloped terrain. Our results suggest that species coexistence is a possible model outcome, if a balance is kept between the species' average fitness (a measure of a species' competitive abilities in a spatially uniform setting) and their colonisation abilities. Spatial heterogeneities in resource availability are utilised by the superior coloniser (grasses), before it is outcompeted by the species of higher average fitness (trees). A stability analysis of the spatially nonuniform coexistence solutions further suggests that grasses act as ecosystem engineers and facilitate the formation of a continuous tree cover for precipitation levels unable to support a uniform tree density in the absence of a grass species.
1609.00639
Johannes Friedrich
Johannes Friedrich, Pengcheng Zhou, Liam Paninski
Fast Online Deconvolution of Calcium Imaging Data
Extended version that significantly elaborates on the conference proceeding that appeared at NIPS 2016
PLOS Computational Biology 2017; 13(3): e1005423
10.1371/journal.pcbi.1005423
null
q-bio.NC q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fluorescent calcium indicators are a popular means for observing the spiking activity of large neuronal populations, but extracting the activity of each neuron from raw fluorescence calcium imaging data is a nontrivial problem. We present a fast online active set method to solve this sparse non-negative deconvolution problem. Importantly, the algorithm progresses through each time series sequentially from beginning to end, thus enabling real-time online estimation of neural activity during the imaging session. Our algorithm is a generalization of the pool adjacent violators algorithm (PAVA) for isotonic regression and inherits its linear-time computational complexity. We gain remarkable increases in processing speed: more than one order of magnitude compared to currently employed state of the art convex solvers relying on interior point methods. Unlike these approaches, our method can exploit warm starts; therefore optimizing model hyperparameters only requires a handful of passes through the data. A minor modification can further improve the quality of activity inference by imposing a constraint on the minimum spike size. The algorithm enables real-time simultaneous deconvolution of $O(10^5)$ traces of whole-brain larval zebrafish imaging data on a laptop.
[ { "created": "Fri, 2 Sep 2016 15:10:38 GMT", "version": "v1" }, { "created": "Thu, 24 Nov 2016 18:19:55 GMT", "version": "v2" }, { "created": "Thu, 16 Mar 2017 17:06:50 GMT", "version": "v3" } ]
2017-03-17
[ [ "Friedrich", "Johannes", "" ], [ "Zhou", "Pengcheng", "" ], [ "Paninski", "Liam", "" ] ]
Fluorescent calcium indicators are a popular means for observing the spiking activity of large neuronal populations, but extracting the activity of each neuron from raw fluorescence calcium imaging data is a nontrivial problem. We present a fast online active set method to solve this sparse non-negative deconvolution problem. Importantly, the algorithm progresses through each time series sequentially from beginning to end, thus enabling real-time online estimation of neural activity during the imaging session. Our algorithm is a generalization of the pool adjacent violators algorithm (PAVA) for isotonic regression and inherits its linear-time computational complexity. We gain remarkable increases in processing speed: more than one order of magnitude compared to currently employed state of the art convex solvers relying on interior point methods. Unlike these approaches, our method can exploit warm starts; therefore optimizing model hyperparameters only requires a handful of passes through the data. A minor modification can further improve the quality of activity inference by imposing a constraint on the minimum spike size. The algorithm enables real-time simultaneous deconvolution of $O(10^5)$ traces of whole-brain larval zebrafish imaging data on a laptop.
2306.06353
Alexandr Ezhov
Alexandr A. Ezhov
Neural Replicator Analysis of the genus Flavivirus
17 pages, 7 figures
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
The results of applying neural replicator analysis (NRA) to the genomes of viruses belonging to the genus Flavivirus are presented. It is shown that the viral genomes considered in this study can be placed in five different cells of the viral genome table. Some of these cells appear for the first time and are characterized by 9-periodicity of WS-encoded genomic sequences. It is noteworthy that Japanese encephalitis viral strains and Zika viral strains occupy not one, but two common cells of this table. We also present the results of the NRA of Zika viral strains and suggest that the earliest strain in Asia is an Indian strain that spread from Africa (Uganda) to the East. The fine structure of the sets of Japanese encephalitis viral strains is presented and it is shown that their generally accepted genotypes 1 and 3 can be clearly divided into two subgenotypes. It is also shown that probably not Indonesian, but Indian strains of this virus can be considered the earliest known strains that further evolved and spread in Asian countries.
[ { "created": "Sat, 10 Jun 2023 06:04:45 GMT", "version": "v1" } ]
2023-06-13
[ [ "Ezhov", "Alexandr A.", "" ] ]
The results of applying neural replicator analysis (NRA) to the genomes of viruses belonging to the genus Flavivirus are presented. It is shown that the viral genomes considered in this study can be placed in five different cells of the viral genome table. Some of these cells appear for the first time and are characterized by 9-periodicity of WS-encoded genomic sequences. It is noteworthy that Japanese encephalitis viral strains and Zika viral strains occupy not one, but two common cells of this table. We also present the results of the NRA of Zika viral strains and suggest that the earliest strain in Asia is an Indian strain that spread from Africa (Uganda) to the East. The fine structure of the sets of Japanese encephalitis viral strains is presented and it is shown that their generally accepted genotypes 1 and 3 can be clearly divided into two subgenotypes. It is also shown that probably not Indonesian, but Indian strains of this virus can be considered the earliest known strains that further evolved and spread in Asian countries.
1309.1196
Kevin Leder
Jasmine Foo and Kevin Leder and Marc Ryser
Multifocality and recurrence risk: a quantitative model of field cancerization
36 pages, 11 figures
null
null
null
q-bio.PE q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Primary tumors often emerge within genetically altered fields of premalignant cells that appear histologically normal but have a high chance of progression to malignancy. Clinical observations have suggested that these premalignant fields pose high risks for emergence of secondary recurrent tumors if left behind after surgical removal of the primary tumor. In this work, we develop a spatio-temporal stochastic model of epithelial carcinogenesis, combining cellular reproduction and death dynamics with a general framework for multi-stage genetic progression to cancer. Using this model, we investigate how macroscopic features (e.g. size and geometry of premalignant fields) depend on microscopic cellular properties of the tissue (e.g.\ tissue renewal rate, mutation rate, selection advantages conferred by genetic events leading to cancer, etc). We develop methods to characterize clinically relevant quantities such as the waiting time until emergence of second field tumors and the recurrence risk after tumor resection. We also study the clonal relatedness of recurrent tumors to primary tumors, and analyze how these phenomena depend upon specific characteristics of the tissue and cancer type. This study contributes to a growing literature seeking to obtain a quantitative understanding of the spatial dynamics in cancer initiation.
[ { "created": "Wed, 4 Sep 2013 22:01:18 GMT", "version": "v1" } ]
2013-09-06
[ [ "Foo", "Jasmine", "" ], [ "Leder", "Kevin", "" ], [ "Ryser", "Marc", "" ] ]
Primary tumors often emerge within genetically altered fields of premalignant cells that appear histologically normal but have a high chance of progression to malignancy. Clinical observations have suggested that these premalignant fields pose high risks for emergence of secondary recurrent tumors if left behind after surgical removal of the primary tumor. In this work, we develop a spatio-temporal stochastic model of epithelial carcinogenesis, combining cellular reproduction and death dynamics with a general framework for multi-stage genetic progression to cancer. Using this model, we investigate how macroscopic features (e.g. size and geometry of premalignant fields) depend on microscopic cellular properties of the tissue (e.g.\ tissue renewal rate, mutation rate, selection advantages conferred by genetic events leading to cancer, etc). We develop methods to characterize clinically relevant quantities such as the waiting time until emergence of second field tumors and the recurrence risk after tumor resection. We also study the clonal relatedness of recurrent tumors to primary tumors, and analyze how these phenomena depend upon specific characteristics of the tissue and cancer type. This study contributes to a growing literature seeking to obtain a quantitative understanding of the spatial dynamics in cancer initiation.
2207.08812
Alex Skvortsov
Leonardo Dagdug, Alexei T. Skvortsov, Alexander M. Berezhkovskii, and Sergey M. Bezrukov
Blocker effect on diffusion resistance of a membrane channel. Dependence on the blocker geometry
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Being motivated by recent progress in nanopore sensing, we develop a theory of the effect of large analytes, or blockers, trapped within the nanopore confines, on diffusion flow of small solutes. The focus is on the nanopore diffusion resistance which is the ratio of the solute concentration difference in the reservoirs connected by the nanopore to the solute flux driven by this difference. Analytical expressions for the diffusion resistance are derived for a cylindrically symmetric blocker whose axis coincides with the axis of a cylindrical nanopore in two limiting cases where the blocker radius changes either smoothly or abruptly. Comparison of our theoretical predictions with the results obtained from Brownian dynamics simulations shows good agreement between the two.
[ { "created": "Mon, 18 Jul 2022 00:37:19 GMT", "version": "v1" } ]
2022-07-20
[ [ "Dagdug", "Leonardo", "" ], [ "Skvortsov", "Alexei T.", "" ], [ "Berezhkovskii", "Alexander M.", "" ], [ "Bezrukov", "Sergey M.", "" ] ]
Being motivated by recent progress in nanopore sensing, we develop a theory of the effect of large analytes, or blockers, trapped within the nanopore confines, on diffusion flow of small solutes. The focus is on the nanopore diffusion resistance which is the ratio of the solute concentration difference in the reservoirs connected by the nanopore to the solute flux driven by this difference. Analytical expressions for the diffusion resistance are derived for a cylindrically symmetric blocker whose axis coincides with the axis of a cylindrical nanopore in two limiting cases where the blocker radius changes either smoothly or abruptly. Comparison of our theoretical predictions with the results obtained from Brownian dynamics simulations shows good agreement between the two.
q-bio/0404040
Eugene Shakhnovich
Boris Shakhnovich, Eric Deeds, Charles Delisi and Eugene Shakhnovich
Protein Structure and Evolutionary History Determine Sequence Space Topology
null
null
null
null
q-bio.BM q-bio.GN
null
Understanding the observed variability in the number of homologs of a gene is a very important, unsolved problem that has broad implications for research into co-evolution of structure and function, gene duplication, pseudogene formation and possibly for emerging diseases. Here we attempt to define and elucidate the reasons behind this observed unevenness in sequence space. We present evidence that sequence variability and functional diversity of a gene or fold family is influenced by certain quantitative characteristics of the protein structure that reflect potential for sequence plasticity i.e. the ability to accept mutation without losing thermodynamic stability.
[ { "created": "Wed, 28 Apr 2004 02:39:42 GMT", "version": "v1" } ]
2007-05-23
[ [ "Shakhnovich", "Boris", "" ], [ "Deeds", "Eric", "" ], [ "Delisi", "Charles", "" ], [ "Shakhnovich", "Eugene", "" ] ]
Understanding the observed variability in the number of homologs of a gene is a very important, unsolved problem that has broad implications for research into co-evolution of structure and function, gene duplication, pseudogene formation and possibly for emerging diseases. Here we attempt to define and elucidate the reasons behind this observed unevenness in sequence space. We present evidence that sequence variability and functional diversity of a gene or fold family is influenced by certain quantitative characteristics of the protein structure that reflect potential for sequence plasticity i.e. the ability to accept mutation without losing thermodynamic stability.
2203.15093
Scott Greenhalgh
Jacob Pacheco, Ahmani Roman, Kursad Tosun, Scott Greenhalgh
Guns, Zombies, and Steelhead Axes: cost-effective recommendations for surviving human societies
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
In pop culture, there are many strategies to curtail zombie apocalypses. However, it remains unclear what routes we should take to eliminate zombies effectively and affordably. So, we created a mathematical model to examine interventions that armed adults with steelhead axes or provided enough ammunition and a 9mm handgun to kill zombies in either a "single-tap" or "double-tap". We investigate each case over two years under slow, moderate speed, and fast zombies scenarios. We quantify health burden by zombies averted, disability-adjusted life-years averted, and determine cost-effectiveness using the incremental cost-effectiveness ratio. Our predictions show the single-tap intervention is the best for stopping the zombie apocalypse, as it would avert hundreds of millions of zombies and deaths while also being the most cost-effective intervention. Altogether, this suggests conserving ammunition and supplying ranged weapons would be an effective use of limited resources in the event of a zombie uprising.
[ { "created": "Mon, 28 Mar 2022 20:56:11 GMT", "version": "v1" } ]
2022-03-30
[ [ "Pacheco", "Jacob", "" ], [ "Roman", "Ahmani", "" ], [ "Tosun", "Kursad", "" ], [ "Greenhalgh", "Scott", "" ] ]
In pop culture, there are many strategies to curtail zombie apocalypses. However, it remains unclear what routes we should take to eliminate zombies effectively and affordably. So, we created a mathematical model to examine interventions that armed adults with steelhead axes or provided enough ammunition and a 9mm handgun to kill zombies in either a "single-tap" or "double-tap". We investigate each case over two years under slow, moderate speed, and fast zombies scenarios. We quantify health burden by zombies averted, disability-adjusted life-years averted, and determine cost-effectiveness using the incremental cost-effectiveness ratio. Our predictions show the single-tap intervention is the best for stopping the zombie apocalypse, as it would avert hundreds of millions of zombies and deaths while also being the most cost-effective intervention. Altogether, this suggests conserving ammunition and supplying ranged weapons would be an effective use of limited resources in the event of a zombie uprising.
2104.00777
Camila Lorenz
Camila Lorenz, Thiago Salomao de Azevedo, Francisco Chiaravalloti-Neto
Impact of climate change on West Nile virus distribution in South America
25 pages, 5 figures and 1 table
null
null
null
q-bio.PE stat.OT
http://creativecommons.org/licenses/by-nc-nd/4.0/
West Nile virus (WNV) is a vector-borne pathogen of global relevance and is currently the most widely distributed flavivirus of encephalitis worldwide. This virus infects birds, humans, horses, and other mammals, and its transmission cycle occurs in urban and rural areas. Climate conditions have direct and indirect impacts on vector abundance and virus dynamics within the mosquito. The significance of environmental variables as drivers in WNV epidemiology is increasing under the current climate change scenario. In this study, we used a machine learning algorithm to model WNV distributions in South America. Our model evaluated eight environmental variables (type of biome, annual temperature, seasonality of temperature, daytime temperature variation, thermal amplitude, seasonality of precipitation, annual rainfall, and elevation) for their contribution to the occurrence of WNV since its introduction in South America (2004). Our results showed that environmental variables can directly alter the occurrence of WNV, with lower precipitation and higher temperatures associated with increased virus incidence. High-risk areas may be modified in the coming years, becoming more evident under high greenhouse gas emission levels. Countries such as Bolivia and Paraguay will be greatly affected, drastically changing their current WNV distribution. Several Brazilian areas will also see an increased likelihood of WNV occurrence, mainly in the Northeast and Midwest regions and the Pantanal biome. The Galapagos Islands will also probably see an expansion of the geographic range suitable for WNV occurrence. It is necessary to develop preventive policies to minimize potential WNV infection in humans and enhance active epidemiological surveillance in birds, humans, and other mammals before it becomes a more significant public health problem in South America.
[ { "created": "Thu, 1 Apr 2021 21:48:48 GMT", "version": "v1" } ]
2021-04-05
[ [ "Lorenz", "Camila", "" ], [ "de Azevedo", "Thiago Salomao", "" ], [ "Chiaravalloti-Neto", "Francisco", "" ] ]
West Nile virus (WNV) is a vector-borne pathogen of global relevance and is currently the most widely distributed flavivirus of encephalitis worldwide. This virus infects birds, humans, horses, and other mammals, and its transmission cycle occurs in urban and rural areas. Climate conditions have direct and indirect impacts on vector abundance and virus dynamics within the mosquito. The significance of environmental variables as drivers in WNV epidemiology is increasing under the current climate change scenario. In this study, we used a machine learning algorithm to model WNV distributions in South America. Our model evaluated eight environmental variables (type of biome, annual temperature, seasonality of temperature, daytime temperature variation, thermal amplitude, seasonality of precipitation, annual rainfall, and elevation) for their contribution to the occurrence of WNV since its introduction in South America (2004). Our results showed that environmental variables can directly alter the occurrence of WNV, with lower precipitation and higher temperatures associated with increased virus incidence. High-risk areas may be modified in the coming years, becoming more evident under high greenhouse gas emission levels. Countries such as Bolivia and Paraguay will be greatly affected, drastically changing their current WNV distribution. Several Brazilian areas will also see an increased likelihood of WNV occurrence, mainly in the Northeast and Midwest regions and the Pantanal biome. The Galapagos Islands will also probably see an expansion of the geographic range suitable for WNV occurrence. It is necessary to develop preventive policies to minimize potential WNV infection in humans and enhance active epidemiological surveillance in birds, humans, and other mammals before it becomes a more significant public health problem in South America.
1908.05635
Jennifer Hoyal Cuthill
Jennifer F. Hoyal Cuthill, Nicholas Guttenberg, Sophie Ledger, Robyn Crowther, Blanca Huertas
Deep learning on butterfly phenotypes tests evolution's oldest mathematical model
Manuscript and combined supplementary information
Sci Adv 5, eaaw4967 (2019)
10.1126/sciadv.aaw4967
null
q-bio.PE cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional anatomical analyses captured only a fraction of real phenomic information. Here, we apply deep learning to quantify total phenotypic similarity across 2468 butterfly photographs, covering 38 subspecies from the polymorphic mimicry complex of $\textit{Heliconius erato}$ and $\textit{Heliconius melpomene}$. Euclidean phenotypic distances, calculated using a deep convolutional triplet network, demonstrate significant convergence between interspecies co-mimics. This quantitatively validates a key prediction of M\"ullerian mimicry theory, evolutionary biology's oldest mathematical model. Phenotypic neighbor-joining trees are significantly correlated with wing pattern gene phylogenies, demonstrating objective, phylogenetically informative phenome capture. Comparative analyses indicate frequency-dependent, mutual convergence with coevolutionary exchange of wing pattern features. Therefore, phenotypic analysis supports reciprocal coevolution, predicted by classical mimicry theory but since disputed, and reveals mutual convergence as an intrinsic generator for the surprising diversity of M\"ullerian mimicry. This demonstrates that deep learning can generate phenomic spatial embeddings which enable quantitative tests of evolutionary hypotheses previously only testable subjectively.
[ { "created": "Thu, 15 Aug 2019 16:55:27 GMT", "version": "v1" } ]
2019-08-16
[ [ "Cuthill", "Jennifer F. Hoyal", "" ], [ "Guttenberg", "Nicholas", "" ], [ "Ledger", "Sophie", "" ], [ "Crowther", "Robyn", "" ], [ "Huertas", "Blanca", "" ] ]
Traditional anatomical analyses captured only a fraction of real phenomic information. Here, we apply deep learning to quantify total phenotypic similarity across 2468 butterfly photographs, covering 38 subspecies from the polymorphic mimicry complex of $\textit{Heliconius erato}$ and $\textit{Heliconius melpomene}$. Euclidean phenotypic distances, calculated using a deep convolutional triplet network, demonstrate significant convergence between interspecies co-mimics. This quantitatively validates a key prediction of M\"ullerian mimicry theory, evolutionary biology's oldest mathematical model. Phenotypic neighbor-joining trees are significantly correlated with wing pattern gene phylogenies, demonstrating objective, phylogenetically informative phenome capture. Comparative analyses indicate frequency-dependent, mutual convergence with coevolutionary exchange of wing pattern features. Therefore, phenotypic analysis supports reciprocal coevolution, predicted by classical mimicry theory but since disputed, and reveals mutual convergence as an intrinsic generator for the surprising diversity of M\"ullerian mimicry. This demonstrates that deep learning can generate phenomic spatial embeddings which enable quantitative tests of evolutionary hypotheses previously only testable subjectively.
2306.02206
Pascal Friederich
Hunter Sturm, Jonas Teufel, Kaitlin A. Isfeld, Pascal Friederich, Rebecca L. Davis
Mitigating Molecular Aggregation in Drug Discovery with Predictive Insights from Explainable AI
17 pages, plus SI
null
null
null
q-bio.BM cond-mat.soft cs.LG
http://creativecommons.org/licenses/by/4.0/
As the importance of high-throughput screening (HTS) continues to grow due to its value in early stage drug discovery and data generation for training machine learning models, there is a growing need for robust methods for pre-screening compounds to identify and prevent false-positive hits. Small, colloidally aggregating molecules are one of the primary sources of false-positive hits in high-throughput screens, making them an ideal candidate to target for removal from libraries using predictive pre-screening tools. However, a lack of understanding of the causes of molecular aggregation introduces difficulty in the development of predictive tools for detecting aggregating molecules. Herein, we present an examination of the molecular features differentiating datasets of aggregating and non-aggregating molecules, as well as a machine learning approach to predicting molecular aggregation. Our method uses explainable graph neural networks and counterfactuals to reliably predict and explain aggregation, giving additional insights and design rules for future screening. The integration of this method in HTS approaches will help combat false positives, providing better lead molecules more rapidly and thus accelerating drug discovery cycles.
[ { "created": "Sat, 3 Jun 2023 22:30:45 GMT", "version": "v1" } ]
2023-06-06
[ [ "Sturm", "Hunter", "" ], [ "Teufel", "Jonas", "" ], [ "Isfeld", "Kaitlin A.", "" ], [ "Friederich", "Pascal", "" ], [ "Davis", "Rebecca L.", "" ] ]
As the importance of high-throughput screening (HTS) continues to grow due to its value in early stage drug discovery and data generation for training machine learning models, there is a growing need for robust methods for pre-screening compounds to identify and prevent false-positive hits. Small, colloidally aggregating molecules are one of the primary sources of false-positive hits in high-throughput screens, making them an ideal candidate to target for removal from libraries using predictive pre-screening tools. However, a lack of understanding of the causes of molecular aggregation introduces difficulty in the development of predictive tools for detecting aggregating molecules. Herein, we present an examination of the molecular features differentiating datasets of aggregating and non-aggregating molecules, as well as a machine learning approach to predicting molecular aggregation. Our method uses explainable graph neural networks and counterfactuals to reliably predict and explain aggregation, giving additional insights and design rules for future screening. The integration of this method in HTS approaches will help combat false positives, providing better lead molecules more rapidly and thus accelerating drug discovery cycles.
1911.07012
Lucilla De Arcangelis
Damian Berger, Emanuele Varriale, Laurens Michiels van Kessenich, Hans J. Herrmann, Lucilla de Arcangelis
Three cooperative mechanisms required for recovery after brain damage
null
Scientific Reports 9:15858 (2019)
10.1038/s41598-019-50946-y
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stroke is one of the main causes of human disabilities. Experimental observations indicate that several mechanisms are activated during the recovery of functional activity after a stroke. Here we unveil how the brain recovers by explaining the role played by three mechanisms: Plastic adaptation, hyperexcitability and synaptogenesis. We consider two different damages in a neural network: A diffuse damage that simply causes the reduction of the effective system size and a localized damage, a stroke, that strongly alters the spontaneous activity of the system. Recovery mechanisms observed experimentally are implemented both separately and in a combined way. Interestingly, each mechanism contributes to the recovery to a limited extent. Only the combined application of all three together is able to recover the spontaneous activity of the undamaged system. This explains why the brain triggers independent mechanisms, whose cooperation is the fundamental ingredient for the system recovery.
[ { "created": "Sat, 16 Nov 2019 10:54:50 GMT", "version": "v1" } ]
2019-11-19
[ [ "Berger", "Damian", "" ], [ "Varriale", "Emanuele", "" ], [ "van Kessenich", "Laurens Michiels", "" ], [ "Herrmann", "Hans J.", "" ], [ "de Arcangelis", "Lucilla", "" ] ]
Stroke is one of the main causes of human disabilities. Experimental observations indicate that several mechanisms are activated during the recovery of functional activity after a stroke. Here we unveil how the brain recovers by explaining the role played by three mechanisms: Plastic adaptation, hyperexcitability and synaptogenesis. We consider two different damages in a neural network: A diffuse damage that simply causes the reduction of the effective system size and a localized damage, a stroke, that strongly alters the spontaneous activity of the system. Recovery mechanisms observed experimentally are implemented both separately and in a combined way. Interestingly, each mechanism contributes to the recovery to a limited extent. Only the combined application of all three together is able to recover the spontaneous activity of the undamaged system. This explains why the brain triggers independent mechanisms, whose cooperation is the fundamental ingredient for the system recovery.
1407.7114
Denis Menshykau
Tam\'as Kurics, Denis Menshykau and Dagmar Iber
Feedbacks, Receptor Clustering, and Receptor Restriction to Single Cells yield large Turing Spaces for Ligand-receptor based Turing Models
null
null
10.1103/PhysRevE.90.022716
null
q-bio.MN q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Turing mechanisms can yield a large variety of patterns from noisy, homogeneous initial conditions and have been proposed as a patterning mechanism for many developmental processes. However, the molecular components that give rise to Turing patterns have remained elusive, and the small size of the parameter space that permits Turing patterns to emerge makes it difficult to explain how Turing patterns could evolve. We have recently shown that Turing patterns can be obtained with a single ligand if the ligand-receptor interaction is taken into account. Here we show that the general properties of ligand-receptor systems result in very large Turing spaces. Thus, the restriction of receptors to single cells, negative feedbacks, regulatory interactions between different ligand-receptor systems, and the clustering of receptors on the cell surface all greatly enlarge the Turing space. We further show that the feedbacks that occur in the FGF10/SHH network that controls lung branching morphogenesis are sufficient to result in large Turing spaces. We conclude that the cellular restriction of receptors provides a mechanism to sufficiently increase the size of the Turing space to make the evolution of Turing patterns likely. Additional feedbacks may then have further enlarged the Turing space. Given their robustness and flexibility, we propose that receptor-ligand based Turing mechanisms present a general mechanism for patterning in biology.
[ { "created": "Sat, 26 Jul 2014 09:48:14 GMT", "version": "v1" } ]
2015-06-22
[ [ "Kurics", "Tamás", "" ], [ "Menshykau", "Denis", "" ], [ "Iber", "Dagmar", "" ] ]
Turing mechanisms can yield a large variety of patterns from noisy, homogeneous initial conditions and have been proposed as a patterning mechanism for many developmental processes. However, the molecular components that give rise to Turing patterns have remained elusive, and the small size of the parameter space that permits Turing patterns to emerge makes it difficult to explain how Turing patterns could evolve. We have recently shown that Turing patterns can be obtained with a single ligand if the ligand-receptor interaction is taken into account. Here we show that the general properties of ligand-receptor systems result in very large Turing spaces. Thus, the restriction of receptors to single cells, negative feedbacks, regulatory interactions between different ligand-receptor systems, and the clustering of receptors on the cell surface all greatly enlarge the Turing space. We further show that the feedbacks that occur in the FGF10/SHH network that controls lung branching morphogenesis are sufficient to result in large Turing spaces. We conclude that the cellular restriction of receptors provides a mechanism to sufficiently increase the size of the Turing space to make the evolution of Turing patterns likely. Additional feedbacks may then have further enlarged the Turing space. Given their robustness and flexibility, we propose that receptor-ligand based Turing mechanisms present a general mechanism for patterning in biology.
1907.08711
Bernadette Stolz
Helen M Byrne, Heather A Harrington, Ruth Muschel, Gesine Reinert, Bernadette J Stolz, Ulrike Tillmann
Topological Methods for Characterising Spatial Networks: A Case Study in Tumour Vasculature
null
null
null
null
q-bio.QM math.AT math.DS q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding how the spatial structure of blood vessel networks relates to their function in healthy and abnormal biological tissues could improve diagnosis and treatment for diseases such as cancer. New imaging techniques can generate multiple, high-resolution images of the same tissue region, and show how vessel networks evolve during disease onset and treatment. Such experimental advances have created an exciting opportunity for discovering new links between vessel structure and disease through the development of mathematical tools that can analyse these rich datasets. Here we explain how topological data analysis (TDA) can be used to study vessel network structures. TDA is a growing field in the mathematical and computational sciences that consists of algorithmic methods for identifying global and multi-scale structures in high-dimensional data sets that may be noisy and incomplete. TDA has identified the effect of ageing on vessel networks in the brain and has more recently been proposed to study blood flow and stenosis. Here we present preliminary work which shows how TDA of spatial network structure can be used to characterise tumour vasculature.
[ { "created": "Fri, 19 Jul 2019 22:21:21 GMT", "version": "v1" } ]
2019-07-23
[ [ "Byrne", "Helen M", "" ], [ "Harrington", "Heather A", "" ], [ "Muschel", "Ruth", "" ], [ "Reinert", "Gesine", "" ], [ "Stolz", "Bernadette J", "" ], [ "Tillmann", "Ulrike", "" ] ]
Understanding how the spatial structure of blood vessel networks relates to their function in healthy and abnormal biological tissues could improve diagnosis and treatment for diseases such as cancer. New imaging techniques can generate multiple, high-resolution images of the same tissue region, and show how vessel networks evolve during disease onset and treatment. Such experimental advances have created an exciting opportunity for discovering new links between vessel structure and disease through the development of mathematical tools that can analyse these rich datasets. Here we explain how topological data analysis (TDA) can be used to study vessel network structures. TDA is a growing field in the mathematical and computational sciences that consists of algorithmic methods for identifying global and multi-scale structures in high-dimensional data sets that may be noisy and incomplete. TDA has identified the effect of ageing on vessel networks in the brain and has more recently been proposed to study blood flow and stenosis. Here we present preliminary work which shows how TDA of spatial network structure can be used to characterise tumour vasculature.
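A minimal example of the kind of multi-scale summary TDA provides is dimension-0 persistence of a spatial point set, computable from the minimum spanning tree: each MST edge length is the scale at which two connected components merge in the distance filtration. This is a generic sketch, not the vessel-network pipeline of the paper:

```python
import numpy as np

def h0_death_scales(points):
    """Death scales of dimension-0 persistent homology for a point cloud:
    Kruskal's MST via union-find; each accepted edge length is the scale
    at which two components merge in the distance filtration."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1))
    edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:           # edge joins two components: record merge scale
            parent[ri] = rj
            deaths.append(w)
    return deaths              # n-1 values; large gaps reveal cluster structure
```

For a point set with two well-separated clusters, all but one death scale is small; the single large value records the separation between the clusters.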
2306.12320
Azmain Yakin Srizon
Azmain Yakin Srizon
Prognostic Biomarker Identification for Pancreatic Cancer by Analyzing Multiple mRNA Microarray and microRNA Expression Datasets
Undergraduate Thesis Work, Supervised By: Md. Al Mehedi Hasan, Degree was awarded by Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
With a five-year survival rate of nearly 5%, pancreatic cancer is currently the fourth leading cause of cancer-related deaths. Previous works have established that early diagnosis plays a meaningful role in improving the survival rate, and diverse online tools have been utilized to identify prognostic biomarkers, which is a lengthy process. We believe that statistical feature selection methods can produce better and faster results here. To support this claim, we selected three mRNA microarray datasets (GSE15471, GSE28735, and GSE16515) and a microRNA dataset (GSE41372) for identification of differentially expressed genes (DEGs) and differentially expressed microRNAs (DEMs). By adopting several feature selection methods, 178 DEGs and 16 DEMs were selected. After identifying target genes of the DEMs, we selected two DEGs (ECT2 and NRP2) that were also identified among the DEM target genes. Moreover, overall survival analysis established that ECT2 and NRP2 were associated with poor overall survival. Hence, we concluded that for pancreatic cancer, statistical feature selection approaches perform better for biomarker identification than pre-defined online programs, and that ECT2 and NRP2 can act as possible prognostic biomarkers. All the resources, programs and snippets of our work can be found at https://github.com/Srizon143005/PancreaticCancerBiomarkers.
[ { "created": "Wed, 21 Jun 2023 15:02:13 GMT", "version": "v1" } ]
2023-06-22
[ [ "Srizon", "Azmain Yakin", "" ] ]
With a five-year survival rate of nearly 5%, pancreatic cancer is currently the fourth leading cause of cancer-related deaths. Previous works have established that early diagnosis plays a meaningful role in improving the survival rate, and diverse online tools have been utilized to identify prognostic biomarkers, which is a lengthy process. We believe that statistical feature selection methods can produce better and faster results here. To support this claim, we selected three mRNA microarray datasets (GSE15471, GSE28735, and GSE16515) and a microRNA dataset (GSE41372) for identification of differentially expressed genes (DEGs) and differentially expressed microRNAs (DEMs). By adopting several feature selection methods, 178 DEGs and 16 DEMs were selected. After identifying target genes of the DEMs, we selected two DEGs (ECT2 and NRP2) that were also identified among the DEM target genes. Moreover, overall survival analysis established that ECT2 and NRP2 were associated with poor overall survival. Hence, we concluded that for pancreatic cancer, statistical feature selection approaches perform better for biomarker identification than pre-defined online programs, and that ECT2 and NRP2 can act as possible prognostic biomarkers. All the resources, programs and snippets of our work can be found at https://github.com/Srizon143005/PancreaticCancerBiomarkers.
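The two-group differential-expression screen described above can be sketched with a plain Welch t-statistic per gene. This is a generic stand-in: the thesis's actual feature-selection methods and thresholds are not specified here, and all names are illustrative:

```python
import numpy as np

def welch_t(A, B):
    """Per-gene Welch t-statistics for two expression matrices of shape
    (samples, genes); large |t| flags candidate DEGs."""
    ma, mb = A.mean(axis=0), B.mean(axis=0)
    va, vb = A.var(axis=0, ddof=1), B.var(axis=0, ddof=1)
    return (ma - mb) / np.sqrt(va / len(A) + vb / len(B))

def select_degs(A, B, threshold=5.0):
    """Indices of genes whose |t| exceeds a (here arbitrary) cutoff."""
    return np.where(np.abs(welch_t(A, B)) > threshold)[0]
```

On synthetic data where the first five genes are shifted between groups, the screen recovers exactly those genes.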
2303.10122
Lucrezia Caselli
Jacopo Cardellini, Andrea Ridolfi, Melissa Donati, Valentina Giampietro, Mirko Severi, Marco Brucale, Francesco Valle, Paolo Bergese, Costanza Montis, Lucrezia Caselli, and Debora Berti
Probing the coverage of nanoparticles by biomimetic membranes through nanoplasmonics
null
null
10.1016/j.jcis.2023.02.073
null
q-bio.BM cond-mat.soft physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Although promising for biomedicine, the clinical translation of inorganic nanoparticles (NPs) is limited by low biocompatibility and stability in biological fluids. A common strategy to circumvent this drawback consists in disguising the active inorganic core with a lipid bilayer coating, reminiscent of the structure of the cell membrane, to redefine the chemical and biological identity of NPs. While recent reports introduced membrane coating procedures for NPs, a robust and accessible method to quantify the integrity of the bilayer coverage is not yet available. To fill this gap, we prepared SiO2 nanoparticles (SiO2NPs) with different membrane coverage degrees and monitored their interaction with AuNPs by combining microscopic, scattering, and optical techniques. The membrane coating on SiO2NPs induces spontaneous clustering of AuNPs, whose extent depends on the coating integrity. Remarkably, we discovered a linear correlation between the membrane coverage and a spectral descriptor for the AuNPs plasmonic resonance, spanning a wide range of coating yields. These results provide a fast and cost-effective assay to monitor the compatibilization of NPs with biological environments, essential for bench tests and scale-up. In addition, we introduce a robust and scalable method to prepare SiO2NPs-AuNPs hybrids through spontaneous self-assembly, with high-fidelity structural control mediated by a lipid bilayer.
[ { "created": "Fri, 17 Mar 2023 17:01:25 GMT", "version": "v1" } ]
2023-03-20
[ [ "Cardellini", "Jacopo", "" ], [ "Ridolfi", "Andrea", "" ], [ "Donati", "Melissa", "" ], [ "Giampietro", "Valentina", "" ], [ "Severi", "Mirko", "" ], [ "Brucale", "Marco", "" ], [ "Valle", "Francesco", "" ], [ "Bergese", "Paolo", "" ], [ "Montis", "Costanza", "" ], [ "Caselli", "Lucrezia", "" ], [ "Berti", "Debora", "" ] ]
Although promising for biomedicine, the clinical translation of inorganic nanoparticles (NPs) is limited by low biocompatibility and stability in biological fluids. A common strategy to circumvent this drawback consists in disguising the active inorganic core with a lipid bilayer coating, reminiscent of the structure of the cell membrane, to redefine the chemical and biological identity of NPs. While recent reports introduced membrane coating procedures for NPs, a robust and accessible method to quantify the integrity of the bilayer coverage is not yet available. To fill this gap, we prepared SiO2 nanoparticles (SiO2NPs) with different membrane coverage degrees and monitored their interaction with AuNPs by combining microscopic, scattering, and optical techniques. The membrane coating on SiO2NPs induces spontaneous clustering of AuNPs, whose extent depends on the coating integrity. Remarkably, we discovered a linear correlation between the membrane coverage and a spectral descriptor for the AuNPs plasmonic resonance, spanning a wide range of coating yields. These results provide a fast and cost-effective assay to monitor the compatibilization of NPs with biological environments, essential for bench tests and scale-up. In addition, we introduce a robust and scalable method to prepare SiO2NPs-AuNPs hybrids through spontaneous self-assembly, with high-fidelity structural control mediated by a lipid bilayer.
2402.14592
H\'el\`ene Todd
H\'el\`ene Todd, Alex Cayco-Gajic and Boris Gutkin
The role of gap junctions and clustered connectivity in emergent synchronisation patterns of inhibitory neuronal networks
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Inhibitory interneurons, ubiquitous in the central nervous system, form networks connected through both chemical synapses and gap junctions. These networks are essential for regulating the activity of principal neurons, especially by inducing temporally patterned dynamic states. We aim to understand the dynamic mechanisms for synchronisation in networks of electrically and chemically coupled interneurons. We use the exact mean-field reduction to derive a neural mass model for both homogeneous and clustered networks. We first analyse a single population of neurons to understand how the two couplings interact with one another. We demonstrate that the network transitions from an asynchronous to a synchronous regime either by increasing the strength of the gap junction connectivity or the strength of the background input current. Conversely, the strength of inhibitory synapses affects the population firing rate, suggesting that electrical and chemical coupling strengths act as complementary mechanisms by which networks can tune synchronous oscillatory behavior. In line with previous work, we confirm that the depolarizing spikelet is crucial for the emergence of synchrony. Furthermore, we find that the fast frequency component of the spikelet ensures robustness to heterogeneity. Next, inspired by the existence of multiple interconnected interneuron subtypes in the cerebellum, we analyse networks consisting of two clusters of cell types defined by differing chemical versus electrical coupling strengths. We show that breaking the electrical and chemical coupling symmetry between these clusters induces bistability, so that a transient external input can switch the network between synchronous and asynchronous firing. Together, our results show the variety of cell-intrinsic and network properties that contribute to synchronisation of interneuronal networks with multiple types of coupling.
[ { "created": "Thu, 22 Feb 2024 14:45:22 GMT", "version": "v1" } ]
2024-02-23
[ [ "Todd", "Hélène", "" ], [ "Cayco-Gajic", "Alex", "" ], [ "Gutkin", "Boris", "" ] ]
Inhibitory interneurons, ubiquitous in the central nervous system, form networks connected through both chemical synapses and gap junctions. These networks are essential for regulating the activity of principal neurons, especially by inducing temporally patterned dynamic states. We aim to understand the dynamic mechanisms for synchronisation in networks of electrically and chemically coupled interneurons. We use the exact mean-field reduction to derive a neural mass model for both homogeneous and clustered networks. We first analyse a single population of neurons to understand how the two couplings interact with one another. We demonstrate that the network transitions from an asynchronous to a synchronous regime either by increasing the strength of the gap junction connectivity or the strength of the background input current. Conversely, the strength of inhibitory synapses affects the population firing rate, suggesting that electrical and chemical coupling strengths act as complementary mechanisms by which networks can tune synchronous oscillatory behavior. In line with previous work, we confirm that the depolarizing spikelet is crucial for the emergence of synchrony. Furthermore, we find that the fast frequency component of the spikelet ensures robustness to heterogeneity. Next, inspired by the existence of multiple interconnected interneuron subtypes in the cerebellum, we analyse networks consisting of two clusters of cell types defined by differing chemical versus electrical coupling strengths. We show that breaking the electrical and chemical coupling symmetry between these clusters induces bistability, so that a transient external input can switch the network between synchronous and asynchronous firing. Together, our results show the variety of cell-intrinsic and network properties that contribute to synchronisation of interneuronal networks with multiple types of coupling.
1702.07814
Vu Dinh
Vu Dinh, Arman Bilge, Cheng Zhang, and Frederick A. Matsen IV
Probabilistic Path Hamiltonian Monte Carlo
Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 2017; 15 pages; 3 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hamiltonian Monte Carlo (HMC) is an efficient and effective means of sampling posterior distributions on Euclidean space, which has been extended to manifolds with boundary. However, some applications require an extension to more general spaces. For example, phylogenetic (evolutionary) trees are defined in terms of both a discrete graph and associated continuous parameters; although one can represent these aspects using a single connected space, this rather complex space is not suitable for existing HMC algorithms. In this paper, we develop Probabilistic Path HMC (PPHMC) as a first step to sampling distributions on spaces with intricate combinatorial structure. We define PPHMC on orthant complexes, show that the resulting Markov chain is ergodic, and provide a promising implementation for the case of phylogenetic trees in open-source software. We also show that a surrogate function to ease the transition across a boundary on which the log-posterior has discontinuous derivatives can greatly improve efficiency.
[ { "created": "Sat, 25 Feb 2017 01:20:42 GMT", "version": "v1" }, { "created": "Fri, 23 Jun 2017 04:34:36 GMT", "version": "v2" } ]
2017-06-26
[ [ "Dinh", "Vu", "" ], [ "Bilge", "Arman", "" ], [ "Zhang", "Cheng", "" ], [ "Matsen", "Frederick A.", "IV" ] ]
Hamiltonian Monte Carlo (HMC) is an efficient and effective means of sampling posterior distributions on Euclidean space, which has been extended to manifolds with boundary. However, some applications require an extension to more general spaces. For example, phylogenetic (evolutionary) trees are defined in terms of both a discrete graph and associated continuous parameters; although one can represent these aspects using a single connected space, this rather complex space is not suitable for existing HMC algorithms. In this paper, we develop Probabilistic Path HMC (PPHMC) as a first step to sampling distributions on spaces with intricate combinatorial structure. We define PPHMC on orthant complexes, show that the resulting Markov chain is ergodic, and provide a promising implementation for the case of phylogenetic trees in open-source software. We also show that a surrogate function to ease the transition across a boundary on which the log-posterior has discontinuous derivatives can greatly improve efficiency.
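As background to the machinery that PPHMC extends, a minimal leapfrog-based HMC transition for a smooth Euclidean target can be sketched as follows. This is the standard algorithm only, not the orthant-complex or surrogate-function machinery of the paper, and all names are illustrative:

```python
import numpy as np

def leapfrog(q, p, grad_logp, step, n_steps):
    """Volume-preserving leapfrog integration of Hamiltonian dynamics."""
    p = p + 0.5 * step * grad_logp(q)
    for _ in range(n_steps - 1):
        q = q + step * p
        p = p + step * grad_logp(q)
    q = q + step * p
    p = p + 0.5 * step * grad_logp(q)
    return q, p

def hmc_step(q, logp, grad_logp, step=0.1, n_steps=20, rng=None):
    """One HMC transition: resample momentum, integrate, Metropolis accept."""
    rng = rng or np.random.default_rng()
    p0 = rng.standard_normal(q.shape)
    q_new, p_new = leapfrog(q, p0, grad_logp, step, n_steps)
    h_old = -logp(q) + 0.5 * p0 @ p0
    h_new = -logp(q_new) + 0.5 * p_new @ p_new
    return q_new if rng.uniform() < np.exp(h_old - h_new) else q
```

Run on a standard Gaussian target, the chain reproduces the target's first two moments; PPHMC's contribution is making such trajectories well-defined when they cross orthant boundaries of tree space.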
2003.09342
Yuri Kozitsky
Yuri Kozitsky and Krzysztof Pilorz
Modeling tumor growth: a simple individual-based model and its analysis
A Chapter in: Order, Disorder and Criticality: Advanced Problems of Phase Transition Theory. Ed. by Yu. Holovatch. Vol. 6, 2020, World Scientific, Singapore https://www.worldscientific.com/worldscibooks/10.1142/11711
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Initiation and development of a malignant tumor is a complex phenomenon that has critical stages determining its long-time behavior. This phenomenon is mathematically described by means of various models: from simple heuristic models to those employing stochastic processes. In this chapter, we discuss some aspects of such modeling by analyzing a simple individual-based model, in which tumor cells are represented as point particles drifting in $\mathbf{R}_{+}:=[0,+\infty)$ towards the origin with unit speed. At the origin, each of them splits into two new particles that instantly appear in $\mathbf{R}_{+}$ at random positions. During their drift the particles are subject to a random death before splitting. In this model, trait $x\in \mathbf{R}_{+}$ of a given cell corresponds to the time to its division and the death is caused by therapeutic factors. On this basis, we demonstrate how to derive a condition -- involving the therapy-related death rate and cell cycle distribution parameters -- under which the tumor size remains bounded in time, which practically means combating the disease.
[ { "created": "Fri, 20 Mar 2020 15:51:40 GMT", "version": "v1" } ]
2020-03-23
[ [ "Kozitsky", "Yuri", "" ], [ "Pilorz", "Krzysztof", "" ] ]
Initiation and development of a malignant tumor is a complex phenomenon that has critical stages determining its long-time behavior. This phenomenon is mathematically described by means of various models: from simple heuristic models to those employing stochastic processes. In this chapter, we discuss some aspects of such modeling by analyzing a simple individual-based model, in which tumor cells are represented as point particles drifting in $\mathbf{R}_{+}:=[0,+\infty)$ towards the origin with unit speed. At the origin, each of them splits into two new particles that instantly appear in $\mathbf{R}_{+}$ at random positions. During their drift the particles are subject to a random death before splitting. In this model, trait $x\in \mathbf{R}_{+}$ of a given cell corresponds to the time to its division and the death is caused by therapeutic factors. On this basis, we demonstrate how to derive a condition -- involving the therapy-related death rate and cell cycle distribution parameters -- under which the tumor size remains bounded in time, which practically means combating the disease.
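The boundedness condition can be illustrated with a generation-based caricature of the drift-and-split model, under an assumed Exp(1) division-time distribution: a cell drifting for time x survives the therapy-induced death rate m with probability exp(-m*x), so the mean offspring per division is 2*E[exp(-mX)] = 2/(1+m), and m = 1 is critical. All distributional choices here are illustrative, not the paper's general derivation:

```python
import numpy as np

def simulate_population(m, n_generations, rng, n0=100, cap=10**6):
    """Generation-based branching caricature: each cell draws a division
    time x ~ Exp(1), survives the drift with probability exp(-m*x)
    (m: therapy-induced death rate), and splits into two at the origin.
    Returns the population size per generation."""
    pop = n0
    sizes = [pop]
    for _ in range(n_generations):
        x = rng.exponential(1.0, size=pop)
        survivors = (rng.random(pop) < np.exp(-m * x)).sum()
        pop = int(2 * survivors)
        sizes.append(pop)
        if pop == 0 or pop > cap:
            break
    return sizes
```

With m = 3 (mean offspring 0.5) the population dies out; with m = 0.2 (mean offspring about 1.67) it grows without bound, matching the sub/supercritical dichotomy the chapter derives.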
2307.02170
K. Anton Feenstra
Olga Ivanova, Jose Gavald\'a-Garci\'a, Dea Gogishvili, Isabel Houtkamp, Robbin Bouwmeester, K. Anton Feenstra, Sanne Abeln
Structure Alignment
editorial responsability: K. Anton Feenstra, Sanne Abeln. This chapter is part of the book "Introduction to Protein Structural Bioinformatics". The Preface arXiv:1801.09442 contains links to all the (published) chapters. The update adds available arxiv hyperlinks for the chapters
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory level book for the field of Structural Bioinformatics. This book aims to give an introduction into Structural Bioinformatics, which is where the previous topics meet to explore three dimensional protein structures through computational analysis. We provide an overview of existing computational techniques, to validate, simulate, predict and analyse protein structures. More importantly, it will aim to provide practical knowledge about how and when to use such techniques. We will consider proteins from three major vantage points: Protein structure quantification, Protein structure prediction, and Protein simulation & dynamics. The Protein DataBank (PDB) contains a wealth of structural information. In order to investigate the similarity between different proteins in this database, one can compare the primary sequence through pairwise alignment and calculate the sequence identity (or similarity) over the two sequences. This strategy will work particularly well if the proteins you want to compare are close homologs. However, in this chapter we will explain that a structural comparison through structural alignment will give you much more valuable information, that allows you to investigate similarities between proteins that cannot be discovered by comparing the sequences alone.
[ { "created": "Wed, 5 Jul 2023 10:12:57 GMT", "version": "v1" }, { "created": "Thu, 6 Jul 2023 18:07:04 GMT", "version": "v2" } ]
2023-07-10
[ [ "Ivanova", "Olga", "" ], [ "Gavaldá-Garciá", "Jose", "" ], [ "Gogishvili", "Dea", "" ], [ "Houtkamp", "Isabel", "" ], [ "Bouwmeester", "Robbin", "" ], [ "Feenstra", "K. Anton", "" ], [ "Abeln", "Sanne", "" ] ]
While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory level book for the field of Structural Bioinformatics. This book aims to give an introduction into Structural Bioinformatics, which is where the previous topics meet to explore three dimensional protein structures through computational analysis. We provide an overview of existing computational techniques, to validate, simulate, predict and analyse protein structures. More importantly, it will aim to provide practical knowledge about how and when to use such techniques. We will consider proteins from three major vantage points: Protein structure quantification, Protein structure prediction, and Protein simulation & dynamics. The Protein DataBank (PDB) contains a wealth of structural information. In order to investigate the similarity between different proteins in this database, one can compare the primary sequence through pairwise alignment and calculate the sequence identity (or similarity) over the two sequences. This strategy will work particularly well if the proteins you want to compare are close homologs. However, in this chapter we will explain that a structural comparison through structural alignment will give you much more valuable information, that allows you to investigate similarities between proteins that cannot be discovered by comparing the sequences alone.
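The core numerical step of a structural comparison, once a residue correspondence is fixed, is the optimal superposition of two coordinate sets (the Kabsch algorithm). The sketch below shows only that step; finding the correspondence itself, which the chapter discusses, is the hard part of structure alignment:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (n, 3) coordinate sets with a known residue
    correspondence, after optimal translation and rotation (Kabsch)."""
    P = P - P.mean(axis=0)                      # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                                 # cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # optimal rotation
    return float(np.sqrt(((P @ R.T - Q) ** 2).sum() / len(P)))
```

A rotated and translated copy of a structure superposes with RMSD zero, while added coordinate noise shows up directly in the score.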
2003.08479
Vinko Zlati\'c
Vinko Zlati\'c, Irena Barja\v{s}i\'c, Andrea Kadovi\'c, Hrvoje \v{S}tefan\v{c}i\'c, Andrea Gabrielli
Bi-stability of SUDR+K model of epidemics and test kits applied to COVID-19
6 pages, 7 figures, Nonlinear Dyn (2020)
Nonlinear Dynamics volume 101 (2020)
10.1007/s11071-020-05888-w
RBI-ThPhys-2020-08
q-bio.PE cond-mat.stat-mech nlin.AO physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by the various responses of world governments to COVID-19, here we develop a toy model of the dependence of epidemic spreading on the availability of tests for the disease. Our model, which we call SUDR+K, is based on the usual SIR model, but it splits the total fraction of infected individuals into two components: those that are undetected and those that are detected through tests. Moreover, we assume that available tests increase at a constant rate from the beginning of the epidemic but are consumed to detect infected individuals. Strikingly, we find a bi-stable behavior between a phase with a giant fraction of infected and a phase with a very small fraction. We show that the separation between these two regimes is governed by a match between the rate of testing and the rate of infection spread at a given time. We also show that the existence of two phases does not depend on the mathematical choice of the form of the term describing the rate at which undetected individuals are tested and detected. The presented research implies that a vigorous early testing activity, before the epidemic enters its giant phase, can potentially keep the epidemic under control, and that even a very small change in the rate of testing can increase or decrease the size of the whole epidemic by several orders of magnitude. For a real application of a realistic model to an ongoing epidemic, we would gladly collaborate with field epidemiologists in order to develop quantitative models of the testing process.
[ { "created": "Wed, 18 Mar 2020 21:32:42 GMT", "version": "v1" }, { "created": "Fri, 24 Apr 2020 10:02:48 GMT", "version": "v2" } ]
2020-12-01
[ [ "Zlatić", "Vinko", "" ], [ "Barjašić", "Irena", "" ], [ "Kadović", "Andrea", "" ], [ "Štefančić", "Hrvoje", "" ], [ "Gabrielli", "Andrea", "" ] ]
Motivated by the various responses of world governments to COVID-19, here we develop a toy model of the dependence of epidemic spreading on the availability of tests for the disease. Our model, which we call SUDR+K, is based on the usual SIR model, but it splits the total fraction of infected individuals into two components: those that are undetected and those that are detected through tests. Moreover, we assume that available tests increase at a constant rate from the beginning of the epidemic but are consumed to detect infected individuals. Strikingly, we find a bi-stable behavior between a phase with a giant fraction of infected and a phase with a very small fraction. We show that the separation between these two regimes is governed by a match between the rate of testing and the rate of infection spread at a given time. We also show that the existence of two phases does not depend on the mathematical choice of the form of the term describing the rate at which undetected individuals are tested and detected. The presented research implies that a vigorous early testing activity, before the epidemic enters its giant phase, can potentially keep the epidemic under control, and that even a very small change in the rate of testing can increase or decrease the size of the whole epidemic by several orders of magnitude. For a real application of a realistic model to an ongoing epidemic, we would gladly collaborate with field epidemiologists in order to develop quantitative models of the testing process.
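The compartmental structure described in the abstract can be sketched by Euler integration of an assumed SIR-style system with an extra kit pool K, produced at a constant rate kappa and consumed by detection. The equations and parameter names below are guessed illustrative forms, not the exact system of the paper:

```python
import numpy as np

def sudrk_step(state, beta, gamma, tau, kappa, dt):
    """One Euler step of an assumed SUDR+K-style model.
    s: susceptible, u: undetected infected, d: detected (isolated) infected,
    r: removed, k: available test kits. The detection term tau*k*u is an
    assumed mass-action form; detected individuals no longer transmit."""
    s, u, d, r, k = state
    new_inf = beta * s * u            # transmission only from undetected
    detect = tau * k * u              # testing converts U -> D, consuming kits
    ds = -new_inf
    du = new_inf - gamma * u - detect
    dd = detect - gamma * d
    dr = gamma * (u + d)
    dk = kappa - detect               # kits produced at constant rate
    return (s + dt * ds, u + dt * du, d + dt * dd,
            r + dt * dr, max(k + dt * dk, 0.0))

def run(tau, t_end=200.0, dt=0.01):
    state = (0.999, 0.001, 0.0, 0.0, 0.0)
    for _ in range(int(t_end / dt)):
        state = sudrk_step(state, beta=0.3, gamma=0.1,
                           tau=tau, kappa=0.05, dt=dt)
    return state
```

Comparing run(0.0) with run(5.0) reproduces the qualitative sensitivity to testing: with no tests almost the whole population is eventually infected, while sufficiently fast testing keeps the outbreak tiny.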
2006.14357
Samuel Johnson
Samuel D. N. Johnson and Sean P. Cox
Hierarchical stock assessment methods improve management performance in multi-species, data-limited fisheries
39 pages, 9 figures, 6 tables, 3 appendices
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Management performance of five alternative stock assessment methods was evaluated by using them to set harvest levels targeting multi-species maximum yield in a multi-species flatfish fishery, including single-species and hierarchical multi-species models, and methods that pooled data across species and spatial strata, with catch outcomes of each method under three data scenarios compared to catch under an omniscient manager simulation. Operating models included technical interactions between species intended to produce choke effects often observed in output-controlled multi-species fisheries. Hierarchical multi-species models outperformed all other methods under data-poor and data-moderate scenarios, and outperformed single-species models under the data-rich scenario. Hierarchical models were least sensitive to prior precision, sometimes improving in performance when prior precision was reduced. Choke effects were found to have both positive and negative consequences, sometimes leading to underfishing of non-choke species, but at other times preventing overfishing of non-choke species. We highlight the importance of including technical interactions in multi-species assessment models and management objectives, discuss how choke species can indicate mismatches between management objectives and system dynamics, and recommend hierarchical multi-species models for multi-species fishery management systems.
[ { "created": "Thu, 25 Jun 2020 12:57:43 GMT", "version": "v1" } ]
2020-06-26
[ [ "Johnson", "Samuel D. N.", "" ], [ "Cox", "Sean P.", "" ] ]
Management performance of five alternative stock assessment methods was evaluated by using them to set harvest levels targeting multi-species maximum yield in a multi-species flatfish fishery, including single-species and hierarchical multi-species models, and methods that pooled data across species and spatial strata, with catch outcomes of each method under three data scenarios compared to catch under an omniscient manager simulation. Operating models included technical interactions between species intended to produce choke effects often observed in output-controlled multi-species fisheries. Hierarchical multi-species models outperformed all other methods under data-poor and data-moderate scenarios, and outperformed single-species models under the data-rich scenario. Hierarchical models were least sensitive to prior precision, sometimes improving in performance when prior precision was reduced. Choke effects were found to have both positive and negative consequences, sometimes leading to underfishing of non-choke species, but at other times preventing overfishing of non-choke species. We highlight the importance of including technical interactions in multi-species assessment models and management objectives, discuss how choke species can indicate mismatches between management objectives and system dynamics, and recommend hierarchical multi-species models for multi-species fishery management systems.
2111.06892
Iman Mohammed Attia Abd Elkhalik Abo Elreesh Dr.
Iman Mohammed Attia Ebd-Elkhalik Abo-Elreesh
Analysis of chronic diseases progression using stochastic modeling
This book is 219 pages; the MATLAB code is published on the Code Ocean site, along with a Stata do-file and the dataset
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
This book handles fatty liver disease from the biostatistical point of view. It discusses the disease process in the simple general form of a health-disease-death multi-state model. Continuous-time Markov chains are used to estimate the rate transition matrix utilizing the MLE and the Quasi-Newton formula; once obtained, the probability transition matrix can be estimated by exponentiation of the rate matrix. The probability transition matrix can also be obtained by solving the forward Kolmogorov differential equations, which yields a more stable solution than exponentiation of the rate matrix. The disease process is expanded into a 9-state model to explain the transitions among the detailed stages of the disease process in more elaborate form. The probability transition matrix is used to estimate the number of patients in each stage; this matrix, along with the rate transition matrix, is used to estimate the life expectancy of patients in each stage. These statistical indices are of great value as they can help health policy makers and medical insurance managers to allocate resources for investigating and treating patients in different stages of the disease. This method has high potential value for use in longitudinal studies conducted by pharmaceutical companies to evaluate the effect of anti-fibrotic drugs used to treat patients within the initial stages of fibrosis. A Poisson regression model is also used to relate high-risk covariates such as type 2 diabetes, hypercholesterolemia, obesity and hypertension to the rate of progression and evolution of the stages of the disease over time. The general model, the expanded model and the model with covariates are illustrated by artificial hypothetical examples to demonstrate the mathematical statistical indices.
[ { "created": "Fri, 12 Nov 2021 17:26:04 GMT", "version": "v1" } ]
2021-11-16
[ [ "Abo-Elreesh", "Iman Mohammed Attia Ebd-Elkhalik", "" ] ]
This book handles fatty liver disease from the biostatistical point of view. It discusses the disease process in the simple general form of a health-disease-death multi-state model. Continuous-time Markov chains are used to estimate the rate transition matrix utilizing the MLE and Quasi-Newton formula; once obtained, the probability transition matrix can be estimated by exponentiation of the rate matrix. The probability transition matrix can also be obtained by solving the forward Kolmogorov differential equations, which yields a more stable solution than exponentiation of the rate matrix. The disease process is expanded into a 9-state model to explain the transitions among the detailed stages of the disease process in more elaborate form. The probability transition matrix is used to estimate the number of patients in each stage; this matrix, along with the rate transition matrix, is used to estimate the life expectancy of patients in each stage. These statistical indices are of great value as they can help health policy makers and medical insurance managers to allocate resources for investigating and treating patients in different stages of the disease. This method has high potential value for use in longitudinal studies conducted by pharmaceutical companies to evaluate the effect of anti-fibrotic drugs used to treat patients within the initial stages of fibrosis. A Poisson regression model is also used to relate high-risk covariates such as type 2 diabetes, hypercholesterolemia, obesity and hypertension to the rate of progression and evolution of the stages of the disease over time. The general model, the expanded model and the model with covariates are illustrated by artificial hypothetical examples to demonstrate the mathematical statistical indices.
q-bio/0611083
Thierry Rabilloud
Mireille Chevallet, H\'el\`ene Diemer (IPHC), Sylvie Luche, Alain van Dorsselaer (IPHC), Thierry Rabilloud, Emmanuelle Leize-Wagner (IPHC)
Improved mass spectrometry compatibility is afforded by ammoniacal silver staining
Website publisher: http://www.interscience.wiley.com
Proteomics 6 (04/2006) 2350-4
10.1002/pmic.200500567
null
q-bio.GN
null
Sequence coverage in MS analysis of protein digestion-derived peptides is a key issue for detailed characterization of proteins or identification at low quantities. In gel-based proteomics studies, the sequence coverage greatly depends on the protein detection method. It is shown here that ammoniacal silver detection methods offer improved sequence coverage over standard silver nitrate methods, while keeping the high sensitivity of silver staining. With the development of 2D-PAGE-based proteomics, another burden is placed on the detection methods used for protein detection on 2-D-gels. Besides the classical requirements of linearity, sensitivity, and homogeneity from one protein to another, detection methods must now take into account another aspect, namely their compatibility with MS. This compatibility is evidenced by two different and complementary aspects, which are (i) the absence of adducts and artefactual modifications on the peptides obtained after protease digestion of a protein detected and digested in-gel, and (ii) the quantitative yield of peptides recovered after digestion and analyzed by the mass spectrometer. While this quantitative yield is not very important per se, it is however a crucial parameter as it strongly influences the S/N of the mass spectrum and thus the number of peptides that can be detected from a given protein input, especially at low protein amounts. This influences in turn the sequence coverage and thus the detail of the analysis provided by the mass spectrometer.
[ { "created": "Fri, 24 Nov 2006 12:50:09 GMT", "version": "v1" } ]
2016-08-16
[ [ "Chevallet", "Mireille", "", "IPHC" ], [ "Diemer", "Hélène", "", "IPHC" ], [ "Luche", "Sylvie", "", "IPHC" ], [ "van Dorsselaer", "Alain", "", "IPHC" ], [ "Rabilloud", "Thierry", "", "IPHC" ], [ "Leize-Wagner", "Emmanuelle", "", "IPHC" ] ]
Sequence coverage in MS analysis of protein digestion-derived peptides is a key issue for detailed characterization of proteins or identification at low quantities. In gel-based proteomics studies, the sequence coverage greatly depends on the protein detection method. It is shown here that ammoniacal silver detection methods offer improved sequence coverage over standard silver nitrate methods, while keeping the high sensitivity of silver staining. With the development of 2D-PAGE-based proteomics, another burden is placed on the detection methods used for protein detection on 2-D-gels. Besides the classical requirements of linearity, sensitivity, and homogeneity from one protein to another, detection methods must now take into account another aspect, namely their compatibility with MS. This compatibility is evidenced by two different and complementary aspects, which are (i) the absence of adducts and artefactual modifications on the peptides obtained after protease digestion of a protein detected and digested in-gel, and (ii) the quantitative yield of peptides recovered after digestion and analyzed by the mass spectrometer. While this quantitative yield is not very important per se, it is however a crucial parameter as it strongly influences the S/N of the mass spectrum and thus the number of peptides that can be detected from a given protein input, especially at low protein amounts. This influences in turn the sequence coverage and thus the detail of the analysis provided by the mass spectrometer.
1202.3015
Andrew Teschendorff
James West, Ginestra Bianconi, Simone Severini, Andrew Teschendorff
On dynamic network entropy in cancer
10 pages, 3 figures, 4 tables. Submitted
Sci Rep. 2012;2:802
10.1038/srep00802
null
q-bio.MN cond-mat.dis-nn cond-mat.stat-mech q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The cellular phenotype is described by a complex network of molecular interactions. Elucidating network properties that distinguish disease from the healthy cellular state is therefore of critical importance for gaining systems-level insights into disease mechanisms and ultimately for developing improved therapies. By integrating gene expression data with a protein interaction network to induce a stochastic dynamics on the network, we here demonstrate that cancer cells are characterised by an increase in the dynamic network entropy, compared to cells of normal physiology. Using a fundamental relation between the macroscopic resilience of a dynamical system and the uncertainty (entropy) in the underlying microscopic processes, we argue that cancer cells will be more robust to random gene perturbations. In addition, we formally demonstrate that gene expression differences between normal and cancer tissue are anticorrelated with local dynamic entropy changes, thus providing a systemic link between gene expression changes at the nodes and their local network dynamics. In particular, we also find that genes which drive cell-proliferation in cancer cells and which often encode oncogenes are associated with reductions in the dynamic network entropy. In summary, our results support the view that the observed increased robustness of cancer cells to perturbation and therapy may be due to an increase in the dynamic network entropy that allows cells to adapt to the new cellular stresses. Conversely, genes that exhibit local flux entropy decreases in cancer may render cancer cells more susceptible to targeted intervention and may therefore represent promising drug targets.
[ { "created": "Tue, 14 Feb 2012 12:24:09 GMT", "version": "v1" }, { "created": "Sat, 18 Aug 2012 19:50:11 GMT", "version": "v2" } ]
2012-11-22
[ [ "West", "James", "" ], [ "Bianconi", "Ginestra", "" ], [ "Severini", "Simone", "" ], [ "Teschendorff", "Andrew", "" ] ]
The cellular phenotype is described by a complex network of molecular interactions. Elucidating network properties that distinguish disease from the healthy cellular state is therefore of critical importance for gaining systems-level insights into disease mechanisms and ultimately for developing improved therapies. By integrating gene expression data with a protein interaction network to induce a stochastic dynamics on the network, we here demonstrate that cancer cells are characterised by an increase in the dynamic network entropy, compared to cells of normal physiology. Using a fundamental relation between the macroscopic resilience of a dynamical system and the uncertainty (entropy) in the underlying microscopic processes, we argue that cancer cells will be more robust to random gene perturbations. In addition, we formally demonstrate that gene expression differences between normal and cancer tissue are anticorrelated with local dynamic entropy changes, thus providing a systemic link between gene expression changes at the nodes and their local network dynamics. In particular, we also find that genes which drive cell-proliferation in cancer cells and which often encode oncogenes are associated with reductions in the dynamic network entropy. In summary, our results support the view that the observed increased robustness of cancer cells to perturbation and therapy may be due to an increase in the dynamic network entropy that allows cells to adapt to the new cellular stresses. Conversely, genes that exhibit local flux entropy decreases in cancer may render cancer cells more susceptible to targeted intervention and may therefore represent promising drug targets.
1911.03044
Min Xu
Xiangrui Zeng, Min Xu
AITom: Open-source AI platform for cryo-electron tomography data analysis
2 figures
null
null
null
q-bio.QM cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cryo-electron tomography (cryo-ET) is an emerging technology for the 3D visualization of structural organizations and interactions of subcellular components at near-native state and sub-molecular resolution. Tomograms captured by cryo-ET contain heterogeneous structures representing the complex and dynamic subcellular environment. Since the structures are not purified or fluorescently labeled, the spatial organization and interaction between both the known and unknown structures can be studied in their native environment. The rapid advances of cryo-electron tomography (cryo-ET) have generated abundant 3D cellular imaging data. However, the systematic localization, identification, segmentation, and structural recovery of the subcellular components require efficient and accurate large-scale image analysis methods. We introduce AITom, an open-source artificial intelligence platform for cryo-ET researchers. AITom provides many public as well as in-house algorithms for performing cryo-ET data analysis through both the traditional template-based or template-free approach and the deep learning approach. AITom also supports remote interactive analysis. Comprehensive tutorials for each analysis module are provided to guide the user through. We welcome researchers and developers to join this collaborative open-source software development project. Availability: https://github.com/xulabs/aitom
[ { "created": "Fri, 8 Nov 2019 04:33:18 GMT", "version": "v1" }, { "created": "Fri, 30 Oct 2020 17:44:31 GMT", "version": "v2" } ]
2020-11-02
[ [ "Zeng", "Xiangrui", "" ], [ "Xu", "Min", "" ] ]
Cryo-electron tomography (cryo-ET) is an emerging technology for the 3D visualization of structural organizations and interactions of subcellular components at near-native state and sub-molecular resolution. Tomograms captured by cryo-ET contain heterogeneous structures representing the complex and dynamic subcellular environment. Since the structures are not purified or fluorescently labeled, the spatial organization and interaction between both the known and unknown structures can be studied in their native environment. The rapid advances of cryo-electron tomography (cryo-ET) have generated abundant 3D cellular imaging data. However, the systematic localization, identification, segmentation, and structural recovery of the subcellular components require efficient and accurate large-scale image analysis methods. We introduce AITom, an open-source artificial intelligence platform for cryo-ET researchers. AITom provides many public as well as in-house algorithms for performing cryo-ET data analysis through both the traditional template-based or template-free approach and the deep learning approach. AITom also supports remote interactive analysis. Comprehensive tutorials for each analysis module are provided to guide the user through. We welcome researchers and developers to join this collaborative open-source software development project. Availability: https://github.com/xulabs/aitom
2102.10400
Alain Moise Dikande Pr.
Alain M. Dikande
On a nonlinear electromechanical model of nerve
7 pages, 7 figures, submitted
null
null
null
q-bio.NC nlin.PS
http://creativecommons.org/licenses/by/4.0/
The generation of action potential brings into play specific mechanosensory stimuli manifest in the variation of membrane capacitance, resulting from the selective membrane permeability to ion exchanges and testifying to the central role of electromechanical processes in the buildup mechanism of nerve impulse. As well established [See e.g. D. Gross et al, Cellular and Molecular Neurobiology vol. 3, p. 89 (1983)], in these electromechanical processes the net instantaneous charge stored in the membrane is regulated by the rate of change of the net fluid density through the membrane, corresponding to the difference in densities of extracellular and intracellular fluids. An electromechanical model is proposed for which mechanical forces are assumed to result from the flow of ionic liquids through the nerve membrane, generating pressure waves stimulating the membrane and hence controlling the net charge stored in the membrane capacitor. The model features coupled nonlinear partial differential equations: the familiar Hodgkin-Huxley's cable equation for the transmembrane voltage in which the membrane capacitor is now a capacitive diode, and the Heimburg-Jackson's nonlinear hydrodynamic equation for the pressure wave controlling the total charge in the membrane capacitor. In the stationary regime, the Hodgkin-Huxley cable equation with variable capacitance reduces to a linear operator problem with zero eigenvalue, the bound states of which can be obtained exactly for specific values of characteristic parameters of the model. In the dynamical regime, numerical simulations of the modified Hodgkin-Huxley equation lead to a variety of typical figures for the transmembrane voltage, reminiscent of action potentials observed in real physiological contexts.
[ { "created": "Sat, 20 Feb 2021 17:43:07 GMT", "version": "v1" } ]
2021-02-23
[ [ "Dikande", "Alain M.", "" ] ]
The generation of action potential brings into play specific mechanosensory stimuli manifest in the variation of membrane capacitance, resulting from the selective membrane permeability to ion exchanges and testifying to the central role of electromechanical processes in the buildup mechanism of nerve impulse. As well established [See e.g. D. Gross et al, Cellular and Molecular Neurobiology vol. 3, p. 89 (1983)], in these electromechanical processes the net instantaneous charge stored in the membrane is regulated by the rate of change of the net fluid density through the membrane, corresponding to the difference in densities of extracellular and intracellular fluids. An electromechanical model is proposed for which mechanical forces are assumed to result from the flow of ionic liquids through the nerve membrane, generating pressure waves stimulating the membrane and hence controlling the net charge stored in the membrane capacitor. The model features coupled nonlinear partial differential equations: the familiar Hodgkin-Huxley's cable equation for the transmembrane voltage in which the membrane capacitor is now a capacitive diode, and the Heimburg-Jackson's nonlinear hydrodynamic equation for the pressure wave controlling the total charge in the membrane capacitor. In the stationary regime, the Hodgkin-Huxley cable equation with variable capacitance reduces to a linear operator problem with zero eigenvalue, the bound states of which can be obtained exactly for specific values of characteristic parameters of the model. In the dynamical regime, numerical simulations of the modified Hodgkin-Huxley equation lead to a variety of typical figures for the transmembrane voltage, reminiscent of action potentials observed in real physiological contexts.
2004.09471
Markus Kantner
Markus Kantner and Thomas Koprucki
Beyond just "flattening the curve": Optimal control of epidemics with purely non-pharmaceutical interventions
Keywords: Mathematical epidemiology, optimal control, non-pharmaceutical interventions, effective reproduction number, dynamical systems, COVID-19, SARS-CoV2
Journal of Mathematics in Industry 10, 23 (2020)
10.1186/s13362-020-00091-3
null
q-bio.PE math.DS math.OC physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When effective medical treatment and vaccination are not available, non-pharmaceutical interventions such as social distancing, home quarantine and far-reaching shutdown of public life are the only available strategies to prevent the spread of epidemics. Based on an extended SEIR (susceptible-exposed-infectious-recovered) model and continuous-time optimal control theory, we compute the optimal non-pharmaceutical intervention strategy for the case that a vaccine is never found and complete containment (eradication of the epidemic) is impossible. In this case, the optimal control must meet competing requirements: First, the minimization of disease-related deaths, and, second, the establishment of a sufficient degree of natural immunity at the end of the measures, in order to exclude a second wave. Moreover, the socio-economic costs of the intervention shall be kept at a minimum. The numerically computed optimal control strategy is a single-intervention scenario that goes beyond heuristically motivated interventions and simple "flattening of the curve." Careful analysis of the computed control strategy reveals, however, that the obtained solution is in fact a tightrope walk close to the stability boundary of the system, where socio-economic costs and the risk of a new outbreak must be constantly balanced against one another. The model system is calibrated to reproduce the initial exponential growth phase of the COVID-19 pandemic in Germany.
[ { "created": "Mon, 20 Apr 2020 17:47:55 GMT", "version": "v1" }, { "created": "Thu, 23 Apr 2020 17:55:20 GMT", "version": "v2" }, { "created": "Tue, 14 Jul 2020 00:46:22 GMT", "version": "v3" } ]
2020-08-19
[ [ "Kantner", "Markus", "" ], [ "Koprucki", "Thomas", "" ] ]
When effective medical treatment and vaccination are not available, non-pharmaceutical interventions such as social distancing, home quarantine and far-reaching shutdown of public life are the only available strategies to prevent the spread of epidemics. Based on an extended SEIR (susceptible-exposed-infectious-recovered) model and continuous-time optimal control theory, we compute the optimal non-pharmaceutical intervention strategy for the case that a vaccine is never found and complete containment (eradication of the epidemic) is impossible. In this case, the optimal control must meet competing requirements: First, the minimization of disease-related deaths, and, second, the establishment of a sufficient degree of natural immunity at the end of the measures, in order to exclude a second wave. Moreover, the socio-economic costs of the intervention shall be kept at a minimum. The numerically computed optimal control strategy is a single-intervention scenario that goes beyond heuristically motivated interventions and simple "flattening of the curve." Careful analysis of the computed control strategy reveals, however, that the obtained solution is in fact a tightrope walk close to the stability boundary of the system, where socio-economic costs and the risk of a new outbreak must be constantly balanced against one another. The model system is calibrated to reproduce the initial exponential growth phase of the COVID-19 pandemic in Germany.
1805.00106
Nathaniel McVicar
Nathaniel McVicar, Akina Hoshino, Anna La Torre, Thomas A. Reh, Walter L. Ruzzo and Scott Hauck
FPGA Acceleration of Short Read Alignment
15 pages, technical report version
null
null
null
q-bio.GN cs.CE cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aligning millions of short DNA or RNA reads, of 75 to 250 base pairs each, to a reference genome is a significant computation problem in bioinformatics. We present a flexible and fast FPGA-based short read alignment tool. Our aligner makes use of the processing power of FPGAs in conjunction with the greater host memory bandwidth and flexibility of software to improve performance and achieve a high level of configurability. This flexible design supports a variety of reference genome sizes without the performance degradation suffered by other software and FPGA-based aligners. It is also better able to support the features of new alignment algorithms, which frequently crop up in the rapidly evolving field of bioinformatics. We demonstrate these advantages in a case study where we align RNA-Seq data from a hypothesized mouse / human xenograft. In this case study, our aligner provides a speedup of 5.6x over BWA-SW with energy savings of 21%, while also reducing incorrect short read classification by 29%. To demonstrate the flexibility of our system we show that the speedup can be substantially improved while retaining most of the accuracy gains over BWA-SW. The speedup can be increased to 71.3x, while still enjoying a 28% incorrect classification improvement and 52% improvement in unaligned reads.
[ { "created": "Mon, 30 Apr 2018 21:26:41 GMT", "version": "v1" } ]
2018-05-02
[ [ "McVicar", "Nathaniel", "" ], [ "Hoshino", "Akina", "" ], [ "La Torre", "Anna", "" ], [ "Reh", "Thomas A.", "" ], [ "Ruzzo", "Walter L.", "" ], [ "Hauck", "Scott", "" ] ]
Aligning millions of short DNA or RNA reads, of 75 to 250 base pairs each, to a reference genome is a significant computation problem in bioinformatics. We present a flexible and fast FPGA-based short read alignment tool. Our aligner makes use of the processing power of FPGAs in conjunction with the greater host memory bandwidth and flexibility of software to improve performance and achieve a high level of configurability. This flexible design supports a variety of reference genome sizes without the performance degradation suffered by other software and FPGA-based aligners. It is also better able to support the features of new alignment algorithms, which frequently crop up in the rapidly evolving field of bioinformatics. We demonstrate these advantages in a case study where we align RNA-Seq data from a hypothesized mouse / human xenograft. In this case study, our aligner provides a speedup of 5.6x over BWA-SW with energy savings of 21%, while also reducing incorrect short read classification by 29%. To demonstrate the flexibility of our system we show that the speedup can be substantially improved while retaining most of the accuracy gains over BWA-SW. The speedup can be increased to 71.3x, while still enjoying a 28% incorrect classification improvement and 52% improvement in unaligned reads.
2303.04691
Juan Antonio Guerrero Montero
Juan Guerrero Montero, Richard A. Blythe
Self-contained Beta-with-Spikes Approximation for Inference Under a Wright-Fisher Model
null
null
10.1093/genetics/iyad092
null
q-bio.PE cond-mat.stat-mech cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We construct a reliable estimation of evolutionary parameters within the Wright-Fisher model, which describes changes in allele frequencies due to selection and genetic drift, from time-series data. Such data exists for biological populations, for example via artificial evolution experiments, and for the cultural evolution of behavior, such as linguistic corpora that document historical usage of different words with similar meanings. Our method of analysis builds on a Beta-with-Spikes approximation to the distribution of allele frequencies predicted by the Wright-Fisher model. We introduce a self-contained scheme for estimating the parameters in the approximation, and demonstrate its robustness with synthetic data, especially in the strong-selection and near-extinction regimes where previous approaches fail. We further apply the method to allele frequency data for baker's yeast (Saccharomyces cerevisiae), finding a significant signal of selection in cases where independent evidence supports such a conclusion. We further demonstrate the possibility of detecting time-points at which evolutionary parameters change in the context of a historical spelling reform in the Spanish language.
[ { "created": "Wed, 8 Mar 2023 16:32:10 GMT", "version": "v1" }, { "created": "Thu, 11 May 2023 15:59:00 GMT", "version": "v2" } ]
2023-05-26
[ [ "Montero", "Juan Guerrero", "" ], [ "Blythe", "Richard A.", "" ] ]
We construct a reliable estimation of evolutionary parameters within the Wright-Fisher model, which describes changes in allele frequencies due to selection and genetic drift, from time-series data. Such data exists for biological populations, for example via artificial evolution experiments, and for the cultural evolution of behavior, such as linguistic corpora that document historical usage of different words with similar meanings. Our method of analysis builds on a Beta-with-Spikes approximation to the distribution of allele frequencies predicted by the Wright-Fisher model. We introduce a self-contained scheme for estimating the parameters in the approximation, and demonstrate its robustness with synthetic data, especially in the strong-selection and near-extinction regimes where previous approaches fail. We further apply the method to allele frequency data for baker's yeast (Saccharomyces cerevisiae), finding a significant signal of selection in cases where independent evidence supports such a conclusion. We further demonstrate the possibility of detecting time-points at which evolutionary parameters change in the context of a historical spelling reform in the Spanish language.
1608.01179
David Zwicker
David Zwicker
Normalized neural representations of natural odors
10 pages, 5 figures
null
10.1371/journal.pone.0166456
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The olfactory system removes correlations in natural odors using a network of inhibitory neurons in the olfactory bulb. It has been proposed that this network integrates the response from all olfactory receptors and inhibits them equally. However, how such global inhibition influences the neural representations of odors is unclear. Here, we study a simple statistical model of this situation, which leads to concentration-invariant, sparse representations of the odor composition. We show that the inhibition strength can be tuned to obtain sparse representations that are still useful to discriminate odors that vary in relative concentration, size, and composition. The model reveals two generic consequences of global inhibition: (i) odors with many molecular species are more difficult to discriminate and (ii) receptor arrays with heterogeneous sensitivities perform badly. Our work can thus help to understand how global inhibition shapes normalized odor representations for further processing in the brain.
[ { "created": "Wed, 3 Aug 2016 13:14:05 GMT", "version": "v1" } ]
2022-01-31
[ [ "Zwicker", "David", "" ] ]
The olfactory system removes correlations in natural odors using a network of inhibitory neurons in the olfactory bulb. It has been proposed that this network integrates the response from all olfactory receptors and inhibits them equally. However, how such global inhibition influences the neural representations of odors is unclear. Here, we study a simple statistical model of this situation, which leads to concentration-invariant, sparse representations of the odor composition. We show that the inhibition strength can be tuned to obtain sparse representations that are still useful to discriminate odors that vary in relative concentration, size, and composition. The model reveals two generic consequences of global inhibition: (i) odors with many molecular species are more difficult to discriminate and (ii) receptor arrays with heterogeneous sensitivities perform badly. Our work can thus help to understand how global inhibition shapes normalized odor representations for further processing in the brain.
1504.07124
Emily Jane McTavish
Emily Jane McTavish, Mike Steel, Mark T. Holder
Twisted trees and inconsistency of tree estimation when gaps are treated as missing data -- the impact of model mis-specification in distance corrections
29 pages, 3 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Statistically consistent estimation of phylogenetic trees or gene trees is possible if pairwise sequence dissimilarities can be converted to a set of distances that are proportional to the true evolutionary distances. Susko et al. (2004) reported some strikingly broad results about the forms of inconsistency in tree estimation that can arise if corrected distances are not proportional to the true distances. They showed that if the corrected distance is a concave function of the true distance, then inconsistency due to long branch attraction will occur. If these functions are convex, then two "long branch repulsion" trees will be preferred over the true tree -- though these two incorrect trees are expected to be tied as the preferred tree. Here we extend their results, and demonstrate the existence of a tree shape (which we refer to as a "twisted Farris-zone" tree) for which a single incorrect tree topology will be guaranteed to be preferred if the corrected distance function is convex. We also report that the standard practice of treating gaps in sequence alignments as missing data is sufficient to produce non-linear corrected distance functions if the substitution process is not independent of the insertion/deletion process. Taken together, these results imply inconsistent tree inference under mild conditions. For example, if some positions in a sequence are constrained to be free of substitutions and insertion/deletion events while the remaining sites evolve with independent substitutions and insertion/deletion events, then the distances obtained by treating gaps as missing data can support an incorrect tree topology even given an unlimited amount of data.
[ { "created": "Mon, 27 Apr 2015 15:08:52 GMT", "version": "v1" } ]
2015-04-28
[ [ "McTavish", "Emily Jane", "" ], [ "Steel", "Mike", "" ], [ "Holder", "Mark T.", "" ] ]
Statistically consistent estimation of phylogenetic trees or gene trees is possible if pairwise sequence dissimilarities can be converted to a set of distances that are proportional to the true evolutionary distances. Susko et al. (2004) reported some strikingly broad results about the forms of inconsistency in tree estimation that can arise if corrected distances are not proportional to the true distances. They showed that if the corrected distance is a concave function of the true distance, then inconsistency due to long branch attraction will occur. If these functions are convex, then two "long branch repulsion" trees will be preferred over the true tree -- though these two incorrect trees are expected to be tied as the preferred tree. Here we extend their results, and demonstrate the existence of a tree shape (which we refer to as a "twisted Farris-zone" tree) for which a single incorrect tree topology will be guaranteed to be preferred if the corrected distance function is convex. We also report that the standard practice of treating gaps in sequence alignments as missing data is sufficient to produce non-linear corrected distance functions if the substitution process is not independent of the insertion/deletion process. Taken together, these results imply inconsistent tree inference under mild conditions. For example, if some positions in a sequence are constrained to be free of substitutions and insertion/deletion events while the remaining sites evolve with independent substitutions and insertion/deletion events, then the distances obtained by treating gaps as missing data can support an incorrect tree topology even given an unlimited amount of data.
1806.04710
Hao Wang
Hao Wang, Jiahui Wang, Xin Yuan Thow, Chengkuo Lee
A quantized physical framework for understanding the working mechanism of ion channels
null
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A quantized physical framework, called the five-anchor model, is developed for a general understanding of the working mechanism of ion channels. According to the hypotheses of this model, the following two basic physical principles are assigned to each anchor: the polarity change induced by an electron transition and the mutual repulsion and attraction induced by an electrostatic force. Consequently, many unique phenomena, such as fast and slow inactivation, the stochastic gating pattern and constant conductance of a single ion channel, the difference between electrical and optical stimulation (optogenetics), nerve conduction block and the generation of an action potential, become intrinsic features of this physical model. Moreover, this model also provides a foundation for the probability equation used to calculate the results of electrical stimulation in our previous C-P theory.
[ { "created": "Wed, 13 Jun 2018 08:58:45 GMT", "version": "v1" }, { "created": "Sat, 4 Aug 2018 09:17:38 GMT", "version": "v2" } ]
2018-08-07
[ [ "Wang", "Hao", "" ], [ "Wang", "Jiahui", "" ], [ "Thow", "Xin Yuan", "" ], [ "Lee", "Chengkuo", "" ] ]
A quantized physical framework, called the five-anchor model, is developed for a general understanding of the working mechanism of ion channels. According to the hypotheses of this model, the following two basic physical principles are assigned to each anchor: the polarity change induced by an electron transition and the mutual repulsion and attraction induced by an electrostatic force. Consequently, many unique phenomena, such as fast and slow inactivation, the stochastic gating pattern and constant conductance of a single ion channel, the difference between electrical and optical stimulation (optogenetics), nerve conduction block and the generation of an action potential, become intrinsic features of this physical model. Moreover, this model also provides a foundation for the probability equation used to calculate the results of electrical stimulation in our previous C-P theory.
q-bio/0412022
Sergiy Perepelytsya
S. M. Perepelytsya, S. N. Volkov
Ion-Phosphate Mode in the DNA Low-Frequency Spectra
MiKTeX 2.1, 6 pages with 2 figures
S. M. Perepelytsya, S. N. Volkov, Ukr. J. Phys. 49, 1072 (2004)
null
null
q-bio.BM
null
The vibrational dynamics of a DNA molecule with counterions neutralizing the charged phosphate groups have been studied. With the help of the elaborated model, the conformational vibrations of the DNA double helix with alkaline metal ions have been described both qualitatively and quantitatively. For the complexes of DNA with counterions Li+, Na+, K+, Rb+ and Cs+ the normal modes have been found, and a mode characterized by the most notable ion displacements with respect to the DNA backbone has been determined. The frequency of counterion vibrations has been established to decrease as the ion mass increases. The results of theoretical calculation have been shown to be in good agreement with the experimental data of Raman spectroscopy.
[ { "created": "Sat, 11 Dec 2004 12:17:29 GMT", "version": "v1" } ]
2007-05-23
[ [ "Perepelytsya", "S. M.", "" ], [ "Volkov", "S. N.", "" ] ]
The vibrational dynamics of a DNA molecule with counterions neutralizing the charged phosphate groups have been studied. With the help of the elaborated model, the conformational vibrations of the DNA double helix with alkaline metal ions have been described both qualitatively and quantitatively. For the complexes of DNA with counterions Li+, Na+, K+, Rb+ and Cs+ the normal modes have been found, and a mode characterized by the most notable ion displacements with respect to the DNA backbone has been determined. The frequency of counterion vibrations has been established to decrease as the ion mass increases. The results of theoretical calculation have been shown to be in good agreement with the experimental data of Raman spectroscopy.
1709.06658
John Sekar
John A.P. Sekar, James R. Faeder
An Introduction to Rule-based Modeling of Immune Receptor Signaling
5 figures
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by-nc-sa/4.0/
Cells process external and internal signals through chemical interactions. Cells that constitute the immune system (e.g., antigen presenting cell, T-cell, B-cell, mast cell) can have different functions (e.g., adaptive memory, inflammatory response) depending on the type and number of receptor molecules on the cell surface and the specific intracellular signaling pathways activated by those receptors. Explicitly modeling and simulating kinetic interactions between molecules allows us to pose questions about the dynamics of a signaling network under various conditions. However, the application of chemical kinetics to biochemical signaling systems has been limited by the complexity of the systems under consideration. Rule-based modeling (BioNetGen, Kappa, Simmune, PySB) is an approach to address this complexity. In this chapter, by application to the Fc$\varepsilon$RI receptor system, we will explore the origins of complexity in macromolecular interactions, show how rule-based modeling can be used to address complexity, and demonstrate how to build a model in the BioNetGen framework. Open source BioNetGen software and documentation are available at http://bionetgen.org.
[ { "created": "Tue, 19 Sep 2017 21:46:45 GMT", "version": "v1" } ]
2017-09-21
[ [ "Sekar", "John A. P.", "" ], [ "Faeder", "James R.", "" ] ]
Cells process external and internal signals through chemical interactions. Cells that constitute the immune system (e.g., antigen presenting cell, T-cell, B-cell, mast cell) can have different functions (e.g., adaptive memory, inflammatory response) depending on the type and number of receptor molecules on the cell surface and the specific intracellular signaling pathways activated by those receptors. Explicitly modeling and simulating kinetic interactions between molecules allows us to pose questions about the dynamics of a signaling network under various conditions. However, the application of chemical kinetics to biochemical signaling systems has been limited by the complexity of the systems under consideration. Rule-based modeling (BioNetGen, Kappa, Simmune, PySB) is an approach to address this complexity. In this chapter, by application to the Fc$\varepsilon$RI receptor system, we will explore the origins of complexity in macromolecular interactions, show how rule-based modeling can be used to address complexity, and demonstrate how to build a model in the BioNetGen framework. Open source BioNetGen software and documentation are available at http://bionetgen.org.
2107.07309
Andreas Jenet
Andreas Jenet, Samira Nik, Livia Mian, Stella-Zoe Schmidtler, Alessandro Annunziato, Montserrat Marin-Ferrer, Josephine McCourt, Anne Sophie Lequarre, Ashok Ganesh and Fabio Taucer
Standardisation needs for COVID-19. Scoping exercise on potential standards gaps carried out among JRC scientists. Putting Science into Standards (PSIS)
24 pages, European Commission, Brussels, 2021, JRC121514
null
10.13140/RG.2.2.16142.08004
JRC121514
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
The Joint Research Centre in collaboration with the European standardization bodies CEN and CENELEC launched a scoping exercise on standardization needs in response to COVID-19 and future pandemics. The purpose of the exercise was to identify ongoing harmonization initiatives, as well as further standardization needs in relevant sectors such as artisanal reusable face masks, medical face masks, and social distancing in closed public or commercial spaces. An overview of already ongoing standardization activities relevant to COVID-19 in Spain and Italy illustrates, although fragmented and partially complete, the importance of standardization in key sectors for combatting pandemics, such as in health, social, safety and security. This report informs colleagues in European institutions and Member States about the crucial role standardization plays in the common efforts to overcome the COVID-19 pandemic. Examples include potential inputs to the drafting of guidelines, methods and/or interoperability standards. Finally, the report also provides practical examples of agile standardization activities and deliverables that have the potential to enable the EU to respond more effectively and multilaterally to future crises. With this report we aim to raise awareness about the opportunities that standardization and harmonization can bring in the context of the COVID-19 pandemic.
[ { "created": "Thu, 15 Jul 2021 13:29:34 GMT", "version": "v1" } ]
2021-07-16
[ [ "Jenet", "Andreas", "" ], [ "Nik", "Samira", "" ], [ "Mian", "Livia", "" ], [ "Schmidtler", "Stella-Zoe", "" ], [ "Annunziato", "Alessandro", "" ], [ "Marin-Ferrer", "Montserrat", "" ], [ "McCourt", "Josephine", "" ], [ "Lequarre", "Anne Sophie", "" ], [ "Ganesh", "Ashok", "" ], [ "Taucer", "Fabio", "" ] ]
The Joint Research Centre in collaboration with the European standardization bodies CEN and CENELEC launched a scoping exercise on standardization needs in response to COVID-19 and future pandemics. The purpose of the exercise was to identify ongoing harmonization initiatives, as well as further standardization needs in relevant sectors such as artisanal reusable face masks, medical face masks, and social distancing in closed public or commercial spaces. An overview of already ongoing standardization activities relevant to COVID-19 in Spain and Italy illustrates, although fragmented and partially complete, the importance of standardization in key sectors for combatting pandemics, such as in health, social, safety and security. This report informs colleagues in European institutions and Member States about the crucial role standardization plays in the common efforts to overcome the COVID-19 pandemic. Examples include potential inputs to the drafting of guidelines, methods and/or interoperability standards. Finally, the report also provides practical examples of agile standardization activities and deliverables that have the potential to enable the EU to respond more effectively and multilaterally to future crises. With this report we aim to raise awareness about the opportunities that standardization and harmonization can bring in the context of the COVID-19 pandemic.