Dataset schema (column: type, observed minimum and maximum):

id: stringlengths, 9–13
submitter: stringlengths, 4–48
authors: stringlengths, 4–9.62k
title: stringlengths, 4–343
comments: stringlengths, 2–480
journal-ref: stringlengths, 9–309
doi: stringlengths, 12–138
report-no: stringclasses, 277 values
categories: stringlengths, 8–87
license: stringclasses, 9 values
orig_abstract: stringlengths, 27–3.76k
versions: listlengths, 1–15
update_date: stringlengths, 10–10
authors_parsed: listlengths, 1–147
abstract: stringlengths, 24–3.75k
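The schema above can be sketched in code. Below is a minimal, hypothetical example of validating one record against the declared string-length bounds; the `BOUNDS` mapping and `check_record` helper are illustrative names, not part of the dataset itself, and only a few columns are included for brevity.

```python
# Hypothetical validator: checks a record's string fields against the
# (min, max) length bounds declared in the schema above.
BOUNDS = {
    "id": (9, 13),
    "submitter": (4, 48),
    "title": (4, 343),
    "abstract": (24, 3750),
}

def check_record(record: dict) -> list:
    """Return the names of fields whose length falls outside the declared bounds.

    Fields that are missing or null are skipped, mirroring the null values
    seen in the records below.
    """
    bad = []
    for field, (lo, hi) in BOUNDS.items():
        value = record.get(field)
        if value is not None and not lo <= len(value) <= hi:
            bad.append(field)
    return bad

sample = {
    "id": "q-bio/0505048",
    "submitter": "Daniel G. M. Silvestre",
    "title": "Template coexistence in prebiotic vesicle models",
    "abstract": "The coexistence of distinct templates is a common feature...",
}
print(check_record(sample))  # prints []
```

A record violating a bound (e.g. an `id` shorter than 9 characters) would have that field name returned in the list.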
id: q-bio/0505048
submitter: Daniel G. M. Silvestre
authors: Daniel G. M. Silvestre, Jose F. Fontanari
title: Template coexistence in prebiotic vesicle models
comments: 7 pages, 8 figures
journal-ref: Eur. Phys. J. B 47, 423-429 (2005)
doi: 10.1140/epjb/e2005-00346-5
report-no: null
categories: q-bio.PE
license: null
orig_abstract: The coexistence of distinct templates is a common feature of the diverse proposals advanced to resolve the information crisis of prebiotic evolution. However, achieving robust template coexistence turned out to be such a difficult demand that only a class of models, the so-called package models, seems to have met it so far. Here we apply Wright's Island formulation of group selection to study the conditions for the coexistence of two distinct template types confined in packages (vesicles) of finite capacity. In particular, we show how selection acting at the level of the vesicles can neutralize the pressures towards the fixation of any one of the template types (random drift) and of the type with higher replication rate (deterministic competition). We give emphasis to the role of the distinct generation times of templates and vesicles as yet another obstacle to coexistence.
versions: [ { "created": "Wed, 25 May 2005 14:17:06 GMT", "version": "v1" }, { "created": "Thu, 30 Nov 2006 18:19:44 GMT", "version": "v2" } ]
update_date: 2007-05-23
authors_parsed: [ [ "Silvestre", "Daniel G. M.", "" ], [ "Fontanari", "Jose F.", "" ] ]
abstract: The coexistence of distinct templates is a common feature of the diverse proposals advanced to resolve the information crisis of prebiotic evolution. However, achieving robust template coexistence turned out to be such a difficult demand that only a class of models, the so-called package models, seems to have met it so far. Here we apply Wright's Island formulation of group selection to study the conditions for the coexistence of two distinct template types confined in packages (vesicles) of finite capacity. In particular, we show how selection acting at the level of the vesicles can neutralize the pressures towards the fixation of any one of the template types (random drift) and of the type with higher replication rate (deterministic competition). We give emphasis to the role of the distinct generation times of templates and vesicles as yet another obstacle to coexistence.
id: 1010.4145
submitter: Christophe Magnani
authors: Christophe Magnani, Daniel Eug\`ene, Erwin Idoux, L.E. Moore
title: Voltage clamp analysis of nonlinear dendritic properties in prepositus hypoglossi neurons
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The nonlinear properties of the dendrites in prepositus hypoglossi neurons are involved in the maintenance of eye position. The biophysical properties of these neurons are essential for the operation of the vestibular neural integrator that converts a head velocity signal to one that controls eye position. A novel method named QSA (quadratic sinusoidal analysis) for voltage clamped neurons was used to quantify nonlinear responses that are dominated by dendrites. The voltage clamp currents were measured at harmonic and interactive frequencies using specific stimulation frequencies, which act as frequency probes of the intrinsic nonlinear neuronal behavior. These responses to paired frequencies form a matrix that can be reduced by eigendecomposition to provide a very compact piecewise quadratic analysis at different membrane potentials that otherwise is usually described by complex differential equations involving a large number of parameters and dendritic compartments. Moreover, the QSA matrix can be interpolated to capture most of the nonlinear neuronal behavior like a Volterra kernel. The interpolated quadratic functions of the two major prepositus hypoglossi neurons, namely types B and D, are strikingly different. A major part of the nonlinear responses is due to the persistent sodium conductance, which appears to be essential for sustained nonlinear effects induced by NMDA activation and thus would be critical for the operation of the neural integrator. Finally, the dominance of the nonlinear responses by the dendrites supports the hypothesis that persistent sodium conductance channels and NMDA receptors act synergistically to dynamically control the influence of individual synaptic inputs on network behavior.
versions: [ { "created": "Wed, 20 Oct 2010 09:48:52 GMT", "version": "v1" }, { "created": "Fri, 29 Oct 2010 14:29:24 GMT", "version": "v2" } ]
update_date: 2010-11-01
authors_parsed: [ [ "Magnani", "Christophe", "" ], [ "Eugène", "Daniel", "" ], [ "Idoux", "Erwin", "" ], [ "Moore", "L. E.", "" ] ]
abstract: The nonlinear properties of the dendrites in prepositus hypoglossi neurons are involved in the maintenance of eye position. The biophysical properties of these neurons are essential for the operation of the vestibular neural integrator that converts a head velocity signal to one that controls eye position. A novel method named QSA (quadratic sinusoidal analysis) for voltage clamped neurons was used to quantify nonlinear responses that are dominated by dendrites. The voltage clamp currents were measured at harmonic and interactive frequencies using specific stimulation frequencies, which act as frequency probes of the intrinsic nonlinear neuronal behavior. These responses to paired frequencies form a matrix that can be reduced by eigendecomposition to provide a very compact piecewise quadratic analysis at different membrane potentials that otherwise is usually described by complex differential equations involving a large number of parameters and dendritic compartments. Moreover, the QSA matrix can be interpolated to capture most of the nonlinear neuronal behavior like a Volterra kernel. The interpolated quadratic functions of the two major prepositus hypoglossi neurons, namely types B and D, are strikingly different. A major part of the nonlinear responses is due to the persistent sodium conductance, which appears to be essential for sustained nonlinear effects induced by NMDA activation and thus would be critical for the operation of the neural integrator. Finally, the dominance of the nonlinear responses by the dendrites supports the hypothesis that persistent sodium conductance channels and NMDA receptors act synergistically to dynamically control the influence of individual synaptic inputs on network behavior.
id: 2002.02803
submitter: Junping Shi
authors: Jimin Zhang, Junping Shi and Xiaoyuan Chang
title: Model of Algal Growth Depending on Nutrients and Inorganic Carbon in a Poorly Mixed Water Column
comments: 27 pages, 7 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE math.AP
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: In this paper, we establish a reaction-diffusion-advection partial differential equation model to describe the growth of algae depending on both nutrients and inorganic carbon in a poorly mixed water column. Nutrients from the water bottom and inorganic carbon from the water surface form an asymmetric resource supply mechanism for algal growth. The existence and stability of the semi-trivial and coexistence steady states of the model are proved, and a threshold condition for the regime shift from extinction to survival of algae is established. The influence of environmental parameters on the vertical distribution of algae in the water column is investigated. It is shown that the vertical distribution of algae can exhibit many different profiles under the joint limitation of nutrients and inorganic carbon.
versions: [ { "created": "Fri, 31 Jan 2020 17:00:49 GMT", "version": "v1" } ]
update_date: 2020-02-10
authors_parsed: [ [ "Zhang", "Jimin", "" ], [ "Shi", "Junping", "" ], [ "Chang", "Xiaoyuan", "" ] ]
abstract: In this paper, we establish a reaction-diffusion-advection partial differential equation model to describe the growth of algae depending on both nutrients and inorganic carbon in a poorly mixed water column. Nutrients from the water bottom and inorganic carbon from the water surface form an asymmetric resource supply mechanism for algal growth. The existence and stability of the semi-trivial and coexistence steady states of the model are proved, and a threshold condition for the regime shift from extinction to survival of algae is established. The influence of environmental parameters on the vertical distribution of algae in the water column is investigated. It is shown that the vertical distribution of algae can exhibit many different profiles under the joint limitation of nutrients and inorganic carbon.
id: 1502.02689
submitter: Kazuhiro Takemoto
authors: Kazuhiro Takemoto
title: Heterogeneity of cells may explain allometric scaling of metabolic rate
comments: 8 pages, 5 figures
journal-ref: Biosystems 130, 11-16 (2015)
doi: 10.1016/j.biosystems.2015.02.003
report-no: null
categories: q-bio.OT physics.bio-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The origin of allometric scaling of metabolic rate is a long-standing question in biology. Several models have been proposed to explain this origin; however, each has advantages and disadvantages. In particular, previous models each reproduce only one of two important observations concerning allometric scaling: the variability of scaling exponents and the predominance of the 3/4-power law. Thus, the validity of these models remains disputed. In this study, we propose a simple geometric model and show that the hypothesis that the total surface area of cells determines metabolic rate can reproduce both observations by combining two concepts: the impact of cell size on metabolic rate and fractal-like (hierarchical) organization. The proposed model demonstrates the approximate 3/4-power law both theoretically and numerically, even though several different biological strategies are considered. The model's validity is confirmed using empirical data. Furthermore, the model suggests that heterogeneity of cell size is important for the emergence of allometric scaling. The proposed model provides intuitive and unique insights into the origin of allometric scaling laws in biology, despite its several limitations.
versions: [ { "created": "Wed, 4 Feb 2015 13:59:49 GMT", "version": "v1" }, { "created": "Tue, 17 Feb 2015 08:01:21 GMT", "version": "v2" } ]
update_date: 2015-02-23
authors_parsed: [ [ "Takemoto", "Kazuhiro", "" ] ]
abstract: The origin of allometric scaling of metabolic rate is a long-standing question in biology. Several models have been proposed to explain this origin; however, each has advantages and disadvantages. In particular, previous models each reproduce only one of two important observations concerning allometric scaling: the variability of scaling exponents and the predominance of the 3/4-power law. Thus, the validity of these models remains disputed. In this study, we propose a simple geometric model and show that the hypothesis that the total surface area of cells determines metabolic rate can reproduce both observations by combining two concepts: the impact of cell size on metabolic rate and fractal-like (hierarchical) organization. The proposed model demonstrates the approximate 3/4-power law both theoretically and numerically, even though several different biological strategies are considered. The model's validity is confirmed using empirical data. Furthermore, the model suggests that heterogeneity of cell size is important for the emergence of allometric scaling. The proposed model provides intuitive and unique insights into the origin of allometric scaling laws in biology, despite its several limitations.
id: 1808.00595
submitter: Jonathan Tyler
authors: Jonathan Tyler, Anne Shiu, Jay Walton
title: Revisiting a synthetic intracellular regulatory network that exhibits oscillations
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.MN math.DS
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: In 2000, Elowitz and Leibler introduced the repressilator--a synthetic gene circuit with three genes that cyclically repress transcription of the next gene--as well as a corresponding mathematical model. Experimental data and model simulations exhibited oscillations in the protein concentrations across generations. In 2006, M\"{u}ller \textit{et al.}\ generalized the model to an arbitrary number of genes and analyzed the resulting dynamics. Their new model arose from five key assumptions, two of which are restrictive given current biological knowledge. Accordingly, we propose a new repressilator system that allows for general functions to model transcription, degradation, and translation. We prove that, with an odd number of genes, the new model has a unique steady state and the system converges to this steady state or to a periodic orbit. We also give a necessary and sufficient condition for stability of steady states when the number of genes is even and conjecture a condition for stability for an odd number. Finally, we derive a new rate function describing transcription that arises under more reasonable biological assumptions than the widely used single-step binding assumption. With this new transcription-rate function, we compare the model's amplitude and period with that of a model with the conventional transcription-rate function. Taken together, our results enhance our understanding of genetic regulation by repression.
versions: [ { "created": "Wed, 1 Aug 2018 23:10:27 GMT", "version": "v1" }, { "created": "Mon, 31 Dec 2018 16:48:47 GMT", "version": "v2" } ]
update_date: 2019-01-01
authors_parsed: [ [ "Tyler", "Jonathan", "" ], [ "Shiu", "Anne", "" ], [ "Walton", "Jay", "" ] ]
abstract: In 2000, Elowitz and Leibler introduced the repressilator--a synthetic gene circuit with three genes that cyclically repress transcription of the next gene--as well as a corresponding mathematical model. Experimental data and model simulations exhibited oscillations in the protein concentrations across generations. In 2006, M\"{u}ller \textit{et al.}\ generalized the model to an arbitrary number of genes and analyzed the resulting dynamics. Their new model arose from five key assumptions, two of which are restrictive given current biological knowledge. Accordingly, we propose a new repressilator system that allows for general functions to model transcription, degradation, and translation. We prove that, with an odd number of genes, the new model has a unique steady state and the system converges to this steady state or to a periodic orbit. We also give a necessary and sufficient condition for stability of steady states when the number of genes is even and conjecture a condition for stability for an odd number. Finally, we derive a new rate function describing transcription that arises under more reasonable biological assumptions than the widely used single-step binding assumption. With this new transcription-rate function, we compare the model's amplitude and period with that of a model with the conventional transcription-rate function. Taken together, our results enhance our understanding of genetic regulation by repression.
id: 2303.08496
submitter: Jorge Vila-Tom\'as
authors: Jorge Vila-Tom\'as, Pablo Hern\'andez-C\'amara and Jes\'us Malo
title: Psychophysics of Artificial Neural Networks Questions Classical Hue Cancellation Experiments
comments: 17 pages, 7 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: We show that classical hue cancellation experiments lead to human-like opponent curves even if the task is done by trivial (identity) artificial networks. Specifically, human-like opponent spectral sensitivities always emerge in artificial networks as long as (i) the retina converts the input radiation into any tristimulus-like representation, and (ii) the post-retinal network solves the standard hue cancellation task, e.g. the network looks for the weights of the cancelling lights so that every monochromatic stimulus plus the weighted cancelling lights match a grey reference in the (arbitrary) color representation used by the network. In fact, the specific cancellation lights (and not the network architecture) are key to obtaining human-like curves: results show that the classical choice of the lights leads to the best (most human-like) result, and any other choice leads to progressively different spectral sensitivities. We show this in two ways: through artificial psychophysics using a range of networks with different architectures and a range of cancellation lights, and through a change-of-basis theoretical analogy of the experiments. This suggests that the opponent curves of the classical experiment are just a by-product of the front-end photoreceptors and of a very specific experimental choice, but they do not inform about the downstream color representation. In fact, the architecture of the post-retinal network (signal recombination or internal color space) seems irrelevant for the emergence of the curves in the classical experiment. This result in artificial networks questions the conventional interpretation of the classical result in humans by Jameson and Hurvich.
versions: [ { "created": "Wed, 15 Mar 2023 10:13:34 GMT", "version": "v1" }, { "created": "Tue, 28 Mar 2023 14:54:30 GMT", "version": "v2" } ]
update_date: 2023-03-29
authors_parsed: [ [ "Vila-Tomás", "Jorge", "" ], [ "Hernández-Cámara", "Pablo", "" ], [ "Malo", "Jesús", "" ] ]
abstract: We show that classical hue cancellation experiments lead to human-like opponent curves even if the task is done by trivial (identity) artificial networks. Specifically, human-like opponent spectral sensitivities always emerge in artificial networks as long as (i) the retina converts the input radiation into any tristimulus-like representation, and (ii) the post-retinal network solves the standard hue cancellation task, e.g. the network looks for the weights of the cancelling lights so that every monochromatic stimulus plus the weighted cancelling lights match a grey reference in the (arbitrary) color representation used by the network. In fact, the specific cancellation lights (and not the network architecture) are key to obtaining human-like curves: results show that the classical choice of the lights leads to the best (most human-like) result, and any other choice leads to progressively different spectral sensitivities. We show this in two ways: through artificial psychophysics using a range of networks with different architectures and a range of cancellation lights, and through a change-of-basis theoretical analogy of the experiments. This suggests that the opponent curves of the classical experiment are just a by-product of the front-end photoreceptors and of a very specific experimental choice, but they do not inform about the downstream color representation. In fact, the architecture of the post-retinal network (signal recombination or internal color space) seems irrelevant for the emergence of the curves in the classical experiment. This result in artificial networks questions the conventional interpretation of the classical result in humans by Jameson and Hurvich.
id: 1404.4482
submitter: Oliver Burren Mr
authors: Oliver S Burren, Hui Guo and Chris Wallace
title: VSEAMS: A pipeline for variant set enrichment analysis using summary GWAS data identifies IKZF3, BATF and ESRRA as key transcription factors in type 1 diabetes
comments: null
journal-ref: null
doi: 10.1093/bioinformatics/btu571
report-no: null
categories: q-bio.GN q-bio.QM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Motivation: Genome-wide association studies (GWAS) have identified many loci implicated in disease susceptibility. Integration of GWAS summary statistics (p values) and functional genomic datasets should help to elucidate mechanisms. Results: We describe the extension of a previously described non-parametric method to test whether GWAS signals are enriched in functionally defined loci to a situation where only GWAS p values are available. The approach is implemented in VSEAMS, a freely available software pipeline. We use VSEAMS to integrate functional gene sets defined via transcription factor knockdown experiments with GWAS results for type 1 diabetes and find variant set enrichment in gene sets associated with IKZF3, BATF and ESRRA. IKZF3 lies in a known T1D susceptibility region, whilst BATF and ESRRA overlap other immune disease susceptibility regions, validating our approach and suggesting novel avenues of research for type 1 diabetes. Availability and implementation: VSEAMS is available for download at http://github.com/ollyburren/vseams.
versions: [ { "created": "Thu, 17 Apr 2014 11:02:03 GMT", "version": "v1" } ]
update_date: 2014-10-17
authors_parsed: [ [ "Burren", "Oliver S", "" ], [ "Guo", "Hui", "" ], [ "Wallace", "Chris", "" ] ]
abstract: Motivation: Genome-wide association studies (GWAS) have identified many loci implicated in disease susceptibility. Integration of GWAS summary statistics (p values) and functional genomic datasets should help to elucidate mechanisms. Results: We describe the extension of a previously described non-parametric method to test whether GWAS signals are enriched in functionally defined loci to a situation where only GWAS p values are available. The approach is implemented in VSEAMS, a freely available software pipeline. We use VSEAMS to integrate functional gene sets defined via transcription factor knockdown experiments with GWAS results for type 1 diabetes and find variant set enrichment in gene sets associated with IKZF3, BATF and ESRRA. IKZF3 lies in a known T1D susceptibility region, whilst BATF and ESRRA overlap other immune disease susceptibility regions, validating our approach and suggesting novel avenues of research for type 1 diabetes. Availability and implementation: VSEAMS is available for download at http://github.com/ollyburren/vseams.
id: 1901.07429
submitter: Nik Khadijah Nik Aznan
authors: Nik Khadijah Nik Aznan, Amir Atapour-Abarghouei, Stephen Bonner, Jason Connolly, Noura Al Moubayed and Toby Breckon
title: Simulating Brain Signals: Creating Synthetic EEG Data via Neural-Based Generative Models for Improved SSVEP Classification
comments: Accepted as a full paper at International Joint Conference on Neural Networks (IJCNN) 2019
journal-ref: null
doi: 10.1109/IJCNN.2019.8852227
report-no: null
categories: q-bio.QM eess.SP
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Despite significant recent progress in the area of Brain-Computer Interface (BCI), there are numerous shortcomings associated with collecting Electroencephalography (EEG) signals in real-world environments. These include, but are not limited to, subject and session data variance, long and arduous calibration processes and predictive generalisation issues across different subjects or sessions. This implies that many downstream applications, including Steady State Visual Evoked Potential (SSVEP) based classification systems, can suffer from a shortage of reliable data. Generating meaningful and realistic synthetic data can therefore be of significant value in circumventing this problem. We explore the use of modern neural-based generative models trained on a limited quantity of EEG data collected from different subjects to generate supplementary synthetic EEG signal vectors, subsequently utilised to train an SSVEP classifier. Extensive experimental analysis demonstrates the efficacy of our generated data, leading to improvements across a variety of evaluations, with the crucial task of cross-subject generalisation improving by over 35% with the use of such synthetic data.
versions: [ { "created": "Tue, 15 Jan 2019 19:25:35 GMT", "version": "v1" }, { "created": "Wed, 3 Apr 2019 09:49:57 GMT", "version": "v2" } ]
update_date: 2019-10-14
authors_parsed: [ [ "Aznan", "Nik Khadijah Nik", "" ], [ "Atapour-Abarghouei", "Amir", "" ], [ "Bonner", "Stephen", "" ], [ "Connolly", "Jason", "" ], [ "Moubayed", "Noura Al", "" ], [ "Breckon", "Toby", "" ] ]
abstract: Despite significant recent progress in the area of Brain-Computer Interface (BCI), there are numerous shortcomings associated with collecting Electroencephalography (EEG) signals in real-world environments. These include, but are not limited to, subject and session data variance, long and arduous calibration processes and predictive generalisation issues across different subjects or sessions. This implies that many downstream applications, including Steady State Visual Evoked Potential (SSVEP) based classification systems, can suffer from a shortage of reliable data. Generating meaningful and realistic synthetic data can therefore be of significant value in circumventing this problem. We explore the use of modern neural-based generative models trained on a limited quantity of EEG data collected from different subjects to generate supplementary synthetic EEG signal vectors, subsequently utilised to train an SSVEP classifier. Extensive experimental analysis demonstrates the efficacy of our generated data, leading to improvements across a variety of evaluations, with the crucial task of cross-subject generalisation improving by over 35% with the use of such synthetic data.
id: 0912.4189
submitter: Christel Kamp
authors: Christel Kamp
title: Untangling the interplay between epidemic spreading and transmission network dynamics
comments: 16 pages, 3 figures
journal-ref: PLoS Comput Biol 6(11): e1000984 (2010)
doi: 10.1371/journal.pcbi.1000984
report-no: null
categories: q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Epidemic spreading of infectious diseases is ubiquitous and often has a considerable impact on public health and economic wealth. The large variability in the spatio-temporal patterns of epidemics prohibits simple interventions and demands a detailed analysis of each epidemic with respect to its infectious agent and the corresponding routes of transmission. To facilitate this analysis, a mathematical framework is introduced which links epidemic patterns to the topology and dynamics of the underlying transmission network. The evolution of both disease prevalence and transmission network topology is derived from a closed set of partial differential equations for infections without recovery, which are in excellent agreement with complementarily conducted agent-based simulations. The mutual influence between the epidemic process and its transmission network is shown by several case studies on HIV epidemics in synthetic populations. They reveal context-dependent key processes which drive the epidemic and which in turn can be exploited for targeted intervention strategies. The mathematical framework provides a capable toolbox to analyze epidemics from first principles. This allows for fast in silico modeling - and manipulation - of epidemics, which is especially powerful if complemented with adequate empirical data for parametrization.
versions: [ { "created": "Mon, 21 Dec 2009 16:08:36 GMT", "version": "v1" } ]
update_date: 2010-11-25
authors_parsed: [ [ "Kamp", "Christel", "" ] ]
abstract: Epidemic spreading of infectious diseases is ubiquitous and often has a considerable impact on public health and economic wealth. The large variability in the spatio-temporal patterns of epidemics prohibits simple interventions and demands a detailed analysis of each epidemic with respect to its infectious agent and the corresponding routes of transmission. To facilitate this analysis, a mathematical framework is introduced which links epidemic patterns to the topology and dynamics of the underlying transmission network. The evolution of both disease prevalence and transmission network topology is derived from a closed set of partial differential equations for infections without recovery, which are in excellent agreement with complementarily conducted agent-based simulations. The mutual influence between the epidemic process and its transmission network is shown by several case studies on HIV epidemics in synthetic populations. They reveal context-dependent key processes which drive the epidemic and which in turn can be exploited for targeted intervention strategies. The mathematical framework provides a capable toolbox to analyze epidemics from first principles. This allows for fast in silico modeling - and manipulation - of epidemics, which is especially powerful if complemented with adequate empirical data for parametrization.
id: 1605.08454
submitter: Yuanjun Gao
authors: Yuanjun Gao, Evan Archer, Liam Paninski, John P. Cunningham
title: Linear dynamical neural population models through nonlinear embeddings
comments: NIPS 2016
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: A body of recent work in modeling neural activity focuses on recovering low-dimensional latent features that capture the statistical structure of large-scale neural populations. Most such approaches have focused on linear generative models, where inference is computationally tractable. Here, we propose fLDS, a general class of nonlinear generative models that permits the firing rate of each neuron to vary as an arbitrary smooth function of a latent, linear dynamical state. This extra flexibility allows the model to capture a richer set of neural variability than a purely linear model, but retains an easily visualizable low-dimensional latent space. To fit this class of non-conjugate models we propose a variational inference scheme, along with a novel approximate posterior capable of capturing rich temporal correlations across time. We show that our techniques permit inference in a wide class of generative models. We also show in application to two neural datasets that, compared to state-of-the-art neural population models, fLDS captures a much larger proportion of neural variability with a small number of latent dimensions, providing superior predictive performance and interpretability.
versions: [ { "created": "Thu, 26 May 2016 21:25:26 GMT", "version": "v1" }, { "created": "Tue, 25 Oct 2016 19:44:03 GMT", "version": "v2" } ]
update_date: 2016-10-26
authors_parsed: [ [ "Gao", "Yuanjun", "" ], [ "Archer", "Evan", "" ], [ "Paninski", "Liam", "" ], [ "Cunningham", "John P.", "" ] ]
abstract: A body of recent work in modeling neural activity focuses on recovering low-dimensional latent features that capture the statistical structure of large-scale neural populations. Most such approaches have focused on linear generative models, where inference is computationally tractable. Here, we propose fLDS, a general class of nonlinear generative models that permits the firing rate of each neuron to vary as an arbitrary smooth function of a latent, linear dynamical state. This extra flexibility allows the model to capture a richer set of neural variability than a purely linear model, but retains an easily visualizable low-dimensional latent space. To fit this class of non-conjugate models we propose a variational inference scheme, along with a novel approximate posterior capable of capturing rich temporal correlations across time. We show that our techniques permit inference in a wide class of generative models. We also show in application to two neural datasets that, compared to state-of-the-art neural population models, fLDS captures a much larger proportion of neural variability with a small number of latent dimensions, providing superior predictive performance and interpretability.
id: 1212.0031
submitter: Carina Curto
authors: Carina Curto and Anda Degeratu and Vladimir Itskov
title: Encoding binary neural codes in networks of threshold-linear neurons
comments: 35 pages, 5 figures. Minor revisions only. Accepted to Neural Computation
journal-ref: Neural Computation, Vol 25, pp. 2858-2903, 2013
doi: null
report-no: null
categories: q-bio.NC math.CO math.MG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Networks of neurons in the brain encode preferred patterns of neural activity via their synaptic connections. Despite receiving considerable attention, the precise relationship between network connectivity and encoded patterns is still poorly understood. Here we consider this problem for networks of threshold-linear neurons whose computational function is to learn and store a set of binary patterns (e.g., a neural code) as "permitted sets" of the network. We introduce a simple Encoding Rule that selectively turns "on" synapses between neurons that co-appear in one or more patterns. The rule uses synapses that are binary, in the sense of having only two states ("on" or "off"), but also heterogeneous, with weights drawn from an underlying synaptic strength matrix S. Our main results precisely describe the stored patterns that result from the Encoding Rule -- including unintended "spurious" states -- and give an explicit characterization of the dependence on S. In particular, we find that binary patterns are successfully stored in these networks when the excitatory connections between neurons are geometrically balanced -- i.e., they satisfy a set of geometric constraints. Furthermore, we find that certain types of neural codes are "natural" in the context of these networks, meaning that the full code can be accurately learned from a highly undersampled set of patterns. Interestingly, many commonly observed neural codes in cortical and hippocampal areas are natural in this sense. As an application, we construct networks that encode hippocampal place field codes nearly exactly, following presentation of only a small fraction of patterns. To obtain our results, we prove new theorems using classical ideas from convex and distance geometry, such as Cayley-Menger determinants, revealing a novel connection between these areas of mathematics and coding properties of neural networks.
versions: [ { "created": "Fri, 30 Nov 2012 22:43:11 GMT", "version": "v1" }, { "created": "Fri, 5 Apr 2013 22:45:57 GMT", "version": "v2" }, { "created": "Thu, 16 May 2013 20:44:24 GMT", "version": "v3" } ]
update_date: 2015-02-25
authors_parsed: [ [ "Curto", "Carina", "" ], [ "Degeratu", "Anda", "" ], [ "Itskov", "Vladimir", "" ] ]
abstract: Networks of neurons in the brain encode preferred patterns of neural activity via their synaptic connections. Despite receiving considerable attention, the precise relationship between network connectivity and encoded patterns is still poorly understood. Here we consider this problem for networks of threshold-linear neurons whose computational function is to learn and store a set of binary patterns (e.g., a neural code) as "permitted sets" of the network. We introduce a simple Encoding Rule that selectively turns "on" synapses between neurons that co-appear in one or more patterns. The rule uses synapses that are binary, in the sense of having only two states ("on" or "off"), but also heterogeneous, with weights drawn from an underlying synaptic strength matrix S. Our main results precisely describe the stored patterns that result from the Encoding Rule -- including unintended "spurious" states -- and give an explicit characterization of the dependence on S. In particular, we find that binary patterns are successfully stored in these networks when the excitatory connections between neurons are geometrically balanced -- i.e., they satisfy a set of geometric constraints. Furthermore, we find that certain types of neural codes are "natural" in the context of these networks, meaning that the full code can be accurately learned from a highly undersampled set of patterns. Interestingly, many commonly observed neural codes in cortical and hippocampal areas are natural in this sense. As an application, we construct networks that encode hippocampal place field codes nearly exactly, following presentation of only a small fraction of patterns. To obtain our results, we prove new theorems using classical ideas from convex and distance geometry, such as Cayley-Menger determinants, revealing a novel connection between these areas of mathematics and coding properties of neural networks.
1009.4801
Monica Zoppe'
Maria Francesca Zini, Yuri Porozov, Raluca Mihaela Andrei, Tiziana Loni, Claudia Caudai and Monica Zopp\`e
BioBlender: Fast and Efficient All Atom Morphing of Proteins Using Blender Game Engine
Paper strictly associated with other paper 'BioBlender: A Software for Intuitive Representation of Surface Properties of Biomolecules', also submitted at the same time. This paper 14 pages, including 7 figures. Paper submitted
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this and the associated article 'BioBlender: A Software for Intuitive Representation of Surface Properties of Biomolecules', (Andrei et al) we present BioBlender as a complete instrument for the elaboration of motion (here) and the visualization (Andrei et al) of proteins and other macromolecules, using instruments of computer graphics. A vast number of proteins (if not most) exert their function through some extent of motion. Despite recent advances in highly performant methods, it is very difficult to obtain direct information on conformational changes of molecules. However, several systems exist that can shed some light on the variability of conformations of a single peptide chain; among them, NMR methods provide collections of a number of static 'shots' of a moving protein. Starting from this data, and assuming that if a protein exists in more than 1 conformation it must be able to transit between the different states, we have elaborated a system that makes ample use of the computational power of 3D computer graphics technology. Considering information of all (heavy) atoms, we use the animation and game engine of Blender to obtain transition states. The model we chose to elaborate our system is Calmodulin, a protein favored in structural and dynamic studies due to its (relative) simplicity of structure and small dimension. Using Calmodulin we show a procedure that enables the building of a 'navigation map' of NMR models, which can help in the identification of movements. In the process, a number of intermediate conformations is generated, all of which respond to strict bio-physical and bio-chemical criteria. The BioBlender system is available for download from the website www.bioblender.net, together with examples, a tutorial and other useful material.
[ { "created": "Fri, 24 Sep 2010 10:40:30 GMT", "version": "v1" } ]
2010-09-27
[ [ "Zini", "Maria Francesca", "" ], [ "Porozov", "Yuri", "" ], [ "Andrei", "Raluca Mihaela", "" ], [ "Loni", "Tiziana", "" ], [ "Caudai", "Claudia", "" ], [ "Zoppè", "Monica", "" ] ]
In this and the associated article 'BioBlender: A Software for Intuitive Representation of Surface Properties of Biomolecules', (Andrei et al) we present BioBlender as a complete instrument for the elaboration of motion (here) and the visualization (Andrei et al) of proteins and other macromolecules, using instruments of computer graphics. A vast number of proteins (if not most) exert their function through some extent of motion. Despite recent advances in highly performant methods, it is very difficult to obtain direct information on conformational changes of molecules. However, several systems exist that can shed some light on the variability of conformations of a single peptide chain; among them, NMR methods provide collections of a number of static 'shots' of a moving protein. Starting from this data, and assuming that if a protein exists in more than 1 conformation it must be able to transit between the different states, we have elaborated a system that makes ample use of the computational power of 3D computer graphics technology. Considering information of all (heavy) atoms, we use the animation and game engine of Blender to obtain transition states. The model we chose to elaborate our system is Calmodulin, a protein favored in structural and dynamic studies due to its (relative) simplicity of structure and small dimension. Using Calmodulin we show a procedure that enables the building of a 'navigation map' of NMR models, which can help in the identification of movements. In the process, a number of intermediate conformations is generated, all of which respond to strict bio-physical and bio-chemical criteria. The BioBlender system is available for download from the website www.bioblender.net, together with examples, a tutorial and other useful material.
2301.06640
Yixiang Wu
Shanshan Chen, Jie Liu, Yixiang Wu
On the impact of spatial heterogeneity and drift rate in a three-patch two-species Lotka-Volterra competition model over a stream
null
null
10.1007/s00033-023-02009-6
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
In this paper, we study a three-patch two-species Lotka-Volterra competition patch model over a stream network. The individuals are subject to both random and directed movements, and the two species are assumed to be identical except for the movement rates. The environment is heterogeneous, and the carrying capacity is larger in upstream locations. We treat one species as a resident species and investigate whether the other species can invade or not. Our results show that the spatial heterogeneity of the environment and the magnitude of the drift rates have a large impact on the competition outcomes of the stream species.
[ { "created": "Mon, 16 Jan 2023 23:48:30 GMT", "version": "v1" } ]
2023-06-07
[ [ "Chen", "Shanshan", "" ], [ "Liu", "Jie", "" ], [ "Wu", "Yixiang", "" ] ]
In this paper, we study a three-patch two-species Lotka-Volterra competition patch model over a stream network. The individuals are subject to both random and directed movements, and the two species are assumed to be identical except for the movement rates. The environment is heterogeneous, and the carrying capacity is larger in upstream locations. We treat one species as a resident species and investigate whether the other species can invade or not. Our results show that the spatial heterogeneity of the environment and the magnitude of the drift rates have a large impact on the competition outcomes of the stream species.
q-bio/0406048
Elchanan Mossel
Elchanan Mossel and Mike Steel
How much can evolved characters tell us about the tree that generated them?
null
null
null
null
q-bio.PE math.ST stat.TH
null
In this paper we review some recent results that shed light on a fundamental question in molecular systematics: how much phylogenetic `signal' can we expect from characters that have evolved under some Markov process? There are many sides to this question and we begin by describing some explicit bounds on the probability of correctly reconstructing an ancestral state from the states observed at the tips. We show how this bound sets upper limits on the probability of tree reconstruction from aligned sequences, and we provide some new extensions that allow site-to-site rate variation or a covarion mechanism. We then explore the relationship between the number of sites required for accurate tree reconstruction and other model parameters - such as the number of species, and substitution probabilities, and we describe a phase transition that occurs when substitution probabilities exceed a critical value. In the remainder of this paper we turn to models of character evolution where the state space is assumed to be either infinite or very large. These models have some relevance to certain types of genomic data (such as gene order) and here we again investigate how many characters are required for accurate tree reconstruction.
[ { "created": "Thu, 24 Jun 2004 18:38:07 GMT", "version": "v1" } ]
2011-11-10
[ [ "Mossel", "Elchanan", "" ], [ "Steel", "Mike", "" ] ]
In this paper we review some recent results that shed light on a fundamental question in molecular systematics: how much phylogenetic `signal' can we expect from characters that have evolved under some Markov process? There are many sides to this question and we begin by describing some explicit bounds on the probability of correctly reconstructing an ancestral state from the states observed at the tips. We show how this bound sets upper limits on the probability of tree reconstruction from aligned sequences, and we provide some new extensions that allow site-to-site rate variation or a covarion mechanism. We then explore the relationship between the number of sites required for accurate tree reconstruction and other model parameters - such as the number of species, and substitution probabilities, and we describe a phase transition that occurs when substitution probabilities exceed a critical value. In the remainder of this paper we turn to models of character evolution where the state space is assumed to be either infinite or very large. These models have some relevance to certain types of genomic data (such as gene order) and here we again investigate how many characters are required for accurate tree reconstruction.
2405.12225
Ignacio Alvarez Illan
F.J. Alcaide, I.A. Illan, J. Ramirez, J.M. Gorriz
Unraveling the Autism spectrum heterogeneity: Insights from ABIDE I Database using data/model-driven permutation testing approaches
54 pages, 14 figures
null
null
null
q-bio.QM cs.LG eess.IV eess.SP
http://creativecommons.org/licenses/by-nc-sa/4.0/
Autism Spectrum Condition (ASC) is a neurodevelopmental condition characterized by impairments in communication, social interaction and restricted or repetitive behaviors. Extensive research has been conducted to identify distinctions between individuals with ASC and neurotypical individuals. However, limited attention has been given to comprehensively evaluating how variations in image acquisition protocols across different centers influence these observed differences. This analysis focuses on structural magnetic resonance imaging (sMRI) data from the Autism Brain Imaging Data Exchange I (ABIDE I) database, evaluating subjects' condition and individual centers to identify disparities between ASC and control groups. Statistical analysis, employing permutation tests, utilizes two distinct statistical mapping methods: Statistical Agnostic Mapping (SAM) and Statistical Parametric Mapping (SPM). Results reveal the absence of statistically significant differences in any brain region, attributed to factors such as limited sample sizes within certain centers, noise effects and the problem of multicentrism in a heterogeneous condition such as autism. This study indicates limitations in using the ABIDE I database to detect structural differences in the brain between neurotypical individuals and those diagnosed with ASC. Furthermore, results from the SAM mapping method show greater consistency with existing literature.
[ { "created": "Mon, 22 Apr 2024 13:21:57 GMT", "version": "v1" } ]
2024-05-22
[ [ "Alcaide", "F. J.", "" ], [ "Illan", "I. A.", "" ], [ "Ramirez", "J.", "" ], [ "Gorriz", "J. M.", "" ] ]
Autism Spectrum Condition (ASC) is a neurodevelopmental condition characterized by impairments in communication, social interaction and restricted or repetitive behaviors. Extensive research has been conducted to identify distinctions between individuals with ASC and neurotypical individuals. However, limited attention has been given to comprehensively evaluating how variations in image acquisition protocols across different centers influence these observed differences. This analysis focuses on structural magnetic resonance imaging (sMRI) data from the Autism Brain Imaging Data Exchange I (ABIDE I) database, evaluating subjects' condition and individual centers to identify disparities between ASC and control groups. Statistical analysis, employing permutation tests, utilizes two distinct statistical mapping methods: Statistical Agnostic Mapping (SAM) and Statistical Parametric Mapping (SPM). Results reveal the absence of statistically significant differences in any brain region, attributed to factors such as limited sample sizes within certain centers, noise effects and the problem of multicentrism in a heterogeneous condition such as autism. This study indicates limitations in using the ABIDE I database to detect structural differences in the brain between neurotypical individuals and those diagnosed with ASC. Furthermore, results from the SAM mapping method show greater consistency with existing literature.
1107.0334
Michael Bachmann
Tristan Bereau, Markus Deserno, and Michael Bachmann
Structural Basis of Folding Cooperativity in Model Proteins: Insights from a Microcanonical Perspective
28 pages, 7 figures
Biophys. J. 100, 2764-2772 (2011)
10.1016/j.bpj.2011.03.056
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two-state cooperativity is an important characteristic in protein folding. It is defined by a depletion of states lying energetically between folded and unfolded conformations. While there are different ways to test for two-state cooperativity, most of them probe indirect proxies of this depletion. Yet, generalized-ensemble computer simulations allow one to unambiguously identify this transition by a microcanonical analysis on the basis of the density of states. Here we perform a detailed characterization of several helical peptides using coarse-grained simulations. The level of resolution of the coarse-grained model allows us to study realistic structures ranging from small alpha-helices to a de novo three-helix bundle - without biasing the force field toward the native state of the protein. Linking thermodynamic and structural features shows that while short alpha-helices exhibit two-state cooperativity, the type of transition changes for longer chain lengths because the chain forms multiple helix nucleation sites, stabilizing a significant population of intermediate states. The helix bundle exhibits the signs of two-state cooperativity owing to favorable helix-helix interactions, as predicted from theoretical models. The detailed analysis of secondary and tertiary structure formation fits well into the framework of several folding mechanisms and confirms features observed so far only in lattice models.
[ { "created": "Fri, 1 Jul 2011 20:51:28 GMT", "version": "v1" } ]
2015-05-28
[ [ "Bereau", "Tristan", "" ], [ "Deserno", "Markus", "" ], [ "Bachmann", "Michael", "" ] ]
Two-state cooperativity is an important characteristic in protein folding. It is defined by a depletion of states lying energetically between folded and unfolded conformations. While there are different ways to test for two-state cooperativity, most of them probe indirect proxies of this depletion. Yet, generalized-ensemble computer simulations allow one to unambiguously identify this transition by a microcanonical analysis on the basis of the density of states. Here we perform a detailed characterization of several helical peptides using coarse-grained simulations. The level of resolution of the coarse-grained model allows us to study realistic structures ranging from small alpha-helices to a de novo three-helix bundle - without biasing the force field toward the native state of the protein. Linking thermodynamic and structural features shows that while short alpha-helices exhibit two-state cooperativity, the type of transition changes for longer chain lengths because the chain forms multiple helix nucleation sites, stabilizing a significant population of intermediate states. The helix bundle exhibits the signs of two-state cooperativity owing to favorable helix-helix interactions, as predicted from theoretical models. The detailed analysis of secondary and tertiary structure formation fits well into the framework of several folding mechanisms and confirms features observed so far only in lattice models.
0812.3344
Vahid Shahrezaei
Vahid Shahrezaei and Peter S. Swain
Analytical distributions for stochastic gene expression
Supplementary information can be found on PNAS website
Proc Natl Acad Sci U S A. 2008 Nov 11;105(45):17256-61
10.1073/pnas.0803850105
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene expression is significantly stochastic, making modeling of genetic networks challenging. We present an approximation that allows the calculation of not only the mean and variance but also the distribution of protein numbers. We assume that proteins decay substantially slower than their mRNA and confirm that many genes satisfy this relation using high-throughput data from budding yeast. For a two-stage model of gene expression, with transcription and translation as first-order reactions, we calculate the protein distribution for all times greater than several mRNA lifetimes and thus qualitatively predict the distribution of times for protein levels to first cross an arbitrary threshold. If in addition the promoter fluctuates between inactive and active states, we can find the steady-state protein distribution, which can be bimodal if promoter fluctuations are slow. We show that our assumptions imply that protein synthesis occurs in geometrically distributed bursts and allow mRNA to be eliminated from a master equation description. In general, we find that protein distributions are asymmetric and may be poorly characterized by their mean and variance. Through maximum likelihood methods, our expressions should therefore allow more quantitative comparisons with experimental data. More generally, we introduce a technique to derive a simpler, effective dynamics for a stochastic system by eliminating a fast variable.
[ { "created": "Wed, 17 Dec 2008 17:09:23 GMT", "version": "v1" } ]
2008-12-18
[ [ "Shahrezaei", "Vahid", "" ], [ "Swain", "Peter S.", "" ] ]
Gene expression is significantly stochastic, making modeling of genetic networks challenging. We present an approximation that allows the calculation of not only the mean and variance but also the distribution of protein numbers. We assume that proteins decay substantially slower than their mRNA and confirm that many genes satisfy this relation using high-throughput data from budding yeast. For a two-stage model of gene expression, with transcription and translation as first-order reactions, we calculate the protein distribution for all times greater than several mRNA lifetimes and thus qualitatively predict the distribution of times for protein levels to first cross an arbitrary threshold. If in addition the promoter fluctuates between inactive and active states, we can find the steady-state protein distribution, which can be bimodal if promoter fluctuations are slow. We show that our assumptions imply that protein synthesis occurs in geometrically distributed bursts and allow mRNA to be eliminated from a master equation description. In general, we find that protein distributions are asymmetric and may be poorly characterized by their mean and variance. Through maximum likelihood methods, our expressions should therefore allow more quantitative comparisons with experimental data. More generally, we introduce a technique to derive a simpler, effective dynamics for a stochastic system by eliminating a fast variable.
1304.3577
Richard Savage
Richard S. Savage, Zoubin Ghahramani, Jim E. Griffin, Paul Kirk, David L. Wild
Identifying cancer subtypes in glioblastoma by combining genomic, transcriptomic and epigenomic data
null
International Conference on Machine Learning (ICML) 2012: Workshop on Machine Learning in Genetics and Genomics
null
null
q-bio.GN stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a nonparametric Bayesian method for disease subtype discovery in multi-dimensional cancer data. Our method can simultaneously analyse a wide range of data types, allowing for both agreement and disagreement between their underlying clustering structure. It includes feature selection and infers the most likely number of disease subtypes, given the data. We apply the method to 277 glioblastoma samples from The Cancer Genome Atlas, for which there are gene expression, copy number variation, methylation and microRNA data. We identify 8 distinct consensus subtypes and study their prognostic value for death, new tumour events, progression and recurrence. The consensus subtypes are prognostic of tumour recurrence (log-rank p-value of $3.6 \times 10^{-4}$ after correction for multiple hypothesis tests). This is driven principally by the methylation data (log-rank p-value of $2.0 \times 10^{-3}$) but the effect is strengthened by the other 3 data types, demonstrating the value of integrating multiple data types. Of particular note is a subtype of 47 patients characterised by very low levels of methylation. This subtype has very low rates of tumour recurrence and no new events in 10 years of follow up. We also identify a small gene expression subtype of 6 patients that shows particularly poor survival outcomes. Additionally, we note a consensus subtype that shows a highly distinctive data signature and suggest that it is therefore a biologically distinct subtype of glioblastoma. The code is available from https://sites.google.com/site/multipledatafusion/
[ { "created": "Fri, 12 Apr 2013 09:06:45 GMT", "version": "v1" }, { "created": "Mon, 15 Apr 2013 09:40:34 GMT", "version": "v2" } ]
2013-04-16
[ [ "Savage", "Richard S.", "" ], [ "Ghahramani", "Zoubin", "" ], [ "Griffin", "Jim E.", "" ], [ "Kirk", "Paul", "" ], [ "Wild", "David L.", "" ] ]
We present a nonparametric Bayesian method for disease subtype discovery in multi-dimensional cancer data. Our method can simultaneously analyse a wide range of data types, allowing for both agreement and disagreement between their underlying clustering structure. It includes feature selection and infers the most likely number of disease subtypes, given the data. We apply the method to 277 glioblastoma samples from The Cancer Genome Atlas, for which there are gene expression, copy number variation, methylation and microRNA data. We identify 8 distinct consensus subtypes and study their prognostic value for death, new tumour events, progression and recurrence. The consensus subtypes are prognostic of tumour recurrence (log-rank p-value of $3.6 \times 10^{-4}$ after correction for multiple hypothesis tests). This is driven principally by the methylation data (log-rank p-value of $2.0 \times 10^{-3}$) but the effect is strengthened by the other 3 data types, demonstrating the value of integrating multiple data types. Of particular note is a subtype of 47 patients characterised by very low levels of methylation. This subtype has very low rates of tumour recurrence and no new events in 10 years of follow up. We also identify a small gene expression subtype of 6 patients that shows particularly poor survival outcomes. Additionally, we note a consensus subtype that shows a highly distinctive data signature and suggest that it is therefore a biologically distinct subtype of glioblastoma. The code is available from https://sites.google.com/site/multipledatafusion/
0906.4471
Alexey Mazur K
Alexey K. Mazur
Analysis of Accordion DNA Stretching Revealed by The Gold Cluster Ruler
9 pages, 4 figures, to appear in Phys. Rev. E
Phys. Rev. E, 80, 010901, 2009
10.1103/PhysRevE.80.010901
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A promising new method for measuring intramolecular distances in solution uses small-angle X-ray scattering interference between gold nanocrystal labels (Mathew-Fenn et al, Science, 322, 446 (2008)). When applied to double stranded DNA, it revealed that the DNA length fluctuations are strikingly strong and correlated over at least 80 base pair steps. In other words, the DNA behaves as accordion bellows, with distant fragments stretching and shrinking concertedly. This hypothesis, however, disagrees with earlier experimental and computational observations. This Letter shows that the discrepancy can be rationalized by taking into account the cluster exclusion volume and assuming a moderate long-range repulsion between them. The long-range interaction can originate from an ion exclusion effect and cluster polarization in close proximity to the DNA surface.
[ { "created": "Wed, 24 Jun 2009 13:31:17 GMT", "version": "v1" } ]
2015-05-13
[ [ "Mazur", "Alexey K.", "" ] ]
A promising new method for measuring intramolecular distances in solution uses small-angle X-ray scattering interference between gold nanocrystal labels (Mathew-Fenn et al, Science, 322, 446 (2008)). When applied to double stranded DNA, it revealed that the DNA length fluctuations are strikingly strong and correlated over at least 80 base pair steps. In other words, the DNA behaves as accordion bellows, with distant fragments stretching and shrinking concertedly. This hypothesis, however, disagrees with earlier experimental and computational observations. This Letter shows that the discrepancy can be rationalized by taking into account the cluster exclusion volume and assuming a moderate long-range repulsion between them. The long-range interaction can originate from an ion exclusion effect and cluster polarization in close proximity to the DNA surface.
2306.10038
John Buckleton
Tim Kalafut, James Curran, Mike Coble, John Buckleton
Comments arising from WJ Thompson "Uncertainty in probabilistic genotyping of low template DNA A case study comparing STRmix and TrueAllele"
9 pages 1 figure
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Thompson reports a comparison of data from STRmix and TrueAllele. The data he has arises from different inputs to the two software. If the input data are made more similar the outputs become more similar. Thompson argues that the Analytical Threshold, AT, should be varied in casework. This produced different LRs but the analyst would be left deciding what to do with these options. This cannot be based on the LRs but should be based on whether any movement in the AT adds reliable or unreliable data. This is how most laboratories set their AT in the first place. Hence it is pointless, and potentially dangerous, to experimentally vary the AT in casework. The profile is low level and shows at most three peaks. Thompson argues that LR results assuming that the number of contributors (NoC) is 2 or 3 should be reported. Uncertainty in NoC should be treated as a nuisance variable and summed out.
[ { "created": "Sat, 10 Jun 2023 01:51:23 GMT", "version": "v1" } ]
2023-06-21
[ [ "Kalafut", "Tim", "" ], [ "Curran", "James", "" ], [ "Coble", "Mike", "" ], [ "Buckleton", "John", "" ] ]
Thompson reports a comparison of data from STRmix and TrueAllele. The data he has arises from different inputs to the two software. If the input data are made more similar the outputs become more similar. Thompson argues that the Analytical Threshold, AT, should be varied in casework. This produced different LRs but the analyst would be left deciding what to do with these options. This cannot be based on the LRs but should be based on whether any movement in the AT adds reliable or unreliable data. This is how most laboratories set their AT in the first place. Hence it is pointless, and potentially dangerous, to experimentally vary the AT in casework. The profile is low level and shows at most three peaks. Thompson argues that LR results assuming that the number of contributors (NoC) is 2 or 3 should be reported. Uncertainty in NoC should be treated as a nuisance variable and summed out.
1109.2683
Hui Zeng
Wei-Mou Zheng, Hui Zeng, Dong-Bo Bu, Ming-Fu Shao, Ke-Song Liu and Chao Wang
Looking for packing units of the protein structure
10 pages, 5 tables
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lattice-model simulations and experiments on some small proteins suggest that folding is essentially controlled by a few conserved contacts. Residues of these conserved contacts form the minimum set of native contacts needed to ensure foldability. Keeping such conserved specific contacts in mind, we examine contacts made by two secondary structure elements of different helices or sheets and look for possible 'packing units' of the protein structure. Two short backbone fragments of width five centred at the Cα atoms in contact are called an H-form, which serves as a candidate for the packing units. The structural alignment of protein family members or even across families indicates that there are conservative H-forms which are similar both in their sequences and local geometry, and consistent with the structural alignment. Carrying strong sequence signals, such packing units would provide 3D constraints as a complement to the potential functions for structure prediction.
[ { "created": "Tue, 13 Sep 2011 07:03:06 GMT", "version": "v1" } ]
2011-09-14
[ [ "Zheng", "Wei-Mou", "" ], [ "Zeng", "Hui", "" ], [ "Bu", "Dong-Bo", "" ], [ "Shao", "Ming-Fu", "" ], [ "Liu", "Ke-Song", "" ], [ "Wang", "Chao", "" ] ]
Lattice-model simulations and experiments on some small proteins suggest that folding is essentially controlled by a few conserved contacts. Residues of these conserved contacts form the minimum set of native contacts needed to ensure foldability. Keeping such conserved specific contacts in mind, we examine contacts made by two secondary structure elements of different helices or sheets and look for possible 'packing units' of the protein structure. Two short backbone fragments of width five centred at the Cα atoms in contact are called an H-form, which serves as a candidate for the packing units. The structural alignment of protein family members or even across families indicates that there are conservative H-forms which are similar both in their sequences and local geometry, and consistent with the structural alignment. Carrying strong sequence signals, such packing units would provide 3D constraints as a complement to the potential functions for structure prediction.
2308.12325
Colin Zhang
John Ho, Zhao-Heng Yin, Colin Zhang, Nicole Guo, Yang Ha
Predicting Drug Solubility Using Different Machine Learning Methods -- Linear Regression Model with Extracted Chemical Features vs Graph Convolutional Neural Network
7 pages, 4 figures, 2 tables
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting the solubility of given molecules remains crucial in the pharmaceutical industry. In this study, we revisited this extensively studied topic, leveraging the capabilities of contemporary computing resources. We employed two machine learning models: a linear regression model and a graph convolutional neural network (GCNN) model, using various experimental datasets. Both methods yielded reasonable predictions, with the GCNN model exhibiting the highest level of performance. However, the present GCNN model has limited interpretability, while the linear regression model allows scientists to conduct a more in-depth analysis of the underlying factors through feature importance analysis, although more human input and evaluation of the overall dataset are required. From the perspective of chemistry, using the linear regression model, we elucidated the impact of individual atom species and functional groups on overall solubility, highlighting the significance of comprehending how chemical structure influences chemical properties in the drug development process. It is learned that introducing oxygen atoms can increase the solubility of organic molecules, while hetero atoms other than oxygen and nitrogen tend to decrease solubility.
[ { "created": "Wed, 23 Aug 2023 15:35:20 GMT", "version": "v1" }, { "created": "Fri, 5 Jan 2024 01:28:36 GMT", "version": "v2" } ]
2024-01-08
[ [ "Ho", "John", "" ], [ "Yin", "Zhao-Heng", "" ], [ "Zhang", "Colin", "" ], [ "Guo", "Nicole", "" ], [ "Ha", "Yang", "" ] ]
Predicting the solubility of given molecules remains crucial in the pharmaceutical industry. In this study, we revisited this extensively studied topic, leveraging the capabilities of contemporary computing resources. We employed two machine learning models: a linear regression model and a graph convolutional neural network (GCNN) model, using various experimental datasets. Both methods yielded reasonable predictions, with the GCNN model exhibiting the highest level of performance. However, the present GCNN model has limited interpretability, while the linear regression model allows scientists to conduct a more in-depth analysis of the underlying factors through feature importance analysis, although more human input and evaluation of the overall dataset are required. From the perspective of chemistry, using the linear regression model, we elucidated the impact of individual atom species and functional groups on overall solubility, highlighting the significance of comprehending how chemical structure influences chemical properties in the drug development process. It is learned that introducing oxygen atoms can increase the solubility of organic molecules, while hetero atoms other than oxygen and nitrogen tend to decrease solubility.
2109.09649
Gregory Kiar
Gregory Kiar, Yohan Chatelain, Ali Salari, Alan C. Evans, Tristan Glatard
Data Augmentation Through Monte Carlo Arithmetic Leads to More Generalizable Classification in Connectomics
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Machine learning models are commonly applied to human brain imaging datasets in an effort to associate function or structure with behaviour, health, or other individual phenotypes. Such models often rely on low-dimensional maps generated by complex processing pipelines. However, the numerical instabilities inherent to pipelines limit the fidelity of these maps and introduce computational bias. Monte Carlo Arithmetic, a technique for introducing controlled amounts of numerical noise, was used to perturb a structural connectome estimation pipeline, ultimately producing a range of plausible networks for each sample. The variability in the perturbed networks was captured in an augmented dataset, which was then used for an age classification task. We found that resampling brain networks across a series of such numerically perturbed outcomes led to improved performance in all tested classifiers, preprocessing strategies, and dimensionality reduction techniques. Importantly, we find that this benefit does not hinge on a large number of perturbations, suggesting that even minimally perturbing a dataset adds meaningful variance which can be captured in the subsequently designed models.
[ { "created": "Mon, 20 Sep 2021 16:06:05 GMT", "version": "v1" } ]
2021-09-21
[ [ "Kiar", "Gregory", "" ], [ "Chatelain", "Yohan", "" ], [ "Salari", "Ali", "" ], [ "Evans", "Alan C.", "" ], [ "Glatard", "Tristan", "" ] ]
Machine learning models are commonly applied to human brain imaging datasets in an effort to associate function or structure with behaviour, health, or other individual phenotypes. Such models often rely on low-dimensional maps generated by complex processing pipelines. However, the numerical instabilities inherent to pipelines limit the fidelity of these maps and introduce computational bias. Monte Carlo Arithmetic, a technique for introducing controlled amounts of numerical noise, was used to perturb a structural connectome estimation pipeline, ultimately producing a range of plausible networks for each sample. The variability in the perturbed networks was captured in an augmented dataset, which was then used for an age classification task. We found that resampling brain networks across a series of such numerically perturbed outcomes led to improved performance in all tested classifiers, preprocessing strategies, and dimensionality reduction techniques. Importantly, we find that this benefit does not hinge on a large number of perturbations, suggesting that even minimally perturbing a dataset adds meaningful variance which can be captured in the subsequently designed models.
0902.0602
Ramana Dodla
Ramana Dodla, Charles J. Wilson
Asynchronous response of coupled pacemaker neurons
11 pages, 4 figures. To appear in Phys. Rev. Lett
null
10.1103/PhysRevLett.102.068102
null
q-bio.NC nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a network model of two conductance-based pacemaker neurons of differing natural frequency, coupled with either mutual excitation or inhibition, and receiving shared random inhibitory synaptic input. The networks may phase-lock spike-to-spike for strong mutual coupling. But the shared input can desynchronize the locked spike-pairs by selectively eliminating the lagging spike or modulating its timing with respect to the leading spike depending on their separation time window. Such loss of synchrony is also found in a large network of sparsely coupled heterogeneous spiking neurons receiving shared input.
[ { "created": "Tue, 3 Feb 2009 20:49:32 GMT", "version": "v1" } ]
2009-11-13
[ [ "Dodla", "Ramana", "" ], [ "Wilson", "Charles J.", "" ] ]
We study a network model of two conductance-based pacemaker neurons of differing natural frequency, coupled with either mutual excitation or inhibition, and receiving shared random inhibitory synaptic input. The networks may phase-lock spike-to-spike for strong mutual coupling. But the shared input can desynchronize the locked spike-pairs by selectively eliminating the lagging spike or modulating its timing with respect to the leading spike depending on their separation time window. Such loss of synchrony is also found in a large network of sparsely coupled heterogeneous spiking neurons receiving shared input.
1003.0902
Nikita Sakhanenko
Nikita A. Sakhanenko, David J. Galas
Markov Logic Networks in the Analysis of Genetic Data
29 pages, 9 figures, 1 table
null
null
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complex, non-additive genetic interactions are common and can be critical in determining phenotypes. Genome-wide association studies (GWAS) and similar statistical studies of linkage data, however, assume additive models of gene interactions in looking for genotype-phenotype associations. These statistical methods view the compound effects of multiple genes on a phenotype as a sum of partial influences of each individual gene and can often miss a substantial part of the heritable effect. Such methods do not use any biological knowledge about underlying genotype-phenotype mechanisms. Modeling approaches from the AI field that incorporate deterministic knowledge into models to perform statistical analysis can be applied to include prior knowledge in genetic analysis. We chose to use the most general such approach, Markov Logic Networks (MLNs), as a framework for combining deterministic knowledge with statistical analysis. Using simple, logistic regression-type MLNs we have been able to replicate the results of traditional statistical methods. Moreover, we show that even with simple models we are able to go beyond finding independent markers linked to a phenotype by using joint inference that avoids an independence assumption. The method is applied to genetic data on yeast sporulation, a phenotype governed by non-linear gene interactions. In addition to detecting all of the previously identified loci associated with sporulation, our method is able to identify four loci with small effects. Since their effect on sporulation is small, these four loci were not detected with methods that do not account for dependence between markers due to gene interactions. We show how gene interactions can be detected using more complex models, which can be used as a general framework for incorporating systems biology with genetics.
[ { "created": "Wed, 3 Mar 2010 21:01:10 GMT", "version": "v1" } ]
2010-03-05
[ [ "Sakhanenko", "Nikita A.", "" ], [ "Galas", "David J.", "" ] ]
Complex, non-additive genetic interactions are common and can be critical in determining phenotypes. Genome-wide association studies (GWAS) and similar statistical studies of linkage data, however, assume additive models of gene interactions in looking for genotype-phenotype associations. These statistical methods view the compound effects of multiple genes on a phenotype as a sum of partial influences of each individual gene and can often miss a substantial part of the heritable effect. Such methods do not use any biological knowledge about underlying genotype-phenotype mechanisms. Modeling approaches from the AI field that incorporate deterministic knowledge into models to perform statistical analysis can be applied to include prior knowledge in genetic analysis. We chose to use the most general such approach, Markov Logic Networks (MLNs), as a framework for combining deterministic knowledge with statistical analysis. Using simple, logistic regression-type MLNs we have been able to replicate the results of traditional statistical methods. Moreover, we show that even with simple models we are able to go beyond finding independent markers linked to a phenotype by using joint inference that avoids an independence assumption. The method is applied to genetic data on yeast sporulation, a phenotype governed by non-linear gene interactions. In addition to detecting all of the previously identified loci associated with sporulation, our method is able to identify four loci with small effects. Since their effect on sporulation is small, these four loci were not detected with methods that do not account for dependence between markers due to gene interactions. We show how gene interactions can be detected using more complex models, which can be used as a general framework for incorporating systems biology with genetics.
1408.6616
Federico Maggi
Maggi Federico, Domenico Bosco, Cristina Marzachi'
Conceptual and mathematical modeling of insect-borne plant diseases: theory and application to flavescence doree in grapevine
null
null
null
School of Civil Engineering, Research Report R947, ISSN 1833-2781
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Insect-borne plant diseases recur commonly in wild plants and in agricultural crops, and are responsible for severe losses in terms of produce yield and monetary return. Mathematical models of insect-borne plant diseases are therefore an essential tool to help predict the progression of an epidemic disease and aid in decision making when control strategies are to be implemented in the field. While retaining a generalized applicability of the proposed model to plant epidemics vectored by insects, we specifically investigated the epidemics of Flavescence dor\'ee phytoplasma (FD) in grapevine plant Vitis vinifera specifically transmitted by the leafhopper Scaphoideus titanus. The epidemiological model accounted for life-cycle stage of S. titanus, FD pathogen cycle within S. titanus and V. vinifera, vineyard setting, and agronomic practices. The model was comprehensively tested against biological S. titanus life cycle and FD epidemics data collected in various research sites in Piemonte, Italy, over multiple years. The work presented here represents a unique suite of governing equations tested on existing independent data and sets the basis for further modelling advances and possible applications to investigate effectiveness of real-case epidemics control strategies and scenarios.
[ { "created": "Thu, 28 Aug 2014 03:22:55 GMT", "version": "v1" } ]
2014-08-29
[ [ "Federico", "Maggi", "" ], [ "Bosco", "Domenico", "" ], [ "Marzachi'", "Cristina", "" ] ]
Insect-borne plant diseases recur commonly in wild plants and in agricultural crops, and are responsible for severe losses in terms of produce yield and monetary return. Mathematical models of insect-borne plant diseases are therefore an essential tool to help predict the progression of an epidemic disease and aid in decision making when control strategies are to be implemented in the field. While retaining a generalized applicability of the proposed model to plant epidemics vectored by insects, we specifically investigated the epidemics of Flavescence dor\'ee phytoplasma (FD) in grapevine plant Vitis vinifera specifically transmitted by the leafhopper Scaphoideus titanus. The epidemiological model accounted for life-cycle stage of S. titanus, FD pathogen cycle within S. titanus and V. vinifera, vineyard setting, and agronomic practices. The model was comprehensively tested against biological S. titanus life cycle and FD epidemics data collected in various research sites in Piemonte, Italy, over multiple years. The work presented here represents a unique suite of governing equations tested on existing independent data and sets the basis for further modelling advances and possible applications to investigate effectiveness of real-case epidemics control strategies and scenarios.
1602.04099
Ryan Morris
Ryan J. Morris, Giovanni B. Brandani, Vibhuti Desai, Brian O. Smith, Marieke Schor, Cait E. MacPhee
The Conformation of Interfacially Adsorbed Ranaspumin-2 is an Arrested State on the Unfolding Pathway
8 figures
Biophys. J. 111(4), 732-742, (2016)
10.1016/j.bpj.2016.06.006
null
q-bio.BM cond-mat.soft q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ranaspumin-2 (Rsn-2) is a surfactant protein found in the foam nests of the t\'{u}ngara frog. Previous experimental work has led to a proposed model of adsorption which involves an unusual clam shell-like `unhinging' of the protein at an interface. Interestingly, there is no concomitant denaturation of the secondary structural elements of Rsn-2 with the large scale transformation of its tertiary structure. In this work we use both experiment and simulation to better understand the driving forces underpinning this unusual process. We develop a modified G\={o}-model approach where we have included explicit representation of the side-chains in order to realistically model the interaction between the secondary structure elements of the protein and the interface. Doing so allows for the study of the underlying energy landscape which governs the mechanism of Rsn-2 interfacial adsorption. Experimentally, we study targeted mutants of Rsn-2, using the Langmuir trough, pendant drop tensiometry and circular dichroism, to demonstrate that the clam-shell model is correct. We find that Rsn-2 adsorption is in fact a two-step process: the hydrophobic N-terminal tail recruits the protein to the interface after which Rsn-2 undergoes an unfolding transition which maintains its secondary structure. Intriguingly, our simulations show that the conformation Rsn-2 adopts at an interface is an arrested state along the denaturation pathway. More generally, our computational model should prove a useful, and computationally efficient, tool in studying the dynamics and energetics of protein-interface interactions.
[ { "created": "Fri, 12 Feb 2016 16:12:35 GMT", "version": "v1" } ]
2016-08-30
[ [ "Morris", "Ryan J.", "" ], [ "Brandani", "Giovanni B.", "" ], [ "Desai", "Vibhuti", "" ], [ "Smith", "Brian O.", "" ], [ "Schor", "Marieke", "" ], [ "MacPhee", "Cait E.", "" ] ]
Ranaspumin-2 (Rsn-2) is a surfactant protein found in the foam nests of the t\'{u}ngara frog. Previous experimental work has led to a proposed model of adsorption which involves an unusual clam shell-like `unhinging' of the protein at an interface. Interestingly, there is no concomitant denaturation of the secondary structural elements of Rsn-2 with the large scale transformation of its tertiary structure. In this work we use both experiment and simulation to better understand the driving forces underpinning this unusual process. We develop a modified G\={o}-model approach where we have included explicit representation of the side-chains in order to realistically model the interaction between the secondary structure elements of the protein and the interface. Doing so allows for the study of the underlying energy landscape which governs the mechanism of Rsn-2 interfacial adsorption. Experimentally, we study targeted mutants of Rsn-2, using the Langmuir trough, pendant drop tensiometry and circular dichroism, to demonstrate that the clam-shell model is correct. We find that Rsn-2 adsorption is in fact a two-step process: the hydrophobic N-terminal tail recruits the protein to the interface after which Rsn-2 undergoes an unfolding transition which maintains its secondary structure. Intriguingly, our simulations show that the conformation Rsn-2 adopts at an interface is an arrested state along the denaturation pathway. More generally, our computational model should prove a useful, and computationally efficient, tool in studying the dynamics and energetics of protein-interface interactions.
1112.1510
Ganesh Bagler Dr
Shikha Vashisht and Ganesh Bagler
An approach for the identification of targets specific to bone metastasis using cancer genes interactome and gene ontology analysis
54 pages (19 pages main text; 11 Figures; 26 pages of supplementary information). Revised after critical reviews. Accepted for Publication in PLoS ONE
null
10.1371/journal.pone.0049401
CSIR-IHBT communication number 2245
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metastasis is one of the most enigmatic aspects of cancer pathogenesis and is a major cause of cancer-associated mortality. Secondary bone cancer (SBC) is a complex disease caused by metastasis of tumor cells from their primary site and is characterized by intricate interplay of molecular interactions. Identification of targets for multifactorial diseases such as SBC, the most frequent complication of breast and prostate cancers, is a challenge. Towards achieving our aim of identification of targets specific to SBC, we constructed a 'Cancer Genes Network', a representative protein interactome of cancer genes. Using graph theoretical methods, we obtained a set of key genes that are relevant for generic mechanisms of cancers and have a role in biological essentiality. We also compiled a curated dataset of 391 SBC genes from published literature which serves as a basis of ontological correlates of secondary bone cancer. Building on these results, we implement a strategy based on generic cancer genes, SBC genes and gene ontology enrichment method, to obtain a set of targets that are specific to bone metastasis. Through this study, we present an approach for probing one of the major complications in cancers, namely, metastasis. The results on genes that play generic roles in cancer phenotype, obtained by network analysis of 'Cancer Genes Network', have broader implications in understanding the role of molecular regulators in mechanisms of cancers. Specifically, our study provides a set of potential targets that are of ontological and regulatory relevance to secondary bone cancer.
[ { "created": "Wed, 7 Dec 2011 10:11:20 GMT", "version": "v1" }, { "created": "Thu, 11 Oct 2012 06:46:23 GMT", "version": "v2" } ]
2015-06-03
[ [ "Vashisht", "Shikha", "" ], [ "Bagler", "Ganesh", "" ] ]
Metastasis is one of the most enigmatic aspects of cancer pathogenesis and is a major cause of cancer-associated mortality. Secondary bone cancer (SBC) is a complex disease caused by metastasis of tumor cells from their primary site and is characterized by intricate interplay of molecular interactions. Identification of targets for multifactorial diseases such as SBC, the most frequent complication of breast and prostate cancers, is a challenge. Towards achieving our aim of identification of targets specific to SBC, we constructed a 'Cancer Genes Network', a representative protein interactome of cancer genes. Using graph theoretical methods, we obtained a set of key genes that are relevant for generic mechanisms of cancers and have a role in biological essentiality. We also compiled a curated dataset of 391 SBC genes from published literature which serves as a basis of ontological correlates of secondary bone cancer. Building on these results, we implement a strategy based on generic cancer genes, SBC genes and gene ontology enrichment method, to obtain a set of targets that are specific to bone metastasis. Through this study, we present an approach for probing one of the major complications in cancers, namely, metastasis. The results on genes that play generic roles in cancer phenotype, obtained by network analysis of 'Cancer Genes Network', have broader implications in understanding the role of molecular regulators in mechanisms of cancers. Specifically, our study provides a set of potential targets that are of ontological and regulatory relevance to secondary bone cancer.
1309.6692
Varunyu Khamviwath
Varunyu Khamviwath and Hans G. Othmer
Signal transduction and directional sensing in eukaryotes
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Control of the cytoskeleton and mechanical contacts with the extracellular environment are essential components of motility in eukaryotic cells. In the absence of signals, cells continuously rebuild the cytoskeleton and periodically extend pseudopods or other protrusions at random membrane locations. Extracellular signals bias the direction of movement by biasing the extension of protrusions, but this involves another layer of biochemical networks for signal detection, transduction, and control of the rebuilding of the cytoskeleton. Here we develop a model for the latter processes that centers on a Ras-based module that adapts to constant extracellular signals and controls the downstream PI3K-PIP3-based module responsible for amplifying a spatial gradient of the signal. The resulting spatial gradient can lead to polarization, which enables cells to move in the preferred direction (up gradient for attractants and down-gradient for repellents). We show that the model can replicate many of the observed characteristics of the responses to cAMP stimulation for Dictyostelium, and analyze how cell geometry and signaling interact to produce the observed localization of some of the key components of the amplification module. We show how polarization can emerge without directional cues, and how it interacts with directional signals and leads to directional persistence. Since other cells such as neutrophils use similar pathways, the model is a generic one for a large class of eukaryotic cells.
[ { "created": "Wed, 25 Sep 2013 23:30:50 GMT", "version": "v1" } ]
2013-09-27
[ [ "Khamviwath", "Varunyu", "" ], [ "Othmer", "Hans G.", "" ] ]
Control of the cytoskeleton and mechanical contacts with the extracellular environment are essential components of motility in eukaryotic cells. In the absence of signals, cells continuously rebuild the cytoskeleton and periodically extend pseudopods or other protrusions at random membrane locations. Extracellular signals bias the direction of movement by biasing the extension of protrusions, but this involves another layer of biochemical networks for signal detection, transduction, and control of the rebuilding of the cytoskeleton. Here we develop a model for the latter processes that centers on a Ras-based module that adapts to constant extracellular signals and controls the downstream PI3K-PIP3-based module responsible for amplifying a spatial gradient of the signal. The resulting spatial gradient can lead to polarization, which enables cells to move in the preferred direction (up gradient for attractants and down-gradient for repellents). We show that the model can replicate many of the observed characteristics of the responses to cAMP stimulation for Dictyostelium, and analyze how cell geometry and signaling interact to produce the observed localization of some of the key components of the amplification module. We show how polarization can emerge without directional cues, and how it interacts with directional signals and leads to directional persistence. Since other cells such as neutrophils use similar pathways, the model is a generic one for a large class of eukaryotic cells.
0712.2932
Leo van Iersel
Leo van Iersel, Steven Kelk and Matthias Mnich
Uniqueness, intractability and exact algorithms: reflections on level-k phylogenetic networks
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic networks provide a way to describe and visualize evolutionary histories that have undergone so-called reticulate evolutionary events such as recombination, hybridization or horizontal gene transfer. The level k of a network determines how non-treelike the evolution can be, with level-0 networks being trees. We study the problem of constructing level-k phylogenetic networks from triplets, i.e. phylogenetic trees for three leaves (taxa). We give, for each k, a level-k network that is uniquely defined by its triplets. We demonstrate the applicability of this result by using it to prove that (1) for all k >= 1 it is NP-hard to construct a level-k network consistent with all input triplets, and (2) for all k it is NP-hard to construct a level-k network consistent with a maximum number of input triplets, even when the input is dense. As a response to this intractability we give an exact algorithm for constructing level-1 networks consistent with a maximum number of input triplets.
[ { "created": "Tue, 18 Dec 2007 11:59:51 GMT", "version": "v1" }, { "created": "Mon, 14 Jan 2008 09:24:17 GMT", "version": "v2" }, { "created": "Mon, 21 Jul 2008 10:53:23 GMT", "version": "v3" } ]
2008-07-21
[ [ "van Iersel", "Leo", "" ], [ "Kelk", "Steven", "" ], [ "Mnich", "Matthias", "" ] ]
Phylogenetic networks provide a way to describe and visualize evolutionary histories that have undergone so-called reticulate evolutionary events such as recombination, hybridization or horizontal gene transfer. The level k of a network determines how non-treelike the evolution can be, with level-0 networks being trees. We study the problem of constructing level-k phylogenetic networks from triplets, i.e. phylogenetic trees for three leaves (taxa). We give, for each k, a level-k network that is uniquely defined by its triplets. We demonstrate the applicability of this result by using it to prove that (1) for all k >= 1 it is NP-hard to construct a level-k network consistent with all input triplets, and (2) for all k it is NP-hard to construct a level-k network consistent with a maximum number of input triplets, even when the input is dense. As a response to this intractability we give an exact algorithm for constructing level-1 networks consistent with a maximum number of input triplets.
2401.07016
Samira Bolandghamat
Samira Bolandghamat, Morteza Behnam-Rassouli
Iron role paradox in nerve degeneration and regeneration
null
Bolandghamat, S., & Behnam-Rassouli, M. (2024). Iron role paradox in nerve degeneration and regeneration. Physiological Reports, 12, e15908
10.14814/phy2.15908
null
q-bio.NC q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Iron accumulates in the neural tissue during peripheral nerve degeneration. Some studies have already suggested that iron facilitates Wallerian degeneration (WD) events such as Schwann cell de-differentiation. On the other hand, intracellular iron levels remain elevated during nerve regeneration and gradually decrease. Iron enhances Schwann cell differentiation and axonal outgrowth. Therefore, there seems to be a paradox in the role of iron during nerve degeneration and regeneration. We explain this contradiction by suggesting that the increase in intracellular iron concentration during peripheral nerve degeneration is likely to prepare neural cells for the initiation of regeneration. Changes in iron levels are the result of changes in the expression of iron homeostasis proteins. In this review, we will first discuss the changes in the iron/iron homeostasis protein levels during peripheral nerve degeneration and regeneration and then explain how iron is related to nerve regeneration. These data may help better understand the mechanisms of peripheral nerve repair and find a solution to prevent or slow the progression of peripheral neuropathies.
[ { "created": "Sat, 13 Jan 2024 08:56:53 GMT", "version": "v1" } ]
2024-01-23
[ [ "Bolandghamat", "Samira", "" ], [ "Behnam-Rassouli", "Morteza", "" ] ]
Iron accumulates in the neural tissue during peripheral nerve degeneration. Some studies have already suggested that iron facilitates Wallerian degeneration (WD) events such as Schwann cell de-differentiation. On the other hand, intracellular iron levels remain elevated during nerve regeneration and gradually decrease. Iron enhances Schwann cell differentiation and axonal outgrowth. Therefore, there seems to be a paradox in the role of iron during nerve degeneration and regeneration. We explain this contradiction by suggesting that the increase in intracellular iron concentration during peripheral nerve degeneration is likely to prepare neural cells for the initiation of regeneration. Changes in iron levels are the result of changes in the expression of iron homeostasis proteins. In this review, we will first discuss the changes in the iron/iron homeostasis protein levels during peripheral nerve degeneration and regeneration and then explain how iron is related to nerve regeneration. These data may help better understand the mechanisms of peripheral nerve repair and find a solution to prevent or slow the progression of peripheral neuropathies.
2007.08464
Luca Parma
S. Busti (1), A. Bonaldo (1), F. Dondi (1), D. Cavallini (1), M. Yufera (2), N. Gilannejad (2), F. J. Moyano (3), P.P Gatta (1), L. Parma (1) ((1) Department of Veterinary Medical Sciences University of Bologna, (2) Instituto de Ciencias Marinas de Andaluc\'ia, (3) Department of Biology and Geology Universidad de Almer\'ia)
Effects of different feeding frequencies on growth, feed utilisation, digestive enzyme activities and plasma biochemistry of gilthead sea bream (Sparus aurata) fed with different fishmeal and fish oil dietary levels
null
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by-nc-nd/4.0/
In the context of Mediterranean aquaculture little attention has been paid to the manipulation of feeding frequency at the on-growing phase. The effects of different feeding frequencies (one, two, or three meals per day) on growth, digestive enzyme activity, feed digestibility and plasma biochemistry were studied in gilthead sea bream (Sparus aurata, L. 1758) fed with high and low fishmeal and fish oil levels. Isonitrogenous and isolipidic extruded diets were fed to triplicate fish groups by a fixed ration over 109 days. No significant effects of feeding frequency on overall performance, feed efficiency and feed digestibility during the on-growing of gilthead sea bream fed high or low fishmeal and fish oil dietary level were observed. Pepsin activity showed an apparent decrease in fish receiving more than one meal a day which was not compensated by an increased production of alkaline proteases, particularly in fish fed on low FM. Although there were no effects on growth and feed utilisation at increasing feeding frequency, trypsin decreased significantly with an increasing number of meals only under the low FMFO diet. Thus, it seemed that consecutive meals could have amplified the potential trypsin inhibitor effect of the vegetable meal-based diet adopted. Results of the plasma parameters related to nutritional and physiological conditions were not affected by feeding frequency. The higher level of plasma creatinine detected in fish fed a single daily meal with high FMFO level seems to be within physiological values in relation to the higher protein efficiency observed with this diet. According to the results, gilthead sea bream seems able to maximise feed utilisation regardless of the number of meals, and this could be a useful indicator for planning feeding activity at farm level to optimise growth of fish and costs of feeding procedures.
[ { "created": "Tue, 14 Jul 2020 13:26:28 GMT", "version": "v1" }, { "created": "Wed, 10 Feb 2021 11:14:24 GMT", "version": "v2" } ]
2021-02-11
[ [ "Busti", "S.", "" ], [ "Bonaldo", "A.", "" ], [ "Dondi", "F.", "" ], [ "Cavallini", "D.", "" ], [ "Yufera", "M.", "" ], [ "Gilannejad", "N.", "" ], [ "Moyano", "F. J.", "" ], [ "Gatta", "P. P", "" ], [ "Parma", "L.", "" ] ]
In the context of Mediterranean aquaculture, little attention has been paid to the manipulation of feeding frequency at the on-growing phase. The effects of different feeding frequencies (one, two or three meals per day) on growth, digestive enzyme activity, feed digestibility and plasma biochemistry were studied in gilthead sea bream (Sparus aurata, L. 1758) fed with high and low fishmeal and fish oil levels. Isonitrogenous and isolipidic extruded diets were fed to triplicate fish groups by a fixed ration over 109 days. No significant effects of feeding frequency on overall performance, feed efficiency and feed digestibility during the on-growing of gilthead sea bream fed high or low fishmeal and fish oil dietary levels were observed. Pepsin activity showed an apparent decrease in fish receiving more than one meal a day, which was not compensated by an increased production of alkaline proteases, particularly in fish fed on low FM. Although there were no effects on growth and feed utilisation at increasing feeding frequency, trypsin decreased significantly with an increasing number of meals only under the low FMFO diet. Thus, it seemed that consecutive meals could have amplified the potential trypsin inhibitor effect of the vegetable meal-based diet adopted. Results of the plasma parameters related to nutritional and physiological conditions were not affected by feeding frequency. The higher level of plasma creatinine detected in fish fed a single daily meal with the high FMFO level seems to be within physiological values in relation to the higher protein efficiency observed with this diet. According to the results, gilthead sea bream seems able to maximise feed utilisation regardless of the number of meals, and this could be a useful indicator for planning feeding activity at farm level to optimise growth of fish and costs of feeding procedures.
1508.07616
Stephen Pankavich
Stephen Pankavich and Deborah Shutt
An in-host model of HIV incorporating latent infection and viral mutation
10 pages, 7 figures, Proceedings of AIMS Conference on Differential Equations and Dynamical Systems (2015)
Dynamical Systems, Differential Equations and Applications, AIMS Proceedings, 2015 pp. 913-922
10.3934/proc.2015.0913
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We construct a seven-component model of the in-host dynamics of the Human Immunodeficiency Virus Type-1 (i.e., HIV) that accounts for latent infection and the propensity of viral mutation. A dynamical analysis is conducted and a theorem is presented which characterizes the long-time behavior of the model. Finally, we study the effects of an antiretroviral drug and treatment implications.
[ { "created": "Sun, 30 Aug 2015 19:01:00 GMT", "version": "v1" } ]
2016-04-18
[ [ "Pankavich", "Stephen", "" ], [ "Shutt", "Deborah", "" ] ]
We construct a seven-component model of the in-host dynamics of the Human Immunodeficiency Virus Type-1 (i.e., HIV) that accounts for latent infection and the propensity of viral mutation. A dynamical analysis is conducted and a theorem is presented which characterizes the long-time behavior of the model. Finally, we study the effects of an antiretroviral drug and treatment implications.
1611.01842
Daniel B. Weissman
Daniel B. Weissman, Oskar Hallatschek
Minimal-assumption inference from population-genomic data
21 pages, 9 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Samples of multiple complete genome sequences contain vast amounts of information about the evolutionary history of populations, much of it in the associations among polymorphisms at different loci. Current methods that take advantage of this linkage information rely on models of recombination and coalescence, limiting the sample sizes and populations that they can analyze. We introduce a method, Minimal-Assumption Genomic Inference of Coalescence (MAGIC), that reconstructs key features of the evolutionary history, including the distribution of coalescence times, by integrating information across genomic length scales without using an explicit model of recombination, demography or selection. Using simulated data, we show that MAGIC's performance is comparable to PSMC' on single diploid samples generated with standard coalescent and recombination models. More importantly, MAGIC can also analyze arbitrarily large samples and is robust to changes in the coalescent and recombination processes. Using MAGIC, we show that the inferred coalescence time histories of samples of multiple human genomes exhibit inconsistencies with a description in terms of an effective population size based on single-genome data.
[ { "created": "Sun, 6 Nov 2016 20:50:24 GMT", "version": "v1" } ]
2016-11-08
[ [ "Weissman", "Daniel B.", "" ], [ "Hallatschek", "Oskar", "" ] ]
Samples of multiple complete genome sequences contain vast amounts of information about the evolutionary history of populations, much of it in the associations among polymorphisms at different loci. Current methods that take advantage of this linkage information rely on models of recombination and coalescence, limiting the sample sizes and populations that they can analyze. We introduce a method, Minimal-Assumption Genomic Inference of Coalescence (MAGIC), that reconstructs key features of the evolutionary history, including the distribution of coalescence times, by integrating information across genomic length scales without using an explicit model of recombination, demography or selection. Using simulated data, we show that MAGIC's performance is comparable to PSMC' on single diploid samples generated with standard coalescent and recombination models. More importantly, MAGIC can also analyze arbitrarily large samples and is robust to changes in the coalescent and recombination processes. Using MAGIC, we show that the inferred coalescence time histories of samples of multiple human genomes exhibit inconsistencies with a description in terms of an effective population size based on single-genome data.
1905.02007
Ahmed El Hady
Pepe Alcami, Ahmed El Hady
Axonal Computations
25 pages, 6 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Axons functionally link the somato-dendritic compartment to synaptic terminals. Structurally and functionally diverse, they play a central role in determining the delays and reliability with which neuronal ensembles communicate. By combining their active and passive biophysical properties, they ensure a plethora of physiological computations. In this review, we revisit the biophysics of generation and propagation of electrical signals in the axon, their complex interplay, and their rich dynamics. We further place the computational abilities of axons in the context of intracellular and intercellular coupling. We discuss how, by means of sophisticated biophysical mechanisms, axons expand the repertoire of axonal computation, and thereby, of neural computation.
[ { "created": "Mon, 6 May 2019 12:57:12 GMT", "version": "v1" } ]
2019-05-07
[ [ "Alcami", "Pepe", "" ], [ "Hady", "Ahmed El", "" ] ]
Axons functionally link the somato-dendritic compartment to synaptic terminals. Structurally and functionally diverse, they play a central role in determining the delays and reliability with which neuronal ensembles communicate. By combining their active and passive biophysical properties, they ensure a plethora of physiological computations. In this review, we revisit the biophysics of generation and propagation of electrical signals in the axon, their complex interplay, and their rich dynamics. We further place the computational abilities of axons in the context of intracellular and intercellular coupling. We discuss how, by means of sophisticated biophysical mechanisms, axons expand the repertoire of axonal computation, and thereby, of neural computation.
2103.14419
Pierre Morisse
Pierre Morisse, Claire Lemaitre, Fabrice Legeai
LRez: C++ API and toolkit for analyzing and managing Linked-Reads data
4 pages, 1 table
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-nd/4.0/
Linked-Reads technologies, such as 10x Genomics, combine the high quality and low cost of short-read sequencing with long-range information, through the use of barcodes able to tag reads which originate from a common long DNA fragment. This technology has been employed in a broad range of applications including assembly or phasing of genomes, and structural variant calling. However, to date, no tool or API dedicated to the manipulation of Linked-Reads data exists. We introduce LRez, a C++ API and toolkit which allows easy management of Linked-Reads data. LRez includes various functionalities for computing the number of common barcodes between genomic regions, extracting barcodes from BAM files, as well as indexing and querying both BAM and FASTQ files to quickly fetch reads or alignments sharing one or multiple barcodes. LRez can thus be used in a broad range of applications requiring barcode processing, in order to improve their performance. LRez is implemented in C++, supported on Linux platforms, and available under the AGPL-3.0 License at https://github.com/morispi/LRez.
[ { "created": "Fri, 26 Mar 2021 11:59:52 GMT", "version": "v1" }, { "created": "Mon, 29 Mar 2021 07:23:09 GMT", "version": "v2" } ]
2021-03-30
[ [ "Morisse", "Pierre", "" ], [ "Lemaitre", "Claire", "" ], [ "Legeai", "Fabrice", "" ] ]
Linked-Reads technologies, such as 10x Genomics, combine the high quality and low cost of short-read sequencing with long-range information, through the use of barcodes able to tag reads which originate from a common long DNA fragment. This technology has been employed in a broad range of applications including assembly or phasing of genomes, and structural variant calling. However, to date, no tool or API dedicated to the manipulation of Linked-Reads data exists. We introduce LRez, a C++ API and toolkit which allows easy management of Linked-Reads data. LRez includes various functionalities for computing the number of common barcodes between genomic regions, extracting barcodes from BAM files, as well as indexing and querying both BAM and FASTQ files to quickly fetch reads or alignments sharing one or multiple barcodes. LRez can thus be used in a broad range of applications requiring barcode processing, in order to improve their performance. LRez is implemented in C++, supported on Linux platforms, and available under the AGPL-3.0 License at https://github.com/morispi/LRez.
1810.05470
Álvaro García López
Álvaro G. López, Kelly C. Iarosz, Antonio M. Batista, Jesús M. Seoane, Ricardo L. Viana, Miguel A. F. Sanjuán
Nonlinear cancer chemotherapy: modelling the Norton-Simon hypothesis
null
null
10.1016/j.cnsns.2018.11.006
null
q-bio.PE q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fundamental model of tumor growth in the presence of cytotoxic chemotherapeutic agents is formulated. The model allows us to study the role of the Norton-Simon hypothesis in the context of dose-dense chemotherapy. Dose-dense protocols aim at reducing the period between courses of chemotherapy from three weeks to two weeks, in order to avoid tumor regrowth between cycles. We address the conditions under which these protocols might be more or less beneficial in comparison to less dense settings, depending on the sensitivity of the tumor cells to the cytotoxic drugs. The effects of varying other parameters of the protocol, such as the duration of each continuous drug infusion, are also inspected. We believe that the present model might serve as a foundation for the development of more sophisticated models for cancer chemotherapy.
[ { "created": "Fri, 12 Oct 2018 12:16:35 GMT", "version": "v1" }, { "created": "Mon, 15 Oct 2018 18:29:32 GMT", "version": "v2" } ]
2018-12-05
[ [ "López", "Álvaro G.", "" ], [ "Iarosz", "Kelly C.", "" ], [ "Batista", "Antonio M.", "" ], [ "Seoane", "Jesús M.", "" ], [ "Viana", "Ricardo L.", "" ], [ "Sanjuán", "Miguel A. F.", "" ] ]
A fundamental model of tumor growth in the presence of cytotoxic chemotherapeutic agents is formulated. The model allows us to study the role of the Norton-Simon hypothesis in the context of dose-dense chemotherapy. Dose-dense protocols aim at reducing the period between courses of chemotherapy from three weeks to two weeks, in order to avoid tumor regrowth between cycles. We address the conditions under which these protocols might be more or less beneficial in comparison to less dense settings, depending on the sensitivity of the tumor cells to the cytotoxic drugs. The effects of varying other parameters of the protocol, such as the duration of each continuous drug infusion, are also inspected. We believe that the present model might serve as a foundation for the development of more sophisticated models for cancer chemotherapy.
q-bio/0608031
Carlos Escudero
Carlos Escudero
Geometrical approach to tumor growth
null
Phys. Rev. E 74, 021901 (2006)
10.1103/PhysRevE.74.021901
null
q-bio.QM cond-mat.stat-mech nlin.AO q-bio.TO
null
Tumor growth has a number of features in common with a physical process known as molecular beam epitaxy. Both growth processes are characterized by the constraint of growth development to the body border, and surface diffusion of cells/particles at the growing edge. However, tumor growth implies an approximate spherical symmetry that makes necessary a geometrical treatment of the growth equations. The basic model was introduced in a former article [C. Escudero, Phys. Rev. E 73, 020902(R) (2006)], and in the present work we extend our analysis and try to shed light on the possible geometrical principles that drive tumor growth. We present two-dimensional models that reproduce the experimental observations, and analyse the unexplored three-dimensional case, for which new conclusions on tumor growth are derived.
[ { "created": "Fri, 18 Aug 2006 17:49:54 GMT", "version": "v1" } ]
2009-11-13
[ [ "Escudero", "Carlos", "" ] ]
Tumor growth has a number of features in common with a physical process known as molecular beam epitaxy. Both growth processes are characterized by the constraint of growth development to the body border, and surface diffusion of cells/particles at the growing edge. However, tumor growth implies an approximate spherical symmetry that makes necessary a geometrical treatment of the growth equations. The basic model was introduced in a former article [C. Escudero, Phys. Rev. E 73, 020902(R) (2006)], and in the present work we extend our analysis and try to shed light on the possible geometrical principles that drive tumor growth. We present two-dimensional models that reproduce the experimental observations, and analyse the unexplored three-dimensional case, for which new conclusions on tumor growth are derived.
2312.00932
Elbert Branscomb
Elbert Branscomb
Boltzmann's casino and the unbridgeable chasm in emergence of life research
68 Pages
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Notwithstanding its long history and compelling motivation, research seeking to explicate the emergence of life (EoL) has throughout been a cacophony of unresolved speculation and dispute; absent still any clear convergence or other inarguable evidence of progress. This notwithstanding that it has also produced a rich and varied supply of putatively promising technical advances. Not surprising, then, the effort being advanced by some to establish a shared basis in fundamental assumptions upon which a more productive community research effort might arise. In this essay, however, I press a case in opposition. First, that a chasm divides the rich fauna of contesting EoL models into two conceptually incommensurate classes; here named "chemistry models" (to which class belongs nearly all thinking and work in the field, past and present) and "engine models" (advanced in various more-or-less partial forms by a marginal minority of voices dating from Boltzmann forward). Second, that contemporary non-equilibrium thermodynamics dictates that 'engine-less' (i.e. 'chemistry') models cannot in principle generate non-equilibrium, organized states of matter and are in consequence inherently incapable of prizing life out of inanimate matter.
[ { "created": "Fri, 1 Dec 2023 21:20:20 GMT", "version": "v1" } ]
2023-12-05
[ [ "Branscomb", "Elbert", "" ] ]
Notwithstanding its long history and compelling motivation, research seeking to explicate the emergence of life (EoL) has throughout been a cacophony of unresolved speculation and dispute; absent still any clear convergence or other inarguable evidence of progress. This notwithstanding that it has also produced a rich and varied supply of putatively promising technical advances. Not surprising, then, the effort being advanced by some to establish a shared basis in fundamental assumptions upon which a more productive community research effort might arise. In this essay, however, I press a case in opposition. First, that a chasm divides the rich fauna of contesting EoL models into two conceptually incommensurate classes; here named "chemistry models" (to which class belongs nearly all thinking and work in the field, past and present) and "engine models" (advanced in various more-or-less partial forms by a marginal minority of voices dating from Boltzmann forward). Second, that contemporary non-equilibrium thermodynamics dictates that 'engine-less' (i.e. 'chemistry') models cannot in principle generate non-equilibrium, organized states of matter and are in consequence inherently incapable of prizing life out of inanimate matter.
2211.04195
Lucas Hedström
Lucas Hedström and Ludvig Lizana
Modelling chromosome-wide target search
15 pages, 11 figures
New J. Phys. 25 033024 (2023)
10.1088/1367-2630/acc127
null
q-bio.QM cond-mat.stat-mech physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
The most common gene regulation mechanism is when a transcription factor protein binds to a regulatory sequence to increase or decrease RNA transcription. However, transcription factors face two main challenges when searching for these sequences. First, they are vanishingly short relative to the genome length. Second, many nearly identical sequences are scattered across the genome, causing proteins to suspend the search. But as pointed out in a computational study of LacI regulation in Escherichia coli, such almost-targets may lower search times if considering DNA looping. In this paper, we explore if this also occurs over chromosome-wide distances. To this end, we developed a cross-scale computational framework that combines established facilitated-diffusion models for basepair-level search and a network model capturing chromosome-wide leaps. To make our model realistic, we used Hi-C data sets as a proxy for 3D proximity between long-ranged DNA segments and binding profiles for more than 100 transcription factors. Using our cross-scale model, we found that median search times to individual targets critically depend on a network metric combining node strength (sum of link weights) and local dissociation rates. Also, by randomizing these rates, we found that some actual 3D target configurations stand out as considerably faster or slower than their random counterparts. This finding hints that chromosomes' 3D structure funnels essential transcription factors to relevant DNA regions.
[ { "created": "Tue, 8 Nov 2022 12:22:19 GMT", "version": "v1" } ]
2023-11-21
[ [ "Hedström", "Lucas", "" ], [ "Lizana", "Ludvig", "" ] ]
The most common gene regulation mechanism is when a transcription factor protein binds to a regulatory sequence to increase or decrease RNA transcription. However, transcription factors face two main challenges when searching for these sequences. First, they are vanishingly short relative to the genome length. Second, many nearly identical sequences are scattered across the genome, causing proteins to suspend the search. But as pointed out in a computational study of LacI regulation in Escherichia coli, such almost-targets may lower search times if considering DNA looping. In this paper, we explore if this also occurs over chromosome-wide distances. To this end, we developed a cross-scale computational framework that combines established facilitated-diffusion models for basepair-level search and a network model capturing chromosome-wide leaps. To make our model realistic, we used Hi-C data sets as a proxy for 3D proximity between long-ranged DNA segments and binding profiles for more than 100 transcription factors. Using our cross-scale model, we found that median search times to individual targets critically depend on a network metric combining node strength (sum of link weights) and local dissociation rates. Also, by randomizing these rates, we found that some actual 3D target configurations stand out as considerably faster or slower than their random counterparts. This finding hints that chromosomes' 3D structure funnels essential transcription factors to relevant DNA regions.
2401.10334
Longyue Wang
Geyan Ye, Xibao Cai, Houtim Lai, Xing Wang, Junhong Huang, Longyue Wang, Wei Liu, Xiangxiang Zeng
DrugAssist: A Large Language Model for Molecule Optimization
Geyan Ye and Xibao Cai are equal contributors; Longyue Wang is corresponding author
null
null
null
q-bio.QM cs.AI cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recently, the impressive performance of large language models (LLMs) on a wide range of tasks has attracted an increasing number of attempts to apply LLMs in drug discovery. However, molecule optimization, a critical task in the drug discovery pipeline, is currently an area that has seen little involvement from LLMs. Most existing approaches focus solely on capturing the underlying patterns in chemical structures provided by the data, without taking advantage of expert feedback. These non-interactive approaches overlook the fact that the drug discovery process is actually one that requires the integration of expert experience and iterative refinement. To address this gap, we propose DrugAssist, an interactive molecule optimization model which performs optimization through human-machine dialogue by leveraging LLMs' strong interactivity and generalizability. DrugAssist has achieved leading results in both single and multiple property optimization, simultaneously showcasing immense potential in transferability and iterative optimization. In addition, we publicly release a large instruction-based dataset called MolOpt-Instructions for fine-tuning language models on molecule optimization tasks. We have made our code and data publicly available at https://github.com/blazerye/DrugAssist, which we hope will pave the way for future research in LLMs' application for drug discovery.
[ { "created": "Thu, 28 Dec 2023 10:46:56 GMT", "version": "v1" } ]
2024-01-22
[ [ "Ye", "Geyan", "" ], [ "Cai", "Xibao", "" ], [ "Lai", "Houtim", "" ], [ "Wang", "Xing", "" ], [ "Huang", "Junhong", "" ], [ "Wang", "Longyue", "" ], [ "Liu", "Wei", "" ], [ "Zeng", "Xiangxiang", "" ] ]
Recently, the impressive performance of large language models (LLMs) on a wide range of tasks has attracted an increasing number of attempts to apply LLMs in drug discovery. However, molecule optimization, a critical task in the drug discovery pipeline, is currently an area that has seen little involvement from LLMs. Most existing approaches focus solely on capturing the underlying patterns in chemical structures provided by the data, without taking advantage of expert feedback. These non-interactive approaches overlook the fact that the drug discovery process is actually one that requires the integration of expert experience and iterative refinement. To address this gap, we propose DrugAssist, an interactive molecule optimization model which performs optimization through human-machine dialogue by leveraging LLMs' strong interactivity and generalizability. DrugAssist has achieved leading results in both single and multiple property optimization, simultaneously showcasing immense potential in transferability and iterative optimization. In addition, we publicly release a large instruction-based dataset called MolOpt-Instructions for fine-tuning language models on molecule optimization tasks. We have made our code and data publicly available at https://github.com/blazerye/DrugAssist, which we hope will pave the way for future research in LLMs' application for drug discovery.
2312.15320
Da Wu
Da Wu, Jingye Yang, Cong Liu, Tzung-Chien Hsieh, Elaine Marchi, Justin Blair, Peter Krawitz, Chunhua Weng, Wendy Chung, Gholson J. Lyon, Ian D. Krantz, Jennifer M. Kalish, Kai Wang
GestaltMML: Enhancing Rare Genetic Disease Diagnosis through Multimodal Machine Learning Combining Facial Images and Clinical Texts
Significant revisions
null
null
null
q-bio.QM cs.CV cs.LG cs.MM q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Individuals with suspected rare genetic disorders often undergo multiple clinical evaluations, imaging studies, laboratory tests and genetic tests, to find a possible answer over a prolonged period of time. Addressing this "diagnostic odyssey" thus has substantial clinical, psychosocial, and economic benefits. Many rare genetic diseases have distinctive facial features, which can be used by artificial intelligence algorithms to facilitate clinical diagnosis, in prioritizing candidate diseases to be further examined by lab tests or genetic assays, or in helping the phenotype-driven reinterpretation of genome/exome sequencing data. Existing methods using frontal facial photos were built on conventional Convolutional Neural Networks (CNNs), rely exclusively on facial images, and cannot capture non-facial phenotypic traits and demographic information essential for guiding accurate diagnoses. Here we introduce GestaltMML, a multimodal machine learning (MML) approach solely based on the Transformer architecture. It integrates facial images, demographic information (age, sex, ethnicity), and clinical notes (optionally, a list of Human Phenotype Ontology terms) to improve prediction accuracy. Furthermore, we also evaluated GestaltMML on a diverse range of datasets, including 528 diseases from the GestaltMatcher Database, several in-house datasets of Beckwith-Wiedemann syndrome (BWS, over-growth syndrome with distinct facial features), Sotos syndrome (overgrowth syndrome with overlapping features with BWS), NAA10-related neurodevelopmental syndrome, Cornelia de Lange syndrome (multiple malformation syndrome), and KBG syndrome (multiple malformation syndrome). Our results suggest that GestaltMML effectively incorporates multiple modalities of data, greatly narrowing candidate genetic diagnoses of rare diseases and may facilitate the reinterpretation of genome/exome sequencing data.
[ { "created": "Sat, 23 Dec 2023 18:40:25 GMT", "version": "v1" }, { "created": "Mon, 22 Apr 2024 00:41:34 GMT", "version": "v2" } ]
2024-04-23
[ [ "Wu", "Da", "" ], [ "Yang", "Jingye", "" ], [ "Liu", "Cong", "" ], [ "Hsieh", "Tzung-Chien", "" ], [ "Marchi", "Elaine", "" ], [ "Blair", "Justin", "" ], [ "Krawitz", "Peter", "" ], [ "Weng", "Chunhua", "" ], [ "Chung", "Wendy", "" ], [ "Lyon", "Gholson J.", "" ], [ "Krantz", "Ian D.", "" ], [ "Kalish", "Jennifer M.", "" ], [ "Wang", "Kai", "" ] ]
Individuals with suspected rare genetic disorders often undergo multiple clinical evaluations, imaging studies, laboratory tests and genetic tests, to find a possible answer over a prolonged period of time. Addressing this "diagnostic odyssey" thus has substantial clinical, psychosocial, and economic benefits. Many rare genetic diseases have distinctive facial features, which can be used by artificial intelligence algorithms to facilitate clinical diagnosis, in prioritizing candidate diseases to be further examined by lab tests or genetic assays, or in helping the phenotype-driven reinterpretation of genome/exome sequencing data. Existing methods using frontal facial photos were built on conventional Convolutional Neural Networks (CNNs), rely exclusively on facial images, and cannot capture non-facial phenotypic traits and demographic information essential for guiding accurate diagnoses. Here we introduce GestaltMML, a multimodal machine learning (MML) approach solely based on the Transformer architecture. It integrates facial images, demographic information (age, sex, ethnicity), and clinical notes (optionally, a list of Human Phenotype Ontology terms) to improve prediction accuracy. Furthermore, we also evaluated GestaltMML on a diverse range of datasets, including 528 diseases from the GestaltMatcher Database, several in-house datasets of Beckwith-Wiedemann syndrome (BWS, over-growth syndrome with distinct facial features), Sotos syndrome (overgrowth syndrome with overlapping features with BWS), NAA10-related neurodevelopmental syndrome, Cornelia de Lange syndrome (multiple malformation syndrome), and KBG syndrome (multiple malformation syndrome). Our results suggest that GestaltMML effectively incorporates multiple modalities of data, greatly narrowing candidate genetic diagnoses of rare diseases and may facilitate the reinterpretation of genome/exome sequencing data.
1212.0672
David Saakian
David B. Saakian, Laurent Schwartz
The three different phases in the dynamics of chemical reaction networks and their relationship to cancer
5 pages, 2 figures, EPL, in press
null
10.1209/0295-5075/100/68003
null
q-bio.OT cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the catalytic reaction model used in cell modeling. The reaction kinetics are defined through the energies of different species of molecules, which follow a random independent distribution. The related statistical physics model has three phases, and these three phases emerge in the dynamics: a fast dynamics phase, a slow dynamics phase and an ultra-slow dynamics phase. The phenomenon we found is rather general and does not depend on the details of the model. We assume as a hypothesis that the transition between these phases (degrees of glassiness) is related to cancer. The imbalance in the rate of processes between key aspects of the cell (gene regulation, protein-protein interaction, metabolic networks) creates a change in the fine tuning between these key aspects, affects the logic of the cell and initiates cancer. It is probable that cancer is a change of phase resulting from increased and deregulated metabolic reactions.
[ { "created": "Tue, 4 Dec 2012 10:50:16 GMT", "version": "v1" } ]
2015-06-12
[ [ "Saakian", "David B.", "" ], [ "Schwartz", "Laurent", "" ] ]
We investigate the catalytic reaction model used in cell modeling. The reaction kinetics are defined through the energies of different species of molecules, which follow a random independent distribution. The related statistical physics model has three phases, and these three phases emerge in the dynamics: a fast dynamics phase, a slow dynamics phase and an ultra-slow dynamics phase. The phenomenon we found is rather general and does not depend on the details of the model. We assume as a hypothesis that the transition between these phases (degrees of glassiness) is related to cancer. The imbalance in the rate of processes between key aspects of the cell (gene regulation, protein-protein interaction, metabolic networks) creates a change in the fine tuning between these key aspects, affects the logic of the cell and initiates cancer. It is probable that cancer is a change of phase resulting from increased and deregulated metabolic reactions.
2001.03781
Arfeen Khalid
Arfeen Khalid
BioMETA: A multiple specification parameter estimation system for stochastic biochemical models
null
null
null
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The inherent behavioral variability exhibited by stochastic biochemical systems makes it a challenging task for human experts to manually analyze them. Computational modeling of such systems helps in investigating and predicting the behaviors of the underlying biochemical processes but at the same time introduces the presence of several unknown parameters. A key challenge faced in this scenario is to determine the values of these unknown parameters against known behavioral specifications. The solutions that have been presented so far estimate the parameters of a given model against a single specification whereas a correct model is expected to satisfy all the behavioral specifications when instantiated with a single set of parameter values. We present a new method, BioMETA, to address this problem such that a single set of parameter values causes a parameterized stochastic biochemical model to satisfy all the given probabilistic temporal logic behavioral specifications simultaneously. Our method is based on combining a multiple hypothesis testing based statistical model checking technique with simulated annealing search to look for a single set of parameter values so that the given parameterized model satisfies multiple probabilistic behavioral specifications. We study two stochastic rule-based models of biochemical receptors, namely, Fc$\epsilon$RI and T-cell as our benchmarks to evaluate the usefulness of the presented method. Our experimental results successfully estimate $26$ parameters of Fc$\epsilon$RI and $29$ parameters of T-cell receptor model against three probabilistic temporal logic behavioral specifications each.
[ { "created": "Sun, 5 Jan 2020 03:04:53 GMT", "version": "v1" } ]
2020-01-14
[ [ "Khalid", "Arfeen", "" ] ]
The inherent behavioral variability exhibited by stochastic biochemical systems makes it a challenging task for human experts to manually analyze them. Computational modeling of such systems helps in investigating and predicting the behaviors of the underlying biochemical processes but at the same time introduces the presence of several unknown parameters. A key challenge faced in this scenario is to determine the values of these unknown parameters against known behavioral specifications. The solutions that have been presented so far estimate the parameters of a given model against a single specification whereas a correct model is expected to satisfy all the behavioral specifications when instantiated with a single set of parameter values. We present a new method, BioMETA, to address this problem such that a single set of parameter values causes a parameterized stochastic biochemical model to satisfy all the given probabilistic temporal logic behavioral specifications simultaneously. Our method is based on combining a multiple hypothesis testing based statistical model checking technique with simulated annealing search to look for a single set of parameter values so that the given parameterized model satisfies multiple probabilistic behavioral specifications. We study two stochastic rule-based models of biochemical receptors, namely, Fc$\epsilon$RI and T-cell as our benchmarks to evaluate the usefulness of the presented method. Our experimental results successfully estimate $26$ parameters of Fc$\epsilon$RI and $29$ parameters of T-cell receptor model against three probabilistic temporal logic behavioral specifications each.
2407.00560
Wenda Wang
Wenda Wang, Jiaqi Zhai, He Huang, Xinqi Gong
DCI: An Accurate Quality Assessment Criteria for Protein Complex Structure Models
null
null
null
null
q-bio.BM math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The structure of proteins is the basis for studying protein function and drug design. The emergence of AlphaFold 2 has greatly promoted the prediction of protein 3D structures, and it is of great significance to give an overall and accurate evaluation of the predicted models, especially complex models. Among the existing methods for evaluating multimer structures, DockQ is the most commonly used. However, as a metric tailored to complex docking, DockQ cannot provide a unique and accurate evaluation in the non-docking situation. Therefore, it is necessary to propose an evaluation strategy that can directly evaluate the whole complex without limitation and achieve good results. In this work, we propose DCI score, a new evaluation strategy for protein complex structure models that is based only on the distance map and CI (contact-interface) map. DCI focuses on the prediction accuracy of the contact interface based on an overall evaluation of the complex structure, is not inferior to DockQ in evaluation accuracy according to the CAPRI classification, and is able to handle the non-docking situation better than DockQ. Besides, we calculated the DCI score on CASP datasets and compared it with the CASP official assessment, obtaining good results. In addition, we found that DCI can better evaluate the overall structure deviation caused by interface prediction errors in the multi-chain case. Our DCI is available at \url{https://gitee.com/WendaWang/DCI-score.git}, and the online server is available at \url{http://mialab.ruc.edu.cn/DCIServer/}.
[ { "created": "Sun, 30 Jun 2024 02:02:04 GMT", "version": "v1" } ]
2024-07-02
[ [ "Wang", "Wenda", "" ], [ "Zhai", "Jiaqi", "" ], [ "Huang", "He", "" ], [ "Gong", "Xinqi", "" ] ]
The structure of proteins is the basis for studying protein function and drug design. The emergence of AlphaFold 2 has greatly promoted the prediction of protein 3D structures, and it is of great significance to give an overall and accurate evaluation of the predicted models, especially complex models. Among the existing methods for evaluating multimer structures, DockQ is the most commonly used. However, as a metric tailored to complex docking, DockQ cannot provide a unique and accurate evaluation in the non-docking situation. Therefore, it is necessary to propose an evaluation strategy that can directly evaluate the whole complex without limitation and achieve good results. In this work, we propose DCI score, a new evaluation strategy for protein complex structure models that is based only on the distance map and CI (contact-interface) map. DCI focuses on the prediction accuracy of the contact interface based on an overall evaluation of the complex structure, is not inferior to DockQ in evaluation accuracy according to the CAPRI classification, and is able to handle the non-docking situation better than DockQ. Besides, we calculated the DCI score on CASP datasets and compared it with the CASP official assessment, obtaining good results. In addition, we found that DCI can better evaluate the overall structure deviation caused by interface prediction errors in the multi-chain case. Our DCI is available at \url{https://gitee.com/WendaWang/DCI-score.git}, and the online server is available at \url{http://mialab.ruc.edu.cn/DCIServer/}.
1302.5917
N Khusnutdinov
Y. Suleymanov, F. Gafarov, N. Khusnutdinov
Modeling of interstitial branching of axonal networks
12 pages, 7 figures
1. J. Integr. Neurosci. 12, 1-14, (2013)
10.1142/S0219635213500064
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A single axon can generate branches connecting with many synaptic targets. The process of branching is very important for making connections in the central nervous system. Interstitial branching along the primary axon shaft occurs during nervous system development. A growing axon pauses in its movement and leaves active points behind its terminal; new branches appear from these points. We suggest a mathematical model to describe and investigate the branching process of neural networks. The model under consideration describes neural network growth in which the concentration of axon guidance molecules governs axon growth. We model interstitial branching from the axon shaft. Numerical simulations show that, within the model framework, the resulting axonal networks are similar to neural networks.
[ { "created": "Sun, 24 Feb 2013 15:53:19 GMT", "version": "v1" } ]
2013-04-25
[ [ "Suleymanov", "Y.", "" ], [ "Gafarov", "F.", "" ], [ "Khusnutdinov", "N.", "" ] ]
A single axon can generate branches connecting with many synaptic targets. The process of branching is very important for making connections in the central nervous system. Interstitial branching along the primary axon shaft occurs during nervous system development. A growing axon pauses in its movement and leaves active points behind its terminal; new branches appear from these points. We suggest a mathematical model to describe and investigate the branching process of neural networks. The model under consideration describes neural network growth in which the concentration of axon guidance molecules governs axon growth. We model interstitial branching from the axon shaft. Numerical simulations show that, within the model framework, the resulting axonal networks are similar to neural networks.
2002.04402
Lasse Frey
Lasse Jannis Frey, David Vorl\"ander, Detlev Rasch, Sven Meinen, Bernhard M\"uller, Torsten Mayr, Andreas Dietzel, Jan-Hendrik Grosch, Rainer Krull
Defining mass transfer in a capillary wave micro-bioreactor
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For high-throughput cell culture and associated analytics, droplet-based cultivation systems open up opportunities for parallelization and rapid data generation. In contrast to microfluidics with continuous flow, sessile droplet approaches enhance the flexibility for fluid manipulation, usually with less operational effort. Generating biologically favorable conditions and promoting cell growth in a droplet, however, is particularly challenging due to mass transfer limitations, which have to be solved by implementing an effective mixing technique. Here, capillary waves induced by vertical oscillation are used to mix inside a sessile droplet micro-bioreactor (MBR) system, avoiding additional moving parts inside the fluid. Depending on the excitation frequency, different patterns are formed on the oscillating liquid surface, which are described by a model of a vibrated sessile droplet. Analyzing mixing times and oxygen transport into the liquid, a strong dependency of mass transfer on the oscillation parameters, especially the excitation frequency, is demonstrated. Oscillations at distinct capillary wave resonant frequencies lead to rapid homogenization, with mixing times of 2 s and volumetric liquid-phase mass transfer coefficients of more than 340 h-1. This shows that the mass transfer in a droplet MBR can be specifically controlled via capillary waves, which is subsequently demonstrated for cultivations of Escherichia coli BL21 cells. Therefore, the presented MBR, in combination with vertical oscillation mixing for intensified mass transfer, is a promising tool for highly parallel cultivation and data generation.
[ { "created": "Tue, 4 Feb 2020 09:23:23 GMT", "version": "v1" } ]
2020-02-12
[ [ "Frey", "Lasse Jannis", "" ], [ "Vorländer", "David", "" ], [ "Rasch", "Detlev", "" ], [ "Meinen", "Sven", "" ], [ "Müller", "Bernhard", "" ], [ "Mayr", "Torsten", "" ], [ "Dietzel", "Andreas", "" ], [ "Grosch", "Jan-Hendrik", "" ], [ "Krull", "Rainer", "" ] ]
For high-throughput cell culture and associated analytics, droplet-based cultivation systems open up opportunities for parallelization and rapid data generation. In contrast to microfluidics with continuous flow, sessile droplet approaches enhance the flexibility for fluid manipulation, usually with less operational effort. Generating biologically favorable conditions and promoting cell growth in a droplet, however, is particularly challenging due to mass transfer limitations, which have to be solved by implementing an effective mixing technique. Here, capillary waves induced by vertical oscillation are used to mix inside a sessile droplet micro-bioreactor (MBR) system, avoiding additional moving parts inside the fluid. Depending on the excitation frequency, different patterns are formed on the oscillating liquid surface, which are described by a model of a vibrated sessile droplet. Analyzing mixing times and oxygen transport into the liquid, a strong dependency of mass transfer on the oscillation parameters, especially the excitation frequency, is demonstrated. Oscillations at distinct capillary wave resonant frequencies lead to rapid homogenization, with mixing times of 2 s and volumetric liquid-phase mass transfer coefficients of more than 340 h-1. This shows that the mass transfer in a droplet MBR can be specifically controlled via capillary waves, which is subsequently demonstrated for cultivations of Escherichia coli BL21 cells. Therefore, the presented MBR, in combination with vertical oscillation mixing for intensified mass transfer, is a promising tool for highly parallel cultivation and data generation.
1201.1107
Brajesh Kumar JHA
Brajesh Kumar Jha, Neeru Adlakha and M. N. Mehta
Finite Volume Model to Study the Effect of ER flux on Cytosolic Calcium Distribution in Astrocytes
7 pages, 9 figures, journal; ISSN 2151-9617 https://sites.google.com/site/journalofcomputing; http://www.journalofcomputing.org
Journal of Computing, Volume 3, Issue 11, 2011, 74-80
null
null
q-bio.CB math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most of the intracellular events involved in the initiation and propagation phases of this process have now been identified in astrocytes. The control of the spread of intracellular calcium signaling has been demonstrated to occur at several levels, including IP3 receptors and intracellular Ca2+ stores such as the endoplasmic reticulum (ER). Normal and pathological situations that affect one or several of these steps can be predicted to influence astrocytic calcium waves. In view of the above, a mathematical model is developed to study the interdependence of all the important parameters, like the diffusion coefficient and influx, over the [Ca2+] profile. The model incorporates the ER fluxes J_leak, J_pump and J_chan. The finite volume method is employed to solve the problem. A program has been developed in MATLAB 7.5 for the entire problem and simulated on an AMD Turion 32-bit machine to compute the numerical results. Thus, a mathematical model is developed to study calcium transport between the cytosol and the ER.
[ { "created": "Thu, 5 Jan 2012 10:35:40 GMT", "version": "v1" } ]
2013-01-08
[ [ "Jha", "Brajesh Kumar", "" ], [ "Adlakha", "Neeru", "" ], [ "Mehta", "M. N.", "" ] ]
Most of the intracellular events involved in the initiation and propagation phases of this process have now been identified in astrocytes. The control of the spread of intracellular calcium signaling has been demonstrated to occur at several levels, including IP3 receptors and intracellular Ca2+ stores such as the endoplasmic reticulum (ER). Normal and pathological situations that affect one or several of these steps can be predicted to influence astrocytic calcium waves. In view of the above, a mathematical model is developed to study the interdependence of all the important parameters, like the diffusion coefficient and influx, over the [Ca2+] profile. The model incorporates the ER fluxes J_leak, J_pump and J_chan. The finite volume method is employed to solve the problem. A program has been developed in MATLAB 7.5 for the entire problem and simulated on an AMD Turion 32-bit machine to compute the numerical results. Thus, a mathematical model is developed to study calcium transport between the cytosol and the ER.
2306.17566
Ruriko Yoshida
Ruriko Yoshida
Imputing phylogenetic trees using tropical polytopes over the space of phylogenetic trees
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
When we apply comparative phylogenetic analyses to genome data, it is a well-known problem and challenge that some of the given species (or taxa) often have missing genes. In such a case, we have to impute a missing part of a gene tree from a sample of gene trees. In this short paper we propose a novel method to infer a missing part of a phylogenetic tree using an analogue of classical linear regression in the setting of tropical geometry. In our approach, we consider a tropical polytope, a convex hull with respect to the tropical metric, closest to the data points. We show a condition under which we can guarantee that an estimated tree from our method has at most four Robinson-Foulds (RF) distance from the ground truth, and computational experiments with simulated data show that our method works well.
[ { "created": "Fri, 30 Jun 2023 11:39:48 GMT", "version": "v1" }, { "created": "Mon, 3 Jul 2023 18:55:04 GMT", "version": "v2" } ]
2023-07-06
[ [ "Yoshida", "Ruriko", "" ] ]
When we apply comparative phylogenetic analyses to genome data, it is a well-known problem and challenge that some of the given species (or taxa) often have missing genes. In such a case, we have to impute a missing part of a gene tree from a sample of gene trees. In this short paper we propose a novel method to infer a missing part of a phylogenetic tree using an analogue of classical linear regression in the setting of tropical geometry. In our approach, we consider a tropical polytope, a convex hull with respect to the tropical metric, closest to the data points. We show a condition under which we can guarantee that an estimated tree from our method has at most four Robinson-Foulds (RF) distance from the ground truth, and computational experiments with simulated data show that our method works well.
2305.04120
Cong Fu
Cong Fu, Keqiang Yan, Limei Wang, Wing Yee Au, Michael McThrow, Tao Komikado, Koji Maruhashi, Kanji Uchino, Xiaoning Qian, Shuiwang Ji
A Latent Diffusion Model for Protein Structure Generation
Accepted by the Second Learning on Graphs Conference (LoG 2023)
null
null
null
q-bio.BM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proteins are complex biomolecules that perform a variety of crucial functions within living organisms. Designing and generating novel proteins can pave the way for many future synthetic biology applications, including drug discovery. However, it remains a challenging computational task due to the large modeling space of protein structures. In this study, we propose a latent diffusion model that can reduce the complexity of protein modeling while flexibly capturing the distribution of natural protein structures in a condensed latent space. Specifically, we propose an equivariant protein autoencoder that embeds proteins into a latent space and then uses an equivariant diffusion model to learn the distribution of the latent protein representations. Experimental results demonstrate that our method can effectively generate novel protein backbone structures with high designability and efficiency. The code will be made publicly available at https://github.com/divelab/AIRS/tree/main/OpenProt/LatentDiff
[ { "created": "Sat, 6 May 2023 19:10:19 GMT", "version": "v1" }, { "created": "Wed, 6 Dec 2023 23:53:20 GMT", "version": "v2" } ]
2023-12-08
[ [ "Fu", "Cong", "" ], [ "Yan", "Keqiang", "" ], [ "Wang", "Limei", "" ], [ "Au", "Wing Yee", "" ], [ "McThrow", "Michael", "" ], [ "Komikado", "Tao", "" ], [ "Maruhashi", "Koji", "" ], [ "Uchino", "Kanji", "" ], [ "Qian", "Xiaoning", "" ], [ "Ji", "Shuiwang", "" ] ]
Proteins are complex biomolecules that perform a variety of crucial functions within living organisms. Designing and generating novel proteins can pave the way for many future synthetic biology applications, including drug discovery. However, it remains a challenging computational task due to the large modeling space of protein structures. In this study, we propose a latent diffusion model that can reduce the complexity of protein modeling while flexibly capturing the distribution of natural protein structures in a condensed latent space. Specifically, we propose an equivariant protein autoencoder that embeds proteins into a latent space and then uses an equivariant diffusion model to learn the distribution of the latent protein representations. Experimental results demonstrate that our method can effectively generate novel protein backbone structures with high designability and efficiency. The code will be made publicly available at https://github.com/divelab/AIRS/tree/main/OpenProt/LatentDiff
2111.05882
Mohamed Amgad
Lantian Zhang (1 and 2), Mohamed Amgad (2), Lee A.D. Cooper (2) ((1) North Shore Country Day, Winnetka, IL, USA, (2) Department of Pathology, Northwestern University, Chicago, IL, USA)
A Histopathology Study Comparing Contrastive Semi-Supervised and Fully Supervised Learning
7 pages, 4 figures, 4 tables
null
null
null
q-bio.QM cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Data labeling is often the most challenging task when developing computational pathology models. Pathologist participation is necessary to generate accurate labels, and the limitations on pathologist time and the demand for large, labeled datasets have led to research in areas including weakly supervised learning using patient-level labels, machine-assisted annotation and active learning. In this paper we explore self-supervised learning to reduce labeling burdens in computational pathology. We explore this in the context of classification of breast cancer tissue using the Barlow Twins approach, and we compare self-supervision with alternatives like pre-trained networks in low-data scenarios. For the task explored in this paper, we find that ImageNet pre-trained networks largely outperform the self-supervised representations obtained using Barlow Twins.
[ { "created": "Wed, 10 Nov 2021 19:04:08 GMT", "version": "v1" } ]
2021-11-12
[ [ "Zhang", "Lantian", "", "1 and 2" ], [ "Amgad", "Mohamed", "" ], [ "Cooper", "Lee A. D.", "" ] ]
Data labeling is often the most challenging task when developing computational pathology models. Pathologist participation is necessary to generate accurate labels, and the limitations on pathologist time and the demand for large, labeled datasets have led to research in areas including weakly supervised learning using patient-level labels, machine-assisted annotation and active learning. In this paper we explore self-supervised learning to reduce labeling burdens in computational pathology. We explore this in the context of classification of breast cancer tissue using the Barlow Twins approach, and we compare self-supervision with alternatives like pre-trained networks in low-data scenarios. For the task explored in this paper, we find that ImageNet pre-trained networks largely outperform the self-supervised representations obtained using Barlow Twins.
0910.5057
Christian Mulder PhD
A. Jan Hendriks, Christian Mulder
Delayed Logistic and Rosenzweig - MacArthur Models with Allometric Parameter Setting Estimate Population Cycles Well
15 pages, 3 figures, 4 tables, 50 references
Ecological Complexity 9 (2012)
10.1016/j.ecocom.2011.12.001
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Context. So far, theoretical explanations for body-size patterns in periodic population dynamics have received little attention. In particular, tuning and testing of allometric models on empirical data and regressions has not been carried out yet. Here, oscillations expected from a one-species (delayed logistic) and a two-species (Rosenzweig-MacArthur) model were compared to cycles observed in laboratory experiments and field surveys for a wide range of invertebrates and vertebrates. The parameters in the equations were linked to body mass, using a consistent set of allometric relationships that was calibrated on 230 regressions. Oscillation period and amplitude predicted by the models were validated with data taken from the literature. Results. The collected data showed that cycle times of herbivores scaled to species body mass with a slope up to 1/4, as expected from the models. With the exception of aquatic herbi-detritivores, intercepts were observed at the level calculated by the two-species model. Remarkably, oscillation periods were size-independent for predatory invertebrates, fishes, birds and mammals. Average cycles were of 4 to 5 years, similar to those predicted by the one-species model with a size-independent delay of 1 year. The consistent difference between herbivores and carnivores could be explained by the models from the small parameter space for consumer-resource cycles in generalist predators. As expected, amplitudes recorded in the field did not scale to size. Observed oscillation periods were generally within a factor of about 2 from the values expected from the models. This demonstrates that a set of slopes and intercepts for age and density parameters applicable to a wide range of species allows a reasonable estimate of independently measured cycle times.
[ { "created": "Tue, 27 Oct 2009 08:25:57 GMT", "version": "v1" } ]
2013-05-24
[ [ "Hendriks", "A. Jan", "" ], [ "Mulder", "Christian", "" ] ]
Context. So far, theoretical explanations for body-size patterns in periodic population dynamics have received little attention. In particular, tuning and testing of allometric models on empirical data and regressions has not been carried out yet. Here, oscillations expected from a one-species (delayed logistic) and a two-species (Rosenzweig-MacArthur) model were compared to cycles observed in laboratory experiments and field surveys for a wide range of invertebrates and vertebrates. The parameters in the equations were linked to body mass, using a consistent set of allometric relationships that was calibrated on 230 regressions. Oscillation period and amplitude predicted by the models were validated with data taken from the literature. Results. The collected data showed that cycle times of herbivores scaled to species body mass with a slope up to 1/4, as expected from the models. With the exception of aquatic herbi-detritivores, intercepts were observed at the level calculated by the two-species model. Remarkably, oscillation periods were size-independent for predatory invertebrates, fishes, birds and mammals. Average cycles were of 4 to 5 years, similar to those predicted by the one-species model with a size-independent delay of 1 year. The consistent difference between herbivores and carnivores could be explained by the models from the small parameter space for consumer-resource cycles in generalist predators. As expected, amplitudes recorded in the field did not scale to size. Observed oscillation periods were generally within a factor of about 2 from the values expected from the models. This demonstrates that a set of slopes and intercepts for age and density parameters applicable to a wide range of species allows a reasonable estimate of independently measured cycle times.
q-bio/0502038
Rui Dilao
Rui Dilao and Joaquim Sainhas
Modelling butterfly wing eyespot patterns
12 pages, 3 figures
null
10.1098/rspb.2004.2761
M03-49
q-bio.TO
null
Eyespots are concentric motifs with contrasting colours on butterfly wings. Eyespots have intra- and inter-specific visual signalling functions with adaptive and selective roles. We propose a reaction-diffusion model that accounts for eyespot development. The model considers two diffusive morphogens and three non-diffusive pigment precursors. The first morphogen is produced in the focus and determines the differentiation of the first eyespot ring. A second morphogen is then produced, modifying the chromatic properties of the wing background pigment precursor, inducing the differentiation of a second ring. The model simulates the general structural organisation of eyespots, their phenotypic plasticity and seasonal variability, and predicts effects from microsurgical manipulations on pupal wings as reported in the literature.
[ { "created": "Thu, 24 Feb 2005 23:18:53 GMT", "version": "v1" } ]
2007-05-23
[ [ "Dilao", "Rui", "" ], [ "Sainhas", "Joaquim", "" ] ]
Eyespots are concentric motifs with contrasting colours on butterfly wings. Eyespots have intra- and inter-specific visual signalling functions with adaptive and selective roles. We propose a reaction-diffusion model that accounts for eyespot development. The model considers two diffusive morphogens and three non-diffusive pigment precursors. The first morphogen is produced in the focus and determines the differentiation of the first eyespot ring. A second morphogen is then produced, modifying the chromatic properties of the wing background pigment precursor, inducing the differentiation of a second ring. The model simulates the general structural organisation of eyespots, their phenotypic plasticity and seasonal variability, and predicts effects from microsurgical manipulations on pupal wings as reported in the literature.
2311.11004
Uriah Israel
Uriah Israel, Markus Marks, Rohit Dilip, Qilin Li, Morgan Schwartz, Elora Pradhan, Edward Pao, Shenyi Li, Alexander Pearson-Goulart, Pietro Perona, Georgia Gkioxari, Ross Barnowski, Yisong Yue, David Van Valen
A Foundation Model for Cell Segmentation
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Cells are the fundamental unit of biological organization, and identifying them in imaging data - cell segmentation - is a critical task for various cellular imaging experiments. While deep learning methods have led to substantial progress on this problem, models that have seen wide use are specialist models that work well for specific domains. Methods that have learned the general notion of "what is a cell" and can identify them across different domains of cellular imaging data have proven elusive. In this work, we present CellSAM, a foundation model for cell segmentation that generalizes across diverse cellular imaging data. CellSAM builds on top of the Segment Anything Model (SAM) by developing a prompt engineering approach to mask generation. We train an object detector, CellFinder, to automatically detect cells and prompt SAM to generate segmentations. We show that this approach allows a single model to achieve state-of-the-art performance for segmenting images of mammalian cells (in tissues and cell culture), yeast, and bacteria collected with various imaging modalities. To enable accessibility, we integrate CellSAM into DeepCell Label to further accelerate human-in-the-loop labeling strategies for cellular imaging data. A deployed version of CellSAM is available at https://label-dev.deepcell.org/.
[ { "created": "Sat, 18 Nov 2023 07:55:09 GMT", "version": "v1" } ]
2023-11-21
[ [ "Israel", "Uriah", "" ], [ "Marks", "Markus", "" ], [ "Dilip", "Rohit", "" ], [ "Li", "Qilin", "" ], [ "Schwartz", "Morgan", "" ], [ "Pradhan", "Elora", "" ], [ "Pao", "Edward", "" ], [ "Li", "Shenyi", "" ], [ "Pearson-Goulart", "Alexander", "" ], [ "Perona", "Pietro", "" ], [ "Gkioxari", "Georgia", "" ], [ "Barnowski", "Ross", "" ], [ "Yue", "Yisong", "" ], [ "Van Valen", "David", "" ] ]
Cells are the fundamental unit of biological organization, and identifying them in imaging data - cell segmentation - is a critical task for various cellular imaging experiments. While deep learning methods have led to substantial progress on this problem, models that have seen wide use are specialist models that work well for specific domains. Methods that have learned the general notion of "what is a cell" and can identify them across different domains of cellular imaging data have proven elusive. In this work, we present CellSAM, a foundation model for cell segmentation that generalizes across diverse cellular imaging data. CellSAM builds on top of the Segment Anything Model (SAM) by developing a prompt engineering approach to mask generation. We train an object detector, CellFinder, to automatically detect cells and prompt SAM to generate segmentations. We show that this approach allows a single model to achieve state-of-the-art performance for segmenting images of mammalian cells (in tissues and cell culture), yeast, and bacteria collected with various imaging modalities. To enable accessibility, we integrate CellSAM into DeepCell Label to further accelerate human-in-the-loop labeling strategies for cellular imaging data. A deployed version of CellSAM is available at https://label-dev.deepcell.org/.
1907.11885
Kai Qiao
Kai Qiao, Chi Zhang, Jian Chen, Linyuan Wang, Li Tong, Bin Yan
Effective and efficient ROI-wise visual encoding using an end-to-end CNN regression model and selective optimization
under review in Computational Intelligence and Neuroscience
null
null
null
q-bio.NC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, visual encoding based on functional magnetic resonance imaging (fMRI) has achieved many advances with the rapid development of deep network computation. A visual encoding model aims to predict brain activity in response to presented image stimuli. Currently, visual encoding is accomplished mainly by first extracting image features through a convolutional neural network (CNN) model pre-trained on a computer vision task, and then training a linear regression model to map a specific layer of CNN features to each voxel, namely voxel-wise encoding. However, with such a two-step model it is essentially hard to determine which features are well linearly matched to previously unseen fMRI data, given the limited understanding of human visual representation. Since computer vision is closely related to human vision, we proposed the end-to-end convolution regression model (ETECRM) in a region of interest (ROI)-wise manner to accomplish effective and efficient visual encoding. The end-to-end manner was introduced to make the model automatically learn better-matching features to improve encoding performance. The ROI-wise manner was used to improve encoding efficiency for many voxels. In addition, we designed selective optimization, including self-adapting weight learning, a weighted correlation loss, and noise regularization, to avoid interference from ineffective voxels in ROI-wise encoding. Experiments demonstrated that the proposed model obtained better prediction accuracy than two-step encoding models. Comparative analysis implied that the end-to-end manner and large volumes of fMRI data may drive the future development of visual encoding.
[ { "created": "Sat, 27 Jul 2019 10:09:05 GMT", "version": "v1" } ]
2019-07-30
[ [ "Qiao", "Kai", "" ], [ "Zhang", "Chi", "" ], [ "Chen", "Jian", "" ], [ "Wang", "Linyuan", "" ], [ "Tong", "Li", "" ], [ "Yan", "Bin", "" ] ]
Recently, visual encoding based on functional magnetic resonance imaging (fMRI) has achieved many advances with the rapid development of deep network computation. A visual encoding model aims to predict brain activity in response to presented image stimuli. Currently, visual encoding is accomplished mainly by first extracting image features through a convolutional neural network (CNN) model pre-trained on a computer vision task, and then training a linear regression model to map a specific layer of CNN features to each voxel, namely voxel-wise encoding. However, with such a two-step model it is essentially hard to determine which features are well linearly matched to previously unseen fMRI data, given the limited understanding of human visual representation. Since computer vision is closely related to human vision, we proposed the end-to-end convolution regression model (ETECRM) in a region of interest (ROI)-wise manner to accomplish effective and efficient visual encoding. The end-to-end manner was introduced to make the model automatically learn better-matching features to improve encoding performance. The ROI-wise manner was used to improve encoding efficiency for many voxels. In addition, we designed selective optimization, including self-adapting weight learning, a weighted correlation loss, and noise regularization, to avoid interference from ineffective voxels in ROI-wise encoding. Experiments demonstrated that the proposed model obtained better prediction accuracy than two-step encoding models. Comparative analysis implied that the end-to-end manner and large volumes of fMRI data may drive the future development of visual encoding.
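The two-step encoding baseline the ETECRM abstract contrasts itself against can be sketched in a few lines: a precomputed (here hypothetical) CNN feature regressor is mapped to each voxel's response with a separate ridge fit. This is a minimal single-feature illustration of voxel-wise encoding, not the paper's model; the function names and the scalar closed form are assumptions for the sketch.

```python
def ridge_1d(f, y, lam=0.1):
    """Closed-form ridge fit for a single feature regressor:
    w = sum(f*y) / (sum(f^2) + lam)."""
    return sum(a * b for a, b in zip(f, y)) / (sum(a * a for a in f) + lam)

def voxelwise_encode(features, voxels, lam=0.1):
    """Two-step encoding sketch: one independent ridge weight per voxel,
    mapping a precomputed feature time course to that voxel's response."""
    return [ridge_1d(features, v, lam) for v in voxels]
```

The end-to-end ROI-wise approach of the paper replaces this per-voxel linear step with a jointly trained convolutional regressor over all voxels in an ROI.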
0809.1605
P. Grassberger
Alexander Kraskov and Peter Grassberger
MIC: Mutual Information based hierarchical Clustering
22 pages, including 7 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clustering is a concept used in a huge variety of applications. We review a conceptually very simple algorithm for hierarchical clustering called in the following the {\it mutual information clustering} (MIC) algorithm. It uses mutual information (MI) as a similarity measure and exploits its grouping property: The MI between three objects X, Y, and Z is equal to the sum of the MI between X and Y, plus the MI between Z and the combined object (XY). We use MIC both in the Shannon (probabilistic) version of information theory, where the "objects" are probability distributions represented by random samples, and in the Kolmogorov (algorithmic) version, where the "objects" are symbol sequences. We apply our method to the construction of phylogenetic trees from mitochondrial DNA sequences and we reconstruct the fetal ECG from the output of independent components analysis (ICA) applied to the ECG of a pregnant woman.
[ { "created": "Tue, 9 Sep 2008 16:50:09 GMT", "version": "v1" } ]
2008-09-10
[ [ "Kraskov", "Alexander", "" ], [ "Grassberger", "Peter", "" ] ]
Clustering is a concept used in a huge variety of applications. We review a conceptually very simple algorithm for hierarchical clustering called in the following the {\it mutual information clustering} (MIC) algorithm. It uses mutual information (MI) as a similarity measure and exploits its grouping property: The MI between three objects X, Y, and Z is equal to the sum of the MI between X and Y, plus the MI between Z and the combined object (XY). We use MIC both in the Shannon (probabilistic) version of information theory, where the "objects" are probability distributions represented by random samples, and in the Kolmogorov (algorithmic) version, where the "objects" are symbol sequences. We apply our method to the construction of phylogenetic trees from mitochondrial DNA sequences and we reconstruct the fetal ECG from the output of independent components analysis (ICA) applied to the ECG of a pregnant woman.
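The grouping property described in the MIC abstract lends itself to a short sketch: use empirical Shannon MI between symbol sequences as the similarity, and merge the most similar pair, representing the combined object (XY) as the position-wise pairing of its members. This is a toy illustration of the idea, not the authors' implementation; names and the greedy merge order are assumptions.

```python
from collections import Counter
from math import log2

def mutual_info(x, y):
    """Empirical Shannon MI (bits) between two equal-length symbol sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def mic(objects):
    """Greedy hierarchical clustering: repeatedly merge the pair with the
    highest MI; the merged object is the position-wise join of its members
    (exploiting the grouping property of MI)."""
    clusters = {k: list(v) for k, v in objects.items()}
    tree = []
    while len(clusters) > 1:
        keys = list(clusters)
        i, j = max(((a, b) for ai, a in enumerate(keys) for b in keys[ai + 1:]),
                   key=lambda p: mutual_info(clusters[p[0]], clusters[p[1]]))
        tree.append((i, j))
        # combined object (XY): pair up the symbols position-wise
        clusters[i + j] = list(zip(clusters[i], clusters[j]))
        del clusters[i], clusters[j]
    return tree
```

With two identical sequences and one independent one, the identical pair is merged first, as the grouping argument predicts.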
1211.2878
Erick Chastain
Erick Chastain, Rustom Antia, Carl T. Bergstrom
Defensive complexity and the phylogenetic conservation of immune control
arXiv admin note: substantial text overlap with arXiv:1203.4601
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One strategy for winning a coevolutionary struggle is to evolve rapidly. Most of the literature on host-pathogen coevolution focuses on this phenomenon, and looks for consequent evidence of coevolutionary arms races. An alternative strategy, less often considered in the literature, is to deter rapid evolutionary change by the opponent. To study how this can be done, we construct an evolutionary game between a controller that must process information, and an adversary that can tamper with this information processing. In this game, a species can foil its antagonist by processing information in a way that is hard for the antagonist to manipulate. We show that the structure of the information processing system induces a fitness landscape on which the adversary population evolves. Complex processing logic can carve long, deep fitness valleys that slow adaptive evolution in the adversary population. We suggest that this type of defensive complexity on the part of the vertebrate adaptive immune system may be an important element of coevolutionary dynamics between pathogens and their vertebrate hosts. Furthermore, we cite evidence that the immune control logic is phylogenetically conserved in mammalian lineages. Thus our model of defensive complexity suggests a new hypothesis for the lower rates of evolution for immune control logic compared to other immune structures.
[ { "created": "Tue, 13 Nov 2012 03:26:45 GMT", "version": "v1" } ]
2012-11-14
[ [ "Chastain", "Erick", "" ], [ "Antia", "Rustom", "" ], [ "Bergstrom", "Carl T.", "" ] ]
One strategy for winning a coevolutionary struggle is to evolve rapidly. Most of the literature on host-pathogen coevolution focuses on this phenomenon, and looks for consequent evidence of coevolutionary arms races. An alternative strategy, less often considered in the literature, is to deter rapid evolutionary change by the opponent. To study how this can be done, we construct an evolutionary game between a controller that must process information, and an adversary that can tamper with this information processing. In this game, a species can foil its antagonist by processing information in a way that is hard for the antagonist to manipulate. We show that the structure of the information processing system induces a fitness landscape on which the adversary population evolves. Complex processing logic can carve long, deep fitness valleys that slow adaptive evolution in the adversary population. We suggest that this type of defensive complexity on the part of the vertebrate adaptive immune system may be an important element of coevolutionary dynamics between pathogens and their vertebrate hosts. Furthermore, we cite evidence that the immune control logic is phylogenetically conserved in mammalian lineages. Thus our model of defensive complexity suggests a new hypothesis for the lower rates of evolution for immune control logic compared to other immune structures.
1007.0858
Christophe Magnani
Christophe Magnani and L.E.Moore
Quadratic Sinusoidal Analysis of Neurons in Voltage Clamp
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nonlinear biophysical properties of individual neurons are known to play a major role in the nervous system. Earlier electrophysiological studies have made use of piecewise linear characterization of voltage clamped neurons, which consists of a sequence of linear admittances computed at different voltage levels. In this paper, the linear approach is extended to a piecewise quadratic characterization in two different ways. First, an analytical model is derived with power series following the work pioneered by FitzHugh. Second, matrix calculus is developed to provide a novel quantitative analysis not dependent on differential equations. This method provides an assessment of quadratic responses for both data recorded from individual neurons and their corresponding models.
[ { "created": "Tue, 6 Jul 2010 10:45:10 GMT", "version": "v1" } ]
2010-07-07
[ [ "Magnani", "Christophe", "" ], [ "Moore", "L. E.", "" ] ]
Nonlinear biophysical properties of individual neurons are known to play a major role in the nervous system. Earlier electrophysiological studies have made use of piecewise linear characterization of voltage clamped neurons, which consists of a sequence of linear admittances computed at different voltage levels. In this paper, the linear approach is extended to a piecewise quadratic characterization in two different ways. First, an analytical model is derived with power series following the work pioneered by FitzHugh. Second, matrix calculus is developed to provide a novel quantitative analysis not dependent on differential equations. This method provides an assessment of quadratic responses for both data recorded from individual neurons and their corresponding models.
2111.04866
Marcus de Aguiar
D\'ebora Princepe, Marcus A. M. de Aguiar and Joshua B. Plotkin
Mito-nuclear selection induces a trade-off between species ecological dominance and evolutionary lifespan
26 pages, 6 figures, supplemental material
Nature Ecology & Evolution, 2022
10.1038/s41559-022-01901-0
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Mitochondrial and nuclear genomes must be co-adapted to ensure proper cellular respiration and energy production. Mito-nuclear incompatibility reduces individual fitness and induces hybrid infertility, suggesting a possible role in reproductive barriers and speciation. Here we develop a birth-death model for evolution in spatially extended populations under selection for mito-nuclear co-adaptation. Mating is constrained by physical and genetic proximity, and offspring inherit nuclear genomes from both parents, with recombination. The model predicts macroscopic patterns including a community's long-term species diversity, its species abundance distribution, speciation and extinction rates, as well as intra- and inter-specific genetic variation. We explore how these long-term outcomes depend upon the microscopic parameters of reproduction: individual fitness governed by mito-nuclear compatibility, constraints on mating compatibility, and ecological carrying capacity. We find that strong selection for mito-nuclear compatibility reduces the equilibrium number of species after a radiation, increases the species' abundances, while simultaneously increasing both speciation and extinction rates. The negative correlation between species diversity and diversification rates in our model agrees with the broad empirical pattern of lower species diversity and higher speciation/extinction rates in temperate regions, compared to the tropics. We therefore suggest that these empirical patterns may be caused in part by latitudinal variation in metabolic demands, and corresponding variation in selection on mito-nuclear function.
[ { "created": "Mon, 8 Nov 2021 23:13:24 GMT", "version": "v1" }, { "created": "Wed, 8 Jun 2022 16:51:07 GMT", "version": "v2" } ]
2022-10-12
[ [ "Princepe", "Débora", "" ], [ "de Aguiar", "Marcus A. M.", "" ], [ "Plotkin", "Joshua B.", "" ] ]
Mitochondrial and nuclear genomes must be co-adapted to ensure proper cellular respiration and energy production. Mito-nuclear incompatibility reduces individual fitness and induces hybrid infertility, suggesting a possible role in reproductive barriers and speciation. Here we develop a birth-death model for evolution in spatially extended populations under selection for mito-nuclear co-adaptation. Mating is constrained by physical and genetic proximity, and offspring inherit nuclear genomes from both parents, with recombination. The model predicts macroscopic patterns including a community's long-term species diversity, its species abundance distribution, speciation and extinction rates, as well as intra- and inter-specific genetic variation. We explore how these long-term outcomes depend upon the microscopic parameters of reproduction: individual fitness governed by mito-nuclear compatibility, constraints on mating compatibility, and ecological carrying capacity. We find that strong selection for mito-nuclear compatibility reduces the equilibrium number of species after a radiation, increases the species' abundances, while simultaneously increasing both speciation and extinction rates. The negative correlation between species diversity and diversification rates in our model agrees with the broad empirical pattern of lower species diversity and higher speciation/extinction rates in temperate regions, compared to the tropics. We therefore suggest that these empirical patterns may be caused in part by latitudinal variation in metabolic demands, and corresponding variation in selection on mito-nuclear function.
0904.2466
Siew-Ann Cheong
Siew-Ann Cheong, Paul Stodghill, David J. Schneider, Samuel W. Cartinhour, and Christopher R. Myers
Extending the Recursive Jensen-Shannon Segmentation of Biological Sequences
IEEEtran class, 30 pages, 7 figures
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we extend a previously developed recursive entropic segmentation scheme for applications to biological sequences. Instead of Bernoulli chains, we model the statistically stationary segments in a biological sequence as Markov chains, and define a generalized Jensen-Shannon divergence for distinguishing between two Markov chains. We then undertake a mean-field analysis, based on which we identify pitfalls associated with the recursive Jensen-Shannon segmentation scheme. Following this, we explain the need for segmentation optimization, and describe two local optimization schemes for improving the positions of domain walls discovered at each recursion stage. We also develop a new termination criterion for recursive Jensen-Shannon segmentation based on the strength of statistical fluctuations up to a minimum statistically reliable segment length, avoiding the need for unrealistic null and alternative segment models of the target sequence. Finally, we compare the extended scheme against the original scheme by recursively segmenting the Escherichia coli K-12 MG1655 genome.
[ { "created": "Thu, 16 Apr 2009 11:15:33 GMT", "version": "v1" } ]
2009-04-17
[ [ "Cheong", "Siew-Ann", "" ], [ "Stodghill", "Paul", "" ], [ "Schneider", "David J.", "" ], [ "Cartinhour", "Samuel W.", "" ], [ "Myers", "Christopher R.", "" ] ]
In this paper, we extend a previously developed recursive entropic segmentation scheme for applications to biological sequences. Instead of Bernoulli chains, we model the statistically stationary segments in a biological sequence as Markov chains, and define a generalized Jensen-Shannon divergence for distinguishing between two Markov chains. We then undertake a mean-field analysis, based on which we identify pitfalls associated with the recursive Jensen-Shannon segmentation scheme. Following this, we explain the need for segmentation optimization, and describe two local optimization schemes for improving the positions of domain walls discovered at each recursion stage. We also develop a new termination criterion for recursive Jensen-Shannon segmentation based on the strength of statistical fluctuations up to a minimum statistically reliable segment length, avoiding the need for unrealistic null and alternative segment models of the target sequence. Finally, we compare the extended scheme against the original scheme by recursively segmenting the Escherichia coli K-12 MG1655 genome.
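The core step of recursive Jensen-Shannon segmentation, in its original Bernoulli-chain form, is compact enough to sketch: the JS divergence at a cut point is the entropy of the whole sequence minus the length-weighted entropies of the two halves, and the cut maximizing it is the proposed domain wall. This shows only the Bernoulli base case the paper extends to Markov chains; function names are assumptions.

```python
from collections import Counter
from math import log2

def entropy(counts, n):
    """Shannon entropy (bits) of an empirical symbol distribution."""
    return -sum((c / n) * log2(c / n) for c in counts.values() if c)

def js_divergence(seq, i):
    """Jensen-Shannon divergence between the two halves of seq split at i,
    weighted by segment lengths (the Bernoulli-chain criterion)."""
    left, right, n = seq[:i], seq[i:], len(seq)
    return (entropy(Counter(seq), n)
            - (i / n) * entropy(Counter(left), i)
            - ((n - i) / n) * entropy(Counter(right), n - i))

def best_cut(seq, min_len=2):
    """Position of the maximal-divergence cut, respecting a minimum
    segment length."""
    return max(range(min_len, len(seq) - min_len + 1),
               key=lambda i: js_divergence(seq, i))
```

On a sequence of eight A's followed by eight T's the criterion recovers the boundary exactly; the paper's extension replaces the symbol distributions with Markov-chain transition statistics and adds segmentation optimization.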
2403.06331
Pablo Zambrano Ph.D.
Pablo Zambrano
Membrane Interactions in Alzheimer's Treatment Strategies with Multitarget Molecules
6 pages, 1 figure
null
10.1016/j.bioorg.2024.107407
null
q-bio.BM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Addressing Alzheimer's disease (AD) requires innovative strategies beyond current single-target drugs. This Letter to the Editor suggests that multitarget molecules, especially those targeting neuronal membrane protection, could offer a comprehensive approach to AD therapy, advocating for further research into their mechanisms and therapeutic potential.
[ { "created": "Sun, 10 Mar 2024 22:34:25 GMT", "version": "v1" } ]
2024-06-04
[ [ "Zambrano", "Pablo", "" ] ]
Addressing Alzheimer's disease (AD) requires innovative strategies beyond current single-target drugs. This Letter to the Editor suggests that multitarget molecules, especially those targeting neuronal membrane protection, could offer a comprehensive approach to AD therapy, advocating for further research into their mechanisms and therapeutic potential.
2405.03726
Andac Demir
Andac Demir, Elizaveta Solovyeva, James Boylan, Mei Xiao, Fabrizio Serluca, Sebastian Hoersch, Jeremy Jenkins, Murthy Devarakonda, Bulent Kiziltan
sc-OTGM: Single-Cell Perturbation Modeling by Solving Optimal Mass Transport on the Manifold of Gaussian Mixtures
ICLR 2024, Machine Learning for Genomics Explorations Workshop
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Influenced by breakthroughs in LLMs, single-cell foundation models are emerging. While these models show successful performance in cell type clustering, phenotype classification, and gene perturbation response prediction, it remains to be seen if a simpler model could achieve comparable or better results, especially with limited data. This is important, as the quantity and quality of single-cell data typically fall short of the standards in textual data used for training LLMs. Single-cell sequencing often suffers from technical artifacts, dropout events, and batch effects. These challenges are compounded in a weakly supervised setting, where the labels of cell states can be noisy, further complicating the analysis. To tackle these challenges, we present sc-OTGM, streamlined with less than 500K parameters, making it approximately 100x more compact than the foundation models, offering an efficient alternative. sc-OTGM is an unsupervised model grounded in the inductive bias that the scRNAseq data can be generated from a combination of the finite multivariate Gaussian distributions. The core function of sc-OTGM is to create a probabilistic latent space utilizing a GMM as its prior distribution and distinguish between distinct cell populations by learning their respective marginal PDFs. It uses a Hit-and-Run Markov chain sampler to determine the OT plan across these PDFs within the GMM framework. We evaluated our model against a CRISPR-mediated perturbation dataset, called CROP-seq, consisting of 57 one-gene perturbations. Our results demonstrate that sc-OTGM is effective in cell state classification, aids in the analysis of differential gene expression, and ranks genes for target identification through a recommender system. It also predicts the effects of single-gene perturbations on downstream gene regulation and generates synthetic scRNA-seq data conditioned on specific cell states.
[ { "created": "Mon, 6 May 2024 06:46:11 GMT", "version": "v1" } ]
2024-05-08
[ [ "Demir", "Andac", "" ], [ "Solovyeva", "Elizaveta", "" ], [ "Boylan", "James", "" ], [ "Xiao", "Mei", "" ], [ "Serluca", "Fabrizio", "" ], [ "Hoersch", "Sebastian", "" ], [ "Jenkins", "Jeremy", "" ], [ "Devarakonda", "Murthy", "" ], [ "Kiziltan", "Bulent", "" ] ]
Influenced by breakthroughs in LLMs, single-cell foundation models are emerging. While these models show successful performance in cell type clustering, phenotype classification, and gene perturbation response prediction, it remains to be seen if a simpler model could achieve comparable or better results, especially with limited data. This is important, as the quantity and quality of single-cell data typically fall short of the standards in textual data used for training LLMs. Single-cell sequencing often suffers from technical artifacts, dropout events, and batch effects. These challenges are compounded in a weakly supervised setting, where the labels of cell states can be noisy, further complicating the analysis. To tackle these challenges, we present sc-OTGM, streamlined with less than 500K parameters, making it approximately 100x more compact than the foundation models, offering an efficient alternative. sc-OTGM is an unsupervised model grounded in the inductive bias that the scRNAseq data can be generated from a combination of the finite multivariate Gaussian distributions. The core function of sc-OTGM is to create a probabilistic latent space utilizing a GMM as its prior distribution and distinguish between distinct cell populations by learning their respective marginal PDFs. It uses a Hit-and-Run Markov chain sampler to determine the OT plan across these PDFs within the GMM framework. We evaluated our model against a CRISPR-mediated perturbation dataset, called CROP-seq, consisting of 57 one-gene perturbations. Our results demonstrate that sc-OTGM is effective in cell state classification, aids in the analysis of differential gene expression, and ranks genes for target identification through a recommender system. It also predicts the effects of single-gene perturbations on downstream gene regulation and generates synthetic scRNA-seq data conditioned on specific cell states.
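The GMM prior at the heart of sc-OTGM can be illustrated with the simplest case: EM for a two-component one-dimensional Gaussian mixture with a fixed, shared variance, which learns the marginal PDFs of two "cell populations" from pooled samples. This is a didactic sketch of the mixture-fitting step only, not the model's OT machinery; the fixed variance and function names are assumptions.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, var):
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def fit_gmm_1d(data, mu0, mu1, var=1.0, w=0.5, iters=50):
    """EM for a two-component 1-D Gaussian mixture with fixed, shared
    variance; returns the component means and the mixing weight."""
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = [w * normal_pdf(x, mu1, var) /
             (w * normal_pdf(x, mu1, var) + (1 - w) * normal_pdf(x, mu0, var))
             for x in data]
        # M-step: re-estimate means and weight
        s = sum(r)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / s
        mu0 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - s)
        w = s / len(data)
    return mu0, mu1, w
```

Given samples around 0 and around 5, the fit separates the two populations; sc-OTGM then computes an optimal transport plan between such component PDFs to model perturbation responses.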
1110.1516
Daniel Kaschek
Daniel Kaschek and Jens Timmer
A Variational Approach to Parameter Estimation in Ordinary Differential Equations
null
null
null
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.
[ { "created": "Fri, 7 Oct 2011 13:05:35 GMT", "version": "v1" }, { "created": "Thu, 5 Jul 2012 08:47:28 GMT", "version": "v2" } ]
2012-07-06
[ [ "Kaschek", "Daniel", "" ], [ "Timmer", "Jens", "" ] ]
Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.
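The "conventional parameter estimation" step the abstract applies to its augmented system can be sketched on the smallest example: estimate the rate constant of dx/dt = -k*x by Euler-integrating candidate trajectories and minimizing the sum of squared errors against sampled data. This illustrates only the generic estimation step, not the variational augmentation; the Euler integrator and grid search are assumptions standing in for a proper optimizer.

```python
def simulate(k, x0, ts, dt=1e-3):
    """Euler-integrate dx/dt = -k*x and sample the state at times ts."""
    xs, x, t = [], x0, 0.0
    for target in ts:
        while t < target - 1e-12:
            x += dt * (-k * x)
            t += dt
        xs.append(x)
    return xs

def fit_rate(ts, data, x0, grid):
    """Least-squares fit of the rate constant over a candidate grid."""
    def sse(k):
        return sum((m - d) ** 2 for m, d in zip(simulate(k, x0, ts), data))
    return min(grid, key=sse)
```

In the paper's setting the same fit is run on the augmented ODE system, so unconstrained input courses are estimated alongside rate constants and uncertainty propagates into the confidence intervals.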
1906.08612
Kevin Parker PhD
Kevin J. Parker, Thomas Szabo, Sverre Holm
Towards a consensus on rheological models for shear waves in soft tissues
39 pages, 11 figures
Phys Med Biol 64(21) p.215012, 2019
10.1088/1361-6560/ab453d
null
q-bio.TO q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A rising wave of technologies and instruments is enabling more labs and clinics to make a variety of measurements related to tissue viscoelastic properties. These instruments include elastography imaging scanners, rheological shear viscometers, and a variety of calibrated stress-strain analyzers. From these many sources of disparate data, a common step in analyzing results is to fit the measurements of tissue response to some viscoelastic model. In the best scenario, this places the measurements within a theoretical framework and enables meaningful comparisons of the parameters against other types of tissues. However, there is a large set of established rheological models, even within the class of linear, causal, viscoelastic solid models, so which of these should be chosen? Is it simply a matter of best fit to a minimum mean squared error of the model to several data points? We argue that the long history of biomechanics, including the concept of the extended relaxation spectrum, along with data collected from viscoelastic soft tissues over an extended range of times and frequencies, and the theoretical framework of multiple relaxation models which model the multi-scale nature of physical tissues, all lead to the conclusion that fractional derivative models represent the most succinct and meaningful models of soft tissue viscoelastic behavior. These arguments are presented with the goal of clarifying some distinctions between, and consequences of, some of the most commonly used models, and with the longer term goal of reaching a consensus among different sub-fields in acoustics, biomechanics, and elastography that have common interests in comparing tissue measurements.
[ { "created": "Thu, 20 Jun 2019 13:43:21 GMT", "version": "v1" }, { "created": "Thu, 19 Sep 2019 18:04:20 GMT", "version": "v2" } ]
2020-02-14
[ [ "Parker", "Kevin J.", "" ], [ "Szabo", "Thomas", "" ], [ "Holm", "Sverre", "" ] ]
A rising wave of technologies and instruments is enabling more labs and clinics to make a variety of measurements related to tissue viscoelastic properties. These instruments include elastography imaging scanners, rheological shear viscometers, and a variety of calibrated stress-strain analyzers. From these many sources of disparate data, a common step in analyzing results is to fit the measurements of tissue response to some viscoelastic model. In the best scenario, this places the measurements within a theoretical framework and enables meaningful comparisons of the parameters against other types of tissues. However, there is a large set of established rheological models, even within the class of linear, causal, viscoelastic solid models, so which of these should be chosen? Is it simply a matter of best fit to a minimum mean squared error of the model to several data points? We argue that the long history of biomechanics, including the concept of the extended relaxation spectrum, along with data collected from viscoelastic soft tissues over an extended range of times and frequencies, and the theoretical framework of multiple relaxation models which model the multi-scale nature of physical tissues, all lead to the conclusion that fractional derivative models represent the most succinct and meaningful models of soft tissue viscoelastic behavior. These arguments are presented with the goal of clarifying some distinctions between, and consequences of, some of the most commonly used models, and with the longer term goal of reaching a consensus among different sub-fields in acoustics, biomechanics, and elastography that have common interests in comparing tissue measurements.
2105.06049
Fan Wang
Fan Wang, Saarthak Kapse, Steven Liu, Prateek Prasanna, Chao Chen
TopoTxR: A Topological Biomarker for Predicting Treatment Response in Breast Cancer
12 pages, 5 figures, 2 tables, accepted to International Conference on Information Processing in Medical Imaging (IPMI) 2021
null
null
null
q-bio.QM cs.CV eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Characterization of breast parenchyma on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a challenging task owing to the complexity of underlying tissue structures. Current quantitative approaches, including radiomics and deep learning models, do not explicitly capture the complex and subtle parenchymal structures, such as fibroglandular tissue. In this paper, we propose a novel method to direct a neural network's attention to a dedicated set of voxels surrounding biologically relevant tissue structures. By extracting multi-dimensional topological structures with high saliency, we build a topology-derived biomarker, TopoTxR. We demonstrate the efficacy of TopoTxR in predicting response to neoadjuvant chemotherapy in breast cancer. Our qualitative and quantitative results suggest differential topological behavior of breast tissue on treatment-na\"ive imaging, in patients who respond favorably to therapy versus those who do not.
[ { "created": "Thu, 13 May 2021 02:38:48 GMT", "version": "v1" } ]
2021-05-14
[ [ "Wang", "Fan", "" ], [ "Kapse", "Saarthak", "" ], [ "Liu", "Steven", "" ], [ "Prasanna", "Prateek", "" ], [ "Chen", "Chao", "" ] ]
Characterization of breast parenchyma on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a challenging task owing to the complexity of underlying tissue structures. Current quantitative approaches, including radiomics and deep learning models, do not explicitly capture the complex and subtle parenchymal structures, such as fibroglandular tissue. In this paper, we propose a novel method to direct a neural network's attention to a dedicated set of voxels surrounding biologically relevant tissue structures. By extracting multi-dimensional topological structures with high saliency, we build a topology-derived biomarker, TopoTxR. We demonstrate the efficacy of TopoTxR in predicting response to neoadjuvant chemotherapy in breast cancer. Our qualitative and quantitative results suggest differential topological behavior of breast tissue on treatment-na\"ive imaging, in patients who respond favorably to therapy versus those who do not.
0809.2950
Kamenev Alex
Alex Kamenev, Baruch Meerson, Boris Shklovskii
Population extinction in a fluctuating environment
4 pages, 3 figures
Phys. Rev. Lett. 101, 268103 (2008)
10.1103/PhysRevLett.101.268103
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Environmental noise can cause an exponential reduction in the mean time to extinction (MTE) of an isolated population. We study this effect on an example of a stochastic birth-death process with rates modulated by a colored Gaussian noise. A path integral formulation yields a transparent way of evaluating the MTE and finding the optimal realization of the environmental noise that determines the most probable path to extinction. The population-size dependence of the MTE changes from exponential in the absence of the environmental noise to a power law for a short-correlated noise and to no dependence for long-correlated noise. We also establish the validity domains of the limits of white noise and adiabatic noise.
[ { "created": "Wed, 17 Sep 2008 17:35:01 GMT", "version": "v1" } ]
2015-05-13
[ [ "Kamenev", "Alex", "" ], [ "Meerson", "Baruch", "" ], [ "Shklovskii", "Boris", "" ] ]
Environmental noise can cause an exponential reduction in the mean time to extinction (MTE) of an isolated population. We study this effect on an example of a stochastic birth-death process with rates modulated by a colored Gaussian noise. A path integral formulation yields a transparent way of evaluating the MTE and finding the optimal realization of the environmental noise that determines the most probable path to extinction. The population-size dependence of the MTE changes from exponential in the absence of the environmental noise to a power law for a short-correlated noise and to no dependence for long-correlated noise. We also establish the validity domains of the limits of white noise and adiabatic noise.
2004.04271
Michael Nikolaou
Michael Nikolaou
Simple Formulas for a Two-Tier Strategy to Flatten the Curve
6 pages 7 figures arXiv:2003.12055v1 [q-bio.PE] 26 Mar 2020
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Some basic facts that enable or constrain efforts to "flatten the curve" by non-pharmaceutical interventions (NPI) are summarized and placed in the current context of the COVID-19 epidemic. Analytical formulas are presented for a simple two-tier NPI strategy that places different social distancing targets for high- and low-risk groups. The results aim to facilitate rapid what-if analysis rather than replace detailed simulation, which remains indispensable for informed decision making. The appearance of the oft neglected Lambert function in the resulting formulas is noted.
[ { "created": "Wed, 8 Apr 2020 21:27:44 GMT", "version": "v1" }, { "created": "Sun, 12 Apr 2020 21:38:45 GMT", "version": "v2" } ]
2020-04-14
[ [ "Nikolaou", "Michael", "" ] ]
Some basic facts that enable or constrain efforts to "flatten the curve" by non-pharmaceutical interventions (NPI) are summarized and placed in the current context of the COVID-19 epidemic. Analytical formulas are presented for a simple two-tier NPI strategy that places different social distancing targets for high- and low-risk groups. The results aim to facilitate rapid what-if analysis rather than replace detailed simulation, which remains indispensable for informed decision making. The appearance of the oft neglected Lambert function in the resulting formulas is noted.
1010.1099
Junbai Wang
Junbai Wang, Leo Wang-Kit Cheung and Jan Delabie
Application of new probabilistic graphical models in the genetic regulatory networks studies
38 pages, 3 figures
J Biomed Inform. 2005 Dec;38(6):443-55. Epub 2005 Jun 9
10.1016/j.jbi.2005.04.003
null
q-bio.QM q-bio.MN
http://creativecommons.org/licenses/publicdomain/
This paper introduces two new probabilistic graphical models for reconstruction of genetic regulatory networks using DNA microarray data. One is an Independence Graph (IG) model with either a forward or a backward search algorithm and the other one is a Gaussian Network (GN) model with a novel greedy search method. The performances of both models were evaluated on four MAPK pathways in yeast and three simulated data sets. Generally, an IG model provides a sparse graph but a GN model produces a dense graph where more information about gene-gene interactions is preserved. Additionally, we found two key limitations in the prediction of genetic regulatory networks using DNA microarray data: the first is the sufficiency of the sample size, and the second is that the complexity of network structures may not be captured without additional data at the protein level. Those limitations are present in all prediction methods that use only DNA microarray data.
[ { "created": "Wed, 6 Oct 2010 09:25:42 GMT", "version": "v1" } ]
2010-10-07
[ [ "Wang", "Junbai", "" ], [ "Cheung", "Leo Wang-Kit", "" ], [ "Delabie", "Jan", "" ] ]
This paper introduces two new probabilistic graphical models for reconstruction of genetic regulatory networks using DNA microarray data. One is an Independence Graph (IG) model with either a forward or a backward search algorithm and the other one is a Gaussian Network (GN) model with a novel greedy search method. The performances of both models were evaluated on four MAPK pathways in yeast and three simulated data sets. Generally, an IG model provides a sparse graph but a GN model produces a dense graph where more information about gene-gene interactions is preserved. Additionally, we found two key limitations in the prediction of genetic regulatory networks using DNA microarray data: the first is the sufficiency of the sample size, and the second is that the complexity of network structures may not be captured without additional data at the protein level. Those limitations are present in all prediction methods that use only DNA microarray data.
q-bio/0412008
Suan Li Mai
Mai Suan Li, D. K. Klimov and D. Thirumalai
Finite size effects on calorimetric cooperativity of two-state proteins
3 eps figures. To appear in the special issue of Physica A
null
10.1016/j.physa.2004.11.029
null
q-bio.BM q-bio.QM
null
Finite size effects on the calorimetric cooperativity of the folding-unfolding transition in two-state proteins are considered using the Go lattice models with and without side chains. We show that for models without side chains a dimensionless measure of calorimetric cooperativity kappa2 defined as the ratio of the van't Hoff to calorimetric enthalpy does not depend on the number of amino acids N. The average value of kappa2 is about 3/4 which is lower than the experimental value kappa2=1. For models with side chains kappa2 approaches unity as kappa2 \sim N^mu, where exponent mu=0.17. Above the critical chain length Nc =135 these models can mimic the truly all-or-none folding-unfolding transition.
[ { "created": "Sun, 5 Dec 2004 13:59:14 GMT", "version": "v1" } ]
2009-11-10
[ [ "Li", "Mai Suan", "" ], [ "Klimov", "D. K.", "" ], [ "Thirumalai", "D.", "" ] ]
Finite size effects on the calorimetric cooperativity of the folding-unfolding transition in two-state proteins are considered using the Go lattice models with and without side chains. We show that for models without side chains a dimensionless measure of calorimetric cooperativity kappa2 defined as the ratio of the van't Hoff to calorimetric enthalpy does not depend on the number of amino acids N. The average value of kappa2 is about 3/4 which is lower than the experimental value kappa2=1. For models with side chains kappa2 approaches unity as kappa2 \sim N^mu, where exponent mu=0.17. Above the critical chain length Nc =135 these models can mimic the truly all-or-none folding-unfolding transition.
2310.10544
Laurence Maloney
Laurence T Maloney, Maria F Dal Martello, Vivian Fei and Valerie Ma
Use of probabilistic phrases in a coordination game: human versus GPT-4
Corrected typos, extended discussion, added references
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by/4.0/
English speakers use probabilistic phrases such as likely to communicate information about the probability or likelihood of events. Communication is successful to the extent that the listener grasps what the speaker means to convey and, if communication is successful, individuals can potentially coordinate their actions based on shared knowledge about uncertainty. We first assessed human ability to estimate the probability and the ambiguity (imprecision) of twenty-three probabilistic phrases in a coordination game in two different contexts, investment advice and medical advice. We then had GPT4 (OpenAI), a Large Language Model, complete the same tasks as the human participants. We found that the median human participant and GPT4 assigned probability estimates that were in good agreement (proportions of variance accounted for close to .90). GPT4's estimates of probability in both the investment and medical contexts were as close or closer to those of the human participants as the human participants' estimates were to one another. Estimates of probability for both the human participants and GPT4 were little affected by context. In contrast, human and GPT4 estimates of ambiguity were not in such good agreement.
[ { "created": "Mon, 16 Oct 2023 16:14:27 GMT", "version": "v1" }, { "created": "Fri, 3 Nov 2023 18:13:43 GMT", "version": "v2" }, { "created": "Sat, 25 Nov 2023 19:12:02 GMT", "version": "v3" } ]
2023-11-28
[ [ "Maloney", "Laurence T", "" ], [ "Martello", "Maria F Dal", "" ], [ "Fei", "Vivian", "" ], [ "Ma", "Valerie", "" ] ]
English speakers use probabilistic phrases such as likely to communicate information about the probability or likelihood of events. Communication is successful to the extent that the listener grasps what the speaker means to convey and, if communication is successful, individuals can potentially coordinate their actions based on shared knowledge about uncertainty. We first assessed human ability to estimate the probability and the ambiguity (imprecision) of twenty-three probabilistic phrases in a coordination game in two different contexts, investment advice and medical advice. We then had GPT4 (OpenAI), a Large Language Model, complete the same tasks as the human participants. We found that the median human participant and GPT4 assigned probability estimates that were in good agreement (proportions of variance accounted for close to .90). GPT4's estimates of probability in both the investment and medical contexts were as close or closer to those of the human participants as the human participants' estimates were to one another. Estimates of probability for both the human participants and GPT4 were little affected by context. In contrast, human and GPT4 estimates of ambiguity were not in such good agreement.
2110.10866
Ashkaan Fahimipour
Ashkaan K. Fahimipour, Fanqi Zeng, Martin Homer, Arne Traulsen, Simon A. Levin, Thilo Gross
Sharp thresholds limit the benefit of defector avoidance in cooperation on networks
14 pages, 4 figures
null
10.1073/pnas.2120120119
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Consider a cooperation game on a spatial network of habitat patches, where players can relocate between patches if they judge the local conditions to be unfavorable. In time, the relocation events may lead to a homogeneous state where all patches harbor the same relative densities of cooperators and defectors or they may lead to self-organized patterns, where some patches become safe havens that maintain an elevated cooperator density. Here we analyze the transition between these states mathematically. We show that safe havens form once a certain threshold in connectivity is crossed. This threshold can be analytically linked to the structure of the patch network and specifically to certain network motifs. Surprisingly, a forgiving defector avoidance strategy may be most favorable for cooperators. Our results demonstrate that the analysis of cooperation games in ecological metacommunity models is mathematically tractable and has the potential to link topics such as macroecological patterns, behavioral evolution, and network topology.
[ { "created": "Thu, 21 Oct 2021 02:57:55 GMT", "version": "v1" }, { "created": "Tue, 12 Jul 2022 19:46:30 GMT", "version": "v2" } ]
2022-10-12
[ [ "Fahimipour", "Ashkaan K.", "" ], [ "Zeng", "Fanqi", "" ], [ "Homer", "Martin", "" ], [ "Traulsen", "Arne", "" ], [ "Levin", "Simon A.", "" ], [ "Gross", "Thilo", "" ] ]
Consider a cooperation game on a spatial network of habitat patches, where players can relocate between patches if they judge the local conditions to be unfavorable. In time, the relocation events may lead to a homogeneous state where all patches harbor the same relative densities of cooperators and defectors or they may lead to self-organized patterns, where some patches become safe havens that maintain an elevated cooperator density. Here we analyze the transition between these states mathematically. We show that safe havens form once a certain threshold in connectivity is crossed. This threshold can be analytically linked to the structure of the patch network and specifically to certain network motifs. Surprisingly, a forgiving defector avoidance strategy may be most favorable for cooperators. Our results demonstrate that the analysis of cooperation games in ecological metacommunity models is mathematically tractable and has the potential to link topics such as macroecological patterns, behavioral evolution, and network topology.
1911.00325
Marzio Pennisi
Giulia Russo, Francesco Pappalardo, Miguel A. Juarez, Marzio Pennisi, Pere Joan Cardona, Rhea Coler, Epifanio Fichera, Marco Viceconti
Evaluation of the efficacy of RUTI and ID93/GLA-SE vaccines in tuberculosis treatment: in silico trial through UISS-TB simulator
5 pages
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Tuberculosis (TB) is one of the deadliest diseases worldwide, with 1.5 million fatalities every year along with potential devastating effects on society, families and individuals. To address this alarming burden, vaccines can play a fundamental role, even though to date no fully effective TB vaccine really exists. Current treatments involve several combinations of antibiotics administered to TB patients for up to two years, leading often to financial issues and reduced therapy adherence. Along with this, the development and spread of drug-resistant TB strains is another big complicating matter. Faced with these challenges, there is an urgent need to explore new vaccination strategies in order to boost immunity against tuberculosis and shorten the duration of treatment. Computational modeling represents an extraordinary way to simulate and predict the outcome of vaccination strategies, speeding up the arduous process of vaccine pipeline development and relative time to market. Here, we present the EU-funded STriTuVaD project computational platform, able to predict the artificial immunity induced by RUTI and ID93/GLA-SE, two specific tuberculosis vaccines. Such an in silico trial will be validated through a phase 2b clinical trial. Moreover, the STriTuVaD computational framework is able to inform of the reasons for failure should the vaccination strategies against M. tuberculosis under testing be found inefficient, which will suggest possible improvements.
[ { "created": "Sat, 26 Oct 2019 12:42:52 GMT", "version": "v1" } ]
2019-11-04
[ [ "Russo", "Giulia", "" ], [ "Pappalardo", "Francesco", "" ], [ "Juarez", "Miguel A.", "" ], [ "Pennisi", "Marzio", "" ], [ "Cardona", "Pere Joan", "" ], [ "Coler", "Rhea", "" ], [ "Fichera", "Epifanio", "" ], [ "Viceconti", "Marco", "" ] ]
Tuberculosis (TB) is one of the deadliest diseases worldwide, with 1.5 million fatalities every year along with potential devastating effects on society, families and individuals. To address this alarming burden, vaccines can play a fundamental role, even though to date no fully effective TB vaccine really exists. Current treatments involve several combinations of antibiotics administered to TB patients for up to two years, leading often to financial issues and reduced therapy adherence. Along with this, the development and spread of drug-resistant TB strains is another big complicating matter. Faced with these challenges, there is an urgent need to explore new vaccination strategies in order to boost immunity against tuberculosis and shorten the duration of treatment. Computational modeling represents an extraordinary way to simulate and predict the outcome of vaccination strategies, speeding up the arduous process of vaccine pipeline development and relative time to market. Here, we present the EU-funded STriTuVaD project computational platform, able to predict the artificial immunity induced by RUTI and ID93/GLA-SE, two specific tuberculosis vaccines. Such an in silico trial will be validated through a phase 2b clinical trial. Moreover, the STriTuVaD computational framework is able to inform of the reasons for failure should the vaccination strategies against M. tuberculosis under testing be found inefficient, which will suggest possible improvements.
1503.05791
Filippo Disanto
Filippo Disanto and Noah A. Rosenberg
Asymptotic properties of the number of matching coalescent histories for caterpillar-like families of species trees
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Coalescent histories provide lists of species tree branches on which gene tree coalescences can take place, and their enumerative properties assist in understanding the computational complexity of calculations central in the study of gene trees and species trees. Here, we solve an enumerative problem left open by Rosenberg (IEEE/ACM Transactions on Computational Biology and Bioinformatics 10: 1253-1262, 2013) concerning the number of coalescent histories for gene trees and species trees with a matching labeled topology that belongs to a generic caterpillar-like family. By bringing a generating function approach to the study of coalescent histories, we prove that for any caterpillar-like family with seed tree $t$, the sequence $(h_n)_{n\geq 0}$ describing the number of matching coalescent histories of the $n$th tree of the family grows asymptotically as a constant multiple of the Catalan numbers. Thus, $h_n \sim \beta_t c_n$, where the asymptotic constant $\beta_t > 0$ depends on the shape of the seed tree $t$. The result extends a claim demonstrated only for seed trees with at most 8 taxa to arbitrary seed trees, expanding the set of cases for which detailed enumerative properties of coalescent histories can be determined. We introduce a procedure that computes from $t$ the constant $\beta_t$ as well as the algebraic expression for the generating function of the sequence $(h_n)_{n\geq 0}$.
[ { "created": "Thu, 19 Mar 2015 15:05:07 GMT", "version": "v1" } ]
2015-03-20
[ [ "Disanto", "Filippo", "" ], [ "Rosenberg", "Noah A.", "" ] ]
Coalescent histories provide lists of species tree branches on which gene tree coalescences can take place, and their enumerative properties assist in understanding the computational complexity of calculations central in the study of gene trees and species trees. Here, we solve an enumerative problem left open by Rosenberg (IEEE/ACM Transactions on Computational Biology and Bioinformatics 10: 1253-1262, 2013) concerning the number of coalescent histories for gene trees and species trees with a matching labeled topology that belongs to a generic caterpillar-like family. By bringing a generating function approach to the study of coalescent histories, we prove that for any caterpillar-like family with seed tree $t$, the sequence $(h_n)_{n\geq 0}$ describing the number of matching coalescent histories of the $n$th tree of the family grows asymptotically as a constant multiple of the Catalan numbers. Thus, $h_n \sim \beta_t c_n$, where the asymptotic constant $\beta_t > 0$ depends on the shape of the seed tree $t$. The result extends a claim demonstrated only for seed trees with at most 8 taxa to arbitrary seed trees, expanding the set of cases for which detailed enumerative properties of coalescent histories can be determined. We introduce a procedure that computes from $t$ the constant $\beta_t$ as well as the algebraic expression for the generating function of the sequence $(h_n)_{n\geq 0}$.
2110.05232
Romain Weppe
R. Weppe (UMR ISEM), Ma\"eva Orliac (UMR ISEM), G. Guinot (UMR ISEM), Fabien L. Condamine (UMR ISEM)
Evolutionary drivers, morphological evolution and diversity dynamics of a surviving mammal clade: cainotherioids at the Eocene--Oligocene transition
null
Proceedings of the Royal Society B: Biological Sciences, The Royal Society, 2021, 288 (1952), pp. 20210173. ⟨10.1098/rspb.2021.0173⟩
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Eocene--Oligocene transition (EOT) represents a period of global environmental changes particularly marked in Europe and coincides with a dramatic biotic turnover. Here, using an exceptional fossil preservation, we document and analyse the diversity dynamics of a mammal clade, Cainotherioidea (Artiodactyla), that survived the EOT and radiated rapidly immediately after. We infer their diversification history from Quercy Konzentrat--Lagerst{\"a}tte (south-west France) at the species level using Bayesian birth--death models. We show that cainotherioid diversity fluctuated through time, with extinction events at the EOT and in the late Oligocene, and a major speciation burst in the early Oligocene. The latter is in line with our finding that cainotherioids had a high morphological adaptability following environmental changes throughout the EOT, which probably played a key role in the survival and evolutionary success of this clade in the aftermath. Speciation is positively associated with temperature and continental fragmentation in a time-continuous way, while extinction seems to synchronize with environmental change in a punctuated way. Within-clade interactions negatively affected the cainotherioid diversification, while inter-clade competition might explain their final decline during the late Oligocene. Our results provide a detailed dynamic picture of the evolutionary history of a mammal clade in a context of global change.
[ { "created": "Mon, 11 Oct 2021 12:49:30 GMT", "version": "v1" } ]
2021-10-12
[ [ "Weppe", "R.", "", "UMR ISEM" ], [ "Orliac", "Maëva", "", "UMR ISEM" ], [ "Guinot", "G.", "", "UMR ISEM" ], [ "Condamine", "Fabien L.", "", "UMR ISEM" ] ]
The Eocene--Oligocene transition (EOT) represents a period of global environmental changes particularly marked in Europe and coincides with a dramatic biotic turnover. Here, using an exceptional fossil preservation, we document and analyse the diversity dynamics of a mammal clade, Cainotherioidea (Artiodactyla), that survived the EOT and radiated rapidly immediately after. We infer their diversification history from Quercy Konzentrat--Lagerst{\"a}tte (south-west France) at the species level using Bayesian birth--death models. We show that cainotherioid diversity fluctuated through time, with extinction events at the EOT and in the late Oligocene, and a major speciation burst in the early Oligocene. The latter is in line with our finding that cainotherioids had a high morphological adaptability following environmental changes throughout the EOT, which probably played a key role in the survival and evolutionary success of this clade in the aftermath. Speciation is positively associated with temperature and continental fragmentation in a time-continuous way, while extinction seems to synchronize with environmental change in a punctuated way. Within-clade interactions negatively affected the cainotherioid diversification, while inter-clade competition might explain their final decline during the late Oligocene. Our results provide a detailed dynamic picture of the evolutionary history of a mammal clade in a context of global change.
2302.02692
Jeyashree Krishnan
Jeyashree Krishnan, Zeyu Lian, Pieter E. Oomen, Xiulan He, Soodabeh Majdi, Andreas Schuppert, Andrew Ewing
Spike-by-Spike Frequency Analysis of Amperometry Traces Provides Statistical Validation of Observations in the Time Domain
34 pages, 16 figures
null
null
null
q-bio.SC physics.data-an q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Amperometry is a commonly used electrochemical method for studying the process of exocytosis in real-time. Given the high precision of recording that amperometry procedures offer, the volume of data generated can span over several hundreds of megabytes to a few gigabytes and therefore necessitates systematic and reproducible methods for analysis. Though the spike characteristics of amperometry traces in the time domain hold information about the dynamics of exocytosis, these biochemical signals are, more often than not, characterized by time-varying signal properties. Such signals with time-variant properties may occur at different frequencies and therefore analyzing them in the frequency domain may provide statistical validation for observations already established in the time domain. This necessitates the use of time-variant, frequency-selective signal processing methods as well, which can adeptly quantify the dominant or mean frequencies in the signal. The Fast Fourier Transform (FFT) is a well-established computational tool that is commonly used to find the frequency components of a signal buried in noise. In this work, we outline a method for spike-based frequency analysis of amperometry traces using FFT that also provides statistical validation of observations on spike characteristics in the time domain. We demonstrate the method by utilizing simulated signals and by subsequently testing it on diverse amperometry datasets generated from different experiments with various chemical stimulations. To our knowledge, this is the first fully automated open-source tool available dedicated to the analysis of spikes extracted from amperometry signals in the frequency domain.
[ { "created": "Mon, 6 Feb 2023 10:49:02 GMT", "version": "v1" } ]
2023-02-07
[ [ "Krishnan", "Jeyashree", "" ], [ "Lian", "Zeyu", "" ], [ "Oomen", "Pieter E.", "" ], [ "He", "Xiulan", "" ], [ "Majdi", "Soodabeh", "" ], [ "Schuppert", "Andreas", "" ], [ "Ewing", "Andrew", "" ] ]
Amperometry is a commonly used electrochemical method for studying the process of exocytosis in real-time. Given the high precision of recording that amperometry procedures offer, the volume of data generated can span over several hundreds of megabytes to a few gigabytes and therefore necessitates systematic and reproducible methods for analysis. Though the spike characteristics of amperometry traces in the time domain hold information about the dynamics of exocytosis, these biochemical signals are, more often than not, characterized by time-varying signal properties. Such signals with time-variant properties may occur at different frequencies and therefore analyzing them in the frequency domain may provide statistical validation for observations already established in the time domain. This necessitates the use of time-variant, frequency-selective signal processing methods as well, which can adeptly quantify the dominant or mean frequencies in the signal. The Fast Fourier Transform (FFT) is a well-established computational tool that is commonly used to find the frequency components of a signal buried in noise. In this work, we outline a method for spike-based frequency analysis of amperometry traces using FFT that also provides statistical validation of observations on spike characteristics in the time domain. We demonstrate the method by utilizing simulated signals and by subsequently testing it on diverse amperometry datasets generated from different experiments with various chemical stimulations. To our knowledge, this is the first fully automated open-source tool available dedicated to the analysis of spikes extracted from amperometry signals in the frequency domain.
2008.12683
Mohammad Ali Moni
Sakifa Aktar, Ashis Talukder, Md. Martuza Ahamad, A. H. M. Kamal, Jahidur Rahman Khan, Md. Protikuzzaman, Nasif Hossain, Julian M.W. Quinn, Mathew A. Summers, Teng Liaw, Valsamma Eapen, Mohammad Ali Moni
Machine Learning and Meta-Analysis Approach to Identify Patient Comorbidities and Symptoms that Increased Risk of Mortality in COVID-19
null
Diagnostics 2021
10.3390/diagnostics11081383
2008.12683
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Background: Providing appropriate care for people suffering from COVID-19, the disease caused by the pandemic SARS-CoV-2 virus, is a significant global challenge. Many individuals who become infected have pre-existing conditions that may interact with COVID-19 to increase symptom severity and mortality risk. COVID-19 patient comorbidities are likely to be informative about individual risk of severe illness and mortality. Accurately determining how comorbidities are associated with severe symptoms and mortality would thus greatly assist in COVID-19 care planning and provision. Methods: To assess the interaction of patient comorbidities with COVID-19 severity and mortality we performed a meta-analysis of the published global literature, and machine learning predictive analysis using an aggregated COVID-19 global dataset. Results: Our meta-analysis identified chronic obstructive pulmonary disease (COPD), cerebrovascular disease (CEVD), cardiovascular disease (CVD), type 2 diabetes, malignancy, and hypertension as most significantly associated with COVID-19 severity in the current published literature. Machine learning classification using novel aggregated cohort data similarly found COPD, CVD, CKD, type 2 diabetes, malignancy and hypertension, as well as asthma, as the most significant features for classifying those deceased versus those who survived COVID-19. While age and gender were the most significant predictors of mortality, in terms of symptom-comorbidity combinations, it was observed that Pneumonia-Hypertension, Pneumonia-Diabetes and Acute Respiratory Distress Syndrome (ARDS)-Hypertension showed the most significant effects on COVID-19 mortality. Conclusions: These results highlight patient cohorts most at risk of COVID-19 related severe morbidity and mortality which have implications for prioritization of hospital resources.
[ { "created": "Fri, 21 Aug 2020 12:31:54 GMT", "version": "v1" } ]
2023-02-24
[ [ "Aktar", "Sakifa", "" ], [ "Talukder", "Ashis", "" ], [ "Ahamad", "Md. Martuza", "" ], [ "Kamal", "A. H. M.", "" ], [ "Khan", "Jahidur Rahman", "" ], [ "Protikuzzaman", "Md.", "" ], [ "Hossain", "Nasif", "" ], [ "Quinn", "Julian M. W.", "" ], [ "Summers", "Mathew A.", "" ], [ "Liaw", "Teng", "" ], [ "Eapen", "Valsamma", "" ], [ "Moni", "Mohammad Ali", "" ] ]
Background: Providing appropriate care for people suffering from COVID-19, the disease caused by the pandemic SARS-CoV-2 virus, is a significant global challenge. Many individuals who become infected have pre-existing conditions that may interact with COVID-19 to increase symptom severity and mortality risk. COVID-19 patient comorbidities are likely to be informative about individual risk of severe illness and mortality. Accurately determining how comorbidities are associated with severe symptoms and mortality would thus greatly assist in COVID-19 care planning and provision. Methods: To assess the interaction of patient comorbidities with COVID-19 severity and mortality we performed a meta-analysis of the published global literature, and machine learning predictive analysis using an aggregated COVID-19 global dataset. Results: Our meta-analysis identified chronic obstructive pulmonary disease (COPD), cerebrovascular disease (CEVD), cardiovascular disease (CVD), type 2 diabetes, malignancy, and hypertension as most significantly associated with COVID-19 severity in the current published literature. Machine learning classification using novel aggregated cohort data similarly found COPD, CVD, CKD, type 2 diabetes, malignancy and hypertension, as well as asthma, as the most significant features for classifying those deceased versus those who survived COVID-19. While age and gender were the most significant predictors of mortality, in terms of symptom-comorbidity combinations, it was observed that Pneumonia-Hypertension, Pneumonia-Diabetes and Acute Respiratory Distress Syndrome (ARDS)-Hypertension showed the most significant effects on COVID-19 mortality. Conclusions: These results highlight patient cohorts most at risk of COVID-19 related severe morbidity and mortality, which have implications for prioritization of hospital resources.
1812.05872
Fiona Macfarlane
Mark AJ Chaplain, Tommaso Lorenzi, Fiona R Macfarlane
Bridging the gap between individual-based and continuum models of growing cell populations
null
null
10.1007/s00285-019-01391-y
null
q-bio.TO math.AP q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Continuum models for the spatial dynamics of growing cell populations have been widely used to investigate the mechanisms underpinning tissue development and tumour invasion. These models consist of nonlinear partial differential equations that describe the evolution of cellular densities in response to pressure gradients generated by population growth. Little prior work has explored the relation between such continuum models and related single-cell-based models. We present here a simple stochastic individual-based model for the spatial dynamics of multicellular systems whereby cells undergo pressure-driven movement and pressure-dependent proliferation. We show that nonlinear partial differential equations commonly used to model the spatial dynamics of growing cell populations can be formally derived from the branching random walk that underlies our discrete model. Moreover, we carry out a systematic comparison between the individual-based model and its continuum counterparts, both in the case of one single cell population and in the case of multiple cell populations with different biophysical properties. The outcomes of our comparative study demonstrate that the results of computational simulations of the individual-based model faithfully mirror the qualitative and quantitative properties of the solutions to the corresponding nonlinear partial differential equations. Ultimately, these results illustrate how the simple rules governing the dynamics of single cells in our individual-based model can lead to the emergence of complex spatial patterns of population growth observed in continuum models.
[ { "created": "Fri, 14 Dec 2018 12:03:46 GMT", "version": "v1" }, { "created": "Tue, 18 Dec 2018 08:48:26 GMT", "version": "v2" }, { "created": "Sat, 11 May 2019 09:59:55 GMT", "version": "v3" } ]
2019-07-15
[ [ "Chaplain", "Mark AJ", "" ], [ "Lorenzi", "Tommaso", "" ], [ "Macfarlane", "Fiona R", "" ] ]
Continuum models for the spatial dynamics of growing cell populations have been widely used to investigate the mechanisms underpinning tissue development and tumour invasion. These models consist of nonlinear partial differential equations that describe the evolution of cellular densities in response to pressure gradients generated by population growth. Little prior work has explored the relation between such continuum models and related single-cell-based models. We present here a simple stochastic individual-based model for the spatial dynamics of multicellular systems whereby cells undergo pressure-driven movement and pressure-dependent proliferation. We show that nonlinear partial differential equations commonly used to model the spatial dynamics of growing cell populations can be formally derived from the branching random walk that underlies our discrete model. Moreover, we carry out a systematic comparison between the individual-based model and its continuum counterparts, both in the case of one single cell population and in the case of multiple cell populations with different biophysical properties. The outcomes of our comparative study demonstrate that the results of computational simulations of the individual-based model faithfully mirror the qualitative and quantitative properties of the solutions to the corresponding nonlinear partial differential equations. Ultimately, these results illustrate how the simple rules governing the dynamics of single cells in our individual-based model can lead to the emergence of complex spatial patterns of population growth observed in continuum models.
1703.10643
Ann Sizemore
Ann E. Sizemore and Danielle S. Bassett
Dynamic Graph Metrics: Tutorial, Toolbox, and Tale
21 pages, 5 figures. Toolbox not yet publicly available
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The central nervous system is composed of many individual units -- from cells to areas -- that are connected with one another in a complex pattern of functional interactions that supports perception, action, and cognition. One natural and parsimonious representation of such a system is a graph in which nodes (units) are connected by edges (interactions). While applicable across spatiotemporal scales, species, and cohorts, the traditional graph approach is unable to address the complexity of time-varying connectivity patterns that may be critically important for an understanding of emotional and cognitive state, task-switching, adaptation and development, or aging and disease progression. Here we survey a set of tools from applied mathematics that offer measures to characterize dynamic graphs. Along with this survey, we offer suggestions for visualization and a publicly-available MATLAB toolbox to facilitate the application of these metrics to existing or yet-to-be acquired neuroimaging data. We illustrate the toolbox by applying it to a previously published data set of time-varying functional graphs, but note that the tools can also be applied to time-varying structural graphs or to other sorts of relational data entirely. Our aim is to provide the neuroimaging community with a useful set of tools, and an intuition regarding how to use them, for addressing emerging questions that hinge on accurate and creative analyses of dynamic graphs.
[ { "created": "Thu, 30 Mar 2017 19:16:33 GMT", "version": "v1" } ]
2017-04-03
[ [ "Sizemore", "Ann E.", "" ], [ "Bassett", "Danielle S.", "" ] ]
The central nervous system is composed of many individual units -- from cells to areas -- that are connected with one another in a complex pattern of functional interactions that supports perception, action, and cognition. One natural and parsimonious representation of such a system is a graph in which nodes (units) are connected by edges (interactions). While applicable across spatiotemporal scales, species, and cohorts, the traditional graph approach is unable to address the complexity of time-varying connectivity patterns that may be critically important for an understanding of emotional and cognitive state, task-switching, adaptation and development, or aging and disease progression. Here we survey a set of tools from applied mathematics that offer measures to characterize dynamic graphs. Along with this survey, we offer suggestions for visualization and a publicly-available MATLAB toolbox to facilitate the application of these metrics to existing or yet-to-be acquired neuroimaging data. We illustrate the toolbox by applying it to a previously published data set of time-varying functional graphs, but note that the tools can also be applied to time-varying structural graphs or to other sorts of relational data entirely. Our aim is to provide the neuroimaging community with a useful set of tools, and an intuition regarding how to use them, for addressing emerging questions that hinge on accurate and creative analyses of dynamic graphs.
1302.2906
Christoph Adami
Bj{\o}rn {\O}stman and Christoph Adami
Predicting evolution and visualizing high-dimensional fitness landscapes
12 pages, 7 figures. To appear in "Recent Advances in the Theory and Application of Fitness Landscapes" (A. Engelbrecht and H. Richter, eds.). Springer Series in Emergence, Complexity, and Computation, 2013
null
null
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The tempo and mode of an adaptive process is strongly determined by the structure of the fitness landscape that underlies it. In order to be able to predict evolutionary outcomes (even in the short term), we must know more about the nature of realistic fitness landscapes than we do today. For example, in order to know whether evolution is predominantly taking paths that move upwards in fitness and along neutral ridges, or else entails a significant number of valley crossings, we need to be able to visualize these landscapes: we must determine whether there are peaks in the landscape, where these peaks are located with respect to one another, and whether evolutionary paths can connect them. This is a difficult task because genetic fitness landscapes (as opposed to those based on traits) are high-dimensional, and tools for visualizing such landscapes are lacking. In this contribution, we focus on the predictability of evolution on rugged genetic fitness landscapes, and determine that peaks in such landscapes are highly clustered: high peaks are predominantly close to other high peaks. As a consequence, the valleys separating such peaks are shallow and narrow, such that evolutionary trajectories towards the highest peak in the landscape can be achieved via a series of valley crossings.
[ { "created": "Tue, 12 Feb 2013 20:49:54 GMT", "version": "v1" }, { "created": "Tue, 26 Feb 2013 21:24:34 GMT", "version": "v2" } ]
2013-02-28
[ [ "Østman", "Bjørn", "" ], [ "Adami", "Christoph", "" ] ]
The tempo and mode of an adaptive process is strongly determined by the structure of the fitness landscape that underlies it. In order to be able to predict evolutionary outcomes (even in the short term), we must know more about the nature of realistic fitness landscapes than we do today. For example, in order to know whether evolution is predominantly taking paths that move upwards in fitness and along neutral ridges, or else entails a significant number of valley crossings, we need to be able to visualize these landscapes: we must determine whether there are peaks in the landscape, where these peaks are located with respect to one another, and whether evolutionary paths can connect them. This is a difficult task because genetic fitness landscapes (as opposed to those based on traits) are high-dimensional, and tools for visualizing such landscapes are lacking. In this contribution, we focus on the predictability of evolution on rugged genetic fitness landscapes, and determine that peaks in such landscapes are highly clustered: high peaks are predominantly close to other high peaks. As a consequence, the valleys separating such peaks are shallow and narrow, such that evolutionary trajectories towards the highest peak in the landscape can be achieved via a series of valley crossings.
1210.3555
Danielle Bassett
Danielle S. Bassett, Nicholas F. Wymbs, M. Puck Rombach, Mason A. Porter, Peter J. Mucha, Scott T. Grafton
Task-Based Core-Periphery Organisation of Human Brain Dynamics
21 pages, 9 figures, and Supplementary Information
null
null
null
q-bio.NC cond-mat.dis-nn nlin.AO stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As a person learns a new skill, distinct synapses, brain regions, and circuits are engaged and change over time. In this paper, we develop methods to examine patterns of correlated activity across a large set of brain regions. Our goal is to identify properties that enable robust learning of a motor skill. We measure brain activity during motor sequencing and characterize network properties based on coherent activity between brain regions. Using recently developed algorithms to detect time-evolving communities, we find that the complex reconfiguration patterns of the brain's putative functional modules that control learning can be described parsimoniously by the combined presence of a relatively stiff temporal core that is composed primarily of sensorimotor and visual regions whose connectivity changes little in time and a flexible temporal periphery that is composed primarily of multimodal association regions whose connectivity changes frequently. The separation between temporal core and periphery changes over the course of training and, importantly, is a good predictor of individual differences in learning success. The core of dynamically stiff regions exhibits dense connectivity, which is consistent with notions of core-periphery organization established previously in social networks. Our results demonstrate that core-periphery organization provides an insightful way to understand how putative functional modules are linked. This, in turn, enables the prediction of fundamental human capacities, including the production of complex goal-directed behavior.
[ { "created": "Fri, 12 Oct 2012 15:50:47 GMT", "version": "v1" }, { "created": "Wed, 30 Oct 2013 15:30:56 GMT", "version": "v2" } ]
2013-10-31
[ [ "Bassett", "Danielle S.", "" ], [ "Wymbs", "Nicholas F.", "" ], [ "Rombach", "M. Puck", "" ], [ "Porter", "Mason A.", "" ], [ "Mucha", "Peter J.", "" ], [ "Grafton", "Scott T.", "" ] ]
As a person learns a new skill, distinct synapses, brain regions, and circuits are engaged and change over time. In this paper, we develop methods to examine patterns of correlated activity across a large set of brain regions. Our goal is to identify properties that enable robust learning of a motor skill. We measure brain activity during motor sequencing and characterize network properties based on coherent activity between brain regions. Using recently developed algorithms to detect time-evolving communities, we find that the complex reconfiguration patterns of the brain's putative functional modules that control learning can be described parsimoniously by the combined presence of a relatively stiff temporal core that is composed primarily of sensorimotor and visual regions whose connectivity changes little in time and a flexible temporal periphery that is composed primarily of multimodal association regions whose connectivity changes frequently. The separation between temporal core and periphery changes over the course of training and, importantly, is a good predictor of individual differences in learning success. The core of dynamically stiff regions exhibits dense connectivity, which is consistent with notions of core-periphery organization established previously in social networks. Our results demonstrate that core-periphery organization provides an insightful way to understand how putative functional modules are linked. This, in turn, enables the prediction of fundamental human capacities, including the production of complex goal-directed behavior.
q-bio/0503029
Changbong Hyeon
Changbong Hyeon and D. Thirumalai
Mechanical unfolding of RNA hairpins
23 pages, 6 Figures. in press (Proc. Natl. Acad. Sci.)
Proc. Natl. Acad. Sci. 102, 6789-6794 (2005)
10.1073/pnas.0408314102
null
q-bio.BM q-bio.QM
null
Mechanical unfolding trajectories, generated by applying constant force in optical tweezer experiments, show that RNA hairpins and the P5abc subdomain of the group I intron unfold reversibly. We use coarse-grained Go-like models for RNA hairpins to explore forced-unfolding over a broad range of temperatures. A number of predictions that are amenable to experimental tests are made. At the critical force the hairpin jumps between folded and unfolded conformations without populating any discernible intermediates. The phase diagram in the force-temperature (f,T) plane shows that the hairpin unfolds by an all-or-none process. The cooperativity of the unfolding transition increases dramatically at low temperatures. Free energy of stability, obtained from time averages of mechanical unfolding trajectories, coincides with ensemble averages, which establishes ergodicity. The hopping time between the native basin of attraction (NBA) and the unfolded basin increases dramatically along the phase boundary. Thermal unfolding is stochastic whereas mechanical unfolding occurs in "quantized steps" with great variations in the step lengths. Refolding times, upon force quench, from stretched states to the NBA are "at least an order of magnitude" greater than folding times by temperature quench. Upon force quench from stretched states the NBA is reached in at least three stages. In the initial stages the mean end-to-end distance decreases nearly continuously and only in the last stage is there a sudden transition to the NBA. Because of the generality of the results we propose that similar behavior should be observed in force quench refolding of proteins.
[ { "created": "Fri, 18 Mar 2005 19:52:45 GMT", "version": "v1" } ]
2009-11-11
[ [ "Hyeon", "Changbong", "" ], [ "Thirumalai", "D.", "" ] ]
Mechanical unfolding trajectories, generated by applying constant force in optical tweezer experiments, show that RNA hairpins and the P5abc subdomain of the group I intron unfold reversibly. We use coarse-grained Go-like models for RNA hairpins to explore forced-unfolding over a broad range of temperatures. A number of predictions that are amenable to experimental tests are made. At the critical force the hairpin jumps between folded and unfolded conformations without populating any discernible intermediates. The phase diagram in the force-temperature (f,T) plane shows that the hairpin unfolds by an all-or-none process. The cooperativity of the unfolding transition increases dramatically at low temperatures. Free energy of stability, obtained from time averages of mechanical unfolding trajectories, coincides with ensemble averages, which establishes ergodicity. The hopping time between the native basin of attraction (NBA) and the unfolded basin increases dramatically along the phase boundary. Thermal unfolding is stochastic whereas mechanical unfolding occurs in "quantized steps" with great variations in the step lengths. Refolding times, upon force quench, from stretched states to the NBA are "at least an order of magnitude" greater than folding times by temperature quench. Upon force quench from stretched states the NBA is reached in at least three stages. In the initial stages the mean end-to-end distance decreases nearly continuously and only in the last stage is there a sudden transition to the NBA. Because of the generality of the results we propose that similar behavior should be observed in force quench refolding of proteins.
1101.1556
Simon DeDeo
Simon DeDeo, David C. Krakauer, Jessica C. Flack
Evidence of strategic periodicities in collective conflict dynamics
22 pages, 7 figures, 1 table. Accepted for publication in Journal of the Royal Society Interface
J. R. Soc. Interface (2011) vol. 8, no. 62, 1260-1273
10.1098/rsif.2010.0687
SFI Working Paper #11-01-002
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze the timescales of conflict decision-making in a primate society. We present evidence for multiple, periodic timescales associated with social decision-making and behavioral patterns. We demonstrate the existence of periodicities that are not directly coupled to environmental cycles or known ultradian mechanisms. Among specific biological and socially-defined demographic classes, periodicities span timescales between hours and days, and many are not driven by exogenous or internal regularities. Our results indicate that they are instead driven by strategic responses to social interaction patterns. Analyses also reveal that a class of individuals, playing a critical functional role, policing, have a signature timescale on the order of one hour. We propose a classification of behavioral timescales analogous to those of the nervous system, with high-frequency, or $\alpha$-scale, behavior occurring on hour-long scales, through to multi-hour, or $\beta$-scale, behavior, and, finally, $\gamma$ periodicities observed on a timescale of days.
[ { "created": "Fri, 7 Jan 2011 23:39:54 GMT", "version": "v1" } ]
2011-09-21
[ [ "DeDeo", "Simon", "" ], [ "Krakauer", "David C.", "" ], [ "Flack", "Jessica C.", "" ] ]
We analyze the timescales of conflict decision-making in a primate society. We present evidence for multiple, periodic timescales associated with social decision-making and behavioral patterns. We demonstrate the existence of periodicities that are not directly coupled to environmental cycles or known ultradian mechanisms. Among specific biological and socially-defined demographic classes, periodicities span timescales between hours and days, and many are not driven by exogenous or internal regularities. Our results indicate that they are instead driven by strategic responses to social interaction patterns. Analyses also reveal that a class of individuals, playing a critical functional role, policing, have a signature timescale on the order of one hour. We propose a classification of behavioral timescales analogous to those of the nervous system, with high-frequency, or $\alpha$-scale, behavior occurring on hour-long scales, through to multi-hour, or $\beta$-scale, behavior, and, finally, $\gamma$ periodicities observed on a timescale of days.
2206.11878
Zepeng Huo
Zepeng Huo, Bobak J. Mortazavi, Theodora Chaspari, Nicolaas Deutz, Laura Ruebush, Ricardo Gutierrez-Osuna
Predicting the meal macronutrient composition from continuous glucose monitors
null
In 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), pp. 1-4. IEEE, 2019
10.1109/BHI.2019.8834488
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Sustained high levels of blood glucose in type 2 diabetes (T2DM) can have disastrous long-term health consequences. An essential component of clinical interventions for T2DM is monitoring dietary intake to keep plasma glucose levels within an acceptable range. Yet, current techniques to monitor food intake are time intensive and error prone. To address this issue, we are developing techniques to automatically monitor food intake and the composition of those foods using continuous glucose monitors (CGMs). This article presents the results of a clinical study in which participants consumed nine standardized meals with known macronutrient amounts (carbohydrate, protein, and fat) while wearing a CGM. We built a multitask neural network to estimate the macronutrient composition from the CGM signal, and compared it against a baseline linear regression. The best prediction result comes from our proposed neural network, trained with subject-dependent data, as measured by root mean squared relative error and correlation coefficient. These findings suggest that it is possible to estimate macronutrient composition from CGM signals, opening the possibility to develop automatic techniques to track food intake.
[ { "created": "Thu, 23 Jun 2022 17:41:25 GMT", "version": "v1" } ]
2022-06-24
[ [ "Huo", "Zepeng", "" ], [ "Mortazavi", "Bobak J.", "" ], [ "Chaspari", "Theodora", "" ], [ "Deutz", "Nicolaas", "" ], [ "Ruebush", "Laura", "" ], [ "Gutierrez-Osuna", "Ricardo", "" ] ]
Sustained high levels of blood glucose in type 2 diabetes (T2DM) can have disastrous long-term health consequences. An essential component of clinical interventions for T2DM is monitoring dietary intake to keep plasma glucose levels within an acceptable range. Yet, current techniques to monitor food intake are time intensive and error prone. To address this issue, we are developing techniques to automatically monitor food intake and the composition of those foods using continuous glucose monitors (CGMs). This article presents the results of a clinical study in which participants consumed nine standardized meals with known macronutrient amounts (carbohydrate, protein, and fat) while wearing a CGM. We built a multitask neural network to estimate the macronutrient composition from the CGM signal, and compared it against a baseline linear regression. The best prediction result comes from our proposed neural network, trained with subject-dependent data, as measured by root mean squared relative error and correlation coefficient. These findings suggest that it is possible to estimate macronutrient composition from CGM signals, opening the possibility to develop automatic techniques to track food intake.
2108.06666
R.K. Brojen Singh
Keilash Chirom, Md. Zubbair Malik, Pallavi Somvanshi and R.K. Brojen Singh
Network medicine in ovarian cancer: Topological properties to drug discovery
null
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
The investigation of the topological properties of the ovarian cancer network (OCN) and the roles of the hubs involved in it, by probing the network at various levels of organization, is important for understanding how the OCN is organized and, in turn, for understanding disease states. The OCN constructed from the experimentally verified genes exhibits fractal nature in the topological properties of the network and deeply rooted communities. Also, the network properties at all levels of organization obey a one-parameter scaling law, which lacks the centrality-lethality rule. We then showed that $\langle k\rangle$ can be taken as a scaling parameter, where the power-law exponent can be estimated from the ratio of network diameters, $\lambda=1+\rho\left[\frac{d_c}{d_k}\right]$. The betweenness centrality shows two distinct behaviors: one shown by high degree hubs and the other by segregated low degree nodes. The $C_B$ distribution follows power law behavior with the exponent connected to the exponents of the distributions of high and low degree nodes by $\theta\sim\left[1+\frac{1}{2}\left(\frac{1}{\epsilon_s}+\frac{1}{\epsilon_h}\right)(\lambda-1)\right]$. The absence of rich-club formation leads to the loss of a number of attractors in the network, causing the formation of weakly tied diverse functional modules that keep optimal network efficiency. The hub-knockout experiment shows that provincial hubs take major responsibility for keeping network integrity and organization. The identified key regulators are found to be provincial and connector hubs. Further, two key regulators, EPCAM and CD44, are found to be maximally overexpressed at various cancer stages (II-IV). They are also positively correlated with immune infiltrates (CD4+ T cells). Finally, a few potential drugs are identified that are related to the key regulators.
[ { "created": "Sun, 15 Aug 2021 06:44:37 GMT", "version": "v1" } ]
2021-08-17
[ [ "Chirom", "Keilash", "" ], [ "Malik", "Md. Zubbair", "" ], [ "Somvanshi", "Pallavi", "" ], [ "Singh", "R. K. Brojen", "" ] ]
The investigation of the topological properties of the ovarian cancer network (OCN) and the roles of the hubs involved in it, by probing the network at various levels of organization, is important for understanding how the OCN is organized and, in turn, for understanding disease states. The OCN constructed from the experimentally verified genes exhibits fractal nature in the topological properties of the network and deeply rooted communities. Also, the network properties at all levels of organization obey a one-parameter scaling law, which lacks the centrality-lethality rule. We then showed that $\langle k\rangle$ can be taken as a scaling parameter, where the power-law exponent can be estimated from the ratio of network diameters, $\lambda=1+\rho\left[\frac{d_c}{d_k}\right]$. The betweenness centrality shows two distinct behaviors: one shown by high degree hubs and the other by segregated low degree nodes. The $C_B$ distribution follows power law behavior with the exponent connected to the exponents of the distributions of high and low degree nodes by $\theta\sim\left[1+\frac{1}{2}\left(\frac{1}{\epsilon_s}+\frac{1}{\epsilon_h}\right)(\lambda-1)\right]$. The absence of rich-club formation leads to the loss of a number of attractors in the network, causing the formation of weakly tied diverse functional modules that keep optimal network efficiency. The hub-knockout experiment shows that provincial hubs take major responsibility for keeping network integrity and organization. The identified key regulators are found to be provincial and connector hubs. Further, two key regulators, EPCAM and CD44, are found to be maximally overexpressed at various cancer stages (II-IV). They are also positively correlated with immune infiltrates (CD4+ T cells). Finally, a few potential drugs are identified that are related to the key regulators.
2008.00028
Meijian Yang
Meijian Yang
Investigating the association between meteorological factors and the transmission and fatality of COVID-19 in the US
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A novel coronavirus disease (COVID-19) is sweeping the world and has taken away thousands of lives. As the current epicenter, the United States has the largest number of confirmed and death cases of COVID-19. Meteorological factors have been found to be associated with many respiratory diseases in past studies. In order to understand how, and during which period of time, the meteorological factors have the strongest association with the transmission and fatality of COVID-19, we analyze the correlation between each meteorological factor during different time periods within the incubation window and the confirmation and fatality rates, and develop statistical models to quantify the effects at the county level. Results show that meteorological variables except maximum wind speed during days 13-0 before the current day show the most significant correlation (P < 0.05) with the daily confirmed rate, while temperature during days 13-8 before is most significantly correlated (P < 0.05) with the daily fatality rate. Temperature is the only meteorological factor showing a dramatic positive association nationally, particularly in the southeast US where the current outbreak is most intense. The influence of temperature is remarkable on the confirmed rate, with an increase of over 5 pmp in many counties, but not as much on the fatality rate (mostly within 0.01%). Findings in this study will help in understanding the role of meteorological factors in the spreading of COVID-19 and provide insights for the public and individuals in fighting against this global epidemic.
[ { "created": "Thu, 30 Jul 2020 03:51:32 GMT", "version": "v1" } ]
2020-08-04
[ [ "Yang", "Meijian", "" ] ]
A novel coronavirus disease (COVID-19) is sweeping the world and has taken away thousands of lives. As the current epicenter, the United States has the largest number of confirmed and death cases of COVID-19. Meteorological factors have been found to be associated with many respiratory diseases in past studies. In order to understand how, and during which period of time, the meteorological factors have the strongest association with the transmission and fatality of COVID-19, we analyze the correlation between each meteorological factor during different time periods within the incubation window and the confirmation and fatality rates, and develop statistical models to quantify the effects at the county level. Results show that meteorological variables except maximum wind speed during days 13-0 before the current day show the most significant correlation (P < 0.05) with the daily confirmed rate, while temperature during days 13-8 before is most significantly correlated (P < 0.05) with the daily fatality rate. Temperature is the only meteorological factor showing a dramatic positive association nationally, particularly in the southeast US where the current outbreak is most intense. The influence of temperature is remarkable on the confirmed rate, with an increase of over 5 pmp in many counties, but not as much on the fatality rate (mostly within 0.01%). Findings in this study will help in understanding the role of meteorological factors in the spreading of COVID-19 and provide insights for the public and individuals in fighting against this global epidemic.
1807.09932
Patryk Orzechowski
Patryk Orzechowski and Jason H. Moore
EBIC: an open source software for high-dimensional and big data biclustering analyses
2 pages, 1 figure
null
null
null
q-bio.GN cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: In this paper we present the latest release of EBIC, a next-generation biclustering algorithm for mining genetic data. The major contribution of this paper is adding support for big data, making it possible to efficiently run large genomic data mining analyses. Additional enhancements include integration with R and Bioconductor and an option to remove the influence of missing values on the final result. Results: EBIC was applied to datasets of different sizes, including a large DNA methylation dataset with 436,444 rows. For the largest dataset we observed an over 6.6-fold speedup in computation time on a cluster of 8 GPUs compared to running the method on a single GPU. This demonstrates the high scalability of the algorithm. Availability: The latest version of EBIC can be downloaded from http://github.com/EpistasisLab/ebic . Installation and usage instructions are also available online.
[ { "created": "Thu, 26 Jul 2018 02:57:19 GMT", "version": "v1" } ]
2018-07-27
[ [ "Orzechowski", "Patryk", "" ], [ "Moore", "Jason H.", "" ] ]
Motivation: In this paper we present the latest release of EBIC, a next-generation biclustering algorithm for mining genetic data. The major contribution of this paper is adding support for big data, making it possible to efficiently run large genomic data mining analyses. Additional enhancements include integration with R and Bioconductor and an option to remove the influence of missing values on the final result. Results: EBIC was applied to datasets of different sizes, including a large DNA methylation dataset with 436,444 rows. For the largest dataset we observed an over 6.6-fold speedup in computation time on a cluster of 8 GPUs compared to running the method on a single GPU. This demonstrates the high scalability of the algorithm. Availability: The latest version of EBIC can be downloaded from http://github.com/EpistasisLab/ebic . Installation and usage instructions are also available online.
1710.03264
William Redman T
William T Redman
A General Approach to Coding in Early Olfactory and Visual Neural Populations
11 pages
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Recent experimental and theoretical work on neural populations belonging to two separate early sensory systems, olfaction and vision, has challenged the notion that the two operate under different computational paradigms by providing evidence for the respective neural population codes having three central, common features: they are highly redundant; they are organized such that information is carried in the identity, and not the relative timing, of the active neurons; they are capable of error correction. We present the first model that captures these three properties in a general manner, making it possible to investigate whether similar structure is present in other population codes. Our model also makes specific predictions about additional, as yet unseen, structure in such codes. If these predictions are found in real data, this would provide new evidence that such population codes are operating under more general computational principles.
[ { "created": "Mon, 9 Oct 2017 18:56:11 GMT", "version": "v1" }, { "created": "Tue, 7 Nov 2017 23:39:01 GMT", "version": "v2" }, { "created": "Sun, 12 Aug 2018 18:09:56 GMT", "version": "v3" } ]
2018-08-14
[ [ "Redman", "William T", "" ] ]
Recent experimental and theoretical work on neural populations belonging to two separate early sensory systems, olfaction and vision, has challenged the notion that the two operate under different computational paradigms by providing evidence for the respective neural population codes having three central, common features: they are highly redundant; they are organized such that information is carried in the identity, and not the relative timing, of the active neurons; they are capable of error correction. We present the first model that captures these three properties in a general manner, making it possible to investigate whether similar structure is present in other population codes. Our model also makes specific predictions about additional, as yet unseen, structure in such codes. If these predictions are found in real data, this would provide new evidence that such population codes are operating under more general computational principles.
q-bio/0311013
Pau Fern\'andez
Ricard V. Sole, Pau Fernandez, Stuart A. Kauffman
Adaptive walks in a gene network model of morphogenesis: insights into the Cambrian explosion
to appear in International Journal of Developmental Biology, special issue on Evo-Devo (2003)
null
null
null
q-bio.GN q-bio.MN q-bio.PE
null
The emergence of complex patterns of organization close to the Cambrian boundary is known to have happened over a (geologically) short period of time. It involved the rapid diversification of body plans and stands as one of the major transitions in evolution. How it took place is a controversial issue. Here we explore this problem by considering a simple model of pattern formation in multicellular organisms. By modeling gene network-based morphogenesis and its evolution through adaptive walks, we explore the question of how combinatorial explosions might have been actually involved in the Cambrian event. Here we show that a small amount of genetic complexity including both gene regulation and cell-cell signaling allows one to generate an extraordinary repertoire of stable spatial patterns of gene expression compatible with observed anteroposterior patterns in early development of metazoans. The consequences for the understanding of the tempo and mode of the Cambrian event are outlined.
[ { "created": "Mon, 10 Nov 2003 15:12:32 GMT", "version": "v1" } ]
2007-05-23
[ [ "Sole", "Ricard V.", "" ], [ "Fernandez", "Pau", "" ], [ "Kauffman", "Stuart A.", "" ] ]
The emergence of complex patterns of organization close to the Cambrian boundary is known to have happened over a (geologically) short period of time. It involved the rapid diversification of body plans and stands as one of the major transitions in evolution. How it took place is a controversial issue. Here we explore this problem by considering a simple model of pattern formation in multicellular organisms. By modeling gene network-based morphogenesis and its evolution through adaptive walks, we explore the question of how combinatorial explosions might have been actually involved in the Cambrian event. Here we show that a small amount of genetic complexity including both gene regulation and cell-cell signaling allows one to generate an extraordinary repertoire of stable spatial patterns of gene expression compatible with observed anteroposterior patterns in early development of metazoans. The consequences for the understanding of the tempo and mode of the Cambrian event are outlined.
1505.02888
Nadav M. Shnerb
David Kessler, Samir Suweis, Marco Formentin and Nadav M. Shnerb
Neutral dynamics with environmental noise: age-size statistics and species lifetimes
null
Phys. Rev. E 92, 022722 (2015)
10.1103/PhysRevE.92.022722
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neutral dynamics, where taxa are assumed to be demographically equivalent and their abundance is governed solely by the stochasticity of the underlying birth-death process, has proved itself as an important minimal model that accounts for many empirical datasets in genetics and ecology. However, the restriction of the model to demographic [${\cal{O}} ({\sqrt N})$] noise yields relatively slow dynamics that appears to be in conflict with both short-term and long-term characteristics of the observed systems. Here we analyze two of these problems - age size relationships and species extinction time - in the framework of a neutral theory with both demographic and environmental stochasticity. It turns out that environmentally induced variations of the demographic rates control the long-term dynamics and modify dramatically the predictions of the neutral theory with demographic noise only, yielding much better agreement with empirical data. We consider two prototypes of "zero mean" environmental noise, one which is balanced with regard to the arithmetic abundance, another balanced in the logarithmic (fitness) space, study their species lifetime statistics and discuss their relevance to realistic models of community dynamics.
[ { "created": "Tue, 12 May 2015 07:15:50 GMT", "version": "v1" } ]
2015-09-09
[ [ "Kessler", "David", "" ], [ "Suweis", "Samir", "" ], [ "Formentin", "Marco", "" ], [ "Shnerb", "Nadav M.", "" ] ]
Neutral dynamics, where taxa are assumed to be demographically equivalent and their abundance is governed solely by the stochasticity of the underlying birth-death process, has proved itself as an important minimal model that accounts for many empirical datasets in genetics and ecology. However, the restriction of the model to demographic [${\cal{O}} ({\sqrt N})$] noise yields relatively slow dynamics that appears to be in conflict with both short-term and long-term characteristics of the observed systems. Here we analyze two of these problems - age size relationships and species extinction time - in the framework of a neutral theory with both demographic and environmental stochasticity. It turns out that environmentally induced variations of the demographic rates control the long-term dynamics and modify dramatically the predictions of the neutral theory with demographic noise only, yielding much better agreement with empirical data. We consider two prototypes of "zero mean" environmental noise, one which is balanced with regard to the arithmetic abundance, another balanced in the logarithmic (fitness) space, study their species lifetime statistics and discuss their relevance to realistic models of community dynamics.
1505.06920
Catherine Bauge
Catherine Baug\'e, Olivier Cauvard, Sylvain Leclercq, Philippe Gal\'era (MILPAT), Karim Boum\'ediene
Modulation of transforming growth factor beta signalling pathway genes by transforming growth factor beta in human osteoarthritic chondrocytes: involvement of Sp1 in both early and late response cells to transforming growth factor beta
null
Arthritis Research & Therapy, 2010, 13 (1), pp. R23. http://arthritis-research.com/content/13/1/R23. 10.1038/sj.onc.1204808
10.1038/sj.onc.1204808
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transforming growth factor beta (TGF$\beta$) plays a central role in morphogenesis, growth, and cell differentiation. This cytokine is particularly important in cartilage, where it regulates cell proliferation and extracellular matrix synthesis. While the action of TGF$\beta$ on chondrocyte metabolism has been extensively catalogued, the modulation of specific genes that function as mediators of TGF$\beta$ signalling is poorly defined. In the current study, elements of the Smad component of the TGF$\beta$ intracellular signalling system and TGF$\beta$ receptors were characterised in human chondrocytes upon TGF$\beta$1 treatment. Human articular chondrocytes were incubated with TGF$\beta$1. Then, mRNA and protein levels of TGF$\beta$ receptors and Smads were analysed by RT-PCR and western blot analysis. The role of specific protein 1 (Sp1) was investigated by gain and loss of function (inhibitor, siRNA, expression vector). We showed that TGF$\beta$1 regulates mRNA levels of its own receptors, and of Smad3 and Smad7. It modulates TGF$\beta$ receptors post-transcriptionally by affecting their mRNA stability, but does not change the Smad-3 and Smad-7 mRNA half-life span, suggesting a potential transcriptional effect on these genes. Moreover, the transcriptional factor Sp1, which is downregulated by TGF$\beta$1, is involved in the repression of both TGF$\beta$ receptors but not in the modulation of Smad3 and Smad7. Interestingly, ectopic expression of Sp1 also permitted maintenance of an expression pattern similar to the early response to TGF$\beta$ at 24 hours of treatment. It restored the induction of Sox9 and COL2A1 and blocked the late response (repression of aggrecan, induction of COL1A1 and COL10A1). These data help to better understand the negative feedback loop in the TGF$\beta$ signalling system, and highlight an interesting role of Sp1 in regulating the TGF$\beta$ response.
[ { "created": "Tue, 26 May 2015 12:19:19 GMT", "version": "v1" } ]
2015-05-27
[ [ "Baugé", "Catherine", "", "MILPAT" ], [ "Cauvard", "Olivier", "", "MILPAT" ], [ "Leclercq", "Sylvain", "", "MILPAT" ], [ "Galéra", "Philippe", "", "MILPAT" ], [ "Boumédiene", "Karim", "" ] ]
Transforming growth factor beta (TGF$\beta$) plays a central role in morphogenesis, growth, and cell differentiation. This cytokine is particularly important in cartilage, where it regulates cell proliferation and extracellular matrix synthesis. While the action of TGF$\beta$ on chondrocyte metabolism has been extensively catalogued, the modulation of specific genes that function as mediators of TGF$\beta$ signalling is poorly defined. In the current study, elements of the Smad component of the TGF$\beta$ intracellular signalling system and TGF$\beta$ receptors were characterised in human chondrocytes upon TGF$\beta$1 treatment. Human articular chondrocytes were incubated with TGF$\beta$1. Then, mRNA and protein levels of TGF$\beta$ receptors and Smads were analysed by RT-PCR and western blot analysis. The role of specific protein 1 (Sp1) was investigated by gain and loss of function (inhibitor, siRNA, expression vector). We showed that TGF$\beta$1 regulates mRNA levels of its own receptors, and of Smad3 and Smad7. It modulates TGF$\beta$ receptors post-transcriptionally by affecting their mRNA stability, but does not change the Smad-3 and Smad-7 mRNA half-life span, suggesting a potential transcriptional effect on these genes. Moreover, the transcriptional factor Sp1, which is downregulated by TGF$\beta$1, is involved in the repression of both TGF$\beta$ receptors but not in the modulation of Smad3 and Smad7. Interestingly, ectopic expression of Sp1 also permitted maintenance of an expression pattern similar to the early response to TGF$\beta$ at 24 hours of treatment. It restored the induction of Sox9 and COL2A1 and blocked the late response (repression of aggrecan, induction of COL1A1 and COL10A1). These data help to better understand the negative feedback loop in the TGF$\beta$ signalling system, and highlight an interesting role of Sp1 in regulating the TGF$\beta$ response.
1707.09361
James Moore
James Moore, Hasan Ahmed
High Dimensional Random Walks Can Appear Low Dimensional: Application to Influenza H3N2 Evolution
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One important feature of the mammalian immune system is the highly specific binding of antigens to antibodies. Antibodies generated in response to one infection may also provide some level of cross immunity to other infections. One model to describe this cross immunity is the notion of antigenic space, which assigns each antibody and each virus a point in $\mathbb{R}^n$. Past studies have suggested the dimensionality of antigenic space, $n$, may be small. In this study we show that data from hemagglutination assays suggest a high dimensional random walk (or self-avoiding random walk). The discrepancy between our result and prior studies is due to the fact that random walks can appear low dimensional according to a variety of analyses, including principal component analysis (PCA) and multidimensional scaling (MDS).
[ { "created": "Thu, 27 Jul 2017 21:20:58 GMT", "version": "v1" } ]
2017-08-01
[ [ "Moore", "James", "" ], [ "Ahmed", "Hasan", "" ] ]
One important feature of the mammalian immune system is the highly specific binding of antigens to antibodies. Antibodies generated in response to one infection may also provide some level of cross immunity to other infections. One model to describe this cross immunity is the notion of antigenic space, which assigns each antibody and each virus a point in $\mathbb{R}^n$. Past studies have suggested the dimensionality of antigenic space, $n$, may be small. In this study we show that data from hemagglutination assays suggest a high dimensional random walk (or self-avoiding random walk). The discrepancy between our result and prior studies is due to the fact that random walks can appear low dimensional according to a variety of analyses, including principal component analysis (PCA) and multidimensional scaling (MDS).
2307.09029
Priya Chakraborty
Priya Chakraborty, Sayantari Ghosh
Quantitative Modelling of Diffusion-driven Pattern Formation in microRNA-regulated Gene Expression
31 pages
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
MicroRNAs are extensively known for post-transcriptional gene regulation and pattern formation in the embryonic developmental stage. We explore the origin of these spatio-temporal patterns mathematically, considering three different motifs here. For three scenarios, (1) simple microRNA-based mRNA regulation with a graded response in output, (2) microRNA-based mRNA regulation resulting in bistability in the dynamics, and (3) a coordinated response of microRNA (miRNA) simultaneously regulating the mRNAs of two different pools, a detailed dynamical analysis, as well as the reaction-diffusion scenario, has been carried out in the steady state and further for the transient dynamics. We have observed persistent temporal patterns, resulting from the dynamics of the motifs, that explain spatial gradients and relevant patterns formed by related proteins in development, as well as aspects of phenotypic heterogeneity in biological systems. Competitive effects of miRNA regulation have also been found to be capable of causing spatio-temporal patterns, persistent enough to direct developmental decisions. Under coordinated regulation, miRNAs are found to generate spatio-temporal patterning even from complete homogeneity in the concentration of the target protein, which may offer impactful insights into the choice of cell fates.
[ { "created": "Tue, 18 Jul 2023 07:42:09 GMT", "version": "v1" } ]
2023-07-19
[ [ "Chakraborty", "Priya", "" ], [ "Ghosh", "Sayantari", "" ] ]
MicroRNAs are extensively known for post-transcriptional gene regulation and pattern formation in the embryonic developmental stage. We explore the origin of these spatio-temporal patterns mathematically, considering three different motifs here. For three scenarios, (1) simple microRNA-based mRNA regulation with a graded response in output, (2) microRNA-based mRNA regulation resulting in bistability in the dynamics, and (3) a coordinated response of microRNA (miRNA) simultaneously regulating the mRNAs of two different pools, a detailed dynamical analysis, as well as the reaction-diffusion scenario, has been carried out in the steady state and further for the transient dynamics. We have observed persistent temporal patterns, resulting from the dynamics of the motifs, that explain spatial gradients and relevant patterns formed by related proteins in development, as well as aspects of phenotypic heterogeneity in biological systems. Competitive effects of miRNA regulation have also been found to be capable of causing spatio-temporal patterns, persistent enough to direct developmental decisions. Under coordinated regulation, miRNAs are found to generate spatio-temporal patterning even from complete homogeneity in the concentration of the target protein, which may offer impactful insights into the choice of cell fates.
1711.02739
Xiangming Zhang
Xiangming Zhang and Zhihua Liu
Bifurcation analysis of an age structured HIV infection model with both virus-to-cell and cell-to-cell transmissions
arXiv admin note: text overlap with arXiv:1711.01595, arXiv:1711.01599
null
10.1142/S0218127418501092
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We carry out a mathematical analysis of an age structured HIV infection model with both virus-to-cell and cell-to-cell transmissions to understand the dynamical behavior of HIV infection in vivo. In the model, we consider the proliferation of uninfected CD4+ T cells by a logistic function, and the infected CD4+ T cells are assumed to have an infection-age structure. Our main results concern the Hopf bifurcation of the model, obtained by using the theory of integrated semigroups and the Hopf bifurcation theory for semilinear equations with nondense domain. Bifurcation analysis indicates that there exist parameter values for which this HIV infection model has a non-trivial periodic solution bifurcating from the positive equilibrium. Numerical simulations are also carried out.
[ { "created": "Sun, 5 Nov 2017 14:26:44 GMT", "version": "v1" } ]
2018-09-26
[ [ "Zhang", "Xiangming", "" ], [ "Liu", "Zhihua", "" ] ]
We carry out a mathematical analysis of an age structured HIV infection model with both virus-to-cell and cell-to-cell transmissions to understand the dynamical behavior of HIV infection in vivo. In the model, we consider the proliferation of uninfected CD4+ T cells by a logistic function, and the infected CD4+ T cells are assumed to have an infection-age structure. Our main results concern the Hopf bifurcation of the model, obtained by using the theory of integrated semigroups and the Hopf bifurcation theory for semilinear equations with nondense domain. Bifurcation analysis indicates that there exist parameter values for which this HIV infection model has a non-trivial periodic solution bifurcating from the positive equilibrium. Numerical simulations are also carried out.
2101.01607
Mark Sinzger
Mark Sinzger, Maximilian Gehri, Heinz Koeppl
Poisson channel with binary Markov input and average sojourn time constraint
This article was accepted for publication by IEEE, ISIT 2020
null
10.1109/ISIT44484.2020.9174360
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
A minimal model for gene expression, consisting of a switchable promoter together with the resulting messenger RNA, is equivalent to a Poisson channel with a binary Markovian input process. Determining its capacity is an optimization problem with respect to two parameters: the average sojourn times of the promoter's active (ON) and inactive (OFF) state. An expression for the mutual information is found by solving the associated filtering problem analytically on the level of distributions. For fixed peak power, three bandwidth-like constraints are imposed by lower-bounding (i) the average sojourn times (ii) the autocorrelation time and (iii) the average time until a transition. OFF-favoring optima are found for all three constraints, as commonly encountered for the Poisson channel. In addition, constraint (i) exhibits a region that favors the ON state, and (iii) shows ON-favoring local optima.
[ { "created": "Tue, 5 Jan 2021 15:47:55 GMT", "version": "v1" } ]
2021-01-06
[ [ "Sinzger", "Mark", "" ], [ "Gehri", "Maximilian", "" ], [ "Koeppl", "Heinz", "" ] ]
A minimal model for gene expression, consisting of a switchable promoter together with the resulting messenger RNA, is equivalent to a Poisson channel with a binary Markovian input process. Determining its capacity is an optimization problem with respect to two parameters: the average sojourn times of the promoter's active (ON) and inactive (OFF) state. An expression for the mutual information is found by solving the associated filtering problem analytically on the level of distributions. For fixed peak power, three bandwidth-like constraints are imposed by lower-bounding (i) the average sojourn times (ii) the autocorrelation time and (iii) the average time until a transition. OFF-favoring optima are found for all three constraints, as commonly encountered for the Poisson channel. In addition, constraint (i) exhibits a region that favors the ON state, and (iii) shows ON-favoring local optima.
0903.3451
Jun Kitazono
Jun Kitazono, Toshiaki Omori, Masato Okada
Neural network model with discrete and continuous information representation
15 pages, 5 figures
null
10.1143/JPSJ.78.114801
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An associative memory model and a neural network model with a Mexican-hat type interaction are the two most typical attractor networks used in artificial neural network models. The associative memory model has discretely distributed fixed-point attractors, and achieves a discrete information representation. On the other hand, a neural network model with a Mexican-hat type interaction uses a line attractor to achieve a continuous information representation, which can be seen in the working memory in the prefrontal cortex and columnar activity in the visual cortex. In the present study, we propose a neural network model that achieves both discrete and continuous information representation. We use a statistical-mechanical analysis to find that a localized retrieval phase exists in the proposed model, where the memory pattern is retrieved in a localized subpopulation of the network. In the localized retrieval phase, the discrete and continuous information representation is achieved by using the orthogonality of the memory patterns and the neutral stability of fixed points along the positions of the localized retrieval. The obtained phase diagram suggests that the antiferromagnetic interaction and the external field are important for generating the localized retrieval phase.
[ { "created": "Fri, 20 Mar 2009 05:17:39 GMT", "version": "v1" } ]
2015-05-13
[ [ "Kitazono", "Jun", "" ], [ "Omori", "Toshiaki", "" ], [ "Okada", "Masato", "" ] ]
An associative memory model and a neural network model with a Mexican-hat type interaction are the two most typical attractor networks used in artificial neural network models. The associative memory model has discretely distributed fixed-point attractors, and achieves a discrete information representation. On the other hand, a neural network model with a Mexican-hat type interaction uses a line attractor to achieve a continuous information representation, which can be seen in the working memory in the prefrontal cortex and columnar activity in the visual cortex. In the present study, we propose a neural network model that achieves both discrete and continuous information representation. We use a statistical-mechanical analysis to find that a localized retrieval phase exists in the proposed model, where the memory pattern is retrieved in a localized subpopulation of the network. In the localized retrieval phase, the discrete and continuous information representation is achieved by using the orthogonality of the memory patterns and the neutral stability of fixed points along the positions of the localized retrieval. The obtained phase diagram suggests that the antiferromagnetic interaction and the external field are important for generating the localized retrieval phase.
q-bio/0512043
Cyrill Muratov
R. E. Lee DeVille, Cyrill B. Muratov, Eric Vanden-Eijnden
Non-meanfield deterministic limits in chemical reaction kinetics far from equilibrium
4 pages, 4 figures (submitted to Phys. Rev. Lett.)
J. Chem. Phys. 124, 231102 (2006).
10.1063/1.2217013
null
q-bio.QM q-bio.SC
null
A general mechanism is proposed by which small intrinsic fluctuations in a system far from equilibrium can result in nearly deterministic dynamical behaviors which are markedly distinct from those realized in the meanfield limit. The mechanism is demonstrated for the kinetic Monte-Carlo version of the Schnakenberg reaction where we identified a scaling limit in which the global deterministic bifurcation picture is fundamentally altered by fluctuations. Numerical simulations of the model are found to be in quantitative agreement with theoretical predictions.
[ { "created": "Sun, 25 Dec 2005 15:42:03 GMT", "version": "v1" } ]
2015-06-26
[ [ "DeVille", "R. E. Lee", "" ], [ "Muratov", "Cyrill B.", "" ], [ "Vanden-Eijnden", "Eric", "" ] ]
A general mechanism is proposed by which small intrinsic fluctuations in a system far from equilibrium can result in nearly deterministic dynamical behaviors which are markedly distinct from those realized in the meanfield limit. The mechanism is demonstrated for the kinetic Monte-Carlo version of the Schnakenberg reaction where we identified a scaling limit in which the global deterministic bifurcation picture is fundamentally altered by fluctuations. Numerical simulations of the model are found to be in quantitative agreement with theoretical predictions.
1701.02272
Tom Portegys
Thomas E. Portegys
Morphognosis: the shape of knowledge in space and time
null
null
null
null
q-bio.NC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial intelligence research to a great degree focuses on the brain and behaviors that the brain generates. But the brain, an extremely complex structure resulting from millions of years of evolution, can be viewed as a solution to problems posed by an environment existing in space and time. The environment generates signals that produce sensory events within an organism. Building an internal spatial and temporal model of the environment allows an organism to navigate and manipulate the environment. Higher intelligence might be the ability to process information coming from a larger extent of space-time. In keeping with nature's penchant for extending rather than replacing, the purpose of the mammalian neocortex might then be to record events from distant reaches of space and time and render them, as though yet near and present, to the older, deeper brain whose instinctual roles have changed little over eons. Here this notion is embodied in a model called morphognosis (morpho = shape and gnosis = knowledge). Its basic structure is a pyramid of event recordings called a morphognostic. At the apex of the pyramid are the most recent and nearby events. Receding from the apex are less recent and possibly more distant events. A morphognostic can thus be viewed as a structure of progressively larger chunks of space-time knowledge. A set of morphognostics forms long-term memories that are learned by exposure to the environment. A cellular automaton is used as the platform to investigate the morphognosis model, using a simulated organism that learns to forage in its world for food, build a nest, and play the game of Pong.
[ { "created": "Thu, 5 Jan 2017 23:10:54 GMT", "version": "v1" }, { "created": "Sat, 4 Mar 2017 18:20:10 GMT", "version": "v2" } ]
2017-03-07
[ [ "Portegys", "Thomas E.", "" ] ]
Artificial intelligence research to a great degree focuses on the brain and behaviors that the brain generates. But the brain, an extremely complex structure resulting from millions of years of evolution, can be viewed as a solution to problems posed by an environment existing in space and time. The environment generates signals that produce sensory events within an organism. Building an internal spatial and temporal model of the environment allows an organism to navigate and manipulate the environment. Higher intelligence might be the ability to process information coming from a larger extent of space-time. In keeping with nature's penchant for extending rather than replacing, the purpose of the mammalian neocortex might then be to record events from distant reaches of space and time and render them, as though yet near and present, to the older, deeper brain whose instinctual roles have changed little over eons. Here this notion is embodied in a model called morphognosis (morpho = shape and gnosis = knowledge). Its basic structure is a pyramid of event recordings called a morphognostic. At the apex of the pyramid are the most recent and nearby events. Receding from the apex are less recent and possibly more distant events. A morphognostic can thus be viewed as a structure of progressively larger chunks of space-time knowledge. A set of morphognostics forms long-term memories that are learned by exposure to the environment. A cellular automaton is used as the platform to investigate the morphognosis model, using a simulated organism that learns to forage in its world for food, build a nest, and play the game of Pong.
2106.06506
Abagael Sykes
Abagael L. Sykes, Gustavo S. Silva, Derald J. Holtkamp, Broc W. Mauch, Onyekachukwu Osemeke, Daniel C.L. Linhares, Gustavo Machado
Interpretable machine learning applied to on-farm biosecurity and porcine reproductive and respiratory syndrome virus
null
null
10.1111/tbed.14369
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Effective biosecurity practices in swine production are key in preventing the introduction and dissemination of infectious pathogens. Ideally, biosecurity practices should be chosen by their impact on bio-containment and bio-exclusion; however, quantitative supporting evidence is often unavailable. Therefore, the development of methodologies capable of quantifying and ranking biosecurity practices according to their efficacy in reducing risk has the potential to facilitate better-informed choices. Using survey data on biosecurity practices, farm demographics, and previous outbreaks from 139 herds, a set of machine learning algorithms was trained to classify farms by porcine reproductive and respiratory syndrome virus status, depending on their biosecurity practices, to produce a predicted outbreak risk. A novel interpretable machine learning toolkit, MrIML-biosecurity, was developed to benchmark farms and production systems by predicted risk, and to quantify the impact of biosecurity practices on disease risk at individual farms. Quantifying the variable impact on predicted risk, 50% of 42 variables were associated with fomite spread while 31% were associated with local transmission. Machine learning interpretations identified similar results, finding substantial contributions to predicted outbreak risk from biosecurity practices relating to: the turnover and number of employees; the surrounding density of swine premises and pigs; the sharing of trailers; distance from the public road; and production type. In addition, the development of individualized biosecurity assessments provides the opportunity to guide biosecurity implementation on a case-by-case basis. Finally, the flexibility of the MrIML-biosecurity toolkit gives it the potential to be applied to wider areas of biosecurity benchmarking, addressing weaknesses in other livestock systems and industry-relevant diseases.
[ { "created": "Fri, 11 Jun 2021 17:01:40 GMT", "version": "v1" }, { "created": "Fri, 19 Nov 2021 19:56:30 GMT", "version": "v2" } ]
2021-11-23
[ [ "Sykes", "Abagael L.", "" ], [ "Silva", "Gustavo S.", "" ], [ "Holtkamp", "Derald J.", "" ], [ "Mauch", "Broc W.", "" ], [ "Osemeke", "Onyekachukwu", "" ], [ "Linhares", "Daniel C. L.", "" ], [ "Machado", "Gustavo", "" ] ]
Effective biosecurity practices in swine production are key in preventing the introduction and dissemination of infectious pathogens. Ideally, biosecurity practices should be chosen by their impact on bio-containment and bio-exclusion; however, quantitative supporting evidence is often unavailable. Therefore, the development of methodologies capable of quantifying and ranking biosecurity practices according to their efficacy in reducing risk has the potential to facilitate better-informed choices. Using survey data on biosecurity practices, farm demographics, and previous outbreaks from 139 herds, a set of machine learning algorithms was trained to classify farms by porcine reproductive and respiratory syndrome virus status, depending on their biosecurity practices, to produce a predicted outbreak risk. A novel interpretable machine learning toolkit, MrIML-biosecurity, was developed to benchmark farms and production systems by predicted risk, and to quantify the impact of biosecurity practices on disease risk at individual farms. Quantifying the variable impact on predicted risk, 50% of 42 variables were associated with fomite spread while 31% were associated with local transmission. Machine learning interpretations identified similar results, finding substantial contributions to predicted outbreak risk from biosecurity practices relating to: the turnover and number of employees; the surrounding density of swine premises and pigs; the sharing of trailers; distance from the public road; and production type. In addition, the development of individualized biosecurity assessments provides the opportunity to guide biosecurity implementation on a case-by-case basis. Finally, the flexibility of the MrIML-biosecurity toolkit gives it the potential to be applied to wider areas of biosecurity benchmarking, addressing weaknesses in other livestock systems and industry-relevant diseases.
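The general workflow in the abstract above — train a classifier on biosecurity survey features, then rank the features by their contribution to predicted risk — can be sketched with standard tools. This stands in for the MrIML-biosecurity toolkit, whose actual API differs; the feature names, data, and outcome rule below are all fabricated for illustration.

```python
# Illustrative sketch: classify (synthetic) farms by outbreak status from
# biosecurity "practice" features, then rank features by permutation
# importance. All data here is fabricated; this is not the MrIML toolkit.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n_farms, n_features = 139, 6
feature_names = [f"practice_{i}" for i in range(n_features)]  # hypothetical
X = rng.random((n_farms, n_features))
# Outbreak status driven mostly by the first two "practices".
y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.standard_normal(n_farms) > 0.8).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = sorted(zip(feature_names, result.importances_mean),
                 key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```

The same ranking step, applied per farm (e.g., with local explanation methods), is what makes individualized biosecurity assessments possible.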
1205.6598
Gasper Tkacik
Ga\v{s}per Tka\v{c}ik and Einat Granot-Atedgi and Ronen Segev and Elad Schneidman
Retinal metric: a stimulus distance measure derived from population neural responses
5 pages, 4 figures, to appear in Phys Rev Lett
Phys Rev Lett 110 (2013): 058104
10.1103/PhysRevLett.110.058104
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability of the organism to distinguish between various stimuli is limited by the structure and noise in the population code of its sensory neurons. Here we infer a distance measure on the stimulus space directly from the recorded activity of 100 neurons in the salamander retina. In contrast to previously used measures of stimulus similarity, this "neural metric" tells us how distinguishable a pair of stimulus clips is to the retina, given the noise in the neural population response. We show that the retinal distance strongly deviates from Euclidean, or any static metric, yet has a simple structure: we identify the stimulus features that the neural population is jointly sensitive to, and show the SVM-like kernel function relating the stimulus and neural response spaces. We show that the non-Euclidean nature of the retinal distance has important consequences for neural decoding.
[ { "created": "Wed, 30 May 2012 09:29:44 GMT", "version": "v1" }, { "created": "Sat, 8 Dec 2012 08:07:53 GMT", "version": "v2" } ]
2013-06-14
[ [ "Tkačik", "Gašper", "" ], [ "Granot-Atedgi", "Einat", "" ], [ "Segev", "Ronen", "" ], [ "Schneidman", "Elad", "" ] ]
The ability of the organism to distinguish between various stimuli is limited by the structure and noise in the population code of its sensory neurons. Here we infer a distance measure on the stimulus space directly from the recorded activity of 100 neurons in the salamander retina. In contrast to previously used measures of stimulus similarity, this "neural metric" tells us how distinguishable a pair of stimulus clips is to the retina, given the noise in the neural population response. We show that the retinal distance strongly deviates from Euclidean, or any static metric, yet has a simple structure: we identify the stimulus features that the neural population is jointly sensitive to, and show the SVM-like kernel function relating the stimulus and neural response spaces. We show that the non-Euclidean nature of the retinal distance has important consequences for neural decoding.
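The idea of a noise-limited "neural metric" in the abstract above can be illustrated with a minimal sketch: a distance between two stimuli defined by how discriminable their population responses are, given trial-to-trial noise. The linear-discriminant d'-style distance used here is an assumption for illustration; the paper infers its metric directly from recorded retinal responses, and the simulated data below is fabricated.

```python
# Hypothetical sketch of a neural metric: distance between two stimuli is
# the discriminability of their noisy population responses (d'-style,
# via a pooled noise covariance). Not the paper's inferred metric.
import numpy as np

def neural_distance(resp_a, resp_b):
    """d'-style distance between two (trials x neurons) response sets."""
    mu = resp_a.mean(axis=0) - resp_b.mean(axis=0)
    # Pooled noise covariance, regularized for invertibility.
    cov = 0.5 * (np.cov(resp_a.T) + np.cov(resp_b.T))
    cov += 1e-6 * np.eye(cov.shape[0])
    return float(np.sqrt(mu @ np.linalg.solve(cov, mu)))

rng = np.random.default_rng(1)
n_trials, n_neurons = 200, 10
base = rng.random(n_neurons)
resp_a = base + 0.1 * rng.standard_normal((n_trials, n_neurons))
resp_b = base + 0.5 + 0.1 * rng.standard_normal((n_trials, n_neurons))  # shifted stimulus
resp_c = base + 0.1 * rng.standard_normal((n_trials, n_neurons))        # same stimulus

print(neural_distance(resp_a, resp_b))  # large: easily distinguishable
print(neural_distance(resp_a, resp_c))  # small: near-indistinguishable
```

Because the noise covariance can depend on the stimulus, such a distance is generally non-Euclidean, which is the property the abstract emphasizes.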
1610.06033
Khalil Cherifi
Khalil Cherifi, El Houssein Boufous, Hassan Boubaker, Fouad Msanda
Comparative Salt Tolerance Study of Some Acacia Species at Seed Germination Stage
9 pages, 3 figures
Asian Journal of Plant Sciences 15(3-4) (2016) 66-74
10.3923/ajps.2016.66.74
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Objective: The purpose of this study was to assess and compare the seed germination response of six Acacia species under different NaCl concentrations in order to explore opportunities for selecting and breeding salt-tolerant genotypes. Methodology: Germination of seeds was evaluated under salt stress using 5 treatment levels: 0, 100, 200, 300 and 400 mM of NaCl. Corrected germination rate (GC), germination rate index (GRI) and mean germination time (MGT) were recorded over 10 days. Results: The results indicated that germination was significantly reduced in all species with increasing NaCl concentration. However, significant interspecific variation in salt tolerance was observed. The greatest variability in tolerance occurred at moderate salt stress (200 mM of NaCl), and the decrease in germination was more pronounced in A. cyanophylla and A. cyclops. A. raddiana remained the most interesting species: it preserved the highest germination percentage (GC = 80%) and velocity among all species studied, even at the high salt levels. This species exhibited a particular adaptability to saline environments, at least at this stage of the life cycle, and could be recommended for plantation establishment in salt-affected areas. On the other hand, when ungerminated seeds were transferred from NaCl treatments to distilled water, they largely recovered their germination without a lag period and at high speed. This indicated that the germination inhibition was related to a reversible osmotic stress that induced dormancy rather than to specific ion toxicity. Conclusion: This ability to germinate after exposure to higher concentrations of NaCl suggests that the studied species, especially the most tolerant, would be able to germinate in salt-affected soils and could be used for the rehabilitation of damaged arid zones.
[ { "created": "Wed, 19 Oct 2016 14:32:20 GMT", "version": "v1" } ]
2016-10-20
[ [ "Cherifi", "Khalil", "" ], [ "Boufous", "El Houssein", "" ], [ "Boubaker", "Hassan", "" ], [ "Msanda", "Fouad", "" ] ]
Objective: The purpose of this study was to assess and compare the seed germination response of six Acacia species under different NaCl concentrations in order to explore opportunities for selecting and breeding salt-tolerant genotypes. Methodology: Germination of seeds was evaluated under salt stress using 5 treatment levels: 0, 100, 200, 300 and 400 mM of NaCl. Corrected germination rate (GC), germination rate index (GRI) and mean germination time (MGT) were recorded over 10 days. Results: The results indicated that germination was significantly reduced in all species with increasing NaCl concentration. However, significant interspecific variation in salt tolerance was observed. The greatest variability in tolerance occurred at moderate salt stress (200 mM of NaCl), and the decrease in germination was more pronounced in A. cyanophylla and A. cyclops. A. raddiana remained the most interesting species: it preserved the highest germination percentage (GC = 80%) and velocity among all species studied, even at the high salt levels. This species exhibited a particular adaptability to saline environments, at least at this stage of the life cycle, and could be recommended for plantation establishment in salt-affected areas. On the other hand, when ungerminated seeds were transferred from NaCl treatments to distilled water, they largely recovered their germination without a lag period and at high speed. This indicated that the germination inhibition was related to a reversible osmotic stress that induced dormancy rather than to specific ion toxicity. Conclusion: This ability to germinate after exposure to higher concentrations of NaCl suggests that the studied species, especially the most tolerant, would be able to germinate in salt-affected soils and could be used for the rehabilitation of damaged arid zones.
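The germination indices named in the abstract above can be computed from daily germination counts. The daily counts below are fabricated for illustration; MGT follows the standard formula MGT = Σ(nᵢ·tᵢ)/Σnᵢ, while GRI definitions vary between authors, so the variant here (sum of daily germination percentages divided by day number) is one common choice and may differ from the one the authors used.

```python
# Sketch of germination indices from fabricated daily counts over 10 days.
# MGT = sum(n_i * t_i) / sum(n_i)  (standard mean germination time);
# GRI here = sum over days of (daily germination % / day number), one
# common variant that may differ from the authors' definition.
daily_counts = [0, 2, 5, 8, 4, 1, 0, 0, 0, 0]  # seeds germinating on days 1..10
n_seeds = 25

total = sum(daily_counts)
germination_pct = 100.0 * total / n_seeds
mgt = sum(n * day for day, n in enumerate(daily_counts, start=1)) / total
gri = sum(100.0 * n / n_seeds / day for day, n in enumerate(daily_counts, start=1))

print(f"Germination: {germination_pct:.1f}%  MGT: {mgt:.2f} days  GRI: {gri:.2f}")
```

A salt treatment that delays but does not prevent germination raises MGT and lowers GRI while leaving the final percentage unchanged, which is why the study tracks all three measures.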