id: stringlengths (9 to 13)
submitter: stringlengths (4 to 48)
authors: stringlengths (4 to 9.62k)
title: stringlengths (4 to 343)
comments: stringlengths (2 to 480)
journal-ref: stringlengths (9 to 309)
doi: stringlengths (12 to 138)
report-no: stringclasses (277 values)
categories: stringlengths (8 to 87)
license: stringclasses (9 values)
orig_abstract: stringlengths (27 to 3.76k)
versions: listlengths (1 to 15)
update_date: stringlengths (10 to 10)
authors_parsed: listlengths (1 to 147)
abstract: stringlengths (24 to 3.75k)
1610.02584
Krishna Garikipati
Krishna Garikipati
Perspectives on the mathematics of biological patterning and morphogenesis
35 pages, 17 figures. For supplementary movies, see the version published in J. Mech Phys Solids (below)
Journal of the Mechanics and Physics of Solids, 99, 192-210, 2017
10.1016/j.jmps.2016.11.013
null
q-bio.CB q-bio.TO
http://creativecommons.org/licenses/by-nc-sa/4.0/
A central question in developmental biology is how size and position are determined. The genetic code carries instructions on how to control these properties in order to regulate the pattern and morphology of structures in the developing organism. Transcription and protein translation mechanisms implement these instructions. However, this cannot happen without some manner of sampling of epigenetic information on the current patterns and morphological forms of structures in the organism. Any rigorous description of space- and time-varying patterns and morphological forms reduces to one among various classes of spatio-temporal partial differential equations. Reaction-transport equations represent one such class. Starting from simple Fickian diffusion, the incorporation of reaction, phase segregation and advection terms can represent many of the patterns seen in the animal and plant kingdoms. Morphological form, requiring the development of three-dimensional structure, also can be represented by these equations of mass transport, albeit to a limited degree. The recognition that physical forces play controlling roles in shaping tissues leads to the conclusion that (nonlinear) elasticity governs the development of morphological form. In this setting, inhomogeneous growth drives the elasticity problem. The combination of reaction-transport equations with those of elasto-growth makes accessible a potentially unlimited spectrum of patterning and morphogenetic phenomena in developmental biology. This perspective communication is a survey of the partial differential equations of mathematical physics that have been proposed to govern patterning and morphogenesis in developmental biology. Several numerical examples are included to illustrate these equations and the corresponding physics, with the intention of providing physical insight wherever possible.
[ { "created": "Sat, 8 Oct 2016 21:16:09 GMT", "version": "v1" }, { "created": "Fri, 2 Dec 2016 14:29:52 GMT", "version": "v2" } ]
2016-12-05
[ [ "Garikipati", "Krishna", "" ] ]
A central question in developmental biology is how size and position are determined. The genetic code carries instructions on how to control these properties in order to regulate the pattern and morphology of structures in the developing organism. Transcription and protein translation mechanisms implement these instructions. However, this cannot happen without some manner of sampling of epigenetic information on the current patterns and morphological forms of structures in the organism. Any rigorous description of space- and time-varying patterns and morphological forms reduces to one among various classes of spatio-temporal partial differential equations. Reaction-transport equations represent one such class. Starting from simple Fickian diffusion, the incorporation of reaction, phase segregation and advection terms can represent many of the patterns seen in the animal and plant kingdoms. Morphological form, requiring the development of three-dimensional structure, also can be represented by these equations of mass transport, albeit to a limited degree. The recognition that physical forces play controlling roles in shaping tissues leads to the conclusion that (nonlinear) elasticity governs the development of morphological form. In this setting, inhomogeneous growth drives the elasticity problem. The combination of reaction-transport equations with those of elasto-growth makes accessible a potentially unlimited spectrum of patterning and morphogenetic phenomena in developmental biology. This perspective communication is a survey of the partial differential equations of mathematical physics that have been proposed to govern patterning and morphogenesis in developmental biology. Several numerical examples are included to illustrate these equations and the corresponding physics, with the intention of providing physical insight wherever possible.
2002.10626
Liang Yu
Bo Li, Junying Zhang, Liang Yu
Identification and Validation of the SNV Biomarkers Based on Multi-Dimensional Patterns
null
null
null
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Single nucleotide variants (SNVs) are detected as different distributions of DNA samples of distinct types of cancer patients. Even though, it is an exacting task to select the appropriate method to identify cancer to the greatest extent of SNVs. Results: In this paper, we proposed a biomarker concept based on SNV patterns in different feature dimensions. Raw dataset (2761 samples) consisting of twelve different cancers was obtained from TCGA (The Cancer Genome Atlas). After preliminary screening of 562,321 DNA mutation sites in the samples, the mutation sites were extracted and characterized by cancer types in six different SNV feature dimensions. In this study, we found that the extracted features showed similar distribution in the cluster center of the disease type of the samples. After the initial processing of the raw data, the sample was more focused on the subtype distribution of the cancer or the cancer at the SNV level. We used k-nearest neighbors (KNN) to classify the extracted features and Leave-One-Out cross verified them. The accuracy of classifying is stable at around 97% and reached 97.43% at the highest. During the validation phase, we found validated oncogenes in the loci of the features with the highest importance among nine cancers. Conclusions: In summary, the samples showed consistent patterns according to the cancer in which it belongs. It is feasible to classify the cancer of the sample by the distribution of different dimensions of the SNVs and has a high accuracy. And has potential implications for the discovery of cancer-causing genes.
[ { "created": "Tue, 25 Feb 2020 02:27:06 GMT", "version": "v1" } ]
2020-02-26
[ [ "Li", "Bo", "" ], [ "Zhang", "Junying", "" ], [ "Yu", "Liang", "" ] ]
Background: Single nucleotide variants (SNVs) show different distributions across DNA samples from patients with distinct types of cancer. Even so, selecting an appropriate method to identify cancer from SNVs to the greatest extent is an exacting task. Results: In this paper, we propose a biomarker concept based on SNV patterns in different feature dimensions. A raw dataset (2761 samples) covering twelve different cancers was obtained from TCGA (The Cancer Genome Atlas). After preliminary screening of 562,321 DNA mutation sites in the samples, the mutation sites were extracted and characterized by cancer type in six different SNV feature dimensions. We found that the extracted features showed similar distributions around the cluster center of each sample's disease type. After this initial processing of the raw data, the samples were more clearly grouped by cancer, or cancer subtype, at the SNV level. We used k-nearest neighbors (KNN) to classify the extracted features and verified the results with leave-one-out cross-validation. The classification accuracy is stable at around 97% and reached 97.43% at its highest. During the validation phase, we found validated oncogenes at the loci of the features with the highest importance among nine cancers. Conclusions: In summary, the samples showed consistent patterns according to the cancer to which they belong. Classifying a sample's cancer by the distribution of SNVs across different dimensions is feasible and highly accurate, and has potential implications for the discovery of cancer-causing genes.
1010.2975
Anatoly Ruvinsky
Anatoly M. Ruvinsky, Tatsiana Kirys, Alexander V. Tuzikov, and Ilya A. Vakser
Side-chain conformational changes upon protein-protein association
21 pages, 6 figures
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conformational changes upon protein-protein association are the key element of the binding mechanism. The study presents a systematic large-scale analysis of such conformational changes in the side chains. The results indicate that short and long side chains have different propensities for the conformational changes. Long side chains with three or more dihedral angles are often subject to large conformational transition. Shorter residues with one or two dihedral angles typically undergo local conformational changes not leading to a conformational transition. The relationship between the local readjustments and the equilibrium fluctuations of a side chain around its unbound conformation is suggested. Most of the side chains undergo larger changes in the dihedral angle most distant from the backbone. The amino acids with symmetric aromatic (Phe and Tyr) and charged (Asp and Glu) groups show the opposite trend where the near-backbone dihedral angles change the most. The frequencies of the core-to-surface interface transitions of six nonpolar residues and Tyr exceed the frequencies of the opposite, surface-to-core transitions. The binding increases both polar and nonpolar interface areas. However, the increase of the nonpolar area is larger for all considered classes of protein complexes. The results suggest that the protein association perturbs the unbound interfaces to increase the hydrophobic forces. The results facilitate better understanding of the conformational changes in proteins and suggest directions for efficient conformational sampling in docking protocols.
[ { "created": "Thu, 14 Oct 2010 16:52:10 GMT", "version": "v1" } ]
2010-10-15
[ [ "Ruvinsky", "Anatoly M.", "" ], [ "Kirys", "Tatsiana", "" ], [ "Tuzikov", "Alexander V.", "" ], [ "Vakser", "Ilya A.", "" ] ]
Conformational changes upon protein-protein association are the key element of the binding mechanism. The study presents a systematic large-scale analysis of such conformational changes in the side chains. The results indicate that short and long side chains have different propensities for the conformational changes. Long side chains with three or more dihedral angles are often subject to large conformational transition. Shorter residues with one or two dihedral angles typically undergo local conformational changes not leading to a conformational transition. The relationship between the local readjustments and the equilibrium fluctuations of a side chain around its unbound conformation is suggested. Most of the side chains undergo larger changes in the dihedral angle most distant from the backbone. The amino acids with symmetric aromatic (Phe and Tyr) and charged (Asp and Glu) groups show the opposite trend where the near-backbone dihedral angles change the most. The frequencies of the core-to-surface interface transitions of six nonpolar residues and Tyr exceed the frequencies of the opposite, surface-to-core transitions. The binding increases both polar and nonpolar interface areas. However, the increase of the nonpolar area is larger for all considered classes of protein complexes. The results suggest that the protein association perturbs the unbound interfaces to increase the hydrophobic forces. The results facilitate better understanding of the conformational changes in proteins and suggest directions for efficient conformational sampling in docking protocols.
2006.03737
Ding Zhou
Xue-Xin Wei, Ding Zhou, Andres Grosmark, Zaki Ajabi, Fraser Sparks, Pengcheng Zhou, Mark Brandon, Attila Losonczy, Liam Paninski
A zero-inflated gamma model for deconvolved calcium imaging traces
Accepted for publication in Neurons, Behavior, Data analysis, and Theory
Neurons, Behavior, Data Analysis, and Theory, 2020
10.1101/637652
null
q-bio.NC stat.ML
http://creativecommons.org/licenses/by/4.0/
Calcium imaging is a critical tool for measuring the activity of large neural populations. Much effort has been devoted to developing "pre-processing" tools for calcium video data, addressing the important issues of e.g., motion correction, denoising, compression, demixing, and deconvolution. However, statistical modeling of deconvolved calcium signals (i.e., the estimated activity extracted by a pre-processing pipeline) is just as critical for interpreting calcium measurements, and for incorporating these observations into downstream probabilistic encoding and decoding models. Surprisingly, these issues have to date received significantly less attention. In this work we examine the statistical properties of the deconvolved activity estimates, and compare probabilistic models for these random signals. In particular, we propose a zero-inflated gamma (ZIG) model, which characterizes the calcium responses as a mixture of a gamma distribution and a point mass that serves to model zero responses. We apply the resulting models to neural encoding and decoding problems. We find that the ZIG model outperforms simpler models (e.g., Poisson or Bernoulli models) in the context of both simulated and real neural data, and can therefore play a useful role in bridging calcium imaging analysis methods with tools for analyzing activity in large neural populations.
[ { "created": "Fri, 5 Jun 2020 23:29:33 GMT", "version": "v1" } ]
2020-06-09
[ [ "Wei", "Xue-Xin", "" ], [ "Zhou", "Ding", "" ], [ "Grosmark", "Andres", "" ], [ "Ajabi", "Zaki", "" ], [ "Sparks", "Fraser", "" ], [ "Zhou", "Pengcheng", "" ], [ "Brandon", "Mark", "" ], [ "Losonczy...
Calcium imaging is a critical tool for measuring the activity of large neural populations. Much effort has been devoted to developing "pre-processing" tools for calcium video data, addressing the important issues of e.g., motion correction, denoising, compression, demixing, and deconvolution. However, statistical modeling of deconvolved calcium signals (i.e., the estimated activity extracted by a pre-processing pipeline) is just as critical for interpreting calcium measurements, and for incorporating these observations into downstream probabilistic encoding and decoding models. Surprisingly, these issues have to date received significantly less attention. In this work we examine the statistical properties of the deconvolved activity estimates, and compare probabilistic models for these random signals. In particular, we propose a zero-inflated gamma (ZIG) model, which characterizes the calcium responses as a mixture of a gamma distribution and a point mass that serves to model zero responses. We apply the resulting models to neural encoding and decoding problems. We find that the ZIG model outperforms simpler models (e.g., Poisson or Bernoulli models) in the context of both simulated and real neural data, and can therefore play a useful role in bridging calcium imaging analysis methods with tools for analyzing activity in large neural populations.
2102.09452
Archan Mukhopadhyay
Archan Mukhopadhyay and Sagar Chakraborty
Replicator equations induced by microscopic processes in nonoverlapping population playing bimatrix games
null
Chaos 31, 023123 (2021)
10.1063/5.0032311
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is concerned with exploring the microscopic basis for the discrete versions of the standard replicator equation and the adjusted replicator equation. To this end, we introduce frequency-dependent selection -- as a result of competition fashioned by game-theoretic consideration -- into the Wright--Fisher process, a stochastic birth-death process. The process is further considered to be active in a generation-wise nonoverlapping finite population where individuals play a two-strategy bimatrix population game. Subsequently, connections among the corresponding master equation, the Fokker--Planck equation, and the Langevin equation are exploited to arrive at the deterministic discrete replicator maps in the limit of infinite population size.
[ { "created": "Wed, 17 Feb 2021 17:06:16 GMT", "version": "v1" } ]
2021-02-19
[ [ "Mukhopadhyay", "Archan", "" ], [ "Chakraborty", "Sagar", "" ] ]
This paper is concerned with exploring the microscopic basis for the discrete versions of the standard replicator equation and the adjusted replicator equation. To this end, we introduce frequency-dependent selection -- as a result of competition fashioned by game-theoretic consideration -- into the Wright--Fisher process, a stochastic birth-death process. The process is further considered to be active in a generation-wise nonoverlapping finite population where individuals play a two-strategy bimatrix population game. Subsequently, connections among the corresponding master equation, the Fokker--Planck equation, and the Langevin equation are exploited to arrive at the deterministic discrete replicator maps in the limit of infinite population size.
2301.07175
Agustin Kruel
Agustin Kruel, Andrew D. McNaughton, Neeraj Kumar
Scaffold-Based Multi-Objective Drug Candidate Optimization
null
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
In therapeutic design, balancing various physiochemical properties is crucial for molecule development, similar to how Multiparameter Optimization (MPO) evaluates multiple variables to meet a primary goal. While many molecular features can now be predicted using \textit{in silico} methods, aiding early drug development, the vast data generated from high throughput virtual screening challenges the practicality of traditional MPO approaches. Addressing this, we introduce a scaffold focused graph-based Markov chain Monte Carlo framework (ScaMARS) built to generate molecules with optimal properties. This innovative framework is capable of self-training and handling a wider array of properties, sampling different chemical spaces according to the starting scaffold. The benchmark analysis on several properties shows that ScaMARS has a diversity score of 84.6\% and has a much higher success rate of 99.5\% compared to conditional models. The integration of new features into MPO significantly enhances its adaptability and effectiveness in therapeutic design, facilitating the discovery of candidates that efficiently optimize multiple properties.
[ { "created": "Thu, 15 Dec 2022 21:42:17 GMT", "version": "v1" }, { "created": "Tue, 2 Jan 2024 12:49:36 GMT", "version": "v2" } ]
2024-01-03
[ [ "Kruel", "Agustin", "" ], [ "McNaughton", "Andrew D.", "" ], [ "Kumar", "Neeraj", "" ] ]
In therapeutic design, balancing various physicochemical properties is crucial for molecule development, similar to how Multiparameter Optimization (MPO) evaluates multiple variables to meet a primary goal. While many molecular features can now be predicted using \textit{in silico} methods, aiding early drug development, the vast data generated from high-throughput virtual screening challenges the practicality of traditional MPO approaches. Addressing this, we introduce a scaffold-focused, graph-based Markov chain Monte Carlo framework (ScaMARS) built to generate molecules with optimal properties. This framework is capable of self-training and handling a wider array of properties, sampling different chemical spaces according to the starting scaffold. A benchmark analysis on several properties shows that ScaMARS achieves a diversity score of 84.6\% and a much higher success rate of 99.5\% compared to conditional models. The integration of new features into MPO significantly enhances its adaptability and effectiveness in therapeutic design, facilitating the discovery of candidates that efficiently optimize multiple properties.
0904.4792
Baruch Fischer
Baruch Fischer and Moshe Zakai
The Distribution Route from Ancestors to Descendants
4 pages, 2. Appears in BDD (Bar-Ilan University Press), 23, 71, 2010
null
null
null
q-bio.PE math.PR physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We add here compared to our former arXiv version an explicit expression for the descendant ratio along the generations The equation that is added here appeared in the Hebrew version published in BDD, Bar Ilan University Press, 23, 71, 2010, titled The distribution route from ancestors to descendants in Equation 11. Otherwise, we don not change nor show here the paper but leave and direct the reader to the former arXiv version with the above addition or to the published paper in Hebrew.
[ { "created": "Thu, 30 Apr 2009 19:55:26 GMT", "version": "v1" }, { "created": "Thu, 30 Apr 2009 21:34:03 GMT", "version": "v2" }, { "created": "Thu, 14 May 2009 09:34:51 GMT", "version": "v3" }, { "created": "Thu, 23 Oct 2014 13:12:39 GMT", "version": "v4" }, { "c...
2022-11-14
[ [ "Fischer", "Baruch", "" ], [ "Zakai", "Moshe", "" ] ]
Compared to our former arXiv version, we add here an explicit expression for the descendant ratio along the generations. The added equation appeared as Equation 11 in the Hebrew version published in BDD, Bar-Ilan University Press, 23, 71, 2010, titled "The distribution route from ancestors to descendants". Otherwise, we do not change nor reproduce the paper here, but direct the reader to the former arXiv version with the above addition, or to the published paper in Hebrew.
2003.08784
Ramin Golestanian
Philip Bittihn and Ramin Golestanian
Stochastic effects on the dynamics of an epidemic due to population subdivision
A proposal for a containment/exit strategy in response to the COVID-19 pandemic. Note the change in title from the original (2020/3/19) version "Containment strategy for an epidemic based on fluctuations in the SIR model"
Chaos 30, 101102 (2020)
10.1063/5.0028972
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using a stochastic Susceptible-Infected-Removed (SIR) meta-population model of disease transmission, we present analytical calculations and numerical simulations dissecting the interplay between stochasticity and the division of a population into mutually independent sub-populations. We show that subdivision activates two stochastic effects---extinction and desynchronization---diminishing the overall impact of the outbreak, even when the total population has already left the stochastic regime and the basic reproduction number is not altered by the subdivision. Both effects are quantitatively captured by our theoretical estimates, allowing us to determine their individual contributions to the observed reduction of the peak of the epidemic.
[ { "created": "Thu, 19 Mar 2020 13:56:37 GMT", "version": "v1" }, { "created": "Sun, 22 Mar 2020 11:32:31 GMT", "version": "v2" }, { "created": "Mon, 13 Apr 2020 16:35:02 GMT", "version": "v3" }, { "created": "Wed, 16 Dec 2020 08:28:32 GMT", "version": "v4" } ]
2020-12-17
[ [ "Bittihn", "Philip", "" ], [ "Golestanian", "Ramin", "" ] ]
Using a stochastic Susceptible-Infected-Removed (SIR) meta-population model of disease transmission, we present analytical calculations and numerical simulations dissecting the interplay between stochasticity and the division of a population into mutually independent sub-populations. We show that subdivision activates two stochastic effects---extinction and desynchronization---diminishing the overall impact of the outbreak, even when the total population has already left the stochastic regime and the basic reproduction number is not altered by the subdivision. Both effects are quantitatively captured by our theoretical estimates, allowing us to determine their individual contributions to the observed reduction of the peak of the epidemic.
1701.00987
Pierre Casadebaig
Pierre Casadebaig and Emmanuelle Mestries and Philippe Debaeke
A model-based approach to assist variety evaluation in sunflower crop
25 pages, 10 figures
Casadebaig, P.; Mestries, E. & Debaeke, P. (2016), 'A model-based approach to assist variety assessment in sunflower crop', European Journal of Agronomy 81, 92--105
10.1016/j.eja.2016.09.001
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Assessing the performance and the characteristics (e.g. yield, quality, disease resistance, abiotic stress tolerance) of new varieties is a key component of crop performance improvement. However, the variety testing process is presently exclusively based on experimental field approaches which inherently reduces the number and the diversity of experienced combinations of varieties x environmental conditions in regard of the multiplicity of growing conditions within the cultivation area. Our aim is to make a greater and faster use of the information issuing from these trials using crop modeling and simulation to amplify the environmental and agronomic conditions in which the new varieties are tested. In this study, we present a model-based approach to assist variety testing and implement this approach on sunflower crop, using the SUNFLO simulation model and a subset of 80 trials from a large multi-environment trial (MET) conducted each year by agricultural extension services to compare newly released sunflower hybrids. After estimating parameter values (using plant phenotyping) to account for new genetic material, we independently evaluated the model prediction capacity on the MET (model accuracy was 54.4 %) and its capacity to rank commercial hybrids for performance level (Kendall's $\tau$ = 0.41, P < 0.01). We then designed a numerical experiment by combining the previously tested genetic and new cropping conditions (2100 virtual trials) to determine the best varieties and related management in representative French production regions. We suggest that this approach could find operational outcomes to recommend varieties according to environment types. Such spatial management of genetic resources could potentially improve crop performance by reducing the genotype-phenotype mismatch in farming environments.
[ { "created": "Wed, 4 Jan 2017 12:45:33 GMT", "version": "v1" } ]
2017-01-05
[ [ "Casadebaig", "Pierre", "" ], [ "Mestries", "Emmanuelle", "" ], [ "Debaeke", "Philippe", "" ] ]
Assessing the performance and the characteristics (e.g. yield, quality, disease resistance, abiotic stress tolerance) of new varieties is a key component of crop performance improvement. However, the variety testing process is presently based exclusively on experimental field approaches, which inherently limits the number and diversity of tested combinations of varieties x environmental conditions relative to the multiplicity of growing conditions within the cultivation area. Our aim is to make greater and faster use of the information issuing from these trials, using crop modeling and simulation to broaden the range of environmental and agronomic conditions in which new varieties are tested. In this study, we present a model-based approach to assist variety testing and implement it for the sunflower crop, using the SUNFLO simulation model and a subset of 80 trials from a large multi-environment trial (MET) conducted each year by agricultural extension services to compare newly released sunflower hybrids. After estimating parameter values (using plant phenotyping) to account for new genetic material, we independently evaluated the model's prediction capacity on the MET (model accuracy was 54.4 %) and its capacity to rank commercial hybrids by performance level (Kendall's $\tau$ = 0.41, P < 0.01). We then designed a numerical experiment combining the previously tested genetic material with new cropping conditions (2100 virtual trials) to determine the best varieties and related management in representative French production regions. We suggest that this approach could yield operational recommendations of varieties according to environment types. Such spatial management of genetic resources could potentially improve crop performance by reducing the genotype-phenotype mismatch in farming environments.
2301.09600
Braden Brinkman
Braden A. W. Brinkman
Non-perturbative renormalization group analysis of nonlinear spiking networks
Revised preprint, some new figures and typo fixes, Appendices moved to Supplement file (available in previous version)
null
null
null
q-bio.NC cond-mat.dis-nn cond-mat.soft cond-mat.stat-mech
http://creativecommons.org/licenses/by/4.0/
The critical brain hypothesis posits that neural circuits may operate close to critical points of a phase transition, which has been argued to have functional benefits for neural computation. Theoretical and computational studies arguing for or against criticality in neural dynamics have largely relied on establishing power laws in neural data, while a proper understanding of critical phenomena requires a renormalization group (RG) analysis. However, neural activity is typically non-Gaussian, nonlinear, and non-local, rendering models that capture all of these features difficult to study using standard statistical physics techniques. We overcome these issues by adapting the non-perturbative renormalization group (NPRG) to work on network models of stochastic spiking neurons. Within a ``local potential approximation,'' we are able to calculate non-universal quantities such as the effective firing rate nonlinearity of the network, allowing improved quantitative estimates of network statistics. We also derive the dimensionless flow equation that admits universal critical points in the renormalization group flow of the model, and identify two important types of critical points: in networks with an absorbing state there is a fixed point corresponding to a non-equilibrium phase transition between sustained activity and extinction of activity, and in spontaneously active networks there is a physically meaningful \emph{complex valued} critical point, corresponding to a discontinuous transition between high and low firing rate states. Our analysis suggests these fixed points are related to two well-known universality classes, the non-equilibrium directed percolation class, and the kinetic Ising model with explicitly broken symmetry, respectively.
[ { "created": "Mon, 23 Jan 2023 18:00:05 GMT", "version": "v1" }, { "created": "Fri, 17 Mar 2023 20:15:48 GMT", "version": "v2" } ]
2023-03-21
[ [ "Brinkman", "Braden A. W.", "" ] ]
The critical brain hypothesis posits that neural circuits may operate close to critical points of a phase transition, which has been argued to have functional benefits for neural computation. Theoretical and computational studies arguing for or against criticality in neural dynamics have largely relied on establishing power laws in neural data, while a proper understanding of critical phenomena requires a renormalization group (RG) analysis. However, neural activity is typically non-Gaussian, nonlinear, and non-local, rendering models that capture all of these features difficult to study using standard statistical physics techniques. We overcome these issues by adapting the non-perturbative renormalization group (NPRG) to work on network models of stochastic spiking neurons. Within a ``local potential approximation,'' we are able to calculate non-universal quantities such as the effective firing rate nonlinearity of the network, allowing improved quantitative estimates of network statistics. We also derive the dimensionless flow equation that admits universal critical points in the renormalization group flow of the model, and identify two important types of critical points: in networks with an absorbing state there is a fixed point corresponding to a non-equilibrium phase transition between sustained activity and extinction of activity, and in spontaneously active networks there is a physically meaningful \emph{complex valued} critical point, corresponding to a discontinuous transition between high and low firing rate states. Our analysis suggests these fixed points are related to two well-known universality classes, the non-equilibrium directed percolation class, and the kinetic Ising model with explicitly broken symmetry, respectively.
1704.04626
Byungjoon Min
Flaviano Morone, Kevin Roth, Byungjoon Min, H. Eugene Stanley, Hern\'an A. Makse
Model of Brain Activation Predicts the Neural Collective Influence Map of the Brain
18 pages, 5 figures
Proc. Natl. Acad. Sci. 114 (15), 3849-3854 (2017)
10.1073/pnas.1620808114
null
q-bio.NC physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Efficient complex systems have a modular structure, but modularity does not guarantee robustness, because efficiency also requires an ingenious interplay of the interacting modular components. The human brain is the elemental paradigm of an efficient robust modular system interconnected as a network of networks (NoN). Understanding the emergence of robustness in such modular architectures from the interconnections of its parts is a long-standing challenge that has concerned many scientists. Current models of dependencies in NoN inspired by the power grid express interactions among modules with fragile couplings that amplify even small shocks, thus preventing functionality. Therefore, we introduce a model of NoN to shape the pattern of brain activations to form a modular environment that is robust. The model predicts the map of neural collective influencers (NCIs) in the brain, through the optimization of the influence of the minimal set of essential nodes responsible for broadcasting information to the whole-brain NoN. Our results suggest new intervention protocols to control brain activity by targeting influential neural nodes predicted by network theory.
[ { "created": "Sat, 15 Apr 2017 11:40:50 GMT", "version": "v1" } ]
2022-06-08
[ [ "Morone", "Flaviano", "" ], [ "Roth", "Kevin", "" ], [ "Min", "Byungjoon", "" ], [ "Stanley", "H. Eugene", "" ], [ "Makse", "Hernán A.", "" ] ]
Efficient complex systems have a modular structure, but modularity does not guarantee robustness, because efficiency also requires an ingenious interplay of the interacting modular components. The human brain is the elemental paradigm of an efficient robust modular system interconnected as a network of networks (NoN). Understanding the emergence of robustness in such modular architectures from the interconnections of its parts is a long-standing challenge that has concerned many scientists. Current models of dependencies in NoN inspired by the power grid express interactions among modules with fragile couplings that amplify even small shocks, thus preventing functionality. Therefore, we introduce a model of NoN to shape the pattern of brain activations to form a modular environment that is robust. The model predicts the map of neural collective influencers (NCIs) in the brain, through the optimization of the influence of the minimal set of essential nodes responsible for broadcasting information to the whole-brain NoN. Our results suggest new intervention protocols to control brain activity by targeting influential neural nodes predicted by network theory.
2010.05656
Marcus Aguiar de
Debora Princepe and Marcus A.M. de Aguiar
Modeling Mito-nuclear Compatibility and its Role in Species Identification
27 pages, 6 figures
Systematic Biology 70(1) pp.133-144, 2021
10.1093/sysbio/syaa044
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mitochondrial genetic material is widely used for phylogenetic reconstruction and as a barcode for species identification. Here we study how mito-nuclear interactions affect the accuracy of species identification by mtDNA, as well as the speciation process itself. We simulate the evolution of a population of individuals who carry a recombining nuclear genome and a mitochondrial genome inherited maternally. We compare a null model fitness landscape that lacks any mito-nuclear interaction against a scenario in which interactions influence fitness. Fitness is assigned to individuals according to their mito-nuclear compatibility, which drives the coevolution of the nuclear and mitochondrial genomes. When the population breaks into distinct species we analyze the accuracy of the mtDNA barcode for species identification. Remarkably, we find that species identification by mtDNA is equally accurate in the presence or absence of mito-nuclear coupling and that the success of the DNA barcode derives mainly from population geographical isolation during speciation. Nevertheless, selection imposed by mito-nuclear compatibility influences the diversification process and leaves signatures in the genetic content and spatial distribution of the populations, in three ways: phylogenetic trees are more balanced; clades correlate strongly with the spatial distribution; and there is a substantial increase in the intraspecies mtDNA similarity. We compare the evolutionary patterns observed in our model to empirical data from copepods (\textit{T. californicus}). We find good qualitative agreement in the geographic patterns and the topology of the phylogenetic tree, provided the model includes selection based on mito-nuclear interactions. These results highlight the role of mito-nuclear compatibility in the speciation process and its reconstruction from genetic data.
[ { "created": "Mon, 12 Oct 2020 12:54:04 GMT", "version": "v1" } ]
2020-12-21
[ [ "Princepe", "Debora", "" ], [ "de Aguiar", "Marcus A. M.", "" ] ]
Mitochondrial genetic material is widely used for phylogenetic reconstruction and as a barcode for species identification. Here we study how mito-nuclear interactions affect the accuracy of species identification by mtDNA, as well as the speciation process itself. We simulate the evolution of a population of individuals who carry a recombining nuclear genome and a mitochondrial genome inherited maternally. We compare a null model fitness landscape that lacks any mito-nuclear interaction against a scenario in which interactions influence fitness. Fitness is assigned to individuals according to their mito-nuclear compatibility, which drives the coevolution of the nuclear and mitochondrial genomes. When the population breaks into distinct species we analyze the accuracy of the mtDNA barcode for species identification. Remarkably, we find that species identification by mtDNA is equally accurate in the presence or absence of mito-nuclear coupling and that the success of the DNA barcode derives mainly from population geographical isolation during speciation. Nevertheless, selection imposed by mito-nuclear compatibility influences the diversification process and leaves signatures in the genetic content and spatial distribution of the populations, in three ways: phylogenetic trees are more balanced; clades correlate strongly with the spatial distribution; and there is a substantial increase in the intraspecies mtDNA similarity. We compare the evolutionary patterns observed in our model to empirical data from copepods (\textit{T. californicus}). We find good qualitative agreement in the geographic patterns and the topology of the phylogenetic tree, provided the model includes selection based on mito-nuclear interactions. These results highlight the role of mito-nuclear compatibility in the speciation process and its reconstruction from genetic data.
1510.03507
Kevin Moon
Stephen V. Gliske, Kevin R. Moon, William C. Stacey, Alfred O. Hero III
The intrinsic value of HFO features as a biomarker of epileptic activity
5 pages, 5 figures
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 6290-6294, Mar. 2016
10.1109/ICASSP.2016.7472887
null
q-bio.NC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High frequency oscillations (HFOs) are a promising biomarker of epileptic brain tissue and activity. HFOs additionally serve as a prototypical example of challenges in the analysis of discrete events in high-temporal resolution, intracranial EEG data. Two primary challenges are 1) dimensionality reduction, and 2) assessing feasibility of classification. Dimensionality reduction assumes that the data lie on a manifold with dimension less than that of the feature space. However, previous HFO analyses have assumed a linear manifold, global across time, space (i.e. recording electrode/channel), and individual patients. Instead, we assess both a) whether linear methods are appropriate and b) the consistency of the manifold across time, space, and patients. We also estimate bounds on the Bayes classification error to quantify the distinction between two classes of HFOs (those occurring during seizures and those occurring due to other processes). This analysis provides the foundation for future clinical use of HFO features and guides the analysis for other discrete events, such as individual action potentials or multi-unit activity.
[ { "created": "Tue, 13 Oct 2015 01:57:12 GMT", "version": "v1" } ]
2017-06-13
[ [ "Gliske", "Stephen V.", "" ], [ "Moon", "Kevin R.", "" ], [ "Stacey", "William C.", "" ], [ "Hero", "Alfred O.", "III" ] ]
High frequency oscillations (HFOs) are a promising biomarker of epileptic brain tissue and activity. HFOs additionally serve as a prototypical example of challenges in the analysis of discrete events in high-temporal resolution, intracranial EEG data. Two primary challenges are 1) dimensionality reduction, and 2) assessing feasibility of classification. Dimensionality reduction assumes that the data lie on a manifold with dimension less than that of the feature space. However, previous HFO analyses have assumed a linear manifold, global across time, space (i.e. recording electrode/channel), and individual patients. Instead, we assess both a) whether linear methods are appropriate and b) the consistency of the manifold across time, space, and patients. We also estimate bounds on the Bayes classification error to quantify the distinction between two classes of HFOs (those occurring during seizures and those occurring due to other processes). This analysis provides the foundation for future clinical use of HFO features and guides the analysis for other discrete events, such as individual action potentials or multi-unit activity.
2003.03024
Sean Vittadello
Sean T. Vittadello, Scott W. McCue, Gency Gunasingh, Nikolas K. Haass, and Matthew J. Simpson
A novel mathematical model of heterogeneous cell proliferation
36 pages, 3 figures
null
10.1007/s00285-021-01580-8
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel mathematical model of heterogeneous cell proliferation where the total population consists of a subpopulation of slow-proliferating cells and a subpopulation of fast-proliferating cells. The model incorporates two cellular processes, asymmetric cell division and induced switching between proliferative states, which are important determinants for the heterogeneity of a cell population. As motivation for our model we provide experimental data that illustrate the induced-switching process. Our model consists of a system of two coupled delay differential equations with distributed time delays and the cell densities as functions of time. The distributed delays are bounded and allow for the choice of delay kernel. We analyse the model and prove the non-negativity and boundedness of solutions, the existence and uniqueness of solutions, and the local stability characteristics of the equilibrium points. We find that the parameters for induced switching are bifurcation parameters and therefore determine the long-term behaviour of the model. Numerical simulations illustrate and support the theoretical findings, and demonstrate the primary importance of transient dynamics for understanding the evolution of many experimental cell populations.
[ { "created": "Fri, 6 Mar 2020 04:04:25 GMT", "version": "v1" }, { "created": "Fri, 24 Jul 2020 07:59:14 GMT", "version": "v2" } ]
2021-11-04
[ [ "Vittadello", "Sean T.", "" ], [ "McCue", "Scott W.", "" ], [ "Gunasingh", "Gency", "" ], [ "Haass", "Nikolas K.", "" ], [ "Simpson", "Matthew J.", "" ] ]
We present a novel mathematical model of heterogeneous cell proliferation where the total population consists of a subpopulation of slow-proliferating cells and a subpopulation of fast-proliferating cells. The model incorporates two cellular processes, asymmetric cell division and induced switching between proliferative states, which are important determinants for the heterogeneity of a cell population. As motivation for our model we provide experimental data that illustrate the induced-switching process. Our model consists of a system of two coupled delay differential equations with distributed time delays and the cell densities as functions of time. The distributed delays are bounded and allow for the choice of delay kernel. We analyse the model and prove the non-negativity and boundedness of solutions, the existence and uniqueness of solutions, and the local stability characteristics of the equilibrium points. We find that the parameters for induced switching are bifurcation parameters and therefore determine the long-term behaviour of the model. Numerical simulations illustrate and support the theoretical findings, and demonstrate the primary importance of transient dynamics for understanding the evolution of many experimental cell populations.
1810.13027
Changchuan Yin Dr.
Changchuan Yin, Stephen S.-T. Yau
Whole genome single nucleotide polymorphism genotyping of Staphylococcus aureus
6 figures, 2 tables
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Next-generation sequencing technology enables routine detection of bacterial pathogens for clinical diagnostics and genetic research. Whole genome sequencing has been of importance in the epidemiologic analysis of bacterial pathogens. However, few whole genome sequencing-based genotyping pipelines are available for practical applications. Here, we present the whole genome sequencing-based single nucleotide polymorphism (SNP) genotyping method and apply it to the evolutionary analysis of methicillin-resistant Staphylococcus aureus. The SNP genotyping method calls genome variants using next-generation sequencing reads of whole genomes and calculates the pair-wise Jaccard distances of the genome variants. The method may reveal the high-resolution whole genome SNP profiles and the structural variants of different isolates of methicillin-resistant S. aureus (MRSA) and methicillin-susceptible S. aureus (MSSA) strains. The phylogenetic analysis of whole genomes and particular regions may monitor and track the evolution and the transmission dynamics of bacterial pathogens. The computer programs of the whole genome sequencing-based SNP genotyping method are available to the public at https://github.com/cyinbox/NGS.
[ { "created": "Tue, 30 Oct 2018 22:57:03 GMT", "version": "v1" } ]
2018-11-01
[ [ "Yin", "Changchuan", "" ], [ "Yau", "Stephen S. -T.", "" ] ]
Next-generation sequencing technology enables routine detection of bacterial pathogens for clinical diagnostics and genetic research. Whole genome sequencing has been of importance in the epidemiologic analysis of bacterial pathogens. However, few whole genome sequencing-based genotyping pipelines are available for practical applications. Here, we present the whole genome sequencing-based single nucleotide polymorphism (SNP) genotyping method and apply it to the evolutionary analysis of methicillin-resistant Staphylococcus aureus. The SNP genotyping method calls genome variants using next-generation sequencing reads of whole genomes and calculates the pair-wise Jaccard distances of the genome variants. The method may reveal the high-resolution whole genome SNP profiles and the structural variants of different isolates of methicillin-resistant S. aureus (MRSA) and methicillin-susceptible S. aureus (MSSA) strains. The phylogenetic analysis of whole genomes and particular regions may monitor and track the evolution and the transmission dynamics of bacterial pathogens. The computer programs of the whole genome sequencing-based SNP genotyping method are available to the public at https://github.com/cyinbox/NGS.
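The pairwise Jaccard-distance step described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not code from the authors' pipeline (https://github.com/cyinbox/NGS): the isolate names and variant identifiers below are made up, and only the distance formula (one minus intersection over union of variant sets) follows the text.

```python
def jaccard_distance(a, b):
    """Jaccard distance between two sets of called variants."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

# Toy variant sets keyed by isolate name (positions are invented).
variants = {
    "MRSA_1": {"chr1:1042A>G", "chr1:5310C>T", "chr1:9001G>A"},
    "MRSA_2": {"chr1:1042A>G", "chr1:5310C>T"},
    "MSSA_1": {"chr1:7777T>C"},
}

# Pairwise distance matrix as a dict over ordered name pairs.
pairs = {}
names = sorted(variants)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        pairs[(x, y)] = jaccard_distance(variants[x], variants[y])
```

Such a matrix can then feed a standard hierarchical-clustering or tree-building routine, which is the role the Jaccard distances play in the genotyping method described above.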
1109.1206
Philippe Marcq
P. Marcq, N. Yoshinaga and J. Prost
Rigidity sensing explained by active matter theory
4 pages, 2 figures
Biophys J 101, L33-L35 (2011)
10.1016/j.bpj.2011.08.023
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The magnitude of traction forces exerted by living animal cells on their environment is a monotonically increasing and approximately sigmoidal function of the stiffness of the external medium. This observation is rationalized using active matter theory: adaptation to substrate rigidity results from an interplay between passive elasticity and active contractility.
[ { "created": "Tue, 6 Sep 2011 14:43:40 GMT", "version": "v1" } ]
2011-09-22
[ [ "Marcq", "P.", "" ], [ "Yoshinaga", "N.", "" ], [ "Prost", "J.", "" ] ]
The magnitude of traction forces exerted by living animal cells on their environment is a monotonically increasing and approximately sigmoidal function of the stiffness of the external medium. This observation is rationalized using active matter theory: adaptation to substrate rigidity results from an interplay between passive elasticity and active contractility.
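The qualitative claim above, traction force as a monotonically increasing and saturating function of substrate stiffness, can be illustrated with a toy force-stiffness curve. The Hill-type form and the parameters `f_max` and `k_half` are assumptions for illustration only, not the authors' active-matter result.

```python
def traction_force(stiffness, f_max=1.0, k_half=1.0):
    """Monotonically increasing, saturating force-stiffness relation.

    f_max is the force plateau at large stiffness; k_half is the
    stiffness at which half the plateau force is reached.
    """
    return f_max * stiffness / (stiffness + k_half)

# Force increases monotonically with stiffness and saturates below f_max.
forces = [traction_force(k) for k in (0.1, 1.0, 10.0)]
```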
2005.01628
Jos\'e Mar\'ia Castro \'Alvarez
Jos\'e M. \'Alvarez-Castro
Gene-Environment Interaction in the Era of Precision Medicine -- Fix the Potholes or Start Building a New Road?
12 pages with two figures (two panels each) and two tables
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genetic mapping sprang up in the last decade of the 20th century with the development of statistical procedures that put classical models of genetic effects together with molecular biology techniques. It eventually became clear that those models, originally developed to serve other purposes, implied limitations at different stages of the analyses: disclosing loci, measuring their effects, and providing additional parameters for adequate biological/medical interpretations. The present paper ponders whether it is realistic and worthwhile to try to further amend classical models of genetic effects, or whether it proves more sensible to undertake alternative theoretical strategies instead. In order to further feed into that debate, mathematical developments for gene-environment interaction stemming from the classical models of genetic effects are revised here and brought up to date with the prospects afforded by present-day data, particularly in the context of precision medicine. Those developments strengthen the methodology required to overcome the COVID-19 pandemic.
[ { "created": "Mon, 4 May 2020 16:34:25 GMT", "version": "v1" } ]
2020-05-05
[ [ "Álvarez-Castro", "José M.", "" ] ]
Genetic mapping sprang up in the last decade of the 20th century with the development of statistical procedures that put classical models of genetic effects together with molecular biology techniques. It eventually became clear that those models, originally developed to serve other purposes, implied limitations at different stages of the analyses: disclosing loci, measuring their effects, and providing additional parameters for adequate biological/medical interpretations. The present paper ponders whether it is realistic and worthwhile to try to further amend classical models of genetic effects, or whether it proves more sensible to undertake alternative theoretical strategies instead. In order to further feed into that debate, mathematical developments for gene-environment interaction stemming from the classical models of genetic effects are revised here and brought up to date with the prospects afforded by present-day data, particularly in the context of precision medicine. Those developments strengthen the methodology required to overcome the COVID-19 pandemic.
1310.6844
Steven Kelk
Steven Kelk, Simone Linz, and David A. Morrison
Fighting network space: it is time for an SQL-type language to filter phylogenetic networks
opinion piece
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The search space of rooted phylogenetic trees is vast and a major research focus of recent decades has been the development of algorithms to effectively navigate this space. However this space is tiny when compared with the space of rooted phylogenetic networks, and navigating this enlarged space remains a poorly understood problem. This, and the difficulty of biologically interpreting such networks, obstructs adoption of networks as tools for modelling reticulation. Here, we argue that the superimposition of biologically motivated constraints, via an SQL-style language, can both stimulate use of network software by biologists and potentially significantly prune the search space.
[ { "created": "Fri, 25 Oct 2013 08:10:42 GMT", "version": "v1" } ]
2013-10-28
[ [ "Kelk", "Steven", "" ], [ "Linz", "Simone", "" ], [ "Morrison", "David A.", "" ] ]
The search space of rooted phylogenetic trees is vast and a major research focus of recent decades has been the development of algorithms to effectively navigate this space. However this space is tiny when compared with the space of rooted phylogenetic networks, and navigating this enlarged space remains a poorly understood problem. This, and the difficulty of biologically interpreting such networks, obstructs adoption of networks as tools for modelling reticulation. Here, we argue that the superimposition of biologically motivated constraints, via an SQL-style language, can both stimulate use of network software by biologists and potentially significantly prune the search space.
2004.08990
Juan C. Mora Mr
Juan C. Mora, Sandra P\'erez, Ignacio Rodr\'iguez, Asunci\'on N\'u\~nez and Alla Dvorzhak
A Semiempirical Dynamical Model to Forecast the Propagation of Epidemics: The Case of the Sars-Cov-2 in Spain
null
null
null
null
q-bio.QM math.DS
http://creativecommons.org/licenses/by-nc-sa/4.0/
A semiempirical model, based on the logistic map, has been successfully applied to forecast important quantities along the several phases of the COVID-19 outbreak for different countries. This paper shows how the model was calibrated and applied to predict the number of people needing hospitalization, the need for ventilators, and the number of deaths which would be produced. The results obtained for Spain are shown specifically, with predictions of diagnosed infections and deaths expected after the easing of the total lockdown imposed on 13 March. It is also shown how this model forecasts the level of infection in the different regions of Spain. For the end of May the model predicts for Spain more than 400,000 diagnosed infected cases, a number which will probably be higher due to the change in the possibilities of performing massive numbers of tests on the general population. The number of forecasted deaths for that date is 46,000 +/- 15,000. The model also predicts the level of infection in the different Spanish regions, providing a counterintuitive result in the cases of Madrid and Catalonia: according to this model, the level of infection is higher in Catalonia than in Madrid. All of these results can be used to guide policy makers in order to optimize resources and to avoid future outbreaks of COVID-19.
[ { "created": "Sun, 19 Apr 2020 23:38:05 GMT", "version": "v1" } ]
2020-04-21
[ [ "Mora", "Juan C.", "" ], [ "Pérez", "Sandra", "" ], [ "Rodríguez", "Ignacio", "" ], [ "Núñez", "Asunción", "" ], [ "Dvorzhak", "Alla", "" ] ]
A semiempirical model, based on the logistic map, has been successfully applied to forecast important quantities along the several phases of the COVID-19 outbreak for different countries. This paper shows how the model was calibrated and applied to predict the number of people needing hospitalization, the need for ventilators, and the number of deaths which would be produced. The results obtained for Spain are shown specifically, with predictions of diagnosed infections and deaths expected after the easing of the total lockdown imposed on 13 March. It is also shown how this model forecasts the level of infection in the different regions of Spain. For the end of May the model predicts for Spain more than 400,000 diagnosed infected cases, a number which will probably be higher due to the change in the possibilities of performing massive numbers of tests on the general population. The number of forecasted deaths for that date is 46,000 +/- 15,000. The model also predicts the level of infection in the different Spanish regions, providing a counterintuitive result in the cases of Madrid and Catalonia: according to this model, the level of infection is higher in Catalonia than in Madrid. All of these results can be used to guide policy makers in order to optimize resources and to avoid future outbreaks of COVID-19.
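The logistic-map backbone of the semiempirical model described above can be sketched as discrete-time logistic growth of the cumulative case count. The parameter names and values below (growth rate `r`, saturation level `K`, initial count `c0`) are illustrative assumptions, not the authors' calibration for Spain.

```python
def logistic_step(c, r, K):
    """One time step of logistic growth: dC = r * C * (1 - C/K)."""
    return c + r * c * (1.0 - c / K)

def simulate(c0, r, K, days):
    """Iterate the map, returning the cumulative-case series day by day."""
    series = [c0]
    for _ in range(days):
        series.append(logistic_step(series[-1], r, K))
    return series

# Toy run: near-exponential growth at first, then saturation toward K.
curve = simulate(c0=100.0, r=0.2, K=400_000.0, days=120)
```

In a semiempirical use of such a model, `r` and `K` would be refitted to observed data as the epidemic progresses, which is the calibration step the abstract refers to.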
0803.1471
Dylan Walker
Koon-Kiu Yan, Dylan Walker, Sergei Maslov
Fluctuations in Mass-Action Equilibrium of Protein Binding Networks
4 pages, 3 figures
null
10.1103/PhysRevLett.101.268102
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider two types of fluctuations in the mass-action equilibrium in protein binding networks. The first type is driven by relatively slow changes in total concentrations (copy numbers) of interacting proteins. The second type, which we refer to as spontaneous, is caused by quickly decaying thermodynamic deviations away from the equilibrium of the system. As such they are amenable to methods of equilibrium statistical mechanics used in our study. We investigate the effects of network connectivity on these fluctuations and compare them to their upper and lower bounds. The collective effects are shown to sometimes lead to large power-law distributed amplification of spontaneous fluctuations as compared to the expectation for isolated dimers. As a consequence of this, the strength of both types of fluctuations is positively correlated with the overall network connectivity of proteins forming the complex. On the other hand, the relative amplitude of fluctuations is negatively correlated with the abundance of the complex. Our general findings are illustrated using a real network of protein-protein interactions in baker's yeast with experimentally determined protein concentrations.
[ { "created": "Mon, 10 Mar 2008 18:37:19 GMT", "version": "v1" } ]
2009-11-13
[ [ "Yan", "Koon-Kiu", "" ], [ "Walker", "Dylan", "" ], [ "Maslov", "Sergei", "" ] ]
We consider two types of fluctuations in the mass-action equilibrium in protein binding networks. The first type is driven by relatively slow changes in total concentrations (copy numbers) of interacting proteins. The second type, which we refer to as spontaneous, is caused by quickly decaying thermodynamic deviations away from the equilibrium of the system. As such they are amenable to methods of equilibrium statistical mechanics used in our study. We investigate the effects of network connectivity on these fluctuations and compare them to their upper and lower bounds. The collective effects are shown to sometimes lead to large power-law distributed amplification of spontaneous fluctuations as compared to the expectation for isolated dimers. As a consequence of this, the strength of both types of fluctuations is positively correlated with the overall network connectivity of proteins forming the complex. On the other hand, the relative amplitude of fluctuations is negatively correlated with the abundance of the complex. Our general findings are illustrated using a real network of protein-protein interactions in baker's yeast with experimentally determined protein concentrations.
1803.07669
Iaroslav Ispolatov
Iaroslav Ispolatov and Michael Doebeli
A note on the complexity of evolutionary dynamics in a classic consumer-resource model
null
Theor Ecol (2019). https://doi.org/10.1007/s12080-019-0427-2
null
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study how the complexity of evolutionary dynamics in the classic MacArthur consumer-resource model depends on resource uptake and utilization rates. The traditional assumption in such models is that the utilization rate of the consumer is proportional to the uptake rate. More generally, we show that if these two rates are related through a power law (which includes the traditional assumption as a special case), then the resulting evolutionary dynamics in the consumer is necessarily a simple hill-climbing process leading to an evolutionary equilibrium, regardless of the dimension of phenotype space. When utilization and uptake rates are not related by a power law, more complex evolutionary trajectories can occur, including the chaotic dynamics observed in previous studies for high-dimensional phenotype spaces. These results draw attention to the importance of distinguishing between utilization and uptake rates in consumer-resource models.
[ { "created": "Tue, 20 Mar 2018 21:47:38 GMT", "version": "v1" } ]
2019-10-18
[ [ "Ispolatov", "Iaroslav", "" ], [ "Doebeli", "Michael", "" ] ]
We study how the complexity of evolutionary dynamics in the classic MacArthur consumer-resource model depends on resource uptake and utilization rates. The traditional assumption in such models is that the utilization rate of the consumer is proportional to the uptake rate. More generally, we show that if these two rates are related through a power law (which includes the traditional assumption as a special case), then the resulting evolutionary dynamics in the consumer is necessarily a simple hill-climbing process leading to an evolutionary equilibrium, regardless of the dimension of phenotype space. When utilization and uptake rates are not related by a power law, more complex evolutionary trajectories can occur, including the chaotic dynamics observed in previous studies for high-dimensional phenotype spaces. These results draw attention to the importance of distinguishing between utilization and uptake rates in consumer-resource models.
q-bio/0309011
Petter Holme
Petter Holme, Mikael Huss
Discovery and analysis of biochemical subnetwork hierarchies
submitted to Proceedings of the third workshop on computation of biochemical pathways and genetic networks
3rd Workshop on Computation of Biochemical Pathways and Genetic Networks, R. Gauges, U. Kummer, J. Pahle, and U. Rost, eds., European Media Lab Proceedings (Logos, Berlin, 2003), pp. 3-9.
null
null
q-bio.MN cond-mat.dis-nn
null
The representation of a biochemical network as a graph is the coarsest level of description in cellular biochemistry. By studying the network structure one can draw conclusions on the large scale organisation of the biochemical processes. We describe methods how one can extract hierarchies of subnetworks, how these can be interpreted and further deconstructed to find autonomous subnetworks. The large-scale organisation we find is characterised by a tightly connected core surrounded by increasingly loosely connected substrates.
[ { "created": "Tue, 23 Sep 2003 09:39:25 GMT", "version": "v1" }, { "created": "Tue, 30 Sep 2003 13:00:30 GMT", "version": "v2" } ]
2007-05-23
[ [ "Holme", "Petter", "" ], [ "Huss", "Mikael", "" ] ]
The representation of a biochemical network as a graph is the coarsest level of description in cellular biochemistry. By studying the network structure one can draw conclusions on the large scale organisation of the biochemical processes. We describe methods how one can extract hierarchies of subnetworks, how these can be interpreted and further deconstructed to find autonomous subnetworks. The large-scale organisation we find is characterised by a tightly connected core surrounded by increasingly loosely connected substrates.
1703.02725
Yi-Hsuan Lin
Yi-Hsuan Lin and Hue Sun Chan
Phase separation and single-chain compactness of charged disordered proteins are strongly correlated
4 pages, 3 figures, accepted for publication in Biophysical Journal as a Letter
Biophys. J. 112 (10) 2017 2043-2046
10.1016/j.bpj.2017.04.021
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Liquid-liquid phase separation of intrinsically disordered proteins (IDPs) is a major undergirding factor in the regulated formation of membraneless organelles in the cell. The phase behavior of an IDP is sensitive to its amino acid sequence. Here we apply a recent random-phase-approximation polymer theory to investigate how the tendency for multiple chains of a protein to phase separate, as characterized by the critical temperature $T^*_{\rm cr}$, is related to the protein's single-chain average radius of gyration $\langle R_{\rm g} \rangle$. For a set of sequences containing different permutations of an equal number of positively and negatively charged residues, we found a striking correlation $T^*_{\rm cr}\sim \langle R_{\rm g} \rangle^{-\gamma}$ with $\gamma$ as large as $\sim 6.0$, indicating that electrostatic effects have similarly significant impact on promoting single-chain conformational compactness and phase separation. Moreover, $T^*_{\rm cr}\propto -{\rm SCD}$, where SCD is a recently proposed "sequence charge decoration" parameter determined solely by sequence information. Ramifications of our findings for deciphering the sequence dependence of IDP phase separation are discussed.
[ { "created": "Wed, 8 Mar 2017 06:35:19 GMT", "version": "v1" }, { "created": "Wed, 19 Apr 2017 03:30:51 GMT", "version": "v2" }, { "created": "Fri, 12 May 2017 20:26:12 GMT", "version": "v3" }, { "created": "Thu, 18 May 2017 04:46:44 GMT", "version": "v4" } ]
2022-01-11
[ [ "Lin", "Yi-Hsuan", "" ], [ "Chan", "Hue Sun", "" ] ]
Liquid-liquid phase separation of intrinsically disordered proteins (IDPs) is a major undergirding factor in the regulated formation of membraneless organelles in the cell. The phase behavior of an IDP is sensitive to its amino acid sequence. Here we apply a recent random-phase-approximation polymer theory to investigate how the tendency for multiple chains of a protein to phase separate, as characterized by the critical temperature $T^*_{\rm cr}$, is related to the protein's single-chain average radius of gyration $\langle R_{\rm g} \rangle$. For a set of sequences containing different permutations of an equal number of positively and negatively charged residues, we found a striking correlation $T^*_{\rm cr}\sim \langle R_{\rm g} \rangle^{-\gamma}$ with $\gamma$ as large as $\sim 6.0$, indicating that electrostatic effects have similarly significant impact on promoting single-chain conformational compactness and phase separation. Moreover, $T^*_{\rm cr}\propto -{\rm SCD}$, where SCD is a recently proposed "sequence charge decoration" parameter determined solely by sequence information. Ramifications of our findings for deciphering the sequence dependence of IDP phase separation are discussed.
1106.4600
Dennis Wylie
Dennis Wylie
Perturbed and Permuted: Signal Integration in Network-Structured Dynamic Systems
null
null
null
null
q-bio.QM cond-mat.dis-nn cs.SY math.DS q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological systems (among others) may respond to a large variety of distinct external stimuli, or signals. These perturbations will generally be presented to the system not singly, but in various combinations, so that a proper understanding of the system response requires assessment of the degree to which the effects of one signal modulate the effects of another. This paper develops a pair of structural metrics for sparse differential equation models of complex dynamic systems and demonstrates that said metrics correlate with proxies of the susceptibility of one signal-response to be altered in the context of a second signal. One of these metrics may be interpreted as a normalized arc density in the neighborhood of certain influential nodes; this metric appears to correlate with increased independence of signal response.
[ { "created": "Thu, 23 Jun 2011 00:06:46 GMT", "version": "v1" } ]
2011-06-24
[ [ "Wylie", "Dennis", "" ] ]
Biological systems (among others) may respond to a large variety of distinct external stimuli, or signals. These perturbations will generally be presented to the system not singly, but in various combinations, so that a proper understanding of the system response requires assessment of the degree to which the effects of one signal modulate the effects of another. This paper develops a pair of structural metrics for sparse differential equation models of complex dynamic systems and demonstrates that said metrics correlate with proxies of the susceptibility of one signal-response to be altered in the context of a second signal. One of these metrics may be interpreted as a normalized arc density in the neighborhood of certain influential nodes; this metric appears to correlate with increased independence of signal response.
0708.1136
Zeba Wunderlich
Zeba Wunderlich, Leonid A. Mirny
Spatial effects on the speed and reliability of protein-DNA search
16 pages, 4 figures
Nucleic Acids Res. 2008 May 3
10.1093/nar/gkn173
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Strong experimental and theoretical evidence shows that transcription factors and other specific DNA-binding proteins find their sites using a two-mode search: alternating between 3D diffusion through the cell and 1D sliding along the DNA. We consider the role of spatial effects in this mechanism on two different scales. First, we reconcile recent experimental findings by showing that the 3D diffusion of the transcription factor is often local, i.e. the transcription factor lands quite near its dissociation site. Second, we discriminate between two types of searches: global searches and local searches. We show that these searches differ significantly in average search time and the variability of search time. Using experimentally measured parameter values, we also show that 1D and 3D search is not optimally balanced, leading to much larger estimates of search time. Together, these results lead to a number of biological implications, including suggestions of how prokaryotes and eukaryotes achieve rapid gene regulation and the relationship between the search mechanism and noise in gene expression.
[ { "created": "Wed, 8 Aug 2007 19:36:39 GMT", "version": "v1" }, { "created": "Mon, 3 Dec 2007 17:06:05 GMT", "version": "v2" }, { "created": "Wed, 11 Jun 2008 17:23:35 GMT", "version": "v3" } ]
2008-06-11
[ [ "Wunderlich", "Zeba", "" ], [ "Mirny", "Leonid A.", "" ] ]
Strong experimental and theoretical evidence shows that transcription factors and other specific DNA-binding proteins find their sites using a two-mode search: alternating between 3D diffusion through the cell and 1D sliding along the DNA. We consider the role of spatial effects in this mechanism on two different scales. First, we reconcile recent experimental findings by showing that the 3D diffusion of the transcription factor is often local, i.e. the transcription factor lands quite near its dissociation site. Second, we discriminate between two types of searches: global searches and local searches. We show that these searches differ significantly in average search time and the variability of search time. Using experimentally measured parameter values, we also show that 1D and 3D search is not optimally balanced, leading to much larger estimates of search time. Together, these results lead to a number of biological implications, including suggestions of how prokaryotes and eukaryotes achieve rapid gene regulation and the relationship between the search mechanism and noise in gene expression.
1403.7486
Greg Faust
Gregory G. Faust and Ira M. Hall
SAMBLASTER: fast duplicate marking and structural variant read extraction
null
Bioinformatics (2014) 30 (17): 2503-2505
10.1093/bioinformatics/btu314
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Illumina DNA sequencing is now the predominant source of raw genomic data, and data volumes are growing rapidly. Bioinformatic analysis pipelines are having trouble keeping pace. A common bottleneck in such pipelines is the requirement to read, write, sort and compress large BAM files multiple times. Results: We present SAMBLASTER, a tool that reduces the number of times such costly operations are performed. SAMBLASTER is designed to mark duplicates in read-sorted SAM files as a piped post-pass on DNA aligner output before it is compressed to BAM. In addition, it can simultaneously output into separate files the discordant read-pairs and/or split-read mappings used for structural variant calling. As an alignment post-pass, its own runtime overhead is negligible, while dramatically reducing overall pipeline complexity and runtime. As a stand-alone duplicate marking tool, it performs significantly better than PICARD or SAMBAMBA in terms of both speed and memory usage, while achieving nearly identical results. Availability: SAMBLASTER is open source C++ code and freely available from https://github.com/GregoryFaust/samblaster
[ { "created": "Fri, 28 Mar 2014 19:05:47 GMT", "version": "v1" } ]
2014-09-09
[ [ "Faust", "Gregory G.", "" ], [ "Hall", "Ira M.", "" ] ]
Motivation: Illumina DNA sequencing is now the predominant source of raw genomic data, and data volumes are growing rapidly. Bioinformatic analysis pipelines are having trouble keeping pace. A common bottleneck in such pipelines is the requirement to read, write, sort and compress large BAM files multiple times. Results: We present SAMBLASTER, a tool that reduces the number of times such costly operations are performed. SAMBLASTER is designed to mark duplicates in read-sorted SAM files as a piped post-pass on DNA aligner output before it is compressed to BAM. In addition, it can simultaneously output into separate files the discordant read-pairs and/or split-read mappings used for structural variant calling. As an alignment post-pass, its own runtime overhead is negligible, while dramatically reducing overall pipeline complexity and runtime. As a stand-alone duplicate marking tool, it performs significantly better than PICARD or SAMBAMBA in terms of both speed and memory usage, while achieving nearly identical results. Availability: SAMBLASTER is open source C++ code and freely available from https://github.com/GregoryFaust/samblaster
1709.00583
Chaofei Hong
Chaofei Hong
Training Spiking Neural Networks for Cognitive Tasks: A Versatile Framework Compatible to Various Temporal Codes
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version will be superseded
null
null
null
q-bio.NC cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional modeling approaches have found limitations in matching the increasingly detailed neural network structures and dynamics recorded in experiments to the diverse brain functionalities. In another approach, studies have demonstrated training spiking neural networks for simple functions using supervised learning. Here, we introduce a modified SpikeProp learning algorithm, which achieved better learning stability in different activity states. In addition, we show that biologically realistic features such as lateral connections and sparse activities can be included in the network. We demonstrate the versatility of this framework by implementing three well-known temporal codes for different types of cognitive tasks: MNIST digit recognition, spatial coordinate transformation, and motor sequence generation. Moreover, we find several characteristic features have evolved alongside the task training, such as selective activity, excitatory-inhibitory balance, and weak pair-wise correlation. The coincidence between the self-evolved and experimentally observed features indicates their importance to brain functionality. Our results suggest a unified setting in which diverse cognitive computations and mechanisms can be studied.
[ { "created": "Sat, 2 Sep 2017 13:59:39 GMT", "version": "v1" } ]
2017-09-05
[ [ "Hong", "Chaofei", "" ] ]
Conventional modeling approaches have found limitations in matching the increasingly detailed neural network structures and dynamics recorded in experiments to the diverse brain functionalities. In another approach, studies have demonstrated training spiking neural networks for simple functions using supervised learning. Here, we introduce a modified SpikeProp learning algorithm, which achieved better learning stability in different activity states. In addition, we show that biologically realistic features such as lateral connections and sparse activities can be included in the network. We demonstrate the versatility of this framework by implementing three well-known temporal codes for different types of cognitive tasks: MNIST digit recognition, spatial coordinate transformation, and motor sequence generation. Moreover, we find several characteristic features have evolved alongside the task training, such as selective activity, excitatory-inhibitory balance, and weak pair-wise correlation. The coincidence between the self-evolved and experimentally observed features indicates their importance to brain functionality. Our results suggest a unified setting in which diverse cognitive computations and mechanisms can be studied.
1201.1746
Ramon Ferrer i Cancho
Jaume Baixeries, Antoni Hernandez-Fernandez, Nuria Forns and Ramon Ferrer-i-Cancho
The parameters of Menzerath-Altmann law in genomes
Typos and little inaccuracies corrected. Title and references updated (the previous update failed)
null
10.1080/09296174.2013.773141
null
q-bio.GN physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The relationship between the size of the whole and the size of the parts in language and music is known to follow Menzerath-Altmann law at many levels of description (morphemes, words, sentences...). Qualitatively, the law states that the larger the whole, the smaller its parts, e.g., the longer a word (in syllables) the shorter its syllables (in letters or phonemes). This patterning has also been found in genomes: the longer a genome (in chromosomes), the shorter its chromosomes (in base pairs). However, it has been argued recently that mean chromosome length is trivially a pure power function of chromosome number with an exponent of -1. The functional dependency between mean chromosome size and chromosome number in groups of organisms from three different kingdoms is studied. The fit of a pure power function yields exponents between -1.6 and 0.1. It is shown that an exponent of -1 is unlikely for fungi, gymnosperm plants, insects, reptiles, ray-finned fishes and amphibians. Even when the exponent is very close to -1, adding an exponential component is able to yield a better fit with regard to a pure power-law in plants, mammals, ray-finned fishes and amphibians. The parameters of Menzerath-Altmann law in genomes deviate significantly from a power law with a -1 exponent with the exception of birds and cartilaginous fishes.
[ { "created": "Mon, 9 Jan 2012 12:51:45 GMT", "version": "v1" }, { "created": "Sat, 9 Jun 2012 20:50:08 GMT", "version": "v2" }, { "created": "Thu, 14 Jun 2012 10:17:39 GMT", "version": "v3" } ]
2014-12-03
[ [ "Baixeries", "Jaume", "" ], [ "Hernandez-Fernandez", "Antoni", "" ], [ "Forns", "Nuria", "" ], [ "Ferrer-i-Cancho", "Ramon", "" ] ]
The relationship between the size of the whole and the size of the parts in language and music is known to follow Menzerath-Altmann law at many levels of description (morphemes, words, sentences...). Qualitatively, the law states that the larger the whole, the smaller its parts, e.g., the longer a word (in syllables) the shorter its syllables (in letters or phonemes). This patterning has also been found in genomes: the longer a genome (in chromosomes), the shorter its chromosomes (in base pairs). However, it has been argued recently that mean chromosome length is trivially a pure power function of chromosome number with an exponent of -1. The functional dependency between mean chromosome size and chromosome number in groups of organisms from three different kingdoms is studied. The fit of a pure power function yields exponents between -1.6 and 0.1. It is shown that an exponent of -1 is unlikely for fungi, gymnosperm plants, insects, reptiles, ray-finned fishes and amphibians. Even when the exponent is very close to -1, adding an exponential component is able to yield a better fit with regard to a pure power-law in plants, mammals, ray-finned fishes and amphibians. The parameters of Menzerath-Altmann law in genomes deviate significantly from a power law with a -1 exponent with the exception of birds and cartilaginous fishes.
1310.2110
Cang Hui
Cang Hui
A road map for synthesizing the scaling patterns in ecology
6 pages
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/3.0/
Ecology studies biodiversity in its variety and complexity. It describes how species distribute and perform in response to environmental changes. Ecological processes and structures are highly complex and adaptive. In order to quantify emerging ecological patterns and investigate their hidden mechanisms, we need to rely on the simplicity of mathematical language. This becomes especially apparent when dealing with scaling patterns in ecology. Indeed, nearly all ecological patterns are scale-dependent. Such scale dependence hampers our predictive power and creates problems in our inference. This challenge calls for a clear and fundamental understanding of how and why ecological patterns change across scales. As Simon Levin stated in his MacArthur Award lecture, the problem of relating phenomena across scales is the central problem in ecology and other natural sciences. It has become clear that there is currently a drive in ecology and complexity science to develop new quantitative approaches that are suitable for analysing and forecasting patterns of ecological systems. Here I provide a road map for future works on synthesizing the scaling patterns in ecology, aiming (i) to collect and sort a diverse array of ecological patterns, (ii) to present the dominant parametric forms of how these patterns change across spatial and temporal scales, (iii) to detect the processes and mechanisms using mathematical models, and finally (iv) to probe the physical meaning of these scaling patterns. This road map is divided into three parts and covers three main concepts of scale in ecology: heterogeneity, hierarchy and size. Using scale as a thread, this road map and its following works weave the kaleidoscope of ecological scaling patterns into a cohesive whole.
[ { "created": "Tue, 8 Oct 2013 12:20:40 GMT", "version": "v1" } ]
2013-10-09
[ [ "Hui", "Cang", "" ] ]
Ecology studies biodiversity in its variety and complexity. It describes how species distribute and perform in response to environmental changes. Ecological processes and structures are highly complex and adaptive. In order to quantify emerging ecological patterns and investigate their hidden mechanisms, we need to rely on the simplicity of mathematical language. This becomes especially apparent when dealing with scaling patterns in ecology. Indeed, nearly all ecological patterns are scale-dependent. Such scale dependence hampers our predictive power and creates problems in our inference. This challenge calls for a clear and fundamental understanding of how and why ecological patterns change across scales. As Simon Levin stated in his MacArthur Award lecture, the problem of relating phenomena across scales is the central problem in ecology and other natural sciences. It has become clear that there is currently a drive in ecology and complexity science to develop new quantitative approaches that are suitable for analysing and forecasting patterns of ecological systems. Here I provide a road map for future works on synthesizing the scaling patterns in ecology, aiming (i) to collect and sort a diverse array of ecological patterns, (ii) to present the dominant parametric forms of how these patterns change across spatial and temporal scales, (iii) to detect the processes and mechanisms using mathematical models, and finally (iv) to probe the physical meaning of these scaling patterns. This road map is divided into three parts and covers three main concepts of scale in ecology: heterogeneity, hierarchy and size. Using scale as a thread, this road map and its following works weave the kaleidoscope of ecological scaling patterns into a cohesive whole.
2209.02816
Edoardo Balzani
Edoardo Balzani, Jean Paul Noel, Pedro Herrero-Vidal, Dora E. Angelaki, and Cristina Savin
A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-sa/4.0/
Latent manifolds provide a compact characterization of neural population activity and of shared co-variability across brain areas. Nonetheless, existing statistical tools for extracting neural manifolds face limitations in terms of interpretability of latents with respect to task variables, and can be hard to apply to datasets with no trial repeats. Here we propose a novel probabilistic framework that allows for interpretable partitioning of population variability within and across areas in the context of naturalistic behavior. Our approach for task aligned manifold estimation (TAME-GP) extends a probabilistic variant of demixed PCA by (1) explicitly partitioning variability into private and shared sources, (2) using a Poisson noise model, and (3) introducing temporal smoothing of latent trajectories in the form of a Gaussian Process prior. This TAME-GP graphical model allows for robust estimation of task-relevant variability in local population responses, and of shared co-variability between brain areas. We demonstrate the efficiency of our estimator on within model and biologically motivated simulated data. We also apply it to neural recordings in a closed-loop virtual navigation task in monkeys, demonstrating the capacity of TAME-GP to capture meaningful intra- and inter-area neural variability with single trial resolution.
[ { "created": "Tue, 6 Sep 2022 21:03:40 GMT", "version": "v1" } ]
2022-09-08
[ [ "Balzani", "Edoardo", "" ], [ "Noel", "Jean Paul", "" ], [ "Herrero-Vidal", "Pedro", "" ], [ "Angelaki", "Dora E.", "" ], [ "Savin", "Cristina", "" ] ]
Latent manifolds provide a compact characterization of neural population activity and of shared co-variability across brain areas. Nonetheless, existing statistical tools for extracting neural manifolds face limitations in terms of interpretability of latents with respect to task variables, and can be hard to apply to datasets with no trial repeats. Here we propose a novel probabilistic framework that allows for interpretable partitioning of population variability within and across areas in the context of naturalistic behavior. Our approach for task aligned manifold estimation (TAME-GP) extends a probabilistic variant of demixed PCA by (1) explicitly partitioning variability into private and shared sources, (2) using a Poisson noise model, and (3) introducing temporal smoothing of latent trajectories in the form of a Gaussian Process prior. This TAME-GP graphical model allows for robust estimation of task-relevant variability in local population responses, and of shared co-variability between brain areas. We demonstrate the efficiency of our estimator on within model and biologically motivated simulated data. We also apply it to neural recordings in a closed-loop virtual navigation task in monkeys, demonstrating the capacity of TAME-GP to capture meaningful intra- and inter-area neural variability with single trial resolution.
1803.06180
Daqing Guo
Shengdun Wu, Yangsong Zhang, Yan Cui, Heng Li, Jiakang Wang, Lijun Guo, Yang Xia, Dezhong Yao, Peng Xu, Daqing Guo
Heterogeneity of Synaptic Input Connectivity Regulates Spike-based Neuronal Avalanches
37 pages, 9 figures, 1 table
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our mysterious brain is believed to operate near a non-equilibrium point and generate critical self-organized avalanches in neuronal activity. Recent experimental evidence has revealed significant heterogeneity in both synaptic input and output connectivity, but whether the structural heterogeneity participates in the regulation of neuronal avalanches remains poorly understood. By computational modelling, we predict that different types of structural heterogeneity have distinct effects on avalanche neurodynamics. In particular, neuronal avalanches can be triggered at an intermediate level of input heterogeneity, but heterogeneous output connectivity cannot evoke avalanche dynamics. In the criticality region, the co-emergence of multi-scale cortical activities is observed, and both the avalanche dynamics and neuronal oscillations are modulated by the input heterogeneity. Remarkably, we show similar results can be reproduced in networks with various types of in- and out-degree distributions. Overall, these findings not only provide details on the underlying circuitry mechanisms of nonrandom synaptic connectivity in the regulation of neuronal avalanches, but also inspire testable hypotheses for future experimental studies.
[ { "created": "Fri, 16 Mar 2018 11:54:38 GMT", "version": "v1" }, { "created": "Wed, 11 Jul 2018 13:26:39 GMT", "version": "v2" } ]
2018-07-12
[ [ "Wu", "Shengdun", "" ], [ "Zhang", "Yangsong", "" ], [ "Cui", "Yan", "" ], [ "Li", "Heng", "" ], [ "Wang", "Jiakang", "" ], [ "Guo", "Lijun", "" ], [ "Xia", "Yang", "" ], [ "Yao", "Dezhong", ...
Our mysterious brain is believed to operate near a non-equilibrium point and generate critical self-organized avalanches in neuronal activity. Recent experimental evidence has revealed significant heterogeneity in both synaptic input and output connectivity, but whether the structural heterogeneity participates in the regulation of neuronal avalanches remains poorly understood. By computational modelling, we predict that different types of structural heterogeneity have distinct effects on avalanche neurodynamics. In particular, neuronal avalanches can be triggered at an intermediate level of input heterogeneity, but heterogeneous output connectivity cannot evoke avalanche dynamics. In the criticality region, the co-emergence of multi-scale cortical activities is observed, and both the avalanche dynamics and neuronal oscillations are modulated by the input heterogeneity. Remarkably, we show similar results can be reproduced in networks with various types of in- and out-degree distributions. Overall, these findings not only provide details on the underlying circuitry mechanisms of nonrandom synaptic connectivity in the regulation of neuronal avalanches, but also inspire testable hypotheses for future experimental studies.
2203.08149
Arman Hasanzadeh
Arman Hasanzadeh, Ehsan Hajiramezanali, Nick Duffield, Xiaoning Qian
MoReL: Multi-omics Relational Learning
null
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-omics data analysis has the potential to discover hidden molecular interactions, revealing potential regulatory and/or signal transduction pathways for cellular processes of interest when studying life and disease systems. One of the critical challenges when dealing with real-world multi-omics data is that they may manifest heterogeneous structures and data quality, as existing data may often be collected from different subjects under different conditions for each type of omics data. We propose a novel deep Bayesian generative model to efficiently infer a multi-partite graph encoding molecular interactions across such heterogeneous views, using a fused Gromov-Wasserstein (FGW) regularization between latent representations of corresponding views for integrative analysis. With such an optimal transport regularization in the deep Bayesian generative model, it not only allows incorporating view-specific side information, either with graph-structured or unstructured data in different views, but also increases the model flexibility with the distribution-based regularization. This allows efficient alignment of heterogeneous latent variable distributions to derive reliable interaction predictions compared to the existing point-based graph embedding methods. Our experiments on several real-world datasets demonstrate enhanced performance of MoReL in inferring meaningful interactions compared to existing baselines.
[ { "created": "Tue, 15 Mar 2022 02:50:07 GMT", "version": "v1" } ]
2022-03-18
[ [ "Hasanzadeh", "Arman", "" ], [ "Hajiramezanali", "Ehsan", "" ], [ "Duffield", "Nick", "" ], [ "Qian", "Xiaoning", "" ] ]
Multi-omics data analysis has the potential to discover hidden molecular interactions, revealing potential regulatory and/or signal transduction pathways for cellular processes of interest when studying life and disease systems. One of the critical challenges when dealing with real-world multi-omics data is that they may manifest heterogeneous structures and data quality, as existing data may often be collected from different subjects under different conditions for each type of omics data. We propose a novel deep Bayesian generative model to efficiently infer a multi-partite graph encoding molecular interactions across such heterogeneous views, using a fused Gromov-Wasserstein (FGW) regularization between latent representations of corresponding views for integrative analysis. With such an optimal transport regularization in the deep Bayesian generative model, it not only allows incorporating view-specific side information, either with graph-structured or unstructured data in different views, but also increases the model flexibility with the distribution-based regularization. This allows efficient alignment of heterogeneous latent variable distributions to derive reliable interaction predictions compared to the existing point-based graph embedding methods. Our experiments on several real-world datasets demonstrate enhanced performance of MoReL in inferring meaningful interactions compared to existing baselines.
2405.02076
Ajit Johnson Nirmal
Ajit J. Nirmal and Peter K. Sorger
SCIMAP: A Python Toolkit for Integrated Spatial Analysis of Multiplexed Imaging Data
6 pages, 1 figure
null
null
null
q-bio.QM q-bio.TO
http://creativecommons.org/licenses/by-sa/4.0/
Multiplexed imaging data are revolutionizing our understanding of the composition and organization of tissues and tumors. A critical aspect of such tissue profiling is quantifying the spatial relationships among cells at different scales, from the interaction of neighboring cells to recurrent communities of cells of multiple types. This often involves statistical analysis of 10^7 or more cells in which up to 100 biomolecules (commonly proteins) have been measured. While software tools currently cater to the analysis of spatial transcriptomics data, there remains a need for toolkits explicitly tailored to the complexities of multiplexed imaging data, including the need to seamlessly integrate image visualization with data analysis and exploration. We introduce SCIMAP, a Python package specifically crafted to address these challenges. With SCIMAP, users can efficiently preprocess, analyze, and visualize large datasets, facilitating the exploration of spatial relationships and their statistical significance. SCIMAP's modular design enables the integration of new algorithms, enhancing its capabilities for spatial analysis.
[ { "created": "Fri, 3 May 2024 13:10:07 GMT", "version": "v1" } ]
2024-05-06
[ [ "Nirmal", "Ajit J.", "" ], [ "Sorger", "Peter K.", "" ] ]
Multiplexed imaging data are revolutionizing our understanding of the composition and organization of tissues and tumors. A critical aspect of such tissue profiling is quantifying the spatial relationships among cells at different scales, from the interaction of neighboring cells to recurrent communities of cells of multiple types. This often involves statistical analysis of 10^7 or more cells in which up to 100 biomolecules (commonly proteins) have been measured. While software tools currently cater to the analysis of spatial transcriptomics data, there remains a need for toolkits explicitly tailored to the complexities of multiplexed imaging data, including the need to seamlessly integrate image visualization with data analysis and exploration. We introduce SCIMAP, a Python package specifically crafted to address these challenges. With SCIMAP, users can efficiently preprocess, analyze, and visualize large datasets, facilitating the exploration of spatial relationships and their statistical significance. SCIMAP's modular design enables the integration of new algorithms, enhancing its capabilities for spatial analysis.
2312.02248
Sophia Krix
Sophia Krix, Ella Wilczynski, Neus Falg\`as, Raquel S\'anchez-Valle, Eti Yoles, Uri Nevo, Kuti Baruch, Holger Fr\"ohlich
Towards early diagnosis of Alzheimer's disease: Advances in immune-related blood biomarkers and computational modeling approaches
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Alzheimer's disease has an increasing prevalence in the population world-wide, yet current diagnostic methods based on recommended biomarkers are only available in specialized clinics. Due to these circumstances, Alzheimer's disease is usually diagnosed late, which contrasts with the currently available treatment options that are only effective for patients at an early stage. Blood-based biomarkers could fill in the gap of easily accessible and low-cost methods for early diagnosis of the disease. In particular, immune-based blood-biomarkers might be a promising option, given the recently discovered cross-talk of immune cells of the central nervous system with those in the peripheral immune system. With the help of machine learning algorithms and mechanistic modeling approaches, such as agent-based modeling, an in-depth analysis is possible both of simulated cell dynamics and of high-dimensional omics resources indicative of pathway signaling changes. Here, we give a background on advances in research on brain-immune system cross-talk in Alzheimer's disease and review recent machine learning and mechanistic modeling approaches which leverage modern omics technologies for blood-based immune system-related biomarker discovery.
[ { "created": "Mon, 4 Dec 2023 16:05:45 GMT", "version": "v1" }, { "created": "Wed, 6 Dec 2023 10:05:42 GMT", "version": "v2" } ]
2023-12-07
[ [ "Krix", "Sophia", "" ], [ "Wilczynski", "Ella", "" ], [ "Falgàs", "Neus", "" ], [ "Sánchez-Valle", "Raquel", "" ], [ "Yoles", "Eti", "" ], [ "Nevo", "Uri", "" ], [ "Baruch", "Kuti", "" ], [ "Fröhlic...
Alzheimer's disease has an increasing prevalence in the population world-wide, yet current diagnostic methods based on recommended biomarkers are only available in specialized clinics. Due to these circumstances, Alzheimer's disease is usually diagnosed late, which contrasts with the currently available treatment options that are only effective for patients at an early stage. Blood-based biomarkers could fill in the gap of easily accessible and low-cost methods for early diagnosis of the disease. In particular, immune-based blood-biomarkers might be a promising option, given the recently discovered cross-talk of immune cells of the central nervous system with those in the peripheral immune system. With the help of machine learning algorithms and mechanistic modeling approaches, such as agent-based modeling, an in-depth analysis is possible both of simulated cell dynamics and of high-dimensional omics resources indicative of pathway signaling changes. Here, we give a background on advances in research on brain-immune system cross-talk in Alzheimer's disease and review recent machine learning and mechanistic modeling approaches which leverage modern omics technologies for blood-based immune system-related biomarker discovery.
2005.08395
Babacar Mbaye Ndiaye
Mouhamadou A.M.T. Balde, Coura Balde, Babacar M. Ndiaye
Impact studies of nationwide measures COVID-19 anti-pandemic: compartmental model and machine learning
null
null
null
null
q-bio.PE physics.soc-ph stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the impact of nationwide anti-pandemic measures against COVID-19. We develop two processes to analyze COVID-19 data that take these measures into account. We associate the level of nationwide measures with the values of model parameters related to the contact rate. A parametric solution with respect to these measure-related parameters then shows different possible evolutions of the pandemic. Two machine learning tools are used to forecast the evolution of the pandemic. Finally, we compare the deterministic model with the two machine learning tools.
[ { "created": "Sun, 17 May 2020 23:23:38 GMT", "version": "v1" }, { "created": "Sun, 9 Aug 2020 21:57:38 GMT", "version": "v2" } ]
2020-08-11
[ [ "Balde", "Mouhamadou A. M. T.", "" ], [ "Balde", "Coura", "" ], [ "Ndiaye", "Babacar M.", "" ] ]
In this paper, we study the impact of nationwide anti-pandemic measures against COVID-19. We develop two processes to analyze COVID-19 data that take these measures into account. We associate the level of nationwide measures with the values of model parameters related to the contact rate. A parametric solution with respect to these measure-related parameters then shows different possible evolutions of the pandemic. Two machine learning tools are used to forecast the evolution of the pandemic. Finally, we compare the deterministic model with the two machine learning tools.
1310.3796
Chad Giusti
Chad Giusti and Vladimir Itskov
A no-go theorem for one-layer feedforward networks
10 pages, 2 figures
null
null
null
q-bio.NC math.CO math.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is often hypothesized that a crucial role for recurrent connections in the brain is to constrain the set of possible response patterns, thereby shaping the neural code. This implies the existence of neural codes that cannot arise solely from feedforward processing. We set out to find such codes in the context of one-layer feedforward networks, and identified a large class of combinatorial codes that indeed cannot be shaped by the feedforward architecture alone. However, these codes are difficult to distinguish from codes that share the same sets of maximal activity patterns in the presence of noise. When we coarsened the notion of combinatorial neural code to keep track only of maximal patterns, we found the surprising result that all such codes can in fact be realized by one-layer feedforward networks. This suggests that recurrent or many-layer feedforward architectures are not necessary for shaping the (coarse) combinatorial features of neural codes. In particular, it is not possible to infer a computational role for recurrent connections from the combinatorics of neural response patterns alone. Our proofs use mathematical tools from classical combinatorial topology, such as the nerve lemma and the existence of an inverse nerve. An unexpected corollary of our main result is that any prescribed (finite) homotopy type can be realized by removing a polyhedron from the positive orthant of some Euclidean space.
[ { "created": "Mon, 14 Oct 2013 19:16:11 GMT", "version": "v1" } ]
2013-10-15
[ [ "Giusti", "Chad", "" ], [ "Itskov", "Vladimir", "" ] ]
It is often hypothesized that a crucial role for recurrent connections in the brain is to constrain the set of possible response patterns, thereby shaping the neural code. This implies the existence of neural codes that cannot arise solely from feedforward processing. We set out to find such codes in the context of one-layer feedforward networks, and identified a large class of combinatorial codes that indeed cannot be shaped by the feedforward architecture alone. However, these codes are difficult to distinguish from codes that share the same sets of maximal activity patterns in the presence of noise. When we coarsened the notion of combinatorial neural code to keep track only of maximal patterns, we found the surprising result that all such codes can in fact be realized by one-layer feedforward networks. This suggests that recurrent or many-layer feedforward architectures are not necessary for shaping the (coarse) combinatorial features of neural codes. In particular, it is not possible to infer a computational role for recurrent connections from the combinatorics of neural response patterns alone. Our proofs use mathematical tools from classical combinatorial topology, such as the nerve lemma and the existence of an inverse nerve. An unexpected corollary of our main result is that any prescribed (finite) homotopy type can be realized by removing a polyhedron from the positive orthant of some Euclidean space.
2404.13902
Rebecca Russell
RD Russell, J He, LJ Black, A Begley
Evaluating experiences in a digital nutrition education program for people with multiple sclerosis: a qualitative study
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background Multiple sclerosis (MS) is a complex immune-mediated disease with no currently known cure. There is growing evidence to support the role of diet in reducing some of the symptoms and disease progression in MS, and we previously developed and tested the feasibility of a digital nutrition education program for people with MS. Objective The aim of this study was to explore factors that influenced engagement in the digital nutrition education program, including features influencing capability, opportunity, and motivation to change their dietary behaviours. Methods Semi-structured interviews were conducted with people with MS who completed some or all of the program, until data saturation was reached. Interviews were analysed inductively using thematic analysis. Themes were deductively mapped against the COM-B behaviour change model. Results 16 interviews were conducted with participants who completed all (n=10) or some of the program (n=6). Four themes emerged: 1) Acquiring and validating nutrition knowledge; 2) Influence of time and social support; 3) Getting in early to improve health; and 4) Accounting for food literacy experiences. Discussion This is the first online nutrition program with suitable behavioural supports for people with MS. It highlights the importance of disease-specific and evidence-based nutrition education to support people with MS to make dietary changes. Acquiring nutrition knowledge, coupled with practical support mechanisms such as recipe booklets and goal-setting, emerged as crucial for facilitating engagement with the program. Conclusions When designing education programs for people with MS and other neurological conditions, healthcare professionals and program designers should consider flexible delivery and building peer support to address the needs and challenges faced by participants.
[ { "created": "Mon, 22 Apr 2024 06:24:00 GMT", "version": "v1" } ]
2024-04-23
[ [ "Russell", "RD", "" ], [ "He", "J", "" ], [ "Black", "LJ", "" ], [ "Begley", "A", "" ] ]
Background Multiple sclerosis (MS) is a complex immune-mediated disease with no currently known cure. There is growing evidence to support the role of diet in reducing some of the symptoms and disease progression in MS, and we previously developed and tested the feasibility of a digital nutrition education program for people with MS. Objective The aim of this study was to explore factors that influenced engagement in the digital nutrition education program, including features influencing capability, opportunity, and motivation to change their dietary behaviours. Methods Semi-structured interviews were conducted with people with MS who completed some or all of the program, until data saturation was reached. Interviews were analysed inductively using thematic analysis. Themes were deductively mapped against the COM-B behaviour change model. Results 16 interviews were conducted with participants who completed all (n=10) or some of the program (n=6). Four themes emerged: 1) Acquiring and validating nutrition knowledge; 2) Influence of time and social support; 3) Getting in early to improve health; and 4) Accounting for food literacy experiences. Discussion This is the first online nutrition program with suitable behavioural supports for people with MS. It highlights the importance of disease-specific and evidence-based nutrition education to support people with MS to make dietary changes. Acquiring nutrition knowledge, coupled with practical support mechanisms such as recipe booklets and goal-setting, emerged as crucial for facilitating engagement with the program. Conclusions When designing education programs for people with MS and other neurological conditions, healthcare professionals and program designers should consider flexible delivery and building peer support to address the needs and challenges faced by participants.
1210.2320
Diana Garcia Lopez
Benjamin J. Z. Quigley, Diana Garc\'ia L\'opez, Angus Buckling, Alan J. McKane, and Sam P. Brown
The mode of host-parasite interaction shapes coevolutionary dynamics and the fate of host cooperation
8 pages, 4 figures, 1 Supplementary Material file attached (to view it, please download the source file listed under "Other formats")
Proc. R. Soc. B (2012) vol. 279 no. 1743, 3742-3748
10.1098/rspb.2012.0769
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Antagonistic coevolution between hosts and parasites can have a major impact on host population structures, and hence on the evolution of social traits. Using stochastic modelling techniques in the context of bacteria-virus interactions, we investigate the impact of coevolution across a continuum of host-parasite genetic specificity (specifically, from genotypes having the same infectivity/resistance ranges (matching alleles, MA) to highly variable ranges (gene-for-gene, GFG)) on population genetic structure, and on the social behaviour of the host. We find that host cooperation is more likely to be maintained towards the MA end of the continuum, as the more frequent bottlenecks associated with an MA-like interaction can prevent defector invasion, and can even allow migrant cooperators to invade populations of defectors.
[ { "created": "Mon, 8 Oct 2012 15:56:29 GMT", "version": "v1" } ]
2012-10-09
[ [ "Quigley", "Benjamin J. Z.", "" ], [ "López", "Diana García", "" ], [ "Buckling", "Angus", "" ], [ "McKane", "Alan J.", "" ], [ "Brown", "Sam P.", "" ] ]
Antagonistic coevolution between hosts and parasites can have a major impact on host population structures, and hence on the evolution of social traits. Using stochastic modelling techniques in the context of bacteria-virus interactions, we investigate the impact of coevolution across a continuum of host-parasite genetic specificity (specifically, from genotypes having the same infectivity/resistance ranges (matching alleles, MA) to highly variable ranges (gene-for-gene, GFG)) on population genetic structure, and on the social behaviour of the host. We find that host cooperation is more likely to be maintained towards the MA end of the continuum, as the more frequent bottlenecks associated with an MA-like interaction can prevent defector invasion, and can even allow migrant cooperators to invade populations of defectors.
2403.09010
Andreas Tiffeau-Mayer
Touchchai Chotisorayuth and Andreas Tiffeau-Mayer
Lightning-fast adaptive immune receptor similarity search by symmetric deletion lookup
13 pages, 8 figures
null
null
null
q-bio.QM q-bio.GN
http://creativecommons.org/licenses/by/4.0/
An individual's adaptive immune receptor (AIR) repertoire records immune history due to the exquisite antigen specificity of AIRs. Reading this record requires computational approaches for inferring receptor function from sequence, as the diversity of possible receptor-antigen pairs vastly outstrips experimental knowledge. Identification of AIRs with similar sequence and thus putatively similar function is a common performance bottleneck in these approaches. Here, we benchmark the time complexity of five different algorithmic approaches to radius-based search for Levenshtein neighbors. We show that a symmetric deletion lookup approach, originally proposed for spell-checking, is particularly scalable. We then introduce XTNeighbor, a variant of this algorithm that can be massively parallelized on GPUs. For one million input sequences, XTNeighbor identifies all sequence neighbors that differ by up to two edits in seconds on commodity hardware, orders of magnitude faster than existing approaches. We also demonstrate how symmetric deletion lookup can speed up search with more complex sequence-similarity metrics such as TCRdist. Our contribution is poised to greatly speed up existing analysis pipelines and enable processing of large-scale immunosequencing data without downsampling.
[ { "created": "Thu, 14 Mar 2024 00:17:34 GMT", "version": "v1" } ]
2024-03-15
[ [ "Chotisorayuth", "Touchchai", "" ], [ "Tiffeau-Mayer", "Andreas", "" ] ]
An individual's adaptive immune receptor (AIR) repertoire records immune history due to the exquisite antigen specificity of AIRs. Reading this record requires computational approaches for inferring receptor function from sequence, as the diversity of possible receptor-antigen pairs vastly outstrips experimental knowledge. Identification of AIRs with similar sequence and thus putatively similar function is a common performance bottleneck in these approaches. Here, we benchmark the time complexity of five different algorithmic approaches to radius-based search for Levenshtein neighbors. We show that a symmetric deletion lookup approach, originally proposed for spell-checking, is particularly scalable. We then introduce XTNeighbor, a variant of this algorithm that can be massively parallelized on GPUs. For one million input sequences, XTNeighbor identifies all sequence neighbors that differ by up to two edits in seconds on commodity hardware, orders of magnitude faster than existing approaches. We also demonstrate how symmetric deletion lookup can speed up search with more complex sequence-similarity metrics such as TCRdist. Our contribution is poised to greatly speed up existing analysis pipelines and enable processing of large-scale immunosequencing data without downsampling.
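The symmetric deletion lookup described in the abstract above can be illustrated with a minimal Python sketch. This is not the authors' XTNeighbor implementation, just the core idea under one assumption stated here: two sequences within Levenshtein distance k share at least one string obtainable by deleting up to k characters from each, so an inverted index keyed on deletion variants yields candidate pairs, which are then verified with the exact edit distance. The function names and example sequences are invented for illustration.

```python
from itertools import combinations


def levenshtein(a, b):
    # standard dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]


def deletion_variants(seq, max_del):
    # all strings obtained by deleting up to max_del characters from seq
    variants = {seq}
    for d in range(1, max_del + 1):
        for idx in combinations(range(len(seq)), d):
            dropped = set(idx)
            variants.add("".join(c for i, c in enumerate(seq) if i not in dropped))
    return variants


def neighbor_pairs(seqs, max_edits=1):
    # inverted index: deletion variant -> original sequences producing it
    index = {}
    for s in seqs:
        for v in deletion_variants(s, max_edits):
            index.setdefault(v, []).append(s)
    # sequences sharing a variant are candidates; verify with exact distance
    pairs = set()
    for bucket in index.values():
        for a, b in combinations(sorted(set(bucket)), 2):
            if levenshtein(a, b) <= max_edits:
                pairs.add((a, b))
    return pairs
```

For example, `neighbor_pairs(["CASSLG", "CASSLA", "CATSLG"], max_edits=1)` links the first sequence to both single-substitution variants while excluding the distance-2 pair. The candidate-generation cost grows combinatorially in `max_edits`, which is why the paper restricts attention to small radii.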
2204.10847
Dibakar Ghosh Dr.
Tugba Palabas, Andre Longtin, Dibakar Ghosh, Muhammet Uzuntarla
Controlling the spontaneous firing behavior of a neuron with astrocyte
11 pages, 11 figures; accepted for publication in Chaos: An Interdisciplinary Journal of Nonlinear Science, 2022
null
10.1063/5.0093234
null
q-bio.NC nlin.CD
http://creativecommons.org/licenses/by/4.0/
Mounting evidence in recent years suggests that astrocytes, a sub-type of glial cells, not only provide metabolic and structural support for neurons and synapses but also play critical roles in regulating the proper functioning of the nervous system. In this work, we investigate the effect of an astrocyte on the spontaneous firing activity of a neuron through a combined model which includes a neuron-astrocyte pair. First, we show that an astrocyte may provide a kind of multistability in neuron dynamics by inducing different firing modes such as random and bursty spiking. Then, we identify the underlying mechanism of this behavior and search for the astrocytic factors that may have regulatory roles in different firing regimes. More specifically, we explore how an astrocyte can participate in the occurrence and control of spontaneous irregular spiking activity of a neuron in random spiking mode. Additionally, we systematically investigate the bursty firing regime dynamics of the neuron under the variation of biophysical factors related to the intracellular environment of the astrocyte. It is found that an astrocyte coupled to a neuron can provide a control mechanism for both spontaneous firing irregularity and burst firing statistics, i.e., burst regularity and size.
[ { "created": "Fri, 22 Apr 2022 17:53:34 GMT", "version": "v1" } ]
2022-05-18
[ [ "Palabas", "Tugba", "" ], [ "Longtin", "Andre", "" ], [ "Ghosh", "Dibakar", "" ], [ "Uzuntarla", "Muhammet", "" ] ]
Mounting evidence in recent years suggests that astrocytes, a sub-type of glial cells, not only provide metabolic and structural support for neurons and synapses but also play critical roles in regulating the proper functioning of the nervous system. In this work, we investigate the effect of an astrocyte on the spontaneous firing activity of a neuron through a combined model which includes a neuron-astrocyte pair. First, we show that an astrocyte may provide a kind of multistability in neuron dynamics by inducing different firing modes such as random and bursty spiking. Then, we identify the underlying mechanism of this behavior and search for the astrocytic factors that may have regulatory roles in different firing regimes. More specifically, we explore how an astrocyte can participate in the occurrence and control of spontaneous irregular spiking activity of a neuron in random spiking mode. Additionally, we systematically investigate the bursty firing regime dynamics of the neuron under the variation of biophysical factors related to the intracellular environment of the astrocyte. It is found that an astrocyte coupled to a neuron can provide a control mechanism for both spontaneous firing irregularity and burst firing statistics, i.e., burst regularity and size.
0812.3840
Simone Bianco
Simone Bianco, Leah B. Shaw, Ira B. Schwartz
Epidemics with Multistrain Interactions: The Interplay Between Cross Immunity and Antibody-Dependent Enhancement
10 pages, 11 figures. Updated version
null
10.1063/1.3270261
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper examines the interplay between cross immunity and antibody-dependent enhancement (ADE) in multistrain diseases. Motivated by dengue fever, we study a model for the spreading of epidemics in a population with multistrain interactions mediated by both partial temporary cross immunity and ADE. Although ADE models have previously been observed to cause chaotic outbreaks, we show analytically that weak cross immunity has a stabilizing effect on the system. That is, the onset of disease fluctuations requires a larger value of ADE with small cross immunity than without. However, strong cross immunity is shown numerically to cause oscillations and chaotic outbreaks even for low values of ADE.
[ { "created": "Fri, 19 Dec 2008 16:32:26 GMT", "version": "v1" }, { "created": "Tue, 28 Jul 2009 15:14:03 GMT", "version": "v2" } ]
2015-05-13
[ [ "Bianco", "Simone", "" ], [ "Shaw", "Leah B.", "" ], [ "Schwartz", "Ira B.", "" ] ]
This paper examines the interplay between cross immunity and antibody-dependent enhancement (ADE) in multistrain diseases. Motivated by dengue fever, we study a model for the spreading of epidemics in a population with multistrain interactions mediated by both partial temporary cross immunity and ADE. Although ADE models have previously been observed to cause chaotic outbreaks, we show analytically that weak cross immunity has a stabilizing effect on the system. That is, the onset of disease fluctuations requires a larger value of ADE with small cross immunity than without. However, strong cross immunity is shown numerically to cause oscillations and chaotic outbreaks even for low values of ADE.
2211.00938
Farzana Nasrin
Erik J. Am\'ezquita, Farzana Nasrin, Kathleen M. Storey, and Masato Yoshizawa
Genomics Data Analysis via Spectral Shape and Topology
21 pages and 10 figures
null
10.1371/journal.pone.0284820
null
q-bio.GN math.AT stat.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mapper, a topological algorithm, is frequently used as an exploratory tool to build a graphical representation of data. This representation can help to gain a better understanding of the intrinsic shape of high-dimensional genomic data and to retain information that may be lost using standard dimension-reduction algorithms. We propose a novel workflow to process and analyze RNA-seq data from tumor and healthy subjects integrating Mapper and differential gene expression. Precisely, we show that a Gaussian mixture approximation method can be used to produce graphical structures that successfully separate tumor and healthy subjects, and produce two subgroups of tumor subjects. A further analysis using DESeq2, a popular tool for the detection of differentially expressed genes, shows that these two subgroups of tumor cells bear two distinct gene regulations, suggesting two discrete paths for forming lung cancer, which could not be highlighted by other popular clustering methods, including t-SNE. Although Mapper shows promise in analyzing high-dimensional data, building tools to statistically analyze Mapper graphical structures is limited in the existing literature. In this paper, we develop a scoring method using heat kernel signatures that provides an empirical setting for statistical inferences such as hypothesis testing, sensitivity analysis, and correlation analysis.
[ { "created": "Wed, 2 Nov 2022 07:51:59 GMT", "version": "v1" } ]
2023-07-19
[ [ "Amézquita", "Erik J.", "" ], [ "Nasrin", "Farzana", "" ], [ "Storey", "Kathleen M.", "" ], [ "Yoshizawa", "Masato", "" ] ]
Mapper, a topological algorithm, is frequently used as an exploratory tool to build a graphical representation of data. This representation can help to gain a better understanding of the intrinsic shape of high-dimensional genomic data and to retain information that may be lost using standard dimension-reduction algorithms. We propose a novel workflow to process and analyze RNA-seq data from tumor and healthy subjects integrating Mapper and differential gene expression. Precisely, we show that a Gaussian mixture approximation method can be used to produce graphical structures that successfully separate tumor and healthy subjects, and produce two subgroups of tumor subjects. A further analysis using DESeq2, a popular tool for the detection of differentially expressed genes, shows that these two subgroups of tumor cells bear two distinct gene regulations, suggesting two discrete paths for forming lung cancer, which could not be highlighted by other popular clustering methods, including t-SNE. Although Mapper shows promise in analyzing high-dimensional data, building tools to statistically analyze Mapper graphical structures is limited in the existing literature. In this paper, we develop a scoring method using heat kernel signatures that provides an empirical setting for statistical inferences such as hypothesis testing, sensitivity analysis, and correlation analysis.
1512.06557
Gunther Schauberger
C. Wu, J. Liu, P. Zhao, M. Piringer, G. Schauberger
Conversion of the chemical concentration of odorous mixtures into odour concentration and odour intensity: a comparison of methods
accepted for publication on Dec. 20, 2015, Atmospheric Environment (2016)
null
10.1016/j.atmosenv.2015.12.051
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Continuous odour measurements both of emissions as well as ambient concentrations are seldom realised, mainly because of their high costs. They are therefore often substituted by concentration measurements of odorous substances. Then a conversion of the chemical concentrations C (mg m-3) into odour concentrations COD (ouE m-3) and odour intensities OI is necessary. Four methods to convert the concentrations of single substances to the odour concentrations and odour intensities of an odorous mixture are investigated: (1) direct use of measured concentrations, (2) the sum of the odour activity value SOAV, (3) the sum of the odour intensities SOI, and (4) the equivalent odour concentration EOC, as a new method. The methods are evaluated with olfactometric measurements of seven substances as well as their mixtures. The results indicate that the SOI and EOC conversion methods deliver reliable values. These methods use not only the odour threshold concentration but also the slope of the Weber-Fechner law to include the sensitivity of the odour perception of the individual substances. They fulfil the criteria of an objective conversion without the need of a further calibration by additional olfactometric measurements.
[ { "created": "Mon, 21 Dec 2015 10:00:39 GMT", "version": "v1" } ]
2015-12-28
[ [ "Wu", "C.", "" ], [ "Liu", "J.", "" ], [ "Zhao", "P.", "" ], [ "Piringer", "M.", "" ], [ "Schauberger", "G.", "" ] ]
Continuous odour measurements both of emissions as well as ambient concentrations are seldom realised, mainly because of their high costs. They are therefore often substituted by concentration measurements of odorous substances. Then a conversion of the chemical concentrations C (mg m-3) into odour concentrations COD (ouE m-3) and odour intensities OI is necessary. Four methods to convert the concentrations of single substances to the odour concentrations and odour intensities of an odorous mixture are investigated: (1) direct use of measured concentrations, (2) the sum of the odour activity value SOAV, (3) the sum of the odour intensities SOI, and (4) the equivalent odour concentration EOC, as a new method. The methods are evaluated with olfactometric measurements of seven substances as well as their mixtures. The results indicate that the SOI and EOC conversion methods deliver reliable values. These methods use not only the odour threshold concentration but also the slope of the Weber-Fechner law to include the sensitivity of the odour perception of the individual substances. They fulfil the criteria of an objective conversion without the need of a further calibration by additional olfactometric measurements.
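Two of the conversion methods summarized in the abstract above can be sketched numerically. This is a hedged illustration, not the paper's calibration: the odour activity value of each substance is taken as its concentration divided by its odour threshold concentration, SOAV is their sum, and the SOI method is assumed here to use a Weber-Fechner form OI_i = k_i * log10(C_i / C_thr,i) (clipped at zero below threshold) before summing. All numeric values in the test are invented.

```python
import math


def soav(conc, thresholds):
    # sum of odour activity values: concentration / odour threshold per substance
    return sum(c / t for c, t in zip(conc, thresholds))


def soi(conc, thresholds, slopes):
    # sum of odour intensities; each substance converted with an assumed
    # Weber-Fechner form OI_i = k_i * log10(C_i / C_thr_i), zero below threshold
    return sum(max(0.0, k * math.log10(c / t))
               for c, t, k in zip(conc, thresholds, slopes))
```

The contrast between the two is the point made in the abstract: SOAV uses only threshold concentrations, while SOI additionally weights each substance by the slope of its Weber-Fechner law, i.e. by the sensitivity of odour perception to that substance.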
0906.0254
Arto Annila
Salla Jaakkola, Vivek Sharma, Arto Annila
Cause of Chirality Consensus
8 pages, 2 figures
Current Chemical Biology, 2008, 2, 153-158
10.2174/187231308784220536
null
q-bio.PE q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological macromolecules, proteins and nucleic acids are composed exclusively of chirally pure monomers. The chirality consensus appears vital for life and it has even been considered as a prerequisite of life. However, the primary cause of the ubiquitous handedness has remained obscure. We propose that the chirality consensus is a kinetic consequence that follows from the principle of increasing entropy, i.e. the 2nd law of thermodynamics. Entropy increases when an open system evolves by decreasing gradients in free energy with more and more efficient mechanisms of energy transduction. The rate of entropy increase is the universal fitness criterion of natural selection that favors diverse functional molecules and drives the system to the chirality consensus to attain and maintain high-entropy non-equilibrium states.
[ { "created": "Mon, 1 Jun 2009 10:30:35 GMT", "version": "v1" } ]
2009-06-02
[ [ "Jaakkola", "Salla", "" ], [ "Sharma", "Vivek", "" ], [ "Annila", "Arto", "" ] ]
Biological macromolecules, proteins and nucleic acids are composed exclusively of chirally pure monomers. The chirality consensus appears vital for life and it has even been considered as a prerequisite of life. However, the primary cause of the ubiquitous handedness has remained obscure. We propose that the chirality consensus is a kinetic consequence that follows from the principle of increasing entropy, i.e. the 2nd law of thermodynamics. Entropy increases when an open system evolves by decreasing gradients in free energy with more and more efficient mechanisms of energy transduction. The rate of entropy increase is the universal fitness criterion of natural selection that favors diverse functional molecules and drives the system to the chirality consensus to attain and maintain high-entropy non-equilibrium states.
1610.07373
Thierry Rabilloud
Thierry Rabilloud
A single step protein assay that is both detergent and reducer compatible: The cydex blue assay
null
ELECTROPHORESIS, Wiley-VCH Verlag, 2016, 37 (20), pp.2595-2601
10.1002/elps.201600270
null
q-bio.GN q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determination of protein concentration is often an absolute pre-requisite in preparing samples for biochemical and proteomic analyses. However, current protein assay methods are not compatible with both reducers and detergents, which are however present simultaneously in most denaturing extraction buffers used in proteomics and electrophoresis, and in particular in SDS electrophoresis. We found that inclusion of cyclodextrins in a Coomassie blue-based assay made it compatible with detergents, as cyclodextrins complex detergents in a 1:1 molecular ratio. As this type of assay is intrinsically resistant to reducers, we have thus developed a single step assay that is both detergent and reducer compatible. Depending on the type and concentration of detergents present in the sample buffer, either beta-cyclodextrin or alpha-cyclodextrin can be used, the former being able to complex a wider range of detergents and the latter being able to complex higher amounts of detergents due to its greater solubility in water. Cyclodextrins are used at final concentrations of 2-10 mg/mL in the assay mix. This typically allows measurement of samples containing as little as 0.1 mg/mL protein, in the presence of up to 2% detergent and reducers such as 5% mercaptoethanol or 50 mM DTT, in a single step with a simple spectrophotometric assay. This article is protected by copyright. All rights reserved.
[ { "created": "Mon, 24 Oct 2016 11:44:07 GMT", "version": "v1" } ]
2016-10-25
[ [ "Rabilloud", "Thierry", "" ] ]
Determination of protein concentration is often an absolute pre-requisite in preparing samples for biochemical and proteomic analyses. However, current protein assay methods are not compatible with both reducers and detergents, which are nevertheless present simultaneously in most denaturing extraction buffers used in proteomics and electrophoresis, and in particular in SDS electrophoresis. We found that inclusion of cyclodextrins in a Coomassie blue-based assay made it compatible with detergents, as cyclodextrins complex detergents in a 1:1 molecular ratio. As this type of assay is intrinsically resistant to reducers, we have thus developed a single step assay that is both detergent and reducer compatible. Depending on the type and concentration of detergents present in the sample buffer, either beta-cyclodextrin or alpha-cyclodextrin can be used, the former being able to complex a wider range of detergents and the latter being able to complex higher amounts of detergents due to its greater solubility in water. Cyclodextrins are used at final concentrations of 2-10 mg/mL in the assay mix. This typically allows measurement of samples containing as little as 0.1 mg/mL protein, in the presence of up to 2% detergent and reducers such as 5% mercaptoethanol or 50 mM DTT, in a single step with a simple spectrophotometric assay. This article is protected by copyright. All rights reserved.
2003.03638
James Brunner
James D. Brunner and Nicholas Chia
Minimizing the number of optimizations for efficient community dynamic flux balance analysis
9 figures
null
10.1371/journal.pcbi.1007786
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Dynamic flux balance analysis uses a quasi-steady state assumption to calculate an organism's metabolic activity at each time-step of a dynamic simulation, using the well-known technique of flux balance analysis. For microbial communities, this calculation is especially costly and involves solving a linear constrained optimization problem for each member of the community at each time step. However, this is unnecessary and inefficient, as prior solutions can be used to inform future time steps. Here, we show that a basis for the space of internal fluxes can be chosen for each microbe in a community and this basis can be used to simulate forward by solving a relatively inexpensive system of linear equations at most time steps. We can use this solution as long as the resulting metabolic activity remains within the optimization problem's constraints (i.e. the solution to the linear system of equations remains a feasible to the linear program). As the solution becomes infeasible, it first becomes a feasible but degenerate solution to the optimization problem, and we can solve a different but related optimization problem to choose an appropriate basis to continue forward simulation. We demonstrate the efficiency and robustness of our method by comparing with currently used methods on a four species community, and show that our method requires at least $91\%$ fewer optimizations to be solved. For reproducibility, we prototyped the method using Python. Source code is available at \verb|https://github.com/jdbrunner/surfin_fba|.
[ { "created": "Sat, 7 Mar 2020 19:07:31 GMT", "version": "v1" }, { "created": "Tue, 28 Jul 2020 23:08:33 GMT", "version": "v2" } ]
2021-01-27
[ [ "Brunner", "James D.", "" ], [ "Chia", "Nicholas", "" ] ]
Dynamic flux balance analysis uses a quasi-steady state assumption to calculate an organism's metabolic activity at each time-step of a dynamic simulation, using the well-known technique of flux balance analysis. For microbial communities, this calculation is especially costly and involves solving a linear constrained optimization problem for each member of the community at each time step. However, this is unnecessary and inefficient, as prior solutions can be used to inform future time steps. Here, we show that a basis for the space of internal fluxes can be chosen for each microbe in a community and this basis can be used to simulate forward by solving a relatively inexpensive system of linear equations at most time steps. We can use this solution as long as the resulting metabolic activity remains within the optimization problem's constraints (i.e. the solution to the linear system of equations remains a feasible solution to the linear program). As the solution becomes infeasible, it first becomes a feasible but degenerate solution to the optimization problem, and we can solve a different but related optimization problem to choose an appropriate basis to continue forward simulation. We demonstrate the efficiency and robustness of our method by comparing with currently used methods on a four species community, and show that our method requires at least $91\%$ fewer optimizations. For reproducibility, we prototyped the method using Python. Source code is available at \verb|https://github.com/jdbrunner/surfin_fba|.
2003.09032
Sebastian Goncalves Dr
Albertine Weber, Flavio Ianelli, Sebastian Goncalves
Trend analysis of the COVID-19 pandemic in China and the rest of the world
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent epidemic of Coronavirus (COVID-19) that started in China has already been "exported" to more than 140 countries in all the continents, evolving in most of them by local spreading. In this contribution we analyze the trends of the cases reported in all the Chinese provinces, as well as in some countries that, until March 15th, 2020, have more than 500 cases reported. Notably and differently from other epidemics, the provinces did not show an exponential phase. The data available at the Johns Hopkins University site seem to fit well an algebraic sub-exponential growing behavior as was pointed out recently. All the provinces show a clear and consistent pattern of slowing down with growing exponent going nearly zero, so it can be said that the epidemic was contained in China. On the other side, the more recent spread in countries like, Italy, Iran, and Spain show a clear exponential growth, as well as other European countries. Even more recently, US -which was one of the first countries to have an individual infected outside China (Jan 21st, 2020)- seems to follow the same path. We calculate the exponential growth of the most affected countries, showing the evolution along time after the first local case. We identify clearly different patterns in the analyzed data and we give interpretations and possible explanations for them. The analysis and conclusions of our study can help countries that, after importing some cases, are not yet in the local spreading phase, or have just started.
[ { "created": "Thu, 19 Mar 2020 22:22:07 GMT", "version": "v1" } ]
2020-03-23
[ [ "Weber", "Albertine", "" ], [ "Ianelli", "Flavio", "" ], [ "Goncalves", "Sebastian", "" ] ]
The recent epidemic of Coronavirus (COVID-19) that started in China has already been "exported" to more than 140 countries in all the continents, evolving in most of them by local spreading. In this contribution we analyze the trends of the cases reported in all the Chinese provinces, as well as in some countries that, until March 15th, 2020, had more than 500 cases reported. Notably, and differently from other epidemics, the provinces did not show an exponential phase. The data available at the Johns Hopkins University site seem to fit well an algebraic sub-exponential growth behavior, as was pointed out recently. All the provinces show a clear and consistent pattern of slowing down, with the growth exponent going nearly to zero, so it can be said that the epidemic was contained in China. On the other hand, the more recent spread in countries like Italy, Iran, and Spain shows a clear exponential growth, as in other European countries. Even more recently, the US, which was one of the first countries to have an individual infected outside China (Jan 21st, 2020), seems to follow the same path. We calculate the exponential growth of the most affected countries, showing the evolution along time after the first local case. We identify clearly different patterns in the analyzed data and we give interpretations and possible explanations for them. The analysis and conclusions of our study can help countries that, after importing some cases, are not yet in the local spreading phase, or have just started it.
2003.07220
Nikolai Yakovenko
Nikolai Yakovenko, Avantika Lal, Johnny Israeli and Bryan Catanzaro
Genome Variant Calling with a Deep Averaging Network
8 pages, submitted to NeurIPS 2019
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Variant calling, the problem of estimating whether a position in a DNA sequence differs from a reference sequence, given noisy, redundant, overlapping short sequences that cover that position, is fundamental to genomics. We propose a deep averaging network designed specifically for variant calling. Our model takes into account the independence of each short input read sequence by transforming individual reads through a series of convolutional layers, limiting the communication between individual reads to averaging and concatenating operations. Training and testing on the precisionFDA Truth Challenge (pFDA), we match state of the art overall 99.89 F1 score. Genome datasets exhibit extreme skew between easy examples and those on the decision boundary. We take advantage of this property to converge models at 5x the speed of standard epoch-based training by skipping easy examples during training. To facilitate future work, we release our code, trained models and pre-processed public domain datasets.
[ { "created": "Fri, 13 Mar 2020 14:21:56 GMT", "version": "v1" } ]
2020-03-17
[ [ "Yakovenko", "Nikolai", "" ], [ "Lal", "Avantika", "" ], [ "Israeli", "Johnny", "" ], [ "Catanzaro", "Bryan", "" ] ]
Variant calling, the problem of estimating whether a position in a DNA sequence differs from a reference sequence, given noisy, redundant, overlapping short sequences that cover that position, is fundamental to genomics. We propose a deep averaging network designed specifically for variant calling. Our model takes into account the independence of each short input read sequence by transforming individual reads through a series of convolutional layers, limiting the communication between individual reads to averaging and concatenating operations. Training and testing on the precisionFDA Truth Challenge (pFDA), we match the state-of-the-art overall F1 score of 99.89. Genome datasets exhibit extreme skew between easy examples and those on the decision boundary. We take advantage of this property to converge models at 5x the speed of standard epoch-based training by skipping easy examples during training. To facilitate future work, we release our code, trained models and pre-processed public domain datasets.
1811.05288
Yannick Rondelez
Ad\`ele Dram\'e-Maign\'e, Anton Zadorin, Iaroslava Golovkova, Yannick Rondelez
Quantifying the performance of high-throughput directed evolution protocols
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Most protocols for the high-throughput directed evolution of enzymes rely on random encapsulation to link phenotype and genotype. In order to optimize these approaches, or compare one to another, one needs a measure of their performance at extracting the best variants. We introduce here a new metric named the Selection Quality Index (SQI), which can be computed from a simple mock experiment with a known initial fraction of active variants. As opposed to previous approaches, our index integrates the random co-encapsulation of entities in compartments and comes with a straightforward experimental interpretation. We further show how this new metric can be used to extract general trends of protocol efficiency, or reveal hidden mechanisms such as a counterintuitive form of beneficial poisoning in the Compartmentalized Self-Replication protocol.
[ { "created": "Tue, 13 Nov 2018 13:54:48 GMT", "version": "v1" } ]
2018-11-14
[ [ "Dramé-Maigné", "Adèle", "" ], [ "Zadorin", "Anton", "" ], [ "Golovkova", "Iaroslava", "" ], [ "Rondelez", "Yannick", "" ] ]
Most protocols for the high-throughput directed evolution of enzymes rely on random encapsulation to link phenotype and genotype. In order to optimize these approaches, or compare one to another, one needs a measure of their performance at extracting the best variants. We introduce here a new metric named the Selection Quality Index (SQI), which can be computed from a simple mock experiment with a known initial fraction of active variants. As opposed to previous approaches, our index integrates the random co-encapsulation of entities in compartments and comes with a straightforward experimental interpretation. We further show how this new metric can be used to extract general trends of protocol efficiency, or reveal hidden mechanisms such as a counterintuitive form of beneficial poisoning in the Compartmentalized Self-Replication protocol.
1412.0742
Hari Sivakumar
Hari Sivakumar, Stephen R. Proulx and Jo\~ao P. Hespanha
Modular decomposition and analysis of biological networks
Journal / technical report, 43 pages, 17 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the decomposition of biochemical networks into functional modules that preserve their dynamic properties upon interconnection with other modules, which permits the inference of network behavior from the properties of its constituent modules. The modular decomposition method developed here also has the property that any changes in the parameters of a chemical reaction only affect the dynamics of a single module. To illustrate our results, we define and analyze a few key biological modules that arise in gene regulation, enzymatic networks, and signaling pathways. We also provide a collection of examples that demonstrate how the behavior of a biological network can be deduced from the properties of its constituent modules, based on results from control systems theory.
[ { "created": "Mon, 1 Dec 2014 23:55:17 GMT", "version": "v1" } ]
2014-12-03
[ [ "Sivakumar", "Hari", "" ], [ "Proulx", "Stephen R.", "" ], [ "Hespanha", "João P.", "" ] ]
This paper addresses the decomposition of biochemical networks into functional modules that preserve their dynamic properties upon interconnection with other modules, which permits the inference of network behavior from the properties of its constituent modules. The modular decomposition method developed here also has the property that any changes in the parameters of a chemical reaction only affect the dynamics of a single module. To illustrate our results, we define and analyze a few key biological modules that arise in gene regulation, enzymatic networks, and signaling pathways. We also provide a collection of examples that demonstrate how the behavior of a biological network can be deduced from the properties of its constituent modules, based on results from control systems theory.
1407.3682
Giovanni Destro Bisol
Paolo Anagnostou, Marco Capocasa, Nicola Milia, Emanuele Sanna, Daniela Luzi and Giovanni Destro Bisol
When data sharing gets close to 100%: what ancient human DNA studies can teach the Open Science movement
26 pages, 7 figures (1 supplementary), 6 Tables (5 supplementary of which 2 are available only upon request)
null
10.1371/journal.pone.0121409
null
q-bio.PE cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study analyzes rates and ways of data sharing regarding mitochondrial, Y chromosomal and autosomal polymorphisms in a total of 162 papers on human ancient DNA published between 1988 and 2013. For the most part, data are available in such a way as to make their scrutiny and reuse possible. The estimated sharing rate is not far from totality (97.6% +/- 2.1%) and substantially higher than observed in other fields of genetic research (Evolutionary, Medical and Forensic Genetics). A questionnaire-based survey suggests that the authors awareness of the importance of openness and transparency for scientific progress is a fundamental factor for the achievement of such a high sharing rate. Most data were made available through body text, but the use of primary databases increased with the application of complete mitochondrial and next generation sequencing methods. Our study highlights three important aspects. First, we provide evidence that researchers motivations are as necessary as stakeholders policies and norms to achieve very high sharing rates. Second, careful analyses of the ways in which data are made available are an important first step to maximize data findability, accessibility, useability and preservation. Third and finally, the case of human ancient DNA studies demonstrates how Open Science can foster scientific advancements, showing that openness and transparency can help build rigorous and reliable scientific practices even in the presence of complex experimental challenges.
[ { "created": "Mon, 14 Jul 2014 14:58:20 GMT", "version": "v1" }, { "created": "Tue, 15 Jul 2014 10:15:12 GMT", "version": "v2" } ]
2015-08-19
[ [ "Anagnostou", "Paolo", "" ], [ "Capocasa", "Marco", "" ], [ "Milia", "Nicola", "" ], [ "Sanna", "Emanuele", "" ], [ "Luzi", "Daniela", "" ], [ "Bisol", "Giovanni Destro", "" ] ]
This study analyzes rates and ways of data sharing regarding mitochondrial, Y chromosomal and autosomal polymorphisms in a total of 162 papers on human ancient DNA published between 1988 and 2013. For the most part, data are available in such a way as to make their scrutiny and reuse possible. The estimated sharing rate is not far from totality (97.6% +/- 2.1%) and substantially higher than observed in other fields of genetic research (Evolutionary, Medical and Forensic Genetics). A questionnaire-based survey suggests that the authors' awareness of the importance of openness and transparency for scientific progress is a fundamental factor for the achievement of such a high sharing rate. Most data were made available through body text, but the use of primary databases increased with the application of complete mitochondrial and next generation sequencing methods. Our study highlights three important aspects. First, we provide evidence that researchers' motivations are as necessary as stakeholders' policies and norms to achieve very high sharing rates. Second, careful analyses of the ways in which data are made available are an important first step to maximize data findability, accessibility, usability and preservation. Third and finally, the case of human ancient DNA studies demonstrates how Open Science can foster scientific advancements, showing that openness and transparency can help build rigorous and reliable scientific practices even in the presence of complex experimental challenges.
q-bio/0603030
Jean-Philippe Vert
Franck Rapaport (CB, CURIE-BIOINFO), Andrei Zinovyev (CURIE-BIOINFO), Marie Dutreix (GCC), Emmanuel Barillot (CURIE-BIOINFO), Jean-Philippe Vert (CB)
Spectral analysis of gene expression profiles using gene networks
null
null
null
null
q-bio.QM
null
Microarrays have become extremely useful for analysing genetic phenomena, but establishing a relation between microarray analysis results (typically a list of genes) and their biological significance is often difficult. Currently, the standard approach is to map a posteriori the results onto gene networks to elucidate the functions perturbed at the level of pathways. However, integrating a priori knowledge of the gene networks could help in the statistical analysis of gene expression data and in their biological interpretation. Here we propose a method to integrate a priori the knowledge of a gene network in the analysis of gene expression data. The approach is based on the spectral decomposition of gene expression profiles with respect to the eigenfunctions of the graph, resulting in an attenuation of the high-frequency components of the expression profiles with respect to the topology of the graph. We show how to derive unsupervised and supervised classification algorithms of expression profiles, resulting in classifiers with biological relevance. We applied the method to the analysis of a set of expression profiles from irradiated and non-irradiated yeast strains. It performed at least as well as the usual classification but provides much more biologically relevant results and allows a direct biological interpretation.
[ { "created": "Sun, 26 Mar 2006 05:31:51 GMT", "version": "v1" } ]
2007-05-23
[ [ "Rapaport", "Franck", "", "CB, CURIE-BIOINFO" ], [ "Zinovyev", "Andrei", "", "CURIE-BIOINFO" ], [ "Dutreix", "Marie", "", "GCC" ], [ "Barillot", "Emmanuel", "", "CURIE-BIOINFO" ], [ "Vert", "Jean-Philippe", "", "CB" ...
Microarrays have become extremely useful for analysing genetic phenomena, but establishing a relation between microarray analysis results (typically a list of genes) and their biological significance is often difficult. Currently, the standard approach is to map a posteriori the results onto gene networks to elucidate the functions perturbed at the level of pathways. However, integrating a priori knowledge of the gene networks could help in the statistical analysis of gene expression data and in their biological interpretation. Here we propose a method to integrate a priori the knowledge of a gene network in the analysis of gene expression data. The approach is based on the spectral decomposition of gene expression profiles with respect to the eigenfunctions of the graph, resulting in an attenuation of the high-frequency components of the expression profiles with respect to the topology of the graph. We show how to derive unsupervised and supervised classification algorithms of expression profiles, resulting in classifiers with biological relevance. We applied the method to the analysis of a set of expression profiles from irradiated and non-irradiated yeast strains. It performed at least as well as the usual classification but provides much more biologically relevant results and allows a direct biological interpretation.
1812.00680
Subrata Dev
Subrata Dev and Sakuntala Chatterjee
Run-and-tumble motion with step-like responses to a stochastic input
Published in Physical Rev. E
Phys. Rev. E 99, 012402 (2019)
10.1103/PhysRevE.99.012402
null
q-bio.CB cond-mat.stat-mech
http://creativecommons.org/licenses/by/4.0/
We study a simple run-and-tumble random walk whose switching frequency from run mode to tumble mode and the reverse depend on a stochastic signal. We consider a particularly sharp, step-like dependence, where the run to tumble switching probability jumps from zero to one as the signal crosses a particular value (say y_1 ) from below. Similarly, tumble to run switching probability also shows a jump like this as the signal crosses another value (y_2 < y_1 ) from above. We are interested in characterizing the effect of signaling noise on the long time behavior of the random walker. We consider two different time-evolutions of the stochastic signal. In one case, the signal dynamics is an independent stochastic process and does not depend on the run-and-tumble motion. In this case we can analytically calculate the mean value and the complete distribution function of the run duration and tumble duration. In the second case, we assume that the signal dynamics is influenced by the spatial location of the random walker. For this system, we numerically measure the steady state position distribution of the random walker. We discuss some similarities and differences between our system and E.coli chemotaxis, which is another well-known run-and-tumble motion encountered in nature.
[ { "created": "Mon, 3 Dec 2018 11:36:09 GMT", "version": "v1" }, { "created": "Fri, 4 Jan 2019 06:56:07 GMT", "version": "v2" } ]
2019-01-09
[ [ "Dev", "Subrata", "" ], [ "Chatterjee", "Sakuntala", "" ] ]
We study a simple run-and-tumble random walk whose switching frequencies from run mode to tumble mode and the reverse depend on a stochastic signal. We consider a particularly sharp, step-like dependence, where the run to tumble switching probability jumps from zero to one as the signal crosses a particular value (say y_1) from below. Similarly, the tumble to run switching probability also shows a jump like this as the signal crosses another value (y_2 < y_1) from above. We are interested in characterizing the effect of signaling noise on the long time behavior of the random walker. We consider two different time-evolutions of the stochastic signal. In one case, the signal dynamics is an independent stochastic process and does not depend on the run-and-tumble motion. In this case we can analytically calculate the mean value and the complete distribution function of the run duration and tumble duration. In the second case, we assume that the signal dynamics is influenced by the spatial location of the random walker. For this system, we numerically measure the steady state position distribution of the random walker. We discuss some similarities and differences between our system and E. coli chemotaxis, which is another well-known run-and-tumble motion encountered in nature.
1610.05654
Ramon Ferrer i Cancho
Antoni Hern\'andez-Fern\'andez and Ramon Ferrer-i-Cancho
The infochemical core
Little corrections of format and language
Journal of Quantitative Linguistics 23 (2), 133-153 (2016)
10.1080/09296174.2016.1142323
null
q-bio.NC cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vocalizations and less often gestures have been the object of linguistic research over decades. However, the development of a general theory of communication with human language as a particular case requires a clear understanding of the organization of communication through other means. Infochemicals are chemical compounds that carry information and are employed by small organisms that cannot emit acoustic signals of optimal frequency to achieve successful communication. Here the distribution of infochemicals across species is investigated when they are ranked by their degree or the number of species with which it is associated (because they produce or they are sensitive to it). The quality of the fit of different functions to the dependency between degree and rank is evaluated with a penalty for the number of parameters of the function. Surprisingly, a double Zipf (a Zipf distribution with two regimes with a different exponent each) is the model yielding the best fit although it is the function with the largest number of parameters. This suggests that the world wide repertoire of infochemicals contains a chemical nucleus shared by many species and reminiscent of the core vocabularies found for human language in dictionaries or large corpora.
[ { "created": "Tue, 18 Oct 2016 14:53:20 GMT", "version": "v1" }, { "created": "Mon, 24 Oct 2016 11:22:52 GMT", "version": "v2" } ]
2016-10-25
[ [ "Hernández-Fernández", "Antoni", "" ], [ "Ferrer-i-Cancho", "Ramon", "" ] ]
Vocalizations, and less often gestures, have been the object of linguistic research for decades. However, the development of a general theory of communication with human language as a particular case requires a clear understanding of the organization of communication through other means. Infochemicals are chemical compounds that carry information and are employed by small organisms that cannot emit acoustic signals of optimal frequency to achieve successful communication. Here the distribution of infochemicals across species is investigated when they are ranked by their degree, i.e. the number of species with which they are associated (because those species produce them or are sensitive to them). The quality of the fit of different functions to the dependency between degree and rank is evaluated with a penalty for the number of parameters of the function. Surprisingly, a double Zipf (a Zipf distribution with two regimes, each with a different exponent) is the model yielding the best fit, although it is the function with the largest number of parameters. This suggests that the worldwide repertoire of infochemicals contains a chemical nucleus shared by many species and reminiscent of the core vocabularies found for human language in dictionaries or large corpora.
1104.4913
Andr\'es D. Medus
A. D. Medus and C. O. Dorso
Diseases spreading through individual based models with realistic mobility patterns
19 pages, 7 figures, submitted to PRE
null
null
null
q-bio.PE physics.bio-ph physics.data-an q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The individual-based models constitute a set of widely implemented tools to analyze the incidence of individuals heterogeneities in the spread of an infectious disease. In this work we focus our attention on human contacts heterogeneities through two of the main individual-based models: mobile agents and complex networks models. We introduce a novel mobile agents model in which individuals make displacements with sizes according to a truncated power-law distribution based on empirical evidence about human mobility. Besides, we present a procedure to obtain an equivalent weighted contact network from the previous mobile agents model, where the weights of the links are interpreted as contact probabilities. From the topological analysis of the equivalent contact networks we show that small world characteristics are related with truncated power-law distribution for agent displacements. Finally, we show the equivalence between both approaches through some numerical experiments for the spread of an infectious disease.
[ { "created": "Mon, 25 Apr 2011 11:45:03 GMT", "version": "v1" } ]
2011-04-27
[ [ "Medus", "A. D.", "" ], [ "Dorso", "C. O.", "" ] ]
Individual-based models constitute a set of widely implemented tools to analyze the incidence of individual heterogeneities in the spread of an infectious disease. In this work we focus our attention on heterogeneities in human contacts through two of the main individual-based models: mobile agents and complex networks models. We introduce a novel mobile agents model in which individuals make displacements with sizes according to a truncated power-law distribution, based on empirical evidence about human mobility. In addition, we present a procedure to obtain an equivalent weighted contact network from the previous mobile agents model, where the weights of the links are interpreted as contact probabilities. From the topological analysis of the equivalent contact networks we show that small-world characteristics are related to the truncated power-law distribution of agent displacements. Finally, we show the equivalence between both approaches through some numerical experiments on the spread of an infectious disease.
2002.05748
Yasmine Ahmed
Yasmine Ahmed, Cheryl Telmer, and Natasa Miskov-Zivanov
ACCORDION: Clustering and Selecting Relevant Data for Guided Network Extension and Query Answering
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Querying new information from knowledge sources, in general, and published literature, in particular, aims to provide precise and quick answers to questions raised about a system under study. In this paper, we present ACCORDION (Automated Clustering Conditional On Relating Data of Interactions tO a Network), a novel tool and a methodology to enable efficient answering of biological questions by automatically assembling new, or expanding existing models using published literature. Our approach integrates information extraction and clustering with simulation and formal analysis to allow for an automated iterative process that includes assembling, testing and selecting the most relevant models, given a set of desired system properties. We applied our methodology to a model of the circuitry that con-trols T cell differentiation. To evaluate our approach, we compare the model that we obtained, using our automated model extension approach, with the previously published manually extended T cell differentiation model. Besides demonstrating automated and rapid reconstruction of a model that was previously built manually, ACCORDION can assemble multiple models that satisfy desired properties. As such, it replaces large number of tedious or even imprac-tical manual experiments and guides alternative hypotheses and interventions in biological systems.
[ { "created": "Thu, 13 Feb 2020 19:16:02 GMT", "version": "v1" }, { "created": "Tue, 12 May 2020 06:48:37 GMT", "version": "v2" } ]
2020-05-13
[ [ "Ahmed", "Yasmine", "" ], [ "Telmer", "Cheryl", "" ], [ "Miskov-Zivanov", "Natasa", "" ] ]
Querying new information from knowledge sources, in general, and published literature, in particular, aims to provide precise and quick answers to questions raised about a system under study. In this paper, we present ACCORDION (Automated Clustering Conditional On Relating Data of Interactions tO a Network), a novel tool and methodology that enables efficient answering of biological questions by automatically assembling new, or expanding existing, models using published literature. Our approach integrates information extraction and clustering with simulation and formal analysis to allow for an automated iterative process that includes assembling, testing and selecting the most relevant models, given a set of desired system properties. We applied our methodology to a model of the circuitry that controls T cell differentiation. To evaluate our approach, we compare the model that we obtained using our automated model extension approach with the previously published, manually extended T cell differentiation model. Besides demonstrating automated and rapid reconstruction of a model that was previously built manually, ACCORDION can assemble multiple models that satisfy desired properties. As such, it replaces a large number of tedious or even impractical manual experiments and guides alternative hypotheses and interventions in biological systems.
1706.08836
Juan B Gutierrez
Yi H. Yan, Diego M. Moncada, Elizabeth D. Trippe, Juan B. Gutierrez
Correlates of severity of disease in Macaca mulatta infected with Plasmodium cynomolgi
10 pages, 8 figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Characterization of host responses associated with severe malaria through an integrative approach is necessary to understand the dynamics of a \textit{Plasmodium cynomolgi} infection. In this study, we conducted temporal immune profiling, cytokine profiling and transcriptomic analysis of five \textit{Macaca mulatta} infected with \textit{P. cynomolgi}. This experiment resulted in two severe infections and two mild infections. Our analysis reveals that differential transcriptional up-regulation of genes linked with the response to pathogen-associated molecular patterns (PAMPs) and pro-inflammatory cytokines is characteristic of hosts experiencing severe malaria. Furthermore, our analysis discovered associations of transcriptional differential regulation unique to severe hosts with specific cellular and cytokine responses. The combined data provide a molecular and cellular basis for the development of severe malaria during \textit{P. cynomolgi} infection.
[ { "created": "Sun, 25 Jun 2017 16:19:18 GMT", "version": "v1" }, { "created": "Fri, 30 Jun 2017 02:34:25 GMT", "version": "v2" } ]
2017-07-03
[ [ "Yan", "Yi H.", "" ], [ "Moncada", "Diego M.", "" ], [ "Trippe", "Elizabeth D.", "" ], [ "Gutierrez", "Juan B.", "" ] ]
Characterization of host responses associated with severe malaria through an integrative approach is necessary to understand the dynamics of a \textit{Plasmodium cynomolgi} infection. In this study, we conducted temporal immune profiling, cytokine profiling and transcriptomic analysis of five \textit{Macaca mulatta} infected with \textit{P. cynomolgi}. This experiment resulted in two severe infections and two mild infections. Our analysis reveals that differential transcriptional up-regulation of genes linked with the response to pathogen-associated molecular patterns (PAMPs) and pro-inflammatory cytokines is characteristic of hosts experiencing severe malaria. Furthermore, our analysis discovered associations of transcriptional differential regulation unique to severe hosts with specific cellular and cytokine responses. The combined data provide a molecular and cellular basis for the development of severe malaria during \textit{P. cynomolgi} infection.
2008.07352
Yaron Oz
Yaron Oz, Ittai Rubinstein, Muli Safra
Superspreaders and High Variance Infectious Diseases
9 pages, 5 figures
null
10.1088/1742-5468/abed44
null
q-bio.PE cond-mat.stat-mech physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A well-known characteristic of pandemics such as COVID-19 is the high level of transmission heterogeneity in the infection spread: not all infected individuals spread the disease at the same rate, and some individuals (superspreaders) are responsible for most of the infections. Quantifying this phenomenon requires analyzing the effect of the variance and higher moments of the infection distribution. Working in the framework of stochastic branching processes, we derive an approximate analytical formula for the probability of an outbreak in the high-variance regime of the infection distribution, verify it numerically and analyze its regime of validity in various examples. We show that it is possible for an outbreak not to occur in the high-variance regime even when the basic reproduction number $R_0$ is larger than one, and discuss the implications of our results for COVID-19 and other pandemics.
[ { "created": "Mon, 17 Aug 2020 14:25:47 GMT", "version": "v1" }, { "created": "Mon, 12 Oct 2020 18:52:53 GMT", "version": "v2" } ]
2021-05-26
[ [ "Oz", "Yaron", "" ], [ "Rubinstein", "Ittai", "" ], [ "Safra", "Muli", "" ] ]
A well-known characteristic of pandemics such as COVID-19 is the high level of transmission heterogeneity in the infection spread: not all infected individuals spread the disease at the same rate, and some individuals (superspreaders) are responsible for most of the infections. Quantifying this phenomenon requires analyzing the effect of the variance and higher moments of the infection distribution. Working in the framework of stochastic branching processes, we derive an approximate analytical formula for the probability of an outbreak in the high-variance regime of the infection distribution, verify it numerically and analyze its regime of validity in various examples. We show that it is possible for an outbreak not to occur in the high-variance regime even when the basic reproduction number $R_0$ is larger than one, and discuss the implications of our results for COVID-19 and other pandemics.
2404.05730
Giorgia Ciavolella
Giorgia Ciavolella, Nathalie Ferrand, Mich\`ele Sabbah, Beno\^it Perthame, Roberto Natalini
A model for membrane degradation using a gelatin invadopodia assay
null
null
null
null
q-bio.TO
http://creativecommons.org/publicdomain/zero/1.0/
One of the most crucial and lethal characteristics of solid tumors is the increased ability of cancer cells to migrate and invade other organs during the so-called metastatic spread. This is enabled by the production of matrix metalloproteinases (MMPs), enzymes capable of degrading a type of collagen abundant in the basal membrane separating the epithelial tissue from the connective one. In this work, we employ a synergistic experimental and mathematical modelling approach to explore the invasion process of tumor cells. We propose a mathematical model composed of reaction-diffusion equations describing the evolution of the tumor cell density on a gelatin substrate, the concentration of MMP enzymes, and the degradation of the gelatin. This is completed with a calibration strategy. We perform a sensitivity analysis and explore a parameter estimation technique on both synthetic and experimental data in order to find the optimal parameters that describe the in vitro experiments. A comparison between numerical and experimental solutions concludes the work.
[ { "created": "Thu, 8 Feb 2024 11:22:21 GMT", "version": "v1" } ]
2024-04-10
[ [ "Ciavolella", "Giorgia", "" ], [ "Ferrand", "Nathalie", "" ], [ "Sabbah", "Michèle", "" ], [ "Perthame", "Benoît", "" ], [ "Natalini", "Roberto", "" ] ]
One of the most crucial and lethal characteristics of solid tumors is the increased ability of cancer cells to migrate and invade other organs during the so-called metastatic spread. This is enabled by the production of matrix metalloproteinases (MMPs), enzymes capable of degrading a type of collagen abundant in the basal membrane separating the epithelial tissue from the connective one. In this work, we employ a synergistic experimental and mathematical modelling approach to explore the invasion process of tumor cells. We propose a mathematical model composed of reaction-diffusion equations describing the evolution of the tumor cell density on a gelatin substrate, the concentration of MMP enzymes, and the degradation of the gelatin. This is completed with a calibration strategy. We perform a sensitivity analysis and explore a parameter estimation technique on both synthetic and experimental data in order to find the optimal parameters that describe the in vitro experiments. A comparison between numerical and experimental solutions concludes the work.
2405.06032
Jean-Fran\c{c}ois Flot
Yann Sp\"ori and Jean-Fran\c{c}ois Flot
Champuru 2: Improved scoring of alignments and a user-friendly graphical interface
13 pages, 3 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Champuru is a web software tool that helps determine the two sequences present in mixed Sanger chromatograms obtained by sequencing two DNA templates of unequal lengths simultaneously. A previous version (Champuru 1.0) was published as a simple Perl CGI (Common Gateway Interface) application, but the server hosting it was discontinued, which prompted us to update it and develop it further. The new Champuru 2, implemented in Haxe and hosted at GitHub Pages, offers an improved graphical user interface as well as more sophisticated algorithms to compute alignment scores, making it more efficient at detecting the most likely alignment positions between forward and reverse traces. Champuru 2 now makes it possible to analyze offset pairs other than the one detected as most likely by the selected algorithm. Champuru 2 is freely accessible at https://eeg-ebe.github.io/Champuru/, including both a graphical user interface (running a JavaScript version transpiled from the Haxe source code) and a compiled command-line version (obtained by transpiling the Haxe source code into C++).
[ { "created": "Thu, 9 May 2024 18:03:37 GMT", "version": "v1" } ]
2024-05-13
[ [ "Spöri", "Yann", "" ], [ "Flot", "Jean-François", "" ] ]
Champuru is a web software tool that helps determine the two sequences present in mixed Sanger chromatograms obtained by sequencing two DNA templates of unequal lengths simultaneously. A previous version (Champuru 1.0) was published as a simple Perl CGI (Common Gateway Interface) application, but the server hosting it was discontinued, which prompted us to update it and develop it further. The new Champuru 2, implemented in Haxe and hosted at GitHub Pages, offers an improved graphical user interface as well as more sophisticated algorithms to compute alignment scores, making it more efficient at detecting the most likely alignment positions between forward and reverse traces. Champuru 2 now makes it possible to analyze offset pairs other than the one detected as most likely by the selected algorithm. Champuru 2 is freely accessible at https://eeg-ebe.github.io/Champuru/, including both a graphical user interface (running a JavaScript version transpiled from the Haxe source code) and a compiled command-line version (obtained by transpiling the Haxe source code into C++).
2011.00163
Martin Vasilev
Martin R. Vasilev, Fabrice B. R. Parmentier, Julie A. Kirkby
Distraction by auditory novelty during reading: Evidence for disruption in saccade planning, but not saccade execution
null
null
10.1177/1747021820982267
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Novel or unexpected sounds that deviate from an otherwise repetitive sequence of the same sound cause behavioural distraction. Recent work has suggested that distraction also occurs during reading as fixation durations increased when a deviant sound was presented at the fixation onset of words. The present study tested the hypothesis that this increase in fixation durations occurs due to saccadic inhibition. This was done by manipulating the temporal onset of sounds relative to the fixation onset of words in the text. If novel sounds cause saccadic inhibition, they should be more distracting when presented during the second half of fixations when saccade programming usually takes place. Participants read single sentences and heard a 120 ms sound when they fixated five target words in the sentence. On most occasions (p= 0.9), the same sine wave tone was presented ("standard"), while on the remaining occasions (p= 0.1) a new sound was presented ("novel"). Critically, sounds were played either during the first half of the fixation (0 ms delay) or during the second half of the fixation (120 ms delay). Consistent with the saccadic inhibition hypothesis, novel sounds led to longer fixation durations in the 120 ms compared to the 0 ms delay condition. However, novel sounds did not generally influence the execution of the subsequent saccade. These results suggest that unexpected sounds have a rapid influence on saccade planning, but not saccade execution.
[ { "created": "Sat, 31 Oct 2020 01:35:52 GMT", "version": "v1" } ]
2020-12-22
[ [ "Vasilev", "Martin R.", "" ], [ "Parmentier", "Fabrice B. R.", "" ], [ "Kirkby", "Julie A.", "" ] ]
Novel or unexpected sounds that deviate from an otherwise repetitive sequence of the same sound cause behavioural distraction. Recent work has suggested that distraction also occurs during reading as fixation durations increased when a deviant sound was presented at the fixation onset of words. The present study tested the hypothesis that this increase in fixation durations occurs due to saccadic inhibition. This was done by manipulating the temporal onset of sounds relative to the fixation onset of words in the text. If novel sounds cause saccadic inhibition, they should be more distracting when presented during the second half of fixations when saccade programming usually takes place. Participants read single sentences and heard a 120 ms sound when they fixated five target words in the sentence. On most occasions (p= 0.9), the same sine wave tone was presented ("standard"), while on the remaining occasions (p= 0.1) a new sound was presented ("novel"). Critically, sounds were played either during the first half of the fixation (0 ms delay) or during the second half of the fixation (120 ms delay). Consistent with the saccadic inhibition hypothesis, novel sounds led to longer fixation durations in the 120 ms compared to the 0 ms delay condition. However, novel sounds did not generally influence the execution of the subsequent saccade. These results suggest that unexpected sounds have a rapid influence on saccade planning, but not saccade execution.
1401.2397
Alicia Mart\'inez-Gonz\'alez
Alicia Mart\'inez-Gonz\'alez, Mario Dur\'an-Prado, Gabriel F. Calvo, Francisco J. Alca\'in, Luis A. P\'erez-Romasanta and V\'ictor M. P\'erez-Garc\'ia
Combined therapies of antithrombotics and antioxidants delay in silico brain tumor progression
8 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Glioblastoma multiforme, the most frequent type of primary brain tumor, is a rapidly evolving and spatially heterogeneous high-grade astrocytoma that presents areas of necrosis, hypercellularity and microvascular hyperplasia. The aberrant vasculature leads to hypoxic areas and results in an increase of oxidative stress, selecting for more invasive tumor cell phenotypes. In our study we assay in silico different therapeutic approaches that combine antithrombotics, antioxidants and standard radiotherapy. To do so, we have developed a biocomputational model of glioblastoma multiforme that incorporates the spatio-temporal interplay among two glioma cell phenotypes corresponding to oxygenated and hypoxic cells, a necrotic core, and the local vasculature whose response evolves with tumor progression. Our numerical simulations predict that suitable combinations of antithrombotics and antioxidants may diminish, in a synergistic way, oxidative stress and the subsequent hypoxic response. This novel therapeutic strategy, with potentially low or no toxicity, might reduce tumor invasion and further sensitize glioblastoma multiforme to conventional radiotherapy or other cytotoxic agents, hopefully increasing median patient overall survival time.
[ { "created": "Fri, 10 Jan 2014 16:41:03 GMT", "version": "v1" } ]
2014-01-13
[ [ "Martínez-González", "Alicia", "" ], [ "Durán-Prado", "Mario", "" ], [ "Calvo", "Gabriel F.", "" ], [ "Alcaín", "Francisco J.", "" ], [ "Pérez-Romasanta", "Luis A.", "" ], [ "Pérez-García", "Víctor M.", "" ] ]
Glioblastoma multiforme, the most frequent type of primary brain tumor, is a rapidly evolving and spatially heterogeneous high-grade astrocytoma that presents areas of necrosis, hypercellularity and microvascular hyperplasia. The aberrant vasculature leads to hypoxic areas and results in an increase of oxidative stress, selecting for more invasive tumor cell phenotypes. In our study we assay in silico different therapeutic approaches that combine antithrombotics, antioxidants and standard radiotherapy. To do so, we have developed a biocomputational model of glioblastoma multiforme that incorporates the spatio-temporal interplay among two glioma cell phenotypes corresponding to oxygenated and hypoxic cells, a necrotic core, and the local vasculature whose response evolves with tumor progression. Our numerical simulations predict that suitable combinations of antithrombotics and antioxidants may diminish, in a synergistic way, oxidative stress and the subsequent hypoxic response. This novel therapeutic strategy, with potentially low or no toxicity, might reduce tumor invasion and further sensitize glioblastoma multiforme to conventional radiotherapy or other cytotoxic agents, hopefully increasing median patient overall survival time.
1610.09937
Sourya Bhattacharyya
Sourya Bhattacharyya and Jayanta Mukhopadhyay
COSPEDTree-II: Improved Couplet based Phylogenetic Supertree
8 page detailed manuscript corresponding to the short paper (of the same title) accepted for the proceedings of IEEE BIBM 2016
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A supertree synthesizes the topologies of a set of phylogenetic trees with overlapping taxon sets. In the process, conflicts in the tree topologies are resolved in favor of the consensus clades. This problem is proven to be NP-hard. Various heuristics based on local search, maximum parsimony, graph cuts, etc. lead to different supertree approaches, of which the most popular methods are based on analyzing fixed-size subtree topologies (such as triplets or quartets). The time and space complexities of these methods, however, depend on the subtree size considered. Our earlier supertree method, COSPEDTree, uses the evolutionary relationships among individual couplets (taxa pairs) to produce slightly conservative (not fully resolved) supertrees. Here we propose its improved version, COSPEDTree-II, which produces better-resolved supertrees with fewer missing branches and incurs a much lower running time. Results on biological datasets show that COSPEDTree-II belongs to the category of high-performance and computationally efficient supertree methods.
[ { "created": "Mon, 31 Oct 2016 14:29:21 GMT", "version": "v1" } ]
2016-11-01
[ [ "Bhattacharyya", "Sourya", "" ], [ "Mukhopadhyay", "Jayanta", "" ] ]
A supertree synthesizes the topologies of a set of phylogenetic trees with overlapping taxon sets. In the process, conflicts in the tree topologies are resolved in favor of the consensus clades. This problem is proven to be NP-hard. Various heuristics based on local search, maximum parsimony, graph cuts, etc. lead to different supertree approaches, of which the most popular methods are based on analyzing fixed-size subtree topologies (such as triplets or quartets). The time and space complexities of these methods, however, depend on the subtree size considered. Our earlier supertree method, COSPEDTree, uses the evolutionary relationships among individual couplets (taxa pairs) to produce slightly conservative (not fully resolved) supertrees. Here we propose its improved version, COSPEDTree-II, which produces better-resolved supertrees with fewer missing branches and incurs a much lower running time. Results on biological datasets show that COSPEDTree-II belongs to the category of high-performance and computationally efficient supertree methods.
1310.7131
Graham Coop
Yaniv Brandvain, Amanda M. Kenney, Lex Flagel, Graham Coop, Andrea L Sweigart
Speciation and introgression between Mimulus nasutus and Mimulus guttatus
Brandvain and Kenney contributed equally to this work. Coop and Sweigart jointly supervised this work. 65 pages, 3 main text figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/3.0/
Mimulus guttatus and M. nasutus are an evolutionary and ecological model sister species pair differentiated by ecology, mating system, and partial reproductive isolation. Despite extensive research on this system, the history of divergence and differentiation in this sister pair is unclear. We present and analyze a novel population genomic data set which shows that M. nasutus "budded" off of a central Californian M. guttatus population within the last 200 to 500 thousand years. In this time, the M. nasutus genome has accrued numerous genomic signatures of the transition to predominant selfing. Despite clear biological differentiation, we document ongoing, bidirectional introgression. We observe a negative relationship between the recombination rate and divergence between M. nasutus and sympatric M. guttatus samples, suggesting that selection acts against M. nasutus ancestry in M. guttatus.
[ { "created": "Sat, 26 Oct 2013 17:06:54 GMT", "version": "v1" } ]
2013-10-29
[ [ "Brandvain", "Yaniv", "" ], [ "Kenney", "Amanda M.", "" ], [ "Flagel", "Lex", "" ], [ "Coop", "Graham", "" ], [ "Sweigart", "Andrea L", "" ] ]
Mimulus guttatus and M. nasutus are an evolutionary and ecological model sister species pair differentiated by ecology, mating system, and partial reproductive isolation. Despite extensive research on this system, the history of divergence and differentiation in this sister pair is unclear. We present and analyze a novel population genomic data set which shows that M. nasutus "budded" off of a central Californian M. guttatus population within the last 200 to 500 thousand years. In this time, the M. nasutus genome has accrued numerous genomic signatures of the transition to predominant selfing. Despite clear biological differentiation, we document ongoing, bidirectional introgression. We observe a negative relationship between the recombination rate and divergence between M. nasutus and sympatric M. guttatus samples, suggesting that selection acts against M. nasutus ancestry in M. guttatus.
0807.3126
Evgeniy Khain
Evgeniy Khain, Casey M. Schneider-Mizell, Michal O. Nowicki, E. Antonio Chiocca, S. E. Lawler, and Leonard M. Sander
Pattern Formation of Glioma Cells: Effects of Adhesion
6 pages, 6 figures
EPL (Europhysics Letters) 88, 28006 (2009)
10.1209/0295-5075/88/28006
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate clustering of malignant glioma cells. \emph{In vitro} experiments in collagen gels identified a cell line that formed clusters in a region of low cell density, whereas a very similar cell line (which lacks an important mutation) did not cluster significantly. We hypothesize that the mutation affects the strength of cell-cell adhesion. We investigate this effect in a new experiment, which follows the clustering dynamics of glioma cells on a surface. We interpret our results in terms of a stochastic model and identify two mechanisms of clustering. First, there is a critical value of the strength of adhesion; above the threshold, large clusters grow from a homogeneous suspension of cells; below it, the system remains homogeneous, similar to ordinary phase separation. Second, when cells form a cluster, we have evidence that they increase their proliferation rate. We have successfully reproduced the experimental findings and found that both mechanisms are crucial for cluster formation and growth.
[ { "created": "Sat, 19 Jul 2008 22:23:11 GMT", "version": "v1" }, { "created": "Sat, 14 Nov 2009 22:25:08 GMT", "version": "v2" } ]
2009-11-15
[ [ "Khain", "Evgeniy", "" ], [ "Schneider-Mizell", "Casey M.", "" ], [ "Nowicki", "Michal O.", "" ], [ "Chiocca", "E. Antonio", "" ], [ "Lawler", "S. E.", "" ], [ "Sander", "Leonard M.", "" ] ]
We investigate clustering of malignant glioma cells. \emph{In vitro} experiments in collagen gels identified a cell line that formed clusters in a region of low cell density, whereas a very similar cell line (which lacks an important mutation) did not cluster significantly. We hypothesize that the mutation affects the strength of cell-cell adhesion. We investigate this effect in a new experiment, which follows the clustering dynamics of glioma cells on a surface. We interpret our results in terms of a stochastic model and identify two mechanisms of clustering. First, there is a critical value of the strength of adhesion; above the threshold, large clusters grow from a homogeneous suspension of cells; below it, the system remains homogeneous, similar to ordinary phase separation. Second, when cells form a cluster, we have evidence that they increase their proliferation rate. We have successfully reproduced the experimental findings and found that both mechanisms are crucial for cluster formation and growth.
1308.0821
Ivan Lazarevich
Ivan A. Lazarevich and Victor B. Kazantsev
Dendritic signal transmission induced by intracellular charge inhomogeneities
null
Phys. Rev. E 88, 062718 (2013)
10.1103/PhysRevE.88.062718
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Signal propagation in neuronal dendrites represents the basis for interneuron communication and information processing in the brain. Here we take into account charge inhomogeneities arising in the vicinity of ion channels in the cytoplasm and obtain a modified cable equation. We show that charge inhomogeneities acting on the millisecond time scale can lead to the appearance of propagating waves with wavelengths of hundreds of micrometers. These waves correspond to a certain frequency band, predicting the appearance of resonant properties in brain neuron signalling.
[ { "created": "Sun, 4 Aug 2013 16:09:56 GMT", "version": "v1" } ]
2013-12-25
[ [ "Lazarevich", "Ivan A.", "" ], [ "Kazantsev", "Victor B.", "" ] ]
Signal propagation in neuronal dendrites represents the basis for interneuron communication and information processing in the brain. Here we take into account charge inhomogeneities arising in the vicinity of ion channels in the cytoplasm and obtain a modified cable equation. We show that charge inhomogeneities acting on the millisecond time scale can lead to the appearance of propagating waves with wavelengths of hundreds of micrometers. These waves correspond to a certain frequency band, predicting the appearance of resonant properties in brain neuron signalling.
2110.11432
Nicholas Manoukis
Nicholas C Manoukis, Matthew P. Hill
Probability of Insect Capture in a Trap Network: Low Prevalence and Detection Trapping with TrapGrid
null
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Attractant-based trap networks targeting insects are ubiquitous worldwide. These networks have diverse targets, goals, and efficiencies, but all are constrained by practical considerations like cost and available lures. An important way to balance goals and constraints is through quantitative mathematical modeling. Here we describe an extension of a computer model of trapping networks known as "TrapGrid" to include an alternative mode of calculating the probability of capture over time in a trapping network: strict detection ("capture of one or more"), compared with the average probability of capture as implemented in the original version. We suggest that this new calculation may be useful in situations of low prevalence where trap network operators wish to interpret the meaning of zero captures at a small scale. The original remains preferred for comparing the sensitivity and suitability of alternate trap networks (i.e., density of traps, their placement, lure attractiveness, etc.).
[ { "created": "Thu, 21 Oct 2021 18:59:20 GMT", "version": "v1" } ]
2021-10-25
[ [ "Manoukis", "Nicholas C", "" ], [ "Hill", "Matthew P.", "" ] ]
Attractant-based trap networks targeting insects are ubiquitous worldwide. These networks have diverse targets, goals, and efficiencies, but all are constrained by practical considerations like cost and available lures. An important way to balance goals and constraints is through quantitative mathematical modeling. Here we describe an extension of a computer model of trapping networks known as "TrapGrid" to include an alternative mode of calculating the probability of capture over time in a trapping network: strict detection ("capture of one or more"), compared with the average probability of capture as implemented in the original version. We suggest that this new calculation may be useful in situations of low prevalence where trap network operators wish to interpret the meaning of zero captures at a small scale. The original remains preferred for comparing the sensitivity and suitability of alternate trap networks (i.e., density of traps, their placement, lure attractiveness, etc.).
1801.07299
Yibo Li
Yibo Li, Liangren Zhang, Zhenming Liu
Multi-Objective De Novo Drug Design with Conditional Graph Generative Model
null
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, deep generative models have revealed themselves as a promising way of performing de novo molecule design. However, previous research has focused mainly on generating SMILES strings instead of molecular graphs. Although current graph generative models are available, they are often too general and computationally expensive, which restricts their application to molecules of small sizes. In this work, a new de novo molecular design framework is proposed based on a type of sequential graph generator that does not use atom-level recurrent units. Compared with previous graph generative models, the proposed method is much more tuned for molecule generation and has been scaled up to cover significantly larger molecules in the ChEMBL database. It is shown that the graph-based model outperforms SMILES-based models in a variety of metrics, especially in the rate of valid outputs. For drug design tasks, a conditional graph generative model is employed. This method offers higher flexibility compared to the previous fine-tuning-based approach and is suitable for generation based on multiple objectives. The approach is applied to solve several drug design problems, including the generation of compounds containing a given scaffold, the generation of compounds with specific drug-likeness and synthetic accessibility requirements, and the generation of dual inhibitors against JNK3 and GSK3$\beta$. Results show high enrichment rates for outputs satisfying the given requirements.
[ { "created": "Thu, 18 Jan 2018 13:54:55 GMT", "version": "v1" }, { "created": "Tue, 30 Jan 2018 03:50:40 GMT", "version": "v2" }, { "created": "Sat, 21 Apr 2018 15:36:33 GMT", "version": "v3" } ]
2018-04-24
[ [ "Li", "Yibo", "" ], [ "Zhang", "Liangren", "" ], [ "Liu", "Zhenming", "" ] ]
Recently, deep generative models have revealed themselves as a promising way of performing de novo molecule design. However, previous research has focused mainly on generating SMILES strings instead of molecular graphs. Although current graph generative models are available, they are often too general and computationally expensive, which restricts their application to molecules of small sizes. In this work, a new de novo molecular design framework is proposed based on a type of sequential graph generator that does not use atom-level recurrent units. Compared with previous graph generative models, the proposed method is much more tuned for molecule generation and has been scaled up to cover significantly larger molecules in the ChEMBL database. It is shown that the graph-based model outperforms SMILES-based models in a variety of metrics, especially in the rate of valid outputs. For drug design tasks, a conditional graph generative model is employed. This method offers higher flexibility compared to the previous fine-tuning-based approach and is suitable for generation based on multiple objectives. The approach is applied to solve several drug design problems, including the generation of compounds containing a given scaffold, the generation of compounds with specific drug-likeness and synthetic accessibility requirements, and the generation of dual inhibitors against JNK3 and GSK3$\beta$. Results show high enrichment rates for outputs satisfying the given requirements.
2202.01744
Md. Rayhan
MD. Rayhan, M. Abdullah-Al-Wadud, M. Helal Uddin Ahmed
The Impact of Vaccination on the Infection rate and the Severity of Covid-19
Correspondence article, includes 6 figures, a title page
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
This study aims to statistically assess the effectiveness of vaccination against SARS-CoV-2. It is indispensable to investigate the relationship between Covid-19 deadliness and vaccination in order to study the real-world impact of vaccines. We studied rates of infection and death due to Covid-19 in different countries with respect to their levels of vaccination. People who received the required doses of vaccine were considered fully vaccinated in this study. Based on the percentage of the fully vaccinated population, countries were categorized into several groups. Though a high-level study of vaccine effectiveness may not provide much insight into individual-level differences, a global analysis is imperative for inferring the influence of vaccination as a controlling measure of the pandemic.
[ { "created": "Thu, 3 Feb 2022 18:09:42 GMT", "version": "v1" } ]
2022-02-04
[ [ "Rayhan", "MD.", "" ], [ "Abdullah-Al-Wadud", "M.", "" ], [ "Ahmed", "M. Helal Uddin", "" ] ]
This study aims to statistically assess the effectiveness of vaccination against SARS-CoV-2. It is indispensable to investigate the relationship between Covid-19 deadliness and vaccination in order to study the real-world impact of vaccines. We studied rates of infection and death due to Covid-19 in different countries with respect to their levels of vaccination. People who received the required doses of vaccine were considered fully vaccinated in this study. Based on the percentage of the fully vaccinated population, countries were categorized into several groups. Though a high-level study of vaccine effectiveness may not provide much insight into individual-level differences, a global analysis is imperative for inferring the influence of vaccination as a controlling measure of the pandemic.
2109.12190
Shyamalika Gopalan
Shyamalika Gopalan, Samuel Patillo Smith, Katharine Korunes, Iman Hamid, Sohini Ramachandran, Amy Goldberg
Human genetic admixture through the lens of population genomics
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Over the last fifty years, geneticists have made great strides in understanding how our species' evolutionary history gave rise to current patterns of human genetic diversity, classically summarized by Lewontin in his 1972 paper, 'The Apportionment of Human Diversity'. One evolutionary process that requires special attention in both population genetics and statistical genetics is admixture: gene flow between two or more previously separated source populations to form a new admixed population. The admixture process introduces unique patterns of genetic variation within and between populations, which in turn influence the inference of demographic histories, the identification of genetic targets of selection, and the prediction of phenotypes. In this review, we highlight recent studies and methodological advances that have leveraged genomic signatures of admixture to gain insights into human history, natural selection, and complex trait architecture. We also outline some challenges for admixture population genetics, including limitations of applying methods designed for single-ancestry populations to the study of admixed populations.
[ { "created": "Fri, 24 Sep 2021 20:56:46 GMT", "version": "v1" }, { "created": "Fri, 11 Feb 2022 15:03:10 GMT", "version": "v2" } ]
2022-02-14
[ [ "Gopalan", "Shyamalika", "" ], [ "Smith", "Samuel Patillo", "" ], [ "Korunes", "Katharine", "" ], [ "Hamid", "Iman", "" ], [ "Ramachandran", "Sohini", "" ], [ "Goldberg", "Amy", "" ] ]
Over the last fifty years, geneticists have made great strides in understanding how our species' evolutionary history gave rise to current patterns of human genetic diversity, classically summarized by Lewontin in his 1972 paper, 'The Apportionment of Human Diversity'. One evolutionary process that requires special attention in both population genetics and statistical genetics is admixture: gene flow between two or more previously separated source populations to form a new admixed population. The admixture process introduces unique patterns of genetic variation within and between populations, which in turn influence the inference of demographic histories, the identification of genetic targets of selection, and the prediction of phenotypes. In this review, we highlight recent studies and methodological advances that have leveraged genomic signatures of admixture to gain insights into human history, natural selection, and complex trait architecture. We also outline some challenges for admixture population genetics, including limitations of applying methods designed for single-ancestry populations to the study of admixed populations.
2207.02586
R.K. Brojen Singh
Preet Mishra and R. K. Brojen Singh
Spatiotemporal patterns of Covid-19 pandemic in India: Inferences of pandemic dynamics from data analysis
14 pages, 7 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Modeling and analysis of large-scale Covid-19 pandemic data can yield inferences about its dynamics and the characteristics of disease propagation. These inferences can then be correlated with contextual factors such as population density, the effects of strategic interventions, and heterogeneous disease propagation, and such a set of validated inferences can serve as precedents for the design of subsequent mitigation strategies. In this work, we present an analysis of Covid-19 pandemic data in the Indian context using a growth-function fitting procedure and a harmonic analysis method. Our growth-function fits to the data indicate that the growth-function parameters are quite sensitive to the growth of the infected population, indicating a positive impact of the lockdown strategy, identification of the inflection point, and nearly synchronous statistical features of disease spreading. The harmonic analysis of the data shows countrywide synchronous incidence features due to the simultaneous implementation of control strategies. However, if one analyzes the data from each state of India, one can see various forms of travelling waves in the countrywide wave pattern. Hence, such analyses need to be repeated from time to time to understand the effectiveness of any control strategy and to closely monitor the disease propagation in order to devise the required mitigation strategies.
[ { "created": "Wed, 6 Jul 2022 11:00:06 GMT", "version": "v1" } ]
2022-07-07
[ [ "Mishra", "Preet", "" ], [ "Singh", "R. K. Brojen", "" ] ]
Modeling and analysis of large-scale Covid-19 pandemic data can yield inferences about its dynamics and the characteristics of disease propagation. These inferences can then be correlated with contextual factors such as population density, the effects of strategic interventions, and heterogeneous disease propagation, and such a set of validated inferences can serve as precedents for the design of subsequent mitigation strategies. In this work, we present an analysis of Covid-19 pandemic data in the Indian context using a growth-function fitting procedure and a harmonic analysis method. Our growth-function fits to the data indicate that the growth-function parameters are quite sensitive to the growth of the infected population, indicating a positive impact of the lockdown strategy, identification of the inflection point, and nearly synchronous statistical features of disease spreading. The harmonic analysis of the data shows countrywide synchronous incidence features due to the simultaneous implementation of control strategies. However, if one analyzes the data from each state of India, one can see various forms of travelling waves in the countrywide wave pattern. Hence, such analyses need to be repeated from time to time to understand the effectiveness of any control strategy and to closely monitor the disease propagation in order to devise the required mitigation strategies.
q-bio/0410029
Takaaki Aoki
Takaaki Aoki and Toshio Aoyagi
Effect of Synchronous Incoming Spikes on Activity Pattern in A Network of Spiking Neurons
14 pages, 4 figures
null
null
null
q-bio.NC
null
Although recent neurophysiological experiments suggest that synchronous neural activity is involved in some perceptual and cognitive processes, the functional role of such coherent neuronal behavior is not well understood. As a first step in clarifying this role, we investigate how the temporal coherence of certain neuronal activity affects the activity pattern in a neural network. Using a simple network of leaky integrate-and-fire neurons, we study the effects of synchronized incoming spikes on the functioning of two mechanisms typically used in model neural systems, winner-take-all competition and associative memory. We demonstrate that a pair of switches in the incoming spikes, from asynchronous to synchronous and then back to asynchronous, triggers a transition of the network from one state to another. In the case of associative memory, for example, this switching controls the timing of the next recall, whereas the firing-rate pattern in the asynchronous state prepares the network for the next retrieval pattern.
[ { "created": "Mon, 25 Oct 2004 19:46:32 GMT", "version": "v1" } ]
2007-05-23
[ [ "Aoki", "Takaaki", "" ], [ "Aoyagi", "Toshio", "" ] ]
Although recent neurophysiological experiments suggest that synchronous neural activity is involved in some perceptual and cognitive processes, the functional role of such coherent neuronal behavior is not well understood. As a first step in clarifying this role, we investigate how the temporal coherence of certain neuronal activity affects the activity pattern in a neural network. Using a simple network of leaky integrate-and-fire neurons, we study the effects of synchronized incoming spikes on the functioning of two mechanisms typically used in model neural systems, winner-take-all competition and associative memory. We demonstrate that a pair of switches in the incoming spikes, from asynchronous to synchronous and then back to asynchronous, triggers a transition of the network from one state to another. In the case of associative memory, for example, this switching controls the timing of the next recall, whereas the firing-rate pattern in the asynchronous state prepares the network for the next retrieval pattern.
1809.06731
Micha{\l} \'Swi\k{a}tek
Micha{\l} \'Swi\k{a}tek and Ewa Gudowska-Nowak
Delineating elastic properties of kinesin linker and their sensitivity to point mutations
null
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze free energy estimators from simulation trials mimicking single-molecule pulling experiments on the neck linker of a kinesin motor. For that purpose, we have performed a version of steered molecular dynamics (SMD) calculations. The sample trajectories have been analyzed to derive the distribution of the work done on the system. In order to induce unfolding of the linker, we have stretched the molecule at a constant pulling force and allowed for a subsequent relaxation of its structure. The use of fluctuation relations (FR) relevant to non-equilibrium systems subject to thermal fluctuations allows us to assess the difference in free energy between stretched and relaxed conformations. To further understand the effects of potential mutations on the elastic properties of the linker, we have performed similar in silico studies on a structure formed of a polyalanine sequence (Ala-only) and on three other structures, created by substituting selected types of amino acid residues in the linker's sequence with alanine (Ala) ones. The results of the SMD simulations indicate a crucial role played by the asparagine (Asn) and lysine (Lys) residues in controlling the stretching and relaxation properties of the linker domain of the motor.
[ { "created": "Tue, 18 Sep 2018 13:51:37 GMT", "version": "v1" } ]
2018-09-19
[ [ "Świątek", "Michał", "" ], [ "Gudowska-Nowak", "Ewa", "" ] ]
We analyze free energy estimators from simulation trials mimicking single-molecule pulling experiments on the neck linker of a kinesin motor. For that purpose, we have performed a version of steered molecular dynamics (SMD) calculations. The sample trajectories have been analyzed to derive the distribution of the work done on the system. In order to induce unfolding of the linker, we have stretched the molecule at a constant pulling force and allowed for a subsequent relaxation of its structure. The use of fluctuation relations (FR) relevant to non-equilibrium systems subject to thermal fluctuations allows us to assess the difference in free energy between stretched and relaxed conformations. To further understand the effects of potential mutations on the elastic properties of the linker, we have performed similar in silico studies on a structure formed of a polyalanine sequence (Ala-only) and on three other structures, created by substituting selected types of amino acid residues in the linker's sequence with alanine (Ala) ones. The results of the SMD simulations indicate a crucial role played by the asparagine (Asn) and lysine (Lys) residues in controlling the stretching and relaxation properties of the linker domain of the motor.
1308.3978
Loic Chaumont
Romain Bourget, Lo\"ic Chaumont, Natalia Sapoukhina
Timing of Pathogen Adaptation to a Multicomponent Treatment
3 figures
null
10.1371/journal.pone.0071926
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The sustainable use of multicomponent treatments such as combination therapies, combination vaccines/chemicals, and plants carrying multigenic resistance requires an understanding of how their population-wide deployment affects the speed of pathogen adaptation. Here, we develop a stochastic model describing the emergence of a mutant pathogen and its dynamics in a heterogeneous host population split into various types by the management strategy. Based on a multi-type Markov birth and death process, the model can be used to provide a basic understanding of how the life-cycle parameters of the pathogen population and the controllable parameters of a management strategy affect the speed at which a pathogen adapts to a multicomponent treatment. Our results reveal the importance of coupling stochastic mutation and migration processes, and illustrate how their stochasticity can alter our view of the principles of managing pathogen adaptive dynamics at the population level. In particular, we identify the growth and migration rates that allow pathogens to adapt to a multicomponent treatment even if it is deployed on only a small proportion of the hosts. In contrast to the accepted view, our model suggests that treatment durability should not systematically be identified with mutation cost. We also show that associating a multicomponent treatment with defeated monocomponent treatments can be more durable than associating it with intermediate treatments including only some of the components. We conclude that the explicit modelling of the stochastic processes underlying evolutionary dynamics could help to elucidate the principles of the sustainable use of multicomponent treatments in population-wide management strategies intended to impede the evolution of harmful populations.
[ { "created": "Mon, 19 Aug 2013 10:42:00 GMT", "version": "v1" } ]
2015-06-16
[ [ "Bourget", "Romain", "" ], [ "Chaumont", "Loïc", "" ], [ "Sapoukhina", "Natalia", "" ] ]
The sustainable use of multicomponent treatments such as combination therapies, combination vaccines/chemicals, and plants carrying multigenic resistance requires an understanding of how their population-wide deployment affects the speed of pathogen adaptation. Here, we develop a stochastic model describing the emergence of a mutant pathogen and its dynamics in a heterogeneous host population split into various types by the management strategy. Based on a multi-type Markov birth and death process, the model can be used to provide a basic understanding of how the life-cycle parameters of the pathogen population and the controllable parameters of a management strategy affect the speed at which a pathogen adapts to a multicomponent treatment. Our results reveal the importance of coupling stochastic mutation and migration processes, and illustrate how their stochasticity can alter our view of the principles of managing pathogen adaptive dynamics at the population level. In particular, we identify the growth and migration rates that allow pathogens to adapt to a multicomponent treatment even if it is deployed on only a small proportion of the hosts. In contrast to the accepted view, our model suggests that treatment durability should not systematically be identified with mutation cost. We also show that associating a multicomponent treatment with defeated monocomponent treatments can be more durable than associating it with intermediate treatments including only some of the components. We conclude that the explicit modelling of the stochastic processes underlying evolutionary dynamics could help to elucidate the principles of the sustainable use of multicomponent treatments in population-wide management strategies intended to impede the evolution of harmful populations.
q-bio/0604008
Daniel Rodriguez-Perez
D. Rodriguez-Perez, Oscar Sotolongo-Grau, Ramon Espinosa Riquelme, Oscar Sotolongo-Costa, J. Antonio Santos Miranda, J.C. Antoranz
Tumors under periodic therapy -- Role of the immune response time delay
6 pages, 2 figures
null
null
null
q-bio.TO
null
We model the interaction between the immune system and tumour cells, including a time delay to simulate the time needed by the latter to develop a chemical and cell-mediated response to the presence of the tumour. The results are compared with those of a previous paper, concluding that the delay introduces new instabilities in the system, leading to an uncontrollable growth of the tumour. Then a cytokine-based periodic immunotherapy treatment is included in the model, and the effects of its dosage are studied for the case of a weak immune system and a growing tumour. We find the existence of metastable states (that may last for tens of years) induced by the treatment, and also of potentially adverse effects of the dosage frequency on the stabilization of the tumour. These two effects depend on the delay, the cytokine dose burden and other parameters considered in the model.
[ { "created": "Fri, 7 Apr 2006 09:00:36 GMT", "version": "v1" } ]
2007-05-23
[ [ "Rodriguez-Perez", "D.", "" ], [ "Sotolongo-Grau", "Oscar", "" ], [ "Riquelme", "Ramon Espinosa", "" ], [ "Sotolongo-Costa", "Oscar", "" ], [ "Miranda", "J. Antonio Santos", "" ], [ "Antoranz", "J. C.", "" ] ]
We model the interaction between the immune system and tumour cells, including a time delay to simulate the time needed by the latter to develop a chemical and cell-mediated response to the presence of the tumour. The results are compared with those of a previous paper, concluding that the delay introduces new instabilities in the system, leading to an uncontrollable growth of the tumour. Then a cytokine-based periodic immunotherapy treatment is included in the model, and the effects of its dosage are studied for the case of a weak immune system and a growing tumour. We find the existence of metastable states (that may last for tens of years) induced by the treatment, and also of potentially adverse effects of the dosage frequency on the stabilization of the tumour. These two effects depend on the delay, the cytokine dose burden and other parameters considered in the model.
1210.2288
David Sims
David W. Sims, Nicolas E. Humphries
Levy flight search patterns of marine predators not questioned: a reply to Edwards et al
18 pages, 3 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Edwards et al. question aspects of the methods used in two of our published papers that report results showing Levy-walk-like and Levy-flight movement patterns of marine predators. The criticisms are focused on the applicability of some statistical methodologies used to detect power-law distributions. We reply to the principal criticisms levelled at each of these papers in turn, including our own reanalysis of specific datasets, and find that neither paper's conclusions are overturned in any part by the issues raised. Indeed, in addition to the findings of our research reported in these papers, there is strong evidence accumulating from studies worldwide that organisms show movements and behaviour consistent with scale-invariant patterns such as Levy flights.
[ { "created": "Mon, 8 Oct 2012 14:12:34 GMT", "version": "v1" } ]
2012-10-09
[ [ "Sims", "David W.", "" ], [ "Humphries", "Nicolas E.", "" ] ]
Edwards et al. question aspects of the methods used in two of our published papers that report results showing Levy-walk-like and Levy-flight movement patterns of marine predators. The criticisms are focused on the applicability of some statistical methodologies used to detect power-law distributions. We reply to the principal criticisms levelled at each of these papers in turn, including our own reanalysis of specific datasets, and find that neither paper's conclusions are overturned in any part by the issues raised. Indeed, in addition to the findings of our research reported in these papers, there is strong evidence accumulating from studies worldwide that organisms show movements and behaviour consistent with scale-invariant patterns such as Levy flights.
2008.09851
Debashish Chowdhury
Swayamshree Patra and Debashish Chowdhury
Level crossing statistics in a biologically motivated model of a long dynamic protrusion: passage times, random and extreme excursions
Thoroughly revised shorter version
Journal of Statistical Mechanics: Theory and Experiment, 083207 (2021)
10.1088/1742-5468/ac1405
null
q-bio.SC cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long cell protrusions, which are effectively one-dimensional, are highly dynamic subcellular structures. The lengths of many such protrusions keep fluctuating about their mean values even in the steady state. We develop here a stochastic model motivated by the length fluctuations of a type of appendage of a eukaryotic cell called the flagellum (also called the cilium). Exploiting the techniques developed for the calculation of level-crossing statistics of random excursions of stochastic processes, we have derived analytical expressions for the passage times for hitting various thresholds, the sojourn times of random excursions beyond a threshold, and the extreme lengths attained during the lifetime of these model flagella. We identify different parameter regimes of this model flagellum that mimic those of the wild type and mutants of a well-known flagellated cell. By analysing our model in these different parameter regimes, we demonstrate how mutation can alter the level-crossing statistics even when the steady-state length remains unaffected by the same mutation. Comparison of the theoretically predicted level-crossing statistics, in addition to the mean and variance of the length, in the steady state with the corresponding experimental data can be used in the near future as a stringent test of the validity of models of flagellar length control. The experimental data required for this purpose, though never reported until now, can be collected, in principle, using a method developed very recently for flagellar length fluctuations.
[ { "created": "Sat, 22 Aug 2020 14:49:17 GMT", "version": "v1" }, { "created": "Mon, 11 Jan 2021 06:49:28 GMT", "version": "v2" }, { "created": "Tue, 8 Jun 2021 16:29:53 GMT", "version": "v3" } ]
2021-08-31
[ [ "Patra", "Swayamshree", "" ], [ "Chowdhury", "Debashish", "" ] ]
Long cell protrusions, which are effectively one-dimensional, are highly dynamic subcellular structures. The lengths of many such protrusions keep fluctuating about their mean values even in the steady state. We develop here a stochastic model motivated by the length fluctuations of a type of appendage of a eukaryotic cell called the flagellum (also called the cilium). Exploiting the techniques developed for the calculation of level-crossing statistics of random excursions of stochastic processes, we have derived analytical expressions for the passage times for hitting various thresholds, the sojourn times of random excursions beyond a threshold, and the extreme lengths attained during the lifetime of these model flagella. We identify different parameter regimes of this model flagellum that mimic those of the wild type and mutants of a well-known flagellated cell. By analysing our model in these different parameter regimes, we demonstrate how mutation can alter the level-crossing statistics even when the steady-state length remains unaffected by the same mutation. Comparison of the theoretically predicted level-crossing statistics, in addition to the mean and variance of the length, in the steady state with the corresponding experimental data can be used in the near future as a stringent test of the validity of models of flagellar length control. The experimental data required for this purpose, though never reported until now, can be collected, in principle, using a method developed very recently for flagellar length fluctuations.
1810.03205
Boyi Yang
Boyi Yang, Nabil Aounallah
CTCF Degradation Causes Increased Usage of Upstream Exons in Mouse Embryonic Stem Cells
9 pages, 4 figures, submitted to Genomics, Proteomics & Bioinformatics
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transcriptional repressor CTCF is an important regulator of chromatin 3D structure, facilitating the formation of topologically associating domains (TADs). However, its direct effects on gene regulation are less well understood. Here, we utilize previously published ChIP-seq and RNA-seq data to investigate the effects of CTCF on alternative splicing of genes with CTCF sites. We compared the amount of RNA-seq signal in exons upstream and downstream of binding sites following auxin-induced degradation of CTCF in mouse embryonic stem cells. We found that changes in gene expression following CTCF depletion were significant, with a general increase in the presence of upstream exons. We infer that a possible mechanism by which CTCF binding contributes to alternative splicing is by causing pauses in the transcription machinery, during which splicing elements are able to concurrently act on upstream exons already transcribed into RNA.
[ { "created": "Sun, 7 Oct 2018 20:40:27 GMT", "version": "v1" } ]
2018-10-09
[ [ "Yang", "Boyi", "" ], [ "Aounallah", "Nabil", "" ] ]
Transcriptional repressor CTCF is an important regulator of chromatin 3D structure, facilitating the formation of topologically associating domains (TADs). However, its direct effects on gene regulation are less well understood. Here, we utilize previously published ChIP-seq and RNA-seq data to investigate the effects of CTCF on alternative splicing of genes with CTCF sites. We compared the amount of RNA-seq signal in exons upstream and downstream of binding sites following auxin-induced degradation of CTCF in mouse embryonic stem cells. We found that changes in gene expression following CTCF depletion were significant, with a general increase in the presence of upstream exons. We infer that a possible mechanism by which CTCF binding contributes to alternative splicing is by causing pauses in the transcription machinery, during which splicing elements are able to concurrently act on upstream exons already transcribed into RNA.
q-bio/0408001
Chaitanya Athale
Chaitanya Athale, Yuri Mansury and Thomas S. Deisboeck
Simulating the Impact of a Molecular 'Decision-Process' on Cellular Phenotype and Multicellular Patterns in Brain Tumors
null
Journal of Theoretical Biology, Vol. 233, Issue 4 , 21 April 2005, Pages 469-481
10.1016/j.jtbi.2004.10.019
null
q-bio.CB
null
Experimental evidence indicates that human brain cancer cells proliferate or migrate, yet do not display both phenotypes at the same time. Here, we present a novel computational model simulating this cellular decision process leading up to either phenotype, based on a molecular interaction network of genes and proteins. The model's regulatory network consists of the epidermal growth factor receptor (EGFR), its ligand transforming growth factor-alpha (TGFa), the downstream enzyme phospholipase C-gamma (PLCg) and a mitosis-associated response pathway. This network is activated by autocrine TGFa secretion and the EGFR-dependent downstream signaling this step triggers, as well as modulated by an extrinsic nutritive glucose gradient. Employing a framework of mass action kinetics within a multiscale agent-based environment, we analyze both the emergent multicellular behavior of tumor growth and the single-cell molecular profiles that change over time and space. Our results show that one can indeed simulate the dichotomy between cell migration and proliferation based solely on an EGFR decision network. It turns out that these behavioral decisions on the single-cell level impact the spatial dynamics of the entire cancerous system. Furthermore, the simulation results yield intriguing, experimentally testable hypotheses also on the sub-cellular level, such as spatial cytosolic polarization of PLCg towards an extrinsic chemotactic gradient. Implications of these results for future work, both on the modeling and the experimental side, are discussed.
[ { "created": "Fri, 30 Jul 2004 20:32:13 GMT", "version": "v1" } ]
2007-05-23
[ [ "Athale", "Chaitanya", "" ], [ "Mansury", "Yuri", "" ], [ "Deisboeck", "Thomas S.", "" ] ]
Experimental evidence indicates that human brain cancer cells proliferate or migrate, yet do not display both phenotypes at the same time. Here, we present a novel computational model simulating this cellular decision process leading up to either phenotype, based on a molecular interaction network of genes and proteins. The model's regulatory network consists of the epidermal growth factor receptor (EGFR), its ligand transforming growth factor-alpha (TGFa), the downstream enzyme phospholipase C-gamma (PLCg) and a mitosis-associated response pathway. This network is activated by autocrine TGFa secretion and the EGFR-dependent downstream signaling this step triggers, as well as modulated by an extrinsic nutritive glucose gradient. Employing a framework of mass action kinetics within a multiscale agent-based environment, we analyze both the emergent multicellular behavior of tumor growth and the single-cell molecular profiles that change over time and space. Our results show that one can indeed simulate the dichotomy between cell migration and proliferation based solely on an EGFR decision network. It turns out that these behavioral decisions on the single-cell level impact the spatial dynamics of the entire cancerous system. Furthermore, the simulation results yield intriguing, experimentally testable hypotheses also on the sub-cellular level, such as spatial cytosolic polarization of PLCg towards an extrinsic chemotactic gradient. Implications of these results for future work, both on the modeling and the experimental side, are discussed.
1005.2714
James P. Crutchfield
James P. Crutchfield and Sean Whalen
Structural Drift: The Population Dynamics of Sequential Learning
15 pages, 9 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/sdrift.htm
null
null
Santa Fe Institute Working Paper 10-05
q-bio.PE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a theory of sequential causal inference in which learners in a chain estimate a structural model from their upstream teacher and then pass samples from the model to their downstream student. It extends the population dynamics of genetic drift, recasting Kimura's selectively neutral theory as a special case of a generalized drift process using structured populations with memory. We examine the diffusion and fixation properties of several drift processes and propose applications to learning, inference, and evolution. We also demonstrate how the organization of drift process space controls fidelity, facilitates innovations, and leads to information loss in sequential learning with and without memory.
[ { "created": "Sat, 15 May 2010 23:50:50 GMT", "version": "v1" }, { "created": "Mon, 27 Feb 2012 18:09:51 GMT", "version": "v2" } ]
2012-02-28
[ [ "Crutchfield", "James P.", "" ], [ "Whalen", "Sean", "" ] ]
We introduce a theory of sequential causal inference in which learners in a chain estimate a structural model from their upstream teacher and then pass samples from the model to their downstream student. It extends the population dynamics of genetic drift, recasting Kimura's selectively neutral theory as a special case of a generalized drift process using structured populations with memory. We examine the diffusion and fixation properties of several drift processes and propose applications to learning, inference, and evolution. We also demonstrate how the organization of drift process space controls fidelity, facilitates innovations, and leads to information loss in sequential learning with and without memory.
2403.05479
Gabriel Palma
Matheus Rakes, Ma\'ira Chagas Morais, Leandro do Prado Ribeiro, Gabriel Rodrigues Palma, Rafael de Andrade Moral, Daniel Bernardi, Anderson Dionei Gr\"utzmacher
The temperature affects the impact levels of synthetic insecticides on a parasitoid wasp used in the biological control of pentatomid pests in soybean crops
45 pages
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
The impact of climate change has led to growing global concern about the interaction of temperature and xenobiotics in agricultural toxicological studies. Thus, for the first time, we evaluated the lethal, sublethal and transgenerational effects of six insecticides used in the management of the stink bug complex in soybean crops on the different life stages of the parasitoid Telenomus podisi (Hymenoptera: Scelionidae) at three temperature levels (15, 25 and 30 {\deg}C). Telenomus podisi adults (F0 generation), when exposed to insecticides based on acephate, spinosad and thiamethoxam + lambda-cyhalothrin, showed accumulated mortality of 100% at all temperature levels tested. On the other hand, methoxyfenozide + spinetoram caused average mortalities of 88.75% at 15 {\deg}C and 38.75% at 25 and 30 {\deg}C. In contrast, the mortality rates caused by chlorfenapyr at 15, 25 and 30 {\deg}C were 1.25, 71.25 and 71.25%. On the other hand, surviving adults in the lethal toxicity bioassay did not show differences in egg parasitism (F0 generation) and emergence of the F1 generation at all temperature levels studied; however, the insecticide methoxyfenozide + spinetoram showed the lowest level of parasitism and emergence of T. podisi. In addition, our results demonstrated significant changes in the proportion of emerged males and females as the temperature increased; however, we did not find any differences when comparing the insecticides studied. Furthermore, we detected a significant interaction between insecticides and temperatures by contaminating the host's parasitized eggs (parasitoid pupal stage). Generally, the highest emergence reduction values were found at the highest temperature studied (30 {\deg}C). Our results highlighted the temperature-dependent impact of synthetic insecticides on parasitoids, which should be considered in toxicological risk assessments and under predicted climate change scenarios.
[ { "created": "Fri, 8 Mar 2024 17:43:13 GMT", "version": "v1" } ]
2024-03-11
[ [ "Rakes", "Matheus", "" ], [ "Morais", "Maíra Chagas", "" ], [ "Ribeiro", "Leandro do Prado", "" ], [ "Palma", "Gabriel Rodrigues", "" ], [ "Moral", "Rafael de Andrade", "" ], [ "Bernardi", "Daniel", "" ], [ "Grützm...
The impact of climate change has led to growing global concern about the interaction of temperature and xenobiotics in agricultural toxicological studies. Thus, for the first time, we evaluated the lethal, sublethal and transgenerational effects of six insecticides used in the management of the stink bug complex in soybean crops on the different life stages of the parasitoid Telenomus podisi (Hymenoptera: Scelionidae) at three temperature levels (15, 25 and 30 {\deg}C). Telenomus podisi adults (F0 generation), when exposed to insecticides based on acephate, spinosad and thiamethoxam + lambda-cyhalothrin, showed accumulated mortality of 100% at all temperature levels tested. On the other hand, methoxyfenozide + spinetoram caused average mortalities of 88.75% at 15 {\deg}C and 38.75% at 25 and 30 {\deg}C. In contrast, the mortality rates caused by chlorfenapyr at 15, 25 and 30 {\deg}C were 1.25, 71.25 and 71.25%. On the other hand, surviving adults in the lethal toxicity bioassay did not show differences in egg parasitism (F0 generation) and emergence of the F1 generation at all temperature levels studied; however, the insecticide methoxyfenozide + spinetoram showed the lowest level of parasitism and emergence of T. podisi. In addition, our results demonstrated significant changes in the proportion of emerged males and females as the temperature increased; however, we did not find any differences when comparing the insecticides studied. Furthermore, we detected a significant interaction between insecticides and temperatures by contaminating the host's parasitized eggs (parasitoid pupal stage). Generally, the highest emergence reduction values were found at the highest temperature studied (30 {\deg}C). Our results highlighted the temperature-dependent impact of synthetic insecticides on parasitoids, which should be considered in toxicological risk assessments and under predicted climate change scenarios.
1809.05743
Netta Haroush
Netta Haroush and Shimon Marom
Inhibition in Random Neuronal Networks Enhances Response Variability and Disrupts Stimulus Discrimination
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inhibition is considered to shape neural activity and broaden its pattern repertoire. In the sensory organs, where the anatomy of neural circuits is highly structured, lateral inhibition sharpens contrast among stimulus properties. The impact of inhibition on stimulus processing and the involvement of lateral inhibition are less clear when activity propagates to the less structured relay stations. Here we take a synthetic approach to disentangle the impact of inhibition from that of specialized anatomy on the repertoire of evoked activity patterns and, as a result, on the network's capacity to uniquely represent different stimuli. To this aim, we blocked inhibition in randomly rewired networks of cortical neurons in vitro, and quantified response variability and discrimination among stimuli provided at different spatial loci, before and after the blockade. We show that blocking inhibition quenches the variability of responses evoked by repeated stimuli through any spatial source, for all tested response features. Despite the sharpening role of inhibition in the highly structured sensory organs, in these random networks we find that blocking inhibition enhances stimulus discrimination between spatial sources of stimulation when based on response features that emphasize the relation among spike times recorded through different electrodes. We further show that under intact inhibition, responses to a given stimulus are a noisy version of those revealed by blocking inhibition, such that intact inhibition disrupts an otherwise coherent wave propagation of activity.
[ { "created": "Sat, 15 Sep 2018 16:57:13 GMT", "version": "v1" } ]
2018-09-18
[ [ "Haroush", "Netta", "" ], [ "Marom", "Shimon", "" ] ]
Inhibition is considered to shape neural activity and broaden its pattern repertoire. In the sensory organs, where the anatomy of neural circuits is highly structured, lateral inhibition sharpens contrast among stimulus properties. The impact of inhibition on stimulus processing and the involvement of lateral inhibition are less clear when activity propagates to the less structured relay stations. Here we take a synthetic approach to disentangle the impact of inhibition from that of specialized anatomy on the repertoire of evoked activity patterns and, as a result, on the network's capacity to uniquely represent different stimuli. To this aim, we blocked inhibition in randomly rewired networks of cortical neurons in vitro, and quantified response variability and discrimination among stimuli provided at different spatial loci, before and after the blockade. We show that blocking inhibition quenches the variability of responses evoked by repeated stimuli through any spatial source, for all tested response features. Despite the sharpening role of inhibition in the highly structured sensory organs, in these random networks we find that blocking inhibition enhances stimulus discrimination between spatial sources of stimulation when based on response features that emphasize the relation among spike times recorded through different electrodes. We further show that under intact inhibition, responses to a given stimulus are a noisy version of those revealed by blocking inhibition, such that intact inhibition disrupts an otherwise coherent wave propagation of activity.
1609.02721
Fabio Peruzzo Mr
Fabio Peruzzo, Sandro Azaele
A phenomenological spatial model for macro-ecological patterns in species-rich ecosystems
17 pages, 9 figures
null
null
null
q-bio.PE cond-mat.stat-mech nlin.AO physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the last few decades, ecologists have come to appreciate that key ecological patterns, which describe ecological communities at relatively large spatial scales, are not only scale dependent but also intimately intertwined. The relative abundance of species, which informs us about the commonness and rarity of species, changes its shape from small to large spatial scales. The average number of species as a function of area has a steep initial increase, followed by decreasing slopes at large scales. Finally, if we find a species in a given location, it is more likely that we find an individual of the same species close by, rather than farther apart. Such spatial turnover depends on the geographical distribution of species, which often are spatially aggregated. This reverberates on the abundances as well as the richness of species within a region, but so far it has been difficult to quantify such relationships. Within a neutral framework, which considers all individuals competitively equivalent, we introduce a spatial stochastic model, which phenomenologically accounts for birth, death, immigration and local dispersal of individuals. We calculate the pair correlation function, which encapsulates spatial turnover, and the conditional probability to find a species with a certain population within a given circular area. We also calculate the macro-ecological patterns referred to above, and compare the analytical formul{\ae} with the numerical integration of the model. Finally, we contrast the model predictions with the empirical data for two lowland tropical forest inventories, always finding good agreement.
[ { "created": "Fri, 9 Sep 2016 09:41:40 GMT", "version": "v1" }, { "created": "Mon, 12 Sep 2016 12:37:57 GMT", "version": "v2" } ]
2016-09-13
[ [ "Peruzzo", "Fabio", "" ], [ "Azaele", "Sandro", "" ] ]
Over the last few decades, ecologists have come to appreciate that key ecological patterns, which describe ecological communities at relatively large spatial scales, are not only scale dependent but also intimately intertwined. The relative abundance of species, which informs us about the commonness and rarity of species, changes its shape from small to large spatial scales. The average number of species as a function of area has a steep initial increase, followed by decreasing slopes at large scales. Finally, if we find a species in a given location, it is more likely that we find an individual of the same species close by, rather than farther apart. Such spatial turnover depends on the geographical distribution of species, which often are spatially aggregated. This reverberates on the abundances as well as the richness of species within a region, but so far it has been difficult to quantify such relationships. Within a neutral framework, which considers all individuals competitively equivalent, we introduce a spatial stochastic model, which phenomenologically accounts for birth, death, immigration and local dispersal of individuals. We calculate the pair correlation function, which encapsulates spatial turnover, and the conditional probability to find a species with a certain population within a given circular area. We also calculate the macro-ecological patterns referred to above, and compare the analytical formul{\ae} with the numerical integration of the model. Finally, we contrast the model predictions with the empirical data for two lowland tropical forest inventories, always finding good agreement.
1806.06359
Govind Kaigala
Deborah Huber and Govind V. Kaigala
Rapid micro fluorescence in situ hybridization in tissue sections
Biomicrofluidics, 2018
null
10.1063/1.5023775
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a micro fluorescence in situ hybridization ({\mu}FISH)-based rapid detection of cytogenetic biomarkers on formalin-fixed paraffin embedded (FFPE) tissue sections. We demonstrated this method in the context of detecting human epidermal growth factor receptor 2 (HER2) in breast tissue sections. This method uses a non-contact microfluidic scanning probe (MFP), which localizes FISH probes at the micrometer length-scale to selected cells of the tissue section. The scanning ability of the MFP allows for a versatile implementation of FISH on tissue sections. We demonstrated the use of oligonucleotide FISH probes in ethylene carbonate-based buffer enabling rapid hybridization within < 1 min for chromosome enumeration and 10-15 min for assessment of the HER2 status in FFPE sections. We further demonstrated recycling of FISH probes for multiple sequential tests using a defined volume of probes by forming hierarchical hydrodynamic flow confinements. This microscale method is compatible with the standard FISH protocols and with the Instant Quality (IQ) FISH assay, reduces the FISH probe consumption ~100-fold and the hybridization time 4-fold, resulting in an assay turnaround time of < 3 h. We believe rapid {\mu}FISH has the potential of being used in pathology workflows as a standalone method or in combination with other molecular methods for diagnostic and prognostic analysis of FFPE sections.
[ { "created": "Sun, 17 Jun 2018 10:25:33 GMT", "version": "v1" } ]
2018-06-19
[ [ "Huber", "Deborah", "" ], [ "Kaigala", "Govind V.", "" ] ]
This paper describes a micro fluorescence in situ hybridization ({\mu}FISH)-based rapid detection of cytogenetic biomarkers on formalin-fixed paraffin embedded (FFPE) tissue sections. We demonstrated this method in the context of detecting human epidermal growth factor receptor 2 (HER2) in breast tissue sections. This method uses a non-contact microfluidic scanning probe (MFP), which localizes FISH probes at the micrometer length-scale to selected cells of the tissue section. The scanning ability of the MFP allows for a versatile implementation of FISH on tissue sections. We demonstrated the use of oligonucleotide FISH probes in ethylene carbonate-based buffer enabling rapid hybridization within < 1 min for chromosome enumeration and 10-15 min for assessment of the HER2 status in FFPE sections. We further demonstrated recycling of FISH probes for multiple sequential tests using a defined volume of probes by forming hierarchical hydrodynamic flow confinements. This microscale method is compatible with the standard FISH protocols and with the Instant Quality (IQ) FISH assay, reduces the FISH probe consumption ~100-fold and the hybridization time 4-fold, resulting in an assay turnaround time of < 3 h. We believe rapid {\mu}FISH has the potential of being used in pathology workflows as a standalone method or in combination with other molecular methods for diagnostic and prognostic analysis of FFPE sections.
2310.19192
Dehong Xu
Dehong Xu, Ruiqi Gao, Wen-Hao Zhang, Xue-Xin Wei, Ying Nian Wu
Emergence of Grid-like Representations by Training Recurrent Networks with Conformal Normalization
null
null
null
null
q-bio.NC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Grid cells in the entorhinal cortex of mammalian brains exhibit striking hexagon grid firing patterns in their response maps as the animal (e.g., a rat) navigates in a 2D open environment. In this paper, we study the emergence of the hexagon grid patterns of grid cells based on a general recurrent neural network (RNN) model that captures the navigation process. The responses of grid cells collectively form a high dimensional vector, representing the 2D self-position of the agent. As the agent moves, the vector is transformed by an RNN that takes the velocity of the agent as input. We propose a simple yet general conformal normalization of the input velocity of the RNN, so that the local displacement of the position vector in the high-dimensional neural space is proportional to the local displacement of the agent in the 2D physical space, regardless of the direction of the input velocity. We apply this mechanism to both a linear RNN and nonlinear RNNs. Theoretically, we provide an understanding that explains the connection between conformal normalization and the emergence of hexagon grid patterns. Empirically, we conduct extensive experiments to verify that conformal normalization is crucial for the emergence of hexagon grid patterns, across various types of RNNs. The learned patterns share similar profiles to biological grid cells, and the topological properties of the patterns also align with our theoretical understanding.
[ { "created": "Sun, 29 Oct 2023 23:12:56 GMT", "version": "v1" }, { "created": "Tue, 20 Feb 2024 04:47:50 GMT", "version": "v2" } ]
2024-02-21
[ [ "Xu", "Dehong", "" ], [ "Gao", "Ruiqi", "" ], [ "Zhang", "Wen-Hao", "" ], [ "Wei", "Xue-Xin", "" ], [ "Wu", "Ying Nian", "" ] ]
Grid cells in the entorhinal cortex of mammalian brains exhibit striking hexagon grid firing patterns in their response maps as the animal (e.g., a rat) navigates in a 2D open environment. In this paper, we study the emergence of the hexagon grid patterns of grid cells based on a general recurrent neural network (RNN) model that captures the navigation process. The responses of grid cells collectively form a high dimensional vector, representing the 2D self-position of the agent. As the agent moves, the vector is transformed by an RNN that takes the velocity of the agent as input. We propose a simple yet general conformal normalization of the input velocity of the RNN, so that the local displacement of the position vector in the high-dimensional neural space is proportional to the local displacement of the agent in the 2D physical space, regardless of the direction of the input velocity. We apply this mechanism to both a linear RNN and nonlinear RNNs. Theoretically, we provide an understanding that explains the connection between conformal normalization and the emergence of hexagon grid patterns. Empirically, we conduct extensive experiments to verify that conformal normalization is crucial for the emergence of hexagon grid patterns, across various types of RNNs. The learned patterns share similar profiles to biological grid cells, and the topological properties of the patterns also align with our theoretical understanding.
1701.00732
Thierry Emonet
Adam James Waite, Nicholas W. Frankel, Yann S. Dufour, Jessica F. Johnston, Junjiajia Long, Thierry Emonet
Non-genetic diversity modulates population performance
null
Molecular Systems Biology, 12(12), 895 (2016)
10.15252/msb.20167044
null
q-bio.CB q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological functions are typically performed by groups of cells that express predominantly the same genes, yet display a continuum of phenotypes. While it is known how one genotype can generate such non-genetic diversity, it remains unclear how different phenotypes contribute to the performance of biological function at the population level. We developed a microfluidic device to simultaneously measure the phenotype and chemotactic performance of tens of thousands of individual, freely-swimming Escherichia coli as they climbed a gradient of attractant. We discovered that spatial structure spontaneously emerged from initially well-mixed wild type populations due to non-genetic diversity. By manipulating the expression of key chemotaxis proteins, we established a causal relationship between protein expression, non-genetic diversity, and performance that was theoretically predicted. This approach generated a complete phenotype-to-performance map, in which we found a nonlinear regime. We used this map to demonstrate how changing the shape of a phenotypic distribution can have as large of an effect on collective performance as changing the mean phenotype, suggesting that selection could act on both during the process of adaptation.
[ { "created": "Tue, 3 Jan 2017 16:03:05 GMT", "version": "v1" } ]
2017-01-04
[ [ "Waite", "Adam James", "" ], [ "Frankel", "Nicholas W.", "" ], [ "Dufour", "Yann S.", "" ], [ "Johnston", "Jessica F.", "" ], [ "Long", "Junjiajia", "" ], [ "Emonet", "Thierry", "" ] ]
Biological functions are typically performed by groups of cells that express predominantly the same genes, yet display a continuum of phenotypes. While it is known how one genotype can generate such non-genetic diversity, it remains unclear how different phenotypes contribute to the performance of biological function at the population level. We developed a microfluidic device to simultaneously measure the phenotype and chemotactic performance of tens of thousands of individual, freely-swimming Escherichia coli as they climbed a gradient of attractant. We discovered that spatial structure spontaneously emerged from initially well-mixed wild type populations due to non-genetic diversity. By manipulating the expression of key chemotaxis proteins, we established a causal relationship between protein expression, non-genetic diversity, and performance that was theoretically predicted. This approach generated a complete phenotype-to-performance map, in which we found a nonlinear regime. We used this map to demonstrate how changing the shape of a phenotypic distribution can have as large of an effect on collective performance as changing the mean phenotype, suggesting that selection could act on both during the process of adaptation.
0911.4458
Thierry Rabilloud
Thierry Rabilloud (BBSI), L. Vuillard, C. Gilly, J. J. Lawrence
Silver-staining of proteins in polyacrylamide gels: a general overview
null
Cellular and Molecular Biology (Noisy-le-Grand, France) 40, 1 (1994) 57-75
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
On the basis of the physico-chemical principles underlying silver-staining of proteins, which are recalled in this paper, several methods of silver-staining of proteins after SDS electrophoresis in polyacrylamide gels or isoelectric focusing were tested. The most valuable protocols are presented in this report, including standard methods for unsupported gels and new methods devised for thin (0.5 mm) supported gels for SDS electrophoresis or isoelectric focusing, and for staining of small peptides. Generally speaking, the most rapid methods were found to be less sensitive and less reproducible than more time-consuming ones. Among the long methods, those using the silver-diammine complex gave the most uniform sensitivity. They require, however, special home-made gels and cannot be applied to several electrophoretic systems (e.g. systems using tricine or bicine as the trailing ion, or isoelectric focusing in immobilized pH gradients). For these reasons, protocols based on silver nitrate are of more general use and might be favored. Future trends for silver-staining will also be discussed.
[ { "created": "Mon, 23 Nov 2009 17:47:14 GMT", "version": "v1" } ]
2009-11-24
[ [ "Rabilloud", "Thierry", "", "BBSI" ], [ "Vuillard", "L.", "" ], [ "Gilly", "C.", "" ], [ "Lawrence", "J. J.", "" ] ]
On the basis of the physico-chemical principles underlying silver-staining of proteins, which are recalled in this paper, several methods of silver-staining of proteins after SDS electrophoresis in polyacrylamide gels or isoelectric focusing were tested. The most valuable protocols are presented in this report, including standard methods for unsupported gels and new methods devised for thin (0.5 mm) supported gels for SDS electrophoresis or isoelectric focusing, and for staining of small peptides. Generally speaking, the most rapid methods were found to be less sensitive and less reproducible than more time-consuming ones. Among the long methods, those using the silver-diammine complex gave the most uniform sensitivity. They require, however, special home-made gels and cannot be applied to several electrophoretic systems (e.g. systems using tricine or bicine as the trailing ion, or isoelectric focusing in immobilized pH gradients). For these reasons, protocols based on silver nitrate are of more general use and might be favored. Future trends for silver-staining will also be discussed.
1411.2273
Wilten Nicola
Wilten Nicola, Cheng Ly, Sue Ann Campbell
One-Dimensional Population Density Approaches to Recurrently Coupled Networks of Neurons with Noise
26 Pages, 6 Figures
null
10.1137/140995738
null
q-bio.NC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mean-field systems have been previously derived for networks of coupled, two-dimensional, integrate-and-fire neurons such as the Izhikevich, adapting exponential (AdEx) and quartic integrate and fire (QIF), among others. Unfortunately, the mean-field systems have a degree of frequency error and the networks analyzed often do not include noise when there is adaptation. Here, we derive a one-dimensional partial differential equation (PDE) approximation for the marginal voltage density under a first order moment closure for coupled networks of integrate-and-fire neurons with white noise inputs. The PDE has substantially less frequency error than the mean-field system, and provides a great deal more information, at the cost of analytical tractability. The convergence properties of the mean-field system in the low noise limit are elucidated. A novel method for the analysis of the stability of the asynchronous tonic firing solution is also presented and implemented. Unlike previous attempts at stability analysis with these network types, information about the marginal densities of the adaptation variables is used. This method can in principle be applied to other systems with nonlinear partial differential equations.
[ { "created": "Sun, 9 Nov 2014 19:30:36 GMT", "version": "v1" } ]
2016-05-19
[ [ "Nicola", "Wilten", "" ], [ "Ly", "Cheng", "" ], [ "Campbell", "Sue Ann", "" ] ]
Mean-field systems have been previously derived for networks of coupled, two-dimensional, integrate-and-fire neurons such as the Izhikevich, adapting exponential (AdEx) and quartic integrate and fire (QIF), among others. Unfortunately, the mean-field systems have a degree of frequency error and the networks analyzed often do not include noise when there is adaptation. Here, we derive a one-dimensional partial differential equation (PDE) approximation for the marginal voltage density under a first order moment closure for coupled networks of integrate-and-fire neurons with white noise inputs. The PDE has substantially less frequency error than the mean-field system, and provides a great deal more information, at the cost of analytical tractability. The convergence properties of the mean-field system in the low noise limit are elucidated. A novel method for the analysis of the stability of the asynchronous tonic firing solution is also presented and implemented. Unlike previous attempts at stability analysis with these network types, information about the marginal densities of the adaptation variables is used. This method can in principle be applied to other systems with nonlinear partial differential equations.
0803.1510
Frederick Matsen IV
Daniel Ford, Tanja Gernhard, Frederick Matsen
A method for investigating relative timing information on phylogenetic trees
Feel free to email us with comments and questions
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a new way to understand the timing of branching events in phylogenetic trees. Our method explicitly considers the relative timing of diversification events between sister clades; as such it is complementary to existing methods using lineages-through-time plots, which consider diversification in aggregate. The method looks for evidence of diversification happening in lineage-specific ``bursts'', or the opposite, where diversification between two clades happens in an unusually regular fashion. In order to be able to distinguish interesting events from stochasticity, we propose two classes of neutral models on trees with timing information and develop a statistical framework for testing these models. Our models substantially generalize both the coalescent with ancestral population size variation and the global-rate speciation-extinction models. We end the paper with several example applications: first, we show that the evolution of the Hepatitis C virus appears to proceed in a lineage-specific bursting fashion. Second, we analyze a large tree of ants, demonstrating that a period of elevated diversification rates does not appear to have occurred in a bursting manner.
[ { "created": "Mon, 10 Mar 2008 23:43:15 GMT", "version": "v1" } ]
2008-03-12
[ [ "Ford", "Daniel", "" ], [ "Gernhard", "Tanja", "" ], [ "Matsen", "Frederick", "" ] ]
In this paper we present a new way to understand the timing of branching events in phylogenetic trees. Our method explicitly considers the relative timing of diversification events between sister clades; as such it is complementary to existing methods using lineages-through-time plots, which consider diversification in aggregate. The method looks for evidence of diversification happening in lineage-specific ``bursts'', or the opposite, where diversification between two clades happens in an unusually regular fashion. In order to be able to distinguish interesting events from stochasticity, we propose two classes of neutral models on trees with timing information and develop a statistical framework for testing these models. Our models substantially generalize both the coalescent with ancestral population size variation and the global-rate speciation-extinction models. We end the paper with several example applications: first, we show that the evolution of the Hepatitis C virus appears to proceed in a lineage-specific bursting fashion. Second, we analyze a large tree of ants, demonstrating that a period of elevated diversification rates does not appear to have occurred in a bursting manner.
1402.5702
Mustafa Mert Ankarali
Noah J. Cowan, Mustafa Mert Ankarali, Jonathan P. Dyhr, Manu S. Madhav, Eatai Roth, Shahin Sefati, Simon Sponberg, Sarah A. Stamper, Eric S. Fortune and Thomas L. Daniel
Feedback Control as a Framework for Understanding Tradeoffs in Biology
Submitted to Integr Comp Biol
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Control theory arose from a need to control synthetic systems. From regulating steam engines to tuning radios to devices capable of autonomous movement, it provided a formal mathematical basis for understanding the role of feedback in the stability (or change) of dynamical systems. It provides a framework for understanding any system with feedback regulation, including biological ones such as regulatory gene networks, cellular metabolic systems, sensorimotor dynamics of moving animals, and even ecological or evolutionary dynamics of organisms and populations. Here we focus on four case studies of the sensorimotor dynamics of animals, each of which involves the application of principles from control theory to probe stability and feedback in an organism's response to perturbations. We use examples from aquatic (electric fish station keeping and jamming avoidance), terrestrial (cockroach wall following) and aerial environments (flight control in moths) to highlight how one can use control theory to understand how feedback mechanisms interact with the physical dynamics of animals to determine their stability and response to sensory inputs and perturbations. Each case study is cast as a control problem with sensory input, neural processing, and motor dynamics, the output of which feeds back to the sensory inputs. Collectively, the interaction of these systems in a closed loop determines the behavior of the entire system.
[ { "created": "Mon, 24 Feb 2014 01:49:49 GMT", "version": "v1" } ]
2014-02-25
[ [ "Cowan", "Noah J.", "" ], [ "Ankarali", "Mustafa Mert", "" ], [ "Dyhr", "Jonathan P.", "" ], [ "Madhav", "Manu S.", "" ], [ "Roth", "Eatai", "" ], [ "Sefati", "Shahin", "" ], [ "Sponberg", "Simon", "" ], ...
Control theory arose from a need to control synthetic systems. From regulating steam engines to tuning radios to devices capable of autonomous movement, it provided a formal mathematical basis for understanding the role of feedback in the stability (or change) of dynamical systems. It provides a framework for understanding any system with feedback regulation, including biological ones such as regulatory gene networks, cellular metabolic systems, sensorimotor dynamics of moving animals, and even ecological or evolutionary dynamics of organisms and populations. Here we focus on four case studies of the sensorimotor dynamics of animals, each of which involves the application of principles from control theory to probe stability and feedback in an organism's response to perturbations. We use examples from aquatic (electric fish station keeping and jamming avoidance), terrestrial (cockroach wall following) and aerial environments (flight control in moths) to highlight how one can use control theory to understand how feedback mechanisms interact with the physical dynamics of animals to determine their stability and response to sensory inputs and perturbations. Each case study is cast as a control problem with sensory input, neural processing, and motor dynamics, the output of which feeds back to the sensory inputs. Collectively, the interaction of these systems in a closed loop determines the behavior of the entire system.
0802.4259
Luca Sbano
L. Sbano and M. Kirkilionis
Molecular Systems with Infinite and Finite Degrees of Freedom. Part I: Multi-Scale Analysis
Corrected typos
null
null
null
q-bio.BM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper analyses stochastic systems describing reacting molecular systems with a combination of two types of state spaces, a finite-dimensional and an infinite-dimensional part. As a typical situation consider the interaction of larger macro-molecules, finite and small in numbers per cell (like protein complexes), with smaller, very abundant molecules, for example metabolites. We study the construction of the continuum approximation of the associated Master Equation (ME) by using the Trotter approximation [27]. The continuum limit shows regimes where the finite degrees of freedom evolve faster than the infinite ones. Then we develop a rigorous asymptotic adiabatic theory upon the condition that the jump process arising from the finite degrees of freedom of the Markov Chain (MC, typically describing conformational changes of the macro-molecules) occurs with large frequency. In a second part of this work, the theory is applied to derive typical enzyme kinetics in an alternative way and to interpret them within this framework.
[ { "created": "Thu, 28 Feb 2008 16:59:53 GMT", "version": "v1" }, { "created": "Fri, 29 Feb 2008 10:52:22 GMT", "version": "v2" } ]
2009-09-29
[ [ "Sbano", "L.", "" ], [ "Kirkilionis", "M.", "" ] ]
The paper analyses stochastic systems describing reacting molecular systems with a combination of two types of state spaces, a finite-dimensional and an infinite-dimensional part. As a typical situation consider the interaction of larger macro-molecules, finite and small in numbers per cell (like protein complexes), with smaller, very abundant molecules, for example metabolites. We study the construction of the continuum approximation of the associated Master Equation (ME) by using the Trotter approximation [27]. The continuum limit shows regimes where the finite degrees of freedom evolve faster than the infinite ones. Then we develop a rigorous asymptotic adiabatic theory upon the condition that the jump process arising from the finite degrees of freedom of the Markov Chain (MC, typically describing conformational changes of the macro-molecules) occurs with large frequency. In a second part of this work, the theory is applied to derive typical enzyme kinetics in an alternative way and to interpret them within this framework.
1908.03336
Nicolas Blondeau
Michel Tauc, Nicolas Melis (IBV), Miled Bourourou (IPMC), S\'ebastien Giraud (LPS), Thierry Hauet (IRTOMIT), Nicolas Blondeau (IPMC)
A new pharmacological preconditioning-based target: from drosophila to kidney transplantation
Conditioning Medicine, 2019
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the biggest challenges in medicine is to dampen the pathophysiological stress induced by an episode of ischemia. Such stress, due to various pathological or clinical situations, follows a restriction in blood and oxygen supply to tissue, causing a shortage of oxygen and nutrients that are required for cellular metabolism. Ischemia can cause irreversible damage to target tissue, leading to a poor physiological recovery outcome for the patient. Conversely, preconditioning by brief periods of ischemia has been shown in multiple organs to confer tolerance against subsequent, normally lethal ischemia. By definition, preconditioning of organs must be applied preemptively. This limits the applicability of preconditioning in clinical situations that arise unpredictably, such as myocardial infarction and stroke. There are, however, clinical situations that arise as a result of ischemia-reperfusion injury, which can be anticipated and are therefore adequate candidates for preconditioning. Organ transplantation, and more particularly kidney transplantation, the optimal treatment for suitable patients with end-stage renal disease (ESRD), is a predictable surgery that permits the use of preconditioning protocols to prepare the organ for subsequent ischemia/reperfusion stress. It therefore seems crucial to develop appropriate preconditioning protocols against the ischemia that will occur under transplantation conditions; up to now, these have mainly relied on mechanical ischemic preconditioning, which triggers innate responses. It is not known whether preconditioning has to be applied to the donor, the recipient, or both. No drug/target pair has been envisioned and validated in the clinic. Options for identifying new target/drug pairs involve the use of model animals, such as drosophila, in which some physiological pathways, such as the management of oxygen, are highly conserved across evolution. Oxygen is a universal element for the existence of life on earth.
In this review we focus on a very specific pathway of pharmacological preconditioning, identified in drosophila, that was successfully transferred to mammalian models and has potential applications in human health. Very few mechanisms identified in these model animals have been translated to higher evolutionary levels. This review highlights the commonality of oxygen regulation across diverse animals.
[ { "created": "Fri, 9 Aug 2019 07:00:08 GMT", "version": "v1" } ]
2019-08-12
[ [ "Tauc", "Michel", "", "IBV" ], [ "Melis", "Nicolas", "", "IBV" ], [ "Bourourou", "Miled", "", "IPMC" ], [ "Giraud", "Sébastien", "", "LPS" ], [ "Hauet", "Thierry", "", "IRTOMIT" ], [ "Blondeau", "Nicolas", ...
One of the biggest challenges in medicine is to dampen the pathophysiological stress induced by an episode of ischemia. Such stress, due to various pathological or clinical situations, follows a restriction in blood and oxygen supply to tissue, causing a shortage of oxygen and nutrients that are required for cellular metabolism. Ischemia can cause irreversible damage to target tissue, leading to a poor physiological recovery outcome for the patient. Conversely, preconditioning by brief periods of ischemia has been shown in multiple organs to confer tolerance against subsequent, normally lethal ischemia. By definition, preconditioning of organs must be applied preemptively. This limits the applicability of preconditioning in clinical situations that arise unpredictably, such as myocardial infarction and stroke. There are, however, clinical situations that arise as a result of ischemia-reperfusion injury, which can be anticipated and are therefore adequate candidates for preconditioning. Organ transplantation, and more particularly kidney transplantation, the optimal treatment for suitable patients with end-stage renal disease (ESRD), is a predictable surgery that permits the use of preconditioning protocols to prepare the organ for subsequent ischemia/reperfusion stress. It therefore seems crucial to develop appropriate preconditioning protocols against the ischemia that will occur under transplantation conditions; up to now, these have mainly relied on mechanical ischemic preconditioning, which triggers innate responses. It is not known whether preconditioning has to be applied to the donor, the recipient, or both. No drug/target pair has been envisioned and validated in the clinic. Options for identifying new target/drug pairs involve the use of model animals, such as drosophila, in which some physiological pathways, such as the management of oxygen, are highly conserved across evolution. Oxygen is a universal element for the existence of life on earth.
In this review we focus on a very specific pathway of pharmacological preconditioning, identified in drosophila, that was successfully transferred to mammalian models and has potential applications in human health. Very few mechanisms identified in these model animals have been translated to higher evolutionary levels. This review highlights the commonality of oxygen regulation across diverse animals.
1301.4564
Rafael Najmanovich
Rafael Najmanovich
Protein flexibility upon ligand binding: Docking predictions and statistical analysis
Thesis for the degree Doctor of Philosophy submitted to the scientific council of the Weizmann Institute of Science. Rehovot, Israel, May 2003. Work performed under the supervision of Prof. Meir Edelman and Dr. Vladimir Sobolev (Plant Sciences Dept.) and Prof. Eytan Domany (Dept. Physics of Complex Systems). The thesis was accepted after anonymous peer-review and the degree conferred in June 2004
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Side chain flexibility is an important factor in ligand binding. In order to determine the extent to which side chain flexibility is involved in ligand binding, a knowledge-based approach was taken. A database composed of examples of protein structures in the presence or absence of a given ligand is used to analyze which side chains undergo side chain conformational changes. This analysis determined that up to 40% of binding sites do not present side chain conformational changes. A total of three residues undergoing side chain conformational changes encompass approximately 85% of the binding sites studied. When analyzing the propensities of different amino acids to undergo side chain conformational changes, we find considerable differences between different amino acids. A support vector machine learning approach was used to create a classifier utilizing information about the solvent accessible area as well as the flexibility scale value of each specific side chain to be predicted, together with its neighboring side chains. An accuracy level of 70% is reached using this approach. The fact that a small number of residues undergo side chain conformational changes in the majority of binding sites makes it feasible to introduce side chain flexibility in docking simulations. An algorithm has been developed for introducing side chain flexibility utilizing a hybrid genetic-algorithm/exhaustive-search procedure and a surface complementarity based scoring function. This approach is implemented in the software tool FlexAID. FlexAID utilizes a rotamer library to create alternative conformations for a list of residues that are exhaustively searched during the docking simulation. The performance of FlexAID falls in the 70-80% range for both rigid local and global simulations.
[ { "created": "Sat, 19 Jan 2013 14:15:19 GMT", "version": "v1" } ]
2013-01-22
[ [ "Najmanovich", "Rafael", "" ] ]
Side chain flexibility is an important factor in ligand binding. In order to determine the extent to which side chain flexibility is involved in ligand binding, a knowledge-based approach was taken. A database composed of examples of protein structures in the presence or absence of a given ligand is used to analyze which side chains undergo side chain conformational changes. This analysis determined that up to 40% of binding sites do not present side chain conformational changes. A total of three residues undergoing side chain conformational changes encompass approximately 85% of the binding sites studied. When analyzing the propensities of different amino acids to undergo side chain conformational changes, we find considerable differences between different amino acids. A support vector machine learning approach was used to create a classifier utilizing information about the solvent accessible area as well as the flexibility scale value of each specific side chain to be predicted, together with its neighboring side chains. An accuracy level of 70% is reached using this approach. The fact that a small number of residues undergo side chain conformational changes in the majority of binding sites makes it feasible to introduce side chain flexibility in docking simulations. An algorithm has been developed for introducing side chain flexibility utilizing a hybrid genetic-algorithm/exhaustive-search procedure and a surface complementarity based scoring function. This approach is implemented in the software tool FlexAID. FlexAID utilizes a rotamer library to create alternative conformations for a list of residues that are exhaustively searched during the docking simulation. The performance of FlexAID falls in the 70-80% range for both rigid local and global simulations.
2004.02752
Raj Abhijit Dandekar
Raj Dandekar and George Barbastathis
Neural Network aided quarantine control model estimation of global Covid-19 spread
13 pages, 26 figures
null
null
null
q-bio.PE physics.soc-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since the first recording of what we now call Covid-19 infection in Wuhan, Hubei province, China on Dec 31, 2019, the disease has spread worldwide and been met with a wide variety of social distancing and quarantine policies. The effectiveness of these responses is notoriously difficult to quantify as individuals travel, violate policies deliberately or inadvertently, and infect others without themselves being detected. In this paper, we attempt to interpret and extrapolate from publicly available data using a model that mixes first-principles epidemiological equations with a data-driven neural network. Leveraging our neural network augmented model, we focus our analysis on four locales: Wuhan, Italy, South Korea and the United States of America, and compare the role played by the quarantine and isolation measures in each of these countries in controlling the effective reproduction number $R_{t}$ of the virus. Our results unequivocally indicate that the countries in which rapid government interventions and strict public health measures for quarantine and isolation were implemented were successful in halting the spread of infection and preventing it from exploding exponentially. We test the predictive ability of our model by matching predictions over the period 3 March - 1 April 2020 for Wuhan and over the period 25 March - 1 April 2020 for Italy and South Korea. In the case of the US, our model captures well the current infected-curve growth and predicts a halting of infection spread by 20 April 2020. We further demonstrate that relaxing or reversing quarantine measures right now would lead to an exponential explosion in the infected case count, thus nullifying the role played by all measures implemented in the US since mid March 2020.
[ { "created": "Thu, 2 Apr 2020 19:31:09 GMT", "version": "v1" } ]
2020-04-07
[ [ "Dandekar", "Raj", "" ], [ "Barbastathis", "George", "" ] ]
Since the first recording of what we now call Covid-19 infection in Wuhan, Hubei province, China on Dec 31, 2019, the disease has spread worldwide and been met with a wide variety of social distancing and quarantine policies. The effectiveness of these responses is notoriously difficult to quantify as individuals travel, violate policies deliberately or inadvertently, and infect others without themselves being detected. In this paper, we attempt to interpret and extrapolate from publicly available data using a model that mixes first-principles epidemiological equations with a data-driven neural network. Leveraging our neural network augmented model, we focus our analysis on four locales: Wuhan, Italy, South Korea and the United States of America, and compare the role played by the quarantine and isolation measures in each of these countries in controlling the effective reproduction number $R_{t}$ of the virus. Our results unequivocally indicate that the countries in which rapid government interventions and strict public health measures for quarantine and isolation were implemented were successful in halting the spread of infection and preventing it from exploding exponentially. We test the predictive ability of our model by matching predictions over the period 3 March - 1 April 2020 for Wuhan and over the period 25 March - 1 April 2020 for Italy and South Korea. In the case of the US, our model captures well the current infected-curve growth and predicts a halting of infection spread by 20 April 2020. We further demonstrate that relaxing or reversing quarantine measures right now would lead to an exponential explosion in the infected case count, thus nullifying the role played by all measures implemented in the US since mid March 2020.
1706.08040
Alexander Feigel
Assaf Engel and Alexander Feigel
Single Equalizer Strategy with no Information Transfer for Conflict Escalation
14 pages, 4 figures
Phys. Rev. E 98, 012415 (2018)
10.1103/PhysRevE.98.012415
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In an iterated two-person game, for instance prisoner's dilemma or the snowdrift game, there exist strategies that force the payoffs of the opponents to be equal. These equalizer strategies form a subset of the more general zero-determinant strategies that unilaterally set the payoff of an opponent. A challenge in the attempts to understand the role of these strategies in the evolution of animal behavior is the lack of iterations in the fights for mating opportunities or territory control. We show that an arbitrary two-parameter strategy may possess a corresponding equalizer strategy which produces the same result: statistics of the fight outcomes in the contests with mutants are the same for each of these two strategies. Therefore, analyzing only the equalizer strategy space may be sufficient to predict animal behavior if nature, indeed, reduces (marginalizes) complex strategies to equalizer strategy space. The work's main finding is that there is a unique equalizer strategy that predicts fight outcomes without mutual cooperation. The lack of mutual cooperation is a common trait in conflict escalation contests that generally require a clear winner. In addition, this unique strategy does not assess information of the opponent's state. The method bypasses the standard analysis of evolutionary stability. The results fit well the observations of combat between male bowl and doily spiders and support an empirical assumption of the war of attrition model that the species use only information regarding their own state during conflict escalation.
[ { "created": "Sun, 25 Jun 2017 06:00:25 GMT", "version": "v1" }, { "created": "Thu, 1 Mar 2018 17:35:22 GMT", "version": "v2" } ]
2018-08-01
[ [ "Engel", "Assaf", "" ], [ "Feigel", "Alexander", "" ] ]
In an iterated two-person game, for instance prisoner's dilemma or the snowdrift game, there exist strategies that force the payoffs of the opponents to be equal. These equalizer strategies form a subset of the more general zero-determinant strategies that unilaterally set the payoff of an opponent. A challenge in the attempts to understand the role of these strategies in the evolution of animal behavior is the lack of iterations in the fights for mating opportunities or territory control. We show that an arbitrary two-parameter strategy may possess a corresponding equalizer strategy which produces the same result: statistics of the fight outcomes in the contests with mutants are the same for each of these two strategies. Therefore, analyzing only the equalizer strategy space may be sufficient to predict animal behavior if nature, indeed, reduces (marginalizes) complex strategies to equalizer strategy space. The work's main finding is that there is a unique equalizer strategy that predicts fight outcomes without mutual cooperation. The lack of mutual cooperation is a common trait in conflict escalation contests that generally require a clear winner. In addition, this unique strategy does not assess information of the opponent's state. The method bypasses the standard analysis of evolutionary stability. The results fit well the observations of combat between male bowl and doily spiders and support an empirical assumption of the war of attrition model that the species use only information regarding their own state during conflict escalation.
1408.1593
Matthew Turner
Daniel J. G. Pearce and Matthew S. Turner
Differentiating swarming models by mimicking a frustrated anti-Ferromagnet
null
null
10.1098/rsif.2015.0520
null
q-bio.PE cond-mat.soft cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-propelled particle (SPP) models are often compared with animal swarms. However, the collective behaviour observed in experiments usually leaves considerable unconstrained freedom in the structure of these models. To tackle this degeneracy, and better distinguish between candidate models, we study swarms of SPPs circulating in channels (like spins) where we permit information to pass through windows between neighbouring channels. Co-alignment between particles then couples the channels (antiferromagnetically) so that they tend to counter-rotate. We study channels arranged to mimic a geometrically frustrated antiferromagnet and show how the effects of this frustration allow us to better distinguish between SPP models. Similar experiments could therefore improve our understanding of collective motion in animals. Finally we discuss how the spin analogy can be exploited to construct universal logic gates and therefore swarming systems that can function as Turing machines.
[ { "created": "Thu, 7 Aug 2014 13:53:27 GMT", "version": "v1" } ]
2015-11-06
[ [ "Pearce", "Daniel J. G.", "" ], [ "Turner", "Matthew S.", "" ] ]
Self-propelled particle (SPP) models are often compared with animal swarms. However, the collective behaviour observed in experiments usually leaves considerable unconstrained freedom in the structure of these models. To tackle this degeneracy, and better distinguish between candidate models, we study swarms of SPPs circulating in channels (like spins) where we permit information to pass through windows between neighbouring channels. Co-alignment between particles then couples the channels (antiferromagnetically) so that they tend to counter-rotate. We study channels arranged to mimic a geometrically frustrated antiferromagnet and show how the effects of this frustration allow us to better distinguish between SPP models. Similar experiments could therefore improve our understanding of collective motion in animals. Finally we discuss how the spin analogy can be exploited to construct universal logic gates and therefore swarming systems that can function as Turing machines.
q-bio/0509039
Paul Higgs
Daniel Urbina, Bin Tang and Paul G. Higgs
The response of amino acid frequencies to directional mutation pressure in mitochondrial genome sequences is related to the physical properties of the amino acids and to the structure of the genetic code
52 pages including 12 figures. Journal of Molecular Evolution (in press)
null
null
null
q-bio.PE q-bio.GN
null
The frequencies of A, C, G and T in mitochondrial DNA vary among species due to unequal rates of mutation between the bases. The frequencies of bases at four-fold degenerate sites respond directly to mutation pressure. At 1st and 2nd positions, selection reduces the degree of frequency variation. Using a simple evolutionary model, we show that 1st position sites are less constrained by selection than 2nd position sites, and therefore that the frequencies of bases at 1st position are more responsive to mutation pressure than those at 2nd position. We define a similarity measure between amino acids that is a function of 8 measured physical properties. We define a proximity measure for each amino acid, which is the average similarity between an amino acid and all others that are accessible via single point mutations in the genetic code. We also define a responsiveness for each amino acid, which measures how rapidly an amino acid frequency changes as a result of mutation pressure acting on the base frequencies. There is a strong correlation between responsiveness and proximity, and both these quantities are also correlated with the mutability of amino acids estimated from the mtREV substitution rate matrix. We also consider the variation of base frequencies between strands and between genes on a strand. These trends are consistent with the patterns expected from analysis of the variation among genomes.
[ { "created": "Tue, 27 Sep 2005 15:01:13 GMT", "version": "v1" } ]
2016-09-08
[ [ "Urbina", "Daniel", "" ], [ "Tang", "Bin", "" ], [ "Higgs", "Paul G.", "" ] ]
The frequencies of A, C, G and T in mitochondrial DNA vary among species due to unequal rates of mutation between the bases. The frequencies of bases at four-fold degenerate sites respond directly to mutation pressure. At 1st and 2nd positions, selection reduces the degree of frequency variation. Using a simple evolutionary model, we show that 1st position sites are less constrained by selection than 2nd position sites, and therefore that the frequencies of bases at 1st position are more responsive to mutation pressure than those at 2nd position. We define a similarity measure between amino acids that is a function of 8 measured physical properties. We define a proximity measure for each amino acid, which is the average similarity between an amino acid and all others that are accessible via single point mutations in the genetic code. We also define a responsiveness for each amino acid, which measures how rapidly an amino acid frequency changes as a result of mutation pressure acting on the base frequencies. There is a strong correlation between responsiveness and proximity, and both these quantities are also correlated with the mutability of amino acids estimated from the mtREV substitution rate matrix. We also consider the variation of base frequencies between strands and between genes on a strand. These trends are consistent with the patterns expected from analysis of the variation among genomes.
0912.3832
Kanaka Rajan
L F Abbott, Kanaka Rajan and Haim Sompolinsky
Interactions between Intrinsic and Stimulus-Evoked Activity in Recurrent Neural Networks
null
null
null
null
q-bio.NC cond-mat.dis-nn nlin.CD physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trial-to-trial variability is an essential feature of neural responses, but its source is a subject of active debate. Response variability (Mast and Victor, 1991; Arieli et al., 1995 & 1996; Anderson et al., 2000 & 2001; Kenet et al., 2003; Petersen et al., 2003a & b; Fiser, Chiu and Weliky, 2004; MacLean et al., 2005; Yuste et al., 2005; Vincent et al., 2007) is often treated as random noise, generated either by other brain areas, or by stochastic processes within the circuitry being studied. We call such sources of variability external to stress the independence of this form of noise from activity driven by the stimulus. Variability can also be generated internally by the same network dynamics that generates responses to a stimulus. How can we distinguish between external and internal sources of response variability? Here we show that internal sources of variability interact nonlinearly with stimulus-induced activity, and this interaction yields a suppression of noise in the evoked state. This provides a theoretical basis and potential mechanism for the experimental observation that, in many brain areas, stimuli cause significant suppression of neuronal variability (Werner and Mountcastle, 1963; Fortier, Smith and Kalaska, 1993; Anderson et al., 2000; Friedrich and Laurent, 2004; Churchland et al., 2006; Finn, Priebe and Ferster, 2007; Mitchell, Sundberg and Reynolds, 2007; Churchland et al., 2009). The combined theoretical and experimental results suggest that internally generated activity is a significant contributor to response variability in neural circuits.
[ { "created": "Fri, 18 Dec 2009 23:07:02 GMT", "version": "v1" }, { "created": "Mon, 2 Aug 2010 20:45:26 GMT", "version": "v2" } ]
2010-08-04
[ [ "Abbott", "L F", "" ], [ "Rajan", "Kanaka", "" ], [ "Sompolinsky", "Haim", "" ] ]
Trial-to-trial variability is an essential feature of neural responses, but its source is a subject of active debate. Response variability (Mast and Victor, 1991; Arieli et al., 1995 & 1996; Anderson et al., 2000 & 2001; Kenet et al., 2003; Petersen et al., 2003a & b; Fiser, Chiu and Weliky, 2004; MacLean et al., 2005; Yuste et al., 2005; Vincent et al., 2007) is often treated as random noise, generated either by other brain areas, or by stochastic processes within the circuitry being studied. We call such sources of variability external to stress the independence of this form of noise from activity driven by the stimulus. Variability can also be generated internally by the same network dynamics that generates responses to a stimulus. How can we distinguish between external and internal sources of response variability? Here we show that internal sources of variability interact nonlinearly with stimulus-induced activity, and this interaction yields a suppression of noise in the evoked state. This provides a theoretical basis and potential mechanism for the experimental observation that, in many brain areas, stimuli cause significant suppression of neuronal variability (Werner and Mountcastle, 1963; Fortier, Smith and Kalaska, 1993; Anderson et al., 2000; Friedrich and Laurent, 2004; Churchland et al., 2006; Finn, Priebe and Ferster, 2007; Mitchell, Sundberg and Reynolds, 2007; Churchland et al., 2009). The combined theoretical and experimental results suggest that internally generated activity is a significant contributor to response variability in neural circuits.
1310.2528
Michael Deem
J. C. Phillips
Thermodynamic Description of Beta Amyloid Formation
12 pages, 5 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein function depends on both protein structure and amino acid (aa) sequence. Here we show that modular features of both structure and function can be quantified from the aa sequences alone for the small (40,42 aa) plaque-forming amyloid beta fragments. Some edge and center features of the fragments are predicted. Contrasting results from the second order hydropathicity scale based on evolutionary optimization (self-organized criticality) and the first order scale based on complete protein (water-air) unfolding show that fragmentation has mixed first- and second-order character.
[ { "created": "Wed, 9 Oct 2013 15:54:51 GMT", "version": "v1" }, { "created": "Wed, 5 Mar 2014 14:44:32 GMT", "version": "v2" } ]
2014-03-06
[ [ "Phillips", "J. C.", "" ] ]
Protein function depends on both protein structure and amino acid (aa) sequence. Here we show that modular features of both structure and function can be quantified from the aa sequences alone for the small (40,42 aa) plaque-forming amyloid beta fragments. Some edge and center features of the fragments are predicted. Contrasting results from the second-order hydropathicity scale based on evolutionary optimization (self-organized criticality) and the first-order scale based on complete protein (water-air) unfolding show that fragmentation has mixed first- and second-order character.
q-bio/0402037
Garel
Thomas Garel, Henri Orland
Generalized Poland-Scheraga model for DNA hybridization
20 pages; 14 figures. Minor modifications; accepted for publication in Biopolymers
null
null
SPhT-T04/014
q-bio.BM cond-mat.soft
null
The Poland-Scheraga (PS) model for the helix-coil transition of DNA considers the statistical mechanics of the binding (or hybridization) of two complementary strands of DNA of equal length, with the restriction that only bases with the same index along the strands are allowed to bind. In this paper, we extend this model by relaxing these constraints: We propose a generalization of the PS model which allows for the binding of two strands of unequal lengths $N_{1}$ and $N_{2}$ with unrelated sequences. We study in particular (i) the effect of mismatches on the hybridization of complementary strands and (ii) the hybridization of non-complementary strands (as resulting from point mutations) of unequal lengths $N_{1}$ and $N_{2}$. The use of a Fixman-Freire scheme scales down the computational complexity of our algorithm from $O(N_{1}^{2}N_{2}^{2})$ to $O(N_{1}N_{2})$. The simulation of complementary strands of a few kbp yields results almost identical to the PS model. For short strands of equal or unequal lengths, the binding displays a strong sensitivity to mutations. This model may be relevant to the experimental protocol in DNA microarrays, and more generally to the molecular recognition of DNA fragments. It also provides a physical implementation of sequence alignments.
[ { "created": "Wed, 18 Feb 2004 14:42:10 GMT", "version": "v1" }, { "created": "Mon, 4 Oct 2004 08:25:04 GMT", "version": "v2" } ]
2007-05-23
[ [ "Garel", "Thomas", "" ], [ "Orland", "Henri", "" ] ]
The Poland-Scheraga (PS) model for the helix-coil transition of DNA considers the statistical mechanics of the binding (or hybridization) of two complementary strands of DNA of equal length, with the restriction that only bases with the same index along the strands are allowed to bind. In this paper, we extend this model by relaxing these constraints: We propose a generalization of the PS model which allows for the binding of two strands of unequal lengths $N_{1}$ and $N_{2}$ with unrelated sequences. We study in particular (i) the effect of mismatches on the hybridization of complementary strands and (ii) the hybridization of non-complementary strands (as resulting from point mutations) of unequal lengths $N_{1}$ and $N_{2}$. The use of a Fixman-Freire scheme scales down the computational complexity of our algorithm from $O(N_{1}^{2}N_{2}^{2})$ to $O(N_{1}N_{2})$. The simulation of complementary strands of a few kbp yields results almost identical to the PS model. For short strands of equal or unequal lengths, the binding displays a strong sensitivity to mutations. This model may be relevant to the experimental protocol in DNA microarrays, and more generally to the molecular recognition of DNA fragments. It also provides a physical implementation of sequence alignments.
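The abstract above notes that the generalized PS model reduces to an $O(N_{1}N_{2})$ recursion and "provides a physical implementation of sequence alignments." As a rough illustration of that connection (not the authors' algorithm, which involves the actual Poland-Scheraga loop entropies and the Fixman-Freire multi-exponential approximation), here is a toy alignment-style partition function for two strands of unequal length. All energy parameters (`eps_match`, `eps_mismatch`, `gap`) are invented for the sketch:

```python
import math

# Watson-Crick complements, used to score a candidate base pair.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def toy_hybridization_Z(s1, s2, beta=1.0, eps_match=-1.0,
                        eps_mismatch=0.5, gap=0.7):
    """Toy O(N1*N2) partition function for hybridizing two strands of
    possibly unequal length, structured like a Needleman-Wunsch sum over
    alignments: a diagonal step binds a base pair (Boltzmann-weighted by
    whether the bases are complementary), horizontal/vertical steps leave
    a base unpaired at a gap cost.  This only illustrates why the
    recursion is O(N1*N2); it omits the PS loop-entropy factors."""
    n1, n2 = len(s1), len(s2)
    # Z[i][j]: partition function restricted to the prefixes s1[:i], s2[:j].
    Z = [[0.0] * (n2 + 1) for _ in range(n1 + 1)]
    Z[0][0] = 1.0
    g = math.exp(-beta * gap)  # weight of one unpaired (gapped) base
    for i in range(n1 + 1):
        for j in range(n2 + 1):
            if i == 0 and j == 0:
                continue
            acc = 0.0
            if i > 0 and j > 0:  # bind s1[i-1] with s2[j-1]
                eps = eps_match if COMPLEMENT[s1[i - 1]] == s2[j - 1] \
                    else eps_mismatch
                acc += Z[i - 1][j - 1] * math.exp(-beta * eps)
            if i > 0:            # leave s1[i-1] unpaired
                acc += Z[i - 1][j] * g
            if j > 0:            # leave s2[j-1] unpaired
                acc += Z[i][j - 1] * g
            Z[i][j] = acc
    return Z[n1][n2]
```

With these toy parameters a point mutation lowers the partition function, echoing the abstract's "strong sensitivity to mutations" for short strands: `toy_hybridization_Z("ACGT", "TGCA")` (fully complementary) exceeds `toy_hybridization_Z("ACGT", "TGCC")` (one mutation), and strands of unequal length such as `("ACGT", "TGC")` are handled by the same recursion.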