id: stringlengths (9 - 13)
submitter: stringlengths (4 - 48)
authors: stringlengths (4 - 9.62k)
title: stringlengths (4 - 343)
comments: stringlengths (2 - 480)
journal-ref: stringlengths (9 - 309)
doi: stringlengths (12 - 138)
report-no: stringclasses (277 values)
categories: stringlengths (8 - 87)
license: stringclasses (9 values)
orig_abstract: stringlengths (27 - 3.76k)
versions: listlengths (1 - 15)
update_date: stringlengths (10 - 10)
authors_parsed: listlengths (1 - 147)
abstract: stringlengths (24 - 3.75k)
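The column statistics above can be read as a record schema. A minimal sketch of validating one record against that schema (the dataset's name and location are not given in this file, so the record here is a hypothetical dict built from the field names and the first row below):

```python
# Hypothetical schema check for the arXiv-metadata records listed below.
# Field names come from the column listing above; the sample values are
# abridged from the first record shown in this file.
EXPECTED_FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "versions", "update_date", "authors_parsed", "abstract",
]

def validate(record: dict) -> list:
    """Return the list of expected fields missing from `record`."""
    return [f for f in EXPECTED_FIELDS if f not in record]

sample = {f: None for f in EXPECTED_FIELDS}
sample.update({"id": "1706.01249", "categories": "q-bio.NC"})
assert validate(sample) == []   # all 15 columns present
```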
1706.01249
Lior Noy
Yuval Hart, Avraham E Mayo, Ruth Mayo, Liron Rozenkrantz, Avichai Tendler, Uri Alon and Lior Noy
Creative Foraging: A Quantitative Paradigm for Studying Creative Exploration
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Creative exploration is central to science, art and cognitive development. However, research on creative exploration is limited by a lack of high-resolution automated paradigms. To address this, we present such an automated paradigm, the creative foraging game, in which people search for novel and valuable solutions in a large and well-defined space made of all possible shapes made of ten connected squares. Players discovered shape categories such as digits, letters, and airplanes. They exploited each category, then dropped it to explore once again, and so on. Aligned with a prediction of optimal foraging theory (OFT) prediction, during exploration phases, people moved along meandering paths that are about three times longer than the minimal paths between shapes, when exploiting a category of related shapes, they moved along the minimal paths. The moment of discovery of a new category was usually done at a nonprototypical and ambiguous shape, which can serve as an experimental proxy for creative leaps. People showed individual differences in their search patterns, along a continuum between two strategies: a mercurial quick-to-discover/quick-to-drop strategy and a thorough slow-to-discover/slow-to-drop strategy. Contrary to optimal foraging theory, players leave exploitation to explore again far before categories are depleted. This paradigm opens the way for automated high-resolution study of creative exploration.
[ { "created": "Mon, 5 Jun 2017 09:26:23 GMT", "version": "v1" } ]
2017-06-06
[ [ "Hart", "Yuval", "" ], [ "Mayo", "Avraham E", "" ], [ "Mayo", "Ruth", "" ], [ "Rozenkrantz", "Liron", "" ], [ "Tendler", "Avichai", "" ], [ "Alon", "Uri", "" ], [ "Noy", "Lior", "" ] ]
Creative exploration is central to science, art and cognitive development. However, research on creative exploration is limited by a lack of high-resolution automated paradigms. To address this, we present such an automated paradigm, the creative foraging game, in which people search for novel and valuable solutions in a large and well-defined space made of all possible shapes built from ten connected squares. Players discovered shape categories such as digits, letters, and airplanes. They exploited each category, then dropped it to explore once again, and so on. Aligned with a prediction of optimal foraging theory (OFT), during exploration phases people moved along meandering paths about three times longer than the minimal paths between shapes; when exploiting a category of related shapes, they moved along the minimal paths. The discovery of a new category usually occurred at a nonprototypical and ambiguous shape, which can serve as an experimental proxy for creative leaps. People showed individual differences in their search patterns, along a continuum between two strategies: a mercurial quick-to-discover/quick-to-drop strategy and a thorough slow-to-discover/slow-to-drop strategy. Contrary to optimal foraging theory, players leave exploitation to explore again well before categories are depleted. This paradigm opens the way for automated high-resolution study of creative exploration.
1907.00950
Romuald A. Janik
Romuald A. Janik
Explaining the Human Visual Brain Challenge 2019 -- receptive fields and surrogate features
6 pages, 5 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper I review the submission to the Explaining the Human Visual Brain Challenge 2019 in both the fMRI and MEG tracks. The goal was to construct neural network features which generate the so-called representational dissimilarity matrix (RDM) which is most similar to the one extracted from fMRI and MEG data upon viewing a set of images. I review exploring the optimal granularity of the receptive field, a construction of intermediate surrogate features using Multidimensional Scaling and modelling them using neural network features. I also point out some peculiarities of the RDM construction which have to be taken into account.
[ { "created": "Mon, 1 Jul 2019 17:39:48 GMT", "version": "v1" } ]
2019-07-02
[ [ "Janik", "Romuald A.", "" ] ]
In this paper I review the submission to the Explaining the Human Visual Brain Challenge 2019 in both the fMRI and MEG tracks. The goal was to construct neural network features which generate the so-called representational dissimilarity matrix (RDM) which is most similar to the one extracted from fMRI and MEG data upon viewing a set of images. I review the exploration of the optimal granularity of the receptive field, the construction of intermediate surrogate features using Multidimensional Scaling, and their modelling using neural network features. I also point out some peculiarities of the RDM construction which have to be taken into account.
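As a concrete illustration of the RDM construction mentioned in the abstract above, here is a minimal sketch (not the author's code; the feature vectors are random stand-ins for network activations):

```python
import numpy as np

# Toy representational dissimilarity matrix (RDM): entry (i, j) is
# 1 - Pearson correlation between the feature vectors of images i and j.
rng = np.random.default_rng(0)
features = rng.normal(size=(5, 100))   # 5 images x 100 hypothetical features

def rdm(feats: np.ndarray) -> np.ndarray:
    return 1.0 - np.corrcoef(feats)

R = rdm(features)
assert np.allclose(np.diag(R), 0.0)    # an image is identical to itself
assert np.allclose(R, R.T)             # dissimilarity is symmetric
```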
1004.3951
Christian Mulder PhD
Christian Mulder, A. Jan Hendriks
Scaling Population Cycles of Herbivores and Carnivores
This research was partly supported by a Research Network Programme of the European Science Foundation on body size and ecosystem dynamics (SIZEMIC)
null
null
null
q-bio.QM physics.bio-ph q-bio.PE
http://creativecommons.org/licenses/by/3.0/
Periodicity in population dynamics is a fundamental issue. In addition to current species-specific analyses, allometry facilitates understanding of limit cycles amongst different species. So far, body-size regressions have been derived for the oscillation period of the population densities of warm-blooded species, in particular herbivores. Here, we extend the allometric analysis to other clades, allowing for a comparison between the obtained slopes and intercepts. The oscillation periods were derived from databases and original studies to cover a broad range of conditions and species. Then, values were related to specific body size by regression analysis. For different groups of herbivorous species, the oscillation period increased as a function of individual mass as a power law with exponents of 0.11-0.27. The intercepts of the resulting linear regressions indicated that cycle times for equally-sized species increased from homeotherms up to invertebrates. Overall, cycle times for predators did not scale to body size. Implications for these differences were addressed in the light of intra- and interspecific delays.
[ { "created": "Thu, 22 Apr 2010 15:55:30 GMT", "version": "v1" } ]
2010-04-23
[ [ "Mulder", "Christian", "" ], [ "Hendriks", "A. Jan", "" ] ]
Periodicity in population dynamics is a fundamental issue. In addition to current species-specific analyses, allometry facilitates understanding of limit cycles amongst different species. So far, body-size regressions have been derived for the oscillation period of the population densities of warm-blooded species, in particular herbivores. Here, we extend the allometric analysis to other clades, allowing for a comparison between the obtained slopes and intercepts. The oscillation periods were derived from databases and original studies to cover a broad range of conditions and species. Then, values were related to specific body size by regression analysis. For different groups of herbivorous species, the oscillation period increased as a function of individual mass as a power law with exponents of 0.11-0.27. The intercepts of the resulting linear regressions indicated that cycle times for equally-sized species increased from homeotherms up to invertebrates. Overall, cycle times for predators did not scale to body size. Implications for these differences were addressed in the light of intra- and interspecific delays.
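The body-size regressions described above are standard log-log power-law fits. A minimal sketch with synthetic data (the exponent 0.2 and the noise level are illustrative choices, not values taken from the paper):

```python
import numpy as np

# Fit a power law T = a * M**b by linear regression in log-log space,
# as is standard for allometric scaling of oscillation period T vs mass M.
rng = np.random.default_rng(1)
mass = np.logspace(-3, 3, 50)                               # body mass, arbitrary units
period = 2.0 * mass**0.2 * np.exp(rng.normal(0, 0.05, 50))  # noisy power law

b, log_a = np.polyfit(np.log(mass), np.log(period), 1)      # slope, intercept
assert abs(b - 0.2) < 0.05   # recovered exponent close to the true 0.2
```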
q-bio/0510013
Yury A. Koksharov
Olga A. Koksharova, Johan Klint, and Ulla Rasmussen
The protein map of Synechococcus sp. PCC 7942 - the first overlook
null
null
null
null
q-bio.GN
null
The unicellular cyanobacterium Synechococcus PCC 7942 has been used as a model organism for studies of prokaryotic circadian rhythms, carbon-concentrating mechanisms, response to a variety of nutrient and environmental stresses, and cell division. This paper presents the results of the first proteomic exploratory study of Synechococcus PCC 7942. The proteome was analyzed using two-dimensional gel electrophoresis followed by MALDI-TOF mass spectroscopy, and database searching. Of 140 analyzed protein spots, 110 were successfully identified as 62 different proteins, many of which occurred as multiple spots on the gel. The identified proteins were organized into 18 different functional categories reflecting the major metabolic and cellular processes occurring in the cyanobacterial cells in the exponential growth phase. Among the identified proteins, 14 previously unknown or considered to be hypothetical are here shown to be true gene products in Synechococcus sp. PCC 7942, and may be helpful for annotation of the newly sequenced genome.
[ { "created": "Thu, 6 Oct 2005 11:41:56 GMT", "version": "v1" } ]
2007-05-23
[ [ "Koksharova", "Olga A.", "" ], [ "Klint", "Johan", "" ], [ "Rasmussen", "Ulla", "" ] ]
The unicellular cyanobacterium Synechococcus PCC 7942 has been used as a model organism for studies of prokaryotic circadian rhythms, carbon-concentrating mechanisms, response to a variety of nutrient and environmental stresses, and cell division. This paper presents the results of the first proteomic exploratory study of Synechococcus PCC 7942. The proteome was analyzed using two-dimensional gel electrophoresis followed by MALDI-TOF mass spectroscopy, and database searching. Of 140 analyzed protein spots, 110 were successfully identified as 62 different proteins, many of which occurred as multiple spots on the gel. The identified proteins were organized into 18 different functional categories reflecting the major metabolic and cellular processes occurring in the cyanobacterial cells in the exponential growth phase. Among the identified proteins, 14 previously unknown or considered to be hypothetical are here shown to be true gene products in Synechococcus sp. PCC 7942, and may be helpful for annotation of the newly sequenced genome.
2210.02451
Anne Modat
Alberto Mart\'inez-Ort\'i, Sonia Adam, Giovanni Garippa (UNISS), J\'er\^ome Boissier (IHPE), M Dolores Bargues, Santiago Mas-Coma
Morpho-anatomical characterization of the urogenital schistosmiasis vector Bulinus truncatus (Audouin, 1827) (Heterobranchia : Bulinidae) from Southwestern Europe
null
Journal of Conchology, CONCHOLOGICAL SOC GREAT BRITAIN & IRELAND 2022, 44 (4), pp.355-372
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Urogenital schistosomiasis has been present naturally in the South of Europe since the beginning of the 20 th century and nowadays its presence is also known, at least imported by Sub-Saharan emigrants and tourists, in France, Italy, Portugal and Spain. One of the intermediate hosts of this trematode present in Europe is the bulinid mollusc Bulinus truncatus, non-native species that can be reached to Europe by humans and birds. In order to know this mollusc better, we carried out a morpho-anatomical study, of the shell, the reproductive system, radula, the respiratory organs and pseudobranch of several populations from Italy, France and Spain. Spanish conchological material studied comes from different populations, from material deposited in the "Museo Nacional de Ciencias Naturales" of Madrid and the "Museu de Ci{\`e}ncies Naturals" of Barcelona, as well as from its own material deposited in the "Museu Valenci{\`a} d'Hist{\`o}ria Natural" of Alginet (Valencia). The shell growth in captivity and the estimation of the population age of B. truncatus from El Ejido (Almer{\'i}a, Spain), has also been studied. Finally, the finding of aphallic and euphallic specimens in the different populations of southern Europe studied is presented and taxonomic and ecological data of the genus Bulinus are shown.
[ { "created": "Wed, 5 Oct 2022 12:50:07 GMT", "version": "v1" } ]
2022-10-07
[ [ "Martínez-Ortí", "Alberto", "", "UNISS" ], [ "Adam", "Sonia", "", "UNISS" ], [ "Garippa", "Giovanni", "", "UNISS" ], [ "Boissier", "Jérôme", "", "IHPE" ], [ "Bargues", "M Dolores", "" ], [ "Mas-Coma", "Santiago", "" ] ]
Urogenital schistosomiasis has been naturally present in the South of Europe since the beginning of the 20th century, and its presence is known nowadays, at least as imported by Sub-Saharan emigrants and tourists, in France, Italy, Portugal and Spain. One of the intermediate hosts of this trematode present in Europe is the bulinid mollusc Bulinus truncatus, a non-native species that may have reached Europe via humans and birds. To better characterize this mollusc, we carried out a morpho-anatomical study of the shell, the reproductive system, the radula, the respiratory organs and the pseudobranch of several populations from Italy, France and Spain. The Spanish conchological material studied comes from different populations: material deposited in the "Museo Nacional de Ciencias Naturales" of Madrid and the "Museu de Ciències Naturals" of Barcelona, as well as our own material deposited in the "Museu Valencià d'Història Natural" of Alginet (Valencia). The shell growth in captivity and the estimated population age of B. truncatus from El Ejido (Almería, Spain) have also been studied. Finally, the finding of aphallic and euphallic specimens in the different southern European populations studied is presented, and taxonomic and ecological data on the genus Bulinus are shown.
q-bio/0610022
Rafael F. Pont-Lezica
Leila Feiz (SCSV), Muhammad Irshad (SCSV), Rafael F Pont-Lezica (SCSV), Herv\'e Canut (SCSV), Elisabeth Jamet (SCSV)
Evaluation of cell wall preparations for proteomics: a new procedure for purifying cell walls from Arabidopsis hypocotyls
null
Plant Methods 2 (2006) 10
10.1186/1746-4811-2-10
null
q-bio.GN
null
The ultimate goal of proteomic analysis of a cell compartment should be the exhaustive identification of resident proteins; excluding proteins from other cell compartments. Plant cell walls possess specific difficulties. Several reported procedures to isolate cell walls for proteomic analyses led to the isolation of a high proportion (more than 50%) of predicted intracellular proteins. The rationales of several published procedures to isolate cell walls for proteomics were analyzed, with regard to the bioinformatic-predicted subcellular localization of the identified proteins. A new procedure was developed to prepare cell walls from etiolated hypocotyls of Arabidopsis thaliana. After salt extraction, a high proportion of proteins predicted to be secreted was released (73%), belonging to the same functional classes as proteins identified using previously described protocols. The new cell wall preparation described in this paper gives the lowest proportion of proteins predicted to be intracellular when compared to available protocols. The application of its principles should lead to a more realistic view of the cell wall proteome, at least for the weakly bound CWP extractable by salts. In addition, it offers a clean cell wall preparation for subsequent extraction of strongly bound CWP.
[ { "created": "Thu, 12 Oct 2006 08:22:38 GMT", "version": "v1" }, { "created": "Wed, 18 Oct 2006 14:42:30 GMT", "version": "v2" } ]
2016-08-16
[ [ "Feiz", "Leila", "", "SCSV" ], [ "Irshad", "Muhammad", "", "SCSV" ], [ "Pont-Lezica", "Rafael F", "", "SCSV" ], [ "Canut", "Hervé", "", "SCSV" ], [ "Jamet", "Elisabeth", "", "SCSV" ] ]
The ultimate goal of proteomic analysis of a cell compartment should be the exhaustive identification of resident proteins, excluding proteins from other cell compartments. Plant cell walls present specific difficulties. Several reported procedures to isolate cell walls for proteomic analyses led to the isolation of a high proportion (more than 50%) of predicted intracellular proteins. The rationales of several published procedures to isolate cell walls for proteomics were analyzed, with regard to the bioinformatic-predicted subcellular localization of the identified proteins. A new procedure was developed to prepare cell walls from etiolated hypocotyls of Arabidopsis thaliana. After salt extraction, a high proportion of proteins predicted to be secreted was released (73%), belonging to the same functional classes as proteins identified using previously described protocols. The new cell wall preparation described in this paper gives the lowest proportion of proteins predicted to be intracellular when compared to available protocols. The application of its principles should lead to a more realistic view of the cell wall proteome, at least for the weakly bound CWP extractable by salts. In addition, it offers a clean cell wall preparation for subsequent extraction of strongly bound CWP.
2204.12550
Jose E Amaro
J. E. Amaro
Systematic description of COVID-19 pandemic using exact SIR solutions and Gumbel distributions
null
Nonlinear Dynamics 111, 1947--1969 (2023)
10.1007/s11071-022-07907-4
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
An epidemiological study of deaths is carried out in a dozen countries by analyzing the first wave of the COVID-19 pandemic. These countries are among those most affected by the first wave, i.e. where daily-death data series may closely resemble a solution of the basic SIR equations. The SIR equations are solved parametrically using the proper time as parameter. Some general properties of the SIR solutions are studied such as time-scaling and asymmetry. Additionally, we use approximations to the SIR solutions through Gumbel functions, which present a very similar behavior. The parameters of the SIR model and the Gumbel function are extracted from the data and compared for the different countries. It is found that ten of the selected countries are very well described by the solutions of the SIR model, with a basic reproduction number between 3 and 8.
[ { "created": "Tue, 26 Apr 2022 19:14:35 GMT", "version": "v1" } ]
2023-06-22
[ [ "Amaro", "J. E.", "" ] ]
An epidemiological study of deaths is carried out in a dozen countries by analyzing the first wave of the COVID-19 pandemic. These countries are among those most affected by the first wave, i.e. where daily-death data series may closely resemble a solution of the basic SIR equations. The SIR equations are solved parametrically using the proper time as parameter. Some general properties of the SIR solutions are studied such as time-scaling and asymmetry. Additionally, we use approximations to the SIR solutions through Gumbel functions, which present a very similar behavior. The parameters of the SIR model and the Gumbel function are extracted from the data and compared for the different countries. It is found that ten of the selected countries are very well described by the solutions of the SIR model, with a basic reproduction number between 3 and 8.
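The Gumbel approximation mentioned above can be sketched as a Gumbel probability density scaled by the total number of deaths N (the parameter values below are illustrative, not fitted values from the paper):

```python
import numpy as np

# Gumbel-shaped daily-death curve: d(t) = (N/beta) * exp(-(z + exp(-z))),
# with z = (t - mu)/beta.  The curve peaks at t = mu and integrates to N.
def gumbel_deaths(t, N, mu, beta):
    z = (t - mu) / beta
    return (N / beta) * np.exp(-(z + np.exp(-z)))

t = np.linspace(0.0, 120.0, 1201)                   # days, step 0.1
d = gumbel_deaths(t, N=10000.0, mu=60.0, beta=8.0)
assert abs(t[np.argmax(d)] - 60.0) < 0.2            # peak falls at t = mu
assert abs(d.sum() * 0.1 - 10000.0) < 50.0          # area ~ total deaths N
```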
1505.04774
Bo Li
Banghe Li, Bo Li, Yuefeng Shen
A Much better replacement of the Michaelis-Menten equation and its application
null
null
null
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Michaelis-Menten equation is a basic equation of enzyme kinetics and gives an acceptable approximation of real chemical reaction processes. Analyzing the derivation of this equation yields the fact that its good performance of approximating real reaction processes is due to Michaelis-Menten curve (15). This curve is derived from Quasi-Steady-State Assumption(QSSA), which has been proved always true and called Quasi-Steady-State Law by Banghe Li et al [19]. Here, we found a quartic equation A(S,E)=0 (22), which gives more accurate approximation of the reaction process in two aspects: during the quasi-steady state of a reaction, Michaelis-Menten curve approximates the reaction well, while our quartic equation $A(S,E)=0$ gives better approximation; near the end of the reaction, our equation approaches the end of the reaction with a tangent line same to that of the reaction, while Michaelis-Menten curve does not. In addition, our quartic equation A(S,E)=0 differs to Michaelis-Menten curve less than the order of $1/S^3$ as S approaches $+\infty$. By considering the above merits of A(S,E)=0, we suggest it as a replacement of Michaelis-Menten curve. Intuitively, this new equation is more complex and harder to understand. But, just because its complexity, it provides more information about the rate constants than Michaelis-Menten curve does. Finally, we get a better replacement of the Michaelis-Menten equation by combing A(S,E)=0 and the equation $dP/dt=k_2C(t)$.
[ { "created": "Thu, 14 May 2015 17:33:29 GMT", "version": "v1" } ]
2015-05-19
[ [ "Li", "Banghe", "" ], [ "Li", "Bo", "" ], [ "Shen", "Yuefeng", "" ] ]
The Michaelis-Menten equation is a basic equation of enzyme kinetics and gives an acceptable approximation of real chemical reaction processes. Analyzing the derivation of this equation shows that its good performance in approximating real reaction processes is due to the Michaelis-Menten curve (15). This curve is derived from the Quasi-Steady-State Assumption (QSSA), which has been proved always true and called the Quasi-Steady-State Law by Banghe Li et al [19]. Here, we found a quartic equation A(S,E)=0 (22), which gives a more accurate approximation of the reaction process in two respects: during the quasi-steady state of a reaction, the Michaelis-Menten curve approximates the reaction well, while our quartic equation $A(S,E)=0$ gives a better approximation; near the end of the reaction, our equation approaches the end of the reaction with the same tangent line as the reaction, while the Michaelis-Menten curve does not. In addition, our quartic equation A(S,E)=0 differs from the Michaelis-Menten curve by less than the order of $1/S^3$ as S approaches $+\infty$. Considering the above merits of A(S,E)=0, we suggest it as a replacement for the Michaelis-Menten curve. Intuitively, this new equation is more complex and harder to understand. But precisely because of its complexity, it provides more information about the rate constants than the Michaelis-Menten curve does. Finally, we obtain a better replacement for the Michaelis-Menten equation by combining A(S,E)=0 with the equation $dP/dt=k_2C(t)$.
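For reference, the curve the abstract above proposes to replace is the familiar hyperbolic Michaelis-Menten rate law (this sketch shows only the classical equation, not the paper's quartic A(S,E)=0):

```python
# Classical Michaelis-Menten rate law: v = Vmax * S / (Km + S).
# At S = Km the rate is exactly half-maximal; as S -> infinity, v -> Vmax.
def mm_rate(S: float, Vmax: float, Km: float) -> float:
    return Vmax * S / (Km + S)

assert mm_rate(S=2.0, Vmax=10.0, Km=2.0) == 5.0   # half-maximal at S = Km
assert mm_rate(S=0.0, Vmax=10.0, Km=2.0) == 0.0   # no substrate, no rate
```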
1609.00441
Antonio Rueda-Toicen
Allan A. Zea and Antonio Rueda-Toicen
Characterizing the structure of protein-protein interaction networks
10 pages, 3 figures. Conference: CIMENICS XIII at Caracas, Venezuela, 2016
null
10.13140/RG.2.2.13286.63043
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network theorists have developed methods to characterize the complex interactions in natural phenomena. The structure of the network of interactions between proteins is important in the field of proteomics, and has been subject to intensive research in recent years, as scientists have become increasingly capable and interested in describing the underlying structure of interactions in both normal and pathological biological processes. In this paper, we survey the graph-theoretic characterization of protein-protein interaction networks (PINs) in terms of structural features, and discuss its possible applications in biomedical research. We also perform a brief revision of network theory's classical literature and discuss modern statistical and computational techniques to describe the structure of PINs
[ { "created": "Fri, 2 Sep 2016 01:02:57 GMT", "version": "v1" }, { "created": "Mon, 5 Sep 2016 22:53:36 GMT", "version": "v2" } ]
2016-09-07
[ [ "Zea", "Allan A.", "" ], [ "Rueda-Toicen", "Antonio", "" ] ]
Network theorists have developed methods to characterize the complex interactions in natural phenomena. The structure of the network of interactions between proteins is important in the field of proteomics, and has been subject to intensive research in recent years, as scientists have become increasingly capable of and interested in describing the underlying structure of interactions in both normal and pathological biological processes. In this paper, we survey the graph-theoretic characterization of protein-protein interaction networks (PINs) in terms of structural features, and discuss its possible applications in biomedical research. We also briefly review network theory's classical literature and discuss modern statistical and computational techniques for describing the structure of PINs.
1411.7348
Anand Banerjee
Anand Banerjee, Alexander Berzhkovskii and Ralph Nossal
Efficiency of cellular uptake of nanoparticles via receptor-mediated endocytosis
21 pages, 9 figures
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experiments show that cellular uptake of nanoparticles, via receptor-mediated endocytosis, strongly depends on nanoparticle size. There is an optimal size, approximately 50 nm in diameter, at which cellular uptake is the highest. In addition, there is a maximum size, approximately 200 nm, beyond which uptake via receptor-mediated endocytosis does not occur. By comparing results from different experiments, we found that these sizes weakly depend on the type of cells, nanoparticles, and ligands used in the experiments. Here, we argue that these observations are consequences of the energetics and assembly dynamics of the protein coat that forms on the cytoplasmic side of the outer cell membrane during receptor-mediated endocytosis. Specifically, we show that the energetics of coat formation imposes an upper bound on the size of the nanoparticles that can be internalized, whereas the nanoparticle-size-dependent dynamics of coat assembly results in the optimal nanoparticle size. The weak dependence of the optimal and maximum sizes on cell-nanoparticle-ligand type also follows naturally from our analysis.
[ { "created": "Mon, 27 Oct 2014 17:14:12 GMT", "version": "v1" } ]
2014-11-27
[ [ "Banerjee", "Anand", "" ], [ "Berzhkovskii", "Alexander", "" ], [ "Nossal", "Ralph", "" ] ]
Experiments show that cellular uptake of nanoparticles, via receptor-mediated endocytosis, strongly depends on nanoparticle size. There is an optimal size, approximately 50 nm in diameter, at which cellular uptake is the highest. In addition, there is a maximum size, approximately 200 nm, beyond which uptake via receptor-mediated endocytosis does not occur. By comparing results from different experiments, we found that these sizes weakly depend on the type of cells, nanoparticles, and ligands used in the experiments. Here, we argue that these observations are consequences of the energetics and assembly dynamics of the protein coat that forms on the cytoplasmic side of the outer cell membrane during receptor-mediated endocytosis. Specifically, we show that the energetics of coat formation imposes an upper bound on the size of the nanoparticles that can be internalized, whereas the nanoparticle-size-dependent dynamics of coat assembly results in the optimal nanoparticle size. The weak dependence of the optimal and maximum sizes on cell-nanoparticle-ligand type also follows naturally from our analysis.
q-bio/0611043
Daniel Remondini
D. Remondini, N. Neretti, J. M. Sedivy, C. Franceschi, L. Milanesi, P. Tieri, G. C. Castellani
Networks from gene expression time series: characterization of correlation patterns
10 pages, 3 BMP figures, 1 Table. To appear in Int. J. Bif. Chaos, July 2007, Volume 17, Issue 7
null
10.1142/S0218127407018543
null
q-bio.GN
null
This paper describes characteristic features of networks reconstructed from gene expression time series data. Several null models are considered in order to discriminate between informations embedded in the network that are related to real data, and features that are due to the method used for network reconstruction (time correlation).
[ { "created": "Tue, 14 Nov 2006 15:07:48 GMT", "version": "v1" } ]
2015-06-26
[ [ "Remondini", "D.", "" ], [ "Neretti", "N.", "" ], [ "Sedivy", "J. M.", "" ], [ "Franceschi", "C.", "" ], [ "Milanesi", "L.", "" ], [ "Tieri", "P.", "" ], [ "Castellani", "G. C.", "" ] ]
This paper describes characteristic features of networks reconstructed from gene expression time series data. Several null models are considered in order to discriminate between information embedded in the network that is related to real data, and features that are due to the method used for network reconstruction (time correlation).
1805.12253
Mahdi Imani
Mahdi Imani, Roozbeh Dehghannasiri, Ulisses M. Braga-Neto, Edward R. Dougherty
Sequential Experimental Design for Optimal Structural Intervention in Gene Regulatory Networks Based on the Mean Objective Cost of Uncertainty
null
null
null
null
q-bio.MN cs.SY stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scientists are attempting to use models of ever increasing complexity, especially in medicine, where gene-based diseases such as cancer require better modeling of cell regulation. Complex models suffer from uncertainty and experiments are needed to reduce this uncertainty. Because experiments can be costly and time-consuming it is desirable to determine experiments providing the most useful information. If a sequence of experiments is to be performed, experimental design is needed to determine the order. A classical approach is to maximally reduce the overall uncertainty in the model, meaning maximal entropy reduction. A recently proposed method takes into account both model uncertainty and the translational objective, for instance, optimal structural intervention in gene regulatory networks, where the aim is to alter the regulatory logic to maximally reduce the long-run likelihood of being in a cancerous state. The mean objective cost of uncertainty (MOCU) quantifies uncertainty based on the degree to which model uncertainty affects the objective. Experimental design involves choosing the experiment that yields the greatest reduction in MOCU. This paper introduces finite-horizon dynamic programming for MOCU-based sequential experimental design and compares it to the greedy approach, which selects one experiment at a time without consideration of the full horizon of experiments. A salient aspect of the paper is that it demonstrates the advantage of MOCU-based design over the widely used entropy-based design for both greedy and dynamic-programming strategies and investigates the effect of model conditions on the comparative performances.
[ { "created": "Wed, 30 May 2018 22:53:22 GMT", "version": "v1" } ]
2018-06-01
[ [ "Imani", "Mahdi", "" ], [ "Dehghannasiri", "Roozbeh", "" ], [ "Braga-Neto", "Ulisses M.", "" ], [ "Dougherty", "Edward R.", "" ] ]
Scientists are attempting to use models of ever-increasing complexity, especially in medicine, where gene-based diseases such as cancer require better modeling of cell regulation. Complex models suffer from uncertainty, and experiments are needed to reduce this uncertainty. Because experiments can be costly and time-consuming, it is desirable to determine the experiments providing the most useful information. If a sequence of experiments is to be performed, experimental design is needed to determine the order. A classical approach is to maximally reduce the overall uncertainty in the model, meaning maximal entropy reduction. A recently proposed method takes into account both model uncertainty and the translational objective, for instance, optimal structural intervention in gene regulatory networks, where the aim is to alter the regulatory logic to maximally reduce the long-run likelihood of being in a cancerous state. The mean objective cost of uncertainty (MOCU) quantifies uncertainty based on the degree to which model uncertainty affects the objective. Experimental design involves choosing the experiment that yields the greatest reduction in MOCU. This paper introduces finite-horizon dynamic programming for MOCU-based sequential experimental design and compares it to the greedy approach, which selects one experiment at a time without consideration of the full horizon of experiments. A salient aspect of the paper is that it demonstrates the advantage of MOCU-based design over the widely used entropy-based design for both greedy and dynamic-programming strategies and investigates the effect of model conditions on the comparative performances.
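The MOCU quantity described above can be made concrete with a toy uncertainty class (two candidate network models, two interventions; the cost numbers and the uniform prior are invented for illustration, not taken from the paper):

```python
import numpy as np

# Mean objective cost of uncertainty (MOCU):
#   MOCU = E_theta[ cost(theta, a_robust) - cost(theta, a_opt(theta)) ]
# where a_robust minimizes the *expected* cost over the uncertainty class.
cost = np.array([[0.2, 0.6],     # model 0: cost of intervention 0, 1
                 [0.7, 0.4]])    # model 1: cost of intervention 0, 1
p = np.array([0.5, 0.5])         # prior over the two candidate models

a_robust = int(np.argmin(p @ cost))            # best intervention on average
mocu = float(p @ (cost[:, a_robust] - cost.min(axis=1)))
assert mocu >= 0.0               # uncertainty never reduces the expected cost
```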
1903.10968
Élie Besserer-Offroy Ph.D.
David St-Pierre, Jérôme Cabana, Brian J. Holleran, Élie Besserer-Offroy, Emanuel Escher, Gaétan Guillemette, Pierre Lavigne, and Richard Leduc
Angiotensin II cyclic analogs as tools to investigate AT1R biased signaling mechanisms
This is the preprint version of the following article: St-Pierre D, et al. (2018), Biochem Pharmacol. doi: 10.1016/j.bcp.2018.04.021, which has been accepted and published in final form at https://www.sciencedirect.com/science/article/pii/S0006295218301643. Supplementary information are freely available at doi: 10.6084/m9.figshare.6108440
St-Pierre D, et al. (2018), Biochem Pharmacol. 154:104-17
10.1016/j.bcp.2018.04.021
null
q-bio.MN q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
G protein-coupled receptors (GPCRs) produce pleiotropic effects by their capacity to engage numerous signaling pathways once activated. Functional selectivity (also called biased signaling), where specific compounds can bring GPCRs to adopt conformations that enable selective receptor coupling to distinct signaling pathways, continues to be extensively investigated. However, an important but often overlooked aspect of functional selectivity is the capability of ligands such as angiotensin II (AngII) to adopt specific conformations that may preferentially bind to selective GPCR structures. Understanding both receptor and ligand conformation is of the utmost importance for the design of new drugs targeting GPCRs. In this study, we examined the properties of AngII cyclic analogs to impart biased agonism on the angiotensin type 1 receptor (AT1R). Positions 3 and 5 of AngII were substituted with cysteine and homocysteine residues ([Sar1Hcy3,5]AngII, [Sar1Cys3Hcy5]AngII and [Sar1Cys3,5]AngII) and the resulting analogs were evaluated for their capacity to activate the Gq/11, G12, Gi2, Gi3, Gz, ERK and β-arrestin (βarr) signaling pathways via AT1R. Interestingly, [Sar1Hcy3,5]AngII exhibited potency and full efficacy on all pathways tested with the exception of the Gq pathway. Molecular dynamics simulations showed that the energy barrier associated with the insertion of residue Phe8 of AngII within the hydrophobic core of AT1R, associated with Gq/11 activation, is increased with [Sar1Hcy3,5]AngII. These results suggest that constraining the movements of molecular determinants within a given ligand by introducing cyclic structures may lead to the generation of novel ligands providing more efficient biased agonism.
[ { "created": "Tue, 26 Mar 2019 15:52:25 GMT", "version": "v1" } ]
2019-03-27
[ [ "St-Pierre", "David", "" ], [ "Cabana", "Jérôme", "" ], [ "Holleran", "Brian J.", "" ], [ "Besserer-Offroy", "Élie", "" ], [ "Escher", "Emanuel", "" ], [ "Guillemette", "Gaétan", "" ], [ "Lavigne", "Pierre", "" ], [ "Leduc", "Richard", "" ] ]
G protein-coupled receptors (GPCRs) produce pleiotropic effects by their capacity to engage numerous signaling pathways once activated. Functional selectivity (also called biased signaling), where specific compounds can bring GPCRs to adopt conformations that enable selective receptor coupling to distinct signaling pathways, continues to be extensively investigated. However, an important but often overlooked aspect of functional selectivity is the capability of ligands such as angiotensin II (AngII) to adopt specific conformations that may preferentially bind to selective GPCR structures. Understanding both receptor and ligand conformation is of the utmost importance for the design of new drugs targeting GPCRs. In this study, we examined the properties of AngII cyclic analogs to impart biased agonism on the angiotensin type 1 receptor (AT1R). Positions 3 and 5 of AngII were substituted with cysteine and homocysteine residues ([Sar1Hcy3,5]AngII, [Sar1Cys3Hcy5]AngII and [Sar1Cys3,5]AngII) and the resulting analogs were evaluated for their capacity to activate the Gq/11, G12, Gi2, Gi3, Gz, ERK and β-arrestin (βarr) signaling pathways via AT1R. Interestingly, [Sar1Hcy3,5]AngII exhibited potency and full efficacy on all pathways tested with the exception of the Gq pathway. Molecular dynamics simulations showed that the energy barrier associated with the insertion of residue Phe8 of AngII within the hydrophobic core of AT1R, associated with Gq/11 activation, is increased with [Sar1Hcy3,5]AngII. These results suggest that constraining the movements of molecular determinants within a given ligand by introducing cyclic structures may lead to the generation of novel ligands providing more efficient biased agonism.
1711.09560
Ivan Sudakov
Sergey A. Vakulenko, Ivan Sudakov, and Luke Mander
Minor climatic fluctuations lead to species extinction in a conceptual ecosystem model
10 pages, 2 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The extinction of species is a core process that affects the diversity of life on Earth. One way of investigating the causes and consequences of extinctions is to build conceptual ecological models, and to use the dynamical outcomes of such models to provide quantitative formalization of changes to Earth's biosphere. In this paper we propose and study a conceptual resource model that describes a simple and easily understandable mechanism for resource competition, generalizes the well-known Huisman and Weissing model, and takes into account species self-regulation, extinctions, and time dependence of resources. We use analytical investigations and numerical simulations to study the dynamics of our model under chaotic and periodic climate oscillations, and show that the stochastic dynamics of our model exhibit strong dependence on initial parameters. We also demonstrate that extinctions in our model are inevitable if an ecosystem has the maximal possible biodiversity and uses the maximal amount of resources. Our conceptual modeling provides theoretical support for suggestions that non-linear processes were important during major extinction events in Earth history.
[ { "created": "Mon, 27 Nov 2017 06:46:44 GMT", "version": "v1" } ]
2017-11-28
[ [ "Vakulenko", "Sergey A.", "" ], [ "Sudakov", "Ivan", "" ], [ "Mander", "Luke", "" ] ]
The extinction of species is a core process that affects the diversity of life on Earth. One way of investigating the causes and consequences of extinctions is to build conceptual ecological models, and to use the dynamical outcomes of such models to provide quantitative formalization of changes to Earth's biosphere. In this paper we propose and study a conceptual resource model that describes a simple and easily understandable mechanism for resource competition, generalizes the well-known Huisman and Weissing model, and takes into account species self-regulation, extinctions, and time dependence of resources. We use analytical investigations and numerical simulations to study the dynamics of our model under chaotic and periodic climate oscillations, and show that the stochastic dynamics of our model exhibit strong dependence on initial parameters. We also demonstrate that extinctions in our model are inevitable if an ecosystem has the maximal possible biodiversity and uses the maximal amount of resources. Our conceptual modeling provides theoretical support for suggestions that non-linear processes were important during major extinction events in Earth history.
1401.5383
Rayan Chikhi
Rayan Chikhi, Antoine Limasset, Shaun Jackman, Jared Simpson and Paul Medvedev
On the representation of de Bruijn graphs
Journal version (JCB). A preliminary version of this article was published in the proceedings of RECOMB 2014
null
null
null
q-bio.QM cs.DS q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The de Bruijn graph plays an important role in bioinformatics, especially in the context of de novo assembly. However, the representation of the de Bruijn graph in memory is a computational bottleneck for many assemblers. Recent papers proposed a navigational data structure approach in order to improve memory usage. We prove several theoretical space lower bounds to show the limitation of these types of approaches. We further design and implement a general data structure (DBGFM) and demonstrate its use on a human whole-genome dataset, achieving space usage of 1.5 GB and a 46% improvement over previous approaches. As part of DBGFM, we develop the notion of frequency-based minimizers and show how it can be used to enumerate all maximal simple paths of the de Bruijn graph using only 43 MB of memory. Finally, we demonstrate that our approach can be integrated into an existing assembler by modifying the ABySS software to use DBGFM.
[ { "created": "Tue, 21 Jan 2014 16:55:02 GMT", "version": "v1" }, { "created": "Wed, 22 Jan 2014 16:53:37 GMT", "version": "v2" }, { "created": "Fri, 14 Feb 2014 22:55:09 GMT", "version": "v3" }, { "created": "Mon, 6 Oct 2014 12:39:56 GMT", "version": "v4" } ]
2014-10-07
[ [ "Chikhi", "Rayan", "" ], [ "Limasset", "Antoine", "" ], [ "Jackman", "Shaun", "" ], [ "Simpson", "Jared", "" ], [ "Medvedev", "Paul", "" ] ]
The de Bruijn graph plays an important role in bioinformatics, especially in the context of de novo assembly. However, the representation of the de Bruijn graph in memory is a computational bottleneck for many assemblers. Recent papers proposed a navigational data structure approach in order to improve memory usage. We prove several theoretical space lower bounds to show the limitation of these types of approaches. We further design and implement a general data structure (DBGFM) and demonstrate its use on a human whole-genome dataset, achieving space usage of 1.5 GB and a 46% improvement over previous approaches. As part of DBGFM, we develop the notion of frequency-based minimizers and show how it can be used to enumerate all maximal simple paths of the de Bruijn graph using only 43 MB of memory. Finally, we demonstrate that our approach can be integrated into an existing assembler by modifying the ABySS software to use DBGFM.
1605.03726
Atsushi Miyauchi
Atsushi Miyauchi, Kazunari Iwamoto, Satya Nanda Vel Arjunan, Koichi Takahashi
pSpatiocyte: A Parallel Stochastic Method for Particle Reaction-Diffusion Systems
19 pages, 17 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational systems biology has provided plenty of insights into cell biology. Early on, the focus was on reaction networks between molecular species. Spatial distribution only began to be considered mostly within the last decade. However, calculations were restricted to small systems because of tremendously high computational workloads. To date, application to a cell of typical size with molecular resolution is still far from realization. In this article, we present a new parallel stochastic method for particle reaction-diffusion systems. The program, called pSpatiocyte, was created bearing in mind reaction networks in biological cells operating in crowded intracellular environments as the primary simulation target. pSpatiocyte employs unique discretization and parallelization algorithms based on a hexagonal close-packed lattice for efficient execution, particularly on large distributed-memory parallel computers. For two-level parallelization, we introduced isolated subdomains and tri-stage lockstep communication at the process level, and voxel-locking techniques at the thread level. We performed a series of parallel runs on RIKEN's K computer. For a fine lattice that had relatively low occupancy, pSpatiocyte achieved a 7686-fold speedup with 663552 cores relative to 64 cores from the viewpoint of strong scaling and exhibited 74% parallel efficiency. As for weak scaling, efficiencies of at least 60% were observed up to 663552 cores. In addition to computational performance, diffusion and reaction rates were validated against theory and another well-validated program and showed good agreement. Lastly, as a preliminary example of real-world applications, we present a calculation of the MAPK model, a typical reaction network motif in cell signaling pathways.
[ { "created": "Thu, 12 May 2016 08:47:45 GMT", "version": "v1" } ]
2016-05-13
[ [ "Miyauchi", "Atsushi", "" ], [ "Iwamoto", "Kazunari", "" ], [ "Arjunan", "Satya Nanda Vel", "" ], [ "Takahashi", "Koichi", "" ] ]
Computational systems biology has provided plenty of insights into cell biology. Early on, the focus was on reaction networks between molecular species. Spatial distribution only began to be considered mostly within the last decade. However, calculations were restricted to small systems because of tremendously high computational workloads. To date, application to a cell of typical size with molecular resolution is still far from realization. In this article, we present a new parallel stochastic method for particle reaction-diffusion systems. The program, called pSpatiocyte, was created bearing in mind reaction networks in biological cells operating in crowded intracellular environments as the primary simulation target. pSpatiocyte employs unique discretization and parallelization algorithms based on a hexagonal close-packed lattice for efficient execution, particularly on large distributed-memory parallel computers. For two-level parallelization, we introduced isolated subdomains and tri-stage lockstep communication at the process level, and voxel-locking techniques at the thread level. We performed a series of parallel runs on RIKEN's K computer. For a fine lattice that had relatively low occupancy, pSpatiocyte achieved a 7686-fold speedup with 663552 cores relative to 64 cores from the viewpoint of strong scaling and exhibited 74% parallel efficiency. As for weak scaling, efficiencies of at least 60% were observed up to 663552 cores. In addition to computational performance, diffusion and reaction rates were validated against theory and another well-validated program and showed good agreement. Lastly, as a preliminary example of real-world applications, we present a calculation of the MAPK model, a typical reaction network motif in cell signaling pathways.
1705.07214
Patrick De Leenheer
Martin Schuster, Eric Foxall, David Finch, Hal Smith, Patrick De Leenheer
Tragedy of the Commons in the Chemostat
23 pages, 4 figures
null
10.1371/journal.pone.0186119
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a proof of principle for the phenomenon of the tragedy of the commons that is at the center of many theories on the evolution of cooperation. We establish the tragedy in the context of a general chemostat model with two species, the cooperator and the cheater. Both species have the same growth rate function and yield constant, but the cooperator allocates a portion of the nutrient uptake towards the production of a public good (the "Commons" in the Tragedy), which is needed to digest the externally supplied nutrient. The cheater, on the other hand, does not produce this public good, and allocates all nutrient uptake towards its own growth. We prove that when the cheater is present initially, both the cooperator and the cheater will eventually go extinct, thereby confirming the occurrence of the tragedy. We also show that without the cheater, the cooperator can survive indefinitely, provided that at least a low level of public good or processed nutrient is available initially. Our results provide a predictive framework for the analysis of cooperator-cheater dynamics in a powerful model system of experimental evolution.
[ { "created": "Fri, 19 May 2017 22:40:20 GMT", "version": "v1" } ]
2018-02-07
[ [ "Schuster", "Martin", "" ], [ "Foxall", "Eric", "" ], [ "Finch", "David", "" ], [ "Smith", "Hal", "" ], [ "De Leenheer", "Patrick", "" ] ]
We present a proof of principle for the phenomenon of the tragedy of the commons that is at the center of many theories on the evolution of cooperation. We establish the tragedy in the context of a general chemostat model with two species, the cooperator and the cheater. Both species have the same growth rate function and yield constant, but the cooperator allocates a portion of the nutrient uptake towards the production of a public good (the "Commons" in the Tragedy), which is needed to digest the externally supplied nutrient. The cheater, on the other hand, does not produce this public good, and allocates all nutrient uptake towards its own growth. We prove that when the cheater is present initially, both the cooperator and the cheater will eventually go extinct, thereby confirming the occurrence of the tragedy. We also show that without the cheater, the cooperator can survive indefinitely, provided that at least a low level of public good or processed nutrient is available initially. Our results provide a predictive framework for the analysis of cooperator-cheater dynamics in a powerful model system of experimental evolution.
1401.0071
Sarabjeet Singh
Sarabjeet Singh, Christopher R. Myers
Outbreak statistics and scaling laws for externally driven epidemics
12 pages, 8 figures
null
10.1103/PhysRevE.89.042108
null
q-bio.PE physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Power-law scalings are ubiquitous in physical phenomena undergoing a continuous phase transition. The classic Susceptible-Infectious-Recovered (SIR) model of epidemics is one such example, where the scaling behavior near a critical point has been studied extensively. In this system the distribution of outbreak sizes scales as $P(n) \sim n^{-3/2}$ at the critical point as the system size $N$ becomes infinite. The finite-size scaling laws for the outbreak size and duration are also well understood and characterized. In this work, we report scaling laws for a model with SIR structure coupled with a constant force of infection per susceptible, akin to a `reservoir forcing'. We find that the statistics of outbreaks in this system are fundamentally different than those in a simple SIR model. Instead of fixed exponents, all scaling laws exhibit tunable exponents parameterized by the dimensionless rate of external forcing. As the external driving rate approaches a critical value, the scale of the average outbreak size converges to that of the maximal size, and above the critical point, the scaling laws bifurcate into two regimes. Whereas a simple SIR process can only exhibit outbreaks of size $\mathcal{O}(N^{1/3})$ and $\mathcal{O}(N)$ depending on whether the system is at or above the epidemic threshold, a driven SIR process can exhibit a richer spectrum of outbreak sizes that scale as $\mathcal{O}(N^{\xi})$ where $\xi \in (0,1] \backslash \{2/3\}$ and $\mathcal{O}((N/\log N)^{2/3})$ at the multi-critical point.
[ { "created": "Tue, 31 Dec 2013 02:06:22 GMT", "version": "v1" } ]
2015-06-18
[ [ "Singh", "Sarabjeet", "" ], [ "Myers", "Christopher R.", "" ] ]
Power-law scalings are ubiquitous in physical phenomena undergoing a continuous phase transition. The classic Susceptible-Infectious-Recovered (SIR) model of epidemics is one such example, where the scaling behavior near a critical point has been studied extensively. In this system the distribution of outbreak sizes scales as $P(n) \sim n^{-3/2}$ at the critical point as the system size $N$ becomes infinite. The finite-size scaling laws for the outbreak size and duration are also well understood and characterized. In this work, we report scaling laws for a model with SIR structure coupled with a constant force of infection per susceptible, akin to a `reservoir forcing'. We find that the statistics of outbreaks in this system are fundamentally different than those in a simple SIR model. Instead of fixed exponents, all scaling laws exhibit tunable exponents parameterized by the dimensionless rate of external forcing. As the external driving rate approaches a critical value, the scale of the average outbreak size converges to that of the maximal size, and above the critical point, the scaling laws bifurcate into two regimes. Whereas a simple SIR process can only exhibit outbreaks of size $\mathcal{O}(N^{1/3})$ and $\mathcal{O}(N)$ depending on whether the system is at or above the epidemic threshold, a driven SIR process can exhibit a richer spectrum of outbreak sizes that scale as $\mathcal{O}(N^{\xi})$ where $\xi \in (0,1] \backslash \{2/3\}$ and $\mathcal{O}((N/\log N)^{2/3})$ at the multi-critical point.
1204.4046
David Lukatsky
Ariel Afek and David B. Lukatsky
Nonspecific Protein-DNA Binding Is Widespread in the Yeast Genome
null
Biophysical Journal 102(8), 1881-1888 (2012)
10.1016/j.bpj.2012.03.044
null
q-bio.BM q-bio.GN q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent genome-wide measurements of binding preferences of ~200 transcription regulators in the vicinity of transcription start sites in yeast have provided a unique insight into the cis-regulatory code of a eukaryotic genome (Venters et al., Mol. Cell 41, 480 (2011)). Here, we show that nonspecific transcription factor (TF)-DNA binding significantly influences binding preferences of the majority of transcription regulators in promoter regions of the yeast genome. We show that promoters of SAGA-dominated and TFIID-dominated genes can be statistically distinguished based on the landscape of nonspecific protein-DNA binding free energy. In particular, we predict that promoters of SAGA-dominated genes possess wider regions of reduced free energy compared to promoters of TFIID-dominated genes. We also show that specific and nonspecific TF-DNA binding are functionally linked and cooperatively influence gene expression in yeast. Our results suggest that nonspecific TF-DNA binding is intrinsically encoded into the yeast genome, and it may play a more important role in transcriptional regulation than previously thought.
[ { "created": "Wed, 18 Apr 2012 11:14:10 GMT", "version": "v1" } ]
2012-04-19
[ [ "Afek", "Ariel", "" ], [ "Lukatsky", "David B.", "" ] ]
Recent genome-wide measurements of binding preferences of ~200 transcription regulators in the vicinity of transcription start sites in yeast have provided a unique insight into the cis-regulatory code of a eukaryotic genome (Venters et al., Mol. Cell 41, 480 (2011)). Here, we show that nonspecific transcription factor (TF)-DNA binding significantly influences binding preferences of the majority of transcription regulators in promoter regions of the yeast genome. We show that promoters of SAGA-dominated and TFIID-dominated genes can be statistically distinguished based on the landscape of nonspecific protein-DNA binding free energy. In particular, we predict that promoters of SAGA-dominated genes possess wider regions of reduced free energy compared to promoters of TFIID-dominated genes. We also show that specific and nonspecific TF-DNA binding are functionally linked and cooperatively influence gene expression in yeast. Our results suggest that nonspecific TF-DNA binding is intrinsically encoded into the yeast genome, and it may play a more important role in transcriptional regulation than previously thought.
q-bio/0702027
Eben Kenah
Eben Kenah, James M. Robins
Network-based analysis of stochastic SIR epidemic models with random and proportionate mixing
40 pages, 9 figures
Journal of Theoretical Biology 249: 706-722, December 2007
10.1016/j.jtbi.2007.09.011
null
q-bio.QM cond-mat.stat-mech math.PR
null
In this paper, we outline the theory of epidemic percolation networks and their use in the analysis of stochastic SIR epidemic models on undirected contact networks. We then show how the same theory can be used to analyze stochastic SIR models with random and proportionate mixing. The epidemic percolation networks for these models are purely directed because undirected edges disappear in the limit of a large population. In a series of simulations, we show that epidemic percolation networks accurately predict the mean outbreak size and probability and final size of an epidemic for a variety of epidemic models in homogeneous and heterogeneous populations. Finally, we show that epidemic percolation networks can be used to re-derive classical results from several different areas of infectious disease epidemiology. In an appendix, we show that an epidemic percolation network can be defined for any time-homogeneous stochastic SIR model in a closed population and prove that the distribution of outbreak sizes given the infection of any given node in the SIR model is identical to the distribution of its out-component sizes in the corresponding probability space of epidemic percolation networks. We conclude that the theory of percolation on semi-directed networks provides a very general framework for the analysis of stochastic SIR models in closed populations.
[ { "created": "Mon, 12 Feb 2007 02:21:23 GMT", "version": "v1" }, { "created": "Fri, 20 Jul 2007 20:46:17 GMT", "version": "v2" }, { "created": "Fri, 11 Jan 2008 04:39:37 GMT", "version": "v3" } ]
2023-10-24
[ [ "Kenah", "Eben", "" ], [ "Robins", "James M.", "" ] ]
In this paper, we outline the theory of epidemic percolation networks and their use in the analysis of stochastic SIR epidemic models on undirected contact networks. We then show how the same theory can be used to analyze stochastic SIR models with random and proportionate mixing. The epidemic percolation networks for these models are purely directed because undirected edges disappear in the limit of a large population. In a series of simulations, we show that epidemic percolation networks accurately predict the mean outbreak size and probability and final size of an epidemic for a variety of epidemic models in homogeneous and heterogeneous populations. Finally, we show that epidemic percolation networks can be used to re-derive classical results from several different areas of infectious disease epidemiology. In an appendix, we show that an epidemic percolation network can be defined for any time-homogeneous stochastic SIR model in a closed population and prove that the distribution of outbreak sizes given the infection of any given node in the SIR model is identical to the distribution of its out-component sizes in the corresponding probability space of epidemic percolation networks. We conclude that the theory of percolation on semi-directed networks provides a very general framework for the analysis of stochastic SIR models in closed populations.
1209.3820
Sean Stromberg
Sean P Stromberg, Rustom Antia, and Ilya Nemenman
Population-expression models of immune response
Revised manuscript with an additional included Supplemental. The Supplemental contains two contrasting derivations of the population-expression PDE formulation, one from a fluid-dynamics perspective using the divergence theorem, the other from a statistical physics/systems-biology perspective using a chemical master equation
Physical biology 10 (3), 035010, 2013
10.1088/1478-3975/10/3/035010
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The immune response to a pathogen has two basic features. The first is the expansion of a few pathogen-specific cells to form a population large enough to control the pathogen. The second is the process of differentiation of cells from an initial naive phenotype to an effector phenotype which controls the pathogen, and subsequently to a memory phenotype that is maintained and responsible for long-term protection. The expansion and the differentiation have been considered largely independently. Changes in cell populations are typically described using ecologically based ordinary differential equation models. In contrast, differentiation of single cells is studied within systems biology and is frequently modeled by considering changes in gene and protein expression in individual cells. Recent advances in experimental systems biology make available for the first time data to allow the coupling of population and high-dimensional expression data of immune cells during infections. Here we describe and develop population-expression models which integrate these two processes into systems biology on the multicellular level. When translated into mathematical equations, these models result in non-conservative, non-local advection-diffusion equations. We describe situations where the population-expression approach can make correct inference from data while previous modeling approaches based on common simplifying assumptions would fail. We also explore how model reduction techniques can be used to build population-expression models, minimizing the complexity of the model while keeping the essential features of the system. While we consider problems in immunology in this paper, we expect population-expression models to be more broadly applicable.
[ { "created": "Tue, 18 Sep 2012 00:46:26 GMT", "version": "v1" }, { "created": "Sat, 8 Dec 2012 05:08:46 GMT", "version": "v2" } ]
2014-02-04
[ [ "Stromberg", "Sean P", "" ], [ "Antia", "Rustom", "" ], [ "Nemenman", "Ilya", "" ] ]
The immune response to a pathogen has two basic features. The first is the expansion of a few pathogen-specific cells to form a population large enough to control the pathogen. The second is the process of differentiation of cells from an initial naive phenotype to an effector phenotype which controls the pathogen, and subsequently to a memory phenotype that is maintained and responsible for long-term protection. The expansion and the differentiation have been considered largely independently. Changes in cell populations are typically described using ecologically based ordinary differential equation models. In contrast, differentiation of single cells is studied within systems biology and is frequently modeled by considering changes in gene and protein expression in individual cells. Recent advances in experimental systems biology make available for the first time data to allow the coupling of population and high-dimensional expression data of immune cells during infections. Here we describe and develop population-expression models which integrate these two processes into systems biology on the multicellular level. When translated into mathematical equations, these models result in non-conservative, non-local advection-diffusion equations. We describe situations where the population-expression approach can make correct inference from data while previous modeling approaches based on common simplifying assumptions would fail. We also explore how model reduction techniques can be used to build population-expression models, minimizing the complexity of the model while keeping the essential features of the system. While we consider problems in immunology in this paper, we expect population-expression models to be more broadly applicable.
1809.05254
Tatsuya Haga
Tatsuya Haga, Tomoki Fukai
Extended temporal association memory by inhibitory Hebbian learning
4 pages, 4 figures
Phys. Rev. Lett. 123, 078101 (2019)
10.1103/PhysRevLett.123.078101
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hebbian learning of excitatory synapses plays a central role in storing activity patterns in associative memory models. Furthermore, interstimulus Hebbian learning associates multiple items in the brain by converting temporal correlation to spatial correlation between attractors. However, growing experimental evidence suggests that learning of inhibitory synapses creates "inhibitory engrams", which presumably balance with the patterns encoded in the excitatory network. Controlling inhibitory engrams may modify the behavior of associative memory in neural networks, but the consequence of such control has not been theoretically understood. Noting that Hebbian learning of inhibitory synapses yields an anti-Hebbian effect, we show that the combination of Hebbian and anti-Hebbian learning can increase the span of temporal association between the correlated attractors. The balance of targeted and global inhibition regulates this span of association in the network. Our results suggest a nontrivial role of anti-Hebbian learning and inhibitory engrams in associative memory.
[ { "created": "Fri, 14 Sep 2018 04:57:16 GMT", "version": "v1" } ]
2019-08-21
[ [ "Haga", "Tatsuya", "" ], [ "Fukai", "Tomoki", "" ] ]
Hebbian learning of excitatory synapses plays a central role in storing activity patterns in associative memory models. Furthermore, interstimulus Hebbian learning associates multiple items in the brain by converting temporal correlation to spatial correlation between attractors. However, growing experimental evidence suggests that learning of inhibitory synapses creates "inhibitory engrams", which presumably balance with the patterns encoded in the excitatory network. Controlling inhibitory engrams may modify the behavior of associative memory in neural networks, but the consequence of such control has not been theoretically understood. Noting that Hebbian learning of inhibitory synapses yields an anti-Hebbian effect, we show that the combination of Hebbian and anti-Hebbian learning can increase the span of temporal association between the correlated attractors. The balance of targeted and global inhibition regulates this span of association in the network. Our results suggest a nontrivial role of anti-Hebbian learning and inhibitory engrams in associative memory.
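The role of Hebbian learning in storing attractor patterns, described in the abstract above, can be illustrated with a minimal Hopfield-style associative memory. This is a generic sketch, not the paper's model: the network size, the two stored patterns, and the update schedule are arbitrary illustrative choices, and no inhibitory or anti-Hebbian learning is included.

```python
# Minimal Hopfield-style associative memory with a Hebbian storage rule.
# Illustrative sketch only; parameters are not taken from the paper above.

def hebbian_weights(patterns, n):
    # Hebbian rule: W[i][j] = (1/n) * sum over patterns of x_i * x_j, zero diagonal
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, sweeps=20):
    # Sequential sign-threshold updates until (hopefully) a stored attractor
    n = len(state)
    s = list(state)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

n = 16
p1 = [1 if i % 2 == 0 else -1 for i in range(n)]   # alternating pattern
p2 = [1 if i < n // 2 else -1 for i in range(n)]   # half-and-half pattern
W = hebbian_weights([p1, p2], n)

cue = list(p1)
cue[0] = -cue[0]            # corrupt one bit of the stored pattern
print(recall(W, cue) == p1)  # the corrupted cue falls back into the attractor
```

Because the two patterns are orthogonal, the single-bit-corrupted cue lies well inside the basin of attraction of `p1` and is recovered exactly.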
1812.03971
Niv DeMalach
Niv DeMalach, Nadav Shnerb and Tadashi Fukami
Alternative states in plant communities driven by a life-history tradeoff and demographic stochasticity
null
American Naturalist 2021
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Life-history tradeoffs among species are major drivers of community assembly. Most studies investigate how tradeoffs promote deterministic coexistence of species. It remains unclear how tradeoffs may instead promote historically contingent exclusion of species, where species dominance is affected by initial abundances, causing alternative community states. Focusing on the establishment-longevity tradeoff, we study the transient dynamics and equilibrium outcomes of competitive interactions in a simulation model of plant community assembly. We show that, in this model, the establishment-longevity tradeoff is a necessary but not sufficient condition for alternative stable equilibria, which also require low fecundity for both species. An analytical approximation of our simulation model demonstrates that alternative stable equilibria are driven by demographic stochasticity in the number of seeds arriving at each establishment site. This site-scale stochasticity is only affected by fecundity and therefore occurs even in infinitely large communities. In many cases where the establishment-longevity tradeoff does not cause alternative stable equilibria, it still decreases the rate of convergence toward the single equilibrium, resulting in decades of transient dynamics that can appear indistinguishable from alternative stable equilibria in empirical studies.
[ { "created": "Mon, 10 Dec 2018 18:44:01 GMT", "version": "v1" }, { "created": "Fri, 18 Jan 2019 22:35:53 GMT", "version": "v2" }, { "created": "Mon, 1 Apr 2019 16:46:21 GMT", "version": "v3" }, { "created": "Thu, 25 Jul 2019 18:43:59 GMT", "version": "v4" }, { "created": "Thu, 29 Oct 2020 15:53:51 GMT", "version": "v5" } ]
2021-03-09
[ [ "DeMalach", "Niv", "" ], [ "Shnerb", "Nadav", "" ], [ "Fukami", "Tadashi", "" ] ]
Life-history tradeoffs among species are major drivers of community assembly. Most studies investigate how tradeoffs promote deterministic coexistence of species. It remains unclear how tradeoffs may instead promote historically contingent exclusion of species, where species dominance is affected by initial abundances, causing alternative community states. Focusing on the establishment-longevity tradeoff, we study the transient dynamics and equilibrium outcomes of competitive interactions in a simulation model of plant community assembly. We show that, in this model, the establishment-longevity tradeoff is a necessary but not sufficient condition for alternative stable equilibria, which also require low fecundity for both species. An analytical approximation of our simulation model demonstrates that alternative stable equilibria are driven by demographic stochasticity in the number of seeds arriving at each establishment site. This site-scale stochasticity is only affected by fecundity and therefore occurs even in infinitely large communities. In many cases where the establishment-longevity tradeoff does not cause alternative stable equilibria, it still decreases the rate of convergence toward the single equilibrium, resulting in decades of transient dynamics that can appear indistinguishable from alternative stable equilibria in empirical studies.
2212.10595
Fan Zhang
Fan Zhang
Memory recall by controlling chaos
6 pages
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-sa/4.0/
By incorporating feedback loops that engender amplification and damping, so that output is not proportional to input, biological neural networks become highly nonlinear and thus very likely chaotic in nature. Research in control theory reveals that strange attractors can be approximated by collections of cycles and collapsed into a more coherent state centered on one of them if we exert control. We speculate that human memories are encoded by such cycles and can be retrieved once sensory or virtual cues, acting as references, enable feedback controls that nucleate the otherwise chaotic wandering mind.
[ { "created": "Thu, 10 Nov 2022 08:09:41 GMT", "version": "v1" } ]
2022-12-22
[ [ "Zhang", "Fan", "" ] ]
By incorporating feedback loops that engender amplification and damping, so that output is not proportional to input, biological neural networks become highly nonlinear and thus very likely chaotic in nature. Research in control theory reveals that strange attractors can be approximated by collections of cycles and collapsed into a more coherent state centered on one of them if we exert control. We speculate that human memories are encoded by such cycles and can be retrieved once sensory or virtual cues, acting as references, enable feedback controls that nucleate the otherwise chaotic wandering mind.
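The idea of a reference cue collapsing chaotic dynamics onto a cycle, speculated on in the abstract above, can be illustrated with a toy feedback control of the logistic map. This is a generic sketch, not the paper's scheme: the logistic map, the blend gain K, and the choice of the unstable fixed point as the "cycle" are all illustrative assumptions.

```python
# Toy illustration: a simple proportional blend toward a reference
# stabilizes an otherwise chaotic orbit of the logistic map onto its
# unstable fixed point x* = 1 - 1/r. Parameters are illustrative only.

def logistic(x, r=3.9):
    return r * x * (1.0 - x)

def iterate(x0, n, r=3.9, K=0.0):
    # K = 0: free chaotic dynamics.
    # K > 0: each step is nudged toward the reference x* (the "cue").
    x_star = 1.0 - 1.0 / r
    x = x0
    for _ in range(n):
        x = (1.0 - K) * logistic(x, r) + K * x_star
    return x

x_free = iterate(0.3, 500)          # wanders chaotically within [0, 1]
x_ctrl = iterate(0.3, 500, K=0.7)   # pinned onto the unstable fixed point
print(x_free, x_ctrl)
```

With K = 0.7 the controlled map is a contraction near x*, so the orbit converges to the fixed point that is unstable (and never settled on) in the free dynamics.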
2301.08742
Mahendra Samarawickrama
Mahendra Samarawickrama
Unifying Consciousness and Time to Enhance Artificial Intelligence
This discussion paper has been submitted to Cognitive Neuroscience of Routledge, part of the Taylor & Francis publications
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by/4.0/
Consciousness is a sequential process of awareness which can focus on one piece of information at a time. This process of awareness experiences causation, which underpins the notion of time, while it interplays with matter and energy, forming reality. The study of Consciousness, time and reality is complex and evolving rapidly across many fields, including metaphysics and fundamental physics. Reality composes patterns in human Consciousness in response to the regularities in nature. These regularities could be physical (e.g., astronomical, environmental), biological, chemical, mental, social, etc. The patterns that emerged in Consciousness were correlated with the environment, life and social behaviours, followed by constructed frameworks, systems and structures. These complex constructs evolved as cultures, customs, norms and values, which created a diverse society. In the evolution of responsible AI, it is important to be attuned to the evolved cultural, ethical and moral values through Consciousness. This requires, as advocated here, the design of self-learning AI that is aware of time perception and human ethics.
[ { "created": "Tue, 10 Jan 2023 11:15:41 GMT", "version": "v1" } ]
2023-01-24
[ [ "Samarawickrama", "Mahendra", "" ] ]
Consciousness is a sequential process of awareness which can focus on one piece of information at a time. This process of awareness experiences causation, which underpins the notion of time, while it interplays with matter and energy, forming reality. The study of Consciousness, time and reality is complex and evolving rapidly across many fields, including metaphysics and fundamental physics. Reality composes patterns in human Consciousness in response to the regularities in nature. These regularities could be physical (e.g., astronomical, environmental), biological, chemical, mental, social, etc. The patterns that emerged in Consciousness were correlated with the environment, life and social behaviours, followed by constructed frameworks, systems and structures. These complex constructs evolved as cultures, customs, norms and values, which created a diverse society. In the evolution of responsible AI, it is important to be attuned to the evolved cultural, ethical and moral values through Consciousness. This requires, as advocated here, the design of self-learning AI that is aware of time perception and human ethics.
1303.1374
Reinhard Bürger
Ada Akerman, Reinhard Bürger
The consequences of gene flow for local adaptation and differentiation: A two-locus two-deme model
null
J. Math. Biol. 68, 1135-1198 (2014)
10.1007/s00285-013-0660-z
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a population subdivided into two demes connected by migration in which selection acts in opposite directions. We explore the effects of recombination and migration on the maintenance of multilocus polymorphism, on local adaptation, and on differentiation by employing a deterministic model with genic selection on two linked diallelic loci (i.e., no dominance or epistasis). For the following cases, we characterize explicitly the possible equilibrium configurations: weak, strong, highly asymmetric, and super-symmetric migration, no or weak recombination, and independent or strongly recombining loci. For independent loci (linkage equilibrium) and for completely linked loci, we derive the possible bifurcation patterns as functions of the total migration rate, assuming all other parameters are fixed but arbitrary. For these and other cases, we determine analytically the maximum migration rate below which a stable fully polymorphic equilibrium exists. In this case, differentiation and local adaptation are maintained. Their degree is quantified by a new multilocus version of $F_{ST}$ and by the migration load, respectively. In addition, we investigate the invasion conditions of locally beneficial mutants and show that linkage to a locus that is already in migration-selection balance facilitates invasion. Hence, loci of much smaller effect can invade than predicted by one-locus theory if linkage is sufficiently tight. We study how this minimum amount of linkage admitting invasion depends on the migration pattern. This suggests the emergence of clusters of locally beneficial mutations, which may form 'genomic islands of divergence'. Finally, the influence of linkage and two-way migration on the effective migration rate at a linked neutral locus is explored. Numerical work complements our analytical results.
[ { "created": "Wed, 6 Mar 2013 16:30:03 GMT", "version": "v1" } ]
2014-07-21
[ [ "Akerman", "Ada", "" ], [ "Bürger", "Reinhard", "" ] ]
We consider a population subdivided into two demes connected by migration in which selection acts in opposite directions. We explore the effects of recombination and migration on the maintenance of multilocus polymorphism, on local adaptation, and on differentiation by employing a deterministic model with genic selection on two linked diallelic loci (i.e., no dominance or epistasis). For the following cases, we characterize explicitly the possible equilibrium configurations: weak, strong, highly asymmetric, and super-symmetric migration, no or weak recombination, and independent or strongly recombining loci. For independent loci (linkage equilibrium) and for completely linked loci, we derive the possible bifurcation patterns as functions of the total migration rate, assuming all other parameters are fixed but arbitrary. For these and other cases, we determine analytically the maximum migration rate below which a stable fully polymorphic equilibrium exists. In this case, differentiation and local adaptation are maintained. Their degree is quantified by a new multilocus version of $F_{ST}$ and by the migration load, respectively. In addition, we investigate the invasion conditions of locally beneficial mutants and show that linkage to a locus that is already in migration-selection balance facilitates invasion. Hence, loci of much smaller effect can invade than predicted by one-locus theory if linkage is sufficiently tight. We study how this minimum amount of linkage admitting invasion depends on the migration pattern. This suggests the emergence of clusters of locally beneficial mutations, which may form 'genomic islands of divergence'. Finally, the influence of linkage and two-way migration on the effective migration rate at a linked neutral locus is explored. Numerical work complements our analytical results.
2307.00932
Tianye Wang
Tianye Wang, Haoxuan Yao, Tai Sing Lee, Jiayi Hong, Yang Li, Hongfei Jiang, Ian Max Andolina, Shiming Tang
A large calcium-imaging dataset reveals a systematic V4 organization for natural scenes
39 pages, 14 figures
null
null
null
q-bio.NC cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The visual system evolved to process natural scenes, yet most of our understanding of the topology and function of visual cortex derives from studies using artificial stimuli. To gain deeper insights into visual processing of natural scenes, we utilized widefield calcium-imaging of primate V4 in response to many natural images, generating a large dataset of columnar-scale responses. We used this dataset to build a digital twin of V4 via deep learning, generating a detailed topographical map of natural image preferences at each cortical position. The map revealed clustered functional domains for specific classes of natural image features. These ranged from surface-related attributes like color and texture to shape-related features such as edges, curvature, and facial features. We validated the model-predicted domains with additional widefield calcium-imaging and single-cell resolution two-photon imaging. Our study illuminates the detailed topological organization and neural codes in V4 that represent natural scenes.
[ { "created": "Mon, 3 Jul 2023 11:13:28 GMT", "version": "v1" }, { "created": "Mon, 24 Jul 2023 01:57:52 GMT", "version": "v2" } ]
2023-07-25
[ [ "Wang", "Tianye", "" ], [ "Yao", "Haoxuan", "" ], [ "Lee", "Tai Sing", "" ], [ "Hong", "Jiayi", "" ], [ "Li", "Yang", "" ], [ "Jiang", "Hongfei", "" ], [ "Andolina", "Ian Max", "" ], [ "Tang", "Shiming", "" ] ]
The visual system evolved to process natural scenes, yet most of our understanding of the topology and function of visual cortex derives from studies using artificial stimuli. To gain deeper insights into visual processing of natural scenes, we utilized widefield calcium-imaging of primate V4 in response to many natural images, generating a large dataset of columnar-scale responses. We used this dataset to build a digital twin of V4 via deep learning, generating a detailed topographical map of natural image preferences at each cortical position. The map revealed clustered functional domains for specific classes of natural image features. These ranged from surface-related attributes like color and texture to shape-related features such as edges, curvature, and facial features. We validated the model-predicted domains with additional widefield calcium-imaging and single-cell resolution two-photon imaging. Our study illuminates the detailed topological organization and neural codes in V4 that represent natural scenes.
2407.18952
Stefanie Winkelmann
Nathalie Wehlitz, Mohsen Sadeghi, Alberto Montefusco, Christof Sch\"utte, Grigorios A. Pavliotis, Stefanie Winkelmann
Approximating particle-based clustering dynamics by stochastic PDEs
null
null
null
null
q-bio.QM math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work proposes stochastic partial differential equations (SPDEs) as a practical tool to replicate clustering effects of more detailed particle-based dynamics. Inspired by membrane-mediated receptor dynamics on cell surfaces, we formulate a stochastic particle-based model for diffusion and pairwise interaction of particles, leading to intriguing clustering phenomena. Employing numerical simulation and cluster detection methods, we explore the approximation of the particle-based clustering dynamics through mean-field approaches. We find that SPDEs successfully reproduce spatiotemporal clustering dynamics, not only in the initial cluster formation period, but also on longer time scales where the successive merging of clusters cannot be tracked by deterministic mean-field models. The computational efficiency of the SPDE approach allows us to generate extensive statistical data for parameter estimation in a simpler model that uses a Markov jump process to capture the temporal evolution of the cluster number.
[ { "created": "Fri, 12 Jul 2024 13:20:06 GMT", "version": "v1" } ]
2024-07-30
[ [ "Wehlitz", "Nathalie", "" ], [ "Sadeghi", "Mohsen", "" ], [ "Montefusco", "Alberto", "" ], [ "Schütte", "Christof", "" ], [ "Pavliotis", "Grigorios A.", "" ], [ "Winkelmann", "Stefanie", "" ] ]
This work proposes stochastic partial differential equations (SPDEs) as a practical tool to replicate clustering effects of more detailed particle-based dynamics. Inspired by membrane-mediated receptor dynamics on cell surfaces, we formulate a stochastic particle-based model for diffusion and pairwise interaction of particles, leading to intriguing clustering phenomena. Employing numerical simulation and cluster detection methods, we explore the approximation of the particle-based clustering dynamics through mean-field approaches. We find that SPDEs successfully reproduce spatiotemporal clustering dynamics, not only in the initial cluster formation period, but also on longer time scales where the successive merging of clusters cannot be tracked by deterministic mean-field models. The computational efficiency of the SPDE approach allows us to generate extensive statistical data for parameter estimation in a simpler model that uses a Markov jump process to capture the temporal evolution of the cluster number.
2011.03759
Vince Grolmusz
Laszlo Keresztes and Evelin Szogi and Balint Varga and Viktor Farkas and Andras Perczel and Vince Grolmusz
The Budapest Amyloid Predictor and its Applications
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The amyloid state of proteins is widely studied, with relevance to neurology, biochemistry, and biotechnology. In contrast with amorphous aggregation, the amyloid state has a well-defined structure, consisting of parallel and anti-parallel $\beta$-sheets in a periodically repeated formation. The understanding of the amyloid state is growing with the development of novel molecular imaging tools, like cryogenic electron microscopy. Sequence-based amyloid predictors have been developed using mostly artificial neural networks (ANNs) as the underlying computational technique. However, from a good neural network-based predictor it is very difficult to identify which attributes of the input amino acid sequence implied the decision of the network. Here we present a Support Vector Machine (SVM)-based predictor for hexapeptides with correctness higher than 84\%, i.e., it is at least as good as the published ANN-based tools. Unlike artificial neural networks, the decisions of SVMs are much easier to analyze, and from a good predictor we can infer rich biochemical knowledge. Availability and Implementation: The Budapest Amyloid Predictor webserver is freely available at https://pitgroup.org/bap.
[ { "created": "Sat, 7 Nov 2020 12:11:26 GMT", "version": "v1" } ]
2020-11-10
[ [ "Keresztes", "Laszlo", "" ], [ "Szogi", "Evelin", "" ], [ "Varga", "Balint", "" ], [ "Farkas", "Viktor", "" ], [ "Perczel", "Andras", "" ], [ "Grolmusz", "Vince", "" ] ]
The amyloid state of proteins is widely studied, with relevance to neurology, biochemistry, and biotechnology. In contrast with amorphous aggregation, the amyloid state has a well-defined structure, consisting of parallel and anti-parallel $\beta$-sheets in a periodically repeated formation. The understanding of the amyloid state is growing with the development of novel molecular imaging tools, like cryogenic electron microscopy. Sequence-based amyloid predictors have been developed using mostly artificial neural networks (ANNs) as the underlying computational technique. However, from a good neural network-based predictor it is very difficult to identify which attributes of the input amino acid sequence implied the decision of the network. Here we present a Support Vector Machine (SVM)-based predictor for hexapeptides with correctness higher than 84\%, i.e., it is at least as good as the published ANN-based tools. Unlike artificial neural networks, the decisions of SVMs are much easier to analyze, and from a good predictor we can infer rich biochemical knowledge. Availability and Implementation: The Budapest Amyloid Predictor webserver is freely available at https://pitgroup.org/bap.
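The interpretability of linear SVMs mentioned in the abstract above can be illustrated with a minimal linear SVM trained by hinge-loss subgradient descent. This is a generic sketch, not the Budapest Amyloid Predictor: the 2-D synthetic points below stand in for real hexapeptide feature encodings, and all names and parameters are illustrative.

```python
# Sketch of a linear SVM trained by hinge-loss subgradient descent on
# synthetic 2-D "amyloid-prone" (+1) vs "non-amyloid" (-1) points.
# The learned weight vector w is directly inspectable, which is the
# interpretability advantage the abstract refers to.
import random

def train_svm(data, epochs=200, lr=0.1, lam=0.01):
    w, b = [0.0, 0.0], 0.0
    rng = random.Random(0)
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            if margin < 1:  # inside the margin: hinge-loss subgradient step
                w[0] += lr * (y * x[0] - lam * w[0])
                w[1] += lr * (y * x[1] - lam * w[1])
                b += lr * y
            else:           # correctly classified with margin: only regularize
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

pos = [([2.0 + 0.1 * i, 2.0 + 0.05 * i], 1) for i in range(10)]
neg = [([-2.0 - 0.1 * i, -2.0 - 0.05 * i], -1) for i in range(10)]
w, b = train_svm(pos + neg)
acc = sum(predict(w, b, x) == y for x, y in pos + neg) / 20
print(w, b, acc)
```

On this cleanly separable toy set the classifier reaches perfect training accuracy, and the signs and magnitudes of `w` show which features drive the decision.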
2205.08902
Àlex Giménez-Romero
Àlex Giménez-Romero, Federico Vazquez, Cristóbal López and Manuel A. Matías
Spatial effects in parasite induced marine diseases of immobile hosts
11 pages, 6 figures, 1 table
R. Soc. open sci. 9, 212023 (2022)
10.1098/rsos.212023
null
q-bio.PE physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Emerging marine infectious diseases pose a substantial threat to marine ecosystems and the conservation of their biodiversity. Compartmental models of epidemic transmission in marine sessile organisms, available only recently, are based on non-spatial descriptions in which space is homogenised and parasite mobility is not explicitly accounted for. However, in realistic scenarios epidemic transmission is conditioned by the spatial distribution of hosts and the parasites' mobility patterns, calling for an explicit description of space. In this work we develop a spatially-explicit individual-based model to study disease transmission by waterborne parasites in sessile marine populations. We investigate the impact of spatial disease transmission through extensive numerical simulations and theoretical analysis. Specifically, the effects of parasite mobility on the epidemic threshold and on the temporal progression of the epidemic are assessed. We show that larger values of pathogen mobility imply more severe epidemics, as the number of infections increases, and shorter time-scales to extinction. An analytical expression for the basic reproduction number of the spatial model is derived as a function of the non-spatial counterpart, which characterises a transition between a disease-free and a propagation phase, in which the disease propagates over a large fraction of the system.
[ { "created": "Wed, 18 May 2022 12:46:20 GMT", "version": "v1" } ]
2022-11-03
[ [ "Giménez-Romero", "Àlex", "" ], [ "Vazquez", "Federico", "" ], [ "López", "Cristóbal", "" ], [ "Matías", "Manuel A.", "" ] ]
Emerging marine infectious diseases pose a substantial threat to marine ecosystems and the conservation of their biodiversity. Compartmental models of epidemic transmission in marine sessile organisms, available only recently, are based on non-spatial descriptions in which space is homogenised and parasite mobility is not explicitly accounted for. However, in realistic scenarios epidemic transmission is conditioned by the spatial distribution of hosts and the parasites' mobility patterns, calling for an explicit description of space. In this work we develop a spatially-explicit individual-based model to study disease transmission by waterborne parasites in sessile marine populations. We investigate the impact of spatial disease transmission through extensive numerical simulations and theoretical analysis. Specifically, the effects of parasite mobility on the epidemic threshold and on the temporal progression of the epidemic are assessed. We show that larger values of pathogen mobility imply more severe epidemics, as the number of infections increases, and shorter time-scales to extinction. An analytical expression for the basic reproduction number of the spatial model is derived as a function of the non-spatial counterpart, which characterises a transition between a disease-free and a propagation phase, in which the disease propagates over a large fraction of the system.
2204.08578
Yannick Roy
Yannick Roy, Jocelyn Faubert
Is the Contralateral Delay Activity (CDA) a robust neural correlate for Visual Working Memory (VWM) tasks? A reproducibility study
null
Psychophysiology, 2022
10.1111/psyp.14180
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Visual working memory (VWM) allows us to actively store, update and manipulate visual information surrounding us. While the underlying neural mechanisms of VWM remain unclear, contralateral delay activity (CDA), a sustained negativity over the hemisphere contralateral to the positions of visual items to be remembered, is often used to study VWM. To investigate if the CDA is a robust neural correlate for VWM tasks, we reproduced eight CDA-related studies with a publicly accessible EEG dataset. We used the raw EEG data from these eight studies and analyzed all of them with the same basic pipeline to extract CDA. We were able to reproduce the results from all the studies and show that with a basic automated EEG pipeline we can extract a clear CDA signal. We share insights from the trends observed across the studies and raise some questions about the CDA decay and the CDA during the recall phase, which, surprisingly, none of the eight studies addressed. Finally, we also provide reproducibility recommendations based on our experience and challenges in reproducing these studies.
[ { "created": "Mon, 18 Apr 2022 22:21:22 GMT", "version": "v1" } ]
2022-09-21
[ [ "Roy", "Yannick", "" ], [ "Faubert", "Jocelyn", "" ] ]
Visual working memory (VWM) allows us to actively store, update and manipulate visual information surrounding us. While the underlying neural mechanisms of VWM remain unclear, contralateral delay activity (CDA), a sustained negativity over the hemisphere contralateral to the positions of visual items to be remembered, is often used to study VWM. To investigate if the CDA is a robust neural correlate for VWM tasks, we reproduced eight CDA-related studies with a publicly accessible EEG dataset. We used the raw EEG data from these eight studies and analyzed all of them with the same basic pipeline to extract CDA. We were able to reproduce the results from all the studies and show that with a basic automated EEG pipeline we can extract a clear CDA signal. We share insights from the trends observed across the studies and raise some questions about the CDA decay and the CDA during the recall phase, which, surprisingly, none of the eight studies addressed. Finally, we also provide reproducibility recommendations based on our experience and challenges in reproducing these studies.
1910.06217
Özgür Gültekin
Çağatay Eskin, Özgür Gültekin
Effect of Harvest on MTE Calculated by Single Step Process for Stochastic Population Model Under Allee Effect
5 pages
null
10.1063/1.5135465
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, we first extend the cubic population model under the Allee effect with a quadratic harvest function that represents the effect of harvesting. Then, using four reaction equations representing the micro-interactions within the population under the influence of demographic noise and harvest, we obtain the mean-field equations containing the effect of the harvest from the solution of the master equation. In this way, a relationship is established between the micro and macro parameters of the population. As a result, we calculate the Mean Time to Extinction (MTE) by using the WKB approximation for single-step processes and observe the effect of harvesting.
[ { "created": "Mon, 14 Oct 2019 15:41:51 GMT", "version": "v1" } ]
2020-01-08
[ [ "Eskin", "Çağatay", "" ], [ "Gültekin", "Özgür", "" ] ]
In this study, we first extend the cubic population model under the Allee effect with a quadratic harvest function that represents the effect of harvesting. Then, using four reaction equations representing the micro-interactions within the population under the influence of demographic noise and harvest, we obtain the mean-field equations containing the effect of the harvest from the solution of the master equation. In this way, a relationship is established between the micro and macro parameters of the population. As a result, we calculate the Mean Time to Extinction (MTE) by using the WKB approximation for single-step processes and observe the effect of harvesting.
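The mean time to extinction for a single-step (birth-death) process, as studied in the abstract above, can also be computed from the exact backward-equation summation; this sketch uses that exact formula rather than the paper's WKB approximation, and the Allee-type rate functions below are illustrative stand-ins, not the paper's cubic model with harvest.

```python
# Exact MTE for a single-step birth-death chain absorbing at n = 0:
#   T_m = sum_{k=1..m} d_k,
#   d_k = sum_{j>=k} (1/mu_j) * prod_{i=k..j-1} (lambda_i / mu_i),
# truncated at n_max (the product decays rapidly for these rates).

def mte(birth, death, m, n_max=400):
    total = 0.0
    for k in range(1, m + 1):
        prod, d_k = 1.0, 0.0
        for j in range(k, n_max):
            d_k += prod / death(j)
            prod *= birth(j) / death(j)
        total += d_k
    return total

# Illustrative rates: per-capita birth suppressed at low n (Allee-like),
# death with density dependence. Parameter values are arbitrary.
r, K, A = 2.0, 20.0, 5.0
birth = lambda n: r * n * n / (n + A)
death = lambda n: n * (1.0 + n / K)
print(mte(birth, death, 1), mte(birth, death, 10))
```

A useful sanity check: with zero births and per-capita death rate 1 (mu_n = n), the formula reduces to the harmonic sum, T_m = 1 + 1/2 + ... + 1/m.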
2006.15162
Vikram Singh
Vikram Singh and Vikram Singh
C19-TraNet: an empirical, global index-case transmission network of SARS-CoV-2
28 pages, 4 figures, 4 tables
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Originating in Wuhan, the novel coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has astonished health-care systems across the globe due to its rapid and simultaneous spread to neighboring and distantly located countries. To gain a systems-level understanding of the role of global transmission routes in the COVID-19 spread, in this study we have developed the first empirical, global, index-case transmission network of SARS-CoV-2, termed C19-TraNet. We manually curated the travel history of country-wise index-cases using government press releases, their official social media handles and online news reports to construct C19-TraNet, a spatio-temporal, sparse, growing network comprising 187 nodes and 199 edges that follows a power-law degree distribution. To model the growing C19-TraNet, a novel stochastic scale free (SSF) algorithm is proposed that accounts for stochastic addition of both nodes and edges at each time step. A peculiar connectivity pattern in C19-TraNet is observed, characterized by a fourth-degree polynomial growth curve, that significantly diverges from the average random connectivity pattern obtained from an ensemble of its 1,000 SSF realizations. Partitioning the C19-TraNet using edge betweenness, it is found that most of the large communities are composed of a heterogeneous mixture of countries belonging to different world regions, suggesting that there are no spatial constraints on the spread of the disease. This work characterizes the superspreaders that have very quickly transported the virus, through multiple transmission routes, to long-range geographical locations along with their local neighborhoods.
[ { "created": "Fri, 26 Jun 2020 18:20:48 GMT", "version": "v1" } ]
2020-06-30
[ [ "Singh", "Vikram", "" ], [ "Singh", "Vikram", "" ] ]
Originating in Wuhan, the novel coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has astonished health-care systems across the globe due to its rapid and simultaneous spread to neighboring and distantly located countries. To gain a systems-level understanding of the role of global transmission routes in the COVID-19 spread, in this study we have developed the first empirical, global, index-case transmission network of SARS-CoV-2, termed C19-TraNet. We manually curated the travel history of country-wise index-cases using government press releases, their official social media handles and online news reports to construct C19-TraNet, a spatio-temporal, sparse, growing network comprising 187 nodes and 199 edges that follows a power-law degree distribution. To model the growing C19-TraNet, a novel stochastic scale free (SSF) algorithm is proposed that accounts for stochastic addition of both nodes and edges at each time step. A peculiar connectivity pattern in C19-TraNet is observed, characterized by a fourth-degree polynomial growth curve, that significantly diverges from the average random connectivity pattern obtained from an ensemble of its 1,000 SSF realizations. Partitioning the C19-TraNet using edge betweenness, it is found that most of the large communities are composed of a heterogeneous mixture of countries belonging to different world regions, suggesting that there are no spatial constraints on the spread of the disease. This work characterizes the superspreaders that have very quickly transported the virus, through multiple transmission routes, to long-range geographical locations along with their local neighborhoods.
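Power-law degree distributions like the one reported for C19-TraNet in the abstract above classically arise from preferential attachment. The sketch below is a minimal Barabási-Albert-style growth process, not the paper's SSF algorithm (which also adds edges stochastically between existing nodes); the node count and seed are illustrative.

```python
# Minimal preferential-attachment growth: each new node attaches to an
# existing node chosen with probability proportional to its degree,
# yielding a few high-degree hubs and many leaves.
import random

def preferential_attachment(n_nodes, seed=0):
    rng = random.Random(seed)
    edges = [(0, 1)]     # seed graph: a single edge
    targets = [0, 1]     # each node listed once per unit of degree
    for new in range(2, n_nodes):
        old = rng.choice(targets)  # degree-proportional choice
        edges.append((new, old))
        targets += [new, old]
    return edges

edges = preferential_attachment(187)   # same node count as C19-TraNet
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
hubs = sum(1 for d in degree.values() if d >= 5)
print(len(edges), hubs)  # sparse tree with a handful of hubs
```

The resulting graph is a tree (186 edges on 187 nodes), so it is sparser than C19-TraNet's 199 edges; the extra inter-node edges of the SSF algorithm would close that gap.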
1910.07414
Jan Karbowski
Jan Karbowski
Metabolic constraints on synaptic learning and memory
brain, synapses, energy cost of learning and memory, synaptic plasticity, model and estimates
Journal of Neurophysiology 122: 1473-1490 (2019)
10.1152/jn.00092.2019
null
q-bio.NC q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dendritic spines, the carriers of long-term memory, occupy a small fraction of cortical space, and yet they are the major consumers of brain metabolic energy. What fraction of this energy goes for synaptic plasticity, correlated with learning and memory? It is estimated here based on neurophysiological and proteomic data for rat brain that, depending on the level of protein phosphorylation, the energy cost of synaptic plasticity constitutes a small fraction of the energy used for fast excitatory synaptic transmission, typically $4.0-11.2 \%$. Next, this study analyzes a metabolic cost of a new learning and its memory trace in relation to the cost of prior memories, using a class of cascade models of synaptic plasticity. It is argued that these models must contain bidirectional cyclic motifs, related to protein phosphorylation, to be compatible with basic thermodynamic principles. For most investigated parameters longer memories generally require proportionally more energy to store. The exception are the parameters controlling the speed of molecular transitions (e.g. ATP driven phosphorylation rate), for which memory lifetime per invested energy can increase progressively for longer memories. Furthermore, in general, a memory trace decouples dynamically from a corresponding synaptic metabolic rate such that the energy expended on a new learning and its memory trace constitutes in most cases only a small fraction of the baseline energy associated with prior memories. Taken together, these empirical and theoretical results suggest a metabolic efficiency of synaptically stored information.
[ { "created": "Wed, 16 Oct 2019 15:30:52 GMT", "version": "v1" } ]
2019-10-17
[ [ "Karbowski", "Jan", "" ] ]
Dendritic spines, the carriers of long-term memory, occupy a small fraction of cortical space, and yet they are the major consumers of brain metabolic energy. What fraction of this energy goes to synaptic plasticity, correlated with learning and memory? It is estimated here, based on neurophysiological and proteomic data for the rat brain, that, depending on the level of protein phosphorylation, the energy cost of synaptic plasticity constitutes a small fraction of the energy used for fast excitatory synaptic transmission, typically $4.0-11.2 \%$. Next, this study analyzes the metabolic cost of new learning and its memory trace in relation to the cost of prior memories, using a class of cascade models of synaptic plasticity. It is argued that these models must contain bidirectional cyclic motifs, related to protein phosphorylation, to be compatible with basic thermodynamic principles. For most investigated parameters, longer memories generally require proportionally more energy to store. The exceptions are the parameters controlling the speed of molecular transitions (e.g. the ATP-driven phosphorylation rate), for which memory lifetime per invested energy can increase progressively for longer memories. Furthermore, in general, a memory trace decouples dynamically from the corresponding synaptic metabolic rate, such that the energy expended on new learning and its memory trace constitutes in most cases only a small fraction of the baseline energy associated with prior memories. Taken together, these empirical and theoretical results suggest a metabolic efficiency of synaptically stored information.
1504.00431
R.K. Brojen Singh
Md. Jahoor Alam and R.K. Brojen Singh
Phase transition in p53 states induced by glucose
null
null
null
null
q-bio.SC q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present p53-MDM2-Glucose model to study spatio-temporal properties of the system induced by glucose. The variation in glucose concentration level triggers the system at different states, namely, oscillation death (stabilized), sustain and damped oscillations which correspond to various cellular states. The transition of these states induced by glucose is phase transition like behaviour. We also found that the intrinsic noise in stochastic system helps the system to stabilize more effectively. Further, the amplitude of $p53$ dynamics with the variation of glucose concentration level follows power law behaviour, $A_s(k)\sim k^\gamma$, where, $\gamma$ is a constant.
[ { "created": "Thu, 2 Apr 2015 02:40:23 GMT", "version": "v1" } ]
2015-04-03
[ [ "Alam", "Md. Jahoor", "" ], [ "Singh", "R. K. Brojen", "" ] ]
We present a p53-MDM2-glucose model to study spatio-temporal properties of the system induced by glucose. Variation in the glucose concentration level drives the system into different states, namely oscillation death (stabilized), sustained, and damped oscillations, which correspond to various cellular states. The transition between these states induced by glucose is a phase-transition-like behaviour. We also found that intrinsic noise in the stochastic system helps the system stabilize more effectively. Further, the amplitude of the $p53$ dynamics as the glucose concentration level varies follows a power-law behaviour, $A_s(k)\sim k^\gamma$, where $\gamma$ is a constant.
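A power law of the form $A_s(k)\sim k^\gamma$ can be checked by a straight-line fit on log-log axes, since $\log A = \gamma \log k + \text{const}$. A minimal least-squares sketch (the data in the test are synthetic, not the paper's):

```python
import math

def fit_power_law(ks, amps):
    """Estimate gamma in A(k) ~ k**gamma by least-squares regression
    of log(A) on log(k); the slope is the power-law exponent."""
    xs = [math.log(k) for k in ks]
    ys = [math.log(a) for a in amps]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

On exact power-law data the recovered slope equals the true exponent up to floating-point error.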
1701.01744
Tolutola Oyetunde
Tolutola Oyetunde, Jeffrey Czajka, Gang Wu, Cynthia Lo, and Yinjie Tang
Metabolite patterns reveal regulatory responses to genetic perturbations
15 pages, 6 figures intended for Nucleic Acids Research
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genetic and environmental perturbation experiments have been used to study microbes in a bid to gain insight into transcriptional regulation, adaptive evolution, and other cellular dynamics. These studies have potential in enabling rational strain design. Unfortunately, experimentally determined intracellular flux distribution are often inconsistent or incomparable due to different experimental conditions and methodologies. Computational strain design relies on constraint-based reconstruction and analysis (COBRA) techniques to predict the effect of gene knockouts such as flux balance analysis (FBA), regulatory on/off minimization(ROOM), minimization of metabolic adjustment (MOMA), relative optimality in metabolic networks (RELATCH). Most of these knock-out prediction methods are based on conserving inherent flux patterns (between wild type and mutant) that are thought to be representative of the cellular regulatory structure. However, it has been recently demonstrated that these methods show poor agreement with experimental data. To improve the fidelity of knockout predictions and subsequent computational strain design, we developed REMEP, a metabolite-centric method. We demonstrate the improved performance of REMEP by comparing the different methods on experimental knockout data of E. coli, and S. cerevisiae grown in batch cultures. REMEP retains most of the features of earlier algorithms but is much more accurate in capturing cellular responses to genetic perturbations. A primary reason for this is that REMEP relies on the assumption that cellular regulatory structure leaves a signature on metabolite patterns and not just flux patterns. REMEP will also prove useful in uncovering novel insights into cellular regulation and control.
[ { "created": "Fri, 6 Jan 2017 19:51:11 GMT", "version": "v1" }, { "created": "Tue, 10 Jan 2017 03:59:21 GMT", "version": "v2" } ]
2017-01-11
[ [ "Oyetunde", "Tolutola", "" ], [ "Czajka", "Jeffrey", "" ], [ "Wu", "Gang", "" ], [ "Lo", "Cynthia", "" ], [ "Tang", "Yinjie", "" ] ]
Genetic and environmental perturbation experiments have been used to study microbes in a bid to gain insight into transcriptional regulation, adaptive evolution, and other cellular dynamics. These studies have the potential to enable rational strain design. Unfortunately, experimentally determined intracellular flux distributions are often inconsistent or incomparable due to different experimental conditions and methodologies. Computational strain design relies on constraint-based reconstruction and analysis (COBRA) techniques, such as flux balance analysis (FBA), regulatory on/off minimization (ROOM), minimization of metabolic adjustment (MOMA), and relative optimality in metabolic networks (RELATCH), to predict the effect of gene knockouts. Most of these knockout prediction methods are based on conserving inherent flux patterns (between wild type and mutant) that are thought to be representative of the cellular regulatory structure. However, it has recently been demonstrated that these methods show poor agreement with experimental data. To improve the fidelity of knockout predictions and subsequent computational strain design, we developed REMEP, a metabolite-centric method. We demonstrate the improved performance of REMEP by comparing the different methods on experimental knockout data of E. coli and S. cerevisiae grown in batch cultures. REMEP retains most of the features of earlier algorithms but is much more accurate in capturing cellular responses to genetic perturbations. A primary reason for this is that REMEP relies on the assumption that the cellular regulatory structure leaves a signature on metabolite patterns, and not just flux patterns. REMEP will also prove useful in uncovering novel insights into cellular regulation and control.
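All the COBRA methods named above share the steady-state constraint S·v = 0: the stoichiometric matrix S times the flux vector v must balance every metabolite. A minimal check of that constraint, using a toy one-metabolite, two-reaction network as an illustrative assumption:

```python
def is_steady_state(S, v, tol=1e-9):
    """Check the core constraint-based-modeling condition S.v = 0:
    the fluxes v must balance every metabolite row of the
    stoichiometric matrix S (lists of lists / list of floats)."""
    return all(
        abs(sum(S[i][j] * v[j] for j in range(len(v)))) < tol
        for i in range(len(S))
    )

# Toy network: one metabolite produced by reaction 1, consumed by reaction 2
S_toy = [[1.0, -1.0]]
```

FBA-family methods then optimize an objective over all v satisfying this constraint plus flux bounds; the sketch covers only the feasibility check.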
1605.09535
Peter Klimek
Peter Klimek, Silke Aichberger, Stefan Thurner
Disentangling genetic and environmental risk factors for individual diseases from multiplex comorbidity networks
null
null
null
null
q-bio.MN physics.soc-ph stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most disorders are caused by a combination of multiple genetic and/or environmental factors. If two diseases are caused by the same molecular mechanism, they tend to co-occur in patients. Here we provide a quantitative method to disentangle how much genetic or environmental risk factors contribute to the pathogenesis of 358 individual diseases, respectively. We pool data on genetic, pathway-based, and toxicogenomic disease-causing mechanisms with disease co-occurrence data obtained from almost two million patients. From this data we construct a multilayer network where nodes represent disorders that are connected by links that either represent phenotypic comorbidity of the patients or the involvement of a certain molecular mechanism. From the similarity of phenotypic and mechanism-based networks for each disorder we derive measure that allows us to quantify the relative importance of various molecular mechanisms for a given disease. We find that most diseases are dominated by genetic risk factors, while environmental influences prevail for disorders such as depressions, cancers, or dermatitis. Almost never we find that more than one type of mechanisms is involved in the pathogenesis of diseases.
[ { "created": "Tue, 31 May 2016 09:05:07 GMT", "version": "v1" } ]
2016-06-01
[ [ "Klimek", "Peter", "" ], [ "Aichberger", "Silke", "" ], [ "Thurner", "Stefan", "" ] ]
Most disorders are caused by a combination of multiple genetic and/or environmental factors. If two diseases are caused by the same molecular mechanism, they tend to co-occur in patients. Here we provide a quantitative method to disentangle how much genetic and environmental risk factors contribute to the pathogenesis of 358 individual diseases. We pool data on genetic, pathway-based, and toxicogenomic disease-causing mechanisms with disease co-occurrence data obtained from almost two million patients. From these data we construct a multilayer network where nodes represent disorders that are connected by links that either represent phenotypic comorbidity of the patients or the involvement of a certain molecular mechanism. From the similarity of the phenotypic and mechanism-based networks for each disorder we derive a measure that allows us to quantify the relative importance of various molecular mechanisms for a given disease. We find that most diseases are dominated by genetic risk factors, while environmental influences prevail for disorders such as depression, cancers, or dermatitis. We almost never find that more than one type of mechanism is involved in the pathogenesis of a disease.
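Comparing a phenotypic layer with a mechanism-based layer, as described above, amounts to measuring how much their edge sets overlap. A simple Jaccard-overlap sketch (the authors' actual similarity measure may differ; this is only the generic idea):

```python
def layer_similarity(edges_a, edges_b):
    """Jaccard overlap between the (undirected) edge sets of two
    network layers: |A intersect B| / |A union B|."""
    a = set(map(frozenset, edges_a))  # frozenset makes edges orderless
    b = set(map(frozenset, edges_b))
    return len(a & b) / len(a | b)
```

A value near 1 means the two layers link mostly the same disease pairs; near 0 means the layers are structurally unrelated.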
1908.10791
Feng Fu
Katherine P. Royce, Feng Fu
Mathematically Modeling Spillover Dynamics of Emerging Zoonoses with Intermediate Hosts
Comments are welcome
null
10.1371/journal.pone.0237780
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The World Health Organization describes zoonotic diseases as a major pandemic threat, and modeling the behavior of such diseases is a key component of their control. Many emerging zoonoses, such as SARS, Nipah, and Hendra, mutated from their wild type while circulating in an intermediate host population, usually a domestic species, to become more transmissible among humans, and moreover, this transmission route will only become more likely as agriculture and trade intensifies around the world. Passage through an intermediate host enables many otherwise rare diseases to become better adapted to humans, and so understanding this process with mathematical epidemiological models is necessary to prevent epidemics of emerging zoonoses, guide policy interventions in public health, and predict the behavior of an epidemic. In this paper, we account for spillovers of a zoonotic disease mutating in an intermediate host by means of modeling transmission dynamics within and between three host species, namely, wild reservoir, intermediate domestic animals, and humans. We calculate the basic reproductive number of the pathogen, present critical conditions for the emergence dynamics of zoonosis, and perform stability analysis of admissible disease equilibria. Our analytical results agree well with long-term simulations of the system. We find that in the presence of biologically realistic interspecies transmission parameters, a zoonotic disease can establish itself in humans even if it fails to persist in its reservoir and intermediate host species. Our model and results can be used to understand the dynamic behavior of any zoonosis with intermediate hosts and assist efforts to protect public health.
[ { "created": "Wed, 28 Aug 2019 15:47:22 GMT", "version": "v1" } ]
2021-01-27
[ [ "Royce", "Katherine P.", "" ], [ "Fu", "Feng", "" ] ]
The World Health Organization describes zoonotic diseases as a major pandemic threat, and modeling the behavior of such diseases is a key component of their control. Many emerging zoonoses, such as SARS, Nipah, and Hendra, mutated from their wild type while circulating in an intermediate host population, usually a domestic species, to become more transmissible among humans; moreover, this transmission route will only become more likely as agriculture and trade intensify around the world. Passage through an intermediate host enables many otherwise rare diseases to become better adapted to humans, and so understanding this process with mathematical epidemiological models is necessary to prevent epidemics of emerging zoonoses, guide policy interventions in public health, and predict the behavior of an epidemic. In this paper, we account for spillovers of a zoonotic disease mutating in an intermediate host by modeling transmission dynamics within and between three host species, namely, the wild reservoir, intermediate domestic animals, and humans. We calculate the basic reproductive number of the pathogen, present critical conditions for the emergence dynamics of a zoonosis, and perform stability analysis of admissible disease equilibria. Our analytical results agree well with long-term simulations of the system. We find that in the presence of biologically realistic interspecies transmission parameters, a zoonotic disease can establish itself in humans even if it fails to persist in its reservoir and intermediate host species. Our model and results can be used to understand the dynamic behavior of any zoonosis with intermediate hosts and assist efforts to protect public health.
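For a multi-host model like this, the basic reproductive number is typically the spectral radius (dominant eigenvalue) of a next-generation matrix. A power-iteration sketch; the 3x3 matrix in the test is a made-up diagonal illustration, not the paper's parameters:

```python
def spectral_radius(M, iters=200):
    """Dominant eigenvalue of a non-negative square matrix by power
    iteration; for a next-generation matrix this is R0."""
    n = len(M)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # infinity-norm estimate
        v = [x / lam for x in w]       # renormalize the iterate
    return lam
```

If the returned value exceeds 1, the disease can invade; below 1, it dies out, which is the threshold behavior the stability analysis above characterizes.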
1611.02116
Zachary Kilpatrick PhD
Daniel B. Poll and Zachary P. Kilpatrick
Velocity integration in a multilayer neural field model of spatial working memory
37 pages, 9 figures
null
null
null
q-bio.NC nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze a multilayer neural field model of spatial working memory, focusing on the impact of interlaminar connectivity, spatial heterogeneity, and velocity inputs. Models of spatial working memory typically employ networks that generate persistent activity via a combination of local excitation and lateral inhibition. Our model is comprised of a multilayer set of equations that describes connectivity between neurons in the same and different layers using an integral term. The kernel of this integral term then captures the impact of different interlaminar connection strengths, spatial heterogeneity, and velocity input. We begin our analysis by focusing on how interlaminar connectivity shapes the form and stability of (persistent) bump attractor solutions to the model. Subsequently, we derive a low-dimensional approximation that describes how spatial heterogeneity, velocity input, and noise combine to determine the position of bump solutions. The main impact of spatial heterogeneity is to break the translation symmetry of the network, so bumps prefer to reside at one of a finite number of local attractors in the domain. With the reduced model in hand, we can then approximate the dynamics of the bump position using a continuous time Markov chain model that describes bump motion between local attractors. While heterogeneity reduces the effective diffusion of the bumps, it also disrupts the processing of velocity inputs by slowing the velocity-induced propagation of bumps. However, we demonstrate that noise can play a constructive role by promoting bump motion transitions, restoring a mean bump velocity that is close to the input velocity.
[ { "created": "Mon, 7 Nov 2016 15:34:56 GMT", "version": "v1" }, { "created": "Mon, 16 Jan 2017 16:42:41 GMT", "version": "v2" } ]
2017-01-17
[ [ "Poll", "Daniel B.", "" ], [ "Kilpatrick", "Zachary P.", "" ] ]
We analyze a multilayer neural field model of spatial working memory, focusing on the impact of interlaminar connectivity, spatial heterogeneity, and velocity inputs. Models of spatial working memory typically employ networks that generate persistent activity via a combination of local excitation and lateral inhibition. Our model comprises a multilayer set of equations that describes connectivity between neurons in the same and different layers using an integral term. The kernel of this integral term then captures the impact of different interlaminar connection strengths, spatial heterogeneity, and velocity input. We begin our analysis by focusing on how interlaminar connectivity shapes the form and stability of (persistent) bump attractor solutions to the model. Subsequently, we derive a low-dimensional approximation that describes how spatial heterogeneity, velocity input, and noise combine to determine the position of bump solutions. The main impact of spatial heterogeneity is to break the translation symmetry of the network, so bumps prefer to reside at one of a finite number of local attractors in the domain. With the reduced model in hand, we can then approximate the dynamics of the bump position using a continuous time Markov chain model that describes bump motion between local attractors. While heterogeneity reduces the effective diffusion of the bumps, it also disrupts the processing of velocity inputs by slowing the velocity-induced propagation of bumps. However, we demonstrate that noise can play a constructive role by promoting bump motion transitions, restoring a mean bump velocity that is close to the input velocity.
1903.00095
Sebastiano Barbieri
Sebastiano Barbieri, Oliver J. Gurney-Champion, Remy Klaassen, Harriet C. Thoeny
Deep Learning How to Fit an Intravoxel Incoherent Motion Model to Diffusion-Weighted MRI
null
Magnetic Resonance in Medicine 2019
10.1002/mrm.27910
null
q-bio.QM cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose: This prospective clinical study assesses the feasibility of training a deep neural network (DNN) for intravoxel incoherent motion (IVIM) model fitting to diffusion-weighted magnetic resonance imaging (DW-MRI) data and evaluates its performance. Methods: In May 2011, ten male volunteers (age range: 29 to 53 years, mean: 37 years) underwent DW-MRI of the upper abdomen on 1.5T and 3.0T magnetic resonance scanners. Regions of interest in the left and right liver lobe, pancreas, spleen, renal cortex, and renal medulla were delineated independently by two readers. DNNs were trained for IVIM model fitting using these data; results were compared to least-squares and Bayesian approaches to IVIM fitting. Intraclass Correlation Coefficients (ICC) were used to assess consistency of measurements between readers. Intersubject variability was evaluated using Coefficients of Variation (CV). The fitting error was calculated based on simulated data and the average fitting time of each method was recorded. Results: DNNs were trained successfully for IVIM parameter estimation. This approach was associated with high consistency between the two readers (ICCs between 50 and 97%), low intersubject variability of estimated parameter values (CVs between 9.2 and 28.4), and the lowest error when compared with least-squares and Bayesian approaches. Fitting by DNNs was several orders of magnitude quicker than the other methods but the networks may need to be re-trained for different acquisition protocols or imaged anatomical regions. Conclusion: DNNs are recommended for accurate and robust IVIM model fitting to DW-MRI data. Suitable software is available at (1).
[ { "created": "Thu, 28 Feb 2019 22:42:02 GMT", "version": "v1" }, { "created": "Thu, 23 May 2019 08:17:04 GMT", "version": "v2" } ]
2020-01-08
[ [ "Barbieri", "Sebastiano", "" ], [ "Gurney-Champion", "Oliver J.", "" ], [ "Klaassen", "Remy", "" ], [ "Thoeny", "Harriet C.", "" ] ]
Purpose: This prospective clinical study assesses the feasibility of training a deep neural network (DNN) for intravoxel incoherent motion (IVIM) model fitting to diffusion-weighted magnetic resonance imaging (DW-MRI) data and evaluates its performance. Methods: In May 2011, ten male volunteers (age range: 29 to 53 years, mean: 37 years) underwent DW-MRI of the upper abdomen on 1.5T and 3.0T magnetic resonance scanners. Regions of interest in the left and right liver lobe, pancreas, spleen, renal cortex, and renal medulla were delineated independently by two readers. DNNs were trained for IVIM model fitting using these data; results were compared to least-squares and Bayesian approaches to IVIM fitting. Intraclass Correlation Coefficients (ICC) were used to assess consistency of measurements between readers. Intersubject variability was evaluated using Coefficients of Variation (CV). The fitting error was calculated based on simulated data and the average fitting time of each method was recorded. Results: DNNs were trained successfully for IVIM parameter estimation. This approach was associated with high consistency between the two readers (ICCs between 50 and 97%), low intersubject variability of estimated parameter values (CVs between 9.2 and 28.4), and the lowest error when compared with least-squares and Bayesian approaches. Fitting by DNNs was several orders of magnitude quicker than the other methods but the networks may need to be re-trained for different acquisition protocols or imaged anatomical regions. Conclusion: DNNs are recommended for accurate and robust IVIM model fitting to DW-MRI data. Suitable software is available at (1).
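The forward model that the DNN is trained to invert is the standard bi-exponential IVIM signal equation. A minimal sketch of that forward model; the parameter values in the test are typical illustrative magnitudes, not values from the study, and some IVIM formulations use D* + D in the perfusion term:

```python
import math

def ivim_signal(b, s0, f, d, d_star):
    """Bi-exponential IVIM model: perfusion fraction f decaying with
    pseudo-diffusion D*, tissue fraction (1 - f) decaying with
    diffusion D (b in s/mm^2, D and D* in mm^2/s)."""
    return s0 * (f * math.exp(-b * d_star) + (1.0 - f) * math.exp(-b * d))
```

Fitting (by least squares, Bayesian inference, or a DNN as in the paper) means recovering f, D, and D* from signals measured at several b-values.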
1311.6682
Mario dos Reis Dr
Mario dos Reis
Population genetics and substitution models of adaptive evolution
24 pages, 4 figures and 1 table. Manuscript written between January and April 2010
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ratio of non-synonymous to synonymous substitutions $\omega(=d_{N}/d_{S})$ has been widely used as a measure of adaptive evolution in protein coding genes. Omega can be defined in terms of population genetics parameters as the fixation ratio of selected vs. neutral mutants. Here it is argued that approaches based on the infinite sites model are not appropriate to define $\omega$ for single codon locations. Simple models of amino acid substitution with reversible mutation and selection are analysed, and used to define $\omega$ under several evolutionary scenarios. In most practical cases $\omega<1$ when selection is constant throughout time. However, it is shown that when the pattern of selection on amino acids changes, for example after an environment shift, a temporary burst of adaptive evolution ($\omega\gg1$) can be observed. The fixation probability of a novel mutant under frequency dependent selection is calculated, and it is used to show why $\omega>1$ can be sometimes expected for single locations at equilibrium. An example with influenza data is discussed.
[ { "created": "Tue, 26 Nov 2013 14:21:37 GMT", "version": "v1" } ]
2013-11-27
[ [ "Reis", "Mario dos", "" ] ]
The ratio of non-synonymous to synonymous substitutions $\omega(=d_{N}/d_{S})$ has been widely used as a measure of adaptive evolution in protein coding genes. Omega can be defined in terms of population genetics parameters as the fixation ratio of selected vs. neutral mutants. Here it is argued that approaches based on the infinite sites model are not appropriate to define $\omega$ for single codon locations. Simple models of amino acid substitution with reversible mutation and selection are analysed, and used to define $\omega$ under several evolutionary scenarios. In most practical cases $\omega<1$ when selection is constant throughout time. However, it is shown that when the pattern of selection on amino acids changes, for example after an environment shift, a temporary burst of adaptive evolution ($\omega\gg1$) can be observed. The fixation probability of a novel mutant under frequency dependent selection is calculated, and it is used to show why $\omega>1$ can be sometimes expected for single locations at equilibrium. An example with influenza data is discussed.
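The definition of $\omega$ as the fixation ratio of selected vs. neutral mutants can be made concrete with Kimura's diffusion-approximation fixation probability. A haploid sketch under that standard approximation (the paper's exact parameterization may differ):

```python
import math

def fixation_prob(s, n):
    """Kimura's fixation probability of a single new mutant with
    selection coefficient s in a haploid population of size n;
    the neutral limit is 1/n."""
    if abs(s) < 1e-12:
        return 1.0 / n
    return (1.0 - math.exp(-2.0 * s)) / (1.0 - math.exp(-2.0 * n * s))

def omega(s, n):
    """omega as the ratio of selected to neutral fixation probabilities."""
    return fixation_prob(s, n) / (1.0 / n)
```

As the text argues, constant selection gives $\omega < 1$ for deleterious mutants and $\omega > 1$ only for advantageous ones, which is why shifts in the selection pattern are needed to produce transient bursts with $\omega \gg 1$.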
2302.01076
Adam Mielke
Adam Mielke and Lasse Engbo Christiansen
Convergence to the Equilibrium State in an Outbreak: When Can Growth Rates Accurately be Measured?
4 pages, 1 figure
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate sub-leading orders of the classic SEIR-model using contact matrices from modeling of the Omicron and Delta variants of COVID-19 in Denmark. The goal of this is to illustrate when the growth rate, and by extension the infectiousness, can be accurately measured in a new outbreak, e.g. after introduction of a new variant of a virus. We find that as long as susceptible depletion is a minor effect, the transients are gone within around 4 generations.
[ { "created": "Thu, 2 Feb 2023 13:07:33 GMT", "version": "v1" } ]
2023-02-03
[ [ "Mielke", "Adam", "" ], [ "Christiansen", "Lasse Engbo", "" ] ]
We investigate sub-leading orders of the classic SEIR model using contact matrices from modeling of the Omicron and Delta variants of COVID-19 in Denmark. The goal is to illustrate when the growth rate, and by extension the infectiousness, can be accurately measured in a new outbreak, e.g., after the introduction of a new variant of a virus. We find that as long as susceptible depletion is a minor effect, the transients are gone within around four generations.
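The transient behaviour discussed above can be explored with a minimal SEIR integrator. An Euler sketch of the classic (single-group) model with illustrative rates, not the Danish contact-matrix model:

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt=0.1):
    """One Euler step of the classic SEIR model with state given as
    population fractions: infection rate beta, incubation rate sigma,
    recovery rate gamma."""
    new_inf = beta * s * i
    ds = -new_inf
    de = new_inf - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt
```

Measuring the exponential growth rate from the simulated prevalence only becomes reliable once the E/I ratio has settled onto the dominant eigenvector, i.e. after the transients the abstract refers to have died out.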
2309.05771
Can Firtina
Can Firtina, Melina Soysal, Jo\"el Lindegger, Onur Mutlu
RawHash2: Mapping Raw Nanopore Signals Using Hash-Based Seeding and Adaptive Quantization
Accepted in Bioinformatics: https://doi.org/10.1093/bioinformatics/btae478
Bioinformatics, 2024, btae478
10.1093/bioinformatics/btae478
null
q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Summary: Raw nanopore signals can be analyzed while they are being generated, a process known as real-time analysis. Real-time analysis of raw signals is essential to utilize the unique features that nanopore sequencing provides, enabling the early stopping of the sequencing of a read or the entire sequencing run based on the analysis. The state-of-the-art mechanism, RawHash, offers the first hash-based efficient and accurate similarity identification between raw signals and a reference genome by quickly matching their hash values. In this work, we introduce RawHash2, which provides major improvements over RawHash, including a more sensitive quantization and chaining implementation, weighted mapping decisions, frequency filters to reduce ambiguous seed hits, minimizers for hash-based sketching, and support for the R10.4 flow cell version and various data formats such as POD5 and SLOW5. Compared to RawHash, RawHash2 provides better F1 accuracy (on average by 10.57% and up to 20.25%) and better throughput (on average by 4.0x and up to 9.9x) than RawHash. Availability and Implementation: RawHash2 is available at https://github.com/CMU-SAFARI/RawHash. We also provide the scripts to fully reproduce our results on our GitHub page.
[ { "created": "Mon, 11 Sep 2023 18:56:48 GMT", "version": "v1" }, { "created": "Sat, 21 Oct 2023 20:50:33 GMT", "version": "v2" }, { "created": "Fri, 3 Nov 2023 15:46:20 GMT", "version": "v3" }, { "created": "Wed, 1 May 2024 20:28:49 GMT", "version": "v4" }, { "created": "Tue, 13 Aug 2024 08:25:02 GMT", "version": "v5" } ]
2024-08-14
[ [ "Firtina", "Can", "" ], [ "Soysal", "Melina", "" ], [ "Lindegger", "Joël", "" ], [ "Mutlu", "Onur", "" ] ]
Summary: Raw nanopore signals can be analyzed while they are being generated, a process known as real-time analysis. Real-time analysis of raw signals is essential to utilize the unique features that nanopore sequencing provides, enabling the early stopping of the sequencing of a read or the entire sequencing run based on the analysis. The state-of-the-art mechanism, RawHash, offers the first hash-based efficient and accurate similarity identification between raw signals and a reference genome by quickly matching their hash values. In this work, we introduce RawHash2, which provides major improvements over RawHash, including a more sensitive quantization and chaining implementation, weighted mapping decisions, frequency filters to reduce ambiguous seed hits, minimizers for hash-based sketching, and support for the R10.4 flow cell version and various data formats such as POD5 and SLOW5. Compared to RawHash, RawHash2 provides better F1 accuracy (on average by 10.57% and up to 20.25%) and better throughput (on average by 4.0x and up to 9.9x) than RawHash. Availability and Implementation: RawHash2 is available at https://github.com/CMU-SAFARI/RawHash. We also provide the scripts to fully reproduce our results on our GitHub page.
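The hash-based seeding idea, quantizing signal events, hashing k-mers of the quantized values, and looking them up in a reference index, can be sketched in a few lines. The fixed quantization step and k below are simplifying assumptions (RawHash2's quantization is adaptive, and its hashing and chaining are more involved):

```python
def quantize(signal, step=0.25):
    """Map raw signal values to small integer buckets; a fixed step
    stands in for RawHash2's adaptive quantization."""
    return tuple(int(round(v / step)) for v in signal)

def build_index(ref_events, k=4, step=0.25):
    """Hash every k-mer of quantized reference events to its positions."""
    idx = {}
    q = quantize(ref_events, step)
    for i in range(len(q) - k + 1):
        idx.setdefault(hash(q[i:i + k]), []).append(i)
    return idx

def seed_hits(read_events, idx, k=4, step=0.25):
    """Return (read_pos, ref_pos) pairs whose hashed k-mers match."""
    q = quantize(read_events, step)
    hits = []
    for i in range(len(q) - k + 1):
        for j in idx.get(hash(q[i:i + k]), []):
            hits.append((i, j))
    return hits
```

Matching hash values, rather than aligning the raw floating-point signals, is what makes this kind of similarity identification fast enough for real-time decisions.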
1411.2441
Karol Niena{\l}towski
Agata Charzy\'nska, Weronika Wronowska, Karol Niena{\l}towski and Anna Gambin
Computational model of sphingolipids metabolism: a case study of Alzheimer's disease
null
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by-nc-sa/3.0/
Background: Sphingolipids - as suggested by the prefix in their name - are mysterious molecules, which play surprisingly various roles in opposable cellular processes, like autophagy, apoptosis, proliferation and differentiation. Recently they have been also recognized as important messengers in cellular signalling pathways. More importantly, sphingolipid metabolism disorders were observed in various pathological conditions such as cancer and neurodegeneration. Results: Existing formal models of sphingolipids metabolism concentrates mostly on de novo ceramide synthesis or restrict their focus to biochemical transformations of a particular subspecies. We propose first comprehensive computational model of sphingolipid metabolism in human tissue. In contrast to previous approaches we explicitly model compartmentalization what allows emphasizing the differences among individual organelles. Conclusions: Presented here model was validated by means of recently proposed model analysis technics allowing for detection of most sensitive and experimentally non-identifiable parameters and determination of main sources of model variance. Moreover, we demonstrate the utility of the model for the study of molecular processes underlying Alzheimer's disease.
[ { "created": "Mon, 10 Nov 2014 14:32:47 GMT", "version": "v1" } ]
2014-11-11
[ [ "Charzyńska", "Agata", "" ], [ "Wronowska", "Weronika", "" ], [ "Nienałtowski", "Karol", "" ], [ "Gambin", "Anna", "" ] ]
Background: Sphingolipids - as suggested by the prefix in their name - are mysterious molecules, which play surprisingly varied roles in opposing cellular processes, such as autophagy, apoptosis, proliferation and differentiation. Recently they have also been recognized as important messengers in cellular signalling pathways. More importantly, sphingolipid metabolism disorders have been observed in various pathological conditions such as cancer and neurodegeneration. Results: Existing formal models of sphingolipid metabolism concentrate mostly on de novo ceramide synthesis or restrict their focus to biochemical transformations of a particular subspecies. We propose the first comprehensive computational model of sphingolipid metabolism in human tissue. In contrast to previous approaches, we explicitly model compartmentalization, which allows us to emphasize the differences among individual organelles. Conclusions: The model presented here was validated by means of recently proposed model analysis techniques that allow for the detection of the most sensitive and experimentally non-identifiable parameters and the determination of the main sources of model variance. Moreover, we demonstrate the utility of the model for the study of molecular processes underlying Alzheimer's disease.
1910.03736
Vladyslav Oles
Vladyslav Oles, Anton Kukushkin
BoolSi: a tool for distributed simulations and analysis of Boolean networks
Added asynchronous BNs to the introduction, added figure showing an attractor, updated figure node correlations, explicitly mentioned if a figure is BoolSi output, stated correlations of CK and TDIF on proliferation in case study
null
null
null
q-bio.MN cs.DC math.DS q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present BoolSi, an open-source cross-platform command line tool for distributed simulations of deterministic Boolean networks with synchronous update. It uses the MPI standard to support execution on computational clusters, as well as parallel processing on a single computer. BoolSi can be used to model the behavior of complex dynamic networks, such as gene regulatory networks. In particular, it allows for identification and statistical analysis of network attractors. We perform a case study of the activity of a cambium cell to demonstrate the capabilities of the tool.
[ { "created": "Wed, 9 Oct 2019 01:06:47 GMT", "version": "v1" }, { "created": "Fri, 18 Oct 2019 20:29:56 GMT", "version": "v2" } ]
2019-10-22
[ [ "Oles", "Vladyslav", "" ], [ "Kukushkin", "Anton", "" ] ]
We present BoolSi, an open-source cross-platform command line tool for distributed simulations of deterministic Boolean networks with synchronous update. It uses the MPI standard to support execution on computational clusters, as well as parallel processing on a single computer. BoolSi can be used to model the behavior of complex dynamic networks, such as gene regulatory networks. In particular, it allows for identification and statistical analysis of network attractors. We perform a case study of the activity of a cambium cell to demonstrate the capabilities of the tool.
2006.05081
Davide Faranda
Davide Faranda and Tommaso Alberti
Modelling the second wave of COVID-19 infections in France and Italy via a Stochastic SEIR model
null
null
10.1063/5.0015943
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
COVID-19 has forced quarantine measures in several countries across the world. These measures have proven effective in significantly reducing the prevalence of the virus. To date, no effective treatment or vaccine is available. In an effort to preserve both public health and the economic and social fabric, the French and Italian governments have partially lifted lockdown measures. Here we extrapolate the long-term behavior of the epidemics in both countries using a Susceptible-Exposed-Infected-Recovered (SEIR) model whose parameters are stochastically perturbed to handle the uncertainty in the estimates of COVID-19 prevalence. Our results suggest that uncertainties in both parameters and initial conditions rapidly propagate in the model and can result in different outcomes of the epidemics, leading or not to a second wave of infections. Based on current knowledge, asymptotic estimates of COVID-19 prevalence can fluctuate by on the order of ten million units in both countries.
[ { "created": "Tue, 9 Jun 2020 07:20:07 GMT", "version": "v1" } ]
2020-12-02
[ [ "Faranda", "Davide", "" ], [ "Alberti", "Tommaso", "" ] ]
COVID-19 has forced quarantine measures in several countries across the world. These measures have proven effective in significantly reducing the prevalence of the virus. To date, no effective treatment or vaccine is available. In an effort to preserve both public health and the economic and social fabric, the French and Italian governments have partially lifted lockdown measures. Here we extrapolate the long-term behavior of the epidemics in both countries using a Susceptible-Exposed-Infected-Recovered (SEIR) model whose parameters are stochastically perturbed to handle the uncertainty in the estimates of COVID-19 prevalence. Our results suggest that uncertainties in both parameters and initial conditions rapidly propagate in the model and can result in different outcomes of the epidemics, leading or not to a second wave of infections. Based on current knowledge, asymptotic estimates of COVID-19 prevalence can fluctuate by on the order of ten million units in both countries.
2112.00069
Armita Nourmohammad
Colin LaMont, Jakub Otwinowski, Kanika Vanshylla, Henning Gruell, Florian Klein, Armita Nourmohammad
Design of an optimal combination therapy with broadly neutralizing antibodies to suppress HIV-1
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Broadly neutralizing antibodies (bNAbs) are promising targets for vaccination and therapy against HIV. Passive infusions of bNAbs have shown promise in clinical trials as a potential alternative for anti-retroviral therapy. A key challenge for the potential clinical application of bNAbs is the suppression of viral escape, which is more effectively achieved with a combination of bNAbs. However, identifying an optimal bNAb cocktail is combinatorially complex. Here, we propose a computational approach to predict the efficacy of a bNAb therapy trial based on the population genetics of HIV escape, which we parametrize using high-throughput HIV sequence data from a cohort of untreated bNAb-naive patients. By quantifying the mutational target size and the fitness cost of HIV-1 escape from bNAbs, we reliably predict the distribution of rebound times in three clinical trials. Importantly, we show that early rebounds are dominated by the pre-treatment standing variation of HIV-1 populations, rather than spontaneous mutations during treatment. Lastly, we show that a cocktail of three bNAbs is necessary to suppress the chances of viral escape below 1%, and we predict the optimal composition of such a bNAb cocktail. Our results offer a rational design for bNAb therapy against HIV-1, and more generally show how genetic data could be used to predict treatment outcomes and design new approaches to pathogen control.
[ { "created": "Tue, 30 Nov 2021 19:56:50 GMT", "version": "v1" } ]
2021-12-02
[ [ "LaMont", "Colin", "" ], [ "Otwinowski", "Jakub", "" ], [ "Vanshylla", "Kanika", "" ], [ "Gruell", "Henning", "" ], [ "Klein", "Florian", "" ], [ "Nourmohammad", "Armita", "" ] ]
Broadly neutralizing antibodies (bNAbs) are promising targets for vaccination and therapy against HIV. Passive infusions of bNAbs have shown promise in clinical trials as a potential alternative for anti-retroviral therapy. A key challenge for the potential clinical application of bNAbs is the suppression of viral escape, which is more effectively achieved with a combination of bNAbs. However, identifying an optimal bNAb cocktail is combinatorially complex. Here, we propose a computational approach to predict the efficacy of a bNAb therapy trial based on the population genetics of HIV escape, which we parametrize using high-throughput HIV sequence data from a cohort of untreated bNAb-naive patients. By quantifying the mutational target size and the fitness cost of HIV-1 escape from bNAbs, we reliably predict the distribution of rebound times in three clinical trials. Importantly, we show that early rebounds are dominated by the pre-treatment standing variation of HIV-1 populations, rather than spontaneous mutations during treatment. Lastly, we show that a cocktail of three bNAbs is necessary to suppress the chances of viral escape below 1%, and we predict the optimal composition of such a bNAb cocktail. Our results offer a rational design for bNAb therapy against HIV-1, and more generally show how genetic data could be used to predict treatment outcomes and design new approaches to pathogen control.
2101.09354
M. Ali Vosoughi
Axel Wismuller and M. Ali Vosoughi
Large-scale Augmented Granger Causality (lsAGC) for Connectivity Analysis in Complex Systems: From Computer Simulations to Functional MRI (fMRI)
15 pages, conference
null
null
null
q-bio.NC cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
We introduce large-scale Augmented Granger Causality (lsAGC) as a method for connectivity analysis in complex systems. The lsAGC algorithm combines dimension reduction with source time-series augmentation and uses predictive time-series modeling for estimating directed causal relationships among time-series. This method is a multivariate approach, since it is capable of identifying the influence of each time-series on any other time-series in the presence of all other time-series of the underlying dynamic system. We quantitatively evaluate the performance of lsAGC on synthetic directional time-series networks with known ground truth. As a reference method, we compare our results with cross-correlation, which is typically used as a standard measure of connectivity in the functional MRI (fMRI) literature. Using extensive simulations for a wide range of time-series lengths and two different signal-to-noise ratios of 5 and 15 dB, lsAGC consistently outperforms cross-correlation at accurately detecting network connections, using Receiver Operating Characteristic (ROC) curve analysis, across all tested time-series lengths and noise levels. In addition, as an outlook to possible clinical application, we perform a preliminary qualitative analysis of connectivity matrices for fMRI data of Autism Spectrum Disorder (ASD) patients and typical controls, using a subset of 59 subjects of the Autism Brain Imaging Data Exchange II (ABIDE II) data repository. Our results suggest that lsAGC, by extracting sparse connectivity matrices, may be useful for network analysis in complex systems, and may be applicable to clinical fMRI analysis in future research, such as targeting disease-related classification or regression tasks on clinical data.
[ { "created": "Sun, 10 Jan 2021 01:44:48 GMT", "version": "v1" } ]
2021-01-26
[ [ "Wismuller", "Axel", "" ], [ "Vosoughi", "M. Ali", "" ] ]
We introduce large-scale Augmented Granger Causality (lsAGC) as a method for connectivity analysis in complex systems. The lsAGC algorithm combines dimension reduction with source time-series augmentation and uses predictive time-series modeling for estimating directed causal relationships among time-series. This method is a multivariate approach, since it is capable of identifying the influence of each time-series on any other time-series in the presence of all other time-series of the underlying dynamic system. We quantitatively evaluate the performance of lsAGC on synthetic directional time-series networks with known ground truth. As a reference method, we compare our results with cross-correlation, which is typically used as a standard measure of connectivity in the functional MRI (fMRI) literature. Using extensive simulations for a wide range of time-series lengths and two different signal-to-noise ratios of 5 and 15 dB, lsAGC consistently outperforms cross-correlation at accurately detecting network connections, using Receiver Operating Characteristic (ROC) curve analysis, across all tested time-series lengths and noise levels. In addition, as an outlook to possible clinical application, we perform a preliminary qualitative analysis of connectivity matrices for fMRI data of Autism Spectrum Disorder (ASD) patients and typical controls, using a subset of 59 subjects of the Autism Brain Imaging Data Exchange II (ABIDE II) data repository. Our results suggest that lsAGC, by extracting sparse connectivity matrices, may be useful for network analysis in complex systems, and may be applicable to clinical fMRI analysis in future research, such as targeting disease-related classification or regression tasks on clinical data.
2112.11298
Matthew Denwood
Jacob St{\ae}rk-{\O}stergaard, Carsten Kirkeby, Lasse Engbo Christiansen, Michael Asger Andersen, Camilla Holten M{\o}ller, Marianne Voldstedlund, Matthew J. Denwood
Evaluation of diagnostic test procedures for SARS-CoV-2 using latent class models: comparison of antigen test kits and sampling for PCR testing based on Danish national data registries
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Antigen test kits have been used extensively as a screening tool during the worldwide pandemic of coronavirus (SARS-CoV-2). While it is generally expected that taking samples for analysis with PCR testing gives more reliable results than using antigen test kits, the overall sensitivity and specificity of the two protocols in the field have not yet been estimated without assuming that the PCR test constitutes a gold standard. We use latent class models to estimate the in situ performance of both PCR and antigen testing, using data from the Danish national registries. The results are based on 240,000 paired test results sub-selected from the 55 million test results that were obtained in Denmark during the period from February 2021 until June 2021. We found that the specificity of both tests is very high in our data sample (>99.7%), while the sensitivity of PCR sampling was estimated to be 95.7% (95% CI: 92.8-98.4%) and that of the antigen test kits used in Denmark over the study period was estimated at 53.8% (95% CI: 49.8-57.9%). Our findings can be used as supplementary information for consideration when implementing serial testing strategies that employ a confirmatory PCR sample following a positive result from an antigen test kit, such as the policy used in Denmark. We note that while this strategy reduces the number of false positives associated with antigen test screening, it also increases the false negatives. We demonstrate that the balance of trading false positives for false negatives only favours the use of serial testing when the expected true prevalence is low. Our results contain substantial uncertainty in the estimates for sensitivity due to the relatively small number of positive test results over this period: validation of our findings in a population with higher prevalence would therefore be highly relevant for future work.
[ { "created": "Tue, 21 Dec 2021 15:37:27 GMT", "version": "v1" } ]
2021-12-22
[ [ "Stærk-Østergaard", "Jacob", "" ], [ "Kirkeby", "Carsten", "" ], [ "Christiansen", "Lasse Engbo", "" ], [ "Andersen", "Michael Asger", "" ], [ "Møller", "Camilla Holten", "" ], [ "Voldstedlund", "Marianne", "" ], [ "Denwood", "Matthew J.", "" ] ]
Antigen test kits have been used extensively as a screening tool during the worldwide pandemic of coronavirus (SARS-CoV-2). While it is generally expected that taking samples for analysis with PCR testing gives more reliable results than using antigen test kits, the overall sensitivity and specificity of the two protocols in the field have not yet been estimated without assuming that the PCR test constitutes a gold standard. We use latent class models to estimate the in situ performance of both PCR and antigen testing, using data from the Danish national registries. The results are based on 240,000 paired test results sub-selected from the 55 million test results that were obtained in Denmark during the period from February 2021 until June 2021. We found that the specificity of both tests is very high in our data sample (>99.7%), while the sensitivity of PCR sampling was estimated to be 95.7% (95% CI: 92.8-98.4%) and that of the antigen test kits used in Denmark over the study period was estimated at 53.8% (95% CI: 49.8-57.9%). Our findings can be used as supplementary information for consideration when implementing serial testing strategies that employ a confirmatory PCR sample following a positive result from an antigen test kit, such as the policy used in Denmark. We note that while this strategy reduces the number of false positives associated with antigen test screening, it also increases the false negatives. We demonstrate that the balance of trading false positives for false negatives only favours the use of serial testing when the expected true prevalence is low. Our results contain substantial uncertainty in the estimates for sensitivity due to the relatively small number of positive test results over this period: validation of our findings in a population with higher prevalence would therefore be highly relevant for future work.
2011.12400
Adeel Razi
Karl J. Friston, Guillaume Flandin, Adeel Razi
Dynamic causal modelling of mitigated epidemiological outcomes
null
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
This technical report describes the rationale and technical details for the dynamic causal modelling of mitigated epidemiological outcomes based upon a variety of timeseries data. It details the structure of the underlying convolution or generative model (at the time of writing on 6-Nov-20). This report is intended for use as a reference that accompanies the predictions in the following dashboard: https://www.fil.ion.ucl.ac.uk/spm/covid-19/dashboard
[ { "created": "Tue, 24 Nov 2020 21:28:51 GMT", "version": "v1" } ]
2020-11-26
[ [ "Friston", "Karl J.", "" ], [ "Flandin", "Guillaume", "" ], [ "Razi", "Adeel", "" ] ]
This technical report describes the rationale and technical details for the dynamic causal modelling of mitigated epidemiological outcomes based upon a variety of timeseries data. It details the structure of the underlying convolution or generative model (at the time of writing on 6-Nov-20). This report is intended for use as a reference that accompanies the predictions in the following dashboard: https://www.fil.ion.ucl.ac.uk/spm/covid-19/dashboard
2311.13339
Michael Shapiro
Anna Laddach, Michael Shapiro
Non-deterministic linear thresholding systems reveal their deterministic origins
4 pages
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Linear thresholding systems have been used as a model of neural activation and have more recently been proposed as a model of gene activation. Deterministic linear thresholding systems can be turned into non-deterministic systems by the introduction of noise. Under mild conditions on the noise, we show that the deterministic model can be deduced from the probabilities of the non-deterministic model.
[ { "created": "Wed, 22 Nov 2023 12:03:42 GMT", "version": "v1" } ]
2023-11-23
[ [ "Laddach", "Anna", "" ], [ "Shapiro", "Michael", "" ] ]
Linear thresholding systems have been used as a model of neural activation and have more recently been proposed as a model of gene activation. Deterministic linear thresholding systems can be turned into non-deterministic systems by the introduction of noise. Under mild conditions on the noise, we show that the deterministic model can be deduced from the probabilities of the non-deterministic model.
0812.2174
Lucas Wardil
L. Wardil and J. K. L. da Silva
A discrete inhomogeneous model for the yeast cell cycle
5 pages, 1 figure
Brazilian Journal of Physics, vol. 38, no. 3A, September, 2008
null
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the robustness and stability of the yeast cell regulatory network by using a general inhomogeneous discrete model. We find that inhomogeneity, on average, enhances the stability of the biggest attractor of the dynamics and that the large size of the basin of attraction is robust against changes in the parameters of inhomogeneity. We find that the most frequent orbit, which represents the cell-cycle pathway, has a better biological meaning than the one exhibited by the homogeneous model.
[ { "created": "Thu, 11 Dec 2008 15:33:38 GMT", "version": "v1" } ]
2008-12-12
[ [ "Wardil", "L.", "" ], [ "da Silva", "J. K. L.", "" ] ]
We study the robustness and stability of the yeast cell regulatory network by using a general inhomogeneous discrete model. We find that inhomogeneity, on average, enhances the stability of the biggest attractor of the dynamics and that the large size of the basin of attraction is robust against changes in the parameters of inhomogeneity. We find that the most frequent orbit, which represents the cell-cycle pathway, has a better biological meaning than the one exhibited by the homogeneous model.
2007.15673
Brian Skinner
Calvin Pozderac and Brian Skinner
Superspreading of SARS-CoV-2 in the USA
7+9 pages, 3+3 figures; slightly updated numerical estimates; published version
PLoS ONE 16(3): e0248808 (2021)
10.1371/journal.pone.0248808
null
q-bio.PE physics.soc-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A number of epidemics, including the SARS-CoV-1 epidemic of 2002-2004, have been known to exhibit superspreading, in which a small fraction of infected individuals is responsible for the majority of new infections. The existence of superspreading implies a fat-tailed distribution of infectiousness (new secondary infections caused per day) among different individuals. Here, we present a simple method to estimate the variation in infectiousness by examining the variation in early-time growth rates of new cases among different subpopulations. We use this method to estimate the mean and variance in the infectiousness, $\beta$, for SARS-CoV-2 transmission during the early stages of the pandemic within the United States. We find that $\sigma_\beta/\mu_\beta \gtrsim 3.2$, where $\mu_\beta$ is the mean infectiousness and $\sigma_\beta$ its standard deviation, which implies pervasive superspreading. This result allows us to estimate that in the early stages of the pandemic in the USA, over 81% of new cases were a result of the top 10% of most infectious individuals.
[ { "created": "Thu, 30 Jul 2020 18:09:29 GMT", "version": "v1" }, { "created": "Wed, 30 Sep 2020 01:21:23 GMT", "version": "v2" }, { "created": "Tue, 30 Mar 2021 03:36:08 GMT", "version": "v3" } ]
2021-03-31
[ [ "Pozderac", "Calvin", "" ], [ "Skinner", "Brian", "" ] ]
A number of epidemics, including the SARS-CoV-1 epidemic of 2002-2004, have been known to exhibit superspreading, in which a small fraction of infected individuals is responsible for the majority of new infections. The existence of superspreading implies a fat-tailed distribution of infectiousness (new secondary infections caused per day) among different individuals. Here, we present a simple method to estimate the variation in infectiousness by examining the variation in early-time growth rates of new cases among different subpopulations. We use this method to estimate the mean and variance in the infectiousness, $\beta$, for SARS-CoV-2 transmission during the early stages of the pandemic within the United States. We find that $\sigma_\beta/\mu_\beta \gtrsim 3.2$, where $\mu_\beta$ is the mean infectiousness and $\sigma_\beta$ its standard deviation, which implies pervasive superspreading. This result allows us to estimate that in the early stages of the pandemic in the USA, over 81% of new cases were a result of the top 10% of most infectious individuals.
q-bio/0611003
Laurent Perrinet
Laurent Perrinet (INCM)
Feature detection using spikes: the greedy approach
This work links Matching Pursuit with Bayesian inference by providing the underlying hypotheses (linear model, uniform prior, Gaussian noise model). A parallel with the parallel and event-based nature of neural computations is explored and we show application to modelling the primary visual cortex / image processing. http://incm.cnrs-mrs.fr/perrinet/dynn/LaurentPerrinet/Publications/Perrinet04tauc
Journal of physiology, Paris. 98 (28/11/2005) 530--9
10.1016/j.jphysparis.2005.09.012
null
q-bio.NC
null
A goal of low-level neural processes is to build an efficient code extracting the relevant information from the sensory input. It is believed that this is implemented in cortical areas by elementary inferential computations dynamically extracting the most likely parameters corresponding to the sensory signal. We explore here a neuro-mimetic feed-forward model of the primary visual area (V1) solving this problem in the case where the signal may be described by a robust linear generative model. This model uses an over-complete dictionary of primitives which provides a distributed probabilistic representation of input features. Relying on an efficiency criterion, we derive an algorithm as an approximate solution which uses incremental greedy inference processes. This algorithm is similar to 'Matching Pursuit' and mimics the parallel architecture of neural computations. We propose here a simple implementation using a network of spiking integrate-and-fire neurons which communicate using lateral interactions. Numerical simulations show that this Sparse Spike Coding strategy provides an efficient model for representing visual data from a set of natural images. Even though it is simplistic, this transformation of spatial data into a spatio-temporal pattern of binary events provides an accurate description of some complex neural patterns observed in the spiking activity of biological neural networks.
[ { "created": "Wed, 1 Nov 2006 20:42:07 GMT", "version": "v1" }, { "created": "Thu, 2 Nov 2006 11:13:51 GMT", "version": "v2" } ]
2007-05-23
[ [ "Perrinet", "Laurent", "", "INCM" ] ]
A goal of low-level neural processes is to build an efficient code extracting the relevant information from the sensory input. It is believed that this is implemented in cortical areas by elementary inferential computations dynamically extracting the most likely parameters corresponding to the sensory signal. We explore here a neuro-mimetic feed-forward model of the primary visual area (V1) solving this problem in the case where the signal may be described by a robust linear generative model. This model uses an over-complete dictionary of primitives which provides a distributed probabilistic representation of input features. Relying on an efficiency criterion, we derive an algorithm as an approximate solution which uses incremental greedy inference processes. This algorithm is similar to 'Matching Pursuit' and mimics the parallel architecture of neural computations. We propose here a simple implementation using a network of spiking integrate-and-fire neurons which communicate using lateral interactions. Numerical simulations show that this Sparse Spike Coding strategy provides an efficient model for representing visual data from a set of natural images. Even though it is simplistic, this transformation of spatial data into a spatio-temporal pattern of binary events provides an accurate description of some complex neural patterns observed in the spiking activity of biological neural networks.
q-bio/0408004
Marconi Barbosa Dr
M. S. Barbosa and L. da F. Costa and E. S. Bernardes and G. Ramakers and J. van Pelt
Characterizing neuromorphologic alterations with additive shape functionals
null
Eur. Phys. J. B 37, 109 115 (2004)
10.1140/epjb/e2004-00035-y
null
q-bio.QM cond-mat.stat-mech
null
The complexity of a neuronal cell shape is known to be related to its function. Specifically, among other indicators, a decreased complexity in the dendritic trees of cortical pyramidal neurons has been associated with mental retardation. In this paper we develop a procedure to address the characterization of morphological changes induced in cultured neurons by over-expressing a gene involved in mental retardation. Measures associated with the multiscale connectivity, an additive image functional, are found to give a reasonable separation criterion between two categories of cells. One category consists of a control group and two transfected groups of neurons, and the other, a class of cat ganglion cells. The reported framework also identified a trend towards lower complexity in one of the transfected groups. Such results establish the suggested measures as effective descriptors of cell shape.
[ { "created": "Mon, 9 Aug 2004 18:13:22 GMT", "version": "v1" } ]
2007-05-23
[ [ "Barbosa", "M. S.", "" ], [ "Costa", "L. da F.", "" ], [ "Bernardes", "E. S.", "" ], [ "Ramakers", "G.", "" ], [ "van Pelt", "J.", "" ] ]
The complexity of a neuronal cell shape is known to be related to its function. Specifically, among other indicators, a decreased complexity in the dendritic trees of cortical pyramidal neurons has been associated with mental retardation. In this paper we develop a procedure to address the characterization of morphological changes induced in cultured neurons by over-expressing a gene involved in mental retardation. Measures associated with the multiscale connectivity, an additive image functional, are found to give a reasonable separation criterion between two categories of cells. One category consists of a control group and two transfected groups of neurons, and the other, a class of cat ganglion cells. The reported framework also identified a trend towards lower complexity in one of the transfected groups. Such results establish the suggested measures as effective descriptors of cell shape.
2303.17615
Hampus Gummesson Svensson
Hampus Gummesson Svensson, Christian Tyrchan, Ola Engkvist, Morteza Haghir Chehreghani
Utilizing Reinforcement Learning for de novo Drug Design
null
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
Deep learning-based approaches for generating novel drug molecules with specific properties have gained a lot of interest in the last few years. Recent studies have demonstrated promising performance for string-based generation of novel molecules utilizing reinforcement learning. In this paper, we develop a unified framework for using reinforcement learning for de novo drug design, wherein we systematically study various on- and off-policy reinforcement learning algorithms and replay buffers to learn an RNN-based policy to generate novel molecules predicted to be active against the dopamine receptor DRD2. Our findings suggest that it is advantageous to use at least both top-scoring and low-scoring molecules for updating the policy when structural diversity is essential. Using all generated molecules at an iteration seems to enhance performance stability for on-policy algorithms. In addition, when replaying high, intermediate, and low-scoring molecules, off-policy algorithms display the potential of improving the structural diversity and number of active molecules generated, but possibly at the cost of a longer exploration phase. Our work provides an open-source framework enabling researchers to investigate various reinforcement learning methods for de novo drug design.
[ { "created": "Thu, 30 Mar 2023 07:40:50 GMT", "version": "v1" }, { "created": "Tue, 30 Jan 2024 21:09:48 GMT", "version": "v2" } ]
2024-02-01
[ [ "Svensson", "Hampus Gummesson", "" ], [ "Tyrchan", "Christian", "" ], [ "Engkvist", "Ola", "" ], [ "Chehreghani", "Morteza Haghir", "" ] ]
Deep learning-based approaches for generating novel drug molecules with specific properties have gained a lot of interest in the last few years. Recent studies have demonstrated promising performance for string-based generation of novel molecules utilizing reinforcement learning. In this paper, we develop a unified framework for using reinforcement learning for de novo drug design, wherein we systematically study various on- and off-policy reinforcement learning algorithms and replay buffers to learn an RNN-based policy to generate novel molecules predicted to be active against the dopamine receptor DRD2. Our findings suggest that it is advantageous to use at least both top-scoring and low-scoring molecules for updating the policy when structural diversity is essential. Using all generated molecules at an iteration seems to enhance performance stability for on-policy algorithms. In addition, when replaying high, intermediate, and low-scoring molecules, off-policy algorithms display the potential of improving the structural diversity and number of active molecules generated, but possibly at the cost of a longer exploration phase. Our work provides an open-source framework enabling researchers to investigate various reinforcement learning methods for de novo drug design.
2005.00004
Timothy Schacker
L. Schifanella, J.L. Anderson, M. Galli, M. Corbellino, A. Lai, G. Wieking, B. Grzywacz, N.R. Klatt, A.T. Haase and T.W. Schacker
Massive viral replication and cytopathic effects in early COVID-19 pneumonia
19 pages, 4 figures, 3 extended data figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
SARS-CoV-2 is the cause of COVID-19 acute respiratory illness that like its predecessors, MERS and SARS, can be severe and fatal 1-4. By April of 2020, COVID-19 infections had become a worldwide pandemic with nearly 3 million infections and over 200,000 deaths. The relative contributions of virus replication and cytopathic effects or immunopathological host responses to the severe and fatal outcomes of COVID-19 lung infections have as yet to be determined. Here we show that SARS-CoV-2 replication and cytopathic effects in type II alveolar pneumocytes causes focal lung injury in an individual with no history of pulmonary symptoms. These findings point to the potential benefit of early effective antiviral treatment to prevent progression to severe and fatal COVID-19 pneumonia.
[ { "created": "Thu, 30 Apr 2020 12:39:22 GMT", "version": "v1" } ]
2020-05-04
[ [ "Schifanella", "L.", "" ], [ "Anderson", "J. L.", "" ], [ "Galli", "M.", "" ], [ "Corbellino", "M.", "" ], [ "Lai", "A.", "" ], [ "Wieking", "G.", "" ], [ "Grzywacz", "B.", "" ], [ "Klatt", "N. R.", "" ], [ "Haase", "A. T.", "" ], [ "Schacker", "T. W.", "" ] ]
SARS-CoV-2 is the cause of COVID-19 acute respiratory illness that, like its predecessors MERS and SARS, can be severe and fatal 1-4. By April of 2020, COVID-19 infections had become a worldwide pandemic with nearly 3 million infections and over 200,000 deaths. The relative contributions of virus replication and cytopathic effects or immunopathological host responses to the severe and fatal outcomes of COVID-19 lung infections have yet to be determined. Here we show that SARS-CoV-2 replication and cytopathic effects in type II alveolar pneumocytes cause focal lung injury in an individual with no history of pulmonary symptoms. These findings point to the potential benefit of early effective antiviral treatment to prevent progression to severe and fatal COVID-19 pneumonia.
2111.10374
Dipam Goswami Mr.
Dipam Goswami, Hari Om Aggrawal, Rajiv Gupta, Vinti Agarwal
Urine Microscopic Image Dataset
7 pages, 1 image
null
null
null
q-bio.QM cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Urinalysis is a standard diagnostic test to detect urinary system related problems. The automation of urinalysis will reduce the overall diagnostic time. Recent studies used urine microscopic datasets for designing deep learning based algorithms to classify and detect urine cells. But these datasets are not publicly available for further research. To alleviate the need for urine datsets, we prepare our urine sediment microscopic image (UMID) dataset comprising of around 3700 cell annotations and 3 categories of cells namely RBC, pus and epithelial cells. We discuss the several challenges involved in preparing the dataset and the annotations. We make the dataset publicly available.
[ { "created": "Fri, 19 Nov 2021 13:11:04 GMT", "version": "v1" } ]
2021-11-23
[ [ "Goswami", "Dipam", "" ], [ "Aggrawal", "Hari Om", "" ], [ "Gupta", "Rajiv", "" ], [ "Agarwal", "Vinti", "" ] ]
Urinalysis is a standard diagnostic test to detect urinary system related problems. The automation of urinalysis will reduce the overall diagnostic time. Recent studies used urine microscopic datasets for designing deep learning based algorithms to classify and detect urine cells. But these datasets are not publicly available for further research. To alleviate the need for urine datasets, we prepare our urine sediment microscopic image (UMID) dataset comprising around 3700 cell annotations and three categories of cells, namely RBC, pus, and epithelial cells. We discuss the several challenges involved in preparing the dataset and the annotations. We make the dataset publicly available.
1312.7774
Tengiz Zorikov
Tengiz Zorikov
Echo-processing mechanisms in bottlenose dolphins
10 pg. 8 fig
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The mechanisms of echo-processing were investigated in our experiments, conducted on bottlenose dolphins. Hierarchically organized system of independent dimensions, describing echoes in animals perception, was revealed. The rules of discrimination and recognition of echoes in dolphins were established.
[ { "created": "Mon, 30 Dec 2013 16:47:08 GMT", "version": "v1" } ]
2014-01-02
[ [ "Zorikov", "Tengiz", "" ] ]
The mechanisms of echo-processing were investigated in our experiments, conducted on bottlenose dolphins. A hierarchically organized system of independent dimensions describing echoes in the animals' perception was revealed. The rules of discrimination and recognition of echoes in dolphins were established.
2401.12477
David Murrugarra
David Murrugarra, Alan Veliz-Cuba, Elena Dimitrova, Claus Kadelka, Matthew Wheeler, Reinhard Laubenbacher
Modular Control of Biological Networks
16 pages, 6 figures. arXiv admin note: text overlap with arXiv:2206.04217
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
The concept of control is central to understanding and applications of biological network models. Some of their key structural features relate to control functions, through gene regulation, signaling, or metabolic mechanisms, and computational models need to encode these. Applications of models often focus on model-based control, such as in biomedicine or metabolic engineering. This paper presents an approach to model-based control that exploits two common features of biological networks, namely their modular structure and canalizing features of their regulatory mechanisms. The paper focuses on intracellular regulatory networks, represented by Boolean network models. A main result of this paper is that control strategies can be identified by focusing on one module at a time. This paper also presents a criterion based on canalizing features of the regulatory rules to identify modules that do not contribute to network control and can be excluded. For even moderately sized networks, finding global control inputs is computationally very challenging. The modular approach presented here leads to a highly efficient approach to solving this problem. This approach is applied to a published Boolean network model of blood cancer large granular lymphocyte (T-LGL) leukemia to identify a minimal control set that achieves a desired control objective.
[ { "created": "Tue, 23 Jan 2024 04:13:31 GMT", "version": "v1" }, { "created": "Sun, 7 Jul 2024 19:56:31 GMT", "version": "v2" } ]
2024-07-09
[ [ "Murrugarra", "David", "" ], [ "Veliz-Cuba", "Alan", "" ], [ "Dimitrova", "Elena", "" ], [ "Kadelka", "Claus", "" ], [ "Wheeler", "Matthew", "" ], [ "Laubenbacher", "Reinhard", "" ] ]
The concept of control is central to understanding and applications of biological network models. Some of their key structural features relate to control functions, through gene regulation, signaling, or metabolic mechanisms, and computational models need to encode these. Applications of models often focus on model-based control, such as in biomedicine or metabolic engineering. This paper presents an approach to model-based control that exploits two common features of biological networks, namely their modular structure and canalizing features of their regulatory mechanisms. The paper focuses on intracellular regulatory networks, represented by Boolean network models. A main result of this paper is that control strategies can be identified by focusing on one module at a time. This paper also presents a criterion based on canalizing features of the regulatory rules to identify modules that do not contribute to network control and can be excluded. For even moderately sized networks, finding global control inputs is computationally very challenging. The modular approach presented here leads to a highly efficient approach to solving this problem. This approach is applied to a published Boolean network model of blood cancer large granular lymphocyte (T-LGL) leukemia to identify a minimal control set that achieves a desired control objective.
1506.00597
Vipul Periwal
Carson C. Chow, Yanjun Li and Vipul Periwal
The Universality of Cancer
5 pages, 1 figure
null
null
null
q-bio.TO q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cancer has been characterized as a constellation of hundreds of diseases differing in underlying mutations and depending on cellular environments. Carcinogenesis as a stochastic physical process has been studied for over sixty years, but there is no accepted standard model. We show that the hazard rates of all cancers are characterized by a simple dynamic stochastic process on a half-line, with a universal linear restoring force balancing a universal simple Brownian motion starting from a universal initial distribution. Only a critical radius defining the transition from normal to tumorigenic genomes distinguishes between different cancer types when time is measured in cell--cycle units. Reparametrizing to chronological time units introduces two additional parameters: the onset of cellular senescence with age and the time interval over which this cessation in replication takes place. This universality implies that there may exist a finite separation between normal cells and tumorigenic cells in all tissue types that may be a viable target for both early detection and preventive therapy.
[ { "created": "Mon, 1 Jun 2015 18:28:48 GMT", "version": "v1" } ]
2015-06-02
[ [ "Chow", "Carson C.", "" ], [ "Li", "Yanjun", "" ], [ "Periwal", "Vipul", "" ] ]
Cancer has been characterized as a constellation of hundreds of diseases differing in underlying mutations and depending on cellular environments. Carcinogenesis as a stochastic physical process has been studied for over sixty years, but there is no accepted standard model. We show that the hazard rates of all cancers are characterized by a simple dynamic stochastic process on a half-line, with a universal linear restoring force balancing a universal simple Brownian motion starting from a universal initial distribution. Only a critical radius defining the transition from normal to tumorigenic genomes distinguishes between different cancer types when time is measured in cell--cycle units. Reparametrizing to chronological time units introduces two additional parameters: the onset of cellular senescence with age and the time interval over which this cessation in replication takes place. This universality implies that there may exist a finite separation between normal cells and tumorigenic cells in all tissue types that may be a viable target for both early detection and preventive therapy.
1601.03243
Andrea De Martino
Daniele De Martino, Fabrizio Capuani, Andrea De Martino
Growth against entropy in bacterial metabolism: the phenotypic trade-off behind empirical growth rate distributions in E. coli
12 pages, 5 figures
Phys. Biol. 13 (2016) 036005
10.1088/1478-3975/13/3/036005
null
q-bio.MN cond-mat.dis-nn physics.bio-ph q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The solution space of genome-scale models of cellular metabolism provides a map between physically viable flux configurations and cellular metabolic phenotypes described, at the most basic level, by the corresponding growth rates. By sampling the solution space of E. coli's metabolic network, we show that empirical growth rate distributions recently obtained in experiments at single-cell resolution can be explained in terms of a trade-off between the higher fitness of fast-growing phenotypes and the higher entropy of slow-growing ones. Based on this, we propose a minimal model for the evolution of a large bacterial population that captures this trade-off. The scaling relationships observed in experiments encode, in such frameworks, for the same distance from the maximum achievable growth rate, the same degree of growth rate maximization, and/or the same rate of phenotypic change. Being grounded on genome-scale metabolic network reconstructions, these results allow for multiple implications and extensions in spite of the underlying conceptual simplicity.
[ { "created": "Wed, 13 Jan 2016 13:54:38 GMT", "version": "v1" }, { "created": "Fri, 27 May 2016 14:11:22 GMT", "version": "v2" } ]
2016-05-30
[ [ "De Martino", "Daniele", "" ], [ "Capuani", "Fabrizio", "" ], [ "De Martino", "Andrea", "" ] ]
The solution space of genome-scale models of cellular metabolism provides a map between physically viable flux configurations and cellular metabolic phenotypes described, at the most basic level, by the corresponding growth rates. By sampling the solution space of E. coli's metabolic network, we show that empirical growth rate distributions recently obtained in experiments at single-cell resolution can be explained in terms of a trade-off between the higher fitness of fast-growing phenotypes and the higher entropy of slow-growing ones. Based on this, we propose a minimal model for the evolution of a large bacterial population that captures this trade-off. The scaling relationships observed in experiments encode, in such frameworks, for the same distance from the maximum achievable growth rate, the same degree of growth rate maximization, and/or the same rate of phenotypic change. Being grounded on genome-scale metabolic network reconstructions, these results allow for multiple implications and extensions in spite of the underlying conceptual simplicity.
2107.02835
Semih Kara
Semih Kara, Nuno C. Martins
Pairwise Comparison Evolutionary Dynamics with Strategy-Dependent Revision Rates: Stability and Delta-Passivity (Expanded Version)
null
null
null
null
q-bio.PE cs.SY eess.SY math.DS math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We report on new stability conditions for evolutionary dynamics in the context of population games. We adhere to the prevailing framework consisting of many agents, grouped into populations, that interact noncooperatively by selecting strategies with a favorable payoff. Each agent is repeatedly allowed to revise its strategy at a rate referred to as revision rate. Previous stability results considered either that the payoff mechanism was a memoryless potential game, or allowed for dynamics (in the payoff mechanism) at the expense of precluding any explicit dependence of the agents' revision rates on their current strategies. Allowing the dependence of revision rates on strategies is relevant because the agents' strategies at any point in time are generally unequal. To allow for strategy-dependent revision rates and payoff mechanisms that are dynamic (or memoryless games that are not potential), we focus on an evolutionary dynamics class obtained from a straightforward modification of one that stems from the so-called impartial pairwise comparison strategy revision protocol. Revision protocols consistent with the modified class retain from those in the original one the advantage that the agents operate in a fully decentralized manner and with minimal information requirements - they need to access only the payoff values (not the mechanism) of the available strategies. Our main results determine conditions under which system-theoretic passivity properties are assured, which we leverage for stability analysis.
[ { "created": "Tue, 6 Jul 2021 18:33:31 GMT", "version": "v1" } ]
2021-07-08
[ [ "Kara", "Semih", "" ], [ "Martins", "Nuno C.", "" ] ]
We report on new stability conditions for evolutionary dynamics in the context of population games. We adhere to the prevailing framework consisting of many agents, grouped into populations, that interact noncooperatively by selecting strategies with a favorable payoff. Each agent is repeatedly allowed to revise its strategy at a rate referred to as revision rate. Previous stability results considered either that the payoff mechanism was a memoryless potential game, or allowed for dynamics (in the payoff mechanism) at the expense of precluding any explicit dependence of the agents' revision rates on their current strategies. Allowing the dependence of revision rates on strategies is relevant because the agents' strategies at any point in time are generally unequal. To allow for strategy-dependent revision rates and payoff mechanisms that are dynamic (or memoryless games that are not potential), we focus on an evolutionary dynamics class obtained from a straightforward modification of one that stems from the so-called impartial pairwise comparison strategy revision protocol. Revision protocols consistent with the modified class retain from those in the original one the advantage that the agents operate in a fully decentralized manner and with minimal information requirements - they need to access only the payoff values (not the mechanism) of the available strategies. Our main results determine conditions under which system-theoretic passivity properties are assured, which we leverage for stability analysis.
1812.08721
Eleni Panagiotou
E. Panagiotou and K. W. Plaxco
A topological study of protein folding kinetics
13 pages
null
null
null
q-bio.QM cond-mat.soft math.GT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Focusing on a small set of proteins that i) fold in a concerted, all-or-none fashion and ii) do not contain knots or slipknots, we show that the Gauss linking integral, the torsion and the number of sequence-distant contacts provide information regarding the folding rate. Our results suggest that the global topology/geometry of the proteins shifts from right-handed to left-handed with decreasing folding rate, and that this topological change is associated with an increase in the number of more sequence-distant contacts.
[ { "created": "Mon, 3 Dec 2018 13:10:58 GMT", "version": "v1" }, { "created": "Sat, 28 Sep 2019 14:19:29 GMT", "version": "v2" } ]
2019-10-01
[ [ "Panagiotou", "E.", "" ], [ "Plaxco", "K. W.", "" ] ]
Focusing on a small set of proteins that i) fold in a concerted, all-or-none fashion and ii) do not contain knots or slipknots, we show that the Gauss linking integral, the torsion and the number of sequence-distant contacts provide information regarding the folding rate. Our results suggest that the global topology/geometry of the proteins shifts from right-handed to left-handed with decreasing folding rate, and that this topological change is associated with an increase in the number of more sequence-distant contacts.
2002.04484
Robert Marsland III
Fernanda S. Valdovinos and Robert Marsland III
Niche theory for mutualism: A graphical approach to plant-pollinator network dynamics
41 pages, 8 figures
null
null
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contemporary Niche Theory is a useful framework for understanding how organisms interact with each other and with their shared environment. Its graphical representation, popularized by Tilman's Resource Ratio Hypothesis, facilitates the analysis of the equilibrium structure of complex dynamical models including species coexistence. This theory has been applied primarily to resource competition since its early beginnings. Here, we integrate mutualism into niche theory by expanding Tilman's graphical representation to the analysis of consumer-resource dynamics of plant-pollinator networks. We graphically explain the qualitative phenomena previously found by numerical simulations, including the effects on community dynamics of nestedness, adaptive foraging, and pollinator invasions. Our graphical approach promotes the unification of niche and network theories, and deepens the synthesis of different types of interactions within a consumer-resource framework.
[ { "created": "Tue, 11 Feb 2020 15:39:13 GMT", "version": "v1" }, { "created": "Wed, 8 Jul 2020 21:35:23 GMT", "version": "v2" } ]
2020-07-10
[ [ "Valdovinos", "Fernanda S.", "" ], [ "Marsland", "Robert", "III" ] ]
Contemporary Niche Theory is a useful framework for understanding how organisms interact with each other and with their shared environment. Its graphical representation, popularized by Tilman's Resource Ratio Hypothesis, facilitates the analysis of the equilibrium structure of complex dynamical models including species coexistence. This theory has been applied primarily to resource competition since its early beginnings. Here, we integrate mutualism into niche theory by expanding Tilman's graphical representation to the analysis of consumer-resource dynamics of plant-pollinator networks. We graphically explain the qualitative phenomena previously found by numerical simulations, including the effects on community dynamics of nestedness, adaptive foraging, and pollinator invasions. Our graphical approach promotes the unification of niche and network theories, and deepens the synthesis of different types of interactions within a consumer-resource framework.
2112.15109
Nour Almadhoun Alserr
Nour Almadhoun Alserr, Ozgur Ulusoy, Erman Ayday, and Onur Mutlu
GenShare: Sharing Accurate Differentially-Private Statistics for Genomic Datasets with Dependent Tuples
8 pages, 7 figures
null
null
null
q-bio.GN cs.CR
http://creativecommons.org/licenses/by/4.0/
Motivation: Cutting the cost of DNA sequencing technology led to a quantum leap in the availability of genomic data. While sharing genomic data across researchers is an essential driver of advances in health and biomedical research, the sharing process is often infeasible due to data privacy concerns. Differential privacy is one of the rigorous mechanisms utilized to facilitate the sharing of aggregate statistics from genomic datasets without disclosing any private individual-level data. However, differential privacy can still divulge sensitive information about the dataset participants due to the correlation between dataset tuples. Results: Here, we propose GenShare model built upon Laplace-perturbation-mechanism-based DP to introduce a privacy-preserving query-answering sharing model for statistical genomic datasets that include dependency due to the inherent correlations between genomes of individuals (i.e., family ties). We demonstrate our privacy improvement over the state-of-the-art approaches for a range of practical queries including cohort discovery, minor allele frequency, and chi^2 association tests. With a fine-grained analysis of sensitivity in the Laplace perturbation mechanism and considering joint distributions, GenShare results near-achieve the formal privacy guarantees permitted by the theory of differential privacy as the queries that computed over independent tuples (only up to 6% differences). GenShare ensures that query results are as accurate as theoretically guaranteed by differential privacy. For empowering the advances in different scientific and medical research areas, GenShare presents a path toward an interactive genomic data sharing system when the datasets include participants with familial relationships.
[ { "created": "Thu, 30 Dec 2021 16:05:26 GMT", "version": "v1" } ]
2022-01-03
[ [ "Alserr", "Nour Almadhoun", "" ], [ "Ulusoy", "Ozgur", "" ], [ "Ayday", "Erman", "" ], [ "Mutlu", "Onur", "" ] ]
Motivation: Cutting the cost of DNA sequencing technology led to a quantum leap in the availability of genomic data. While sharing genomic data across researchers is an essential driver of advances in health and biomedical research, the sharing process is often infeasible due to data privacy concerns. Differential privacy is one of the rigorous mechanisms utilized to facilitate the sharing of aggregate statistics from genomic datasets without disclosing any private individual-level data. However, differential privacy can still divulge sensitive information about the dataset participants due to the correlation between dataset tuples. Results: Here, we propose the GenShare model, built upon Laplace-perturbation-mechanism-based DP, to introduce a privacy-preserving query-answering sharing model for statistical genomic datasets that include dependency due to the inherent correlations between genomes of individuals (i.e., family ties). We demonstrate our privacy improvement over the state-of-the-art approaches for a range of practical queries including cohort discovery, minor allele frequency, and chi^2 association tests. With a fine-grained analysis of sensitivity in the Laplace perturbation mechanism and considering joint distributions, GenShare results near-achieve the formal privacy guarantees permitted by the theory of differential privacy for queries computed over independent tuples (differences of at most 6%). GenShare ensures that query results are as accurate as theoretically guaranteed by differential privacy. By empowering advances in different scientific and medical research areas, GenShare presents a path toward an interactive genomic data sharing system when the datasets include participants with familial relationships.
1901.05935
Rocio Joo
Rocio Joo, Matthew E. Boone, Thomas A. Clay, Samantha C. Patrick, Susana Clusella-Trullas, Mathieu Basille
Navigating through the R packages for movement
77 pages, 4 figures
Journal of Animal Ecology, 2019
10.1111/1365-2656.13116
null
q-bio.QM stat.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of miniaturized biologging devices has provided ecologists with unprecedented opportunities to record animal movement across scales, and led to the collection of ever-increasing quantities of tracking data. In parallel, sophisticated tools have been developed to process, visualize and analyze tracking data, however many of these tools have proliferated in isolation, making it challenging for users to select the most appropriate method for the question in hand. Indeed, within the R software alone, we listed 58 packages created to deal with tracking data or 'tracking packages'. Here we reviewed and described each tracking package based on a workflow centered around tracking data (i.e. spatio-temporal locations (x,y,t)), broken down into three stages: pre-processing, post-processing and analysis, the latter consisting of data visualization, track description, path reconstruction, behavioral pattern identification, space use characterization, trajectory simulation and others. Supporting documentation is key to render a package accessible for users. Based on a user survey, we reviewed the quality of packages' documentation, and identified 11 packages with good or excellent documentation. Links between packages were assessed through a network graph analysis. Although a large group of packages showed some degree of connectivity (either depending on functions or suggesting the use of another tracking package), one third of the packages worked in isolation, reflecting a fragmentation in the R movement-ecology programming community. Finally, we provide recommendations for users when choosing packages, and for developers to maximize the usefulness of their contribution and strengthen the links within the programming community.
[ { "created": "Thu, 17 Jan 2019 18:13:52 GMT", "version": "v1" }, { "created": "Mon, 22 Jul 2019 20:10:27 GMT", "version": "v2" }, { "created": "Mon, 14 Oct 2019 18:31:49 GMT", "version": "v3" } ]
2019-10-16
[ [ "Joo", "Rocio", "" ], [ "Boone", "Matthew E.", "" ], [ "Clay", "Thomas A.", "" ], [ "Patrick", "Samantha C.", "" ], [ "Clusella-Trullas", "Susana", "" ], [ "Basille", "Mathieu", "" ] ]
The advent of miniaturized biologging devices has provided ecologists with unprecedented opportunities to record animal movement across scales, and led to the collection of ever-increasing quantities of tracking data. In parallel, sophisticated tools have been developed to process, visualize and analyze tracking data, however many of these tools have proliferated in isolation, making it challenging for users to select the most appropriate method for the question in hand. Indeed, within the R software alone, we listed 58 packages created to deal with tracking data or 'tracking packages'. Here we reviewed and described each tracking package based on a workflow centered around tracking data (i.e. spatio-temporal locations (x,y,t)), broken down into three stages: pre-processing, post-processing and analysis, the latter consisting of data visualization, track description, path reconstruction, behavioral pattern identification, space use characterization, trajectory simulation and others. Supporting documentation is key to render a package accessible for users. Based on a user survey, we reviewed the quality of packages' documentation, and identified 11 packages with good or excellent documentation. Links between packages were assessed through a network graph analysis. Although a large group of packages showed some degree of connectivity (either depending on functions or suggesting the use of another tracking package), one third of the packages worked in isolation, reflecting a fragmentation in the R movement-ecology programming community. Finally, we provide recommendations for users when choosing packages, and for developers to maximize the usefulness of their contribution and strengthen the links within the programming community.
2312.13991
Jeffrey West
Cristian Axenie, Oliver L\'opez-Corona, Michail A. Makridis, Meisam Akbarzadeh, Matteo Saveriano, Alexandru Stancu, Jeffrey West
Antifragility as a complex system's response to perturbations, volatility, and time
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Antifragility characterizes the benefit of a dynamical system derived from the variability in environmental perturbations. Antifragility carries a precise definition that quantifies a system's output response to input variability. Systems may respond poorly to perturbations (fragile) or benefit from perturbations (antifragile). In this manuscript, we review a range of applications of antifragility theory in technical systems (e.g., traffic control, robotics) and natural systems (e.g., cancer therapy, antibiotics). While there is a broad overlap in methods used to quantify and apply antifragility across disciplines, there is a need for precisely defining the scales at which antifragility operates. Thus, we provide a brief general introduction to the properties of antifragility in applied systems and review relevant literature for both natural and technical systems' antifragility. We frame this review within three scales common to technical systems: intrinsic (input-output nonlinearity), inherited (extrinsic environmental signals), and interventional (feedback control), with associated counterparts in biological systems: ecological (homogeneous systems), evolutionary (heterogeneous systems), and interventional (control). We use the common noun in designing systems that exhibit antifragile behavior across scales and guide the reader along the spectrum of fragility-adaptiveness-resilience-robustness-antifragility, the principles behind it, and its practical implications.
[ { "created": "Thu, 21 Dec 2023 16:27:31 GMT", "version": "v1" } ]
2023-12-22
[ [ "Axenie", "Cristian", "" ], [ "López-Corona", "Oliver", "" ], [ "Makridis", "Michail A.", "" ], [ "Akbarzadeh", "Meisam", "" ], [ "Saveriano", "Matteo", "" ], [ "Stancu", "Alexandru", "" ], [ "West", "Jeffrey", "" ] ]
Antifragility characterizes the benefit of a dynamical system derived from the variability in environmental perturbations. Antifragility carries a precise definition that quantifies a system's output response to input variability. Systems may respond poorly to perturbations (fragile) or benefit from perturbations (antifragile). In this manuscript, we review a range of applications of antifragility theory in technical systems (e.g., traffic control, robotics) and natural systems (e.g., cancer therapy, antibiotics). While there is a broad overlap in methods used to quantify and apply antifragility across disciplines, there is a need for precisely defining the scales at which antifragility operates. Thus, we provide a brief general introduction to the properties of antifragility in applied systems and review relevant literature for both natural and technical systems' antifragility. We frame this review within three scales common to technical systems: intrinsic (input-output nonlinearity), inherited (extrinsic environmental signals), and interventional (feedback control), with associated counterparts in biological systems: ecological (homogeneous systems), evolutionary (heterogeneous systems), and interventional (control). We use the common noun in designing systems that exhibit antifragile behavior across scales and guide the reader along the spectrum of fragility-adaptiveness-resilience-robustness-antifragility, the principles behind it, and its practical implications.
2311.02258
Trung Le
Lu Mi, Trung Le, Tianxing He, Eli Shlizerman, Uygar S\"umb\"ul
Learning Time-Invariant Representations for Individual Neurons from Population Dynamics
Accepted at NeurIPS 2023
null
null
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neurons can display highly variable dynamics. While such variability presumably supports the wide range of behaviors generated by the organism, their gene expressions are relatively stable in the adult brain. This suggests that neuronal activity is a combination of its time-invariant identity and the inputs the neuron receives from the rest of the circuit. Here, we propose a self-supervised learning based method to assign time-invariant representations to individual neurons based on a permutation- and population size-invariant summary of population recordings. We fit dynamical models to neuronal activity to learn a representation by considering the activity of both the individual and the neighboring population. Our self-supervised approach and use of implicit representations enable robust inference against imperfections such as partial overlap of neurons across sessions, trial-to-trial variability, and limited availability of molecular (transcriptomic) labels for downstream supervised tasks. We demonstrate our method on a public multimodal dataset of mouse cortical neuronal activity and transcriptomic labels. We report > 35% improvement in predicting the transcriptomic subclass identity and > 20% improvement in predicting class identity with respect to the state-of-the-art.
[ { "created": "Fri, 3 Nov 2023 22:30:12 GMT", "version": "v1" } ]
2023-11-07
[ [ "Mi", "Lu", "" ], [ "Le", "Trung", "" ], [ "He", "Tianxing", "" ], [ "Shlizerman", "Eli", "" ], [ "Sümbül", "Uygar", "" ] ]
Neurons can display highly variable dynamics. While such variability presumably supports the wide range of behaviors generated by the organism, their gene expressions are relatively stable in the adult brain. This suggests that neuronal activity is a combination of its time-invariant identity and the inputs the neuron receives from the rest of the circuit. Here, we propose a self-supervised learning based method to assign time-invariant representations to individual neurons based on a permutation- and population size-invariant summary of population recordings. We fit dynamical models to neuronal activity to learn a representation by considering the activity of both the individual and the neighboring population. Our self-supervised approach and use of implicit representations enable robust inference against imperfections such as partial overlap of neurons across sessions, trial-to-trial variability, and limited availability of molecular (transcriptomic) labels for downstream supervised tasks. We demonstrate our method on a public multimodal dataset of mouse cortical neuronal activity and transcriptomic labels. We report > 35% improvement in predicting the transcriptomic subclass identity and > 20% improvement in predicting class identity with respect to the state-of-the-art.
q-bio/0607005
Maurizio Serva
Maurizio Serva
Mitochondrial Dna Replacement Versus Nuclear Dna Persistence
null
null
10.1088/1742-5468/2006/10/P10013
null
q-bio.PE cond-mat.other q-bio.OT
null
In this paper we consider two populations whose generations are not overlapping and whose size is large. The number of males and females in both populations is constant. Any generation is replaced by a new one and any individual has two parents for what concerns nuclear DNA and a single one (the mother) for what concerns mtDNA. Moreover, at any generation some individuals migrate from the first population to the second. In a finite random time $T$, the mtDNA of the second population is completely replaced by the mtDNA of the first. At the same time, the nuclear DNA is not completely replaced and a fraction $F$ of the ancient nuclear DNA persists. We compute both $T$ and $F$. Since this study shows that complete replacement of mtDNA in a population is compatible with the persistence of a large fraction of nuclear DNA, it may have some relevance for the Out of Africa/Multiregional debate in Paleoanthropology.
[ { "created": "Wed, 5 Jul 2006 00:05:29 GMT", "version": "v1" } ]
2009-11-13
[ [ "Serva", "Maurizio", "" ] ]
In this paper we consider two populations whose generations are not overlapping and whose size is large. The number of males and females in both populations is constant. Any generation is replaced by a new one and any individual has two parents for what concerns nuclear DNA and a single one (the mother) for what concerns mtDNA. Moreover, at any generation some individuals migrate from the first population to the second. In a finite random time $T$, the mtDNA of the second population is completely replaced by the mtDNA of the first. At the same time, the nuclear DNA is not completely replaced and a fraction $F$ of the ancient nuclear DNA persists. We compute both $T$ and $F$. Since this study shows that complete replacement of mtDNA in a population is compatible with the persistence of a large fraction of nuclear DNA, it may have some relevance for the Out of Africa/Multiregional debate in Paleoanthropology.
2007.14922
Jan Zrimec
Jan Zrimec
Structural representations of DNA regulatory substrates can enhance sequence-based algorithms by associating functional sequence variants
20 pages, 8 figures, 3 tables, conference
Proceedings of the 11th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics (BCB '20), September 21--24, 2020, Virtual Event, USA
10.1145/3388440.3412482
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The nucleotide sequence representation of DNA can be inadequate for resolving protein-DNA binding sites and regulatory substrates, such as those involved in gene expression and horizontal gene transfer. Considering that sequence-like representations are algorithmically very useful, here we fused over 60 currently available DNA physicochemical and conformational variables into compact structural representations that can encode single DNA binding sites to whole regulatory regions. We find that the main structural components reflect key properties of protein-DNA interactions and can be condensed to the amount of information found in a single nucleotide position. The most accurate structural representations compress functional DNA sequence variants by 30% to 50%, as each instance encodes from tens to thousands of sequences. We show that a structural distance function discriminates among groups of DNA substrates more accurately than nucleotide sequence-based metrics. As this opens up a variety of implementation possibilities, we develop and test a distance-based alignment algorithm, demonstrating the potential of using the structural representations to enhance sequence-based algorithms. Due to the bias of most current bioinformatic methods toward nucleotide sequence representations, it is possible that considerable performance increases might still be achievable with such solutions.
[ { "created": "Wed, 29 Jul 2020 15:56:39 GMT", "version": "v1" } ]
2020-07-30
[ [ "Zrimec", "Jan", "" ] ]
The nucleotide sequence representation of DNA can be inadequate for resolving protein-DNA binding sites and regulatory substrates, such as those involved in gene expression and horizontal gene transfer. Considering that sequence-like representations are algorithmically very useful, here we fused over 60 currently available DNA physicochemical and conformational variables into compact structural representations that can encode single DNA binding sites to whole regulatory regions. We find that the main structural components reflect key properties of protein-DNA interactions and can be condensed to the amount of information found in a single nucleotide position. The most accurate structural representations compress functional DNA sequence variants by 30% to 50%, as each instance encodes from tens to thousands of sequences. We show that a structural distance function discriminates among groups of DNA substrates more accurately than nucleotide sequence-based metrics. As this opens up a variety of implementation possibilities, we develop and test a distance-based alignment algorithm, demonstrating the potential of using the structural representations to enhance sequence-based algorithms. Due to the bias of most current bioinformatic methods toward nucleotide sequence representations, it is possible that considerable performance increases might still be achievable with such solutions.
1408.7073
Masud Mansuripur
Masud Mansuripur
DNA, Human Memory, and the Storage Technology of the 21st Century
29 pages, 26 figures, 40 references
Proceedings of SPIE, T. Hurst and S. Kobayashi, editors, Vol. 4342, pp 1-29 (2002)
10.1117/12.453368
null
q-bio.OT cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The sophisticated tools and techniques employed by Nature for purposeful storage of information stand in stark contrast to the primitive and relatively inefficient means used by man. We describe some impressive features of biological data storage, and speculate on approaches to research and development that could benefit the storage industry in the coming decades.
[ { "created": "Wed, 27 Aug 2014 21:51:53 GMT", "version": "v1" } ]
2014-09-01
[ [ "Mansuripur", "Masud", "" ] ]
The sophisticated tools and techniques employed by Nature for purposeful storage of information stand in stark contrast to the primitive and relatively inefficient means used by man. We describe some impressive features of biological data storage, and speculate on approaches to research and development that could benefit the storage industry in the coming decades.
2211.08516
Ting Hu
Ting Hu and Gabriela Ochoa and Wolfgang Banzhaf
Phenotype Search Trajectory Networks for Linear Genetic Programming
null
null
null
null
q-bio.PE cs.AI
http://creativecommons.org/licenses/by/4.0/
Genotype-to-phenotype mappings translate genotypic variations such as mutations into phenotypic changes. Neutrality is the observation that some mutations do not lead to phenotypic changes. Studying the search trajectories in genotypic and phenotypic spaces, especially through neutral mutations, helps us to better understand the progression of evolution and its algorithmic behaviour. In this study, we visualise the search trajectories of a genetic programming system as graph-based models, where nodes are genotypes/phenotypes and edges represent their mutational transitions. We also quantitatively measure the characteristics of phenotypes including their genotypic abundance (the requirement for neutrality) and Kolmogorov complexity. We connect these quantified metrics with search trajectory visualisations, and find that more complex phenotypes are under-represented by fewer genotypes and are harder for evolution to discover. Less complex phenotypes, on the other hand, are over-represented by genotypes, are easier to find, and frequently serve as stepping-stones for evolution.
[ { "created": "Tue, 15 Nov 2022 21:20:50 GMT", "version": "v1" }, { "created": "Fri, 23 Jun 2023 16:42:01 GMT", "version": "v2" } ]
2023-06-26
[ [ "Hu", "Ting", "" ], [ "Ochoa", "Gabriela", "" ], [ "Banzhaf", "Wolfgang", "" ] ]
Genotype-to-phenotype mappings translate genotypic variations such as mutations into phenotypic changes. Neutrality is the observation that some mutations do not lead to phenotypic changes. Studying the search trajectories in genotypic and phenotypic spaces, especially through neutral mutations, helps us to better understand the progression of evolution and its algorithmic behaviour. In this study, we visualise the search trajectories of a genetic programming system as graph-based models, where nodes are genotypes/phenotypes and edges represent their mutational transitions. We also quantitatively measure the characteristics of phenotypes including their genotypic abundance (the requirement for neutrality) and Kolmogorov complexity. We connect these quantified metrics with search trajectory visualisations, and find that more complex phenotypes are under-represented by fewer genotypes and are harder for evolution to discover. Less complex phenotypes, on the other hand, are over-represented by genotypes, are easier to find, and frequently serve as stepping-stones for evolution.
1008.0209
Carlos Escudero
Carlos Escudero, Christian A. Yates, Jerome Buhl, Iain D. Couzin, Radek Erban, Ioannis G. Kevrekidis and Philip K. Maini
Ergodic directional switching in mobile insect groups
Physical Review Focus 26, July 2010
Phys. Rev. E 82, 011926 (2010)
10.1103/PhysRevE.82.011926
null
q-bio.PE cond-mat.stat-mech q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We obtain a Fokker-Planck equation describing experimental data on the collective motion of locusts. The noise is of internal origin and due to the discrete character and finite number of constituents of the swarm. The stationary probability distribution shows a rich phenomenology including non-monotonic behavior of several order/disorder transition indicators in noise intensity. This complex behavior arises naturally as a result of the randomness in the system. Its counterintuitive character challenges standard interpretations of noise induced transitions and calls for an extension of this theory in order to capture the behavior of certain classes of biologically motivated models. Our results suggest that the collective switches of the group's direction of motion might be due to a random ergodic effect and, as such, they are inherent to group formation.
[ { "created": "Sun, 1 Aug 2010 22:58:07 GMT", "version": "v1" } ]
2015-05-19
[ [ "Escudero", "Carlos", "" ], [ "Yates", "Christian A.", "" ], [ "Buhl", "Jerome", "" ], [ "Couzin", "Iain D.", "" ], [ "Erban", "Radek", "" ], [ "Kevrekidis", "Ioannis G.", "" ], [ "Maini", "Philip K.", "" ] ]
We obtain a Fokker-Planck equation describing experimental data on the collective motion of locusts. The noise is of internal origin and due to the discrete character and finite number of constituents of the swarm. The stationary probability distribution shows a rich phenomenology including non-monotonic behavior of several order/disorder transition indicators in noise intensity. This complex behavior arises naturally as a result of the randomness in the system. Its counterintuitive character challenges standard interpretations of noise induced transitions and calls for an extension of this theory in order to capture the behavior of certain classes of biologically motivated models. Our results suggest that the collective switches of the group's direction of motion might be due to a random ergodic effect and, as such, they are inherent to group formation.
2403.01927
Akhila Krishna
Akhila Krishna, Ravi Kant Gupta, Pranav Jeevan, Amit Sethi
Advancing Gene Selection in Oncology: A Fusion of Deep Learning and Sparsity for Precision Gene Selection
null
null
null
null
q-bio.GN cs.CV q-bio.QM q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Gene selection plays a pivotal role in oncology research for improving outcome prediction accuracy and facilitating cost-effective genomic profiling for cancer patients. This paper introduces two gene selection strategies for deep learning-based survival prediction models. The first strategy uses a sparsity-inducing method while the second one uses importance-based gene selection for identifying relevant genes. Our overall approach leverages the power of deep learning to model complex biological data structures, while sparsity-inducing methods ensure the selection process focuses on the most informative genes, minimizing noise and redundancy. Through comprehensive experimentation on diverse genomic and survival datasets, we demonstrate that our strategy not only identifies gene signatures with high predictive power for survival outcomes but also streamlines the process for low-cost genomic profiling. The implications of this research are profound as it offers a scalable and effective tool for advancing personalized medicine and targeted cancer therapies. By pushing the boundaries of gene selection methodologies, our work contributes significantly to the ongoing efforts in cancer genomics, promising improved diagnostic and prognostic capabilities in clinical settings.
[ { "created": "Mon, 4 Mar 2024 10:44:57 GMT", "version": "v1" } ]
2024-03-05
[ [ "Krishna", "Akhila", "" ], [ "Gupta", "Ravi Kant", "" ], [ "Jeevan", "Pranav", "" ], [ "Sethi", "Amit", "" ] ]
Gene selection plays a pivotal role in oncology research for improving outcome prediction accuracy and facilitating cost-effective genomic profiling for cancer patients. This paper introduces two gene selection strategies for deep learning-based survival prediction models. The first strategy uses a sparsity-inducing method while the second one uses importance-based gene selection for identifying relevant genes. Our overall approach leverages the power of deep learning to model complex biological data structures, while sparsity-inducing methods ensure the selection process focuses on the most informative genes, minimizing noise and redundancy. Through comprehensive experimentation on diverse genomic and survival datasets, we demonstrate that our strategy not only identifies gene signatures with high predictive power for survival outcomes but also streamlines the process for low-cost genomic profiling. The implications of this research are profound as it offers a scalable and effective tool for advancing personalized medicine and targeted cancer therapies. By pushing the boundaries of gene selection methodologies, our work contributes significantly to the ongoing efforts in cancer genomics, promising improved diagnostic and prognostic capabilities in clinical settings.
2209.02780
Youngmin Park
Youngmin Park, C\'ecile Leduc, Sandrine Etienne-Manneville, St\'ephanie Portet
Models of Vimentin Organization Under Actin-Driven Transport
25 pages, 8 figures
null
10.1103/PhysRevE.107.054408
null
q-bio.SC
http://creativecommons.org/licenses/by/4.0/
Intermediate filaments form an essential structural network, spread throughout the cytoplasm and play a key role in cell mechanics, intracellular organization and molecular signaling. The maintenance of the network and its adaptation to the cell's dynamic behavior relies on several mechanisms implicating cytoskeletal crosstalk which are not fully understood. Mathematical modeling allows us to compare several biologically realistic scenarios to help us interpret experimental data. In this study, we observe and model the dynamics of the vimentin intermediate filaments in single glial cells seeded on circular micropatterns following microtubule disruption by nocodazole treatment. In these conditions, the vimentin filaments move towards the cell center and accumulate before eventually reaching a steady-state. In the absence of microtubule-driven transport, the motion of the vimentin network is primarily driven by actin-related mechanisms. To model these experimental findings, we hypothesize that vimentin may exist in two states, mobile and immobile, and switches between the states at unknown (either constant or non-constant) rates. Mobile vimentin is assumed to advect with either constant or non-constant velocity. We introduce several biologically realistic scenarios using this set of assumptions. For each scenario, we use differential evolution to find the best parameter sets resulting in a solution that most closely matches the experimental data, then the assumptions are evaluated using the Akaike Information Criterion. This modeling approach allows us to conclude that our experimental data are best explained by a spatially dependent trapping of intermediate filaments or a spatially dependent speed of actin-dependent transport.
[ { "created": "Tue, 6 Sep 2022 19:02:38 GMT", "version": "v1" }, { "created": "Wed, 12 Apr 2023 19:30:08 GMT", "version": "v2" } ]
2023-06-14
[ [ "Park", "Youngmin", "" ], [ "Leduc", "Cécile", "" ], [ "Etienne-Manneville", "Sandrine", "" ], [ "Portet", "Stéphanie", "" ] ]
Intermediate filaments form an essential structural network, spread throughout the cytoplasm and play a key role in cell mechanics, intracellular organization and molecular signaling. The maintenance of the network and its adaptation to the cell's dynamic behavior relies on several mechanisms implicating cytoskeletal crosstalk which are not fully understood. Mathematical modeling allows us to compare several biologically realistic scenarios to help us interpret experimental data. In this study, we observe and model the dynamics of the vimentin intermediate filaments in single glial cells seeded on circular micropatterns following microtubule disruption by nocodazole treatment. In these conditions, the vimentin filaments move towards the cell center and accumulate before eventually reaching a steady-state. In the absence of microtubule-driven transport, the motion of the vimentin network is primarily driven by actin-related mechanisms. To model these experimental findings, we hypothesize that vimentin may exist in two states, mobile and immobile, and switches between the states at unknown (either constant or non-constant) rates. Mobile vimentin is assumed to advect with either constant or non-constant velocity. We introduce several biologically realistic scenarios using this set of assumptions. For each scenario, we use differential evolution to find the best parameter sets resulting in a solution that most closely matches the experimental data, then the assumptions are evaluated using the Akaike Information Criterion. This modeling approach allows us to conclude that our experimental data are best explained by a spatially dependent trapping of intermediate filaments or a spatially dependent speed of actin-dependent transport.
1801.08950
Kaushik Majumdar
Puneet Dheer, Sandipan Pati, Srinath Jayachandran, Kaushik Kumar Majumdar
Ictal and Post Ictal Impaired Consciousness due to Enhanced Mutual Information in Temporal Lobe Epilepsy
30 pages, 5 figures, 8 tables, under review in Brain Topography
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Seizure and synchronization are related to each other in a complex manner. Altered synchrony has been implicated in loss of consciousness during partial seizures. However, the mechanism of altered consciousness following termination of seizures has not been studied well. In this work we used bivariate mutual information as a measure of synchronization to understand the neural correlate of altered consciousness during and after termination of mesial temporal lobe onset seizures. First, we have compared the discrete bivariate mutual information (MI) measure with amplitude correlation (AC), phase synchronization (PS), nonlinear correlation and coherence, and established MI as a robust measure of synchronization. Next, we have extended MI to more than two signals by the principal component method. The extended MI was applied on intracranial electroencephalogram (iEEG) before, during and after 23 temporal lobe seizures recorded from 11 patients. The analyses were carried out in delta, theta, alpha, beta and gamma bands. In 77% of the complex partial seizures MI was higher towards the seizure offset than in the first half of the seizure in the seizure onset zone (SOZ) channels in beta and gamma bands, whereas MI remained higher in the beginning or in the middle of the seizure than towards the offset across the least involved channels in the same bands. Synchronization seems to build up outside the SOZ, gradually spread and culminate in the SOZ, and remain high beyond offset, leading to impaired consciousness in 82% of the complex partial temporal lobe seizures. Consciousness impairment was scored according to a method previously applied to assess the same in patients with temporal lobe epilepsy during seizure.
[ { "created": "Fri, 26 Jan 2018 19:27:40 GMT", "version": "v1" } ]
2018-01-30
[ [ "Dheer", "Puneet", "" ], [ "Pati", "Sandipan", "" ], [ "Jayachandran", "Srinath", "" ], [ "Majumdar", "Kaushik Kumar", "" ] ]
Seizure and synchronization are related to each other in a complex manner. Altered synchrony has been implicated in loss of consciousness during partial seizures. However, the mechanism of altered consciousness following termination of seizures has not been studied well. In this work we used bivariate mutual information as a measure of synchronization to understand the neural correlate of altered consciousness during and after termination of mesial temporal lobe onset seizures. First, we have compared the discrete bivariate mutual information (MI) measure with amplitude correlation (AC), phase synchronization (PS), nonlinear correlation and coherence, and established MI as a robust measure of synchronization. Next, we have extended MI to more than two signals by the principal component method. The extended MI was applied on intracranial electroencephalogram (iEEG) before, during and after 23 temporal lobe seizures recorded from 11 patients. The analyses were carried out in delta, theta, alpha, beta and gamma bands. In 77% of the complex partial seizures MI was higher towards the seizure offset than in the first half of the seizure in the seizure onset zone (SOZ) channels in beta and gamma bands, whereas MI remained higher in the beginning or in the middle of the seizure than towards the offset across the least involved channels in the same bands. Synchronization seems to build up outside the SOZ, gradually spread and culminate in the SOZ, and remain high beyond offset, leading to impaired consciousness in 82% of the complex partial temporal lobe seizures. Consciousness impairment was scored according to a method previously applied to assess the same in patients with temporal lobe epilepsy during seizure.
1912.12058
Souhil Harchaoui
Souhil Harchaoui and Petros Chatzimpiros
The nitrogen operating space of world food production
39 pages, 15 figures, 7 tables
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Agriculture operates within a global ecosystem for which planetary boundaries have recently been defined. Efficiency in nitrogen use is essential for maximizing the benefits of agriculture for humanity and minimizing adverse socio-ecological impacts. The set of variables that support global system efficiency also determine the food production boundaries of agriculture, which govern the maximum supportable human population. Food production boundaries, nitrogen loss and nitrogen self-sufficiency are combined here into the nitrogen operating space of world food production. We position world regions and the world trajectory (1961-2013) within the nitrogen operating space and show that the maximum supportable human population ranges from 6 to almost 17 billion people according to the share of grain used as feed and the nitrogen fertilization regime. All UN population projections for the 21st century can only be conditionally achieved. We discuss the growth rate requirements in production and efficiency to meet food production boundaries and the nitrogen planetary boundary by 2050.
[ { "created": "Fri, 27 Dec 2019 10:58:33 GMT", "version": "v1" } ]
2019-12-30
[ [ "Harchaoui", "Souhil", "" ], [ "Chatzimpiros", "Petros", "" ] ]
Agriculture operates within a global ecosystem for which planetary boundaries have recently been defined. Efficiency in nitrogen use is essential for maximizing the benefits of agriculture for humanity and minimizing adverse socio-ecological impacts. The set of variables that support global system efficiency also determine the food production boundaries of agriculture, which govern the maximum supportable human population. Food production boundaries, nitrogen loss and nitrogen self-sufficiency are combined here into the nitrogen operating space of world food production. We position world regions and the world trajectory (1961-2013) within the nitrogen operating space and show that the maximum supportable human population ranges from 6 to almost 17 billion people according to the share of grain used as feed and the nitrogen fertilization regime. All UN population projections for the 21st century can only be conditionally achieved. We discuss the growth rate requirements in production and efficiency to meet food production boundaries and the nitrogen planetary boundary by 2050.
1510.08780
John Medaglia
John D. Medaglia, Theodore D. Satterthwaite, Tyler M. Moore, Kosha Ruparel, Ruben C. Gur, Raquel E. Gur, Danielle S. Bassett
Flexible Traversal Through Diverse Brain States Underlies Executive Function in Normative Neurodevelopment
14 pages, 4 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adolescence is marked by rapid development of executive function. Mounting evidence suggests that executive function in adults may be driven by dynamic control of neurophysiological processes. Yet, how these dynamics evolve over adolescence and contribute to cognitive development is unknown. Using a novel dynamic graph approach in which each moment in time is a node and the similarity in brain states at two different times is an edge, we identify two primary brain states reminiscent of intrinsic and task-evoked systems. We demonstrate that time spent in these two states increases over development, as does the flexibility with which the brain switches between them. Increasing time spent in primary states and flexibility among states relates to increased executive performance over adolescence. Indeed, flexibility is increasingly advantageous for performance toward early adulthood. These findings demonstrate that brain state dynamics underlie the development of executive function during the critical period of adolescence.
[ { "created": "Thu, 29 Oct 2015 17:06:40 GMT", "version": "v1" } ]
2015-10-30
[ [ "Medaglia", "John D.", "" ], [ "Satterthwaite", "Theodore D.", "" ], [ "Moore", "Tyler M.", "" ], [ "Ruparel", "Kosha", "" ], [ "Gur", "Ruben C.", "" ], [ "Gur", "Raquel E.", "" ], [ "Bassett", "Danielle S.", "" ] ]
Adolescence is marked by rapid development of executive function. Mounting evidence suggests that executive function in adults may be driven by dynamic control of neurophysiological processes. Yet, how these dynamics evolve over adolescence and contribute to cognitive development is unknown. Using a novel dynamic graph approach in which each moment in time is a node and the similarity in brain states at two different times is an edge, we identify two primary brain states reminiscent of intrinsic and task-evoked systems. We demonstrate that time spent in these two states increases over development, as does the flexibility with which the brain switches between them. Increasing time spent in primary states and flexibility among states relates to increased executive performance over adolescence. Indeed, flexibility is increasingly advantageous for performance toward early adulthood. These findings demonstrate that brain state dynamics underlie the development of executive function during the critical period of adolescence.
1609.06021
Thomas R. Weikl
Fabian Paul and Thomas R. Weikl
How to distinguish conformational selection and induced fit based on chemical relaxation rates
20 pages, 4 figures
PLoS Comp Biol 12(9): e1005067 (2016)
10.1371/journal.pcbi.1005067
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein binding often involves conformational changes. Important questions are whether a conformational change occurs prior to a binding event ('conformational selection') or after a binding event ('induced fit'), and how conformational transition rates can be obtained from experiments. In this article, we present general results for the chemical relaxation rates of conformational-selection and induced-fit binding processes that hold for all concentrations of proteins and ligands and, thus, go beyond the standard pseudo-first-order approximation of large ligand concentration. These results allow us to distinguish conformational-selection from induced-fit processes - also in cases in which such a distinction is not possible under pseudo-first-order conditions - and to extract conformational transition rates of proteins from chemical relaxation data.
[ { "created": "Tue, 20 Sep 2016 05:22:49 GMT", "version": "v1" } ]
2016-09-22
[ [ "Paul", "Fabian", "" ], [ "Weikl", "Thomas R.", "" ] ]
Protein binding often involves conformational changes. Important questions are whether a conformational change occurs prior to a binding event ('conformational selection') or after a binding event ('induced fit'), and how conformational transition rates can be obtained from experiments. In this article, we present general results for the chemical relaxation rates of conformational-selection and induced-fit binding processes that hold for all concentrations of proteins and ligands and, thus, go beyond the standard pseudo-first-order approximation of large ligand concentration. These results allow us to distinguish conformational-selection from induced-fit processes - also in cases in which such a distinction is not possible under pseudo-first-order conditions - and to extract conformational transition rates of proteins from chemical relaxation data.
q-bio/0510051
Quang-Cuong Pham
Quang-Cuong Pham, Jean-Jacques Slotine
Stable Concurrent Synchronization in Dynamic System Networks
32 pages, 12 figures. More detailed proofs were given in section 2. Section 3.4 on robust synchronization was added
null
10.1016/j.neunet.2006.07.008
null
q-bio.NC
null
In a network of dynamical systems, concurrent synchronization is a regime where multiple groups of fully synchronized elements coexist. In the brain, concurrent synchronization may occur at several scales, with multiple ``rhythms'' interacting and functional assemblies combining neural oscillators of many different types. Mathematically, stable concurrent synchronization corresponds to convergence to a flow-invariant linear subspace of the global state space. We derive a general condition for such convergence to occur globally and exponentially. We also show that, under mild conditions, global convergence to a concurrently synchronized regime is preserved under basic system combinations such as negative feedback or hierarchies, so that stable concurrently synchronized aggregates of arbitrary size can be constructed. Robustness of stable concurrent synchronization to variations in individual dynamics is also quantified. Simple applications of these results to classical questions in systems neuroscience and robotics are discussed.
[ { "created": "Thu, 27 Oct 2005 20:56:53 GMT", "version": "v1" }, { "created": "Sat, 24 Dec 2005 14:56:53 GMT", "version": "v2" }, { "created": "Thu, 1 Jun 2006 09:06:11 GMT", "version": "v3" } ]
2007-05-23
[ [ "Pham", "Quang-Cuong", "" ], [ "Slotine", "Jean-Jacques", "" ] ]
In a network of dynamical systems, concurrent synchronization is a regime where multiple groups of fully synchronized elements coexist. In the brain, concurrent synchronization may occur at several scales, with multiple ``rhythms'' interacting and functional assemblies combining neural oscillators of many different types. Mathematically, stable concurrent synchronization corresponds to convergence to a flow-invariant linear subspace of the global state space. We derive a general condition for such convergence to occur globally and exponentially. We also show that, under mild conditions, global convergence to a concurrently synchronized regime is preserved under basic system combinations such as negative feedback or hierarchies, so that stable concurrently synchronized aggregates of arbitrary size can be constructed. Robustness of stable concurrent synchronization to variations in individual dynamics is also quantified. Simple applications of these results to classical questions in systems neuroscience and robotics are discussed.
2103.06954
Itamar Daniel Landau
Itamar Daniel Landau and Haim Sompolinsky
Macroscopic Fluctuations Emerge in Balanced Networks with Incomplete Recurrent Alignment
null
Phys. Rev. Research 3, 023171 (2021)
10.1103/PhysRevResearch.3.023171
null
q-bio.NC nlin.CD
http://creativecommons.org/licenses/by-nc-nd/4.0/
Networks of strongly-coupled neurons with random connectivity exhibit chaotic, asynchronous fluctuations. In previous work, we showed that when endowed with an additional low-rank connectivity consisting of the outer product of orthogonal vectors, these networks generate large-scale coherent fluctuations. Although a striking phenomenon, that result depended on a fine-tuned choice of low-rank structure. Here we extend that work by generalizing the theory of excitation-inhibition balance to networks with arbitrary low-rank structure and show that low-dimensional variability emerges intrinsically through what we call incomplete recurrent alignment. We say that a low-rank connectivity structure exhibits incomplete alignment if its row-space is not contained in its column-space. In the setting of incomplete alignment, recurrent connectivity can be decomposed into a subspace-recurrent component and an effective-feedforward component. We show that high-dimensional, microscopic fluctuations are propagated via the effective-feedforward component to a low-dimensional subspace where they are dynamically balanced by macroscopic fluctuations. We present biologically plausible examples from excitation-inhibition networks and networks with heterogeneous degree distributions. Finally, we define the alignment matrix as the overlap between left and right-singular vectors of the structured connectivity, and show that the singular values of the alignment matrix determine the amplitude of macroscopic variability, while its singular vectors determine the structure. Our work shows how macroscopic fluctuations can emerge generically in strongly-coupled networks with low-rank structure. Furthermore, by generalizing excitation-inhibition balance to arbitrary low-rank structure our work may find relevance in any setting with strongly interacting units, whether in biological, social, or technological networks.
[ { "created": "Thu, 11 Mar 2021 21:09:09 GMT", "version": "v1" } ]
2021-06-09
[ [ "Landau", "Itamar Daniel", "" ], [ "Sompolinsky", "Haim", "" ] ]
Networks of strongly-coupled neurons with random connectivity exhibit chaotic, asynchronous fluctuations. In previous work, we showed that when endowed with an additional low-rank connectivity consisting of the outer product of orthogonal vectors, these networks generate large-scale coherent fluctuations. Although a striking phenomenon, that result depended on a fine-tuned choice of low-rank structure. Here we extend that work by generalizing the theory of excitation-inhibition balance to networks with arbitrary low-rank structure and show that low-dimensional variability emerges intrinsically through what we call incomplete recurrent alignment. We say that a low-rank connectivity structure exhibits incomplete alignment if its row-space is not contained in its column-space. In the setting of incomplete alignment, recurrent connectivity can be decomposed into a subspace-recurrent component and an effective-feedforward component. We show that high-dimensional, microscopic fluctuations are propagated via the effective-feedforward component to a low-dimensional subspace where they are dynamically balanced by macroscopic fluctuations. We present biologically plausible examples from excitation-inhibition networks and networks with heterogeneous degree distributions. Finally, we define the alignment matrix as the overlap between left and right-singular vectors of the structured connectivity, and show that the singular values of the alignment matrix determine the amplitude of macroscopic variability, while its singular vectors determine the structure. Our work shows how macroscopic fluctuations can emerge generically in strongly-coupled networks with low-rank structure. Furthermore, by generalizing excitation-inhibition balance to arbitrary low-rank structure our work may find relevance in any setting with strongly interacting units, whether in biological, social, or technological networks.
1907.07249
Cecilia Berardo
Cecilia Berardo, Stefan Geritz, Mats Gyllenberg, Ga\"el Raoul
Interactions between different predator-prey states. A method for the derivation of the functional and numerical response
27 pages, 14 figures, 7 sections, 4 appendices
2020, Journal of Mathematical Biology
10.1007/s00285-020-01500-2
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we introduce a formal method for the derivation of a predator's functional response from a system of fast state transitions of the prey or predator on a time scale during which the total prey and predator densities remain constant. Such derivation permits an explicit interpretation of the structure and parameters of the functional response in terms of individual behaviour. The same method is also used here to derive the corresponding numerical response of the predator as well as of the prey.
[ { "created": "Tue, 16 Jul 2019 20:26:45 GMT", "version": "v1" }, { "created": "Mon, 27 Apr 2020 12:15:29 GMT", "version": "v2" } ]
2020-05-19
[ [ "Berardo", "Cecilia", "" ], [ "Geritz", "Stefan", "" ], [ "Gyllenberg", "Mats", "" ], [ "Raoul", "Gaël", "" ] ]
In this paper we introduce a formal method for the derivation of a predator's functional response from a system of fast state transitions of the prey or predator on a time scale during which the total prey and predator densities remain constant. Such derivation permits an explicit interpretation of the structure and parameters of the functional response in terms of individual behaviour. The same method is also used here to derive the corresponding numerical response of the predator as well as of the prey.
2201.13371
Condell Eastmond
Condell Eastmond, Aseem Subedi, Suvranu De, Xavier Intes
Deep Learning in fNIRS: A review
41 pages, 9 figures
null
10.1117/1.NPh.9.4.041411
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Significance: Optical neuroimaging has become a well-established clinical and research tool to monitor cortical activations in the human brain. It is notable that outcomes of functional Near-InfraRed Spectroscopy (fNIRS) studies depend heavily on the data processing pipeline and classification model employed. Recently, Deep Learning (DL) methodologies have demonstrated fast and accurate performance in data processing and classification tasks across many biomedical fields. Aim: We aim to review the emerging DL applications in fNIRS studies. Approach: We first introduce some of the commonly used DL techniques. Then the review summarizes current DL work in some of the most active areas of this field, including brain-computer interface, neuro-impairment diagnosis, and neuroscience discovery. Results: Of the 63 papers considered in this review, 32 report a comparative study of deep learning techniques against traditional machine learning techniques, with 26 shown to outperform the latter in terms of classification accuracy. Additionally, 8 studies also utilize deep learning to reduce the amount of preprocessing typically done with fNIRS data or increase the amount of data via data augmentation. Conclusions: The application of DL techniques to fNIRS studies has been shown to mitigate many of the hurdles present in fNIRS studies, such as lengthy data preprocessing or small sample sizes, while achieving comparable or improved classification accuracy.
[ { "created": "Mon, 31 Jan 2022 17:33:03 GMT", "version": "v1" }, { "created": "Fri, 15 Jul 2022 11:52:42 GMT", "version": "v2" } ]
2023-01-03
[ [ "Eastmond", "Condell", "" ], [ "Subedi", "Aseem", "" ], [ "De", "Suvranu", "" ], [ "Intes", "Xavier", "" ] ]
Significance: Optical neuroimaging has become a well-established clinical and research tool to monitor cortical activations in the human brain. It is notable that outcomes of functional Near-InfraRed Spectroscopy (fNIRS) studies depend heavily on the data processing pipeline and classification model employed. Recently, Deep Learning (DL) methodologies have demonstrated fast and accurate performance in data processing and classification tasks across many biomedical fields. Aim: We aim to review the emerging DL applications in fNIRS studies. Approach: We first introduce some of the commonly used DL techniques. Then the review summarizes current DL work in some of the most active areas of this field, including brain-computer interface, neuro-impairment diagnosis, and neuroscience discovery. Results: Of the 63 papers considered in this review, 32 report a comparative study of deep learning techniques against traditional machine learning techniques, with 26 shown to outperform the latter in terms of classification accuracy. Additionally, 8 studies also utilize deep learning to reduce the amount of preprocessing typically done with fNIRS data or increase the amount of data via data augmentation. Conclusions: The application of DL techniques to fNIRS studies has been shown to mitigate many of the hurdles present in fNIRS studies, such as lengthy data preprocessing or small sample sizes, while achieving comparable or improved classification accuracy.
2007.04500
Johann H. Mart\'inez
J. Mendoza-Ruiz, C. E. Alonso-Malaver, M. Valderrama, O. A. Rosso, J.H. Mart\'inez
Dynamics in cortical activity revealed by resting-state MEG rhythms
16 pages, 11 figures
null
10.1063/5.0025189
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The brain may be thought of as a many-body architecture with a spatio-temporal dynamics described by neuronal structures. The oscillatory nature of brain activity allows these structures (nodes) to be described as a set of coupled oscillators forming a network where the node dynamics, and that of the network topology, can be studied. Quantifying its dynamics at various scales is an issue that remains to be explored for several brain activities, e.g., activity at rest. The resting-state associates the underlying brain dynamics of healthy subjects that are not actively compromised with sensory or cognitive processes. Studying its dynamics is highly non-trivial but opens the door to understanding the general principles of brain functioning. We hypothesize about the spatio-temporal dynamics of cortical fluctuations for healthy subjects at resting-state. We retrieve the alphabet that reconstructs the dynamics (entropy/complexity) of magnetoencephalography signals. We assemble the cortical connectivity to elicit the network's dynamics. We depict an order relation between entropy/complexity for frequency bands. We unveiled that the posterior cortex conglomerates nodes with both stronger dynamics and high clustering for the {\alpha} band. The existence of these order relations suggests an emergent phenomenon of each band. Interestingly, we find that the posterior cortex plays a cardinal role in both the dynamics and structure regarding the resting-state. To the best of our knowledge, this is the first study with magnetoencephalography involving information theory and network science to better understand the dynamics and structure of brain activity at rest for different bands and scales.
[ { "created": "Thu, 9 Jul 2020 01:37:26 GMT", "version": "v1" } ]
2021-02-03
[ [ "Mendoza-Ruiz", "J.", "" ], [ "Alonso-Malaver", "C. E.", "" ], [ "Valderrama", "M.", "" ], [ "Rosso", "O. A.", "" ], [ "Martínez", "J. H.", "" ] ]
The brain may be thought of as a many-body architecture with a spatio-temporal dynamics described by neuronal structures. The oscillatory nature of brain activity allows these structures (nodes) to be described as a set of coupled oscillators forming a network where the node dynamics, and that of the network topology, can be studied. Quantifying its dynamics at various scales is an issue that remains to be explored for several brain activities, e.g., activity at rest. The resting-state associates the underlying brain dynamics of healthy subjects that are not actively compromised with sensory or cognitive processes. Studying its dynamics is highly non-trivial but opens the door to understanding the general principles of brain functioning. We hypothesize about the spatio-temporal dynamics of cortical fluctuations for healthy subjects at resting-state. We retrieve the alphabet that reconstructs the dynamics (entropy/complexity) of magnetoencephalography signals. We assemble the cortical connectivity to elicit the network's dynamics. We depict an order relation between entropy/complexity for frequency bands. We unveiled that the posterior cortex conglomerates nodes with both stronger dynamics and high clustering for the {\alpha} band. The existence of these order relations suggests an emergent phenomenon of each band. Interestingly, we find that the posterior cortex plays a cardinal role in both the dynamics and structure regarding the resting-state. To the best of our knowledge, this is the first study with magnetoencephalography involving information theory and network science to better understand the dynamics and structure of brain activity at rest for different bands and scales.
1803.00112
Viktor Stojkoski MSc
Viktor Stojkoski, Zoran Utkovski, Lasko Basnarkov and Ljupco Kocarev
Cooperation dynamics of generalized reciprocity in state-based social dilemmas
29 pages, 5 figures
Phys. Rev. E 97, 052305 (2018)
10.1103/PhysRevE.97.052305
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a framework for studying social dilemmas in networked societies where individuals follow a simple state-based behavioral mechanism based on generalized reciprocity, which is rooted in the principle "help anyone if helped by someone". Within this general framework, which applies to a wide range of social dilemmas including, among others, public goods, donation and snowdrift games, we study the cooperation dynamics on a variety of complex network examples. By interpreting the studied model through the lenses of nonlinear dynamical systems, we show that cooperation through generalized reciprocity always emerges as the unique attractor in which the overall level of cooperation is maximized, while simultaneously exploitation of the participating individuals is prevented. The analysis elucidates the role of the network structure, here captured by a local centrality measure which uniquely quantifies the propensity of the network structure to cooperation, by dictating the degree of cooperation displayed both at microscopic and macroscopic level. We demonstrate the applicability of the analysis on a practical example by considering an interaction structure that couples a donation process with a public goods game.
[ { "created": "Wed, 28 Feb 2018 22:15:32 GMT", "version": "v1" }, { "created": "Fri, 27 Apr 2018 09:00:16 GMT", "version": "v2" }, { "created": "Thu, 28 Feb 2019 12:57:22 GMT", "version": "v3" } ]
2019-03-01
[ [ "Stojkoski", "Viktor", "" ], [ "Utkovski", "Zoran", "" ], [ "Basnarkov", "Lasko", "" ], [ "Kocarev", "Ljupco", "" ] ]
We introduce a framework for studying social dilemmas in networked societies where individuals follow a simple state-based behavioral mechanism based on generalized reciprocity, which is rooted in the principle "help anyone if helped by someone". Within this general framework, which applies to a wide range of social dilemmas including, among others, public goods, donation and snowdrift games, we study the cooperation dynamics on a variety of complex network examples. By interpreting the studied model through the lenses of nonlinear dynamical systems, we show that cooperation through generalized reciprocity always emerges as the unique attractor in which the overall level of cooperation is maximized, while simultaneously exploitation of the participating individuals is prevented. The analysis elucidates the role of the network structure, here captured by a local centrality measure which uniquely quantifies the propensity of the network structure to cooperation, by dictating the degree of cooperation displayed both at microscopic and macroscopic level. We demonstrate the applicability of the analysis on a practical example by considering an interaction structure that couples a donation process with a public goods game.
1407.5946
William Bialek
Gasper Tkacik, Thierry Mora, Olivier Marre, Dario Amodei, Michael J. Berry II, and William Bialek
Thermodynamics for a network of neurons: Signatures of criticality
null
null
null
null
q-bio.NC cond-mat.dis-nn cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The activity of a neural network is defined by patterns of spiking and silence from the individual neurons. Because spikes are (relatively) sparse, patterns of activity with increasing numbers of spikes are less probable, but with more spikes the number of possible patterns increases. This tradeoff between probability and numerosity is mathematically equivalent to the relationship between entropy and energy in statistical physics. We construct this relationship for populations of up to N=160 neurons in a small patch of the vertebrate retina, using a combination of direct and model-based analyses of experiments on the response of this network to naturalistic movies. We see signs of a thermodynamic limit, where the entropy per neuron approaches a smooth function of the energy per neuron as N increases. The form of this function corresponds to the distribution of activity being poised near an unusual kind of critical point. Networks with more or less correlation among neurons would not reach this critical state. We suggest further tests of criticality, and give a brief discussion of its functional significance.
[ { "created": "Tue, 22 Jul 2014 17:16:12 GMT", "version": "v1" } ]
2014-07-23
[ [ "Tkacik", "Gasper", "" ], [ "Mora", "Thierry", "" ], [ "Marre", "Olivier", "" ], [ "Amodei", "Dario", "" ], [ "Berry", "Michael J.", "II" ], [ "Bialek", "William", "" ] ]
The activity of a neural network is defined by patterns of spiking and silence from the individual neurons. Because spikes are (relatively) sparse, patterns of activity with increasing numbers of spikes are less probable, but with more spikes the number of possible patterns increases. This tradeoff between probability and numerosity is mathematically equivalent to the relationship between entropy and energy in statistical physics. We construct this relationship for populations of up to N=160 neurons in a small patch of the vertebrate retina, using a combination of direct and model-based analyses of experiments on the response of this network to naturalistic movies. We see signs of a thermodynamic limit, where the entropy per neuron approaches a smooth function of the energy per neuron as N increases. The form of this function corresponds to the distribution of activity being poised near an unusual kind of critical point. Networks with more or less correlation among neurons would not reach this critical state. We suggest further tests of criticality, and give a brief discussion of its functional significance.
2006.14933
Sang-Yoon Kim
Sang-Yoon Kim and Woochang Lim
Influence of Various Temporal Recoding on Pavlovian Eyeblink Conditioning in The Cerebellum
arXiv admin note: substantial text overlap with arXiv:2003.11325
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the Pavlovian eyeblink conditioning (EBC) via repeated presentation of paired conditioned stimulus (tone) and unconditioned stimulus (airpuff). The influence of various temporal recoding of granule cells on the EBC is investigated in a cerebellar network where the connection probability $p_c$ from Golgi to granule cells is changed. In an optimal case of $p_c^*~(=0.029)$, individual granule cells show various well- and ill-matched firing patterns relative to the unconditioned stimulus. Then, these variously-recoded signals are fed into the Purkinje cells (PCs) through parallel-fibers (PFs). In the case of well-matched PF-PC synapses, their synaptic weights are strongly depressed through strong long-term depression (LTD). On the other hand, practically no LTD occurs for the ill-matched PF-PC synapses. This type of "effective" depression at the PF-PC synapses coordinates firings of PCs effectively, which then exert effective inhibitory coordination on the cerebellar nucleus neuron [which elicits the conditioned response (CR; eyeblink)]. When the learning trial passes a threshold, acquisition of CR begins. In this case, the timing degree ${\cal T}_d$ of CR becomes good due to the presence of the ill-matched firing group, which plays the role of a protection barrier for the timing. With further increase in the trial, strength $\cal S$ of CR (corresponding to the amplitude of eyelid closure) increases due to strong LTD in the well-matched firing group. Thus, with increasing the learning trial, the (overall) learning efficiency degree ${\cal L}_e$ (taking into consideration both timing and strength of CR) for the CR is increased, and eventually it becomes saturated. By changing $p_c$ from $p_c^*$, we also investigate the influence of various temporal recoding on the EBC. It is thus found that the more varied the temporal recoding, the more effective the learning for the Pavlovian EBC.
[ { "created": "Thu, 25 Jun 2020 02:39:48 GMT", "version": "v1" }, { "created": "Wed, 8 Jul 2020 04:02:15 GMT", "version": "v2" } ]
2020-07-09
[ [ "Kim", "Sang-Yoon", "" ], [ "Lim", "Woochang", "" ] ]
We consider the Pavlovian eyeblink conditioning (EBC) via repeated presentation of paired conditioned stimulus (tone) and unconditioned stimulus (airpuff). The influence of various temporal recoding of granule cells on the EBC is investigated in a cerebellar network where the connection probability $p_c$ from Golgi to granule cells is changed. In an optimal case of $p_c^*~(=0.029)$, individual granule cells show various well- and ill-matched firing patterns relative to the unconditioned stimulus. Then, these variously-recoded signals are fed into the Purkinje cells (PCs) through parallel-fibers (PFs). In the case of well-matched PF-PC synapses, their synaptic weights are strongly depressed through strong long-term depression (LTD). On the other hand, practically no LTD occurs for the ill-matched PF-PC synapses. This type of "effective" depression at the PF-PC synapses coordinates firings of PCs effectively, which then exert effective inhibitory coordination on the cerebellar nucleus neuron [which elicits the conditioned response (CR; eyeblink)]. When the learning trial passes a threshold, acquisition of CR begins. In this case, the timing degree ${\cal T}_d$ of CR becomes good due to the presence of the ill-matched firing group, which plays the role of a protection barrier for the timing. With further increase in the trial, strength $\cal S$ of CR (corresponding to the amplitude of eyelid closure) increases due to strong LTD in the well-matched firing group. Thus, with increasing the learning trial, the (overall) learning efficiency degree ${\cal L}_e$ (taking into consideration both timing and strength of CR) for the CR is increased, and eventually it becomes saturated. By changing $p_c$ from $p_c^*$, we also investigate the influence of various temporal recoding on the EBC. It is thus found that the more varied the temporal recoding, the more effective the learning for the Pavlovian EBC.
1401.7589
Ziyue Gao
Ziyue Gao, Molly Przeworski, Guy Sella
Footprints of ancient balanced polymorphisms in genetic variation data
5 Figures, 4 Supplementary Figures, 3 Supplementary Tables
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When long-lived, balancing selection can lead to trans-species polymorphisms that are shared by two or more species identical by descent. In this case, the gene genealogies at the selected sites cluster by allele instead of by species and, because of linkage, nearby neutral sites also have unusual genealogies. Although it is clear that this scenario should lead to discernible footprints in genetic variation data, notably the presence of additional neutral polymorphisms shared between species and the absence of fixed differences, the effects remain poorly characterized. We focus on the case of a single site under long-lived balancing selection and derive approximations for summaries of the data that are sensitive to a trans-species polymorphism: the length of the segment that carries most of the signals, the expected number of shared neutral SNPs within the segment and the patterns of allelic associations among them. Coalescent simulations of ancient balancing selection confirm the accuracy of our approximations. We further show that for humans and chimpanzees, and more generally for pairs of species with low genetic diversity levels, the patterns of genetic variation on which we focus are highly unlikely to be generated by neutral recurrent mutations, so these statistics are specific as well as sensitive. We discuss the implications of our results for the design and interpretation of genome scans for ancient balancing selection in apes and other taxa.
[ { "created": "Wed, 29 Jan 2014 17:05:45 GMT", "version": "v1" } ]
2014-01-30
[ [ "Gao", "Ziyue", "" ], [ "Przeworski", "Molly", "" ], [ "Sella", "Guy", "" ] ]
When long-lived, balancing selection can lead to trans-species polymorphisms that are shared by two or more species identical by descent. In this case, the gene genealogies at the selected sites cluster by allele instead of by species and, because of linkage, nearby neutral sites also have unusual genealogies. Although it is clear that this scenario should lead to discernible footprints in genetic variation data, notably the presence of additional neutral polymorphisms shared between species and the absence of fixed differences, the effects remain poorly characterized. We focus on the case of a single site under long-lived balancing selection and derive approximations for summaries of the data that are sensitive to a trans-species polymorphism: the length of the segment that carries most of the signals, the expected number of shared neutral SNPs within the segment and the patterns of allelic associations among them. Coalescent simulations of ancient balancing selection confirm the accuracy of our approximations. We further show that for humans and chimpanzees, and more generally for pairs of species with low genetic diversity levels, the patterns of genetic variation on which we focus are highly unlikely to be generated by neutral recurrent mutations, so these statistics are specific as well as sensitive. We discuss the implications of our results for the design and interpretation of genome scans for ancient balancing selection in apes and other taxa.
1206.6366
Mahnaz Kazemipoor
Mahnaz Kazemipoor (Corresponding author), Che Wan Jasimah Wan Mohamed Radzi, Khyrunnisa Begum, Iman Yaze
Screening of antibacterial activity of lactic acid bacteria isolated from fermented vegetables against food borne pathogens
10 pages
null
null
null
q-bio.CB
http://creativecommons.org/licenses/by/3.0/
This study aims to screen the antibacterial activity of lactic acid bacteria (LAB) isolated from home-made fermented vegetables against common food borne pathogens. The antagonistic properties of these isolates against Escherichia coli, Staphylococcus aureus, Yersinia enterocolitica and Bacillus cereus were examined using the agar well diffusion method. Four LAB, namely MF6, MF10, MF13, and MF15, identified as Lactobacillus animalis, Lactobacillus rhamnosus, Lactobacillus fermentum and Lactobacillus reuteri, respectively, were effective against all selected pathogenic strains. Amongst the four isolates, MF6 exhibited the highest antibacterial activity against all the indicator pathogens tested except Y. enterocolitica. Its activity was maximum against E. coli, with a Zone of Inhibition (ZOI) ranging from 18.7 to 21.3 mm, and least for Y. enterocolitica (10 \pm 1.1 mm). Isolate MF13 also showed antimicrobial activity against all tested pathogens, with the highest activity against Y. enterocolitica (14 \pm 1.7 mm) and the least against E. coli (8 \pm 1.4 mm), in direct contrast to isolate MF6. Isolate MF15 showed greater activity against E. coli (12 \pm 0.8 mm) and least against S. aureus (8 \pm 1.7 mm). The least antimicrobial activity was observed in isolate MF10, with a ZOI in the range of 2.5-7 mm. The degree of antimicrobial activity among the isolates was in the order MF6>MF13>MF15>MF10. Overall, the isolated LAB showed a remarkable inhibitory effect against both Gram-positive and Gram-negative pathogenic strains. However, the spectrum of inhibition differed among the isolates tested. These results suggest that these potent isolates could be used as natural biopreservatives in different food products.
[ { "created": "Mon, 25 Jun 2012 10:13:46 GMT", "version": "v1" } ]
2012-06-28
[ [ "Kazemipoor", "Mahnaz", "", "Corresponding author" ], [ "Radzi", "Che Wan Jasimah Wan Mohamed", "" ], [ "Begum", "Khyrunnisa", "" ], [ "Yaze", "Iman", "" ] ]
This study aims to screen the antibacterial activity of lactic acid bacteria (LAB) isolated from home-made fermented vegetables against common food borne pathogens. The antagonistic properties of these isolates against Escherichia coli, Staphylococcus aureus, Yersinia enterocolitica and Bacillus cereus were examined using the agar well diffusion method. Four LAB, namely MF6, MF10, MF13, and MF15, identified as Lactobacillus animalis, Lactobacillus rhamnosus, Lactobacillus fermentum and Lactobacillus reuteri, respectively, were effective against all selected pathogenic strains. Amongst the four isolates, MF6 exhibited the highest antibacterial activity against all the indicator pathogens tested except Y. enterocolitica. Its activity was maximum against E. coli, with a Zone of Inhibition (ZOI) ranging from 18.7 to 21.3 mm, and least for Y. enterocolitica (10 \pm 1.1 mm). Isolate MF13 also showed antimicrobial activity against all tested pathogens, with the highest activity against Y. enterocolitica (14 \pm 1.7 mm) and the least against E. coli (8 \pm 1.4 mm), in direct contrast to isolate MF6. Isolate MF15 showed greater activity against E. coli (12 \pm 0.8 mm) and least against S. aureus (8 \pm 1.7 mm). The least antimicrobial activity was observed in isolate MF10, with a ZOI in the range of 2.5-7 mm. The degree of antimicrobial activity among the isolates was in the order MF6>MF13>MF15>MF10. Overall, the isolated LAB showed a remarkable inhibitory effect against both Gram-positive and Gram-negative pathogenic strains. However, the spectrum of inhibition differed among the isolates tested. These results suggest that these potent isolates could be used as natural biopreservatives in different food products.
1401.3452
Amir Toor
Juliana K. Sampson, Nihar U. Sheth, Vishal N. Koparde, Allison F. Scalora, Myrna G. Serrano, Vladimir Lee, Catherine H. Roberts, Maximilian Jameson-Lee, Andrea Ferriera-Gonzalez, Masoud H. Manjili, Gregory A. Buck, Michael C. Neale, Amir A. Toor
Whole Exome Sequencing to Estimate Alloreactivity Potential Between Donors and Recipients in Stem Cell Transplantation
12 pages- main article, 29 pages total, 5 figures, 1 supplementary figure
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Whole exome sequencing was performed on HLA-matched stem cell donors and transplant recipients to measure sequence variation contributing to minor histocompatibility antigen differences between the two. A large number of nonsynonymous single nucleotide polymorphisms were identified in each of the nine unique donor-recipient pairs tested. This variation was greater in magnitude in unrelated donors as compared with matched related donors. Knowledge of the magnitude of exome variation between stem cell transplant recipients and donors may allow more accurate titration of immunosuppressive therapy following stem cell transplantation.
[ { "created": "Wed, 15 Jan 2014 05:12:48 GMT", "version": "v1" } ]
2014-01-16
[ [ "Sampson", "Juliana K.", "" ], [ "Sheth", "Nihar U.", "" ], [ "Koparde", "Vishal N.", "" ], [ "Scalora", "Allison F.", "" ], [ "Serrano", "Myrna G.", "" ], [ "Lee", "Vladimir", "" ], [ "Roberts", "Catherine H.", "" ], [ "Jameson-Lee", "Maximilian", "" ], [ "Ferriera-Gonzalez", "Andrea", "" ], [ "Manjili", "Masoud H.", "" ], [ "Buck", "Gregory A.", "" ], [ "Neale", "Michael C.", "" ], [ "Toor", "Amir A.", "" ] ]
Whole exome sequencing was performed on HLA-matched stem cell donors and transplant recipients to measure sequence variation contributing to minor histocompatibility antigen differences between the two. A large number of nonsynonymous single nucleotide polymorphisms were identified in each of the nine unique donor-recipient pairs tested. This variation was greater in magnitude in unrelated donors as compared with matched related donors. Knowledge of the magnitude of exome variation between stem cell transplant recipients and donors may allow more accurate titration of immunosuppressive therapy following stem cell transplantation.
2105.06811
Gianluca Truda
Gianluca Truda
Quantified Sleep: Machine learning techniques for observational n-of-1 studies
Source code: https://github.com/gianlucatruda/quantified-sleep
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper applies statistical learning techniques to an observational Quantified-Self (QS) study to build a descriptive model of sleep quality. A total of 472 days of my sleep data was collected with an Oura ring and combined with lifestyle, environmental, and psychological data. Such n-of-1 QS projects pose a number of challenges: heterogeneous data sources; missing values; high dimensionality; dynamic feedback loops; human biases. This paper directly addresses these challenges with an end-to-end QS pipeline that produces robust descriptive models. Sleep quality is one of the most difficult modelling targets in QS research, due to high noise and a large number of weakly-contributing factors. Sleep quality was selected so that approaches from this paper would generalise to most other n-of-1 QS projects. Techniques are presented for combining and engineering features for the different classes of data types, sample frequencies, and schema - including event logs, weather, and geo-spatial data. Statistical analyses for outliers, normality, (auto)correlation, stationarity, and missing data are detailed, along with a proposed method for hierarchical clustering to identify correlated groups of features. The missing data was overcome using a combination of knowledge-based and statistical techniques, including several multivariate imputation algorithms. "Markov unfolding" is presented for collapsing the time series into a collection of independent observations, whilst incorporating historical information. The final model was interpreted in two ways: by inspecting the internal $\beta$-parameters, and using the SHAP framework. These two interpretation techniques were combined to produce a list of the 16 most-predictive features, demonstrating that an observational study can greatly narrow down the number of features that need to be considered when designing interventional QS studies.
[ { "created": "Fri, 14 May 2021 13:13:17 GMT", "version": "v1" } ]
2021-05-17
[ [ "Truda", "Gianluca", "" ] ]
This paper applies statistical learning techniques to an observational Quantified-Self (QS) study to build a descriptive model of sleep quality. A total of 472 days of my sleep data was collected with an Oura ring and combined with lifestyle, environmental, and psychological data. Such n-of-1 QS projects pose a number of challenges: heterogeneous data sources; missing values; high dimensionality; dynamic feedback loops; human biases. This paper directly addresses these challenges with an end-to-end QS pipeline that produces robust descriptive models. Sleep quality is one of the most difficult modelling targets in QS research, due to high noise and a large number of weakly-contributing factors. Sleep quality was selected so that approaches from this paper would generalise to most other n-of-1 QS projects. Techniques are presented for combining and engineering features for the different classes of data types, sample frequencies, and schema - including event logs, weather, and geo-spatial data. Statistical analyses for outliers, normality, (auto)correlation, stationarity, and missing data are detailed, along with a proposed method for hierarchical clustering to identify correlated groups of features. The missing data was overcome using a combination of knowledge-based and statistical techniques, including several multivariate imputation algorithms. "Markov unfolding" is presented for collapsing the time series into a collection of independent observations, whilst incorporating historical information. The final model was interpreted in two ways: by inspecting the internal $\beta$-parameters, and using the SHAP framework. These two interpretation techniques were combined to produce a list of the 16 most-predictive features, demonstrating that an observational study can greatly narrow down the number of features that need to be considered when designing interventional QS studies.
1410.5851
Yong Kong
Yong Kong
Calculating complexity of large randomized libraries
14 pages, 4 figures
Journal of Theoretical Biology 259 (3), 641-645, 2009
10.1016/j.jtbi.2009.04.008
null
q-bio.QM stat.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Randomized libraries are increasingly popular in protein engineering and other biomedical research fields. Statistics of the libraries are useful to guide and evaluate randomized library construction. Previous works give only the mean of the number of unique sequences in the library, and they can only handle an equal molar ratio of the four nucleotides at a small number of mutation sites. We derive formulas to calculate the mean and variance of the number of unique sequences in libraries generated by cassette mutagenesis with mixtures of arbitrary nucleotide ratios. A computer program was developed that utilizes an arbitrary-precision numerical software package to calculate the statistics of large libraries. The statistics of a library with mutations in more than $20$ amino acids can be calculated easily. Results show that the nucleotide ratios have significant effects on these statistics. The more skewed the ratio, the larger the library size needed to obtain the same expected number of unique sequences. The program is freely available at \url{http://graphics.med.yale.edu/cgi-bin/lib_comp.pl}.
[ { "created": "Tue, 21 Oct 2014 20:43:32 GMT", "version": "v1" } ]
2024-05-28
[ [ "Kong", "Yong", "" ] ]
Randomized libraries are increasingly popular in protein engineering and other biomedical research fields. Statistics of the libraries are useful to guide and evaluate randomized library construction. Previous works give only the mean of the number of unique sequences in the library, and they can only handle an equal molar ratio of the four nucleotides at a small number of mutation sites. We derive formulas to calculate the mean and variance of the number of unique sequences in libraries generated by cassette mutagenesis with mixtures of arbitrary nucleotide ratios. A computer program was developed that utilizes an arbitrary-precision numerical software package to calculate the statistics of large libraries. The statistics of a library with mutations in more than $20$ amino acids can be calculated easily. Results show that the nucleotide ratios have significant effects on these statistics. The more skewed the ratio, the larger the library size needed to obtain the same expected number of unique sequences. The program is freely available at \url{http://graphics.med.yale.edu/cgi-bin/lib_comp.pl}.
q-bio/0703004
Mauro Copelli
Mauro Copelli and Paulo R. A. Campos
Excitable Scale Free Networks
6 pages, 4 figures
Eur. Phys. J. B 56, 273-278 (2007)
10.1140/epjb/e2007-00114-7
null
q-bio.NC cond-mat.dis-nn nlin.CG physics.bio-ph
null
When a simple excitable system is continuously stimulated by a Poissonian external source, the response function (mean activity versus stimulus rate) generally shows a linear saturating shape. This is experimentally verified in some classes of sensory neurons, which accordingly present a small dynamic range (defined as the interval of stimulus intensity which can be appropriately coded by the mean activity of the excitable element), usually about one or two decades only. The brain, on the other hand, can handle a significantly broader range of stimulus intensity, and a collective phenomenon involving the interaction among excitable neurons has been suggested to account for the enhancement of the dynamic range. Since the role of the pattern of such interactions is still unclear, here we investigate the performance of a scale-free (SF) network topology in this dynamic range problem. Specifically, we study the transfer function of disordered SF networks of excitable Greenberg-Hastings cellular automata. We observe that the dynamic range is maximum when the coupling among the elements is critical, corroborating a general reasoning recently proposed. Although the maximum dynamic range yielded by general SF networks is slightly worse than that of random networks, for special SF networks which lack loops the enhancement of the dynamic range can be dramatic, reaching nearly five decades. In order to understand the role of loops on the transfer function we propose a simple model in which the density of loops in the network can be gradually increased, and show that this is accompanied by a gradual decrease of dynamic range.
[ { "created": "Thu, 1 Mar 2007 20:03:21 GMT", "version": "v1" }, { "created": "Tue, 3 Apr 2007 21:41:25 GMT", "version": "v2" }, { "created": "Fri, 11 May 2007 12:44:45 GMT", "version": "v3" } ]
2007-05-23
[ [ "Copelli", "Mauro", "" ], [ "Campos", "Paulo R. A.", "" ] ]
When a simple excitable system is continuously stimulated by a Poissonian external source, the response function (mean activity versus stimulus rate) generally shows a linear saturating shape. This is experimentally verified in some classes of sensory neurons, which accordingly present a small dynamic range (defined as the interval of stimulus intensity which can be appropriately coded by the mean activity of the excitable element), usually about one or two decades only. The brain, on the other hand, can handle a significantly broader range of stimulus intensity, and a collective phenomenon involving the interaction among excitable neurons has been suggested to account for the enhancement of the dynamic range. Since the role of the pattern of such interactions is still unclear, here we investigate the performance of a scale-free (SF) network topology in this dynamic range problem. Specifically, we study the transfer function of disordered SF networks of excitable Greenberg-Hastings cellular automata. We observe that the dynamic range is maximum when the coupling among the elements is critical, corroborating a general reasoning recently proposed. Although the maximum dynamic range yielded by general SF networks is slightly worse than that of random networks, for special SF networks which lack loops the enhancement of the dynamic range can be dramatic, reaching nearly five decades. In order to understand the role of loops on the transfer function we propose a simple model in which the density of loops in the network can be gradually increased, and show that this is accompanied by a gradual decrease of dynamic range.
2011.01873
J. C. Phillips
J. C. Phillips
Self-Organized Networks: Darwinian Evolution of Myosin-1
20 pages, 9 figures
null
null
null
q-bio.OT cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cytoskeletons are self-organized networks based on polymerized proteins: actin, tubulin, and driven by motor proteins, such as myosin, kinesin and dynein. Their positive Darwinian evolution enables them to approach optimized functionality (self-organized criticality). The principal features of the eukaryotic evolution of the cytoskeleton motor protein myosin-1 parallel those of actin and tubulin, but also show striking differences connected to its dynamical function. Optimized (long) hydropathic waves characterize the molecular level Darwinian evolution towards optimized functionality (self-organized criticality). The N-terminal and central domains of myosin-1 have evolved in eukaryotes at different rates, with the central domain hydropathic extrema being optimally active in humans. A test shows that hydropathic scaling can yield accuracies of better than 1% near optimized functionality. Evolution towards synchronized level extrema is connected to a special function of Mys-1 in humans involving Golgi complexes.
[ { "created": "Wed, 28 Oct 2020 19:06:54 GMT", "version": "v1" } ]
2020-11-04
[ [ "Phillips", "J. C.", "" ] ]
Cytoskeletons are self-organized networks based on polymerized proteins: actin, tubulin, and driven by motor proteins, such as myosin, kinesin and dynein. Their positive Darwinian evolution enables them to approach optimized functionality (self-organized criticality). The principal features of the eukaryotic evolution of the cytoskeleton motor protein myosin-1 parallel those of actin and tubulin, but also show striking differences connected to its dynamical function. Optimized (long) hydropathic waves characterize the molecular level Darwinian evolution towards optimized functionality (self-organized criticality). The N-terminal and central domains of myosin-1 have evolved in eukaryotes at different rates, with the central domain hydropathic extrema being optimally active in humans. A test shows that hydropathic scaling can yield accuracies of better than 1% near optimized functionality. Evolution towards synchronized level extrema is connected to a special function of Mys-1 in humans involving Golgi complexes.
0807.0122
Tobias Reichenbach
Tobias Reichenbach and Erwin Frey
Instability of spatial patterns and its ambiguous impact on species diversity
4 pages, 3 figures and supplementary information. To appear in Phys. Rev. Lett.
Phys. Rev. Lett. 101, 058102 (2008)
10.1103/PhysRevLett.101.058102
LMU-ASC 12/08
q-bio.PE cond-mat.stat-mech nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-arrangement of individuals into spatial patterns often accompanies and promotes species diversity in ecological systems. Here, we investigate pattern formation arising from cyclic dominance of three species, operating near a bifurcation point. In its vicinity, an Eckhaus instability occurs, leading to convectively unstable "blurred" patterns. At the bifurcation point, stochastic effects dominate and induce counterintuitive effects on diversity: Large patterns, emerging for medium values of individuals' mobility, lead to rapid species extinction, while small patterns (low mobility) promote diversity, and high mobilities render spatial structures irrelevant. We provide a quantitative analysis of these phenomena, employing a complex Ginzburg-Landau equation.
[ { "created": "Tue, 1 Jul 2008 11:29:30 GMT", "version": "v1" } ]
2008-08-31
[ [ "Reichenbach", "Tobias", "" ], [ "Frey", "Erwin", "" ] ]
Self-arrangement of individuals into spatial patterns often accompanies and promotes species diversity in ecological systems. Here, we investigate pattern formation arising from cyclic dominance of three species, operating near a bifurcation point. In its vicinity, an Eckhaus instability occurs, leading to convectively unstable "blurred" patterns. At the bifurcation point, stochastic effects dominate and induce counterintuitive effects on diversity: Large patterns, emerging for medium values of individuals' mobility, lead to rapid species extinction, while small patterns (low mobility) promote diversity, and high mobilities render spatial structures irrelevant. We provide a quantitative analysis of these phenomena, employing a complex Ginzburg-Landau equation.
1703.05490
Sergei Vakulenko
V. Kozlov, S. Vakulenko and U. Wennergren
Biodiversity, extinctions and evolution of ecosystems with shared resources
null
null
10.1103/PhysRevE.95.032413
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the formation of stable ecological networks where many species share the same resource. We show that such a stable ecosystem naturally occurs as a result of extinctions. We obtain an analytical relation for the number of coexisting species and find a relation describing how many species may go extinct as a result of a sharp environmental change. We introduce a special parameter that is a combination of species traits and resource characteristics used in the model formulation. This parameter describes the pressure on the system to converge, by extinctions. When this stress parameter is large, we find that the species traits concentrate at certain values. This stress parameter thereby determines the level of final biodiversity of the system. Moreover, we show that the dynamics of this limit system can be described by simple differential equations.
[ { "created": "Thu, 16 Mar 2017 07:07:43 GMT", "version": "v1" } ]
2017-04-05
[ [ "Kozlov", "V.", "" ], [ "Vakulenko", "S.", "" ], [ "Wennergren", "U.", "" ] ]
We investigate the formation of stable ecological networks where many species share the same resource. We show that such a stable ecosystem naturally occurs as a result of extinctions. We obtain an analytical relation for the number of coexisting species and find a relation describing how many species may go extinct as a result of a sharp environmental change. We introduce a special parameter that is a combination of species traits and resource characteristics used in the model formulation. This parameter describes the pressure on the system to converge, by extinctions. When this stress parameter is large, we find that the species traits concentrate at certain values. This stress parameter thereby determines the level of final biodiversity of the system. Moreover, we show that the dynamics of this limit system can be described by simple differential equations.
1611.00833
Osman Kahraman
Osman Kahraman, Peter D. Koch, William S. Klug, Christoph A. Haselwandter
Architecture and Function of Mechanosensitive Membrane Protein Lattices
null
Sci. Rep. 6, 19214 (2016)
10.1038/srep19214
null
q-bio.BM cond-mat.soft physics.bio-ph q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experiments have revealed that membrane proteins can form two-dimensional clusters with regular translational and orientational protein arrangements, which may allow cells to modulate protein function. However, the physical mechanisms yielding supramolecular organization and collective function of membrane proteins remain largely unknown. Here we show that bilayer-mediated elastic interactions between membrane proteins can yield regular and distinctive lattice architectures of protein clusters, and may provide a link between lattice architecture and lattice function. Using the mechanosensitive channel of large conductance (MscL) as a model system, we obtain relations between the shape of MscL and the supramolecular architecture of MscL lattices. We predict that the tetrameric and pentameric MscL symmetries observed in previous structural studies yield distinct lattice architectures of MscL clusters and that, in turn, these distinct MscL lattice architectures yield distinct lattice activation barriers. Our results suggest general physical mechanisms linking protein symmetry, the lattice architecture of membrane protein clusters, and the collective function of membrane protein lattices.
[ { "created": "Wed, 2 Nov 2016 22:50:55 GMT", "version": "v1" } ]
2016-11-04
[ [ "Kahraman", "Osman", "" ], [ "Koch", "Peter D.", "" ], [ "Klug", "William S.", "" ], [ "Haselwandter", "Christoph A.", "" ] ]
Experiments have revealed that membrane proteins can form two-dimensional clusters with regular translational and orientational protein arrangements, which may allow cells to modulate protein function. However, the physical mechanisms yielding supramolecular organization and collective function of membrane proteins remain largely unknown. Here we show that bilayer-mediated elastic interactions between membrane proteins can yield regular and distinctive lattice architectures of protein clusters, and may provide a link between lattice architecture and lattice function. Using the mechanosensitive channel of large conductance (MscL) as a model system, we obtain relations between the shape of MscL and the supramolecular architecture of MscL lattices. We predict that the tetrameric and pentameric MscL symmetries observed in previous structural studies yield distinct lattice architectures of MscL clusters and that, in turn, these distinct MscL lattice architectures yield distinct lattice activation barriers. Our results suggest general physical mechanisms linking protein symmetry, the lattice architecture of membrane protein clusters, and the collective function of membrane protein lattices.
2111.13537
Sen Cheng
Zahra Fayyaz, Aya Altamimi, Sen Cheng, Laurenz Wiskott
A model of semantic completion in generative episodic memory
15 pages, 9 figures, 58 references
null
null
null
q-bio.NC cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Many different studies have suggested that episodic memory is a generative process, but most computational models adopt a storage view. In this work, we propose a computational model for generative episodic memory. It is based on the central hypothesis that the hippocampus stores and retrieves selected aspects of an episode as a memory trace, which is necessarily incomplete. At recall, the neocortex reasonably fills in the missing information based on general semantic information in a process we call semantic completion. As episodes we use images of digits (MNIST) augmented by different backgrounds representing context. Our model is based on a VQ-VAE which generates a compressed latent representation in the form of an index matrix, which still has some spatial resolution. We assume that attention selects some parts of the index matrix while others are discarded; this then represents the gist of the episode and is stored as a memory trace. At recall, the missing parts are filled in by a PixelCNN, modeling semantic completion, and the completed index matrix is then decoded into a full image by the VQ-VAE. The model is able to complete missing parts of a memory trace in a semantically plausible way, up to the point where it can generate plausible images from scratch. Due to the combinatorics in the index matrix, the model generalizes well to images not trained on. Compression as well as semantic completion contribute to a strong reduction in memory requirements and robustness to noise. Finally, we also model an episodic memory experiment and can reproduce that semantically congruent contexts are always recalled better than incongruent ones, that high attention levels improve memory accuracy in both cases, and that contexts that are not remembered correctly are more often remembered semantically congruently than completely wrong.
[ { "created": "Fri, 26 Nov 2021 15:14:17 GMT", "version": "v1" } ]
2021-11-29
[ [ "Fayyaz", "Zahra", "" ], [ "Altamimi", "Aya", "" ], [ "Cheng", "Sen", "" ], [ "Wiskott", "Laurenz", "" ] ]
Many different studies have suggested that episodic memory is a generative process, but most computational models adopt a storage view. In this work, we propose a computational model for generative episodic memory. It is based on the central hypothesis that the hippocampus stores and retrieves selected aspects of an episode as a memory trace, which is necessarily incomplete. At recall, the neocortex reasonably fills in the missing information based on general semantic information in a process we call semantic completion. As episodes we use images of digits (MNIST) augmented by different backgrounds representing context. Our model is based on a VQ-VAE which generates a compressed latent representation in the form of an index matrix, which still has some spatial resolution. We assume that attention selects some parts of the index matrix while others are discarded; this then represents the gist of the episode and is stored as a memory trace. At recall, the missing parts are filled in by a PixelCNN, modeling semantic completion, and the completed index matrix is then decoded into a full image by the VQ-VAE. The model is able to complete missing parts of a memory trace in a semantically plausible way, up to the point where it can generate plausible images from scratch. Due to the combinatorics in the index matrix, the model generalizes well to images not trained on. Compression as well as semantic completion contribute to a strong reduction in memory requirements and robustness to noise. Finally, we also model an episodic memory experiment and can reproduce that semantically congruent contexts are always recalled better than incongruent ones, that high attention levels improve memory accuracy in both cases, and that contexts that are not remembered correctly are more often remembered semantically congruently than completely wrong.
1103.1653
Adilson Enio Motter
Sagar Sahasrabudhe and Adilson E. Motter
Rescuing ecosystems from extinction cascades through compensatory perturbations
The supplementary information file can be downloaded from here: http://dyn.phys.northwestern.edu/ncomms1163-s1.pdf. The published version of the article is also available here: http://dyn.phys.northwestern.edu/ncomms1163.pdf
Nature Communications 2, 170 (2011)
10.1038/ncomms1163
null
q-bio.PE cond-mat.dis-nn nlin.AO nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Food-web perturbations stemming from climate change, overexploitation, invasive species, and habitat degradation often cause an initial loss of species that results in a cascade of secondary extinctions, posing considerable challenges to ecosystem conservation efforts. Here we devise a systematic network-based approach to reduce the number of secondary extinctions using a predictive modeling framework. We show that the extinction of one species can often be compensated by the concurrent removal or population suppression of other specific species, which is a counterintuitive effect not previously tested in complex food webs. These compensatory perturbations frequently involve long-range interactions that are not evident from local predator-prey relationships. In numerous cases, even the early removal of a species that would eventually go extinct in the cascade is found to significantly reduce the number of cascading extinctions. These compensatory perturbations exploit only resources available in the system, and illustrate the potential of human intervention combined with predictive modeling for ecosystem management.
[ { "created": "Tue, 8 Mar 2011 22:00:38 GMT", "version": "v1" } ]
2011-03-10
[ [ "Sahasrabudhe", "Sagar", "" ], [ "Motter", "Adilson E.", "" ] ]
Food-web perturbations stemming from climate change, overexploitation, invasive species, and habitat degradation often cause an initial loss of species that results in a cascade of secondary extinctions, posing considerable challenges to ecosystem conservation efforts. Here we devise a systematic network-based approach to reduce the number of secondary extinctions using a predictive modeling framework. We show that the extinction of one species can often be compensated by the concurrent removal or population suppression of other specific species, which is a counterintuitive effect not previously tested in complex food webs. These compensatory perturbations frequently involve long-range interactions that are not evident from local predator-prey relationships. In numerous cases, even the early removal of a species that would eventually go extinct in the cascade is found to significantly reduce the number of cascading extinctions. These compensatory perturbations exploit only resources available in the system, and illustrate the potential of human intervention combined with predictive modeling for ecosystem management.