Dataset columns (type and observed range):

  id              string, length 9-13
  submitter       string, length 4-48
  authors         string, length 4-9.62k
  title           string, length 4-343
  comments        string, length 2-480
  journal-ref     string, length 9-309
  doi             string, length 12-138
  report-no       string, 277 distinct values
  categories      string, length 8-87
  license         string, 9 distinct values
  orig_abstract   string, length 27-3.76k
  versions        list, length 1-15
  update_date     string, length 10-10
  authors_parsed  list, length 1-147
  abstract        string, length 24-3.75k (identical to orig_abstract in every record shown)
q-bio/0407014
Yixin Guo
Yixin Guo and Carson C. Chow
Existence and Stability of Standing Pulses in Neural Networks: II Stability
31 pages, 16 figures, submitted to SIAM Journal on Applied Dynamical Systems
null
null
null
q-bio.NC q-bio.QM
null
We analyze the stability of standing pulse solutions of a neural network integro-differential equation. The network consists of a coarse-grained layer of neurons synaptically connected by lateral inhibition with a non-saturating nonlinear gain function. When two standing single-pulse solutions coexist, the small pulse is unstable, and the large pulse is stable. The large single-pulse is bistable with the ``all-off'' state. This bistable localized activity may have strong implications for the mechanism underlying working memory. We show that dimple pulses have similar stability properties to large pulses but double pulses are unstable.
[ { "created": "Thu, 8 Jul 2004 05:57:05 GMT", "version": "v1" } ]
2007-05-23
[ [ "Guo", "Yixin", "" ], [ "Chow", "Carson C.", "" ] ]
0708.3098
Sourav Chatterji
Sourav Chatterji, Ichitaro Yamazaki, Zhaojun Bai and Jonathan Eisen
CompostBin: A DNA composition-based algorithm for binning environmental shotgun reads
null
null
null
null
q-bio.GN
null
A major hindrance to studies of microbial diversity has been that the vast majority of microbes cannot be cultured in the laboratory and thus are not amenable to traditional methods of characterization. Environmental shotgun sequencing (ESS) overcomes this hurdle by sequencing the DNA from the organisms present in a microbial community. The interpretation of this metagenomic data can be greatly facilitated by associating every sequence read with its source organism. We report the development of CompostBin, a DNA composition-based algorithm for analyzing metagenomic sequence reads and distributing them into taxon-specific bins. Unlike previous methods that seek to bin assembled contigs and often require training on known reference genomes, CompostBin has the ability to accurately bin raw sequence reads without need for assembly or training. It applies principal component analysis to project the data into an informative lower-dimensional space, and then uses the normalized cut clustering algorithm on this filtered data set to classify sequences into taxon-specific bins. We demonstrate the algorithm's accuracy on a variety of simulated data sets and on one metagenomic data set with known species assignments. CompostBin is a work in progress, with several refinements of the algorithm planned for the future.
[ { "created": "Wed, 22 Aug 2007 21:44:45 GMT", "version": "v1" } ]
2007-08-24
[ [ "Chatterji", "Sourav", "" ], [ "Yamazaki", "Ichitaro", "" ], [ "Bai", "Zhaojun", "" ], [ "Eisen", "Jonathan", "" ] ]
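The CompostBin record above describes a concrete pipeline: represent each read by its DNA composition, project with principal component analysis, then split reads into taxon-specific bins with normalized-cut clustering. Below is a toy sketch of that idea, not the authors' implementation: the feature choice (dinucleotide frequencies), the Gaussian similarity graph, and all function names are our own, and the spectral bisection shown is only the standard relaxation of the normalized cut.

```python
# Toy CompostBin-style binning: k-mer features -> PCA -> spectral bisection.
import numpy as np

def kmer_freqs(read, k=2):
    """Normalized dinucleotide frequency vector of a read."""
    alphabet = "ACGT"
    idx = {x + y: i for i, (x, y) in
           enumerate((a, b) for a in alphabet for b in alphabet)}
    v = np.zeros(len(idx))
    for i in range(len(read) - k + 1):
        kmer = read[i:i + k]
        if kmer in idx:
            v[idx[kmer]] += 1
    return v / max(v.sum(), 1)

def pca_project(X, dims=2):
    """Project centered data onto its top principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:dims].T

def spectral_bisect(Y, sigma=0.5):
    """Two-way split via the Fiedler vector of the normalized Laplacian
    (the standard relaxation of the normalized cut)."""
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))          # similarity graph
    D = W.sum(1)
    L = np.eye(len(Y)) - W / np.sqrt(np.outer(D, D))
    _, vecs = np.linalg.eigh(L)                  # eigenvalues ascending
    return (vecs[:, 1] > 0).astype(int)          # sign of Fiedler vector

# Two synthetic "species" with very different composition:
reads = ["ATATATATATAT"] * 5 + ["GCGCGCGCGCGC"] * 5
X = np.vstack([kmer_freqs(r) for r in reads])
labels = spectral_bisect(pca_project(X))
print(labels)
```

On this caricature input the two composition classes land in separate bins; real metagenomic reads are far noisier, which is why the paper works in a PCA-filtered space first.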
2004.04583
Jack Noonan
Anatoly Zhigljavsky, Roger Whitaker, Ivan Fesenko, Kobi Kremnizer and Jack Noonan
Comparison of different exit scenarios from the lock-down for COVID-19 epidemic in the UK and assessing uncertainty of the predictions
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We model the further development of the COVID-19 epidemic in the UK given the current data and assuming different scenarios of handling the epidemic. In this research, we further extend the stochastic model suggested in \cite{us} and incorporate into it all the knowledge available to us about parameters characterising the behaviour of the virus and the illness it induces. The models we use are flexible, comprehensive, fast to run and allow us to incorporate the following: -time-dependent strategies of handling the epidemic; -spatial heterogeneity of the population and heterogeneity of the development of the epidemic in different areas; -special characteristics of particular groups of people, especially people with specific medical histories and the elderly. Standard epidemiological models such as SIR and many of its modifications are not flexible enough, and hence not precise enough, for studies that require the features above. Decision-makers gain serious benefits from using better and more flexible models, as they can avail themselves of nuanced lock-downs and better plan the exit strategy based on local population data, the different stages of the epidemic in different areas, and specific recommendations to specific groups of people; all this results in a lesser impact on the economy and improved forecasts of regional demand upon the NHS, allowing for intelligent resource allocation.
[ { "created": "Thu, 9 Apr 2020 15:07:00 GMT", "version": "v1" } ]
2020-04-10
[ [ "Zhigljavsky", "Anatoly", "" ], [ "Whitaker", "Roger", "" ], [ "Fesenko", "Ivan", "" ], [ "Kremnizer", "Kobi", "" ], [ "Noonan", "Jack", "" ] ]
2207.12195
Robert Allen
Robert F. Allen, Cassandra Jens, Theodore J. Wendt
Perturbations in epidemiological models: When zombies attack, we can survive!
null
Lett. Biomath. 1.2 (2014), pp. 173-180
10.30707/LiB1.2Allen
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate the existence of stability-changing bifurcations in epidemiological models used to study the spread of zombiism through a human population. These bifurcations show that although linear instability of disease-free equilibria may exist in a model, perturbations of model parameters may result in stability. Thus, we show that humans can survive a zombie outbreak.
[ { "created": "Mon, 25 Jul 2022 13:34:09 GMT", "version": "v1" } ]
2022-07-26
[ [ "Allen", "Robert F.", "" ], [ "Jens", "Cassandra", "" ], [ "Wendt", "Theodore J.", "" ] ]
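The stability argument in the zombie-epidemic record above comes down to an eigenvalue of the linearization changing sign as a model parameter is perturbed. A minimal sketch, assuming a generic SZR-style model (not the paper's exact system): with a zombie removal rate delta, the zombie-free equilibrium is stable exactly when the linearized zombie growth rate beta*S - delta is negative, so perturbing delta past beta*S is the stability-changing bifurcation.

```python
# Sketch: sign of the leading eigenvalue at the zombie-free equilibrium.
# Model form and parameter names (beta, delta, s_eq) are illustrative.
def zombie_growth_rate(beta, delta, s_eq=1.0):
    """Eigenvalue of the linearization in the zombie direction:
    new zombies appear at rate beta*S, are removed at rate delta."""
    return beta * s_eq - delta

# Unperturbed model (delta = 0): any outbreak grows.
print(zombie_growth_rate(0.5, 0.0) > 0)   # True: equilibrium unstable
# Perturbed model (delta > beta*S): humanity survives.
print(zombie_growth_rate(0.5, 0.7) > 0)   # False: equilibrium stable
```

The bifurcation point sits at delta = beta*S; crossing it flips the disease-free equilibrium from linearly unstable to stable, which is the paper's survival mechanism in miniature.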
2005.10797
Marcelo Menezes Morato
Marcelo Menezes Morato, Saulo Benchimol Bastos, Daniel Oliveira Cajueiro and Julio Elias Normey-Rico
An Optimal Predictive Control Strategy for COVID-19 (SARS-CoV-2) Social Distancing Policies in Brazil
30 pages, 9 Figures, Preprint submitted to Annual Reviews in Control
null
null
null
q-bio.PE cs.SY eess.SY math.DS math.OC
http://creativecommons.org/licenses/by-nc-sa/4.0/
The global COVID-19 pandemic (SARS-CoV-2 virus) is the defining health crisis of our century. In the absence of vaccines and drugs that can help to fight it, the world's solution for controlling the spread has been public social distancing measures that avoid saturating the health system. In this context, we investigate a Model Predictive Control (MPC) framework to determine the timing and duration of social distancing policies. We use Brazilian data from March to May of 2020. The available data on the number of infected individuals and deaths suffer from under-reporting, due to the absence of mass testing and the significant presence of asymptomatic individuals. We estimate variations of the SIR model using an uncertainty-weighted Least-Squares criterion that considers both nominal and inconsistent-data conditions. Moreover, we add to our versions of the SIR model an additional dynamic state variable that mimics the response of the population to the government's social distancing policies, which affects the speed of COVID-19 transmission. Our control framework is cast in a mixed-logical formalism, since the decision variable is necessarily binary (the existence or absence of a social distancing policy). A dwell-time constraint is included to avoid harsh switching between these two states. Finally, we present simulation results to illustrate how such an optimal control policy would operate. These results indicate that no social distancing measure should be relaxed before mid-August 2020. If relaxations are necessary, they should not be performed before that date and should come in short periods, no longer than 25 days. This paradigm would continue roughly until January 2021. The second peak of infections, forecast for the beginning of October, can be reduced if the periods of no-isolation days are shortened.
[ { "created": "Thu, 21 May 2020 17:24:34 GMT", "version": "v1" } ]
2020-05-22
[ [ "Morato", "Marcelo Menezes", "" ], [ "Bastos", "Saulo Benchimol", "" ], [ "Cajueiro", "Daniel Oliveira", "" ], [ "Normey-Rico", "Julio Elias", "" ] ]
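The control structure the record above describes can be sketched in a few lines: a discrete-time SIR model where a binary decision u (distancing on/off) scales the transmission rate, subject to a dwell-time constraint that forbids switching more often than every `dwell` days. Everything below is illustrative: the parameter values and the simple threshold policy stand in for the paper's actual MPC law.

```python
# Toy SIR with a binary distancing control and a dwell-time constraint.
def simulate(policy, beta=0.3, gamma=0.1, reduction=0.5, days=200, dwell=25):
    s, i, r = 0.99, 0.01, 0.0
    u, last_switch, peak = 0, -dwell, i
    for t in range(days):
        want = policy(i)
        # Dwell-time constraint: only switch if `dwell` days have passed.
        if want != u and t - last_switch >= dwell:
            u, last_switch = want, t
        b = beta * (reduction if u else 1.0)   # distancing halves transmission
        new_inf = b * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

peak_free = simulate(lambda i: 0)                     # never distance
peak_ctrl = simulate(lambda i: 1 if i > 0.02 else 0)  # threshold policy
print(peak_free, peak_ctrl)
```

Even this crude threshold policy flattens the infection peak substantially; the paper's MPC instead optimizes the on/off schedule over a prediction horizon, with the same binary-decision and dwell-time structure.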
1812.10639
Hyeong-Chai Jeong
Jiwon Bahk, Seung Ki Baek, Hyeong-Chai Jeong
Long-range prisoner's dilemma game on a cycle
7 pages, 8 figures
null
10.1103/PhysRevE.99.012410
null
q-bio.PE physics.bio-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate evolutionary dynamics of altruism with long-range interaction on a cycle. The interaction between individuals is described by a simplified version of the prisoner's dilemma (PD) game in which the payoffs are parameterized by $c$, the cost of a cooperative action. In our model, the probabilities of the game interaction and competition decay algebraically with $r_{AB}$, the distance between two players $A$ and $B$, but with different exponents: That is, the probability to play the PD game is proportional to $r_{AB}^{-\alpha}$. If player $A$ is chosen for death, on the other hand, the probability for $B$ to occupy the empty site is proportional to $r_{AB}^{-\beta}$. In a limiting case of $\beta\to\infty$, where the competition for an empty site occurs between its nearest neighbors only, we analytically find the condition for the proliferation of altruism in terms of $c_{th}$, a threshold of $c$ below which altruism prevails. For finite $\beta$, we conjecture a formula for $c_{th}$ as a function of $\alpha$ and $\beta$. We also propose a numerical method to locate $c_{th}$, according to which we observe excellent agreement with the conjecture even when the selection strength is of considerable magnitude.
[ { "created": "Thu, 27 Dec 2018 06:16:55 GMT", "version": "v1" } ]
2019-01-30
[ [ "Bahk", "Jiwon", "" ], [ "Baek", "Seung Ki", "" ], [ "Jeong", "Hyeong-Chai", "" ] ]
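The core mechanism in the record above is the algebraic decay of interaction probability with ring distance: player A plays the PD game with B with probability proportional to r_AB^(-alpha). A small sketch of that sampling rule, with an illustrative donation-game payoff (cost c, benefit b); the payoff convention and all names are our own, not the paper's parameterization.

```python
# Long-range partner selection on a cycle: P(B) proportional to r_AB^(-alpha).
import random

def ring_distance(i, j, n):
    d = abs(i - j) % n
    return min(d, n - d)

def pick_partner(i, n, alpha, rng):
    """Draw a game partner for site i with weight r^(-alpha)."""
    others = [j for j in range(n) if j != i]
    weights = [ring_distance(i, j, n) ** (-alpha) for j in others]
    return rng.choices(others, weights=weights, k=1)[0]

def pd_payoff(my_coop, other_coop, c=0.3, b=1.0):
    """Simplified donation-game payoff: cooperating costs c, confers b."""
    return (b if other_coop else 0.0) - (c if my_coop else 0.0)

rng = random.Random(0)
n, alpha = 20, 2.0
partner_counts = [0] * n
for _ in range(5000):
    partner_counts[pick_partner(0, n, alpha, rng)] += 1
# Nearest neighbours (sites 1 and 19) dominate over the antipodal site 10.
print(partner_counts[1], partner_counts[10])
```

As alpha grows, interaction concentrates on nearest neighbours, which is the regime where the paper's analytic threshold c_th for the survival of altruism applies.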
1801.10418
Mareike Fischer
Mareike Fischer
Extremal values of the Sackin tree balance index
30 pages, 4 figures
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tree balance plays an important role in different research areas like theoretical computer science and mathematical phylogenetics. For example, it has long been known that under the Yule model, a pure birth process, imbalanced trees are more likely than balanced ones. Also, concerning ordered search trees, more balanced ones allow for more efficient data structuring than imbalanced ones. Therefore, different methods to measure the balance of trees were introduced. The Sackin index is one of the most frequently used measures for this purpose. In many contexts, statements about the minimal and maximal values of this index have been discussed, but formal proofs have only been provided for some of them, and only in the context of ordered binary (search) trees, not for general rooted trees. Moreover, while the number of trees with maximal Sackin index as well as the number of trees with minimal Sackin index when the number of leaves is a power of 2 are relatively easy to understand, the number of trees with minimal Sackin index for all other numbers of leaves has been completely unknown. In this manuscript, we extend the findings on trees with minimal and maximal Sackin indices from the literature on ordered trees and subsequently use our results to provide formulas to explicitly calculate the numbers of such trees. We also extend previous studies by analyzing the case when the underlying trees need not be binary. Finally, we use our results to contribute both to the phylogenetic as well as the computer scientific literature by using the new findings on Sackin minimal and maximal trees in order to derive formulas to calculate the number of both minimal and maximal phylogenetic trees as well as minimal and maximal ordered trees both in the binary and non-binary settings. All our results have been implemented in the Mathematica package SackinMinimizer, which has been made publicly available.
[ { "created": "Wed, 31 Jan 2018 12:09:52 GMT", "version": "v1" }, { "created": "Fri, 2 Mar 2018 09:39:51 GMT", "version": "v2" }, { "created": "Sat, 26 Jan 2019 17:57:23 GMT", "version": "v3" }, { "created": "Wed, 27 Mar 2019 12:59:35 GMT", "version": "v4" }, { "cr...
2020-12-18
[ [ "Fischer", "Mareike", "" ] ]
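The Sackin index studied in the record above is simply the sum of the depths of all leaves of a rooted tree. A minimal sketch; the nested-tuple tree representation and function name are our own choices, not taken from the paper's SackinMinimizer package.

```python
# Sackin index: sum over all leaves of their depth below the root.
def sackin(tree, depth=0):
    """A tree is either a leaf (any non-tuple value) or a tuple of subtrees."""
    if not isinstance(tree, tuple):   # leaf: contributes its depth
        return depth
    return sum(sackin(child, depth + 1) for child in tree)

# Fully balanced binary tree on 4 leaves: every leaf at depth 2 -> index 8.
balanced = (("a", "b"), ("c", "d"))
# Caterpillar tree on 4 leaves: depths 1, 2, 3, 3 -> index 9 (maximal).
caterpillar = ("a", ("b", ("c", "d")))

print(sackin(balanced))     # -> 8
print(sackin(caterpillar))  # -> 9
```

On 4 leaves (a power of 2) the balanced tree is the unique Sackin-minimal shape and the caterpillar the maximal one; counting the minimal trees for other leaf numbers is exactly the question the paper resolves.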
2309.14404
Zebin Ma
Zebin Ma, Yonglin Zou, Xiaobin Huang, Wenjin Yan, Hao Xu, Jiexin Yang, Ying Zhang, Jinqi Huang
pLMFPPred: a novel approach for accurate prediction of functional peptides integrating embedding from pre-trained protein language model and imbalanced learning
20 pages, 5 figures,under review
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Functional peptides have the potential to treat a variety of diseases. Their good therapeutic efficacy and low toxicity make them ideal therapeutic agents. Artificial intelligence-based computational strategies can help quickly identify new functional peptides from collections of protein sequences and discover their different functions. Using protein language model-based embeddings (ESM-2), we developed a tool called pLMFPPred (Protein Language Model-based Functional Peptide Predictor) for predicting functional peptides and identifying toxic peptides. We also introduced SMOTE-TOMEK data synthesis sampling and Shapley value-based feature selection techniques to relieve data imbalance issues and reduce computational costs. On a validated independent test set, pLMFPPred achieved accuracy, area under the receiver operating characteristic curve, and F1-score values of 0.974, 0.99, and 0.974, respectively. Comparative experiments show that pLMFPPred outperforms existing methods for predicting functional peptides on all three metrics, and it represents a new computational method for this task.
[ { "created": "Mon, 25 Sep 2023 17:57:39 GMT", "version": "v1" } ]
2023-09-27
[ [ "Ma", "Zebin", "" ], [ "Zou", "Yonglin", "" ], [ "Huang", "Xiaobin", "" ], [ "Yan", "Wenjin", "" ], [ "Xu", "Hao", "" ], [ "Yang", "Jiexin", "" ], [ "Zhang", "Ying", "" ], [ "Huang", "Jinqi", ...
1801.08137
Jean Gaudart
Louis Gaudart and Jean Gaudart
Nerve impulse propagation and wavelet theory
null
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-sa/4.0/
A luminous stimulus penetrating the retina is converted into a nerve message. Ganglion cells give a response that may be approximated by a wavelet. We determine a function PSI associated with the propagation of nerve impulses along an axon. Each kind of channel (inward and outward) may be open or closed, depending on the transmembrane potential, and the transition between these states is a random event. Using quantum relations, we estimate the number of channels likely to switch between the closed and open states. Our quantum approach first calculates the energy level distribution in a channel. We obtain, for each kind of channel, the empty-level density and the filled-level density of the open and closed conformations. The joint density of levels provides the number of transitions between the closed and open conformations. The algebraic sum of inward and outward open channels is a function PSI of the normalized energy E. The function PSI satisfies the major properties of a wavelet. We calculate the functional dependence of the axon membrane conductance on the transmembrane energy.
[ { "created": "Wed, 24 Jan 2018 17:35:27 GMT", "version": "v1" } ]
2018-01-26
[ [ "Gaudart", "Louis", "" ], [ "Gaudart", "Jean", "" ] ]
2003.00305
Tao Zhou
Duanbing Chen, Tao Zhou
Control Efficacy on COVID-19
18 pages, 5 figures, 1 table
PLoS ONE 16 (2021) e0246715
10.1371/journal.pone.0246715
null
q-bio.PE physics.data-an physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a Monte-Carlo method for estimating the temporal reproduction number without complete information about the symptom onsets of all cases. Province-level analysis demonstrates the huge success of the Chinese control measures on COVID-19: provinces' reproduction numbers quickly decreased to below 1 within just one week of taking action.
[ { "created": "Sat, 29 Feb 2020 17:08:56 GMT", "version": "v1" } ]
2021-02-24
[ [ "Chen", "Duanbing", "" ], [ "Zhou", "Tao", "" ] ]
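The idea in the record above can be sketched as follows: when exact symptom-onset dates are unknown, impute them by Monte-Carlo sampling a reporting delay for each case, then estimate the temporal reproduction number R_t as the ratio of new onsets to the infection pressure from recent onsets (a standard renewal-equation estimator). The specific delay and serial-interval distributions below are illustrative inventions, not the authors' choices.

```python
# Monte-Carlo imputation of onsets + renewal-equation estimate of R_t.
import random

def estimate_rt(report_days, serial_w, delay_max=3, n_samples=200,
                t_eval=20, rng=None):
    rng = rng or random.Random(1)
    estimates = []
    for _ in range(n_samples):
        # Impute each case's onset by subtracting a random reporting delay.
        onsets = [d - rng.randint(0, delay_max) for d in report_days]
        counts = {}
        for d in onsets:
            counts[d] = counts.get(d, 0) + 1
        # Infection pressure: serial-interval-weighted recent onset counts.
        pressure = sum(serial_w[s] * counts.get(t_eval - s, 0)
                       for s in range(1, len(serial_w)))
        if pressure > 0:
            estimates.append(counts.get(t_eval, 0) / pressure)
    return sum(estimates) / len(estimates)

# Synthetic outbreak: cases doubling every ~4 days, reported with delay.
report_days = [d for d in range(25) for _ in range(int(2 ** (d / 4)))]
serial_w = [0.0, 0.25, 0.5, 0.25]   # toy serial-interval weights, sum 1
rt = estimate_rt(report_days, serial_w)
print(round(rt, 2))
```

For a growing epidemic the averaged estimate comes out above 1, and the Monte-Carlo averaging over imputed onsets is what replaces the missing exact onset information.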
1612.08935
Leenoy Meshulam
Leenoy Meshulam, Jeffrey L. Gauthier, Carlos D. Brody, David W. Tank, and William Bialek
Collective behavior of place and non-place neurons in the hippocampal network
null
null
10.1016/j.neuron.2017.10.027
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Discussions of the hippocampus often focus on place cells, but many neurons are not place cells in any given environment. Here we describe the collective activity in such mixed populations, treating place and non-place cells on the same footing. We start with optical imaging experiments on CA1 in mice as they run along a virtual linear track, and use maximum entropy methods to approximate the distribution of patterns of activity in the population, matching the correlations between pairs of cells but otherwise assuming as little structure as possible. We find that these simple models accurately predict the activity of each neuron from the state of all the other neurons in the network, regardless of how well that neuron codes for position. These and other results suggest that place cells are not a distinct sub-network, but part of a larger system that encodes, collectively, more than just place information.
[ { "created": "Wed, 28 Dec 2016 17:30:54 GMT", "version": "v1" } ]
2019-01-01
[ [ "Meshulam", "Leenoy", "" ], [ "Gauthier", "Jeffrey L.", "" ], [ "Brody", "Carlos D.", "" ], [ "Tank", "David W.", "" ], [ "Bialek", "William", "" ] ]
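The modeling approach in the record above is a pairwise maximum-entropy (Ising-like) model P(x) proportional to exp(sum h_i x_i + sum J_ij x_i x_j), fitted so that the model's means and pairwise correlations match the data's. A toy sketch for a 3-neuron system small enough to fit by exact gradient ascent; real analyses use far more neurons and approximate fitting, and the "data" patterns here are invented.

```python
# Exact fit of a pairwise maximum-entropy model to 3-neuron binary data.
import itertools, math

N = 3
STATES = list(itertools.product([0, 1], repeat=N))

def model_moments(h, J):
    """Means and pairwise correlations under P(x) ~ exp(h.x + x.J.x)."""
    e = [math.exp(sum(h[i] * s[i] for i in range(N)) +
                  sum(J[i][j] * s[i] * s[j]
                      for i in range(N) for j in range(i + 1, N)))
         for s in STATES]
    z = sum(e)
    p = [x / z for x in e]
    m = [sum(pk * s[i] for s, pk in zip(STATES, p)) for i in range(N)]
    c = {(i, j): sum(pk * s[i] * s[j] for s, pk in zip(STATES, p))
         for i in range(N) for j in range(i + 1, N)}
    return m, c

# Invented activity patterns standing in for recorded data.
data = [(1, 1, 0)] * 4 + [(0, 0, 0)] * 3 + [(1, 0, 1)] * 2 + [(0, 1, 1)] * 1
m_data = [sum(s[i] for s in data) / len(data) for i in range(N)]
c_data = {(i, j): sum(s[i] * s[j] for s in data) / len(data)
          for i in range(N) for j in range(i + 1, N)}

h = [0.0] * N
J = [[0.0] * N for _ in range(N)]
for _ in range(4000):   # gradient ascent on the (convex) log-likelihood
    m, c = model_moments(h, J)
    for i in range(N):
        h[i] += 0.5 * (m_data[i] - m[i])
        for j in range(i + 1, N):
            J[i][j] += 0.5 * (c_data[(i, j)] - c[(i, j)])

m, c = model_moments(h, J)
print([round(x, 3) for x in m], [round(x, 3) for x in m_data])
```

Matching pairs while "otherwise assuming as little structure as possible" is exactly the maximum-entropy principle; once fitted, the model predicts each neuron's activity from the rest via its conditional distribution.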
1608.07244
Arash Khodadadi
Arash Khodadadi, Pegah Fakhari, Jerome R Busemeyer
A Neuro-Fuzzy Model of Time-Varying Decision Boundaries
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a recent study, we reported the results of a new decision-making paradigm in which participants were asked to balance speed and accuracy to maximize the total reward achieved during the experiment. The results of computational modeling provided strong evidence that the participants used time-varying decision boundaries. Previous theoretical studies of the optimal speed-accuracy trade-off suggested that participants may learn to use these time-varying boundaries to maximize their average reward rate. The results of our experiment, however, showed that participants used such boundaries even at the beginning of the experiment, without any prior experience in the task. In this paper, we hypothesize that these boundaries result from using heuristic rules to make decisions in the task. To formulate decision making by these heuristic rules as a computational framework, we use fuzzy logic theory. Based on this theory, we propose a new computational framework for decision making in evidence accumulation tasks. In this framework, there is no explicit decision boundary. Instead, the subject's desire to stop accumulating evidence and respond, at each moment within a trial and for a given value of the accumulated evidence, is determined by a set of fuzzy "IF-THEN rules". We then use the back-propagation method to derive an algorithm for fitting the fuzzy model to each participant's data, and investigate how differences in the participants' performance in the experiment are reflected in differences in the parameters of the fitted model.
[ { "created": "Thu, 25 Aug 2016 18:32:52 GMT", "version": "v1" } ]
2016-08-26
[ [ "Khodadadi", "Arash", "" ], [ "Fakhari", "Pegah", "" ], [ "Busemeyer", "Jerome R", "" ] ]
In a recent study, we reported the results of a new decision making paradigm in which the participants were asked to balance between their speed and accuracy to maximize the total reward they achieve during the experiment. The results of computational modeling provided strong evidence suggesting that the participants used time-varying decision boundaries. Previous theoretical studies of the optimal speed-accuracy trade-off suggested that the participants may learn to use these time-varying boundaries to maximize their average reward rate. The results in our experiment, however, showed that the participants used such boundaries even at the beginning of the experiment and without any prior experience in the task. In this paper, we hypothesize that these boundaries are the results of using some heuristic rules to make decisions in the task. To formulate decision making by these heuristic rules as a computational framework, we use the fuzzy logic theory. Based on this theory, we propose a new computational framework for decision making in evidence accumulation tasks. In this framework, there is no explicit decision boundary. Instead, the subject's desire to stop accumulating evidence and respond at each moment within a trial, for a given value of the accumulated evidence, is determined by a set of fuzzy "IF-THEN" rules. We then use the back-propagation method to derive an algorithm for fitting the fuzzy model to each participant's data. Finally, we investigate how the difference in the participants' performance in the experiment is reflected in the difference in the parameters of the fitted model.
2406.19397
Agnieszka Pregowska
Agnieszka Pregowska, Agata Roszkiewicz, Magdalena Osial, Michael Giersig
How scanning probe microscopy can be supported by Artificial Intelligence and quantum computing
19 pages, 4 figures
null
null
null
q-bio.NC cond-mat.mtrl-sci cs.AI quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We focus on the potential for supporting Scanning Probe Microscopy measurements with Artificial Intelligence, especially Machine Learning, as well as quantum computing. Artificial Intelligence can help automate routine experimental processes, algorithmically search for good sample regions, and shed light on structure-property relationships. It thus contributes to increasing the efficiency and accuracy of scanning-probe-based optical nanoscopy. Moreover, the combination of Artificial Intelligence-based algorithms and quantum computing may have great potential to increase the practical applicability of Scanning Probe Microscopy. The limitations of these approaches are also discussed. Finally, we outline a research path for the improvement of the proposed approach.
[ { "created": "Wed, 20 Mar 2024 12:22:02 GMT", "version": "v1" } ]
2024-07-01
[ [ "Pregowska", "Agnieszka", "" ], [ "Roszkiewicz", "Agata", "" ], [ "Osial", "Magdalena", "" ], [ "Giersig", "Michael", "" ] ]
We focus on the potential for supporting Scanning Probe Microscopy measurements with Artificial Intelligence, especially Machine Learning, as well as quantum computing. Artificial Intelligence can help automate routine experimental processes, algorithmically search for good sample regions, and shed light on structure-property relationships. It thus contributes to increasing the efficiency and accuracy of scanning-probe-based optical nanoscopy. Moreover, the combination of Artificial Intelligence-based algorithms and quantum computing may have great potential to increase the practical applicability of Scanning Probe Microscopy. The limitations of these approaches are also discussed. Finally, we outline a research path for the improvement of the proposed approach.
1803.02765
Paul Yaworsky
Paul Yaworsky
Realizing Intelligence
8 pages
null
null
null
q-bio.NC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Order exists in the world. The intelligence process enables us to realize that order, to some extent. We provide a high level description of intelligence using simple definitions, basic building blocks, a conceptual framework and general hierarchy. This perspective includes multiple levels of abstraction occurring in space and in time. The resulting model offers simple, useful ways to help realize the essence of intelligence.
[ { "created": "Wed, 7 Mar 2018 17:40:06 GMT", "version": "v1" } ]
2018-03-09
[ [ "Yaworsky", "Paul", "" ] ]
Order exists in the world. The intelligence process enables us to realize that order, to some extent. We provide a high level description of intelligence using simple definitions, basic building blocks, a conceptual framework and general hierarchy. This perspective includes multiple levels of abstraction occurring in space and in time. The resulting model offers simple, useful ways to help realize the essence of intelligence.
2209.13936
M\'at\'e L. Telek
M\'at\'e L. Telek, Elisenda Feliu
Topological descriptors of the parameter region of multistationarity: deciding upon connectivity
Accepted in Plos Computational Biology
null
10.1371/journal.pcbi.1010970
null
q-bio.MN cs.SC q-bio.QM
http://creativecommons.org/licenses/by-sa/4.0/
Switch-like responses arising from bistability have been linked to cell signaling processes and memory. Revealing the shape and properties of the set of parameters that lead to bistability is necessary to understand the underlying biological mechanisms, but is a complex mathematical problem. We present an efficient approach to determine a basic topological property of the parameter region of multistationarity, namely whether it is connected or not. The connectivity of this region can be interpreted in terms of the biological mechanisms underlying bistability and the switch-like patterns that the system can create. We provide an algorithm to assert that the parameter region of multistationarity is connected, targeting reaction networks with mass-action kinetics. We show that this is the case for numerous relevant cell signaling motifs, previously described to exhibit bistability. However, we show that for a motif displaying a phosphorylation cycle with allosteric enzyme regulation, the region of multistationarity has two distinct connected components, corresponding to two different, but symmetric, biological mechanisms. The method relies on linear programming and bypasses the expensive computational cost of direct and generic approaches to study parametric polynomial systems. This characteristic makes it suitable for mass-screening of reaction networks.
[ { "created": "Wed, 28 Sep 2022 09:12:15 GMT", "version": "v1" }, { "created": "Sat, 18 Mar 2023 08:20:55 GMT", "version": "v2" } ]
2023-04-26
[ [ "Telek", "Máté L.", "" ], [ "Feliu", "Elisenda", "" ] ]
Switch-like responses arising from bistability have been linked to cell signaling processes and memory. Revealing the shape and properties of the set of parameters that lead to bistability is necessary to understand the underlying biological mechanisms, but is a complex mathematical problem. We present an efficient approach to determine a basic topological property of the parameter region of multistationarity, namely whether it is connected or not. The connectivity of this region can be interpreted in terms of the biological mechanisms underlying bistability and the switch-like patterns that the system can create. We provide an algorithm to assert that the parameter region of multistationarity is connected, targeting reaction networks with mass-action kinetics. We show that this is the case for numerous relevant cell signaling motifs, previously described to exhibit bistability. However, we show that for a motif displaying a phosphorylation cycle with allosteric enzyme regulation, the region of multistationarity has two distinct connected components, corresponding to two different, but symmetric, biological mechanisms. The method relies on linear programming and bypasses the expensive computational cost of direct and generic approaches to study parametric polynomial systems. This characteristic makes it suitable for mass-screening of reaction networks.
1010.3752
Luisiana Cundin
Luisiana X. Cundin and William P. Roach
Kramers-Kronig analysis of biological skin
18 pages, 5 figures, tabulated results for theoretical complex index of refraction for biological skin
null
null
null
q-bio.TO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A treatise on the optical properties of biological tissue is presented. Water is postulated to be a topological basis and serves to discriminate published skin data. Electromagnetic theory governing dielectric behavior is concisely detailed, pertaining to certain optical constants and the Kramers-Kronig relation. The Kramers-Kronig relation defining the dispersion index is emulated through the discrete Hilbert transform. An accrued absorption set is populated with empirical absorption data for biological skin, pure liquid water and interpolated values. Kramers-Kronig analysis of biological skin yields a comprehensive description of the complex index of refraction from DC to x-ray frequencies.
[ { "created": "Mon, 18 Oct 2010 22:38:11 GMT", "version": "v1" } ]
2010-10-20
[ [ "Cundin", "Luisiana X.", "" ], [ "Roach", "William P.", "" ] ]
A treatise on the optical properties of biological tissue is presented. Water is postulated to be a topological basis and serves to discriminate published skin data. Electromagnetic theory governing dielectric behavior is concisely detailed, pertaining to certain optical constants and the Kramers-Kronig relation. The Kramers-Kronig relation defining the dispersion index is emulated through the discrete Hilbert transform. An accrued absorption set is populated with empirical absorption data for biological skin, pure liquid water and interpolated values. Kramers-Kronig analysis of biological skin yields a comprehensive description of the complex index of refraction from DC to x-ray frequencies.
0706.4396
Thierry Rabilloud
Mireille Chevallet (BBSI), Sylvie Luche, Thierry Rabilloud (BBSI)
Silver staining of proteins in polyacrylamide gels
null
Nat Protoc 1, 4 (2006) 1852-8
10.1038/nprot.2006.288
null
q-bio.GN
null
Silver staining is used to detect proteins after electrophoretic separation on polyacrylamide gels. It combines excellent sensitivity (in the low nanogram range) with the use of very simple and cheap equipment and chemicals. It is compatible with downstream processing, such as mass spectrometry analysis after protein digestion. The sequential phases of silver staining are protein fixation, then sensitization, then silver impregnation and finally image development. Several variants of silver staining are described here, which can be completed in a time range from 2 h to 1 d after the end of the electrophoretic separation. Once completed, the stain is stable for several weeks.
[ { "created": "Fri, 29 Jun 2007 11:54:51 GMT", "version": "v1" } ]
2007-07-02
[ [ "Chevallet", "Mireille", "", "BBSI" ], [ "Luche", "Sylvie", "", "BBSI" ], [ "Rabilloud", "Thierry", "", "BBSI" ] ]
Silver staining is used to detect proteins after electrophoretic separation on polyacrylamide gels. It combines excellent sensitivity (in the low nanogram range) with the use of very simple and cheap equipment and chemicals. It is compatible with downstream processing, such as mass spectrometry analysis after protein digestion. The sequential phases of silver staining are protein fixation, then sensitization, then silver impregnation and finally image development. Several variants of silver staining are described here, which can be completed in a time range from 2 h to 1 d after the end of the electrophoretic separation. Once completed, the stain is stable for several weeks.
2306.07505
Shao Li
Lan Wang, Ruiling He, Lili Zhao, Jia Wang, Zhengzi Geng, Tao Ren, Guo Zhang, Peng Zhang, Kaiqiang Tang, Chaofei Gao, Fei Chen, Liting Zhang, Yonghe Zhou, Xin Li, Fanbin He, Hui Huan, Wenjuan Wang, Yunxiao Liang, Juan Tang, Fang Ai, Tingyu Wang, Liyun Zheng, Zhongwei Zhao, Jiansong Ji, Wei Liu, Jiaojiao Xu, Bo Liu, Xuemei Wang, Yao Zhang, Qiong Yan, Muhan Lv, Xiaomei Chen, Shuhua Zhang, Yihua Wang, Yang Liu, Li Yin, Yanni Liu, Yanqing Huang, Yunfang Liu, Kun Wang, Meiqin Su, Li Bian, Ping An, Xin Zhang, Linxue Qian, Shao Li, Xiaolong Qi
Deep learning radiomics for assessment of gastroesophageal varices in people with compensated advanced chronic liver disease
null
null
null
null
q-bio.TO eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objective: Bleeding from gastroesophageal varices (GEV) is a medical emergency associated with high mortality. We aim to construct an artificial intelligence-based model of two-dimensional shear wave elastography (2D-SWE) of the liver and spleen to precisely assess the risk of GEV and high-risk gastroesophageal varices (HRV). Design: A prospective multicenter study was conducted in patients with compensated advanced chronic liver disease. 305 patients were enrolled from 12 hospitals, and finally 265 patients were included, with 1136 liver stiffness measurement (LSM) images and 1042 spleen stiffness measurement (SSM) images generated by 2D-SWE. We leveraged deep learning methods to uncover associations between image features and patient risk, and thus constructed models to predict GEV and HRV. Results: A multi-modality Deep Learning Risk Prediction model (DLRP) was constructed to assess GEV and HRV, based on LSM and SSM images, and clinical information. Validation analysis revealed that the AUCs of DLRP were 0.91 for GEV (95% CI 0.90 to 0.93, p < 0.05) and 0.88 for HRV (95% CI 0.86 to 0.89, p < 0.01), which were significantly and robustly better than canonical risk indicators, including the value of LSM and SSM. Moreover, DLRP was better than the model using individual parameters, including LSM and SSM images. In HRV prediction, the 2D-SWE images of SSM outperformed LSM (p < 0.01). Conclusion: DLRP shows excellent performance in predicting GEV and HRV over canonical risk indicators LSM and SSM. Additionally, the 2D-SWE images of SSM provided more information for better accuracy in predicting HRV than the LSM.
[ { "created": "Tue, 13 Jun 2023 02:32:43 GMT", "version": "v1" } ]
2023-06-14
[ [ "Wang", "Lan", "" ], [ "He", "Ruiling", "" ], [ "Zhao", "Lili", "" ], [ "Wang", "Jia", "" ], [ "Geng", "Zhengzi", "" ], [ "Ren", "Tao", "" ], [ "Zhang", "Guo", "" ], [ "Zhang", "Peng", "" ...
Objective: Bleeding from gastroesophageal varices (GEV) is a medical emergency associated with high mortality. We aim to construct an artificial intelligence-based model of two-dimensional shear wave elastography (2D-SWE) of the liver and spleen to precisely assess the risk of GEV and high-risk gastroesophageal varices (HRV). Design: A prospective multicenter study was conducted in patients with compensated advanced chronic liver disease. 305 patients were enrolled from 12 hospitals, and finally 265 patients were included, with 1136 liver stiffness measurement (LSM) images and 1042 spleen stiffness measurement (SSM) images generated by 2D-SWE. We leveraged deep learning methods to uncover associations between image features and patient risk, and thus constructed models to predict GEV and HRV. Results: A multi-modality Deep Learning Risk Prediction model (DLRP) was constructed to assess GEV and HRV, based on LSM and SSM images, and clinical information. Validation analysis revealed that the AUCs of DLRP were 0.91 for GEV (95% CI 0.90 to 0.93, p < 0.05) and 0.88 for HRV (95% CI 0.86 to 0.89, p < 0.01), which were significantly and robustly better than canonical risk indicators, including the value of LSM and SSM. Moreover, DLRP was better than the model using individual parameters, including LSM and SSM images. In HRV prediction, the 2D-SWE images of SSM outperformed LSM (p < 0.01). Conclusion: DLRP shows excellent performance in predicting GEV and HRV over canonical risk indicators LSM and SSM. Additionally, the 2D-SWE images of SSM provided more information for better accuracy in predicting HRV than the LSM.
q-bio/0507021
Atul Narang
Atul Narang
Spontaneous polarization in eukaryotic gradient sensing: A mathematical model based on mutual inhibition of frontness and backness pathways
20 pages, 11 figures
null
null
null
q-bio.CB
null
A key problem of eukaryotic cell motility is the signaling mechanism of chemoattractant gradient sensing. Recent experiments have revealed the molecular correlate of gradient sensing: Frontness molecules, such as PI3P and Rac, localize at the front end of the cell, and backness molecules, such as Rho and myosin II, accumulate at the back of the cell. Importantly, this frontness-backness polarization occurs "spontaneously" even if the cells are exposed to uniform chemoattractant profiles. The spontaneous polarization suggests that the gradient sensing machinery undergoes a Turing bifurcation. This has led to several classical activator-inhibitor and activator-substrate models which identify the frontness molecules with the activator. Conspicuously absent from these models is any accounting of the backness molecules. This stands in sharp contrast to experiments which show that the backness pathways inhibit the frontness pathways. Here, we formulate a model based on the mutually inhibitory interaction between the frontness and backness pathways. The model builds upon the mutual inhibition model proposed by Bourne and coworkers (Xu et al, Cell, 114, 201--214, 2003). We show that mutual inhibition alone, without the help of any positive feedback, can trigger spontaneous polarization of the frontness and backness pathways. The spatial distributions of the frontness and backness molecules in response to inhibition and activation of the frontness and backness pathways are consistent with those observed in experiments. Furthermore, depending on the parameter values, the model yields spatial distributions corresponding to chemoattraction (frontness pathways in-phase with the external gradient) and chemorepulsion (frontness pathways out-of-phase with the external gradient).
[ { "created": "Wed, 13 Jul 2005 20:49:01 GMT", "version": "v1" } ]
2007-05-23
[ [ "Narang", "Atul", "" ] ]
A key problem of eukaryotic cell motility is the signaling mechanism of chemoattractant gradient sensing. Recent experiments have revealed the molecular correlate of gradient sensing: Frontness molecules, such as PI3P and Rac, localize at the front end of the cell, and backness molecules, such as Rho and myosin II, accumulate at the back of the cell. Importantly, this frontness-backness polarization occurs "spontaneously" even if the cells are exposed to uniform chemoattractant profiles. The spontaneous polarization suggests that the gradient sensing machinery undergoes a Turing bifurcation. This has led to several classical activator-inhibitor and activator-substrate models which identify the frontness molecules with the activator. Conspicuously absent from these models is any accounting of the backness molecules. This stands in sharp contrast to experiments which show that the backness pathways inhibit the frontness pathways. Here, we formulate a model based on the mutually inhibitory interaction between the frontness and backness pathways. The model builds upon the mutual inhibition model proposed by Bourne and coworkers (Xu et al, Cell, 114, 201--214, 2003). We show that mutual inhibition alone, without the help of any positive feedback, can trigger spontaneous polarization of the frontness and backness pathways. The spatial distributions of the frontness and backness molecules in response to inhibition and activation of the frontness and backness pathways are consistent with those observed in experiments. Furthermore, depending on the parameter values, the model yields spatial distributions corresponding to chemoattraction (frontness pathways in-phase with the external gradient) and chemorepulsion (frontness pathways out-of-phase with the external gradient).
2002.05774
Richard Carson
Richard G. Carson
What is the function of inter-hemispheric inhibition?
45 pages (including references and figure legends), 4 figures
Journal of Physiology. 2020 Aug 8. Online ahead of print
10.1113/JP279793
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is widely supposed that following unilateral brain injury, there arises an asymmetry in inter-hemispheric inhibition which has an adverse influence upon motor control. I argue that this 'inter-hemispheric imbalance' model arises from a fundamental misunderstanding of the roles played by inter-hemispheric (callosal) projections in mammalian brains. Drawing upon a large body of empirical data, derived largely from animal models, and associated theoretical modeling, it is demonstrated that inter-hemispheric projections perform contrast enhancing and integrative functions via mechanisms such as surround/lateral inhibition. The principal functional unit of callosal influence comprises a facilitatory centre and a depressing peripheral zone, that together shape the influence of converging inputs to pyramidal neurons. Inter-hemispheric inhibition is an instance of a more general feature of mammalian neural systems, whereby inhibitory interneurons act not simply to prevent over-excitation but to sculpt the output of specific circuits. The narrowing of the excitatory focus that occurs through crossed surround inhibition is a highly conserved motif of transcallosal interactions in mammalian sensory and motor cortices. A case is presented that the notion of 'inter-hemispheric imbalance' has been sustained, and clinical interventions derived from this model promoted, by erroneous assumptions concerning what is revealed by investigative techniques such as transcranial magnetic stimulation (TMS). The alternative perspective promoted by the present analysis also permits the basis of positive (e.g. post stroke) associations between the structural integrity of transcallosal projections and motor capability to be better understood.
[ { "created": "Thu, 13 Feb 2020 20:52:47 GMT", "version": "v1" } ]
2020-08-17
[ [ "Carson", "Richard G.", "" ] ]
It is widely supposed that following unilateral brain injury, there arises an asymmetry in inter-hemispheric inhibition which has an adverse influence upon motor control. I argue that this 'inter-hemispheric imbalance' model arises from a fundamental misunderstanding of the roles played by inter-hemispheric (callosal) projections in mammalian brains. Drawing upon a large body of empirical data, derived largely from animal models, and associated theoretical modeling, it is demonstrated that inter-hemispheric projections perform contrast enhancing and integrative functions via mechanisms such as surround/lateral inhibition. The principal functional unit of callosal influence comprises a facilitatory centre and a depressing peripheral zone, that together shape the influence of converging inputs to pyramidal neurons. Inter-hemispheric inhibition is an instance of a more general feature of mammalian neural systems, whereby inhibitory interneurons act not simply to prevent over-excitation but to sculpt the output of specific circuits. The narrowing of the excitatory focus that occurs through crossed surround inhibition is a highly conserved motif of transcallosal interactions in mammalian sensory and motor cortices. A case is presented that the notion of 'inter-hemispheric imbalance' has been sustained, and clinical interventions derived from this model promoted, by erroneous assumptions concerning what is revealed by investigative techniques such as transcranial magnetic stimulation (TMS). The alternative perspective promoted by the present analysis also permits the basis of positive (e.g. post stroke) associations between the structural integrity of transcallosal projections and motor capability to be better understood.
1211.2782
Bahram Houchmandzadeh
Bahram Houchmandzadeh (LIPhy), Marcel Vallade (LIPhy)
Exact results for fixation probability of bithermal evolutionary graphs
null
BioSystems 112, 1 (2013) 49-54
10.1016/j.biosystems.2013.03.020
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most fundamental concepts of evolutionary dynamics is the "fixation" probability, i.e. the probability that a mutant spreads through the whole population. Most natural communities are geographically structured into habitats exchanging individuals among each other and can be modeled by an evolutionary graph (EG), where directed links weight the probability for the offspring of one individual to replace another individual in the community. Very few exact analytical results are known for EGs. We show here how by using the techniques of the fixed point of the Probability Generating Function, we can uncover a large class of graphs, which we term bithermal, for which the exact fixation probability can be simply computed.
[ { "created": "Mon, 12 Nov 2012 20:43:26 GMT", "version": "v1" } ]
2013-05-01
[ [ "Houchmandzadeh", "Bahram", "", "LIPhy" ], [ "Vallade", "Marcel", "", "LIPhy" ] ]
One of the most fundamental concepts of evolutionary dynamics is the "fixation" probability, i.e. the probability that a mutant spreads through the whole population. Most natural communities are geographically structured into habitats exchanging individuals among each other and can be modeled by an evolutionary graph (EG), where directed links weight the probability for the offspring of one individual to replace another individual in the community. Very few exact analytical results are known for EGs. We show here how by using the techniques of the fixed point of the Probability Generating Function, we can uncover a large class of graphs, which we term bithermal, for which the exact fixation probability can be simply computed.
2404.14859
Xin Wang
Ju Kang, Yiyuan Niu, Xin Wang
Mechanisms promoting biodiversity in ecosystems
17 pages, 5 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Explaining biodiversity is a central focus in theoretical ecology. A significant obstacle arises from the Competitive Exclusion Principle (CEP), which states that two species competing for the same type of resources cannot coexist at constant population densities, or more generally, the number of consumer species cannot exceed that of resource species at steady states. The conflict between CEP and biodiversity is exemplified by the paradox of the plankton, where a few types of limiting resources support a plethora of plankton species. In this review, we introduce mechanisms proposed over the years for promoting biodiversity in ecosystems, with a special focus on those that alleviate the constraints imposed by the CEP, including mechanisms that challenge the CEP in well-mixed systems at a steady state or those that circumvent its limitations through contextual differences.
[ { "created": "Tue, 23 Apr 2024 09:25:05 GMT", "version": "v1" } ]
2024-04-24
[ [ "Kang", "Ju", "" ], [ "Niu", "Yiyuan", "" ], [ "Wang", "Xin", "" ] ]
Explaining biodiversity is a central focus in theoretical ecology. A significant obstacle arises from the Competitive Exclusion Principle (CEP), which states that two species competing for the same type of resources cannot coexist at constant population densities, or more generally, the number of consumer species cannot exceed that of resource species at steady states. The conflict between CEP and biodiversity is exemplified by the paradox of the plankton, where a few types of limiting resources support a plethora of plankton species. In this review, we introduce mechanisms proposed over the years for promoting biodiversity in ecosystems, with a special focus on those that alleviate the constraints imposed by the CEP, including mechanisms that challenge the CEP in well-mixed systems at a steady state or those that circumvent its limitations through contextual differences.
1306.5075
Luc Berthouze
Maria Botcharova, Simon F Farmer and Luc Berthouze
A maximum likelihood based technique for validating detrended fluctuation analysis (ML-DFA)
22 pages, 7 figures
null
null
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detrended Fluctuation Analysis (DFA) is widely used to assess the presence of long-range temporal correlations in time series. Signals with long-range temporal correlations are typically defined as having a power law decay in their autocorrelation function. The output of DFA is an exponent, which is the slope obtained by linear regression of a log-log fluctuation plot against window size. However, if this fluctuation plot is not linear, then the underlying signal is not self-similar, and the exponent has no meaning. There is currently no method for assessing the linearity of a DFA fluctuation plot. Here we present such a technique, called ML-DFA. We scale the DFA fluctuation plot to construct a likelihood function for a set of alternative models including polynomial, root, exponential, logarithmic and spline functions. We use this likelihood function to determine the maximum likelihood and thus to calculate values of the Akaike and Bayesian information criteria, which identify the best fit model when the number of parameters involved is taken into account and over-fitting is penalised. This ensures that, of the models that fit well, the least complicated is selected as the best fit. We apply ML-DFA to synthetic data from FARIMA processes and sine curves with DFA fluctuation plots whose form has been analytically determined, and to experimentally collected neurophysiological data. ML-DFA assesses whether the hypothesis of a linear fluctuation plot should be rejected, and thus whether the exponent can be considered meaningful. We argue that ML-DFA is essential to obtaining trustworthy results from DFA.
[ { "created": "Fri, 21 Jun 2013 08:38:32 GMT", "version": "v1" } ]
2013-06-24
[ [ "Botcharova", "Maria", "" ], [ "Farmer", "Simon F", "" ], [ "Berthouze", "Luc", "" ] ]
Detrended Fluctuation Analysis (DFA) is widely used to assess the presence of long-range temporal correlations in time series. Signals with long-range temporal correlations are typically defined as having a power law decay in their autocorrelation function. The output of DFA is an exponent, which is the slope obtained by linear regression of a log-log fluctuation plot against window size. However, if this fluctuation plot is not linear, then the underlying signal is not self-similar, and the exponent has no meaning. There is currently no method for assessing the linearity of a DFA fluctuation plot. Here we present such a technique, called ML-DFA. We scale the DFA fluctuation plot to construct a likelihood function for a set of alternative models including polynomial, root, exponential, logarithmic and spline functions. We use this likelihood function to determine the maximum likelihood and thus to calculate values of the Akaike and Bayesian information criteria, which identify the best fit model when the number of parameters involved is taken into account and over-fitting is penalised. This ensures that, of the models that fit well, the least complicated is selected as the best fit. We apply ML-DFA to synthetic data from FARIMA processes and sine curves with DFA fluctuation plots whose form has been analytically determined, and to experimentally collected neurophysiological data. ML-DFA assesses whether the hypothesis of a linear fluctuation plot should be rejected, and thus whether the exponent can be considered meaningful. We argue that ML-DFA is essential to obtaining trustworthy results from DFA.
2208.11998
V.Kuppusamy Chandrasekar
V. R. Saiprasad, R. Gopal, V. K. Chandrasekar and M. Lakshmanan
Analysis of COVID-19 in India using vaccine epidemic model incorporating vaccine effectiveness and herd immunity
21 pages, 7 figures, Accepted for publication in European Journal of Physics Plus
null
null
null
q-bio.PE nlin.AO physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
COVID-19 will be a continuous threat to the human population despite having a few vaccines at hand, until we reach the endemic state through natural herd immunity and total immunization through universal vaccination. However, the vaccine acts as a practical tool for reducing the massive public health problem and the emerging economic consequences that the continuing COVID-19 epidemic is causing worldwide, even as vaccine efficacy wanes. In this work, we propose and analyze an epidemic model of a Susceptible-Exposed-Infected-Recovered-Vaccinated (SEIRV) population, taking into account the rate of vaccination and vaccine waning. The dynamics of the model has been investigated, and the condition for a disease-free endemic equilibrium state is obtained. Further, the analysis is extended to study the COVID-19 spread in India by considering the availability of vaccines and related critical parameters, such as the vaccination rate, vaccine efficacy and vaccine waning, in deciding the emerging fate of this epidemic. We have also discussed the conditions for herd immunity due to vaccinated individuals among the people. Our results highlight the importance of vaccines, the effectiveness of booster vaccination in protecting people from infection, and their importance in epidemic and pandemic modelling.
[ { "created": "Thu, 25 Aug 2022 11:00:05 GMT", "version": "v1" } ]
2022-08-26
[ [ "Saiprasad", "V. R.", "" ], [ "Gopal", "R.", "" ], [ "Chandrasekar", "V. K.", "" ], [ "Lakshmanan", "M.", "" ] ]
COVID-19 will be a continuous threat to the human population despite having a few vaccines at hand, until we reach the endemic state through natural herd immunity and total immunization through universal vaccination. However, the vaccine acts as a practical tool for reducing the massive public health problem and the emerging economic consequences that the continuing COVID-19 epidemic is causing worldwide, even as vaccine efficacy wanes. In this work, we propose and analyze an epidemic model of a Susceptible-Exposed-Infected-Recovered-Vaccinated (SEIRV) population, taking into account the rate of vaccination and vaccine waning. The dynamics of the model has been investigated, and the condition for a disease-free endemic equilibrium state is obtained. Further, the analysis is extended to study the COVID-19 spread in India by considering the availability of vaccines and related critical parameters, such as the vaccination rate, vaccine efficacy and vaccine waning, in deciding the emerging fate of this epidemic. We have also discussed the conditions for herd immunity due to vaccinated individuals among the people. Our results highlight the importance of vaccines, the effectiveness of booster vaccination in protecting people from infection, and their importance in epidemic and pandemic modelling.
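A compartmental model of the SEIRV type described above can be sketched with a simple forward-Euler integration. The parameter values, the leaky-vaccine term `(1 - eps)`, and the step size below are illustrative assumptions, not the values fitted to the Indian data in the paper.

```python
# Forward-Euler sketch of a leaky SEIRV model with vaccination rate nu,
# vaccine waning rate omega, and vaccine effectiveness eps (assumed form).
def seirv_step(state, dt, beta=0.4, sigma=0.2, gamma=0.1,
               nu=0.01, omega=0.005, eps=0.8):
    S, E, I, R, V = state
    dS = -beta * S * I - nu * S + omega * V          # infection, vaccination, waning
    dE = beta * S * I + beta * (1 - eps) * V * I - sigma * E  # leaky breakthrough
    dI = sigma * E - gamma * I
    dR = gamma * I
    dV = nu * S - omega * V - beta * (1 - eps) * V * I
    return tuple(x + dt * d for x, d in zip(state, (dS, dE, dI, dR, dV)))

state = (0.99, 0.0, 0.01, 0.0, 0.0)   # fractions of a closed population
for _ in range(5000):
    state = seirv_step(state, dt=0.1)
```

Because the five rates sum to zero, the total population fraction is conserved at every Euler step, which is a quick sanity check on any hand-written compartmental model.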
1911.04046
Michael Saint-Antoine
Michael M. Saint-Antoine and Abhyudai Singh
Network Inference in Systems Biology: Recent Developments, Challenges, and Applications
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most interesting, difficult, and potentially useful topics in computational biology is the inference of gene regulatory networks (GRNs) from expression data. Although researchers have been working on this topic for more than a decade and much progress has been made, it remains an unsolved problem and even the most sophisticated inference algorithms are far from perfect. In this paper, we review the latest developments in network inference, including state-of-the-art algorithms like PIDC, Phixer, and more. We also discuss unsolved computational challenges, including the optimal combination of algorithms, integration of multiple data sources, and pseudo-temporal ordering of static expression data. Lastly, we discuss some exciting applications of network inference in cancer research, and provide a list of useful software tools for researchers hoping to conduct their own network inference analyses.
[ { "created": "Mon, 11 Nov 2019 02:50:13 GMT", "version": "v1" } ]
2019-11-12
[ [ "Saint-Antoine", "Michael M.", "" ], [ "Singh", "Abhyudai", "" ] ]
One of the most interesting, difficult, and potentially useful topics in computational biology is the inference of gene regulatory networks (GRNs) from expression data. Although researchers have been working on this topic for more than a decade and much progress has been made, it remains an unsolved problem and even the most sophisticated inference algorithms are far from perfect. In this paper, we review the latest developments in network inference, including state-of-the-art algorithms like PIDC, Phixer, and more. We also discuss unsolved computational challenges, including the optimal combination of algorithms, integration of multiple data sources, and pseudo-temporal ordering of static expression data. Lastly, we discuss some exciting applications of network inference in cancer research, and provide a list of useful software tools for researchers hoping to conduct their own network inference analyses.
1702.01252
Joel Miller
John Lang and Hans De Sterck and Jamieson L. Kaiser and Joel C. Miller
Random Spatial Networks: Small Worlds without Clustering, Traveling Waves, and Hop-and-Spread Disease Dynamics
null
null
null
null
q-bio.QM nlin.PS physics.bio-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Random network models play a prominent role in modeling, analyzing and understanding complex phenomena on real-life networks. However, a key property of networks is often neglected: many real-world networks exhibit spatial structure, the tendency of a node to select neighbors with a probability depending on physical distance. Here, we introduce a class of random spatial networks (RSNs) which generalizes many existing random network models but adds spatial structure. In these networks, nodes are placed randomly in space and joined in edges with a probability depending on their distance and their individual expected degrees, in a manner that crucially remains analytically tractable. We use this network class to propose a new generalization of small-world networks, where the average shortest path lengths in the graph are small, as in classical Watts-Strogatz small-world networks, but with close spatial proximity of nodes that are neighbors in the network playing the role of large clustering. Small-world effects are demonstrated on these spatial small-world networks without clustering. We are able to derive partial integro-differential equations governing susceptible-infectious-recovered disease spreading through an RSN, and we demonstrate the existence of traveling wave solutions. If the distance kernel governing edge placement decays slower than exponential, the population-scale dynamics are dominated by long-range hops followed by local spread of traveling waves. This provides a theoretical modeling framework for recent observations of how epidemics like Ebola evolve in modern connected societies, with long-range connections seeding new focal points from which the epidemic locally spreads in a wavelike manner.
[ { "created": "Sat, 4 Feb 2017 08:12:44 GMT", "version": "v1" } ]
2017-02-07
[ [ "Lang", "John", "" ], [ "De Sterck", "Hans", "" ], [ "Kaiser", "Jamieson L.", "" ], [ "Miller", "Joel C.", "" ] ]
Random network models play a prominent role in modeling, analyzing and understanding complex phenomena on real-life networks. However, a key property of networks is often neglected: many real-world networks exhibit spatial structure, the tendency of a node to select neighbors with a probability depending on physical distance. Here, we introduce a class of random spatial networks (RSNs) which generalizes many existing random network models but adds spatial structure. In these networks, nodes are placed randomly in space and joined in edges with a probability depending on their distance and their individual expected degrees, in a manner that crucially remains analytically tractable. We use this network class to propose a new generalization of small-world networks, where the average shortest path lengths in the graph are small, as in classical Watts-Strogatz small-world networks, but with close spatial proximity of nodes that are neighbors in the network playing the role of large clustering. Small-world effects are demonstrated on these spatial small-world networks without clustering. We are able to derive partial integro-differential equations governing susceptible-infectious-recovered disease spreading through an RSN, and we demonstrate the existence of traveling wave solutions. If the distance kernel governing edge placement decays slower than exponential, the population-scale dynamics are dominated by long-range hops followed by local spread of traveling waves. This provides a theoretical modeling framework for recent observations of how epidemics like Ebola evolve in modern connected societies, with long-range connections seeding new focal points from which the epidemic locally spreads in a wavelike manner.
1610.01127
J. C. Phillips
J. C. Phillips
Mechanism for long-acting chimeras based on fusion with the carboxyl-terminal peptide (CTP) of human chorionic gonadotropin beta-subunit 3
9 pages, 5 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thermodynamic scaling explains the dramatic successes of CTP fused human growth proteins as regards lifetime in vivo and enhanced functionality compared to their wild-type analogues, like Biogen. The theory is semi-quantitative and contains no adjustable parameters. It shows how hydrophilic terminal spheres orient fused proteins in the neighborhood of a membrane surface, extending lifetimes and improving functionality.
[ { "created": "Tue, 4 Oct 2016 19:14:06 GMT", "version": "v1" } ]
2016-10-05
[ [ "Phillips", "J. C.", "" ] ]
Thermodynamic scaling explains the dramatic successes of CTP fused human growth proteins as regards lifetime in vivo and enhanced functionality compared to their wild-type analogues, like Biogen. The theory is semi-quantitative and contains no adjustable parameters. It shows how hydrophilic terminal spheres orient fused proteins in the neighborhood of a membrane surface, extending lifetimes and improving functionality.
2405.20264
Clotilde Djuikem
Clotilde Djuikem, Julien Arino
Transmission of multiple pathogens across species
null
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
We analyse a model that describes the propagation of many pathogens within and between many species. A branching process approximation is used to compute the probability of disease outbreaks. Special cases of aquatic environments with two host species and one or two pathogens are considered both analytically and computationally.
[ { "created": "Thu, 30 May 2024 17:16:31 GMT", "version": "v1" } ]
2024-05-31
[ [ "Djuikem", "Clotilde", "" ], [ "Arino", "Julien", "" ] ]
We analyse a model that describes the propagation of many pathogens within and between many species. A branching process approximation is used to compute the probability of disease outbreaks. Special cases of aquatic environments with two host species and one or two pathogens are considered both analytically and computationally.
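The branching-process approximation mentioned above reduces, in the simplest single-type case with Poisson offspring, to a fixed-point equation for the extinction probability. The sketch below is that textbook special case, not the multi-pathogen, multi-species computation of the paper.

```python
import math

def outbreak_probability(R0, tol=1e-12):
    """P(major outbreak) = 1 - q, where q is the smallest root of
    q = exp(R0 * (q - 1)) (Poisson offspring distribution).
    Solved by fixed-point iteration from q = 0."""
    q = 0.0
    while True:
        q_new = math.exp(R0 * (q - 1.0))
        if abs(q_new - q) < tol:
            return 1.0 - q_new
        q = q_new

print(round(outbreak_probability(2.0), 3))
```

For R0 below one the iteration converges to q = 1, recovering the standard result that a subcritical process dies out with certainty; above one, a positive outbreak probability emerges.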
2102.13107
Benoit Goussen
Benoit Goussen, Cecilie Rendal, David Sheffield, Emma Butler, Oliver R. Price, Roman Ashauer
Bioenergetics modelling to analyse and predict the joint effects of multiple stressors: Meta-analysis and model corroboration
10 pages
Science of The Total Environment. 749:141509 (2020)
10.1016/j.scitotenv.2020.141509
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by/4.0/
Understanding the consequences of the combined effects of multiple stressors, including stress from man-made chemicals, is important for conservation management, the ecological risk assessment of chemicals, and many other ecological applications. Our current ability to predict and analyse the joint effects of multiple stressors is insufficient to make the prospective risk assessment of chemicals more ecologically relevant because we lack a full understanding of how organisms respond to stress factors alone and in combination. Here, we describe a Dynamic Energy Budget (DEB) based bioenergetics model that predicts the potential effects of single or multiple natural and chemical stressors on life history traits. We demonstrate the plausibility of the model using a meta-analysis of 128 existing studies on freshwater invertebrates. We then validate our model by comparing its predictions for a combination of three stressors (i.e. chemical, temperature, and food availability) with new, independent experimental data on life history traits in the daphnid Ceriodaphnia dubia. We found that the model predictions are in agreement with observed growth curves and reproductive traits. To the best of our knowledge, this is the first time that the combined effects of three stress factors on life history traits observed in laboratory studies have been predicted successfully in invertebrates. We suggest that a re-analysis of existing studies on multiple stressors within the modelling framework outlined here will provide a robust null model for identifying stressor interactions, and expect that a better understanding of the underlying mechanisms will arise from these new analyses. Bioenergetics modelling could be applied more broadly to support environmental management decision making.
[ { "created": "Thu, 25 Feb 2021 12:23:15 GMT", "version": "v1" } ]
2021-03-01
[ [ "Goussen", "Benoit", "" ], [ "Rendal", "Cecilie", "" ], [ "Sheffield", "David", "" ], [ "Butler", "Emma", "" ], [ "Price", "Oliver R.", "" ], [ "Ashauer", "Roman", "" ] ]
Understanding the consequences of the combined effects of multiple stressors, including stress from man-made chemicals, is important for conservation management, the ecological risk assessment of chemicals, and many other ecological applications. Our current ability to predict and analyse the joint effects of multiple stressors is insufficient to make the prospective risk assessment of chemicals more ecologically relevant because we lack a full understanding of how organisms respond to stress factors alone and in combination. Here, we describe a Dynamic Energy Budget (DEB) based bioenergetics model that predicts the potential effects of single or multiple natural and chemical stressors on life history traits. We demonstrate the plausibility of the model using a meta-analysis of 128 existing studies on freshwater invertebrates. We then validate our model by comparing its predictions for a combination of three stressors (i.e. chemical, temperature, and food availability) with new, independent experimental data on life history traits in the daphnid Ceriodaphnia dubia. We found that the model predictions are in agreement with observed growth curves and reproductive traits. To the best of our knowledge, this is the first time that the combined effects of three stress factors on life history traits observed in laboratory studies have been predicted successfully in invertebrates. We suggest that a re-analysis of existing studies on multiple stressors within the modelling framework outlined here will provide a robust null model for identifying stressor interactions, and expect that a better understanding of the underlying mechanisms will arise from these new analyses. Bioenergetics modelling could be applied more broadly to support environmental management decision making.
2104.01033
Bas Vroling
Bas Vroling and Stephan Heijl
White paper: The Helix Pathogenicity Prediction Platform
null
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by/4.0/
In this white paper we introduce Helix, an AI-based solution for missense pathogenicity prediction. With recent advances in the sequencing of human genomes, massive amounts of genetic data have become available. This has shifted the burden of labor for genetic diagnostics and research from the gathering of data to its interpretation. Helix presents a state-of-the-art platform for the prediction of pathogenicity in human missense variants. In addition to offering best-in-class predictive performance, Helix offers a platform that allows researchers to analyze and interpret variants in depth that can be accessed at helixlabs.ai.
[ { "created": "Fri, 2 Apr 2021 13:09:11 GMT", "version": "v1" }, { "created": "Mon, 3 May 2021 10:05:54 GMT", "version": "v2" } ]
2021-05-04
[ [ "Vroling", "Bas", "" ], [ "Heijl", "Stephan", "" ] ]
In this white paper we introduce Helix, an AI-based solution for missense pathogenicity prediction. With recent advances in the sequencing of human genomes, massive amounts of genetic data have become available. This has shifted the burden of labor for genetic diagnostics and research from the gathering of data to its interpretation. Helix presents a state-of-the-art platform for the prediction of pathogenicity in human missense variants. In addition to offering best-in-class predictive performance, Helix offers a platform that allows researchers to analyze and interpret variants in depth that can be accessed at helixlabs.ai.
0707.3750
Guillaume Collet
Rumen Andonov (IRISA), Guillaume Collet (IRISA), Jean-Fran\c{c}ois Gibrat (MIG), Antoine Marin (MIG), Vincent Poirriez (LAMIH), Nikola Yanev (IRISA)
Recent Advances in Solving the Protein Threading Problem
null
null
null
null
q-bio.QM cs.DC
null
The fold recognition methods are promising tools for capturing the structure of a protein from its amino acid residue sequence, but their use is still restricted by the need for huge computational resources and suitably efficient algorithms. In the recent version of the FROST (Fold Recognition Oriented Search Tool) package, the most efficient algorithm for solving the Protein Threading Problem (PTP) is implemented, thanks to the strong collaboration between the SYMBIOSE group at IRISA and MIG in Jouy-en-Josas. In this paper, we present the diverse components of FROST, emphasizing the recent advances in formulating and solving new versions of the PTP and the way a million instances can be solved on a computer cluster in a reasonable time.
[ { "created": "Wed, 25 Jul 2007 14:05:59 GMT", "version": "v1" }, { "created": "Mon, 30 Jul 2007 12:45:26 GMT", "version": "v2" } ]
2007-07-30
[ [ "Andonov", "Rumen", "", "IRISA" ], [ "Collet", "Guillaume", "", "IRISA" ], [ "Gibrat", "Jean-François", "", "MIG" ], [ "Marin", "Antoine", "", "MIG" ], [ "Poirriez", "Vincent", "", "LAMIH" ], [ "Yanev", "Ni...
The fold recognition methods are promising tools for capturing the structure of a protein from its amino acid residue sequence, but their use is still restricted by the need for huge computational resources and suitably efficient algorithms. In the recent version of the FROST (Fold Recognition Oriented Search Tool) package, the most efficient algorithm for solving the Protein Threading Problem (PTP) is implemented, thanks to the strong collaboration between the SYMBIOSE group at IRISA and MIG in Jouy-en-Josas. In this paper, we present the diverse components of FROST, emphasizing the recent advances in formulating and solving new versions of the PTP and the way a million instances can be solved on a computer cluster in a reasonable time.
1204.4393
Tom Tetzlaff
Tom Tetzlaff, Moritz Helias, Gaute T. Einevoll, Markus Diesmann
Decorrelation of neural-network activity by inhibitory feedback
null
null
10.1371/journal.pcbi.1002596
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent theoretical and experimental studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. By means of a linear network model and simulations of networks of leaky integrate-and-fire neurons, we show that shared-input correlations are efficiently suppressed by inhibitory feedback. To elucidate the effect of feedback, we compare the responses of the intact recurrent network and systems where the statistics of the feedback channel is perturbed. The suppression of spike-train correlations and population-rate fluctuations by inhibitory feedback can be observed both in purely inhibitory and in excitatory-inhibitory networks. The effect is fully understood by a linear theory and becomes already apparent at the macroscopic level of the population averaged activity. At the microscopic level, shared-input correlations are suppressed by spike-train correlations: In purely inhibitory networks, they are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
[ { "created": "Thu, 19 Apr 2012 16:01:02 GMT", "version": "v1" }, { "created": "Wed, 16 May 2012 14:09:55 GMT", "version": "v2" } ]
2015-06-04
[ [ "Tetzlaff", "Tom", "" ], [ "Helias", "Moritz", "" ], [ "Einevoll", "Gaute T.", "" ], [ "Diesmann", "Markus", "" ] ]
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent theoretical and experimental studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. By means of a linear network model and simulations of networks of leaky integrate-and-fire neurons, we show that shared-input correlations are efficiently suppressed by inhibitory feedback. To elucidate the effect of feedback, we compare the responses of the intact recurrent network and systems where the statistics of the feedback channel is perturbed. The suppression of spike-train correlations and population-rate fluctuations by inhibitory feedback can be observed both in purely inhibitory and in excitatory-inhibitory networks. The effect is fully understood by a linear theory and becomes already apparent at the macroscopic level of the population averaged activity. At the microscopic level, shared-input correlations are suppressed by spike-train correlations: In purely inhibitory networks, they are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
2309.01034
Jakub K\"ory
J. K\"ory, N. A. Hill, X. Y. Luo and P. S. Stewart
Discrete-to-continuum models of pre-stressed cytoskeletal filament networks
38 pages, 12 figures
null
null
null
q-bio.QM math.AP physics.bio-ph q-bio.SC
http://creativecommons.org/licenses/by/4.0/
We introduce a mathematical model for the mechanical behaviour of the eukaryotic cell cytoskeleton. This discrete model involves a regular array of pre-stressed protein filaments that exhibit resistance to enthalpic stretching, joined at crosslinks to form a network. Assuming that the inter-crosslink distance is much shorter than the lengthscale of the cell, we upscale the discrete force balance to form a continuum system of governing equations and deduce the corresponding macroscopic stress tensor. We use these discrete and continuum models to analyse the imposed displacement of a bead placed in the domain, characterising the cell rheology through the force-displacement curve. We further derive an analytical approximation to the stress and strain fields in the limit of small bead radius, predicting the net force required to generate a given deformation and elucidating the dependency on the microscale properties of the filaments. We apply these models to networks of the intermediate filament vimentin and demonstrate good agreement between predictions of the discrete, continuum and analytical approaches. In particular, our model predicts that the network stiffness increases sublinearly with the filament pre-stress and scales logarithmically with the bead size.
[ { "created": "Sat, 2 Sep 2023 22:42:59 GMT", "version": "v1" } ]
2023-09-06
[ [ "Köry", "J.", "" ], [ "Hill", "N. A.", "" ], [ "Luo", "X. Y.", "" ], [ "Stewart", "P. S.", "" ] ]
We introduce a mathematical model for the mechanical behaviour of the eukaryotic cell cytoskeleton. This discrete model involves a regular array of pre-stressed protein filaments that exhibit resistance to enthalpic stretching, joined at crosslinks to form a network. Assuming that the inter-crosslink distance is much shorter than the lengthscale of the cell, we upscale the discrete force balance to form a continuum system of governing equations and deduce the corresponding macroscopic stress tensor. We use these discrete and continuum models to analyse the imposed displacement of a bead placed in the domain, characterising the cell rheology through the force-displacement curve. We further derive an analytical approximation to the stress and strain fields in the limit of small bead radius, predicting the net force required to generate a given deformation and elucidating the dependency on the microscale properties of the filaments. We apply these models to networks of the intermediate filament vimentin and demonstrate good agreement between predictions of the discrete, continuum and analytical approaches. In particular, our model predicts that the network stiffness increases sublinearly with the filament pre-stress and scales logarithmically with the bead size.
1307.7282
Charlotte Hemelrijk
C.K. Hemelrijk, D.A.P. Reid, H. Hildenbrandt and J.T. Padding
The increased efficiency of fish swimming in a school
17 pages, including 6 figures, under submission to a scientific journal
null
null
null
q-bio.QM physics.bio-ph physics.flu-dyn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is increasing evidence that fish gain energetic benefits when they swim in a school. The most recent indications of such benefits are a lower tail (or fin) beat at the back of a school and reduced oxygen consumption in schooling fish versus solitary ones. How such advantages may arise is poorly understood. Current hydrodynamic theories concern either fish swimming side by side or in a diamond configuration and they largely ignore effects of viscosity and interactions among wakes and individuals. In reality, however, hydrodynamic effects are complex and fish swim in many configurations. Since these hydrodynamic effects are difficult to study empirically, we investigate them in a computer model by incorporating viscosity and interactions among wakes and with individuals. We compare swimming efficiency of mullets of 12.6 cm travelling solitarily and in schools of four different configurations at several inter-individual distances. The resulting Reynolds number (based on fish length) is approximately 1150. We show that these fish always swim more efficiently in a school than alone (except in a dense phalanx). We indicate how this efficiency may emerge from several kinds of interactions among wakes and individuals. Since individuals in our simulations are not even intending to exploit the wake, gains in efficiency are obtained more easily than previously thought.
[ { "created": "Sat, 27 Jul 2013 17:04:51 GMT", "version": "v1" } ]
2013-07-30
[ [ "Hemelrijk", "C. K.", "" ], [ "Reid", "D. A. P.", "" ], [ "Hildenbrandt", "H.", "" ], [ "Padding", "J. T.", "" ] ]
There is increasing evidence that fish gain energetic benefits when they swim in a school. The most recent indications of such benefits are a lower tail (or fin) beat at the back of a school and reduced oxygen consumption in schooling fish versus solitary ones. How such advantages may arise is poorly understood. Current hydrodynamic theories concern either fish swimming side by side or in a diamond configuration and they largely ignore effects of viscosity and interactions among wakes and individuals. In reality, however, hydrodynamic effects are complex and fish swim in many configurations. Since these hydrodynamic effects are difficult to study empirically, we investigate them in a computer model by incorporating viscosity and interactions among wakes and with individuals. We compare swimming efficiency of mullets of 12.6 cm travelling solitarily and in schools of four different configurations at several inter-individual distances. The resulting Reynolds number (based on fish length) is approximately 1150. We show that these fish always swim more efficiently in a school than alone (except in a dense phalanx). We indicate how this efficiency may emerge from several kinds of interactions among wakes and individuals. Since individuals in our simulations are not even intending to exploit the wake, gains in efficiency are obtained more easily than previously thought.
2104.07071
Trevor Rife
Trevor W. Rife, Chaney Courtney, Jenna Hershberger, Brandon Shaver, Michael A. Gore, Mitchell Neilsen, Jesse A. Poland
Prospector: a mobile app for high-throughput NIRS phenotyping
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-sa/4.0/
Quality traits are some of the most important and time-consuming phenotypes to evaluate in plant breeding programs. These traits are often evaluated late in the breeding pipeline due to their cost, resulting in the potential advancement of many lines that are not suitable for release. Near-infrared spectroscopy (NIRS) is a non-destructive tool that can rapidly increase the speed at which quality traits are evaluated. However, most spectrometers are non-portable or prohibitively expensive. Recent advancements have led to the development of consumer-targeted, inexpensive spectrometers with demonstrated potential for breeding applications. Unfortunately, the mobile applications for these spectrometers are not designed to rapidly collect organized samples at the scale necessary for breeding programs. To that end, we developed Prospector, a mobile application that connects with LinkSquare portable NIR spectrometers and allows breeders to efficiently capture NIR data. In this report, we outline the core functionality of the app and how it can easily be integrated into breeding workflows as well as the opportunities for further development. Prospector and other high throughput phenotyping tools and technologies are required for plant breeders to develop the next generation of improved varieties necessary to feed a growing global population.
[ { "created": "Wed, 14 Apr 2021 18:26:59 GMT", "version": "v1" } ]
2021-04-16
[ [ "Rife", "Trevor W.", "" ], [ "Courtney", "Chaney", "" ], [ "Hershberger", "Jenna", "" ], [ "Shaver", "Brandon", "" ], [ "Gore", "Michael A.", "" ], [ "Neilsen", "Mitchell", "" ], [ "Poland", "Jesse A.", "" ...
Quality traits are some of the most important and time-consuming phenotypes to evaluate in plant breeding programs. These traits are often evaluated late in the breeding pipeline due to their cost, resulting in the potential advancement of many lines that are not suitable for release. Near-infrared spectroscopy (NIRS) is a non-destructive tool that can rapidly increase the speed at which quality traits are evaluated. However, most spectrometers are non-portable or prohibitively expensive. Recent advancements have led to the development of consumer-targeted, inexpensive spectrometers with demonstrated potential for breeding applications. Unfortunately, the mobile applications for these spectrometers are not designed to rapidly collect organized samples at the scale necessary for breeding programs. To that end, we developed Prospector, a mobile application that connects with LinkSquare portable NIR spectrometers and allows breeders to efficiently capture NIR data. In this report, we outline the core functionality of the app and how it can easily be integrated into breeding workflows as well as the opportunities for further development. Prospector and other high throughput phenotyping tools and technologies are required for plant breeders to develop the next generation of improved varieties necessary to feed a growing global population.
1901.04753
Alexander Blokhuis
Alex Blokhuis, Philippe Nghe, Luca Peliti, David Lacoste
The generality of transient compartmentalization and its associated error thresholds
37 pages, 13 figures
null
null
null
q-bio.PE cond-mat.soft physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Can prelife proceed without cell division? A recently proposed mechanism suggests that transient compartmentalization could have preceded cell division in prebiotic scenarios. Here, we study transient compartmentalization dynamics in the presence of mutations and noise in replication, as both can be detrimental to the survival of compartments. Our study comprises situations where compartments contain uncoupled autocatalytic reactions feeding on a common resource, and systems based on RNA molecules copied by replicases, following a recent experimental study. Using the theory of branching processes, we show analytically that two regimes are possible. In the diffusion-limited regime, replication is asynchronous, which leads to a large variability in the composition of compartments. In contrast, in a replication-limited regime, the growth is synchronous and thus the compositional variability is low. Typically, simple autocatalysts are in the former regime, while polymeric replicators can access the latter. For deterministic growth dynamics, we introduce mutations that turn functional replicators into parasites. We derive the phase boundary separating coexistence or parasite dominance as a function of relative growth, inoculation size and mutation rate. We show that transient compartmentalization allows coexistence beyond the classical error threshold, above which the parasite dominates. Our findings invite us to revisit major prebiotic transitions, notably the transitions towards cooperation, complex polymers and cell division.
[ { "created": "Tue, 15 Jan 2019 10:43:06 GMT", "version": "v1" }, { "created": "Sun, 15 Dec 2019 20:20:40 GMT", "version": "v2" } ]
2019-12-17
[ [ "Blokhuis", "Alex", "" ], [ "Nghe", "Philippe", "" ], [ "Peliti", "Luca", "" ], [ "Lacoste", "David", "" ] ]
Can prelife proceed without cell division? A recently proposed mechanism suggests that transient compartmentalization could have preceded cell division in prebiotic scenarios. Here, we study transient compartmentalization dynamics in the presence of mutations and noise in replication, as both can be detrimental to the survival of compartments. Our study comprises situations where compartments contain uncoupled autocatalytic reactions feeding on a common resource, and systems based on RNA molecules copied by replicases, following a recent experimental study. Using the theory of branching processes, we show analytically that two regimes are possible. In the diffusion-limited regime, replication is asynchronous, which leads to a large variability in the composition of compartments. In contrast, in a replication-limited regime, the growth is synchronous and thus the compositional variability is low. Typically, simple autocatalysts are in the former regime, while polymeric replicators can access the latter. For deterministic growth dynamics, we introduce mutations that turn functional replicators into parasites. We derive the phase boundary separating coexistence or parasite dominance as a function of relative growth, inoculation size and mutation rate. We show that transient compartmentalization allows coexistence beyond the classical error threshold, above which the parasite dominates. Our findings invite us to revisit major prebiotic transitions, notably the transitions towards cooperation, complex polymers and cell division.
1512.03850
Junghyo Jo
Marissa Pastor and Juyong Song and Danh-Tai Hoang and Junghyo Jo
Minimal Perceptrons for Memorizing Complex Patterns
14 pages, 5 figures
Physica A 462:31-37 (2016)
10.1016/j.physa.2016.06.025
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Feedforward neural networks have been investigated to understand learning and memory, as well as applied to numerous practical problems in pattern classification. It is a rule of thumb that more complex tasks require larger networks. However, the design of optimal network architectures for specific tasks is still an unsolved fundamental problem. In this study, we consider three-layered neural networks for memorizing binary patterns. We developed a new complexity measure of binary patterns, and estimated the minimal network size for memorizing them as a function of their complexity. We formulated the minimal network size for regular, random, and complex patterns. In particular, the minimal size for complex patterns, which are neither ordered nor disordered, was predicted by measuring their Hamming distances from known ordered patterns. Our predictions agreed with simulations based on the back-propagation algorithm.
[ { "created": "Sat, 12 Dec 2015 00:08:27 GMT", "version": "v1" } ]
2016-07-20
[ [ "Pastor", "Marissa", "" ], [ "Song", "Juyong", "" ], [ "Hoang", "Danh-Tai", "" ], [ "Jo", "Junghyo", "" ] ]
Feedforward neural networks have been investigated to understand learning and memory, as well as applied to numerous practical problems in pattern classification. It is a rule of thumb that more complex tasks require larger networks. However, the design of optimal network architectures for specific tasks is still an unsolved fundamental problem. In this study, we consider three-layered neural networks for memorizing binary patterns. We developed a new complexity measure of binary patterns, and estimated the minimal network size for memorizing them as a function of their complexity. We formulated the minimal network size for regular, random, and complex patterns. In particular, the minimal size for complex patterns, which are neither ordered nor disordered, was predicted by measuring their Hamming distances from known ordered patterns. Our predictions agreed with simulations based on the back-propagation algorithm.
2101.03933
Ann Sizemore Blevins
Ann S. Blevins, Jason Z. Kim, Danielle S. Bassett
Variability in higher order structure of noise added to weighted networks
15 pages, 6 figures (main text)
null
null
null
q-bio.QM math.AT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
From spiking activity in neuronal networks to force chains in granular materials, the behavior of many real-world systems depends on a network of both strong and weak interactions. These interactions give rise to complex and higher-order system behaviors, and are encoded using data as the network's edges. However, distinguishing between true weak edges and low-weight edges caused by noise remains a challenge. We address this problem by examining the higher-order structure of noisy, weak edges added to model networks. We find that the structure of low-weight, noisy edges varies according to the topology of the model network to which it is added. By investigating this variation more closely, we see that at least three qualitative classes of noise structure emerge. Furthermore, we observe that the structure of noisy edges contains enough model-specific information to classify the model networks with moderate accuracy. Finally, we offer network generation rules that can drive different types of structure in added noisy edges. Our results demonstrate that noise does not present as a monolithic nuisance, but rather as a nuanced, topology-dependent, and even useful entity in characterizing higher-order network interactions. Hence, we provide an alternate approach to noise management by embracing its role in such interactions.
[ { "created": "Mon, 11 Jan 2021 14:54:48 GMT", "version": "v1" } ]
2021-01-12
[ [ "Blevins", "Ann S.", "" ], [ "Kim", "Jason Z.", "" ], [ "Bassett", "Danielle S.", "" ] ]
From spiking activity in neuronal networks to force chains in granular materials, the behavior of many real-world systems depends on a network of both strong and weak interactions. These interactions give rise to complex and higher-order system behaviors, and are encoded using data as the network's edges. However, distinguishing between true weak edges and low-weight edges caused by noise remains a challenge. We address this problem by examining the higher-order structure of noisy, weak edges added to model networks. We find that the structure of low-weight, noisy edges varies according to the topology of the model network to which it is added. By investigating this variation more closely, we see that at least three qualitative classes of noise structure emerge. Furthermore, we observe that the structure of noisy edges contains enough model-specific information to classify the model networks with moderate accuracy. Finally, we offer network generation rules that can drive different types of structure in added noisy edges. Our results demonstrate that noise does not present as a monolithic nuisance, but rather as a nuanced, topology-dependent, and even useful entity in characterizing higher-order network interactions. Hence, we provide an alternate approach to noise management by embracing its role in such interactions.
1112.5508
Deeparnab Chakrabarty
Deeparnab Chakrabarty and Sampath Kannan and Kevin Tian
Variance on the Leaves of a Tree Markov Random Field: Detecting Character Dependencies in Phylogenies
null
null
null
null
q-bio.PE cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic models of evolution (Markov random fields on trivalent trees) generally assume that different characters (different runs of the stochastic process) are independent and identically distributed. In this paper we take the first steps towards addressing dependent characters. Specifically we show that, under certain technical assumptions regarding the evolution of individual characters, we can detect any significant, history independent, correlation between any pair of multistate characters. For the special case of the Cavender-Farris-Neyman (CFN) model on two states with symmetric transition matrices, our analysis needs milder assumptions. To perform the analysis, we need to prove a new concentration result for multistate random variables of a Markov random field on arbitrary trivalent trees: we show that the random variable counting the number of leaves in any particular subset of states has variance that is subquadratic in the number of leaves.
[ { "created": "Fri, 23 Dec 2011 03:01:19 GMT", "version": "v1" }, { "created": "Mon, 16 Jun 2014 05:59:23 GMT", "version": "v2" }, { "created": "Mon, 27 Oct 2014 04:47:22 GMT", "version": "v3" } ]
2014-10-28
[ [ "Chakrabarty", "Deeparnab", "" ], [ "Kannan", "Sampath", "" ], [ "Tian", "Kevin", "" ] ]
Stochastic models of evolution (Markov random fields on trivalent trees) generally assume that different characters (different runs of the stochastic process) are independent and identically distributed. In this paper we take the first steps towards addressing dependent characters. Specifically we show that, under certain technical assumptions regarding the evolution of individual characters, we can detect any significant, history independent, correlation between any pair of multistate characters. For the special case of the Cavender-Farris-Neyman (CFN) model on two states with symmetric transition matrices, our analysis needs milder assumptions. To perform the analysis, we need to prove a new concentration result for multistate random variables of a Markov random field on arbitrary trivalent trees: we show that the random variable counting the number of leaves in any particular subset of states has variance that is subquadratic in the number of leaves.
1909.00454
Daniel Riveline
Alka Bhat, Linjie Lu, Chen-Ho Wang, Simon Lo Vecchio, Riccardo Maraspini, Alf Honigmann and Daniel Riveline
How to orient cells in micro-cavities for high resolution imaging of cytokinesis and lumen formation
null
Eur. Phys. J. E (2020) 43:31
10.1016/bs.mcb.2020.01.002
null
q-bio.QM eess.IV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Imaging dynamics of cellular morphogenesis with high spatiotemporal resolution in 3D is challenging, due to the low spatial resolution along the optical axis and photo-toxicity. However, some cellular structures are planar and hence 2D imaging should be sufficient, provided that the structure of interest can be oriented with respect to the optical axis of the microscope. Here, we report a 3D microfabrication method which positions and orients cell divisions very close to the microscope coverglass. We use this approach to study cytokinesis in fission yeasts and polarization to lumen formation in mammalian epithelial cells. We show that this method improves spatial resolution on a range of common microscopies, including super-resolution STED. Altogether, this method could shed new light on self-organization phenomena in single cells and 3D cell culture systems.
[ { "created": "Sun, 1 Sep 2019 19:21:20 GMT", "version": "v1" } ]
2020-07-14
[ [ "Bhat", "Alka", "" ], [ "Lu", "Linjie", "" ], [ "Wang", "Chen-Ho", "" ], [ "Vecchio", "Simon Lo", "" ], [ "Maraspini", "Riccardo", "" ], [ "Honigmann", "Alf", "" ], [ "Riveline", "Daniel", "" ] ]
Imaging dynamics of cellular morphogenesis with high spatiotemporal resolution in 3D is challenging, due to the low spatial resolution along the optical axis and photo-toxicity. However, some cellular structures are planar and hence 2D imaging should be sufficient, provided that the structure of interest can be oriented with respect to the optical axis of the microscope. Here, we report a 3D microfabrication method which positions and orients cell divisions very close to the microscope coverglass. We use this approach to study cytokinesis in fission yeasts and polarization to lumen formation in mammalian epithelial cells. We show that this method improves spatial resolution on a range of common microscopies, including super-resolution STED. Altogether, this method could shed new light on self-organization phenomena in single cells and 3D cell culture systems.
q-bio/0612021
Nadav M. Shnerb
Refael Abta and Nadav M. Shnerb
Angular velocity variations and stability of spatially explicit prey-predator systems
null
null
10.1103/PhysRevE.75.051914
null
q-bio.PE cond-mat.other nlin.AO
null
The linear instability of Lotka-Volterra orbits in the homogeneous manifold of a two-patch system is analyzed. The origin of the instability of these orbits in the absence of prey migration is revealed to be the dependence of the angular velocity on the azimuthal angle; in particular, the system desynchronizes at the exit from the slow part of the trajectory. Using this insight, an analogous model of two coupled oscillators is presented and shown to yield the same type of linear instability. This enables one to incorporate the linear instability within a recently presented general framework that allows for comparison of all known stabilization mechanisms and for simple classification of observed oscillations.
[ { "created": "Tue, 12 Dec 2006 13:50:19 GMT", "version": "v1" } ]
2009-11-13
[ [ "Abta", "Refael", "" ], [ "Shnerb", "Nadav M.", "" ] ]
The linear instability of Lotka-Volterra orbits in the homogeneous manifold of a two-patch system is analyzed. The origin of the instability of these orbits in the absence of prey migration is revealed to be the dependence of the angular velocity on the azimuthal angle; in particular, the system desynchronizes at the exit from the slow part of the trajectory. Using this insight, an analogous model of two coupled oscillators is presented and shown to yield the same type of linear instability. This enables one to incorporate the linear instability within a recently presented general framework that allows for comparison of all known stabilization mechanisms and for simple classification of observed oscillations.
0908.3503
Byung Mook Weon
Byung Mook Weon, Jung Ho Je
Predicting Human Lifespan Limits
11 pages, 3 figures, 2 tables; Natural Science (in press)
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent discoveries show steady improvements in life expectancy during modern decades. Does this imply that humans will continue to live longer in the future? We recently put forward the maximum survival tendency, as found in survival curves of industrialized countries, which is described by an extended Weibull model with an age-dependent stretched exponent. The maximum survival tendency suggests that human survival dynamics may possess an intrinsic limit, beyond which survival is inevitably forbidden. Based on this tendency, we develop the model and explore the patterns in the maximum lifespan limits from industrialized countries during the past three decades. This analysis strategy is simple and useful for interpreting the complicated human survival dynamics.
[ { "created": "Mon, 24 Aug 2009 21:33:05 GMT", "version": "v1" }, { "created": "Thu, 27 Aug 2009 23:48:18 GMT", "version": "v2" } ]
2009-08-28
[ [ "Weon", "Byung Mook", "" ], [ "Je", "Jung Ho", "" ] ]
Recent discoveries show steady improvements in life expectancy during modern decades. Does this imply that humans will continue to live longer in the future? We recently put forward the maximum survival tendency, as found in survival curves of industrialized countries, which is described by an extended Weibull model with an age-dependent stretched exponent. The maximum survival tendency suggests that human survival dynamics may possess an intrinsic limit, beyond which survival is inevitably forbidden. Based on this tendency, we develop the model and explore the patterns in the maximum lifespan limits from industrialized countries during the past three decades. This analysis strategy is simple and useful for interpreting the complicated human survival dynamics.
q-bio/0511006
Michael Hagan
Michael F. Hagan and David Chandler
Dynamic Pathways for Viral Capsid Assembly
13 pages, 13 figures. Submitted to Biophys. J
null
10.1529/biophysj.105.076851
null
q-bio.BM
null
We develop a class of models with which we simulate the assembly of particles into T1 capsid-like objects using Newtonian dynamics. By simulating assembly for many different values of system parameters, we vary the forces that drive assembly. For some ranges of parameters, assembly is facile, while for others, assembly is dynamically frustrated by kinetic traps corresponding to malformed or incompletely formed capsids. Our simulations sample many independent trajectories at various capsomer concentrations, allowing for statistically meaningful conclusions. Depending on subunit (i.e., capsomer) geometries, successful assembly proceeds by several mechanisms involving binding of intermediates of various sizes. We discuss the relationship between these mechanisms and experimental evaluations of capsid assembly processes.
[ { "created": "Mon, 7 Nov 2005 01:02:50 GMT", "version": "v1" } ]
2009-11-11
[ [ "Hagan", "Michael F.", "" ], [ "Chandler", "David", "" ] ]
We develop a class of models with which we simulate the assembly of particles into T1 capsid-like objects using Newtonian dynamics. By simulating assembly for many different values of system parameters, we vary the forces that drive assembly. For some ranges of parameters, assembly is facile, while for others, assembly is dynamically frustrated by kinetic traps corresponding to malformed or incompletely formed capsids. Our simulations sample many independent trajectories at various capsomer concentrations, allowing for statistically meaningful conclusions. Depending on subunit (i.e., capsomer) geometries, successful assembly proceeds by several mechanisms involving binding of intermediates of various sizes. We discuss the relationship between these mechanisms and experimental evaluations of capsid assembly processes.
1801.05452
Bob Eisenberg
Bob Eisenberg
Asking Biological Questions of Physical Systems: the Device Approach to Emergent Properties
Version 3, modified preprint of publication
Journal of Molecular Liquids (2018) 270: 212-217
10.1016/j.molliq.2018.01.088
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Life occurs in concentrated `Ringer Solutions' derived from seawater that Lesser Blum studied for most of his life. As we worked together, Lesser and I realized that the questions asked of those solutions were quite different in biology from those in the physical chemistry he knew. Biology is inherited. Information is passed by handfuls of atoms in the genetic code. A few atoms in the proteins built from the code change macroscopic function. Indeed, a few atoms often control biological function in the same sense that a gas pedal controls the speed of a car. Biological questions then are most productive when they are asked in the context of evolution. What function does a system perform? How is the system built to perform that function? What forces are used to perform that function? How are the modules that perform functions connected to make the machinery of life? Physiologists have shown that much of life is a nested hierarchy of devices, one on top of another, linking atomic ions in concentrated solutions to current flow through proteins, current flow to voltage signals, voltage signals to changes in current flow, all connected to make a regenerative system that allows electrical action potentials to move meters, under the control of a few atoms. The hierarchy of devices allows macroscopic properties to emerge from atomic scale interactions. The structures of biology create these devices. The concentration and electrical fields of biology power these devices, more than anything else. The resulting organisms reproduce. Evolution selects the organisms that reproduce more and thereby selects the devices that allow macroscopic control to emerge from the atomic structures of genes and proteins and their motions.
[ { "created": "Tue, 16 Jan 2018 19:21:00 GMT", "version": "v1" }, { "created": "Tue, 21 Sep 2021 14:21:52 GMT", "version": "v2" }, { "created": "Thu, 23 Sep 2021 14:17:16 GMT", "version": "v3" } ]
2021-09-24
[ [ "Eisenberg", "Bob", "" ] ]
Life occurs in concentrated `Ringer Solutions' derived from seawater that Lesser Blum studied for most of his life. As we worked together, Lesser and I realized that the questions asked of those solutions were quite different in biology from those in the physical chemistry he knew. Biology is inherited. Information is passed by handfuls of atoms in the genetic code. A few atoms in the proteins built from the code change macroscopic function. Indeed, a few atoms often control biological function in the same sense that a gas pedal controls the speed of a car. Biological questions then are most productive when they are asked in the context of evolution. What function does a system perform? How is the system built to perform that function? What forces are used to perform that function? How are the modules that perform functions connected to make the machinery of life? Physiologists have shown that much of life is a nested hierarchy of devices, one on top of another, linking atomic ions in concentrated solutions to current flow through proteins, current flow to voltage signals, voltage signals to changes in current flow, all connected to make a regenerative system that allows electrical action potentials to move meters, under the control of a few atoms. The hierarchy of devices allows macroscopic properties to emerge from atomic scale interactions. The structures of biology create these devices. The concentration and electrical fields of biology power these devices, more than anything else. The resulting organisms reproduce. Evolution selects the organisms that reproduce more and thereby selects the devices that allow macroscopic control to emerge from the atomic structures of genes and proteins and their motions.
1206.5846
Christina Boucher
Christine Lo, Boyko Kakaradov, Daniel Lokshtanov, and Christina Boucher
SeeSite: Efficiently Finding Co-occurring Splice Sites and Exon Splicing Enhancers
null
null
null
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of identifying splice sites consists of two sub-problems: finding their boundaries, and characterizing their sequence markers. Other splicing elements---including enhancers and silencers---that occur in the intronic and exonic regions play an important role in splicing activity. Existing methods for detecting splicing elements are limited to finding either splice sites or enhancers and silencers, even though these elements are well-known to co-occur. We introduce SeeSite, an efficient and accurate tool for detecting splice sites and their complementary exon splicing enhancers (ESEs). SeeSite has three stages: graph construction, finding dense subgraphs, and recovering splice sites and ESEs along with their consensus. The third step involves solving Consensus Sequence with Outliers, an NP-complete string clustering problem. We prove that our algorithm for this problem outputs near-optimal solutions in polynomial time. Using SeeSite we demonstrate that ESEs are preferentially associated with weaker splice sites, and splice sites of a certain canonical form co-occur with specific ESEs.
[ { "created": "Mon, 25 Jun 2012 21:20:53 GMT", "version": "v1" } ]
2012-06-27
[ [ "Lo", "Christine", "" ], [ "Kakaradov", "Boyko", "" ], [ "Lokshtanov", "Daniel", "" ], [ "Boucher", "Christina", "" ] ]
The problem of identifying splice sites consists of two sub-problems: finding their boundaries, and characterizing their sequence markers. Other splicing elements---including enhancers and silencers---that occur in the intronic and exonic regions play an important role in splicing activity. Existing methods for detecting splicing elements are limited to finding either splice sites or enhancers and silencers, even though these elements are well-known to co-occur. We introduce SeeSite, an efficient and accurate tool for detecting splice sites and their complementary exon splicing enhancers (ESEs). SeeSite has three stages: graph construction, finding dense subgraphs, and recovering splice sites and ESEs along with their consensus. The third step involves solving Consensus Sequence with Outliers, an NP-complete string clustering problem. We prove that our algorithm for this problem outputs near-optimal solutions in polynomial time. Using SeeSite we demonstrate that ESEs are preferentially associated with weaker splice sites, and splice sites of a certain canonical form co-occur with specific ESEs.
1410.2071
Aparna Rai
Sarika Jalan, Aparna Rai, Amit Kumar Pawar
Interaction patterns in diabetes mellitus II network: An RMT relation
36 pages, 6 figures, 7 tables
null
null
null
q-bio.MN cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diabetes mellitus type II affects around 8 percent of the total adult population in the world. It is the fifth leading cause of death in high-income countries and an epidemic in developing countries. We analyze protein-protein interaction data of the pancreatic cells for normal and disease states. The analysis exhibits overall structural similarities in the normal and disease networks. The important differences are revealed through specific interaction patterns and eigenvector analyses. The top contributing nodes from localized eigenvectors, as well as those being part of specific interaction patterns, turn out to be significant for the occurrence of the disease. The analysis provides a direction for further development of novel drugs and therapies in curing the disease by targeting specific patterns instead of a single node.
[ { "created": "Wed, 8 Oct 2014 11:58:26 GMT", "version": "v1" }, { "created": "Sat, 15 Nov 2014 09:39:16 GMT", "version": "v2" } ]
2014-11-18
[ [ "Jalan", "Sarika", "" ], [ "Rai", "Aparna", "" ], [ "Pawar", "Amit Kumar", "" ] ]
Diabetes mellitus type II affects around 8 percent of the total adult population in the world. It is the fifth leading cause of death in high-income countries and an epidemic in developing countries. We analyze protein-protein interaction data of the pancreatic cells for normal and disease states. The analysis exhibits overall structural similarities in the normal and disease networks. The important differences are revealed through specific interaction patterns and eigenvector analyses. The top contributing nodes from localized eigenvectors, as well as those being part of specific interaction patterns, turn out to be significant for the occurrence of the disease. The analysis provides a direction for further development of novel drugs and therapies in curing the disease by targeting specific patterns instead of a single node.
2309.00274
Md Nurul Anwar
Md Nurul Anwar, Lauren Smith, Angela Devine, Somya Mehra, Camelia R. Walker, Elizabeth Ivory, Eamon Conway, Ivo Mueller, James M. McCaw, Jennifer A. Flegg, Roslyn I. Hickson
Mathematical models of Plasmodium vivax transmission: a scoping review
null
Plos Computational Biology 20(3), 2024
10.1371/journal.pcbi.1011931
e1011931
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Plasmodium vivax is one of the most geographically widespread malaria parasites in the world due to its ability to remain dormant in the human liver as hypnozoites and subsequently reactivate after the initial infection (i.e. relapse infections). More than 80% of P. vivax infections are due to hypnozoite reactivation. Mathematical modelling approaches have been widely applied to understand P. vivax dynamics and predict the impact of intervention outcomes. In this article, we provide a scoping review of mathematical models that capture P. vivax transmission dynamics published between January 1988 and May 2023 to provide a comprehensive summary of the mathematical models and techniques used to model P. vivax dynamics. We aim to assist researchers working on P. vivax transmission and other aspects of P. vivax malaria by highlighting best practices in currently published models and highlighting where future model development is required. We provide an overview of the different strategies used to incorporate the parasite's biology, use of multiple scales (within-host and population-level), superinfection, immunity, and treatment interventions. In most of the published literature, the rationale for different modelling approaches was driven by the research question at hand. Some models focus on the parasites' complicated biology, while others incorporate simplified assumptions to avoid model complexity. Overall, the existing literature on mathematical models for P. vivax encompasses various aspects of the parasite's dynamics. We recommend that future research should focus on refining how key aspects of P. vivax dynamics are modelled, including spatial heterogeneity in exposure risk, the accumulation of hypnozoite variation, the interaction between P. falciparum and P. vivax, acquisition of immunity, and recovery under superinfection.
[ { "created": "Fri, 1 Sep 2023 06:10:33 GMT", "version": "v1" }, { "created": "Fri, 22 Sep 2023 03:18:33 GMT", "version": "v2" }, { "created": "Mon, 25 Sep 2023 01:49:02 GMT", "version": "v3" } ]
2024-03-19
[ [ "Anwar", "Md Nurul", "" ], [ "Smith", "Lauren", "" ], [ "Devine", "Angela", "" ], [ "Mehra", "Somya", "" ], [ "Walker", "Camelia R.", "" ], [ "Ivory", "Elizabeth", "" ], [ "Conway", "Eamon", "" ], [ ...
Plasmodium vivax is one of the most geographically widespread malaria parasites in the world due to its ability to remain dormant in the human liver as hypnozoites and subsequently reactivate after the initial infection (i.e. relapse infections). More than 80% of P. vivax infections are due to hypnozoite reactivation. Mathematical modelling approaches have been widely applied to understand P. vivax dynamics and predict the impact of intervention outcomes. In this article, we provide a scoping review of mathematical models that capture P. vivax transmission dynamics published between January 1988 and May 2023 to provide a comprehensive summary of the mathematical models and techniques used to model P. vivax dynamics. We aim to assist researchers working on P. vivax transmission and other aspects of P. vivax malaria by highlighting best practices in currently published models and highlighting where future model development is required. We provide an overview of the different strategies used to incorporate the parasite's biology, use of multiple scales (within-host and population-level), superinfection, immunity, and treatment interventions. In most of the published literature, the rationale for different modelling approaches was driven by the research question at hand. Some models focus on the parasites' complicated biology, while others incorporate simplified assumptions to avoid model complexity. Overall, the existing literature on mathematical models for P. vivax encompasses various aspects of the parasite's dynamics. We recommend that future research should focus on refining how key aspects of P. vivax dynamics are modelled, including spatial heterogeneity in exposure risk, the accumulation of hypnozoite variation, the interaction between P. falciparum and P. vivax, acquisition of immunity, and recovery under superinfection.
1410.6570
Andreas Hanke
Stefan M. Giovan, Robert G. Scharein, Andreas Hanke, and Stephen D. Levene
Free-energy calculations for semi-flexible macromolecules: Applications to DNA knotting and looping
Main article: 24 pages, 8 figures; Supplemental Material: 21 pages, 6 figures. Typos corrected
J. Chem. Phys. 141, 174902 (2014)
10.1016/j.bpj.2013.11.2299
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method to obtain numerically accurate values of configurational free energies of semiflexible macromolecular systems, based on the technique of thermodynamic integration combined with normal-mode analysis of a reference system subject to harmonic constraints. Compared with previous free-energy calculations that depend on a reference state, our approach introduces two innovations, namely the use of internal coordinates to constrain the reference states and the ability to freely select these reference states. As a consequence, it is possible to explore systems that undergo substantially larger fluctuations than those considered in previous calculations, including semiflexible biopolymers having arbitrary ratios of contour length L to persistence length P. To validate the method, high accuracy is demonstrated for free energies of prime DNA knots with L/P=20 and L/P=40, corresponding to DNA lengths of 3000 and 6000 base pairs, respectively. We then apply the method to study the free-energy landscape for a model of a synaptic nucleoprotein complex containing a pair of looped domains, revealing a bifurcation in the location of optimal synapse (crossover) sites. This transition is relevant to target-site selection by DNA-binding proteins that occupy multiple DNA sites separated by large linear distances along the genome, a problem that arises naturally in gene regulation, DNA recombination, and the action of type-II topoisomerases.
[ { "created": "Fri, 24 Oct 2014 04:26:09 GMT", "version": "v1" }, { "created": "Tue, 21 Jul 2015 05:33:23 GMT", "version": "v2" } ]
2018-10-17
[ [ "Giovan", "Stefan M.", "" ], [ "Scharein", "Robert G.", "" ], [ "Hanke", "Andreas", "" ], [ "Levene", "Stephen D.", "" ] ]
We present a method to obtain numerically accurate values of configurational free energies of semiflexible macromolecular systems, based on the technique of thermodynamic integration combined with normal-mode analysis of a reference system subject to harmonic constraints. Compared with previous free-energy calculations that depend on a reference state, our approach introduces two innovations, namely the use of internal coordinates to constrain the reference states and the ability to freely select these reference states. As a consequence, it is possible to explore systems that undergo substantially larger fluctuations than those considered in previous calculations, including semiflexible biopolymers having arbitrary ratios of contour length L to persistence length P. To validate the method, high accuracy is demonstrated for free energies of prime DNA knots with L/P=20 and L/P=40, corresponding to DNA lengths of 3000 and 6000 base pairs, respectively. We then apply the method to study the free-energy landscape for a model of a synaptic nucleoprotein complex containing a pair of looped domains, revealing a bifurcation in the location of optimal synapse (crossover) sites. This transition is relevant to target-site selection by DNA-binding proteins that occupy multiple DNA sites separated by large linear distances along the genome, a problem that arises naturally in gene regulation, DNA recombination, and the action of type-II topoisomerases.
2211.10518
Muyuan Chen
Muyuan Chen, Bogdan Toader, Roy Lederman
Integrating molecular models into CryoEM heterogeneity analysis using scalable high-resolution deep Gaussian mixture models
null
null
null
null
q-bio.QM q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Resolving the structural variability of proteins is often key to understanding the structure-function relationship of those macromolecular machines. Single particle analysis using Cryogenic electron microscopy (CryoEM), combined with machine learning algorithms, provides a way to reveal the dynamics within the protein system from noisy micrographs. Here, we introduce an improved computational method that uses Gaussian mixture models for protein structure representation and deep neural networks for conformation space embedding. By integrating information from molecular models into the heterogeneity analysis, we can resolve complex protein conformational changes at near atomic resolution and present the results in a more interpretable form.
[ { "created": "Fri, 18 Nov 2022 22:01:59 GMT", "version": "v1" } ]
2022-11-22
[ [ "Chen", "Muyuan", "" ], [ "Toader", "Bogdan", "" ], [ "Lederman", "Roy", "" ] ]
Resolving the structural variability of proteins is often key to understanding the structure-function relationship of those macromolecular machines. Single particle analysis using Cryogenic electron microscopy (CryoEM), combined with machine learning algorithms, provides a way to reveal the dynamics within the protein system from noisy micrographs. Here, we introduce an improved computational method that uses Gaussian mixture models for protein structure representation and deep neural networks for conformation space embedding. By integrating information from molecular models into the heterogeneity analysis, we can resolve complex protein conformational changes at near atomic resolution and present the results in a more interpretable form.
1003.5135
Andre Krzywicki
Z. Burda, A. Krzywicki, O.C. Martin, M. Zagorski
Distribution of essential interactions in model gene regulatory networks under mutation-selection balance
12 pages, 7 figures, references, comments and 1 figure added
Phys. Rev. E 82, 011908 (2010)
10.1103/PhysRevE.82.011908
LPT-Orsay 10-14
q-bio.MN cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene regulatory networks typically have low in-degrees, whereby any given gene is regulated by few of the genes in the network. They also tend to have broad distributions for the out-degree. What mechanisms might be responsible for these degree distributions? Starting with an accepted framework of the binding of transcription factors to DNA, we consider a simple model of gene regulatory dynamics. There, we show that selection for a target expression pattern leads to the emergence of minimum connectivities compatible with the selective constraint. As a consequence, these gene networks have low in-degree, and "functionality" is parsimonious, i.e. is concentrated on a sparse number of interactions as measured for instance by their essentiality. Furthermore, we find that mutations of the transcription factors drive the networks to have broad out-degrees. Finally, these classes of models are evolvable, i.e. significantly different genotypes can emerge gradually under mutation-selection balance.
[ { "created": "Fri, 26 Mar 2010 13:38:12 GMT", "version": "v1" }, { "created": "Fri, 18 Jun 2010 11:08:29 GMT", "version": "v2" } ]
2013-05-29
[ [ "Burda", "Z.", "" ], [ "Krzywicki", "A.", "" ], [ "Martin", "O. C.", "" ], [ "Zagorski", "M.", "" ] ]
Gene regulatory networks typically have low in-degrees, whereby any given gene is regulated by few of the genes in the network. They also tend to have broad distributions for the out-degree. What mechanisms might be responsible for these degree distributions? Starting with an accepted framework of the binding of transcription factors to DNA, we consider a simple model of gene regulatory dynamics. There, we show that selection for a target expression pattern leads to the emergence of minimum connectivities compatible with the selective constraint. As a consequence, these gene networks have low in-degree, and "functionality" is parsimonious, i.e. is concentrated on a sparse number of interactions as measured for instance by their essentiality. Furthermore, we find that mutations of the transcription factors drive the networks to have broad out-degrees. Finally, these classes of models are evolvable, i.e. significantly different genotypes can emerge gradually under mutation-selection balance.
2407.09567
Wouter Van Der Wijngaart
Wouter van der Wijngaart
On the nature of information -- an evolutionary perspective
null
null
null
null
q-bio.NC cs.IT math.IT physics.hist-ph
http://creativecommons.org/licenses/by/4.0/
This Perspective explores the origins and persistence of recurrent structures and patterns throughout the known Universe. We start with a first fundamental question: 1. Considering that all information consists of patterns in physical structure but not all physical patterns constitute information, what is the fundamental relation between these two? We first explore the materialistic nature of structures and information, detailing how they can form through spontaneous or templated processes and evolve into complex structures, including self-replicators. We posit that all recurring structures emerge either spontaneously de novo or based on underlying information. A main implication is that all information must be understood as both a product and a driver of evolution. We further observe that the three carriers of information underpin the emergence of three main layers of self-organisation: genes coded in DNA for the biological layer, ideas stored in neural structure for the cultural layer, and records written on innate objects for the civilisation layer. This gives rise to two additional questions, which we subsequently address: 2. What can we anticipate about the future development of self-organizing layers given the role of information in their emergence? 3. What is the universality of information and its evolution throughout the Universe? This manuscript aims to offer a fresh perspective and a universal framework for information and the origin of structures by extending and unifying concepts from physics, biology, and information theory.
[ { "created": "Sat, 6 Jul 2024 16:37:01 GMT", "version": "v1" } ]
2024-07-16
[ [ "van der Wijngaart", "Wouter", "" ] ]
This Perspective explores the origins and persistence of recurrent structures and patterns throughout the known Universe. We start with a first fundamental question: 1. Considering that all information consists of patterns in physical structure but not all physical patterns constitute information, what is the fundamental relation between these two? We first explore the materialistic nature of structures and information, detailing how they can form through spontaneous or templated processes and evolve into complex structures, including self-replicators. We posit that all recurring structures emerge either spontaneously de novo or based on underlying information. A main implication is that all information must be understood as both a product and a driver of evolution. We further observe that the three carriers of information underpin the emergence of three main layers of self-organisation: genes coded in DNA for the biological layer, ideas stored in neural structure for the cultural layer, and records written on innate objects for the civilisation layer. This gives rise to two additional questions, which we subsequently address: 2. What can we anticipate about the future development of self-organizing layers given the role of information in their emergence? 3. What is the universality of information and its evolution throughout the Universe? This manuscript aims to offer a fresh perspective and a universal framework for information and the origin of structures by extending and unifying concepts from physics, biology, and information theory.
q-bio/0402002
Liane Gabora
Liane Gabora
Ideas are Not Replicators but Minds Are
null
Biology and Philosophy (19)1: 127-143 (2004)
10.1023/B:BIPH.0000013234.87103.76
null
q-bio.PE nlin.AO q-bio.MN q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An idea is not a replicator because it does not consist of coded self-assembly instructions. It may retain structure as it passes from one individual to another, but does not replicate it. The cultural replicator is not an idea but an associatively-structured network of them that together form an internal model of the world, or worldview. A worldview is a primitive, uncoded replicator, like the autocatalytic sets of polymers widely believed to be the earliest form of life. Primitive replicators generate self-similar structure, but because the process happens in a piecemeal manner, through bottom-up interactions rather than a top-down code, they replicate with low fidelity, and acquired characteristics are inherited. Just as polymers catalyze reactions that generate other polymers, the retrieval of an item from memory can in turn trigger other items, thus cross-linking memories, ideas, and concepts into an integrated conceptual structure. Worldviews evolve idea by idea, largely through social exchange. An idea participates in the evolution of culture by revealing certain aspects of the worldview that generated it, thereby affecting the worldviews of those exposed to it. If an idea influences seemingly unrelated fields this does not mean that separate cultural lineages are contaminating one another, because it is worldviews, not ideas, that are the basic unit of cultural evolution.
[ { "created": "Sat, 31 Jan 2004 02:50:46 GMT", "version": "v1" }, { "created": "Sun, 30 Jun 2019 01:24:10 GMT", "version": "v2" } ]
2019-07-02
[ [ "Gabora", "Liane", "" ] ]
An idea is not a replicator because it does not consist of coded self-assembly instructions. It may retain structure as it passes from one individual to another, but does not replicate it. The cultural replicator is not an idea but an associatively-structured network of them that together form an internal model of the world, or worldview. A worldview is a primitive, uncoded replicator, like the autocatalytic sets of polymers widely believed to be the earliest form of life. Primitive replicators generate self-similar structure, but because the process happens in a piecemeal manner, through bottom-up interactions rather than a top-down code, they replicate with low fidelity, and acquired characteristics are inherited. Just as polymers catalyze reactions that generate other polymers, the retrieval of an item from memory can in turn trigger other items, thus cross-linking memories, ideas, and concepts into an integrated conceptual structure. Worldviews evolve idea by idea, largely through social exchange. An idea participates in the evolution of culture by revealing certain aspects of the worldview that generated it, thereby affecting the worldviews of those exposed to it. If an idea influences seemingly unrelated fields this does not mean that separate cultural lineages are contaminating one another, because it is worldviews, not ideas, that are the basic unit of cultural evolution.
2310.00174
James Notwell
James H. Notwell and Michael W. Wood
ADMET property prediction through combinations of molecular fingerprints
4 pages, 1 figure
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
While investigating methods to predict small molecule potencies, we found random forests or support vector machines paired with extended-connectivity fingerprints (ECFP) consistently outperformed recently developed methods. A detailed investigation into regression algorithms and molecular fingerprints revealed gradient-boosted decision trees, particularly CatBoost, in conjunction with a combination of ECFP, Avalon, and ErG fingerprints, as well as 200 molecular properties, to be most effective. Incorporating a graph neural network fingerprint further enhanced performance. We successfully validated our model across 22 Therapeutics Data Commons ADMET benchmarks. Our findings underscore the significance of richer molecular representations for accurate property prediction.
[ { "created": "Fri, 29 Sep 2023 22:39:18 GMT", "version": "v1" } ]
2023-10-03
[ [ "Notwell", "James H.", "" ], [ "Wood", "Michael W.", "" ] ]
While investigating methods to predict small molecule potencies, we found random forests or support vector machines paired with extended-connectivity fingerprints (ECFP) consistently outperformed recently developed methods. A detailed investigation into regression algorithms and molecular fingerprints revealed gradient-boosted decision trees, particularly CatBoost, in conjunction with a combination of ECFP, Avalon, and ErG fingerprints, as well as 200 molecular properties, to be most effective. Incorporating a graph neural network fingerprint further enhanced performance. We successfully validated our model across 22 Therapeutics Data Commons ADMET benchmarks. Our findings underscore the significance of richer molecular representations for accurate property prediction.
1204.0119
Leonardo L. Gollo
Leonardo L. Gollo, Claudio Mirasso and V\'ictor M. Egu\'iluz
Signal integration enhances the dynamic range in neuronal systems
5 pages, 4 figures
Phys. Rev. E, 85, 040902 (2012)
10.1103/PhysRevE.85.040902
null
q-bio.NC cond-mat.dis-nn cond-mat.stat-mech nlin.CG physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamic range measures the capacity of a system to discriminate the intensity of an external stimulus. Such an ability is fundamental for living beings to survive: to leverage resources and to avoid danger. Consequently, the larger the dynamic range, the greater the probability of survival. We investigate how the integration of different input signals affects the dynamic range, and in general the collective behavior of a network of excitable units. By means of numerical simulations and a mean-field approach, we explore the nonequilibrium phase transition in the presence of integration. We show that the firing rate in random and scale-free networks undergoes a discontinuous phase transition depending on both the integration time and the density of integrator units. Moreover, in the presence of external stimuli, we find that a system of excitable integrator units operating in a bistable regime largely enhances its dynamic range.
[ { "created": "Sat, 31 Mar 2012 17:46:07 GMT", "version": "v1" }, { "created": "Fri, 27 Apr 2012 17:34:37 GMT", "version": "v2" } ]
2012-05-02
[ [ "Gollo", "Leonardo L.", "" ], [ "Mirasso", "Claudio", "" ], [ "Eguíluz", "Víctor M.", "" ] ]
The dynamic range measures the capacity of a system to discriminate the intensity of an external stimulus. Such an ability is fundamental for living beings to survive: to leverage resources and to avoid danger. Consequently, the larger the dynamic range, the greater the probability of survival. We investigate how the integration of different input signals affects the dynamic range, and in general the collective behavior of a network of excitable units. By means of numerical simulations and a mean-field approach, we explore the nonequilibrium phase transition in the presence of integration. We show that the firing rate in random and scale-free networks undergoes a discontinuous phase transition depending on both the integration time and the density of integrator units. Moreover, in the presence of external stimuli, we find that a system of excitable integrator units operating in a bistable regime largely enhances its dynamic range.
1510.05167
Sara Bernardi
Sara Bernardi, Ezio Venturino
Viral epidemiology of the adult Apis Mellifera infested by the Varroa destructor mite
null
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ectoparasitic mite Varroa destructor has become one of the major worldwide threats for apiculture. Varroa destructor attacks the honey bee Apis mellifera, weakening its host by sucking hemolymph. However, the damage to bee colonies is not strictly related to the parasitic action of the mite; it derives, above all, from its action as a vector increasing the transmission of many viral diseases such as acute bee paralysis virus (ABPV) and deformed wing virus (DWV), which are considered among the main causes of CCD (Colony Collapse Disorder). In this work we discuss an SI model that describes how the presence of the mite affects the epidemiology of these viruses on adult bees. We characterize the system behavior, establishing that ultimately either only healthy bees survive, or the disease becomes endemic and mites are wiped out. Another dangerous alternative is the Varroa invasion scenario with the extinction of healthy bees. The final possible configuration is the coexistence equilibrium in which honey bees share their infected hive with mites. The analysis is in line with some observed facts in natural honey bee colonies. Namely, these diseases are endemic. Further, if the mite population is present, necessarily the viral infection occurs. The findings of this study indicate that a low horizontal transmission rate of the virus among honey bees in beehives will help in protecting bee colonies from Varroa infestation and viral epidemics.
[ { "created": "Sat, 17 Oct 2015 21:17:36 GMT", "version": "v1" }, { "created": "Sat, 30 Apr 2016 14:26:08 GMT", "version": "v2" } ]
2016-05-03
[ [ "Bernardi", "Sara", "" ], [ "Venturino", "Ezio", "" ] ]
The ectoparasitic mite Varroa destructor has become one of the major worldwide threats for apiculture. Varroa destructor attacks the honey bee Apis mellifera, weakening its host by sucking hemolymph. However, the damage to bee colonies is not strictly related to the parasitic action of the mite; it derives, above all, from its action as a vector increasing the transmission of many viral diseases such as acute bee paralysis virus (ABPV) and deformed wing virus (DWV), which are considered among the main causes of CCD (Colony Collapse Disorder). In this work we discuss an SI model that describes how the presence of the mite affects the epidemiology of these viruses on adult bees. We characterize the system behavior, establishing that ultimately either only healthy bees survive, or the disease becomes endemic and mites are wiped out. Another dangerous alternative is the Varroa invasion scenario with the extinction of healthy bees. The final possible configuration is the coexistence equilibrium in which honey bees share their infected hive with mites. The analysis is in line with some observed facts in natural honey bee colonies. Namely, these diseases are endemic. Further, if the mite population is present, necessarily the viral infection occurs. The findings of this study indicate that a low horizontal transmission rate of the virus among honey bees in beehives will help in protecting bee colonies from Varroa infestation and viral epidemics.
0804.2499
Emilio Hernandez-Garcia
J. M. Zaldivar (1), F.S. Bacelar (2), S. Dueri (1), D. Marinov (1), P. Viaroli (3) and E. Hernandez-Garcia (2). ((1) JRC, Ispra; (2) IFISC, Palma de Mallorca; (3) Univ. de Parma)
Modeling approach to regime shifts of primary production in shallow coastal ecosystems
33 pages, including 10 figures. To appear in Ecological Complexity
Ecological Modelling 220, 3100-3110 (2009)
10.1016/j.ecolmodel.2009.01.022
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pristine coastal shallow systems are usually dominated by extensive meadows of seagrass species, which are assumed to take advantage of nutrient supply from sediment. An increasing nutrient input is thought to favour phytoplankton, epiphytic microalgae, as well as opportunistic ephemeral macroalgae that coexist with seagrasses. The primary cause of shifts and succession in the macrophyte community is the increase of nutrient load to water; however, temperature also plays an important role. A competition model between rooted seagrass (Zostera marina), macroalgae (Ulva sp), and phytoplankton has been developed to analyse the succession of primary producer communities in these systems. Successions of dominance states, with different resilience characteristics, are found when modifying the input of nutrients and the seasonal temperature and light intensity forcing.
[ { "created": "Tue, 15 Apr 2008 22:37:52 GMT", "version": "v1" }, { "created": "Wed, 4 Feb 2009 09:01:51 GMT", "version": "v2" } ]
2009-11-04
[ [ "Zaldivar", "J. M.", "" ], [ "Bacelar", "F. S.", "" ], [ "Dueri", "S.", "" ], [ "Marinov", "D.", "" ], [ "Viaroli", "P.", "" ], [ "Hernandez-Garcia", "E.", "" ], [ ".", "", "" ] ]
Pristine coastal shallow systems are usually dominated by extensive meadows of seagrass species, which are assumed to take advantage of nutrient supply from sediment. An increasing nutrient input is thought to favour phytoplankton, epiphytic microalgae, as well as opportunistic ephemeral macroalgae that coexist with seagrasses. The primary cause of shifts and succession in the macrophyte community is the increase of nutrient load to water; however, temperature also plays an important role. A competition model between rooted seagrass (Zostera marina), macroalgae (Ulva sp), and phytoplankton has been developed to analyse the succession of primary producer communities in these systems. Successions of dominance states, with different resilience characteristics, are found when modifying the input of nutrients and the seasonal temperature and light intensity forcing.
2106.00525
Xuanyu Zhu
Xuanyu Zhu, Yang Gao, Feng Liu, Stuart Crozier, Hongfu Sun
Deep grey matter quantitative susceptibility mapping from small spatial coverages using deep learning
25 pages, 8 figures, 1 supplementary figure and 1 supplementary table
null
null
null
q-bio.QM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Introduction: Quantitative Susceptibility Mapping (QSM) is generally acquired with full brain coverage, even though many QSM brain-iron studies focus on the deep grey matter (DGM) region only. Reducing the spatial coverage to the DGM vicinity can substantially shorten the scan time or enhance the spatial resolution without increasing scan time; however, this may lead to significant DGM susceptibility underestimation. Method: A recently proposed deep learning-based QSM method, namely xQSM, is investigated to assess the accuracy of dipole inversion on reduced brain coverages. Pre-processed magnetic field maps are extended symmetrically from the centre of globus pallidus in the coronal plane to simulate QSM acquisitions of different spatial coverages. Results: The proposed xQSM network led to the lowest DGM contrast loss with the smallest susceptibility variation range across all spatial coverages. For the digital brain phantom simulation, xQSM improved the DGM susceptibility underestimation by more than 20% in small spatial coverages. For the in vivo acquisition, less than 5% DGM susceptibility error was achieved in 48 mm axial slabs using the xQSM network, while a minimum of 112 mm coverage was required for conventional methods. It is also shown that the background field removal process performed worse in reduced brain coverages, which further deteriorated the subsequent dipole inversion. Conclusion: The recently proposed deep learning-based xQSM method significantly improves the accuracy of DGM QSM from small spatial coverages as compared with conventional QSM algorithms, which can shorten DGM QSM acquisition time substantially.
[ { "created": "Tue, 1 Jun 2021 14:41:52 GMT", "version": "v1" } ]
2021-06-02
[ [ "Zhu", "Xuanyu", "" ], [ "Gao", "Yang", "" ], [ "Liu", "Feng", "" ], [ "Crozier", "Stuart", "" ], [ "Sun", "Hongfu", "" ] ]
Introduction: Quantitative Susceptibility Mapping (QSM) is generally acquired with full brain coverage, even though many QSM brain-iron studies focus on the deep grey matter (DGM) region only. Reducing the spatial coverage to the DGM vicinity can substantially shorten the scan time or enhance the spatial resolution without increasing scan time; however, this may lead to significant DGM susceptibility underestimation. Method: A recently proposed deep learning-based QSM method, namely xQSM, is investigated to assess the accuracy of dipole inversion on reduced brain coverages. Pre-processed magnetic field maps are extended symmetrically from the centre of globus pallidus in the coronal plane to simulate QSM acquisitions of different spatial coverages. Results: The proposed xQSM network led to the lowest DGM contrast loss with the smallest susceptibility variation range across all spatial coverages. For the digital brain phantom simulation, xQSM improved the DGM susceptibility underestimation by more than 20% in small spatial coverages. For the in vivo acquisition, less than 5% DGM susceptibility error was achieved in 48 mm axial slabs using the xQSM network, while a minimum of 112 mm coverage was required for conventional methods. It is also shown that the background field removal process performed worse in reduced brain coverages, which further deteriorated the subsequent dipole inversion. Conclusion: The recently proposed deep learning-based xQSM method significantly improves the accuracy of DGM QSM from small spatial coverages as compared with conventional QSM algorithms, which can shorten DGM QSM acquisition time substantially.
2208.12851
James Broda
Katherine Meyer, James Broda, Andrew Brettin, Mar\'ia S\'anchez Mu\~niz, Sarah Gorman, Forest Isbell, Sarah E. Hobbie, Mary Lou Zeeman, Richard McGehee
Nitrogen-induced hysteresis in grassland biodiversity: a theoretical test of litter-mediated mechanisms
24 pages, 5 figures
The American Naturalist 201.6 (2023): E153-E167
10.1086/724383
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
The global rise in anthropogenic reactive nitrogen (N) and the negative impacts of N deposition on terrestrial plant diversity are well-documented. The R* theory of resource competition predicts reversible decreases in plant diversity in response to N loading. However, empirical evidence for the reversibility of N-induced biodiversity loss is mixed. In a long-term N-enrichment experiment in Minnesota, a low-diversity state that emerged during N addition has persisted for decades after additions ceased. Hypothesized mechanisms preventing recovery of biodiversity include nutrient recycling, insufficient external seed supply, and litter inhibition of plant growth. Here we present an ODE model that unifies these mechanisms, produces bistability at intermediate N inputs, and qualitatively matches the observed hysteresis at Cedar Creek. Key features of the model, including native species' growth advantage in low-N conditions and limitation by litter accumulation, generalize from Cedar Creek to North American grasslands. Our results suggest that effective biodiversity restoration in these systems may require management beyond reducing N inputs, such as burning, grazing, haying, and seed additions. By coupling resource competition with an additional inter-specific inhibitory process, the model also illustrates a general mechanism for bistability and hysteresis that may occur in multiple ecosystem types.
[ { "created": "Fri, 26 Aug 2022 19:45:47 GMT", "version": "v1" } ]
2023-09-28
[ [ "Meyer", "Katherine", "" ], [ "Broda", "James", "" ], [ "Brettin", "Andrew", "" ], [ "Muñiz", "María Sánchez", "" ], [ "Gorman", "Sarah", "" ], [ "Isbell", "Forest", "" ], [ "Hobbie", "Sarah E.", "" ], ...
The global rise in anthropogenic reactive nitrogen (N) and the negative impacts of N deposition on terrestrial plant diversity are well-documented. The R* theory of resource competition predicts reversible decreases in plant diversity in response to N loading. However, empirical evidence for the reversibility of N-induced biodiversity loss is mixed. In a long-term N-enrichment experiment in Minnesota, a low-diversity state that emerged during N addition has persisted for decades after additions ceased. Hypothesized mechanisms preventing recovery of biodiversity include nutrient recycling, insufficient external seed supply, and litter inhibition of plant growth. Here we present an ODE model that unifies these mechanisms, produces bistability at intermediate N inputs, and qualitatively matches the observed hysteresis at Cedar Creek. Key features of the model, including native species' growth advantage in low-N conditions and limitation by litter accumulation, generalize from Cedar Creek to North American grasslands. Our results suggest that effective biodiversity restoration in these systems may require management beyond reducing N inputs, such as burning, grazing, haying, and seed additions. By coupling resource competition with an additional inter-specific inhibitory process, the model also illustrates a general mechanism for bistability and hysteresis that may occur in multiple ecosystem types.
2105.06702
Maxime Lenormand
Cl\'ementine Pr\'eau, Nicolas Dubos, Maxime Lenormand, Pierre Denelle, Marine Le Louarn, Samuel Alleaume and Sandra Luque
Dispersal-based species pools as sources of connectivity area mismatches
18 pages, 7 figures + Appendix
Landscape Ecology 37, 729-743 (2022)
10.1007/s10980-021-01371-y
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Context - Prioritising is likely to differ depending on the species considered for connectivity assessments, leading to a lack of consensual decisions for territorial planning. Objectives - The objective was to assess the relevance of identifying priority areas for connectivity for groups of species based on common dispersal abilities. We aimed to assess the impact of target group choices on predicted priority areas. Method - The study was located at the Thau Lagoon territory to demonstrate the methodological approach. Ecological niche modelling was used to quantify species resistance and to identify suitable habitat patches. We coupled the least-cost path methodology with circuit theory to assess species connectivity. We classified connectivity from high to low levels and averaged the results by dispersal groups. Results - We found important differences in identified priority areas between groups with dissimilar dispersal abilities, with little overlap between highly connected areas. We identified a gap between the level of protection of low dispersal species and highly connected areas. We found mismatches between existing corridors and connectivity in low dispersal species, and a greater impact in areas of expected urban sprawl projects on favourably connected areas for species with high dispersal capabilities. Conclusion - We have demonstrated that a diversity of dispersal capacity ranges must be accounted for in order to identify ecological corridors in programmes that aim to restore habitat connectivity at territorial levels. Our findings are oriented to support the decisions of planning initiatives at both local and regional scales.
[ { "created": "Fri, 14 May 2021 08:28:18 GMT", "version": "v1" }, { "created": "Mon, 22 Nov 2021 09:30:12 GMT", "version": "v2" } ]
2022-03-08
[ [ "Préau", "Clémentine", "" ], [ "Dubos", "Nicolas", "" ], [ "Lenormand", "Maxime", "" ], [ "Denelle", "Pierre", "" ], [ "Louarn", "Marine Le", "" ], [ "Alleaume", "Samuel", "" ], [ "Luque", "Sandra", "" ] ...
Context - Prioritising is likely to differ depending on the species considered for connectivity assessments, leading to a lack of consensual decisions for territorial planning. Objectives - The objective was to assess the relevance of identifying priority areas for connectivity for groups of species based on common dispersal abilities. We aimed to assess the impact of target group choices on predicted priority areas. Method - The study was located at the Thau Lagoon territory to demonstrate the methodological approach. Ecological niche modelling was used to quantify species resistance and to identify suitable habitat patches. We coupled the least-cost path methodology with circuit theory to assess species connectivity. We classified connectivity from high to low levels and averaged the results by dispersal groups. Results - We found important differences in identified priority areas between groups with dissimilar dispersal abilities, with little overlap between highly connected areas. We identified a gap between the level of protection of low dispersal species and highly connected areas. We found mismatches between existing corridors and connectivity in low dispersal species, and a greater impact in areas of expected urban sprawl projects on favourably connected areas for species with high dispersal capabilities. Conclusion - We have demonstrated that a diversity of dispersal capacity ranges must be accounted for in order to identify ecological corridors in programmes that aim to restore habitat connectivity at territorial levels. Our findings are oriented to support the decisions of planning initiatives at both local and regional scales.
2304.01345
Selena (Shuo) Wang
Selena Wang, Yiting Wang, Frederick H. Xu, Li Shen, Yize Zhao (and for the Alzheimer's Disease Neuroimaging Initiative)
Establishing group-level brain structural connectivity incorporating anatomical knowledge under latent space modeling
null
null
null
null
q-bio.NC stat.ME
http://creativecommons.org/licenses/by/4.0/
Brain structural connectivity, capturing the white matter fiber tracts among brain regions inferred by diffusion MRI (dMRI), provides a unique characterization of brain anatomical organization. One fundamental question to address with structural connectivity is how to properly summarize and perform statistical inference for a group-level connectivity architecture, for instance, under different sex groups, or disease cohorts. Existing analyses commonly summarize group-level brain connectivity by a simple entry-wise sample mean or median across individual brain connectivity matrices. However, such a heuristic approach fully ignores the associations among structural connections and the topological properties of brain networks. In this project, we propose a latent space-based generative network model to estimate group-level brain connectivity. We name our method the attributes-informed brain connectivity (ABC) model, which compared with existing group-level connectivity estimations, (1) offers an interpretable latent space representation of the group-level connectivity, (2) incorporates the anatomical knowledge of nodes and tests its co-varying relationship with connectivity and (3) quantifies the uncertainty and evaluates the likelihood of the estimated group-level effects against chance. We devise a novel Bayesian MCMC algorithm to estimate the model. By applying the ABC model to study brain structural connectivity stratified by sex among Alzheimer's Disease (AD) subjects and healthy controls incorporating the anatomical attributes (volume, thickness and area) on nodes, our method shows superior predictive power on out-of-sample structural connectivity and identifies meaningful sex-specific network neuromarkers for AD.
[ { "created": "Tue, 21 Feb 2023 21:20:56 GMT", "version": "v1" } ]
2023-04-05
[ [ "Wang", "Selena", "", "and for\n the Alzheimer's Disease Neuroimaging Initiative" ], [ "Wang", "Yiting", "", "and for\n the Alzheimer's Disease Neuroimaging Initiative" ], [ "Xu", "Frederick H.", "", "and for\n the Alzheimer's Disease Neuroimaging Initiat...
Brain structural connectivity, capturing the white matter fiber tracts among brain regions inferred by diffusion MRI (dMRI), provides a unique characterization of brain anatomical organization. One fundamental question to address with structural connectivity is how to properly summarize and perform statistical inference for a group-level connectivity architecture, for instance, under different sex groups, or disease cohorts. Existing analyses commonly summarize group-level brain connectivity by a simple entry-wise sample mean or median across individual brain connectivity matrices. However, such a heuristic approach fully ignores the associations among structural connections and the topological properties of brain networks. In this project, we propose a latent space-based generative network model to estimate group-level brain connectivity. We name our method the attributes-informed brain connectivity (ABC) model, which compared with existing group-level connectivity estimations, (1) offers an interpretable latent space representation of the group-level connectivity, (2) incorporates the anatomical knowledge of nodes and tests its co-varying relationship with connectivity and (3) quantifies the uncertainty and evaluates the likelihood of the estimated group-level effects against chance. We devise a novel Bayesian MCMC algorithm to estimate the model. By applying the ABC model to study brain structural connectivity stratified by sex among Alzheimer's Disease (AD) subjects and healthy controls incorporating the anatomical attributes (volume, thickness and area) on nodes, our method shows superior predictive power on out-of-sample structural connectivity and identifies meaningful sex-specific network neuromarkers for AD.
1409.1783
Sandip Banerjee Dr.
Sandip Banerjee, Subhas Khajanchi and Swapna Chowdhury
Mathematical modeling to elucidate brain tumor abrogation by immunotherapy with T11 target structure
null
null
10.1371/journal.pone.0123611
null
q-bio.TO math.DS q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
T11 Target structure (T11TS), a membrane glycoprotein isolated from sheep erythrocytes, reverses the immune suppressed state of brain tumor induced animals by boosting the functional status of the immune cells. This study aims at aiding in the design of more efficacious brain tumor therapies with T11 target structure. We propose a mathematical model for brain tumor (glioma) and the immune system interactions, which aims at designing efficacious brain tumor therapy. The model encompasses considerations of the interactive dynamics of macrophages, cytotoxic T lymphocytes, glioma cells, TGF-$\beta$, IFN-$\gamma$ and the T11TS. The system undergoes sensitivity analysis, which determines which state variables are sensitive to the given parameters; the parameters are estimated from the published data. Computer simulations were used for model verification and validation, which highlight the importance of T11 target structure in brain tumor therapy.
[ { "created": "Fri, 29 Aug 2014 15:44:50 GMT", "version": "v1" }, { "created": "Fri, 15 May 2015 05:44:46 GMT", "version": "v2" } ]
2017-02-08
[ [ "Banerjee", "Sandip", "" ], [ "Khajanchi", "Subhas", "" ], [ "Chowdhury", "Swapna", "" ] ]
T11 Target structure (T11TS), a membrane glycoprotein isolated from sheep erythrocytes, reverses the immune suppressed state of brain tumor induced animals by boosting the functional status of the immune cells. This study aims at aiding in the design of more efficacious brain tumor therapies with T11 target structure. We propose a mathematical model for brain tumor (glioma) and the immune system interactions, which aims at designing efficacious brain tumor therapy. The model encompasses considerations of the interactive dynamics of macrophages, cytotoxic T lymphocytes, glioma cells, TGF-$\beta$, IFN-$\gamma$ and the T11TS. The system undergoes sensitivity analysis, which determines which state variables are sensitive to the given parameters; the parameters are estimated from the published data. Computer simulations were used for model verification and validation, which highlight the importance of T11 target structure in brain tumor therapy.
0904.0947
Claus Metzner
C. Metzner, M. Sajitz-Hermstein, M. Schmidberger, and B. Fabry
Noise and critical phenomena in biochemical signaling cycles at small molecule numbers
11 pages, 8 figures, needs style files: revTex, chemarrow
null
10.1103/PhysRevE.80.021915
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biochemical reaction networks in living cells usually involve reversible covalent modification of signaling molecules, such as protein phosphorylation. Under conditions of small molecule numbers, as is frequently the case in living cells, mass action theory fails to describe the dynamics of such systems. Instead, the biochemical reactions must be treated as stochastic processes that intrinsically generate concentration fluctuations of the chemicals. We investigate the stochastic reaction kinetics of covalent modification cycles (CMCs) by analytical modeling and numerically exact Monte-Carlo simulation of the temporally fluctuating concentration. Depending on the parameter regime, we find for the probability density of the concentration qualitatively distinct classes of distribution functions, including power law distributions with a fractional and tunable exponent. These findings challenge the traditional view of biochemical control networks as deterministic computational systems and suggest that CMCs in cells can function as versatile and tunable noise generators.
[ { "created": "Mon, 6 Apr 2009 15:06:12 GMT", "version": "v1" }, { "created": "Tue, 21 Apr 2009 15:32:34 GMT", "version": "v2" }, { "created": "Tue, 7 Jul 2009 07:58:17 GMT", "version": "v3" }, { "created": "Wed, 15 Jul 2009 07:35:11 GMT", "version": "v4" } ]
2013-05-29
[ [ "Metzner", "C.", "" ], [ "Sajitz-Hermstein", "M.", "" ], [ "Schmidberger", "M.", "" ], [ "Fabry", "B.", "" ] ]
Biochemical reaction networks in living cells usually involve reversible covalent modification of signaling molecules, such as protein phosphorylation. Under conditions of small molecule numbers, as is frequently the case in living cells, mass action theory fails to describe the dynamics of such systems. Instead, the biochemical reactions must be treated as stochastic processes that intrinsically generate concentration fluctuations of the chemicals. We investigate the stochastic reaction kinetics of covalent modification cycles (CMCs) by analytical modeling and numerically exact Monte-Carlo simulation of the temporally fluctuating concentration. Depending on the parameter regime, we find for the probability density of the concentration qualitatively distinct classes of distribution functions, including power law distributions with a fractional and tunable exponent. These findings challenge the traditional view of biochemical control networks as deterministic computational systems and suggest that CMCs in cells can function as versatile and tunable noise generators.
2311.17343
Steven DiSilvio
John Blackwelder, Steven DiSilvio, Anthony Ozerov
Forays into Fungal Fighting and Mycological Moisture Modeling
null
null
null
null
q-bio.PE cs.CE
http://creativecommons.org/licenses/by/4.0/
As the impending consequences of climate change loom over the Earth, it has become vital for researchers to understand the role microorganisms play in this process. In this paper, we examine how environmental factors, including moisture levels and temperature, affect the expression of certain fungal characteristics on a microscale, and how these in turn affect fungal biodiversity and ecosystem decomposition rates over time. We first present a differential equation model to understand how the distribution of different fungal isolates depends on regional moisture levels. We introduce both slow and sudden variations into the environment in order to represent the various ways climate change will impact fungal ecosystems. This model demonstrates that increased variability in moisture (both short-term and long-term) increases biodiversity and that fungal populations will shift towards more stress-tolerant fungi as aridity increases. The model further suggests the lack of any direct link between biodiversity and decomposition rates. To better describe fungal competition with respect to space, we develop a local agent-based model (ABM). Unlike the previous model, our ABM focuses on individuals, tracking each fungus and the result of its interactions. Our ABM also features a more accurate spatial combat system, allowing us to precisely discern the influence of fungal interactions on the environment. This model corroborates the results of the differential equation model and further suggests that moisture, through its link with temperature and effects on fungal population, also plays a strong role in determining fungal decomposition rates. Together, these models suggest that climate change, which portends increasing variability in regional conditions and higher average temperatures worldwide, will lead to an increase in both wood decomposition rates and, independently, fungal biodiversity.
[ { "created": "Wed, 29 Nov 2023 03:46:23 GMT", "version": "v1" } ]
2023-11-30
[ [ "Blackwelder", "John", "" ], [ "DiSilvio", "Steven", "" ], [ "Ozerov", "Anthony", "" ] ]
As the impending consequences of climate change loom over the Earth, it has become vital for researchers to understand the role microorganisms play in this process. In this paper, we examine how environmental factors, including moisture levels and temperature, affect the expression of certain fungal characteristics on a microscale, and how these in turn affect fungal biodiversity and ecosystem decomposition rates over time. We first present a differential equation model to understand how the distribution of different fungal isolates depends on regional moisture levels. We introduce both slow and sudden variations into the environment in order to represent the various ways climate change will impact fungal ecosystems. This model demonstrates that increased variability in moisture (both short-term and long-term) increases biodiversity and that fungal populations will shift towards more stress-tolerant fungi as aridity increases. The model further suggests the lack of any direct link between biodiversity and decomposition rates. To better describe fungal competition with respect to space, we develop a local agent-based model (ABM). Unlike the previous model, our ABM focuses on individuals, tracking each fungus and the result of its interactions. Our ABM also features a more accurate spatial combat system, allowing us to precisely discern the influence of fungal interactions on the environment. This model corroborates the results of the differential equation model and further suggests that moisture, through its link with temperature and effects on fungal population, also plays a strong role in determining fungal decomposition rates. Together, these models suggest that climate change, which portends increasing variability in regional conditions and higher average temperatures worldwide, will lead to an increase in both wood decomposition rates and, independently, fungal biodiversity.
2205.13915
Mohammad Asif Khan
Shan Tharanga, Eyyub Selim Unlu, Yongli Hu, Muhammad Farhan Sjaugi, Muhammet A. Celik, Hilal Hekimoglu, Olivo Miotto, Muhammed Miran Oncel, and Asif M. Khan
DiMA: Sequence Diversity Dynamics Analyser for Viruses
18 pages, 2 figures, 50 references
null
null
null
q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Sequence diversity is one of the major challenges in the design of diagnostic, prophylactic and therapeutic interventions against viruses. DiMA is a novel tool that is big data-ready and designed to facilitate the dissection of sequence diversity dynamics for viruses. DiMA stands out from other diversity analysis tools by offering various unique features. DiMA provides a quantitative overview of sequence (nucleotide/protein) diversity by use of Shannon's entropy corrected for size bias, applied via a user-defined k-mer sliding window to an input alignment file, and each k-mer position is dissected into various diversity motifs. The motifs are defined based on the probability of distinct sequences at a given k-mer position, whereby an index is the predominant sequence, while all the others are (total) variants to the index. The total variants are sub-classified into the major (most common) variant, minor variants (occurring more than once and of frequency lower than the major), and the unique (singleton) variants. DiMA allows user-defined sequence metadata enrichment for analyses of the motifs. The application of DiMA was demonstrated for the alignment data of the relatively conserved Spike protein (2,106,985 sequences) of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and the relatively highly diverse Pol protein (3,874) of human immunodeficiency virus-1 (HIV-1). The tool is publicly available as a web server (https://dima.bezmialem.edu.tr), as a Python library (via PyPi) and as a command line client (via GitHub).
[ { "created": "Fri, 27 May 2022 11:36:37 GMT", "version": "v1" }, { "created": "Sat, 27 Jul 2024 11:29:58 GMT", "version": "v2" } ]
2024-07-30
[ [ "Tharanga", "Shan", "" ], [ "Unlu", "Eyyub Selim", "" ], [ "Hu", "Yongli", "" ], [ "Sjaugi", "Muhammad Farhan", "" ], [ "Celik", "Muhammet A.", "" ], [ "Hekimoglu", "Hilal", "" ], [ "Miotto", "Olivo", "" ...
Sequence diversity is one of the major challenges in the design of diagnostic, prophylactic and therapeutic interventions against viruses. DiMA is a novel tool that is big data-ready and designed to facilitate the dissection of sequence diversity dynamics for viruses. DiMA stands out from other diversity analysis tools by offering various unique features. DiMA provides a quantitative overview of sequence (nucleotide/protein) diversity by use of Shannon's entropy corrected for size bias, applied via a user-defined k-mer sliding window to an input alignment file, and each k-mer position is dissected into various diversity motifs. The motifs are defined based on the probability of distinct sequences at a given k-mer position, whereby an index is the predominant sequence, while all the others are (total) variants to the index. The total variants are sub-classified into the major (most common) variant, minor variants (occurring more than once and of frequency lower than the major), and the unique (singleton) variants. DiMA allows user-defined sequence metadata enrichment for analyses of the motifs. The application of DiMA was demonstrated for the alignment data of the relatively conserved Spike protein (2,106,985 sequences) of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and the relatively highly diverse Pol protein (3,874) of human immunodeficiency virus-1 (HIV-1). The tool is publicly available as a web server (https://dima.bezmialem.edu.tr), as a Python library (via PyPi) and as a command line client (via GitHub).
1303.1610
Matteo Convertino
Matteo Convertino, Filippo Simini, Filippo Catani, Igor Linkov, Gregory A. Kiker
Power-law of Aggregate-size Spectra in Natural Systems
ICST Transactions on Complex Systems
null
null
null
q-bio.QM math-ph math.MP stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Patterns of animate and inanimate systems show remarkable similarities in their aggregation. One similarity is the double-Pareto distribution of the aggregate-size of system components. Different models have been developed to predict aggregates of system components. However, not many models have been developed to describe probabilistically the aggregate-size distribution of any system regardless of the intrinsic and extrinsic drivers of the aggregation process. Here we consider natural animate systems, from one of the greatest mammals - the African elephant (\textit{Loxodonta africana}) - to the \textit{Escherichia coli} bacteria, and natural inanimate systems in river basins. Considering aggregates as islands and their perimeter as a curve mirroring the sculpting network of the system, the probability of exceedance of the drainage area and Hack's law are shown to be the Kor\v{c}ak's law and the perimeter-area relationship for river basins. The perimeter-area relationship and the probability of exceedance of the aggregate-size provide a meaningful estimate of the same fractal dimension. Systems aggregate because of the influence exerted by a physical or process network within the system domain. The aggregate-size distribution is accurately derived using the null-method of box-counting on the occurrences of system components. The importance of the aggregate-size spectrum relies on its ability to reveal system form, function, and dynamics, also as a function of other coupled systems. Variations of the fractal dimension and of the aggregate-size distribution are related to changes of systems that are meaningful to monitor because potentially critical for these systems.
[ { "created": "Thu, 7 Mar 2013 06:19:06 GMT", "version": "v1" } ]
2013-03-08
[ [ "Convertino", "Matteo", "" ], [ "Simini", "Filippo", "" ], [ "Catani", "Filippo", "" ], [ "Linkov", "Igor", "" ], [ "Kiker", "Gregory A.", "" ] ]
Patterns of animate and inanimate systems show remarkable similarities in their aggregation. One similarity is the double-Pareto distribution of the aggregate-size of system components. Different models have been developed to predict aggregates of system components. However, not many models have been developed to describe probabilistically the aggregate-size distribution of any system regardless of the intrinsic and extrinsic drivers of the aggregation process. Here we consider natural animate systems, from one of the greatest mammals - the African elephant (\textit{Loxodonta africana}) - to the \textit{Escherichia coli} bacteria, and natural inanimate systems in river basins. Considering aggregates as islands and their perimeter as a curve mirroring the sculpting network of the system, the probability of exceedance of the drainage area and Hack's law are shown to be the Kor\v{c}ak's law and the perimeter-area relationship for river basins. The perimeter-area relationship and the probability of exceedance of the aggregate-size provide a meaningful estimate of the same fractal dimension. Systems aggregate because of the influence exerted by a physical or process network within the system domain. The aggregate-size distribution is accurately derived using the null-method of box-counting on the occurrences of system components. The importance of the aggregate-size spectrum relies on its ability to reveal system form, function, and dynamics, also as a function of other coupled systems. Variations of the fractal dimension and of the aggregate-size distribution are related to changes of systems that are meaningful to monitor because potentially critical for these systems.
2311.13650
Charles Kocher
Charles D. Kocher and Ken A. Dill
The prebiotic emergence of biological evolution
17 pages, 8 figures
null
10.1098/rsos.240431
null
q-bio.PE physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
The origin of life must have been preceded by Darwin-like evolutionary dynamics that could propagate it. How did that adaptive dynamics arise? And from what prebiotic molecules? Using evolutionary invasion analysis, we develop a universal framework for describing any origin story for evolutionary dynamics. We find that cooperative autocatalysts, i.e. autocatalysts whose per-unit reproductive rate grows as their population increases, have the special property of being able to cross a barrier that separates their initial degradation-dominated state from a growth-dominated state with evolutionary dynamics. For some model parameters, this leap to persistent propagation is likely, not rare. We apply this analysis to the Foldcat Mechanism, wherein peptides fold and help catalyze the elongation of each other. Foldcats are found to have cooperative autocatalysis and be capable of emergent evolutionary dynamics.
[ { "created": "Wed, 22 Nov 2023 19:04:58 GMT", "version": "v1" } ]
2024-07-25
[ [ "Kocher", "Charles D.", "" ], [ "Dill", "Ken A.", "" ] ]
The origin of life must have been preceded by Darwin-like evolutionary dynamics that could propagate it. How did that adaptive dynamics arise? And from what prebiotic molecules? Using evolutionary invasion analysis, we develop a universal framework for describing any origin story for evolutionary dynamics. We find that cooperative autocatalysts, i.e. autocatalysts whose per-unit reproductive rate grows as their population increases, have the special property of being able to cross a barrier that separates their initial degradation-dominated state from a growth-dominated state with evolutionary dynamics. For some model parameters, this leap to persistent propagation is likely, not rare. We apply this analysis to the Foldcat Mechanism, wherein peptides fold and help catalyze the elongation of each other. Foldcats are found to have cooperative autocatalysis and be capable of emergent evolutionary dynamics.
2404.05379
Mubasher Rashid
Akriti Srivastava and Mubasher Rashid
Logic-dependent emergence of multistability, hysteresis, and biphasic dynamics in a minimal positive feedback network with an autoloop
null
null
null
null
q-bio.MN math.DS q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Cellular decision-making (CDM) is a dynamic phenomenon often controlled by regulatory networks defining interactions between genes and transcription factor proteins. Traditional studies have focussed on molecular switches such as positive feedback circuits that exhibit at most bistability. However, higher-order dynamics such as tristability is also prominent in many biological processes. It is thus imperative to identify a minimal circuit that can alone explain mono-, bi-, and tristable dynamics. In this work, we consider a two-component positive feedback network with an autoloop and explore these regimes of stability for different degrees of multimerization and the choice of Boolean logic functions. We report that this network can exhibit numerous dynamical scenarios such as bi- and tristability, hysteresis, and biphasic kinetics, explaining the possibilities of abrupt cell state transitions and the smooth state swap without a step-like switch. Specifically, while with monomeric regulation and competitive OR logic, the circuit exhibits mono- and bistability and biphasic dynamics, with non-competitive AND and OR logics only monostability can be achieved. To obtain bistability in the latter cases, we show that the autoloop must have (at least) dimeric regulation. In pursuit of higher-order stability, we show that tristability occurs with higher degrees of multimerization and with non-competitive OR logic only. Our results, backed by rigorous analytical calculations and numerical examples, thus explain the association between multistability, multimerization, and logic in this minimal circuit. Since this circuit underlies various biological processes, including epithelial-mesenchymal transition which often drives carcinoma metastasis, these results can thus offer crucial inputs to control cell state transition by manipulating multimerization and the logic of regulation in cells.
[ { "created": "Mon, 8 Apr 2024 10:34:48 GMT", "version": "v1" } ]
2024-04-09
[ [ "Srivastava", "Akriti", "" ], [ "Rashid", "Mubasher", "" ] ]
Cellular decision-making (CDM) is a dynamic phenomenon often controlled by regulatory networks defining interactions between genes and transcription factor proteins. Traditional studies have focussed on molecular switches such as positive feedback circuits that exhibit at most bistability. However, higher-order dynamics such as tristability is also prominent in many biological processes. It is thus imperative to identify a minimal circuit that can alone explain mono-, bi-, and tristable dynamics. In this work, we consider a two-component positive feedback network with an autoloop and explore these regimes of stability for different degrees of multimerization and the choice of Boolean logic functions. We report that this network can exhibit numerous dynamical scenarios such as bi- and tristability, hysteresis, and biphasic kinetics, explaining the possibilities of abrupt cell state transitions and the smooth state swap without a step-like switch. Specifically, while with monomeric regulation and competitive OR logic, the circuit exhibits mono- and bistability and biphasic dynamics, with non-competitive AND and OR logics only monostability can be achieved. To obtain bistability in the latter cases, we show that the autoloop must have (at least) dimeric regulation. In pursuit of higher-order stability, we show that tristability occurs with higher degrees of multimerization and with non-competitive OR logic only. Our results, backed by rigorous analytical calculations and numerical examples, thus explain the association between multistability, multimerization, and logic in this minimal circuit. Since this circuit underlies various biological processes, including epithelial-mesenchymal transition which often drives carcinoma metastasis, these results can thus offer crucial inputs to control cell state transition by manipulating multimerization and the logic of regulation in cells.
q-bio/0412016
Christopher Moseley
Christopher Moseley and Klaus Ziegler
Correlations in Systems of Complex Directed Macromolecules
9 pages, 4 figures
J. Phys.: Condens. Matter 17 (2005) S1809-S1816
10.1088/0953-8984/17/20/010
null
q-bio.BM
null
An ensemble of directed macromolecules on a lattice is considered, where the constituting molecules are chosen as a random sequence of N different types. The same type of molecules experiences a hard-core (exclusion) interaction. We study the robustness of the macromolecules with respect to breaking and substituting individual molecules, using a 1/N expansion. The properties depend strongly on the density of macromolecules. In particular, the macromolecules are robust against breaking and substituting at high densities.
[ { "created": "Thu, 9 Dec 2004 15:15:03 GMT", "version": "v1" }, { "created": "Fri, 29 Jul 2005 12:01:15 GMT", "version": "v2" } ]
2009-11-10
[ [ "Moseley", "Christopher", "" ], [ "Ziegler", "Klaus", "" ] ]
An ensemble of directed macromolecules on a lattice is considered, where the constituting molecules are chosen as a random sequence of N different types. The same type of molecules experiences a hard-core (exclusion) interaction. We study the robustness of the macromolecules with respect to breaking and substituting individual molecules, using a 1/N expansion. The properties depend strongly on the density of macromolecules. In particular, the macromolecules are robust against breaking and substituting at high densities.
1206.3537
Yu Hu
Yu Hu, James Trousdale, Kresimir Josic, Eric Shea-Brown
Motif Statistics and Spike Correlations in Neuronal Networks
null
null
10.1088/1742-5468/2013/03/P03012
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motifs are patterns of subgraphs of complex networks. We studied the impact of such patterns of connectivity on the level of correlated, or synchronized, spiking activity among pairs of cells in a recurrent network model of integrate-and-fire neurons. For a range of network architectures, we find that the pairwise correlation coefficients, averaged across the network, can be closely approximated using only three statistics of network connectivity. These are the overall network connection probability and the frequencies of two second-order motifs: diverging motifs, in which one cell provides input to two others, and chain motifs, in which two cells are connected via a third intermediary cell. Specifically, the prevalence of diverging and chain motifs tends to increase correlation. Our method is based on linear response theory, which enables us to express spiking statistics using linear algebra, and a resumming technique, which extrapolates from second-order motifs to predict the overall effect of coupling on network correlation. Our motif-based results seek to isolate the effect of network architecture perturbatively from a known network state.
[ { "created": "Fri, 15 Jun 2012 18:34:43 GMT", "version": "v1" } ]
2015-06-05
[ [ "Hu", "Yu", "" ], [ "Trousdale", "James", "" ], [ "Josic", "Kresimir", "" ], [ "Shea-Brown", "Eric", "" ] ]
Motifs are patterns of subgraphs of complex networks. We studied the impact of such patterns of connectivity on the level of correlated, or synchronized, spiking activity among pairs of cells in a recurrent network model of integrate-and-fire neurons. For a range of network architectures, we find that the pairwise correlation coefficients, averaged across the network, can be closely approximated using only three statistics of network connectivity. These are the overall network connection probability and the frequencies of two second-order motifs: diverging motifs, in which one cell provides input to two others, and chain motifs, in which two cells are connected via a third intermediary cell. Specifically, the prevalence of diverging and chain motifs tends to increase correlation. Our method is based on linear response theory, which enables us to express spiking statistics using linear algebra, and a resumming technique, which extrapolates from second-order motifs to predict the overall effect of coupling on network correlation. Our motif-based results seek to isolate the effect of network architecture perturbatively from a known network state.
1509.04304
Enzo Tagliazucchi
Enzo Tagliazucchi, Dante R. Chialvo, Michael Siniatchkin, Jean-Francois Brichant, Vincent Bonhomme, Quentin Noirhomme, Helmut Laufs, Steven Laureys
Large-scale signatures of unconsciousness are consistent with a departure from critical dynamics
to appear in Journal of the Royal Society Interface
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Loss of cortical integration and changes in the dynamics of electrophysiological brain signals characterize the transition from wakefulness towards unconsciousness. The common mechanism underlying these observations remains unknown. In this study we arrive at a basic model, which explains these empirical observations based on the theory of phase transitions in complex systems. We studied the link between spatial and temporal correlations of large-scale brain activity recorded with functional magnetic resonance imaging during wakefulness, propofol-induced sedation and loss of consciousness, as well as during the subsequent recovery. We observed that during unconsciousness activity in frontal and thalamic regions exhibited a reduction of long-range temporal correlations and a departure of functional connectivity from the underlying anatomical constraints. These changes in dynamics and anatomy-function coupling were correlated across participants, suggesting that temporal complexity and an efficient exploration of anatomical connectivity are inter-related phenomena. A model of a system exhibiting a phase transition reproduced our findings, as well as the diminished sensitivity of the cortex to external perturbations during unconsciousness. This theoretical framework unifies different empirical observations about brain activity during unconsciousness and predicts that the principles we identified are universal and independent of the causes behind loss of awareness.
[ { "created": "Mon, 14 Sep 2015 20:23:37 GMT", "version": "v1" }, { "created": "Wed, 23 Dec 2015 16:48:00 GMT", "version": "v2" } ]
2015-12-24
[ [ "Tagliazucchi", "Enzo", "" ], [ "Chialvo", "Dante R.", "" ], [ "Siniatchkin", "Michael", "" ], [ "Brichant", "Jean-Francois", "" ], [ "Bonhomme", "Vincent", "" ], [ "Noirhomme", "Quentin", "" ], [ "Laufs", "Hel...
Loss of cortical integration and changes in the dynamics of electrophysiological brain signals characterize the transition from wakefulness towards unconsciousness. The common mechanism underlying these observations remains unknown. In this study we arrive at a basic model, which explains these empirical observations based on the theory of phase transitions in complex systems. We studied the link between spatial and temporal correlations of large-scale brain activity recorded with functional magnetic resonance imaging during wakefulness, propofol-induced sedation and loss of consciousness, as well as during the subsequent recovery. We observed that during unconsciousness activity in frontal and thalamic regions exhibited a reduction of long-range temporal correlations and a departure of functional connectivity from the underlying anatomical constraints. These changes in dynamics and anatomy-function coupling were correlated across participants, suggesting that temporal complexity and an efficient exploration of anatomical connectivity are inter-related phenomena. A model of a system exhibiting a phase transition reproduced our findings, as well as the diminished sensitivity of the cortex to external perturbations during unconsciousness. This theoretical framework unifies different empirical observations about brain activity during unconsciousness and predicts that the principles we identified are universal and independent of the causes behind loss of awareness.
1808.08375
Bidyut Mallick
Bidyut Mallick
Molecular dynamics simulations reveal the role of ceramicine B as novel PPAR{\gamma} partial agonist against type 2 diabetes
null
null
null
null
q-bio.BM
http://creativecommons.org/publicdomain/zero/1.0/
Peroxisome proliferator-activated receptor gamma (PPAR{\gamma}) is a ligand-activated controller of various metabolic actions and insulin sensitivity. PPAR{\gamma} is thus considered an important target to treat type 2 diabetes. Available PPAR{\gamma} drugs (full agonists) have robust insulin-sensitizing properties but are accompanied by severe side effects leading to complicated health problems. Here, we have used molecular docking and a molecular dynamics simulation study to find a novel PPAR{\gamma} ligand from a natural product. Our study suggests that ceramicine B, bound in the PPAR{\gamma} ligand-binding domain (LBD), could act as a partial agonist and block cdk5-mediated phosphorylation. This result may provide an opportunity for the development of new anti-diabetic drugs by targeting PPAR{\gamma} while avoiding the side effects associated with full agonists.
[ { "created": "Sat, 25 Aug 2018 08:25:21 GMT", "version": "v1" } ]
2018-08-28
[ [ "Mallick", "Bidyut", "" ] ]
Peroxisome proliferator-activated receptor gamma (PPAR{\gamma}) is a ligand-activated controller of various metabolic actions and insulin sensitivity. PPAR{\gamma} is thus considered an important target to treat type 2 diabetes. Available PPAR{\gamma} drugs (full agonists) have robust insulin-sensitizing properties but are accompanied by severe side effects leading to complicated health problems. Here, we have used molecular docking and a molecular dynamics simulation study to find a novel PPAR{\gamma} ligand from a natural product. Our study suggests that ceramicine B, bound in the PPAR{\gamma} ligand-binding domain (LBD), could act as a partial agonist and block cdk5-mediated phosphorylation. This result may provide an opportunity for the development of new anti-diabetic drugs by targeting PPAR{\gamma} while avoiding the side effects associated with full agonists.
1201.0156
Felipe Caycedo-Soler PhD
Felipe Caycedo-Soler, Alex W. Chin, Javier Almeida, Susana F. Huelga and Martin B. Plenio
The nature of the low energy band of the Fenna-Matthews-Olson complex: vibronic signatures
14 pages, 6 figures
J. Chem. Phys. 136, 155102 (2012)
10.1063/1.3703504
null
q-bio.BM cond-mat.soft quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Based entirely upon actual experimental observations on electron-phonon coupling, we develop a theoretical framework to show that the lowest energy band of the Fenna-Matthews-Olson (FMO) complex exhibits observable features due to the quantum nature of the vibrational manifolds present in its chromophores. The study of linear spectra provides us with the basis to understand the dynamical features arising from the vibronic structure in non-linear spectra in a progressive fashion, starting from a microscopic model to finally performing an inhomogeneous average. We show that the discreteness of the vibronic structure can be witnessed by probing the diagonal peaks of the non-linear spectra by means of a relative phase shift in the waiting time resolved signal. Moreover, we demonstrate that the photon-echo and non-rephasing paths are sensitive to different harmonics in the vibrational manifold when static disorder is taken into account. Supported by analytical and numerical calculations, we show that nondiagonal resonances in the 2D spectra in the waiting time further capture the discreteness of vibrations through a modulation of the amplitude without any effect on the signal's intrinsic frequency. This fact generates a signal that is highly sensitive to correlations in the static disorder of the excitonic energy albeit protected against dephasing due to inhomogeneities of the vibrational ensemble.
[ { "created": "Fri, 30 Dec 2011 17:21:09 GMT", "version": "v1" }, { "created": "Sat, 19 May 2012 14:12:13 GMT", "version": "v2" } ]
2012-05-22
[ [ "Caycedo-Soler", "Felipe", "" ], [ "Chin", "Alex W.", "" ], [ "Almeida", "Javier", "" ], [ "Huelga", "Susana F.", "" ], [ "Plenio", "Martin B.", "" ] ]
Based entirely upon actual experimental observations on electron-phonon coupling, we develop a theoretical framework to show that the lowest energy band of the Fenna-Matthews-Olson (FMO) complex exhibits observable features due to the quantum nature of the vibrational manifolds present in its chromophores. The study of linear spectra provides us with the basis to understand the dynamical features arising from the vibronic structure in non-linear spectra in a progressive fashion, starting from a microscopic model to finally performing an inhomogeneous average. We show that the discreteness of the vibronic structure can be witnessed by probing the diagonal peaks of the non-linear spectra by means of a relative phase shift in the waiting time resolved signal. Moreover, we demonstrate that the photon-echo and non-rephasing paths are sensitive to different harmonics in the vibrational manifold when static disorder is taken into account. Supported by analytical and numerical calculations, we show that nondiagonal resonances in the 2D spectra in the waiting time further capture the discreteness of vibrations through a modulation of the amplitude without any effect on the signal's intrinsic frequency. This fact generates a signal that is highly sensitive to correlations in the static disorder of the excitonic energy albeit protected against dephasing due to inhomogeneities of the vibrational ensemble.
2003.06758
Alfonso Vivanco Lira
Alfonso Vivanco-Lira
Predicting COVID-19 distribution in Mexico through a discrete and time-dependent Markov chain and an SIR-like model
19 pages, 5 figures, 1 table
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
COVID-19 is an emergent viral infection which arose in December 2019 in Wuhan, a city in the Chinese province of Hubei; the viral aetiology of this infection is now known as COVID-19 virus, which belongs to the Betacoronavirus genus. This virus produces the syndrome of acute respiratory stress that has been witnessed in other coronaviruses, such as MERS-CoV in Middle East countries or SARS-CoV, which was seen in 2002 and 2003 in China. This virus mediates its entry through its spike (S) proteins interacting with ACE2 receptors in lung epithelial cells, and may promote an inflammatory response by means of inflammasome NLRP3 activation and the unfolded protein response (these are possibly consequences of the envelope E protein of COVID-19 virus). Efforts have been made worldwide to prevent further spread of the disease, but in March 2020 the WHO declared it a pandemic emergency and Mexico started to report its first cases. In this paper we attempt to summarize the biological features of the virus and the possible pathophysiological mechanisms of its disease, as well as a stochastic model characterizing the probability distribution of cases in Mexico by states and the estimated number of cases in Mexico through a differential equation model (modified SIR model); thus we will be able to characterize the disease and its course in Mexico in order to display more preparedness and promote more logical actions by both the policy makers and the general population.
[ { "created": "Sun, 15 Mar 2020 05:20:44 GMT", "version": "v1" } ]
2020-03-17
[ [ "Vivanco-Lira", "Alfonso", "" ] ]
COVID-19 is an emergent viral infection which arose in December 2019 in Wuhan, a city in the Chinese province of Hubei; the viral aetiology of this infection is now known as COVID-19 virus, which belongs to the Betacoronavirus genus. This virus produces the syndrome of acute respiratory stress that has been witnessed in other coronaviruses, such as MERS-CoV in Middle East countries or SARS-CoV, which was seen in 2002 and 2003 in China. This virus mediates its entry through its spike (S) proteins interacting with ACE2 receptors in lung epithelial cells, and may promote an inflammatory response by means of inflammasome NLRP3 activation and the unfolded protein response (these are possibly consequences of the envelope E protein of COVID-19 virus). Efforts have been made worldwide to prevent further spread of the disease, but in March 2020 the WHO declared it a pandemic emergency and Mexico started to report its first cases. In this paper we attempt to summarize the biological features of the virus and the possible pathophysiological mechanisms of its disease, as well as a stochastic model characterizing the probability distribution of cases in Mexico by states and the estimated number of cases in Mexico through a differential equation model (modified SIR model); thus we will be able to characterize the disease and its course in Mexico in order to display more preparedness and promote more logical actions by both the policy makers and the general population.
2203.06122
Haocheng Dai
Haocheng Dai, Martin Bauer, P. Thomas Fletcher, Sarang Joshi
Modeling the Shape of the Brain Connectome via Deep Neural Networks
12 pages, 5 figures
null
null
null
q-bio.NC cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of diffusion-weighted magnetic resonance imaging (DWI) is to infer the structural connectivity of an individual subject's brain in vivo. To statistically study the variability and differences between normal and abnormal brain connectomes, a mathematical model of the neural connections is required. In this paper, we represent the brain connectome as a Riemannian manifold, which allows us to model neural connections as geodesics. This leads to the challenging problem of estimating a Riemannian metric that is compatible with the DWI data, i.e., a metric such that the geodesic curves represent individual fiber tracts of the connectomics. We reduce this problem to that of solving a highly nonlinear set of partial differential equations (PDEs) and study the applicability of convolutional encoder-decoder neural networks (CEDNNs) for solving this geometrically motivated PDE. Our method achieves excellent performance in the alignment of geodesics with white matter pathways and tackles a long-standing issue in previous geodesic tractography methods: the inability to recover crossing fibers with high fidelity.
[ { "created": "Sun, 6 Mar 2022 17:51:31 GMT", "version": "v1" }, { "created": "Fri, 3 Mar 2023 16:02:43 GMT", "version": "v2" } ]
2023-03-06
[ [ "Dai", "Haocheng", "" ], [ "Bauer", "Martin", "" ], [ "Fletcher", "P. Thomas", "" ], [ "Joshi", "Sarang", "" ] ]
The goal of diffusion-weighted magnetic resonance imaging (DWI) is to infer the structural connectivity of an individual subject's brain in vivo. To statistically study the variability and differences between normal and abnormal brain connectomes, a mathematical model of the neural connections is required. In this paper, we represent the brain connectome as a Riemannian manifold, which allows us to model neural connections as geodesics. This leads to the challenging problem of estimating a Riemannian metric that is compatible with the DWI data, i.e., a metric such that the geodesic curves represent individual fiber tracts of the connectomics. We reduce this problem to that of solving a highly nonlinear set of partial differential equations (PDEs) and study the applicability of convolutional encoder-decoder neural networks (CEDNNs) for solving this geometrically motivated PDE. Our method achieves excellent performance in the alignment of geodesics with white matter pathways and tackles a long-standing issue in previous geodesic tractography methods: the inability to recover crossing fibers with high fidelity.
1501.03205
Frazer Meacham
Frazer Meacham and Carl T. Bergstrom
Adaptive behavior can produce maladaptive anxiety due to individual differences in experience
null
null
null
null
q-bio.PE q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Normal anxiety is considered an adaptive response to the possible presence of danger, but is susceptible to dysregulation. Anxiety disorders occur at high frequency in contemporary human societies, yet impose substantial disability upon their sufferers. This raises a puzzle: why has evolution left us vulnerable to anxiety disorders? We develop a signal detection model in which individuals must learn how to calibrate their anxiety responses: they need to learn which cues indicate danger in the environment. We derive the optimal strategy for doing so, and find that individuals face an inevitable exploration-exploitation tradeoff between obtaining a better estimate of the level of risk on one hand, and maximizing current payoffs on the other. Because of this tradeoff, a subset of the population can become trapped in a state of self-perpetuating over-sensitivity to threatening stimuli, even when individuals learn optimally. This phenomenon arises because when individuals become too cautious, they stop sampling the environment and fail to correct their misperceptions, whereas when individuals become too careless they continue to sample the environment and soon discover their mistakes. Thus, over-sensitivity to threats becomes common whereas under-sensitivity becomes rare. We suggest that this process may be involved in the development of excessive anxiety in humans.
[ { "created": "Tue, 13 Jan 2015 23:39:37 GMT", "version": "v1" }, { "created": "Thu, 31 Dec 2015 05:17:27 GMT", "version": "v2" }, { "created": "Tue, 7 Jun 2016 19:39:38 GMT", "version": "v3" } ]
2016-06-08
[ [ "Meacham", "Frazer", "" ], [ "Bergstrom", "Carl T.", "" ] ]
Normal anxiety is considered an adaptive response to the possible presence of danger, but is susceptible to dysregulation. Anxiety disorders occur at high frequency in contemporary human societies, yet impose substantial disability upon their sufferers. This raises a puzzle: why has evolution left us vulnerable to anxiety disorders? We develop a signal detection model in which individuals must learn how to calibrate their anxiety responses: they need to learn which cues indicate danger in the environment. We derive the optimal strategy for doing so, and find that individuals face an inevitable exploration-exploitation tradeoff between obtaining a better estimate of the level of risk on one hand, and maximizing current payoffs on the other. Because of this tradeoff, a subset of the population can become trapped in a state of self-perpetuating over-sensitivity to threatening stimuli, even when individuals learn optimally. This phenomenon arises because when individuals become too cautious, they stop sampling the environment and fail to correct their misperceptions, whereas when individuals become too careless they continue to sample the environment and soon discover their mistakes. Thus, over-sensitivity to threats becomes common whereas under-sensitivity becomes rare. We suggest that this process may be involved in the development of excessive anxiety in humans.
1605.09713
Anna Zanzottera
Davide Ambrosi and Anna Zanzottera
Mechanics and polarity in cell motility
null
null
10.1016/j.physd.2016.05.003
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The motility of a fish keratocyte on a flat substrate exhibits two distinct regimes: the non-migrating one and the migrating one. In both configurations the shape is fixed in time and, when the cell is moving, the velocity is constant in magnitude and direction. Transition from one stable configuration to the other can be produced by a mechanical or chemotactic perturbation. In order to point out the mechanical nature of such a bistable behaviour, we focus on the actin dynamics inside the cell using a minimal mathematical model. While protein diffusion, recruitment and segregation govern the polarization process, we show that the free actin mass balance, driven by diffusion, and the polymerized actin retrograde flow, regulated by the active stress, are sufficient ingredients to account for the motile bistability. The length and velocity of the cell are predicted on the basis of the parameters of the substrate and of the cell itself. The key physical ingredient of the theory is the exchange among actin phases at the edges of the cell, which plays a central role both in kinematics and in dynamics.
[ { "created": "Thu, 31 Mar 2016 09:54:19 GMT", "version": "v1" } ]
2016-06-29
[ [ "Ambrosi", "Davide", "" ], [ "Zanzottera", "Anna", "" ] ]
The motility of a fish keratocyte on a flat substrate exhibits two distinct regimes: the non-migrating one and the migrating one. In both configurations the shape is fixed in time and, when the cell is moving, the velocity is constant in magnitude and direction. Transition from one stable configuration to the other can be produced by a mechanical or chemotactic perturbation. In order to point out the mechanical nature of such a bistable behaviour, we focus on the actin dynamics inside the cell using a minimal mathematical model. While protein diffusion, recruitment and segregation govern the polarization process, we show that the free actin mass balance, driven by diffusion, and the polymerized actin retrograde flow, regulated by the active stress, are sufficient ingredients to account for the motile bistability. The length and velocity of the cell are predicted on the basis of the parameters of the substrate and of the cell itself. The key physical ingredient of the theory is the exchange among actin phases at the edges of the cell, which plays a central role both in kinematics and in dynamics.
2210.14642
Asu Busra Temizer
Asu B\"u\c{s}ra Temizer, G\"ok\c{c}e Uludo\u{g}an, R{\i}za \"Oz\c{c}elik, Taha Koulani, Elif Ozkirimli, Kutlu O. Ulgen, Nilg\"un Karal{\i}, Arzucan \"Ozg\"ur
Exploring Data-Driven Chemical SMILES Tokenization Approaches to Identify Key Protein-Ligand Binding Moieties
16 pages, 11 figures, new computational analysis and extended case studies
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Machine learning models have found numerous successful applications in computational drug discovery. A large body of these models represents molecules as sequences since molecular sequences are easily available, simple, and informative. The sequence-based models often segment molecular sequences into pieces called chemical words (analogous to the words that make up sentences in human languages) and then apply advanced natural language processing techniques for tasks such as $\textit{de novo}$ drug design, property prediction, and binding affinity prediction. However, the chemical characteristics and significance of these building blocks, chemical words, remain unexplored. This study aims to investigate the chemical vocabularies generated by popular subword tokenization algorithms, namely Byte Pair Encoding (BPE), WordPiece, and Unigram, and identify key chemical words associated with protein-ligand binding. To this end, we build a language-inspired pipeline that treats high affinity ligands of protein targets as documents and selects key chemical words making up those ligands based on tf-idf weighting. Further, we conduct case studies on a number of protein families to analyze the impact of key chemical words on binding. Through our analysis, we find that these key chemical words are specific to protein targets and correspond to known pharmacophores and functional groups. Our findings will help shed light on the chemistry captured by the chemical words, and by machine learning models for drug discovery at large.
[ { "created": "Wed, 26 Oct 2022 11:43:00 GMT", "version": "v1" }, { "created": "Mon, 25 Sep 2023 11:51:48 GMT", "version": "v2" } ]
2023-09-26
[ [ "Temizer", "Asu Büşra", "" ], [ "Uludoğan", "Gökçe", "" ], [ "Özçelik", "Rıza", "" ], [ "Koulani", "Taha", "" ], [ "Ozkirimli", "Elif", "" ], [ "Ulgen", "Kutlu O.", "" ], [ "Karalı", "Nilgün", "" ], [ ...
Machine learning models have found numerous successful applications in computational drug discovery. A large body of these models represents molecules as sequences since molecular sequences are easily available, simple, and informative. The sequence-based models often segment molecular sequences into pieces called chemical words (analogous to the words that make up sentences in human languages) and then apply advanced natural language processing techniques for tasks such as $\textit{de novo}$ drug design, property prediction, and binding affinity prediction. However, the chemical characteristics and significance of these building blocks, chemical words, remain unexplored. This study aims to investigate the chemical vocabularies generated by popular subword tokenization algorithms, namely Byte Pair Encoding (BPE), WordPiece, and Unigram, and identify key chemical words associated with protein-ligand binding. To this end, we build a language-inspired pipeline that treats high affinity ligands of protein targets as documents and selects key chemical words making up those ligands based on tf-idf weighting. Further, we conduct case studies on a number of protein families to analyze the impact of key chemical words on binding. Through our analysis, we find that these key chemical words are specific to protein targets and correspond to known pharmacophores and functional groups. Our findings will help shed light on the chemistry captured by the chemical words, and by machine learning models for drug discovery at large.
2311.11046
Roberto Goya-Maldonado
Vladimir Belov, Tracy Erwin-Grabner, Ling-Li Zeng, Christopher R. K. Ching, Andre Aleman, Alyssa R. Amod, Zeynep Basgoze, Francesco Benedetti, Bianca Besteher, Katharina Brosch, Robin B\"ulow, Romain Colle, Colm G. Connolly, Emmanuelle Corruble, Baptiste Couvy-Duchesne, Kathryn Cullen, Udo Dannlowski, Christopher G. Davey, Annemiek Dols, Jan Ernsting, Jennifer W. Evans, Lukas Fisch, Paola Fuentes-Claramonte, Ali Saffet Gonul, Ian H. Gotlib, Hans J. Grabe, Nynke A. Groenewold, Dominik Grotegerd, Tim Hahn, J. Paul Hamilton, Laura K.M. Han, Ben J Harrison, Tiffany C. Ho, Neda Jahanshad, Alec J. Jamieson, Andriana Karuk, Tilo Kircher, Bonnie Klimes-Dougan, Sheri-Michelle Koopowitz, Thomas Lancaster, Ramona Leenings, Meng Li, David E. J. Linden, Frank P. MacMaster, David M. A. Mehler, Susanne Meinert, Elisa Melloni, Bryon A. Mueller, Benson Mwangi, Igor Nenadi\'c, Amar Ojha, Yasumasa Okamoto, Mardien L. Oudega, Brenda W. J. H. Penninx, Sara Poletti, Edith Pomarol-Clotet, Maria J. Portella, Elena Pozzi, Joaquim Radua, Elena Rodr\'iguez-Cano, Matthew D. Sacchet, Raymond Salvador, Anouk Schrantee, Kang Sim, Jair C. Soares, Aleix Solanes, Dan J. Stein, Frederike Stein, Aleks Stolicyn, Sophia I. Thomopoulos, Yara J. Toenders, Aslihan Uyar-Demir, Eduard Vieta, Yolanda Vives-Gilabert, Henry V\"olzke, Martin Walter, Heather C. Whalley, Sarah Whittle, Nils Winter, Katharina Wittfeld, Margaret J. Wright, Mon-Ju Wu, Tony T. Yang, Carlos Zarate, Dick J. Veltman, Lianne Schmaal, Paul M. Thompson, Roberto Goya-Maldonado
DenseNet and Support Vector Machine classifications of major depressive disorder using vertex-wise cortical features
null
null
null
null
q-bio.QM cs.LG q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Major depressive disorder (MDD) is a complex psychiatric disorder that affects the lives of hundreds of millions of individuals around the globe. Even today, researchers debate if morphological alterations in the brain are linked to MDD, likely due to the heterogeneity of this disorder. The application of deep learning tools to neuroimaging data, capable of capturing complex non-linear patterns, has the potential to provide diagnostic and predictive biomarkers for MDD. However, previous attempts to demarcate MDD patients and healthy controls (HC) based on segmented cortical features via linear machine learning approaches have reported low accuracies. In this study, we used globally representative data from the ENIGMA-MDD working group containing an extensive sample of people with MDD (N=2,772) and HC (N=4,240), which allows a comprehensive analysis with generalizable results. Based on the hypothesis that integration of vertex-wise cortical features can improve classification performance, we evaluated the classification of a DenseNet and a Support Vector Machine (SVM), with the expectation that the former would outperform the latter. As we analyzed a multi-site sample, we additionally applied the ComBat harmonization tool to remove potential nuisance effects of site. We found that both classifiers exhibited close to chance performance (balanced accuracy DenseNet: 51%; SVM: 53%), when estimated on unseen sites. Slightly higher classification performance (balanced accuracy DenseNet: 58%; SVM: 55%) was found when the cross-validation folds contained subjects from all sites, indicating site effect. In conclusion, the integration of vertex-wise morphometric features and the use of the non-linear classifier did not lead to the differentiability between MDD and HC. Our results support the notion that MDD classification on this combination of features and classifiers is unfeasible.
[ { "created": "Sat, 18 Nov 2023 11:46:25 GMT", "version": "v1" } ]
2023-11-21
[ [ "Belov", "Vladimir", "" ], [ "Erwin-Grabner", "Tracy", "" ], [ "Zeng", "Ling-Li", "" ], [ "Ching", "Christopher R. K.", "" ], [ "Aleman", "Andre", "" ], [ "Amod", "Alyssa R.", "" ], [ "Basgoze", "Zeynep", "...
Major depressive disorder (MDD) is a complex psychiatric disorder that affects the lives of hundreds of millions of individuals around the globe. Even today, researchers debate if morphological alterations in the brain are linked to MDD, likely due to the heterogeneity of this disorder. The application of deep learning tools to neuroimaging data, capable of capturing complex non-linear patterns, has the potential to provide diagnostic and predictive biomarkers for MDD. However, previous attempts to demarcate MDD patients and healthy controls (HC) based on segmented cortical features via linear machine learning approaches have reported low accuracies. In this study, we used globally representative data from the ENIGMA-MDD working group containing an extensive sample of people with MDD (N=2,772) and HC (N=4,240), which allows a comprehensive analysis with generalizable results. Based on the hypothesis that integration of vertex-wise cortical features can improve classification performance, we evaluated the classification of a DenseNet and a Support Vector Machine (SVM), with the expectation that the former would outperform the latter. As we analyzed a multi-site sample, we additionally applied the ComBat harmonization tool to remove potential nuisance effects of site. We found that both classifiers exhibited close to chance performance (balanced accuracy DenseNet: 51%; SVM: 53%), when estimated on unseen sites. Slightly higher classification performance (balanced accuracy DenseNet: 58%; SVM: 55%) was found when the cross-validation folds contained subjects from all sites, indicating site effect. In conclusion, the integration of vertex-wise morphometric features and the use of the non-linear classifier did not lead to the differentiability between MDD and HC. Our results support the notion that MDD classification on this combination of features and classifiers is unfeasible.
0902.0089
Yuriy Shckorbatov G
Yuriy Shckorbatov, Valeriy Samokhvalov, Dariya Bevziuk, Maxim Kovaliov
Changes in chromatin state in donors subjected to physical stress
8 pages, 3 figures, 4 tables
null
null
null
q-bio.CB q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The purpose of the present study is to evaluate changes in chromatin of human buccal epithelium under the influence of stressing factor - dosed physical activity. Investigations were performed in a group of students (13 men) of age 19-23. Cells were stained on a slide by a 2% orcein solution in 45% acetic acid during 1 h. The following physiological indexes were determined: arterial blood pressure, pulse frequency, and frequency of breathing. The physical stress produced by the dosed physical activity causes the considerable increase of degree of heterochromatinization in the cell nuclei of human buccal epithelium. As a rule, the level of heterochromatinization increases after first stage of training, but in some donors it increases significantly only after the second stage of training.
[ { "created": "Sat, 31 Jan 2009 21:25:59 GMT", "version": "v1" }, { "created": "Mon, 4 Jun 2012 07:48:23 GMT", "version": "v2" } ]
2012-06-05
[ [ "Shckorbatov", "Yuriy", "" ], [ "Samokhvalov", "Valeriy", "" ], [ "Bevziuk", "Dariya", "" ], [ "Kovaliov", "Maxim", "" ] ]
The purpose of the present study is to evaluate changes in chromatin of human buccal epithelium under the influence of stressing factor - dosed physical activity. Investigations were performed in a group of students (13 men) of age 19-23. Cells were stained on a slide by a 2% orcein solution in 45% acetic acid during 1 h. The following physiological indexes were determined: arterial blood pressure, pulse frequency, and frequency of breathing. The physical stress produced by the dosed physical activity causes the considerable increase of degree of heterochromatinization in the cell nuclei of human buccal epithelium. As a rule, the level of heterochromatinization increases after first stage of training, but in some donors it increases significantly only after the second stage of training.
1002.0065
W B Langdon
W. B. Langdon, Olivia Sanchez Graillet, A. P. Harrison
Automated DNA Motif Discovery
12 pages, 2 figures
null
null
null
q-bio.BM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ensembl's human non-coding and protein coding genes are used to automatically find DNA pattern motifs. The Backus-Naur form (BNF) grammar for regular expressions (RE) is used by genetic programming to ensure the generated strings are legal. The evolved motif suggests that the presence of Thymine followed by one or more Adenines etc. early in transcripts indicates a non-protein coding gene. Keywords: pseudogene, short and microRNAs, non-coding transcripts, systems biology, machine learning, Bioinformatics, motif, regular expression, strongly typed genetic programming, context-free grammar.
[ { "created": "Sat, 30 Jan 2010 12:43:55 GMT", "version": "v1" } ]
2010-02-02
[ [ "Langdon", "W. B.", "" ], [ "Graillet", "Olivia Sanchez", "" ], [ "Harrison", "A. P.", "" ] ]
Ensembl's human non-coding and protein coding genes are used to automatically find DNA pattern motifs. The Backus-Naur form (BNF) grammar for regular expressions (RE) is used by genetic programming to ensure the generated strings are legal. The evolved motif suggests that the presence of Thymine followed by one or more Adenines etc. early in transcripts indicates a non-protein coding gene. Keywords: pseudogene, short and microRNAs, non-coding transcripts, systems biology, machine learning, Bioinformatics, motif, regular expression, strongly typed genetic programming, context-free grammar.
1202.4026
Oscar Westesson
Oscar Westesson, Gerton Lunter, Benedict Paten, Ian Holmes
Accurate reconstruction of insertion-deletion histories by statistical phylogenetics
28 pages, 15 figures. arXiv admin note: text overlap with arXiv:1103.4347
null
10.1371/journal.pone.0034572
null
q-bio.PE q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Multiple Sequence Alignment (MSA) is a computational abstraction that represents a partial summary either of indel history, or of structural similarity. Taking the former view (indel history), it is possible to use formal automata theory to generalize the phylogenetic likelihood framework for finite substitution models (Dayhoff's probability matrices and Felsenstein's pruning algorithm) to arbitrary-length sequences. In this paper, we report results of a simulation-based benchmark of several methods for reconstruction of indel history. The methods tested include a relatively new algorithm for statistical marginalization of MSAs that sums over a stochastically-sampled ensemble of the most probable evolutionary histories. For mammalian evolutionary parameters on several different trees, the single most likely history sampled by our algorithm appears less biased than histories reconstructed by other MSA methods. The algorithm can also be used for alignment-free inference, where the MSA is explicitly summed out of the analysis. As an illustration of our method, we discuss reconstruction of the evolutionary histories of human protein-coding genes.
[ { "created": "Fri, 17 Feb 2012 21:33:37 GMT", "version": "v1" } ]
2015-06-04
[ [ "Westesson", "Oscar", "" ], [ "Lunter", "Gerton", "" ], [ "Paten", "Benedict", "" ], [ "Holmes", "Ian", "" ] ]
The Multiple Sequence Alignment (MSA) is a computational abstraction that represents a partial summary either of indel history, or of structural similarity. Taking the former view (indel history), it is possible to use formal automata theory to generalize the phylogenetic likelihood framework for finite substitution models (Dayhoff's probability matrices and Felsenstein's pruning algorithm) to arbitrary-length sequences. In this paper, we report results of a simulation-based benchmark of several methods for reconstruction of indel history. The methods tested include a relatively new algorithm for statistical marginalization of MSAs that sums over a stochastically-sampled ensemble of the most probable evolutionary histories. For mammalian evolutionary parameters on several different trees, the single most likely history sampled by our algorithm appears less biased than histories reconstructed by other MSA methods. The algorithm can also be used for alignment-free inference, where the MSA is explicitly summed out of the analysis. As an illustration of our method, we discuss reconstruction of the evolutionary histories of human protein-coding genes.
1701.00096
J\'er\'emy Guillon
Jeremy Guillon, Yohan Attal, Olivier Colliot, Valentina La Corte, Bruno Dubois, Denis Schwartz, Mario Chavez, Fabrizio De Vico Fallani
Loss of brain inter-frequency hubs in Alzheimer's disease
27 pages, 6 figures, 3 tables, 3 supplementary figures
Scientific Reports, 7:10879, 2017
10.1038/s41598-017-07846-w
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Alzheimer's disease (AD) causes alterations of brain network structure and function. The latter consists of connectivity changes between oscillatory processes at different frequency channels. We proposed a multi-layer network approach to analyze multiple-frequency brain networks inferred from magnetoencephalographic recordings during resting-states in AD subjects and age-matched controls. Main results showed that brain networks tend to facilitate information propagation across different frequencies, as measured by the multi-participation coefficient (MPC). However, regional connectivity in AD subjects was abnormally distributed across frequency bands as compared to controls, causing significant decreases of MPC. This effect was mainly localized in association areas and in the cingulate cortex, which acted, in the healthy group, as a true inter-frequency hub. MPC values significantly correlated with memory impairment of AD subjects, as measured by the total recall score. Most predictive regions belonged to components of the default-mode network that are typically affected by atrophy, metabolism disruption and amyloid-beta deposition. We evaluated the diagnostic power of the MPC and we showed that it led to increased classification accuracy (78.39%) and sensitivity (91.11%). These findings shed new light on the brain functional alterations underlying AD and provide analytical tools for identifying multi-frequency neural mechanisms of brain diseases.
[ { "created": "Sat, 31 Dec 2016 12:29:14 GMT", "version": "v1" }, { "created": "Tue, 3 Jan 2017 15:36:51 GMT", "version": "v2" }, { "created": "Wed, 8 Mar 2017 17:31:01 GMT", "version": "v3" }, { "created": "Fri, 10 Mar 2017 17:06:53 GMT", "version": "v4" } ]
2017-10-17
[ [ "Guillon", "Jeremy", "" ], [ "Attal", "Yohan", "" ], [ "Colliot", "Olivier", "" ], [ "La Corte", "Valentina", "" ], [ "Dubois", "Bruno", "" ], [ "Schwartz", "Denis", "" ], [ "Chavez", "Mario", "" ], [ ...
Alzheimer's disease (AD) causes alterations of brain network structure and function. The latter consists of connectivity changes between oscillatory processes at different frequency channels. We proposed a multi-layer network approach to analyze multiple-frequency brain networks inferred from magnetoencephalographic recordings during resting-states in AD subjects and age-matched controls. Main results showed that brain networks tend to facilitate information propagation across different frequencies, as measured by the multi-participation coefficient (MPC). However, regional connectivity in AD subjects was abnormally distributed across frequency bands as compared to controls, causing significant decreases of MPC. This effect was mainly localized in association areas and in the cingulate cortex, which acted, in the healthy group, as a true inter-frequency hub. MPC values significantly correlated with memory impairment of AD subjects, as measured by the total recall score. Most predictive regions belonged to components of the default-mode network that are typically affected by atrophy, metabolism disruption and amyloid-beta deposition. We evaluated the diagnostic power of the MPC and we showed that it led to increased classification accuracy (78.39%) and sensitivity (91.11%). These findings shed new light on the brain functional alterations underlying AD and provide analytical tools for identifying multi-frequency neural mechanisms of brain diseases.
0804.4834
Raphael Plasson
Raphael Plasson and Hugues Bersini
Energetic and Entropic Analysis of Mirror Symmetry Breaking Processes in a Recycled Microreversible Chemical System
12 pages, 8 figures, 2 tables
J. Phys. Chem. B, 2009, 113:3477-3490
10.1021/jp803807p
NORDITA-2008-20
q-bio.BM physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding how biological homochirality emerged remains a challenge for the researchers interested in the origin of life. During the last decades, stable non-racemic steady states of nonequilibrium chemical systems have been discussed as a possible response to this problem. In line with this framework, a description of recycled systems was provided in which stable products can be activated back to reactive compounds. The dynamical behaviour of such systems relies on the presence of a source of energy, leading to the continuous maintaining of unidirectional reaction loops. A full thermodynamic study of recycled systems, composed of microreversible reactions only, is presented here, showing how the energy is transferred and distributed through the system, leading to cycle competitions and the stabilization of asymmetric states.
[ { "created": "Wed, 30 Apr 2008 14:40:14 GMT", "version": "v1" }, { "created": "Fri, 14 Nov 2008 11:10:05 GMT", "version": "v2" }, { "created": "Mon, 20 Apr 2009 10:42:13 GMT", "version": "v3" } ]
2010-06-15
[ [ "Plasson", "Raphael", "" ], [ "Bersini", "Hugues", "" ] ]
Understanding how biological homochirality emerged remains a challenge for the researchers interested in the origin of life. During the last decades, stable non-racemic steady states of nonequilibrium chemical systems have been discussed as a possible response to this problem. In line with this framework, a description of recycled systems was provided in which stable products can be activated back to reactive compounds. The dynamical behaviour of such systems relies on the presence of a source of energy, leading to the continuous maintaining of unidirectional reaction loops. A full thermodynamic study of recycled systems, composed of microreversible reactions only, is presented here, showing how the energy is transferred and distributed through the system, leading to cycle competitions and the stabilization of asymmetric states.
1705.01643
Haley Clark
Haley D. Clark, Stefan A. Reinsberg, Vitali Moiseenko, Jonn Wu, and Steven D. Thomas
Prefer Nested Segmentation to Compound Segmentation
7 figures
null
null
null
q-bio.QM physics.med-ph q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Introduction: Intra-organ radiation dose sensitivity is becoming increasingly relevant in clinical radiotherapy. One method for assessment involves partitioning delineated regions of interest and comparing the relative contributions or importance to clinical outcomes. We show that an intuitive method for dividing organ contours, compound (sub-)segmentation, can unintentionally lead to sub-segments with inconsistent volumes, which will bias relative importance assessment. An improved technique, nested segmentation, is introduced and compared. Methods: Clinical radiotherapy planning parotid contours from 510 patients were segmented. Counts of radiotherapy dose matrix voxels interior to sub-segments were used to determine the equivalency of sub-segment volumes. The distribution of voxel counts within sub-segments were compared using Kolmogorov-Smirnov tests and characterized by their dispersion. Analytical solutions for 2D/3D analogues were derived and sub-segment area/volume were compared directly. Results: Both parotid and 2D/3D region of interest analogue segmentation confirmed compound segmentation intrinsically produces sub-segments with volumes that depend on the region of interest shape and selection location. Significant volume differences were observed when sub-segmenting parotid contours into 18ths, and vanishingly small sub-segments were observed when sub-segmenting into 96ths. Central sub-segments were considerably smaller than sub-segments on the periphery. Nested segmentation did not exhibit these shortcomings and produced sub-segments with equivalent volumes when dose grid and contour collinearity was addressed, even when dividing the parotid into 96ths. Nested segmentation was always faster or equivalent in runtime to compound segmentation. Conclusions: Nested segmentation is more suited than compound segmentation for analyses requiring equal weighting of sub-segments.
[ { "created": "Wed, 3 May 2017 22:38:11 GMT", "version": "v1" } ]
2017-05-08
[ [ "Clark", "Haley D.", "" ], [ "Reinsberg", "Stefan A.", "" ], [ "Moiseenko", "Vitali", "" ], [ "Wu", "Jonn", "" ], [ "Thomas", "Steven D.", "" ] ]
Introduction: Intra-organ radiation dose sensitivity is becoming increasingly relevant in clinical radiotherapy. One method for assessment involves partitioning delineated regions of interest and comparing the relative contributions or importance to clinical outcomes. We show that an intuitive method for dividing organ contours, compound (sub-)segmentation, can unintentionally lead to sub-segments with inconsistent volumes, which will bias relative importance assessment. An improved technique, nested segmentation, is introduced and compared. Methods: Clinical radiotherapy planning parotid contours from 510 patients were segmented. Counts of radiotherapy dose matrix voxels interior to sub-segments were used to determine the equivalency of sub-segment volumes. The distribution of voxel counts within sub-segments were compared using Kolmogorov-Smirnov tests and characterized by their dispersion. Analytical solutions for 2D/3D analogues were derived and sub-segment area/volume were compared directly. Results: Both parotid and 2D/3D region of interest analogue segmentation confirmed compound segmentation intrinsically produces sub-segments with volumes that depend on the region of interest shape and selection location. Significant volume differences were observed when sub-segmenting parotid contours into 18ths, and vanishingly small sub-segments were observed when sub-segmenting into 96ths. Central sub-segments were considerably smaller than sub-segments on the periphery. Nested segmentation did not exhibit these shortcomings and produced sub-segments with equivalent volumes when dose grid and contour collinearity was addressed, even when dividing the parotid into 96ths. Nested segmentation was always faster or equivalent in runtime to compound segmentation. Conclusions: Nested segmentation is more suited than compound segmentation for analyses requiring equal weighting of sub-segments.
2110.08209
Lisa Buchauer
Lisa Buchauer and Shalev Itzkovitz
cellanneal: A User-Friendly Deconvolution Software for Omics Data
3 pages; for the cellanneal python package and general documentation see https://github.com/LiBuchauer/cellanneal ; for the cellanneal graphical user interface see http://shalevlab.weizmann.ac.il/resources/
null
null
null
q-bio.QM q-bio.GN
http://creativecommons.org/licenses/by/4.0/
We introduce cellanneal, a python-based software for deconvolving bulk RNA sequencing data. cellanneal relies on the optimization of Spearman's rank correlation coefficient between experimental and computational mixture gene expression vectors using simulated annealing. cellanneal can be used as a python package or via a command line interface, but importantly also provides a simple graphical user interface which is distributed as a single executable file for user convenience. The python package is available at https://github.com/LiBuchauer/cellanneal , the graphical software can be downloaded at http://shalevlab.weizmann.ac.il/resources .
[ { "created": "Fri, 15 Oct 2021 17:14:58 GMT", "version": "v1" } ]
2021-10-18
[ [ "Buchauer", "Lisa", "" ], [ "Itzkovitz", "Shalev", "" ] ]
We introduce cellanneal, a python-based software for deconvolving bulk RNA sequencing data. cellanneal relies on the optimization of Spearman's rank correlation coefficient between experimental and computational mixture gene expression vectors using simulated annealing. cellanneal can be used as a python package or via a command line interface, but importantly also provides a simple graphical user interface which is distributed as a single executable file for user convenience. The python package is available at https://github.com/LiBuchauer/cellanneal , the graphical software can be downloaded at http://shalevlab.weizmann.ac.il/resources .
1901.04847
Hisham Al-Mubaid
Hisham Al-Mubaid, Sasikanth Potu, and M. Shenify
Determining Multifunctional Genes and Diseases in Human Using Gene Ontology
null
null
null
null
q-bio.GN cs.AI cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The study of human genes and diseases is very rewarding and can lead to improvements in healthcare, disease diagnostics and drug discovery. In this paper, we further our previous study on gene disease relationship specifically with the multifunctional genes. We investigate the multifunctional gene disease relationship based on the published molecular function annotations of genes from the Gene Ontology which is the most comprehensive source on gene functions.
[ { "created": "Fri, 11 Jan 2019 23:53:33 GMT", "version": "v1" } ]
2019-01-16
[ [ "Al-Mubaid", "Hisham", "" ], [ "Potu", "Sasikanth", "" ], [ "Shenify", "M.", "" ] ]
The study of human genes and diseases is very rewarding and can lead to improvements in healthcare, disease diagnostics and drug discovery. In this paper, we further our previous study on gene disease relationship specifically with the multifunctional genes. We investigate the multifunctional gene disease relationship based on the published molecular function annotations of genes from the Gene Ontology which is the most comprehensive source on gene functions.
2112.15528
Maxwell J. D. Ramstead
Maxwell J. D. Ramstead
The empire strikes back: Some responses to Bruineberg and colleagues
null
null
null
null
q-bio.NC physics.hist-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In their target paper, Bruineberg and colleagues provide us with a timely opportunity to discuss the formal constructs and philosophical implications of the free-energy principle. I critically discuss their proposed distinction between Pearl blankets and Friston blankets. I then critically assess the distinction between inference with a model and inference within a model in light of instrumentalist approaches to science.
[ { "created": "Fri, 31 Dec 2021 16:06:49 GMT", "version": "v1" }, { "created": "Tue, 4 Jan 2022 12:27:42 GMT", "version": "v2" } ]
2022-01-05
[ [ "Ramstead", "Maxwell J. D.", "" ] ]
In their target paper, Bruineberg and colleagues provide us with a timely opportunity to discuss the formal constructs and philosophical implications of the free-energy principle. I critically discuss their proposed distinction between Pearl blankets and Friston blankets. I then critically assess the distinction between inference with a model and inference within a model in light of instrumentalist approaches to science.
1808.07548
Elisenda Feliu
AmirHosein Sadeghimanesh, Elisenda Feliu
The multistationarity structure of networks with intermediates and a binomial core network
Final version
Bulletin of Mathematical Biology (2019) 81, 2428-2462
10.1007/s11538-019-00612-1
null
q-bio.MN math.AG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work addresses whether a reaction network, taken with mass-action kinetics, is multistationary, that is, admits more than one positive steady state in some stoichiometric compatibility class. We build on previous work on the effect that removing or adding intermediates has on multistationarity, and also on methods to detect multistationarity for networks with a binomial steady state ideal. In particular, we provide a new determinant criterion to decide whether a network is multistationary, which applies when the network obtained by removing intermediates has a binomial steady state ideal. We apply this method to easily characterize which subsets of complexes are responsible for multistationarity; this is what we call the \emph{multistationarity structure} of the network. We use our approach to compute the multistationarity structure of the $n$-site sequential distributive phosphorylation cycle for arbitrary n.
[ { "created": "Wed, 22 Aug 2018 20:09:53 GMT", "version": "v1" }, { "created": "Fri, 23 Aug 2019 19:04:24 GMT", "version": "v2" } ]
2019-08-27
[ [ "Sadeghimanesh", "AmirHosein", "" ], [ "Feliu", "Elisenda", "" ] ]
This work addresses whether a reaction network, taken with mass-action kinetics, is multistationary, that is, admits more than one positive steady state in some stoichiometric compatibility class. We build on previous work on the effect that removing or adding intermediates has on multistationarity, and also on methods to detect multistationarity for networks with a binomial steady state ideal. In particular, we provide a new determinant criterion to decide whether a network is multistationary, which applies when the network obtained by removing intermediates has a binomial steady state ideal. We apply this method to easily characterize which subsets of complexes are responsible for multistationarity; this is what we call the \emph{multistationarity structure} of the network. We use our approach to compute the multistationarity structure of the $n$-site sequential distributive phosphorylation cycle for arbitrary n.
1204.4214
Paul Krapivsky
Tibor Antal, P. L. Krapivsky
Outbreak size distributions in epidemics with multiple stages
7 pages, 2 figures; added references, final version
J. Stat. Mech. P07018 (2012)
10.1088/1742-5468/2012/07/P07018
null
q-bio.PE cond-mat.stat-mech math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple-type branching processes that model the spread of infectious diseases are investigated. In these stochastic processes, the disease goes through multiple stages before it eventually disappears. We mostly focus on the critical multistage Susceptible-Infected-Recovered (SIR) infection process. In the infinite population limit, we compute the outbreak size distributions and show that asymptotic results apply to more general multiple-type critical branching processes. Finally using heuristic arguments and simulations we establish scaling laws for a multistage SIR model in a finite population.
[ { "created": "Wed, 18 Apr 2012 21:55:55 GMT", "version": "v1" }, { "created": "Fri, 20 Jul 2012 21:58:46 GMT", "version": "v2" } ]
2015-06-04
[ [ "Antal", "Tibor", "" ], [ "Krapivsky", "P. L.", "" ] ]
Multiple-type branching processes that model the spread of infectious diseases are investigated. In these stochastic processes, the disease goes through multiple stages before it eventually disappears. We mostly focus on the critical multistage Susceptible-Infected-Recovered (SIR) infection process. In the infinite population limit, we compute the outbreak size distributions and show that asymptotic results apply to more general multiple-type critical branching processes. Finally using heuristic arguments and simulations we establish scaling laws for a multistage SIR model in a finite population.
2311.13501
Yujiang Wang
Karoline Leiberg, Timo Blattner, Bethany Little, Victor B.B. Mello, Fernanda H.P. de Moraes, Christian Rummel, Peter N. Taylor, Bruno Mota, Yujiang Wang
Multiscale cortical morphometry reveals pronounced regional and scale-dependent variations across the lifespan
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Characterising the changes in cortical morphology across the lifespan is fundamental for a range of research and clinical applications. Most studies to date have found a monotonic decrease in commonly used morphometrics, such as cortical thickness and volume, across the entire brain with increasing age. Any regional variations reported are subtle changes in the rate of decrease. However, these descriptions of morphological changes have been limited to a single length scale, or resolution. Here, we delineate the morphological changes associated with the healthy lifespan in multiscale morphometrics. Methods: Using MRI from subjects aged 6-88 years from NKI and CamCAN, we computed morphometrics at spatial scales ranging from 0.32 mm to 3 mm. We used generalised additive mixed models to account for site differences when extracting age trajectories. In a proof-of-principle application, we compared brain age estimations based on a single metric (pial surface area) computed at a single scale vs. multiple scales. Results: On the level of whole cortical hemispheres, lifespan trajectories show diverging and even opposing trends at different spatial scales, in contrast to the monotonic decreases of volume and thickness described so far. Pronounced regional differences between lobes also became apparent in scales over 0.7 mm. Using two complementary scales improved brain age estimates in RMSE by about 5 years. Conclusion: Our study provides a comprehensive multiscale description of lifespan effects on cortical morphology in an age range from 6-88 years. In future, this can be used as a normative model to compare individuals or cohorts, hence identifying morphological abnormalities. Our results reveal the complementary information contained in different spatial scales, suggesting that morphometrics should not be considered mere scalars, but functions of length scale.
[ { "created": "Wed, 22 Nov 2023 16:19:52 GMT", "version": "v1" }, { "created": "Wed, 24 Jan 2024 20:58:41 GMT", "version": "v2" }, { "created": "Wed, 7 Feb 2024 13:15:21 GMT", "version": "v3" } ]
2024-02-08
[ [ "Leiberg", "Karoline", "" ], [ "Blattner", "Timo", "" ], [ "Little", "Bethany", "" ], [ "Mello", "Victor B. B.", "" ], [ "de Moraes", "Fernanda H. P.", "" ], [ "Rummel", "Christian", "" ], [ "Taylor", "Peter N....
Motivation: Characterising the changes in cortical morphology across the lifespan is fundamental for a range of research and clinical applications. Most studies to date have found a monotonic decrease in commonly used morphometrics, such as cortical thickness and volume, across the entire brain with increasing age. Any regional variations reported are subtle changes in the rate of decrease. However, these descriptions of morphological changes have been limited to a single length scale, or resolution. Here, we delineate the morphological changes associated with the healthy lifespan in multiscale morphometrics. Methods: Using MRI from subjects aged 6-88 years from NKI and CamCAN, we computed morphometrics at spatial scales ranging from 0.32 mm to 3 mm. We used generalised additive mixed models to account for site differences when extracting age trajectories. In a proof-of-principle application, we compared brain age estimations based on a single metric (pial surface area) computed at a single scale vs. multiple scales. Results: On the level of whole cortical hemispheres, lifespan trajectories show diverging and even opposing trends at different spatial scales, in contrast to the monotonic decreases of volume and thickness described so far. Pronounced regional differences between lobes also became apparent in scales over 0.7 mm. Using two complementary scales improved brain age estimates in RMSE by about 5 years. Conclusion: Our study provides a comprehensive multiscale description of lifespan effects on cortical morphology in an age range from 6-88 years. In future, this can be used as a normative model to compare individuals or cohorts, hence identifying morphological abnormalities. Our results reveal the complementary information contained in different spatial scales, suggesting that morphometrics should not be considered mere scalars, but functions of length scale.
1406.4818
Sang-Yoon Kim
Sang-Yoon Kim and Woochang Lim
Noise-Induced Burst and Spike Synchronizations in An Inhibitory Small-World Network of Subthreshold Bursting Neurons
arXiv admin note: substantial text overlap with arXiv:1403.3994
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For modeling complex synaptic connectivity, we consider the Watts-Strogatz small-world network which interpolates between regular lattice and random network via rewiring, and investigate the effect of small-world connectivity on emergence of noise-induced population synchronization in an inhibitory population of subthreshold bursting Hindmarsh-Rose neurons. Thus, noise-induced slow burst synchronization and fast spike synchronization are found to appear in a synchronized region of the $J-D$ plane. As the rewiring probability $p$ is decreased from 1 (random network) to 0 (regular lattice), the region of spike synchronization shrinks rapidly in the $J-D$ plane, while the region of the burst synchronization decreases slowly. Population synchronization may be well visualized in the raster plot of neural spikes which can be obtained in experiments. Instantaneous population firing rate, $R(t)$, which is directly obtained from the raster plot of spikes, is a realistic population quantity exhibiting collective behaviors with both the slow bursting and the fast spiking timescales. Through frequency filtering, we separate $R(t)$ into $R_b(t)$ (describing the slow bursting behavior) and $R_s(t)$ (describing the fast intraburst spiking behavior). Then, we develop thermodynamic order parameters and statistical-mechanical measures, based on $R_b (t)$ and $R_s (t)$, for characterization of the burst and spike synchronizations of the bursting neurons and show their usefulness in explicit examples. With increase in $p$, both the degrees of the burst and spike synchronizations are found to increase because more long-range connections appear. However, they become saturated for some maximal values of $p$ because long-range short-cuts which appear up to the maximal values of $p$ play a sufficient role in attaining maximal degrees of the burst and spike synchronizations.
[ { "created": "Thu, 12 Jun 2014 00:55:39 GMT", "version": "v1" }, { "created": "Thu, 10 Jul 2014 05:53:34 GMT", "version": "v2" } ]
2014-07-11
[ [ "Kim", "Sang-Yoon", "" ], [ "Lim", "Woochang", "" ] ]
For modeling complex synaptic connectivity, we consider the Watts-Strogatz small-world network which interpolates between regular lattice and random network via rewiring, and investigate the effect of small-world connectivity on emergence of noise-induced population synchronization in an inhibitory population of subthreshold bursting Hindmarsh-Rose neurons. Thus, noise-induced slow burst synchronization and fast spike synchronization are found to appear in a synchronized region of the $J-D$ plane. As the rewiring probability $p$ is decreased from 1 (random network) to 0 (regular lattice), the region of spike synchronization shrinks rapidly in the $J-D$ plane, while the region of the burst synchronization decreases slowly. Population synchronization may be well visualized in the raster plot of neural spikes which can be obtained in experiments. Instantaneous population firing rate, $R(t)$, which is directly obtained from the raster plot of spikes, is a realistic population quantity exhibiting collective behaviors with both the slow bursting and the fast spiking timescales. Through frequency filtering, we separate $R(t)$ into $R_b(t)$ (describing the slow bursting behavior) and $R_s(t)$ (describing the fast intraburst spiking behavior). Then, we develop thermodynamic order parameters and statistical-mechanical measures, based on $R_b (t)$ and $R_s (t)$, for characterization of the burst and spike synchronizations of the bursting neurons and show their usefulness in explicit examples. With increase in $p$, both the degrees of the burst and spike synchronizations are found to increase because more long-range connections appear. However, they become saturated for some maximal values of $p$ because long-range short-cuts which appear up to the maximal values of $p$ play a sufficient role in attaining maximal degrees of the burst and spike synchronizations.
2008.03322
Kexin Huang
Kexin Huang
scGNN: scRNA-seq Dropout Imputation via Induced Hierarchical Cell Similarity Graph
Accepted to ICML 2020 Workshop on Computational Biology
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single-cell RNA sequencing provides tremendous insights to understand biological systems. However, the noise from dropout can corrupt the downstream biological analysis. Hence, it is desirable to impute the dropouts accurately. In this work, we propose a simple and powerful dropout imputation method (scGNN) by applying a bottlenecked Graph Convolutional Neural Network on an induced hierarchical cell similarity graph. We show scGNN has competitive performance against state-of-the-art baselines across three datasets and can improve downstream analysis.
[ { "created": "Fri, 7 Aug 2020 18:05:59 GMT", "version": "v1" } ]
2020-08-11
[ [ "Huang", "Kexin", "" ] ]
Single-cell RNA sequencing provides tremendous insights to understand biological systems. However, the noise from dropout can corrupt the downstream biological analysis. Hence, it is desirable to impute the dropouts accurately. In this work, we propose a simple and powerful dropout imputation method (scGNN) by applying a bottlenecked Graph Convolutional Neural Network on an induced hierarchical cell similarity graph. We show scGNN has competitive performance against state-of-the-art baselines across three datasets and can improve downstream analysis.
2008.05205
Tam\'as Kov\'acs
Tam\'as Kov\'acs
How can contemporary climate research help to understand epidemic dynamics? -- Ensemble approach and snapshot attractors
10 pages, 8 figures (+2 supplementary figs)
null
null
null
q-bio.PE nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Standard epidemic models based on compartmental differential equations are investigated under continuous parameter change as external forcing. We show that seasonal modulation of the contact parameter superimposed on a monotonic decay needs a different description than that of the standard chaotic dynamics. The concept of snapshot attractors and their natural probability distribution has been adopted from the field of the latest climate-change research to show the importance of transient effects and the ensemble interpretation of disease spread. After presenting the extended bifurcation diagram of measles, the temporal change of the phase space structure is investigated. By defining statistical measures over the ensemble, we can interpret the internal variability of the epidemic as the onset of complex dynamics even for those values of the contact parameter where regular behavior is expected. We argue that anomalous outbreaks of the infectious class cannot die out while transient chaos is present for various parameters. More importantly, this fact becomes visible by using the ensemble approach rather than a single-trajectory representation. These findings are applicable generally in nonlinear dynamical systems such as standard epidemic models regardless of parameter values.
[ { "created": "Wed, 12 Aug 2020 09:55:40 GMT", "version": "v1" } ]
2020-08-13
[ [ "Kovács", "Tamás", "" ] ]
Standard epidemic models based on compartmental differential equations are investigated under continuous parameter change as external forcing. We show that seasonal modulation of the contact parameter superimposed on a monotonic decay needs a different description than that of the standard chaotic dynamics. The concept of snapshot attractors and their natural probability distribution has been adopted from the field of the latest climate-change research to show the importance of transient effects and the ensemble interpretation of disease spread. After presenting the extended bifurcation diagram of measles, the temporal change of the phase space structure is investigated. By defining statistical measures over the ensemble, we can interpret the internal variability of the epidemic as the onset of complex dynamics even for those values of the contact parameter where regular behavior is expected. We argue that anomalous outbreaks of the infectious class cannot die out while transient chaos is present for various parameters. More importantly, this fact becomes visible by using the ensemble approach rather than a single-trajectory representation. These findings are applicable generally in nonlinear dynamical systems such as standard epidemic models regardless of parameter values.
2402.11776
Jeffrey Lim
Jeffrey Lim, Po T. Wang, Wonjoon Sohn, Claudia Serrano-Amenos, Mina Ibrahim, Derrick Lin, Shravan Thaploo, Susan J. Shaw, Michelle Armacost, Hui Gong, Brian Lee, Darrin Lee, Richard A. Andersen, Payam Heydari, Charles Y. Liu, Zoran Nenadic, An H. Do
Early feasibility of an embedded bi-directional brain-computer interface for ambulation
5 pages, 6 figures, two tables, also submitted to IEEE EMBC 2024 conference
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current treatments for paraplegia induced by spinal cord injury (SCI) are often limited by the severity of the injury. The accompanying loss of sensory and motor functions often results in reliance on wheelchairs, which in turn causes reduced quality of life and increased risk of co-morbidities. While brain-computer interfaces (BCIs) for ambulation have shown promise in restoring or replacing lower extremity motor functions, none so far have simultaneously implemented sensory feedback functions. Additionally, many existing BCIs for ambulation rely on bulky external hardware that make them ill-suited for non-research settings. Here, we present an embedded bi-directional BCI (BDBCI), that restores motor function by enabling neural control over a robotic gait exoskeleton (RGE) and delivers sensory feedback via direct cortical electrical stimulation (DCES) in response to RGE leg swing. A first demonstration with this system was performed with a single subject implanted with electrocorticography electrodes, achieving an average lag-optimized cross-correlation of 0.80$\pm$0.08 between cues and decoded states over 5 runs.
[ { "created": "Mon, 19 Feb 2024 02:07:37 GMT", "version": "v1" } ]
2024-02-20
[ [ "Lim", "Jeffrey", "" ], [ "Wang", "Po T.", "" ], [ "Sohn", "Wonjoon", "" ], [ "Serrano-Amenos", "Claudia", "" ], [ "Ibrahim", "Mina", "" ], [ "Lin", "Derrick", "" ], [ "Thaploo", "Shravan", "" ], [ ...
Current treatments for paraplegia induced by spinal cord injury (SCI) are often limited by the severity of the injury. The accompanying loss of sensory and motor functions often results in reliance on wheelchairs, which in turn causes reduced quality of life and increased risk of co-morbidities. While brain-computer interfaces (BCIs) for ambulation have shown promise in restoring or replacing lower extremity motor functions, none so far have simultaneously implemented sensory feedback functions. Additionally, many existing BCIs for ambulation rely on bulky external hardware that make them ill-suited for non-research settings. Here, we present an embedded bi-directional BCI (BDBCI), that restores motor function by enabling neural control over a robotic gait exoskeleton (RGE) and delivers sensory feedback via direct cortical electrical stimulation (DCES) in response to RGE leg swing. A first demonstration with this system was performed with a single subject implanted with electrocorticography electrodes, achieving an average lag-optimized cross-correlation of 0.80$\pm$0.08 between cues and decoded states over 5 runs.
2206.06159
Daniel Russ
Montserrat Garcia-Closas, Thomas U. Ahearn, Mia M. Gaudet, Amber N. Hurson, Jeya Balaji Balasubramanian, Parichoy Pal Choudhury, Nicole M. Gerlanc, Bhaumik Patel, Daniel Russ, Mustapha Abubakar, Neal D. Freedman, Wendy S.W. Wong, Stephen J. Chanock, Amy Berrington de Gonzalez, Jonas S Almeida
Moving towards FAIR practices in epidemiological research
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reproducibility and replicability of research findings are central to the scientific integrity of epidemiology. In addition, many research questions require combining data from multiple sources to achieve adequate statistical power. However, barriers related to confidentiality, costs, and incentives often limit the extent and speed of sharing resources, both data and code. Epidemiological practices that follow FAIR principles can address these barriers by making resources (F)indable with the necessary metadata, (A)ccessible to authorized users and (I)nteroperable with other data, to optimize the (R)e-use of resources with appropriate credit to their creators. We provide an overview of these principles and describe approaches for implementation in epidemiology. Increasing degrees of FAIRness can be achieved by moving data and code from on-site locations to the Cloud, using machine-readable and non-proprietary files, and developing open-source code. Adoption of these practices will improve daily work and collaborative analyses, and facilitate compliance with data sharing policies from funders and scientific journals. Achieving a high degree of FAIRness will require funding, training, organizational support, recognition, and incentives for sharing resources. But these costs are amply outweighed by the benefits of making research more reproducible, impactful, and equitable by facilitating the re-use of precious research resources by the scientific community.
[ { "created": "Mon, 13 Jun 2022 13:44:53 GMT", "version": "v1" } ]
2022-06-14
[ [ "Garcia-Closas", "Montserrat", "" ], [ "Ahearn", "Thomas U.", "" ], [ "Gaudet", "Mia M.", "" ], [ "Hurson", "Amber N.", "" ], [ "Balasubramanian", "Jeya Balaji", "" ], [ "Choudhury", "Parichoy Pal", "" ], [ "Gerlan...
Reproducibility and replicability of research findings are central to the scientific integrity of epidemiology. In addition, many research questions require combining data from multiple sources to achieve adequate statistical power. However, barriers related to confidentiality, costs, and incentives often limit the extent and speed of sharing resources, both data and code. Epidemiological practices that follow FAIR principles can address these barriers by making resources (F)indable with the necessary metadata, (A)ccessible to authorized users and (I)nteroperable with other data, to optimize the (R)e-use of resources with appropriate credit to their creators. We provide an overview of these principles and describe approaches for implementation in epidemiology. Increasing degrees of FAIRness can be achieved by moving data and code from on-site locations to the Cloud, using machine-readable and non-proprietary files, and developing open-source code. Adoption of these practices will improve daily work and collaborative analyses, and facilitate compliance with data sharing policies from funders and scientific journals. Achieving a high degree of FAIRness will require funding, training, organizational support, recognition, and incentives for sharing resources. But these costs are amply outweighed by the benefits of making research more reproducible, impactful, and equitable by facilitating the re-use of precious research resources by the scientific community.
0812.5005
Peter Waddell
Peter J. Waddell, Rissa Ota, and David Penny
Measuring Fit of Sequence Data to Phylogenetic Model: Gain of Power using Marginal Tests
null
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Testing fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (1978) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (1982) to the present. We compare the general log-likelihood ratio statistic (the G or G2 statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (p~0.5), but the marginalized tests do. Tests on pair-wise frequency (F) matrices strongly (p < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (p < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa; not necessarily those expected from examining other regions of the genome. By marginalizing the 4t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony informative characters of a minimum possible length, then the likelihood ratio test regains power, and it too rejects the evolutionary model with p << 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published analyses may really be far larger than the analytical methods (e.g., bootstrap) report.
[ { "created": "Tue, 30 Dec 2008 05:08:37 GMT", "version": "v1" } ]
2008-12-31
[ [ "Waddell", "Peter J.", "" ], [ "Ota", "Rissa", "" ], [ "Penny", "David", "" ] ]
Testing fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (1978) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (1982) to the present. We compare the general log-likelihood ratio statistic (the G or G2 statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (p~0.5), but the marginalized tests do. Tests on pair-wise frequency (F) matrices strongly (p < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (p < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa; not necessarily those expected from examining other regions of the genome. By marginalizing the 4t patterns of the i.i.d. model to observed and expected parsimony counts, that is, from constant sites, to singletons, to parsimony informative characters of a minimum possible length, then the likelihood ratio test regains power, and it too rejects the evolutionary model with p << 0.001. Given such behavior over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published analyses may really be far larger than the analytical methods (e.g., bootstrap) report.
2005.09598
Josefine Bohr Brask
Josefine Bohr Brask, Samuel Ellis, Darren P Croft
Animal social networks: an introduction for complex systems scientists
15 pages, 2 figures
null
null
null
q-bio.PE cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many animals live in societies where individuals frequently interact socially with each other. The social structures of these systems can be studied in depth by means of network analysis. A large number of studies on animal social networks in many species have in recent years been carried out in the biological research field of animal behaviour and have provided new insights into behaviour, ecology, and social evolution. This line of research is currently not so well connected to the field of complex systems as could be expected. The purpose of this paper is to provide an introduction to animal social networks for complex systems scientists and highlight areas of synergy. We believe that an increased integration of animal social networks with the interdisciplinary field of complex systems and networks would be beneficial for various reasons. Increased collaboration between researchers in this field and biologists studying animal social systems could be valuable in solving challenges that are of importance to animal social network research. Furthermore, animal social networks provide the opportunity to investigate hypotheses about complex systems across a range of natural real-world social systems. In this paper, we describe what animal social networks are and main research themes where they are studied; we give an overview of the methods commonly used to study animal social networks; we highlight challenges in the study of animal social networks where complex systems expertise may be particularly valuable; and we consider aspects of animal social networks that may be of particular interest to complex systems researchers. We hope that this will help to facilitate further interdisciplinary collaborations involving animal social networks, and further integration of these networks into the field of complex systems.
[ { "created": "Tue, 19 May 2020 17:21:06 GMT", "version": "v1" }, { "created": "Fri, 26 Jun 2020 20:57:13 GMT", "version": "v2" }, { "created": "Fri, 19 Feb 2021 18:22:57 GMT", "version": "v3" } ]
2021-02-22
[ [ "Brask", "Josefine Bohr", "" ], [ "Ellis", "Samuel", "" ], [ "Croft", "Darren P", "" ] ]
Many animals live in societies where individuals frequently interact socially with each other. The social structures of these systems can be studied in depth by means of network analysis. A large number of studies on animal social networks in many species have in recent years been carried out in the biological research field of animal behaviour and have provided new insights into behaviour, ecology, and social evolution. This line of research is currently not so well connected to the field of complex systems as could be expected. The purpose of this paper is to provide an introduction to animal social networks for complex systems scientists and highlight areas of synergy. We believe that an increased integration of animal social networks with the interdisciplinary field of complex systems and networks would be beneficial for various reasons. Increased collaboration between researchers in this field and biologists studying animal social systems could be valuable in solving challenges that are of importance to animal social network research. Furthermore, animal social networks provide the opportunity to investigate hypotheses about complex systems across a range of natural real-world social systems. In this paper, we describe what animal social networks are and main research themes where they are studied; we give an overview of the methods commonly used to study animal social networks; we highlight challenges in the study of animal social networks where complex systems expertise may be particularly valuable; and we consider aspects of animal social networks that may be of particular interest to complex systems researchers. We hope that this will help to facilitate further interdisciplinary collaborations involving animal social networks, and further integration of these networks into the field of complex systems.
1702.05609
Andrew Francis
Andrew Francis, Katharina Huber, Vincent Moulton, Taoyang Wu
Bounds for phylogenetic network space metrics
17 pages, 5 figures. This version has a new figure to illustrate Lemma 3.2, and a new Corollary 5.7 that bounds the distance between networks in different tiers
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic networks are a generalization of phylogenetic trees that allow for representation of reticulate evolution. Recently, a space of unrooted phylogenetic networks was introduced, where such a network is a connected graph in which every vertex has degree 1 or 3 and whose leaf-set is a fixed set $X$ of taxa. This space, denoted $\mathcal{N}(X)$, is defined in terms of two operations on networks -- the nearest neighbor interchange and triangle operations -- which can be used to transform any network with leaf set $X$ into any other network with that leaf set. In particular, it gives rise to a metric $d$ on $\mathcal N(X)$ which is given by the smallest number of operations required to transform one network in $\mathcal N(X)$ into another in $\mathcal N(X)$. The metric generalizes the well-known NNI-metric on phylogenetic trees which has been intensively studied in the literature. In this paper, we derive a bound for the metric $d$ as well as a related metric $d_{N\!N\!I}$ which arises when restricting $d$ to the subset of $\mathcal{N}(X)$ consisting of all networks with $2(|X|-1+i)$ vertices, $i \ge 1$. We also introduce two new metrics on networks -- the SPR and TBR metrics -- which generalize the metrics on phylogenetic trees with the same name and give bounds for these new metrics. We expect our results to eventually have applications to the development and understanding of network search algorithms.
[ { "created": "Sat, 18 Feb 2017 12:57:06 GMT", "version": "v1" }, { "created": "Wed, 8 Mar 2017 05:28:47 GMT", "version": "v2" } ]
2017-03-09
[ [ "Francis", "Andrew", "" ], [ "Huber", "Katharina", "" ], [ "Moulton", "Vincent", "" ], [ "Wu", "Taoyang", "" ] ]
Phylogenetic networks are a generalization of phylogenetic trees that allow for representation of reticulate evolution. Recently, a space of unrooted phylogenetic networks was introduced, where such a network is a connected graph in which every vertex has degree 1 or 3 and whose leaf-set is a fixed set $X$ of taxa. This space, denoted $\mathcal{N}(X)$, is defined in terms of two operations on networks -- the nearest neighbor interchange and triangle operations -- which can be used to transform any network with leaf set $X$ into any other network with that leaf set. In particular, it gives rise to a metric $d$ on $\mathcal N(X)$ which is given by the smallest number of operations required to transform one network in $\mathcal N(X)$ into another in $\mathcal N(X)$. The metric generalizes the well-known NNI-metric on phylogenetic trees which has been intensively studied in the literature. In this paper, we derive a bound for the metric $d$ as well as a related metric $d_{N\!N\!I}$ which arises when restricting $d$ to the subset of $\mathcal{N}(X)$ consisting of all networks with $2(|X|-1+i)$ vertices, $i \ge 1$. We also introduce two new metrics on networks -- the SPR and TBR metrics -- which generalize the metrics on phylogenetic trees with the same name and give bounds for these new metrics. We expect our results to eventually have applications to the development and understanding of network search algorithms.
1009.2855
Jakob Macke Jakob Macke
Jakob H Macke, Manfred Opper, Matthias Bethge
An analytically tractable model of neural population activity in the presence of common input explains higher-order correlations and entropy
null
null
null
null
q-bio.NC cond-mat.dis-nn physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simultaneously recorded neurons exhibit correlations whose underlying causes are not known. Here, we use a population of threshold neurons receiving correlated inputs to model neural population recordings. We show analytically that small changes in second-order correlations can lead to large changes in higher correlations, and that these higher-order correlations have a strong impact on the entropy, sparsity and statistical heat capacity of the population. Remarkably, our findings for this simple model may explain a couple of surprising effects recently observed in neural population recordings.
[ { "created": "Wed, 15 Sep 2010 07:34:13 GMT", "version": "v1" }, { "created": "Fri, 17 Sep 2010 13:13:21 GMT", "version": "v2" } ]
2010-09-20
[ [ "Macke", "Jakob H", "" ], [ "Opper", "Manfred", "" ], [ "Bethge", "Matthias", "" ] ]
Simultaneously recorded neurons exhibit correlations whose underlying causes are not known. Here, we use a population of threshold neurons receiving correlated inputs to model neural population recordings. We show analytically that small changes in second-order correlations can lead to large changes in higher correlations, and that these higher-order correlations have a strong impact on the entropy, sparsity and statistical heat capacity of the population. Remarkably, our findings for this simple model may explain a couple of surprising effects recently observed in neural population recordings.
1503.07215
Lionel Roques
Lionel Roques
MULTILAND: a neutral landscape generator designed for theoretical studies
null
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The main goal of Multiland is to generate neutral landscapes made of several types of regions, with an exact control of the proportions occupied by each type of region. An important feature of the software is that it allows control of the landscape fragmentation. It is intended for theoretical studies on the effect of landscape structure in applied sciences. It has been developed in the framework of the PEERLESS ANR project "Predictive Ecological Engineering for Landscape Ecosystem Services and Sustainability".
[ { "created": "Tue, 24 Mar 2015 22:00:27 GMT", "version": "v1" } ]
2015-03-26
[ [ "Roques", "Lionel", "" ] ]
The main goal of Multiland is to generate neutral landscapes made of several types of regions, with an exact control of the proportions occupied by each type of region. An important feature of the software is that it allows control of the landscape fragmentation. It is intended for theoretical studies on the effect of landscape structure in applied sciences. It has been developed in the framework of the PEERLESS ANR project "Predictive Ecological Engineering for Landscape Ecosystem Services and Sustainability".