Dataset columns (name: type, observed range):
  id              string, length 9 to 13
  submitter       string, length 4 to 48
  authors         string, length 4 to 9.62k
  title           string, length 4 to 343
  comments        string, length 2 to 480
  journal-ref     string, length 9 to 309
  doi             string, length 12 to 138
  report-no       string, 277 distinct values
  categories      string, length 8 to 87
  license         string, 9 distinct values
  orig_abstract   string, length 27 to 3.76k
  versions        list, length 1 to 15
  update_date     string, length 10 to 10
  authors_parsed  list, length 1 to 147
  abstract        string, length 24 to 3.75k
0903.0924
Xiangwei Chu
Xiangwei Chu, Zhongzhi Zhang, Jihong Guan, Shuigeng Zhou
Epidemic spreading with nonlinear infectivity in weighted scale-free networks
17 pages, 12 figures
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by-nc-sa/3.0/
In this paper, we investigate epidemic spreading for the SIR model in weighted scale-free networks with nonlinear infectivity, where the transmission rate in our analytical model is weighted. Concretely, we introduce the infectivity exponent $\alpha$ and the weight exponent $\beta$ into the analytical SIR model, and then examine the combined effects of $\alpha$ and $\beta$ on the epidemic threshold and phase transition. We show that one can adjust the values of $\alpha$ and $\beta$ to restore the epidemic threshold to a finite value, and we observe that the steady epidemic prevalence $R$ grows exponentially in the early stage and then follows hierarchical dynamics. Furthermore, we find that the epidemic threshold and epidemic prevalence are more sensitive to $\alpha$ than to $\beta$, which might provide useful information and new insights into epidemic spreading and the corresponding immunization schemes.
[ { "created": "Thu, 5 Mar 2009 08:09:51 GMT", "version": "v1" } ]
2009-03-06
[ [ "Chu", "Xiangwei", "" ], [ "Zhang", "Zhongzhi", "" ], [ "Guan", "Jihong", "" ], [ "Zhou", "Shuigeng", "" ] ]
In this paper, we investigate epidemic spreading for the SIR model in weighted scale-free networks with nonlinear infectivity, where the transmission rate in our analytical model is weighted. Concretely, we introduce the infectivity exponent $\alpha$ and the weight exponent $\beta$ into the analytical SIR model, and then examine the combined effects of $\alpha$ and $\beta$ on the epidemic threshold and phase transition. We show that one can adjust the values of $\alpha$ and $\beta$ to restore the epidemic threshold to a finite value, and we observe that the steady epidemic prevalence $R$ grows exponentially in the early stage and then follows hierarchical dynamics. Furthermore, we find that the epidemic threshold and epidemic prevalence are more sensitive to $\alpha$ than to $\beta$, which might provide useful information and new insights into epidemic spreading and the corresponding immunization schemes.
1903.04443
Alex McAvoy
Alex McAvoy and Martin A. Nowak
Reactive learning strategies for iterated games
18 pages
Proceedings of the Royal Society A (2019)
10.1098/rspa.2018.0819
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In an iterated game between two players, there is much interest in characterizing the set of feasible payoffs for both players when one player uses a fixed strategy and the other player is free to switch. Such characterizations have led to extortionists, equalizers, partners, and rivals. Most of those studies use memory-one strategies, which specify the probabilities to take actions depending on the outcome of the previous round. Here, we consider "reactive learning strategies," which gradually modify their propensity to take certain actions based on past actions of the opponent. Every linear reactive learning strategy, $\mathbf{p}^{\ast}$, corresponds to a memory-one strategy, $\mathbf{p}$, and vice versa. We prove that for evaluating the region of feasible payoffs against a memory-one strategy, $\mathcal{C}\left(\mathbf{p}\right)$, we need to check its performance against at most $11$ other strategies. Thus, $\mathcal{C}\left(\mathbf{p}\right)$ is the convex hull in $\mathbb{R}^{2}$ of at most $11$ points. Furthermore, if $\mathbf{p}$ is a memory-one strategy, with feasible payoff region $\mathcal{C}\left(\mathbf{p}\right)$, and $\mathbf{p}^{\ast}$ is the corresponding reactive learning strategy, with feasible payoff region $\mathcal{C}\left(\mathbf{p}^{\ast}\right)$, then $\mathcal{C}\left(\mathbf{p}^{\ast}\right)$ is a subset of $\mathcal{C}\left(\mathbf{p}\right)$. Reactive learning strategies are therefore powerful tools in restricting the outcomes of iterated games.
[ { "created": "Mon, 11 Mar 2019 17:06:57 GMT", "version": "v1" } ]
2022-02-18
[ [ "McAvoy", "Alex", "" ], [ "Nowak", "Martin A.", "" ] ]
In an iterated game between two players, there is much interest in characterizing the set of feasible payoffs for both players when one player uses a fixed strategy and the other player is free to switch. Such characterizations have led to extortionists, equalizers, partners, and rivals. Most of those studies use memory-one strategies, which specify the probabilities to take actions depending on the outcome of the previous round. Here, we consider "reactive learning strategies," which gradually modify their propensity to take certain actions based on past actions of the opponent. Every linear reactive learning strategy, $\mathbf{p}^{\ast}$, corresponds to a memory-one strategy, $\mathbf{p}$, and vice versa. We prove that for evaluating the region of feasible payoffs against a memory-one strategy, $\mathcal{C}\left(\mathbf{p}\right)$, we need to check its performance against at most $11$ other strategies. Thus, $\mathcal{C}\left(\mathbf{p}\right)$ is the convex hull in $\mathbb{R}^{2}$ of at most $11$ points. Furthermore, if $\mathbf{p}$ is a memory-one strategy, with feasible payoff region $\mathcal{C}\left(\mathbf{p}\right)$, and $\mathbf{p}^{\ast}$ is the corresponding reactive learning strategy, with feasible payoff region $\mathcal{C}\left(\mathbf{p}^{\ast}\right)$, then $\mathcal{C}\left(\mathbf{p}^{\ast}\right)$ is a subset of $\mathcal{C}\left(\mathbf{p}\right)$. Reactive learning strategies are therefore powerful tools in restricting the outcomes of iterated games.
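As a rough illustration of the machinery behind the abstract above: when two memory-one strategies play an iterated game, the outcome sequence is a Markov chain over the four states CC, CD, DC and DD, and long-run payoffs follow from its stationary distribution. The sketch below shows only that generic building block, not the paper's 11-point convex-hull characterization; the strategy vectors and donation-game payoffs (b = 2, c = 1) are illustrative assumptions.

```python
import numpy as np

def transition_matrix(p, q):
    """Markov chain over outcomes (CC, CD, DC, DD), labelled from player X's view.
    p and q give each player's cooperation probability after each outcome, stated
    from that player's own perspective, so q is re-indexed into X's state labels."""
    qy = np.array([q[0], q[2], q[1], q[3]], dtype=float)
    p = np.asarray(p, dtype=float)
    M = np.empty((4, 4))
    for s in range(4):
        M[s] = [p[s] * qy[s], p[s] * (1 - qy[s]),
                (1 - p[s]) * qy[s], (1 - p[s]) * (1 - qy[s])]
    return M

def stationary_distribution(M):
    """Left eigenvector of M for eigenvalue 1, normalised to a probability vector."""
    w, v = np.linalg.eig(M.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

# Illustrative strategies with probabilities strictly inside (0, 1), so the chain is ergodic.
p = [0.9, 0.1, 0.9, 0.1]   # a generous tit-for-tat-like strategy (assumption)
q = [0.8, 0.3, 0.6, 0.2]   # an arbitrary memory-one opponent (assumption)

# Donation-game payoffs per outcome CC, CD, DC, DD with b = 2, c = 1 (assumption).
payoff_x = np.array([1.0, -1.0, 2.0, 0.0])
payoff_y = np.array([1.0, 2.0, -1.0, 0.0])

pi = stationary_distribution(transition_matrix(p, q))
print("long-run payoffs:", pi @ payoff_x, pi @ payoff_y)
```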
1610.09193
Eugenio Urdapilleta
Eugenio Urdapilleta
Noise-induced interspike interval correlations and spike train regularization in spike-triggered adapting neurons
7 pages, 6 figures, published in Epl
Epl 115, 68002 (2016)
10.1209/0295-5075/115/68002
null
q-bio.NC cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spike generation in neurons produces a temporal point process, whose statistics is governed by intrinsic phenomena and the external incoming inputs to be coded. In particular, spike-evoked adaptation currents support a slow temporal process that conditions spiking probability at the present time according to past activity. In this work, we study the statistics of interspike interval correlations arising in such non-renewal spike trains, for a neuron model that reproduces different spike modes in a small adaptation scenario. We found that correlations are stronger as the neuron fires at a particular firing rate, which is defined by the adaptation process. When set in a subthreshold regime, the neuron may sustain this particular firing rate, and thus induce correlations, by noise. Given that, in this regime, interspike intervals are negatively correlated at any lag, this effect surprisingly implies a reduction in the variability of the spike count statistics at a finite noise intensity.
[ { "created": "Fri, 28 Oct 2016 12:58:44 GMT", "version": "v1" } ]
2016-10-31
[ [ "Urdapilleta", "Eugenio", "" ] ]
Spike generation in neurons produces a temporal point process, whose statistics is governed by intrinsic phenomena and the external incoming inputs to be coded. In particular, spike-evoked adaptation currents support a slow temporal process that conditions spiking probability at the present time according to past activity. In this work, we study the statistics of interspike interval correlations arising in such non-renewal spike trains, for a neuron model that reproduces different spike modes in a small adaptation scenario. We found that correlations are stronger as the neuron fires at a particular firing rate, which is defined by the adaptation process. When set in a subthreshold regime, the neuron may sustain this particular firing rate, and thus induce correlations, by noise. Given that, in this regime, interspike intervals are negatively correlated at any lag, this effect surprisingly implies a reduction in the variability of the spike count statistics at a finite noise intensity.
2009.00308
Erhard Scholz
Matthias Kreck, Erhard Scholz
Proposal of a recursive compartment model of epidemics and applications to the Covid-19 pandemic
17 pages, 9 figures
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This is work in progress. We make it accessible hoping that people might find the idea useful. We propose a discrete, recursive 5-compartment model for the spread of epidemics, which we call the {\em SEPIR model}. Under mild assumptions, which are typically fulfilled for the Covid-19 pandemic, it can be used to reproduce the development of an epidemic from a small number of parameters closely related to the data. We demonstrate this on the development of the epidemic in Germany and Switzerland. It also allows model predictions assuming nearly constant reproduction numbers. Thus it might be a useful tool for shedding light on which interventions might be most effective in the future. In future work we will discuss other aspects of the model and more countries.
[ { "created": "Tue, 1 Sep 2020 09:19:37 GMT", "version": "v1" } ]
2020-09-02
[ [ "Kreck", "Matthias", "" ], [ "Scholz", "Erhard", "" ] ]
This is work in progress. We make it accessible hoping that people might find the idea useful. We propose a discrete, recursive 5-compartment model for the spread of epidemics, which we call the {\em SEPIR model}. Under mild assumptions, which are typically fulfilled for the Covid-19 pandemic, it can be used to reproduce the development of an epidemic from a small number of parameters closely related to the data. We demonstrate this on the development of the epidemic in Germany and Switzerland. It also allows model predictions assuming nearly constant reproduction numbers. Thus it might be a useful tool for shedding light on which interventions might be most effective in the future. In future work we will discuss other aspects of the model and more countries.
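To make the notion of a discrete, recursive compartment model concrete, here is a minimal sketch of a generic discrete-time SEIR-type recursion. It is not the authors' five-compartment SEPIR system (whose equations are not given in the abstract), and all parameter values are illustrative assumptions.

```python
def step(S, E, I, R, N, beta, sigma, gamma):
    """One discrete time step of a generic SEIR recursion (illustrative only)."""
    new_exposed    = beta * S * I / N   # infections caused this step
    new_infectious = sigma * E          # E -> I progression
    new_recovered  = gamma * I          # I -> R removal
    return (S - new_exposed,
            E + new_exposed - new_infectious,
            I + new_infectious - new_recovered,
            R + new_recovered)

N = 1_000_000
state = (N - 10, 0.0, 10.0, 0.0)        # initial S, E, I, R (assumption)
for day in range(120):
    state = step(*state, N=N, beta=0.3, sigma=1/4, gamma=1/6)
print([round(x) for x in state])
```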
1802.10448
Massimiliano Sassoli de Bianchi
Diederik Aerts, Massimiliano Sassoli de Bianchi, Sandro Sozzo and Tomas Veloz
Quantum cognition goes beyond-quantum: modeling the collective participant in psychological measurements
19 pages, 1 figure
Probing the Meaning of Quantum Mechanics, World Scientific, pp. 355-382 (2019)
10.1142/9789813276895_0017
null
q-bio.NC cs.AI quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In psychological measurements, two levels should be distinguished: the 'individual level', relative to the different participants in a given cognitive situation, and the 'collective level', relative to the overall statistics of their outcomes, which we propose to associate with a notion of 'collective participant'. When the distinction between these two levels is properly formalized, it reveals why modeling the collective participant generally requires beyond-quantum, non-Bornian, probabilistic models when sequential measurements at the individual level are considered, even though a pure quantum description remains valid for single-measurement situations.
[ { "created": "Sat, 24 Feb 2018 08:23:11 GMT", "version": "v1" } ]
2019-02-08
[ [ "Aerts", "Diederik", "" ], [ "de Bianchi", "Massimiliano Sassoli", "" ], [ "Sozzo", "Sandro", "" ], [ "Veloz", "Tomas", "" ] ]
In psychological measurements, two levels should be distinguished: the 'individual level', relative to the different participants in a given cognitive situation, and the 'collective level', relative to the overall statistics of their outcomes, which we propose to associate with a notion of 'collective participant'. When the distinction between these two levels is properly formalized, it reveals why modeling the collective participant generally requires beyond-quantum, non-Bornian, probabilistic models when sequential measurements at the individual level are considered, even though a pure quantum description remains valid for single-measurement situations.
0806.0215
Jean-Philippe Vert
Jean-Philippe Vert (CBIO)
Reconstruction of biological networks by supervised machine learning approaches
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We review a recent trend in computational systems biology which aims at using pattern recognition algorithms to infer the structure of large-scale biological networks from heterogeneous genomic data. We present several strategies that have been proposed and that lead to different pattern recognition problems and algorithms. The strength of these approaches is illustrated on the reconstruction of metabolic, protein-protein and regulatory networks of model organisms. In all cases, state-of-the-art performance is reported.
[ { "created": "Mon, 2 Jun 2008 06:40:24 GMT", "version": "v1" }, { "created": "Mon, 22 Sep 2008 12:09:21 GMT", "version": "v2" } ]
2008-09-22
[ [ "Vert", "Jean-Philippe", "", "CBIO" ] ]
We review a recent trend in computational systems biology which aims at using pattern recognition algorithms to infer the structure of large-scale biological networks from heterogeneous genomic data. We present several strategies that have been proposed and that lead to different pattern recognition problems and algorithms. The strength of these approaches is illustrated on the reconstruction of metabolic, protein-protein and regulatory networks of model organisms. In all cases, state-of-the-art performance is reported.
2008.08448
Jiali Zhou
Jiali Zhou, Haris N. Koutsopoulos
Virus Transmission Risk in Urban Rail Systems: A Microscopic Simulation-based Analysis of Spatio-temporal Characteristics
null
null
null
null
q-bio.PE cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The transmission risk of airborne diseases in public transportation systems is a concern. The paper proposes a modified Wells-Riley model for risk analysis in public transportation systems that captures passenger flow characteristics, including spatial and temporal patterns in the numbers of boarding and alighting passengers and the number of infectors. The model is utilized to assess overall risk as a function of OD flows, actual operations, and factors such as mask wearing and ventilation. The model is integrated with a microscopic simulation model of subway operations (SimMETRO). Using actual data from a subway system, a case study explores the impact of different factors on transmission risk, including mask wearing, ventilation rates, infectiousness levels of the disease, and carrier rates. In general, mask wearing and ventilation are effective under various demand levels, infectiousness levels, and carrier rates, with mask wearing being the more effective in mitigating risk. Impacts from operations and service frequency are also evaluated, emphasizing the importance of maintaining reliable, frequent operations to lower transmission risk. Risk spatial patterns are also explored, highlighting locations of higher risk.
[ { "created": "Sun, 16 Aug 2020 18:35:46 GMT", "version": "v1" } ]
2020-08-20
[ [ "Zhou", "Jiali", "" ], [ "Koutsopoulos", "Haris N.", "" ] ]
The transmission risk of airborne diseases in public transportation systems is a concern. The paper proposes a modified Wells-Riley model for risk analysis in public transportation systems that captures passenger flow characteristics, including spatial and temporal patterns in the numbers of boarding and alighting passengers and the number of infectors. The model is utilized to assess overall risk as a function of OD flows, actual operations, and factors such as mask wearing and ventilation. The model is integrated with a microscopic simulation model of subway operations (SimMETRO). Using actual data from a subway system, a case study explores the impact of different factors on transmission risk, including mask wearing, ventilation rates, infectiousness levels of the disease, and carrier rates. In general, mask wearing and ventilation are effective under various demand levels, infectiousness levels, and carrier rates, with mask wearing being the more effective in mitigating risk. Impacts from operations and service frequency are also evaluated, emphasizing the importance of maintaining reliable, frequent operations to lower transmission risk. Risk spatial patterns are also explored, highlighting locations of higher risk.
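For readers unfamiliar with the baseline being modified above: the classical Wells-Riley equation gives a susceptible occupant's infection probability as $P = 1 - \exp(-Iqpt/Q)$, with $I$ infectors, quanta generation rate $q$, breathing rate $p$, exposure time $t$ and clean-air ventilation rate $Q$. The sketch below implements only this classical form plus a crude multiplicative mask factor; the paper's extensions for OD flows and time-varying occupancy are not reproduced, and all numbers are illustrative assumptions.

```python
import math

def wells_riley_infection_prob(infectors, quanta_rate, breathing_rate,
                               exposure_time, ventilation_rate, mask_factor=1.0):
    """Classical Wells-Riley probability of infection for one susceptible occupant.

    infectors        : number of infectious people in the space (I)
    quanta_rate      : quanta generated per infector per hour (q)
    breathing_rate   : susceptible's breathing rate in m^3/h (p)
    exposure_time    : exposure duration in hours (t)
    ventilation_rate : clean-air supply of the space in m^3/h (Q)
    mask_factor      : illustrative multiplicative reduction from mask wearing
    """
    dose = (mask_factor * infectors * quanta_rate * breathing_rate
            * exposure_time / ventilation_rate)
    return 1.0 - math.exp(-dose)

# Illustrative numbers only: 2 infectors, 20 quanta/h, 0.5 m^3/h breathing,
# a 15-minute ride, 1200 m^3/h of ventilation, 50% reduction from masks.
print(wells_riley_infection_prob(2, 20, 0.5, 0.25, 1200, mask_factor=0.5))
```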
2202.00149
Josinaldo Menezes
J. Menezes, S. Batista, M. Tenorio, E. A. Triaca, B. Moura
How local antipredator response unbalances the rock-paper-scissors model
9 pages, 11 figures
Chaos 32, 123142 (2022)
10.1063/5.0106165
null
q-bio.PE nlin.AO nlin.PS physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Antipredator behaviour is a self-preservation strategy present in many biological systems, where individuals join the effort in a collective reaction to avoid being caught by an approaching predator. We study a nonhierarchical tritrophic system whose predator-prey interactions are described by the rock-paper-scissors game rules. We performed a set of spatial stochastic simulations in which organisms of one of the species can resist predation through a collective strategy. The drop in predation capacity is local, which means that each predator faces a particular opposition depending on the prey group size surrounding it. Considering that the interference in a predator's action depends on the prey's physical and cognitive ability, we explore the role of a conditioning factor that indicates the fraction of the species apt to perform the antipredator strategy. Because of the local unbalancing of the cyclic predator-prey interactions, separate spatial domains, each mainly occupied by a single species, emerge. Unlike the rock-paper-scissors model in which one species is weak for a nonlocal reason, our findings show that if the predation probability of one species is reduced because its individuals face a local antipredator response, that species does not predominate. Instead, the local unbalancing of the rock-paper-scissors model results in the prevalence of the weak species' prey. Finally, the outcomes show that local unevenness may jeopardise biodiversity, with coexistence being more threatened at high mobility.
[ { "created": "Mon, 31 Jan 2022 23:45:49 GMT", "version": "v1" }, { "created": "Tue, 27 Dec 2022 16:02:43 GMT", "version": "v2" } ]
2022-12-29
[ [ "Menezes", "J.", "" ], [ "Batista", "S.", "" ], [ "Tenorio", "M.", "" ], [ "Triaca", "E. A.", "" ], [ "Moura", "B.", "" ] ]
Antipredator behaviour is a self-preservation strategy present in many biological systems, where individuals join the effort in a collective reaction to avoid being caught by an approaching predator. We study a nonhierarchical tritrophic system whose predator-prey interactions are described by the rock-paper-scissors game rules. We performed a set of spatial stochastic simulations in which organisms of one of the species can resist predation through a collective strategy. The drop in predation capacity is local, which means that each predator faces a particular opposition depending on the prey group size surrounding it. Considering that the interference in a predator's action depends on the prey's physical and cognitive ability, we explore the role of a conditioning factor that indicates the fraction of the species apt to perform the antipredator strategy. Because of the local unbalancing of the cyclic predator-prey interactions, separate spatial domains, each mainly occupied by a single species, emerge. Unlike the rock-paper-scissors model in which one species is weak for a nonlocal reason, our findings show that if the predation probability of one species is reduced because its individuals face a local antipredator response, that species does not predominate. Instead, the local unbalancing of the rock-paper-scissors model results in the prevalence of the weak species' prey. Finally, the outcomes show that local unevenness may jeopardise biodiversity, with coexistence being more threatened at high mobility.
1403.8072
Eva Kiszka
Eva Kiszka
A Pre-Docking Filter Based on Image Recognition
Master Thesis, 60 pages
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/3.0/
Molecular docking is a central method in the computer-based screening of compound libraries as a part of the rational approach to drug design. Although the method has proved its competence in predicting binding modes correctly, its inherent complexity puts high demands on computational resources. Moreover, the chemical space to be screened is prohibitively large. Therefore, the application of filtering prior to docking is a promising concept. We implemented a pre-docking filter based on the tangent distance algorithm originally conceived for optical character recognition. The challenging transfer of the method from two-dimensional to three-dimensional data was achieved by representing the molecular structure by a set of density maps extracted from different views of the compound. Additionally, our program applies a binary classification using principal component analysis. Ligand and binding pocket are aligned according to their centroidal axes, enabling a size-based filtering for the purpose of enriching the dataset regarding ligands before docking. The evaluation of our program via redocking produced RMSD values between 8{\AA} and 25{\AA}, indicating that the tangent distance approach is not suited for optimizing the orientation of a ligand and binding pocket. Investigating probable explanations led to the conclusion that a likely cause for these results is the method's known inability to approximate large transformations. A validation of the principal component analysis alone performed better: tests on a dataset of 170 ligands and 6,435 decoys yielded a sensitivity of 0.81, while keeping the runtime within a reasonable timeframe (1 to 4 seconds). The dataset's enrichment increased from 2.64% to 2.82%.
[ { "created": "Mon, 31 Mar 2014 16:20:28 GMT", "version": "v1" } ]
2014-04-01
[ [ "Kiszka", "Eva", "" ] ]
Molecular docking is a central method in the computer-based screening of compound libraries as a part of the rational approach to drug design. Although the method has proved its competence in predicting binding modes correctly, its inherent complexity puts high demands on computational resources. Moreover, the chemical space to be screened is prohibitively large. Therefore, the application of filtering prior to docking is a promising concept. We implemented a pre-docking filter based on the tangent distance algorithm originally conceived for optical character recognition. The challenging transfer of the method from two-dimensional to three-dimensional data was achieved by representing the molecular structure by a set of density maps extracted from different views of the compound. Additionally, our program applies a binary classification using principal component analysis. Ligand and binding pocket are aligned according to their centroidal axes, enabling a size-based filtering for the purpose of enriching the dataset regarding ligands before docking. The evaluation of our program via redocking produced RMSD values between 8{\AA} and 25{\AA}, indicating that the tangent distance approach is not suited for optimizing the orientation of a ligand and binding pocket. Investigating probable explanations led to the conclusion that a likely cause for these results is the method's known inability to approximate large transformations. A validation of the principal component analysis alone performed better: tests on a dataset of 170 ligands and 6,435 decoys yielded a sensitivity of 0.81, while keeping the runtime within a reasonable timeframe (1 to 4 seconds). The dataset's enrichment increased from 2.64% to 2.82%.
1010.6284
Salvatore Torquato
Salvatore Torquato
Toward an Ising Model of Cancer and Beyond
55 pages, 21 figures and 3 tables. To appear in Physical Biology. Added references
null
10.1088/1478-3975/8/1/015017
null
q-bio.CB cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Theoretical and computational tools that can be used in the clinic to predict neoplastic progression and propose individualized optimal treatment strategies to control cancer growth are desired. To develop such a predictive model, one must account for the complex mechanisms involved in tumor growth. Here we review research work that we have done toward the development of an "Ising model" of cancer. The review begins with a description of a minimalist four-dimensional (three in space and one in time) cellular automaton (CA) model of cancer in which healthy cells transition between states (proliferative, hypoxic, and necrotic) according to simple local rules and their present states, which can be viewed as a stripped-down Ising model of cancer. This model is applied to the growth of glioblastoma multiforme, the most malignant of brain cancers. This is followed by a discussion of the extension of the model to study the effect on the tumor dynamics and geometry of a mutated subpopulation. A discussion of how tumor growth is affected by chemotherapeutic treatment is then described. How angiogenesis as well as the heterogeneous and confined environment in which a tumor grows is incorporated in the CA model is discussed. The characterization of the level of organization of the invasive network around a solid tumor using spanning trees is subsequently described. Then, we describe open problems and promising avenues for future research, including the need to develop better molecular-based models that incorporate the true heterogeneous environment over a wide range of length and time scales (via imaging data), cell motility, oncogenes, tumor suppressor genes and cell-cell communication. The need to bring to bear the powerful machinery of the theory of heterogeneous media to better understand the behavior of cancer in its microenvironment is presented.
[ { "created": "Fri, 29 Oct 2010 17:58:04 GMT", "version": "v1" }, { "created": "Tue, 2 Nov 2010 13:34:18 GMT", "version": "v2" } ]
2015-05-20
[ [ "Torquato", "Salvatore", "" ] ]
Theoretical and computational tools that can be used in the clinic to predict neoplastic progression and propose individualized optimal treatment strategies to control cancer growth are desired. To develop such a predictive model, one must account for the complex mechanisms involved in tumor growth. Here we review research work that we have done toward the development of an "Ising model" of cancer. The review begins with a description of a minimalist four-dimensional (three in space and one in time) cellular automaton (CA) model of cancer in which healthy cells transition between states (proliferative, hypoxic, and necrotic) according to simple local rules and their present states, which can be viewed as a stripped-down Ising model of cancer. This model is applied to the growth of glioblastoma multiforme, the most malignant of brain cancers. This is followed by a discussion of the extension of the model to study the effect on the tumor dynamics and geometry of a mutated subpopulation. A discussion of how tumor growth is affected by chemotherapeutic treatment is then described. How angiogenesis as well as the heterogeneous and confined environment in which a tumor grows is incorporated in the CA model is discussed. The characterization of the level of organization of the invasive network around a solid tumor using spanning trees is subsequently described. Then, we describe open problems and promising avenues for future research, including the need to develop better molecular-based models that incorporate the true heterogeneous environment over a wide range of length and time scales (via imaging data), cell motility, oncogenes, tumor suppressor genes and cell-cell communication. The need to bring to bear the powerful machinery of the theory of heterogeneous media to better understand the behavior of cancer in its microenvironment is presented.
q-bio/0310017
Gyorgy Szabo
Gyorgy Szabo and Gustavo Arial Sznaider
Phase transition and selection in a four-species cyclic Lotka-Volterra model
4 pages, 5 figures
Phys. Rev. E 69 (2004) 031911
10.1103/PhysRevE.69.031911
null
q-bio.PE
null
We study a four-species ecological system with cyclic dominance whose individuals are distributed on a square lattice. Randomly chosen individuals migrate to one of the neighboring sites if it is empty or invade this site if occupied by their prey. The cyclic dominance maintains the coexistence of all four species if the concentration of vacant sites is lower than a threshold value. Above the threshold, a symmetry-breaking ordering occurs via growing domains containing only two neutral species inside. These two neutral species can protect each other from the external invaders (predators) and extend their common territory. According to our Monte Carlo simulations the observed phase transition is equivalent to those found in spreading models with two equivalent absorbing states, although the present model has continuous sets of absorbing states with different portions of the two neutral species. The selection mechanism yielding symmetric phases is related to the domain growth process with wide boundaries where the four species coexist.
[ { "created": "Wed, 15 Oct 2003 07:17:03 GMT", "version": "v1" } ]
2009-11-10
[ [ "Szabo", "Gyorgy", "" ], [ "Sznaider", "Gustavo Arial", "" ] ]
We study a four-species ecological system with cyclic dominance whose individuals are distributed on a square lattice. Randomly chosen individuals migrate to one of the neighboring sites if it is empty or invade this site if occupied by their prey. The cyclic dominance maintains the coexistence of all four species if the concentration of vacant sites is lower than a threshold value. Above the threshold, a symmetry-breaking ordering occurs via growing domains containing only two neutral species inside. These two neutral species can protect each other from the external invaders (predators) and extend their common territory. According to our Monte Carlo simulations the observed phase transition is equivalent to those found in spreading models with two equivalent absorbing states, although the present model has continuous sets of absorbing states with different portions of the two neutral species. The selection mechanism yielding symmetric phases is related to the domain growth process with wide boundaries where the four species coexist.
2309.15935
Jacek Banasiak
Jacek Banasiak and Stephane Tchoumi
Multiscale malaria models and their uniform in-time asymptotic analysis
31 pages, 14 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
In this paper, we show that an extension of the classical Tikhonov--Fenichel asymptotic procedure applied to multiscale models of vector-borne diseases, with time scales determined by the dynamics of human and vector populations, yields a simplified model approximating the original one in a consistent, and uniform for large times, way. Furthermore, we construct a higher-order approximation based on the classical Chapman-Enskog procedure of kinetic theory and show, in particular, that it is equivalent to the dynamics on the first-order approximation of the slow manifold in the Fenichel theory.
[ { "created": "Wed, 27 Sep 2023 18:18:57 GMT", "version": "v1" } ]
2023-09-29
[ [ "Banasiak", "Jacek", "" ], [ "Tchoumi", "Stephane", "" ] ]
In this paper, we show that an extension of the classical Tikhonov--Fenichel asymptotic procedure applied to multiscale models of vector-borne diseases, with time scales determined by the dynamics of human and vector populations, yields a simplified model approximating the original one in a consistent, and uniform for large times, way. Furthermore, we construct a higher-order approximation based on the classical Chapman-Enskog procedure of kinetic theory and show, in particular, that it is equivalent to the dynamics on the first-order approximation of the slow manifold in the Fenichel theory.
1810.10389
Marko Popovic
Marko Popovic
Wanted Dead or Alive Extraterrestrial Life Forms (Thermodynamic criterion for life is a growing open system that performs self-assembly processes)
null
null
null
null
q-bio.OT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For more than 100 years, humanity (both specialists and enthusiastic laypeople) has been searching for extraterrestrial life, hoping we are not alone. The first step in the quest for extraterrestrial life is to define what exactly to look for, and where. Thus, the basic definition of living matter is a conditio sine qua non for the quest. The diversity of species on Earth is so large that our quest for extraterrestrial life cannot be limited to forms and shapes present and known to us from our environment. However, there are two formal conditions that must be fulfilled in order for something to be assumed to be living matter. First, it should represent a growing open thermodynamic system (in biological terms, a cell), and thus be a system out of equilibrium. Second, it must perform synthesis, self-assembly and accumulation processes (in biological terms: grow, maintain homeostasis, respond to the environment, reproduce, exchange matter and energy, evolve). Populated planets consist of two components: the biosphere and its environment (geosphere, hydrosphere and atmosphere). Living matter and its environment are out of equilibrium. Thus, candidate planets are dynamic inhomogeneous systems for two reasons. First, a planet receives energy from its star, which leads to disequilibrium for external reasons. Animate matter contributes to disequilibrium for internal reasons: accumulation of matter and self-assembly. In practice, for a screening, an astrobiologist should search for an increase in inhomogeneity on a candidate planet.
[ { "created": "Tue, 23 Oct 2018 14:37:58 GMT", "version": "v1" } ]
2018-10-25
[ [ "Popovic", "Marko", "" ] ]
For more than 100 years, humanity (both specialists and enthusiastic laypeople) has been searching for extraterrestrial life, hoping we are not alone. The first step in the quest for extraterrestrial life is to define what exactly to look for, and where. Thus, the basic definition of living matter is a conditio sine qua non for the quest. The diversity of species on Earth is so large that our quest for extraterrestrial life cannot be limited to forms and shapes present and known to us from our environment. However, there are two formal conditions that must be fulfilled in order for something to be assumed to be living matter. First, it should represent a growing open thermodynamic system (in biological terms, a cell), and thus be a system out of equilibrium. Second, it must perform synthesis, self-assembly and accumulation processes (in biological terms: grow, maintain homeostasis, respond to the environment, reproduce, exchange matter and energy, evolve). Populated planets consist of two components: the biosphere and its environment (geosphere, hydrosphere and atmosphere). Living matter and its environment are out of equilibrium. Thus, candidate planets are dynamic inhomogeneous systems for two reasons. First, a planet receives energy from its star, which leads to disequilibrium for external reasons. Animate matter contributes to disequilibrium for internal reasons: accumulation of matter and self-assembly. In practice, for a screening, an astrobiologist should search for an increase in inhomogeneity on a candidate planet.
1408.7076
Jan Engler
Jan O. Engler, Anna F. Cord, Petra Dieker, J. Wolfgang Waegele, Dennis Roedder
Accounting for the 'network' in the Natura 2000 network: A response to Hochkirch et al. 2013
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Worldwide, we are experiencing an unprecedented, accelerated loss of biodiversity triggered by a bundle of anthropogenic threats such as habitat destruction, environmental pollution and climate change. Despite all efforts of the European biodiversity conservation policy, initiated 20 years ago by the Habitats Directive that provided the legal basis for establishing the Natura 2000 network, the goal to halt the decline of biodiversity in Europe by 2010 has been missed. Hochkirch et al. (2013, Conserv. Lett. 6: 462-467) identified four major shortcomings of the current implementation of the directive concerning prioritization of the annexes, conservation plans, survey systems and financial resources. However, they did not account for the intended network character of the Natura 2000 sites, an aspect of the highest relevance. This response letter addresses this shortcoming, as it is the prerequisite, above any other strategy, for ensuring that a Natura 2020 network is worth its name.
[ { "created": "Tue, 29 Jul 2014 09:05:06 GMT", "version": "v1" } ]
2014-09-01
[ [ "Engler", "Jan O.", "" ], [ "Cord", "Anna F.", "" ], [ "Dieker", "Petra", "" ], [ "Waegele", "J. Wolfgang", "" ], [ "Roedder", "Dennis", "" ] ]
Worldwide, we are experiencing an unprecedented, accelerated loss of biodiversity triggered by a bundle of anthropogenic threats such as habitat destruction, environmental pollution and climate change. Despite all efforts of the European biodiversity conservation policy, initiated 20 years ago by the Habitats Directive that provided the legal basis for establishing the Natura 2000 network, the goal to halt the decline of biodiversity in Europe by 2010 has been missed. Hochkirch et al. (2013, Conserv. Lett. 6: 462-467) identified four major shortcomings of the current implementation of the directive concerning prioritization of the annexes, conservation plans, survey systems and financial resources. However, they did not account for the intended network character of the Natura 2000 sites, an aspect of the highest relevance. This response letter addresses this shortcoming, as it is the prerequisite, above any other strategy, for ensuring that a Natura 2020 network is worth its name.
2404.13728
Alex McAvoy
Christoph Hauert, Alex McAvoy
Frequency-dependent returns in nonlinear public goods games
18 pages; comments welcome
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
When individuals interact in groups, the evolution of cooperation is traditionally modeled using the framework of public goods games. Overwhelmingly, these models assume that the return of the public good depends linearly on the fraction of contributors. In contrast, it seems natural that in real life public goods interactions the return most likely depends on the size of the investor pool as well. Here, we consider a model to account for such nonlinearities in which the multiplication factor (marginal per capita return) for the public good depends on how many contribute. We find that nonlinear public goods interactions can break the curse of dominant defection in linear public goods interactions and give rise to richer dynamical outcomes in evolutionary settings. We provide an in-depth analysis of the more varied decisions by the classical rational player in nonlinear public goods interactions as well as a mechanistic, microscopic derivation of the evolutionary outcomes for the stochastic dynamics in finite populations and in the deterministic limit of infinite populations. This kind of nonlinearity provides a natural way to model public goods with diminishing returns as well as economies of scale.
[ { "created": "Sun, 21 Apr 2024 18:14:46 GMT", "version": "v1" } ]
2024-04-23
[ [ "Hauert", "Christoph", "" ], [ "McAvoy", "Alex", "" ] ]
When individuals interact in groups, the evolution of cooperation is traditionally modeled using the framework of public goods games. Overwhelmingly, these models assume that the return of the public good depends linearly on the fraction of contributors. In contrast, it seems natural that in real life public goods interactions the return most likely depends on the size of the investor pool as well. Here, we consider a model to account for such nonlinearities in which the multiplication factor (marginal per capita return) for the public good depends on how many contribute. We find that nonlinear public goods interactions can break the curse of dominant defection in linear public goods interactions and give rise to richer dynamical outcomes in evolutionary settings. We provide an in-depth analysis of the more varied decisions by the classical rational player in nonlinear public goods interactions as well as a mechanistic, microscopic derivation of the evolutionary outcomes for the stochastic dynamics in finite populations and in the deterministic limit of infinite populations. This kind of nonlinearity provides a natural way to model public goods with diminishing returns as well as economies of scale.
q-bio/0603002
Debojyoti Dutta
Debojyoti Dutta, Ting Chen
Mining Mass Spectra: Metric Embeddings and Fast Near Neighbor Search
Computational Proteomics, Mass Spectrometry
null
null
null
q-bio.QM q-bio.OT
null
Mining large-scale high-throughput tandem mass spectrometry data sets is a very important problem in mass spectrometry-based protein identification. One of the fundamental problems in large-scale mining of spectra is to design appropriate metrics and algorithms to avoid all-pairwise comparisons of spectra. In this paper, we present a general framework based on vector spaces to avoid pairwise comparisons. We first robustly embed spectra in a high-dimensional space in a novel fashion and then apply fast approximate near neighbor algorithms for tasks such as constructing filters for database search, indexing and similarity searching. We formally prove that our embedding has low distortion compared to the cosine similarity, and, along with locality sensitive hashing (LSH), we design filters for database search that can filter out more than 98.9% of peptides (118 times fewer) while missing at most 0.29% of the correct sequences. We then show how our framework can be used in similarity searching, which can then be used to detect tight clusters or replicates. On average, for a cluster size of 16 spectra, LSH misses only 1 spectrum and admits only 1 false spectrum. In addition, our framework in conjunction with dimension reduction techniques allows us to visualize large datasets in 2D space. Our framework also has the potential to embed and compare datasets with post-translational modifications (PTMs).
[ { "created": "Wed, 1 Mar 2006 07:42:19 GMT", "version": "v1" } ]
2007-05-23
[ [ "Dutta", "Debojyoti", "" ], [ "Chen", "Ting", "" ] ]
Mining large-scale high-throughput tandem mass spectrometry data sets is a very important problem in mass spectrometry-based protein identification. One of the fundamental problems in large-scale mining of spectra is to design appropriate metrics and algorithms to avoid all-pairwise comparisons of spectra. In this paper, we present a general framework based on vector spaces to avoid pairwise comparisons. We first robustly embed spectra in a high-dimensional space in a novel fashion and then apply fast approximate near neighbor algorithms for tasks such as constructing filters for database search, indexing and similarity searching. We formally prove that our embedding has low distortion compared to the cosine similarity, and, along with locality sensitive hashing (LSH), we design filters for database search that can filter out more than 98.9% of peptides (118 times fewer) while missing at most 0.29% of the correct sequences. We then show how our framework can be used in similarity searching, which can then be used to detect tight clusters or replicates. On average, for a cluster size of 16 spectra, LSH misses only 1 spectrum and admits only 1 false spectrum. In addition, our framework in conjunction with dimension reduction techniques allows us to visualize large datasets in 2D space. Our framework also has the potential to embed and compare datasets with post-translational modifications (PTMs).
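As a generic illustration of the two ingredients named in the abstract above, a vector-space embedding of spectra plus locality-sensitive hashing for cosine similarity, here is a small sketch. The binned-intensity embedding merely stands in for the paper's embedding (which is not reproduced here), the signatures use the standard random-hyperplane LSH family, and the peak lists and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_spectrum(peaks, dim=2000, bin_width=1.0):
    """Toy embedding: bin peak (m/z, intensity) pairs into a fixed-length vector."""
    v = np.zeros(dim)
    for mz, intensity in peaks:
        b = int(mz / bin_width)
        if 0 <= b < dim:
            v[b] += intensity
    return v

def lsh_signature(v, hyperplanes):
    """Sign-of-projection (random hyperplane) LSH; collisions are more likely
    for vectors with high cosine similarity."""
    return (hyperplanes @ v) >= 0

dim, n_bits = 2000, 64
hyperplanes = rng.standard_normal((n_bits, dim))

# Two hypothetical, similar spectra given as (m/z, intensity) peak lists.
a = embed_spectrum([(500.2, 10.0), (742.8, 3.5), (1201.4, 7.1)], dim)
b = embed_spectrum([(500.3, 9.0), (742.8, 4.0), (1201.5, 6.5)], dim)
sig_a, sig_b = lsh_signature(a, hyperplanes), lsh_signature(b, hyperplanes)

# The fraction of matching bits estimates 1 - theta/pi, theta = angle between a and b.
print((sig_a == sig_b).mean())
```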
1903.06597
Clarissa Braccia
Clarissa Braccia, Meritxell Pons Espinal, Mattia Pini, Davide De Pietri Tonelli, and Andrea Armirotti
A new SWATH ion library for mouse adult hippocampal neural stem cells
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Over the past several years, the SWATH data-independent acquisition protocol (Sequential Window acquisition of All THeoretical mass spectra) has become a cornerstone for the worldwide proteomics community. In this approach, a high-resolution quadrupole-ToF mass spectrometer acquires thousands of MS/MS spectra by selecting not just a single precursor at a time, but by allowing a broad m/z range to be fragmented. This acquisition window is then sequentially moved from the lowest to the highest mass selection range. This technique enables the acquisition of thousands of high-resolution MS/MS spectra per minute in a standard LC-MS run. In the subsequent data analysis phase, the corresponding dataset is searched in a triple quadrupole-like mode, thus not considering the whole MS/MS scan spectrum, but searching for several precursor-to-fragment transitions that identify and quantify the corresponding peptide. This search is made possible with the use of an ion library, previously acquired in a classical data-dependent, full-spectrum mode. The SWATH protocol, combining the protein identification power of high-resolution MS/MS spectra with the robustness and accuracy in analyte quantification of triple-quad targeted workflows, has become very popular in proteomics research. The major drawback lies in the ion library itself, which is normally demanding and time-consuming to build. Conversely, through the realignment of chromatographic retention times, an ion library of a given proteome can relatively easily be tailored to any proteomics experiment done on the same proteome. We are thus sharing with the worldwide proteomics community our newly acquired ion library of mouse adult hippocampal neural stem cells. Given the growing effort in neuroscience research involving proteomics experiments, we believe that this data might be of great help for the neuroscience community.
[ { "created": "Fri, 15 Mar 2019 15:21:37 GMT", "version": "v1" } ]
2019-03-18
[ [ "Braccia", "Clarissa", "" ], [ "Espinal", "Meritxell Pons", "" ], [ "Pini", "Mattia", "" ], [ "Tonelli", "Davide De Pietri", "" ], [ "Armirotti", "Andrea", "" ] ]
Over the past several years, the SWATH data-independent acquisition protocol (Sequential Window acquisition of All THeoretical mass spectra) has become a cornerstone for the worldwide proteomics community. In this approach, a high-resolution quadrupole-ToF mass spectrometer acquires thousands of MS/MS spectra by selecting not just a single precursor at a time, but by allowing a broad m/z range to be fragmented. This acquisition window is then sequentially moved from the lowest to the highest mass selection range. This technique enables the acquisition of thousands of high-resolution MS/MS spectra per minute in a standard LC-MS run. In the subsequent data analysis phase, the corresponding dataset is searched in a triple quadrupole-like mode, thus not considering the whole MS/MS scan spectrum, but searching for several precursor-to-fragment transitions that identify and quantify the corresponding peptide. This search is made possible with the use of an ion library, previously acquired in a classical data-dependent, full-spectrum mode. The SWATH protocol, combining the protein identification power of high-resolution MS/MS spectra with the robustness and accuracy in analyte quantification of triple-quad targeted workflows, has become very popular in proteomics research. The major drawback lies in the ion library itself, which is normally demanding and time-consuming to build. Conversely, through the realignment of chromatographic retention times, an ion library of a given proteome can relatively easily be tailored to any proteomics experiment done on the same proteome. We are thus sharing with the worldwide proteomics community our newly acquired ion library of mouse adult hippocampal neural stem cells. Given the growing effort in neuroscience research involving proteomics experiments, we believe that this data might be of great help for the neuroscience community.
2107.05579
Mikko Pakkanen
Mikko S. Pakkanen, Xenia Miscouridou, Matthew J. Penn, Charles Whittaker, Tresnia Berah, Swapnil Mishra, Thomas A. Mellan, Samir Bhatt
Unifying incidence and prevalence under a time-varying general branching process
35 pages, 4 figures, v4: major revision, including a new argument for the equivalence of incidence equations
null
null
null
q-bio.PE math.PR q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Renewal equations are a popular approach used in modelling the number of new infections, i.e., incidence, in an outbreak. We develop a stochastic model of an outbreak based on a time-varying variant of the Crump-Mode-Jagers branching process. This model accommodates a time-varying reproduction number and a time-varying distribution for the generation interval. We then derive renewal-like integral equations for incidence, cumulative incidence and prevalence under this model. We show that the equations for incidence and prevalence are consistent with the so-called back-calculation relationship. We analyse two particular cases of these integral equations, one that arises from a Bellman-Harris process and one that arises from an inhomogeneous Poisson process model of transmission. We also show that the incidence integral equations that arise from both of these specific models agree with the renewal equation used ubiquitously in infectious disease modelling. We present a numerical discretisation scheme to solve these equations, and use this scheme to estimate rates of transmission from serological prevalence of SARS-CoV-2 in the UK and historical incidence data on Influenza, Measles, SARS and Smallpox.
[ { "created": "Mon, 12 Jul 2021 16:58:29 GMT", "version": "v1" }, { "created": "Fri, 14 Jan 2022 18:04:03 GMT", "version": "v2" }, { "created": "Wed, 23 Feb 2022 17:16:29 GMT", "version": "v3" }, { "created": "Wed, 21 Dec 2022 16:04:14 GMT", "version": "v4" } ]
2022-12-22
[ [ "Pakkanen", "Mikko S.", "" ], [ "Miscouridou", "Xenia", "" ], [ "Penn", "Matthew J.", "" ], [ "Whittaker", "Charles", "" ], [ "Berah", "Tresnia", "" ], [ "Mishra", "Swapnil", "" ], [ "Mellan", "Thomas A.", "" ], [ "Bhatt", "Samir", "" ] ]
Renewal equations are a popular approach used in modelling the number of new infections, i.e., incidence, in an outbreak. We develop a stochastic model of an outbreak based on a time-varying variant of the Crump-Mode-Jagers branching process. This model accommodates a time-varying reproduction number and a time-varying distribution for the generation interval. We then derive renewal-like integral equations for incidence, cumulative incidence and prevalence under this model. We show that the equations for incidence and prevalence are consistent with the so-called back-calculation relationship. We analyse two particular cases of these integral equations, one that arises from a Bellman-Harris process and one that arises from an inhomogeneous Poisson process model of transmission. We also show that the incidence integral equations that arise from both of these specific models agree with the renewal equation used ubiquitously in infectious disease modelling. We present a numerical discretisation scheme to solve these equations, and use this scheme to estimate rates of transmission from serological prevalence of SARS-CoV-2 in the UK and historical incidence data on Influenza, Measles, SARS and Smallpox.
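The "renewal equation used ubiquitously in infectious disease modelling" that the abstract above refers to has a simple discrete form, $I_t = R_t \sum_{s} g_s I_{t-s}$, with $g$ the generation-interval distribution. A minimal sketch of that discretisation follows; it is not the authors' scheme for the full incidence/prevalence system, and the reproduction-number profile and generation interval below are illustrative assumptions.

```python
import numpy as np

def simulate_incidence(R, gen_interval, seed_incidence):
    """Discrete renewal equation: I_t = R_t * sum_s g_s * I_{t-s}.

    R              : reproduction number per time step
    gen_interval   : probabilities g_1..g_m of the generation interval (sums to 1)
    seed_incidence : incidence assumed for the initial time steps
    """
    g = np.asarray(gen_interval, dtype=float)
    T, m = len(R), len(g)
    I = np.zeros(T)
    I[: len(seed_incidence)] = seed_incidence
    for t in range(len(seed_incidence), T):
        s = min(t, m)
        past = I[t - s : t][::-1]          # I[t-1], I[t-2], ..., I[t-s]
        I[t] = R[t] * np.dot(g[:s], past)
    return I

# Illustrative scenario: R drops from 2.0 to 0.8 halfway through 60 steps,
# with a 7-day generation-interval distribution of mean roughly 4 days.
T = 60
R = np.where(np.arange(T) < 30, 2.0, 0.8)
g = np.array([0.05, 0.15, 0.25, 0.25, 0.15, 0.10, 0.05])
print(simulate_incidence(R, g, seed_incidence=[1.0]).round(1)[:10])
```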
1808.01279
Charles Delahunt
Charles B Delahunt, Pedro D Maia, J. Nathan Kutz
Built to Last: Functional and structural mechanisms in the moth olfactory network mitigate effects of neural injury
17 pages, 10 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most organisms suffer neuronal damage throughout their lives, which can impair performance of core behaviors. Their neural circuits need to maintain function despite injury, which in particular requires preserving key system outputs. In this work, we explore whether and how certain structural and functional neuronal network motifs act as injury mitigation mechanisms. Specifically, we examine how (i) Hebbian learning, (ii) high levels of noise, and (iii) parallel inhibitory and excitatory connections contribute to the robustness of the olfactory system in the Manduca sexta moth. We simulate injuries on a detailed computational model of the moth olfactory network calibrated to in vivo data. The injuries are modeled on focal axonal swellings, a ubiquitous form of axonal pathology observed in traumatic brain injuries and other brain disorders. Axonal swellings effectively compromise spike train propagation along the axon, reducing the effective neural firing rate delivered to downstream neurons. All three of the network motifs examined significantly mitigate the effects of injury on readout neurons, either by reducing injury's impact on readout neuron responses or by restoring these responses to pre-injury levels. These motifs may thus be partially explained by their value as adaptive mechanisms to minimize the functional effects of neural injury. More generally, robustness to injury is a vital design principle to consider when analyzing neural systems.
[ { "created": "Fri, 3 Aug 2018 17:58:45 GMT", "version": "v1" }, { "created": "Fri, 6 Dec 2019 01:03:51 GMT", "version": "v2" }, { "created": "Fri, 11 Sep 2020 22:38:58 GMT", "version": "v3" } ]
2020-09-15
[ [ "Delahunt", "Charles B", "" ], [ "Maia", "Pedro D", "" ], [ "Kutz", "J. Nathan", "" ] ]
Most organisms suffer neuronal damage throughout their lives, which can impair performance of core behaviors. Their neural circuits need to maintain function despite injury, which in particular requires preserving key system outputs. In this work, we explore whether and how certain structural and functional neuronal network motifs act as injury mitigation mechanisms. Specifically, we examine how (i) Hebbian learning, (ii) high levels of noise, and (iii) parallel inhibitory and excitatory connections contribute to the robustness of the olfactory system in the Manduca sexta moth. We simulate injuries on a detailed computational model of the moth olfactory network calibrated to in vivo data. The injuries are modeled on focal axonal swellings, a ubiquitous form of axonal pathology observed in traumatic brain injuries and other brain disorders. Axonal swellings effectively compromise spike train propagation along the axon, reducing the effective neural firing rate delivered to downstream neurons. All three of the network motifs examined significantly mitigate the effects of injury on readout neurons, either by reducing injury's impact on readout neuron responses or by restoring these responses to pre-injury levels. These motifs may thus be partially explained by their value as adaptive mechanisms to minimize the functional effects of neural injury. More generally, robustness to injury is a vital design principle to consider when analyzing neural systems.
1108.3720
Indrani Bose
Sayantari Ghosh, Subhasis Banerjee and Indrani Bose
Emergent Bistability : Effects of Additive and Multiplicative Noise
16 pages, 11 figures, version accepted for publication in Eur. Phys. J. E
Eur. Phys. J. E. (2012) 35: 11
null
null
q-bio.QM cond-mat.stat-mech q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Positive feedback and cooperativity in the regulation of gene expression are generally considered to be necessary for obtaining bistable expression states. Recently, a novel mechanism of bistability termed emergent bistability has been proposed which involves only positive feedback and no cooperativity in the regulation. An additional positive feedback loop is effectively generated due to the inhibition of cellular growth by the synthesized proteins. The mechanism, demonstrated for a synthetic circuit, may be prevalent in natural systems also as some recent experimental results appear to suggest. In this paper, we study the effects of additive and multiplicative noise on the dynamics governing emergent bistability. The calculational scheme employed is based on the Langevin and Fokker-Planck formalisms. The steady state probability distributions of protein levels and the mean first passage times are computed for different noise strengths and system parameters. In the region of bistability, the bimodal probability distribution is shown to be a linear combination of a lognormal and a Gaussian distribution. The variances of the individual distributions and the relative weights of the distributions are further calculated for varying noise strengths and system parameters. The experimental relevance of the model results is also pointed out.
[ { "created": "Thu, 18 Aug 2011 11:14:23 GMT", "version": "v1" }, { "created": "Mon, 30 Jan 2012 09:03:39 GMT", "version": "v2" } ]
2012-10-22
[ [ "Ghosh", "Sayantari", "" ], [ "Banerjee", "Subhasis", "" ], [ "Bose", "Indrani", "" ] ]
Positive feedback and cooperativity in the regulation of gene expression are generally considered to be necessary for obtaining bistable expression states. Recently, a novel mechanism of bistability termed emergent bistability has been proposed which involves only positive feedback and no cooperativity in the regulation. An additional positive feedback loop is effectively generated due to the inhibition of cellular growth by the synthesized proteins. The mechanism, demonstrated for a synthetic circuit, may be prevalent in natural systems also as some recent experimental results appear to suggest. In this paper, we study the effects of additive and multiplicative noise on the dynamics governing emergent bistability. The calculational scheme employed is based on the Langevin and Fokker-Planck formalisms. The steady state probability distributions of protein levels and the mean first passage times are computed for different noise strengths and system parameters. In the region of bistability, the bimodal probability distribution is shown to be a linear combination of a lognormal and a Gaussian distribution. The variances of the individual distributions and the relative weights of the distributions are further calculated for varying noise strengths and system parameters. The experimental relevance of the model results is also pointed out.
1704.00188
Ingemar Kaj
Daniah Tahir, Sylvain Gl\'emin, Martin Lascoux, Ingemar Kaj
Modeling trait-dependent evolution on a random species tree
null
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the evolution of binary traits, which affects the birth and survival of species and also the rate of molecular evolution, remains challenging. A typical example is the evolution of mating systems in plant species. In this work, we present a probabilistic modeling framework for binary trait, random species trees, in which the number of species and their traits are represented by a two-type, continuous time Markov branching process. We develop our model by considering the impact of mating systems on dN/dS, the ratio of nonsynonymous to synonymous substitutions. A methodology is introduced which enables us to match model parameters with parameter estimates from phylogenetic tree data. The properties obtained from the model are applied to outcrossing and selfing species trees in the Geraniaceae and Solanaceae family. This allows us to investigate not only the branching tree rates, but also the mutation rates and the intensity of selection.
[ { "created": "Sat, 1 Apr 2017 15:54:49 GMT", "version": "v1" } ]
2017-04-04
[ [ "Tahir", "Daniah", "" ], [ "Glémin", "Sylvain", "" ], [ "Lascoux", "Martin", "" ], [ "Kaj", "Ingemar", "" ] ]
Understanding the evolution of binary traits, which affects the birth and survival of species and also the rate of molecular evolution, remains challenging. A typical example is the evolution of mating systems in plant species. In this work, we present a probabilistic modeling framework for binary trait, random species trees, in which the number of species and their traits are represented by a two-type, continuous time Markov branching process. We develop our model by considering the impact of mating systems on dN/dS, the ratio of nonsynonymous to synonymous substitutions. A methodology is introduced which enables us to match model parameters with parameter estimates from phylogenetic tree data. The properties obtained from the model are applied to outcrossing and selfing species trees in the Geraniaceae and Solanaceae family. This allows us to investigate not only the branching tree rates, but also the mutation rates and the intensity of selection.
2010.04123
Pedro Carelli
Nastaran Lotfi, Tha\'is Feliciano, Leandro A. A. Aguiar, Thais Priscila Lima Silva, Tawan T. A. Carvalho, Osvaldo A. Rosso, Mauro Copelli, Fernanda S. Matias, and Pedro V. Carelli
Statistical complexity is maximized close to criticality in cortical dynamics
8 pages, 6 figures
Phys. Rev. E 103, 012415 (2021)
10.1103/PhysRevE.103.012415
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complex systems are typically characterized as an intermediate situation between a completely regular structure and a random system. Brain signals can be studied as a striking example of such systems: cortical states can range from highly synchronous and ordered neuronal activity (with higher spiking variability) to desynchronized and disordered regimes (with lower spiking variability). It has been recently shown, by testing independent signatures of criticality, that a phase transition occurs in a cortical state of intermediate spiking variability. Here, we use a symbolic information approach to show that, despite the monotonic increase of the Shannon entropy between ordered and disordered regimes, we can determine an intermediate state of maximum complexity based on the Jensen disequilibrium measure. More specifically, we show that statistical complexity is maximized close to criticality for cortical spiking data of urethane-anesthetized rats, as well as for a network model of excitable elements that presents a critical point of a non-equilibrium phase transition.
[ { "created": "Thu, 8 Oct 2020 17:02:23 GMT", "version": "v1" } ]
2021-02-03
[ [ "Lotfi", "Nastaran", "" ], [ "Feliciano", "Thaís", "" ], [ "Aguiar", "Leandro A. A.", "" ], [ "Silva", "Thais Priscila Lima", "" ], [ "Carvalho", "Tawan T. A.", "" ], [ "Rosso", "Osvaldo A.", "" ], [ "Copelli", "Mauro", "" ], [ "Matias", "Fernanda S.", "" ], [ "Carelli", "Pedro V.", "" ] ]
Complex systems are typically characterized as an intermediate situation between a completely regular structure and a random system. Brain signals can be studied as a striking example of such systems: cortical states can range from highly synchronous and ordered neuronal activity (with higher spiking variability) to desynchronized and disordered regimes (with lower spiking variability). It has been recently shown, by testing independent signatures of criticality, that a phase transition occurs in a cortical state of intermediate spiking variability. Here, we use a symbolic information approach to show that, despite the monotonic increase of the Shannon entropy between ordered and disordered regimes, we can determine an intermediate state of maximum complexity based on the Jensen disequilibrium measure. More specifically, we show that statistical complexity is maximized close to criticality for cortical spiking data of urethane-anesthetized rats, as well as for a network model of excitable elements that presents a critical point of a non-equilibrium phase transition.
2308.04387
Ricardo Henriques Prof
Estibaliz G\'omez-de-Mariscal, Mario Del Rosario, Joanna W Pylv\"an\"ainen, Guillaume Jacquemet, Ricardo Henriques
Harnessing Artificial Intelligence To Reduce Phototoxicity in Live Imaging
11 pages, 4 figures
null
10.1242/jcs.261545
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Fluorescence microscopy, widely used in the study of living cells, tissues, and organisms, often faces the challenge of photodamage. This is primarily caused by the interaction between light and biochemical components during the imaging process, leading to compromised accuracy and reliability of biological results. Methods necessitating extended high-intensity illumination, such as super-resolution microscopy or thick sample imaging, are particularly susceptible to this issue. As part of the solution to these problems, advanced imaging approaches involving artificial intelligence (AI) have been developed. Here we underscore the necessity of establishing constraints to maintain light-induced damage at levels that permit cells to sustain their live behaviour. From this perspective, data-driven live-cell imaging bears significant potential in aiding the development of AI-enhanced photodamage-aware microscopy. These technologies could streamline precise observations of natural biological dynamics while minimising phototoxicity risks.
[ { "created": "Tue, 8 Aug 2023 16:41:30 GMT", "version": "v1" } ]
2024-04-01
[ [ "Gómez-de-Mariscal", "Estibaliz", "" ], [ "Del Rosario", "Mario", "" ], [ "Pylvänäinen", "Joanna W", "" ], [ "Jacquemet", "Guillaume", "" ], [ "Henriques", "Ricardo", "" ] ]
Fluorescence microscopy, widely used in the study of living cells, tissues, and organisms, often faces the challenge of photodamage. This is primarily caused by the interaction between light and biochemical components during the imaging process, leading to compromised accuracy and reliability of biological results. Methods necessitating extended high-intensity illumination, such as super-resolution microscopy or thick sample imaging, are particularly susceptible to this issue. As part of the solution to these problems, advanced imaging approaches involving artificial intelligence (AI) have been developed. Here we underscore the necessity of establishing constraints to maintain light-induced damage at levels that permit cells to sustain their live behaviour. From this perspective, data-driven live-cell imaging bears significant potential in aiding the development of AI-enhanced photodamage-aware microscopy. These technologies could streamline precise observations of natural biological dynamics while minimising phototoxicity risks.
1402.3709
Jose Fontanari
Jose F. Fontanari and Maurizio Serva
Effect of migration in a diffusion model for template coexistence in protocells
null
Bulletin of Mathematical Biology 76 (2014) 654-672
10.1007/s11538-014-9937-7
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The compartmentalization of distinct templates in protocells and the exchange of templates between them (migration) are key elements of a modern scenario for prebiotic evolution. Here we use the diffusion approximation of population genetics to study analytically the steady-state properties of such a prebiotic scenario. The coexistence of distinct template types inside a protocell is achieved by a selective pressure at the protocell level (group selection) favoring protocells with a mixed template composition. In the degenerate case, where the templates have the same replication rate, we find that a vanishingly small migration rate suffices to eliminate the segregation effect of random drift and so to promote coexistence. In the non-degenerate case, a small migration rate greatly boosts coexistence as compared with the situation where there is no migration. However, increasing the migration rate beyond a critical value leads to the complete dominance of the more efficient template type (homogeneous regime). In this case, we find a continuous phase transition separating the homogeneous and the coexistence regimes, with the order parameter vanishing linearly with the distance to the transition point.
[ { "created": "Sat, 15 Feb 2014 18:14:16 GMT", "version": "v1" } ]
2014-03-27
[ [ "Fontanari", "Jose F.", "" ], [ "Serva", "Maurizio", "" ] ]
The compartmentalization of distinct templates in protocells and the exchange of templates between them (migration) are key elements of a modern scenario for prebiotic evolution. Here we use the diffusion approximation of population genetics to study analytically the steady-state properties of such a prebiotic scenario. The coexistence of distinct template types inside a protocell is achieved by a selective pressure at the protocell level (group selection) favoring protocells with a mixed template composition. In the degenerate case, where the templates have the same replication rate, we find that a vanishingly small migration rate suffices to eliminate the segregation effect of random drift and so to promote coexistence. In the non-degenerate case, a small migration rate greatly boosts coexistence as compared with the situation where there is no migration. However, increasing the migration rate beyond a critical value leads to the complete dominance of the more efficient template type (homogeneous regime). In this case, we find a continuous phase transition separating the homogeneous and the coexistence regimes, with the order parameter vanishing linearly with the distance to the transition point.
1708.09462
Majid Jaberi-Douraki
Majid Jaberi Douraki
Continuous and discrete dynamics of a deterministic model of HIV infection
null
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a mathematical model of human immunodeficiency virus (HIV) infection in the presence of combination therapy that includes within-host infectious dynamics. The deterministic model requires us to analyze the asymptotic stability of two distinct steady states, the disease-free and endemic equilibria. Previous results have focused on investigating the global asymptotic stability of the trivial steady state using an implicit finite-difference method, which generates a system of difference equations. We, instead, provide analytic solutions and long-term attractive behavior for the endemic steady state using the theory of difference equations. The dynamics of the estimated model is determined by a certain threshold quantity that maintains the immune response at a sufficient level. The result also indicates that a forward bifurcation occurs in the model when the disease-free equilibrium loses its stability and a stable endemic equilibrium appears as the basic reproduction number exceeds unity. In this scenario, the classical requirement of the reproduction number being less than unity becomes a necessary and sufficient condition for disease mitigation. When the associated reproduction number is in excess of unity, a stable endemic equilibrium emerges alongside an unstable disease-free equilibrium (leading to the persistence of HIV within infected individuals). The attractivity of the model reveals that the disease-free equilibrium is globally asymptotically stable under certain assumptions. A comparison between the continuous and estimated discrete models is also provided to give a clearer understanding of the behavioral dynamics of the disease model. Finally, we show that the associated estimation method is very robust in the sense of numerical stability, since the equilibria and the stability conditions are independent of the time step.
[ { "created": "Wed, 30 Aug 2017 20:43:41 GMT", "version": "v1" }, { "created": "Fri, 1 Sep 2017 14:01:06 GMT", "version": "v2" } ]
2017-09-04
[ [ "Douraki", "Majid Jaberi", "" ] ]
We study a mathematical model of human immunodeficiency virus (HIV) infection in the presence of combination therapy that includes within-host infectious dynamics. The deterministic model requires us to analyze the asymptotic stability of two distinct steady states, the disease-free and endemic equilibria. Previous results have focused on investigating the global asymptotic stability of the trivial steady state using an implicit finite-difference method, which generates a system of difference equations. We, instead, provide analytic solutions and long-term attractive behavior for the endemic steady state using the theory of difference equations. The dynamics of the estimated model is determined by a certain threshold quantity that maintains the immune response at a sufficient level. The result also indicates that a forward bifurcation occurs in the model when the disease-free equilibrium loses its stability and a stable endemic equilibrium appears as the basic reproduction number exceeds unity. In this scenario, the classical requirement of the reproduction number being less than unity becomes a necessary and sufficient condition for disease mitigation. When the associated reproduction number is in excess of unity, a stable endemic equilibrium emerges alongside an unstable disease-free equilibrium (leading to the persistence of HIV within infected individuals). The attractivity of the model reveals that the disease-free equilibrium is globally asymptotically stable under certain assumptions. A comparison between the continuous and estimated discrete models is also provided to give a clearer understanding of the behavioral dynamics of the disease model. Finally, we show that the associated estimation method is very robust in the sense of numerical stability, since the equilibria and the stability conditions are independent of the time step.
0804.4359
Denis Semenov A.
Denis A. Semenov
Evolution of the genetic code. Why are there strong and weak letter doublets? The first gene, the first protein. Early (ancient) biosynthesis of protein
9 pages, 1 table
null
null
null
q-bio.BM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The idea of the evolution of the genetic code from the CG to the CGUA alphabet has been developed further. The assumption of the originally triplet structure of the genetic code has been substantiated. The hypothesis of the emergence of stop codons at the early stage of the evolution of the genetic code has been additionally supported. The existence of strong and weak letter doublets of codons and the symmetry of strong doublets in the genetic code table, discovered by Rumer, have been explained. A hypothesis concerning the primary structure of the first gene and the first protein has been proposed.
[ { "created": "Mon, 28 Apr 2008 09:45:38 GMT", "version": "v1" } ]
2008-04-29
[ [ "Semenov", "Denis A.", "" ] ]
The idea of the evolution of the genetic code from the CG to the CGUA alphabet has been developed further. The assumption of the originally triplet structure of the genetic code has been substantiated. The hypothesis of the emergence of stop codons at the early stage of the evolution of the genetic code has been additionally supported. The existence of strong and weak letter doublets of codons and the symmetry of strong doublets in the genetic code table, discovered by Rumer, have been explained. A hypothesis concerning the primary structure of the first gene and the first protein has been proposed.
2112.09929
Pierre Schaus
Frederic Docquier and Nicolas Golenvaux and Pierre Schaus
Are Travel Bans the Answer to Stopping the Spread of COVID-19 Variants? Lessons from a Multi-Country SIR Model
null
null
null
null
q-bio.PE
http://creativecommons.org/publicdomain/zero/1.0/
Detections of mutations of the SARS-CoV-2 virus gave rise to new packages of interventions. Among them, international travel restrictions have been one of the fastest and most visible responses to limit the spread of the variants. While inducing large economic losses, the epidemiological consequences of such travel restrictions are highly uncertain. They may be poorly effective when the new highly transmissible strain of the virus already circulates in many regions. Assessing the effectiveness of travel bans is difficult given the paucity of data on daily cross-border mobility and on existing variant circulation. The question is topical and timely as the new omicron variant -- classified as a variant of concern by WHO -- has been detected in Southern Africa, and perceived as (potentially) more contagious than previous strains. In this study, we develop a multi-country compartmental model of the SIR type. We use it to simulate the spread of a new variant across European countries, and to assess the effectiveness of unilateral and multilateral travel bans.
[ { "created": "Sat, 18 Dec 2021 13:30:17 GMT", "version": "v1" } ]
2021-12-21
[ [ "Docquier", "Frederic", "" ], [ "Golenvaux", "Nicolas", "" ], [ "Schaus", "Pierre", "" ] ]
Detections of mutations of the SARS-CoV-2 virus gave rise to new packages of interventions. Among them, international travel restrictions have been one of the fastest and most visible responses to limit the spread of the variants. While inducing large economic losses, the epidemiological consequences of such travel restrictions are highly uncertain. They may be poorly effective when the new highly transmissible strain of the virus already circulates in many regions. Assessing the effectiveness of travel bans is difficult given the paucity of data on daily cross-border mobility and on existing variant circulation. The question is topical and timely as the new omicron variant -- classified as a variant of concern by WHO -- has been detected in Southern Africa, and perceived as (potentially) more contagious than previous strains. In this study, we develop a multi-country compartmental model of the SIR type. We use it to simulate the spread of a new variant across European countries, and to assess the effectiveness of unilateral and multilateral travel bans.
0906.0685
Marcus Kaiser
Marcus Kaiser, Claus C. Hilgetag, Arjen van Ooyen
A simple rule for axon outgrowth and synaptic competition generates realistic connection lengths and filling fractions
31 pages (incl. supplementary information); Cerebral Cortex Advance Access published online on May 12, 2009
Cerebral Cortex 2009 19(12):3001-3010
10.1093/cercor/bhp071
null
q-bio.NC physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural connectivity at the cellular and mesoscopic level appears very specific and is presumed to arise from highly specific developmental mechanisms. However, there are general shared features of connectivity in systems as different as the networks formed by individual neurons in Caenorhabditis elegans or in rat visual cortex and the mesoscopic circuitry of cortical areas in the mouse, macaque, and human brain. In all these systems, connection length distributions have very similar shapes, with an initial large peak and a long flat tail representing the admixture of long-distance connections to mostly short-distance connections. Furthermore, not all potentially possible synapses are formed, and only a fraction of axons (called filling fraction) establish synapses with spatially neighboring neurons. We explored what aspects of these connectivity patterns can be explained simply by random axonal outgrowth. We found that random axonal growth away from the soma can already reproduce the known distance distribution of connections. We also observed that experimentally observed filling fractions can be generated by competition for available space at the target neurons--a model markedly different from previous explanations. These findings may serve as a baseline model for the development of connectivity that can be further refined by more specific mechanisms.
[ { "created": "Wed, 3 Jun 2009 10:49:51 GMT", "version": "v1" } ]
2009-12-06
[ [ "Kaiser", "Marcus", "" ], [ "Hilgetag", "Claus C.", "" ], [ "van Ooyen", "Arjen", "" ] ]
Neural connectivity at the cellular and mesoscopic level appears very specific and is presumed to arise from highly specific developmental mechanisms. However, there are general shared features of connectivity in systems as different as the networks formed by individual neurons in Caenorhabditis elegans or in rat visual cortex and the mesoscopic circuitry of cortical areas in the mouse, macaque, and human brain. In all these systems, connection length distributions have very similar shapes, with an initial large peak and a long flat tail representing the admixture of long-distance connections to mostly short-distance connections. Furthermore, not all potentially possible synapses are formed, and only a fraction of axons (called filling fraction) establish synapses with spatially neighboring neurons. We explored what aspects of these connectivity patterns can be explained simply by random axonal outgrowth. We found that random axonal growth away from the soma can already reproduce the known distance distribution of connections. We also observed that experimentally observed filling fractions can be generated by competition for available space at the target neurons--a model markedly different from previous explanations. These findings may serve as a baseline model for the development of connectivity that can be further refined by more specific mechanisms.
1706.00976
Shivakeshavan Ratnadurai Giridharan
Shivakeshavan Ratnadurai-Giridharan, Chung Cheung, Leonid Rubchinsky
Effects of electrical and optogenetic deep brain stimulation on synchronized oscillatory activity in Parkinsonian basal ganglia
IEEE preprint, 8 pages, 9 Figures
IEEE Trans Neural Syst Rehabil Eng 25:2188-2195, 2017
10.1109/TNSRE.2017.2712418
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional deep brain stimulation (DBS) of the basal ganglia uses high-frequency regular electrical pulses to treat Parkinsonian motor symptoms and has a series of limitations. Relatively new and not yet clinically tested, optogenetic stimulation is an effective experimental stimulation technique for affecting pathological network dynamics. We compared the effects of electrical and optogenetic stimulation of the basal ganglia on the pathological Parkinsonian rhythmic neural activity. We studied the network response to electrical stimulation and to excitatory and inhibitory optogenetic stimulation. Different stimulations exhibit different interactions with pathological activity in the network. We studied these interactions for different network and stimulation parameter values. Optogenetic stimulation was found to be more efficient than electrical stimulation in suppressing pathological rhythmicity. Our findings indicate that optogenetic control of neural synchrony may be more efficacious than electrical control because of the different ways in which the stimulations interact with network dynamics.
[ { "created": "Sat, 3 Jun 2017 16:45:21 GMT", "version": "v1" } ]
2021-04-26
[ [ "Ratnadurai-Giridharan", "Shivakeshavan", "" ], [ "Cheung", "Chung", "" ], [ "Rubchinsky", "Leonid", "" ] ]
Conventional deep brain stimulation (DBS) of the basal ganglia uses high-frequency regular electrical pulses to treat Parkinsonian motor symptoms and has a series of limitations. Relatively new and not yet clinically tested, optogenetic stimulation is an effective experimental stimulation technique for affecting pathological network dynamics. We compared the effects of electrical and optogenetic stimulation of the basal ganglia on the pathological Parkinsonian rhythmic neural activity. We studied the network response to electrical stimulation and to excitatory and inhibitory optogenetic stimulation. Different stimulations exhibit different interactions with pathological activity in the network. We studied these interactions for different network and stimulation parameter values. Optogenetic stimulation was found to be more efficient than electrical stimulation in suppressing pathological rhythmicity. Our findings indicate that optogenetic control of neural synchrony may be more efficacious than electrical control because of the different ways in which the stimulations interact with network dynamics.
1409.3499
Niraj Kumar
Niraj Kumar, Thierry Platini, and Rahul V. Kulkarni
Exact distributions for stochastic gene expression models with bursting and feedback
Accepted in Phys. Rev. Lett
null
10.1103/PhysRevLett.113.268105
null
q-bio.MN cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochasticity in gene expression can give rise to fluctuations in protein levels and lead to phenotypic variation across a population of genetically identical cells. Recent experiments indicate that bursting and feedback mechanisms play important roles in controlling noise in gene expression and phenotypic variation. A quantitative understanding of the impact of these factors requires analysis of the corresponding stochastic models. However, for stochastic models of gene expression with feedback and bursting, exact analytical results for protein distributions have not been obtained so far. Here, we analyze a model of gene expression with bursting and feedback regulation and obtain exact results for the corresponding protein steady-state distribution. The results obtained provide new insights into the role of bursting and feedback in noise regulation and optimization. Furthermore, for a specific choice of parameters, the system studied maps on to a two-state biochemical switch driven by a bursty input noise source. The analytical results derived thus provide quantitative insights into diverse cellular processes involving noise in gene expression and biochemical switching.
[ { "created": "Thu, 11 Sep 2014 16:45:34 GMT", "version": "v1" }, { "created": "Wed, 26 Nov 2014 15:31:11 GMT", "version": "v2" } ]
2015-06-22
[ [ "Kumar", "Niraj", "" ], [ "Platini", "Thierry", "" ], [ "Kulkarni", "Rahul V.", "" ] ]
Stochasticity in gene expression can give rise to fluctuations in protein levels and lead to phenotypic variation across a population of genetically identical cells. Recent experiments indicate that bursting and feedback mechanisms play important roles in controlling noise in gene expression and phenotypic variation. A quantitative understanding of the impact of these factors requires analysis of the corresponding stochastic models. However, for stochastic models of gene expression with feedback and bursting, exact analytical results for protein distributions have not been obtained so far. Here, we analyze a model of gene expression with bursting and feedback regulation and obtain exact results for the corresponding protein steady-state distribution. The results obtained provide new insights into the role of bursting and feedback in noise regulation and optimization. Furthermore, for a specific choice of parameters, the system studied maps on to a two-state biochemical switch driven by a bursty input noise source. The analytical results derived thus provide quantitative insights into diverse cellular processes involving noise in gene expression and biochemical switching.
1702.08091
William Ott
James J. Winkle, Oleg Igoshin, Matthew R. Bennett, Kre\v{s}imir Josi\'c, and William Ott
Modeling mechanical interactions in growing populations of rod-shaped bacteria
14 pages, 5 figures
null
10.1088/1478-3975/aa7bae
null
q-bio.CB q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advances in synthetic biology allow us to engineer bacterial collectives with pre-specified characteristics. However, the behavior of these collectives is difficult to understand, as cellular growth and division as well as extra-cellular fluid flow lead to complex, changing arrangements of cells within the population. To rationally engineer and control the behavior of cell collectives we need theoretical and computational tools to understand their emergent spatiotemporal dynamics. Here, we present an agent-based model that allows growing cells to detect and respond to mechanical interactions. Crucially, our model couples the dynamics of cell growth to the cell's environment: Mechanical constraints can affect cellular growth rate and a cell may alter its behavior in response to these constraints. This coupling links the mechanical forces that influence cell growth and emergent behaviors in cell assemblies. We illustrate our approach by showing how mechanical interactions can impact the dynamics of bacterial collectives growing in microfluidic traps.
[ { "created": "Sun, 26 Feb 2017 21:52:43 GMT", "version": "v1" } ]
2017-09-13
[ [ "Winkle", "James J.", "" ], [ "Igoshin", "Oleg", "" ], [ "Bennett", "Matthew R.", "" ], [ "Josić", "Krešimir", "" ], [ "Ott", "William", "" ] ]
Advances in synthetic biology allow us to engineer bacterial collectives with pre-specified characteristics. However, the behavior of these collectives is difficult to understand, as cellular growth and division as well as extra-cellular fluid flow lead to complex, changing arrangements of cells within the population. To rationally engineer and control the behavior of cell collectives we need theoretical and computational tools to understand their emergent spatiotemporal dynamics. Here, we present an agent-based model that allows growing cells to detect and respond to mechanical interactions. Crucially, our model couples the dynamics of cell growth to the cell's environment: Mechanical constraints can affect cellular growth rate and a cell may alter its behavior in response to these constraints. This coupling links the mechanical forces that influence cell growth and emergent behaviors in cell assemblies. We illustrate our approach by showing how mechanical interactions can impact the dynamics of bacterial collectives growing in microfluidic traps.
1312.0711
Camellia Sarkar
Ankit Agrawal, Camellia Sarkar, Sanjiv K. Dwivedi, Nitesh Dhasmana and Sarika Jalan
Quantifying randomness in protein-protein interaction networks of different species: A random matrix approach
20 pages, 5 figures
Physica A 404, 359-367 (2014)
10.1016/j.physa.2013.12.005
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze protein-protein interaction networks for six different species under the framework of random matrix theory. The nearest neighbor spacing distribution of the eigenvalues of the adjacency matrices of the largest connected part of these networks emulates the universal Gaussian orthogonal statistics of random matrix theory. We demonstrate that spectral rigidity, which quantifies long-range correlations in the eigenvalues, follows the random matrix prediction up to a certain range for all protein-protein interaction networks, indicating randomness in the interactions. Beyond this range, deviation from universality evinces underlying structural features in the networks.
[ { "created": "Tue, 3 Dec 2013 06:43:15 GMT", "version": "v1" }, { "created": "Sat, 17 May 2014 08:20:53 GMT", "version": "v2" } ]
2014-05-20
[ [ "Agrawal", "Ankit", "" ], [ "Sarkar", "Camellia", "" ], [ "Dwivedi", "Sanjiv K.", "" ], [ "Dhasmana", "Nitesh", "" ], [ "Jalan", "Sarika", "" ] ]
We analyze protein-protein interaction networks for six different species under the framework of random matrix theory. The nearest neighbor spacing distribution of the eigenvalues of the adjacency matrices of the largest connected part of these networks emulates the universal Gaussian orthogonal statistics of random matrix theory. We demonstrate that spectral rigidity, which quantifies long-range correlations in the eigenvalues, follows the random matrix prediction up to a certain range for all protein-protein interaction networks, indicating randomness in the interactions. Beyond this range, deviation from universality evinces underlying structural features in the networks.
2005.05549
Henry Zhao
Henry Zhao, Zhilan Feng, Carlos Castillo-Chavez, and Simon A. Levin
Staggered Release Policies for COVID-19 Control: Costs and Benefits of Sequentially Relaxing Restrictions by Age
22 pages (including Appendix), 10 figures
null
null
null
q-bio.PE econ.GN math.DS q-fin.EC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Strong social distancing restrictions have been crucial to controlling the COVID-19 outbreak thus far, and the next question is when and how to relax these restrictions. A sequential timing of relaxing restrictions across groups is explored in order to identify policies that simultaneously reduce health risks and economic stagnation relative to current policies. The goal will be to mitigate health risks, particularly among the most fragile sub-populations, while also managing the deleterious effect of restrictions on economic activity. The results of this paper show that a properly constructed sequential release of age-defined subgroups from strict social distancing protocols can lead to lower overall fatality rates than the simultaneous release of all individuals after a lockdown. The optimal release policy, in terms of minimizing overall death rate, must be sequential in nature, and it is important to properly time each step of the staggered release. This model allows for testing of various timing choices for staggered release policies, which can provide insights that may be helpful in the design, testing, and planning of disease management policies for the ongoing COVID-19 pandemic and future outbreaks.
[ { "created": "Tue, 12 May 2020 05:06:27 GMT", "version": "v1" } ]
2020-05-14
[ [ "Zhao", "Henry", "" ], [ "Feng", "Zhilan", "" ], [ "Castillo-Chavez", "Carlos", "" ], [ "Levin", "Simon A.", "" ] ]
Strong social distancing restrictions have been crucial to controlling the COVID-19 outbreak thus far, and the next question is when and how to relax these restrictions. A sequential timing of relaxing restrictions across groups is explored in order to identify policies that simultaneously reduce health risks and economic stagnation relative to current policies. The goal will be to mitigate health risks, particularly among the most fragile sub-populations, while also managing the deleterious effect of restrictions on economic activity. The results of this paper show that a properly constructed sequential release of age-defined subgroups from strict social distancing protocols can lead to lower overall fatality rates than the simultaneous release of all individuals after a lockdown. The optimal release policy, in terms of minimizing overall death rate, must be sequential in nature, and it is important to properly time each step of the staggered release. This model allows for testing of various timing choices for staggered release policies, which can provide insights that may be helpful in the design, testing, and planning of disease management policies for the ongoing COVID-19 pandemic and future outbreaks.
2001.10811
Giulia Marciani
Adriana Moroni, Giovanni Boschian, Jacopo Crezzini, Guido Montanari-Canini, Giulia Marciani, Giulia Capecchi, Simona Arrighi, Daniele Aureli, Claudio Berto, Margherita Freguglia, Astolfo Araujo, Sem Scaramucci, Jean Jacques Hublin, Tobias Lauer, Stefano Benazzi, Fabio Parenti, Marzia Bonato, Stefano Ricci, Sahra Talamo, Aldo G. Segre, Francesco Boschin, Vincenzo Spagnolo
Late Neandertals in Central Italy. High-resolution chronicles from Grotta dei Santi (Monte Argentario, Tuscany)
null
null
10.1016/j.quascirev.2018.11.021
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most of the Middle Palaeolithic evidence of Central Italy still lacks a reliable chrono-cultural framework mainly due to research history. In this context Grotta dei Santi, a wide cave located on Monte Argentario, on the southern coast of Tuscany, is particularly relevant as it contains a very well preserved sequence including several Mousterian layers.
[ { "created": "Wed, 29 Jan 2020 13:24:05 GMT", "version": "v1" } ]
2020-01-30
[ [ "Moroni", "Adriana", "" ], [ "Boschian", "Giovanni", "" ], [ "Crezzini", "Jacopo", "" ], [ "Montanari-Canini", "Guido", "" ], [ "Marciani", "Giulia", "" ], [ "Capecchi", "Giulia", "" ], [ "Arrighi", "Simona", "" ], [ "Aureli", "Daniele", "" ], [ "Berto", "Claudio", "" ], [ "Freguglia", "Margherita", "" ], [ "Araujo", "Astolfo", "" ], [ "Scaramucci", "Sem", "" ], [ "Hublin", "Jean Jacques", "" ], [ "Lauer", "Tobias", "" ], [ "Benazzi", "Stefano", "" ], [ "Parenti", "Fabio", "" ], [ "Bonato", "Marzia", "" ], [ "Ricci", "Stefano", "" ], [ "Talamo", "Sahra", "" ], [ "Segre", "Aldo G.", "" ], [ "Boschin", "Francesco", "" ], [ "Spagnolo", "Vincenzo", "" ] ]
Most of the Middle Palaeolithic evidence of Central Italy still lacks a reliable chrono-cultural framework mainly due to research history. In this context Grotta dei Santi, a wide cave located on Monte Argentario, on the southern coast of Tuscany, is particularly relevant as it contains a very well preserved sequence including several Mousterian layers.
1501.05524
Jonathan Potts
Jonathan R. Potts, Guillaume Bastille-Rousseau, Dennis L. Murray, James A. Schaefer, Mark A. Lewis
Predicting local and non-local effects of resources on animal space use using a mechanistic step selection model
null
Methods Ecol Evol (2014) 5(3):253-262
10.1111/2041-210X.12150
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
1. Predicting space use patterns of animals from their interactions with the environment is fundamental for understanding the effect of habitat changes on ecosystem functioning. Recent attempts to address this problem have sought to unify resource selection analysis, where animal space use is derived from available habitat quality, and mechanistic movement models, where detailed movement processes of an animal are used to predict its emergent utilization distribution. Such models bias the animal's movement towards patches that are easily available and resource-rich, and the result is a predicted probability density at a given position being a function of the habitat quality at that position. However, in reality, the probability that an animal will use a patch of the terrain tends to be a function of the resource quality in both that patch and the surrounding habitat. 2. We propose a mechanistic model where this non-local effect of resources naturally emerges from the local movement processes, by taking into account the relative utility of both the habitat where the animal currently resides and that of where it is moving. We give statistical techniques to parametrize the model from location data and demonstrate application of these techniques to GPS data of caribou in Newfoundland. 3. Steady-state animal probability distributions arising from the model have complex patterns that cannot be expressed simply as a function of the local quality of the habitat. In particular, large areas of good habitat are used more intensively than smaller patches of equal quality habitat, whereas isolated patches are used less frequently. 4. Whilst we focus on habitats in this study, our modelling framework can be readily used with any environmental covariates and therefore represents a unification of mechanistic modelling and step selection approaches to understanding animal space use.
[ { "created": "Thu, 22 Jan 2015 15:03:27 GMT", "version": "v1" } ]
2015-01-23
[ [ "Potts", "Jonathan R.", "" ], [ "Bastille-Rousseau", "Guillaume", "" ], [ "Murray", "Dennis L.", "" ], [ "Schaefer", "James A.", "" ], [ "Lewis", "Mark A.", "" ] ]
1. Predicting space use patterns of animals from their interactions with the environment is fundamental for understanding the effect of habitat changes on ecosystem functioning. Recent attempts to address this problem have sought to unify resource selection analysis, where animal space use is derived from available habitat quality, and mechanistic movement models, where detailed movement processes of an animal are used to predict its emergent utilization distribution. Such models bias the animal's movement towards patches that are easily available and resource-rich, and the result is a predicted probability density at a given position being a function of the habitat quality at that position. However, in reality, the probability that an animal will use a patch of the terrain tends to be a function of the resource quality in both that patch and the surrounding habitat. 2. We propose a mechanistic model where this non-local effect of resources naturally emerges from the local movement processes, by taking into account the relative utility of both the habitat where the animal currently resides and that of where it is moving. We give statistical techniques to parametrize the model from location data and demonstrate application of these techniques to GPS data of caribou in Newfoundland. 3. Steady-state animal probability distributions arising from the model have complex patterns that cannot be expressed simply as a function of the local quality of the habitat. In particular, large areas of good habitat are used more intensively than smaller patches of equal quality habitat, whereas isolated patches are used less frequently. 4. Whilst we focus on habitats in this study, our modelling framework can be readily used with any environmental covariates and therefore represents a unification of mechanistic modelling and step selection approaches to understanding animal space use.
2207.03288
Adrian Jones
Adrian Jones, Steven E. Massey, Daoyu Zhang, Yuri Deigin and Steven C. Quay
Further analysis of metagenomic datasets containing GD and GX pangolin CoVs indicates widespread contamination, undermining pangolin host attribution
46 pages, 32 figures
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
The only animals other than bats reported to have been infected with SARS-CoV-2-related coronaviruses (SARS2r-CoVs) prior to the COVID-19 pandemic are pangolins. In early 2020, multiple papers reported the identification of two clades of SARS2r-CoVs, GD and GX, infecting pangolins. However, the RNA-Seq datasets supporting pangolin genome assembly were widely contaminated, contained synthetic vectors, or were heavily enriched or filtered with little but coronavirus sequences left in the datasets. Here we investigate two pangolin fecal samples sequenced by Li et al. (2021), provided in support of GD PCoV infection of pangolins in Guangdong, and find the read distribution consistent with PCR amplicon contamination and SARS-CoV-2 contamination; we further identify the presence of synthetic plasmid sequences. We also build upon our previous work to further analyze the dataset GX/P3B by Lam et al. (2020), the only pangolin tissue dataset from that study that was not enriched or heavily filtered. We identify synthetic vectors and confirm human genomic origin samples in the dataset. Finally, we find human mitochondrial sequences in all pangolin organ datasets and mouse and tiger mitochondrial sequences in selected pangolin organ datasets sequenced by Liu et al. (2019). We infer that sequences of human and mouse genomic origin were probably sourced from contamination prior to sequencing, while tiger origin sequence contamination may have occurred due to index hopping during sequencing. These observations are problematic for attributing pangolins as SARS2r-CoV hosts in the datasets examined. The forensic methods developed and used here can be applied to examine any third-party SRA datasets.
[ { "created": "Thu, 7 Jul 2022 13:35:42 GMT", "version": "v1" }, { "created": "Mon, 11 Jul 2022 11:41:54 GMT", "version": "v2" } ]
2022-07-12
[ [ "Jones", "Adrian", "" ], [ "Massey", "Steven E.", "" ], [ "Zhang", "Daoyu", "" ], [ "Deigin", "Yuri", "" ], [ "Quay", "Steven C.", "" ] ]
The only animals other than bats reported to have been infected with SARS-CoV-2-related coronaviruses (SARS2r-CoVs) prior to the COVID-19 pandemic are pangolins. In early 2020, multiple papers reported the identification of two clades of SARS2r-CoVs, GD and GX, infecting pangolins. However, the RNA-Seq datasets supporting pangolin genome assembly were widely contaminated, contained synthetic vectors, or were heavily enriched or filtered with little but coronavirus sequences left in the datasets. Here we investigate two pangolin fecal samples sequenced by Li et al. (2021), provided in support of GD PCoV infection of pangolins in Guangdong, and find the read distribution consistent with PCR amplicon contamination and SARS-CoV-2 contamination; we further identify the presence of synthetic plasmid sequences. We also build upon our previous work to further analyze the dataset GX/P3B by Lam et al. (2020), the only pangolin tissue dataset from that study that was not enriched or heavily filtered. We identify synthetic vectors and confirm human genomic origin samples in the dataset. Finally, we find human mitochondrial sequences in all pangolin organ datasets and mouse and tiger mitochondrial sequences in selected pangolin organ datasets sequenced by Liu et al. (2019). We infer that sequences of human and mouse genomic origin were probably sourced from contamination prior to sequencing, while tiger origin sequence contamination may have occurred due to index hopping during sequencing. These observations are problematic for attributing pangolins as SARS2r-CoV hosts in the datasets examined. The forensic methods developed and used here can be applied to examine any third-party SRA datasets.
2303.15586
Zahra Shamsi
Zahra Shamsi, Diwakar Shukla
Billion-years old proteins show the importance of N-lobe orientation in Imatinib-kinase selectivity
null
null
null
null
q-bio.BM
http://creativecommons.org/publicdomain/zero/1.0/
The molecular origins of proteins' functions are a combinatorial search problem in the proteins' sequence space, which requires enormous resources to solve. However, evolution has already solved this optimization problem for us, leaving behind suboptimal solutions along the way. Comparing suboptimal proteins along the evolutionary pathway, or ancestors, with more optimal modern proteins can lead us to the exact molecular origins of a particular function. In this paper, we study the long-standing question of the selectivity of Imatinib, an anti-cancer kinase inhibitor drug. We study two related kinases, Src and Abl, and four of their common ancestors, to which Imatinib has significantly different affinities. Our results show that the orientation of the N-lobe with respect to the C-lobe varies between the kinases along their evolutionary pathway and is consistent with Imatinib's inhibition constants as measured experimentally. The conformation of the DFG-motif (Asp-Phe-Gly) and the structure of the P-loop also seem to have different stable conformations along the evolutionary pathway, which is aligned with Imatinib's affinity.
[ { "created": "Mon, 27 Mar 2023 20:31:07 GMT", "version": "v1" } ]
2023-03-29
[ [ "Shamsi", "Zahra", "" ], [ "Shukla", "Diwakar", "" ] ]
The molecular origins of proteins' functions are a combinatorial search problem in the proteins' sequence space, which requires enormous resources to solve. However, evolution has already solved this optimization problem for us, leaving behind suboptimal solutions along the way. Comparing suboptimal proteins along the evolutionary pathway, or ancestors, with more optimal modern proteins can lead us to the exact molecular origins of a particular function. In this paper, we study the long-standing question of the selectivity of Imatinib, an anti-cancer kinase inhibitor drug. We study two related kinases, Src and Abl, and four of their common ancestors, to which Imatinib has significantly different affinities. Our results show that the orientation of the N-lobe with respect to the C-lobe varies between the kinases along their evolutionary pathway and is consistent with Imatinib's inhibition constants as measured experimentally. The conformation of the DFG-motif (Asp-Phe-Gly) and the structure of the P-loop also seem to have different stable conformations along the evolutionary pathway, which is aligned with Imatinib's affinity.
1503.00690
Cengiz Pehlevan
Tao Hu, Cengiz Pehlevan, Dmitri B. Chklovskii
A Hebbian/Anti-Hebbian Network for Online Sparse Dictionary Learning Derived from Symmetric Matrix Factorization
2014 Asilomar Conference on Signals, Systems and Computers. v2: fixed a typo in equation 23
null
10.1109/ACSSC.2014.7094519
null
q-bio.NC cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Olshausen and Field (OF) proposed that neural computations in the primary visual cortex (V1) can be partially modeled by sparse dictionary learning. By minimizing the regularized representation error they derived an online algorithm, which learns Gabor-filter receptive fields from a natural image ensemble in agreement with physiological experiments. Whereas the OF algorithm can be mapped onto the dynamics and synaptic plasticity in a single-layer neural network, the derived learning rule is nonlocal - the synaptic weight update depends on the activity of neurons other than just pre- and postsynaptic ones - and hence biologically implausible. Here, to overcome this problem, we derive sparse dictionary learning from a novel cost-function - a regularized error of the symmetric factorization of the input's similarity matrix. Our algorithm maps onto a neural network of the same architecture as OF but using only biologically plausible local learning rules. When trained on natural images our network learns Gabor-filter receptive fields and reproduces the correlation among synaptic weights hard-wired in the OF network. Therefore, online symmetric matrix factorization may serve as an algorithmic theory of neural computation.
[ { "created": "Mon, 2 Mar 2015 20:16:19 GMT", "version": "v1" }, { "created": "Mon, 30 Nov 2015 17:09:03 GMT", "version": "v2" } ]
2015-12-01
[ [ "Hu", "Tao", "" ], [ "Pehlevan", "Cengiz", "" ], [ "Chklovskii", "Dmitri B.", "" ] ]
Olshausen and Field (OF) proposed that neural computations in the primary visual cortex (V1) can be partially modeled by sparse dictionary learning. By minimizing the regularized representation error they derived an online algorithm, which learns Gabor-filter receptive fields from a natural image ensemble in agreement with physiological experiments. Whereas the OF algorithm can be mapped onto the dynamics and synaptic plasticity in a single-layer neural network, the derived learning rule is nonlocal - the synaptic weight update depends on the activity of neurons other than just pre- and postsynaptic ones - and hence biologically implausible. Here, to overcome this problem, we derive sparse dictionary learning from a novel cost-function - a regularized error of the symmetric factorization of the input's similarity matrix. Our algorithm maps onto a neural network of the same architecture as OF but using only biologically plausible local learning rules. When trained on natural images our network learns Gabor-filter receptive fields and reproduces the correlation among synaptic weights hard-wired in the OF network. Therefore, online symmetric matrix factorization may serve as an algorithmic theory of neural computation.
2305.15385
Michaela Ennis
Michaela Ennis
Behavior quantification as the missing link between fields: Tools for digital psychiatry and their role in the future of neurobiology
PhD thesis copy
null
null
null
q-bio.NC cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
The great behavioral heterogeneity observed between individuals with the same psychiatric disorder and even within one individual over time complicates both clinical practice and biomedical research. However, modern technologies are an exciting opportunity to improve behavioral characterization. Existing psychiatry methods that are qualitative or unscalable, such as patient surveys or clinical interviews, can now be collected at a greater capacity and analyzed to produce new quantitative measures. Furthermore, recent capabilities for continuous collection of passive sensor streams, such as phone GPS or smartwatch accelerometer, open avenues of novel questioning that were previously entirely unrealistic. Their temporally dense nature enables a cohesive study of real-time neural and behavioral signals. To develop comprehensive neurobiological models of psychiatric disease, it will be critical to first develop strong methods for behavioral quantification. There is huge potential in what can theoretically be captured by current technologies, but this in itself presents a large computational challenge -- one that will necessitate new data processing tools, new machine learning techniques, and ultimately a shift in how interdisciplinary work is conducted. In my thesis, I detail research projects that take different perspectives on digital psychiatry, subsequently tying ideas together with a concluding discussion on the future of the field. I also provide software infrastructure where relevant, with extensive documentation. Major contributions include scientific arguments and proof of concept results for daily free-form audio journals as an underappreciated psychiatry research datatype, as well as novel stability theorems and pilot empirical success for a proposed multi-area recurrent neural network architecture.
[ { "created": "Wed, 24 May 2023 17:45:10 GMT", "version": "v1" } ]
2023-05-25
[ [ "Ennis", "Michaela", "" ] ]
The great behavioral heterogeneity observed between individuals with the same psychiatric disorder and even within one individual over time complicates both clinical practice and biomedical research. However, modern technologies are an exciting opportunity to improve behavioral characterization. Existing psychiatry methods that are qualitative or unscalable, such as patient surveys or clinical interviews, can now be collected at a greater capacity and analyzed to produce new quantitative measures. Furthermore, recent capabilities for continuous collection of passive sensor streams, such as phone GPS or smartwatch accelerometer, open avenues of novel questioning that were previously entirely unrealistic. Their temporally dense nature enables a cohesive study of real-time neural and behavioral signals. To develop comprehensive neurobiological models of psychiatric disease, it will be critical to first develop strong methods for behavioral quantification. There is huge potential in what can theoretically be captured by current technologies, but this in itself presents a large computational challenge -- one that will necessitate new data processing tools, new machine learning techniques, and ultimately a shift in how interdisciplinary work is conducted. In my thesis, I detail research projects that take different perspectives on digital psychiatry, subsequently tying ideas together with a concluding discussion on the future of the field. I also provide software infrastructure where relevant, with extensive documentation. Major contributions include scientific arguments and proof of concept results for daily free-form audio journals as an underappreciated psychiatry research datatype, as well as novel stability theorems and pilot empirical success for a proposed multi-area recurrent neural network architecture.
2210.15752
Siavash Golkar
Siavash Golkar, Tiberiu Tesileanu, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii
Constrained Predictive Coding as a Biologically Plausible Model of the Cortical Hierarchy
24 pages, 9 figures; accepted in NeurIPS 2022
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Predictive coding (PC) has emerged as an influential normative model of neural computation, with numerous extensions and applications. As such, much effort has been put into mapping PC faithfully onto the cortex, but there are issues that remain unresolved or controversial. In particular, current implementations often involve separate value and error neurons and require symmetric forward and backward weights across different brain regions. These features have not been experimentally confirmed. In this work, we show that the PC framework in the linear regime can be modified to map faithfully onto the cortical hierarchy in a manner compatible with empirical observations. By employing a disentangling-inspired constraint on hidden-layer neural activities, we derive an upper bound for the PC objective. Optimization of this upper bound leads to an algorithm that shows the same performance as the original objective and maps onto a biologically plausible network. The units of this network can be interpreted as multi-compartmental neurons with non-Hebbian learning rules, with a remarkable resemblance to recent experimental findings. There exist prior models which also capture these features, but they are phenomenological, while our work is a normative derivation. The network we derive does not involve one-to-one connectivity or signal multiplexing, which the phenomenological models required, indicating that these features are not necessary for learning in the cortex. The normative nature of our algorithm in the simplified linear case allows us to prove interesting properties of the framework and analytically understand the computational role of our network's components. The parameters of our network have natural interpretations as physiological quantities in a multi-compartmental model of pyramidal neurons, providing a concrete link between PC and experimental measurements carried out in the cortex.
[ { "created": "Thu, 27 Oct 2022 20:12:14 GMT", "version": "v1" }, { "created": "Sat, 4 Mar 2023 06:15:42 GMT", "version": "v2" } ]
2023-03-07
[ [ "Golkar", "Siavash", "" ], [ "Tesileanu", "Tiberiu", "" ], [ "Bahroun", "Yanis", "" ], [ "Sengupta", "Anirvan M.", "" ], [ "Chklovskii", "Dmitri B.", "" ] ]
Predictive coding (PC) has emerged as an influential normative model of neural computation, with numerous extensions and applications. As such, much effort has been put into mapping PC faithfully onto the cortex, but there are issues that remain unresolved or controversial. In particular, current implementations often involve separate value and error neurons and require symmetric forward and backward weights across different brain regions. These features have not been experimentally confirmed. In this work, we show that the PC framework in the linear regime can be modified to map faithfully onto the cortical hierarchy in a manner compatible with empirical observations. By employing a disentangling-inspired constraint on hidden-layer neural activities, we derive an upper bound for the PC objective. Optimization of this upper bound leads to an algorithm that shows the same performance as the original objective and maps onto a biologically plausible network. The units of this network can be interpreted as multi-compartmental neurons with non-Hebbian learning rules, with a remarkable resemblance to recent experimental findings. There exist prior models which also capture these features, but they are phenomenological, while our work is a normative derivation. The network we derive does not involve one-to-one connectivity or signal multiplexing, which the phenomenological models required, indicating that these features are not necessary for learning in the cortex. The normative nature of our algorithm in the simplified linear case allows us to prove interesting properties of the framework and analytically understand the computational role of our network's components. The parameters of our network have natural interpretations as physiological quantities in a multi-compartmental model of pyramidal neurons, providing a concrete link between PC and experimental measurements carried out in the cortex.
1707.07851
J\'ozsef Vass Ph.D.
J\'ozsef Vass, Sergey N. Krylov
Estimating Kinetic Rate Constants and Plug Concentration Profiles from Simulated KCE Electropherogram Signals
Contains 14 pages with 3 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Kinetic rate constants fundamentally characterize the dynamics of the chemical interaction of macromolecules, and thus their study sets a major direction in experimental biochemistry. The estimation of such constants is often challenging, partly due to the noisiness of data, and partly due to the theoretical framework. Novel and qualitatively reasonable methods are presented for the estimation of the rate constants of complex formation and dissociation in Kinetic Capillary Electrophoresis (KCE). This also serves the broader effort to resolve the inverse problem of KCE, where these estimates act as initial starting points in the non-linear optimization space, along with the asymmetric Gaussian parameters describing the injected plug concentration profiles, which are also estimated here. This rate constant estimation method is also compared to an earlier one.
[ { "created": "Tue, 25 Jul 2017 08:45:03 GMT", "version": "v1" }, { "created": "Wed, 2 Aug 2017 18:45:39 GMT", "version": "v2" } ]
2017-08-04
[ [ "Vass", "József", "" ], [ "Krylov", "Sergey N.", "" ] ]
Kinetic rate constants fundamentally characterize the dynamics of the chemical interaction of macromolecules, and thus their study sets a major direction in experimental biochemistry. The estimation of such constants is often challenging, partly due to the noisiness of data, and partly due to the theoretical framework. Novel and qualitatively reasonable methods are presented for the estimation of the rate constants of complex formation and dissociation in Kinetic Capillary Electrophoresis (KCE). This also serves the broader effort to resolve the inverse problem of KCE, where these estimates act as initial starting points in the non-linear optimization space, along with the asymmetric Gaussian parameters describing the injected plug concentration profiles, which are also estimated here. This rate constant estimation method is also compared to an earlier one.
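As a hedged illustration of one step mentioned above, the sketch below fits an asymmetric Gaussian (separate left and right widths) to a synthetic peak with SciPy's curve_fit; both the parameterisation and the simulated trace are assumptions standing in for a real KCE electropherogram, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def asymmetric_gaussian(t, a, mu, sigma_l, sigma_r):
    """Gaussian peak with different widths left and right of the centre mu."""
    sigma = np.where(t < mu, sigma_l, sigma_r)
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Synthetic "signal": an asymmetric peak plus noise (placeholder for a KCE trace).
t = np.linspace(0.0, 10.0, 500)
true = asymmetric_gaussian(t, 1.0, 4.0, 0.5, 1.2)
signal = true + 0.02 * np.random.default_rng(0).normal(size=t.size)

# Estimate the plug-profile parameters by non-linear least squares.
p0 = [signal.max(), t[np.argmax(signal)], 1.0, 1.0]
popt, _ = curve_fit(asymmetric_gaussian, t, signal, p0=p0)
print("a, mu, sigma_left, sigma_right =", popt)
```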
1901.00099
Martin Frasch
Paula Desplats, Ashley M. Gutierrez, Marta C. Antonelli and Martin G. Frasch
Microglial memory of early life stress and inflammation: susceptibility to neurodegeneration in adulthood
null
Neurosci Biohav Rev 2019
10.1016/j.neubiorev.2019.10.013
null
q-bio.TO q-bio.CB
http://creativecommons.org/licenses/by-nc-sa/4.0/
We review evidence supporting the role of early life programming in the susceptibility to adult neurodegenerative diseases while highlighting questions and proposing avenues for future research to advance our understanding of this fundamental process. The key elements of this phenomenon are chronic stress, neuroinflammation triggering microglial polarization, microglial memory and their connection to neurodegeneration. We review the mediating mechanisms which may function as early biomarkers of increased susceptibility to neurodegeneration. Can we devise novel early life-modifying interventions to steer developmental trajectories to their optimum?
[ { "created": "Tue, 1 Jan 2019 05:55:00 GMT", "version": "v1" }, { "created": "Sun, 15 Sep 2019 19:35:35 GMT", "version": "v2" } ]
2019-11-11
[ [ "Desplats", "Paula", "" ], [ "Gutierrez", "Ashley M.", "" ], [ "Antonelli", "Marta C.", "" ], [ "Frasch", "Martin G.", "" ] ]
We review evidence supporting the role of early life programming in the susceptibility to adult neurodegenerative diseases while highlighting questions and proposing avenues for future research to advance our understanding of this fundamental process. The key elements of this phenomenon are chronic stress, neuroinflammation triggering microglial polarization, microglial memory and their connection to neurodegeneration. We review the mediating mechanisms which may function as early biomarkers of increased susceptibility to neurodegeneration. Can we devise novel early life-modifying interventions to steer developmental trajectories to their optimum?
1706.08324
Eleonora Alfinito Dr.
R. Cataldo, E.Alfinito, L. Reggiani
Hierarchy and assortativity as new tools for affinity investigation: the case of the TBA aptamer-ligand complex
12 pages, 5 figures
IEEE transaction on nanobioscience (2017)
10.1109/TNB.2017.2783440
null
q-bio.BM cond-mat.soft cs.CE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aptamers are single-stranded DNA, RNA or peptide sequences having the ability to bind a variety of specific targets (proteins, molecules as well as ions). Therefore, aptamer production and selection for therapeutic and diagnostic applications is very challenging. Usually they are generated in vitro, but computational approaches have recently been developed for in silico selection of aptamers with higher affinity for the specific target. However, the mechanism of aptamer-ligand complex formation is not completely clear and is difficult to predict. This paper aims to develop a computational model able to describe aptamer-ligand affinity performance by using the topological structure of the corresponding graphs, assessed by means of numerical tools such as the conventional degree distribution, but also the rank-degree distribution (hierarchy) and the node assortativity. Calculations are applied to the thrombin binding aptamer (TBA), and the TBA-thrombin complex, produced in the presence of Na+ or K+. The topological analysis reveals different affinity performances between the macromolecules in the presence of the two cations, as expected from previous investigations in the literature. These results establish graph topological analysis as a novel theoretical tool for testing affinity. Furthermore, starting from the graphs, an electrical network can be obtained by using the specific electrical properties of amino acids and nucleobases. A further analysis therefore concerns the electrical response, which reveals that the resistance depends sensitively on the presence of sodium or potassium, thus posing resistance as a crucial physical parameter for testing affinity.
[ { "created": "Mon, 26 Jun 2017 11:16:32 GMT", "version": "v1" } ]
2018-01-04
[ [ "Cataldo", "R.", "" ], [ "Alfinito", "E.", "" ], [ "Reggiani", "L.", "" ] ]
Aptamers are single-stranded DNA, RNA or peptide sequences having the ability to bind a variety of specific targets (proteins, molecules as well as ions). Therefore, aptamer production and selection for therapeutic and diagnostic applications is very challenging. Usually they are generated in vitro, but computational approaches have recently been developed for in silico selection of aptamers with higher affinity for the specific target. However, the mechanism of aptamer-ligand complex formation is not completely clear and is difficult to predict. This paper aims to develop a computational model able to describe aptamer-ligand affinity performance by using the topological structure of the corresponding graphs, assessed by means of numerical tools such as the conventional degree distribution, but also the rank-degree distribution (hierarchy) and the node assortativity. Calculations are applied to the thrombin binding aptamer (TBA), and the TBA-thrombin complex, produced in the presence of Na+ or K+. The topological analysis reveals different affinity performances between the macromolecules in the presence of the two cations, as expected from previous investigations in the literature. These results establish graph topological analysis as a novel theoretical tool for testing affinity. Furthermore, starting from the graphs, an electrical network can be obtained by using the specific electrical properties of amino acids and nucleobases. A further analysis therefore concerns the electrical response, which reveals that the resistance depends sensitively on the presence of sodium or potassium, thus posing resistance as a crucial physical parameter for testing affinity.
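A minimal sketch of the three topological measures named in the abstract (degree distribution, rank-degree or "hierarchy" curve, and node assortativity), computed with NetworkX; the random graph is only a placeholder for a coarse-grained aptamer or complex contact network.

```python
import networkx as nx
from collections import Counter

# Placeholder graph standing in for an aptamer/complex contact network.
g = nx.erdos_renyi_graph(n=60, p=0.08, seed=1)

# Degree distribution: fraction of nodes having each degree.
degrees = [d for _, d in g.degree()]
dist = {k: v / g.number_of_nodes() for k, v in sorted(Counter(degrees).items())}

# Rank-degree ("hierarchy") curve: degrees sorted from largest to smallest.
rank_degree = sorted(degrees, reverse=True)

# Degree assortativity: tendency of nodes to link to nodes of similar degree.
assortativity = nx.degree_assortativity_coefficient(g)

print(dist)
print(rank_degree[:10])
print(round(assortativity, 3))
```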
1203.3596
Alex Urban Dr.
Alexander Urban and Bard Ermentrout
Formation of antiwaves in gap-junction-coupled chains of neurons
null
null
10.1103/PhysRevE.86.011907
null
q-bio.NC nlin.PS physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using network models consisting of gap-junction-coupled Wang-Buzsaki neurons, we demonstrate that it is possible to obtain not only synchronous activity between neurons but also a variety of constant phase shifts between 0 and \pi. We call these phase shifts intermediate stable phase-locked states. These phase shifts can produce a large variety of wave-like activity patterns in one-dimensional chains and two-dimensional arrays of neurons, which can be studied by reducing the system of equations to a phase model. The 2\pi-periodic coupling functions of these models are characterized by prominent higher-order terms in their Fourier expansion, which can be varied by changing model parameters. We study how the relative contribution of the odd and even terms affects which solutions are possible, the basin of attraction of those solutions and their stability. These models may be applicable to the spinal central pattern generators of the dogfish and also to the developing neocortex of the neonatal rat.
[ { "created": "Fri, 16 Mar 2012 01:07:40 GMT", "version": "v1" }, { "created": "Wed, 6 Jun 2012 01:01:51 GMT", "version": "v2" }, { "created": "Fri, 8 Jun 2012 01:26:57 GMT", "version": "v3" } ]
2015-06-04
[ [ "Urban", "Alexander", "" ], [ "Ermentrout", "Bard", "" ] ]
Using network models consisting of gap-junction-coupled Wang-Buzsaki neurons, we demonstrate that it is possible to obtain not only synchronous activity between neurons but also a variety of constant phase shifts between 0 and \pi. We call these phase shifts intermediate stable phase-locked states. These phase shifts can produce a large variety of wave-like activity patterns in one-dimensional chains and two-dimensional arrays of neurons, which can be studied by reducing the system of equations to a phase model. The 2\pi-periodic coupling functions of these models are characterized by prominent higher-order terms in their Fourier expansion, which can be varied by changing model parameters. We study how the relative contribution of the odd and even terms affects which solutions are possible, the basin of attraction of those solutions and their stability. These models may be applicable to the spinal central pattern generators of the dogfish and also to the developing neocortex of the neonatal rat.
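To make the phase-model reduction concrete, here is a hedged sketch of a chain of nearest-neighbour-coupled phase oscillators whose 2\pi-periodic coupling function contains first- and second-harmonic terms; the Fourier coefficients and integration settings are illustrative choices, not values derived from the Wang-Buzsaki network.

```python
import numpy as np

def coupling(phi, a1=1.0, a2=0.6, b2=0.3):
    """2*pi-periodic coupling function with prominent higher-order terms."""
    return a1 * np.sin(phi) + a2 * np.sin(2 * phi) + b2 * np.cos(2 * phi)

def simulate_chain(n=50, omega=1.0, dt=0.01, steps=20000, seed=0):
    """Euler integration of d(theta_i)/dt = omega + sum over neighbours of H(theta_j - theta_i)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        dtheta = np.full(n, omega)
        dtheta[:-1] += coupling(theta[1:] - theta[:-1])   # right neighbour
        dtheta[1:] += coupling(theta[:-1] - theta[1:])    # left neighbour
        theta = (theta + dt * dtheta) % (2 * np.pi)
    return theta

phases = simulate_chain()
# Phase differences along the chain reveal synchrony, constant shifts, or waves.
print(np.round(np.diff(phases) % (2 * np.pi), 2))
```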
1006.4794
Sebastiano Stramaglia
Daniele Marinazzo, Wei Liao, Mario Pellicoro, and Sebastiano Stramaglia
Grouping time series by pairwise measures of redundancy
4 pages, 8 figures
null
10.1016/j.physleta.2010.08.011
null
q-bio.NC physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A novel approach is proposed to group redundant time series within the framework of causality. It assumes that (i) the dynamics of the system can be described using just a small number of characteristic modes, and that (ii) a pairwise measure of redundancy is sufficient to detect the presence of correlated degrees of freedom. We show the application of the proposed approach on fMRI data from a resting human brain and gene expression profiles from HeLa cell culture.
[ { "created": "Thu, 24 Jun 2010 13:53:03 GMT", "version": "v1" } ]
2015-05-19
[ [ "Marinazzo", "Daniele", "" ], [ "Liao", "Wei", "" ], [ "Pellicoro", "Mario", "" ], [ "Stramaglia", "Sebastiano", "" ] ]
A novel approach is proposed to group redundant time series within the framework of causality. It assumes that (i) the dynamics of the system can be described using just a small number of characteristic modes, and that (ii) a pairwise measure of redundancy is sufficient to detect the presence of correlated degrees of freedom. We show the application of the proposed approach on fMRI data from a resting human brain and gene expression profiles from HeLa cell culture.
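A hedged sketch of the overall workflow: compute a pairwise redundancy-like measure between time series and group them by hierarchical clustering. Absolute Pearson correlation is used here only as a stand-in for the causality-based redundancy measure of the paper, and the toy data are synthetic.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
# Toy data: 6 series driven by 2 hidden modes, i.e. two redundant groups.
modes = rng.normal(size=(2, 500))
series = np.vstack([modes[0] + 0.1 * rng.normal(size=500) for _ in range(3)] +
                   [modes[1] + 0.1 * rng.normal(size=500) for _ in range(3)])

# Pairwise "redundancy": absolute Pearson correlation (a stand-in measure).
corr = np.abs(np.corrcoef(series))
dist = 1.0 - corr

# Condensed distance vector for SciPy's hierarchical clustering.
iu = np.triu_indices_from(dist, k=1)
labels = fcluster(linkage(dist[iu], method="average"), t=2, criterion="maxclust")
print(labels)  # e.g. [1 1 1 2 2 2]: the two redundant groups are recovered
```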
1808.03488
Yohsuke Murase
Yohsuke Murase, Per Arne Rikvold
Conservation of population size is required for self-organized criticality in evolution models
null
New J. Phys. 20, 083023 (2018)
10.1088/1367-2630/aad861
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study models of biological evolution and investigate a key factor required to yield self-organized criticality (SOC). The Bak-Sneppen (BS) model, developed from minimal and plausible assumptions of Darwinian competition, is the most basic model that shows an SOC state. Another class of models, which have population dynamics and simple rules for species migrations, has also been studied. It turns out that they do not show an SOC state, although the assumptions made in these models are similar to those in the BS model. To clarify the origin of these differences and to identify a key ingredient of SOC, we study models that bridge the BS model and the Dynamical Graph model, which is a representative of the population dynamics models. From a comparative study of the models, we find that SOC emerges when the fluctuations of the number of species $N$ are suppressed, while the system shows off-critical states when $N$ changes according to its evolutionary dynamics. This indicates that the assumption of a fixed system size in the BS model plays a pivotal role in driving the system into an SOC state, and casts doubt on its applicability to actual evolutionary dynamics.
[ { "created": "Fri, 10 Aug 2018 11:28:00 GMT", "version": "v1" } ]
2018-09-11
[ [ "Murase", "Yohsuke", "" ], [ "Rikvold", "Per Arne", "" ] ]
We study models of biological evolution and investigate a key factor required to yield self-organized criticality (SOC). The Bak-Sneppen (BS) model, developed from minimal and plausible assumptions of Darwinian competition, is the most basic model that shows an SOC state. Another class of models, which have population dynamics and simple rules for species migrations, has also been studied. It turns out that they do not show an SOC state, although the assumptions made in these models are similar to those in the BS model. To clarify the origin of these differences and to identify a key ingredient of SOC, we study models that bridge the BS model and the Dynamical Graph model, which is a representative of the population dynamics models. From a comparative study of the models, we find that SOC emerges when the fluctuations of the number of species $N$ are suppressed, while the system shows off-critical states when $N$ changes according to its evolutionary dynamics. This indicates that the assumption of a fixed system size in the BS model plays a pivotal role in driving the system into an SOC state, and casts doubt on its applicability to actual evolutionary dynamics.
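For readers unfamiliar with the reference model, here is a minimal sketch of the original Bak-Sneppen update rule on a ring of fixed size (the least-fit species and its two neighbours receive new random fitnesses each step); fixed population size is precisely the ingredient whose role the paper examines.

```python
import numpy as np

def bak_sneppen(n_species=200, steps=100_000, seed=0):
    """Classic BS dynamics on a ring of fixed size; returns the final fitness array."""
    rng = np.random.default_rng(seed)
    fitness = rng.random(n_species)
    for _ in range(steps):
        i = int(np.argmin(fitness))            # least-fit species goes "extinct"
        for j in (i - 1, i, (i + 1) % n_species):
            fitness[j] = rng.random()          # it and its neighbours are replaced
    return fitness

f = bak_sneppen()
# In the SOC state almost all fitnesses sit above a critical threshold (~0.667).
print(round(np.quantile(f, 0.05), 3))
```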
1907.13459
Rachael Mansbach
Rachael A. Mansbach, Inga V. Leus, Jitender Mehla, Cesar A. Lopez, John K. Walker, Valentin V. Rybenkov, Nicolas W. Hengartner, Helen I. Zgurskaya, S. Gnanakaran
Development of a Fragment-Based Machine Learning Algorithm for Designing Hybrid Drugs Optimized for Permeating Gram-Negative Bacteria
15 pages, 5 figures, 4 pages of supporting information, 3 supporting figures, 2 ancillary files
null
null
null
q-bio.QM physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Gram-negative bacteria are a serious health concern due to the strong multidrug resistance that they display, partly due to the presence of a permeability barrier comprising two membranes with active efflux. New approaches are urgently needed to design antibiotics effective against these pathogens. In this work, we present a novel topological fragment-based approach ("Hunting Fragments Of X" or "Hunting FOX") to rationally "hunt for" chemical fragments that promote a compound's ability to permeate the outer membrane. Our approach generalizes to other drug design applications. We measure minimum inhibitory concentrations of compounds in two strains of Pseudomonas aeruginosa with variable permeability barriers and use them as input to the Hunting FOX algorithm to identify molecular fragments responsible for enhanced outer membrane permeation properties and candidate molecules from an external library that demonstrate good permeation ability. Overall, we present proof of concept for a novel method that is expected to be valuable for rational design of hybrid drugs.
[ { "created": "Mon, 29 Jul 2019 19:33:14 GMT", "version": "v1" } ]
2019-08-01
[ [ "Mansbach", "Rachael A.", "" ], [ "Leus", "Inga V.", "" ], [ "Mehla", "Jitender", "" ], [ "Lopez", "Cesar A.", "" ], [ "Walker", "John K.", "" ], [ "Rybenkov", "Valentin V.", "" ], [ "Hengartner", "Nicolas W.", "" ], [ "Zgurskaya", "Helen I.", "" ], [ "Gnanakaran", "S.", "" ] ]
Gram-negative bacteria are a serious health concern due to the strong multidrug resistance that they display, partly due to the presence of a permeability barrier comprising two membranes with active efflux. New approaches are urgently needed to design antibiotics effective against these pathogens. In this work, we present a novel topological fragment-based approach ("Hunting Fragments Of X" or "Hunting FOX") to rationally "hunt for" chemical fragments that promote a compound's ability to permeate the outer membrane. Our approach generalizes to other drug design applications. We measure minimum inhibitory concentrations of compounds in two strains of Pseudomonas aeruginosa with variable permeability barriers and use them as input to the Hunting FOX algorithm to identify molecular fragments responsible for enhanced outer membrane permeation properties and candidate molecules from an external library that demonstrate good permeation ability. Overall, we present proof of concept for a novel method that is expected to be valuable for rational design of hybrid drugs.
q-bio/0612006
Thierry Rabilloud
C. Ang\'enieux, D. Fricker, J. M. Strub, S. Luche (BECP), H. Bausinger, J. P. Cazenave, A. Van Dorsselaer, D. Hanau, H. de la Salle, T. Rabilloud (BECP)
Gene induction during differentiation of human monocytes into dendritic cells: an integrated study at the RNA and protein levels
website publisher: http://www.springerlink.com/content/ha0d2c351qhjhjdm/
Funct Integr Genomics 1 (09/2001) 323-9
10.1007/s101420100037
null
q-bio.GN
null
Changes in gene expression occurring during differentiation of human monocytes into dendritic cells were studied at the RNA and protein levels. These studies showed the induction of several gene classes corresponding to various biological functions. These functions encompass antigen processing and presentation, cytoskeleton, cell signalling and signal transduction, but also an increase in mitochondrial function and in the protein synthesis machinery, including some, but not all, chaperones. These changes put in perspective the events occurring during this differentiation process. On a more technical point, it appears that the studies carried out at the RNA and protein levels are highly complementary.
[ { "created": "Tue, 5 Dec 2006 10:57:36 GMT", "version": "v1" } ]
2016-08-16
[ [ "Angénieux", "C.", "", "BECP" ], [ "Fricker", "D.", "", "BECP" ], [ "Strub", "J. M.", "", "BECP" ], [ "Luche", "S.", "", "BECP" ], [ "Bausinger", "H.", "", "BECP" ], [ "Cazenave", "J. P.", "", "BECP" ], [ "Van Dorsselaer", "A.", "", "BECP" ], [ "Hanau", "D.", "", "BECP" ], [ "de la Salle", "H.", "", "BECP" ], [ "Rabilloud", "T.", "", "BECP" ] ]
Changes in gene expression occurring during differentiation of human monocytes into dendritic cells were studied at the RNA and protein levels. These studies showed the induction of several gene classes corresponding to various biological functions. These functions encompass antigen processing and presentation, cytoskeleton, cell signalling and signal transduction, but also an increase in mitochondrial function and in the protein synthesis machinery, including some, but not all, chaperones. These changes put in perspective the events occurring during this differentiation process. On a more technical point, it appears that the studies carried out at the RNA and protein levels are highly complementary.
1601.02974
James Roach
James P. Roach, Leonard M Sander, Michal R. Zochowski
Memory Recall and Spike Frequency Adaptation
null
Phys. Rev. E 93, 052307 (2016)
10.1103/PhysRevE.93.052307
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The brain can reproduce memories from partial data; this ability is critical for memory recall. The process of memory recall has been studied using auto-associative networks such as the Hopfield model. This kind of model reliably converges to stored patterns which contain the memory. However, it is unclear how the behavior is controlled by the brain so that after convergence to one configuration, it can proceed with recognition of another one. In the Hopfield model this happens only through unrealistic changes of an effective global temperature that destabilizes all stored configurations. Here we show that spike frequency adaptation (SFA), a common mechanism affecting neuron activation in the brain, can provide state dependent control of pattern retrieval. We demonstrate this in a Hopfield network modified to include SFA, and also in a model network of biophysical neurons. In both cases SFA allows for selective stabilization of attractors with different basins of attraction, and also for temporal dynamics of attractor switching that is not possible in standard auto-associative schemes. The dynamics of our models give a plausible account of different sorts of memory retrieval.
[ { "created": "Tue, 12 Jan 2016 17:40:38 GMT", "version": "v1" }, { "created": "Mon, 14 Mar 2016 17:10:11 GMT", "version": "v2" } ]
2016-05-18
[ [ "Roach", "James P.", "" ], [ "Sander", "Leonard M", "" ], [ "Zochowski", "Michal R.", "" ] ]
The brain can reproduce memories from partial data; this ability is critical for memory recall. The process of memory recall has been studied using auto-associative networks such as the Hopfield model. This kind of model reliably converges to stored patterns which contain the memory. However, it is unclear how the behavior is controlled by the brain so that after convergence to one configuration, it can proceed with recognition of another one. In the Hopfield model this happens only through unrealistic changes of an effective global temperature that destabilizes all stored configurations. Here we show that spike frequency adaptation (SFA), a common mechanism affecting neuron activation in the brain, can provide state dependent control of pattern retrieval. We demonstrate this in a Hopfield network modified to include SFA, and also in a model network of biophysical neurons. In both cases SFA allows for selective stabilization of attractors with different basins of attraction, and also for temporal dynamics of attractor switching that is not possible in standard auto-associative schemes. The dynamics of our models give a plausible account of different sorts of memory retrieval.
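A hedged sketch of the kind of modification described: a standard Hopfield network whose units accumulate an adaptation variable that is subtracted from their input field, so that a retrieved configuration need not remain a fixed point once adaptation builds up. The adaptation constants are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Hebbian weights with zero self-coupling.
W = (patterns.T @ patterns).astype(float) / n
np.fill_diagonal(W, 0.0)

s = patterns[0] * np.where(rng.random(n) < 0.9, 1, -1)  # noisy cue of pattern 0
a = np.zeros(n)                                         # adaptation variable
tau_a, g_a = 50.0, 1.5                                  # illustrative SFA constants

for t in range(401):
    h = W @ s - g_a * a          # local field minus adaptation current
    s = np.where(h >= 0, 1, -1)  # synchronous sign update
    a += (s - a) / tau_a         # adaptation slowly tracks sustained activity
    if t % 100 == 0:
        # Overlap with each stored pattern; with strong enough adaptation the
        # cued configuration does not stay a stable fixed point.
        print(t, np.round(patterns @ s / n, 2))
```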
2108.01304
Natalie Charitakis
Natalie Charitakis (1), Mirana Ramialison (1,2,3) and Hieu T. Nim (1,2,3) ((1), Murdoch Children's Research Institute, Parkville, Australia, (2) Australian Regenerative Medicine Institute, Monash University, Clayton, Australia, (3) Systems Biology Institute, Clayton, Australia)
Comparative Analysis of Packages and Algorithms for the Analysis of Spatially Resolved Transcriptomics Data
32 pages, Figures 3
null
null
null
q-bio.QM q-bio.GN
http://creativecommons.org/licenses/by-nc-nd/4.0/
The technology to generate Spatially Resolved Transcriptomics (SRT) data is rapidly being improved and applied to investigate a variety of biological tissues. The ability to interrogate how spatially localised gene expression lends new insight into the development of different tissues is critical, but the appropriate tools to analyse these data are still emerging. This chapter reviews available packages and pipelines for the analysis of different SRT datasets with a focus on identifying spatially variable genes (SVGs) alongside other aims, while discussing the importance of and challenges in establishing a standardised 'ground truth' in the biological data for benchmarking.
[ { "created": "Tue, 3 Aug 2021 05:38:15 GMT", "version": "v1" } ]
2021-08-04
[ [ "Charitakis", "Natalie", "" ], [ "Ramialison", "Mirana", "" ], [ "Nim", "Hieu T.", "" ] ]
The technology to generate Spatially Resolved Transcriptomics (SRT) data is rapidly being improved and applied to investigate a variety of biological tissues. The ability to interrogate how spatially localised gene expression lends new insight into the development of different tissues is critical, but the appropriate tools to analyse these data are still emerging. This chapter reviews available packages and pipelines for the analysis of different SRT datasets with a focus on identifying spatially variable genes (SVGs) alongside other aims, while discussing the importance of and challenges in establishing a standardised 'ground truth' in the biological data for benchmarking.
2004.06361
Vainav Patel
Dhanashree Jagtap, Selvaa Kumar C, Smita Mahale, Vainav Patel
Modelling and docking of Indian SARS-CoV-2 spike protein 1 with ACE2: implications for co-morbidity and therapeutic intervention
28 pages, 6 figures, 3 tables, 1 supplementary figure
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Presently, India bears amongst the highest burden of non-communicable diseases such as diabetes mellitus (DM), hypertension (HT), and cardiovascular disease (CVD) and thus represents a vulnerable target for the SARS-CoV-2/COVID-19 pandemic. Involvement of the angiotensin converting enzyme 2 (ACE2) in susceptibility to infection and pathogenesis by SARS-CoV-2 is currently an actively pursued research area. An increased susceptibility to infection in individuals with DM, HT and CVD together with higher levels of circulating ACE2 in these settings presents a scenario where interaction with soluble ACE2 may result in disseminated virus-receptor complexes that could enhance virus acquisition and pathogenesis. Thus, understanding the SARS-CoV-2 receptor binding domain-ACE2 interaction, both membrane bound and in the cell free context, may contribute to elucidating the role of co-morbidities in increased susceptibility to infection and pathogenesis. Both Azithromycin and Hydroxychloroquine (HCQ) have shown efficacy in mitigating viral carriage in infected individuals. Furthermore, each of these compounds generates active metabolites which in turn may also modulate virus-receptor interaction and thus influence clinical outcomes. In this study, we model the structural interaction of S1 with both full-length and soluble ACE2. Additionally, therapeutic drugs and their active metabolites were docked with soluble ACE2 protein. Our results show that S1 from either of the reported Indian sequences can bind both full-length and soluble ACE2, albeit with varying affinity that can be attributed to a reported substitution in the RBD. Furthermore, both Azithromycin and HCQ together with their active metabolites can allosterically affect, to varying extents, the binding of S1 to ACE2.
[ { "created": "Tue, 14 Apr 2020 08:53:12 GMT", "version": "v1" } ]
2020-04-15
[ [ "Jagtap", "Dhanashree", "" ], [ "C", "Selvaa Kumar", "" ], [ "Mahale", "Smita", "" ], [ "Patel", "Vainav", "" ] ]
Presently, India bears amongst the highest burden of non-communicable diseases such as diabetes mellitus (DM), hypertension (HT), and cardiovascular disease (CVD) and thus represents a vulnerable target for the SARS-CoV-2/COVID-19 pandemic. Involvement of the angiotensin converting enzyme 2 (ACE2) in susceptibility to infection and pathogenesis by SARS-CoV-2 is currently an actively pursued research area. An increased susceptibility to infection in individuals with DM, HT and CVD together with higher levels of circulating ACE2 in these settings presents a scenario where interaction with soluble ACE2 may result in disseminated virus-receptor complexes that could enhance virus acquisition and pathogenesis. Thus, understanding the SARS-CoV-2 receptor binding domain-ACE2 interaction, both membrane bound and in the cell free context, may contribute to elucidating the role of co-morbidities in increased susceptibility to infection and pathogenesis. Both Azithromycin and Hydroxychloroquine (HCQ) have shown efficacy in mitigating viral carriage in infected individuals. Furthermore, each of these compounds generates active metabolites which in turn may also modulate virus-receptor interaction and thus influence clinical outcomes. In this study, we model the structural interaction of S1 with both full-length and soluble ACE2. Additionally, therapeutic drugs and their active metabolites were docked with soluble ACE2 protein. Our results show that S1 from either of the reported Indian sequences can bind both full-length and soluble ACE2, albeit with varying affinity that can be attributed to a reported substitution in the RBD. Furthermore, both Azithromycin and HCQ together with their active metabolites can allosterically affect, to varying extents, the binding of S1 to ACE2.
1602.07181
Peter Schuck
Abhiksha Desai, Jonathan Krynitsky, Thomas J. Pohida, Huaying Zhao and Peter Schuck
3D-Printing for Analytical Ultracentrifugation
25 pages, 6 figures
PLoS One. 2016; 11(8):e0155201 PMID: 27525659
10.1371/journal.pone.0155201
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analytical ultracentrifugation (AUC) is a classical technique of physical biochemistry providing information on size, shape, and interactions of macromolecules from the analysis of their migration in centrifugal fields while free in solution. A key mechanical element in AUC is the centerpiece, a component of the sample cell assembly that is mounted between the optical windows to allow imaging and to seal the sample solution column against high vacuum while exposed to gravitational forces in excess of 300,000 g. For sedimentation velocity it needs to be precisely sector-shaped to allow unimpeded radial macromolecular migration. During the history of AUC a great variety of centerpiece designs have been developed for different types of experiments. Here, we report that centerpieces can now be readily fabricated by 3D printing at low cost, from a variety of materials, and with customized designs. The new centerpieces can exhibit sufficient mechanical stability to withstand the gravitational forces at the highest rotor speeds and be sufficiently precise for sedimentation equilibrium and sedimentation velocity experiments. Sedimentation velocity experiments with bovine serum albumin as a reference molecule in 3D printed centerpieces with standard double-sector design result in sedimentation boundaries virtually indistinguishable from those in commercial double-sector epoxy centerpieces, with sedimentation coefficients well within the range of published values. The statistical error of the measurement is slightly above that obtained with commercial epoxy, but still below 1%. Facilitated by modern open-source design and fabrication paradigms, we believe 3D printed centerpieces and AUC accessories can spawn a variety of improvements in AUC experimental design, efficiency and resource allocation.
[ { "created": "Tue, 23 Feb 2016 15:02:53 GMT", "version": "v1" } ]
2016-08-23
[ [ "Desai", "Abhiksha", "" ], [ "Krynitsky", "Jonathan", "" ], [ "Pohida", "Thomas J.", "" ], [ "Zhao", "Huaying", "" ], [ "Schuck", "Peter", "" ] ]
Analytical ultracentrifugation (AUC) is a classical technique of physical biochemistry providing information on size, shape, and interactions of macromolecules from the analysis of their migration in centrifugal fields while free in solution. A key mechanical element in AUC is the centerpiece, a component of the sample cell assembly that is mounted between the optical windows to allow imaging and to seal the sample solution column against high vacuum while exposed to gravitational forces in excess of 300,000 g. For sedimentation velocity it needs to be precisely sector-shaped to allow unimpeded radial macromolecular migration. During the history of AUC a great variety of centerpiece designs have been developed for different types of experiments. Here, we report that centerpieces can now be readily fabricated by 3D printing at low cost, from a variety of materials, and with customized designs. The new centerpieces can exhibit sufficient mechanical stability to withstand the gravitational forces at the highest rotor speeds and be sufficiently precise for sedimentation equilibrium and sedimentation velocity experiments. Sedimentation velocity experiments with bovine serum albumin as a reference molecule in 3D printed centerpieces with standard double-sector design result in sedimentation boundaries virtually indistinguishable from those in commercial double-sector epoxy centerpieces, with sedimentation coefficients well within the range of published values. The statistical error of the measurement is slightly above that obtained with commercial epoxy, but still below 1%. Facilitated by modern open-source design and fabrication paradigms, we believe 3D printed centerpieces and AUC accessories can spawn a variety of improvements in AUC experimental design, efficiency and resource allocation.
1406.0357
Kazuhiro Takemoto
Kazuhiro Takemoto, Saori Kanamaru, Wenfeng Feng
Climatic seasonality may affect ecological network structure: Food webs and mutualistic networks
22 pages, 3 figures, 1 table
Biosystems 121, 29-37 (2014)
10.1016/j.biosystems.2014.06.002
null
q-bio.PE physics.data-an physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecological networks exhibit non-random structural patterns, such as modularity and nestedness, which indicate ecosystem stability, species diversity, and connectance. Such structure-stability relationships are well known. However, another important perspective is less well understood: the relationship between the environment and structure. Inspired by theoretical studies that suggest that network structure can change due to environmental variability, we collected data on a number of empirical food webs and mutualistic networks and evaluated the effect of climatic seasonality on ecological network structure. As expected, we found that climatic seasonality affects ecological network structure. In particular, an increase in modularity due to climatic seasonality was observed in food webs; however, it is debatable whether this occurs in mutualistic networks. Interestingly, the type of climatic seasonality that affects network structure differs with ecosystem type. Rainfall and temperature seasonality influence freshwater food webs and mutualistic networks, respectively; food webs are smaller, and more modular, with increasing rainfall seasonality. Mutualistic networks exhibit a higher diversity (particularly of animals) with increasing temperature seasonality. These results confirm the theoretical prediction that stability increases with greater perturbation. Although these results are still debatable because of several limitations in the data analysis, they may enhance our understanding of environment-structure relationships.
[ { "created": "Mon, 2 Jun 2014 13:19:29 GMT", "version": "v1" } ]
2014-06-12
[ [ "Takemoto", "Kazuhiro", "" ], [ "Kanamaru", "Saori", "" ], [ "Feng", "Wenfeng", "" ] ]
Ecological networks exhibit non-random structural patterns, such as modularity and nestedness, which indicate ecosystem stability, species diversity, and connectance. Such structure-stability relationships are well known. However, another important perspective is less well understood: the relationship between the environment and structure. Inspired by theoretical studies that suggest that network structure can change due to environmental variability, we collected data on a number of empirical food webs and mutualistic networks and evaluated the effect of climatic seasonality on ecological network structure. As expected, we found that climatic seasonality affects ecological network structure. In particular, an increase in modularity due to climatic seasonality was observed in food webs; however, it is debatable whether this occurs in mutualistic networks. Interestingly, the type of climatic seasonality that affects network structure differs with ecosystem type. Rainfall and temperature seasonality influence freshwater food webs and mutualistic networks, respectively; food webs are smaller, and more modular, with increasing rainfall seasonality. Mutualistic networks exhibit a higher diversity (particularly of animals) with increasing temperature seasonality. These results confirm the theoretical prediction that stability increases with greater perturbation. Although these results are still debatable because of several limitations in the data analysis, they may enhance our understanding of environment-structure relationships.
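As a hedged pointer to how one of the structural measures in the abstract is typically computed, the sketch below estimates the modularity of a toy network with NetworkX; the random graph is a placeholder for an empirical food web or mutualistic network, and nestedness (not available in NetworkX) is omitted.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Placeholder network standing in for an empirical food web.
g = nx.erdos_renyi_graph(n=80, p=0.05, seed=7)

communities = greedy_modularity_communities(g)
q = modularity(g, communities)
print(f"{len(communities)} modules, modularity Q = {q:.3f}")
```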
2201.06900
Anil Seth
Anil Seth, Tomasz Korbak, Alexander Tschantz
A continuity of Markov blanket interpretations under the Free Energy Principle
4 pages, 0 figures, invited commentary
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Bruineberg and colleagues helpfully distinguish between instrumental and ontological interpretations of Markov blankets, exposing the dangers of using the former to make claims about the latter. However, proposing a sharp distinction neglects the value of recognising a continuum spanning from instrumental to ontological. This value extends to the related distinction between being and having a model.
[ { "created": "Tue, 18 Jan 2022 12:12:47 GMT", "version": "v1" } ]
2022-01-19
[ [ "Seth", "Anil", "" ], [ "Korbak", "Tomasz", "" ], [ "Tschantz", "Alexander", "" ] ]
Bruineberg and colleagues helpfully distinguish between instrumental and ontological interpretations of Markov blankets, exposing the dangers of using the former to make claims about the latter. However, proposing a sharp distinction neglects the value of recognising a continuum spanning from instrumental to ontological. This value extends to the related distinction between being and having a model.
2112.04985
Sayantari Ghosh
Priya Chakraborty and Sayantari Ghosh
Emergent Regulatory Response and Shift of Half induction point under Resource Competition in Genetic circuits
10 pages, 6 figures
null
null
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synthetic genetic circuits are implemented in living cells for their operation. During gene expression, proteins are produced from the respective genes through the formation of complexes in the processes of transcription and translation. In transcription the circuit uses host-cell resources such as RNA polymerase (RNAP), and in translation ribosomes, tRNAs and other cellular resources are supplied to the operating circuit. As the cell contains these resources in limited numbers, the circuit can suffer from unanticipated resource competition, which might destroy the circuit functionality or introduce emergent responses. In this paper, we have studied a three-gene motif under resource competition, where interesting behaviour similar to regulatory responses occurs due to the limited supply of necessary resources. The system of interest exhibits prominent changes in behaviour which can be observed experimentally. We focus on two specific aspects, namely the dynamic range and the half-induction point, which inherently describe the circuit functionalities and can be affected by the corresponding resource affinity and availability.
[ { "created": "Thu, 9 Dec 2021 15:26:05 GMT", "version": "v1" } ]
2021-12-10
[ [ "Chakraborty", "Priya", "" ], [ "Ghosh", "Sayantari", "" ] ]
Synthetic genetic circuits are implemented in living cells for their operation. During gene expression, proteins are produced from the respective genes through the formation of complexes in the processes of transcription and translation. In transcription the circuit uses host-cell resources such as RNA polymerase (RNAP), and in translation ribosomes, tRNAs and other cellular resources are supplied to the operating circuit. As the cell contains these resources in limited numbers, the circuit can suffer from unanticipated resource competition, which might destroy the circuit functionality or introduce emergent responses. In this paper, we have studied a three-gene motif under resource competition, where interesting behaviour similar to regulatory responses occurs due to the limited supply of necessary resources. The system of interest exhibits prominent changes in behaviour which can be observed experimentally. We focus on two specific aspects, namely the dynamic range and the half-induction point, which inherently describe the circuit functionalities and can be affected by the corresponding resource affinity and availability.
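A hedged toy model of the effect being described: two genes drawing on one limited pool of translational resources, integrated with SciPy. The crude resource-partitioning rule and all parameter values are illustrative assumptions, not the three-gene motif analysed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

R_TOTAL = 1.0            # total shared resource (e.g. ribosomes), arbitrary units
K1, K2 = 0.2, 0.2        # resource affinities of the two genes
BETA1, BETA2 = 5.0, 5.0  # maximal production rates
DELTA = 1.0              # protein degradation/dilution rate

def rhs(t, y, demand2):
    p1, p2 = y
    # Free resource after both genes sequester their share (crude partitioning).
    r_free = R_TOTAL / (1.0 + 1.0 / K1 + demand2 / K2)
    dp1 = BETA1 * r_free / (K1 + r_free) - DELTA * p1
    dp2 = demand2 * BETA2 * r_free / (K2 + r_free) - DELTA * p2
    return [dp1, dp2]

# Increasing the second gene's demand lowers gene 1's steady-state output,
# an emergent regulation-like effect with no direct interaction between genes.
for demand2 in (0.0, 1.0, 5.0):
    sol = solve_ivp(rhs, (0, 50), [0.0, 0.0], args=(demand2,), rtol=1e-8)
    print(demand2, round(float(sol.y[0, -1]), 3))
```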
2007.05121
Anand Ramachandran
Yun Heo, Gowthami Manikandan, Anand Ramachandran, Deming Chen
Comprehensive assessment of error correction methods for high-throughput sequencing data
null
null
10.36255/exonpublications.bioinformatics.2021.ch6
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
The advent of DNA and RNA sequencing has revolutionized the study of genomics and molecular biology. Next generation sequencing (NGS) technologies like Illumina, Ion Torrent, SOLiD sequencing etc. have brought about a quick and cheap way to sequence genomes. Recently, third generation sequencing (TGS) technologies like PacBio and Oxford Nanopore Technology (ONT) have also been developed. Different technologies use different underlying methods for sequencing and are prone to different error rates. Though many tools exist for error correction of sequencing data from NGS and TGS methods, no standard method is available yet to evaluate the accuracy and effectiveness of these error-correction tools. In this study, we present a Software Package for Error Correction Tool Assessment on nuCLEic acid sequences (SPECTACLE) providing comprehensive algorithms to evaluate error-correction methods for DNA and RNA sequencing, for NGS and TGS platforms. We also present a compilation of sequencing datasets for Illumina, PacBio and ONT platforms that present challenging scenarios for error-correction tools. Using these datasets and SPECTACLE, we evaluate the performance of 23 different error-correction tools and present unique and helpful insights into their strengths and weaknesses. We hope that our methodology will standardize the evaluation of DNA and RNA error-correction tools in the future.
[ { "created": "Fri, 10 Jul 2020 00:41:38 GMT", "version": "v1" }, { "created": "Thu, 25 Mar 2021 15:49:36 GMT", "version": "v2" } ]
2021-03-26
[ [ "Heo", "Yun", "" ], [ "Manikandan", "Gowthami", "" ], [ "Ramachandran", "Anand", "" ], [ "Chen", "Deming", "" ] ]
The advent of DNA and RNA sequencing has revolutionized the study of genomics and molecular biology. Next generation sequencing (NGS) technologies like Illumina, Ion Torrent, SOLiD sequencing etc. have brought about a quick and cheap way to sequence genomes. Recently, third generation sequencing (TGS) technologies like PacBio and Oxford Nanopore Technology (ONT) have also been developed. Different technologies use different underlying methods for sequencing and are prone to different error rates. Though many tools exist for error correction of sequencing data from NGS and TGS methods, no standard method is available yet to evaluate the accuracy and effectiveness of these error-correction tools. In this study, we present a Software Package for Error Correction Tool Assessment on nuCLEic acid sequences (SPECTACLE) providing comprehensive algorithms to evaluate error-correction methods for DNA and RNA sequencing, for NGS and TGS platforms. We also present a compilation of sequencing datasets for Illumina, PacBio and ONT platforms that present challenging scenarios for error-correction tools. Using these datasets and SPECTACLE, we evaluate the performance of 23 different error-correction tools and present unique and helpful insights into their strengths and weaknesses. We hope that our methodology will standardize the evaluation of DNA and RNA error-correction tools in the future.
2306.16184
Sedigheh Behrouzifar
Sedigheh Behrouzifar
Exploring novel prognostic biomarkers and biologic processes involved in NASH, cirrhosis and HCC based on survival analysis using systems biology approach
20 pages, 5 figures, 1 tables
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by-nc-nd/4.0/
There is an unmet need to develop medications or drug combinations which can stop the advancement of NASH to liver cirrhosis and HCC. Therefore, identifying key biomarkers based on overall survival and exploring the biological processes involved in the pathogenesis and progression of NASH toward cirrhosis and HCC is necessary to improve therapeutic interventions. The microarray dataset from the GPL13667 platform was downloaded from the Gene Expression Omnibus (GEO). The inclusion criteria for the DEGs were an adjusted p-value <0.05 and a log(2) fold change >1. In the first step, the protein-protein interaction (PPI) network of differentially expressed genes (DEGs) in each of the three groups (NASH, cirrhosis and HCC) was constructed using the STRING online database. In the second step, the DEGs of each group were imported into Cytoscape software separately, and the genes with degree >3 were selected. In the third step, the genes with degree >3 were imported into Gephi software (version 0.9.2) and the genes with betweenness centrality >0 were selected. A Venn diagram of these genes was constructed for the NASH, cirrhotic and HCC groups. According to the Venn diagram, 96 (NASH), 30 (cirrhotic) and 213 (HCC) genes were specifically upregulated (based on the inclusion criteria). Among the group-specific genes, 22 (NASH), 5 (cirrhotic) and 82 (HCC) genes were associated with poor overall survival. The overlap among the 3 groups (NASH, cirrhosis and HCC) contained 4 upregulated genes: HLA-F, HLA-DPA1, TPM1 and YWHAZ. Of these 4 upregulated genes, only YWHAZ was associated with poor overall survival. The present study detected new candidate genes and key biological processes in NASH, cirrhosis and HCC based on overall survival using in silico analysis. In vitro and in vivo studies to verify these results are therefore necessary.
[ { "created": "Wed, 28 Jun 2023 13:05:54 GMT", "version": "v1" } ]
2023-06-29
[ [ "Behrouzifar", "Sedigheh", "" ] ]
There is an unmet need to develop medications or drug combinations which can stop the advancement of NASH to liver cirrhosis and HCC. Therefore, identifying key biomarkers based on overall survival and exploring the biological processes involved in the pathogenesis and progression of NASH toward cirrhosis and HCC is necessary to improve therapeutic interventions. The microarray dataset from the GPL13667 platform was downloaded from the Gene Expression Omnibus (GEO). The inclusion criteria for the DEGs were an adjusted p-value <0.05 and a log(2) fold change >1. In the first step, the protein-protein interaction (PPI) network of differentially expressed genes (DEGs) in each of the three groups (NASH, cirrhosis and HCC) was constructed using the STRING online database. In the second step, the DEGs of each group were imported into Cytoscape software separately, and the genes with degree >3 were selected. In the third step, the genes with degree >3 were imported into Gephi software (version 0.9.2) and the genes with betweenness centrality >0 were selected. A Venn diagram of these genes was constructed for the NASH, cirrhotic and HCC groups. According to the Venn diagram, 96 (NASH), 30 (cirrhotic) and 213 (HCC) genes were specifically upregulated (based on the inclusion criteria). Among the group-specific genes, 22 (NASH), 5 (cirrhotic) and 82 (HCC) genes were associated with poor overall survival. The overlap among the 3 groups (NASH, cirrhosis and HCC) contained 4 upregulated genes: HLA-F, HLA-DPA1, TPM1 and YWHAZ. Of these 4 upregulated genes, only YWHAZ was associated with poor overall survival. The present study detected new candidate genes and key biological processes in NASH, cirrhosis and HCC based on overall survival using in silico analysis. In vitro and in vivo studies to verify these results are therefore necessary.
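A hedged sketch of the network-filtering steps described above (degree > 3, then betweenness centrality > 0, then a Venn-style intersection across groups), written with NetworkX; the random graphs are hypothetical stand-ins for the STRING, Cytoscape and Gephi outputs.

```python
import networkx as nx

def hub_like_genes(ppi_graph):
    """Mimic the described filtering: degree > 3, then betweenness centrality > 0."""
    high_degree = [n for n, d in ppi_graph.degree() if d > 3]
    sub = ppi_graph.subgraph(high_degree)
    bc = nx.betweenness_centrality(sub)
    return {n for n, b in bc.items() if b > 0}

# Placeholder PPI networks for the three groups (stand-ins for STRING output).
graphs = {name: nx.erdos_renyi_graph(100, 0.06, seed=i)
          for i, name in enumerate(["NASH", "cirrhosis", "HCC"])}
selected = {name: hub_like_genes(g) for name, g in graphs.items()}

# Venn-style overlap across the three groups.
common = set.intersection(*selected.values())
print({name: len(s) for name, s in selected.items()}, "shared:", len(common))
```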
1911.01865
Chiara Villa
Tommaso Lorenzi, Fiona R. Macfarlane and Chiara Villa
Discrete and continuum models for the evolutionary and spatial dynamics of cancer: a very short introduction through two case studies
21 pages, 5 figures, BIOMAT2019 (19th International Symposium on Mathematical and Computational Biology)
null
null
null
q-bio.TO nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give a very short introduction to discrete and continuum models for the evolutionary and spatial dynamics of cancer through two case studies: a model for the evolutionary dynamics of cancer cells under cytotoxic therapy and a model for the mechanical interaction between healthy and cancer cells during tumour growth. First we develop the discrete models, whereby the dynamics of single cells are described through a set of rules that result in branching random walks. Then we present the corresponding continuum models, which are formulated in terms of non-local and nonlinear partial differential equations, and we summarise the key properties of their solutions. Finally, we carry out numerical simulations of the discrete models and we construct numerical solutions of the corresponding continuum models. The biological implications of the results obtained are briefly discussed.
[ { "created": "Tue, 5 Nov 2019 15:24:47 GMT", "version": "v1" }, { "created": "Wed, 6 Nov 2019 08:01:51 GMT", "version": "v2" } ]
2019-11-07
[ [ "Lorenzi", "Tommaso", "" ], [ "Macfarlane", "Fiona R.", "" ], [ "Villa", "Chiara", "" ] ]
We give a very short introduction to discrete and continuum models for the evolutionary and spatial dynamics of cancer through two case studies: a model for the evolutionary dynamics of cancer cells under cytotoxic therapy and a model for the mechanical interaction between healthy and cancer cells during tumour growth. First we develop the discrete models, whereby the dynamics of single cells are described through a set of rules that result in branching random walks. Then we present the corresponding continuum models, which are formulated in terms of non-local and nonlinear partial differential equations, and we summarise the key properties of their solutions. Finally, we carry out numerical simulations of the discrete models and we construct numerical solutions of the corresponding continuum models. The biological implications of the results obtained are briefly discussed.
1107.5095
Frederick Matsen IV
Frederick A. Matsen and Steven N. Evans
Edge principal components and squash clustering: using the special structure of phylogenetic placement data for sample comparison
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Principal components (PCA) and hierarchical clustering are two of the most heavily used techniques for analyzing the differences between nucleic acid sequence samples sampled from a given environment. However, a classical application of these techniques to distances computed between samples can lack transparency because there is no ready interpretation of the axes of classical PCA plots, and it is difficult to assign any clear intuitive meaning to either the internal nodes or the edge lengths of trees produced by distance-based hierarchical clustering methods such as UPGMA. We show that more interesting and interpretable results are produced by two new methods that leverage the special structure of phylogenetic placement data. Edge principal components analysis enables the detection of important differences between samples that contain closely related taxa. Each principal component axis is simply a collection of signed weights on the edges of the phylogenetic tree, and these weights are easily visualized by a suitable thickening and coloring of the edges. Squash clustering outputs a (rooted) clustering tree in which each internal node corresponds to an appropriate "average" of the original samples at the leaves below the node. Moreover, the length of an edge is a suitably defined distance between the averaged samples associated with the two incident nodes, rather than the less interpretable average of distances produced by UPGMA. We present these methods and illustrate their use with data from the microbiome of the human vagina.
[ { "created": "Mon, 25 Jul 2011 23:39:25 GMT", "version": "v1" } ]
2011-07-27
[ [ "Matsen", "Frederick A.", "" ], [ "Evans", "Steven N.", "" ] ]
Principal components (PCA) and hierarchical clustering are two of the most heavily used techniques for analyzing the differences between nucleic acid sequence samples sampled from a given environment. However, a classical application of these techniques to distances computed between samples can lack transparency because there is no ready interpretation of the axes of classical PCA plots, and it is difficult to assign any clear intuitive meaning to either the internal nodes or the edge lengths of trees produced by distance-based hierarchical clustering methods such as UPGMA. We show that more interesting and interpretable results are produced by two new methods that leverage the special structure of phylogenetic placement data. Edge principal components analysis enables the detection of important differences between samples that contain closely related taxa. Each principal component axis is simply a collection of signed weights on the edges of the phylogenetic tree, and these weights are easily visualized by a suitable thickening and coloring of the edges. Squash clustering outputs a (rooted) clustering tree in which each internal node corresponds to an appropriate "average" of the original samples at the leaves below the node. Moreover, the length of an edge is a suitably defined distance between the averaged samples associated with the two incident nodes, rather than the less interpretable average of distances produced by UPGMA. We present these methods and illustrate their use with data from the microbiome of the human vagina.
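To make the geometry of edge principal components more concrete, here is a hypothetical numpy sketch in which each sample is summarised by a vector of placement mass per tree edge and the principal axes are read off as signed edge weights. The function name and toy data are invented, and the published method works on suitably transformed placement-mass differences, so this is only a simplified illustration of the idea.

import numpy as np

def edge_pca(edge_mass, n_components=2):
    # edge_mass: (n_samples, n_edges) array, one coordinate per edge of the reference tree
    X = edge_mass - edge_mass.mean(axis=0)            # centre each edge coordinate
    U, S, Vt = np.linalg.svd(X, full_matrices=False)  # SVD of the centred data matrix
    scores = U[:, :n_components] * S[:n_components]   # sample coordinates on the principal axes
    axes = Vt[:n_components]                          # each axis is a vector of signed edge weights
    return scores, axes

rng = np.random.default_rng(0)
toy_samples = rng.random((5, 8))                      # 5 samples placed on a tree with 8 edges
scores, axes = edge_pca(toy_samples)
print(scores.shape, axes.shape)                       # (5, 2) and (2, 8)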
1709.01712
Les Hatton
Les Hatton, Gregory Warr
Information Theory and the Length Distribution of all Discrete Systems
70 pages, 53 figures, inc. 30 pages of Appendices
null
null
null
q-bio.OT cs.IT math.IT physics.bio-ph physics.soc-ph q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We begin with the extraordinary observation that the length distribution of 80 million proteins in UniProt, the Universal Protein Resource, measured in amino acids, is qualitatively identical to the length distribution of large collections of computer functions measured in programming language tokens, at all scales. That two such disparate discrete systems share important structural properties suggests that yet other apparently unrelated discrete systems might share the same properties, and certainly invites an explanation. We demonstrate that this is inevitable for all discrete systems of components built from tokens or symbols. Departing from existing work by embedding the Conservation of Hartley-Shannon information (CoHSI) in a classical statistical mechanics framework, we identify two kinds of discrete system, heterogeneous and homogeneous. Heterogeneous systems contain components built from a unique alphabet of tokens and yield an implicit CoHSI distribution with a sharp unimodal peak asymptoting to a power-law. Homogeneous systems contain components each built from just one kind of token unique to that component and yield a CoHSI distribution corresponding to Zipf's law. This theory is applied to heterogeneous systems (proteome, computer software, music); homogeneous systems (language texts, abundance of the elements); and to systems in which both heterogeneous and homogeneous behaviour co-exist (word frequencies and word length frequencies in language texts). In each case, the predictions of the theory are tested and supported to high levels of statistical significance. We also show that in the same heterogeneous system, different but consistent alphabets must be related by a power-law. We demonstrate this on a large body of music by excluding and including note duration in the definition of the unique alphabet of notes.
[ { "created": "Wed, 6 Sep 2017 08:23:31 GMT", "version": "v1" } ]
2017-09-13
[ [ "Hatton", "Les", "" ], [ "Warr", "Gregory", "" ] ]
We begin with the extraordinary observation that the length distribution of 80 million proteins in UniProt, the Universal Protein Resource, measured in amino acids, is qualitatively identical to the length distribution of large collections of computer functions measured in programming language tokens, at all scales. That two such disparate discrete systems share important structural properties suggests that yet other apparently unrelated discrete systems might share the same properties, and certainly invites an explanation. We demonstrate that this is inevitable for all discrete systems of components built from tokens or symbols. Departing from existing work by embedding the Conservation of Hartley-Shannon information (CoHSI) in a classical statistical mechanics framework, we identify two kinds of discrete system, heterogeneous and homogeneous. Heterogeneous systems contain components built from a unique alphabet of tokens and yield an implicit CoHSI distribution with a sharp unimodal peak asymptoting to a power-law. Homogeneous systems contain components each built from just one kind of token unique to that component and yield a CoHSI distribution corresponding to Zipf's law. This theory is applied to heterogeneous systems (proteome, computer software, music); homogeneous systems (language texts, abundance of the elements); and to systems in which both heterogeneous and homogeneous behaviour co-exist (word frequencies and word length frequencies in language texts). In each case, the predictions of the theory are tested and supported to high levels of statistical significance. We also show that in the same heterogeneous system, different but consistent alphabets must be related by a power-law. We demonstrate this on a large body of music by excluding and including note duration in the definition of the unique alphabet of notes.
1607.07474
Pavel Khromov
Pavel Khromov, Constantin D. Malliaris and Alexandre V. Morozov
Generalization of the Ewens sampling formula to arbitrary fitness landscapes
24 pages, 6 figures
null
10.1101/065011
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In considering evolution of transcribed regions, regulatory modules, and other genomic loci of interest, we are often faced with a situation in which the number of allelic states greatly exceeds the population size. In this limit, the population eventually adopts a steady state characterized by mutation-selection-drift balance. Although new alleles continue to be explored through mutation, the statistics of the population, and in particular the probabilities of seeing specific allelic configurations in samples taken from a population, do not change with time. In the absence of selection, probabilities of allelic configurations are given by the Ewens sampling formula, widely used in population genetics to detect deviations from neutrality. Here we develop an extension of this formula to arbitrary, possibly epistatic, fitness landscapes. Although our approach is general, we focus on the class of landscapes in which alleles are grouped into two, three, or several fitness states. This class of landscapes yields sampling probabilities that are computationally more tractable, and can form a basis for the inference of selection signatures from sequence data. We demonstrate that, for a sizeable range of mutation rates and selection coefficients, the steady-state allelic diversity is not neutral. Therefore, it may be used to infer selection coefficients, as well as other key evolutionary parameters, using high-throughput sequencing of evolving populations to collect data on locus polymorphisms. We also find that our theory remains sufficiently accurate even if the assumptions such as the infinite allele limit and the "full connectivity" assumption in which each allele can mutate into any other allele are relaxed. Thus, our framework establishes a theoretical foundation for inferring selection signatures from samples of sequences produced by evolution on epistatic fitness landscapes.
[ { "created": "Mon, 25 Jul 2016 20:53:16 GMT", "version": "v1" } ]
2016-07-27
[ [ "Khromov", "Pavel", "" ], [ "Malliaris", "Constantin D.", "" ], [ "Morozov", "Alexandre V.", "" ] ]
In considering evolution of transcribed regions, regulatory modules, and other genomic loci of interest, we are often faced with a situation in which the number of allelic states greatly exceeds the population size. In this limit, the population eventually adopts a steady state characterized by mutation-selection-drift balance. Although new alleles continue to be explored through mutation, the statistics of the population, and in particular the probabilities of seeing specific allelic configurations in samples taken from a population, do not change with time. In the absence of selection, probabilities of allelic configurations are given by the Ewens sampling formula, widely used in population genetics to detect deviations from neutrality. Here we develop an extension of this formula to arbitrary, possibly epistatic, fitness landscapes. Although our approach is general, we focus on the class of landscapes in which alleles are grouped into two, three, or several fitness states. This class of landscapes yields sampling probabilities that are computationally more tractable, and can form a basis for the inference of selection signatures from sequence data. We demonstrate that, for a sizeable range of mutation rates and selection coefficients, the steady-state allelic diversity is not neutral. Therefore, it may be used to infer selection coefficients, as well as other key evolutionary parameters, using high-throughput sequencing of evolving populations to collect data on locus polymorphisms. We also find that our theory remains sufficiently accurate even if the assumptions such as the infinite allele limit and the "full connectivity" assumption in which each allele can mutate into any other allele are relaxed. Thus, our framework establishes a theoretical foundation for inferring selection signatures from samples of sequences produced by evolution on epistatic fitness landscapes.
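For reference, the neutral baseline that the preceding abstract generalises is the classical Ewens sampling formula. The short Python function below evaluates it for a configuration a = (a_1, ..., a_n), where a_j counts the alleles represented by exactly j individuals in a sample of size n; it is included only as a worked illustration of the neutral case, not of the generalised formula.

from math import factorial

def ewens_probability(a, theta):
    # probability of the allelic configuration a under neutrality (Ewens, 1972)
    n = sum(j * a_j for j, a_j in enumerate(a, start=1))
    rising = 1.0
    for k in range(n):
        rising *= theta + k                        # rising factorial theta (theta+1) ... (theta+n-1)
    prob = factorial(n) / rising
    for j, a_j in enumerate(a, start=1):
        prob *= (theta / j) ** a_j / factorial(a_j)
    return prob

# sample of n = 4 genes: two singleton alleles and one allele carried by two individuals
print(ewens_probability([2, 1, 0, 0], theta=1.0))   # equals 0.25 for theta = 1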
1009.0680
Amaury Lambert
Nicolas Champagnat and Amaury Lambert
Splitting trees with neutral Poissonian mutations I: Small families
32 pages, 2 figures. Companion paper in preparation "Splitting trees with neutral Poissonian mutations II: Large or old families"
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a neutral dynamical model of biological diversity, where individuals live and reproduce independently. They have i.i.d. lifetime durations (which are not necessarily exponentially distributed) and give birth (singly) at constant rate b. Such a genealogical tree is usually called a splitting tree, and the population counting process (N_t;t\ge 0) is a homogeneous, binary Crump--Mode--Jagers process. We assume that individuals independently experience mutations at constant rate \theta during their lifetimes, under the infinite-alleles assumption: each mutation instantaneously confers a brand new type, called allele, to its carrier. We are interested in the allele frequency spectrum at time t, i.e., the number A(t) of distinct alleles represented in the population at time t, and more specifically, the numbers A(k,t) of alleles represented by k individuals at time t, k=1,2,...,N_t. We mainly use two classes of tools: coalescent point processes and branching processes counted by random characteristics. We provide explicit formulae for the expectation of A(k,t) in a coalescent point process conditional on population size, which apply to the special case of splitting trees. We separately derive the a.s. limits of A(k,t)/N_t and of A(t)/N_t thanks to random characteristics. Last, we separately compute the expected homozygosity by applying a method characterizing the dynamics of the tree distribution as the origination time of the tree moves back in time, in the spirit of backward Kolmogorov equations.
[ { "created": "Fri, 3 Sep 2010 14:13:16 GMT", "version": "v1" } ]
2010-09-06
[ [ "Champagnat", "Nicolas", "" ], [ "Lambert", "Amaury", "" ] ]
We consider a neutral dynamical model of biological diversity, where individuals live and reproduce independently. They have i.i.d. lifetime durations (which are not necessarily exponentially distributed) and give birth (singly) at constant rate b. Such a genealogical tree is usually called a splitting tree, and the population counting process (N_t;t\ge 0) is a homogeneous, binary Crump--Mode--Jagers process. We assume that individuals independently experience mutations at constant rate \theta during their lifetimes, under the infinite-alleles assumption: each mutation instantaneously confers a brand new type, called allele, to its carrier. We are interested in the allele frequency spectrum at time t, i.e., the number A(t) of distinct alleles represented in the population at time t, and more specifically, the numbers A(k,t) of alleles represented by k individuals at time t, k=1,2,...,N_t. We mainly use two classes of tools: coalescent point processes and branching processes counted by random characteristics. We provide explicit formulae for the expectation of A(k,t) in a coalescent point process conditional on population size, which apply to the special case of splitting trees. We separately derive the a.s. limits of A(k,t)/N_t and of A(t)/N_t thanks to random characteristics. Last, we separately compute the expected homozygosity by applying a method characterizing the dynamics of the tree distribution as the origination time of the tree moves back in time, in the spirit of backward Kolmogorov equations.
1805.10235
Johanna Senk
Johanna Senk, Espen Hagen, Sacha J. van Albada, Markus Diesmann
Reconciliation of weak pairwise spike-train correlations and highly coherent local field potentials across space
51 pages, 9 figures, 5 tables
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-electrode arrays covering several square millimeters of neural tissue provide simultaneous access to population signals such as extracellular potentials and spiking activity of one hundred or more individual neurons. The interpretation of the recorded data calls for multiscale computational models with corresponding spatial dimensions and signal predictions. Such models facilitate identifying candidate mechanisms underlying experimentally observed spatiotemporal activity patterns in the cortex. Multi-layer spiking neuron network models of local cortical circuits covering about 1 mm$^2$ have been developed, integrating experimentally obtained neuron-type-specific connectivity data and reproducing features of observed in-vivo spiking statistics. Local field potentials (LFPs) can be computed from the simulated spiking activity. We here extend a local network and LFP model to an area of 4$\times$4 mm$^2$. The upscaling preserves the densities of neurons while capturing a larger proportion of the local synapses within the model. The procedure further introduces distance-dependent connection probabilities and conduction delays. Based on model predictions of spiking activity and LFPs, we find that the upscaling procedure preserves the overall spiking statistics of the original model and reproduces asynchronous irregular spiking across populations and weak pairwise spike-train correlations in agreement with experimental data recorded in the sensory cortex. In contrast with the weak spike-train correlations, the correlation of LFP signals is strong and decays over a distance of several hundred micrometers, compatible with experimental observations. Enhanced spatial coherence in the low-gamma band around 50 Hz may explain the recent experimental report of an apparent band-pass filter effect in the spatial reach of the LFP.
[ { "created": "Fri, 25 May 2018 16:23:00 GMT", "version": "v1" }, { "created": "Wed, 27 Sep 2023 14:13:07 GMT", "version": "v2" } ]
2023-09-29
[ [ "Senk", "Johanna", "" ], [ "Hagen", "Espen", "" ], [ "van Albada", "Sacha J.", "" ], [ "Diesmann", "Markus", "" ] ]
Multi-electrode arrays covering several square millimeters of neural tissue provide simultaneous access to population signals such as extracellular potentials and spiking activity of one hundred or more individual neurons. The interpretation of the recorded data calls for multiscale computational models with corresponding spatial dimensions and signal predictions. Such models facilitate identifying candidate mechanisms underlying experimentally observed spatiotemporal activity patterns in the cortex. Multi-layer spiking neuron network models of local cortical circuits covering about 1 mm$^2$ have been developed, integrating experimentally obtained neuron-type-specific connectivity data and reproducing features of observed in-vivo spiking statistics. Local field potentials (LFPs) can be computed from the simulated spiking activity. We here extend a local network and LFP model to an area of 4$\times$4 mm$^2$. The upscaling preserves the densities of neurons while capturing a larger proportion of the local synapses within the model. The procedure further introduces distance-dependent connection probabilities and conduction delays. Based on model predictions of spiking activity and LFPs, we find that the upscaling procedure preserves the overall spiking statistics of the original model and reproduces asynchronous irregular spiking across populations and weak pairwise spike-train correlations in agreement with experimental data recorded in the sensory cortex. In contrast with the weak spike-train correlations, the correlation of LFP signals is strong and decays over a distance of several hundred micrometers, compatible with experimental observations. Enhanced spatial coherence in the low-gamma band around 50 Hz may explain the recent experimental report of an apparent band-pass filter effect in the spatial reach of the LFP.
1603.00415
Leo Ai
Leo Ai, Jerel K. Mueller, Andrea Grant, Yigitcan Eryaman, and Wynn Legon
Transcranial Focused Ultrasound for BOLD fMRI Signal Modulation in Humans
EMBC 2016
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transcranial focused ultrasound (tFUS) is an emerging form of non-surgical human neuromodulation that confers advantages over existing electro and electromagnetic technologies by providing a superior spatial resolution on the millimeter scale as well as the capability to target sub-cortical structures non-invasively. An examination of the pairing of tFUS and blood oxygen level dependent (BOLD) functional MRI (fMRI) in humans is presented here.
[ { "created": "Tue, 1 Mar 2016 19:22:12 GMT", "version": "v1" } ]
2016-03-02
[ [ "Ai", "Leo", "" ], [ "Mueller", "Jerel K.", "" ], [ "Grant", "Andrea", "" ], [ "Eryaman", "Yigitcan", "" ], [ "Legon", "Wynn", "" ] ]
Transcranial focused ultrasound (tFUS) is an emerging form of non-surgical human neuromodulation that confers advantages over existing electro and electromagnetic technologies by providing a superior spatial resolution on the millimeter scale as well as the capability to target sub-cortical structures non-invasively. An examination of the pairing of tFUS and blood oxygen level dependent (BOLD) functional MRI (fMRI) in humans is presented here.
2308.00526
Nikos Melanitis
Nikos Melanitis and Konstantina Nikita
Visual attention information can be traced on cortical response but not on the retina: evidence from electrophysiological mouse data using natural images as stimuli
null
null
null
null
q-bio.NC cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Visual attention forms the basis of understanding the visual world. In this work we follow a computational approach to investigate the biological basis of visual attention. We analyze retinal and cortical electrophysiological data from mouse. Visual stimuli are natural images depicting real-world scenes. Our results show that in primary visual cortex (V1), a subset of around $10\%$ of the neurons responds differently to salient versus non-salient visual regions. Visual attention information was not traced in retinal response. It appears that the retina remains naive concerning visual attention; cortical response gets modulated to interpret visual attention information. Experimental animal studies may be designed to further explore the biological basis of visual attention we traced in this study. In applied and translational science, our study contributes to the design of improved visual prosthesis systems -- systems that create artificial visual percepts for visually impaired individuals by electronic implants placed on either the retina or the cortex.
[ { "created": "Tue, 1 Aug 2023 13:09:48 GMT", "version": "v1" } ]
2023-08-02
[ [ "Melanitis", "Nikos", "" ], [ "Nikita", "Konstantina", "" ] ]
Visual attention forms the basis of understanding the visual world. In this work we follow a computational approach to investigate the biological basis of visual attention. We analyze retinal and cortical electrophysiological data from mouse. Visual stimuli are natural images depicting real-world scenes. Our results show that in primary visual cortex (V1), a subset of around $10\%$ of the neurons responds differently to salient versus non-salient visual regions. Visual attention information was not traced in retinal response. It appears that the retina remains naive concerning visual attention; cortical response gets modulated to interpret visual attention information. Experimental animal studies may be designed to further explore the biological basis of visual attention we traced in this study. In applied and translational science, our study contributes to the design of improved visual prosthesis systems -- systems that create artificial visual percepts for visually impaired individuals by electronic implants placed on either the retina or the cortex.
2305.19573
Pitambar Khanra
Pitambar Khanra, Johan Nakuci, Sarah Muldoon, Takamitsu Watanabe, and Naoki Masuda
Reliability of energy landscape analysis of resting-state functional MRI data
16 pages, 5 figures, 7 tables
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Energy landscape analysis is a data-driven method to analyze multidimensional time series, including functional magnetic resonance imaging (fMRI) data. It has been shown to be a useful characterization of fMRI data in health and disease. It fits an Ising model to the data and captures the dynamics of the data as movement of a noisy ball constrained on the energy landscape derived from the estimated Ising model. In the present study, we examine test-retest reliability of the energy landscape analysis. To this end, we construct a permutation test that assesses whether or not indices characterizing the energy landscape are more consistent across different sets of scanning sessions from the same participant (i.e., within-participant reliability) than across different sets of sessions from different participants (i.e., between-participant reliability). We show that the energy landscape analysis has significantly higher within-participant than between-participant test-retest reliability with respect to four commonly used indices. We also show that a variational Bayesian method, which enables us to estimate energy landscapes tailored to each participant, displays comparable test-retest reliability to that using the conventional likelihood maximization method. The proposed methodology paves the way to perform individual-level energy landscape analysis for given data sets with a statistically controlled reliability.
[ { "created": "Wed, 31 May 2023 05:49:56 GMT", "version": "v1" } ]
2023-06-01
[ [ "Khanra", "Pitambar", "" ], [ "Nakuci", "Johan", "" ], [ "Muldoon", "Sarah", "" ], [ "Watanabe", "Takamitsu", "" ], [ "Masuda", "Naoki", "" ] ]
Energy landscape analysis is a data-driven method to analyze multidimensional time series, including functional magnetic resonance imaging (fMRI) data. It has been shown to be a useful characterization of fMRI data in health and disease. It fits an Ising model to the data and captures the dynamics of the data as movement of a noisy ball constrained on the energy landscape derived from the estimated Ising model. In the present study, we examine test-retest reliability of the energy landscape analysis. To this end, we construct a permutation test that assesses whether or not indices characterizing the energy landscape are more consistent across different sets of scanning sessions from the same participant (i.e., within-participant reliability) than across different sets of sessions from different participants (i.e., between-participant reliability). We show that the energy landscape analysis has significantly higher within-participant than between-participant test-retest reliability with respect to four commonly used indices. We also show that a variational Bayesian method, which enables us to estimate energy landscapes tailored to each participant, displays comparable test-retest reliability to that using the conventional likelihood maximization method. The proposed methodology paves the way to perform individual-level energy landscape analysis for given data sets with a statistically controlled reliability.
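To make the construction concrete, the sketch below enumerates the energy landscape of a small pairwise maximum-entropy (Ising) model and lists its local minima, i.e. the states whose energy is lower than that of every single-flip neighbour. The parameters h and J are random stand-ins for values that would normally be fitted to binarised fMRI activity; this is an illustration of the idea rather than the authors' pipeline.

import itertools
import numpy as np

def energy(state, h, J):
    s = np.asarray(state, dtype=float)
    return -h @ s - 0.5 * s @ J @ s              # standard Ising energy with symmetric J, zero diagonal

def local_minima(h, J):
    n = len(h)
    minima = []
    for state in itertools.product([-1, 1], repeat=n):
        e = energy(state, h, J)
        is_min = True
        for i in range(n):                       # compare with every single-spin-flip neighbour
            neighbour = list(state)
            neighbour[i] *= -1
            if energy(neighbour, h, J) < e:
                is_min = False
                break
        if is_min:
            minima.append((state, e))
    return minima

rng = np.random.default_rng(1)
n = 5                                            # keep n small: all 2**n states are enumerated
h = rng.normal(size=n)
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
for state, e in local_minima(h, J):
    print(state, round(e, 3))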
1210.3464
Andreas Wilm
Swee Hoe Ong, Vinutha Uppoor Kukkillaya, Andreas Wilm, Christophe Lay, Eliza Xin Pei Ho, Louie Low, Martin Lloyd Hibberd, Niranjan Nagarajan
Species Identification and Profiling of Complex Microbial Communities Using Shotgun Illumina Sequencing of 16S rRNA Amplicon Sequences
17 pages, 2 tables, 2 figures, supplementary material
null
10.1371/journal.pone.0060811
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The high throughput and cost-effectiveness afforded by short-read sequencing technologies, in principle, enable researchers to perform 16S rRNA profiling of complex microbial communities at unprecedented depth and resolution. Existing Illumina sequencing protocols are, however, limited by the fraction of the 16S rRNA gene that is interrogated and therefore limit the resolution and quality of the profiling. To address this, we present the design of a novel protocol for shotgun Illumina sequencing of the bacterial 16S rRNA gene, optimized to capture more than 90% of sequences in the Greengenes database and with nearly twice the resolution of existing protocols. Using several in silico and experimental datasets, we demonstrate that despite the presence of multiple variable and conserved regions, the resulting shotgun sequences can be used to accurately quantify the diversity of complex microbial communities. The reconstruction of a significant fraction of the 16S rRNA gene also enabled high precision (>90%) in species-level identification thereby opening up potential application of this approach for clinical microbial characterization.
[ { "created": "Fri, 12 Oct 2012 10:02:56 GMT", "version": "v1" }, { "created": "Mon, 11 Mar 2013 15:05:26 GMT", "version": "v2" } ]
2015-06-11
[ [ "Ong", "Swee Hoe", "" ], [ "Kukkillaya", "Vinutha Uppoor", "" ], [ "Wilm", "Andreas", "" ], [ "Lay", "Christophe", "" ], [ "Ho", "Eliza Xin Pei", "" ], [ "Low", "Louie", "" ], [ "Hibberd", "Martin Lloyd", "" ], [ "Nagarajan", "Niranjan", "" ] ]
The high throughput and cost-effectiveness afforded by short-read sequencing technologies, in principle, enable researchers to perform 16S rRNA profiling of complex microbial communities at unprecedented depth and resolution. Existing Illumina sequencing protocols are, however, limited by the fraction of the 16S rRNA gene that is interrogated and therefore limit the resolution and quality of the profiling. To address this, we present the design of a novel protocol for shotgun Illumina sequencing of the bacterial 16S rRNA gene, optimized to capture more than 90% of sequences in the Greengenes database and with nearly twice the resolution of existing protocols. Using several in silico and experimental datasets, we demonstrate that despite the presence of multiple variable and conserved regions, the resulting shotgun sequences can be used to accurately quantify the diversity of complex microbial communities. The reconstruction of a significant fraction of the 16S rRNA gene also enabled high precision (>90%) in species-level identification thereby opening up potential application of this approach for clinical microbial characterization.
2101.01647
Giuseppe Tronci
M. Tarik Arafat, Giuseppe Tronci, David J. Wood, Stephen J. Russell
In-situ crosslinked wet spun collagen triple helices with nanoscale-regulated ciprofloxacin release capability
null
null
10.1016/j.matlet.2019.126550
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
The design of antibacterial-releasing coatings or wrapping materials with controlled drug release capability is a promising strategy to minimise risks of infection and medical device failure in vivo. Collagen fibres have been employed as medical device building blocks, although they still fail to display controlled release capability, competitive wet-state mechanical properties, and retained triple helix organisation. We investigated this challenge by pursuing a multiscale design approach integrating drug encapsulation, in-situ covalent crosslinking and fibre spinning. By selecting ciprofloxacin (Cip) as a typical antibacterial drug, wet spinning was selected as a triple helix-friendly route towards Cip-encapsulated collagen fibres; whilst in situ crosslinking of fibre-forming triple helices with 1,3 phenylenediacetic acid (Ph) was hypothesised to yield Ph-Cip {\pi}-{\pi} stacking aromatic interactions and enable controlled drug release. Higher tensile modulus and strength were measured in Ph crosslinked fibres compared to state-of-the-art carbodiimide crosslinked controls. Cip-encapsulated Ph-crosslinked fibres revealed decreased elongation at break and significantly-enhanced drug retention in vitro with respect to Cip-free variants and carbodiimide-crosslinked controls, respectively. This multiscale manufacturing strategy provides new insight towards wet spun collagen triple helices with nanoscale-regulated tensile properties and drug release capability.
[ { "created": "Tue, 5 Jan 2021 16:58:31 GMT", "version": "v1" } ]
2021-01-06
[ [ "Arafat", "M. Tarik", "" ], [ "Tronci", "Giuseppe", "" ], [ "Wood", "David J.", "" ], [ "Russell", "Stephen J.", "" ] ]
The design of antibacterial-releasing coatings or wrapping materials with controlled drug release capability is a promising strategy to minimise risks of infection and medical device failure in vivo. Collagen fibres have been employed as medical device building blocks, although they still fail to display controlled release capability, competitive wet-state mechanical properties, and retained triple helix organisation. We investigated this challenge by pursuing a multiscale design approach integrating drug encapsulation, in-situ covalent crosslinking and fibre spinning. By selecting ciprofloxacin (Cip) as a typical antibacterial drug, wet spinning was selected as a triple helix-friendly route towards Cip-encapsulated collagen fibres; whilst in situ crosslinking of fibre-forming triple helices with 1,3 phenylenediacetic acid (Ph) was hypothesised to yield Ph-Cip {\pi}-{\pi} stacking aromatic interactions and enable controlled drug release. Higher tensile modulus and strength were measured in Ph crosslinked fibres compared to state-of-the-art carbodiimide crosslinked controls. Cip-encapsulated Ph-crosslinked fibres revealed decreased elongation at break and significantly-enhanced drug retention in vitro with respect to Cip-free variants and carbodiimide-crosslinked controls, respectively. This multiscale manufacturing strategy provides new insight towards wet spun collagen triple helices with nanoscale-regulated tensile properties and drug release capability.
1307.0969
Adri\'an Navas MSC
Adri\'an Navas, David Papo, Stefano Boccaletti, F. del-Pozo, Ricardo Bajo, Fernando Maest\'u, Pedro Gil, Irene Sendi\~na-Nadal and Javier M. Buld\'u
Functional Hubs in Mild Cognitive Impairment
12 pages, 5 figures, to appear in International Journal of Bifurcations and Chaos, 2013
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate how hubs of functional brain networks are modified as a result of mild cognitive impairment (MCI), a condition causing a slight but noticeable decline in cognitive abilities, which sometimes precedes the onset of Alzheimer's disease. We used magnetoencephalography (MEG) to investigate the functional brain networks of a group of patients suffering from MCI and a control group of healthy subjects, during the execution of a short-term memory task. Couplings between brain sites were evaluated using synchronization likelihood, from which a network of functional interdependencies was constructed and the centrality, i.e. importance, of their nodes quantified. The results showed that, with respect to healthy controls, MCI patients were associated with decreases and increases in hub centrality respectively in occipital and central scalp regions, supporting the hypothesis that MCI modifies functional brain network topology, leading to more random structures.
[ { "created": "Wed, 3 Jul 2013 11:16:56 GMT", "version": "v1" }, { "created": "Wed, 10 Jul 2013 13:08:49 GMT", "version": "v2" } ]
2013-07-11
[ [ "Navas", "Adrián", "" ], [ "Papo", "David", "" ], [ "Boccaletti", "Stefano", "" ], [ "del-Pozo", "F.", "" ], [ "Bajo", "Ricardo", "" ], [ "Maestú", "Fernando", "" ], [ "Gil", "Pedro", "" ], [ "Sendiña-Nadal", "Irene", "" ], [ "Buldú", "Javier M.", "" ] ]
We investigate how hubs of functional brain networks are modified as a result of mild cognitive impairment (MCI), a condition causing a slight but noticeable decline in cognitive abilities, which sometimes precedes the onset of Alzheimer's disease. We used magnetoencephalography (MEG) to investigate the functional brain networks of a group of patients suffering from MCI and a control group of healthy subjects, during the execution of a short-term memory task. Couplings between brain sites were evaluated using synchronization likelihood, from which a network of functional interdependencies was constructed and the centrality, i.e. importance, of their nodes quantified. The results showed that, with respect to healthy controls, MCI patients were associated with decreases and increases in hub centrality respectively in occipital and central scalp regions, supporting the hypothesis that MCI modifies functional brain network topology, leading to more random structures.
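As a schematic illustration of how hub importance can be quantified from a functional connectivity matrix, the following sketch computes eigenvector centrality by power iteration. The study itself builds its networks from synchronization likelihood and may use a different centrality index, so the matrix and the choice of measure here are assumptions made for illustration.

import numpy as np

def eigenvector_centrality(W, n_iter=200):
    # W: symmetric, non-negative connectivity matrix (sensors x sensors)
    x = np.ones(W.shape[0])
    for _ in range(n_iter):                      # power iteration towards the leading eigenvector
        x = W @ x
        x /= np.linalg.norm(x)
    return x

rng = np.random.default_rng(0)
W = rng.random((10, 10)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
hubness = eigenvector_centrality(W)
print("most central sensor:", int(np.argmax(hubness)))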
1212.3470
R. A. J. van Elburg
Ronald A. J. van Elburg, Oltman O. de Wiljes, Michael Biehl, Fred A. Keijzer
A Behavioural Perspective on the Early Evolution of Nervous Systems: A Computational Model of Excitable Myoepithelia
32 pages, 8 figures and 8 model tables
null
10.3389/fncom.2015.00110
null
q-bio.NC nlin.PS q-bio.PE q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How the very first nervous systems evolved remains a fundamental open question. Molecular and genomic techniques have revolutionized our knowledge of the molecular ingredients behind this transition but not yet provided a clear picture of the morphological and tissue changes involved. Here we focus on a behavioural perspective that centres on movement by muscle contraction. Building on the finding that molecules for chemical neural signalling predate multicellular animals, we investigate a gradual evolutionary scenario for nervous systems that consists of two stages: A) Chemical transmission of electrical activity between adjacent cells provided a primitive form of muscle coordination in a contractile epithelial tissue. B) This primitive form of coordination was subsequently improved upon by evolving the axodendritic processes of modern neurons. We use computer simulations to investigate the first stage. The simulations show that chemical transmission across a contractile sheet can indeed produce useful body scale patterns, but only for small-sized animals. For larger animals the noise in chemical neural signalling interferes. Our results imply that a two-stage scenario is a viable approach to nervous system evolution. The first stage could provide an initial behavioural advantage, as well as a clear scaffold for subsequent improvements in behavioural coordination.
[ { "created": "Fri, 14 Dec 2012 13:40:43 GMT", "version": "v1" } ]
2017-10-31
[ [ "van Elburg", "Ronald A. J.", "" ], [ "de Wiljes", "Oltman O.", "" ], [ "Biehl", "Michael", "" ], [ "Keijzer", "Fred A.", "" ] ]
How the very first nervous systems evolved remains a fundamental open question. Molecular and genomic techniques have revolutionized our knowledge of the molecular ingredients behind this transition but not yet provided a clear picture of the morphological and tissue changes involved. Here we focus on a behavioural perspective that centres on movement by muscle contraction. Building on the finding that molecules for chemical neural signalling predate multicellular animals, we investigate a gradual evolutionary scenario for nervous systems that consists of two stages: A) Chemical transmission of electrical activity between adjacent cells provided a primitive form of muscle coordination in a contractile epithelial tissue. B) This primitive form of coordination was subsequently improved upon by evolving the axodendritic processes of modern neurons. We use computer simulations to investigate the first stage. The simulations show that chemical transmission across a contractile sheet can indeed produce useful body scale patterns, but only for small-sized animals. For larger animals the noise in chemical neural signalling interferes. Our results imply that a two-stage scenario is a viable approach to nervous system evolution. The first stage could provide an initial behavioural advantage, as well as a clear scaffold for subsequent improvements in behavioural coordination.
1801.09982
Lorenzo Contento
Lorenzo Contento, Danielle Hilhorst, Masayasu Mimura
Ecological invasion in competition-diffusion systems when the exotic species is either very strong or very weak
null
null
10.1007/s00285-018-1256-4
null
q-bio.PE math.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reaction-diffusion systems with a Lotka-Volterra-type reaction term, also known as competition-diffusion systems, have been used to investigate the dynamics of the competition among $m$ ecological species for a limited resource necessary to their survival and growth. Notwithstanding their rather simple mathematical structure, such systems may display quite interesting behaviours. In particular, while for $m=2$ no coexistence of the two species is usually possible, if $m \ge 3$ we may observe coexistence of all or a subset of the species, sensitively depending on the parameter values. Such coexistence can take the form of very complex spatio-temporal patterns and oscillations. Unfortunately, at the moment there are no known tools for a complete analytical study of such systems for $m \ge 3$. This means that establishing general criteria for the occurrence of coexistence appears to be very hard. In this paper we will instead give some criteria for the non-coexistence of species, motivated by the ecological problem of the invasion of an ecosystem by an exotic species. We will show that when the environment is very favourable to the invading species the invasion will always be successful and the native species will be driven to extinction. On the other hand, if the environment is not favourable enough, the invasion will always fail.
[ { "created": "Tue, 30 Jan 2018 13:47:54 GMT", "version": "v1" } ]
2018-11-01
[ [ "Contento", "Lorenzo", "" ], [ "Hilhorst", "Danielle", "" ], [ "Mimura", "Masayasu", "" ] ]
Reaction-diffusion systems with a Lotka-Volterra-type reaction term, also known as competition-diffusion systems, have been used to investigate the dynamics of the competition among $m$ ecological species for a limited resource necessary to their survival and growth. Notwithstanding their rather simple mathematical structure, such systems may display quite interesting behaviours. In particular, while for $m=2$ no coexistence of the two species is usually possible, if $m \ge 3$ we may observe coexistence of all or a subset of the species, sensitively depending on the parameter values. Such coexistence can take the form of very complex spatio-temporal patterns and oscillations. Unfortunately, at the moment there are no known tools for a complete analytical study of such systems for $m \ge 3$. This means that establishing general criteria for the occurrence of coexistence appears to be very hard. In this paper we will instead give some criteria for the non-coexistence of species, motivated by the ecological problem of the invasion of an ecosystem by an exotic species. We will show that when the environment is very favourable to the invading species the invasion will always be successful and the native species will be driven to extinction. On the other hand, if the environment is not favourable enough, the invasion will always fail.
2103.16058
David Wu
David Wu, Helen Petousis-Harris, Janine Paynter, Vinod Suresh, Oliver J. Maclaren
Likelihood-based estimation and prediction for a measles outbreak in Samoa
24 pages, 7 figures, 2 tables (Supplementary 19 pages, 18 figures). Added clarification to methods and results; added observational error study into supplementary
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prediction of the progression of an infectious disease outbreak is important for planning and coordinating a response. Differential equations are often used to model an epidemic outbreak's behaviour but are challenging to parameterise. Furthermore, these models can suffer from misspecification, which biases predictions and parameter estimates. Stochastic models can help with misspecification but are even more expensive to simulate and perform inference with. Here, we develop an explicitly likelihood-based variation of the generalised profiling method as a tool for prediction and inference under model misspecification. Our approach allows us to carry out identifiability analysis and uncertainty quantification using profile likelihood-based methods without the need for marginalisation. We provide justification for this approach by introducing a new interpretation of the model approximation component as a stochastic constraint. This preserves the rationale for using profiling rather than integration to remove nuisance parameters while also providing a link back to stochastic models. We applied an initial version of this method during an outbreak of measles in Samoa in 2019-2020 and found that it achieved relatively fast, accurate predictions. Here we present the most recent version of our method and its application to this measles outbreak, along with additional validation.
[ { "created": "Tue, 30 Mar 2021 04:03:15 GMT", "version": "v1" }, { "created": "Wed, 17 Nov 2021 03:54:31 GMT", "version": "v2" }, { "created": "Tue, 15 Mar 2022 09:06:06 GMT", "version": "v3" } ]
2022-03-16
[ [ "Wu", "David", "" ], [ "Petousis-Harris", "Helen", "" ], [ "Paynter", "Janine", "" ], [ "Suresh", "Vinod", "" ], [ "Maclaren", "Oliver J.", "" ] ]
Prediction of the progression of an infectious disease outbreak is important for planning and coordinating a response. Differential equations are often used to model an epidemic outbreak's behaviour but are challenging to parameterise. Furthermore, these models can suffer from misspecification, which biases predictions and parameter estimates. Stochastic models can help with misspecification but are even more expensive to simulate and perform inference with. Here, we develop an explicitly likelihood-based variation of the generalised profiling method as a tool for prediction and inference under model misspecification. Our approach allows us to carry out identifiability analysis and uncertainty quantification using profile likelihood-based methods without the need for marginalisation. We provide justification for this approach by introducing a new interpretation of the model approximation component as a stochastic constraint. This preserves the rationale for using profiling rather than integration to remove nuisance parameters while also providing a link back to stochastic models. We applied an initial version of this method during an outbreak of measles in Samoa in 2019-2020 and found that it achieved relatively fast, accurate predictions. Here we present the most recent version of our method and its application to this measles outbreak, along with additional validation.
2210.07935
Angie Michaiel
Angie Michaiel and Amy Bernard
Neurobiology and Changing Ecosystems: toward understanding the impact of anthropogenic influences on neurons and circuits
In review at Frontiers in Neural Circuits
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Rapid anthropogenic environmental changes, including those due to habitat contamination, degradation, and climate change, have far-reaching effects on biological systems that may outpace animals' adaptive responses (Radchuk et al., 2019). Neurobiological systems mediate interactions between animals and their environments and evolved over millions of years to detect and respond to change. To gain an understanding of the adaptive capacity of nervous systems given an unprecedented pace of environmental change, mechanisms of physiology and behavior at the cellular and biophysical level must be examined. While behavioral changes resulting from anthropogenic activity are becoming increasingly described, identification and examination of the cellular, molecular, and circuit-level processes underlying those changes are profoundly under-explored. Hence, the field of neuroscience lacks predictive frameworks to describe which neurobiological systems may be resilient or vulnerable to rapidly changing ecosystems, or what modes of adaptation are represented in our natural world. In this review, we highlight examples of animal behavior modification and corresponding nervous system adaptation in response to rapid environmental change. The cellular, molecular, and circuit-level processes underlying these behaviors are not known, emphasizing the unmet need for rigorous scientific enquiry into the neurobiology of changing ecosystems.
[ { "created": "Fri, 14 Oct 2022 16:33:27 GMT", "version": "v1" } ]
2022-10-17
[ [ "Michaiel", "Angie", "" ], [ "Bernard", "Amy", "" ] ]
Rapid anthropogenic environmental changes, including those due to habitat contamination, degradation, and climate change, have far-reaching effects on biological systems that may outpace animals' adaptive responses (Radchuk et al., 2019). Neurobiological systems mediate interactions between animals and their environments and evolved over millions of years to detect and respond to change. To gain an understanding of the adaptive capacity of nervous systems given an unprecedented pace of environmental change, mechanisms of physiology and behavior at the cellular and biophysical level must be examined. While behavioral changes resulting from anthropogenic activity are becoming increasingly described, identification and examination of the cellular, molecular, and circuit-level processes underlying those changes are profoundly under-explored. Hence, the field of neuroscience lacks predictive frameworks to describe which neurobiological systems may be resilient or vulnerable to rapidly changing ecosystems, or what modes of adaptation are represented in our natural world. In this review, we highlight examples of animal behavior modification and corresponding nervous system adaptation in response to rapid environmental change. The cellular, molecular, and circuit-level processes underlying these behaviors are not known, emphasizing the unmet need for rigorous scientific enquiry into the neurobiology of changing ecosystems.
1909.07831
Glenn Young
Ryan Murray and Glenn Young
Neutral competition in a deterministically changing environment: revisiting continuum approaches
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Environmental variation can play an important role in ecological competition by influencing the relative advantage between competing species. Here, we consider such effects by extending a classical, competitive Moran model to incorporate an environment that fluctuates periodically in time. We adapt methods from work on these classical models to investigate the effects of the magnitude and frequency of environmental fluctuations on two important population statistics: the probability of fixation and the mean time to fixation. In particular, we find that for small frequencies, the system behaves similarly to a system with a constant fitness difference between the two species, and for large frequencies, the system behaves similarly to a neutrally competitive model. Most interestingly, the system exhibits nontrivial behavior for intermediate frequencies. We conclude by showing that our results agree quite well with recent theoretical work on competitive models with a stochastically changing environment, and discuss how the methods we develop ease the mathematical analysis required to study such models.
[ { "created": "Tue, 17 Sep 2019 14:12:40 GMT", "version": "v1" } ]
2019-09-18
[ [ "Murray", "Ryan", "" ], [ "Young", "Glenn", "" ] ]
Environmental variation can play an important role in ecological competition by influencing the relative advantage between competing species. Here, we consider such effects by extending a classical, competitive Moran model to incorporate an environment that fluctuates periodically in time. We adapt methods from work on these classical models to investigate the effects of the magnitude and frequency of environmental fluctuations on two important population statistics: the probability of fixation and the mean time to fixation. In particular, we find that for small frequencies, the system behaves similarly to a system with a constant fitness difference between the two species, and for large frequencies, the system behaves similarly to a neutrally competitive model. Most interestingly, the system exhibits nontrivial behavior for intermediate frequencies. We conclude by showing that our results agree quite well with recent theoretical work on competitive models with a stochastically changing environment, and discuss how the methods we develop ease the mathematical analysis required to study such models.
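A minimal simulation sketch of the kind of model described above: a two-type Moran process in which type A's selective advantage oscillates periodically, with the fixation probability of A estimated over repeated runs. The sinusoidal form of the fitness difference and all parameter values are assumptions made for illustration, not the authors' exact specification.

import math
import random

def fixation_probability(N=50, n0=25, s0=0.05, freq=0.1, trials=500, seed=0):
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        n, t = n0, 0                                    # n = current number of type-A individuals
        while 0 < n < N:
            s = s0 * math.sin(2 * math.pi * freq * t)   # periodically varying advantage of type A
            fitness_A = (1.0 + s) * n
            birth_is_A = rng.random() < fitness_A / (fitness_A + (N - n))   # fitness-weighted birth
            death_is_A = rng.random() < n / N           # death is uniform over the population
            n += int(birth_is_A) - int(death_is_A)
            t += 1
        fixed += (n == N)
    return fixed / trials

print("estimated fixation probability of type A:", fixation_probability())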
2109.03565
Rose Hoste
Th\'eau Debroise, Rose Hoste, Quentin Chamayou, Herv\'e Minoux, Bruno Filoche-Romm\'e, Marc Bianciotto, Jean-Philippe Rameau, Laurent Schio, Maximilien Levesque
In silico drug repositioning for COVID-19 using absolute binding free energy calculations
null
null
null
null
q-bio.QM physics.chem-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Since the rise of the SARS-CoV-2 pandemic in the winter of 2019, the need for an affordable and efficient drug has not yet been met. Leveraging its unique, fast and precise binding free energy prediction technology, Aqemia screened and ranked FDA-approved molecules against the 3ClPro protein. This protease is key to the post-translational modification of two polyproteins produced by the viral genome. We propose in our top 10 predicted molecules some drugs or prodrugs that could be repurposed and used in the treatment of COVID cases.
[ { "created": "Wed, 8 Sep 2021 13:07:29 GMT", "version": "v1" }, { "created": "Wed, 22 Sep 2021 21:04:10 GMT", "version": "v2" } ]
2021-09-24
[ [ "Debroise", "Théau", "" ], [ "Hoste", "Rose", "" ], [ "Chamayou", "Quentin", "" ], [ "Minoux", "Hervé", "" ], [ "Filoche-Rommé", "Bruno", "" ], [ "Bianciotto", "Marc", "" ], [ "Rameau", "Jean-Philippe", "" ], [ "Schio", "Laurent", "" ], [ "Levesque", "Maximilien", "" ] ]
Since the rise of the SARS-CoV-2 pandemic in the winter of 2019, the need for an affordable and efficient drug has not yet been met. Leveraging its unique, fast and precise binding free energy prediction technology, Aqemia screened and ranked FDA-approved molecules against the 3ClPro protein. This protease is key to the post-translational modification of two polyproteins produced by the viral genome. We propose in our top 10 predicted molecules some drugs or prodrugs that could be repurposed and used in the treatment of COVID cases.
2003.01541
Taban Eslami
Taban Eslami, Joseph S. Raiker and Fahad Saeed
Explainable and Scalable Machine-Learning Algorithms for Detection of Autism Spectrum Disorder using fMRI Data
null
null
null
null
q-bio.NC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diagnosing Autism Spectrum Disorder (ASD) is a challenging problem: it is based purely on behavioral descriptions of symptomology (DSM-5/ICD-10) and requires informants to observe children with the disorder across different settings (e.g. home, school). Numerous limitations (e.g., informant discrepancies, lack of adherence to assessment guidelines, informant biases) to current diagnostic practices have the potential to result in over-, under-, or misdiagnosis of the disorder. Advances in neuroimaging technologies are providing a critical step towards a more objective assessment of the disorder. Prior research provides strong evidence that structural and functional magnetic resonance imaging (MRI) data collected from individuals with ASD exhibit distinguishing characteristics that differ in local and global spatial, and temporal neural-patterns of the brain. Our proposed deep-learning model ASD-DiagNet exhibits consistently high accuracy for classification of ASD brain scans from neurotypical scans. We have for the first time integrated traditional machine-learning and deep-learning techniques that allow us to isolate ASD biomarkers from MRI data sets. Our method, called Auto-ASD-Network, uses a combination of deep-learning and Support Vector Machines (SVM) to classify ASD scans from neurotypical scans. Such interpretable models would help explain the decisions made by deep-learning techniques leading to knowledge discovery for neuroscientists, and transparent analysis for clinicians.
[ { "created": "Mon, 2 Mar 2020 18:20:44 GMT", "version": "v1" } ]
2020-03-04
[ [ "Eslami", "Taban", "" ], [ "Raiker", "Joseph S.", "" ], [ "Saeed", "Fahad", "" ] ]
Diagnosing Autism Spectrum Disorder (ASD) is a challenging problem: it is based purely on behavioral descriptions of symptomology (DSM-5/ICD-10) and requires informants to observe children with the disorder across different settings (e.g. home, school). Numerous limitations (e.g., informant discrepancies, lack of adherence to assessment guidelines, informant biases) to current diagnostic practices have the potential to result in over-, under-, or misdiagnosis of the disorder. Advances in neuroimaging technologies are providing a critical step towards a more objective assessment of the disorder. Prior research provides strong evidence that structural and functional magnetic resonance imaging (MRI) data collected from individuals with ASD exhibit distinguishing characteristics that differ in local and global spatial, and temporal neural-patterns of the brain. Our proposed deep-learning model ASD-DiagNet exhibits consistently high accuracy for classification of ASD brain scans from neurotypical scans. We have for the first time integrated traditional machine-learning and deep-learning techniques that allow us to isolate ASD biomarkers from MRI data sets. Our method, called Auto-ASD-Network, uses a combination of deep-learning and Support Vector Machines (SVM) to classify ASD scans from neurotypical scans. Such interpretable models would help explain the decisions made by deep-learning techniques leading to knowledge discovery for neuroscientists, and transparent analysis for clinicians.
2003.06266
Mostafa Akhavan Safar
Mostafa Akhavansafar, Babak Teimourpour
KatzDriver: A network Based method to predict cancer causal genes in GR Network
null
Safar, M.A. and Teimourpour, B., 2020. KatzDriver: A network based method to cancer causal genes discovery in gene regulatory network. Biosystems, p.104326
10.1016/j.biosystems.2020.104326
null
q-bio.MN q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
One of the important issues in oncology is finding the genes whose perturbation disrupts cell functionality and results in cancer propagation. These genes, known as driver genes, cause cancer when their expression is mutated, through activation of the mutated proteins. Accordingly, many methods have been introduced to predict this group of genes, most of them computational methods based on the number of mutations in each gene. Recently, some network-based methods have been proposed to predict Cancer Driver Genes (CDGs). In this study, we use a network-based approach and the relative importance of each gene in the propagation and absorption of gene anomalies in the network to recognize CDGs. The experimental results, compared against 19 previous methods, show that our proposed algorithm outperforms the others in terms of accuracy, precision, and the number of recognized CDGs.
[ { "created": "Fri, 13 Mar 2020 13:18:55 GMT", "version": "v1" } ]
2020-12-16
[ [ "Akhavansafar", "Mostafa", "" ], [ "Teimourpour", "Babak", "" ] ]
One of the important issues in oncology is finding the genes that perturb cell functionality and result in cancer propagation. These genes, namely driver genes, result in cancer through activation of the mutated proteins when their expression is altered by mutation. Many methods have therefore been introduced to predict this group of genes; these are mostly computational methods based on the number of mutations of each gene. Recently, some network-based methods have been proposed to predict Cancer Driver Genes (CDGs). In this study, we use a network-based approach and the relative importance of each gene in the propagation and absorption of gene anomalies in the network to recognize CDGs. The experimental results are compared with 19 previous methods and show that our proposed algorithm is better than the others in terms of accuracy, precision, and the number of recognized CDGs.
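The name KatzDriver and the description of genes ranked by their role in propagating and absorbing anomalies suggest a Katz-style centrality over the gene regulatory network. The sketch below is only an assumed baseline (the paper's exact scoring may differ); the toy edges and the alpha value are placeholders.

```python
# Hedged sketch: ranking genes in a directed gene regulatory network by Katz
# centrality. The toy edges and parameter values are assumptions.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([("TP53", "MDM2"), ("MDM2", "TP53"),
                  ("MYC", "TP53"), ("MYC", "CCND1")])

# alpha must be smaller than 1 / (largest eigenvalue of the adjacency matrix)
scores = nx.katz_centrality(G, alpha=0.1, beta=1.0)
for gene, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(gene, round(s, 3))
```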
1305.2061
Yuval Simons
Yuval B. Simons, Michael C. Turchin, Jonathan K. Pritchard and Guy Sella
The deleterious mutation load is insensitive to recent population history
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human populations have undergone dramatic changes in population size in the past 100,000 years, including a severe bottleneck of non-African populations and recent explosive population growth. There is currently great interest in how these demographic events may have affected the burden of deleterious mutations in individuals and the allele frequency spectrum of disease mutations in populations. Here we use population genetic models to show that--contrary to previous conjectures--recent human demography has likely had very little impact on the average burden of deleterious mutations carried by individuals. This prediction is supported by exome sequence data showing that African American and European American individuals carry very similar burdens of damaging mutations. We next consider whether recent population growth has increased the importance of very rare mutations in complex traits. Our analysis predicts that for most classes of disease variants, rare alleles are unlikely to contribute a large fraction of the total genetic variance, and that the impact of recent growth is likely to be modest. However, for diseases that have a direct impact on fitness, strongly deleterious rare mutations likely do play important roles, and the impact of very rare mutations will be far greater as a result of recent growth. In summary, demographic history has dramatically impacted patterns of variation in different human populations, but these changes have likely had little impact on either genetic load or on the importance of rare variants for most complex traits.
[ { "created": "Thu, 9 May 2013 11:43:41 GMT", "version": "v1" } ]
2013-05-10
[ [ "Simons", "Yuval B.", "" ], [ "Turchin", "Michael C.", "" ], [ "Pritchard", "Jonathan K.", "" ], [ "Sella", "Guy", "" ] ]
Human populations have undergone dramatic changes in population size in the past 100,000 years, including a severe bottleneck of non-African populations and recent explosive population growth. There is currently great interest in how these demographic events may have affected the burden of deleterious mutations in individuals and the allele frequency spectrum of disease mutations in populations. Here we use population genetic models to show that--contrary to previous conjectures--recent human demography has likely had very little impact on the average burden of deleterious mutations carried by individuals. This prediction is supported by exome sequence data showing that African American and European American individuals carry very similar burdens of damaging mutations. We next consider whether recent population growth has increased the importance of very rare mutations in complex traits. Our analysis predicts that for most classes of disease variants, rare alleles are unlikely to contribute a large fraction of the total genetic variance, and that the impact of recent growth is likely to be modest. However, for diseases that have a direct impact on fitness, strongly deleterious rare mutations likely do play important roles, and the impact of very rare mutations will be far greater as a result of recent growth. In summary, demographic history has dramatically impacted patterns of variation in different human populations, but these changes have likely had little impact on either genetic load or on the importance of rare variants for most complex traits.
1310.7955
John Laurie Dr.
John D. Laurie
Epigenetic regulation of repetitive DNA through mitotic asynchrony following double fertilization in angiosperms
24 pages, 1 table, 3 figures
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several recent studies show that companion cells in flowering plant gametophytes relax epigenetic control of transposable elements (TEs) to promote production of small RNA that presumably assist nearby reproductive cells in management of TEs. In light of this possibility, a closer look at the timing of cell division in relation to angiosperm double fertilization is warranted. From such an analysis, it is conceivable that double fertilization can drive angiosperm evolution by facilitating crosses between genetically diverse parents. A key feature of this ability is the order of cell division following double fertilization, since division of the primary endosperm nucleus prior to the zygote would produce small RNA capable of identifying TEs and defining chromatin states in the zygote prior to its entry into S-phase of the cell cycle. Consequently, crosses leading to increased ploidy or between genetically diverse parents would yield offspring better capable of managing a diverse complement of TEs and repetitive DNA. Considering double fertilization in this regard challenges previous notions that the primary purpose of endosperm is for improved seed reserve storage and utilization.
[ { "created": "Tue, 29 Oct 2013 20:20:53 GMT", "version": "v1" } ]
2013-10-31
[ [ "Laurie", "John D.", "" ] ]
Several recent studies show that companion cells in flowering plant gametophytes relax epigenetic control of transposable elements (TEs) to promote production of small RNA that presumably assist nearby reproductive cells in management of TEs. In light of this possibility, a closer look at the timing of cell division in relation to angiosperm double fertilization is warranted. From such an analysis, it is conceivable that double fertilization can drive angiosperm evolution by facilitating crosses between genetically diverse parents. A key feature of this ability is the order of cell division following double fertilization, since division of the primary endosperm nucleus prior to the zygote would produce small RNA capable of identifying TEs and defining chromatin states in the zygote prior to its entry into S-phase of the cell cycle. Consequently, crosses leading to increased ploidy or between genetically diverse parents would yield offspring better capable of managing a diverse complement of TEs and repetitive DNA. Considering double fertilization in this regard challenges previous notions that the primary purpose of endosperm is for improved seed reserve storage and utilization.
q-bio/0406030
Guido Tiana
G. Tiana, M. Colombo, D. Provasi and R. A. Broglia
Deriving amino acid contact potentials from their frequencies of occurrence in proteins: a lattice model study
null
null
10.1088/0953-8984/16/15/007
null
q-bio.BM
null
The possibility of deriving the contact potentials between amino acids from their frequencies of occurrence in proteins is discussed in evolutionary terms. This approach allows the use of traditional thermodynamics to describe such frequencies and, consequently, to develop a strategy to include in the calculations correlations due to the spatial proximity of the amino acids and to their overall tendency of being conserved in proteins. Making use of a lattice model to describe protein chains and defining a "true" potential, we test these strategies by selecting a database of folding model sequences, deriving the contact potentials from such sequences and comparing them with the "true" potential. Taking into account correlations allows for a markedly better prediction of the interaction potentials.
[ { "created": "Tue, 15 Jun 2004 12:17:41 GMT", "version": "v1" } ]
2009-11-10
[ [ "Tiana", "G.", "" ], [ "Colombo", "M.", "" ], [ "Provasi", "D.", "" ], [ "Broglia", "R. A.", "" ] ]
The possibility of deriving the contact potentials between amino acids from their frequencies of occurrence in proteins is discussed in evolutionary terms. This approach allows the use of traditional thermodynamics to describe such frequencies and, consequently, to develop a strategy to include in the calculations correlations due to the spatial proximity of the amino acids and to their overall tendency of being conserved in proteins. Making use of a lattice model to describe protein chains and defining a "true" potential, we test these strategies by selecting a database of folding model sequences, deriving the contact potentials from such sequences and comparing them with the "true" potential. Taking into account correlations allows for a markedly better prediction of the interaction potentials.
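A standard way to turn contact frequencies into potentials is the quasi-chemical approximation, e_ij = -ln( f_ij / (f_i f_j) ) in units of kT; whether this is the exact estimator used above is not stated in the record, so the sketch below should be read as an assumed baseline with toy counts.

```python
# Hedged sketch: quasi-chemical contact potentials from observed contact counts.
# counts[i][j] = number of observed i-j contacts in a sequence database (toy numbers).
import numpy as np

amino = ["H", "P"]                        # toy two-letter alphabet (assumption)
counts = np.array([[40.0, 10.0],
                   [10.0, 40.0]])         # symmetric contact counts

pair_freq = counts / counts.sum()         # f_ij
single_freq = pair_freq.sum(axis=1)       # f_i (marginal contact frequency)

# e_ij = -ln( f_ij / (f_i * f_j) ), in units of kT
energies = -np.log(pair_freq / np.outer(single_freq, single_freq))
for i, a in enumerate(amino):
    for j, b in enumerate(amino):
        print(a, b, round(energies[i, j], 3))
```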
2407.15298
Masaki Kato
Masaki Kato, Tetsuya J. Kobayashi
Understanding cell populations sharing information through the environment, as reinforcement learning
null
null
null
null
q-bio.CB nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Collective migration is a phenomenon observed in various biological systems, where the cooperation of multiple cells leads to complex functions beyond individual capabilities, such as in immunity and development. A distinctive example is cell populations that not only ascend attractant gradients originating from targets, such as damaged tissue, but also actively modify those gradients through their own production and degradation. While the optimality of single-cell information processing has been extensively studied, the optimality of collective information processing that includes both gradient sensing and gradient generation remains underexplored. In this study, we formulated a cell population that produces and degrades an attractant while exploring the environment as an agent population performing distributed reinforcement learning. We demonstrated the existence of optimal couplings between gradient sensing and gradient generation, showing that the optimal gradient generation qualitatively differs depending on whether the gradient sensing is logarithmic or linear. The derived dynamics have a structure similar to the Keller-Segel model, suggesting that cell populations might be learning. Additionally, we showed that the distributed information processing structure of the agent population enables a proportion of the population to robustly accumulate at the target. Our results provide a quantitative foundation for understanding the collective information processing mediated by attractants in extracellular environments.
[ { "created": "Sun, 21 Jul 2024 23:52:30 GMT", "version": "v1" } ]
2024-07-23
[ [ "Kato", "Masaki", "" ], [ "Kobayashi", "Tetsuya J.", "" ] ]
Collective migration is a phenomenon observed in various biological systems, where the cooperation of multiple cells leads to complex functions beyond individual capabilities, such as in immunity and development. A distinctive example is cell populations that not only ascend attractant gradients originating from targets, such as damaged tissue, but also actively modify those gradients through their own production and degradation. While the optimality of single-cell information processing has been extensively studied, the optimality of collective information processing that includes both gradient sensing and gradient generation remains underexplored. In this study, we formulated a cell population that produces and degrades an attractant while exploring the environment as an agent population performing distributed reinforcement learning. We demonstrated the existence of optimal couplings between gradient sensing and gradient generation, showing that the optimal gradient generation qualitatively differs depending on whether the gradient sensing is logarithmic or linear. The derived dynamics have a structure similar to the Keller-Segel model, suggesting that cell populations might be learning. Additionally, we showed that the distributed information processing structure of the agent population enables a proportion of the population to robustly accumulate at the target. Our results provide a quantitative foundation for understanding the collective information processing mediated by attractants in extracellular environments.
1702.07734
Souparno Roy
Souparno Roy, Archi Banerjee, Ranjan Sengupta, Dipak Ghosh
Ambiguity invokes Creativity : looking through Quantum physics
11 pages, 2 figures
null
null
null
q-bio.NC physics.bio-ph quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Creativity, defined as the tendency to generate or recognize new ideas or alternatives and to make connections between seemingly unrelated phenomena, is too vast a horizon to be summed up in such a simple sentence. The extreme abstractness of creativity makes it harder to quantify in its entirety. Yet, a lot of effort has been made by both psychologists and neurobiologists to identify its signature. A general conformity is expressed in the Free association theory, i.e. the more freely a person's conceptual nodes are connected, the more divergent a thinker (and hence more creative) he or she is. Also, tolerance of ambiguity is found to be related to divergent thinking. In this study, we approach the problem of creativity from a theoretical physics standpoint. Theoretically, for the initial conceptual state, the next jump to any other node is equally probable and non-deterministic. Repeated intervention of an external stimulus (analogous to a measurement) is responsible for such jumps. And to study such a non-deterministic system with continuous measurements, quantum theory has proven the most successful, time and again. We suggest that this collection of nodes forms a system which is likely to be governed by quantum physics, and specify the transformations which could help explain the conceptual jump between states. Our argument, from the point of view of physics, is that the initial evolution of the creative process is identical, person- or field-independent. To answer the next obvious question about individual creativity, we hypothesize that the quantum system, under continuous measurements (in the form of external stimuli), evolves with chaotic dynamics, hence separating a painter from a musician. A possible experimental methodology for probing these effects has also been suggested, using ambiguous figures.
[ { "created": "Thu, 23 Feb 2017 13:04:16 GMT", "version": "v1" } ]
2017-02-28
[ [ "Roy", "Souparno", "" ], [ "Banerjee", "Archi", "" ], [ "Sengupta", "Ranjan", "" ], [ "Ghosh", "Dipak", "" ] ]
Creativity, defined as the tendency to generate or recognize new ideas or alternatives and to make connections between seemingly unrelated phenomena, is too vast a horizon to be summed up in such a simple sentence. The extreme abstractness of creativity makes it harder to quantify in its entirety. Yet, a lot of effort has been made by both psychologists and neurobiologists to identify its signature. A general conformity is expressed in the Free association theory, i.e. the more freely a person's conceptual nodes are connected, the more divergent a thinker (and hence more creative) he or she is. Also, tolerance of ambiguity is found to be related to divergent thinking. In this study, we approach the problem of creativity from a theoretical physics standpoint. Theoretically, for the initial conceptual state, the next jump to any other node is equally probable and non-deterministic. Repeated intervention of an external stimulus (analogous to a measurement) is responsible for such jumps. And to study such a non-deterministic system with continuous measurements, quantum theory has proven the most successful, time and again. We suggest that this collection of nodes forms a system which is likely to be governed by quantum physics, and specify the transformations which could help explain the conceptual jump between states. Our argument, from the point of view of physics, is that the initial evolution of the creative process is identical, person- or field-independent. To answer the next obvious question about individual creativity, we hypothesize that the quantum system, under continuous measurements (in the form of external stimuli), evolves with chaotic dynamics, hence separating a painter from a musician. A possible experimental methodology for probing these effects has also been suggested, using ambiguous figures.
2106.01857
Breno De Oliveira Ferraz
D. Bazeia, M. J. B. Ferreira, B.F. de Oliveira and A. Szolnoki
Environment driven oscillation in an off-lattice May--Leonard model
10 pages, 5 figures, accepted for publication in Scientific Reports
Scientific Reports 11 (2021) 12512
10.1038/s41598-021-91994-7
null
q-bio.PE cond-mat.stat-mech physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
Cyclic dominance of competing species is an intensively used working hypothesis to explain biodiversity in certain living systems, where the evolutionary selection principle would otherwise dictate a single victor. Technically, the May--Leonard models offer a mathematical framework to describe the mentioned non-transitive interaction of competing species when individual movement is also considered in a spatial system. Emerging rotating spirals composed of the competing species are a frequently observed feature of the resulting patterns. But how do these spiraling patterns change when we vary the external environment, which affects the general vitality of individuals? Motivated by this question, we suggest an off-lattice version of the traditional May--Leonard model which allows us to change the actual state of the environment gradually. This can be done by introducing a local carrying capacity parameter whose value can be varied gently in an off-lattice environment. Our results support a previous analysis obtained in a more intricate metapopulation model, and we show that the well-known rotating spirals become evident in a benign environment when the general density of the population is high. The accompanying time-dependent oscillation of competing species can also be detected, where the amplitude and the frequency show a scaling law in the parameter that characterizes the state of the environment. These observations highlight that the assumed non-transitive interaction alone is an insufficient condition to maintain biodiversity safely; the actual state of the environment, which characterizes the general living conditions, also plays a decisive role in the evolution of related systems.
[ { "created": "Thu, 3 Jun 2021 13:59:46 GMT", "version": "v1" } ]
2021-06-16
[ [ "Bazeia", "D.", "" ], [ "Ferreira", "M. J. B.", "" ], [ "de Oliveira", "B. F.", "" ], [ "Szolnoki", "A.", "" ] ]
Cyclic dominance of competing species is an intensively used working hypothesis to explain biodiversity in certain living systems, where the evolutionary selection principle would otherwise dictate a single victor. Technically, the May--Leonard models offer a mathematical framework to describe the mentioned non-transitive interaction of competing species when individual movement is also considered in a spatial system. Emerging rotating spirals composed of the competing species are a frequently observed feature of the resulting patterns. But how do these spiraling patterns change when we vary the external environment, which affects the general vitality of individuals? Motivated by this question, we suggest an off-lattice version of the traditional May--Leonard model which allows us to change the actual state of the environment gradually. This can be done by introducing a local carrying capacity parameter whose value can be varied gently in an off-lattice environment. Our results support a previous analysis obtained in a more intricate metapopulation model, and we show that the well-known rotating spirals become evident in a benign environment when the general density of the population is high. The accompanying time-dependent oscillation of competing species can also be detected, where the amplitude and the frequency show a scaling law in the parameter that characterizes the state of the environment. These observations highlight that the assumed non-transitive interaction alone is an insufficient condition to maintain biodiversity safely; the actual state of the environment, which characterizes the general living conditions, also plays a decisive role in the evolution of related systems.
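The off-lattice simulation itself is not reproduced here, but the mean-field backbone of a May--Leonard system, three species with cyclic dominance, is a small ODE; the competition coefficients below are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch: mean-field May-Leonard dynamics for three cyclically
# competing species (a, b, c). Parameter values are assumptions.
from scipy.integrate import solve_ivp

alpha, beta = 0.8, 1.2   # competition coefficients (assumed values)

def may_leonard(t, x):
    a, b, c = x
    return [a * (1 - a - alpha * b - beta * c),
            b * (1 - b - alpha * c - beta * a),
            c * (1 - c - alpha * a - beta * b)]

sol = solve_ivp(may_leonard, (0, 200), [0.3, 0.2, 0.1])
print(sol.y[:, -1])   # species densities at the final time
```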
2310.20231
Peishan Dai
Peishan Dai, Yun Shi, Tong Xiong, Xiaoyan Zhou, Shenghui Liao, Zhongchao Huang, Xiaoping Yi, Bihong T. Chen
Effective connectivity signatures in major depressive disorder: fMRI study using a multi-site dataset
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diagnosis of major depressive disorder (MDD) primarily relies on the patient's self-reported symptoms and a clinical evaluation. Effective connectivity (EC) from resting-state functional magnetic resonance imaging (rs-fMRI) analysis can reflect the directionality of connections between brain regions, making it a candidate method to classify MDD. This study used Granger causality analysis to extract EC features from a large multi-site MDD dataset. The ComBat algorithm and multivariate linear regression were used to harmonize site differences and to remove age and sex covariates, respectively. Two-sample t-tests and model-based feature selection methods were used to screen for highly discriminative EC features for MDD, and LightGBM was used to classify MDD. In this large-scale multi-site rs-fMRI dataset, 97 EC features deemed highly discriminative for MDD were screened. In the nested five-fold cross-validation, the best classification model with the 97 EC features achieved accuracy, sensitivity, and specificity of 94.35%, 93.52%, and 95.25%, respectively. In another independent large dataset, which tested the generalization performance of the 97 EC features, the best classification models achieved 94.74%, 90.59%, and 96.75% for accuracy, sensitivity, and specificity, respectively. This work demonstrated that EC had a reasonable discriminative ability and supported the notion of using EC to potentially assist clinical diagnosis of MDD.
[ { "created": "Tue, 31 Oct 2023 07:19:08 GMT", "version": "v1" }, { "created": "Fri, 29 Dec 2023 13:59:11 GMT", "version": "v2" } ]
2024-01-01
[ [ "Dai", "Peishan", "" ], [ "Shi", "Yun", "" ], [ "Xiong", "Tong", "" ], [ "Zhou", "Xiaoyan", "" ], [ "Liao", "Shenghui", "" ], [ "Huang", "Zhongchao", "" ], [ "Yi", "Xiaoping", "" ], [ "Chen", "Bihong T.", "" ] ]
Diagnosis of major depressive disorder (MDD) primarily relies on the patient's self-reported symptoms and a clinical evaluation. Effective connectivity (EC) from resting-state functional magnetic resonance imaging (rs-fMRI) analysis can reflect the directionality of connections between brain regions, making it a candidate method to classify MDD. This study used Granger causality analysis to extract EC features from a large multi-site MDD dataset. The ComBat algorithm and multivariate linear regression were used to harmonize site differences and to remove age and sex covariates, respectively. Two-sample t-tests and model-based feature selection methods were used to screen for highly discriminative EC features for MDD, and LightGBM was used to classify MDD. In this large-scale multi-site rs-fMRI dataset, 97 EC features deemed highly discriminative for MDD were screened. In the nested five-fold cross-validation, the best classification model with the 97 EC features achieved accuracy, sensitivity, and specificity of 94.35%, 93.52%, and 95.25%, respectively. In another independent large dataset, which tested the generalization performance of the 97 EC features, the best classification models achieved 94.74%, 90.59%, and 96.75% for accuracy, sensitivity, and specificity, respectively. This work demonstrated that EC had a reasonable discriminative ability and supported the notion of using EC to potentially assist clinical diagnosis of MDD.
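As a rough, non-authoritative sketch of the pipeline described above (pairwise Granger-causality features, then gradient boosting), the code below runs statsmodels' Granger test over synthetic ROI time series and fits a LightGBM classifier; ROI count, lag order, harmonization, and feature selection are all omitted or assumed.

```python
# Hedged sketch: pairwise Granger-causality features from ROI time series,
# then LightGBM classification. Synthetic data; not the authors' pipeline.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests
from lightgbm import LGBMClassifier

def ec_features(ts, maxlag=2):
    """ts: (timepoints, rois) array -> vector of pairwise Granger F statistics."""
    n = ts.shape[1]
    feats = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            res = grangercausalitytests(ts[:, [j, i]], maxlag=maxlag, verbose=False)
            feats.append(res[maxlag][0]["ssr_ftest"][0])   # F statistic for i -> j
    return np.array(feats)

rng = np.random.default_rng(1)
X = np.stack([ec_features(rng.normal(size=(150, 5))) for _ in range(40)])
y = rng.integers(0, 2, size=40)   # placeholder MDD / control labels
LGBMClassifier(n_estimators=100).fit(X, y)
```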
2001.02201
Markus Lill
Ahmadreza Ghanbarpour, Amr H. Mahmoud, Markus A. Lill
On-the-fly Prediction of Protein Hydration Densities and Free Energies using Deep Learning
null
null
null
null
q-bio.BM cs.LG q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
The calculation of thermodynamic properties of biochemical systems typically requires the use of resource-intensive molecular simulation methods. One example thereof is the thermodynamic profiling of hydration sites, i.e. high-probability locations for water molecules on the protein surface, which play an essential role in protein-ligand associations and must therefore be incorporated in the prediction of binding poses and affinities. To replace time-consuming simulations in hydration site predictions, we developed two different types of deep neural-network models aiming to predict hydration site data. In the first approach, meshed 3D images are generated representing the interactions between certain molecular probes placed on regular 3D grids, encompassing the binding pocket, with the static protein. These molecular interaction fields are mapped to the corresponding 3D image of hydration occupancy using a neural network based on a U-Net architecture. In a second approach, hydration occupancy and thermodynamics were predicted point-wise using a neural network based on fully-connected layers. In addition to direct protein interaction fields, the environment of each grid point was represented using moments of a spherical harmonics expansion of the interaction properties of nearby grid points. Application to structure-activity relationship analysis and protein-ligand pose scoring demonstrates the utility of the predicted hydration information.
[ { "created": "Tue, 7 Jan 2020 18:06:30 GMT", "version": "v1" } ]
2020-01-08
[ [ "Ghanbarpour", "Ahmadreza", "" ], [ "Mahmoud", "Amr H.", "" ], [ "Lill", "Markus A.", "" ] ]
The calculation of thermodynamic properties of biochemical systems typically requires the use of resource-intensive molecular simulation methods. One example thereof is the thermodynamic profiling of hydration sites, i.e. high-probability locations for water molecules on the protein surface, which play an essential role in protein-ligand associations and must therefore be incorporated in the prediction of binding poses and affinities. To replace time-consuming simulations in hydration site predictions, we developed two different types of deep neural-network models aiming to predict hydration site data. In the first approach, meshed 3D images are generated representing the interactions between certain molecular probes placed on regular 3D grids, encompassing the binding pocket, with the static protein. These molecular interaction fields are mapped to the corresponding 3D image of hydration occupancy using a neural network based on a U-Net architecture. In a second approach, hydration occupancy and thermodynamics were predicted point-wise using a neural network based on fully-connected layers. In addition to direct protein interaction fields, the environment of each grid point was represented using moments of a spherical harmonics expansion of the interaction properties of nearby grid points. Application to structure-activity relationship analysis and protein-ligand pose scoring demonstrates the utility of the predicted hydration information.
2009.08687
Yang Liu
Yang Liu and Hisashi Kashima
Chemical Property Prediction Under Experimental Biases
null
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting the chemical properties of compounds is crucial in discovering novel materials and drugs with specific desired characteristics. Recent significant advances in machine learning technologies have enabled automatic predictive modeling from past experimental data reported in the literature. However, these datasets are often biased for various reasons, such as experimental plans and publication decisions, and prediction models trained on such biased datasets often suffer from over-fitting to the biased distributions and perform poorly in subsequent use. Hence, this study focused on mitigating bias in the experimental datasets. We adopted two techniques from causal inference combined with graph neural networks that can represent molecular structures. The experimental results in four possible bias scenarios indicated that the inverse propensity scoring-based method and the counter-factual regression-based method made solid improvements.
[ { "created": "Fri, 18 Sep 2020 08:40:57 GMT", "version": "v1" }, { "created": "Mon, 15 Nov 2021 08:14:06 GMT", "version": "v2" }, { "created": "Thu, 9 Dec 2021 16:12:39 GMT", "version": "v3" } ]
2021-12-10
[ [ "Liu", "Yang", "" ], [ "Kashima", "Hisashi", "" ] ]
Predicting the chemical properties of compounds is crucial in discovering novel materials and drugs with specific desired characteristics. Recent significant advances in machine learning technologies have enabled automatic predictive modeling from past experimental data reported in the literature. However, these datasets are often biased for various reasons, such as experimental plans and publication decisions, and prediction models trained on such biased datasets often suffer from over-fitting to the biased distributions and perform poorly in subsequent use. Hence, this study focused on mitigating bias in the experimental datasets. We adopted two techniques from causal inference combined with graph neural networks that can represent molecular structures. The experimental results in four possible bias scenarios indicated that the inverse propensity scoring-based method and the counter-factual regression-based method made solid improvements.
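The record names inverse propensity scoring as one of the debiasing techniques. A minimal, generic version (plain feature vectors instead of the paper's graph neural networks) estimates each compound's probability of appearing in the biased training set and reweights the regression loss by its inverse; all data and names below are placeholders.

```python
# Hedged sketch of inverse propensity scoring (IPS) for a biased training set.
# Molecules are represented by plain feature vectors here, not molecular graphs.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)
X_all = rng.normal(size=(500, 20))                              # candidate compounds
selected = rng.random(500) < 1 / (1 + np.exp(-X_all[:, 0]))     # biased selection

# 1. Estimate the propensity of being selected into the training set.
propensity_model = LogisticRegression().fit(X_all, selected.astype(int))
p = propensity_model.predict_proba(X_all[selected])[:, 1]

# 2. Fit the property-prediction model with weights 1 / propensity.
y_sel = X_all[selected] @ rng.normal(size=20)                   # placeholder property
Ridge().fit(X_all[selected], y_sel, sample_weight=1.0 / np.clip(p, 1e-3, None))
```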
2311.16208
He Cao
He Cao, Zijing Liu, Xingyu Lu, Yuan Yao, Yu Li
InstructMol: Multi-Modal Integration for Building a Versatile and Reliable Molecular Assistant in Drug Discovery
null
null
null
null
q-bio.BM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The rapid evolution of artificial intelligence in drug discovery encounters challenges with generalization and extensive training, yet Large Language Models (LLMs) offer promise in reshaping interactions with complex molecular data. Our novel contribution, InstructMol, a multi-modal LLM, effectively aligns molecular structures with natural language via an instruction-tuning approach, utilizing a two-stage training strategy that adeptly combines limited domain-specific data with molecular and textual information. InstructMol showcases substantial performance improvements in drug discovery-related molecular tasks, surpassing leading LLMs and significantly reducing the gap with specialized models, thereby establishing a robust foundation for a versatile and dependable drug discovery assistant.
[ { "created": "Mon, 27 Nov 2023 16:47:51 GMT", "version": "v1" } ]
2023-11-29
[ [ "Cao", "He", "" ], [ "Liu", "Zijing", "" ], [ "Lu", "Xingyu", "" ], [ "Yao", "Yuan", "" ], [ "Li", "Yu", "" ] ]
The rapid evolution of artificial intelligence in drug discovery encounters challenges with generalization and extensive training, yet Large Language Models (LLMs) offer promise in reshaping interactions with complex molecular data. Our novel contribution, InstructMol, a multi-modal LLM, effectively aligns molecular structures with natural language via an instruction-tuning approach, utilizing a two-stage training strategy that adeptly combines limited domain-specific data with molecular and textual information. InstructMol showcases substantial performance improvements in drug discovery-related molecular tasks, surpassing leading LLMs and significantly reducing the gap with specialized models, thereby establishing a robust foundation for a versatile and dependable drug discovery assistant.
1604.03244
Guangyu Zhou
Xiuli Ma, Guangyu Zhou, Jingjing Wang, Jian Peng, Jiawei Han
Complexes Detection in Biological Networks via Diversified Dense Subgraphs Mining
null
null
null
null
q-bio.MN cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein-protein interaction (PPI) networks, providing a comprehensive landscape of protein interacting patterns, enable us to explore biological processes and cellular components at multiple resolutions. For a biological process, a number of proteins need to work together to perform the job. Proteins densely interact with each other, forming large molecular machines or cellular building blocks. Identification of such densely interconnected clusters or protein complexes from PPI networks enables us to obtain a better understanding of the hierarchy and organization of biological processes and cellular components. Most existing methods apply efficient graph clustering algorithms on PPI networks, often failing to detect possible densely connected subgraphs and overlapping subgraphs. Besides clustering-based methods, dense subgraph enumeration methods have also been used, which aim to find all densely connected protein sets. However, such methods are not practically tractable even on a small yeast PPI network, due to high computational complexity. In this paper, we introduce a novel approximate algorithm to efficiently enumerate putative protein complexes from biological networks. The key insight of our algorithm is that we do not need to enumerate all dense subgraphs. Instead we only need to find a small subset of subgraphs that cover as many proteins as possible. The problem is formulated as finding a diverse set of dense subgraphs, where we develop highly effective pruning techniques to guarantee efficiency. To handle large networks, we take a divide-and-conquer approach to speed up the algorithm in a distributed manner. By comparing with existing clustering and dense subgraph-based algorithms on several human and yeast PPI networks, we demonstrate that our method can detect more putative protein complexes and achieves better prediction accuracy.
[ { "created": "Tue, 12 Apr 2016 04:39:28 GMT", "version": "v1" } ]
2016-04-13
[ [ "Ma", "Xiuli", "" ], [ "Zhou", "Guangyu", "" ], [ "Wang", "Jingjing", "" ], [ "Peng", "Jian", "" ], [ "Han", "Jiawei", "" ] ]
Protein-protein interaction (PPI) networks, providing a comprehensive landscape of protein interacting patterns, enable us to explore biological processes and cellular components at multiple resolutions. For a biological process, a number of proteins need to work together to perform the job. Proteins densely interact with each other, forming large molecular machines or cellular building blocks. Identification of such densely interconnected clusters or protein complexes from PPI networks enables us to obtain a better understanding of the hierarchy and organization of biological processes and cellular components. Most existing methods apply efficient graph clustering algorithms on PPI networks, often failing to detect possible densely connected subgraphs and overlapping subgraphs. Besides clustering-based methods, dense subgraph enumeration methods have also been used, which aim to find all densely connected protein sets. However, such methods are not practically tractable even on a small yeast PPI network, due to high computational complexity. In this paper, we introduce a novel approximate algorithm to efficiently enumerate putative protein complexes from biological networks. The key insight of our algorithm is that we do not need to enumerate all dense subgraphs. Instead we only need to find a small subset of subgraphs that cover as many proteins as possible. The problem is formulated as finding a diverse set of dense subgraphs, where we develop highly effective pruning techniques to guarantee efficiency. To handle large networks, we take a divide-and-conquer approach to speed up the algorithm in a distributed manner. By comparing with existing clustering and dense subgraph-based algorithms on several human and yeast PPI networks, we demonstrate that our method can detect more putative protein complexes and achieves better prediction accuracy.
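One common primitive behind dense-subgraph search is Charikar's greedy peeling, which repeatedly removes a minimum-degree node and keeps the densest intermediate subgraph (a 2-approximation for average-degree density). The paper's diversified enumeration is more elaborate, so treat the sketch below, with its toy edges, as an assumed building block only.

```python
# Hedged sketch: Charikar's greedy peeling for the densest subgraph
# (density = |E| / |V|). The toy "PPI" edges are placeholders.
import networkx as nx

def densest_subgraph(G):
    """Repeatedly drop a minimum-degree node; return the node set maximizing |E|/|V|."""
    H = G.copy()
    best = set(H.nodes)
    best_density = H.number_of_edges() / H.number_of_nodes()
    while H.number_of_nodes() > 1:
        v = min(H.nodes, key=H.degree)
        H.remove_node(v)
        d = H.number_of_edges() / H.number_of_nodes()
        if d > best_density:
            best, best_density = set(H.nodes), d
    return best, best_density

G = nx.complete_graph(["A", "B", "C", "D"])   # a 4-clique "complex"
G.add_edge("D", "E")                          # plus a pendant protein
print(densest_subgraph(G))                    # expected: {'A','B','C','D'}, density 1.5
```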
1012.0946
Kingsley Cox
Kingsley J.A. Cox and Paul R. Adams
Hocus-Socus: An Error Catastrophe for Complex Hebbian Learning Implies Neocortical Proofreading
10 pages 3 figs main text,85 pages 1 fig Suppl text Originally submitted to Nature in June 2009 (but rejected)
null
null
null
q-bio.NC q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The neocortex is widely believed to be the seat of intelligence and "mind". However, it's unclear what "mind" is, or how the special features of neocortex enable it, though likely "connectionist" principles are involved. The key to intelligence is learning relationships between large numbers of signals (such as pixel values), rather than memorizing explicit patterns. Causes (such as objects) can then be inferred from a learned internal model. These relationships fall into 2 classes: simple pairwise or second-order correlations (socs), and complex, and vastly more numerous, higher-order correlations (hocs), such as the product of 3 or more pixels averaged over a set of images. Thus if 3 pixels correlate, they may give an "edge". Neurons with "Hebbian" synapses (changing strength in response to input-output spike-coincidences) are sensitive to such correlations, and it's likely that learned internal models use such neurons. Because output firing depends on input firing via the relevant connection strengths, Hebbian learning provides, in a feedback manner, sensitivity to input correlations. Hocs are vital, since they express "interesting" structure (e.g. edges), but their detection requires nonlinear rules operating at synapses of individual neurons. Here we report that in single model neurons learning from hocs fails, and defaults to socs, if nonlinear Hebbian rules are not sufficiently connection-specific. Such failure would inevitably occur if a neuron's input synapses were too crowded, and would undermine biological connectionism. Since the cortex must be hoc-sensitive to achieve the type of learning enabling mind, we propose it uses known, detailed but poorly understood circuitry and physiology to "proofread" Hebbian connections. Analogous DNA proofreading allows evolution of complex genomes (i.e. "life").
[ { "created": "Sat, 4 Dec 2010 20:23:23 GMT", "version": "v1" } ]
2010-12-07
[ [ "Cox", "Kingsley J. A.", "" ], [ "Adams", "Paul R.", "" ] ]
The neocortex is widely believed to be the seat of intelligence and "mind". However, it's unclear what "mind" is, or how the special features of neocortex enable it, though likely "connectionist" principles are involved. The key to intelligence is learning relationships between large numbers of signals (such as pixel values), rather than memorizing explicit patterns. Causes (such as objects) can then be inferred from a learned internal model. These relationships fall into 2 classes: simple pairwise or second-order correlations (socs), and complex, and vastly more numerous, higher-order correlations (hocs), such as the product of 3 or more pixels averaged over a set of images. Thus if 3 pixels correlate, they may give an "edge". Neurons with "Hebbian" synapses (changing strength in response to input-output spike-coincidences) are sensitive to such correlations, and it's likely that learned internal models use such neurons. Because output firing depends on input firing via the relevant connection strengths, Hebbian learning provides, in a feedback manner, sensitivity to input correlations. Hocs are vital, since they express "interesting" structure (e.g. edges), but their detection requires nonlinear rules operating at synapses of individual neurons. Here we report that in single model neurons learning from hocs fails, and defaults to socs, if nonlinear Hebbian rules are not sufficiently connection-specific. Such failure would inevitably occur if a neuron's input synapses were too crowded, and would undermine biological connectionism. Since the cortex must be hoc-sensitive to achieve the type of learning enabling mind, we propose it uses known, detailed but poorly understood circuitry and physiology to "proofread" Hebbian connections. Analogous DNA proofreading allows evolution of complex genomes (i.e. "life").
1505.00323
Armen Allahverdyan
S. G. Gevorkian, A.E. Allahverdyan, D.S. Gevorgyan, Chin-Kun Hu
Thermal-induced proteinquake in oxyhemoglobin
9 pages, 8 figures, revtex
null
null
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Oxygen is released to living tissues via conformational changes of hemoglobin from R-state (oxyhemoglobin) to T-state (desoxyhemoglobin). The detailed mechanism of this process is not yet fully understood. We have carried out micromechanical experiments on oxyhemoglobin crystals to determine the behavior of the Young's modulus and the internal friction for temperatures between 20 C and 70 C. We have found that around 49 C oxyhemoglobin crystal samples undergo a sudden and strong increase of their Young's modulus, accompanied by a sudden decrease of the internal friction. This sudden mechanical change (proteinquake) takes place in a partially unfolded state and precedes the full denaturation transition at higher temperatures. The hemoglobin crystals after the proteinquake have the same mechanical properties as the initial state at room temperature. We conjecture that it can be relevant for explaining the oxygen-releasing function of native oxyhemoglobin when the temperature is increased, e.g. due to active sport. The effect is specific for the quaternary structure of hemoglobin, and is absent for myoglobin with only one peptide sequence.
[ { "created": "Sat, 2 May 2015 08:32:01 GMT", "version": "v1" } ]
2015-05-05
[ [ "Gevorkian", "S. G.", "" ], [ "Allahverdyan", "A. E.", "" ], [ "Gevorgyan", "D. S.", "" ], [ "Hu", "Chin-Kun", "" ] ]
Oxygen is released to living tissues via conformational changes of hemoglobin from R-state (oxyhemoglobin) to T-state (desoxyhemoglobin). The detailed mechanism of this process is not yet fully understood. We have carried out micromechanical experiments on oxyhemoglobin crystals to determine the behavior of the Young's modulus and the internal friction for temperatures between 20 C and 70 C. We have found that around 49 C oxyhemoglobin crystal samples undergo a sudden and strong increase of their Young's modulus, accompanied by a sudden decrease of the internal friction. This sudden mechanical change (proteinquake) takes place in a partially unfolded state and precedes the full denaturation transition at higher temperatures. The hemoglobin crystals after the proteinquake have the same mechanical properties as the initial state at room temperature. We conjecture that it can be relevant for explaining the oxygen-releasing function of native oxyhemoglobin when the temperature is increased, e.g. due to active sport. The effect is specific for the quaternary structure of hemoglobin, and is absent for myoglobin with only one peptide sequence.
2303.06902
Ziqiao Zhang
Ziqiao Zhang, Ailin Xie, Jihong Guan, Shuigeng Zhou
Molecular Property Prediction by Semantic-invariant Contrastive Learning
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contrastive learning has been widely used as a pretext task for self-supervised pre-trained molecular representation learning models in AI-aided drug design and discovery. However, existing methods that generate molecular views by noise-adding operations for contrastive learning may face the semantic inconsistency problem, which leads to false positive pairs and consequently poor prediction performance. To address this problem, in this paper we first propose a semantic-invariant view generation method by properly breaking molecular graphs into fragment pairs. Then, we develop a Fragment-based Semantic-Invariant Contrastive Learning (FraSICL) model based on this view generation method for molecular property prediction. The FraSICL model consists of two branches to generate representations of views for contrastive learning, while a multi-view fusion mechanism and an auxiliary similarity loss are introduced to make better use of the information contained in different fragment-pair views. Extensive experiments on various benchmark datasets show that with the least number of pre-training samples, FraSICL can achieve state-of-the-art performance, compared with major existing counterpart models.
[ { "created": "Mon, 13 Mar 2023 07:32:37 GMT", "version": "v1" } ]
2023-03-14
[ [ "Zhang", "Ziqiao", "" ], [ "Xie", "Ailin", "" ], [ "Guan", "Jihong", "" ], [ "Zhou", "Shuigeng", "" ] ]
Contrastive learning has been widely used as a pretext task for self-supervised pre-trained molecular representation learning models in AI-aided drug design and discovery. However, existing methods that generate molecular views by noise-adding operations for contrastive learning may face the semantic inconsistency problem, which leads to false positive pairs and consequently poor prediction performance. To address this problem, in this paper we first propose a semantic-invariant view generation method by properly breaking molecular graphs into fragment pairs. Then, we develop a Fragment-based Semantic-Invariant Contrastive Learning (FraSICL) model based on this view generation method for molecular property prediction. The FraSICL model consists of two branches to generate representations of views for contrastive learning, while a multi-view fusion mechanism and an auxiliary similarity loss are introduced to make better use of the information contained in different fragment-pair views. Extensive experiments on various benchmark datasets show that with the least number of pre-training samples, FraSICL can achieve state-of-the-art performance, compared with major existing counterpart models.
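To make the contrastive objective concrete, the sketch below implements a generic NT-Xent-style loss over paired view embeddings in PyTorch; the fragmentation scheme, encoder, multi-view fusion, and auxiliary similarity loss of FraSICL are not reproduced, so this is only an assumed baseline.

```python
# Hedged sketch: NT-Xent contrastive loss for paired views (generic, not FraSICL).
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two views of the same molecules."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, dim), unit vectors
    sim = z @ z.t() / temperature                         # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float("-inf"))                 # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)                  # positives: the paired view

z1, z2 = torch.randn(8, 64), torch.randn(8, 64)           # placeholder embeddings
print(nt_xent(z1, z2).item())
```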
1012.3618
Vahid Salari
I. Bokkon, V. Salari, J. Tuszynski
Emergence of Intrinsic Representations of Images by Feedforward and Feedback Processes and Bioluminescent Photons in Early Retinotopic Areas
18 pages, 4 figures, to be published in Journal of Integrative Neuroscience
J Integrative Neuroscience, Vol 10, No. 1, pages 47-64, 2011
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, we put forward a redox molecular hypothesis involving the natural biophysical substrate of visual perception and imagery. Here, we explicitly propose that the feedback and feedforward iterative operation processes can be interpreted in terms of a homunculus looking at the biophysical picture in our brain during visual imagery. We further propose that the brain can use both picture-like and language-like representation processes. In our interpretation, visualization (imagery) is a special kind of representation i.e., visual imagery requires a peculiar inherent biophysical (picture-like) mechanism. We also conjecture that the evolution of higher levels of complexity made the biophysical picture representation of the external visual world possible by controlled redox and bioluminescent nonlinear (iterative) biochemical reactions in the V1 and V2 areas during visual imagery. Our proposal deals only with the primary level of visual representation (i.e. perceived "scene").
[ { "created": "Thu, 16 Dec 2010 14:32:19 GMT", "version": "v1" } ]
2011-03-31
[ [ "Bokkon", "I.", "" ], [ "Salari", "V.", "" ], [ "Tuszynski", "J.", "" ] ]
Recently, we put forward a redox molecular hypothesis involving the natural biophysical substrate of visual perception and imagery. Here, we explicitly propose that the feedback and feedforward iterative operation processes can be interpreted in terms of a homunculus looking at the biophysical picture in our brain during visual imagery. We further propose that the brain can use both picture-like and language-like representation processes. In our interpretation, visualization (imagery) is a special kind of representation i.e., visual imagery requires a peculiar inherent biophysical (picture-like) mechanism. We also conjecture that the evolution of higher levels of complexity made the biophysical picture representation of the external visual world possible by controlled redox and bioluminescent nonlinear (iterative) biochemical reactions in the V1 and V2 areas during visual imagery. Our proposal deals only with the primary level of visual representation (i.e. perceived "scene").
q-bio/0612002
Thierry Rabilloud
T. Rabilloud (BECP), C. Adessi (LCP), A. Giraudel, J. Lunardi (BECP)
Improvement of the solubilization of proteins in two-dimensional electrophoresis with immobilized pH gradients
website publisher: http://www.interscience.wiley.com
Electrophoresis 18 (31/03/1997) 307-16
10.1002/elps.1150180303
null
q-bio.GN
null
Membrane and nuclear proteins of poor solubility have been separated by high resolution two-dimensional (2-D) gel electrophoresis. Isoelectric focusing with immobilized pH gradients leads to severe quantitative losses of proteins in the resulting 2-D map, although the resolution is usually high. Protein solubility could be improved by using denaturing solutions containing various detergents and chaotropes. Best results were obtained with a denaturing solution containing urea, thiourea, and detergents (both nonionic and zwitterionic). The usefulness of thiourea-containing denaturing mixtures is shown for microsomal and nuclear proteins as well as for tubulin, a protein highly prone to aggregation.
[ { "created": "Mon, 4 Dec 2006 13:38:40 GMT", "version": "v1" } ]
2007-05-23
[ [ "Rabilloud", "T.", "", "BECP" ], [ "Adessi", "C.", "", "LCP" ], [ "Giraudel", "A.", "", "BECP" ], [ "Lunardi", "J.", "", "BECP" ] ]
Membrane and nuclear proteins of poor solubility have been separated by high resolution two-dimensional (2-D) gel electrophoresis. Isoelectric focusing with immobilized pH gradients leads to severe quantitative losses of proteins in the resulting 2-D map, although the resolution is usually high. Protein solubility could be improved by using denaturing solutions containing various detergents and chaotropes. Best results were obtained with a denaturing solution containing urea, thiourea, and detergents (both nonionic and zwitterionic). The usefulness of thiourea-containing denaturing mixtures is shown for microsomal and nuclear proteins as well as for tubulin, a protein highly prone to aggregation.
2205.03700
Rim Adenane
Florin Avram, Rim Adenane, Andrei Halanay
New results and open questions for SIR-PH epidemic models with linear birth rate, loss of immunity, vaccination, and disease and vaccination fatalities
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Our paper presents three new classes of models: SIR-PH, SIR-PH-FA, and SIR-PH-IA, and states two problems we would like to solve about them. Recall that deterministic mathematical epidemiology has one basic general law, the "R0 alternative" of [52, 51], which states that the local stability condition of the disease free equilibrium may be expressed as R0 < 1, where R0 is the famous basic reproduction number, which also plays a major role in the theory of branching processes. The literature suggests that it is impossible to find general laws concerning the endemic points. However, it is quite common that 1. When R0 > 1, there exists a unique fixed endemic point, and 2. the endemic point is locally stable when R0 > 1. One would like to establish these properties for a large class of realistic epidemic models (and we do not include here epidemics without casualties). We have introduced in [7, 5] a "simple", but broad class of "SIR-PH models" with varying population, with the express purpose of establishing for these processes the two properties above. Since that seemed still hard, we have introduced a further class of "SIR-PH-FA" models, which may be interpreted as approximations for the SIR-PH models, and which includes simpler models typically studied in the literature (with constant population, without loss of immunity, etc). The goal of our paper is to draw attention to the two open problems above, for the SIR-PH, SIR-PH-FA, and also for a second, more refined "intermediate approximation" SIR-PH-IA. We illustrate the current status quo by presenting new results on a generalization of the SAIRS epidemic model of [44, 40].
[ { "created": "Sat, 7 May 2022 18:32:40 GMT", "version": "v1" } ]
2022-05-10
[ [ "Avram", "Florin", "" ], [ "Adenane", "Rim", "" ], [ "Halanay", "Andrei", "" ] ]
Our paper presents three new classes of models: SIR-PH, SIR-PH-FA, and SIR-PH-IA, and states two problems we would like to solve about them. Recall that deterministic mathematical epidemiology has one basic general law, the "R0 alternative" of [52, 51], which states that the local stability condition of the disease free equilibrium may be expressed as R0 < 1, where R0 is the famous basic reproduction number, which also plays a major role in the theory of branching processes. The literature suggests that it is impossible to find general laws concerning the endemic points. However, it is quite common that 1. When R0 > 1, there exists a unique fixed endemic point, and 2. the endemic point is locally stable when R0 > 1. One would like to establish these properties for a large class of realistic epidemic models (and we do not include here epidemics without casualties). We have introduced in [7, 5] a "simple", but broad class of "SIR-PH models" with varying population, with the express purpose of establishing for these processes the two properties above. Since that seemed still hard, we have introduced a further class of "SIR-PH-FA" models, which may be interpreted as approximations for the SIR-PH models, and which includes simpler models typically studied in the literature (with constant population, without loss of immunity, etc). The goal of our paper is to draw attention to the two open problems above, for the SIR-PH, SIR-PH-FA, and also for a second, more refined "intermediate approximation" SIR-PH-IA. We illustrate the current status quo by presenting new results on a generalization of the SAIRS epidemic model of [44, 40].
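For orientation, the simplest member of the family discussed above is the classical SIR model, where R0 = beta/gamma and the disease-free equilibrium is locally stable when R0 < 1. The sketch below integrates that baseline (constant population, no vaccination or loss of immunity), not the SIR-PH generalisations of the paper; all rates are assumed.

```python
# Hedged sketch: classical SIR model, the simplest member of the family above.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1          # transmission and recovery rates (assumed)
R0 = beta / gamma               # basic reproduction number = 3.0

def sir(t, y):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

sol = solve_ivp(sir, (0, 160), [0.99, 0.01, 0.0], t_eval=np.linspace(0, 160, 5))
print("R0 =", R0)
print("final susceptible fraction:", sol.y[0, -1])
```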
1905.11451
Samira Masoudi
Samira Masoudi, Cameron H.G. Wright, Jesse C. Gatlin and John. S. Oakey
Microtubule Motility Analysis based on Time-Lapse Fluorescence Microscopy
11 pages, 3 figures, conference paper
journal of Biomedical Sciences Instrumentation, vol 52, pages 126--133, April 2016, published by ISAs
null
null
q-bio.SC eess.IV
http://creativecommons.org/publicdomain/zero/1.0/
This paper describes an investigation into part of the mechanical mechanisms underlying the formation of the mitotic spindle, the cellular machinery responsible for chromosomal separation during cell division. In normal eukaryotic cells, spindles are composed of microtubule filaments that radiate outward from two centrosomes. In many transformed cells, however, centrosome number is misregulated, resulting in cells with more than two centrosomes. Addressing the question of how these cells accommodate these additional structures by coalescing supernumerary centrosomes to form normal spindles will provide powerful insight toward understanding the proliferation of cancer cells and developing new therapeutics. The process of centrosome coalescence is thought to involve motor proteins that function to slide microtubules relative to one another. Here we use in vitro motility assays combined with fluorescence microscopy to visualize, characterize and quantify microtubule-microtubule interactions. After segmenting the microtubules, their speed and direction of movement are extracted as features to cluster their interaction types. In order to evaluate the potential of our processing algorithm, we created a simulated dataset similar to the time-lapse series. Once our procedure has been optimized using the simulated data, we will apply it to the real data. Results of our analyses will provide a quantitative description of interactions among microtubules. This is a potentially important step toward a more thorough understanding of cancer.
[ { "created": "Mon, 27 May 2019 19:02:46 GMT", "version": "v1" } ]
2019-05-29
[ [ "Masoudi", "Samira", "" ], [ "Wright", "Cameron H. G.", "" ], [ "Gatlin", "Jesse C.", "" ], [ "Oakey", "John. S.", "" ] ]
This paper describes an investigation into part of the mechanical mechanisms underlying the formation of the mitotic spindle, the cellular machinery responsible for chromosomal separation during cell division. In normal eukaryotic cells, spindles are composed of microtubule filaments that radiate outward from two centrosomes. In many transformed cells, however, centrosome number is misregulated, resulting in cells with more than two centrosomes. Addressing the question of how these cells accommodate these additional structures by coalescing supernumerary centrosomes to form normal spindles will provide powerful insight toward understanding the proliferation of cancer cells and developing new therapeutics. The process of centrosome coalescence is thought to involve motor proteins that function to slide microtubules relative to one another. Here we use in vitro motility assays combined with fluorescence microscopy to visualize, characterize and quantify microtubule-microtubule interactions. After segmenting the microtubules, their speed and direction of movement are extracted as features to cluster their interaction types. In order to evaluate the potential of our processing algorithm, we created a simulated dataset similar to the time-lapse series. Once our procedure has been optimized using the simulated data, we will apply it to the real data. Results of our analyses will provide a quantitative description of interactions among microtubules. This is a potentially important step toward a more thorough understanding of cancer.
2112.06845
Jules Fraboul
Jules Fraboul, Giulio Biroli and Silvia De Monte
Artificial selection of communities drives the emergence of structured interactions
null
Journal of Theoretical Biology, Volume 571, 2023, 111557, ISSN 0022-5193
10.1016/j.jtbi.2023.111557
null
q-bio.PE cond-mat.dis-nn cond-mat.stat-mech
http://creativecommons.org/licenses/by-nc-nd/4.0/
Species-rich communities, such as the microbiota or microbial ecosystems, provide key functions for human health and climatic resilience. Increasing effort is being dedicated to designing experimental protocols for selecting community-level functions of interest. These experiments typically involve selection acting on populations of communities, each of which is composed of multiple species. While numerical simulations have started to explore the evolutionary dynamics of this complex, multi-scale system, a comprehensive theoretical understanding of the process of artificial selection of communities is still lacking. Here, we propose a general model for the evolutionary dynamics of communities composed of a large number of interacting species, described by disordered generalised Lotka-Volterra equations. Our analytical and numerical results reveal that selection for scalar community functions leads to the emergence, along an evolutionary trajectory, of a low-dimensional structure in an initially featureless interaction matrix. Such structure reflects the combination of the properties of the ancestral community and of the selective pressure. Our analysis determines how the speed of adaptation scales with the system parameters and the abundance distribution of the evolved communities. Artificial selection for larger total abundance is thus shown to drive increased levels of mutualism and interaction diversity. Inference of the interaction matrix is proposed as a method to assess the emergence of structured interactions from experimentally accessible measures.
[ { "created": "Mon, 13 Dec 2021 18:00:39 GMT", "version": "v1" }, { "created": "Tue, 14 Dec 2021 11:36:59 GMT", "version": "v2" }, { "created": "Thu, 11 Aug 2022 10:06:19 GMT", "version": "v3" }, { "created": "Wed, 21 Jun 2023 12:45:05 GMT", "version": "v4" } ]
2023-06-22
[ [ "Fraboul", "Jules", "" ], [ "Biroli", "Giulio", "" ], [ "De Monte", "Silvia", "" ] ]
Species-rich communities, such as the microbiota or microbial ecosystems, provide key functions for human health and climatic resilience. Increasing effort is being dedicated to designing experimental protocols for selecting community-level functions of interest. These experiments typically involve selection acting on populations of communities, each of which is composed of multiple species. While numerical simulations have started to explore the evolutionary dynamics of this complex, multi-scale system, a comprehensive theoretical understanding of the process of artificial selection of communities is still lacking. Here, we propose a general model for the evolutionary dynamics of communities composed of a large number of interacting species, described by disordered generalised Lotka-Volterra equations. Our analytical and numerical results reveal that selection for scalar community functions leads to the emergence, along an evolutionary trajectory, of a low-dimensional structure in an initially featureless interaction matrix. Such structure reflects the combination of the properties of the ancestral community and of the selective pressure. Our analysis determines how the speed of adaptation scales with the system parameters and the abundance distribution of the evolved communities. Artificial selection for larger total abundance is thus shown to drive increased levels of mutualism and interaction diversity. Inference of the interaction matrix is proposed as a method to assess the emergence of structured interactions from experimentally accessible measures.
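As a rough illustration of the model class described in the abstract, the sketch below integrates disordered generalised Lotka-Volterra dynamics with a featureless (i.i.d. Gaussian) interaction matrix and evaluates the scalar community function (total abundance) that artificial selection would act on. The parameter values, the scaling of the interactions, and the Euler integrator are assumptions for illustration, not the paper's exact setup:

```python
# Minimal sketch of disordered generalised Lotka-Volterra (gLV) dynamics:
# dN_i/dt = N_i (k_i - N_i + sum_j a_ij N_j), with an initially featureless
# (i.i.d. Gaussian) interaction matrix. Parameters and the Euler integrator
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
S = 100                         # number of species
mu, sigma = -0.5, 0.3           # mean / std of interactions (scaled with S)
a = rng.normal(mu / S, sigma / np.sqrt(S), size=(S, S))
np.fill_diagonal(a, 0.0)
k = np.ones(S)                  # carrying capacities

def glv_total_abundance(a, k, t_max=200.0, dt=1e-2):
    """Integrate the gLV equations and return the scalar community function
    (total abundance) on which artificial selection could act."""
    N = np.full(S, 0.1)
    for _ in range(int(t_max / dt)):
        N += dt * N * (k - N + a @ N)
        N = np.clip(N, 0.0, None)   # extinct species stay at zero
    return N.sum()

print(glv_total_abundance(a, k))
```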
1610.09427
Jorge Fernandez-De-Cossio
Jorge Fernandez-de-Cossio, Yasser Perera
Impact of germline susceptibility variants in cancer genetic studies
7 pages, 4 tables
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although somatic mutations are the main contributor to cancer, underlying germline alterations may increase the risk of cancer, mold the somatic alteration landscape and cooperate with acquired mutations to promote tumor onset and/or maintenance. Therefore, both tumor genome and germline sequence data have to be analyzed to obtain a more complete picture of the overall genetic foundation of the disease. To reinforce this notion, we quantitatively assess the bias introduced by restricting the analysis to somatic mutation data, using mutational data from well-known cancer genes that display both types of alterations, inherited and somatically acquired.
[ { "created": "Fri, 28 Oct 2016 23:28:59 GMT", "version": "v1" } ]
2016-11-01
[ [ "Fernandez-de-Cossio", "Jorge", "" ], [ "Perera", "Yasser", "" ] ]
Although somatic mutations are the main contributor to cancer, underlying germline alterations may increase the risk of cancer, mold the somatic alteration landscape and cooperate with acquired mutations to promote tumor onset and/or maintenance. Therefore, both tumor genome and germline sequence data have to be analyzed to obtain a more complete picture of the overall genetic foundation of the disease. To reinforce this notion, we quantitatively assess the bias introduced by restricting the analysis to somatic mutation data, using mutational data from well-known cancer genes that display both types of alterations, inherited and somatically acquired.
q-bio/0402041
Oliver Beckstein
Oliver Beckstein and Mark S. P. Sansom
The influence of geometry, surface character and flexibility on the permeation of ions and water through biological pores
Peer reviewed article appeared in Physical Biology http://www.iop.org/EJ/abstract/1478-3975/1/1/005/
Physical Biology 1 (2004), 43-53
10.1088/1478-3967/1/1/005
null
q-bio.SC physics.bio-ph
null
A hydrophobic constriction site can act as an efficient barrier to ion and water permeation if its diameter is less than the diameter of an ion's first hydration shell. This hydrophobic gating mechanism is thought to operate in a number of ion channels, e.g. the nicotinic receptor, bacterial mechanosensitive channels (MscL and MscS) and perhaps in some potassium channels (e.g. KcsA, MthK, and KvAP). Simplified pore models allow one to investigate the primary characteristics of a conduction pathway, namely its geometry (shape, pore length, and radius), the chemical character of the pore wall surface, and its local flexibility and surface roughness. Our extended (ca. 0.1 \mu s) molecular dynamics simulations show that a short hydrophobic pore is closed to water for radii smaller than 0.45 nm. By increasing the polarity of the pore wall (and thus reducing its hydrophobicity) the transition radius can be decreased until, for hydrophilic pores, liquid water is stable down to a radius comparable to a water molecule's radius. Ions behave similarly but the transition from conducting to non-conducting pores is even steeper and occurs at a radius of 0.65 nm for hydrophobic pores. The presence of water vapour in a constriction zone indicates a barrier for ion permeation. A thermodynamic model can explain the behaviour of water in nanopores in terms of the surface tensions, which leads to a simple measure of "hydrophobicity" in this context. Furthermore, increased local flexibility decreases the permeability of polar species. An increase in temperature has the same effect, and we hypothesise that both effects can be explained by a decrease in the effective solvent-surface attraction, which in turn leads to an increase in the solvent-wall surface free energy.
[ { "created": "Wed, 25 Feb 2004 15:45:03 GMT", "version": "v1" }, { "created": "Thu, 17 Jun 2004 16:08:25 GMT", "version": "v2" } ]
2007-05-23
[ [ "Beckstein", "Oliver", "" ], [ "Sansom", "Mark S. P.", "" ] ]
A hydrophobic constriction site can act as an efficient barrier to ion and water permeation if its diameter is less than the diameter of an ion's first hydration shell. This hydrophobic gating mechanism is thought to operate in a number of ion channels, e.g. the nicotinic receptor, bacterial mechanosensitive channels (MscL and MscS) and perhaps in some potassium channels (e.g. KcsA, MthK, and KvAP). Simplified pore models allow one to investigate the primary characteristics of a conduction pathway, namely its geometry (shape, pore length, and radius), the chemical character of the pore wall surface, and its local flexibility and surface roughness. Our extended (ca. 0.1 \mu s) molecular dynamics simulations show that a short hydrophobic pore is closed to water for radii smaller than 0.45 nm. By increasing the polarity of the pore wall (and thus reducing its hydrophobicity) the transition radius can be decreased until, for hydrophilic pores, liquid water is stable down to a radius comparable to a water molecule's radius. Ions behave similarly but the transition from conducting to non-conducting pores is even steeper and occurs at a radius of 0.65 nm for hydrophobic pores. The presence of water vapour in a constriction zone indicates a barrier for ion permeation. A thermodynamic model can explain the behaviour of water in nanopores in terms of the surface tensions, which leads to a simple measure of "hydrophobicity" in this context. Furthermore, increased local flexibility decreases the permeability of polar species. An increase in temperature has the same effect, and we hypothesise that both effects can be explained by a decrease in the effective solvent-surface attraction, which in turn leads to an increase in the solvent-wall surface free energy.
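The thermodynamic argument referred to in the abstract can be sketched as a standard capillary-evaporation estimate based on surface tensions; the LaTeX fragment below writes out such a balance for a cylindrical pore. It is a generic, textbook-style estimate consistent with the abstract's description, not necessarily the exact model used in the paper:

```latex
% Illustrative surface-tension balance for capillary evaporation in a
% cylindrical pore of radius R and length L (a generic estimate, not
% necessarily the paper's exact model).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
The vapour-filled state is favoured over the liquid-filled state when
\begin{equation}
  \Delta\Omega_{\mathrm{vap-liq}}
  = 2\pi R L\,(\gamma_{\mathrm{wv}} - \gamma_{\mathrm{wl}})
  + 2\pi R^{2}\,\gamma_{\mathrm{lv}} < 0 ,
\end{equation}
where $\gamma_{\mathrm{wv}}$, $\gamma_{\mathrm{wl}}$ and $\gamma_{\mathrm{lv}}$
are the wall--vapour, wall--liquid and liquid--vapour surface tensions.
Using Young's equation,
$\gamma_{\mathrm{wv}} - \gamma_{\mathrm{wl}} = \gamma_{\mathrm{lv}}\cos\theta$,
this yields a critical radius
\begin{equation}
  R_{c} \simeq -L\cos\theta ,
\end{equation}
so for hydrophobic walls ($\theta > 90^{\circ}$, $\cos\theta < 0$) liquid water
is unstable in pores narrower than $R_{c}$, whereas for hydrophilic walls
($\cos\theta > 0$) no such evaporation radius exists and water remains liquid
down to molecular dimensions.
\end{document}
```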
2311.00480
Issam Boukhris
Issam Boukhris, Nicola Puletti, Christian Vonderach, Matteo Guasti, Said Lahssini, Monia Santini, Riccardo Valentini
Comparative analysis of taper models for Pinus nigra: A study across parametric, semi-parametric, and non-parametric models using terrestrial laser scanner acquired data
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Taper equations are indispensable tools for characterizing the stem profile of trees, providing valuable insights for forest management, timber inventory, and optimal assortment allocation. The recent progress in Terrestrial Laser Scanning (TLS) has revolutionized forest inventory practices by enabling non-destructive data collection. In this study, four taper models from three different model categories were established based on point cloud data of 219 Pinus nigra trees. The taper equations fitted with TLS data were used to predict the diameter at specific stem heights and the total stem volume. The results show that among the fitted models, the Max and Burkhart segmented model calibrated by means of a mixed-effects approach provided the best estimate of the diameter at different heights and the total stem volume evaluated for different diameter at breast height (DBH) classes. In numerical terms, this model estimated the diameter and the volume with respective overall errors of 0.781 cm and 0.021 m^3. The predicted profile also shows that above a relative height of 0.7, the diameter error tends to increase due to the low reliability of data collected beyond the base of the crown, primarily caused by interference from branches and leaves. Nevertheless, this study shows that TLS technology presents a compelling opportunity and a promising non-destructive alternative for generating taper profiles and estimating tree volume.
[ { "created": "Tue, 31 Oct 2023 14:13:12 GMT", "version": "v1" } ]
2023-11-02
[ [ "Boukhris", "Issam", "" ], [ "Puletti", "Nicola", "" ], [ "Vonderach", "Christian", "" ], [ "Guasti", "Matteo", "" ], [ "Lahssini", "Said", "" ], [ "Santini", "Monia", "" ], [ "Valentini", "Riccardo", "" ] ]
Taper equations are indispensable tools for characterizing the stem profile of trees, providing valuable insights for forest management, timber inventory, and optimal assortment allocation. The recent progress in Terrestrial Laser Scanning (TLS) has revolutionized forest inventory practices by enabling non-destructive data collection. In this study, four taper models from three different model categories were established based on point cloud data of 219 Pinus nigra trees. The taper equations fitted with TLS data were used to predict the diameter at specific stem heights and the total stem volume. The results show that among the fitted models, the Max and Burkhart segmented model calibrated by means of a mixed-effects approach provided the best estimate of the diameter at different heights and the total stem volume evaluated for different diameter at breast height (DBH) classes. In numerical terms, this model estimated the diameter and the volume with respective overall errors of 0.781 cm and 0.021 m^3. The predicted profile also shows that above a relative height of 0.7, the diameter error tends to increase due to the low reliability of data collected beyond the base of the crown, primarily caused by interference from branches and leaves. Nevertheless, this study shows that TLS technology presents a compelling opportunity and a promising non-destructive alternative for generating taper profiles and estimating tree volume.
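The Max and Burkhart segmented taper equation named in the abstract has a well-known form, and a minimal fitting sketch is given below. The synthetic stem-section data, the fixed join points, and the plain least-squares fit are illustrative assumptions; the study itself uses TLS-derived sections and a mixed-effects calibration:

```python
# Minimal sketch of fitting the Max & Burkhart (1976) segmented taper equation
# (d/D)^2 = b1 (z-1) + b2 (z^2-1) + b3 (a1-z)^2 I1 + b4 (a2-z)^2 I2,  z = h/H.
# Synthetic data, fixed join points a1, a2 and an ordinary least-squares fit
# are illustrative assumptions, not the paper's mixed-effects calibration.
import numpy as np
from scipy.optimize import curve_fit

a1, a2 = 0.75, 0.1                 # assumed upper/lower join points (relative heights)

def max_burkhart(z, b1, b2, b3, b4):
    i1 = (z <= a1).astype(float)
    i2 = (z <= a2).astype(float)
    y2 = b1 * (z - 1) + b2 * (z**2 - 1) + b3 * (a1 - z)**2 * i1 + b4 * (a2 - z)**2 * i2
    return np.sqrt(np.clip(y2, 0.0, None))   # relative diameter d/D

# Synthetic "observed" relative diameters along the stem.
rng = np.random.default_rng(2)
z_obs = np.linspace(0.02, 0.98, 40)
d_obs = max_burkhart(z_obs, -3.0, 1.5, -1.5, 30.0) + rng.normal(0, 0.01, z_obs.size)

params, _ = curve_fit(max_burkhart, z_obs, d_obs, p0=[-3, 1, -1, 20])
print("fitted b1..b4:", np.round(params, 3))

# Predicted diameter at 30% of tree height for a tree with DBH = 35 cm:
print("d(0.3 H) ≈", 35.0 * max_burkhart(np.array([0.3]), *params)[0], "cm")
```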
2303.01466
Peter Weinberg
Stephen G Gray and Peter D Weinberg
Investigating biomechanical determinants of endothelial permeability in a modified hollow fibre bioreactor
33 pages, 13 figures
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Effects of mechanical stress on the permeability of vascular endothelium are important to normal physiology and may be critical in the development of atherosclerosis, where they can account for the patchy arterial distribution of the disease. Such properties are frequently investigated in vitro. Here we evaluate and use the hollow fibre bioreactor for this purpose; in this system, endothelial cells form a confluent monolayer lining numerous plastic capillaries with porous walls, contained in a cartridge. The capillaries were perfused with a near-aortic waveform by an external pump, and permeability was assessed by the movement of rhodamine-labelled albumin from the intracapillary space to the extracapillary space. Confluence and quiescence of the cells were confirmed by electron microscopy and measurements of glucose consumption and permeability. The system was able to detect previously established influences on permeability: tracer transport was increased by acute application of shear stress and decreased by chronic shear stress compared to a static control, and was increased by thrombin or an NO synthase inhibitor under chronic shear. Increasing viscosity by addition of xanthan gum reduced permeability under both acute and chronic shear. Addition of damping chambers to reduce flow pulsatility increased permeability. Modifying the cartridge to allow chronic convection across the monolayer increased effective permeability more than could be explained by the addition of convective transport alone, indicating that it caused an increase in permeability. The off-the-shelf hollow fibre bioreactor provides an excellent system for investigating the biomechanics of endothelial permeability, and its potential is increased by simple modifications.
[ { "created": "Thu, 2 Mar 2023 18:27:58 GMT", "version": "v1" } ]
2023-03-03
[ [ "Gray", "Stephen G", "" ], [ "Weinberg", "Peter D", "" ] ]
Effects of mechanical stress on the permeability of vascular endothelium are important to normal physiology and may be critical in the development of atherosclerosis, where they can account for the patchy arterial distribution of the disease. Such properties are frequently investigated in vitro. Here we evaluate and use the hollow fibre bioreactor for this purpose; in this system, endothelial cells form a confluent monolayer lining numerous plastic capillaries with porous walls, contained in a cartridge. The capillaries were perfused with a near-aortic waveform by an external pump, and permeability was assessed by the movement of rhodamine-labelled albumin from the intracapillary space to the extracapillary space. Confluence and quiescence of the cells were confirmed by electron microscopy and measurements of glucose consumption and permeability. The system was able to detect previously established influences on permeability: tracer transport was increased by acute application of shear stress and decreased by chronic shear stress compared to a static control, and was increased by thrombin or an NO synthase inhibitor under chronic shear. Increasing viscosity by addition of xanthan gum reduced permeability under both acute and chronic shear. Addition of damping chambers to reduce flow pulsatility increased permeability. Modifying the cartridge to allow chronic convection across the monolayer increased effective permeability more than could be explained by the addition of convective transport alone, indicating that it caused an increase in permeability. The off-the-shelf hollow fibre bioreactor provides an excellent system for investigating the biomechanics of endothelial permeability, and its potential is increased by simple modifications.
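A minimal sketch of the tracer-based permeability calculation implied by the assay is shown below: apparent permeability is estimated as the rate of tracer accumulation in the extracapillary space divided by the luminal surface area and the intracapillary concentration. All numerical values (fibre count, dimensions, volumes, concentrations) are hypothetical placeholders, not figures from the paper:

```python
# Minimal sketch of the tracer-based permeability estimate:
# P_app = (dC_ecs/dt) * V_ecs / (A * C0), where dC_ecs/dt is the rate of
# rhodamine-albumin accumulation in the extracapillary space (ECS), A the
# total luminal surface area and C0 the intracapillary concentration.
# All numbers below are illustrative assumptions.
import numpy as np

n_fibres, fibre_radius_cm, fibre_length_cm = 20, 0.0165, 10.0
A = n_fibres * 2 * np.pi * fibre_radius_cm * fibre_length_cm   # luminal area, cm^2
C0 = 1.0e-3          # intracapillary tracer concentration, arbitrary units/mL
V_ecs = 2.0          # extracapillary volume, mL

t_min = np.array([0, 10, 20, 30, 40, 50, 60])                  # sampling times, min
C_ecs = np.array([0.0, 1.1, 2.3, 3.2, 4.4, 5.3, 6.5]) * 1e-6   # ECS concentration

dCdt, _ = np.polyfit(t_min * 60.0, C_ecs, 1)   # slope in units/mL per second
P_app = dCdt * V_ecs / (A * C0)                # apparent permeability, cm/s
print(f"apparent permeability ≈ {P_app:.2e} cm/s")
```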