column          type            range
id              stringlengths   9–13
submitter       stringlengths   4–48
authors         stringlengths   4–9.62k
title           stringlengths   4–343
comments        stringlengths   2–480
journal-ref     stringlengths   9–309
doi             stringlengths   12–138
report-no       stringclasses   277 values
categories      stringlengths   8–87
license         stringclasses   9 values
orig_abstract   stringlengths   27–3.76k
versions        listlengths     1–15
update_date     stringlengths   10–10
authors_parsed  listlengths     1–147
abstract        stringlengths   24–3.75k
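Each record below is a flat mapping over the fifteen columns listed above. A minimal validation sketch, assuming records arrive as Python dicts (the `SCHEMA` mapping and `validate_record` helper are illustrative, not part of any dataset API):

```python
# Expected columns and their coarse Python types, taken from the schema above.
SCHEMA = {
    "id": str, "submitter": str, "authors": str, "title": str,
    "comments": str, "journal-ref": str, "doi": str, "report-no": str,
    "categories": str, "license": str, "orig_abstract": str,
    "versions": list, "update_date": str, "authors_parsed": list,
    "abstract": str,
}

def validate_record(record):
    """Return a list of schema problems for one record (empty list = OK).

    Missing fields and wrong types are reported; None is tolerated,
    since several columns (doi, journal-ref, report-no, ...) are null
    in the records below.
    """
    problems = []
    for field, expected in SCHEMA.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif record[field] is not None and not isinstance(record[field], expected):
            problems.append(f"{field}: expected {expected.__name__}")
    return problems

# Example record, trimmed to a few populated fields for brevity.
example = {f: "" for f in SCHEMA}
example.update({
    "id": "1908.07075",
    "versions": [{"created": "Mon, 19 Aug 2019 21:26:50 GMT", "version": "v1"}],
    "authors_parsed": [["Morgulis", "Andrey", ""]],
    "doi": None,
})
print(validate_record(example))  # []
```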
1908.07075
Andrey Morgulis
Andrey Morgulis, Konstantin Ilin
Drift, stabilizing and destabilizing for a Patlak-Keller-Segel system with the short-wavelength external signal
null
null
null
null
q-bio.PE nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article aims at exploring the short-wavelength stabilization and destabilization of the advection-diffusion systems formulated using the Patlak-Keller-Segel cross-diffusion. We study a model of the taxis partly driven by an external signal. We address the general short-wavelength signal using the homogenization technique, and then we give a detailed analysis of the signals emitted as the travelling waves. It turns out that homogenizing produces the drift of species, which is the main translator of the external signal effects, in particular, on the stability issues. We examine the stability of the quasi-equilibria - that is, the simplest short-wavelength patterns fully imposed by the external signal. Comparing the results to the case of switching the signal off allows us to estimate the effect of it. For instance, the effect of the travelling wave turns out to be not single-valued but depending on the wave speed. Namely, there is an independent threshold value such that increasing the amplitude of the wave destabilizes the quasi-equilibria provided that the wave speed is above this value. Otherwise, the same action exerts the opposite effect. It is worth to note that the effect is exponential in the amplitude of the wave in both cases.
[ { "created": "Mon, 19 Aug 2019 21:26:50 GMT", "version": "v1" }, { "created": "Mon, 2 Sep 2019 17:15:55 GMT", "version": "v2" }, { "created": "Fri, 21 Aug 2020 10:58:56 GMT", "version": "v3" } ]
2020-08-24
[ [ "Morgulis", "Andrey", "" ], [ "Ilin", "Konstantin", "" ] ]
This article explores the short-wavelength stabilization and destabilization of advection-diffusion systems formulated using the Patlak-Keller-Segel cross-diffusion. We study a model of taxis partly driven by an external signal. We address a general short-wavelength signal using the homogenization technique, and then give a detailed analysis of signals emitted as travelling waves. It turns out that homogenization produces a drift of species, which is the main translator of the external signal's effects, in particular on stability. We examine the stability of the quasi-equilibria, that is, the simplest short-wavelength patterns fully imposed by the external signal. Comparing the results to the case of switching the signal off allows us to estimate its effect. For instance, the effect of the travelling wave turns out not to be single-valued but to depend on the wave speed: there is an independent threshold value such that increasing the amplitude of the wave destabilizes the quasi-equilibria provided that the wave speed is above this value; otherwise, the same action exerts the opposite effect. It is worth noting that the effect is exponential in the amplitude of the wave in both cases.
2012.07608
Frederic Barraquand
Alix M.C. Sauve, Rachel A. Taylor, Frédéric Barraquand
The effect of seasonal strength and abruptness on predator-prey dynamics
null
null
10.1016/j.jtbi.2020.110175
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Coupled dynamical systems in ecology are known to respond to the seasonal forcing of their parameters with multiple dynamical behaviours, ranging from seasonal cycles to chaos. Seasonal forcing is predominantly modelled as a sine wave but the transition between seasons is often more sudden. Some studies mentioned the robustness of their results to the shape of the forcing signal, without detailed analyses. Therefore, whether and how the shape of seasonal forcing affects the dynamics of coupled dynamical systems remains unclear, while abrupt seasonal transitions are widespread in ecological systems. To provide some answers, we conduct a numerical analysis of the dynamical response of predator--prey communities to the shape of the forcing signal by exploring the joint effect of two features of seasonal forcing: the magnitude of the signal, which is classically the only one studied, and the shape of the signal, abrupt or sinusoidal. We consider both linear and saturating functional responses, and focus on seasonal forcing of the predator's discovery rate, which fluctuates with changing environmental conditions and prey's ability to escape predation. Our numerical results highlight that a more abrupt seasonal forcing mostly alters the magnitude of population fluctuations and triggers period--doubling bifurcations, as well as the emergence of chaos, at lower forcing strength than for sine waves. Controlling the variance of the forcing signal mitigates this trend but does not fully suppress it, which suggests that the variance is not the only feature of the shape of seasonal forcing that acts on community dynamics. Although theoretical studies may predict correctly the sequence of bifurcations using sine waves as a representation of seasonality, there is a rationale for applied studies to implement as realistic seasonal forcing as possible to make precise predictions of community dynamics.
[ { "created": "Mon, 14 Dec 2020 14:58:48 GMT", "version": "v1" } ]
2020-12-15
[ [ "Sauve", "Alix M. C.", "" ], [ "Taylor", "Rachel A.", "" ], [ "Barraquand", "Frédéric", "" ] ]
Coupled dynamical systems in ecology are known to respond to the seasonal forcing of their parameters with multiple dynamical behaviours, ranging from seasonal cycles to chaos. Seasonal forcing is predominantly modelled as a sine wave but the transition between seasons is often more sudden. Some studies mentioned the robustness of their results to the shape of the forcing signal, without detailed analyses. Therefore, whether and how the shape of seasonal forcing affects the dynamics of coupled dynamical systems remains unclear, while abrupt seasonal transitions are widespread in ecological systems. To provide some answers, we conduct a numerical analysis of the dynamical response of predator--prey communities to the shape of the forcing signal by exploring the joint effect of two features of seasonal forcing: the magnitude of the signal, which is classically the only one studied, and the shape of the signal, abrupt or sinusoidal. We consider both linear and saturating functional responses, and focus on seasonal forcing of the predator's discovery rate, which fluctuates with changing environmental conditions and prey's ability to escape predation. Our numerical results highlight that a more abrupt seasonal forcing mostly alters the magnitude of population fluctuations and triggers period--doubling bifurcations, as well as the emergence of chaos, at lower forcing strength than for sine waves. Controlling the variance of the forcing signal mitigates this trend but does not fully suppress it, which suggests that the variance is not the only feature of the shape of seasonal forcing that acts on community dynamics. Although theoretical studies may predict correctly the sequence of bifurcations using sine waves as a representation of seasonality, there is a rationale for applied studies to implement as realistic seasonal forcing as possible to make precise predictions of community dynamics.
2104.10678
Christopher Thron
Christopher Thron, Vianney Mbazumutima, Luis Vargas Tamayo, Leonard Todjihounde
Cost Effective Reproduction Number Based Strategies for Reducing Deaths from COVID-19
36 pages, 20 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
In epidemiology, the effective reproduction number $R_e$ is used to characterize the growth rate of an epidemic outbreak. In this paper, we investigate properties of $R_e$ for a modified SEIR model of COVID-19 in the city of Houston, TX USA, in which the population is divided into low-risk and high-risk subpopulations. The response of $R_e$ to two types of control measures (testing and distancing) applied to the two different subpopulations is characterized. A nonlinear cost model is used for control measures, to include the effects of diminishing returns. We propose three types of heuristic strategies for mitigating COVID-19 that are targeted at reducing $R_e$, and we exhibit the tradeoffs between strategy implementation costs and number of deaths. We also consider two variants of each type of strategy: basic strategies, which consider only the effects of controls on $R_e$, without regard to subpopulation; and high-risk prioritizing strategies, which maximize control of the high-risk subpopulation. Results showed that of the three heuristic strategy types, the most cost-effective involved setting a target value for $R_e$ and applying sufficient controls to attain that target value. This heuristic led to strategies that begin with strict distancing of the entire population, later followed by increased testing. Strategies that maximize control on high-risk individuals were less cost-effective than basic strategies that emphasize reduction of the rate of spreading of the disease. The model shows that delaying the start of control measures past a certain point greatly worsens strategy outcomes. We conclude that the effective reproduction can be a valuable real-time indicator in determining cost-effective control strategies.
[ { "created": "Mon, 19 Apr 2021 18:56:16 GMT", "version": "v1" } ]
2021-04-22
[ [ "Thron", "Christopher", "" ], [ "Mbazumutima", "Vianney", "" ], [ "Tamayo", "Luis Vargas", "" ], [ "Todjihounde", "Leonard", "" ] ]
In epidemiology, the effective reproduction number $R_e$ is used to characterize the growth rate of an epidemic outbreak. In this paper, we investigate properties of $R_e$ for a modified SEIR model of COVID-19 in Houston, TX, USA, in which the population is divided into low-risk and high-risk subpopulations. The response of $R_e$ to two types of control measures (testing and distancing) applied to the two different subpopulations is characterized. A nonlinear cost model is used for control measures, to include the effects of diminishing returns. We propose three types of heuristic strategies for mitigating COVID-19 that are targeted at reducing $R_e$, and we exhibit the tradeoffs between strategy implementation costs and number of deaths. We also consider two variants of each type of strategy: basic strategies, which consider only the effects of controls on $R_e$, without regard to subpopulation; and high-risk prioritizing strategies, which maximize control of the high-risk subpopulation. Results showed that of the three heuristic strategy types, the most cost-effective involved setting a target value for $R_e$ and applying sufficient controls to attain that target value. This heuristic led to strategies that begin with strict distancing of the entire population, later followed by increased testing. Strategies that maximize control on high-risk individuals were less cost-effective than basic strategies that emphasize reduction of the rate of spreading of the disease. The model shows that delaying the start of control measures past a certain point greatly worsens strategy outcomes. We conclude that the effective reproduction number can be a valuable real-time indicator in determining cost-effective control strategies.
2401.06155
Xiuyuan Hu
Xiuyuan Hu, Guoqing Liu, Yang Zhao, Hao Zhang
De novo Drug Design using Reinforcement Learning with Multiple GPT Agents
Accepted by NeurIPS 2023
null
null
null
q-bio.BM cs.CE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
De novo drug design is a pivotal issue in pharmacology and a new area of focus in AI for science research. A central challenge in this field is to generate molecules with specific properties while also producing a wide range of diverse candidates. Although advanced technologies such as transformer models and reinforcement learning have been applied in drug design, their potential has not been fully realized. Therefore, we propose MolRL-MGPT, a reinforcement learning algorithm with multiple GPT agents for drug molecular generation. To promote molecular diversity, we encourage the agents to collaborate in searching for desirable molecules in diverse directions. Our algorithm has shown promising results on the GuacaMol benchmark and exhibits efficacy in designing inhibitors against SARS-CoV-2 protein targets. The codes are available at: https://github.com/HXYfighter/MolRL-MGPT.
[ { "created": "Thu, 21 Dec 2023 13:24:03 GMT", "version": "v1" } ]
2024-01-15
[ [ "Hu", "Xiuyuan", "" ], [ "Liu", "Guoqing", "" ], [ "Zhao", "Yang", "" ], [ "Zhang", "Hao", "" ] ]
De novo drug design is a pivotal issue in pharmacology and a new area of focus in AI for science research. A central challenge in this field is to generate molecules with specific properties while also producing a wide range of diverse candidates. Although advanced technologies such as transformer models and reinforcement learning have been applied in drug design, their potential has not been fully realized. Therefore, we propose MolRL-MGPT, a reinforcement learning algorithm with multiple GPT agents for drug molecular generation. To promote molecular diversity, we encourage the agents to collaborate in searching for desirable molecules in diverse directions. Our algorithm has shown promising results on the GuacaMol benchmark and exhibits efficacy in designing inhibitors against SARS-CoV-2 protein targets. The code is available at: https://github.com/HXYfighter/MolRL-MGPT.
0809.4080
Sahand Jamal Rahi
Sahand Jamal Rahi, Peter Virnau, Leonid A. Mirny, Mehran Kardar
Predicting Transcription Factor Specificity with All-Atom Models
26 pages, 3 figures
Nucleic Acids Res. 36, 6209-6217 (2008)
10.1093/nar/gkn589
null
q-bio.BM physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The binding of a transcription factor (TF) to a DNA operator site can initiate or repress the expression of a gene. Computational prediction of sites recognized by a TF has traditionally relied upon knowledge of several cognate sites, rather than an ab initio approach. Here, we examine the possibility of using structure-based energy calculations that require no knowledge of bound sites but rather start with the structure of a protein-DNA complex. We study the PurR E. coli TF, and explore to which extent atomistic models of protein-DNA complexes can be used to distinguish between cognate and non-cognate DNA sites. Particular emphasis is placed on systematic evaluation of this approach by comparing its performance with bioinformatic methods, by testing it against random decoys and sites of homologous TFs. We also examine a set of experimental mutations in both DNA and the protein. Using our explicit estimates of energy, we show that the specificity for PurR is dominated by direct protein-DNA interactions, and weakly influenced by bending of DNA.
[ { "created": "Wed, 24 Sep 2008 04:16:58 GMT", "version": "v1" } ]
2008-11-10
[ [ "Rahi", "Sahand Jamal", "" ], [ "Virnau", "Peter", "" ], [ "Mirny", "Leonid A.", "" ], [ "Kardar", "Mehran", "" ] ]
The binding of a transcription factor (TF) to a DNA operator site can initiate or repress the expression of a gene. Computational prediction of sites recognized by a TF has traditionally relied upon knowledge of several cognate sites, rather than an ab initio approach. Here, we examine the possibility of using structure-based energy calculations that require no knowledge of bound sites but rather start with the structure of a protein-DNA complex. We study the PurR E. coli TF, and explore to what extent atomistic models of protein-DNA complexes can be used to distinguish between cognate and non-cognate DNA sites. Particular emphasis is placed on systematic evaluation of this approach by comparing its performance with bioinformatic methods, by testing it against random decoys and sites of homologous TFs. We also examine a set of experimental mutations in both DNA and the protein. Using our explicit estimates of energy, we show that the specificity for PurR is dominated by direct protein-DNA interactions, and weakly influenced by bending of DNA.
1904.06973
Ali Jalilvand
Ali Jalilvand, Behzad Akbari, Fatemeh Zare Mirakabad, Foad Ghaderi
Disease gene prioritization using network topological analysis from a sequence based human functional linkage network
null
null
null
null
q-bio.MN q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sequencing large number of candidate disease genes which cause diseases in order to identify the relationship between them is an expensive and time-consuming task. To handle these challenges, different computational approaches have been developed. Based on the observation that genes associated with similar diseases have a higher likelihood of interaction, a large class of these approaches relay on analyzing the topological properties of biological networks. However, the incomplete and noisy nature of biological networks is known as an important challenge in these approaches. In this paper, we propose a two-step framework for disease gene prioritization: (1) construction of a reliable human FLN using sequence information and machine learning techniques, (2) prioritizing the disease gene relations based on the constructed FLN. On our framework, unlike other FLN based frameworks that using FLNs based on integration of various low quality biological data, the sequence of proteins is used as the comprehensive data to construct a reliable initial network. In addition, the physicochemical properties of amino-acids are employed to describe the functionality of proteins. All in all, the proposed approach is evaluated and the results indicate the high efficiency and validity of the FLN in disease gene prioritization.
[ { "created": "Mon, 15 Apr 2019 11:39:11 GMT", "version": "v1" } ]
2019-04-16
[ [ "Jalilvand", "Ali", "" ], [ "Akbari", "Behzad", "" ], [ "Mirakabad", "Fatemeh Zare", "" ], [ "Ghaderi", "Foad", "" ] ]
Sequencing a large number of candidate disease genes in order to identify the relationships between them is an expensive and time-consuming task. To address this challenge, different computational approaches have been developed. Based on the observation that genes associated with similar diseases have a higher likelihood of interaction, a large class of these approaches relies on analyzing the topological properties of biological networks. However, the incomplete and noisy nature of biological networks is an important challenge for these approaches. In this paper, we propose a two-step framework for disease gene prioritization: (1) construction of a reliable human FLN using sequence information and machine learning techniques, and (2) prioritization of disease-gene relations based on the constructed FLN. In our framework, unlike other FLN-based frameworks that build FLNs by integrating various low-quality biological data, protein sequences are used as the comprehensive data source to construct a reliable initial network. In addition, the physicochemical properties of amino acids are employed to describe the functionality of proteins. The proposed approach is evaluated, and the results indicate the high efficiency and validity of the FLN in disease gene prioritization.
2006.15548
Gang Yi
Qiuyue Duan, Qi Yan, Yuqi Huang, Wenxiu Zhang, Shuhui Zhao, Gang Yi
Polymerase/nicking enzyme powered dual-template multi-cycled G-triplex machine for HIV-1 determination
16 pages, 7 Postscript figures, 3 tables
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We proposed a dual-template multi-cycled DNA nanomachine driven by polymerase nicking enzyme with high efficiency. The reaction system simply consists of two templates (T1, T2) and two enzymes (KF polymerase, Nb.BbvCI). The two templates are similar in structure (X-X-Y, Y-Y-C): primer recognition region, primer analogue generation region, output region (3 to 5), and there is a nicking site between each two regions. Output of T1 is the primer of T2 and G-rich fragment (G3) is designed as the final products. In the presence of HIV-1, numerous of G3 were generated owing to the multi-cycled amplification strategy and formed into G-triplex ThT complex after the addition of thioflavin T (ThT), which greatly enhanced the fluorescence intensity as signal reporter in the label-free sensing strategy. A dynamic response range of 50 fM-2 nM for HIV-1 gene detection can be achieved through this multi-cycled G-triplex machine, and benefit from the high efficiency amplification strategy, enzymatic reaction can be completed within 45 minutes followed by fluorescence measurement. In addition, analysis of other targets can be achieved by replacing the template sequence. Thus there is a certain application potential for trace biomarker analysis in this strategy.
[ { "created": "Sun, 28 Jun 2020 09:14:10 GMT", "version": "v1" } ]
2020-06-30
[ [ "Duan", "Qiuyue", "" ], [ "Yan", "Qi", "" ], [ "Huang", "Yuqi", "" ], [ "Zhang", "Wenxiu", "" ], [ "Zhao", "Shuhui", "" ], [ "Yi", "Gang", "" ] ]
We propose a highly efficient dual-template multi-cycled DNA nanomachine driven by a polymerase and a nicking enzyme. The reaction system consists simply of two templates (T1, T2) and two enzymes (KF polymerase, Nb.BbvCI). The two templates are similar in structure (X-X-Y, Y-Y-C): a primer recognition region, a primer analogue generation region, and an output region (3' to 5'), with a nicking site between each pair of regions. The output of T1 is the primer of T2, and a G-rich fragment (G3) is designed as the final product. In the presence of HIV-1, numerous G3 fragments are generated owing to the multi-cycled amplification strategy and form G-triplex/ThT complexes after the addition of thioflavin T (ThT), which greatly enhances the fluorescence intensity serving as the signal reporter in this label-free sensing strategy. A dynamic response range of 50 fM-2 nM for HIV-1 gene detection can be achieved with this multi-cycled G-triplex machine, and thanks to the highly efficient amplification strategy, the enzymatic reaction can be completed within 45 minutes, followed by fluorescence measurement. In addition, other targets can be analysed by replacing the template sequence. This strategy therefore has application potential for trace biomarker analysis.
0910.4067
Steven Kelk
Katharina T. Huber, Leo van Iersel, Steven Kelk and Radoslaw Suchecki
A Practical Algorithm for Reconstructing Level-1 Phylogenetic Networks
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently much attention has been devoted to the construction of phylogenetic networks which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks - a type of network slightly more general than a phylogenetic tree - from triplets. Our algorithm has been made publicly available as the program LEV1ATHAN. It combines ideas from several known theoretical algorithms for phylogenetic tree and network reconstruction with two novel subroutines. Namely, an exponential-time exact and a greedy algorithm both of which are of independent theoretical interest. Most importantly, LEV1ATHAN runs in polynomial time and always constructs a level-1 network. If the data is consistent with a phylogenetic tree, then the algorithm constructs such a tree. Moreover, if the input triplet set is dense and, in addition, is fully consistent with some level-1 network, it will find such a network. The potential of LEV1ATHAN is explored by means of an extensive simulation study and a biological data set. One of our conclusions is that LEV1ATHAN is able to construct networks consistent with a high percentage of input triplets, even when these input triplets are affected by a low to moderate level of noise.
[ { "created": "Wed, 21 Oct 2009 12:17:05 GMT", "version": "v1" } ]
2009-10-22
[ [ "Huber", "Katharina T.", "" ], [ "van Iersel", "Leo", "" ], [ "Kelk", "Steven", "" ], [ "Suchecki", "Radoslaw", "" ] ]
Recently much attention has been devoted to the construction of phylogenetic networks, which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks - a type of network slightly more general than a phylogenetic tree - from triplets. Our algorithm has been made publicly available as the program LEV1ATHAN. It combines ideas from several known theoretical algorithms for phylogenetic tree and network reconstruction with two novel subroutines, namely an exponential-time exact algorithm and a greedy algorithm, both of which are of independent theoretical interest. Most importantly, LEV1ATHAN runs in polynomial time and always constructs a level-1 network. If the data is consistent with a phylogenetic tree, then the algorithm constructs such a tree. Moreover, if the input triplet set is dense and, in addition, is fully consistent with some level-1 network, it will find such a network. The potential of LEV1ATHAN is explored by means of an extensive simulation study and a biological data set. One of our conclusions is that LEV1ATHAN is able to construct networks consistent with a high percentage of input triplets, even when these input triplets are affected by a low to moderate level of noise.
2211.01867
Ishraq Ahmed
Ishraq U. Ahmed, Helen M. Byrne, Mary R. Myerscough
Macrophage anti-inflammatory behaviour in a multiphase model of atherosclerotic plaque development
null
Bull Math Biol 85, 37 (2023)
10.1007/s11538-023-01142-7
null
q-bio.CB q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Atherosclerosis is an inflammatory disease characterised by the formation of plaques, which are deposits of lipids and cholesterol-laden macrophages that form in the artery wall. The inflammation is often non-resolving, due in large part to changes in normal macrophage anti-inflammatory behaviour that are induced by the toxic plaque microenvironment. These changes include higher death rates, defective efferocytic uptake of dead cells, and reduced rates of emigration. We develop a free boundary multiphase model for early atherosclerotic plaques, and we use it to investigate the effects of impaired macrophage anti-inflammatory behaviour on plaque structure and growth. We find that high rates of cell death relative to efferocytic uptake results in a plaque populated mostly by dead cells. We also find that emigration can potentially slow or halt plaque growth by allowing material to exit the plaque, but this is contingent on the availability of live macrophage foam cells in the deep plaque. Finally, we introduce an additional bead species to model macrophage tagging via microspheres, and we use the extended model to explore how high rates of cell death and low rates of efferocytosis and emigration prevent the clearance of macrophages from the plaque.
[ { "created": "Thu, 3 Nov 2022 14:56:41 GMT", "version": "v1" } ]
2024-01-19
[ [ "Ahmed", "Ishraq U.", "" ], [ "Byrne", "Helen M.", "" ], [ "Myerscough", "Mary R.", "" ] ]
Atherosclerosis is an inflammatory disease characterised by the formation of plaques, which are deposits of lipids and cholesterol-laden macrophages that form in the artery wall. The inflammation is often non-resolving, due in large part to changes in normal macrophage anti-inflammatory behaviour that are induced by the toxic plaque microenvironment. These changes include higher death rates, defective efferocytic uptake of dead cells, and reduced rates of emigration. We develop a free boundary multiphase model for early atherosclerotic plaques, and we use it to investigate the effects of impaired macrophage anti-inflammatory behaviour on plaque structure and growth. We find that high rates of cell death relative to efferocytic uptake results in a plaque populated mostly by dead cells. We also find that emigration can potentially slow or halt plaque growth by allowing material to exit the plaque, but this is contingent on the availability of live macrophage foam cells in the deep plaque. Finally, we introduce an additional bead species to model macrophage tagging via microspheres, and we use the extended model to explore how high rates of cell death and low rates of efferocytosis and emigration prevent the clearance of macrophages from the plaque.
2007.12073
Shujaat Khan Engr
Seongyong Park, Shujaat Khan, Abdul Wahab
E3-targetPred: Prediction of E3-Target Proteins Using Deep Latent Space Encoding
Submitted to IEEE/ACM transactions on computational biology and bioinformatics
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding E3 ligase and target substrate interactions are important for cell biology and therapeutic development. However, experimental identification of E3 target relationships is not an easy task due to the labor-intensive nature of the experiments. In this article, a sequence-based E3-target prediction model is proposed for the first time. The proposed framework utilizes composition of k-spaced amino acid pairs (CKSAAP) to learn the relationship between E3 ligases and their target protein. A class separable latent space encoding scheme is also devised that provides a compressed representation of feature space. A thorough ablation study is performed to identify an optimal gap size for CKSAAP and the number of latent variables that can represent the E3-target relationship successfully. The proposed scheme is evaluated on an independent dataset for a variety of standard quantitative measures. In particular, it achieves an average accuracy of $70.63\%$ on an independent dataset. The source code and datasets used in the study are available at the author's GitHub page (https://github.com/psychemistz/E3targetPred).
[ { "created": "Fri, 26 Jun 2020 17:21:30 GMT", "version": "v1" } ]
2020-07-24
[ [ "Park", "Seongyong", "" ], [ "Khan", "Shujaat", "" ], [ "Wahab", "Abdul", "" ] ]
Understanding E3 ligase and target substrate interactions is important for cell biology and therapeutic development. However, experimental identification of E3-target relationships is not an easy task due to the labor-intensive nature of the experiments. In this article, a sequence-based E3-target prediction model is proposed for the first time. The proposed framework utilizes the composition of k-spaced amino acid pairs (CKSAAP) to learn the relationship between E3 ligases and their target proteins. A class-separable latent space encoding scheme is also devised that provides a compressed representation of the feature space. A thorough ablation study is performed to identify an optimal gap size for CKSAAP and the number of latent variables that can successfully represent the E3-target relationship. The proposed scheme is evaluated on an independent dataset using a variety of standard quantitative measures. In particular, it achieves an average accuracy of $70.63\%$ on an independent dataset. The source code and datasets used in the study are available at the author's GitHub page (https://github.com/psychemistz/E3targetPred).
1602.08526
Hyun Youk
Eduardo P. Olimpio, Diego R. Gomez-Alvarez, Hyun Youk
Progress towards quantitative design principles of multicellular systems
Invited review - Submitted version - 36 pages (including 6 figures at the end)
null
null
null
q-bio.MN physics.bio-ph q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Living systems, particularly multicellular systems, often seem hopelessly complex. But recent studies have suggested that beneath this complexity, there may be unifying quantitative principles that we are only now starting to unravel. All cells interact with their environments and with other cells. Communication among cells is a primary means for cells to interact with each other. The complexity of these multicellular systems, due to the large numbers of cells and the diversity of intracellular and intercellular interactions, makes understanding multicellular systems a daunting task. To overcome this challenge, we will likely need judicious simplifications and conceptual frameworks that can reveal design principles that are shared among diverse multicellular systems. Here we review some recent progress towards developing such frameworks.
[ { "created": "Fri, 26 Feb 2016 22:42:21 GMT", "version": "v1" } ]
2016-03-01
[ [ "Olimpio", "Eduardo P.", "" ], [ "Gomez-Alvarez", "Diego R.", "" ], [ "Youk", "Hyun", "" ] ]
Living systems, particularly multicellular systems, often seem hopelessly complex. But recent studies have suggested that beneath this complexity, there may be unifying quantitative principles that we are only now starting to unravel. All cells interact with their environments and with other cells. Communication among cells is a primary means for cells to interact with each other. The complexity of these multicellular systems, due to the large numbers of cells and the diversity of intracellular and intercellular interactions, makes understanding multicellular systems a daunting task. To overcome this challenge, we will likely need judicious simplifications and conceptual frameworks that can reveal design principles that are shared among diverse multicellular systems. Here we review some recent progress towards developing such frameworks.
1406.1675
Michael Bowler Ph D
Michael G Bowler, Colleen K Kelly
On the statistical machinery of alien species
14 pages, 3 figures. Follows and strengthens arXiv:1004.2271 . Version 2 has 16 pages and 3 figures. It differs from the original version in a revised and extended discussion of the biological aspects and a small change in a parameter to improve agreement with data
Entropy (2017) 19 674
10.3390/e19120674
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many species of plants are found in regions to which they are alien, and their global distribution has been found to exhibit several remarkable patterns, characterised by exponential functions of the kind that could arise through versions of MacArthur's broken stick. We show here that these various patterns are all quantitatively reproduced by a simple algorithm, in terms of a single parameter - a single stick to be broken. This algorithm admits a biological interpretation in terms of niche structures fluctuating with time and productivity, with sites and species highly idiosyncratic. Technically, this is an application of statistical mechanics to ecology quite different from the familiar application to species abundance distributions.
[ { "created": "Fri, 6 Jun 2014 13:09:58 GMT", "version": "v1" }, { "created": "Fri, 22 Jul 2016 11:57:41 GMT", "version": "v2" } ]
2017-12-12
[ [ "Bowler", "Michael G", "" ], [ "Kelly", "Colleen K", "" ] ]
Many species of plants are found in regions to which they are alien, and their global distribution has been found to exhibit several remarkable patterns, characterised by exponential functions of the kind that could arise through versions of MacArthur's broken stick. We show here that these various patterns are all quantitatively reproduced by a simple algorithm, in terms of a single parameter - a single stick to be broken. This algorithm admits a biological interpretation in terms of niche structures fluctuating with time and productivity, with sites and species highly idiosyncratic. Technically, this is an application of statistical mechanics to ecology quite different from the familiar application to species abundance distributions.
1310.2063
J.H. van Hateren
J.H. van Hateren
Active causation and the origin of meaning
revised and extended
Biological Cybernetics 109, 33-46 (2015)
10.1007/s00422-014-0622-6
null
q-bio.PE cs.NE nlin.AO q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose and meaning are necessary concepts for understanding mind and culture, but appear to be absent from the physical world and are not part of the explanatory framework of the natural sciences. Understanding how meaning (in the broad sense of the term) could arise from a physical world has proven to be a tough problem. The basic scheme of Darwinian evolution produces adaptations that only represent apparent ("as if") goals and meaning. Here I use evolutionary models to show that a slight, evolvable extension of the basic scheme is sufficient to produce genuine goals. The extension, targeted modulation of mutation rate, is known to be generally present in biological cells, and gives rise to two phenomena that are absent from the non-living world: intrinsic meaning and the ability to initiate goal-directed chains of causation (active causation). The extended scheme accomplishes this by utilizing randomness modulated by a feedback loop that is itself regulated by evolutionary pressure. The mechanism can be extended to behavioural variability as well, and thus shows how freedom of behaviour is possible. A further extension to communication suggests that the active exchange of intrinsic meaning between organisms may be the origin of consciousness, which in combination with active causation can provide a physical basis for the phenomenon of free will.
[ { "created": "Tue, 8 Oct 2013 09:49:39 GMT", "version": "v1" }, { "created": "Tue, 22 Oct 2013 11:37:41 GMT", "version": "v2" }, { "created": "Tue, 18 Feb 2014 15:10:27 GMT", "version": "v3" }, { "created": "Sun, 25 May 2014 08:39:42 GMT", "version": "v4" } ]
2015-02-04
[ [ "van Hateren", "J. H.", "" ] ]
Purpose and meaning are necessary concepts for understanding mind and culture, but appear to be absent from the physical world and are not part of the explanatory framework of the natural sciences. Understanding how meaning (in the broad sense of the term) could arise from a physical world has proven to be a tough problem. The basic scheme of Darwinian evolution produces adaptations that only represent apparent ("as if") goals and meaning. Here I use evolutionary models to show that a slight, evolvable extension of the basic scheme is sufficient to produce genuine goals. The extension, targeted modulation of mutation rate, is known to be generally present in biological cells, and gives rise to two phenomena that are absent from the non-living world: intrinsic meaning and the ability to initiate goal-directed chains of causation (active causation). The extended scheme accomplishes this by utilizing randomness modulated by a feedback loop that is itself regulated by evolutionary pressure. The mechanism can be extended to behavioural variability as well, and thus shows how freedom of behaviour is possible. A further extension to communication suggests that the active exchange of intrinsic meaning between organisms may be the origin of consciousness, which in combination with active causation can provide a physical basis for the phenomenon of free will.
1305.2086
Bob Eisenberg
Bob Eisenberg
Interacting Ions in Biophysics: Real is not Ideal
null
Biophysical Journal (2013) 104:1849-1866
10.1016/j.bpj.2013.03.049
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ions in water are important in biology, from molecules to organs. Classically, ions in water are treated as ideal noninteracting particles in a perfect gas. The excess free energy of an ion was zero. Mathematics was not available to deal consistently with flows, or interactions with ions or boundaries. Non-classical approaches are needed because ions in biological conditions flow and interact. The concentration gradient of one ion can drive the flow of another, even in a bulk solution. A variational multiscale approach is needed to deal with interactions and flow. The recently developed energetic variational approach to dissipative systems allows mathematically consistent treatment of bio-ions Na, K, Ca and Cl as they interact and flow. Interactions produce a large excess free energy that dominates the properties of the high concentration of ions in and near protein active sites, channels, and nucleic acids: the number density of ions is often more than 10 M. Ions in such crowded quarters interact strongly with each other as well as with the surrounding protein. Non-ideal behavior has classically been ascribed to allosteric interactions mediated by protein conformation changes. Ion-ion interactions present in crowded solutions--independent of conformation changes of proteins--are likely to change interpretations of allosteric phenomena. Computation of all atoms is a popular alternative to the multiscale approach. Such computations involve formidable challenges. Biological systems exist on very different scales from atomic motion. Biological systems exist in ionic mixtures (extracellular/intracellular solutions), and usually involve flow and trace concentrations of messenger ions (e.g., 10-7 M Ca2+). Energetic variational methods can deal with these characteristic properties of biological systems while we await the maturation and calibration of all-atom simulations of ionic mixtures and divalents.
[ { "created": "Thu, 9 May 2013 13:36:45 GMT", "version": "v1" } ]
2015-06-15
[ [ "Eisenberg", "Bob", "" ] ]
Ions in water are important in biology, from molecules to organs. Classically, ions in water are treated as ideal noninteracting particles in a perfect gas. The excess free energy of an ion was zero. Mathematics was not available to deal consistently with flows, or interactions with ions or boundaries. Non-classical approaches are needed because ions in biological conditions flow and interact. The concentration gradient of one ion can drive the flow of another, even in a bulk solution. A variational multiscale approach is needed to deal with interactions and flow. The recently developed energetic variational approach to dissipative systems allows mathematically consistent treatment of bio-ions Na, K, Ca and Cl as they interact and flow. Interactions produce a large excess free energy that dominates the properties of the high concentration of ions in and near protein active sites, channels, and nucleic acids: the number density of ions is often more than 10 M. Ions in such crowded quarters interact strongly with each other as well as with the surrounding protein. Non-ideal behavior has classically been ascribed to allosteric interactions mediated by protein conformation changes. Ion-ion interactions present in crowded solutions--independent of conformation changes of proteins--are likely to change interpretations of allosteric phenomena. Computation of all atoms is a popular alternative to the multiscale approach. Such computations involve formidable challenges. Biological systems exist on very different scales from atomic motion. Biological systems exist in ionic mixtures (extracellular/intracellular solutions), and usually involve flow and trace concentrations of messenger ions (e.g., 10-7 M Ca2+). Energetic variational methods can deal with these characteristic properties of biological systems while we await the maturation and calibration of all-atom simulations of ionic mixtures and divalents.
1702.05739
Julian Garcia
Rui Chen, Garcia Julian and Meyer Bernd
Social learning in a simple task allocation game
null
null
null
null
q-bio.PE cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the effects of social interactions in task allocation using Evolutionary Game Theory (EGT). We propose a simple task-allocation game and study how different learning mechanisms can give rise to specialised and non-specialised colonies under different ecological conditions. By combining agent-based simulations and adaptive dynamics we show that social learning can result in colonies of generalists or specialists, depending on ecological parameters. Agent-based simulations further show that learning dynamics play a crucial role in task allocation. In particular, introspective individual learning readily favours the emergence of specialists, while a process resembling task recruitment favours the emergence of generalists.
[ { "created": "Sun, 19 Feb 2017 11:06:28 GMT", "version": "v1" } ]
2017-02-21
[ [ "Chen", "Rui", "" ], [ "Julian", "Garcia", "" ], [ "Bernd", "Meyer", "" ] ]
We investigate the effects of social interactions in task allocation using Evolutionary Game Theory (EGT). We propose a simple task-allocation game and study how different learning mechanisms can give rise to specialised and non-specialised colonies under different ecological conditions. By combining agent-based simulations and adaptive dynamics we show that social learning can result in colonies of generalists or specialists, depending on ecological parameters. Agent-based simulations further show that learning dynamics play a crucial role in task allocation. In particular, introspective individual learning readily favours the emergence of specialists, while a process resembling task recruitment favours the emergence of generalists.
1808.06307
Johan Nygren
Johan Nygren
The speciation of Australopithecus and Paranthropus was caused by introgression from the Gorilla lineage
6 pages, 7 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The discovery of Paranthropus deyiremeda in 3.3-3.5 million year old fossil sites in Afar (Haile-Selassie, 2015), together with 30% of the gorilla genome showing lineage sorting between humans and chimpanzees (Scally, 2012), and a NUMT ("nuclear mitochondrial DNA segment") that is shared by both gorillas, humans and chimpanzees, and that dates back to 6 million years ago (Popadin, 2017), is conclusive evidence that introgression from the gorilla lineage caused the speciation of both the Australopithecus lineage and the Paranthropus lineage, providing a lens into the gorilla-like features within Paranthropus, as well as traits within Homo that originate from the gorilla branch, such as a high opposable thumb index (Alm\'ecija, 2015), an adducted great toe (Tocheri, 2011, McHenry, 2006), and large deposits of subcutaneous fat.
[ { "created": "Mon, 20 Aug 2018 05:25:52 GMT", "version": "v1" }, { "created": "Tue, 21 Aug 2018 00:45:06 GMT", "version": "v2" } ]
2018-08-22
[ [ "Nygren", "Johan", "" ] ]
The discovery of Paranthropus deyiremeda in 3.3-3.5 million year old fossil sites in Afar (Haile-Selassie, 2015), together with 30% of the gorilla genome showing lineage sorting between humans and chimpanzees (Scally, 2012), and a NUMT ("nuclear mitochondrial DNA segment") that is shared by both gorillas, humans and chimpanzees, and that dates back to 6 million years ago (Popadin, 2017), is conclusive evidence that introgression from the gorilla lineage caused the speciation of both the Australopithecus lineage and the Paranthropus lineage, providing a lens into the gorilla-like features within Paranthropus, as well as traits within Homo that originate from the gorilla branch, such as a high opposable thumb index (Alm\'ecija, 2015), an adducted great toe (Tocheri, 2011, McHenry, 2006), and large deposits of subcutaneous fat.
1812.03455
Gr\'egory Dumont
Gregory Dumont and Boris Gutkin
Macroscopic phase resetting-curves determine oscillatory coherence and signal transfer in inter-coupled neural circuits
35 pages, 9 figures
null
10.1371/journal.pcbi.1007019
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Macroscopic oscillations of different brain regions show multiple phase relationships that are persistent across time and have been implicated in routing information. Various cellular-level mechanisms influence the network dynamics and structure the macroscopic firing patterns. A key question is to identify the biophysical neuronal and synaptic properties that permit such motifs to arise and how the different coherence states determine the communication between the neural circuits. We analyse the emergence of phase locking within bidirectionally delayed-coupled spiking circuits showing global gamma band oscillations. We consider both the interneuronal (ING) and the pyramidal-interneuronal (PING) gamma rhythms and the inter-coupling targeting the pyramidal or the inhibitory interneurons. Using a mean-field approach together with an exact reduction method, we break down each spiking network into a low-dimensional nonlinear system and derive the macroscopic phase resetting-curves (mPRCs) that determine how the phase of the global oscillation responds to incoming perturbations. Depending on the type of gamma oscillation, we show that incoming excitatory inputs can either only speed up the oscillation (phase advance; type I PRC) or induce both an advance and a delay of the macroscopic oscillation (phase delay; type II PRC). From there we determine the structure of macroscopic coherence states (phase locking) of two weakly synaptically-coupled networks. To do so we derive a phase equation for the coupled system which links the synaptic mechanisms to the coherence state of the system. We show that the transmission delay is a necessary condition for symmetry breaking, i.e. a non-symmetric phase lag between the macroscopic oscillations, potentially giving an explanation to the experimentally observed variety of gamma phase-locking modes.
[ { "created": "Sun, 9 Dec 2018 10:26:18 GMT", "version": "v1" } ]
2019-06-19
[ [ "Dumont", "Gregory", "" ], [ "Gutkin", "Boris", "" ] ]
Macroscopic oscillations of different brain regions show multiple phase relationships that are persistent across time and have been implicated in routing information. Various cellular-level mechanisms influence the network dynamics and structure the macroscopic firing patterns. A key question is to identify the biophysical neuronal and synaptic properties that permit such motifs to arise and how the different coherence states determine the communication between the neural circuits. We analyse the emergence of phase locking within bidirectionally delayed-coupled spiking circuits showing global gamma band oscillations. We consider both the interneuronal (ING) and the pyramidal-interneuronal (PING) gamma rhythms and the inter-coupling targeting the pyramidal or the inhibitory interneurons. Using a mean-field approach together with an exact reduction method, we break down each spiking network into a low-dimensional nonlinear system and derive the macroscopic phase resetting-curves (mPRCs) that determine how the phase of the global oscillation responds to incoming perturbations. Depending on the type of gamma oscillation, we show that incoming excitatory inputs can either only speed up the oscillation (phase advance; type I PRC) or induce both an advance and a delay of the macroscopic oscillation (phase delay; type II PRC). From there we determine the structure of macroscopic coherence states (phase locking) of two weakly synaptically-coupled networks. To do so we derive a phase equation for the coupled system which links the synaptic mechanisms to the coherence state of the system. We show that the transmission delay is a necessary condition for symmetry breaking, i.e. a non-symmetric phase lag between the macroscopic oscillations, potentially giving an explanation to the experimentally observed variety of gamma phase-locking modes.
1011.0669
Robert Rosenbaum
Robert Rosenbaum, Jianfu Ma, Fabien Marpeau, Aditya Barua, Kresimir Josic
Finite volume and asymptotic methods for stochastic neuron models with correlated inputs
null
null
null
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a pair of stochastic integrate-and-fire neurons receiving correlated stochastic inputs. The evolution of this system can be described by the corresponding Fokker-Planck equation with non-trivial boundary conditions resulting from the refractory period and firing threshold. We propose a finite volume method that is orders of magnitude faster than the Monte Carlo methods traditionally used to model such systems. The resulting numerical approximations are proved to be accurate, nonnegative and integrate to 1. We also approximate the transient evolution of the system using an Ornstein--Uhlenbeck process, and use the result to examine the properties of the joint output of cell pairs. The results suggest that the joint output of a cell pair is most sensitive to changes in input variance, and less sensitive to changes in input mean and correlation.
[ { "created": "Tue, 2 Nov 2010 16:41:12 GMT", "version": "v1" }, { "created": "Mon, 13 Dec 2010 16:37:43 GMT", "version": "v2" } ]
2010-12-14
[ [ "Rosenbaum", "Robert", "" ], [ "Ma", "Jianfu", "" ], [ "Marpeau", "Fabien", "" ], [ "Barua", "Aditya", "" ], [ "Josic", "Kresimir", "" ] ]
We consider a pair of stochastic integrate-and-fire neurons receiving correlated stochastic inputs. The evolution of this system can be described by the corresponding Fokker-Planck equation with non-trivial boundary conditions resulting from the refractory period and firing threshold. We propose a finite volume method that is orders of magnitude faster than the Monte Carlo methods traditionally used to model such systems. The resulting numerical approximations are proved to be accurate, nonnegative and integrate to 1. We also approximate the transient evolution of the system using an Ornstein--Uhlenbeck process, and use the result to examine the properties of the joint output of cell pairs. The results suggest that the joint output of a cell pair is most sensitive to changes in input variance, and less sensitive to changes in input mean and correlation.
2005.00394
Sophie Abby
Sophie Saphia Abby, Katayoun Kazemzadeh, Charles Vragniau, Ludovic Pelosi and Fabien Pierrel
Bacterial pathways for the biosynthesis of ubiquinone
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ubiquinone is an important component of the electron transfer chains in proteobacteria and eukaryotes. The biosynthesis of ubiquinone requires multiple steps, most of which are common to bacteria and eukaryotes. Whereas the enzymes of the mitochondrial pathway that produces ubiquinone are highly similar across eukaryotes, recent results point to a rather high diversity of pathways in bacteria. This review focuses on ubiquinone in bacteria, highlighting newly discovered functions and detailing the proteins that are known to participate in its biosynthetic pathways. Novel results showing that ubiquinone can be produced by a pathway independent of dioxygen suggest that ubiquinone may participate in anaerobiosis, in addition to its well-established role in aerobiosis. We also discuss the supramolecular organization of ubiquinone biosynthesis proteins and we summarize the current understanding of the evolution of the ubiquinone pathways relative to those of other isoprenoid quinones like menaquinone and plastoquinone.
[ { "created": "Fri, 1 May 2020 14:19:58 GMT", "version": "v1" } ]
2020-05-04
[ [ "Abby", "Sophie Saphia", "" ], [ "Kazemzadeh", "Katayoun", "" ], [ "Vragniau", "Charles", "" ], [ "Pelosi", "Ludovic", "" ], [ "Pierrel", "Fabien", "" ] ]
Ubiquinone is an important component of the electron transfer chains in proteobacteria and eukaryotes. The biosynthesis of ubiquinone requires multiple steps, most of which are common to bacteria and eukaryotes. Whereas the enzymes of the mitochondrial pathway that produces ubiquinone are highly similar across eukaryotes, recent results point to a rather high diversity of pathways in bacteria. This review focuses on ubiquinone in bacteria, highlighting newly discovered functions and detailing the proteins that are known to participate in its biosynthetic pathways. Novel results showing that ubiquinone can be produced by a pathway independent of dioxygen suggest that ubiquinone may participate in anaerobiosis, in addition to its well-established role in aerobiosis. We also discuss the supramolecular organization of ubiquinone biosynthesis proteins and we summarize the current understanding of the evolution of the ubiquinone pathways relative to those of other isoprenoid quinones like menaquinone and plastoquinone.
1904.05467
Mohammad Nami
Ali-Mohammad Kamali, Mohammad Javad Gholamzadeh, Seyedeh Zahra Mousavi, Maryam Vasaghi Gharamaleki, Mohammad Nami
Improved visual function in a case of ultra-low vision following ischemic encephalopathy following transcranial electrical stimulation; A case study
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cortical visual impairment is amongst the key pathological causes of pediatric visual abnormalities, predominantly resulting from hypoxic-ischemic brain injury. Such an injury results in profound visual impairments which severely impair the patient's quality of life. Given the nature of the pathology, treatments are mostly limited to rehabilitation strategies such as transcranial electrical stimulation and visual rehabilitation therapy. Here, we discuss the case of an 11-year-old girl with cortical visual impairment who underwent concurrent visual rehabilitation therapy and transcranial electrical stimulation, resulting in her improved visual function. This novel and noninvasive therapeutic intervention has shown potential for application in neuro-visual rehabilitation therapy (nVRT).
[ { "created": "Wed, 10 Apr 2019 22:34:17 GMT", "version": "v1" } ]
2019-04-12
[ [ "Kamali", "Ali-Mohammad", "" ], [ "Gholamzadeh", "Mohammad Javad", "" ], [ "Mousavi", "Seyedeh Zahra", "" ], [ "Gharamaleki", "Maryam Vasaghi", "" ], [ "Nami", "Mohammad", "" ] ]
Cortical visual impairment is amongst the key pathological causes of pediatric visual abnormalities, predominantly resulting from hypoxic-ischemic brain injury. Such an injury results in profound visual impairments which severely impair the patient's quality of life. Given the nature of the pathology, treatments are mostly limited to rehabilitation strategies such as transcranial electrical stimulation and visual rehabilitation therapy. Here, we discuss the case of an 11-year-old girl with cortical visual impairment who underwent concurrent visual rehabilitation therapy and transcranial electrical stimulation, resulting in her improved visual function. This novel and noninvasive therapeutic intervention has shown potential for application in neuro-visual rehabilitation therapy (nVRT).
1512.07033
Ezio Di Costanzo
Ezio Di Costanzo, Alessandro Giacomello, Elisa Messina, Roberto Natalini, Giuseppe Pontrelli, Fabrizio Rossi, Robert Smits, Monika Twarogowska
A discrete in continuous mathematical model of cardiac progenitor cells formation and growth as spheroid clusters (Cardiospheres)
null
Mathematical Medicine and Biology, 35-1, 121-144, 2018
10.1093/imammb/dqw022
null
q-bio.CB q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a discrete in continuous mathematical model describing the in vitro growth process of biopsy-derived mammalian cardiac progenitor cells growing as clusters in the form of spheres (Cardiospheres). The approach is hybrid: discrete at the cellular scale and continuous at the molecular level. In the present model cells are subject to the self-organizing collective dynamics mechanism and, additionally, they can proliferate and differentiate, also depending on stochastic processes. The two latter processes are triggered and regulated by chemical signals present in the environment. Numerical simulations show the structure and the development of the clustered progenitors and are in good agreement with the results obtained from in vitro experiments.
[ { "created": "Tue, 22 Dec 2015 11:17:19 GMT", "version": "v1" } ]
2018-08-15
[ [ "Di Costanzo", "Ezio", "" ], [ "Giacomello", "Alessandro", "" ], [ "Messina", "Elisa", "" ], [ "Natalini", "Roberto", "" ], [ "Pontrelli", "Giuseppe", "" ], [ "Rossi", "Fabrizio", "" ], [ "Smits", "Robert", "" ], [ "Twarogowska", "Monika", "" ] ]
We propose a discrete in continuous mathematical model describing the in vitro growth process of biopsy-derived mammalian cardiac progenitor cells growing as clusters in the form of spheres (Cardiospheres). The approach is hybrid: discrete at the cellular scale and continuous at the molecular level. In the present model cells are subject to the self-organizing collective dynamics mechanism and, additionally, they can proliferate and differentiate, also depending on stochastic processes. The two latter processes are triggered and regulated by chemical signals present in the environment. Numerical simulations show the structure and the development of the clustered progenitors and are in good agreement with the results obtained from in vitro experiments.
2205.03382
Gabriel Palma
Gabriel R. Palma, Renato M. Coutinho, Wesley A. C. Godoy, Fernando L. C\^onsoli and Roberto A. Kraenkel
Bacteriophage effect on parasitism resistance
19 pages
null
null
null
q-bio.QM q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Many studies have shown that the protection of the host $\it{Acyrthosiphon~pisum}$ (Hemiptera, Aphididae) against the parasitoid $\it{Aphidius~ervi}$ (Hymenoptera, Braconidae) is conferred by the interaction between the secondary endosymbiont $\it{Hamiltonella~defensa}$ and the bacteriophage $\it{APSE}$ ($\it{Acyrthosiphon~pisum}$ secondary endosymbiont). This interaction consists of the production of toxins by the endosymbiont's molecular machinery, which is encoded by the inserted $\it{APSE}$ genes. The toxins prevent the development of the parasitoid's egg, conferring protection on the host. However, the effects of this microscopic interaction on host-parasitoid dynamics are still an open question. We present a new mathematical model based on the bacteriophage effect on parasitism resistance. We identify the vertical transmission of the bacteriophage and the host survival after the parasitoid attack as potential drivers of coexistence. Also, we show that the vertical transmission of $\it{H.~defensa}$ is proportional to the time at which the protected population becomes extinct. Our results show that the survival of protected and unprotected hosts after the parasitoid attack is fundamental to understanding the equilibrium of long-term host-parasitoid dynamics. Finally, we illustrate our model considering its parameters based on experiments performed with $\it{A.~pisum}$ biotypes $\it{Genista~tinctoria}$ and $\it{Medicago~sativa}$.
[ { "created": "Fri, 6 May 2022 17:30:10 GMT", "version": "v1" } ]
2022-05-09
[ [ "Palma", "Gabriel R.", "" ], [ "Coutinho", "Renato M.", "" ], [ "Godoy", "Wesley A. C.", "" ], [ "Cônsoli", "Fernando L.", "" ], [ "Kraenkel", "Roberto A.", "" ] ]
Many studies have shown that the protection of the host $\it{Acyrthosiphon~pisum}$ (Hemiptera, Aphididae) against the parasitoid $\it{Aphidius~ervi}$ (Hymenoptera, Braconidae) is conferred by the interaction between the secondary endosymbiont $\it{Hamiltonella~defensa}$ and the bacteriophage $\it{APSE}$ ($\it{Acyrthosiphon~pisum}$ secondary endosymbiont). This interaction consists of the production of toxins by the endosymbiont's molecular machinery, which is encoded by the inserted $\it{APSE}$ genes. The toxins prevent the development of the parasitoid's egg, conferring protection on the host. However, the effects of this microscopic interaction on host-parasitoid dynamics are still an open question. We present a new mathematical model based on the bacteriophage effect on parasitism resistance. We identify the vertical transmission of the bacteriophage and the host survival after the parasitoid attack as potential drivers of coexistence. Also, we show that the vertical transmission of $\it{H.~defensa}$ is proportional to the time at which the protected population becomes extinct. Our results show that the survival of protected and unprotected hosts after the parasitoid attack is fundamental to understanding the equilibrium of long-term host-parasitoid dynamics. Finally, we illustrate our model considering its parameters based on experiments performed with $\it{A.~pisum}$ biotypes $\it{Genista~tinctoria}$ and $\it{Medicago~sativa}$.
2003.01214
Ana Pastore Y Piontti
Dina Mistry (1), Maria Litvinova (2 and 3), Ana Pastore y Piontti (2), Matteo Chinazzi (2), Laura Fumanelli (4), Marcelo F. C. Gomes (5), Syed A. Haque (2), Quan-Hui Liu (6), Kunpeng Mu (2), Xinyue Xiong (2), M. Elizabeth Halloran (7 and 8), Ira M. Longini Jr. (9), Stefano Merler (4), Marco Ajelli (4), Alessandro Vespignani (2 and 3) ((1) Institute for Disease Modeling, Bellevue, WA, USA, (2) Northeastern University, Boston, MA, USA, (3) Institute for Scientific Interchange Foundation, Turin, Italy, (4) Bruno Kessler Foundation, Trento, Italy, (5) Funda\c{c}\~ao Oswaldo Cruz, Rio de Janeiro, Brazil, (6) College of Computer Science, Sichuan University, Chengdu, Sichuan, China, (7) Fred Hutchinson Cancer Research Center, Seattle, WA, USA, (8) Department of Biostatistics, University of Washington, Seattle, WA, USA, (9) Department of Biostatistics, College of Public Health and Health Professions, University of Florida, Gainesville, FL, USA)
Inferring high-resolution human mixing patterns for disease modeling
18 pages, 7 figures
null
null
null
q-bio.PE physics.soc-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical and computational modeling approaches are increasingly used as quantitative tools in the analysis and forecasting of infectious disease epidemics. The growing need for realism in addressing complex public health questions is however calling for accurate models of the human contact patterns that govern the disease transmission processes. Here we present a data-driven approach to generate effective descriptions of population-level contact patterns by using highly detailed macro (census) and micro (survey) data on key socio-demographic features. We produce age-stratified contact matrices for 277 sub-national administrative regions of countries covering approximately 3.5 billion people and reflecting the high degree of cultural and societal diversity of the focus countries. We use the derived contact matrices to model the spread of airborne infectious diseases and show that sub-national heterogeneities in human mixing patterns have a marked impact on epidemic indicators such as the reproduction number and overall attack rate of epidemics of the same etiology. The contact patterns derived here are made publicly available as a modeling tool to study the impact of socio-economic differences and demographic heterogeneities across populations on the epidemiology of infectious diseases.
[ { "created": "Tue, 25 Feb 2020 22:33:39 GMT", "version": "v1" } ]
2020-03-04
[ [ "Mistry", "Dina", "", "1" ], [ "Litvinova", "Maria", "", "2 and 3" ], [ "Piontti", "Ana Pastore y", "", "2" ], [ "Chinazzi", "Matteo", "", "2" ], [ "Fumanelli", "Laura", "", "4" ], [ "Gomes", "Marcelo F. C.", "", "5" ], [ "Haque", "Syed A.", "", "2" ], [ "Liu", "Quan-Hui", "", "6" ], [ "Mu", "Kunpeng", "", "2" ], [ "Xiong", "Xinyue", "", "2" ], [ "Halloran", "M. Elizabeth", "", "7 and 8" ], [ "Longini", "Ira M.", "Jr.", "9" ], [ "Merler", "Stefano", "", "4" ], [ "Ajelli", "Marco", "", "4" ], [ "Vespignani", "Alessandro", "", "2 and 3" ] ]
Mathematical and computational modeling approaches are increasingly used as quantitative tools in the analysis and forecasting of infectious disease epidemics. The growing need for realism in addressing complex public health questions is however calling for accurate models of the human contact patterns that govern the disease transmission processes. Here we present a data-driven approach to generate effective descriptions of population-level contact patterns by using highly detailed macro (census) and micro (survey) data on key socio-demographic features. We produce age-stratified contact matrices for 277 sub-national administrative regions of countries covering approximately 3.5 billion people and reflecting the high degree of cultural and societal diversity of the focus countries. We use the derived contact matrices to model the spread of airborne infectious diseases and show that sub-national heterogeneities in human mixing patterns have a marked impact on epidemic indicators such as the reproduction number and overall attack rate of epidemics of the same etiology. The contact patterns derived here are made publicly available as a modeling tool to study the impact of socio-economic differences and demographic heterogeneities across populations on the epidemiology of infectious diseases.
2007.06602
Peter Ashcroft
Peter Ashcroft, Jana S. Huisman, Sonja Lehtinen, Judith A. Bouman, Christian L. Althaus, Roland R. Regoes, Sebastian Bonhoeffer
COVID-19 infectivity profile correction
5 pages, 2 figures
null
null
null
q-bio.PE stat.ME
http://creativecommons.org/licenses/by-nc-sa/4.0/
The infectivity profile of an individual with COVID-19 is attributed to the paper "Temporal dynamics in viral shedding and transmissibility of COVID-19" by He et al., published in Nature Medicine in April 2020. However, the analysis within this paper contains a mistake such that the published infectivity profile is incorrect and the conclusion that infectiousness begins 2.3 days before symptom onset is no longer supported. In this document we discuss the error and compute the correct infectivity profile. We also establish confidence intervals on this profile, quantify the difference between the published and the corrected profiles, and discuss an issue of normalisation when fitting serial interval data. This infectivity profile plays a central role in policy and decision making, thus it is crucial that this issue is corrected with the utmost urgency to prevent the propagation of this error into further studies and policies. We hope that this preprint will reach all researchers and policy makers who are using the incorrect infectivity profile to inform their work.
[ { "created": "Mon, 13 Jul 2020 18:08:03 GMT", "version": "v1" } ]
2020-07-15
[ [ "Ashcroft", "Peter", "" ], [ "Huisman", "Jana S.", "" ], [ "Lehtinen", "Sonja", "" ], [ "Bouman", "Judith A.", "" ], [ "Althaus", "Christian L.", "" ], [ "Regoes", "Roland R.", "" ], [ "Bonhoeffer", "Sebastian", "" ] ]
The infectivity profile of an individual with COVID-19 is attributed to the paper "Temporal dynamics in viral shedding and transmissibility of COVID-19" by He et al., published in Nature Medicine in April 2020. However, the analysis within this paper contains a mistake such that the published infectivity profile is incorrect and the conclusion that infectiousness begins 2.3 days before symptom onset is no longer supported. In this document we discuss the error and compute the correct infectivity profile. We also establish confidence intervals on this profile, quantify the difference between the published and the corrected profiles, and discuss an issue of normalisation when fitting serial interval data. This infectivity profile plays a central role in policy and decision making, thus it is crucial that this issue is corrected with the utmost urgency to prevent the propagation of this error into further studies and policies. We hope that this preprint will reach all researchers and policy makers who are using the incorrect infectivity profile to inform their work.
1902.08902
Michael Assaf
Carmel Sagi and Michael Assaf
Time Distribution for Persistent Viral Infection
19 pages, 12 figures
J. Stat. Mech. (2019), P063403
10.1088/1742-5468/ab1dd7
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the early stages of viral infection and the distribution of times to establish a persistent infection. The virus population proliferates by entering and reproducing inside a target cell until a sufficient number of new virus particles are released via a burst, with a given burst size distribution, which results in the death of the infected cell. Starting with a 2D model describing the joint dynamics of the virus and infected-cell populations, we analyze the corresponding master equation using the probability generating function formalism. Exploiting the time-scale separation between the virus and infected-cell dynamics, the 2D model can be cast into an effective 1D model. We solve the 1D model analytically for a particular choice of burst size distribution. In the general case, we solve the model numerically by performing extensive Monte-Carlo simulations, and demonstrate the equivalence between the 2D and 1D models by measuring the Kullback-Leibler divergence between the corresponding distributions. Importantly, we find that the distribution of infection times is highly skewed with a "fat" exponential right tail. This indicates that there is a non-negligible portion of individuals with an infection time significantly longer than the mean, which may have implications for when HIV tests should be performed.
[ { "created": "Sun, 24 Feb 2019 07:02:35 GMT", "version": "v1" }, { "created": "Wed, 10 Jul 2019 10:35:01 GMT", "version": "v2" } ]
2019-07-11
[ [ "Sagi", "Carmel", "" ], [ "Assaf", "Michael", "" ] ]
We study the early stages of viral infection and the distribution of times to establish a persistent infection. The virus population proliferates by entering and reproducing inside a target cell until a sufficient number of new virus particles are released via a burst, with a given burst size distribution, which results in the death of the infected cell. Starting with a 2D model describing the joint dynamics of the virus and infected-cell populations, we analyze the corresponding master equation using the probability generating function formalism. Exploiting the time-scale separation between the virus and infected-cell dynamics, the 2D model can be cast into an effective 1D model. We solve the 1D model analytically for a particular choice of burst size distribution. In the general case, we solve the model numerically by performing extensive Monte-Carlo simulations, and demonstrate the equivalence between the 2D and 1D models by measuring the Kullback-Leibler divergence between the corresponding distributions. Importantly, we find that the distribution of infection times is highly skewed with a "fat" exponential right tail. This indicates that there is a non-negligible portion of individuals with an infection time significantly longer than the mean, which may have implications for when HIV tests should be performed.
2112.10730
Nils Winter
Nils R. Winter, Ramona Leenings, Jan Ernsting, Kelvin Sarink, Lukas Fisch, Daniel Emden, Julian Blanke, Janik Goltermann, Nils Opel, Carlotta Barkhau, Susanne Meinert, Katharina Dohm, Jonathan Repple, Marco Mauritz, Marius Gruber, Elisabeth J. Leehr, Dominik Grotegerd, Ronny Redlich, Andreas Jansen, Igor Nenadic, Markus N\"othen, Andreas Forstner, Marcella Rietschel, Joachim Gro{\ss}, Jochen Bauer, Walter Heindel, Till Andlauer, Simon Eickhoff, Tilo Kircher, Udo Dannlowski, Tim Hahn
More Alike than Different: Quantifying Deviations of Brain Structure and Function in Major Depressive Disorder across Neuroimaging Modalities
12 pages, 3 figures
null
null
null
q-bio.NC q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Introduction: Identifying neurobiological differences between patients suffering from Major Depressive Disorder (MDD) and healthy individuals has been a mainstay of clinical neuroscience for decades. However, recent meta- and mega-analyses have raised concerns regarding the replicability and clinical relevance of brain alterations in depression. Methods: Here, we systematically investigate healthy controls and MDD patients across a comprehensive range of modalities including structural magnetic resonance imaging (MRI), diffusion tensor imaging, functional task-based and resting-state MRI under near-ideal conditions. To this end, we quantify the upper bounds of univariate effect sizes, predictive utility, and distributional dissimilarity in a fully harmonized cohort of N=1,809 participants. We compare the results to an MDD polygenic risk score (PRS) and environmental variables. Results: The upper bounds of the effect sizes range from partial eta squared = .004 to .017, distributions overlap between 89% and 95%, and classification accuracies range between 54% and 55% across neuroimaging modalities. This pattern remains virtually unchanged when considering only acutely or chronically depressed patients. Differences are comparable to those found for PRS, but substantially smaller than for environmental variables. Discussion: We provide a large-scale, multimodal analysis of univariate biological differences between MDD patients and controls and show that even under near-ideal conditions and for maximum biological differences, deviations are extremely small and similarity dominates. We sketch an agenda for a new focus of future research in biological psychiatry facilitating quantitative, theory-driven research, an emphasis on computational psychiatry and multivariate machine learning approaches, as well as the utilization of ecologically valid phenotyping.
[ { "created": "Mon, 20 Dec 2021 18:27:07 GMT", "version": "v1" } ]
2021-12-21
[ [ "Winter", "Nils R.", "" ], [ "Leenings", "Ramona", "" ], [ "Ernsting", "Jan", "" ], [ "Sarink", "Kelvin", "" ], [ "Fisch", "Lukas", "" ], [ "Emden", "Daniel", "" ], [ "Blanke", "Julian", "" ], [ "Goltermann", "Janik", "" ], [ "Opel", "Nils", "" ], [ "Barkhau", "Carlotta", "" ], [ "Meinert", "Susanne", "" ], [ "Dohm", "Katharina", "" ], [ "Repple", "Jonathan", "" ], [ "Mauritz", "Marco", "" ], [ "Gruber", "Marius", "" ], [ "Leehr", "Elisabeth J.", "" ], [ "Grotegerd", "Dominik", "" ], [ "Redlich", "Ronny", "" ], [ "Jansen", "Andreas", "" ], [ "Nenadic", "Igor", "" ], [ "Nöthen", "Markus", "" ], [ "Forstner", "Andreas", "" ], [ "Rietschel", "Marcella", "" ], [ "Groß", "Joachim", "" ], [ "Bauer", "Jochen", "" ], [ "Heindel", "Walter", "" ], [ "Andlauer", "Till", "" ], [ "Eickhoff", "Simon", "" ], [ "Kircher", "Tilo", "" ], [ "Dannlowski", "Udo", "" ], [ "Hahn", "Tim", "" ] ]
Introduction: Identifying neurobiological differences between patients suffering from Major Depressive Disorder (MDD) and healthy individuals has been a mainstay of clinical neuroscience for decades. However, recent meta- and mega-analyses have raised concerns regarding the replicability and clinical relevance of brain alterations in depression. Methods: Here, we systematically investigate healthy controls and MDD patients across a comprehensive range of modalities including structural magnetic resonance imaging (MRI), diffusion tensor imaging, functional task-based and resting-state MRI under near-ideal conditions. To this end, we quantify the upper bounds of univariate effect sizes, predictive utility, and distributional dissimilarity in a fully harmonized cohort of N=1,809 participants. We compare the results to an MDD polygenic risk score (PRS) and environmental variables. Results: The upper bounds of the effect sizes range from partial eta squared = .004 to .017, distributions overlap between 89% and 95%, and classification accuracies range between 54% and 55% across neuroimaging modalities. This pattern remains virtually unchanged when considering only acutely or chronically depressed patients. Differences are comparable to those found for PRS, but substantially smaller than for environmental variables. Discussion: We provide a large-scale, multimodal analysis of univariate biological differences between MDD patients and controls and show that even under near-ideal conditions and for maximum biological differences, deviations are extremely small and similarity dominates. We sketch an agenda for a new focus of future research in biological psychiatry facilitating quantitative, theory-driven research, an emphasis on computational psychiatry and multivariate machine learning approaches, as well as the utilization of ecologically valid phenotyping.
2112.12839
Srikanth Namuduri
Srikanth Namuduri, Prateek Mehta, Lise Barbe, Stephanie Lam, Zohreh Faghihmonzavi, Steve Finkbeiner, Shekhar Bhansali
Faster Deep Ensemble Averaging for Quantification of DNA Damage from Comet Assay Images With Uncertainty Estimates
null
null
null
null
q-bio.QM cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
Several neurodegenerative diseases involve the accumulation of cellular DNA damage. Comet assays are a popular way of estimating the extent of DNA damage. Current literature on the use of deep learning to quantify DNA damage presents an empirical approach to hyper-parameter optimization and does not include uncertainty estimates. Deep ensemble averaging is a standard approach to estimating uncertainty but it requires several iterations of network training, which makes it time-consuming. Here we present an approach to quantify the extent of DNA damage that combines deep learning with a rigorous and comprehensive method to optimize the hyper-parameters with the help of statistical tests. We also use an architecture that allows for a faster computation of deep ensemble averaging and performs statistical tests applicable to networks using transfer learning. We applied our approach to a comet assay dataset with more than 1300 images and achieved an $R^2$ of 0.84, where the output included the confidence interval for each prediction. The proposed architecture is an improvement over the current approaches since it speeds up the uncertainty estimation by 30X while being statistically more rigorous.
[ { "created": "Thu, 23 Dec 2021 20:48:28 GMT", "version": "v1" } ]
2021-12-28
[ [ "Namuduri", "Srikanth", "" ], [ "Mehta", "Prateek", "" ], [ "Barbe", "Lise", "" ], [ "Lam", "Stephanie", "" ], [ "Faghihmonzavi", "Zohreh", "" ], [ "Finkbeiner", "Steve", "" ], [ "Bhansali", "Shekhar", "" ] ]
Several neurodegenerative diseases involve the accumulation of cellular DNA damage. Comet assays are a popular way of estimating the extent of DNA damage. Current literature on the use of deep learning to quantify DNA damage presents an empirical approach to hyper-parameter optimization and does not include uncertainty estimates. Deep ensemble averaging is a standard approach to estimating uncertainty but it requires several iterations of network training, which makes it time-consuming. Here we present an approach to quantify the extent of DNA damage that combines deep learning with a rigorous and comprehensive method to optimize the hyper-parameters with the help of statistical tests. We also use an architecture that allows for a faster computation of deep ensemble averaging and performs statistical tests applicable to networks using transfer learning. We applied our approach to a comet assay dataset with more than 1300 images and achieved an $R^2$ of 0.84, where the output included the confidence interval for each prediction. The proposed architecture is an improvement over the current approaches since it speeds up the uncertainty estimation by 30X while being statistically more rigorous.
1402.1728
Chris Cotsapas
Boel Brynedal, Towfique Raj, Barbara E Stranger, Robert Bjornson, Benjamin M Neale, Benjamin F Voight, Chris Cotsapas
Cross-phenotype meta-analysis reveals large-scale trans-eQTLs mediating patterns of transcriptional co-regulation
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-sa/3.0/
Genetic variation affecting gene regulation is a central driver of phenotypic differences between individuals and can be used to uncover how biological processes are organized in a cell. Although detecting cis-eQTLs is now routine, trans-eQTLs have proven more challenging to find due to the modest variance explained and the multiple-testing burden of testing millions of SNPs for association with thousands of transcripts. Here, we successfully map trans-eQTLs with the complementary approach of looking for SNPs associated with the expression of multiple genes simultaneously. We find 732 trans-eQTLs that replicate across two continental populations; each trans-eQTL controls large groups of target transcripts (regulons), which are part of interacting networks controlled by transcription factors. We are thus able to uncover co-regulated gene sets and begin describing the cell circuitry of gene regulation.
[ { "created": "Fri, 7 Feb 2014 18:38:11 GMT", "version": "v1" } ]
2014-02-10
[ [ "Brynedal", "Boel", "" ], [ "Raj", "Towfique", "" ], [ "Stranger", "Barbara E", "" ], [ "Bjornson", "Robert", "" ], [ "Neale", "Benjamin M", "" ], [ "Voight", "Benjamin F", "" ], [ "Cotsapas", "Chris", "" ] ]
Genetic variation affecting gene regulation is a central driver of phenotypic differences between individuals and can be used to uncover how biological processes are organized in a cell. Although detecting cis-eQTLs is now routine, trans-eQTLs have proven more challenging to find due to the modest variance explained and the multiple-testing burden of testing millions of SNPs for association with thousands of transcripts. Here, we successfully map trans-eQTLs with the complementary approach of looking for SNPs associated with the expression of multiple genes simultaneously. We find 732 trans-eQTLs that replicate across two continental populations; each trans-eQTL controls large groups of target transcripts (regulons), which are part of interacting networks controlled by transcription factors. We are thus able to uncover co-regulated gene sets and begin describing the cell circuitry of gene regulation.
1912.07782
Jinyue Cui
Jinyue Cui
Process simulation and optimization of agro-systems by DNDC model
arXiv admin note: This submission has been removed by arXiv administrators because the submitter did not have the right to agree to the license at the time of submission
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many people still face hunger, and the global food shortage remains an urgent problem. Meanwhile, global warming is still severe. Therefore, we propose a simulation-based optimization approach for improving crop yield and reducing the greenhouse gas (GHG) emissions of agricultural systems. We simulated and verified the crop yield and carbon/nitrogen cycle with the Denitrification-Decomposition (DNDC) model. A set of empirical equations of the DNDC model was selected and implemented in gPROMS to obtain the optimal solution for fertilizer usage. A case study shows that the optimized framework improves crop yield by 18% when 72.42 kg N/ha of urea is used, while the GHG emissions of the system are reduced by 10%. The results show the necessity of optimal planning and use of fertilizer in agricultural systems.
[ { "created": "Fri, 13 Dec 2019 04:02:54 GMT", "version": "v1" }, { "created": "Fri, 27 Mar 2020 19:07:54 GMT", "version": "v2" } ]
2020-08-13
[ [ "Cui", "Jinyue", "" ] ]
Many people still face hunger, and the global food shortage remains an urgent problem. Meanwhile, global warming is still severe. Therefore, we propose a simulation-based optimization approach for improving crop yield and reducing the greenhouse gas (GHG) emissions of agricultural systems. We simulated and verified the crop yield and carbon/nitrogen cycle with the Denitrification-Decomposition (DNDC) model. A set of empirical equations of the DNDC model was selected and implemented in gPROMS to obtain the optimal solution for fertilizer usage. A case study shows that the optimized framework improves crop yield by 18% when 72.42 kg N/ha of urea is used, while the GHG emissions of the system are reduced by 10%. The results show the necessity of optimal planning and use of fertilizer in agricultural systems.
0709.2243
Alessandro Pelizzola
Pierpaolo Bruscolini, Alessandro Pelizzola and Marco Zamparo
Rate Determining Factors in Protein Model Structures
4 pages, 2 figures
Phys. Rev. Lett. 99, 038103 (2007)
10.1103/PhysRevLett.99.038103
null
q-bio.BM cond-mat.stat-mech
null
Previous research has shown a strong correlation of protein folding rates to the native state geometry, yet a complete explanation for this dependence is still lacking. Here we study the rate-geometry relationship with a simple statistical physics model, and focus on two classes of model geometries, representing ideal parallel and antiparallel structures. We find that the logarithm of the rate shows an almost perfect linear correlation with the "absolute contact order", but the slope depends on the particular class considered. We discuss these findings in the light of experimental results.
[ { "created": "Fri, 14 Sep 2007 09:48:22 GMT", "version": "v1" } ]
2007-09-17
[ [ "Bruscolini", "Pierpaolo", "" ], [ "Pelizzola", "Alessandro", "" ], [ "Zamparo", "Marco", "" ] ]
Previous research has shown a strong correlation of protein folding rates to the native state geometry, yet a complete explanation for this dependence is still lacking. Here we study the rate-geometry relationship with a simple statistical physics model, and focus on two classes of model geometries, representing ideal parallel and antiparallel structures. We find that the logarithm of the rate shows an almost perfect linear correlation with the "absolute contact order", but the slope depends on the particular class considered. We discuss these findings in the light of experimental results.
1508.03774
Mike Steel Prof.
Jamie V. de Jong, Jeanette C McLeod and Mike Steel
Neighbourhoods of phylogenetic trees: exact and asymptotic counts
41 pages
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A central theme in phylogenetics is the reconstruction and analysis of evolutionary trees from a given set of data. To determine the optimal search methods for reconstructing trees, it is crucial to understand the size and structure of the neighbourhoods of trees under tree rearrangement operations. The diameter and size of the immediate neighbourhood of a tree have been well studied; however, little is known about the number of trees at distance two, three or (more generally) $k$ from a given tree. In this paper we provide a number of exact and asymptotic results concerning these quantities, and identify some key aspects of tree shape that play a role in determining these quantities. We obtain several new results for two of the main tree rearrangement operations -- Nearest Neighbour Interchange and Subtree Prune and Regraft -- as well as for the Robinson-Foulds metric on trees.
[ { "created": "Sat, 15 Aug 2015 22:24:33 GMT", "version": "v1" } ]
2016-08-15
[ [ "de Jong", "Jamie V.", "" ], [ "McLeod", "Jeanette C", "" ], [ "Steel", "Mike", "" ] ]
A central theme in phylogenetics is the reconstruction and analysis of evolutionary trees from a given set of data. To determine the optimal search methods for reconstructing trees, it is crucial to understand the size and structure of the neighbourhoods of trees under tree rearrangement operations. The diameter and size of the immediate neighbourhood of a tree have been well studied; however, little is known about the number of trees at distance two, three or (more generally) $k$ from a given tree. In this paper we provide a number of exact and asymptotic results concerning these quantities, and identify some key aspects of tree shape that play a role in determining these quantities. We obtain several new results for two of the main tree rearrangement operations -- Nearest Neighbour Interchange and Subtree Prune and Regraft -- as well as for the Robinson-Foulds metric on trees.
2211.13712
Ulisse Ferrari
Gabriel Mahuas, Olivier Marre, Thierry Mora, Ulisse Ferrari
A small-correlation expansion to quantify information in noisy sensory systems
null
null
null
null
q-bio.NC cond-mat.dis-nn
http://creativecommons.org/licenses/by/4.0/
Neural networks encode information through their collective spiking activity in response to external stimuli. This population response is noisy and strongly correlated, with a complex interplay between correlations induced by the stimulus and correlations caused by shared noise. Understanding how these correlations affect information transmission has so far been limited to pairs or small groups of neurons, because the curse of dimensionality impedes the evaluation of mutual information in larger populations. Here we develop a small-correlation expansion to compute the stimulus information carried by a large population of neurons, yielding interpretable analytical expressions in terms of the neurons' firing rates and pairwise correlations. We validate the approximation on synthetic data and demonstrate its applicability to electrophysiological recordings in the vertebrate retina, allowing us to quantify the effects of noise correlations between neurons and of memory in single neurons.
[ { "created": "Thu, 24 Nov 2022 17:00:32 GMT", "version": "v1" } ]
2022-11-28
[ [ "Mahuas", "Gabriel", "" ], [ "Marre", "Olivier", "" ], [ "Mora", "Thierry", "" ], [ "Ferrari", "Ulisse", "" ] ]
Neural networks encode information through their collective spiking activity in response to external stimuli. This population response is noisy and strongly correlated, with a complex interplay between correlations induced by the stimulus and correlations caused by shared noise. Understanding how these correlations affect information transmission has so far been limited to pairs or small groups of neurons, because the curse of dimensionality impedes the evaluation of mutual information in larger populations. Here we develop a small-correlation expansion to compute the stimulus information carried by a large population of neurons, yielding interpretable analytical expressions in terms of the neurons' firing rates and pairwise correlations. We validate the approximation on synthetic data and demonstrate its applicability to electrophysiological recordings in the vertebrate retina, allowing us to quantify the effects of noise correlations between neurons and of memory in single neurons.
2108.02066
Alejandro Tabas
Alejandro Tabas, Stefan Kiebel, Michael Marxen, and Katharina von Kriegstein
Fast frequency modulation is encoded according to the listener expectations in the human subcortical auditory pathway
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-sa/4.0/
Expectations aid and bias our perception. In speech, expected words are easier to recognise than unexpected words, particularly in noisy environments, and incorrect expectations can make us misunderstand our conversational partner. Expectations are combined with the output from the sensory pathways to form representations of speech in the cerebral cortex. However, it is unclear whether expectations are propagated further down to subcortical structures to aid the encoding of the basic dynamic constituent of speech: fast frequency-modulation (FM). Fast FM-sweeps are the basic invariant constituent of consonants, and their correct encoding is fundamental for speech recognition. Here we tested the hypothesis that subjective expectations drive the encoding of fast FM-sweeps characteristic of speech in the human subcortical auditory pathway. We used fMRI to measure neural responses in the human auditory midbrain (inferior colliculus) and thalamus (medial geniculate body). Participants listened to sequences of FM-sweeps for which they held different expectations based on the task instructions. We found robust evidence that the responses in auditory midbrain and thalamus encode the difference between the acoustic input and the subjective expectations of the listener. The results indicate that FM-sweeps are already encoded at the level of the human auditory midbrain and that encoding is mainly driven by subjective expectations. We conclude that the subcortical auditory pathway is integrated in the cortical network of predictive speech processing and that expectations are used to optimise the encoding of even the most basic acoustic constituents of speech.
[ { "created": "Wed, 4 Aug 2021 13:48:51 GMT", "version": "v1" } ]
2021-08-05
[ [ "Tabas", "Alejandro", "" ], [ "Kiebel", "Stefan", "" ], [ "Marxen", "Michael", "" ], [ "von Kriegstein", "Katharina", "" ] ]
Expectations aid and bias our perception. In speech, expected words are easier to recognise than unexpected words, particularly in noisy environments, and incorrect expectations can make us misunderstand our conversational partner. Expectations are combined with the output from the sensory pathways to form representations of speech in the cerebral cortex. However, it is unclear whether expectations are propagated further down to subcortical structures to aid the encoding of the basic dynamic constituent of speech: fast frequency-modulation (FM). Fast FM-sweeps are the basic invariant constituent of consonants, and their correct encoding is fundamental for speech recognition. Here we tested the hypothesis that subjective expectations drive the encoding of fast FM-sweeps characteristic of speech in the human subcortical auditory pathway. We used fMRI to measure neural responses in the human auditory midbrain (inferior colliculus) and thalamus (medial geniculate body). Participants listened to sequences of FM-sweeps for which they held different expectations based on the task instructions. We found robust evidence that the responses in auditory midbrain and thalamus encode the difference between the acoustic input and the subjective expectations of the listener. The results indicate that FM-sweeps are already encoded at the level of the human auditory midbrain and that encoding is mainly driven by subjective expectations. We conclude that the subcortical auditory pathway is integrated in the cortical network of predictive speech processing and that expectations are used to optimise the encoding of even the most basic acoustic constituents of speech.
1304.2266
Mark Rowan
Mark Rowan and Samuel Neymotin
Synaptic Scaling Balances Learning in a Spiking Model of Neocortex
10 pages
M. Rowan and S. Neymotin. Synaptic scaling balances learning in a spiking model of neocortex. In M. Tomassini et al., eds, 11th Int. Conf. Adaptive and Natural Comp. Algorithms (ICANNGA), LNCS vol. 7824, pp. 20-29, Lausanne, 2013. Springer
null
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning in the brain requires complementary mechanisms: potentiation and activity-dependent homeostatic scaling. We introduce synaptic scaling to a biologically-realistic spiking model of neocortex which can learn changes in oscillatory rhythms using STDP, and show that scaling is necessary to balance both positive and negative changes in input from potentiation and atrophy. We discuss some of the issues that arise when considering synaptic scaling in such a model, and show that scaling regulates activity whilst allowing learning to remain unaltered.
[ { "created": "Mon, 8 Apr 2013 16:54:24 GMT", "version": "v1" } ]
2013-04-09
[ [ "Rowan", "Mark", "" ], [ "Neymotin", "Samuel", "" ] ]
Learning in the brain requires complementary mechanisms: potentiation and activity-dependent homeostatic scaling. We introduce synaptic scaling to a biologically-realistic spiking model of neocortex which can learn changes in oscillatory rhythms using STDP, and show that scaling is necessary to balance both positive and negative changes in input from potentiation and atrophy. We discuss some of the issues that arise when considering synaptic scaling in such a model, and show that scaling regulates activity whilst allowing learning to remain unaltered.
1504.00374
Alan Rogers
Alan R. Rogers
Rate of Adaptive Evolution under Blending Inheritance
4 pages, no figures; relevant to the history of biology
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
In a population of size N, adaptive evolution is 2N times faster under Mendelian inheritance than the rate implied by Victorian theories of heredity and evolution.
[ { "created": "Wed, 1 Apr 2015 20:15:35 GMT", "version": "v1" }, { "created": "Fri, 18 Dec 2015 16:23:10 GMT", "version": "v2" }, { "created": "Thu, 28 Jan 2016 20:42:22 GMT", "version": "v3" }, { "created": "Sun, 28 Feb 2021 17:42:25 GMT", "version": "v4" }, { "created": "Sat, 31 Jul 2021 20:22:49 GMT", "version": "v5" } ]
2021-08-03
[ [ "Rogers", "Alan R.", "" ] ]
In a population of size N, adaptive evolution is 2N times faster under Mendelian inheritance than the rate implied by Victorian theories of heredity and evolution.
1210.1095
Francesco Vezzi
Francesco Vezzi, Giuseppe Narzisi and Bud Mishra
Reevaluating Assembly Evaluations with Feature Response Curves: GAGE and Assemblathons
Submitted to PLoS One. Supplementary material available at http://www.nada.kth.se/~vezzi/publications/supplementary.pdf and http://cs.nyu.edu/mishra/PUBLICATIONS/12.supplementaryFRC.pdf
null
10.1371/journal.pone.0052210
null
q-bio.GN cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In just the last decade, a multitude of bio-technologies and software pipelines have emerged to revolutionize genomics. To further their central goal, they aim to accelerate and improve the quality of de novo whole-genome assembly starting from short DNA reads. However, the performance of each of these tools is contingent on the length and quality of the sequencing data, the structure and complexity of the genome sequence, and the resolution and quality of long-range information. Furthermore, in the absence of any metric that captures the most fundamental "features" of a high-quality assembly, there is no obvious recipe for users to select the most desirable assembler/assembly. International competitions such as Assemblathons or GAGE tried to identify the best assembler(s) and their features. Somewhat circuitously, the only available approach to gauge de novo assemblies and assemblers relies solely on the availability of a high-quality fully assembled reference genome sequence. Still worse, reference-guided evaluations are often difficult to analyze and lead to conclusions that are difficult to interpret. In this paper, we circumvent many of these issues by relying upon a tool, dubbed FRCbam, which is capable of evaluating de novo assemblies from the read-layouts even when no reference exists. We extend the FRCurve approach to cases where layout information may have been obscured, as is true in many de Bruijn-graph-based algorithms. As a by-product, FRCurve now expands its applicability to a much wider class of assemblers -- thus, identifying higher-quality members of this group, their inter-relations as well as sensitivity to carefully selected features, with or without the support of a reference sequence or layout for the reads. The paper concludes by reevaluating several recently conducted assembly competitions and the datasets that have resulted from them.
[ { "created": "Wed, 3 Oct 2012 13:02:30 GMT", "version": "v1" } ]
2015-06-11
[ [ "Vezzi", "Francesco", "" ], [ "Narzisi", "Giuseppe", "" ], [ "Mishra", "Bud", "" ] ]
In just the last decade, a multitude of bio-technologies and software pipelines have emerged to revolutionize genomics. To further their central goal, they aim to accelerate and improve the quality of de novo whole-genome assembly starting from short DNA reads. However, the performance of each of these tools is contingent on the length and quality of the sequencing data, the structure and complexity of the genome sequence, and the resolution and quality of long-range information. Furthermore, in the absence of any metric that captures the most fundamental "features" of a high-quality assembly, there is no obvious recipe for users to select the most desirable assembler/assembly. International competitions such as Assemblathons or GAGE tried to identify the best assembler(s) and their features. Somewhat circuitously, the only available approach to gauge de novo assemblies and assemblers relies solely on the availability of a high-quality fully assembled reference genome sequence. Still worse, reference-guided evaluations are often difficult to analyze and lead to conclusions that are difficult to interpret. In this paper, we circumvent many of these issues by relying upon a tool, dubbed FRCbam, which is capable of evaluating de novo assemblies from the read-layouts even when no reference exists. We extend the FRCurve approach to cases where layout information may have been obscured, as is true in many de Bruijn-graph-based algorithms. As a by-product, FRCurve now expands its applicability to a much wider class of assemblers -- thus, identifying higher-quality members of this group, their inter-relations as well as sensitivity to carefully selected features, with or without the support of a reference sequence or layout for the reads. The paper concludes by reevaluating several recently conducted assembly competitions and the datasets that have resulted from them.
1911.07322
Issaka Haruna Mr
Issaka Haruna, Oluwole Daniel Makinde and David Mwangi Theuri
Modelling of Bad Biomass Invasion of a Food Chain Ecosystem with Optimal Control
20 pages, 12 figures
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we provide a model to describe the dynamics of the species of an ecosystem after it has been invaded by a bad competing species. The competing species invades the native plants for nutrition, carbon dioxide and space. This affects the population of the native species of the ecosystem. The effect of the bad biomass on the ecosystem is examined by considering its equilibrium points as well as their stability. An optimal control system is developed by using Pontryagin's maximum principle to construct a Hamiltonian function which minimizes the spread of the bad biomass. Numerical simulations were conducted to analyze the results. It was established that the intrinsic growth rate r of the good biomass was responsible for the sustenance and continuous survival of the ecosystem. If the growth rate of the good biomass is increased, the other species show positive growth. In addition to the growth rate of the good biomass, the death rates of both fish and birds also affect the state of the ecosystem. Simulation results also revealed that the introduction of the invasive bad species biomass affected the growth of the good biomass, as a result of the competition for nutrition, carbon dioxide and space between the good and the bad biomasses. After implementing the controls, simulation results showed an improvement in the growth of the good biomass and the fish and bird populations, whereas the growth of the bad biomass declined.
[ { "created": "Sun, 17 Nov 2019 19:39:55 GMT", "version": "v1" } ]
2019-11-19
[ [ "Haruna", "Issaka", "" ], [ "Makinde", "Oluwole Daniel", "" ], [ "Theuri", "David Mwangi", "" ] ]
In this paper we provide a model to describe the dynamics of the species of an ecosystem after it has been invaded by a bad competing species. The competing species invades the native plants for nutrition, carbon dioxide and space. This affects the population of the native species of the ecosystem. The effect of the bad biomass on the ecosystem is examined by considering its equilibrium points as well as their stability. An optimal control system is developed by using Pontryagin's maximum principle to construct a Hamiltonian function which minimizes the spread of the bad biomass. Numerical simulations were conducted to analyze the results. It was established that the intrinsic growth rate r of the good biomass was responsible for the sustenance and continuous survival of the ecosystem. If the growth rate of the good biomass is increased, the other species show positive growth. In addition to the growth rate of the good biomass, the death rates of both fish and birds also affect the state of the ecosystem. Simulation results also revealed that the introduction of the invasive bad species biomass affected the growth of the good biomass, as a result of the competition for nutrition, carbon dioxide and space between the good and the bad biomasses. After implementing the controls, simulation results showed an improvement in the growth of the good biomass and the fish and bird populations, whereas the growth of the bad biomass declined.
1203.5885
Korbinian Strimmer
Sebastian Gibb and Korbinian Strimmer
MALDIquant: a versatile R package for the analysis of mass spectrometry data
5 pages, 1 figure
Bioinformatics 2012, Vol. 28, Issue 17, 2270-2271
10.1093/bioinformatics/bts447
null
q-bio.GN stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Summary: MALDIquant is an R package providing a complete and modular analysis pipeline for quantitative analysis of mass spectrometry data. MALDIquant is specifically designed with application in clinical diagnostics in mind and implements sophisticated routines for importing raw data, preprocessing, non-linear peak alignment, and calibration. It also handles technical replicates as well as spectra with unequal resolution. Availability: MALDIquant and its associated R packages readBrukerFlexData and readMzXmlData are freely available from the R archive CRAN (http://cran.r-project.org). The software is distributed under the GNU General Public License (version 3 or later) and is accompanied by example files and data. Additional documentation is available from http://strimmerlab.org/software/maldiquant/.
[ { "created": "Tue, 27 Mar 2012 07:45:50 GMT", "version": "v1" }, { "created": "Wed, 6 Jun 2012 08:23:05 GMT", "version": "v2" }, { "created": "Wed, 4 Jul 2012 13:14:00 GMT", "version": "v3" } ]
2012-08-30
[ [ "Gibb", "Sebastian", "" ], [ "Strimmer", "Korbinian", "" ] ]
Summary: MALDIquant is an R package providing a complete and modular analysis pipeline for quantitative analysis of mass spectrometry data. MALDIquant is specifically designed with application in clinical diagnostics in mind and implements sophisticated routines for importing raw data, preprocessing, non-linear peak alignment, and calibration. It also handles technical replicates as well as spectra with unequal resolution. Availability: MALDIquant and its associated R packages readBrukerFlexData and readMzXmlData are freely available from the R archive CRAN (http://cran.r-project.org). The software is distributed under the GNU General Public License (version 3 or later) and is accompanied by example files and data. Additional documentation is available from http://strimmerlab.org/software/maldiquant/.
2112.10093
Martin Weigt
Juan Rodriguez-Rivas, Giancarlo Croce, Maureen Muscat, Martin Weigt
Epistatic models predict mutable sites in SARS-CoV-2 proteins and epitopes
21 pages + supplementary information
null
10.1073/pnas.2113118119
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The emergence of new variants of SARS-CoV-2 is a major concern given their potential impact on the transmissibility and pathogenicity of the virus as well as the efficacy of therapeutic interventions. Here, we predict the mutability of all positions in SARS-CoV-2 protein domains to forecast the appearance of unseen variants. Using sequence data from other coronaviruses, pre-existing to SARS-CoV-2, we build statistical models that capture not only amino-acid conservation but also more complex patterns resulting from epistasis. We show that these models are notably superior to conservation profiles in estimating the already observable SARS-CoV-2 variability. In the receptor binding domain of the spike protein, we observe that the predicted mutability correlates well with experimental measures of protein stability and that both are reliable mutability predictors (ROC AUC ~0.8). Most interestingly, we observe an increasing agreement between our model and the observed variability as more data become available over time, proving the anticipatory capacity of our model. When combined with data concerning the immune response, our approach identifies positions where current variants of concern are highly overrepresented. These results could assist studies on viral evolution, future viral outbreaks and, in particular, guide the exploration and anticipation of potentially harmful future SARS-CoV-2 variants.
[ { "created": "Sun, 19 Dec 2021 09:20:16 GMT", "version": "v1" } ]
2022-05-11
[ [ "Rodriguez-Rivas", "Juan", "" ], [ "Croce", "Giancarlo", "" ], [ "Muscat", "Maureen", "" ], [ "Weigt", "Martin", "" ] ]
The emergence of new variants of SARS-CoV-2 is a major concern given their potential impact on the transmissibility and pathogenicity of the virus as well as the efficacy of therapeutic interventions. Here, we predict the mutability of all positions in SARS-CoV-2 protein domains to forecast the appearance of unseen variants. Using sequence data from other coronaviruses, pre-existing to SARS-CoV-2, we build statistical models that capture not only amino-acid conservation but also more complex patterns resulting from epistasis. We show that these models are notably superior to conservation profiles in estimating the already observable SARS-CoV-2 variability. In the receptor binding domain of the spike protein, we observe that the predicted mutability correlates well with experimental measures of protein stability and that both are reliable mutability predictors (ROC AUC ~0.8). Most interestingly, we observe an increasing agreement between our model and the observed variability as more data become available over time, proving the anticipatory capacity of our model. When combined with data concerning the immune response, our approach identifies positions where current variants of concern are highly overrepresented. These results could assist studies on viral evolution, future viral outbreaks and, in particular, guide the exploration and anticipation of potentially harmful future SARS-CoV-2 variants.
0803.0888
Sergey Petoukhov
Sergey V. Petoukhov (Department of Biomechanics, Mechanical Engineering Research Institute of the Russian Academy of Sciences)
Matrix genetics, part 1: permutations of positions in triplets and symmetries of genetic matrices
34 pages; 25 figures; added materials and corrections in section 11
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Kronecker family of the genetic matrices is investigated, which is based on the genetic matrix [C T; A G], where C, T, A, G are the letters of the genetic alphabet. The matrix [C T; A G] in the second Kronecker power is the (4*4)-matrix of 16 duplets. The matrix [C T; A G] in the third Kronecker power is the (8*8)-matrix of 64 triplets. It is significant that peculiarities of the degeneracy of the genetic code are reflected in the symmetrical black-and-white mosaic of these genetic matrices. The article represents interesting mathematical properties of these mosaic matrices, which are connected with positional permutations inside duplets and triplets; with projector operators; with unitary matrices and cyclic groups, etc. Fractal genetic nets are proposed as a new effective tool to study long nucleotide sequences. Some results about revealing new symmetry principles of long nucleotide sequences are described.
[ { "created": "Thu, 6 Mar 2008 15:15:44 GMT", "version": "v1" }, { "created": "Mon, 29 Mar 2010 14:51:07 GMT", "version": "v2" }, { "created": "Thu, 15 Apr 2010 08:55:52 GMT", "version": "v3" }, { "created": "Wed, 9 May 2012 14:28:09 GMT", "version": "v4" }, { "created": "Mon, 5 Nov 2012 09:14:07 GMT", "version": "v5" }, { "created": "Wed, 16 Jan 2013 06:14:44 GMT", "version": "v6" } ]
2013-01-17
[ [ "Petoukhov", "Sergey V.", "", "Department of Biomechanics, Mechanical\n Engineering Research Institute of the Russian Academy of Sciences" ] ]
The Kronecker family of the genetic matrices is investigated, which is based on the genetic matrix [C T; A G], where C, T, A, G are the letters of the genetic alphabet. The matrix [C T; A G] in the second Kronecker power is the (4*4)-matrix of 16 duplets. The matrix [C T; A G] in the third Kronecker power is the (8*8)-matrix of 64 triplets. It is significant that peculiarities of the degeneracy of the genetic code are reflected in the symmetrical black-and-white mosaic of these genetic matrices. The article represents interesting mathematical properties of these mosaic matrices, which are connected with positional permutations inside duplets and triplets; with projector operators; with unitary matrices and cyclic groups, etc. Fractal genetic nets are proposed as a new effective tool to study long nucleotide sequences. Some results about revealing new symmetry principles of long nucleotide sequences are described.
1312.6439
Michael Harvey
Michael G. Harvey, Brian Tilston Smith, Travis C. Glenn, Brant C. Faircloth, and Robb T. Brumfield
Sequence Capture Versus Restriction Site Associated DNA Sequencing for Phylogeography
4 Tables, 5 Supplemental Tables, 4 Figures
Systematic Biology 65: 910-924 (2016)
10.1093/sysbio/syw036
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genomic datasets generated with massively parallel sequencing methods have the potential to propel systematics in new and exciting directions, but selecting appropriate markers and methods is not straightforward. We applied two approaches with particular promise for systematics, restriction site associated DNA sequencing (RAD-Seq) and sequence capture (Seq-cap) of ultraconserved elements (UCEs), to the same set of samples from a non-model, Neotropical bird. We found that both RAD-Seq and Seq-cap produced genomic datasets containing thousands of loci and SNPs and that the inferred population assignments and species trees were concordant between datasets. However, model-based estimates of demographic parameters differed between datasets, particularly when we estimated the parameters using a method based on allele frequency spectra. The differences we observed may result from differences in assembly, alignment, and filtering of sequence data between methods, and our findings suggest that caution is warranted when using allele frequencies to estimate parameters from low-coverage sequencing data. We further explored the differences between methods using simulated Seq-cap- and RAD-Seq-like datasets. Analyses of simulated data suggest that increasing the number of loci from 500 to 5000 increased phylogenetic concordance factors and the accuracy and precision of demographic parameter estimates, but increasing the number of loci past 5000 resulted in minimal gains. Increasing locus length from 64 bp to 500 bp improved phylogenetic concordance factors and minimal gains were observed with loci longer than 500 bp, but locus length did not influence the accuracy and precision of demographic parameter estimates. We discuss our results relative to the diversity of data collection methods available, and we provide advice for harnessing next-generation sequencing for systematics research.
[ { "created": "Sun, 22 Dec 2013 23:02:34 GMT", "version": "v1" } ]
2017-03-28
[ [ "Harvey", "Michael G.", "" ], [ "Smith", "Brian Tilston", "" ], [ "Glenn", "Travis C.", "" ], [ "Faircloth", "Brant C.", "" ], [ "Brumfield", "Robb T.", "" ] ]
Genomic datasets generated with massively parallel sequencing methods have the potential to propel systematics in new and exciting directions, but selecting appropriate markers and methods is not straightforward. We applied two approaches with particular promise for systematics, restriction site associated DNA sequencing (RAD-Seq) and sequence capture (Seq-cap) of ultraconserved elements (UCEs), to the same set of samples from a non-model, Neotropical bird. We found that both RAD-Seq and Seq-cap produced genomic datasets containing thousands of loci and SNPs and that the inferred population assignments and species trees were concordant between datasets. However, model-based estimates of demographic parameters differed between datasets, particularly when we estimated the parameters using a method based on allele frequency spectra. The differences we observed may result from differences in assembly, alignment, and filtering of sequence data between methods, and our findings suggest that caution is warranted when using allele frequencies to estimate parameters from low-coverage sequencing data. We further explored the differences between methods using simulated Seq-cap- and RAD-Seq-like datasets. Analyses of simulated data suggest that increasing the number of loci from 500 to 5000 increased phylogenetic concordance factors and the accuracy and precision of demographic parameter estimates, but increasing the number of loci past 5000 resulted in minimal gains. Increasing locus length from 64 bp to 500 bp improved phylogenetic concordance factors and minimal gains were observed with loci longer than 500 bp, but locus length did not influence the accuracy and precision of demographic parameter estimates. We discuss our results relative to the diversity of data collection methods available, and we provide advice for harnessing next-generation sequencing for systematics research.
1112.5506
Joshua Vogelstein
Joshua T. Vogelstein and Carey E. Priebe
Shuffled Graph Classification: Theory and Connectome Applications
12 pages, 1 figure
null
null
null
q-bio.QM math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a formalism to address statistical pattern recognition of graph valued data. Of particular interest is the case of all graphs having the same number of uniquely labeled vertices. When the vertex labels are latent, such graphs are called shuffled graphs. Our formalism provides insight to trivially answer a number of open statistical questions including: (i) under what conditions does shuffling the vertices degrade classification performance and (ii) do universally consistent graph classifiers exist? The answers to these questions lead to practical heuristic algorithms with state-of-the-art finite sample performance, in agreement with our theoretical asymptotics.
[ { "created": "Fri, 23 Dec 2011 02:47:31 GMT", "version": "v1" }, { "created": "Tue, 16 Oct 2012 09:57:12 GMT", "version": "v2" } ]
2012-10-17
[ [ "Vogelstein", "Joshua T.", "" ], [ "Priebe", "Carey E.", "" ] ]
We develop a formalism to address statistical pattern recognition of graph valued data. Of particular interest is the case of all graphs having the same number of uniquely labeled vertices. When the vertex labels are latent, such graphs are called shuffled graphs. Our formalism provides insight to trivially answer a number of open statistical questions including: (i) under what conditions does shuffling the vertices degrade classification performance and (ii) do universally consistent graph classifiers exist? The answers to these questions lead to practical heuristic algorithms with state-of-the-art finite sample performance, in agreement with our theoretical asymptotics.
2007.01378
Enzo Tagliazucchi
Yonatan Sanz Perl, Hern\'an Boccacio, Ignacio P\'erez-Ipi\~na, Federico Zamberl\'an, Helmut Laufs, Morten Kringelbach, Gustavo Deco, Enzo Tagliazucchi
Generative embeddings of brain collective dynamics using variational autoencoders
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of encoding pairwise correlations between coupled dynamical systems in a low-dimensional latent space based on few distinct observations. We used variational autoencoders (VAE) to embed temporal correlations between coupled nonlinear oscillators that model brain states in the wake-sleep cycle into a two-dimensional manifold. Training a VAE with samples generated using two different parameter combinations resulted in an embedding that represented the whole repertoire of collective dynamics, as well as the topology of the underlying connectivity network. We first followed this approach to infer the trajectory of brain states measured from wakefulness to deep sleep from the two endpoints of this trajectory; next, we showed that the same architecture was capable of representing the pairwise correlations of generic Landau-Stuart oscillators coupled by complex network topology.
[ { "created": "Thu, 2 Jul 2020 20:43:52 GMT", "version": "v1" } ]
2020-07-06
[ [ "Perl", "Yonatan Sanz", "" ], [ "Boccacio", "Hernán", "" ], [ "Pérez-Ipiña", "Ignacio", "" ], [ "Zamberlán", "Federico", "" ], [ "Laufs", "Helmut", "" ], [ "Kringelbach", "Morten", "" ], [ "Deco", "Gustavo", "" ], [ "Tagliazucchi", "Enzo", "" ] ]
We consider the problem of encoding pairwise correlations between coupled dynamical systems in a low-dimensional latent space based on few distinct observations. We used variational autoencoders (VAE) to embed temporal correlations between coupled nonlinear oscillators that model brain states in the wake-sleep cycle into a two-dimensional manifold. Training a VAE with samples generated using two different parameter combinations resulted in an embedding that represented the whole repertoire of collective dynamics, as well as the topology of the underlying connectivity network. We first followed this approach to infer the trajectory of brain states measured from wakefulness to deep sleep from the two endpoints of this trajectory; next, we showed that the same architecture was capable of representing the pairwise correlations of generic Landau-Stuart oscillators coupled by complex network topology.
2402.05543
Aida Calvi\~no
Aida Calvi\~no and Almudena Moreno-Ribera and Silvia Pineda
Machine learning applied to omics data
Part of the book "Statistical Methods at the Forefront of Biomedical Advances" published by Springer Cham
null
10.1007/978-3-031-32729-2_2
null
q-bio.GN cs.LG stat.AP
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this chapter we illustrate the use of some Machine Learning techniques in the context of omics data. More precisely, we review and evaluate the use of Random Forest and Penalized Multinomial Logistic Regression for integrative analysis of genomics and immunomics in pancreatic cancer. Furthermore, we propose the use of association rules with predictive purposes to overcome the low predictive power of the previously mentioned models. Finally, we apply the reviewed methods to a real data set from TCGA made of 107 tumoral pancreatic samples and 117,486 germline SNPs, showing the good performance of the proposed methods to predict the immunological infiltration in pancreatic cancer.
[ { "created": "Thu, 8 Feb 2024 10:22:45 GMT", "version": "v1" } ]
2024-02-09
[ [ "Calviño", "Aida", "" ], [ "Moreno-Ribera", "Almudena", "" ], [ "Pineda", "Silvia", "" ] ]
In this chapter we illustrate the use of some Machine Learning techniques in the context of omics data. More precisely, we review and evaluate the use of Random Forest and Penalized Multinomial Logistic Regression for integrative analysis of genomics and immunomics in pancreatic cancer. Furthermore, we propose the use of association rules with predictive purposes to overcome the low predictive power of the previously mentioned models. Finally, we apply the reviewed methods to a real data set from TCGA made of 107 tumoral pancreatic samples and 117,486 germline SNPs, showing the good performance of the proposed methods to predict the immunological infiltration in pancreatic cancer.
2204.09291
Milena Pavlovi\'c
Milena Pavlovi\'c, Ghadi S. Al Hajj, Chakravarthi Kanduri, Johan Pensar, Mollie Wood, Ludvig M. Sollid, Victor Greiff, Geir Kjetil Sandve
Improving generalization of machine learning-identified biomarkers with causal modeling: an investigation into immune receptor diagnostics
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Machine learning is increasingly used to discover diagnostic and prognostic biomarkers from high-dimensional molecular data. However, a variety of factors related to experimental design may affect the ability to learn generalizable and clinically applicable diagnostics. Here, we argue that a causal perspective improves the identification of these challenges and formalizes their relation to the robustness and generalization of machine learning-based diagnostics. To make for a concrete discussion, we focus on a specific, recently established high-dimensional biomarker - adaptive immune receptor repertoires (AIRRs). Through simulations, we illustrate how major biological and experimental factors of the AIRR domain may influence the learned biomarkers. In conclusion, we argue that causal modeling improves machine learning-based biomarker robustness by identifying stable relations between variables and by guiding the adjustment of the relations and variables that vary between populations.
[ { "created": "Wed, 20 Apr 2022 08:15:54 GMT", "version": "v1" }, { "created": "Mon, 3 Apr 2023 09:03:07 GMT", "version": "v2" } ]
2023-04-04
[ [ "Pavlović", "Milena", "" ], [ "Hajj", "Ghadi S. Al", "" ], [ "Kanduri", "Chakravarthi", "" ], [ "Pensar", "Johan", "" ], [ "Wood", "Mollie", "" ], [ "Sollid", "Ludvig M.", "" ], [ "Greiff", "Victor", "" ], [ "Sandve", "Geir Kjetil", "" ] ]
Machine learning is increasingly used to discover diagnostic and prognostic biomarkers from high-dimensional molecular data. However, a variety of factors related to experimental design may affect the ability to learn generalizable and clinically applicable diagnostics. Here, we argue that a causal perspective improves the identification of these challenges and formalizes their relation to the robustness and generalization of machine learning-based diagnostics. To make for a concrete discussion, we focus on a specific, recently established high-dimensional biomarker - adaptive immune receptor repertoires (AIRRs). Through simulations, we illustrate how major biological and experimental factors of the AIRR domain may influence the learned biomarkers. In conclusion, we argue that causal modeling improves machine learning-based biomarker robustness by identifying stable relations between variables and by guiding the adjustment of the relations and variables that vary between populations.
0711.2061
Igor M. Suslov
I. M. Suslov (P.L.Kapitza Institute for Physical Problems, Moscow, Russia)
Computer Model of a "Sense of Humour". II. Realization in Neural Networks
13 pages, 5 figures included; continuation of this series to appear
Biofizika SSSR 37, 325 (1992) [Biophysics 37, 249 (1992)]
null
null
q-bio.NC cs.AI
null
The computer realization of a "sense of humour" requires the creation of an algorithm for solving the "linguistic problem", i.e. the problem of recognizing a continuous sequence of polysemantic images. Such an algorithm may be realized in the Hopfield model of a neural network after its proper modification.
[ { "created": "Tue, 13 Nov 2007 20:15:10 GMT", "version": "v1" } ]
2007-11-27
[ [ "Suslov", "I. M.", "", "P.L.Kapitza Institute for Physical Problems, Moscow,\n Russia" ] ]
The computer realization of a "sense of humour" requires the creation of an algorithm for solving the "linguistic problem", i.e. the problem of recognizing a continuous sequence of polysemantic images. Such an algorithm may be realized in the Hopfield model of a neural network after its proper modification.
0712.1219
Francois Meyer
Francois G. Meyer and Greg J. Stephens
Locality and low-dimensions in the prediction of natural experience from fMRI
To appear in: Advances in Neural Information Processing Systems 20, Scholkopf B., Platt J. and Hofmann T. (Editors), MIT Press, 2008
null
null
null
q-bio.NC stat.ML
null
Functional Magnetic Resonance Imaging (fMRI) provides dynamical access into the complex functioning of the human brain, detailing the hemodynamic activity of thousands of voxels during hundreds of sequential time points. One approach towards illuminating the connection between fMRI and cognitive function is through decoding; how do the time series of voxel activities combine to provide information about internal and external experience? Here we seek models of fMRI decoding which are balanced between the simplicity of their interpretation and the effectiveness of their prediction. We use signals from a subject immersed in virtual reality to compare global and local methods of prediction applying both linear and nonlinear techniques of dimensionality reduction. We find that the prediction of complex stimuli is remarkably low-dimensional, saturating with less than 100 features. In particular, we build effective models based on the decorrelated components of cognitive activity in the classically-defined Brodmann areas. For some of the stimuli, the top predictive areas were surprisingly transparent, including Wernicke's area for verbal instructions, visual cortex for facial and body features, and visual-temporal regions for velocity. Direct sensory experience resulted in the most robust predictions, with the highest correlation ($c \sim 0.8$) between the predicted and experienced time series of verbal instructions. Techniques based on non-linear dimensionality reduction (Laplacian eigenmaps) performed similarly. The interpretability and relative simplicity of our approach provides a conceptual basis upon which to build more sophisticated techniques for fMRI decoding and offers a window into cognitive function during dynamic, natural experience.
[ { "created": "Fri, 7 Dec 2007 20:21:18 GMT", "version": "v1" }, { "created": "Sat, 12 Jan 2008 01:00:50 GMT", "version": "v2" } ]
2008-01-16
[ [ "Meyer", "Francois G.", "" ], [ "Stephens", "Greg J.", "" ] ]
Functional Magnetic Resonance Imaging (fMRI) provides dynamical access into the complex functioning of the human brain, detailing the hemodynamic activity of thousands of voxels during hundreds of sequential time points. One approach towards illuminating the connection between fMRI and cognitive function is through decoding; how do the time series of voxel activities combine to provide information about internal and external experience? Here we seek models of fMRI decoding which are balanced between the simplicity of their interpretation and the effectiveness of their prediction. We use signals from a subject immersed in virtual reality to compare global and local methods of prediction applying both linear and nonlinear techniques of dimensionality reduction. We find that the prediction of complex stimuli is remarkably low-dimensional, saturating with less than 100 features. In particular, we build effective models based on the decorrelated components of cognitive activity in the classically-defined Brodmann areas. For some of the stimuli, the top predictive areas were surprisingly transparent, including Wernicke's area for verbal instructions, visual cortex for facial and body features, and visual-temporal regions for velocity. Direct sensory experience resulted in the most robust predictions, with the highest correlation ($c \sim 0.8$) between the predicted and experienced time series of verbal instructions. Techniques based on non-linear dimensionality reduction (Laplacian eigenmaps) performed similarly. The interpretability and relative simplicity of our approach provides a conceptual basis upon which to build more sophisticated techniques for fMRI decoding and offers a window into cognitive function during dynamic, natural experience.
1811.02335
Richard Gerum
Achim Schilling, Richard Gerum, Patrick Krauss, Claus Metzner, Konstantin Tziridis, Holger Schulze
Objective estimation of sensory thresholds based on neurophysiological parameters
null
null
10.3389/fnins.2019.00481
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reliable determination of sensory thresholds is the holy grail of signal detection theory. However, there exists no gold standard for the estimation of thresholds based on neurophysiological parameters, although a reliable estimation method is crucial for both scientific investigations and clinical diagnosis. Whenever it is impossible to communicate with the subjects, as in studies with animals or neonates, thresholds have to be derived from neural recordings. In such cases, when the threshold is estimated based on neuronal measures, the standard approach is still to set the threshold subjectively to the value at which at least a "clear" neuronal signal is detectable. These measures are highly subjective, strongly depend on the noise, and fluctuate due to the low signal-to-noise ratio near the threshold. Here we show a novel method to reliably estimate physiological thresholds based on neurophysiological parameters. Using surrogate data, we demonstrate that fitting the responses to different stimulus intensities with a hard sigmoid function, in combination with subsampling, provides a robust threshold value as well as an accurate uncertainty estimate. This method has no systematic dependence on the noise and does not even require samples in the full dynamic range of the sensory system. It is universally applicable to all types of sensory systems, ranging from somatosensory stimulus processing in the cortex to auditory processing in the brain stem.
[ { "created": "Tue, 6 Nov 2018 13:07:32 GMT", "version": "v1" } ]
2019-10-01
[ [ "Schilling", "Achim", "" ], [ "Gerum", "Richard", "" ], [ "Krauss", "Patrick", "" ], [ "Metzner", "Claus", "" ], [ "Tziridis", "Konstantin", "" ], [ "Schulze", "Holger", "" ] ]
Reliable determination of sensory thresholds is the holy grail of signal detection theory. However, there exists no gold standard for the estimation of thresholds based on neurophysiological parameters, although a reliable estimation method is crucial for both scientific investigations and clinical diagnosis. Whenever it is impossible to communicate with the subjects, as in studies with animals or neonates, thresholds have to be derived from neural recordings. In such cases, when the threshold is estimated based on neuronal measures, the standard approach is still to set the threshold subjectively to the value at which at least a "clear" neuronal signal is detectable. These measures are highly subjective, strongly depend on the noise, and fluctuate due to the low signal-to-noise ratio near the threshold. Here we show a novel method to reliably estimate physiological thresholds based on neurophysiological parameters. Using surrogate data, we demonstrate that fitting the responses to different stimulus intensities with a hard sigmoid function, in combination with subsampling, provides a robust threshold value as well as an accurate uncertainty estimate. This method has no systematic dependence on the noise and does not even require samples in the full dynamic range of the sensory system. It is universally applicable to all types of sensory systems, ranging from somatosensory stimulus processing in the cortex to auditory processing in the brain stem.
1005.3887
Fabio Pichierri
Fabio Pichierri
The electronic structure and dipole moment of charybdotoxin, a scorpion venom peptide with K+ channel blocking activity
16 pages, 6 figures
Computational and Theoretical Chemistry 963 (2011) 384-393
10.1016/j.comptc.2010.11.003
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The electronic structure of charybdotoxin (ChTX), a scorpion venom peptide that is known to act as a potassium channel blocker, is investigated with the aid of quantum mechanical calculations. The dipole moment vector (145 D) of ChTX can be steered by the full-length KcsA potassium channel's macrodipole (403 D), thereby assuming the proper orientation before binding the ion channel on the cell surface. The localization of the frontier orbitals of ChTX has been revealed for the first time. The HOMO is localized on Trp14, while the three lowest-energy MOs (LUMO, LUMO+1, and LUMO+2) are localized on the three disulfide bonds that characterize this peptide. An effective way to engineer the HOMO-LUMO (H-L) gap of ChTX is to replace its Trp14 residue with Ala14, whereas deletion of the LUMO-associated disulfide bond with the insertion of a pair of L-alpha-aminobutyric acid residues does not affect the H-L energy gap.
[ { "created": "Fri, 21 May 2010 05:19:10 GMT", "version": "v1" } ]
2015-03-17
[ [ "Pichierri", "Fabio", "" ] ]
The electronic structure of charybdotoxin (ChTX), a scorpion venom peptide that is known to act as a potassium channel blocker, is investigated with the aid of quantum mechanical calculations. The dipole moment vector (145 D) of ChTX can be steered by the full-length KcsA potassium channel's macrodipole (403 D), thereby assuming the proper orientation before binding the ion channel on the cell surface. The localization of the frontier orbitals of ChTX has been revealed for the first time. The HOMO is localized on Trp14, while the three lowest-energy MOs (LUMO, LUMO+1, and LUMO+2) are localized on the three disulfide bonds that characterize this peptide. An effective way to engineer the HOMO-LUMO (H-L) gap of ChTX is to replace its Trp14 residue with Ala14, whereas deletion of the LUMO-associated disulfide bond with the insertion of a pair of L-alpha-aminobutyric acid residues does not affect the H-L energy gap.
1304.3546
Anandarup Bhadra
Anandarup Bhadra, Debottam Bhattacharjee, Manabi Paul and Anindita Bhadra
The Meat of the Matter: A thumb rule for scavenging dogs?
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Animals that scavenge in and around human localities need to utilize a broad range of resources. Preference for any one kind of food, under such circumstances, might be inefficient. Indian free-ranging dogs, Canis lupus familiaris, are scavengers that are heavily dependent on humans for sustaining their omnivorous diet. The current study suggests that, because of evolutionary load, these dogs, which are descendants of the decidedly carnivorous gray wolf, still retain a preference for meat though they live on carbohydrate-rich resources. The plasticity in their diet probably fosters efficient scavenging in a competitive environment, while a thumb rule for preferentially acquiring specific nutrients enables them to sequester proteins from the carbohydrate-rich environment.
[ { "created": "Fri, 12 Apr 2013 06:30:33 GMT", "version": "v1" } ]
2013-04-15
[ [ "Bhadra", "Anandarup", "" ], [ "Bhattacharjee", "Debottam", "" ], [ "Paul", "Manabi", "" ], [ "Bhadra", "Anindita", "" ] ]
Animals that scavenge in and around human localities need to utilize a broad range of resources. Preference for any one kind of food, under such circumstances, might be inefficient. Indian free-ranging dogs, Canis lupus familiaris, are scavengers that are heavily dependent on humans for sustaining their omnivorous diet. The current study suggests that, because of evolutionary load, these dogs, which are descendants of the decidedly carnivorous gray wolf, still retain a preference for meat though they live on carbohydrate-rich resources. The plasticity in their diet probably fosters efficient scavenging in a competitive environment, while a thumb rule for preferentially acquiring specific nutrients enables them to sequester proteins from the carbohydrate-rich environment.
1310.4258
Ron Nielsen
Ron W Nielsen (aka Jan Nurzynski)
Scientifically unacceptable concept of the Epoch of Malthusian Stagnation
43 pages
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to control the growth of human population it is helpful to understand correctly the mechanism of growth, and the first essential step is to investigate current interpretations and reject any unscientific explanations. One such popular but questionable interpretation is the concept of the Epoch of Malthusian Stagnation. We discuss its origin, narrative and claims. We explain why this concept is scientifically unacceptable. This investigation also questions the closely-related Demographic Transition Theory, whose essential component is the assumed mechanism of Malthusian stagnation for the first stage of growth.
[ { "created": "Wed, 16 Oct 2013 03:49:59 GMT", "version": "v1" }, { "created": "Mon, 21 Oct 2013 00:37:55 GMT", "version": "v2" }, { "created": "Fri, 1 Nov 2013 03:36:02 GMT", "version": "v3" } ]
2017-10-04
[ [ "Nielsen", "Ron W", "", "aka Jan Nurzynski" ] ]
In order to control the growth of human population it is helpful to understand correctly the mechanism of growth, and the first essential step is to investigate current interpretations and reject any unscientific explanations. One such popular but questionable interpretation is the concept of the Epoch of Malthusian Stagnation. We discuss its origin, narrative and claims. We explain why this concept is scientifically unacceptable. This investigation also questions the closely-related Demographic Transition Theory, whose essential component is the assumed mechanism of Malthusian stagnation for the first stage of growth.
2209.04923
Samuel Gershman
Samuel J. Gershman
The molecular memory code and synaptic plasticity: a synthesis
null
null
null
null
q-bio.NC q-bio.MN
http://creativecommons.org/licenses/by/4.0/
The most widely accepted view of memory in the brain holds that synapses are the storage sites of memory, and that memories are formed through associative modification of synapses. This view has been challenged on conceptual and empirical grounds. As an alternative, it has been proposed that molecules within the cell body are the storage sites of memory, and that memories are formed through biochemical operations on these molecules. This paper proposes a synthesis of these two views, grounded in a computational theory of memory. Synapses are conceived as storage sites for the parameters of an approximate posterior probability distribution over latent causes. Intracellular molecules are conceived as storage sites for the parameters of a generative model. The theory stipulates how these two components work together as part of an integrated algorithm for learning and inference.
[ { "created": "Sun, 11 Sep 2022 19:38:23 GMT", "version": "v1" } ]
2022-09-13
[ [ "Gershman", "Samuel J.", "" ] ]
The most widely accepted view of memory in the brain holds that synapses are the storage sites of memory, and that memories are formed through associative modification of synapses. This view has been challenged on conceptual and empirical grounds. As an alternative, it has been proposed that molecules within the cell body are the storage sites of memory, and that memories are formed through biochemical operations on these molecules. This paper proposes a synthesis of these two views, grounded in a computational theory of memory. Synapses are conceived as storage sites for the parameters of an approximate posterior probability distribution over latent causes. Intracellular molecules are conceived as storage sites for the parameters of a generative model. The theory stipulates how these two components work together as part of an integrated algorithm for learning and inference.
1607.04122
Antoni Aguilar-Mogas
Antoni Aguilar-Mogas (1), Marta Sales-Pardo (1), Miriam Navarro (2 and 3), Ralf Tautenhahn (4), Roger Guimer\`a (1 and 5) and Oscar Yanes (2 and 3) ((1) Departament d'Enginyeria Qu\'imica, Universitat Rovira i Virgili, Tarragona, Spain, (2) Centre for Omic Sciences, Universitat Rovira i Virgili, Reus, Spain, (3) Metabolomics Platform, Spanish Biomedical Research Center in Diabetes and Associated Metabolic Disorders (CIBERDEM), Madrid, Spain (4) Scripps Center for Metabolomics and Mass Spectrometry, The Scripps Research Institute, La Jolla, CA, USA, (5) Instituci\'o Catalana de Recerca i Estudis Avan\c{c}ats (ICREA), Barcelona, Spain)
iMet: A computational tool for structural annotation of unknown metabolites from tandem mass spectra
21 pages, 6 figures
null
10.1021/acs.analchem.6b04512
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Untargeted metabolomic studies are revealing large numbers of naturally occurring metabolites that cannot be characterized because their chemical structures and MS/MS spectra are not available in databases. Here we present iMet, a computational tool based on experimental tandem mass spectrometry that could potentially allow the annotation of metabolites not discovered previously. iMet uses MS/MS spectra to identify metabolites structurally similar to an unknown metabolite, and gives a net atomic addition or removal that converts the known metabolite into the unknown one. We validate the algorithm with 148 metabolites, and show that for 89% of them at least one of the top four matches identified by iMet enables the proper annotation of the unknown metabolite. iMet is freely available at http://imet.seeslab.net.
[ { "created": "Thu, 14 Jul 2016 13:25:22 GMT", "version": "v1" } ]
2017-03-17
[ [ "Aguilar-Mogas", "Antoni", "", "1" ], [ "Sales-Pardo", "Marta", "", "1" ], [ "Navarro", "Miriam", "", "2 and 3" ], [ "Tautenhahn", "Ralf", "", "4" ], [ "Guimerà", "Roger", "", "1 and 5" ], [ "Yanes", "Oscar", "", "2 and 3" ] ]
Untargeted metabolomic studies are revealing large numbers of naturally occurring metabolites that cannot be characterized because their chemical structures and MS/MS spectra are not available in databases. Here we present iMet, a computational tool based on experimental tandem mass spectrometry that could potentially allow the annotation of metabolites not discovered previously. iMet uses MS/MS spectra to identify metabolites structurally similar to an unknown metabolite, and gives a net atomic addition or removal that converts the known metabolite into the unknown one. We validate the algorithm with 148 metabolites, and show that for 89% of them at least one of the top four matches identified by iMet enables the proper annotation of the unknown metabolite. iMet is freely available at http://imet.seeslab.net.