column          type           range
id              stringlengths  9–13
submitter       stringlengths  4–48
authors         stringlengths  4–9.62k
title           stringlengths  4–343
comments        stringlengths  2–480
journal-ref     stringlengths  9–309
doi             stringlengths  12–138
report-no       stringclasses  277 values
categories      stringlengths  8–87
license         stringclasses  9 values
orig_abstract   stringlengths  27–3.76k
versions        listlengths    1–15
update_date     stringlengths  10–10
authors_parsed  listlengths    1–147
abstract        stringlengths  24–3.75k
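The column summary above (each field with its type and its min–max length) is the kind of profile that can be recomputed directly from the records. A minimal sketch with pandas, using a toy two-row frame as a stand-in for the real dataset (the actual dataset file is not named in this dump, so the frame below is a hypothetical excerpt):

```python
import pandas as pd

# Toy stand-in built from two of the records listed below; the real
# dataset has many more rows and the full set of columns.
df = pd.DataFrame({
    "id": ["2006.01265", "1808.07539"],
    "title": [
        "A model of cultural evolution in the context of strategic conflict",
        "Transcriptional Activation of Elephant Shark Mineralocorticoid "
        "Receptor by Corticosteroids, Progesterone and Spironolactone",
    ],
    "license": [
        "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
        "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    ],
})

# For string columns, report min/max length ("stringlengths" rows above);
# for low-cardinality columns, one would instead report the number of
# distinct values ("stringclasses" rows above).
for col in df.columns:
    lengths = df[col].str.len()
    print(f"{col}: stringlengths {lengths.min()}-{lengths.max()}")
print(f"license: stringclasses {df['license'].nunique()} values")
```

On the full dataset the same loop reproduces the ranges in the header, e.g. `update_date: stringlengths 10-10` for the fixed-width ISO dates.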
2006.01265
Misha Perepelitsa
Misha Perepelitsa
A model of cultural evolution in the context of strategic conflict
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a model of cultural evolution for strategy selection in a population of individuals who interact in a game-theoretic framework. The evolution combines individual learning of the environment (the population strategy profile), reproduction proportional to the success of the acquired knowledge, and social transmission of the knowledge to the next generation. A mean-field type equation is derived that describes the dynamics of the distribution of cultural traits in terms of the rate of learning, the reproduction rate and the population size. We establish global well-posedness of the initial-boundary value problem for this equation and give several examples that illustrate the process of cultural evolution for some classical games.
[ { "created": "Mon, 1 Jun 2020 21:06:53 GMT", "version": "v1" } ]
2020-06-03
[ [ "Perepelitsa", "Misha", "" ] ]
We consider a model of cultural evolution for strategy selection in a population of individuals who interact in a game-theoretic framework. The evolution combines individual learning of the environment (the population strategy profile), reproduction proportional to the success of the acquired knowledge, and social transmission of the knowledge to the next generation. A mean-field type equation is derived that describes the dynamics of the distribution of cultural traits in terms of the rate of learning, the reproduction rate and the population size. We establish global well-posedness of the initial-boundary value problem for this equation and give several examples that illustrate the process of cultural evolution for some classical games.
1808.07539
Michael Baker Ph.D.
Yoshinao Katsu, Satomi Kohno, Kaori Oka, Xiaozhi Lin, Sumika Otake, Nisha E. Pillai, Wataru Takagi, Susumu Hyodo, Byrappa Venkatesh, Michael E. Baker
Transcriptional Activation of Elephant Shark Mineralocorticoid Receptor by Corticosteroids, Progesterone and Spironolactone
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We report the analysis of activation by corticosteroids and progesterone of full-length mineralocorticoid receptor (MR) from elephant shark, a cartilaginous fish belonging to the oldest group of jawed vertebrates. Based on their measured activities, aldosterone, cortisol, 11-deoxycorticosterone, corticosterone, 11-deoxycortisol, progesterone and 19-norprogesterone are potential physiological mineralocorticoids. However, aldosterone, the physiological mineralocorticoid in humans and other terrestrial vertebrates, is not found in cartilaginous or ray-finned fishes. Because progesterone is a precursor for corticosteroids that activate elephant shark MR, we propose that progesterone was an ancestral ligand for elephant shark MR. Although progesterone activates ray-finned fish MRs, progesterone does not activate human, amphibian or alligator MRs, suggesting that during the transition to terrestrial vertebrates, progesterone lost the ability to activate the MR. Comparison of RNA-sequence analysis of elephant shark MR with that of human MR suggests that MR expression in the human brain, heart, ovary, testis and other non-epithelial tissues evolved in cartilaginous fishes. Together, these data suggest that progesterone-activated MR may have unappreciated functions in elephant shark ovary and testis.
[ { "created": "Wed, 22 Aug 2018 19:43:02 GMT", "version": "v1" } ]
2018-08-24
[ [ "Katsu", "Yoshinao", "" ], [ "Kohno", "Satomi", "" ], [ "Oka", "Kaori", "" ], [ "Lin", "Xiaozhi", "" ], [ "Otake", "Sumika", "" ], [ "Pillai", "Nisha E.", "" ], [ "Takagi", "Wataru", "" ], [ "Hyodo", "Susumu", "" ], [ "Venkatesh", "Byrappa", "" ], [ "Baker", "Michael E.", "" ] ]
We report the analysis of activation by corticosteroids and progesterone of full-length mineralocorticoid receptor (MR) from elephant shark, a cartilaginous fish belonging to the oldest group of jawed vertebrates. Based on their measured activities, aldosterone, cortisol, 11-deoxycorticosterone, corticosterone, 11-deoxycortisol, progesterone and 19-norprogesterone are potential physiological mineralocorticoids. However, aldosterone, the physiological mineralocorticoid in humans and other terrestrial vertebrates, is not found in cartilaginous or ray-finned fishes. Because progesterone is a precursor for corticosteroids that activate elephant shark MR, we propose that progesterone was an ancestral ligand for elephant shark MR. Although progesterone activates ray-finned fish MRs, progesterone does not activate human, amphibian or alligator MRs, suggesting that during the transition to terrestrial vertebrates, progesterone lost the ability to activate the MR. Comparison of RNA-sequence analysis of elephant shark MR with that of human MR suggests that MR expression in the human brain, heart, ovary, testis and other non-epithelial tissues evolved in cartilaginous fishes. Together, these data suggest that progesterone-activated MR may have unappreciated functions in elephant shark ovary and testis.
1909.07737
Christoph Simon Hundschell
Christoph Simon Hundschell, Frank Jakob, Anja Maria Wagemans
Molecular Weight Dependent Structure and Polymer Density of the Exopolysaccharide Levan
null
null
null
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Levan is a bacterial homopolysaccharide, which consists of beta-2,6 linked beta-D-fructose monomers. Because of its structural properties and its health-promoting effects, levan is a promising functional ingredient for the food, cosmetic and pharmaceutical industries. The properties of levan have been reported to be linked to its molecular weight. For a better understanding of how its molecular weight determines its polymer conformation in aqueous solution, levan produced by the food-grade acetic acid bacterium Gluconobacter albidus TMW 2.1191 was analysed over a broad molecular weight range using dynamic and static light scattering and viscometry. Levan with low molecular weight exhibited a compact random coil structure. As the molecular weight increased, the structure transformed into a compact non-drained sphere. The density of the sphere continued to increase with increasing molecular weight. This resulted in a negative exponent in the Mark-Houwink-Sakurada plot. For the first time, an increase in molecular density with increasing molecular weight, as determined by a negative Mark-Houwink-Sakurada exponent, could be shown for biopolymers. Our results reveal the unique properties of high-molecular weight levan and indicate the need for further systematic studies on the structure-function relationship of levans for their targeted use in food, cosmetic and pharmaceutical applications.
[ { "created": "Tue, 17 Sep 2019 12:02:58 GMT", "version": "v1" }, { "created": "Wed, 18 Sep 2019 08:26:32 GMT", "version": "v2" } ]
2019-09-19
[ [ "Hundschell", "Christoph Simon", "" ], [ "Jakob", "Frank", "" ], [ "Wagemans", "Anja Maria", "" ] ]
Levan is a bacterial homopolysaccharide, which consists of beta-2,6 linked beta-D-fructose monomers. Because of its structural properties and its health-promoting effects, levan is a promising functional ingredient for the food, cosmetic and pharmaceutical industries. The properties of levan have been reported to be linked to its molecular weight. For a better understanding of how its molecular weight determines its polymer conformation in aqueous solution, levan produced by the food-grade acetic acid bacterium Gluconobacter albidus TMW 2.1191 was analysed over a broad molecular weight range using dynamic and static light scattering and viscometry. Levan with low molecular weight exhibited a compact random coil structure. As the molecular weight increased, the structure transformed into a compact non-drained sphere. The density of the sphere continued to increase with increasing molecular weight. This resulted in a negative exponent in the Mark-Houwink-Sakurada plot. For the first time, an increase in molecular density with increasing molecular weight, as determined by a negative Mark-Houwink-Sakurada exponent, could be shown for biopolymers. Our results reveal the unique properties of high-molecular weight levan and indicate the need for further systematic studies on the structure-function relationship of levans for their targeted use in food, cosmetic and pharmaceutical applications.
1209.5006
Deep Ganguli
Deep Ganguli and Eero Simoncelli
Implicit embedding of prior probabilities in optimally efficient neural populations
15 pages, 2 figures, generalizes and extends Ganguli & Simoncelli, NIPS 2010
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine how the prior probability distribution of a sensory variable in the environment influences the optimal allocation of neurons and spikes in a population that represents that variable. We start with a conventional response model, in which the spikes of each neuron are drawn from a Poisson distribution with a mean rate governed by an associated tuning curve. For this model, we approximate the Fisher information in terms of the density and amplitude of the tuning curves, under the assumption that tuning width varies inversely with cell density. We consider a family of objective functions based on the expected value, over the sensory prior, of a functional of the Fisher information. This family includes lower bounds on mutual information and perceptual discriminability as special cases. For all cases, we obtain a closed form expression for the optimum, in which the density and gain of the cells in the population are power law functions of the stimulus prior. Thus, the allocation of these resources is uniquely specified by the prior. Since perceptual discriminability may be expressed directly in terms of the Fisher information, it too will be a power law function of the prior. We show that these results hold for tuning curves of arbitrary shape and correlated neuronal variability. This framework thus provides direct and experimentally testable predictions regarding the relationship between sensory priors, tuning properties of neural representations, and perceptual discriminability.
[ { "created": "Sat, 22 Sep 2012 17:30:53 GMT", "version": "v1" } ]
2012-09-25
[ [ "Ganguli", "Deep", "" ], [ "Simoncelli", "Eero", "" ] ]
We examine how the prior probability distribution of a sensory variable in the environment influences the optimal allocation of neurons and spikes in a population that represents that variable. We start with a conventional response model, in which the spikes of each neuron are drawn from a Poisson distribution with a mean rate governed by an associated tuning curve. For this model, we approximate the Fisher information in terms of the density and amplitude of the tuning curves, under the assumption that tuning width varies inversely with cell density. We consider a family of objective functions based on the expected value, over the sensory prior, of a functional of the Fisher information. This family includes lower bounds on mutual information and perceptual discriminability as special cases. For all cases, we obtain a closed form expression for the optimum, in which the density and gain of the cells in the population are power law functions of the stimulus prior. Thus, the allocation of these resources is uniquely specified by the prior. Since perceptual discriminability may be expressed directly in terms of the Fisher information, it too will be a power law function of the prior. We show that these results hold for tuning curves of arbitrary shape and correlated neuronal variability. This framework thus provides direct and experimentally testable predictions regarding the relationship between sensory priors, tuning properties of neural representations, and perceptual discriminability.
0903.3825
Artur Garcia-Saez
Artur Garcia-Saez and J. Miguel Rubi
Strong cooperativity and inhibitory effects in DNA multi-looping processes
4 pages, 4 figures
null
null
null
q-bio.BM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that there is a strong interrelation between the different loops that may appear in a DNA segment. Conformational changes in a chain segment caused by the formation of a particular loop may either promote or prevent the appearance of another. The underlying loop selection mechanism is analyzed by means of a Hamiltonian model from which the looping free energy and the corresponding repression level can be computed. We show significant differences between the probability of single and multiple loop formation. The consequences that these collective effects might have on gene regulation processes are outlined.
[ { "created": "Mon, 23 Mar 2009 10:16:32 GMT", "version": "v1" } ]
2009-03-24
[ [ "Garcia-Saez", "Artur", "" ], [ "Rubi", "J. Miguel", "" ] ]
We show that there is a strong interrelation between the different loops that may appear in a DNA segment. Conformational changes in a chain segment caused by the formation of a particular loop may either promote or prevent the appearance of another. The underlying loop selection mechanism is analyzed by means of a Hamiltonian model from which the looping free energy and the corresponding repression level can be computed. We show significant differences between the probability of single and multiple loop formation. The consequences that these collective effects might have on gene regulation processes are outlined.
0810.3877
Emilio Hernandez-Garcia
Emilio Hernandez-Garcia, Murat Tugrul, E. Alejandro Herrada, Victor M. Eguiluz (IFISC, Palma de Mallorca, Spain), Konstantin Klemm (IFISC and Bioinformatics, Leipzig, Germany)
Simple models for scaling in phylogenetic trees
7 pages, 4 figures. A new figure, with example trees, has been added. To appear in Int. J. Bifurcation and Chaos
International Journal of Bifurcation and Chaos 20, 805-811 (2010)
10.1142/S0218127410026095
null
q-bio.QM cond-mat.stat-mech q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many processes and models --in biological, physical, social, and other contexts-- produce trees whose depth scales logarithmically with the number of leaves. Phylogenetic trees, describing the evolutionary relationships between biological species, are examples of trees for which such scaling is not observed. With this motivation, we analyze numerically two branching models leading to non-logarithmic scaling of the depth with the number of leaves. For Ford's alpha model, although a power-law scaling of the depth with tree size was established analytically, our numerical results illustrate that the asymptotic regime is approached only at very large tree sizes. We introduce here a new model, the activity model, showing analytically and numerically that it also displays a power-law scaling of the depth with tree size at a critical parameter value.
[ { "created": "Tue, 21 Oct 2008 17:21:40 GMT", "version": "v1" }, { "created": "Mon, 2 Feb 2009 17:08:49 GMT", "version": "v2" } ]
2010-05-11
[ [ "Hernandez-Garcia", "Emilio", "", "IFISC, Palma de Mallorca, Spain" ], [ "Tugrul", "Murat", "", "IFISC, Palma de Mallorca, Spain" ], [ "Herrada", "E. Alejandro", "", "IFISC, Palma de Mallorca, Spain" ], [ "Eguiluz", "Victor M.", "", "IFISC, Palma de Mallorca, Spain" ], [ "Klemm", "Konstantin", "", "IFISC and\n Bioinformatics, Leipzig, Germany" ] ]
Many processes and models --in biological, physical, social, and other contexts-- produce trees whose depth scales logarithmically with the number of leaves. Phylogenetic trees, describing the evolutionary relationships between biological species, are examples of trees for which such scaling is not observed. With this motivation, we analyze numerically two branching models leading to non-logarithmic scaling of the depth with the number of leaves. For Ford's alpha model, although a power-law scaling of the depth with tree size was established analytically, our numerical results illustrate that the asymptotic regime is approached only at very large tree sizes. We introduce here a new model, the activity model, showing analytically and numerically that it also displays a power-law scaling of the depth with tree size at a critical parameter value.
1307.1934
Michiaki Hamada
Michiaki Hamada
RNA secondary structure prediction from multi-aligned sequences
A preprint of an invited review manuscript that will be published in a chapter of the book `Methods in Molecular Biology'. Note that this version of the manuscript may differ from the published version
null
null
null
q-bio.BM q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been well accepted that the RNA secondary structures of most functional non-coding RNAs (ncRNAs) are closely related to their functions and are conserved during evolution. Hence, prediction of conserved secondary structures from evolutionarily related sequences is one important task in RNA bioinformatics; the methods are useful not only to further functional analyses of ncRNAs but also to improve the accuracy of secondary structure predictions and to find novel functional RNAs from the genome. In this review, I focus on common secondary structure prediction from a given aligned RNA sequence, in which one secondary structure whose length is equal to that of the input alignment is predicted. I systematically review and classify existing tools and algorithms for the problem, by utilizing the information employed in the tools and by adopting a unified viewpoint based on maximum expected gain (MEG) estimators. I believe that this classification will allow a deeper understanding of each tool and provide users with useful information for selecting tools for common secondary structure predictions.
[ { "created": "Mon, 8 Jul 2013 01:38:29 GMT", "version": "v1" } ]
2013-07-09
[ [ "Hamada", "Michiaki", "" ] ]
It has been well accepted that the RNA secondary structures of most functional non-coding RNAs (ncRNAs) are closely related to their functions and are conserved during evolution. Hence, prediction of conserved secondary structures from evolutionarily related sequences is one important task in RNA bioinformatics; the methods are useful not only to further functional analyses of ncRNAs but also to improve the accuracy of secondary structure predictions and to find novel functional RNAs from the genome. In this review, I focus on common secondary structure prediction from a given aligned RNA sequence, in which one secondary structure whose length is equal to that of the input alignment is predicted. I systematically review and classify existing tools and algorithms for the problem, by utilizing the information employed in the tools and by adopting a unified viewpoint based on maximum expected gain (MEG) estimators. I believe that this classification will allow a deeper understanding of each tool and provide users with useful information for selecting tools for common secondary structure predictions.
2102.08335
Juul Goossens
Juul Goossens, Gilles Oudebrouckx, Seppe Bormans, Thijs Vandenryt, Ronald Thoelen
Detecting cell and protein concentrations by the use of a thermal based sensor
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biosensors are frequently used nowadays because of their attractive capabilities. Owing to their high accuracy and precision, they are increasingly used in the medical sector. Natural receptors are mostly used, but their use has some specific drawbacks. Therefore, new read-out methods are being developed that do not require these receptors. With a Transient Plane Source (TPS) sensor, the thermal properties of a fluid can be determined. These sensors can detect the capability of a fluid to absorb heat, i.e. its thermal effusivity. By monitoring this property, many potential bioprocesses can be monitored. This promising technique was further developed in this research for later use in detecting cell growth and protein concentrations. Firstly, the thermal properties of growth medium and yeast cells were determined. Here, it became clear that the thermal properties change at different concentrations. Measurements were also performed on protein concentrations. No unambiguous results were obtained from these tests, but the overall results from the use of this sensor are very promising, especially for cell detection. Further research will clarify the applicability and sensitivity of this type of sensor.
[ { "created": "Tue, 16 Feb 2021 18:20:31 GMT", "version": "v1" } ]
2021-02-17
[ [ "Goossens", "Juul", "" ], [ "Oudebrouckx", "Gilles", "" ], [ "Bormans", "Seppe", "" ], [ "Vandenryt", "Thijs", "" ], [ "Thoelen", "Ronald", "" ] ]
Biosensors are frequently used nowadays because of their attractive capabilities. Owing to their high accuracy and precision, they are increasingly used in the medical sector. Natural receptors are mostly used, but their use has some specific drawbacks. Therefore, new read-out methods are being developed that do not require these receptors. With a Transient Plane Source (TPS) sensor, the thermal properties of a fluid can be determined. These sensors can detect the capability of a fluid to absorb heat, i.e. its thermal effusivity. By monitoring this property, many potential bioprocesses can be monitored. This promising technique was further developed in this research for later use in detecting cell growth and protein concentrations. Firstly, the thermal properties of growth medium and yeast cells were determined. Here, it became clear that the thermal properties change at different concentrations. Measurements were also performed on protein concentrations. No unambiguous results were obtained from these tests, but the overall results from the use of this sensor are very promising, especially for cell detection. Further research will clarify the applicability and sensitivity of this type of sensor.
2003.07602
Malik Magdon-Ismail
Malik Magdon-Ismail
Machine Learning the Phenomenology of COVID-19 From Early Infection Dynamics
Test data up to April 02. Reorganized a little
null
null
null
q-bio.PE cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a robust data-driven machine learning analysis of the COVID-19 pandemic from its early infection dynamics, specifically infection counts over time. The goal is to extract actionable public health insights. These insights include the infectious force, the rate of a mild infection becoming serious, estimates for asymptomatic infections and predictions of new infections over time. We focus on USA data starting from the first confirmed infection on January 20, 2020. Our methods reveal significant asymptomatic (hidden) infection, a lag of about 10 days, and we quantitatively confirm that the infectious force is strong with about a 0.14% transition from mild to serious infection. Our methods are efficient, robust and general, being agnostic to the specific virus and applicable to different populations or cohorts.
[ { "created": "Tue, 17 Mar 2020 09:42:14 GMT", "version": "v1" }, { "created": "Mon, 23 Mar 2020 02:05:04 GMT", "version": "v2" }, { "created": "Fri, 3 Apr 2020 13:21:58 GMT", "version": "v3" } ]
2020-04-06
[ [ "Magdon-Ismail", "Malik", "" ] ]
We present a robust data-driven machine learning analysis of the COVID-19 pandemic from its early infection dynamics, specifically infection counts over time. The goal is to extract actionable public health insights. These insights include the infectious force, the rate of a mild infection becoming serious, estimates for asymptomatic infections and predictions of new infections over time. We focus on USA data starting from the first confirmed infection on January 20, 2020. Our methods reveal significant asymptomatic (hidden) infection, a lag of about 10 days, and we quantitatively confirm that the infectious force is strong with about a 0.14% transition from mild to serious infection. Our methods are efficient, robust and general, being agnostic to the specific virus and applicable to different populations or cohorts.
1903.05615
Pablo Rodr\'iguez-S\'anchez
Pablo Rodr\'iguez-S\'anchez, Egbert H. van Nes, Marten Scheffer
Climbing Escher's stairs: a way to approximate stability landscapes in multidimensional systems
null
Rodriguez-Sanchez P. at al. (2020) PLOS Computational Biology 16(4): e1007788
10.1371/journal.pcbi.1007788
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stability landscapes are useful for understanding the properties of dynamical systems. These landscapes can be calculated from the system's dynamical equations using the physical concept of scalar potential. Unfortunately, for most biological systems with two or more state variables such potentials do not exist. Here we use an analogy with art to provide an accessible explanation of why this happens. Additionally, we introduce a numerical method for decomposing differential equations into two terms: the gradient term that has an associated potential, and the non-gradient term that lacks it. In regions of the state space where the magnitude of the non-gradient term is small compared to the gradient part, we use the gradient term to approximate the potential as quasi-potential. The non-gradient to gradient ratio can be used to estimate the local error introduced by our approximation. Both the algorithm and a ready-to-use implementation in the form of an R package are provided.
[ { "created": "Wed, 13 Mar 2019 17:28:12 GMT", "version": "v1" }, { "created": "Thu, 18 Apr 2019 07:45:11 GMT", "version": "v2" } ]
2020-04-24
[ [ "Rodríguez-Sánchez", "Pablo", "" ], [ "van Nes", "Egbert H.", "" ], [ "Scheffer", "Marten", "" ] ]
Stability landscapes are useful for understanding the properties of dynamical systems. These landscapes can be calculated from the system's dynamical equations using the physical concept of scalar potential. Unfortunately, for most biological systems with two or more state variables such potentials do not exist. Here we use an analogy with art to provide an accessible explanation of why this happens. Additionally, we introduce a numerical method for decomposing differential equations into two terms: the gradient term that has an associated potential, and the non-gradient term that lacks it. In regions of the state space where the magnitude of the non-gradient term is small compared to the gradient part, we use the gradient term to approximate the potential as quasi-potential. The non-gradient to gradient ratio can be used to estimate the local error introduced by our approximation. Both the algorithm and a ready-to-use implementation in the form of an R package are provided.
1911.08398
Alexandru Hening
Alexandru Hening
Coexistence, extinction, and optimal harvesting in discrete-time stochastic population models
37 pages, 9 figures
Journal of Nonlinear Science, vol. 31, 2021
null
null
q-bio.PE math.OC math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze the long term behavior of interacting populations which can be controlled through harvesting. The dynamics is assumed to be discrete in time and stochastic due to the effect of environmental fluctuations. We present extinction and coexistence criteria when there are one or two interacting species. We then use these tools in order to see when harvesting leads to extinction or persistence of species, as well as what the optimal harvesting strategies, which maximize the expected long term yield, look like. For single species systems, we show under certain conditions that the optimal harvesting strategy is of bang-bang type: there is a threshold under which there is no harvesting, while everything above this threshold gets harvested. The second part of the paper is concerned with the analysis of ecosystems that have two interacting species which can be harvested. In particular, we carefully study predator-prey and competitive Ricker models when there are two species. In this setting we show how to find the optimal proportional harvesting strategy. If the system is of predator-prey type the optimal proportional harvesting strategy is, depending on the interaction parameters and the price of predators relative to prey, either to harvest the predator to extinction and maximize the asymptotic yield of the prey or to not harvest the prey and to maximize the asymptotic harvesting yield of the predators. If the system is competitive, in certain instances it is optimal to drive one species extinct and to harvest the other one. In other cases it is best to let the two species coexist and harvest both species while maintaining coexistence. In the setting of the competitive Ricker model we show that if one competitor is dominant and pushes the other species to extinction, the harvesting of the dominant species can lead to coexistence.
[ { "created": "Tue, 19 Nov 2019 17:02:20 GMT", "version": "v1" } ]
2021-02-18
[ [ "Hening", "Alexandru", "" ] ]
We analyze the long term behavior of interacting populations which can be controlled through harvesting. The dynamics is assumed to be discrete in time and stochastic due to the effect of environmental fluctuations. We present extinction and coexistence criteria when there are one or two interacting species. We then use these tools in order to see when harvesting leads to extinction or persistence of species, as well as what the optimal harvesting strategies, which maximize the expected long term yield, look like. For single species systems, we show under certain conditions that the optimal harvesting strategy is of bang-bang type: there is a threshold under which there is no harvesting, while everything above this threshold gets harvested. The second part of the paper is concerned with the analysis of ecosystems that have two interacting species which can be harvested. In particular, we carefully study predator-prey and competitive Ricker models when there are two species. In this setting we show how to find the optimal proportional harvesting strategy. If the system is of predator-prey type the optimal proportional harvesting strategy is, depending on the interaction parameters and the price of predators relative to prey, either to harvest the predator to extinction and maximize the asymptotic yield of the prey or to not harvest the prey and to maximize the asymptotic harvesting yield of the predators. If the system is competitive, in certain instances it is optimal to drive one species extinct and to harvest the other one. In other cases it is best to let the two species coexist and harvest both species while maintaining coexistence. In the setting of the competitive Ricker model we show that if one competitor is dominant and pushes the other species to extinction, the harvesting of the dominant species can lead to coexistence.
1002.2644
Jan-Timm Kuhr
Gerlinde Schwake, Simon Youssef, Jan-Timm Kuhr, Sebastian Gude, Maria Pamela David, Eduardo Mendoza, Erwin Frey, Joachim O. R\"adler
Predictive Modeling of Non-Viral Gene Transfer
20 pages, 6 figures, 12 pages supporting information
Biotechnology and Bioengineering, Volume 105, Issue 4, 805-813, (2010)
10.1002/bit.22604
LMU-ASC 44/09
q-bio.QM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In non-viral gene delivery, the variance of transgenic expression stems from the low number of plasmids successfully transferred. Here, we experimentally determine Lipofectamine- and PEI-mediated exogenous gene expression distributions from single cell time-lapse analysis. Broad Poisson-like distributions of steady state expression are observed for both transfection agents, when used with synchronized cell lines. At the same time, co-transfection analysis with YFP- and CFP-coding plasmids shows that multiple plasmids are simultaneously expressed, suggesting that plasmids are delivered in correlated units (complexes). We present a mathematical model of transfection, where a stochastic, two-step process is assumed, with the first being the low-probability entry step of complexes into the nucleus, followed by the subsequent release and activation of a small number of plasmids from a delivered complex. This conceptually simple model consistently predicts the observed fraction of transfected cells, the co-transfection ratio and the expression level distribution. It yields the number of efficient plasmids per complex and elucidates the origin of the associated noise, consequently providing a platform for evaluating and improving non-viral vectors.
[ { "created": "Fri, 12 Feb 2010 21:13:37 GMT", "version": "v1" } ]
2010-02-16
[ [ "Schwake", "Gerlinde", "" ], [ "Youssef", "Simon", "" ], [ "Kuhr", "Jan-Timm", "" ], [ "Gude", "Sebastian", "" ], [ "David", "Maria Pamela", "" ], [ "Mendoza", "Eduardo", "" ], [ "Frey", "Erwin", "" ], [ "Rädler", "Joachim O.", "" ] ]
In non-viral gene delivery, the variance of transgenic expression stems from the low number of plasmids successfully transferred. Here, we experimentally determine Lipofectamine- and PEI-mediated exogenous gene expression distributions from single cell time-lapse analysis. Broad Poisson-like distributions of steady state expression are observed for both transfection agents, when used with synchronized cell lines. At the same time, co-transfection analysis with YFP- and CFP-coding plasmids shows that multiple plasmids are simultaneously expressed, suggesting that plasmids are delivered in correlated units (complexes). We present a mathematical model of transfection, where a stochastic, two-step process is assumed, with the first being the low-probability entry step of complexes into the nucleus, followed by the subsequent release and activation of a small number of plasmids from a delivered complex. This conceptually simple model consistently predicts the observed fraction of transfected cells, the co-transfection ratio and the expression level distribution. It yields the number of efficient plasmids per complex and elucidates the origin of the associated noise, consequently providing a platform for evaluating and improving non-viral vectors.
q-bio/0701036
Robert Maier
Robert S. Maier
Parametrized Stochastic Grammars for RNA Secondary Structure Prediction
5 pages, submitted to the 2007 Information Theory and Applications Workshop (ITA 2007)
null
10.1109/ITA.2007.4357589
null
q-bio.BM math.PR
null
We propose a two-level stochastic context-free grammar (SCFG) architecture for parametrized stochastic modeling of a family of RNA sequences, including their secondary structure. A stochastic model of this type can be used for maximum a posteriori estimation of the secondary structure of any new sequence in the family. The proposed SCFG architecture models RNA subsequences comprising paired bases as stochastically weighted Dyck-language words, i.e., as weighted balanced-parenthesis expressions. The length of each run of unpaired bases, forming a loop or a bulge, is taken to have a phase-type distribution: that of the hitting time in a finite-state Markov chain. Without loss of generality, each such Markov chain can be taken to have a bounded complexity. The scheme yields an overall family SCFG with a manageable number of parameters.
[ { "created": "Wed, 24 Jan 2007 16:58:56 GMT", "version": "v1" } ]
2014-03-06
[ [ "Maier", "Robert S.", "" ] ]
We propose a two-level stochastic context-free grammar (SCFG) architecture for parametrized stochastic modeling of a family of RNA sequences, including their secondary structure. A stochastic model of this type can be used for maximum a posteriori estimation of the secondary structure of any new sequence in the family. The proposed SCFG architecture models RNA subsequences comprising paired bases as stochastically weighted Dyck-language words, i.e., as weighted balanced-parenthesis expressions. The length of each run of unpaired bases, forming a loop or a bulge, is taken to have a phase-type distribution: that of the hitting time in a finite-state Markov chain. Without loss of generality, each such Markov chain can be taken to have a bounded complexity. The scheme yields an overall family SCFG with a manageable number of parameters.
1908.06872
Jeyashree Krishnan
Jeyashree Krishnan, Reza Torabi, Edoardo Di Napoli, Andreas Schuppert
A Modified Ising Model of Barab\'asi-Albert Network with Gene-type Spins
30 pages, 8 figures, presented in poster form in SBHD'18, LA, USA; as a talk in SIAM CSE'19
null
null
null
q-bio.QM cond-mat.stat-mech physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
The central question of systems biology is to understand how individual components of a biological system such as genes or proteins cooperate in emerging phenotypes resulting in the evolution of diseases. As living cells are open systems in quasi-steady state type equilibrium in continuous exchange with their environment, computational techniques that have been successfully applied in statistical thermodynamics to describe phase transitions may provide new insights into the emerging behavior of biological systems. Here we will systematically evaluate the translation of computational techniques from solid-state physics to network models that closely resemble biological networks and develop specific translational rules to tackle problems unique to living systems. Hence we will focus on logic models exhibiting only two states in each network node. Motivated by the apparent asymmetry between biological states where an entity exhibits boolean states, i.e., is active or inactive, we present an adaptation of the symmetric Ising model towards an asymmetric one fitting to living systems, here referred to as the modified Ising model with gene-type spins. We analyze phase transitions by Monte Carlo simulations and propose a mean-field solution of the modified Ising model on a network type that closely resembles real-world networks, the Barab\'{a}si-Albert model of scale-free networks. We show that asymmetric Ising models show similarities to symmetric Ising models with an external field, undergo a discontinuous first-order phase transition, and exhibit hysteresis. The simulation setup presented here can be directly used for any biological network connectivity dataset and is also applicable for other networks that exhibit similar states of activity. This is a general statistical method to deal with non-linear large-scale models arising in the context of biological systems and is scalable to any network size.
[ { "created": "Mon, 19 Aug 2019 15:13:24 GMT", "version": "v1" } ]
2019-08-20
[ [ "Krishnan", "Jeyashree", "" ], [ "Torabi", "Reza", "" ], [ "Di Napoli", "Edoardo", "" ], [ "Schuppert", "Andreas", "" ] ]
The central question of systems biology is to understand how individual components of a biological system such as genes or proteins cooperate in emerging phenotypes resulting in the evolution of diseases. As living cells are open systems in quasi-steady state type equilibrium in continuous exchange with their environment, computational techniques that have been successfully applied in statistical thermodynamics to describe phase transitions may provide new insights into the emerging behavior of biological systems. Here we will systematically evaluate the translation of computational techniques from solid-state physics to network models that closely resemble biological networks and develop specific translational rules to tackle problems unique to living systems. Hence we will focus on logic models exhibiting only two states in each network node. Motivated by the apparent asymmetry between biological states where an entity exhibits boolean states, i.e., is active or inactive, we present an adaptation of the symmetric Ising model towards an asymmetric one fitting to living systems, here referred to as the modified Ising model with gene-type spins. We analyze phase transitions by Monte Carlo simulations and propose a mean-field solution of the modified Ising model on a network type that closely resembles real-world networks, the Barab\'{a}si-Albert model of scale-free networks. We show that asymmetric Ising models show similarities to symmetric Ising models with an external field, undergo a discontinuous first-order phase transition, and exhibit hysteresis. The simulation setup presented here can be directly used for any biological network connectivity dataset and is also applicable for other networks that exhibit similar states of activity. This is a general statistical method to deal with non-linear large-scale models arising in the context of biological systems and is scalable to any network size.
2211.12935
Sihao Liu
Sihao Liu (Daniel), Augustine N Mavor-Parker, Caswell Barry
Functional Connectome: Approximating Brain Networks with Artificial Neural Networks
13 pages, 10 figures
null
null
null
q-bio.NC cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
We aimed to explore the capability of deep learning to approximate the function instantiated by biological neural circuits, the functional connectome. Using deep neural networks, we performed supervised learning with firing rate observations drawn from synthetically constructed neural circuits, as well as from an empirically supported Boundary Vector Cell-Place Cell network. The performance of trained networks was quantified using a range of criteria and tasks. Our results show that deep neural networks were able to capture the computations performed by synthetic biological networks with high accuracy, and were highly data efficient and robust to biological plasticity. We show that trained deep neural networks are able to perform zero-shot generalisation in novel environments, and allow for a wealth of tasks such as decoding the animal's location in space with high accuracy. Our study reveals a novel and promising direction in systems neuroscience, and can be expanded upon with a multitude of downstream applications, for example, goal-directed reinforcement learning.
[ { "created": "Wed, 23 Nov 2022 13:12:13 GMT", "version": "v1" } ]
2022-11-24
[ [ "Liu", "Sihao", "", "Daniel" ], [ "Mavor-Parker", "Augustine N", "" ], [ "Barry", "Caswell", "" ] ]
We aimed to explore the capability of deep learning to approximate the function instantiated by biological neural circuits, the functional connectome. Using deep neural networks, we performed supervised learning with firing rate observations drawn from synthetically constructed neural circuits, as well as from an empirically supported Boundary Vector Cell-Place Cell network. The performance of trained networks was quantified using a range of criteria and tasks. Our results show that deep neural networks were able to capture the computations performed by synthetic biological networks with high accuracy, and were highly data efficient and robust to biological plasticity. We show that trained deep neural networks are able to perform zero-shot generalisation in novel environments, and allow for a wealth of tasks such as decoding the animal's location in space with high accuracy. Our study reveals a novel and promising direction in systems neuroscience, and can be expanded upon with a multitude of downstream applications, for example, goal-directed reinforcement learning.
q-bio/0311018
Lior Pachter
Nicolas Bray, Lior Pachter
MAVID: Constrained ancestral alignment of multiple sequences
null
null
null
null
q-bio.GN
null
We describe a new global multiple alignment program capable of aligning a large number of genomic regions. Our progressive alignment approach incorporates the following ideas: maximum-likelihood inference of ancestral sequences, automatic guide-tree construction, protein based anchoring of ab-initio gene predictions, and constraints derived from a global homology map of the sequences. We have implemented these ideas in the MAVID program, which is able to accurately align multiple genomic regions up to megabases long. MAVID is able to effectively align divergent sequences, as well as incomplete unfinished sequences. We demonstrate the capabilities of the program on the benchmark CFTR region which consists of 1.8Mb of human sequence and 20 orthologous regions in marsupials, birds, fish, and mammals. Finally, we describe two large MAVID alignments: an alignment of all the available HIV genomes and a multiple alignment of the entire human, mouse and rat genomes.
[ { "created": "Thu, 13 Nov 2003 10:12:39 GMT", "version": "v1" } ]
2007-05-23
[ [ "Bray", "Nicolas", "" ], [ "Pachter", "Lior", "" ] ]
We describe a new global multiple alignment program capable of aligning a large number of genomic regions. Our progressive alignment approach incorporates the following ideas: maximum-likelihood inference of ancestral sequences, automatic guide-tree construction, protein based anchoring of ab-initio gene predictions, and constraints derived from a global homology map of the sequences. We have implemented these ideas in the MAVID program, which is able to accurately align multiple genomic regions up to megabases long. MAVID is able to effectively align divergent sequences, as well as incomplete unfinished sequences. We demonstrate the capabilities of the program on the benchmark CFTR region which consists of 1.8Mb of human sequence and 20 orthologous regions in marsupials, birds, fish, and mammals. Finally, we describe two large MAVID alignments: an alignment of all the available HIV genomes and a multiple alignment of the entire human, mouse and rat genomes.
1602.05135
Mark Transtrum
Andrew White, Malachi Tolman, Howard D. Thames, Hubert Rodney Withers, Kathy A. Mason, Mark K. Transtrum
The limitations of model-based experimental design and parameter estimation in sloppy systems
27 pages, 8 figures
null
10.1371/journal.pcbi.1005227
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore the relationship among model fidelity, experimental design, and parameter estimation in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which relevant physics must be included to explain collective behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which details are relevant or irrelevant varies among potential experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's inadequacy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the error in the model renders it less predictive than it was in the sloppy regime where model error is small. We introduce the concept of a \emph{sloppy system}--a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that system identification is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model.
[ { "created": "Tue, 16 Feb 2016 18:43:47 GMT", "version": "v1" }, { "created": "Fri, 6 May 2016 23:04:07 GMT", "version": "v2" }, { "created": "Tue, 14 Jun 2016 15:02:13 GMT", "version": "v3" } ]
2017-02-08
[ [ "White", "Andrew", "" ], [ "Tolman", "Malachi", "" ], [ "Thames", "Howard D.", "" ], [ "Withers", "Hubert Rodney", "" ], [ "Mason", "Kathy A.", "" ], [ "Transtrum", "Mark K.", "" ] ]
We explore the relationship among model fidelity, experimental design, and parameter estimation in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which relevant physics must be included to explain collective behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which details are relevant or irrelevant varies among potential experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's inadequacy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the error in the model renders it less predictive than it was in the sloppy regime where model error is small. We introduce the concept of a \emph{sloppy system}--a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that system identification is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model.
2301.07599
Viktoria Blavatska
Viktoria Blavatska, Bartlomiej Waclaw
Evolutionary adaptation is facilitated by the presence of lethal genotypes
null
Phys. Rev. Research, 6 (2024) 013286
10.1103/PhysRevResearch.6.013286
null
q-bio.PE physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
The adaptation rate in theoretical models of biological evolution increases with the mutation rate but only to a point when mutations into lethal states cause extinction. One would expect that removing such states should be beneficial for evolution. We show here that, counter-intuitively, lethal mutations speed up adaptation on rugged fitness landscapes with many fitness maxima and minima, if strong competition for resources exists. We consider a modified stochastic version of the quasispecies model with two types of genotypes, viable and lethal, and show that increasing the rate of lethal mutations decreases the time to evolve the best-fit genotype. This can be explained by an increased rate of crossing fitness valleys, facilitated by reduced selection against less-fit variants.
[ { "created": "Wed, 18 Jan 2023 15:26:50 GMT", "version": "v1" }, { "created": "Tue, 7 Feb 2023 14:03:09 GMT", "version": "v2" } ]
2024-06-27
[ [ "Blavatska", "Viktoria", "" ], [ "Waclaw", "Bartlomiej", "" ] ]
The adaptation rate in theoretical models of biological evolution increases with the mutation rate but only to a point when mutations into lethal states cause extinction. One would expect that removing such states should be beneficial for evolution. We show here that, counter-intuitively, lethal mutations speed up adaptation on rugged fitness landscapes with many fitness maxima and minima, if strong competition for resources exists. We consider a modified stochastic version of the quasispecies model with two types of genotypes, viable and lethal, and show that increasing the rate of lethal mutations decreases the time to evolve the best-fit genotype. This can be explained by an increased rate of crossing fitness valleys, facilitated by reduced selection against less-fit variants.
2405.12022
Giampiero Bardella
Giampiero Bardella, Simone Franchini, Pierpaolo Pani and Stefano Ferraina
Lattice physics approaches for neural networks
10 pages
null
null
null
q-bio.NC physics.app-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Modern neuroscience has evolved into a frontier field that draws on numerous disciplines, resulting in the flourishing of novel conceptual frames primarily inspired by physics and complex systems science. Contributing in this direction, we recently introduced a mathematical framework to describe the spatiotemporal interactions of systems of neurons using lattice field theory, the reference paradigm for theoretical particle physics. In this note, we provide a concise summary of the basics of the theory, aiming to be intuitive to the interdisciplinary neuroscience community. We contextualize our methods, illustrating how to readily connect the parameters of our formulation to experimental variables using well-known renormalization procedures. This synopsis yields the key concepts needed to describe neural networks using lattice physics. Such classes of methods are attention-worthy in an era of blistering improvements in numerical computations, as they can facilitate relating the observation of neural activity to generative models underpinned by physical principles.
[ { "created": "Mon, 20 May 2024 13:42:54 GMT", "version": "v1" }, { "created": "Fri, 21 Jun 2024 09:47:56 GMT", "version": "v2" } ]
2024-06-24
[ [ "Bardella", "Giampiero", "" ], [ "Franchini", "Simone", "" ], [ "Pani", "Pierpaolo", "" ], [ "Ferraina", "Stefano", "" ] ]
Modern neuroscience has evolved into a frontier field that draws on numerous disciplines, resulting in the flourishing of novel conceptual frames primarily inspired by physics and complex systems science. Contributing in this direction, we recently introduced a mathematical framework to describe the spatiotemporal interactions of systems of neurons using lattice field theory, the reference paradigm for theoretical particle physics. In this note, we provide a concise summary of the basics of the theory, aiming to be intuitive to the interdisciplinary neuroscience community. We contextualize our methods, illustrating how to readily connect the parameters of our formulation to experimental variables using well-known renormalization procedures. This synopsis yields the key concepts needed to describe neural networks using lattice physics. Such classes of methods are attention-worthy in an era of blistering improvements in numerical computations, as they can facilitate relating the observation of neural activity to generative models underpinned by physical principles.
1906.05390
Mattia Miotto
Mattia Miotto, Lorenzo Di Rienzo, Pietro Corsi, Giancarlo Ruocco, Domenico Raimondo, Edoardo Milanetti
Simulated Epidemics in 3D Protein Structures to Detect Functional Properties
9 pages, 5 figures
Journal of chemical information and modeling 60 (3), 1884-1891 (2020)
10.1021/acs.jcim.9b01027
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The outcome of an epidemic is closely related to the network of interactions between the individuals. Likewise, protein functions depend on the 3D arrangement of their residues and on the underlying energetic interaction network. Borrowing ideas from the theoretical framework that has been developed to address the spreading of real diseases, we study the diffusion of a fictitious epidemic inside the protein non-bonded interaction network. Our approach allowed us to probe the overall stability and the capability to propagate information in complex 3D structures, and proved to be very efficient in addressing different problems, from the assessment of thermal stability to the identification of allosteric sites.
[ { "created": "Wed, 12 Jun 2019 21:33:56 GMT", "version": "v1" }, { "created": "Thu, 12 Dec 2019 11:41:41 GMT", "version": "v2" } ]
2021-07-19
[ [ "Miotto", "Mattia", "" ], [ "Di Rienzo", "Lorenzo", "" ], [ "Corsi", "Pietro", "" ], [ "Ruocco", "Giancarlo", "" ], [ "Raimondo", "Domenico", "" ], [ "Milanetti", "Edoardo", "" ] ]
The outcome of an epidemic is closely related to the network of interactions between the individuals. Likewise, protein functions depend on the 3D arrangement of their residues and on the underlying energetic interaction network. Borrowing ideas from the theoretical framework that has been developed to address the spreading of real diseases, we study the diffusion of a fictitious epidemic inside the protein non-bonded interaction network. Our approach allowed us to probe the overall stability and the capability to propagate information in complex 3D structures, and proved to be very efficient in addressing different problems, from the assessment of thermal stability to the identification of allosteric sites.
2011.13533
Arnd Pralle
Weixiang Jin, Michael Zucker and Arnd Pralle
Membrane Nanodomains Homeostasis During Propofol Anesthesia as Function of Dosage and Temperature
null
null
10.1016/j.bbamem.2020.183511
null
q-bio.CB q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Some anesthetics bind and potentiate gamma-aminobutyric-acid-type receptors, but no universal mechanism for general anesthesia is known. Furthermore, often-encountered complications such as anesthesia-induced amnesia are not understood. General anesthetics are hydrophobic molecules easily dissolving into lipid bilayers. Recently, it was shown that general anesthetics perturb phase separation in vesicles extracted from fixed cells. It remains unclear whether general anesthetics perturb the lipid bilayer under physiological conditions, and whether this contributes to the transient loss of consciousness or to anesthesia side effects. Here we show that propofol perturbs lipid nanodomains in the outer and inner leaflet of the plasma membrane in intact cells, affecting membrane nanodomains in a concentration dependent manner: 1 {\mu}M to 5 {\mu}M propofol destabilize nanodomains; however, propofol concentrations higher than 5 {\mu}M stabilize nanodomains with time. Stabilization occurs only at physiological temperature and in intact cells. This process requires ARP2/3 mediated actin nucleation and Myosin II activity. The rate of nanodomain stabilization is potentiated by GABA receptor activity. Our results show that active nanodomain homeostasis counteracts the initial disruption causing large changes in cortical actin.
[ { "created": "Fri, 27 Nov 2020 03:05:54 GMT", "version": "v1" } ]
2020-11-30
[ [ "Jin", "Weixiang", "" ], [ "Zucker", "Michael", "" ], [ "Pralle", "Arnd", "" ] ]
Some anesthetics bind and potentiate gamma-aminobutyric-acid-type receptors, but no universal mechanism for general anesthesia is known. Furthermore, often-encountered complications such as anesthesia-induced amnesia are not understood. General anesthetics are hydrophobic molecules easily dissolving into lipid bilayers. Recently, it was shown that general anesthetics perturb phase separation in vesicles extracted from fixed cells. It remains unclear whether general anesthetics perturb the lipid bilayer under physiological conditions, and whether this contributes to the transient loss of consciousness or to anesthesia side effects. Here we show that propofol perturbs lipid nanodomains in the outer and inner leaflet of the plasma membrane in intact cells, affecting membrane nanodomains in a concentration dependent manner: 1 {\mu}M to 5 {\mu}M propofol destabilize nanodomains; however, propofol concentrations higher than 5 {\mu}M stabilize nanodomains with time. Stabilization occurs only at physiological temperature and in intact cells. This process requires ARP2/3 mediated actin nucleation and Myosin II activity. The rate of nanodomain stabilization is potentiated by GABA receptor activity. Our results show that active nanodomain homeostasis counteracts the initial disruption causing large changes in cortical actin.
2002.07655
Johannes Kleiner
Johannes Kleiner and Sean Tull
The Mathematical Structure of Integrated Information Theory
null
null
null
null
q-bio.NC cs.AI cs.IT math.IT quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Integrated Information Theory is one of the leading models of consciousness. It aims to describe both the quality and quantity of the conscious experience of a physical system, such as the brain, in a particular state. In this contribution, we propound the mathematical structure of the theory, separating the essentials from auxiliary formal tools. We provide a definition of a generalized IIT which has IIT 3.0 of Tononi et al., as well as the Quantum IIT introduced by Zanardi et al., as special cases. This provides an axiomatic definition of the theory which may serve as the starting point for future formal investigations and as an introduction suitable for researchers with a formal background.
[ { "created": "Tue, 18 Feb 2020 15:44:02 GMT", "version": "v1" } ]
2020-02-19
[ [ "Kleiner", "Johannes", "" ], [ "Tull", "Sean", "" ] ]
Integrated Information Theory is one of the leading models of consciousness. It aims to describe both the quality and quantity of the conscious experience of a physical system, such as the brain, in a particular state. In this contribution, we propound the mathematical structure of the theory, separating the essentials from auxiliary formal tools. We provide a definition of a generalized IIT which has IIT 3.0 of Tononi et al., as well as the Quantum IIT introduced by Zanardi et al., as special cases. This provides an axiomatic definition of the theory which may serve as the starting point for future formal investigations and as an introduction suitable for researchers with a formal background.
1711.06325
Anthony Greenberg
Anthony J. Greenberg
Fast ordered sampling of DNA sequence variants
six figures
null
null
null
q-bio.GN stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explosive growth in the amount of genomic data is matched by increasing power of consumer-grade computers. Even applications that require powerful servers can be quickly tested on desktop or laptop machines if we can generate representative samples from large data sets. I describe a fast and memory-efficient implementation of an on-line sampling method developed for tape drives 30 years ago. Focusing on genotype files, I test the performance of this technique on modern solid-state and spinning hard drives, and show that it performs well compared to a simple sampling scheme. I illustrate its utility by developing a method to quickly estimate genome-wide patterns of linkage disequilibrium (LD) decay with distance. I provide open-source software that samples loci from several variant format files, a separate program that performs LD decay estimates, and a C++ library that lets developers incorporate these methods into their own projects.
[ { "created": "Thu, 16 Nov 2017 21:35:12 GMT", "version": "v1" } ]
2017-11-20
[ [ "Greenberg", "Anthony J.", "" ] ]
Explosive growth in the amount of genomic data is matched by increasing power of consumer-grade computers. Even applications that require powerful servers can be quickly tested on desktop or laptop machines if we can generate representative samples from large data sets. I describe a fast and memory-efficient implementation of an on-line sampling method developed for tape drives 30 years ago. Focusing on genotype files, I test the performance of this technique on modern solid-state and spinning hard drives, and show that it performs well compared to a simple sampling scheme. I illustrate its utility by developing a method to quickly estimate genome-wide patterns of linkage disequilibrium (LD) decay with distance. I provide open-source software that samples loci from several variant format files, a separate program that performs LD decay estimates, and a C++ library that lets developers incorporate these methods into their own projects.
1801.03316
Ekaterina Myasnikova
Ekaterina Myasnikova and Alexander Spirov
Modeling of gap gene regulatory networks in Drosophila with the account of the gene modular structure (in Russian)
in Russian
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genes are frequently regulated in complex manners, necessitating modelling approaches which go beyond simple (linear) gene-to-gene interactions and address the modularity of cis-regulatory regions and alternate transcription initiation sites. In particular, sharp expression patterns (peaks or stripes) indicate that gene regulation involves nonlinear transcription factor kinetics (beyond first-order). We propose a methodology for approaching this problem, using the example of the multiple cis-regulatory modules (CRMs) and two transcripts (P1 and P2) found in the Drosophila hunchback (hb) and (CD1 and CD2) in Kruppel (Kr) genes, the genes expressed along the anteroposterior axis of the embryo. The positional characteristics of the mRNA expression patterns are studied for their sensitivity to the variation of the input factors and initial conditions.
[ { "created": "Wed, 10 Jan 2018 11:38:43 GMT", "version": "v1" } ]
2018-01-11
[ [ "Myasnikova", "Ekaterina", "" ], [ "Spirov", "Alexander", "" ] ]
Genes are frequently regulated in complex manners, necessitating modelling approaches which go beyond simple (linear) gene-to-gene interactions and address the modularity of cis-regulatory regions and alternate transcription initiation sites. In particular, sharp expression patterns (peaks or stripes) indicate that gene regulation involves nonlinear transcription factor kinetics (beyond first-order). We propose a methodology for approaching this problem, using the example of the multiple cis-regulatory modules (CRMs) and two transcripts (P1 and P2) found in the Drosophila hunchback (hb) and (CD1 and CD2) in Kruppel (Kr) genes, the genes expressed along the anteroposterior axis of the embryo. The positional characteristics of the mRNA expression patterns are studied for their sensitivity to the variation of the input factors and initial conditions.
1110.1091
Carsten Lemmen
Carsten Lemmen and Aurangzeb Khan
A simulation of the Neolithic transition in the Indus valley
Chapter manuscript revision submitted to AGU monograph "Climates, landscapes and civilizations", 6 pages, 2 figures
null
10.1029/2012GM001217
null
q-bio.PE cs.MA
http://creativecommons.org/licenses/by/3.0/
The Indus Valley Civilization (IVC) was one of the first great civilizations in prehistory. This bronze age civilization flourished from the end of the fourth millennium BC. It disintegrated during the second millennium BC; despite much research effort, this decline is not well understood. Less research has been devoted to the emergence of the IVC, which shows continuous cultural precursors since at least the seventh millennium BC. To understand the decline, we believe it is necessary to investigate the rise of the IVC, i.e., the establishment of agriculture and livestock, dense populations and technological developments 7000--3000 BC. Although much archaeological information is available, our capability to investigate the system is hindered by poorly resolved chronology, and by a lack of field work in the intermediate areas between the Indus valley and Mesopotamia. We thus employ a complementary numerical simulation to develop a consistent picture of technology, agropastoralism and population developments in the IVC domain. Results from this Global Land Use and technological Evolution Simulator show that there is (1) fair agreement between the simulated timing of the agricultural transition and radiocarbon dates from early agricultural sites, but the transition is simulated first in India then Pakistan; (2) an independent agropastoralism developing on the Indian subcontinent; and (3) a positive relationship between archeological artifact richness and simulated population density which remains to be quantified.
[ { "created": "Wed, 5 Oct 2011 20:04:10 GMT", "version": "v1" }, { "created": "Wed, 18 Apr 2012 08:56:26 GMT", "version": "v2" }, { "created": "Mon, 7 May 2012 08:47:42 GMT", "version": "v3" } ]
2017-02-24
[ [ "Lemmen", "Carsten", "" ], [ "Khan", "Aurangzeb", "" ] ]
The Indus Valley Civilization (IVC) was one of the first great civilizations in prehistory. This bronze age civilization flourished from the end of the fourth millennium BC. It disintegrated during the second millennium BC; despite much research effort, this decline is not well understood. Less research has been devoted to the emergence of the IVC, which shows continuous cultural precursors since at least the seventh millennium BC. To understand the decline, we believe it is necessary to investigate the rise of the IVC, i.e., the establishment of agriculture and livestock, dense populations and technological developments 7000--3000 BC. Although much archaeological information is available, our capability to investigate the system is hindered by poorly resolved chronology, and by a lack of field work in the intermediate areas between the Indus valley and Mesopotamia. We thus employ a complementary numerical simulation to develop a consistent picture of technology, agropastoralism and population developments in the IVC domain. Results from this Global Land Use and technological Evolution Simulator show that there is (1) fair agreement between the simulated timing of the agricultural transition and radiocarbon dates from early agricultural sites, but the transition is simulated first in India then Pakistan; (2) an independent agropastoralism developing on the Indian subcontinent; and (3) a positive relationship between archeological artifact richness and simulated population density which remains to be quantified.
2405.16865
Dehong Xu
Dehong Xu, Ruiqi Gao, Wen-Hao Zhang, Xue-Xin Wei, Ying Nian Wu
An Investigation of Conformal Isometry Hypothesis for Grid Cells
arXiv admin note: text overlap with arXiv:2310.19192
null
null
null
q-bio.NC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the conformal isometry hypothesis as a potential explanation for the emergence of hexagonal periodic patterns in the response maps of grid cells. The hypothesis posits that the activities of the population of grid cells form a high-dimensional vector in the neural space, representing the agent's self-position in 2D physical space. As the agent moves in the 2D physical space, the vector rotates in a 2D manifold in the neural space, driven by a recurrent neural network. The conformal isometry hypothesis proposes that this 2D manifold in the neural space is a conformally isometric embedding of the 2D physical space, in the sense that local displacements of the vector in neural space are proportional to local displacements of the agent in the physical space. Thus the 2D manifold forms an internal map of the 2D physical space, equipped with an internal metric. In this paper, we conduct numerical experiments to show that this hypothesis underlies the hexagon periodic patterns of grid cells. We also conduct theoretical analysis to further support this hypothesis. In addition, we propose a conformal modulation of the input velocity of the agent so that the recurrent neural network of grid cells satisfies the conformal isometry hypothesis automatically. To summarize, our work provides numerical and theoretical evidence for the conformal isometry hypothesis for grid cells and may serve as a foundation for further development of normative models of grid cells and beyond.
[ { "created": "Mon, 27 May 2024 06:31:39 GMT", "version": "v1" } ]
2024-05-28
[ [ "Xu", "Dehong", "" ], [ "Gao", "Ruiqi", "" ], [ "Zhang", "Wen-Hao", "" ], [ "Wei", "Xue-Xin", "" ], [ "Wu", "Ying Nian", "" ] ]
This paper investigates the conformal isometry hypothesis as a potential explanation for the emergence of hexagonal periodic patterns in the response maps of grid cells. The hypothesis posits that the activities of the population of grid cells form a high-dimensional vector in the neural space, representing the agent's self-position in 2D physical space. As the agent moves in the 2D physical space, the vector rotates in a 2D manifold in the neural space, driven by a recurrent neural network. The conformal isometry hypothesis proposes that this 2D manifold in the neural space is a conformally isometric embedding of the 2D physical space, in the sense that local displacements of the vector in neural space are proportional to local displacements of the agent in the physical space. Thus the 2D manifold forms an internal map of the 2D physical space, equipped with an internal metric. In this paper, we conduct numerical experiments to show that this hypothesis underlies the hexagon periodic patterns of grid cells. We also conduct theoretical analysis to further support this hypothesis. In addition, we propose a conformal modulation of the input velocity of the agent so that the recurrent neural network of grid cells satisfies the conformal isometry hypothesis automatically. To summarize, our work provides numerical and theoretical evidence for the conformal isometry hypothesis for grid cells and may serve as a foundation for further development of normative models of grid cells and beyond.
1901.09861
Vishwa Parekh
Vishwa S. Parekh, Michael A. Jacobs
Tumor Connectomics: Mapping the intra-tumoral complex interaction network
7 pages, 5 figures, SPIE Medical Imaging
null
null
null
q-bio.QM cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tumors are extremely heterogeneous and comprise a number of intratumor microenvironments or sub-regions. These tumor microenvironments may interact with each other based on complex high-level relationships, which could provide important insight into the organizational structure of the tumor network. To that end, we developed a tumor connectomics framework (TCF) to understand and model the complex functional and morphological interactions within the tumor. Then, we demonstrate the TCF's potential in predicting treatment response in breast cancer patients being treated with neoadjuvant chemotherapy. The TCF was implemented on a breast cancer patient cohort of thirty-four patients with dynamic contrast enhanced (DCE) magnetic resonance imaging (MRI) undergoing neoadjuvant chemotherapy treatment. The intra-tumor network connections (tumor connectome) before and after treatment were modeled using advanced graph theoretic centrality, path length and clustering metrics from the DCE-MRI. The percentage change of the graph metrics between two time-points (Baseline and 1st cycle) was computed to predict the patient's final response to treatment. The TCF visualized the inter-voxel network connections across multiple time-points and was able to evaluate specific changes in the tumor connectome with treatment. Degree centrality was identified as the most significant predictor of treatment response with an AUC of 0.83 for classifying responders from non-responders. In conclusion, the TCF graph metrics produced excellent biomarkers for prediction of breast cancer treatment response with improved visualization and interpretability of changes both locally and globally in the tumor.
[ { "created": "Mon, 28 Jan 2019 18:07:46 GMT", "version": "v1" } ]
2019-01-29
[ [ "Parekh", "Vishwa S.", "" ], [ "Jacobs", "Michael A.", "" ] ]
Tumors are extremely heterogeneous and comprise a number of intratumor microenvironments or sub-regions. These tumor microenvironments may interact with each other based on complex high-level relationships, which could provide important insight into the organizational structure of the tumor network. To that end, we developed a tumor connectomics framework (TCF) to understand and model the complex functional and morphological interactions within the tumor. Then, we demonstrate the TCF's potential in predicting treatment response in breast cancer patients being treated with neoadjuvant chemotherapy. The TCF was implemented on a breast cancer patient cohort of thirty-four patients with dynamic contrast enhanced (DCE) magnetic resonance imaging (MRI) undergoing neoadjuvant chemotherapy treatment. The intra-tumor network connections (tumor connectome) before and after treatment were modeled using advanced graph theoretic centrality, path length and clustering metrics from the DCE-MRI. The percentage change of the graph metrics between two time-points (Baseline and 1st cycle) was computed to predict the patient's final response to treatment. The TCF visualized the inter-voxel network connections across multiple time-points and was able to evaluate specific changes in the tumor connectome with treatment. Degree centrality was identified as the most significant predictor of treatment response with an AUC of 0.83 for classifying responders from non-responders. In conclusion, the TCF graph metrics produced excellent biomarkers for prediction of breast cancer treatment response with improved visualization and interpretability of changes both locally and globally in the tumor.
1303.4383
J. C. Phillips
J. C. Phillips
Hierarchical hydropathic evolution of influenza glycoproteins (N2, H3, A/H3N2) under relentless vaccination pressure
null
null
null
null
q-bio.BM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hemagglutinin (HA) and neuraminidase (NA) are highly variable envelope glycoproteins. Here hydropathic analysis, previously applied to quantify common flu (H1N1) evolution (1934-), is applied to the evolution of less common but more virulent (avian derived) H3N2 (1968-), beginning with N2. Whereas N1 exhibited opposing migration and vaccination pressures, the dominant N2 trend is due to vaccination, with only secondary migration interactions. Separation and evaluation of these effects is made possible by the use of two distinct hydropathic scales representing first-order and second-order thermodynamic interactions. The evolutions of H1 and H3 are more complex, with larger competing migration and vaccination effects. The linkages of H3 and N2 evolutionary trends are examined on two modular length scales, medium (glycosidic) and large (corresponding to sialic acid interactions). The hierarchical hydropathic results complement and greatly extend advanced phylogenetic results obtained from similarity studies. They exhibit simple quantitative trends that can be transferred to engineer oncolytic properties of other viral proteins to treat recalcitrant cancers.
[ { "created": "Sat, 16 Mar 2013 16:54:18 GMT", "version": "v1" } ]
2013-03-20
[ [ "Phillips", "J. C.", "" ] ]
Hemagglutinin (HA) and neuraminidase (NA) are highly variable envelope glycoproteins. Here hydropathic analysis, previously applied to quantify common flu (H1N1) evolution (1934-), is applied to the evolution of less common but more virulent (avian derived) H3N2 (1968-), beginning with N2. Whereas N1 exhibited opposing migration and vaccination pressures, the dominant N2 trend is due to vaccination, with only secondary migration interactions. Separation and evaluation of these effects is made possible by the use of two distinct hydropathic scales representing first-order and second-order thermodynamic interactions. The evolutions of H1 and H3 are more complex, with larger competing migration and vaccination effects. The linkages of H3 and N2 evolutionary trends are examined on two modular length scales, medium (glycosidic) and large (corresponding to sialic acid interactions). The hierarchical hydropathic results complement and greatly extend advanced phylogenetic results obtained from similarity studies. They exhibit simple quantitative trends that can be transferred to engineer oncolytic properties of other viral proteins to treat recalcitrant cancers.
1806.01078
Pedro Mendes
Pedro Mendes
Reproducible research using biomodels
null
Bulletin of Mathematical Biology 80:3081-3087 (2018)
10.1007/s11538-018-0498-z
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Like other types of computational research, modeling and simulation of biological processes (biomodels) is still largely communicated without sufficient detail to allow independent reproduction of results. But reproducibility in this area of research could easily be achieved by making use of existing resources, such as supplying models in standard formats and depositing code, models, and results in public repositories.
[ { "created": "Mon, 4 Jun 2018 12:45:54 GMT", "version": "v1" } ]
2023-04-19
[ [ "Mendes", "Pedro", "" ] ]
Like other types of computational research, modeling and simulation of biological processes (biomodels) is still largely communicated without sufficient detail to allow independent reproduction of results. But reproducibility in this area of research could easily be achieved by making use of existing resources, such as supplying models in standard formats and depositing code, models, and results in public repositories.
1902.06630
Eugene Shakhnovich
Jo\~ao V. Rodrigues and Eugene Shakhnovich
Evolutionary dynamics determines adaptation to inactivation of an essential gene
null
null
null
null
q-bio.PE q-bio.BM
http://creativecommons.org/publicdomain/zero/1.0/
Genetic inactivation of essential genes creates an evolutionary scenario distinct from escape from drug inhibition, but the mechanisms of microbe adaptations in such cases remain unknown. Here we inactivate E. coli dihydrofolate reductase (DHFR) by introducing D27G,N,F chromosomal mutations in a key catalytic residue with subsequent adaptation by serial dilutions. The partial reversal G27->C occurred in three evolutionary trajectories. Conversely, in one trajectory for D27G and in all trajectories for D27F,N strains adapted to grow at very low supplement folAmix concentrations but did not escape entirely from supplement auxotrophy. Major global shifts in metabolome and proteome occurred upon DHFR inactivation, which were partially reversed in adapted strains. Loss of function mutations in two genes, thyA and deoB, ensured adaptation to low folAmix by rerouting the 2-Deoxy-D-ribose-phosphate metabolism from glycolysis towards synthesis of dTMP. Multiple evolutionary pathways of adaptation to low folAmix converge to a highly accessible yet suboptimal fitness peak.
[ { "created": "Mon, 18 Feb 2019 16:06:58 GMT", "version": "v1" } ]
2019-02-19
[ [ "Rodrigues", "João V.", "" ], [ "Shakhnovich", "Eugene", "" ] ]
Genetic inactivation of essential genes creates an evolutionary scenario distinct from escape from drug inhibition, but the mechanisms of microbe adaptations in such cases remain unknown. Here we inactivate E. coli dihydrofolate reductase (DHFR) by introducing D27G,N,F chromosomal mutations in a key catalytic residue with subsequent adaptation by serial dilutions. The partial reversal G27->C occurred in three evolutionary trajectories. Conversely, in one trajectory for D27G and in all trajectories for D27F,N strains adapted to grow at very low supplement folAmix concentrations but did not escape entirely from supplement auxotrophy. Major global shifts in metabolome and proteome occurred upon DHFR inactivation, which were partially reversed in adapted strains. Loss of function mutations in two genes, thyA and deoB, ensured adaptation to low folAmix by rerouting the 2-Deoxy-D-ribose-phosphate metabolism from glycolysis towards synthesis of dTMP. Multiple evolutionary pathways of adaptation to low folAmix converge to a highly accessible yet suboptimal fitness peak.
1609.07293
Antoine Recanati
Antoine Recanati, Thomas Br\"uls, Alexandre d'Aspremont
A spectral algorithm for fast de novo layout of uncorrected long nanopore reads
Now includes additional experiments, with a comparison of the method to Canu, Miniasm and Racon with other datasets, and evaluation of the computational performance
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: New long read sequencers promise to transform sequencing and genome assembly by producing reads tens of kilobases long. However their high error rate significantly complicates assembly and requires expensive correction steps to lay out the reads using standard assembly engines. Results: We present an original and efficient spectral algorithm to lay out the uncorrected nanopore reads, and its seamless integration into a straightforward overlap/layout/consensus (OLC) assembly scheme. The method is shown to assemble Oxford Nanopore reads from several bacterial genomes into good quality (~99% identity to the reference) genome-sized contigs, while yielding more fragmented assemblies from a Saccharomyces cerevisiae reference strain. Availability and implementation: http://github.com/antrec/spectrassembler Contact: antoine.recanati@inria.fr
[ { "created": "Fri, 23 Sep 2016 09:54:42 GMT", "version": "v1" }, { "created": "Mon, 6 Mar 2017 13:25:44 GMT", "version": "v2" }, { "created": "Mon, 17 Jul 2017 15:50:11 GMT", "version": "v3" } ]
2017-07-18
[ [ "Recanati", "Antoine", "" ], [ "Brüls", "Thomas", "" ], [ "d'Aspremont", "Alexandre", "" ] ]
Motivation: New long read sequencers promise to transform sequencing and genome assembly by producing reads tens of kilobases long. However their high error rate significantly complicates assembly and requires expensive correction steps to lay out the reads using standard assembly engines. Results: We present an original and efficient spectral algorithm to lay out the uncorrected nanopore reads, and its seamless integration into a straightforward overlap/layout/consensus (OLC) assembly scheme. The method is shown to assemble Oxford Nanopore reads from several bacterial genomes into good quality (~99% identity to the reference) genome-sized contigs, while yielding more fragmented assemblies from a Saccharomyces cerevisiae reference strain. Availability and implementation: http://github.com/antrec/spectrassembler Contact: antoine.recanati@inria.fr
2011.11343
Yu Zhang
Yueran Yang, Yu Zhang, Shuai Li, Xubin Zheng, Man-Hon Wong, Kwong-Sak Leung, Lixin Cheng
A robust and generalizable immune-related signature for sepsis diagnostics
null
null
10.1109/TCBB.2021.3107874
null
q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
High-throughput sequencing can detect tens of thousands of genes in parallel, providing opportunities for improving the diagnostic accuracy of multiple diseases including sepsis, which is an aggressive inflammatory response to infection that can cause organ failure and death. Early screening of sepsis is essential in the clinic, but no effective diagnostic biomarkers are available yet. Here, we present a novel method, Recurrent Logistic Regression, to identify diagnostic biomarkers for sepsis from blood transcriptome data. A panel of five immune-related genes, LRRN3, IL2RB, FCER1A, TLR5, and S100A12, is determined as a diagnostic biomarker (LIFTS) for sepsis. LIFTS discriminates patients with sepsis from normal controls with high accuracy (AUROC = 0.9959 on average; IC = [0.9722-1.0]) on nine validation cohorts across three independent platforms, which outperforms existing markers. Our analysis determined an accurate prediction model and reproducible transcriptome biomarkers that can lay a foundation for clinical diagnostic tests and biological mechanistic studies.
[ { "created": "Mon, 23 Nov 2020 11:49:26 GMT", "version": "v1" }, { "created": "Mon, 16 Aug 2021 14:42:43 GMT", "version": "v2" }, { "created": "Wed, 8 Dec 2021 17:52:50 GMT", "version": "v3" } ]
2021-12-09
[ [ "Yang", "Yueran", "" ], [ "Zhang", "Yu", "" ], [ "Li", "Shuai", "" ], [ "Zheng", "Xubin", "" ], [ "Wong", "Man-Hon", "" ], [ "Leung", "Kwong-Sak", "" ], [ "Cheng", "Lixin", "" ] ]
High-throughput sequencing can detect tens of thousands of genes in parallel, providing opportunities for improving the diagnostic accuracy of multiple diseases including sepsis, which is an aggressive inflammatory response to infection that can cause organ failure and death. Early screening of sepsis is essential in the clinic, but no effective diagnostic biomarkers are available yet. Here, we present a novel method, Recurrent Logistic Regression, to identify diagnostic biomarkers for sepsis from blood transcriptome data. A panel of five immune-related genes, LRRN3, IL2RB, FCER1A, TLR5, and S100A12, is determined as a diagnostic biomarker (LIFTS) for sepsis. LIFTS discriminates patients with sepsis from normal controls with high accuracy (AUROC = 0.9959 on average; IC = [0.9722-1.0]) on nine validation cohorts across three independent platforms, which outperforms existing markers. Our analysis determined an accurate prediction model and reproducible transcriptome biomarkers that can lay a foundation for clinical diagnostic tests and biological mechanistic studies.
1612.06319
Anton Sinitskiy
Anton V. Sinitskiy, Nathaniel H. Stanley, David H. Hackos, Jesse E. Hanson, Benjamin D. Sellers, Vijay S. Pande
Computationally Discovered Potentiating Role of Glycans on NMDA Receptors
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
N-methyl-D-aspartate receptors (NMDARs) are glycoproteins in the brain central to learning and memory. The effects of glycosylation on the structure and dynamics of NMDARs are largely unknown. In this work, we use extensive molecular dynamics simulations of GluN1 and GluN2B ligand binding domains (LBDs) of NMDARs to investigate these effects. Our simulations predict that intra-domain interactions involving the glycan attached to residue GluN1-N440 stabilize closed-clamshell conformations of the GluN1 LBD. The glycan on GluN2B-N688 shows a similar, though weaker, effect. Based on these results, and assuming the transferability of the results of LBD simulations to the full receptor, we predict that glycans at GluN1-N440 might play a potentiator role in NMDARs. To validate this prediction, we perform electrophysiological analysis of full-length NMDARs with a glycosylation-preventing GluN1-N440Q mutation, and demonstrate an increase in the glycine EC50 value. Overall, our results suggest an intramolecular potentiating role of glycans on NMDA receptors.
[ { "created": "Mon, 19 Dec 2016 19:29:06 GMT", "version": "v1" } ]
2016-12-20
[ [ "Sinitskiy", "Anton V.", "" ], [ "Stanley", "Nathaniel H.", "" ], [ "Hackos", "David H.", "" ], [ "Hanson", "Jesse E.", "" ], [ "Sellers", "Benjamin D.", "" ], [ "Pande", "Vijay S.", "" ] ]
N-methyl-D-aspartate receptors (NMDARs) are glycoproteins in the brain central to learning and memory. The effects of glycosylation on the structure and dynamics of NMDARs are largely unknown. In this work, we use extensive molecular dynamics simulations of GluN1 and GluN2B ligand binding domains (LBDs) of NMDARs to investigate these effects. Our simulations predict that intra-domain interactions involving the glycan attached to residue GluN1-N440 stabilize closed-clamshell conformations of the GluN1 LBD. The glycan on GluN2B-N688 shows a similar, though weaker, effect. Based on these results, and assuming the transferability of the results of LBD simulations to the full receptor, we predict that glycans at GluN1-N440 might play a potentiator role in NMDARs. To validate this prediction, we perform electrophysiological analysis of full-length NMDARs with a glycosylation-preventing GluN1-N440Q mutation, and demonstrate an increase in the glycine EC50 value. Overall, our results suggest an intramolecular potentiating role of glycans on NMDA receptors.
1511.07977
Rani Parvathy Venkitakrishnan
Rani P Venkitakrishnan and Manojendu Choudhury
Importance of method validation: Implications of non-correlated observables in sweet taste receptor studies
2 tables, 2 figures, information on common sweeteners and methodology. Submitted to Current Science
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Validation of research methodology is critical in research design. Correlation between experimental observables must be established before undertaking extensive experiments or proposing mechanisms. This article shows that observables in the popular calcium flux strength assay used in the characterization of sweetener-sweet taste receptor (STR) interaction are uncorrelated. In the pursuit of potential sweeteners and enhancers, calcium flux generated via G-protein coupling for wildtype and mutant receptors expressed on the cell surface is measured to identify and localize sweetener binding sites. Results are channeled into sweetener development with direct impact on public health. We show that flux strength is independent of EC50 and sweet potency. The sweet potency-EC50 relation is non-linear and anti-correlated. Single-point mutants affecting receptor efficiency, without a significant shift in EC50, have been published, indicating that flux strength is independent of ligand binding. The G-protein coupling step is likely what is observed in the assay. Thus, years have been spent generating uncorrelated data. Data from uncorrelated observables do not give meaningful results. Still, the majority of research in the field uses change in calcium flux strength to study the receptor. A methodology other than monitoring flux strength is required for sweetener development and to re-establish the binding localization of sweeteners established by the flux strength method. This article serves to remind researchers to validate methodology before plunging into long-term projects. Ignoring validation tests on methodology has been a costly mistake in the field. The concepts discussed here are applicable whenever observables in biological systems are many steps removed from the event of interest.
[ { "created": "Wed, 25 Nov 2015 07:30:07 GMT", "version": "v1" } ]
2015-11-26
[ [ "Venkitakrishnan", "Rani P", "" ], [ "Choudhury", "Manojendu", "" ] ]
Validation of research methodology is critical in research design. Correlation between experimental observables must be established before undertaking extensive experiments or proposing mechanisms. This article shows that observables in the popular calcium flux strength assay used in the characterization of sweetener-sweet taste receptor (STR) interaction are uncorrelated. In the pursuit of potential sweeteners and enhancers, calcium flux generated via G-protein coupling for wildtype and mutant receptors expressed on the cell surface is measured to identify and localize sweetener binding sites. Results are channeled into sweetener development with direct impact on public health. We show that flux strength is independent of EC50 and sweet potency. The sweet potency-EC50 relation is non-linear and anti-correlated. Single-point mutants affecting receptor efficiency, without a significant shift in EC50, have been published, indicating that flux strength is independent of ligand binding. The G-protein coupling step is likely what is observed in the assay. Thus, years have been spent generating uncorrelated data. Data from uncorrelated observables do not give meaningful results. Still, the majority of research in the field uses change in calcium flux strength to study the receptor. A methodology other than monitoring flux strength is required for sweetener development and to re-establish the binding localization of sweeteners established by the flux strength method. This article serves to remind researchers to validate methodology before plunging into long-term projects. Ignoring validation tests on methodology has been a costly mistake in the field. The concepts discussed here are applicable whenever observables in biological systems are many steps removed from the event of interest.
1810.07215
Elizabeth Hobson
Elizabeth A. Hobson, Dan M{\o}nster, Simon DeDeo
Aggression heuristics underlie animal dominance hierarchies and provide evidence of group-level social information
Comments welcome
Proceedings of the National Academy of Sciences, 118(10), e2022912118. 2021
10.1073/pnas.2022912118
null
q-bio.PE nlin.AO physics.soc-ph q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Members of a social species need to make appropriate decisions about who, how, and when to interact with others in their group. However, it has been difficult for researchers to detect the inputs to these decisions and, in particular, how much information individuals actually have about their social context. We present a new method that can serve as a social assay to quantify how patterns of aggression depend upon information about the ranks of individuals within social dominance hierarchies. Applied to existing data on aggression in 172 social groups across 85 species in 23 orders, it reveals three main patterns of rank-dependent social dominance: the downward heuristic (aggress uniformly against lower-ranked opponents), close competitors (aggress against opponents ranked slightly below self), and bullying (aggress against opponents ranked much lower than self). The majority of the groups (133 groups, 77%) follow a downward heuristic, but a significant minority (38 groups, 22%) show more complex social dominance patterns (close competitors or bullying) consistent with higher levels of social information use. These patterns are not phylogenetically constrained and different groups within the same species can use different patterns, suggesting that heuristics use may depend on context and the structuring of aggression by social information should not be considered a fixed characteristic of a species. Our approach provides new opportunities to study the use of social information within and across species and the evolution of social complexity and cognition.
[ { "created": "Tue, 16 Oct 2018 18:18:44 GMT", "version": "v1" }, { "created": "Fri, 22 Mar 2019 19:15:31 GMT", "version": "v2" }, { "created": "Thu, 12 Nov 2020 15:09:12 GMT", "version": "v3" } ]
2024-07-15
[ [ "Hobson", "Elizabeth A.", "" ], [ "Mønster", "Dan", "" ], [ "DeDeo", "Simon", "" ] ]
Members of a social species need to make appropriate decisions about who, how, and when to interact with others in their group. However, it has been difficult for researchers to detect the inputs to these decisions and, in particular, how much information individuals actually have about their social context. We present a new method that can serve as a social assay to quantify how patterns of aggression depend upon information about the ranks of individuals within social dominance hierarchies. Applied to existing data on aggression in 172 social groups across 85 species in 23 orders, it reveals three main patterns of rank-dependent social dominance: the downward heuristic (aggress uniformly against lower-ranked opponents), close competitors (aggress against opponents ranked slightly below self), and bullying (aggress against opponents ranked much lower than self). The majority of the groups (133 groups, 77%) follow a downward heuristic, but a significant minority (38 groups, 22%) show more complex social dominance patterns (close competitors or bullying) consistent with higher levels of social information use. These patterns are not phylogenetically constrained and different groups within the same species can use different patterns, suggesting that heuristic use may depend on context and that the structuring of aggression by social information should not be considered a fixed characteristic of a species. Our approach provides new opportunities to study the use of social information within and across species and the evolution of social complexity and cognition.
0807.0673
Satoru Morita
Satoru Morita, Kei-ichi Tainaka, Hiroyasu Nagata, and Jin Yoshimura
Population Uncertainty in Model Ecosystem: Analysis by Stochastic Differential Equation
16 pages, 4 figure, submitted to J. Phys. Soc. Jpn
Journal of the Physical Society of Japan, 77, 093801 1-4 (2008)
10.1143/JPSJ.77.093801
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Perturbation experiments are carried out by contact process and its mean-field version. Here, the mortality rate is increased or decreased suddenly. It is known that the fluctuation enhancement (FE) occurs after the perturbation, where FE means a population uncertainty. In the present paper, we develop a new theory of stochastic differential equation. The agreement between the theory and the mean-field simulation is almost perfect. This theory enables us to find much stronger FE than reported previously. We discuss the population uncertainty in the recovering process of endangered species.
[ { "created": "Fri, 4 Jul 2008 03:00:19 GMT", "version": "v1" } ]
2012-07-20
[ [ "Morita", "Satoru", "" ], [ "Tainaka", "Kei-ichi", "" ], [ "Nagata", "Hiroyasu", "" ], [ "Yoshimura", "Jin", "" ] ]
Perturbation experiments are carried out by contact process and its mean-field version. Here, the mortality rate is increased or decreased suddenly. It is known that the fluctuation enhancement (FE) occurs after the perturbation, where FE means a population uncertainty. In the present paper, we develop a new theory of stochastic differential equation. The agreement between the theory and the mean-field simulation is almost perfect. This theory enables us to find much stronger FE than reported previously. We discuss the population uncertainty in the recovering process of endangered species.
2107.02906
Alessandro Casa
Maria Frizzarin, Antonio Bevilacqua, Bhaskar Dhariyal, Katarina Domijan, Federico Ferraccioli, Elena Hayes, Georgiana Ifrim, Agnieszka Konkolewska, Thach Le Nguyen, Uche Mbaka, Giovanna Ranzato, Ashish Singh, Marco Stefanucci, Alessandro Casa
Mid infrared spectroscopy and milk quality traits: a data analysis competition at the "International Workshop on Spectroscopy and Chemometrics 2021"
17 pages, 6 figures, 6 tables
Chemometrics and Intelligent Laboratory Systems, 2021, Volume 219, 104442
10.1016/j.chemolab.2021.104442
null
q-bio.QM stat.AP
http://creativecommons.org/licenses/by/4.0/
A chemometric data analysis challenge has been arranged during the first edition of the "International Workshop on Spectroscopy and Chemometrics", organized by the Vistamilk SFI Research Centre and held online in April 2021. The aim of the competition was to build a calibration model in order to predict milk quality traits exploiting the information contained in mid-infrared spectra only. Three different traits have been provided, presenting heterogeneous degrees of prediction complexity thus possibly requiring trait-specific modelling choices. In this paper the different approaches adopted by the participants are outlined and the insights obtained from the analyses are critically discussed.
[ { "created": "Mon, 5 Jul 2021 14:40:17 GMT", "version": "v1" }, { "created": "Mon, 19 Sep 2022 07:40:37 GMT", "version": "v2" } ]
2022-09-20
[ [ "Frizzarin", "Maria", "" ], [ "Bevilacqua", "Antonio", "" ], [ "Dhariyal", "Bhaskar", "" ], [ "Domijan", "Katarina", "" ], [ "Ferraccioli", "Federico", "" ], [ "Hayes", "Elena", "" ], [ "Ifrim", "Georgiana", "" ], [ "Konkolewska", "Agnieszka", "" ], [ "Nguyen", "Thach Le", "" ], [ "Mbaka", "Uche", "" ], [ "Ranzato", "Giovanna", "" ], [ "Singh", "Ashish", "" ], [ "Stefanucci", "Marco", "" ], [ "Casa", "Alessandro", "" ] ]
A chemometric data analysis challenge has been arranged during the first edition of the "International Workshop on Spectroscopy and Chemometrics", organized by the Vistamilk SFI Research Centre and held online in April 2021. The aim of the competition was to build a calibration model in order to predict milk quality traits exploiting the information contained in mid-infrared spectra only. Three different traits have been provided, presenting heterogeneous degrees of prediction complexity thus possibly requiring trait-specific modelling choices. In this paper the different approaches adopted by the participants are outlined and the insights obtained from the analyses are critically discussed.
1905.01337
Ulrich S. Schwarz
Felix Frey, Falko Ziebert, Ulrich S. Schwarz (Heidelberg University)
Dynamics of particle uptake at cell membranes
20 pages, 19 figures
Phys. Rev. E 100, 052403 (2019)
10.1103/PhysRevE.100.052403
null
q-bio.SC cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Receptor-mediated endocytosis requires that the energy of adhesion overcomes the deformation energy of the plasma membrane. The resulting driving force is balanced by dissipative forces, leading to deterministic dynamical equations. While the shape of the free membrane does not play an important role for tensed and loose membranes, in the intermediate regime it leads to an important energy barrier. Here we show that this barrier is similar to but different from an effective line tension and suggest a simple analytical approximation for it. We then explore the rich dynamics of uptake for particles of different shapes and present the corresponding dynamical state diagrams. We also extend our model to include stochastic fluctuations, which facilitate uptake and lead to corresponding changes in the phase diagrams.
[ { "created": "Fri, 3 May 2019 18:34:02 GMT", "version": "v1" }, { "created": "Fri, 25 Oct 2019 18:48:09 GMT", "version": "v2" } ]
2019-11-20
[ [ "Frey", "Felix", "", "Heidelberg University" ], [ "Ziebert", "Falko", "", "Heidelberg University" ], [ "Schwarz", "Ulrich S.", "", "Heidelberg University" ] ]
Receptor-mediated endocytosis requires that the energy of adhesion overcomes the deformation energy of the plasma membrane. The resulting driving force is balanced by dissipative forces, leading to deterministic dynamical equations. While the shape of the free membrane does not play an important role for tensed and loose membranes, in the intermediate regime it leads to an important energy barrier. Here we show that this barrier is similar to but different from an effective line tension and suggest a simple analytical approximation for it. We then explore the rich dynamics of uptake for particles of different shapes and present the corresponding dynamical state diagrams. We also extend our model to include stochastic fluctuations, which facilitate uptake and lead to corresponding changes in the phase diagrams.
0805.1776
Andrew Mugler
Andrew Mugler, Etay Ziv, Ilya Nemenman, Chris H. Wiggins
Serially-regulated biological networks fully realize a constrained set of functions
9 pages, 3 figures
IET Sys. Bio. 2 (2008) 313-322
10.1049/iet-syb:20080097
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that biological networks with serial regulation (each node regulated by at most one other node) are constrained to {\it direct functionality}, in which the sign of the effect of an environmental input on a target species depends only on the direct path from the input to the target, even when there is a feedback loop allowing for multiple interaction pathways. Using a stochastic model for a set of small transcriptional regulatory networks that have been studied experimentally, we further find that all networks can achieve all functions permitted by this constraint under reasonable settings of biochemical parameters. This underscores the functional versatility of the networks.
[ { "created": "Tue, 13 May 2008 05:00:26 GMT", "version": "v1" } ]
2009-11-30
[ [ "Mugler", "Andrew", "" ], [ "Ziv", "Etay", "" ], [ "Nemenman", "Ilya", "" ], [ "Wiggins", "Chris H.", "" ] ]
We show that biological networks with serial regulation (each node regulated by at most one other node) are constrained to {\it direct functionality}, in which the sign of the effect of an environmental input on a target species depends only on the direct path from the input to the target, even when there is a feedback loop allowing for multiple interaction pathways. Using a stochastic model for a set of small transcriptional regulatory networks that have been studied experimentally, we further find that all networks can achieve all functions permitted by this constraint under reasonable settings of biochemical parameters. This underscores the functional versatility of the networks.
1909.09478
Ludvig Lizana
Markus Nyberg and Tobias Ambj\"ornsson and Per Stenberg and Ludvig Lizana
Modelling Protein Target-Search in Human Chromosomes
null
Phys. Rev. Research 3, 013055 (2021)
10.1103/PhysRevResearch.3.013055
null
q-bio.SC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several processes in the cell, such as gene regulation, start when key proteins recognise and bind to short DNA sequences. However, as these sequences can be hundreds of million times shorter than the genome, they are hard to find by simple diffusion: diffusion-limited association rates may underestimate $in~vitro$ measurements up to several orders of magnitude. Moreover, the rates increase if the DNA is coiled rather than straight. Here we model how this works $in~vivo$ in mammalian cells. We use chromatin-chromatin contact data from state-of-the-art Hi-C experiments to map the protein target-search onto a network problem. The nodes represent a DNA segment and the weight of the links is proportional to measured contact probabilities. We then put forward a master equation for the density of searching protein that allows us to calculate the association rates across the genome analytically. For segments where the rates are high, we find that they are enriched with active genes and have high RNA expression levels. This paper suggests that the DNA's 3D conformation is important for protein search times $in~vivo$ and offers a method to interpret protein-binding profiles in eukaryotes that cannot be explained by the DNA sequence itself.
[ { "created": "Thu, 19 Sep 2019 07:18:12 GMT", "version": "v1" } ]
2021-01-27
[ [ "Nyberg", "Markus", "" ], [ "Ambjörnsson", "Tobias", "" ], [ "Stenberg", "Per", "" ], [ "Lizana", "Ludvig", "" ] ]
Several processes in the cell, such as gene regulation, start when key proteins recognise and bind to short DNA sequences. However, as these sequences can be hundreds of million times shorter than the genome, they are hard to find by simple diffusion: diffusion-limited association rates may underestimate $in~vitro$ measurements up to several orders of magnitude. Moreover, the rates increase if the DNA is coiled rather than straight. Here we model how this works $in~vivo$ in mammalian cells. We use chromatin-chromatin contact data from state-of-the-art Hi-C experiments to map the protein target-search onto a network problem. The nodes represent a DNA segment and the weight of the links is proportional to measured contact probabilities. We then put forward a master equation for the density of searching protein that allows us to calculate the association rates across the genome analytically. For segments where the rates are high, we find that they are enriched with active genes and have high RNA expression levels. This paper suggests that the DNA's 3D conformation is important for protein search times $in~vivo$ and offers a method to interpret protein-binding profiles in eukaryotes that cannot be explained by the DNA sequence itself.
1501.04605
Lee Altenberg
Lee Altenberg
Statistical Problems in a Paper on Variation In Cancer Risk Among Tissues, and New Discoveries
7 pages, 4 figures, 1 table
null
null
null
q-bio.QM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tomasetti and Vogelstein (2015) collected data on 31 different tissue types and found a correlation of 0.8 between the logarithms of the incidence of cancer (LCI), and the estimated number of stem cell divisions in those tissues (LSCD). Some of their conclusions however are statistically erroneous. Their excess risk score, "ERS" (log10 LCI x log10 LSCD), is non-monotonic under a change of time units for the rates, which renders meaningless the results derived from it, including a cluster of 22 "R-tumor" types for which they conclude, "primary prevention measures are not likely to be very effective". Further, r = 0.8 is consistent with the three orders of magnitude variation in other unmeasured factors, leaving room for the possibility of primary prevention if such factors can be intervened upon. Further exploration of the data reveals additional findings: (1) that LCI grows at approximately the square root of LSCD, which may provide a clue to the biology; (2) among different possible combinations of the primary data, the one maximizing the correlations with LCI is almost precisely the formula used by Tomasetti and Vogelstein to estimate LSCD, giving support to stem cell divisions as an independent factor in carcinogenesis, while not excluding other such factors.
[ { "created": "Mon, 19 Jan 2015 20:00:11 GMT", "version": "v1" } ]
2015-01-20
[ [ "Altenberg", "Lee", "" ] ]
Tomasetti and Vogelstein (2015) collected data on 31 different tissue types and found a correlation of 0.8 between the logarithms of the incidence of cancer (LCI), and the estimated number of stem cell divisions in those tissues (LSCD). Some of their conclusions however are statistically erroneous. Their excess risk score, "ERS" (log10 LCI x log10 LSCD), is non-monotonic under a change of time units for the rates, which renders meaningless the results derived from it, including a cluster of 22 "R-tumor" types for which they conclude, "primary prevention measures are not likely to be very effective". Further, r = 0.8 is consistent with the three orders of magnitude variation in other unmeasured factors, leaving room for the possibility of primary prevention if such factors can be intervened upon. Further exploration of the data reveals additional findings: (1) that LCI grows at approximately the square root of LSCD, which may provide a clue to the biology; (2) among different possible combinations of the primary data, the one maximizing the correlations with LCI is almost precisely the formula used by Tomasetti and Vogelstein to estimate LSCD, giving support to stem cell divisions as an independent factor in carcinogenesis, while not excluding other such factors.
2203.00449
Ruqayya Awan
Ruqayya Awan, Mohammed Nimir, Shan E Ahmed Raza, Mohsin Bilal, Johannes Lotz, David Snead, Andrew Robinson, Nasir Rajpoot
Deep Learning based Prediction of MSI using MMR Markers in Colorectal Cancer
null
null
null
null
q-bio.QM cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
The accurate diagnosis and molecular profiling of colorectal cancers are critical for planning the best treatment options for patients. Microsatellite instability (MSI) or mismatch repair (MMR) status plays a vital role in appropriate treatment selection, has prognostic implications and is used to investigate the possibility of patients having underlying genetic disorders (Lynch syndrome). NICE recommends that all CRC patients should be offered MMR/MSI testing. Immunohistochemistry is commonly used to assess MMR status with subsequent molecular testing performed as required. This incurs significant extra costs and requires additional resources. The introduction of automated methods that can predict MSI or MMR status from a target image could substantially reduce the cost associated with MMR testing. Unlike previous studies on MSI prediction involving training a CNN using coarse labels (MSI vs Microsatellite Stable (MSS)), we have utilised fine-grain MMR labels for training purposes. In this paper, we present our work on predicting MSI status in a two-stage process using a single target slide either stained with CK8/18 or H&E. First, we trained a multi-headed convolutional neural network model where each head was responsible for predicting one of the MMR protein expressions. To this end, we performed the registration of MMR stained slides to the target slide as a pre-processing step. In the second stage, statistical features computed from the MMR prediction maps were used for the final MSI prediction. Our results demonstrated that MSI classification can be improved by incorporating fine-grained MMR labels in comparison to the previous approaches in which only coarse labels were utilised.
[ { "created": "Thu, 24 Feb 2022 18:56:59 GMT", "version": "v1" }, { "created": "Sun, 17 Apr 2022 11:18:54 GMT", "version": "v2" }, { "created": "Tue, 26 Apr 2022 04:07:14 GMT", "version": "v3" } ]
2022-04-27
[ [ "Awan", "Ruqayya", "" ], [ "Nimir", "Mohammed", "" ], [ "Raza", "Shan E Ahmed", "" ], [ "Bilal", "Mohsin", "" ], [ "Lotz", "Johannes", "" ], [ "Snead", "David", "" ], [ "Robinson", "Andrew", "" ], [ "Rajpoot", "Nasir", "" ] ]
The accurate diagnosis and molecular profiling of colorectal cancers are critical for planning the best treatment options for patients. Microsatellite instability (MSI) or mismatch repair (MMR) status plays a vital role in appropriate treatment selection, has prognostic implications and is used to investigate the possibility of patients having underlying genetic disorders (Lynch syndrome). NICE recommends that all CRC patients should be offered MMR/MSI testing. Immunohistochemistry is commonly used to assess MMR status with subsequent molecular testing performed as required. This incurs significant extra costs and requires additional resources. The introduction of automated methods that can predict MSI or MMR status from a target image could substantially reduce the cost associated with MMR testing. Unlike previous studies on MSI prediction involving training a CNN using coarse labels (MSI vs Microsatellite Stable (MSS)), we have utilised fine-grain MMR labels for training purposes. In this paper, we present our work on predicting MSI status in a two-stage process using a single target slide either stained with CK8/18 or H&E. First, we trained a multi-headed convolutional neural network model where each head was responsible for predicting one of the MMR protein expressions. To this end, we performed the registration of MMR stained slides to the target slide as a pre-processing step. In the second stage, statistical features computed from the MMR prediction maps were used for the final MSI prediction. Our results demonstrated that MSI classification can be improved by incorporating fine-grained MMR labels in comparison to the previous approaches in which only coarse labels were utilised.
q-bio/0607001
Herculano Martinho
P. O. Andrade, R. A. Bitar, K. Yassoyama, H. Martinho, A. M. E. Santo, P. M. Bruno, A. A. Martin
Study of Non-Altered Colorectal Tissue by FT-Raman Spectroscopy
14 pages, 4 figures, submitted to Analytical and Bioanalytical Chemistry
null
null
null
q-bio.TO q-bio.BM
null
FT-Raman spectroscopy was employed to study (in vitro) non-altered human colorectal tissues with the aim of evaluating their spectral differences. The samples were collected from 39 patients, yielding 144 spectra. The results enable one to establish 3 well-defined spectroscopic groups, which was consistently checked by statistical (clustering) and biological (histopathology) analysis. The similarity within each group was better than 90%. Groups 1 and 2 had 80% similarity, while both presented 70% similarity to group 3. All groups presented connective and epithelial tissues as a histopathological characteristic. Group 1 differs from the others by the presence of smooth muscle, while group 3 differs by the presence of fatty tissue. Tissues from group 2 are composed of samples with connective and epithelial tissues. This study is very relevant for establishing the non-altered, or normal, spectroscopic standard for future studies and applications in optical biopsy and diagnosis of colorectal cancer.
[ { "created": "Mon, 3 Jul 2006 13:32:40 GMT", "version": "v1" } ]
2007-05-23
[ [ "Andrade", "P. O.", "" ], [ "Bitar", "R. A.", "" ], [ "Yassoyama", "K.", "" ], [ "Martinho", "H.", "" ], [ "Santo", "A. M. E.", "" ], [ "Bruno", "P. M.", "" ], [ "Martin", "A. A.", "" ] ]
FT-Raman spectroscopy was employed to study (in vitro) non-altered human colorectal tissues with the aim of evaluating their spectral differences. The samples were collected from 39 patients, yielding 144 spectra. The results enable one to establish 3 well-defined spectroscopic groups, which was consistently checked by statistical (clustering) and biological (histopathology) analysis. The similarity within each group was better than 90%. Groups 1 and 2 had 80% similarity, while both presented 70% similarity to group 3. All groups presented connective and epithelial tissues as a histopathological characteristic. Group 1 differs from the others by the presence of smooth muscle, while group 3 differs by the presence of fatty tissue. Tissues from group 2 are composed of samples with connective and epithelial tissues. This study is very relevant for establishing the non-altered, or normal, spectroscopic standard for future studies and applications in optical biopsy and diagnosis of colorectal cancer.
1609.03948
Lee Friedman Lee Friedman
Lee Friedman, Ioannis Rigas, Mark S. Nixon, Oleg V. Komogortsev
Method to Assess the Temporal Persistence of Potential Biometric Features: Application to Oculomotor, and Gait-Related Databases
13 pages, 8 figures, 5 tables
null
10.1371/journal.pone.0178501
null
q-bio.QM cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although temporal persistence, or permanence, is a well understood requirement for optimal biometric features, there is no general agreement on how to assess temporal persistence. We suggest that the best way to assess temporal persistence is to perform a test-retest study, and assess test-retest reliability. For ratio-scale features that are normally distributed, this is best done using the Intraclass Correlation Coefficient (ICC). For 10 distinct data sets (8 eye-movement related, and 2 gait related), we calculated the test-retest reliability ('Temporal persistence') of each feature, and compared biometric performance of high-ICC features to lower ICC features, and to the set of all features. We demonstrate that using a subset of only high-ICC features produced superior Rank-1-Identification Rate (Rank-1-IR) performance in 9 of 10 databases (p = 0.01, one-tailed). For Equal Error Rate (EER), using a subset of only high-ICC features produced superior performance in 8 of 10 databases (p = 0.055, one-tailed). In general, then, prescreening potential biometric features, and choosing only highly reliable features will yield better performance than lower ICC features or than the set of all features combined. We hypothesize that this would likely be the case for any biometric modality where the features can be expressed as quantitative values on an interval or ratio scale, assuming an adequate number of relatively independent features.
[ { "created": "Tue, 13 Sep 2016 17:38:39 GMT", "version": "v1" } ]
2017-07-05
[ [ "Friedman", "Lee", "" ], [ "Rigas", "Ioannis", "" ], [ "Nixon", "Mark S.", "" ], [ "Komogortsev", "Oleg V.", "" ] ]
Although temporal persistence, or permanence, is a well understood requirement for optimal biometric features, there is no general agreement on how to assess temporal persistence. We suggest that the best way to assess temporal persistence is to perform a test-retest study, and assess test-retest reliability. For ratio-scale features that are normally distributed, this is best done using the Intraclass Correlation Coefficient (ICC). For 10 distinct data sets (8 eye-movement related, and 2 gait related), we calculated the test-retest reliability ('Temporal persistence') of each feature, and compared biometric performance of high-ICC features to lower ICC features, and to the set of all features. We demonstrate that using a subset of only high-ICC features produced superior Rank-1-Identification Rate (Rank-1-IR) performance in 9 of 10 databases (p = 0.01, one-tailed). For Equal Error Rate (EER), using a subset of only high-ICC features produced superior performance in 8 of 10 databases (p = 0.055, one-tailed). In general, then, prescreening potential biometric features, and choosing only highly reliable features will yield better performance than lower ICC features or than the set of all features combined. We hypothesize that this would likely be the case for any biometric modality where the features can be expressed as quantitative values on an interval or ratio scale, assuming an adequate number of relatively independent features.
1911.06602
Toby St Clere Smithe
Toby B. St Clere Smithe
Radically Compositional Cognitive Concepts
6 pages, 2 figures; NeurIPS 2019 Context and Compositionality workshop. Work in progress
null
null
null
q-bio.NC cs.AI cs.CL cs.NE
http://creativecommons.org/licenses/by/4.0/
Despite ample evidence that our concepts, our cognitive architecture, and mathematics itself are all deeply compositional, few models take advantage of this structure. We therefore propose a radically compositional approach to computational neuroscience, drawing on the methods of applied category theory. We describe how these tools grant us a means to overcome complexity and improve interpretability, and supply a rigorous common language for scientific modelling, analogous to the type theories of computer science. As a case study, we sketch how to translate from compositional narrative concepts to neural circuits and back again.
[ { "created": "Thu, 14 Nov 2019 18:20:36 GMT", "version": "v1" } ]
2019-11-18
[ [ "Smithe", "Toby B. St Clere", "" ] ]
Despite ample evidence that our concepts, our cognitive architecture, and mathematics itself are all deeply compositional, few models take advantage of this structure. We therefore propose a radically compositional approach to computational neuroscience, drawing on the methods of applied category theory. We describe how these tools grant us a means to overcome complexity and improve interpretability, and supply a rigorous common language for scientific modelling, analogous to the type theories of computer science. As a case study, we sketch how to translate from compositional narrative concepts to neural circuits and back again.
2403.02733
Frederik F. Fl\"other
Frederik F. Fl\"other
Early quantum computing applications on the path towards precision medicine
null
null
null
null
q-bio.QM quant-ph
http://creativecommons.org/licenses/by-sa/4.0/
The last few years have seen rapid progress in transitioning quantum computing from lab to industry. In healthcare and life sciences, more than 40 proof-of-concept experiments and studies have been conducted; an increasing number of these are even run on real quantum hardware. Major investments have been made with hundreds of millions of dollars already allocated towards quantum applications and hardware in medicine. In addition to pharmaceutical and life sciences uses, clinical and medical applications are now increasingly coming into the picture. This chapter focuses on three key use case areas associated with (precision) medicine, including genomics and clinical research, diagnostics, and treatments and interventions. Examples of organizations and the use cases they have been researching are given; ideas how the development of practical quantum computing applications can be further accelerated are described.
[ { "created": "Tue, 5 Mar 2024 07:41:29 GMT", "version": "v1" } ]
2024-03-06
[ [ "Flöther", "Frederik F.", "" ] ]
The last few years have seen rapid progress in transitioning quantum computing from lab to industry. In healthcare and life sciences, more than 40 proof-of-concept experiments and studies have been conducted; an increasing number of these are even run on real quantum hardware. Major investments have been made with hundreds of millions of dollars already allocated towards quantum applications and hardware in medicine. In addition to pharmaceutical and life sciences uses, clinical and medical applications are now increasingly coming into the picture. This chapter focuses on three key use case areas associated with (precision) medicine, including genomics and clinical research, diagnostics, and treatments and interventions. Examples of organizations and the use cases they have been researching are given; ideas how the development of practical quantum computing applications can be further accelerated are described.
2306.15940
Feng Xu
Feng Xu, Shuxiang Zhang, Linjie Ma, Yong Hou, Jie Li, Andrej Denisenko, Zifu Li, Joachim Spatz, J\"org Wrachtrup, Qiang Wei, and Zhiqin Chu
Quantum-Enhanced Diamond Molecular Tension Microscopy for Quantifying Cellular Forces
51 pages, 20 figures
null
null
null
q-bio.CB
http://creativecommons.org/licenses/by-nc-nd/4.0/
The constant interplay and information exchange between cells and their micro-environment are essential to their survival and ability to execute biological functions. To date, a few leading technologies, such as traction force microscopy, have been broadly used in measuring cellular forces. However, considerable limitations regarding sensitivity and ambiguities in data interpretation are hindering our thorough understanding of mechanobiology. Herein, we propose an innovative approach, namely quantum-enhanced diamond molecular tension microscopy (QDMTM), to precisely quantify integrin-based cell adhesive forces. Specifically, we construct a force-sensing platform by conjugating a magnetic-nanotag-labeled, force-responsive polymer to the surface of a diamond membrane containing nitrogen-vacancy (NV) centers. Thus, the coupled mechanical information can be quantified through optical readout of the spin relaxation of NV centers modulated by those magnetic nanotags. To validate QDMTM, we have carefully performed corresponding measurements in both control and real cell samples. In particular, we have obtained quantitative cellular adhesion force mapping by correlating the measurements with an established theoretical model. We anticipate that our method can be routinely used in studying important issues like cell-cell or cell-material interactions and mechanotransduction.
[ { "created": "Wed, 28 Jun 2023 05:58:51 GMT", "version": "v1" } ]
2023-06-29
[ [ "Xu", "Feng", "" ], [ "Zhang", "Shuxiang", "" ], [ "Ma", "Linjie", "" ], [ "Hou", "Yong", "" ], [ "Li", "Jie", "" ], [ "Denisenko", "Andrej", "" ], [ "Li", "Zifu", "" ], [ "Spatz", "Joachim", "" ], [ "Wrachtrup", "Jörg", "" ], [ "Wei", "Qiang", "" ], [ "Chu", "Zhiqin", "" ] ]
The constant interplay and information exchange between cells and their micro-environment are essential to their survival and ability to execute biological functions. To date, a few leading technologies, such as traction force microscopy, have been broadly used in measuring cellular forces. However, considerable limitations regarding sensitivity and ambiguities in data interpretation are hindering our thorough understanding of mechanobiology. Herein, we propose an innovative approach, namely quantum-enhanced diamond molecular tension microscopy (QDMTM), to precisely quantify the integrin-based cell adhesive forces. Specifically, we construct a force-sensing platform by conjugating the magnetic-nanotag-labeled, force-responsive polymer to the surface of a diamond membrane containing nitrogen-vacancy (NV) centers. Thus, the coupled mechanical information can be quantified through optical readout of the spin relaxation of NV centers modulated by those magnetic nanotags. To validate QDMTM, we have carefully performed corresponding measurements in both control and real cell samples. In particular, we have obtained quantitative cellular adhesion force mapping by correlating the measurements with an established theoretical model. We anticipate that our method can be routinely used in studying important issues like cell-cell or cell-material interactions and mechanotransduction.
1001.5241
Frederick Matsen IV
Mar\'ia Ang\'elica Cueto and Frederick A. Matsen
Polyhedral geometry of Phylogenetic Rogue Taxa
In this version, we add quartet distances and fix Table 4.
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is well known among phylogeneticists that adding an extra taxon (e.g. species) to a data set can alter the structure of the optimal phylogenetic tree in surprising ways. However, little is known about this "rogue taxon" effect. In this paper we characterize the behavior of balanced minimum evolution (BME) phylogenetics on data sets of this type using tools from polyhedral geometry. First we show that for any distance matrix there exist distances to a "rogue taxon" such that the BME-optimal tree for the data set with the new taxon does not contain any nontrivial splits (bipartitions) of the optimal tree for the original data. Second, we prove a theorem which restricts the topology of BME-optimal trees for data sets of this type, thus showing that a rogue taxon cannot have an arbitrary effect on the optimal tree. Third, we construct polyhedral cones computationally which give complete answers for BME rogue taxon behavior when our original data fits a tree on four, five, and six taxa. We use these cones to derive sufficient conditions for rogue taxon behavior for four taxa, and to understand the frequency of the rogue taxon effect via simulation.
[ { "created": "Thu, 28 Jan 2010 18:40:38 GMT", "version": "v1" }, { "created": "Mon, 29 Mar 2010 03:52:11 GMT", "version": "v2" }, { "created": "Wed, 31 Mar 2010 01:47:11 GMT", "version": "v3" }, { "created": "Sat, 24 Apr 2010 14:39:26 GMT", "version": "v4" } ]
2010-04-27
[ [ "Cueto", "María Angélica", "" ], [ "Matsen", "Frederick A.", "" ] ]
It is well known among phylogeneticists that adding an extra taxon (e.g. species) to a data set can alter the structure of the optimal phylogenetic tree in surprising ways. However, little is known about this "rogue taxon" effect. In this paper we characterize the behavior of balanced minimum evolution (BME) phylogenetics on data sets of this type using tools from polyhedral geometry. First we show that for any distance matrix there exist distances to a "rogue taxon" such that the BME-optimal tree for the data set with the new taxon does not contain any nontrivial splits (bipartitions) of the optimal tree for the original data. Second, we prove a theorem which restricts the topology of BME-optimal trees for data sets of this type, thus showing that a rogue taxon cannot have an arbitrary effect on the optimal tree. Third, we construct polyhedral cones computationally which give complete answers for BME rogue taxon behavior when our original data fits a tree on four, five, and six taxa. We use these cones to derive sufficient conditions for rogue taxon behavior for four taxa, and to understand the frequency of the rogue taxon effect via simulation.
2011.12050
Alessandro Sarracino
Dario Raimo, Alessandro Sarracino, Lucilla de Arcangelis
Role of inhibitory neurons in temporal correlations of critical and supercritical spontaneous activity
15 pages, 6 figures
Physica A, 125555 (2020)
10.1016/j.physa.2020.125555
null
q-bio.NC cond-mat.dis-nn cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experimental and numerical results suggest that the brain can be viewed as a system acting close to a critical point, as confirmed by scale-free distributions of relevant quantities in a variety of different systems and models. Less attention has been devoted to the investigation of temporal correlation functions of brain activity in different, healthy and pathological, conditions. Here we perform this analysis by means of a model with short- and long-term plasticity, which implements the experimentally observed feature of different recovery rates for excitatory and inhibitory neurons. We evidence the important role played by inhibitory neurons in the supercritical state: We detect an unexpected oscillatory behaviour of the correlation decay, whose frequency depends on the fraction of inhibitory neurons and their connectivity degree. This behaviour can be rationalized by the observation that bursts in activity become more frequent, with a smaller amplitude, as inhibition becomes more relevant.
[ { "created": "Tue, 24 Nov 2020 11:58:06 GMT", "version": "v1" } ]
2020-11-25
[ [ "Raimo", "Dario", "" ], [ "Sarracino", "Alessandro", "" ], [ "de Arcangelis", "Lucilla", "" ] ]
Experimental and numerical results suggest that the brain can be viewed as a system acting close to a critical point, as confirmed by scale-free distributions of relevant quantities in a variety of different systems and models. Less attention has been devoted to the investigation of temporal correlation functions of brain activity in different, healthy and pathological, conditions. Here we perform this analysis by means of a model with short- and long-term plasticity, which implements the experimentally observed feature of different recovery rates for excitatory and inhibitory neurons. We evidence the important role played by inhibitory neurons in the supercritical state: We detect an unexpected oscillatory behaviour of the correlation decay, whose frequency depends on the fraction of inhibitory neurons and their connectivity degree. This behaviour can be rationalized by the observation that bursts in activity become more frequent, with a smaller amplitude, as inhibition becomes more relevant.
1112.4589
Hong Qian
Hong Qian and Sumit Roy
An Information Theoretical Analysis of Kinase Activated Phosphorylation Dephosphorylation Cycle
17 pages, 7 figures
IEEE Transactions on NanoBioscience, Volume 11, Issue 3, pp. 289 - 295 (2012)
10.1109/TNB.2011.2182658
null
q-bio.SC cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Signal transduction, the information processing mechanism in biological cells, is carried out by a network of biochemical reactions. The dynamics of driven biochemical reactions can be studied in terms of nonequilibrium statistical physics. Such systems may also be studied in terms of Shannon's information theory. We combine these two perspectives in this study of the basic units (modules) of cellular signaling: the phosphorylation dephosphorylation cycle (PdPC) and the guanosine triphosphatase (GTPase). We show that the channel capacity is zero if and only if the free energy expenditure of the biochemical system is zero. In fact, a positive correlation between the channel capacity and free energy expenditure is observed. In terms of information theory, a linear signaling cascade consisting of multiple steps of PdPC can function as a distributed "multistage code". With an increasing number of steps in the cascade, the system trades channel capacity for code complexity. Our analysis shows that while a static code can be based on molecular structure, a biochemical communication channel has to have energy expenditure.
[ { "created": "Tue, 20 Dec 2011 07:29:01 GMT", "version": "v1" } ]
2012-09-13
[ [ "Qian", "Hong", "" ], [ "Roy", "Sumit", "" ] ]
Signal transduction, the information processing mechanism in biological cells, is carried out by a network of biochemical reactions. The dynamics of driven biochemical reactions can be studied in terms of nonequilibrium statistical physics. Such systems may also be studied in terms of Shannon's information theory. We combine these two perspectives in this study of the basic units (modules) of cellular signaling: the phosphorylation dephosphorylation cycle (PdPC) and the guanosine triphosphatase (GTPase). We show that the channel capacity is zero if and only if the free energy expenditure of the biochemical system is zero. In fact, a positive correlation between the channel capacity and free energy expenditure is observed. In terms of information theory, a linear signaling cascade consisting of multiple steps of PdPC can function as a distributed "multistage code". With an increasing number of steps in the cascade, the system trades channel capacity for code complexity. Our analysis shows that while a static code can be based on molecular structure, a biochemical communication channel has to have energy expenditure.
2008.13229
Ernest Greene
Ernest Greene
An evolutionary perspective on the design of neuromorphic shape filters
null
IEEE Access, 2020, 8, 114228-114238
10.1109/ACCESS.2020.3004412
null
q-bio.NC cs.AI cs.CV cs.NE eess.IV
http://creativecommons.org/licenses/by/4.0/
A substantial amount of time and energy has been invested to develop machine vision using connectionist (neural network) principles. Most of that work has been inspired by theories advanced by neuroscientists and behaviorists for how cortical systems store stimulus information. Those theories call for information flow through connections among several neuron populations, with the initial connections being random (or at least non-functional). Then the strength or location of connections are modified through training trials to achieve an effective output, such as the ability to identify an object. Those theories ignored the fact that animals that have no cortex, e.g., fish, can demonstrate visual skills that outpace the best neural network models. Neural circuits that allow for immediate effective vision and quick learning have been preprogrammed by hundreds of millions of years of evolution and the visual skills are available shortly after hatching. Cortical systems may be providing advanced image processing, but most likely are using design principles that had been proven effective in simpler systems. The present article provides a brief overview of retinal and cortical mechanisms for registering shape information, with the hope that it might contribute to the design of shape-encoding circuits that more closely match the mechanisms of biological vision.
[ { "created": "Sun, 30 Aug 2020 17:53:44 GMT", "version": "v1" } ]
2020-09-01
[ [ "Greene", "Ernest", "" ] ]
A substantial amount of time and energy has been invested to develop machine vision using connectionist (neural network) principles. Most of that work has been inspired by theories advanced by neuroscientists and behaviorists for how cortical systems store stimulus information. Those theories call for information flow through connections among several neuron populations, with the initial connections being random (or at least non-functional). Then the strength or location of connections are modified through training trials to achieve an effective output, such as the ability to identify an object. Those theories ignored the fact that animals that have no cortex, e.g., fish, can demonstrate visual skills that outpace the best neural network models. Neural circuits that allow for immediate effective vision and quick learning have been preprogrammed by hundreds of millions of years of evolution and the visual skills are available shortly after hatching. Cortical systems may be providing advanced image processing, but most likely are using design principles that had been proven effective in simpler systems. The present article provides a brief overview of retinal and cortical mechanisms for registering shape information, with the hope that it might contribute to the design of shape-encoding circuits that more closely match the mechanisms of biological vision.
1201.1424
Mustafa Barasa
Joshua Muli Mutiso, John Chege Macharia, Mustafa Barasa, Evans Taracha, Alain J. Bourdichon, Michael Muita Gicheru
In vitro and in vivo antileishmanial efficacy of a combination therapy of diminazene and artesunate against Leishmania donovani in BALB /c mice
4 Pages, 3 Figures
Rev. Inst. Med. Trop. Sao Paulo. 2011, 53 (3): 129 - 132
10.1590/S0036-46652011000300003
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The in vitro and in vivo activity of diminazene (Dim), artesunate (Art) and a combination of Dim and Art (Dim-Art) against Leishmania donovani was compared to the reference drug, amphotericin B. The IC50 of Dim-Art was found to be $2.28 \pm 0.24 \mu$ g/mL, while those of Dim and Art were $9.16 \pm 0.3 \mu$ g/mL and $4.64 \pm 0.48 \mu$ g/mL respectively. The IC50 for Amphot B was $0.16 \pm 0.32 \mu$ g/mL against stationary-phase promastigotes. In vivo evaluation in the L. donovani BALB/c mouse model indicated that treatment with the combined drug therapy at doses of 12.5 mg/kg for 28 consecutive days significantly ($p < 0.001$) reduced parasite burden in the spleen as compared to the single drug treatments given at the same dosages. Although parasite burden was slightly lower ($p < 0.05$) in the Amphot B group than in the Dim-Art treatment group, the present study demonstrates the positive advantage and the potential use of the combined therapy of Dim-Art over the constituent drugs, Dim or Art, when used alone. Further evaluation is recommended to determine the most efficacious combination ratio of the two compounds.
[ { "created": "Thu, 22 Dec 2011 10:09:53 GMT", "version": "v1" } ]
2012-01-17
[ [ "Mutiso", "Joshua Muli", "" ], [ "Macharia", "John Chege", "" ], [ "Barasa", "Mustafa", "" ], [ "Taracha", "Evans", "" ], [ "Bourdichon", "Alain J.", "" ], [ "Gicheru", "Michael Muita", "" ] ]
The in vitro and in vivo activity of diminazene (Dim), artesunate (Art) and a combination of Dim and Art (Dim-Art) against Leishmania donovani was compared to the reference drug, amphotericin B. The IC50 of Dim-Art was found to be $2.28 \pm 0.24 \mu$ g/mL, while those of Dim and Art were $9.16 \pm 0.3 \mu$ g/mL and $4.64 \pm 0.48 \mu$ g/mL respectively. The IC50 for Amphot B was $0.16 \pm 0.32 \mu$ g/mL against stationary-phase promastigotes. In vivo evaluation in the L. donovani BALB/c mouse model indicated that treatment with the combined drug therapy at doses of 12.5 mg/kg for 28 consecutive days significantly ($p < 0.001$) reduced parasite burden in the spleen as compared to the single drug treatments given at the same dosages. Although parasite burden was slightly lower ($p < 0.05$) in the Amphot B group than in the Dim-Art treatment group, the present study demonstrates the positive advantage and the potential use of the combined therapy of Dim-Art over the constituent drugs, Dim or Art, when used alone. Further evaluation is recommended to determine the most efficacious combination ratio of the two compounds.
q-bio/0609023
Ashok Palaniappan
Ashok Palaniappan, Eric Jakobsson
Evolutionary Analysis of Biological Excitability
Includes a contribution to Chomskyan linguistics
null
null
null
q-bio.BM q-bio.CB q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Excitability is an attribute of life, and is a driving force in the descent of complexity. Cellular electrical activity as realized by membrane proteins that act as either channels or transporters is the basis of excitability. Electrical signaling is mediated by a wave of action potentials, which consist of synchronous redistribution of ionic gradients down ion channels. Ion channels select for the passage of a particular ion species. Potassium ion channels are gated by a variety of stimuli, including membrane voltage. Sodium and calcium channels are gated only by membrane voltage, suggesting the conservative argument that voltage-gated potassium channels are the founding members of the voltage-gated ion channel superfamily. The principal focus of this work is the investigation of the complement of potassium ion channels in our genome and its generalizability. An array of issues relevant to excitability is addressed, and a range of engagement in questions regarding the unity of life is proffered.
[ { "created": "Fri, 15 Sep 2006 05:17:57 GMT", "version": "v1" }, { "created": "Tue, 4 Sep 2007 06:25:02 GMT", "version": "v2" }, { "created": "Mon, 20 Apr 2009 10:35:15 GMT", "version": "v3" }, { "created": "Sun, 18 Oct 2009 11:53:27 GMT", "version": "v4" }, { "created": "Thu, 5 Nov 2009 09:18:13 GMT", "version": "v5" } ]
2009-11-05
[ [ "Palaniappan", "Ashok", "" ], [ "Jakobsson", "Eric", "" ] ]
Excitability is an attribute of life, and is a driving force in the descent of complexity. Cellular electrical activity as realized by membrane proteins that act as either channels or transporters is the basis of excitability. Electrical signaling is mediated by a wave of action potentials, which consist of synchronous redistribution of ionic gradients down ion channels. Ion channels select for the passage of a particular ion species. Potassium ion channels are gated by a variety of stimuli, including membrane voltage. Sodium and calcium channels are gated only by membrane voltage, suggesting the conservative argument that voltage-gated potassium channels are the founding members of the voltage-gated ion channel superfamily. The principal focus of this work is the investigation of the complement of potassium ion channels in our genome and its generalizability. An array of issues relevant to excitability is addressed, and a range of engagement in questions regarding the unity of life is proffered.
1909.06683
Peter Taylor
Sriharsha Ramaraju, Simon Reichert, Yujiang Wang, Rob Forsyth, Peter N Taylor
Carbogen inhalation during Non-Convulsive Status Epilepticus: A quantitative analysis of EEG recordings
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Objective: To quantify the effect of inhaled 5% carbon-dioxide/95% oxygen on EEG recordings from patients in non-convulsive status epilepticus (NCSE). Methods: Five children of mixed aetiology in NCSE were given a high flow of inhaled carbogen (5% carbon dioxide/95% oxygen) using a face mask for a maximum of 120 s. EEG was recorded concurrently in all patients. The effects of inhaled carbogen on patient EEG recordings were investigated using band-power, functional connectivity and graph theory measures. The carbogen effect was quantified by measuring effect size (Cohen's d) between the "before", "during" and "after" carbogen delivery states. Results: Carbogen's apparent effect on EEG band-power and network metrics for the "before-during" and "before-after" inhalation comparisons was inconsistent across the five patients. Conclusion: The changes in different measures suggest a potentially non-homogeneous effect of carbogen on the patients' EEG. Different aetiologies and durations of inhalation may underlie these non-homogeneous effects. Tuning the carbogen parameters (such as the ratio between CO2 and O2, or the duration of inhalation) on a personalised basis may improve seizure suppression in the future.
[ { "created": "Sat, 14 Sep 2019 21:49:06 GMT", "version": "v1" } ]
2019-09-17
[ [ "Ramaraju", "Sriharsha", "" ], [ "Reichert", "Simon", "" ], [ "Wang", "Yujiang", "" ], [ "Forsyth", "Rob", "" ], [ "Taylor", "Peter N", "" ] ]
Objective: To quantify the effect of inhaled 5% carbon-dioxide/95% oxygen on EEG recordings from patients in non-convulsive status epilepticus (NCSE). Methods: Five children of mixed aetiology in NCSE were given a high flow of inhaled carbogen (5% carbon dioxide/95% oxygen) using a face mask for a maximum of 120 s. EEG was recorded concurrently in all patients. The effects of inhaled carbogen on patient EEG recordings were investigated using band-power, functional connectivity and graph theory measures. The carbogen effect was quantified by measuring effect size (Cohen's d) between the "before", "during" and "after" carbogen delivery states. Results: Carbogen's apparent effect on EEG band-power and network metrics for the "before-during" and "before-after" inhalation comparisons was inconsistent across the five patients. Conclusion: The changes in different measures suggest a potentially non-homogeneous effect of carbogen on the patients' EEG. Different aetiologies and durations of inhalation may underlie these non-homogeneous effects. Tuning the carbogen parameters (such as the ratio between CO2 and O2, or the duration of inhalation) on a personalised basis may improve seizure suppression in the future.
2311.09131
Farzan Vahedifard
Farzan Vahedifard, Atieh Sadeghniiat Haghighi, Tirth Dave, Mohammad Tolouei, Fateme Hoshyar Zare
Practical Use of ChatGPT in Psychiatry for Treatment Plan and Psychoeducation
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial Intelligence (AI) has revolutionized various fields, including medicine and mental health support. One promising application is ChatGPT, an advanced conversational AI model that uses deep learning techniques to provide human-like responses. This review paper explores the potential impact of ChatGPT in psychiatry and its various applications, highlighting its role in therapy and counseling techniques, self-help and coping strategies, mindfulness and relaxation techniques, screening and monitoring, education and information dissemination, specialized support, group and family support, learning and training, expressive and artistic therapies, telepsychiatry and online support, and crisis management and prevention. While ChatGPT offers personalized, accessible, and scalable support, it is essential to emphasize that it should not replace the expertise and guidance of qualified mental health professionals. Ethical considerations, such as user privacy, data security, and human oversight, are also discussed. By examining the potential and challenges, this paper sheds light on the responsible integration of ChatGPT in psychiatric research and practice, fostering improved mental health outcomes.
[ { "created": "Wed, 15 Nov 2023 17:21:09 GMT", "version": "v1" } ]
2023-11-16
[ [ "Vahedifard", "Farzan", "" ], [ "Haghighi", "Atieh Sadeghniiat", "" ], [ "Dave", "Tirth", "" ], [ "Tolouei", "Mohammad", "" ], [ "Zare", "Fateme Hoshyar", "" ] ]
Artificial Intelligence (AI) has revolutionized various fields, including medicine and mental health support. One promising application is ChatGPT, an advanced conversational AI model that uses deep learning techniques to provide human-like responses. This review paper explores the potential impact of ChatGPT in psychiatry and its various applications, highlighting its role in therapy and counseling techniques, self-help and coping strategies, mindfulness and relaxation techniques, screening and monitoring, education and information dissemination, specialized support, group and family support, learning and training, expressive and artistic therapies, telepsychiatry and online support, and crisis management and prevention. While ChatGPT offers personalized, accessible, and scalable support, it is essential to emphasize that it should not replace the expertise and guidance of qualified mental health professionals. Ethical considerations, such as user privacy, data security, and human oversight, are also discussed. By examining the potential and challenges, this paper sheds light on the responsible integration of ChatGPT in psychiatric research and practice, fostering improved mental health outcomes.
2011.14118
Eberhard Bodenschatz
Freja Nordsiek and Eberhard Bodenschatz and Gholamhossein Bagheri
Risk assessment for airborne disease transmission by poly-pathogen aerosols
updated file with link to software on GitHub
null
10.1371/journal.pone.0248004
null
q-bio.QM physics.med-ph
http://creativecommons.org/licenses/by/4.0/
In the case of airborne diseases, pathogen copies are transmitted by droplets of respiratory tract fluid that are exhaled by the infectious and, after partial or full drying, inhaled as aerosols by the susceptible. The risk of infection in indoor environments is typically modelled using the Wells-Riley model or a Wells-Riley-like formulation, usually assuming the pathogen dose follows a Poisson distribution (mono-pathogen assumption). Aerosols that hold more than one pathogen copy, i.e. poly-pathogen aerosols, break this assumption even if the aerosol dose itself follows a Poisson distribution. For the largest aerosols, where the number of pathogen copies in each aerosol can sometimes be several hundred or several thousand, the effect is non-negligible, especially in diseases where the risk of infection per pathogen is high. Here we report on a generalization of the Wells-Riley model and dose-response models for poly-pathogen aerosols by separately modeling each number of pathogen copies per aerosol, while the aerosol dose itself follows a Poisson distribution. This results in a model for computational risk assessment suitable for mono-/poly-pathogen aerosols. We show that the mono-pathogen assumption significantly overestimates the risk of infection for high pathogen concentrations in the respiratory tract fluid. The model also includes the aerosol removal due to filtering by the individuals, which becomes significant for poorly ventilated environments with a high density of individuals, and systematically includes the effects of facemasks in the infectious aerosol source and sink terms and dose calculations.
[ { "created": "Sat, 28 Nov 2020 11:58:01 GMT", "version": "v1" }, { "created": "Mon, 15 Feb 2021 12:16:16 GMT", "version": "v2" } ]
2021-06-09
[ [ "Nordsiek", "Freja", "" ], [ "Bodenschatz", "Eberhard", "" ], [ "Bagheri", "Gholamhossein", "" ] ]
In the case of airborne diseases, pathogen copies are transmitted by droplets of respiratory tract fluid that are exhaled by the infectious and, after partial or full drying, inhaled as aerosols by the susceptible. The risk of infection in indoor environments is typically modelled using the Wells-Riley model or a Wells-Riley-like formulation, usually assuming the pathogen dose follows a Poisson distribution (mono-pathogen assumption). Aerosols that hold more than one pathogen copy, i.e. poly-pathogen aerosols, break this assumption even if the aerosol dose itself follows a Poisson distribution. For the largest aerosols, where the number of pathogen copies in each aerosol can sometimes be several hundred or several thousand, the effect is non-negligible, especially in diseases where the risk of infection per pathogen is high. Here we report on a generalization of the Wells-Riley model and dose-response models for poly-pathogen aerosols by separately modeling each number of pathogen copies per aerosol, while the aerosol dose itself follows a Poisson distribution. This results in a model for computational risk assessment suitable for mono-/poly-pathogen aerosols. We show that the mono-pathogen assumption significantly overestimates the risk of infection for high pathogen concentrations in the respiratory tract fluid. The model also includes the aerosol removal due to filtering by the individuals, which becomes significant for poorly ventilated environments with a high density of individuals, and systematically includes the effects of facemasks in the infectious aerosol source and sink terms and dose calculations.
2007.07429
Laurent H\'ebert-Dufresne
Blake J. M. Williams, Guillaume St-Onge and Laurent H\'ebert-Dufresne
Localization, epidemic transitions, and unpredictability of multistrain epidemics with an underlying genotype network
null
PLoS Comput Biol 17(2): e1008606 (2021)
10.1371/journal.pcbi.1008606
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical disease modelling has long operated under the assumption that any one infectious disease is caused by one transmissible pathogen spreading among a population. This paradigm has been useful in simplifying the biological reality of epidemics and has allowed the modelling community to focus on the complexity of other factors such as population structure and interventions. However, there is an increasing amount of evidence that the strain diversity of pathogens, and their interplay with the host immune system, can play a large role in shaping the dynamics of epidemics. Here, we introduce a disease model with an underlying genotype network to account for two important mechanisms. One, the disease can mutate along network pathways as it spreads in a host population. Two, the genotype network allows us to define a genetic distance across strains and therefore to model the transcendence of immunity often observed in real world pathogens. We study the emergence of epidemics in this model, through its epidemic phase transitions, and highlight the role of the genotype network in driving cyclicity of diseases, large scale fluctuations, sequential epidemic transitions, as well as localization around specific strains of the associated pathogen. More generally, our model illustrates the richness of behaviours that are possible even in well-mixed host populations once we consider strain diversity and go beyond the "one disease equals one pathogen" paradigm.
[ { "created": "Wed, 15 Jul 2020 01:49:49 GMT", "version": "v1" }, { "created": "Tue, 1 Jun 2021 17:39:22 GMT", "version": "v2" } ]
2021-06-02
[ [ "Williams", "Blake J. M.", "" ], [ "St-Onge", "Guillaume", "" ], [ "Hébert-Dufresne", "Laurent", "" ] ]
Mathematical disease modelling has long operated under the assumption that any one infectious disease is caused by one transmissible pathogen spreading among a population. This paradigm has been useful in simplifying the biological reality of epidemics and has allowed the modelling community to focus on the complexity of other factors such as population structure and interventions. However, there is an increasing amount of evidence that the strain diversity of pathogens, and their interplay with the host immune system, can play a large role in shaping the dynamics of epidemics. Here, we introduce a disease model with an underlying genotype network to account for two important mechanisms. One, the disease can mutate along network pathways as it spreads in a host population. Two, the genotype network allows us to define a genetic distance across strains and therefore to model the transcendence of immunity often observed in real world pathogens. We study the emergence of epidemics in this model, through its epidemic phase transitions, and highlight the role of the genotype network in driving cyclicity of diseases, large scale fluctuations, sequential epidemic transitions, as well as localization around specific strains of the associated pathogen. More generally, our model illustrates the richness of behaviours that are possible even in well-mixed host populations once we consider strain diversity and go beyond the "one disease equals one pathogen" paradigm.
1906.00944
Aqib Hasnain
Aqib Hasnain, Nibodh Boddupalli, Enoch Yeung
Optimal reporter placement in sparsely measured genetic networks using the Koopman operator
6 pages, 3 figures, to appear in 2019 IEEE Conference on Decision and Control (CDC)
null
null
null
q-bio.MN cs.SY eess.SY math.DS math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optimal sensor placement is an important yet unsolved problem in control theory. In biological organisms, genetic activity is often highly nonlinear, making it difficult to design libraries of promoters to act as reporters of the cell state. We make use of the Koopman observability gramian to develop an algorithm for optimal sensor (or reporter) placement for discrete time nonlinear dynamical systems to ease the difficulty of design of the promoter library. This ease is enabled due to the fact that the Koopman operator represents the evolution of a nonlinear system linearly by lifting the states to an infinite-dimensional space of observables. The Koopman framework ideally demands high temporal resolution, but data in biology are often sampled sparsely in time. Therefore we compute what we call the temporally fine-grained Koopman operator from the temporally coarse-grained Koopman operator, the latter of which is identified from the sparse data. The optimal placement of sensors then corresponds to maximizing the observability of the fine-grained system. We demonstrate the algorithm on a simulation example of a circadian oscillator.
[ { "created": "Mon, 3 Jun 2019 17:52:38 GMT", "version": "v1" }, { "created": "Wed, 18 Sep 2019 07:37:05 GMT", "version": "v2" } ]
2019-09-19
[ [ "Hasnain", "Aqib", "" ], [ "Boddupalli", "Nibodh", "" ], [ "Yeung", "Enoch", "" ] ]
Optimal sensor placement is an important yet unsolved problem in control theory. In biological organisms, genetic activity is often highly nonlinear, making it difficult to design libraries of promoters to act as reporters of the cell state. We make use of the Koopman observability gramian to develop an algorithm for optimal sensor (or reporter) placement for discrete time nonlinear dynamical systems to ease the difficulty of design of the promoter library. This ease is enabled due to the fact that the Koopman operator represents the evolution of a nonlinear system linearly by lifting the states to an infinite-dimensional space of observables. The Koopman framework ideally demands high temporal resolution, but data in biology are often sampled sparsely in time. Therefore we compute what we call the temporally fine-grained Koopman operator from the temporally coarse-grained Koopman operator, the latter of which is identified from the sparse data. The optimal placement of sensors then corresponds to maximizing the observability of the fine-grained system. We demonstrate the algorithm on a simulation example of a circadian oscillator.
0707.0245
Bard Ermentrout
G. Bard Ermentrout, Roberto F. Gal\'an, Nathaniel N. Urban
Relating Neural Dynamics to Neural Coding
10 pages, 3 figures
null
null
null
q-bio.NC
null
We demonstrate that two key theoretical objects used widely in Computational Neuroscience, the phase-resetting curve (PRC) from dynamics and the spike triggered average (STA) from statistical analysis, are closely related under a wide range of stimulus conditions. We prove that the STA is proportional to the derivative of the PRC. We compare these analytic results to numerical calculations for the Hodgkin-Huxley neuron and we apply the method to neurons in the olfactory bulb of mice. This observation allows us to relate the stimulus-response properties of a neuron to its dynamics, bridging the gap between dynamical and information theoretic approaches to understanding brain computations and facilitating the interpretation of changes in channels and other cellular properties as influencing the representation of stimuli.
[ { "created": "Mon, 2 Jul 2007 14:56:43 GMT", "version": "v1" }, { "created": "Wed, 26 Sep 2007 18:49:49 GMT", "version": "v2" } ]
2007-09-26
[ [ "Ermentrout", "G. Bard", "" ], [ "Galán", "Roberto F.", "" ], [ "Urban", "Nathaniel N.", "" ] ]
We demonstrate that two key theoretical objects used widely in Computational Neuroscience, the phase-resetting curve (PRC) from dynamics and the spike triggered average (STA) from statistical analysis, are closely related under a wide range of stimulus conditions. We prove that the STA is proportional to the derivative of the PRC. We compare these analytic results to numerical calculations for the Hodgkin-Huxley neuron and we apply the method to neurons in the olfactory bulb of mice. This observation allows us to relate the stimulus-response properties of a neuron to its dynamics, bridging the gap between dynamical and information theoretic approaches to understanding brain computations and facilitating the interpretation of changes in channels and other cellular properties as influencing the representation of stimuli.
1801.07421
Kang Hao Cheong
Jin Ming Koh, Kang Hao Cheong
Strategic nomadic-colonial switching: Stochastic noise and subsidence-recovery cycles
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previously, we developed a population model incorporating the Allee effect and periodic environmental fluctuations, in which organisms alternate between nomadic and colonial behaviours. This switching strategy is regulated by biological clocks and the abundance of environmental resources, and can lead to population persistence despite both behaviours being individually losing. In the present study, we consider stochastic noise models in place of the original periodic ones, thereby allowing a wider range of environmental fluctuations to be modelled. The theoretical framework is generalized to account for resource depletion by both nomadic and colonial sub-populations, and an ecologically realistic population size-dependent switching scheme is proposed. We demonstrate the robustness of the modified switching scheme to stochastic noise, and we also present the intriguing possibility of consecutive subsidence-recovery cycles within the resulting population dynamics. Our results have relevance in biological and physical systems.
[ { "created": "Tue, 23 Jan 2018 07:44:09 GMT", "version": "v1" } ]
2018-01-24
[ [ "Koh", "Jin Ming", "" ], [ "Cheong", "Kang Hao", "" ] ]
Previously, we developed a population model incorporating the Allee effect and periodic environmental fluctuations, in which organisms alternate between nomadic and colonial behaviours. This switching strategy is regulated by biological clocks and the abundance of environmental resources, and can lead to population persistence despite both behaviours being individually losing. In the present study, we consider stochastic noise models in place of the original periodic ones, thereby allowing a wider range of environmental fluctuations to be modelled. The theoretical framework is generalized to account for resource depletion by both nomadic and colonial sub-populations, and an ecologically realistic population size-dependent switching scheme is proposed. We demonstrate the robustness of the modified switching scheme to stochastic noise, and we also present the intriguing possibility of consecutive subsidence-recovery cycles within the resulting population dynamics. Our results have relevance in biological and physical systems.
2110.11161
Olha Shchur
Olha Shchur and Alexander Vidybida
Distribution of interspike intervals of a neuron with inhibitory autapse stimulated with a renewal process
12 pages, 3 figures
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study analytically the impact of an inhibitory autapse on neuronal activity. In order to do this, we formulate conditions on a set of non-adaptive spiking neuron models with delayed feedback inhibition, instead of considering a particular neuronal model. The neuron is stimulated with a stochastic point renewal process of excitatory impulses. Probability density function (PDF) $p(t)$ of output interspike intervals (ISIs) of such a neuron is found exactly without any approximations made. It is expressed in terms of ISIs PDF for the input renewal stream and ISIs PDF for that same neuron without any feedback. Obtained results are applied to a subset of neuronal models with threshold 2 when the time intervals between input impulses are distributed according to the Erlang-2 distribution. In that case we have found explicitly the model-independent initial part of ISIs PDF $p(t)$ defined at some initial interval $[0;T_2]$ of ISI values.
[ { "created": "Thu, 21 Oct 2021 14:13:37 GMT", "version": "v1" }, { "created": "Wed, 12 Oct 2022 14:30:27 GMT", "version": "v2" } ]
2022-10-13
[ [ "Shchur", "Olha", "" ], [ "Vidybida", "Alexander", "" ] ]
In this paper, we study analytically the impact of an inhibitory autapse on neuronal activity. In order to do this, we formulate conditions on a set of non-adaptive spiking neuron models with delayed feedback inhibition, instead of considering a particular neuronal model. The neuron is stimulated with a stochastic point renewal process of excitatory impulses. Probability density function (PDF) $p(t)$ of output interspike intervals (ISIs) of such a neuron is found exactly without any approximations made. It is expressed in terms of ISIs PDF for the input renewal stream and ISIs PDF for that same neuron without any feedback. Obtained results are applied to a subset of neuronal models with threshold 2 when the time intervals between input impulses are distributed according to the Erlang-2 distribution. In that case we have found explicitly the model-independent initial part of ISIs PDF $p(t)$ defined at some initial interval $[0;T_2]$ of ISI values.
1701.01361
Christophe Gole
Pau Atela and Christophe Gole
Rhombic Tilings and Primordia Fronts of Phyllotaxis
33 pages, 10 pictures
null
null
null
q-bio.TO math.DS
http://creativecommons.org/publicdomain/zero/1.0/
We introduce and study properties of phyllotactic and rhombic tilings on the cylinder. These are discrete sets of points that generalize cylindrical lattices. Rhombic tilings appear as periodic orbits of a discrete dynamical system S that models plant pattern formation by stacking disks of equal radius on the cylinder. This system has the advantage of allowing several disks at the same level, and thus multi-jugate configurations. We provide partial results toward proving that the attractor for S is entirely composed of rhombic tilings and is a strongly normally attracting branched manifold and conjecture that this attractor persists topologically in nearby systems. A key tool in understanding the geometry of tilings and the dynamics of S is the concept of primordia front, which is a closed ring of tangent disks around the cylinder. We show how fronts determine the dynamics, including transitions of parastichy numbers, and might explain the Fibonacci number of petals often encountered in compositae.
[ { "created": "Wed, 4 Jan 2017 20:19:44 GMT", "version": "v1" } ]
2017-01-06
[ [ "Atela", "Pau", "" ], [ "Gole", "Christophe", "" ] ]
We introduce and study properties of phyllotactic and rhombic tilings on the cylinder. These are discrete sets of points that generalize cylindrical lattices. Rhombic tilings appear as periodic orbits of a discrete dynamical system S that models plant pattern formation by stacking disks of equal radius on the cylinder. This system has the advantage of allowing several disks at the same level, and thus multi-jugate configurations. We provide partial results toward proving that the attractor for S is entirely composed of rhombic tilings and is a strongly normally attracting branched manifold and conjecture that this attractor persists topologically in nearby systems. A key tool in understanding the geometry of tilings and the dynamics of S is the concept of primordia front, which is a closed ring of tangent disks around the cylinder. We show how fronts determine the dynamics, including transitions of parastichy numbers, and might explain the Fibonacci number of petals often encountered in compositae.
q-bio/0409036
Haret Rosu
L.A. Torres, V. Ibarra-Junquera, P. Escalante-Minakata, H.C. Rosu
High-gain nonlinear observer for simple genetic regulation process
9 pages, one figure
Physica A 380 (2007) 235-240
10.1016/j.physa.2007.02.105
null
q-bio.QM q-bio.MN
null
High-gain nonlinear observers occur in the nonlinear automatic control theory and are in standard usage in chemical engineering processes. We apply such a type of analysis in the context of a very simple one-gene regulation circuit. In general, an observer combines an analytical differential-equation-based model with partial measurement of the system in order to estimate the non-measured state variables. We use one of the simplest observers, that of Gauthier et al., which is a copy of the original system plus a correction term which is easy to calculate. For the illustration of this procedure, we employ a biological model, recently adapted from Goodwin's old book by De Jong, in which one plays with the dynamics of the concentrations of the messenger RNA coding for a given protein, the protein itself, and a single metabolite. Using the observer instead of the metabolite, it is possible to rebuild the non-measured concentrations of the mRNA and the protein.
[ { "created": "Wed, 29 Sep 2004 22:42:46 GMT", "version": "v1" }, { "created": "Mon, 16 Jul 2007 17:47:20 GMT", "version": "v2" } ]
2007-07-16
[ [ "Torres", "L. A.", "" ], [ "Ibarra-Junquera", "V.", "" ], [ "Escalante-Minakata", "P.", "" ], [ "Rosu", "H. C.", "" ] ]
High-gain nonlinear observers occur in the nonlinear automatic control theory and are in standard usage in chemical engineering processes. We apply such a type of analysis in the context of a very simple one-gene regulation circuit. In general, an observer combines an analytical differential-equation-based model with partial measurement of the system in order to estimate the non-measured state variables. We use one of the simplest observers, that of Gauthier et al., which is a copy of the original system plus a correction term which is easy to calculate. For the illustration of this procedure, we employ a biological model, recently adapted from Goodwin's old book by De Jong, in which one plays with the dynamics of the concentrations of the messenger RNA coding for a given protein, the protein itself, and a single metabolite. Using the observer instead of the metabolite, it is possible to rebuild the non-measured concentrations of the mRNA and the protein.
2006.03611
Markus D Schirmer
Markus D. Schirmer, Archana Venkataraman, Islem Rekik, Minjeong Kim, Stewart H. Mostofsky, Mary Beth Nebel, Keri Rosch, Karen Seymour, Deana Crocetti, Hassna Irzan, Michael H\"utel, Sebastien Ourselin, Neil Marlow, Andrew Melbourne, Egor Levchenko, Shuo Zhou, Mwiza Kunda, Haiping Lu, Nicha C. Dvornek, Juntang Zhuang, Gideon Pinto, Sandip Samal, Jennings Zhang, Jorge L. Bernal-Rusiel, Rudolph Pienaar, Ai Wern Chung
Neuropsychiatric Disease Classification Using Functional Connectomics -- Results of the Connectomics in NeuroImaging Transfer Learning Challenge
CNI-TLC was held in conjunction with MICCAI 2019
null
null
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large, open-source consortium datasets have spurred the development of new and increasingly powerful machine learning approaches in brain connectomics. However, one key question remains: are we capturing biologically relevant and generalizable information about the brain, or are we simply overfitting to the data? To answer this, we organized a scientific challenge, the Connectomics in NeuroImaging Transfer Learning Challenge (CNI-TLC), held in conjunction with MICCAI 2019. CNI-TLC included two classification tasks: (1) diagnosis of Attention-Deficit/Hyperactivity Disorder (ADHD) within a pre-adolescent cohort; and (2) transference of the ADHD model to a related cohort of Autism Spectrum Disorder (ASD) patients with an ADHD comorbidity. In total, 240 resting-state fMRI time series averaged according to three standard parcellation atlases, along with clinical diagnosis, were released for training and validation (120 neurotypical controls and 120 ADHD). We also provided demographic information of age, sex, IQ, and handedness. A second set of 100 subjects (50 neurotypical controls, 25 ADHD, and 25 ASD with ADHD comorbidity) was used for testing. Models were submitted in a standardized format as Docker images through ChRIS, an open-source image analysis platform. Utilizing an inclusive approach, we ranked the methods based on 16 different metrics. The final rank was calculated using the rank product for each participant across all measures. Furthermore, we assessed the calibration curves of each method. Five participants submitted their model for evaluation, with one outperforming all other methods in both ADHD and ASD classification. However, further improvements are needed to reach the clinical translation of functional connectomics. We are keeping the CNI-TLC open as a publicly available resource for developing and validating new classification methodologies in the field of connectomics.
[ { "created": "Fri, 5 Jun 2020 18:05:42 GMT", "version": "v1" }, { "created": "Wed, 25 Nov 2020 11:45:57 GMT", "version": "v2" } ]
2020-11-26
[ [ "Schirmer", "Markus D.", "" ], [ "Venkataraman", "Archana", "" ], [ "Rekik", "Islem", "" ], [ "Kim", "Minjeong", "" ], [ "Mostofsky", "Stewart H.", "" ], [ "Nebel", "Mary Beth", "" ], [ "Rosch", "Keri", "" ], [ "Seymour", "Karen", "" ], [ "Crocetti", "Deana", "" ], [ "Irzan", "Hassna", "" ], [ "Hütel", "Michael", "" ], [ "Ourselin", "Sebastien", "" ], [ "Marlow", "Neil", "" ], [ "Melbourne", "Andrew", "" ], [ "Levchenko", "Egor", "" ], [ "Zhou", "Shuo", "" ], [ "Kunda", "Mwiza", "" ], [ "Lu", "Haiping", "" ], [ "Dvornek", "Nicha C.", "" ], [ "Zhuang", "Juntang", "" ], [ "Pinto", "Gideon", "" ], [ "Samal", "Sandip", "" ], [ "Zhang", "Jennings", "" ], [ "Bernal-Rusiel", "Jorge L.", "" ], [ "Pienaar", "Rudolph", "" ], [ "Chung", "Ai Wern", "" ] ]
Large, open-source consortium datasets have spurred the development of new and increasingly powerful machine learning approaches in brain connectomics. However, one key question remains: are we capturing biologically relevant and generalizable information about the brain, or are we simply overfitting to the data? To answer this, we organized a scientific challenge, the Connectomics in NeuroImaging Transfer Learning Challenge (CNI-TLC), held in conjunction with MICCAI 2019. CNI-TLC included two classification tasks: (1) diagnosis of Attention-Deficit/Hyperactivity Disorder (ADHD) within a pre-adolescent cohort; and (2) transference of the ADHD model to a related cohort of Autism Spectrum Disorder (ASD) patients with an ADHD comorbidity. In total, 240 resting-state fMRI time series averaged according to three standard parcellation atlases, along with clinical diagnosis, were released for training and validation (120 neurotypical controls and 120 ADHD). We also provided demographic information of age, sex, IQ, and handedness. A second set of 100 subjects (50 neurotypical controls, 25 ADHD, and 25 ASD with ADHD comorbidity) was used for testing. Models were submitted in a standardized format as Docker images through ChRIS, an open-source image analysis platform. Utilizing an inclusive approach, we ranked the methods based on 16 different metrics. The final rank was calculated using the rank product for each participant across all measures. Furthermore, we assessed the calibration curves of each method. Five participants submitted their model for evaluation, with one outperforming all other methods in both ADHD and ASD classification. However, further improvements are needed to reach the clinical translation of functional connectomics. We are keeping the CNI-TLC open as a publicly available resource for developing and validating new classification methodologies in the field of connectomics.
2407.07930
Rui Qin
Jike Wang, Rui Qin, Mingyang Wang, Meijing Fang, Yangyang Zhang, Yuchen Zhu, Qun Su, Qiaolin Gou, Chao Shen, Odin Zhang, Zhenxing Wu, Dejun Jiang, Xujun Zhang, Huifeng Zhao, Xiaozhe Wan, Zhourui Wu, Liwei Liu, Yu Kang, Chang-Yu Hsieh, Tingjun Hou
Token-Mol 1.0: Tokenized drug design with large language model
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Significant interests have recently risen in leveraging sequence-based large language models (LLMs) for drug design. However, most current applications of LLMs in drug discovery lack the ability to comprehend three-dimensional (3D) structures, thereby limiting their effectiveness in tasks that explicitly involve molecular conformations. In this study, we introduced Token-Mol, a token-only 3D drug design model. This model encodes all molecular information, including 2D and 3D structures, as well as molecular property data, into tokens, which transforms classification and regression tasks in drug discovery into probabilistic prediction problems, thereby enabling learning through a unified paradigm. Token-Mol is built on the transformer decoder architecture and trained using random causal masking techniques. Additionally, we proposed the Gaussian cross-entropy (GCE) loss function to overcome the challenges in regression tasks, significantly enhancing the capacity of LLMs to learn continuous numerical values. Through a combination of fine-tuning and reinforcement learning (RL), Token-Mol achieves performance comparable to or surpassing existing task-specific methods across various downstream tasks, including pocket-based molecular generation, conformation generation, and molecular property prediction. Compared to existing molecular pre-trained models, Token-Mol exhibits superior proficiency in handling a wider range of downstream tasks essential for drug design. Notably, our approach improves regression task accuracy by approximately 30% compared to similar token-only methods. Token-Mol overcomes the precision limitations of token-only models and has the potential to integrate seamlessly with general models such as ChatGPT, paving the way for the development of a universal artificial intelligence drug design model that facilitates rapid and high-quality drug design by experts.
[ { "created": "Wed, 10 Jul 2024 07:22:15 GMT", "version": "v1" } ]
2024-07-12
[ [ "Wang", "Jike", "" ], [ "Qin", "Rui", "" ], [ "Wang", "Mingyang", "" ], [ "Fang", "Meijing", "" ], [ "Zhang", "Yangyang", "" ], [ "Zhu", "Yuchen", "" ], [ "Su", "Qun", "" ], [ "Gou", "Qiaolin", "" ], [ "Shen", "Chao", "" ], [ "Zhang", "Odin", "" ], [ "Wu", "Zhenxing", "" ], [ "Jiang", "Dejun", "" ], [ "Zhang", "Xujun", "" ], [ "Zhao", "Huifeng", "" ], [ "Wan", "Xiaozhe", "" ], [ "Wu", "Zhourui", "" ], [ "Liu", "Liwei", "" ], [ "Kang", "Yu", "" ], [ "Hsieh", "Chang-Yu", "" ], [ "Hou", "Tingjun", "" ] ]
Significant interests have recently risen in leveraging sequence-based large language models (LLMs) for drug design. However, most current applications of LLMs in drug discovery lack the ability to comprehend three-dimensional (3D) structures, thereby limiting their effectiveness in tasks that explicitly involve molecular conformations. In this study, we introduced Token-Mol, a token-only 3D drug design model. This model encodes all molecular information, including 2D and 3D structures, as well as molecular property data, into tokens, which transforms classification and regression tasks in drug discovery into probabilistic prediction problems, thereby enabling learning through a unified paradigm. Token-Mol is built on the transformer decoder architecture and trained using random causal masking techniques. Additionally, we proposed the Gaussian cross-entropy (GCE) loss function to overcome the challenges in regression tasks, significantly enhancing the capacity of LLMs to learn continuous numerical values. Through a combination of fine-tuning and reinforcement learning (RL), Token-Mol achieves performance comparable to or surpassing existing task-specific methods across various downstream tasks, including pocket-based molecular generation, conformation generation, and molecular property prediction. Compared to existing molecular pre-trained models, Token-Mol exhibits superior proficiency in handling a wider range of downstream tasks essential for drug design. Notably, our approach improves regression task accuracy by approximately 30% compared to similar token-only methods. Token-Mol overcomes the precision limitations of token-only models and has the potential to integrate seamlessly with general models such as ChatGPT, paving the way for the development of a universal artificial intelligence drug design model that facilitates rapid and high-quality drug design by experts.
1006.5560
Jose A Capitan
Jose A. Capitan and Jose A. Cuesta
Species assembly in model ecosystems, I: Analysis of the population model and the invasion dynamics
16 pages, 8 figures. Revised version
Journal of Theoretical Biology 269, 330-343 (2011)
10.1016/j.jtbi.2010.09.032
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently we have introduced a simplified model of ecosystem assembly (Capitan et al., 2009) for which we are able to map out all assembly pathways generated by external invasions in an exact manner. In this paper we provide a deeper analysis of the model, obtaining analytical results and introducing some approximations which allow us to reconstruct the results of our previous work. In particular, we show that the population dynamics equations of a very general class of trophic-level structured food-web have a unique interior equilibrium point which is globally stable. We show analytically that communities found as end states of the assembly process are pyramidal and we find that the equilibrium abundance of any species at any trophic level is approximately inversely proportional to the number of species in that level. We also find that the per capita growth rate of a top predator invading a resident community is key to understand the appearance of complex end states reported in our previous work. The sign of these rates allows us to separate regions in the space of parameters where the end state is either a single community or a complex set containing more than one community. We have also built up analytical approximations to the time evolution of species abundances that allow us to determine, with high accuracy, the sequence of extinctions that an invasion may cause. Finally we apply this analysis to obtain the communities in the end states. To test the accuracy of the transition probability matrix generated by this analytical procedure for the end states, we have compared averages over those sets with those obtained from the graph derived by numerical integration of the Lotka-Volterra equations. The agreement is excellent.
[ { "created": "Tue, 29 Jun 2010 10:26:13 GMT", "version": "v1" }, { "created": "Wed, 22 Sep 2010 11:19:01 GMT", "version": "v2" } ]
2015-02-18
[ [ "Capitan", "Jose A.", "" ], [ "Cuesta", "Jose A.", "" ] ]
Recently we have introduced a simplified model of ecosystem assembly (Capitan et al., 2009) for which we are able to map out all assembly pathways generated by external invasions in an exact manner. In this paper we provide a deeper analysis of the model, obtaining analytical results and introducing some approximations which allow us to reconstruct the results of our previous work. In particular, we show that the population dynamics equations of a very general class of trophic-level structured food-web have a unique interior equilibrium point which is globally stable. We show analytically that communities found as end states of the assembly process are pyramidal and we find that the equilibrium abundance of any species at any trophic level is approximately inversely proportional to the number of species in that level. We also find that the per capita growth rate of a top predator invading a resident community is key to understand the appearance of complex end states reported in our previous work. The sign of these rates allows us to separate regions in the space of parameters where the end state is either a single community or a complex set containing more than one community. We have also built up analytical approximations to the time evolution of species abundances that allow us to determine, with high accuracy, the sequence of extinctions that an invasion may cause. Finally we apply this analysis to obtain the communities in the end states. To test the accuracy of the transition probability matrix generated by this analytical procedure for the end states, we have compared averages over those sets with those obtained from the graph derived by numerical integration of the Lotka-Volterra equations. The agreement is excellent.
1403.2488
Margarita Ifti
Margarita Ifti and Birger Bergersen
Phase Transitions in Systems of Interacting Species
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We discuss an autocatalytic reaction system: the cyclic competition model (A1 + A2 --> 2 A2, A2 + A3 --> 2 A3, A3 + A4 --> 2 A4, A4 + A1 --> 2 A1), as well as its neutral counterpart. Migrations are introduced into the model. When stochastic phenomena are taken into account, a phase transition between a ``fixation'' and a ``neutral'' regime is observed. In the ``fixation'' regime, species A1 and A3 form an alliance against species A2 and A4, and the final state is one in which one of the symbiotic pairs has won. The odd--even ``coarse--grained'' system is mapped onto the two--species neutral (Kimura) model. In the ``neutral'' regime, all four species survive for long (evolutionary) times. The analytical results are checked against computer simulations of the model. The model is generalized for n species. Also, a generalized version of the Volterra model is analysed.
[ { "created": "Tue, 11 Mar 2014 07:13:17 GMT", "version": "v1" } ]
2014-03-12
[ [ "Ifti", "Margarita", "" ], [ "Bergersen", "Birger", "" ] ]
We discuss an autocatalytic reaction system: the cyclic competition model (A1 + A2 --> 2 A2, A2 + A3 --> 2 A3, A3 + A4 --> 2 A4, A4 + A1 --> 2 A1), as well as its neutral counterpart. Migrations are introduced into the model. When stochastic phenomena are taken into account, a phase transition between a ``fixation'' and a ``neutral'' regime is observed. In the ``fixation'' regime, species A1 and A3 form an alliance against species A2 and A4, and the final state is one in which one of the symbiotic pairs has won. The odd--even ``coarse--grained'' system is mapped onto the two--species neutral (Kimura) model. In the ``neutral'' regime, all four species survive for long (evolutionary) times. The analytical results are checked against computer simulations of the model. The model is generalized for n species. Also, a generalized version of the Volterra model is analysed.
2011.09232
Audrey B\"urki
Pamela Fuhrmeister, Sylvain Madec, Antje Lorenz, Shereen Elbuy, Audrey B\"urki
Behavioral and EEG evidence for inter-individual variability in late encoding stages of word production
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Individuals differ in the time it takes to produce words when naming a picture. However, it is unknown whether this inter-individual variability emerges in earlier stages of word production (e.g., lexical selection) or later stages (e.g., articulation). The current study measured participants' (N = 45) naming latencies and continuous EEG in a picture-word-interference task, as well as naming latencies in a delayed naming task. The inter-individual variability in naming latencies in immediate naming was not larger than the variability in the delayed task. Thus, a large part of the variability in immediate naming seems to originate in relatively late stages of word production. This interpretation was complemented by the EEG data: Differences between relatively fast vs. slow speakers were seen in response-aligned analyses in a time window close to the vocal response. Finally, we show that inter-individual variability can influence EEG results at the group level.
[ { "created": "Wed, 18 Nov 2020 11:58:32 GMT", "version": "v1" } ]
2020-11-19
[ [ "Fuhrmeister", "Pamela", "" ], [ "Madec", "Sylvain", "" ], [ "Lorenz", "Antje", "" ], [ "Elbuy", "Shereen", "" ], [ "Bürki", "Audrey", "" ] ]
Individuals differ in the time it takes to produce words when naming a picture. However, it is unknown whether this inter-individual variability emerges in earlier stages of word production (e.g., lexical selection) or later stages (e.g., articulation). The current study measured participants' (N = 45) naming latencies and continuous EEG in a picture-word-interference task, as well as naming latencies in a delayed naming task. The inter-individual variability in naming latencies in immediate naming was not larger than the variability in the delayed task. Thus, a large part of the variability in immediate naming seems to originate in relatively late stages of word production. This interpretation was complemented by the EEG data: Differences between relatively fast vs. slow speakers were seen in response-aligned analyses in a time window close to the vocal response. Finally, we show that inter-individual variability can influence EEG results at the group level.
1611.09648
Ranjit Bahadur P
Naga Bhushana Rao .K and Ranjit Prasad Bahadur
PeBLes: Prediction of B-cell epitope using molecular layers
27 pages, 5 Figures and 3 Tables
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Characterization of B-cell protein epitopes and developing critical parameters for their identification is one of the long-standing interests. Using the Layers algorithm, we introduced the concept of anchor residues to identify epitopes. We have shown that the majority of the epitope is composed of anchor residues and that there is a significant bias in the epitope for these residues. We optimized the search space reduction for epitope identification. We used Layers to non-randomly sample the antigen surface, reducing the molecular surface to an average of 75 residues while preserving 50% of the epitope in the sampled surface. To facilitate comparison among the methods favored by researchers, we compared the popular techniques used to identify epitopes in terms of their sampling performance and evaluation. We proposed an optimum Sr of 16 {\AA} to sample the antigen molecules to reduce the search space, in which the epitope is identified using the buried surface area method. We used combinations of molecular surface sampling, anchor residue intensity in the surface, secondary structure and sequence information to predict epitopes at an accuracy of 89%. A web application is made available at http://www.csb.iitkgp.ernet.in/applications/b_cell_epitope_pred/main.
[ { "created": "Tue, 29 Nov 2016 14:30:56 GMT", "version": "v1" } ]
2016-11-30
[ [ "K", "Naga Bhushana Rao .", "" ], [ "Bahadur", "Ranjit Prasad", "" ] ]
Characterization of B-cell protein epitopes and developing critical parameters for their identification is one of the long-standing interests. Using the Layers algorithm, we introduced the concept of anchor residues to identify epitopes. We have shown that the majority of the epitope is composed of anchor residues and that there is a significant bias in the epitope for these residues. We optimized the search space reduction for epitope identification. We used Layers to non-randomly sample the antigen surface, reducing the molecular surface to an average of 75 residues while preserving 50% of the epitope in the sampled surface. To facilitate comparison among the methods favored by researchers, we compared the popular techniques used to identify epitopes in terms of their sampling performance and evaluation. We proposed an optimum Sr of 16 {\AA} to sample the antigen molecules to reduce the search space, in which the epitope is identified using the buried surface area method. We used combinations of molecular surface sampling, anchor residue intensity in the surface, secondary structure and sequence information to predict epitopes at an accuracy of 89%. A web application is made available at http://www.csb.iitkgp.ernet.in/applications/b_cell_epitope_pred/main.
1408.5729
Peter Ashcroft
Peter Ashcroft, Franziska Michor and Tobias Galla
Stochastic tunneling and metastable states during the somatic evolution of cancer
33 pages, 7 figures
Genetics 199.4 (2015) 1213-1228
10.1534/genetics.114.171553
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tumors initiate when a population of proliferating cells accumulates a certain number and type of genetic and/or epigenetic alterations. The population dynamics of such sequential acquisition of (epi)genetic alterations has been the topic of much investigation. The phenomenon of stochastic tunneling, where an intermediate mutant in a sequence does not reach fixation in a population before generating a double mutant, has been studied using a variety of computational and mathematical methods. However, the field still lacks a comprehensive analytical description since theoretical predictions of fixation times are only available for cases in which the second mutant is advantageous. Here, we study stochastic tunneling in a Moran model. Analyzing the deterministic dynamics of large populations we systematically identify the parameter regimes captured by existing approaches. Our analysis also reveals fitness landscapes and mutation rates for which finite populations are found in long-lived metastable states. These are landscapes in which the final mutant is not the most advantageous in the sequence, and resulting metastable states are a consequence of a mutation-selection balance. The escape from these states is driven by intrinsic noise, and their location affects the probability of tunneling. Existing methods no longer apply. In these regimes it is the escape from the metastable states that is the key bottleneck; fixation is no longer limited by the emergence of a successful mutant lineage. We used the so-called Wentzel-Kramers-Brillouin method to compute fixation times in these parameter regimes, successfully validated by stochastic simulations. Our work fills a gap left by previous approaches and provides a more comprehensive description of the acquisition of multiple mutations in populations of somatic cells.
[ { "created": "Mon, 25 Aug 2014 12:16:35 GMT", "version": "v1" }, { "created": "Wed, 8 Apr 2015 19:06:52 GMT", "version": "v2" } ]
2015-04-09
[ [ "Ashcroft", "Peter", "" ], [ "Michor", "Franziska", "" ], [ "Galla", "Tobias", "" ] ]
Tumors initiate when a population of proliferating cells accumulates a certain number and type of genetic and/or epigenetic alterations. The population dynamics of such sequential acquisition of (epi)genetic alterations has been the topic of much investigation. The phenomenon of stochastic tunneling, where an intermediate mutant in a sequence does not reach fixation in a population before generating a double mutant, has been studied using a variety of computational and mathematical methods. However, the field still lacks a comprehensive analytical description since theoretical predictions of fixation times are only available for cases in which the second mutant is advantageous. Here, we study stochastic tunneling in a Moran model. Analyzing the deterministic dynamics of large populations we systematically identify the parameter regimes captured by existing approaches. Our analysis also reveals fitness landscapes and mutation rates for which finite populations are found in long-lived metastable states. These are landscapes in which the final mutant is not the most advantageous in the sequence, and resulting metastable states are a consequence of a mutation-selection balance. The escape from these states is driven by intrinsic noise, and their location affects the probability of tunneling. Existing methods no longer apply. In these regimes it is the escape from the metastable states that is the key bottleneck; fixation is no longer limited by the emergence of a successful mutant lineage. We used the so-called Wentzel-Kramers-Brillouin method to compute fixation times in these parameter regimes, successfully validated by stochastic simulations. Our work fills a gap left by previous approaches and provides a more comprehensive description of the acquisition of multiple mutations in populations of somatic cells.
2001.03614
Claire Meissner-Bernard
Claire Meissner-Bernard, Matthias Tsai, Laureline Logiaco, Wulfram Gerstner
Paradoxical Results of Long-Term Potentiation explained by Voltage-based Plasticity Rule
null
Front. Synaptic Neurosci. (2020) 12:585539
10.3389/fnsyn.2020.585539
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experiments have shown that the same stimulation pattern that causes Long-Term Potentiation in proximal synapses will induce Long-Term Depression in distal ones. In order to understand these, and other, surprising observations we use a phenomenological model of Hebbian plasticity at the location of the synapse. Our computational model describes the Hebbian condition of joint activity of pre- and post-synaptic neurons in a compact form as the interaction of the glutamate trace left by a presynaptic spike with the time course of the postsynaptic voltage. We test the model using experimentally recorded dendritic voltage traces in hippocampus and neocortex. We find that the time course of the voltage in the neighborhood of a stimulated synapse is a reliable predictor of whether a stimulated synapse undergoes potentiation, depression, or no change. Our model can explain the existence of different -- at first glance seemingly paradoxical -- outcomes of synaptic potentiation and depression experiments depending on the dendritic location of the synapse and the frequency or timing of the stimulation.
[ { "created": "Fri, 10 Jan 2020 18:57:06 GMT", "version": "v1" }, { "created": "Mon, 12 Oct 2020 21:52:02 GMT", "version": "v2" }, { "created": "Wed, 18 Nov 2020 20:03:33 GMT", "version": "v3" } ]
2020-11-20
[ [ "Meissner-Bernard", "Claire", "" ], [ "Tsai", "Matthias", "" ], [ "Logiaco", "Laureline", "" ], [ "Gerstner", "Wulfram", "" ] ]
Experiments have shown that the same stimulation pattern that causes Long-Term Potentiation in proximal synapses will induce Long-Term Depression in distal ones. In order to understand these, and other, surprising observations we use a phenomenological model of Hebbian plasticity at the location of the synapse. Our computational model describes the Hebbian condition of joint activity of pre- and post-synaptic neurons in a compact form as the interaction of the glutamate trace left by a presynaptic spike with the time course of the postsynaptic voltage. We test the model using experimentally recorded dendritic voltage traces in hippocampus and neocortex. We find that the time course of the voltage in the neighborhood of a stimulated synapse is a reliable predictor of whether a stimulated synapse undergoes potentiation, depression, or no change. Our model can explain the existence of different -- at first glance seemingly paradoxical -- outcomes of synaptic potentiation and depression experiments depending on the dendritic location of the synapse and the frequency or timing of the stimulation.
2011.01294
Kerim Anlas
Kerim Anlas and Vikas Trivedi
Studying evolution of the primary body axis in vivo and in vitro
null
null
null
null
q-bio.PE q-bio.CB q-bio.TO
http://creativecommons.org/licenses/by-nc-sa/4.0/
The metazoan body plan is established during early embryogenesis via collective cell rearrangements and evolutionarily conserved gene networks, as part of a process commonly referred to as gastrulation. While substantial progress has been achieved in terms of characterizing the embryonic development of several model organisms, underlying principles of many early patterning processes nevertheless remain enigmatic. Despite the diversity of (pre-)gastrulating embryo and adult body shapes across the animal kingdom, the body axes, which are arguably the most fundamental features, generally remain identical between phyla. Recently there has been a renewed appreciation of ex vivo and in vitro embryo-like systems to model early embryonic patterning events. Here, we briefly review key examples and propose that similarities in morphogenesis as well as associated gene expression dynamics may reveal an evolutionarily conserved developmental mode as well as provide further insights into the role of external or extraembryonic cues in shaping the early embryo. In summary, we argue that embryo-like systems can be employed to inform previously uncharted aspects of animal body plan evolution as well as associated patterning rules.
[ { "created": "Mon, 2 Nov 2020 20:24:33 GMT", "version": "v1" }, { "created": "Mon, 23 Nov 2020 11:26:07 GMT", "version": "v2" } ]
2020-11-24
[ [ "Anlas", "Kerim", "" ], [ "Trivedi", "Vikas", "" ] ]
The metazoan body plan is established during early embryogenesis via collective cell rearrangements and evolutionarily conserved gene networks, as part of a process commonly referred to as gastrulation. While substantial progress has been achieved in terms of characterizing the embryonic development of several model organisms, underlying principles of many early patterning processes nevertheless remain enigmatic. Despite the diversity of (pre-)gastrulating embryo and adult body shapes across the animal kingdom, the body axes, which are arguably the most fundamental features, generally remain identical between phyla. Recently there has been a renewed appreciation of ex vivo and in vitro embryo-like systems to model early embryonic patterning events. Here, we briefly review key examples and propose that similarities in morphogenesis as well as associated gene expression dynamics may reveal an evolutionarily conserved developmental mode as well as provide further insights into the role of external or extraembryonic cues in shaping the early embryo. In summary, we argue that embryo-like systems can be employed to inform previously uncharted aspects of animal body plan evolution as well as associated patterning rules.
2111.10628
Andrew Whetten
Andrew B. Whetten
Localized Mutual Information Monitoring of Pairwise Associations in Animal Movement
null
null
null
null
q-bio.QM q-bio.PE stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advances in satellite imaging and GPS tracking devices have given rise to a new era of remote sensing and geospatial analysis. In environmental science and conservation ecology, biotelemetric data is often high-dimensional, spatially and/or temporally, and functional in nature, meaning that there is an underlying continuity to the biological process of interest. GPS-tracking of animal movement is commonly characterized by irregular time-recording of animal position, and the movement relationships between animals are prone to sudden change. In this paper, I propose a measure of localized mutual information (LMI) to derive a correlation function for monitoring changes in the pairwise association between animal movement trajectories. The properties of the LMI measure are assessed analytically and by simulation under a variety of circumstances. Advantages and disadvantages of the LMI measure are assessed and an alternate measure of LMI is proposed to handle potential disadvantages. The measure of LMI is shown to be an effective tool for detecting shifts in the correlation of animal movements, and seasonal/phasal correlatory structure.
[ { "created": "Sat, 20 Nov 2021 16:45:50 GMT", "version": "v1" }, { "created": "Thu, 2 Dec 2021 02:51:00 GMT", "version": "v2" }, { "created": "Tue, 7 Dec 2021 05:10:11 GMT", "version": "v3" }, { "created": "Thu, 6 Jan 2022 01:29:47 GMT", "version": "v4" } ]
2022-01-07
[ [ "Whetten", "Andrew B.", "" ] ]
Advances in satellite imaging and GPS tracking devices have given rise to a new era of remote sensing and geospatial analysis. In environmental science and conservation ecology, biotelemetric data is often high-dimensional, spatially and/or temporally, and functional in nature, meaning that there is an underlying continuity to the biological process of interest. GPS-tracking of animal movement is commonly characterized by irregular time-recording of animal position, and the movement relationships between animals are prone to sudden change. In this paper, I propose a measure of localized mutual information (LMI) to derive a correlation function for monitoring changes in the pairwise association between animal movement trajectories. The properties of the LMI measure are assessed analytically and by simulation under a variety of circumstances. Advantages and disadvantages of the LMI measure are assessed and an alternate measure of LMI is proposed to handle potential disadvantages. The measure of LMI is shown to be an effective tool for detecting shifts in the correlation of animal movements, and seasonal/phasal correlatory structure.
1306.3465
Alvaro Sanchez
Andrew Chen, Alvaro Sanchez, Lei Dai, Jeff Gore
Dynamics of a producer-parasite ecosystem on the brink of collapse
null
null
10.1038/ncomms4713
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecosystems can undergo sudden shifts to undesirable states, but recent studies with simple single species ecosystems have demonstrated that advance warning can be provided by the slowing down of population dynamics near a tipping point. However, it is not clear how this effect of critical slowing down will manifest in ecosystems with strong interactions between their components. Here we probe the dynamics of an experimental producer parasite ecosystem as it approaches a catastrophic collapse. Surprisingly, the producer population grows in size as the environment deteriorates, highlighting that population size can be a misleading measure of ecosystem stability. By analyzing the oscillatory producer parasite dynamics for over ~100 generations in multiple environmental conditions, we found that the collective ecosystem dynamics slows down as the tipping point is approached. Analysis of the coupled dynamics of interacting populations may therefore be necessary to provide advance warning of collapse in complex communities.
[ { "created": "Fri, 14 Jun 2013 17:37:25 GMT", "version": "v1" } ]
2015-06-16
[ [ "Chen", "Andrew", "" ], [ "Sanchez", "Alvaro", "" ], [ "Dai", "Lei", "" ], [ "Gore", "Jeff", "" ] ]
Ecosystems can undergo sudden shifts to undesirable states, but recent studies with simple single species ecosystems have demonstrated that advance warning can be provided by the slowing down of population dynamics near a tipping point. However, it is not clear how this effect of critical slowing down will manifest in ecosystems with strong interactions between their components. Here we probe the dynamics of an experimental producer parasite ecosystem as it approaches a catastrophic collapse. Surprisingly, the producer population grows in size as the environment deteriorates, highlighting that population size can be a misleading measure of ecosystem stability. By analyzing the oscillatory producer parasite dynamics for over ~100 generations in multiple environmental conditions, we found that the collective ecosystem dynamics slows down as the tipping point is approached. Analysis of the coupled dynamics of interacting populations may therefore be necessary to provide advance warning of collapse in complex communities.
q-bio/0605040
Alexei Vazquez
Alexei Vazquez
Epidemic outbreaks on structured populations
10 pages, 4 figures. J. Theor. Biol. (In press)
J. Theor. Biol. 245, 125 (2007)
null
null
q-bio.PE cond-mat.stat-mech physics.bio-ph q-bio.QM
null
Our chances to halt epidemic outbreaks rely on how accurately we represent the population structure underlying the disease spread. When analyzing global epidemics this forces us to consider metapopulation models taking into account intra- and inter-community interactions. Recently Watts et al introduced a metapopulation model which accounts for several features observed in real outbreaks [Watts et al, PNAS 102, 11157 (2005)]. In this work I provide an analytical solution to this model, enhancing our understanding of the model and the epidemic outbreaks it represents. First, I demonstrate that depending on the intra-community expected outbreak size and the fraction of social bridges the epidemic outbreaks die out or there is a finite probability to observe a global epidemic. Second, I show that the global scenario is characterized by resurgent epidemics, their number increasing with the intra-community average distance between individuals. Finally, I present empirical data for the AIDS epidemic supporting the model predictions.
[ { "created": "Wed, 24 May 2006 16:30:45 GMT", "version": "v1" }, { "created": "Wed, 20 Sep 2006 14:19:43 GMT", "version": "v2" } ]
2007-12-10
[ [ "Vazquez", "Alexei", "" ] ]
Our chances to halt epidemic outbreaks rely on how accurately we represent the population structure underlying the disease spread. When analyzing global epidemics this forces us to consider metapopulation models taking into account intra- and inter-community interactions. Recently Watts et al introduced a metapopulation model which accounts for several features observed in real outbreaks [Watts et al, PNAS 102, 11157 (2005)]. In this work I provide an analytical solution to this model, enhancing our understanding of the model and the epidemic outbreaks it represents. First, I demonstrate that depending on the intra-community expected outbreak size and the fraction of social bridges the epidemic outbreaks die out or there is a finite probability to observe a global epidemic. Second, I show that the global scenario is characterized by resurgent epidemics, their number increasing with the intra-community average distance between individuals. Finally, I present empirical data for the AIDS epidemic supporting the model predictions.
2103.08984
Mar\'ia Vallet-Regi
Juan L. Paris, Paz de la Torre, M. Victoria Cabanas, Miguel Manzano, Ana I. Flores, Maria Vallet-Regi
Suicide-Gene Transfection of Tumor-tropic Placental Stem Cells employing Ultrasound-Responsive Nanoparticles
24 pages, 6 figures
Acta Biomaterialia. 83, 372-378 (2018)
10.1016/j.actbio.2018.11.006
null
q-bio.TO
http://creativecommons.org/licenses/by-nc-nd/4.0/
A Trojan-horse strategy for cancer therapy employing tumor-tropic mesenchymal stem cells transfected with a non-viral nanovector is here presented. In this sense, ultrasound-responsive mesoporous silica nanoparticles were coated with a polycation (using two different molecular weights), providing them with gene transfection capabilities that were evaluated using two different plasmids. First, the expression of Green Fluorescent Protein was analyzed in Decidua-derived Mesenchymal Stem Cells after incubation with the silica nanoparticles. The most successful nanoparticle was then employed to induce the expression of two suicide genes: cytosine deaminase and uracil phosphoribosyl transferase, which allow the cells to convert a non-toxic pro-drug (5-fluorocytosine) into a toxic drug (5-Fluorouridine monophosphate). The effect of the production of the toxic final product was also evaluated in a cancer cell line (NMU cells) co-cultured with the transfected vehicle cells, Decidua-derived Mesenchymal Stem Cells.
[ { "created": "Tue, 16 Mar 2021 11:22:19 GMT", "version": "v1" } ]
2021-03-17
[ [ "Paris", "Juan L.", "" ], [ "de la Torre", "Paz", "" ], [ "Cabanas", "M. Victoria", "" ], [ "Manzano", "Miguel", "" ], [ "Flores", "Ana I.", "" ], [ "Vallet-Regi", "Maria", "" ] ]
A Trojan-horse strategy for cancer therapy employing tumor-tropic mesenchymal stem cells transfected with a non-viral nanovector is here presented. In this sense, ultrasound-responsive mesoporous silica nanoparticles were coated with a polycation (using two different molecular weights), providing them with gene transfection capabilities that were evaluated using two different plasmids. First, the expression of Green Fluorescent Protein was analyzed in Decidua-derived Mesenchymal Stem Cells after incubation with the silica nanoparticles. The most successful nanoparticle was then employed to induce the expression of two suicide genes: cytosine deaminase and uracil phosphoribosyl transferase, which allow the cells to convert a non-toxic pro-drug (5-fluorocytosine) into a toxic drug (5-Fluorouridine monophosphate). The effect of the production of the toxic final product was also evaluated in a cancer cell line (NMU cells) co-cultured with the transfected vehicle cells, Decidua-derived Mesenchymal Stem Cells.
2206.08813
Richard Gast Dr.
Richard Gast, Sara A. Solla, Ann Kennedy
Effects of Neural Heterogeneity on Spiking Neural Network Dynamics
11 pages, 4 figures
null
null
null
q-bio.NC nlin.AO
http://creativecommons.org/licenses/by-nc-sa/4.0/
The brain is composed of complex networks of interacting neurons that express considerable heterogeneity in their physiology and spiking characteristics. How does neural heterogeneity affect macroscopic neural dynamics and how does it contribute to neurodynamic functions? In this letter, we address these questions by studying the macroscopic dynamics of networks of heterogeneous Izhikevich neurons. We derive mean-field equations for these networks and examine how heterogeneity in the spiking thresholds of Izhikevich neurons affects the emergent macroscopic dynamics. Our results suggest that the level of heterogeneity of inhibitory populations controls resonance and hysteresis properties of systems of coupled excitatory and inhibitory neurons. Neural heterogeneity may thus serve as a means to control the dynamic repertoire of mesoscopic brain circuits.
[ { "created": "Fri, 17 Jun 2022 14:44:17 GMT", "version": "v1" } ]
2022-06-20
[ [ "Gast", "Richard", "" ], [ "Solla", "Sara A.", "" ], [ "Kennedy", "Ann", "" ] ]
The brain is composed of complex networks of interacting neurons that express considerable heterogeneity in their physiology and spiking characteristics. How does neural heterogeneity affect macroscopic neural dynamics and how does it contribute to neurodynamic functions? In this letter, we address these questions by studying the macroscopic dynamics of networks of heterogeneous Izhikevich neurons. We derive mean-field equations for these networks and examine how heterogeneity in the spiking thresholds of Izhikevich neurons affects the emergent macroscopic dynamics. Our results suggest that the level of heterogeneity of inhibitory populations controls resonance and hysteresis properties of systems of coupled excitatory and inhibitory neurons. Neural heterogeneity may thus serve as a means to control the dynamic repertoire of mesoscopic brain circuits.
1604.08612
Jerome Feldman
Jerome Feldman
Mysteries of Visual Experience
This 3/27/2022 revision retains all the original text but adds new comments (in italics) There are several new references, connecting with current research
Behavioral and Brain Sciences, 45, E48 (2022)
10.1017/S0140525X21001886
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by/4.0/
Science is a crowning glory of the human spirit and its applications remain our best hope for social progress. But there are limitations to current science and perhaps to any science. The general mind-body problem is known to be intractable and currently mysterious. This is one of many deep problems that are universally agreed to be beyond the current purview of Science, including quantum phenomena, etc. But all of these famous unsolved problems are either remote from everyday experience (entanglement, dark matter) or are hard to even define sharply (phenomenology, consciousness, etc.). An updated summary of this work has been published as: Feldman, J. (2022). Computation, perception, and mind. Behavioral and Brain Sciences, 45, E48. doi:10.1017/S0140525X21001886 A more readable, open access, version is: https://escholarship.org/uc/item/6cs78450
[ { "created": "Thu, 28 Apr 2016 20:41:25 GMT", "version": "v1" }, { "created": "Wed, 28 Sep 2016 17:33:49 GMT", "version": "v2" }, { "created": "Tue, 10 Jan 2017 18:46:42 GMT", "version": "v3" }, { "created": "Tue, 20 Mar 2018 16:07:22 GMT", "version": "v4" }, { "created": "Fri, 23 Jul 2021 16:13:55 GMT", "version": "v5" }, { "created": "Fri, 25 Mar 2022 23:27:44 GMT", "version": "v6" } ]
2022-03-29
[ [ "Feldman", "Jerome", "" ] ]
Science is a crowning glory of the human spirit and its applications remain our best hope for social progress. But there are limitations to current science and perhaps to any science. The general mind-body problem is known to be intractable and currently mysterious. This is one of many deep problems that are universally agreed to be beyond the current purview of Science, including quantum phenomena, etc. But all of these famous unsolved problems are either remote from everyday experience (entanglement, dark matter) or are hard to even define sharply (phenomenology, consciousness, etc.). An updated summary of this work has been published as: Feldman, J. (2022). Computation, perception, and mind. Behavioral and Brain Sciences, 45, E48. doi:10.1017/S0140525X21001886 A more readable, open access, version is: https://escholarship.org/uc/item/6cs78450
q-bio/0601031
Sidney Redner
T. Antal, S. Redner, and V. Sood
Evolutionary dynamics on degree-heterogeneous graphs
4 pages, 4 figures, 2 column revtex4 format. Revisions in response to referee comments for publication in PRL. The version on arxiv.org has one more figure than the published PRL
Phys. Rev. Lett. 96, 188104 (2006)
10.1103/PhysRevLett.96.188104
null
q-bio.PE cond-mat.stat-mech
null
The evolution of two species with different fitness is investigated on degree-heterogeneous graphs. The population evolves either by one individual dying and being replaced by the offspring of a random neighbor (voter model (VM) dynamics) or by an individual giving birth to an offspring that takes over a random neighbor node (invasion process (IP) dynamics). The fixation probability for one species to take over a population of N individuals depends crucially on the dynamics and on the local environment. Starting with a single fitter mutant at a node of degree k, the fixation probability is proportional to k for VM dynamics and to 1/k for IP dynamics.
[ { "created": "Sat, 21 Jan 2006 22:16:54 GMT", "version": "v1" }, { "created": "Thu, 11 May 2006 21:17:21 GMT", "version": "v2" } ]
2009-11-13
[ [ "Antal", "T.", "" ], [ "Redner", "S.", "" ], [ "Sood", "V.", "" ] ]
The evolution of two species with different fitness is investigated on degree-heterogeneous graphs. The population evolves either by one individual dying and being replaced by the offspring of a random neighbor (voter model (VM) dynamics) or by an individual giving birth to an offspring that takes over a random neighbor node (invasion process (IP) dynamics). The fixation probability for one species to take over a population of N individuals depends crucially on the dynamics and on the local environment. Starting with a single fitter mutant at a node of degree k, the fixation probability is proportional to k for VM dynamics and to 1/k for IP dynamics.
0907.5531
Erika Cerasti
Erika Cerasti, Alessandro Treves
How informative are spatial CA3 representations established by the dentate gyrus?
19 pages, 11 figures, 1 table, submitted
null
10.1371/journal.pcbi.1000759
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the mammalian hippocampus, the dentate gyrus (DG) is characterized by sparse and powerful unidirectional projections to CA3 pyramidal cells, the so-called mossy fibers. Mossy fiber synapses appear to duplicate, in terms of the information they convey, what CA3 cells already receive from entorhinal cortex layer II cells, which project both to the dentate gyrus and to CA3. Computational models of episodic memory have hypothesized that the function of the mossy fibers is to enforce a new, well separated pattern of activity onto CA3 cells, to represent a new memory, prevailing over the interference produced by the traces of older memories already stored on CA3 recurrent collateral connections. Can this hypothesis apply also to spatial representations, as described by recent neurophysiological recordings in rats? To address this issue quantitatively, we estimate the amount of information DG can impart on a new CA3 pattern of spatial activity, using both mathematical analysis and computer simulations of a simplified model. We confirm that, also in the spatial case, the observed sparse connectivity and level of activity are most appropriate for driving memory storage and not to initiate retrieval. Surprisingly, the model also indicates that even when DG codes just for space, much of the information it passes on to CA3 acquires a non-spatial and episodic character, akin to that of a random number generator. It is suggested that further hippocampal processing is required to make full spatial use of DG inputs.
[ { "created": "Fri, 31 Jul 2009 13:14:37 GMT", "version": "v1" } ]
2015-05-13
[ [ "Cerasti", "Erika", "" ], [ "Treves", "Alessandro", "" ] ]
In the mammalian hippocampus, the dentate gyrus (DG) is characterized by sparse and powerful unidirectional projections to CA3 pyramidal cells, the so-called mossy fibers. Mossy fiber synapses appear to duplicate, in terms of the information they convey, what CA3 cells already receive from entorhinal cortex layer II cells, which project both to the dentate gyrus and to CA3. Computational models of episodic memory have hypothesized that the function of the mossy fibers is to enforce a new, well separated pattern of activity onto CA3 cells, to represent a new memory, prevailing over the interference produced by the traces of older memories already stored on CA3 recurrent collateral connections. Can this hypothesis apply also to spatial representations, as described by recent neurophysiological recordings in rats? To address this issue quantitatively, we estimate the amount of information DG can impart on a new CA3 pattern of spatial activity, using both mathematical analysis and computer simulations of a simplified model. We confirm that, also in the spatial case, the observed sparse connectivity and level of activity are most appropriate for driving memory storage and not to initiate retrieval. Surprisingly, the model also indicates that even when DG codes just for space, much of the information it passes on to CA3 acquires a non-spatial and episodic character, akin to that of a random number generator. It is suggested that further hippocampal processing is required to make full spatial use of DG inputs.
q-bio/0506009
Kristina Klinkner
Kristina Lisa Klinkner, Cosma Rohilla Shalizi and Marcelo F. Camperi
Measuring Shared Information and Coordinated Activity in Neuronal Networks
8 pages, 6 figures
null
null
null
q-bio.NC math.ST nlin.CD q-bio.QM stat.TH
null
Most nervous systems encode information about stimuli in the responding activity of large neuronal networks. This activity often manifests itself as dynamically coordinated sequences of action potentials. Since multiple electrode recordings are now a standard tool in neuroscience research, it is important to have a measure of such network-wide behavioral coordination and information sharing, applicable to multiple neural spike train data. We propose a new statistic, informational coherence, which measures how much better one unit can be predicted by knowing the dynamical state of another. We argue informational coherence is a measure of association and shared information which is superior to traditional pairwise measures of synchronization and correlation. To find the dynamical states, we use a recently-introduced algorithm which reconstructs effective state spaces from stochastic time series. We then extend the pairwise measure to a multivariate analysis of the network by estimating the network multi-information. We illustrate our method by testing it on a detailed model of the transition from gamma to beta rhythms.
[ { "created": "Tue, 7 Jun 2005 23:48:15 GMT", "version": "v1" }, { "created": "Fri, 29 Jul 2005 16:57:38 GMT", "version": "v2" } ]
2011-11-09
[ [ "Klinkner", "Kristina Lisa", "" ], [ "Shalizi", "Cosma Rohilla", "" ], [ "Camperi", "Marcelo F.", "" ] ]
Most nervous systems encode information about stimuli in the responding activity of large neuronal networks. This activity often manifests itself as dynamically coordinated sequences of action potentials. Since multiple electrode recordings are now a standard tool in neuroscience research, it is important to have a measure of such network-wide behavioral coordination and information sharing, applicable to multiple neural spike train data. We propose a new statistic, informational coherence, which measures how much better one unit can be predicted by knowing the dynamical state of another. We argue informational coherence is a measure of association and shared information which is superior to traditional pairwise measures of synchronization and correlation. To find the dynamical states, we use a recently-introduced algorithm which reconstructs effective state spaces from stochastic time series. We then extend the pairwise measure to a multivariate analysis of the network by estimating the network multi-information. We illustrate our method by testing it on a detailed model of the transition from gamma to beta rhythms.
1911.03509
Samuel Lord
Samuel J. Lord, Katrina B. Velle, R. Dyche Mullins, Lillian K. Fritz-Laylin
If your P value looks too good to be true, it probably is: Communicating reproducibility and variability in cell biology
Modified Figure 1A to use the identical dataset as B-C. Included tutorial for making plots in R, Python, and Excel. Replaced section on comparing biological vs technical replicates with expanded explanation of population sampling. Included discussion of estimation statistics and forest plots as a reasonable alternative to P values. Clarified the benefits of the P value, despite its flaws
J. Cell. Biol. 219 (2020) e202001064
10.1083/jcb.202001064
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-sa/4.0/
The cell biology literature is littered with erroneously tiny P values, often the result of evaluating individual cells as independent samples. Because readers use P values and error bars to infer whether a reported difference would likely recur if the experiment were repeated, the sample size N used for statistical tests should actually be the number of times an experiment is performed, not the number of cells (or subcellular structures) analyzed across all experiments. P values calculated using the number of cells do not reflect the reproducibility of the result and are thus highly misleading. To help authors avoid this mistake, we provide examples and practical tutorials for creating figures that communicate both the cell-level variability and the experimental reproducibility.
[ { "created": "Fri, 8 Nov 2019 19:22:41 GMT", "version": "v1" }, { "created": "Fri, 20 Dec 2019 23:59:18 GMT", "version": "v2" } ]
2020-04-30
[ [ "Lord", "Samuel J.", "" ], [ "Velle", "Katrina B.", "" ], [ "Mullins", "R. Dyche", "" ], [ "Fritz-Laylin", "Lillian K.", "" ] ]
The cell biology literature is littered with erroneously tiny P values, often the result of evaluating individual cells as independent samples. Because readers use P values and error bars to infer whether a reported difference would likely recur if the experiment were repeated, the sample size N used for statistical tests should actually be the number of times an experiment is performed, not the number of cells (or subcellular structures) analyzed across all experiments. P values calculated using the number of cells do not reflect the reproducibility of the result and are thus highly misleading. To help authors avoid this mistake, we provide examples and practical tutorials for creating figures that communicate both the cell-level variability and the experimental reproducibility.
2311.07315
Kristopher Jensen
Kristopher T. Jensen
An introduction to reinforcement learning for neuroscience
Code available at: https://colab.research.google.com/drive/1ZC4lR8kTO48yySDZtcOEdMKd3NqY_ly1?usp=sharing
null
null
null
q-bio.NC cs.LG
http://creativecommons.org/licenses/by/4.0/
Reinforcement learning has a rich history in neuroscience, from early work on dopamine as a reward prediction error signal for temporal difference learning (Schultz et al., 1997) to recent work suggesting that dopamine could implement a form of 'distributional reinforcement learning' popularized in deep learning (Dabney et al., 2020). Throughout this literature, there has been a tight link between theoretical advances in reinforcement learning and neuroscientific experiments and findings. As a result, the theories describing our experimental data have become increasingly complex and difficult to navigate. In this review, we cover the basic theory underlying classical work in reinforcement learning and build up to an introductory overview of methods in modern deep reinforcement learning that have found applications in systems neuroscience. We start with an overview of the reinforcement learning problem and classical temporal difference algorithms, followed by a discussion of 'model-free' and 'model-based' reinforcement learning together with methods such as DYNA and successor representations that fall in between these two extremes. Throughout these sections, we highlight the close parallels between such machine learning methods and related work in both experimental and theoretical neuroscience. We then provide an introduction to deep reinforcement learning with examples of how these methods have been used to model different learning phenomena in systems neuroscience, such as meta-reinforcement learning (Wang et al., 2018) and distributional reinforcement learning (Dabney et al., 2020). Code that implements the methods discussed in this work and generates the figures is also provided.
[ { "created": "Mon, 13 Nov 2023 13:10:52 GMT", "version": "v1" }, { "created": "Thu, 1 Aug 2024 16:07:02 GMT", "version": "v2" } ]
2024-08-02
[ [ "Jensen", "Kristopher T.", "" ] ]
Reinforcement learning has a rich history in neuroscience, from early work on dopamine as a reward prediction error signal for temporal difference learning (Schultz et al., 1997) to recent work suggesting that dopamine could implement a form of 'distributional reinforcement learning' popularized in deep learning (Dabney et al., 2020). Throughout this literature, there has been a tight link between theoretical advances in reinforcement learning and neuroscientific experiments and findings. As a result, the theories describing our experimental data have become increasingly complex and difficult to navigate. In this review, we cover the basic theory underlying classical work in reinforcement learning and build up to an introductory overview of methods in modern deep reinforcement learning that have found applications in systems neuroscience. We start with an overview of the reinforcement learning problem and classical temporal difference algorithms, followed by a discussion of 'model-free' and 'model-based' reinforcement learning together with methods such as DYNA and successor representations that fall in between these two extremes. Throughout these sections, we highlight the close parallels between such machine learning methods and related work in both experimental and theoretical neuroscience. We then provide an introduction to deep reinforcement learning with examples of how these methods have been used to model different learning phenomena in systems neuroscience, such as meta-reinforcement learning (Wang et al., 2018) and distributional reinforcement learning (Dabney et al., 2020). Code that implements the methods discussed in this work and generates the figures is also provided.
2007.01344
Rui Wang
Rui Wang, Yuta Hozumi, Changchuan Yin, and Guo-Wei Wei
Decoding asymptomatic COVID-19 infection and transmission
18 pages, 5 figures
null
null
null
q-bio.PE q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Coronavirus disease 2019 (COVID-19) is continuously devastating public health and the world economy. One of the major challenges in controlling the COVID-19 outbreak is its asymptomatic infection and transmission, which are elusive and defenseless in most situations. The pathogenicity and virulence of asymptomatic COVID-19 remain mysterious. Based on the genotyping of 20656 Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) genome isolates, we reveal that asymptomatic infection is linked to the SARS-CoV-2 11083G>T mutation, i.e., leucine (L) to phenylalanine (F) substitution at residue 37 (L37F) of nonstructural protein 6 (NSP6). By analyzing the distribution of 11083G>T in various countries, we unveil that 11083G>T may correlate with the hypotoxicity of SARS-CoV-2. Moreover, we show a global decaying tendency of the 11083G>T mutation ratio, indicating that 11083G>T hinders SARS-CoV-2 transmission capacity. Sequence alignment found that both NSP6 and the residue 37 neighborhood are relatively conserved across several coronaviral species, indicating their importance in regulating host cell autophagy to undermine innate cellular defense against viral infection. Using machine learning and topological data analysis, we demonstrate that mutation L37F has made NSP6 energetically less stable. The rigidity and flexibility index and several network models suggest that mutation L37F may have compromised the NSP6 function, leading to a relatively weak SARS-CoV-2 subtype. This assessment is in good agreement with our genotyping of SARS-CoV-2 evolution and transmission across various countries and regions over the past few months.
[ { "created": "Thu, 2 Jul 2020 19:09:26 GMT", "version": "v1" } ]
2020-07-06
[ [ "Wang", "Rui", "" ], [ "Hozumi", "Yuta", "" ], [ "Yin", "Changchuan", "" ], [ "Wei", "Guo-Wei", "" ] ]
Coronavirus disease 2019 (COVID-19) is continuously devastating public health and the world economy. One of the major challenges in controlling the COVID-19 outbreak is its asymptomatic infection and transmission, which are elusive and defenseless in most situations. The pathogenicity and virulence of asymptomatic COVID-19 remain mysterious. Based on the genotyping of 20656 Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) genome isolates, we reveal that asymptomatic infection is linked to the SARS-CoV-2 11083G>T mutation, i.e., leucine (L) to phenylalanine (F) substitution at residue 37 (L37F) of nonstructural protein 6 (NSP6). By analyzing the distribution of 11083G>T in various countries, we unveil that 11083G>T may correlate with the hypotoxicity of SARS-CoV-2. Moreover, we show a global decaying tendency of the 11083G>T mutation ratio, indicating that 11083G>T hinders SARS-CoV-2 transmission capacity. Sequence alignment found that both NSP6 and the residue 37 neighborhood are relatively conserved across several coronaviral species, indicating their importance in regulating host cell autophagy to undermine innate cellular defense against viral infection. Using machine learning and topological data analysis, we demonstrate that mutation L37F has made NSP6 energetically less stable. The rigidity and flexibility index and several network models suggest that mutation L37F may have compromised the NSP6 function, leading to a relatively weak SARS-CoV-2 subtype. This assessment is in good agreement with our genotyping of SARS-CoV-2 evolution and transmission across various countries and regions over the past few months.
2407.12870
Jianan Fan
Jianan Fan, Dongnan Liu, Canran Li, Hang Chang, Heng Huang, Filip Braet, Mei Chen, and Weidong Cai
Revisiting Adaptive Cellular Recognition Under Domain Shifts: A Contextual Correspondence View
ECCV 2024 main conference
null
null
null
q-bio.QM cs.LG eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Cellular nuclei recognition serves as a fundamental and essential step in the workflow of digital pathology. However, with disparate source organs and staining procedures among histology image clusters, the scanned tiles inherently conform to a non-uniform data distribution, which degrades performance in general cross-cohort usage. Despite the latest efforts leveraging domain adaptation to mitigate distributional discrepancy, those methods are limited to modeling the morphological characteristics of each cell individually, disregarding the hierarchical latent structure and intrinsic contextual correspondences across the tumor micro-environment. In this work, we identify the importance of implicit correspondences across biological contexts for exploiting domain-invariant pathological composition and thereby propose to exploit the dependence over various biological structures for domain adaptive cellular recognition. We discover those high-level correspondences via unsupervised contextual modeling and use them as bridges to facilitate adaptation over diverse organs and stains. In addition, to further exploit the rich spatial contexts embedded amongst nuclear communities, we propose self-adaptive dynamic distillation to secure instance-aware trade-offs across different model constituents. The proposed method is extensively evaluated on a broad spectrum of cross-domain settings under miscellaneous data distribution shifts and outperforms the state-of-the-art methods by a substantial margin. Code is available at https://github.com/camwew/CellularRecognition_DA_CC.
[ { "created": "Sun, 14 Jul 2024 04:41:16 GMT", "version": "v1" }, { "created": "Fri, 19 Jul 2024 05:26:06 GMT", "version": "v2" } ]
2024-07-22
[ [ "Fan", "Jianan", "" ], [ "Liu", "Dongnan", "" ], [ "Li", "Canran", "" ], [ "Chang", "Hang", "" ], [ "Huang", "Heng", "" ], [ "Braet", "Filip", "" ], [ "Chen", "Mei", "" ], [ "Cai", "Weidong", "" ] ]
Cellular nuclei recognition serves as a fundamental and essential step in the workflow of digital pathology. However, with disparate source organs and staining procedures among histology image clusters, the scanned tiles inherently conform to a non-uniform data distribution, which degrades performance in general cross-cohort usage. Despite the latest efforts leveraging domain adaptation to mitigate distributional discrepancy, those methods are limited to modeling the morphological characteristics of each cell individually, disregarding the hierarchical latent structure and intrinsic contextual correspondences across the tumor micro-environment. In this work, we identify the importance of implicit correspondences across biological contexts for exploiting domain-invariant pathological composition and thereby propose to exploit the dependence over various biological structures for domain adaptive cellular recognition. We discover those high-level correspondences via unsupervised contextual modeling and use them as bridges to facilitate adaptation over diverse organs and stains. In addition, to further exploit the rich spatial contexts embedded amongst nuclear communities, we propose self-adaptive dynamic distillation to secure instance-aware trade-offs across different model constituents. The proposed method is extensively evaluated on a broad spectrum of cross-domain settings under miscellaneous data distribution shifts and outperforms the state-of-the-art methods by a substantial margin. Code is available at https://github.com/camwew/CellularRecognition_DA_CC.
2209.13521
Yihao Chen
Shunming Tao, Yihao Chen, Jingxing Wu, Duancheng Zhao, Hanxuan Cai, Ling Wang
VDDB: a comprehensive resource and machine learning platform for antiviral drug discovery
null
null
null
null
q-bio.BM cs.AI cs.LG q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Virus infection is one of the major diseases that seriously threaten human health. To meet the growing demand for mining and sharing data resources related to antiviral drugs and to accelerate the design and discovery of new antiviral drugs, we present an open-access antiviral drug resource and machine learning platform (VDDB), which, to the best of our knowledge, is the first comprehensive dedicated resource for experimentally verified potential drugs/molecules based on manually curated data. Currently, VDDB highlights 848 clinical vaccines, 199 clinical antibodies, as well as over 710,000 small molecules targeting 39 medically important viruses including SARS-CoV-2. Furthermore, VDDB stores approximately 3 million records of pharmacological data for these collected potential antiviral drugs/molecules, involving 314 cell infection-based phenotypic and 234 target-based genotypic assays. Based on these annotated pharmacological data, VDDB allows users to browse, search and download reliable information about these collections for various viruses of interest. In particular, VDDB also integrates 57 cell infection-based and 117 target-based high-accuracy machine learning models to support various antiviral identification-related tasks, such as compound activity prediction, virtual screening, drug repositioning and target fishing. VDDB is freely accessible at http://vddb.idruglab.cn.
[ { "created": "Sat, 17 Sep 2022 09:02:46 GMT", "version": "v1" } ]
2022-09-28
[ [ "Tao", "Shunming", "" ], [ "Chen", "Yihao", "" ], [ "Wu", "Jingxing", "" ], [ "Zhao", "Duancheng", "" ], [ "Cai", "Hanxuan", "" ], [ "Wang", "Ling", "" ] ]
Virus infection is one of the major diseases that seriously threaten human health. To meet the growing demand for mining and sharing data resources related to antiviral drugs and to accelerate the design and discovery of new antiviral drugs, we present an open-access antiviral drug resource and machine learning platform (VDDB), which, to the best of our knowledge, is the first comprehensive dedicated resource for experimentally verified potential drugs/molecules based on manually curated data. Currently, VDDB highlights 848 clinical vaccines, 199 clinical antibodies, as well as over 710,000 small molecules targeting 39 medically important viruses including SARS-CoV-2. Furthermore, VDDB stores approximately 3 million records of pharmacological data for these collected potential antiviral drugs/molecules, involving 314 cell infection-based phenotypic and 234 target-based genotypic assays. Based on these annotated pharmacological data, VDDB allows users to browse, search and download reliable information about these collections for various viruses of interest. In particular, VDDB also integrates 57 cell infection-based and 117 target-based high-accuracy machine learning models to support various antiviral identification-related tasks, such as compound activity prediction, virtual screening, drug repositioning and target fishing. VDDB is freely accessible at http://vddb.idruglab.cn.
2212.01508
Rajkumar Vasudeva Raju
Rajkumar Vasudeva Raju, J. Swaroop Guntupalli, Guangyao Zhou, Miguel L\'azaro-Gredilla and Dileep George
Space is a latent sequence: Structured sequence learning as a unified theory of representation in the hippocampus
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Fascinating and puzzling phenomena, such as landmark vector cells, splitter cells, and event-specific representations to name a few, are regularly discovered in the hippocampus. Without a unifying principle that can explain these divergent observations, each experiment seemingly discovers a new anomaly or coding type. Here, we provide a unifying principle that the mental representation of space is an emergent property of latent higher-order sequence learning. Treating space as a sequence resolves myriad phenomena, and suggests that the place-field mapping methodology, where sequential neuron responses are interpreted in spatial and Euclidean terms, might itself be a source of anomalies. Our model, called the Clone-structured Causal Graph (CSCG), uses a specific higher-order graph scaffolding to learn latent representations by mapping sensory inputs to unique contexts. Learning to compress sequential and episodic experiences using CSCGs results in the emergence of cognitive maps - mental representations of spatial and conceptual relationships in an environment that are suited for planning, introspection, consolidation, and abstraction. We demonstrate that over a dozen different hippocampal phenomena, ranging from those reported in classic experiments to the most recent ones, are succinctly and mechanistically explained by our model.
[ { "created": "Sat, 3 Dec 2022 02:00:56 GMT", "version": "v1" } ]
2022-12-06
[ [ "Raju", "Rajkumar Vasudeva", "" ], [ "Guntupalli", "J. Swaroop", "" ], [ "Zhou", "Guangyao", "" ], [ "Lázaro-Gredilla", "Miguel", "" ], [ "George", "Dileep", "" ] ]
Fascinating and puzzling phenomena, such as landmark vector cells, splitter cells, and event-specific representations to name a few, are regularly discovered in the hippocampus. Without a unifying principle that can explain these divergent observations, each experiment seemingly discovers a new anomaly or coding type. Here, we provide a unifying principle that the mental representation of space is an emergent property of latent higher-order sequence learning. Treating space as a sequence resolves myriad phenomena, and suggests that the place-field mapping methodology, where sequential neuron responses are interpreted in spatial and Euclidean terms, might itself be a source of anomalies. Our model, called the Clone-structured Causal Graph (CSCG), uses a specific higher-order graph scaffolding to learn latent representations by mapping sensory inputs to unique contexts. Learning to compress sequential and episodic experiences using CSCGs results in the emergence of cognitive maps - mental representations of spatial and conceptual relationships in an environment that are suited for planning, introspection, consolidation, and abstraction. We demonstrate that over a dozen different hippocampal phenomena, ranging from those reported in classic experiments to the most recent ones, are succinctly and mechanistically explained by our model.
q-bio/0310030
Valmir Barbosa
L. E. Flores, E. J. Aguilar, V. C. Barbosa, L. A. V. de Carvalho
A graph model for the evolution of specificity in humoral immunity
null
Journal of Theoretical Biology 229 (2004), 311-325
10.1016/j.jtbi.2004.04.005
ES-617/03
q-bio.CB
null
The immune system protects the body against health-threatening entities, known as antigens, through very complex interactions involving the antigens and the system's own entities. One remarkable feature resulting from such interactions is the immune system's ability to improve its capability to fight antigens commonly found in the individual's environment. This adaptation process is called the evolution of specificity. In this paper, we introduce a new mathematical model for the evolution of specificity in humoral immunity, based on Jerne's functional, or idiotypic, network. The evolution of specificity is modeled as the dynamic updating of connection weights in a graph whose nodes are related to the network's idiotypes. At the core of this weight-updating mechanism are the increase in specificity caused by clonal selection and the decrease in specificity due to the insertion of uncorrelated idiotypes by the bone marrow. As we demonstrate through numerous computer experiments, for appropriate choices of parameters the new model correctly reproduces, in qualitative terms, several immune functions.
[ { "created": "Thu, 23 Oct 2003 17:39:33 GMT", "version": "v1" } ]
2007-05-23
[ [ "Flores", "L. E.", "" ], [ "Aguilar", "E. J.", "" ], [ "Barbosa", "V. C.", "" ], [ "de Carvalho", "L. A. V.", "" ] ]
The immune system protects the body against health-threatening entities, known as antigens, through very complex interactions involving the antigens and the system's own entities. One remarkable feature resulting from such interactions is the immune system's ability to improve its capability to fight antigens commonly found in the individual's environment. This adaptation process is called the evolution of specificity. In this paper, we introduce a new mathematical model for the evolution of specificity in humoral immunity, based on Jerne's functional, or idiotypic, network. The evolution of specificity is modeled as the dynamic updating of connection weights in a graph whose nodes are related to the network's idiotypes. At the core of this weight-updating mechanism are the increase in specificity caused by clonal selection and the decrease in specificity due to the insertion of uncorrelated idiotypes by the bone marrow. As we demonstrate through numerous computer experiments, for appropriate choices of parameters the new model correctly reproduces, in qualitative terms, several immune functions.
1612.04256
Robert Hoehndorf
Mona Alshahrani, Mohammed Asif Khan, Omar Maddouri, Akira R Kinjo, N\'uria Queralt-Rosinach, Robert Hoehndorf
Neuro-symbolic representation learning on biological knowledge graphs
null
null
10.1093/bioinformatics/btx275
null
q-bio.QM cs.LG q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Biological data and knowledge bases increasingly rely on Semantic Web technologies and the use of knowledge graphs for data integration, retrieval and federated queries. In the past years, feature learning methods that are applicable to graph-structured data are becoming available, but have not yet widely been applied and evaluated on structured biological knowledge. Results: We develop a novel method for feature learning on biological knowledge graphs. Our method combines symbolic methods, in particular knowledge representation using symbolic logic and automated reasoning, with neural networks to generate embeddings of nodes that encode for related information within knowledge graphs. Through the use of symbolic logic, these embeddings contain both explicit and implicit information. We apply these embeddings to the prediction of edges in the knowledge graph representing problems of function prediction, finding candidate genes of diseases, protein-protein interactions, or drug target relations, and demonstrate performance that matches and sometimes outperforms traditional approaches based on manually crafted features. Our method can be applied to any biological knowledge graph, and will thereby open up the increasing amount of Semantic Web based knowledge bases in biology to use in machine learning and data analytics. Availability and Implementation: https://github.com/bio-ontology-research-group/walking-rdf-and-owl Contact: robert.hoehndorf@kaust.edu.sa
[ { "created": "Tue, 13 Dec 2016 16:06:39 GMT", "version": "v1" } ]
2017-05-01
[ [ "Alshahrani", "Mona", "" ], [ "Khan", "Mohammed Asif", "" ], [ "Maddouri", "Omar", "" ], [ "Kinjo", "Akira R", "" ], [ "Queralt-Rosinach", "Núria", "" ], [ "Hoehndorf", "Robert", "" ] ]
Motivation: Biological data and knowledge bases increasingly rely on Semantic Web technologies and the use of knowledge graphs for data integration, retrieval and federated queries. In the past years, feature learning methods that are applicable to graph-structured data are becoming available, but have not yet widely been applied and evaluated on structured biological knowledge. Results: We develop a novel method for feature learning on biological knowledge graphs. Our method combines symbolic methods, in particular knowledge representation using symbolic logic and automated reasoning, with neural networks to generate embeddings of nodes that encode for related information within knowledge graphs. Through the use of symbolic logic, these embeddings contain both explicit and implicit information. We apply these embeddings to the prediction of edges in the knowledge graph representing problems of function prediction, finding candidate genes of diseases, protein-protein interactions, or drug target relations, and demonstrate performance that matches and sometimes outperforms traditional approaches based on manually crafted features. Our method can be applied to any biological knowledge graph, and will thereby open up the increasing amount of Semantic Web based knowledge bases in biology to use in machine learning and data analytics. Availability and Implementation: https://github.com/bio-ontology-research-group/walking-rdf-and-owl Contact: robert.hoehndorf@kaust.edu.sa
1905.04283
Dennis Dimond
Dennis Dimond, Rebecca Perry, Giuseppe Iaria, Signe Bray
Visuospatial short-term memory and dorsal visual gray matter volume
22 pages, 2 figures
J.Cortex (2019), 113:184-190
10.1016/j.cortex.2018.12.007
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual short-term memory (VSTM) is an important cognitive capacity that varies across the healthy adult population and is affected in several neurodevelopmental disorders. It has been suggested that neuroanatomy places limits on this capacity through a map architecture that creates competition for cortical space. This suggestion has been supported by the finding that primary visual (V1) gray matter volume (GMV) is positively associated with VSTM capacity. However, evidence from neurodevelopmental disorders suggests that the dorsal visual stream more broadly is vulnerable and atypical volumes of other map-containing regions may therefore play a role. For example, Turner syndrome is associated with concomitantly reduced volume of the right intraparietal sulcus (IPS) and deficits in VSTM. As posterior IPS regions (IPS0-2) contain topographic maps, together this suggests that posterior IPS volumes may also associate with VSTM. In this study, we assessed VSTM using two tasks, as well as a composite score, and used voxel-based morphometry of T1-weighted magnetic resonance images to assess GMV in V1 and right IPS0-2 in 32 healthy young adults (16 female). For comparison with previous work, we also assessed associations between VSTM and voxel-wise GMV on a whole-brain basis. We found that total brain volume (TBV) significantly correlated with VSTM, and that correlations between VSTM and regional GMV were substantially reduced in strength when controlling for TBV. In our whole-brain analysis, we found that VSTM was associated with GMV of clusters centered around the right putamen and left Rolandic operculum, though only when TBV was not controlled for. Our results suggest that VSTM ability is unlikely to be accounted for by the volume of an individual cortical region and may instead rely on distributed structural properties.
[ { "created": "Fri, 10 May 2019 17:50:21 GMT", "version": "v1" } ]
2019-05-13
[ [ "Dimond", "Dennis", "" ], [ "Perry", "Rebecca", "" ], [ "Iaria", "Giuseppe", "" ], [ "Bray", "Signe", "" ] ]
Visual short-term memory (VSTM) is an important cognitive capacity that varies across the healthy adult population and is affected in several neurodevelopmental disorders. It has been suggested that neuroanatomy places limits on this capacity through a map architecture that creates competition for cortical space. This suggestion has been supported by the finding that primary visual (V1) gray matter volume (GMV) is positively associated with VSTM capacity. However, evidence from neurodevelopmental disorders suggests that the dorsal visual stream more broadly is vulnerable and atypical volumes of other map-containing regions may therefore play a role. For example, Turner syndrome is associated with concomitantly reduced volume of the right intraparietal sulcus (IPS) and deficits in VSTM. As posterior IPS regions (IPS0-2) contain topographic maps, together this suggests that posterior IPS volumes may also associate with VSTM. In this study, we assessed VSTM using two tasks, as well as a composite score, and used voxel-based morphometry of T1-weighted magnetic resonance images to assess GMV in V1 and right IPS0-2 in 32 healthy young adults (16 female). For comparison with previous work, we also assessed associations between VSTM and voxel-wise GMV on a whole-brain basis. We found that total brain volume (TBV) significantly correlated with VSTM, and that correlations between VSTM and regional GMV were substantially reduced in strength when controlling for TBV. In our whole-brain analysis, we found that VSTM was associated with GMV of clusters centered around the right putamen and left Rolandic operculum, though only when TBV was not controlled for. Our results suggest that VSTM ability is unlikely to be accounted for by the volume of an individual cortical region and may instead rely on distributed structural properties.
1605.00591
Gianluca Calcagni
Gianluca Calcagni
The geometry of learning
17 pages, 7 figures, 1 table. v2: new sections, figures and references added, including discussions on random fractals and their applications and on 1/f cognitive noise models; v3: consequences and predictions of the theory added, typos corrected
J. Math. Psychol. 84 (2018) 74
10.1016/j.jmp.2018.03.007
null
q-bio.QM cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We establish a correspondence between Pavlovian conditioning processes and fractals. The association strength at a training trial corresponds to a point in a disconnected set at a given iteration level. In this way, one can represent a training process as a hopping on a fractal set, instead of the traditional learning curve as a function of the trial. The main advantage of this novel perspective is to provide an elegant classification of associative theories in terms of the geometric features of fractal sets. In particular, the dimension of fractals can measure the efficiency of conditioning models. We illustrate the correspondence with the examples of the Hull, Rescorla-Wagner, and Mackintosh models and show that they are equivalent to a Cantor set. More generally, conditioning programs are described by the geometry of their associated fractal, which gives much more information than just its dimension. We show this in several examples of random fractals and also comment on a possible relation between our formalism and other "fractal" findings in the cognitive literature.
[ { "created": "Mon, 2 May 2016 18:11:17 GMT", "version": "v1" }, { "created": "Sat, 17 Dec 2016 11:39:44 GMT", "version": "v2" }, { "created": "Sun, 22 Apr 2018 18:28:16 GMT", "version": "v3" } ]
2018-04-24
[ [ "Calcagni", "Gianluca", "" ] ]
We establish a correspondence between Pavlovian conditioning processes and fractals. The association strength at a training trial corresponds to a point in a disconnected set at a given iteration level. In this way, one can represent a training process as a hopping on a fractal set, instead of the traditional learning curve as a function of the trial. The main advantage of this novel perspective is to provide an elegant classification of associative theories in terms of the geometric features of fractal sets. In particular, the dimension of fractals can measure the efficiency of conditioning models. We illustrate the correspondence with the examples of the Hull, Rescorla-Wagner, and Mackintosh models and show that they are equivalent to a Cantor set. More generally, conditioning programs are described by the geometry of their associated fractal, which gives much more information than just its dimension. We show this in several examples of random fractals and also comment on a possible relation between our formalism and other "fractal" findings in the cognitive literature.
2402.06169
Eleanor Dunlop Dr
Eleanor Dunlop, Judy Cunningham, Paul Adorno, Shari Fatupaito, Stuart K Johnson, Lucinda J Black
Development of an updated, comprehensive food composition database for Australian-grown horticultural commodities
34 pages, 4 tables
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Australian agriculture supplies many horticultural commodities to domestic and international markets; however, food composition data for many commodities are outdated or unavailable. We produced an up-to-date, nationally representative dataset of up to 148 nutrients and related components in 92 Australian-grown fruit (fresh n=39, dried n=6), vegetables (n=43) and nuts (n=4) by replacing outdated data (pre-2000), confirming concentrations of important nutrients and retaining relevant existing data. Primary samples (n = 902) were purchased during peak growing season in Sydney, Melbourne and Perth between June 2021 and May 2022. While new data reflect current growing practices, varieties, climate and analytical methods, few notable differences were found between old and new data where methods of analysis are comparable. The new data will be incorporated into the Australian Food Composition Database, allowing free online access to stakeholders. The approach used could serve as a model for cost-effective updates of national food composition databases worldwide.
[ { "created": "Fri, 9 Feb 2024 03:54:35 GMT", "version": "v1" } ]
2024-02-12
[ [ "Dunlop", "Eleanor", "" ], [ "Cunningham", "Judy", "" ], [ "Adorno", "Paul", "" ], [ "Fatupaito", "Shari", "" ], [ "Johnson", "Stuart K", "" ], [ "Black", "Lucinda J", "" ] ]
Australian agriculture supplies many horticultural commodities to domestic and international markets; however, food composition data for many commodities are outdated or unavailable. We produced an up-to-date, nationally representative dataset of up to 148 nutrients and related components in 92 Australian-grown fruit (fresh n=39, dried n=6), vegetables (n=43) and nuts (n=4) by replacing outdated data (pre-2000), confirming concentrations of important nutrients and retaining relevant existing data. Primary samples (n = 902) were purchased during peak growing season in Sydney, Melbourne and Perth between June 2021 and May 2022. While new data reflect current growing practices, varieties, climate and analytical methods, few notable differences were found between old and new data where methods of analysis are comparable. The new data will be incorporated into the Australian Food Composition Database, allowing free online access to stakeholders. The approach used could serve as a model for cost-effective updates of national food composition databases worldwide.
1805.01447
Bryan Daniels
Bryan C. Daniels, Hyunju Kim, Douglas Moore, Siyu Zhou, Harrison Smith, Bradley Karas, Stuart A. Kauffman, and Sara I. Walker
Logic and connectivity jointly determine criticality in biological gene regulatory networks
10 pages, 7 figures
Phys. Rev. Lett. 121, 138102 (2018)
10.1103/PhysRevLett.121.138102
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The complex dynamics of gene expression in living cells can be well-approximated using Boolean networks. The average sensitivity is a natural measure of stability in these systems: values below one indicate typically stable dynamics associated with an ordered phase, whereas values above one indicate chaotic dynamics. This yields a theoretically motivated adaptive advantage to being near the critical value of one, at the boundary between order and chaos. Here, we measure average sensitivity for 66 publicly available Boolean network models describing the function of gene regulatory circuits across diverse living processes. We find the average sensitivity values for these networks are clustered around unity, indicating they are near critical. In many types of random networks, mean connectivity <K> and the average activity bias of the logic functions <p> have been found to be the most important network properties in determining average sensitivity, and by extension a network's criticality. Surprisingly, many of these gene regulatory networks achieve the near-critical state with <K> and <p> far from that predicted for critical systems: randomized networks sharing the local causal structure and local logic of biological networks better reproduce their critical behavior than controlling for macroscale properties such as <K> and <p> alone. This suggests the local properties of genes interacting within regulatory networks are selected to collectively be near-critical, and this non-local property of gene regulatory network dynamics cannot be predicted using the density of interactions alone.
[ { "created": "Thu, 3 May 2018 17:47:30 GMT", "version": "v1" } ]
2018-10-03
[ [ "Daniels", "Bryan C.", "" ], [ "Kim", "Hyunju", "" ], [ "Moore", "Douglas", "" ], [ "Zhou", "Siyu", "" ], [ "Smith", "Harrison", "" ], [ "Karas", "Bradley", "" ], [ "Kauffman", "Stuart A.", "" ], [ "Walker", "Sara I.", "" ] ]
The complex dynamics of gene expression in living cells can be well-approximated using Boolean networks. The average sensitivity is a natural measure of stability in these systems: values below one indicate typically stable dynamics associated with an ordered phase, whereas values above one indicate chaotic dynamics. This yields a theoretically motivated adaptive advantage to being near the critical value of one, at the boundary between order and chaos. Here, we measure average sensitivity for 66 publicly available Boolean network models describing the function of gene regulatory circuits across diverse living processes. We find the average sensitivity values for these networks are clustered around unity, indicating they are near critical. In many types of random networks, mean connectivity <K> and the average activity bias of the logic functions <p> have been found to be the most important network properties in determining average sensitivity, and by extension a network's criticality. Surprisingly, many of these gene regulatory networks achieve the near-critical state with <K> and <p> far from that predicted for critical systems: randomized networks sharing the local causal structure and local logic of biological networks better reproduce their critical behavior than controlling for macroscale properties such as <K> and <p> alone. This suggests the local properties of genes interacting within regulatory networks are selected to collectively be near-critical, and this non-local property of gene regulatory network dynamics cannot be predicted using the density of interactions alone.
2009.10953
Chiara Villa
Chiara Villa, Mark A. J. Chaplain, Alf Gerisch, Tommaso Lorenzi
Mechanical models of pattern and form in biological tissues: the role of stress-strain constitutive equations
33 pages + 6 pages of supplementary material, 15 figures, 6 movies
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mechanochemical models of pattern formation in biological tissues have been used to study a variety of biomedical systems and describe the physical interactions between cells and their local surroundings. These models generally consist of a balance equation for the cell density, one for the density of the extracellular matrix (ECM), and a force-balance equation describing the mechanical equilibrium of the cell-ECM system. Assuming this system can be regarded as an isotropic linear viscoelastic material, the force-balance equation is often defined using the Kelvin-Voigt model of linear viscoelasticity to represent the stress-strain relation of the ECM. However, due to the multifaceted bio-physical nature of the ECM constituents, there are rheological aspects that cannot be effectively captured by this model and, therefore, depending on the type of biological tissue considered, other constitutive models of linear viscoelasticity may be better suited. In this work, we systematically assess the pattern formation potential of different stress-strain constitutive equations for the ECM within a mechanical model of pattern formation in biological tissues. The results obtained through linear stability analysis support the idea that constitutive equations capturing viscous flow and permanent set (Maxwell model, Jeffrey model) have a pattern formation potential much higher than the others (Kelvin-Voigt model, standard linear solid model), further confirmed by the results of our numerical simulations. Our findings suggest that further empirical work is required to acquire detailed quantitative information on the mechanical properties of components of the ECM in different biological tissues in order to furnish mechanochemical models of pattern formation with stress-strain constitutive equations for the ECM that provide a more faithful representation of the underlying tissue rheology.
[ { "created": "Wed, 23 Sep 2020 06:53:55 GMT", "version": "v1" }, { "created": "Thu, 1 Oct 2020 08:23:44 GMT", "version": "v2" }, { "created": "Wed, 24 Mar 2021 17:24:22 GMT", "version": "v3" }, { "created": "Thu, 25 Mar 2021 08:13:55 GMT", "version": "v4" }, { "created": "Fri, 2 Apr 2021 12:29:17 GMT", "version": "v5" }, { "created": "Tue, 11 May 2021 11:18:58 GMT", "version": "v6" } ]
2021-05-12
[ [ "Villa", "Chiara", "" ], [ "Chaplain", "Mark A. J.", "" ], [ "Gerisch", "Alf", "" ], [ "Lorenzi", "Tommaso", "" ] ]
Mechanochemical models of pattern formation in biological tissues have been used to study a variety of biomedical systems and describe the physical interactions between cells and their local surroundings. These models generally consist of a balance equation for the cell density, one for the density of the extracellular matrix (ECM), and a force-balance equation describing the mechanical equilibrium of the cell-ECM system. Assuming this system can be regarded as an isotropic linear viscoelastic material, the force-balance equation is often defined using the Kelvin-Voigt model of linear viscoelasticity to represent the stress-strain relation of the ECM. However, due to the multifaceted bio-physical nature of the ECM constituents, there are rheological aspects that cannot be effectively captured by this model and, therefore, depending on the type of biological tissue considered, other constitutive models of linear viscoelasticity may be better suited. In this work, we systematically assess the pattern formation potential of different stress-strain constitutive equations for the ECM within a mechanical model of pattern formation in biological tissues. The results obtained through linear stability analysis support the idea that constitutive equations capturing viscous flow and permanent set (Maxwell model, Jeffrey model) have a pattern formation potential much higher than the others (Kelvin-Voigt model, standard linear solid model), further confirmed by the results of our numerical simulations. Our findings suggest that further empirical work is required to acquire detailed quantitative information on the mechanical properties of components of the ECM in different biological tissues in order to furnish mechanochemical models of pattern formation with stress-strain constitutive equations for the ECM that provide a more faithful representation of the underlying tissue rheology.
2406.01599
Amirreza Kachabi
Amirreza Kachabi, Mitchel J. Colebank, Sofia Altieri Correa, Naomi C. Chesler
Markov Chain Monte Carlo with Gaussian Process Emulation for a 1D Hemodynamics Model of CTEPH
null
null
null
null
q-bio.QM cs.CE cs.LG physics.data-an stat.AP
http://creativecommons.org/licenses/by/4.0/
Microvascular disease is a contributor to persistent pulmonary hypertension in those with chronic thromboembolic pulmonary hypertension (CTEPH). The heterogeneous nature of the micro and macrovascular defects motivates the use of personalized computational models, which can predict flow dynamics within multiple generations of the arterial tree and into the microvasculature. Our study uses computational hemodynamics models and Gaussian processes for rapid, subject-specific calibration using retrospective data from a large animal model of CTEPH. Our subject-specific predictions shed light on microvascular dysfunction and arterial wall shear stress changes in CTEPH.
[ { "created": "Thu, 25 Apr 2024 18:22:22 GMT", "version": "v1" } ]
2024-06-10
[ [ "Kachabi", "Amirreza", "" ], [ "Colebank", "Mitchel J.", "" ], [ "Correa", "Sofia Altieri", "" ], [ "Chesler", "Naomi C.", "" ] ]
Microvascular disease is a contributor to persistent pulmonary hypertension in those with chronic thromboembolic pulmonary hypertension (CTEPH). The heterogeneous nature of the micro and macrovascular defects motivates the use of personalized computational models, which can predict flow dynamics within multiple generations of the arterial tree and into the microvasculature. Our study uses computational hemodynamics models and Gaussian processes for rapid, subject-specific calibration using retrospective data from a large animal model of CTEPH. Our subject-specific predictions shed light on microvascular dysfunction and arterial wall shear stress changes in CTEPH.
2408.00160
Mridul Khurana
Mridul Khurana, Arka Daw, M. Maruf, Josef C. Uyeda, Wasila Dahdul, Caleb Charpentier, Yasin Bak{\i}\c{s}, Henry L. Bart Jr., Paula M. Mabee, Hilmar Lapp, James P. Balhoff, Wei-Lun Chao, Charles Stewart, Tanya Berger-Wolf, Anuj Karpatne
Hierarchical Conditioning of Diffusion Models Using Tree-of-Life for Studying Species Evolution
null
null
null
null
q-bio.PE cs.CV cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
A central problem in biology is to understand how organisms evolve and adapt to their environment by acquiring variations in the observable characteristics or traits of species across the tree of life. With the growing availability of large-scale image repositories in biology and recent advances in generative modeling, there is an opportunity to accelerate the discovery of evolutionary traits automatically from images. Toward this goal, we introduce Phylo-Diffusion, a novel framework for conditioning diffusion models with phylogenetic knowledge represented in the form of HIERarchical Embeddings (HIER-Embeds). We also propose two new experiments for perturbing the embedding space of Phylo-Diffusion: trait masking and trait swapping, inspired by counterpart experiments of gene knockout and gene editing/swapping. Our work represents a novel methodological advance in generative modeling to structure the embedding space of diffusion models using tree-based knowledge. Our work also opens a new chapter of research in evolutionary biology by using generative models to visualize evolutionary changes directly from images. We empirically demonstrate the usefulness of Phylo-Diffusion in capturing meaningful trait variations for fishes and birds, revealing novel insights about the biological mechanisms of their evolution.
[ { "created": "Wed, 31 Jul 2024 21:06:14 GMT", "version": "v1" } ]
2024-08-02
[ [ "Khurana", "Mridul", "" ], [ "Daw", "Arka", "" ], [ "Maruf", "M.", "" ], [ "Uyeda", "Josef C.", "" ], [ "Dahdul", "Wasila", "" ], [ "Charpentier", "Caleb", "" ], [ "Bakış", "Yasin", "" ], [ "Bart", "Henry L.", "Jr." ], [ "Mabee", "Paula M.", "" ], [ "Lapp", "Hilmar", "" ], [ "Balhoff", "James P.", "" ], [ "Chao", "Wei-Lun", "" ], [ "Stewart", "Charles", "" ], [ "Berger-Wolf", "Tanya", "" ], [ "Karpatne", "Anuj", "" ] ]
A central problem in biology is to understand how organisms evolve and adapt to their environment by acquiring variations in the observable characteristics or traits of species across the tree of life. With the growing availability of large-scale image repositories in biology and recent advances in generative modeling, there is an opportunity to accelerate the discovery of evolutionary traits automatically from images. Toward this goal, we introduce Phylo-Diffusion, a novel framework for conditioning diffusion models with phylogenetic knowledge represented in the form of HIERarchical Embeddings (HIER-Embeds). We also propose two new experiments for perturbing the embedding space of Phylo-Diffusion: trait masking and trait swapping, inspired by counterpart experiments of gene knockout and gene editing/swapping. Our work represents a novel methodological advance in generative modeling to structure the embedding space of diffusion models using tree-based knowledge. Our work also opens a new chapter of research in evolutionary biology by using generative models to visualize evolutionary changes directly from images. We empirically demonstrate the usefulness of Phylo-Diffusion in capturing meaningful trait variations for fishes and birds, revealing novel insights about the biological mechanisms of their evolution.
1308.6808
Alexandra Jilkine
Alexandra Jilkine, Ryan N. Gutenkunst
Effect of Dedifferentiation on Time to Mutation Acquisition in Stem Cell-Driven Cancers
null
null
10.1371/journal.pcbi.1003481
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accumulating evidence suggests that many tumors have a hierarchical organization, with the bulk of the tumor composed of relatively differentiated short-lived progenitor cells that are maintained by a small population of undifferentiated long-lived cancer stem cells. It is unclear, however, whether cancer stem cells originate from normal stem cells or from dedifferentiated progenitor cells. To address this, we mathematically modeled the effect of dedifferentiation on carcinogenesis. We considered a hybrid stochastic-deterministic model of mutation accumulation in both stem cells and progenitors, including dedifferentiation of progenitor cells to a stem cell-like state. We performed exact computer simulations of the emergence of tumor subpopulations with two mutations, and we derived semi-analytical estimates for the waiting time distribution to fixation. Our results suggest that dedifferentiation may play an important role in carcinogenesis, depending on how stem cell homeostasis is maintained. If the stem cell population size is held strictly constant (due to all divisions being asymmetric), we found that dedifferentiation acts like a positive selective force in the stem cell population and thus speeds carcinogenesis. If the stem cell population size is allowed to vary stochastically with density-dependent reproduction rates (allowing both symmetric and asymmetric divisions), we found that dedifferentiation beyond a critical threshold leads to exponential growth of the stem cell population. Thus, dedifferentiation may play a crucial role, the common modeling assumption of constant stem cell population size may not be adequate, and further progress in understanding carcinogenesis demands a more detailed mechanistic understanding of stem cell homeostasis.
[ { "created": "Fri, 30 Aug 2013 17:50:45 GMT", "version": "v1" } ]
2015-06-17
[ [ "Jilkine", "Alexandra", "" ], [ "Gutenkunst", "Ryan N.", "" ] ]
Accumulating evidence suggests that many tumors have a hierarchical organization, with the bulk of the tumor composed of relatively differentiated short-lived progenitor cells that are maintained by a small population of undifferentiated long-lived cancer stem cells. It is unclear, however, whether cancer stem cells originate from normal stem cells or from dedifferentiated progenitor cells. To address this, we mathematically modeled the effect of dedifferentiation on carcinogenesis. We considered a hybrid stochastic-deterministic model of mutation accumulation in both stem cells and progenitors, including dedifferentiation of progenitor cells to a stem cell-like state. We performed exact computer simulations of the emergence of tumor subpopulations with two mutations, and we derived semi-analytical estimates for the waiting time distribution to fixation. Our results suggest that dedifferentiation may play an important role in carcinogenesis, depending on how stem cell homeostasis is maintained. If the stem cell population size is held strictly constant (due to all divisions being asymmetric), we found that dedifferentiation acts like a positive selective force in the stem cell population and thus speeds carcinogenesis. If the stem cell population size is allowed to vary stochastically with density-dependent reproduction rates (allowing both symmetric and asymmetric divisions), we found that dedifferentiation beyond a critical threshold leads to exponential growth of the stem cell population. Thus, dedifferentiation may play a crucial role, the common modeling assumption of constant stem cell population size may not be adequate, and further progress in understanding carcinogenesis demands a more detailed mechanistic understanding of stem cell homeostasis.
2401.08873
Zahra Sheikhbahaee
Zahra Sheikhbahaee and Adam Safron and Casper Hesp and Guillaume Dumas
From Physics to Sentience: Deciphering the Semantics of the Free-Energy Principle and Evaluating its Claims
null
null
10.1016/j.plrev.2023.11.004
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Free-Energy Principle (FEP) [1-3] has been adopted in a variety of ambitious proposals that aim to characterize all adaptive, sentient, and cognitive systems within a unifying framework. Judging by the amount of attention it has received from the scientific community, the FEP has gained significant traction in these pursuits. The current target article represents an important iteration of this research paradigm in formally describing emergent dynamics rather than merely (quasi-)steady states. This affords more in-depth considerations of the spatio-temporal complexities of cross-scale causality - as we have encouraged and built towards in previous publications (e.g., [4-9]). In this spirit of constructive feedback, we submit a few technical comments on some of the matters that appear to require further attention, in order to improve the clarity, rigour, and applicability of this framework.
[ { "created": "Tue, 16 Jan 2024 23:11:51 GMT", "version": "v1" } ]
2024-01-18
[ [ "Sheikhbahaee", "Zahra", "" ], [ "Safron", "Adam", "" ], [ "Hesp", "Casper", "" ], [ "Dumas", "Guillaume", "" ] ]
The Free-Energy Principle (FEP) [1-3] has been adopted in a variety of ambitious proposals that aim to characterize all adaptive, sentient, and cognitive systems within a unifying framework. Judging by the amount of attention it has received from the scientific community, the FEP has gained significant traction in these pursuits. The current target article represents an important iteration of this research paradigm in formally describing emergent dynamics rather than merely (quasi-)steady states. This affords more in-depth considerations of the spatio-temporal complexities of cross-scale causality - as we have encouraged and built towards in previous publications (e.g., [4-9]). In this spirit of constructive feedback, we submit a few technical comments on some of the matters that appear to require further attention, in order to improve the clarity, rigour, and applicability of this framework.
2109.02178
Felix Tena
Felix Tena, Oscar Garnica, Juan Lanchares and J. Ignacio Hidalgo
A Critical Review of the state-of-the-art on Deep Neural Networks for Blood Glucose Prediction in Patients with Diabetes
17 pages, 20 figures and 16 tables
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article compares ten recently proposed neural networks and proposes two ensemble neural network-based models for blood glucose prediction. All of them are tested under the same dataset, preprocessing workflow, and tools using the OhioT1DM Dataset at three different prediction horizons: 30, 60, and 120 minutes. We compare their performance using the most common metrics in blood glucose prediction and rank the best-performing ones using three methods devised for the statistical comparison of the performance of multiple algorithms: scmamp, model confidence set, and superior predictive ability. Our analysis highlights those models with the highest probability of being the best predictors, estimates the increase in error of the models that perform more poorly with respect to the best ones, and provides a guide for their use in clinical practice.
[ { "created": "Thu, 2 Sep 2021 09:08:26 GMT", "version": "v1" } ]
2021-09-07
[ [ "Tena", "Felix", "" ], [ "Garnica", "Oscar", "" ], [ "Lanchares", "Juan", "" ], [ "Hidalgo", "J. Ignacio", "" ] ]
This article compares ten recently proposed neural networks and proposes two ensemble neural network-based models for blood glucose prediction. All of them are tested under the same dataset, preprocessing workflow, and tools using the OhioT1DM Dataset at three different prediction horizons: 30, 60, and 120 minutes. We compare their performance using the most common metrics in blood glucose prediction and rank the best-performing ones using three methods devised for the statistical comparison of the performance of multiple algorithms: scmamp, model confidence set, and superior predictive ability. Our analysis highlights those models with the highest probability of being the best predictors, estimates the increase in error of the models that perform more poorly with respect to the best ones, and provides a guide for their use in clinical practice.
1701.04905
Michael Schaub
Yazan N. Billeh and Michael T. Schaub
Feedforward Architectures Driven by Inhibitory Interactions
13 pages, 6 figures, J Comput Neurosci (2017)
null
10.1007/s10827-017-0669-1
null
q-bio.NC cond-mat.dis-nn nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Directed information transmission is paramount for many social, physical, and biological systems. For neural systems, scientists have studied this problem under the paradigm of feedforward networks for decades. In most models of feedforward networks, activity is exclusively driven by excitatory neurons and the wiring patterns between them, while inhibitory neurons play only a stabilizing role for the network dynamics. Motivated by recent experimental discoveries of hippocampal circuitry, cortical circuitry, and the diversity of inhibitory neurons throughout the brain, here we illustrate that one can construct such networks even if the connectivity between the excitatory units in the system remains random. This is achieved by endowing inhibitory nodes with a more active role in the network. Our findings demonstrate that apparent feedforward activity can be caused by a much broader network-architectural basis than often assumed.
[ { "created": "Wed, 18 Jan 2017 00:13:43 GMT", "version": "v1" }, { "created": "Fri, 20 Oct 2017 21:50:21 GMT", "version": "v2" }, { "created": "Fri, 17 Nov 2017 22:34:51 GMT", "version": "v3" } ]
2017-11-21
[ [ "Billeh", "Yazan N.", "" ], [ "Schaub", "Michael T.", "" ] ]
Directed information transmission is paramount for many social, physical, and biological systems. For neural systems, scientists have studied this problem under the paradigm of feedforward networks for decades. In most models of feedforward networks, activity is exclusively driven by excitatory neurons and the wiring patterns between them, while inhibitory neurons play only a stabilizing role for the network dynamics. Motivated by recent experimental discoveries of hippocampal circuitry, cortical circuitry, and the diversity of inhibitory neurons throughout the brain, here we illustrate that one can construct such networks even if the connectivity between the excitatory units in the system remains random. This is achieved by endowing inhibitory nodes with a more active role in the network. Our findings demonstrate that apparent feedforward activity can be caused by a much broader network-architectural basis than often assumed.