Dataset schema (field: type, observed size range):

  id              string, 9–13 chars
  submitter       string, 4–48 chars
  authors         string, 4–9.62k chars
  title           string, 4–343 chars
  comments        string, 2–480 chars
  journal-ref     string, 9–309 chars
  doi             string, 12–138 chars
  report-no       string, 277 distinct values
  categories      string, 8–87 chars
  license         string, 9 distinct values
  orig_abstract   string, 27–3.76k chars
  versions        list, 1–15 items
  update_date     string, 10 chars
  authors_parsed  list, 1–147 items
  abstract        string, 24–3.75k chars
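The column list above can be read as a record schema. A minimal sketch, assuming records are represented as Python dicts (the `SCHEMA` mapping and `validate` helper below are illustrative, not part of the dataset), of checking one record against the declared column types:

```python
# Hypothetical schema check: field names mirror the column list above;
# string columns map to str, list columns (versions, authors_parsed) to list.
SCHEMA = {
    "id": str, "submitter": str, "authors": str, "title": str,
    "comments": str, "journal-ref": str, "doi": str, "report-no": str,
    "categories": str, "license": str, "orig_abstract": str,
    "versions": list, "update_date": str, "authors_parsed": list,
    "abstract": str,
}

def validate(record):
    """True if every schema field is present with the declared type.
    Null-valued fields (None) are allowed, matching the records below."""
    for field, typ in SCHEMA.items():
        if field not in record:
            return False
        value = record[field]
        if value is not None and not isinstance(value, typ):
            return False
    return True

# Example built from the first record below (abstracts elided).
example = {
    "id": "1103.5780", "submitter": "Joseph Heled",
    "authors": "Heled Joseph and Alexei Drummond",
    "title": "Calibrated Tree Priors for Relaxed Phylogenetics"
             " and Divergence Time Estimation",
    "comments": None, "journal-ref": None, "doi": None, "report-no": None,
    "categories": "q-bio.PE",
    "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "orig_abstract": "...",
    "versions": [{"created": "Tue, 29 Mar 2011 22:03:08 GMT",
                  "version": "v1"}],
    "update_date": "2011-03-31",
    "authors_parsed": [["Joseph", "Heled", ""], ["Drummond", "Alexei", ""]],
    "abstract": "...",
}
```

Note that null fields in the records below (comments, journal-ref, doi, report-no) pass the check as `None`.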
1103.5780
Joseph Heled
Heled Joseph and Alexei Drummond
Calibrated Tree Priors for Relaxed Phylogenetics and Divergence Time Estimation
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of fossil evidence to calibrate divergence time estimation has a long history. More recently Bayesian MCMC has become the dominant method of divergence time estimation and fossil evidence has been re-interpreted as the specification of prior distributions on the divergence times of calibration nodes. These so-called "soft calibrations" have become widely used but the statistical properties of calibrated tree priors in a Bayesian setting has not been carefully investigated. Here we clarify that calibration densities, such as those defined in BEAST 1.5, do not represent the marginal prior distribution of the calibration node. We illustrate this with a number of analytical results on small trees. We also describe an alternative construction for a calibrated Yule prior on trees that allows direct specification of the marginal prior distribution of the calibrated divergence time, with or without the restriction of monophyly. This method requires the computation of the Yule prior conditional on the height of the divergence being calibrated. Unfortunately, a practical solution for multiple calibrations remains elusive. Our results suggest that direct estimation of the prior induced by specifying multiple calibration densities should be a prerequisite of any divergence time dating analysis.
[ { "created": "Tue, 29 Mar 2011 22:03:08 GMT", "version": "v1" } ]
2011-03-31
[ [ "Joseph", "Heled", "" ], [ "Drummond", "Alexei", "" ] ]
The use of fossil evidence to calibrate divergence time estimation has a long history. More recently, Bayesian MCMC has become the dominant method of divergence time estimation, and fossil evidence has been re-interpreted as the specification of prior distributions on the divergence times of calibration nodes. These so-called "soft calibrations" have become widely used, but the statistical properties of calibrated tree priors in a Bayesian setting have not been carefully investigated. Here we clarify that calibration densities, such as those defined in BEAST 1.5, do not represent the marginal prior distribution of the calibration node. We illustrate this with a number of analytical results on small trees. We also describe an alternative construction for a calibrated Yule prior on trees that allows direct specification of the marginal prior distribution of the calibrated divergence time, with or without the restriction of monophyly. This method requires the computation of the Yule prior conditional on the height of the divergence being calibrated. Unfortunately, a practical solution for multiple calibrations remains elusive. Our results suggest that direct estimation of the prior induced by specifying multiple calibration densities should be a prerequisite of any divergence time dating analysis.
1708.05077
Michael Deem
Pu Han and Michael W. Deem
Nonclassical phase diagram for virus bacterial co-evolution mediated by CRISPR
16 pages, 7 figures
J. Roy. Soc. Interface 14 (2017) 20160905
10.1098/rsif.2016.0905
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
CRISPR is a newly discovered prokaryotic immune system. Bacteria and archaea with this system incorporate genetic material from invading viruses into their genomes, providing protection against future infection by similar viruses. The conditions for coexistence of prokaryots and viruses is an interesting problem in evolutionary biology. In this work, we show an intriguing phase diagram of the virus extinction probability, which is more complex than that of the classical predator-prey model. As the CRISPR incorporates genetic material, viruses are under pressure to evolve to escape the recognition by CRISPR. When bacteria have a small rate of deleting spacers, a new parameter region in which bacteria and viruses can coexist arises, and it leads to a more complex coexistence patten for bacteria and viruses. For example, when the virus mutation rate is low, the virus extinction probability changes non-montonically with the bacterial exposure rate. The virus and bacteria co-evolution not only alters the virus extinction probability, but also changes the bacterial population structure. Additionally, we show that recombination is a successful strategy for viruses to escape from CRISPR recognition when viruses have multiple proto-spacers, providing support for a recombination-mediated escape mechanism suggested experimentally. Finally, we suggest that the reentrant phase diagram, in which phages can progress through three phases of extinction and two phases of abundance at low spacer deletion rates as a function of exposure rate to bacteria, is an experimentally testable phenomenon.
[ { "created": "Wed, 16 Aug 2017 20:48:27 GMT", "version": "v1" } ]
2017-08-18
[ [ "Han", "Pu", "" ], [ "Deem", "Michael W.", "" ] ]
CRISPR is a newly discovered prokaryotic immune system. Bacteria and archaea with this system incorporate genetic material from invading viruses into their genomes, providing protection against future infection by similar viruses. The conditions for coexistence of prokaryotes and viruses are an interesting problem in evolutionary biology. In this work, we show an intriguing phase diagram of the virus extinction probability, which is more complex than that of the classical predator-prey model. As the CRISPR incorporates genetic material, viruses are under pressure to evolve to escape recognition by CRISPR. When bacteria have a small rate of deleting spacers, a new parameter region in which bacteria and viruses can coexist arises, leading to a more complex coexistence pattern for bacteria and viruses. For example, when the virus mutation rate is low, the virus extinction probability changes non-monotonically with the bacterial exposure rate. The virus and bacteria co-evolution not only alters the virus extinction probability, but also changes the bacterial population structure. Additionally, we show that recombination is a successful strategy for viruses to escape from CRISPR recognition when viruses have multiple proto-spacers, providing support for a recombination-mediated escape mechanism suggested experimentally. Finally, we suggest that the reentrant phase diagram, in which phages can progress through three phases of extinction and two phases of abundance at low spacer deletion rates as a function of exposure rate to bacteria, is an experimentally testable phenomenon.
1605.01441
Vince Grolmusz
Csaba Kerepesi and Balint Varga and Balazs Szalkai and Vince Grolmusz
The Dorsal Striatum and the Dynamics of the Consensus Connectomes in the Frontal Lobe of the Human Brain
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the applications of the graph theory it is unusual that one considers numerous, pairwise different graphs on the very same set of vertices. In the case of human braingraphs or connectomes, however, this is the standard situation: the nodes correspond to anatomically identified cerebral regions, and two vertices are connected by an edge if a diffusion MRI-based workflow identifies a fiber of axons, running between the two regions, corresponding to the two vertices. Therefore, if we examine the braingraphs of $n$ subjects, then we have $n$ graphs on the very same, anatomically identified vertex set. It is a natural idea to describe the $k$-frequently appearing edges in these graphs: the edges that are present between the same two vertices in at least $k$ out of the $n$ graphs. Based on the NIH-funded large Human Connectome Project's public data release, we have reported the construction of the Budapest Reference Connectome Server \url{http://connectome.pitgroup.org} that generates and visualizes these $k$-frequently appearing edges. We call the graphs of the $k$-frequently appearing edges "$k$-consensus connectomes" since an edge could be included only if it is present in at least $k$ graphs out of $n$. Considering the whole human brain, we have reported a surprising property of these consensus connectomes earlier. In the present work we are focusing on the frontal lobe of the brain, and we report here a similarly surprising dynamical property of the consensus connectomes when $k$ is gradually changed from $k=n$ to $k=1$: the connections between the nodes of the frontal lobe are seemingly emanating from those nodes that were connected to sub-cortical structures of the dorsal striatum: the caudate nucleus, and the putamen. We hypothesize that this dynamic behavior copies the axonal fiber development of the frontal lobe.
[ { "created": "Wed, 4 May 2016 21:24:16 GMT", "version": "v1" } ]
2016-05-06
[ [ "Kerepesi", "Csaba", "" ], [ "Varga", "Balint", "" ], [ "Szalkai", "Balazs", "" ], [ "Grolmusz", "Vince", "" ] ]
In applications of graph theory it is unusual to consider numerous, pairwise different graphs on the very same set of vertices. In the case of human braingraphs or connectomes, however, this is the standard situation: the nodes correspond to anatomically identified cerebral regions, and two vertices are connected by an edge if a diffusion MRI-based workflow identifies a fiber of axons running between the two regions corresponding to the two vertices. Therefore, if we examine the braingraphs of $n$ subjects, then we have $n$ graphs on the very same, anatomically identified vertex set. It is a natural idea to describe the $k$-frequently appearing edges in these graphs: the edges that are present between the same two vertices in at least $k$ out of the $n$ graphs. Based on the NIH-funded large Human Connectome Project's public data release, we have reported the construction of the Budapest Reference Connectome Server \url{http://connectome.pitgroup.org}, which generates and visualizes these $k$-frequently appearing edges. We call the graphs of the $k$-frequently appearing edges "$k$-consensus connectomes", since an edge is included only if it is present in at least $k$ graphs out of $n$. Considering the whole human brain, we have reported a surprising property of these consensus connectomes earlier. In the present work we focus on the frontal lobe of the brain, and we report a similarly surprising dynamical property of the consensus connectomes when $k$ is gradually changed from $k=n$ to $k=1$: the connections between the nodes of the frontal lobe seem to emanate from those nodes that were connected to sub-cortical structures of the dorsal striatum: the caudate nucleus and the putamen. We hypothesize that this dynamic behavior copies the axonal fiber development of the frontal lobe.
1508.05367
Sebasti\'an Basterrech
Andrea Mesa, Sebasti\'an Basterrech, Gustavo Guerberoff, Fernando Alvarez-Valin
Hidden Markov Models for Gene Sequence Classification: Classifying the VSG genes in the Trypanosoma brucei Genome
Accepted article in July, 2015 in Pattern Analysis and Applications, Springer. The article contains 23 pages, 4 figures, 8 tables and 51 references
null
10.1007/s10044-015-0508-9
null
q-bio.GN cs.CE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The article presents an application of Hidden Markov Models (HMMs) for pattern recognition on genome sequences. We apply HMM for identifying genes encoding the Variant Surface Glycoprotein (VSG) in the genomes of Trypanosoma brucei (T. brucei) and other African trypanosomes. These are parasitic protozoa causative agents of sleeping sickness and several diseases in domestic and wild animals. These parasites have a peculiar strategy to evade the host's immune system that consists in periodically changing their predominant cellular surface protein (VSG). The motivation for using patterns recognition methods to identify these genes, instead of traditional homology based ones, is that the levels of sequence identity (amino acid and DNA sequence) amongst these genes is often below of what is considered reliable in these methods. Among pattern recognition approaches, HMM are particularly suitable to tackle this problem because they can handle more naturally the determination of gene edges. We evaluate the performance of the model using different number of states in the Markov model, as well as several performance metrics. The model is applied using public genomic data. Our empirical results show that the VSG genes on T. brucei can be safely identified (high sensitivity and low rate of false positives) using HMM.
[ { "created": "Fri, 31 Jul 2015 14:57:09 GMT", "version": "v1" }, { "created": "Wed, 21 Oct 2015 19:39:43 GMT", "version": "v2" } ]
2015-10-22
[ [ "Mesa", "Andrea", "" ], [ "Basterrech", "Sebastián", "" ], [ "Guerberoff", "Gustavo", "" ], [ "Alvarez-Valin", "Fernando", "" ] ]
The article presents an application of Hidden Markov Models (HMMs) for pattern recognition on genome sequences. We apply HMMs to identify genes encoding the Variant Surface Glycoprotein (VSG) in the genomes of Trypanosoma brucei (T. brucei) and other African trypanosomes. These are parasitic protozoa, the causative agents of sleeping sickness and of several diseases in domestic and wild animals. These parasites have a peculiar strategy to evade the host's immune system that consists of periodically changing their predominant cellular surface protein (VSG). The motivation for using pattern recognition methods to identify these genes, instead of traditional homology-based ones, is that the levels of sequence identity (amino acid and DNA sequence) amongst these genes are often below what is considered reliable in those methods. Among pattern recognition approaches, HMMs are particularly suitable for tackling this problem because they handle the determination of gene edges more naturally. We evaluate the performance of the model using different numbers of states in the Markov model, as well as several performance metrics. The model is applied using public genomic data. Our empirical results show that the VSG genes in T. brucei can be safely identified (high sensitivity and a low rate of false positives) using HMMs.
1601.05506
Kyle Kloster
Biaobin Jiang, Kyle Kloster, David F. Gleich, Michael Gribskov
AptRank: An Adaptive PageRank Model for Protein Function Prediction on Bi-relational Graphs
20 pages, code available at this url https://github.rcac.purdue.edu/mgribsko/aptrank
null
null
null
q-bio.MN cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood- and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model. We first construct a Bi-relational graph (Birg) model comprised of both protein-protein association and function-function hierarchical networks. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is an application of traditional PageRank with fixed decay parameters. In contrast, AptRank uses an adaptive mechanism to improve the performance of BirgRank. We evaluate both methods in predicting protein function on yeast, fly, and human datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We design three validation strategies: missing function prediction, de novo function prediction, and guided function prediction to comprehensively evaluate all six methods. We find that both BirgRank and AptRank outperform the others, especially in missing function prediction when using only 10% of the data for training. AptRank combines protein-protein associations and the GO function-function hierarchy into a two-layer network model without flattening the hierarchy into a similarity kernel. Introducing an adaptive mechanism to the traditional, fixed-parameter model of PageRank greatly improves the accuracy of protein function prediction.
[ { "created": "Thu, 21 Jan 2016 04:22:57 GMT", "version": "v1" }, { "created": "Sun, 22 May 2016 06:04:09 GMT", "version": "v2" } ]
2016-05-24
[ [ "Jiang", "Biaobin", "" ], [ "Kloster", "Kyle", "" ], [ "Gleich", "David F.", "" ], [ "Gribskov", "Michael", "" ] ]
Diffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood- and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model. We first construct a Bi-relational graph (Birg) model comprised of both protein-protein association and function-function hierarchical networks. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is an application of traditional PageRank with fixed decay parameters. In contrast, AptRank uses an adaptive mechanism to improve the performance of BirgRank. We evaluate both methods in predicting protein function on yeast, fly, and human datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We design three validation strategies: missing function prediction, de novo function prediction, and guided function prediction to comprehensively evaluate all six methods. We find that both BirgRank and AptRank outperform the others, especially in missing function prediction when using only 10% of the data for training. AptRank combines protein-protein associations and the GO function-function hierarchy into a two-layer network model without flattening the hierarchy into a similarity kernel. Introducing an adaptive mechanism to the traditional, fixed-parameter model of PageRank greatly improves the accuracy of protein function prediction.
1204.0710
Donald Cooper Ph.D.
Michael V. Baratta, Shinya Nakamura, Peter Dobelis, Matthew B. Pomrenze, Samuel D. Dolzani and Donald C. Cooper
Optogenetic control of genetically-targeted pyramidal neuron activity in prefrontal cortex
2 pages, 2 figures Posted on http://www.neuro-cloud.net/nature-precedings/baratta
null
10.1038/npre.2012.7102.1
null
q-bio.NC q-bio.CB
http://creativecommons.org/licenses/by-nc-sa/3.0/
A salient feature of prefrontal cortex organization is the vast diversity of cell types that support the temporal integration of events required for sculpting future responses. A major obstacle in understanding the routing of information among prefrontal neuronal subtypes is the inability to manipulate the electrical activity of genetically defined cell types over behaviorally relevant timescales and activity patterns. To address these constraints, we present here a simple approach for selective activation of prefrontal excitatory neurons in both in vitro and in vivo preparations. Rat prelimbic pyramidal neurons were genetically targeted to express a light-activated nonselective cation channel, channelrhodopsin-2, or a light-driven inward chloride pump, halorhodopsin, which enabled them to be rapidly and reversibly activated or inhibited by pulses of light. These light responsive tools provide a spatially and temporally precise means of studying how different cell types contribute to information processing in cortical circuits. Our customized optrodes and optical commutators for in vivo recording allow for efficient light delivery and recording and can be requested at www.neuro-cloud.net/nature-precedings/baratta.
[ { "created": "Tue, 3 Apr 2012 15:26:53 GMT", "version": "v1" } ]
2012-04-04
[ [ "Baratta", "Michael V.", "" ], [ "Nakamura", "Shinya", "" ], [ "Dobelis", "Peter", "" ], [ "Pomrenze", "Matthew B.", "" ], [ "Dolzani", "Samuel D.", "" ], [ "Cooper", "Donald C.", "" ] ]
A salient feature of prefrontal cortex organization is the vast diversity of cell types that support the temporal integration of events required for sculpting future responses. A major obstacle in understanding the routing of information among prefrontal neuronal subtypes is the inability to manipulate the electrical activity of genetically defined cell types over behaviorally relevant timescales and activity patterns. To address these constraints, we present here a simple approach for selective activation of prefrontal excitatory neurons in both in vitro and in vivo preparations. Rat prelimbic pyramidal neurons were genetically targeted to express a light-activated nonselective cation channel, channelrhodopsin-2, or a light-driven inward chloride pump, halorhodopsin, which enabled them to be rapidly and reversibly activated or inhibited by pulses of light. These light responsive tools provide a spatially and temporally precise means of studying how different cell types contribute to information processing in cortical circuits. Our customized optrodes and optical commutators for in vivo recording allow for efficient light delivery and recording and can be requested at www.neuro-cloud.net/nature-precedings/baratta.
1910.13563
Tarik Gouhier
Pradeep Pillai and Tarik C. Gouhier
Not even wrong: Reply to Loreau and Hector
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Loreau and Hector (2019) Comment on our paper (Pillai and Gouhier, 2019) failed to address the two core elements of our critique, both the circularity of the BEF research program, in general, and the mathematical flaws of the Loreau-Hector partitioning scheme, in particular. Loreau and Hector avoided dealing with the first part of our critique by arguing against a non-existent claim that all biodiversity effects could be reduced to coexistence, while the mathematical flaws in the Loreau-Hector partitioning method that we described in the second part of our critique were ignored altogether. Here, we address these misconceptions and demonstrate that all of the claims that were made in our original paper hold. We conclude that (i) BEF studies need to adopt baselines that account for coexistence in order to avoid overestimating the effects of biodiversity and (ii) the LH partitioning method should not be used unless the linearity of the abundance-ecosystem functioning relationship in monocultures can be verified for all species.
[ { "created": "Tue, 29 Oct 2019 22:19:38 GMT", "version": "v1" }, { "created": "Mon, 11 Nov 2019 18:00:59 GMT", "version": "v2" } ]
2019-11-12
[ [ "Pillai", "Pradeep", "" ], [ "Gouhier", "Tarik C.", "" ] ]
The Loreau and Hector (2019) Comment on our paper (Pillai and Gouhier, 2019) failed to address the two core elements of our critique: the circularity of the BEF research program in general, and the mathematical flaws of the Loreau-Hector partitioning scheme in particular. Loreau and Hector avoided dealing with the first part of our critique by arguing against a non-existent claim that all biodiversity effects could be reduced to coexistence, while the mathematical flaws in the Loreau-Hector partitioning method that we described in the second part of our critique were ignored altogether. Here, we address these misconceptions and demonstrate that all of the claims made in our original paper hold. We conclude that (i) BEF studies need to adopt baselines that account for coexistence in order to avoid overestimating the effects of biodiversity, and (ii) the LH partitioning method should not be used unless the linearity of the abundance-ecosystem functioning relationship in monocultures can be verified for all species.
2406.16991
Partha Ghose Professor
Partha Ghose
Why Quantum-like Models of Cognition Work
6 pages, no figures
null
null
null
q-bio.NC quant-ph
http://creativecommons.org/licenses/by/4.0/
It is shown that Brownian motions executed by state points of neural membranes generate a Schr\"{o}dinger-like equation with $\hbar/m$ replaced by the coefficient of diffusion $\sigma$ of the substrates.
[ { "created": "Mon, 24 Jun 2024 07:25:57 GMT", "version": "v1" } ]
2024-06-26
[ [ "Ghose", "Partha", "" ] ]
It is shown that Brownian motions executed by state points of neural membranes generate a Schr\"{o}dinger-like equation with $\hbar/m$ replaced by the coefficient of diffusion $\sigma$ of the substrates.
1712.02216
Marco Monticelli
M. Monticelli, D. S. Jokhun, D. Petti, G. V. Shivashankar and R. Bertacco
Active nano-mechanical stimulation of single cells for mechanobiology
Single .pdf file, 17 pages, 5 figures
null
null
null
q-bio.CB physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In-vivo, cells are frequently exposed to multiple mechanical stimuli arising from the extracellular microenvironment, with deep impact on many biological functions. On the other hand, current methods for mechanobiology do not allow to easily replicate in-vitro the complex spatio-temporal behavior of such mechanical signals. Here, we introduce a new platform for studying the mechanical coupling between the extracellular environment and the nucleus in living cells, based on active substrates for cell culture made of Fe-coated polymeric micropillars. Under the action of quasi-static external magnetic fields, each group of pillars produces synchronous nano-mechanical stimuli at different points of the cell membrane, thanks to the highly controllable pillars' deflection. This method enables to perform a new set of experiments for the investigation of cellular dynamics and mechanotransduction mechanisms in response to a periodic nano-pinching, with tunable strength and arbitrary temporal shape. We applied this methodology to NIH3T3 cells, demonstrating how the nano-mechanical stimulation affects the actin cytoskeleton, nuclear morphology, H2B core-histone dynamics and MKL transcription-cofactor nuclear to cytoplasmic translocation.
[ { "created": "Wed, 6 Dec 2017 15:03:16 GMT", "version": "v1" } ]
2017-12-07
[ [ "Monticelli", "M.", "" ], [ "Jokhun", "D. S.", "" ], [ "Petti", "D.", "" ], [ "Shivashankar", "G. V.", "" ], [ "Bertacco", "R.", "" ] ]
In vivo, cells are frequently exposed to multiple mechanical stimuli arising from the extracellular microenvironment, with a deep impact on many biological functions. On the other hand, current methods for mechanobiology do not make it easy to replicate in vitro the complex spatio-temporal behavior of such mechanical signals. Here, we introduce a new platform for studying the mechanical coupling between the extracellular environment and the nucleus in living cells, based on active substrates for cell culture made of Fe-coated polymeric micropillars. Under the action of quasi-static external magnetic fields, each group of pillars produces synchronous nano-mechanical stimuli at different points of the cell membrane, thanks to the highly controllable deflection of the pillars. This method enables a new set of experiments for the investigation of cellular dynamics and mechanotransduction mechanisms in response to a periodic nano-pinching, with tunable strength and arbitrary temporal shape. We applied this methodology to NIH3T3 cells, demonstrating how the nano-mechanical stimulation affects the actin cytoskeleton, nuclear morphology, H2B core-histone dynamics and MKL transcription-cofactor nuclear-to-cytoplasmic translocation.
2112.08309
Greg Kronberg
Greg Kronberg, Ahmet O. Ceceli, Yuefeng Huang, Pierre-Olivier Gaudreault, Sarah King, Natalie McClain, Pazia Miller, Lily Gabay, Devarshi Vasa, Pias Malaker, Defne Ekin, Nelly Alia-Klein, Rita Z. Goldstein
Heroin addiction hijacks the Nucleus Accumbens: craving and reactivity to naturalistic stimuli
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Drug-related cues hijack attention away from alternative reinforcers in drug addiction, inducing craving and motivating drug-seeking. However, the neural correlates underlying this biased processing, its expression in the real-world, and its relationship to cue-induced craving are not fully established, especially in opioid addiction. Here we tracked inter-brain synchronization in the Nucleus Accumbens (NAc), a hub of motivational salience, while heroin-addicted individuals and healthy control subjects watched the same engaging heroin-related movie. Strikingly, the left NAc was synchronized during drug scenes in the addicted individuals and non-drug scenes in controls, predicting scene- and movie-induced heroin craving in the former. Our results open a window into the neurobiology underlying shared drug-biased processing of naturalistic stimuli and cue-induced craving in opiate addiction as they unfold in the real world.
[ { "created": "Wed, 15 Dec 2021 17:57:26 GMT", "version": "v1" } ]
2021-12-16
[ [ "Kronberg", "Greg", "" ], [ "Ceceli", "Ahmet O.", "" ], [ "Huang", "Yuefeng", "" ], [ "Gaudreault", "Pierre-Olivier", "" ], [ "King", "Sarah", "" ], [ "McClain", "Natalie", "" ], [ "Miller", "Pazia", "" ], [ "Gabay", "Lily", "" ], [ "Vasa", "Devarshi", "" ], [ "Malaker", "Pias", "" ], [ "Ekin", "Defne", "" ], [ "Alia-Klein", "Nelly", "" ], [ "Goldstein", "Rita Z.", "" ] ]
Drug-related cues hijack attention away from alternative reinforcers in drug addiction, inducing craving and motivating drug-seeking. However, the neural correlates underlying this biased processing, its expression in the real-world, and its relationship to cue-induced craving are not fully established, especially in opioid addiction. Here we tracked inter-brain synchronization in the Nucleus Accumbens (NAc), a hub of motivational salience, while heroin-addicted individuals and healthy control subjects watched the same engaging heroin-related movie. Strikingly, the left NAc was synchronized during drug scenes in the addicted individuals and non-drug scenes in controls, predicting scene- and movie-induced heroin craving in the former. Our results open a window into the neurobiology underlying shared drug-biased processing of naturalistic stimuli and cue-induced craving in opiate addiction as they unfold in the real world.
1205.0584
Mike Steel Prof.
Wim Hordijk, Mike Steel and Stuart Kauffman
The Structure of Autocatalytic Sets: Evolvability, Enablement, and Emergence
12 pages, 5 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents new results from a detailed study of the structure of autocatalytic sets. We show how autocatalytic sets can be decomposed into smaller autocatalytic subsets, and how these subsets can be identified and classified. We then argue how this has important consequences for the evolvability, enablement, and emergence of autocatalytic sets. We end with some speculation on how all this might lead to a generalized theory of autocatalytic sets, which could possibly be applied to entire ecologies or even economies.
[ { "created": "Wed, 2 May 2012 23:53:22 GMT", "version": "v1" }, { "created": "Fri, 4 May 2012 18:05:36 GMT", "version": "v2" } ]
2012-05-07
[ [ "Hordijk", "Wim", "" ], [ "Steel", "Mike", "" ], [ "Kauffman", "Stuart", "" ] ]
This paper presents new results from a detailed study of the structure of autocatalytic sets. We show how autocatalytic sets can be decomposed into smaller autocatalytic subsets, and how these subsets can be identified and classified. We then argue how this has important consequences for the evolvability, enablement, and emergence of autocatalytic sets. We end with some speculation on how all this might lead to a generalized theory of autocatalytic sets, which could possibly be applied to entire ecologies or even economies.
2007.05297
Christina Surulescu
Pawan Kumar, Jing Li, and Christina Surulescu
Multiscale modeling of glioma pseudopalisades: contributions from the tumor microenvironment
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gliomas are primary brain tumors with a high invasive potential and infiltrative spread. Among them, glioblastoma multiforme (GBM) exhibits microvascular hyperplasia and pronounced necrosis triggered by hypoxia. Histological samples showing garland-like hypercellular structures (so-called pseudopalisades) centered around the occlusion site of a capillary are typical for GBM and hint on poor prognosis of patient survival. We propose a multiscale modeling approach in the kinetic theory of active particles framework and deduce by an upscaling process a reaction-diffusion model with repellent pH-taxis. We prove existence of a unique global bounded classical solution for a version of the obtained macroscopic system and investigate the asymptotic behavior of the solution. Moreover, we study two different types of scaling and compare the behavior of the obtained macroscopic PDEs by way of simulations. These show that patterns (including pseudopalisades) can be formed for some parameter ranges, in accordance with the tumor grade. This is true when the PDEs are obtained via parabolic scaling (undirected tissue), while no such patterns are observed for the PDEs arising by a hyperbolic limit (directed tissue). This suggests that brain tissue might be undirected - at least as far as glioma migration is concerned. We also investigate two different ways of including cell level descriptions of response to hypoxia and the way they are related.
[ { "created": "Fri, 10 Jul 2020 10:55:51 GMT", "version": "v1" } ]
2020-07-13
[ [ "Kumar", "Pawan", "" ], [ "Li", "Jing", "" ], [ "Surulescu", "Christina", "" ] ]
Gliomas are primary brain tumors with a high invasive potential and infiltrative spread. Among them, glioblastoma multiforme (GBM) exhibits microvascular hyperplasia and pronounced necrosis triggered by hypoxia. Histological samples showing garland-like hypercellular structures (so-called pseudopalisades) centered around the occlusion site of a capillary are typical for GBM and hint on poor prognosis of patient survival. We propose a multiscale modeling approach in the kinetic theory of active particles framework and deduce by an upscaling process a reaction-diffusion model with repellent pH-taxis. We prove existence of a unique global bounded classical solution for a version of the obtained macroscopic system and investigate the asymptotic behavior of the solution. Moreover, we study two different types of scaling and compare the behavior of the obtained macroscopic PDEs by way of simulations. These show that patterns (including pseudopalisades) can be formed for some parameter ranges, in accordance with the tumor grade. This is true when the PDEs are obtained via parabolic scaling (undirected tissue), while no such patterns are observed for the PDEs arising by a hyperbolic limit (directed tissue). This suggests that brain tissue might be undirected - at least as far as glioma migration is concerned. We also investigate two different ways of including cell level descriptions of response to hypoxia and the way they are related.
1609.08980
Rahul Remanan
Rahul Remanan (M.B.B.S.), Viktor Sukhotskiy (Ph.D. graduate student), Mona Shahbazi (N.P.), Edward P. Furlani (Ph.D.), Dale J. Lange (M.D.)
Assessment of corticospinal tract dysfunction and disease severity in amyotrophic lateral sclerosis
null
null
null
null
q-bio.NC q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The upper motor neuron dysfunction in amyotrophic lateral sclerosis was quantified using triple stimulation and more focal transcranial magnetic stimulation techniques that were developed to reduce recording variability. These measurements were combined with clinical and neurophysiological data to develop a novel random forest based supervised machine learning prediction model. This model was capable of predicting cross-sectional ALS disease severity as measured by the ALSFRSr scale with 97% overall accuracy and 99% precision. The machine learning model developed in this research provides a new, unique and objective diagnostic method for quantifying disease severity and identifying subtle changes in disease progression in ALS.
[ { "created": "Wed, 28 Sep 2016 15:59:12 GMT", "version": "v1" } ]
2016-09-29
[ [ "Remanan", "Rahul", "", "M.B.B.S." ], [ "Sukhotskiy", "Viktor", "", "Ph.D. graduate student" ], [ "Shahbazi", "Mona", "", "N.P." ], [ "Furlani", "Edward P.", "", "Ph.D." ], [ "Lange", "Dale J.", "", "M.D." ] ]
The upper motor neuron dysfunction in amyotrophic lateral sclerosis was quantified using triple stimulation and more focal transcranial magnetic stimulation techniques that were developed to reduce recording variability. These measurements were combined with clinical and neurophysiological data to develop a novel random forest based supervised machine learning prediction model. This model was capable of predicting cross-sectional ALS disease severity as measured by the ALSFRSr scale with 97% overall accuracy and 99% precision. The machine learning model developed in this research provides a new, unique and objective diagnostic method for quantifying disease severity and identifying subtle changes in disease progression in ALS.
1309.2848
Shabnam Kadir
Shabnam N. Kadir, Dan F. M. Goodman, and Kenneth D. Harris
High-dimensional cluster analysis with the Masked EM Algorithm
10 pages, 2 figures
null
null
null
q-bio.QM cs.LG q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cluster analysis faces two problems in high dimensions: first, the `curse of dimensionality' that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. In many applications, only a small subset of features provide information about the cluster membership of any one data point; however, this informative feature subset may not be the same for all data points. Here we introduce a `Masked EM' algorithm for fitting mixture of Gaussians models in such cases. We show that the algorithm performs close to optimally on simulated Gaussian data, and in an application of `spike sorting' of high channel-count neuronal recordings.
[ { "created": "Wed, 11 Sep 2013 14:55:50 GMT", "version": "v1" } ]
2013-09-12
[ [ "Kadir", "Shabnam N.", "" ], [ "Goodman", "Dan F. M.", "" ], [ "Harris", "Kenneth D.", "" ] ]
Cluster analysis faces two problems in high dimensions: first, the `curse of dimensionality' that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. In many applications, only a small subset of features provide information about the cluster membership of any one data point; however, this informative feature subset may not be the same for all data points. Here we introduce a `Masked EM' algorithm for fitting mixture of Gaussians models in such cases. We show that the algorithm performs close to optimally on simulated Gaussian data, and in an application of `spike sorting' of high channel-count neuronal recordings.
2210.13925
Alexey Mazur K
Alexey K. Mazur and Eugene Gladyshev
Direct pairing of homologous DNA double helices may involve the B-to-C form transition
12 pages, 1 figure
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
In many organisms, homologous (or repetitive) chromosomal regions can associate and/or undergo concerted epigenetic changes in the absence of DNA breakage and recombination. The direct specific pairing of DNA duplexes with similar nucleotide sequences represents an attractive mechanism for recognizing such regions. Whereas the pairing of B-DNA duplexes may involve a large energy barrier, C-DNA duplexes are expected to pair much more readily. This unique feature of C-DNA is largely due to the fact that its major groove is wide and very shallow, permitting almost perfect initial homologous contacts between two duplexes without clashing. Overall, the conjectured role of C-DNA in recombination-independent pairing should revive the efforts to understand its structure and function in the cell.
[ { "created": "Tue, 25 Oct 2022 11:23:49 GMT", "version": "v1" } ]
2022-10-26
[ [ "Mazur", "Alexey K.", "" ], [ "Gladyshev", "Eugene", "" ] ]
In many organisms, homologous (or repetitive) chromosomal regions can associate and/or undergo concerted epigenetic changes in the absence of DNA breakage and recombination. The direct specific pairing of DNA duplexes with similar nucleotide sequences represents an attractive mechanism for recognizing such regions. Whereas the pairing of B-DNA duplexes may involve a large energy barrier, C-DNA duplexes are expected to pair much more readily. This unique feature of C-DNA is largely due to the fact that its major groove is wide and very shallow, permitting almost perfect initial homologous contacts between two duplexes without clashing. Overall, the conjectured role of C-DNA in recombination-independent pairing should revive the efforts to understand its structure and function in the cell.
1308.5668
Richard Naud
Richard Naud
An Integral Equation Approach to the Dynamics of L2-3 Cortical Neurons
6 pages, 4 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How do neuronal populations encode time-dependent stimuli in their population firing rate? To address this question, I consider the quasi-renewal equation and the event-based expansion, two theoretical approximations proposed recently, and test these against peri-stimulus time histograms from L2-3 pyramidal cells in vitro. Parameters are optimized by gradient descent to best match the firing rate output given the current input. The fitting method can estimate single-neuron parameters that are normally obtained either with intracellular recordings or with individual spike trains. I find that quasi-renewal theory predicts the adapting firing rate with good precision but not the event-based expansion. Quasi-renewal predictions are equal in quality with state-of-the-art spike timing prediction methods, and do so without resorting to the individual spike times or the membrane potential responses.
[ { "created": "Mon, 26 Aug 2013 19:55:20 GMT", "version": "v1" }, { "created": "Tue, 3 Dec 2013 03:31:29 GMT", "version": "v2" } ]
2013-12-04
[ [ "Naud", "Richard", "" ] ]
How do neuronal populations encode time-dependent stimuli in their population firing rate? To address this question, I consider the quasi-renewal equation and the event-based expansion, two theoretical approximations proposed recently, and test these against peri-stimulus time histograms from L2-3 pyramidal cells in vitro. Parameters are optimized by gradient descent to best match the firing rate output given the current input. The fitting method can estimate single-neuron parameters that are normally obtained either with intracellular recordings or with individual spike trains. I find that quasi-renewal theory predicts the adapting firing rate with good precision but not the event-based expansion. Quasi-renewal predictions are equal in quality with state-of-the-art spike timing prediction methods, and do so without resorting to the individual spike times or the membrane potential responses.
2001.11547
Lee Cooper
Sanghoon Lee, Mohamed Amgad, Deepak R. Chittajallu, Matt McCormick, Brian P Pollack, Habiba Elfandy, Hagar Hussein, David A Gutman, Lee AD Cooper
HistomicsML2.0: Fast interactive machine learning for whole slide imaging data
null
null
null
null
q-bio.QM cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extracting quantitative phenotypic information from whole-slide images presents significant challenges for investigators who are not experienced in developing image analysis algorithms. We present new software that enables rapid learn-by-example training of machine learning classifiers for detection of histologic patterns in whole-slide imaging datasets. HistomicsML2.0 uses convolutional networks to be readily adaptable to a variety of applications, provides a web-based user interface, and is available as a software container to simplify deployment.
[ { "created": "Thu, 30 Jan 2020 20:10:26 GMT", "version": "v1" } ]
2020-02-03
[ [ "Lee", "Sanghoon", "" ], [ "Amgad", "Mohamed", "" ], [ "Chittajallu", "Deepak R.", "" ], [ "McCormick", "Matt", "" ], [ "Pollack", "Brian P", "" ], [ "Elfandy", "Habiba", "" ], [ "Hussein", "Hagar", "" ], [ "Gutman", "David A", "" ], [ "Cooper", "Lee AD", "" ] ]
Extracting quantitative phenotypic information from whole-slide images presents significant challenges for investigators who are not experienced in developing image analysis algorithms. We present new software that enables rapid learn-by-example training of machine learning classifiers for detection of histologic patterns in whole-slide imaging datasets. HistomicsML2.0 uses convolutional networks to be readily adaptable to a variety of applications, provides a web-based user interface, and is available as a software container to simplify deployment.
2101.11710
James Powell
James Powell and Kari Sentz
Tracking Short-Term Temporal Linguistic Dynamics to Characterize Candidate Therapeutics for COVID-19 in the CORD-19 Corpus
null
null
null
null
q-bio.OT cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Scientific literature tends to grow as a function of funding and interest in a given field. Mining such literature can reveal trends that may not be immediately apparent. The CORD-19 corpus represents a growing corpus of scientific literature associated with COVID-19. We examined the intersection of a set of candidate therapeutics identified in a drug-repurposing study with temporal instances of the CORD-19 corpus to determine if it was possible to find and measure changes associated with them over time. We propose that the techniques we used could form the basis of a tool to pre-screen new candidate therapeutics early in the research process.
[ { "created": "Sat, 9 Jan 2021 23:24:05 GMT", "version": "v1" } ]
2021-01-29
[ [ "Powell", "James", "" ], [ "Sentz", "Kari", "" ] ]
Scientific literature tends to grow as a function of funding and interest in a given field. Mining such literature can reveal trends that may not be immediately apparent. The CORD-19 corpus represents a growing corpus of scientific literature associated with COVID-19. We examined the intersection of a set of candidate therapeutics identified in a drug-repurposing study with temporal instances of the CORD-19 corpus to determine if it was possible to find and measure changes associated with them over time. We propose that the techniques we used could form the basis of a tool to pre-screen new candidate therapeutics early in the research process.
1803.08207
Tristan Bepler
Tristan Bepler, Andrew Morin, Julia Brasch, Lawrence Shapiro, Alex J. Noble, and Bonnie Berger
Positive-unlabeled convolutional neural networks for particle picking in cryo-electron micrographs
43 pages, 5 main figures, 6 supplemental figures
Nature Methods (2019)
10.1038/s41592-019-0575-8
null
q-bio.QM cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cryo-electron microscopy (cryoEM) is an increasingly popular method for protein structure determination. However, identifying a sufficient number of particles for analysis (often >100,000) can take months of manual effort. Current computational approaches are limited by high false positive rates and require significant ad-hoc post-processing, especially for unusually shaped particles. To address this shortcoming, we develop Topaz, an efficient and accurate particle picking pipeline using neural networks trained with few labeled particles by newly leveraging the remaining unlabeled particles through the framework of positive-unlabeled (PU) learning. Remarkably, despite using minimal labeled particles, Topaz allows us to improve reconstruction resolution by up to 0.15 {\AA} over published particles on three public cryoEM datasets without any post-processing. Furthermore, we show that our novel generalized-expectation criteria approach to PU learning outperforms existing general PU learning approaches when applied to particle detection, especially for challenging datasets of non-globular proteins. We expect Topaz to be an essential component of cryoEM analysis.
[ { "created": "Thu, 22 Mar 2018 02:24:22 GMT", "version": "v1" }, { "created": "Mon, 8 Oct 2018 19:18:18 GMT", "version": "v2" } ]
2019-10-17
[ [ "Bepler", "Tristan", "" ], [ "Morin", "Andrew", "" ], [ "Brasch", "Julia", "" ], [ "Shapiro", "Lawrence", "" ], [ "Noble", "Alex J.", "" ], [ "Berger", "Bonnie", "" ] ]
Cryo-electron microscopy (cryoEM) is an increasingly popular method for protein structure determination. However, identifying a sufficient number of particles for analysis (often >100,000) can take months of manual effort. Current computational approaches are limited by high false positive rates and require significant ad-hoc post-processing, especially for unusually shaped particles. To address this shortcoming, we develop Topaz, an efficient and accurate particle picking pipeline using neural networks trained with few labeled particles by newly leveraging the remaining unlabeled particles through the framework of positive-unlabeled (PU) learning. Remarkably, despite using minimal labeled particles, Topaz allows us to improve reconstruction resolution by up to 0.15 {\AA} over published particles on three public cryoEM datasets without any post-processing. Furthermore, we show that our novel generalized-expectation criteria approach to PU learning outperforms existing general PU learning approaches when applied to particle detection, especially for challenging datasets of non-globular proteins. We expect Topaz to be an essential component of cryoEM analysis.
2102.03669
Katar\'ina Bo\v{d}ov\'a
K. Bodova, E. Szep, N. H. Barton
Dynamic maximum entropy provides accurate approximation of structured population dynamics
14 pages, 4 figures
null
10.1371/journal.pcbi.1009661
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
Realistic models of biological processes typically involve interacting components on multiple scales, driven by changing environment and inherent stochasticity. Such models are often analytically and numerically intractable. We revisit a dynamic maximum entropy method that combines a static maximum entropy and a quasi-stationary approximation. This allows us to reduce stochastic non-equilibrium dynamics expressed by the Fokker-Planck equation to a simpler low-dimensional deterministic dynamics, without the need to track microscopic details. Although the method has been previously applied to a few (rather complicated) applications in population genetics, our main goal here is to explain and to better understand how the method works. We demonstrate the usefulness of the method for two widely studied stochastic problems, highlighting its accuracy in capturing important macroscopic quantities even in rapidly changing non-stationary conditions. For the Ornstein-Uhlenbeck process, the method recovers the exact dynamics whilst for a stochastic island model with migration from other habitats, the approximation retains high macroscopic accuracy under a wide range of scenarios for a dynamic environment.
[ { "created": "Sat, 6 Feb 2021 21:27:52 GMT", "version": "v1" } ]
2022-01-19
[ [ "Bodova", "K.", "" ], [ "Szep", "E.", "" ], [ "Barton", "N. H.", "" ] ]
Realistic models of biological processes typically involve interacting components on multiple scales, driven by changing environment and inherent stochasticity. Such models are often analytically and numerically intractable. We revisit a dynamic maximum entropy method that combines a static maximum entropy and a quasi-stationary approximation. This allows us to reduce stochastic non-equilibrium dynamics expressed by the Fokker-Planck equation to a simpler low-dimensional deterministic dynamics, without the need to track microscopic details. Although the method has been previously applied to a few (rather complicated) applications in population genetics, our main goal here is to explain and to better understand how the method works. We demonstrate the usefulness of the method for two widely studied stochastic problems, highlighting its accuracy in capturing important macroscopic quantities even in rapidly changing non-stationary conditions. For the Ornstein-Uhlenbeck process, the method recovers the exact dynamics whilst for a stochastic island model with migration from other habitats, the approximation retains high macroscopic accuracy under a wide range of scenarios for a dynamic environment.
1901.07536
Evelyn Tang
Evelyn Tang, Harang Ju, Graham L. Baum, David R. Roalf, Theodore D. Satterthwaite, Fabio Pasqualetti and Danielle S. Bassett
The control of brain network dynamics across diverse scales of space and time
12 pages, 7 figures. arXiv admin note: text overlap with arXiv:1607.01010
Phys. Rev. E 101, 062301 (2020)
10.1103/PhysRevE.101.062301
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human brain is composed of distinct regions that are each associated with particular functions and distinct propensities for the control of neural dynamics. However, the relation between these functions and control profiles is poorly understood, as is the variation in this relation across diverse scales of space and time. Here we probe the relation between control and dynamics in brain networks constructed from diffusion tensor imaging data in a large community based sample of young adults. Specifically, we probe the control properties of each brain region and investigate their relationship with dynamics across various spatial scales using the Laplacian eigenspectrum. In addition, through analysis of regional modal controllability and partitioning of modes, we determine whether the associated dynamics are fast or slow, as well as whether they are alternating or monotone. We find that brain regions that facilitate the control of energetically easy transitions are associated with activity on short length scales and slow time scales. Conversely, brain regions that facilitate control of difficult transitions are associated with activity on long length scales and fast time scales. Built on linear dynamical models, our results offer parsimonious explanations for the activity propagation and network control profiles supported by regions of differing neuroanatomical structure.
[ { "created": "Mon, 21 Jan 2019 13:22:19 GMT", "version": "v1" }, { "created": "Thu, 2 May 2019 08:11:43 GMT", "version": "v2" }, { "created": "Mon, 1 Jun 2020 15:42:13 GMT", "version": "v3" } ]
2020-06-02
[ [ "Tang", "Evelyn", "" ], [ "Ju", "Harang", "" ], [ "Baum", "Graham L.", "" ], [ "Roalf", "David R.", "" ], [ "Satterthwaite", "Theodore D.", "" ], [ "Pasqualetti", "Fabio", "" ], [ "Bassett", "Danielle S.", "" ] ]
The human brain is composed of distinct regions that are each associated with particular functions and distinct propensities for the control of neural dynamics. However, the relation between these functions and control profiles is poorly understood, as is the variation in this relation across diverse scales of space and time. Here we probe the relation between control and dynamics in brain networks constructed from diffusion tensor imaging data in a large community based sample of young adults. Specifically, we probe the control properties of each brain region and investigate their relationship with dynamics across various spatial scales using the Laplacian eigenspectrum. In addition, through analysis of regional modal controllability and partitioning of modes, we determine whether the associated dynamics are fast or slow, as well as whether they are alternating or monotone. We find that brain regions that facilitate the control of energetically easy transitions are associated with activity on short length scales and slow time scales. Conversely, brain regions that facilitate control of difficult transitions are associated with activity on long length scales and fast time scales. Built on linear dynamical models, our results offer parsimonious explanations for the activity propagation and network control profiles supported by regions of differing neuroanatomical structure.
1304.1054
Eunjung Kim
Eunjung Kim, Vito Rebecca, Inna V. Fedorenko, Jane L. Messina, Rahel Mathew, Silvya S. Maria-Engler, David Basanta, Keiran S.M. Smalley, Alexander R.A. Anderson
Senescent fibroblasts can drive melanoma initiation and progression
33 pages, 16 figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Skin is one of the largest human organ systems whose primary purpose is the protection of deeper tissues. As such, the skin must maintain a homeostatic balance in the face of many microenvironmental and genetic perturbations. At its simplest, skin homeostasis is maintained by the balance between skin cell growth and death such that skin architecture is preserved. This study presents a hybrid multiscale mathematical model of normal skin (vSkin). The model focuses on key cellular and microenvironmental variables that regulate homeostatic interactions among keratinocytes, melanocytes and fibroblasts, key components of the skin. The model recapitulates normal skin structure, and is robust enough to withstand physical as well as biochemical perturbations. Furthermore, the vSkin model revealed the important role of the skin microenvironment in melanoma initiation and progression. Our experiments showed that dermal fibroblasts, which are an important source of growth factors in the skin, adopt a phenotype that facilitates cancer cell growth and invasion when they become senescent. Based on these experimental results, we incorporated senescent fibroblasts into the vSkin model and showed that senescent fibroblasts transform the skin microenvironment and enhance the growth and invasion of normal melanocytes as well as early stage melanoma cells. These predictions are consistent with our experimental results as well as clinical observations. Our co-culture experiments show that the senescent fibroblasts promote the growth and invasion of non-tumorigenic melanoma cells. We also observed increased proteolytic activity in stromal fields adjacent to melanoma lesions in human histology. Collectively, senescent fibroblasts create a pro-oncogenic environment that synergizes with mutations to drive melanoma initiation and progression and should therefore be considered as a potential future therapeutic target.
[ { "created": "Wed, 3 Apr 2013 18:55:01 GMT", "version": "v1" } ]
2013-04-04
[ [ "Kim", "Eunjung", "" ], [ "Rebecca", "Vito", "" ], [ "Fedorenko", "Inna V.", "" ], [ "Messina", "Jane L.", "" ], [ "Mathew", "Rahel", "" ], [ "Maria-Engler", "Silvya S.", "" ], [ "Basanta", "David", "" ], [ "Smalley", "Keiran S. M.", "" ], [ "Anderson", "Alexander R. A.", "" ] ]
Skin is one of the largest human organ systems whose primary purpose is the protection of deeper tissues. As such, the skin must maintain a homeostatic balance in the face of many microenvironmental and genetic perturbations. At its simplest, skin homeostasis is maintained by the balance between skin cell growth and death such that skin architecture is preserved. This study presents a hybrid multiscale mathematical model of normal skin (vSkin). The model focuses on key cellular and microenvironmental variables that regulate homeostatic interactions among keratinocytes, melanocytes and fibroblasts, key components of the skin. The model recapitulates normal skin structure, and is robust enough to withstand physical as well as biochemical perturbations. Furthermore, the vSkin model revealed the important role of the skin microenvironment in melanoma initiation and progression. Our experiments showed that dermal fibroblasts, which are an important source of growth factors in the skin, adopt a phenotype that facilitates cancer cell growth and invasion when they become senescent. Based on these experimental results, we incorporated senescent fibroblasts into the vSkin model and showed that senescent fibroblasts transform the skin microenvironment and enhance the growth and invasion of normal melanocytes as well as early stage melanoma cells. These predictions are consistent with our experimental results as well as clinical observations. Our co-culture experiments show that the senescent fibroblasts promote the growth and invasion of non-tumorigenic melanoma cells. We also observed increased proteolytic activity in stromal fields adjacent to melanoma lesions in human histology. Collectively, senescent fibroblasts create a pro-oncogenic environment that synergizes with mutations to drive melanoma initiation and progression and should therefore be considered as a potential future therapeutic target.
2305.15295
Marcelo P. Becker
Marcelo P. Becker and Marco A. P. Idiart
A Mean-Field Method for Generic Conductance-Based Integrate-and-Fire Neurons with Finite Timescales
11 pages, 6 figures, research article
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The construction of transfer functions in theoretical neuroscience plays an important role in determining the spiking rate behavior of neurons in networks. These functions can be obtained through various fitting methods, but the biological relevance of the parameters is not always clear. However, for stationary inputs, such functions can be obtained without the adjustment of free parameters by using mean-field methods. In this work, we expand current Fokker-Planck approaches to account for the concurrent influence of colored and multiplicative noise terms on generic conductance-based integrate-and-fire neurons. We reduce the resulting stochastic system from the application of the diffusion approximation to a one-dimensional Langevin equation. An effective Fokker-Planck is then constructed using Fox Theory, which is solved numerically to obtain the transfer function. The solution is capable of reproducing the transfer function behavior of simulated neurons across a wide range of parameters. The method can also be easily extended to account for different sources of noise with various multiplicative terms, and it can be used in other types of problems in principle.
[ { "created": "Wed, 24 May 2023 16:18:35 GMT", "version": "v1" } ]
2023-05-25
[ [ "Becker", "Marcelo P.", "" ], [ "Idiart", "Marco A. P.", "" ] ]
The construction of transfer functions in theoretical neuroscience plays an important role in determining the spiking rate behavior of neurons in networks. These functions can be obtained through various fitting methods, but the biological relevance of the parameters is not always clear. However, for stationary inputs, such functions can be obtained without the adjustment of free parameters by using mean-field methods. In this work, we expand current Fokker-Planck approaches to account for the concurrent influence of colored and multiplicative noise terms on generic conductance-based integrate-and-fire neurons. We reduce the resulting stochastic system from the application of the diffusion approximation to a one-dimensional Langevin equation. An effective Fokker-Planck is then constructed using Fox Theory, which is solved numerically to obtain the transfer function. The solution is capable of reproducing the transfer function behavior of simulated neurons across a wide range of parameters. The method can also be easily extended to account for different sources of noise with various multiplicative terms, and it can be used in other types of problems in principle.
2401.04149
Iker Malaina
Virginia del Campo, Iker Malaina
Contrast-Enhanced Magnetic Resonance Imaging in Breast Cancer
9 pages, text in Spanish
null
null
null
q-bio.OT eess.IV physics.med-ph
http://creativecommons.org/licenses/by/4.0/
In this study, 529 variables extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of 922 breast cancer patients have been evaluated, focusing on distinguishing between recurrent and non-recurrent cases, as well as those with and without metastasis. Special emphasis is placed on the differences among invasive breast cancer subtypes (Luminal A, Luminal B, HER2 positive, and TNBC). The accurate identification of the subtype is crucial as it impacts both treatment and prognosis. The analysis is based on the dataset from Saha et al., highlighting key factors for predicting recurrences and metastases, providing valuable information for proper monitoring and the selection of effective treatments.
[ { "created": "Mon, 8 Jan 2024 12:30:26 GMT", "version": "v1" } ]
2024-01-10
[ [ "del Campo", "Virginia", "" ], [ "Malaina", "Iker", "" ] ]
In this study, 529 variables extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of 922 breast cancer patients have been evaluated, focusing on distinguishing between recurrent and non-recurrent cases, as well as those with and without metastasis. Special emphasis is placed on the differences among invasive breast cancer subtypes (Luminal A, Luminal B, HER2 positive, and TNBC). The accurate identification of the subtype is crucial as it impacts both treatment and prognosis. The analysis is based on the dataset from Saha et al., highlighting key factors for predicting recurrences and metastases, providing valuable information for proper monitoring and the selection of effective treatments.
1805.09758
Vince Grolmusz
Kristof Takacs, Balint Varga, and Vince Grolmusz
PDB_Amyloid: The Extended Live Amyloid Structure List from the PDB
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Protein Data Bank (PDB) contains more than 135,000 entries today. From these, relatively few amyloid structures can be identified, since amyloids are insoluble in water; therefore, mostly solid-state NMR-recorded amyloid structures are deposited in the PDB. Based on the geometric analysis of these deposited structures, we have prepared an automatically updated webserver that generates the list of deposited amyloid structures and, additionally, those globular protein entries that have amyloid-like substructures of a given size and characteristics. We have found that, by applying only properly chosen geometric conditions, it is possible to identify the deposited amyloid structures and a number of globular proteins with amyloid-like substructures. We have analyzed these globular proteins and found that many of them are known to form amyloids more easily than many other globular proteins. Our results relate to the method of (Stankovic, I. et al. (2017): Construction of Amyloid PDB Files Database. Transactions on Internet Research. 13 (1): 47-51), who applied a hybrid textual-search and geometric approach for finding amyloids in the PDB. If one intends to identify a subset of the PDB for some application, the identification algorithm needs to be re-run periodically, since in 2017 an average of 30 new entries were deposited in the data bank every day. Our webserver is updated regularly and automatically, and the identified amyloid and partial-amyloid structures can be viewed, or their list downloaded, at https://pitgroup.org/amyloid.
[ { "created": "Thu, 24 May 2018 16:21:55 GMT", "version": "v1" }, { "created": "Fri, 25 May 2018 16:56:03 GMT", "version": "v2" } ]
2018-05-28
[ [ "Takacs", "Kristof", "" ], [ "Varga", "Balint", "" ], [ "Grolmusz", "Vince", "" ] ]
The Protein Data Bank (PDB) contains more than 135,000 entries today. From these, relatively few amyloid structures can be identified, since amyloids are insoluble in water; therefore, mostly solid-state NMR-recorded amyloid structures are deposited in the PDB. Based on the geometric analysis of these deposited structures, we have prepared an automatically updated webserver that generates the list of deposited amyloid structures and, additionally, those globular protein entries that have amyloid-like substructures of a given size and characteristics. We have found that, by applying only properly chosen geometric conditions, it is possible to identify the deposited amyloid structures and a number of globular proteins with amyloid-like substructures. We have analyzed these globular proteins and found that many of them are known to form amyloids more easily than many other globular proteins. Our results relate to the method of (Stankovic, I. et al. (2017): Construction of Amyloid PDB Files Database. Transactions on Internet Research. 13 (1): 47-51), who applied a hybrid textual-search and geometric approach for finding amyloids in the PDB. If one intends to identify a subset of the PDB for some application, the identification algorithm needs to be re-run periodically, since in 2017 an average of 30 new entries were deposited in the data bank every day. Our webserver is updated regularly and automatically, and the identified amyloid and partial-amyloid structures can be viewed, or their list downloaded, at https://pitgroup.org/amyloid.
0912.3093
Vasily Ogryzko V
D. Parkhomchuk, V.S. Amstislavskiy, A. Soldatov, V. Ogryzko
Use of high throughput sequencing to observe genome dynamics at a single cell level
22 pages, 9 figures (including 5 supplementary), one table
null
10.1073/pnas.0906681106
null
q-bio.GN q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the development of high throughput sequencing technology, it becomes possible to directly analyze mutation distribution in a genome-wide fashion, dissociating mutation rate measurements from the traditional underlying assumptions. Here, we sequenced several genomes of Escherichia coli from colonies obtained after chemical mutagenesis and observed a strikingly nonrandom distribution of the induced mutations. These include long stretches of exclusively G to A or C to T transitions along the genome and orders of magnitude intra- and inter-genomic differences in mutation density. Whereas most of these observations can be explained by the known features of enzymatic processes, the others could reflect stochasticity in the molecular processes at the single-cell level. Our results demonstrate how analysis of the molecular records left in the genomes of the descendants of an individual mutagenized cell allows for genome-scale observations of fixation and segregation of mutations, as well as recombination events, in the single genome of their progenitor.
[ { "created": "Wed, 16 Dec 2009 10:29:31 GMT", "version": "v1" } ]
2015-05-14
[ [ "Parkhomchuk", "D.", "" ], [ "Amstislavskiy", "V. S.", "" ], [ "Soldatov", "A.", "" ], [ "Ogryzko", "V.", "" ] ]
With the development of high throughput sequencing technology, it becomes possible to directly analyze mutation distribution in a genome-wide fashion, dissociating mutation rate measurements from the traditional underlying assumptions. Here, we sequenced several genomes of Escherichia coli from colonies obtained after chemical mutagenesis and observed a strikingly nonrandom distribution of the induced mutations. These include long stretches of exclusively G to A or C to T transitions along the genome and orders of magnitude intra- and inter-genomic differences in mutation density. Whereas most of these observations can be explained by the known features of enzymatic processes, the others could reflect stochasticity in the molecular processes at the single-cell level. Our results demonstrate how analysis of the molecular records left in the genomes of the descendants of an individual mutagenized cell allows for genome-scale observations of fixation and segregation of mutations, as well as recombination events, in the single genome of their progenitor.
2210.01768
James Whittington
James C.R. Whittington, Will Dorrell, Surya Ganguli, Timothy E.J. Behrens
Disentanglement with Biological Constraints: A Theory of Functional Cell Types
null
null
null
null
q-bio.NC cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
Neurons in the brain are often finely tuned for specific task variables. Moreover, such disentangled representations are highly sought after in machine learning. Here we mathematically prove that simple biological constraints on neurons, namely nonnegativity and energy efficiency in both activity and weights, promote such sought-after disentangled representations by forcing neurons to become selective for single factors of task variation. We demonstrate that these constraints lead to disentanglement in a variety of tasks and architectures, including variational autoencoders. We also use this theory to explain why the brain partitions its cells into distinct cell types, such as grid and object-vector cells, and to explain when the brain instead entangles representations in response to entangled task factors. Overall, this work provides a mathematical understanding of why single neurons in the brain often represent single human-interpretable factors, and takes steps towards an understanding of how task structure shapes the structure of brain representations.
[ { "created": "Fri, 30 Sep 2022 14:27:28 GMT", "version": "v1" }, { "created": "Fri, 31 Mar 2023 18:41:15 GMT", "version": "v2" } ]
2023-04-04
[ [ "Whittington", "James C. R.", "" ], [ "Dorrell", "Will", "" ], [ "Ganguli", "Surya", "" ], [ "Behrens", "Timothy E. J.", "" ] ]
Neurons in the brain are often finely tuned for specific task variables. Moreover, such disentangled representations are highly sought after in machine learning. Here we mathematically prove that simple biological constraints on neurons, namely nonnegativity and energy efficiency in both activity and weights, promote such sought-after disentangled representations by forcing neurons to become selective for single factors of task variation. We demonstrate that these constraints lead to disentanglement in a variety of tasks and architectures, including variational autoencoders. We also use this theory to explain why the brain partitions its cells into distinct cell types, such as grid and object-vector cells, and to explain when the brain instead entangles representations in response to entangled task factors. Overall, this work provides a mathematical understanding of why single neurons in the brain often represent single human-interpretable factors, and takes steps towards an understanding of how task structure shapes the structure of brain representations.
2307.00295
Sedigheh Behrouzifar
Sedigheh Behrouzifar
Comparative study of under-expressed prognostic biomarkers and pivotal signaling pathways in colon cancer and ulcerative colitis using integrated bioinformatics approach
16 pages,5 figures
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-nd/4.0/
Colon cancer is a prevalent gastrointestinal malignancy arising in the colon. Ulcerative colitis (UC) is one of the risk factors of colorectal cancer. The detection of under-expressed biomarkers and molecular mechanisms in UC and colon cancer can lead to effective management of colitis-associated cancer. Two mRNA expression datasets (GSE87473 and GSE44076) were downloaded from the Gene Expression Omnibus (GEO) database. GEO2R was used to screen differentially expressed genes (DEGs) between extensive ulcerative colitis samples and healthy samples, limited ulcerative colitis samples and healthy samples, and colon cancer samples and healthy samples. In the extensive ulcerative colitis, limited ulcerative colitis, and colon cancer groups, 95, 69, and 635 under-expressed genes with adjusted p-value < 0.05 and log2 fold change < -2 were detected, respectively. Using Cytoscape software, the genes with degree > 15, including CLCA1, SLC26A3, SI, KIT, HPGDS, NR1H4, ADIPOQ, PPARGC1A, GCG, MS4A12, GUCA2A, and FABP1, were screened as hub under-expressed genes in colon cancer. In extensive ulcerative colitis, the genes with degree > 5, including ABCB1, ABCG2, UGT1A6, CYP2B6, and AQP8, were identified as hub genes. Moreover, the genes NR1H4, CYP2B6, ABCB1, ABCG2, UGT2A3, and PLA2G12B were detected as hub genes with degree > 5 in limited ulcerative colitis. According to the inclusion criteria and Venn diagram, the downregulated gene NR1H4 was a common gene in limited ulcerative colitis and colon cancer. The current in silico study showed that downregulation of the CLCA1, PPARGC1A, and AQP8 genes may increase cancer cell invasion and metastasis ability.
[ { "created": "Sat, 1 Jul 2023 10:31:02 GMT", "version": "v1" } ]
2023-07-04
[ [ "Behrouzifar", "Sedigheh", "" ] ]
Colon cancer is a prevalent gastrointestinal malignancy arising in the colon. Ulcerative colitis (UC) is one of the risk factors of colorectal cancer. The detection of under-expressed biomarkers and molecular mechanisms in UC and colon cancer can lead to effective management of colitis-associated cancer. Two mRNA expression datasets (GSE87473 and GSE44076) were downloaded from the Gene Expression Omnibus (GEO) database. GEO2R was used to screen differentially expressed genes (DEGs) between extensive ulcerative colitis samples and healthy samples, limited ulcerative colitis samples and healthy samples, and colon cancer samples and healthy samples. In the extensive ulcerative colitis, limited ulcerative colitis, and colon cancer groups, 95, 69, and 635 under-expressed genes with adjusted p-value < 0.05 and log2 fold change < -2 were detected, respectively. Using Cytoscape software, the genes with degree > 15, including CLCA1, SLC26A3, SI, KIT, HPGDS, NR1H4, ADIPOQ, PPARGC1A, GCG, MS4A12, GUCA2A, and FABP1, were screened as hub under-expressed genes in colon cancer. In extensive ulcerative colitis, the genes with degree > 5, including ABCB1, ABCG2, UGT1A6, CYP2B6, and AQP8, were identified as hub genes. Moreover, the genes NR1H4, CYP2B6, ABCB1, ABCG2, UGT2A3, and PLA2G12B were detected as hub genes with degree > 5 in limited ulcerative colitis. According to the inclusion criteria and Venn diagram, the downregulated gene NR1H4 was a common gene in limited ulcerative colitis and colon cancer. The current in silico study showed that downregulation of the CLCA1, PPARGC1A, and AQP8 genes may increase cancer cell invasion and metastasis ability.
2312.00842
Wenwu Zeng
Wenwu Zeng, Dafeng Lv, Wenjuan Liu, Shaoliang Peng
ESM-NBR: fast and accurate nucleic acid-binding residue prediction via protein language model feature representation and multi-task learning
null
null
10.1109/BIBM58861.2023.10385509
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein-nucleic acid interactions play a very important role in a variety of biological activities. Accurate identification of nucleic acid-binding residues is a critical step in understanding the interaction mechanisms. Although many computational methods have been developed to predict nucleic acid-binding residues, challenges remain. In this study, a fast and accurate sequence-based method, called ESM-NBR, is proposed. In ESM-NBR, we first use the large protein language model ESM2 to extract discriminative feature representations of biological properties from protein primary sequences; then, a multi-task deep learning model composed of stacked bidirectional long short-term memory (BiLSTM) and multi-layer perceptron (MLP) networks is employed to explore the common and private information of DNA- and RNA-binding residues with the ESM2 features as input. Experimental results on benchmark data sets demonstrate that the prediction performance of the ESM2 feature representation comprehensively outperforms evolutionary-information-based hidden Markov model (HMM) features. Meanwhile, ESM-NBR obtains MCC values for DNA-binding residue prediction of 0.427 and 0.391 on two independent test sets, which are 18.61% and 10.45% higher than those of the second-best methods, respectively. Moreover, by completely discarding the time-consuming multiple sequence alignment process, the prediction speed of ESM-NBR far exceeds that of existing methods (5.52 s for a protein sequence of length 500, about 16 times faster than the second-fastest method). A user-friendly standalone package and the data of ESM-NBR are freely available for academic use at: https://github.com/wwzll123/ESM-NBR.
[ { "created": "Fri, 1 Dec 2023 04:00:20 GMT", "version": "v1" } ]
2024-01-23
[ [ "Zeng", "Wenwu", "" ], [ "Lv", "Dafeng", "" ], [ "Liu", "Wenjuan", "" ], [ "Peng", "Shaoliang", "" ] ]
Protein-nucleic acid interactions play a very important role in a variety of biological activities. Accurate identification of nucleic acid-binding residues is a critical step in understanding the interaction mechanisms. Although many computational methods have been developed to predict nucleic acid-binding residues, challenges remain. In this study, a fast and accurate sequence-based method, called ESM-NBR, is proposed. In ESM-NBR, we first use the large protein language model ESM2 to extract discriminative feature representations of biological properties from protein primary sequences; then, a multi-task deep learning model composed of stacked bidirectional long short-term memory (BiLSTM) and multi-layer perceptron (MLP) networks is employed to explore the common and private information of DNA- and RNA-binding residues with the ESM2 features as input. Experimental results on benchmark data sets demonstrate that the prediction performance of the ESM2 feature representation comprehensively outperforms evolutionary-information-based hidden Markov model (HMM) features. Meanwhile, ESM-NBR obtains MCC values for DNA-binding residue prediction of 0.427 and 0.391 on two independent test sets, which are 18.61% and 10.45% higher than those of the second-best methods, respectively. Moreover, by completely discarding the time-consuming multiple sequence alignment process, the prediction speed of ESM-NBR far exceeds that of existing methods (5.52 s for a protein sequence of length 500, about 16 times faster than the second-fastest method). A user-friendly standalone package and the data of ESM-NBR are freely available for academic use at: https://github.com/wwzll123/ESM-NBR.
2204.07162
Victor Delvigne
Victor Delvigne, Hazem Wannous, Jean-Philippe Vandeborre, Laurence Ris, Thierry Dutoit
Spatio-Temporal Analysis of Transformer based Architecture for Attention Estimation from EEG
null
null
null
null
q-bio.NC cs.AI cs.LG eess.SP
http://creativecommons.org/licenses/by/4.0/
For many years now, understanding brain mechanisms has been a major research subject in many different fields. Brain signal processing, and especially electroencephalography (EEG), has recently attracted growing interest both in academia and industry. One of the main examples is the increasing number of Brain-Computer Interfaces (BCI) aiming to link brains and computers. In this paper, we present a novel framework allowing us to retrieve the attention state, i.e., the degree of attention given to a specific task, from EEG signals. While previous methods often consider the spatial relationship in EEG through electrodes and process them in recurrent- or convolutional-based architectures, we propose here to also exploit the spatial and temporal information with a transformer-based network that has already shown its supremacy in many machine-learning (ML) related studies, e.g. machine translation. In addition to this novel architecture, an extensive study of feature extraction methods, frequency bands, and temporal window lengths has also been carried out. The proposed network has been trained and validated on two public datasets and achieves higher results compared to state-of-the-art models. Beyond achieving better results, the framework could be used in real applications, e.g. monitoring Attention Deficit Hyperactivity Disorder (ADHD) symptoms or vigilance during a driving assessment.
[ { "created": "Mon, 4 Apr 2022 08:05:33 GMT", "version": "v1" } ]
2022-04-18
[ [ "Delvigne", "Victor", "" ], [ "Wannous", "Hazem", "" ], [ "Vandeborre", "Jean-Philippe", "" ], [ "Ris", "Laurence", "" ], [ "Dutoit", "Thierry", "" ] ]
For many years now, understanding brain mechanisms has been a major research subject in many different fields. Brain signal processing, and especially electroencephalography (EEG), has recently attracted growing interest both in academia and industry. One of the main examples is the increasing number of Brain-Computer Interfaces (BCI) aiming to link brains and computers. In this paper, we present a novel framework allowing us to retrieve the attention state, i.e., the degree of attention given to a specific task, from EEG signals. While previous methods often consider the spatial relationship in EEG through electrodes and process them in recurrent- or convolutional-based architectures, we propose here to also exploit the spatial and temporal information with a transformer-based network that has already shown its supremacy in many machine-learning (ML) related studies, e.g. machine translation. In addition to this novel architecture, an extensive study of feature extraction methods, frequency bands, and temporal window lengths has also been carried out. The proposed network has been trained and validated on two public datasets and achieves higher results compared to state-of-the-art models. Beyond achieving better results, the framework could be used in real applications, e.g. monitoring Attention Deficit Hyperactivity Disorder (ADHD) symptoms or vigilance during a driving assessment.
2006.12077
William Waites
William Waites, Matteo Cavaliere, David Manheim, Jasmina Panovska-Griffiths, Vincent Danos
Rule-based epidemic models
null
null
10.1016/j.jtbi.2021.110851
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper gives an introduction to rule-based modelling applied to topics in infectious diseases. Rule-based models generalise reaction-based models with reagents that have internal state and may be bound together to form complexes, as in chemistry. Rule-based modelling is directly transferable from molecular biology to epidemiology and allows us to express a broad class of models for processes of interest in epidemiology that would not otherwise be feasible in compartmental models. This includes dynamics commonly found in compartmental models, such as the spread of a virus from an infectious to a susceptible population, and more complex dynamics outside the typical scope of such models, such as social behaviours and decision-making, testing capacity constraints, and tracing of people exposed to a virus but not yet symptomatic. We propose that such dynamics are well captured with rule-based models, and that doing so combines intuitiveness and transparency of representation with scalability and compositionality. We demonstrate the feasibility of our approach using a suite of seven models describing the spread of infectious diseases under different scenarios: wearing masks, infection via fomites and prevention by hand-washing, the concept of vector-borne diseases, testing and contact tracing interventions, disease propagation within motif-structured populations with shared environments such as schools, and superspreading events. The machine-readable description of these models corresponds closely to the mathematical description and also functions as a human-readable format, so that one knows readily "what is in the model".
[ { "created": "Mon, 22 Jun 2020 08:52:22 GMT", "version": "v1" }, { "created": "Tue, 23 Jun 2020 07:30:22 GMT", "version": "v2" }, { "created": "Sat, 27 Jun 2020 18:05:32 GMT", "version": "v3" }, { "created": "Fri, 26 Feb 2021 15:00:27 GMT", "version": "v4" } ]
2021-08-10
[ [ "Waites", "William", "" ], [ "Cavaliere", "Matteo", "" ], [ "Manheim", "David", "" ], [ "Panovska-Griffiths", "Jasmina", "" ], [ "Danos", "Vincent", "" ] ]
This paper gives an introduction to rule-based modelling applied to topics in infectious diseases. Rule-based models generalise reaction-based models with reagents that have internal state and may be bound together to form complexes, as in chemistry. Rule-based modelling is directly transferable from molecular biology to epidemiology and allows us to express a broad class of models for processes of interest in epidemiology that would not otherwise be feasible in compartmental models. This includes dynamics commonly found in compartmental models, such as the spread of a virus from an infectious to a susceptible population, and more complex dynamics outside the typical scope of such models, such as social behaviours and decision-making, testing capacity constraints, and tracing of people exposed to a virus but not yet symptomatic. We propose that such dynamics are well captured with rule-based models, and that doing so combines intuitiveness and transparency of representation with scalability and compositionality. We demonstrate the feasibility of our approach using a suite of seven models describing the spread of infectious diseases under different scenarios: wearing masks, infection via fomites and prevention by hand-washing, the concept of vector-borne diseases, testing and contact tracing interventions, disease propagation within motif-structured populations with shared environments such as schools, and superspreading events. The machine-readable description of these models corresponds closely to the mathematical description and also functions as a human-readable format, so that one knows readily "what is in the model".
1311.1850
Ron Nielsen
Ron W Nielsen aka Jan Nurzynski
Impacts of demographic catastrophes
19 pages, 1 table
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analysis of demographic catastrophes shows that, with the exception of perhaps only two critical events, they were too weak to influence the growth of human population. These results reinforce the conclusion that the concept of the Epoch of Malthusian Stagnation, the alleged first stage of growth claimed by the Demographic Transition Theory, is not supported by empirical evidence. They show that even if we assume that Malthusian positive checks are capable of suppressing the growth of population their impact was too weak to create the Epoch of Malthusian Stagnation.
[ { "created": "Fri, 8 Nov 2013 00:08:32 GMT", "version": "v1" } ]
2013-11-11
[ [ "Nurzynski", "Ron W Nielsen aka Jan", "" ] ]
Analysis of demographic catastrophes shows that, with the exception of perhaps only two critical events, they were too weak to influence the growth of human population. These results reinforce the conclusion that the concept of the Epoch of Malthusian Stagnation, the alleged first stage of growth claimed by the Demographic Transition Theory, is not supported by empirical evidence. They show that even if we assume that Malthusian positive checks are capable of suppressing the growth of population their impact was too weak to create the Epoch of Malthusian Stagnation.
2207.12805
Aldo Pacchiano
Aldo Pacchiano, Drausin Wulsin, Robert A. Barton, Luis Voloch
Neural Design for Genetic Perturbation Experiments
22 pages main, 15 pages appendix
null
null
null
q-bio.QM cs.AI cs.LG q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of how to genetically modify cells in order to maximize a certain cellular phenotype has taken center stage in drug development over the last few years (with, for example, genetically edited CAR-T, CAR-NK, and CAR-NKT cells entering cancer clinical trials). Exhausting the search space for all possible genetic edits (perturbations) or combinations thereof is infeasible due to cost and experimental limitations. This work provides a theoretically sound framework for iteratively exploring the space of perturbations in pooled batches in order to maximize a target phenotype under an experimental budget. Inspired by this application domain, we study the problem of batch query bandit optimization and introduce the Optimistic Arm Elimination ($\mathrm{OAE}$) principle designed to find an almost optimal arm under different functional relationships between the queries (arms) and the outputs (rewards). We analyze the convergence properties of $\mathrm{OAE}$ by relating it to the Eluder dimension of the algorithm's function class and validate that $\mathrm{OAE}$ outperforms other strategies in finding optimal actions in experiments on simulated problems, public datasets well-studied in bandit contexts, and in genetic perturbation datasets when the regression model is a deep neural network. OAE also outperforms the benchmark algorithms in 3 of 4 datasets in the GeneDisco experimental planning challenge.
[ { "created": "Tue, 26 Jul 2022 10:59:52 GMT", "version": "v1" }, { "created": "Thu, 2 Feb 2023 14:18:54 GMT", "version": "v2" } ]
2023-02-03
[ [ "Pacchiano", "Aldo", "" ], [ "Wulsin", "Drausin", "" ], [ "Barton", "Robert A.", "" ], [ "Voloch", "Luis", "" ] ]
The problem of how to genetically modify cells in order to maximize a certain cellular phenotype has taken center stage in drug development over the last few years (with, for example, genetically edited CAR-T, CAR-NK, and CAR-NKT cells entering cancer clinical trials). Exhausting the search space for all possible genetic edits (perturbations) or combinations thereof is infeasible due to cost and experimental limitations. This work provides a theoretically sound framework for iteratively exploring the space of perturbations in pooled batches in order to maximize a target phenotype under an experimental budget. Inspired by this application domain, we study the problem of batch query bandit optimization and introduce the Optimistic Arm Elimination ($\mathrm{OAE}$) principle designed to find an almost optimal arm under different functional relationships between the queries (arms) and the outputs (rewards). We analyze the convergence properties of $\mathrm{OAE}$ by relating it to the Eluder dimension of the algorithm's function class and validate that $\mathrm{OAE}$ outperforms other strategies in finding optimal actions in experiments on simulated problems, public datasets well-studied in bandit contexts, and in genetic perturbation datasets when the regression model is a deep neural network. OAE also outperforms the benchmark algorithms in 3 of 4 datasets in the GeneDisco experimental planning challenge.
2301.09987
Su Hyeong Lee
Su Hyeong Lee
Chemical Integration of ODEs using Idealized Abstract Solutions
null
null
null
null
q-bio.MN math.DS
http://creativecommons.org/licenses/by/4.0/
In this work, we propose a general inversion framework to non-uniquely invert a very large class of ordinary differential equations (ODEs) into chemical reaction networks. A thorough treatment of the relevant chemical reaction network theory from the literature is given. Various simulation results are provided to augment the selection procedure for the inverse framework, where a previously known kineticization strategy is shown to be deterministically excellent but undesirable in chemical simulations. The utility of the framework is verified by simulating reaction network forms of meaningful ODE systems, and their time series are analyzed. In particular, we provide simulations of deterministic chaotic attractors whose newly discovered reaction networks are non-equivalent with any existing chemical interpretations within the literature, as well as presenting exemplary figures which may form a roadmap to the successful biochemical implementation of the integration of ODE systems.
[ { "created": "Mon, 23 Jan 2023 17:14:00 GMT", "version": "v1" } ]
2023-01-25
[ [ "Lee", "Su Hyeong", "" ] ]
In this work, we propose a general inversion framework to non-uniquely invert a very large class of ordinary differential equations (ODEs) into chemical reaction networks. A thorough treatment of the relevant chemical reaction network theory from the literature is given. Various simulation results are provided to augment the selection procedure for the inverse framework, where a previously known kineticization strategy is shown to be deterministically excellent but undesirable in chemical simulations. The utility of the framework is verified by simulating reaction network forms of meaningful ODE systems, and their time series are analyzed. In particular, we provide simulations of deterministic chaotic attractors whose newly discovered reaction networks are non-equivalent with any existing chemical interpretations within the literature, as well as presenting exemplary figures which may form a roadmap to the successful biochemical implementation of the integration of ODE systems.
2105.06222
David Hahn
David F. Hahn, Christopher I. Bayly, Hannah E. Bruce Macdonald, John D. Chodera, Vytautas Gapsys, Antonia S. J. S. Mey, David L. Mobley, Laura Perez Benito, Christina E.M. Schindler, Gary Tresadern, Gregory L. Warren
Best practices for constructing, preparing, and evaluating protein-ligand binding affinity benchmarks
null
null
10.33011/livecoms.4.1.1497
null
q-bio.BM physics.bio-ph physics.chem-ph
http://creativecommons.org/licenses/by/4.0/
Free energy calculations are rapidly becoming indispensable in structure-enabled drug discovery programs. As new methods, force fields, and implementations are developed, assessing their expected accuracy on real-world systems (benchmarking) becomes critical to provide users with an assessment of the accuracy expected when these methods are applied within their domain of applicability, and developers with a way to assess the expected impact of new methodologies. These assessments require construction of a benchmark - a set of well-prepared, high quality systems with corresponding experimental measurements designed to ensure the resulting calculations provide a realistic assessment of expected performance when these methods are deployed within their domains of applicability. To date, the community has not yet adopted a common standardized benchmark, and existing benchmark reports suffer from a myriad of issues, including poor data quality, limited statistical power, and statistically deficient analyses, all of which can conspire to produce benchmarks that are poorly predictive of real-world performance. Here, we address these issues by presenting guidelines for (1) curating experimental data to develop meaningful benchmark sets, (2) preparing benchmark inputs according to best practices to facilitate widespread adoption, and (3) analyzing the resulting predictions to enable statistically meaningful comparisons among methods and force fields.
[ { "created": "Thu, 13 May 2021 12:22:49 GMT", "version": "v1" }, { "created": "Fri, 12 Nov 2021 11:19:33 GMT", "version": "v2" } ]
2023-03-30
[ [ "Hahn", "David F.", "" ], [ "Bayly", "Christopher I.", "" ], [ "Macdonald", "Hannah E. Bruce", "" ], [ "Chodera", "John D.", "" ], [ "Gapsys", "Vytautas", "" ], [ "Mey", "Antonia S. J. S.", "" ], [ "Mobley", "David L.", "" ], [ "Benito", "Laura Perez", "" ], [ "Schindler", "Christina E. M.", "" ], [ "Tresadern", "Gary", "" ], [ "Warren", "Gregory L.", "" ] ]
Free energy calculations are rapidly becoming indispensable in structure-enabled drug discovery programs. As new methods, force fields, and implementations are developed, assessing their expected accuracy on real-world systems (benchmarking) becomes critical to provide users with an assessment of the accuracy expected when these methods are applied within their domain of applicability, and developers with a way to assess the expected impact of new methodologies. These assessments require construction of a benchmark - a set of well-prepared, high quality systems with corresponding experimental measurements designed to ensure the resulting calculations provide a realistic assessment of expected performance when these methods are deployed within their domains of applicability. To date, the community has not yet adopted a common standardized benchmark, and existing benchmark reports suffer from a myriad of issues, including poor data quality, limited statistical power, and statistically deficient analyses, all of which can conspire to produce benchmarks that are poorly predictive of real-world performance. Here, we address these issues by presenting guidelines for (1) curating experimental data to develop meaningful benchmark sets, (2) preparing benchmark inputs according to best practices to facilitate widespread adoption, and (3) analyzing the resulting predictions to enable statistically meaningful comparisons among methods and force fields.
1705.09205
Ernest Montbrio
Federico Devalle, Alex Roxin, Ernest Montbri\'o
Firing rate equations require a spike synchrony mechanism to correctly describe fast oscillations in inhibitory networks
null
PLoS Comput Biol 13(12): e1005881 (2017)
10.1371/journal.pcbi.1005881
null
q-bio.NC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recurrently coupled networks of inhibitory neurons robustly generate oscillations in the gamma band. Nonetheless, the corresponding Wilson-Cowan type firing rate equation for such an inhibitory population does not generate such oscillations without an explicit time delay. We show that this discrepancy is due to a voltage-dependent spike-synchronization mechanism inherent in networks of spiking neurons which is not captured by standard firing rate equations. Here we investigate an exact low-dimensional description for a network of heterogeneous canonical type-I inhibitory neurons which includes the sub-threshold dynamics crucial for generating synchronous states. In the limit of slow synaptic kinetics the spike-synchrony mechanism is suppressed and the standard Wilson-Cowan equations are formally recovered as long as external inputs are also slow. However, even in this limit synchronous spiking can be elicited by inputs which fluctuate on a time-scale of the membrane time-constant of the neurons. Our mean-field equations therefore represent an extension of the standard Wilson-Cowan equations in which spike synchrony is also correctly described.
[ { "created": "Thu, 25 May 2017 14:48:31 GMT", "version": "v1" }, { "created": "Tue, 30 May 2017 06:31:31 GMT", "version": "v2" }, { "created": "Fri, 5 Jan 2018 14:10:07 GMT", "version": "v3" } ]
2018-01-08
[ [ "Devalle", "Federico", "" ], [ "Roxin", "Alex", "" ], [ "Montbrió", "Ernest", "" ] ]
Recurrently coupled networks of inhibitory neurons robustly generate oscillations in the gamma band. Nonetheless, the corresponding Wilson-Cowan type firing rate equation for such an inhibitory population does not generate such oscillations without an explicit time delay. We show that this discrepancy is due to a voltage-dependent spike-synchronization mechanism inherent in networks of spiking neurons which is not captured by standard firing rate equations. Here we investigate an exact low-dimensional description for a network of heterogeneous canonical type-I inhibitory neurons which includes the sub-threshold dynamics crucial for generating synchronous states. In the limit of slow synaptic kinetics the spike-synchrony mechanism is suppressed and the standard Wilson-Cowan equations are formally recovered as long as external inputs are also slow. However, even in this limit synchronous spiking can be elicited by inputs which fluctuate on a time-scale of the membrane time-constant of the neurons. Our mean-field equations therefore represent an extension of the standard Wilson-Cowan equations in which spike synchrony is also correctly described.
1911.12124
Ana Maria Triana
A.M. Triana, E. Glerean, J. Saram\"aki, O. Korhonen
Effects of spatial smoothing on group-level differences in functional brain networks
17 pages, 4 main figures, 44 supplementary figures, 15 supplementary tables. Supplementary information available in https://doi.org/10.5281/zenodo.3671882
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-sa/4.0/
Brain connectivity with functional Magnetic Resonance Imaging (fMRI) is a popular approach for detecting differences between healthy and clinical populations. Before creating a functional brain network, the fMRI time series must undergo several preprocessing steps to control for artifacts and to improve data quality. However, preprocessing may affect the results in an undesirable way. Spatial smoothing, for example, is known to alter functional network structure. Yet, its effects on group-level network differences remain unknown. Here, we investigate the effects of spatial smoothing on the difference between patients and controls for two clinical conditions: autism spectrum disorder and bipolar disorder, considering fMRI data smoothed with Gaussian kernels (0-32 mm). We find that smoothing affects network differences between groups. For weighted networks, incrementing the smoothing kernel makes networks more different. For thresholded networks, larger smoothing kernels lead to more similar networks, although this depends on the network density. Smoothing also alters the effect sizes of the individual link differences. This is independent of the ROI size but varies with link length. The effects of spatial smoothing are diverse, non-trivial, and difficult to predict. This has important consequences: the choice of smoothing kernel affects the observed network differences.
[ { "created": "Sat, 23 Nov 2019 09:52:54 GMT", "version": "v1" }, { "created": "Wed, 19 Feb 2020 17:36:24 GMT", "version": "v2" } ]
2020-02-20
[ [ "Triana", "A. M.", "" ], [ "Glerean", "E.", "" ], [ "Saramäki", "J.", "" ], [ "Korhonen", "O.", "" ] ]
Brain connectivity with functional Magnetic Resonance Imaging (fMRI) is a popular approach for detecting differences between healthy and clinical populations. Before creating a functional brain network, the fMRI time series must undergo several preprocessing steps to control for artifacts and to improve data quality. However, preprocessing may affect the results in an undesirable way. Spatial smoothing, for example, is known to alter functional network structure. Yet, its effects on group-level network differences remain unknown. Here, we investigate the effects of spatial smoothing on the difference between patients and controls for two clinical conditions: autism spectrum disorder and bipolar disorder, considering fMRI data smoothed with Gaussian kernels (0-32 mm). We find that smoothing affects network differences between groups. For weighted networks, incrementing the smoothing kernel makes networks more different. For thresholded networks, larger smoothing kernels lead to more similar networks, although this depends on the network density. Smoothing also alters the effect sizes of the individual link differences. This is independent of the ROI size but varies with link length. The effects of spatial smoothing are diverse, non-trivial, and difficult to predict. This has important consequences: the choice of smoothing kernel affects the observed network differences.
1912.10003
Rafat Damseh
Rafat Damseh, Patrick Delafontaine-Martel, Philippe Pouliot, Farida Cheriet, Frederic Lesage
Laplacian Flow Dynamics on Geometric Graphs for Anatomical Modeling of Cerebrovascular Networks
null
null
null
null
q-bio.QM cs.CG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generating computational anatomical models of cerebrovascular networks is vital for improving clinical practice and understanding brain oxygen transport. This is achieved by extracting graph-based representations based on pre-mapping of vascular structures. Recent graphing methods can provide smooth vessel trajectories and well-connected vascular topology. However, they require water-tight surface meshes as inputs. Furthermore, adding vessel radii information on their graph compartments restricts their alignment along vascular centerlines. Here, we propose a novel graphing scheme that works with relaxed input requirements and intrinsically captures vessel radii information. The proposed approach is based on deforming geometric graphs constructed within vascular boundaries. Under a Laplacian optimization framework, we assign affinity weights on the initial geometry that drive its iterative contraction toward vessel centerlines. We present a mechanism to decimate graph structure at each run and a convergence criterion to stop the process. A refinement technique is then introduced to obtain final vascular models. Our implementation is available on https://github.com/Damseh/VascularGraph. We benchmarked our results against those obtained using other efficient and state-of-the-art graphing schemes, validating on both synthetic and real angiograms acquired with different imaging modalities. The experiments indicate that the proposed scheme produces the lowest geometric and topological error rates on various angiograms. Furthermore, it surpasses other techniques in providing representative models that capture all anatomical aspects of vascular structures.
[ { "created": "Fri, 20 Dec 2019 18:35:13 GMT", "version": "v1" } ]
2019-12-23
[ [ "Damseh", "Rafat", "" ], [ "Delafontaine-Martel", "Patrick", "" ], [ "Pouliot", "Philippe", "" ], [ "Cheriet", "Farida", "" ], [ "Lesage", "Frederic", "" ] ]
Generating computational anatomical models of cerebrovascular networks is vital for improving clinical practice and understanding brain oxygen transport. This is achieved by extracting graph-based representations based on pre-mapping of vascular structures. Recent graphing methods can provide smooth vessel trajectories and well-connected vascular topology. However, they require water-tight surface meshes as inputs. Furthermore, adding vessel radii information on their graph compartments restricts their alignment along vascular centerlines. Here, we propose a novel graphing scheme that works with relaxed input requirements and intrinsically captures vessel radii information. The proposed approach is based on deforming geometric graphs constructed within vascular boundaries. Under a Laplacian optimization framework, we assign affinity weights on the initial geometry that drive its iterative contraction toward vessel centerlines. We present a mechanism to decimate graph structure at each run and a convergence criterion to stop the process. A refinement technique is then introduced to obtain final vascular models. Our implementation is available on https://github.com/Damseh/VascularGraph. We benchmarked our results against those obtained using other efficient and state-of-the-art graphing schemes, validating on both synthetic and real angiograms acquired with different imaging modalities. The experiments indicate that the proposed scheme produces the lowest geometric and topological error rates on various angiograms. Furthermore, it surpasses other techniques in providing representative models that capture all anatomical aspects of vascular structures.
1305.7271
David Albers
D J Albers, J Claassen, M J Schmidt, G Hripcsak
A methodology for detecting and exploring non-convulsive seizures in patients with SAH
Submitted to NOLTA 2013
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A methodology for understanding and detecting nonconvulsive seizures in individuals with subarachnoid hemorrhage is introduced. Specifically, beginning with an EEG signal, the power spectrum is estimated, yielding a multivariate time series which is then analyzed using empirical orthogonal functional analysis. This methodology allows for easy identification and observation of seizures that are otherwise only identifiable through expert analysis of the raw EEG.
[ { "created": "Thu, 30 May 2013 23:46:15 GMT", "version": "v1" } ]
2013-06-03
[ [ "Albers", "D J", "" ], [ "Claassen", "J", "" ], [ "Schmidt", "M J", "" ], [ "Hripcsak", "G", "" ] ]
A methodology for understanding and detecting nonconvulsive seizures in individuals with subarachnoid hemorrhage is introduced. Specifically, beginning with an EEG signal, the power spectrum is estimated, yielding a multivariate time series which is then analyzed using empirical orthogonal functional analysis. This methodology allows for easy identification and observation of seizures that are otherwise only identifiable through expert analysis of the raw EEG.
2209.06348
Juan Ponciano
Juan Adolfo Ponciano, Juan Diego Chang, Mariela Abdalah, Kevin Facey and Jos\'e Miguel Ponciano
COVID-19 Regional Waves and Spread Risk Assessment through the Analysis of the Initial Outbreak in Guatemala
22 pages, 7 figures, 2 tables
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The initial surge of the COVID-19 pandemic hit Guatemala in March 2020. On a country scale, the epidemic has undergone a fairly well-known and distinguishable initial phase, reaching its peak in mid-July 2020. However, the detailed picture is more involved and reflects inter-regional variations in the epidemic dynamics, presumably grounded on socio-demographic, connectivity, and human mobility factors. Classifying the regional epidemic curves and identifying the major hubs of regional COVID-19 spread can contribute towards defining an evidence-based risk map for future outbreaks of infectious diseases with similar transmissibility properties. In this work, we make a regional wave decomposition of the initial epidemic phase registered in Guatemala, and we use the Richards phenomenological model alongside multivariate ordination techniques of its estimated model parameters to draw a countrywide picture of the first epidemiological wave. By exploring similarities in the model space parameters, we traced routes for the disease spread across the country. We evaluated how well the proposed classification can help to define a regional risk hierarchy comprising early stage focal points, major hubs, and secondary regions of epidemic progression.
[ { "created": "Tue, 13 Sep 2022 23:49:37 GMT", "version": "v1" } ]
2022-09-15
[ [ "Ponciano", "Juan Adolfo", "" ], [ "Chang", "Juan Diego", "" ], [ "Abdalah", "Mariela", "" ], [ "Facey", "Kevin", "" ], [ "Ponciano", "José Miguel", "" ] ]
The initial surge of the COVID-19 pandemic hit Guatemala in March 2020. On a country scale, the epidemic has undergone a fairly well-known and distinguishable initial phase, reaching its peak in mid-July 2020. However, the detailed picture is more involved and reflects inter-regional variations in the epidemic dynamics, presumably grounded on socio-demographic, connectivity, and human mobility factors. Classifying the regional epidemic curves and identifying the major hubs of regional COVID-19 spread can contribute towards defining an evidence-based risk map for future outbreaks of infectious diseases with similar transmissibility properties. In this work, we make a regional wave decomposition of the initial epidemic phase registered in Guatemala, and we use the Richards phenomenological model alongside multivariate ordination techniques of its estimated model parameters to draw a countrywide picture of the first epidemiological wave. By exploring similarities in the model space parameters, we traced routes for the disease spread across the country. We evaluated how well the proposed classification can help to define a regional risk hierarchy comprising early stage focal points, major hubs, and secondary regions of epidemic progression.
1807.09393
Naoto Hori
Naoto Hori, Natalia A. Denesyuk, D. Thirumalai
Frictional Effects on RNA Folding: Speed Limit and Kramers Turnover
null
null
10.1021/acs.jpcb.8b07129
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigated frictional effects on the folding rates of a human telomerase hairpin (hTR HP) and H-type pseudoknot from the Beet Western Yellow Virus (BWYV PK) using simulations of the Three Interaction Site (TIS) model for RNA. The heat capacity from TIS model simulations, calculated using temperature replica exchange simulations, reproduces nearly quantitatively the available experimental data for the hTR HP. The corresponding results for BWYV PK serve as predictions. We calculated the folding rates ($k_\mathrm{F}$) from more than 100 folding trajectories for each value of the solvent viscosity ($\eta$) at a fixed salt concentration of 200 mM. By using the theoretical estimate ($\propto$$\sqrt{N}$ where $N$ is the number of nucleotides) for folding free energy barrier, $k_\mathrm{F}$ data for both the RNAs are quantitatively fit using one-dimensional Kramers' theory with two parameters specifying the curvatures in the unfolded basin and the barrier top. In the high-friction regime ($\eta\gtrsim10^{-5}\,\textrm{Pa\ensuremath{\cdot}s}$), for both HP and PK, $k_\mathrm{F}$s decrease as $1/\eta$ whereas in the low friction regime, $k_\mathrm{F}$ values increase as $\eta$ increases, leading to a maximum folding rate at a moderate viscosity ($\sim10^{-6}\,\textrm{Pa\ensuremath{\cdot}s}$), which is the Kramers turnover. From the fits, we find that the speed limit to RNA folding at water viscosity is between 1 and 4 $\mathrm{\mu s}$, which is in accord with our previous theoretical prediction as well as results from several single molecule experiments. Both the RNA constructs fold by parallel pathways. Surprisingly, we find that the flux through the pathways could be altered by changing solvent viscosity, a prediction that is more easily testable in RNA than in proteins.
[ { "created": "Tue, 24 Jul 2018 23:44:16 GMT", "version": "v1" }, { "created": "Thu, 26 Jul 2018 14:22:21 GMT", "version": "v2" }, { "created": "Fri, 14 Sep 2018 18:05:27 GMT", "version": "v3" } ]
2018-09-18
[ [ "Hori", "Naoto", "" ], [ "Denesyuk", "Natalia A.", "" ], [ "Thirumalai", "D.", "" ] ]
We investigated frictional effects on the folding rates of a human telomerase hairpin (hTR HP) and H-type pseudoknot from the Beet Western Yellow Virus (BWYV PK) using simulations of the Three Interaction Site (TIS) model for RNA. The heat capacity from TIS model simulations, calculated using temperature replica exchange simulations, reproduces nearly quantitatively the available experimental data for the hTR HP. The corresponding results for BWYV PK serve as predictions. We calculated the folding rates ($k_\mathrm{F}$) from more than 100 folding trajectories for each value of the solvent viscosity ($\eta$) at a fixed salt concentration of 200 mM. By using the theoretical estimate ($\propto$$\sqrt{N}$ where $N$ is the number of nucleotides) for folding free energy barrier, $k_\mathrm{F}$ data for both the RNAs are quantitatively fit using one-dimensional Kramers' theory with two parameters specifying the curvatures in the unfolded basin and the barrier top. In the high-friction regime ($\eta\gtrsim10^{-5}\,\textrm{Pa\ensuremath{\cdot}s}$), for both HP and PK, $k_\mathrm{F}$s decrease as $1/\eta$ whereas in the low friction regime, $k_\mathrm{F}$ values increase as $\eta$ increases, leading to a maximum folding rate at a moderate viscosity ($\sim10^{-6}\,\textrm{Pa\ensuremath{\cdot}s}$), which is the Kramers turnover. From the fits, we find that the speed limit to RNA folding at water viscosity is between 1 and 4 $\mathrm{\mu s}$, which is in accord with our previous theoretical prediction as well as results from several single molecule experiments. Both the RNA constructs fold by parallel pathways. Surprisingly, we find that the flux through the pathways could be altered by changing solvent viscosity, a prediction that is more easily testable in RNA than in proteins.
q-bio/0606004
Georgy Karev
F. Berezovskaya, G. Karev
Bifurcations of self-similar solutions of the Fokker-Planck Equation
9 pages; submitted to Discrete and Continuous Dynamical Systems
null
null
null
q-bio.QM q-bio.OT
null
A class of one-dimensional Fokker-Planck equations having a common stationary solution, which is a power function of the state of the process, was found. We prove that these equations also have generalized self-similar solutions which describe the temporary transition from one stationary state to another. The study was motivated by problems arising in mathematical modeling of genome size evolution.
[ { "created": "Fri, 2 Jun 2006 18:47:21 GMT", "version": "v1" } ]
2007-05-23
[ [ "Berezovskaya", "F.", "" ], [ "Karev", "G.", "" ] ]
A class of one-dimensional Fokker-Planck equations having a common stationary solution, which is a power function of the state of the process, was found. We prove that these equations also have generalized self-similar solutions which describe the temporary transition from one stationary state to another. The study was motivated by problems arising in mathematical modeling of genome size evolution.
2211.03553
Natasa Tagasovska
Romain Lopez, Nata\v{s}a Tagasovska, Stephen Ra, Kyunghyn Cho, Jonathan K. Pritchard, Aviv Regev
Learning Causal Representations of Single Cells via Sparse Mechanism Shift Modeling
Accepted at CLeaR (Causal Learning and Reasoning) 2023
null
null
null
q-bio.GN cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Latent variable models such as the Variational Auto-Encoder (VAE) have become a go-to tool for analyzing biological data, especially in the field of single-cell genomics. One remaining challenge is the interpretability of latent variables as biological processes that define a cell's identity. Outside of biological applications, this problem is commonly referred to as learning disentangled representations. Although several disentanglement-promoting variants of the VAE were introduced, and applied to single-cell genomics data, this task has been shown to be infeasible from independent and identically distributed measurements, without additional structure. Instead, recent methods propose to leverage non-stationary data, as well as the sparse mechanism shift assumption in order to learn disentangled representations with a causal semantic. Here, we extend the application of these methodological advances to the analysis of single-cell genomics data with genetic or chemical perturbations. More precisely, we propose a deep generative model of single-cell gene expression data for which each perturbation is treated as a stochastic intervention targeting an unknown, but sparse, subset of latent variables. We benchmark these methods on simulated single-cell data to evaluate their performance at latent units recovery, causal target identification and out-of-domain generalization. Finally, we apply those approaches to two real-world large-scale gene perturbation data sets and find that models that exploit the sparse mechanism shift hypothesis surpass contemporary methods on a transfer learning task. We implement our new model and benchmarks using the scvi-tools library, and release it as open-source software at https://github.com/Genentech/sVAE.
[ { "created": "Mon, 7 Nov 2022 15:47:40 GMT", "version": "v1" }, { "created": "Tue, 8 Nov 2022 12:44:03 GMT", "version": "v2" }, { "created": "Wed, 9 Nov 2022 22:04:16 GMT", "version": "v3" }, { "created": "Thu, 16 Feb 2023 22:31:44 GMT", "version": "v4" } ]
2023-02-20
[ [ "Lopez", "Romain", "" ], [ "Tagasovska", "Nataša", "" ], [ "Ra", "Stephen", "" ], [ "Cho", "Kyunghyn", "" ], [ "Pritchard", "Jonathan K.", "" ], [ "Regev", "Aviv", "" ] ]
Latent variable models such as the Variational Auto-Encoder (VAE) have become a go-to tool for analyzing biological data, especially in the field of single-cell genomics. One remaining challenge is the interpretability of latent variables as biological processes that define a cell's identity. Outside of biological applications, this problem is commonly referred to as learning disentangled representations. Although several disentanglement-promoting variants of the VAE were introduced, and applied to single-cell genomics data, this task has been shown to be infeasible from independent and identically distributed measurements, without additional structure. Instead, recent methods propose to leverage non-stationary data, as well as the sparse mechanism shift assumption in order to learn disentangled representations with a causal semantic. Here, we extend the application of these methodological advances to the analysis of single-cell genomics data with genetic or chemical perturbations. More precisely, we propose a deep generative model of single-cell gene expression data for which each perturbation is treated as a stochastic intervention targeting an unknown, but sparse, subset of latent variables. We benchmark these methods on simulated single-cell data to evaluate their performance at latent units recovery, causal target identification and out-of-domain generalization. Finally, we apply those approaches to two real-world large-scale gene perturbation data sets and find that models that exploit the sparse mechanism shift hypothesis surpass contemporary methods on a transfer learning task. We implement our new model and benchmarks using the scvi-tools library, and release it as open-source software at https://github.com/Genentech/sVAE.
2201.04663
Xudong Tang
Xudong Tang, Leonardo Zepeda-Nunez, Shengwen Yang, Zelin Zhao, Claudia Solis-Lemus
Novel Symmetry-preserving Neural Network Model for Phylogenetic Inference
15 pages, 6 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Scientists world-wide are putting together massive efforts to understand how the biodiversity that we see on Earth evolved from single-cell organisms at the origin of life and this diversification process is represented through the Tree of Life. Low sampling rates and high heterogeneity in the rate of evolution across sites and lineages produce a phenomenon denoted "long branch attraction" (LBA) in which long non-sister lineages are estimated to be sisters regardless of their true evolutionary relationship. LBA has been a pervasive problem in phylogenetic inference affecting different types of methodologies from distance-based to likelihood-based. Here, we present a novel neural network model that outperforms standard phylogenetic methods and other neural network implementations under LBA settings. Furthermore, unlike existing neural network models, our model naturally accounts for the tree isomorphisms via permutation invariant functions which ultimately result in lower memory and allows the seamless extension to larger trees.
[ { "created": "Wed, 12 Jan 2022 19:37:59 GMT", "version": "v1" }, { "created": "Fri, 15 Sep 2023 00:01:52 GMT", "version": "v2" }, { "created": "Thu, 21 Dec 2023 03:24:06 GMT", "version": "v3" } ]
2023-12-22
[ [ "Tang", "Xudong", "" ], [ "Zepeda-Nunez", "Leonardo", "" ], [ "Yang", "Shengwen", "" ], [ "Zhao", "Zelin", "" ], [ "Solis-Lemus", "Claudia", "" ] ]
Scientists world-wide are putting together massive efforts to understand how the biodiversity that we see on Earth evolved from single-cell organisms at the origin of life and this diversification process is represented through the Tree of Life. Low sampling rates and high heterogeneity in the rate of evolution across sites and lineages produce a phenomenon denoted "long branch attraction" (LBA) in which long non-sister lineages are estimated to be sisters regardless of their true evolutionary relationship. LBA has been a pervasive problem in phylogenetic inference affecting different types of methodologies from distance-based to likelihood-based. Here, we present a novel neural network model that outperforms standard phylogenetic methods and other neural network implementations under LBA settings. Furthermore, unlike existing neural network models, our model naturally accounts for the tree isomorphisms via permutation invariant functions which ultimately result in lower memory and allows the seamless extension to larger trees.
q-bio/0603031
Niko Beerenwinkel
Niko Beerenwinkel and Mathias Drton
A Mutagenetic Tree Hidden Markov Model for Longitudinal Clonal HIV Sequence Data
20 pages, 6 figures
Biostatistics 2007, Vol. 8, No. 1, 53-71
10.1093/biostatistics/kxj033
null
q-bio.PE math.ST stat.TH
null
RNA viruses provide prominent examples of measurably evolving populations. In HIV infection, the development of drug resistance is of particular interest, because precise predictions of the outcome of this evolutionary process are a prerequisite for the rational design of antiretroviral treatment protocols. We present a mutagenetic tree hidden Markov model for the analysis of longitudinal clonal sequence data. Using HIV mutation data from clinical trials, we estimate the order and rate of occurrence of seven amino acid changes that are associated with resistance to the reverse transcriptase inhibitor efavirenz.
[ { "created": "Mon, 27 Mar 2006 18:33:34 GMT", "version": "v1" } ]
2010-03-04
[ [ "Beerenwinkel", "Niko", "" ], [ "Drton", "Mathias", "" ] ]
RNA viruses provide prominent examples of measurably evolving populations. In HIV infection, the development of drug resistance is of particular interest, because precise predictions of the outcome of this evolutionary process are a prerequisite for the rational design of antiretroviral treatment protocols. We present a mutagenetic tree hidden Markov model for the analysis of longitudinal clonal sequence data. Using HIV mutation data from clinical trials, we estimate the order and rate of occurrence of seven amino acid changes that are associated with resistance to the reverse transcriptase inhibitor efavirenz.
2408.07636
Bing Hu
Bing Hu and Anita Layton and Helen Chen
Drug Discovery SMILES-to-Pharmacokinetics Diffusion Models with Deep Molecular Understanding
13 pages, 5 figures, 4 tables
null
null
null
q-bio.QM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial intelligence (AI) is increasingly used in every stage of drug development. One challenge facing drug discovery AI is that drug pharmacokinetic (PK) datasets are often collected independently from each other, often with limited overlap, creating data overlap sparsity. Data sparsity makes data curation difficult for researchers looking to answer research questions in poly-pharmacy, drug combination research, and high-throughput screening. We propose Imagand, a novel SMILES-to-Pharmacokinetic (S2PK) diffusion model capable of generating an array of PK target properties conditioned on SMILES inputs. We show that Imagand-generated synthetic PK data closely resembles real data univariate and bivariate distributions, and improves performance for downstream tasks. Imagand is a promising solution for data overlap sparsity and allows researchers to efficiently generate ligand PK data for drug discovery research. Code is available at \url{https://github.com/bing1100/Imagand}.
[ { "created": "Wed, 14 Aug 2024 16:01:02 GMT", "version": "v1" } ]
2024-08-15
[ [ "Hu", "Bing", "" ], [ "Layton", "Anita", "" ], [ "Chen", "Helen", "" ] ]
Artificial intelligence (AI) is increasingly used in every stage of drug development. One challenge facing drug discovery AI is that drug pharmacokinetic (PK) datasets are often collected independently from each other, often with limited overlap, creating data overlap sparsity. Data sparsity makes data curation difficult for researchers looking to answer research questions in poly-pharmacy, drug combination research, and high-throughput screening. We propose Imagand, a novel SMILES-to-Pharmacokinetic (S2PK) diffusion model capable of generating an array of PK target properties conditioned on SMILES inputs. We show that Imagand-generated synthetic PK data closely resembles real data univariate and bivariate distributions, and improves performance for downstream tasks. Imagand is a promising solution for data overlap sparsity and allows researchers to efficiently generate ligand PK data for drug discovery research. Code is available at \url{https://github.com/bing1100/Imagand}.
1811.11236
Liane Gabora
Liane Gabora
The Making of a Creative Worldview
Chapter prepared for Suzanne Nalbantian and Paul Matthews (eds.) 'Secrets of Creativity': Oxford University Press
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research at the interface between cognitive psychology, neuroscience, and the science of complex, dynamical systems, is piecing together an understanding of the creative process, including how it works, how it can be fostered, and the developmental antecedents and personality traits of particularly creative people. This chapter examines the workings of creative minds, those with the potential to significantly impact the evolution of human culture.
[ { "created": "Tue, 27 Nov 2018 20:14:31 GMT", "version": "v1" } ]
2018-11-29
[ [ "Gabora", "Liane", "" ] ]
Research at the interface between cognitive psychology, neuroscience, and the science of complex, dynamical systems, is piecing together an understanding of the creative process, including how it works, how it can be fostered, and the developmental antecedents and personality traits of particularly creative people. This chapter examines the workings of creative minds, those with the potential to significantly impact the evolution of human culture.
2203.16330
Hong Qin
Hong Qin, Syed Tareq, William Torres, Megan Doman, Cleo Falvey, Jamaree Moore, Meng Hsiu Tsai, Yingfeng Wang, Azad Hossain, Mengjun Xie, Li Yang
Cointegration of SARS-CoV-2 Transmission with Weather Conditions and Mobility during the First Year of the COVID-19 Pandemic in the United States
6 pages, 3 figures
null
null
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by/4.0/
Correlation between weather and the transmission of SARS-CoV-2 may suggest its seasonality. Cointegration analysis can avoid spurious correlation among time series data. We examined the cointegration of virus transmission with daily temperature, dewpoint, and confounding factors of mobility measurements during the first year of the pandemic in the United States. We examined the cointegration of the effective reproductive rate, Rt, of the virus with the dewpoint at two meters, the temperature at two meters, Apple driving mobility, and Google workplace mobility measurements. We found that dewpoint and Apple driving mobility are the best factors to cointegrate with Rt, although temperature and Google workplace mobility also cointegrate with Rt at substantial levels. We found that the optimal lag is two days for cointegration between Rt and weather variables, and three days for Rt and mobility. We observed clusters of states that share similar cointegration results of Rt, weather, and mobility, suggesting regional patterns. Our results support the correlation of weather with the spread of SARS-CoV-2 and its potential seasonality.
[ { "created": "Tue, 29 Mar 2022 02:00:26 GMT", "version": "v1" }, { "created": "Thu, 31 Mar 2022 00:53:31 GMT", "version": "v2" } ]
2022-04-01
[ [ "Qin", "Hong", "" ], [ "Tareq", "Syed", "" ], [ "Torres", "William", "" ], [ "Doman", "Megan", "" ], [ "Falvey", "Cleo", "" ], [ "Moore", "Jamaree", "" ], [ "Tsai", "Meng Hsiu", "" ], [ "Wang", "Yingfeng", "" ], [ "Hossain", "Azad", "" ], [ "Xie", "Mengjun", "" ], [ "Yang", "Li", "" ] ]
Correlation between weather and the transmission of SARS-CoV-2 may suggest its seasonality. Cointegration analysis can avoid spurious correlation among time series data. We examined the cointegration of virus transmission with daily temperature, dewpoint, and confounding factors of mobility measurements during the first year of the pandemic in the United States. We examined the cointegration of the effective reproductive rate, Rt, of the virus with the dewpoint at two meters, the temperature at two meters, Apple driving mobility, and Google workplace mobility measurements. We found that dewpoint and Apple driving mobility are the best factors to cointegrate with Rt, although temperature and Google workplace mobility also cointegrate with Rt at substantial levels. We found that the optimal lag is two days for cointegration between Rt and weather variables, and three days for Rt and mobility. We observed clusters of states that share similar cointegration results of Rt, weather, and mobility, suggesting regional patterns. Our results support the correlation of weather with the spread of SARS-CoV-2 and its potential seasonality.
2305.05193
Shloka Janapaty
Shloka V. Janapaty
A Chip-Firing Game for Biocrust Reverse Succession
11 pages, 6 figures; arguments revised, citations added
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Experimental work suggests that biological soil crusts, dominant primary producers in drylands and tundra, are particularly vulnerable to disturbances that cause reverse ecological succession. To model successional transitions in biocrust communities, we propose a resource-firing game that captures succession dynamics without specifying detailed functional forms. The model is evaluated in idealized terrestrial ecosystems, where disturbances are modeled as a reduction in available resources that triggers inter-species competition. The resource-firing game is executed on a finite graph with nodes representing species in the community and a sink node that becomes active when every species is depleted of resources. First, we discuss the theoretical basis of the resource-firing game, evaluate it in the light of existing literature, and consider the characteristics of a biocrust community that has evolved to equilibrium. We then examine the dependence of resource-firing and game stability on species richness, showing that high species richness increases the probability of very short and long avalanches, but not those of intermediate length. Indeed, this result suggests that the response of the community to disturbance is both directional and episodic, proceeding towards reverse succession in bursts of variable length. Finally, we incorporate the spatial structure of the biocrust community into a Cayley Tree and derive a formula for the probability that a disturbance, modeled as a random attack, initiates a large species-death event.
[ { "created": "Tue, 9 May 2023 06:08:42 GMT", "version": "v1" }, { "created": "Mon, 12 Jun 2023 18:17:05 GMT", "version": "v2" } ]
2023-06-14
[ [ "Janapaty", "Shloka V.", "" ] ]
Experimental work suggests that biological soil crusts, dominant primary producers in drylands and tundra, are particularly vulnerable to disturbances that cause reverse ecological succession. To model successional transitions in biocrust communities, we propose a resource-firing game that captures succession dynamics without specifying detailed functional forms. The model is evaluated in idealized terrestrial ecosystems, where disturbances are modeled as a reduction in available resources that triggers inter-species competition. The resource-firing game is executed on a finite graph with nodes representing species in the community and a sink node that becomes active when every species is depleted of resources. First, we discuss the theoretical basis of the resource-firing game, evaluate it in the light of existing literature, and consider the characteristics of a biocrust community that has evolved to equilibrium. We then examine the dependence of resource-firing and game stability on species richness, showing that high species richness increases the probability of very short and long avalanches, but not those of intermediate length. Indeed, this result suggests that the response of the community to disturbance is both directional and episodic, proceeding towards reverse succession in bursts of variable length. Finally, we incorporate the spatial structure of the biocrust community into a Cayley Tree and derive a formula for the probability that a disturbance, modeled as a random attack, initiates a large species-death event.
1809.02872
Alexey Chernov
A. Chernov, M. Kelbert, A. Shemendyuk
Optimal vaccine allocation during the mumps outbreak in two SIR centers
null
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The aim of this work is to investigate the optimal vaccine sharing between two SIR centers in the presence of migration fluxes of susceptibles and infected individuals during the mumps outbreak. Optimality of the vaccine allocation means the minimization of the total number of lost working days during the whole period of epidemic outbreak $[0,t_f]$, which can be described by the functional $Q=\int_0^{t_f}I(t){\rm d}t$ where $I(t)$ stands for the number of infectives at time $t$. We explain the behavior of the optimal allocation, which depends on the model parameters and the amount of available vaccine.
[ { "created": "Sat, 8 Sep 2018 20:46:13 GMT", "version": "v1" } ]
2018-09-11
[ [ "Chernov", "A.", "" ], [ "Kelbert", "M.", "" ], [ "Shemendyuk", "A.", "" ] ]
The aim of this work is to investigate the optimal vaccine sharing between two SIR centers in the presence of migration fluxes of susceptibles and infected individuals during the mumps outbreak. Optimality of the vaccine allocation means the minimization of the total number of lost working days during the whole period of epidemic outbreak $[0,t_f]$, which can be described by the functional $Q=\int_0^{t_f}I(t){\rm d}t$ where $I(t)$ stands for the number of infectives at time $t$. We explain the behavior of the optimal allocation, which depends on the model parameters and the amount of available vaccine.
1210.4695
David Balduzzi
David Balduzzi
Regulating the information in spikes: a useful bias
NIPS 2012 workshop on Information in Perception and Action
null
null
null
q-bio.NC cs.IT cs.LG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The bias/variance tradeoff is fundamental to learning: increasing a model's complexity can improve its fit on training data, but potentially worsens performance on future samples. Remarkably, however, the human brain effortlessly handles a wide range of complex pattern recognition tasks. On the basis of these conflicting observations, it has been argued that useful biases in the form of "generic mechanisms for representation" must be hardwired into cortex (Geman et al). This note describes a useful bias that encourages cooperative learning which is both biologically plausible and rigorously justified.
[ { "created": "Wed, 17 Oct 2012 11:12:02 GMT", "version": "v1" } ]
2012-10-18
[ [ "Balduzzi", "David", "" ] ]
The bias/variance tradeoff is fundamental to learning: increasing a model's complexity can improve its fit on training data, but potentially worsens performance on future samples. Remarkably, however, the human brain effortlessly handles a wide range of complex pattern recognition tasks. On the basis of these conflicting observations, it has been argued that useful biases in the form of "generic mechanisms for representation" must be hardwired into cortex (Geman et al). This note describes a useful bias that encourages cooperative learning which is both biologically plausible and rigorously justified.
1904.06645
Mohammad Nami
Ali-Mohammad Kamali, Mohammad Taghi Najafi, Mohammad Nami
Brain on the 3D Visual Art through Virtual Reality; Introducing Neuro-Art in a Case Investigation
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The reciprocal impact of applied neuroscience and cognitive studies on humanities has been extensive and growing over the past 30 years of research. Studies on neuroaesthetics have provided novel insights in visual arts, music as well as abstract and dramatic art. Neuro-Art is an experimental concept in applied neuroscience where scientists can study the mechanistic pathways involved for instance in visual art through which creativity and artistic capacity might receive further empowerment. Based on the existing evidence, at least 3 large-scale brain networks are involved simultaneously when one is submitted to a creativity-related task. The question whether the key brain regions involved in visual art creativity can be identified and receive neuromodulation to get empowered prompted us to perform the present case investigation. Virtual reality and functional quantitative electroencephalography upon 2- vs 3-dimensional painting were employed to study cortical neurodynamics in a professional painting artist.
[ { "created": "Sun, 14 Apr 2019 07:35:47 GMT", "version": "v1" } ]
2019-04-16
[ [ "Kamali", "Ali-Mohammad", "" ], [ "Najafi", "Mohammad Taghi", "" ], [ "Nami", "Mohammad", "" ] ]
The reciprocal impact of applied neuroscience and cognitive studies on humanities has been extensive and growing over the past 30 years of research. Studies on neuroaesthetics have provided novel insights in visual arts, music as well as abstract and dramatic art. Neuro-Art is an experimental concept in applied neuroscience where scientists can study the mechanistic pathways involved for instance in visual art through which creativity and artistic capacity might receive further empowerment. Based on the existing evidence, at least 3 large-scale brain networks are involved simultaneously when one is submitted to a creativity-related task. The question whether the key brain regions involved in visual art creativity can be identified and receive neuromodulation to get empowered prompted us to perform the present case investigation. Virtual reality and functional quantitative electroencephalography upon 2- vs 3-dimensional painting were employed to study cortical neurodynamics in a professional painting artist.
1503.06667
Jonathan Potts
Jonathan R Potts, Mark A Lewis
Territorial pattern formation in the absence of an attractive potential
22 pages, 4 figures
null
10.1007/s00285-015-0881-4
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Territoriality is a phenomenon exhibited throughout nature. On the individual level, it is the processes by which organisms exclude others of the same species from certain parts of space. On the population level, it is the segregation of space into separate areas, each used by subsections of the population. Proving mathematically that such individual-level processes can cause observed population-level patterns to form is necessary for linking these two levels of description in a non-speculative way. Previous mathematical analysis has relied upon assuming animals are attracted to a central area. This can either be a fixed geographical point, such as a den- or nest-site, or a region where they have previously visited. However, recent simulation-based studies suggest that this attractive potential is not necessary for territorial pattern formation. Here, we construct a partial differential equation (PDE) model of territorial interactions based on the individual-based model (IBM) from those simulation studies. The resulting PDE does not rely on attraction to spatial locations, but purely on conspecific avoidance, mediated via scent-marking. We show analytically that steady-state patterns can form, as long as (i) the scent does not decay faster than it takes the animal to traverse the terrain, and (ii) the spatial scale over which animals detect scent is incorporated into the PDE. As part of the analysis, we develop a general method for taking the PDE limit of an IBM that avoids destroying any intrinsic spatial scale in the underlying behavioral decisions.
[ { "created": "Mon, 23 Mar 2015 14:50:29 GMT", "version": "v1" } ]
2015-08-17
[ [ "Potts", "Jonathan R", "" ], [ "Lewis", "Mark A", "" ] ]
Territoriality is a phenomenon exhibited throughout nature. On the individual level, it is the processes by which organisms exclude others of the same species from certain parts of space. On the population level, it is the segregation of space into separate areas, each used by subsections of the population. Proving mathematically that such individual-level processes can cause observed population-level patterns to form is necessary for linking these two levels of description in a non-speculative way. Previous mathematical analysis has relied upon assuming animals are attracted to a central area. This can either be a fixed geographical point, such as a den- or nest-site, or a region where they have previously visited. However, recent simulation-based studies suggest that this attractive potential is not necessary for territorial pattern formation. Here, we construct a partial differential equation (PDE) model of territorial interactions based on the individual-based model (IBM) from those simulation studies. The resulting PDE does not rely on attraction to spatial locations, but purely on conspecific avoidance, mediated via scent-marking. We show analytically that steady-state patterns can form, as long as (i) the scent does not decay faster than it takes the animal to traverse the terrain, and (ii) the spatial scale over which animals detect scent is incorporated into the PDE. As part of the analysis, we develop a general method for taking the PDE limit of an IBM that avoids destroying any intrinsic spatial scale in the underlying behavioral decisions.
1301.1710
Vilhelm Verendel
Kristian Lindgren, Vilhelm Verendel
Evolutionary Exploration of the Finitely Repeated Prisoners' Dilemma--The Effect of Out-of-Equilibrium Play
null
Games 2013, 4(1):1-20
10.3390/g4010001
null
q-bio.PE cs.GT
http://creativecommons.org/licenses/by/3.0/
The finitely repeated Prisoners' Dilemma is a good illustration of the discrepancy between the strategic behaviour suggested by a game-theoretic analysis and the behaviour often observed among human players, where cooperation is maintained through most of the game. A game-theoretic reasoning based on backward induction eliminates strategies step by step until defection from the first round is the only remaining choice, reflecting the Nash equilibrium of the game. We investigate the Nash equilibrium solution for two different sets of strategies in an evolutionary context, using replicator-mutation dynamics. The first set consists of conditional cooperators, up to a certain round, while the second set in addition to these contains two strategy types that react differently on the first round action: The "Convincer" strategies insist with two rounds of initial cooperation, trying to establish more cooperative play in the game, while the "Follower" strategies, although being first round defectors, have the capability to respond to an invite in the first round. For both of these strategy sets, iterated elimination of strategies shows that the only Nash equilibria are given by defection from the first round. We show that the evolutionary dynamics of the first set is always characterised by a stable fixed point, corresponding to the Nash equilibrium, if the mutation rate is sufficiently small (but still positive). The second strategy set is numerically investigated, and we find that there are regions of parameter space where fixed points become unstable and the dynamics exhibits cycles of different strategy compositions. The results indicate that, even in the limit of very small mutation rate, the replicator-mutation dynamics does not necessarily bring the system with Convincers and Followers to the fixed point corresponding to the Nash equilibrium of the game.
[ { "created": "Tue, 8 Jan 2013 22:20:43 GMT", "version": "v1" } ]
2013-01-10
[ [ "Lindgren", "Kristian", "" ], [ "Verendel", "Vilhelm", "" ] ]
The finitely repeated Prisoners' Dilemma is a good illustration of the discrepancy between the strategic behaviour suggested by a game-theoretic analysis and the behaviour often observed among human players, where cooperation is maintained through most of the game. A game-theoretic reasoning based on backward induction eliminates strategies step by step until defection from the first round is the only remaining choice, reflecting the Nash equilibrium of the game. We investigate the Nash equilibrium solution for two different sets of strategies in an evolutionary context, using replicator-mutation dynamics. The first set consists of conditional cooperators, up to a certain round, while the second set in addition to these contains two strategy types that react differently on the first round action: The "Convincer" strategies insist with two rounds of initial cooperation, trying to establish more cooperative play in the game, while the "Follower" strategies, although being first round defectors, have the capability to respond to an invite in the first round. For both of these strategy sets, iterated elimination of strategies shows that the only Nash equilibria are given by defection from the first round. We show that the evolutionary dynamics of the first set is always characterised by a stable fixed point, corresponding to the Nash equilibrium, if the mutation rate is sufficiently small (but still positive). The second strategy set is numerically investigated, and we find that there are regions of parameter space where fixed points become unstable and the dynamics exhibits cycles of different strategy compositions. The results indicate that, even in the limit of very small mutation rate, the replicator-mutation dynamics does not necessarily bring the system with Convincers and Followers to the fixed point corresponding to the Nash equilibrium of the game.
1812.11117
Jose Fontanari
Guilherme M. Lopes and Jos\'e F. Fontanari
Influence of technological progress and renewability on the sustainability of ecosystem engineers populations
null
Mathematical Biosciences and Engineering 16, 3450-3464 ( 2019)
10.3934/mbe.2019173
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Overpopulation and environmental degradation due to inadequate resource-use are outcomes of human's ecosystem engineering that has profoundly modified the world's landscape. Despite the age-old concern that unchecked population and economic growth may be unsustainable, the prospect of societal collapse remains contentious today. Contrasting with the usual approach to modeling human-nature interactions, which are based on the Lotka-Volterra predator-prey model with humans as the predators and nature as the prey, here we address this issue using a discrete-time population dynamics model of ecosystem engineers. The growth of the population of engineers is modeled by the Beverton-Holt equation with a density-dependent carrying capacity that is proportional to the number of usable habitats. These habitats (e.g., farms) are the products of the work of the individuals on the virgin habitats (e.g., native forests), hence the denomination engineers of ecosystems to those agents. The human-made habitats decay into degraded habitats, which eventually regenerate into virgin habitats. For slow regeneration resources, we find that the dynamics is dominated by cycles of prosperity and collapse, in which the population reaches vanishing small densities. However, increase of the efficiency of the engineers to explore the resources eliminates the dangerous cyclical patterns of feast and famine and leads to a stable equilibrium that balances population growth and resource availability. This finding supports the viewpoint of growth optimists that technological progress may avoid collapse.
[ { "created": "Fri, 28 Dec 2018 17:14:41 GMT", "version": "v1" } ]
2019-04-26
[ [ "Lopes", "Guilherme M.", "" ], [ "Fontanari", "José F.", "" ] ]
Overpopulation and environmental degradation due to inadequate resource-use are outcomes of human's ecosystem engineering that has profoundly modified the world's landscape. Despite the age-old concern that unchecked population and economic growth may be unsustainable, the prospect of societal collapse remains contentious today. Contrasting with the usual approach to modeling human-nature interactions, which are based on the Lotka-Volterra predator-prey model with humans as the predators and nature as the prey, here we address this issue using a discrete-time population dynamics model of ecosystem engineers. The growth of the population of engineers is modeled by the Beverton-Holt equation with a density-dependent carrying capacity that is proportional to the number of usable habitats. These habitats (e.g., farms) are the products of the work of the individuals on the virgin habitats (e.g., native forests), hence the denomination engineers of ecosystems to those agents. The human-made habitats decay into degraded habitats, which eventually regenerate into virgin habitats. For slow regeneration resources, we find that the dynamics is dominated by cycles of prosperity and collapse, in which the population reaches vanishing small densities. However, increase of the efficiency of the engineers to explore the resources eliminates the dangerous cyclical patterns of feast and famine and leads to a stable equilibrium that balances population growth and resource availability. This finding supports the viewpoint of growth optimists that technological progress may avoid collapse.
2003.13543
Marc Foretz
Camille Huet (IC UM3), Nadia Boudaba (IC UM3), Bruno Guigas, Benoit Viollet (IC UM3), Marc Foretz (IC UM3)
Glucose availability but not changes in pancreatic hormones sensitizes hepatic AMPK activity during nutritional transition in rodents
Journal of Biological Chemistry, American Society for Biochemistry and Molecular Biology, 2020
null
10.1074/jbc.RA119.010244
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The cellular energy sensor AMP-activated protein kinase (AMPK) is a metabolic regulator that mediates adaptation to nutritional variations in order to maintain a proper energy balance in cells. We show here that suckling-weaning and fasting-refeeding transitions in rodents are associated with changes in AMPK activation and the cellular energy state in the liver. These nutritional transitions were characterized by a metabolic switch from lipid to glucose utilization, orchestrated by modifications in glucose levels and the glucagon:insulin ratio in the bloodstream. We therefore investigated the respective roles of glucose and pancreatic hormones on AMPK activation in mouse primary hepatocytes. We found that glucose starvation transiently activates AMPK, whereas changes in glucagon and insulin levels had no impact on AMPK. Challenge of hepatocytes with metformin-induced metabolic stress strengthened both AMPK activation and cellular energy depletion under limited-glucose conditions, whereas neither glucagon nor insulin altered AMPK activation. Although both insulin and glucagon induced AMPK$\alpha$ phosphorylation at its Ser-485/491 residue, they did not affect its activity. Finally, the decrease in cellular ATP levels in response to an energy stress was additionally exacerbated under fasting conditions and by AMPK deficiency in hepatocytes, revealing metabolic inflexibility and emphasizing the importance of AMPK for maintaining hepatic energy charge. Our results suggest that nutritional changes (i.e. glucose availability), rather than the related hormonal changes (i.e. the glucagon:insulin ratio), sensitize AMPK activation to the energetic stress induced by the dietary transition during fasting. This effect is critical for preserving the cellular energy state in the liver.
[ { "created": "Fri, 20 Mar 2020 10:15:36 GMT", "version": "v1" } ]
2020-03-31
[ [ "Huet", "Camille", "", "IC UM3" ], [ "Boudaba", "Nadia", "", "IC UM3" ], [ "Guigas", "Bruno", "", "IC UM3" ], [ "Viollet", "Benoit", "", "IC UM3" ], [ "Foretz", "Marc", "", "IC UM3" ] ]
The cellular energy sensor AMP-activated protein kinase (AMPK) is a metabolic regulator that mediates adaptation to nutritional variations in order to maintain a proper energy balance in cells. We show here that suckling-weaning and fasting-refeeding transitions in rodents are associated with changes in AMPK activation and the cellular energy state in the liver. These nutritional transitions were characterized by a metabolic switch from lipid to glucose utilization, orchestrated by modifications in glucose levels and the glucagon:insulin ratio in the bloodstream. We therefore investigated the respective roles of glucose and pancreatic hormones on AMPK activation in mouse primary hepatocytes. We found that glucose starvation transiently activates AMPK, whereas changes in glucagon and insulin levels had no impact on AMPK. Challenge of hepatocytes with metformin-induced metabolic stress strengthened both AMPK activation and cellular energy depletion under limited-glucose conditions, whereas neither glucagon nor insulin altered AMPK activation. Although both insulin and glucagon induced AMPK$\alpha$ phosphorylation at its Ser-485/491 residue, they did not affect its activity. Finally, the decrease in cellular ATP levels in response to an energy stress was additionally exacerbated under fasting conditions and by AMPK deficiency in hepatocytes, revealing metabolic inflexibility and emphasizing the importance of AMPK for maintaining hepatic energy charge. Our results suggest that nutritional changes (i.e. glucose availability), rather than the related hormonal changes (i.e. the glucagon:insulin ratio), sensitize AMPK activation to the energetic stress induced by the dietary transition during fasting. This effect is critical for preserving the cellular energy state in the liver.
0906.4549
David Jasnow
Chun-Chung Chen, David Jasnow
Mean-field theory of a plastic network of integrate-and-fire neurons
13 pages, 7 figures
Phys. Rev. E 81, 011907 (2010)
10.1103/PhysRevE.81.011907
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a noise-driven network of integrate-and-fire neurons. The network evolves as a result of the activities of the neurons, following spike-timing-dependent plasticity rules. We apply a self-consistent mean-field theory to the system to obtain the mean activity level as a function of the mean synaptic weight, which predicts a first-order transition and hysteresis between a noise-dominated regime and a regime of persistent neural activity. Assuming Poisson firing statistics for the neurons, the plasticity dynamics of a synapse under the influence of the mean-field environment can be mapped to the dynamics of an asymmetric random walk in synaptic-weight space. Using a master equation for small steps, we predict a narrow distribution of synaptic weights that scales with the square root of the plasticity rate for the stationary state of the system, given plausible physiological parameter values describing neural transmission and plasticity. The dependence of the distribution on the synaptic weight of the mean-field environment allows us to determine the mean synaptic weight self-consistently. The effects of fluctuations in the total synaptic conductance and in the plasticity step sizes are also considered. Such fluctuations result in a smoothing of the first-order transition for a low number of afferent synapses per neuron and a broadening of the synaptic weight distribution, respectively.
[ { "created": "Wed, 24 Jun 2009 19:54:34 GMT", "version": "v1" }, { "created": "Fri, 13 Nov 2009 18:20:30 GMT", "version": "v2" } ]
2010-02-05
[ [ "Chen", "Chun-Chung", "" ], [ "Jasnow", "David", "" ] ]
We consider a noise-driven network of integrate-and-fire neurons. The network evolves as a result of the activities of the neurons, following spike-timing-dependent plasticity rules. We apply a self-consistent mean-field theory to the system to obtain the mean activity level as a function of the mean synaptic weight, which predicts a first-order transition and hysteresis between a noise-dominated regime and a regime of persistent neural activity. Assuming Poisson firing statistics for the neurons, the plasticity dynamics of a synapse under the influence of the mean-field environment can be mapped to the dynamics of an asymmetric random walk in synaptic-weight space. Using a master equation for small steps, we predict a narrow distribution of synaptic weights that scales with the square root of the plasticity rate for the stationary state of the system, given plausible physiological parameter values describing neural transmission and plasticity. The dependence of the distribution on the synaptic weight of the mean-field environment allows us to determine the mean synaptic weight self-consistently. The effects of fluctuations in the total synaptic conductance and in the plasticity step sizes are also considered. Such fluctuations result in a smoothing of the first-order transition for a low number of afferent synapses per neuron and a broadening of the synaptic weight distribution, respectively.
2007.01953
Sebastian Schreiber
Sebastian J. Schreiber
The $P^*$ rule in the stochastic Holt-Lawton model of apparent competition
10 pages, 1 figure
null
null
null
q-bio.PE math.DS math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In 1993, Holt and Lawton introduced a stochastic model of two host species parasitized by a common parasitoid species. We introduce and analyze a generalization of these stochastic difference equations with any number of host species, stochastically varying parasitism rates, stochastically varying host intrinsic fitnesses, and stochastic immigration of parasitoids. Despite the lack of direct host density-dependence, we show that this system is dissipative, i.e., it enters a compact set in finite time for all initial conditions. When there is a single host species, stochastic persistence and extinction of the host are characterized using external Lyapunov exponents corresponding to the average per-capita growth rates of the host when rare. When a single host persists, say species $i$, an explicit expression is derived for the average density, $P_i^*$, of the parasitoid at the stationary distributions supporting both species. When there are multiple host species, we prove that the host species with the largest $P_i^*$ value stochastically persists, while the other host species are asymptotically driven to extinction. A review of the main mathematical methods used to prove the results and of future challenges is given.
[ { "created": "Fri, 3 Jul 2020 22:16:35 GMT", "version": "v1" }, { "created": "Mon, 30 Nov 2020 22:20:07 GMT", "version": "v2" } ]
2020-12-02
[ [ "Schreiber", "Sebastian J.", "" ] ]
In 1993, Holt and Lawton introduced a stochastic model of two host species parasitized by a common parasitoid species. We introduce and analyze a generalization of these stochastic difference equations with any number of host species, stochastically varying parasitism rates, stochastically varying host intrinsic fitnesses, and stochastic immigration of parasitoids. Despite the lack of direct host density-dependence, we show that this system is dissipative, i.e., it enters a compact set in finite time for all initial conditions. When there is a single host species, stochastic persistence and extinction of the host are characterized using external Lyapunov exponents corresponding to the average per-capita growth rates of the host when rare. When a single host persists, say species $i$, an explicit expression is derived for the average density, $P_i^*$, of the parasitoid at the stationary distributions supporting both species. When there are multiple host species, we prove that the host species with the largest $P_i^*$ value stochastically persists, while the other host species are asymptotically driven to extinction. A review of the main mathematical methods used to prove the results and of future challenges is given.
1702.00759
Jon Chapman
Jake P. Taylor-King, David Basanta, S. Jonathan Chapman and Mason A. Porter
A Mean-Field Approach to Evolving Spatial Networks, with an Application to Osteocyte Network Formation
null
Phys. Rev. E 96, 012301 (2017)
10.1103/PhysRevE.96.012301
null
q-bio.QM cond-mat.dis-nn cond-mat.stat-mech nlin.AO q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider evolving networks in which each node can have various associated properties (a state) in addition to those that arise from network structure. For example, each node can have a spatial location and a velocity, or some more abstract internal property that describes something like a social trait. Edges between nodes are created and destroyed, and new nodes enter the system. We introduce a "local state degree distribution" (LSDD) as the degree distribution at a particular point in state space. We then make a mean-field assumption and thereby derive an integro-partial differential equation that is satisfied by the LSDD. We perform numerical experiments and find good agreement between solutions of the integro-differential equation and the LSDD from stochastic simulations of the full model. To illustrate our theory, we apply it to a simple continuum model for osteocyte network formation within bones, with a view to understanding changes that may take place during cancer. Our results suggest that increased rates of differentiation lead to higher densities of osteocytes but with a lower number of dendrites. To help provide biological context, we also include an introduction to osteocytes, the formation of osteocyte networks, and the role of osteocytes in bone metastasis.
[ { "created": "Tue, 31 Jan 2017 10:18:02 GMT", "version": "v1" } ]
2017-07-12
[ [ "Taylor-King", "Jake P.", "" ], [ "Basanta", "David", "" ], [ "Chapman", "S. Jonathan", "" ], [ "Porter", "Mason A.", "" ] ]
We consider evolving networks in which each node can have various associated properties (a state) in addition to those that arise from network structure. For example, each node can have a spatial location and a velocity, or some more abstract internal property that describes something like a social trait. Edges between nodes are created and destroyed, and new nodes enter the system. We introduce a "local state degree distribution" (LSDD) as the degree distribution at a particular point in state space. We then make a mean-field assumption and thereby derive an integro-partial differential equation that is satisfied by the LSDD. We perform numerical experiments and find good agreement between solutions of the integro-differential equation and the LSDD from stochastic simulations of the full model. To illustrate our theory, we apply it to a simple continuum model for osteocyte network formation within bones, with a view to understanding changes that may take place during cancer. Our results suggest that increased rates of differentiation lead to higher densities of osteocytes but with a lower number of dendrites. To help provide biological context, we also include an introduction to osteocytes, the formation of osteocyte networks, and the role of osteocytes in bone metastasis.
1909.01915
Mark Dubbelman
Mark A. Dubbelman, Merike Verrijp, David Facal, Gonzalo S\'anchez-Benavides, Laura J.E. Brown, Wiesje M. van der Flier, Hanna Jokinen, Athene Lee, Iracema Leroi, Cristina Lojo-Seoane, Vuk Milosevic, Jos\'e Lu\'is Molinuevo, Arturo X. Pereiro Rozas, Craig Ritchie, Stephen Salloway, Gemma Stringer, Stelios Zygouris, Bruno Dubois, St\'ephane Epelbaum, Philip Scheltens, Sietske A.M. Sikkes
The influence of diversity on the measurement of functional impairment: An international validation of the Amsterdam IADL Questionnaire in 8 countries
null
null
10.1002/dad2.12021
null
q-bio.NC stat.AP
http://creativecommons.org/licenses/by-nc-sa/4.0/
INTRODUCTION: To understand the potential influence of diversity on the measurement of functional impairment in dementia, we aimed to investigate possible bias caused by age, gender, education, and cultural differences. METHODS: 3,571 individuals (67.1 {\pm} 9.5 years old, 44.7% female) from the Netherlands, Spain, France, United States, United Kingdom, Greece, Serbia and Finland were included. Functional impairment was measured using the Amsterdam IADL Questionnaire. Item bias was assessed using differential item functioning (DIF) analysis. RESULTS: There were some differences in activity endorsement. A few items showed statistically significant DIF. However, there was no evidence of meaningful item bias: effect sizes were low ({\Delta}R2 range 0-0.03). Impact on total scores was minimal. DISCUSSION: The results imply a limited bias for age, gender, education and culture in the measurement of functional impairment. This study provides an important step in recognizing the potential influence of diversity on primary outcomes in dementia research.
[ { "created": "Wed, 4 Sep 2019 16:08:14 GMT", "version": "v1" }, { "created": "Wed, 29 Jan 2020 07:48:06 GMT", "version": "v2" } ]
2020-05-28
[ [ "Dubbelman", "Mark A.", "" ], [ "Verrijp", "Merike", "" ], [ "Facal", "David", "" ], [ "Sánchez-Benavides", "Gonzalo", "" ], [ "Brown", "Laura J. E.", "" ], [ "van der Flier", "Wiesje M.", "" ], [ "Jokinen", "Hanna", "" ], [ "Lee", "Athene", "" ], [ "Leroi", "Iracema", "" ], [ "Lojo-Seoane", "Cristina", "" ], [ "Milosevic", "Vuk", "" ], [ "Molinuevo", "José Luís", "" ], [ "Rozas", "Arturo X. Pereiro", "" ], [ "Ritchie", "Craig", "" ], [ "Salloway", "Stephen", "" ], [ "Stringer", "Gemma", "" ], [ "Zygouris", "Stelios", "" ], [ "Dubois", "Bruno", "" ], [ "Epelbaum", "Stéphane", "" ], [ "Scheltens", "Philip", "" ], [ "Sikkes", "Sietske A. M.", "" ] ]
INTRODUCTION: To understand the potential influence of diversity on the measurement of functional impairment in dementia, we aimed to investigate possible bias caused by age, gender, education, and cultural differences. METHODS: 3,571 individuals (67.1 {\pm} 9.5 years old, 44.7% female) from the Netherlands, Spain, France, United States, United Kingdom, Greece, Serbia and Finland were included. Functional impairment was measured using the Amsterdam IADL Questionnaire. Item bias was assessed using differential item functioning (DIF) analysis. RESULTS: There were some differences in activity endorsement. A few items showed statistically significant DIF. However, there was no evidence of meaningful item bias: effect sizes were low ({\Delta}R2 range 0-0.03). Impact on total scores was minimal. DISCUSSION: The results imply a limited bias for age, gender, education and culture in the measurement of functional impairment. This study provides an important step in recognizing the potential influence of diversity on primary outcomes in dementia research.
1209.2096
Nicholas Eriksson
Nicholas Eriksson, Shirley Wu, Chuong B. Do, Amy K. Kiefer, Joyce Y. Tung, Joanna L. Mountain, David A. Hinds and Uta Francke
A genetic variant near olfactory receptor genes influences cilantro preference
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The leaves of the Coriandrum sativum plant, known as cilantro or coriander, are widely used in many cuisines around the world. However, far from being a benign culinary herb, cilantro can be polarizing---many people love it while others claim that it tastes or smells foul, often like soap or dirt. This soapy or pungent aroma is largely attributed to several aldehydes present in cilantro. Cilantro preference is suspected to have a genetic component, yet to date nothing is known about specific mechanisms. Here we present the results of a genome-wide association study among 14,604 participants of European ancestry who reported whether cilantro tasted soapy, with replication in a distinct set of 11,851 participants who declared whether they liked cilantro. We find a single nucleotide polymorphism (SNP) significantly associated with soapy-taste detection that is confirmed in the cilantro preference group. This SNP, rs72921001 (p=6.4e-9, odds ratio 0.81 per A allele), lies within a cluster of olfactory receptor genes on chromosome 11. Among these olfactory receptor genes is OR6A2, which has a high binding specificity for several of the aldehydes that give cilantro its characteristic odor. We also estimate the heritability of cilantro soapy-taste detection in our cohort, showing that the heritability tagged by common SNPs is low, about 0.087. These results confirm that there is a genetic component to cilantro taste perception and suggest that cilantro dislike may stem from genetic variants in olfactory receptors. We propose that OR6A2 may be the olfactory receptor that contributes to the detection of a soapy smell from cilantro in European populations.
[ { "created": "Mon, 10 Sep 2012 19:08:28 GMT", "version": "v1" } ]
2012-09-11
[ [ "Eriksson", "Nicholas", "" ], [ "Wu", "Shirley", "" ], [ "Do", "Chuong B.", "" ], [ "Kiefer", "Amy K.", "" ], [ "Tung", "Joyce Y.", "" ], [ "Mountain", "Joanna L.", "" ], [ "Hinds", "David A.", "" ], [ "Francke", "Uta", "" ] ]
The leaves of the Coriandrum sativum plant, known as cilantro or coriander, are widely used in many cuisines around the world. However, far from being a benign culinary herb, cilantro can be polarizing---many people love it while others claim that it tastes or smells foul, often like soap or dirt. This soapy or pungent aroma is largely attributed to several aldehydes present in cilantro. Cilantro preference is suspected to have a genetic component, yet to date nothing is known about specific mechanisms. Here we present the results of a genome-wide association study among 14,604 participants of European ancestry who reported whether cilantro tasted soapy, with replication in a distinct set of 11,851 participants who declared whether they liked cilantro. We find a single nucleotide polymorphism (SNP) significantly associated with soapy-taste detection that is confirmed in the cilantro preference group. This SNP, rs72921001 (p=6.4e-9, odds ratio 0.81 per A allele), lies within a cluster of olfactory receptor genes on chromosome 11. Among these olfactory receptor genes is OR6A2, which has a high binding specificity for several of the aldehydes that give cilantro its characteristic odor. We also estimate the heritability of cilantro soapy-taste detection in our cohort, showing that the heritability tagged by common SNPs is low, about 0.087. These results confirm that there is a genetic component to cilantro taste perception and suggest that cilantro dislike may stem from genetic variants in olfactory receptors. We propose that OR6A2 may be the olfactory receptor that contributes to the detection of a soapy smell from cilantro in European populations.
0910.1736
Henry Herce D
H. D. Herce, A. E. Garcia, J. Litt, R. S. Kane, P. Martin, N. Enrique, A. Rebolledo, and V. Milesi
Arginine-rich peptides destabilize the plasma membrane, consistent with a pore formation translocation mechanism of cell penetrating peptides
This is an extended version of the published manuscript, which had to be shortened before publication to fit within the number of pages required by the journal
Biophysical Journal, Volume 97, Issue 7, 7 October 2009, Pages 1917-1925
10.1016/j.bpj.2009.05.066
null
q-bio.BM q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent molecular dynamics simulations (Herce and Garcia, PNAS, 104: 20805 (2007)) have suggested that the arginine-rich HIV Tat peptides might be able to translocate by destabilizing and inducing transient pores in phospholipid bilayers. In this pathway for peptide translocation, arginine residues play a fundamental role not only in the binding of the peptide to the surface of the membrane but also in the destabilization and nucleation of transient pores across the bilayer, despite being charged and highly hydrophilic. Here we present a molecular dynamics simulation of a peptide composed of nine arginines (Arg-9) that shows that this peptide follows the same translocation pathway previously found for the Tat peptide. We test this hypothesis experimentally by measuring ionic currents across phospholipid bilayers and cell membranes through the pores induced by Arg-9 peptides. We find that Arg-9 peptides, in the presence of an electrostatic potential gradient, induce ionic currents across planar phospholipid bilayers, as well as in cultured osteosarcoma cells and human smooth muscle cells freshly isolated from the umbilical artery. Our results suggest that the mechanism of action of Arg-9 peptide involves the creation of transient pores in lipid bilayers and cell membranes.
[ { "created": "Fri, 9 Oct 2009 13:28:20 GMT", "version": "v1" } ]
2009-10-12
[ [ "Herce", "H. D.", "" ], [ "Garcia", "A. E.", "" ], [ "Litt", "J.", "" ], [ "Kane", "R. S.", "" ], [ "Martin", "P.", "" ], [ "Enrique", "N.", "" ], [ "Rebolledo", "A.", "" ], [ "Milesi", "V.", "" ] ]
Recent molecular dynamics simulations (Herce and Garcia, PNAS, 104: 20805 (2007)) have suggested that the arginine-rich HIV Tat peptides might be able to translocate by destabilizing and inducing transient pores in phospholipid bilayers. In this pathway for peptide translocation, arginine residues play a fundamental role not only in the binding of the peptide to the surface of the membrane but also in the destabilization and nucleation of transient pores across the bilayer, despite being charged and highly hydrophilic. Here we present a molecular dynamics simulation of a peptide composed of nine arginines (Arg-9) that shows that this peptide follows the same translocation pathway previously found for the Tat peptide. We test this hypothesis experimentally by measuring ionic currents across phospholipid bilayers and cell membranes through the pores induced by Arg-9 peptides. We find that Arg-9 peptides, in the presence of an electrostatic potential gradient, induce ionic currents across planar phospholipid bilayers, as well as in cultured osteosarcoma cells and human smooth muscle cells freshly isolated from the umbilical artery. Our results suggest that the mechanism of action of Arg-9 peptide involves the creation of transient pores in lipid bilayers and cell membranes.
1606.07779
Sonali Sali
Sonali S. Sali
Natural calcium carbonate for biomedical applications
M.Tech (Biotechnology) Dissertation Thesis project
null
null
null
q-bio.OT
http://creativecommons.org/publicdomain/zero/1.0/
Sea shells are a very rich natural resource for calcium carbonate. Sea shells are made up of CaCO3, mainly in the aragonite form, as columnar, fibrous, or microsphere-structured crystals. The bioactivity of nanoparticles of sea shell has been studied in this work. The sea shells collected were thoroughly washed, dried, and pulverized. The powder was sieved and particles in the range of 45 to 63 microns were collected. The powdered sea shells were characterized using X-Ray Diffraction (XRD) and Field Emission Scanning Electron Microscopy (FESEM). The XRD data showed that the particles were mainly microspheres. Traces of calcite and vaterite were also present. Experiments were conducted to study aspirin and strontium ranelate drug loading into the sea shell powder using a soak-and-dry method. Different concentrations of drug solution were made in ethanol and water. The shell powder was soaked in the drug solutions for 48 hrs with intermittent ultrasonication. The mixture was gently dried in a vacuum oven. The in vitro drug release studies were done using Phosphate Buffered Saline. The FESEM images displayed a distribution of differently sized and shaped particles. The sea shells, due to their natural porosity and crystallinity, are expected to be useful for drug delivery. A drug entrapment efficiency of about 50% for aspirin and 39% for strontium ranelate was seen. A burst release of the drug (80 percent) was observed within two hours for both drugs studied. The rest of the drug was released slowly over 19 hrs. Further modification of the sea shell with non-toxic polymers is also planned as a part of this work. Sea shell powder is thus a potential candidate for drug delivery due to all the aforementioned advantages.
[ { "created": "Thu, 23 Jun 2016 13:01:43 GMT", "version": "v1" } ]
2016-06-27
[ [ "Sali", "Sonali S.", "" ] ]
Sea shells are a very rich natural resource for calcium carbonate. Sea shells are made up of CaCO3, mainly in the aragonite form, as columnar, fibrous, or microsphere-structured crystals. The bioactivity of nanoparticles of sea shell has been studied in this work. The sea shells collected were thoroughly washed, dried, and pulverized. The powder was sieved and particles in the range of 45 to 63 microns were collected. The powdered sea shells were characterized using X-Ray Diffraction (XRD) and Field Emission Scanning Electron Microscopy (FESEM). The XRD data showed that the particles were mainly microspheres. Traces of calcite and vaterite were also present. Experiments were conducted to study aspirin and strontium ranelate drug loading into the sea shell powder using a soak-and-dry method. Different concentrations of drug solution were made in ethanol and water. The shell powder was soaked in the drug solutions for 48 hrs with intermittent ultrasonication. The mixture was gently dried in a vacuum oven. The in vitro drug release studies were done using Phosphate Buffered Saline. The FESEM images displayed a distribution of differently sized and shaped particles. The sea shells, due to their natural porosity and crystallinity, are expected to be useful for drug delivery. A drug entrapment efficiency of about 50% for aspirin and 39% for strontium ranelate was seen. A burst release of the drug (80 percent) was observed within two hours for both drugs studied. The rest of the drug was released slowly over 19 hrs. Further modification of the sea shell with non-toxic polymers is also planned as a part of this work. Sea shell powder is thus a potential candidate for drug delivery due to all the aforementioned advantages.
1211.2366
Jeremy Van Cleve
Jeremy Van Cleve and Erol Ak\c{c}ay
Pathways to social evolution: reciprocity, relatedness, and synergy
4 figures
null
10.1111/evo.12438
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many organisms live in populations structured by space and by class, exhibit plastic responses to their social partners, and are subject to non-additive ecological and fitness effects. Social evolution theory has long recognized that all of these factors can lead to different selection pressures but has only recently attempted to synthesize how these factors interact. Using models for both discrete and continuous phenotypes, we show that analyzing these factors in a consistent framework reveals that they interact with one another in ways previously overlooked. Specifically, behavioral responses (reciprocity), genetic relatedness, and synergy interact in non-trivial ways that cannot be easily captured by simple summary indices of assortment. We demonstrate the importance of these interactions by showing how they have been neglected in previous synthetic models of social behavior both within and between species. These interactions also affect the level of behavioral responses that can evolve in the long run; proximate biological mechanisms are evolutionarily stable when they generate enough responsiveness relative to the level of responsiveness that exactly balances the ecological costs and benefits. Given the richness of social behavior across taxa, these interactions should be a boon for empirical research as they are likely crucial for describing the complex relationship linking ecology, demography, and social behavior.
[ { "created": "Sun, 11 Nov 2012 01:51:59 GMT", "version": "v1" }, { "created": "Fri, 15 Nov 2013 21:13:51 GMT", "version": "v2" }, { "created": "Thu, 17 Apr 2014 04:08:47 GMT", "version": "v3" } ]
2014-04-24
[ [ "Van Cleve", "Jeremy", "" ], [ "Akçay", "Erol", "" ] ]
Many organisms live in populations structured by space and by class, exhibit plastic responses to their social partners, and are subject to non-additive ecological and fitness effects. Social evolution theory has long recognized that all of these factors can lead to different selection pressures but has only recently attempted to synthesize how these factors interact. Using models for both discrete and continuous phenotypes, we show that analyzing these factors in a consistent framework reveals that they interact with one another in ways previously overlooked. Specifically, behavioral responses (reciprocity), genetic relatedness, and synergy interact in non-trivial ways that cannot be easily captured by simple summary indices of assortment. We demonstrate the importance of these interactions by showing how they have been neglected in previous synthetic models of social behavior both within and between species. These interactions also affect the level of behavioral responses that can evolve in the long run; proximate biological mechanisms are evolutionarily stable when they generate enough responsiveness relative to the level of responsiveness that exactly balances the ecological costs and benefits. Given the richness of social behavior across taxa, these interactions should be a boon for empirical research as they are likely crucial for describing the complex relationship linking ecology, demography, and social behavior.
1705.05248
Bertha V\'azquez-Rodr\'iguez Bertha
Bertha V\'azquez-Rodr\'iguez, Andrea Avena-Koenigsberger, Olaf Sporns, Alessandra Griffa, Patric Hagmann, Hern\'an Larralde
Stochastic resonance and optimal information transfer at criticality on a network model of the human connectome
null
Scientific Reports 7, Article number: 13020 (2017)
10.1038/s41598-017-13400-5
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic resonance (SR) is a phenomenon in which noise enhances the response of a system to an input signal. The brain is an example of a system that has to detect and transmit signals in a noisy environment, suggesting that it is a good candidate to take advantage of SR. In this work, we aim to identify the optimal levels of noise that promote signal transmission through a simple network model of the human brain. Specifically, using a dynamic model implemented on an anatomical brain network (connectome), we investigate the similarity between an input signal and a signal that has traveled across the network while the system is subject to different noise levels. We find that non-zero levels of noise enhance the similarity between the input signal and the signal that has traveled through the system. The optimal noise level is not unique; rather, there is a set of parameter values at which the information is transmitted with greater precision, and this set corresponds to the parameter values that place the system in a critical regime. The multiplicity of critical points in our model allows it to adapt to different noise situations and remain at criticality.
[ { "created": "Fri, 12 May 2017 17:43:29 GMT", "version": "v1" }, { "created": "Thu, 18 May 2017 16:56:03 GMT", "version": "v2" } ]
2017-10-16
[ [ "Vázquez-Rodríguez", "Bertha", "" ], [ "Avena-Koenigsberger", "Andrea", "" ], [ "Sporns", "Olaf", "" ], [ "Griffa", "Alessandra", "" ], [ "Hagmann", "Patric", "" ], [ "Larralde", "Hernán", "" ] ]
Stochastic resonance (SR) is a phenomenon in which noise enhances the response of a system to an input signal. The brain is an example of a system that has to detect and transmit signals in a noisy environment, suggesting that it is a good candidate to take advantage of SR. In this work, we aim to identify the optimal levels of noise that promote signal transmission through a simple network model of the human brain. Specifically, using a dynamic model implemented on an anatomical brain network (connectome), we investigate the similarity between an input signal and a signal that has traveled across the network while the system is subject to different noise levels. We find that non-zero levels of noise enhance the similarity between the input signal and the signal that has traveled through the system. The optimal noise level is not unique; rather, there is a set of parameter values at which the information is transmitted with greater precision, and this set corresponds to the parameter values that place the system in a critical regime. The multiplicity of critical points in our model allows it to adapt to different noise situations and remain at criticality.
2012.09027
Radu Horaud P
Miles Hansard and Radu Horaud
A Differential Model of the Complex Cell
null
Neural Computation, 23(9), 2011
10.1162/NECO_a_00163
null
q-bio.NC cs.CV cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
The receptive fields of simple cells in the visual cortex can be understood as linear filters. These filters can be modelled by Gabor functions, or by Gaussian derivatives. Gabor functions can also be combined in an `energy model' of the complex cell response. This paper proposes an alternative model of the complex cell, based on Gaussian derivatives. It is most important to account for the insensitivity of the complex response to small shifts of the image. The new model uses a linear combination of the first few derivative filters, at a single position, to approximate the first derivative filter, at a series of adjacent positions. The maximum response, over all positions, gives a signal that is insensitive to small shifts of the image. This model, unlike previous approaches, is based on the scale space theory of visual processing. In particular, the complex cell is built from filters that respond to the two-dimensional differential structure of the image. The computational aspects of the new model are studied in one and two dimensions, using the steerability of the Gaussian derivatives. The response of the model to basic images, such as edges and gratings, is derived formally. The response to natural images is also evaluated, using statistical measures of shift insensitivity. The relevance of the new model to the cortical image representation is discussed.
[ { "created": "Wed, 9 Dec 2020 10:23:23 GMT", "version": "v1" } ]
2020-12-17
[ [ "Hansard", "Miles", "" ], [ "Horaud", "Radu", "" ] ]
The receptive fields of simple cells in the visual cortex can be understood as linear filters. These filters can be modelled by Gabor functions, or by Gaussian derivatives. Gabor functions can also be combined in an `energy model' of the complex cell response. This paper proposes an alternative model of the complex cell, based on Gaussian derivatives. It is most important to account for the insensitivity of the complex response to small shifts of the image. The new model uses a linear combination of the first few derivative filters, at a single position, to approximate the first derivative filter, at a series of adjacent positions. The maximum response, over all positions, gives a signal that is insensitive to small shifts of the image. This model, unlike previous approaches, is based on the scale space theory of visual processing. In particular, the complex cell is built from filters that respond to the two-dimensional differential structure of the image. The computational aspects of the new model are studied in one and two dimensions, using the steerability of the Gaussian derivatives. The response of the model to basic images, such as edges and gratings, is derived formally. The response to natural images is also evaluated, using statistical measures of shift insensitivity. The relevance of the new model to the cortical image representation is discussed.
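The mechanism this abstract describes can be sketched in one dimension: a Taylor combination of Gaussian derivative responses at a single position approximates the first-derivative response at nearby positions, and the maximum over those virtual shifts is nearly unchanged when the image moves slightly. This is a rough 1-D sketch under assumed parameters, not the paper's full steerable 2-D implementation.

```python
import numpy as np

def gaussian_derivs(sigma, radius=20, orders=4):
    """Sampled 1-D Gaussian kernel and its first few derivatives."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    kernels = [g]
    for _ in range(orders):
        kernels.append(np.gradient(kernels[-1]))   # finite-difference derivative
    return kernels

def complex_response(image, sigma=3.0, shifts=np.linspace(-3, 3, 13)):
    """Max over small shifts of a Taylor approximation to the shifted
    first-derivative response, built from derivatives at one position."""
    ks = gaussian_derivs(sigma)
    # responses of G', G'', G''' at every position
    r = [np.convolve(image, k, mode="same") for k in ks[1:4]]
    best = np.zeros_like(image)
    for s in shifts:
        # Taylor: response at offset s ~ r1 + s*r2 + (s^2/2)*r3
        approx = r[0] + s * r[1] + 0.5 * s**2 * r[2]
        best = np.maximum(best, np.abs(approx))
    return best

# An edge, and the same edge displaced by one pixel: the response of a
# "complex cell" at a fixed position should barely change.
n = 128
edge = (np.arange(n) > n // 2).astype(float)
resp_a = complex_response(edge)
resp_b = complex_response(np.roll(edge, 1))
```

The residual difference at a fixed position comes from truncating the Taylor series; adding higher derivatives (as the paper's "first few derivative filters" suggests) tightens the approximation.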
q-bio/0406033
Andrei Ludu
Nathan R. Hutchings, Andrei Ludu
A model for African trypanosome cell motility and quantitative description of flagellar dynamics
35 pages in pdf format, 6 figures included in text, and 4 additional movies, not uploaded here, but available at http://scitech.nsula.edu/IDEAS/
null
null
null
q-bio.QM physics.bio-ph q-bio.CB
null
A quantitative description of the flagellar dynamics in the procyclic T. brucei is presented in terms of stationary oscillations and traveling waves, using digital video microscopy to quantify the kinematics of trypanosome flagellar waveforms. A theoretical model is built starting from a Bernoulli-Euler flexural-torsional model of an elastic string with an internal distribution of force and torque. The dynamics is internally driven by the action of the molecular motors along the string, which is proportional to the local shift and consequently to the local curvature. The model equation is a nonlinear partial differential wave equation of order four, containing nonlinear terms specific to the Korteweg-de Vries (KdV) equation and the modified-KdV equation. For different ranges of parameters we obtained kink-like solitons, breather solitons, and a new class of solutions constructed from smoothly piece-wise connected arcs of conic functions (e.g. ellipses). The predicted amplitudes and wavelengths are in good agreement with experiments. We also present a hypothesis for a step-wise kinematical model of swimming of the procyclic African trypanosome.
[ { "created": "Wed, 16 Jun 2004 01:00:39 GMT", "version": "v1" } ]
2007-05-23
[ [ "Hutchings", "Nathan R.", "" ], [ "Ludu", "Andrei", "" ] ]
A quantitative description of the flagellar dynamics in the procyclic T. brucei is presented in terms of stationary oscillations and traveling waves, using digital video microscopy to quantify the kinematics of trypanosome flagellar waveforms. A theoretical model is built starting from a Bernoulli-Euler flexural-torsional model of an elastic string with an internal distribution of force and torque. The dynamics is internally driven by the action of the molecular motors along the string, which is proportional to the local shift and consequently to the local curvature. The model equation is a nonlinear partial differential wave equation of order four, containing nonlinear terms specific to the Korteweg-de Vries (KdV) equation and the modified-KdV equation. For different ranges of parameters we obtained kink-like solitons, breather solitons, and a new class of solutions constructed from smoothly piece-wise connected arcs of conic functions (e.g. ellipses). The predicted amplitudes and wavelengths are in good agreement with experiments. We also present a hypothesis for a step-wise kinematical model of swimming of the procyclic African trypanosome.
2201.08689
Samuel Okyere
Samuel Okyere, Joseph Ackora-Prah, Kwaku Forkuoh Darkwah, Francis Tabi Oduro and Ebenezer Bonyah
Fractional Optimal Control Model of SARS-CoV-2 (COVID-19) Disease in Ghana
The manuscript contains 22 figures and has 41 pages
Journal of Mathematics, vol. 2023, Article ID 3308529, 25 pages, 2023
10.1155/2023/3308529
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Optimal control problems based on fractional differential equations have been extensively applied in practice. However, because they remain open-ended and challenging, a number of problems in fractional mathematical modeling and optimal control require additional study. Using fractional-order derivatives defined in the Atangana-Baleanu-Caputo sense, we modify an integer-order model proposed in the literature. We establish the existence and uniqueness of the solution, the equilibrium points, the basic reproduction number, and the local stability of the equilibrium points. A numerical scheme for the operator was implemented to obtain simulations supporting the analytical conclusions. Fractional optimal controls were incorporated into the model to identify the most efficient intervention strategies for controlling the disease. The model is validated using actual data from Ghana for the months of March 2020 to March 2021. The simulation results show that the fractional operator significantly affected each compartment and that the incidence rate of the population rose when v>0.6. The examination of the most effective control techniques found that social exclusion and vaccination were both very effective methods for halting the development of the illness.
[ { "created": "Tue, 18 Jan 2022 18:41:11 GMT", "version": "v1" }, { "created": "Sun, 6 Feb 2022 09:21:44 GMT", "version": "v2" }, { "created": "Sat, 22 Apr 2023 19:52:18 GMT", "version": "v3" } ]
2023-04-25
[ [ "Okyere", "Samuel", "" ], [ "Ackora-Prah", "Joseph", "" ], [ "Darkwah", "Kwaku Forkuoh", "" ], [ "Oduro", "Francis Tabi", "" ], [ "Bonyah", "Ebenezer", "" ] ]
Optimal control problems based on fractional differential equations have been extensively applied in practice. However, because they remain open-ended and challenging, a number of problems in fractional mathematical modeling and optimal control require additional study. Using fractional-order derivatives defined in the Atangana-Baleanu-Caputo sense, we modify an integer-order model proposed in the literature. We establish the existence and uniqueness of the solution, the equilibrium points, the basic reproduction number, and the local stability of the equilibrium points. A numerical scheme for the operator was implemented to obtain simulations supporting the analytical conclusions. Fractional optimal controls were incorporated into the model to identify the most efficient intervention strategies for controlling the disease. The model is validated using actual data from Ghana for the months of March 2020 to March 2021. The simulation results show that the fractional operator significantly affected each compartment and that the incidence rate of the population rose when v>0.6. The examination of the most effective control techniques found that social exclusion and vaccination were both very effective methods for halting the development of the illness.
2004.08701
Sander Heinsalu
Sander Heinsalu
Infection arbitrage
null
null
null
null
q-bio.PE econ.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Increasing the infection risk early in an epidemic is individually and socially optimal under some parameter values. The reason is that the early patients recover or die before the peak of the epidemic, which flattens the peak. This improves welfare if the peak exceeds the capacity of the healthcare system and the social loss rises rapidly enough in the number infected. The individual incentive to get infected early comes from the greater likelihood of receiving treatment than at the peak when the disease has overwhelmed healthcare capacity. Calibration to the Covid-19 pandemic data suggests that catching the infection at the start was individually optimal and for some loss functions would have reduced the aggregate loss.
[ { "created": "Sat, 18 Apr 2020 20:39:46 GMT", "version": "v1" }, { "created": "Sun, 26 Apr 2020 17:16:25 GMT", "version": "v2" } ]
2020-04-28
[ [ "Heinsalu", "Sander", "" ] ]
Increasing the infection risk early in an epidemic is individually and socially optimal under some parameter values. The reason is that the early patients recover or die before the peak of the epidemic, which flattens the peak. This improves welfare if the peak exceeds the capacity of the healthcare system and the social loss rises rapidly enough in the number infected. The individual incentive to get infected early comes from the greater likelihood of receiving treatment than at the peak when the disease has overwhelmed healthcare capacity. Calibration to the Covid-19 pandemic data suggests that catching the infection at the start was individually optimal and for some loss functions would have reduced the aggregate loss.
2311.02029
Arvid Ernst Gollwitzer
Arvid E. Gollwitzer, Mohammed Alser, Joel Bergtholdt, Joel Lindegger, Maximilian-David Rumpf, Can Firtina, Serghei Mangul, Onur Mutlu
MetaTrinity: Enabling Fast Metagenomic Classification via Seed Counting and Edit Distance Approximation
null
null
null
null
q-bio.GN cs.AR q-bio.QM
http://creativecommons.org/licenses/by-sa/4.0/
Metagenomics, the study of genome sequences of diverse organisms cohabiting in a shared environment, has experienced significant advancements across various medical and biological fields. Metagenomic analysis is crucial, for instance, in clinical applications such as infectious disease screening and the diagnosis and early detection of diseases such as cancer. A key task in metagenomics is to determine the species present in a sample and their relative abundances. Currently, the field is dominated by either alignment-based tools, which offer high accuracy but are computationally expensive, or alignment-free tools, which are fast but lack the needed accuracy for many applications. In response to this dichotomy, we introduce MetaTrinity, a tool based on heuristics, to achieve a fundamental improvement in accuracy-runtime tradeoff over existing methods. We benchmark MetaTrinity against two leading metagenomic classifiers, each representing different ends of the performance-accuracy spectrum. On one end, Kraken2, a tool optimized for performance, shows modest accuracy yet a rapid runtime. The other end of the spectrum is governed by Metalign, a tool optimized for accuracy. Our evaluations show that MetaTrinity achieves an accuracy comparable to Metalign while gaining a 4x speedup without any loss in accuracy. This directly equates to a fourfold improvement in runtime-accuracy tradeoff. Compared to Kraken2, MetaTrinity requires a 5x longer runtime yet delivers a 17x improvement in accuracy. This demonstrates a 3.4x enhancement in the accuracy-runtime tradeoff for MetaTrinity. This dual comparison positions MetaTrinity as a broadly applicable solution for metagenomic classification, combining advantages of both ends of the spectrum: speed and accuracy. MetaTrinity is publicly available at https://github.com/CMU-SAFARI/MetaTrinity.
[ { "created": "Fri, 3 Nov 2023 16:58:32 GMT", "version": "v1" }, { "created": "Sat, 18 Nov 2023 18:06:52 GMT", "version": "v2" }, { "created": "Fri, 16 Feb 2024 12:06:00 GMT", "version": "v3" } ]
2024-02-19
[ [ "Gollwitzer", "Arvid E.", "" ], [ "Alser", "Mohammed", "" ], [ "Bergtholdt", "Joel", "" ], [ "Lindegger", "Joel", "" ], [ "Rumpf", "Maximilian-David", "" ], [ "Firtina", "Can", "" ], [ "Mangul", "Serghei", "" ], [ "Mutlu", "Onur", "" ] ]
Metagenomics, the study of genome sequences of diverse organisms cohabiting in a shared environment, has experienced significant advancements across various medical and biological fields. Metagenomic analysis is crucial, for instance, in clinical applications such as infectious disease screening and the diagnosis and early detection of diseases such as cancer. A key task in metagenomics is to determine the species present in a sample and their relative abundances. Currently, the field is dominated by either alignment-based tools, which offer high accuracy but are computationally expensive, or alignment-free tools, which are fast but lack the needed accuracy for many applications. In response to this dichotomy, we introduce MetaTrinity, a tool based on heuristics, to achieve a fundamental improvement in accuracy-runtime tradeoff over existing methods. We benchmark MetaTrinity against two leading metagenomic classifiers, each representing different ends of the performance-accuracy spectrum. On one end, Kraken2, a tool optimized for performance, shows modest accuracy yet a rapid runtime. The other end of the spectrum is governed by Metalign, a tool optimized for accuracy. Our evaluations show that MetaTrinity achieves an accuracy comparable to Metalign while gaining a 4x speedup without any loss in accuracy. This directly equates to a fourfold improvement in runtime-accuracy tradeoff. Compared to Kraken2, MetaTrinity requires a 5x longer runtime yet delivers a 17x improvement in accuracy. This demonstrates a 3.4x enhancement in the accuracy-runtime tradeoff for MetaTrinity. This dual comparison positions MetaTrinity as a broadly applicable solution for metagenomic classification, combining advantages of both ends of the spectrum: speed and accuracy. MetaTrinity is publicly available at https://github.com/CMU-SAFARI/MetaTrinity.
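The two-stage idea behind this kind of classifier (cheap alignment-free seeding to shortlist candidates, then an edit-distance check to confirm) can be sketched in a few lines. This is a toy illustration only: the read, the reference names, and the exact Levenshtein DP below are assumptions, not MetaTrinity's actual seeding or approximation scheme.

```python
def kmers(seq, k=4):
    """All k-length substrings of a sequence, as a set of seeds."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def seed_count(read, ref, k=4):
    """Number of shared k-mer seeds between a read and a reference."""
    return len(kmers(read, k) & kmers(ref, k))

def edit_distance(a, b):
    """Plain dynamic-programming Levenshtein distance (row-by-row)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

read = "ACGTACGGTA"
refs = {"speciesA": "ACGTACGGTAACG", "speciesB": "TTTTGGGGCCCC"}

# Stage 1: cheap seed counting shortlists the candidate reference.
best = max(refs, key=lambda r: seed_count(read, refs[r]))
# Stage 2: an edit-distance computation confirms the candidate.
dist = edit_distance(read, refs[best])
```

Real tools replace stage 2 with a cheaper approximation of the edit distance, which is where the accuracy-runtime tradeoff the abstract benchmarks comes from.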
1410.2421
Frank Stollmeier
Frank Stollmeier, Theo Geisel, Jan Nagler
Possible Origin of Stagnation and Variability of Earth's Biodiversity
5 pages, 6 pages supplement
Phys. Rev. Let. 112, 228101 (2014)
10.1103/PhysRevLett.112.228101
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The magnitude and variability of Earth's biodiversity have puzzled scientists ever since paleontologic fossil databases became available. We identify and study a model of interdependent species where both endogenous and exogenous impacts determine the nonstationary extinction dynamics. The framework provides an explanation for the qualitative difference of marine and continental biodiversity growth. In particular, the stagnation of marine biodiversity may result from a global transition from an imbalanced to a balanced state of the species dependency network. The predictions of our framework are in agreement with paleontologic databases.
[ { "created": "Thu, 9 Oct 2014 11:06:14 GMT", "version": "v1" } ]
2014-10-10
[ [ "Stollmeier", "Frank", "" ], [ "Geisel", "Theo", "" ], [ "Nagler", "Jan", "" ] ]
The magnitude and variability of Earth's biodiversity have puzzled scientists ever since paleontologic fossil databases became available. We identify and study a model of interdependent species where both endogenous and exogenous impacts determine the nonstationary extinction dynamics. The framework provides an explanation for the qualitative difference of marine and continental biodiversity growth. In particular, the stagnation of marine biodiversity may result from a global transition from an imbalanced to a balanced state of the species dependency network. The predictions of our framework are in agreement with paleontologic databases.
2003.06419
Ji Wang
Jun Zhang, Lihong Wang, Ji Wang
SIR Model-based Prediction of Infected Population of Coronavirus in Hubei Province
Chinese with English abstract. 5 Figures, 1 Table
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
After the sudden outbreak of the Coronavirus in Wuhan, continuous and rich epidemic data have been made public, providing a vital basis for decision support in control measures and the aggressive implementation of containment strategies and plans. As the virus continues to grow and spread, future resource allocation and planning under updated strategies and measures rely on careful study of the epidemic data and characteristics for accurate prediction and estimation. Using the SIR model and reported data, key parameters are obtained by minimising the sum of squared errors, giving an accurate prediction of the epidemic trend in the last four weeks.
[ { "created": "Tue, 10 Mar 2020 15:08:36 GMT", "version": "v1" } ]
2020-03-16
[ [ "Zhang", "Jun", "" ], [ "Wang", "Lihong", "" ], [ "Wang", "Ji", "" ] ]
After the sudden outbreak of the Coronavirus in Wuhan, continuous and rich epidemic data have been made public, providing a vital basis for decision support in control measures and the aggressive implementation of containment strategies and plans. As the virus continues to grow and spread, future resource allocation and planning under updated strategies and measures rely on careful study of the epidemic data and characteristics for accurate prediction and estimation. Using the SIR model and reported data, key parameters are obtained by minimising the sum of squared errors, giving an accurate prediction of the epidemic trend in the last four weeks.
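The fitting procedure this abstract describes (SIR integration plus least-sum-of-squared-errors parameter estimation) can be sketched as follows. The population size, grid ranges, and the synthetic "reported" data below are illustrative assumptions, not the Hubei data.

```python
import numpy as np

def sir(beta, gamma, I0, N, days):
    """Forward-Euler integration of the SIR model; returns the infected curve."""
    S, I, R = N - I0, float(I0), 0.0
    out = []
    for _ in range(days):
        new_inf = beta * S * I / N
        rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - rec, R + rec
        out.append(I)
    return np.array(out)

# Synthetic "reported" infected counts generated from known parameters:
true_beta, true_gamma = 0.4, 0.1
data = sir(true_beta, true_gamma, I0=10, N=1e5, days=60)

# Least sum of squared errors over a simple parameter grid:
grid = [(b, g)
        for b in np.arange(0.2, 0.61, 0.05)
        for g in np.arange(0.05, 0.21, 0.05)]

def sse(params):
    b, g = params
    return np.sum((sir(b, g, 10, 1e5, 60) - data) ** 2)

best_beta, best_gamma = min(grid, key=sse)
```

With real reported data one would typically replace the grid search with a continuous optimiser, but the objective (SSE between model and reports) is the same.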
1909.05203
Ana Osojnik
Ana Osojnik, Eamonn A. Gaffney, Michael Davies, James W. T. Yates, Helen M. Byrne
Identifying and characterising the impact of excitability in a mathematical model of tumour-immune interactions
29 pages, 15 figures
null
null
null
q-bio.TO math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a five-compartment mathematical model originally proposed by Kuznetsov et al. (1994) to investigate the effect of nonlinear interactions between tumour and immune cells in the tumour microenvironment, whereby immune cells may induce tumour cell death, and tumour cells may inactivate immune cells. Exploiting a separation of timescales in the model, we use the method of matched asymptotics to derive a new two-dimensional, long-timescale, approximation of the full model, which differs from the quasi-steady-state approximation introduced by Kuznetsov et al. (1994), but is validated against numerical solutions of the full model. Through a phase-plane analysis, we show that our reduced model is excitable, a feature not traditionally associated with tumour-immune dynamics. Through a systematic parameter sensitivity analysis, we demonstrate that excitability generates complex bifurcating dynamics in the model. These are consistent with a variety of clinically observed phenomena, and suggest that excitability may underpin tumour-immune interactions. The model exhibits the three stages of immunoediting - elimination, equilibrium, and escape, via stable steady states with different tumour cell concentrations. Such heterogeneity in tumour cell numbers can stem from variability in initial conditions and/or model parameters that control the properties of the immune system and its response to the tumour. We identify different biophysical parameter targets that could be manipulated with immunotherapy in order to control tumour size, and we find that preferred strategies may differ between patients depending on the strength of their immune systems, as determined by patient-specific values of associated model parameters.
[ { "created": "Tue, 3 Sep 2019 10:12:56 GMT", "version": "v1" } ]
2019-09-12
[ [ "Osojnik", "Ana", "" ], [ "Gaffney", "Eamonn A.", "" ], [ "Davies", "Michael", "" ], [ "Yates", "James W. T.", "" ], [ "Byrne", "Helen M.", "" ] ]
We study a five-compartment mathematical model originally proposed by Kuznetsov et al. (1994) to investigate the effect of nonlinear interactions between tumour and immune cells in the tumour microenvironment, whereby immune cells may induce tumour cell death, and tumour cells may inactivate immune cells. Exploiting a separation of timescales in the model, we use the method of matched asymptotics to derive a new two-dimensional, long-timescale, approximation of the full model, which differs from the quasi-steady-state approximation introduced by Kuznetsov et al. (1994), but is validated against numerical solutions of the full model. Through a phase-plane analysis, we show that our reduced model is excitable, a feature not traditionally associated with tumour-immune dynamics. Through a systematic parameter sensitivity analysis, we demonstrate that excitability generates complex bifurcating dynamics in the model. These are consistent with a variety of clinically observed phenomena, and suggest that excitability may underpin tumour-immune interactions. The model exhibits the three stages of immunoediting - elimination, equilibrium, and escape, via stable steady states with different tumour cell concentrations. Such heterogeneity in tumour cell numbers can stem from variability in initial conditions and/or model parameters that control the properties of the immune system and its response to the tumour. We identify different biophysical parameter targets that could be manipulated with immunotherapy in order to control tumour size, and we find that preferred strategies may differ between patients depending on the strength of their immune systems, as determined by patient-specific values of associated model parameters.
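The two-variable effector/tumour core of the Kuznetsov et al. (1994) model that this paper reduces to can be integrated directly. The parameter values below are the nondimensionalised ones commonly quoted for that model, used here only for illustration; this forward-Euler sketch is not the paper's matched-asymptotics reduction.

```python
import numpy as np

def tumour_immune(x0, y0, t_end=100.0, dt=0.01):
    """Forward-Euler integration of the nondimensionalised effector (x) /
    tumour (y) system of Kuznetsov et al. (1994). Parameter values are the
    commonly quoted nondimensional ones (illustrative, not fitted here)."""
    s, rho, eta, mu, delta = 0.1181, 1.131, 20.19, 0.00311, 0.3743
    alpha, beta = 1.636, 2.0e-3
    xs, ys = [float(x0)], [float(y0)]
    for _ in range(int(t_end / dt)):
        x, y = xs[-1], ys[-1]
        # effector recruitment, stimulation, inactivation, death
        dx = s + rho * x * y / (eta + y) - mu * x * y - delta * x
        # logistic tumour growth minus immune-mediated kill
        dy = alpha * y * (1 - beta * y) - x * y
        xs.append(x + dt * dx)
        ys.append(y + dt * dy)
    return np.array(xs), np.array(ys)

xs, ys = tumour_immune(x0=1.0, y0=50.0)
```

Varying the initial conditions moves trajectories between the low- and high-tumour attractors, which is the heterogeneity in steady-state tumour burden the abstract links to immunoediting.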
1304.3637
Trevor Bedford
Trevor Bedford, Marc A. Suchard, Philippe Lemey, Gytis Dudas, Victoria Gregory, Alan J. Hay, John W. McCauley, Colin A. Russell, Derek J. Smith, Andrew Rambaut
Integrating influenza antigenic dynamics with molecular evolution
32 pages, 13 figures, 2 tables
null
null
null
q-bio.PE q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Influenza viruses undergo continual antigenic evolution allowing mutant viruses to evade host immunity acquired to previous virus strains. Antigenic phenotype is often assessed through pairwise measurement of cross-reactivity between influenza strains using the hemagglutination inhibition (HI) assay. Here, we extend previous approaches to antigenic cartography, and simultaneously characterize antigenic and genetic evolution by modeling the diffusion of antigenic phenotype over a shared virus phylogeny. Using HI data from influenza lineages A/H3N2, A/H1N1, B/Victoria and B/Yamagata, we determine patterns of antigenic drift across viral lineages, showing that A/H3N2 evolves faster and in a more punctuated fashion than other influenza lineages. We also show that year-to-year antigenic drift appears to drive incidence patterns within each influenza lineage. This work makes possible substantial future advances in investigating the dynamics of influenza and other antigenically-variable pathogens by providing a model that intimately combines molecular and antigenic evolution.
[ { "created": "Fri, 12 Apr 2013 14:00:27 GMT", "version": "v1" }, { "created": "Fri, 20 Dec 2013 01:05:38 GMT", "version": "v2" } ]
2013-12-23
[ [ "Bedford", "Trevor", "" ], [ "Suchard", "Marc A.", "" ], [ "Lemey", "Philippe", "" ], [ "Dudas", "Gytis", "" ], [ "Gregory", "Victoria", "" ], [ "Hay", "Alan J.", "" ], [ "McCauley", "John W.", "" ], [ "Russell", "Colin A.", "" ], [ "Smith", "Derek J.", "" ], [ "Rambaut", "Andrew", "" ] ]
Influenza viruses undergo continual antigenic evolution allowing mutant viruses to evade host immunity acquired to previous virus strains. Antigenic phenotype is often assessed through pairwise measurement of cross-reactivity between influenza strains using the hemagglutination inhibition (HI) assay. Here, we extend previous approaches to antigenic cartography, and simultaneously characterize antigenic and genetic evolution by modeling the diffusion of antigenic phenotype over a shared virus phylogeny. Using HI data from influenza lineages A/H3N2, A/H1N1, B/Victoria and B/Yamagata, we determine patterns of antigenic drift across viral lineages, showing that A/H3N2 evolves faster and in a more punctuated fashion than other influenza lineages. We also show that year-to-year antigenic drift appears to drive incidence patterns within each influenza lineage. This work makes possible substantial future advances in investigating the dynamics of influenza and other antigenically-variable pathogens by providing a model that intimately combines molecular and antigenic evolution.
2004.02011
Ramon Grima
James Holehouse, Abhishek Gupta and Ramon Grima
Steady-state fluctuations of a genetic feedback loop with fluctuating rate parameters using the unified colored noise approximation
33 pages, 10 figures
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A common model of stochastic auto-regulatory gene expression describes promoter switching via cooperative protein binding, effective protein production in the active state and dilution of proteins. Here we consider an extension of this model whereby colored noise with a short correlation time is added to the reaction rate parameters -- we show that when the size and timescale of the noise is appropriately chosen it accounts for fast reactions that are not explicitly modelled, e.g., in models with no mRNA description, fluctuations in the protein production rate can account for rapid multiple stages of nuclear mRNA processing which precede translation in eukaryotes. We show how the unified colored noise approximation can be used to derive expressions for the protein number distribution that is in good agreement with stochastic simulations. We find that even when the noise in the rate parameters is small, the protein distributions predicted by our model can be significantly different than models assuming constant reaction rates.
[ { "created": "Sat, 4 Apr 2020 20:31:40 GMT", "version": "v1" } ]
2020-04-07
[ [ "Holehouse", "James", "" ], [ "Gupta", "Abhishek", "" ], [ "Grima", "Ramon", "" ] ]
A common model of stochastic auto-regulatory gene expression describes promoter switching via cooperative protein binding, effective protein production in the active state and dilution of proteins. Here we consider an extension of this model whereby colored noise with a short correlation time is added to the reaction rate parameters -- we show that when the size and timescale of the noise is appropriately chosen it accounts for fast reactions that are not explicitly modelled, e.g., in models with no mRNA description, fluctuations in the protein production rate can account for rapid multiple stages of nuclear mRNA processing which precede translation in eukaryotes. We show how the unified colored noise approximation can be used to derive expressions for the protein number distribution that is in good agreement with stochastic simulations. We find that even when the noise in the rate parameters is small, the protein distributions predicted by our model can be significantly different than models assuming constant reaction rates.
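The qualitative effect described here (a fluctuating rate parameter with short correlation time broadening the protein distribution) can be demonstrated with a minimal Euler-Maruyama sketch: protein dynamics dP/dt = k(t) - d*P with k(t) an Ornstein-Uhlenbeck process around a mean rate. This is a toy continuous approximation, not the paper's auto-regulatory model or the unified colored noise approximation itself; all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(tau, noise_amp, k0=10.0, d=0.1, t_end=1000.0, dt=0.01):
    """Protein level with an OU-fluctuating production rate k(t):
    correlation time tau, stationary std noise_amp around k0."""
    steps = int(t_end / dt)
    p, k = k0 / d, k0
    traj = np.empty(steps)
    for i in range(steps):
        # OU update for the colored-noise production rate
        k += (-(k - k0) / tau) * dt \
             + noise_amp * np.sqrt(2 * dt / tau) * rng.standard_normal()
        # protein production (rate clipped at zero) and dilution
        p += (max(k, 0.0) - d * p) * dt
        traj[i] = p
    return traj[steps // 4:]            # discard the initial transient

quiet = simulate(tau=1.0, noise_amp=0.0)   # constant-rate baseline
noisy = simulate(tau=1.0, noise_amp=3.0)   # colored noise on the rate
```

Even modest rate noise leaves the mean protein level near k0/d while visibly inflating the fluctuations, which is the distribution-shape effect the abstract emphasises.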
2405.02374
Oliver Bent
Arturo Fiorellini-Bernardis, Sebastien Boyer, Christoph Brunken, Bakary Diallo, Karim Beguir, Nicolas Lopez-Carranza, Oliver Bent
Protein binding affinity prediction under multiple substitutions applying eGNNs on Residue and Atomic graphs combined with Language model information: eGRAL
null
null
null
null
q-bio.QM cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Protein-protein interactions (PPIs) play a crucial role in numerous biological processes. Developing methods that predict binding affinity changes under substitution mutations is fundamental for modelling and re-engineering biological systems. Deep learning is increasingly recognized as a powerful tool capable of bridging the gap between in-silico predictions and in-vitro observations. With this contribution, we propose eGRAL, a novel SE(3) equivariant graph neural network (eGNN) architecture designed for predicting binding affinity changes from multiple amino acid substitutions in protein complexes. eGRAL leverages residue, atomic and evolutionary scales, thanks to features extracted from protein large language models. To address the limited availability of large-scale affinity assays with structural information, we generate a simulated dataset comprising approximately 500,000 data points. Our model is pre-trained on this dataset, then fine-tuned and tested on experimental data.
[ { "created": "Fri, 3 May 2024 10:33:19 GMT", "version": "v1" } ]
2024-05-07
[ [ "Fiorellini-Bernardis", "Arturo", "" ], [ "Boyer", "Sebastien", "" ], [ "Brunken", "Christoph", "" ], [ "Diallo", "Bakary", "" ], [ "Beguir", "Karim", "" ], [ "Lopez-Carranza", "Nicolas", "" ], [ "Bent", "Oliver", "" ] ]
Protein-protein interactions (PPIs) play a crucial role in numerous biological processes. Developing methods that predict binding affinity changes under substitution mutations is fundamental for modelling and re-engineering biological systems. Deep learning is increasingly recognized as a powerful tool capable of bridging the gap between in-silico predictions and in-vitro observations. With this contribution, we propose eGRAL, a novel SE(3) equivariant graph neural network (eGNN) architecture designed for predicting binding affinity changes from multiple amino acid substitutions in protein complexes. eGRAL leverages residue, atomic and evolutionary scales, thanks to features extracted from protein large language models. To address the limited availability of large-scale affinity assays with structural information, we generate a simulated dataset comprising approximately 500,000 data points. Our model is pre-trained on this dataset, then fine-tuned and tested on experimental data.
1904.03399
Bosiljka Tadic
Bosiljka Tadic, Miroslav Andjelkovic, Roderick Melnik
Functional Geometry of Human Connectome and Robustness of Gender Differences
13 pages, 7 figures
null
null
null
q-bio.NC cond-mat.dis-nn math.AT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mapping brain imaging data to networks, where each node represents a specific area of the brain, has enabled an objective graph-theoretic analysis of the human connectome. However, the latent structure of higher-order connections remains unexplored, where many brain regions acting in synergy perform complex functions. Here we analyse this hidden structure using a simplicial-complex parametrisation in which the shared faces of simplexes encode higher-order relationships between groups of nodes, together with the emerging hyperbolic geometry. Based on data collected within the Human Connectome Project, we perform a systematic analysis of consensus networks of 100 female (F-connectome) and 100 male (M-connectome) subjects by varying the number of fibres launched. Our analysis reveals that the functional geometry of the common F&M-connectome coincides with the M-connectome and is characterized by a complex architecture of simplexes up to the 14th order, organised in six anatomical communities with short cycles among them. Furthermore, the F-connectome has additional connections that involve different brain regions, thereby increasing the size of simplexes and introducing new cycles. By providing new insights into the internal organisation of anatomical brain modules, as well as into the links between them that are essential to dynamics, these results also highlight functional gender-related differences.
[ { "created": "Sat, 6 Apr 2019 09:37:27 GMT", "version": "v1" } ]
2019-04-09
[ [ "Tadic", "Bosiljka", "" ], [ "Andjelkovic", "Miroslav", "" ], [ "Melnik", "Roderick", "" ] ]
Mapping brain imaging data to networks, where each node represents a specific area of the brain, has enabled an objective graph-theoretic analysis of the human connectome. However, the latent structure of higher-order connections remains unexplored, where many brain regions acting in synergy perform complex functions. Here we analyse this hidden structure using a simplicial-complex parametrisation in which the shared faces of simplexes encode higher-order relationships between groups of nodes, together with the emerging hyperbolic geometry. Based on data collected within the Human Connectome Project, we perform a systematic analysis of consensus networks of 100 female (F-connectome) and 100 male (M-connectome) subjects by varying the number of fibres launched. Our analysis reveals that the functional geometry of the common F&M-connectome coincides with the M-connectome and is characterized by a complex architecture of simplexes up to the 14th order, organised in six anatomical communities with short cycles among them. Furthermore, the F-connectome has additional connections that involve different brain regions, thereby increasing the size of simplexes and introducing new cycles. By providing new insights into the internal organisation of anatomical brain modules, as well as into the links between them that are essential to dynamics, these results also highlight functional gender-related differences.
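The basic object this abstract works with (simplexes built on groups of mutually connected nodes, sharing faces) can be illustrated by enumerating the clique complex of a tiny toy graph. This is only a conceptual sketch: the node names and graph are made up, and the paper's pipeline uses real connectome data and a specific parametrisation, not this brute-force enumeration.

```python
from itertools import combinations

def clique_complex(nodes, edges, max_dim=3):
    """All simplices (complete subgraphs) of a graph, up to max_dim.
    A k-simplex is a set of k+1 mutually connected nodes."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    simplices = []
    for size in range(1, max_dim + 2):          # subsets of this many nodes
        for combo in combinations(nodes, size):
            if all(v in adj[u] for u, v in combinations(combo, 2)):
                simplices.append(combo)
    return simplices

# Toy "connectome": two triangles sharing the edge B-C (a shared face)
nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "C"), ("A", "C"), ("B", "D"), ("C", "D")]
simps = clique_complex(nodes, edges)
top = max(len(s) for s in simps) - 1            # highest simplex order
```

Here the two 2-simplices ("A","B","C") and ("B","C","D") share the face ("B","C"), the kind of higher-order relationship the parametrisation encodes; at connectome scale one uses dedicated topological libraries rather than this enumeration.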
2302.01187
Stephen Keeley
Stephen Keeley, Benjamin Letham, Chase Tymms, Craig Sanders, Michael Shvartsman
A Semi-Parametric Model for Decision Making in High-Dimensional Sensory Discrimination Tasks
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Psychometric functions typically characterize binary sensory decisions along a single stimulus dimension. However, real-life sensory tasks vary along a greater variety of dimensions (e.g. color, contrast and luminance for visual stimuli). Approaches to characterizing high-dimensional sensory spaces either require strong parametric assumptions about these additional contextual dimensions, or fail to leverage known properties of classical psychometric curves. We overcome both limitations by introducing a semi-parametric model of sensory discrimination that applies traditional psychophysical models along a stimulus intensity dimension, but puts Gaussian process (GP) priors on the parameters of these models with respect to the remaining dimensions. By combining the flexibility of the GP with the deep literature on parametric psychophysics, our semi-parametric models achieve good performance with much less data than baselines on both synthetic and real-world high-dimensional psychophysics datasets. We additionally show strong performance in a Bayesian active learning setting, and present a novel active learning paradigm for the semi-parametric model.
[ { "created": "Thu, 2 Feb 2023 16:14:16 GMT", "version": "v1" } ]
2023-02-03
[ [ "Keeley", "Stephen", "" ], [ "Letham", "Benjamin", "" ], [ "Tymms", "Chase", "" ], [ "Sanders", "Craig", "" ], [ "Shvartsman", "Michael", "" ] ]
Psychometric functions typically characterize binary sensory decisions along a single stimulus dimension. However, real-life sensory tasks vary along a greater variety of dimensions (e.g. color, contrast and luminance for visual stimuli). Approaches to characterizing high-dimensional sensory spaces either require strong parametric assumptions about these additional contextual dimensions, or fail to leverage known properties of classical psychometric curves. We overcome both limitations by introducing a semi-parametric model of sensory discrimination that applies traditional psychophysical models along a stimulus intensity dimension, but puts Gaussian process (GP) priors on the parameters of these models with respect to the remaining dimensions. By combining the flexibility of the GP with the deep literature on parametric psychophysics, our semi-parametric models achieve good performance with much less data than baselines on both synthetic and real-world high-dimensional psychophysics datasets. We additionally show strong performance in a Bayesian active learning setting, and present a novel active learning paradigm for the semi-parametric model.
1802.00840
Andrei Amatuni
Andrei Amatuni, Estelle He, Elika Bergelson
Preserved Structure Across Vector Space Representations
presented at CogSci 2018
null
null
null
q-bio.NC cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Certain concepts, words, and images are intuitively more similar than others (dog vs. cat, dog vs. spoon), though quantifying such similarity is notoriously difficult. Indeed, this kind of computation is likely a critical part of learning the category boundaries for words within a given language. Here, we use a set of 27 items (e.g. 'dog') that are highly common in infants' input, and use both image- and word-based algorithms to independently compute similarity among them. We find three key results. First, the pairwise item similarities derived within image-space and word-space are correlated, suggesting preserved structure among these extremely different representational formats. Second, the closest 'neighbors' for each item, within each space, showed significant overlap (e.g. both found 'egg' as a neighbor of 'apple'). Third, items with the most overlapping neighbors are later-learned by infants and toddlers. We conclude that this approach, which does not rely on human ratings of similarity, may nevertheless reflect stable within-class structure across these two spaces. We speculate that such invariance might aid lexical acquisition, by serving as an informative marker of category boundaries.
[ { "created": "Fri, 2 Feb 2018 20:35:36 GMT", "version": "v1" }, { "created": "Mon, 14 May 2018 21:11:13 GMT", "version": "v2" } ]
2018-05-16
[ [ "Amatuni", "Andrei", "" ], [ "He", "Estelle", "" ], [ "Bergelson", "Elika", "" ] ]
Certain concepts, words, and images are intuitively more similar than others (dog vs. cat, dog vs. spoon), though quantifying such similarity is notoriously difficult. Indeed, this kind of computation is likely a critical part of learning the category boundaries for words within a given language. Here, we use a set of 27 items (e.g. 'dog') that are highly common in infants' input, and use both image- and word-based algorithms to independently compute similarity among them. We find three key results. First, the pairwise item similarities derived within image-space and word-space are correlated, suggesting preserved structure among these extremely different representational formats. Second, the closest 'neighbors' for each item, within each space, showed significant overlap (e.g. both found 'egg' as a neighbor of 'apple'). Third, items with the most overlapping neighbors are later-learned by infants and toddlers. We conclude that this approach, which does not rely on human ratings of similarity, may nevertheless reflect stable within-class structure across these two spaces. We speculate that such invariance might aid lexical acquisition, by serving as an informative marker of category boundaries.
1307.7005
Nicolas Le Nov\`ere
Finja B\"uchel, Nicolas Rodriguez, Neil Swainston, Clemens Wrzodek, Tobias Czauderna, Roland Keller, Florian Mittag, Michael Schubert, Mihai Glont, Martin Golebiewski, Martijn van Iersel, Sarah Keating, Matthias Rall, Michael Wybrow, Henning Hermjakob, Michael Hucka, Douglas B. Kell, Wolfgang M\"uller, Pedro Mendes, Andreas Zell, Claudine Chaouiya, Julio Saez-Rodriguez, Falk Schreiber, Camille Laibe, Andreas Dr\"ager, Nicolas Le Nov\`ere
Large-scale generation of computational models from biochemical pathway maps
29 pages, 8 figures
BMC Systems Biology 2013, 7:116
10.1186/1752-0509-7-116
null
q-bio.MN
http://creativecommons.org/licenses/by/3.0/
Background: Systems biology projects and omics technologies have led to a growing number of biochemical pathway reconstructions. However, mathematical models are still most often created de novo, based on reading the literature and processing pathway data manually. Results: To increase the efficiency with which such models can be created, we automatically generated mathematical models from pathway representations using a suite of freely available software. We produced models that combine data from KEGG PATHWAY, BioCarta, MetaCyc and SABIO-RK. According to the source data, three types of models are provided: kinetic, logical and constraint-based. All models are encoded using SBML Core and Qual packages, and available through BioModels Database. Each model contains the list of participants, the interactions, and the relevant mathematical constructs, but, in most cases, no meaningful parameter values. Most models are also available as easy-to-understand graphical SBGN maps. Conclusions: To date, the project has resulted in more than 140000 models freely available. We believe this resource can tremendously accelerate the development of mathematical models by providing initial starting points ready for parametrization.
[ { "created": "Fri, 26 Jul 2013 11:48:24 GMT", "version": "v1" }, { "created": "Thu, 10 Oct 2013 15:58:39 GMT", "version": "v2" } ]
2013-11-07
[ [ "Büchel", "Finja", "" ], [ "Rodriguez", "Nicolas", "" ], [ "Swainston", "Neil", "" ], [ "Wrzodek", "Clemens", "" ], [ "Czauderna", "Tobias", "" ], [ "Keller", "Roland", "" ], [ "Mittag", "Florian", "" ], [ "Schubert", "Michael", "" ], [ "Glont", "Mihai", "" ], [ "Golebiewski", "Martin", "" ], [ "van Iersel", "Martijn", "" ], [ "Keating", "Sarah", "" ], [ "Rall", "Matthias", "" ], [ "Wybrow", "Michael", "" ], [ "Hermjakob", "Henning", "" ], [ "Hucka", "Michael", "" ], [ "Kell", "Douglas B.", "" ], [ "Müller", "Wolfgang", "" ], [ "Mendes", "Pedro", "" ], [ "Zell", "Andreas", "" ], [ "Chaouiya", "Claudine", "" ], [ "Saez-Rodriguez", "Julio", "" ], [ "Schreiber", "Falk", "" ], [ "Laibe", "Camille", "" ], [ "Dräger", "Andreas", "" ], [ "Novère", "Nicolas Le", "" ] ]
Background: Systems biology projects and omics technologies have led to a growing number of biochemical pathway reconstructions. However, mathematical models are still most often created de novo, based on reading the literature and processing pathway data manually. Results: To increase the efficiency with which such models can be created, we automatically generated mathematical models from pathway representations using a suite of freely available software. We produced models that combine data from KEGG PATHWAY, BioCarta, MetaCyc and SABIO-RK. According to the source data, three types of models are provided: kinetic, logical and constraint-based. All models are encoded using SBML Core and Qual packages, and available through BioModels Database. Each model contains the list of participants, the interactions, and the relevant mathematical constructs, but, in most cases, no meaningful parameter values. Most models are also available as easy-to-understand graphical SBGN maps. Conclusions: To date, the project has resulted in more than 140000 models freely available. We believe this resource can tremendously accelerate the development of mathematical models by providing initial starting points ready for parametrization.
2302.00142
Tobias Wenzel
Carolus Vitalis and Tobias Wenzel
Leveraging Interactions in Microfluidic Droplets for Enhanced Biotechnology Screens
10 pages, 3 figures
Current Opinion in Biotechnology, August 2023, Volume 82, 102966
10.1016/j.copbio.2023.10
null
q-bio.QM physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Microfluidic droplet screens serve as an innovative platform for high-throughput biotechnology, enabling significant advancements in discovery, product optimization, and analysis. This review sheds light on the emerging trend of interaction assays in microfluidic droplets, underscoring the unique suitability of droplets for these applications. Encompassing a diverse range of biological entities such as antibodies, enzymes, DNA, RNA, various microbial and mammalian cell types, drugs, and other molecules, these assays demonstrate their versatility and scope. Recent methodological breakthroughs have escalated these screens to novel scales of bioanalysis and biotechnological product design. Moreover, we highlight pioneering advancements that extend droplet-based screens into new domains: cargo delivery within human bodies, application of synthetic gene circuits in natural environments, 3D-printing, and the development of droplet structures responsive to environmental signals. The potential of this field is profound and only set to increase.
[ { "created": "Tue, 31 Jan 2023 23:20:01 GMT", "version": "v1" }, { "created": "Wed, 31 May 2023 23:11:53 GMT", "version": "v2" } ]
2023-09-11
[ [ "Vitalis", "Carolus", "" ], [ "Wenzel", "Tobias", "" ] ]
Microfluidic droplet screens serve as an innovative platform for high-throughput biotechnology, enabling significant advancements in discovery, product optimization, and analysis. This review sheds light on the emerging trend of interaction assays in microfluidic droplets, underscoring the unique suitability of droplets for these applications. Encompassing a diverse range of biological entities such as antibodies, enzymes, DNA, RNA, various microbial and mammalian cell types, drugs, and other molecules, these assays demonstrate their versatility and scope. Recent methodological breakthroughs have escalated these screens to novel scales of bioanalysis and biotechnological product design. Moreover, we highlight pioneering advancements that extend droplet-based screens into new domains: cargo delivery within human bodies, application of synthetic gene circuits in natural environments, 3D-printing, and the development of droplet structures responsive to environmental signals. The potential of this field is profound and only set to increase.
q-bio/0505001
Ignacio D. Peixoto
Ignacio D. Peixoto, Luca Giuggioli, and V. M. Kenkre
Study of Arbitrary Nonlinearities in Convective Population Dynamics with Small Diffusion
14 pages, 9 figures
Phys. Rev. E 72, 041902 (2005)
10.1103/PhysRevE.72.041902
null
q-bio.PE
null
Convective counterparts of variants of the nonlinear Fisher equation which describes reaction diffusion systems in population dynamics are studied with the help of an analytic prescription and shown to lead to interesting consequences for the evolution of population densities. The initial value problem is solved explicitly for some cases and for others it is shown how to find traveling wave solutions analytically. The effect of adding diffusion to the convective equations is first studied through exact analysis using a piecewise linear representation of the nonlinearity. Using an appropriate small parameter suggested by that analysis, a perturbative treatment is developed to treat the case in which the convective evolution is augmented by a small amount of diffusion.
[ { "created": "Fri, 29 Apr 2005 21:13:48 GMT", "version": "v1" } ]
2007-05-23
[ [ "Peixoto", "Ignacio D.", "" ], [ "Giuggioli", "Luca", "" ], [ "Kenkre", "V. M.", "" ] ]
Convective counterparts of variants of the nonlinear Fisher equation which describes reaction diffusion systems in population dynamics are studied with the help of an analytic prescription and shown to lead to interesting consequences for the evolution of population densities. The initial value problem is solved explicitly for some cases and for others it is shown how to find traveling wave solutions analytically. The effect of adding diffusion to the convective equations is first studied through exact analysis using a piecewise linear representation of the nonlinearity. Using an appropriate small parameter suggested by that analysis, a perturbative treatment is developed to treat the case in which the convective evolution is augmented by a small amount of diffusion.
1908.08190
Tom Chou
Song Xu, Lucas B\"ottcher, Tom Chou
Diversity in Biology: definitions, quantification, and models
Revised, corrected, and in press, 22 pages, 9 figures, 1 table
null
10.1088/1478-3975/ab6754
null
q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Diversity indices are useful single-number metrics for characterizing a complex distribution of a set of attributes across a population of interest. The utility of these different metrics or sets of metrics depends on the context and application, and whether a predictive mechanistic model exists. In this topical review, we first summarize the relevant mathematical principles underlying heterogeneity in a large population before outlining the various definitions of `diversity' and providing examples of scientific topics in which its quantification plays an important role. We then review how diversity has been a ubiquitous concept across multiple fields including ecology, immunology, cellular barcoding experiments, and socioeconomic studies. Since many of these applications involve sampling of populations, we also review how diversity in small samples is related to the diversity in the entire population. Features that arise in each of these applications are highlighted.
[ { "created": "Thu, 22 Aug 2019 03:51:50 GMT", "version": "v1" }, { "created": "Thu, 5 Mar 2020 03:53:34 GMT", "version": "v2" } ]
2020-03-06
[ [ "Xu", "Song", "" ], [ "Böttcher", "Lucas", "" ], [ "Chou", "Tom", "" ] ]
Diversity indices are useful single-number metrics for characterizing a complex distribution of a set of attributes across a population of interest. The utility of these different metrics or sets of metrics depends on the context and application, and whether a predictive mechanistic model exists. In this topical review, we first summarize the relevant mathematical principles underlying heterogeneity in a large population before outlining the various definitions of `diversity' and providing examples of scientific topics in which its quantification plays an important role. We then review how diversity has been a ubiquitous concept across multiple fields including ecology, immunology, cellular barcoding experiments, and socioeconomic studies. Since many of these applications involve sampling of populations, we also review how diversity in small samples is related to the diversity in the entire population. Features that arise in each of these applications are highlighted.
1409.0269
Zvi Rosen
Adam L. MacLean, Zvi Rosen, Helen M. Byrne, Heather A. Harrington
Parameter-free methods distinguish Wnt pathway models and guide design of experiments
37 pages, 6 figures; errors fixed and comparison with data
null
10.1073/pnas.1416655112
null
q-bio.QM math.AG q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The canonical Wnt signaling pathway, mediated by $\beta$-catenin, is crucially involved in development, adult stem cell tissue maintenance and a host of diseases including cancer. We undertake analysis of different mathematical models of Wnt from the literature, and compare them to a new mechanistic model of Wnt signaling that targets spatial localization of key molecules. Using Bayesian methods we infer parameters for each of the models from mammalian Wnt signaling data and find that all models can fit this time course. We are able to overcome this lack of data by appealing to algebraic methods (concepts from chemical reaction network theory and matroid theory) to analyze the models without recourse to specific parameter values. These approaches provide insight into Wnt signaling: The new model (unlike any other investigated) permits a bistable switch in the system via control of shuttling and degradation parameters, corresponding to stem-like vs committed cell states in the differentiation hierarchy. Our analysis also identifies groups of variables that must be measured to fully characterize and discriminate between competing models, and thus serves as a guide for performing minimal experiments for model comparison.
[ { "created": "Mon, 1 Sep 2014 00:03:51 GMT", "version": "v1" }, { "created": "Tue, 9 Dec 2014 03:47:26 GMT", "version": "v2" }, { "created": "Sun, 8 Feb 2015 17:08:32 GMT", "version": "v3" } ]
2015-06-22
[ [ "MacLean", "Adam L.", "" ], [ "Rosen", "Zvi", "" ], [ "Byrne", "Helen M.", "" ], [ "Harrington", "Heather A.", "" ] ]
The canonical Wnt signaling pathway, mediated by $\beta$-catenin, is crucially involved in development, adult stem cell tissue maintenance and a host of diseases including cancer. We undertake analysis of different mathematical models of Wnt from the literature, and compare them to a new mechanistic model of Wnt signaling that targets spatial localization of key molecules. Using Bayesian methods we infer parameters for each of the models from mammalian Wnt signaling data and find that all models can fit this time course. We are able to overcome this lack of data by appealing to algebraic methods (concepts from chemical reaction network theory and matroid theory) to analyze the models without recourse to specific parameter values. These approaches provide insight into Wnt signaling: The new model (unlike any other investigated) permits a bistable switch in the system via control of shuttling and degradation parameters, corresponding to stem-like vs committed cell states in the differentiation hierarchy. Our analysis also identifies groups of variables that must be measured to fully characterize and discriminate between competing models, and thus serves as a guide for performing minimal experiments for model comparison.
1911.03268
Dan Schwartz
Dan Schwartz, Mariya Toneva, Leila Wehbe
Inducing brain-relevant bias in natural language processing models
To be published in the proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada
null
null
null
q-bio.NC cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Progress in natural language processing (NLP) models that estimate representations of word sequences has recently been leveraged to improve the understanding of language processing in the brain. However, these models have not been specifically designed to capture the way the brain represents language meaning. We hypothesize that fine-tuning these models to predict recordings of brain activity of people reading text will lead to representations that encode more brain-activity-relevant language information. We demonstrate that a version of BERT, a recently introduced and powerful language model, can improve the prediction of brain activity after fine-tuning. We show that the relationship between language and brain activity learned by BERT during this fine-tuning transfers across multiple participants. We also show that, for some participants, the fine-tuned representations learned from both magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) are better for predicting fMRI than the representations learned from fMRI alone, indicating that the learned representations capture brain-activity-relevant information that is not simply an artifact of the modality. While changes to language representations help the model predict brain activity, they also do not harm the model's ability to perform downstream NLP tasks. Our findings are notable for research on language understanding in the brain.
[ { "created": "Tue, 29 Oct 2019 23:28:16 GMT", "version": "v1" } ]
2019-11-11
[ [ "Schwartz", "Dan", "" ], [ "Toneva", "Mariya", "" ], [ "Wehbe", "Leila", "" ] ]
Progress in natural language processing (NLP) models that estimate representations of word sequences has recently been leveraged to improve the understanding of language processing in the brain. However, these models have not been specifically designed to capture the way the brain represents language meaning. We hypothesize that fine-tuning these models to predict recordings of brain activity of people reading text will lead to representations that encode more brain-activity-relevant language information. We demonstrate that a version of BERT, a recently introduced and powerful language model, can improve the prediction of brain activity after fine-tuning. We show that the relationship between language and brain activity learned by BERT during this fine-tuning transfers across multiple participants. We also show that, for some participants, the fine-tuned representations learned from both magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) are better for predicting fMRI than the representations learned from fMRI alone, indicating that the learned representations capture brain-activity-relevant information that is not simply an artifact of the modality. While changes to language representations help the model predict brain activity, they also do not harm the model's ability to perform downstream NLP tasks. Our findings are notable for research on language understanding in the brain.
q-bio/0406027
Dietrich Stauffer
Stanislaw Cebrat and Dietrich Stauffer
Altruism and Antagonistic Pleiotropy in Penna Ageing Model
4 pages including 2 figures
null
null
null
q-bio.PE
null
The Penna ageing model is based on mutation accumulation theory. We show that it also allows for self-organization of antagonistic pleiotropy which helps at young age at the expense of old age. This can be interpreted as emergence of altruism.
[ { "created": "Mon, 14 Jun 2004 12:42:30 GMT", "version": "v1" } ]
2007-05-23
[ [ "Cebrat", "Stanislaw", "" ], [ "Stauffer", "Dietrich", "" ] ]
The Penna ageing model is based on mutation accumulation theory. We show that it also allows for self-organization of antagonistic pleiotropy which helps at young age at the expense of old age. This can be interpreted as emergence of altruism.
1805.06391
Haoqi Sun
Haoqi Sun, Luis Paixao, Jefferson T. Oliva, Balaji Goparaju, Diego Z. Carvalho, Kicky G. van Leeuwen, Oluwaseun Akeju, Robert Joseph Thomas, Sydney S. Cash, Matt T. Bianchi, M. Brandon Westover
Brain Age from the Electroencephalogram of Sleep
null
Neurobiology of aging 74 (2019): 112-120
10.1016/j.neurobiolaging.2018.10.016
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human electroencephalogram (EEG) of sleep undergoes profound changes with age. These changes can be conceptualized as "brain age", which can be compared to an age norm to reflect the deviation from the normal aging process. Here, we develop an interpretable machine learning model to predict brain age based on two large sleep EEG datasets: the Massachusetts General Hospital sleep lab dataset (MGH, N = 2,621) covering age 18 to 80; and the Sleep Heart Health Study (SHHS, N = 3,520) covering age 40 to 80. The model obtains a mean absolute deviation of 8.1 years between brain age and chronological age in the healthy participants in the MGH dataset. As validation, we analyze a subset of SHHS containing longitudinal EEGs 5 years apart, which shows a 5.5-year difference in brain age. Participants with neurological and psychiatric diseases, as well as diabetes and hypertension medications show an older brain age compared to chronological age. The findings raise the prospect of using sleep EEG as a biomarker for healthy brain aging.
[ { "created": "Wed, 16 May 2018 16:04:23 GMT", "version": "v1" } ]
2020-06-23
[ [ "Sun", "Haoqi", "" ], [ "Paixao", "Luis", "" ], [ "Oliva", "Jefferson T.", "" ], [ "Goparaju", "Balaji", "" ], [ "Carvalho", "Diego Z.", "" ], [ "van Leeuwen", "Kicky G.", "" ], [ "Akeju", "Oluwaseun", "" ], [ "Thomas", "Robert Joseph", "" ], [ "Cash", "Sydney S.", "" ], [ "Bianchi", "Matt T.", "" ], [ "Westover", "M. Brandon", "" ] ]
The human electroencephalogram (EEG) of sleep undergoes profound changes with age. These changes can be conceptualized as "brain age", which can be compared to an age norm to reflect the deviation from the normal aging process. Here, we develop an interpretable machine learning model to predict brain age based on two large sleep EEG datasets: the Massachusetts General Hospital sleep lab dataset (MGH, N = 2,621) covering age 18 to 80; and the Sleep Heart Health Study (SHHS, N = 3,520) covering age 40 to 80. The model obtains a mean absolute deviation of 8.1 years between brain age and chronological age in the healthy participants in the MGH dataset. As validation, we analyze a subset of SHHS containing longitudinal EEGs 5 years apart, which shows a 5.5-year difference in brain age. Participants with neurological and psychiatric diseases, as well as diabetes and hypertension medications show an older brain age compared to chronological age. The findings raise the prospect of using sleep EEG as a biomarker for healthy brain aging.
1405.2464
Joseph Rusinko
Emili Moan, Joseph Rusinko
Combinatorics of Linked Systems of Quartet Trees
7 pages
null
null
null
q-bio.QM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We apply classical quartet techniques to the problem of phylogenetic decisiveness and find a value $k$ such that all collections of at least $k$ quartets are decisive. Moreover, we prove that this bound is optimal and give a lower bound on the probability that a collection of quartets is decisive.
[ { "created": "Sat, 10 May 2014 19:33:29 GMT", "version": "v1" }, { "created": "Mon, 16 Mar 2015 14:05:18 GMT", "version": "v2" } ]
2015-03-17
[ [ "Moan", "Emili", "" ], [ "Rusinko", "Joseph", "" ] ]
We apply classical quartet techniques to the problem of phylogenetic decisiveness and find a value $k$ such that all collections of at least $k$ quartets are decisive. Moreover, we prove that this bound is optimal and give a lower bound on the probability that a collection of quartets is decisive.
2203.03461
Manuel Reinhardt
Manuel Reinhardt, Ga\v{s}per Tka\v{c}ik, Pieter Rein ten Wolde
Path Weight Sampling: Exact Monte Carlo Computation of the Mutual Information between Stochastic Trajectories
19 pages (+ 14 pages appendix), 9 figures
Phys. Rev. X 13 (2023) 041017
10.1103/physrevx.13.041017
null
q-bio.MN cond-mat.soft cs.IT math.IT physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Most natural and engineered information-processing systems transmit information via signals that vary in time. Computing the information transmission rate or the information encoded in the temporal characteristics of these signals requires the mutual information between the input and output signals as a function of time, i.e. between the input and output trajectories. Yet, this is notoriously difficult because of the high-dimensional nature of the trajectory space, and all existing techniques require approximations. We present an exact Monte Carlo technique called Path Weight Sampling (PWS) that, for the first time, makes it possible to compute the mutual information between input and output trajectories for any stochastic system that is described by a master equation. The principal idea is to use the master equation to evaluate the exact conditional probability of an individual output trajectory for a given input trajectory, and average this via Monte Carlo sampling in trajectory space to obtain the mutual information. We present three variants of PWS, which all generate the trajectories using the standard stochastic simulation algorithm. While Direct PWS is a brute-force method, Rosenbluth-Rosenbluth PWS exploits the analogy between signal trajectory sampling and polymer sampling, and Thermodynamic Integration PWS is based on a reversible work calculation in trajectory space. PWS also makes it possible to compute the mutual information between input and output trajectories for systems with hidden internal states as well as systems with feedback from output to input. Applying PWS to the bacterial chemotaxis system, consisting of 182 coupled chemical reactions, demonstrates not only that the scheme is highly efficient, but also that the number of receptor clusters is much smaller than hitherto believed, while their size is much larger.
[ { "created": "Mon, 7 Mar 2022 15:20:21 GMT", "version": "v1" }, { "created": "Wed, 4 Oct 2023 10:07:01 GMT", "version": "v2" } ]
2023-10-27
[ [ "Reinhardt", "Manuel", "" ], [ "Tkačik", "Gašper", "" ], [ "Wolde", "Pieter Rein ten", "" ] ]
Most natural and engineered information-processing systems transmit information via signals that vary in time. Computing the information transmission rate or the information encoded in the temporal characteristics of these signals requires the mutual information between the input and output signals as a function of time, i.e. between the input and output trajectories. Yet, this is notoriously difficult because of the high-dimensional nature of the trajectory space, and all existing techniques require approximations. We present an exact Monte Carlo technique called Path Weight Sampling (PWS) that, for the first time, makes it possible to compute the mutual information between input and output trajectories for any stochastic system that is described by a master equation. The principal idea is to use the master equation to evaluate the exact conditional probability of an individual output trajectory for a given input trajectory, and average this via Monte Carlo sampling in trajectory space to obtain the mutual information. We present three variants of PWS, which all generate the trajectories using the standard stochastic simulation algorithm. While Direct PWS is a brute-force method, Rosenbluth-Rosenbluth PWS exploits the analogy between signal trajectory sampling and polymer sampling, and Thermodynamic Integration PWS is based on a reversible work calculation in trajectory space. PWS also makes it possible to compute the mutual information between input and output trajectories for systems with hidden internal states as well as systems with feedback from output to input. Applying PWS to the bacterial chemotaxis system, consisting of 182 coupled chemical reactions, demonstrates not only that the scheme is highly efficient, but also that the number of receptor clusters is much smaller than hitherto believed, while their size is much larger.
1511.01339
Bob Eisenberg
Robert Eisenberg
Electrical Structure of Biological Cells and Tissues: impedance spectroscopy, stereology, and singular perturbation theory
A chapter in "Impedance Spectroscopy Theory, Experiment, and Applications:Solid State, Corrosion, Power sources",3rd Edition Evgenij Barsoukov (ed.), J. Ross Macdonald (ed.), Wiley-Interscience, 2016
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Impedance Spectroscopy resolves electrical properties into uncorrelated variables, as a function of frequency, with exquisite resolution. Separation is robust and most useful when the system is linear. Impedance spectroscopy combined with appropriate structural knowledge provides insight into pathways for current flow, with more success than other methods. Biological applications of impedance spectroscopy are often not useful since so much of biology is strongly nonlinear in its essential features, and impedance spectroscopy is fundamentally a linear analysis. All cells and tissues have cell membranes, and membrane capacitance is both linear and important to cell function. Measurements proved straightforward in skeletal muscle, cardiac muscle, and the lens of the eye. In skeletal muscle, measurements provided the best estimates of the predominant (cell) membrane system that dominates electrical properties. In cardiac muscle, measurements showed definitively that classical microelectrode voltage clamp could not control the potential of the predominant membranes, which were in the tubular system, separated from the extracellular space by substantial distributed resistance. In the lens of the eye, impedance spectroscopy changed the basis of all recording and interpretation of electrical measurements and laid the basis for Rae and Mathias' extensive later experimental work. Many tissues are riddled with extracellular space in the form of clefts and tubules, for example, cardiac muscle, the lens of the eye, most epithelia, and of course frog muscle. These tissues are best analyzed with a bidomain theory that arose from the work on electrical structure described here. There has been a great deal of work since then on the bidomain, and this represents the most important contribution to biology of the analysis of electrical structure, in my view.
[ { "created": "Wed, 4 Nov 2015 14:15:18 GMT", "version": "v1" } ]
2015-11-05
[ [ "Eisenberg", "Robert", "" ] ]
Impedance Spectroscopy resolves electrical properties into uncorrelated variables, as a function of frequency, with exquisite resolution. Separation is robust and most useful when the system is linear. Impedance spectroscopy combined with appropriate structural knowledge provides insight into pathways for current flow, with more success than other methods. Biological applications of impedance spectroscopy are often not useful since so much of biology is strongly nonlinear in its essential features, and impedance spectroscopy is fundamentally a linear analysis. All cells and tissues have cell membranes, and membrane capacitance is both linear and important to cell function. Measurements proved straightforward in skeletal muscle, cardiac muscle, and the lens of the eye. In skeletal muscle, measurements provided the best estimates of the predominant (cell) membrane system that dominates electrical properties. In cardiac muscle, measurements showed definitively that classical microelectrode voltage clamp could not control the potential of the predominant membranes, which were in the tubular system, separated from the extracellular space by substantial distributed resistance. In the lens of the eye, impedance spectroscopy changed the basis of all recording and interpretation of electrical measurements and laid the basis for Rae and Mathias' extensive later experimental work. Many tissues are riddled with extracellular space in the form of clefts and tubules, for example, cardiac muscle, the lens of the eye, most epithelia, and of course frog muscle. These tissues are best analyzed with a bidomain theory that arose from the work on electrical structure described here. There has been a great deal of work since then on the bidomain, and this represents the most important contribution to biology of the analysis of electrical structure, in my view.
1807.09505
Paola Sessa
Paola Sessa, Arianna Schiano Lomoriello and Roy Luria
Neural measures of the causal role of observers' facial mimicry on visual working memory for facial expressions
37 pages, 5 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simulation models of facial expressions propose that sensorimotor regions may increase the clarity of facial expression representations in extrastriate areas. We monitored the event-related potential marker of visual working memory (VWM) representations, namely the sustained posterior contralateral negativity (SPCN), also termed contralateral delay activity (CDA), while participants performed a change detection task including to-be-memorized faces with different intensities of anger. In one condition participants could freely use their facial mimicry during the encoding/VWM maintenance of the faces, while in a different condition, participants had their facial mimicry blocked by a gel. Notably, SPCN amplitude was reduced for faces in the blocked mimicry condition when compared to the free mimicry condition. This modulation interacted with the empathy levels of participants such that only participants with medium-high empathy scores showed such a reduction of the SPCN amplitude when their mimicry was blocked. The SPCN amplitude was larger for full expressions when compared to neutral and subtle expressions, while subtle expressions elicited lower SPCN amplitudes than neutral faces. These findings provide evidence of a functional link between mimicry and VWM for faces, and further shed light on how this memory system may receive feedback from sensorimotor regions during the processing of facial expressions.
[ { "created": "Wed, 25 Jul 2018 09:50:56 GMT", "version": "v1" }, { "created": "Sat, 11 Aug 2018 14:23:59 GMT", "version": "v2" } ]
2018-08-14
[ [ "Sessa", "Paola", "" ], [ "Lomoriello", "Arianna Schiano", "" ], [ "Luria", "Roy", "" ] ]
Simulation models of facial expressions propose that sensorimotor regions may increase the clarity of facial expression representations in extrastriate areas. We monitored the event-related potential marker of visual working memory (VWM) representations, namely the sustained posterior contralateral negativity (SPCN), also termed contralateral delay activity (CDA), while participants performed a change detection task including to-be-memorized faces with different intensities of anger. In one condition participants could freely use their facial mimicry during the encoding/VWM maintenance of the faces, while in a different condition, participants had their facial mimicry blocked by a gel. Notably, SPCN amplitude was reduced for faces in the blocked mimicry condition when compared to the free mimicry condition. This modulation interacted with the empathy levels of participants such that only participants with medium-high empathy scores showed such a reduction of the SPCN amplitude when their mimicry was blocked. The SPCN amplitude was larger for full expressions when compared to neutral and subtle expressions, while subtle expressions elicited lower SPCN amplitudes than neutral faces. These findings provide evidence of a functional link between mimicry and VWM for faces, and further shed light on how this memory system may receive feedback from sensorimotor regions during the processing of facial expressions.
1712.00962
Da Zhou Dr.
Da Zhou, Shanjun Mao, Kaiyi Chen, Xiaofang Cao and Jie Hu
A Bayesian statistical analysis of stochastic phenotypic plasticity model of cancer cells
16 pages, 1 figure
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The phenotypic plasticity of cancer cells has received special attention in recent years. Even though related models have been widely studied in terms of mathematical properties, a thorough statistical analysis of parameter estimation and model selection is still lacking. In this study, we present a Bayesian approach to the relative frequencies of cancer stem cells (CSCs). Both the Gibbs sampling and Metropolis-Hastings (MH) algorithms are used to perform point and interval estimations of cell-state transition rates between CSCs and non-CSCs. Extensive simulations demonstrate the validity of our model and algorithm. By applying this method to published data on the SW620 colon cancer cell line, model selection favors the phenotypic plasticity model over the conventional hierarchical model of cancer cells. Moreover, it is found that the initial state of CSCs after cell sorting significantly influences the occurrence of phenotypic plasticity.
[ { "created": "Mon, 4 Dec 2017 09:10:24 GMT", "version": "v1" } ]
2017-12-05
[ [ "Zhou", "Da", "" ], [ "Mao", "Shanjun", "" ], [ "Chen", "Kaiyi", "" ], [ "Cao", "Xiaofang", "" ], [ "Hu", "Jie", "" ] ]
The phenotypic plasticity of cancer cells has received special attention in recent years. Even though related models have been widely studied in terms of mathematical properties, a thorough statistical analysis of parameter estimation and model selection is still lacking. In this study, we present a Bayesian approach to the relative frequencies of cancer stem cells (CSCs). Both the Gibbs sampling and Metropolis-Hastings (MH) algorithms are used to perform point and interval estimations of cell-state transition rates between CSCs and non-CSCs. Extensive simulations demonstrate the validity of our model and algorithm. By applying this method to published data on the SW620 colon cancer cell line, model selection favors the phenotypic plasticity model over the conventional hierarchical model of cancer cells. Moreover, it is found that the initial state of CSCs after cell sorting significantly influences the occurrence of phenotypic plasticity.
q-bio/0604033
Ueli Rutishauser
Ueli Rutishauser, Erin M. Schuman, Adam N. Mamelak
Online detection and sorting of extracellularly recorded action potentials in human medial temporal lobe recordings, in vivo
9 figures, 2 tables. Journal of Neuroscience Methods, 2006 (in press)
J Neurosci Methods, 2006; 154(1-2):204-224
10.1016/j.jneumeth.2005.12.033
null
q-bio.QM q-bio.NC
null
Understanding the function of complex cortical circuits requires the simultaneous recording of action potentials from many neurons in awake and behaving animals. Practically, this can be achieved by extracellularly recording from multiple brain sites using single-wire electrodes. However, in densely packed neural structures such as the human hippocampus, a single electrode can record the activity of multiple neurons. Thus, analytic techniques that differentiate action potentials of different neurons are required. Offline spike sorting approaches are currently used to detect and sort action potentials after finishing the experiment. Because the opportunities to record from the human brain are relatively rare, it is desirable to analyze large numbers of simultaneous recordings quickly using online sorting and detection algorithms. In this way, the experiment can be optimized for the particular response properties of the recorded neurons. Here we present and evaluate a method that is capable of detecting and sorting extracellular single-wire recordings in real time. We demonstrate the utility of the method by applying it to an extensive data set we acquired from chronically-implanted depth electrodes in the hippocampus of human epilepsy patients. This dataset is particularly challenging because it was recorded in a noisy clinical environment. This method will allow the development of closed-loop experiments, which immediately adapt the experimental stimuli and/or tasks to the neural response observed.
[ { "created": "Wed, 26 Apr 2006 18:59:25 GMT", "version": "v1" } ]
2008-09-25
[ [ "Rutishauser", "Ueli", "" ], [ "Schuman", "Erin M.", "" ], [ "Mamelak", "Adam N.", "" ] ]
Understanding the function of complex cortical circuits requires the simultaneous recording of action potentials from many neurons in awake and behaving animals. Practically, this can be achieved by extracellularly recording from multiple brain sites using single-wire electrodes. However, in densely packed neural structures such as the human hippocampus, a single electrode can record the activity of multiple neurons. Thus, analytic techniques that differentiate action potentials of different neurons are required. Offline spike sorting approaches are currently used to detect and sort action potentials after finishing the experiment. Because the opportunities to record from the human brain are relatively rare, it is desirable to analyze large numbers of simultaneous recordings quickly using online sorting and detection algorithms. In this way, the experiment can be optimized for the particular response properties of the recorded neurons. Here we present and evaluate a method that is capable of detecting and sorting extracellular single-wire recordings in real time. We demonstrate the utility of the method by applying it to an extensive data set we acquired from chronically-implanted depth electrodes in the hippocampus of human epilepsy patients. This dataset is particularly challenging because it was recorded in a noisy clinical environment. This method will allow the development of closed-loop experiments, which immediately adapt the experimental stimuli and/or tasks to the neural response observed.
0908.0408
Tom Britton
Tom Britton and Peter Neal
The time to extinction for an SIS-household-epidemic model
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyse a stochastic SIS epidemic amongst a finite population partitioned into households. Since the population is finite, the epidemic will eventually go extinct, i.e., have no more infectives in the population. We study the effects of population size and within-household transmission upon the time to extinction. This is done through two approximations. The first approximation is suitable for all levels of within-household transmission and is based upon an Ornstein-Uhlenbeck process approximation for the disease's fluctuations about an endemic level, relying on a large population. The second approximation is suitable for high levels of within-household transmission and approximates the number of infectious households by a simple homogeneously mixing SIS model with the households replaced by individuals. The analysis, supported by a simulation study, shows that the mean time to extinction is minimized by moderate levels of within-household transmission.
[ { "created": "Tue, 4 Aug 2009 08:45:26 GMT", "version": "v1" } ]
2009-08-05
[ [ "Britton", "Tom", "" ], [ "Neal", "Peter", "" ] ]
We analyse a stochastic SIS epidemic amongst a finite population partitioned into households. Since the population is finite, the epidemic will eventually go extinct, i.e., have no more infectives in the population. We study the effects of population size and within-household transmission upon the time to extinction. This is done through two approximations. The first approximation is suitable for all levels of within-household transmission and is based upon an Ornstein-Uhlenbeck process approximation for the disease's fluctuations about an endemic level, relying on a large population. The second approximation is suitable for high levels of within-household transmission and approximates the number of infectious households by a simple homogeneously mixing SIS model with the households replaced by individuals. The analysis, supported by a simulation study, shows that the mean time to extinction is minimized by moderate levels of within-household transmission.
1305.4506
Bart Haegeman
Bart Haegeman, Tewfik Sari, Rampal S. Etienne
Predicting coexistence of plants subject to a tolerance-competition trade-off
To be published in Journal of Mathematical Biology. 30 pages, 5 figures, 5 appendices
null
10.1007/s00285-013-0692-4
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecological trade-offs between species are often invoked to explain species coexistence in ecological communities. However, few mathematical models have been proposed for which coexistence conditions can be characterized explicitly in terms of a trade-off. Here we present a model of a plant community which allows such a characterization. In the model, plant species compete for sites, where each site has a fixed stress condition. Species differ both in stress tolerance and competitive ability. Stress tolerance is quantified as the fraction of sites with stress conditions low enough to allow establishment. Competitive ability is quantified as the propensity to win the competition for empty sites. We derive the deterministic, discrete-time dynamical system for the species abundances. We prove the conditions under which plant species can coexist in a stable equilibrium. We show that the coexistence conditions can be characterized graphically, clearly illustrating the trade-off between stress tolerance and competitive ability. We compare our model with a recently proposed, continuous-time dynamical system for a tolerance-fecundity trade-off in plant communities, and we show that this model is a special case of the continuous-time version of our model.
[ { "created": "Mon, 20 May 2013 11:59:26 GMT", "version": "v1" } ]
2013-05-21
[ [ "Haegeman", "Bart", "" ], [ "Sari", "Tewfik", "" ], [ "Etienne", "Rampal S.", "" ] ]
Ecological trade-offs between species are often invoked to explain species coexistence in ecological communities. However, few mathematical models have been proposed for which coexistence conditions can be characterized explicitly in terms of a trade-off. Here we present a model of a plant community which allows such a characterization. In the model, plant species compete for sites, where each site has a fixed stress condition. Species differ both in stress tolerance and competitive ability. Stress tolerance is quantified as the fraction of sites with stress conditions low enough to allow establishment. Competitive ability is quantified as the propensity to win the competition for empty sites. We derive the deterministic, discrete-time dynamical system for the species abundances. We prove the conditions under which plant species can coexist in a stable equilibrium. We show that the coexistence conditions can be characterized graphically, clearly illustrating the trade-off between stress tolerance and competitive ability. We compare our model with a recently proposed, continuous-time dynamical system for a tolerance-fecundity trade-off in plant communities, and we show that this model is a special case of the continuous-time version of our model.
1612.09420
Fahime Sheikhzadeh
Fahime Sheikhzadeh, Martial Guillaud, Rabab K. Ward
Automatic labeling of molecular biomarkers of whole slide immunohistochemistry images using fully convolutional networks
null
null
null
null
q-bio.TO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the problem of quantifying biomarkers in multi-stained tissues, based on color and spatial information. A deep learning based method that can automatically localize and quantify the cells expressing biomarker(s) in a whole slide image is proposed. The deep learning network is a fully convolutional network (FCN) whose input is the true RGB color image of a tissue and output is a map of the different biomarkers. The FCN relies on a convolutional neural network (CNN) that classifies each cell separately according to the biomarker it expresses. In this study, images of immunohistochemistry (IHC) stained slides were collected and used. More than 4,500 RGB images of cells were manually labeled based on the expressing biomarkers. The labeled cell images were used to train the CNN (obtaining an accuracy of 92% in a test set). The trained CNN is then extended to an FCN that generates a map of all biomarkers in the whole slide image acquired by the scanner (instead of classifying every cell image). To evaluate our method, we manually labeled all nuclei expressing different biomarkers in two whole slide images and used these as the ground truth. Our proposed method for immunohistochemical analysis compares well with the manual labeling by humans (average F-score of 0.96).
[ { "created": "Fri, 30 Dec 2016 08:27:04 GMT", "version": "v1" } ]
2017-01-02
[ [ "Sheikhzadeh", "Fahime", "" ], [ "Guillaud", "Martial", "" ], [ "Ward", "Rabab K.", "" ] ]
This paper addresses the problem of quantifying biomarkers in multi-stained tissues, based on color and spatial information. A deep learning based method that can automatically localize and quantify the cells expressing biomarker(s) in a whole slide image is proposed. The deep learning network is a fully convolutional network (FCN) whose input is the true RGB color image of a tissue and output is a map of the different biomarkers. The FCN relies on a convolutional neural network (CNN) that classifies each cell separately according to the biomarker it expresses. In this study, images of immunohistochemistry (IHC) stained slides were collected and used. More than 4,500 RGB images of cells were manually labeled based on the expressing biomarkers. The labeled cell images were used to train the CNN (obtaining an accuracy of 92% in a test set). The trained CNN is then extended to an FCN that generates a map of all biomarkers in the whole slide image acquired by the scanner (instead of classifying every cell image). To evaluate our method, we manually labeled all nuclei expressing different biomarkers in two whole slide images and used these as the ground truth. Our proposed method for immunohistochemical analysis compares well with the manual labeling by humans (average F-score of 0.96).
0903.0662
Giovanni Paternostro
Jacob D. Feala, Jorge Cortes, Phillip M. Duxbury, Carlo Piermarocchi, Andrew D. McCulloch, Giovanni Paternostro
Systems approaches and algorithms for discovery of combinatorial therapies
25 pages
WIREs Syst Biol Med 2009
10.1002/wsbm.51
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Effective therapy of complex diseases requires control of highly non-linear complex networks that remain incompletely characterized. In particular, drug intervention can be seen as control of signaling in cellular networks. Identification of control parameters presents an extreme challenge due to the combinatorial explosion of control possibilities in combination therapy and to the incomplete knowledge of the systems biology of cells. In this review paper we describe the main current and proposed approaches to the design of combinatorial therapies, including the empirical methods used now by clinicians and alternative approaches suggested recently by several authors. New approaches for designing combinations arising from systems biology are described. We discuss in special detail the design of algorithms that identify optimal control parameters in cellular networks based on a quantitative characterization of control landscapes, maximizing utilization of incomplete knowledge of the state and structure of intracellular networks. The use of new technology for high-throughput measurements is key to these new approaches to combination therapy and essential for the characterization of control landscapes and implementation of the algorithms. Combinatorial optimization in medical therapy is also compared with the combinatorial optimization of engineering and materials science and similarities and differences are delineated.
[ { "created": "Wed, 4 Mar 2009 00:14:33 GMT", "version": "v1" } ]
2009-09-03
[ [ "Feala", "Jacob D.", "" ], [ "Cortes", "Jorge", "" ], [ "Duxbury", "Phillip M.", "" ], [ "Piermarocchi", "Carlo", "" ], [ "McCulloch", "Andrew D.", "" ], [ "Paternostro", "Giovanni", "" ] ]
Effective therapy of complex diseases requires control of highly non-linear complex networks that remain incompletely characterized. In particular, drug intervention can be seen as control of signaling in cellular networks. Identification of control parameters presents an extreme challenge due to the combinatorial explosion of control possibilities in combination therapy and to the incomplete knowledge of the systems biology of cells. In this review paper we describe the main current and proposed approaches to the design of combinatorial therapies, including the empirical methods used now by clinicians and alternative approaches suggested recently by several authors. New approaches for designing combinations arising from systems biology are described. We discuss in special detail the design of algorithms that identify optimal control parameters in cellular networks based on a quantitative characterization of control landscapes, maximizing utilization of incomplete knowledge of the state and structure of intracellular networks. The use of new technology for high-throughput measurements is key to these new approaches to combination therapy and essential for the characterization of control landscapes and implementation of the algorithms. Combinatorial optimization in medical therapy is also compared with the combinatorial optimization of engineering and materials science and similarities and differences are delineated.
1808.05137
Daniele Faccio
Alessandro Boccolini, Alessandro Fedrizzi, Daniele Faccio
Ghost imaging with the human eye
null
null
10.1364/OE.27.009258
null
q-bio.NC cs.CV physics.optics
http://creativecommons.org/licenses/by/4.0/
Computational ghost imaging relies on the decomposition of an image into patterns that are summed together with weights that measure the overlap of each pattern with the scene being imaged. These tasks rely on a computer. Here we demonstrate that the computational integration can be performed directly with the human eye. We use this human ghost imaging technique to evaluate the temporal response of the eye and establish the image persistence time to be around 20 ms followed by a further 20 ms exponential decay. These persistence times are in agreement with previous studies but can now potentially be extended to include a more precise characterisation of visual stimuli and provide a new experimental tool for the study of visual perception.
[ { "created": "Mon, 13 Aug 2018 17:04:53 GMT", "version": "v1" } ]
2019-03-27
[ [ "Boccolini", "Alessandro", "" ], [ "Fedrizzi", "Alessandro", "" ], [ "Faccio", "Daniele", "" ] ]
Computational ghost imaging relies on the decomposition of an image into patterns that are summed together with weights that measure the overlap of each pattern with the scene being imaged. These tasks rely on a computer. Here we demonstrate that the computational integration can be performed directly with the human eye. We use this human ghost imaging technique to evaluate the temporal response of the eye and establish the image persistence time to be around 20 ms followed by a further 20 ms exponential decay. These persistence times are in agreement with previous studies but can now potentially be extended to include a more precise characterisation of visual stimuli and provide a new experimental tool for the study of visual perception.
1701.04703
Majid Bani-Yaghoub
Majid Bani-Yaghoub
Introduction to Delay Models and Their Wave Solutions
null
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a brief review of delay population models and their applications in ecology is provided. The inclusion of diffusion and nonlocality terms in delay models has given these models more capability, enabling them to capture several ecological phenomena such as the Allee effect, waves of invasive species, and spatio-temporal competition between interacting species. Moreover, recent advances in the studies of traveling and stationary wave solutions of delay models are outlined. In particular, the existence of stationary and traveling wave solutions of delay models, stability of wave solutions, formation of wavefronts in the spatial domain, and possible outcomes of delay models are discussed.
[ { "created": "Mon, 16 Jan 2017 05:27:33 GMT", "version": "v1" } ]
2017-01-18
[ [ "Bani-Yaghoub", "Majid", "" ] ]
In this paper, a brief review of delay population models and their applications in ecology is provided. The inclusion of diffusion and nonlocality terms in delay models has given these models more capability, enabling them to capture several ecological phenomena such as the Allee effect, waves of invasive species, and spatio-temporal competition between interacting species. Moreover, recent advances in the studies of traveling and stationary wave solutions of delay models are outlined. In particular, the existence of stationary and traveling wave solutions of delay models, stability of wave solutions, formation of wavefronts in the spatial domain, and possible outcomes of delay models are discussed.
2209.00384
Anca Radulescu
Anca Radulescu, Michael Anderson
Gap junctions and synchronization clusters in the Thalamic Reticular Nuclei
13 pages, 9 figures, 17 references
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The Thalamic Reticular Nuclei (TRN) mediate processes like attentional modulation, sensory gating, and sleep spindles. The GABAergic interneurons in the TRN are known to exhibit widespread synchronized activity patterns. One known contribution to shaping synchronization and clustering patterns in the TRN comes from the presence of gap junctions. These are organized in specific connectivity architectures that have been identified empirically through dye and electrical coupling studies. Our study uses a computational model to implement realistic connectivity schemes in a small network. We explored the potential effects of the size, strength, and distribution of gap junctional clusters on the synchronization patterns in the TRN, and how these effects are modulated by other factors, such as the level of background inhibition.
[ { "created": "Thu, 1 Sep 2022 11:53:44 GMT", "version": "v1" }, { "created": "Fri, 10 Nov 2023 12:13:53 GMT", "version": "v2" } ]
2023-11-13
[ [ "Radulescu", "Anca", "" ], [ "Anderson", "Michael", "" ] ]
The Thalamic Reticular Nuclei (TRN) mediate processes like attentional modulation, sensory gating, and sleep spindles. The GABAergic interneurons in the TRN are known to exhibit widespread synchronized activity patterns. One known contribution to shaping synchronization and clustering patterns in the TRN comes from the presence of gap junctions. These are organized in specific connectivity architectures that have been identified empirically through dye and electrical coupling studies. Our study uses a computational model to implement realistic connectivity schemes in a small network. We explored the potential effects of the size, strength, and distribution of gap junctional clusters on the synchronization patterns in the TRN, and how these effects are modulated by other factors, such as the level of background inhibition.