column          type            range
id              stringlengths   9 to 13
submitter       stringlengths   4 to 48
authors         stringlengths   4 to 9.62k
title           stringlengths   4 to 343
comments        stringlengths   2 to 480
journal-ref     stringlengths   9 to 309
doi             stringlengths   12 to 138
report-no       stringclasses   277 values
categories      stringlengths   8 to 87
license         stringclasses   9 values
orig_abstract   stringlengths   27 to 3.76k
versions        listlengths     1 to 15
update_date     stringlengths   10 to 10
authors_parsed  listlengths     1 to 147
abstract        stringlengths   24 to 3.75k
1702.04515
Ivo Siekmann
Ivo Siekmann and Michael Bengfort and Horst Malchow
Invasion patterns in competitive systems
10 pages, 9 figures, 1 table, submitted to European Physical Journal Special Topics (EPJST)
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic reaction-diffusion equations are a popular modelling approach for studying interacting populations in a heterogeneous environment under the influence of environmental fluctuations. Although the theoretical basis of alternative models such as Fokker-Planck diffusion is no less convincing, movement of populations is commonly modelled using the diffusion law due to Fick. It is an interesting feature of Fokker-Planck diffusion that, for spatially varying diffusion coefficients, the stationary solution is not a homogeneous distribution, in contrast to Fickian diffusion. Instead, concentration accumulates in regions of low diffusivity and tends to lower levels in areas of high diffusivity. Thus, we may interpret the stationary distribution of Fokker-Planck diffusion as a reflection of different levels of habitat quality. Moreover, the most common model for environmental fluctuations, linear multiplicative noise, is based on the assumption that individuals respond independently to stochastic environmental fluctuations. For large population densities the assumption of independence is debatable, and the model further implies that noise intensities can increase to arbitrarily high levels. Therefore, instead of the commonly used linear multiplicative noise model, we implement environmental variability by an alternative nonlinear noise term which never exceeds a certain maximum noise intensity. With Fokker-Planck diffusion and the nonlinear noise model replacing the classical approaches, we investigate a simple invasive system based on the Lotka-Volterra competition model. We observe that the heterogeneous stationary distribution generated by Fokker-Planck diffusion generally facilitates the formation of segregated habitats of resident and invader. However, this segregation can be broken by nonlinear noise, leading to coexistence of resident and invader across the whole spatial domain.
[ { "created": "Wed, 15 Feb 2017 09:30:14 GMT", "version": "v1" } ]
2017-02-16
[ [ "Siekmann", "Ivo", "" ], [ "Bengfort", "Michael", "" ], [ "Malchow", "Horst", "" ] ]
Stochastic reaction-diffusion equations are a popular modelling approach for studying interacting populations in a heterogeneous environment under the influence of environmental fluctuations. Although the theoretical basis of alternative models such as Fokker-Planck diffusion is no less convincing, movement of populations is commonly modelled using the diffusion law due to Fick. It is an interesting feature of Fokker-Planck diffusion that, for spatially varying diffusion coefficients, the stationary solution is not a homogeneous distribution, in contrast to Fickian diffusion. Instead, concentration accumulates in regions of low diffusivity and tends to lower levels in areas of high diffusivity. Thus, we may interpret the stationary distribution of Fokker-Planck diffusion as a reflection of different levels of habitat quality. Moreover, the most common model for environmental fluctuations, linear multiplicative noise, is based on the assumption that individuals respond independently to stochastic environmental fluctuations. For large population densities the assumption of independence is debatable, and the model further implies that noise intensities can increase to arbitrarily high levels. Therefore, instead of the commonly used linear multiplicative noise model, we implement environmental variability by an alternative nonlinear noise term which never exceeds a certain maximum noise intensity. With Fokker-Planck diffusion and the nonlinear noise model replacing the classical approaches, we investigate a simple invasive system based on the Lotka-Volterra competition model. We observe that the heterogeneous stationary distribution generated by Fokker-Planck diffusion generally facilitates the formation of segregated habitats of resident and invader. However, this segregation can be broken by nonlinear noise, leading to coexistence of resident and invader across the whole spatial domain.
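The contrast between Fickian and Fokker-Planck diffusion described in this abstract can be checked numerically. The sketch below (grid size, diffusivities, and time step are illustrative choices, not values from the paper) evolves both diffusion laws with a step-function diffusivity and shows that only the Fokker-Planck stationary state accumulates density in the low-diffusivity region.

```python
import numpy as np

def evolve(u, D, dx, dt, steps, fickian):
    """Explicit finite-volume update with no-flux boundaries.
    Fickian flux: -D du/dx;  Fokker-Planck flux: -d(Du)/dx."""
    u = u.copy()
    for _ in range(steps):
        if fickian:
            Di = 0.5 * (D[1:] + D[:-1])              # interface diffusivity
            flux = -Di * np.diff(u) / dx
        else:
            flux = -np.diff(D * u) / dx
        flux = np.concatenate(([0.0], flux, [0.0]))  # reflecting walls
        u -= dt * np.diff(flux) / dx
    return u

n = 100
x = np.linspace(0.0, 1.0, n)
D = np.where(x < 0.5, 0.1, 1.0)       # low diffusivity on the left half
u0 = np.where(x < 0.5, 0.5, 1.5)      # arbitrary non-uniform start

u_fick = evolve(u0, D, dx=1 / n, dt=4e-5, steps=50000, fickian=True)
u_fp = evolve(u0, D, dx=1 / n, dt=4e-5, steps=50000, fickian=False)

print(f"Fickian left/right density ratio:       {u_fick[25] / u_fick[75]:.2f}")
print(f"Fokker-Planck left/right density ratio: {u_fp[25] / u_fp[75]:.2f}")
```

At stationarity the Fokker-Planck flux condition d(Du)/dx = 0 forces u proportional to 1/D, so the left/right ratio approaches D_right/D_left = 10, while the Fickian profile flattens toward a constant.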
1804.09667
Achim Schilling
Richard Gerum, Hinrich Rahlfs, Matthias Streb, Patrick Krauss, Claus Metzner, Konstantin Tziridis, Michael G\"unther, Holger Schulze, Walter Kellermann, Achim Schilling
Open(G)PIAS: An open source solution for the construction of a high-precision Acoustic-Startle-Response (ASR) setup for tinnitus screening and threshold estimation in rodents
null
null
10.3389/fnbeh.2019.00140
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The acoustic startle reflex (ASR), which can be induced by a loud sound stimulus, can be used as a versatile tool to, e.g., estimate hearing thresholds or identify subjective tinnitus percepts in rodents. These techniques are based on the fact that the ASR amplitude can be suppressed by a pre-stimulus of lower, non-startling intensity, an effect named pre-pulse inhibition (PPI). For hearing threshold estimation, pure-tone pre-stimuli of varying amplitudes are presented before an intense noise burst serving as the startle stimulus. The amount of suppression of the ASR amplitude as a function of the pre-stimulus intensity can be used as a behavioral correlate to determine the hearing ability. For tinnitus assessment, the pure-tone pre-stimulus is replaced by a gap of silence in a narrowband noise background, a paradigm termed GPIAS (gap-pre-pulse inhibition of the acoustic startle response). A proper application of these paradigms depends on a reliable measurement of the ASR amplitudes and an exact stimulus presentation in terms of frequency and intensity. Here we introduce a novel open-source solution for the construction of a low-cost ASR setup for the above-mentioned purposes. The complete software for data acquisition and stimulus presentation is written in Python 3.6 and is provided as an Anaconda package. Furthermore, we provide a construction plan for the sensory system based on low-cost hardware components. Exemplary data show that the ratios (1-PPI) of the pre- and post-trauma ASR amplitudes can be well described by a lognormal distribution, in good accordance with previous studies using already established setups. Hence, the open-access solution described here will help to further establish the ASR method in many laboratories and thus facilitate and standardize research in animal models of tinnitus or hearing loss.
[ { "created": "Wed, 25 Apr 2018 16:31:53 GMT", "version": "v1" } ]
2019-10-01
[ [ "Gerum", "Richard", "" ], [ "Rahlfs", "Hinrich", "" ], [ "Streb", "Matthias", "" ], [ "Krauss", "Patrick", "" ], [ "Metzner", "Claus", "" ], [ "Tziridis", "Konstantin", "" ], [ "Günther", "Michael", "" ], [ "Schulze", "Holger", "" ], [ "Kellermann", "Walter", "" ], [ "Schilling", "Achim", "" ] ]
The acoustic startle reflex (ASR), which can be induced by a loud sound stimulus, can be used as a versatile tool to, e.g., estimate hearing thresholds or identify subjective tinnitus percepts in rodents. These techniques are based on the fact that the ASR amplitude can be suppressed by a pre-stimulus of lower, non-startling intensity, an effect named pre-pulse inhibition (PPI). For hearing threshold estimation, pure-tone pre-stimuli of varying amplitudes are presented before an intense noise burst serving as the startle stimulus. The amount of suppression of the ASR amplitude as a function of the pre-stimulus intensity can be used as a behavioral correlate to determine the hearing ability. For tinnitus assessment, the pure-tone pre-stimulus is replaced by a gap of silence in a narrowband noise background, a paradigm termed GPIAS (gap-pre-pulse inhibition of the acoustic startle response). A proper application of these paradigms depends on a reliable measurement of the ASR amplitudes and an exact stimulus presentation in terms of frequency and intensity. Here we introduce a novel open-source solution for the construction of a low-cost ASR setup for the above-mentioned purposes. The complete software for data acquisition and stimulus presentation is written in Python 3.6 and is provided as an Anaconda package. Furthermore, we provide a construction plan for the sensory system based on low-cost hardware components. Exemplary data show that the ratios (1-PPI) of the pre- and post-trauma ASR amplitudes can be well described by a lognormal distribution, in good accordance with previous studies using already established setups. Hence, the open-access solution described here will help to further establish the ASR method in many laboratories and thus facilitate and standardize research in animal models of tinnitus or hearing loss.
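As a minimal illustration of the PPI quantity central to this abstract, the following sketch computes PPI from synthetic trial amplitudes (the amplitude values and distribution parameters are invented for illustration; the actual Open(G)PIAS package handles real acquisition and stimulus presentation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic peak ASR amplitudes (arbitrary units) for one session:
# startle-only trials vs. trials preceded by a pre-pulse (or gap).
startle_only = rng.lognormal(mean=1.0, sigma=0.3, size=50)
with_prepulse = rng.lognormal(mean=0.5, sigma=0.3, size=50)

# Pre-pulse inhibition: fractional suppression of the mean ASR amplitude.
ppi = 1.0 - with_prepulse.mean() / startle_only.mean()
print(f"PPI = {ppi:.2f}")   # positive when the pre-stimulus was detected

# The paper describes the ratios (1 - PPI) as lognormal, i.e. their
# logarithms should look roughly Gaussian.
log_ratios = np.log(with_prepulse / startle_only.mean())
print(f"log-ratio mean / sd: {log_ratios.mean():.2f} / {log_ratios.std():.2f}")
```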
1507.02452
Juergen Reingruber
J\'er\^ome Cartailler and J\"urgen Reingruber
Facilitated diffusion framework for transcription factor search with conformational changes
Article+SI, 3 figures, 25 pages total. To appear in Physical Biology 2015
null
10.1088/1478-3975/12/4/046012
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cellular responses often require the fast activation or repression of specific genes, which depends on Transcription Factors (TFs) that have to quickly find the promoters of these genes within a large genome. TFs search for their DNA promoter target by alternating between bulk diffusion and sliding along the DNA, a mechanism known as facilitated diffusion. We study a facilitated diffusion framework with switching between three search modes: a bulk mode and two sliding modes triggered by conformational changes between two protein conformations. In one conformation (search mode) the TF interacts unspecifically with the DNA backbone, resulting in fast sliding. In the other conformation (recognition mode) it interacts specifically and strongly with DNA base pairs, leading to slow displacement. From the bulk, a TF associates with the DNA at a random position that is correlated with the previous dissociation point, which implicitly is a function of the DNA structure. The target affinity depends on the conformation. We derive exact expressions for the mean first passage time (MFPT) to bind to the promoter and the conditional probability to bind before detaching when arriving at the promoter site. We systematically explore the parameter space and compare various search scenarios. We compare our results with experimental data for the dimeric Lac repressor search in E. coli bacteria. We find that a coiled DNA conformation is absolutely necessary for a fast MFPT. With frequent spontaneous conformational changes, a fast search time is achieved even when a TF becomes immobilized in the recognition state due to specific binding. We find an MFPT compatible with experimental data in the presence of a specific TF-DNA interaction energy that has a Gaussian distribution with a large variance.
[ { "created": "Thu, 9 Jul 2015 10:39:16 GMT", "version": "v1" } ]
2015-09-02
[ [ "Cartailler", "Jérôme", "" ], [ "Reingruber", "Jürgen", "" ] ]
Cellular responses often require the fast activation or repression of specific genes, which depends on Transcription Factors (TFs) that have to quickly find the promoters of these genes within a large genome. TFs search for their DNA promoter target by alternating between bulk diffusion and sliding along the DNA, a mechanism known as facilitated diffusion. We study a facilitated diffusion framework with switching between three search modes: a bulk mode and two sliding modes triggered by conformational changes between two protein conformations. In one conformation (search mode) the TF interacts unspecifically with the DNA backbone, resulting in fast sliding. In the other conformation (recognition mode) it interacts specifically and strongly with DNA base pairs, leading to slow displacement. From the bulk, a TF associates with the DNA at a random position that is correlated with the previous dissociation point, which implicitly is a function of the DNA structure. The target affinity depends on the conformation. We derive exact expressions for the mean first passage time (MFPT) to bind to the promoter and the conditional probability to bind before detaching when arriving at the promoter site. We systematically explore the parameter space and compare various search scenarios. We compare our results with experimental data for the dimeric Lac repressor search in E. coli bacteria. We find that a coiled DNA conformation is absolutely necessary for a fast MFPT. With frequent spontaneous conformational changes, a fast search time is achieved even when a TF becomes immobilized in the recognition state due to specific binding. We find an MFPT compatible with experimental data in the presence of a specific TF-DNA interaction energy that has a Gaussian distribution with a large variance.
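The three-mode search can be mimicked with a crude Monte Carlo on a discrete DNA lattice. This is not the authors' analytical framework (which yields exact MFPT expressions); it is a toy sketch with invented rates, meant only to show the ingredients: fast sliding in search mode, near-immobility in recognition mode, stochastic conformational switching, and bulk excursions that re-attach at a position correlated with the dissociation point.

```python
import numpy as np

rng = np.random.default_rng(1)

N, target = 60, 30        # DNA sites and promoter position (illustrative)
p_bulk = 0.01             # per-step probability of a bulk excursion
p_switch = 0.10           # per-step probability of a conformational change
p_bind = 0.5              # binding probability per step in recognition mode
jump_sd = 15              # spread of the correlated re-attachment (sites)

def first_passage_steps():
    x = int(rng.integers(N))          # random initial attachment site
    mode = "search"
    t = 0
    while True:
        t += 1
        if mode == "recognition" and x == target and rng.random() < p_bind:
            return t                                  # bound the promoter
        if rng.random() < p_bulk:                     # bulk excursion
            x = int(np.clip(x + rng.normal(0.0, jump_sd), 0, N - 1))
            mode = "search"
        elif rng.random() < p_switch:                 # switch conformation
            mode = "recognition" if mode == "search" else "search"
        elif mode == "search":                        # fast 1D sliding
            x = (x + int(rng.choice((-1, 1)))) % N
        # in recognition mode the TF barely moves, so we hold it in place

mfpt = np.mean([first_passage_steps() for _ in range(50)])
print(f"estimated MFPT: {mfpt:.0f} steps")
```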
1702.06538
Caitlyn Parmelee
Caitlyn M. Parmelee
Applications of Discrete Mathematics for Understanding Dynamics of Synapses and Networks in Neuroscience
PhD Thesis
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical modeling has broad applications in neuroscience, whether modeling the dynamics of a single synapse or an entire network of neurons. In Part I, we model vesicle replenishment and release at the photoreceptor synapse to better understand how visual information is processed. In Part II, we explore a simple model of neural networks with the goal of discovering how network structure shapes the behavior of the network. Fully understanding how visual information is processed requires an understanding of the way signals are transformed at the ribbon synapse of photoreceptor neurons. These synapses possess a ribbon-like structure capable of storing around 100 synaptic vesicles, allowing graded responses through the release of different numbers of vesicles in response to visual input. These responses depend critically on the ability of the ribbon to replenish itself as ribbon sites empty upon release. The rate of vesicle replenishment is thus an important factor in shaping neural coding in the retina. In collaboration with experimental neuroscientists, we developed a mathematical model to describe the dynamics of vesicle release and replenishment at the ribbon synapse. To learn more about how network architecture shapes the dynamics of the network, we study a specific type of threshold-linear network that is constructed from a simple directed graph. The network construction guarantees that differences in dynamics arise solely from differences in the connectivity of the underlying graph. By design, the activity of these networks is bounded and there are no stable fixed points. Computational experiments show that most of these networks yield limit cycles where the neurons fire in sequence. We devised an algorithm to predict the sequence of firing using the structure of the underlying graph. Using the algorithm, we classify all the networks of this type on five or fewer nodes.
[ { "created": "Tue, 21 Feb 2017 18:38:58 GMT", "version": "v1" } ]
2017-02-23
[ [ "Parmelee", "Caitlyn M.", "" ] ]
Mathematical modeling has broad applications in neuroscience, whether modeling the dynamics of a single synapse or an entire network of neurons. In Part I, we model vesicle replenishment and release at the photoreceptor synapse to better understand how visual information is processed. In Part II, we explore a simple model of neural networks with the goal of discovering how network structure shapes the behavior of the network. Fully understanding how visual information is processed requires an understanding of the way signals are transformed at the ribbon synapse of photoreceptor neurons. These synapses possess a ribbon-like structure capable of storing around 100 synaptic vesicles, allowing graded responses through the release of different numbers of vesicles in response to visual input. These responses depend critically on the ability of the ribbon to replenish itself as ribbon sites empty upon release. The rate of vesicle replenishment is thus an important factor in shaping neural coding in the retina. In collaboration with experimental neuroscientists, we developed a mathematical model to describe the dynamics of vesicle release and replenishment at the ribbon synapse. To learn more about how network architecture shapes the dynamics of the network, we study a specific type of threshold-linear network that is constructed from a simple directed graph. The network construction guarantees that differences in dynamics arise solely from differences in the connectivity of the underlying graph. By design, the activity of these networks is bounded and there are no stable fixed points. Computational experiments show that most of these networks yield limit cycles where the neurons fire in sequence. We devised an algorithm to predict the sequence of firing using the structure of the underlying graph. Using the algorithm, we classify all the networks of this type on five or fewer nodes.
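The graph-based threshold-linear construction can be sketched concretely. The snippet below uses the combinatorial convention common in this line of work (weak inhibition -1+eps along graph edges, strong inhibition -1-delta otherwise); the particular values eps = 0.25, delta = 0.5, theta = 1 are illustrative choices, not taken from the thesis. For a directed 3-cycle the bounded dynamics settle into sustained oscillation with neurons firing in sequence, matching the behavior described above.

```python
import numpy as np

# Threshold-linear network built from a directed graph: W[i, j] is the
# weight from neuron j to neuron i; edges make inhibition slightly weaker.
eps, delta, theta = 0.25, 0.5, 1.0
edges = {(0, 1), (1, 2), (2, 0)}       # directed 3-cycle: 0 -> 1 -> 2 -> 0
n = 3

W = np.full((n, n), -1.0 - delta)      # default: strong inhibition
for (j, i) in edges:                   # edge j -> i: weak inhibition
    W[i, j] = -1.0 + eps
np.fill_diagonal(W, 0.0)

def simulate(x0, T=50.0, dt=0.01):
    """Euler integration of dx/dt = -x + [W x + theta]_+ ."""
    x, traj = np.array(x0, float), []
    for _ in range(int(T / dt)):
        x = x + dt * (-x + np.maximum(W @ x + theta, 0.0))
        traj.append(x.copy())
    return np.array(traj)

traj = simulate([0.2, 0.1, 0.0])
tail = traj[-2000:]                    # discard the transient
print(f"peak activity: {tail.max():.2f} (bounded above by theta = {theta})")
print("all neurons oscillating:", bool(tail.std(axis=0).min() > 0.01))
```

Boundedness is immediate from the construction: all weights are non-positive, so the rectified input never exceeds theta, and activity stays in [0, theta].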
1303.1302
David Lukatsky
Ariel Afek and David B. Lukatsky
Genome-wide organization of eukaryotic pre-initiation complex is influenced by nonconsensus protein-DNA binding
null
Biophysical Journal 104, 1107 (2013)
10.1016/j.bpj.2013.01.038
null
q-bio.BM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genome-wide binding preferences of the key components of the eukaryotic pre-initiation complex (PIC) have been recently measured with high resolution in Saccharomyces cerevisiae by Rhee and Pugh (Nature (2012) 483:295-301). Yet the rules determining the PIC binding specificity remain poorly understood. In this study we show that nonconsensus protein-DNA binding significantly influences PIC binding preferences. We estimate that such nonconsensus binding contributes statistically at least 2-3 kcal/mol (on average) of additional attractive free energy per protein, per core promoter region. The predicted attractive effect is particularly strong at repeated poly(dA:dT) and poly(dC:dG) tracts. Overall, the computed free energy landscape of nonconsensus protein-DNA binding shows strong correlation with the measured genome-wide PIC occupancy. Remarkably, statistical PIC binding preferences to both TFIID-dominated and SAGA-dominated genes correlate with the nonconsensus free energy landscape, yet these two groups of genes are distinguishable based on the average free energy profiles. We suggest that the predicted nonconsensus binding mechanism provides a genome-wide background for specific promoter elements, such as transcription factor binding sites, TATA-like elements, and specific binding of the PIC components to nucleosomes. We also show that nonconsensus binding influences transcriptional frequency genome-wide.
[ { "created": "Wed, 6 Mar 2013 11:12:04 GMT", "version": "v1" } ]
2013-03-07
[ [ "Afek", "Ariel", "" ], [ "Lukatsky", "David B.", "" ] ]
Genome-wide binding preferences of the key components of the eukaryotic pre-initiation complex (PIC) have been recently measured with high resolution in Saccharomyces cerevisiae by Rhee and Pugh (Nature (2012) 483:295-301). Yet the rules determining the PIC binding specificity remain poorly understood. In this study we show that nonconsensus protein-DNA binding significantly influences PIC binding preferences. We estimate that such nonconsensus binding contributes statistically at least 2-3 kcal/mol (on average) of additional attractive free energy per protein, per core promoter region. The predicted attractive effect is particularly strong at repeated poly(dA:dT) and poly(dC:dG) tracts. Overall, the computed free energy landscape of nonconsensus protein-DNA binding shows strong correlation with the measured genome-wide PIC occupancy. Remarkably, statistical PIC binding preferences to both TFIID-dominated and SAGA-dominated genes correlate with the nonconsensus free energy landscape, yet these two groups of genes are distinguishable based on the average free energy profiles. We suggest that the predicted nonconsensus binding mechanism provides a genome-wide background for specific promoter elements, such as transcription factor binding sites, TATA-like elements, and specific binding of the PIC components to nucleosomes. We also show that nonconsensus binding influences transcriptional frequency genome-wide.
1206.6490
Bob Eisenberg
Bob Eisenberg
Living Devices
Typos Corrected in Second Version
null
null
null
q-bio.OT physics.hist-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The physiological tradition of biological research analyzes biological systems using reduced descriptions much as an engineer uses a 'black box' description of an amplifier. Simple models have been used by physiologists for a very long time. Physiologists have successfully analyzed a broad range of biological systems using a 'device-oriented' approach similar to the approach an engineer would use to investigate her devices. The present generation views biology through the powerful lenses of structural and (molecular) dynamic analysis, understandably enough because of the beauty and power of the analysis, and the ease of using these structures with present freely available software. The problem is that these powerful lenses offer such magnification that the engineering approach cannot be seen. High magnification means limited field of view, because the (spatial) dynamic range cannot cover everything. The function of the structures and molecular dynamics cannot be seen in the work of many biologists, probably because function cannot be immediately seen in the structures and molecular dynamics they compute. It is just as important for biologists to measure the inputs and outputs of their systems as it is to measure their structures. It seems clear, at least to one physiologist, that this research will be catalyzed by assuming that most biological systems are devices that can be analyzed with the same strategies one would use to analyze engineering devices. Thinking today about your biological preparation as a device tells you what experiments to do tomorrow. An important task for many of us is to transmit the physiological tradition to the next generation of biophysicists to help them adapt traditional questions to the new length scales and techniques of molecular and atomic biology.
[ { "created": "Wed, 27 Jun 2012 21:14:56 GMT", "version": "v1" }, { "created": "Thu, 5 Jul 2012 12:26:37 GMT", "version": "v2" } ]
2012-07-06
[ [ "Eisenberg", "Bob", "" ] ]
The physiological tradition of biological research analyzes biological systems using reduced descriptions much as an engineer uses a 'black box' description of an amplifier. Simple models have been used by physiologists for a very long time. Physiologists have successfully analyzed a broad range of biological systems using a 'device-oriented' approach similar to the approach an engineer would use to investigate her devices. The present generation views biology through the powerful lenses of structural and (molecular) dynamic analysis, understandably enough because of the beauty and power of the analysis, and the ease of using these structures with present freely available software. The problem is that these powerful lenses offer such magnification that the engineering approach cannot be seen. High magnification means limited field of view, because the (spatial) dynamic range cannot cover everything. The function of the structures and molecular dynamics cannot be seen in the work of many biologists, probably because function cannot be immediately seen in the structures and molecular dynamics they compute. It is just as important for biologists to measure the inputs and outputs of their systems as it is to measure their structures. It seems clear, at least to one physiologist, that this research will be catalyzed by assuming that most biological systems are devices that can be analyzed with the same strategies one would use to analyze engineering devices. Thinking today about your biological preparation as a device tells you what experiments to do tomorrow. An important task for many of us is to transmit the physiological tradition to the next generation of biophysicists to help them adapt traditional questions to the new length scales and techniques of molecular and atomic biology.
1412.1579
Musa Mammadov
Robin J. Evans and Musa Mammadov
Dynamics of Ebola epidemics in West Africa 2014
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the dynamics of Ebola virus transmission in West Africa during 2014. The reproduction numbers for the total period of the epidemic and for consecutive time intervals are estimated based on a newly suggested linear model. It contains one major variable, the average time of infectiousness (time from onset to hospitalization), which is considered as a parameter for controlling the future dynamics of the epidemic. Numerical implementations are carried out on data collected from three countries (Guinea, Sierra Leone and Liberia) as well as the total data collected worldwide. Predictions are provided by considering different scenarios involving the average times of infectiousness for the next few months, and the end of the current epidemic is estimated according to each scenario.
[ { "created": "Thu, 4 Dec 2014 08:05:47 GMT", "version": "v1" }, { "created": "Tue, 9 Dec 2014 04:13:47 GMT", "version": "v2" } ]
2014-12-10
[ [ "Evans", "Robin J.", "" ], [ "Mammadov", "Musa", "" ] ]
This paper investigates the dynamics of Ebola virus transmission in West Africa during 2014. The reproduction numbers for the total period of the epidemic and for consecutive time intervals are estimated based on a newly suggested linear model. It contains one major variable, the average time of infectiousness (time from onset to hospitalization), which is considered as a parameter for controlling the future dynamics of the epidemic. Numerical implementations are carried out on data collected from three countries (Guinea, Sierra Leone and Liberia) as well as the total data collected worldwide. Predictions are provided by considering different scenarios involving the average times of infectiousness for the next few months, and the end of the current epidemic is estimated according to each scenario.
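The role of the average time of infectiousness as a control parameter can be illustrated with a standard back-of-envelope calculation (this is not the paper's linear model): fit an exponential growth rate r to case counts, then translate it into a reproduction number via the common linearization R = 1 + r * tau. The weekly counts below are synthetic.

```python
import numpy as np

weeks = np.arange(8)
cases = np.array([10, 14, 21, 30, 44, 65, 95, 140])   # synthetic counts

# Slope of log(cases) vs. time gives the exponential growth rate.
r_per_week, intercept = np.polyfit(weeks, np.log(cases), 1)
r_per_day = r_per_week / 7.0

# Scenarios for tau, the average time from onset to hospitalization.
for tau in (3.0, 5.0, 7.0):
    print(f"tau = {tau} days -> R = {1 + r_per_day * tau:.2f}")
```

Shortening tau (faster hospitalization) pulls R toward 1, which is the qualitative sense in which the onset-to-hospitalization time controls the epidemic's future course.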
1707.07247
Baohua Zhou
Baohua Zhou, David Hofmann, Itai Pinkoviezky, Samuel J. Sober, and Ilya Nemenman
Chance, long tails, and inference: a non-Gaussian, Bayesian theory of vocal learning in songbirds
null
null
10.1073/pnas.1713020115
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Traditional theories of sensorimotor learning posit that animals use sensory error signals to find the optimal motor command in the face of Gaussian sensory and motor noise. However, most such theories cannot explain common behavioral observations, for example that smaller sensory errors are more readily corrected than larger errors and that large abrupt (but not gradually introduced) errors lead to weak learning. Here we propose a new theory of sensorimotor learning that explains these observations. The theory posits that the animal learns an entire probability distribution of motor commands rather than trying to arrive at a single optimal command, and that learning arises via Bayesian inference when new sensory information becomes available. We test this theory using data from a songbird, the Bengalese finch, that is adapting the pitch (fundamental frequency) of its song following perturbations of auditory feedback using miniature headphones. We observe the distribution of the sung pitches to have long, non-Gaussian tails, which, within our theory, explains the observed dynamics of learning. Further, the theory makes surprising predictions about the dynamics of the shape of the pitch distribution, which we confirm experimentally.
[ { "created": "Sun, 23 Jul 2017 04:43:54 GMT", "version": "v1" } ]
2022-06-08
[ [ "Zhou", "Baohua", "" ], [ "Hofmann", "David", "" ], [ "Pinkoviezky", "Itai", "" ], [ "Sober", "Samuel J.", "" ], [ "Nemenman", "Ilya", "" ] ]
Traditional theories of sensorimotor learning posit that animals use sensory error signals to find the optimal motor command in the face of Gaussian sensory and motor noise. However, most such theories cannot explain common behavioral observations, for example that smaller sensory errors are more readily corrected than larger errors and that large abrupt (but not gradually introduced) errors lead to weak learning. Here we propose a new theory of sensorimotor learning that explains these observations. The theory posits that the animal learns an entire probability distribution of motor commands rather than trying to arrive at a single optimal command, and that learning arises via Bayesian inference when new sensory information becomes available. We test this theory using data from a songbird, the Bengalese finch, that is adapting the pitch (fundamental frequency) of its song following perturbations of auditory feedback using miniature headphones. We observe the distribution of the sung pitches to have long, non-Gaussian tails, which, within our theory, explains the observed dynamics of learning. Further, the theory makes surprising predictions about the dynamics of the shape of the pitch distribution, which we confirm experimentally.
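The central qualitative claim of this abstract (small errors are corrected proportionally more than large ones) follows from Bayesian updating with a heavy-tailed sensory likelihood, and can be reproduced on a grid. The Cauchy likelihood and unit widths below are illustrative stand-ins for the paper's fitted distributions.

```python
import numpy as np

z = np.linspace(-30.0, 30.0, 6001)      # grid of candidate pitch corrections
prior = np.exp(-0.5 * z**2)              # Gaussian prior belief (sd = 1)

def correction(err, gamma=1.0):
    """Posterior-mean correction after observing a sensory error `err`
    under a heavy-tailed (Cauchy) likelihood with scale `gamma`."""
    likelihood = 1.0 / (1.0 + ((z - err) / gamma) ** 2)
    post = prior * likelihood
    post /= post.sum()                   # normalize on the uniform grid
    return float((z * post).sum())

small, large = correction(1.0), correction(8.0)
print(f"correction after a 1-unit error: {small:.3f}")
print(f"correction after an 8-unit error: {large:.3f}")
# The fractional correction (shift / error) shrinks for the large,
# abrupt error, mirroring the weak learning observed experimentally.
```

With a Gaussian likelihood instead, the correction would keep growing in proportion to the error, which is exactly the traditional-theory behavior the paper argues against.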
1802.01996
Donald Forsdyke Dr.
Donald R. Forsdyke
When acting as a reproductive barrier for sympatric speciation, hybrid sterility can only be primary
22 pages, 3 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
In many animals parental gametes unite to form a zygote that develops into an adult with gonads that, in turn, produce gametes. Interruption of this germinal cycle by prezygotic or postzygotic reproductive barriers can result in two independent cycles, each with the potential to evolve into a new species. When the speciation process is complete, members of each species are fully reproductively isolated from those of the other. During speciation a primary barrier may be supported and eventually superseded by a later-appearing secondary barrier. For those holding certain cases of prezygotic isolation to be primary (e.g. an elephant cannot copulate with a mouse), the onus is to show that they had not been preceded over evolutionary time by periods of postzygotic hybrid inviability (genically determined) or sterility (genically or chromosomally determined). Likewise, the onus is upon those holding cases of hybrid inviability to be primary (e.g. Dobzhansky-Muller epistatic incompatibilities) to show that they had not been preceded by periods, however brief, of hybrid sterility. The latter, when acting as a sympatric barrier causing reproductive isolation, can only be primary. In many cases, hybrid sterility may result from incompatibilities between parental chromosomes that attempt to pair during meiosis in the gonad of their offspring (Winge-Crowther-Bateson incompatibilities). While WCB incompatibilities have long been observed on a microscopic scale, there is growing evidence for a role of finer, dispersed DNA sequence differences.
[ { "created": "Tue, 6 Feb 2018 15:17:17 GMT", "version": "v1" }, { "created": "Sun, 18 Feb 2018 19:43:03 GMT", "version": "v2" }, { "created": "Mon, 24 Sep 2018 19:45:03 GMT", "version": "v3" }, { "created": "Sat, 9 Mar 2019 18:59:29 GMT", "version": "v4" } ]
2019-03-12
[ [ "Forsdyke", "Donald R.", "" ] ]
In many animals parental gametes unite to form a zygote that develops into an adult with gonads that, in turn, produce gametes. Interruption of this germinal cycle by prezygotic or postzygotic reproductive barriers can result in two independent cycles, each with the potential to evolve into a new species. When the speciation process is complete, members of each species are fully reproductively isolated from those of the other. During speciation a primary barrier may be supported and eventually superseded by a later appearing secondary barrier. For those holding certain cases of prezygotic isolation to be primary (e.g. elephant cannot copulate with mouse), the onus is to show that they had not been preceded over evolutionary time by periods of postzygotic hybrid inviability (genically determined) or sterility (genically or chromosomally determined). Likewise, the onus is upon those holding cases of hybrid inviability to be primary (e.g. Dobzhansky-Muller epistatic incompatibilities), to show that they had not been preceded by periods, however brief, of hybrid sterility. The latter, when acting as a sympatric barrier causing reproductive isolation, can only be primary. In many cases, hybrid sterility may result from incompatibilities between parental chromosomes that attempt to pair during meiosis in the gonad of their offspring (Winge-Crowther-Bateson (WCB) incompatibilities). While WCB incompatibilities have long been observed on a microscopic scale, there is growing evidence for a role of dispersed finer DNA sequence differences.
2404.03718
Diogo L. Pires
Diogo L. Pires, Mark Broom
The rules of multiplayer cooperation in networks of communities
28 pages, 6 figures, 4 tables
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
Community organization permeates both social and biological complex systems. To study its interplay with behavior emergence, we model mobile structured populations with multiplayer interactions. We derive general analytical methods for evolutionary dynamics under high home fidelity when populations self-organize into networks of asymptotically isolated communities. In this limit, community organization dominates over the network structure and emerging behavior is independent of network topology. We obtain the rules of multiplayer cooperation in networks of communities for different types of social dilemmas. The success of cooperation is a result of the benefits shared amongst communal cooperators outperforming the benefits reaped by defectors in mixed communities. Under weak selection, cooperation can evolve and be stable for any size (Q) and number (M) of communities if the reward-to-cost ratio (V/K) of public goods is higher than a critical value. Community organization is a solid mechanism for sustaining the evolution of cooperation under public goods dilemmas, particularly when populations are organized into a higher number of smaller communities. Contrary to public goods dilemmas relating to production, the multiplayer Hawk-Dove (HD) dilemma is a commons dilemma focusing on the fair consumption of preexisting resources. This game holds mixed results but tends to favour cooperation under larger communities, highlighting that the two types of social dilemmas might lead to solid differences in the behaviour adopted under community structure.
[ { "created": "Thu, 4 Apr 2024 18:00:00 GMT", "version": "v1" } ]
2024-04-08
[ [ "Pires", "Diogo L.", "" ], [ "Broom", "Mark", "" ] ]
Community organization permeates both social and biological complex systems. To study its interplay with behavior emergence, we model mobile structured populations with multiplayer interactions. We derive general analytical methods for evolutionary dynamics under high home fidelity when populations self-organize into networks of asymptotically isolated communities. In this limit, community organization dominates over the network structure and emerging behavior is independent of network topology. We obtain the rules of multiplayer cooperation in networks of communities for different types of social dilemmas. The success of cooperation is a result of the benefits shared amongst communal cooperators outperforming the benefits reaped by defectors in mixed communities. Under weak selection, cooperation can evolve and be stable for any size (Q) and number (M) of communities if the reward-to-cost ratio (V/K) of public goods is higher than a critical value. Community organization is a solid mechanism for sustaining the evolution of cooperation under public goods dilemmas, particularly when populations are organized into a higher number of smaller communities. Contrary to public goods dilemmas relating to production, the multiplayer Hawk-Dove (HD) dilemma is a commons dilemma focusing on the fair consumption of preexisting resources. This game holds mixed results but tends to favour cooperation under larger communities, highlighting that the two types of social dilemmas might lead to solid differences in the behaviour adopted under community structure.
2007.11968
Justyna Signerska-Rynkowska
Jonathan Rubin, Justyna Signerska-Rynkowska, Jonathan D. Touboul
Type III Responses to Transient Inputs in Hybrid Nonlinear Neuron Models
null
null
null
null
q-bio.NC math.DS nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experimental characterization of neuronal dynamics involves recording both spontaneous activity patterns and responses to transient and sustained inputs. While much theoretical attention has been devoted to the spontaneous activity of neurons, less is known about the dynamic mechanisms shaping their responses to transient inputs, although these bear significant physiological relevance. Here, we study responses to transient inputs in a widely used class of neuron models (nonlinear adaptive hybrid models) well-known to reproduce a number of biologically realistic behaviors. We focus on responses to transient inputs that have been previously associated with Type III neurons, arguably the least studied category in Hodgkin's classification, which are those neurons that never exhibit continuous firing in response to sustained excitatory currents. The two phenomena that we study are post-inhibitory facilitation (PIF), in which an otherwise subthreshold excitatory input can induce a spike if it is applied with proper timing after an inhibitory pulse, and slope detection, in which a neuron spikes to a transient input only when the input's rate of change is in a specific, bounded range. We analyze the origin of these phenomena in nonlinear hybrid models and provide a geometric characterization of dynamical structures associated with PIF in the system and an analytical study of slope detection for tent inputs. While the necessary and sufficient conditions for these behaviors are easily satisfied in neurons with Type III excitability, our proofs are quite general and valid for neurons that do not exhibit Type III excitability as well. This study therefore provides a framework for the mathematical analysis of these responses to transient inputs associated with Type III neurons in other systems and for advancing our understanding of these systems' computational properties.
[ { "created": "Thu, 23 Jul 2020 12:32:51 GMT", "version": "v1" } ]
2020-07-24
[ [ "Rubin", "Jonathan", "" ], [ "Signerska-Rynkowska", "Justyna", "" ], [ "Touboul", "Jonathan D.", "" ] ]
Experimental characterization of neuronal dynamics involves recording both spontaneous activity patterns and responses to transient and sustained inputs. While much theoretical attention has been devoted to the spontaneous activity of neurons, less is known about the dynamic mechanisms shaping their responses to transient inputs, although these bear significant physiological relevance. Here, we study responses to transient inputs in a widely used class of neuron models (nonlinear adaptive hybrid models) well-known to reproduce a number of biologically realistic behaviors. We focus on responses to transient inputs that have been previously associated with Type III neurons, arguably the least studied category in Hodgkin's classification, which are those neurons that never exhibit continuous firing in response to sustained excitatory currents. The two phenomena that we study are post-inhibitory facilitation (PIF), in which an otherwise subthreshold excitatory input can induce a spike if it is applied with proper timing after an inhibitory pulse, and slope detection, in which a neuron spikes to a transient input only when the input's rate of change is in a specific, bounded range. We analyze the origin of these phenomena in nonlinear hybrid models and provide a geometric characterization of dynamical structures associated with PIF in the system and an analytical study of slope detection for tent inputs. While the necessary and sufficient conditions for these behaviors are easily satisfied in neurons with Type III excitability, our proofs are quite general and valid for neurons that do not exhibit Type III excitability as well. This study therefore provides a framework for the mathematical analysis of these responses to transient inputs associated with Type III neurons in other systems and for advancing our understanding of these systems' computational properties.
2304.01346
Lautaro Estienne
Lautaro Estienne
Towards an Hybrid Hodgkin-Huxley Action Potential Generation Model
null
null
10.1109/RPIC53795.2021.9648523
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical models for the generation of the action potential can improve the understanding of physiological mechanisms that are a consequence of the electrical activity in neurons. In such models, some equations involving empirically obtained functions of the membrane potential are usually defined. The best known of these models, the Hodgkin-Huxley model, is an example of this paradigm since it defines the conductances of ion channels in terms of the opening and closing rates of each type of gate present in the channels. These functions need to be derived from laboratory measurements that are often very expensive and produce little data because they involve a time-space-independent measurement of the voltage in a single channel of the cell membrane. In this work, we investigate the possibility of finding the Hodgkin-Huxley model's parametric functions using only two simple measurements (the membrane voltage as a function of time and the injected current that triggered that voltage) and applying Deep Learning methods to estimate these functions. This would result in a hybrid model of the action potential generation composed of the original Hodgkin-Huxley equations and an Artificial Neural Network that requires a small set of easy-to-perform measurements to be trained. Experiments were carried out using data generated from the original Hodgkin-Huxley model, and results show that a simple two-layer artificial neural network (ANN) architecture trained on a minimal amount of data can learn to model some of the fundamental properties of the action potential generation by estimating the model's rate functions.
[ { "created": "Wed, 15 Mar 2023 22:39:23 GMT", "version": "v1" } ]
2023-04-05
[ [ "Estienne", "Lautaro", "" ] ]
Mathematical models for the generation of the action potential can improve the understanding of physiological mechanisms that are a consequence of the electrical activity in neurons. In such models, some equations involving empirically obtained functions of the membrane potential are usually defined. The best known of these models, the Hodgkin-Huxley model, is an example of this paradigm since it defines the conductances of ion channels in terms of the opening and closing rates of each type of gate present in the channels. These functions need to be derived from laboratory measurements that are often very expensive and produce little data because they involve a time-space-independent measurement of the voltage in a single channel of the cell membrane. In this work, we investigate the possibility of finding the Hodgkin-Huxley model's parametric functions using only two simple measurements (the membrane voltage as a function of time and the injected current that triggered that voltage) and applying Deep Learning methods to estimate these functions. This would result in a hybrid model of the action potential generation composed of the original Hodgkin-Huxley equations and an Artificial Neural Network that requires a small set of easy-to-perform measurements to be trained. Experiments were carried out using data generated from the original Hodgkin-Huxley model, and results show that a simple two-layer artificial neural network (ANN) architecture trained on a minimal amount of data can learn to model some of the fundamental properties of the action potential generation by estimating the model's rate functions.
2205.14945
Pablo Cruz-Morales
Pablo Cruz-Morales
Functional and evolutionary genomics of the Streptomyces metabolism
175 pages
PhD thesis, published by Cinvestav-LANGEBIO MEXICO 2013
null
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-nd/4.0/
This thesis focuses on the study of the evolution of the metabolic repertoire of Streptomyces, which are renowned as proficient producers of bioactive Natural Products (NPs). The main goal of my work was to contribute to the understanding of the evolutionary mechanisms behind the evolution of NP biosynthetic pathways. Specifically, the development of a bioinformatic method that helps in the discovery of new NP biosynthetic pathways from actinobacterial genome sequences with emphasis on members of the genus Streptomyces. I developed this method using a comparative and functional genomics perspective. My studies indicate that central metabolic enzymes were expanded in a genus-specific manner in Actinobacteria, and that they have been extensively recruited for the biosynthesis of NPs. Based on these observations, I developed EvoMining, a bioinformatic pipeline for the identification of novel biosynthetic pathways in microbial genomes. Using EvoMining, several new NP biosynthetic pathways have been predicted in different members of the phylum Actinobacteria, including the model organism S. lividans 66. To test this approach, the genome sequence of this model strain was obtained, and its analysis led to the discovery of an unprecedented system for peptide bond formation, as well as a biosynthetic pathway for an arsenic-containing metabolite. Moreover, this work also led to the identification of expansions on a conserved metabolic node in the glycolytic pathway of Streptomyces. These expansions occurred before the radiation of Streptomyces and are concomitant with the evolution of their capability to produce NPs. Experimental analyses indicate that this node evolved to mediate the interplay between central and NP metabolism.
[ { "created": "Mon, 30 May 2022 09:25:35 GMT", "version": "v1" } ]
2022-05-31
[ [ "Cruz-Morales", "Pablo", "" ] ]
This thesis focuses on the study of the evolution of the metabolic repertoire of Streptomyces, which are renowned as proficient producers of bioactive Natural Products (NPs). The main goal of my work was to contribute to the understanding of the evolutionary mechanisms behind the evolution of NP biosynthetic pathways. Specifically, the development of a bioinformatic method that helps in the discovery of new NP biosynthetic pathways from actinobacterial genome sequences with emphasis on members of the genus Streptomyces. I developed this method using a comparative and functional genomics perspective. My studies indicate that central metabolic enzymes were expanded in a genus-specific manner in Actinobacteria, and that they have been extensively recruited for the biosynthesis of NPs. Based on these observations, I developed EvoMining, a bioinformatic pipeline for the identification of novel biosynthetic pathways in microbial genomes. Using EvoMining, several new NP biosynthetic pathways have been predicted in different members of the phylum Actinobacteria, including the model organism S. lividans 66. To test this approach, the genome sequence of this model strain was obtained, and its analysis led to the discovery of an unprecedented system for peptide bond formation, as well as a biosynthetic pathway for an arsenic-containing metabolite. Moreover, this work also led to the identification of expansions on a conserved metabolic node in the glycolytic pathway of Streptomyces. These expansions occurred before the radiation of Streptomyces and are concomitant with the evolution of their capability to produce NPs. Experimental analyses indicate that this node evolved to mediate the interplay between central and NP metabolism.
2109.08261
Alexander Kaiser
Alexander D. Kaiser, Rohan Shad, Nicole Schiavone, William Hiesinger, Alison L. Marsden
Controlled Comparison of Simulated Hemodynamics across Tricuspid and Bicuspid Aortic Valves
null
null
10.1007/s10439-022-02983-4
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bicuspid aortic valve is the most common congenital heart defect, affecting 1-2% of the global population. Patients with bicuspid valves frequently develop dilation and aneurysms of the ascending aorta. Both hemodynamic and genetic factors are believed to contribute to dilation, yet the precise mechanism underlying this progression remains under debate. Controlled comparisons of hemodynamics in patients with different forms of bicuspid valve disease are challenging because of confounding factors, and simulations offer the opportunity for direct and systematic comparisons. Using fluid-structure interaction simulations, we simulate flows through multiple aortic valve models in a patient-specific geometry. The aortic geometry is based on a healthy patient with no known aortic or valvular disease, which allows us to isolate the hemodynamic consequences of changes to the valve alone. Four fully-passive, elastic model valves are studied: a tricuspid valve and bicuspid valves with fusion of the left- and right-, right- and non-, and non- and left-coronary cusps. The resulting tricuspid flow is relatively uniform, with little secondary or reverse flow, and little to no pressure gradient across the valve. The bicuspid cases show localized jets of forward flow, excess streamwise momentum, elevated secondary and reverse flow, and clinically significant levels of stenosis. Localized high flow rates correspond to locations of dilation observed in patients, with the location related to which valve cusps are fused. Thus, the simulations support the hypothesis that chronic exposure to high local flow contributes to localized dilation and aneurysm formation.
[ { "created": "Fri, 17 Sep 2021 00:45:25 GMT", "version": "v1" }, { "created": "Wed, 13 Oct 2021 23:07:56 GMT", "version": "v2" }, { "created": "Fri, 20 May 2022 20:33:02 GMT", "version": "v3" } ]
2022-08-26
[ [ "Kaiser", "Alexander D.", "" ], [ "Shad", "Rohan", "" ], [ "Schiavone", "Nicole", "" ], [ "Hiesinger", "William", "" ], [ "Marsden", "Alison L.", "" ] ]
Bicuspid aortic valve is the most common congenital heart defect, affecting 1-2% of the global population. Patients with bicuspid valves frequently develop dilation and aneurysms of the ascending aorta. Both hemodynamic and genetic factors are believed to contribute to dilation, yet the precise mechanism underlying this progression remains under debate. Controlled comparisons of hemodynamics in patients with different forms of bicuspid valve disease are challenging because of confounding factors, and simulations offer the opportunity for direct and systematic comparisons. Using fluid-structure interaction simulations, we simulate flows through multiple aortic valve models in a patient-specific geometry. The aortic geometry is based on a healthy patient with no known aortic or valvular disease, which allows us to isolate the hemodynamic consequences of changes to the valve alone. Four fully-passive, elastic model valves are studied: a tricuspid valve and bicuspid valves with fusion of the left- and right-, right- and non-, and non- and left-coronary cusps. The resulting tricuspid flow is relatively uniform, with little secondary or reverse flow, and little to no pressure gradient across the valve. The bicuspid cases show localized jets of forward flow, excess streamwise momentum, elevated secondary and reverse flow, and clinically significant levels of stenosis. Localized high flow rates correspond to locations of dilation observed in patients, with the location related to which valve cusps are fused. Thus, the simulations support the hypothesis that chronic exposure to high local flow contributes to localized dilation and aneurysm formation.
1805.04975
Brian Lee
Brian C. Lee, Meng Kuan Lin, Yan Fu, Junichi Hata, Michael I. Miller, Partha P. Mitra
Multimodal Cross-registration and Quantification of Metric Distortions in Whole Brain Histology of Marmoset using Diffeomorphic Mappings
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Whole brain neuroanatomy using tera-voxel light-microscopic data sets is of much current interest. A fundamental problem in this field is the mapping of individual brain data sets to a reference space. Previous work has not rigorously quantified the distortions in brain geometry from in-vivo to ex-vivo brains due to the tissue processing, which will be important when computing properties such as local cell and process densities at the voxel level in creating reference brain maps. Further, existing approaches focus on registering uni-modal volumetric data; however, given the increasing interest in the marmoset model for neuroscience research, it is necessary to cross-register multi-modal data sets including MRIs and multiple histological series that can help address individual variations in brain architecture. Here we present a computational approach for same-subject multimodal MRI guided reconstruction of a histological series, jointly with diffeomorphic mapping to a reference atlas. We quantify the scale change during the different stages of histological processing of the brains using the Jacobian determinant of the diffeomorphic transformations involved. There are two major steps in the histology process with associated scale distortions (a) brain perfusion (b) histological sectioning and reassembly. By mapping the final image stacks to the ex-vivo post fixation MRI, we show that tape-transfer histology can be reassembled accurately into 3D volumes with a local scale change of 2.0 $\pm$ 0.4% per axis dimension. In contrast, the perfusion step, as assessed by mapping the in-vivo MRIs to the ex-vivo post fixation MRIs, shows a larger local scale change of 6.9 $\pm$ 2.1% per axis dimension. This is the first systematic quantification of the local metric distortions associated with whole-brain histological processing, and we expect that the results will generalize to other species.
[ { "created": "Mon, 14 May 2018 00:36:28 GMT", "version": "v1" }, { "created": "Wed, 17 Apr 2019 04:26:23 GMT", "version": "v2" } ]
2019-04-18
[ [ "Lee", "Brian C.", "" ], [ "Lin", "Meng Kuan", "" ], [ "Fu", "Yan", "" ], [ "Hata", "Junichi", "" ], [ "Miller", "Michael I.", "" ], [ "Mitra", "Partha P.", "" ] ]
Whole brain neuroanatomy using tera-voxel light-microscopic data sets is of much current interest. A fundamental problem in this field is the mapping of individual brain data sets to a reference space. Previous work has not rigorously quantified the distortions in brain geometry from in-vivo to ex-vivo brains due to the tissue processing, which will be important when computing properties such as local cell and process densities at the voxel level in creating reference brain maps. Further, existing approaches focus on registering uni-modal volumetric data; however, given the increasing interest in the marmoset model for neuroscience research, it is necessary to cross-register multi-modal data sets including MRIs and multiple histological series that can help address individual variations in brain architecture. Here we present a computational approach for same-subject multimodal MRI guided reconstruction of a histological series, jointly with diffeomorphic mapping to a reference atlas. We quantify the scale change during the different stages of histological processing of the brains using the Jacobian determinant of the diffeomorphic transformations involved. There are two major steps in the histology process with associated scale distortions (a) brain perfusion (b) histological sectioning and reassembly. By mapping the final image stacks to the ex-vivo post fixation MRI, we show that tape-transfer histology can be reassembled accurately into 3D volumes with a local scale change of 2.0 $\pm$ 0.4% per axis dimension. In contrast, the perfusion step, as assessed by mapping the in-vivo MRIs to the ex-vivo post fixation MRIs, shows a larger local scale change of 6.9 $\pm$ 2.1% per axis dimension. This is the first systematic quantification of the local metric distortions associated with whole-brain histological processing, and we expect that the results will generalize to other species.
1902.10604
Peter Zeidman
Peter Zeidman, Amirhossein Jafarian, Mohamed L. Seghier, Vladimir Litvak, Hayriye Cagnan, Cathy J. Price, Karl J. Friston
A tutorial on group effective connectivity analysis, part 2: second level analysis with PEB
null
null
10.1016/j.neuroimage.2019.06.032
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This tutorial provides a worked example of using Dynamic Causal Modelling (DCM) and Parametric Empirical Bayes (PEB) to characterise inter-subject variability in neural circuitry (effective connectivity). This involves specifying a hierarchical model with two or more levels. At the first level, state space models (DCMs) are used to infer the effective connectivity that best explains a subject's neuroimaging timeseries (e.g. fMRI, MEG, EEG). Subject-specific connectivity parameters are then taken to the group level, where they are modelled using a General Linear Model (GLM) that partitions between-subject variability into designed effects and additive random effects. The ensuing (Bayesian) hierarchical model conveys both the estimated connection strengths and their uncertainty (i.e., posterior covariance) from the subject to the group level; enabling hypotheses to be tested about the commonalities and differences across subjects. This approach can also finesse parameter estimation at the subject level, by using the group-level parameters as empirical priors. We walk through this approach in detail, using data from a published fMRI experiment that characterised individual differences in hemispheric lateralization in a semantic processing task. The preliminary subject specific DCM analysis is covered in detail in a companion paper. This tutorial is accompanied by the example dataset and step-by-step instructions to reproduce the analyses.
[ { "created": "Wed, 27 Feb 2019 15:54:08 GMT", "version": "v1" } ]
2019-07-15
[ [ "Zeidman", "Peter", "" ], [ "Jafarian", "Amirhossein", "" ], [ "Seghier", "Mohamed L.", "" ], [ "Litvak", "Vladimir", "" ], [ "Cagnan", "Hayriye", "" ], [ "Price", "Cathy J.", "" ], [ "Friston", "Karl J.", "" ] ]
This tutorial provides a worked example of using Dynamic Causal Modelling (DCM) and Parametric Empirical Bayes (PEB) to characterise inter-subject variability in neural circuitry (effective connectivity). This involves specifying a hierarchical model with two or more levels. At the first level, state space models (DCMs) are used to infer the effective connectivity that best explains a subject's neuroimaging timeseries (e.g. fMRI, MEG, EEG). Subject-specific connectivity parameters are then taken to the group level, where they are modelled using a General Linear Model (GLM) that partitions between-subject variability into designed effects and additive random effects. The ensuing (Bayesian) hierarchical model conveys both the estimated connection strengths and their uncertainty (i.e., posterior covariance) from the subject to the group level; enabling hypotheses to be tested about the commonalities and differences across subjects. This approach can also finesse parameter estimation at the subject level, by using the group-level parameters as empirical priors. We walk through this approach in detail, using data from a published fMRI experiment that characterised individual differences in hemispheric lateralization in a semantic processing task. The preliminary subject specific DCM analysis is covered in detail in a companion paper. This tutorial is accompanied by the example dataset and step-by-step instructions to reproduce the analyses.
1312.7283
Nicholas Schafer
N.P. Schafer, B.L. Kim, W. Zheng, P.G. Wolynes
Learning to Fold Proteins Using Energy Landscape Theory
95 pages, 22 figures
null
null
null
q-bio.BM cond-mat.soft cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This review is a tutorial for scientists interested in the problem of protein structure prediction, particularly those interested in using coarse-grained molecular dynamics models that are optimized using lessons learned from the energy landscape theory of protein folding. We also present a review of the results of the AMH/AMC/AMW/AWSEM family of coarse-grained molecular dynamics protein folding models to illustrate the points covered in the first part of the article. Accurate coarse-grained structure prediction models can be used to investigate a wide range of conceptual and mechanistic issues outside of protein structure prediction; specifically, the paper concludes by reviewing how AWSEM has in recent years been able to elucidate questions related to the unusual kinetic behavior of artificially designed proteins, multidomain protein misfolding, and the initial stages of protein aggregation.
[ { "created": "Tue, 24 Dec 2013 18:15:44 GMT", "version": "v1" } ]
2014-01-06
[ [ "Schafer", "N. P.", "" ], [ "Kim", "B. L.", "" ], [ "Zheng", "W.", "" ], [ "Wolynes", "P. G.", "" ] ]
This review is a tutorial for scientists interested in the problem of protein structure prediction, particularly those interested in using coarse-grained molecular dynamics models that are optimized using lessons learned from the energy landscape theory of protein folding. We also present a review of the results of the AMH/AMC/AMW/AWSEM family of coarse-grained molecular dynamics protein folding models to illustrate the points covered in the first part of the article. Accurate coarse-grained structure prediction models can be used to investigate a wide range of conceptual and mechanistic issues outside of protein structure prediction; specifically, the paper concludes by reviewing how AWSEM has in recent years been able to elucidate questions related to the unusual kinetic behavior of artificially designed proteins, multidomain protein misfolding, and the initial stages of protein aggregation.
1508.05226
Mvogo Alain
Alain Mvogo, Antoine Tambue, Germain Hubert Ben-Bolie, Timoleon Crepin Kofane
Localized modulated wave solutions in diffusive glucose-insulin systems
null
Phys. Lett. A (2016)
10.1016/j.physleta.2016.04.039
null
q-bio.TO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate intercellular insulin dynamics in an array of diffusively coupled pancreatic islet $\beta$-cells. The cells are connected via gap junction coupling, where nearest neighbor interactions are included. Through the multiple scale expansion in the semi-discrete approximation, we show that the insulin dynamics can be governed by the complex Ginzburg-Landau equation. The localized solutions of this equation are reported. The results suggest from the biophysical point of view that the insulin propagates in pancreatic islet $\beta$-cells using both temporal and spatial dimensions in the form of localized modulated waves.
[ { "created": "Fri, 21 Aug 2015 09:47:24 GMT", "version": "v1" }, { "created": "Wed, 26 Aug 2015 08:57:50 GMT", "version": "v2" }, { "created": "Thu, 27 Aug 2015 16:05:19 GMT", "version": "v3" }, { "created": "Mon, 31 Aug 2015 11:59:45 GMT", "version": "v4" }, { "created": "Thu, 28 Apr 2016 22:30:55 GMT", "version": "v5" } ]
2016-05-02
[ [ "Mvogo", "Alain", "" ], [ "Tambue", "Antoine", "" ], [ "Ben-Bolie", "Germain Hubert", "" ], [ "Kofane", "Timoleon Crepin", "" ] ]
We investigate intercellular insulin dynamics in an array of diffusively coupled pancreatic islet $\beta$-cells. The cells are connected via gap junction coupling, where nearest neighbor interactions are included. Through the multiple scale expansion in the semi-discrete approximation, we show that the insulin dynamics can be governed by the complex Ginzburg-Landau equation. The localized solutions of this equation are reported. From the biophysical point of view, the results suggest that insulin propagates in pancreatic islet $\beta$-cells using both temporal and spatial dimensions in the form of localized modulated waves.
1904.08937
Barak Brill
Barak Brill, Amnon Amir, Ruth Heller
Testing for differential abundance in compositional counts data, with application to microbiome studies
null
null
null
null
q-bio.GN stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying which taxa in our microbiota are associated with traits of interest is important for advancing science and health. However, the identification is challenging because the measured vector of taxa counts (by amplicon sequencing) is compositional, so a change in the abundance of one taxon in the microbiota induces a change in the number of sequenced counts across all taxa. The data is typically sparse, with zero counts present either due to biological variance or limited sequencing depth (technical zeros). For low abundance taxa, the chance for technical zeros is non-negligible. We show that existing methods designed to identify differential abundance for compositional data may have an inflated number of false positives due to improper handling of the zero counts. We introduce a novel non-parametric approach which provides valid inference even when the fraction of zero counts is substantial. Our approach uses a set of reference taxa that are non-differentially abundant, which can be estimated from the data or from outside information. We show the usefulness of our approach via simulations, as well as on three different data sets: a Crohn's disease study, the Human Microbiome Project, and an experiment with 'spiked-in' bacteria.
[ { "created": "Thu, 18 Apr 2019 13:39:58 GMT", "version": "v1" }, { "created": "Sun, 2 Jun 2019 11:44:29 GMT", "version": "v2" }, { "created": "Thu, 1 Aug 2019 11:12:02 GMT", "version": "v3" }, { "created": "Mon, 5 Aug 2019 12:56:08 GMT", "version": "v4" }, { "created": "Mon, 30 Mar 2020 12:17:34 GMT", "version": "v5" } ]
2020-03-31
[ [ "Brill", "Barak", "" ], [ "Amir", "Amnon", "" ], [ "Heller", "Ruth", "" ] ]
Identifying which taxa in our microbiota are associated with traits of interest is important for advancing science and health. However, the identification is challenging because the measured vector of taxa counts (by amplicon sequencing) is compositional, so a change in the abundance of one taxon in the microbiota induces a change in the number of sequenced counts across all taxa. The data is typically sparse, with zero counts present either due to biological variance or limited sequencing depth (technical zeros). For low abundance taxa, the chance for technical zeros is non-negligible. We show that existing methods designed to identify differential abundance for compositional data may have an inflated number of false positives due to improper handling of the zero counts. We introduce a novel non-parametric approach which provides valid inference even when the fraction of zero counts is substantial. Our approach uses a set of reference taxa that are non-differentially abundant, which can be estimated from the data or from outside information. We show the usefulness of our approach via simulations, as well as on three different data sets: a Crohn's disease study, the Human Microbiome Project, and an experiment with 'spiked-in' bacteria.
0812.1864
Gerald Fournier
G\'erald Fournier (LEPS)
Paul Alsberg (1883-1965) et le transfert adaptatif du biologique au technique : un pr\'ecurseur de la "cultural niche construction" ?
16 pages
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose, in this paper, both a presentation and a discussion of Paul Alsberg's thesis on the supposed specificity principle of human evolution. The author maintains a difference of nature between Man and Animal relying on an opposition between "body-adaptation" - that of the Animal - and "extrabodily-adaptation" - that of Man, in which the means of adaptation are switched outside of the organism by tool-using. This difference is not a mere difference of state, but of evolutionary dynamics. Here, Man is not simply "Homo faber", as in Bergson's view, but produced and made possible by technique; a technique which then appears as a hominisation factor. Thus, his "principle of body-liberation" by tool-using is to be retrospectively understood as part of the logic of modifying selection pressures, which recalls the seminal contemporary niche construction theory (F. John Odling-Smee). It seems therefore possible to make Paul Alsberg, from his 1922 work, one of the most important precursors of the cultural niche construction theory.
[ { "created": "Wed, 10 Dec 2008 08:38:18 GMT", "version": "v1" } ]
2008-12-11
[ [ "Fournier", "Gérald", "", "LEPS" ] ]
We propose, in this paper, both a presentation and a discussion of Paul Alsberg's thesis on the supposed specificity principle of human evolution. The author maintains a difference of nature between Man and Animal relying on an opposition between "body-adaptation" - that of the Animal - and "extrabodily-adaptation" - that of Man, in which the means of adaptation are switched outside of the organism by tool-using. This difference is not a mere difference of state, but of evolutionary dynamics. Here, Man is not simply "Homo faber", as in Bergson's view, but produced and made possible by technique; a technique which then appears as a hominisation factor. Thus, his "principle of body-liberation" by tool-using is to be retrospectively understood as part of the logic of modifying selection pressures, which recalls the seminal contemporary niche construction theory (F. John Odling-Smee). It seems therefore possible to make Paul Alsberg, from his 1922 work, one of the most important precursors of the cultural niche construction theory.
2402.03410
Daniel Schindler
Stijn T. de Vries, Laura Kley and Daniel Schindler
Use of a Golden Gate plasmid set enabling scarless MoClo-compatible transcription unit assembly
26 pages, 5 figures
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Golden Gate cloning has become a powerful and widely used DNA assembly method. Its modular nature and the reusability of standardized parts allow rapid construction of transcription units and multi-gene constructs. Importantly, its modular structure makes it compatible with laboratory automation, allowing for systematic and highly complex DNA assembly. Golden Gate cloning relies on Type IIS enzymes that cleave an adjacent undefined sequence motif at a defined distance from the directed enzyme recognition motif. This feature has been used to define hierarchical Golden Gate assembly standards with defined overhangs ("fusion sites") for defined part libraries. The simplest Golden Gate standard would consist of three part libraries, namely promoter, coding and terminator sequences, respectively. Each library would have defined fusion sites, allowing a hierarchical Golden Gate assembly to generate transcription units. Typically, Type IIS enzymes are used, which generate four nucleotide overhangs. This results in small scar sequences in hierarchical DNA assemblies, which can affect the functionality of transcription units. However, there are enzymes that generate three nucleotide overhangs, such as SapI. Here we provide a step-by-step protocol on how to use SapI to assemble transcription units using the start and stop codon for scarless transcription unit assembly. The protocol also provides guidance on how to perform multi-gene Golden Gate assemblies with the resulting transcription units using the Modular Cloning standard. The transcription units expressing fluorophores are used as an example.
[ { "created": "Mon, 5 Feb 2024 16:04:54 GMT", "version": "v1" } ]
2024-02-07
[ [ "de Vries", "Stijn T.", "" ], [ "Kley", "Laura", "" ], [ "Schindler", "Daniel", "" ] ]
Golden Gate cloning has become a powerful and widely used DNA assembly method. Its modular nature and the reusability of standardized parts allow rapid construction of transcription units and multi-gene constructs. Importantly, its modular structure makes it compatible with laboratory automation, allowing for systematic and highly complex DNA assembly. Golden Gate cloning relies on Type IIS enzymes that cleave an adjacent undefined sequence motif at a defined distance from the directed enzyme recognition motif. This feature has been used to define hierarchical Golden Gate assembly standards with defined overhangs ("fusion sites") for defined part libraries. The simplest Golden Gate standard would consist of three part libraries, namely promoter, coding and terminator sequences, respectively. Each library would have defined fusion sites, allowing a hierarchical Golden Gate assembly to generate transcription units. Typically, Type IIS enzymes are used, which generate four nucleotide overhangs. This results in small scar sequences in hierarchical DNA assemblies, which can affect the functionality of transcription units. However, there are enzymes that generate three nucleotide overhangs, such as SapI. Here we provide a step-by-step protocol on how to use SapI to assemble transcription units using the start and stop codon for scarless transcription unit assembly. The protocol also provides guidance on how to perform multi-gene Golden Gate assemblies with the resulting transcription units using the Modular Cloning standard. The transcription units expressing fluorophores are used as an example.
1507.01962
Andrea De Martino
Fabrizio Capuani, Daniele De Martino, Enzo Marinari, Andrea De Martino
Quantitative constraint-based computational model of tumor-to-stroma coupling via lactate shuttle
26 pages incl. supporting material
Scientific Reports 5, 11880 (2015)
10.1038/srep11880
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cancer cells utilize large amounts of ATP to sustain growth, relying primarily on non-oxidative, fermentative pathways for its production. In many types of cancers this leads, even in the presence of oxygen, to the secretion of carbon equivalents (usually in the form of lactate) in the cell's surroundings, a feature known as the Warburg effect. While the molecular basis of this phenomenon is still to be elucidated, it is clear that the spilling of energy resources contributes to creating a peculiar microenvironment for tumors, possibly characterized by a degree of toxicity. This suggests that mechanisms for recycling the fermentation products (e.g. a lactate shuttle) may be active, effectively inducing a mutually beneficial metabolic coupling between aberrant and non-aberrant cells. Here we analyze this scenario through a large-scale in silico metabolic model of interacting human cells. By going beyond the cell-autonomous description, we show that elementary physico-chemical constraints indeed favor the establishment of such a coupling under very broad conditions. The characterization we obtained by tuning the aberrant cell's demand for ATP, amino-acids and fatty acids and/or the imbalance in nutrient partitioning provides quantitative support to the idea that synergistic multi-cell effects play a central role in cancer sustainment.
[ { "created": "Tue, 7 Jul 2015 20:31:24 GMT", "version": "v1" } ]
2015-07-09
[ [ "Capuani", "Fabrizio", "" ], [ "De Martino", "Daniele", "" ], [ "Marinari", "Enzo", "" ], [ "De Martino", "Andrea", "" ] ]
Cancer cells utilize large amounts of ATP to sustain growth, relying primarily on non-oxidative, fermentative pathways for its production. In many types of cancers this leads, even in the presence of oxygen, to the secretion of carbon equivalents (usually in the form of lactate) in the cell's surroundings, a feature known as the Warburg effect. While the molecular basis of this phenomenon is still to be elucidated, it is clear that the spilling of energy resources contributes to creating a peculiar microenvironment for tumors, possibly characterized by a degree of toxicity. This suggests that mechanisms for recycling the fermentation products (e.g. a lactate shuttle) may be active, effectively inducing a mutually beneficial metabolic coupling between aberrant and non-aberrant cells. Here we analyze this scenario through a large-scale in silico metabolic model of interacting human cells. By going beyond the cell-autonomous description, we show that elementary physico-chemical constraints indeed favor the establishment of such a coupling under very broad conditions. The characterization we obtained by tuning the aberrant cell's demand for ATP, amino-acids and fatty acids and/or the imbalance in nutrient partitioning provides quantitative support to the idea that synergistic multi-cell effects play a central role in cancer sustainment.
1406.2573
Dagmar Iber
Britta Velten, Erkan Uenal and Dagmar Iber
Image-based Parameter Inference for Spatio-temporal models of Organogenesis
NOLTA 2014: 2014 Int'l Symposium on Nonlinear Theory & its Applications, to be held in Luzern, Switzerland from September 14-18, 2014
null
null
null
q-bio.QM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advances in imaging technology now provide us with detailed 3D data on gene expression patterns in developing embryos. This information can be used to build predictive mathematical models of embryogenesis. Current modelling approaches are, however, limited by a lack of methods to automatically infer the regulatory networks and the parameter values from the image-based information. Here we make a first step towards the development of such methods. We use limb bud development as a model system. For a given regulatory network we developed a decision tree based algorithm to automatically determine parameter values for which the model reproduces the expression patterns. Starting from this parameter set, local optimization was performed to further reduce the chosen goodness-of-fit measure. This approach allowed us to recover the target expression patterns, as judged by eye, and thus provides a first step towards the automated inference of parameter values for a given regulatory network.
[ { "created": "Tue, 10 Jun 2014 14:55:19 GMT", "version": "v1" } ]
2014-06-11
[ [ "Velten", "Britta", "" ], [ "Uenal", "Erkan", "" ], [ "Iber", "Dagmar", "" ] ]
Advances in imaging technology now provide us with detailed 3D data on gene expression patterns in developing embryos. This information can be used to build predictive mathematical models of embryogenesis. Current modelling approaches are, however, limited by a lack of methods to automatically infer the regulatory networks and the parameter values from the image-based information. Here we make a first step towards the development of such methods. We use limb bud development as a model system. For a given regulatory network we developed a decision tree based algorithm to automatically determine parameter values for which the model reproduces the expression patterns. Starting from this parameter set, local optimization was performed to further reduce the chosen goodness-of-fit measure. This approach allowed us to recover the target expression patterns, as judged by eye, and thus provides a first step towards the automated inference of parameter values for a given regulatory network.
1702.06463
Aditya Gilra
Aditya Gilra, Wulfram Gerstner
Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network
null
eLife 2017;6:e28295
10.7554/eLife.28295
null
q-bio.NC cs.LG cs.NE cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brains need to predict how the body reacts to motor commands. It is an open question how networks of spiking neurons can learn to reproduce the non-linear body dynamics caused by motor commands, using local, online and stable learning rules. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics, while an online and local rule changes the weights. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Using the Lyapunov method, and under reasonable assumptions and approximations, we show that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
[ { "created": "Tue, 21 Feb 2017 16:15:34 GMT", "version": "v1" }, { "created": "Wed, 26 Apr 2017 17:58:00 GMT", "version": "v2" } ]
2017-11-30
[ [ "Gilra", "Aditya", "" ], [ "Gerstner", "Wulfram", "" ] ]
Brains need to predict how the body reacts to motor commands. It is an open question how networks of spiking neurons can learn to reproduce the non-linear body dynamics caused by motor commands, using local, online and stable learning rules. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics, while an online and local rule changes the weights. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Using the Lyapunov method, and under reasonable assumptions and approximations, we show that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
1611.07677
Claus Metzner
Patrick Krauss, Claus Metzner, Achim Schilling, Konstantin Tziridis, Maximilian Traxdorf and Holger Schulze
A statistical method for analyzing and comparing spatiotemporal cortical activation patterns
null
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new statistical method to analyze multichannel steady-state local field potentials (LFP) recorded within different sensory cortices of different rodent species. Our spatiotemporal multi-dimensional cluster statistics (MCS) method enables statistical analysis and comparison of clusters of data points in n-dimensional space. We demonstrate that, using this approach, stimulus-specific attractor-like spatiotemporal activity patterns can be detected and shown to be significantly different from each other during stimulation with long-lasting stimuli. Our method may be applied to other types of multichannel neuronal data, such as EEG, MEG or spiking responses, and used for the development of new read-out algorithms of brain activity, thereby opening new perspectives for the development of brain-computer interfaces.
[ { "created": "Wed, 23 Nov 2016 08:08:17 GMT", "version": "v1" } ]
2016-11-24
[ [ "Krauss", "Patrick", "" ], [ "Metzner", "Claus", "" ], [ "Schilling", "Achim", "" ], [ "Tziridis", "Konstantin", "" ], [ "Traxdorf", "Maximilian", "" ], [ "Schulze", "Holger", "" ] ]
We present a new statistical method to analyze multichannel steady-state local field potentials (LFP) recorded within different sensory cortices of different rodent species. Our spatiotemporal multi-dimensional cluster statistics (MCS) method enables statistical analysis and comparison of clusters of data points in n-dimensional space. We demonstrate that, using this approach, stimulus-specific attractor-like spatiotemporal activity patterns can be detected and shown to be significantly different from each other during stimulation with long-lasting stimuli. Our method may be applied to other types of multichannel neuronal data, such as EEG, MEG or spiking responses, and used for the development of new read-out algorithms of brain activity, thereby opening new perspectives for the development of brain-computer interfaces.
2112.03259
Mihir Rao
Mihir Rao
Novel Local Radiomic Bayesian Classifiers for Non-Invasive Prediction of MGMT Methylation Status in Glioblastoma
null
null
null
null
q-bio.QM cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Glioblastoma, an aggressive brain cancer, is amongst the most lethal of all cancers. Expression of the O6-methylguanine-DNA-methyltransferase (MGMT) gene in glioblastoma tumor tissue is of clinical importance as it has a significant effect on the efficacy of Temozolomide, the primary chemotherapy treatment administered to glioblastoma patients. Currently, MGMT methylation is determined through an invasive brain biopsy and subsequent genetic analysis of the extracted tumor tissue. In this work, we present novel Bayesian classifiers that make probabilistic predictions of MGMT methylation status based on radiomic features extracted from FLAIR-sequence magnetic resonance imagery (MRIs). We implement local radiomic techniques to produce radiomic activation maps and analyze MRIs for the MGMT biomarker based on statistical features of raw voxel-intensities. We demonstrate the ability for simple Bayesian classifiers to provide a boost in predictive performance when modelling local radiomic data rather than global features. The presented techniques provide a non-invasive MRI-based approach to determining MGMT methylation status in glioblastoma patients.
[ { "created": "Tue, 30 Nov 2021 04:53:23 GMT", "version": "v1" } ]
2021-12-07
[ [ "Rao", "Mihir", "" ] ]
Glioblastoma, an aggressive brain cancer, is amongst the most lethal of all cancers. Expression of the O6-methylguanine-DNA-methyltransferase (MGMT) gene in glioblastoma tumor tissue is of clinical importance as it has a significant effect on the efficacy of Temozolomide, the primary chemotherapy treatment administered to glioblastoma patients. Currently, MGMT methylation is determined through an invasive brain biopsy and subsequent genetic analysis of the extracted tumor tissue. In this work, we present novel Bayesian classifiers that make probabilistic predictions of MGMT methylation status based on radiomic features extracted from FLAIR-sequence magnetic resonance imagery (MRIs). We implement local radiomic techniques to produce radiomic activation maps and analyze MRIs for the MGMT biomarker based on statistical features of raw voxel-intensities. We demonstrate the ability for simple Bayesian classifiers to provide a boost in predictive performance when modelling local radiomic data rather than global features. The presented techniques provide a non-invasive MRI-based approach to determining MGMT methylation status in glioblastoma patients.
1310.3518
Mikl\'os Cs\H{u}r\"os
Mikl\'os Cs\H{u}r\"os
Non-identifiability of identity coefficients at biallelic loci
18 pages, 3 figures, 2 tables
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shared genealogies introduce allele dependencies in diploid genotypes, as alleles within an individual or between different individuals will likely match when they originate from a recent common ancestor. At a locus shared by a pair of diploid individuals, there are nine combinatorially distinct modes of identity-by-descent (IBD), capturing all possible combinations of coancestry and inbreeding. A distribution over the IBD modes is described by the nine associated probabilities, known as (Jacquard's) identity coefficients. The genetic relatedness between two individuals can be succinctly characterized by the identity coefficients corresponding to the joint genealogy. The identity coefficients (together with allele frequencies) determine the distribution of joint genotypes at a locus. At a locus with two possible alleles, identity coefficients are not identifiable because different coefficients can generate the same genotype distribution. We analyze precisely how different IBD modes combine into identical genotype distributions at diallelic loci. In particular, we describe IBD mode mixtures that result in identical genotype distributions at all allele frequencies, implying the non-identifiability of the identity coefficients from independent loci. Our analysis yields an exhaustive characterization of relatedness statistics that are always identifiable. Importantly, we show that identifiable relatedness statistics include the kinship coefficient (probability that a random pair of alleles are identical by descent between individuals) and inbreeding-related measures, which can thus be estimated from genotype distributions at independent loci.
[ { "created": "Sun, 13 Oct 2013 20:27:37 GMT", "version": "v1" } ]
2013-10-15
[ [ "Csűrös", "Miklós", "" ] ]
Shared genealogies introduce allele dependencies in diploid genotypes, as alleles within an individual or between different individuals will likely match when they originate from a recent common ancestor. At a locus shared by a pair of diploid individuals, there are nine combinatorially distinct modes of identity-by-descent (IBD), capturing all possible combinations of coancestry and inbreeding. A distribution over the IBD modes is described by the nine associated probabilities, known as (Jacquard's) identity coefficients. The genetic relatedness between two individuals can be succinctly characterized by the identity coefficients corresponding to the joint genealogy. The identity coefficients (together with allele frequencies) determine the distribution of joint genotypes at a locus. At a locus with two possible alleles, identity coefficients are not identifiable because different coefficients can generate the same genotype distribution. We analyze precisely how different IBD modes combine into identical genotype distributions at diallelic loci. In particular, we describe IBD mode mixtures that result in identical genotype distributions at all allele frequencies, implying the non-identifiability of the identity coefficients from independent loci. Our analysis yields an exhaustive characterization of relatedness statistics that are always identifiable. Importantly, we show that identifiable relatedness statistics include the kinship coefficient (probability that a random pair of alleles are identical by descent between individuals) and inbreeding-related measures, which can thus be estimated from genotype distributions at independent loci.
1012.5775
Giangiacomo Bravo
Giangiacomo Bravo and Lucia Tamburino
Are two resources really better than one? Some unexpected results of the availability of substitutes
14 pages, 8 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The possibility of exploiting multiple resources is usually regarded as positive from both the economic and the environmental point of view. However, resource switching may also lead to unsustainable growth and, ultimately, to an equilibrium condition which is worse than the one that could have been achieved with a single resource. We applied a system dynamics model where users exploit multiple resources and have different levels of preference among them. In this setting, exploiting multiple resources leads to worse outcomes than the single-resource case under a wide range of parameter configurations. Our arguments are illustrated using two empirical situations, namely oil drilling in the North Sea and whale hunting in the Antarctic.
[ { "created": "Tue, 28 Dec 2010 15:10:11 GMT", "version": "v1" } ]
2010-12-30
[ [ "Bravo", "Giangiacomo", "" ], [ "Tamburino", "Lucia", "" ] ]
The possibility of exploiting multiple resources is usually regarded as positive from both the economic and the environmental point of view. However, resource switching may also lead to unsustainable growth and, ultimately, to an equilibrium condition which is worse than the one that could have been achieved with a single resource. We applied a system dynamics model where users exploit multiple resources and have different levels of preference among them. In this setting, exploiting multiple resources leads to worse outcomes than the single-resource case under a wide range of parameter configurations. Our arguments are illustrated using two empirical situations, namely oil drilling in the North Sea and whale hunting in the Antarctic.
2207.09250
Matteo Aldeghi
Matteo Aldeghi, David E. Graff, Nathan Frey, Joseph A. Morrone, Edward O. Pyzer-Knapp, Kirk E. Jordan, Connor W. Coley
Roughness of molecular property landscapes and its impact on modellability
17 pages, 6 figures, 2 tables (SI with 17 pages, 16 figures)
J. Chem. Inf. Model. 2022, 62, 19, 4660-4671
10.1021/acs.jcim.2c00903
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In molecular discovery and drug design, structure-property relationships and activity landscapes are often qualitatively or quantitatively analyzed to guide the navigation of chemical space. The roughness (or smoothness) of these molecular property landscapes is one of their most studied geometric attributes, as it can characterize the presence of activity cliffs, with rougher landscapes generally expected to pose tougher optimization challenges. Here, we introduce a general, quantitative measure for describing the roughness of molecular property landscapes. The proposed roughness index (ROGI) is loosely inspired by the concept of fractal dimension and strongly correlates with the out-of-sample error achieved by machine learning models on numerous regression tasks.
[ { "created": "Tue, 19 Jul 2022 13:05:59 GMT", "version": "v1" } ]
2022-10-13
[ [ "Aldeghi", "Matteo", "" ], [ "Graff", "David E.", "" ], [ "Frey", "Nathan", "" ], [ "Morrone", "Joseph A.", "" ], [ "Pyzer-Knapp", "Edward O.", "" ], [ "Jordan", "Kirk E.", "" ], [ "Coley", "Connor W.", "" ] ]
In molecular discovery and drug design, structure-property relationships and activity landscapes are often qualitatively or quantitatively analyzed to guide the navigation of chemical space. The roughness (or smoothness) of these molecular property landscapes is one of their most studied geometric attributes, as it can characterize the presence of activity cliffs, with rougher landscapes generally expected to pose tougher optimization challenges. Here, we introduce a general, quantitative measure for describing the roughness of molecular property landscapes. The proposed roughness index (ROGI) is loosely inspired by the concept of fractal dimension and strongly correlates with the out-of-sample error achieved by machine learning models on numerous regression tasks.
1504.00539
R.K. Brojen Singh
G. Reenaroy Devi, R. K. Brojen Singh, Ram Ramaswamy
Synchronization efficiency in coupled stochastic oscillators: The role of connection topology
null
null
null
null
q-bio.SC nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the efficiency of synchronization in ensembles of identical coupled stochastic oscillator systems. By deriving a chemical Langevin equation, we measure the rate at which the systems synchronize. The rate at which the difference in the Hilbert phases of the systems evolves provides a suitable order parameter, and a 2-dimensional recurrence plot further facilitates the analysis of stochastic synchrony. We find that a global mean-field coupling effects the most rapid approach to global synchrony, and that when the number of "information carrying" molecular species increases, the rate of synchrony increases. The Langevin analysis is complemented by numerical simulations.
[ { "created": "Thu, 2 Apr 2015 13:12:59 GMT", "version": "v1" } ]
2015-04-03
[ [ "Devi", "G. Reenaroy", "" ], [ "Singh", "R. K. Brojen", "" ], [ "Ramaswamy", "Ram", "" ] ]
We study the efficiency of synchronization in ensembles of identical coupled stochastic oscillator systems. By deriving a chemical Langevin equation, we measure the rate at which the systems synchronize. The rate at which the difference in the Hilbert phases of the systems evolves provides a suitable order parameter, and a 2--dimensional recurrence plot further facilitates the analysis of stochastic synchrony. We find that a global mean--field coupling effects the most rapid approach to global synchrony, and that when the number of "information carrying" molecular species increases, the rate of synchrony increases. The Langevin analysis is complemented by numerical simulations.
1809.10389
Piero Fariselli
Ludovica Montanucci, Pier Luigi Martelli, Nir Ben-Tal, Piero Fariselli
A natural upper bound to the accuracy of predicting protein stability changes upon mutations
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate prediction of protein stability changes upon single-site variations (DDG) is important for protein design, as well as our understanding of the mechanism of genetic diseases. The performance of high-throughput computational methods to this end is evaluated mostly based on the Pearson correlation coefficient between predicted and observed data, assuming that the upper bound would be 1 (perfect correlation). However, the performance of these predictors can be limited by the distribution and noise of the experimental data. Here we estimate, for the first time, a theoretical upper bound on the DDG prediction performance imposed by the intrinsic structure of currently available DDG data. Given a set of measured DDG protein variations, the theoretically best predictor is estimated based on its similarity to another set of experimentally determined DDG values. We investigate the correlation between pairs of measured DDG variations, where one is used as a predictor for the other. We analytically derive an upper bound to the Pearson correlation as a function of the noise and distribution of the DDG data. We also evaluate the available datasets to highlight the effect of the noise in conjunction with the DDG distribution. We conclude that the upper bound is a function of both the uncertainty and the spread of the DDG values, and that with current data the best performance should be between 0.7 and 0.8, depending on the dataset used; higher Pearson correlations might be indicative of overtraining. It also follows that comparisons of predictors using different datasets are inherently misleading.
[ { "created": "Thu, 27 Sep 2018 08:01:32 GMT", "version": "v1" } ]
2018-09-28
[ [ "Montanucci", "Ludovica", "" ], [ "Martelli", "Pier Luigi", "" ], [ "Ben-Tal", "Nir", "" ], [ "Fariselli", "Piero", "" ] ]
Accurate prediction of protein stability changes upon single-site variations (DDG) is important for protein design, as well as our understanding of the mechanism of genetic diseases. The performance of high-throughput computational methods to this end is evaluated mostly based on the Pearson correlation coefficient between predicted and observed data, assuming that the upper bound would be 1 (perfect correlation). However, the performance of these predictors can be limited by the distribution and noise of the experimental data. Here we estimate, for the first time, a theoretical upper bound on the DDG prediction performance imposed by the intrinsic structure of currently available DDG data. Given a set of measured DDG protein variations, the theoretically best predictor is estimated based on its similarity to another set of experimentally determined DDG values. We investigate the correlation between pairs of measured DDG variations, where one is used as a predictor for the other. We analytically derive an upper bound to the Pearson correlation as a function of the noise and distribution of the DDG data. We also evaluate the available datasets to highlight the effect of the noise in conjunction with the DDG distribution. We conclude that the upper bound is a function of both the uncertainty and the spread of the DDG values, and that with current data the best performance should be between 0.7 and 0.8, depending on the dataset used; higher Pearson correlations might be indicative of overtraining. It also follows that comparisons of predictors using different datasets are inherently misleading.
1606.03138
Qinsi Wang
Qinsi Wang and Natasa Miskov-Zivanov and Bing Liu and James R. Faeder and Michael Lotze and Edmund M. Clarke
Formal Modeling and Analysis of Pancreatic Cancer Microenvironment
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The focus of pancreatic cancer research has shifted from pancreatic cancer cells towards their microenvironment, involving pancreatic stellate cells that interact with cancer cells and influence tumor progression. To quantitatively understand the pancreatic cancer microenvironment, we construct a computational model for intracellular signaling networks of cancer cells and stellate cells as well as their intercellular communication. We extend the rule-based BioNetGen language to depict intra- and inter-cellular dynamics using discrete and continuous variables, respectively. Our framework also enables a statistical model checking procedure for analyzing the system behavior in response to various perturbations. The results demonstrate the predictive power of our model by identifying important system properties that are consistent with existing experimental observations. We also obtain interesting insights into the development of novel therapeutic strategies for pancreatic cancer.
[ { "created": "Thu, 9 Jun 2016 22:51:35 GMT", "version": "v1" } ]
2016-06-13
[ [ "Wang", "Qinsi", "" ], [ "Miskov-Zivanov", "Natasa", "" ], [ "Liu", "Bing", "" ], [ "Faeder", "James R.", "" ], [ "Lotze", "Michael", "" ], [ "Clarke", "Edmund M.", "" ] ]
The focus of pancreatic cancer research has shifted from pancreatic cancer cells towards their microenvironment, involving pancreatic stellate cells that interact with cancer cells and influence tumor progression. To quantitatively understand the pancreatic cancer microenvironment, we construct a computational model for intracellular signaling networks of cancer cells and stellate cells as well as their intercellular communication. We extend the rule-based BioNetGen language to depict intra- and inter-cellular dynamics using discrete and continuous variables, respectively. Our framework also enables a statistical model checking procedure for analyzing the system behavior in response to various perturbations. The results demonstrate the predictive power of our model by identifying important system properties that are consistent with existing experimental observations. We also obtain interesting insights into the development of novel therapeutic strategies for pancreatic cancer.
2206.05915
Ulrich S. Schwarz
Oliver M. Drozdowski, Falko Ziebert and Ulrich S. Schwarz (Heidelberg University, Germany)
Optogenetic switching of migration of contractile cells
12 pages including supplement, 5 main and 4 supplemental figures
Communications Physics 6:158 (2023)
10.1038/s42005-023-01275-0
null
q-bio.SC cond-mat.soft
http://creativecommons.org/licenses/by-nc-nd/4.0/
Cell crawling on flat substrates is based on intracellular flows of the actin cytoskeleton that are driven by both actin polymerization at the front and myosin contractility at the back. The new experimental tool of optogenetics makes it possible to spatially control contraction and thereby possibly also cell migration. Here we analyze this situation theoretically using a one-dimensional active gel model in which the excluded volume interactions of myosin and their aggregation into minifilaments are modeled by a supercritical van der Waals fluid. This physically simple and transparent, but nonlinear and thermodynamically rigorous model predicts bistability between sessile and motile solutions. We then show that one can switch between these two states at realistic parameter ranges via optogenetic activation or inhibition of contractility, in agreement with recent experiments. We also predict the required activation strengths and initiation times.
[ { "created": "Mon, 13 Jun 2022 06:00:26 GMT", "version": "v1" } ]
2023-07-04
[ [ "Drozdowski", "Oliver M.", "", "Heidelberg\n University, Germany" ], [ "Ziebert", "Falko", "", "Heidelberg\n University, Germany" ], [ "Schwarz", "Ulrich S.", "", "Heidelberg\n University, Germany" ] ]
Cell crawling on flat substrates is based on intracellular flows of the actin cytoskeleton that are driven by both actin polymerization at the front and myosin contractility at the back. The new experimental tool of optogenetics makes it possible to spatially control contraction and thereby possibly also cell migration. Here we analyze this situation theoretically using a one-dimensional active gel model in which the excluded volume interactions of myosin and their aggregation into minifilaments are modeled by a supercritical van der Waals fluid. This physically simple and transparent, but nonlinear and thermodynamically rigorous model predicts bistability between sessile and motile solutions. We then show that one can switch between these two states at realistic parameter ranges via optogenetic activation or inhibition of contractility, in agreement with recent experiments. We also predict the required activation strengths and initiation times.
2009.06457
Punam Bedi
Punam Bedi, Shivani, Pushkar Gole, Neha Gupta, Vinita Jindal
Projections for COVID-19 spread in India and its worst affected five states using the Modified SEIRD and LSTM models
null
null
null
null
q-bio.PE cs.CY physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The last leg of the year 2019 gave rise to a virus named COVID-19 (Corona Virus Disease 2019). Since the beginning of this infection in India, the government implemented several policies and restrictions to curtail its spread among the population. As time passed, these restrictions were relaxed and people were advised to follow precautionary measures by themselves. These timely decisions taken by the Indian government helped in decelerating the spread of COVID-19 to a large extent. Despite these decisions, the pandemic continues to spread and hence, there is an urgent need to plan and control the spread of this disease. This is possible by finding future predictions about the spread. Scientists across the globe are working towards estimating the future growth of COVID-19. This paper proposes a Modified SEIRD (Susceptible-Exposed-Infected-Recovered-Deceased) model for projecting COVID-19 infections in India and its five states having the highest number of total cases. In this model, the exposed compartment contains individuals who may be asymptomatic but infectious. A Deep Learning-based Long Short-Term Memory (LSTM) model has also been used in this paper to perform short-term projections. The projections obtained from the proposed Modified SEIRD model have also been compared with the projections made by LSTM for the next 30 days. The epidemiological data up to 15th August 2020 has been used for carrying out predictions in this paper. These predictions will help in arranging adequate medical infrastructure and providing proper preventive measures to handle the current pandemic. The effect of different lockdowns imposed by the Indian government has also been used in modelling and analysis in the proposed Modified SEIRD model. The results presented in this paper will act as a beacon for future policy-making to control the COVID-19 spread in India.
[ { "created": "Mon, 7 Sep 2020 07:38:10 GMT", "version": "v1" } ]
2020-09-15
[ [ "Bedi", "Punam", "" ], [ "Shivani", "", "" ], [ "Gole", "Pushkar", "" ], [ "Gupta", "Neha", "" ], [ "Jindal", "Vinita", "" ] ]
The last leg of the year 2019 gave rise to a virus named COVID-19 (Corona Virus Disease 2019). Since the beginning of this infection in India, the government implemented several policies and restrictions to curtail its spread among the population. As time passed, these restrictions were relaxed and people were advised to follow precautionary measures by themselves. These timely decisions taken by the Indian government helped in decelerating the spread of COVID-19 to a large extent. Despite these decisions, the pandemic continues to spread and hence, there is an urgent need to plan and control the spread of this disease. This is possible by finding future predictions about the spread. Scientists across the globe are working towards estimating the future growth of COVID-19. This paper proposes a Modified SEIRD (Susceptible-Exposed-Infected-Recovered-Deceased) model for projecting COVID-19 infections in India and its five states having the highest number of total cases. In this model, the exposed compartment contains individuals who may be asymptomatic but infectious. A Deep Learning-based Long Short-Term Memory (LSTM) model has also been used in this paper to perform short-term projections. The projections obtained from the proposed Modified SEIRD model have also been compared with the projections made by LSTM for the next 30 days. The epidemiological data up to 15th August 2020 has been used for carrying out predictions in this paper. These predictions will help in arranging adequate medical infrastructure and providing proper preventive measures to handle the current pandemic. The effect of different lockdowns imposed by the Indian government has also been used in modelling and analysis in the proposed Modified SEIRD model. The results presented in this paper will act as a beacon for future policy-making to control the COVID-19 spread in India.
0801.1885
Takuma Tanaka
Takuma Tanaka, Takeshi Kaneko, Toshio Aoyagi
Recurrent infomax generates cell assemblies, avalanches, and simple cell-like selectivity
16 pages, 4 figures
null
null
null
q-bio.NC cond-mat.dis-nn
null
Through evolution, animals have acquired central nervous systems (CNSs), which are extremely efficient information processing devices that improve an animal's adaptability to various environments. It has been proposed that the process of information maximization (infomax), which maximizes the information transmission from the input to the output of a feedforward network, may provide an explanation of the stimulus selectivity of neurons in CNSs. However, CNSs contain not only feedforward but also recurrent synaptic connections, and little is known about information retention over time in such recurrent networks. Here, we propose a learning algorithm based on infomax in a recurrent network, which we call "recurrent infomax" (RI). RI maximizes information retention and thereby minimizes information loss in a network. We find that feeding in external inputs consisting of information obtained from photographs of natural scenes into an RI-based model of a recurrent network results in the appearance of Gabor-like selectivity quite similar to that existing in simple cells of the primary visual cortex (V1). More importantly, we find that without external input, this network exhibits cell assembly-like and synfire chain-like spontaneous activity and a critical neuronal avalanche. RI provides a simple framework to explain a wide range of phenomena observed in in vivo and in vitro neuronal networks, and it should provide a novel understanding of experimental results for multineuronal activity and plasticity from an information-theoretic point of view.
[ { "created": "Sun, 13 Jan 2008 08:16:23 GMT", "version": "v1" } ]
2008-01-15
[ [ "Tanaka", "Takuma", "" ], [ "Kaneko", "Takeshi", "" ], [ "Aoyagi", "Toshio", "" ] ]
Through evolution, animals have acquired central nervous systems (CNSs), which are extremely efficient information processing devices that improve an animal's adaptability to various environments. It has been proposed that the process of information maximization (infomax), which maximizes the information transmission from the input to the output of a feedforward network, may provide an explanation of the stimulus selectivity of neurons in CNSs. However, CNSs contain not only feedforward but also recurrent synaptic connections, and little is known about information retention over time in such recurrent networks. Here, we propose a learning algorithm based on infomax in a recurrent network, which we call "recurrent infomax" (RI). RI maximizes information retention and thereby minimizes information loss in a network. We find that feeding in external inputs consisting of information obtained from photographs of natural scenes into an RI-based model of a recurrent network results in the appearance of Gabor-like selectivity quite similar to that existing in simple cells of the primary visual cortex (V1). More importantly, we find that without external input, this network exhibits cell assembly-like and synfire chain-like spontaneous activity and a critical neuronal avalanche. RI provides a simple framework to explain a wide range of phenomena observed in in vivo and in vitro neuronal networks, and it should provide a novel understanding of experimental results for multineuronal activity and plasticity from an information-theoretic point of view.
1612.09474
Eduardo Garc\'ia-Portugu\'es
Michael Golden, Eduardo Garc\'ia-Portugu\'es, Michael S{\o}rensen, Kanti V. Mardia, Thomas Hamelryck, Jotun Hein
A generative angular model of protein structure evolution
23 pages, 10 figures. Supplementary material: 5 pages, 4 figures
Molecular Biology and Evolution, 34(8):2085-2100, 2017
10.1093/molbev/msx137
null
q-bio.PE stat.ME
http://creativecommons.org/licenses/by-sa/4.0/
Recently described stochastic models of protein evolution have demonstrated that the inclusion of structural information in addition to amino acid sequences leads to a more reliable estimation of evolutionary parameters. We present a generative, evolutionary model of protein structure and sequence that is valid on a local length scale. The model concerns the local dependencies between sequence and structure evolution in a pair of homologous proteins. The evolutionary trajectory between the two structures in the protein pair is treated as a random walk in dihedral angle space, which is modelled using a novel angular diffusion process on the two-dimensional torus. Coupling sequence and structure evolution in our model allows for modelling both "smooth" conformational changes and "catastrophic" conformational jumps, conditioned on the amino acid changes. The model has interpretable parameters and is comparatively more realistic than previous stochastic models, providing new insights into the relationship between sequence and structure evolution. For example, using the trained model we were able to identify an apparent sequence-structure evolutionary motif present in a large number of homologous protein pairs. The generative nature of our model enables us to evaluate its validity and its ability to simulate aspects of protein evolution conditioned on an amino acid sequence, a related amino acid sequence, a related structure or any combination thereof.
[ { "created": "Fri, 30 Dec 2016 12:25:05 GMT", "version": "v1" }, { "created": "Mon, 8 May 2017 20:08:18 GMT", "version": "v2" }, { "created": "Wed, 29 Apr 2020 12:45:09 GMT", "version": "v3" }, { "created": "Mon, 21 Sep 2020 09:42:02 GMT", "version": "v4" } ]
2020-09-22
[ [ "Golden", "Michael", "" ], [ "García-Portugués", "Eduardo", "" ], [ "Sørensen", "Michael", "" ], [ "Mardia", "Kanti V.", "" ], [ "Hamelryck", "Thomas", "" ], [ "Hein", "Jotun", "" ] ]
Recently described stochastic models of protein evolution have demonstrated that the inclusion of structural information in addition to amino acid sequences leads to a more reliable estimation of evolutionary parameters. We present a generative, evolutionary model of protein structure and sequence that is valid on a local length scale. The model concerns the local dependencies between sequence and structure evolution in a pair of homologous proteins. The evolutionary trajectory between the two structures in the protein pair is treated as a random walk in dihedral angle space, which is modelled using a novel angular diffusion process on the two-dimensional torus. Coupling sequence and structure evolution in our model allows for modelling both "smooth" conformational changes and "catastrophic" conformational jumps, conditioned on the amino acid changes. The model has interpretable parameters and is comparatively more realistic than previous stochastic models, providing new insights into the relationship between sequence and structure evolution. For example, using the trained model we were able to identify an apparent sequence-structure evolutionary motif present in a large number of homologous protein pairs. The generative nature of our model enables us to evaluate its validity and its ability to simulate aspects of protein evolution conditioned on an amino acid sequence, a related amino acid sequence, a related structure or any combination thereof.
2211.02553
Cristiano Capone
Cristiano Capone, Cosimo Lupo, Paolo Muratore, Pier Stanislao Paolucci
Beyond spiking networks: the computational advantages of dendritic amplification and input segregation
arXiv admin note: substantial text overlap with arXiv:2201.11717
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The brain can efficiently learn a wide range of tasks, motivating the search for biologically inspired learning rules for improving current artificial intelligence technology. Most biological models are composed of point neurons and cannot achieve state-of-the-art performance in machine learning. Recent works have proposed that segregation of dendritic input (neurons receive sensory information and higher-order feedback in segregated compartments) and generation of high-frequency bursts of spikes would support error backpropagation in biological neurons. However, these approaches require propagating errors with a fine spatio-temporal structure to the neurons, which is unlikely to be feasible in a biological network. To relax this assumption, we suggest that bursts and dendritic input segregation provide a natural support for biologically plausible target-based learning, which does not require error propagation. We propose a pyramidal neuron model composed of three separated compartments. A coincidence mechanism between the basal and the apical compartments allows for generating high-frequency bursts of spikes. This architecture allows for a burst-dependent learning rule, based on the comparison between the target bursting activity triggered by the teaching signal and the one caused by the recurrent connections, providing the support for target-based learning. We show that this framework can be used to efficiently solve spatio-temporal tasks, such as the store and recall of 3D trajectories. Finally, we suggest that this neuronal architecture naturally allows for orchestrating ``hierarchical imitation learning'', enabling the decomposition of challenging long-horizon decision-making tasks into simpler subtasks. This can be implemented in a two-level network, where the high-network acts as a ``manager'' and produces the contextual signal for the low-network, the ``worker''.
[ { "created": "Fri, 4 Nov 2022 16:20:15 GMT", "version": "v1" } ]
2022-11-07
[ [ "Capone", "Cristiano", "" ], [ "Lupo", "Cosimo", "" ], [ "Muratore", "Paolo", "" ], [ "Paolucci", "Pier Stanislao", "" ] ]
The brain can efficiently learn a wide range of tasks, motivating the search for biologically inspired learning rules for improving current artificial intelligence technology. Most biological models are composed of point neurons and cannot achieve state-of-the-art performance in machine learning. Recent works have proposed that segregation of dendritic input (neurons receive sensory information and higher-order feedback in segregated compartments) and generation of high-frequency bursts of spikes would support error backpropagation in biological neurons. However, these approaches require propagating errors with a fine spatio-temporal structure to the neurons, which is unlikely to be feasible in a biological network. To relax this assumption, we suggest that bursts and dendritic input segregation provide a natural support for biologically plausible target-based learning, which does not require error propagation. We propose a pyramidal neuron model composed of three separated compartments. A coincidence mechanism between the basal and the apical compartments allows for generating high-frequency bursts of spikes. This architecture allows for a burst-dependent learning rule, based on the comparison between the target bursting activity triggered by the teaching signal and the one caused by the recurrent connections, providing the support for target-based learning. We show that this framework can be used to efficiently solve spatio-temporal tasks, such as the store and recall of 3D trajectories. Finally, we suggest that this neuronal architecture naturally allows for orchestrating ``hierarchical imitation learning'', enabling the decomposition of challenging long-horizon decision-making tasks into simpler subtasks. This can be implemented in a two-level network, where the high-network acts as a ``manager'' and produces the contextual signal for the low-network, the ``worker''.
1906.08548
Patricia Faisca
Ana Nunes and Patr\'icia FN Fa\'isca
Knotted Proteins: Tie Etiquette in Structural Biology
33 pages, 5 figures, accepted for publication
Contemporary Mathematics (AMS) 2019
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A small fraction of all protein structures characterized so far are entangled. The challenge of understanding the properties of these knotted proteins, and the why and the how of their natural folding process, has been taken up in the past decade with different approaches, such as structural characterization, in vitro experiments, and simulations of protein models with varying levels of complexity. The simplest among these are the lattice G\=o models, which belong to the class of structure-based models, i.e., models that are biased to the native structure by explicitly including structural data. In this review we highlight the contributions to the field made in the scope of lattice G\=o models, putting them into perspective in the context of the main experimental and theoretical results and of other, more realistic, computational approaches.
[ { "created": "Thu, 20 Jun 2019 10:48:27 GMT", "version": "v1" } ]
2019-06-21
[ [ "Nunes", "Ana", "" ], [ "Faísca", "Patrícia FN", "" ] ]
A small fraction of all protein structures characterized so far are entangled. The challenge of understanding the properties of these knotted proteins, and the why and the how of their natural folding process, has been taken up in the past decade with different approaches, such as structural characterization, in vitro experiments, and simulations of protein models with varying levels of complexity. The simplest among these are the lattice G\=o models, which belong to the class of structure-based models, i.e., models that are biased to the native structure by explicitly including structural data. In this review we highlight the contributions to the field made in the scope of lattice G\=o models, putting them into perspective in the context of the main experimental and theoretical results and of other, more realistic, computational approaches.
0809.2585
Razvan Radulescu M.D.
Razvan Tudor Radulescu
Contagious obesity: from adenovirus 36 to RB dysfunction
6 pages, 1 figure
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Significant overweight represents a major health problem in industrialized countries. Besides its known metabolic origins, this condition may also have an infectious cause, as recently postulated. Here, it is surmised that the potentially causative adenovirus 36 contributes to such disorder by inactivating the retinoblastoma tumor suppressor protein (RB) in a manner reminiscent of a mechanism employed by both another pathogenic adenoviral agent and insulin. The present insight additionally suggests novel modes of interfering with obesity-associated pathology.
[ { "created": "Mon, 15 Sep 2008 18:43:24 GMT", "version": "v1" } ]
2008-09-16
[ [ "Radulescu", "Razvan Tudor", "" ] ]
Significant overweight represents a major health problem in industrialized countries. Besides its known metabolic origins, this condition may also have an infectious cause, as recently postulated. Here, it is surmised that the potentially causative adenovirus 36 contributes to such disorder by inactivating the retinoblastoma tumor suppressor protein (RB) in a manner reminiscent of a mechanism employed by both another pathogenic adenoviral agent and insulin. The present insight additionally suggests novel modes of interfering with obesity-associated pathology.
2110.04335
Evan Yip
Evan Yip, Herbert Sauro
Computing Sensitivities in Reaction Networks using Finite Difference Methods
null
null
null
null
q-bio.QM q-bio.MN
http://creativecommons.org/licenses/by/4.0/
In this article, we investigate various numerical methods for computing scaled or logarithmic sensitivities of the form $\partial \ln y/\partial \ln x$. The methods tested include One Point, Two Point, Five Point, and the Richardson Extrapolation. The different methods were applied to a variety of mathematical functions as well as a reaction network model. The algorithms were validated by comparing results with known analytical solutions for functions and using the Reder method for computing the sensitivities in reaction networks via the Tellurium package. For evaluation, two aspects were considered: accuracy and the time taken to compute the sensitivities. Of the four methods, Richardson's extrapolation was by far the most accurate but also the slowest in terms of performance. For fast, reasonably accurate estimates, we recommend the two-point method. For most other cases where the derivatives are changing rapidly, the five-point method is a good choice, although it is three times slower than the two-point method. For ultimate accuracy, which would apply particularly to very fast-changing derivatives, the Richardson method is without doubt the best, but it is seven times slower than the two-point method. We do not recommend the one-point method in any circumstance. The Python software that was used in the study with documentation is available at: \url{https://github.com/evanyfyip/SensitivityAnalysis}.
[ { "created": "Fri, 8 Oct 2021 18:55:52 GMT", "version": "v1" } ]
2021-10-12
[ [ "Yip", "Evan", "" ], [ "Sauro", "Herbert", "" ] ]
In this article, we investigate various numerical methods for computing scaled or logarithmic sensitivities of the form $\partial \ln y/\partial \ln x$. The methods tested include One Point, Two Point, Five Point, and the Richardson Extrapolation. The different methods were applied to a variety of mathematical functions as well as a reaction network model. The algorithms were validated by comparing results with known analytical solutions for functions and using the Reder method for computing the sensitivities in reaction networks via the Tellurium package. For evaluation, two aspects were considered: accuracy and the time taken to compute the sensitivities. Of the four methods, Richardson's extrapolation was by far the most accurate but also the slowest in terms of performance. For fast, reasonably accurate estimates, we recommend the two-point method. For most other cases where the derivatives are changing rapidly, the five-point method is a good choice, although it is three times slower than the two-point method. For ultimate accuracy, which would apply particularly to very fast-changing derivatives, the Richardson method is without doubt the best, but it is seven times slower than the two-point method. We do not recommend the one-point method in any circumstance. The Python software that was used in the study with documentation is available at: \url{https://github.com/evanyfyip/SensitivityAnalysis}.
2107.01379
Nicholas Senofsky
Nicholas Senofsky, Justin Faber, Dolores Bozovic
Vestibular Drop Attacks and Meniere's Disease as Results of Otolithic Membrane Damage -- A Numerical Model
12 Pages, 4 Figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
BACKGROUND: Meniere's Disease (MD) is a condition of the inner ear with symptoms affecting both vestibular and hearing functions. Some patients with MD experience vestibular drop attacks (VDAs), which are violent falls caused by spurious vestibular signals from the utricle and/or saccule. Recent surgical work has shown that patients who experience VDAs also show disrupted utricular otolithic membranes. OBJECTIVE: The objective of this study is to determine if otolithic membrane damage alone is sufficient to induce spurious vestibular signals, thus potentially eliciting VDAs and the vestibular dysfunction seen in patients with MD. METHODS: We use a previously developed numerical model to describe the nonlinear dynamics of an array of active, elastically coupled hair cells. We then reduce the coupling strength of a selected region of the membrane to model the effects of tissue damage. RESULTS: As we reduce the coupling strength, we observe large and abrupt spikes in hair bundle position. As bundle displacements from the equilibrium position have been shown to lead to depolarization of the hair-cell soma and hence trigger neural activity, this spontaneous activity could elicit false detection of a vestibular signal. CONCLUSIONS: The results of this numerical model suggest that otolithic membrane damage alone may be sufficient to induce VDAs and the vestibular dysfunction seen in patients with MD. Future experimental work is needed to confirm these results in vitro.
[ { "created": "Sat, 3 Jul 2021 08:34:18 GMT", "version": "v1" } ]
2021-07-06
[ [ "Senofsky", "Nicholas", "" ], [ "Faber", "Justin", "" ], [ "Bozovic", "Dolores", "" ] ]
BACKGROUND: Meniere's Disease (MD) is a condition of the inner ear with symptoms affecting both vestibular and hearing functions. Some patients with MD experience vestibular drop attacks (VDAs), which are violent falls caused by spurious vestibular signals from the utricle and/or saccule. Recent surgical work has shown that patients who experience VDAs also show disrupted utricular otolithic membranes. OBJECTIVE: The objective of this study is to determine if otolithic membrane damage alone is sufficient to induce spurious vestibular signals, thus potentially eliciting VDAs and the vestibular dysfunction seen in patients with MD. METHODS: We use a previously developed numerical model to describe the nonlinear dynamics of an array of active, elastically coupled hair cells. We then reduce the coupling strength of a selected region of the membrane to model the effects of tissue damage. RESULTS: As we reduce the coupling strength, we observe large and abrupt spikes in hair bundle position. As bundle displacements from the equilibrium position have been shown to lead to depolarization of the hair-cell soma and hence trigger neural activity, this spontaneous activity could elicit false detection of a vestibular signal. CONCLUSIONS: The results of this numerical model suggest that otolithic membrane damage alone may be sufficient to induce VDAs and the vestibular dysfunction seen in patients with MD. Future experimental work is needed to confirm these results in vitro.
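The qualitative mechanism described here, weakened elastic coupling permitting larger bundle excursions, can be illustrated with a deliberately simplified toy: an overdamped chain of elastically coupled units with a point force at one site. This is not the authors' hair-bundle model; every parameter below is an arbitrary illustrative choice:

```python
def relax_chain(n, coupling, forced, force=0.1, dt=0.01, steps=20000):
    """Overdamped chain x_i' = -x_i + k*(x_{i-1} + x_{i+1} - 2*x_i) + F_i,
    integrated with forward Euler; returns the (near-)steady-state profile."""
    x = [0.0] * n
    for _ in range(steps):
        nxt = []
        for i in range(n):
            left = x[i - 1] if i > 0 else x[i]    # free boundary
            right = x[i + 1] if i < n - 1 else x[i]
            drive = force if i == forced else 0.0
            nxt.append(x[i] + dt * (-x[i] + coupling * (left + right - 2 * x[i]) + drive))
        x = nxt
    return x

# The same point force deflects the forced site much further when coupling is weak,
# the linear analogue of damaged membrane regions allowing larger displacements.
weak = relax_chain(21, 0.1, forced=10)
strong = relax_chain(21, 5.0, forced=10)
print(weak[10], strong[10])
```

With strong coupling the restoring springs distribute the force over the chain; with weak coupling the forced element absorbs it almost alone.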
0804.1202
Jean-Charles Boisson
Jean-Charles Boisson (LIFL, INRIA Lille - Nord Europe), Laetitia Jourdan (LIFL, INRIA Futurs), El-Ghazali Talbi (INRIA Futurs)
A Preliminary Work on Evolutionary Identification of Protein Variants and New Proteins on Grids
null
Dans AINA 2006, HIPCOMB Workshop (2006)
null
null
q-bio.BM q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein identification is one of the major tasks of proteomics researchers. Protein identification can be summarized as searching for the best match between an experimental mass spectrum and proteins from a database. Nevertheless, this approach cannot be used to identify new proteins or protein variants. In this paper, an evolutionary approach is proposed to discover new proteins or protein variants thanks to a "de novo sequencing" method. This approach has been tested on a specific grid called Grid5000, with both simulated and real spectra.
[ { "created": "Tue, 8 Apr 2008 07:49:57 GMT", "version": "v1" } ]
2008-12-18
[ [ "Boisson", "Jean-Charles", "", "LIFL, INRIA Lille - Nord Europe" ], [ "Jourdan", "Laetitia", "", "LIFL, INRIA Futurs" ], [ "Talbi", "El-Ghazali", "", "INRIA Futurs" ] ]
Protein identification is one of the major tasks of proteomics researchers. Protein identification can be summarized as searching for the best match between an experimental mass spectrum and proteins from a database. Nevertheless, this approach cannot be used to identify new proteins or protein variants. In this paper, an evolutionary approach is proposed to discover new proteins or protein variants thanks to a "de novo sequencing" method. This approach has been tested on a specific grid called Grid5000, with both simulated and real spectra.
2209.13045
Anne-Florence Bitbol
Nicola Dietler, Umberto Lupo and Anne-Florence Bitbol
Impact of phylogeny on structural contact inference from protein sequence data
31 pages, 19 figures, 3 tables
null
null
null
q-bio.BM physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Local and global inference methods have been developed to infer structural contacts from multiple sequence alignments of homologous proteins. They rely on correlations in amino-acid usage at contacting sites. Because homologous proteins share a common ancestry, their sequences also feature phylogenetic correlations, which can impair contact inference. We investigate this effect by generating controlled synthetic data from a minimal model where the importance of contacts and of phylogeny can be tuned. We demonstrate that global inference methods, specifically Potts models, are more resilient to phylogenetic correlations than local methods, based on covariance or mutual information. This holds whether or not phylogenetic corrections are used, and may explain the success of global methods. We analyse the roles of selection strength and of phylogenetic relatedness. We show that sites that mutate early in the phylogeny yield false positive contacts. We consider natural data and realistic synthetic data, and our findings generalise to these cases. Our results highlight the impact of phylogeny on contact prediction from protein sequences and illustrate the interplay between the rich structure of biological data and inference.
[ { "created": "Mon, 26 Sep 2022 21:57:46 GMT", "version": "v1" }, { "created": "Fri, 2 Dec 2022 14:32:43 GMT", "version": "v2" }, { "created": "Thu, 5 Jan 2023 22:46:39 GMT", "version": "v3" } ]
2023-01-09
[ [ "Dietler", "Nicola", "" ], [ "Lupo", "Umberto", "" ], [ "Bitbol", "Anne-Florence", "" ] ]
Local and global inference methods have been developed to infer structural contacts from multiple sequence alignments of homologous proteins. They rely on correlations in amino-acid usage at contacting sites. Because homologous proteins share a common ancestry, their sequences also feature phylogenetic correlations, which can impair contact inference. We investigate this effect by generating controlled synthetic data from a minimal model where the importance of contacts and of phylogeny can be tuned. We demonstrate that global inference methods, specifically Potts models, are more resilient to phylogenetic correlations than local methods, based on covariance or mutual information. This holds whether or not phylogenetic corrections are used, and may explain the success of global methods. We analyse the roles of selection strength and of phylogenetic relatedness. We show that sites that mutate early in the phylogeny yield false positive contacts. We consider natural data and realistic synthetic data, and our findings generalise to these cases. Our results highlight the impact of phylogeny on contact prediction from protein sequences and illustrate the interplay between the rich structure of biological data and inference.
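A "local" contact score of the kind this abstract contrasts with Potts models can be sketched in a few lines: mutual information between alignment columns. This is an illustrative toy with no phylogenetic correction, regularization, or average-product correction, not the authors' pipeline:

```python
import math
from collections import Counter

def mi_scores(msa):
    """Mutual information between all pairs of alignment columns,
    a simple local score for contact inference."""
    n_seq, n_col = len(msa), len(msa[0])
    cols = [[s[j] for s in msa] for j in range(n_col)]
    scores = {}
    for i in range(n_col):
        for j in range(i + 1, n_col):
            pij = Counter(zip(cols[i], cols[j]))  # joint counts
            pi, pj = Counter(cols[i]), Counter(cols[j])  # marginal counts
            mi = 0.0
            for (a, b), c in pij.items():
                p = c / n_seq
                # p(a,b) * ln( p(a,b) / (p(a) p(b)) )
                mi += p * math.log(p * n_seq * n_seq / (pi[a] * pj[b]))
            scores[(i, j)] = mi
    return scores

# Columns 0 and 1 covary perfectly; column 2 is independent of both.
msa = ["AAA", "CCA", "AAC", "CCC"]
scores = mi_scores(msa)
```

Phylogenetic correlations inflate exactly this kind of pairwise statistic, which is why the abstract finds local methods less resilient than global Potts models.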
2407.01649
Ruidong Wu
Ruidong Wu, Ruihan Guo, Rui Wang, Shitong Luo, Yue Xu, Jiahan Li, Jianzhu Ma, Qiang Liu, Yunan Luo, Jian Peng
FAFE: Immune Complex Modeling with Geodesic Distance Loss on Noisy Group Frames
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Despite the striking success of general protein folding models such as AlphaFold2 (AF2; Jumper et al. (2021)), the accurate computational modeling of antibody-antigen complexes remains a challenging task. In this paper, we first analyze AF2's primary loss function, known as the Frame Aligned Point Error (FAPE), and raise a previously overlooked issue: FAPE tends to suffer from a vanishing-gradient problem on high-rotational-error targets. To address this fundamental limitation, we propose a novel geodesic loss called Frame Aligned Frame Error (FAFE, denoted F2E to distinguish it from FAPE), which enables the model to better optimize both the rotational and translational errors between two frames. We then prove that F2E can be reformulated as a group-aware geodesic loss, which translates the optimization of the residue-to-residue error into optimizing the group-to-group geodesic frame distance. By fine-tuning AF2 with our proposed loss function, we attain a correct rate of 52.3\% (DockQ $>$ 0.23) on an evaluation set and a 43.8\% correct rate on a low-homology subset, substantial improvements over AF2 of 182\% and 100\%, respectively.
[ { "created": "Mon, 1 Jul 2024 06:47:21 GMT", "version": "v1" } ]
2024-07-03
[ [ "Wu", "Ruidong", "" ], [ "Guo", "Ruihan", "" ], [ "Wang", "Rui", "" ], [ "Luo", "Shitong", "" ], [ "Xu", "Yue", "" ], [ "Li", "Jiahan", "" ], [ "Ma", "Jianzhu", "" ], [ "Liu", "Qiang", "" ], [ "Luo", "Yunan", "" ], [ "Peng", "Jian", "" ] ]
Despite the striking success of general protein folding models such as AlphaFold2 (AF2; Jumper et al. (2021)), the accurate computational modeling of antibody-antigen complexes remains a challenging task. In this paper, we first analyze AF2's primary loss function, known as the Frame Aligned Point Error (FAPE), and raise a previously overlooked issue: FAPE tends to suffer from a vanishing-gradient problem on high-rotational-error targets. To address this fundamental limitation, we propose a novel geodesic loss called Frame Aligned Frame Error (FAFE, denoted F2E to distinguish it from FAPE), which enables the model to better optimize both the rotational and translational errors between two frames. We then prove that F2E can be reformulated as a group-aware geodesic loss, which translates the optimization of the residue-to-residue error into optimizing the group-to-group geodesic frame distance. By fine-tuning AF2 with our proposed loss function, we attain a correct rate of 52.3\% (DockQ $>$ 0.23) on an evaluation set and a 43.8\% correct rate on a low-homology subset, substantial improvements over AF2 of 182\% and 100\%, respectively.
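The geodesic quantity underlying a frame-to-frame loss like F2E is the rotation angle of the relative rotation $R_1^\top R_2$ on SO(3). A minimal sketch of that distance (illustrative only; the paper's actual loss additionally handles translations and group-level frame alignment):

```python
import math

def geodesic_angle(R1, R2):
    """Geodesic distance on SO(3): the angle of the relative rotation R1^T R2,
    recovered via  angle = arccos((trace(R1^T R2) - 1) / 2)."""
    # trace(R1^T R2) = sum_ij R1[i][j] * R2[i][j]
    tr = sum(R1[i][j] * R2[i][j] for i in range(3) for j in range(3))
    # Clamp against floating-point drift before arccos.
    return math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0)))

def rot_z(theta):
    """Rotation about the z-axis by theta radians (for demonstration)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

print(geodesic_angle(rot_z(0.0), rot_z(1.0)))  # relative rotation of 1 radian
```

Unlike a point-based error, this angle grows linearly all the way to pi, which is the intuition behind avoiding vanishing gradients at high rotational error.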
2102.11071
Fernando Saldana PhD
Hugo Flores-Arguedas, Jos\'e Ariel Camacho-Guti\'errez, Fernando Salda\~na
Estimating the impact of non-pharmaceutical interventions and vaccination on the progress of the COVID-19 epidemic in Mexico: a mathematical approach
19 pages, 6 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-pharmaceutical interventions (NPIs) have been critical in the fight against the COVID-19 pandemic. However, these sanitary measures have been partially lifted due to socioeconomic factors, causing a worrisome rebound of the epidemic in several countries. In this work, we assess the effectiveness of the mitigation measures implemented to constrain the spread of SARS-CoV-2 in Mexican territory during 2020. We also investigate to what extent the initial deployment of the vaccine will help to mitigate the pandemic and reduce the need for social distancing and other mobility restrictions. Our modeling approach is based on a simple mechanistic Kermack-McKendrick-type model. To quantify the effect of NPIs, we perform a monthly Bayesian inference using officially published data. The results suggest that in the absence of the sanitary measures, the cumulative numbers of infections, hospitalizations, and deaths would have been at least twice the official numbers. Moreover, for low vaccine coverage levels, relaxing NPIs may dramatically increase the disease burden; therefore, safety measures are of critical importance at the early stages of vaccination. The simulations also suggest that it may be more desirable to employ a vaccine with low efficacy but high coverage than a vaccine with high efficacy but low coverage. This supports the hypothesis that giving single doses to more individuals will be more effective than giving two doses to every person.
[ { "created": "Fri, 19 Feb 2021 15:26:28 GMT", "version": "v1" } ]
2021-02-23
[ [ "Flores-Arguedas", "Hugo", "" ], [ "Camacho-Gutiérrez", "José Ariel", "" ], [ "Saldaña", "Fernando", "" ] ]
Non-pharmaceutical interventions (NPIs) have been critical in the fight against the COVID-19 pandemic. However, these sanitary measures have been partially lifted due to socioeconomic factors, causing a worrisome rebound of the epidemic in several countries. In this work, we assess the effectiveness of the mitigation measures implemented to constrain the spread of SARS-CoV-2 in Mexican territory during 2020. We also investigate to what extent the initial deployment of the vaccine will help to mitigate the pandemic and reduce the need for social distancing and other mobility restrictions. Our modeling approach is based on a simple mechanistic Kermack-McKendrick-type model. To quantify the effect of NPIs, we perform a monthly Bayesian inference using officially published data. The results suggest that in the absence of the sanitary measures, the cumulative numbers of infections, hospitalizations, and deaths would have been at least twice the official numbers. Moreover, for low vaccine coverage levels, relaxing NPIs may dramatically increase the disease burden; therefore, safety measures are of critical importance at the early stages of vaccination. The simulations also suggest that it may be more desirable to employ a vaccine with low efficacy but high coverage than a vaccine with high efficacy but low coverage. This supports the hypothesis that giving single doses to more individuals will be more effective than giving two doses to every person.
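The effect quantified here, NPIs lowering the transmission rate, can be illustrated with the simplest Kermack-McKendrick SIR model. This is a generic sketch with made-up parameters, not the model fitted in the paper:

```python
def sir_attack_rate(beta, gamma=1.0 / 7.0, i0=1e-3, days=400, dt=0.05):
    """Forward-Euler integration of the classic SIR model
    (S' = -beta*S*I, I' = beta*S*I - gamma*I, R' = gamma*I).
    Returns the final attack rate: the fraction ever infected."""
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return 1.0 - s

# Halving the contact rate (a stand-in for NPIs) sharply reduces the
# final epidemic size; gamma = 1/7 assumes a 7-day infectious period.
print(sir_attack_rate(0.4))  # R0 ~ 2.8, no restrictions
print(sir_attack_rate(0.2))  # R0 ~ 1.4, with restrictions
```

The nonlinearity of the final-size relation is why the paper's counterfactual (no sanitary measures) yields at least a doubling of the burden rather than a proportional change.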
1611.00110
Susan Khor
Susan Khor
Folding with a protein's native shortcut network
Major modifications and additions
null
null
null
q-bio.MN q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
A complex network approach to protein folding is proposed. The graph object is the network of shortcut edges present in a native-state protein (SCN0). Although SCN0s are found via an intuitive message passing algorithm (S. Milgram, Psychology Today, May 1967 pp. 61-67), they are meaningful enough that the logarithmic form of their contact order (SCN0_lnCO) correlates significantly with protein kinetic rates, regardless of protein size. Further, the clustering coefficient of a SCN0 (CSCN0) can be used to combine protein segments iteratively within the Restricted Binary Collision model to form the whole native structure. This simple yet surprisingly effective strategy identified reasonable folding pathways for 12 small single-domain two-state folders, and three non-canonical proteins: ACBP (non-two-state), Top7 (non-cooperative) and DHFR (non-single-domain, > 100 residues). For two-state folders, CSCN0 is relatable to folding rates, transition-state placement and stability. The influence of CSCN0 on folding extends to non-native structures. Moreover, SCN analysis of non-native structures could suggest three folding success factors for the fast-folding Villin headpiece peptide. These results support the view of protein folding as a bottom-up hierarchical process guided from above by native-state topology, and could facilitate future constructive demonstrations of this long-held hypothesis for larger proteins.
[ { "created": "Tue, 1 Nov 2016 02:43:28 GMT", "version": "v1" }, { "created": "Wed, 25 Jan 2017 05:30:59 GMT", "version": "v2" }, { "created": "Tue, 2 May 2017 17:38:18 GMT", "version": "v3" }, { "created": "Wed, 13 Dec 2017 22:25:31 GMT", "version": "v4" } ]
2017-12-15
[ [ "Khor", "Susan", "" ] ]
A complex network approach to protein folding is proposed. The graph object is the network of shortcut edges present in a native-state protein (SCN0). Although SCN0s are found via an intuitive message passing algorithm (S. Milgram, Psychology Today, May 1967 pp. 61-67), they are meaningful enough that the logarithmic form of their contact order (SCN0_lnCO) correlates significantly with protein kinetic rates, regardless of protein size. Further, the clustering coefficient of a SCN0 (CSCN0) can be used to combine protein segments iteratively within the Restricted Binary Collision model to form the whole native structure. This simple yet surprisingly effective strategy identified reasonable folding pathways for 12 small single-domain two-state folders, and three non-canonical proteins: ACBP (non-two-state), Top7 (non-cooperative) and DHFR (non-single-domain, > 100 residues). For two-state folders, CSCN0 is relatable to folding rates, transition-state placement and stability. The influence of CSCN0 on folding extends to non-native structures. Moreover, SCN analysis of non-native structures could suggest three folding success factors for the fast-folding Villin headpiece peptide. These results support the view of protein folding as a bottom-up hierarchical process guided from above by native-state topology, and could facilitate future constructive demonstrations of this long-held hypothesis for larger proteins.
q-bio/0407022
Bari\c{s} \"Oztop
B. \"Oztop, M. R. Ejtehadi, S. S. Plotkin
Protein folding rates correlate with heterogeneity of folding mechanism
11 pages, 3 figures, 1 table
null
10.1103/PhysRevLett.93.208105
null
q-bio.QM q-bio.BM
null
By observing trends in the folding kinetics of experimental 2-state proteins at their transition midpoints, and by observing trends in the barrier heights of numerous simulations of coarse-grained, C-alpha model Go proteins, we show that folding rates correlate with the degree of heterogeneity in the formation of native contacts. Statistically significant correlations are observed between folding rates and measures of heterogeneity inherent in the native topology, as well as between rates and the variance in the distribution of either experimentally measured or simulated phi-values.
[ { "created": "Wed, 14 Jul 2004 22:33:59 GMT", "version": "v1" } ]
2009-11-10
[ [ "Öztop", "B.", "" ], [ "Ejtehadi", "M. R.", "" ], [ "Plotkin", "S. S.", "" ] ]
By observing trends in the folding kinetics of experimental 2-state proteins at their transition midpoints, and by observing trends in the barrier heights of numerous simulations of coarse-grained, C-alpha model Go proteins, we show that folding rates correlate with the degree of heterogeneity in the formation of native contacts. Statistically significant correlations are observed between folding rates and measures of heterogeneity inherent in the native topology, as well as between rates and the variance in the distribution of either experimentally measured or simulated phi-values.
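Two of the quantities such studies manipulate, contact order and the spread of phi-values, are simple to compute. A minimal sketch with toy inputs (the contact list below is invented, not from the paper's dataset):

```python
def relative_contact_order(contacts, length):
    """Relative contact order: mean sequence separation |i - j| of native
    contacts, normalized by chain length."""
    return sum(abs(i - j) for i, j in contacts) / (len(contacts) * length)

def variance(xs):
    """Population variance, e.g. of a set of phi-values
    (one measure of folding-mechanism heterogeneity)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Toy native-contact list (residue index pairs) for a 30-residue chain.
contacts = [(1, 5), (2, 10), (3, 20)]
print(relative_contact_order(contacts, 30))
print(variance([0.2, 0.5, 0.9]))  # spread of hypothetical phi-values
```

A uniform phi-value distribution (low variance) signals a homogeneous mechanism; a broad one signals the heterogeneity the abstract links to faster rates.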
1911.05316
Ziqi Ke
Ziqi Ke, Haris Vikalo
A Graph Auto-Encoder for Haplotype Assembly and Viral Quasispecies Reconstruction
null
null
null
null
q-bio.GN cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstructing components of a genomic mixture from data obtained by means of DNA sequencing is a challenging problem encountered in a variety of applications including single individual haplotyping and studies of viral communities. High-throughput DNA sequencing platforms oversample mixture components to provide massive amounts of reads whose relative positions can be determined by mapping the reads to a known reference genome; assembly of the components, however, requires discovery of the reads' origin -- an NP-hard problem that the existing methods struggle to solve with the required level of accuracy. In this paper, we present a learning framework based on a graph auto-encoder designed to exploit structural properties of sequencing data. The algorithm is a neural network which essentially trains to ignore sequencing errors and infers the posterior probabilities of the origin of sequencing reads. Mixture components are then reconstructed by finding the consensus of the reads determined to originate from the same genomic component. Results on realistic synthetic as well as experimental data demonstrate that the proposed framework reliably assembles haplotypes and reconstructs viral communities, often significantly outperforming state-of-the-art techniques.
[ { "created": "Wed, 13 Nov 2019 06:32:48 GMT", "version": "v1" } ]
2019-11-14
[ [ "Ke", "Ziqi", "" ], [ "Vikalo", "Haris", "" ] ]
Reconstructing components of a genomic mixture from data obtained by means of DNA sequencing is a challenging problem encountered in a variety of applications including single individual haplotyping and studies of viral communities. High-throughput DNA sequencing platforms oversample mixture components to provide massive amounts of reads whose relative positions can be determined by mapping the reads to a known reference genome; assembly of the components, however, requires discovery of the reads' origin -- an NP-hard problem that the existing methods struggle to solve with the required level of accuracy. In this paper, we present a learning framework based on a graph auto-encoder designed to exploit structural properties of sequencing data. The algorithm is a neural network which essentially trains to ignore sequencing errors and infers the posterior probabilities of the origin of sequencing reads. Mixture components are then reconstructed by finding the consensus of the reads determined to originate from the same genomic component. Results on realistic synthetic as well as experimental data demonstrate that the proposed framework reliably assembles haplotypes and reconstructs viral communities, often significantly outperforming state-of-the-art techniques.
1306.5262
Fabrizio De Vico Fallani
Fabrizio De Vico Fallani, Floriana Pichiorri, Giovanni Morone, Marco Molinari, Fabio Babiloni, Febo Cincotti, Donatella Mattia
Multiscale Topological Properties Of Functional Brain Networks During Motor Imagery After Stroke
Neuroimage, accepted manuscript (unedited version) available online 19-June-2013
null
10.1016/j.neuroimage.2013.06.039
null
q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, network analyses have been used to evaluate brain reorganization following stroke. However, many studies have often focused on single topological scales, leading to an incomplete model of how focal brain lesions affect multiple network properties simultaneously and how changes on smaller scales influence those on larger scales. In an EEG-based experiment on the performance of hand motor imagery (MI) in 20 patients with unilateral stroke, we observed that the anatomic lesion affects the functional brain network on multiple levels. In the beta (13-30 Hz) frequency band, the MI of the affected hand (Ahand) elicited a significantly lower small-worldness and local efficiency (Eloc) versus the unaffected hand (Uhand). Notably, the abnormal reduction in Eloc significantly depended on the increase in interhemispheric connectivity, which was in turn determined primarily by the rise in regional connectivity in the parieto-occipital sites of the affected hemisphere. Further, in contrast to the Uhand MI, in which significantly high connectivity was observed for the contralateral sensorimotor regions of the unaffected hemisphere, the regions that increased in connection during the Ahand MI lay in the frontal and parietal regions of the contralaterally affected hemisphere. Finally, the overall sensorimotor function of our patients, as measured by the Fugl-Meyer Assessment (FMA) index, was significantly predicted by the connectivity of their affected hemisphere. These results increase our understanding of stroke-induced alterations in functional brain networks.
[ { "created": "Fri, 21 Jun 2013 22:02:49 GMT", "version": "v1" } ]
2014-09-10
[ [ "Fallani", "Fabrizio De Vico", "" ], [ "Pichiorri", "Floriana", "" ], [ "Morone", "Giovanni", "" ], [ "Molinari", "Marco", "" ], [ "Babiloni", "Fabio", "" ], [ "Cincotti", "Febo", "" ], [ "Mattia", "Donatella", "" ] ]
In recent years, network analyses have been used to evaluate brain reorganization following stroke. However, many studies have often focused on single topological scales, leading to an incomplete model of how focal brain lesions affect multiple network properties simultaneously and how changes on smaller scales influence those on larger scales. In an EEG-based experiment on the performance of hand motor imagery (MI) in 20 patients with unilateral stroke, we observed that the anatomic lesion affects the functional brain network on multiple levels. In the beta (13-30 Hz) frequency band, the MI of the affected hand (Ahand) elicited a significantly lower small-worldness and local efficiency (Eloc) versus the unaffected hand (Uhand). Notably, the abnormal reduction in Eloc significantly depended on the increase in interhemispheric connectivity, which was in turn determined primarily by the rise in regional connectivity in the parieto-occipital sites of the affected hemisphere. Further, in contrast to the Uhand MI, in which significantly high connectivity was observed for the contralateral sensorimotor regions of the unaffected hemisphere, the regions that increased in connection during the Ahand MI lay in the frontal and parietal regions of the contralaterally affected hemisphere. Finally, the overall sensorimotor function of our patients, as measured by the Fugl-Meyer Assessment (FMA) index, was significantly predicted by the connectivity of their affected hemisphere. These results increase our understanding of stroke-induced alterations in functional brain networks.
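Local efficiency, one of the graph metrics reported in this abstract, is the average over nodes of the global efficiency of each node's neighborhood subgraph. A stdlib-only sketch for small unweighted graphs (illustrative; EEG studies typically rely on dedicated network toolboxes):

```python
from collections import deque

def bfs_distances(adj, src):
    """Unweighted shortest-path distances from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj):
    """Mean of 1/d(u, v) over ordered node pairs (0 for unreachable pairs)."""
    nodes = list(adj)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for u in nodes:
        d = bfs_distances(adj, u)
        total += sum(1.0 / d[v] for v in nodes if v != u and v in d)
    return total / (n * (n - 1))

def local_efficiency(adj):
    """Average global efficiency of each node's neighborhood subgraph."""
    effs = []
    for u in adj:
        nbrs = set(adj[u])
        sub = {v: [w for w in adj[v] if w in nbrs] for v in nbrs}
        effs.append(global_efficiency(sub))
    return sum(effs) / len(adj)

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # fully clustered: Eloc = 1
path = {0: [1], 1: [0, 2], 2: [1]}            # no clustering: Eloc = 0
print(local_efficiency(triangle), local_efficiency(path))
```

A drop in Eloc, as seen for the affected hand, means neighbors of a node are less interconnected, so the network loses fault-tolerant local routing.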
1312.1876
Vince Grolmusz
Balazs Szalkai, Vince K. Grolmusz, Vince I. Grolmusz, Coalition Against Major Diseases
Identifying Combinatorial Biomarkers by Association Rule Mining in the CAMD Alzheimer's Database
null
null
null
null
q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: The concept of combinatorial biomarkers was conceived around 2010, when it was noticed that simple biomarkers are often inadequate for recognizing and characterizing complex diseases. Methods: Here we present an algorithmic search method for complex biomarkers which may predict or indicate Alzheimer's disease (AD) and other kinds of dementia. We applied data mining techniques capable of uncovering implication-like logical schemes with detailed quality scoring. Our program SCARF is capable of finding multi-factor relevant association rules automatically. The new SCARF program was applied to the Tucson, Arizona-based Critical Path Institute's CAMD database, containing laboratory and cognitive test data for more than 6000 patients from the placebo arms of clinical trials of large pharmaceutical companies; consequently, the data is much more reliable than that of numerous other databases for dementia. Results: The results suggest connections between liver enzyme, B12 vitamin, sodium and cholesterol levels and dementia, and also between some hematologic parameter levels and dementia.
[ { "created": "Fri, 6 Dec 2013 14:55:53 GMT", "version": "v1" } ]
2013-12-09
[ [ "Szalkai", "Balazs", "" ], [ "Grolmusz", "Vince K.", "" ], [ "Grolmusz", "Vince I.", "" ], [ "Diseases", "Coalition Against Major", "" ] ]
Background: The concept of combinatorial biomarkers was conceived around 2010, when it was noticed that simple biomarkers are often inadequate for recognizing and characterizing complex diseases. Methods: Here we present an algorithmic search method for complex biomarkers which may predict or indicate Alzheimer's disease (AD) and other kinds of dementia. We applied data mining techniques capable of uncovering implication-like logical schemes with detailed quality scoring. Our program SCARF is capable of finding multi-factor relevant association rules automatically. The new SCARF program was applied to the Tucson, Arizona-based Critical Path Institute's CAMD database, containing laboratory and cognitive test data for more than 6000 patients from the placebo arms of clinical trials of large pharmaceutical companies; consequently, the data is much more reliable than that of numerous other databases for dementia. Results: The results suggest connections between liver enzyme, B12 vitamin, sodium and cholesterol levels and dementia, and also between some hematologic parameter levels and dementia.
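Association-rule mining of the kind SCARF automates reduces to computing support and confidence over patient records. A brute-force toy sketch; the item names below are invented for illustration and are not taken from the CAMD database:

```python
from itertools import combinations

def association_rules(transactions, min_support=0.3, min_conf=0.6, max_ante=1):
    """Enumerate rules (antecedent -> consequent) whose support and
    confidence exceed the given thresholds. Brute force, for tiny data."""
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    items = sorted({i for t in transactions for i in t})
    rules = []
    for k in range(1, max_ante + 1):
        for ante in combinations(items, k):
            a = set(ante)
            for c in items:
                if c in a:
                    continue
                s_rule = support(a | {c})   # support of the full rule
                s_ante = support(a)          # support of the antecedent
                if s_rule >= min_support and s_ante > 0 and s_rule / s_ante >= min_conf:
                    rules.append((ante, c, s_rule, s_rule / s_ante))
    return rules

# Hypothetical patient records: one set of binary findings per patient.
records = [
    {"low_B12", "dementia"},
    {"low_B12", "dementia"},
    {"low_B12"},
    {"normal_labs"},
]
rules = association_rules(records)
for ante, c, s, conf in rules:
    print(ante, "->", c, f"support={s:.2f} confidence={conf:.2f}")
```

A multi-factor miner like SCARF additionally searches antecedents of several items (here, raise `max_ante`) and adds quality scoring beyond raw support/confidence.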
2109.09801
Rebekah Rogers
Brandon A. Turner, Theresa R. Miorin, Nicholas B. Stewart, Robert W. Reid, Cathy C. Moore, Rebekah L. Rogers
Chromosomal rearrangements as a source of local adaptation in island Drosophila
53 pages; 3 tables, 6 figures main; 1 table, 20 figures supplement
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chromosomal rearrangements act as a source of genetic novelty by shuffling DNA throughout the genome. These mutations can produce chimeric genes, induce de novo gene formation, or alter the expression of existing genes. Here, we explore how these mutations may serve as agents of evolutionary change as populations adapt to new environments during habitat shifts. We identify 16,480 rearrangements in mainland D. yakuba and two locally adapted populations of D. santomea and D. yakuba on Sao Tome. Three loci that are associated with signals of strong differentiation in D. santomea lie adjacent to UV resistance or DNA repair genes, suggesting that these rearrangements confer selective advantages in high altitude environments with greater UV stressors. Some 55% of these mutations are facilitated by TE insertions, and 28% arise from TE-facilitated ectopic recombination. In D. santomea, 468 mutations are associated with strong signals of differentiation from the mainland, while in island D. yakuba we identify 383 candidates of local adaptation. A total of 49.4% of mutations associated with signals of local adaptation also show significant changes in transcript levels, suggesting that the adaptive value of rearrangements is related to effects on gene expression. Together, this survey of structural variation identifies key modes of evolutionary innovation that would be missed in SNP-based screens. This work offers a portrait of how these mutations appear and help organisms to survive during habitat shifts, furthering our understanding of evolutionary processes.
[ { "created": "Mon, 20 Sep 2021 19:19:16 GMT", "version": "v1" } ]
2021-09-22
[ [ "Turner", "Brandon A.", "" ], [ "Miorin", "Theresa R.", "" ], [ "Stewart", "Nicholas B.", "" ], [ "Reid", "Robert W.", "" ], [ "Moore", "Cathy C.", "" ], [ "Rogers", "Rebekah L.", "" ] ]
Chromosomal rearrangements act as a source of genetic novelty by shuffling DNA throughout the genome. These mutations can produce chimeric genes, induce de novo gene formation, or alter the expression of existing genes. Here, we explore how these mutations may serve as agents of evolutionary change as populations adapt to new environments during habitat shifts. We identify 16,480 rearrangements in mainland D. yakuba and two locally adapted populations of D. santomea and D. yakuba on Sao Tome. Three loci that are associated with signals of strong differentiation in D. santomea lie adjacent to UV resistance or DNA repair genes, suggesting that these rearrangements confer selective advantages in high altitude environments with greater UV stressors. Some 55% of these mutations are facilitated by TE insertions, and 28% arise from TE-facilitated ectopic recombination. In D. santomea, 468 mutations are associated with strong signals of differentiation from the mainland, while in island D. yakuba we identify 383 candidates of local adaptation. A total of 49.4% of mutations associated with signals of local adaptation also show significant changes in transcript levels, suggesting that the adaptive value of rearrangements is related to effects on gene expression. Together, this survey of structural variation identifies key modes of evolutionary innovation that would be missed in SNP-based screens. This work offers a portrait of how these mutations appear and help organisms to survive during habitat shifts, furthering our understanding of evolutionary processes.
1304.4856
Ulrike Ober
Ulrike Ober, Alexander Malinowski, Martin Schlather, Henner Simianer
The Expected Linkage Disequilibrium in Finite Populations Revisited
32 pages, 11 Figures, formatting edits compared to v1 (pdf-links)
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The expected level of linkage disequilibrium (LD) in a finite ideal population at equilibrium is of relevance for many applications in population and quantitative genetics. Several recursion formulae have been proposed during the last decades, whose derivations mostly contain heuristic parts and therefore remain mathematically questionable. We propose a more justifiable approach, including an alternative recursion formula for the expected LD. Since the exact formula depends on the distribution of allele frequencies in a very complicated manner, we suggest an approximate solution and analyze its validity extensively in a simulation study. Compared to the widely used formula of Sved, the proposed formula performs better for all parameter constellations considered. We then analyze the expected LD at equilibrium using the theory on discrete-time Markov chains based on the linear recursion formula, with equilibrium being defined as the steady-state of the chain, which finally leads to a formula for the effective population size N_e. An additional analysis considers the effect of non-exactness of a recursion formula on the steady-state, demonstrating that the resulting error in expected LD can be substantial. In an application to the HapMap data of two human populations we illustrate the dependency of the N_e-estimate on the distribution of minor allele frequencies (MAFs), showing that the estimated N_e can vary by up to 30% when a uniform instead of a skewed distribution of MAFs is taken as a basis to select SNPs for the analyses. Our analyses provide new insights into the mathematical complexity of the problem studied.
[ { "created": "Wed, 17 Apr 2013 15:15:38 GMT", "version": "v1" }, { "created": "Thu, 18 Apr 2013 11:15:17 GMT", "version": "v2" } ]
2013-04-19
[ [ "Ober", "Ulrike", "" ], [ "Malinowski", "Alexander", "" ], [ "Schlather", "Martin", "" ], [ "Simianer", "Henner", "" ] ]
The expected level of linkage disequilibrium (LD) in a finite ideal population at equilibrium is of relevance for many applications in population and quantitative genetics. Several recursion formulae have been proposed during the last decades, whose derivations mostly contain heuristic parts and therefore remain mathematically questionable. We propose a more justifiable approach, including an alternative recursion formula for the expected LD. Since the exact formula depends on the distribution of allele frequencies in a very complicated manner, we suggest an approximate solution and analyze its validity extensively in a simulation study. Compared to the widely used formula of Sved, the proposed formula performs better for all parameter constellations considered. We then analyze the expected LD at equilibrium using the theory on discrete-time Markov chains based on the linear recursion formula, with equilibrium being defined as the steady-state of the chain, which finally leads to a formula for the effective population size N_e. An additional analysis considers the effect of non-exactness of a recursion formula on the steady-state, demonstrating that the resulting error in expected LD can be substantial. In an application to the HapMap data of two human populations we illustrate the dependency of the N_e-estimate on the distribution of minor allele frequencies (MAFs), showing that the estimated N_e can vary by up to 30% when a uniform instead of a skewed distribution of MAFs is taken as a basis to select SNPs for the analyses. Our analyses provide new insights into the mathematical complexity of the problem studied.
2308.15264
Sangita Swapnasrita
Charlotte A Veser, Aur\'elie MF Carlier, Silvia M Mih\u{a}il\u{a}, Sangita Swapnasrita
Embracing Sex-specific Differences in Engineered Kidney Models for Enhanced Biological Understanding
44 pages, 6 figures
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by-nc-nd/4.0/
In vitro models play a crucial role in advancing our understanding of biological processes, disease mechanisms, and developing screening platforms for drug discovery. Kidneys play an instrumental role in the transport and elimination of drugs and toxins. However, despite the well-established patient-to-patient differences in kidney function and disease manifestation, progression and prognosis, few studies take this variability into consideration. In particular, the discrepancies between female and male biology warrant a better representation within kidney in vitro models. The omission of sex as a biological variable poses the significant risk of overlooking sex-specific mechanisms in health and disease and potential differences in drug efficacy and toxicity between males and females. This review aims to highlight the importance of incorporating sex dimorphism in kidney in vitro models by examining the sexual characteristics in the context of the current state-of-the-art. Furthermore, this review underscores opportunities for improving kidney models by incorporating sex-specific traits. Ultimately, this roadmap to incorporating sex dimorphism in kidney in vitro models will facilitate the creation of better models for studying sex-specific mechanisms in the kidney and their impact on drug efficacy and safety.
[ { "created": "Tue, 29 Aug 2023 12:48:20 GMT", "version": "v1" } ]
2023-08-30
[ [ "Veser", "Charlotte A", "" ], [ "Carlier", "Aurélie MF", "" ], [ "Mihăilă", "Silvia M", "" ], [ "Swapnasrita", "Sangita", "" ] ]
In vitro models play a crucial role in advancing our understanding of biological processes, disease mechanisms, and developing screening platforms for drug discovery. Kidneys play an instrumental role in the transport and elimination of drugs and toxins. However, despite the well-established patient-to-patient differences in kidney function and disease manifestation, progression and prognosis, few studies take this variability into consideration. In particular, the discrepancies between female and male biology warrant a better representation within kidney in vitro models. The omission of sex as a biological variable poses the significant risk of overlooking sex-specific mechanisms in health and disease and potential differences in drug efficacy and toxicity between males and females. This review aims to highlight the importance of incorporating sex dimorphism in kidney in vitro models by examining the sexual characteristics in the context of the current state-of-the-art. Furthermore, this review underscores opportunities for improving kidney models by incorporating sex-specific traits. Ultimately, this roadmap to incorporating sex dimorphism in kidney in vitro models will facilitate the creation of better models for studying sex-specific mechanisms in the kidney and their impact on drug efficacy and safety.
1507.02119
Louxin Zhang
Andreas D.M. Gunawan, Bhaskar DasGupta, Louxin Zhang
Locating a Tree in a Reticulation-Visible Network in Cubic Time
25 pages, 3 figures
null
null
null
q-bio.PE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we answer an open problem in the study of phylogenetic networks. Phylogenetic trees are rooted binary trees in which all edges are directed away from the root, whereas phylogenetic networks are rooted acyclic digraphs. For the purpose of evolutionary model validation, biologists often want to know whether or not a phylogenetic tree is contained in a phylogenetic network. The tree containment problem is NP-complete even for very restricted classes of networks such as tree-sibling phylogenetic networks. We prove that this problem is solvable in cubic time for stable phylogenetic networks. A linear time algorithm is also presented for the cluster containment problem.
[ { "created": "Wed, 8 Jul 2015 12:18:03 GMT", "version": "v1" }, { "created": "Wed, 11 Nov 2015 04:29:05 GMT", "version": "v2" } ]
2015-11-12
[ [ "Gunawan", "Andreas D. M.", "" ], [ "DasGupta", "Bhaskar", "" ], [ "Zhang", "Louxin", "" ] ]
In this work, we answer an open problem in the study of phylogenetic networks. Phylogenetic trees are rooted binary trees in which all edges are directed away from the root, whereas phylogenetic networks are rooted acyclic digraphs. For the purpose of evolutionary model validation, biologists often want to know whether or not a phylogenetic tree is contained in a phylogenetic network. The tree containment problem is NP-complete even for very restricted classes of networks such as tree-sibling phylogenetic networks. We prove that this problem is solvable in cubic time for stable phylogenetic networks. A linear time algorithm is also presented for the cluster containment problem.
q-bio/0410022
Supratim Sengupta
Supratim Sengupta and Paul G. Higgs
A Unified Model of Codon Reassignment in Alternative Genetic Codes
Latex file, 11 pages including 5 ps figures; revised version; to appear in 'Genetics'
Genetics 170 (2005) 831-840
null
null
q-bio.PE
null
Many modified genetic codes are found in specific genomes in which one or more codons have been reassigned to a different amino acid from that in the canonical code. We present a model that unifies four possible mechanisms for reassignment, based on the observation that reassignment involves a gain and a loss. The loss could be the deletion or loss of function of a tRNA or release factor. The gain could be the gain of a new type of tRNA for the reassigned codon, or the gain of function of an existing tRNA due to a mutation or a base modification. In the codon disappearance mechanism, the codon disappears from the genome during the period of reassignment. In the other mechanisms, the codon does not disappear. In the ambiguous intermediate mechanism, the gain precedes the loss; in the unassigned codon mechanism, the loss precedes the gain; and in the compensatory change mechanism, the loss and gain spread through the population simultaneously. We present simulations of the gain-loss model and demonstrate that all four mechanisms are possible. The frequencies of the different mechanisms are influenced by selection strengths, number of codons undergoing reassignment, directional mutation pressure and the possibility of selection for reduced genome size.
[ { "created": "Tue, 19 Oct 2004 22:12:54 GMT", "version": "v1" }, { "created": "Sun, 24 Oct 2004 21:26:18 GMT", "version": "v2" }, { "created": "Wed, 2 Feb 2005 19:26:39 GMT", "version": "v3" } ]
2007-07-17
[ [ "Sengupta", "Supratim", "" ], [ "Higgs", "Paul G.", "" ] ]
Many modified genetic codes are found in specific genomes in which one or more codons have been reassigned to a different amino acid from that in the canonical code. We present a model that unifies four possible mechanisms for reassignment, based on the observation that reassignment involves a gain and a loss. The loss could be the deletion or loss of function of a tRNA or release factor. The gain could be the gain of a new type of tRNA for the reassigned codon, or the gain of function of an existing tRNA due to a mutation or a base modification. In the codon disappearance mechanism, the codon disappears from the genome during the period of reassignment. In the other mechanisms, the codon does not disappear. In the ambiguous intermediate mechanism, the gain precedes the loss; in the unassigned codon mechanism, the loss precedes the gain; and in the compensatory change mechanism, the loss and gain spread through the population simultaneously. We present simulations of the gain-loss model and demonstrate that all four mechanisms are possible. The frequencies of the different mechanisms are influenced by selection strengths, number of codons undergoing reassignment, directional mutation pressure and the possibility of selection for reduced genome size.
1804.11081
Michael B\"orsch
Ilka Starke, Gary D. Glick, Michael B\"orsch
Visualizing mitochondrial FoF1-ATP synthase as the target of the immunomodulatory drug Bz-423
10 pages, 2 figures
null
null
null
q-bio.SC q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Targeting the mitochondrial enzyme FoF1-ATP synthase and modulating its catalytic activities with small molecules is a promising new approach for treatment of autoimmune diseases. The immuno-modulatory compound Bz-423 is such a drug that binds to subunit OSCP of the mitochondrial FoF1-ATP synthase and induces apoptosis via increased reactive oxygen production in coupled, actively respiring mitochondria. Here we review the experimental progress to reveal the binding of Bz-423 to the mitochondrial target and discuss how subunit rotation of FoF1-ATP synthase is affected by Bz-423. Briefly, we report how F\"orster resonance energy transfer (FRET) can be employed to colocalize the enzyme and the fluorescently tagged Bz-423 within the mitochondria of living cells with nanometer resolution.
[ { "created": "Mon, 30 Apr 2018 08:36:05 GMT", "version": "v1" } ]
2018-05-01
[ [ "Starke", "Ilka", "" ], [ "Glick", "Gary D.", "" ], [ "Börsch", "Michael", "" ] ]
Targeting the mitochondrial enzyme FoF1-ATP synthase and modulating its catalytic activities with small molecules is a promising new approach for treatment of autoimmune diseases. The immuno-modulatory compound Bz-423 is such a drug that binds to subunit OSCP of the mitochondrial FoF1-ATP synthase and induces apoptosis via increased reactive oxygen production in coupled, actively respiring mitochondria. Here we review the experimental progress to reveal the binding of Bz-423 to the mitochondrial target and discuss how subunit rotation of FoF1-ATP synthase is affected by Bz-423. Briefly, we report how F\"orster resonance energy transfer (FRET) can be employed to colocalize the enzyme and the fluorescently tagged Bz-423 within the mitochondria of living cells with nanometer resolution.
q-bio/0611037
Tatyana Sharpee
Tatyana O. Sharpee, Hiroki Sugihara, Andrei V. Kurgansky, Sergei P. Rebrik, Michael P. Stryker, and Kenneth D. Miller
Adaptive Filtering Enhances Information Transmission in Visual Cortex
20 pages, 11 figures, includes supplementary information
Nature, vol. 439, pp. 936- 942 (02/23/2006)
10.1038/nature04519
null
q-bio.NC
null
Sensory neuroscience seeks to understand how the brain encodes natural environments. However, neural coding has largely been studied using simplified stimuli. In order to assess whether the brain's coding strategy depends on the stimulus ensemble, we apply a new information-theoretic method that allows unbiased calculation of neural filters (receptive fields) from responses to natural scenes or other complex signals with strong multipoint correlations. In the cat primary visual cortex we compare responses to natural inputs with those to noise inputs matched for luminance and contrast. We find that neural filters adaptively change with the input ensemble so as to increase the information carried by the neural response about the filtered stimulus. Adaptation affects the spatial frequency composition of the filter, enhancing sensitivity to under-represented frequencies in agreement with optimal encoding arguments. Adaptation occurs over 40 s to many minutes, longer than most previously reported forms of adaptation.
[ { "created": "Thu, 9 Nov 2006 19:15:12 GMT", "version": "v1" } ]
2007-05-23
[ [ "Sharpee", "Tatyana O.", "" ], [ "Sugihara", "Hiroki", "" ], [ "Kurgansky", "Andrei V.", "" ], [ "Rebrik", "Sergei P.", "" ], [ "Stryker", "Michael P.", "" ], [ "Miller", "Kenneth D.", "" ] ]
Sensory neuroscience seeks to understand how the brain encodes natural environments. However, neural coding has largely been studied using simplified stimuli. In order to assess whether the brain's coding strategy depends on the stimulus ensemble, we apply a new information-theoretic method that allows unbiased calculation of neural filters (receptive fields) from responses to natural scenes or other complex signals with strong multipoint correlations. In the cat primary visual cortex we compare responses to natural inputs with those to noise inputs matched for luminance and contrast. We find that neural filters adaptively change with the input ensemble so as to increase the information carried by the neural response about the filtered stimulus. Adaptation affects the spatial frequency composition of the filter, enhancing sensitivity to under-represented frequencies in agreement with optimal encoding arguments. Adaptation occurs over 40 s to many minutes, longer than most previously reported forms of adaptation.
2002.02530
Kevin McCloskey
Kevin McCloskey, Eric A. Sigel, Steven Kearnes, Ling Xue, Xia Tian, Dennis Moccia, Diana Gikunju, Sana Bazzaz, Betty Chan, Matthew A. Clark, John W. Cuozzo, Marie-Aude Gui\'e, John P. Guilinger, Christelle Huguet, Christopher D. Hupp, Anthony D. Keefe, Christopher J. Mulhern, Ying Zhang, and Patrick Riley
Machine learning on DNA-encoded libraries: A new paradigm for hit-finding
null
null
10.1021/acs.jmedchem.0c00452
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
DNA-encoded small molecule libraries (DELs) have enabled discovery of novel inhibitors for many distinct protein targets of therapeutic value through screening of libraries with up to billions of unique small molecules. We demonstrate a new approach applying machine learning to DEL selection data by identifying active molecules from a large commercial collection and a virtual library of easily synthesizable compounds. We train models using only DEL selection data and apply automated or automatable filters with chemist review restricted to the removal of molecules with potential for instability or reactivity. We validate this approach with a large prospective study (nearly 2000 compounds tested) across three diverse protein targets: sEH (a hydrolase), ER{\alpha} (a nuclear receptor), and c-KIT (a kinase). The approach is effective, with an overall hit rate of {\sim}30% at 30 {\textmu}M and discovery of potent compounds (IC50 <10 nM) for every target. The model makes useful predictions even for molecules dissimilar to the original DEL and the compounds identified are diverse, predominantly drug-like, and different from known ligands. Collectively, the quality and quantity of DEL selection data; the power of modern machine learning methods; and access to large, inexpensive, commercially-available libraries creates a powerful new approach for hit finding.
[ { "created": "Fri, 31 Jan 2020 19:31:23 GMT", "version": "v1" } ]
2020-06-15
[ [ "McCloskey", "Kevin", "" ], [ "Sigel", "Eric A.", "" ], [ "Kearnes", "Steven", "" ], [ "Xue", "Ling", "" ], [ "Tian", "Xia", "" ], [ "Moccia", "Dennis", "" ], [ "Gikunju", "Diana", "" ], [ "Bazzaz", "Sana", "" ], [ "Chan", "Betty", "" ], [ "Clark", "Matthew A.", "" ], [ "Cuozzo", "John W.", "" ], [ "Guié", "Marie-Aude", "" ], [ "Guilinger", "John P.", "" ], [ "Huguet", "Christelle", "" ], [ "Hupp", "Christopher D.", "" ], [ "Keefe", "Anthony D.", "" ], [ "Mulhern", "Christopher J.", "" ], [ "Zhang", "Ying", "" ], [ "Riley", "Patrick", "" ] ]
DNA-encoded small molecule libraries (DELs) have enabled discovery of novel inhibitors for many distinct protein targets of therapeutic value through screening of libraries with up to billions of unique small molecules. We demonstrate a new approach applying machine learning to DEL selection data by identifying active molecules from a large commercial collection and a virtual library of easily synthesizable compounds. We train models using only DEL selection data and apply automated or automatable filters with chemist review restricted to the removal of molecules with potential for instability or reactivity. We validate this approach with a large prospective study (nearly 2000 compounds tested) across three diverse protein targets: sEH (a hydrolase), ER{\alpha} (a nuclear receptor), and c-KIT (a kinase). The approach is effective, with an overall hit rate of {\sim}30% at 30 {\textmu}M and discovery of potent compounds (IC50 <10 nM) for every target. The model makes useful predictions even for molecules dissimilar to the original DEL and the compounds identified are diverse, predominantly drug-like, and different from known ligands. Collectively, the quality and quantity of DEL selection data; the power of modern machine learning methods; and access to large, inexpensive, commercially-available libraries creates a powerful new approach for hit finding.
1902.04614
Melinda Liu Perkins
Melinda Liu Perkins, Murat Arcak
A Spatial Filtering Approach to Biological Patterning
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interactions between neighboring cells are essential for generating or refining patterns in a number of biological systems. We propose a discrete filtering approach to predict how networks of cells modulate spatially varying input signals to produce more complicated or precise output signals. The interconnections between cells determine the set of spatial modes that are amplified or suppressed based on the coupling and internal dynamics of each cell, analogously to the way a traditional digital filter modifies the frequency components of a discrete signal. We apply the framework to two systems in developmental biology: the Notch-Delta interaction that shapes \textit{Drosophila} wing veins and the Sox9/Bmp/Wnt network responsible for digit formation in vertebrate limbs. The latter case study demonstrates that Turing-like patterns may occur even in the absence of instabilities. Results also indicate that developmental biological systems may be inherently robust to both correlated and uncorrelated noise sources. Our work shows that a spatial frequency-based interpretation simplifies the process of predicting patterning in living organisms when both environmental influences and intercellular interactions are present.
[ { "created": "Fri, 8 Feb 2019 23:32:13 GMT", "version": "v1" } ]
2019-02-14
[ [ "Perkins", "Melinda Liu", "" ], [ "Arcak", "Murat", "" ] ]
Interactions between neighboring cells are essential for generating or refining patterns in a number of biological systems. We propose a discrete filtering approach to predict how networks of cells modulate spatially varying input signals to produce more complicated or precise output signals. The interconnections between cells determine the set of spatial modes that are amplified or suppressed based on the coupling and internal dynamics of each cell, analogously to the way a traditional digital filter modifies the frequency components of a discrete signal. We apply the framework to two systems in developmental biology: the Notch-Delta interaction that shapes \textit{Drosophila} wing veins and the Sox9/Bmp/Wnt network responsible for digit formation in vertebrate limbs. The latter case study demonstrates that Turing-like patterns may occur even in the absence of instabilities. Results also indicate that developmental biological systems may be inherently robust to both correlated and uncorrelated noise sources. Our work shows that a spatial frequency-based interpretation simplifies the process of predicting patterning in living organisms when both environmental influences and intercellular interactions are present.
1809.03835
Mohammad Sultan Alam
Mohammad Sultan Alam
Analysis of Sequence Polymorphism of LINEs and SINEs in Entamoeba histolytica
M.Tech Dissertation (2010-2012) under the guidance of Prof. Alok Bhattacharya and Prof. Ram Ramaswamy in JNU. 59 pages, 28 figures, 8 tables
null
null
null
q-bio.PE cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of this dissertation is to study sequence polymorphism in the retrotransposable elements of Entamoeba histolytica. Quasispecies theory, an equilibrium (stationary-state) concept, has been used to understand the behaviour of these elements. Two datasets of retrotransposons of Entamoeba histolytica have been used. We present results from both datasets of retrotransposons (SINE1s) of E. histolytica. We have calculated the mutation rate of EhSINE1s for both datasets and drawn a phylogenetic tree for newly determined EhSINE1s (dataset II). We have also discussed the variation in lengths of EhSINE1s for both datasets. Using the quasispecies model, we have shown how sequences of SINE1s vary within the population. The outputs of the quasispecies model are discussed in the presence and the absence of back mutation by taking different values of fitness. From our study of non-long terminal repeat retrotransposons (LINEs and their non-autonomous partners, SINEs) of Entamoeba histolytica, we conclude that an active EhSINE can generate very similar copies of itself by retrotransposition, thereby increasing the number of mutations and giving rise to sequence polymorphism. We conclude that the mutation rate of SINEs is very high. This high mutation rate offers an explanation for the persistence of SINEs, which may affect the genetic analysis of EhSINE1 ancestries and the calculation of phylogenetic distances.
[ { "created": "Mon, 10 Sep 2018 10:33:39 GMT", "version": "v1" } ]
2018-09-12
[ [ "Alam", "Mohammad Sultan", "" ] ]
The goal of this dissertation is to study sequence polymorphism in the retrotransposable elements of Entamoeba histolytica. Quasispecies theory, an equilibrium (stationary-state) concept, has been used to understand the behaviour of these elements. Two datasets of retrotransposons of Entamoeba histolytica have been used. We present results from both datasets of retrotransposons (SINE1s) of E. histolytica. We have calculated the mutation rate of EhSINE1s for both datasets and drawn a phylogenetic tree for newly determined EhSINE1s (dataset II). We have also discussed the variation in lengths of EhSINE1s for both datasets. Using the quasispecies model, we have shown how sequences of SINE1s vary within the population. The outputs of the quasispecies model are discussed in the presence and the absence of back mutation by taking different values of fitness. From our study of non-long terminal repeat retrotransposons (LINEs and their non-autonomous partners, SINEs) of Entamoeba histolytica, we conclude that an active EhSINE can generate very similar copies of itself by retrotransposition, thereby increasing the number of mutations and giving rise to sequence polymorphism. We conclude that the mutation rate of SINEs is very high. This high mutation rate offers an explanation for the persistence of SINEs, which may affect the genetic analysis of EhSINE1 ancestries and the calculation of phylogenetic distances.
1407.4440
Jack Heal
J. W. Heal, S. A. Wells, R. B. Freedman and R. A. R\"omer
Characterizing the folding core of the cyclophilin A - cyclosporin A complex II: improving folding core predictions by including mobility
9 pages, 6 figures
Biophys J. 108, 1739-1746 (2015)
10.1016/j.bpj.2015.02.017
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining the folding core of a protein yields information about its folding process and dynamics. The experimental procedures for identifying the amino acids which make up the folding core include hydrogen-deuterium exchange and $\Phi$-value analysis and can be expensive and time consuming. As such there is a desire to improve upon existing methods for determining protein folding cores theoretically. Here, we use a combined method of rigidity analysis alongside coarse-grained simulations of protein motion in order to improve folding core predictions for unbound CypA and for the CypA-CsA complex. We find that the most specific prediction of folding cores in CypA and CypA-CsA comes from the intersection of the results of static rigidity analysis, implemented in the FIRST software suite, and simulations of the propensity for flexible motion, using the FRODA tool.
[ { "created": "Wed, 16 Jul 2014 19:37:16 GMT", "version": "v1" } ]
2015-04-09
[ [ "Heal", "J. W.", "" ], [ "Wells", "S. A.", "" ], [ "Freedman", "R. B.", "" ], [ "Römer", "R. A.", "" ] ]
Determining the folding core of a protein yields information about its folding process and dynamics. The experimental procedures for identifying the amino acids which make up the folding core include hydrogen-deuterium exchange and $\Phi$-value analysis and can be expensive and time consuming. As such there is a desire to improve upon existing methods for determining protein folding cores theoretically. Here, we use a combined method of rigidity analysis alongside coarse-grained simulations of protein motion in order to improve folding core predictions for unbound CypA and for the CypA-CsA complex. We find that the most specific prediction of folding cores in CypA and CypA-CsA comes from the intersection of the results of static rigidity analysis, implemented in the FIRST software suite, and simulations of the propensity for flexible motion, using the FRODA tool.
1705.10647
Jing Tu
Junji Li, Na Lu, Xulian Shi, Yi Qiao, Liang Chen, Mengqin Duan, Yong Hou, Qinyu Ge, Yuhan Tao, Jing Tu, Zuhong Lu
MDA in Capillary for Whole Genome Amplification
null
Anal. Chem. 2017, 89, 10147-10152
10.1021/acs.analchem.7b02183
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Whole genome amplification (WGA) plays an important role in sample preparation of low-input templates for high-throughput sequencing. Multiple displacement amplification (MDA), a popular isothermal WGA method, suffers a major hurdle of highly uneven amplification. Optimizations have been made in the past by separating the reagents into numbers of tiny chambers or droplets in microfluidic devices, which significantly improves the amplification uniformity of MDA. However, a skill barrier still exists for biological researchers to handle chip fabrication and droplet manipulation. Here, we present a novel MDA protocol, in-capillary MDA (icMDA), which significantly simplifies the manipulation and improves the uniformity of amplification by dispersing reagents in a long quasi-1D capillary tubing. We demonstrated that icMDA is able to accurately detect SNVs with higher efficiency and sensitivity. Moreover, this straightforward method employs neither customized instruments nor complicated operations, making it a ready-to-use approach for most laboratories.
[ { "created": "Tue, 30 May 2017 13:53:25 GMT", "version": "v1" } ]
2019-02-12
[ [ "Li", "Junji", "" ], [ "Lu", "Na", "" ], [ "Shi", "Xulian", "" ], [ "Qiao", "Yi", "" ], [ "Chen", "Liang", "" ], [ "Duan", "Mengqin", "" ], [ "Hou", "Yong", "" ], [ "Ge", "Qinyu", "" ], [ "Tao", "Yuhan", "" ], [ "Tu", "Jing", "" ], [ "Lu", "Zuhong", "" ] ]
Whole genome amplification (WGA) plays an important role in sample preparation of low-input templates for high-throughput sequencing. Multiple displacement amplification (MDA), a popular isothermal WGA method, suffers a major hurdle of highly uneven amplification. Optimizations have been made in the past by separating the reagents into numbers of tiny chambers or droplets in microfluidic devices, which significantly improves the amplification uniformity of MDA. However, a skill barrier still exists for biological researchers to handle chip fabrication and droplet manipulation. Here, we present a novel MDA protocol, in-capillary MDA (icMDA), which significantly simplifies the manipulation and improves the uniformity of amplification by dispersing reagents in a long quasi-1D capillary tubing. We demonstrated that icMDA is able to accurately detect SNVs with higher efficiency and sensitivity. Moreover, this straightforward method employs neither customized instruments nor complicated operations, making it a ready-to-use approach for most laboratories.
1109.2209
Wilfred Ndifon
Wilfred Ndifon
Supporting information for: New methods for analyzing serological data with applications to influenza surveillance
null
Influenza and Other Respiratory Viruses 5:206--212, 2011
10.1111/j.1750-2659.2010.00192.x
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For decades, the hemagglutination-inhibition (HI) assay has been used in epidemiological and basic biological studies of influenza viruses. The mechanistic basis of the assay results (called titers) is not well understood. The first part of this document describes a biophysical model of HI that illuminates the mechanistic basis of and provides the theoretical motivation for new ways of interpreting titers. The biophysical model is applicable to other serological assays; this fact is illustrated using the neutralization assay. The second part of the document describes some new ways of interpreting titers, which involve, among other methods, singular value decomposition and probabilistic multidimensional scaling. The third part of the document discusses biological and mathematical issues related to the determination of the effective dimensionality of titers, and describes an algorithm for recovering unavailable titers.
[ { "created": "Sat, 10 Sep 2011 10:32:11 GMT", "version": "v1" } ]
2011-09-13
[ [ "Ndifon", "Wilfred", "" ] ]
For decades, the hemagglutination-inhibition (HI) assay has been used in epidemiological and basic biological studies of influenza viruses. The mechanistic basis of the assay results (called titers) is not well understood. The first part of this document describes a biophysical model of HI that illuminates the mechanistic basis of and provides the theoretical motivation for new ways of interpreting titers. The biophysical model is applicable to other serological assays; this fact is illustrated using the neutralization assay. The second part of the document describes some new ways of interpreting titers, which involve, among other methods, singular value decomposition and probabilistic multidimensional scaling. The third part of the document discusses biological and mathematical issues related to the determination of the effective dimensionality of titers, and describes an algorithm for recovering unavailable titers.
2204.11954
Takeshi Yasui
Shogo Miyamura, Ryo Oe, Takuya Nakahara, Shota Okada, Shuji Taue, Yu Tokizane, Takeo Minamikawa, Taka-aki Yano, Kunihiro Otsuka, Ayuko Sakane, Takuya Sasaki, Koji Yasutomo, Taira Kajisa, and Takeshi Yasui
Rapid, high-sensitivity detection of biomolecules using dual-comb biosensing: application to the SARS-CoV-2 nucleocapsid protein
42 pages, 6 figures, 2 tables
Scientific Reports, volume 13, Article number: 14541 (2023)
10.1038/s41598-023-41436-3
null
q-bio.QM physics.optics
http://creativecommons.org/licenses/by/4.0/
Rapid, sensitive detection of biomolecules is important for improved testing methods for viruses as well as biomarkers and environmental hormones. For example, testing for SARS-CoV-2 is essential in the fight against the COVID-19 pandemic. Reverse-transcription polymerase chain reaction (RT-PCR) is the current standard for COVID-19 testing; however, it is hampered by the long testing process. Shortening the testing process while achieving high sensitivity would facilitate sooner quarantine and thus presumably prevention of the spread of SARS-CoV-2. Here, we aim to achieve rapid, sensitive detection of the SARS-CoV-2 nucleocapsid protein by enhancing the performance of optical biosensing with a dual-comb configuration of optical frequency combs. The virus-concentration-dependent optical spectrum shift is transformed into a photonic RF shift by frequency conversion between the optical and RF regions, facilitating mature electrical frequency measurements. Furthermore, active-dummy temperature-drift compensation enables very small changes in the virus-concentration-dependent signal to be extracted from the large, variable background signal. This dual-comb biosensing technique has the potential to reduce the COVID-19 testing time to 10 min while maintaining sensitivity close to that of RT-PCR. Furthermore, this system can be applied for sensing of not only viruses but also various biomolecules for clinical diagnosis, health care, and environmental monitoring.
[ { "created": "Mon, 25 Apr 2022 20:30:34 GMT", "version": "v1" }, { "created": "Tue, 26 Jul 2022 21:18:36 GMT", "version": "v2" } ]
2023-10-24
[ [ "Miyamura", "Shogo", "" ], [ "Oe", "Ryo", "" ], [ "Nakahara", "Takuya", "" ], [ "Okada", "Shota", "" ], [ "Taue", "Shuji", "" ], [ "Tokizane", "Yu", "" ], [ "Minamikawa", "Takeo", "" ], [ "Yano", "Taka-aki", "" ], [ "Otsuka", "Kunihiro", "" ], [ "Sakane", "Ayuko", "" ], [ "Sasaki", "Takuya", "" ], [ "Yasutomo", "Koji", "" ], [ "Kajisa", "Taira", "" ], [ "Yasui", "Takeshi", "" ] ]
Rapid, sensitive detection of biomolecules is important for improved testing methods for viruses as well as biomarkers and environmental hormones. For example, testing for SARS-CoV-2 is essential in the fight against the COVID-19 pandemic. Reverse-transcription polymerase chain reaction (RT-PCR) is the current standard for COVID-19 testing; however, it is hampered by the long testing process. Shortening the testing process while achieving high sensitivity would facilitate sooner quarantine and thus presumably prevention of the spread of SARS-CoV-2. Here, we aim to achieve rapid, sensitive detection of the SARS-CoV-2 nucleocapsid protein by enhancing the performance of optical biosensing with a dual-comb configuration of optical frequency combs. The virus-concentration-dependent optical spectrum shift is transformed into a photonic RF shift by frequency conversion between the optical and RF regions, facilitating mature electrical frequency measurements. Furthermore, active-dummy temperature-drift compensation enables very small changes in the virus-concentration-dependent signal to be extracted from the large, variable background signal. This dual-comb biosensing technique has the potential to reduce the COVID-19 testing time to 10 min while maintaining sensitivity close to that of RT-PCR. Furthermore, this system can be applied for sensing of not only viruses but also various biomolecules for clinical diagnosis, health care, and environmental monitoring.
q-bio/0309020
Eli Eisenberg
Eli Eisenberg and Erez Y. Levanon
Human housekeeping genes are compact
null
Trends in Genetics, vol. 19, 362-365 (2003)
null
null
q-bio.GN
null
We identify a set of 575 human genes that are expressed in all conditions tested in a publicly available database of microarray results. Based on this common occurrence, the set is expected to be rich in "housekeeping" genes, showing constitutive expression in all tissues. We compare selected aspects of their genomic structure with a set of background genes. We find that the introns, untranslated regions and coding sequences of the housekeeping genes are shorter, indicating a selection for compactness in these genes.
[ { "created": "Tue, 30 Sep 2003 11:26:28 GMT", "version": "v1" } ]
2007-05-23
[ [ "Eisenberg", "Eli", "" ], [ "Levanon", "Erez Y.", "" ] ]
We identify a set of 575 human genes that are expressed in all conditions tested in a publicly available database of microarray results. Based on this common occurrence, the set is expected to be rich in "housekeeping" genes, showing constitutive expression in all tissues. We compare selected aspects of their genomic structure with a set of background genes. We find that the introns, untranslated regions and coding sequences of the housekeeping genes are shorter, indicating a selection for compactness in these genes.
1406.1793
Jerome Vanclay
Jerome K. Vanclay
Planning horizons and end conditions for sustained yield studies in continuous cover forests
8 pages, 1 figure, 54 references, Ecological Indicators (2014)
Ecological Indicators 48 (2015) 436-439
10.1016/j.ecolind.2014.09.012
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The contemporary forestry preoccupation with non-declining even-flow during yield simulations detracts from more important questions about the constraints that should bind the end of a simulation. Whilst long simulations help to convey a sense of sustainability, they are inferior to stronger indicators such as the optimal state and binding conditions at the end of a simulation. Rigorous definitions of sustainability that constrain the terminal state should allow flexibility in the planning horizon and relaxation of non-declining even-flow, allowing both greater economic efficiency and better environmental outcomes. Suitable definitions cannot be divorced from forest type and management objectives, but should embrace concepts that ensure the anticipated value of the next harvest, the continuity of growing stock, and in the case of uneven-aged management, the adequacy of regeneration.
[ { "created": "Sat, 7 Jun 2014 09:50:01 GMT", "version": "v1" }, { "created": "Tue, 2 Sep 2014 14:25:37 GMT", "version": "v2" }, { "created": "Thu, 11 Sep 2014 03:00:05 GMT", "version": "v3" } ]
2014-10-07
[ [ "Vanclay", "Jerome K.", "" ] ]
The contemporary forestry preoccupation with non-declining even-flow during yield simulations detracts from more important questions about the constraints that should bind the end of a simulation. Whilst long simulations help to convey a sense of sustainability, they are inferior to stronger indicators such as the optimal state and binding conditions at the end of a simulation. Rigorous definitions of sustainability that constrain the terminal state should allow flexibility in the planning horizon and relaxation of non-declining even-flow, allowing both greater economic efficiency and better environmental outcomes. Suitable definitions cannot be divorced from forest type and management objectives, but should embrace concepts that ensure the anticipated value of the next harvest, the continuity of growing stock, and in the case of uneven-aged management, the adequacy of regeneration.
0905.0701
Ernest Montbrio
Alex Roxin, Ernest Montbrio
How effective delays shape oscillatory dynamics in neuronal networks
59 pages, 25 figures
Physica D 240, (2011) 323-345
10.1016/j.physd.2010.09.009
null
q-bio.NC nlin.PS q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synaptic, dendritic and single-cell kinetics generate significant time delays that shape the dynamics of large networks of spiking neurons. Previous work has shown that such effective delays can be taken into account with a rate model through the addition of an explicit, fixed delay [Roxin et al. PRL 238103 (2005)]. Here we extend this work to account for arbitrary symmetric patterns of synaptic connectivity and generic nonlinear transfer functions. Specifically, we conduct a weakly nonlinear analysis of the dynamical states arising via primary instabilities of the asynchronous state. In this way we determine analytically how the nature and stability of these states depend on the choice of transfer function and connectivity. We arrive at two general observations of physiological relevance that could not be explained in previous works. These are: 1 - Fast oscillations are always supercritical for realistic transfer functions. 2 - Traveling waves are preferred over standing waves given plausible patterns of local connectivity. We finally demonstrate that these results show a good agreement with those obtained performing numerical simulations of a network of Hodgkin-Huxley neurons.
[ { "created": "Wed, 6 May 2009 16:15:10 GMT", "version": "v1" }, { "created": "Thu, 13 Aug 2009 21:49:46 GMT", "version": "v2" }, { "created": "Tue, 17 May 2011 14:56:45 GMT", "version": "v3" } ]
2014-01-31
[ [ "Roxin", "Alex", "" ], [ "Montbrio", "Ernest", "" ] ]
Synaptic, dendritic and single-cell kinetics generate significant time delays that shape the dynamics of large networks of spiking neurons. Previous work has shown that such effective delays can be taken into account with a rate model through the addition of an explicit, fixed delay [Roxin et al. PRL 238103 (2005)]. Here we extend this work to account for arbitrary symmetric patterns of synaptic connectivity and generic nonlinear transfer functions. Specifically, we conduct a weakly nonlinear analysis of the dynamical states arising via primary instabilities of the asynchronous state. In this way we determine analytically how the nature and stability of these states depend on the choice of transfer function and connectivity. We arrive at two general observations of physiological relevance that could not be explained in previous works. These are: 1 - Fast oscillations are always supercritical for realistic transfer functions. 2 - Traveling waves are preferred over standing waves given plausible patterns of local connectivity. We finally demonstrate that these results show a good agreement with those obtained performing numerical simulations of a network of Hodgkin-Huxley neurons.
2101.11400
Jose Marie Antonio Minoza
Jose Marie Antonio Minoza, Vena Pearl Bongolan and Joshua Frankie Rayo
COVID-19 Agent-Based Model with Multi-objective Optimization for Vaccine Distribution
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Now that SARS-CoV-2 (COVID-19) vaccines have been developed, it is very important to plan their distribution strategy. In this paper, we formulated a multi-objective linear programming model to optimize vaccine distribution and applied it to the agent-based version of our age-stratified and quarantine-modified SEIR with non-linear incidence rates (ASQ-SEIR-NLIR) compartmental model. Simulations were performed using COVID-19 data from Quezon City, and results were analyzed under various scenarios: (1) no vaccination, (2) base vaccination (prioritizing essential workers and the vulnerable population), (3) prioritizing the mobile workforce, (4) prioritizing the elderly, and (5) prioritizing the mobile workforce and the elderly; in terms of (a) reducing infection rates and (b) reducing mortality incidence. After 10 simulations of distributing 500,000 vaccine courses, results show that prioritizing the mobile workforce minimizes further infections by 24.14%, which is better than the other scenarios. On the other hand, prioritizing the elderly yields the highest protection (439%) for the Quezon City population compared to the other scenarios. This could be because younger people, when they contract the disease, have higher chances of recovery than the elderly, which leads to a reduction in mortality cases.
[ { "created": "Wed, 27 Jan 2021 18:07:10 GMT", "version": "v1" } ]
2021-01-28
[ [ "Minoza", "Jose Marie Antonio", "" ], [ "Bongolan", "Vena Pearl", "" ], [ "Rayo", "Joshua Frankie", "" ] ]
Now that SARS-CoV-2 (COVID-19) vaccines have been developed, it is very important to plan their distribution strategy. In this paper, we formulated a multi-objective linear programming model to optimize vaccine distribution and applied it to the agent-based version of our age-stratified and quarantine-modified SEIR with non-linear incidence rates (ASQ-SEIR-NLIR) compartmental model. Simulations were performed using COVID-19 data from Quezon City, and results were analyzed under various scenarios: (1) no vaccination, (2) base vaccination (prioritizing essential workers and the vulnerable population), (3) prioritizing the mobile workforce, (4) prioritizing the elderly, and (5) prioritizing the mobile workforce and the elderly; in terms of (a) reducing infection rates and (b) reducing mortality incidence. After 10 simulations of distributing 500,000 vaccine courses, results show that prioritizing the mobile workforce minimizes further infections by 24.14%, which is better than the other scenarios. On the other hand, prioritizing the elderly yields the highest protection (439%) for the Quezon City population compared to the other scenarios. This could be because younger people, when they contract the disease, have higher chances of recovery than the elderly, which leads to a reduction in mortality cases.
1601.05388
Steven N. Evans
Joshua G. Schraiber and Steven N. Evans and Montgomery Slatkin
Bayesian inference of natural selection from allele frequency time series
27 pages
null
null
null
q-bio.PE math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of accessible ancient DNA technology now allows the direct ascertainment of allele frequencies in ancestral populations, thereby enabling the use of allele frequency time series to detect and estimate natural selection. Such direct observations of allele frequency dynamics are expected to be more powerful than inferences made using patterns of linked neutral variation obtained from modern individuals. We develop a Bayesian method to make use of allele frequency time series data and infer the parameters of general diploid selection, along with allele age, in non-equilibrium populations. We introduce a novel path augmentation approach, in which we use Markov chain Monte Carlo to integrate over the space of allele frequency trajectories consistent with the observed data. Using simulations, we show that this approach has good power to estimate selection coefficients and allele age. Moreover, when applying our approach to data on horse coat color, we find that ignoring a relevant demographic history can significantly bias the results of inference. Our approach is made available in a C++ software package.
[ { "created": "Wed, 20 Jan 2016 19:48:24 GMT", "version": "v1" } ]
2016-01-21
[ [ "Schraiber", "Joshua G.", "" ], [ "Evans", "Steven N.", "" ], [ "Slatkin", "Montgomery", "" ] ]
The advent of accessible ancient DNA technology now allows the direct ascertainment of allele frequencies in ancestral populations, thereby enabling the use of allele frequency time series to detect and estimate natural selection. Such direct observations of allele frequency dynamics are expected to be more powerful than inferences made using patterns of linked neutral variation obtained from modern individuals. We develop a Bayesian method to make use of allele frequency time series data and infer the parameters of general diploid selection, along with allele age, in non-equilibrium populations. We introduce a novel path augmentation approach, in which we use Markov chain Monte Carlo to integrate over the space of allele frequency trajectories consistent with the observed data. Using simulations, we show that this approach has good power to estimate selection coefficients and allele age. Moreover, when applying our approach to data on horse coat color, we find that ignoring a relevant demographic history can significantly bias the results of inference. Our approach is made available in a C++ software package.
2105.13705
Arthur Prat-Carrabin
Arthur Prat-Carrabin and Michael Woodford
Bias and variance of the Bayesian-mean decoder
Accepted at NeurIPS 2021. 12 pages, 2 figures. Supplementary Materials: 7 pages, 1 figure
Advances in Neural Information Processing Systems 34 (NeurIPS 2021)
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Perception, in theoretical neuroscience, has been modeled as the encoding of external stimuli into internal signals, which are then decoded. The Bayesian mean is an important decoder, as it is optimal for purposes of both estimation and discrimination. We present widely-applicable approximations to the bias and to the variance of the Bayesian mean, obtained under the minimal and biologically-relevant assumption that the encoding results from a series of independent, though not necessarily identically-distributed, signals. Simulations substantiate the accuracy of our approximations in the small-noise regime. The bias of the Bayesian mean comprises two components: one driven by the prior, and one driven by the precision of the encoding. If the encoding is 'efficient', the two components have opposite effects; their relative strengths are determined by the objective that the encoding optimizes. The experimental literature on perception reports both 'Bayesian' biases directed towards prior expectations, and opposite, 'anti-Bayesian' biases. We show that different tasks are indeed predicted to yield such contradictory biases, under a consistently-optimal encoding-decoding model. Moreover, we recover Wei and Stocker's "law of human perception", a relation between the bias of the Bayesian mean and the derivative of its variance, and show how the coefficient of proportionality in this law depends on the task at hand. Our results provide a parsimonious theory of optimal perception under constraints, in which encoding and decoding are adapted both to the prior and to the task faced by the observer.
[ { "created": "Fri, 28 May 2021 09:59:37 GMT", "version": "v1" }, { "created": "Mon, 6 Dec 2021 15:41:28 GMT", "version": "v2" } ]
2022-03-03
[ [ "Prat-Carrabin", "Arthur", "" ], [ "Woodford", "Michael", "" ] ]
Perception, in theoretical neuroscience, has been modeled as the encoding of external stimuli into internal signals, which are then decoded. The Bayesian mean is an important decoder, as it is optimal for purposes of both estimation and discrimination. We present widely-applicable approximations to the bias and to the variance of the Bayesian mean, obtained under the minimal and biologically-relevant assumption that the encoding results from a series of independent, though not necessarily identically-distributed, signals. Simulations substantiate the accuracy of our approximations in the small-noise regime. The bias of the Bayesian mean comprises two components: one driven by the prior, and one driven by the precision of the encoding. If the encoding is 'efficient', the two components have opposite effects; their relative strengths are determined by the objective that the encoding optimizes. The experimental literature on perception reports both 'Bayesian' biases directed towards prior expectations, and opposite, 'anti-Bayesian' biases. We show that different tasks are indeed predicted to yield such contradictory biases, under a consistently-optimal encoding-decoding model. Moreover, we recover Wei and Stocker's "law of human perception", a relation between the bias of the Bayesian mean and the derivative of its variance, and show how the coefficient of proportionality in this law depends on the task at hand. Our results provide a parsimonious theory of optimal perception under constraints, in which encoding and decoding are adapted both to the prior and to the task faced by the observer.
1309.1108
Nicholas Glykos
Panagiotis I. Koukos and Nicholas M. Glykos
On the application of Good-Turing statistics to quantify convergence of biomolecular simulations
null
null
10.1021/ci4005817
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantifying convergence and sufficient sampling of macromolecular molecular dynamics simulations is more often than not a source of controversy (and of various ad hoc solutions) in the field. Clearly, the only reasonable, consistent and satisfying way to infer convergence (or otherwise) of a molecular dynamics trajectory must be based on probability theory. Ideally, the question we would wish to answer is the following: "What is the probability that a molecular configuration important for the analysis in hand has not yet been observed?". Here we propose a method for answering a variant of this question by using the Good-Turing formalism for frequency estimation of unobserved species in a sample. Although several approaches may be followed in order to deal with the problem of discretizing the configurational space, for this work we use the classical RMSD matrix as a means to answering the following question: "What is the probability that a molecular configuration with an RMSD (from all other already observed configurations) higher than a given threshold has not actually been observed?". We apply the proposed method to several different trajectories and show that the procedure appears to be both computationally stable and internally consistent. A free, open-source program implementing these ideas is immediately available for download via public repositories.
[ { "created": "Wed, 4 Sep 2013 17:13:25 GMT", "version": "v1" }, { "created": "Sat, 7 Sep 2013 09:19:41 GMT", "version": "v2" } ]
2013-12-23
[ [ "Koukos", "Panagiotis I.", "" ], [ "Glykos", "Nicholas M.", "" ] ]
Quantifying convergence and sufficient sampling of macromolecular molecular dynamics simulations is more often than not a source of controversy (and of various ad hoc solutions) in the field. Clearly, the only reasonable, consistent and satisfying way to infer convergence (or otherwise) of a molecular dynamics trajectory must be based on probability theory. Ideally, the question we would wish to answer is the following: "What is the probability that a molecular configuration important for the analysis in hand has not yet been observed?". Here we propose a method for answering a variant of this question by using the Good-Turing formalism for frequency estimation of unobserved species in a sample. Although several approaches may be followed in order to deal with the problem of discretizing the configurational space, for this work we use the classical RMSD matrix as a means to answering the following question: "What is the probability that a molecular configuration with an RMSD (from all other already observed configurations) higher than a given threshold has not actually been observed?". We apply the proposed method to several different trajectories and show that the procedure appears to be both computationally stable and internally consistent. A free, open-source program implementing these ideas is immediately available for download via public repositories.
2306.16520
Jean Pierre G\'omez Matos
Jean Pierre Gomez
Nonparametric causal discovery with applications to cancer bioinformatics
Diploma Thesis in Computer Science. In Spanish. Supervised by Drs. Gabriel Gil and Augusto Gonzalez. 74 pages, 11 figures, 12 tables
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Many natural phenomena are intrinsically causal. The discovery of the cause-effect relationships implicit in these processes can help us to understand and describe them more effectively, which boils down to causal discovery about the data and variables that describe them. However, causal discovery is not an easy task. Current methods for this are extremely complex and costly, and their usefulness is strongly compromised in contexts with large amounts of data or where the nature of the variables involved is unknown. As an alternative, this paper presents an original methodology for causal discovery, built on essential aspects of the main theories of causality, in particular probabilistic causality, with many meeting points with the inferential approach of regularity theories and others. Based on this methodology, a non-parametric algorithm is developed for the discovery of causal relationships between binary variables associated to data sets, and the modeling in graphs of the causal networks they describe. This algorithm is applied to gene expression data sets in normal and cancerous prostate tissues, with the aim of discovering cause-effect relationships between gene dysregulations leading to carcinogenesis. The gene characterizations constructed from the causal relationships discovered are compared with another study based on principal component analysis (PCA) on the same data, with satisfactory results.
[ { "created": "Wed, 28 Jun 2023 19:26:19 GMT", "version": "v1" }, { "created": "Mon, 8 Jan 2024 05:11:44 GMT", "version": "v2" } ]
2024-01-09
[ [ "Gomez", "Jean Pierre", "" ] ]
Many natural phenomena are intrinsically causal. The discovery of the cause-effect relationships implicit in these processes can help us to understand and describe them more effectively, which boils down to causal discovery about the data and variables that describe them. However, causal discovery is not an easy task. Current methods for this are extremely complex and costly, and their usefulness is strongly compromised in contexts with large amounts of data or where the nature of the variables involved is unknown. As an alternative, this paper presents an original methodology for causal discovery, built on essential aspects of the main theories of causality, in particular probabilistic causality, with many meeting points with the inferential approach of regularity theories and others. Based on this methodology, a non-parametric algorithm is developed for the discovery of causal relationships between binary variables associated to data sets, and the modeling in graphs of the causal networks they describe. This algorithm is applied to gene expression data sets in normal and cancerous prostate tissues, with the aim of discovering cause-effect relationships between gene dysregulations leading to carcinogenesis. The gene characterizations constructed from the causal relationships discovered are compared with another study based on principal component analysis (PCA) on the same data, with satisfactory results.
1805.04072
Jamie Oaks
Jamie R. Oaks, Kerry A. Cobb, Vladimir N. Minin, Adam D. Leach\'e
Marginal likelihoods in phylogenetics: a review of methods and applications
33 pages, 3 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
By providing a framework for accounting for the shared ancestry inherent to all life, phylogenetics is becoming the statistical foundation of biology. The importance of model choice continues to grow as phylogenetic models continue to increase in complexity to better capture micro- and macroevolutionary processes. In a Bayesian framework, the marginal likelihood is how data update our prior beliefs about models, which gives us an intuitive measure of comparing model fit that is grounded in probability theory. Given the rapid increase in the number and complexity of phylogenetic models, methods for approximating marginal likelihoods are increasingly important. Here we try to provide an intuitive description of marginal likelihoods and why they are important in Bayesian model testing. We also categorize and review methods for estimating marginal likelihoods of phylogenetic models, highlighting several recent methods that provide well-behaved estimates. Furthermore, we review some empirical studies that demonstrate how marginal likelihoods can be used to learn about models of evolution from biological data. We discuss promising alternatives that can complement marginal likelihoods for Bayesian model choice, including posterior-predictive methods. Using simulations, we find one alternative method based on approximate-Bayesian computation (ABC) to be biased. We conclude by discussing the challenges of Bayesian model choice and future directions that promise to improve the approximation of marginal likelihoods and Bayesian phylogenetics as a whole.
[ { "created": "Thu, 10 May 2018 17:26:09 GMT", "version": "v1" }, { "created": "Mon, 21 Jan 2019 20:47:57 GMT", "version": "v2" }, { "created": "Fri, 1 Feb 2019 20:03:10 GMT", "version": "v3" } ]
2019-02-05
[ [ "Oaks", "Jamie R.", "" ], [ "Cobb", "Kerry A.", "" ], [ "Minin", "Vladimir N.", "" ], [ "Leaché", "Adam D.", "" ] ]
By providing a framework for accounting for the shared ancestry inherent to all life, phylogenetics is becoming the statistical foundation of biology. The importance of model choice continues to grow as phylogenetic models continue to increase in complexity to better capture micro- and macroevolutionary processes. In a Bayesian framework, the marginal likelihood is how data update our prior beliefs about models, which gives us an intuitive measure of comparing model fit that is grounded in probability theory. Given the rapid increase in the number and complexity of phylogenetic models, methods for approximating marginal likelihoods are increasingly important. Here we try to provide an intuitive description of marginal likelihoods and why they are important in Bayesian model testing. We also categorize and review methods for estimating marginal likelihoods of phylogenetic models, highlighting several recent methods that provide well-behaved estimates. Furthermore, we review some empirical studies that demonstrate how marginal likelihoods can be used to learn about models of evolution from biological data. We discuss promising alternatives that can complement marginal likelihoods for Bayesian model choice, including posterior-predictive methods. Using simulations, we find one alternative method based on approximate-Bayesian computation (ABC) to be biased. We conclude by discussing the challenges of Bayesian model choice and future directions that promise to improve the approximation of marginal likelihoods and Bayesian phylogenetics as a whole.
1504.04756
James P. Crutchfield
Sarah E. Marzen and Michael R. DeWeese and James P. Crutchfield
Time Resolution Dependence of Information Measures for Spiking Neurons: Atoms, Scaling, and Universality
20 pages, 6 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/trdctim.htm
null
null
null
q-bio.NC cond-mat.dis-nn cs.NE math.PR nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The mutual information between stimulus and spike-train response is commonly used to monitor neural coding efficiency, but neuronal computation broadly conceived requires more refined and targeted information measures of input-output joint processes. A first step towards that larger goal is to develop information measures for individual output processes, including information generation (entropy rate), stored information (statistical complexity), predictable information (excess entropy), and active information accumulation (bound information rate). We calculate these for spike trains generated by a variety of noise-driven integrate-and-fire neurons as a function of time resolution and for alternating renewal processes. We show that their time-resolution dependence reveals coarse-grained structural properties of interspike interval statistics; e.g., $\tau$-entropy rates that diverge less quickly than the firing rate indicate interspike interval correlations. We also find evidence that the excess entropy and regularized statistical complexity of different types of integrate-and-fire neurons are universal in the continuous-time limit in the sense that they do not depend on mechanism details. This suggests a surprising simplicity in the spike trains generated by these model neurons. Interestingly, neurons with gamma-distributed ISIs and neurons whose spike trains are alternating renewal processes do not fall into the same universality class. These results lead to two conclusions. First, the dependence of information measures on time resolution reveals mechanistic details about spike train generation. Second, information measures can be used as model selection tools for analyzing spike train processes.
[ { "created": "Sat, 18 Apr 2015 20:14:19 GMT", "version": "v1" } ]
2015-04-21
[ [ "Marzen", "Sarah E.", "" ], [ "DeWeese", "Michael R.", "" ], [ "Crutchfield", "James P.", "" ] ]
The mutual information between stimulus and spike-train response is commonly used to monitor neural coding efficiency, but neuronal computation broadly conceived requires more refined and targeted information measures of input-output joint processes. A first step towards that larger goal is to develop information measures for individual output processes, including information generation (entropy rate), stored information (statistical complexity), predictable information (excess entropy), and active information accumulation (bound information rate). We calculate these for spike trains generated by a variety of noise-driven integrate-and-fire neurons as a function of time resolution and for alternating renewal processes. We show that their time-resolution dependence reveals coarse-grained structural properties of interspike interval statistics; e.g., $\tau$-entropy rates that diverge less quickly than the firing rate indicate interspike interval correlations. We also find evidence that the excess entropy and regularized statistical complexity of different types of integrate-and-fire neurons are universal in the continuous-time limit in the sense that they do not depend on mechanism details. This suggests a surprising simplicity in the spike trains generated by these model neurons. Interestingly, neurons with gamma-distributed ISIs and neurons whose spike trains are alternating renewal processes do not fall into the same universality class. These results lead to two conclusions. First, the dependence of information measures on time resolution reveals mechanistic details about spike train generation. Second, information measures can be used as model selection tools for analyzing spike train processes.
1704.07280
Joseba Dalmau
Joseba Dalmau
Asymptotic behavior of Eigen's quasispecies model
null
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study Eigen's quasispecies model in the asymptotic regime where the length of the genotypes goes to infinity and the mutation probability goes to 0. A limiting infinite system of differential equations is obtained. We prove the convergence of the trajectories, as well as the convergence of the equilibrium solutions. We give the analogous results for a discrete-time version of Eigen's model, which coincides with a model proposed by Moran.
[ { "created": "Mon, 24 Apr 2017 15:21:45 GMT", "version": "v1" } ]
2017-04-25
[ [ "Dalmau", "Joseba", "" ] ]
We study Eigen's quasispecies model in the asymptotic regime where the length of the genotypes goes to infinity and the mutation probability goes to 0. A limiting infinite system of differential equations is obtained. We prove the convergence of the trajectories, as well as the convergence of the equilibrium solutions. We give the analogous results for a discrete-time version of Eigen's model, which coincides with a model proposed by Moran.
2106.07292
Andreas Ott
Michael Bleher, Lukas Hahn, Maximilian Neumann, Juan Angel Patino-Galindo, Mathieu Carriere, Ulrich Bauer, Raul Rabadan, Andreas Ott
Topological data analysis identifies emerging adaptive mutations in SARS-CoV-2
Major revisions; new analyses added
null
null
null
q-bio.PE cs.CG q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
The COVID-19 pandemic has initiated an unprecedented worldwide effort to characterize its evolution through the mapping of mutations of the coronavirus SARS-CoV-2. The early identification of mutations that could confer adaptive advantages to the virus, such as higher infectivity or immune evasion, is of paramount importance. However, the large number of currently available genomes precludes the efficient use of phylogeny-based methods. Here we present CoVtRec, a fast and scalable Topological Data Analysis approach for the surveillance of emerging adaptive mutations in large genomic datasets. Our method overcomes limitations of state-of-the-art phylogeny-based approaches by quantifying the potential adaptiveness of mutations merely by their topological footprint in the genome alignment, without resorting to the reconstruction of a single optimal phylogenetic tree. Analyzing millions of SARS-CoV-2 genomes from GISAID, we find a correlation between topological signals and adaptation to the human host. By leveraging the stratification by time in sequence data, our method enables the high-resolution longitudinal analysis of topological signals of adaptation. We characterize the convergent evolution of the coronavirus throughout the whole pandemic to date, report on emerging potentially adaptive mutations, and pinpoint mutations in Variants of Concern that are likely associated with positive selection. Our approach can improve the surveillance of mutations of concern and guide experimental studies.
[ { "created": "Mon, 14 Jun 2021 10:38:40 GMT", "version": "v1" }, { "created": "Mon, 14 Feb 2022 16:45:27 GMT", "version": "v2" }, { "created": "Fri, 25 Aug 2023 22:38:26 GMT", "version": "v3" } ]
2023-08-29
[ [ "Bleher", "Michael", "" ], [ "Hahn", "Lukas", "" ], [ "Neumann", "Maximilian", "" ], [ "Patino-Galindo", "Juan Angel", "" ], [ "Carriere", "Mathieu", "" ], [ "Bauer", "Ulrich", "" ], [ "Rabadan", "Raul", "" ], [ "Ott", "Andreas", "" ] ]
The COVID-19 pandemic has initiated an unprecedented worldwide effort to characterize its evolution through the mapping of mutations of the coronavirus SARS-CoV-2. The early identification of mutations that could confer adaptive advantages to the virus, such as higher infectivity or immune evasion, is of paramount importance. However, the large number of currently available genomes precludes the efficient use of phylogeny-based methods. Here we present CoVtRec, a fast and scalable Topological Data Analysis approach for the surveillance of emerging adaptive mutations in large genomic datasets. Our method overcomes limitations of state-of-the-art phylogeny-based approaches by quantifying the potential adaptiveness of mutations merely by their topological footprint in the genome alignment, without resorting to the reconstruction of a single optimal phylogenetic tree. Analyzing millions of SARS-CoV-2 genomes from GISAID, we find a correlation between topological signals and adaptation to the human host. By leveraging the stratification by time in sequence data, our method enables the high-resolution longitudinal analysis of topological signals of adaptation. We characterize the convergent evolution of the coronavirus throughout the whole pandemic to date, report on emerging potentially adaptive mutations, and pinpoint mutations in Variants of Concern that are likely associated with positive selection. Our approach can improve the surveillance of mutations of concern and guide experimental studies.
1512.08309
Rakesh Malladi
Rakesh Malladi, Giridhar Kalamangalam, Nitin Tandon and Behnaam Aazhang
Identifying Seizure Onset Zone from the Causal Connectivity Inferred Using Directed Information
This paper is accepted for publication in IEEE Journal of Selected Topics in Signal Processing, special issue on Advanced Signal Processing in Brain Networks, October 2016. 16 pages, 11 figures and 2 tables
null
10.1109/JSTSP.2016.2601485
null
q-bio.NC cs.IT math.IT stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we developed a model-based and a data-driven estimator for directed information (DI) to infer the causal connectivity graph between electrocorticographic (ECoG) signals recorded from the brain and to identify the seizure onset zone (SOZ) in epileptic patients. Directed information, an information theoretic quantity, is a general metric to infer causal connectivity between time-series and is not restricted to a particular class of models, unlike the popular metrics based on Granger causality or transfer entropy. The proposed estimators are shown to be almost surely convergent. Causal connectivity between ECoG electrodes in five epileptic patients is inferred using the proposed DI estimators, after validating their performance on simulated data. We then proposed a model-based and a data-driven SOZ identification algorithm to identify SOZ from the causal connectivity inferred using model-based and data-driven DI estimators respectively. The data-driven SOZ identification outperforms the model-based SOZ identification algorithm when benchmarked against visual analysis by a neurologist, the current clinical gold standard. The causal connectivity analysis presented here is the first step towards developing novel non-surgical treatments for epilepsy.
[ { "created": "Mon, 28 Dec 2015 02:53:24 GMT", "version": "v1" }, { "created": "Tue, 16 Aug 2016 19:21:32 GMT", "version": "v2" } ]
2016-11-03
[ [ "Malladi", "Rakesh", "" ], [ "Kalamangalam", "Giridhar", "" ], [ "Tandon", "Nitin", "" ], [ "Aazhang", "Behnaam", "" ] ]
In this paper, we developed a model-based and a data-driven estimator for directed information (DI) to infer the causal connectivity graph between electrocorticographic (ECoG) signals recorded from the brain and to identify the seizure onset zone (SOZ) in epileptic patients. Directed information, an information theoretic quantity, is a general metric to infer causal connectivity between time-series and is not restricted to a particular class of models, unlike the popular metrics based on Granger causality or transfer entropy. The proposed estimators are shown to be almost surely convergent. Causal connectivity between ECoG electrodes in five epileptic patients is inferred using the proposed DI estimators, after validating their performance on simulated data. We then proposed a model-based and a data-driven SOZ identification algorithm to identify SOZ from the causal connectivity inferred using model-based and data-driven DI estimators respectively. The data-driven SOZ identification outperforms the model-based SOZ identification algorithm when benchmarked against visual analysis by a neurologist, the current clinical gold standard. The causal connectivity analysis presented here is the first step towards developing novel non-surgical treatments for epilepsy.
2301.10203
Sandra Nestler
Sandra Nestler, Moritz Helias and Matthieu Gilson
Neuronal architecture extracts statistical temporal patterns
null
null
10.1103/PhysRevResearch.5.033177
null
q-bio.NC cond-mat.dis-nn cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuronal systems need to process temporal signals. We here show how higher-order temporal (co-)fluctuations can be employed to represent and process information. Concretely, we demonstrate that a simple biologically inspired feedforward neuronal model is able to extract information from up to the third order cumulant to perform time series classification. This model relies on a weighted linear summation of synaptic inputs followed by a nonlinear gain function. Training both the synaptic weights and the nonlinear gain function exposes how the nonlinearity allows for the transfer of higher order correlations to the mean, which in turn enables the synergistic use of information encoded in multiple cumulants to maximize the classification accuracy. The approach is demonstrated on both synthetic and real-world datasets of multivariate time series. Moreover, we show that the biologically inspired architecture makes better use of the number of trainable parameters as compared to a classical machine-learning scheme. Our findings emphasize the benefit of biological neuronal architectures, paired with dedicated learning algorithms, for the processing of information embedded in higher-order statistical cumulants of temporal (co-)fluctuations.
[ { "created": "Tue, 24 Jan 2023 18:21:33 GMT", "version": "v1" } ]
2023-09-13
[ [ "Nestler", "Sandra", "" ], [ "Helias", "Moritz", "" ], [ "Gilson", "Matthieu", "" ] ]
Neuronal systems need to process temporal signals. We here show how higher-order temporal (co-)fluctuations can be employed to represent and process information. Concretely, we demonstrate that a simple biologically inspired feedforward neuronal model is able to extract information from up to the third order cumulant to perform time series classification. This model relies on a weighted linear summation of synaptic inputs followed by a nonlinear gain function. Training both the synaptic weights and the nonlinear gain function exposes how the nonlinearity allows for the transfer of higher order correlations to the mean, which in turn enables the synergistic use of information encoded in multiple cumulants to maximize the classification accuracy. The approach is demonstrated on both synthetic and real-world datasets of multivariate time series. Moreover, we show that the biologically inspired architecture makes better use of the number of trainable parameters as compared to a classical machine-learning scheme. Our findings emphasize the benefit of biological neuronal architectures, paired with dedicated learning algorithms, for the processing of information embedded in higher-order statistical cumulants of temporal (co-)fluctuations.
1909.08421
Anindya Ghose Choudhury
Anindya Ghose-Choudhury and Partha Guha
Reduction and Hamiltonian aspects of a model for virus-tumour interaction in oncolytic virotherapy
null
null
null
null
q-bio.PE math-ph math.MP nlin.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyse the Hamiltonian structure of a system of first-order ordinary differential equations used for modeling the interaction of an oncolytic virus with a tumour cell population. The analysis is based on the existence of a Jacobi Last Multiplier for the system and a time dependent first integral. For suitable conditions on the model parameters this allows for the reduction of the problem to a planar system of equations for which the time dependent Hamiltonian flows are described. The geometry of the Hamiltonian flows is finally investigated using the symplectic and cosymplectic methods.
[ { "created": "Tue, 17 Sep 2019 04:56:32 GMT", "version": "v1" } ]
2019-09-19
[ [ "Ghose-Choudhury", "Anindya", "" ], [ "Guha", "Partha", "" ] ]
We analyse the Hamiltonian structure of a system of first-order ordinary differential equations used for modeling the interaction of an oncolytic virus with a tumour cell population. The analysis is based on the existence of a Jacobi Last Multiplier for the system and a time dependent first integral. For suitable conditions on the model parameters this allows for the reduction of the problem to a planar system of equations for which the time dependent Hamiltonian flows are described. The geometry of the Hamiltonian flows is finally investigated using the symplectic and cosymplectic methods.
1503.03445
H. G. Solari
M. L. Fern\'andez, M. Otero, N. Schweigmann, H. G. Solari
A mathematically assisted reconstruction of the initial focus of the yellow fever outbreak in Buenos Aires (1871)
25 pages, 12 figures
Papers in Physics 5, 050002 (2013)
10.4279/PIP.050002
null
q-bio.PE
http://creativecommons.org/licenses/by/3.0/
We discuss the historic mortality record corresponding to the initial focus of the yellow fever epidemic outbreak registered in Buenos Aires during the year 1871 as compared to simulations of a stochastic population dynamics model. This model incorporates the biology of the urban vector of yellow fever, the mosquito Aedes aegypti, the stages of the disease in the human being as well as the spatial extension of the epidemic outbreak. After introducing the historical context and the restrictions it puts on initial conditions and ecological parameters, we discuss the general features of the simulation and the dependence on initial conditions and available sites for breeding the vector. We discuss the sensitivity, to the free parameters, of statistical estimators such as the final death toll, the day of the year when the outbreak reached half the total mortality, and the normalized daily mortality, showing some striking regularities. The model is precise and accurate enough to discuss the truthfulness of the presently accepted historic discussions of the epidemic causes, showing that there are more likely scenarios for the historic facts.
[ { "created": "Thu, 18 Dec 2014 10:35:53 GMT", "version": "v1" } ]
2015-03-12
[ [ "Fernández", "M. L.", "" ], [ "Otero", "M.", "" ], [ "Schweigmann", "N.", "" ], [ "Solari", "H. G.", "" ] ]
We discuss the historic mortality record corresponding to the initial focus of the yellow fever epidemic outbreak registered in Buenos Aires during the year 1871 as compared to simulations of a stochastic population dynamics model. This model incorporates the biology of the urban vector of yellow fever, the mosquito Aedes aegypti, the stages of the disease in the human being as well as the spatial extension of the epidemic outbreak. After introducing the historical context and the restrictions it puts on initial conditions and ecological parameters, we discuss the general features of the simulation and the dependence on initial conditions and available sites for breeding the vector. We discuss the sensitivity, to the free parameters, of statistical estimators such as the final death toll, the day of the year when the outbreak reached half the total mortality, and the normalized daily mortality, showing some striking regularities. The model is precise and accurate enough to discuss the truthfulness of the presently accepted historic discussions of the epidemic causes, showing that there are more likely scenarios for the historic facts.
2004.10118
Alexei Vasetsky
E.M. Koltsova, E.S. Kurkina, A.M. Vasetsky
Mathematical Modeling of the Spread of COVID-19 in Moscow and Russian Regions
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To model the spread of COVID-19 coronavirus in Russian regions and in Moscow, a discrete logistic equation describing the increase in the number of cases is used. To check the adequacy of the mathematical model, the simulation results were compared with the spread of coronavirus in China, in a number of European and Asian countries, and the United States. The parameters of the logistic equation for Russia, Moscow and other large regions were determined in the interval (01.03 - 08.04). A comparative analysis of growth rates of the COVID-19 infected population for different countries and regions is presented. Various scenarios of the spread of COVID-19 coronavirus in Moscow and in the regions of Russia are considered. For each scenario, curves for the daily new cases and graphs for the increase in the total number of cases were obtained, and the dynamics of infection spread by day was studied. Peak times, epidemic periods, the number of infected people at the peak and their growth were determined.
[ { "created": "Tue, 21 Apr 2020 15:55:38 GMT", "version": "v1" } ]
2020-04-22
[ [ "Koltsova", "E. M.", "" ], [ "Kurkina", "E. S.", "" ], [ "Vasetsky", "A. M.", "" ] ]
To model the spread of COVID-19 coronavirus in Russian regions and in Moscow, a discrete logistic equation describing the increase in the number of cases is used. To check the adequacy of the mathematical model, the simulation results were compared with the spread of coronavirus in China, in a number of European and Asian countries, and the United States. The parameters of the logistic equation for Russia, Moscow and other large regions were determined in the interval (01.03 - 08.04). A comparative analysis of growth rates of the COVID-19 infected population for different countries and regions is presented. Various scenarios of the spread of COVID-19 coronavirus in Moscow and in the regions of Russia are considered. For each scenario, curves for the daily new cases and graphs for the increase in the total number of cases were obtained, and the dynamics of infection spread by day was studied. Peak times, epidemic periods, the number of infected people at the peak and their growth were determined.
2212.11485
An Mo
An Mo, Viktoriia Kamska, Fernanda Bribiesca-Contreras, Janet Hauptmann, Monica Daley, Alexander Badri-Spr\"owitz
Biophysical Simulation Reveals the Mechanics of the Avian Lumbosacral Organ
For supplementary materials, see https://doi.org/10.17617/3.VTHO81
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The lumbosacral organ (LSO) is a lumbosacral spinal canal morphology that is universally and uniquely found in birds. Recent studies suggested an intraspinal mechanosensor function that relies on the compliant motion of soft tissue in the spinal cord fluid. It has not yet been possible to observe LSO soft tissue motion in vivo due to limitations of imaging technologies. As an alternative approach, we developed an artificial biophysical model of the LSO and characterized the dynamic responses of this model when entrained by external motion. The parametric model incorporates morphological and material properties of the LSO. We varied the model's parameters to study the influence of individual features on the system response. We characterized the system in a locomotion simulator, producing vertical oscillations similar to trunk motions. We show how morphological and material properties effectively shape the system's oscillation characteristics. We conclude that external oscillations could entrain the soft tissue of the intraspinal lumbosacral organ during locomotion, consistent with recently proposed sensing mechanisms.
[ { "created": "Thu, 22 Dec 2022 05:00:20 GMT", "version": "v1" }, { "created": "Wed, 17 May 2023 08:16:32 GMT", "version": "v2" } ]
2023-05-18
[ [ "Mo", "An", "" ], [ "Kamska", "Viktoriia", "" ], [ "Bribiesca-Contreras", "Fernanda", "" ], [ "Hauptmann", "Janet", "" ], [ "Daley", "Monica", "" ], [ "Badri-Spröwitz", "Alexander", "" ] ]
The lumbosacral organ (LSO) is a lumbosacral spinal canal morphology that is universally and uniquely found in birds. Recent studies suggested an intraspinal mechanosensor function that relies on the compliant motion of soft tissue in the spinal cord fluid. It has not yet been possible to observe LSO soft tissue motion in vivo due to limitations of imaging technologies. As an alternative approach, we developed an artificial biophysical model of the LSO and characterized the dynamic responses of this model when entrained by external motion. The parametric model incorporates morphological and material properties of the LSO. We varied the model's parameters to study the influence of individual features on the system response. We characterized the system in a locomotion simulator, producing vertical oscillations similar to trunk motions. We show how morphological and material properties effectively shape the system's oscillation characteristics. We conclude that external oscillations could entrain the soft tissue of the intraspinal lumbosacral organ during locomotion, consistent with recently proposed sensing mechanisms.
1410.8091
Luca Canzian
Luca Canzian, Kun Zhao, Gerard C. L. Wong, Mihaela van der Schaar
A Dynamic Network Formation Model for Understanding Bacterial Self-Organization into Micro-Colonies
null
null
null
null
q-bio.PE cs.GT q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a general parametrizable model to capture the dynamic interaction among bacteria in the formation of micro-colonies. Micro-colonies represent the first social step towards the formation of structured multicellular communities known as bacterial biofilms, which protect the bacteria against antimicrobials. In our model, bacteria can form links in the form of intercellular adhesins (such as polysaccharides) to collaborate in the production of resources that are fundamental to protect them against antimicrobials. Since maintaining a link can be costly, we assume that each bacterium forms and maintains a link only if the benefit received from the link is larger than the cost, and we formalize the interaction among bacteria as a dynamic network formation game. We rigorously characterize some of the key properties of the network evolution depending on the parameters of the system. In particular, we derive the parameters under which it is guaranteed that all bacteria will join micro-colonies and the parameters under which it is guaranteed that some bacteria will not join micro-colonies. Importantly, our study does not only characterize the properties of networks emerging in equilibrium, but it also provides important insights on how the network dynamically evolves and on how the formation history impacts the emerging networks in equilibrium. This analysis can be used to develop methods to influence the evolution of the network on-the-fly, and such methods can be useful to treat or prevent biofilm-related diseases.
[ { "created": "Thu, 23 Oct 2014 19:20:43 GMT", "version": "v1" } ]
2014-10-30
[ [ "Canzian", "Luca", "" ], [ "Zhao", "Kun", "" ], [ "Wong", "Gerard C. L.", "" ], [ "van der Schaar", "Mihaela", "" ] ]
We propose a general parametrizable model to capture the dynamic interaction among bacteria in the formation of micro-colonies. Micro-colonies represent the first social step towards the formation of structured multicellular communities known as bacterial biofilms, which protect the bacteria against antimicrobials. In our model, bacteria can form links in the form of intercellular adhesins (such as polysaccharides) to collaborate in the production of resources that are fundamental to protect them against antimicrobials. Since maintaining a link can be costly, we assume that each bacterium forms and maintains a link only if the benefit received from the link is larger than the cost, and we formalize the interaction among bacteria as a dynamic network formation game. We rigorously characterize some of the key properties of the network evolution depending on the parameters of the system. In particular, we derive the parameters under which it is guaranteed that all bacteria will join micro-colonies and the parameters under which it is guaranteed that some bacteria will not join micro-colonies. Importantly, our study does not only characterize the properties of networks emerging in equilibrium, but it also provides important insights on how the network dynamically evolves and on how the formation history impacts the emerging networks in equilibrium. This analysis can be used to develop methods to influence the evolution of the network on-the-fly, and such methods can be useful to treat or prevent biofilm-related diseases.
2007.00577
Wasiur R. KhudaBukhsh
Wasiur R. KhudaBukhsh and Hye-Won Kang and Eben Kenah and Grzegorz A. Rempala
Incorporating age and delay into models for biophysical systems
21 pages, 4 figures. Under review for publication
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many biological systems, chemical reactions or changes in a physical state are assumed to occur instantaneously. For describing the dynamics of those systems, Markov models that require exponentially distributed inter-event times have been used widely. However, some biophysical processes such as gene transcription and translation are known to have a significant gap between the initiation and the completion of the processes, which renders the usual assumption of exponential distribution untenable. In this paper, we consider relaxing this assumption by incorporating age-dependent random time delays into the system dynamics. We do so by constructing a measure-valued Markov process on a more abstract state space, which allows us to keep track of the "ages" of molecules participating in a chemical reaction. We study the large-volume limit of such age-structured systems. We show that, when appropriately scaled, the stochastic system can be approximated by a system of Partial Differential Equations (PDEs) in the large-volume limit, as opposed to Ordinary Differential Equations (ODEs) in the classical theory. We show how the limiting PDE system can be used for the purpose of further model reductions and for devising efficient simulation algorithms. In order to describe the ideas, we use a simple transcription process as a running example. We, however, note that the methods developed in this paper apply to a wide class of biophysical systems.
[ { "created": "Wed, 1 Jul 2020 16:03:50 GMT", "version": "v1" }, { "created": "Fri, 3 Jul 2020 01:26:22 GMT", "version": "v2" } ]
2020-07-06
[ [ "KhudaBukhsh", "Wasiur R.", "" ], [ "Kang", "Hye-Won", "" ], [ "Kenah", "Eben", "" ], [ "Rempala", "Grzegorz A.", "" ] ]
In many biological systems, chemical reactions or changes in a physical state are assumed to occur instantaneously. For describing the dynamics of those systems, Markov models that require exponentially distributed inter-event times have been used widely. However, some biophysical processes such as gene transcription and translation are known to have a significant gap between the initiation and the completion of the processes, which renders the usual assumption of exponential distribution untenable. In this paper, we consider relaxing this assumption by incorporating age-dependent random time delays into the system dynamics. We do so by constructing a measure-valued Markov process on a more abstract state space, which allows us to keep track of the "ages" of molecules participating in a chemical reaction. We study the large-volume limit of such age-structured systems. We show that, when appropriately scaled, the stochastic system can be approximated by a system of Partial Differential Equations (PDEs) in the large-volume limit, as opposed to Ordinary Differential Equations (ODEs) in the classical theory. We show how the limiting PDE system can be used for the purpose of further model reductions and for devising efficient simulation algorithms. In order to describe the ideas, we use a simple transcription process as a running example. We, however, note that the methods developed in this paper apply to a wide class of biophysical systems.
1110.3742
Heather Harrington
Heather A. Harrington, Micha{\l} Komorowski, Mariano Beguerisse D\'iaz, Gian Michele Ratto, and Michael P.H. Stumpf
Dependence of MAPK mediated signaling on Erk isoforms and differences in nuclear shuttling
19 pages, 7 figures
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The mitogen activated protein kinase (MAPK) family of proteins is involved in regulating cellular fate activities such as proliferation, differentiation and apoptosis. Their fundamental importance has attracted considerable attention on different aspects of the MAPK signaling dynamics; this is particularly true for the Erk/Mek system, which has become the canonical example for MAPK signaling systems. Erk exists in many different isoforms, of which the most widely studied are Erk1 and Erk2. Until recently, these two kinases were considered equivalent as they differ only subtly at the sequence level; however, these isoforms exhibit radically different trafficking between cytoplasm and nucleus. Here we use spatially resolved data on Erk1/2 to develop and analyze spatio-temporal models of these cascades; and we discuss how sensitivity analysis can be used to discriminate between mechanisms. We are especially interested in understanding why two such similar proteins should co-exist in the same organism, as their functional roles appear to be different. Our models elucidate some of the factors governing the interplay between processes and the Erk1/2 localization in different cellular compartments, including competition between isoforms. This methodology is applicable to a wide range of systems, such as activation cascades, where translocation of species occurs via signal pathways. Furthermore, our work may motivate additional emphasis for considering potentially different roles for isoforms that differ subtly at the sequence level.
[ { "created": "Mon, 17 Oct 2011 17:53:11 GMT", "version": "v1" }, { "created": "Fri, 18 Nov 2011 17:42:56 GMT", "version": "v2" } ]
2015-03-19
[ [ "Harrington", "Heather A.", "" ], [ "Komorowski", "Michał", "" ], [ "Díaz", "Mariano Beguerisse", "" ], [ "Ratto", "Gian Michele", "" ], [ "Stumpf", "Michael P. H.", "" ] ]
The mitogen activated protein kinase (MAPK) family of proteins is involved in regulating cellular fate activities such as proliferation, differentiation and apoptosis. Their fundamental importance has attracted considerable attention on different aspects of the MAPK signaling dynamics; this is particularly true for the Erk/Mek system, which has become the canonical example for MAPK signaling systems. Erk exists in many different isoforms, of which the most widely studied are Erk1 and Erk2. Until recently, these two kinases were considered equivalent as they differ only subtly at the sequence level; however, these isoforms exhibit radically different trafficking between cytoplasm and nucleus. Here we use spatially resolved data on Erk1/2 to develop and analyze spatio-temporal models of these cascades; and we discuss how sensitivity analysis can be used to discriminate between mechanisms. We are especially interested in understanding why two such similar proteins should co-exist in the same organism, as their functional roles appear to be different. Our models elucidate some of the factors governing the interplay between processes and the Erk1/2 localization in different cellular compartments, including competition between isoforms. This methodology is applicable to a wide range of systems, such as activation cascades, where translocation of species occurs via signal pathways. Furthermore, our work may motivate additional emphasis for considering potentially different roles for isoforms that differ subtly at the sequence level.
2208.12552
Lucas Daniel Wittwer
Lucas Daniel Wittwer, Felix Reichel, Paul M\"uller, Jochen Guck, Sebastian Aland
A New Hyperelastic Lookup Table for RT-DC
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Real-time deformability cytometry (RT-DC) is an established method that quantifies features like size, shape, and stiffness for whole cell populations on a single-cell level in real time. To extract the cell stiffness, a lookup table (LUT) disentangles the experimentally derived steady state cell deformation and the projected area, yielding the Young's modulus. So far, two lookup tables exist, but are limited to simple linear material models and cylindrical channel geometries. Here, we present two new lookup tables for RT-DC based on a neo-Hookean hyperelastic material numerically derived by simulations based on the finite element method in square and cylindrical channel geometries. At the same time, we quantify the influence of the shear-thinning behaviour of the surrounding medium on the stationary deformation of cells in RT-DC and discuss the applicability and impact of the proposed LUTs regarding past and future RT-DC data analysis. Additionally, we provide insights about the cell strain and stresses, as well as the influence resulting from the rotationally symmetric assumption on the cell deformation and volume estimation. The new lookup tables as well as the numerical cell shapes are made freely available.
[ { "created": "Fri, 26 Aug 2022 10:07:16 GMT", "version": "v1" } ]
2022-08-29
[ [ "Wittwer", "Lucas Daniel", "" ], [ "Reichel", "Felix", "" ], [ "Müller", "Paul", "" ], [ "Guck", "Jochen", "" ], [ "Aland", "Sebastian", "" ] ]
Real-time deformability cytometry (RT-DC) is an established method that quantifies features like size, shape, and stiffness for whole cell populations on a single-cell level in real time. To extract the cell stiffness, a lookup table (LUT) disentangles the experimentally derived steady state cell deformation and the projected area, yielding the Young's modulus. So far, two lookup tables exist, but are limited to simple linear material models and cylindrical channel geometries. Here, we present two new lookup tables for RT-DC based on a neo-Hookean hyperelastic material numerically derived by simulations based on the finite element method in square and cylindrical channel geometries. At the same time, we quantify the influence of the shear-thinning behaviour of the surrounding medium on the stationary deformation of cells in RT-DC and discuss the applicability and impact of the proposed LUTs regarding past and future RT-DC data analysis. Additionally, we provide insights about the cell strain and stresses, as well as the influence resulting from the rotationally symmetric assumption on the cell deformation and volume estimation. The new lookup tables as well as the numerical cell shapes are made freely available.
q-bio/0309002
Eduardo D. Sontag
David Angeli and Eduardo D. Sontag
Multi-Stability in Monotone Input/Output Systems
See http://www.math.rutgers.edu/~sontag for related work; to appear in Systems and Control Letters
null
null
null
q-bio.QM q-bio.MN
null
This paper studies the emergence of multi-stability and hysteresis in those systems that arise, under positive feedback, starting from monotone systems with well-defined steady-state responses. Such feedback configurations appear routinely in several fields of application, and especially in biology. Characterizations of global stability behavior are stated in terms of easily checkable graphical conditions. An example of a signaling cascade under positive feedback is presented.
[ { "created": "Tue, 16 Sep 2003 14:08:09 GMT", "version": "v1" } ]
2007-05-23
[ [ "Angeli", "David", "" ], [ "Sontag", "Eduardo D.", "" ] ]
This paper studies the emergence of multi-stability and hysteresis in those systems that arise, under positive feedback, starting from monotone systems with well-defined steady-state responses. Such feedback configurations appear routinely in several fields of application, and especially in biology. Characterizations of global stability behavior are stated in terms of easily checkable graphical conditions. An example of a signaling cascade under positive feedback is presented.
2403.15284
Andrea Arnold
Sara Amato, Andrea Arnold
A data-informed mathematical model of microglial cell dynamics during ischemic stroke in the middle cerebral artery
25 pages, 8 figures
null
null
null
q-bio.CB q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuroinflammation immediately follows the onset of ischemic stroke in the middle cerebral artery. During this process, microglial cells are activated in and recruited to the penumbra. Microglial cells can be activated into two different phenotypes: M1, which can worsen brain injury; or M2, which can aid in long-term recovery. In this study, we contribute a summary of experimental data on microglial cell counts in the penumbra following ischemic stroke induced by middle cerebral artery occlusion (MCAO) in mice and compile available data sets into a single set suitable for time series analysis. Further, we formulate a mathematical model of microglial cells in the penumbra during ischemic stroke due to MCAO. Through use of global sensitivity analysis and Markov Chain Monte Carlo (MCMC)-based parameter estimation, we analyze the effects of the model parameters on the number of M1 and M2 cells in the penumbra and fit identifiable parameters to the compiled experimental data set. We utilize results from MCMC parameter estimation to ascertain uncertainty bounds and forward predictions for the number of M1 and M2 microglial cells over time. Results demonstrate the significance of parameters related to M1 and M2 activation on the number of M1 and M2 microglial cells. Simulations further suggest that potential outliers in the observed data may be omitted and forecast predictions suggest a lingering inflammatory response.
[ { "created": "Fri, 22 Mar 2024 15:31:07 GMT", "version": "v1" } ]
2024-03-25
[ [ "Amato", "Sara", "" ], [ "Arnold", "Andrea", "" ] ]
Neuroinflammation immediately follows the onset of ischemic stroke in the middle cerebral artery. During this process, microglial cells are activated in and recruited to the penumbra. Microglial cells can be activated into two different phenotypes: M1, which can worsen brain injury; or M2, which can aid in long-term recovery. In this study, we contribute a summary of experimental data on microglial cell counts in the penumbra following ischemic stroke induced by middle cerebral artery occlusion (MCAO) in mice and compile available data sets into a single set suitable for time series analysis. Further, we formulate a mathematical model of microglial cells in the penumbra during ischemic stroke due to MCAO. Through use of global sensitivity analysis and Markov Chain Monte Carlo (MCMC)-based parameter estimation, we analyze the effects of the model parameters on the number of M1 and M2 cells in the penumbra and fit identifiable parameters to the compiled experimental data set. We utilize results from MCMC parameter estimation to ascertain uncertainty bounds and forward predictions for the number of M1 and M2 microglial cells over time. Results demonstrate the significance of parameters related to M1 and M2 activation on the number of M1 and M2 microglial cells. Simulations further suggest that potential outliers in the observed data may be omitted and forecast predictions suggest a lingering inflammatory response.
1903.07394
Keith Willison
Keith R Willison
An intracellular calcium frequency code model extended to the Riemann zeta function
11 pages, 5 figures,1 table, 1 appendix
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have used the Nernst chemical potential treatment to couple the time domains of sodium and calcium ion channel opening and closing rates to the spatial domain of the diffusing waves of the travelling calcium ions inside single cells. The model is plausibly evolvable with respect to the origins of the molecular components and the scaling of the system from simple cells to neurons. The mixed chemical potentials are calculated by summing the concentrations or particle numbers of the two constituent ions which are pure numbers and thus dimensionless. Chemical potentials are true thermodynamic free Gibbs/Fermi energies and the forces acting on chemical flows are calculated from the natural logarithms of the particle numbers or their concentrations. The mixed chemical potential is converted to the time domain of an action potential by assuming that the injection of calcium ions accelerates depolarization in direct proportion to the amplitude of the total charge contribution of the calcium pulse. We assert that the natural logarithm of the real component of the imaginary term of any Riemann zeta zero corresponds to an instantaneous calcium potential. In principle, in a physiologically plausible fashion, the first few thousand Riemann zeta-zeros can be encoded on this chemical scale manifested as regulated step-changes in the amplitudes of naturally occurring calcium current transients. We show that pairs of Zn channels can form Dirac fences which encode the logarithmic spacings and summed amplitudes of any pair of Riemann zeros. Remarkably the beat frequencies of the pairings of the early frequency terms overlap the naturally occurring frequency modes in vertebrate brains. The equation for the time domain in the biological model has a similar form to the Riemann zeta function on the half-plane and mimics analytical continuation on the complex plane.
[ { "created": "Thu, 7 Feb 2019 12:04:20 GMT", "version": "v1" } ]
2019-03-19
[ [ "Willison", "Keith R", "" ] ]
We have used the Nernst chemical potential treatment to couple the time domains of sodium and calcium ion channel opening and closing rates to the spatial domain of the diffusing waves of the travelling calcium ions inside single cells. The model is plausibly evolvable with respect to the origins of the molecular components and the scaling of the system from simple cells to neurons. The mixed chemical potentials are calculated by summing the concentrations or particle numbers of the two constituent ions which are pure numbers and thus dimensionless. Chemical potentials are true thermodynamic free Gibbs/Fermi energies and the forces acting on chemical flows are calculated from the natural logarithms of the particle numbers or their concentrations. The mixed chemical potential is converted to the time domain of an action potential by assuming that the injection of calcium ions accelerates depolarization in direct proportion to the amplitude of the total charge contribution of the calcium pulse. We assert that the natural logarithm of the real component of the imaginary term of any Riemann zeta zero corresponds to an instantaneous calcium potential. In principle, in a physiologically plausible fashion, the first few thousand Riemann zeta-zeros can be encoded on this chemical scale manifested as regulated step-changes in the amplitudes of naturally occurring calcium current transients. We show that pairs of Zn channels can form Dirac fences which encode the logarithmic spacings and summed amplitudes of any pair of Riemann zeros. Remarkably the beat frequencies of the pairings of the early frequency terms overlap the naturally occurring frequency modes in vertebrate brains. The equation for the time domain in the biological model has a similar form to the Riemann zeta function on the half-plane and mimics analytical continuation on the complex plane.
2206.01685
Charlotte Caucheteux
Juliette Millet, Charlotte Caucheteux, Pierre Orhan, Yves Boubenec, Alexandre Gramfort, Ewan Dunbar, Christophe Pallier, Jean-Remi King
Toward a realistic model of speech processing in the brain with self-supervised learning
Accepted to NeurIPS 2022
Neural Information Processing Systems (NeurIPS), 2022
null
null
q-bio.NC cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Several deep neural networks have recently been shown to generate activations similar to those of the brain in response to the same input. These algorithms, however, remain largely implausible: they require (1) extraordinarily large amounts of data, (2) unobtainable supervised labels, (3) textual rather than raw sensory input, and / or (4) implausibly large memory (e.g. thousands of contextual words). These elements highlight the need to identify algorithms that, under these limitations, would suffice to account for both behavioral and brain responses. Focusing on the issue of speech processing, we here hypothesize that self-supervised algorithms trained on the raw waveform constitute a promising candidate. Specifically, we compare a recent self-supervised architecture, Wav2Vec 2.0, to the brain activity of 412 English, French, and Mandarin individuals recorded with functional Magnetic Resonance Imaging (fMRI), while they listened to ~1h of audio books. Our results are four-fold. First, we show that this algorithm learns brain-like representations with as little as 600 hours of unlabelled speech -- a quantity comparable to what infants can be exposed to during language acquisition. Second, its functional hierarchy aligns with the cortical hierarchy of speech processing. Third, different training regimes reveal a functional specialization akin to the cortex: Wav2Vec 2.0 learns sound-generic, speech-specific and language-specific representations similar to those of the prefrontal and temporal cortices. Fourth, we confirm the similarity of this specialization with the behavior of 386 additional participants. These elements, resulting from the largest neuroimaging benchmark to date, show how self-supervised learning can account for a rich organization of speech processing in the brain, and thus delineate a path to identify the laws of language acquisition which shape the human brain.
[ { "created": "Fri, 3 Jun 2022 17:01:46 GMT", "version": "v1" }, { "created": "Mon, 20 Mar 2023 10:11:41 GMT", "version": "v2" } ]
2023-03-21
[ [ "Millet", "Juliette", "" ], [ "Caucheteux", "Charlotte", "" ], [ "Orhan", "Pierre", "" ], [ "Boubenec", "Yves", "" ], [ "Gramfort", "Alexandre", "" ], [ "Dunbar", "Ewan", "" ], [ "Pallier", "Christophe", "" ], [ "King", "Jean-Remi", "" ] ]
Several deep neural networks have recently been shown to generate activations similar to those of the brain in response to the same input. These algorithms, however, remain largely implausible: they require (1) extraordinarily large amounts of data, (2) unobtainable supervised labels, (3) textual rather than raw sensory input, and / or (4) implausibly large memory (e.g. thousands of contextual words). These elements highlight the need to identify algorithms that, under these limitations, would suffice to account for both behavioral and brain responses. Focusing on the issue of speech processing, we here hypothesize that self-supervised algorithms trained on the raw waveform constitute a promising candidate. Specifically, we compare a recent self-supervised architecture, Wav2Vec 2.0, to the brain activity of 412 English, French, and Mandarin individuals recorded with functional Magnetic Resonance Imaging (fMRI), while they listened to ~1h of audio books. Our results are four-fold. First, we show that this algorithm learns brain-like representations with as little as 600 hours of unlabelled speech -- a quantity comparable to what infants can be exposed to during language acquisition. Second, its functional hierarchy aligns with the cortical hierarchy of speech processing. Third, different training regimes reveal a functional specialization akin to the cortex: Wav2Vec 2.0 learns sound-generic, speech-specific and language-specific representations similar to those of the prefrontal and temporal cortices. Fourth, we confirm the similarity of this specialization with the behavior of 386 additional participants. These elements, resulting from the largest neuroimaging benchmark to date, show how self-supervised learning can account for a rich organization of speech processing in the brain, and thus delineate a path to identify the laws of language acquisition which shape the human brain.
2108.01514
Kai Wu
Kai Wu, Vinit Kumar Chugh, Venkatramana D. Krishna, Arturo di Girolamo, Yongqiang Andrew Wang, Renata Saha, Shuang Liang, Maxim C-J Cheeran, and Jian-Ping Wang
One-step, Wash-free, Nanoparticle Clustering-based Magnetic Particle Spectroscopy (MPS) Bioassay Method for Detection of SARS-CoV-2 Spike and Nucleocapsid Proteins in Liquid Phase
35 pages, 14 figures
null
null
null
q-bio.QM physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
With the ongoing global pandemic of coronavirus disease 2019 (COVID-19), there is an increasing quest for more accessible, easy-to-use, rapid, inexpensive, and high accuracy diagnostic tools. Traditional disease diagnostic methods such as qRT-PCR (quantitative reverse transcription-PCR) and ELISA (enzyme-linked immunosorbent assay) require multiple steps, trained technicians, and long turnaround time that may worsen the disease surveillance and pandemic control. In light of this situation, a rapid, one-step, easy-to-use, and high accuracy diagnostic platform will be valuable for future epidemic control especially for regions with scarce medical resources. Herein, we report a magnetic particle spectroscopy (MPS) platform for detection of SARS-CoV-2 biomarkers: spike and nucleocapsid proteins.
[ { "created": "Sun, 1 Aug 2021 15:59:17 GMT", "version": "v1" } ]
2021-08-04
[ [ "Wu", "Kai", "" ], [ "Chugh", "Vinit Kumar", "" ], [ "Krishna", "Venkatramana D.", "" ], [ "di Girolamo", "Arturo", "" ], [ "Wang", "Yongqiang Andrew", "" ], [ "Saha", "Renata", "" ], [ "Liang", "Shuang", "" ], [ "Cheeran", "Maxim C-J", "" ], [ "Wang", "Jian-Ping", "" ] ]
With the ongoing global pandemic of coronavirus disease 2019 (COVID-19), there is an increasing quest for more accessible, easy-to-use, rapid, inexpensive, and high accuracy diagnostic tools. Traditional disease diagnostic methods such as qRT-PCR (quantitative reverse transcription-PCR) and ELISA (enzyme-linked immunosorbent assay) require multiple steps, trained technicians, and long turnaround time that may worsen the disease surveillance and pandemic control. In light of this situation, a rapid, one-step, easy-to-use, and high accuracy diagnostic platform will be valuable for future epidemic control especially for regions with scarce medical resources. Herein, we report a magnetic particle spectroscopy (MPS) platform for detection of SARS-CoV-2 biomarkers: spike and nucleocapsid proteins.
1902.03739
Nasim Nematzadeh
Nasim Nematzadeh, David M. W. Powers
Prediction of Dashed Caf\'e Wall illusion by the Classical Receptive Field Model
6 pages, 6 figures. Accepted in ICECCE2020 conference
null
null
null
q-bio.NC cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
The Caf\'e Wall illusion is one of a class of tilt illusions where lines that are parallel appear to be tilted. We demonstrate that a simple Differences of Gaussian model provides an explanatory mechanism for the illusory tilt perceived in a family of Caf\'e Wall illusions that generalizes to the dashed versions of the Caf\'e Wall. Our explanation models the visual mechanisms in low level stages and the lateral inhibition of simple cells that can reveal tilt cues in Geometrical distortion illusions such as Tile illusions, particularly Caf\'e Wall illusions. For this, we simulate the activations of the retinal/cortical simple cells in response to these patterns based on a Classical Receptive Field (CRF) model (referred to as Vis-CRF) to explain tilt effects in these illusions. Previously, it was assumed that all these visual experiences of tilt arise from the orientation selectivity properties described for more complex cortical cells. An estimation of an overall tilt angle perceived in these illusions is based on the integration of the local tilts detected by simple cells, which is presumed to be a key mechanism utilized by the complex cells to create our final perception of tilt.
[ { "created": "Fri, 8 Feb 2019 03:23:22 GMT", "version": "v1" }, { "created": "Mon, 11 May 2020 07:59:53 GMT", "version": "v2" } ]
2020-05-12
[ [ "Nematzadeh", "Nasim", "" ], [ "Powers", "David M. W.", "" ] ]
The Caf\'e Wall illusion is one of a class of tilt illusions where lines that are parallel appear to be tilted. We demonstrate that a simple Differences of Gaussian model provides an explanatory mechanism for the illusory tilt perceived in a family of Caf\'e Wall illusions that generalizes to the dashed versions of the Caf\'e Wall. Our explanation models the visual mechanisms in low level stages and the lateral inhibition of simple cells that can reveal tilt cues in Geometrical distortion illusions such as Tile illusions, particularly Caf\'e Wall illusions. For this, we simulate the activations of the retinal/cortical simple cells in response to these patterns based on a Classical Receptive Field (CRF) model (referred to as Vis-CRF) to explain tilt effects in these illusions. Previously, it was assumed that all these visual experiences of tilt arise from the orientation selectivity properties described for more complex cortical cells. An estimation of an overall tilt angle perceived in these illusions is based on the integration of the local tilts detected by simple cells, which is presumed to be a key mechanism utilized by the complex cells to create our final perception of tilt.
1105.4751
Armita Nourmohammad
Armita Nourmohammad and Michael Laessig
Formation of regulatory modules by local sequence duplication
null
null
10.1371/journal.pcbi.1002167
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Turnover of regulatory sequence and function is an important part of molecular evolution. But what are the modes of sequence evolution leading to rapid formation and loss of regulatory sites? Here, we show that a large fraction of neighboring transcription factor binding sites in the fly genome have formed from a common sequence origin by local duplications. This mode of evolution is found to produce regulatory information: duplications can seed new sites in the neighborhood of existing sites. Duplicate seeds evolve subsequently by point mutations, often towards binding a different factor than their ancestral neighbor sites. These results are based on a statistical analysis of 346 cis-regulatory modules in the Drosophila melanogaster genome, and a comparison set of intergenic regulatory sequence in Saccharomyces cerevisiae. In fly regulatory modules, pairs of binding sites show significantly enhanced sequence similarity up to distances of about 50 bp. We analyze these data in terms of an evolutionary model with two distinct modes of site formation: (i) evolution from independent sequence origin and (ii) divergent evolution following duplication of a common ancestor sequence. Our results suggest that pervasive formation of binding sites by local sequence duplications distinguishes the complex regulatory architecture of higher eukaryotes from the simpler architecture of unicellular organisms.
[ { "created": "Tue, 24 May 2011 12:48:06 GMT", "version": "v1" } ]
2015-03-19
[ [ "Nourmohammad", "Armita", "" ], [ "Laessig", "Michael", "" ] ]
Turnover of regulatory sequence and function is an important part of molecular evolution. But what are the modes of sequence evolution leading to rapid formation and loss of regulatory sites? Here, we show that a large fraction of neighboring transcription factor binding sites in the fly genome have formed from a common sequence origin by local duplications. This mode of evolution is found to produce regulatory information: duplications can seed new sites in the neighborhood of existing sites. Duplicate seeds evolve subsequently by point mutations, often towards binding a different factor than their ancestral neighbor sites. These results are based on a statistical analysis of 346 cis-regulatory modules in the Drosophila melanogaster genome, and a comparison set of intergenic regulatory sequence in Saccharomyces cerevisiae. In fly regulatory modules, pairs of binding sites show significantly enhanced sequence similarity up to distances of about 50 bp. We analyze these data in terms of an evolutionary model with two distinct modes of site formation: (i) evolution from independent sequence origin and (ii) divergent evolution following duplication of a common ancestor sequence. Our results suggest that pervasive formation of binding sites by local sequence duplications distinguishes the complex regulatory architecture of higher eukaryotes from the simpler architecture of unicellular organisms.
2209.06865
Gabriele Scheler
Gabriele Scheler
Sketch of a novel approach to a neural model
null
null
null
null
q-bio.NC cond-mat.dis-nn cs.AI cs.NE q-bio.MN
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we lay out a novel model of neuroplasticity in the form of a horizontal-vertical integration model of neural processing. The horizontal plane consists of a network of neurons connected by adaptive transmission links. This fits with standard computational neuroscience approaches. Each individual neuron also has a vertical dimension with internal parameters steering the external membrane-expressed parameters. These determine neural transmission. The vertical system consists of (a) external parameters at the membrane layer, divided into compartments (spines, boutons) (b) internal parameters in the sub-membrane zone and the cytoplasm with its protein signaling network and (c) core parameters in the nucleus for genetic and epigenetic information. In such models, each node (=neuron) in the horizontal network has its own internal memory. Neural transmission and information storage are systematically separated. This is an important conceptual advance over synaptic weight models. We discuss the membrane-based (external) filtering and selection of outside signals for processing. Not every transmission event leaves a trace. We also illustrate the neuron-internal computing strategies from intracellular protein signaling to the nucleus as the core system. We want to show that the individual neuron has an important role in the computation of signals. Many assumptions derived from the synaptic weight adjustment hypothesis of memory may not hold in a real brain. We present the neuron as a self-programming device, rather than passively determined by ongoing input. We believe a new approach to neural modeling will benefit the third wave of AI. Ultimately we strive to build a flexible memory system that processes facts and events automatically.
[ { "created": "Wed, 14 Sep 2022 18:28:39 GMT", "version": "v1" }, { "created": "Sat, 8 Oct 2022 08:43:55 GMT", "version": "v2" }, { "created": "Tue, 1 Nov 2022 13:20:18 GMT", "version": "v3" }, { "created": "Thu, 10 Nov 2022 09:14:05 GMT", "version": "v4" }, { "created": "Wed, 16 Nov 2022 13:39:09 GMT", "version": "v5" }, { "created": "Thu, 5 Jan 2023 21:08:14 GMT", "version": "v6" } ]
2023-01-09
[ [ "Scheler", "Gabriele", "" ] ]
In this paper, we lay out a novel model of neuroplasticity in the form of a horizontal-vertical integration model of neural processing. The horizontal plane consists of a network of neurons connected by adaptive transmission links. This fits with standard computational neuroscience approaches. Each individual neuron also has a vertical dimension with internal parameters steering the external membrane-expressed parameters. These determine neural transmission. The vertical system consists of (a) external parameters at the membrane layer, divided into compartments (spines, boutons) (b) internal parameters in the sub-membrane zone and the cytoplasm with its protein signaling network and (c) core parameters in the nucleus for genetic and epigenetic information. In such models, each node (=neuron) in the horizontal network has its own internal memory. Neural transmission and information storage are systematically separated. This is an important conceptual advance over synaptic weight models. We discuss the membrane-based (external) filtering and selection of outside signals for processing. Not every transmission event leaves a trace. We also illustrate the neuron-internal computing strategies from intracellular protein signaling to the nucleus as the core system. We want to show that the individual neuron has an important role in the computation of signals. Many assumptions derived from the synaptic weight adjustment hypothesis of memory may not hold in a real brain. We present the neuron as a self-programming device, rather than passively determined by ongoing input. We believe a new approach to neural modeling will benefit the third wave of AI. Ultimately we strive to build a flexible memory system that processes facts and events automatically.
1410.1475
Zachary Kilpatrick PhD
Paul C. Bressloff and Zachary P. Kilpatrick
Nonlinear Langevin equations for wandering patterns in stochastic neural fields
28 pages, 8 figures
null
null
null
q-bio.NC nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze the effects of additive, spatially extended noise on spatiotemporal patterns in continuum neural fields. Our main focus is how fluctuations impact patterns when they are weakly coupled to an external stimulus or another equivalent pattern. Showing the generality of our approach, we study both propagating fronts and stationary bumps. Using a separation of time scales, we represent the effects of noise in terms of a phase-shift of a pattern from its uniformly translating position at long time scales, and fluctuations in the pattern profile around its instantaneous position at short time scales. In the case of a stimulus-locked front, we show that the phase-shift satisfies a nonlinear Langevin equation (SDE) whose deterministic part has a unique stable fixed point. Using a linear-noise approximation, we thus establish that wandering of the front about the stimulus-locked state is given by an Ornstein-Uhlenbeck (OU) process. Analogous results hold for the relative phase-shift between a pair of mutually coupled fronts, provided that the coupling is excitatory. On the other hand, if the mutual coupling is given by a Mexican hat function (difference of exponentials), then the linear-noise approximation breaks down due to the co-existence of stable and unstable phase-locked states in the deterministic limit. Similarly, the stochastic motion of mutually coupled bumps can be described by a system of nonlinearly coupled SDEs, which can be linearized to yield a multivariate OU process. As in the case of fronts, large deviations can cause bumps to temporarily decouple, leading to a phase-slip in the bump positions.
[ { "created": "Mon, 6 Oct 2014 17:49:00 GMT", "version": "v1" }, { "created": "Wed, 7 Jan 2015 03:29:00 GMT", "version": "v2" } ]
2015-01-08
[ [ "Bressloff", "Paul C.", "" ], [ "Kilpatrick", "Zachary P.", "" ] ]
We analyze the effects of additive, spatially extended noise on spatiotemporal patterns in continuum neural fields. Our main focus is how fluctuations impact patterns when they are weakly coupled to an external stimulus or another equivalent pattern. Showing the generality of our approach, we study both propagating fronts and stationary bumps. Using a separation of time scales, we represent the effects of noise in terms of a phase-shift of a pattern from its uniformly translating position at long time scales, and fluctuations in the pattern profile around its instantaneous position at short time scales. In the case of a stimulus-locked front, we show that the phase-shift satisfies a nonlinear Langevin equation (SDE) whose deterministic part has a unique stable fixed point. Using a linear-noise approximation, we thus establish that wandering of the front about the stimulus-locked state is given by an Ornstein-Uhlenbeck (OU) process. Analogous results hold for the relative phase-shift between a pair of mutually coupled fronts, provided that the coupling is excitatory. On the other hand, if the mutual coupling is given by a Mexican hat function (difference of exponentials), then the linear-noise approximation breaks down due to the co-existence of stable and unstable phase-locked states in the deterministic limit. Similarly, the stochastic motion of mutually coupled bumps can be described by a system of nonlinearly coupled SDEs, which can be linearized to yield a multivariate OU process. As in the case of fronts, large deviations can cause bumps to temporarily decouple, leading to a phase-slip in the bump positions.
2407.00566
Shruthi Viswanath
Kartik Majila, Shreyas Arvindekar, Muskaan Jindal, and Shruthi Viswanath
Frontiers in integrative structural biology: modeling disordered proteins and utilizing in situ data
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-sa/4.0/
Integrative modeling enables structure determination for large macromolecular assemblies by combining experimental data from multiple sources with theoretical and computational predictions. Recent advancements in AI-based structure prediction and electron cryo-microscopy have sparked renewed enthusiasm for integrative modeling; structures from AI-based methods can be integrated with in situ maps to characterize large assemblies. This approach previously allowed us and others to determine the architectures of diverse macromolecular assemblies, such as nuclear pore complexes, chromatin remodelers, and cell-cell junctions. Experimental data spanning several scales were used in these studies, ranging from high-resolution data, such as X-ray crystallography and Alphafold structures, to low-resolution data, such as cryo-electron tomography maps and data from co-immunoprecipitation experiments. Two recurrent modeling challenges emerged across a range of studies. First, modeling disordered regions, which constituted a significant portion of these assemblies, necessitated the development of new methods. Second, methods needed to be developed to utilize the information from cryo-electron tomography, a timely challenge as structural biology is increasingly moving towards in situ characterization. Here, we recapitulate recent developments in the modeling of disordered proteins and the analysis of cryo-electron tomography data and highlight opportunities for method development in the context of integrative modeling.
[ { "created": "Sun, 30 Jun 2024 02:36:45 GMT", "version": "v1" } ]
2024-07-02
[ [ "Majila", "Kartik", "" ], [ "Arvindekar", "Shreyas", "" ], [ "Jindal", "Muskaan", "" ], [ "Viswanath", "Shruthi", "" ] ]
Integrative modeling enables structure determination for large macromolecular assemblies by combining experimental data from multiple sources with theoretical and computational predictions. Recent advancements in AI-based structure prediction and electron cryo-microscopy have sparked renewed enthusiasm for integrative modeling; structures from AI-based methods can be integrated with in situ maps to characterize large assemblies. This approach previously allowed us and others to determine the architectures of diverse macromolecular assemblies, such as nuclear pore complexes, chromatin remodelers, and cell-cell junctions. Experimental data spanning several scales were used in these studies, ranging from high-resolution data, such as X-ray crystallography and Alphafold structures, to low-resolution data, such as cryo-electron tomography maps and data from co-immunoprecipitation experiments. Two recurrent modeling challenges emerged across a range of studies. First, modeling disordered regions, which constituted a significant portion of these assemblies, necessitated the development of new methods. Second, methods needed to be developed to utilize the information from cryo-electron tomography, a timely challenge as structural biology is increasingly moving towards in situ characterization. Here, we recapitulate recent developments in the modeling of disordered proteins and the analysis of cryo-electron tomography data and highlight opportunities for method development in the context of integrative modeling.
2402.00054
Arshmeet Kaur
Arshmeet Kaur and Morteza Sarmadi
Predicting loss-of-function impact of genetic mutations: a machine learning approach
Index Terms: Machine Learning, Prediction Algorithms, Supervised Learning, Support vector machines, K-Nearest Neighbors, RANSAC, Decision Trees, Random Forest, Genetic mutations, LoFtool, Next Generation Sequencing
null
10.54364/AAIML.2024.41119
null
q-bio.GN cs.LG stat.AP
http://creativecommons.org/licenses/by/4.0/
The innovation of next-generation sequencing (NGS) techniques has significantly reduced the price of genome sequencing, lowering barriers to future medical research; it is now feasible to apply genome sequencing to studies where it would have previously been cost-inefficient. Identifying damaging or pathogenic mutations in vast amounts of complex, high-dimensional genome sequencing data may be of particular interest to researchers. Thus, this paper's aims were to train machine learning models on the attributes of a genetic mutation to predict LoFtool scores (which measure a gene's intolerance to loss-of-function mutations). These attributes included, but were not limited to, the position of a mutation on a chromosome, changes in amino acids, and changes in codons caused by the mutation. Models were built using the univariate feature selection technique f-regression combined with K-nearest neighbors (KNN), Support Vector Machine (SVM), Random Sample Consensus (RANSAC), Decision Trees, Random Forest, and Extreme Gradient Boosting (XGBoost). These models were evaluated using five-fold cross-validated averages of r-squared, mean squared error, root mean squared error, mean absolute error, and explained variance. The findings of this study include the training of multiple models with testing set r-squared values of 0.97.
[ { "created": "Fri, 26 Jan 2024 19:27:38 GMT", "version": "v1" } ]
2024-06-06
[ [ "Kaur", "Arshmeet", "" ], [ "Sarmadi", "Morteza", "" ] ]
The innovation of next-generation sequencing (NGS) techniques has significantly reduced the price of genome sequencing, lowering barriers to future medical research; it is now feasible to apply genome sequencing to studies where it would have previously been cost-inefficient. Identifying damaging or pathogenic mutations in vast amounts of complex, high-dimensional genome sequencing data may be of particular interest to researchers. Thus, this paper's aims were to train machine learning models on the attributes of a genetic mutation to predict LoFtool scores (which measure a gene's intolerance to loss-of-function mutations). These attributes included, but were not limited to, the position of a mutation on a chromosome, changes in amino acids, and changes in codons caused by the mutation. Models were built using the univariate feature selection technique f-regression combined with K-nearest neighbors (KNN), Support Vector Machine (SVM), Random Sample Consensus (RANSAC), Decision Trees, Random Forest, and Extreme Gradient Boosting (XGBoost). These models were evaluated using five-fold cross-validated averages of r-squared, mean squared error, root mean squared error, mean absolute error, and explained variance. The findings of this study include the training of multiple models with testing set r-squared values of 0.97.
2402.05856
Zuobai Zhang
Zuobai Zhang, Jiarui Lu, Vijil Chenthamarakshan, Aur\'elie Lozano, Payel Das, Jian Tang
Structure-Informed Protein Language Model
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein language models are a powerful tool for learning protein representations through pre-training on vast protein sequence datasets. However, traditional protein language models lack explicit structural supervision, despite its relevance to protein function. To address this issue, we introduce the integration of remote homology detection to distill structural information into protein language models without requiring explicit protein structures as input. We evaluate the impact of this structure-informed training on downstream protein function prediction tasks. Experimental results reveal consistent improvements in function annotation accuracy for EC number and GO term prediction. Performance on mutant datasets, however, varies based on the relationship between targeted properties and protein structures. This underscores the importance of considering this relationship when applying structure-aware training to protein function prediction tasks. Code and model weights are available at https://github.com/DeepGraphLearning/esm-s.
[ { "created": "Wed, 7 Feb 2024 09:32:35 GMT", "version": "v1" } ]
2024-02-09
[ [ "Zhang", "Zuobai", "" ], [ "Lu", "Jiarui", "" ], [ "Chenthamarakshan", "Vijil", "" ], [ "Lozano", "Aurélie", "" ], [ "Das", "Payel", "" ], [ "Tang", "Jian", "" ] ]
Protein language models are a powerful tool for learning protein representations through pre-training on vast protein sequence datasets. However, traditional protein language models lack explicit structural supervision, despite its relevance to protein function. To address this issue, we introduce the integration of remote homology detection to distill structural information into protein language models without requiring explicit protein structures as input. We evaluate the impact of this structure-informed training on downstream protein function prediction tasks. Experimental results reveal consistent improvements in function annotation accuracy for EC number and GO term prediction. Performance on mutant datasets, however, varies based on the relationship between targeted properties and protein structures. This underscores the importance of considering this relationship when applying structure-aware training to protein function prediction tasks. Code and model weights are available at https://github.com/DeepGraphLearning/esm-s.
q-bio/0310012
Zoltan Dezso
Zoltan Dezso, Zoltan N. Oltvai and Albert-Laszlo Barabasi
Bioinformatics analysis of experimentally determined protein complexes in the yeast, S. cerevisiae
Supplementary Material available at http://www.nd.edu/~networks
null
null
null
q-bio.MN cond-mat q-bio.GN
null
Many important cellular functions are implemented by protein complexes that act as sophisticated molecular machines of varying size and temporal stability. Here we demonstrate quantitatively that protein complexes in the yeast, Saccharomyces cerevisiae, comprise a core in which subunits are highly coexpressed, display the same deletion phenotype (essential or non-essential) and share identical functional classification and cellular localization. This core is surrounded by a functionally mixed group of proteins, which likely represent short-lived or spurious attachments. The results allow us to define the deletion phenotype and cellular task of most known complexes, and to identify with high confidence the biochemical role of hundreds of proteins with yet unassigned functionality.
[ { "created": "Fri, 10 Oct 2003 17:08:54 GMT", "version": "v1" } ]
2007-05-23
[ [ "Dezso", "Zoltan", "" ], [ "Oltvai", "Zoltan N.", "" ], [ "Barabasi", "Albert-Laszlo", "" ] ]
Many important cellular functions are implemented by protein complexes that act as sophisticated molecular machines of varying size and temporal stability. Here we demonstrate quantitatively that protein complexes in the yeast, Saccharomyces cerevisiae, comprise a core in which subunits are highly coexpressed, display the same deletion phenotype (essential or non-essential) and share identical functional classification and cellular localization. This core is surrounded by a functionally mixed group of proteins, which likely represent short-lived or spurious attachments. The results allow us to define the deletion phenotype and cellular task of most known complexes, and to identify with high confidence the biochemical role of hundreds of proteins with yet unassigned functionality.
2310.08774
Ming Yang Zhou
Mingyang Zhou, Zichao Yan, Elliot Layne, Nikolay Malkin, Dinghuai Zhang, Moksh Jain, Mathieu Blanchette, Yoshua Bengio
PhyloGFN: Phylogenetic inference with generative flow networks
null
null
null
null
q-bio.PE cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Phylogenetics is a branch of computational biology that studies the evolutionary relationships among biological entities. Its long history and numerous applications notwithstanding, inference of phylogenetic trees from sequence data remains challenging: the high complexity of tree space poses a significant obstacle for the current combinatorial and probabilistic techniques. In this paper, we adopt the framework of generative flow networks (GFlowNets) to tackle two core problems in phylogenetics: parsimony-based and Bayesian phylogenetic inference. Because GFlowNets are well-suited for sampling complex combinatorial structures, they are a natural choice for exploring and sampling from the multimodal posterior distribution over tree topologies and evolutionary distances. We demonstrate that our amortized posterior sampler, PhyloGFN, produces diverse and high-quality evolutionary hypotheses on real benchmark datasets. PhyloGFN is competitive with prior works in marginal likelihood estimation and achieves a closer fit to the target distribution than state-of-the-art variational inference methods. Our code is available at https://github.com/zmy1116/phylogfn.
[ { "created": "Thu, 12 Oct 2023 23:46:08 GMT", "version": "v1" }, { "created": "Mon, 25 Mar 2024 00:18:35 GMT", "version": "v2" } ]
2024-03-26
[ [ "Zhou", "Mingyang", "" ], [ "Yan", "Zichao", "" ], [ "Layne", "Elliot", "" ], [ "Malkin", "Nikolay", "" ], [ "Zhang", "Dinghuai", "" ], [ "Jain", "Moksh", "" ], [ "Blanchette", "Mathieu", "" ], [ "Bengio", "Yoshua", "" ] ]
Phylogenetics is a branch of computational biology that studies the evolutionary relationships among biological entities. Its long history and numerous applications notwithstanding, inference of phylogenetic trees from sequence data remains challenging: the high complexity of tree space poses a significant obstacle for the current combinatorial and probabilistic techniques. In this paper, we adopt the framework of generative flow networks (GFlowNets) to tackle two core problems in phylogenetics: parsimony-based and Bayesian phylogenetic inference. Because GFlowNets are well-suited for sampling complex combinatorial structures, they are a natural choice for exploring and sampling from the multimodal posterior distribution over tree topologies and evolutionary distances. We demonstrate that our amortized posterior sampler, PhyloGFN, produces diverse and high-quality evolutionary hypotheses on real benchmark datasets. PhyloGFN is competitive with prior works in marginal likelihood estimation and achieves a closer fit to the target distribution than state-of-the-art variational inference methods. Our code is available at https://github.com/zmy1116/phylogfn.